rtic-rs / rtic

Real-Time Interrupt-driven Concurrency (RTIC) framework for ARM Cortex-M microcontrollers

Home Page: https://rtic.rs

Question: are task capacities supported in RTIC 2?

mryndzionek opened this issue · comments

I'm trying to move one small project to RTIC 2, and task capacities result in an `unexpected argument` error.

Hi, there is no capacity field attribute; instead you set up a channel. See https://rtic.rs/2/book/en/by-example/channel.html for more details. Let us know if something remains unclear.

(You can still spawn with an argument initially, similar to a queue of size 1.)
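For reference, a condensed sketch of the channel pattern from the linked book page, using rtic-sync (the `u32` payload, the `CAPACITY` name, and the task names are just placeholders):

```rust
use rtic_sync::{channel::*, make_channel};

const CAPACITY: usize = 3; // plays the role of the old `capacity` attribute

#[init]
fn init(_cx: init::Context) -> (Shared, Local) {
    let (tx, rx) = make_channel!(u32, CAPACITY);
    worker::spawn(rx).ok();
    producer::spawn(tx).ok();
    (Shared {}, Local {})
}

#[task]
async fn worker(_cx: worker::Context, mut rx: Receiver<'static, u32, CAPACITY>) {
    // each received value corresponds to one queued "spawn" in the old model
    while let Ok(val) = rx.recv().await {
        let _ = val;
    }
}

#[task]
async fn producer(_cx: producer::Context, mut tx: Sender<'static, u32, CAPACITY>) {
    tx.send(42).await.ok(); // suspends if the channel is already full
}
```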

Okay, understood and implemented. One more question, however: previously I had a lock_free shared resource used in software tasks. Now it seems it is no longer allowed:

Lock free shared resource "xxx" is used by an async tasks, which is forbidden

Is this true? Do I need a lock now?

You can still have local resources (which are lock-free). The thing is that async tasks at the same priority level might cause a race condition on non-atomic shared lock-free resources (as an effect of the cooperative multitasking). Async tasks are no longer run-to-completion.
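For illustration, a minimal sketch of what locking looks like in v2, assuming a shared `counter` resource that used to be `lock_free`:

```rust
#[shared]
struct Shared {
    counter: u32, // was `lock_free` in v1; async tasks must lock it in v2
}

#[task(shared = [counter])]
async fn bump(mut cx: bump::Context) {
    // short critical section; implemented via interrupt masking or BASEPRI
    cx.shared.counter.lock(|c| *c += 1);
}
```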

Okay, thanks for the quick response.

Sorry for commenting on an old issue, but I don't completely understand how channels replace the functionality of capacity. As I understand it, channels are a way of communicating between tasks and do not help with spawning the same software task multiple times. Or have I misunderstood something?

I'm not 100% sure, but I think you misunderstand the capacity. The capacity docs say:

In the example below, the capacity of task foo is 3, allowing three simultaneous pending spawns of foo.

These are "pending spawns", so equivalent to a channel/queue length.
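For comparison, the RTIC 1.x attribute being discussed looked roughly like this (names illustrative):

```rust
// RTIC 1.x: up to three spawns of `foo` may be pending at once, each
// carrying its own message payload.
#[task(capacity = 3)]
fn foo(_cx: foo::Context, x: u32) { /* ... */ }

// foo::spawn(1).ok();
// foo::spawn(2).ok();
// foo::spawn(3).ok(); // a fourth spawn before `foo` runs would return Err
```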

Yes, exactly. I want to be able to queue 3 tasks at the same time, without having to define the same task 3 separate times. I don't want to write my own logic for dispatching the tasks; I just want to say "when possible, run this task 3 times".

I am doing some benchmarking, and I want to try classical matrix multiplication using multiple tasks, where each task is assigned its own iteration of the outermost for loop in a matrix multiplication algorithm.

I think you are misunderstanding how RTIC works in general. It doesn't allocate tasks dynamically. The tasks are static and, additionally, in RTIC 1.x they run to completion, especially purely computational tasks (like matrix multiplication). In RTIC 2.x you can have async tasks, so concurrency is possible, but still not for purely computational (non-blocking) tasks.

I am aware that tasks aren't allocated dynamically, which is why a capacity field is needed to begin with (to statically reserve space for up to a certain number of pending tasks).

Another example of a test I am doing is having some sort of background task doing nonsense work while another task is doing important stuff. Then, I want to be able to have multiple instances of that background task queued for execution at the same time. Without the capacity field, I am unsure how to implement this in RTIC 2.x. I don't want to create multiple tasks that have the exact same functionality, since that feels like an absurd way of implementing what I want to do.

For example, on the channels page that perlindgren linked earlier in this thread, 3 different tasks are created that serve the exact same purpose (acting as senders), which would not be needed if a capacity field were present. Then one task could be defined and queued 3 times.
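If I understand the suggested replacement correctly, a single task definition draining a channel can stand in for N queued instances; a rough sketch under the same rtic-sync assumptions as above, with `WorkItem` and `do_background_work` as hypothetical placeholders:

```rust
const CAPACITY: usize = 3;

// One task definition instead of three identical ones: up to CAPACITY
// invocations can be queued while an earlier one is still being handled.
#[task]
async fn background(
    _cx: background::Context,
    mut rx: Receiver<'static, WorkItem, CAPACITY>,
) {
    while let Ok(item) = rx.recv().await {
        do_background_work(item);
    }
}

// elsewhere, holding a `Sender<'static, WorkItem, CAPACITY>` clone:
// tx.try_send(item).ok(); // queue another "instance"; errors once CAPACITY are pending
```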

What you want is preemptive concurrency. RTIC, at least for tasks at the same priority, is cooperative. There is no time slicing. This means that you would gain nothing by spawning multiple tasks, as the other tasks would run only after the first task finishes. I think this is why this was removed from RTIC 2.x, as channels offer the same behavior and are a little more flexible. But as I said, @perlindgren is probably the best person to clarify.

The reason for running multiple tasks is to capture the overhead added by context switches, which would be lost if it were all one task running. But I also understand that there is no great value in using capacities in a normal situation; it would just be nice for my niche case :)

capture the overhead added by context switches

Oh yeah, so if measuring this is the goal, I think your setup is not a good fit for a cooperative scheduler.

Well, partly. The goal is to compare it to a similar application written in C (FreeRTOS), so I also want to capture the performance of the languages as well as everything connected to the RTOSes.

In RTIC 2.x you could try to measure the async task context switches. Create two async tasks sharing two channels, implement a simple ping-pong exchange, and then measure the time it takes to do X ping-pongs.
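A rough sketch of such a ping-pong benchmark with rtic-sync channels, assuming the two channels are created in `init` and cross-wired when spawning, and that the DWT cycle counter has been enabled there (`ROUNDS` is an arbitrary choice):

```rust
use cortex_m::peripheral::DWT;
use rtic_sync::{channel::*, make_channel};

const ROUNDS: u32 = 10_000;

#[task]
async fn ping(
    _cx: ping::Context,
    mut tx: Sender<'static, u32, 1>,
    mut rx: Receiver<'static, u32, 1>,
) {
    let start = DWT::cycle_count();
    for _ in 0..ROUNDS {
        tx.send(0).await.ok();   // wakes `pong`...
        let _ = rx.recv().await; // ...and suspends until it answers
    }
    let total = DWT::cycle_count().wrapping_sub(start);
    // total / (2 * ROUNDS) ~ cycles per one-way switch incl. channel overhead
    let _ = total;
}

#[task]
async fn pong(
    _cx: pong::Context,
    mut tx: Sender<'static, u32, 1>,
    mut rx: Receiver<'static, u32, 1>,
) {
    // echo everything back
    while let Ok(v) = rx.recv().await {
        tx.send(v).await.ok();
    }
}
```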

There are different types of context switching:

  1. RTIC binds hw tasks to interrupts; in this case there is no added overhead (OH) on top of the hw interrupt dispatch.

  2. sw tasks (RTIC v1) use a single dispatcher per priority level, which also allows passing arguments. The implementation is fairly simple: each task holds a message queue. On top of that there is a timer queue, but it is kept separate, so no additional copy of message data is needed because of it. The scheduling cost is the underlying hw cost + the dispatcher, so the cost is payload dependent. The dispatcher and user tasks are compiled together in the generated code, which is efficient in comparison to a traditional kernel, where the OS is compiled separately.

  3. async tasks (RTIC v2) bind to an async executor. The actual OH of the context switch is largely up to how well Rust is able to generate code for storing/restoring context. The "glue" generated by RTIC is minimal. One can await "anything", including channels; the overhead of that is completely up to the channel implementation and not part of the RTIC core.

If you access local resources (or shared lock-free ones), these are accessed directly (no added OH). If you access shared (non-lock-free) resources, access is zero-cost in Rust terms, implemented by manipulating either the interrupt enable register or BASEPRI (where available). This cost of resource access is independent of task type.

As mentioned in earlier comments, there is no time-slicing. Tasks run to completion (or yield on await in v2). One can mimic v1 capacity through channels in v2, but they are not exactly the same.
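To make points 1 and 3 concrete, a minimal sketch (the `USART1` interrupt name is device-specific and just an example):

```rust
// 1. hardware task: bound directly to an interrupt vector; RTIC adds no
//    dispatch overhead on top of the hardware's own exception entry/exit.
#[task(binds = USART1, priority = 2)]
fn on_uart(_cx: on_uart::Context) {
    // runs as the USART1 ISR itself
}

// 3. async software task: runs on an RTIC-generated executor for its
//    priority level and yields back to it at every `.await` point.
#[task(priority = 1)]
async fn worker(_cx: worker::Context) {
    // ... await channels, timers, etc.
}
```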

How is starvation handled in this case? Let's compare RTIC v1 and RTIC v2 again. If one wants task A to run 10 times in v1, one uses capacity to schedule the task 10 times. Then we schedule another task B, and then we schedule more instances of task A. Then, if tasks A and B have the same priority, B would run after the first 10 instances of A.

Let's imagine the same case in v2, where we use channels instead. I imagine one would then have task A in an infinite loop reading from a channel and getting its arguments from it. If we send 10 values to that channel (so that A runs 10 times), schedule a task B, and after some time (before task A has worked through the first 10 items) send more items to the channel, wouldn't B possibly be starved if this continues forever? Or am I misunderstanding something?