xopxe / lumen

Lua Multitasking Environment.

Bugs in test-buff.lua and test-pause.lua

gschmottlach opened this issue · comments

There are some task-ordering bugs in the test-buff.lua and test-pause.lua programs. Basically, if you define your "sender" tasks before the "receiver" tasks, you risk losing the first message. For instance, in test-buff.lua, if the "sender" task is defined before the "receiver" task, then when the receiver runs for the very first time it misses the sender's output because the buffer hasn't been established yet, so the first signal is lost. Look at the output and you will see the problem.

Likewise, test-pause.lua suffers from a similar problem: if the receiver is defined after the sender, it misses the sender's first signal.
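
For reference, here is a minimal sketch of the ordering race being described (this assumes Lumen's `sched.run`/`sched.signal`/`sched.wait` style of API with an `{emitter=..., events={...}}` wait descriptor; module paths, field names and return values may differ between versions, so treat it as illustrative only):

```lua
-- Hypothetical reconstruction of the problematic ordering, not the actual test code.
local sched = require 'sched'  -- assumed module name; may be 'lumen.sched' in newer trees

-- Sender created (and scheduled) first: its first signal fires before anyone is waiting.
local sender = sched.run(function()
  sched.signal('ping', 'first')   -- no receiver is waiting yet, so this signal is lost
  sched.sleep(1)
  sched.signal('ping', 'second')  -- by now the receiver is blocked in sched.wait()
end)

-- Receiver created second: its first sched.wait() only starts after 'first' was emitted.
sched.run(function()
  local waitd = { emitter = sender, events = {'ping'} }
  while true do
    local _, _, msg = sched.wait(waitd)  -- exact return signature is an assumption
    print('received:', msg)              -- only 'second' and later signals show up
  end
end)

sched.go()  -- start the scheduler loop (name is an assumption; newer versions may use sched.loop())
```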

I'm not sure whether these behaviors are intentional, to illustrate the tasking nature of the scheduler and its dependency on which task is run/scheduled first. Still, it's a bit disconcerting when running the test programs and observing this behavior; it makes you believe there is a bug in the scheduler.

Hope this helps . . .

Fixed. It's not that I intentionally left it that way to illustrate how signals work; it's just that, since I know this behavior is "as designed", it never attracted my attention. Nevertheless, I do agree it is a bit unsettling to see that in demo/test code... it just looks broken.

Anyway, notice that the cause of the "missing events" is not strictly a starting-order problem: a receiver started first can still miss events if it goes to sleep or waits on some other event. If you want to receive those missed events anyway, you can use a buffering wait descriptor. You can also block a task until another task appears by using catalogs.
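
For example, something along these lines (a sketch only; the `buff_len`/`buff_mode` fields on the wait descriptor are taken from my recollection of the API and may differ in name or behavior between versions):

```lua
-- Sketch: a buffering wait descriptor keeps signals that arrive while the
-- receiver is not actually blocked in sched.wait(), so they are not dropped.
sched.run(function()
  local waitd = {
    emitter  = sender,
    events   = {'ping'},
    buff_len = 10,               -- assumed field: keep up to 10 signals emitted while we were busy
    -- buff_mode = 'keep_last',  -- assumed field: overflow policy for the buffer
  }
  while true do
    local _, _, msg = sched.wait(waitd)
    print('received:', msg)
  end
end)
```

The catalog approach works the other way around: the emitter blocks until the receiver has registered itself under a known name, so nothing is emitted before there is someone to receive it.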

With that in mind, I was tempted to "fix" the demos by simply adding a 1-second wait (or even a plain yield) at the beginning of the emitter, to give the receiver time to start. But in the end I put in something minimally more sophisticated :)
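
For the record, the quick workaround I had in mind would have looked roughly like this (a sketch, assuming `sched.sleep` delays the running task):

```lua
-- Sketch of the simple workaround that was considered but not committed:
-- delay the emitter's first signal so the receiver has time to reach its first sched.wait().
local sender = sched.run(function()
  sched.sleep(1)                  -- or a plain yield, if you just want to let other tasks run first
  sched.signal('ping', 'first')   -- the receiver is already waiting by now
  sched.sleep(1)
  sched.signal('ping', 'second')
end)
```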

Thanks for addressing this. I think it will minimize any concerns developers might have when evaluating your scheduler. Without understanding what's going on, they might falsely assume something is fundamentally broken in the scheduler (when in fact nothing is).