Cancelled asyncio tasks
tu500 opened this issue
I encountered some strange behavior in my setup. I tried to come up with a minimal example, but the behavior varied inexplicably. This is what I've come up with.
The original error I was tracking is the cancelled asyncio tasks. For me the behavior changes, for example, when I change the number of child levelB's I create. So this looks like some sort of timing/order/race condition issue?
Also, at some point while minimizing the example, the D-Bus error appeared as well. It is probably a separate issue, but since I haven't been able to isolate it, I just left it in.
I realize this might very well be some issue with my specific setup/machine/... But for me this is reproducible. Any ideas?
I'm running a current Fedora 29 with D-Bus Message Bus Daemon 1.12.10.
Seems there are a couple of libdbus issues, which I have addressed in commit 0f70846 and also raised on the D-Bus list.
With these fixes, your example mostly works -- I still get the occasional segfault in the client. I will note that your server LevelA.start_first_stage() method is not actually returning a reply, which might be a factor.
OK, there is something else mysterious going on. Looks like about half the client-side on_new_levelB() calls get into LevelB.init(), and thence into the Ravel get_async_interface() call, and then we see these “Task was destroyed but it is pending!” messages, which would seem to indicate that the task is being lost instead of going into the scheduler queue. Why? I don’t know.
For example, here is a quick hack
diff --git a/ravel.py b/ravel.py
index 67a6e8d..569df08 100644
--- a/ravel.py
+++ b/ravel.py
@@ -22,6 +22,8 @@ from dbussy import \
DBUSX, \
Introspection
+KEEPIT = []
+
#+
# High-level bus connection
#-
@@ -2344,7 +2346,7 @@ def _message_interface_dispatch(connection, message, bus) :
#end await_result
bus.loop.create_task(await_result(result))
else :
- bus.loop.create_task(result)
+ KEEPIT.append(bus.loop.create_task(result))
#end if
result = DBUS.HANDLER_RESULT_HANDLED
elif isinstance(result, bool) :
which makes the task-destroyed error go away. But of course it is not acceptable as a proper fix, because KEEPIT grows without bound and so leaks memory.
Thanks for looking into that. Yes, the commit stops the DBus errors.
For the other part: my testing shows that asyncio apparently does not keep references to created tasks on its own. (Which, from my point of view, would be unexpected behavior.) I have asked on the Python IRC channel whether this is intended; answer pending.
If so, maybe you need to keep track of the task futures and await them somewhere?
If this turns out to be a feature, not a bug, of asyncio, then I’m going to have to. ;)
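For reference, here is a minimal sketch (an illustration, not the reporter's actual test case) of the behavior under discussion: if the caller drops the Task object returned by create_task(), nothing in user code keeps the task alive, and when the loop is torn down while the task is still pending, asyncio emits exactly this warning as the Task object is finalised.

import asyncio

async def never_finishes():
    # stands in for an async handler that is still waiting on something
    await asyncio.sleep(60)

async def main():
    # the returned Task object is dropped immediately; nothing else in
    # user code keeps a strong reference to it
    asyncio.get_event_loop().create_task(never_finishes())
    await asyncio.sleep(0)  # let the task take its first step, then return

# the plain loop API is used deliberately here: asyncio.run() would
# cancel leftover tasks on exit and hide the effect
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
# on loop teardown (or at interpreter exit) the still-pending Task is
# finalised, and asyncio logs "Task was destroyed but it is pending!"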
OK, I have added a TaskKeeper base class in commit 57b1640 to keep strong references to created tasks and reap them as they end. See how that helps.
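The contents of commit 57b1640 are not shown in this thread, but a keeper along the following lines, which holds a strong reference to each created task and drops it from a done callback, is the usual way to address this; the class name matches the commit description, while the method names and signatures below are illustrative guesses rather than the committed API.

import asyncio

class TaskKeeper:
    "Holds strong references to created tasks and drops them when they finish."

    def __init__(self):
        self._tasks = set()

    def create_task(self, coro, loop=None):
        "Schedule coro as a Task, keeping a reference until it completes."
        if loop is None:
            loop = asyncio.get_event_loop()
        task = loop.create_task(coro)
        self._tasks.add(task)
        # drop the strong reference as soon as the task is done, so the
        # set does not grow without bound the way KEEPIT does
        task.add_done_callback(self._tasks.discard)
        return task

With something like this mixed into the bus object, _message_interface_dispatch() could route new tasks through the keeper instead of calling bus.loop.create_task(result) directly, and the set empties itself as tasks complete.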
No further trouble?