sanic-org / sanic

Accelerate your web app development | Build fast. Run fast.

Home Page: https://sanic.dev

I can't stop a background task with cancel_task!

sedlice opened this issue

Is there an existing issue for this?

  • I have searched the existing issues

Describe the bug

I created a background task using add_task:

request.app.add_task(run_predict(request), name="predict_task")

And the task was running; as you can see, the task name is 'predict_task':

Executing <Task pending name='predict_task' coro=<run_predict() running at /home/sedlice/gitcode/projectdb/server/api/machine_learning/classify.py:132> wait_for=<Future pending cb=[Task.task_wakeup()] created at /usr/local/lib/python3.11/asyncio/streams.py:520> created at /home/sedlice/gitcode/projectdb/server/.venv/lib/python3.11/site-packages/sanic/app.py:1229> took 4.227 seconds

But when I used cancel_task to stop it,

await request.app.cancel_task("predict_task")

I got an error log:

[2023-11-08 09:47:57 +0800] [6817] [ERROR] Exception occurred while handling uri: 'http://127.0.0.1:8000/v1/machine_learning/classify/predict/cancel'
Traceback (most recent call last):
  File "/home/sedlice/gitcode/projectdb/server/.venv/lib/python3.11/site-packages/sanic/app.py", line 1288, in get_task
    return self._task_registry[name]
           ~~~~~~~~~~~~~~~~~~~^^^^^^
KeyError: 'predict_task'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/sedlice/gitcode/projectdb/server/.venv/lib/python3.11/site-packages/sanic/app.py", line 974, in handle_request
    response = await response
               ^^^^^^^^^^^^^^
  File "/home/sedlice/gitcode/projectdb/server/utils/auth.py", line 181, in decorated_function
    response = await f(request, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sedlice/gitcode/projectdb/server/api/machine_learning/classify.py", line 175, in classify_predict_cancel
    await request.app.cancel_task("predict_task")
  File "/home/sedlice/gitcode/projectdb/server/.venv/lib/python3.11/site-packages/sanic/app.py", line 1303, in cancel_task
    task = self.get_task(name, raise_exception=raise_exception)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/sedlice/gitcode/projectdb/server/.venv/lib/python3.11/site-packages/sanic/app.py", line 1291, in get_task
    raise SanicException(
sanic.exceptions.SanicException: Registered task named "predict_task" not found.

Is it a bug? Or did I just use the wrong code? Can somebody help me?

Code snippet

No response

Expected Behavior

No response

How do you run Sanic?

As a script (app.run or Sanic.serve)

Operating System

Linux

Sanic Version

23.3.0

Additional context

Python version is 3.11.4

I am wondering if you are running into a multiple worker issue? 🤔

Here is a working example to show you that it works:

from sanic import Sanic, response, Request
from asyncio import sleep

app = Sanic("Test")


@app.route("/")
async def handler(request: Request):
    # Cancel the background task by the name it was registered under
    await request.app.cancel_task("some_task")
    return response.text("Hello World!")


async def some_task():
    while True:
        print("I'm doing something!")
        await sleep(1)


@app.before_server_start
async def start_some_task(app, loop):
    # Register the task under a name so it can be cancelled later
    app.add_task(some_task(), name="some_task")

if __name__ == "__main__":
    app.run(port=7777, dev=True)

However, remember that the tasks are local to the running instance. If you have multiple workers running, they will not cancel tasks that exist somewhere else.
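One side note (a minimal sketch, not the answer to the multi-worker question): the traceback above shows that cancel_task forwards a raise_exception flag to get_task, so a handler that may land on a worker that does not own the task can pass raise_exception=False and treat a missing registration as a no-op instead of a 500. The route path below is just a placeholder:

from sanic import Sanic, Request, response

app = Sanic("Test")


@app.post("/predict/cancel")
async def classify_predict_cancel(request: Request):
    # raise_exception=False: if "predict_task" is not registered on this
    # worker, do nothing instead of raising SanicException.
    await request.app.cancel_task("predict_task", raise_exception=False)
    return response.text("cancel requested")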

LMK if something else is going on.

Thanks! Yeah, you are right. I'm using multiple workers to run the Sanic server.
Do you mean that if I want to use cancel_task(), I can't use multiple workers?
But if running with a single worker, wouldn't efficiency decrease?

Do you mean that if I want to use cancel_task(), I can't use multiple workers?

The task is local to the instance it is running on. When you call cancel_task, it will only cancel the task if it exists on that instance. If you want a multi-worker solution, then you need to come up with a pattern to share the cancellation between workers (a pub/sub or something similar, sketched below) and then cancel on each worker (ignoring it where the task does not exist).

You might need to think about the right solution for your use case.

So, you can use it, but you will need to deal with the extra complexity.
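For illustration, here is a rough sketch of that kind of pattern using Redis pub/sub (Redis itself, the channel name, and the cancel route are my own assumptions, not anything built into Sanic): every worker subscribes to a cancel channel at startup, the HTTP handler only publishes the task name, and each worker cancels the task only if it happens to own it.

from sanic import Sanic, Request, response
from redis import asyncio as aioredis  # assumes redis-py >= 4.2 is installed

app = Sanic("Test")
CANCEL_CHANNEL = "task-cancel"  # hypothetical channel name


async def listen_for_cancels(app):
    # Runs on every worker: wait for cancel requests and act locally.
    pubsub = app.ctx.redis.pubsub()
    await pubsub.subscribe(CANCEL_CHANNEL)
    async for message in pubsub.listen():
        if message["type"] != "message":
            continue
        name = message["data"].decode()
        # Only the worker that registered the task will find it;
        # the others silently ignore the request.
        await app.cancel_task(name, raise_exception=False)


@app.before_server_start
async def setup_pubsub(app, loop):
    app.ctx.redis = aioredis.Redis()
    app.add_task(listen_for_cancels(app), name="cancel_listener")


@app.post("/predict/cancel")
async def classify_predict_cancel(request: Request):
    # Broadcast the cancel request to all workers instead of
    # cancelling only on the worker that happened to receive it.
    await request.app.ctx.redis.publish(CANCEL_CHANNEL, "predict_task")
    return response.text("cancel requested")

You would still add the predict task itself with app.add_task(..., name="predict_task") wherever it is kicked off, exactly as in the original report.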

But if running with a single worker, wouldn't efficiency decrease?

For sure.