tobymao / saq

Simple Async Queues

Home Page: https://saq-py.readthedocs.io/en/latest/


Fast clear all in queue

jnordberg opened this issue · comments

Is there some way to quickly abort all jobs in a queue? I'm able to enqueue at ~6k operations per second, but aborting slows down to just 30 per second. The bottleneck seems to be the redis server, which is pegged at 100% CPU during this.

To add to this, the time to abort grows with the number of jobs in the queue; the numbers above are for a queue with 500k jobs queued.

to unsafely but efficiently abort all jobs in the queue, you can probably send a mass deletion command in redis. so just tell redis to mass delete all keys.

if you look at what abort does:

async def abort(self, job: Job, error: str, ttl: float = 5) -> None:
    async with self._op_sem:
        async with self.redis.pipeline(transaction=True) as pipe:
            dequeued, *_ = await (
                pipe.lrem(self._queued, 0, job.id)
                .zrem(self._incomplete, job.id)
                .expire(job.id, ttl + 1)
                .setex(job.abort_id, ttl, error)
                .execute()
            )

        if dequeued:
            await job.finish(Status.ABORTED, error=error)
            await self.redis.delete(job.abort_id)
        else:
            await self.redis.lrem(self._active, 0, job.id)

it does it all in a transaction to make sure it's done safely. you could probably bulk do these steps without the pipeline / transaction and also skip the finish callback
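a minimal sketch of that idea, assuming a saq `Queue` with the `.redis`, `._queued`, and `._incomplete` attributes seen in `abort` above; `bulk_abort`, `chunked`, and `CHUNK` are hypothetical names, not saq's API:

```python
CHUNK = 1000  # arbitrary batch size

def chunked(seq, size=CHUNK):
    """Yield successive slices of `seq` with at most `size` items each."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

async def bulk_abort(queue, job_ids, ttl=5):
    # One non-transactional pipeline per batch: far fewer round trips than
    # calling queue.abort() once per job, at the cost of atomicity and of
    # skipping the per-job finish callback.
    for batch in chunked(job_ids):
        async with queue.redis.pipeline(transaction=False) as pipe:
            for job_id in batch:
                pipe.lrem(queue._queued, 0, job_id)
                pipe.zrem(queue._incomplete, job_id)
                pipe.expire(job_id, ttl + 1)
            await pipe.execute()
```

batching keeps any single pipeline from ballooning in memory when aborting hundreds of thousands of jobs.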

Doing this seems to work:

await redis.delete(queue._queued, queue._incomplete, queue._active)

But what do you mean by unsafe in this context? Is it possible to get active workers in a bad state by doing this?

workers may be in the middle of processing jobs, so those workers will finish jobs that no longer exist. this behavior is undefined and i haven't tested it.

Ok, thanks. So currently there's no way of doing it fast with active workers.

I'll look at sending a PR that makes sure the workers gracefully handle a job that's yanked from underneath them.
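The graceful handling could look something like this sketch: before persisting a result, the worker checks whether the job key still exists and treats a vanished job as aborted. `FakeRedis` is an in-memory stand-in for testing, and `finish_job_safely` is a hypothetical name, not a saq internal:

```python
import asyncio

class FakeRedis:
    """Tiny in-memory stand-in for redis, just for this sketch."""
    def __init__(self):
        self.data = {}
    async def exists(self, key):
        return key in self.data
    async def set(self, key, value):
        self.data[key] = value

async def finish_job_safely(redis, job_id, result):
    # If the job key vanished underneath us (e.g. a mass delete), skip
    # writing state for a key that no longer exists and report it aborted.
    if not await redis.exists(job_id):
        return "aborted"
    await redis.set(job_id, result)
    return "complete"

async def demo():
    r = FakeRedis()
    r.data["job-1"] = "active"                        # job-1 still exists
    a = await finish_job_safely(r, "job-1", "done")
    b = await finish_job_safely(r, "job-2", "done")   # job-2 was yanked
    return a, b
```

running `asyncio.run(demo())` returns `("complete", "aborted")`: the surviving job finishes normally while the deleted one is quietly reported as aborted instead of erroring.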