queue and multiple kubernetes replicas
ChrisBampis opened this issue
## Context
Hi everyone. I'm trying to set up a queuing system where my worker has a concurrency of 1.
My server runs as two replicas, and the queues and handler are identical in both.
I would have expected that when a job is added to the queue while another job is active, it would be placed in the waiting list.
That is what happens when I have only one replica.
However, when I trigger an API call with 2 parallel users (using JMeter), I see that both calls become active straight away. With more users, the number of active jobs is always equal to the number of Kubernetes replicas.
```js
// Set up the queue (Bull, not BullMQ)
const Bull = require('bull');
const queue = new Bull('queueName', url, { redis: { db: 1 } });

// Register the processor with a concurrency of 1.
// Return a promise instead of calling done() -- mixing async and done is not supported.
queue.process('process', 1, async (job) => {
  // ... do the actual work ...
});

// Add a job to the queue and wait for it to finish
const job = await queue.add('process', data);
return job.finished();
```
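For what it's worth, the behaviour described above is consistent with Bull's concurrency setting being per worker process rather than global: each replica registers its own processor, so the active-job count scales with the replica count. A minimal sketch of that arithmetic (`effectiveConcurrency` is just an illustrative helper, not a Bull API):

```javascript
// The concurrency passed to queue.process() is local to each Node process.
// With a Kubernetes Deployment, the effective concurrency across the cluster
// is replicas * per-process concurrency.
function effectiveConcurrency(replicas, perProcessConcurrency) {
  return replicas * perProcessConcurrency;
}

console.log(effectiveConcurrency(2, 1)); // → 2, matching the 2 active jobs seen with 2 replicas
```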
- Is this the intended behaviour?
- How can I make sure that my queue handles one job at a time?
It is documented that you should not use async and the done callback at the same time in your processor function.
Also, if you are new to Bull, please consider using the newer BullMQ: https://github.com/taskforcesh/bullmq
Thanks @manast for the reply. This is pseudocode; I missed that. The question is whether I can achieve a global concurrency of 1 with more than one replica of my server. The more I read about it, the more it seems this is only possible with BullMQ Pro. Is there another way with Bull or BullMQ?
Currently, it is not possible to achieve a specific global concurrency with Bull or BullMQ. With the Pro version, though, you can specify a global concurrency per group, so if you only use one group you effectively achieve exactly that.
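To illustrate the semantics described here (this is not the BullMQ Pro API, just a self-contained in-process sketch): one group with a concurrency of 1 means jobs are strictly serialized even when they are submitted in parallel.

```javascript
// A promise-chain serializer: at most one job runs at a time, the rest wait.
// This mimics the *effect* of a single group with concurrency 1; it is an
// in-process illustration only, not a distributed implementation.
class Serializer {
  constructor() {
    this.tail = Promise.resolve();
  }
  run(job) {
    const result = this.tail.then(job);
    // Keep the chain alive even if a job rejects.
    this.tail = result.catch(() => {});
    return result;
  }
}

async function demo() {
  const group = new Serializer();
  let active = 0;
  let peak = 0;
  const submit = () =>
    group.run(async () => {
      active++;
      peak = Math.max(peak, active);
      await new Promise((resolve) => setTimeout(resolve, 10));
      active--;
    });
  // Four jobs submitted "in parallel" still execute one at a time.
  await Promise.all([submit(), submit(), submit(), submit()]);
  return peak; // stays at 1
}
```

In a real multi-replica deployment the serialization has to live in Redis (which is what the Pro groups feature provides); an in-process lock like this only covers a single replica.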
Great, thank you. I'll give it a try with the trial version to see if it works for my use case.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.