droplit / worqr

WORQR - Atomic Redis Queue

controlling the speed at which a queue is consumed

jhiver opened this issue · comments

Hello & thanks for your interesting lib. I've had a play with it, but it seems that the worker consumes the queue as fast as it can, e.g. worqr.on('test_queue') keeps receiving events even though the task being processed hasn't finished yet. Is there a way to control how many tasks can run in parallel? E.g. if I want my worker to do at most 1 task at a time, and only get a work event when the task is complete, is there a way to do it?

This is my consumer code:

Worqr = require('worqr').Worqr
worqr = new Worqr host: 'localhost', port: 6379

waitSomeTime = ->
  return new Promise (resolve) ->
    setTimeout resolve, 2000

doWork = ->
  job = await worqr.dequeue 'test_queue'
  return unless job
  console.log "starting", job
  await waitSomeTime()
  console.log "stopping", job
  await worqr.finishProcess(job.id)

worqr.on 'test_queue', (type, message) ->
  if type is 'work'
    doWork()
  else
    console.log type, message

main = ->
  await worqr.startWorker()
  await worqr.startWork('test_queue')

main()

Kind Regards
JM

@jhiver From what I understand, you should be able to just have the worker keep track of how many processes it is currently working on (for your example of max 1 task, some isWorking boolean) and not dequeue while the worker is busy. Once your worker completes its process, it can call dequeue to get a new task if one exists (dequeue returns null if the queue is empty), or list the tasks in the queue (getTasks('test_queue')) and decide whether to call dequeue. If there isn't a task in the queue, the worker simply waits until worqr.on 'test_queue' fires again.

Worqr = require('worqr').Worqr
worqr = new Worqr host: 'localhost', port: 6379

isWorking = false

waitSomeTime = ->
  return new Promise (resolve) ->
    setTimeout resolve, 2000

doWork = ->
  if isWorking
    console.log "already working"
    return
  isWorking = true
  job = await worqr.dequeue 'test_queue'
  unless job
    # reset the flag here, or the worker would stay "working" forever
    isWorking = false
    return
  console.log "starting", job
  await waitSomeTime()
  console.log "stopping", job
  await worqr.finishProcess(job.id)
  isWorking = false
  doWork()

worqr.on 'test_queue', (type, message) ->
  if type is 'work'
    doWork()
  else
    console.log type, message

main = ->
  await worqr.startWorker()
  await worqr.startWork('test_queue')

main()
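For what it's worth, the same flag pattern generalizes from max 1 task to a max-N limit by using a counter instead of a boolean. A minimal sketch in plain JavaScript (the array-backed `dequeue` is a mock standing in for worqr.dequeue, which needs a live Redis; `MAX_CONCURRENT`, `active`, and `peak` are illustrative names, not part of the worqr API):

```javascript
// Counting limiter generalizing the isWorking boolean above.
// The mock dequeue stands in for worqr.dequeue (which needs Redis).
const pending = ["a", "b", "c", "d", "e"];
const dequeue = async () => (pending.length ? pending.shift() : null);

const MAX_CONCURRENT = 2; // hypothetical limit, not a worqr option
let active = 0;
let peak = 0; // only tracked here to observe the limit

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function doWork() {
  if (active >= MAX_CONCURRENT) return; // every slot is busy
  active += 1;
  peak = Math.max(peak, active);
  const job = await dequeue();
  if (job === null) {
    active -= 1; // release the slot even when the queue is empty
    return;
  }
  await sleep(5); // simulate the real task (waitSomeTime above)
  // worqr.finishProcess(job.id) would go here in real code
  active -= 1;
  await doWork(); // check for more work, like the original example
}

// Simulate a burst of 'work' events: only MAX_CONCURRENT run at once.
const done = Promise.all([doWork(), doWork(), doWork(), doWork()]);
```

The guard at the top makes surplus 'work' events no-ops while all slots are busy; each finished task then pulls the next job itself, so the queue still drains.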

OK. So basically you use events to signal that the queue is non-empty, then dequeue at your own pace until job is null. I understand the logic of it, yet I would have expected worqr.dequeue to simply block until a job is ready to be dequeued, much like it works in beanstalk, etc.

That way you wouldn't need to listen for events or anything in your worker... you'd just do

while job = await worqr.dequeue(QUEUE_NAME)
  await doStuff(job)

Kind Regards
JM

@jhiver thanks for reaching out with the idea. Currently, as @chriswoodle described, there is no native way to make dequeue block, but we like the idea. You're more than welcome to make a pull request and one of us can review it, but I believe it's in the pipeline to be added to the lib shortly. Hope that info is useful to you! 👍
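Until native support lands, one stopgap is to wrap the non-blocking dequeue in a polling loop, roughly mimicking beanstalkd's reserve. A sketch in plain JavaScript (the array-backed `rawDequeue` is a mock standing in for worqr.dequeue, and `blockingDequeue` is a hypothetical wrapper, not part of the worqr API):

```javascript
// Sketch of a blocking dequeue built on top of a non-blocking one.
// rawDequeue mocks worqr.dequeue (which needs a live Redis).
const backlog = [];
const rawDequeue = async () => (backlog.length ? backlog.shift() : null);

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function blockingDequeue(pollMs = 10) {
  // Poll until a job shows up. A real implementation would rather
  // resolve on worqr's 'work' event than poll on a timer.
  for (;;) {
    const job = await rawDequeue();
    if (job !== null) return job;
    await sleep(pollMs);
  }
}

// The consumer loop described above, no event listeners needed:
const seen = [];
async function consume(n) {
  for (let i = 0; i < n; i++) {
    const job = await blockingDequeue();
    seen.push(job); // doStuff(job) in real code
  }
}

const done = consume(2);
setTimeout(() => backlog.push("late-job"), 25); // a job arriving later
backlog.push("first-job");
```

The loop just awaits jobs one at a time, even ones that arrive after the consumer starts, which is the behavior a native blocking dequeue would give directly.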