amqp-node / amqplib

AMQP 0-9-1 library and client for Node.js

Home Page: https://amqp-node.github.io/amqplib/

Calling Channel.cancel, then changing prefetch, then consuming again causes the new prefetch to ignore previously unacked messages

guya-activefence opened this issue · comments

Hi all,
I'm trying to change a consumer's prefetch at runtime, after consuming has started.

To do that, I'm doing the following:

  1. Create channel
  2. set prefetch to 10
  3. start consume
  4. at this point, there are 10 unacked messages
  5. channel cancel
  6. prefetch 100
  7. start consume again
  8. now there are 110 unacked messages instead of 100 in total

Is this a known bug?
Is it a bug in the lib, or a bug in the RabbitMQ server?
Is there an alternative way to change prefetch at runtime? (My processing throughput varies for exogenous reasons.)
Thanks for your help

Hi @guya-activefence,
Interesting question. I see the same behaviour as you, but it doesn't appear to be an issue with amqplib, since the library sends the correct prefetch-count in the Basic.Qos method.

const amqplib = require('amqplib');

(async () => {

  const connection = await amqplib.connect('amqp://localhost');
  const channel = await connection.createConfirmChannel();

  await createTestQueue();
  await sendMessages(200);
  await channel.prefetch(10);
  await consumeMessages();
  await channel.prefetch(100);
  await consumeMessages();

  async function createTestQueue() {
    await channel.assertQueue('test-q', { durable: false });
    await channel.purgeQueue('test-q');
  }

  async function sendMessages(n) {
    const promises = new Array(n).fill(0).map((_, i) => {
      return channel.sendToQueue('test-q', Buffer.from(String(i)));
    });
    await Promise.all(promises);
  }

  async function consumeMessages() {
    return new Promise((resolve) => {
      const consumerTag = `CT-${Date.now()}`;

      let count = 0;
      // Messages are deliberately never acked, so they stay unacknowledged
      channel.consume('test-q', (message) => {
        console.log(`Received ${++count}`);
      }, { consumerTag });

      process.once('SIGINT', async () => {
        await channel.cancel(consumerTag);
        resolve();
      });
    });
  }

})();

Initial 10 unacknowledged messages

issue-698 % node index.js
Received 1
Received 2
Received 3
...
Received 8
Received 9
Received 10
^C

Subsequent 100 unacknowledged messages

Received 1
Received 2
Received 3
...
Received 98
Received 99
Received 100

Total unacknowledged messages

[Screenshot, 2022-08-07: RabbitMQ management UI showing the total unacknowledged message count]

Wireshark debug From channel.prefetch(10)

Advanced Message Queueing Protocol
    Type: Method (1)
    Channel: 1
    Length: 11
    Class: Basic (60)
    Method: Qos (10)
    Arguments
        Prefetch-Size: 0
        Prefetch-Count: 10
        .... ...0 = Global: False

Wireshark debug from channel.prefetch(100)

Advanced Message Queueing Protocol
    Type: Method (1)
    Channel: 1
    Length: 11
    Class: Basic (60)
    Method: Qos (10)
    Arguments
        Prefetch-Size: 0
        Prefetch-Count: 100
        .... ...0 = Global: False

There's some relevant information in the RabbitMQ consumer prefetch documentation...

AMQP 0-9-1 specifies the basic.qos method to make it possible to limit the number of unacknowledged messages on a channel (or connection) when consuming (aka "prefetch count"). Unfortunately the channel is not the ideal scope for this - since a single channel may consume from multiple queues, the channel and the queue(s) need to coordinate with each other for every message sent to ensure they don't go over the limit. This is slow on a single machine, and very slow when consuming across a cluster.

Furthermore for many uses it is simply more natural to specify a prefetch count that applies to each consumer.

Therefore RabbitMQ redefines the meaning of the global flag in the basic.qos method:

global | Meaning of prefetch_count in AMQP 0-9-1       | Meaning of prefetch_count in RabbitMQ
------ | --------------------------------------------- | ------------------------------------------------------
false  | shared across all consumers on the channel    | applied separately to each new consumer on the channel
true   | shared across all consumers on the connection | shared across all consumers on the channel
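The table above accounts for the observed 110: with the default global=false, each consumer's prefetch window is independent, and messages delivered to the cancelled consumer still count as unacknowledged. A small arithmetic sketch (the helper name is illustrative, not part of amqplib):

```javascript
// Expected cap on unacknowledged messages on one channel, given RabbitMQ's
// reinterpretation of basic.qos. Illustrative helper, not an amqplib API.
function unackedCap(prefetchCounts, global) {
  // global=true: one channel-wide limit; the most recent basic.qos wins.
  if (global) return prefetchCounts[prefetchCounts.length - 1];
  // global=false (default): each consumer gets its own window, and unacked
  // messages from a cancelled consumer still count towards the total.
  return prefetchCounts.reduce((sum, n) => sum + n, 0);
}

console.log(unackedCap([10, 100], false)); // 110 — the behaviour reported above
console.log(unackedCap([10, 100], true));  // 100 — the behaviour desired
```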

So, by default, RabbitMQ applies the limit separately to each consumer. If, when specifying the prefetch, you also set global to true, you get the behaviour you desire, i.e.

  await createTestQueue();
  await sendMessages(200);
  await channel.prefetch(10, true);
  await consumeMessages();
  await channel.prefetch(100, true);
  await consumeMessages();

However, if you decide to set global to true, please note the warning about performance...

the channel and the queue(s) need to coordinate with each other for every message sent to ensure they don't go over the limit. This is slow on a single machine, and very slow when consuming across a cluster.
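One alternative worth considering (my suggestion, not something stated in the thread): rather than using global QoS, cancel the consumer and replace the whole channel, setting the new prefetch before consuming again. Note that closing a channel requeues its unacknowledged messages, so this only fits if redelivery of in-flight messages is acceptable. A sketch, using a hypothetical helper:

```javascript
// Sketch: swap in a fresh channel with a new prefetch. reconsumeWithPrefetch
// is a hypothetical helper, not part of amqplib's API. Closing the old
// channel requeues its unacked messages, so only the new limit remains.
async function reconsumeWithPrefetch(connection, oldChannel, consumerTag, queue, prefetch, onMessage) {
  await oldChannel.cancel(consumerTag);  // stop further deliveries
  await oldChannel.close();              // requeues this channel's unacked messages
  const channel = await connection.createChannel();
  await channel.prefetch(prefetch);      // applies to each new consumer (global=false)
  await channel.consume(queue, onMessage, { consumerTag });
  return channel;
}
```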

Thank you for your quick response and amazingly detailed answer