OptimalBits / bull

Premium Queue package for handling distributed jobs and messages in NodeJS.


Question: Redeploy consumers after updating rate limiter

khanhdn308 opened this issue · comments

Hi.
I'm using bull through NestJS (@nestjs/bull) and loving it so far.
We have 4 consumers polling the same queue at the moment. We're planning to reduce the interval between requests to speed things up. I have some questions related to this.

  • After we redeploy our 4 instances, when can I expect the queue to start processing at the new rate?
  • Is there a way to check the rate limit config through a Redis client, to see whether the change has taken effect? (See the sketch after this list.)
  • What will happen if the instances' configs mismatch? Since we plan to perform a rolling deployment, there will be a window when instances have different rate limit configs.
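For reference, here is a minimal sketch of how I would try to peek at the limiter state through a Redis client. It assumes the default "bull" key prefix, the queue name JOB_CHANNEL from the module below, and ioredis; as far as I can tell, only the running counter for the current window is kept in Redis, while the max/duration values live inside each Queue instance, so there may be nothing to read back for the config itself.

import Redis from 'ioredis';

// Hypothetical helper: prints the limiter counter and the remaining window.
// The key layout ("<prefix>:<queue>:limiter") is internal to Bull and may
// differ between versions, so treat this as a best guess, not a stable API.
async function inspectLimiter(): Promise<void> {
  const redis = new Redis({ host: '127.0.0.1', port: 6379 });

  const key = 'bull:JOB_CHANNEL:limiter';
  const count = await redis.get(key);  // jobs counted in the current window
  const ttlMs = await redis.pttl(key); // ms until the window resets

  console.log(`limiter count=${count ?? 'none'}, window resets in ${ttlMs} ms`);
  await redis.quit();
}

inspectLimiter().catch(console.error);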

Update: we redeployed all of our instances, but the rate is much slower compared to our config.

// ConfigService and SharedModule come from our own shared module (omitted here).
import { Module } from '@nestjs/common';
import { BullModule } from '@nestjs/bull';

@Module({
    imports: [
        BullModule.forRootAsync('JOB_QUEUE', {
            imports: [SharedModule],
            useFactory: async (configService: ConfigService) => ({
                redis: {
                    host: configService.redisHost,
                    port: configService.redisPort,
                    password: configService.redisPassword,
                    tls: configService.tlsSettings,
                },
            }),
            inject: [ConfigService],
        }),
        BullModule.registerQueue({
            configKey: 'JOB_QUEUE',
            name: 'JOB_CHANNEL',
            limiter: {
                max: 1,
                duration: 100,
            },
        }),
        SharedModule,
    ],
})
export class QueueModule {}
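
For context on the numbers: with limiter: { max: 1, duration: 100 } I read the ceiling as 1 job per 100 ms, i.e. roughly 10 jobs/second, and as far as I can tell the limiter is enforced through Redis, so the 4 consumers share that ceiling rather than each getting it. Below is a minimal sketch (plain bull rather than the NestJS wrapper, with an assumed local Redis) that I can run to compare the observed rate against that ceiling.

import Queue from 'bull';

// Same queue name and limiter as the module above; host/port are assumptions.
const queue = new Queue('JOB_CHANNEL', {
  redis: { host: '127.0.0.1', port: 6379 },
  limiter: { max: 1, duration: 100 }, // at most 1 job per 100 ms ≈ 10 jobs/s
});

let completed = 0;

// Count completions reported by any worker on this queue.
queue.on('global:completed', () => {
  completed += 1;
});

// Report the observed throughput every 10 seconds.
setInterval(() => {
  console.log(`observed rate: ${(completed / 10).toFixed(1)} jobs/s`);
  completed = 0;
}, 10_000);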