confluentinc / confluent-kafka-python

Confluent's Kafka Python Client

Home Page: http://docs.confluent.io/current/clients/confluent-kafka-python


Broker list is not updated in the client

kyrsideris opened this issue

Hello,

I am experimenting with confluent-kafka and I have encountered a possible limitation: the broker list is not updated beyond the initial list, even when new brokers join the cluster. To demonstrate this, I created a small repo, accessible here:
https://github.com/kyrsideris/confluent-kafka-python-example

In a nutshell, even with 'api.version.request': 'true' set in the consumer to force Kafka 0.10 usage, the list of Kafka brokers is not updated in the consumer as expected and as described in the protocol documentation:
https://kafka.apache.org/0100/documentation.html#newconsumerconfigs

To demonstrate, I created ZooKeeper, Kafka broker, consumer, and producer Docker containers, then scaled Kafka up and killed the first broker to simulate a failure. The consumer never received the updated list of Kafka brokers and started failing.
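For reference, here is a minimal sketch of the consumer side of the experiment, assuming placeholder broker address, group id, and topic name (the actual setup lives in the linked repo):

```python
from confluent_kafka import Consumer, KafkaError

# Placeholder bootstrap address: only the brokers listed here are known
# to the client at startup ('kafka1' is the broker killed later in the test).
consumer = Consumer({
    'bootstrap.servers': 'kafka1:9092',
    'group.id': 'example-group',
    'api.version.request': 'true',
    'default.topic.config': {'auto.offset.reset': 'earliest'},
})
consumer.subscribe(['test-topic'])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None:
        continue
    if msg.error():
        # Skip end-of-partition notifications; report real errors.
        if msg.error().code() != KafkaError._PARTITION_EOF:
            print('Consumer error: {}'.format(msg.error()))
        continue
    print('Received: {}'.format(msg.value()))
```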

Can you explain if this is a limitation of the client and whether this functionality will be supported in the future?

Thanks in advance

The underlying client (librdkafka) will query connected brokers for metadata at regular intervals (topic.metadata.refresh.interval.ms) and add any new brokers to its list of brokers - thus connecting to them too.

What this means in your case is that you'll need to give it time to discover the newly added broker (at least topic.metadata.refresh.interval.ms) before bringing down the old broker, or the client won't have any way to learn about the new broker.
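For illustration, a hedged sketch of shortening that interval on the Python client (broker addresses and group id are placeholders):

```python
from confluent_kafka import Consumer

# Sketch: shorten librdkafka's periodic metadata refresh so newly added
# brokers are discovered sooner.
consumer = Consumer({
    'bootstrap.servers': 'kafka1:9092,kafka2:9092',
    'group.id': 'example-group',
    'api.version.request': 'true',
    # librdkafka's default is 300000 ms (5 minutes); refresh every 10 s instead.
    'topic.metadata.refresh.interval.ms': 10000,
})
```

With a shorter interval the client should learn about a new broker well before the old one is taken down, at the cost of more frequent metadata requests.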

@edenhill Thanks very much for the helpful and prompt response.
I have tested with 'metadata.max.age.ms': 1000 and it seems to work fine!
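For reference, a minimal sketch of the configuration tested above (broker address and group id are placeholders):

```python
from confluent_kafka import Consumer

# Configuration reported to work above; address and group id are placeholders.
consumer = Consumer({
    'bootstrap.servers': 'kafka1:9092',
    'group.id': 'example-group',
    'api.version.request': 'true',
    'metadata.max.age.ms': 1000,  # as tested above: very aggressive refresh
})
```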

Can I ask you about the configuration?

| property | version | doc |
| --- | --- | --- |
| metadata.max.age.ms | 0.10 | producerconfigs and newconsumerconfigs |
| topic.metadata.refresh.interval.ms | 0.8 | producerconfigs |

So if I set 'api.version.request': 'true', then I should use the 0.10 property (metadata.max.age.ms), and otherwise the corresponding properties defined by the broker.version.fallback version?
Do broker.version.fallback and api.version.request define the list of properties, for the corresponding version of Kafka, to be used in the producer and consumer?

Thanks again for your reply!

The properties you mention are for the Java consumer.
The Python client uses librdkafka which has its own configuration properties, see CONFIGURATION.md for the full list.
Also note that librdkafka supports all broker versions, but for brokers >= 0.10 you need to specify api.version.request=true to get the full protocol feature set (for metadata requests it doesn't matter, though).
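As an illustration, a hedged sketch of those two properties used together (broker address and topic are placeholders; broker.version.fallback is only consulted when the version request is disabled or fails):

```python
from confluent_kafka import Producer

# Placeholder address. With api.version.request=true the client probes the
# broker for its supported protocol versions (brokers >= 0.10); if the probe
# fails, it falls back to the feature set implied by broker.version.fallback.
producer = Producer({
    'bootstrap.servers': 'kafka1:9092',
    'api.version.request': 'true',
    'broker.version.fallback': '0.9.0.1',
})
producer.produce('test-topic', value=b'hello')
producer.flush()
```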