manub / scalatest-embedded-kafka

A library that provides an in-memory Kafka instance to run your tests against.


Producer failing when using atomic writes (exactly-once semantics) from Kafka 0.11.0.0

frossi85 opened this issue

The new version of Kafka now supports exactly-once semantics. See https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/

I am working on a project that needs it, so I added the changes to the project and wrote a new test. But the test is failing, because the new Kafka version introduced a few new Kafka/ZooKeeper configurations such as transaction.state.log.replication.factor.

That configuration has a default value of 3 and must not exceed the number of alive brokers in the cluster (here, a single embedded broker).
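For context, here is a minimal sketch of the kind of transactional-producer test involved. The spec style, topic name, and transactional id are illustrative placeholders, not the actual project code:

import java.util.Properties
import net.manub.embeddedkafka.{EmbeddedKafka, EmbeddedKafkaConfig}
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer
import org.scalatest.{Matchers, WordSpec}

class TransactionalProducerSpec extends WordSpec with Matchers with EmbeddedKafka {

  "a transactional producer" should {
    "write atomically to the embedded broker" in {
      implicit val config: EmbeddedKafkaConfig = EmbeddedKafkaConfig()

      withRunningKafka {
        val props = new Properties()
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, s"localhost:${config.kafkaPort}")
        // Setting transactional.id enables idempotence and atomic writes.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "test-transactional-id")

        val producer = new KafkaProducer(props, new StringSerializer, new StringSerializer)
        // initTransactions() makes the broker create the __transaction_state
        // topic, which fails on a single-broker cluster while that topic's
        // replication factor defaults to 3.
        producer.initTransactions()
        producer.beginTransaction()
        producer.send(new ProducerRecord("topic-under-test", "key", "value"))
        producer.commitTransaction()
        producer.close()
      }
    }
  }
}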

Here is the error I am getting:

ERROR KafkaApis:99 - [KafkaApi-0] Number of alive brokers '1' does not meet the required replication factor '3' for the transactions state topic (configured via 'transaction.state.log.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet.

Hi @frossi85,
I had the same issue and I believe it would be nice for the library to set the replication factor to 1 in the broker config.
In the meantime the workaround is simple: just pass customBrokerProperties to the EmbeddedKafkaConfig in your test as follows:

implicit val kafkaConfig = EmbeddedKafkaConfig(customBrokerProperties = Map(kafka.server.KafkaConfig.OffsetsTopicReplicationFactorProp -> "1"))

I have the same issue; the above solution from @claudio-scandura works if the line is in fact:

implicit val kafkaConfig = EmbeddedKafkaConfig(customBrokerProperties = Map(kafka.server.KafkaConfig.TransactionsTopicReplicationFactorProp -> "1"))

Note this is for the TRANSACTIONS topic (transaction.state.log.replication.factor) and not the OFFSETS topic (offsets.topic.replication.factor).

Once this is set, the error goes away, only to be replaced by a new one:

org.apache.kafka.common.errors.NotEnoughReplicasException: Number of insync replicas for partition __transaction_state-30 is [1], below required minimum [2]

The not-so-obvious fix is to also set:

kafka.server.KafkaConfig.TransactionsTopicMinISRProp -> "1"

This moved me on further...
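Putting the two overrides from this thread together, a config along these lines should get a transactional test running against a single embedded broker (the constants map to transaction.state.log.replication.factor and transaction.state.log.min.isr respectively):

import kafka.server.KafkaConfig
import net.manub.embeddedkafka.EmbeddedKafkaConfig

// A single embedded broker needs both transaction-state settings lowered to 1.
implicit val kafkaConfig: EmbeddedKafkaConfig = EmbeddedKafkaConfig(
  customBrokerProperties = Map(
    KafkaConfig.TransactionsTopicReplicationFactorProp -> "1", // transaction.state.log.replication.factor
    KafkaConfig.TransactionsTopicMinISRProp            -> "1"  // transaction.state.log.min.isr
  )
)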

Added this as a default behaviour, thanks @ryanworsley

Thanks @manub - for reasons that aren't clear, my stream processor doesn't process messages when exactly-once is enabled. I've been speaking to Matthias Sax on the confluentcommunity Slack channel and he's asked me to raise this Jira. It might be something you'd be interested in watching - probably I'm just doing something stupid though.