Commit a273e28e authored by Doug Szumski

Set Kafka default replication factor

This ensures that, when automatic Kafka topic creation is used with more than one
node in the Kafka cluster, all partitions in the topic are automatically
replicated. When a single node goes down in a cluster of three or more nodes, these
topics will continue to accept writes provided there are at least two in-sync replicas.

In a two-node cluster, no failures are tolerated. In a three-node cluster, only a
single node failure is tolerated. In larger clusters the configuration may need
manual tuning.

This configuration follows advice given here:

[1] https://docs.cloudera.com/documentation/kafka/1-2-x/topics/kafka_ha.html#xd_583c10bfdbd326ba-590cb1d1-149e9ca9886--6fec__section_d2t_ff2_lq
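
For illustration only (a sketch, not part of the change itself), the templated
settings in the diff below evaluate as follows, assuming kafka_broker_count equals
the number of Kafka brokers:

  kafka_broker_count = 1:  min.insync.replicas=1, default.replication.factor=1  (single copy, no redundancy)
  kafka_broker_count = 2:  min.insync.replicas=2, default.replication.factor=2  (replicated, but no broker failure tolerated for writes)
  kafka_broker_count >= 3: min.insync.replicas=2, default.replication.factor=3  (one broker failure tolerated for writes)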

Closes-Bug: #1888522

Change-Id: I7d38c6ccb22061aa88d9ac6e2e25c3e095fdb8c3
parent 61e32bb1
@@ -8,6 +8,7 @@ socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/var/lib/kafka/data
min.insync.replicas={{ kafka_broker_count if kafka_broker_count|int < 3 else 2 }}
default.replication.factor={{ kafka_broker_count if kafka_broker_count|int < 3 else 3 }}
num.partitions=30
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor={{ kafka_broker_count if kafka_broker_count|int < 3 else 3 }}
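
Once this change is deployed, an operator can sanity-check that a newly auto-created
topic picks up the expected replication factor using the kafka-topics.sh tool shipped
with Kafka. A minimal sketch; the broker address and topic name are placeholders, and
older Kafka releases take --zookeeper instead of --bootstrap-server:

  kafka-topics.sh --bootstrap-server kafka1:9092 --describe --topic example-topic

The output includes the topic's ReplicationFactor and, for each partition, the
replica set and the in-sync replica (Isr) set.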
---
fixes:
  - |
    Fixes an issue where, when automatic Kafka topic creation was used to
    create a Kafka topic, no redundant replicas were created in a multi-node
    cluster. `LP#1888522 <https://launchpad.net/bugs/1888522>`__. This affects
    Monasca, which uses Kafka, and was previously masked by the legacy Kafka
    client used by Monasca, which has since been upgraded in Ussuri. Monasca
    users with multi-node Kafka clusters should consult the Kafka
    `documentation <https://kafka.apache.org/documentation/>`__ to increase
    the number of replicas.
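
For topics that already exist, the new default does not apply retroactively; the
kafka-reassign-partitions.sh tool shipped with Kafka can raise their replica count.
A minimal sketch, with a hypothetical topic name, partition list, broker IDs and
bootstrap address (older Kafka releases take --zookeeper instead of
--bootstrap-server):

  # Assign three replicas to each partition of the topic (one entry per partition).
  cat > increase-replication-factor.json <<'EOF'
  {"version": 1,
   "partitions": [{"topic": "metrics", "partition": 0, "replicas": [1, 2, 3]},
                  {"topic": "metrics", "partition": 1, "replicas": [2, 3, 1]}]}
  EOF

  kafka-reassign-partitions.sh --bootstrap-server kafka1:9092 \
    --reassignment-json-file increase-replication-factor.json --execute

  # Re-run with --verify to confirm the reassignment has completed.
  kafka-reassign-partitions.sh --bootstrap-server kafka1:9092 \
    --reassignment-json-file increase-replication-factor.json --verify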