How does Kafka handle network partitions?

I haven't tested this, but I think the accepted answer is wrong and Lars Francke is correct about the possibility of split-brain.

A Zookeeper quorum requires a majority, so if the ZK ensemble is partitioned, at most one side can have a quorum.
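The majority argument can be sanity-checked with a toy function (names are mine, not a ZK API): two disjoint sides of a partition can never both hold a strict majority of the same ensemble.

```python
# Hypothetical sketch: a side of a partition has quorum iff it holds a
# strict majority of the ensemble; two disjoint sides can't both qualify.
def has_quorum(side_size: int, ensemble_size: int) -> bool:
    """True iff this side holds a strict majority of the ensemble."""
    return side_size > ensemble_size // 2

# A 5-node ensemble split 3/2: only the 3-node side keeps quorum.
print(has_quorum(3, 5), has_quorum(2, 5))  # → True False
```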

Being a controller requires an active session with ZK (an ephemeral znode registration). If the current controller is partitioned away from the ZK quorum, it should voluntarily stop considering itself a controller. This should take at most zookeeper.session.timeout.ms = 6000. The brokers still connected to the ZK quorum will then elect a new controller among themselves. (based on this: https://stackoverflow.com/a/52426734)

Being a topic-partition leader also requires an active session with ZK. A leader that has lost its connection to the ZK quorum should voluntarily stop being one. The elected controller will detect that some ex-leaders are missing and will assign new leaders from the replicas that are in the ISR and still connected to the ZK quorum.
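The controller's re-election step can be sketched as a toy function (my own simplification, not Kafka's implementation; I'm assuming ISR order is the preference order): pick the first ISR member that is still connected to the quorum side.

```python
# Hypothetical sketch: the controller picks a new leader from the replicas
# that are both in the ISR and still connected to the ZK quorum.
def elect_leader(isr, connected):
    for replica in isr:  # ISR order used as preference order (assumption)
        if replica in connected:
            return replica
    return None  # no eligible replica on this side

# Replica 1 (the ex-leader) is partitioned away; replica 2 takes over.
print(elect_leader(isr=[1, 2, 3], connected={2, 3}))  # → 2
```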

Now, what happens to producer requests received by the partitioned ex-leader during the ZK timeout window? There are a few possibilities.

If the producer's acks = all and the topic's min.insync.replicas = replication.factor, then all in-sync replicas should have exactly the same data. The ex-leader will eventually reject in-progress writes and producers will retry them, so the newly elected leader will not have lost any data. On the other hand, it won't be able to serve any write requests until the partition heals. It is up to the producers to decide whether to reject client requests or to keep retrying in the background for a while.
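A toy model of this configuration (my own, not Kafka code): with min.insync.replicas equal to replication.factor, a write is acknowledged only if every replica gets it, so the partitioned ex-leader cannot ack anything and no acknowledged record can be missing from the new leader.

```python
# Toy model of acks=all with min.insync.replicas == replication.factor:
# a write succeeds only if all replicas are reachable and append it.
def try_append(record, replicas, reachable, min_isr):
    acked = [r for r in replicas if r["id"] in reachable]
    if len(acked) < min_isr:
        raise RuntimeError("NotEnoughReplicas: producer will retry")
    for r in acked:
        r["log"].append(record)
    return True

replicas = [{"id": i, "log": []} for i in range(3)]
try_append("a", replicas, reachable={0, 1, 2}, min_isr=3)  # acknowledged
try:
    # Broker 0 is partitioned away from its followers: write is rejected.
    try_append("b", replicas, reachable={0}, min_isr=3)
except RuntimeError as e:
    print(e)  # the producer retries, eventually against the new leader
```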

Otherwise, it is very probable that the new leader will be missing up to zookeeper.session.timeout.ms + replica.lag.time.max.ms = 16000 ms worth of records, and they will be truncated from the ex-leader after the partition heals.
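The worst-case window above is just the sum of the two timeouts (defaults as given in the text): a follower may already lag up to replica.lag.time.max.ms before falling out of the ISR, and the ex-leader may keep accepting writes for up to zookeeper.session.timeout.ms after the partition starts.

```python
# Back-of-envelope check of the worst-case lost-records window,
# using the default values quoted in the answer above.
zookeeper_session_timeout_ms = 6_000
replica_lag_time_max_ms = 10_000

worst_case_window_ms = zookeeper_session_timeout_ms + replica_lag_time_max_ms
print(worst_case_window_ms)  # → 16000
```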

Let's say you expect network partitions longer than you are comfortable being read-only for.

Something like this can work:

  • you have 3 availability zones and expect that at most 1 zone will be partitioned from the other 2
  • in each zone you have a Zookeeper node (or a few), so that 2 zones combined can always form a majority
  • in each zone you have a bunch of Kafka brokers
  • each topic has replication.factor = 3, one replica in each availability zone, min.insync.replicas = 2
  • producers' acks = all

This way there should be two in-sync replicas on the ZK-quorum side of the network partition, at least one of them fully up to date with the ex-leader. So there is no data loss on the brokers, and the partition stays available for writes from any producers that can still connect to the winning side.
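The availability claim can be sanity-checked with a toy calculation (zone names are made up): with one replica per zone, replication.factor = 3 and min.insync.replicas = 2, losing any single zone still leaves enough in-sync replicas on the quorum side for acks=all writes.

```python
# Sketch of the availability argument for the 3-AZ layout described above:
# one replica per zone, any single zone may be partitioned away.
ZONES = {"az1", "az2", "az3"}
MIN_ISR = 2

def writable_after_losing(lost_zone):
    surviving_replicas = len(ZONES - {lost_zone})  # one replica per zone
    return surviving_replicas >= MIN_ISR

# Writes remain possible no matter which single zone is lost.
print(all(writable_after_losing(z) for z in ZONES))  # → True
```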


In a Kafka cluster, one of the brokers is elected to serve as the controller.

Among other things, the controller is responsible for electing new leaders. The Replica Management section covers this briefly: http://kafka.apache.org/documentation/#design_replicamanagment

Kafka uses Zookeeper to try to ensure there's only 1 controller at a time. However, the situation you described could still happen, splitting both the Zookeeper ensemble (assuming both sides can still have quorum) and the Kafka cluster in 2, resulting in 2 controllers.

In that case, Kafka has a number of configurations to limit the impact:

  • unclean.leader.election.enable: false by default, this prevents replicas that were not in-sync from ever becoming leaders. If no in-sync replicas are available, Kafka marks the partition as offline, preventing data loss
  • replication.factor and min.insync.replicas: For example, if you set them to 3 and 2 respectively, in case of a "split-brain" you can prevent producers from sending records to the minority side if they use acks=all
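The effect of unclean.leader.election.enable can be illustrated with a toy leader-election function (not Kafka's implementation; names are mine): with the flag off, a partition with no live in-sync replica goes offline instead of electing a stale replica.

```python
# Toy illustration of unclean.leader.election.enable: when no live replica
# is in the ISR, a clean election returns no leader (partition offline),
# while an unclean election promotes a stale replica and may lose records.
def pick_leader(live, isr, unclean_ok):
    in_sync = [r for r in live if r in isr]
    if in_sync:
        return in_sync[0]
    if unclean_ok and live:
        return live[0]  # stale replica: acknowledged records may be lost
    return None  # partition marked offline, no data loss

print(pick_leader(live=[4, 5], isr=[1, 2], unclean_ok=False))  # → None
print(pick_leader(live=[4, 5], isr=[1, 2], unclean_ok=True))   # → 4
```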

See also KIP-101 for the details about handling logs that have diverged once the cluster is back together.