Why could Kafka warn "partitions have leader brokers without a matching listener"?

As stated in the comments to your question, the problem seems to be with the advertised name of the Kafka broker. According to your docker-compose file you should be using 192.168.23.134, but your email-service is using kafka:9092. You can try with this docker-compose file instead; I replaced the wurstmeister services with the latest Zookeeper and Kafka images provided by confluentinc and added your email-service:

---
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  email-service:
    build: ./email-service
    environment:
      SPRING_KAFKA_BOOTSTRAPSERVERS: kafka:29092
    ports:
      - "8081:8081"
    depends_on:
      - kafka
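
Once the stack is up, you can sanity-check both advertised listeners with commands like the following (a sketch; service and port names are taken from the compose file above). Inside the Docker network, clients should reach the broker as kafka:29092; from the host, as localhost:9092.

# from inside the Docker network
docker-compose exec kafka kafka-topics --bootstrap-server kafka:29092 --list

# from the host (assumes a local Kafka installation on the PATH)
kafka-topics --bootstrap-server localhost:9092 --list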

advertised.listeners: Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners it is not valid to advertise the 0.0.0.0 meta-address.

Please note that KAFKA_ADVERTISED_HOST_NAME has been deprecated, and it's recommended to use KAFKA_ADVERTISED_LISTENERS instead. For more information about KAFKA_ADVERTISED_LISTENERS, check here.
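
For reference, the confluentinc images translate the KAFKA_-prefixed environment variables above into server.properties entries. A sketch of the equivalent broker configuration (the listeners line, which the compose file leaves implicit, is an assumption):

# what the broker binds to
listeners=PLAINTEXT://0.0.0.0:29092,PLAINTEXT_HOST://0.0.0.0:9092
# what clients are told to connect to; must be reachable as written
advertised.listeners=PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
inter.broker.listener.name=PLAINTEXT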


This is Apache Kafka 2.4.0.

I'm sharing the low-level, code-based findings to shed more light on when this WARN message could be printed out and why. It is certainly a sign of a misconfigured Kafka cluster. Read on, and comment if there's something missing. Thanks!


The WARN message is printed out when the DefaultMetadataUpdater (of NetworkClient) is requested to handle a completed metadata response.

[count] partitions have leader brokers without a matching listener, including [partitions]

It is a warning that corresponds to Errors.LISTENER_NOT_FOUND, which has the following default exception text:

There is no listener on the leader broker that matches the listener on which metadata request was processed.

That's on the client side.
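
Any client that fetches metadata can surface the warning. For instance, a plain console consumer pointed at a cluster whose partition leader lacks the matching listener should print something like the following (the topic name ssl is only an example; the log line follows the format above):

kafka-console-consumer --bootstrap-server localhost:9092 --topic ssl --from-beginning
# expected in the client logs:
# WARN 1 partitions have leader brokers without a matching listener, including [ssl-0]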

Digging deeper, you can find that this Errors.LISTENER_NOT_FOUND is used on a Kafka broker when MetadataCache is requested to find partition metadata. That's where, just before the error is returned, this DEBUG message is logged:

Error while fetching metadata for [topicPartition]: listener [listenerName] not found on leader [leaderBrokerId]

Simply turn on the DEBUG logging level for the kafka.server.MetadataCache logger and you should see the message in the logs of the broker that served the metadata request.
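
A minimal sketch of two ways to enable that, assuming a stock Apache Kafka 2.4 distribution:

# statically, in config/log4j.properties (restart required)
log4j.logger.kafka.server.MetadataCache=DEBUG

# dynamically, without a restart, via broker loggers (KIP-412; broker id 0 assumed)
kafka-configs --bootstrap-server localhost:9092 --alter \
  --entity-type broker-loggers --entity-name 0 \
  --add-config kafka.server.MetadataCache=DEBUG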

In this particular case, this MetadataCache is used by a broker (via KafkaApis) to handle a TopicMetadata request, where the code comments say:

// In versions 5 and below, we returned LEADER_NOT_AVAILABLE if a matching listener was not found on the leader.

// From version 6 onwards, we return LISTENER_NOT_FOUND to enable diagnosis of configuration errors.

And at that moment, it's clear that the WARN message in question concerns the listener name (listenerName) of the connection on which the metadata request arrived.


In my case, when I was debugging the issue, it turned out that I was using SSL://:9093 to connect to a Kafka broker while the partition leader was not available and was not configured with a matching SSL endpoint in its listeners configuration property.
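
To illustrate the kind of mismatch (a hypothetical two-broker setup, not my exact configuration), the client connects over SSL://:9093 but the partition leader exposes no SSL listener at all:

# broker 0 (the partition leader) - misconfigured, no SSL listener
listeners=PLAINTEXT://:9092
# broker 1 - has the SSL listener the client is using
listeners=PLAINTEXT://:9092,SSL://:9093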

I used kafka-topics to review the partition configuration and then reviewed the state of partitions in ZooKeeper.
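
The commands looked roughly like this (flags assumed; the topic name ssl comes from the ZooKeeper path below):

kafka-topics --bootstrap-server localhost:9092 --describe --topic ssl

# then, inside a ZooKeeper CLI session:
zookeeper-shell localhost:2181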

get /brokers/topics/ssl/partitions/0/state
{"controller_epoch":1,"leader":0,"version":1,"leader_epoch":0,"isr":[0]}

At one point I had -1 for the leader, while the isr still showed a broker that was simply misconfigured. That's why people have reported fixing the issue by restarting their clusters (to get all the brokers up and running) or by changing the broker ID back to the one that worked previously.