Second and Third Distributed Kafka Connector workers failing to work correctly

I'm posting an answer to an old question, since Kafka Connect has moved on a lot in three years.

In the latest version (2.3.1) there is incremental cooperative rebalancing, which massively improves the behaviour of Kafka Connect.
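
As far as I can tell this is governed by the worker-level connect.protocol setting, which in 2.3 already defaults to the value that enables the new behaviour, so the snippet below is just a sketch of where it lives rather than something you normally have to change:

    # connect-distributed.properties (sketch)
    # 'compatible' is the 2.3 default; it enables incremental cooperative
    # rebalancing once every worker in the cluster supports it.
    connect.protocol=compatible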

It's also worth noting that when configuring Kafka Connect in distributed mode, rest.advertised.host.name must be set correctly; if it isn't, you will see errors including the one quoted:

{"error_code":500,"message":"IO Error trying to forward REST request: Connection refused"}

See this post for more details.


I've encountered a similar issue in the same situation as yours.

  1. tasks.max is configured per connector, and the distributed workers automatically decide which nodes run its tasks. So, if you have 3 workers in a cluster and your connector configuration says tasks.max=2, then only 2 of the 3 workers will be doing the work. In theory, if one of those workers fails, the 3rd should pick up the workload (see the sketch after this list). But..
  2. The distributed connector turned out to be very unreliable: once you add/remove some nodes, the cluster broke down and all the workers did nothing but try to elect a leader, and fail. The only way to fix it was to restart the whole cluster, preferably all the workers simultaneously.
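
For illustration only, here is the kind of connector config I mean, using the FileStreamSink connector that ships with Kafka; the connector name, topic and file path are made up, but tasks.max=2 is the setting that caps the work at two tasks no matter how many workers you run:

    {
      "name": "example-sink",
      "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
        "tasks.max": "2",
        "topics": "example-topic",
        "file": "/tmp/example-sink.txt"
      }
    }

In distributed mode you would submit it with something like curl -X POST -H "Content-Type: application/json" --data @example-sink.json http://localhost:8083/connectors.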

I chose another way: I used standalone workers, and it works like a charm for me, because load distribution is handled at the Kafka client (consumer group) level. Once a worker drops, the group rebalances automatically and the remaining clients pick up the partitions that are left unassigned.
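
If it's useful, this is roughly what I mean by that setup (the file names are my own; connect-standalone.sh ships with Kafka). Each node runs its own standalone worker with the same sink connector config, and because a sink connector's tasks consume through an ordinary consumer group (connect-<connector-name> by default, as far as I know), Kafka redistributes the partitions when one node drops:

    # connect-standalone.properties (sketch of the per-node worker config)
    bootstrap.servers=kafka1:9092
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    offset.storage.file.filename=/tmp/connect.offsets

    # one worker per node, same connector config on each
    bin/connect-standalone.sh connect-standalone.properties my-sink-connector.properties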

PS. Maybe this will be useful for you too: the Confluent connector does not tolerate an invalid payload that does not match the topic's schema. Once the connector gets such an invalid message it silently dies, and the only way to find that out is to analyse the metrics.
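
As an aside, and only if your Connect version is new enough to have the error-handling options added in Kafka 2.0 (I haven't checked them against every Confluent connector), the connector config can be told to skip bad records and route them to a dead-letter topic instead of dying silently; a sketch:

    # added to a sink connector's configuration; the topic name is a placeholder
    errors.tolerance=all
    errors.log.enable=true
    errors.deadletterqueue.topic.name=example-sink-dlq
    errors.deadletterqueue.context.headers.enable=true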