
Running into LeaderNotAvailableException when using Kafka 0.8.1 with Zookeeper 3.4.6

copycodes 2020. 11. 15. 11:25



I installed the stable version of Kafka (0.8.1 with 2.9.2 Scala) as per the website and am running it with a 3-node Zookeeper ensemble (3.4.6). I tried to create a test topic, but I keep seeing that no leader is assigned to the topic's partition:

[kafka_2.9.2-0.8.1]$ ./bin/kafka-topics.sh --zookeeper <zookeeper_ensemble> --describe --topic test-1
Topic:test-1    PartitionCount:1    ReplicationFactor:3 Configs:
    Topic: test-1   Partition: 0    Leader: none    Replicas: 0,1,2 Isr:

I tried to write to the topic anyway using the console producer, but got a LeaderNotAvailableException:

[kafka_2.9.2-0.8.1]$ ./kafka-console-producer.sh --broker-list <broker_list> --topic test-1

hello world

[2014-04-22 11:58:48,297] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,321] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,322] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test-1 (kafka.producer.async.DefaultEventHandler)

[2014-04-22 11:58:48,445] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,467] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,467] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test-1 (kafka.producer.async.DefaultEventHandler)

[2014-04-22 11:58:48,590] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,612] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,612] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test-1 (kafka.producer.async.DefaultEventHandler)

[2014-04-22 11:58:48,731] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,753] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,754] ERROR Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: test-1 (kafka.producer.async.DefaultEventHandler)

[2014-04-22 11:58:48,876] WARN Error while fetching metadata [{TopicMetadata for topic test-1 -> 
No partition metadata for topic test-1 due to kafka.common.LeaderNotAvailableException}] for topic [test-1]: class kafka.common.LeaderNotAvailableException  (kafka.producer.BrokerPartitionInfo)

[2014-04-22 11:58:48,877] ERROR Failed to send requests for topics test-1 with correlation ids in [0,8] (kafka.producer.async.DefaultEventHandler)

[2014-04-22 11:58:48,878] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
    at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
    at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
    at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
    at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
    at scala.collection.immutable.Stream.foreach(Stream.scala:547)
    at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
    at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)

I should also mention that this was working for a few days at first, and then suddenly every topic that was created had this missing-leader problem.


Kafka uses an external coordination framework (by default Zookeeper) to maintain its configuration. It looks like the configuration is now out of sync with the Kafka log data. In that case, I would remove the affected topic data and the related Zookeeper data.

For a test environment:

  1. Stop Kafka-server and Zookeeper-server.
  2. Remove the data directories of both services; by default they are /tmp/kafka-log and /tmp/zookeeper.
  3. Start Kafka-server and Zookeeper-server again.
  4. Create a new topic.

Now you can work with the topic again.
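
A rough sketch of those four steps on a single test box; the stop/start scripts assume a stock Kafka 0.8.1 download running its bundled Zookeeper, and the data paths are the defaults mentioned above, so adjust both to your own layout and ensemble:

# 1. stop both services
./bin/kafka-server-stop.sh
./bin/zookeeper-server-stop.sh

# 2. wipe the default data directories
rm -rf /tmp/kafka-log /tmp/zookeeper

# 3. start them again
./bin/zookeeper-server-start.sh config/zookeeper.properties &
./bin/kafka-server-start.sh config/server.properties &

# 4. recreate the topic
./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test-1 --partitions 1 --replication-factor 1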

For a production environment:

As Kafka topics are stored in different directories, you should remove the particular ones. You should also remove /brokers/{broker_id}/topics/{broken_topic} from Zookeeper using a Zookeeper client.
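
A minimal sketch of that cleanup using the CLI that ships with Zookeeper 3.4.x; the topic name is just the example from this question, so inspect the actual znode layout with ls before deleting anything:

./bin/zkCli.sh -server <zookeeper_ensemble>
# inside the Zookeeper shell:
ls /brokers/topics                  # see which topic znodes exist
rmr /brokers/topics/test-1          # 'rmr' deletes a znode and its children in 3.4.x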

Please read the Kafka documentation carefully to verify the configuration structure before you do anything drastic. Kafka is rolling out a topic deletion feature (KAFKA-330) so that this kind of problem can be solved more easily.


I had the same issue. It turns out that Kafka requires the machine's hostname to be resolvable in order to connect back to itself.

I updated the hostname on my machine and, after restarting Zookeeper and Kafka, the topic could be written to correctly.


I solved this problem by adding an entry to /etc/hosts mapping 127.0.0.1 to the fully qualified host name:

127.0.0.1       x4239433.your.domain.com x4239433

Producer and consumer started working fine.
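
A quick way to verify that the mapping took effect (the host name is the illustrative one from the entry above):

getent hosts x4239433.your.domain.com   # should print 127.0.0.1
hostname -f                             # should match the name in /etc/hosts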


I had the same problem. In the end I had to stop the Kafka nodes, then follow the advice here on how to delete Kafka topics. Once I had got rid of the broken topics, I was able to start Kafka again successfully.

I would like to know if there is a better approach, and how to avoid this happening in the future.


I ran into this issue a few times and finally figured out why it kept happening, so I will add my findings here as well. I am on a Linux VM. The short answer: the problem occurred because the VM had been given a new IP. If you look at your config files and open server.properties, you will see this line:

advertised.host.name=xx.xx.xx.xxx or localhost

Make sure this IP matches your current IP (you can check your IP here).

Once I fixed this, everything started working properly. I am using version 0.9.0.0.

I hope this helps someone.
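
A small sketch of that check; the property name applies to older brokers (newer versions use advertised.listeners instead), and the grep pattern is only illustrative:

ip addr show                                   # note the VM's current IP
grep advertised config/server.properties       # compare with the configured value
# if they no longer match, edit server.properties and restart the broker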


I had the same problem and solved it by switching the JDK from 1.7 to 1.6.


I had the same issue. Make sure there is at least one topic on each partition your consumers/producers use. Zookeeper will not find a leader for a partition if there are no topics using that partition.


It was a problem with the JDK.

I had installed OpenJDK:

java version "1.7.0_51"
OpenJDK Runtime Environment (IcedTea 2.4.4) (7u51-2.4.4-0ubuntu0.12.04.2)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)

But I changed that to the Oracle JDK (follow this link: http://www.webupd8.org/2012/06/how-to-install-oracle-java-7-in-debian.html):

java version "1.7.0_80" Java(TM) SE Runtime Environment (build
1.7.0_80-b15) Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

Now it works fine. Hope this helps.


So, one more possible answer: the IP address in advertised.host.name in the Kafka config/server.properties may be mistyped with an extra space.

In my case:

advertised.host.name=10.123.123.211_\n (where _ is an extra space)

instead of the correct

advertised.host.name=10.123.123.211\n

For some reason this had worked for 6 months without issues; presumably some library update removed the lenient lookup that used to trim the extra space off the IP address.

A simple fix of the config file and restart of kafka solves this problem.
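
A quick, illustrative check for this kind of trailing whitespace (GNU grep and coreutils assumed):

grep -nE ' +$' config/server.properties               # flag lines ending with a stray space
cat -A config/server.properties | grep advertised     # '$' marks the true end of each line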


I faced exactly the same problem when I was trying to play with Kafka on my local system (OS X El Capitan). The problem was with my Zookeeper: it was not referring to the correct config file. Restart Zookeeper, then Kafka, and execute the following command. Check whether the Leader is None; if it is, delete that topic and re-create it.

kafka-topics --zookeeper localhost:2181 --describe --topic pytest

Output will be like

Topic:pytest    PartitionCount:1    ReplicationFactor:1 Configs:
Topic: pytest   Partition: 0    Leader: 0   Replicas: 0 Isr: 0

I hope this helps.
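
A sketch of the delete-and-recreate step mentioned above; delete.topic.enable=true must be set on the broker for the delete to actually happen, and the topic name and counts are just the ones from this example:

kafka-topics --zookeeper localhost:2181 --delete --topic pytest
kafka-topics --zookeeper localhost:2181 --create --topic pytest --partitions 1 --replication-factor 1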


I faced the issue with the Kafka and Zookeeper pods in OpenShift, where Kafka was TLS-enabled. I had to add the environment variables below to Kafka:

  • KAFKA_ZOOKEEPER_CONNECT

  • KAFKA_SSL_KEYSTORE_LOCATION

  • KAFKA_SSL_TRUSTSTORE_LOCATION

  • KAFKA_SSL_KEYSTORE_PASSWORD

  • KAFKA_SSL_TRUSTSTORE_PASSWORD

  • KAFKA_ADVERTISED_LISTENERS

  • KAFKA_INTER_BROKER_LISTENER_NAME

  • KAFKA_LISTENERS

After setting the variables, I had to delete and recreate the pods to get it working.
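
For illustration only, a sketch of what those variables might look like; every value here is a placeholder, the names assume a Confluent-style container image, and the keystore/truststore paths must point at secrets mounted into the pod:

KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
KAFKA_LISTENERS=SSL://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS=SSL://kafka.myproject.svc:9092
KAFKA_INTER_BROKER_LISTENER_NAME=SSL
KAFKA_SSL_KEYSTORE_LOCATION=/etc/kafka/secrets/kafka.keystore.jks
KAFKA_SSL_KEYSTORE_PASSWORD=<keystore-password>
KAFKA_SSL_TRUSTSTORE_LOCATION=/etc/kafka/secrets/kafka.truststore.jks
KAFKA_SSL_TRUSTSTORE_PASSWORD=<truststore-password>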


Add "advertised.host.name=localhost" in config/server.properties and restart the Kafka server. It worked for me

Reference URL: https://stackoverflow.com/questions/23228222/running-into-leadernotavailableexception-when-using-kafka-0-8-1-with-zookeeper-3
