Kafka brokers not starting up - apache-kafka

I previously had a 2-broker Kafka 0.10.1 cluster running correctly on my development servers with ZooKeeper 3.3.6.
I recently tried upgrading the brokers to the latest Kafka, 2.3.0, but they didn't start.
Very little changed in the configuration.
Can anybody point me at what could possibly be going wrong? Why are the brokers not starting?
Changed server.properties on broker server 1
broker.id=1
log.dirs=/mnt/kafka_2.11-2.3.0/logs
zookeeper.connect=local1:2181,local2:2181
listeners=PLAINTEXT://local1:9092
advertised.listeners=PLAINTEXT://local1:9092
Changed server.properties on broker server 2
broker.id=2
log.dirs=/mnt/kafka_2.11-2.3.0/logs
zookeeper.connect=local1:2181,local2:2181
listeners=PLAINTEXT://local2:9092
advertised.listeners=PLAINTEXT://local2:9092
NOTE:
1. Zookeeper is running on both servers.
2. The Kafka directories, namely /brokers, /brokers/ids, /consumers etc., are getting created.
3. Nothing is getting registered under /brokers/ids (see the check sketched after this list). The Zookeeper CLI get /brokers/ids returns
[]
4. The command lsof -i tcp:9092 returns
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 18290 cass 118u IPv6 52133 0t0 TCP local2:9092 (LISTEN)
5. logs/server.log has no errors logged.
6. No more logs are getting appended to server.log.
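One way to run the check from note 3 is the ZooKeeper shell bundled with Kafka; a minimal sketch, assuming the 2.3.0 tarball layout above and that ZooKeeper answers on local1:2181:
# list registered broker ids; an empty list means no broker has registered yet
bin/zookeeper-shell.sh local1:2181 ls /brokers/ids
# inspect a specific registration once it shows up
bin/zookeeper-shell.sh local1:2181 get /brokers/ids/1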
Server logs
[2019-07-01 10:56:14,534] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-07-01 10:56:14,801] INFO Awaiting socket connections on local2:9092. (kafka.network.Acceptor)
[2019-07-01 10:56:14,829] INFO [SocketServer brokerId=1] Created data-plane acceptor and processors for endpoint : EndPoint(local2,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2019-07-01 10:56:14,830] INFO [SocketServer brokerId=1] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2019-07-01 10:56:14,850] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,851] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,851] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,852] INFO [ExpirationReaper-1-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-07-01 10:56:14,860] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-07-01 10:56:14,892] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)

From the docs regarding ZooKeeper
Stable version
The current stable branch is 3.4 and the latest release of that branch is 3.4.9.
Upgrading the ZooKeeper version to the latest 3.5.5 helped, and the Kafka broker started correctly.
It would have been great if the docs had stated the incompatibility with the previous ZooKeeper version.
PS: Answer added to help anyone stuck with a similar issue because of the ZooKeeper version.
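A quick way to confirm which ZooKeeper version is actually serving is the four-letter-word commands; a sketch, assuming nc is available and the command is whitelisted (3.5.x whitelists only srvr by default, via 4lw.commands.whitelist):
echo srvr | nc local1 2181   # first line reports "Zookeeper version: ...", followed by mode and connection stats
echo srvr | nc local2 2181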

Related

Kafka and Zookeeper not working, giving an error (Kafka shutting down, and "ZooKeeper audit is disabled" despite enabling it)

I am a beginner and I have to use Kafka for data transfer into/from Hadoop FS (or any other application, not just through put or copyFromLocal commands). Kafka needs ZooKeeper as well; I enabled ZooKeeper audit logging but I still get errors.
For Kafka, when I want to start it:
JMX_PORT=8004 bin/kafka-server-start.sh config/server.properties
I get the error:
[2022-02-16 13:56:45,939] INFO shutting down (kafka.server.KafkaServer)
[2022-02-16 13:56:46,114] INFO App info kafka.server for 0 unregistered (org.apache.kafka.common.utils.AppInfoParser)
[2022-02-16 13:56:46,133] INFO shut down completed (kafka.server.KafkaServer)
[2022-02-16 13:56:46,133] ERROR Exiting Kafka. (kafka.Kafka$)
[2022-02-16 13:56:46,165] INFO shutting down (kafka.server.KafkaServer)
And when I want to start Zookeeper using the command:
bin/zookeeper-server-start.sh config/zookeeper.properties
I get the following (and it gets stuck on it):
[2022-02-16 14:03:13,954] INFO zookeeper.request_throttler.shutdownTimeout = 10000 (org.apache.zookeeper.server.RequestThrottler)
[2022-02-16 14:03:13,955] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
[2022-02-16 14:03:14,136] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
[2022-02-16 14:03:14,138] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
Does anyone know how to work this out? I enabled audit logging, but the problem is still the same.
The ZooKeeper server isn't "stuck"; it's waiting for connections.
Open a new terminal and start Kafka there.
Alternatively, you could use Docker Compose / Kubernetes, if you think your host / local JVM is causing issues.
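A minimal sketch of that startup order, reusing the commands from the question (ZooKeeper deliberately stays in the foreground, which is why it looks "stuck"):
# terminal 1: ZooKeeper keeps running and waits for clients on 2181
bin/zookeeper-server-start.sh config/zookeeper.properties
# terminal 2: start the broker once ZooKeeper is listening
JMX_PORT=8004 bin/kafka-server-start.sh config/server.properties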

Can't change kafka broker-id in Incubator Helm chart?

I have one Zookeeper server (say xx.xx.xx.xxx:2181) running on one GCP Compute Instance VM separately.
I have 3 GKE clusters, all in different regions, on which I am trying to install Kafka broker nodes so that all nodes connect to one Zookeeper server (xx.xx.xx.xxx:2181).
I installed the Zookeeper server on the VM following this guide with zookeeper properties looking like below:
dataDir=/tmp/data
clientPort=2181
maxClientCnxns=0
initLimit=5
syncLimit=2
tickTime=2000
# list of servers
server.1=0.0.0.0:2888:3888
I am using this Incubator Helm Chart to deploy the brokers on GKE clusters.
As per the README.md I am trying to install with the below command:
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name my-kafka \
--set replicas=1,zookeeper.enabled=false,configurationOverrides."broker\.id"=1,configurationOverrides."zookeeper\.connect"="xx.xx.xx.xxx:2181" \
incubator/kafka
Error
When I deploy using any of the ways described above on all three GKE clusters, only one of the brokers gets connected to the Zookeeper server and the other two pods just restart infinitely.
When I check the Zookeeper log (on the VM), it looks something like below:
...
[2019-10-30 14:32:30,930] INFO Accepted socket connection from /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-10-30 14:32:30,936] INFO Client attempting to establish new session at /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-10-30 14:32:30,938] INFO Established session 0x100009621af0057 with negotiated timeout 6000 for client /xx.xx.xx.xxx:54978 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-10-30 14:32:32,335] INFO Got user-level KeeperException when processing sessionid:0x100009621af0057 type:create cxid:0xc zxid:0x422 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:34,472] INFO Got user-level KeeperException when processing sessionid:0x100009621af0057 type:create cxid:0x14 zxid:0x424 txntype:-1 reqpath:n/a Error Path:/brokers/ids/0 Error:KeeperErrorCode = NodeExists for /brokers/ids/0 (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:35,126] INFO Processed session termination for sessionid: 0x100009621af0057 (org.apache.zookeeper.server.PrepRequestProcessor)
[2019-10-30 14:32:35,127] INFO Closed socket connection for client /xx.xx.xx.xxx:54978 which had sessionid 0x100009621af0057 (org.apache.zookeeper.server.NIOServerCnxn)
[2019-10-30 14:36:49,123] INFO Expiring session 0x100009621af003b, timeout of 6000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
...
I am sure I have created firewall rules to open the necessary ports, and that is not the problem, because one of the broker nodes is able to connect (the one that reaches ZooKeeper first).
To me, this seems like the brokerId is not getting changed for some reason, and that is why ZooKeeper is rejecting the connections.
I say this because kubectl logs pod/my-kafka-n outputs something like below:
...
[2019-10-30 19:56:24,614] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
...
[2019-10-30 19:56:24,627] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
...
As we can see, the logs above say brokerId=0 for all of the pods in all 3 clusters.
However, when I do kubectl exec -ti pod/my-kafka-n -- env | grep BROKER, I can see the environment variable KAFKA_BROKER_ID is set to 1, 2 and 3 for the different brokers, as I configured.
What am I doing wrong? What is the correct way to change the kafka-broker id or to make all brokers connect to one Zookeeper instance?
make all brokers connect to one Zookeeper instance?
Seems like you are doing that okay via the configurationOverrides option. That'll deploy all pods with the same configuration.
That being said, the broker ID should not be the same per pod. If you inspect the StatefulSet YAML, it appears that the broker ID is calculated based on the POD_NAME variable, roughly as sketched below.
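A hypothetical sketch of that idea (not the chart's exact script): the StatefulSet gives each pod a stable ordinal suffix, and the init logic strips it off to use as the broker id.
# my-kafka-0, my-kafka-1, my-kafka-2 -> broker ids 0, 1, 2
POD_NAME=$(hostname)                     # e.g. my-kafka-1
export KAFKA_BROKER_ID=${POD_NAME##*-}   # keep only the ordinal after the last "-"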
Sidenote
3 GKE clusters all in different regions on which I am trying to install Kafka broker nodes so that all nodes connect to one Zookeeper server
It's not clear to me how you would be able to deploy to 3 separate clusters in one API call. But this architecture isn't recommended by the Kafka, ZooKeeper, or Kubernetes communities unless these regions are "geographically close".

kafka is failing to start because cluster/id is deleted from Zookeeper

Apache Kafka is failing to start. It shows in its logs "Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper."
How can I check the "/cluster/id"?
Previously, when Kafka failed to start, it was because I had updated Java on my server, so I needed to re-point $JAVA_HOME to the new path and restart Kafka, and it would work again just fine. But this case is different: I checked $JAVA_HOME and it's correct. So I wanted to know more details about this issue. I read one of the log files in /opt/kafka/logs and I got this log:
[2019-08-21 10:18:40,472] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.
at kafka.zk.KafkaZkClient.$anonfun$createOrGetClusterId$1(KafkaZkClient.scala:1498)
at scala.Option.getOrElse(Option.scala:138)
at kafka.zk.KafkaZkClient.createOrGetClusterId(KafkaZkClient.scala:1498)
at kafka.server.KafkaServer.$anonfun$getOrGenerateClusterId$1(KafkaServer.scala:390)
at scala.Option.getOrElse(Option.scala:138)
at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:390)
at kafka.server.KafkaServer.startup(KafkaServer.scala:208)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:75)
at kafka.Kafka.main(Kafka.scala)
[2019-08-21 10:18:40,476] INFO shutting down (kafka.server.KafkaServer)
[2019-08-21 10:18:40,488] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2019-08-21 10:18:40,500] INFO Session: 0x100494a8714000a closed (org.apache.zookeeper.ZooKeeper)
[2019-08-21 10:18:40,505] INFO EventThread shut down for session: 0x100494a8714000a (org.apache.zookeeper.ClientCnxn)
[2019-08-21 10:18:40,507] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2019-08-21 10:18:40,517] INFO shut down completed (kafka.server.KafkaServer)
[2019-08-21 10:18:40,517] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2019-08-21 10:18:40,554] INFO shutting down (kafka.server.KafkaServer)
When I read the previous error,
Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.
I thought that the ZooKeeper id in the path /tmp/zookeeper/myid had been deleted by mistake, but the file is still there with the corresponding server number written in it, and ZooKeeper is working fine. My Kafka version is 2.2.0.
I searched online for this error, "Failed to get cluster id from Zookeeper. This can happen if /cluster/id is deleted from Zookeeper.",
but honestly, I did not find anything helpful.
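One way to inspect the znode the error refers to is the ZooKeeper shell bundled with Kafka; a sketch, assuming ZooKeeper is reachable on localhost:2181:
# prints the stored JSON (e.g. {"version":"1","id":"<cluster id>"}), or a no-node error if it really was deleted
bin/zookeeper-shell.sh localhost:2181 get /cluster/id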

I can't run zookeeper

I am new to the Kafka world.
I want to start ZooKeeper, but when I type
bin/zookeeper-server-start.sh config/zookeeper.properties
I get the following error:
ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.net.BindException: Address already in use
ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.net.BindException: Address already in use
Then I tried netstat -nlp|grep 2181
but no owning process is shown for the listener:
tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN -
Some light, please.
For this case,
you need to see whether ZooKeeper is running or not.
Use the command below:
sudo lsof -i :2181
You will get output like the following:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 1005 zookeeper 33u IPv6 17209 0t0 TCP *:2181 (LISTEN)
java 1005 zookeeper 34u IPv6 327225 0t0 TCP localhost:2181->localhost:43566 (ESTABLISHED)
java 22585 root 88u IPv6 324552 0t0 TCP localhost:43566->localhost:2181 (ESTABLISHED)
Now kill that ZooKeeper process so you can start it again:
sudo kill -9 1005
Then use the command below to start ZooKeeper:
bin/zookeeper-server-start.sh config/zookeeper.properties
It sounds like the ZooKeeper server is already running.
Try:
bin/zkServer.sh stop from the zookeeper directory to shut it down, and then:
bin/zookeeper-server-start.sh config/zookeeper.properties
from the kafka directory.
That fixed my issue.
There must be some stale process using port 2181. I had the same issue. First I checked the status of the server:
/usr/share/zookeeper$ bin/zkServer.sh status
or
/usr/share/zookeeper$ echo status | nc 127.0.0.1 2181
Then I tried to start Kafka and it failed with the same error. I changed permissions and ran it with sudo; that didn't work either.
Since I couldn't see any process using the port, I restarted my computer and it worked!
Check whether ZooKeeper is already running by using this command:
bin/kafka-topics.sh --list --zookeeper localhost:2181
If you get a list of topics back, that means ZooKeeper was already running.
So first verify whether ZooKeeper is already running or not.
If you are running the command bin/zookeeper-server-start.sh config/zookeeper.properties
and getting the error:
ERROR Unexpected exception, exiting abnormally (org.apache.zookeeper.server.ZooKeeperServerMain)
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
then port 2181 on your machine is already in use by another ZooKeeper instance.
So in Kafka's zookeeper.properties, change the clientPort value to a port that is not in use, like 5181.
Run the command again and ZooKeeper will start working.
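If you go that route, the broker has to be pointed at the new port as well; a minimal sketch, assuming a single-node setup, GNU sed, and both files under config/:
# move ZooKeeper off 2181 and keep the broker's connection string in sync
sed -i 's/^clientPort=.*/clientPort=5181/' config/zookeeper.properties
sed -i 's/^zookeeper.connect=.*/zookeeper.connect=localhost:5181/' config/server.properties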
First, stop the Zookeeper with the command below:
$ bin/zookeeper-server-stop.sh config/zookeeper.properties
Then, start it again and you should be good to go:
$ bin/zookeeper-server-start.sh config/zookeeper.properties
Maybe another user is running the process. Check using jps whether any process with QuorumPeerMain is running; kill it and then try again.
I solved the problem with the following commands.
Go to the Kafka folder where you installed it and run sudo bin/zookeeper-server-stop.sh, then:
bin/zookeeper-server-start.sh config/zookeeper.properties
I hope this helps. Best of luck!
Maybe you can stop your HBase first,
just like this:
[root@master kafka_2.11-0.10.1.0]# stop-hbase.sh
stopping hbase................
localhost: stopping zookeeper.
[root@master kafka_2.11-0.10.1.0]# jps
2903 ResourceManager
60745 Worker
2586 NameNode
2762 SecondaryNameNode
93996 Jps
60653 Master
[root@master kafka_2.11-0.10.1.0]# bin/zookeeper-server-start.sh config/zookeeper.properties
[2019-12-05 01:09:43,959] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2019-12-05 01:09:43,965] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2019-12-05 01:09:43,965] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2019-12-05 01:09:43,965] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2019-12-05 01:09:43,965] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2019-12-05 01:09:44,013] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2019-12-05 01:09:44,013] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2019-12-05 01:09:44,023] INFO Server environment:zookeeper.version=3.4.8--1, built on 02/06/2016 03:18 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,023] INFO Server environment:host.name=master (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,023] INFO Server environment:java.version=1.8.0_171 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,023] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,023] INFO Server environment:java.home=/usr/local/soft/jdk1.8.0_171/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,023] INFO Server environment:java.class.path=... (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:java.compiler=<NA> (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:os.version=2.6.32-431.el6.x86_64 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:user.name=root (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,024] INFO Server environment:user.home=/root (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,025] INFO Server environment:user.dir=/usr/local/soft/kafka_2.11-0.10.1.0 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,035] INFO tickTime set to 3000 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,035] INFO minSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,035] INFO maxSessionTimeout set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-05 01:09:44,050] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
I got the same problem. In my case it happened because the ZooKeeper server crashed when my laptop crashed.
I got it solved with the help of the link below:
How to recover Zookeeper from java.io.EOFException after a server crash?
I found the offending log file by opening the files under [Zookeeper-data-dir]/zookeeper_0/version-2 one by one, and found a log file without a header or anything else in it. When I deleted it, my problem was solved and my ZooKeeper server started running normally.
It seems the ZooKeeper port 2181 is still in use. Please follow the steps below to address this issue.
Use the netstat command to find the process that is holding on to port 2181, then kill the process that is using it:
$ netstat -antp | grep 2181
tcp 0 0 0.0.0.0:2181 0.0.0.0:*
LISTEN 28016/java <defunct>
$ kill -9 28016
Restarting the kafka process solves the issue.
There is an existing ZooKeeper already running; listing topics confirms it, so stop that ZooKeeper first.
bin/kafka-topics.sh --list --zookeeper localhost:2181
test
Many have mentioned the process for Linux versions.
In case we want to do the same on Windows, we can identify the process as below.
From PowerShell, find the process listening on port 2181:
PS C:\Users\<username>> netstat -ano | findstr 2181
TCP [::1]:2181 [::]:0 LISTENING 4564
Option #1: kill that process id using taskkill:
PS C:\Users\vishnus> taskkill /F /PID 4564
In case this option doesn't work and fails with
ERROR: The process with PID 4564 could not be terminated.
Reason: Access is denied.
Option #2:
Go to "Task Manager" --> Services tab, check the PID column, find that process, right-click it and stop it. Then it will be stopped.
By this time the earlier instance is killed, so you can start ZooKeeper.
I had the same problem; I debugged it step by step and found the solution as follows.
Stop ZooKeeper:
/opt/kafka_2.13-2.7.0/bin/zookeeper-server-stop.sh
Then check whether the ZooKeeper process has indeed been stopped:
ps aux | grep zookeeper
If ZooKeeper has been stopped, no ZooKeeper process should be displayed. You can also check the ZooKeeper process via its port number or the Java processes. If your ZooKeeper port is the default 2181, you can check which process is listening on it by running sudo lsof -i :2181. You can also list all the Java processes and find the corresponding ZooKeeper process by running sudo ps -fC java.
If ZooKeeper has not been stopped, run sudo kill -9 zookeeper_process_id to kill it.
After this, we are sure ZooKeeper has indeed been stopped. If we run jps, QuorumPeerMain will not be shown.
After this, we can restart ZooKeeper:
/opt/kafka_2.13-2.7.0/bin/zookeeper-server-start.sh /opt/kafka_2.13-2.7.0/config/zookeeper.properties
For Windows users: the steps below will stop ZooKeeper.
Step 1: netstat -ano | findstr :<port>
NOTE: <port> is the port where ZooKeeper is running (2181 by default).
Step 2: In the output, the value in the last column is the PID; copy that and put it in the command below:
taskkill /PID <PID> /F

kafka can't connect to zookeeper - FATAL Fatal error during KafkaServerStable startup

Well... every service in the world can connect to my ZooKeeper except Kafka. Below is my connection string in the server.properties file:
zk.connect=1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181
I have opened all ports on the two ZooKeeper servers... total promiscuous mode. I can even telnet into the ZooKeeper server from the Kafka server:
telnet 2.dzk.syd.druid.neo.com 2181
Trying 54.252.183.218...
Connected to 2.dzk.syd.druid.neo.com.
Escape character is '^]'.
So... I'm rather confused about why Kafka will not connect to ZooKeeper.
I am using Ubuntu 12.04 and Kafka 0.7.2.
[2013-07-16 04:36:49,915] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,915] INFO Client environment:user.dir=/etc/sv/kafka (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,916] INFO Initiating client connection, connectString=1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#39cc65b1 (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,935] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2013-07-16 04:36:49,938] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to 1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:66)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:872)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
at kafka.server.KafkaZooKeeper.startup(KafkaZooKeeper.scala:44)
at kafka.log.LogManager.<init>(LogManager.scala:93)
at kafka.server.KafkaServer.startup(KafkaServer.scala:58)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
at kafka.Kafka$.main(Kafka.scala:47)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: 2.dzk.syd.druid.neo.com: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:894)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1286)
at java.net.InetAddress.getAllByName0(InetAddress.java:1239)
at java.net.InetAddress.getAllByName(InetAddress.java:1155)
at java.net.InetAddress.getAllByName(InetAddress.java:1091)
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:387)
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:332)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:383)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:64)
... 9 more
[2013-07-16 04:36:49,942] INFO Shutting down Kafka server (kafka.server.KafkaServer)
[2013-07-16 04:36:49,943] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler)
[2013-07-16 04:36:49,944] INFO Kafka server shut down completed (kafka.server.KafkaServer)
In your kafka/config/server.properties, there should be a property
#host.name=localhost
If you have uncommented this, or set it to another name, then that name should be in the /etc/hosts file.
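The general idea is that any name Kafka has to resolve (its own host.name, or the ZooKeeper hosts in zk.connect) needs a working entry; a hypothetical /etc/hosts line, reusing the address from the telnet test above:
# /etc/hosts on the Kafka host (illustrative; substitute your real ZooKeeper IPs)
54.252.183.218   2.dzk.syd.druid.neo.com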
It's been a while since this was answered, but in case it could help someone, here is how I fixed it:
I am actually using an Ansible playbook to install the Kafka cluster, and the parameters generated in the zookeeper.properties file were not correctly ordered:
server.1=0.0.0.0:2888:3888
server.2=kafka-4:2888:3888
server.3=kafka-5:2888:3888
server.4=kafka-3:2888:3888
server.5=kafka-2:2888:3888
Putting them in the right order,
server.1=0.0.0.0:2888:3888
server.2=kafka-2:2888:3888
server.3=kafka-3:2888:3888
server.4=kafka-4:2888:3888
server.5=kafka-5:2888:3888
Then restarting the Kafka service fixed it.
Change this in zookeeper.properties:
maxClientCnxns=0 to maxClientCnxns=1