Getting Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException when creating topic - apache-kafka

I am new to Kafka and have started learning on my own.
I am trying to create a topic in Kafka; my ZooKeeper is running, but every time I get the error below.

This means your ZooKeeper instance(s) are not running properly.
Check the ZooKeeper log files for errors. By default, ZooKeeper's logs are written to the server.log file.
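Before digging through log files, you can quickly check whether ZooKeeper is even reachable on its client port. A minimal sketch, assuming the default port 2181 from `zookeeper.properties` (adjust host/port if yours differ); it uses bash's built-in `/dev/tcp` so no extra tools are needed:

```shell
# Probe ZooKeeper's client port (2181 is the out-of-the-box default).
# A connection failure here explains the ZooKeeperClientTimeoutException.
if timeout 2 bash -c 'exec 3<>/dev/tcp/localhost/2181' 2>/dev/null; then
  echo "ZooKeeper port is open"
else
  echo "ZooKeeper is not reachable on localhost:2181"
fi
```

If the port is open but the client still times out, then the server-side logs are the next place to look.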

Related

I cannot start my Kafka node ZooKeeper cluster

I am trying to start my ZooKeeper node for my Kafka server, but I keep getting the error "invalid config, exiting abnormally".
I tried changing the name of the directory and created a new directory, yet I am still unable to run my Kafka server.

zookeeper server expiring session timeout exceeded (org.apache.zookeeper.server)

I'm a beginner with Kafka and Druid. Recently I got an error when uploading data to Druid, at the parse-data step. I don't know what causes this issue, but I assume the ZooKeeper instance for the Kafka server is failing. I started ZooKeeper with ./bin/zookeeper-server-start.sh config/zookeeper.properties, but the session timed out.

When does Zookeeper change Kafka cluster ID?

Sometimes I encounter the following error when trying to run a Kafka broker:
ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID m1Ze6AjGRwqarkcxJscgyQ doesn't match stored clusterId Some(1TGYcbFuRXa4Lqojs4B9Hw) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
[2020-01-04 15:58:43,303] INFO shutting down (kafka.server.KafkaServer)
I know what the error message means, and that it can be solved by removing meta.properties from the Kafka log dir.
I now want to know exactly when this happens, so I can prevent it. Why, and when, does the cluster ID that ZooKeeper looks for change?
Kafka requires ZooKeeper to start. With the out-of-the-box setup, ZooKeeper's data/log dir is set to the tmp directory.
When your instance/machine restarts, the tmp directory is flushed, so ZooKeeper loses its identity and creates a new one. Kafka cannot connect to the new ZooKeeper because its meta.properties file still references the previous cluster_id.
To avoid this, edit $KAFKA_HOME/config/zookeeper.properties and
change dataDir to a directory that will persist after ZooKeeper shuts down / restarts.
This prevents ZooKeeper from losing its cluster_id every time the machine restarts.
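As a concrete sketch of that change (the target path /var/lib/zookeeper is only an example, as is the use of sed; editing the file by hand works just as well):

```shell
# Move ZooKeeper's snapshot directory out of /tmp so it survives reboots.
# /var/lib/zookeeper is an example; any directory that persists will do.
mkdir -p /var/lib/zookeeper

# Replace the default dataDir=/tmp/zookeeper line in zookeeper.properties
# (a .bak copy of the original file is kept).
sed -i.bak 's|^dataDir=.*|dataDir=/var/lib/zookeeper|' \
  "$KAFKA_HOME/config/zookeeper.properties"
```

After this one-off fix you may still need to delete the stale meta.properties once, as described above, so the broker can register against the surviving cluster ID.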

Lagom Kafka Unexpected Close Error

In the Lagom dev environment, after starting Kafka with lagomKafkaStart,
it sometimes shows KafkaServer Closed Unexpectedly, after which I need to run the clean command to get it running again.
Is this the expected behaviour?
This can happen if you forcibly shut down sbt and the ZooKeeper data becomes corrupted.
Other than running the clean command, you can manually delete the target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/ directory.
This will clear your local data from Kafka, but not from any other database (Cassandra or RDBMS). If you are using Lagom's message broker API, it will automatically repopulate the Kafka topic from the source database when you restart your service.
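A sketch of the manual cleanup, run from the project root with sbt stopped (the path is the one given above):

```shell
# Remove the embedded Kafka project's state so Lagom regenerates it on the
# next lagomKafkaStart. Only local Kafka data is lost; the source databases
# (Cassandra or RDBMS) keep theirs.
rm -rf target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/
```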

How to create a durable topic in kafka

I am new to Kafka and am still learning the basics. I want to create a durable topic that is preserved even after ZooKeeper and/or the Kafka server shuts down.
What I notice is this: I have ZooKeeper and a Kafka server running on my local MacBook. When I shut down the ZooKeeper server and bring it back up quickly, I can still see the previously created topics. But if I restart the system and then restart the ZooKeeper server, I no longer see the topic I had created earlier.
I am running kafka_2.9.2-0.8.1.1 on my local system.
It happens because /tmp is cleaned after a reboot, resulting in the loss of your data.
To fix this, change ZooKeeper's dataDir property (in config/zookeeper.properties) and Kafka's log.dirs (in config/server.properties) to point somewhere NOT in /tmp.
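A sketch of both edits, assuming the stock defaults (/tmp/zookeeper and /tmp/kafka-logs) and example target paths under /var/lib; pick any reboot-safe location and edit by hand if you prefer:

```shell
# Persist ZooKeeper snapshots and Kafka log segments outside /tmp,
# so topics and their data survive a system restart.
mkdir -p /var/lib/zookeeper /var/lib/kafka-logs

# ZooKeeper snapshot directory (.bak copies of the originals are kept).
sed -i.bak 's|^dataDir=.*|dataDir=/var/lib/zookeeper|' config/zookeeper.properties

# Kafka log segment directory.
sed -i.bak 's|^log.dirs=.*|log.dirs=/var/lib/kafka-logs|' config/server.properties
```

Restart ZooKeeper and Kafka afterwards; topics created from then on should survive a reboot.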