I cannot start my Kafka node ZooKeeper cluster - apache-kafka

I am trying to start the ZooKeeper node for my Kafka server. I keep getting the error "invalid config, exiting abnormally".
I tried changing the name of the directory and created a new directory, but I am still unable to run my Kafka server.

Related

When does Zookeeper change Kafka cluster ID?

Sometimes I encounter the following error when trying to run a Kafka broker:
ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID m1Ze6AjGRwqarkcxJscgyQ doesn't match stored clusterId Some(1TGYcbFuRXa4Lqojs4B9Hw) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
[2020-01-04 15:58:43,303] INFO shutting down (kafka.server.KafkaServer)
I know what the error message means and that it can be solved by removing meta.properties from the Kafka log dir.
I now want to know when exactly this happens, so I can prevent it. Why, and when, does the cluster ID that ZooKeeper looks for sometimes change?
Kafka requires ZooKeeper to start. With the out-of-the-box setup, ZooKeeper's data/log dir is set to the tmp directory.
When your instance/machine restarts, the tmp directory is flushed, so ZooKeeper loses its identity and creates a new one. Kafka then cannot connect to the new ZooKeeper because its meta.properties file still references the previous cluster_id.
To avoid this, edit $KAFKA_HOME/config/zookeeper.properties and
change dataDir to a directory that will persist after ZooKeeper shuts down / restarts.
This stops ZooKeeper from losing its cluster_id every time the machine restarts.
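A minimal sketch of both fixes; the file paths below (the demo copy, /var/lib/zookeeper, /tmp/kafka-logs) are assumptions, so adjust them to your installation:

```shell
# Demonstration on a scratch copy; the real file lives at
# $KAFKA_HOME/config/zookeeper.properties.
cat > /tmp/zookeeper.properties.demo <<'EOF'
dataDir=/tmp/zookeeper
clientPort=2181
EOF

# Point dataDir at a directory that survives reboots instead of /tmp:
sed -i 's|^dataDir=.*|dataDir=/var/lib/zookeeper|' /tmp/zookeeper.properties.demo
grep '^dataDir=' /tmp/zookeeper.properties.demo

# If the IDs already disagree, also delete the stale meta.properties
# from each Kafka log dir (default /tmp/kafka-logs), then restart:
#   rm /tmp/kafka-logs/meta.properties
```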

Kafka start-up failed due to NoAuth issue

I have a single-node cluster setup of Apache kafka_2.12-2.5.1 (with embedded ZooKeeper) on one host. I enabled SSL on ZooKeeper and it starts just fine. Kafka, however, throws the fatal error "NoAuth for /brokers/ids" at start-up.
Please help out if you have any pointers.

Getting Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException when creating topic

I am new to Kafka, have started learning on my own.
I am trying to create a topic in Kafka. My ZooKeeper is running, but every time I get the error below.
This means your ZooKeeper instance(s) are not running properly.
Please check the ZooKeeper log files for errors. By default, ZooKeeper logs are stored in the server.log file.
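As a rough sketch of that check (the log path in the usage comment is an assumption; your installation may log elsewhere):

```shell
# Scan a ZooKeeper/Kafka log file for recent errors. The default
# location logs/server.log under $KAFKA_HOME is an assumption.
scan_zk_log() {
  grep -i -E 'error|exception' "$1" | tail -n 20
}

# Usage (assumed path):
#   scan_zk_log "${KAFKA_HOME:-/opt/kafka}/logs/server.log"
```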

Lagom Kafka Unexpected Close Error

In the Lagom dev environment, after starting Kafka with lagomKafkaStart,
it sometimes shows "KafkaServer Closed Unexpectedly", and after that I need to run the clean command to get it running again.
Please advise whether this is the expected behaviour.
This can happen if you forcibly shut down sbt and the ZooKeeper data becomes corrupted.
Other than running the clean command, you can manually delete the target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/ directory.
This will clear your local data from Kafka, but not from any other database (Cassandra or RDBMS). If you are using Lagom's message broker API, it will automatically repopulate the Kafka topic from the source database when you restart your service.
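The manual equivalent, run from your project root (this removes only Lagom's internal Kafka data, as described above):

```shell
# Manual alternative to the clean command for Kafka alone: remove
# Lagom's internal Kafka project directory (run from the project root).
rm -rf target/lagom-dynamic-projects/lagom-internal-meta-project-kafka/
```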

Kafka parcel not installed well

I am using a Cloudera cluster (5.5.1-1.cdh5.5.1) and I installed the Kafka parcel (2.0.1-1.2.0.1.p0.5). When I tried to add the Kafka service on the cluster, I got the following error on every host I tried to add a role on: Error found before invoking supervisord: User [kafka] does not exist.
I created the user manually on every node that needed it and restarted the service, and it seemed to work until I tried to modify the Kafka config with Cloudera Manager. Cloudera's Kafka configuration isn't bound to the actual Kafka configuration.
The error when I try to modify the broker.id :
Fatal error during KafkaServerStartable startup. Prepare to shutdown
kafka.common.InconsistentBrokerIdException: Configured broker.id 101
doesn't match stored broker.id 145 in meta.properties. If you moved
your data, make sure your configured broker.id matches. If you intend
to create a new broker, you should remove all data in your data
directories (log.dirs).
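One way to reconcile the mismatch, sketched below. The data directory in the usage comments is a placeholder; substitute the log.dirs value from your server.properties (or from Cloudera Manager's Kafka configuration):

```shell
# Read the broker id that the broker actually stored on disk.
read_stored_broker_id() {
  sed -n 's/^broker\.id=//p' "$1/meta.properties"
}

# Either set the configured broker.id to the stored value (145 in the
# error above), or, if this really is a new broker, wipe the data dirs:
#   read_stored_broker_id /var/local/kafka/data
#   rm -rf /var/local/kafka/data/*   # destructive -- removes all topic data
```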