I started a Kafka broker on my computer and also ran ZooKeeper. While I was playing with commands to list consumer groups and the like, Kafka suddenly threw a timeout exception. When I restarted it, all the topics were gone (I mean I couldn't list or view them through the command line), but when I checked the kafka-logs directory, they were all still there. I also created a new topic; it showed up in the CLI and, interestingly, its log directory sat right next to the logs of the topics that had the problem mentioned above. Any ideas? I'd appreciate the help.
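For context, the checks described above were done with commands roughly like these; the host, port, and path reflect a default local setup, so treat them as placeholders:
# list consumer groups (the commands I was playing with)
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
# list topics via the CLI (this came back empty after the restart)
./kafka-topics.sh --zookeeper localhost:2181 --list
# the on-disk log directories were still present
ls /tmp/kafka-logs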
As far as I know, in version 1.0.x and up, delete.topic.enable=true is the default configuration, meaning that if I execute the delete command it should work as expected.
However, when I try deleting the topic, it is still there and is not deleted (describing the topic on all brokers still shows it).
I read in some forums that I need to restart the broker servers and the ZooKeeper nodes in order for the deletion to take effect.
Is there any other way to delete the topic without restarting ZooKeeper and the brokers?
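For reference, the delete and describe commands I'm running look roughly like this (ZooKeeper host and topic name are placeholders); in stuck cases the topic may only show as "marked for deletion" in the list output:
./kafka-topics.sh --zookeeper <zookeeper_host>:2181 --delete --topic <topic_name>
./kafka-topics.sh --zookeeper <zookeeper_host>:2181 --describe --topic <topic_name>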
I was using a Kafka topic, as well as its metadata, in my application. I hard-deleted the topic from the ZooKeeper shell by deleting the directories corresponding to that topic. After creating the topic again, I described it and found that no leaders had been assigned to the newly created topic. In the consumer, I can see repeated logs printing LEADER_NOT_AVAILABLE. Any idea what I am doing wrong? Or is there a way to delete the metadata related to the Kafka topic as well that I'm unaware of? Thanks in advance!
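The describe call mentioned above is roughly the following (host and topic name are placeholders); a Leader column of -1 or none means no broker was elected leader for that partition:
./kafka-topics.sh --zookeeper <zookeeper_host>:2181 --describe --topic <topic_name>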
Deleting topics in Kafka hasn't been straightforward until recently. In general, you shouldn't attempt to delete Kafka topics by deleting metadata in ZooKeeper. You should always use the included command line utilities.
First you need to make sure that deleting topics is enabled in the server.properties file on all brokers, and do a rolling restart if needed:
delete.topic.enable=true
After you restart the brokers to enable topic deletion, you should issue the delete command using the command line utilities:
./kafka-topics.sh --zookeeper <zookeeper_host>:2181 --delete --topic <topic_name>
If at this point it's still stuck, try running these two commands from the ZooKeeper shell to remove all metadata for that particular topic:
rmr /brokers/topics/<topic_name>
rmr /admin/delete_topics/<topic_name>
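If you are not already inside the ZooKeeper shell, Kafka ships one in its bin directory; a minimal session, assuming the default port, looks roughly like this:
# open the ZooKeeper shell bundled with Kafka (adjust host/port to your setup)
./zookeeper-shell.sh <zookeeper_host>:2181
# inside the shell: remove the topic's metadata and any pending delete marker
rmr /brokers/topics/<topic_name>
rmr /admin/delete_topics/<topic_name>
# verify the topic znode is gone
ls /brokers/topics
Note that on newer ZooKeeper versions (3.5 and up) rmr has been replaced by deleteall.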
A few more details here:
https://medium.com/@contactsunny/manually-delete-apache-kafka-topics-424c7e016ff3
Having installed Kafka and having looked at these posts:
kafka loses all topics on reboot
Kafka topic no longer exists after restart
and having thus moved kafka-logs to an /opt... location, I still note that when I reboot:
I can re-create the topic.
the kafka-logs directory contains information on topics, offsets, etc., but it gets corrupted.
I am wondering how to rectify this.
Testing of new topics prior to reboot works fine.
There can be two potential problems:
If Kafka is running in Docker, recreating the container on restart wipes the previous state and starts a fresh cluster, so all topics are lost (unless the data directories are mounted on a persistent volume).
Check the log.dirs (Kafka) and dataDir (ZooKeeper) paths. If either points into /tmp, it will be cleaned on each reboot, so you will lose all log segments and the topics along with them; see the sketch below.
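As a sketch, assuming a stock installation, persistent paths would look something like this (the exact directories are illustrative, not mandated):
# config/server.properties: keep Kafka data out of /tmp
log.dirs=/var/lib/kafka-logs
# config/zookeeper.properties: keep ZooKeeper data out of /tmp
dataDir=/var/lib/zookeeper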
In this VM I noted the ZooKeeper data directory was defined under /tmp. I changed that to /opt (though I presume it should be /var), and the clearing of Kafka data when the instance terminated stopped happening. Not sure how to explain this completely.
I'm having some trouble with Kafka topics.
Our development environment runs Kafka version 2.0, and I've checked that topic deletion is enabled.
In fact, I've been able to delete some topics, but there are a few that won't be deleted no matter what I do. The brokers are running fine, by the way.
Not only that, but it seems those topics don't receive any messages from producers, even though the producers are correctly configured. We tried changing the topic that messages are sent to, and that worked fine.
I haven't found anything about these two problems; does anyone know anything?
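For what it's worth, a quick way to take our application code out of the picture is the bundled console producer and consumer; the broker host and topic name below are placeholders:
# send a test message to the stuck topic
./kafka-console-producer.sh --broker-list <broker_host>:9092 --topic <topic_name>
# read back everything on the topic from the beginning
./kafka-console-consumer.sh --bootstrap-server <broker_host>:9092 --topic <topic_name> --from-beginning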
I was trying to get a basic Kafka stream working, so I created a producer and used
producer.send(record).get()
to send a ProducerRecord to the Kafka stream. It was hanging, so I changed a few configuration settings and eventually it worked. I tried it again with the EXACT same code and EXACT same server configuration, and it hung. I tried it about 10 more times, and it hung each time. Why is this happening?
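Not from the original post, but a sketch of how one could check whether the client can actually reach the broker; a common cause of send(record).get() appearing to hang is that the producer cannot fetch metadata from the address in advertised.listeners (host, port, and path below are placeholders):
# ask the broker for its API versions; if this times out, the client can't reach it either
./kafka-broker-api-versions.sh --bootstrap-server <broker_host>:9092
# check that the advertised listener resolves from the client machine
grep advertised.listeners config/server.properties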