Zookeeper fails to delete nodes (topics in Kafka) - apache-kafka

I'm trying to delete topic nodes in Zookeeper for Kafka. I tried rmr /brokers/topics, as well as deleting individual topic nodes (delete /brokers/topics/topic-1).
It worked for most topics, but I have 4 topics/nodes that just refuse to be deleted.
Any ideas?

Related

Cannot delete topic successfully

As far as I know, from version 1.0.x onwards, delete.topic.enable=true is the default configuration, meaning that if I execute the delete command it should work as expected.
However, I tried deleting the topic and it is still there; it was not deleted (I can still describe the said topic on all brokers).
I read in some forums that I need to restart the broker servers and the ZooKeeper nodes for the deletion to take effect.
Is there any other way to delete the topic without restarting the zookeeper and the brokers?

Delete __Consumer_offset Topic from Kafka

I'm trying to delete Kafka topic __Consumer_offset as it is causing a lot of confusion for my brokers.
When I do so, it says this topic can't be marked for deletion.
I'm using the ZooKeeper CLI to delete it, e.g. rmr /brokers/topic __consumer_offset, but it is not working!
__consumer_offsets is a Kafka internal topic and is not allowed to be deleted through the delete-topic command. It contains information about committed offsets for each topic:partition for each group of consumers (group.id). If you want to wipe it out entirely you have to delete the ZooKeeper dataDir location, which implies you lose all the metadata.
Also, if you just want to get rid of the existing consumer groups, you can reset their offsets or consider deleting the groups instead.
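If cleaning up consumer groups is the actual goal, a rough sketch with kafka-consumer-groups.sh looks like this (group name and bootstrap server are placeholders):
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group my-group --reset-offsets --to-earliest --all-topics --execute
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group my-group
Resetting requires the group to have no active members, and deleting a group removes its committed offsets from __consumer_offsets without touching the topic itself.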
AFAIK you cannot delete that topic. It is an internal topic and should not be deleted manually.
If it is a must, then you will have to manually clean/remove your data directories. When you deploy Kafka brokers and ZooKeeper nodes, they create data directories.
Note: by removing the data directories you will lose all topics and related data, so this is not a feasible option in production.
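For completeness, a full wipe along those lines would look roughly like this, assuming placeholder paths and that all brokers and ZooKeeper nodes are stopped first (this destroys every topic, offset, and piece of cluster metadata):
rm -rf /path/to/kafka-logs/*        # Kafka data dir (log.dirs) on each broker
rm -rf /path/to/zookeeper-data/*    # ZooKeeper dataDir on each ZooKeeper node
After restarting, the cluster comes back empty and all topics have to be recreated.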

Kafka partition directories not deleted in data dir

I am using bin/kafka-topics.sh --zookeeper --delete --topic and I see entries in the Kafka logs indicating that the partitions for that topic are marked for deletion. However, I am still seeing the directories for those partitions in the data dir.
Is this expected, and do I have to delete them manually?
The topics haven't been removed from ZooKeeper either; I still see the topics there. Is this also expected?
Thanks!
There could be several reasons for topics not being deleted automatically.
In order to delete a topic, delete.topic.enable should be set to true.
If it is set to true, the delete should ideally remove the entries from ZooKeeper and the directories from the Kafka data dir. If it doesn't, check the logs to make sure there is no problem with the Kafka brokers or ZooKeeper due to some leader-election issue.
In that case, you have to clean up the directories manually.
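A minimal sketch of that sequence, with a placeholder ZooKeeper address and topic name:
# server.properties on every broker (restart brokers after changing it)
delete.topic.enable=true
# then issue the delete and verify
kafka-topics.sh --zookeeper zk-host:2181 --delete --topic my-topic
kafka-topics.sh --zookeeper zk-host:2181 --list
If the topic still shows up (possibly as "marked for deletion"), that usually points back to the broker or ZooKeeper problems mentioned above.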

Zookeeper client cannot rmr /brokers/topics/MY_TOPIC

I'm trying to remove a Kafka topic with 8 partitions and 2 replicas. First I deleted that topic using the kafka-topics.sh --delete command. Then I used zkCli.sh -server slave1.....slave3 and ran rmr /brokers/topics/MY_TOPIC.
However, I still see that topic in /brokers/topics/. I also tried restarting Kafka; everything is still the same.
By the way, a topic with 1 partition and 1 replica can be deleted successfully.
You can set a server property to enable deletion of Kafka topics.
Add the line mentioned below to server.properties:
delete.topic.enable=true
If you are removing the topic manually using rmr /brokers/topics/MY_TOPIC, then you also need to remove topic-related metadata from other nodes in ZooKeeper, e.g. consumer information about that topic, and remove the topic's directories on the Kafka servers.
It is cleaner to enable the topic delete property and execute kafka-topics.sh --delete.
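For the manual route, the topic usually has more than one znode; a rough sketch (topic name is a placeholder, and exact paths can vary by Kafka version):
rmr /brokers/topics/MY_TOPIC
rmr /config/topics/MY_TOPIC
rmr /admin/delete_topics/MY_TOPIC
On top of that, the topic's partition directories under the brokers' data dir (log.dirs) have to be removed by hand.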

kafka-topics.sh --describe doesn't return anything

I am running a kafka cluster composed by 3 nodes.
One of the nodes crashed and it has been behaving oddly since then...
The following does not return anything on the malfunctioning node:
kafka-topics.sh --describe --zookeeper mynode01:2181
However, querying the topics on the other nodes return the expected topics.
Another thing I saw is that zookeeper seems to be missing some directories:
./zkCli.sh -server mynode01
[zk: localhost:2181(CONNECTED) 1] ls /
[controller, zookeeper]
Whereas if I check any other node it comes back with:
[zk: localhost:2181(CONNECTED) 0] ls /
[isr_change_notification, zookeeper, admin, consumers, config, controller, brokers]
The logs report the following entry:
Error for partition [myqueue-1,0] to broker 1:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
I tried a couple of things already to sort this out, with no joy:
Restart the Kafka cluster, so that another node becomes leader.
Assign a different leader for the affected topics by running ./kafka-reassign-partitions.sh.
Stop the Kafka and ZooKeeper services on the affected node, remove kafka-logs and zkdata, and start them back up.
Although the cluster seems able to treat this node like any other and switch leader/follower roles with no issues, it looks like it got out of sync at some point and is not able to recover by itself.
Any idea?
Thanks in advance
I was able to solve the issue by stopping the ZooKeeper and Kafka services on the affected node and removing the snapshots in the zkdata directory and the associated transaction logs in the zklog directory.
After starting ZooKeeper back up on the affected node, the missing znodes were re-synced.
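Roughly, the recovery on the affected node looked like this (service names and paths are placeholders for whatever your installation uses):
systemctl stop kafka zookeeper          # stop both services on the bad node
rm -rf /path/to/zkdata/version-2/*      # ZooKeeper snapshots
rm -rf /path/to/zklog/version-2/*       # ZooKeeper transaction logs
systemctl start zookeeper               # the node re-syncs the missing znodes from the ensemble
systemctl start kafka
This only works because the other ZooKeeper nodes still hold a good copy of the state; wiping a majority of the ensemble would lose it instead.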