How to delete a Kafka topic from a cluster, version 0.10.2.1 - apache-kafka

I am not able to delete a Kafka topic; it gets marked for deletion but never actually deleted. I am running a Kafka cluster together with a ZooKeeper cluster.
Kafka version: 0.10.2.1
Can anyone help me with the list of steps that one needs to follow in order to delete a topic in a Kafka cluster?
I went through various questions on Stack Overflow but could not find a workable answer.

You need to enable this property in the config before starting the Kafka server; it is disabled by default. To enable topic deletion, first stop the Kafka server, then open server.properties in the config directory
and either uncomment #delete.topic.enable=true or add
delete.topic.enable=true
at the end of the file.
Now you can start the Kafka server again and delete any topic you want via:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic YOUR_TOPIC_NAME
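Put together, a minimal sequence might look like the sketch below. It assumes you are running from the Kafka installation directory, started the broker with the bundled scripts, and have ZooKeeper on localhost:2181; adjust paths and the topic name to your setup.
# Stop the broker before editing the config.
bin/kafka-server-stop.sh
# Append the property if it is not already present in the config file.
echo "delete.topic.enable=true" >> config/server.properties
# Restart the broker with the updated configuration.
bin/kafka-server-start.sh -daemon config/server.properties
# Delete the topic, then list topics to confirm it is gone.
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic YOUR_TOPIC_NAME
bin/kafka-topics.sh --list --zookeeper localhost:2181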

You could use Kafka Tool.
It can be downloaded from the Kafka Tool website.
Then connect to your Kafka server.
After that you can see the available topics on that server. From there you can select and delete topics.

Related

How to check that Kafka does rebalance?

I'm writing a Go service that works with Kafka. I have a problem with bad commits when the broker rebalances. I want to run an experiment that forces Kafka to rebalance and see how the service behaves.
What I do:
running Kafka in Docker locally (broker, zookeeper, schema registry and control center)
created a topic with 2 partitions
running a producer that sends messages to both partitions
Then I'm running two consumers with the same groupID, and after that I close one of them. It seems to me that the broker should start rebalancing at that moment. Or not? Whose logs should I check for it?
You can check that by running the following commands:
bin/kafka-consumer-groups --bootstrap-server host:9092 --list
and to describe:
bin/kafka-consumer-groups --bootstrap-server host:9092 --describe --group foo
Full documentation can be found here: Kafka consumer groups
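As an illustration of how to actually see the rebalance happen (the group name my-go-service and the port are assumptions, substitute your own), keep describing the group while you stop one of the two consumers:
# Refresh the group description every second; stop one consumer while this runs.
watch -n 1 "bin/kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group my-go-service"
# Before: each consumer owns one of the two partitions.
# After the rebalance: the remaining consumer should be listed as the owner of both partitions.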
Whose logs should I check for it?
The running consumer's log should be checked, provided the library you're using actually logs such information.

Kafka Topic Creation with --bootstrap-server gives timeout Exception (kafka version 2.5)

When trying to create a topic using --bootstrap-server, I am getting the exception "Error while executing Kafka topic command: Timed out waiting for a node":
kafka-topics --bootstrap-server localhost:9092 --topic boottopic --replication-factor 3 --partitions
However, the following works fine, using --zookeeper:
kafka-topics --zookeeper localhost:2181 --topic boottopic --replication-factor 3 --partitions
I am using Kafka version 2.5 and, as far as I know, since version >2.2 all the offsets and metadata are stored on the broker itself. So, while creating a topic there should be no need to connect to ZooKeeper.
Please help me understand this behaviour.
Note - I have set up a ZooKeeper quorum and a Kafka broker cluster, each containing 3 instances, on a single machine (for dev purposes)
Old question, but I'll answer anyway for the sake of internet wisdom.
You probably have authentication set up; when using --bootstrap-server you also need to specify your credentials with --command-config.
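For example (a hedged sketch: the properties below assume SASL/PLAIN, and the file name client.properties plus the credentials are just placeholders; adapt them to whatever security your cluster actually uses), put the client credentials in a properties file and pass it to the tool:
# client.properties - illustrative values only
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";
# Then pass the file to kafka-topics alongside --bootstrap-server:
kafka-topics --bootstrap-server localhost:9092 --command-config client.properties --list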
since version >2.2, all the ... metadata are stored on the broker itself
False. Topic metadata is still stored in ZooKeeper until KIP-500 is completed.
However, the AdminClient.createTopics() method that is used internally will delegate to ZooKeeper via the Controller broker node in the cluster.
Hard to say what the error is, but the most common issues are that Kafka is not running, that SSL is enabled and the certs are wrong, or that the listeners are misconfigured.
For example, in the listeners, the default broker port on a Cloudera Kafka installation would be 6667, not 9092
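As a sketch of what to check (the values below are only placeholders, not a recommendation), the broker's server.properties must advertise an address and port that is actually reachable from where you run the command:
# server.properties - illustrative listener settings
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092
If advertised.listeners points at a hostname or port your client cannot reach, the tool will keep waiting for a node and eventually time out.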
each containing 3 instances on a single machine
Running 3 instances on one machine does not improve resiliency or performance unless you have 3 CPUs and 3 separate HDDs on that one motherboard.
"Error while executing Kafka topic command: Timed out waiting for a
node"
This seems like your broker is down, or is inaccessible from where you are running those commands, or it hasn't started yet (perhaps it is still starting).
Sometimes broker startup takes a long time because it performs some cleaning operations. You may want to check your Kafka broker startup logs, make sure it is ready, and then try creating the topics by passing in the bootstrap servers.
There could also be errors during your Kafka broker startup, like Too many open files, a wrong ZooKeeper URL, or ZooKeeper not being accessible by your broker, to name a few.
Being able to create topics by passing in your ZooKeeper URL means that ZooKeeper is up, but it does not necessarily mean that your Kafka broker(s) are also up and running, since ZooKeeper can start without a broker but not vice versa.
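A quick way to check whether a broker is actually reachable on the bootstrap port (host and port here are assumptions) is to ask it for its supported API versions:
# Prints the broker's supported API versions if it is up and reachable;
# hangs and times out with a similar "waiting for a node" error if it is not.
kafka-broker-api-versions --bootstrap-server localhost:9092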

Why do we need to mention Zookeeper details even though Apache Kafka configuration file already has it?

I have been using Apache Kafka in a (plain vanilla) Hadoop cluster for the past few months, and out of curiosity I am asking this question just to gain additional knowledge about it.
Kafka's server.properties file already has the below parameter:
zookeeper.connect=localhost:2181
And I am starting the Kafka server/broker with the following command:
bin/kafka-server-start.sh config/server.properties
So I assume that Kafka automatically picks up the ZooKeeper details when we start the Kafka server itself. If that's the case, then why do we need to explicitly mention the ZooKeeper properties while we create Kafka topics? The syntax is given below for your reference:
bin/kafka-topics.sh --create --zookeeper localhost:2181
--replication-factor 1 --partitions 1 --topic test
As per the Kafka documentation, we need to start ZooKeeper before starting the Kafka server, so I don't think Kafka can be started by commenting out the ZooKeeper details in Kafka's server.properties file.
But at least, can we use Kafka to create topics and to start a Kafka producer/consumer without explicitly mentioning ZooKeeper in their respective commands?
The zookeeper.connect parameter in the Kafka properties file is needed so that each Kafka broker in the cluster connects to the ZooKeeper ensemble.
ZooKeeper keeps information about the connected brokers and handles the controller election. Other than that, it holds information about topics, quotas and ACLs, for example.
When you use the kafka-topics.sh tool, the topic creation happens at the ZooKeeper level first; from there, the information is propagated to the Kafka brokers and the topic partitions are created and assigned to them (by the elected controller). This connection to ZooKeeper will not be needed in the future thanks to the new Admin Client API, which provides some admin operations executed against Kafka brokers directly. For example, there is an open JIRA (https://issues.apache.org/jira/browse/KAFKA-5561), which I'm working on, for having the tool use that API for topic admin operations.
Regarding the producer and consumer: the producer doesn't need to connect to ZooKeeper at all, and only the "old" consumer (before version 0.9.0) needs a ZooKeeper connection, because it saves topic offsets there; from version 0.9.0 on, the "new" consumer saves topic offsets in a real topic (__consumer_offsets). To use it you have to pass the bootstrap-server option on the command line instead of the zookeeper one.
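For example, both console clients of that era can be started against the brokers only, with no ZooKeeper address at all (host, port and topic name below are just placeholders):
# Console producer connects straight to the broker(s).
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# The "new" console consumer stores its offsets in __consumer_offsets, not in ZooKeeper.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning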

Zookeeper client cannot rmr /brokers/topics/MY_TOPIC

I'm trying to remove a Kafka topic with 8 partitions and 2 replicas. First I deleted that topic using the kafka-topics.sh --delete command. Then I used zkCli.sh -server slave1.....slave3 and ran rmr /brokers/topics/MY_TOPIC.
However I still see that topic in /brokers/topics/. I also tried restarting Kafka, and everything is still the same.
Btw, a topic with 1 partition and 1 replica can be deleted successfully.
You can set a server property to enable deletion of Kafka topics.
Add the line mentioned below in server.properties:
delete.topic.enable=true
If you are removing the topic manually using rmr /brokers/topics/MY_TOPIC, then you also need to remove the topic-related metadata from the other ZooKeeper nodes, e.g. the consumer information about that topic. You also need to remove the topic's directory on the Kafka server.
It is cleaner to enable the topic delete property and execute kafka-topics.sh --delete.
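If you do go the manual route (only really worth it when deletion was never enabled), the cleanup roughly looks like the sketch below. The znode paths are the standard ones Kafka uses, but double-check them against your cluster, and the log directory /tmp/kafka-logs is just the default log.dirs location:
# In zkCli.sh, remove every znode that references the topic:
rmr /brokers/topics/MY_TOPIC
rmr /admin/delete_topics/MY_TOPIC
rmr /config/topics/MY_TOPIC
# On every broker, remove the topic's partition directories from log.dirs,
# e.g. /tmp/kafka-logs/MY_TOPIC-0 through /tmp/kafka-logs/MY_TOPIC-7, then restart the brokers.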

How to delete a topic in a multi-node, multi-cluster HA Kafka setup

I have set up Kafka version 2.11-0.10.1.0 in a multi-node, multi-cluster environment. In the Kafka server.properties I already added delete.enable.topic=true on all 3 machines.
The command I am using to delete the topic is:
./bin/kafka-topics.sh --zookeeper ip1:2181,ip2:2181,ip3:2181 --delete --topic topicname
but it's not deleting; it just shows the topic name as marked for deletion.
So every time I end up clearing the kafka-logs and ZooKeeper data by hand to delete the topic.
Does anybody have an idea how to delete it from the command prompt?
In general, everything you are doing sounds right; I suspect that your problem is a simple typo.
The parameter to enable topic deletion in Kafka is called delete.topic.enable, not, as you stated above, delete.enable.topic.
This causes deletion to stay at its default value, which is false, and results in the behavior you are seeing.
Correcting this and restarting the brokers should fix your issue and let the marked topics actually be deleted.
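A quick way to confirm the fix on each of the three machines before retrying (the config path below is an assumption, adjust it to where your server.properties actually lives):
# Should print exactly "delete.topic.enable=true" on every broker.
grep "^delete.topic.enable" config/server.properties
# After restarting the brokers, re-issue the delete and check that the topic is gone.
./bin/kafka-topics.sh --zookeeper ip1:2181,ip2:2181,ip3:2181 --delete --topic topicname
./bin/kafka-topics.sh --zookeeper ip1:2181,ip2:2181,ip3:2181 --list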