I deleted a topic directly from ZooKeeper using the command below, without deleting it from Kafka first:
zookeeper-shell.sh localhost:2181 rmr /brokers/topics/<topic_name>
Now I see that the topic still shows up in the log.dirs of at least one broker in the cluster. Is there a way to delete that as well?
When I attempt to delete it from Kafka now, it throws the error below:
Error while executing topic command : Topic <topic_name> does not exist on ZK path <zookeeper_server_list:2181>
I think you have missed a couple of steps. To manually delete a topic you need to follow these steps:
1) Stop Kafka server
2) On each broker, delete all of the topic's log files under log.dirs:
rm -rf path/to/logs/topic_name/
3) Remove topic directory from Zookeeper:
> zookeeper-shell.sh localhost:2181
> ls /brokers/topics
> rmr /brokers/topics/topic_name
4) Restart Kafka server
Note that the recommended way to delete a topic is
/bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic_name
assuming that delete.topic.enable=true.
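If that flag is not already enabled, a minimal sketch (assuming your broker config lives in config/server.properties) is to set it on every broker and restart them before issuing the delete:
# config/server.properties (on every broker)
delete.topic.enable=true
On older releases the delete command would only mark the topic for deletion without actually removing it when this flag was false; recent Kafka versions default it to true.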
Related
I want to know the difference between these commands.
# With bootstrap-server
kafka-topics \
--bootstrap-server b-1.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:9098,b-2.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:9098 \
--delete \
--topic debezium-my-topic \
--command-config /etc/kafka/client.properties
# With zookeeper
kafka-topics \
--zookeeper z-3.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2182,z-1.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2181,z-2.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2181 \
--delete \
--topic debezium-my-topic
The reason I ask is that the Kafka ACL for deleting this topic is restricted. If I run the first command it gives an error like Topic authorization failed, which is correct (due to the ACL), but the second command didn't check the ACL at all and deleted the topic directly.
The authorizer.class.name configured on the brokers is only applied to requests that actually reach the brokers (which is what --bootstrap-server and the AdminClient do). The --zookeeper option writes directly into ZooKeeper, bypassing the brokers and therefore any ACL check.
The CLI --zookeeper option is deprecated and will be removed completely as part of KIP-500.
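To confirm which path enforces authorization, one sketch (reusing the broker endpoint and client.properties from the question; your ACLs and principal names will differ) is to list the topic's ACLs through the brokers with kafka-acls:
kafka-acls \
  --bootstrap-server b-1.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:9098 \
  --command-config /etc/kafka/client.properties \
  --list \
  --topic debezium-my-topic
This request goes through the brokers, so it is subject to the same authorizer as the --bootstrap-server delete; the --zookeeper path never touches the brokers, which is why nothing stops it.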
I am trying to create a topic in Kafka. I installed a fresh copy of Kafka by downloading the .tar from the official Apache mirror site.
I used the tar -xvf command to unpack the bundle and started the server, which ran ok.
Now I am trying the command:
bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic test
I tried to dig into the problem and check whether the script file was actually present, but somehow it is not there.
You are in the bin directory, so the command should be:
./kafka-create-topic.sh .....
and not
bin/kafka-create-topic.sh
Maybe it's a copy & paste error from the documentation website ;)
[UPDATE]
Even if the directory path were wrong, in this case the newer Kafka versions have renamed the script, so the right command is:
kafka-topics.sh --create ...
Use the following command to create a topic in Kafka:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_name
For further clarification, visit https://kafka.apache.org/quickstart
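On newer Kafka releases (roughly 2.2 and later), where the --zookeeper flag is deprecated, the equivalent command goes through a broker instead; a sketch assuming a broker listening on localhost:9092:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic_name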
I want clear steps for how to install ZooKeeper, Kafka and Storm on Ubuntu.
This will guide you through the sequence of steps:
The Kafka binary distribution already has ZooKeeper built in, so you don't need to download it separately. Download Kafka from the link below.
Download Kafka version 0.8.2.0 from http://kafka.apache.org/downloads.html
Extract the downloaded archive using the command below:
tar -xzf kafka_2.9.1-0.8.2.0.tgz
Go into the extracted folder
cd kafka_2.9.1-0.8.2.0
Start the ZooKeeper server (which listens on port 2181 for Kafka server requests):
bin/zookeeper-server-start.sh config/zookeeper.properties
Now start the Kafka server in a new terminal window:
bin/kafka-server-start.sh config/server.properties
Now let us test whether the ZooKeeper-Kafka configuration is working.
Open a new terminal, create a topic named test, and list the topics:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper localhost:2181
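If you want to double-check the assignment (same kafka-topics.sh tool, just the --describe action), something like this should print the partition, leader and replica info:
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test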
Use a producer to send messages to Kafka's test topic:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message (you have to type these messages yourself)
This is another message
Use Kafka's consumer to see the messages produced:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
The above command should list all the messages you typed earlier. That's it; you have successfully configured a single-broker ZooKeeper-Kafka setup.
To configure multiple brokers, refer to the official site kafka.apache.org.
Now let's install Apache Storm:
Download the tar.gz file from a mirror: http://mirrors.ibiblio.org/apache/storm/apache-storm-0.9.2-incubating/
Extract it: tar xzvf apache-storm-0.9.2-incubating.tar.gz
Create a data directory
sudo mkdir /var/stormtmp
sudo chmod -R 777 /var/stormtmp
sudo gedit apache-storm-0.9.2-incubating/conf/storm.yaml
Edit the opened file so that it has the following properties set (for java.library.path use your JAVA_HOME path; JDK 7 or higher works, and Java must be installed on your system):
storm.zookeeper.servers:
  - "localhost"
storm.zookeeper.port: 2181
nimbus.host: "localhost"
storm.local.dir: "/var/stormtmp"
java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
worker.childopts: "-Xmx768m"
nimbus.childopts: "-Xmx512m"
supervisor.childopts: "-Xmx256m"
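After saving storm.yaml, a typical way to bring Storm up (a sketch, assuming you run each command from the extracted apache-storm-0.9.2-incubating folder, each in its own terminal) is to start the Nimbus, Supervisor and UI daemons:
bin/storm nimbus
bin/storm supervisor
bin/storm ui
The Storm UI should then be reachable at http://localhost:8080 by default.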
If everything goes fine, you now have Apache ZooKeeper, Kafka and Storm ready; you can restart the system. That's it.
I'm playing around with Kafka, using my own local single instance of ZooKeeper + Kafka, and I'm running into an error that I don't understand how to resolve.
I started a simple server per the Apache Kafka Quickstart Guide
$ bin/zookeeper-server-start.sh config/zookeeper.properties
$ bin/kafka-server-start.sh config/server.properties
Then, using kafkacat (installed via Homebrew), I started a producer that just echoes the messages I type into the console:
$ kafkacat -P -b localhost:9092 -t TestTopic -T
test1
test1
But when I try to consume those messages I get an error:
$ kafkacat -C -b localhost:9092 -t TestTopic
% ERROR: Topic TestTopic error: Broker: Leader not available
And similarly when I try to list its metadata:
$ kafkacat -L -b localhost:9092 -t TestTopic
Metadata for TestTopic (from broker -1: localhost:9092/bootstrap):
0 brokers:
1 topics:
topic "TestTopic" with 0 partitions: Broker: Leader not available (try again)
My questions:
Is this an issue with my running instance of ZooKeeper and/or kafkacat? I ask because I've been constantly shutting them down and restarting them, after deleting the /tmp/zookeeper and /tmp/kafka-logs directories.
Is there some simple setting that I need to try? I tried adding auto.leader.rebalance.enable=true in Kafka's server.properties settings file, but that didn't fix this particular issue.
How do I do a fresh restart of ZooKeeper/Kafka? Is shutting them down, deleting the /tmp/zookeeper and /tmp/kafka-logs directories, and then restarting ZooKeeper and then Kafka the way to go? (Maybe the way to go is to build a Docker container that I can stand up and tear down. I was going to use the spotify/docker-kafka container, but that is not on Kafka 0.9.0.0 and I haven't taken the time to build my own.)
It might be, but probably is not. My guess is the topic isn't created, so kafkacat echoes the message on screen but doesn't really send it to Kafka. All the topics are probably deleted after you delete /tmp/kafka-logs.
No. I don't think this is the way to look for a solution.
Having a Docker container is definitely the way to go - you'll soon end up running Kafka on multiple brokers, examining the replication behavior, high-availability scenarios, etc. Having it Dockerised helps a lot.
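As a quick sanity check (a sketch, assuming a single broker on localhost:9092 and ZooKeeper on localhost:2181), you could create TestTopic explicitly before producing, then retry kafkacat:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TestTopic
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic TestTopic
kafkacat -L -b localhost:9092 -t TestTopic
Note that with the default auto.create.topics.enable=true, the first metadata request auto-creates the topic, so a single "Leader not available" is sometimes transient and goes away on the next try once leader election completes.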
I'm trying out Kafka (0.8.2.1) in a VM, but am having trouble with it: though everything is fine while the machine remains on (even if I restart ZK/Kafka), if I reboot the machine (after gracefully shutting down ZK/Kafka) it seems all Kafka topics are lost.
I'm probably missing something basic here, since this is probably not supposed to happen. What might it be?
cd /vagrant/kafka_2.11-0.8.2.1
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 10 --topic foo
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C then resume ZooKeeper, Kafka, or both
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C both, reboot machine, boot ZK/Kafka again
bin/kafka-topics.sh --list --zookeeper localhost:2181
# no topics
Looks like the default location for logs is in the /tmp directory, which gets wiped on reboot. Change that location in the config to a more permanent one.
Go to the Kafka installation folder > config > server.properties, search for log.dirs in that file, and change the path from the /tmp default to a local directory. Restart the Kafka server and the topics you create will be saved in the folder you configured.
This happens because the /tmp folder gets cleared out on reboot.
To fix this issue, do the following.
Go to your Kafka installation directory and look for the file server.properties. You should see a section like the one below:
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
Change log.dirs to something more permanent, such as a local or custom directory like this:
log.dirs=/Users/xxx/yyy/software/confluent-5.3.1/mydata
Restart your Kafka cluster for the changes to take effect.
Reboot your system and you will see the topics are still present.
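One more thing worth checking, since the quickstart uses the ZooKeeper bundled with Kafka: config/zookeeper.properties also keeps its data in /tmp by default (dataDir=/tmp/zookeeper), and the topic metadata stored there is lost on reboot in exactly the same way. A sketch of both settings pointed at more durable locations (the paths are just examples):
# config/zookeeper.properties
dataDir=/var/lib/zookeeper
# config/server.properties
log.dirs=/var/lib/kafka-logs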