Not able to create a topic in Kafka - apache-kafka

I am trying to create a topic in Kafka. I installed a fresh copy of Kafka by downloading the tarball from an official Apache mirror.
I used tar -xvf to unpack the bundle and started the server, which ran fine.
Now I am trying the command:
bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 1 --partition 1 --topic test
I tried to dig into the problem and checked whether the script file was actually present; it was not:

You are in the bin directory, so the command should be:
./kafka-create-topic.sh .....
and not
bin/kafka-create-topic.sh
Maybe a copy & paste error from the documentation web site ;)
[UPDATE]
Even if the directory were wrong, newer Kafka versions have renamed the script, so the right command is:
kafka-topics.sh --create ...

Use the following command to create a topic in Kafka:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic_name
For further clarification, visit https://kafka.apache.org/quickstart
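The create command can be verified immediately afterwards; a minimal sketch, assuming ZooKeeper is running on localhost:2181:

```shell
# Create the topic (modern replacement for the old kafka-create-topic.sh)
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test

# Confirm the topic exists and inspect its partition/replica assignment
bin/kafka-topics.sh --list --zookeeper localhost:2181
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
```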

Related

Topic deletion from Zookeeper

I have deleted topic directly from Zookeeper using the below command and did not execute deletion from Kafka before:
zookeeper-shell.sh localhost:2181 rmr /brokers/topics/<topic_name>
Now what I see is that the topic still shows up in the log.dirs of at least one broker in the cluster. Is there a way that can be deleted as well?
When I attempt to delete it from Kafka now, it throws the error below:
Error while executing topic command : Topic <topic_name> does not exist on ZK path <zookeeper_server_list:2181>
I think you have missed a couple of steps. In order to manually delete a topic you need to follow these steps:
1) Stop Kafka server
2) On each broker, delete all of the topic's log files under log.dirs:
rm -rf path/to/logs/topic_name/
3) Remove topic directory from Zookeeper:
> zookeeper-shell.sh localhost:2181
> ls /brokers/topics
> rmr /brokers/topics/topic_name
4) Restart Kafka server
Note that the recommended way to delete a topic is
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic_name
assuming that delete.topic.enable=true on the brokers.
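The delete.topic.enable flag lives in the broker configuration; a minimal sketch of the supported deletion path (paths assume a default tarball install):

```shell
# In server.properties on every broker (topic deletion is disabled by
# default on older Kafka versions):
#   delete.topic.enable=true

# Then delete through Kafka instead of removing ZK nodes by hand:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic_name

# The topic is marked for deletion and the brokers remove their own
# log directories once deletion completes, so no manual rm -rf is needed.
```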

How to execute kafka-configs command inside testcontainers kafka image

I'm using testcontainers kafka image which is confluent cp-kafka, I need to amend kafka config using command:
./kafka-configs.sh --alter --entity-name <TOPIC_NAME> --entity-type topics --add-config message.timestamp.type=LogAppendTime --zookeeper <HOST>:<PORT>
I have an issue executing this command via KafkaContainer::execInContainer, which yields no such file or directory.
I've looked at the image's GitHub repo and at a Confluent installation on Ubuntu to see where the script should be located, and tried various options with no success.
How do I do this properly?
./kafka-configs.sh assumes that kafka-configs.sh is in the current directory, which might not be the case (try running pwd or ls -la instead and check the output).
If you remove the ./ prefix before kafka-configs.sh, the shell will look the command up on the PATH variable, which is supposed to be configured inside the Kafka image.
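For example, inside a Confluent cp-kafka container the tools are typically installed on the PATH without the .sh suffix (this is an assumption about the image layout; check with which):

```shell
# Run inside the container, e.g. via docker exec or execInContainer
which kafka-configs   # should print a path such as /usr/bin/kafka-configs

# Same command as above, resolved via PATH rather than the current directory
kafka-configs --alter --entity-type topics --entity-name <TOPIC_NAME> \
  --add-config message.timestamp.type=LogAppendTime \
  --zookeeper <HOST>:<PORT>
```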

Where does Confluent's kafka-avro-console-consumer write its log files?

I am running Confluent's kafka-avro-console-consumer as described in the quickstart tutorial:
kafka-avro-console-consumer --topic test --zookeeper localhost:2181 --from-beginning
However, I have created a separate user kafka to run this command. I get the error message
mkdir: cannot create directory '/usr/bin/../logs': Permission denied
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /usr/bin/../logs/schema-registry.log (No such file or directory)
[...]
I would rather not give the kafka write access to the entire /usr/bin directory. Where exactly is Kafka trying to create the logs directory?
The kafka-avro-console-consumer respects the LOG_DIR environment variable so if you set it, the log file will be written to this directory instead.
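For example (the directory below is purely illustrative; any directory the kafka user can write to will do):

```shell
# Point the tool's log4j output at a writable directory instead of
# the default /usr/bin/../logs
export LOG_DIR=/var/log/kafka-tools   # hypothetical path, create it first
kafka-avro-console-consumer --topic test --zookeeper localhost:2181 --from-beginning
```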

How to configure zookeeper, kafka and storm in ubuntu 14.04?

I want clear steps for installing Zookeeper, Kafka and Storm on Ubuntu.
This will guide you through the sequence of steps:
The Kafka binary distribution already bundles Zookeeper, so you don't need to download it separately. Download Kafka at the link below.
Download Kafka version 0.8.2.0 from http://kafka.apache.org/downloads.html
Un-tar the downloaded archive using the command below
tar -xzf kafka_2.9.1-0.8.2.0.tgz
Go into the extracted folder
cd kafka_2.9.1-0.8.2.0
Start the Zookeeper server (it listens on port 2181 for Kafka broker requests)
bin/zookeeper-server-start.sh config/zookeeper.properties
Now start the Kafka Server in a new terminal window
bin/kafka-server-start.sh config/server.properties
Now let us test if the zookeeper-kafka configuration is working.
Open a new terminal and Create a topic test:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper localhost:2181
Start a producer for Kafka's topic test:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message (you type these messages yourself)
This is another message
Use Kafka's console consumer to see the messages produced:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
The above command should list all the messages you typed earlier. That's it: you have successfully configured a single-broker Zookeeper-Kafka setup.
To configure a multi-broker cluster, refer to the official site, kafka.apache.org.
Now let's install Apache Storm:
Download the tar.gz file from a mirror: http://mirrors.ibiblio.org/apache/storm/apache-storm-0.9.2-incubating/
Extract it: tar -xzvf apache-storm-0.9.2-incubating.tar.gz
Create a data directory
sudo mkdir /var/stormtmp
sudo chmod -R 777 /var/stormtmp
sudo gedit apache-storm-0.9.2-incubating/conf/storm.yaml
Edit the opened file so that it has the following properties set (java.library.path must point at your JVM install; JDK 7 or a higher version must be installed on your system):
storm.zookeeper.servers:
    - "localhost"
storm.zookeeper.port: 2181
nimbus.host: "localhost"
storm.local.dir: "/var/stormtmp"
java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
worker.childopts: "-Xmx768m"
nimbus.childopts: "-Xmx512m"
supervisor.childopts: "-Xmx256m"
If everything goes fine, you now have Zookeeper, Kafka and Storm set up; you can restart the system to check that everything comes back up. That's it.
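To verify the Storm installation, the daemons can be started from the extracted folder, each in its own terminal (these commands are the standard entry points for Storm 0.9.x):

```shell
cd apache-storm-0.9.2-incubating
bin/storm nimbus       # master daemon
bin/storm supervisor   # worker daemon
bin/storm ui           # web UI, by default at http://localhost:8080
```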

kafka loses all topics on reboot

I'm trying out Kafka (0.8.2.1) in a VM, but am having trouble with it: everything is fine while the machine stays on (even if I restart ZK/Kafka), but if I reboot the machine (after gracefully shutting down ZK/Kafka), all Kafka topics seem to be lost.
I'm probably missing something basic here, since this is probably not supposed to happen. What might it be?
cd /vagrant/kafka_2.11-0.8.2.1
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 10 --topic foo
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C then resume ZooKeeper, Kafka, or both
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C both, reboot machine, boot ZK/Kafka again
bin/kafka-topics.sh --list --zookeeper localhost:2181
# no topics
Looks like the default location for logs is under the /tmp directory, which gets wiped on reboot. Change that location in the config to a more permanent one.
Go to kafka installation folder > config > server.properties
Search for log.dirs in that file and change the path from /tmp/kafka-logs to a local directory. Restart the Kafka server and the topics you create will be saved under the folder you configured.
This happens because the /tmp folder gets cleared out on reboot.
To fix this issue, do the following.
Go to your Kafka installation directory and open the file server.properties. You should see a section like this:
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
Change log.dirs to something more permanent, e.g. a custom directory like this:
log.dirs=/Users/xxx/yyy/software/confluent-5.3.1/mydata
Restart your Kafka cluster for the changes to take effect.
Reboot your system and you will see the topics are still present.
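Note that topic metadata lives in ZooKeeper, and when ZooKeeper is started from the bundled config its data directory also defaults to /tmp, so it is worth moving both settings to persistent paths (the paths below are illustrative, not required):

```shell
# config/zookeeper.properties   (topic names and metadata live here)
#   dataDir=/var/lib/zookeeper      # instead of the default /tmp/zookeeper

# config/server.properties      (message data lives here)
#   log.dirs=/var/lib/kafka-logs    # instead of the default /tmp/kafka-logs
```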