EndOfStreamException after executing zookeeper-shell.sh command - apache-zookeeper

I'm getting started with Zookeeper and Kafka - after executing zookeeper-shell.sh commands I keep getting an EndOfStreamException in the Zookeeper logs. So far it doesn't cause broker failures (I still get valid responses), but I would like to resolve it before it does. I'm running a single cluster with 1 Zookeeper node and 3 Kafka brokers. The commands I've used so far are the simplest possible:
/bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
/bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/{id}
[screenshot: Zookeeper command output]
[screenshot: Zookeeper logs]

You're running a non-interactive shell command: it executes, exits successfully with code 0, and closes the session. The EndOfStreamException in the Zookeeper log is just the server noticing that the client disconnected when the session ended, so it's harmless and there is nothing to fix.
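To see the difference, compare the two ways of invoking zookeeper-shell.sh (assuming the same localhost:2181 from the question): passing a command runs it once and disconnects, which is the moment the server logs the EndOfStreamException, while omitting the command keeps an interactive session open.
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
# one-shot: prints the result, then disconnects (the server logs the dropped connection)
bin/zookeeper-shell.sh localhost:2181
# interactive: stays connected until you type quit or press Ctrl+D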

Related

How can I get into the Zookeeper that is integrated in Kafka? (2.2.0)

I have installed a Kafka distribution that has Zookeeper integrated.
I have seen that in a standalone Zookeeper installation you can run the following command to enter the Zookeeper console:
bin/zkCli.sh
ls /zookeeper/quota
But in Kafka's scripts I only have:
zookeeper-security-migration.sh
zookeeper-server-start.sh
zookeeper-server-stop.sh
zookeeper-shell.sh
I have tried to do the following:
./zookeeper-shell.sh -server 127.0.0.1:2181 ls /zookeeper/quota
But it doesn't work; it doesn't do anything.
How can I get into the Zookeeper that is integrated in Kafka?
After starting Zookeeper, you can connect to it using the zookeeper-shell.sh tool.
To get into the shell:
./zookeeper-shell.sh IP:2181
Then you can execute commands, like:
ls /
You can then use ls to move through the znode tree and get to print the contents of a node.
You can also use this script to just run a command and return (without entering the shell):
./zookeeper-shell.sh localhost:2181 get /controller
Note that /zookeeper/quota is not a path used by Kafka; Kafka's quotas are stored under /config.
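As a quick way to see where Kafka keeps that configuration, you can list the children of /config with the same one-shot syntax; the exact child znodes vary by Kafka version, so the output below is only illustrative.
./zookeeper-shell.sh localhost:2181 ls /config
# typically shows znodes such as [changes, clients, topics, users, brokers]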

Modified server.properties and now I can't create topics

I installed kafka on my machine using brew.
brew install kafka
I opened the following file
/usr/local/etc/kafka/server.properties
By default Kafka brokers run on port 9092, but in my case this port is already taken by Elasticsearch.
So I made the following entry in this file
listeners=PLAINTEXT://abhisheks-mini:9093
Now I restarted Kafka using brew services restart kafka,
but now if I try to do
./kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --topic test
I get an error
Error while executing topic command : replication factor: 1 larger than available brokers: 0
[2017-09-30 12:39:18,076] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 1 larger than available brokers: 0
(kafka.admin.TopicCommand$)
I think this is because I changed the port and the utility does not know about the new port.
Currently kafka-topics.sh only talks to ZooKeeper, and according to ZK you don't have a broker running.
I suggest you check the broker logs (/usr/local/var/log/kafka) to see why it failed starting.
Also if you only want to change the port, you don't need to provide a hostname/ip, you just need:
listeners=PLAINTEXT://:9093
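A quick way to confirm that, using the zookeeper-shell.sh one-liners from the top of this page, is to ask ZooKeeper which broker ids are registered; an empty list means the broker never came back up after the listener change.
./zookeeper-shell.sh localhost:2181 ls /brokers/ids
# [] means no broker is registered; [0] (or similar) means a broker is up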

How to setup kafka and zookeeper on your local machine?

I have installed Kafka and Zookeeper in a folder on my local C:\ drive. Now I am trying to run the Zookeeper and Kafka servers so I can create topics, but it's throwing the error below. Any idea what is going wrong here?
Kafka command line:
C:\kafka_2.11-0.10.0.0>.\bin\windows\kafka-server-start.bat .\config\server.properties
'#' is not recognized as an internal or external command,
operable program or batch file.
The syntax of the command is incorrect.
Error: missing `server' JVM at `C:\Program Files (x86)\Java\jre8\bin\server\jvm.dll'.
Please install or use the JRE or JDK that contains these missing components.
zookeeper.config
tickTime=2000
initLimit=10
syncLimit=5
dataDir=\zookeeper-3.4.8
clientPort=2181
Have you set up ZOOKEEPER_HOME as a system environment variable?
Please follow this video
https://www.youtube.com/watch?v=OJKesEpO6ok
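If you are running a standalone Zookeeper install (as the zookeeper.config above suggests), one way to set that variable is from an elevated command prompt; the C:\zookeeper-3.4.8 path here is only an assumption based on the dataDir in the question, so adjust it to wherever you unpacked Zookeeper.
setx ZOOKEEPER_HOME "C:\zookeeper-3.4.8"
Then add %ZOOKEEPER_HOME%\bin to your Path (System Properties > Environment Variables) and open a new command prompt so the change is picked up.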
In the latest version, kafka_2.12-3.1.0 (2022), after unzipping and setting up the properties and log directories, keep the Kafka folder on the C: drive and always open the command prompt with 'Run as administrator'.
You can then start Zookeeper and Kafka with the commands below (the .bat files are for Windows).
Terminal 1 zookeeper
C:\kafka\bin\windows>zookeeper-server-start.bat ..\..\config\zookeeper.properties
Terminal 2 Server
C:\kafka\bin\windows>kafka-server-start.bat ..\..\config\server.properties
Terminal 3 Creating a Topic
C:\kafka\bin\windows>kafka-topics.bat --create --topic tutorialGB --bootstrap-server localhost:9092
Created topic tutorialGB.
To list the topics created:
C:\kafka\bin\windows>kafka-topics.bat --list --bootstrap-server localhost:9092
tutorialGB
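To sanity-check the setup end to end, the same windows folder also ships console producer and consumer scripts; a minimal round trip, assuming the tutorialGB topic and default port from above, looks like this:
Terminal 4 Producer
C:\kafka\bin\windows>kafka-console-producer.bat --bootstrap-server localhost:9092 --topic tutorialGB
Terminal 5 Consumer
C:\kafka\bin\windows>kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic tutorialGB --from-beginning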

How to resolve "Leader not available" Kafka error when trying to consume

I'm playing around with Kafka and using my own local single instance of zookeeper + kafka and running into this error that I don't seem to understand how to resolve.
I started a simple server per the Apache Kafka Quickstart Guide
$ bin/zookeeper-server-start.sh config/zookeeper.properties
$ bin/kafka-server-start.sh config/server.properties
Then utilizing kafkacat (installed via Homebrew) I started a Producer that will just echo messages that I type into the console
$ kafkacat -P -b localhost:9092 -t TestTopic -T
test1
test1
But when I try to consume those messages I get an error:
$ kafkacat -C -b localhost:9092 -t TestTopic
% ERROR: Topic TestTopic error: Broker: Leader not available
And similarly when I try to list its metadata:
$ kafkacat -L -b localhost:9092 -t TestTopic
Metadata for TestTopic (from broker -1: localhost:9092/bootstrap):
0 brokers:
1 topics:
topic "TestTopic" with 0 partitions: Broker: Leader not available (try again)
My questions:
Is this an issue with my running instance of zookeeper and/or kafkacat? I ask this because I've been constantly shutting them down and restarting them, after deleting the /tmp/zookeeper and /tmp/kafka-logs directories
Is there some simple setting that I need to try? I tried adding auto.leader.rebalance.enable=true in Kafka's server.properties settings file, but that didn't fix this particular issue
How do I do a fresh restart of zookeeper/kafka? Is shutting them down, deleting the /tmp/zookeeper and /tmp/kafka-logs directories, and then restarting zookeeper and then kafka the way to go? (Well, maybe the way to go is to build a Docker container that I can stand up and tear down. I was going to use the spotify/docker-kafka container, but that is not on Kafka 0.9.0.0 and I haven't taken the time to build my own.)
It might be, but probably is not. My guess is the topic isn't created, so kafkacat echoes the message on screen but doesn't really send it to Kafka. All the topics are probably deleted after you delete /tmp/kafka-logs.
No. I don't think this is the way to look for a solution.
Having a Docker container is definitely the way to go - you'll soon end up running Kafka on multiple brokers, examining the replication behavior, high-availability scenarios, etc. Having it dockerised helps a lot.
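If the root cause really is a missing topic, explicitly creating TestTopic before producing usually clears the "Leader not available" error. A minimal sketch, using the same kafka-topics.sh syntax that appears elsewhere on this page (run from the Kafka install directory, as in the quickstart):
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic TestTopic
# then retry the consumer
kafkacat -C -b localhost:9092 -t TestTopic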

kafka loses all topics on reboot

I'm trying out Kafka (0.8.2.1) in a VM, but am having trouble with it: though everything is fine while the machine remains on (even if I restart ZK/Kafka), if I reboot the machine (after gracefully shutting down ZK/Kafka) it seems all Kafka topics are lost.
I'm probably missing something basic here, since this is probably not supposed to happen. What might it be?
cd /vagrant/kafka_2.11-0.8.2.1
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 10 --topic foo
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C then resume ZooKeeper, Kafka, or both
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C both, reboot machine, boot ZK/Kafka again
bin/kafka-topics.sh --list --zookeeper localhost:2181
# no topics
Looks like the default location for logs is in the /tmp directory which gets wiped on reboot. Change that location in the config to a more permanent location.
Go to the Kafka installation folder > config > server.properties.
Search for log.dirs in that file and change the path from /tmp/kafka-logs to a local directory. Restart the Kafka server and the topics you create will be saved in the local folder you set in the config file.
This happens because the tmp folder gets cleared out on reboot.
To fix this issue, do the following.
Go to your Kafka installation directory and open the file server.properties. You should see a section like the one below:
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
Change log.dirs to something more permanent, such as a local or custom directory like this:
log.dirs=/Users/xxx/yyy/software/confluent-5.3.1/mydata
Restart your Kafka cluster for the change to take effect.
Now reboot your system and you will see the topics are still present.
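The same /tmp problem applies to Zookeeper's dataDir, which the quickstart config also points at /tmp (the /tmp/zookeeper directory mentioned earlier on this page). A minimal sketch of both changes; the target paths are only examples to adapt to your own machine:
# config/server.properties
log.dirs=/var/lib/kafka-logs
# config/zookeeper.properties
dataDir=/var/lib/zookeeper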