Kafka console consumer command not accepting bootstrap-servers - apache-kafka

I have installed Kafka on my Windows machine. My Kafka version is kafka_2.12-2.4.0.
I started the ZooKeeper server, then the Kafka server, then created a topic and produced messages to that topic. Up to that point everything is fine.
But when I run the consume command, it gives me the error below.
'--bootstrap-servers' is not recognized as an internal or external
command, operable program or batch file.
I am using the below command.
.\bin\windows\kafka-console-consumer.bat --bootstrap-servers localhost:9092 --topic TEST_TOPIC --from-beginning
Please suggest what the problem could be.

You should use --bootstrap-server instead of --bootstrap-servers (note: no 's' at the end).
Try:
.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic TEST_TOPIC --from-beginning

I resolved this on my side. The issue was that, for some reason, my command prompt had a double arrow >>:
C:\kafka_2.13-2.4.0>> .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic myTopic
'--bootstrap-server' is not recognized as an internal or external command,
operable program or batch file.
Once I removed the double arrow, the error went away. Now I appear to have other issues where Kafka is not running on the port I thought it was, but that is a separate issue.
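If you hit that same follow-up problem (the broker not listening on the port you expect), one quick check on Windows, assuming the default port 9092, is:
netstat -an | findstr "9092"
You can also confirm the configured port by looking at the listeners (or port) setting in .\config\server.properties.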

Related

How to manipulate offsets of the source database for Debezium

So I've been experimenting with Kafka and I am trying to manipulate/change the offsets of the source database using this link: https://debezium.io/documentation/faq/. I was able to do it successfully, but I was wondering how I would do this with native Kafka commands instead of kafkacat.
These are the kafkacat commands that I'm using:
kafkacat -b kafka:9092 -C -t connect_offsets -f 'Partition(%p) %k %s\n'
and
echo '["In-house",{"server":"optimus-prime"}]|{"ts_sec":1657643280,"file":"mysql-bin.000200","pos":2136,"row":1,"server_id":223344,"event":2}' | \
kafkacat -P -b kafka:9092 -t connect_offsets -K \| -p 2
It basically reverts the offset of the source system back to a previous binlog position, so I can read the DB from a previous point in time. This works well, but I was wondering what I would need to compose with native Kafka tooling, since we don't have kafkacat on our dev/prod servers (although I do see its value, and maybe it will be installed in the future). This is what I have so far for the translation, but it's not quite doing what I'm thinking.
./kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic connect_offsets \
  --property print.offset=true --property print.partition=true \
  --property print.headers=true --property print.timestamp=true \
  --property print.key=true --from-beginning
After I run this I get these results.
This works well for the kafka consumer command but when I try to translate the producer command I run into issues.
./kafka-console-producer.sh --bootstrap-server kafka:9092 --topic connect_offsets \
  --property parse.partition=true --property parse.key=true \
  --property key.separator=":"
I get a prompt after the producer command and I enter this
["In-house",{"server":"optimus-prime"}]:{"ts_sec":1657643280,"file":"mysql-bin.000200","pos":2136,"row":1,"server_id":223344,"event":2}:2
But it seems like it's not taking the input, because the binlog position doesn't update when I run the consumer command again. Any ideas? Let me know.
EDIT: After applying OneCricketeer's changes I'm getting this stack trace.
key.separator=":" looks like it will be an issue considering it will split your data at ["In-house",{"server":
So, basically you produced a bad event into the topic, and maybe broke the connector...
If you want to literally use the same command, keep your key separator as |, or any other character that will not be part of the key.
Also, parse.partition isn't a property that is used, so you should remove :2 at the end... I'm not even sure kafka-console-producer can target a specific partition.
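For reference, a sketch of the same idea with the key separator kept as | (broker, topic and payload taken from the question); whether the console producer can be pointed at a specific partition depends on your Kafka version, so the trailing :2 is dropped here:
./kafka-console-producer.sh --bootstrap-server kafka:9092 --topic connect_offsets \
  --property parse.key=true --property key.separator="|"
Then, at the prompt, enter the record as key|value on a single line:
["In-house",{"server":"optimus-prime"}]|{"ts_sec":1657643280,"file":"mysql-bin.000200","pos":2136,"row":1,"server_id":223344,"event":2}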

kafka delete a topic using bootstrap server vs zookeeper

I want to know the difference between these commands.
-- With bootstrap server
kafka-topics \
--bootstrap-server b-1.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:9098,b-2.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:9098 \
--delete \
--topic debezium-my-topic \
--command-config /etc/kafka/client.properties
-- With zookeeper
kafka-topics \
--zookeeper z-3.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2182,z-1.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2181,z-2.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2181 \
--delete \
--topic debezium-my-topic
The reason behind this is that the Kafka ACL for deleting topics is restricted. If I run the first command it gives an error like Topic authorization failed, which is correct (due to the ACL), but the second command doesn't check the ACL at all and deletes the topic directly.
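For context, the client.properties file passed via --command-config holds the client-side security settings for the secured listener. A hypothetical example (the actual protocol, mechanism and paths depend on how your cluster is secured):
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
ssl.truststore.location=/etc/kafka/truststore.jks
ssl.truststore.password=changeit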
ACLs are enforced by the authorizer.class.name configured on the brokers, and that check only applies to requests that go through a broker; the --zookeeper variant of the command talks to ZooKeeper directly, bypassing the brokers, so no ACL is verified.
The CLI --zookeeper option is considered deprecated and will be completely removed with KIP-500.
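For reference, broker-side ACL enforcement is driven by the authorizer configured in server.properties; a minimal sketch for Kafka 2.4+ (the class name and rules vary by setup, and on a managed service this is not a file you edit directly):
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:admin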

How to setup kafka and zookeeper on your local machine?

I have installed Kafka and ZooKeeper in a folder on my local C:\ drive. Now I am trying to run the ZooKeeper and Kafka servers so I can create topics, but it is throwing the error below. Any idea what is going wrong here?
Kafka command line:
C:\kafka_2.11-0.10.0.0>.\bin\windows\kafka-server-start.bat .\config\server.properties
'#' is not recognized as an internal or external command,
operable program or batch file.
The syntax of the command is incorrect.
Error: missing `server' JVM at `C:\Program Files (x86)\Java\jre8\bin\server\jvm.dll'.
Please install or use the JRE or JDK that contains these missing components.
zookeeper.config
tickTime=2000
initLimit=10
syncLimit=5
dataDir=\zookeeper-3.4.8
clientPort=2181
Have you set up ZOOKEEPER_HOME as a system environment variable?
Please follow this video: https://www.youtube.com/watch?v=OJKesEpO6ok
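For example, assuming ZooKeeper was extracted to C:\zookeeper-3.4.8 (adjust the path to your install), you can set the variable from a command prompt:
setx ZOOKEEPER_HOME "C:\zookeeper-3.4.8"
Then add %ZOOKEEPER_HOME%\bin to your Path (System Properties > Environment Variables) and open a new command prompt so the change takes effect.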
In the latest version, kafka_2.12-3.1.0 (2022), after unzipping and setting up the properties and log directories, keep the Kafka folder on the C: drive and always run the command prompt with 'Run as administrator'.
You can configure ZooKeeper and Kafka with the commands below.
The .bat files are for Windows.
Terminal 1 - ZooKeeper:
C:\kafka\bin\windows>zookeeper-server-start.bat ..\..\config\zookeeper.properties
Terminal 2 - Kafka server:
C:\kafka\bin\windows>kafka-server-start.bat ..\..\config\server.properties
Terminal 3 - Creating a topic:
C:\kafka\bin\windows>kafka-topics.bat --create --topic tutorialGB --bootstrap-server localhost:9092
Created topic tutorialGB.
To list the created topics:
C:\kafka\bin\windows>kafka-topics.bat --list --bootstrap-server localhost:9092
tutorialGB
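As a quick smoke test (assuming the broker is on the default localhost:9092), you can produce and consume against the new topic from two more terminals:
C:\kafka\bin\windows>kafka-console-producer.bat --bootstrap-server localhost:9092 --topic tutorialGB
C:\kafka\bin\windows>kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic tutorialGB --from-beginning
Whatever you type into the producer terminal should show up in the consumer terminal.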

How to configure zookeeper, kafka and storm in ubuntu 14.04?

I want clear steps for how to install ZooKeeper, Kafka and Storm on Ubuntu.
This will guide you through the sequence of steps:
The Kafka binary distribution already ships with a built-in ZooKeeper, so you don't need to download it separately. Download Kafka at the link below.
Download Kafka version 0.8.2.0 from http://kafka.apache.org/downloads.html
Un-tar the downloaded archive using the command below:
tar -xzf kafka_2.9.1-0.8.2.0.tgz
Go into the extracted folder
cd kafka_2.9.1-0.8.2.0
Start the ZooKeeper server (which listens on port 2181 for Kafka server requests):
bin/zookeeper-server-start.sh config/zookeeper.properties
Now start the Kafka Server in a new terminal window
bin/kafka-server-start.sh config/server.properties
Now let us test whether the ZooKeeper-Kafka configuration is working.
Open a new terminal and create a topic named test:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper localhost:2181
Produce messages to Kafka's topic test with the console producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message (you have to enter these messages yourself)
This is another message
Use Kafka's console consumer to see the messages you produced:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
The above command should list all the messages you typed earlier. That's it; you have successfully configured a single-broker ZooKeeper-Kafka setup.
To configure a multi-broker setup, refer to the official site, kafka.apache.org.
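As a rough sketch of the multi-broker case (following the standard quickstart; the broker id, port and log directory below are only illustrative), you copy the server config, give each copy a unique broker.id, port and log.dirs, and start the extra broker against the same ZooKeeper:
cp config/server.properties config/server-1.properties
# in config/server-1.properties set: broker.id=1, port=9093, log.dirs=/tmp/kafka-logs-1
bin/kafka-server-start.sh config/server-1.properties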
Now let's install Apache Storm:
Download the tar.gz file from a mirror: http://mirrors.ibiblio.org/apache/storm/apache-storm-0.9.2-incubating/
Extract it: tar xzvf apache-storm-0.9.2-incubating.tar.gz
Create a data directory
sudo mkdir /var/stormtmp
sudo chmod -R 777 /var/stormtmp
sudo gedit apache-storm-0.9.2-incubating/conf/storm.yaml
Edit the opened file so that it has the following properties set (for java.library.path use your JVM path; JDK 7 or a higher version of Java must be installed on your system):
storm.zookeeper.servers:
  - "localhost"
storm.zookeeper.port: 2181
nimbus.host: "localhost"
storm.local.dir: "/var/stormtmp"
java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
worker.childopts: "-Xmx768m"
nimbus.childopts: "-Xmx512m"
supervisor.childopts: "-Xmx256m"
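After saving storm.yaml, a possible way to bring the daemons up (assuming you run these from the apache-storm-0.9.2-incubating directory, each in its own terminal) is:
bin/storm nimbus
bin/storm supervisor
bin/storm ui
With the default settings, the Storm UI should then be reachable at http://localhost:8080.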
If everything goes fine, you are now ready with Apache ZooKeeper, Kafka and Storm. You can restart the system. That's it.

kafka loses all topics on reboot

I'm trying out Kafka (0.8.2.1) in a VM, but am having trouble with it: everything is fine while the machine stays on (even if I restart ZK/Kafka), but if I reboot the machine (after gracefully shutting down ZK/Kafka) it seems all Kafka topics are lost.
I'm probably missing something basic here, since this is probably not supposed to happen. What might it be?
cd /vagrant/kafka_2.11-0.8.2.1
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 10 --topic foo
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C then resume ZooKeeper, Kafka, or both
bin/kafka-topics.sh --list --zookeeper localhost:2181
# foo
# ^C both, reboot machine, boot ZK/Kafka again
bin/kafka-topics.sh --list --zookeeper localhost:2181
# no topics
Looks like the default location for the logs is in the /tmp directory, which gets wiped on reboot. Change that location in the config to a more permanent one.
Go to the Kafka installation folder > config > server.properties.
Search for log.dirs in that file and change the path from /tmp/kafka-logs to a local directory. Restart the Kafka server and the topics you create will be saved in the local folder you configured.
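A minimal sketch of the change (the target path is only an illustration; any directory that survives a reboot works):
# config/server.properties
log.dirs=/var/lib/kafka-logs
Since ZooKeeper is started from the bundled config/zookeeper.properties, whose dataDir also defaults to a /tmp location, it may be worth moving that as well:
# config/zookeeper.properties
dataDir=/var/lib/zookeeper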
This happens because the /tmp folder gets cleared out on reboot.
To fix this issue, do the following.
Go to your Kafka installation directory and find the file server.properties. You should see a section like the one below:
# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
Change log.dirs to something more local, or a custom directory, like this:
log.dirs=/Users/xxx/yyy/software/confluent-5.3.1/mydata
Restart your Kafka cluster for the changes to take effect.
Reboot your system and you will see that the topics are still present.