I want to know the difference between these commands.
-- With bootstrap server
kafka-topics \
--bootstrap-server b-1.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:9098,b-2.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:9098 \
--delete \
--topic debezium-my-topic \
--command-config /etc/kafka/client.properties
-- With zookeeper
kafka-topics \
--zookeeper z-3.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2182,z-1.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2181,z-2.bhuvi-cluster-secure.jj9mhr.c3.kafka.ap-south-1.amazonaws.com:2181 \
--delete \
--topic debezium-my-topic
The reason I ask is that the Kafka ACL for deleting topics is restricted. If I run the first command, it gives an error like "Topic authorization failed", which is correct (due to the ACL), but the second command didn't check the ACL at all and deleted the topic directly.
The authorizer.class.name configured on the brokers is only enforced by Kafka itself; a request that goes straight to Zookeeper never reaches the brokers, so no ACL check is performed.
The CLI's --zookeeper option is deprecated and will be removed entirely as part of KIP-500.
So I've been experimenting with Kafka, and I'm trying to manipulate/change the offsets of the source database by following https://debezium.io/documentation/faq/. I was successfully able to do it, but I was wondering how I would do this with native Kafka commands instead of kafkacat.
These are the kafkacat commands that I'm using:
kafkacat -b kafka:9092 -C -t connect_offsets -f 'Partition(%p) %k %s\n'
and
echo '["In-house",{"server":"optimus-prime"}]|{"ts_sec":1657643280,"file":"mysql-bin.000200","pos":2136,"row":1,"server_id":223344,"event":2}' | \
kafkacat -P -b kafka:9092 -t connect_offsets -K \| -p 2
It basically reverts the offset of the source system back to a previous binlog position, and I can then read the DB from a previous point in time. So this works well, but I was wondering what I would need to compose with native Kafka, since we don't have kafkacat on our dev/prod servers, although I do see its value and maybe it will be installed in the future. This is what I have so far for the translation, but it's not quite doing what I'm expecting.
./kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic connect_offsets \
  --property print.offset=true --property print.partition=true \
  --property print.headers=true --property print.timestamp=true \
  --property print.key=true --from-beginning
After I run this I get these results.
This works well for the kafka consumer command but when I try to translate the producer command I run into issues.
./kafka-console-producer.sh --bootstrap-server kafka:9092 --topic connect_offsets \
  --property parse.partition=true --property parse.key=true \
  --property key.separator=":"
I get a prompt after the producer command and I enter this
["In-house",{"server":"optimus-prime"}]:{"ts_sec":1657643280,"file":"mysql-bin.000200","pos":2136,"row":1,"server_id":223344,"event":2}:2
But it seems like it's not taking the input, because the binlog position doesn't update when I run the consumer command again. Any ideas? Let me know.
EDIT: After applying OneCricketeer's changes I'm getting this stack trace.
key.separator=":" looks like it will be an issue, considering it will split your line at the first ':' inside the key, leaving you with ["In-house",{"server"
So, basically you produced a bad event into the topic, and maybe broke the connector...
If you want to literally use the same command, keep your key separator as |, or any other character that will not be part of the key.
Also, parse.partition isn't a property that is used, so you should remove :2 at the end... I'm not even sure kafka-console-producer can target a specific partition.
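The splitting behavior can be checked without a broker. The console producer cuts the line at the first occurrence of key.separator, so a ':' separator truncates a key that itself contains colons, while '|' leaves it intact. A minimal sketch using parameter expansion to mimic that first-occurrence split (record content taken from the question, trimmed for brevity):

```shell
record_pipe='["In-house",{"server":"optimus-prime"}]|{"ts_sec":1657643280,"file":"mysql-bin.000200","pos":2136}'
record_colon='["In-house",{"server":"optimus-prime"}]:{"ts_sec":1657643280,"file":"mysql-bin.000200","pos":2136}'

# ${var%%SEP*} deletes everything from the first SEP onward, which is
# how the console producer extracts the key from the line.
echo "key with '|' separator: ${record_pipe%%|*}"
echo "key with ':' separator: ${record_colon%%:*}"
```

The first echo prints the full JSON key; the second prints only ["In-house",{"server", which is exactly the truncation described above.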
I recently added the advertised.listeners property to my MSK cluster configuration. The command used was:
/usr/local/kafka/bin/kafka-configs.sh --bootstrap-server b-1.xxxxxxxxx.c4.kafka.xxxxxxxxx.amazonaws.com:9094 \
--entity-type brokers \
--entity-name 1 \
--alter \
--command-config kafka.client.properties \
--add-config advertised.listeners=[CLIENT_SECURE://b-1.xxxxxxxxx.c4.kafka.xxxxxxxxx.amazonaws.com:9002,REPLICATION://b-1-internal.xxxxxxxxx.kafka.xxxxxxxxx.amazonaws.com:9093,REPLICATION_SECURE://b-1-internal.xxxxxxxxx.kafka.xxxxxxxxx.amazonaws.com:9095]
However, now if I try to get the configuration back using the following command
/usr/local/kafka/bin/kafka-configs.sh --bootstrap-server b-1.xxxxxxxxx.kafka.xxxxxxxxx.amazonaws.com:9002 \
--entity-type brokers \
--entity-name 1 \
--describe \
--all \
--command-config kafka.client.properties
I get a message in the trace log saying "Client is not ready to send to node".
Initially I thought this had to do with security groups, but I confirmed that they were open and allowing traffic. Any help would, as usual, be greatly appreciated!
I deleted a topic directly from Zookeeper using the command below, without first deleting it from Kafka:
zookeeper-shell.sh localhost:2181 rmr /brokers/topics/<topic_name>
Now I see that the topic still shows up in log.dirs on at least one broker in the cluster. Is there a way to delete that as well?
When I attempt to delete it from Kafka now, it throws the error below:
Error while executing topic command : Topic <topic_name> does not exist on ZK path <zookeeper_server_list:2181>
I think you have missed a couple of steps. In order to delete a topic manually you need to do the following:
1) Stop the Kafka server
2) On each broker, delete all of the topic's log directories under log.dirs (there is one directory per partition, named topic_name-<partition>):
rm -rf path/to/logs/topic_name-*
3) Remove topic directory from Zookeeper:
> zookeeper-shell.sh localhost:2181
> ls /brokers/topics
> rmr /brokers/topics/topic_name
4) Restart Kafka server
Note that the suggested way for deleting a topic is
/bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic_name
assuming that delete.topic.enable=true.
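Step 2 can be rehearsed safely in a scratch directory before touching a real broker. Kafka stores one directory per partition, named topic_name-<partition>, so the removal has to match every partition directory. A sketch with made-up names:

```shell
# Throwaway layout mimicking a broker's log.dirs with two topics;
# Kafka names each partition directory <topic>-<partition>.
logs=$(mktemp -d)
mkdir "$logs/topic_name-0" "$logs/topic_name-1" "$logs/other_topic-0"

# Step 2: remove every partition directory of the topic being deleted.
rm -rf "$logs"/topic_name-*

# Only the other topic's partition directory should remain.
ls "$logs"
```

Running this lists only other_topic-0, confirming the glob removed both partition directories of the deleted topic and left the others alone.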
I have installed Kafka on my windows machine. My Kafka version is kafka_2.12-2.4.0.
I started the Zookeeper server, then the Kafka server, then created a topic and produced a message to it. Up to this point everything is fine.
But when I run the consume command, it gives me the error below.
'--bootstrap-servers' is not recognized as an internal or external
command, operable program or batch file.
I am using the below command.
.\bin\windows\kafka-console-consumer.bat --bootstrap-servers localhost:9092 --topic TEST_TOPIC --from-beginning
Please suggest what the problem could be.
You should use --bootstrap-server instead of --bootstrap-servers (note the extra 's' at the end of your flag):
Try:
kafka/bin/kafka-console-consumer.bat \
--bootstrap-server localhost:9092 \
--topic TEST_TOPIC \
--from-beginning
I resolved the issue on my side. The issue was that in the command prompt, for some reason, I had a double arrow >>:
C:\kafka_2.13-2.4.0>> .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic myTopic
'--bootstrap-server' is not recognized as an internal or external command,
operable program or batch file.
Once I removed the double arrow, the error went away. Now I appear to have other issues where Kafka is not running on the port I thought it was, but that is a separate issue.
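For what it's worth, the same parsing rule can be reproduced in a POSIX shell: a leading >> is consumed as an output redirection, so the shell tries to run the next word (here --bootstrap-server) as the command itself, which matches the "not recognized" error cmd printed. A minimal sketch (the /tmp path is arbitrary):

```shell
# The leading '>>' is parsed as an output redirection, not as part of
# the command, so the shell tries to execute '--bootstrap-server'
# itself and fails with "not found".
msg=$( { >>/tmp/redirect-demo.out --bootstrap-server localhost:9092; } 2>&1 ) || true
echo "$msg"
rm -f /tmp/redirect-demo.out
```

The captured message names --bootstrap-server as the missing command, mirroring what cmd reported on Windows.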
I want clear steps for how to install Zookeeper, Kafka and Storm on Ubuntu.
The following will guide you through the sequence of steps:
The Kafka binary distribution already ships with Zookeeper built in, so you don't need to download it separately. Download Kafka at the link below.
Download Kafka version 0.8.2.0 from http://kafka.apache.org/downloads.html
Un-tar the downloaded archive using the command below:
tar -xzf kafka_2.9.1-0.8.2.0.tgz
Go into the extracted folder
cd kafka_2.9.1-0.8.2.0
Start the Zookeeper server (which listens on port 2181 for Kafka server requests):
bin/zookeeper-server-start.sh config/zookeeper.properties
Now start the Kafka Server in a new terminal window
bin/kafka-server-start.sh config/server.properties
Now let us test if the zookeeper-kafka configuration is working.
Open a new terminal and Create a topic test:
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper localhost:2181
Use a producer to send messages to the Kafka topic test:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
(You type these messages in yourself.)
Use Kafka's consumer to see the messages produced:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
The above command should list all the messages you typed earlier. That's it. You have successfully configured your single-broker Zookeeper-Kafka setup.
To configure a multi-broker setup, refer to the official site kafka.apache.org.
Now Let's install Apache Storm :
Download the tar.gz file from a mirror: http://mirrors.ibiblio.org/apache/storm/apache-storm-0.9.2-incubating/
Extract it: $ tar xzvf apache-storm-0.9.2-incubating.tar.gz
Create a data directory
sudo mkdir /var/stormtmp
sudo chmod -R 777 /var/stormtmp
sudo gedit apache-storm-0.9.2-incubating/conf/storm.yaml
Edit the opened file so that it has the following properties set (use your own JAVA_HOME path for java.library.path; jdk7 or a higher version works, and Java must be installed on your system):
storm.zookeeper.servers:
    - "localhost"
storm.zookeeper.port: 2181
nimbus.host: "localhost"
storm.local.dir: "/var/stormtmp"
java.library.path: "/usr/lib/jvm/java-7-openjdk-amd64"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
worker.childopts: "-Xmx768m"
nimbus.childopts: "-Xmx512m"
supervisor.childopts: "-Xmx256m"
If everything goes fine, you are now ready with Apache Zookeeper, Kafka and Storm. You can restart the system. That's it.