This is pretty much the same question as this: Messages sent to all consumers with the same consumer group name. The accepted answer is to use Kafka 0.8.1 or newer, which I did.
Kafka documentation says:
If all the consumer instances have the same consumer group, then this works just like a traditional queue balancing load over the consumers.
But I am not able to observe this behaviour using Kafka 0.8.2.1 and kafkacat.
My setup:
Kafka + Zookeeper running in spotify/kafka container (via boot2docker)
One producer
Two consumers with the same group.id
First, in running Kafka container, I created a topic:
$KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic beta
Two consumers subscribed to same topic:
kafkacat -C -b $(boot2docker ip):9092 -t beta -X group.id=mygroup
Then I produce a message using kafkacat:
date | kafkacat -P -b $(boot2docker ip):9092 -t beta
I expect that only one consumer receives the message, but actually both of them do. What am I doing wrong?
Edit: When I try to run the same consumers using kafka-console-consumer.sh, all is fine:
echo "group.id=mygroupid" > /consumer.beta.properties
$KAFKA_HOME/bin/kafka-console-consumer.sh \
--zookeeper localhost:2181 \
--topic beta \
--consumer.config /consumer.beta.properties
All works as expected: the message is only consumed once. I assume the issue lies with kafkacat.
kafkacat now supports the 0.9 high-level KafkaConsumer:
kafkacat -b <broker> -G <groupid> topic1 topic2..
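With this, the load-balancing behaviour from the question can be reproduced. A minimal sketch, reusing the beta topic and boot2docker address from above (partition assignment may vary):
# terminals 1 and 2: two balanced consumers in the same group
kafkacat -b $(boot2docker ip):9092 -G mygroup beta
# terminal 3: produce a message; only one of the two consumers should print it
date | kafkacat -P -b $(boot2docker ip):9092 -t beta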
It turns out this feature is not supported by librdkafka (the library that kafkacat is using). Quoting from the Github issue:
Short answer, this is not supported for now. Also, whatever you see in the Kafka documentation is only true if you're using the JVM consumer/producer. Assume that any other consumer/producer implementation is not entirely following the doc.
We have seen topic creation (which we control via terraform) fail sometimes. During the initial apply, terraform reports a connection error. On replan, it wants to create the topics, but on a second apply it fails saying the topic already exists. When this happens, the zookeeper nodes know about the topic but the kafka brokers do not.
$ zook=zookeeper-N.FQDN:2181
$ broker=kafka-N.FQDN:6667
$
$ kafka-topics.sh --describe --zookeeper $zook --topic troubleSomeTopic
Topic: troubleSomeTopic TopicId: x-ggFaJCRY6THYGvNjA20Q PartitionCount: 32 ReplicationFactor: 4 Configs: compression.type=snappy,retention.ms=86400000,segment.ms=86400000
/// verbose partition details
$ kafka-topics.sh --describe --bootstrap-server $broker --topic troubleSomeTopic
Error while executing topic command : Topic 'troubleSomeTopic' does not exist as expected
/// java backtrace
That example just checks one zookeeper and one broker, but all show the same results.
This cluster has zookeeper version 3.5.9 running on three nodes, and kafka 2.13-2.8.0 running on six. There is a similar question for an earlier version of kafka, but that method does not work any more.
How to remove an inconsistent kafka topic metadata data from kafka_2.10-0.8.1.1
The fix we have found for this is to delete the topic in ZooKeeper, then restart the Kafka cluster controller.
$ kafka-topics.sh --delete --zookeeper $zook --topic troubleSomeTopic
$ zookeeper-shell.sh $zook get /controller 2>/dev/null |
grep brokerid | jq -r .brokerid
10213
$ zookeeper-shell.sh $zook get /brokers/ids/10213 |
tail -1 | jq -r .host
kafka-5.FQDN
$
That's the broker to restart.
Before the restart, the zookeepers still know about the topic, but have it MarkedForDeletion: true. Restarting the controller broker clears the topic from zookeeper after a minute or two.
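To confirm that state before the restart, the deletion marker is visible both from kafka-topics.sh and in ZooKeeper. A small sketch, reusing the $zook variable from above (the /admin/delete_topics path is the standard location for pending deletions):
# the topic should now show as marked for deletion
kafka-topics.sh --describe --zookeeper $zook --topic troubleSomeTopic
# topics pending deletion are listed as child nodes here
zookeeper-shell.sh $zook ls /admin/delete_topics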
I am looking for a way to delete a topic or all its messages using kafkacat.
Is it possible, or is the only way through the script listed here?
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic mytopic
There is no topic deletion feature in kafkacat at this stage, according to the man page and the GitHub source code. So the only way is to use the kafka-topics script.
github source code
man page
kafkacat is a generic non-JVM producer and consumer for Apache Kafka
0.8, think of it as a
netcat for Kafka.
In producer mode ( -P ), kafkacat reads messages from stdin, delimited with a configurable
delimiter and produces them to the provided Kafka cluster, topic and partition. In consumer
mode ( -C ), kafkacat reads messages from a topic and partition and prints them to stdout
using the configured message delimiter.
If neither -P or -C are specified kafkacat attempts to figure out the mode automatically
based on stdin/stdout tty types.
kafkacat also features a metadata list mode ( -L ), to display the current state of the
Kafka cluster and its topics and partitions.
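For example, the -L mode is a quick way to see which topics and partitions the cluster actually reports. A minimal sketch (broker address and topic name are placeholders):
# list brokers, topics, partition leaders and replicas known to the cluster
kafkacat -L -b localhost:9092
# restrict the metadata listing to a single topic
kafkacat -L -b localhost:9092 -t mytopic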
As @Naween Banuka pointed out, you can also use zookeeper-shell.sh or zkCli.sh (found under zookeeper/bin) to do this:
List the existing topics : ./zookeeper-shell.sh localhost:2181 ls /brokers/topics
Remove Topic : ./zookeeper-shell.sh localhost:2181 rmr /brokers/topics/yourtopic
Yes, it's possible.
But first, you have to enable topic deletion on all brokers:
change delete.topic.enable to true.
By default, it's false (in the server.properties file).
Then use the topic delete command.
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic mytopic
If you want to delete that topic permanently,
you can use the ZooKeeper delete command.
List the existing topics : ./zookeeper-shell.sh localhost:2181 ls /brokers/topics
Remove Topic : ./zookeeper-shell.sh localhost:2181 rmr /brokers/topics/yourtopic
I have a requirement where I have to read from a Kafka 0.8.2 cluster, process the data, and write to a Kafka 0.10.2 cluster.
Please help me find a solution.
Use Kafka MirrorMaker. You can mirror the data from 0.8.2 to 0.10.2, then process it in 0.10.2.
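A minimal sketch of the invocation, not a tested setup: consumer.properties points the consumer at the source 0.8.2 cluster and producer.properties points the producer at the target 0.10.2 cluster; the exact property keys depend on which MirrorMaker version you run (typically the one shipped with the older cluster, since newer brokers accept older clients but not the other way around):
# mirror every topic matching the whitelist from the 0.8.2 cluster to the 0.10.2 cluster
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist 'mytopic'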
Not sure it's the best option, but you could use 2 machines (not tested):
box 1: consumer then netcat
bin/kafka-console-consumer.sh --zookeeper myzookeeper:2181 --topic test --from-beginning | nc 1.2.3.4 5600
box 2 (IP 1.2.3.4): netcat then producer
nc -l -p 5600 | bin/kafka-console-producer.sh --broker-list myotherbroker:9092 --topic test
I need to find out a way to ask Kafka for a list of topics. I know I can do that using the kafka-topics.sh script included in the bin/ directory. Once I have this list, I need all the consumers per topic. I could not find a script in that directory, nor a class in the kafka-consumer-api library, that allows me to do it.
The reason behind this is that I need to figure out the difference between the topic's offset and the consumers' offsets.
Is there a way to achieve this? Or do I need to implement this functionality in each of my consumers?
Use kafka-consumer-groups.sh
For example
bin/kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
bin/kafka-consumer-groups.sh --describe --group mygroup --bootstrap-server localhost:9092
You can use this for Kafka version 0.9.0.0:
./kafka-consumer-groups.sh --list --zookeeper hostname:portnumber
to view the groups you have created. This will display all the consumer group names.
./kafka-consumer-groups.sh --describe --zookeeper hostname:portnumber --group consumer_group_name
To view the details
GROUP, TOPIC, PARTITION, CURRENT OFFSET, LOG END OFFSET, LAG, OWNER
I realize that this question is nearly 4 years old now. Much has changed in Kafka since then. This is mentioned above, but only in small print, so I write this for users who stumble over this question as late as I did.
Offsets are now stored in a Kafka topic by default (not in ZooKeeper any more); see Offsets stored in Zookeeper or Kafka?
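You can see that internal topic alongside your own topics. A quick sketch (the broker address is a placeholder):
# consumer offsets now live in the internal __consumer_offsets topic
kafka-topics --bootstrap-server kafka:9092 --list | grep __consumer_offsets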
There's a kafka-consumer-groups utility which returns all this information, including the offsets of the topic's partitions and of the consumer, and even the lag. (Remark: when you ask for the topic's offset, I assume you mean the offsets of the topic's partitions.) In my Kafka 2.0 test cluster:
kafka-consumer-groups --bootstrap-server kafka:9092 --describe --group console-consumer-69763
Consumer group 'console-consumer-69763' has no active members.
TOPIC   PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
pytest  0          5               6               1    -            -     -
All the consumers per topic
(Replace --zookeeper with --bootstrap-server to get groups stored by newer Kafka clients)
Get all consumers-per-topic as a tab-separated table of topic and consumer:
for t in `kafka-consumer-groups.sh --zookeeper <HOST>:2181 --list 2>/dev/null`; do
echo $t | xargs -I {} sh -c "kafka-consumer-groups.sh --zookeeper <HOST>:2181 --describe --group {} 2>/dev/null | grep ^{} | awk '{print \$2\"\t\"\$1}' "
done > topic-consumer.txt
Make these pairs unique:
cat topic-consumer.txt | sort -u > topic-consumer-u.txt
Get the desired one:
less topic-consumer-u.txt | grep -i <TOPIC>
I do not see it mentioned here, but a command that I use often and that gives me a bird's-eye view of all groups, topics, partitions, offsets, lags, consumers, etc.:
kafka-consumer-groups.bat --bootstrap-server localhost:9092 --describe --all-groups
A sample would look like this:
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
Group Topic 2 7 7 0 <SOME-ID> XXXX <SOME-ID>
...
The most important column is LAG: for a healthy platform it should ideally be 0 (or close to 0, or at least a low number for high-throughput topics) at all times, so make sure you monitor it! ;-)
P.S:
An interesting article on how you can monitor the lag can be found here.
Kafka stores all this information in ZooKeeper. You can see all the topic-related information under brokers -> topics. If you wish to get all the topics programmatically, you can do that using the ZooKeeper API.
It is explained in detail in the links below:
Tutorialspoint, Zookeeper Programmer guide
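For a quick look without writing any code, the same ZooKeeper path can also be read from the shell. A minimal sketch (ZooKeeper address is a placeholder):
# topics are registered as child znodes under /brokers/topics
zookeeper-shell.sh localhost:2181 ls /brokers/topics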
High-level consumers are registered in ZooKeeper, so you can fetch a list from ZK, similar to the way kafka-topics.sh fetches the list of topics. I don't think there's a way to collect all consumers; any application sending a few consume requests is actually a "consumer", and you cannot tell whether it is done already.
On the consumer side, there's a JMX metric exposed to monitor the lag. Also, there is Burrow for lag monitoring.
You can also use kafkactl for this:
# get all consumer groups (output as yaml)
kafkactl get consumer-groups -o yaml
# get only consumer groups assigned to a single topic (output as table)
kafkactl get consumer-groups --topic topic-a
Sample output (e.g. as yaml):
name: my-group
protocoltype: consumer
topics:
- topic-a
- topic-b
- topic-c
Disclaimer: I am a contributor to this project.
I need to delete the topic test in Apache Kafka 0.8.1.1.
As expressed in the documentation here, I have executed:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
However, this results in the following message:
Command must include exactly one action: --list, --describe, --create or --alter
How can I delete this topic?
Deleting a topic doesn't always work in 0.8.1.1.
Deletion should work in the next release, 0.8.2:
kafka-topics.sh --delete --zookeeper localhost:2181 --topic your_topic_name
Topic your_topic_name is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
You may also pass in a bootstrap server instead of zookeeper:
kafka-topics.sh --bootstrap-server kafka:9092 --delete --topic your_topic_name
Is it possible to delete a topic?
Jira KAFKA-1397
It seems that the deletion command was not officially documented in Kafka 0.8.1.x because of a known bug (https://issues.apache.org/jira/browse/KAFKA-1397).
Nevertheless, the command was still shipped in the code and can be executed as:
bin/kafka-run-class.sh kafka.admin.DeleteTopicCommand --zookeeper localhost:2181 --topic test
In the meantime, the bug got fixed and the deletion command is now officially available from Kafka 0.8.2.0 as:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
Add the line below to ${kafka_home}/config/server.properties:
delete.topic.enable=true
Restart the Kafka server with the new config:
${kafka_home}/bin/kafka-server-start.sh ~/kafka/config/server.properties
Delete the topics you wish to:
${kafka_home}/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic daemon12
Andrea is correct; we can do it using the command line.
We can also do it programmatically:
// uses org.I0Itec.zkclient.ZkClient and kafka.utils.ZkUtils from the Kafka 0.8.x classpath
ZkClient zkClient = new ZkClient("localhost:2181", 10000);
// recursively remove the topic's znode (/brokers/topics/test2) from ZooKeeper
zkClient.deleteRecursive(ZkUtils.getTopicPath("test2"));
Actually, I do not recommend deleting topics on Kafka 0.8.1.1. I can delete a topic with this method, but if you check the ZooKeeper log, the deletion messes it up.
Steps to Delete 1 or more Topics in Kafka
#To delete topics in Kafka, the delete option needs to be enabled on the Kafka server.
1. Go to {kafka_home}/config/server.properties
2. Uncomment delete.topic.enable=true
#To delete one topic in Kafka, enter the following command:
kafka-topics.sh --delete --zookeeper localhost:2181 --topic <your_topic_name>
#To delete more than one topic from Kafka
(good for testing purposes, where I created multiple topics and had to delete them for different scenarios):
Stop the Kafka server and ZooKeeper.
Go to the server folder where the logs are stored (defined in their config files) and delete the kafka-logs and zookeeper folders manually.
Restart ZooKeeper and the Kafka server and try to list topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
If no topics are listed, then all topics have been deleted successfully. If topics are listed, the delete was not successful; try the above steps again or restart your computer. A rough command-line sketch of these steps follows below.
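The sketch below assumes the stock default paths; check log.dirs in server.properties and dataDir in zookeeper.properties for your installation:
# stop both services
bin/kafka-server-stop.sh
bin/zookeeper-server-stop.sh
# wipe the data directories (default locations; adjust to your configs)
rm -rf /tmp/kafka-logs /tmp/zookeeper
# start them again and verify that no topics remain
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
bin/kafka-server-start.sh -daemon config/server.properties
bin/kafka-topics.sh --list --zookeeper localhost:2181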
You can delete a specific Kafka topic (example: test) from the ZooKeeper shell (zookeeper-shell.sh). Use the command below to delete the topic:
rmr {path of the topic}
example:
rmr /brokers/topics/test
These steps will delete all topics and data:
Stop the Kafka server and the ZooKeeper server.
Remove the data directories of both services; by default on Windows they are
C:/tmp/kafka-logs and C:/tmp/zookeeper.
Then start the ZooKeeper server and the Kafka server.
The command:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
unfortunately only marks the topic for deletion;
the deletion itself does not happen.
That causes trouble when testing scripts that prepare a Kafka configuration.
Connected threads:
Purge Kafka Queue
Is there a way to delete all the data from a topic or delete the topic before every run?
As mentioned in the documentation here:
Topic deletion option is disabled by default. To enable it set the server config
delete.topic.enable=true
Kafka does not currently support reducing the number of partitions for a topic or changing the replication factor.
Make sure delete.topic.enable=true
Adding to the above answers: one also has to delete the metadata associated with that topic under the ZooKeeper consumer offsets path.
bin/zookeeper-shell.sh zookeeperhost:port
rmr /consumers/<sample-consumer-1>/offsets/<deleted-topic>
Otherwise the lag will show as negative in Kafka monitoring tools that are based on ZooKeeper.
First, you run this command to delete your topic:
$ bin/kafka-topics.sh --delete --bootstrap-server localhost:9092 --topic <topic_name>
List the active topics to check that the topic was deleted completely:
$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092
If you have issues deleting the topics, try deleting the topic with the
$KAFKA_HOME/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic your_topic_name
command. Then, to verify the deletion, go to the Kafka logs directory, which is normally located under /tmp/kafka-logs/, and delete the your_topic_name directories via rm -rf.
Remember to monitor the whole process via a kafka management tool like Kafka Tool.
The mentioned process above will remove the topics without kafka server restart.
There is actually a solution without touching those bin/kafka-*.sh: If you have installed kafdrop, then simply do:
curl -XPOST http://your-kafdrop-domain/topic/THE-TOPIC-YOU-WANT-TO-DELETE/delete
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic <topic-name>
Step 1: Make sure you are connected to ZooKeeper and that Kafka is running.
Step 2: To delete the Kafka topic, run the kafka-topics script with the ZooKeeper host and port, --topic with the name of your topic, and --delete; it will delete the topic.
# Delete the kafka topic
bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic name_of_topic --delete
This worked for me:
kafka-topics --delete --bootstrap-server localhost:9092 --topic user-commands
I am using the Confluent CLI and run this in the directory where I installed it.
Recent Kafka versions are removing the ZooKeeper dependency. Therefore, you should instead reference the brokers (through --bootstrap-server):
kafka-topics \
--bootstrap-server localhost:9092,localhost:9093,localhost:9094 \
--delete \
--topic topic_for_deletion
For Confluent Cloud, use:
$ confluent kafka topic delete my_topic