Test if Kafka is ready and topics are reachable from the CLI - apache-kafka

I was looking around and couldn't find anything other than kafka-topics --list.
I'm running Kafka in a K8s environment and have an init container that creates a couple of topics. I want my main container to start only once the topics are created and are "subscribable". I believe kafka-topics --list only reaches ZooKeeper, because my pod still shows error messages about the topic.
I did try kafka-console-consumer, but even if the topic is not present it doesn't exit with status 1. It does exit with status 1 if the bootstrap server is not reachable. I'm looking for behaviour like the following:
kafka-console-consumer --bootstrap-server correct-bootstrap-server:9092 --topic correct-topic --timeout-ms 100
exits with 0 (this one works)
kafka-console-consumer --bootstrap-server wrong-bootstrap-server:9092 --topic wrong-topic --timeout-ms 100
exits with a non-zero exit code (this one works too).
kafka-console-consumer --bootstrap-server correct-bootstrap-server:9092 --topic wrong-topic --timeout-ms 100
should exit with a non-zero exit code (this one doesn't work; it exits with code 0)
Thanks.

It is not totally trivial to make sure from the CLI that a Kafka topic is "ready"; many things can go wrong.
We had the same issue, and the approach we currently take involves several calls to the kafka-topics CLI:
We make sure the topic exists with: kafka-topics.sh --describe --topic FOO
Check that all partitions have a leader: kafka-topics.sh --describe --topic FOO --unavailable-partitions (output should be empty)
Check that all partitions are fully replicated: kafka-topics.sh --describe --topic FOO --under-replicated-partitions (output should be empty)
That still doesn't make it 100% certain that the topic is "ready", but it works for us.
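Putting those checks together, a minimal readiness script for an init container could look like the sketch below. It assumes a kafka-topics.sh version that accepts --bootstrap-server (older versions take --zookeeper instead); the broker address and topic name are placeholders:
#!/bin/sh
BROKER=my-kafka:9092   # placeholder bootstrap server
TOPIC=FOO              # placeholder topic
# Topic must exist: --describe must produce some output
[ -n "$(kafka-topics.sh --bootstrap-server "$BROKER" --describe --topic "$TOPIC" 2>/dev/null)" ] || exit 1
# All partitions must have a leader: this listing must be empty
[ -z "$(kafka-topics.sh --bootstrap-server "$BROKER" --describe --topic "$TOPIC" --unavailable-partitions)" ] || exit 1
# All partitions must be fully replicated: this listing must be empty
[ -z "$(kafka-topics.sh --bootstrap-server "$BROKER" --describe --topic "$TOPIC" --under-replicated-partitions)" ] || exit 1
exit 0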

kafka-topics can list under-replicated, offline and under-min-isr partitions.
The best bet is to check that your topic is not under-replicated. If it isn't, it should be ready.
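For example (flag names as in recent kafka-topics versions; each command should print nothing for a healthy topic):
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic FOO --under-replicated-partitions
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic FOO --unavailable-partitions
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic FOO --under-min-isr-partitions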

Related

Kafka is not sending messages to other partitions

Apache Kafka installed on Mac (Intel).
Single local producer and single local consumer.
1 topic with 3 partitions and a replication factor of 1 is created:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic animal --partitions 3 --replication-factor 1
Producer code:
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic animal
Producer Messages:
>alligator
>crocodile
>tiger
When producing messages (manually via producer-console), all go into the same partition. Shouldn't they get distributed across partitions?
I've tried with 3 records (as above), but they get sent to 1 partition only. Checked within tmp/kafka-logs/topic-0/00**00.log
The logs for the other partitions are empty.
I've tried with tens of records, but no luck.
I even increased the default partition configuration (num.partitions=3) within 'config/server.properties', but no luck.
I've also tried with different topics, but no luck.
Starting with Kafka 2.4, the default partitioner was changed from round-robin to sticky, which will stick to the same partition (pun intended) for an entire batch.
With my Kafka version, kafka-console-producer uses a default batch size of 16384 bytes, so once you produce enough messages to fill that buffer, the partition will change.
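If you want round-robin behaviour back for this kind of console test, one option (a sketch, assuming your client version still ships org.apache.kafka.clients.producer.RoundRobinPartitioner) is to override the partitioner explicitly:
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic animal --producer-property partitioner.class=org.apache.kafka.clients.producer.RoundRobinPartitioner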
If a producer produces messages with the same key, they are guaranteed to land on the same partition. So in your case, if you want the messages spread across partitions, make sure to publish them with different keys.
You will need to set the property below:
--property parse.key=true
See the command below to produce records with a key:
kafka-console-producer --broker-list 127.0.0.1:9092 --topic first_topic --property parse.key=true --property key.separator=,
> key1,value1
> key2,value2
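To verify where the records actually landed, one option is to print the end offset of each partition (a sketch; broker address assumed to be the same as above):
kafka-run-class kafka.tools.GetOffsetShell --broker-list 127.0.0.1:9092 --topic first_topic --time -1
# prints one line per partition in the form topic:partition:end-offset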

How compaction works in Apache Kafka

My input is 1:45$. We process the message, and then I update it to 1:null.
I still see the 1:45$ in the topic along with the 1:null (I can see both messages).
I want the output to be only 1:null in the same topic.
I have used this code:
kafka-topics --create --zookeeper zookeeper:2181 --topic latest-product-price --replication-factor 1 --partitions 1 --config "cleanup.policy=compact" --config "delete.retention.ms=100" --config "segment.ms=100" --config "min.cleanable.dirty.ratio=0.01"
kafka-console-producer --broker-list localhost:9092 --topic latest-product-price --property parse.key=true --property key.separator=::
1::45$
1::null
kafka-console-consumer --bootstrap-server localhost:9092 --topic latest-product-price --property print.key=true --property key.separator=:: --from-beginning
But I do not see any compaction happening in my case, and I need some input on how to end up with only 1::null.
Compaction in Kafka is not immediate. If you send two messages with the same key to a compacted topic, and you have a live consumer on that topic, that consumer will see both messages come through.
Periodically, there's a background cleaner thread that goes looking for duplicate keys in compacted topics, and removes the overwritten records, so that a consumer that pulls down the data after that log cleaner has run, will only see the last change/update for a particular key. So, topic compaction seems to be better suited for consumers that run periodically, not ones that are active 100% of the time.
One thing you can tune is how often this background log cleaner thread runs, which may let you run those periodic consumers more often. Look for the log.cleaner configuration parameters in the Kafka documentation: https://kafka.apache.org/documentation/#brokerconfigs
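For example, broker-side settings like the following (values are only illustrative, not recommendations) go in server.properties and control how aggressively the cleaner runs:
log.cleaner.enable=true
log.cleaner.backoff.ms=15000            # how long the cleaner sleeps when there is nothing to clean
log.cleaner.min.cleanable.ratio=0.01    # start cleaning a log once this fraction of it is dirty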
There's a good explanation on how Kafka log compaction works at this link:
https://medium.com/swlh/introduction-to-topic-log-compaction-in-apache-kafka-3e4d4afd2262

Kafka 0.10.2 new consumer vs old consumer

I've spent several hours trying to figure out what was going on, but didn't manage to find a solution.
Here is my set up on a single machine:
1 zookeeper running
3 broker running (on port 9092/9093/9094)
1 topic with 3 partitions and a replication factor of 3 (each partition is properly assigned across the brokers)
I'm using the kafka-console-producer to insert messages. If I check the replication offset (cat replication-offset-checkpoint), I see that my messages are properly ingested by Kafka.
Now I use the kafka console consumer (new):
sudo bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic testTopicPartitionned2
I don't see anything consumed. I tried deleting my log folders (/tmp/kafka-logs-[1,2,3]) and creating new topics, but still nothing.
However when I use the old kafka consumer:
sudo bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic testTopicPartitionned2
I can see my messages.
Am I missing something big here to make this new consumer work?
Thanks in advance.
Check what setting the consumer is using for the auto.offset.reset property.
This will affect what a consumer group without a previously committed offset will do in terms of setting where to start reading messages from a partition.
Check the Kafka docs for more on this.
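For example, you can set it explicitly on the console consumer (a sketch using your broker address; for a fresh group, --from-beginning has a similar effect):
sudo bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testTopicPartitionned2 --consumer-property auto.offset.reset=earliest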
Try providing all your brokers to the --bootstrap-server argument to see if you notice any difference:
sudo bin/kafka-console-consumer.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --from-beginning --topic testTopicPartitionned2
Also, your topic name is rather long. I assume you've already made sure you provide the correct topic name.

Kafka Connect Offsets. Get/Set?

How do I get, set, or reset the offset of a Kafka Connect connector/task/sink?
I can use the /usr/bin/kafka-consumer-groups tool which runs kafka.admin.ConsumerGroupCommand to see the offsets for all my regular Kafka consumer groups. However, Kafka Connect tasks and groups do not show up with this tool.
Similarly, I can use the zookeeper-shell to connect to Zookeeper and I can see zookeeper entries for regular Kafka consumer groups, but not for Kafka Connect sinks.
As of 0.10.0.0, Connect doesn't provide an API for managing offsets. It's something we want to improve in the future, but it's not there yet. The ConsumerGroupCommand would be the right tool to manage offsets for sink connectors. Note that source connector offsets are stored in a special offsets topic for Connect (they aren't like normal Kafka offsets since they are defined by the source system; see offset.storage.topic in the worker configuration docs). Since sink connectors use the new consumer, they won't store their offsets in ZooKeeper -- all modern clients use native Kafka-based offset storage. The ConsumerGroupCommand can work with these offsets; you just need to pass the --new-consumer option.
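If you just want to inspect (not change) source connector offsets, you can read that offsets topic directly, assuming your worker config uses offset.storage.topic=connect-offsets:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic connect-offsets --from-beginning --property print.key=true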
You can't set offsets, but you can use kafka-consumer-groups.sh tool to "scroll" the feed forward.
The consumer group of your connector has a name of connect-*CONNECTOR NAME*, but you can double check:
unset JMX_PORT; ./bin/kafka-consumer-groups.sh --bootstrap-server *KAFKA HOSTS* --list
To view current offset:
unset JMX_PORT; ./bin/kafka-consumer-groups.sh --bootstrap-server *KAFKA HOSTS* --group connect-*CONNECTOR NAME* --describe
To move the offset forward:
unset JMX_PORT; ./bin/kafka-console-consumer.sh --bootstrap-server *KAFKA HOSTS* --topic *TOPIC* --max-messages 10000 --consumer-property group.id=connect-*CONNECTOR NAME* > /dev/null
I suppose you can move the offset backward as well by deleting the consumer group first, using the --delete flag.
Don't forget to pause and resume your connector via Kafka Connect REST API.
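For example (a sketch, assuming the Connect REST API is on localhost:8083 and a connector named my-sink-connector):
curl -X PUT http://localhost:8083/connectors/my-sink-connector/pause
# ...scroll the consumer group forward as shown above...
curl -X PUT http://localhost:8083/connectors/my-sink-connector/resume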
In my case (testing reading files into a producer and consuming them in the console, all locally), I just saw this in the producer output:
offset.storage.file.filename=/tmp/connect.offsets
So I wanted to open it, but it is binary, with mostly unrecognizable characters.
I deleted it (renaming it also works), and then I could write into the same file and get the file content from the consumer again. You have to restart the console producer for this to take effect, because it reads the offset file on startup and creates a new one if it isn't there, so the offset is reset.
If you want to reset it without deletion, you can use:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group <group-name> --reset-offsets --to-earliest --topic <topic_name>
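Note that by default this only prints the planned new offsets (a dry run); to actually apply the reset, add --execute and make sure the group has no active consumers:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group <group-name> --reset-offsets --to-earliest --topic <topic_name> --execute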
You can check all group names by:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
and check details of each group:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group <group_name> --describe
In a production environment, this offset is managed by ZooKeeper (for the older, ZooKeeper-based consumers), so more steps (and caution) are needed. You can refer to these pages:
https://metabroadcast.com/blog/resetting-kafka-offsets
https://community.hortonworks.com/articles/81357/manually-resetting-offset-for-a-kafka-topic.html
Steps:
kafka-topics --list --zookeeper localhost:2181
kafka-run-class kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic vital_signs --time -1  # -1 for largest, -2 for smallest
set /consumers/{yourConsumerGroup}/offsets/{yourFancyTopic}/{partitionId} {newOffset}
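The set command in the last step is issued inside the ZooKeeper shell, for example:
bin/zookeeper-shell.sh localhost:2181
set /consumers/{yourConsumerGroup}/offsets/{yourFancyTopic}/{partitionId} {newOffset}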

Confusion about delete Kafka Topic

I'm using Kafka 0.8.0, the Cloudera version.
When I deleted the topic, for example: kafka-topics --zookeeper 10.0.0.11:2181/ --delete --topic test
it responded:
Topic test is already marked for deletion.
But afterwards, when I recreated it, it threw an exception as follows:
kafka-topics --create --zookeeper 10.0.0.11:2181 --partitions 90 --replication-factor 2 --topic test
Error while executing topic command Topic "test" already exists.
kafka.common.TopicExistsException: Topic "test" already exists.
Any ideas? How should I delete the topic and its data?
My Kafka version is kafka_2.10-0.8.2.2, and the steps below work for me (from Delete topic in Kafka 0.8.1.1):
Add the line below to ${kafka_home}/config/server.properties:
delete.topic.enable=true
Restart the Kafka server with the new config:
${kafka_home}/bin/kafka-server-start.sh ~/kafka/config/server.properties
Delete the topics you wish to remove:
${kafka_home}/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic daemon12
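You can then confirm the topic is gone by listing the topics again:
${kafka_home}/bin/kafka-topics.sh --list --zookeeper localhost:2181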
More information from Kafka FAQ:
Deleting a topic is supported since 0.8.2.x. You will need to enable
topic deletion (setting delete.topic.enable to true) on all brokers
first.
Make sure that in Kafka's config/server.properties file the 'delete.topic.enable' property is set to true.
Then use the command below:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
If the topic is still not deleted, follow the next steps.
Open a terminal in ZooKeeper client mode:
A. Stop ZooKeeper
B. rmr /brokers/topics/<topic_name>
C. Check the given topic using the command below:
/bin/kafka-topics.sh --zookeeper maxiq:2181 --list
If delete.topic.enable is set to false (the default), topics are not actually deleted when the --delete command is executed, as the command response indicates:
Topic test is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
To overcome this, use the steps mentioned by #Shawn.
The best and easiest way to set delete.topic.enable=true is not to modify the server.properties file directly.
If Kafka is restarted from Ambari, Ambari will overwrite this file and delete.topic.enable will be false again.
Instead, in Ambari go to Kafka / Configs / Advanced kafka-broker and set delete.topic.enable=true there; that will work.
I only just found that out.
Delete a topic in Kafka:
step 1 -->> cd /usr/lib/zookeeper/bin
step 2 -->> zkCli.sh -server 127.0.0.1:2181
step 3 -->> rmr /brokers/topics/topic_name