How to measure Kafka rebalance duration - apache-kafka

Is there a tool available to measure Kafka rebalancing duration, or to check any intermediate status?
We have observed many times that a specific consumer gets stuck during a Kafka rebalance, seemingly forever; we never waited for it to finish.

There was a bug in Apache Kafka 2.4.1, exactly related to infinite rebalancing of consumers, which has since been fixed; MSK offers 2.4.1.1.
If your MSK cluster is running a later version, you can open a support case.
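If you just want a number for how long a rebalance takes for a given consumer, one option is to time it from a ConsumerRebalanceListener. This is a minimal sketch, assuming a plain Java consumer; the broker address, group id and topic name are placeholders:

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceTimer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"), new ConsumerRebalanceListener() {
            private long revokedAt;

            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Called when a rebalance starts for this member.
                revokedAt = System.currentTimeMillis();
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Called once the new assignment arrives; the difference is roughly
                // how long this member spent in the rebalance.
                if (revokedAt > 0) {
                    System.out.println("Rebalance took " + (System.currentTimeMillis() - revokedAt) + " ms");
                }
            }
        });

        while (true) {
            consumer.poll(Duration.ofMillis(500)).forEach(record -> { /* process */ });
        }
    }
}

Newer consumer clients also expose rebalance-related metrics over JMX, which may be easier to plug into whatever monitoring you already have.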

Related

KafkaStreams stop consuming partitions after partition leader rebalance

We have experienced an issue that could be caused by the parameter auto.leader.rebalance.enable, which is set to true by default on brokers.
In detail, when the automatic rebalance occurs, for example after a broker restart, some partition leaders are moved to match the preferred leader.
After this event, some stateful Kafka Streams applications block on the source partitions whose leader has been moved, and the consumer lag starts to grow.
Is this a known issue? Why don't the applications receive the information about the leader change?
The tactical solution we found, in case we need to execute a rolling restart of the brokers, is:
Stop the stateful applications.
Perform the rolling restart of the brokers.
Wait 5 minutes (the default value) until the automatic leader rebalance occurs.
Start the stateful applications.
We are using Confluent Platform Community 5.2.2, deployed on a 3-node on-prem cluster.
We are trying to recreate what happened in the test environment, but without success. Is it possible that it is influenced by the load on the cluster, which is much lower in test?
Thanks in Advance!
Giorgio
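For reference, the broker-side settings (server.properties) behind this automatic rebalance are the following; the values shown are the Kafka defaults, and the 5-minute wait mentioned above corresponds to leader.imbalance.check.interval.seconds:

auto.leader.rebalance.enable=true
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10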

Kafka broker occasionally takes much longer than usual to load logs on startup

We are observing that Kafka brokers occasionally take much more time to load logs on startup than usual. Much longer in this case means 40 minutes instead of at most 1 minute. This happens during a rolling restart following the procedure described by Confluent, after the broker has reported that the controlled shutdown was successful.
Kafka Setup
Confluent Platform 5.5.0
Kafka Version 2.5.0
3 Replicas (minimum 2 in sync)
Controlled broker shutdown enabled
1TB of AWS EBS for Kafka log storage
Other potentially useful information
We make extensive use of Kafka Streams
We use exactly-once processing and transactional producers/consumers
Observations
It is not always the same broker that takes a long time.
It does not only occur when the broker is the active controller.
A log partition that loads quickly (15ms) can take a long time (9549 ms) for the same broker a day later.
We experienced this issue before on Kafka 2.4.0 but after upgrading to 2.5.0 it did not occur for a few weeks.
Does anyone have an idea what could be causing this? Or what additional information would be useful to track down the issue?

Kafka streams 1.0: processing timeout with high max.poll.interval.ms and session.timeout.ms

I am using a stateless processor with Kafka Streams 1.0 and Kafka broker 1.0.1.
The problem is that the CustomProcessor gets closed every few seconds, which results in a rebalance signal. I am using the following configs:
session.timeout.ms=15000
heartbeat.interval.ms=3000 // set to 1/3 of session.timeout.ms
max.poll.interval.ms=Integer.MAX_VALUE // set this large because I am doing intensive computational operations that might take up to 10 minutes to process 1 Kafka message (NLP operations)
max.poll.records=1
Despite this configuration and my understanding of how the Kafka timeout configurations work, I see the consumer rebalancing every few seconds.
I already went through the article below and other Stack Overflow questions about how to tune long-running operations and avoid a very long session timeout that would make failure detection too late; however, I still see unexpected behavior, unless I am misunderstanding something.
KIP-62
Diff between session.timeout.ms and max.poll.interval
Kafka kstreams processing timeout
For the consumer environment setup, I have 8 machines, each with 16 cores, consuming from 1 topic with 100 partitions, and I am following the practices this Confluent doc here recommends.
Any pointers?
I figured it out. After lots of debugging and enabling verbose logging for both the Kafka Streams client and the broker, it turned out to be 2 things:
There is a critical bug in Streams 1.0.0 (HERE), so I upgraded my client version from 1.0.0 to 1.0.1.
I updated the value of the consumer property default.deserialization.exception.handler from org.apache.kafka.streams.errors.LogAndFailExceptionHandler to org.apache.kafka.streams.errors.LogAndContinueExceptionHandler.
After the above 2 changes, everything has run perfectly with no restarts. I am using Grafana to monitor the restarts, and for the past 48 hours not a single restart has happened.
I might do more troubleshooting to pin down which of the 2 items above was the real fix, but I am in a hurry to deploy to production, so if anybody is interested in starting from there, go ahead; otherwise, once I get time, I will do the further analysis and update the answer!
So happy to get this fixed!!!
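For anyone landing here later, the second change corresponds roughly to the following Streams configuration. This is just a sketch, not the poster's exact code; the application id and broker address are placeholders:

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsTimeoutConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "nlp-processor");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Skip records that fail deserialization instead of killing the stream thread
        // (which would otherwise trigger yet another rebalance).
        props.put("default.deserialization.exception.handler",
                "org.apache.kafka.streams.errors.LogAndContinueExceptionHandler");
        // The consumer settings from the question, routed to the embedded consumer.
        props.put(StreamsConfig.consumerPrefix("session.timeout.ms"), 15000);
        props.put(StreamsConfig.consumerPrefix("heartbeat.interval.ms"), 3000);
        props.put(StreamsConfig.consumerPrefix("max.poll.interval.ms"), Integer.MAX_VALUE);
        props.put(StreamsConfig.consumerPrefix("max.poll.records"), 1);
        return props;
    }
}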

When does Kafka delete a topic?

I am very new to Kafka and I am dabbling about with it.
Say I have Kafka running on a Debian machine and I have managed to create a topic with 100 messages on it.
After that initial burst of activity (i.e. placing 100 messages onto the topic via some Kafka producer), the topic just sits there idle with nothing happening (no consumers consuming and no producers producing).
I am aware of a Message Retention Policy setting, which I believe has a default value of 7 days. Let's say those 7 days pass, and the messages are indeed removed from the Topic, but what about the Topic itself?
Will Kafka eventually kill that Topic?
Also, what happens when I manually pull out the power cord of the machine that Kafka is running on? Will the topic be discarded? Or will I still have my topic after I start up the machine, run ZooKeeper and start the Kafka broker?
Any light on this matter would be appreciated.
Thank you
No, Kafka will keep the topic. It would be a bad idea for Kafka to delete topics by itself.
Before version 1.0.0 the topic deletion option (delete.topic.enable) was even set to false by default, so it wasn't possible to delete a topic without changing the config.
So the answer to your question is: Kafka never deletes topics on its own.
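In other words, messages expire via the retention settings, but a topic only disappears when deletion is explicitly requested (and delete.topic.enable allows it), for example via the admin client. A minimal sketch with a recent Java client; the broker address and topic name are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Nothing like this ever happens automatically: topics are only removed
            // when a delete is explicitly requested by a client or an operator tool.
            admin.deleteTopics(Collections.singleton("my-topic")).all().get();
        }
    }
}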

In Storm, how to migrate offsets to store in Kafka?

I've been having all sorts of instabilities related to Kafka and offsets: things like workers crashing on startup with exceptions related to invalid offsets, and other things I don't understand.
I read that it is recommended to migrate offsets to be stored in Kafka instead of ZooKeeper. I found the following in the Kafka documentation:
Migrating offsets from ZooKeeper to Kafka
Kafka consumers in earlier releases store their offsets by default in ZooKeeper. It is possible to migrate these consumers to commit offsets into Kafka by following these steps:
1. Set offsets.storage=kafka and dual.commit.enabled=true in your consumer config.
2. Do a rolling bounce of your consumers and then verify that your consumers are healthy.
3. Set dual.commit.enabled=false in your consumer config.
4. Do a rolling bounce of your consumers and then verify that your consumers are healthy.
A roll-back (i.e., migrating from Kafka back to ZooKeeper) can also be performed using the above steps if you set offsets.storage=zookeeper.
http://kafka.apache.org/documentation.html#offsetmigration
But, again, I don't understand what this is instructing me to do. I don't see anywhere in my topology config where I configure where offsets are stored. Is it buried in the cluster YAML?
Any advice on whether storing offsets in Kafka, rather than ZooKeeper, is a good idea? And how can I perform this change?
At the time of this writing Storm's Kafka spout (see documentation/README at https://github.com/apache/storm/tree/master/external/storm-kafka) only supports managing consumer offsets in ZooKeeper. That is, all current Storm versions (up to 0.9.x and including 0.10.0 Beta) still rely on ZooKeeper for storing such offsets. Hence you should not perform the ZK->Kafka offset migration you referenced above because Storm isn't compatible yet.
You will need to wait until the Storm project -- specifically, its Kafka spout -- supports managing consumer offsets via Kafka (instead of ZooKeeper). And yes, in general it is better to store consumer offsets in Kafka rather than ZooKeeper, but alas Storm isn't there yet.
Update November 2016:
The situation in Storm has improved in the meantime. There's now a new, second Kafka spout that is based on Kafka's new 0.10 consumer client, which stores consumer offsets in Kafka (and not in ZooKeeper): https://github.com/apache/storm/tree/master/external/storm-kafka-client.
However, at the time I am writing this, there are still several issues being reported by users on the storm-user mailing list (such as "Urgent help! kafka-spout stops fetching data after running for a while"), so I'd use this new Kafka spout with care, and only after thorough testing.
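For completeness, the migration quoted in the question boils down to two properties of the old high-level consumer. A sketch of the consumer configuration during phase 1 (before dual.commit.enabled is switched back to false) would be:

offsets.storage=kafka
dual.commit.enabled=true

Once all consumers are confirmed healthy, the second rolling bounce is done with dual.commit.enabled=false. Again, this only applies to consumers that manage their own offset commits; as explained above, the Storm spout in question does not.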