What happens to consumer groups in Kafka if the entire cluster goes down?

We have a consumer service that is always trying to read data from a topic using a consumer group. Due to redeployments, our Kafka cluster is periodically brought down and recreated.
Whenever the cluster comes back up, we observe that although the previous topics are picked up again (probably from ZooKeeper), the previous consumer groups are not recreated. Because of this, our running consumer process, which was created with a previous consumer group, gets stuck and never recovers.
Is this the expected behavior of consumer groups, or is there a configuration we need to enable somewhere?
Any help is greatly appreciated.

Kafka brokers keep consumer group state (membership and committed offsets) inside the cluster itself, in the internal __consumer_offsets topic. If the entire cluster is destroyed and recreated, the brokers no longer have any knowledge of those consumers and groups, including their offsets. The consumers will have to reconnect and re-establish the group, and their position starts over according to auto.offset.reset (from the beginning of the topic if it is set to earliest).
Operationally it makes more sense to keep the Kafka cluster running long-term and to do version upgrades in a rolling fashion, so you don't interrupt the service.
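To illustrate that last point, here is a minimal Java consumer sketch (broker address, group id, and topic name are placeholders): once the recreated cluster has no committed offsets for the group, the starting position is decided entirely by auto.offset.reset.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RecreatedClusterConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");    // placeholder
        props.put("group.id", "my-consumer-group");        // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // After the cluster is recreated, __consumer_offsets is gone, so there
        // are no committed offsets for this group. This setting decides where
        // the consumer starts: "earliest" = beginning of the topic,
        // "latest" (the default) = only new messages.
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(r -> System.out.printf("%s-%d@%d: %s%n",
                        r.topic(), r.partition(), r.offset(), r.value()));
            }
        }
    }
}
```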

Related

Confluent.Kafka.KafkaException: Broker: Specified group generation id is not valid

Environment
3-node Kafka Cluster
Amazon MSK
v2.3
1 topic
6 partitions
1 consumer group with 2 consumers
Running in Kubernetes
Confluent .NET SDK 1.2.2
Except for bootstrap.servers and group.id, all of the default settings.
Problem
First, one of my consumers encounters the following exception.
Confluent.Kafka.KafkaException: Broker: Specified group generation id is not valid
at Confluent.Kafka.Impl.SafeKafkaHandle.Commit(IEnumerable`1 offsets)
at Confluent.Kafka.Consumer`2.Commit(IEnumerable`1 offsets)
The exception is trapped and the consumer is supposed to retry, but instead the app sits idle. The container is still up and running, but not consuming any more messages.
What's weirder is that the broker never reassigns that consumer's partitions, so the consumer lag on those partitions begins to grow. It seems like the consumer is both alive (since the broker is not reassigning its partitions) and dead (since it cannot commit its offsets or consume more messages). If we intervene and manually restart the consumers, the partitions are reassigned and the situation goes back to normal.
I'm not entirely sure what to make of the exception above. Google doesn't offer much. The most relevant lead I have is this issue on GitHub, which involves a broker restarting. To my knowledge, that is not happening in my situation. Any assistance would be greatly appreciated.
At least I have found a solution that works for me.
In my code I was committing manually and had set EnableAutoCommit = false.
Somehow it was possible for the same offset to be committed twice. I removed the manual commits from the consumer and set EnableAutoCommit = true.
After that it worked.
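For reference, a minimal sketch of the same fix using the Java client (the Confluent .NET client exposes the equivalent EnableAutoCommit setting); broker, group, and topic names are placeholders. Committing manually with a stale group generation, e.g. right after a rebalance, is what typically triggers the "generation id is not valid" error, so letting the client auto-commit sidesteps it.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AutoCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("group.id", "my-consumer-group");       // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Equivalent of EnableAutoCommit = true in the .NET client: offsets are
        // committed periodically in the background, so the application never
        // commits the same offset twice by hand.
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> process(r.value()));  // no manual consumer.commitSync() call
            }
        }
    }

    private static void process(String value) {
        System.out.println(value); // placeholder processing
    }
}
```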

KafkaStreams stop consuming partitions after partition leader rebalance

We have run into an issue that could be caused by the parameter auto.leader.rebalance.enable, which is set to true by default on brokers.
In detail, when the automatic rebalance occurs, for example after a broker restart, some partition leaders are moved to match the preferred leader.
After this event, some stateful Kafka Streams applications block on the source partitions whose leader has been moved, and the consumer lag starts to grow.
Is this a known issue? Why don't the applications receive the information about the change of leader?
The tactical solution we found, in case we need to execute a rolling restart of the brokers, is:
Stop the stateful applications.
Perform the rolling restart of the brokers.
Wait 5 minutes (the default value) until the automatic leader rebalance occurs, or trigger a preferred leader election explicitly, as sketched after this list.
Start the stateful applications.
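A hedged sketch of triggering that election programmatically with the Java AdminClient's electLeaders call (available in client versions 2.4 and later, so newer than the Confluent 5.2.2 platform mentioned here); the topic name and partition numbers are placeholders:

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class PreferredLeaderElection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Trigger a preferred leader election for specific partitions instead
            // of waiting ~5 minutes for auto.leader.rebalance.enable to kick in.
            // Topic and partition numbers are placeholders.
            Set<TopicPartition> partitions = Set.of(
                    new TopicPartition("my-stream-input", 0),
                    new TopicPartition("my-stream-input", 1));
            admin.electLeaders(ElectionType.PREFERRED, partitions).partitions().get();
        }
    }
}
```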
We are using Confluent Platform Community 5.2.2, deployed on a 3-node on-premises cluster.
We are trying to recreate what happened in the test environment, but without success. Is it possible that it is influenced by the load on the cluster, which is much lower in test?
Thanks in Advance!
Giorgio

Does Kafka Consumer reconnect and resubscribe to topics after cluster goes down and comes back up

A Kafka consumer using librdkafka (the high-level consumer) was connected to the Kafka cluster, subscribed to 10 topics, and consuming data. There was an assign-partitions event.
Then there was a network issue due to which the cluster was not reachable. The consumer lost its connection with the group coordinator and heartbeats got stuck. There was a revoked-partitions event, on which the code calls unassign on the consumer.
When the cluster came back up, the consumer was not consuming any data, even though it was still in its while-true loop calling consume with a timeout of 1 second.
Does the consumer need to resubscribe to the topics once it is connected to the cluster again? What is a reliable way to detect in code that the consumer is connected to the cluster?
Does the consumer need to resubscribe to the topics once it is connected to the cluster again?
Yes. New group members will cause a rebalance amongst the existing members, and they need to resubscribe.
What is a reliable way to detect in code that the consumer is connected to the cluster?
You could describe the consumer group and see if there are active clients for the consumer group you are interested in.
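As a sketch of that check with the Java AdminClient (broker address and group id are placeholders; with librdkafka you would use its admin API or the kafka-consumer-groups CLI for the same information):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.clients.admin.MemberDescription;

public class DescribeGroup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        String groupId = "my-consumer-group";           // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            ConsumerGroupDescription description = admin
                    .describeConsumerGroups(Collections.singleton(groupId))
                    .describedGroups().get(groupId).get();

            // A healthy group is in state Stable and has one member per active client.
            System.out.println("State: " + description.state());
            for (MemberDescription member : description.members()) {
                System.out.printf("member %s on %s -> %s%n",
                        member.consumerId(), member.host(),
                        member.assignment().topicPartitions());
            }
        }
    }
}
```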

Fixing under replicated partitions in kafka

In our production environment, we often see that partitions go under-replicated while consuming messages from the topics. We are using Kafka 0.11. From the documentation, what I understand is:
Configuration parameter replica.lag.max.messages was removed. Partition leaders will no longer consider the number of lagging messages when deciding which replicas are in sync.
Configuration parameter replica.lag.time.max.ms now refers not just to the time passed since the last fetch request from the replica, but also to the time since the replica last caught up. Replicas that are still fetching messages from leaders but did not catch up to the latest messages in replica.lag.time.max.ms will be considered out of sync.
How do we fix this issue? What are the different reasons for replicas to go out of sync? In our scenario, we have all the Kafka brokers in a single rack of blade servers, all using the same network with 10 Gbps Ethernet (simplex). I do not see any reason for the replicas to go out of sync due to the network.
We faced the same issue.
The solution was:
Restart the ZooKeeper leader.
Restart the broker(s) that are not replicating some of the partitions.
No data loss.
The issue is due to a faulty state in ZooKeeper; there was an open issue on ZooKeeper for this, but I don't remember the number.
I faced the same issue on Kafka 2.0.
After restarting the Kafka controller node, everything caught up on the replicas.
But I am still looking for the reason why a few partitions are under-replicated while the other partitions on the same nodes for the same topic work fine, and I see this issue on random partitions.
Do NOT run reassignment for all topics together; consider running it in small portions.
Find a topic that has under-replicated partitions and where the reassignment process can't be completed (see the sketch below for doing this programmatically).
Set unclean.leader.election.enable to true for this topic.
Find an under-replicated partition that is stuck for this topic. Check its leader ID.
Stop that broker (just the service, not the instance).
Execute a Preferred Replica Election (in yahoo/kafka-manager or manually).
Start the broker back up.
Repeat for the rest of the topics that have the same problem.
I also tried this advice, but it didn't help me: https://stackoverflow.com/a/51063607/1929406
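As a sketch of the first two steps above with the Java AdminClient (broker address and topic name are placeholders; incrementalAlterConfigs needs brokers 2.3+, so on older clusters use kafka-configs.sh instead):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;
import org.apache.kafka.common.config.ConfigResource;

public class UnderReplicatedCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        String topic = "my-topic";                      // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Step 1: list partitions whose ISR is smaller than the replica set,
            // and note the current leader of each.
            TopicDescription description = admin
                    .describeTopics(Collections.singleton(topic))
                    .all().get().get(topic);
            for (TopicPartitionInfo p : description.partitions()) {
                if (p.isr().size() < p.replicas().size()) {
                    System.out.printf("partition %d under-replicated, leader=%s, isr=%s%n",
                            p.partition(), p.leader(), p.isr());
                }
            }

            // Step 2: enable unclean leader election for this topic only.
            ConfigResource resource = new ConfigResource(ConfigResource.Type.TOPIC, topic);
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("unclean.leader.election.enable", "true"),
                    AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(Map.of(resource, Collections.singleton(op))).all().get();
        }
    }
}
```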

When does Kafka delete a topic?

I am very new to Kafka and I am dabbling with it.
Say I have Kafka running on a Debian machine and I have managed to create a topic with 100 messages on it.
After that initial burst of activity (i.e. placing 100 messages onto the topic via some Kafka producer), the topic just sits there idle with nothing happening (no consumers consuming and no producers producing).
I am aware of the message retention policy setting, which I believe has a default value of 7 days. Let's say those 7 days pass and the messages are indeed removed from the topic, but what about the topic itself?
Will Kafka eventually kill that topic?
Also, what happens when I manually go and pull out the power cord of the machine that Kafka is running on? Will the topic be discarded? Or will I still have my topic after I start up the machine, run ZooKeeper, and start a Kafka broker?
Any light on this matter would be appreciated.
Thank you
No, Kafka will keep the topic; it would be a bad idea for Kafka to delete topics by itself.
Before version 1.0.0 the topic deletion option (delete.topic.enable) was set to false by default, so it wasn't even possible to delete a topic without changing the config.
So the answer to your question is: Kafka never deletes topics on its own.
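To make the distinction concrete: retention only removes old messages, and the topic (whose data and metadata are persisted on disk, so it also survives a power cut) only goes away when a delete is requested explicitly and delete.topic.enable allows it. A minimal Java AdminClient sketch, with placeholder broker and topic names:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

public class ExplicitTopicDelete {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // The topic stays until a delete is requested explicitly, like this
            // (and the broker must have delete.topic.enable=true):
            admin.deleteTopics(Collections.singleton("my-topic")).all().get(); // placeholder topic
        }
    }
}
```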