In our production environment we have more than 25 topics.
Recently we installed the KafkaOffsetMonitor tool, which is used to monitor Kafka offsets.
But the tool shows information for only 3 topics and does not show the other 22.
When we checked, we found that those 22 topics have no consumer group.
The tool can only monitor a topic if a consumer group exists for it.
The question is: how do we assign a consumer group to existing topics?
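A consumer group is not assigned to a topic explicitly; it comes into existence as soon as a consumer subscribes to the topic with a group.id and starts committing offsets. A minimal kafka-python sketch, assuming a modern client and broker (the group and topic names here are placeholders, not from the original setup):

from kafka import KafkaConsumer

# Joining with a group_id implicitly creates the consumer group
# "monitor-group" (hypothetical name) for the topic "my-topic".
consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="localhost:9092",
    group_id="monitor-group",
    enable_auto_commit=True,      # committed offsets are what monitoring tools read
    auto_offset_reset="earliest",
)

for record in consumer:
    pass  # process record.value here; offsets are committed automatically

Once such a consumer is running and committing offsets for a topic, there is consumer-group offset data for that topic; whether a particular monitoring tool picks it up depends on where it reads offsets from (ZooKeeper or the __consumer_offsets topic).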
Related
We have two different Kafka clusters with 10 brokers in each cluster, and each cluster has its own ZooKeeper ensemble. We have also set up MirrorMaker 2 to sync data between the clusters. With MM2, the offsets are synced along with the data.
We are looking to set up Active/Active for both our consumer application and our producer application.
Let's say the clusters are DC1 and DC2.
The topic name is test-mm.
With the MM2 setup:
In DC1:
test-mm
test-mm-DC2 (mirror of DC2)
In DC2:
test-mm
test-mm-DC1 (mirror of DC1)
Consumer Active/Active
In DC1, I have an application consuming data from test-mm & test-mm-DC2 with the consumer group name group1-test.
In DC2, the same application is consuming data from test-mm & test-mm-DC1 with the consumer group name group1-test.
The application runs as Active/Active on both DCs.
Now the producer in DC1 is producing to the topic test-mm in DC1, and it gets mirrored to the topic test-mm-DC1 in DC2. My assumption here is that, since the offsets get synced, we can run the consumer application on both DCs with the same consumer group name and only one consumer will get and process each message. Also, when the consumer application in DC1 goes down, the consumer application in DC2 will start processing and we can achieve real active/active for consumers. Is this correct?
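For illustration, the DC1 consumer described above might look like the following kafka-python sketch; the bootstrap address is a placeholder, and the DC2 instance would subscribe to test-mm and test-mm-DC1 instead. This only shows the subscription and shared group name, not the cross-DC failover behaviour being asked about.

from kafka import KafkaConsumer

# DC1 instance: consume the local topic plus the mirror of DC2,
# using the shared consumer group name from the question.
consumer = KafkaConsumer(
    "test-mm", "test-mm-DC2",
    bootstrap_servers="dc1-broker:9092",   # hypothetical DC1 bootstrap address
    group_id="group1-test",
)

for record in consumer:
    print(record.topic, record.partition, record.offset)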
Producer Active/Active
It may not be possible with one producer in DC1 and another producer in DC2, as ordering may not be maintained across two different producers. I am not sure whether Active/Active can be achieved with producers.
You will want two producers, one producing to test-mm in DC1 and the other producing to test-mm in DC2. Once messages have been produced to test-mm in DC1, they will be replicated to test-mm-DC1 in DC2, and vice versa. This achieves active/active, as the data will exist in both DCs, your consumers are also consuming from both DCs, and if one DC fails the other producer and consumer will continue as normal. Please let me know if this has not answered your question.
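As a sketch of that layout, each DC runs its own producer pointed only at its local cluster; kafka-python is shown here, with a hypothetical bootstrap address:

from kafka import KafkaProducer

# The DC1 producer writes only to the local test-mm topic; MM2 replicates it
# to test-mm-DC1 in DC2. The DC2 producer is configured the same way
# against the DC2 brokers.
producer = KafkaProducer(bootstrap_servers="dc1-broker:9092")
producer.send("test-mm", b"payload produced in DC1")
producer.flush()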
Hopefully my comment answers your question about exactly-once processing with MM2. The Stack Overflow post I linked quotes the following paragraph from the IBM guide: https://ibm-cloud-architecture.github.io/refarch-eda/technology/kafka-mirrormaker/#record-duplication
This Cloudera blog also mentions that exactly once processing does not apply across multiple clusters: https://blog.cloudera.com/a-look-inside-kafka-mirrormaker-2/
Cross-cluster Exactly Once Guarantee
Kafka provides support for exactly-once processing but that guarantee
is provided only within a given Kafka cluster and does not apply
across multiple clusters. Cross-cluster replication cannot directly
take advantage of the exactly-once support within a Kafka cluster.
This means MM2 can only provide at least once semantics when
replicating data across the source and target clusters which implies
there could be duplicate records downstream.
Now with regards to the below question:
Now the producer in DC1 is producing to the topic test-mm in DC1, and it gets mirrored to the topic test-mm-DC1 in DC2. My assumption here is that, since the offsets get synced, we can run the consumer application on both DCs with the same consumer group name and only one consumer will get and process each message. Also, when the consumer application in DC1 goes down, the consumer application in DC2 will start processing and we can achieve real active/active for consumers. Is this correct?
See this post here, they ask a similar question: How are consumers setup in Active - Active Kafka setup
I've not configured MM2 in an active/active architecture before so can't confirm whether you would have two active consumers for each DC or one. Hopefully another member will be able to answer this question for you.
We have a single consumer group set up for the continuous migration of multiple topics. There is a service running that connects a new consumer instance to each topic. We have the ability to disable the consumers for single topics while keeping the others running.
Sometimes we have to reset the offset for a specific topic so that the migration starts over. Is there a way to avoid having to disable all of the consumers to do so? As offsets are kept per topic (and partition) within a consumer group, I don't see why we have to disable the connections to all of the other topics. If this is not possible, what is the benefit of reading multiple topics with a single consumer group?
Example:
service A -> consumer with group "migration" -> consumes topic A
service A -> consumer with group "migration" -> consumer for topic B is stopped
service A -> consumer with group "migration" -> consumes topic C
set offset for group "migration" for topic B
Error: Assignments can only be reset if the group 'migration' is inactive, but the current state is Stable.
Each member of a consumer group is assigned topic-partitions, so while the group is up (continuously polling and committing for its assigned partitions) you cannot reset offsets.
But
There is a solution that allows you to do it without a service interruption (stopping the consumers).
First, switch your partition assignor to CooperativeStickyAssignor:
partition.assignment.strategy:
[RangeAssignor,CooperativeStickyAssignor]
Restart all application instances one by one.
Then remove the RangeAssignor:
partition.assignment.strategy:
[CooperativeStickyAssignor]
Restart all application instances one by one again.
After this change, you can now:
Remove the topic (the one whose offsets you want to reset) from the consumer's topic list.
Build and redeploy your app.
=> The cooperative assignor will revoke only the partitions of this topic from the assignment, without stopping consumption from the other topics.
Reset your offsets with:
kafka-consumer-groups --bootstrap-server xxxx --group GROUP_ID_NAME --topic TOPIC_NAME --reset-offsets --to-xxx ... --execute
Then put the topic back in your consumer config, and build and redeploy again.
Note: this feature is only available from Kafka client version 2.4 onwards.
More details here: https://www.confluent.io/fr-fr/blog/cooperative-rebalancing-in-kafka-streams-consumer-ksqldb/
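For reference, the steps above use the Java client property. If the consumers happen to use confluent-kafka (librdkafka) instead, the equivalent end state looks roughly like this sketch; the group name and topics are taken from the example above, and this assumes a librdkafka version that supports the cooperative-sticky strategy:

from confluent_kafka import Consumer

# End state after the rolling migration: only the cooperative assignor is configured.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "migration",
    "partition.assignment.strategy": "cooperative-sticky",
})
consumer.subscribe(["topic-a", "topic-c"])   # topic B removed while its offsets are reset

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        # ... process msg.value() ...
finally:
    consumer.close()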
Our Kafka machines are installed as part of the Hortonworks packages; the Kafka version is 0.1X.
We run the deeg_data applications, which consume data from Kafka topics.
In recent days we saw that our deeg_data applications failed, and we started looking for the root cause.
On the Kafka cluster we see the following behavior:
/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh --group deeg_data --describe --bootstrap-server kafka1:6667
Consumer group 'deeg_data' is rebalancing
From the Kafka side, the cluster is healthy, all topics are balanced, and all Kafka brokers are up and registered correctly in ZooKeeper.
After some time (a couple of hours), we ran the same command again, this time without the 'Consumer group deeg_data is rebalancing' message,
and we got the following correct results:
/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh --group deeg_data --describe --bootstrap-server kafka1:6667
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER
deeg_data pot.sdr.proccess 0 6397256247 6403318505 6062258 consumer-1_/10.3.6.237
deeg_data pot.sdr.proccess 1 6397329465 6403390955 6061490 consumer-1_/10.3.6.237
deeg_data pot.sdr.proccess 2 6397314633 6403375153 6060520 consumer-1_/10.3.6.237
deeg_data pot.sdr.proccess 3 6397258695 6403320788 6062093 consumer-1_/10.3.6.237
deeg_data pot.sdr.proccess 4 6397316230 6403378448 6062218 consumer-1_/10.3.6.237
deeg_data pot.sdr.proccess 5 6397325820 6403388053 6062233 consumer-1_/10.3.6.237
...
So we want to understand why we get:
Consumer group 'deeg_data' is rebalancing
What is the reason for this state, and why does the rebalancing happen?
We also found a good post on this: https://www.confluent.io/blog/kafka-consumer-multi-threaded-messaging/
Group rebalancing
Consumer group rebalancing is triggered when partitions need to be reassigned among consumers in the consumer group: A new consumer joins the group; an existing consumer leaves the group; an existing consumer changes subscription; or partitions are added to one of the subscribed topics.
Rebalancing is orchestrated by the group coordinator and it involves communication with all consumers in the group. To dive deeper into the consumer group rebalance protocol, see Everything You Always Wanted to Know About Kafka’s Rebalance Protocol But Were Afraid to Ask by Matthias J. Sax from Kafka Summit and The Magical Rebalance Protocol of Apache Kafka by Gwen Shapira.
Regarding consumer client code, some of the partitions assigned to it might be revoked during a rebalance. In the older version of the rebalancing protocol, called eager rebalancing, all partitions assigned to a consumer are revoked, even if they are going to be assigned to the same consumer again. With the newer protocol version, incremental cooperative rebalancing, only partitions that are reassigned to another consumer will be revoked. You can learn more about the new rebalancing protocol in this blog post by Konstantine Karantasis and this blog post by Sophie Blee-Goldman.
Regardless of protocol version, when a partition is about to be revoked, the consumer has to make sure that record processing is finished and the offset is committed for that partition before informing the group coordinator that the partition can be safely reassigned.
With automatic offset commit enabled in the thread per consumer model, you don’t have to worry about group rebalancing. Everything is done by the poll method automatically. However, if you disable automatic offset commit and commit manually, it’s your responsibility to commit offsets before the join group request is sent. You can do this in two ways:
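For illustration, one common way to do this is to commit from a rebalance listener's on_partitions_revoked callback. A minimal kafka-python sketch, reusing the group and topic names from the question above (this is an assumption about how the application could be wired, not its actual code):

from kafka import KafkaConsumer, ConsumerRebalanceListener

class CommitOnRevoke(ConsumerRebalanceListener):
    """Commit processed offsets before partitions are taken away."""
    def __init__(self, consumer):
        self.consumer = consumer

    def on_partitions_revoked(self, revoked):
        # Flush pending offsets so the next owner of these partitions
        # does not re-process records we already handled.
        self.consumer.commit()

    def on_partitions_assigned(self, assigned):
        pass

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="deeg_data",
    enable_auto_commit=False,
)
consumer.subscribe(["pot.sdr.proccess"], listener=CommitOnRevoke(consumer))

for record in consumer:
    # ... process the record ...
    consumer.commit()   # commit regularly, before any join-group request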
Note: another good resource is this YouTube talk: https://www.youtube.com/watch?v=QaeXDh12EhE
Note: a good Stack Overflow post: Kafka Consumer Rebalancing takes too long
Note: on the environment side, since our ZooKeeper servers are installed on VMs with non-SSD disks, and given the swap consumption, I think we also need to consider this post: https://community.cloudera.com/t5/Community-Articles/Zookeeper-Sizing-and-Placement/ta-p/247885
Rebalancing in Kafka is a protocol used by various components (Kafka Connect, Kafka Streams, Schema Registry, etc.) for various purposes.
In its simplest form, a rebalance is triggered whenever there is any change in the metadata.
Now, the word metadata can have many meanings, for example:
In the case of a topic, its metadata could be the topic's partitions and/or replicas and where (on which broker) they are stored.
In the case of a consumer group, it could be the number of consumers that are part of the group and the partitions they are consuming messages from, etc.
The above examples are by no means exhaustive, i.e. there is more metadata for topics and consumer groups, but I won't go into more detail here.
So, if there is any change in:
The number of partitions or replicas of a topic such as addition, removal or unavailability
The number of consumers in a consumer group such as addition or removal
Other similar changes...
A rebalance will be triggered. In the case of consumer group rebalancing, consumer applications need to be robust enough to cater for such scenarios.
So rebalances are a feature. However, in your case it appears to be happening very frequently, so you may need to investigate the logs on your client application and the cluster.
Following are a couple of references that might help:
Rebalance protocol - A very good article on Medium on this subject
Consumer rebalancing - Another post on SO focusing on consumer rebalancing
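If it helps, the group state can also be checked from code rather than with kafka-consumer-groups.sh. A kafka-python sketch, assuming a reasonably recent client and using the group name and bootstrap address from the question:

from kafka import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="kafka1:6667")

# Each description includes the group state, e.g. Stable, PreparingRebalance,
# CompletingRebalance or Empty.
for description in admin.describe_consumer_groups(["deeg_data"]):
    print(description)

admin.close()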
I'm using ZooKeeper and Kafka for a messaging use case in Java. I thought consumer group details would be removed when you restart the ZooKeeper and Kafka servers, but they aren't. Does ZooKeeper keep consumer group details in some kind of file?
Should I remove consumer group details manually if I want to reset the consumer groups?
Can anyone clarify this for me?
Since Kafka 0.9, Consumer Offsets are stored directly in Kafka in an internal topic called __consumer_offsets.
Consumer Offsets are preserved across restarts and are kept at least for offsets.retention.minutes (7 days by default).
If you want to reset a Consumer Group, you can:
use the kafka-consumer-groups.sh tool with the --reset-offsets option
use AdminClient.deleteConsumerGroups() to fully delete the Consumer group
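For completeness, offsets can also be reset from a client by seeking and committing while the group has no running members (the same precondition as the CLI tool). A kafka-python sketch with placeholder names; this is one possible approach, not the only one:

from kafka import KafkaConsumer, TopicPartition

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="my-group",          # placeholder group name
    enable_auto_commit=False,
)

# Manually assign every partition of the topic, move to the beginning,
# resolve the positions, and commit them as the group's offsets.
partitions = [TopicPartition("my-topic", p)
              for p in consumer.partitions_for_topic("my-topic")]
consumer.assign(partitions)
consumer.seek_to_beginning(*partitions)
for tp in partitions:
    consumer.position(tp)         # forces the seek to resolve to a real offset
consumer.commit()
consumer.close()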
There is the following consumer code:
from kafka.client import KafkaClient
from kafka.consumer import SimpleConsumer

# connect to the local broker and create a consumer for topic "my-topic"
# under the group "my-group"
kafka = KafkaClient("localhost", 9092)
consumer = SimpleConsumer(kafka, "my-group", "my-topic")

# seek(0, 2): offset 0 relative to the end, i.e. only read new messages
consumer.seek(0, 2)

for message in consumer:
    print message

kafka.close()
Then I produce a message with the script:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-topic
The thing is that when I start the consumers as two different processes, I receive new messages in each process. However, I want each message to be delivered to only one consumer, not broadcast.
In the Kafka documentation (https://kafka.apache.org/documentation.html) it is written:
If all the consumer instances have the same consumer group, then this
works just like a traditional queue balancing load over the consumers.
I see that the group for these consumers is the same: my-group.
How can I make it so that each new message is read by exactly one consumer instead of being broadcast?
The consumer-group API was not officially supported until Kafka v0.8.1 (released March 12, 2014). On earlier server versions, consumer groups do not work correctly. And as of this post, the kafka-python library does not currently attempt to send group offset data:
https://github.com/mumrah/kafka-python/blob/c9d9d0aad2447bb8bad0e62c97365e5101001e4b/kafka/consumer.py#L108-L115
It's hard to tell from the example above what your ZooKeeper configuration is, or whether there is one at all. You'll need a ZooKeeper cluster for the consumer group information to be persisted, i.e. which consumer within each group has consumed up to a given offset.
A solid example is here:
Official Kafka documentation - Consumer Group Example
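For reference, with a newer kafka-python release (1.0+) and a broker that supports group coordination (0.9+), the load-balanced behaviour the documentation describes looks roughly like this sketch, using the topic and group names from the question:

from kafka import KafkaConsumer

# Two processes running this code with the same group_id will split the
# partitions of my-topic between them, so each message goes to only one of them.
consumer = KafkaConsumer(
    "my-topic",
    bootstrap_servers="localhost:9092",
    group_id="my-group",
    auto_offset_reset="latest",   # roughly equivalent to seek(0, 2) in the question
)

for message in consumer:
    print(message.value)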
This should not happen. Make sure that both consumers are registered under the same consumer group in the ZooKeeper znodes. Each message on a topic should be consumed by a consumer group exactly once, so only one consumer out of the whole group should receive the message, not what you are experiencing. What version of Kafka are you using?