I have a topic with messages and a consumer with the group name "KafkaConsumerExample". When I restart the consumer, all the messages from the topic are received without issues. But when I change the name of my consumer group, with the same consumer code, the consumer does not pull data from the topic. What would be the reason for this issue? Why does changing the consumer group name change the behavior? Can you please help here?
With the same consumer group name, you will not consume the messages again. This is because Kafka maintains the committed offsets for each consumer group, so a group resumes from where it left off instead of re-reading the log.
If you wish to consume the same data again from the topic, you need to use a different consumer group name (or reset the group's offsets).
I hope this helps! Let me know if I addressed your question or if there is anything else.
The problem you're running into is the auto.offset.reset consumer config property. When a consumer group has no committed offsets for a topic (as with a brand-new group name), this property decides where reading starts. The default, "latest", starts at the end of each partition, so a new group only sees messages produced after it starts; set the property to "earliest" to force the consumer to read from the first available message in each partition.
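For illustration, here is a minimal sketch of that config in the Java client; the broker address, topic name, and new group name below are assumptions:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EarliestConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "KafkaConsumerExample-v2");    // the renamed group
        props.put("auto.offset.reset", "earliest");          // with no committed offsets, start from the first message
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));         // hypothetical topic name
            consumer.poll(Duration.ofSeconds(1))
                    .forEach(r -> System.out.println(r.offset() + ": " + r.value()));
        }
    }
}

Note that auto.offset.reset only applies when the group has no committed offsets; once the group commits, it resumes from the committed position regardless of this setting.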
As I understand it, a Kafka message can be identified by topic, partition, and offset. If I store the message along with its topic, partition, and offset in my local database, then I can compare these when a new Kafka message is received, to ensure I won't insert the same message again.
But by default a Kafka topic has a retention policy that keeps messages for only 7 days. After that, the messages are removed.
My question is: after a Kafka message is removed by the retention policy, will the message's offset be re-used for a new message? If yes, then it would be an issue for me, since I could mistake a new message for an existing one because they hold the same offset. Please advise how offsets work with the retention policy and how to handle this. Thank you!
No, as long as the Kafka cluster is not recreated, a topic will not reuse offsets. It is common to keep the offset stored (e.g. in the database or automatically using consumer groups) to know up to which point a consumer has processed a topic.
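If you go the database route, the identity check can be as simple as a composite key built from those three values; a small sketch (the helper name is mine, not from the question):

import org.apache.kafka.clients.consumer.ConsumerRecord;

// (topic, partition, offset) uniquely identifies a record for the lifetime
// of the cluster, so it works as a primary key for deduplication.
static String recordId(ConsumerRecord<?, ?> record) {
    return record.topic() + "-" + record.partition() + "-" + record.offset();
}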
I have an app that runs as several instances, and each instance needs to consume all messages from all partitions of a topic.
I have 2 strategies that I am aware of:

1. Create a unique consumer group id for each app instance, then subscribe and commit as usual. The downside is that Kafka still needs to maintain a consumer group on behalf of each instance.

2. Ask Kafka for all partitions of the topic and assign the consumer to all of them. As I understand it, no consumer group is then created on behalf of the consumer in Kafka. So the question is whether there is still a need to commit offsets, since there is no consumer group on the Kafka side to keep up to date. The consumer was created without assigning it a group.id.
When you call consumer.assign() instead of consumer.subscribe(), no group.id property is required, which means that no group is created or maintained by Kafka.
Committing offsets is basically keeping track of what has been processed so that you don't process it again. This may just as well be done manually: for example, by reading the polled messages and writing the offsets to a file once the messages have been processed.
In this case, your program is responsible for writing the offsets and for reading from the next offset upon restart using consumer.seek().
The only drawback is that if you want to move your consumer from one machine to another, you would need to copy this file as well.
You can also store the offsets in some database that is accessible from any machine, in case you don't want the file to be copied (though writing to a file may be relatively simpler and faster).
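Putting the pieces together, a rough sketch of the assign()/seek() approach; the topic name and the persistence helpers are placeholders for your own file or database logic:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GrouplessConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");      // assumed broker address
        // No group.id: with assign() Kafka maintains no consumer group for us.
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0); // hypothetical topic, partition 0
            consumer.assign(List.of(tp));                          // assign, not subscribe
            consumer.seek(tp, loadOffset(tp));                     // resume from our own stored position

            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.println(record.value());            // stand-in for real processing
                    saveOffset(tp, record.offset() + 1);           // next offset to read after a restart
                }
            }
        }
    }

    // Placeholder persistence helpers; back these with your file or database.
    static long loadOffset(TopicPartition tp) { return 0L; }
    static void saveOffset(TopicPartition tp, long offset) { }
}

In a real app you would assign every partition of the topic (e.g. via consumer.partitionsFor()) rather than just partition 0.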
On the other hand, if there is a consumer group, then as long as your consumer has access to Kafka, Kafka will let your consumer automatically resume reading from the last committed offset.
There will always be a consumer group when you subscribe to a topic. If you're not setting group.id yourself, the client you're running will use its default or generate one (the console consumer, for example, generates a random group id).
Kafka keeps track of the offsets of all consumers using the consumer group.
There is still a need to commit offsets. If no offsets are committed, Kafka has no idea what has already been read.
Here is the command to view all your consumer groups and their lag:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --all-groups
I am writing a Kafka consumer application. I have a topic with 4 partitions - 1 is the leader and 3 are followers. The producer uses a key to identify the partition to push a message to.
If I write a consumer and run it on different nodes, or start 4 instances of the same consumer, how will message consumption happen? Will all 4 instances get the same messages?
What happens in the case of multiple consumers (same group) consuming a single topic?
Do they get the same data?
How is the offset managed? Is it separate for each consumer?
I would suggest that you read at least the first few chapters of Confluent's definitive guide to Kafka to get a preliminary understanding of how Kafka works.
I've kept my answers brief. Please refer to the book for detailed explanations.
How is the offset managed? Is it separate for each consumer?
That depends on the group id. Only one offset per partition is managed for each group.
What happens in the case of multiple consumers (same group) consuming a single topic?
There can be multiple consumers, identified by the same or by different groups. If 2 consumers belong to the same group, the topic's partitions are divided between them, so neither of them gets all the messages.
Do they get the same data?
No. Once a message is read and the read is committed, the offset is incremented for that group. So a different consumer with the same group will not receive that message.
Hope that helps :)
What happens in the case of multiple consumers (same group) consuming a single topic?
Answer: Producers send records to a particular partition based on the record's key. The default partitioner for Java uses a hash of the record's key to choose the partition, so all records with the same key land in the same partition. When there are multiple consumers in the same consumer group, each consumer gets different partitions; so for a given key, a single consumer receives all the messages. When the consumer that is receiving messages goes down, the group coordinator (one of the brokers in the cluster) triggers a rebalance and that partition is assigned to one of the available consumers.
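As a quick illustration of the key-to-partition mapping (the topic name and key below are made up):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 3; i++) {
                // Same key => the default partitioner hashes to the same partition every
                // time, so one consumer in the group sees all records for this key.
                RecordMetadata meta = producer
                        .send(new ProducerRecord<>("my-topic", "user-42", "event-" + i))
                        .get();
                System.out.println("partition=" + meta.partition() + " offset=" + meta.offset());
            }
        }
    }
}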
Do they get the same data?
Answer: If a consumer commits its consumed messages for the partition and then goes down, a rebalance occurs as stated above, and the consumer that takes over the partition will not get those messages again. But if the consumer goes down before committing its offsets, then the consumer that takes over the partition will get the messages.
How is the offset managed? Is it separate for each consumer?
Answer: No, the offset is not separate for each consumer. A partition is never assigned to multiple consumers in the same consumer group at a time. The consumer that gets the partition assigned picks up the group's committed offset for it by default.
Is the offset a property of the topic/partition, or is it a property of a consumer?
If it's a property of a consumer, does that mean multiple consumers reading from the same partition could have different offsets?
Also, what happens to a consumer if it goes down? How does Kafka know it's dealing with the same consumer when it comes back online? Presumably a new client ID is generated, so it wouldn't have the same ID as before.
In most cases it is a property of a consumer group. When writing the consumers, you normally specify the consumer group in the group.id parameter. This group ID is used to recover / store the latest offset from / in the special topic __consumer_offsets, where it is stored directly in the Kafka cluster itself. The consumer group is used not only for the offsets but also to ensure that each partition is consumed by only a single client per consumer group.
However, Kafka gives you a lot of flexibility: if you need to, you can store the offset somewhere else, based on whatever criteria you want. But in most cases, following the consumer group concept and storing the offsets inside Kafka is the best thing you can do.
Kafka identifies a consumer based on group.id, which is a consumer property that each consumer should have:
A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy
As for the offset: it is tracked per consumer group by the broker. Whenever a consumer consumes messages from a Kafka topic, it commits the offset it has reached (meaning "I have consumed messages up to this position"), and next time it starts consuming from that committed position. Offsets can be committed manually, or automatically via enable.auto.commit:
If true the consumer's offset will be periodically committed in the background.
Each consumer group has its own offsets; based on those, the Kafka broker can tell whether a consumer is new or an old consumer that was restarted.
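For example, a consumer that turns auto-commit off and commits manually after processing (the group and topic names are made up):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "my-service");               // identifies this consumer group
        props.put("enable.auto.commit", "false");          // we commit ourselves below
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));       // hypothetical topic
            while (true) {
                consumer.poll(Duration.ofSeconds(1))
                        .forEach(r -> System.out.println(r.value())); // stand-in for processing
                consumer.commitSync(); // commit only after processing: at-least-once delivery
            }
        }
    }
}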
I'm managing a Kafka queue using a common consumer group across multiple machines. Now I also need to show the current contents of the queue. How do I read only those messages within the group which haven't been read yet, while making those messages readable again by the other consumers in the group which actually process them? Any help would be appreciated.
In Kafka, the notion of "reading" messages from a topic and that of "consuming" them are the same thing. At a high level, the only thing that makes a "consumed" message unavailable to a consumer is that consumer setting its read offset to a value beyond that of the message in question. Thus, you can turn off the autocommit feature of your consumers and avoid committing offsets in cases where you'd like only to "read" but not to "consume".
A good proxy for getting "all messages which haven't been read" is to compare the latest committed offset to the highwater mark offset per partition. This provides a notion of "lag" that indicates how far behind a given consumer is in its consumption of a partition. The fetch_consumer_lag CLI function in pykafka is a good example of how to do this.
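The same lag computation can be done with the plain Java client by comparing committed offsets to end offsets; a sketch with assumed broker, group, and topic names:

import java.util.Map;
import java.util.Properties;
import java.util.Set;
import java.util.stream.Collectors;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "my-processing-group");      // assumed: the group doing the real processing
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            Set<TopicPartition> partitions = consumer.partitionsFor("my-topic").stream() // hypothetical topic
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toSet());
            Map<TopicPartition, Long> end = consumer.endOffsets(partitions);      // highwater marks
            Map<TopicPartition, OffsetAndMetadata> committed = consumer.committed(partitions);
            for (TopicPartition tp : partitions) {
                OffsetAndMetadata c = committed.get(tp);
                long done = (c == null) ? 0L : c.offset();  // no commit yet => treat as 0
                System.out.println(tp + " lag=" + (end.get(tp) - done));
            }
        }
    }
}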
In Kafka, a partition can be consumed by only one consumer in a group; i.e., if your topic has 10 partitions and you spawn 20 consumers with the same groupId, only 10 will be connected to Kafka and the remaining 10 will sit idle. A new consumer is brought in by Kafka only when one of the existing consumers dies or stops polling from the topic.
AFAIK, you can't do what I understand you want to do within a single consumer group. You can obviously create another groupId and process messages based on the information gathered by the first consumer group.
Kafka now has a KStream.peek() method.
See proposal "Add KStream peek method".
It's not 100% clear to me from the docs whether this prevents consuming the message that's peeked from the topic, but I can't see how you could use it in any crash-safe, robust way unless it does.
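For reference, a minimal sketch of the API (topic names are made up); note that peek() observes records inside a Streams topology rather than holding them back from consumption:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> stream =
        builder.stream("my-topic", Consumed.with(Serdes.String(), Serdes.String()));

// peek() runs a side-effecting action on each record as it flows through,
// without changing the stream that continues downstream.
stream.peek((key, value) -> System.out.println("saw " + key + " -> " + value))
      .to("output-topic");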
See also:
Handling consumer rebalance when implementing synchronous auto-offset commit
High-Level Consumer and peeking messages
I think that you can use the publish-subscribe model: give each consumer its own group, so each has its own offset and can consume all the messages for itself.