I was wondering: can anyone explain how __consumer_offsets actually works?
For testing purposes I have a single instance of Kafka 0.11.0.0 with these overridden settings:
offsets.topic.replication.factor=1
broker.id=0
offsets.retention.minutes=43200
log.flush.scheduler.interval.ms=60000
log.retention.hours=720
log.flush.interval.ms=60000
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824
And I have a single consumer called pigeon.
Everything works fine until I do a kill -9 on the Kafka server (unclean shutdown). After that, it seems that the client loses its offset.
Before the kill -9:
Log from the client (using kafka-reactive):
2017-10-12 13:08:32.960 [DEBUG] o.a.k.c.c.i.ConsumerCoordinator - Group pigeon committed offset 275620 for partition ClusterEvents-0
Looking at ConsumerGroupCommand:
# ./kafka-run-class.sh kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9092 --group pigeon --new-consumer --describe
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
ClusterEvents 0 275620 275620 0 pigeon-1507813552573-b3c74e75-04c1-48d0-bf5a-b66c203861aa/10.84.2.238 pigeon-1507813552573
And looking at __consumer_offsets:
#./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic __consumer_offsets --from-beginning --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" | grep pigeon
[pigeon,ClusterEvents,0]::[OffsetMetadata[264458,NO_METADATA],CommitTime 1507285596838,ExpirationTime 1509877596838]
<.....>
[pigeon,ClusterEvents,0]::[OffsetMetadata[275620,NO_METADATA],CommitTime 1507813712886,ExpirationTime 1510405712886]
So the first offset in __consumer_offsets is 264458, and we can see that offset 275620 is committed.
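As a cross-check, the last committed offset can also be read programmatically through the consumer API. A minimal sketch, assuming the group id pigeon and topic ClusterEvents from the logs above:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CommittedOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "pigeon"); // same group as the consumer above
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Ask the group coordinator for the last committed offset of this partition
            TopicPartition tp = new TopicPartition("ClusterEvents", 0);
            OffsetAndMetadata committed = consumer.committed(tp);
            System.out.println("Committed: " + (committed == null ? "none" : committed.offset()));
        }
    }
}

Before the kill -9 this should print 275620, matching the describe output.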
After kill -9:
Now let's do a kill -9 on the Kafka process, stop the consumer while Kafka is down, and after Kafka restarts look at the same data:
# ./kafka-run-class.sh kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9092 --group pigeon --new-consumer --describe
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
ClusterEvents 0 264458 275645 11187 - - -
#./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic __consumer_offsets --from-beginning --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" | grep pigeon
[pigeon,ClusterEvents,0]::[OffsetMetadata[264458,NO_METADATA],CommitTime 1507285596838,ExpirationTime 1509877596838]
<.....>
[pigeon,ClusterEvents,0]::[OffsetMetadata[275620,NO_METADATA],CommitTime 1507813712886,ExpirationTime 1510405712886]
So although __consumer_offsets contains the same information, that offset 275620 is committed, ConsumerGroupCommand reports the current offset as 264458. Why?
How does __consumer_offsets actually work?
If I restart the consumer, it will start consuming from offset 264458 and commit the latest offset; I can then do a kill -9 on Kafka again, and it will once more start consuming from 264458.
Am I misunderstanding how this should work? At first I thought this was due to log changes not being fsynced to disk, so I decreased log.flush.interval.ms to 60 s and waited a couple of minutes between kills, but that does not seem to help. And since __consumer_offsets contains a much greater committed value, why does an unclean shutdown set the offset back to 264458?
Apparently this was an issue with Kafka 0.11.0.0, and 0.11.0.1 fixes it.
More info: [KAFKA-5600] Group loading regression causing stale metadata/offsets cache
Related
During the last Kafka maintenance, which required a rolling restart of the brokers, we witnessed a reset of consumer group offsets for certain partitions.
At 11:14 am, everything is fine for the consumer group and we don't see a consumer lag:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 0 105130857 105130857 0 st-...
...
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 6 78591770 78591770 0 st-...
However, 5 minutes later, during the rolling restart of the brokers, we see a reset for one partition and a consumer lag of millions of events.
$ bin/kafka-consumer-groups --bootstrap-server XXX:9093,XXX... --command-config secrets.config --group st-xx --describe
Note: This will not show information about old Zookeeper-based consumers.
[2019-08-26 12:44:13,539] WARN Connection to node -5 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-08-26 12:44:13,707] WARN [Consumer clientId=consumer-1, groupId=st-xx] Connection to node -5 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Consumer group 'st-xx' has no active members.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 0 105132096 105132275 179
...
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 6 15239401 78593165 63353764 ...
In the last two hours, the offset for the partition hasn't recovered and we need to patch it now manually. We had similar issues during the last rolling restart of brokers.
Has anyone seen something like this before? The only clue we could find is this ticket; however, we run Kafka version 1.0.1-kafka3.1.0.
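For reference, here is a sketch of patching a group offset programmatically. AdminClient#alterConsumerGroupOffsets only exists in newer clients (roughly 2.5+), so on 1.0.1 you would fall back to kafka-consumer-groups --reset-offsets; the group, topic, and target offset below are placeholders based on the output above, and the group must have no active members:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class PatchGroupOffset {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "XXX:9093");

        try (AdminClient admin = AdminClient.create(props)) {
            // Reset partition 6 of the affected topic back to the last known good offset
            TopicPartition tp = new TopicPartition("your-topic", 6);
            admin.alterConsumerGroupOffsets("st-xx",
                    Collections.singletonMap(tp, new OffsetAndMetadata(78591770L)))
                 .all().get();
        }
    }
}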
When I describe one of my consumer groups I get this status:
➜ local-kafka_2.12-2.0.0 bin/kafka-consumer-groups.sh --bootstrap-server myip:1025 --group mygroup --describe
Consumer group 'mygroup' has no active members.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
mytopic 0 858 858 0 - - -
When I try to reset it to the earliest offset, I get this status:
➜ local-kafka_2.12-2.0.0 bin/kafka-consumer-groups.sh --bootstrap-server myip:1025 --group mygroup --topic mytopic --reset-offsets --to-earliest --execute
TOPIC PARTITION NEW-OFFSET
mytopic 0 494
I would have expected the new offset to be at 0 rather than 494.
Question
1 - In the describe output the current offset is shown as 858, but resetting to earliest shows 494, so there would be a lag of 364. My question is: what happened to the remaining 494 (858 - 364) offsets? Are they gone because of some configuration setting on this topic? My retention.ms is set to 1 week.
2 - If the 494 records are gone, is there a way to recover them somehow?
If you have access to the data directory of your Kafka cluster, you can inspect the data present there using the command kafka-run-class.bat kafka.tools.DumpLogSegments.
For more information see e.g. here: https://medium.com/@durgaswaroop/a-practical-introduction-to-kafka-storage-internals-d5b544f6925f
Your data might have been deleted either due to log retention time or due to size limitation of the logs (the configuration property log.retention.bytes).
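One way to confirm this is to ask the broker for the earliest offset still available on the partition. A minimal sketch using Consumer#beginningOffsets, with the broker address and topic taken from the question:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class EarliestOffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "myip:1025");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("mytopic", 0);
            // beginningOffsets returns the first offset still on disk; anything
            // below it has been removed by retention and cannot be re-read
            Map<TopicPartition, Long> earliest =
                    consumer.beginningOffsets(Collections.singleton(tp));
            System.out.println("Earliest available offset: " + earliest.get(tp));
        }
    }
}

If this prints 494, everything below that offset has already been deleted by retention and cannot be recovered from the broker.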
I have two kafka streams applications.
I can see their consumer groups using:
>bin/kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
streams-distribution-app
streams-collection-app
I can see offsets of the distribution-app using:
>bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group streams-distribution-app
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
o365_user_activity 0 3900 3985 85 streams-distribution-app-StreamThread-1 /127.0.0.1
o365_cassandra_data_load 0 - 0 - streams-distribution-app-StreamThread-1 /127.0.0.1 streams-distribution-app
But I can't view the offsets for the collection-app:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group streams-collection-app
Note: This will not show information about old Zookeeper-based consumers.
The app is running and processing logs. I don't understand what the reason could be for the offset information not being shown.
UPDATE:
After quite some time (around 20 minutes) I can finally see the offsets, but there is a lot of lag.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group streams-collection-app
Note: This will not show information about old Zookeeper-based consumers.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
o365_activity_contenturl 0 200 64941 64741 streams-collection-app-StreamThread-1 /127.0.0.1 streams-collection-app
These are the configs for stream-collection-app:
props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), Integer.MAX_VALUE);
props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRY_BACKOFF_MS_CONFIG), Integer.MAX_VALUE);
props.put(StreamsConfig.producerPrefix(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG), Integer.MAX_VALUE);
props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 200);
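One general thing to keep in mind (a tuning note, not a confirmed diagnosis for the delay above): Kafka Streams only commits offsets every commit.interval.ms, 30 seconds by default, and the describe output reflects only committed offsets. For testing, you could shorten the interval:

// Commit more often so the offsets shown by kafka-consumer-groups.sh
// track actual progress sooner; the default is 30000 ms
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000);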
UPDATE 2:
I tried increasing the num.stream.threads config, setting it to 4, and restarted the application again with some 40k+ records on the source topic.
There was still no output when describing the stream app's consumer group. After about 10-15 minutes I could see the following offsets:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group streams-collection-app
Note: This will not show information about old Zookeeper-based consumers.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
o365_activity_contenturl 0 200 11274 11074 streams-collection-app-StreamThread-1 /127.0.0.1 streams-app-StreamThread-1
o365_activity_contenturl 1 200 11384 11184 streams-collection-app-StreamThread-2 /127.0.0.1 streams-app-StreamThread-2
o365_activity_contenturl 3 200 11343 11143 streams-collection-app-StreamThread-4 /127.0.0.1 streams-app-StreamThread-4
o365_activity_contenturl 2 200 11471 11271 streams-collection-app-StreamThread-3 /127.0.0.1 streams-app-StreamThread-3
Let's say the partition has 4 replicas (1 leader, 3 followers) and all are currently in sync. min.insync.replicas is set to 3, and request.required.acks is set to all (or -1).
The producer sends a message to the leader, and the leader appends it to its log. After that, two of the replicas crash before they can fetch this message. The one remaining replica successfully fetches the message and appends it to its own log.
The leader, after a certain timeout, will send an error (NotEnoughReplicas, I think) to the producer since the min.insync.replicas condition is not met.
My question is: what will happen to the message that was appended to the leader's log and to the one replica's log?
Will it be delivered to consumers when the crashed replicas come back online and the broker starts accepting and committing new messages (i.e., the high watermark is advanced in the log)?
If fewer than min.insync.replicas replicas are available and the producer uses acks=all, then the message is not committed and consumers will not receive that message, even after the crashed replicas come back and are added to the ISR list again. You can test this in the following way.
Start two brokers with min.insync.replicas = 2
$ ./bin/kafka-server-start.sh ./config/server-1.properties
$ ./bin/kafka-server-start.sh ./config/server-2.properties
Create a topic with 1 partition and RF=2. Make sure both brokers are in the ISR list.
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --create --topic topic1 --partitions 1 --replication-factor 2
Created topic "topic1".
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --describe --topic topic1
Topic:topic1 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: topic1 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Run the console consumer and console producer. Make sure the producer uses acks=-1.
$ ./bin/kafka-console-consumer.sh --new-consumer --bootstrap-server kafka-1:9092,kafka-2:9092 --topic topic1
$ ./bin/kafka-console-producer.sh --broker-list kafka-1:9092,kafka-2:9092 --topic topic1 --request-required-acks -1
Produce some messages. Consumer should receive them.
Kill one of the brokers (I killed the broker with id=2). Check that the ISR list is reduced to one broker.
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --describe --topic topic1
Topic:topic1 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: topic1 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1
Try to produce again. In the producer you should get some errors like
Error: NOT_ENOUGH_REPLICAS
(one per retry) and finally
Messages are rejected since there are fewer in-sync replicas than required.
The consumer will not receive these messages.
Restart the killed broker and try to produce again.
The consumer will receive these messages, but not those you sent while one of the replicas was down.
From my understanding, the high watermark will not advance until both failed follower brokers have recovered and caught up.
See this blog post for more details: http://www.confluent.io/blog/hands-free-kafka-replication-a-lesson-in-operational-simplicity/
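For completeness, here is a minimal producer sketch with acks=all that surfaces the same error programmatically. Broker addresses and topic follow the walkthrough above; retries is set to 0 only so the error shows up immediately:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.NotEnoughReplicasException;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092,kafka-2:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for min.insync.replicas acks
        props.put(ProducerConfig.RETRIES_CONFIG, 0);  // fail fast so the error is visible
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic1", "hello"), (metadata, exception) -> {
                if (exception instanceof NotEnoughReplicasException) {
                    // Fewer in-sync replicas than min.insync.replicas: the write is not committed
                    System.err.println("NOT_ENOUGH_REPLICAS: " + exception.getMessage());
                } else if (exception != null) {
                    exception.printStackTrace();
                }
            });
            producer.flush();
        }
    }
}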
Error observed:
Messages are rejected since there are fewer in-sync replicas than required.
To resolve this I had to increase the replication factor, and it worked.
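Increasing the replication factor means reassigning the partition to more brokers. A sketch of doing this programmatically, assuming a client new enough to have AdminClient#alterPartitionReassignments (2.4+); the topic, partition, and broker ids are placeholders:

import java.util.Arrays;
import java.util.Collections;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class IncreaseReplicationFactor {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Assign the partition to three brokers instead of two;
            // the first id in the list is the preferred leader
            TopicPartition tp = new TopicPartition("topic1", 0);
            NewPartitionReassignment assignment =
                    new NewPartitionReassignment(Arrays.asList(1, 2, 3));
            admin.alterPartitionReassignments(
                    Collections.singletonMap(tp, Optional.of(assignment)))
                 .all().get();
        }
    }
}

On older brokers the same effect is achieved with the kafka-reassign-partitions tool and a JSON reassignment file.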
Since the 0.8.1.1 release, Kafka has provided the option to store offsets in Kafka itself, instead of in Zookeeper (see this).
I'm not able to figure out how to check the details of consumed offsets, as the current tools only provide consumer offset checks for Zookeeper (I'm referring to this).
If there are any tools available to check consumer offset, please let me know.
I'm using Kafka 0.8.2 with offsets stored in Kafka. This tool works well for me:
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
--topic your-topic
--group your-consumer-group
--zookeeper localhost:2181
You get all the information you need: topic size, consumer lag, owner.
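On newer Kafka versions the same information can also be fetched programmatically. A minimal sketch using AdminClient#listConsumerGroupOffsets (added around Kafka 2.0, so it does not apply to the 0.8.x setup in the question):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsetLister {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Fetch the committed offset of every partition the group has consumed
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    admin.listConsumerGroupOffsets("your-consumer-group")
                         .partitionsToOffsetAndMetadata().get();
            offsets.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
        }
    }
}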
The following command gives enough details:
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-second-application
You will get details like this:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
first_topic 0 4 4 0 consumer-1-7cb31cf3-1621-4635-8f95-6ae85215b31b /10.200.237.53 consumer-1
first_topic 1 3 3 0 consumer-1-7cb31cf3-1621-4635-8f95-6ae85215b31b /10.200.237.53 consumer-1
first_topic 2 3 3 0 consumer-1-7cb31cf3-1621-4635-8f95-6ae85215b31b /10.200.237.53 consumer-1
first-topic 0 4 4 0 - - -
I'm using Kafka 2.1 and use the kafka-consumer-groups command, which gives useful details like current offset, log-end offset, lag, etc. The simplest command syntax is
kafka-consumer-groups.sh \
--bootstrap-server localhost:29092 \
--describe --group <consumer group name>
And the sample output looks like this:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
your.topic 1 17721650 17721673 23 consumer-159-beb9050b /1.2.3.4 consumer-159
your.topic 3 17718700 17718719 19 consumer-159-beb9050b /1.2.3.4 consumer-159
your.topic 0 17721700 17721717 17 consumer-159-beb9050b /1.2.3.4 consumer-159
HTH