Kafka Streams 1.1.0: Consumer Group Reprocessing Entire Log - apache-kafka

We have a Kafka Streams application (2.0) which is communicating with Kafka brokers (1.1.0). The streams application has been reprocessing the entire log for no discernible reason: the application hadn't been restarted, wasn't being rebalanced, and was just sitting around. In some cases it was processing messages; in others it was waiting to receive messages (having processed messages less than 6 hours ago). We've done a fair amount of research and have ruled out one potential cause by setting offsets.retention.minutes to 1 week, the same amount of time as our message retention. Additionally, it wouldn't make sense for that to be the root cause, since the consumer group offsets were reset while the application was actively processing messages.
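For reference, one way to catch the reset in the act is to periodically dump the group's committed offsets with the Java AdminClient. A minimal sketch (the group id "my-streams-app" and the broker address are placeholders; for a Streams application, the application.id is the consumer group id):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // The Streams application.id doubles as the consumer group id.
            Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("my-streams-app")
                     .partitionsToOffsetAndMetadata()
                     .get();
            // Log each partition's committed offset; a sudden jump backwards
            // would pinpoint when the group's offsets were reset.
            committed.forEach((tp, om) ->
                System.out.printf("%s -> committed offset %d%n", tp, om.offset()));
        }
    }
}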
There is nothing interesting in the broker logs around the time of the events:
[2019-02-21 09:02:20,009] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-02-21 09:12:20,009] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-02-21 09:12:51,084] INFO [ProducerStateManager partition=MY_TOPIC-1] Writing producer snapshot at offset 422924 (kafka.log.ProducerStateManager)
[2019-02-21 09:12:51,085] INFO [Log partition=MY_TOPIC-1, dir=/data1/kafka] Rolled new log segment at offset 422924 in 1 ms. (kafka.log.Log)
[2019-02-21 09:14:56,384] INFO [ProducerStateManager partition=MY_TOPIC-12] Writing producer snapshot at offset 295610 (kafka.log.ProducerStateManager)
[2019-02-21 09:14:56,384] INFO [Log partition=MY_TOPIC-12, dir=/data1/kafka] Rolled new log segment at offset 295610 in 1 ms. (kafka.log.Log)
[2019-02-21 09:15:19,365] INFO [ProducerStateManager partition=__transaction_state-8] Writing producer snapshot at offset 3939084 (kafka.log.ProducerStateManager)
[2019-02-21 09:15:19,365] INFO [Log partition=__transaction_state-8, dir=/data1/kafka] Rolled new log segment at offset 3939084 in 0 ms. (kafka.log.Log)
[2019-02-21 09:21:26,755] INFO [ProducerStateManager partition=MY_TOPIC-9] Writing producer snapshot at offset 319799 (kafka.log.ProducerStateManager)
[2019-02-21 09:21:26,755] INFO [Log partition=MY_TOPIC-9, dir=/data1/kafka] Rolled new log segment at offset 319799 in 1 ms. (kafka.log.Log)
[2019-02-21 09:22:20,009] INFO [GroupMetadataManager brokerId=2] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-02-21 09:23:31,283] INFO [ProducerStateManager partition=__consumer_offsets-17] Writing producer snapshot at offset 47345110 (kafka.log.ProducerStateManager)
[2019-02-21 09:23:31,297] INFO [Log partition=__consumer_offsets-17, dir=/data1/kafka] Rolled new log segment at offset 47345110 in 28 ms. (kafka.log.Log)
And absolutely nothing in the application logs (even with the log level set to DEBUG).
Any ideas about what might be causing this issue?

Upgrading the Kafka brokers to 2.0.0 resolved this issue.

Related

spooldir connector not processing large file

There is a large file with 40M records. The spooldir connector processed half of the records, but after that it stopped pushing records to the topic. The log looks something like this:
[2021-01-07 23:08:59,903] INFO Processed 20060000 lines of /dir/dir1/abc.txt_1607697517821.txt (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask:144)
[2021-01-07 23:08:59,997] INFO Processed 20080000 lines of /dir/dir1/abc.txt_1607697517821.txt (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask:144)
[2021-01-07 23:09:00,225] INFO Processed 20100000 lines of /dir/dir1/abc.txt_1607697517821.txt (com.github.jcustenborder.kafka.connect.spooldir.SpoolDirCsvSourceTask:144)
[2021-01-07 23:09:04,788] INFO WorkerSourceTask{id=cust-stream-1} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:478)
[2021-01-07 23:09:04,788] INFO WorkerSourceTask{id=cust-stream-1} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:495)
[2021-01-07 23:09:04,795] INFO WorkerSourceTask{id=cust-stream-1} Finished commitOffsets successfully in 6 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:574)
[2021-01-07 23:09:04,795] INFO WorkerSourceTask{id=cust-stream-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:478)
[2021-01-07 23:09:04,795] INFO WorkerSourceTask{id=cust-stream-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:495)
[2021-01-07 23:09:04,795] INFO WorkerSourceTask{id=cust-stream-2} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:478)
The offset-commit and flush messages in the last few lines just keep repeating in the log.
The abc_1607697517821.txt.PROCESSING file still exists, showing that processing hasn't finished yet.

Kafka delete topics when auto.create.topics is enabled

We have a 2-node Kafka cluster with both auto.create.topics.enable and delete.topic.enable set to true. Our app reads from a common request topic and responds on a response topic provided by the client in the request payload.
auto.create.topics.enable is set to true because our client has an auto-scale environment in which a new worker will read from a new response topic. Due to some implementation issues on the client side, a lot of topics have been created that have never been used (end offset is 0), and we are attempting to clean those up.
The problem is that upon deleting a topic, it is recreated almost immediately. These topics don't have any consumers (the workers that were listening to them are already dead).
I have tried the following:
Running the Kafka CLI delete command:
kafka-topics.sh --zookeeper localhost:2181 --topic <topic-name> --delete
Creating a ZooKeeper node under:
/admin/delete_topics/<topic-name>
Both don't seem to work. In the logs, I see that a request for delete was received and the corresponding logs/indexes were deleted. But within a few seconds/minutes, the topic is auto-created. Logs for reference -
INFO [Partition <topic-name>-0 broker=0] No checkpointed highwatermark is found for partition <topic-name>-0 (kafka.cluster.Partition)
INFO Replica loaded for partition <topic-name>-0 with initial high watermark 0 (kafka.cluster.Replica)
INFO Replica loaded for partition <topic-name>-0 with initial high watermark 0 (kafka.cluster.Replica)
INFO [Partition <topic-name>-0 broker=0] <topic-name>-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-4.7a79dfc720624d228d5ee90c8d4c325e-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-4.7a79dfc720624d228d5ee90c8d4c325e-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-4.7a79dfc720624d228d5ee90c8d4c325e-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-4 in /home/ec2-user/data/kafka/0/<topic-name>-4.7a79dfc720624d228d5ee90c8d4c325e-delete. (kafka.log.LogManager)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-2.d32a905f9ace459cb62a530b2c605347-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-2.d32a905f9ace459cb62a530b2c605347-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-2.d32a905f9ace459cb62a530b2c605347-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-2 in /home/ec2-user/data/kafka/0/<topic-name>-2.d32a905f9ace459cb62a530b2c605347-delete. (kafka.log.LogManager)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-3.0670e8aefae5481682d53afcc09bab6a-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-3.0670e8aefae5481682d53afcc09bab6a-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-3.0670e8aefae5481682d53afcc09bab6a-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-3 in /home/ec2-user/data/kafka/0/<topic-name>-3.0670e8aefae5481682d53afcc09bab6a-delete. (kafka.log.LogManager)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-7.ac76d42a39094955abfb9d37951f4fae-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-7.ac76d42a39094955abfb9d37951f4fae-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-7.ac76d42a39094955abfb9d37951f4fae-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-7 in /home/ec2-user/data/kafka/0/<topic-name>-7.ac76d42a39094955abfb9d37951f4fae-delete. (kafka.log.LogManager)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-1.4872c74d579f4553a881114749e08141-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-1.4872c74d579f4553a881114749e08141-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-1.4872c74d579f4553a881114749e08141-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-1 in /home/ec2-user/data/kafka/0/<topic-name>-1.4872c74d579f4553a881114749e08141-delete. (kafka.log.LogManager)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-0.489b7241226341f0a7ffa3d1b9a70e35-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-0.489b7241226341f0a7ffa3d1b9a70e35-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-0.489b7241226341f0a7ffa3d1b9a70e35-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-0 in /home/ec2-user/data/kafka/0/<topic-name>-0.489b7241226341f0a7ffa3d1b9a70e35-delete. (kafka.log.LogManager)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-5.6d659cd119304e1f9a4077265364d05b-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-5.6d659cd119304e1f9a4077265364d05b-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-5.6d659cd119304e1f9a4077265364d05b-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-5 in /home/ec2-user/data/kafka/0/<topic-name>-5.6d659cd119304e1f9a4077265364d05b-delete. (kafka.log.LogManager)
INFO Deleted log /home/ec2-user/data/kafka/0/<topic-name>-6.652d1aec02014a3aa59bd3e14635bd3b-delete/00000000000000000000.log. (kafka.log.LogSegment)
INFO Deleted offset index /home/ec2-user/data/kafka/0/<topic-name>-6.652d1aec02014a3aa59bd3e14635bd3b-delete/00000000000000000000.index. (kafka.log.LogSegment)
INFO Deleted time index /home/ec2-user/data/kafka/0/<topic-name>-6.652d1aec02014a3aa59bd3e14635bd3b-delete/00000000000000000000.timeindex. (kafka.log.LogSegment)
INFO Deleted log for partition <topic-name>-6 in /home/ec2-user/data/kafka/0/<topic-name>-6.652d1aec02014a3aa59bd3e14635bd3b-delete. (kafka.log.LogManager)
INFO [GroupCoordinator 0]: Removed 0 offsets associated with deleted partitions: <topic-name>-0. (kafka.coordinator.group.GroupCoordinator)
INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(<topic-name>-0) (kafka.server.ReplicaFetcherManager)
INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions Set(<topic-name>-0) (kafka.server.ReplicaAlterLogDirsManager)
INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(<topic-name>-0) (kafka.server.ReplicaFetcherManager)
INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions Set(<topic-name>-0) (kafka.server.ReplicaAlterLogDirsManager)
INFO Log for partition <topic-name>-0 is renamed to /home/ec2-user/data/kafka/0/<topic-name>-0.185c7eda12b749a2999cd39b3f90c738-delete and is scheduled for deletion (kafka.log.LogManager)
INFO Creating topic <topic-name> with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0, 1)) (kafka.zk.AdminZkClient)
INFO [KafkaApi-0] Auto creation of topic <topic-name> with 1 partitions and replication factor 2 is successful (kafka.server.KafkaApis)
INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(<topic-name>-0) (kafka.server.ReplicaFetcherManager)
INFO [Log partition=<topic-name>-0, dir=/home/ec2-user/data/kafka/0] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
INFO [Log partition=<topic-name>-0, dir=/home/ec2-user/data/kafka/0] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 0 ms (kafka.log.Log)
INFO Created log for partition <topic-name>-0 in /home/ec2-user/data/kafka/0 with properties {compression.type -> producer, message.format.version -> 2.2-IV1, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 86400000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
INFO [Partition <topic-name>-0 broker=0] No checkpointed highwatermark is found for partition <topic-name>-0 (kafka.cluster.Partition)
INFO Replica loaded for partition <topic-name>-0 with initial high watermark 0 (kafka.cluster.Replica)
INFO Replica loaded for partition <topic-name>-0 with initial high watermark 0 (kafka.cluster.Replica)
INFO [Partition <topic-name>-0 broker=0] <topic-name>-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
Does anyone know the reason behind the topic being re-created when no consumers are listening and no producers are producing to the topic? Could replication be behind it (some race condition perhaps)? We are using Kafka 2.2.
Deleting the log directory for that topic directly does seem to work; however, this is cumbersome when there are thousands of topics. We want a cleanup script that does this periodically, because the auto-scale nature of the client environment means orphaned response topics will keep appearing.
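A periodic cleanup job along these lines might work, using only the Java clients available in 2.2 (the "response-" topic prefix, the broker address, and the zero-end-offset test are assumptions about our naming; a throwaway consumer is used to read end offsets):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class OrphanTopicCleanup {
    public static void main(String[] args) throws Exception {
        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());

        try (AdminClient admin = AdminClient.create(adminProps);
             KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps)) {

            List<String> toDelete = new ArrayList<>();
            for (String topic : admin.listTopics().names().get()) {
                // Only consider the auto-created response topics (prefix is an assumption).
                if (!topic.startsWith("response-")) {
                    continue;
                }
                List<TopicPartition> partitions = new ArrayList<>();
                for (PartitionInfo pi : consumer.partitionsFor(topic)) {
                    partitions.add(new TopicPartition(topic, pi.partition()));
                }
                // An end offset of 0 on every partition means the topic was never written to.
                Map<TopicPartition, Long> endOffsets = consumer.endOffsets(partitions);
                boolean empty = endOffsets.values().stream().allMatch(end -> end == 0L);
                if (empty) {
                    toDelete.add(topic);
                }
            }
            admin.deleteTopics(toDelete).all().get();
            System.out.println("Deleted " + toDelete.size() + " empty topics");
        }
    }
}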
Update
I tried Giorgos' suggestion of disabling auto.create.topics.enable and then deleting the topic. This time the topic did get deleted, but none of my applications threw any errors (which leads me to the conclusion that there are no consumers or producers for the said topic).
Further, when auto.create.topics.enable is enabled and the topic is created with a replication-factor=1, the topic does not get re-created after deletion. This leads me to believe that perhaps replication is the culprit. Could this be a bug in Kafka?
Jumped the gun here; turns out something is listening/re-creating these topics from the customer environment.
Even though you've mentioned that no consumer/producer is consuming from or producing to the topic, it sounds like that is exactly what is happening. Maybe you have connectors running on Kafka Connect that replicate data from/to Kafka?
If you still can't find what is causing the re-creation of the deleted Kafka topics, I would suggest setting auto.create.topics.enable to false (temporarily) so that topics cannot be automatically re-created. Then the process that is causing the topic re-creation will fail, and the failure will be reported in your logs.
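While experimenting, the brokers' effective setting can also be checked over the wire with the AdminClient instead of inspecting server.properties on each node. A rough sketch (broker ids "0" and "1" and the broker address are assumptions about the 2-node cluster):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckAutoCreate {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Query each broker's configuration and print the auto-create flag.
            for (String brokerId : new String[] {"0", "1"}) {
                ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, brokerId);
                Config config = admin.describeConfigs(Collections.singleton(broker))
                                     .all().get().get(broker);
                System.out.println("broker " + brokerId + ": auto.create.topics.enable = "
                        + config.get("auto.create.topics.enable").value());
            }
        }
    }
}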

How do you completely purge Apache Kafka?

I'm working on Spring Java micro-services using Apache Kafka for messaging. At times I want to completely reset my Kafka cluster (ZooKeeper and broker) so that I know I have a clean slate to test with. However, my broker still seems to know a lot about things that should have been deleted.
The environment is Windows 10 and I'm running Kafka v2.12.2 from Cygwin.
Here's my current process for resetting my Kafka setup:
Stop the broker
Stop Zookeeper
Delete the data directory
Restart Zookeeper
Restart the broker
At this point I see broker logging that references loading offsets and consumer groups.
For example:
[2018-10-23 09:38:49,118] INFO Replica loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Replica)
[2018-10-23 09:38:49,120] INFO [Partition __consumer_offsets-0 broker=0] __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
And:
[2018-10-23 09:38:49,171] INFO Replica loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Replica)
[2018-10-23 09:38:49,172] INFO [Partition __consumer_offsets-23 broker=0] __consumer_offsets-23 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2018-10-23 09:38:49,174] INFO Replica loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Replica)
[2018-10-23 09:38:49,174] INFO [Partition __consumer_offsets-1 broker=0] __consumer_offsets-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
And:
[2018-10-23 09:38:49,304] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
[2018-10-23 09:38:49,304] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
[2018-10-23 09:38:49,304] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
[2018-10-23 09:38:49,305] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
[2018-10-23 09:38:49,305] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
[2018-10-23 09:38:49,305] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
[2018-10-23 09:38:49,306] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
EDIT #1:
Below are a couple of lines from my property files. If I delete 'C:\tool\kafka\data' I'll still see similar logging to the above.
zookeeper.properties
dataDir=C:/tools/kafka/data/zookeeper
server.properties
log.dirs=C:/tools/kafka/data/kafka-logs
Turns out I had a process still connected to the cluster, and this caused the offsets to be rebuilt once the cluster started.
Once I made 110% sure every process had finished, deleting the data directory worked as expected.
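Before wiping, a quick check that no group is still registered with the cluster can be done with the Java AdminClient. A minimal sketch (localhost:9092 is an assumption):

import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ActiveGroupCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Groups listed here are still known to the brokers; if any client in them
            // is still running, it will recommit offsets as soon as the cluster is back.
            for (ConsumerGroupListing group : admin.listConsumerGroups().all().get()) {
                System.out.println("known group: " + group.groupId());
            }
        }
    }
}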

How does an offset expire for an Apache Kafka consumer group?

I was making some tests on an old topic when I noticed some strange behaviours. Reading Kafka's log I noticed this "removed 8 expired offsets" message:
[GroupCoordinator 1001]: Stabilized group GROUP_NAME generation 37 (kafka.coordinator.GroupCoordinator)
[GroupCoordinator 1001]: Assignment received from leader for group GROUP_NAME for generation 37 (kafka.coordinator.GroupCoordinator)
Deleting segment 0 from log __consumer_offsets-31. (kafka.log.Log)
Deleting segment 0 from log __consumer_offsets-45. (kafka.log.Log)
Deleting index /data/kafka-logs/__consumer_offsets-45/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
Deleting index /data/kafka-logs/__consumer_offsets-31/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
Deleting segment 0 from log __consumer_offsets-13. (kafka.log.Log)
Deleting index /data/kafka-logs/__consumer_offsets-13/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
Deleting segment 0 from log __consumer_offsets-11. (kafka.log.Log)
Deleting segment 4885 from log __consumer_offsets-11. (kafka.log.Log)
Deleting index /data/kafka-logs/__consumer_offsets-11/00000000000000004885.index.deleted (kafka.log.OffsetIndex)
Deleting index /data/kafka-logs/__consumer_offsets-11/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
Deleting segment 0 from log __consumer_offsets-26. (kafka.log.Log)
Deleting segment 12406 from log __consumer_offsets-26. (kafka.log.Log)
Deleting index /data/kafka-logs/__consumer_offsets-26/00000000000000012406.index.deleted (kafka.log.OffsetIndex)
Deleting index /data/kafka-logs/__consumer_offsets-26/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
Deleting segment 0 from log __consumer_offsets-22. (kafka.log.Log)
Deleting segment 8643 from log __consumer_offsets-22. (kafka.log.Log)
Deleting index /data/kafka-logs/__consumer_offsets-22/00000000000000008643.index.deleted (kafka.log.OffsetIndex)
Deleting index /data/kafka-logs/__consumer_offsets-22/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
Deleting segment 0 from log __consumer_offsets-6. (kafka.log.Log)
Deleting segment 9757 from log __consumer_offsets-6. (kafka.log.Log)
Deleting index /data/kafka-logs/__consumer_offsets-6/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
Deleting index /data/kafka-logs/__consumer_offsets-6/00000000000000009757.index.deleted (kafka.log.OffsetIndex)
Deleting segment 0 from log __consumer_offsets-14. (kafka.log.Log)
Deleting segment 1 from log __consumer_offsets-14. (kafka.log.Log)
Deleting index /data/kafka-logs/__consumer_offsets-14/00000000000000000001.index.deleted (kafka.log.OffsetIndex)
Deleting index /data/kafka-logs/__consumer_offsets-14/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
[GroupCoordinator 1001]: Preparing to restabilize group GROUP_NAME with old generation 37 (kafka.coordinator.GroupCoordinator)
[GroupCoordinator 1001]: Stabilized group GROUP_NAME generation 38 (kafka.coordinator.GroupCoordinator)
[GroupCoordinator 1001]: Assignment received from leader for group GROUP_NAME for generation 38 (kafka.coordinator.GroupCoordinator)
[Group Metadata Manager on Broker 1001]: Removed 8 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
In fact, I have 2 questions:
How does this offset expiration work for a consumer group?
Can this expired offset explain this behaviour where my consumer would not poll anything when it had auto.offset.reset = latest, but it polled from the last committed offset when it had auto.offset.reset = earliest?
Update
Since Apache Kafka 2.1, offsets won't be deleted as long as the consumer group is active, regardless of whether the consumers commit offsets or not; i.e., the offsets.retention.minutes clock only starts to tick when the group becomes empty (in older releases, the clock started to tick directly when the commit happened).
Cf. https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
Original Answer
By default, Kafka deletes committed offsets after a configurable period of time; see the parameter offsets.retention.minutes. I.e., if a consumer group is inactive (i.e., does not commit any offsets) for this amount of time, the offsets get deleted. Thus, even if the consumer is running, if it does not commit offsets for some partitions, those offsets are subject to offsets.retention.minutes.
If you start a consumer, the following happens:
look for a (valid) committed offset (for the consumer group)
if valid offset is found, resume from there
if no valid offset is found, reset offset according to auto.offset.reset parameter
Thus, if your offsets got deleted and auto.offset.reset = latest, your consumer will not poll anything until new data is added to the topic. If auto.offset.reset = earliest, it should consume the whole topic.
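To make the decision path concrete, here is a minimal plain-consumer sketch (the topic name, group id, and broker address are placeholders) showing where auto.offset.reset comes into play; it is only consulted when no valid committed offset is found:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ResetDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "GROUP_NAME");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Only used when no valid committed offset exists for the group:
        // "earliest" re-reads the whole topic, "latest" waits for new data.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}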
See this JIRA for a discussion about this https://issues.apache.org/jira/browse/KAFKA-3806 and https://issues.apache.org/jira/browse/KAFKA-4682
Check my answer here. You should not forget about file rolling; it impacts offset file removal.

cluster no response due to replication

I found this in my server.log:
[2016-03-29 18:24:59,349] INFO Scheduling log segment 3773408933 for log g17-4 for deletion. (kafka.log.Log)
[2016-03-29 18:24:59,349] INFO Scheduling log segment 3778380412 for log g17-4 for deletion. (kafka.log.Log)
[2016-03-29 18:24:59,403] WARN [ReplicaFetcherThread-3-4], Replica 2 for partition [g17,4] reset its fetch offset from 3501121050 to current leader 4's start offset 3501121050 (kafka.server.ReplicaFetcherThread)
[2016-03-29 18:24:59,403] ERROR [ReplicaFetcherThread-3-4], Current offset 3781428103 for partition [g17,4] out of range; reset offset to 3501121050 (kafka.server.ReplicaFetcherThread)
[2016-03-29 18:25:27,816] INFO Rolled new log segment for 'g17-12' in 1 ms. (kafka.log.Log)
[2016-03-29 18:25:35,548] INFO Rolled new log segment for 'g18-10' in 2 ms. (kafka.log.Log)
[2016-03-29 18:25:35,707] INFO Partition [g18,10] on broker 2: Shrinking ISR for partition [g18,10] from 2,4 to 2 (kafka.cluster.Partition)
[2016-03-29 18:25:36,042] INFO Partition [g18,10] on broker 2: Expanding ISR for partition [g18,10] from 2 to 2,4 (kafka.cluster.Partition)
The replica's offset is larger than the leader's, so the replica's data is deleted and then copied again from the leader.
But while that copy is in progress, the cluster is very slow; some Storm topologies fail because Kafka stops responding.
How do I prevent this problem from occurring?
How can I slow down the replication rate while the replica is catching up?