Spring Kafka - Re-reading an offset after some time - apache-kafka

I am using @KafkaListener with the following props:
max.poll.records set to 50 (each record takes 40-60 sec to process)
enable-auto-commit=false
ack-mode set to MANUAL_IMMEDIATE
Below is the logic:
@KafkaListener(groupId = "ABC", topics = "Data1", containerFactory = "myCustomContainerFactory")
public void listen(ConsumerRecord<String, Object> record, Acknowledgment ack) {
    try {
        process(record);
        ack.acknowledge();
    } catch (Exception e) {
        reprocess(); // pause container and seek
    }
}
Other props like max.poll.interval.ms, session.timeout.ms and heartbeat.interval.ms are left at their default values.
I am not able to understand what is going wrong here.
Suppose 500 messages are published to 2 partitions.
I am not sure why the consumer does not poll records as per the max.poll.records prop; it actually fetches all 500 messages as soon as the application starts or the producer publishes them.
I have observed that after processing some records (approx. 5-7 minutes in), the consumer re-reads an offset that was already read, processed and acknowledged.
After an hour the log file shows the same messages being read multiple times.
Any help is appreciated.
Thanks.

The default max.poll.interval.ms is 300,000 milliseconds (5 minutes).
You either need to reduce max.poll.records or increase the interval - otherwise Kafka will force a rebalance due to a non-responsive consumer.
With such a large processing time, I would recommend max.poll.records=1; you clearly don't need higher throughput.
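For illustration, the consumer factory behind the question's myCustomContainerFactory could apply both suggestions roughly like this. This is a sketch only, for recent Spring Kafka versions; the broker address, deserializers and chosen values are assumptions, not the asker's actual configuration.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class SlowConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> myCustomContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class); // assumed value type
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        // Records take 40-60 s each, so fetch them one at a time...
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);
        // ...and/or give the poll loop more headroom than the 5-minute default.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000);

        ConcurrentKafkaListenerContainerFactory<String, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}

With max.poll.records=1 the listener only ever has to finish one 40-60 second record before the container polls again, which keeps it comfortably inside even the default max.poll.interval.ms.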

Related

Kafka Consumer death handling

I have a question regarding handling consumer death due to exceeding the timeout values.
My example configuration:
session.timeout.ms = 10000 (10 seconds)
heartbeat.interval.ms = 2000 (2 seconds)
max.poll.interval.ms = 300000 (5 minutes)
I have 1 topic, 10 partitions, 1 consumer group, 10 consumers (1 partition = 1 consumer).
From my understanding, consuming messages in Kafka, very simplified, works as follows:
1. consumer polls 100 records from the topic
2. a heartbeat signal is sent to the broker
3. processing of the records is in progress
4. processing of the records completes
5. processing is finalized (commit, do nothing, etc.)
6. repeat #1-5 in a loop
My question is: what happens if the time between heartbeats becomes longer than the configured session.timeout.ms? I understand that if the session times out, the broker initiates a rebalance, the consumer whose processing took longer than session.timeout.ms is marked as dead, and a different consumer is assigned/subscribed to that partition.
Okay, but what then...?
1. Is that long-processing consumer removed/unsubscribed from the topic, leaving my application with 9 working consumers? What if all the consumers exceed the timeout and are all considered dead: am I left with a running application which does nothing because there are no consumers?
2. The long-processing consumer finishes its work after the rebalance has already taken place; does the broker initiate a rebalance again and assign the consumer a partition anew? As I understand it, the consumer keeps running #1-5 in a loop, and sending a heartbeat to the broker also starts the process of re-adding the consumer to the group from which it was removed after being marked dead, correct?
3. Does the application throw some sort of exception indicating that session.timeout.ms was exceeded, and is processing abruptly stopped?
4. Also, what about the max.poll.interval.ms property: what if we exceed even that period and consumer X finishes processing after the max.poll.interval.ms value? The consumer has already exceeded session.timeout.ms, was excluded from the consumer group and marked dead, so what difference does this setting make when configuring the Kafka consumer?
We have a process which extracts data for processing, and this extraction consists of 50+ SQL queries (the majority being SELECTs, a few UPDATEs). They usually run fast, but of course everything depends on the DB load, possible locks, etc., and there is a possibility that the processing takes longer than the session timeout. I do not want to increase the session timeout indefinitely until "I hit the spot". The process is idempotent; if it's repeated X times within X minutes we do not care.
Please find the answers.
#1. Yes. If all of your consumer instances are kicked out of the consumer group due to session.timeout, then you will be left with zero consumer instances; effectively, the consumer application is dead unless you restart it.
#2. It depends on how you write your consumer code with respect to poll() and iterating over the consumer records. If you have a proper while(true) loop with try/catch inside, your consumer will be able to re-join the consumer group after processing that long-running record (see the sketch after #4 below).
#3. You will end up with the commit failed exception:
failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
And again, it depends on your code whether the consumer automatically re-joins the consumer group.
#4. The answer lies here:
session.timeout.ms
The amount of time a consumer can be out of contact with the brokers while still considered alive defaults to 3 seconds. If more than session.timeout.ms passes without the consumer sending a heartbeat to the group coordinator, it is considered dead and the group coordinator will trigger a rebalance of the consumer group to allocate partitions from the dead consumer to the other consumers in the group. This property is closely related to heartbeat.interval.ms. heartbeat.interval.ms controls how frequently the KafkaConsumer poll() method will send a heartbeat to the group coordinator, whereas session.timeout.ms controls how long a consumer can go without sending a heartbeat. Therefore, those two properties are typically modified together: heartbeat.interval.ms must be lower than session.timeout.ms, and is usually set to one-third of the timeout value. So if session.timeout.ms is 3 seconds, heartbeat.interval.ms should be 1 second. Setting session.timeout.ms lower than the default will allow consumer groups to detect and recover from failure sooner, but may also cause unwanted rebalances as a result of consumers taking longer to complete the poll loop or garbage collection. Setting session.timeout.ms higher will reduce the chance of accidental rebalance, but also means it will take longer to detect a real failure.
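To make answer #2 concrete, here is a minimal sketch of such a poll loop using the plain kafka-clients API; the broker address, group id, topic name and processing logic are assumptions for illustration. The point is that a CommitFailedException after a rebalance is caught and the loop simply keeps calling poll(), which re-joins the group.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ResilientPollLoop {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // assumed
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));        // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // the long-running work happens here
                }
                try {
                    consumer.commitSync();
                } catch (CommitFailedException e) {
                    // The group rebalanced while we were processing; the records will be
                    // redelivered to whichever consumer now owns the partition. Just keep
                    // polling: the next poll() re-joins the group automatically.
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder for the long-running business logic
    }
}

With manual commits like this, records processed between the last successful commit and the rebalance will be redelivered, so processing should be idempotent (which, as noted in the question, it is).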

Impact of reducing max.poll.records in Kafka Consumer configuration

I am writing a consumer application to pick records from a Kafka stream and process them using spring-kafka.
My processing steps are as below :
Get records from the stream --> dump them into a table --> fetch records and call an API --> the API updates records in a table --> call async commit()
It seems that in some scenarios the API processing takes more time because more records are being fetched, and we are getting the errors below:
Member consumer-prov-em-1-399ede46-9e12-4388-b5b8-f198a4e6a5bc
sending LeaveGroup request to coordinator apslt2555.uhc.com:9095 (id:
2147483577 rack: null) due to consumer poll timeout has expired. This
means the time between subsequent calls to poll() was longer than the
configured max.poll.interval.ms, which typically implies that the poll
loop is spending too much time processing messages. You can address
this either by increasing max.poll.interval.ms or by reducing the
maximum size of batches returned in poll() with max.poll.records.
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot
be completed since the group has already rebalanced and assigned the
partitions to another member. This means that the time between
subsequent calls to poll() was longer than the configured
max.poll.interval.ms, which typically implies that the poll loop is
spending too much time message processing. You can address this either
by increasing max.poll.interval.ms or by reducing the maximum size of
batches returned in poll() with max.poll.records.
I know this can be handled by reducing max.poll.records or by increasing max.poll.interval.ms. What I am trying to understand is: if I set max.poll.records to 10, what would the poll() behavior be? Is it going to take 10 records from the stream, wait for these records to be committed, and then go for the next 10 records? When does the next poll occur? Is it also going to impact performance, given we are reducing max.poll.records from the default 500 to 10?
Do I also have to increase max.poll.interval.ms, perhaps to 10 minutes? Is there any downside that I should be aware of while changing these values? Apart from these parameters, is there any other way to handle these errors?
max.poll.records enables a batch-processing consumption model in which records are collected in memory before flushing them to another system. The idea is to fetch a batch of records from Kafka in a single poll and then process them in memory within the poll loop.
If you decrease the number, the consumer will poll Kafka more frequently, which means it makes network calls more often. This might reduce the performance of your stream processing.
max.poll.interval.ms controls the maximum time between poll invocations before the consumer will proactively leave the group. If this number increases, it will take longer for Kafka to detect consumer failures. On the other hand, if this value is too low, Kafka might falsely detect alive consumers as failed and thus rebalance more often.
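As a rough sketch of how the two settings are tuned together (the numbers are illustrative assumptions, not recommendations): with max.poll.records=10 and roughly one minute per record, the poll interval needs to cover the whole batch plus some slack.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class SlowBatchTuning {
    // Illustrative values only: 10 records per poll, ~60 s per record,
    // so allow 10 * 60 s plus one extra minute of slack before the broker
    // considers the consumer stuck and triggers a rebalance.
    static Map<String, Object> overrides() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 10);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 10 * 60_000 + 60_000); // 11 minutes
        return props;
    }
}

Note that with 10 records per poll, draining a backlog of 500 records takes 50 polls instead of one with the default, which is the extra network chatter mentioned above.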

Should we use max.poll.records or max.poll.interval.ms to handle records that take longer to process in kafka consumer?

I'm trying to understand which is the better option to handle records that take longer to process in a Kafka consumer. I ran a few tests and observed that we can control this by modifying either max.poll.records or max.poll.interval.ms.
Now my question is: which is the better option to choose? Please suggest.
max.poll.records simply defines the maximum number of records returned in a single call to poll().
max.poll.interval.ms, in turn, defines the maximum allowed delay between calls to poll().
max.poll.interval.ms: The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null group.instance.id which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of session.timeout.ms. This mirrors the behavior of a static consumer which has shutdown.
I believe you can tune both in order to get to the expected behaviour. For example, you could compute the average processing time for the messages. If the average processing time is say 1 second and you have max.poll.records=100 then you should allow approximately 100+ seconds for the poll interval.
If you have slow processing and want to avoid rebalances, tuning either would achieve that. However, extending max.poll.interval.ms to allow for longer gaps between polls does have a bit of a side effect.
Each consumer only uses 2 threads: a polling thread and a heartbeat thread.
The latter lets the group know that your application is still alive, so the group can trigger a rebalance before max.poll.interval.ms expires.
The polling thread does everything else in terms of group communication, so it is during the poll method that you find out whether a rebalance has been triggered elsewhere, or whether a partition leader has died and a metadata refresh is therefore required. The implication is that if you allow longer gaps between polls, the group as a whole is slower to respond to change (for example, no consumer starts receiving messages after a rebalance until all of them have received their new partitions; if a rebalance occurs just after one consumer has started processing a batch for 10 minutes, then all consumers will be waiting around for at least that long).
Hence, for a more responsive group in situations where processing of messages is expected to be slow, you should choose to reduce the number of records fetched in each batch.
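As a back-of-the-envelope version of the sizing suggested above (the batch size, per-record time and slack factor are assumptions, not measurements):

// Rough sizing sketch: estimate a safe max.poll.interval.ms from the
// batch size and the average per-record processing time, plus some slack.
public class PollIntervalEstimate {
    public static void main(String[] args) {
        int maxPollRecords = 100;          // assumed batch size
        long avgProcessingMillis = 1_000;  // assumed ~1 s per record
        double slackFactor = 1.2;          // 20% headroom for jitter, GC, etc.

        long maxPollIntervalMs = (long) (maxPollRecords * avgProcessingMillis * slackFactor);
        System.out.println("Suggested max.poll.interval.ms >= " + maxPollIntervalMs); // 120000
    }
}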

What is the delay time between each poll

In the Kafka documentation I'm trying to understand this property: max.poll.interval.ms
The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.
This means each poll must happen before the poll timeout, which by default is 5 minutes. So my question is: exactly how much time does the consumer thread take between two consecutive polls?
For example: Consumer Thread 1
First poll --> 100 records
--> process 100 records (took 1 minute)
--> consumer commits the offsets
Second poll --> 100 records
--> process 100 records (took 1 minute)
--> consumer commits the offsets
Does the consumer take time between the first and second poll? If yes, why, and how can we change that time? (Assume the topic has a huge amount of data.)
It's not clear what you mean by "take time between"; if you are talking about the spring-kafka listener container, there is no wait or sleep between polls.
The consumer is polled immediately after the offsets are committed.
So, max.poll.interval.ms must be large enough for your listener to process max.poll.records (plus some extra, just in case).
But, no, there are no delays added between polls, just the time it takes the listener to handle the results of the poll.

Consumer rebalance while using autocommit

We're using consumer kafka client 0.10.2.0 with the following configuration:
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");
props.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, 64 * 1024);
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 16 * 1024);
props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, RoundRobinAssignor.class.getName());
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, "40000");
props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "10000");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
So as you can see we're using autocommit.
The consumer API version that we're using has a dedicated thread for doing autocommit.
So every second we have an autocommit, which means that we have a heartbeat every second.
Our application's processing may from time to time actually take more than 40 seconds (the request timeout interval).
What I wanted to ask is:
1 - If processing takes, for example, a minute, will there be a rebalance even though there is the autocommit "heartbeat" every second?
2 - What is weirder is that with long execution times it seems we're getting the same message more than once. Is that normal? If the consumer has committed an offset, why does the rebalance cause the same offset to be consumed again?
Thanks,
Orel
You can use KafkaConsumer.pause() / KafkaConsumer.resume() to prevent consumer rebalancing during long processing pauses (see the JavaDocs).
Re. 2: Are you sure that these offsets were actually committed?
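A rough sketch of that pause()/resume() idea with the plain consumer API (it uses the Duration-based poll() from newer client versions; the worker-thread hand-off and the process() method are assumptions for illustration):

import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PauseResumeLoop {

    // Sketch of the pause()/resume() pattern for long processing. The consumer is
    // assumed to be already configured and subscribed; process() stands in for the
    // slow business logic.
    static void run(KafkaConsumer<String, String> consumer) {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            if (records.isEmpty()) {
                continue;
            }
            // Hand the batch to a worker thread, then pause the assigned partitions
            // so subsequent polls return nothing new while we wait.
            Future<?> done = worker.submit(() -> process(records));
            consumer.pause(consumer.assignment());
            while (!done.isDone()) {
                // Keep polling so the consumer stays in the group and never exceeds
                // max.poll.interval.ms; paused partitions return no records.
                consumer.poll(Duration.ofSeconds(1));
            }
            consumer.resume(consumer.assignment());
            consumer.commitSync();
        }
    }

    static void process(ConsumerRecords<String, String> records) {
        // placeholder for the long-running work
    }
}

A production version would also register a ConsumerRebalanceListener, since a rebalance while the partitions are paused changes the assignment and can make the final commit fail.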
Just to clarify: the auto-commit check runs on every poll, and it checks whether the time elapsed is greater than the configured interval; only then does it do the commit.
E.g. if the commit interval is 5 seconds and polls happen every 7 seconds, the commit will happen after 7 seconds.
For your questions:
Auto-commit does not count as a heartbeat; if processing takes a long time then obviously the commit will not happen, leading to a session timeout, which in turn triggers a rebalance.
This shouldn't happen unless you are seeking/resetting the offset to a previously committed offset, or a consumer rebalance occurred.
Since Kafka v0.10.1.0, you don't need auto-commit (or any manual trigger) to drive heartbeats: the Kafka consumer itself starts a background thread for the heartbeat mechanism. To know more, read KIP-62.
In your case, you can set max.poll.interval.ms to the maximum time your processor takes to handle max.poll.records records.
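Applied to the props shown in the question, that could look roughly like the following; the 10-minute value is an assumption and should be sized from your own worst-case time to process 100 records.

// Assumed sizing: max.poll.records=100 as configured above, so allow enough
// time for the slowest realistic batch before the consumer is deemed stuck.
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000"); // 10 minutes, illustrative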