Difference between session.timeout.ms and max.poll.interval.ms for Kafka >= 0.10.1 - apache-kafka

I am unclear why we need both session.timeout.ms and max.poll.interval.ms and when would we use one or the other or both? It seems like both settings indicate the upper bound on the time the coordinator will wait to get the heartbeat from a consumer before assuming it's dead.
Also how does it behave for versions 0.10.1.0+ based on KIP-62?

Before KIP-62, there was only session.timeout.ms (i.e., Kafka 0.10.0 and earlier). max.poll.interval.ms was introduced via KIP-62 (part of Kafka 0.10.1).
KIP-62 decouples heartbeats from calls to poll() via a background heartbeat thread, allowing for a longer processing time (i.e., the time between two consecutive poll() calls) than the heartbeat interval.
Assume processing a message takes 1 minute. If heartbeat and poll are coupled (i.e., before KIP-62), you would need to set session.timeout.ms larger than 1 minute to prevent the consumer from timing out. However, if a consumer dies, it then also takes longer than 1 minute to detect the failed consumer.
KIP-62 decouples polling and heartbeating, allowing heartbeats to be sent between two consecutive polls. Now there are two threads running, the heartbeat thread and the processing thread, and thus KIP-62 introduced a timeout for each: session.timeout.ms is for the heartbeat thread, while max.poll.interval.ms is for the processing thread.
Assume you set session.timeout.ms=30000; the consumer's heartbeat thread must then send a heartbeat to the broker before this time expires. On the other hand, if processing a single message takes 1 minute, you can set max.poll.interval.ms larger than one minute to give the processing thread more time to process a message.
If the processing thread dies, it takes max.poll.interval.ms to detect this. However, if the whole consumer dies (and a dying processing thread most likely crashes the whole consumer including the heartbeat thread), it takes only session.timeout.ms to detect it.
The idea is to allow for quick detection of a failing consumer even if processing itself takes quite long.
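For illustration, a minimal sketch of that setup with the plain Java consumer (bootstrap servers, group id and the exact timeout values are assumptions, not recommendations):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SlowProcessingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Heartbeat thread: a crashed consumer is detected within 30 seconds.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");
        // Processing thread: allow up to 2 minutes between two poll() calls,
        // since a single record may take about 1 minute to process.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "120000");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // ... subscribe and poll as usual ...
        consumer.close();
    }
}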
Implementation Detail
The new timeout max.poll.interval.ms is mainly a client-side concept: if poll() is not called within max.poll.interval.ms, the heartbeat thread will detect this case and send a leave-group request to the broker. Additionally, max.poll.interval.ms is still relevant for consumer group rebalances: if a rebalance is triggered, consumers have max.poll.interval.ms to re-join the group by calling poll() client-side, which triggers a join-group request.

Related

Spring Kafka Consumer Unable to Rejoin after LeaveGroup Request

We are using Spring Kafka in Production under heavy load.
We have used @KafkaListener annotations and created those listeners as part of Spring Boot services.
Very frequently these consumers send LeaveGroup requests to the coordinator and then hang indefinitely without any log or error. The only option we are left with in that case is to redeploy that particular instance.
This is the series of logs that we see:
Attempt to heartbeat failed since group is rebalancing
Attempt to heartbeat failed since group is rebalancing
This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records.
Member consumer-1-0d0b333d-9e5b-4038-ac3f-e5d59c4e9d19 sending LeaveGroup request to coordinator 172.25.128.233:9092 (id: 2147483560 rack: null)
Additional information:
We are using the Kafka configs below:
Batch Size: 10
Max Poll Interval: 12 mins
Heart Beat: 8 seconds
Max Partition Size: 10 MB
Session Timeout: 25 seconds
Request Timeout: 20 mins
Basically, we want to know: once the consumer leaves the group, why is it not sending the Join request again?
You should understand that the retry suspends the consumer thread (if a BackOffPolicy is used). There are no calls to Consumer.poll() during the retries. Kafka has two properties to determine consumer health. The session.timeout.ms is used to determine if the consumer is active. Since kafka-clients version 0.10.1.0, heartbeats are sent on a background thread, so a slow consumer no longer affects that. max.poll.interval.ms (default: five minutes) is used to determine if a consumer appears to be hung (taking too long to process records from the last poll). If the time between poll() calls exceeds this, the broker revokes the assigned partitions and performs a rebalance. For lengthy retry sequences, with back off, this can easily happen.
Since version 2.1.3, you can avoid this problem by using stateful retry in conjunction with a SeekToCurrentErrorHandler. In this case, each delivery attempt throws the exception back to the container, the error handler re-seeks the unprocessed offsets, and the same message is redelivered by the next poll(). This avoids the problem of exceeding the max.poll.interval.ms property (as long as an individual delay between attempts does not exceed it). So, when you use an ExponentialBackOffPolicy, you must ensure that the maxInterval is less than the max.poll.interval.ms property. To enable stateful retry, you can use the RetryingMessageListenerAdapter constructor that takes a stateful boolean argument (set it to true). When you configure the listener container factory (for @KafkaListener), set the factory's statefulRetry property to true.
https://docs.spring.io/spring-kafka/reference/html/#stateful-retry
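For reference, a rough sketch of what that configuration could look like with a Spring Kafka 2.2-era API (the bean wiring, class name and all values are assumptions; check the linked documentation for your exact version):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class RetryConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // Re-seek unprocessed offsets so the failed record is redelivered on the next poll().
        factory.setErrorHandler(new SeekToCurrentErrorHandler());
        // Stateful retry: each attempt throws back to the container instead of
        // blocking the consumer thread for the whole retry sequence.
        factory.setRetryTemplate(retryTemplate());
        factory.setStatefulRetry(true);
        return factory;
    }

    private RetryTemplate retryTemplate() {
        RetryTemplate template = new RetryTemplate();
        ExponentialBackOffPolicy backOff = new ExponentialBackOffPolicy();
        backOff.setInitialInterval(1_000);
        // Keep the longest single delay well below max.poll.interval.ms.
        backOff.setMaxInterval(60_000);
        template.setBackOffPolicy(backOff);
        template.setRetryPolicy(new SimpleRetryPolicy(3));
        return template;
    }
}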

Kafka 0.10.1 heartbeat.interval.ms, session.timeout.ms and max.poll.interval.ms

I am using Kafka 0.10.1.1 and am confused by the following 3 properties.
heartbeat.interval.ms
session.timeout.ms
max.poll.interval.ms
heartbeat.interval.ms - This was added in 0.10.1 and it sends heartbeats between polls.
session.timeout.ms - This starts rebalancing if no request reaches Kafka, and it gets reset on every poll.
max.poll.interval.ms - This applies across polls.
But when does Kafka start rebalancing? Why do we need these 3? What are the default values for all of them?
Thanks
Assuming we are talking about Kafka 0.10.1.0 or later, each consumer instance employs two threads: a user thread, from which poll() is called, and a heartbeat thread that exclusively takes care of heartbeats.
session.timeout.ms is for the heartbeat thread. If the coordinator fails to get any heartbeat from a consumer before this interval elapses, it marks the consumer as failed and triggers a new round of rebalancing.
max.poll.interval.ms is for the user thread. If message processing takes longer than this interval, the consumer explicitly leaves the group, which also triggers a new round of rebalancing.
heartbeat.interval.ms is used to make other healthy consumers aware of a rebalance sooner. If the coordinator triggers a rebalance, other consumers only learn of it by receiving a heartbeat response with a REBALANCE_IN_PROGRESS error encapsulated. The more frequently heartbeat requests are sent, the faster a consumer knows it needs to rejoin the group.
Suggested values (a config sketch follows this list):
session.timeout.ms: a relatively low value, 10 seconds for instance.
max.poll.interval.ms: based on your processing requirements.
heartbeat.interval.ms: a relatively low value, ideally 1/3 of session.timeout.ms.
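As a sketch only (the numbers are illustrative assumptions to be adapted to your workload), those suggestions could look like this in the consumer properties:

import java.util.Properties;

Properties props = new Properties();
// Heartbeat thread: a dead consumer is detected within about 10 seconds.
props.put("session.timeout.ms", "10000");
// Roughly 1/3 of session.timeout.ms.
props.put("heartbeat.interval.ms", "3000");
// Sized to the worst-case time needed to process one batch returned by poll().
props.put("max.poll.interval.ms", "300000");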
session.timeout.ms is closely related to heartbeat.interval.ms.
heartbeat.interval.ms controls how frequently the consumer sends a heartbeat to the group coordinator, whereas session.timeout.ms controls how long a consumer can go without sending a heartbeat.
Therefore, those two properties are typically modified together.
heartbeat.interval.ms must be lower than session.timeout.ms, and is usually set to one-third of the timeout value. So if session.timeout.ms is 3 seconds, heartbeat.interval.ms should be 1 second.
max.poll.interval.ms - The maximum delay between invocations of poll() when using consumer group management.
This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member
To make this clearer: a heartbeat thread (running alongside the user thread that invokes poll() in the same process) sends a heartbeat to the coordinator every heartbeat.interval.ms, and the consumer behind the user thread is considered dead if session.timeout.ms or max.poll.interval.ms is exceeded.

Difference between heartbeat.interval.ms and session.timeout.ms in Kafka consumer config

I'm currently running Kafka 0.10.0.1, and the corresponding docs for the two values in question are as follows:
heartbeat.interval.ms -
The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
session.timeout.ms -
The timeout used to detect failures when using Kafka's group management facilities. When a consumer's heartbeat is not received within the session timeout, the broker will mark the consumer as failed and rebalance the group. Since heartbeats are sent only when poll() is invoked, a higher session timeout allows more time for message processing in the consumer's poll loop at the cost of a longer time to detect hard failures. See also max.poll.records for another option to control the processing time in the poll loop.
It isn't clear to me why the docs recommend setting heartbeat.interval.ms to 1/3 of session.timeout.ms. Does it not make sense to have these values be the same since the heartbeat is only sent when poll() is invoked, and thus when processing of the current records is done?
heartbeat.interval.ms specifies how frequently the consumer sends a heartbeat signal. So if this is 3000 ms (the default), the consumer sends a heartbeat to the broker every 3 seconds.
session.timeout.ms specifies the amount of time within which the broker needs to receive at least one heartbeat from the consumer; otherwise it marks the consumer as dead. The default value of 10000 ms (10 seconds) allows for three missed heartbeats before the broker marks the consumer as dead.
In a network setup under heavy load, it is normal to miss a few heartbeats. So it is recommended to allow for three missed heartbeats before marking the consumer as dead, and that is the reason for the 1/3 recommendation (see the sketch below).
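Put differently, with the values quoted above the broker effectively tolerates about three missed heartbeats before giving up (a back-of-the-envelope sketch):

long sessionTimeoutMs = 10_000;    // session.timeout.ms
long heartbeatIntervalMs = 3_000;  // heartbeat.interval.ms
long toleratedMisses = sessionTimeoutMs / heartbeatIntervalMs; // = 3 missed heartbeats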
The code enforces a hard limit: you cannot set heartbeat.interval.ms equal to or greater than session.timeout.ms, otherwise Kafka complains "Heartbeat must be set lower than the session timeout".
If you really set these two configs to the same value, a possible outcome is that the network client never sends a heartbeat at all, because the session timeout nearly always expires before a heartbeat is due.
As for the 1/3, I prefer to think of it as a heuristic value.
heartbeat.interval.ms is the interval at which the consumer sends a signal to the Kafka broker to indicate it is alive. session.timeout.ms is the maximum time the Kafka broker will wait without receiving a heartbeat from the consumer; if that duration passes without a heartbeat, the consumer is marked as dead (i.e., it can no longer consume messages). In a Kafka deployment processing millions of messages a day, you can use a larger session.timeout.ms, say up to 30000 ms (the default is 10 s), to keep the consumer alive while it processes a huge volume.

heartbeat failed for group because it's rebalancing

What is the exact reason for a heartbeat failure because the group is rebalancing? And what causes a rebalance when all the consumers in the group are up?
Thank you.
Heartbeats are the basic mechanism to check if all consumers are still up and running. If you get a heartbeat failure because the group is rebalancing, it indicates that your consumer instance took too long to send the next heartbeat and was considered dead and thus a rebalance got triggered.
If you want to prevent this from happening, you can either increase the timeout (session.timeout.ms) or make sure your consumer sends heartbeats more often (heartbeat.interval.ms). Heartbeats are basically embedded in poll(); thus, you need to make sure you call poll() frequently enough. This can usually be achieved by limiting the number of records a single poll() returns via max.poll.records (to shorten the time it takes to process all the data that got fetched).
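For example, a minimal sketch of a poll loop that limits the batch size via max.poll.records (bootstrap servers, group, topic and the process() helper are placeholders, and the values are illustrative):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SmallBatchConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // Fewer records per poll() means less processing time between two polls.
        props.put("max.poll.records", "50");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            while (true) {
                // Duration overload is Kafka 2.0+; 0.10.x clients use poll(long).
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // hypothetical per-record handler; keep it fast
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // placeholder for the application's processing logic
    }
}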
Update
Since Kafka 0.10.1, heartbeats are sent by a background thread, not when poll() is called (cf. https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread). In this new design, the configurations session.timeout.ms and heartbeat.interval.ms are still the same. Additionally, there is max.poll.interval.ms, which determines how often poll() must be called. If you fail to call poll() within max.poll.interval.ms, the heartbeat thread assumes that the processing thread died, sends a leave-group request that triggers a rebalance, and stops sending heartbeats afterwards. If your processing thread is fine but just slow, the next call to poll() will initiate another rebalance to re-join the group.
For more details, cf. Difference between session.timeout.ms and max.poll.interval.ms for Kafka >= 0.10.1

Confluent Kafka Consumer Configuration - How session.timeout.ms and max.poll.interval.ms are related?

I'm trying to understand how the default values of the Confluent consumer configurations below work together.
max.poll.interval.ms - As per confluent documentation, the default value is 300,000 ms
session.timeout.ms - As per confluent documentation, the default value is 10,000 ms
heartbeat.interval.ms - As per confluent documentation, the default value is 3,000 ms
Let's say I'm using these default values in my configuration. Now I have a question here.
For example, assume a consumer is sending heartbeats every 3,000 ms, my first poll happened at timestamp t1, and the second poll happened at t1 + 20,000 ms. Would that cause a rebalance because it exceeds session.timeout.ms? Or would it work fine because the consumer did send heartbeats on schedule?
The previous thread linked here also explains the session timeout and max poll timeout. Let me also explain my understanding of this.
ConsumerRecords poll(final long timeout):
This is used to fetch data sequentially from a topic's partitions, starting from the last consumed offset or a manually set offset. It returns immediately if records are available; otherwise it waits for the passed timeout and returns an empty set of records if the timeout expires.
The poll API is called repeatedly to fetch newly arrived messages, and it also ensures the liveness of the consumer. Underneath the covers:
session.timeout.ms - During each poll, the consumer coordinator sends a heartbeat to the broker to signal that the consumer's session is live and active. If the broker does not receive any heartbeat within session.timeout.ms, it drops that consumer and triggers a rebalance.
You can think of session.timeout.ms as the maximum time the broker will wait for a heartbeat from the consumer, whereas heartbeat.interval.ms is the expected interval at which the consumer sends heartbeats to the broker.
That explains why heartbeat.interval.ms is always less than session.timeout.ms; ideally it is 1/3 of the session timeout.
max.poll.interval.ms - The maximum delay between invocations of poll() when using consumer group management, i.e., the maximum time the consumer can be idle before fetching more records. If poll() is not called before this timeout expires, the consumer is considered failed and the group rebalances in order to reassign the partitions to another consumer instance.
If you are doing long batch processing it is good to increase max.poll.interval.ms, but note that increasing this value may delay a group rebalance, since the consumer only joins the rebalance inside the call to poll(). Alternatively, you can keep the time between polls low by tuning max.poll.records (see the sketch below).
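One hedged way to reason about that tuning (all numbers below are assumptions for illustration):

int maxPollRecords = 10;            // max.poll.records
long worstCasePerRecordMs = 5_000;  // assumed worst-case processing time per record
long worstCaseBatchMs = maxPollRecords * worstCasePerRecordMs; // 50 s to handle one poll() batch
// Leave headroom: max.poll.interval.ms should comfortably exceed the worst-case batch time.
long maxPollIntervalMs = 2 * worstCaseBatchMs; // e.g. 100 s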
Now let's discuss how they relate to each other.
While calling poll, the consumer checks the heartbeat, session timeout and poll timeout in the background, roughly in the following manner:
The consumer coordinator checks whether the consumer is in a rebalancing state; if it is still rebalancing, it waits for the coordinator to let the consumer join, then polls again. Note that if max.poll.interval.ms is large, a rebalance will take more time to complete.
After the poll and the rebalance complete, the coordinator checks the session timeout.
If the session timeout has expired without a successful heartbeat, the old coordinator gets disconnected, so the next poll will try to rebalance.
So the session timeout is directly tied to coordinator liveness: if the session times out, the consumer's coordinator is considered dead, and the next poll has to assign a new coordinator before rebalancing.
After the session timeout check, the coordinator validates the poll timeout. If the poll timeout has expired, it means the foreground thread has stalled between calls to poll(), so the member explicitly leaves the group; the next poll re-joins only this consumer, not the whole consumer group.
After the session timeout and poll timeout checks, and before sending the heartbeat, the consumer coordinator checks the heartbeat timeout; if the heartbeat has exceeded its maximum delay, it pauses for the retry backoff and polls again.
If the heartbeat time is still within the limit, the consumer coordinator sends the heartbeat request (sendHeartbeatRequest).
If sendHeartbeatRequest succeeds, the thread resets the heartbeat timer and calls poll; if it fails and the consumer group is not in a rebalancing state, it wakes up the consumer group coordinator to call poll again.
As mentioned in the linked answer, polling is independent of heartbeats, so even when a poll takes quite long, heartbeats can still be sent, which confirms that your threads are live; this means the session timeout is not directly linked to poll().
session.timeout.ms: the maximum time to receive a heartbeat.
max.poll.interval.ms: the maximum time for the independent processing thread between polls.
So if you set max.poll.interval.ms to 300,000 ms, the consumer thread has up to 300,000 ms until the next poll, which means it has at most 300,000 ms to complete processing. In between, the heartbeat thread keeps sending heartbeat requests every heartbeat.interval.ms (3,000 ms) to indicate the thread is still live; if no heartbeat goes through within session.timeout.ms (10,000 ms), the coordinator is considered dead and the next poll has to assign a new coordinator before rebalancing.
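To tie the numbers together, a sketch of those defaults with the question's scenario in the comments (the values are the defaults quoted above):

import java.util.Properties;

Properties props = new Properties();
props.put("session.timeout.ms", "10000");    // heartbeat thread: a dead consumer is detected within ~10 s
props.put("heartbeat.interval.ms", "3000");  // background thread sends a heartbeat every 3 s
props.put("max.poll.interval.ms", "300000"); // processing thread: up to 5 min between poll() calls
// With these values, a gap of 20,000 ms between two poll() calls does NOT trigger a rebalance:
// the background thread keeps heartbeating, and 20,000 ms is well below max.poll.interval.ms.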