Should the consumer or the client produce the retry event? - apache-kafka

Let's say we have a Kafka consumer polling from a normal topic that is heavily loaded, and for each event it makes a client call to a service. The duration of the client call varies, sometimes fast, sometimes slow. We have a retry topic, so whenever the client call has an issue we produce a retry event.
This raises an interesting design question: which domain should be responsible for producing the retry event?
If we let the consumer handle the retry produce, it has to wait for the client call to finish, which brings the risk of consumer lag because event processing becomes slow.
If we let the service handle the retry produce, that solves the consumer lag issue because the consumer just sends and forgets. However, if the service tries to produce a retry event and that produce fails, the retry record might be lost forever for that client call.
I also thought of adding a database to persist retry events, but that raises the same concern: if the DB write fails, we lose the retry just as we would if the Kafka produce errored out.
The expectation is to keep this resilient, so that every failed event gets a chance to be retried, while at the same time avoiding the consumer lag issue.

I'm not sure I completely understand the question, but I will give it a shot. To summarise, you want to ensure the producer retries if the event failed.
The producer's retries configuration defaults to 2147483647, so if a produce request fails it will keep retrying.
However, produce requests will fail before the number of retries is exhausted if the timeout configured by delivery.timeout.ms expires before a successful acknowledgement. The default for delivery.timeout.ms is 2 minutes, so you might want to increase this.
To ensure the producer always sends the record, you also want to focus on the acks producer configuration.
If acks=all, all replicas in the ISR must acknowledge the record before it is considered successful. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee.
The above can cause duplicate messages. If you wanted to avoid duplicates, I can also let you know how to do that.
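To make this concrete, here is a minimal sketch of a producer configured along those lines; the broker address, topic name, timeout value and the enable.idempotence line (one common way to address the duplicate concern) are illustrative choices, not something taken from your setup:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RetryEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");                  // wait for all in-sync replicas
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);   // the default, shown for clarity
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 300_000); // raised from the 2-minute default
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);     // avoids duplicates introduced by retries

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("retry-topic", "key", "payload"), // placeholder topic
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Retries and delivery.timeout.ms are exhausted; the record was not delivered.
                            exception.printStackTrace();
                        }
                    });
        }
    }
}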

With Spring for Apache Kafka, the DeadLetterPublishingRecoverer (which can be used to publish to your "retry" topic) has a property failIfSendResultIsError.
When this is true (default), the recovery operation fails and the DefaultErrorHandler will detect the failure and re-seek the failed consumer record so that it will continue to be retried.
The non-blocking retry mechanism uses this recoverer internally so the same behavior will occur there too.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retry-topic
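As an illustration, a minimal sketch of wiring that up with a DefaultErrorHandler; the "orders.retry" topic name and back-off values are made up, and the partition mapping assumes the retry topic has at least as many partitions as the original:

import org.apache.kafka.common.TopicPartition;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class RetryTopicConfig {

    @Bean
    public DefaultErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
        // Publish failed records to the retry topic, keeping the original partition.
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
                (record, ex) -> new TopicPartition("orders.retry", record.partition()));
        // true is the default: if publishing to the retry topic itself fails, the recovery fails
        // and the error handler re-seeks so the original record is retried.
        recoverer.setFailIfSendResultIsError(true);
        // Two in-memory retries, one second apart, before the recoverer is invoked.
        return new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    }
}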

Related

Retry logic blocks the main consumer while it's waiting for the retry in Spring

I am referring to:
https://medium.com/trendyol-tech/how-to-implement-retry-logic-with-spring-kafka-710b51501ce2
It says that if we use the following:
factory.setErrorHandler(new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(kafkaTemplate), 3));
It will block the main consumer while it's waiting for the retry. (https://medium.com/trendyol-tech/how-to-implement-retry-logic-with-spring-kafka-710b51501ce2#:~:text=Also%20it%20blocks%20the%20main%20consumer%20while%20its%20waiting%20for%20the%20retry)
So my question is: do we really need to retry on the main topic, or can we move the failed messages to a retry topic and process them there, so that our main topic is non-blocking?
Can we achieve non-blocking retry using STCH?
Non-blocking retries were recently added to the new 2.7 release.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retry-topic
Achieving non-blocking retry / DLT functionality with Kafka usually requires setting up extra topics and creating and configuring the corresponding listeners. Since 2.7, Spring for Apache Kafka offers support for that via the @RetryableTopic annotation and the RetryTopicConfiguration class to simplify that bootstrapping.
If message processing fails, the message is forwarded to a retry topic with a back off timestamp. The retry topic consumer then checks the timestamp and if it’s not due it pauses the consumption for that topic’s partition. When it is due the partition consumption is resumed, and the message is consumed again. If the message processing fails again the message will be forwarded to the next retry topic, and the pattern is repeated until a successful processing occurs, or the attempts are exhausted, and the message is sent to the Dead Letter Topic (if configured).
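As an illustration of that annotation-based bootstrapping (the listener class, topic name, attempt count and back-off values are made up for the example):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // Spring creates the retry topics and the DLT, and schedules the non-blocking retries.
    @RetryableTopic(attempts = "4", backoff = @Backoff(delay = 1000, multiplier = 2.0))
    @KafkaListener(topics = "orders")
    public void listen(ConsumerRecord<String, String> record) {
        process(record.value()); // throwing here forwards the record to the next retry topic
    }

    @DltHandler
    public void handleDlt(ConsumerRecord<String, String> record) {
        // All attempts exhausted; log or persist the record for manual follow-up.
    }

    private void process(String value) {
        // business logic placeholder
    }
}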

Is the poll() call during Kafka rebalancing a busy wait?

I am using manual Kafka commits by setting enable.auto.commit to false when initialising the Kafka consumer, and committing manually after receiving and processing each message.
However, since message processing in my consumer takes a long time, I am getting an exception with the message "error": "Broker: Group rebalance in progress".
The reason is that a commit issued after the rebalance timeout is rejected with this error. Now, the recovery action is either to exit and re-instantiate the process, which triggers rebalancing and partition assignment again, or to catch this exception and continue as usual. The latter works correctly only if the poll() call blocks until the rebalancing is complete; otherwise it will fetch the next batch, possibly process and commit it successfully, and thereby lose the message whose commit failed during rebalancing.
So I need to know the correct way to handle this case: should I re-instantiate the process, or should I catch and ignore the exception?
The best approach is to ignore it if it happens occasionally, and if it happens frequently then reduce max.poll.records or increase max.poll.interval.ms to ensure it only happens occasionally. Also, ensure that your code can handle duplicate records (if you can't do that then there is a different answer).
The error you see is, as you probably realise, just because by the time the consumer committed, the group had decided that it had probably gone, and so its partitions were picked up by a different consumer as part of a rebalance - the new consumer would have started from the last committed offset, hence duplicates.
Given that the original consumer is alive and well, it will no doubt poll again and so trigger another rebalance. This poll won't block waiting for the rebalance to occur - each poll allows for some communication about the current state of the group (within the polling thread), and after a number of polls the new allocation of partitions will be agreed and accepted, after which the rebalance is considered complete and that poll will tell the consumer its partition allocation and return a set of records.
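A rough sketch of the "catch and carry on" option, assuming processing is idempotent; the topic name, group id and tuning values are only illustrative:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                 // placeholder
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");              // smaller batches per poll
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");       // more time to process a batch
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));         // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // must tolerate duplicates
                }
                try {
                    consumer.commitSync();
                } catch (CommitFailedException e) {
                    // The group rebalanced before the commit; these records will be redelivered
                    // to whichever consumer now owns the partitions, so just carry on polling.
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // idempotent processing placeholder
    }
}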

Kafka group re-balancing after consumer failed. org.apache.kafka.clients.consumer.internals.ConsumerCoordinator

I'm running a Kafka cluster with 4 nodes, 1 producer and 1 consumer. It was working fine until the consumer failed. Now, after I restart the consumer, it starts consuming new messages, but after some minutes it throws this error:
[WARN ]: org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Auto offset commit failed for group eventGroup: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured session.timeout.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
And it starts consuming the same messages again and loops forever.
I increased the session timeout and tried changing the group id, but it still does the same thing.
Also, does the client version of the Kafka consumer matter much here?
I'd suggest you decouple the consumer and the processing logic, to start with. E.g. let the Kafka consumer only poll messages and, maybe after sanitizing them (if necessary), delegate the actual processing of each record to a separate thread, then see if the same error still occurs. The error says you're spending too much time between subsequent polls, so this might resolve your issue. Also, please mention the version of Kafka you're using. Kafka had a different heartbeat management policy before version 0.10, which could make this issue easier to reproduce.
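A minimal sketch of that split, assuming an already-configured KafkaConsumer<String, String> named consumer and a handle() method standing in for your actual processing:

import java.time.Duration;
import java.util.Collections;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// consumer: a KafkaConsumer<String, String> built elsewhere with your existing properties.
// handle(): the slow per-record processing, moved off the polling thread.
ExecutorService workers = Executors.newFixedThreadPool(8);

consumer.subscribe(Collections.singletonList("eventTopic")); // placeholder topic
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        workers.submit(() -> handle(record)); // the polling thread is never blocked by processing
    }
    // Caveat: offsets may now be committed before handle() finishes, so either make
    // processing idempotent or track offsets manually (see the pause/resume answer below).
}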

How to recover from exceptions sent by producer.send() in Spring Cloud Stream

We experienced the following scenario :
We have a Kafka cluster composed of 3 nodes; each topic is created with 3 partitions
A message is sent through MessageChannel.send(), producing a record for, let's say, partition 1
The broker acting as the partition leader for that partition fails
By default, MessageChannel.send() returns true and doesn't throw any exception, even if, eventually, the KafkaProducer can't successfully send the message. We observe, about 30 seconds after this call, the following message in the logs: Expiring 10 record(s) for helloworld-topic-1 due to 30008 ms has passed since batch creation plus linger time
In our case, this is not acceptable, as we have to be sure that all messages have been delivered to Kafka by the time the call to MessageChannel.send() returns.
We set spring.cloud.stream.kafka.bindings.<channelName>.producer.sync to true, which does exactly what the documentation describes. It blocks the caller until the producer acknowledges the success or failure of the delivery (MessageTimeoutException, InterruptedException, ExecutionException), all of this controlled by KafkaProducerMessageHandler. It seems to be the best approach for us, as the performance impact is negligible in our case.
But do we need to take care of the retry ourselves if an exception is thrown? (in our client code, with @Retryable for instance)
Here is a simple project to experiment : https://github.com/phdezann/spring-cloud-bus-kafka-helloworld
If the send() is performed on the @StreamListener thread and the exception is thrown back to the binder, the binder retry configuration will perform retries.
However, since you are doing the send on an HTTP thread, you will need to do your own retry (call send within the scope of a RetryTemplate) or make the controller method @Retryable.
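A hypothetical controller showing that second option, with a RetryTemplate configured inline; the 3 attempts, 2-second fixed back off, endpoint path and injected channel are all made-up example values, not taken from your project:

import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    private final MessageChannel output;   // the binding's output channel (assumed injected)
    private final RetryTemplate retryTemplate = new RetryTemplate();

    public HelloController(MessageChannel output) {
        this.output = output;
        retryTemplate.setRetryPolicy(new SimpleRetryPolicy(3));  // 3 attempts
        FixedBackOffPolicy backOff = new FixedBackOffPolicy();
        backOff.setBackOffPeriod(2000L);                         // 2 seconds between attempts
        retryTemplate.setBackOffPolicy(backOff);
    }

    @PostMapping("/hello")
    public void publish(@RequestBody String payload) {
        // With producer.sync=true the send blocks until the broker acks or an exception
        // is thrown; the RetryTemplate then retries the whole send on failure.
        retryTemplate.execute(ctx ->
                output.send(MessageBuilder.withPayload(payload).build()));
    }
}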

Multithreaded Kafka Consumer or PerPartition-PerConsumer

Which is the better approach when implementing a Kafka consumer?
The objective is to read from Kafka and write back to a DB. Millions of rows.
Approach 1:
Per partition, per consumer - wait for the message to be consumed (i.e. written back to the DB), then proceed to the next iteration of the polling loop.
Approach 2:
Per partition, per consumer - send the record to a worker thread or thread pool to be written back to the DB, commit the offset later, and keep on polling. Offset management needs to be taken care of. Here we don't wait for the message to be written back to the DB; we just keep polling and pass the message to a worker thread.
Any insights on both of them ?
Thanks
Approach 1:
This approach is applicable only if you can estimate the message processing time; otherwise it is not recommended.
Problem: In this approach the main problem is keeping the consumer alive. If you wait for the messages to be completely processed before calling poll() again, you have to make sure that your consumer stays alive until it calls poll(), because Kafka maintains a property named "session.timeout.ms". The broker/cluster acts on the value of this property: if the consumer is unable to call poll() again within "session.timeout.ms", the broker marks the consumer dead and kicks it out. Then, when the consumer finishes the message processing and calls poll() again, it is considered a new joiner and is again given the set of records starting from the offset as it was before. With this in mind, the consumer is stuck in an infinite loop where it never advances its offset.
Possible solution 1: To use this approach you need a well-chosen value for "session.timeout.ms", with the following side effects:
1: Value too low: the consumer will be marked dead as described above and will never advance its offset; messages are processed, but every time it finishes a batch it gets the previous messages plus the new messages again.
2: Value too high: the broker will be very late in detecting a genuine consumer failure, which results in record duplication and affects the overall throughput.
Possible solution 2: (only valid from version 0.10.1.0) The official fix by Kafka in release 0.10.1.0.
In this approach, two notable things are introduced: a new property, "max.poll.interval.ms", that sets the maximum delay between client calls to poll(), and a background thread that is responsible for keeping the consumer alive. So, when the consumer calls poll() and then gets busy with message processing, the internal background thread keeps the heartbeat going and, as a result, the consumer stays alive. However, this background thread only does so while the timeout set by "max.poll.interval.ms" has not elapsed: it waits for the consumer to call poll() within "max.poll.interval.ms", and if that does not happen, it sends a leave request and stops itself as well.
Again, the tricky part of this solution is finding a suitable value for "max.poll.interval.ms" (very important: this is the time for which the background thread will keep the heartbeat alive without an explicit call to poll()).
Approach 2: Using a worker thread is a good idea, but then you have to maintain an internal queue or some validation for received messages, which can be complex, and you also need to use manual commits instead of auto-commits. For more information about commits, see this and search for the heading "Commits and Offsets".
Problem: In this approach the main problem is keeping track of messages received versus messages processed successfully. As your consumer receives a message, it passes it to the respective worker thread, commits the offset, and moves on to receive more messages. During this process you have to take care of the following issues:
What if a message is received and its offset committed, but the worker thread later fails to process it for whatever reason - how do you get that message back?
What if messages are received by the consumer but there are no free worker threads to process them?
Solution: There can be different ways to resolve the above issues; one way is to use an internal queue to hold the messages and send manual commits only when a worker thread reports successful processing of a message. However, a very careful implementation is required, because it can lead to complex code and can also result in memory management or threading issues.
Suggestion: Depending on your requirements, you can use one approach or the other with fixes for the possible issues described above. However, I would recommend that a more robust solution is to use partition pause/resume. In a very abstract way, your consumer should do the following steps (a rough sketch follows after the note below):
1: poll() for messages.
2: Pause all the respective topics/partitions.
3: Assign messages to worker threads and wait for their processing.
4: Keep calling poll(); as the partitions are paused, no extra messages are received while the consumer is kept alive. (Make sure no new topic is subscribed during this time.)
5: Once all worker threads report message processing success/failure, commit the offsets accordingly.
6: Resume all the partitions.
Note: There can be better ways or other solutions possible depending on your scenario and requirements; it's just an idea, or one of the possible solutions.
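A rough sketch of that pause/resume loop, assuming a KafkaConsumer<String, String> named consumer that is already subscribed with enable.auto.commit=false, and a writeToDb() method standing in for the DB write:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// consumer: an already-subscribed KafkaConsumer<String, String> with enable.auto.commit=false.
ExecutorService workers = Executors.newFixedThreadPool(8);

while (true) {
    // 1: poll for messages
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    if (records.isEmpty()) {
        continue;
    }
    // 2: pause all assigned partitions so further polls return nothing
    consumer.pause(consumer.assignment());

    // 3: hand the batch to worker threads
    List<CompletableFuture<Void>> futures = new ArrayList<>();
    for (ConsumerRecord<String, String> record : records) {
        futures.add(CompletableFuture.runAsync(() -> writeToDb(record), workers));
    }
    CompletableFuture<Void> batch =
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));

    // 4: keep polling while paused to stay in the group without fetching new records
    while (!batch.isDone()) {
        consumer.poll(Duration.ofMillis(100));
    }

    // 5: commit only if every record in the batch was processed successfully
    try {
        batch.join();
        consumer.commitSync();
    } catch (CompletionException e) {
        // At least one record failed: skip the commit so the batch is redelivered.
    }

    // 6: resume the partitions to fetch the next batch
    consumer.resume(consumer.assignment());
}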