I am using Spring Kafka's KafkaTemplate to send messages asynchronously, with proper error handling in a callback.
I have also configured the Kafka producer with the maximum number of retries (Integer.MAX_VALUE).
However, some errors are related to Avro serialization, and retrying won't help for those. How can I skip retries for those errors while still retrying for other, broker-related issues?
The serialization exception will occur before the message is sent, so the retries property is irrelevant in that case; it only applies when the message is actually sent.
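For example, a rough sketch against the pre-3.0 KafkaTemplate API (where send() returns a ListenableFuture; the class, topic, and key names are placeholders): the serialization failure surfaces as a synchronous exception from send(), while broker problems arrive later through the callback after the producer's retries are exhausted.

```java
import org.apache.kafka.common.errors.SerializationException;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFutureCallback;

public class AvroSender {

    private final KafkaTemplate<String, Object> kafkaTemplate;

    public AvroSender(KafkaTemplate<String, Object> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String topic, String key, Object avroValue) {
        try {
            // Broker-side problems are retried by the producer (retries = Integer.MAX_VALUE)
            // and only reach the callback once retries/delivery timeout are exhausted.
            kafkaTemplate.send(topic, key, avroValue)
                    .addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
                        @Override
                        public void onSuccess(SendResult<String, Object> result) {
                            // record was acknowledged by the broker
                        }

                        @Override
                        public void onFailure(Throwable ex) {
                            // broker/transport failure after retries - handle or alert here
                        }
                    });
        } catch (SerializationException ex) {
            // Avro serialization failed before the record was handed to the producer,
            // so the retries setting never applies - skip or park the record instead.
        }
    }
}
```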
I'm writing a Kafka listener which should just forward the messages from a topic to a JMS queue. I need to stop processing new messages only for a custom exception, JmsBrokerConnectionException, but I want to continue processing new messages for any other exception (i.e. invalid data) and send the error messages to a DLT.
I am using spring-kafka 2.2.7 and cannot upgrade it.
I have currently a solution which uses:
SeekToCurrentErrorHandler (configured with 0 retries and a DeadLetterPublishingRecoverer)
a retry template used in the @KafkaListener method, configured with Integer.MAX_VALUE retries, which retries only for JmsBrokerConnectionException
MANUAL_IMMEDIATE ack
The solution seems to do the job, but it has the drawback that, for long outages of the JMS broker, it causes a rebalance every max.poll.interval.ms (i.e. every 5 minutes).
The question:
Is it a good idea to let max.poll.interval.ms expire and have a group rebalance to handle error conditions for which you want to stop message consumption?
I don't have high-throughput requirements.
The input topic has 10 partitions and I will have 2 consumers.
I know there are other solutions using stateful retry or pausing/resuming the container, but I'd like to keep using the current solution unless I am missing any major drawbacks about it.
I am using spring-kafka 2.2.7 and cannot upgrade it.
That version is no longer supported.
Version 2.3 added backoff and exception classification to the STCEH, eliminating the need for a retry template at the listener level.
That said, you can use stateful retry (https://docs.spring.io/spring-kafka/docs/current/reference/html/#stateful-retry) with a STCEH that always retries, and do the dead letter publishing in the RecoveryCallback at the listener level. The consumer record is available in the retry context with the RetryingMessageListenerAdapter.CONTEXT_RECORD key.
Since you are doing manual acks, you will also need to commit the offset via the CONTEXT_CONSUMER key.
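Putting those pieces together, a rough sketch of such a container factory on 2.2.x might look like the following (JmsBrokerConnectionException is the custom exception from the question; the injected KafkaTemplate for the dead-letter publisher and the bean names are assumptions, so treat this as an outline to verify against the 2.2.7 API rather than a drop-in solution):

```java
import java.util.Collections;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter;
import org.springframework.retry.RecoveryCallback;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory, KafkaTemplate<Object, Object> template) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);

    // STCEH with no max failures: it always re-seeks, so a record that keeps failing
    // is simply re-delivered on the next poll (no max.poll.interval.ms expiry).
    factory.setErrorHandler(new SeekToCurrentErrorHandler());

    // Stateful retry: each failed attempt is re-thrown to the container instead of
    // looping inside the listener thread. Only JmsBrokerConnectionException is retried.
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(Integer.MAX_VALUE,
            Collections.singletonMap(JmsBrokerConnectionException.class, true)));
    factory.setRetryTemplate(retryTemplate);
    factory.setStatefulRetry(true);

    // Non-retryable exceptions (e.g. invalid data) end up here: publish to the DLT
    // and commit the offset via the consumer from the retry context (manual acks).
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    RecoveryCallback<Object> recoveryCallback = context -> {
        ConsumerRecord<?, ?> record = (ConsumerRecord<?, ?>) context
                .getAttribute(RetryingMessageListenerAdapter.CONTEXT_RECORD);
        recoverer.accept(record, (Exception) context.getLastThrowable());
        Consumer<?, ?> consumer = (Consumer<?, ?>) context
                .getAttribute(RetryingMessageListenerAdapter.CONTEXT_CONSUMER);
        consumer.commitSync(Collections.singletonMap(
                new TopicPartition(record.topic(), record.partition()),
                new OffsetAndMetadata(record.offset() + 1)));
        return null;
    };
    factory.setRecoveryCallback(recoveryCallback);
    return factory;
}
```

With this arrangement the listener just throws: a broker outage keeps being re-seeked without tripping max.poll.interval.ms, and everything else is dead-lettered and committed straight away.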
In MassTransit, when using a transport like RabbitMQ, a message goes into the queue_name_error queue when an exception is thrown. But with Kafka there is no topic with an _error suffix, nor a similar queue on the supporting transport. How do I handle exceptions properly when using Kafka with MassTransit, and where can erroneous messages be found?
Since Kafka (and Azure Event Hub) are essentially log files with a fancy API, there is no need for an _error queue, as there are no queues anyway. There are no dead letters either. So the built-in error handling of MassTransit that moves faulted messages to the _error queue doesn't apply (nor does it make sense).
You can use the retry middleware (UseMessageRetry, etc.) with topic endpoints to handle transient exceptions. You can also log the offset of poison messages to deal with them later. The offset doesn't change; the messages remain in the topic until retention expires.
I am trying to implement retry logic within a Kafka Streams processor topology in the event there is an exception producing to a sink topic.
I am using a custom ProductionExceptionHandler to be able to catch exceptions that happen on "producer.send" to the sink topic upon context.forward.
What criteria should I use to be able to resend the message to an alternate sink topic if there was an exception sending to the original sink topic? Could this be deduced from the type of exception in the production exception handler, without compromising the transactional nature of the internal producer in Kafka Streams?
If we decide to produce to a dead letter queue from the production exception handler for some unrecoverable errors, could this be done within the context of the "EOS" guarantee, or does it have to be a custom producer not known to the topology?
Kafka Streams has no built-in support for a dead-letter queue. Hence, you are "on your own" to implement it.
What criteria should I use to be able to resend the message to an alternate sink topic if there was an exception sending to the original sink topic?
Not sure what you mean by this? Can you elaborate?
Could this be deduced from the type of exception in the production exception handler
Also not sure about this part.
without compromising the transactional nature of the internal producer in Kafka Streams?
That is not possible. You have no access to the internal producer.
If we decide to produce to a dead letter queue from the production exception handler for some unrecoverable errors, could this be done within the context of the "EOS" guarantee, or does it have to be a custom producer not known to the topology?
You would need to maintain your own producer and thus it's out-of-scope for EOS.
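If you go that route, a rough sketch of such a handler could look like the following (the DLQ topic name, the class name, and the exception classification are illustrative assumptions; the extra producer is deliberately a plain, non-transactional one, i.e. outside EOS):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.RetriableException;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

public class DlqProductionExceptionHandler implements ProductionExceptionHandler {

    private Producer<byte[], byte[]> dlqProducer;

    @Override
    public void configure(Map<String, ?> configs) {
        // Own producer, not known to the topology and not part of the streams transaction.
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configs.get(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG));
        this.dlqProducer = new KafkaProducer<>(props, new ByteArraySerializer(), new ByteArraySerializer());
    }

    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record, Exception exception) {
        if (exception instanceof RetriableException) {
            // Transient broker problem: fail the streams thread so it can be restarted and retried.
            return ProductionExceptionHandlerResponse.FAIL;
        }
        // Unrecoverable (e.g. record too large): park the raw bytes in a DLQ topic and keep going.
        dlqProducer.send(new ProducerRecord<>("my-app.dlq", record.key(), record.value()));
        return ProductionExceptionHandlerResponse.CONTINUE;
    }
}
```

The handler would be registered via StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG. Because the DLQ write happens on a separate producer, it sits outside the EOS transaction: if the streams transaction later aborts, the DLQ record still exists.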
I do not understand how messages that could not be de-serialized can be written to a DLT topic with Spring Kafka.
I configured the consumer according to the spring kafka docs and this works well for exceptions that occur after de-serialization of the message.
But when the message is not de-serializable, an org.apache.kafka.common.errors.SerializationException is thrown while polling for messages.
Subsequently, SeekToCurrentErrorHandler.handle(Exception thrownException, List<ConsumerRecord<?, ?>> records, ...) is called with this exception but with an empty list of records and is therefore unable to write something to DLT topic.
How can I write those messages to the DLT topic as well?
The problem is that the exception is thrown by the Kafka client itself so Spring doesn't get to see the actual record that failed.
That's why we added the ErrorHandlingDeserializer2 which can be used to wrap the actual deserializer; the failure is passed to the listener container and re-thrown as a DeserializationException.
See the documentation.
When a deserializer fails to deserialize a message, Spring has no way to handle the problem, because it occurs before the poll() returns. To solve this problem, version 2.2 introduced the ErrorHandlingDeserializer2. This deserializer delegates to a real deserializer (key or value). If the delegate fails to deserialize the record content, the ErrorHandlingDeserializer2 returns a null value and a DeserializationException in a header that contains the cause and the raw bytes. When you use a record-level MessageListener, if the ConsumerRecord contains a DeserializationException header for either the key or value, the container’s ErrorHandler is called with the failed ConsumerRecord. The record is not passed to the listener.
The DeadLetterPublishingRecoverer has logic to detect the exception and publish the failed record.
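Wiring that up might look roughly like this (a sketch assuming Spring Boot's KafkaProperties and a JSON delegate deserializer; swap in whatever deserializer you actually use, and note that newer spring-kafka versions rename the class to ErrorHandlingDeserializer and take a BackOff instead of the max-failures count):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.kafka.support.serializer.ErrorHandlingDeserializer2;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory(
        KafkaProperties kafkaProperties, KafkaTemplate<Object, Object> template) {

    // Wrap the real value deserializer so a bad payload is reported via a
    // DeserializationException header instead of an exception thrown from poll().
    Map<String, Object> consumerProps = new HashMap<>(kafkaProperties.buildConsumerProperties());
    consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer2.class);
    consumerProps.put(ErrorHandlingDeserializer2.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class);

    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerProps));

    // The error handler now receives the failed ConsumerRecord; after the configured
    // attempts the DeadLetterPublishingRecoverer publishes its raw bytes to <topic>.DLT.
    factory.setErrorHandler(
            new SeekToCurrentErrorHandler(new DeadLetterPublishingRecoverer(template), 3));
    return factory;
}
```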
In my Spring Boot application I am reading messages from a Kafka topic and saving them into HBase.
In case the DB is down and the message has already been consumed from the topic, how should I ensure that the message is not lost? Can someone share a code sample?
If your code encounters an error during the processing of a record, you, as the developer, are responsible for handling retries or error catching. spring-kafka can't capture errors outside of the Kafka API for you.
That being said, Kafka will not remove the record just because it's consumed; it remains until it fully expires off the topic. You should definitely set enable.auto.commit to false and commit your own offsets after a successful database action, at the expense of potential duplicate records in HBase.
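For example, with a Spring Boot @KafkaListener and manual acknowledgments (a minimal sketch; HbaseRepository and the topic name are placeholders for your own code):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class HbaseForwardingListener {

    private final HbaseRepository hbaseRepository;   // placeholder for your HBase access code

    public HbaseForwardingListener(HbaseRepository hbaseRepository) {
        this.hbaseRepository = hbaseRepository;
    }

    // Requires enable.auto.commit=false and AckMode.MANUAL (or MANUAL_IMMEDIATE)
    // on the listener container factory.
    @KafkaListener(topics = "my-topic")
    public void onMessage(ConsumerRecord<String, String> record, Acknowledgment ack) {
        hbaseRepository.save(record.value());   // write to HBase first
        ack.acknowledge();                      // commit the offset only after the save succeeded
        // If save() throws (e.g. HBase is down), the offset is never committed, so the
        // record is re-delivered by the container's error handling or on restart rather than lost.
    }
}
```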
I would also like to point out that you should probably be using Kafka Connect, which is meant to integrate external systems with Kafka, rather than a plain consumer.