Handling errors in MassTransit Kafka consumers - apache-kafka

I came across this response from Chris regarding Kafka exception handling in MassTransit, and I was wondering: if I'd like to use Kafka DLQs for poison messages (as mentioned here, or as suggested there under Error Recovery), does that mean I will need to implement it on my own via the MassTransit middleware (i.e., send to the DLQ in an exception-handling middleware)?

Related

Exception handling using Kafka rider in MassTransit

In MassTransit, when using a transport like RabbitMQ, a message that throws an exception goes into the queue queue_name_error. But with Kafka there is no topic with an _error suffix, nor a similar queue as on the other supported transports. How do I handle exceptions properly when using Kafka with MassTransit, and where can erroneous messages be found?
Since Kafka (and Azure Event Hub) are essentially log files with a fancy API, there is no need for an _error queue, as there are no queues at all. There are no dead letters either. So MassTransit's built-in error handling, which moves faulted messages to the _error queue, doesn't apply (nor would it make sense).
You can use the retry middleware (UseMessageRetry, etc.) with topic endpoints to handle transient exceptions. You can also log the offsets of poison messages to deal with them later. The offset doesn't change, and the messages remain in the topic until the retention period expires.
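The retry-then-record pattern described above can be sketched generically. This is a minimal Python sketch with a stubbed record and handler; none of the names are MassTransit APIs, and a real MassTransit setup would configure UseMessageRetry instead:

```python
def handle_with_retry(handler, record, max_attempts=3, poison_log=None):
    """Try `handler(record)` up to `max_attempts` times; on final failure,
    record the message's coordinates (topic/partition/offset) so it can be
    inspected later -- the message itself stays in the topic until the
    retention period expires, so the offset is all you need to find it."""
    last_error = None
    for _ in range(max_attempts):
        try:
            handler(record)
            return True
        except Exception as e:  # broad catch: this is the last line of defense
            last_error = e
    if poison_log is not None:
        poison_log.append({
            "topic": record["topic"],
            "partition": record["partition"],
            "offset": record["offset"],
            "error": repr(last_error),
        })
    return False  # caller commits the offset and moves on
```

The key design point is that the consumer keeps advancing: a poison message is recorded, not reprocessed forever, and it remains retrievable from the topic by offset.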

Kafka Streams Retry PAPI and dead letter

I am trying to implement retry logic within a Kafka Streams processor topology for the case where an exception is thrown while producing to a sink topic.
I am using a custom ProductionExceptionHandler to catch exceptions that happen on producer.send to the sink topic upon context.forward.
What criteria should I use to be able to resend the message to an alternate sink topic if there was an exception sending to the original sink topic? Could this be deduced from the type of exception in the producer exception handler, without compromising the transactional nature of the internal producer in Kafka Streams?
If we decide to produce to a dead letter queue from the production exception handler on some unrecoverable errors, could this be done within the context of the EOS guarantee, or does it have to be a custom producer not known to the topology?
Kafka Streams has no built-in support for a dead-letter queue; hence, you are "on your own" to implement it.
What criteria should I use to be able to resend the message to an alternate sink topic if there was an exception sending to the original sink topic?
Not sure what you mean by this? Can you elaborate?
Could this be deduced from the type of exception in the producer exception handler?
Also not sure about this part.
without compromising the transactional nature of the internal producer in Kafka streams.
That is not possible. You have no access to the internal producer.
If we decide to produce to a dead letter queue from the production exception handler on some unrecoverable errors, could this be done within the context of the EOS guarantee, or does it have to be a custom producer not known to the topology?
You would need to maintain your own producer, and thus it's out of scope for EOS.
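The "maintain your own producer" answer can be sketched as follows. This is an illustrative Python sketch, not the real Kafka Streams ProductionExceptionHandler interface: the handler owns a separate producer, so the DLQ write happens outside the streams transaction and is therefore not covered by EOS (it can survive even if the transaction later aborts):

```python
class ListProducer:
    """Stand-in for a separately-created Kafka producer: records sends in a list."""
    def __init__(self):
        self.sent = []

    def send(self, topic, key, value, headers):
        self.sent.append({"topic": topic, "key": key,
                          "value": value, "headers": headers})


class DeadLetterHandler:
    """Sketch of a production-exception handler that retries recoverable
    errors and forwards unrecoverable records to a dead-letter topic via
    its own producer (NOT the internal transactional one)."""
    def __init__(self, dlq_producer, dlq_topic):
        self.producer = dlq_producer  # your own producer, outside EOS
        self.topic = dlq_topic

    def on_send_error(self, record, error, recoverable):
        if recoverable:
            return "RETRY"  # let the runtime try the send again
        # Unrecoverable: copy the raw key/value to the DLQ and keep going,
        # attaching the error as a header for later inspection.
        self.producer.send(self.topic, key=record["key"],
                           value=record["value"],
                           headers={"error": repr(error)})
        return "CONTINUE"
```

Whether an error is "recoverable" would in practice be decided by exception type (e.g. a timeout vs. a record-too-large error), which is the criterion the question asks about.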

Handle Deserialization Error (Dead Letter Queue) in a kafka consumer

After some research I found a few configurations I can use to handle this:
default.deserialization.exception.handler - from the StreamsConfig
errors.deadletterqueue.topic.name - from the SinkConnector config
I can't seem to find an equivalent valid configuration for a simple consumer.
I want to start a simple consumer and have DLQ handling, whether by just naming the DLQ topic and letting Kafka produce to it (like in the sink connector) or by providing my own class that produces to it (like in the StreamsConfig).
How can I achieve a DLQ with a simple consumer?
EDIT: Another option I figured out is to simply handle it in my Deserializer class: just catch the exception there and produce the message to my DLQ.
But that would mean I'd need to create a producer in my Deserializer class...
Is this the best practice for handling a DLQ from a consumer?
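With a plain consumer there is no built-in hook, so the usual approach is to do the routing yourself around deserialization rather than inside the Deserializer. A minimal Python sketch of that loop, with stubbed consumer records and producer (none of these names are real kafka-client APIs):

```python
def poll_and_route(records, deserialize, process, dlq_producer, dlq_topic):
    """Consume raw bytes, try to deserialize each record, and forward
    failures to a DLQ topic so the consumer keeps advancing. The ORIGINAL
    bytes are forwarded, so a DLQ consumer can inspect exactly what failed."""
    for rec in records:
        try:
            value = deserialize(rec["value"])
        except Exception as e:
            dlq_producer.send(dlq_topic, key=rec.get("key"),
                              value=rec["value"],          # raw bytes, not parsed
                              headers={"error": repr(e)})
            continue  # skip the bad record, don't block the partition
        process(value)
```

Keeping the producer in the consuming application (rather than inside the Deserializer class) avoids tangling serialization concerns with I/O, which addresses the worry in the EDIT above.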

Spring Kafka Template send with retry based on different cases

I am using Spring Kafka's KafkaTemplate to send messages asynchronously, with proper error handling in a callback.
I have also configured the Kafka producer with a maximum number of retries (MAX_INTEGER).
However, there may be some errors related to Avro serialization, and for those retrying won't help. How can I skip retries for those errors, while still retrying for broker-related issues?
A serialization exception will occur before the message is sent, so the retries property is irrelevant in that case; it only applies once the message is actually being sent.
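The distinction above can be made explicit by separating the two failure points. A hedged Python sketch (illustrative names, not Spring Kafka APIs): serialization fails fast with no retry, while only the actual send is subject to the retry loop:

```python
def send_with_policy(serialize, send, payload, max_send_attempts=3):
    """Serialize first: a serialization failure happens before any network
    I/O, so retrying cannot help and it is reported immediately. Only
    transient send failures (broker-related) go through the retry loop."""
    try:
        data = serialize(payload)
    except Exception as e:
        return ("serialization-error", e)   # no retries: fail fast
    last = None
    for _ in range(max_send_attempts):      # broker errors: retry
        try:
            send(data)
            return ("sent", None)
        except Exception as e:
            last = e
    return ("send-failed", last)
```

This mirrors what the producer already does internally: the retries config never sees a serialization exception, because that exception is raised before the record ever reaches the sender.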

Kafka ktable corrupt message handling

We are using a KTable for aggregation in Kafka; it's a very basic use, following the Kafka documentation.
I am trying to work out how, if consuming a message fails during aggregation, we can move that message to an error topic or DLQ.
I found something similar for KStream, but not for KTable, and I was not able to simply extend the KStream solution to a KTable.
Reference for KStream
Handling bad messages using Kafka's Streams API
My use case is very simple: for any kind of exception, just move the message to an error topic and move on to the next message.
There is no built-in support for what you ask at the moment (Kafka 2.2); you need to make sure that your application code does not throw any exceptions. All the handlers that can be configured are for exceptions thrown by the Kafka Streams runtime itself. Those handlers are provided because otherwise the user would have no chance at all to react to those exceptions.
Feel free to create feature request Jira.
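"Make sure your application code does not throw" in practice means wrapping the aggregator you pass to the KTable. A minimal Python sketch of that wrapper (stubbed names, not Kafka Streams APIs): any exception routes the offending record to an error sink and leaves the aggregate unchanged, instead of crashing the streams thread:

```python
def safe_aggregator(aggregate_fn, error_sink):
    """Wrap an aggregator so exceptions never escape: the bad record is
    captured (key, value, error) for an error topic, and the previous
    aggregate is returned unchanged so processing continues."""
    def wrapped(key, value, aggregate):
        try:
            return aggregate_fn(key, value, aggregate)
        except Exception as e:
            error_sink.append({"key": key, "value": value, "error": repr(e)})
            return aggregate  # keep the previous aggregate; move on
    return wrapped
```

In a real topology the error_sink would be a producer writing to the error topic; the point is that the wrapper, not the runtime, decides what happens to a corrupt message.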