Kafka Streams Deserialization Handler

I am trying to use the LogAndContinueExceptionHandler on deserialization. It works fine when an error occurs: the record is logged and processing continues. However, suppose I have a continuous stream of errors on my incoming messages, and I stop and restart the Kafka Streams application. Then the messages that failed and were already logged in my last attempt re-appear (they get logged again). This is more problematic if I try to send the failed messages to a DLQ: on a restart, they are sent to the DLQ again. As soon as a good record comes in, the offset moves forward and the already-logged messages no longer appear on another restart. Is there a way to manually commit within the Streams application? I tried ProcessorContext#commit(), but that doesn't seem to have any effect.
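For illustration, the kind of DLQ forwarding I have in mind would be a custom handler along these lines. This is only a sketch: the dlq.topic config key and the producer settings are assumptions, not code from my actual app.

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
import org.apache.kafka.streams.processor.ProcessorContext;

public class DlqExceptionHandler implements DeserializationExceptionHandler {

    private KafkaProducer<byte[], byte[]> dlqProducer;
    private String dlqTopic;

    @Override
    public void configure(final Map<String, ?> configs) {
        // "dlq.topic" is an assumed custom key passed in via the Streams config
        dlqTopic = (String) configs.get("dlq.topic");
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configs.get("bootstrap.servers"));
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        dlqProducer = new KafkaProducer<>(props);
    }

    @Override
    public DeserializationHandlerResponse handle(final ProcessorContext context,
                                                 final ConsumerRecord<byte[], byte[]> record,
                                                 final Exception exception) {
        // forward the raw bytes to the DLQ, then skip the record and keep processing
        dlqProducer.send(new ProducerRecord<>(dlqTopic, record.key(), record.value()));
        return DeserializationHandlerResponse.CONTINUE;
    }
}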
I reproduced this behavior by running the sample provided here: https://github.com/confluentinc/kafka-streams-examples/blob/4.0.0-post/src/main/java/io/confluent/examples/streams/WordCountLambdaExample.java
I changed the incoming value Serde to Serdes.Integer().getClass().getName() to force a deserialization error on input, and reduced the commit interval to just 1 second. I also added the following to the config:
streamsConfiguration.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG, LogAndContinueExceptionHandler.class);
Once it fails and I restart the app, the same records that failed before appear in the logs again. For example, I see the following output on the console after each restart. I would expect these records not to be retried, since we already skipped them before.
2018-01-27 15:24:37,591 WARN wordcount-lambda-example-client-StreamThread-1 o.a.k.s.p.i.StreamThread:40 - Exception caught during Deserialization, taskId: 0_0, topic: words, partition: 0, offset: 113
org.apache.kafka.common.errors.SerializationException: Size of data received by IntegerDeserializer is not 4
2018-01-27 15:24:37,592 WARN wordcount-lambda-example-client-StreamThread-1 o.a.k.s.p.i.StreamThread:40 - Exception caught during Deserialization, taskId: 0_0, topic: words, partition: 0, offset: 114
org.apache.kafka.common.errors.SerializationException: Size of data received by IntegerDeserializer is not 4
It looks like when deserialization exceptions occur, this flag is never set to true here: https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamTask.java#L228. It seems to become true only once processing succeeds. That might be the reason why the commit does not happen even after I manually call processorContext#commit().
I'd appreciate any help on this matter.
Thank you.

Related

Need some clarification regarding error handling in Kafka Connect

I'm looking over KIP-298: Error Handling in Connect.
In example 2, what will the following configuration do? A bit more information or an example would help me understand:
# retry for at most 10 minutes, waiting up to 30 seconds between consecutive failures
errors.retry.timeout=600000
errors.retry.delay.max.ms=30000
One more thing: with a sink connector, when I get errors due to duplicate records, it keeps retrying for a certain period. How do I set my own limit on retries?
I tried setting errors.retry.timeout=0, yet the duplicate-key error was still retried a certain number of times; but if the error is caused by the schema or the serializer, it is not retried.
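For what it's worth, KIP-298 bounds retries by elapsed time rather than by attempt count, so the closest thing to "my own limit" is a shorter timeout, for example (values illustrative):

errors.retry.timeout=30000       # keep retrying a failed operation for at most 30 seconds in total
errors.retry.delay.max.ms=5000   # never wait more than 5 seconds between attempts

With errors.retry.timeout=0 (the default) the framework itself does not retry at all, so retries you still see for duplicate-key errors likely come from the connector's own logic rather than from KIP-298.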
And finally, when errors.log.enable is true, where are these logs stored? I checked the Connect log but could not find any difference between the default log and the log with errors.log.enable set to true.
Not sure how to fix your problem, but when errors.log.enable=true you should see two additional topics created for your connector, yourconnector-error and yourconnector-success; you should be able to see the connector failure messages in the yourconnector-error topic.
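For reference, the dead letter queue from KIP-298 is opt-in for sink connectors and has to be named explicitly; the configuration looks something like this (topic name illustrative):

errors.tolerance=all
errors.deadletterqueue.topic.name=my-connector-dlq
errors.deadletterqueue.topic.replication.factor=1
errors.deadletterqueue.context.headers.enable=true

Failed records are routed to that topic only when errors.tolerance=all is set; errors.log.enable by itself only writes to the worker log.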

Kafka Connect error handling and improved logging

I was trying to leverage some enhancements to Kafka Connect in the 2.0.0 release, as specified by KIP-298 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect), and I came across this good blog post by Robin: https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues.
Here are my questions:
I have set errors.tolerance=all in my connector config. If I understand correctly, it will not fail on bad records and will move forward. Is my understanding correct?
In my case, the consumer doesn't fail and stays in the RUNNING state (which is expected), but the consumer offsets don't move forward for the partitions with the bad records. Any guess why this may be happening?
I have set errors.log.include.messages and errors.log.enable to true for my connector, but I don't see any additional logging for the failed records. The logs look the same as before I enabled these properties. I didn't see any message like this: https://github.com/apache/kafka/blob/5a95c2e1cd555d5f3ec148cc7c765d1bb7d716f9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/LogReporter.java#L67
Some Context:
In my connector, I do some transformations and validations for every record, and if any of these fail, I throw a RetriableException. Earlier I was throwing RuntimeException, but I changed to RetriableException after reading the comments on the RetryWithToleranceOperator class. A sketch of what I mean is shown below.
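Something like the following, where the isValid helper is hypothetical:

import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.sink.SinkRecord;

public class RecordValidator {
    // throw RetriableException so the framework retries the operation,
    // subject to errors.retry.timeout and errors.retry.delay.max.ms
    public void validate(final SinkRecord record) {
        if (!isValid(record)) {
            throw new RetriableException("validation failed at offset " + record.kafkaOffset());
        }
    }

    // hypothetical validation logic
    private boolean isValid(final SinkRecord record) {
        return record.value() != null;
    }
}

One caveat worth checking: per the blog post above, the KIP-298 reporting covers the converter and transformation stages, so failures raised from the task's own put() path may be retried without ever reaching the errors.log.* reporters, which could explain the missing log lines.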
I have tried to keep it brief but let me know if any additional context is required.
Thanks so much in advance!

StreamsException: Extracted timestamp value is negative, which is not allowed

This may look like a duplicate of Error in Kafka Streams using kafka-node - negative timestamp, but it certainly is not. My Kafka Streams app does some transformation logic on each message and forwards it to a new topic. There is no time-based aggregation/processing in the app, so there is no need to use any custom timestamp extractor. This app was running fine for several days, but all of a sudden it threw a negative-timestamp exception.
Exception in thread "StreamThread-4" org.apache.kafka.streams.errors.StreamsException: Extracted timestamp value is negative, which is not allowed.
After this exception was thrown from all StreamThreads (10 in total), the app was effectively frozen: there was no further progress on the stream for several hours, and no further exceptions were thrown. When I restarted the app, it started processing only the newly arriving messages.
Now the question is: what happened to the messages that came in between (after the exception was thrown and before the app was restarted)? If those missing messages had no embedded timestamp (highly unlikely, as no changes happened in the broker or producer), shouldn't the app have thrown an exception for each such message? Or does the app stop the stream's progress when it first detects a negative timestamp in a message? Is there a way to handle this situation so that the app can keep the stream progressing even after detecting a negative timestamp? My app uses Kafka Streams library version 0.10.0.1-cp1.
Note: I could easily plug in a custom timestamp extractor that checks for a negative timestamp in each message, but that is a lot of unnecessary overhead for my app. All I want to understand is why the stream did not progress after detecting a message with a negative timestamp.
Even if you do not have any time-based operators, a Kafka Streams application checks that the timestamps returned by the timestamp extractor are valid, because timestamps are used to determine the processing order of records from different partitions, to ensure records are processed in order and all partitions are consumed in a time-aligned manner.
If a negative timestamp is detected, the application (or actually the corresponding thread) dies. Unfortunately, it is currently not possible to recover from such an exception, and you would need to restart your application. See also the Confluent FAQ: http://docs.confluent.io/3.1.1/streams/faq.html#invalid-timestamp-exception
If your application dies and you restart it, it will resume processing where it left off. Unfortunately, in Kafka 0.10.0.1 there is a bug (fixed in the upcoming 0.10.2 release): in case of failure, an incorrect offset can get committed and the application "steps over" some records. I assume this is what happened in your case; if only some records had an invalid timestamp, those records might have been skipped, allowing your application to resume after the restart. This behavior is actually a bug -- without it, Kafka Streams would try to process the records with invalid timestamps again and again, failing every time, until you provide a custom timestamp extractor that fixes the problem by returning a valid timestamp.
How to fix it:
The correct fix is to provide a custom timestamp extractor that never returns an invalid (i.e., negative) timestamp.
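A minimal sketch against the 0.10.0.x API, falling back to wall-clock time when the embedded timestamp is negative (the fallback choice is an assumption; use whatever is meaningful for your data):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class NonNegativeTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(final ConsumerRecord<Object, Object> record) {
        final long ts = record.timestamp();
        // guard against invalid embedded timestamps
        return ts >= 0 ? ts : System.currentTimeMillis();
    }
}

The extractor is registered via StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG.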
I have no explanation for why you got invalid timestamps, though... This is quite strange, and you might want to investigate your producer setup to figure out whether there is any possibility that your producer puts an invalid timestamp on a record (even if this is unlikely -- I have no other idea what the root cause could be).
Further remarks:
In the next release (0.10.2), handling invalid timestamps gets simplified, and Kafka Streams provides more built-in timestamp extractors that handle records with invalid timestamps differently. For example, this allows you to auto-skip records with invalid timestamps instead of raising an error (the current behavior). For more details see KIP-93: https://cwiki.apache.org/confluence/display/KAFKA/KIP-93%3A+Improve+invalid+timestamp+handling+in+Kafka+Streams
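For example, once on 0.10.2, skipping such records should become a one-line config change using one of the new built-in extractors:

streamsConfiguration.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, LogAndSkipOnInvalidTimestamp.class);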

How can a Kafka consumer doing infinite retries recover from a bad incoming message?

I am a Kafka newbie, and as I was reading the docs I had this design question related to the Kafka consumer.
A kafka consumer reads messages from the kafka stream which is made up of one or more partitions from one or more servers.
Let's say one of the incoming messages is corrupt, and as a result the consumer fails to process it. But when processing event logs you don't want to drop any events, so you retry infinitely to avoid transient errors during processing. With such infinite retries, how can the consumer ever move forward? Is there a way to blacklist this message for the next retry?
I'd think it needs manual intervention, where we log some message metadata (I don't know exactly what yet) to see which message is failing, and have logic in place where each consumer checks Redis (or someplace else?) after n retries to see if this message needs to be skipped. The blacklist doesn't have to be stored in Redis forever either, only until the consumer can skip the message. Here's pseudocode of what I just described:
while (errorState) {
    if (blacklist.contains(msg)) {
        // known-bad message: skip it and move on
        commitOffset();
    } else {
        errorState = processMessage(msg);
        if (!errorState) {
            commitOffset();
        } else {
            // log this msg so that we can add it to the blacklist
            logger.info(msg);
        }
    }
}
I'd like to hear from more experienced folks to see if there are better ways to do this.
We had a requirement in our project where processing an incoming message to update a record depended on the record being present. Due to a race condition, sometimes the update arrived before the insert. For such cases, we implemented a couple of approaches.
A. Manual retry with a predefined delay. The code checks whether the insert has arrived. If so, processing proceeds as normal. Otherwise, it sleeps for 500 ms and tries again, repeating up to 10 times. If the message is still not processed at that point, the code logs the message, commits the offset, and moves forward. Message processing is always done in a thread from a pool, so it doesn't block the main thread either. However, in the worst case each message takes 5 seconds of application time.
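A rough sketch of approach A; recordExists, process, logger, and consumer are assumed names, not the actual implementation:

void processWithRetry(final ConsumerRecord<String, String> message) throws InterruptedException {
    int attempts = 0;
    // check for the insert up to 10 times, sleeping 500 ms between attempts
    while (!recordExists(message) && attempts < 10) {
        Thread.sleep(500);
        attempts++;
    }
    if (recordExists(message)) {
        process(message);
    } else {
        // give up: log the message and move forward
        logger.warn("skipping after {} attempts: {}", attempts, message);
    }
    consumer.commitSync(); // commit the offset either way
}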
B. Recently, we refined the above solution to use a message scheduler based on Kafka. Now, if the insert has not arrived before the update, the system sends the update to a separate scheduler that operates on Kafka. The scheduler replays the message after some time. After 3 retries, we log the message and stop scheduling or retrying. This gives us the benefit of not blocking the application threads and lets us manage when to replay the message.
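The hand-off to the scheduler could look roughly like this; the topic name and header key are assumptions, record headers require Kafka 0.11+, and the scheduler consumer that replays the record (dropping it after 3 attempts) is not shown:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;

// push the failed update to a scheduler topic with an attempt counter
ProducerRecord<String, String> retry =
        new ProducerRecord<>("update-retries", message.key(), message.value());
retry.headers().add("attempts", Integer.toString(attempts + 1).getBytes(StandardCharsets.UTF_8));
retryProducer.send(retry);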

JMS messages moving to DLQ

JMS messages sometimes move to the DLQ without any exception being thrown.
The JBoss server instance used is 4.3.0.GA_CP04_EAP.
We are using an MDB that listens for incoming messages on a queue A; when it receives a message, it updates the database and sends an email in one transaction. The transaction is CMT.
Now, what is happening is that sometimes messages are not picked up by the consumer and end up in the DLQ. From the JMX console message count I can see that a message did arrive at queue A but then went to the DLQ.
This happens intermittently, and no exceptions appear in the logs either.
What seems to work most of the time is restarting the servers. No idea what happens behind the scenes, though.
And after 29 days, the same problem returned.
It follows a pattern, but the pattern varies with every restart.
There are 2 clustered servers, P1 and P2, which also do load balancing.
First two email messages go to and are processed by P1 - email sent
Next email message request goes to P2 - email sent
Next two email messages go to and are processed by P1 - email sent
Next email message request goes to P2 - email NOT SENT
and the cycle repeats
I have found a workaround to this nagging problem, thanks to the helpful info found at http://leakfromjavaheap.blogspot.in/2013/05/when-dead-letter-queue-becomes-zombie.html
A DLQ listener is set up to listen for incoming messages and put any found on the DLQ back on their intended destination.
Also, to cover the situation where a message travels from the DLQ to the queue and back to the DLQ in an endless loop, a counter tracks how many times the message has been to the DLQ before; if it exceeds the limit, the message is put on a permanent DLQ (a DLQ for the DLQ).
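A condensed sketch of that listener idea follows. The queue wiring, the dlqTrips property name, and the use of TextMessage are assumptions; received messages have read-only properties, hence the copy before resending:

import javax.jms.*;

public class DlqRedeliveryListener implements MessageListener {

    private static final int MAX_TRIPS = 5;
    private final Session session;
    private final Queue intendedDestination; // where DLQ'd messages should go back to
    private final Queue permanentDlq;        // the "DLQ for the DLQ"

    public DlqRedeliveryListener(Session session, Queue intendedDestination, Queue permanentDlq) {
        this.session = session;
        this.intendedDestination = intendedDestination;
        this.permanentDlq = permanentDlq;
    }

    @Override
    public void onMessage(Message msg) {
        try {
            int trips = msg.propertyExists("dlqTrips") ? msg.getIntProperty("dlqTrips") : 0;
            // copy the message, since properties of a received message are read-only
            TextMessage copy = session.createTextMessage(((TextMessage) msg).getText());
            copy.setIntProperty("dlqTrips", trips + 1);
            // past the limit, park the message in the permanent DLQ instead
            Queue target = (trips + 1 > MAX_TRIPS) ? permanentDlq : intendedDestination;
            session.createProducer(target).send(copy);
        } catch (JMSException e) {
            throw new RuntimeException(e); // let the container handle redelivery
        }
    }
}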
Application has been running smoothly ever since.
If you can provide the log details from when a message goes to the DLQ, it would be easier to dig into this issue.
The logs did not contain any useful info; not even an exception to give a hint.
Finally, I changed the local TX data source to an XA data source, and it was a success. I am still wondering if there is a reason behind it.