Currently, I have a Kafka Listener configured with a ConcurrentKafkaListenerContainerFactory and a SeekToCurrentErrorHandler (with a DeadLetterPublishingRecoverer configured with 1 retry).
My listener method is annotated with @Transactional (as are all the methods in my services that interact with the DB).
My Listener method does the following:
Receive message from Kafka
Interact with several services that save different parts of the received data to the DB
Ack message in Kafka (i.e., commit offset)
If it fails somewhere in the middle, it should roll back and retry until the maximum number of retries is reached.
Then the message should be sent to the DLT.
I'm trying to make this method fully transactional, i.e., if something fails all previous changes are rolled back.
However, the @Transactional annotation on the listener method is not enough.
How can I achieve this?
What configurations should I employ to make the Listener method fully transactional?
If you are not also publishing to Kafka from the listener, there is no need for (or benefit to) using Kafka transactions; just overhead. The SeekToCurrentErrorHandler + DeadLetterPublishingRecoverer is enough.
If you are also publishing to Kafka (and want those publishes to be rolled back too), then see the documentation: configure a KafkaTransactionManager in the listener container.
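A minimal configuration sketch of the setup described above, assuming Spring Kafka 2.3+ (the FixedBackOff-based constructor); the bean names and the injected ConsumerFactory/KafkaTemplate are placeholders, not taken from the original post:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
@EnableKafka
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory,
            KafkaTemplate<String, String> kafkaTemplate) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        // After retries are exhausted, publish the failed record to the dead-letter topic.
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);

        // FixedBackOff(0L, 1L) = no delay, 1 retry (2 delivery attempts in total), then recover.
        factory.setErrorHandler(new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 1L)));
        return factory;
    }
}

With this in place, the @Transactional listener rolls back the DB changes on failure; the error handler re-seeks and replays the record until the retry is used up, and only then is it published to the DLT.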
Related
I need to turn a Kafka consumer on/off based on a database-driven property. How can this be achieved?
One way I have thought of is throwing an exception from the consumer when the consumer flag is turned off, with the container factory configured as
factory.setErrorHandler(new SeekToCurrentErrorHandler());
But then it actively seeks back to the same message.
Is there any way to turn the heartbeat off and then back on again on demand?
You can stop() and start() the listener container.
It appears you are using @KafkaListener since you are using a container factory.
In that case
@KafkaListener(id = "foo" ...)
and then use the KafkaListenerEndpointRegistry bean ...
registry.getListenerContainer("foo").stop();
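A small sketch of the above, assuming the listener was given the id "foo"; the service class and method names are illustrative, not from the original answer:

import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Service;

@Service
public class ConsumerToggle {

    private final KafkaListenerEndpointRegistry registry;

    public ConsumerToggle(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    // Call this whenever the database-driven flag changes.
    public void setConsumerEnabled(boolean enabled) {
        MessageListenerContainer container = registry.getListenerContainer("foo");
        if (enabled) {
            container.start();
        } else {
            // The consumer leaves the group and stops polling until start() is called again.
            container.stop();
        }
    }
}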
The Consumer has a few APIs to control its state:
pause()/resume(): lets you stop/resume consuming from a set of partitions. The consumer stays subscribed (so no rebalance), but it just does not fetch any new records until resumed.
unsubscribe(): lets you change the consumer's subscription; if it is not subscribed to anything, it just stays connected to the cluster.
If you are "done" with the consumer, you can also call close() and start a new one when needed.
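An illustrative snippet with the plain Java consumer (the topic name and timeouts are made up; props is assumed to be an already-configured Properties object):

Consumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));
consumer.poll(Duration.ofMillis(100));   // first poll joins the group and gets an assignment

// Stop fetching from the assigned partitions without leaving the group (no rebalance);
// keep calling poll() so heartbeats continue while paused.
consumer.pause(consumer.assignment());
consumer.poll(Duration.ofMillis(100));

// Start fetching again.
consumer.resume(consumer.paused());

// When truly done, close the consumer and create a new one on demand.
consumer.close();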
I have a Kafka consumer that I create on a schedule. It attempts to consume all of the new messages that have been added since the last commit was made.
I would like to shut the consumer down once it consumes all of the new messages in the log instead of waiting indefinitely for new messages to come in.
I'm having trouble finding a solution via Kafka's documentation.
I see a number of timeout related properties available in the Confluent.Kafka.ConsumerConfig and ClientConfig classes, including FetchWaitMaxMs, but am unable to decipher which to use. I'm using the .NET client.
Any advice would be appreciated.
I have found a solution. Version 1.0.0-beta2 of Confluent's .NET Kafka library provides a method called .Consume(TimeSpan timeSpan). This will return null if there are no new messages to consume or if we're at the partition EOF. I was previously using the .Consume(CancellationToken cancellationToken) overload which was blocking and preventing me from shutting down the consumer. More here: https://github.com/confluentinc/confluent-kafka-dotnet/issues/614#issuecomment-433848857
Another option was to upgrade to version 1.0.0-beta3 which provides a boolean flag on the ConsumeResult object called IsPartitionEOF. This is what I was initially looking for - a way to know when I've reached the end of the partition.
I have never used the .NET client, but assuming it cannot be all that different from the Java client, the poll() method should accept a timeout value in milliseconds, so setting that to 5000 should work in most cases. No need to fiddle with config classes.
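A sketch of that idea with the Java client, polling with a timeout and stopping once a poll returns nothing, i.e. the consumer appears caught up. The topic name, the process(...) helper, and props (an already-configured Properties object) are placeholders:

try (Consumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("my-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
        if (records.isEmpty()) {
            break;                        // nothing new within the timeout: shut down
        }
        records.forEach(r -> process(r)); // process(...) is a placeholder
        consumer.commitSync();
    }
}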
Another approach is to find the maximum offset at the time that your consumer is created, and only read up until that offset. This would theoretically prevent your consumer from running indefinitely if, by any chance, it is not consuming as fast as producers produce. But I have never tried that approach.
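And an untested sketch of that end-offset approach: snapshot the end offsets once the consumer has its assignment, then consume until every assigned partition has reached that point. Names are again illustrative:

// Assumes the consumer already has an assignment (e.g. after assign() or an initial poll()).
Map<TopicPartition, Long> endOffsets = consumer.endOffsets(consumer.assignment());
boolean caughtUp = false;
while (!caughtUp) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    records.forEach(r -> process(r));   // process(...) is a placeholder
    consumer.commitSync();
    caughtUp = endOffsets.entrySet().stream()
            .allMatch(e -> consumer.position(e.getKey()) >= e.getValue());
}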
We are implementing a Kafka consumer using Spring Kafka. If I understand correctly, when processing of a single message fails, the options are to:
Don't care and just ACK
Do some retry handling using a RetryTemplate
If even that doesn't work, do some custom failure handling using a RecoveryCallback
I am wondering what your best practices are for that. I am thinking of simple application exceptions, such as a DeserializationException (for JSON-formatted messages), or a longer local storage downtime, etc. In those cases some extra work is needed, like a hotfix deployment, to fix the broken application before the faulty messages can be re-processed.
Since losing messages (i.e. not processing them) is not an option for us, the only option left, in my opinion, is to store the faulty messages in some persistence store, e.g. another "faulty messages" Kafka topic, so that those events can be processed again at a later time and there is no need to stop event processing entirely.
How do you handle these scenarios?
One example is Spring Cloud Stream, which can be configured to publish failed messages to another topic errors.foo; users can then copy them back to the original topic to try again later.
This logic is done in the recovery callback.
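A rough sketch of doing the same thing manually with Spring Kafka's retry support (this is not the Spring Cloud Stream internals): once the RetryTemplate gives up, the recovery callback publishes the failed record to an error topic. The topic name "errors.foo", the kafkaTemplate bean, and the String record types are assumptions; factory is the ConcurrentKafkaListenerContainerFactory from the consumer configuration.

factory.setRetryTemplate(new RetryTemplate());   // defaults: 3 attempts, no back-off

RecoveryCallback<Void> sendToErrorTopic = context -> {
    // "record" is the attribute under which Spring Kafka's retrying listener
    // adapter exposes the current ConsumerRecord in the RetryContext.
    ConsumerRecord<String, String> failed =
            (ConsumerRecord<String, String>) context.getAttribute("record");
    kafkaTemplate.send("errors.foo", failed.key(), failed.value());
    return null;
};
factory.setRecoveryCallback(sendToErrorTopic);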
We have a use case where we can't drop any messages at all, not even faulty ones. So when we encounter a faulty message, we send a default message in its place and at the same time send the original message to a failed-messages topic for retrying later.
I have decided to use Kafka for an event sourcing implementation and there are a few things I am still not quite sure about. One is finding a good way of recreating my materialized views (stored in a Postgres database) in case of failures.
I am building a messaging application, so consider the example of a service receiving a REST request to create a new message. It will validate the request and then create an event in Kafka (e.g. "NewMessageCreated"). The service (and possibly other services as well) will then pick up that event in order to update its local database. Let's assume, however, that the database has crashed, so saving the message in the database fails. If I understand correctly, the way to deal with this situation is to empty the database and try to recreate it by replaying all Kafka events.
If my assumption is correct I can see the following issues:
1) I need to enforce ordering by userId for my "messages" topic (so all messages from a particular user are consumed in order; a keyed-producer sketch follows this list), which means I cannot use Kafka's log compaction feature for that topic. This means I will always have to replay all events from Kafka, no matter how big my application becomes! Is there a way to address this in a better way?
2) Each time I replay any events from Kafka, they may trigger the creation of new events (e.g. a consumer might do some processing and then generate a new event before committing). This sounds really problematic, so I am wondering whether, instead of just replaying the events when rebuilding my caches, I should process the events but disable the generation of new events (even though this would require extra code and seems cumbersome).
3) When an error occurs (e.g. due to some resource failure or due to a bug) while consuming some message, should I commit the message and generate an error in a Kafka topic, or should I not commit at all? In the latter case this will mean that subsequent messages in the same partition cannot be committed either (otherwise they will implicitly commit the previous one as well).
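For point 1, a hypothetical keyed-producer sketch; the topic name, variable names, serialized payload, and the producer instance are illustrative, not from the original post:

// Using the userId as the record key routes all of that user's events to the
// same partition, so they are consumed in order.
ProducerRecord<String, String> record =
        new ProducerRecord<>("messages", userId, newMessageCreatedJson);
producer.send(record);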
Any ideas how to address these issues?
Thanks.
TL;DR
When a Flume source fails to push a transaction to the next channel in the pipeline, does it always keep event instances for the next try?
In general, is it safe to have a stateful Flume interceptor, where processing of events depends on previously processed events?
Full problem description:
I am considering the possibility of leveraging guarantees offered by Apache Kafka regarding the way topic partitions are distributed among consumers in a consumer group to perform streaming deduplication in an existing Flume-based log consolidation architecture.
Using the Kafka Source for Flume and custom routing to Kafka topic partitions, I can ensure that every event that should go to the same logical "deduplication queue" will be processed by a single Flume agent in the cluster (for as long as there are no agent stops/starts within the cluster). I have the following setup using a custom-made Flume interceptor:
[KafkaSource with deduplication interceptor]-->(MemoryChannel)-->[HDFSSink]
It seems that when the Flume Kafka source runner is unable to push a batch of events to the memory channel, the event instances that are part of the batch are passed again to my interceptor's intercept() method. In this case, it was easy to add a tag (in the form of a Flume event header) to processed events to distinguish actual duplicates from events in a failed batch that got re-processed.
However, I would like to know if there is any explicit guarantee that Event instances in failed transactions are kept for the next try or if there is the possibility that events are read again from the actual source (in this case, Kafka) and re-built from zero. In that case, my interceptor will consider those events to be duplicates and discard them, even though they were never delivered to the channel.
EDIT
This is how my interceptor distinguishes an Event instance that was already processed from a non-processed event:
public Event intercept(Event event) {
    Map<String, String> headers = event.getHeaders();
    // tagHeaderName is the name of the header used to tag events, never null
    if (!tagHeaderName.isEmpty()) {
        // Don't look further if the event was already processed...
        if (headers.get(tagHeaderName) != null) {
            return event;
        }
        // Mark it as processed otherwise...
        headers.put(tagHeaderName, "");
    }
    // Continue processing of the event...
    return event;
}
I encountered a similar issue:
When a sink write fails, the Kafka source still holds the data that has already been processed by the interceptors. On the next attempt, that data is sent to the interceptors and gets processed again and again. From reading the KafkaSource code, I believe this is a bug.
My interceptor strips some information from the original message and modifies it. Because of this bug, the retry mechanism will never work as expected.
So far, there is no easy solution.