Reactor Kafka: how to guarantee at-least-once after a flatMap - apache-kafka

I have a service that consumes from Kafka and stores the data to a database. The logic, simplified:
Flux<ReceiverRecord<String, byte[]>> kafkaFlux = KafkaReceiver.create(options).receive();
kafkaFlux.flatMap(r -> store(r))  // IO operation: store to the database (assumed to emit the record once persisted)
         .subscribe(record -> record.receiverOffset().acknowledge()); // ack the record
The flatMap makes the flux disordered. Based on the Reactor Kafka documentation, acknowledge() could ack a record that hasn't been stored to the database yet:
https://projectreactor.io/docs/kafka/snapshot/api/reactor/kafka/receiver/ReceiverOffset.html
Acknowledges the ReceiverRecord associated with this offset. The offset will be committed automatically based on the commit configuration parameters ReceiverOptions.commitInterval() and ReceiverOptions.commitBatchSize(). When an offset is acknowledged, it is assumed that all records in this partition up to and including this offset have been processed. All acknowledged offsets are committed if possible when the receiver Flux terminates.
How can I guarantee at-least-once delivery without blocking the stream?

Starting with version 1.3.8, commits can be performed out of order and the framework will defer the commits as needed, until any "gaps" are filled. This removes the need for applications to keep track of offsets and commit them in the right order.
You can set maxDeferredCommits on your ReceiverOptions to enable the out-of-order commit feature.
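A minimal sketch of wiring this up (assuming consumerProps holds your consumer configuration, the topic name is illustrative, and store(r) returns a Mono that completes once the record is persisted):

import java.util.Collections;
import reactor.core.publisher.Flux;
import reactor.kafka.receiver.KafkaReceiver;
import reactor.kafka.receiver.ReceiverOptions;
import reactor.kafka.receiver.ReceiverRecord;

ReceiverOptions<String, byte[]> options =
        ReceiverOptions.<String, byte[]>create(consumerProps)
                .subscription(Collections.singleton("my-topic"))
                // Enables out-of-order commits: acknowledged offsets behind a "gap"
                // are deferred; the receiver pauses if more than 100 pile up.
                .maxDeferredCommits(100);

Flux<ReceiverRecord<String, byte[]>> kafkaFlux = KafkaReceiver.create(options).receive();
kafkaFlux.flatMap(r -> store(r).thenReturn(r))   // completion order no longer matters
         .subscribe(r -> r.receiverOffset().acknowledge());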

Related

Multiple kafka producer setup: Eliminate delayed/stale writes

I have a central database from which a reader reads the data in the order it is received and then writes it to a Kafka topic. For resiliency, we have multiple readers, but only one should be active at a time, so events are written to Kafka in the order in which they are received. This switching of readers is decided by the central database to which they are subscribed: if the active reader hasn't responded in the last n seconds, the database can switch to a different reader to maintain the SLA.
For simplicity, say each change is essentially a document change. So, change1 updates Doc1, change2 updates Doc2, change3 updates Doc1 again etc and as part of each change, you receive the updated document.
Now consider the following scenario. In the example below, Dx-Vy represents DocX-VersionY:
1. Reader1 becomes the active reader.
2. Reader1 receives D1-V1, D2-V1 and commits them to Kafka. D1-V1, D2-V1 are acked to the database.
3. Reader1 then receives D1-V2, D2-V2 but freezes before committing to Kafka.
4. The database makes Reader2 the active reader. Because it never received the ack from Reader1, it provides the same update to Reader2.
5. Reader2 receives D1-V2, D2-V2 and commits them to Kafka. D1-V2, D2-V2 are acked to the database.
6. Reader2 receives D1-V3, D2-V3 and commits them to Kafka. D1-V3, D2-V3 are acked to the database.
7. Reader1 comes back and writes D1-V2, D2-V2 (which is now stale data).
Now, we have stale data committed to Kafka. I want to avoid this scenario.
I could not find any documentation that says Kafka producers have a concept of a lease, where a producer can write to the Kafka topic (or partition) only if it holds the lease, or any concept of ETags, where it can only change the document if some kind of ETag match succeeds (whether at the partition level, event-key level, etc.). I am still new to Kafka, so this might be a naive question and the scenario may be avoidable by using Kafka correctly. Can someone point me in the right direction or to the correct documentation (if it exists) to avoid something like the above from happening?
I went through the Kafka documentation but couldn't find any Kafka attribute that can help me with this scenario. I also googled similar problems to see if there is an existing answer.

Kafka Streams: Any guarantees on ordering of saves to state stores when using at_least_once?

We have a Kafka Streams Java topology built with the Processor API.
In the topology, we have a single processor, that saves to multiple state stores.
As we use at_least_once, we would expect to see some inconsistencies between the state stores - e.g. an incoming record results in writes to both state stores A and B, but a crash between the saves results in only the save to store A getting written to the Kafka changelog topic.
Are we guaranteed that the order in which we save will also be the order in which the writes to the state stores happen? E.g. if we first save to store A and then to store B, we can of course have situation where the write to both change logs succeeded, and a situation where only the write to change log A was completed - but can we also end up in a situation where only the write to change log B was completed?
What situations will result in replays? A crash of course - but what about rebalances, new broker partition leader, or when we get an "Offset commit failed" error (The request timed out)?
A while ago, we tried using exactly_once, which resulted in a lot of error messages, that didn't make sense to us. Would exactly_once give us atomic writes across multiple state stores?
Ad 3. According to the original design document on exactly-once support in Kafka Streams, I think with exactly_once you get atomic writes across multiple state stores:
When stream.commit() is called, the following steps are executed in order:
1. Flush local state stores (KTable caches) to make sure all changelog records are sent downstream.
2. Call producer.sendOffsetsToTransactions(offsets) to commit the current recorded consumer's positions within the transaction. Note that although the consumer of the thread can be shared among multiple tasks hence multiple producers, a task's assigned partitions are always exclusive, and hence it is safe to just commit the offsets of this task's assigned partitions.
3. Call producer.commitTransaction() to commit the current transaction. As a result the task state represented as the above triplet is committed atomically.
4. Call producer.beginTransaction() again to start the next transaction.
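For context, here is roughly what that cycle looks like against the plain producer API (a sketch: topic names, the offsets map, and the group id are placeholders; note that the method in the released KafkaProducer API is named sendOffsetsToTransaction):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-app-task-0"); // stable per task
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);

producer.initTransactions();                      // once, at startup
producer.beginTransaction();
producer.send(new ProducerRecord<>("output-topic", "key", "value"));
// Commit the consumed offsets as part of the same transaction
// (the committed offset is the next position, i.e. last processed + 1):
Map<TopicPartition, OffsetAndMetadata> offsets =
        Map.of(new TopicPartition("input-topic", 0), new OffsetAndMetadata(43L));
producer.sendOffsetsToTransaction(offsets, "my-consumer-group");
producer.commitTransaction();                     // output and offsets commit atomically
producer.beginTransaction();                      // start the next transaction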

Repeatedly produced to Apache Kafka, different offsets? (Exactly once semantics)

While trying to implement exactly-once semantics, I found this in the official Kafka documentation:
Exactly-once delivery requires co-operation with the destination storage system but Kafka provides the offset which makes implementing this straight-forward.
Does this mean that I can use the (topic, partition, offset) tuple as a unique primary identifier to implement deduplication?
An example implementation would be to use an RDBMS and this tuple as a primary key for an insert operation within a big processing transaction where the transaction fails if the insertion is not possible anymore because of an already existing primary key.
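A hypothetical sketch of that idea with JDBC (the events table, its column names, and the method are made up for illustration; column types and the exact exception subclass signalling a key violation vary by database and driver):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Assumed schema:
//   CREATE TABLE events (topic VARCHAR(255), kafka_partition INT, kafka_offset BIGINT,
//                        payload BYTEA, PRIMARY KEY (topic, kafka_partition, kafka_offset));
void storeOnce(Connection connection, ConsumerRecord<String, byte[]> record) throws Exception {
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO events (topic, kafka_partition, kafka_offset, payload) VALUES (?, ?, ?, ?)")) {
        ps.setString(1, record.topic());
        ps.setInt(2, record.partition());
        ps.setLong(3, record.offset());
        ps.setBytes(4, record.value());
        ps.executeUpdate();
        connection.commit();      // data and dedup key are committed together
    } catch (SQLIntegrityConstraintViolationException duplicate) {
        connection.rollback();    // (topic, partition, offset) already stored: skip it
    }
}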
I think the question is equivalent to:
Does a producer use the same offset for a message when retrying to send it after detecting a possible failure or does every retry attempt get its own offset?
If the offset is reused when retrying, consumers obviously see multiple messages with the same offset.
Another question, maybe related:
With single or multiple producers producing to the same topic, can there be "gaps" in the offset number sequence seen by one consumer?
Another possibility could be that the offset is determined only once the message reaches the leader of the partition. That would imply there are probably no gaps or offset jumps (since the leader does not honor anything like a producer-suggested offset), but also that duplicate messages get different offsets, so I would have to use my own unique identifier within the application's message at the application level.
To answer my own question:
The offset is generated solely by the server (more precisely: by the leader of the corresponding partition), not by the producing client. It is then sent back to the producer in the produce response. So:
Does a producer use the same offset for a message when retrying to send it after detecting a possible failure or does every retry attempt get its own offset?
No. (See update below!) The producer does not determine offsets and two identical/duplicate application messages can have different offsets. So the offset cannot be used to identify messages for producer deduplication purposes and a custom UID has to be defined in the application message. (Source)
With single or multiple producers producing to the same topic, can there be "gaps" in the offset number sequence seen by one consumer?
Since there is only a single leader for every partition, which maintains the current offset, and since (with the default configuration) leadership is only transferred to an active in-sync replica in case of a failure, I assume that the latest used offset is always communicated correctly when a new leader is elected for a partition, so there should not be any offset gaps or jumps initially. However, because of the log compaction feature (assuming it is enabled), there are cases where there can indeed be gaps in a stream of offsets when consuming already committed messages of a partition once again after compaction has kicked in. (Source)
Update (Kafka >= 0.11.0)
Starting from Kafka version 0.11.0, producers now additionally send a sequence number with their requests, which is then used by the leader to deduplicate requests by this number and the producer's ID. So with 0.11.0, the precondition on the producer side for implementing exactly once semantics is given by Kafka itself and there's no need to send another unique ID or sequence number within the application's message.
Therefore, the answer to question 1 could now also be yes, somehow.
However, note that exactly once semantics are still only possible with the consumer never failing. Once the consumer can fail, one still has to watch out for duplicate message processings on consumer side.
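On the producer side this takes a single setting; a minimal sketch (broker address and serializers are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
// The broker deduplicates retries by producer ID + sequence number (Kafka >= 0.11):
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
props.put(ProducerConfig.ACKS_CONFIG, "all");   // required when idempotence is enabled

KafkaProducer<String, String> producer = new KafkaProducer<>(props);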

How to commit manually with Kafka Stream?

Is there a way to commit manually with Kafka Stream?
Usually with using the KafkaConsumer, I do something like below:
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        // process records
    }
    consumer.commitAsync();
}
Where I'm calling commit manually. I don't see a similar API for KStream.
Commits are handled internally by Streams and are fully automatic, so there is usually no reason to commit manually. Note that Streams handles this differently from consumer auto-commit -- in fact, auto-commit is disabled for the internally used consumer and Streams manages commits "manually". The reason is that commits can only happen at certain points during processing to ensure no data can get lost (there are many internal dependencies with regard to updating state and flushing results).
For more frequent commits, you can reduce the commit interval via the StreamsConfig parameter commit.interval.ms.
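For example (the value is illustrative; the at-least-once default is 30000 ms):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100); // commit roughly every 100 ms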
Nevertheless, manual commits are possible indirectly via the low-level Processor API. You can use the context object that is provided via the init() method to call context#commit(). Note that this is only a "request to Streams" to commit as soon as possible -- it does not issue a commit directly.
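A sketch of such a processor using the classic Processor API (class name and types are illustrative; the newer org.apache.kafka.streams.processor.api.Processor exposes commit() on its context as well):

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class CommitRequestingProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;        // keep the context handed to init()
    }

    @Override
    public void process(String key, String value) {
        // ... process the record, update stores, forward results ...
        context.commit();              // only a request; Streams commits when it can
    }

    @Override
    public void close() { }
}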

Usecases for manual offset management in Kafka

I'm trying to implement a Kafka consumer in Java.
Assume that the consumer contains some message-processing logic that may throw an exception. In that case the consumer should sleep for some time and reprocess the last message.
My idea was to use manual offset management: an offset is not committed on fail, thus the consumer presumably will read from the old offset.
During testing I found out that a message is actually read only once, despite the fact that its offset is not committed. The last committed offset is considered only on application restart.
My questions are:
Whether I'm doing the right thing?
What are usecases for manual offset management?
KafkaConsumer keeps the latest offsets in memory; thus, if an exception occurs (and you recover from it) and you want to read a message a second time, you need to use seek() before polling again.
Committing offsets is "only" there to preserve the offsets when the client is shut down or crashes (i.e., offsets stored reliably vs. in memory). On client start-up, the latest committed offsets are fetched, and after that the client only uses its own in-memory offsets.
Manual offset management is useful if you want to "bundle" offset commits with some other action (e.g., a second "commit" in another system that must be in sync with the committed Kafka offsets).
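A sketch of the seek()-on-failure pattern described above (topic name and process() are placeholders for your own setup):

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

consumer.subscribe(Collections.singleton("my-topic"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        try {
            process(record);                               // your processing logic
        } catch (Exception e) {
            // Rewind the in-memory position so the next poll() returns this record again.
            consumer.seek(new TopicPartition(record.topic(), record.partition()),
                    record.offset());
            break;                                         // retry from the failed record
        }
    }
    consumer.commitSync();   // commits the current position (reflects any seek above)
}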