Question on implementing CQRS/saga with Kafka - apache-kafka

I am studying CQRS and would like to implement the saga pattern to handle distributed transactions using Kafka.
A saga would subscribe to domain events from other aggregates in order to send commands. My problem is that the domain event would be handled by the aggregate event handler as well.
If the aggregate event handler handles the event successfully, the offset would be committed, so that the job in the broker is gone.
Let's say the aggregate event handler handles that event successfully, but the saga is not triggered because of some unexpected issue. Since the job is gone, the event would never be picked up by the saga if they both live in the same consumer group...
So does it make sense to have one consumer group for the aggregate and another consumer group for the saga?

so that the job in the broker is gone.
I think you're misunderstanding: consuming messages from a Kafka topic does not remove them.
Part of your saga flow should be an action for committing the full offset trace once each service has completed its action.
does it make sense to have one consumer group for the aggregate and another consumer group for the saga?
It makes sense to have intermediate topics, sure. You could use Kafka Streams processors to easily implement that.
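For illustration, here is a minimal sketch (plain Java client, with assumed topic and group names) of the two-consumer-group idea: the aggregate event handler and the saga each subscribe with their own group.id, so each gets its own copy of every event and its own committed offsets.

```java
// Hypothetical sketch: the aggregate event handler and the saga each use their own
// consumer group, so both receive every event on "order-events" independently.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.List;
import java.util.Properties;

public class SeparateConsumerGroups {
    static KafkaConsumer<String, String> consumerFor(String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }

    public static void main(String[] args) {
        // Same topic, different group.ids: offsets are tracked per group, so a failure
        // in the saga never "loses" the event for the aggregate handler, and vice versa.
        KafkaConsumer<String, String> aggregateHandler = consumerFor("aggregate-handler");
        KafkaConsumer<String, String> saga = consumerFor("order-saga");
        aggregateHandler.subscribe(List.of("order-events"));
        saga.subscribe(List.of("order-events"));
        // Each would then run its own poll loop.
    }
}
```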

Related

If a service produces an event to one topic, only this service should consume the processed event from another topic (Kafka)

I have to implement event-driven architecture services with Kafka (Java tech stack).
I drew an example:
Imagine that I have 3 external producers (Ms1, Ms2, Ms3) which send events to one topic that my service reads. After receiving an event, my service does some business logic and then pushes an event to another topic. Ms1, Ms2, and Ms3 subscribe to this second topic and listen for what comes in. My goal is: if Ms1 sent an event to topic-1, only Ms1 must receive the response event from topic-2 (despite the fact that other consumers are listening to this topic too, they are forbidden to receive events belonging to Ms1). If Ms2 sent an event to topic-1, then only Ms2 must receive the event from topic-2.
And I don't know how many consumers/producers there will be. It's a floating amount. Today it can be 3 external producers/consumers, tomorrow maybe 30, and so on. They can subscribe and unsubscribe.
Kafka records shouldn't "belong" to particular services, IMO; this is mostly metadata about data lineage, and that information might be valuable for some other consumer use case you haven't considered yet.
If you have multiple consumers on one topic, there's no logic beyond filtering and explicit partition assignments that would route "all Ms1 producer events to all Ms1 consumers".
If you want to lock down access to topics to particular clients, use ACLs and certificates. Otherwise, there's nothing stopping new consumer groups from subscribing to whatever topics they want.
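As a rough sketch of the filtering option mentioned above: the producing service could stamp each request with an origin header, and each consumer of topic-2 (using its own consumer group) could drop responses that are not its own. The header name and service id are assumptions, not anything Kafka enforces for you.

```java
// Hypothetical sketch: Ms1 reads topic-2 but only handles records whose "origin"
// header matches its own service id; everything else is read and ignored.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;

import java.nio.charset.StandardCharsets;
import java.time.Duration;

public class Ms1ResponseListener {
    private static final String SERVICE_ID = "Ms1";   // hypothetical identifier

    static void pollResponses(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> record : records) {
            Header origin = record.headers().lastHeader("origin");
            if (origin == null) {
                continue;
            }
            String originService = new String(origin.value(), StandardCharsets.UTF_8);
            if (SERVICE_ID.equals(originService)) {
                // Only responses to Ms1's own requests reach the business logic.
                handle(record.value());
            }
            // Kafka cannot hide the other records from this consumer group;
            // the filtering has to happen in application code like this.
        }
    }

    static void handle(String payload) {
        // business logic placeholder
    }
}
```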

How to process events which are out of order using Kafka Streams

I have an application where events are sent to a Kafka topic based on user actions such as User Login, the user's intermediate actions (optional), and User Logout. Each event carries some information in an event object along with a userId; for example, a Login event has a loginTime, Add Note has notes (an intermediate action), and a Logout event has a logoutTime. The requirement is to aggregate the information from all these events into one object after receiving the Logout event for each user and send it downstream.
Due to various reasons (network delay, multiple event producers), events may not arrive in order (a User Logout event may come before an intermediate event), so the question is how to handle such scenarios. I cannot wait for intermediate events after receiving the User Logout event, since intermediate events are optional and depend on the user's actions.
The only option I can think of is to wait for some time after receiving the User Logout event, process intermediate events if they are received within that wait time, and then send the processed event, but I am not sure how to achieve this.
Kafka does not guarantee order across a topic; it guarantees order within a partition. A topic can have more than one partition, and each consumer in a group consumes its own partition(s). That is how Kafka achieves scalability, so what you are experiencing is normal behavior (it isn't a bug or related to network delay or anything like that).

What you can do is make sure that all messages you want to process in order are sent to the same partition. The dumbest way is to set the number of partitions to 1. By default, when you send a message the producer looks at the key, hashes it, and uses that hash to decide which partition the message goes to. If you make sure the key is the same for all related messages, all the hashes will be the same and all those messages will go to the same partition. Alternatively, you can implement a custom partitioner and override the default way Kafka chooses a partition. Either way, the messages will arrive in order.

If you cannot do any of this, then you will receive events out of order and will have to think about how to consume them out of order, but that is not really a Kafka question.
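For example, a minimal sketch of the keying approach (topic and field names are assumptions): using the userId as the record key means every event for the same user hashes to the same partition and therefore stays in order.

```java
// Hypothetical sketch: key every record by userId so per-user ordering is preserved.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class UserEventProducer {
    private final KafkaProducer<String, String> producer;

    public UserEventProducer(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    public void send(String userId, String eventJson) {
        // Same key => same partition => per-user ordering (with a single producer).
        producer.send(new ProducerRecord<>("user-activity-events", userId, eventJson));
    }
}
```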
If you are not able to preserve the order of events (i.e. that Logout will be the last event),
you can achieve your requirements using the Processor API from Kafka Streams. The Kafka Streams DSL can be combined with the Processor API (more details here).
You can have several partitions, but all events for a particular user have to be sent to the same partition.
You have to implement a custom Processor/Transformer.
Your processor will put each event/activity in a state store (aggregating all events from a particular user under the same key).
The Processor API gives you the ability to create a kind of scheduler (Punctuator).
You can schedule a check every X seconds on the events for a particular user. If the Logout was long enough ago, you take all the events/activities, do some aggregation, and send the results downstream.
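A rough sketch of such a processor, using the Kafka Streams 3.x Processor API, might look like the following. The store name, the plain-String buffering, and the sessionComplete check are placeholders, and the "user-activity-store" state store still has to be built (e.g. with Stores.keyValueStoreBuilder) and connected to this processor when the topology is assembled.

```java
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

import java.time.Duration;

public class UserSessionProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;
    private KeyValueStore<String, String> store;

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context;
        // "user-activity-store" must be registered on the topology and attached to this processor.
        this.store = context.getStateStore("user-activity-store");
        // Punctuator: every 30 seconds, check whether any user's session can be flushed.
        context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME, this::punctuate);
    }

    @Override
    public void process(Record<String, String> record) {
        // Buffer every event for the user under the userId key, regardless of arrival order.
        String buffered = store.get(record.key());
        store.put(record.key(), buffered == null ? record.value() : buffered + "\n" + record.value());
    }

    private void punctuate(long timestamp) {
        try (KeyValueIterator<String, String> it = store.all()) {
            while (it.hasNext()) {
                KeyValue<String, String> entry = it.next();
                if (sessionComplete(entry.value, timestamp)) {
                    // Forward the aggregated session downstream and clear the buffer for that user.
                    context.forward(new Record<>(entry.key, entry.value, timestamp));
                    store.delete(entry.key);
                }
            }
        }
    }

    private boolean sessionComplete(String bufferedEvents, long now) {
        // Placeholder: in reality you would check for a Logout event whose grace period has expired.
        return bufferedEvents.contains("LOGOUT");
    }
}
```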
As said in other answers, in Kafka order is maintained on a per-partition basis.
Since you are talking about user events, why don't you make the userId the Kafka message key? That way, all events related to a specific user will always be ordered (provided they are produced by a single producer).
You should ensure (by design) that only one Kafka producer pushes all the user change events to the given topic. This way, you can avoid out-of-order messages caused by multiple producers.
On the Streams side, you might also want to look at windows in Kafka Streams. Tumbling windows, for example, are non-overlapping and fixed-size: you aggregate records over a period of time.
You may then want to sort the aggregated records by their timestamps (or, as you said, by the logout time, login time, etc.) and act accordingly.
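If you go the windowing route, a minimal tumbling-window sketch in the Streams DSL (Kafka Streams 3.x) could look like this; the topic names, the 5-minute window size, and the string-concatenation "aggregation" are placeholders for your real session object.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

import java.time.Duration;

public class TumblingWindowExample {
    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("user-activity-events", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
               // Non-overlapping, fixed-size 5-minute windows per userId.
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
               // Placeholder aggregation: just concatenate the raw events seen in the window.
               .aggregate(() -> "",
                          (userId, event, agg) -> agg.isEmpty() ? event : agg + "\n" + event,
                          Materialized.with(Serdes.String(), Serdes.String()))
               .toStream()
               // Drop the window wrapper so the output topic is keyed by plain userId again.
               .map((windowedUserId, sessionAgg) -> KeyValue.pair(windowedUserId.key(), sessionAgg))
               .to("user-session-aggregates", Produced.with(Serdes.String(), Serdes.String()));
        return builder;
    }
}
```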
Simple and effective solution
Use synchronous send and set delivery.timeout.ms and retries to their maximum values.
To ensure fault tolerance, set acks=all with min.insync.replicas=2 (a topic configuration) and use a single producer to push to that topic.
You should also set max.block.ms to some maximum value so that your send() does not return immediately if there is an error fetching the metadata (for example, when Kafka is down).
Benchmark the synchronous send at your expected rate and check whether it meets your requirements or benchmark numbers.
This ensures that the message that came first is sent to Kafka first, and the next message is not sent until the previous message is successfully acknowledged.
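A sketch of that producer configuration with a blocking send might look like the following; the broker address, topic, and the exact values are placeholders rather than recommendations (min.insync.replicas=2 is set on the topic, not in this client config).

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class SynchronousSender {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                       // pair with min.insync.replicas=2 on the topic
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);        // retry until delivery.timeout.ms expires
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, Integer.MAX_VALUE);
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, Long.MAX_VALUE);      // block send() instead of failing fast on missing metadata

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on get() means the next message is only sent after this one is acknowledged.
            producer.send(new ProducerRecord<>("user-activity-events", "user-1", "LOGIN")).get();
        }
    }
}
```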
If your benchmark figure is not met, try adding a back-pressure mechanism such as an in-memory/persistent queue:
Add event to a queue in Thread-1
Peek (not dequeue) event from the queue in Thread-2
Call producer.send(...).get() in Thread-2
Dequeue the event in Thread-2
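A rough sketch of that peek/send/dequeue loop with an in-memory queue (the record types and details are assumptions):

```java
// Hypothetical back-pressure loop: Thread-1 enqueues, Thread-2 peeks, sends
// synchronously, and only dequeues after a successful acknowledgement.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OrderedForwarder implements Runnable {
    private final BlockingQueue<ProducerRecord<String, String>> queue = new LinkedBlockingQueue<>();
    private final KafkaProducer<String, String> producer;

    public OrderedForwarder(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    // Thread-1: enqueue events as they arrive.
    public void enqueue(ProducerRecord<String, String> record) {
        queue.add(record);
    }

    // Thread-2: peek, send synchronously, then dequeue, so a crash between send and
    // dequeue re-sends the event rather than losing it.
    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                ProducerRecord<String, String> next = queue.peek();
                if (next == null) {
                    Thread.sleep(10);
                    continue;
                }
                producer.send(next).get();   // blocks until acknowledged
                queue.poll();                // dequeue only after a successful send
            }
        } catch (Exception e) {
            Thread.currentThread().interrupt();
        }
    }
}
```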
The key is to make your frontend tracker send ordered events to the backend service, which then produces events to Kafka.
You can achieve that by batching the events and sending a batch to the backend only after the previous batch has been successfully delivered.

Kafka Consumer API vs Streams API for event filtering

Should I use the Kafka Consumer API or the Kafka Streams API for this use case? I have a topic with a number of consumer groups consuming off it. This topic contains one type of event, which is a JSON message with a type field buried inside it. Some messages will be consumed by some consumer groups and not by others; one consumer group will probably not be consuming many messages at all.
My question is:
Should I use the Consumer API, then on each event read the type field and drop or process the event based on that field?
Or should I filter using the Streams API's filter method with a predicate?
After I consume an event, the plan is to process it (a DB delete, update, or other action depending on the service); then, if there is a failure, I will produce to a separate queue which I will re-process later.
Thank you.
This seems more like a matter of opinion. I personally would go with Streams/KSQL, which likely means less code for you to maintain. You can have another intermediary topic that contains the cleaned-up data, to which you can then attach a Connect sink, other consumers, or other Streams and KSQL processes. Using Streams you can scale a single application across different machines, store state, have standby replicas, and more, all of which would be a PITA to do yourself.
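For reference, the Streams filter option can be as small as the following sketch; the topic names and the extractType helper are placeholders for however you actually parse the buried type field.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class TypeFilterTopology {
    public static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
               // Keep only the event types this service cares about.
               .filter((key, json) -> "ORDER_UPDATED".equals(extractType(json)))
               .to("events-for-this-service", Produced.with(Serdes.String(), Serdes.String()));
        return builder;
    }

    private static String extractType(String json) {
        // Placeholder: a real implementation would parse the JSON and return the "type" field.
        return json.contains("\"type\":\"ORDER_UPDATED\"") ? "ORDER_UPDATED" : "OTHER";
    }
}
```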

Confused about preventing duplicates with new Kafka idempotent producer API

My app has 5+ consumers consuming off of five partitions of a Kafka topic (using Kafka version 11). My consumers each produce a message to another topic, then save some state to the database, then do a manual immediate acknowledgement and move on to the next message.
I'm trying to solve the scenario where a consumer emits successfully to the outbound topic and then we have a failure/lose the consumer. When another consumer takes over the partition, it will emit ANOTHER message to the outbound topic. This is bad :(
I discovered that Kafka now has idempotent producers, but from what I read this only provides guarantees within a single producer session.
"When producer restarts, new PID gets assigned. So the idempotency is promised only for a single producer session" - (blog) - https://hevodata.com/blog/kafka-exactly-once
This seems largely useless to me. In my use case, the whole point is that when I replay a message on another consumer, it does not duplicate the outbound message.
Is there something I'm missing?
When using transactions, you shouldn't use ANY consumer-based mechanism, manual or otherwise, to commit the offsets.
Instead, you use the producer to send the offsets to the transaction so the offset commit is part of the transaction.
If configured with a KafkaTransactionManager (or ChainedKafkaTransactionManager), the Spring listener container will send the offsets to the transaction when the listener exits normally.
If you don't use a Kafka transaction manager, you need to use the KafkaTemplate (or the Producer, if you are using the native APIs) to send the offsets to the transaction.
Committing the offset with the consumer is not part of the transaction, so things will not work as expected.
When using a transaction manager, the listener container binds the Producer to the thread so any downstream KafkaTemplate operations participate in the transaction that the consumer starts. See the documentation.
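With the native APIs, a hedged sketch of that consume-transform-produce loop looks roughly like this: the offsets are sent to the transaction through the producer rather than committed by the consumer. The topic name is a placeholder; the producer needs a transactional.id, the consumer needs enable.auto.commit=false, and downstream readers should use isolation.level=read_committed.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;

public class TransactionalForwarder {
    static void forward(KafkaConsumer<String, String> consumer,
                        KafkaProducer<String, String> producer) {
        producer.initTransactions();
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            if (records.isEmpty()) {
                continue;
            }
            producer.beginTransaction();
            Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
            try {
                for (ConsumerRecord<String, String> record : records) {
                    producer.send(new ProducerRecord<>("outbound-topic", record.key(), record.value()));
                    offsets.put(new TopicPartition(record.topic(), record.partition()),
                                new OffsetAndMetadata(record.offset() + 1));
                }
                // The offset commit rides in the same transaction as the outbound sends:
                // either both become visible or neither does.
                producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                producer.commitTransaction();
            } catch (Exception e) {
                producer.abortTransaction();
                // In production you would also rewind the consumer to the last committed offsets here.
            }
        }
    }
}
```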

Kafka Rebalancing and listeners pitfalls

I am reading Kafka: The Definitive Guide and would like to better understand the rebalance listener. The example in the book simply uses a HashMap to maintain the current offsets that have been processed and commits that state when a partition is revoked.
There are two issues/questions I have around the code example:
The language used leads me to assume that these callbacks are made on a different thread. So shouldn't thread safety be considered when updating the current offsets? Additionally, shouldn't the current batch be cancelled once this commit has been made?
It says to use commitSync to make sure offsets are committed before the rebalance proceeds. However, this is only synchronous within that consumer. Is there some mechanism by which the coordinator will not proceed until it hears back from all subscribed consumers?
I re-read the section in the book and I agree I was a bit confused too!
The Javadoc states:
This callback will only execute in the user thread as part of the poll(long) call whenever partition assignment changes.
I had a look at the code and confirmed the rebalance listener methods are indeed called in the same thread that owns the Consumer.
Yes, you should use commitSync() when committing from the rebalance listener.
To explain why, let's look at the golden-path example. We start with a consumer happily consuming and heartbeating regularly to the coordinator. At some point, the coordinator returns a REBALANCE_IN_PROGRESS error to a heartbeat request. This can be caused by a new member wanting to join the group, a member leaving or failing to heartbeat, or partitions being added to or removed from the subscription. At this point, all consumers need to rejoin the group.
Before attempting to rejoin the group, the consumer will synchronously execute ConsumerRebalanceListener.onPartitionsRevoked(). Once the listener returns, the consumer will send a JoinRequest to the coordinator to rejoin the group.
That said, and I think this is what you were getting at, if your callback takes too long (> session.timeout.ms) to commit, the group could already be in another generation and the partitions whose offsets you are trying to commit could be assigned to another member. In that case, the commit will fail even though it was synchronous. But by using commitSync() in the listener, you are guaranteed the consumer won't rejoin the group before completing the commit.
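For completeness, a minimal version of the book's pattern as discussed here: the processed offsets are tracked in a map and committed synchronously from onPartitionsRevoked, which runs on the same thread as poll(), so no extra synchronization is needed. The topic name is a placeholder.

```java
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CommitOnRebalance {
    private final Map<TopicPartition, OffsetAndMetadata> currentOffsets = new HashMap<>();

    void subscribe(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("some-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Synchronous commit before this consumer rejoins the group.
                consumer.commitSync(currentOffsets);
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Nothing to do here for this example.
            }
        });
    }

    // Called from the poll loop after each record is processed.
    void markProcessed(TopicPartition tp, long offset) {
        currentOffsets.put(tp, new OffsetAndMetadata(offset + 1));
    }
}
```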