Kafka ordering with multiple producers on same topic and partition - apache-kafka

Let's say I have two producers (ProducerA and ProducerB) writing to the same topic with a single partition. Each producer is writing its own unique events serially. So if ProducerA fired 3 events and then ProducerB fired 3 events, my understanding is that Kafka cannot guarantee the order across the producers' events like this:
ProducerA_event_1
ProducerA_event_2
ProducerA_event_3
ProducerB_event_1
ProducerB_event_2
ProducerB_event_3
due to acking, retrying, etc.
However, will each individual producer's events still be in order? For example:
ProducerA_event_1
ProducerB_event_2
ProducerB_event_1
ProducerA_event_2
ProducerA_event_3
ProducerB_event_3
This is of course a simplified version of what I am doing, but I just want to guarantee that if I am reading from a topic for a specific producer's events, then those events will be in order even if other producers' events interleave with them.

The short answer to this one is yes: each individual producer's events are guaranteed to be in order.
Messages in Kafka are appended to a topic partition in the order they are sent and the consumers read the messages in the same order they are stored in the topic partition.
So, assuming you are interested in the messages from Producer A and are filtering out everything else, then in the given scenario you can expect events 1, 2 and 3 from Producer A to be read in that order.
PS: I am however curious to understand the motivation behind using just one partition. Also, on your statement:
So if ProducerA fired 3 events and then ProducerB fired 3 events, my
understanding is that Kafka cannot guarantee the order across the
producer's events like this:
You are correct in saying that the overall ordering is something that cannot be guaranteed but ordering within a partition can be guaranteed.
I hope this helps.

There is a nice article on Medium which states that Kafka does not always guarantee message ordering even for the same producer. It all depends on the Kafka configuration: in particular, max.in.flight.requests.per.connection has to be set to 1. The reason is that if there are multiple requests in flight (say, 2) and the first one fails and is retried, the second will get appended to the log earlier, thus breaking the ordering.
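As a minimal sketch of that configuration, assuming a broker at localhost:9092 and a topic named "my-topic" (both placeholders), a producer that retries without reordering might be set up like this:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Retry on transient failures, but cap in-flight requests at 1 so a
        // retried batch cannot be overtaken by a later one.
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 1; i <= 3; i++) {
                producer.send(new ProducerRecord<>("my-topic", "ProducerA", "ProducerA_event_" + i));
            }
        }
    }
}

Newer clients can achieve the same effect with enable.idempotence=true, which preserves ordering on retries while still allowing up to five in-flight requests.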

A producer's messages will be stored, per partition, in the order they are received. If you can guarantee message ordering on the producer, then consumers can assume ordering when polling. Retry logic, multiple KafkaProducer instances, and other asynchronous implementation details might complicate ordered message production. Often these can be mitigated by including a unique event identifier, an identifier of the producer, and a timestamp of sufficient granularity in either the key or the value of the message (a sketch of such an envelope follows below). Relying on ordering in an asynchronous framework is often a best-case flow, but there should be some way to compensate when things come in out of order.
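For illustration, a hypothetical envelope along those lines; the topic name, delimiter format, and producer id are all made up:

import org.apache.kafka.clients.producer.ProducerRecord;

public class EventEnvelope {
    // Builds a record whose value embeds the producer id, a per-producer
    // sequence number, and a creation timestamp, so consumers can detect and
    // compensate for out-of-order arrival.
    static ProducerRecord<String, String> envelope(String producerId, long sequence, String payload) {
        String value = String.format("%s|%d|%d|%s",
                producerId, sequence, System.currentTimeMillis(), payload);
        // Keying by producer id also keeps one producer's events on one partition.
        return new ProducerRecord<>("my-topic", producerId, value);
    }
}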

Related

Producer-consumer with side constraints in Kafka (or others)

We have a bunch of producers that send messages/events to a bunch of consumers. Each message must be consumed by exactly one consumer. We know that this common scenario can easily be achieved by using consumer groups in Kafka. However, we also have a couple of additional constraints:

Not every consumer can consume every message. Messages have (arbitrary) requirements attached to them, and only consumers that fulfil these requirements must process them. This would still be possible with a consumer group where a consumer first looks at the message and eventually re-submits it if it does not meet the requirements. However, there is no guarantee that messages will be seen by every consumer at least once, so they may bounce around indefinitely even though there is a matching consumer.

We also cannot set up multiple topics, because the requirements for consumers are arbitrarily complex boolean formulas defined by the user rather than the application. This can result in a combinatorial explosion of topics.
Additionally, we want to be able to dynamically add and remove consumers from the group in case more processing resources are needed. As far as I understand Kafka, this can lead to consumers not getting any messages if there are not enough partitions, and dynamically re-partitioning is also not really possible (without admin interaction).
Is there any way to make this work in Kafka? Maybe Kafka is also not the right technology; are there others that are more suitable? We also looked at RabbitMQ, but there too we did not find a way to guarantee that every consumer sees a message so that it can evaluate the requirements.
You could commit offsets manually after identifying the desired events, by setting ENABLE_AUTO_COMMIT_CONFIG to false in your consumer configs (a sketch follows below), but your use case would trigger excessive rebalances, which stop any consumption. I don't think Kafka is the appropriate infrastructure for this.
However, if you can mark your events with a finite number of keys, you can dictate which partition they are produced to. Consuming from the partition that a given key maps to then guarantees that you poll that key's events in order. Note that you need the same number of partitions in your topic as the number of unique keys.
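A minimal sketch of that manual-commit consumer, assuming a broker at localhost:9092, a topic named "events", and a user-defined meetsRequirements predicate (all of these are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "requirement-matchers");    // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Disable auto-commit so offsets only advance for events this consumer
        // has actually inspected and handled.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    if (meetsRequirements(record)) { // user-defined predicate (placeholder)
                        process(record);
                    }
                }
                consumer.commitSync(); // commit only after the batch is handled
            }
        }
    }

    static boolean meetsRequirements(ConsumerRecord<String, String> record) { return true; }

    static void process(ConsumerRecord<String, String> record) { }
}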

Kafka - Synchronized Consumer Groups

I am trying to wrap my head around Kafka consumers and I'd like to know if the following use case can be solved using Kafka.
My use case is basically this one:
I have a stream that I'd like to be consumed in sync by several consumers. In other words, I have a first consumer that starts to consume the stream, then another consumer arrives later. I'd like this second consumer to start consuming the stream at the offset where the first consumer currently is.
I know that I need to have the consumers in two different groups. But it is not clear to me:
how, or whether, it is possible to coordinate the groups' offsets
whether I should expect latency for such a coordination task
You do not need two different groups; all consumers can check one topic. Or as many topics as they like, for that matter.
offset
Messages typically are identified by their arrival date, so all the clients need to tell the producer "my last visit was at 10:00, give me all new messages". So all each client needs to keep track of is when each individual topic was last checked.
latency
this is kind of out of scope at this point. Of course there will be latency, but it depends on the environment: how many consumers, how many topics, the message format, and so on.
so can your use case be solved using Kafka
In short: yes. "Can one consumer continue where another has left off?" The consumers could exchange the latest index between each other; of course that would require some internal synchronization. Kafka itself does not care about consumers, so it will not keep track of the latest index itself. You need to do that work. Another possibility would be to actually consume the messages (i.e., delete them from the queue once consumed), so each time another consumer hits the queue it is guaranteed to receive the messages where the other consumer left off. Of course, that depends on your use case: can you actually delete your messages from the queue?
This is not a problem treated by Kafka directly (a consumer group serves to distribute partitions among members, not to share the same offset), but you can do something for this. You could simply create another topic, where consumer1 would post either the offset or a copy of the message read (so you would need both a consumer and a producer for this), and your other, synchronized consumer would react to this; of course there would be some latency involved.
What is your use case behind this? Why can't you consume at a different offset? Couldn't you rather have one consumer, which would then dispatch the messages it reads to different processes, so that they are indeed synchronized (with no latency)?
What do you mean by synchronized: should consumer2 (and 3 and more) only consume the same messages as consumer1 (i.e., it can't consume faster, which is what I assume in both previous solutions)? While this is possible, it would really be better to know the reason behind this; maybe there is a better way for you to process the data.
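For completeness, a sketch of one way a late-arriving consumer in its own group could start from another group's current committed position. This assumes the first group actually commits offsets; the broker address and group ids are placeholders:

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SyncedStart {
    public static void main(String[] args) throws Exception {
        Properties adminProps = new Properties();
        adminProps.put("bootstrap.servers", "localhost:9092"); // assumed broker

        // Look up where the first consumer's group currently is.
        Map<TopicPartition, OffsetAndMetadata> offsets;
        try (AdminClient admin = AdminClient.create(adminProps)) {
            offsets = admin.listConsumerGroupOffsets("group-1") // assumed first group id
                           .partitionsToOffsetAndMetadata()
                           .get();
        }

        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "group-2"); // the late-arriving group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Manually assign the same partitions and seek to group-1's
            // committed offsets, then poll from there on.
            consumer.assign(offsets.keySet());
            offsets.forEach((tp, om) -> consumer.seek(tp, om.offset()));
        }
    }
}

Because the first group keeps moving between the lookup and the seek, this is only ever approximately synchronized, which matches the latency caveat above.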

How does a kafka process schedule writes to different partitions?

Imagine a scenario where we have 3 partitions belonging to 3 different topics on a machine which runs a kafka process/broker. This broker will receive messages for all three partitions. It will store them on different log subdirectories. My question is how does the kafka broker schedule these writes? How does it decide which partition/topic will be written next?
For ordering over requests, the broker internally handles produce requests roughly as follows:
There are a number of network threads that pull bytes off the network layer and convert them into internal requests. These requests are then placed in a FIFO request queue, from where the I/O threads pull them and append the contained messages to the relevant partitions. So, in short, messages are processed in the order in which they are received.
Looking through the code, I am unsure whether there may be potential for a race condition here, where a smaller request could "overtake" a larger request that was sent immediately before it. However, even if this were possible, it is an extremely unlikely fringe case that I can't see ever occurring for a single producer. Maybe someone with a better understanding of the code can weigh in here.
As for the ordering of batched messages within one request: the request stores messages internally in a HashMap keyed by TopicPartition. Since, as far as I am aware, a Scala HashMap does not preserve the insertion order of its elements, I don't think there are any guarantees around the order in which multiple partitions in one request get processed. That is fine, as ordering is only guaranteed to be preserved within a partition.
Within each partition, messages are processed in the order they were given to the producer before sending.
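To make that concrete, here is a toy model (not Kafka's actual code) of a FIFO request queue drained by a single I/O thread; it only illustrates why processing requests in arrival order preserves per-partition order:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestQueueModel {
    public static void main(String[] args) throws InterruptedException {
        // The FIFO queue that the "network threads" append produce requests to.
        BlockingQueue<String> requestQueue = new ArrayBlockingQueue<>(100);
        // A single partition "log"; appends happen in dequeue order.
        List<String> partitionLog = new ArrayList<>();

        // One "I/O thread" drains the queue and appends to the log in order.
        Thread ioThread = new Thread(() -> {
            try {
                while (true) {
                    String message = requestQueue.take();
                    synchronized (partitionLog) {
                        partitionLog.add(message); // FIFO take => ordered append
                    }
                }
            } catch (InterruptedException ignored) {
            }
        });
        ioThread.setDaemon(true);
        ioThread.start();

        // Requests are enqueued in arrival order...
        for (int i = 1; i <= 3; i++) {
            requestQueue.put("event_" + i);
        }
        Thread.sleep(200);
        synchronized (partitionLog) {
            // ...and land in the log in that order: [event_1, event_2, event_3]
            System.out.println(partitionLog);
        }
    }
}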

Is there any way to maintain message ordering between partitions of a kafka topic with a single consumer?

We are developing a kafka based streaming system in which the producer would produce to multiple partitions within its topic and a single consumer would consume from the topic. I know that kafka maintains message order within partitions, but can we maintain a global message order between partitions within a topic?
Short answer:
no, Kafka does not provide any ordering guarantees between partitions.
Long answer:
I don't quite understand your problem. If you are saying you have only one consumer consuming your topic, why would you have more than 1 partition in that topic and reinvent the wheel trying to maintain order between partitions? If you want to leave some space for future growth, e.g. adding another consumer to consume a part of partitions, then you'll have to rethink your "global message order" idea.
Do you really need ALL messages to be processed in order? Or maybe you could partition by client/application/whatever and maintain order per partition? In most cases you don't really need that global message order, but just have to partition your data properly.
Maintaining order between multiple consumers is a really tough problem to solve, and even if solved correctly you'll just neglect all Kafka benefits.
You can't benefit from Kafka if you want global ordering across more than one partition. Kafka only supports message ordering within a single partition. In our company, we only need messages of the same category to be sent to the same partition, which is easily achieved by partitioning on a partitionId.
The purpose of partitions in Kafka is to create a partial order of messages in a broader topic, where the messages follow a strict total order in any given partition. So the answer is 'no', it would defeat the purpose of partitions if any notion of cross-partition order were to be introduced.
I would suggest instead focusing on how messages (records, in Kafka parlance) are keyed, which effectively determines how they are mapped to a partition. Which partition specifically doesn't matter, as long as the mapping is deterministic and repeatable — all you should care about is that identically keyed records will always appear on the same partition and, hence, will not be assigned to multiple consumers at the same time (within the same consumer group).
If you are publishing updates to persisted entities, the primary key of the entity is typically a good starting point for a Kafka record key. If there needs to be some order of updates across a connected graph of entities, then taking the ID root of the graph and making it the key will likely satisfy your ordering needs.
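As an illustrative sketch (the topic name and field names here are invented), keying updates by the entity's primary key is as simple as:

import org.apache.kafka.clients.producer.ProducerRecord;

public class EntityKeying {
    // All updates for one entity share a key, so the default partitioner maps
    // them to the same partition and they are consumed in production order.
    static ProducerRecord<String, String> update(String entityId, String newState) {
        return new ProducerRecord<>("entity-updates", entityId, newState);
    }
}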

Apache Kafka order of messages with multiple partitions

As per the Apache Kafka documentation, message order can be achieved within a partition, or topic-wide by using a single partition. In this case, what parallelism benefit are we getting, and isn't it then equivalent to traditional MQs?
In Kafka the parallelism is equal to the number of partitions for a topic.
For example, assume that your messages are partitioned based on user_id, and consider 4 messages having user_ids 1, 2, 3 and 4. Assume that you have a "users" topic with 4 partitions.
Since partitioning is based on user_id, assume that the message having user_id 1 will go to partition 1, the message having user_id 2 will go to partition 2, and so on.
Also assume that you have 4 consumers for the topic. Since you have 4 consumers, Kafka will assign each consumer to one partition. So in this case as soon as 4 messages are pushed, they are immediately consumed by the consumers.
If you had 2 consumers for the topic instead of 4, then each consumer would handle 2 partitions and the consuming throughput would be roughly halved.
To completely answer your question,
Kafka only provides a total order over messages within a partition, not between different partitions in a topic.
i.e., if consumption is very slow in partition 2 and very fast in partition 4, then the message with user_id 4 will be consumed before the message with user_id 2. This is how Kafka is designed.
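Note that "user_id 1 will go to partition 1" above is just an assumption for the example; with keyed records the default partitioner picks the partition by hashing the serialized key. A sketch of that mapping, using the hash utilities shipped with the Kafka clients:

import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.utils.Utils;

public class KeyToPartition {
    // The classic default mapping for keyed records: murmur2 hash of the
    // serialized key, made non-negative, modulo the partition count. The
    // mapping is deterministic, so the same user_id always lands on the same
    // partition (as long as the partition count does not change).
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }
}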
I decided to move my comment to a separate answer as I think it makes sense to do so.
While John is 100% right about what he wrote, you may consider rethinking your problem. Do you really need ALL messages to stay in order? Or do you need all messages for specific user_id (or whatever) to stay in order?
If the first, then there's not much you can do: you should use 1 partition and lose all the parallelism ability.
But in the second case, you might consider partitioning your messages by some key, so that all messages for that key arrive at one partition (they might actually move to another partition if you resize the topic, but that's a different case), which guarantees that all messages for that key are in order.
In Kafka, messages with the same key, from the same producer, are delivered to the consumer in order.
On top of that, data within a partition is stored in the order in which it is written; therefore, data read from a partition is read in order for that partition.
So if you want to get your messages in order across multiple partitions, then you really need to group your messages with a key, so that messages with the same key go to the same partition and, within that partition, the messages are ordered.
In a nutshell, you need to design a two-level solution like the above to logically get the messages ordered across multiple partitions.
You may consider having a field which holds the timestamp/date of when the dataset was created at the source.
Once the data is consumed, you can load it into a database. The data needs to be sorted at the database level before the dataset is used for any use case. Well, this is an attempt to help you think in multiple ways.
Let's consider we have a message key which is the timestamp generated at the time of the data's creation, and the value is the actual message string.
As and when a message is picked up by the consumer, it is written into HBase with the RowKey as the Kafka key and the value as the Kafka value.
Since HBase is a sorted map, having the timestamp as a key automatically sorts the data in order. Then you can serve the data from HBase to the downstream apps.
In this way you are not losing the parallelism of Kafka. You also have the privilege of sorting and performing multiple processing logics on the data at the database level.
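A rough sketch of that sink, assuming an HBase table named "events" with a column family "d" (both invented), and a Kafka record whose key is the creation timestamp:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class HBaseSink {
    // Writes one consumed Kafka record into HBase, using the Kafka key
    // (a creation timestamp) as the row key so scans return sorted data.
    static void write(Connection hbase, ConsumerRecord<String, String> record) throws Exception {
        try (Table table = hbase.getTable(TableName.valueOf("events"))) {
            Put put = new Put(Bytes.toBytes(record.key()));
            put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("value"), Bytes.toBytes(record.value()));
            table.put(put);
        }
    }

    public static void main(String[] args) throws Exception {
        try (Connection hbase = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
            // ... poll Kafka and call write(hbase, record) for each record
        }
    }
}

One design caveat: monotonically increasing row keys can hotspot a single HBase region, so key salting is a common mitigation.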
Note: no distributed message broker guarantees overall ordering. If you insist on that, you may need to rethink and use another message broker, or you need to have a single partition in Kafka, which is not a good idea. Kafka is all about parallelism, achieved by increasing partitions or consumer groups.
A traditional MQ works in such a way that once a message has been processed, it gets removed from the queue. A message queue allows a bunch of subscribers to pull a message, or a batch of messages, from the end of the queue. Queues usually allow for some level of transaction when pulling a message off, to ensure that the desired action was executed before the message gets removed.
With Kafka on the other hand, you publish messages/events to topics, and they get persisted. They don’t get removed when consumers receive them. This allows you to replay messages, but more importantly, it allows a multitude of consumers to process logic based on the same messages/events.
You can still scale out to get parallel processing in the same domain, but more importantly, you can add different types of consumers that execute different logic based on the same event. In other words, with Kafka, you can adopt a reactive pub/sub architecture.
ref: https://hackernoon.com/a-super-quick-comparison-between-kafka-and-message-queues-e69742d855a8
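As a small illustration of that pub/sub style (the broker address, topic, and group names are all placeholders), two consumers in different groups each receive the full stream:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FanOut {
    // Consumers in *different* groups each receive every message on the
    // topic, which is what enables the reactive pub/sub architecture above.
    static KafkaConsumer<String, String> subscriber(String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("orders"));
        return consumer;
    }

    public static void main(String[] args) {
        KafkaConsumer<String, String> billing = subscriber("billing-service");
        KafkaConsumer<String, String> audit = subscriber("audit-service");
        // Each group tracks its own offsets, so both can poll independently
        // and both see the full stream, executing different logic on it.
    }
}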
Well, this is an old thread, but it is still relevant, hence I decided to share my view.
I think this question is a bit confusing.
If you need strict ordering of messages, then the same strict ordering should be maintained while consuming the messages; there is absolutely no point in ordering messages in the queue but not while consuming them. Kafka allows the best of both worlds: it orders messages within a partition, right from production through to consumption, while allowing parallelism across multiple partitions. Hence, if you need
Absolute ordering of all events published on a topic, use a single partition. You will not have parallelism, nor do you need it (again, parallelism and strict ordering don't go together).
Otherwise, go for multiple partitions and consumers, and use consistent hashing to ensure that all messages which need to follow a relative order go to a single partition (a sketch of such a partitioner follows below).
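As a sketch of that consistent-hashing idea, here is a custom partitioner. Note that it mirrors what the default partitioner already does for keyed records, so you would only write one if your routing rule differs from plain key hashing:

import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class OrderGroupPartitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            // Unkeyed records need a separate strategy (e.g. round-robin);
            // pinning them to partition 0 keeps this sketch simple.
            return 0;
        }
        // Consistent: the same key always maps to the same partition,
        // as long as the partition count does not change.
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}

It would be registered on the producer via the partitioner.class configuration property.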