I have a publishing endpoint 'P' and two consumers, 'A' and 'B'.
When 'P' publishes messages '1', '2', '3', I expect the consumers to consume them in the order '123'. In reality, though, the consumption order may be '132', '312', and so forth.
I have configured the concurrency-related settings, but it doesn't help.
Question: Is there a configuration that could prevent 'B' from taking a message until 'A' has consumed its message successfully?
Thanks
There is no guarantee that messages will be processed in order, and relying on such ordering should be avoided at all costs.
Messages in the queue are FIFO, but when you have multiple consumers with prefetch configured, consumers A and B will each pull x messages off the queue and process them at their own pace. Depending on how long processing takes, consumer A could go back for more messages before consumer B is done, so messages get processed out of order; this is a common situation.
Message retries can also cause messages to be processed out of order.
That said, if you really do require ordered processing, a single consumer with a concurrency limit of 1 will process messages in order, at the cost of throughput.
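As a minimal sketch of that single-consumer setup, assuming a RabbitMQ broker and the RabbitMQ Java client (the 'orders' queue name is hypothetical): a prefetch of 1 plus manual acks means the broker hands the consumer only one message at a time, so the next message is not delivered until the current one has been processed and acked.

```java
import com.rabbitmq.client.*;

public class OrderedConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                              // assumption: local broker
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(1);                                       // prefetch = 1: at most one unacked message at a time
        channel.queueDeclare("orders", true, false, false, null);  // hypothetical queue

        DeliverCallback handler = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), "UTF-8");
            System.out.println("processing " + body);              // finish processing before acking
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        // autoAck = false, so the broker waits for the ack above before delivering the next message
        channel.basicConsume("orders", false, handler, consumerTag -> { });
    }
}
```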
Related
Scenario
10 Kafka consumers within the same consumer group.
Kafka has 10 partitions, which means each partition is automatically assigned to a single consumer within the group.
Messages are sent to partitions on a round-robin basis.
Every now and then, a message will take much longer to process than other messages.
On such occasions, there's a chance the next message is assigned to a consumer that is still busy while other consumers are free.
Question
Does Kafka support a mechanism to automatically send a message to a partition whose consumer is free?
If it doesn't, what is the common approach to this scenario?
Although you could implement a custom Assignor class, by default consumption is based only on assignment, not on load; load information is not communicated back to the group coordinator. Besides, constantly shuffling partitions around based on load would likely cause frequent group rebalances, making consumption even slower.
Regarding length of processing, I am not aware of any way your consumer could inspect a message before partition assignment and polling the records. Therefore, you'd need to decouple your processing logic from the actual poll loop if you'd like to improve processing times.
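As a rough sketch of decoupling processing from the poll loop, assuming Kafka's Java client, a local broker, and a hypothetical 'events' topic (offset management and back-pressure are deliberately ignored here, so this loses ordering and delivery guarantees for the handed-off work):

```java
import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.*;
import java.util.concurrent.*;

public class DecoupledConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        props.put("group.id", "workers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService pool = Executors.newFixedThreadPool(8);      // slow work happens off the poll thread
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    pool.submit(() -> process(record));               // keep the poll loop fast even when processing is slow
                }
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        // long-running, per-record work goes here
    }
}
```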
Can one have 2 clients reading a single topic such that they never receive the same message? If one client dies, the other keeps reading and gets all the messages.
In a word "redundant clients" - not for performance sake but for client failover.
All I have seen is examples of N partitions and >N clients in a consumer group where N clients get messages and the rest are idle. It's not optimal to have 2 clients on a single partition where one client does nothing until the other client fails.
More than one client in the same consumer group cannot be assigned the same partition at the same time, and therefore they will never receive the same messages.
The scenario you're asking for is more about fault tolerance than load balancing. Assuming one partition: if you run two consumers and one encounters a fatal exception while consuming a message, doesn't commit that offset, and dies, then the secondary idle consumer will pick up from the last committed offset and try consuming those same messages after the consumer group rebalances.
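A minimal sketch of such a standby pair, assuming Kafka's Java client, a local broker, and a hypothetical single-partition 'events' topic: run two copies of this program with the same group.id; only one gets the partition, and the idle one takes over from the last committed offset after a rebalance.

```java
import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.*;

public class FailoverConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // assumption: local broker
        props.put("group.id", "failover-group");              // same group.id in both instances
        props.put("enable.auto.commit", "false");              // commit only after successful processing
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));   // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                consumer.commitSync();   // if this instance dies before committing, the standby re-reads from the last commit
            }
        }
    }
}
```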
I was once asked in an interview how you would deal with messages coming in out of order in a message queue. It has been a while and I have not found a definitive answer, and I was wondering if an expert in the field can help me answer it to satisfy my own curiosity.
I understand that some message queues provide exactly-once and FIFO guarantees. I am also aware of the notion of event time and processing time in streaming systems. For instance, in log-based message queues like Kafka, mixed-up ordering may be less likely to happen due to the presence of offsets and message durability (I may be wrong). I have also thought about using timestamps, requiring each message sender to record the time of the message before sending it, but this is fraught with inconsistency due to clock skew.
Given all of that, I am wondering how one can address mixed-up ordering in a traditional messaging system like AMQP, JMS or RabbitMQ, where a dozen IoT devices may be sending messages and I, as a consumer, want to reconcile them in the correct order.
If the queue your system uses provides an ordered-message guarantee, then simply use that channel (like Kafka's single partition, or AMQP under some settings).
But if the queue your system uses does not provide strict ordering, then the general idea is that the client attaches a monotonically increasing[1] number (or timestamp) to each message it sends to the queue. This forms the sequence the producer intends its receivers to see.
How to get a monotonically increasing value:
Using a timestamp:
The POSIX clock_gettime() function with CLOCK_MONOTONIC[2] provides a monotonically increasing timestamp, which the producer can use to put a timestamp on each message. The receiver can identify an out-of-order message when it sees that the received message has a timestamp older than that of the latest message.
Using a sequence number:
Before sending each message, you can simply increment an atomic counter and attach the counter value to the message, so that the receiver knows the intended ordering. This forms a strictly increasing sequence. The approach is very similar to Lamport's logical clock[3], which provides a virtual clock for the producer.
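As a small sketch of the sequence-number approach (the QueueTransport interface is a hypothetical stand-in for whatever broker client you actually use):

```java
import java.util.concurrent.atomic.AtomicLong;

public class SequencedProducer {
    private final AtomicLong counter = new AtomicLong();   // strictly increasing per producer instance

    /** Hypothetical transport abstraction standing in for the real broker client. */
    public interface QueueTransport {
        void publish(long sequence, String payload);
    }

    public void send(QueueTransport transport, String payload) {
        // Attach the next counter value so the receiver can detect gaps and reordering.
        long sequence = counter.incrementAndGet();
        transport.publish(sequence, payload);
    }
}
```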
Dealing with out-of-order messages on the receiver side:
This is pretty much application-specific, but in general you have two options when messages arrive out of order (a sketch of option (b) follows this list):
a) Discard the older message, as in cases where the receiver only has to show the latest value of a stock.
b) Keep a buffer and reorder by sequence, like within a TCP connection (e.g. ZooKeeper uses TCP as a queue for FIFO ordering [4-5]).
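Here is a minimal sketch of option (b), assuming per-producer sequence numbers that start at 1 with no permanent gaps; messages that arrive early are held back until the missing ones show up:

```java
import java.util.TreeMap;
import java.util.function.Consumer;

// A minimal reorder buffer: deliver messages strictly in sequence order,
// holding back anything that arrives early until the gap is filled.
public class ReorderBuffer<T> {
    private final TreeMap<Long, T> pending = new TreeMap<>();
    private final Consumer<T> downstream;
    private long nextExpected = 1;          // assumption: sequences start at 1 with no permanent gaps

    public ReorderBuffer(Consumer<T> downstream) {
        this.downstream = downstream;
    }

    public synchronized void accept(long sequence, T message) {
        if (sequence < nextExpected) {
            return;                          // duplicate or stale message: drop it
        }
        pending.put(sequence, message);
        // Flush every consecutive message we now have.
        while (!pending.isEmpty() && pending.firstKey() == nextExpected) {
            downstream.accept(pending.pollFirstEntry().getValue());
            nextExpected++;
        }
    }
}
```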
Tools:
If you are not adding a timestamp to messages, then have the producer send all messages to a single Apache Kafka partition in sequence, as this ensures the receiver can receive them in sequence.
If you are using a messaging system that does not guarantee ordered delivery (like AMQP under some settings[6]), then consider attaching an additional monotonically increasing number/clock to each message.
[1] https://en.wiktionary.org/wiki/monotonic_increasing#targetText=Adjective,contrast%20this%20with%20strictly%20increasing
[2] https://linux.die.net/man/2/clock_gettime
[3] https://en.wikipedia.org/wiki/Lamport_timestamps#Lamport's_logical_clock_in_distributed_systems
[4] https://cwiki.apache.org/confluence/download/attachments/24193445/zookeeper-internals.pdf?version=1&modificationDate=1295034038000&api=v2
[5] http://www.tcs.hut.fi/Studies/T-79.5001/reports/2012-deSouzaMedeiros.pdf
[6] RabbitMQ - Message order of delivery
I can answer with respect to Apache Kafka.
Apache Kafka guarantees strict ordering on a topic per partition: each partition is an immutable sequence of messages appended in strict order.
So with more than one partition, a consumer may consume messages from several partitions, and those cannot be in strict order overall. You can consider the two options below to achieve strict order.
1. If you need one producer's messages in order, use only one partition per topic; the producer then publishes to the same partition in sequence, and consumers consume in strict order.
2. If the producer publishes to multiple partitions, use multiple consumers in a consumer group but assign each consumer a specific partition; consuming from a specific partition guarantees strict order per partition per consumer (see the sketch below).
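A minimal sketch of option 2, assuming Kafka's Java client, a local broker, and a hypothetical 'orders' topic: consumer.assign() pins the consumer to partition 0, and records from that single partition come back in offset order.

```java
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;
import java.time.Duration;
import java.util.*;

public class PartitionPinnedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Manually take partition 0 of the hypothetical "orders" topic; no group rebalancing happens.
            consumer.assign(Collections.singletonList(new TopicPartition("orders", 0)));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // Records from a single partition arrive in offset order.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```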
OK, so I understand that you only get an ordering guarantee per partition.
Just a random thought/question.
Assuming the partitioning strategy is correct and the messages are grouped correctly into the proper partition (or even say we are using 1 partition),
I suppose the producing application must send each message to Kafka one by one and make sure each message has been acked before sending the next one, right?
Yes, you are correct that the order in which the producing application sends the messages dictates the order they are stored in the partition.
Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a message M1 is sent by the same producer as a message M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.
http://kafka.apache.org/documentation.html#intro_guarantees
However, if you have multiple messages in flight simultaneously, I am not sure how order is determined.
You might also want to think about the acks config for your producer. There are failure conditions where a message may go missing if the leader goes down after M1 is published and a new leader receives M2. In that case you won't have an out-of-order condition but a missing message, so it's slightly orthogonal to your original question, but something to consider if message guarantees and order are critical to your application.
http://kafka.apache.org/documentation.html#producerconfigs
Overall, designing a system where small differences in order are not that important can really simplify things.
Synchronously send messages one by one (definitely slow!),
or asynchronously send messages in batches with max.in.flight.requests.per.connection = 1 (see the config sketch below).
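As a sketch of the second option, assuming Kafka's Java client and a hypothetical 'orders' topic; the relevant lines are the acks and max.in.flight settings:

```java
import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class OrderedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                                     // wait for full acknowledgement
        props.put("max.in.flight.requests.per.connection", "1");     // no request overtakes another
        props.put("retries", Integer.toString(Integer.MAX_VALUE));   // retry instead of dropping on transient errors

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 1; i <= 3; i++) {
                // Same key, so all three land on the same partition in send order.
                producer.send(new ProducerRecord<>("orders", "key-1", "message-" + i));
            }
        }
    }
}
```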
Yes, the producer should be single-threaded. If you use multiple producer threads to produce to the same partition, the ordering guarantee on the consumer side is still lost. So an ordering guarantee on the same partition implicitly also means a single producer thread.
There are two strategies for sending messages in Kafka: synchronous and asynchronous.
With the synchronous type, the producer intuitively sends messages one by one to the target partition, so message order is guaranteed.
With the asynchronous type, messages are sent using batching: if M1 is sent before M2, M1 is accumulated in memory first, and then M2. When the producer sends the batch of messages in a single request, the order of the messages is therefore preserved.
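A minimal sketch of the synchronous style, assuming Kafka's Java client and a hypothetical 'orders' topic: blocking on each send's Future means the previous message is acknowledged before the next one goes out.

```java
import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class SyncSender {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (String msg : new String[]{"1", "2", "3"}) {
                // .get() blocks until this message is acknowledged, so sends never overlap.
                RecordMetadata meta = producer.send(new ProducerRecord<>("orders", msg)).get();
                System.out.println("acked at offset " + meta.offset());
            }
        }
    }
}
```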
As per the Apache Kafka documentation, message ordering can only be achieved within a partition (or with one partition per topic). In that case, what parallelism benefit are we getting? Isn't that equivalent to a traditional MQ?
In Kafka the parallelism is equal to the number of partitions for a topic.
For example, assume that your messages are partitioned based on user_id and consider 4 messages having user_ids 1,2,3 and 4. Assume that you have an "users" topic with 4 partitions.
Since partitioning is based on user_id, assume that the message with user_id 1 goes to partition 1, the message with user_id 2 goes to partition 2, and so on.
Also assume that you have 4 consumers for the topic. Since there are 4 consumers, Kafka assigns each consumer one partition, so as soon as the 4 messages are pushed, they are immediately consumed.
If you had 2 consumers for the topic instead of 4, then each consumer will be handling 2 partitions and the consuming throughput will be almost half.
To completely answer your question,
Kafka only provides a total order over messages within a partition, not between different partitions in a topic.
That is, if consumption is very slow on partition 2 and very fast on partition 4, the message with user_id 4 will be consumed before the message with user_id 2. This is how Kafka is designed.
I decided to move my comment to a separate answer as I think it makes sense to do so.
While John is 100% right about what he wrote, you may want to rethink your problem. Do you really need ALL messages to stay in order? Or do you only need all messages for a specific user_id (or whatever key) to stay in order?
If the first, then there's not much you can do: use 1 partition and lose all parallelism.
But in the second case, you might consider partitioning your messages by some key, so that all messages for that key arrive at one partition (they might actually move to another partition if you resize the topic, but that's a different case), which guarantees that all messages for that key are in order.
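For example, here is a sketch of keying by user_id, assuming Kafka's Java client and the 'users' topic from the earlier answer: giving each record the user_id as its key makes the default partitioner hash all of one user's messages to the same partition.

```java
import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class KeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption: local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int userId = 1; userId <= 4; userId++) {
                // The key (user_id) is hashed to pick the partition, so all events
                // for one user land on the same partition and stay in order.
                producer.send(new ProducerRecord<>("users", String.valueOf(userId), "event-for-user-" + userId));
            }
        }
    }
}
```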
In Kafka, messages with the same key from the same producer are delivered to the consumer in order.
On top of that, data within a partition is stored in the order in which it is written; therefore, data read from a partition is read in order for that partition.
So if you want your messages in order across multiple partitions, you really need to group your messages by a key, so that messages with the same key go to the same partition and, within that partition, the messages are ordered.
In a nutshell, you need to design a two-level solution like the above to get messages ordered across multiple partitions.
You may consider having a field that carries the timestamp/date of when the dataset was created at the source.
Once the data is consumed, you can load it into a database. The data then needs to be sorted at the database level before the dataset is used for any use case. Well, this is an attempt to help you think in multiple ways.
Let's say the message key is the timestamp generated when the data is created, and the value is the actual message string.
As each message is picked up by the consumer, it is written into HBase with the RowKey as the Kafka key and the value as the Kafka value.
Since HBase is a sorted map, having the timestamp as the key automatically sorts the data in order. You can then serve the data from HBase to the downstream apps.
This way you are not losing the parallelism of Kafka. You also have the option of sorting and applying further processing logic to the data at the database level.
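A rough sketch of that consumer-to-HBase step, assuming the HBase Java client, a hypothetical 'events' table with column family 'cf', and that the Kafka key is the creation timestamp; batching and error handling are omitted.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseSink {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();             // assumes hbase-site.xml on the classpath
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("events"))) {   // hypothetical table
            writeRecord(table, "1580000000000", "some-kafka-value");
        }
    }

    // RowKey = Kafka key (the creation timestamp), value = Kafka value;
    // HBase keeps rows sorted by RowKey, so reads come back in timestamp order.
    static void writeRecord(Table table, String kafkaKey, String kafkaValue) throws Exception {
        Put put = new Put(Bytes.toBytes(kafkaKey));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("value"), Bytes.toBytes(kafkaValue));
        table.put(put);
    }
}
```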
Note: No distributed message broker guarantees overall ordering. If you insist on that, you may need to rethink your choice of message broker, or use a single partition in Kafka, which is not a good idea. Kafka is all about parallelism, achieved by increasing the number of partitions and consumers.
A traditional MQ works in such a way that once a message has been processed, it is removed from the queue. A message queue allows a bunch of subscribers to pull a message, or a batch of messages, from the end of the queue. Queues usually allow for some level of transaction when pulling a message off, to ensure the desired action was executed before the message is removed.
With Kafka on the other hand, you publish messages/events to topics, and they get persisted. They don’t get removed when consumers receive them. This allows you to replay messages, but more importantly, it allows a multitude of consumers to process logic based on the same messages/events.
You can still scale out to get parallel processing in the same domain, but more importantly, you can add different types of consumers that execute different logic based on the same event. In other words, with Kafka, you can adopt a reactive pub/sub architecture.
ref: https://hackernoon.com/a-super-quick-comparison-between-kafka-and-message-queues-e69742d855a8
Well, this is an old thread, but still relevant, hence decided to share my view.
I think this question is a bit confusing.
If you need strict ordering of messages, then the same strict ordering should be maintained while consuming them; there is no point in ordering messages in the queue but not while consuming them. Kafka offers the best of both worlds: it keeps messages ordered within a partition from production through consumption, while allowing parallelism across multiple partitions. Hence, if you need:
Absolute ordering of all events published on a topic: use a single partition. You will not have parallelism, but you don't need it either (again, parallelism and strict ordering don't go together).
Relative ordering only: go for multiple partitions and consumers, and use consistent hashing (keying) to ensure all messages that need to follow a relative order go to a single partition.