I have a Kafka topic which currently has 3 partitions. I want my consumers to read from the same partition but each message should go to a different consumer in a round-robin fashion. Is it possible to achieve this?
In order to do that, you have to use a consumer group. It's provided out of the box by Kafka. You just have to specify the same group.id for your three consumers.
[edit] But each consumer will read from a different Kafka partition. I think making different consumers from the same group read from the same partition is not possible if you're using only the Kafka API.
See more in the documentation: http://kafka.apache.org/documentation.html#intro_consumers
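Here is a minimal sketch of one such consumer using the plain Java client (the broker address, topic name "myTopic" and group id "my-group" are placeholders). Running three copies of this program forms one consumer group of three members, and Kafka assigns each member one of the three partitions:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // All three consumer processes use the same group.id,
        // so Kafka spreads the topic's partitions across them.
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("myTopic"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Note that each consumer then gets every message of its assigned partition, so the distribution is per partition rather than strictly round-robin per message.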
How about this: at the producer, the messages are routed based on some key. It is possible to route message 1 to partition 1, message 2 to partition 2, and message 3 to partition 3. Then you group three consumers in one consumer group. It is possible to have consumer 1 consume partition 1, consumer 2 consume partition 2, and consumer 3 consume partition 3.
By the way, how to implement it depends on which Kafka client you are using and what the messages are. You should give more details....
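For instance, with the Java producer you can put each message on a specific partition by passing an explicit partition number. Everything below (broker address, topic name, the i % 3 routing rule) is just a placeholder sketch:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RoutingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String[] messages = {"message 1", "message 2", "message 3"};
            for (int i = 0; i < messages.length; i++) {
                // Partitions are 0-indexed: message 1 -> partition 0, message 2 -> partition 1, ...
                int partition = i % 3;
                // The ProducerRecord constructor that takes an explicit partition
                // bypasses the default partitioner entirely.
                producer.send(new ProducerRecord<>("myTopic", partition, Integer.toString(i), messages[i]));
            }
        }
    }
}
```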
What you are saying defeats the purpose of partitions. Partitions are not designed for simple load balancing in Kafka. If you really want that, you have two options.
If you have control over the producer producing to the topic, do a simple mod-3 hash partitioning, so the messages will be distributed equally across the 3 partitions. Now each of your consumers will consume from one partition. This effectively means every third message is read by each consumer. That solves your problem.
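For the first option, a rough sketch of such a partitioner with the Java client (this mirrors what the default partitioner already does for keyed records; the keyless fallback is just a placeholder policy):

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Spreads records over the topic's partitions by hashing the key modulo
// the partition count (3 in the question's setup).
public class Mod3Partitioner implements Partitioner {
    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // placeholder policy for keyless records
        }
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```

It would be registered on the producer through the partitioner.class config property.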
If you cannot control the producer, consume from the topic in the normal way, then write a producer with the same simple mod-3 hash partitioning and produce the messages to a new topic. Consume from that new topic instead; the same thing as in the first case then applies.
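For the second option, a rough sketch of the bridging application, again assuming the Java clients; "originalTopic" and "repartitionedTopic" are made-up names, and the explicit n % 3 partition choice stands in for the mod-3 hash:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RepartitioningBridge {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "localhost:9092");
        cProps.put("group.id", "bridge");
        cProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        cProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "localhost:9092");
        pProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        pProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(Collections.singletonList("originalTopic"));
            long n = 0;
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // Spread records evenly: record n goes to partition n mod 3 of the new topic.
                    int partition = (int) (n++ % 3);
                    producer.send(new ProducerRecord<>("repartitionedTopic", partition, record.key(), record.value()));
                }
            }
        }
    }
}
```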
Let's say I have one consumer group which is subscribed to 4 topics, and the partitions for each topic are:
EDITED:
First topic => 5 partitions
Second topic => 3 partitions
Third topic => 2 partitions
Fourth topic => 1 partition
Total number of partitions = 11. So how many application instances can I run in total?
5 (the max number of partitions among the input topics) or 11?
In Kafka, scaling consumers depends on the number of partitions.
Let's assume you have one topic with 3 partitions, and you have 2 different consumer apps (different consumer groups) which do different work.
You can scale the number of consumers up to 3 per consumer group.
A single consumer (in consumer group A) can consume messages from all 3 partitions.
Two consumers (from the same consumer group) cannot consume a single partition.
Take a look at this image: https://hadoopabcd.files.wordpress.com/2015/04/consumer-group.png
Read more in this blog series about consumer groups: https://dzone.com/articles/understanding-kafka-consumer-groups-and-consumer-l
In an ideal situation, the number of consumers in the consumer group should be equal to the number of partitions. If that is not the case, you can have more than one consumer group; Kafka provides the feature that 2 consumers from different consumer groups can read from the same partition. It totally depends on how many resources you have for running the consumers.
Suppose you have an application that needs to read messages from a Kafka topic, run some validations against them, and write the results to another data store. In this case your application will create a consumer object, subscribe to the appropriate topic, and start receiving messages, validating them and writing the results. This may work well for a while, but what if the rate at which producers write messages to the topic exceeds the rate at which your application can validate them? If you are limited to a single consumer reading and processing the data, your application may fall farther and farther behind, unable to keep up with the rate of incoming messages. Obviously there is a need to scale consumption from topics. Just like multiple producers can write to the same topic, we need to allow multiple consumers to read from the same topic, splitting the data between them.
Kafka consumers are typically part of a consumer group. When multiple consumers are subscribed to a topic and belong to the same consumer group, each consumer in the group will receive messages from a different subset of the partitions in the topic.
Please refer to this https://www.safaribooksonline.com/library/view/kafka-the-definitive/9781491936153/ch04.html
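As a sketch of that scaling idea with the Java client (the topic "events", the group id, and the validation/storage steps are placeholders): each thread owns its own KafkaConsumer, all of them share one group.id, and Kafka splits the topic's partitions among them.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ScaledConsumers {
    public static void main(String[] args) {
        int consumerCount = 3; // more than the topic's partition count brings no benefit
        ExecutorService pool = Executors.newFixedThreadPool(consumerCount);
        for (int i = 0; i < consumerCount; i++) {
            pool.submit(ScaledConsumers::runConsumer);
        }
    }

    private static void runConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "validator-group"); // shared group id splits the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // KafkaConsumer is not thread-safe, so each thread creates its own instance.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // Validate the record and write the result elsewhere -- omitted here.
                    System.out.printf("%s got %s from partition %d%n",
                            Thread.currentThread().getName(), record.value(), record.partition());
                }
            }
        }
    }
}
```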
I am trying to develop a better understanding of how Kafka works. To keep things simple, currently I am running Kafka with one ZooKeeper node, 3 brokers, and one partition with a replication factor of 3. I learned that, in general, it's better to have the number of partitions ~= the number of consumers.
Question 1: Do topics share offsets in the same partition?
I have multiple topics (e.g. dogs, cats, dinosaurs) on one partition (e.g. partition 0). Now my producers have produced a message to each of the topics: "msg: bark" to dogs, "msg: meow" to cats and "msg: rawr" to dinosaurs. I noticed that if I specify dogs[0][0], I get back bark, and if I do the same on cats and dinosaurs, I do get back each message respectively. This is an awesome feature, but it contradicts my understanding. I thought an offset is specific to a partition. If I have pushed three messages into a partition sequentially, shouldn't the messages be indexed with 0, 1, and 2? Now it seems to me that the offset is specific to a topic.
This is how I imagined it
['bark', 'meow', 'rawr']
In reality, it looks like this
['bark']
['meow']
['rawr']
But that can't be it. There must be something keeping track of offset and the actual physical location of where the message is in the log file.
Question 2: How do you manage your messages if you were to have multiple partitions for one topic?
In question 1, I had multiple topics in one partition; now let's say I have multiple partitions for one topic. For example, I have 4 partitions for the dogs topic and I have 100 messages to push to my Kafka cluster. Do I distribute the messages evenly across partitions, like 25 go in partition 1, 25 go in partition 2 and so on...?
If a consumer wants to consume all those 100 messages at once, he/she needs to hit all four partitions. How is this different from hitting 1 partition with 100 messages? Does network bandwidth impose a bottleneck?
Thank you in advance
For your question 1: It is impossible to have multiple topics on one partition. A partition is conceptually part of a topic. You can have 3 topics where each of them has only one partition, so you have 3 partitions in total. That explains the behavior that you observed.
For your question 2: At the producer side, if a valid partition number is specified, that partition will be used when sending the record. If no partition is specified but a key is present, a partition will be chosen using a hash of the key. If neither key nor partition is present, a partition will be assigned in a round-robin fashion. Now, the number of partitions decides the max parallelism. There is a concept called a consumer group, which can have multiple consumers in the same group consuming the same topic. In the example you gave, if your topic has only one partition, the max parallelism is one and only one consumer in the consumer group will receive messages (100 of them). But if you have 4 partitions, you can have up to 4 consumers, one for each partition, and each receives 25 messages.
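To make those producer-side rules concrete, a small sketch with the Java client (broker address and keys are placeholders; the topic name dogs is borrowed from the question):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PartitionChoiceDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // 1. Explicit partition: always lands on partition 2.
            producer.send(new ProducerRecord<>("dogs", 2, "rex", "bark"));

            // 2. Key but no partition: the hash of "rex" decides the partition,
            //    so every record keyed "rex" goes to the same partition.
            producer.send(new ProducerRecord<>("dogs", "rex", "bark"));

            // 3. Neither key nor partition: the producer spreads records across
            //    partitions itself (round-robin in older clients, sticky batching
            //    in newer ones).
            producer.send(new ProducerRecord<>("dogs", "bark"));
        }
    }
}
```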
Just wanna understand the basics properly.
Let's say I've a topic called "myTopic" that has 3 partitions P0, P1 & P2.
Each of these partitions will have a leader and the data (messages) for this topic is distributed across these partitions.
1. The producer always writes to the leader of the partition, in a round-robin fashion based on the load on the broker. Is that right?
2. How does the producer know the leader of the partition?
3. A consumer reading a particular topic should read all partitions of that topic? Is that correct?
Appreciate your help.
The producer always writes to the leader of the partition, in a round-robin fashion based on the load on the broker. Is that right?
By default, yes.
That said, a producer can also decide to use a custom partitioning scheme, i.e. a different strategy for choosing which partitions data is written to.
How does the producer know the leader of the partition?
Through the Kafka protocol.
A consumer reading a particular topic should read all partitions of that topic? Is that correct?
By default, yes.
That said, you can also build consumer applications that implement custom logic, e.g. a "sampling" consumer that only reads from 1 out of N partitions.
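A sketch of such a sampling consumer with the Java client, assuming we only want partition 0 of a placeholder topic "myTopic":

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SamplingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("enable.auto.commit", "false"); // no group.id is set, so don't commit offsets
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() bypasses group management entirely: this consumer reads
            // only partition 0 of "myTopic" and ignores the other partitions.
            consumer.assign(Collections.singletonList(new TopicPartition("myTopic", 0)));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```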
The producer always writes to the leader of the partition
Yes, always.
in a round-robin fashion based on the load on the broker
No. If a partition is explicitly set on a ProducerRecord, then that partition is used. Otherwise, if a custom partitioner implementation is provided, that determines the partition. Otherwise, if the message key is not null, the hash of the key will be used to consistently send messages with the same key to the same partition. Only if the message key is null will the message indeed be sent to the partitions in a round-robin fashion. However, this is irrespective of the load on the broker.
How does the producer know the leader of the partition?
By periodically asking the broker for metadata.
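If you want to look at that metadata yourself, the Java producer exposes it through partitionsFor(); a small sketch (topic name and broker address are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.PartitionInfo;

public class LeaderLookup {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // partitionsFor() triggers a metadata fetch and returns, for each
            // partition, the current leader and replica brokers.
            for (PartitionInfo info : producer.partitionsFor("myTopic")) {
                System.out.printf("partition %d -> leader %s%n", info.partition(), info.leader());
            }
        }
    }
}
```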
A consumer reading a particular topic should read all partitions of that topic? Is that correct?
Consumers form consumer groups. If there are multiple consumer instances in a consumer group, each consumes a subset of the partitions. But the consumer group as a whole consumes from all partitions. That is, unless you decide to go "low-level" and manage that yourself, which you can do.
I would like some information/explanation on how Kafka maintains message ordering when messages are written to a topic with multiple partitions.
For example, I have multiple message producers, each producing messages sequentially and writing to a Kafka topic with more than 1 partition. In this case, how will a consumer group work to consume the messages?
Kafka only provides a total order over records within a partition, not between different partitions in a topic. Even within one partition, you could still encounter out-of-order events if retries are enabled and max.in.flight.requests.per.connection is larger than 1.
A work-around is to create a topic with only one partition, although that means only one consumer process per consumer group.
Kafka will store messages in the partitions according to the message key given to the producer. If none is given, then the messages will be written in a round-robin style into the partitions. To keep ordering for a topic, you need to make sure the ordered sequence has the same key, or that the topic has only one partition.
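For example, with the Java producer, something along these lines keeps one key's records in order on one partition (topic name and key are placeholders; whether you need the in-flight/retry settings depends on your client version):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Avoid reordering on retries: allow only one in-flight request
        // (alternatively, enable.idempotence=true makes retries safe).
        props.put("retries", "3");
        props.put("max.in.flight.requests.per.connection", "1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key => same partition => total order for this sequence of events.
            for (int i = 0; i < 5; i++) {
                producer.send(new ProducerRecord<>("orders", "customer-42", "event-" + i));
            }
        }
    }
}
```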
In the Kafka documentation:
Kafka handles this differently. Our topic is divided into a set of totally ordered partitions, each of which is consumed by one consumer at any given time. This means that the position of a consumer in each partition is just a single integer, the offset of the next message to consume. This makes the state about what has been consumed very small, just one number for each partition. This state can be periodically checkpointed. This makes the equivalent of message acknowledgements very cheap.
Yet, following their quick start guide in that same document, I was easily able to:
Create a topic with a single partition
Start a console-producer
Push a few messages
Start a consumer to consume --from-beginning
Start another consumer --from-beginning
And have both consumers successfully consume from the same partition.
But this seems at odds with the documentation above?
When using different consumer groups, consumers can consume the same partitions easily. You may consider group ids as different applications consuming a Kafka topic. Multiple different applications might want to use the data in a Kafka topic differently and thus not conflict with other applications. That's why two consumers may consume one partition (in fact, this is the only way two consumers can consume the same partition).
And when you start a console consumer, it randomly generates a group id for it (link), thus these consumers are doing exactly what I just described.
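You can reproduce that console behaviour with the Java client by giving each run its own group id and reading from the earliest offset (broker address and topic name are placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class IndependentReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // A fresh, random group id per run, like the console consumer does;
        // each run therefore re-reads the whole (single-partition) topic.
        props.put("group.id", "console-like-" + UUID.randomUUID());
        props.put("auto.offset.reset", "earliest"); // the --from-beginning equivalent
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("myTopic"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```

Start two copies of this and both will print every message of the single-partition topic, because each copy is its own consumer group.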