Can multiple consumers consume at the same time? - apache-kafka

I am new to Kafka. I wanted to know whether multiple consumers can consume data from a topic at the same time and process it simultaneously, or whether the records are consumed and processed in batches?

Kafka Consumers group themselves under a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.
If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
So yes, multiple consumers can consume from a topic at the same time; the processing of the data can be done simultaneously or in batches, depending on your requirements.
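As a minimal sketch (the broker address, topic name and group.id below are placeholders, not anything from the question): every process that runs this code with the same group.id is assigned its own share of the topic's partitions and processes those records concurrently with the other instances.

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class OrderWorker {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "order-processors");          // same group.id in every instance -> load balancing
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));           // placeholder topic
                while (true) {
                    // Each instance only sees the partitions assigned to it, so several
                    // instances can consume and process different records at the same time.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                                record.partition(), record.offset(), record.value());
                    }
                }
            }
        }
    }

Start the same program with a different group.id and that instance will instead receive every record, which is the broadcast case described above.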

Related

What is the need for consumer groups in Kafka?

I don't understand the practical use case of the consumer group in Kafka.
A partition can be read by only one consumer in a consumer group, so each consumer reads only a subset of the topic's records.
Can someone help with any practical scenario where the consumer group helps?
It's for parallel processing of the messages from a given topic.
Consumers label themselves with a consumer group name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.
If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
Read more here:
https://docs.confluent.io/5.3.3/kafka/introduction.html#consumers

Can consumer groups span multiple servers?

When creating a consumer group in Kafka, does it create a pool of workers that run on the same JVM process or could a consumer group span multiple computers/nodes?
If it spans multiple computers then keeping track of offsets etc. will be hard.
First of all, you don't create consumer groups directly. You just create consumers, and consumers that have the same group.id form a consumer group. When multiple consumers are subscribed to a topic and belong to the same consumer group, each consumer in the group receives messages from a different subset of the partitions in the topic.
Of course, you can create these consumers on different servers, and that is the recommended approach for load balancing.
Kafka stores the offsets for each consumer group in a topic named __consumer_offsets, so keeping track of the offsets is not that hard. You can check the consumer offsets for a consumer group with a command like this:
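(Assuming a broker at localhost:9092 and a group named my-group, both placeholders:)

    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group

The output shows, per partition, the group's current offset, the log-end offset and the lag.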
"does it create a pool of workers that run on the same jvm process or could a consumer group span multiple computers/nodes?"
It depends on how many JVM processes you create for your consumer group. And yes, it can span multiple computers/nodes. Kafka's group coordinator will then assign the topic's partitions to the individual consumers. Note that a single TopicPartition can be consumed by at most one consumer (JVM process or thread) within the same consumer group.
"If it spans multiple computers then keeping track of offsets etc. will be hard."
Kafka makes this easy by centrally storing all the metadata and progress of each consumer group in an internal topic called "__consumer_offsets", which is available across the entire cluster, as long as all nodes belong to the same cluster.

Consuming from a single Kafka partition by multiple consumers

I read the following in the Kafka docs:
The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time.
Kafka only provides a total order over records within a partition, not between different partitions in a topic.
Per-partition ordering combined with the ability to partition data by key is sufficient for most applications.
However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
I read the following on this page:
Consumers read from any single partition, allowing you to scale throughput of message consumption in a similar fashion to message production.
Consumers can also be organized into consumer groups for a given topic — each consumer within the group reads from a unique partition and the group as a whole consumes all messages from the entire topic.
If you have more consumers than partitions then some consumers will be idle because they have no partitions to read from.
If you have more partitions than consumers then consumers will receive messages from multiple partitions.
If you have equal numbers of consumers and partitions, each consumer reads messages in order from exactly one partition.
Doubts
Does this mean that a single partition cannot be consumed by multiple consumers? Can't we have a single partition and a consumer group with more than one consumer, and make them all consume from that single partition?
If a single partition can be consumed by only a single consumer, why is this the design decision?
What if I need a total order over records and still need them to be consumed in parallel? Is that undoable in Kafka, or does such a scenario not make sense?
Within a consumer group, at any time a partition can only be consumed by a single consumer. No, you can't have two consumers within the same group consuming from the same partition at the same time.
Kafka consumer groups allow multiple consumers to "sort of" behave like a single entity. The group as a whole should consume each message only once. If multiple consumers in a group were to consume the same partitions, those records would be processed multiple times.
If you need to consume a partition multiple times, be sure these consumers are in different groups.
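A minimal sketch of that idea, with placeholder broker, topic and group names: two consumers subscribed to the same single-partition topic, but in different groups, so each of them receives every record.

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class TwoGroupsOnePartition {
        static KafkaConsumer<String, String> buildConsumer(String groupId) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", groupId);                     // a different group per consumer
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("auto.offset.reset", "earliest");
            return new KafkaConsumer<>(props);
        }

        public static void main(String[] args) {
            try (KafkaConsumer<String, String> a = buildConsumer("group-a");
                 KafkaConsumer<String, String> b = buildConsumer("group-b")) {
                a.subscribe(List.of("single-partition-topic"));  // placeholder topic
                b.subscribe(List.of("single-partition-topic"));
                // Because the groups are different, both consumers are assigned the one
                // partition and both read the full stream of records independently.
                ConsumerRecords<String, String> fromA = a.poll(Duration.ofSeconds(5));
                ConsumerRecords<String, String> fromB = b.poll(Duration.ofSeconds(5));
                System.out.printf("group-a saw %d records, group-b saw %d records%n",
                        fromA.count(), fromB.count());
            }
        }
    }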
When processing needs to happen in order (serially), at any time there's only a single task to do. If you have records 1, 2 and 3 and want to process them in order, you cannot do anything else until record 1 has been processed; the same holds for records 2 and 3. So what would you do in parallel?

Using assign instead of subscribe on the Kafka consumer side

I have 1000 web servers and all are interested in messages from a topic. I am thinking of writing specific data to a particular partition of a topic, and 1000+ servers are interested in the data in that particular partition. How good is it to implement assign instead of subscribe? How scalable is this approach? Can I assign 1000+ consumers to read data from a particular partition?
In Kafka, every consumer belongs to a consumer group. When a Kafka producer sends messages to a topic, the records of each partition are delivered to a single consumer within each group.
If the number of partitions is greater than the number of consumers, then some consumers will consume data from more than one partition. On the other hand, if the number of consumers is greater than the number of partitions, some consumers will be inactive as they will receive no data.
You cannot have multiple consumers (within the same consumer group) consuming data from a single partition. Therefore, in order to consume data from the same partition using N consumers, you'd need to create N distinct consumer groups too.
Note that partitioning enhances the parallelism within a Kafka cluster. If you create thousands of consumers to consume data from only one partition, I suspect that you will lose some level of parallelism.
Subscribe vs Assign
Subscribe makes use of the consumer group: the group coordinator sends assignments to the consumers, and the partitions of the subscribed topics are distributed across the instances within that group.
Assign forces the consumer onto an explicit list of partitions (TopicPartitions), bypassing group management.
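A hedged sketch of the difference, with placeholder broker, topic and partition values: subscribe() lets the group coordinator distribute partitions among the group's instances, while assign() pins the consumer to explicit TopicPartitions and skips group coordination entirely.

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import java.util.List;
    import java.util.Properties;

    public class SubscribeVsAssign {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");    // placeholder broker
            props.put("group.id", "my-group");                   // used by subscribe() for coordination
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            // subscribe(): the coordinator decides which partitions of "my-topic" this instance gets,
            // and rebalances them when consumers join or leave the group (assignment happens on poll()).
            KafkaConsumer<String, String> dynamic = new KafkaConsumer<>(props);
            dynamic.subscribe(List.of("my-topic"));

            // assign(): this consumer always reads partition 0 of "my-topic";
            // no group coordination and no rebalancing happen for it.
            KafkaConsumer<String, String> manual = new KafkaConsumer<>(props);
            manual.assign(List.of(new TopicPartition("my-topic", 0)));

            dynamic.close();
            manual.close();
        }
    }

Note that many independent readers of one partition each still need their own group.id (or their own offset management) so that their committed offsets don't overwrite each other.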

If you have fewer consumers than partitions, what happens?

If you have fewer consumers than partitions, does that simply mean you will not consume all the messages on a given topic?
In a cloud environment, how are you supposed to keep track of how many consumers are running and which ones are pointing to a given topic#partition?
What if you have multiple consumers on a given topic#partition? I guess the consumer has to somehow keep track of which messages it has already processed, in case of duplicates?
In fact, each consumer belongs to a consumer group. When the Kafka cluster sends data to a consumer group, all records of a partition are sent to a single consumer in the group.
If there are more partitions than consumers in a group, some consumers will consume data from more than one partition. If there are more consumers in a group than partitions, some consumers will get no data. If you add new consumer instances to the group, they will take over some partitions from the old members. If you remove a consumer from the group (or the consumer dies), its partitions will be reassigned to another member.
Now let's take a look at your questions:
If you have fewer consumers than partitions, does that simply mean you will not consume all the messages on a given topic?
NO. Some consumers in the same consumer group will consume data from more than one partition.
In a cloud environment, how are you supposed to keep track of how many consumers are running and which ones are pointing to a given topic#partition?
Kafka will take care of it. If new consumers join the group, or old consumers die, Kafka will rebalance the partitions among the members.
What if you have multiple consumers on a given topic#partition?
You CANNOT have multiple consumers (in a consumer group) consuming data from a single partition. However, if there is more than one consumer group, the same partition can be consumed by one (and only one) consumer in each group.
1) No, that means you will have one consumer handling more than one partition.
2) Kafka never assigns the same partition to more than one consumer in the same group, because that would violate the ordering guarantee within a partition.
3) You could implement a ConsumerRebalanceListener in your client code, which gets called whenever partitions are assigned to or revoked from a consumer.
You might want to take a look at this article, specifically the "Assigning partitions to consumers" part. In it I have a sample where you create a topic with 3 partitions and then a consumer with a ConsumerRebalanceListener telling you which consumer is handling which partition. You could play around with it by starting one or more consumers and seeing what happens. The sample code is on GitHub.
http://www.javaworld.com/article/3066873/big-data/big-data-messaging-with-kafka-part-2.html
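As a hedged illustration (not the article's exact sample; broker, group and topic names are placeholders), a ConsumerRebalanceListener that simply logs what this instance owns could look like this:

    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;

    public class RebalanceLogger {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "rebalance-demo");            // placeholder group
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("demo-topic"), new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                        System.out.println("Revoked: " + partitions);   // a good place to commit offsets
                    }
                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                        System.out.println("Assigned: " + partitions);  // which partitions this instance now handles
                    }
                });
                while (true) {
                    consumer.poll(Duration.ofSeconds(1));  // polling keeps this instance alive in the group
                }
            }
        }
    }

Starting one, two or three instances of this against a 3-partition topic shows how the assignments shift as members join and leave.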