I know that if we have multiple partitions and roughly the same number of consumers in the consumer group, then processing speeds up. If we want to maintain the ordering of events and process each event as it is received, how can we achieve this with multiple partitions and consumers?
In my use case, processing the events in order is extremely critical; otherwise the system will fall apart. I wanted to use multiple partitions to increase parallelism, but somehow still 'get them in order'.
Simplest answer: you can't.
Once you split data across partitions, you can't guarantee a total order across the topic when consuming (even with a single consumer).
Isn't there some logic by which you can shard the data into partitions, so that messages that have to be consumed in order end up in the same partition?
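That is in fact what keyed records give you: Kafka's default partitioner hashes the record key, so all records with the same key land in the same partition and keep their relative order. A minimal sketch (the broker address, topic orders, and the order-42 key scheme are all hypothetical):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All events for order-42 share the key "order-42", so they hash to the
            // same partition and are read back in the order they were produced.
            producer.send(new ProducerRecord<>("orders", "order-42", "created"));
            producer.send(new ProducerRecord<>("orders", "order-42", "paid"));
            producer.send(new ProducerRecord<>("orders", "order-42", "shipped"));
        }
    }
}
```

Ordering is then guaranteed per key, not across the whole topic; consumers of other partitions proceed in parallel.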
Need help with the best design for creating Kafka consumers.
We will have multiple topics, which can be grouped, for example:
10 topics used to send out emails (10 because we expect growing client traffic and want to dedicate one topic per client, so that no client is delayed by or has to wait on the others)
10 topics to process business logic, with the same reasoning for the count as above.
Now, with this usage, what's the best way to design the Kafka consumers? A consumer dedicated to each topic? Or is there a way to scale consumers dynamically by passing in which topic each one needs to subscribe to? We will certainly be deploying this in containers, but I want suggestions on how to get started on the consumer side with dynamic scalability and common code. And what's the best technology for implementing this type of Kafka consumer (dotnet/java/python)?
Also, please suggest whether partitions make sense in this kind of design, so that we can leverage consumer groups.
Consumers belonging to the same consumer group are assigned partitions of a topic.
In Kafka, a topic can have multiple partitions. Consumers read the messages of a particular topic from their assigned partition(s). The messages in a partition are ordered by sequential offsets.
Now, if topic-wide record order is not important, you generally want to start with a higher number of partitions in a topic. Let's say you start with 100 partitions. Your data will be distributed across the 100 partitions of the topic, assuming null keys or at least 100 unique key values with non-colliding hashes, since record keys determine partitioning. If topic-wide order is important, you're limited to one partition, and therefore one consumer thread; however, this thread can separate consumption from processing by loading records into an alternative data structure (a queue) for processing.
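As a rough sketch of that last pattern (the broker address, group id, and single-partition topic name events are assumptions), one thread polls the partition and hands records to a queue, and a single worker drains the queue in FIFO order, so processing preserves the partition's order:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrderedHandoffSketch {
    public static void main(String[] args) {
        BlockingQueue<ConsumerRecord<String, String>> queue = new ArrayBlockingQueue<>(1000);

        // Single processing thread: draining the queue in FIFO order keeps
        // records in the order the single partition delivered them.
        Thread processor = new Thread(() -> {
            try {
                while (true) {
                    ConsumerRecord<String, String> record = queue.take();
                    System.out.println("processing " + record.value());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        processor.start();

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "ordered-group");           // hypothetical group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // single-partition topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    queue.put(record); // consumption decoupled from processing
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The single worker is what preserves order; adding more workers on the queue would reintroduce the ordering problem.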
You can now have 10 consumers consuming from the 100 partitions. Each consumer will be assigned about 10 partitions, and it will consume messages from them in a round-robin fashion.
If you want to scale out, you simply increase the number of consumers. If you double the number of consumers to 20, then each consumer will process 5 partitions, roughly doubling throughput (assuming consumption, not the broker, is the bottleneck).
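Here's a minimal sketch of such a consumer (the broker address, group id my-processing-group, and topic name are placeholders). Scaling from 10 to 20 consumers is just starting more copies of this process with the same group.id; the group rebalances the partitions among them automatically:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed local broker
        props.put("group.id", "my-processing-group");      // same id in every instance
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // hypothetical topic
            while (true) {
                // Each instance only sees records from its assigned partitions.
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
                }
            }
        }
    }
}
```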
I have a standalone Kafka setup with a single disk, and I am planning to stream over a million records. How do I decide the number of partitions for my topic for better throughput? Does it have to be 1 partition?
Is it recommended to have multiple partitions for a topic on a standalone Kafka server?
Yes, you need multiple partitions even for a single-node Kafka cluster. That is because you can only have as many active consumers in a group as you have partitions. If you have a single partition, then you can only have a single consumer, and that will limit throughput, especially if you want to stream millions of rows (although the time period for those is not specified).
The only real downside is that messages are only consumed in order within the same partition. Other than that, you should go with multiple partitions. You will need to estimate the throughput of a single consumer in order to calculate the number of partitions, then maybe add one or two on top of that.
You can still add partitions later, but it's probably better to start with the right amount and adjust later as you learn more or as your volume increases/decreases.
There are two main factors to consider:
Number of producers and consumers
Within a consumer group, each partition is consumed by at most one consumer (producers, by contrast, can write to any number of partitions). For this reason, the number of partitions must be at least the number of consumers you want to run in parallel, or some consumers will sit idle.
Throughput
You must determine the throughput to calculate how many consumers should be in the consumer group: the combined read capacity of the consumers should be at least as high as the combined write capacity of the producers. For example (hypothetical numbers), if producers write 50 MB/s in aggregate and a single consumer can process 5 MB/s, you need at least 10 consumers in the group, and therefore at least 10 partitions.
From my understanding, partitions and consumers are tied together in a 1:1 relationship in which a single consumer processes a partition. However, is there a way to repartition in the middle of processing?
We are currently trying to optimize a process in which the topic gets consumed across a group, but there are cases in which the data processing takes longer on a certain consumer while others are already idle. It's like data cleansing, where a certain partition might no longer need cleansing while others require fuzzy matching, thereby adding complexity to the task a consumer performs.
Your understanding with regard to partitions and consumers is not quite right.
If you have N partitions, then you can have up to N consumers within the same consumer group, each reading from a single partition. When you have fewer consumers than partitions, some of the consumers will read from more than one partition. And if you have more consumers than partitions, some of the consumers will be inactive and will receive no messages at all.
If you have one consumer per partition, some of the partitions might receive more messages than others, which is why some of your consumers might be idle while others are still processing messages. Note that messages are not always distributed to topic partitions in a round-robin fashion: messages with the same key are placed into the same partition.
In Kafka, topics are partitioned, and even though you can add partitions to a topic, there is no repartitioning: all the data already written to a partition stays there, and new data is partitioned among the existing partitions (in a round-robin fashion if you do not define keys; otherwise a given key will always land in the same partition, as long as you do not add partitions).
But if you have a consumer group, and you add or remove consumers from this group, there is a group rebalance in which each consumer receives its share of partitions to consume from exclusively.
So if you have 3 partitions (with messages evenly distributed among them) and 2 consumers (in the same group), one consumer will have twice as many messages to handle as the other; with 3 consumers, each one will consume one partition; with 4 consumers, one will stay idle...
So since you already have evenly distributed messages (which is good), you should have as many consumers as you have partitions, and if it is still not fast enough you can add n partitions and n consumers. (You could of course also try to optimize the consumer, but that is another story...)
Added to answer a comment:
Once a consumer -- from a given group -- is consuming a partition, it will continue to do so and will be the only one from that group consuming this partition, even if many other consumers from the same group are idle. Within one group, a partition is never shared between consumers. (If the consumer crashes, another one will continue the work, and if a new consumer enters the group a rebalance will occur, but in any case only one consumer works on a given partition at a given time.)
So one approach, as you said in your comment, would be to distribute the load evenly over the partitions. Another approach would be to have a topic dedicated to expensive jobs, with many partitions and many consumers, and let the topic for non-expensive jobs have fewer consumers.
The last approach, which I would not recommend, would be to forgo the consumer group features and manage yourself how you consume from Kafka, using the assign and seek methods of the consumer (see the KafkaConsumer JavaDoc for more information). Spark Structured Streaming, for example, uses that approach, but it is much more complex...
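For illustration only, a rough sketch of that assign/seek style (the broker address, topic jobs, and starting offset 42 are made up); note that assign() bypasses the group coordinator entirely, so you also take over offset management:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignSeekSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("enable.auto.commit", "false");         // no group: manage offsets yourself
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("jobs", 0); // hypothetical topic
            consumer.assign(Collections.singletonList(partition));    // no group rebalancing
            consumer.seek(partition, 42L);                            // arbitrary starting offset
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```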
Currently I have only one partition for my topic and only one consumer.
But my consumer runs slowly, so I want to add new consumers to the same topic.
At the same time, I also want all the consumers to process exclusive sets of messages.
I am not able to determine how many partitions I should create so that I can add consumers whenever I want to increase throughput while still processing exclusive sets of messages.
If you have N partitions for a topic, you can start a consumer group with N consumers, and they will each get to work on one of the partitions.
Each consumer will get a disjoint set of messages that way (unless there is fail-over involved, in which case some messages may be delivered more than once), and if the workload is evenly spread among the partitions, this will load-balance properly.
If you have fewer than N consumers, some of them will get more than one partition, but they will still not share partitions and each will get roughly the same number of partitions.
If you have more than N consumers, some will stay idle (but can act as hot standbys in case one of the others drops out).
The key to all this is that you put these consumers into the same consumer group. That is the way to indicate to Kafka that they should divide the partitions between them. If you started N consumer groups instead, each group would get all messages in parallel (each message would be processed N times).
The main downside to partitioning is that you lose the total ordering of messages: messages are delivered in order only with respect to other messages in the same partition. This may or may not be a problem for your application (and if it is, you can re-arrange things in your application by looking at timestamps).
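One hypothetical way to do that timestamp-based re-arranging (purely an illustration, not a standard Kafka facility) is a small bounded buffer that always releases the oldest record first; records delayed longer than the buffer window can still emerge out of order:

```java
import java.util.Comparator;
import java.util.PriorityQueue;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Hypothetical best-effort reordering buffer: holds up to `capacity` records
// and always releases the one with the smallest timestamp first.
public class ReorderBuffer<K, V> {
    private final PriorityQueue<ConsumerRecord<K, V>> heap =
            new PriorityQueue<>(Comparator.comparingLong((ConsumerRecord<K, V> r) -> r.timestamp()));
    private final int capacity;

    public ReorderBuffer(int capacity) {
        this.capacity = capacity;
    }

    /** Add a record; once the buffer is full, returns the oldest buffered record, else null. */
    public ConsumerRecord<K, V> offer(ConsumerRecord<K, V> record) {
        heap.add(record);
        return heap.size() > capacity ? heap.poll() : null;
    }

    /** Drain remaining records in timestamp order at shutdown. */
    public ConsumerRecord<K, V> drain() {
        return heap.poll();
    }
}
```

You would feed records from poll() through offer() and process whatever it releases; the capacity trades memory and latency against how much disorder the buffer can absorb.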
1. Consuming concurrently on the same topic and same partition
Suppose I have 100 partitions for a given topic (e.g. Purchases). I can easily consume these 100 partitions (e.g. Electronics, Clothing, etc.) in parallel using a consumer group with 100 consumers in it.
However, that assigns one consumer to each subset of the total data on Purchases. What if I just want to consume one subset of the data with 100 consumers concurrently? For example, all of my consumers only want to know about the Electronics partition of the Purchases topic.
Is there a way they can consume this partition concurrently?
In general I just want all my consumers to receive the same data set concurrently.
From the information I've gathered, it seems that consumers CANNOT consume from replicas (see: Consuming from a replica).
Can I produce the same data to multiple topics, like Purchase-1[Electronics] and Purchase-2[Electronics], so that I can then consume them concurrently? Is this a recommended approach?
2. Producing concurrently on the same topic and same partition
When multiple producers are producing to the same topic and same partition, since we can only write to the partition leader and replicas are only there for fault tolerance, does this mean there isn't any concurrency? (i.e. must each commit wait in line?)
If those 100 consumers belong to different consumer groups, they can consume from the same topic and partition simultaneously. In that case, you need to make sure each consumer is able to handle the load from the 100 partitions.
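A minimal sketch of that setup (the broker address and offset-reset policy are assumptions): giving every consumer its own unique group.id turns consumption into a broadcast, so each instance receives the full Purchases stream:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BroadcastConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // assumed broker
        props.put("group.id", "viewer-" + UUID.randomUUID()); // unique group per consumer
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Because every instance has its own group, each one receives ALL records
        // on the topic instead of a share of them.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("Purchases"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```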
Producers can produce to the same topic partition at the same time, but the actual order of messages written to the partition is determined by the partition leader.
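To illustrate (the topic name, fixed partition 0, and thread counts are arbitrary): KafkaProducer is documented as thread-safe, so multiple threads can share one instance and send to the same partition concurrently; sends are batched and pipelined client-side, and the partition leader serializes the final write order:

```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ConcurrentProducersSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // KafkaProducer is thread-safe; its docs recommend sharing one instance.
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            final int thread = t;
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) {
                    // All threads target partition 0; the leader fixes the final order.
                    producer.send(new ProducerRecord<>("Purchases", 0,
                            "key-" + thread, "value-" + i));
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        producer.close();
    }
}
```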
If you want to consume from a single partition in parallel, use something like the Parallel Consumer (PC).
By using PC, you can process all your keys in parallel, regardless of how long processing takes, and you can be as concurrent as you wish.
PC solves this directly by sub-partitioning the input partitions by key and processing each key in parallel.
It also tracks per-record acknowledgement. Check out Parallel Consumer on GitHub (it's open source, BTW, and I'm the author).
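A sketch of what that looks like, adapted from the Parallel Consumer README (method names may differ between versions, and the broker address, group id, and topic are placeholders, so check the project's GitHub docs for the current API):

```java
import io.confluent.parallelconsumer.ParallelConsumerOptions;
import io.confluent.parallelconsumer.ParallelStreamProcessor;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ParallelConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "pc-group");                // hypothetical group id
        props.put("enable.auto.commit", "false");         // PC manages offsets itself
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);

        ParallelConsumerOptions<String, String> options = ParallelConsumerOptions
                .<String, String>builder()
                .ordering(ParallelConsumerOptions.ProcessingOrder.KEY) // keep per-key order
                .maxConcurrency(100)                                   // records in flight
                .consumer(kafkaConsumer)
                .build();

        ParallelStreamProcessor<String, String> processor =
                ParallelStreamProcessor.createEosStreamProcessor(options);
        processor.subscribe(Collections.singletonList("Purchases"));
        processor.poll(context ->
                System.out.println(context.getSingleConsumerRecord().value()));
    }
}
```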