How to increase Kafka consumers on demand in a single group - apache-kafka

Let's say we have one topic "topic-1" in Kafka with 5 partitions.
Consumer Group-A has 5 consumers attached to "topic-1", one per partition. Due to a heavy workload, a large number of messages get published. Now we want to scale up / add more consumers to Group-A to process the messages.
How can we increase consumers ON_DEMAND in the same group?
Is there any way to do it from code, so that each message is consumed by a single consumer?
Once the load decreases, shut down a few consumers from the same group.

What I would suggest is having some partitions as a buffer for when the load increases.
For example, if 5 partitions are enough for normal load, I would suggest having 15 partitions for that topic but only 5 consumers at the start.
Then, when the load increases, keep adding consumers, preferably on other machines, until the load decreases.
You can have Kubernetes do the autoscaling for you.
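If you set the topic up yourself, one possible way to create it over-partitioned from the start is Kafka's AdminClient; a minimal sketch using the 15-partition figure above (the broker address and replication factor are placeholder assumptions):

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateBufferedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // 15 partitions up front, even though only 5 consumers run at normal load.
            NewTopic topic = new NewTopic("topic-1", 15, (short) 3); // RF 3 is an assumption
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```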

The Kafka framework suggests that the number of consumers correspond to the number of partitions. Increasing the number of consumers beyond that will not help, as you will have one consumer per partition anyway and the rest will remain idle. If you need to speed things up, you can read the data from Kafka and process it in other threads. You can scale with the number of processing threads, but you will need to program it yourself.
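A rough sketch of that single-poll-thread / many-processing-threads idea, assuming an already-configured KafkaConsumer and the topic name from the question (the pool size and poll timeout are arbitrary placeholders):

```java
import java.time.Duration;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ThreadedProcessing {
    // One thread polls; a fixed pool of worker threads does the actual processing.
    static void run(KafkaConsumer<String, String> consumer) {
        ExecutorService workers = Executors.newFixedThreadPool(8); // pool size is an assumption
        consumer.subscribe(List.of("topic-1"));                    // topic name from the question
        while (true) {
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                workers.submit(() -> process(record)); // scale by resizing the pool, not the group
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        // application-specific work goes here
    }
}
```

Note that once records are processed concurrently, per-partition ordering is no longer guaranteed and offset commits need to be handled with care.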

Related

Is it possible to control how often a Spring Kafka Message Listener switches between its assigned partitions?

When a Spring Kafka MessageListener is consuming messages from multiple partitions, it keeps processing messages from one partition until there are no more messages and only after that it continues with the next partition. (based on my observations)
Is it possible to set a max number of messages/batches and tell the Listener to switch faster to the next partition rather than later?
This would improve fairness and consume evenly from all assigned partitions.
"switch faster to the next partition, consume evenly from all assigned partitions"
I don't think Kafka has any properties for this (see the Kafka consumer configs).
It would be odd, though. You can think of a partition replica in Kafka as a log file. Your consumer poll runs in one thread; for better performance it should consume from one file, and the next poll can consume from another file, rather than splitting each poll to consume evenly from many partitions. Eventually, you still need to consume all of the messages on the topic.
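For reference, a single poll can return records from several of the assigned partitions, and the returned batch can be inspected per partition; a minimal sketch, assuming an already-configured consumer and the topic name from the question:

```java
import java.time.Duration;
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PerPartitionPoll {
    // Assumes 'consumer' is an already-configured KafkaConsumer<String, String>.
    static void pollOnce(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("topic-1")); // topic name is an assumption
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (TopicPartition tp : records.partitions()) {
            List<ConsumerRecord<String, String>> batch = records.records(tp);
            // 'batch' holds only this partition's records from the current poll
            System.out.printf("partition %d returned %d records%n", tp.partition(), batch.size());
        }
    }
}
```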

Does increasing the number of consumer groups increase producer response delay?

I have a use case where I have to test the saturation point of my Kafka (3-node) cluster with a high number of consumer groups (to find the saturation point for our production use case). Producer acks=all.
I created many consumer groups, more than 10,000, and there was no problem (no load; the consumer groups were only created, not consuming).
So I started load testing. I created 3 topics (1 partition each) with replication factor 3, and each broker is the leader for one topic (I made sure of this with kafka-topics --describe).
I planned to constantly produce 4.5 MBps to each topic and increase the number of consumer groups from zero: 45 records of 100 KB each per topic.
When I produce data with no consumer groups in the cluster, the producer response time is just 2 ms/record.
With 100 consumer groups it takes 7 ms per record.
When increasing the consumer groups for a topic to 200, the time increases to 28-30 ms, and at that point I am not able to produce 4.5 MBps. As I add more consumer groups, the producer response keeps degrading.
The brokers have 15 I/O threads and 10 network threads.
Analysis done for the above scenario:
With Grafana JMX metrics, there is no spike in the request and response queues.
There is no delay in I/O thread pickup, judging by the request queue time.
The network thread average idle percentage is 0.7, so the network threads are not a bottleneck.
Some articles say the socket buffer can be a bottleneck for high-bandwidth throughput, so I increased it from 100 KB to 4 MB, but it made no difference.
There is no spike in GC, file descriptors, or heap space.
From this, there is no problem with the I/O threads, network threads, or socket buffer.
So what can be a bottleneck here?
I thought it might be because of producing data to a single partition, so I created more topics with 1 partition each and tried to produce 4.5 MBps per topic in parallel, but ended up with the same delay in producer response.
What can really be the bottleneck here, given that the producer is decoupled from the consumer?
Why is the producer response affected when I add more consumer groups to the broker?
As we know, the common link between the producer and the consumer is the partition, where the data lives and is written by producers and read by consumers. There are three things that we need to consider here.
Relationship between producer and partition: I understand that you need the correct number of partitions in order to send messages at a consistent speed, and here is the calculation we use to optimize the number of partitions for a Kafka implementation.
Partitions = Desired Throughput / Partition Speed
Conservatively, you can estimate that a single partition for a single Kafka topic runs at 10 MB/s.
As an example, if your desired throughput is 5 TB per day, that figure comes out to about 58 MB/s. Using the estimate of 10 MB/s per partition, this example implementation would require 6 partitions. I believe it is not about creating more topics with one partition each, but about creating a topic with an optimized number of partitions.
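A quick sketch of that arithmetic, using the figures from the example above (the 10 MB/s per-partition number is the conservative estimate, not a measured value):

```java
public class PartitionEstimate {
    public static void main(String[] args) {
        double desiredBytesPerDay = 5e12;                       // 5 TB per day, from the example
        double desiredMBps = desiredBytesPerDay / 1e6 / 86_400; // ~57.9 MB/s
        double partitionMBps = 10.0;                            // conservative per-partition estimate
        int partitions = (int) Math.ceil(desiredMBps / partitionMBps);
        System.out.printf("~%.1f MB/s -> %d partitions%n", desiredMBps, partitions);
    }
}
```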
Since you are sending the messages to a single partition, this could be the issue. Also, since you have chosen acks=all, this can be a reason for increased latency: every message that you send to the topic has to get an acknowledgment from the leader as well as the followers, which introduces latency. As the message rate keeps increasing, it is likely that this latency factor increases as well. This could actually be the reason for the increased producer response time. To address it you can do the following (a sketch follows the list):
a) Send the messages in batches
b) Compress the data
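A minimal producer-configuration sketch covering those two points, assuming string keys and values; the broker address, batch size, linger time, and compression codec are illustrative placeholders, not tuned recommendations:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BatchingCompressingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // as in the question
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);    // a) batch records together
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);            //    wait briefly to fill batches
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");  // b) compress each batch
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic-1", "key", "value")); // topic name is an assumption
        }
    }
}
```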
Partition: Each partition has a leader, and with multiple replicas, writes go to the leaders. However, if the leaders are not balanced properly across brokers, one broker might be overworked compared to the others, again causing latency. So, once more, an optimized number of partitions is a key factor.
Relationship between consumer and partition: From your example I understand that you are increasing the number of consumer groups from zero to some number. Please note that as you keep adding consumers to a group, rebalancing of the partitions takes place.
Rebalancing is the process of mapping each partition to precisely one consumer in a group. While Kafka is rebalancing, the processing of all involved consumers is blocked.
If you want more details, see https://medium.com/bakdata/solving-my-weird-kafka-rebalancing-problems-c05e99535435
And when that rebalancing happens, partitions move between consumers as well, which is again an overhead.
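If it helps to see when this happens, a consumer can register a ConsumerRebalanceListener and log partition revocation and assignment; a minimal sketch, assuming an already-configured consumer and the topic name from the original question:

```java
import java.util.Collection;
import java.util.List;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceLogging {
    // Assumes 'consumer' is an already-configured KafkaConsumer<String, String>.
    static void subscribeWithLogging(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("topic-1"), new ConsumerRebalanceListener() { // topic is an assumption
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // called before a rebalance takes partitions away; processing pauses here
                System.out.println("Revoked: " + partitions);
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // called after the rebalance hands out the new assignment
                System.out.println("Assigned: " + partitions);
            }
        });
    }
}
```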
In conclusion, I understand that the producer response time might have been increasing because of the factors below:
a) The number of partitions not being chosen correctly with respect to the messaging speed
b) Messages not being sent in batches of a proper size and with a proper compression type
c) The increase in consumer groups causing more rebalancing and hence partition movement
d) Since rebalancing blocks consumers during partition reassignment, message fetching also stops, causing the partitions to accumulate more data while still accepting new data
e) acks=all, which adds latency as well
In continuation of your query, trying to visualize it, let us assume, as per your setup:
1 topic with 1 partition (RF 3) and 1 to 100/500 consumer groups, each with a single consumer (so no rebalancing), subscribing to the same topic.
Here only one server (the leader) in the cluster would be actively participating in serving all of this, which can result in processing delays, since the other 2 brokers are just followers and will only step in if the leader goes down.

Kafka - Standalone server - How to decide partitions?

I have a standalone Kafka setup with a single disk, planning to stream over a million records. How do I decide the number of partitions for my topic for better throughput? Does it have to be 1 partition?
Is it recommended to have multiple partitions for a topic on a standalone Kafka server?
Yes, you need multiple partitions even for a single-node Kafka cluster. That is because you can only have as many consumers as you have partitions. If you have a single partition then you can only have a single consumer, and that will limit throughput, especially if you want to stream millions of rows (although the period for those is not specified).
The only real downside to this is that messages are only consumed in order within the same partition. Other than that, you should go with multiple partitions. You will need to estimate the throughput of a single consumer in order to calculate the partitions, then maybe add one or two on top of that.
You can still add partitions later but it's probably better to try to start with the right amount first and change later as you learn more or as your volume increases/decreases.
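If you do add partitions later, the Kafka AdminClient can do it programmatically; a minimal sketch (the broker address, topic name, and target count are assumptions, and note the count can only be increased, never decreased):

```java
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class AddPartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (Admin admin = Admin.create(props)) {
            // Grow "topic-1" (name is an assumption) to 10 partitions in total.
            admin.createPartitions(Map.of("topic-1", NewPartitions.increaseTo(10)))
                 .all()
                 .get(); // block until the request completes
        }
    }
}
```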
There are two main factors to consider:
Number of producers and consumers
Each client, producer or consumer, can only connect to one partition. For this reason, the number of partitions must be at least the max(number of producers, number of consumers).
Throughput
You must determine the throughput to calculate how many consumers should be in the consumer group. The combined reading capacity of the consumers should be at least as high as the combined writing capacity of the producers.
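A tiny sketch of that sizing check (both MB/s figures are made-up placeholders, not measurements):

```java
public class ConsumerCountEstimate {
    public static void main(String[] args) {
        double producerWriteMBps = 45.0;   // combined producer write rate (placeholder)
        double perConsumerReadMBps = 8.0;  // measured read rate of one consumer (placeholder)
        int consumersNeeded = (int) Math.ceil(producerWriteMBps / perConsumerReadMBps);
        // The topic needs at least this many partitions so every consumer gets one.
        System.out.println("Consumers (and partitions) needed: " + consumersNeeded);
    }
}
```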

Increase or decrease Kafka partitions dynamically

I have a system where load is not constant. We may get 1000 requests a day or no requests at all.
We use Kafka to pass the requests between services. We have kept an average number of Kafka consumers to reduce the cost incurred. Now my Kafka consumers will sit idle if no requests are received that day, and there will be lag if too many requests are received.
We want to keep these consumers in autoscale mode, such that the number of servers (Kafka consumers) will increase if there is a spike in the number of requests. Once the number of requests drops, we will remove the servers. Therefore, the Kafka partitions would have to be increased or decreased accordingly.
Kafka allows increasing partitions. In this case, how can we decrease Kafka partitions dynamically?
Is there any other solution to handle this Auto-scaling?
Scaling up partitions will not fix your lag problems in the short term since no data is moved between partitions when you do this, so existing (or new) consumers are still stuck with reading the data in the previous partitions.
It's not possible to decrease partitions, and it's not possible to scale consumers beyond the partition count.
If you are able to sacrifice processing order for consumption speed, you can separate the consuming threads from the worker threads, as hinted at in the KafkaConsumer javadoc; then you would be able to scale these worker threads.
Since you are thinking about modifying the partition counts, then I'm guessing processing order isn't a problem.
have one or more consumer threads that do all data consumption and hands off ConsumerRecords instances to a blocking queue consumed by a pool of processor threads that actually handle the record processing.
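A compressed sketch of that pattern, assuming an already-configured consumer and the topic name used earlier; offset handling and shutdown are left out for brevity:

```java
import java.time.Duration;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumeThenHandOff {
    // Bounded queue so a slow worker pool applies back-pressure to the poll loop.
    private static final BlockingQueue<ConsumerRecords<String, String>> QUEUE =
            new ArrayBlockingQueue<>(16);

    // The single consuming thread: polls and hands batches to the queue.
    static void pollLoop(KafkaConsumer<String, String> consumer) throws InterruptedException {
        consumer.subscribe(List.of("topic-1")); // topic name is an assumption
        while (true) {
            ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
            if (!batch.isEmpty()) {
                QUEUE.put(batch); // blocks when workers fall behind
            }
        }
    }

    // Each worker thread drains the queue; run as many of these as you need.
    static Runnable worker() {
        return () -> {
            while (true) {
                try {
                    ConsumerRecords<String, String> batch = QUEUE.take();
                    batch.forEach(record -> {
                        // application-specific processing goes here
                    });
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        };
    }
}
```

Note that offset commits then have to be coordinated with the workers, since the poll loop no longer knows when a batch has been fully processed.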
A single consumer can consume multiple partitions. Therefore partition your work for the largest anticipated parallel requirement, and then scale your number of consumers as required.
For example, if you think you need 32 parallel consumers you would give your Kafka topic 32 partitions.
You can run 32 consumers (each gets one partition), or eight consumers (each gets four partitions) or just one (which gets all 32 partitions). Or any number of consumers in between. Kafka's protocol ensures that within the consumer group all partitions are consumed, and will rebalance as and when consumers are added or removed.
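In code, "adding a consumer" is simply starting another instance with the same group.id; the group coordinator redistributes the partitions among the live members. A minimal sketch, reusing the topic and group names from the first question (the broker address is a placeholder):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupMember {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "Group-A");                 // same group for every copy
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        // Start as many copies of this process as the load requires (up to the
        // partition count); the group rebalances partitions among the live members.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("topic-1"));
            while (true) {
                consumer.poll(Duration.ofMillis(500)).forEach(record -> {
                    // process the record
                });
            }
        }
    }
}
```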

Kafka add on-demand consumer instances for topic

Can I use Kafka as the messaging queue in a system which receives end-user events as messages? There will be short bursts of load (lasting a couple of hours to a day) in a month.
The objective is to add additional consumers (workers) during these peak-load periods and remove them when not being utilized (scaling up and down).
However, it seems that a consumer in a consumer group will read at most one partition from the topic, so the maximum number of consumers processing the topic in parallel is limited to the number of partitions in the topic.
Is Kafka suitable for these use cases, where I would like to increase the number of workers 100-fold for short bursts?