Kafka Broker and Consumer optimization - apache-kafka

We have 25 million records written to a Kafka topic.
The topic has 24 partitions and 24 consumers.
Each message is 1KB, and the messages are serialized and deserialized with Avro.
The replication factor is 2.
The fetch size is 50000 and the poll interval is 50ms.
Right now, during the load test, consuming and processing 1 million records takes 40 minutes on average, but we want to process the 25 million records in less than 20 to 30 minutes.
Broker configs:
background.threads = 10
num.network.threads = 7
num.io.threads = 8
Set replica.lag.time.max.ms = 500
Set replica.lag.max.messages = 4
log.flush.interval.ms left at its default value (as per the logs)
Used G1 collector instead of MarkSweepGC
Changed Xms to 4G and Xmx to 4G
Our setup has 8 brokers, each with 3 disks, connected over 10Gbps Ethernet (simplex network).
Consumer Configs:
We are using the Java Consumer API to consume the messages. We set vm.swappiness to 1 and use 200 threads inside the consumer to process the data. Inside the consumer we pick up a message and hit Redis and MapR-DB to perform some business logic; once the logic is complete we commit the message using a synchronous Kafka commit (commitSync).
Each consumer runs with -Xms4G and -Xmx4G. What other aspects should we consider in order to increase the read throughput?

I won't give you an exact answer to your problem, but rather a roadmap and some methodological help.
10 minutes for 1 million messages is indeed slow IF everything works fine AND the consumer's task is light.
The first thing you need to find out is where your bottleneck is.
It could be:
the Kafka cluster itself: messages take a long time to be pulled out of the cluster. To test that, check with a simple consumer (the one provided with the Kafka CLI, for example) running directly on a machine hosting a broker (or close to it), to avoid network latency. How fast is that?
the network between the brokers and the consumer
the consumer: what does it do? Maybe the processing is really long, and then the optimisation effort should go there. Can you monitor the resources (CPU, RAM) required by your consumer? One good test you could do is create a test consumer in which you load 10k messages in memory, then run your business logic and time it. How long does it take? This will tell you the maximum throughput of your consumer, irrespective of Kafka's speed (see the sketch below).
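As a minimal sketch of that last test (the 10k in-memory sample and the processRecord placeholder are assumptions; substitute your real Redis/MapR-DB logic and measure per thread):

import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Times the consumer-side business logic in isolation, with Kafka and the network out of the picture.
public class ProcessingBenchmark {

    // Placeholder for the actual business logic (Redis lookup, MapR-DB write, ...).
    static void processRecord(byte[] value) {
    }

    public static void main(String[] args) {
        int sampleSize = 10_000;

        // Pre-load sample messages into memory so only the processing is measured.
        List<byte[]> sample = new ArrayList<>(sampleSize);
        for (int i = 0; i < sampleSize; i++) {
            sample.add(new byte[1024]); // ~1KB, matching the production message size
        }

        Instant start = Instant.now();
        for (byte[] value : sample) {
            processRecord(value);
        }
        Duration elapsed = Duration.between(start, Instant.now());

        double msgPerSec = sampleSize * 1000.0 / Math.max(elapsed.toMillis(), 1);
        System.out.printf("Processed %d messages in %d ms (~%.0f msg/s per thread)%n",
                sampleSize, elapsed.toMillis(), msgPerSec);
    }
}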

Increase in Consumer Groups will increase delay in Producer Response?

I have a use case where I have to test the saturation point of my Kafka (3-node) cluster with a high number of consumer groups (to find the saturation point for our production use case). Producer ack=all.
I created more than 10,000 consumer groups and there was no problem (no load, just created the consumer groups, nothing consuming).
So I started load testing. I created 3 topics (1 partition each) with replication factor 3, and each broker is the leader for one topic (I verified this with kafka-topics describe).
I planned to constantly produce 4.5MBps to each topic while increasing the number of consumer groups from zero: 45 records of 100KB each per second to each topic.
When I produce data with no consumer groups in the cluster, the producer response time is just 2 ms/record.
With 100 consumer groups it takes 7 ms per record.
When increasing the consumer groups on a topic to 200, the time increases to 28-30 ms, at which point I can no longer produce 4.5MBps. With even more consumer groups the producer response keeps degrading.
The brokers have 15 I/O threads and 10 network threads.
Analysis done for the above scenario:
With Grafana JMX metrics there is no spike in the request and response queues.
There is no delay in I/O thread pick-up, judging by the request queue time.
The network threads' average idle percentage is 0.7, so the network threads are not a bottleneck.
Some articles suggest the socket buffer can be a bottleneck for high-bandwidth throughput, so I increased it from 100KB to 4MB, but it made no difference.
There is no spike in GC, file descriptors or heap space.
From this it seems there is no problem with the I/O threads, network threads or socket buffer.
So what can the bottleneck be here?
I thought it might be because I was producing data to a single partition, so I created more topics with 1 partition each and tried to produce 4.5MBps to each of them in parallel, but I ended up with the same delay in producer response.
What can really be the bottleneck here? The producer is decoupled from the consumer, so why is the producer response affected when I add more consumer groups to the broker?
As we know, the common link between producers and consumers is the partition, where the data lives and is written by producers and read by consumers. There are 3 things we need to consider here:
Relationship between producer and partition: you need the right number of partitions to send messages at a consistent speed, and here is the calculation we use to size the number of partitions for a Kafka implementation:
Partitions = Desired Throughput / Partition Speed
Conservatively, you can estimate that a single partition for a single Kafka topic runs at about 10 MB/s.
As an example, if your desired throughput is 5 TB per day, that comes out to about 58 MB/s. Using the estimate of 10 MB/s per partition, this implementation would require 6 partitions (see the sketch below). I believe it is not about creating more topics with one partition each, but about creating a topic with an optimized number of partitions.
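As a quick illustration of that calculation (a sketch; the 5 TB/day figure and the 10 MB/s per-partition estimate come from the example above and should be replaced with your own measured numbers):

public class PartitionSizing {
    public static void main(String[] args) {
        double desiredMBPerDay = 5_000_000;               // 5 TB/day expressed in MB
        double desiredMBps = desiredMBPerDay / 86_400;    // ~58 MB/s
        double perPartitionMBps = 10.0;                   // conservative per-partition estimate
        int partitions = (int) Math.ceil(desiredMBps / perPartitionMBps); // rounds up to 6
        System.out.printf("Desired throughput: %.1f MB/s -> %d partitions%n", desiredMBps, partitions);
    }
}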
Since you are sending the messages consistently to 1 partition, that could be the issue. Also, since you have chosen acks=all, that can be a reason for increased latency: every message sent to the topic has to be acknowledged by the leader as well as the followers, which introduces latency. As the message rate keeps increasing, it is likely that the latency increases as well, and this could be the actual reason for the increased producer response time. To address it you can do the following (see the producer settings sketched below):
a) Send the messages in batches
b) Compress the data
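A sketch of what those two suggestions look like as producer settings (the broker address, topic and concrete values are placeholders to tune against your own workload):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

// Batching and compression settings for the producer; tune batch.size and linger.ms
// against your own latency budget.
public class BatchingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 262144);      // 256KB batches instead of the 16KB default
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);           // wait up to 10ms to fill a batch
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compress batches on the wire
        props.put(ProducerConfig.ACKS_CONFIG, "all");             // kept as in the question; lowering it trades durability for latency

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // producer.send(...) as usual
        }
    }
}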
Partition: each partition has a leader, and with multiple replicas all writes go to the partition's leader. If the leaders are not balanced properly across brokers, one broker can end up overworked compared to the others, again causing latency. So the right number of partitions, with a balanced leader distribution, is a key factor.
Relationship between consumer and partition: from your example I understand that you are increasing the consumer groups from zero to some number. Please note that as you keep adding consumer groups, rebalancing of the partitions takes place.
Rebalancing is the process of mapping each partition to precisely one consumer in a group. While Kafka is rebalancing, processing in all involved consumers is blocked.
If you want more details:
https://medium.com/bakdata/solving-my-weird-kafka-rebalancing-problems-c05e99535435
And when that rebalancing happens, partitions also move between consumers, which is again an overhead.
In conclusion, I believe the producer response time might have been increasing because of the following factors:
a) The number of partitions not being sized correctly with respect to the messaging rate
b) Messages not being sent in batches of a proper size and with a proper compression type
c) The increase in consumer groups causing more rebalancing, and hence partition movement
d) Because rebalancing blocks the consumers during partition reassignment, message fetching also stops, so the partitions accumulate more data while still accepting new data
e) acks=all, which adds latency as well
Continuing with your query, let's try to visualize it.
Let us assume, per your setup:
1 topic with 1 partition (RF 3) and 1 to 100/500 consumer groups, each with a single consumer (so no rebalancing), all subscribing to the same topic.
Here only one broker (the leader) in the cluster actively serves the producer and all of those consumer groups, which can result in the processing delays you see, since the other 2 brokers are just followers and only step in if the leader goes down.

Kafka Random Access to Logs

I am trying to implement a way to randomly access messages in Kafka, using KafkaConsumer.assign(partition) and KafkaConsumer.seek(partition, offset),
and then polling for a single message.
Yet I can't get past 500 messages per second this way. In comparison, if I "subscribe" to the partition I get 100,000+ msg/sec (with a 1000-byte message size).
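For reference, a minimal sketch of the assign/seek/poll pattern described above (broker address, topic name and offsets are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

// Random access: assign a partition, seek to an offset, poll for a single record.
// Every poll() after a seek is a full round trip to the broker, which is why this is
// so much slower than sequential consumption of batched fetches.
public class RandomAccessReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // no group management needed with assign()
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 1);         // only a single message per seek

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(Collections.singletonList(tp));

            long[] offsetsToRead = {42L, 1_000L, 123_456L}; // arbitrary offsets to fetch
            for (long offset : offsetsToRead) {
                consumer.seek(tp, offset);
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.printf("offset=%d size=%d bytes%n", r.offset(), r.value().length));
            }
        }
    }
}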
I've tried:
Broker, Zookeeper, Consumer on the same host and on different hosts. (no replication is used)
1 and 15 partitions
Default thread configuration in "server.properties", and increased to 20 (I/O and network)
Single consumer assigned to a different partition each time and one consumer per partition
Single thread to consume and multiple threads to consume (calling multiple different consumers)
Adding two brokers and a new topic with its partitions spread across both brokers
Starting multiple Kafka Consumer Processes
Changing message sizes: 5k, 50k, 100k
In all cases the minimum I get is ~200 msg/sec, and the maximum is 500 if I use 2-3 threads. Going above that makes the .poll() call take longer and longer (from 3-4 ms on a single thread to 40-50 ms with 10 threads).
My naive Kafka understanding is that the consumer opens a connection to the broker and sends a request to retrieve a small portion of its log. While all of this involves some latency, and retrieving a batch of messages would obviously be better, I would imagine that it would scale with the number of receivers involved, at the expense of increased server usage on both the VM running the consumers and the VM running the broker. But both of them are idling.
So apparently there is some synchronization happening on the broker side, but I can't figure out whether it is due to my usage of Kafka or some inherent limitation of using .seek.
I would appreciate some hints on whether I should try something else, or whether this is all I can get.
Kafka is a streaming platform by design, which means many, many things in it have been built to accelerate sequential access. Storing messages in batches is just one of them. When you use poll() you utilize Kafka in that way, and Kafka does its best. Random access is not something Kafka was designed for.
If you want fast random access to distributed big data, you want something else, for example a distributed DB like Cassandra or an in-memory system like Hazelcast.
You could also transform the Kafka stream into another form that allows you to access it sequentially.

Kafka Topic Lag keeps increasing gradually when Message Size is huge

I am using the Kafka Streams Processor API to build a Kafka Streams application that retrieves messages from a Kafka topic. I have two consumer applications with the same Kafka Streams configuration; the only difference is the message size. The first one has messages of 2000 characters (3KB) while the second one has messages of 34000 characters (60KB).
Now in my second consumer application I am getting a lot of lag, which increases gradually with the traffic, while my first application is able to process its messages at the same time without any lag.
My Streams configuration parameters are as below:
application.id=Application1
default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
num.stream.threads=1
commit.interval.ms=10
topology.optimization=all
Thanks
In order to consume messages faster, you need to increase the number of partitions (if not yet done, depending on the current value), and do one of the following two things:
1) increase the value of the num.stream.threads config within your application,
or
2) start several instances of the application with the same consumer group (the same application.id).
To me, increasing num.stream.threads is preferable (until you reach the number of CPUs of the machine your app runs on). Try gradually increasing this value, e.g. go from 4 to 6 to 8, and monitor the consumer lag of your application.
By increasing num.stream.threads your app will be able to consume messages in parallel, assuming you have enough partitions (see the sketch below).
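A sketch of that setting (shown with the Streams DSL rather than the Processor API for brevity; the application id, broker address, topic name and thread count are placeholders, and 4 threads only helps if the input topic has at least 4 partitions):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

// Bumps num.stream.threads so that stream tasks (one per input partition) run in parallel.
public class ParallelStreamsApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Application1");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4); // raise gradually and watch consumer lag

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").foreach((key, value) -> {
            // per-record business logic goes here
        });

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}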

How many Partitions can we have in Kafka?

I have a requirement in my IoT project: a custom Java application called "NorthBound" (NB) can manage at most 3000 devices. Devices send data to SouthBound (SB, a Java application), SB sends the data to Kafka, and NB consumes the messages from Kafka.
To manage around 100K devices, I am planning to start multiple instances (around 35) of NorthBound, but I want the same instance to always receive the messages from the same devices, e.g. Device1 sends data to NB_instance1, Device2 sends data to NB_instance2, etc.
To handle this, I am thinking of creating 35 partitions of the same topic (Device-Messages) so that each NB instance consumes one partition and the same device's data always goes to the same NB instance. Is this the right approach, or is there a better way?
How many partitions can we have in a Kafka cluster? And what is a recommended value considering 3 nodes (brokers) in a cluster?
Currently we have only 1 node in Kafka. Can we continue with a single node and 35 partitions?
Say on startup I might have only 5-6K devices; then I would have only 2 partitions with 2 NB instances. Gradually, as we add more devices, we would keep adding more partitions and NB instances. Can we do this without restarting Kafka? Is it possible to create partitions dynamically?
Regards,
Krishan
As you can imagine, the number of partitions you can have depends on a number of factors.
Assuming you have recent hardware, since Kafka 1.1 you can have thousands of partitions per broker. Moreover, Kafka has been tested with over 100,000 partitions in a cluster (Link 1).
As a rule of thumb, it's recommended to over-partition a bit in order to allow for future growth in traffic/usage. Kafka allows adding partitions at runtime, but that changes the partitioning of keyed messages, which can be an issue depending on your use case.
Finally, it's not recommended to run a single broker for production workloads: if it were to crash or fail, you'd be exposed to an outage and possibly data loss. It's best to have at least 2 brokers with a replication factor of 2, even with only 35 partitions. For the device routing itself, keying each message by device ID (sketched below) is what makes "the same device always goes to the same partition / NB instance" work.
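A minimal sketch of that keyed-producer idea on the SouthBound side (the topic name Device-Messages comes from the question; the broker address, device id and payload are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Using the device id as the record key makes the default partitioner hash a given
// device to the same partition every time, so the NB instance owning that partition
// receives all of that device's messages.
public class DeviceMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String deviceId = "Device1";                // hypothetical device id
            String payload = "{\"temperature\": 21.5}"; // hypothetical payload
            producer.send(new ProducerRecord<>("Device-Messages", deviceId, payload));
        }
    }
}

Note that this hash-based routing only stays stable as long as the partition count does not change, which is why adding partitions later remaps keyed messages, as mentioned above.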

Can kafka producers supply data at quota rate in presence of replicas?

I have a Kafka producer belonging to a client with client id "p1" and a quota of 50 MBps.
I tested the performance of my producer using "bin/kafka-producer-perf-test.sh" and was able to get throughput close to 50 MBps when writing to a partition without replicas.
I tried the same experiment on a partition with three replicas, but this time the throughput dropped to 30 MBps.
My question is: shouldn't Kafka allow the producer to still get a throughput of 50 MBps even in the presence of replicas? There is nothing else running on the system, so I am not sure why this is happening.
Did you change the acks config of the producer? From your description it sounds like acks is set to all, so the producer waits until the sent data has been replicated across the three brokers, which affects throughput. If you didn't change it, try setting acks to 0, so the producer does not wait for any acknowledgment from the server at all, just to see whether it affects throughput (a sketch of that setting is below).
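A minimal sketch of comparing those acks settings with a plain Java producer (broker address, topic and record count are placeholders; the same acks property can also be passed to kafka-producer-perf-test.sh through its producer properties):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

// Sends the same payload with acks=0, 1 and all, and prints the rough throughput of each run.
// acks=0 -> fire and forget, acks=1 -> leader only, acks=all -> leader plus in-sync replicas.
public class AcksThroughputCheck {
    public static void main(String[] args) {
        byte[] payload = new byte[1024]; // 1KB dummy record
        int records = 100_000;
        for (String acks : new String[]{"0", "1", "all"}) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
            props.put(ProducerConfig.ACKS_CONFIG, acks);

            long start = System.nanoTime();
            try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < records; i++) {
                    producer.send(new ProducerRecord<>("perf-test", payload));
                }
                producer.flush(); // wait until everything has actually been sent before stopping the clock
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.printf("acks=%s: %d x 1KB in %.1fs (~%.1f MB/s)%n",
                    acks, records, seconds, records / 1024.0 / seconds);
        }
    }
}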