I have read the entire documentation on the suggested website http://kafka.apache.org/ and was not able to understand the hardware requirements.
1) I need clarification on how many partitions and what replication factor are required for collecting a minimum of 50 GB of data per day for a single topic.
2) It is stated that the 0000000000000.log file can store up to 100 GB of data. Is it possible to reduce this log file size to reduce I/O usage?
If the data is ingested uniformly over the entire day, that means you need to ingest something like 600 KB per second. How much of that is usable payload depends on the number of messages in those 600 KB (according to Jay Kreps' explanation here, you should budget roughly 22 bytes of overhead per message), and keep in mind that the way you ACK the messages from the producer is also very important.
You should be able to get this throughput from a producer with 1 topic and 1 partition.
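As a sanity check on the figure above, here is the arithmetic, assuming 50 GB spread perfectly evenly over 24 hours:

```java
// Rough check of the ~600 KB/s figure above, assuming perfectly uniform ingestion.
public class IngestRateEstimate {
    public static void main(String[] args) {
        long bytesPerDay = 50L * 1024 * 1024 * 1024; // 50 GB per day
        long secondsPerDay = 24L * 60 * 60;          // 86,400 seconds
        double kbPerSecond = (double) bytesPerDay / secondsPerDay / 1024;
        System.out.printf("~%.0f KB/s%n", kbPerSecond); // prints ~607 KB/s
    }
}
```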
1. Check this link; it has the answer on how to choose the number of partitions:
http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
2. Yes, it is possible to change the maximum size of the log segment file in Kafka. You have to set the property below on each of the brokers and then restart them.
log.segment.bytes=1073741824
The line above sets the log segment size to 1 GB.
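If you would rather not change log.segment.bytes for every topic on the broker, the same limit can be applied to a single topic through the topic-level segment.bytes setting. A minimal sketch using the Java AdminClient (the broker address and topic name are placeholders):

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import java.util.Collections;
import java.util.Properties;

public class SetSegmentSize {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // Set segment.bytes = 1 GB for one topic instead of broker-wide.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("segment.bytes", "1073741824"), AlterConfigOp.OpType.SET);
            admin.incrementalAlterConfigs(
                    Collections.singletonMap(topic, Collections.singletonList(op)))
                 .all().get();
        }
    }
}
```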
The task is routing messages from a single huge source topic to many (a few thousand) destination topics. The overall rate is about a few million records per second. The pipeline barely handles this load now, and we are looking for a way to optimise it. However, it does not seem to have reached any hardware or network limit, so I suppose it can be improved. Latency isn't important (a delay of a few minutes is fine), and the average message size is less than 1 KiB.
The most obvious way to increase throughput is to make batch.size and linger.ms larger. But the problem is that the message rate differs between destination topics: depending on the destination, the rate may vary from a few messages per second to hundreds of thousands per second.
As I understand it (please correct me if I'm wrong), batch.size is a per-partition parameter. So if we set batch.size too big we will run out of memory, because it is multiplied by the number of destination topics, even if all of them have only one partition. On the other hand, if batch.size is small, the producer will send requests to the broker too often. In each app instance we use a single producer for all destination topics (a ProduceRequest can include batches for different topics). The only way to set this parameter differently per topic is to use a separate producer per topic, but that means thousands of threads and many context switches.
Can we set a minimum size for the actual ProduceRequest, i.e. something like batch.size but for all batches in the request combined, i.e. the opposite of max.request.size?
Or is there any other way to increase producer throughput?
The problem looks solvable, and it seems we solved it. It's not a big problem for Kafka to stream to 3k topics, but there are some things you should take care of:
1. The Kafka producer tries to allocate batch.size * number_of_destination_partitions memory at start-up. If you have batch.size equal to 10 MB and 3k topics with 1 partition per topic, the Kafka producer will require at least ~30 GB at start-up (source code).
So the more destination partitions you have, the smaller batch.size you have to set, or the more memory you need. We chose a small batch.size (see the producer sketch after this list).
2. The message rate per destination topic doesn't affect general performance. The Kafka producer sends several batches per request; this is where max.request.size comes into play (source code, maxSize is max.request.size). The higher max.request.size is, the more batches can be sent in one request. It is important to understand that reaching batch.size or linger.ms doesn't instantly trigger sending the batch to the broker. As soon as a batch reaches batch.size or linger.ms, it is marked as sendable and will be processed later together with other batches (source code).
Moreover, batch.size and linger.ms are not the only reasons to mark a batch as sendable (check the previous link), and this is where the batches are actually sent (source code). That's why the same event rate per destination topic is not required, but there are still some nuances, described next.
2.1. A few words about linger.ms: I can't say for sure how it acts in this scenario. On the one hand, the larger it is, the longer the Kafka producer will wait to collect messages for a given partition and the more data for that partition will be sent per request. On the other hand, it seems that the smaller it is, the more batches for different partitions can be packed into one request. There is no certainty about which is better.
3. Although the Kafka producer is able to send more than one batch per request, it can't send more than one batch per request for one specific partition. That's why, if you have a skewed message rate across destination topics, you have to increase the partition count for the most loaded ones to increase throughput. But always remember that increasing the partition count leads to an increase in memory usage.
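Putting these knobs together, here is a hedged sketch of a producer configuration along the lines discussed above; the broker address is a placeholder and the values are illustrative, not recommendations:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import java.util.Properties;

public class FanOutProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
        // Keep per-partition batches small: the worst case is roughly
        // batch.size * number_of_destination_partitions of buffered memory.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);              // 64 KB per partition
        // Allow many small batches to be packed into a single request per broker.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 8 * 1024 * 1024);  // 8 MB per request
        // Give batches some time to fill; trades latency for throughput.
        props.put(ProducerConfig.LINGER_MS_CONFIG, 100);
        // Hard upper bound on total buffered memory across all partitions.
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 512L * 1024 * 1024);  // 512 MB
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // ... route records from the source topic to their destination topics here ...
        }
    }
}
```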
The information above actually helped us solve our performance problems, but there may be other nuances that we don't know about yet.
I hope it will be useful.
I have a Kafka service with a 1000 GB disk and this running parameter:
log.retention.bytes=350000000000
However, disk usage reaches 90% (900 GB). With that parameter in effect, disk usage should not exceed ~326 GB. Why could this happen?
Other properties:
log.index.interval.bytes=4000
log.segment.bytes=250000000
log.index.size.max.bytes=10485760
log.retention.ms=168
While the official documentation isn't very clear:
The maximum size of the log before deleting it
the Confluent documentation on topic configs (which should really be considered the official documentation anyway) has a better description (under retention.bytes):
This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.
In short, this config isn't even per topic; it's per partition. I'm not aware of a Kafka config that acts as a broker-wide size limit.
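To illustrate with the numbers from the question (the partition count here is hypothetical, since it isn't given):

```java
// Why a per-partition limit can exceed the disk: with log.retention.bytes = 350 GB
// and, say, 3 partitions hosted on the broker, the broker may legitimately hold
// more than the 1000 GB disk before any segment becomes eligible for deletion.
public class RetentionBytesEstimate {
    public static void main(String[] args) {
        long retentionBytesPerPartition = 350_000_000_000L; // log.retention.bytes
        int partitionsOnBroker = 3;                         // hypothetical
        long worstCase = retentionBytesPerPartition * partitionsOnBroker;
        System.out.println(worstCase / 1_000_000_000L + " GB"); // 1050 GB > 1000 GB
    }
}
```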
If you're trying to balance data load across multiple brokers in a cluster, perhaps you should look at Cruise Control.
I have a very large (in message count) Kafka topic; it might receive more than 20M messages per second. The message size is small, though, just some plain text of less than 1 KB each. I can use several partitions per topic, and I can also use several servers working on one topic, each consuming one of the partitions in the topic...
What if I need 100+ servers for a huge topic?
Is it reasonable to create 100 or more partitions on a single topic?
You should define "large" when mentioning Kafka topics:
Large data volume overall?
Large message size, so that sending a message from the broker to the client for processing takes time?
Intensive writes to that topic? In that case, do you need to read and process as fast as possible? (i.e., can processing be delayed by about an hour?)
...
In any case, you should think about the consumer side to design your topics and partitions well. For instance:
If processing each message is slow and you need messages processed in parallel as fast as possible, you should create many partitions. It is like the relationship between a load balancer and servers: you create many workers to do your job.
If processing is slow only for some message types, you should consider moving them to a new topic. There is a nice article, Should you put several event types in the same Kafka topic, that explains this decision.
Is the order of messages important? For example, if message A happens before message B and must be processed first, you should make all related messages go to the same partition (order is only maintained within a single partition), for example by producing them with the same key (see the keyed-producer sketch after this list), or move them to a separate topic with a single partition.
...
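A minimal sketch of the keyed-producer idea mentioned above: records that share a key always land in the same partition under the default partitioner, so their relative order is preserved. The broker address, topic and key are placeholders:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

public class OrderedEventsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key => same partition => event B is stored after event A.
            producer.send(new ProducerRecord<>("orders", "customer-42", "event A"));
            producer.send(new ProducerRecord<>("orders", "customer-42", "event B"));
        }
    }
}
```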
After you have a proper design for topics and partitions, the question becomes: how many partitions should you have for each topic? Increasing the total number of partitions will increase your throughput, but at the same time it will affect availability and latency. There are some good discussions here and here that carefully explain how the total number of partitions per topic affects performance. In my opinion, you should benchmark directly on your system to choose the correct value; it depends on many factors of your system: the processing power of the server machines, network capacity, memory...
And the last part: you don't need 100 servers for 100 partitions. Kafka spreads the partitions of a topic across the available brokers. For example, if you have 1 topic with 7 partitions running on 3 servers, two servers will store 2 partitions each and one server will store 3 partitions (so 2*2 + 3*1 = 7). The mapping between partitions and brokers is kept in the cluster metadata (stored in ZooKeeper in older Kafka versions).
You will get better help if you are more specific and provide some numbers, such as your expected load per second and the size of each message.
In general, Kafka is pretty powerful; behind the scenes it writes data to a buffer and periodically flushes it to disk. As per a benchmark done by Confluent a while back, a 6-node Kafka cluster supports around 0.8 million messages per second (see the benchmarking chart in that post).
Our friends were right; I refer you to this book:
Kafka: The Definitive Guide
by Neha Narkhede, Gwen Shapira & Todd Palino
You can find the answer on page 47
How to Choose the Number of Partitions
There are several factors to consider when choosing the number of partitions:

What is the throughput you expect to achieve for the topic? For example, do you expect to write 100 KB per second or 1 GB per second?

What is the maximum throughput you expect to achieve when consuming from a single partition? You will always have, at most, one consumer reading from a partition, so if you know that your slower consumer writes the data to a database and this database never handles more than 50 MB per second from each thread writing to it, then you know you are limited to 60 MB throughput when consuming from a partition.

You can go through the same exercise to estimate the maximum throughput per producer for a single partition, but since producers are typically much faster than consumers, it is usually safe to skip this.

If you are sending messages to partitions based on keys, adding partitions later can be very challenging, so calculate throughput based on your expected future usage, not the current usage.

Consider the number of partitions you will place on each broker and the available disk space and network bandwidth per broker.

Avoid overestimating, as each partition uses memory and other resources on the broker and will increase the time for leader elections.

With all this in mind, it's clear that you want many partitions but not too many. If you have some estimate regarding the target throughput of the topic and the expected throughput of the consumers, you can divide the target throughput by the expected consumer throughput and derive the number of partitions this way. So if I want to be able to write and read 1 GB/sec from a topic, and I know each consumer can only process 50 MB/s, then I know I need at least 20 partitions. This way, I can have 20 consumers reading from the topic and achieve 1 GB/sec. If you don't have this detailed information, our experience suggests that limiting the size of the partition on the disk to less than 6 GB per day of retention often gives satisfactory results.
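As a quick sketch of the rule of thumb quoted above (the throughput numbers are the book's example, not measurements):

```java
// partitions >= target topic throughput / throughput of a single consumer
public class PartitionCountEstimate {
    public static void main(String[] args) {
        double targetMBPerSec = 1000;    // want to read/write ~1 GB/s
        double perConsumerMBPerSec = 50; // each consumer handles ~50 MB/s
        int partitions = (int) Math.ceil(targetMBPerSec / perConsumerMBPerSec);
        System.out.println(partitions + " partitions"); // prints 20
    }
}
```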
I wanted to know how Kafka uses open file descriptors. Why is it recommended to have a large number of open file descriptors? Does it impact producer and consumer throughput?
Brokers create and maintain file handles for each log segment file and each network connection. The total number can be very large if the broker hosts many partitions and each partition has many log segment files; the same applies to network connections.
I don't immediately see any performance decline caused by setting a large file-max, but page cache misses do matter.
Kafka keeps one file descriptor open for every segment file, and it fails miserably if the limit is too low. I don't know if it affects consumer throughput, but I assume it doesn't since Kafka appears to ignore the limit until it is reached.
The number of segment files is the number of partitions multiplied by some number that depends on the retention policy. The default retention policy is to start a new segment after one week (or 1 GB, whichever occurs first) and to delete a segment when all data in it is more than one week old.
(disclaimer: This answer is for Kafka 1.0 based on what I have learnt from one installation I have)
We can estimate it in the following way, which matters most if a broker hosts many partitions. For example, a Kafka broker needs at least the following number of file descriptors just to track log segment files:
(number of partitions)*(partition size / segment size)
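A hedged numeric illustration of that formula; the partition count, retained data per partition and segment size below are hypothetical:

```java
// File descriptors needed just for log segment files, per the formula above.
public class FileDescriptorEstimate {
    public static void main(String[] args) {
        int partitions = 200;                               // partitions hosted on the broker
        long partitionSizeBytes = 50L * 1024 * 1024 * 1024; // ~50 GB retained per partition
        long segmentSizeBytes = 1L * 1024 * 1024 * 1024;    // log.segment.bytes = 1 GB
        long segmentDescriptors = partitions * (partitionSizeBytes / segmentSizeBytes);
        System.out.println(segmentDescriptors + " descriptors for segment files"); // 10000
        // Index files and client/replica connections add to this,
        // so the OS limit should be set well above this number.
    }
}
```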
I am fairly new to Kafka, so forgive me if this question is trivial. I have a very simple setup for the purpose of timing tests, as follows:
Machine A -> writes to topic 1 (Broker) -> Machine B reads from topic 1
Machine B -> writes message just read to topic 2 (Broker) -> Machine A reads from topic 2
Now I am sending messages of roughly 1400 bytes in an infinite loop, filling up the space on my small broker very quickly. I'm experimenting with different values for log.retention.ms, log.retention.bytes, log.segment.bytes and log.segment.delete.delay.ms. First I set all of the values to the minimum allowed, but that seemed to degrade performance; then I set them to the maximum my broker could take before being completely full, but again performance degrades when a deletion occurs. Is there a best practice for setting these values to get the absolute minimum delay?
Thanks for the help!
Apache Kafka uses a log data structure to manage its messages. The log is basically an ordered set of segments, where a segment is a collection of messages. Kafka applies retention at the segment level rather than at the message level; hence, it keeps removing the oldest segments as they violate the retention policies.
Apache Kafka provides the following retention policies:
Time Based Retention
Under this policy, we configure the maximum time a segment (and hence its messages) can live for. Once a segment exceeds the configured retention time, it is marked for deletion or compaction depending on the configured cleanup policy. The default retention time for segments is 7 days.
Here are the parameters (in decreasing order of priority) that you can set in your Kafka broker properties file:
Configures retention time in milliseconds
log.retention.ms=1680000
Used if log.retention.ms is not set
log.retention.minutes=1680
Used if log.retention.minutes is not set
log.retention.hours=168
Size based Retention
Under this policy, we configure the maximum size of the log for a topic partition. Once the log reaches this size, Kafka starts removing the oldest segments. This policy is not as popular because it does not provide good visibility into message expiry; however, it can come in handy when we need to control the size of a log due to limited disk space.
Here are the parameters that you can set in your Kafka broker properties file:
Configures maximum size of a Log
log.retention.bytes=104857600
So, for your use case, you should configure log.retention.bytes so that your disk does not get full.
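As a hedged sketch of how you might size retention.bytes so the disk cannot fill up (the disk budget and partition count below are placeholders):

```java
// Pick a per-partition retention.bytes that keeps the total log size within a disk budget.
// Remember: the limit applies per partition, and follower replicas use disk too.
public class RetentionSizing {
    public static void main(String[] args) {
        long diskBudgetBytes = 800L * 1024 * 1024 * 1024; // leave headroom on a 1 TB disk
        int partitionsOnBroker = 8;                       // leaders + followers hosted on this broker
        long retentionBytes = diskBudgetBytes / partitionsOnBroker;
        System.out.println("log.retention.bytes=" + retentionBytes);
        // Segments are deleted whole, so usage can overshoot by up to one
        // segment (log.segment.bytes) per partition; keep extra headroom for that.
    }
}
```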