kafka + how to avoid running out of disk storage - apache-kafka

I want to describe the following case that happened on one of our production clusters.
We have an Ambari cluster with HDP version 2.6.4.
The cluster includes 3 Kafka machines, and each Kafka broker has a 5 TB disk.
What we saw is that all Kafka disks were at 100% usage, so the Kafka disks were full, and this is the reason all the Kafka brokers failed:
df -h /kafka
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 5T 5T 23M 100% /var/kafka
After investigation we saw that log.retention.hours is set to 7 days (168 hours).
So it seems purging only happens after 7 days, and maybe this is the reason the Kafka disks filled up to 100% even though they are huge (5 TB).
What we want to do now is avoid this situation in the future.
So we want to know:
How can we avoid fully used capacity on the Kafka disks?
What do we need to set in the Kafka configuration in order to purge the Kafka logs according to the disk size – is that possible?
And how do we know the right value for log.retention.hours? According to the disk size, or something else?

In Kafka, there are two types of log retention: size-based and time-based. The former is triggered by log.retention.bytes, while the latter is triggered by log.retention.hours.
In your case, you should pay attention to size retention, which can sometimes be quite tricky to configure. Assuming that you want a delete cleanup policy, you'd need to set the following parameters:
log.cleaner.enable=true
log.cleanup.policy=delete
Then you need to think about the configuration of log.retention.bytes, log.segment.bytes and log.retention.check.interval.ms. To do so, take the following factors into consideration:
log.retention.bytes is a minimum guarantee for a single partition of a topic: if you set log.retention.bytes to 512MB, you will always have at least 512MB of data (per partition) on your disk.
Again, if you set log.retention.bytes to 512MB and log.retention.check.interval.ms to 5 minutes (which is the default value), then at any given time you can have at least 512MB of data plus whatever is produced within the 5-minute window, before the retention policy is triggered.
A topic's log on disk is made up of segments. The segment size depends on the log.segment.bytes parameter. For log.retention.bytes=1GB and log.segment.bytes=512MB, you will always have up to 3 segments per partition on disk (2 segments that have reached the retention limit, plus the active segment that data is currently being written to).
Finally, you should do the math and compute the maximum size that Kafka logs might occupy on your disk at any given time, and tune the aforementioned parameters accordingly. I would also advise setting a time-based retention policy and configuring log.retention.hours accordingly. If you don't need your data anymore after 2 days, then set log.retention.hours=48.
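As a hedged illustration of that math (the ~100 partitions per broker and the 80% usable-space target below are assumptions for the example, not values taken from your cluster), a server.properties sketch for a 5 TB data disk could look like this:
# budget: 5 TB disk * 80% ≈ 4 TB for logs; 4 TB / ~100 partitions ≈ 40 GB per partition
log.cleaner.enable=true
log.cleanup.policy=delete
# ~40 GB per partition (hypothetical; adjust to your real partition count)
log.retention.bytes=42949672960
# 1 GB segments (the default)
log.segment.bytes=1073741824
# run the retention check every 5 minutes (the default)
log.retention.check.interval.ms=300000
# keep a time limit as well; whichever limit is reached first makes a segment eligible for deletion
log.retention.hours=168
The real upper bound on disk usage is then roughly the retention bytes times the number of partitions hosted by the broker, plus the active segments and whatever arrives between two retention checks.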

I think you have three options:
1) Increase the size of the disks until you have a comfortable amount of free space with your current retention policy of 7 days. For me, a comfortable amount of free space is around 40% (but that is personal preference).
2) Lower your retention policy to, for example, 3 days and see whether your disks are still filling up after a period of time. The right retention period varies between use cases. If you don't need the data on Kafka as a backup when something goes wrong, then just pick a very low retention period. If it is crucial that you keep those 7 days' worth of data, then you should not change the retention period but the disk sizes.
3) A combination of options 1 and 2.
More information about optimal retention policies: Kafka optimal retention and deletion policy
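If you go with option 2, here is a hedged sketch of lowering the retention of a single topic at runtime; the topic name and the ZooKeeper address are placeholders, and kafka-configs.sh is the tool shipped with the Kafka broker:
# lower time-based retention for one topic to 3 days (259200000 ms)
kafka-configs.sh --zookeeper zk-host:2181 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config retention.ms=259200000
To lower the default for all topics at once, change log.retention.hours (e.g. to 72) in the broker configuration instead, which in an Ambari-managed cluster is done through the Kafka service configs.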

Related

Kafka: reduce the number of open files as it crosses 1000000

I have a Kafka broker receiving 1 GB of data every minute from certain events, due to which the number of open files is going above 1000000. I am not sure which setting needs to be changed to bring this number down. Can anyone suggest a quick fix? Should I increase log.segment.bytes=1073741824 to 10 GB to reduce the number of files getting created, or increase log.retention.check.interval.ms=300000 to 15 minutes so that checks for deletion happen less often?
Increasing the size of the segments will reduce the number of files maintained by the broker, with the tradeoff that only closed segments are cleaned or compacted.
The other alternative is to reconsider what types of data you're sending. For example, if you are sending files or other large binary blobs, consider using filesystem URIs rather than pushing the whole payload through a topic.
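A minimal sketch of the segment-size approach, with the 10 GB figure taken from the question; the file-descriptor check below is just one assumed way to verify the effect (the broker PID is a placeholder):
# server.properties: grow segments from 1 GB to 10 GB so each partition keeps fewer, larger files
log.segment.bytes=10737418240
# count the open file descriptors of the broker process
ls /proc/<broker-pid>/fd | wc -l
# and confirm the process limit is comfortably above that number
grep 'open files' /proc/<broker-pid>/limits
Note that each open segment typically accounts for several descriptors (the log file plus its index files), so fewer segments translates directly into fewer open files.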

How to check the actual number of incremental fetch session cache slots used in Kafka cluster?

I am reading the question Kafka: Continuously getting FETCH_SESSION_ID_NOT_FOUND and trying to apply the solution suggested by Hrishikesh Mishra, as we face a similar issue. I increased the broker setting max.incremental.fetch.session.cache.slots to 2000 (the default was 1000). But now I wonder how I can monitor the actual number of used incremental fetch session cache slots. In Prometheus I see the kafka_server_fetchsessioncache_numincrementalfetchpartitionscached metric, and the PromQL query shows a number on each of the three brokers that is now significantly over 2000 (2703, 2655 and 2054), so I am not sure I am looking at the proper metric. There is also kafka_server_fetchsessioncache_incrementalfetchsessionevictions_total, which shows zero on all brokers.
OK, there is also kafka_server_fetchsessioncache_numincrementalfetchsessions, which shows roughly 500 on each of the three brokers, so that is a total of roughly 1500, which is between 1000 and 2000. So maybe that is the metric controlled by max.incremental.fetch.session.cache.slots?
Actually, as of now there are already more than 700 incremental fetch sessions on each broker, a total of more than 2100, so evidently the limit of 2000 applies to each broker, and the number in the whole cluster can go as high as 6000. The reason the number is currently below 1000 on each broker is that the brokers were restarted after the configuration change.
And the question is: how can this allocation be checked at the level of individual consumers? Such a query:
count by (__name__) ({__name__=~".*fetchsession.*"})
returns only this table:
Element Value
kafka_server_fetchsessioncache_incrementalfetchsessionevictions_total{} 3
kafka_server_fetchsessioncache_numincrementalfetchpartitionscached{} 3
kafka_server_fetchsessioncache_numincrementalfetchsessions{} 3
The metric named kafka.server:type=FetchSessionCache,name=NumIncrementalFetchSessions is the correct way to monitor the number of FetchSessions.
The size is configurable via max.incremental.fetch.session.cache.slots. Note that this setting is applied per-broker, so each broker can cache up to max.incremental.fetch.session.cache.slots sessions.
The other metric you saw, kafka.server:type=FetchSessionCache,name=NumIncrementalFetchPartitionsCached, is the total number of partitions cached across all FetchSessions. Many FetchSessions will use several partitions, so it's expected to see a larger number there.
As you said, the low number of FetchSessions you saw was likely due to the restart.
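As a hedged example using the metric names already quoted above (the instance label is an assumption about how your Prometheus scrape config labels the brokers), per-broker usage of the cache can be compared against the configured slot limit with queries like:
# incremental fetch sessions currently cached, per broker (compare against max.incremental.fetch.session.cache.slots)
sum by (instance) (kafka_server_fetchsessioncache_numincrementalfetchsessions)
# a sustained non-zero eviction rate means the cache is full and sessions are being pushed out
rate(kafka_server_fetchsessioncache_incrementalfetchsessionevictions_total[5m])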

Hardware requirement for apache kafka

I am building a production environment that will run Apache Kafka. I want to know the best hardware combination for good performance. I will have 5000 transactions/second.
You would need to provide some more details regarding your use case, like the average message size, but here are my 2 cents anyway:
Confluent's documentation might shed some light:
CPUs Most Kafka deployments tend to be rather light on CPU
requirements. As such, the exact processor setup matters less than the
other resources. Note that if SSL is enabled, the CPU requirements can
be significantly higher (the exact details depend on the CPU type and
JVM implementation).
You should choose a modern processor with multiple cores. Common
clusters utilize 24 core machines.
If you need to choose between faster CPUs or more cores, choose more
cores. The extra concurrency that multiple cores offers will far
outweigh a slightly faster clock speed.
How to compute your throughput
It might also be helpful to compute the throughput. For example, if you have 800 messages per second of 500 bytes each, then your throughput is 800*500/(1024*1024) ≈ 0.4MB/s. Now if your topic is partitioned across 3 brokers with a replication factor of 3, the incoming data is spread over the brokers but every byte is replicated to all of them, which leads to 0.4/3*3 = 0.4MB/s per broker.
More details regarding your architecture can be found in Confluent's whitepaper Apache Kafka and Confluent Reference Architecture. Here's the section on memory usage:
ZooKeeper uses the JVM heap, and 4GB RAM is typically sufficient. Too
small of a heap will result in high CPU due to constant garbage
collection while too large heap may result in long garbage collection
pauses and loss of connectivity within the ZooKeeper cluster.
Kafka brokers use both the JVM heap and the OS page cache. The JVM heap is used for replication of partitions between brokers and for log
compaction. Replication requires 1MB (default replica.max.fetch.size)
for each partition on the broker. In Apache Kafka 0.10.1 (Confluent
Platform 3.1), we added a new configuration
(replica.fetch.response.max.bytes) that limits the total RAM used for
replication to 10MB, to avoid memory and garbage collection issues
when the number of partitions on a broker is high. For log compaction,
calculating the required memory is more complicated and we recommend
referring to the Kafka documentation if you are using this feature.
For small to medium-sized deployments, 4GB heap size is usually
sufficient. In addition, it is highly recommended that consumers
always read from memory, i.e. from data that was written to Kafka and
is still stored in the OS page cache. The amount of memory this
requires depends on the rate at which this data is written and how far
behind you expect consumers to get. If you write 20GB per hour per
broker and you allow brokers to fall 3 hours behind in normal
scenario, you will want to reserve 60GB to the OS page cache. In cases
where consumers are forced to read from disk, performance will drop
significantly.
Kafka Connect itself does not use much memory, but some connectors buffer data internally for efficiency. If you run multiple connectors
that use buffering, you will want to increase the JVM heap size to 1GB
or higher.
Consumers use at least 2MB per consumer and up to 64MB in cases of large responses from brokers (typical for bursty traffic).
Producers will have a buffer of 64MB each. Start by allocating 1GB RAM and add 64MB for each producer and 16MB for each consumer planned.
There are many different factors that need to be taken into consideration when tuning the configuration of your architecture. I would suggest going through the aforementioned documentation, monitoring your existing cluster and its resources, and then tuning them accordingly.
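To make that arithmetic concrete for this question's 5000 transactions/second, here is a back-of-the-envelope sketch; the 1 KB message size, the replication factor of 3 on 3 brokers and the 3-hour consumer lag are assumptions, not values from the question:
ingress            ≈ 5000 msg/s * 1 KB ≈ 4.9 MB/s ≈ 17 GB/hour
written per broker ≈ 17 GB/hour (with replication factor 3 on 3 brokers, each broker stores a full copy)
page cache target  ≈ 17 GB/hour * 3 hours of allowed consumer lag ≈ 51 GB per broker, on top of a ~4 GB JVM heap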

How much memory does a Kafka cluster need?

How can I calculate how much memory and CPU my Kafka cluster needs?
My cluster consists of 3 nodes, with a throughput of ~800 messages per second.
Currently each node has 6 GB of RAM, 2 CPUs and a 1 TB disk, and it seems not to be enough. How much would you allocate?
(The first answer here is identical to the one under "Hardware requirement for apache kafka" above; see that section for the guidance on CPU, throughput computation and memory usage.)
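Plugging this question's own numbers into the throughput example from that answer (the 500-byte message size comes from the example there; the 3-hour consumer lag and 4 GB heap follow the quoted whitepaper and are assumptions for this sketch):
ingress           ≈ 800 msg/s * 500 B ≈ 0.4 MB/s ≈ 1.4 GB/hour per broker
page cache target ≈ 1.4 GB/hour * 3 hours of allowed lag ≈ 4.2 GB per broker
heap + page cache ≈ 4 GB + 4.2 GB ≈ 8 GB, already above the 6 GB currently provisioned per node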
I think you want to start by profiling your kafka cluster.
See the answer to this post: CPU Profiling kafka brokers.
It basically recommends using a Prometheus and Grafana stack to visualize your load on a timeline; from this you should be able to determine your bottleneck. It also links to an article that describes how.
Also, you may find that post interesting, because the poster seems to have about the same workload as you.

Zookeeper Overall data size limit

I am new to Zookeeper and am trying to understand whether it fits my use case.
I have 10 million hierarchical data items that I want to store in Zookeeper.
That is 10M key-value pairs, where the key and the value are 1KB each.
So the total data size is approximately ~20GB (10M * 2KB) without replication.
I know the zNode data size limit is 1MB (which can be changed).
Questions:
Will Zookeeper be able to support 20GB of data with no performance impact?
Is there a maximum size after which performance degrades?
Is there a limit on the total number of nodes?
Zookeeper is in no way suitable for this use case. Zookeeper periodically dumps/snapshots its data tree, which means it would be dumping the whole 20 GB of data every few minutes. Moreover, Zookeeper nodes in the cluster/ensemble are essentially replicas of each other, so the whole data set is replicated to every Zookeeper node and there is no data partitioning either. Zookeeper is not a database.
I guess for your use case it would be much better to go with a database or a distributed cache (Redis, Hazelcast, etc.).
Anyway, there is no hard limit on the total number of nodes in Zookeeper.
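To put that in concrete terms (the 3-server ensemble below is an assumption):
data tree in memory ≈ 20 GB on every Zookeeper server (full replication, no partitioning)
3-server ensemble   ≈ 60 GB of RAM in total, plus a ~20 GB snapshot written to disk every few minutes on each server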
