Are there any storage limits for a Kafka compacted topic? - apache-kafka

When doing stateful processing in Kafka Streams we can hold large state. We can provision more disk space for the client as the data grows. But what about the changelog topic? The local state is backed up in this compacted topic. Are there any limitations on how much data we can store in this topic?
We have not encountered any issues yet, but I see that some cloud services do have limitations on the size of a compacted topic. Is this a Kafka limitation? And if yes, do these limitations also apply to non-compacted topics?

Infinite retention of any topic's log segments can be achieved by setting
log.retention.bytes = -1
log.retention.hours = -1
These options have been available since version 0.9.0.0, which indicates that this is a mature feature in Kafka.
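For illustration only (not from the original answer; the broker address and topic name are placeholders), the per-topic equivalents of the broker settings above, retention.bytes and retention.ms, can be set with the AdminClient:

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    public class InfiniteRetention {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-changelog-topic");
                List<AlterConfigOp> ops = List.of(
                        new AlterConfigOp(new ConfigEntry("retention.bytes", "-1"), AlterConfigOp.OpType.SET),
                        new AlterConfigOp(new ConfigEntry("retention.ms", "-1"), AlterConfigOp.OpType.SET));
                // Applies topic-level infinite retention without touching broker defaults.
                admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
            }
        }
    }

Note that Kafka Streams changelog topics use cleanup.policy=compact, where compaction keeps the latest value per key; the retention limits above only apply when the delete policy is (also) in use.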
However, many suggest that using Kafka as permanent storage is not what it was designed to do, and as the amount of data stored in Kafka increases, users eventually hit a "retention cliff," at which point it becomes significantly more expensive to store, manage, and retrieve data. Infrastructure costs also increase, since the longer the retention period, the more hardware is required.
Having said that, it seems that people do use Kafka for persistent storage; for example, The New York Times uses Kafka as a source of truth, storing 160 years of journalism going back to the 1850s.
I would suggest using a small message size if you decide to use Kafka as a System of Record (SOR) and to hold the state of an entity.
Kafka makes it very clear that its performance depends heavily on the event/message size, so there is a size limit on messages.
Kafka has a default limit of 1MB per message in a topic. This is because very large messages are considered inefficient and an anti-pattern in Apache Kafka.
There is more on handling larger messages here.
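If you do need to go above that 1MB default, the limit is governed by the broker/topic settings message.max.bytes / max.message.bytes and the producer's max.request.size. A minimal sketch (placeholder broker and values, not from the answer above):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class LargeMessageProducerConfig {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // ~5 MB instead of the ~1 MB default; only works if the topic's
            // max.message.bytes (and broker message.max.bytes) are raised to match.
            props.put("max.request.size", "5242880");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // ... send records up to the new limit ...
            }
        }
    }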
By default, each Kafka topic partition log is split into segment files on disk; a segment grows until it reaches log.segment.bytes (1GB by default), at which point a new log file is created. A partition will usually have multiple log files on disk at any one time.

Related

Can I use Kafka for multiple independent consumers sequential reads?

I have the following use case:
50 students write their own code which consumes a preloaded dataset, and they will repeat it many times.
They all need to do the same task: read the data in order, and process it.
The dataset is a time series containing 600 million messages, each message is about 1.3KB.
Processing will probably be in Spark, but not mandatory.
The dataset is fixed and ReadOnly.
The data should be read at "reasonable speed" > 30MB/sec for each consumer.
I was thinking of setting kafka cluster with 3+ brokers, 1 topic, and 50 partitions.
My issue with the above plan is that each student (== consumer) must read all the data, regardless of what other consumers do.
Is Kafka a good fit for this? If so, how?
What if I relax the requirement of reading the dataset in order? i.e. a consumer can read the 600M messages in any order.
Is it correct that in this case each consumer will simply pull the full topic (starting with "earliest")?
An alternative is to set an HDFS storage (we use Azure so it's called Storage Account) and simply supply a mount point. However, I do not have control of the throughput in this case.
Throughput calculation:
Let's say 25 consumers run concurrently, each reading at 30MB/s -> 750MB/s.
Assuming data is read from disk, and disk rate is 50MB/s, I need to read concurrently from 750/50 = 15 disks.
Does it mean I need to have 15 brokers? I did not see how one broker can allocate partitions to several disks attached to it.
Similar posts:
Kafka topic partitions to Spark streaming
How does one Kafka consumer read from more than one partition?
(Spring) Kafka appears to consume newly produced messages out of order
Kafka architecture many partitions or many topics?
Is it possible to read from multiple partitions using Kafka Simple Consumer?
Processing will probably be in Spark, but not mandatory
An alternative is to set an HDFS storage (we use Azure)
Spark can read from Azure Blob Storage, so I suggest you start with that first. You can easily scale up Spark executors in parallel for throughput.
If you want to use Kafka, don't base the consumption rate on disk speed alone, especially when Kafka can do zero-copy transfers. Use the kafka-consumer-perf-test script to test how fast your consumers can go with one partition. Or, better, if your data has some key other than timestamp that you can order by, then use that.
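On the requirement that each student must read all the data regardless of the others: purely as an illustration (not part of the answer above; broker, topic, and group names are placeholders), giving every student their own group.id makes each one an independent consumer group that reads the whole topic from the earliest offset:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class StudentReader {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "student-" + args[0]);        // one consumer group per student
            props.put("auto.offset.reset", "earliest");         // read the topic from the beginning
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("dataset-topic"));   // placeholder topic name
                while (true) {
                    ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, byte[]> record : records) {
                        // each student's own processing goes here
                    }
                }
            }
        }
    }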
It's not really clear whether each of the 50 students does the same processing on the data set, or whether some pre-computation can be done. If so, and if the data is all streamed through a topic, Kafka Streams KTables can be set up to aggregate some static statistics of the data; that way you can distribute the load for those queries and not need 50 parallel consumers.
Otherwise, my first thought would be to simply use a TSDB like OpenTSDB, Timescale or Influx, maybe Druid, which could also be used with Spark or queried directly.
If you are using Apache Spark 3.0+, there are ways around the one-consumer-per-partition bound, as Spark can use more executor threads than there are partitions, so it's mostly about how fast your network and disks are.
Kafka keeps recently written data in the OS page cache, so for your use case most reads will probably come from memory.
Desired minimum number of partitions to read from Kafka. By default, Spark has a 1-1 mapping of topicPartitions to Spark partitions consuming from Kafka. If you set this option to a value greater than your topicPartitions, Spark will divvy up large Kafka partitions to smaller pieces. Please note that this configuration is like a hint: the number of Spark tasks will be approximately minPartitions. It can be less or more depending on rounding errors or Kafka partitions that didn't receive any new data.
https://spark.apache.org/docs/3.0.1/structured-streaming-kafka-integration.html
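A minimal sketch of the minPartitions option quoted above (assumptions: a Spark 3.x batch read in Java, placeholder broker, topic, and partition count), showing how Spark can be asked to split large Kafka partitions into more tasks:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class KafkaBatchRead {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("dataset-read").getOrCreate();

            Dataset<Row> df = spark.read()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "localhost:9092") // placeholder
                    .option("subscribe", "dataset-topic")                // placeholder
                    .option("startingOffsets", "earliest")
                    .option("endingOffsets", "latest")
                    .option("minPartitions", "200") // ask for more Spark tasks than Kafka partitions
                    .load();

            System.out.println("rows read: " + df.count());
        }
    }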

Do 3k Kafka topics decrease performance?

I have a Kafka cluster (using Aiven on AWS):
Kafka Hardware
Startup-2 (2 CPU, 2 GB RAM, 90 GB storage, no backups) 3-node high availability set
Ping between my consumers and the Kafka Broker is 0.7ms.
Background
I have a topic such that:
It contains data about 3000 entities.
Entity lifetime is a week.
Each week there will be a different 3000 entities (on average).
Each entity may have between 15k to 50k messages in total.
There can be at most 500 messages per second.
Architecture
My team built an architecture such that there will be a group of consumers. They will parse this data, perform some transformations (without any filtering!!) and then send the final messages back to Kafka, to topic=<entity-id>.
This means I upload the data back to Kafka, to a topic that contains only the data of a specific entity.
Questions
At any given time, there can be up to 3-4k topics in Kafka (1 topic for each unique entity).
Can my Kafka cluster handle this well? If not, what do I need to change?
Do I need to delete a topic, or is it fine to have (a lot of!!) unused topics over time?
Each consumer which consumes the final messages will consume 100 topics at the same time. I know Kafka clients can consume multiple topics concurrently, but I'm not sure what the best practices are for that.
Please share your concerns.
Requirements
Please focus on the potential problems of this architecture and try not to talk about alternative architectures (less topics, more consumers, etc).
The number of topics is not so important in itself, but each Kafka topic is partitioned and the total number of partitions could impact performance.
The general recommendation from the Apache Kafka community is to have no more than 4,000 partitions per broker (this includes replicas). The linked KIP article explains some of the possible issues you may face if the limit is breached, and with 3,000 topics it would be easy to do so unless you choose a low partition count and/or replication factor for each topic.
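As a rough illustration (the partition count and replication factor are assumed here, not given in the question): 3,000 topics with 3 partitions each and a replication factor of 3 comes to 3,000 x 3 x 3 = 27,000 partition replicas, or 9,000 per broker on a 3-node cluster, well above the recommended 4,000. Even with a single partition per topic and replication factor 3 you already have 3,000 replicas per broker.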
Choosing a low partition count for a topic is sometimes not a good idea, because it limits the parallelism of reads and writes, leading to performance bottlenecks for your clients.
Choosing a low replication factor for a topic is also sometimes not a good idea, because it increases the chance of data loss upon failure.
Generally it's fine to have unused topics on the cluster, but be aware that there is still a performance impact for the cluster to manage the metadata for all these partitions, and some operations will still take longer than if the topics were not there at all.
There is also a per-cluster limit but that is much higher (200,000 partitions). So your architecture might be better served simply by increasing the node count of your cluster.
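On the question's last point, about one consumer reading 100 topics at the same time: purely as an illustration (not part of the answer above; broker, group, and topic naming are assumptions), a single Java consumer can subscribe to many topics in one call, either as an explicit list or as a regex pattern:

    import java.time.Duration;
    import java.util.Properties;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class EntityTopicsConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("group.id", "final-message-consumers");   // placeholder group
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Subscribes to every topic matching the pattern; new matching topics
                // are picked up automatically on the next metadata refresh.
                consumer.subscribe(Pattern.compile("entity-.*"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    records.forEach(r -> System.out.println(r.topic() + ": " + r.value()));
                }
            }
        }
    }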

Does kafka support millions of partitions?

Will we have any problem if we have millions of partitions for one topic?
Due to our business requirement, we are thinking if we can make a partition for every user in kafka.
We have millions of users.
Any insight would be appreciated!
Yes, I think you will end up having problems if you have millions of partitions for several reasons:
(Most importantly!!) Customers come and go, so you will have the requirement to constantly change the number of partitions or have plenty of unused partitions (because you can not reduce the number of partitions within a topic).
More Partitions Requires More Open File Handles: More Partitions means more directories and segment files on disk.
More Partitions May Increase Unavailability: Planned failures move Leaders off of a Broker one at a time, with minimal downtime per partition. In a hard failure all the leaders are immediately unavailable.
More Partitions May Increase End-to-end Latency: For the message to be seen by a Consumer it must be committed. The Broker replicates data from the leader with a single thread, resulting in overhead per Partition.
More Partitions May Require More Memory In the Client
More details are provided in the blog from Confluent on How to choose the number of topics/partitions in a Kafka cluster?.
In addition, according to Confluent's training material for Kafka developers it is recommended:
"The current limits (2-4K Partitions/Broker, 100s K Partitions per cluster) are maximums. Most environments are well below these values (typically in the 1000-1500 range or less per Broker)."
This blog explains that "Apache Kafka Supports 200K Partitions Per Cluster".
This might change with the replacement of ZooKeeper (KIP-500) but, again, looking at the first bullet point above, this would still be an unhealthy software design.
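Purely as an illustrative sketch (not part of the answer above; topic name, key, and value are placeholders), the usual alternative is to key each record by user id, so that Kafka's default partitioner keeps every user's messages together and in order within a fixed, modest number of partitions:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class UserKeyedProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String userId = "user-42"; // placeholder key
                // Same key -> same partition, so per-user ordering is preserved
                // without needing a dedicated partition per user.
                producer.send(new ProducerRecord<>("user-events", userId, "{\"event\":\"login\"}"));
            }
        }
    }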

Kafka disk space gets full

I have a Kafka service with 1000GB disk and this running parameter:
log.retention.bytes=350000000000
However, the disk space usage reaches 90% (900GB). With that parameter in effect, the usage should not exceed roughly 326GB. Why could this happen?
Other properties:
log.index.interval.bytes=4000
log.segment.bytes=250000000
log.index.size.max.bytes=10485760
log.retention.ms=168
While the official documentation isn't very clear:
The maximum size of the log before deleting it
the Confluent documentation on topic configs (which should really be considered the official documentation anyway) has a better description (under retention.bytes):
This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.
In short, this config isn't even per topic; it's per partition. I'm not aware of a Kafka config that acts as a broker-wide size limit.
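As a rough illustration (the partition count is assumed, not given in the question): with retention.bytes at 350000000000 (~350GB) per partition, a topic with just 3 partitions on one broker can legitimately grow to roughly 1TB before old segments are discarded. Retention is also only applied to closed segments, never the active one, so usage can run a little higher still. That alone can explain 900GB of usage on a 1000GB disk.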
If you're trying to balance data load across multiple brokers in a cluster, perhaps you should look at Cruise Control.

Large number of channels in Kafka

I was wondering if Kafka has any limitation or starts slowing down (due to GC or other reasons) if we have a large number of channels. We have a heavy volume of data that we will be sending through Kafka (over 2B data points). We were thinking of having about 1600 channels to start with.
Has anyone come across issues with such a large number of channels in Kafka? Similarly, do you see issues with local DC replication with this large number of channels, and lastly, are there any foreseeable issues if we are using MirrorMaker for cross-DC replication with such a large number of channels?
Any pointers are highly appreciated
Thanks
I believe there is no hard limit on the number of topics in Kafka itself. However, since Kafka stores topic info in ZooKeeper (/brokers/topics/), and ZooKeeper has a 1MB limitation on maximum node size, there can only be a finite number of topics. Also, Kafka brokers store data for different topics in /var/kafka/data/. Performance may suffer if there are too many subdirectories in /var/kafka/data/.
I haven't tried thousands of topics, but Kafka with a few hundred topics works OK for my purposes. The only area where I had problems was dynamic topic creation while using the high-level consumer. It required client re-connection to pick up the new topics on all consumer boxes. This caused time-consuming consumer re-balancing (which sometimes failed, preventing reading from some topics). As a result I had to switch to the simple consumer and take care of read coordination in my code.
I'd recommend creating a simple test app that generates some random data for the number of topics you expect going forward, and verifying that the performance is acceptable.
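A minimal sketch of such a test app (assumptions: a local broker, single-partition topics with replication factor 1, and placeholder message counts; tune these to match your expected load):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.Random;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TopicLoadTest {
        public static void main(String[] args) throws Exception {
            int topicCount = 1600;        // number of "channels" you expect
            int messagesPerTopic = 1000;  // placeholder volume
            byte[] payload = new byte[1024];
            new Random().nextBytes(payload);

            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

            // Create all the topics up front.
            try (AdminClient admin = AdminClient.create(props)) {
                List<NewTopic> topics = new ArrayList<>();
                for (int i = 0; i < topicCount; i++) {
                    topics.add(new NewTopic("test-channel-" + i, 1, (short) 1));
                }
                admin.createTopics(topics).all().get();
            }

            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

            long start = System.currentTimeMillis();
            try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < topicCount; i++) {
                    for (int m = 0; m < messagesPerTopic; m++) {
                        producer.send(new ProducerRecord<>("test-channel-" + i, payload));
                    }
                }
                producer.flush();
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.printf("Produced %d messages across %d topics in %d ms%n",
                    (long) topicCount * messagesPerTopic, topicCount, elapsed);
        }
    }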