I am building a production environment that will be running Apache Kafka, and I want to know the best hardware combination for good performance. I will be handling 5000 transactions/second.
You would need to provide some more details regarding your use case, like the average message size, but here's my 2 cents anyway:
Confluent's documentation might shed some light:
CPUs: Most Kafka deployments tend to be rather light on CPU
requirements. As such, the exact processor setup matters less than the
other resources. Note that if SSL is enabled, the CPU requirements can
be significantly higher (the exact details depend on the CPU type and
JVM implementation).
You should choose a modern processor with multiple cores. Common
clusters utilize 24 core machines.
If you need to choose between faster CPUs or more cores, choose more
cores. The extra concurrency that multiple cores offers will far
outweigh a slightly faster clock speed.
How to compute your throughput
It might also be helpful to compute the throughput. For example, if you have 800 messages per second of 500 bytes each, then your throughput is 800*500/(1024*1024) = ~0.4MB/s. Now if your topic is partitioned and you have 3 brokers up and running with 3 replicas, each broker leads a third of that traffic but also receives the replicated copies, so the per-broker load is 0.4/3*3 = 0.4MB/s.
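If it helps, here's a minimal sketch of that calculation in Java (the message rate, message size, broker count, and replication factor are just the example figures from above; substitute your own):

public class KafkaThroughputEstimate {
    public static void main(String[] args) {
        double messagesPerSecond = 800;   // example figures from above
        double messageSizeBytes = 500;
        int brokers = 3;
        int replicationFactor = 3;

        // Total produce throughput for the cluster, in MB/s.
        double clusterMBps = messagesPerSecond * messageSizeBytes / (1024 * 1024);

        // Each broker leads 1/brokers of the traffic but also receives the
        // replicated copies, so the per-broker load is:
        double perBrokerMBps = clusterMBps / brokers * replicationFactor;

        System.out.printf("Cluster ingress: %.2f MB/s%n", clusterMBps);
        System.out.printf("Per-broker load: %.2f MB/s%n", perBrokerMBps);
    }
}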
More details regarding your architecture can be found in Confluent's whitepaper Apache Kafka and Confluent Reference Architecture. Here's the section on memory usage:
ZooKeeper uses the JVM heap, and 4GB RAM is typically sufficient. Too
small of a heap will result in high CPU due to constant garbage
collection, while too large a heap may result in long garbage collection
pauses and loss of connectivity within the ZooKeeper cluster.
Kafka brokers use both the JVM heap and the OS page cache. The JVM heap is used for replication of partitions between brokers and for log
compaction. Replication requires 1MB (default replica.fetch.max.bytes)
for each partition on the broker. In Apache Kafka 0.10.1 (Confluent
Platform 3.1), we added a new configuration
(replica.fetch.response.max.bytes) that limits the total RAM used for
replication to 10MB, to avoid memory and garbage collection issues
when the number of partitions on a broker is high. For log compaction,
calculating the required memory is more complicated and we recommend
referring to the Kafka documentation if you are using this feature.
For small to medium-sized deployments, 4GB heap size is usually
sufficient. In addition, it is highly recommended that consumers
always read from memory, i.e. from data that was written to Kafka and
is still stored in the OS page cache. The amount of memory this
requires depends on the rate at which this data is written and how far
behind you expect consumers to get. If you write 20GB per hour per
broker and you allow brokers to fall 3 hours behind in a normal
scenario, you will want to reserve 60GB for the OS page cache. In cases
where consumers are forced to read from disk, performance will drop
significantly.
Kafka Connect itself does not use much memory, but some connectors buffer data internally for efficiency. If you run multiple connectors
that use buffering, you will want to increase the JVM heap size to 1GB
or higher.
Consumers use at least 2MB per consumer and up to 64MB in cases of large responses from brokers (typical for bursty traffic).
Producers will have a buffer of 64MB each. Start by allocating 1GB RAM and add 64MB for each producer and 16MB for each consumer planned.
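To make that last rule of thumb concrete, here's a small sketch that applies it (the 1GB base, 64MB per producer, and 16MB per consumer come from the quoted guidance; the client counts are placeholders):

public class KafkaClientMemoryEstimate {
    // Figures from the quoted guidance, in MB.
    private static final int BASE_MB = 1024;        // start by allocating 1GB
    private static final int PER_PRODUCER_MB = 64;  // one 64MB buffer per producer
    private static final int PER_CONSUMER_MB = 16;  // 16MB per planned consumer

    static int estimateMB(int producers, int consumers) {
        return BASE_MB + producers * PER_PRODUCER_MB + consumers * PER_CONSUMER_MB;
    }

    public static void main(String[] args) {
        // Placeholder client counts; substitute your planned numbers.
        System.out.println(estimateMB(10, 20) + " MB");  // 1024 + 640 + 320 = 1984 MB
    }
}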
There are many different factors that need to be taken into consideration when it comes to tuning the configuration of your architecture. I would suggest going through the aforementioned documentation, monitoring your existing cluster and resources, and finally tuning them accordingly.
Related
How can I monitor and check the memory usage of a Kafka topic? Are there ways to do it without installing other tools?
Topics allocate disk space on the brokers, not (significant) RAM, due to zero-copy writes. Only temporary memory allocations are made to handle regular client interactions, and to my knowledge the JVM metrics only show bytes in/out per topic, not memory (or CPU) per topic. There are many tools that allow you to inspect JMX beans if you'd like to take a look yourself, but none are built into Kafka, so you would still be "installing other tools".
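If you do want to read those per-topic JMX metrics yourself, a rough sketch with plain javax.management might look like this. It assumes the broker was started with remote JMX enabled (e.g. JMX_PORT=9999) and that the usual kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec bean is exposed; the host, port, and topic name are placeholders.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TopicBytesInRate {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX endpoint; the broker must expose remote JMX.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Per-topic bytes-in meter; "my-topic" is a placeholder.
            ObjectName name = new ObjectName(
                    "kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec,topic=my-topic");
            Object rate = mbsc.getAttribute(name, "OneMinuteRate");
            System.out.println("BytesInPerSec (1 min rate): " + rate);
        }
    }
}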
How can I calculate how much memory and CPU my Kafka cluster needs?
My cluster consists of 3 nodes, with a throughput of ~800 messages per second.
Currently each node has 6 GB RAM, 2 CPUs, and a 1TB disk, and it doesn't seem to be enough. How much would you allocate?
I think you want to start by profiling your Kafka cluster.
See the answer to this post: CPU Profiling kafka brokers.
It basically recommends that you use a Prometheus and Grafana stack to visualize your load on a timeline, from which you should be able to determine your bottleneck, and it links to an article that describes how.
Also, you may find the post interesting, because the poster seems to have about the same workload as you.
We are planning to build a multi TB Kafka Cluster.
From LinkedIn presentations, which are supposed to handle the largest Kafka cluster in the world, it seems like they are using a few pretty large servers.
We are planning to go the other way: Launch a lot of small Kafka brokers handling a few GB each.
What are the pros and cons of scaling vertically vs. horizontally with Kafka? E.g. for 50TB: having 5 brokers handling 10TB each, or 5000 brokers handling 10GB each.
Those numbers are made up.
ps: Maintaining 5 or 5000 servers for us has the same operational cost as it's all automated.
My recommendation would be to go with 5 brokers with 10TB each, with 3 redundant copies of the data (RF3). Kafka brokers generate a lot of crosstalk/chatter between them, so it's best to minimize the network overhead as well as the operational, and even cognitive, overhead when there are issues.
You mention that operational cost is all the same to you. In my experience, it's never that simple. There's setup time, configuration for 5000 different machines, network traffic, etc. And even if it's all automated, 5000 servers will have hardware issues, on average, at 1000x the rate of 5 servers, so if you expect 1% of the servers to fail per year, that's about 50 failures, i.e. brokers failing almost weekly. Having large servers doesn't guarantee no hardware failures, but the likelihood is lower.
Scenario: I have a low-volume topic (~150msgs/sec) for which we would like to have a
low propagation delay from producer to consumer.
I added a timestamp at the producer and read it at the consumer to record the propagation delay. With default configurations, a message of 20 bytes showed a propagation delay of 1230ms to 1960ms. No network delay is involved, since I tried this with 1 producer and 1 simple consumer on the same machine.
When I tried adjusting the topic flush interval to 20ms, the delay dropped to 980ms to 1100ms. Then I tried adjusting the consumer's "fetcher.backoff.ms" to 10ms, and it dropped to 860ms to 1070ms.
Issue: For a 20-byte message, I would like the propagation delay to be as low as possible, and ~950ms is too high.
Question: Is there anything I am missing in the configuration?
I welcome comments on the minimum delay you have achieved.
Assumption: Kafka involves disk I/O before the consumer gets the message from the producer, so this depends on the hard disk's RPM and so on.
Update:
Tried tuning the log flush policy for durability & latency. The following is the configuration:
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval=10
# The maximum amount of time a message can sit in a log before we force a flush
log.default.flush.interval.ms=100
# The interval (in ms) at which logs are checked to see if they need to be
# flushed to disk.
log.default.flush.scheduler.interval.ms=100
For the same 20-byte message, the delay was 740ms to 880ms.
The following statements are made clear in the configuration itself.
There are a few important trade-offs:
Durability: Unflushed data is at greater risk of loss in the event of a crash.
Latency: Data is not made available to consumers until it is flushed (which adds latency).
Throughput: The flush is generally the most expensive operation.
So, I believe there is no way to get down to the 150ms to 250ms mark (without a hardware upgrade).
I am not trying to dodge the question, but I think that Kafka is a poor choice for this use case. While I think Kafka is great (I have been a huge proponent of its use at my workplace), its strength is not low latency. Its strengths are high producer throughput and support for both fast and slow consumers. While it does provide durability and fault tolerance, so do more general-purpose systems like RabbitMQ. RabbitMQ also supports a variety of different clients including node.js. Where RabbitMQ falls short when compared to Kafka is when you are dealing with extremely high volumes (say 150K msg/s). At that point, Rabbit's approach to durability starts to fall apart and Kafka really stands out. The durability and fault tolerance capabilities of Rabbit are more than capable at 20K msg/s (in my experience).
Also, to achieve such high throughput, Kafka deals with messages in batches. While the batches are small and their size is configurable, you can't make them too small without incurring a lot of overhead. Unfortunately, message batching makes low latency very difficult. While you can tune various settings in Kafka, I wouldn't use Kafka for anything where latency needed to be consistently less than 1-2 seconds.
Also, Kafka 0.7.2 is not a good choice if you are launching a new application. All of the focus is on 0.8 now, so you will be on your own if you run into problems, and I definitely wouldn't expect any new features. For future stable releases, follow the link here: stable Kafka release.
Again, I think Kafka is great for some very specific, though popular, use cases. At my workplace we use both Rabbit and Kafka. While that may seem gratuitous, they really are complementary.
I know it's been over a year since this question was asked, but I've just built up a Kafka cluster for dev purposes, and we're seeing <1ms latency from producer to consumer. My cluster consists of three VM nodes running on a cloud VM service (Skytap) with SAN storage, so it's far from ideal hardware. I'm using Kafka 0.9.0.0, which is new enough that I'm confident the asker was using something older. I have no experience with older versions, so you might get this performance increase simply from an upgrade.
I'm measuring latency by running a Java producer and consumer I wrote. Both run on the same machine, on a fourth VM in the same Skytap environment (to minimize network latency). The producer records the current time (System.nanoTime()), uses that value as the payload in an Avro message, and sends (acks=1). The consumer is configured to poll continuously with a 1ms timeout. When it receives a batch of messages, it records the current time (System.nanoTime() again), then subtracts the receive time from the send time to compute latency. When it has 100 messages, it computes the average of all 100 latencies and prints to stdout. Note that it's important to run the producer and consumer on the same machine so that there is no clock sync issue with the latency computation.
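In outline, the measurement looks roughly like this (a simplified sketch using a current Kafka client, not the exact code: it sends the raw nanosecond timestamp as the record value instead of wrapping it in an Avro message, and it assumes a broker at localhost:9092 and a topic named latency-test):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.LongSerializer;

public class LatencyProbe {
    private static final String TOPIC = "latency-test";       // placeholder topic
    private static final String BOOTSTRAP = "localhost:9092"; // placeholder broker

    // Producer: the record value is the send time in nanoseconds.
    static void produce() throws InterruptedException {
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        p.put(ProducerConfig.ACKS_CONFIG, "1");
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        try (KafkaProducer<Long, Long> producer = new KafkaProducer<>(p)) {
            while (true) {
                producer.send(new ProducerRecord<>(TOPIC, System.nanoTime()));
                Thread.sleep(5); // roughly 200 messages per second
            }
        }
    }

    // Consumer: latency = receive time minus the timestamp carried in the value.
    static void consume() {
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP);
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "latency-probe");
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName());
        try (KafkaConsumer<Long, Long> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList(TOPIC));
            long sumNanos = 0;
            int count = 0;
            while (true) {
                ConsumerRecords<Long, Long> records = consumer.poll(Duration.ofMillis(1));
                long now = System.nanoTime();
                for (ConsumerRecord<Long, Long> record : records) {
                    sumNanos += now - record.value();
                    if (++count == 100) { // average over batches of 100 messages
                        System.out.printf("avg latency: %.2f ms%n", sumNanos / 100 / 1e6);
                        sumNanos = 0;
                        count = 0;
                    }
                }
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(LatencyProbe::consume).start();
        produce();
    }
}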
I've played quite a bit with the volume of messages generated by the producer. There is definitely a point where there are too many and latency starts to increase, but it's substantially higher than 150/sec. The occasional message takes as much as 20ms to deliver, but the vast majority are between 0.5ms and 1.5ms.
All of this was accomplished with Kafka 0.9's default configurations. I didn't have to do any tweaking. I used batch-size=1 for my initial tests, but I found later that it had no effect at low volume and imposed a significant limit on the peak volume before latencies started to increase.
It's important to note that when I run my producer and consumer on my local machine, the exact same setup reports message latencies in the 100ms range -- the exact same latencies reported if I simply ping my Kafka brokers.
I'll edit this message later with sample code from my producer and consumer along with other details, but I wanted to post something before I forget.
EDIT, four years later:
I just got an upvote on this, which led me to come back and re-read. Unfortunately (but actually fortunately), I no longer work for that company, and no longer have access to the code I promised I'd share. Kafka has also matured several versions since 0.9.
Another thing I've learned in the ensuing time is that Kafka latencies increase when there is not much traffic. This is due to the way the clients use batching and threading to aggregate messages. It's very fast when you have a continuous stream of messages, but any time there is a moment of "silence", the next message will have to pay the cost to get the stream moving again.
It's been some years since I was deep in Kafka tuning. Looking at the latest version (2.5 -- producer configuration docs here), I can see that they've decreased linger.ms (the amount of time a producer will wait before sending a message, in hopes of batching up more than just the one) to zero by default, meaning that the aforementioned cost to get moving again should not be a thing. As I recall, in 0.9 it did not default to zero, and there was some tradeoff to setting it to such a low value. I'd presume that the producer code has been modified to eliminate or at least minimize that tradeoff.
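For reference, a minimal low-latency producer setup on a current client would be along these lines (a sketch only; the broker address is a placeholder, and acks=1 trades some durability for latency as discussed above):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class LowLatencyProducer {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.LINGER_MS_CONFIG, "0"); // send immediately, don't wait to fill a batch
        props.put(ProducerConfig.ACKS_CONFIG, "1");      // leader-only acknowledgement
        return new KafkaProducer<>(props);
    }
}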
Modern versions of Kafka seem to have pretty minimal latency, as the results from here show:
2 ms (median)
3 ms (99th percentile)
14 ms (99.9th percentile)
Kafka can achieve around millisecond latency by using synchronous messaging. With synchronous messaging, the producer does not collect messages into a batch before sending.
bin/kafka-console-producer.sh --broker-list my_broker_host:9092 --topic test --sync
The following has the same effect:
--batch-size 1
If you are using librdkafka as the Kafka client library, you must also set socket.nagle.disable=True.
See https://aivarsk.com/2021/11/01/low-latency-kafka-producers/ for some ideas on how to see what is taking those milliseconds.
Will frequent Compaction and Memtable Flushing affect write latency of the cluster?
In our implementation we have a bunch of counter column families [about 30] which get updated very actively. Every request to our system does around 15-20 updates [all different CFs].
We notice compaction and flushing happening very frequently in our Cassandra system logs under heavy traffic. At the same time, we also experience high load on the nodes responsible for the keys [Day Timestamp, Minute Timestamp, Hour Timestamp], and the write latency of the cluster increases above usual [0.6 ms to 26 ms].
We haven't touched any of the Cassandra defaults, and our machines running Cassandra have a reasonably good configuration [32GB RAM and 16 cores], with 4GB allocated to Cassandra.
We tried disabling durable_writes to see whether it helps, but it didn't do as much good as we expected.
Short version: if Cassandra is configured as recommended with commitlog on a separate disk from the data directories, then flush and compaction should have negligible impact.
Caveats:
Updates are primarily CPU-bound, and compaction takes a lot of CPU. If you are running on machines or VMs with less than 4 cores [not your situation, but for the sake of completeness] you might want to reduce compaction_throughput_mb_per_sec to throttle it down.
If you have enough CFs flushing all at the same time (which it sounds like may be the case with updating 2/3 of your CFs with each request) then Cassandra may block writes temporarily to make sure that it's not accepting writes faster than it can flush them (which could otherwise eventually result in running out of memory and dying). 4 GB is a relatively small heap for high volume inserts across many CFs; I'd suggest increasing that to 8. It's also worth enabling the JVM GC logging to see how hard the JVM is having to work -- example settings are in cassandra-env.sh.
Finally, you don't mention the Cassandra version you are using, but performance has reliably increased with each major release. Especially if you are using something older than 0.8, I would recommend upgrading.