Kafka recommended system configuration

I'm expecting our inflow into Kafka to rise to around 2 TB/day over time. I'm planning to set up a Kafka cluster with 2 brokers (each running on a separate system). What is the recommended hardware configuration for handling 2 TB/day?

As a baseline, you could look here: https://docs.confluent.io/4.1.1/installation/system-requirements.html#hardware
You need to know the number of messages you receive per second/hour, because this determines the size of your cluster. As for disks, SSDs are not strictly necessary because Kafka writes to the OS page cache (RAM) first. Still, you will need reasonably fast disks to ensure that flushing the log to disk does not slow your system down.
I would also recommend using 3 Kafka brokers, and an odd number of ZooKeeper servers (3 or 5).
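As a rough back-of-the-envelope sizing check (assuming writes are spread evenly and, say, a replication factor of 3): 2 TB/day ≈ 2,000,000 MB / 86,400 s ≈ 23 MB/s of sustained ingress. With 3 brokers and replication factor 3, each broker ends up writing roughly that full 23 MB/s to disk, and peak rates are usually several times the daily average, so size the disks and network for the peaks rather than the mean.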

Related

Druid Tuning Configuration

I am a beginner with Druid and Kafka.
I want to build a real-time interactive pipeline from Kafka into Druid. I am still confused about which tuning parameters I should change in this configuration.
Thanks in advance.
It depends:
on the size of your Kafka messages
on your producers' capacity and the rate of messages produced to Kafka
on your Kafka servers, partitioning, and lots of other factors
on your Druid deployment model (single node or cluster)
and so on.
But the most important thing I see missing here is your task count, which determines the amount of parallel processing (and since your source is Kafka, that means parallel consumers). Increase it and make sure your Druid host (or your MiddleManager host if it is a cluster) has adequate cores for those tasks. Also make sure that in your MiddleManager you have increased the total number of available task slots:
druid.worker.capacity
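As a minimal sketch of where that setting lives (the file path and the value are illustrative, assuming a clustered deployment with a dedicated MiddleManager):
# middleManager/runtime.properties (value illustrative)
# Number of task slots on this MiddleManager; for Kafka ingestion this caps
# how many indexing tasks can consume in parallel. Keep it at or below the
# number of cores available to the host.
druid.worker.capacity=4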

How does the distribution mechanism work when Kafka runs locally?

How does the distribution mechanism work when Kafka runs locally? Please explain the disadvantages too.
If you only run one broker locally, you have a single point of failure and no processing is truly distributed.
If you have multiple brokers on the same machine, and you mount a different volume for each broker's logs, you end up with distributed storage plus fault tolerance, but still no truly distributed processing.
In either case, you can create as many topics as you want, with many partitions, but you can only set the replication factor of a topic to at most the number of active brokers (see the example below).
Multiple consumer processes also run fine on a single machine, but you'd get more throughput by separating brokers and consumers across several physical machines (more CPU available, and different network interfaces).
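For example, with a single local broker (commands assume a recent Kafka release; older releases take --zookeeper instead of --bootstrap-server, and the topic names are illustrative):
# works: replication factor 1 <= 1 active broker
bin/kafka-topics.sh --create --topic demo --partitions 6 --replication-factor 1 --bootstrap-server localhost:9092
# fails with InvalidReplicationFactorException ("larger than available brokers")
bin/kafka-topics.sh --create --topic demo2 --partitions 6 --replication-factor 3 --bootstrap-server localhost:9092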

Kafka performance reduced when adding more consumers or producers

I have 3 servers with a 10 Gb connection between them. I run a Kafka cluster on 2 of the servers and generate test load on the third...
When I run a single Java producer (on the third server, which is not in the Kafka cluster), sending 1 million messages takes 3 seconds; but when I run a second Java producer (with a different topic), both producers take 6 seconds to send their messages.
I'm sure the network connection is not the bottleneck (it is 10 Gb).
So why does this happen, and how can I solve it (I want both producers to take 3 seconds)?
Sounds like you are getting a consistent 333,333 messages/sec out of a two-node Kafka cluster, with ZooKeeper running on the same 2 machines as your 2 Kafka brokers. You don't say what size these messages are, what kind of disks you are using, how much memory you have, whether you are publishing with acks=all, or what programming language you are using (I assume Java), but that actually sounds like good, consistent performance that is probably disk-I/O bound on the brokers or CPU bound on your single client machine.
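If the bottleneck turns out to be on the client side, these are the usual producer-side levers to experiment with (a sketch only; the values are illustrative, not recommendations):
# producer configuration (values illustrative)
acks=1                    # acks=all waits for all in-sync replicas and is slower
compression.type=lz4      # fewer bytes over the wire and on disk
batch.size=65536          # larger batches amortize per-request overhead
linger.ms=10              # wait a few ms so batches actually fill up
buffer.memory=67108864    # memory available for buffering unsent records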

Separate ZooKeeper install or not when using Kafka 0.10.2?

I would like to use the embedded ZooKeeper 3.4.9 that comes with Kafka 0.10.2, and not install ZooKeeper separately. Each Kafka broker would always have a 1:1 ZooKeeper instance on localhost.
So if I have 5 brokers on hosts A, B, C, D and E, each with a single Kafka and ZooKeeper instance running on them, is it sufficient to just run the ZooKeeper provided with Kafka?
What downsides or configuration limitations, if any, does the embedded 3.4.9 ZooKeeper have compared to the standalone version?
Here are a few reasons not to run ZooKeeper on the same boxes as your Kafka brokers.
They scale differently
A 1:1 mapping of 5 ZooKeeper nodes to 5 Kafka brokers happens to work, but 6:6 or 11:11 does not make sense. You don't need more than 5 ZooKeeper nodes even for quite a large Kafka cluster. Unlike Kafka, ZooKeeper replicates data to all nodes, so it gets slower as you add more nodes.
They compete for disk I/O
ZooKeeper is very sensitive to disk I/O latency. You need to keep it on a separate physical disk from the Kafka commit log, or you run the risk that heavy publishing to Kafka will slow ZooKeeper down, cause it to drop out of the ensemble, and create potential problems (see the zoo.cfg sketch after this list).
They compete for page cache memory
Kafka uses the Linux page cache to reduce disk I/O. When other applications run on the same box as Kafka, you reduce or "pollute" the page cache with other data, which takes cache away from Kafka.
Server failures take down more infrastructure
If the box reboots you lose both a zookeeper and a broker at the same time.
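If you do end up co-locating them anyway, at minimum put the ZooKeeper transaction log on its own disk, away from the volume holding Kafka's log.dirs. In zoo.cfg that looks roughly like this (paths illustrative):
# conf/zoo.cfg (paths illustrative)
dataDir=/var/lib/zookeeper          # snapshots
dataLogDir=/zk-txnlog/zookeeper     # transaction log on a dedicated disk, separate from Kafka's log.dirs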
Even though ZooKeeper ships with each Kafka release, that does not mean they should run on the same server. In fact, for production environments it is advised that they run on separate servers.
In the Kafka broker configuration you can specify the ZooKeeper address, which can be local or remote. This is from the broker config (config/server.properties):
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
You can replace localhost with any other accessible server name or IP address.
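For example, to point the broker at a remote three-node ensemble, optionally with a chroot suffix so all Kafka znodes live under one path (hostnames illustrative):
# three-node ensemble with an optional /kafka chroot
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka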
We've been running a setup like the one you describe, with 3 to 5 nodes, each running a Kafka broker and the ZooKeeper that ships with the Kafka distribution on the same node. No issues with that setup so far, but our data throughput isn't high.
If we were to scale above 5 nodes we'd separate them, so that we only scale the Kafka brokers but keep the ZooKeeper ensemble small. If ZooKeeper and Kafka start competing too much for I/O, we'd move their data directories to separate drives. If they start competing for CPU, we'd move them to separate boxes.
All in all, it depends on your expected throughput and how easily you can upgrade your setup if it starts causing contention. You can start small and simple, with Kafka and ZooKeeper co-located, as long as you keep the flexibility to add more nodes and introduce separation later on. If you think that will be hard to do later, it's better to run them separately from the start. We've been running them co-located for 18+ months and haven't hit resource contention so far.

Does it make sense to have more #replicas than #brokers in Kafka?

I am going to use Kafka as a messaging system, but I'm still missing a few dots in my mind:
How many brokers can I have on one machine?
Does it make sense to have more #replicas (partition replicas) than #brokers in Kafka?
Is it possible to add an additional ZooKeeper server (on another machine) to scale out without shutting down/restarting the current service?
You can have more than one broker per machine, but there is usually no good reason to do so.
I cannot think of a good reason to specify more #replicas than #brokers.
Your ZooKeeper servers should ideally be on separate machines, and there should be an odd number of nodes. There is a tradeoff between write latency and resiliency here: 3 ZooKeeper nodes are common where write latency is very important; 5 or even 7 nodes can be used for more resiliency.
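As a minimal sketch of what an odd-sized ensemble looks like, here is a 3-node zoo.cfg; every node carries the same server list, and the hostnames are illustrative:
# conf/zoo.cfg on each of the three ZooKeeper machines (hostnames illustrative)
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/var/lib/zookeeper   # each node also needs a matching myid file in dataDir
clientPort=2181
server.1=zk1.example.com:2888:3888   # 2888 = follower-to-leader traffic, 3888 = leader election
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888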