I am working on scaling the Kafka cluster in prod. Confluent provides an easy way to add Kafka brokers, but how do I know how to scale ZooKeeper along with Kafka? What should the ratio be? Right now we have 5 ZooKeeper nodes for 5 Kafka brokers. If I have 10 Kafka brokers, how many ZooKeeper nodes should there be?
ZooKeeper works as a coordination service for Apache Kafka and stores the metadata of the Kafka cluster. A ZooKeeper cluster is called an ensemble.
The number of servers in a ZooKeeper ensemble is an odd number (3, 5, etc.). This number determines how fault tolerant your cluster is: a three-node ensemble can keep running with one node missing.
With a five-node ensemble, you can run with two nodes missing and your cluster will still be available.
You can add as many ZooKeeper servers as you need, based on how much failure you want the system to tolerate; however, an ensemble of more than 7 nodes is not recommended because of the latency and communication overhead between the nodes. Note that the ensemble size is driven by the fault tolerance you need, not by the number of brokers, so an ensemble of 5 ZooKeeper nodes is typically still enough for 10 Kafka brokers.
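As a minimal sketch, a five-node ensemble would be defined by a server list like the one below in each node's zoo.cfg (the hostnames zk1..zk5 are placeholders; each node additionally needs a myid file matching its server number):
# zoo.cfg (identical on every ZooKeeper node)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
server.4=zk4:2888:3888
server.5=zk5:2888:3888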
Related
I have two VM servers (say S1 and S2) and need to install Kafka in cluster mode, where there will be a topic with only one partition and two replicas (one the leader, the other a follower) for reliability.
Got a high-level idea from this cluster setup. Want to confirm whether the strategy below is correct.
First set up ZooKeeper as a cluster on both nodes for high availability (HA). If I set up ZK on a single node only and that node goes down, the complete cluster will be down. Right? Is it mandatory to use ZK in the latest Kafka version as well? It looks like it is a must for older versions: Is Zookeeper a must for Kafka?
Start the Kafka broker on both nodes. It can be on the same port, as it is hosted on different nodes.
Create the topic on any node with 1 partition and 2 replicas.
ZooKeeper will select the broker on one node as leader and the other as follower.
The producer will connect to any broker and start publishing messages.
If the leader goes down, ZooKeeper will select another node as leader automatically. Not sure how a replication factor of 2 will be maintained now, as there is only one node live?
Is the above strategy correct?
Useful resources
ISR
ISR vs replication factor
First set up ZooKeeper as a cluster on both nodes for high availability (HA). If I set up ZK on a single node only and that node goes down, the complete cluster will be down. Right? Is it mandatory to use ZK in the latest Kafka version as well? It looks like it is a must for older versions: Is Zookeeper a must for Kafka?
Answer: Yes. ZooKeeper is still a must until KIP-500 is released. ZooKeeper is responsible for electing the controller, storing metadata about the Kafka cluster, and managing broker membership (link). Ideally the number of ZooKeeper nodes should be at least 3; that way you can tolerate one node failure (2 healthy ZooKeeper nodes, still a majority of the cluster, are capable of electing a controller). You should also consider setting up the ZooKeeper cluster on machines other than the ones Kafka is installed on, so that the failure of a server doesn't take down both a ZooKeeper node and a Kafka node.
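For example, you can check which broker currently acts as controller by reading the /controller znode with the zookeeper-shell tool that ships with Kafka (a sketch, assuming a ZooKeeper node reachable at localhost:2181):
bin/zookeeper-shell.sh localhost:2181 get /controller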
Start the Kafka broker on both nodes. It can be on the same port, as it is hosted on different nodes.
Answer: You should first start the ZooKeeper cluster, then the Kafka cluster. The same port on different nodes is fine.
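As a minimal sketch of the startup order on each node (assuming the stock scripts and config files that ship with Apache Kafka):
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties   # start ZooKeeper on both nodes first
bin/kafka-server-start.sh -daemon config/server.properties          # then start the broker on both nodes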
Create the topic on any node with 1 partition and 2 replicas.
Answer: Partitions are used for horizontal scalability; if you don't need that, one partition is okay. With a replication factor of 2, one of the nodes will be the leader and the other will be the follower for the partition at any given time. But that is not enough to completely avoid data loss while also providing HA. In the ideal configuration for avoiding data loss without compromising HA, you should have at least 3 Kafka brokers, a replication factor of 3 for topics, min.insync.replicas=2 as a broker config, and acks=all as a producer config. (You can check this for more information.)
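For reference, the topic from the question could be created like this (a sketch, assuming Kafka 2.2+ where kafka-topics.sh accepts --bootstrap-server; the topic name and the S1/S2 host names are placeholders):
# create the single-partition topic with two replicas
bin/kafka-topics.sh --create --bootstrap-server S1:9092,S2:9092 --topic my-topic --partitions 1 --replication-factor 2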
ZooKeeper will select the broker on one node as leader and the other as follower.
Answer: The controller broker is responsible for maintaining the leader/follower relationship for all partitions. One broker will be the partition leader and the other will be a follower. You can check partition leaders and followers with this command:
bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic my-replicated-topic
The producer will connect to any broker and start publishing messages.
Answer: Yes. Setting only one broker in bootstrap.servers is enough to connect to the Kafka cluster, but for redundancy you should provide more than one broker in bootstrap.servers.
bootstrap.servers: A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
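A producer configuration with redundant bootstrap servers could look like this (a sketch; the S1/S2 host names are placeholders):
# producer config
bootstrap.servers=S1:9092,S2:9092
acks=all   # the durability setting mentioned above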
If the leader goes down, ZooKeeper will select another node as leader automatically. Not sure how a replication factor of 2 will be maintained now, as there is only one node live?
Answer: If the controller broker goes down, ZooKeeper is used to select another broker as the new controller. If the broker which is the leader of your partition goes down, one of the in-sync replicas becomes the new leader (the controller broker is responsible for this). But of course, if you have just two brokers and one is down, replication won't be possible. That's why you should have at least 3 brokers in your Kafka cluster.
Yes - ZooKeeper is still needed in Kafka 2.4, but you can read about KIP-500, which plans to remove the dependency on ZooKeeper in the near future and use the Raft algorithm to form the quorum.
As you already understood, if you install ZK on a single node it will run in standalone mode and you won't have any resiliency. The classic ZK ensemble consists of 3 nodes, which allows you to lose 1 ZK node.
After pointing your Kafka brokers to the right ZK cluster, you can start your brokers and the cluster will be up and running.
In your example, I would suggest you create another node in order to gain better resiliency and meet the replication factor you wanted, while still being able to lose one node without losing data.
Bear in mind that using a single partition means you are limited to a single consumer per consumer group; the rest of the consumers will be idle.
I suggest you read this blog about Kafka best practices and how to choose the number of topics/partitions in a Kafka cluster.
I currently have a 3-node Kafka cluster which connects to the base chroot path in my ZooKeeper ensemble.
zookeeper.connect=172.12.32.123:2181,172.11.43.211:2181,172.18.32.131:2181
Now I want to add a new 5-node Kafka cluster which will connect to some other chroot path in the same ZooKeeper ensemble.
zookeeper.connect=172.12.32.123:2181,172.11.43.211:2181,172.18.32.131:2181/cluster/2
Will these configurations work as intended, with the relative paths of the two chroots keeping the clusters separate? I understand that the original Kafka cluster should have been connected on some path other than the base chroot path for better isolation.
Also, is it good to have the same ZooKeeper ensemble across Kafka clusters? The documentation says that it is generally better to have isolated ZooKeeper ensembles for different clusters.
If you're limited to a single ZooKeeper cluster, then it should work out fine with a unique chroot that doesn't collide with the other cluster's znodes.
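For example, once the new cluster is up, you could verify that the two clusters registered their brokers under separate paths with the zookeeper-shell tool that ships with Kafka (a sketch, reusing the first ZooKeeper address from the question):
bin/zookeeper-shell.sh 172.12.32.123:2181 ls /brokers/ids            # brokers of the original cluster (base chroot)
bin/zookeeper-shell.sh 172.12.32.123:2181 ls /cluster/2/brokers/ids  # brokers of the new cluster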
It is not "good" to share, no, because ZooKeeper losing quorum causes both clusters to be down; but again, if you're limited on hardware, it will still work.
Note: with 3 nodes in the ZooKeeper cluster you can only afford to lose one ZK server, which is why a cluster of 5 is recommended.
Say I have 3 partitions with replication factor 3. What I understood is that they all have to connect to the same ZooKeeper. OK, what if they can't due to network issues? Will replication continue when the network is available again?
If ZK is down, your Kafka cluster will have limited functionality. For details, see How does Kafka depend on Zookeeper?
Kafka requires ZooKeeper (ZK). If ZK is down, then the entire Kafka cluster will be "down" (meaning: almost unusable). ZK is used for a bunch of things, like electing the controller, managing internal topics, etc.
Once ZK becomes available to the Kafka cluster again, the cluster will be operational again.
Is anyone using auto scaling to scale their ZooKeeper cluster? If ZooKeeper scales, how do clients know it has been scaled up or down? Especially with something like Kafka, where the ZooKeeper list is added to the config file: if ZooKeeper is scaled, how does Kafka know it has been scaled, etc.?
Short answer: ZooKeeper clients do not essentially need to know or track whether new nodes have been added to the ZooKeeper cluster. They just need at least one healthy ZK node available to them.
Longer answer (with Kafka as example client of ZK):
1. If you're only adding new nodes to the ZooKeeper cluster, it's not essential for Kafka brokers to know about this, because the zookeeper.connect configuration still contains healthy ZK nodes.
2. If, however, you're replacing/removing some of the ZooKeeper nodes, and these are the only nodes present in the zookeeper.connect configuration, then a rolling restart of the Kafka nodes will be required after updating the zookeeper.connect configuration.
For #1 above, it is best to add the new ZK nodes to the Kafka configuration at the next opportunity for a Kafka cluster restart.
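As an illustration, the updated zookeeper.connect before such a rolling restart might look like this (the host names are placeholders; zk4 and zk5 stand for the newly added nodes):
# server.properties - list both the original and the newly added ZooKeeper nodes
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181,zk4:2181,zk5:2181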
The same is applicable to other technologies that depend on ZK (e.g. Apache Storm).
I am wondering, is there any way to make ZooKeeper fail over for a Kafka cluster?
For example: I want to set up 2 ZooKeeper instances for my Kafka cluster. In case one ZooKeeper fails, the Kafka servers should still be able to read topic metadata from the second ZooKeeper.
Any advice is highly appreciated.
ZooKeeper works as a so-called quorum: a cluster of nodes that forms consensus based on simple majority votes.
For production, you should use 3 or 5 Zookeeper instances in a quorum.
If you're using 3, your cluster can survive losing one server (because the remaining two form a simple majority). With 5, you can lose two servers because 3 is a majority of 5.
2 is a bad idea because your cluster won't work if 1 node goes down (the single remaining node is not a majority of 2).
Please check this question
$KAFKA_HOME/config/server.properties
Here you can set multiple ZooKeeper servers:
zookeeper.connect=<server1>:2181,<server2>:2181,<server3>:2181
Maintain the 2n+1 (quorum) rule for the number of ZooKeeper nodes.