I have a five-node Kafka cluster (Confluent 5.5 Community Edition) with 3 ZooKeeper nodes, each on a different AWS instance.
While doing failover testing, I noticed that the Kafka cluster keeps working even when all ZooKeeper nodes are down.
I was able to produce, consume, and also create new consumers.
Why does the Kafka cluster not stop if it cannot connect to any ZooKeeper node?
What would the possible issues be if we were unaware of such a failure in production and the Kafka cluster continued to run without ZooKeeper connectivity?
How do we handle such a scenario?
Broker leader election, topic creation, and simple ACLs (if you use them) still depend on ZooKeeper. Other basic functions that rely on the Kafka bootstrap protocol may well keep working, sure. There should definitely be broker logs indicating that the connection was lost.
Ideally you'd have basic process health checking and incident-management tooling in place so that you don't miss critical services going down in prod.
How to handle it? Restart ZooKeeper...
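For the health-checking part, ZooKeeper's four-letter-word commands are a quick way to probe each node, and the broker logs will show the lost session. A rough sketch, with placeholder hostnames and log path (on ZooKeeper 3.5+ these commands may need to be enabled via 4lw.commands.whitelist):

echo ruok | nc zk1.example.com 2181                 # "imok" means the server is running
echo srvr | nc zk1.example.com 2181                 # shows Mode: leader/follower once it has joined the quorum
grep -i zookeeper /var/log/kafka/server.log | tail  # broker side: look for expired or lost sessions

Wire those probes into whatever monitoring you already run so an alert fires long before you discover the outage by accident.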
Related
While creating the cluster setup for Kafka I came to know that a ZooKeeper quorum is needed for coordination between the Kafka brokers.
Are there any other real-world scenarios where ZooKeeper is used, other than for a Kafka setup?
This link lists many applications and organisations using ZooKeeper
https://zookeeper.apache.org/doc/r3.6.2/zookeeperUseCases.html
ZooKeeper is used by many Apache projects; it is a distributed coordination service used to manage a large set of hosts. In simple terms, ZooKeeper lets the workers get on with their jobs and handles all the other complexities, e.g. detecting that a leader has gone down, alerting the workers, and electing a new leader.
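As a toy illustration of that coordination primitive, you can experiment with ephemeral znodes from the zkCli.sh shell that ships with ZooKeeper (paths and data below are made up):

zkCli.sh -server zk1:2181
create /workers ""                    # regular parent znode
create -e /workers/worker-1 "host1"   # ephemeral: removed automatically when this session dies
ls -w /workers                        # set a watch: the client is notified when membership changes

Leader election is typically built on exactly this pattern: each candidate creates an ephemeral sequential znode, the lowest sequence number is the leader, and when it dies its znode disappears and the watchers promote the next one.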
What I have: a ZooKeeper ensemble running on server1, server2 and server3, and Kafka likewise running on server1, server2 and server3.
The setup is running in Kubernetes.
Problem statement:
1. If one ZooKeeper node goes down, will the entire setup go down, because Kafka depends on ZooKeeper? Am I right?
2. If Q1 is correct, is there any way to set things up so that Kafka keeps running as-is even if one ZooKeeper server goes down?
3. How do I expose the Kafka port in a Kubernetes setup?
4. What is the recommended way to persist data in Kubernetes for a production server?
I fail to see how the ZooKeeper questions are related to k8s... But you should definitely set affinity rules so that ZooKeeper and Kafka are not on the same physical servers or sharing the same disks.
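For instance, a podAntiAffinity block on the ZooKeeper pod template along these lines spreads the ZooKeeper replicas across nodes and prefers nodes that don't already run Kafka (the label values and topology key are assumptions about your manifests):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: zookeeper
        topologyKey: kubernetes.io/hostname
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: kafka
          topologyKey: kubernetes.io/hostname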
If one ZooKeeper out of three goes down, the remaining two still form a majority, so the ensemble keeps serving requests and Kafka is unaffected. It is only when a majority is lost (two of the three down) that ZooKeeper can no longer elect a leader, and Kafka's controller and metadata operations stall.
To tolerate more failures you can run 5 ZooKeepers, which keeps a quorum with up to 2 servers down. The Definitive Guide book covers these concepts in the first few chapters.
Regarding the other questions - NodePorts and PVCs, generally speaking.
Use one of the popular Kafka operators on GitHub and you won't need to think too hard about setting those properties.
You'll still have to perform Kafka admin tasks manually in any installation... You can use extra services like Cruise Control to reduce that workload, though.
I have Kafka and Zookeeper co-located on the same servers, with multiple nodes.
In Kafka's server.properties, I have a line like
zookeeper.connect=server1:2181,server2:2181...
The problem is that Kafka will not start until all of the ZooKeeper nodes are available. Otherwise, I get errors like "fatal error during Kafka startup" and "Timed out waiting for connection while in state: CONNECTING", even though the other ZooKeeper nodes are up.
This makes it challenging to script the startup of each node independently, since the startup scripts on one node depend on the state of the other nodes.
First: is this expected behavior, or am I doing something wrong? Suppose I have 3 nodes in the ZooKeeper cluster; do all 3 have to be up for Kafka to start? That seems counterintuitive, since a larger cluster would actually increase the chance of a failure at startup rather than provide more resilience.
Second: what's a good solution for this? Is the only approach to make Kafka on each node wait until ZooKeeper is fully up on all nodes?
As far as I know, this is a prerequisite for Kafka to start up correctly, and I don't consider it too much of a burden. If the ZooKeeper cluster itself is already having problems at startup time, Kafka itself might run into problems, so ensuring that the ZooKeeper ensemble is healthy is a good initial check, IMHO.
A way to get around this limitation is to configure a single-node ZooKeeper cluster and tell Kafka to use it. Afterwards, you can grow the ZooKeeper cluster to 3 or more nodes while Kafka is already up and running. More details can be found here:
Adding new ZooKeeper node in Kafka cluster?
For the record, Kafka itself keeps working if the ZooKeeper cluster goes down once the brokers are up and running. It just won't be able to create topics, elect new partition leaders, or react to broker failures, but clients talking to the existing partition leaders continue to work just fine.
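If you do want each node's startup script to wait for ZooKeeper instead of failing, a small wrapper along these lines does the job (hostnames, ports and paths are placeholders, and on ZooKeeper 3.5+ the srvr command may need to be whitelisted via 4lw.commands.whitelist):

#!/usr/bin/env bash
# Block until at least one configured ZooKeeper server reports a quorum mode,
# then start the broker in the foreground.
until echo srvr | nc -w 2 server1 2181 | grep -q "Mode:"; do
  echo "waiting for ZooKeeper..."
  sleep 5
done
exec bin/kafka-server-start.sh config/server.properties

srvr only prints a Mode: line once the node is actually serving as leader, follower or standalone, so this also covers the case where the process is up but the quorum isn't.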
We ran into the same problem in our production environment.
It turned out to be a bug (ZOOKEEPER-2184) in the ZooKeeper client library that Kafka uses to talk to ZooKeeper.
Our Kafka version is 1.1.1, which uses zookeeper-3.4.10.jar.
After we replaced it with zookeeper-3.4.13.jar, Kafka was able to restart successfully.
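If you want to check which ZooKeeper client jar your broker ships with, it normally sits in the libs directory of the Kafka installation (the path below assumes a standard tarball install):

ls /opt/kafka/libs/ | grep -i zookeeper    # e.g. zookeeper-3.4.10.jar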
Say I have 3 partitions with replication factor 3. As I understand it, all the brokers have to connect to the same ZooKeeper ensemble. What if they can't, due to network issues? Will replication continue once the network is available again?
If ZK is down, your Kafka cluster will have limited functionality. For details, see How does Kafka depend on Zookeeper?
Kafka requires ZooKeeper (ZK). If ZK is down, the cluster is severely degraded: brokers can no longer elect partition leaders, create topics, or react to broker failures, although clients already talking to the existing partition leaders generally keep working. ZK is used for a bunch of things like topic metadata, controller election, ACLs, etc.
Once ZK becomes reachable to the Kafka cluster again, the cluster returns to normal operation, and replicas that fell behind catch up by fetching from their partition leaders.
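Once the network is back, you can confirm the followers have caught up by checking for under-replicated partitions (the broker address is a placeholder; the tool is called kafka-topics.sh in the Apache distribution):

kafka-topics --describe --under-replicated-partitions --bootstrap-server broker1:9092

Empty output means every partition's in-sync replica set matches its full replica set again.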
I want to run the Flink cluster in high-availability mode, so I have applied the settings from JobManager High Availability to the Flink configuration files. When I start the ZooKeeper quorum using start-zookeeper-quorum.sh, I am able to start two ZooKeeper servers (peers) on two machines. But when I start the Flink cluster with 2 JobManagers, the Flink web UI shows the message "Service temporarily unavailable due to an ongoing leader election. Please refresh."
What does this message mean? Is there a way to designate the leader in a configuration file?
The problem is with your ZooKeeper installation: your ZK nodes cannot elect a leader. Running only two nodes is also not the best choice; you should have at least 3 instances, or a larger odd number.
You should check the ZooKeeper admin docs, for instance here.
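For reference, the HA section of flink-conf.yaml ends up looking roughly like this (hostnames and the storage path are placeholders; note the three-node ZooKeeper quorum):

high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/ha/

As far as I know there is no setting to pin which JobManager becomes leader; ZooKeeper elects one and the others stand by until it fails.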