Zookeeper: Get Leader node from Follower node - apache-zookeeper

If I have an ensemble of ZooKeeper nodes and I have the IP of one of them, which is a 'follower' node, is it possible to find out which node is the 'leader' from that follower by connecting through zkCli or the Curator client?
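As a sketch from the command line (server IPs are illustrative, and the srvr four-letter word must be allowed via 4lw.commands.whitelist on newer ZooKeeper versions): each server only reports its own role, so you can query the ensemble members one by one until one answers Mode: leader.

echo srvr | nc 10.0.0.11 2181 | grep Mode    # Mode: follower
echo srvr | nc 10.0.0.12 2181 | grep Mode    # Mode: leader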

Related

Why can't we deploy the master and quorum nodes on the same server in MongoDB?

Why can't we deploy a replication cluster with two servers, one with a primary node and an arbiter node, and the other with a secondary node?

Kafka Cluster: Use of port 9091

Confluent Kafka (Community) mentions that port 9091 is required for inter-broker communication.
What should the firewall rules for this port be on AWS?
Setup: 3 broker nodes, 3 ZooKeeper nodes.
Refer to https://docs.confluent.io/platform/current/installation/system-requirements.html
As it says, that port is for internal topic replication only, so you should only allow traffic within the VPC, preferably only from the brokers (whitelist the broker IPs and exclude everything else).
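As a sketch of such a rule with the AWS CLI (the security group ID is illustrative; using the brokers' own security group as the source restricts 9091 to broker-to-broker traffic):

# allow inter-broker traffic on 9091 only from members of the brokers' security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9091 \
  --source-group sg-0123456789abcdef0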

How can I know if a zookeeper node is a container node?

We can use the ephemeralOwner property of a znode to check whether it is an ephemeral node, but how do we check whether a znode is a container node in ZooKeeper?
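For reference, the ephemeral check mentioned above can be done from zkCli by looking at the znode's stat (path is illustrative):

stat /my/znode
...
ephemeralOwner = 0x0    (0x0 means not ephemeral; a non-zero session id means it is ephemeral)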

Kubernetes Node disconnected from master

In our cluster we have a node running mongodb that is in a separate facility than the master node.
I have noticed that if the VPN connecting to the master node goes down, thus separating the local worker node, I am unable to connect locally to the mongodb port. It seems like the mongodb port 27017 goes away when the worker node disconnects from the master node.
It was my understanding that Kubernetes is an orchestration system that configures the different worker nodes. So if a worker node disconnects, I thought it would just hold the same configuration, keeping mongodb running with its port open on that node.
Is there a setting to keep the node configured as is that I might be missing?
In our configuration we have a pod running under a deployment, which is exposed by a service that maps the ports to the IP of the worker node in question.
In the cluster configuration we are using the Weave Net CNI.

Access kafka broker outside k8 minikube cluster

I have a Landoop Kafka image running in a pod on a minikube Kubernetes cluster on my Mac. I have two different services to expose port 8081 for the schema registry and 9092 for the broker. I have mapped the ports 8081 -> 30081 and 9092 -> 30092 in my NodePort services so that I can access them from outside the cluster.
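For reference, a NodePort service along the lines of the broker mapping above would look roughly like this (names and labels are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kafka-broker
spec:
  type: NodePort
  selector:
    app: kafka
  ports:
    - port: 9092
      targetPort: 9092
      nodePort: 30092
EOF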
But when I try to run a console consumer or my consumer app, Kafka never consumes messages.
To verify that the broker port 9092 is reachable from outside the Kubernetes cluster:
nc <exposed-ip> 30092 says the port is open.
To verify that the schema registry port 8081 is reachable:
curl -X GET http://192.168.99.100:30081/subjects
It returns the schemas that are available.
I have a couple of questions.
1) Can we not access Kafka from outside the Kubernetes cluster in the way described above? If so, am I doing something wrong?
2) If the port is open, doesn't that mean the broker is available?
Any help is appreciated. Thanks.
Accessing a Kafka cluster from outside a container network is rather complicated if you cannot route directly from the outside to the pod.
When you first connect to a Kafka cluster you connect to a single broker, and that broker returns the list of all brokers and partitions inside the cluster. The Kafka client then uses that list to interact with the brokers that hold the partitions of the specific topic.
The problem is that the broker list contains, by default, the internal IP of each Kafka broker, which in your case is the container network IP. You can override this value by setting advertised.listeners in each broker's configuration.
To make a Kafka cluster available from outside Kubernetes you need to configure a NodePort service for each of your brokers and set each broker's advertised.listeners to the external address of the corresponding NodePort service. Note that this adds extra latency and failure points when you use Kafka from inside your Kubernetes cluster.
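As a sketch of that configuration, assuming one broker is reachable from outside at 192.168.99.100:30092 through its NodePort service, the relevant lines in that broker's server.properties would look roughly like:

# listen inside the pod on 9092, but advertise the externally reachable address
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.99.100:30092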
You need to set the advertised listeners for Kafka. For the Landoop Docker images this can be set via the environment flag:
-e ADV_HOST=192.168.99.100
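To check from outside the cluster, something along these lines should then start receiving messages (topic name is illustrative); note that the host:port the broker advertises back in its metadata must itself be reachable from the client machine:

kafka-console-consumer.sh --bootstrap-server 192.168.99.100:30092 --topic test-topic --from-beginning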