How can I know if a zookeeper node is a container node? - apache-zookeeper

We can use the ephemeralOwner property of a node to check whether it is an ephemeral node, but how can we check whether a node is a container node in ZooKeeper?
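There is no dedicated flag in the client API, but here is a sketch of one workaround, assuming ZooKeeper 3.5.3+ (where container nodes were introduced) and relying on an implementation detail: the server marks container znodes with a reserved ephemeralOwner value (EphemeralType.CONTAINER_EPHEMERAL_OWNER in the server sources, i.e. Long.MIN_VALUE), and that marker comes back to the client in the Stat. In zkCli:
create -c /demo-container
stat /demo-container
In the stat output, a container node shows ephemeralOwner = 0x8000000000000000 (the reserved marker), a plain persistent node shows 0x0, and an ephemeral node shows its real session id. Since this is internal behaviour rather than documented API, verify it against your ZooKeeper version.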

Related

Are the master and worker nodes the same node in case of a single node cluster?

I started a minikube cluster (single node cluster) on my local machine with the command:
minikube start --driver=virtualbox
Now, when I execute the command:
kubectl get nodes
it returns:
NAME STATUS ROLES AGE VERSION
minikube Ready master 2m59s v1.19.0
My question is: since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
The answer to your question is yes: in your case, your master node is itself a worker node.
Cluster: the group of VMs or physical computers.
Master: the node where the control-plane components are installed, such as etcd, controller-manager, and api-server, which are necessary to control the state of the whole cluster. As a best practice, in large production clusters you should never use a master node to schedule application workloads.
Worker node: a plain VM where the Docker and Kubernetes packages are installed but not the control-plane components. Normally a worker node handles your application workloads.
If you have only one machine where you configure Kubernetes, it becomes a single-node Kubernetes cluster, and that node acts as both master and worker, as the check below shows.
I hope this helps you understand.
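As a quick check (assuming the minikube cluster from the question), listing pods across all namespaces shows everything scheduled onto the one node:
kubectl get pods --all-namespaces -o wide
The NODE column shows minikube for every pod, i.e. the same node is serving as both master and worker.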
since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
Yes: with Minikube you only use a single node, and your workload is scheduled to execute on that same node.
Typically, taints and tolerations are used on master nodes to prevent workloads from being scheduled onto them.
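For illustration, assuming the node is named minikube (Minikube leaves the master untainted precisely so workloads can run on it), you can inspect and toggle the standard master taint yourself:
kubectl describe node minikube | grep Taints
kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule
kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule-
The first command usually prints Taints: <none> on Minikube; the second adds the NoSchedule taint that managed masters normally carry, and the third (note the trailing -) removes it again.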

Should we deploy Cluster Autoscaler for master node and worker node separately?

For autoscaling a Kubernetes cluster created with kubeadm on AWS, I'm going through the Cluster Autoscaler documentation, where I saw the master node setup. I created a master node and a worker node, so the master node has one ASG and the worker node has one ASG. Should I deploy the CA for the master node alone, or do we have to deploy it for the worker node as well?
The Cluster Autoscaler is there to scale out the workers, not the masters. You just need one autoscaler in your cluster. Hope this answers your query.
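As a sketch (the ASG name and sizes below are placeholders), the autoscaler runs as a single Deployment whose --nodes flag points it at the worker ASG only; nothing is deployed per master:
./cluster-autoscaler --cloud-provider=aws --nodes=1:10:my-worker-asg
Here 1:10:my-worker-asg means minimum size, maximum size, and the name of the worker Auto Scaling group.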

In Kubernetes, do a host (a worker node), a pod on the node, and the container in the pod all have separate process namespaces?

In Docker, the host and the containers have separate process namespaces. In Kubernetes, the containers are wrapped inside a pod. Does that mean that in Kubernetes the host (a worker node), the pod on the node, and the container in the pod all have separate process namespaces?
Pods don't have a process namespace of their own; they are just a collection of containers. By default, each container runs in its own PID namespace (which is separate from the host's); however, you can set all the containers in a pod to share the same one. This is also used with Debug Containers.
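A minimal manifest sketch of that shared-PID option (the pod and image names here are just examples); the relevant field is spec.shareProcessNamespace:
apiVersion: v1
kind: Pod
metadata:
  name: shared-pid-demo
spec:
  shareProcessNamespace: true
  containers:
  - name: app
    image: nginx
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
With this set, running ps inside either container shows the processes of both; without it, each container sees only its own processes.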

Zookeeper: Get Leader node from Follower node

If I have an ensemble of ZooKeeper nodes and the IP of one of them, which is a 'follower' node, is it possible to find out the 'leader' node from the follower node by connecting through zkCli or a Curator client?
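One common approach (a sketch; it assumes the four-letter-word commands are whitelisted via 4lw.commands.whitelist on ZooKeeper 3.5+) is to ask each ensemble member for its role:
echo srvr | nc <follower-ip> 2181
echo conf | nc <follower-ip> 2181
echo srvr | nc <other-member-ip> 2181
The first command prints Mode: follower for the known node; on 3.5+ the conf output includes the ensemble membership, so you can discover the other servers from the follower; probing each member in turn with srvr finds the one that reports Mode: leader.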

Kubernetes cannot access pods at slave node

I deployed Kubernetes in a multiple-master configuration following the kubeadm multi-master HA document. Then I joined a worker node to this cluster.
From the worker node, I could not ping the IPs of pods running on other nodes.
From each master node, I could ping the other nodes' pods.
I also found that the cni0 interface did not exist on the worker node but did exist on the master nodes.
Did I miss any configurations?
Any suggestions will be appreciated.
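A few diagnostics that often narrow this down (a sketch; flannel is only an example, substitute whatever CNI plugin you deployed):
kubectl get pods -n kube-system -o wide | grep -i flannel
ls /etc/cni/net.d/
The first command checks whether the CNI DaemonSet actually scheduled a pod on the worker; the second, run on the worker itself, checks whether the CNI configuration was written there. The cni0 bridge is typically only created once the CNI plugin is running on that node and a pod has been scheduled onto it.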