Kubernetes cannot access pods on worker node

I deployed Kubernetes in a multi-master configuration following the kubeadm high-availability (multiple masters) documentation, then joined a worker node to the cluster.
From the worker node, I could not ping the IPs of pods running on other nodes.
From each master node, I could ping the pods on other nodes.
I also found that the cni0 interface did not exist on the worker node, although it existed on the master nodes.
Did I miss any configurations?
Any suggestions will be appreciated.
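A first thing to check on the worker, assuming the cluster uses a DaemonSet-based CNI plugin such as Flannel or Calico (the pod-name patterns below are only illustrative):
# verify the CNI pod is actually running on the worker node
kubectl get pods -n kube-system -o wide | grep -i -e flannel -e calico
# cni0 is normally created by the CNI plugin once the first pod lands on the node
ip addr show cni0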

Related

How do 2 master nodes share the etcd information in a Kubernetes production cluster?

When there are 2 master nodes in a Kubernetes production cluster, do they share their etcd database?
Does each master keep its own copy of etcd on its own node, or should etcd run on a separate machine? How exactly does it work?
If one master node dies, does the other master node have all of the etcd information?
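For context, a kubeadm stacked-etcd HA cluster runs one etcd member on each control-plane node, and the members replicate to each other. A quick way to see this, assuming kubeadm's default certificate paths (the pod name etcd-master-1 is hypothetical):
# each master runs its own etcd pod
kubectl get pods -n kube-system -l component=etcd
# list the replicating members from inside one of those pods
# (older etcd images may need ETCDCTL_API=3 set)
kubectl -n kube-system exec etcd-master-1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list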

Are the master and worker nodes the same node in the case of a single-node cluster?

I started a minikube cluster (single node cluster) on my local machine with the command:
minikube start --driver=virtualbox
Now, when I execute the command:
kubectl get nodes
it returns:
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m59s   v1.19.0
My question is: since the cluster has only one node, and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in the case of a single-node cluster?
The answer to your question is yes: in your case the master node is itself a worker node.
Cluster: a group of VMs or physical machines.
Master: the node where the control-plane components (etcd, controller-manager, api-server) are installed; they are necessary to control the state of the whole cluster. As a best practice, large production clusters never schedule application workloads on the master nodes.
Worker node: a plain VM where the Docker and Kubernetes packages are installed, but not the control-plane components. Worker nodes normally handle your application workloads.
If you configure Kubernetes on only one machine, you have a single-node cluster, and that machine acts as both master and worker.
I hope this helps you understand.
since the cluster has only one node, and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in the case of a single-node cluster?
Yes. With Minikube you use only a single node, and your workload is scheduled to execute on that same node.
Typically, taints and tolerations are used on master nodes to prevent workloads from being scheduled onto those nodes, as in the sketch below.
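For example, a kubeadm master of this era carries a node-role.kubernetes.io/master:NoSchedule taint, which you can inspect and, on a single-node cluster, remove; Minikube leaves its node untainted, which is why workloads can run there:
# show any taints on the node
kubectl describe node minikube | grep -i taints
# on a kubeadm master, remove the taint so pods can be scheduled there
kubectl taint nodes --all node-role.kubernetes.io/master-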

Should we deploy Cluster Autoscaler for the master node and worker node separately?

To autoscale a Kubernetes cluster created with kubeadm on AWS, I am going through the Cluster Autoscaler documentation, where I saw the master node setup. I created a master node and a worker node, so the master node has one ASG and the worker node has one ASG. Should I deploy the CA for the master node alone, or do we have to deploy it for the worker node as well?
Cluster Autoscaler scales out the workers, not the masters. You just need one autoscaler in your cluster. Hope this answers your query.
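To illustrate, you point the autoscaler at the worker ASG only, via its command-line flags; the ASG name my-worker-asg and the 1:10 bounds below are hypothetical:
# container command in the cluster-autoscaler Deployment
./cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=1:10:my-worker-asg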

How to start with kubernetes?

I have two IPs, one for a master node and one for a worker node. I need to deploy some services using these. I don't know anything about Kubernetes. What are a master node and a worker node?
How do I start?
You should start from the very basics.
The Kubernetes Concepts page is your starting point.
The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: kube-apiserver, kube-controller-manager and kube-scheduler.
Each individual non-master node in your cluster runs two processes: kubelet, which communicates with the Kubernetes Master, and kube-proxy, a network proxy which reflects Kubernetes networking services on each node.
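On a kubeadm-installed cluster those master processes run as static pods, so you can see them, plus kube-proxy on every node, with a single command (pod names vary with node names):
kubectl get pods -n kube-system -o wide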
Regarding your question in the comment: read Organizing Cluster Access Using kubeconfig Files, and make sure your kubeconfig file is in the right place.
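A minimal check, assuming the default kubeconfig location:
# kubectl reads $HOME/.kube/config unless KUBECONFIG or --kubeconfig says otherwise
kubectl config view
# point kubectl at an explicit file if yours lives elsewhere
export KUBECONFIG=/path/to/your/kubeconfig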

Kubernetes: Node vs Hosts vs Cluster terminology

http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
"This guide will only get ONE node working. Multiple nodes requires a functional networking configuration done outside of kubernetes."
So, is a node made up of many hosts?
I thought a cluster was made up of many hosts. Is the cluster made up of many nodes instead?
Does each node have a master and minions, so that a cluster has more than one master?
Host: some machine (physical or virtual)
Master: a host running Kubernetes API server and other master systems
Node: a host running kubelet + kube-proxy that pods can be scheduled onto
Cluster: a collection of one or more masters + one or more nodes
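You can see this split on a live cluster; the node names, age and version below are illustrative:
kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   10d   v1.19.0
worker-1   Ready    <none>   10d   v1.19.0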