Kubernetes: Node vs Hosts vs Cluster terminology

http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
"This guide will only get ONE node working. Multiple nodes requires a functional networking configuration done outside of kubernetes."
So, is a node made up of many hosts?
I thought a cluster is made up of many hosts. Is the cluster made up of many nodes instead?
If each node had a master and minions, would a cluster have more than one master?

Host: some machine (physical or virtual)
Master: a host running Kubernetes API server and other master systems
Node: a host running kubelet + kube-proxy that pods can be scheduled onto
Cluster: a collection of one or more masters + one or more nodes
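A quick way to see these pieces on a live cluster, assuming kubectl is already configured against it (the node name below is just a placeholder):
# List all nodes; the ROLES column shows which hosts act as masters and which as workers
kubectl get nodes -o wide
# Show details for one host: addresses, kubelet version, capacity, and any taints
kubectl describe node <node-name>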

Related

How does data flow between worker nodes in Kubernetes?

I have a single master cluster with 3 worker nodes. The master node has one network interface of 10Gb capacity and all worker nodes have a 40Gb interface. They are all connected via a switch.
I'd like to know if this might create a bottleneck if the data between nodes has to pass through the master node.
In general, I'd like to understand the communication flow between worker nodes. For instance, if a pod on node1 sends data to a pod on node2, does the traffic go through the master node? I have seen the architecture diagram in the Kubernetes docs and it appears to be the case:
source: https://kubernetes.io/docs/concepts/overview/components/
If this is the case, is it possible to define a control-plane network separate from the data plane, possibly by adding another interface to the worker nodes?
Please note that this is a bare-metal on-prem installation with OSS Kubernetes v1.20.
For instance, if a pod on node1 sends data to a pod on node2, does the traffic go through the master node?
No. Kubernetes is designed with a flat network model. If a Pod on node A sends a request to a Pod on node B, the inter-node traffic goes directly from node A to node B, as they are on the same IP network.
See also The Kubernetes network model
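A rough way to verify this yourself; the pod names and IP are placeholders, and the container image needs ping or traceroute available:
# Show which node each pod runs on and what pod IP it was assigned
kubectl get pods -o wide
# From a pod on node1, reach a pod IP on node2 directly; the path should not include the master
kubectl exec <pod-on-node1> -- ping -c 3 <pod-ip-on-node2>
kubectl exec <pod-on-node1> -- traceroute <pod-ip-on-node2>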

Are the master and worker nodes the same node in case of a single node cluster?

I started a minikube cluster (single node cluster) on my local machine with the command:
minikube start --driver=virtualbox
Now, when I execute the command:
kubectl get nodes
it returns:
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m59s   v1.19.0
My question is: since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
The answer to your question is yes: in your case, your master node is itself the worker node.
Cluster: the group of VMs or physical computers.
Master: the node where the control-plane components (etcd, controller-manager, api-server) are installed; these are necessary to control the whole cluster state. As a best practice, in a big production cluster you should never use the master node to schedule application workloads.
Worker node: a plain VM where the Docker and Kubernetes packages are installed, but not the control-plane components. Worker nodes normally handle your application workloads.
If you configure Kubernetes on only one machine, you have a single-node cluster, and that node acts as both master and worker.
I hope this helps you understand.
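As a concrete illustration, on a kubeadm-style cluster the control-plane components usually show up as pods in the kube-system namespace, pinned to the master node (exact pod names vary by setup):
# Control-plane pods (kube-apiserver, kube-controller-manager, kube-scheduler, etcd) run on the master
kubectl get pods -n kube-system -o wide
# Compare with the roles each node reports
kubectl get nodes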
since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
Yes. With Minikube you only use a single node, and your workload is scheduled to execute on that same node.
Typically, taints and tolerations are used on master nodes to prevent workloads from being scheduled onto those nodes.
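You can check this on your own node roughly like this; the node names are placeholders and the exact taint key can differ between Kubernetes versions:
# Minikube usually leaves its single node untainted so workloads can run on it
kubectl describe node minikube | grep Taints
# A kubeadm master typically carries a taint such as node-role.kubernetes.io/master:NoSchedule;
# removing it (note the trailing "-") allows workloads to be scheduled there as well
kubectl taint nodes <master-node-name> node-role.kubernetes.io/master-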

How to start with Kubernetes?

I have two IPs, one for the master node and one for the worker node. I need to deploy some services using these. I don't know anything about Kubernetes; what is a master node and a worker node?
How do I start?
You should start from the very basics.
The Kubernetes Concepts page is your starting point.
The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: kube-apiserver, kube-controller-manager and kube-scheduler.
Each individual non-master node in your cluster runs two processes: kubelet, which communicates with the Kubernetes Master, and kube-proxy, a network proxy which reflects Kubernetes networking services on each node.
Regarding your question in the comment: read Organizing Cluster Access Using kubeconfig Files. Make sure you have the kubeconfig file in the right place.
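If those two machines are plain VMs, one common way to get started is kubeadm; a rough sketch, where the IPs, token and hash are placeholders that kubeadm init prints, and the pod network CIDR depends on the CNI plugin you choose:
# On the master IP: initialize the control plane
kubeadm init --apiserver-advertise-address=<master-ip> --pod-network-cidr=10.244.0.0/16
# Put the kubeconfig where kubectl expects it (this is also the file mentioned above)
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# On the worker IP: join the cluster using the command kubeadm init printed
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>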

Kubernetes Node disconnected from master

In our cluster we have a node running MongoDB that is in a separate facility from the master node.
I have noticed that if the VPN connecting to the master node goes down, thus separating the local worker node, I am unable to connect locally to the MongoDB port. It seems like the MongoDB port 27017 goes away when the worker node disconnects from the master node.
It was my understanding that Kubernetes is an orchestration system that configures the different worker nodes, so if a worker node disconnects I thought it would just keep the same configuration, leaving MongoDB running with its port on that node.
Is there a setting to keep the node configured as is that I might be missing?
In our configuration we have a pod running under a Deployment, exposed by a Service that maps its ports to the IP of the worker node in question.
In the cluster configuration we are using the Weave Net CNI.
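For reference, the setup described above could look roughly like this, and these commands help confirm where the port is actually exposed; the Deployment name and Service type are assumptions based on the description:
# One possible shape of the setup: expose the mongodb Deployment on a node port
kubectl expose deployment mongodb --type=NodePort --port=27017
# Check which port was allocated, which pod backs the Service, and where that pod runs
kubectl get svc mongodb
kubectl get endpoints mongodb
kubectl get pods -o wide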

Kubernetes cannot access pods at slave node

I deployed Kubernetes with a multi-master configuration following the kubeadm HA documentation. Then I joined a worker node to this cluster.
From the worker node, I could not ping the IPs of pods running on other nodes.
From each master node, I could ping the other nodes' pods.
I also found that the cni0 interface did not exist on the worker nodes but did exist on the master nodes.
Did I miss any configuration?
Any suggestions will be appreciated.
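In case it helps with narrowing this down, some hedged first checks; the cni0 interface name is taken from the description above, and the exact CNI pod names depend on which plugin you deployed:
# Confirm the CNI daemonset has a pod running on every node, including the worker
kubectl get pods -n kube-system -o wide
# On the worker node itself, check whether the CNI bridge was created
ip addr show cni0
# Check what kubelet reports about the network plugin
journalctl -u kubelet | grep -i cni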