How does data flow between worker nodes in Kubernetes?

I have a single master cluster with 3 worker nodes. The master node has one network interface of 10Gb capacity and all worker nodes have a 40Gb interface. They are all connected via a switch.
I'd like to know if this might create a bottleneck if data between nodes has to pass through the master node.
In general, I'd like to understand the communication flow between worker nodes. For instance, if a pod on node1 sends data to a pod on node2, does the traffic go through the master node? I have seen the architecture diagram in the Kubernetes docs and it appears to be the case:
source: https://kubernetes.io/docs/concepts/overview/components/
If this is the case, is it possible to define a control-plane network separate from the data-plane network, possibly by adding another interface to the worker nodes?
Please note that this is a bare-metal on-prem installation with OSS Kubernetes v1.20.

For instance, a pod in node1 sends data to a pod in node2, does the traffic go through the master node?
No. Kubernetes is designed around a flat network model. If a Pod on node A sends a request to a Pod on node B, the inter-node traffic goes directly from node A to node B, since all nodes are on the same IP network. The master node is not in the data path; the diagram in the docs shows control-plane connections (the API server talking to each node's kubelet), not Pod-to-Pod traffic.
See also The Kubernetes network model
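A minimal sketch of why the master is not involved, assuming an illustrative kubeadm/flannel-style layout where a cluster CIDR of 10.244.0.0/16 is carved into one /24 Pod subnet per node (the ranges and node names here are assumptions for illustration, not taken from your cluster):

```python
import ipaddress

# Assumed example layout: the cluster CIDR is split into one
# per-node Pod subnet; each node routes the other nodes' subnets.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_pod_cidrs = {
    "node1": ipaddress.ip_network("10.244.1.0/24"),
    "node2": ipaddress.ip_network("10.244.2.0/24"),
}

pod_a = ipaddress.ip_address("10.244.1.17")  # a Pod on node1
pod_b = ipaddress.ip_address("10.244.2.42")  # a Pod on node2

def node_for(pod_ip):
    """Routing decision: find which node's Pod subnet owns this IP."""
    return next(n for n, cidr in node_pod_cidrs.items() if pod_ip in cidr)

# Both Pods live inside the same flat cluster CIDR...
assert pod_a in cluster_cidr and pod_b in cluster_cidr
# ...so node1 forwards pod_a -> pod_b straight to node2's interface;
# the master never appears in the path.
print(node_for(pod_a), "->", node_for(pod_b))  # node1 -> node2
```

Because the forwarding decision is made entirely between the two worker nodes, and all your nodes hang off one switch, the master's 10Gb interface is not a bottleneck for Pod-to-Pod traffic.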

Related

Kubernetes relation between worker node IP address and Pod IP

I have two questions.
All the tutorials on YouTube say that if the worker node's internal IP is 10.10.1.0, then the pods inside that node will have internal IPs between 10.10.1.1 and 10.10.1.254. But in my Google Kubernetes Engine cluster it is very different, and I don't see any relation between them.
Pod rc-server-1x769 has IP 10.0.0.8, but its node gke-kubia-default-pool-6f6eb62a-qv25 has 10.160.0.7.
How do I release the external IPs assigned to my worker nodes?
For Q2:
GKE manages the VMs in your cluster, so if they go down, or if the cluster scales down or up, new VMs are created with the same characteristics. I do not believe releasing those external IPs is possible; you will need to consider a private cluster instead.
For Q1: the Pod CIDR and the node network are different entities.
Pod-to-Pod communication happens on Pod IPs drawn from the Pod CIDR, not from the node network.
Your nodes have interfaces corresponding to the Pod CIDR for that traffic, while the node addresses you see in kubectl output come from the node (VPC) network, which is why the two look unrelated.
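To make the "different entities" point concrete, here is a small sketch using the questioner's two addresses; the /20 node subnet and /14 Pod range are assumed GKE-style defaults for illustration, not read from the actual cluster:

```python
import ipaddress

# Hypothetical ranges illustrating the output above: the node lives
# in the VPC subnet, the Pod in a separate Pod address range.
node_subnet = ipaddress.ip_network("10.160.0.0/20")  # assumed VPC subnet
pod_range   = ipaddress.ip_network("10.0.0.0/14")    # assumed Pod CIDR

node_ip = ipaddress.ip_address("10.160.0.7")  # gke-kubia-default-pool-...
pod_ip  = ipaddress.ip_address("10.0.0.8")    # rc-server-1x769

assert node_ip in node_subnet and node_ip not in pod_range
assert pod_ip in pod_range and pod_ip not in node_subnet
# The two ranges never overlap, so there is no arithmetic relation
# between a Pod's IP and its node's IP to look for.
assert not node_subnet.overlaps(pod_range)
```

The tutorials describing 10.10.1.x Pods on a 10.10.1.0 node are assuming a CNI setup where Pod subnets happen to be derived from node addresses; GKE allocates them independently.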

Are the master and worker nodes the same node in case of a single node cluster?

I started a minikube cluster (single node cluster) on my local machine with the command:
minikube start --driver=virtualbox
Now, when I execute the command:
kubectl get nodes
it returns:
NAME STATUS ROLES AGE VERSION
minikube Ready master 2m59s v1.19.0
My question is: since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
The answer to your question is yes: in your case the master node is itself a worker node.
Cluster: the group of VMs or physical machines.
Master: the node where the control-plane components are installed, such as etcd, kube-controller-manager and kube-apiserver, which maintain the state of the whole cluster. As a best practice, in large production clusters you should never schedule application workloads on the master node.
Worker node: a plain VM with Docker and the Kubernetes node packages installed, but no control-plane components. Worker nodes normally handle your application workloads.
If you configure Kubernetes on only one machine, you get a single-node cluster, and that node acts as both master and worker.
I hope this helps you understand.
since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
Yes. With Minikube you only have a single node, and your workloads are scheduled to execute on that same node.
In multi-node clusters, taints and tolerations are typically used on master nodes to prevent workloads from being scheduled onto them.
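To illustrate: on a kubeadm cluster of that era, the master carries the taint `node-role.kubernetes.io/master:NoSchedule`, and a Pod needs a matching toleration before the scheduler will place it there. A minimal sketch (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-master   # illustrative name
spec:
  containers:
    - name: app
      image: nginx
  # Without this toleration the scheduler skips the tainted master node.
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
```

Minikube simply leaves that taint off its single node, which is why ordinary workloads run on the "master" there.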

How to start with kubernetes?

I have two IPs, one for a master node and one for a worker node. I need to deploy some services using these. I don't know anything about Kubernetes; what are the master node and worker node?
How do I start?
You should start from the very basics.
The Kubernetes concepts page is your starting point.
The Kubernetes Master is a collection of three processes that run on a
single node in your cluster, which is designated as the master node.
Those processes are: kube-apiserver, kube-controller-manager and
kube-scheduler.
Each individual non-master node in your cluster runs
two processes: kubelet, which communicates with the Kubernetes Master,
and kube-proxy, a network proxy which reflects Kubernetes networking
services on each node.
Regarding your question in the comment: read Organizing Cluster Access Using kubeconfig Files, and make sure you have the kubeconfig file in the right place.

k8s: should traffic go to master nodes or worker nodes?

Should traffic from clients (the outside world) to a service inside k8s come in through the master nodes or the worker nodes, and why?
From what I've seen so far, docs always show LB pools consisting of master nodes instead of worker nodes. Is there a reason for this?
In a big cluster, would it be more beneficial to send all traffic to a few designated worker nodes?
For example:
let's say my k8s cluster has 2 master nodes, 4 worker nodes, and an external load balancer. Most examples out there load-balance incoming traffic to the 2 master nodes instead of the 4 worker nodes. Why is this? Is there a reason in terms of efficiency/performance?
Please advise. Thank you.
It does not matter whether traffic enters through worker nodes or master nodes. You expose the service running in your Pods to the outside world via a NodePort or LoadBalancer Service, so whoever hits the load balancer, or reaches any node on the service's node port, is redirected to the corresponding service.
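As a sketch of the NodePort case (names and port numbers are illustrative), kube-proxy opens the same node port on every node, master or worker, and forwards arriving traffic to a backing Pod wherever it runs:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  type: NodePort
  selector:
    app: web           # matches the Pods backing the service
  ports:
    - port: 80         # ClusterIP port
      targetPort: 8080 # container port
      nodePort: 30080  # opened on every node in the cluster
```

Hitting any-node-ip:30080 reaches the service regardless of which node the client picked, so which nodes sit in the LB pool is mostly a question of capacity and isolation, not correctness.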

Kubernetes: Node vs Hosts vs Cluster terminology

http://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/
"This guide will only get ONE node working. Multiple nodes requires a functional networking configuration done outside of kubernetes."
So, is a node made up of many hosts?
I thought a cluster was made up of many hosts. Is the cluster made up of many nodes instead?
Does each node have a master and minions, so that a cluster has more than one master?
Host: some machine (physical or virtual)
Master: a host running Kubernetes API server and other master systems
Node: a host running kubelet + kube-proxy that pods can be scheduled onto
Cluster: a collection of one or more masters + one or more nodes