Kubernetes relation between worker node IP address and Pod IP

I have two questions.
1. All the YouTube tutorials say that if a worker node's internal IP is 10.10.1.0, then the pods inside that node will have internal IPs between 10.10.1.1 and 10.10.1.254. But in my Google Kubernetes Engine cluster it is very different and I don't see any relation between them: the pod rc-server-1x769 has IP 10.0.0.8, while its corresponding node gke-kubia-default-pool-6f6eb62a-qv25 has 10.160.0.7.
2. How do I release the external IPs assigned to my worker nodes?

For Q2:
GKE manages the VMs created in your cluster, so if they go down or if the cluster needs to scale down or up, VMs are recreated with the same characteristics. I do not believe what you are asking (releasing those external IPs) is possible. You will need to consider a private cluster instead.
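If a private cluster is an option, a minimal sketch of creating one (nodes then receive only internal IPs) could look like the following; the cluster name, zone and master CIDR are placeholders:

# Nodes in a private GKE cluster get no external IP addresses.
# "my-private-cluster", the zone and the master CIDR below are placeholders.
gcloud container clusters create my-private-cluster \
    --zone us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28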

Pod CIDR and cluster CIDR are different entities.
Pod-to-Pod communication happens within the Pod CIDR, not within the cluster CIDR.
Your nodes should have interfaces that correspond to your Pod CIDR, but from the cluster's point of view they have their cluster (node) IPs, which is what kubectl shows.
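You can compare the two address spaces directly; for example (the jsonpath assumes each node object carries a podCIDR, which GKE sets):

# Node internal IPs (node network)
kubectl get nodes -o wide

# Pod IPs (pod network) plus the node each pod runs on
kubectl get pods -o wide

# Pod CIDR recorded on each node object, if your CNI stores it there
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'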

Related

EKS provisioned LoadBalancers reference all nodes in the cluster. If the pods reside on 1 or 2 nodes is this efficient as the ELB traffic increases?

In Kubernetes (on AWS EKS), when I create a service of type LoadBalancer, the resulting EC2 load balancer is associated with all nodes (instances) in the EKS cluster, even though the selector in the service will only find the pods running on 1 or 2 of these nodes (i.e. a much smaller subset of nodes).
I am keen to understand whether this will be efficient as the volume of traffic increases.
I could not find any advice on this topic and am keen to understand if this is the correct approach.
This could introduce additional SNAT if a request arrives at a node on which the pod is not running, and it also does not preserve the source IP of the request. You can change externalTrafficPolicy to Local, which associates only the nodes that have matching pods with the LoadBalancer.
You can get more information from the following links:
Preserve source IP
EKS load balancer support
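As an illustrative sketch (the service name, selector and ports are placeholders), the policy is set on the Service itself:

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # placeholder name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local     # only nodes running matching pods receive traffic
  selector:
    app: my-app                    # placeholder selector
  ports:
    - port: 80
      targetPort: 8080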
On EKS, if you are using the AWS VPC CNI (the default for EKS), then you can use the aws-alb-ingress-controller to create an ELB or ALB.
When creating the load balancer you can use the annotation below, so that traffic is routed only to your pods:
alb.ingress.kubernetes.io/target-type: ip
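A minimal sketch of an Ingress carrying that annotation (the host, service name and port are placeholders, and the exact API version depends on your cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                    # placeholder name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip     # register pod IPs directly as targets
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app                      # placeholder backend service
                port:
                  number: 80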
Reference:
https://github.com/aws/amazon-vpc-cni-k8s
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/#target-type

Kubernetes: Same node but different IPs

I have created a K8s cluster on GCP, and I deployed an application.
Then I scaled it:
kubectl scale deployment hello-world-rest-api --replicas=3
Now when I run 'kubectl get pods', I see three pods. Their NODE value is the same, which I understand means they are all deployed on the same machine. But I observe that the IP value is different for each of the three.
If NODE is the same, then why are the IPs different?
There are several networks in a k8s cluster. The pods are on the pod network, so every pod deployed on the nodes of a k8s cluster can see each other as though they are independent nodes on a network. The pod address space is different from the node address space. So, each pod running on a node gets a unique address from the pod network, which is also different from the node network. The k8s components running on each node perform the address translation.
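You can see both address spaces from kubectl; the output below is only illustrative (your names and addresses will differ):

kubectl get pods -o wide
# NAME                         READY   STATUS    IP          NODE
# hello-world-rest-api-abc12   1/1     Running   10.0.0.8    gke-node-1   <- pod network
# hello-world-rest-api-def34   1/1     Running   10.0.0.9    gke-node-1
# hello-world-rest-api-ghi56   1/1     Running   10.0.0.10   gke-node-1

kubectl get nodes -o wide
# NAME         STATUS   INTERNAL-IP
# gke-node-1   Ready    10.160.0.7                             <- node network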

How to provide for 2 different IP ranges? --pod-network-cidr= for multiple IP ranges

I have 2 different IP sets in the same network. My kubeadm master is in a different IP range than my other nodes. How should I set the property here: kubeadm init --pod-network-cidr=
cat /etc/hosts
#kubernetes slaves ebdp-ch2-d587p.sys.***.net 172.26.0.194, ebdp-ch2-d588p.sys.***.net 172.26.0.195
10.248.43.214 kubemaster
172.26.0.194 kube2
172.26.0.195 kube3
--pod-network-cidr is for the IPs of the pods that Kubernetes will manage. It is not related to the nodes of the cluster.
For nodes, the requirement is (from the Kubernetes docs):
Full network connectivity between all machines in the cluster (public or private network is fine)
In addition to @Yavuz Sert's answer, the --pod-network-cidr flag identifies the Container Network Interface (CNI) IP pool used for Pod communication within a Kubernetes cluster. You have to choose a separate IP subnet for Pod networking; it has to be different from your existing network ranges. Once --pod-network-cidr has been applied, kube-proxy reflects the Pod IP subnet and adds the appropriate routes for network communication between Pods through the cluster overlay network. Indeed, you can find a clusterCIDR flag within the kube-proxy ConfigMap which corresponds to --pod-network-cidr.
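For example (the 10.244.0.0/16 range is only a suggestion; pick anything that overlaps neither 10.248.43.0/24 nor 172.26.0.0/24):

# On the master: choose a pod range that overlaps neither node subnet
kubeadm init --pod-network-cidr=10.244.0.0/16

# Afterwards, verify the range kube-proxy was given
kubectl -n kube-system get configmap kube-proxy -o yaml | grep clusterCIDR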

IP addressing of pods in Kubernetes

How do pods get unique IP addresses even if they reside on the same worker node?
Also, a pod is not a device, so what is the logic behind giving it an IP address?
Is the IP address assigned to a pod a virtual IP?
A pod is part of a cluster (group of nodes), and cluster networking tells you that:
In reality, Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address.
This means that containers within a Pod can all reach each other’s ports on localhost.
This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM.
This is called the “IP-per-pod” model.
The constraints are:
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
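To illustrate the shared network namespace, here is a hedged sketch of a two-container Pod (the image names and port are only examples): the sidecar reaches the web server via localhost because both containers share the Pod's single IP.

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo            # illustrative name
spec:
  containers:
    - name: web
      image: nginx                   # example server listening on port 80
    - name: sidecar
      image: curlimages/curl         # example image that ships with curl
      # localhost below is the same interface nginx binds to,
      # because the containers share the Pod's network namespace.
      command: ["sh", "-c", "while true; do curl -s http://localhost:80/ >/dev/null; sleep 5; done"]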
See more with "Networking with Kubernetes" from Alok Kumar Singh:
Here:
We have a machine, it is called a node in kubernetes.
It has an IP 172.31.102.105 belonging to a subnet having CIDR 172.31.102.0/24.
(CIDR: Classless Inter-Domain Routing, a method for allocating IP addresses and IP routing)
The node has a network interface eth0 attached. It belongs to the root network namespace of the node.
For pods to be isolated, they were created in their own network namespaces — these are pod1 n/w ns and pod2 n/w ns.
The pods are assigned IP addresses 100.96.243.7 and 100.96.243.8 from the CIDR range 100.96.0.0/11.
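Conceptually, this is roughly what happens on the node; a hand-rolled sketch with made-up names and addresses, not the literal commands any particular CNI plugin runs:

# Create an isolated network namespace for a "pod" (name is made up)
ip netns add pod1

# Create a veth pair: one end stays in the root namespace, the other moves into pod1
ip link add veth-pod1 type veth peer name eth0-pod1
ip link set eth0-pod1 netns pod1

# Give the pod end an address from the pod CIDR, not from the node's 172.31.102.0/24
ip netns exec pod1 ip addr add 100.96.243.7/24 dev eth0-pod1
ip netns exec pod1 ip link set eth0-pod1 up
ip link set veth-pod1 up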
For how those pod IPs are actually wired up, see "Kubernetes Networking" from CloudNativelabs:
Kubernetes does not orchestrate setting up the network and offloads the job to the CNI (Container Network Interface) plug-ins. Please refer to the CNI spec for further details on CNI specification.
Below are possible network implementation options through CNI plugins which permits pod-to-pod communication honoring the Kubernetes requirements:
layer 2 (switching) solution
layer 3 (routing) solution
overlay solutions
layer 2 (switching)
You can see each pod's IP allocated as part of a container subnet address range.
layer 3 (routing)
This is about populating the default gateway router with routes for the per-node pod subnets, as shown in the diagram in that post.
Routes to 10.1.1.0/24 and 10.1.2.0/24 are configured to be through node1 and node2 respectively.
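In plain routing terms that amounts to something like the following on the gateway (the next-hop node addresses are assumptions made for the example):

# Pod subnet of node1 is reachable via node1, and likewise for node2
ip route add 10.1.1.0/24 via 192.168.1.11    # 192.168.1.11 = assumed node1 address
ip route add 10.1.2.0/24 via 192.168.1.12    # 192.168.1.12 = assumed node2 address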
overlay solutions
Generally not used.
Note: See also (Oct. 2018): "Google Kubernetes Engine networking".
Kubernetes creates a network within your network for the containers. In GKE, for example, by default it is a /14, but it can be overridden by the user with a range between /11 and /19.
When Kubernetes creates a pod, it assigns it an IP address from this range. Now, you can't have another VM in your network, outside the cluster, with the same IP address that a pod has.
Why? Imagine, you have a VPN tunnel that needs to deliver a packet to an address that both, the pod and the VM are using. Who is it going to deliver to?
So, answering your question: no, it is not a virtual IP, it is a physical IP address from your network.
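For reference, that pod range can be chosen at cluster-creation time; a hedged example (the cluster name, zone and range are placeholders, and the range must fall between /11 and /19):

# Pick the pod address range explicitly instead of accepting the default /14
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --cluster-ipv4-cidr 10.8.0.0/16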

Changing IP addresses within a Kubernetes cluster

I set up Kubernetes on Ubuntu with one master and two worker nodes. I use the IPv4 subnet 192.168.2.0/24 for the IP addresses of the master and the nodes.
So, the kube-api server talks via 192.168.2.108 to the nodes on that subnet.
Now, I would like to change the subnet within the cluster to 192.168.50.0/24. What kind of options do I have to achieve that?
Is this possible anyway?
Thx
You can specify the subnets for pods and services to the kube-controller-manager at start-up.
See https://kubernetes.io/docs/reference/generated/kube-controller-manager/
--service-cluster-ip-range string
--cluster-cidr string
These will set the virtual IP ranges for services and pods respectively.
Worker nodes will inherit pod range from the master.
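If the cluster was set up with kubeadm, the same ranges live in the ClusterConfiguration; a hedged sketch (the CIDRs are placeholders, and changing them on a running cluster generally means re-initialising it):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16       # placeholder; surfaces as --cluster-cidr on the controller manager
  serviceSubnet: 10.96.0.0/12    # placeholder; surfaces as --service-cluster-ip-range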