Kubernetes: Same node but different IPs

I have created a K8s cluster on GCP, and I deployed an application.
Then I scaled it:
kubectl scale deployment hello-world-rest-api --replicas=3
Now when I run kubectl get pods, I see three pods, and their NODE value is the same. I understand this means they are all deployed on the same machine. But I observe that the IP value is different for all three.
If the NODE is the same, why are the IPs different?

There are several networks in a k8s cluster. The pods are on the pod network: every pod deployed on the nodes of a k8s cluster can see every other pod as though they were independent hosts on a network. The pod address space is separate from the node address space, so each pod running on a node gets a unique address from the pod network, which is different from the node network. The networking components running on each node (the network plugin and kube-proxy) handle the routing and address translation between the two.
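You can see this yourself with the wide output format. The listing below is only illustrative (pod names, IPs, and the node name are hypothetical, and some columns are trimmed for brevity), but it shows the pattern: three pods on one node, each with its own pod-network IP, none of which matches the node's IP:

kubectl get pods -o wide
NAME                                    READY   STATUS    IP          NODE
hello-world-rest-api-58f9cf855f-2xvvr   1/1     Running   10.12.1.5   gke-cluster-default-pool-abc123
hello-world-rest-api-58f9cf855f-9qklm   1/1     Running   10.12.1.6   gke-cluster-default-pool-abc123
hello-world-rest-api-58f9cf855f-ztw8d   1/1     Running   10.12.1.7   gke-cluster-default-pool-abc123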

Related

Kubernetes relation between worker node IP address and Pod IP

I have two questions.
All the tutorials on YouTube say that if the worker node's internal IP is 10.10.1.0, then the pods inside the node will have internal IPs between 10.10.1.1 and 10.10.1.254. But in my Google Kubernetes Engine cluster it is very different, and I don't see any relation between them.
rc-server-1x769's IP is 10.0.0.8, but its corresponding node gke-kubia-default-pool-6f6eb62a-qv25 has 10.160.0.7.
How do I release the external IPs assigned to my worker nodes?
For Q2:
GKE manages the VMs created in your cluster, so if they go down or the cluster needs to scale up or down, new VMs are created with the same characteristics. I do not believe releasing those external IPs is possible. You will need to consider a private cluster instead.
The pod CIDR and the cluster CIDR are different entities, so pod-to-pod communication happens within the pod CIDR, not within the cluster CIDR. Your nodes have interfaces that correspond to the pod CIDR, but from the cluster's point of view (the kubectl output) they are listed with their cluster IPs.
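You can inspect each node's pod CIDR directly; spec.podCIDR is a standard Node field, though some network plugins manage per-node ranges out of band, so it may be empty on some setups:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'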

Can a worker node in kubernetes run two different pods?

In Kubernetes we can run pods on worker nodes, and pods share the node's resources. But if we run two different pods on the same worker node, does that mean the two pods will have different IP addresses?
To answer the main question: yes. A node can and does run different pods. Even if you have only one Deployment, you can run
kubectl describe nodes my-node
Or even
kubectl get pods --all-namespaces
to see some of the pods that Kubernetes runs for its control plane on each node.
About the second question, it really depends on your deployment. I'd recommend reading about kube-proxy, which is itself a pod running on every node (another answer to your first question) and is in charge of the networking layer and communication within the cluster:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
Each pod will have its own IP address within that node, and there are ways to communicate with pods directly:
https://superuser.openstack.org/articles/review-of-pod-to-pod-communications-in-kubernetes/
https://kubernetes.io/docs/concepts/cluster-administration/networking/
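As a quick illustration of that direct pod-to-pod communication (the pod names, IP, and port below are hypothetical, and the image must contain curl), you can look up one pod's IP and call it from another pod:

kubectl get pod pod-b -o jsonpath='{.status.podIP}'
# e.g. 10.12.1.6
kubectl exec pod-a -- curl -s http://10.12.1.6:8080/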

EKS provisioned LoadBalancers reference all nodes in the cluster. If the pods reside on 1 or 2 nodes, is this efficient as the ELB traffic increases?

In Kubernetes (on AWS EKS), when I create a service of type LoadBalancer, the resultant EC2 LoadBalancer is associated with all nodes (instances) in the EKS cluster, even though the selector in the service will only find the pods running on 1 or 2 of those nodes (i.e. a much smaller subset).
I am keen to understand whether this will be efficient as the volume of traffic increases.
I could not find any advice on this topic and am keen to understand if this is the correct approach.
This can introduce additional SNAT when a request arrives at a node on which the pod is not running, and it also does not preserve the source IP of the request. You can change externalTrafficPolicy to Local, which associates only the nodes that actually run matching pods with the LoadBalancer.
You can get more information from the following links.
Preserve source IP
EKS load balancer support
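A minimal sketch of such a service (the name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: hello-world-rest-api
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # send traffic only to nodes with matching pods; preserves source IP
  selector:
    app: hello-world-rest-api
  ports:
  - port: 80
    targetPort: 8080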
On EKS, if you are using the AWS VPC CNI (the default for EKS), then you can use the aws-alb-ingress-controller to create ELBs and ALBs.
While creating the load balancer you can use the annotation below, so that traffic is routed only to your pods:
alb.ingress.kubernetes.io/target-type: ip
Reference:
https://github.com/aws/amazon-vpc-cni-k8s
https://github.com/kubernetes-sigs/aws-alb-ingress-controller
https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/#target-type
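For illustration, a minimal Ingress using that annotation (the resource name, backend service name, and port are placeholders; this sketch uses the current networking.k8s.io/v1 Ingress API, while older versions of the controller used extensions/v1beta1). With target-type: ip the ALB registers the pod IPs themselves as targets instead of the node instances:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip   # register pod IPs as targets, not node instances
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world-rest-api
            port:
              number: 80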

How to provide for 2 different IP ranges? --pod-network-cidr= for multiple IP ranges

I have 2 different IP ranges in the same network. My kubeadm master is in a different IP range than my other nodes. How shall I set the property here: kubeadm init --pod-network-cidr=
cat /etc/hosts
#kubernetes slaves ebdp-ch2-d587p.sys.***.net 172.26.0.194, ebdp-ch2-d588p.sys.***.net 172.26.0.195
10.248.43.214 kubemaster
172.26.0.194 kube2
172.26.0.195 kube3
--pod-network-cidr is for the IPs of the pods that Kubernetes will manage. It is not related to the nodes of the cluster.
For the nodes, the requirement is (from the Kubernetes docs): full network connectivity between all machines in the cluster (a public or private network is fine).
In addition to @Yavuz Sert's answer: the --pod-network-cidr flag identifies the Container Network Interface (CNI) IP pool used for pod-to-pod communication within a Kubernetes cluster. You have to choose a separate IP subnet for pod networking, different from any network you are already using. Once --pod-network-cidr has been applied successfully, kube-proxy reflects the pod IP subnet and adds the appropriate routes for communication between pods through the cluster overlay network. Indeed, you can find a clusterCIDR flag within the kube-proxy ConfigMap that corresponds to --pod-network-cidr.
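For example (the subnet is only illustrative; 192.168.0.0/16 happens to avoid both node ranges shown above, 10.248.43.x and 172.26.0.x):

kubeadm init --pod-network-cidr=192.168.0.0/16

You can confirm the value kube-proxy picked up afterwards:

kubectl -n kube-system get configmap kube-proxy -o yaml | grep clusterCIDR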

kubectl, information about node from inside docker

I am running ipyparallel in a Kubernetes cluster. I have several pods running on one node, which is fine. But for my computation I want to help ipyparallel with load balancing by choosing pods evenly across all nodes.
Is there a way to get this information from inside the pods/Docker?
You could use a Kubernetes Service, which does round-robin load balancing.
If you need the IP addresses, you could do a DNS A or SRV record lookup and get the IPs of all running instances: http://kubernetes.io/docs/admin/dns/
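A headless Service (clusterIP: None) is the usual way to get one DNS A record per pod. A minimal sketch, assuming the engine pods carry the label app: ipyparallel-engine (all names and the port are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: engines
spec:
  clusterIP: None          # headless: DNS returns one A record per backing pod
  selector:
    app: ipyparallel-engine
  ports:
  - port: 8888

From inside any pod, nslookup engines (or engines.<namespace>.svc.cluster.local) then lists the pod IPs; you can map those back to nodes via the Kubernetes API if needed.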