IP addressing of pods in Kubernetes

How do pods get unique IP addresses even if they reside on the same worker node?
Also, a pod is not a device, so what is the logic behind giving it an IP address?
Is the IP address assigned to a pod a virtual IP?

A pod is part of a cluster (group of nodes), and cluster networking tells you that:
In reality, Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespace - including their IP address.
This means that containers within a Pod can all reach each other’s ports on localhost.
This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM.
This is called the “IP-per-pod” model.
The constraints are:
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
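A quick way to see the IP-per-pod model on a running cluster (a minimal sketch; pod names, IPs, and the target port are illustrative):

# Each pod gets its own IP, even when several pods run on the same node.
kubectl get pods -o wide
# NAME    READY   STATUS    IP             NODE
# web-1   1/1     Running   100.96.243.7   worker-1
# web-2   1/1     Running   100.96.243.8   worker-1

# From one pod, another pod is reachable directly on its pod IP, without NAT
# (assumes the image contains wget and the target pod listens on port 8080).
kubectl exec web-1 -- wget -qO- http://100.96.243.8:8080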
For more, see "Networking with Kubernetes" from Alok Kumar Singh:
We have a machine; in Kubernetes it is called a node.
It has an IP address 172.31.102.105 belonging to a subnet with CIDR 172.31.102.0/24.
(CIDR: Classless Inter-Domain Routing, a method for allocating IP addresses and IP routing)
The node has a network interface eth0 attached. It belongs to the root network namespace of the node.
For the pods to be isolated, they are created in their own network namespaces — pod1 n/w ns and pod2 n/w ns.
The pods are assigned IP addresses 100.96.243.7 and 100.96.243.8 from the CIDR range 100.96.0.0/11.
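On the node itself you can peek into those namespaces. A rough sketch (namespace names depend on the container runtime, and some runtimes do not register them with ip netns at all):

# List network namespaces known to iproute2 (runtime-specific).
sudo ip netns list

# Show the interfaces and addresses inside one pod namespace
# (replace pod1-ns with a real namespace name from the listing above).
sudo ip netns exec pod1-ns ip addr show

# The root namespace keeps eth0 with the node IP (172.31.102.105 here), while
# each pod namespace has its own eth0 with a pod IP such as 100.96.243.7.
ip addr show eth0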
For the network implementation itself, see "Kubernetes Networking" from CloudNativelabs:
Kubernetes does not orchestrate setting up the network and offloads the job to the CNI (Container Network Interface) plug-ins. Please refer to the CNI spec for further details on CNI specification.
Below are the possible network implementation options through CNI plugins which permit pod-to-pod communication honoring the Kubernetes requirements:
layer 2 (switching) solution
layer 3 (routing) solution
overlay solutions
layer 2 (switching)
You can see the pod IPs allocated as part of a container subnet address range.
layer 3 (routing)
This is about populating the default gateway router with routes for the pod subnets, as shown in the diagram.
Routes to 10.1.1.0/24 and 10.1.2.0/24 are configured to go through node1 and node2 respectively.
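On a plain router that amounts to two static routes. A sketch (the node addresses 172.31.102.11 and 172.31.102.12 are placeholders):

# On the default gateway: send each pod subnet to the node that hosts it.
ip route add 10.1.1.0/24 via 172.31.102.11   # node1 hosts 10.1.1.0/24
ip route add 10.1.2.0/24 via 172.31.102.12   # node2 hosts 10.1.2.0/24

# Verify
ip route show | grep 10.1.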
overlay solutions
Generally not used.
Note: See also (Oct. 2018): "Google Kubernetes Engine networking".

Kubernetes creates a network within your network for the containers. In GKE, for example, by default it is a /14, but it can be overridden by the user with a range between /11 and /19.
When Kubernetes creates a pod, it assigns an IP address from this range. Now, you can't have another VM in your network, outside your cluster, with the same IP address that a pod has.
Why? Imagine you have a VPN tunnel that needs to deliver a packet to an address that both the pod and the VM are using. Which one should it deliver to?
So, answering your question: no, it is not a virtual IP; it is a physical IP address from your network.
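For example, on GKE the pod range can be chosen at cluster creation time instead of taking the default /14 (a sketch; the cluster name, zone, and CIDR are illustrative):

# The range must not overlap any other subnet in your VPC, for exactly the
# VPN-tunnel reason described above.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --cluster-ipv4-cidr 10.4.0.0/14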

Related

GCP cluster IP address is not the same as request's remoteAddr

I have a node in a Google Cloud Platform Kubernetes public cluster. When I make an HTTP request from my application to an external website, nginx on that website logs an IP address different from the IP address of my Kubernetes cluster. I can't figure out where that IP address comes from. I'm not using NAT in GCP.
I will just add some official terminology to shed some light on GKE networking before providing an answer.
Let's have a look at some GKE networking terminology:
The Kubernetes networking model relies heavily on IP addresses. Services, Pods, containers, and nodes communicate using IP addresses and ports. Kubernetes provides different types of load balancing to direct traffic to the correct Pods. All of these mechanisms are described in more detail later in this topic. Keep the following terms in mind as you read:
ClusterIP: The IP address assigned to a Service. In other documents, it may be called the "Cluster IP". This address is stable for the lifetime of the Service, as discussed in the Services section in this topic.
Pod IP: The IP address assigned to a given Pod. This is ephemeral, as discussed in the Pods section in this topic.
Node IP: The IP address assigned to a given node.
Additionally, you may have a look at the documentation on exposing your service, which may give you even more insight.
And to explain why you saw your node's IP: GKE uses IP masquerading:
IP masquerading is a form of network address translation (NAT) used to perform many-to-one IP address translations, which allows multiple clients to access a destination using a single IP address. A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses.
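On GKE this masquerading behavior is controlled by the ip-masq-agent. A minimal sketch of its configuration (the CIDR list is illustrative; traffic to these ranges keeps the pod IP as source, everything else is SNATed to the node IP):

# Create (or inspect) the ip-masq-agent ConfigMap in kube-system.
kubectl -n kube-system create configmap ip-masq-agent --from-literal=config='
nonMasqueradeCIDRs:
  - 10.0.0.0/8
  - 172.16.0.0/12
  - 192.168.0.0/16
masqLinkLocal: false
resyncInterval: 60s
'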

Kubernetes relation between worker node IP address and Pod IP

I have two questions.
All the tutorials on YouTube say that if the worker node's internal IP is 10.10.1.0, then the pods on that node will have internal IPs between 10.10.1.1 and 10.10.1.254. But in my Google Kubernetes Engine cluster it is very different, and I don't see any relation between them.
For example, the pod rc-server-1x769 has IP 10.0.0.8, but its corresponding node gke-kubia-default-pool-6f6eb62a-qv25 has 10.160.0.7.
How do I release the external IPs assigned to my worker nodes?
For Q2:
GKE manages the VMs created in your cluster, so if they go down or if the cluster needs to scale down or up, new VMs are created with the same characteristics. I do not believe what you are asking for (releasing those IPs) is possible. You will need to consider a private cluster instead.
The Pod CIDR and the cluster CIDR are different entities.
So Pod-to-Pod communication happens within the Pod CIDR, not within the cluster CIDR.
Your nodes have interfaces that correspond to the Pod CIDR, but from the cluster's point of view they have their node IPs (as shown in kubectl output).
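A quick way to compare the two on your own cluster (a sketch; spec.podCIDR may be empty when the CNI does its own IPAM):

# Pod CIDR assigned to each node:
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR

# Node addresses as the cluster sees them:
kubectl get nodes -o wide

# Pod IPs actually in use, for comparison:
kubectl get pods -A -o wide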

What is the use of the cluster IP in Kubernetes

Can someone help me understand the IP address I see as the cluster IP when I list Services?
What is the cluster IP (not the Service type, but the actual IP)?
How is it used?
Where does it come from?
Can I define the range for cluster IPs (like we do for the pod network)?
A good question to start learning something new (for me too):
Your questions relate to kube-proxy, which by default in a Kubernetes cluster works in iptables mode.
Every node in a Kubernetes cluster runs a kube-proxy. Kube-proxy is responsible for implementing a form of virtual IP for Services.
In this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoint objects. For each Service, it installs iptables rules, which capture traffic to the Service’s clusterIP and port, and redirect that traffic to one of the Service’s backend sets. For each Endpoint object, it installs iptables rules which select a backend Pod.
Node components kube-proxy:
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it’s available. Otherwise, kube-proxy forwards the traffic itself.
As described here:
Due to these iptables rules, whenever a packet is destined for a Service IP, it's DNATed (DNAT = Destination Network Address Translation), meaning the destination IP is changed from the Service IP to one of the endpoint Pod IPs, chosen at random by iptables. This makes sure the load is evenly distributed among the backend pods.
When this DNAT happens, this info is stored in conntrack — the Linux connection tracking table (it stores the 5-tuple translations iptables has done: protocol, srcIP, srcPort, dstIP, dstPort). This is so that when a reply comes back, it can be un-DNATed, i.e. the source IP is changed from the Pod IP back to the Service IP. This way, the client is unaware of how the packet flow is handled behind the scenes.
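Both halves of this can be observed on a node. A rough sketch (the chain names are the ones kube-proxy uses in iptables mode; the conntrack CLI has to be installed separately):

# Service entry points installed by kube-proxy in the nat table.
sudo iptables -t nat -L KUBE-SERVICES -n | head

# Per-Service (KUBE-SVC-*) and per-endpoint (KUBE-SEP-*) chains; the latter
# perform the actual DNAT to a pod IP.
sudo iptables-save -t nat | grep -E 'KUBE-(SVC|SEP)' | head

# The conntrack table storing the 5-tuples, used to un-DNAT the replies.
sudo conntrack -L | head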
There are also other proxy modes; you can find more information here.
During cluster initialization you can use the --service-cidr <string> parameter (default: "10.96.0.0/12").
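For example, with kubeadm (a sketch; the CIDRs are illustrative and must not overlap each other or the node network):

sudo kubeadm init \
  --service-cidr 10.96.0.0/12 \
  --pod-network-cidr 192.168.0.0/16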
ClusterIP: The IP address assigned to a Service
Kubernetes assigns a stable, reliable IP address to each newly-created Service (the ClusterIP) from the cluster's pool of available Service IP addresses. Kubernetes also assigns a hostname to the ClusterIP, by adding a DNS entry. The ClusterIP and hostname are unique within the cluster and do not change throughout the lifecycle of the Service. Kubernetes only releases the ClusterIP and hostname if the Service is deleted from the cluster's configuration. You can reach a healthy Pod running your application using either the ClusterIP or the hostname of the Service.
Pod IP: The IP address assigned to a given Pod.
Kubernetes assigns an IP address (the Pod IP) to the virtual network interface in the Pod's network namespace from a range of addresses reserved for Pods on the node. This address range is a subset of the IP address range assigned to the cluster for Pods, which you can configure when you create a cluster.
Resources:
Iptables Mode
Network overview
Understanding Kubernetes Kube-Proxy
Hope this helped.
The cluster IP is the address where your Service can be reached from inside the cluster. You won't be able to reach the cluster IP from the external network unless you do some kind of SSH tunneling. This IP is auto-assigned by Kubernetes, and it is possible to define the range (see the --service-cidr parameter above), though you rarely need to do so.
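A quick way to confirm that from inside the cluster (a sketch; the Service name and port are placeholders):

# Look up the ClusterIP of a Service.
kubectl get svc my-service -o jsonpath='{.spec.clusterIP}'

# Reach it from a throwaway pod; the same address is not routable from outside
# the cluster (replace <cluster-ip> with the value printed above).
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://<cluster-ip>:80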

How to provide for 2 different IP ranges? --pod-network-cidr= for multiple IP ranges

I have 2 different IP ranges in the same network. My kubeadm master is in a different IP range than my other nodes. How should I set the property here: kubeadm init --pod-network-cidr=
cat /etc/hosts
#kubernetes slaves ebdp-ch2-d587p.sys.***.net 172.26.0.194, ebdp-ch2-d588p.sys.***.net 172.26.0.195
10.248.43.214 kubemaster
172.26.0.194 kube2
172.26.0.195 kube3
--pod-network-cidr is for the IPs of the pods that Kubernetes will manage. It is not related to the nodes of the cluster.
For nodes, the requirement is (from the Kubernetes docs):
Full network connectivity between all machines in the cluster (a public or private network is fine)
In addition to Yavuz Sert's answer, the --pod-network-cidr flag identifies the Container Network Interface (CNI) IP pool used for Pod communication within a Kubernetes cluster. You have to choose a separate IP subnet for Pod networking; it has to be different from your existing network ranges. Once --pod-network-cidr has been applied successfully, kube-proxy reflects the Pod IP subnet and adds the appropriate routes for network communication between Pods through the cluster overlay network. Indeed, you can find the clusterCIDR flag within the kube-proxy ConfigMap, which corresponds to --pod-network-cidr.
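You can verify that correspondence directly on a kubeadm-provisioned cluster (a sketch; assumes kube-proxy is configured via the usual kube-proxy ConfigMap):

# The clusterCIDR value should match what was passed to --pod-network-cidr.
kubectl -n kube-system get configmap kube-proxy -o yaml | grep clusterCIDR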

Google Cloud deployment and Kubernetes node IP address change

We have had our database running on a Kubernetes cluster (deployed to our private network) in Google Cloud for a few months now. Last week we noticed that, for some reason, the IP addresses of all underlying nodes (VMs) changed. This caused an outage. We have been using the NodePort configuration of Kubernetes for our service to access our database (https://kubernetes.io/docs/concepts/services-networking/service/#nodeport).
We understand that the IP addresses of the pods within the VMs are dynamic and will eventually change, but we did not know that the IP addresses of the actual nodes (VMs) may also change. Is this normal? Does anyone know what can cause a VM IP address change in a Kubernetes cluster?
From the documentation about Ephemeral IP Addresses on GCP,
When you create an instance or forwarding rule without specifying an IP address, the resource is automatically assigned an ephemeral external IP address. Ephemeral external IP addresses are released from a resource if you delete the resource. For VM instances, if you stop the instance, the IP address is also released. Once you restart the instance, it is assigned a new ephemeral external IP address.
You can assign static external IP addresses to instances, but as Notauser mentioned, it is not recommended for Kubernetes nodes. This is because you may configure an autoscaler for your instance groups, and the number of nodes can shrink or grow.
Also, you would need to reserve a static IP address for each node, which is not recommended. Moreover, you would waste static IP address resources, and if the reserved static IP addresses are not used, you will still be charged for them.
Otherwise, you can configure an HTTP load balancer using an Ingress and then reserve a static IP address for your load balancer. Instead of using NodePort, you should use ClusterIP-type Services and create an Ingress rule forwarding the traffic to those Services.
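A minimal sketch of that setup on GKE (the Service name, Ingress name, and reserved-address name are placeholders; the static-IP annotation is the one commonly used with the GKE Ingress controller):

# Reserve a global static IP for the HTTP(S) load balancer.
gcloud compute addresses create web-static-ip --global

# Ingress pointing at a ClusterIP Service, pinned to the reserved address.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
spec:
  defaultBackend:
    service:
      name: my-clusterip-service   # placeholder ClusterIP Service
      port:
        number: 80
EOF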
If you are using a managed Kubernetes Engine (GKE) cluster, this is expected, as nodes are mortal and might be replaced or restarted if they become unresponsive, for example. Therefore the nodes' IPs will change. There is currently no way to assign a static (fixed) public IP to nodes. In this case you should expose your DB service as a ClusterIP Service instead; it will have an unchanged, stable IP. Here's an example of how to do that.
Alternatively, if you are using a non-managed Kubernetes cluster on Compute Engine (GCE), you simply have to promote your nodes' IPs to static.
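A hedged sketch of that promotion with gcloud (the instance name, zone, region, and address are placeholders):

# Find the ephemeral external IP currently attached to the node VM.
gcloud compute instances describe node-1 --zone us-central1-a \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'

# Promote it: reserving an address with the same IP in the same region turns
# the ephemeral IP into a static one without changing it.
gcloud compute addresses create node-1-static \
  --addresses 203.0.113.10 \
  --region us-central1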