kubernetes: providing explicit CIDRs for pods/services

A Kubernetes cluster, unless I am wrong, incorporates 3 networks:
the physical network connecting the master(s)/worker(s)
a virtual network interconnecting the pods (where all pods can reach each other)
a virtual network where the services are exposed
My question is whether at some point (e.g. when creating the cluster via, say, kops) one can provision specific CIDRs for the two virtual nets.

When you execute kubeadm init --pod-network-cidr 10.244.0.0/16, you provision the CIDR for the pod network. This network is used by Flannel or another CNI add-on to make pods routable. Service IPs, however, do not need to be routable.
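To answer the original question for both virtual networks, here is a minimal sketch with kubeadm (the ranges are only examples; --service-cidr defaults to 10.96.0.0/12, as noted in an answer further down):

# Pod network CIDR: consumed by the CNI add-on (Flannel, Calico, ...)
# Service CIDR: the virtual range that Service ClusterIPs are allocated from
kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12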

Related

Kubernetes relation between worker node IP address and Pod IP

I have two questions.
All the tutorials on YouTube say that if the worker node's internal IP is 10.10.1.0, then the pods inside the node will have internal IPs between 10.10.1.1 and 10.10.1.254. But in my Google Kubernetes Engine cluster it is very different, and I don't see any relation between them.
The pod rc-server-1x769 has IP 10.0.0.8, but its corresponding node gke-kubia-default-pool-6f6eb62a-qv25 has 10.160.0.7.
How do I release the external IPs assigned to my worker nodes?
For Q2:
GKE manages the VMs created in your cluster, so if they go down or need to be scaled down/up, new VMs are created with the same characteristics. I do not believe what you are asking (releasing the external IPs) is possible. You will need to consider a private cluster.
The Pod CIDR and the cluster (node) network are different entities.
So Pod-to-Pod communication happens within the Pod CIDR, not within the node network.
Your nodes have interfaces that correspond to the Pod CIDR, but from the cluster's point of view they have their own node IPs (what kubectl shows for nodes).
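A quick way to see the two address spaces side by side (a sketch; exact output depends on the cluster and CNI):

kubectl get nodes -o wide   # node (VM) IPs
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'   # per-node pod CIDR
kubectl get pods -o wide    # pod IPs allocated from the pod CIDR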

Calico IPs Confusion

I am a bit confused about Calico IPs:
If I add Calico to the Kubernetes cluster using
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
The CALICO_IPV4POOL_CIDR is 192.168.0.0/16,
so the IP range is 192.168.0.0 to 192.168.255.255.
Now I have initialized the cluster using:
kubeadm init --pod-network-cidr=20.96.0.0/12 --apiserver-advertise-address=192.168.56.30
So now the pods will have IP addresses (from the pod network CIDR) between 20.96.0.0 and 20.111.255.255.
What are these two different IP ranges? My pods are getting IP addresses such as 20.96.205.192.
CALICO_IPV4POOL_CIDR is commented out by default; look at these lines in calico.yaml:
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
# value: "192.168.0.0/16"
In effect, unless manually uncommented and modified before deployment, those lines are not considered during deployment.
Another important line in the yaml itself is:
# Pod CIDR auto-detection on kubeadm needs access to config maps.
This confirms that the CIDR is obtained from the cluster, not from calico.yaml.
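One way to confirm this on a kubeadm cluster (a sketch; kubeadm stores its ClusterConfiguration, including the pod subnet, in the kubeadm-config ConfigMap):

kubectl -n kube-system get configmap kubeadm-config -o yaml | grep -i podSubnet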
What are these two different IP ranges? My pods are getting IP addresses such as 20.96.205.192.
Kubeadm supports many Pod network add-ons; Calico is one of them. Calico, on the other hand, is supported by many kinds of deployment; kubeadm is just one of those.
The kubeadm --pod-network-cidr flag in your deployment is the correct way to define the pod network CIDR; this is why the range 20.96.0.0/12 is effectively used.
CALICO_IPV4POOL_CIDR is required for other kinds of deployment that do not specify the CIDR pool reservation for pod networks.
Note:
The range 20.96.0.0/12 is not a private network range, and it can cause problems if a client with a public IP from that range tries to access your service.
The classless reserved IP ranges for Private Networks are:
10.0.0.0/8 (16,777,216 addresses)
172.16.0.0/12 (1,048,576 addresses)
192.168.0.0/16 (65,536 addresses)
You can use any subnet size inside these ranges for your pod CIDR network; just make sure it doesn't overlap with any subnet in your network.
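For non-kubeadm deployments where calico.yaml does set the pool, the change is just to uncomment and edit those two lines before applying (a sketch; 10.244.0.0/16 is an example private range):

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

kubectl apply -f calico.yaml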
Additional References:
Calico - Create Single Host Kubernetes Cluster with Kubeadm
Kubeadm Calico Installation
IETF RFC1918 - Private Address Space

what is the use of cluster IP in kubernetes

Can someone help me understand the IP address I see as the cluster IP when I list services?
What is the cluster IP (not the Service type, but the actual IP)?
How is it used?
Where does it come from?
Can I define the range for cluster IPs (like we do for the pod network)?
Good question to start learning something new (also for me):
Your questions are related to kube-proxy, which by default works in iptables mode in a Kubernetes cluster.
Every node in a Kubernetes cluster runs a kube-proxy. Kube-proxy is responsible for implementing a form of virtual IP for Services.
In this mode, kube-proxy watches the Kubernetes control plane for the addition and removal of Service and Endpoint objects. For each Service, it installs iptables rules, which capture traffic to the Service’s clusterIP and port, and redirect that traffic to one of the Service’s backend sets. For each Endpoint object, it installs iptables rules which select a backend Pod.
From the node components documentation, on kube-proxy:
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it’s available. Otherwise, kube-proxy forwards the traffic itself.
As described here:
Due to these iptables rules, whenever a packet is destined for a service IP, it’s DNATed (DNAT=Destination Network Address Translation), meaning the destination IP is changed from service IP to one of the endpoints pod IP chosen at random by iptables. This makes sure the load is evenly distributed among the backend pods.
When this DNAT happens, this info is stored in conntrack — the Linux connection tracking table (stores 5-tuple translations iptables has done: protocol, srcIP, srcPort, dstIP, dstPort). This is so that when a reply comes back, it can un-DNAT, meaning change the source IP from the Pod IP to the Service IP. This way, the client is unaware of how the packet flow is handled behind the scenes.
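You can watch this machinery on a node directly (a hedged sketch; KUBE-SERVICES is the top-level chain kube-proxy installs in iptables mode, and <service-ip> is a placeholder):

sudo iptables -t nat -L KUBE-SERVICES -n | head   # rules capturing ClusterIP traffic
sudo conntrack -L -d <service-ip>                 # tracked translations for that Service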
There are also different modes; you can find more information here.
During cluster initialization you can use the --service-cidr string parameter (default: "10.96.0.0/12").
ClusterIP: The IP address assigned to a Service
Kubernetes assigns a stable, reliable IP address to each newly-created Service (the ClusterIP) from the cluster's pool of available Service IP addresses. Kubernetes also assigns a hostname to the ClusterIP, by adding a DNS entry. The ClusterIP and hostname are unique within the cluster and do not change throughout the lifecycle of the Service. Kubernetes only releases the ClusterIP and hostname if the Service is deleted from the cluster's configuration. You can reach a healthy Pod running your application using either the ClusterIP or the hostname of the Service.
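For example, listing services shows the ClusterIP allocated from that pool (the output shape below is typical; the values will differ per cluster):

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d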
Pod IP: The IP address assigned to a given Pod.
Kubernetes assigns an IP address (the Pod IP) to the virtual network interface in the Pod's network namespace from a range of addresses reserved for Pods on the node. This address range is a subset of the IP address range assigned to the cluster for Pods, which you can configure when you create a cluster.
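On GKE, which the quoted passage describes, that pod address range can be set when the cluster is created; a minimal sketch using gcloud's --cluster-ipv4-cidr flag (my-cluster is a placeholder name):

gcloud container clusters create my-cluster --cluster-ipv4-cidr=10.0.0.0/14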
Resources:
Iptables Mode
Network overview
Understanding Kubernetes Kube-Proxy
Hope this helped
The cluster IP is the address where your service can be reached from inside the cluster. You won't be able to reach the cluster IP from an external network unless you do some kind of SSH tunneling. This IP is auto-assigned by k8s, and a range can be defined (the --service-cidr option mentioned above), though you rarely need to do so.
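For completeness, a common way to reach a ClusterIP-only Service from a workstation for debugging is port-forwarding (a sketch; my-service and the ports are placeholders):

kubectl port-forward service/my-service 8080:80
# then, locally:
curl http://localhost:8080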

How to provide for 2 different IP ranges? --pod-network-cidr= for multiple IP ranges

I have 2 different IP sets in the same network. My kubeadm master is in a different IP range than my other nodes. How shall I set the property here: kubeadm init --pod-network-cidr=
cat /etc/hosts
#kubernetes slaves ebdp-ch2-d587p.sys.***.net 172.26.0.194, ebdp-ch2-d588p.sys.***.net 172.26.0.195
10.248.43.214 kubemaster
172.26.0.194 kube2
172.26.0.195 kube3
--pod-network-cidr is for the IPs of the pods that Kubernetes will manage. It is not related to the nodes of the cluster.
For nodes, the requirement is (from Kubernetes doc):
Full network connectivity between all machines in the cluster (public or private network is fine)
In addition to @Yavuz Sert's answer: the --pod-network-cidr flag identifies the Container Network Interface (CNI) IP pool used for Pod communication within a Kubernetes cluster. You have to choose a separate IP subnet for Pod networking; it has to be different from your currently used network sets. Once --pod-network-cidr has been applied successfully, kube-proxy reflects the Pod IP subnet and adds the appropriate routes for network communication between Pods through the cluster overlay network. Indeed, you can find the clusterCIDR flag within the kube-proxy ConfigMap, which corresponds to --pod-network-cidr.
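You can verify that mapping yourself (a sketch; on kubeadm installs the ConfigMap lives in kube-system):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep clusterCIDR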

IP addressing of pods in Kubernetes

How do pods get unique IP addresses even if they reside on the same worker node?
Also, a pod is not a device, so what is the logic behind giving it an IP address?
Is the IP address assigned to a pod a virtual IP?
A pod is part of a cluster (group of nodes), and cluster networking tells you that:
In reality, Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address.
This means that containers within a Pod can all reach each other’s ports on localhost.
This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM.
This is called the “IP-per-pod” model.
The constraints are:
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
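A small illustration of the IP-per-pod model described above: the two containers in the Pod below share one network namespace, so the sidecar reaches nginx on localhost (a hedged sketch; the names and images are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: shared-ip-demo
spec:
  containers:
  - name: web
    image: nginx          # serves on port 80 inside the shared namespace
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "sleep 5 && wget -qO- http://localhost:80 && sleep 3600"]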
See more in "Networking with Kubernetes" from Alok Kumar Singh, which walks through the following setup:
We have a machine, it is called a node in kubernetes.
It has an IP 172.31.102.105 belonging to a subnet having CIDR 172.31.102.0/24.
(CIDR: Classless Inter-Domain Routing, a method for allocating IP addresses and IP routing)
The node has a network interface eth0 attached. It belongs to the root network namespace of the node.
For pods to be isolated, they were created in their own network namespaces — these are pod1 n/w ns and pod2 n/w ns.
The pods are assigned IP addresses 100.96.243.7 and 100.96.243.8 from the CIDR range 100.96.0.0/11.
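What the CNI plugin does for each pod can be approximated by hand with Linux network namespaces (a simplified sketch of the mechanism, not what any particular plugin literally runs; the names and the per-node /24 slice are made up):

ip netns add pod1                                   # the pod's own network namespace
ip link add veth-host type veth peer name veth-pod  # veth pair linking pod to node
ip link set veth-pod netns pod1                     # move one end into the pod's ns
ip netns exec pod1 ip addr add 100.96.243.7/24 dev veth-pod   # pod IP from the pod CIDR
ip netns exec pod1 ip link set veth-pod up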
For more, see "Kubernetes Networking" from CloudNativelabs:
Kubernetes does not orchestrate setting up the network and offloads the job to the CNI (Container Network Interface) plug-ins. Please refer to the CNI spec for further details on CNI specification.
Below are possible network implementation options through CNI plugins which permit pod-to-pod communication honoring the Kubernetes requirements:
layer 2 (switching) solution
layer 3 (routing) solution
overlay solutions
layer 2 (switching)
Pods get their IPs allocated from a container subnet address range.
layer 3 (routing)
This is about populating the default gateway router with routes for the subnet as shown in the diagram.
Routes to 10.1.1.0/24 and 10.1.2.0/24 are configured to be through node1 and node2 respectively.
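Written out as routes (a sketch; <node1-ip> and <node2-ip> stand for the nodes' addresses on the physical network):

ip route add 10.1.1.0/24 via <node1-ip>   # node1's pod subnet
ip route add 10.1.2.0/24 via <node2-ip>   # node2's pod subnet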
overlay solutions
Generally not used.
Note: See also (Oct. 2018): "Google Kubernetes Engine networking".
Kubernetes creates a network within your network for the containers. In GKE, for example, by default it is a /14, but it can be overridden by the user with a range between /11 and /19.
When Kubernetes creates a pod, it assigns an IP address from this range. Now, you can't have another VM, not part of your cluster, in your network, with the same IP address that a pod has.
Why? Imagine, you have a VPN tunnel that needs to deliver a packet to an address that both, the pod and the VM are using. Who is it going to deliver to?
So, answering your question: no, it is not a virtual IP, it is a physical IP address from your network.