I am trying to understand how Kubernetes knows whom to call to get an IP address for a pod. Is it mentioned in a ConfigMap?
Can you share any pointers to learn more on this?
I think it has been explained pretty well in this article:
Automating Kubernetes Networking with CNI
Kubernetes uses CNI plug-ins to orchestrate networking. Every time a
POD is initialized or removed, the default CNI plug-in is called with
the default configuration.
The CNI plugin creates a pseudo-interface, attaches it to the relevant underlay network, and sets up the IP address and routes, which are mapped into the Pod's network namespace. Once it receives the information about the deployed container, it becomes responsible for the IP address, the iptables rules, and the routing on the node.
The process itself varies between CNIs, so details such as how iptables rules are created and how routing information is exchanged between nodes differ from plugin to plugin.
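To make that concrete, here is a minimal sketch of the CNI configuration as the kubelet finds it on a node. The file name, subnet, and plugin choice are illustrative (this uses the reference bridge plugin with host-local IPAM); your cluster will differ, and some CNIs render this file from a ConfigMap, but it is this on-node file that tells the runtime which plugin and IPAM module to call:
/etc/cni/net.d/10-mynet.conflist
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}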
It is a lot to write and it has already been written elsewhere, so I will just link the pointers you requested:
Calico IPAM
Calico:
How do I configure the Pod IP range?
When using Calico IPAM, IP addresses are assigned from IP Pools.
By default, all enabled IP Pools are used. However, you can specify
which IP Pools to use for IP address management in the CNI network
config, or on a per-Pod basis using Kubernetes annotations.
Flannel networking with IPAM section
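To make the Calico IP Pools mentioned above concrete, a pool is just a cluster-level resource; a minimal sketch, with a placeholder name and CIDR:
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: my-pool              # hypothetical pool name
spec:
  cidr: 10.222.0.0/16        # pod addresses are assigned from this range
  natOutgoing: true          # SNAT traffic leaving the pool toward external networks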
Related
Why do we need point-to-point connections between pods when we have workload abstractions and networking mechanisms (Service/kube-proxy/Ingress etc.) over them?
What is the default CNI?
REDACTED: I was confused about this question because I felt like I hadn't installed any of the popular CNI plugins when I was installing Kubernetes. It turns out Kubernetes defaults to kubenet.
Btw, I see a lot of overlapping features between Istio and container networks. IMO they could achieve identical objectives. The only difference is that Istio is high-level while CNI is low-level and more efficient, is that correct?
REDACTED: Interestingly, Istio has its own CNI.
Kubernetes networking has some requirements:
pods on a node can communicate with all pods on all nodes without NAT
agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
pods in the host network of a node can communicate with all pods on all nodes without NAT
CNI (Container Network Interface) defines a standard interface, and all implementations (Calico, Flannel, etc.) have to follow it.
Its aim is to satisfy this Kubernetes networking model.
A Service is different: it supplies a stable virtual address that proxies to the pods, since pods are ephemeral and their IPs change, while the Service's address stays fixed.
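As a rough sketch of that idea (all names here are hypothetical), a Service gives a stable name and ClusterIP in front of whichever Pods currently match its selector:
apiVersion: v1
kind: Service
metadata:
  name: backend              # hypothetical name; resolvable as backend.<namespace>.svc.cluster.local
spec:
  selector:
    app: backend             # traffic goes to any Pod carrying this label, wherever it runs
  ports:
  - port: 80                 # stable port on the Service's virtual IP
    targetPort: 8080         # port the Pod containers actually listen on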
Istio is another thing entirely: it turns the connections between microservices into infrastructure and pulls that concern out of the business code (think Spring Cloud).
Why do we need point-to-point connections between pods when we have workload abstractions and networking mechanisms (Service/kube-proxy/Ingress etc.) over them?
In general, you will find everything about networking in a cluster in this documentation. You can find more information about pod networking:
Every Pod gets its own IP address. This means you do not need to explicitly create links between Pods and you almost never need to deal with mapping container ports to host ports. This creates a clean, backwards-compatible model where Pods can be treated much like VMs or physical hosts from the perspectives of port allocation, naming, service discovery, load balancing, application configuration, and migration.
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
pods on a node can communicate with all pods on all nodes without NAT
agents on a node (e.g. system daemons, kubelet) can communicate with all pods on that node
Note: For those platforms that support Pods running in the host network (e.g. Linux):
pods in the host network of a node can communicate with all pods on all nodes without NAT
Then you are asking:
What is the default CNI?
There is no single default CNI in a Kubernetes cluster. It depends on the kind of cluster you have and on where and how you set it up. As you can see in this doc about implementing the networking model, there are many CNIs available for Kubernetes.
Istio is a completely different tool for something else. You can't compare them like that. Istio is a service mesh tool.
Istio extends Kubernetes to establish a programmable, application-aware network using the powerful Envoy service proxy. Working with both Kubernetes and traditional workloads, Istio brings standard, universal traffic management, telemetry, and security to complex deployments.
I'm currently using the 10.222.0.0/16 network for my pods in a single-node test cluster.
When I reboot the machine or redeploy pods, they get the first IP address that has not been used previously. I want to prevent this by assigning static IPs to pods with Calico.
How can I achieve this?
Generally, that approach goes against the dynamic nature of Kubernetes' IP layer. However, there is a solution in the Project Calico docs:
Choose the IP address for a pod instead of allowing Calico to choose
automatically.
Bear in mind that:
You must be using the Calico IPAM.
If you are not sure, ssh to one of your Kubernetes nodes and examine
the CNI configuration.
cat /etc/cni/net.d/10-calico.conflist
Look for the entry:
"ipam": {
"type": "calico-ipam"
},
If it is present, you are using the Calico IPAM. If the IPAM is set to
something else, or the 10-calico.conflist file does not exist, you
cannot use these features in your cluster.
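If the check above shows calico-ipam, you can request a specific address with a Pod annotation, as the linked Calico doc describes. A minimal sketch, where the pod name and address are placeholders and the address must come from one of your configured IP Pools:
apiVersion: v1
kind: Pod
metadata:
  name: static-ip-pod                                    # hypothetical name
  annotations:
    cni.projectcalico.org/ipAddrs: "[\"10.222.0.50\"]"   # the fixed address Calico should assign
spec:
  containers:
  - name: app
    image: nginx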
I am trying to understand Kubernetes and how it works under the hood. As I understand it, each pod gets its own IP address. What I am not sure about is what kind of IP address that is.
Is it something that the network admins at my company need to hand out? Or is it an internal kind of IP address that is not addressable on the full network?
I have read about network overlays (like Project Calico) and I assume they play a role in this, but I can't seem to find a page that explains the connection. (I think my question is too remedial for the internet.)
Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?
Kubernetes clusters
Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?
The thing with Kubernetes is that it is not a single service like, say, a Virtual Machine, but a cluster that has its own networking functionality and management, including IP address allocation and network routing.
Your nodes may be virtual or physical machines, but they are registered in the NodeController, e.g. for health checks and, most relevant here, for IP address management.
The node controller is a Kubernetes master component which manages various aspects of nodes.
The node controller has multiple roles in a node’s life. The first is assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on).
Cluster Architecture - Nodes
IP address management
Kubernetes Networking depends on the Container Network Interface (CNI) plugin your cluster is using.
A CNI plugin is responsible for ... It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.
It is common for each node to be assigned a CIDR range of IP addresses, from which the node then assigns addresses to the pods scheduled on it.
The GKE network overview describes well how this works on GKE:
Each node has an IP address assigned from the cluster's Virtual Private Cloud (VPC) network.
Each node has a pool of IP addresses that GKE assigns Pods running on that node (a /24 CIDR block by default).
Each Pod has a single IP address assigned from the Pod CIDR range of its node. This IP address is shared by all containers running within the Pod, and connects them to other Pods running in the cluster.
Each Service has an IP address, called the ClusterIP, assigned from the cluster's VPC network.
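On clusters that allocate per-node ranges this way, you can read a node's range straight from its spec (the node name is a placeholder):
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'
# prints something like 10.244.1.0/24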
Kubernetes Pods receive a real IP address, much as Docker containers do thanks to the bridge network interface. The really hard thing to understand is the Pod-to-Pod connection between different nodes, plus the magic performed by kube-proxy with the help of iptables/nftables/IPVS (depending on which mode you run on the node).
The IP address assigned to a Service of kind ClusterIP is a different story: it is a virtual IP, used to transparently redirect traffic to the endpoints as needed.
Kubernetes networking can look difficult to understand, but we're lucky: Tim Hockin gave a really good talk named Life of a Packet that provides a clear overview of how it works.
I am trying to install Kubernetes on my on-premise Ubuntu 16.04 server, referring to the following documentation:
https://medium.com/@Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36
After installing kubelet, kubeadm, and kubernetes-cni, I found that I have to initialize kubeadm with the following command:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.133.15.28 --kubernetes-version stable-1.8
Here I am totally confused about why we are setting the CIDR and the API server advertise address. I am listing a few of my points of confusion here:
Why are we specifying the CIDR and --apiserver-advertise-address here?
How can I find these two addresses for my server?
And why is Flannel used in a Kubernetes installation?
I am new to this containerization and Kubernetes world.
Why are we specifying the CIDR and --apiserver-advertise-address here?
And why is Flannel used in a Kubernetes installation?
Kubernetes uses the Container Network Interface (CNI) to create a special virtual network inside your cluster for communication between pods.
Here is some explanation of the "why" from the documentation:
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model.
So, Flannel is one of the CNI plugins that can be used to create the network connecting all your pods, and the CIDR option defines the subnet for that network. There are many alternative CNIs with similar functions.
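For illustration, the --pod-network-cidr you pass to kubeadm has to match the network Flannel is configured with. Flannel reads it from the net-conf.json held in its kube-flannel ConfigMap, which, assuming the default vxlan backend, looks roughly like this:
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}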
If you want more details about how networking works in Kubernetes, you can read the link above or, for example, here.
How can I find these two addresses for my server?
The API server advertise address has to be a single, static address. That address is used by all components to communicate with the API server. Unfortunately, Kubernetes has no support for multiple API server addresses per master.
You can still have as many addresses on your server as you want, but only one of them can be set as --apiserver-advertise-address. The only requirement is that it has to be reachable from all the nodes in your cluster.
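To pick the advertise address, list the addresses your server actually has and choose the one the other nodes can reach; a couple of standard commands (nothing Kubernetes-specific):
ip -4 addr show            # all IPv4 addresses per interface
ip route get 8.8.8.8       # shows which source address the default route uses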
If I'm running processes in two pods that communicate with each other over TCP (addressing each other through Kubernetes Services), and the pods are scheduled to the same node, will the communication take place over the network, or will Kubernetes know to use the loopback device?
In a Kubernetes cluster, a pod can be scheduled on any node in the cluster. Another pod that wants to access it should ideally not know where this pod is running or what its pod IP address is. Kubernetes provides a basic service discovery mechanism by assigning DNS names to Kubernetes Services (which are associated with pods). When a pod wants to talk to another pod, it should use the DNS name (e.g. svc1.namespace1.svc.cluster.local).
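For example, from inside any pod you can resolve such a name (the service, namespace, and pod names are placeholders, and the image has to ship nslookup or a similar resolver tool); it returns the Service's ClusterIP rather than an individual pod IP:
kubectl exec -it some-pod -- nslookup svc1.namespace1.svc.cluster.local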
loopback is not mentioned in "community/contributors/design-proposals/network/networking"
Because every pod gets a "real" (not machine-private) IP address, pods can communicate without proxies or translations. The pod can use well-known port numbers and can avoid the use of higher-level service discovery systems like DNS-SD, Consul, or Etcd.
When any container calls ioctl(SIOCGIFADDR) (get the address of an interface), it sees the same IP that any peer container would see them coming from — each pod has its own IP address that other pods can know.
By making IP addresses and ports the same both inside and outside the pods, we create a NAT-less, flat address space. Running "ip addr show" should work as expected. This would enable all existing naming/discovery mechanisms to work out of the box, including self-registration mechanisms and applications that distribute IP addresses.
We should be optimizing for inter-pod network communication.
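You can check this flat, NAT-less behaviour on a running cluster; assuming a pod named mypod whose image includes the ip tool, the address the pod sees on its own interface is the same pod IP the API reports:
kubectl get pod mypod -o jsonpath='{.status.podIP}'
kubectl exec mypod -- ip addr show eth0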
Using IP was already mentioned last year in "Kubernetes - container communication within a pod using names instead of 'localhost'?"