Static IP address for Kubernetes pods with Calico CNI

I'm currently using the 10.222.0.0/16 network for my pods in a single-node cluster test environment.
When I reboot the machine or redeploy pods, they get the first IP address that has not been used previously. I want to prevent this from happening by assigning static IPs to pods with Calico.
How can I achieve this?

Generally, that approach goes against the dynamic nature of Kubernetes' IP layer. However, there is a solution in the Project Calico docs:
Choose the IP address for a pod instead of allowing Calico to choose
automatically.
Bear in mind that:
You must be using the Calico IPAM.
If you are not sure, ssh to one of your Kubernetes nodes and examine
the CNI configuration.
cat /etc/cni/net.d/10-calico.conflist
Look for the entry:
"ipam": {
"type": "calico-ipam"
},
If it is present, you are using the Calico IPAM. If the IPAM is set to
something else, or the 10-calico.conflist file does not exist, you
cannot use these features in your cluster.
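Following those docs, the specific address is requested with the cni.projectcalico.org/ipAddrs annotation on the pod. A minimal sketch, assuming Calico IPAM is in use (the pod name, image and the 10.222.77.10 address are placeholders; the address must fall inside one of your enabled IP Pools):
apiVersion: v1
kind: Pod
metadata:
  name: static-ip-pod                         # placeholder name
  annotations:
    # ask Calico IPAM for this exact address instead of an automatic one
    cni.projectcalico.org/ipAddrs: '["10.222.77.10"]'
spec:
  containers:
  - name: app
    image: nginx                              # placeholder image
Keep in mind the annotation has to be on the Pod itself, so it goes in the pod template when using a Deployment or similar.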

Related

Add iptables rules from a pod, inside the pod itself and also on the worker node

I have a requirement where I'll deploy a pod, and when it comes up I need to add some iptables rules inside the pod.
At the same time, I need to add some iptables rules on the worker node on which the pod is running.
If I use the "hostNetwork" option for the pod, the iptables rules which I need to add in the pod will also get added to the worker node.
How can this be achieved, where the pod itself adds iptables rules both inside the pod and on the worker node?
Not recommended. ⛔
Basically, kube-proxy and the network overlay rely heavily on iptables to make things happen in Kubernetes. Adding your own iptables rules could work, but you would have to watch everything Kubernetes does and make sure anything you do doesn't conflict with it.
There is no specific tool that would help you manage all of the iptables rules that Kubernetes creates alongside the ones you create. This would only work at the node level; there is no such thing as adding iptables rules at the pod level.
You could use networking objects like Network Policies to restrict traffic, or if you are using Calico you can use its more advanced version of them, as sketched below.
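For illustration, a minimal NetworkPolicy of the kind mentioned above could look like this (the namespace, labels and port are hypothetical, just to show the shape of the object):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only                   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                            # pods the policy applies to
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend                       # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080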
Another option is simply to have an external firewall that restricts some traffic to the nodes in your Kubernetes cluster.
✌️

How does Kubernetes decide which network plugin to call for IPAM?

I am trying to understand how Kubernetes knows which plugin to call to get an IP address for a pod. Is this mentioned in a ConfigMap?
Can you share any pointers to learn more about this?
I think it has been explained pretty well in this article:
Automating Kubernetes Networking with CNI
Kubernetes uses CNI plug-ins to orchestrate networking. Every time a
POD is initialized or removed, the default CNI plug-in is called with
the default configuration.
The CNI plugin creates a pseudo-interface, attaches it to the relevant underlay network, and sets up the IP address and routes, all mapped into the Pod's network namespace. Once it gets the information about the deployed container, it becomes responsible for the IP address, iptables rules and routing on the node.
The process itself varies between CNIs, for example in how iptables rules are created and how routing information is exchanged between nodes.
It is a lot to write, and it has already been written, so I will just link the pointers you requested:
Calico IPAM
Calico:
How do I configure the Pod IP range?
When using Calico IPAM, IP addresses are assigned from IP Pools.
By default, all enabled IP Pools are used. However, you can specify
which IP Pools to use for IP address management in the CNI network
config, or on a per-Pod basis using Kubernetes annotations.
Flannel networking with IPAM section
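As a quick sketch of the per-Pod annotation mentioned in the Calico quote above, the IP Pools to allocate from can be named with cni.projectcalico.org/ipv4pools (the pool name and pod details below are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: pool-pinned-pod                       # hypothetical name
  annotations:
    # ask Calico IPAM to allocate this pod's address from a specific IP Pool
    cni.projectcalico.org/ipv4pools: '["my-custom-pool"]'
spec:
  containers:
  - name: app
    image: nginx                              # placeholder image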

Steps of Kubernetes CNI when using Flannel

I have been setting up Kubernetes with kubeadm and I have used Flannel to set up the pod network. The setup basically worked, but I have been running into all kinds of problems (and bugs), and now I am trying to gain a better understanding of the different steps involved in the network setup process (e.g. CNI and Flannel).
From an end-user/admin perspective, I simply pass --pod-network-cidr with some network argument to kubeadm, and then later I apply a pod configuration for Flannel using kubectl. Kubernetes will then start a Flannel pod on each of my nodes. Assuming everything worked, Flannel should then use the Container Network Interface (CNI) of Kubernetes to set up the pod network.
As a result of this process I should get a pod network which includes the following:
A cni0 bridge.
A flannel.x interface.
iptables entries to route between the host and the pod network.
The following files and binaries seem to be involved in the setup:
1. The kubelet reads a CNI configuration such as /etc/cni/net.d/10-flannel.conflist and invokes the CNI plugin described in the config file.
2. Somehow a folder /var/lib/cni is being created, which seems to contain configuration files for the network setup.
3. A CNI plugin such as /opt/cni/bin/flannel is run; I don't yet understand what it does.
What am I missing from this list, and how does (2.) fit into these steps? How does /var/lib/cni get created, and which program is responsible for this?
As I can see from the CNI code:
var (
    CacheDir = "/var/lib/cni"
)
this folder is used as a cache directory by CNI, and it looks like it is created by the CNI plugin.
Here you can find detailed documentation about CNI.
What is CNI?
CNI (Container Network Interface), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus, CNI has a wide range of support and the specification is simple to implement.
Projects like Calico and Flannel use CNI as a base; that is why they are called CNI plugins.
Here you can find documentation about how Kubernetes interacts with CNI.
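For reference, the /etc/cni/net.d/10-flannel.conflist mentioned in the question typically looks roughly like this (this is the shape shipped by the kube-flannel manifest; exact contents vary between versions):
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
The kubelet picks up the config file in /etc/cni/net.d and calls each plugin listed under "plugins" in turn; the flannel plugin reads /run/flannel/subnet.env (written by the flanneld daemon) and delegates to the bridge plugin, which is what creates the cni0 bridge from your list.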

Defining the CIDR address and advertise-address in a Kubernetes installation

I am trying to install Kubernetes on my on-premises Ubuntu 16.04 server, and I am referring to the following documentation:
https://medium.com/@Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36
After installing kubelet, kubeadm and kubernetes-cni, I found that I need to initialize kubeadm with the following command:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.133.15.28 --kubernetes-version stable-1.8
Here I am totally confused about why we are setting the CIDR and the API server advertise address. Here are my points of confusion:
Why are we specifying the CIDR and --apiserver-advertise-address here?
How can I find these two addresses for my server?
And why is Flannel used in the Kubernetes installation?
I am new to this containerization and Kubernetes world.
Why are we specifying the CIDR and --apiserver-advertise-address here?
And why is Flannel used in the Kubernetes installation?
Kubernetes uses the Container Network Interface (CNI) to create a special virtual network inside your cluster for communication between pods.
Here is some explanation of the "why" from the documentation:
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model.
So, Flannel is one of the CNI plugins which can be used to create the network that connects all your pods, and the CIDR option defines a subnet for that network. There are many alternative CNI plugins with similar functions.
If you want more details about how networking works in Kubernetes, you can read the link above or, for example, here.
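As a concrete illustration of how the CIDR and Flannel fit together: the kube-flannel manifest ships a ConfigMap whose net-conf.json has to match the --pod-network-cidr you pass to kubeadm, and 10.244.0.0/16 is simply Flannel's default there. A trimmed version (names and namespace vary between Flannel versions):
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }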
How can I find these two addresses for my server?
The API server advertise address has to be a single, static address. That address is used by all components to communicate with the API server. Unfortunately, Kubernetes has no support for multiple API server addresses per master.
You can still use as many addresses on your server as you want, but only one of them can be defined as --apiserver-advertise-address. The only requirement is that it has to be accessible from all the nodes in your cluster.
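If it helps, on newer kubeadm versions the same two settings from the command above can also be written in a config file and passed with kubeadm init --config. This is just a sketch using the current kubeadm config API; the field names differ in the older releases discussed here:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  # the node address that has to be reachable from every node in the cluster
  advertiseAddress: 10.133.15.28
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  # has to match the network configured in your CNI plugin (Flannel's default here)
  podSubnet: 10.244.0.0/16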

DaemonSets on Google Container Engine (Kubernetes)

I have a Google Container Engine cluster with 21 nodes. There is one pod in particular that I need to always be running on a node with a static IP address (for outbound purposes).
Kubernetes supports DaemonSets
This is a way to have a pod be deployed to a specific node (or a set of nodes) by giving the node a label that matches the nodeSelector in the DaemonSet. You can then assign a static IP to the VM instance that the labeled node is on. However, GKE doesn't appear to support the DaemonSet kind.
$ kubectl create -f go-daemonset.json
error validating "go-daemonset.json": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl create -f go-daemonset.json --validate=false
unable to recognize "go-daemonset.json": no kind named "DaemonSet" is registered in versions ["" "v1"]
When will this functionality be supported and what are the workarounds?
If you only want to run the pod on a single node, you actually don't want to use a DaemonSet. DaemonSets are designed for running a pod on every node, not a single specific node.
To run a pod on a specific node, you can use a nodeSelector in the pod specification, as documented in the Node Selection example in the docs.
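A minimal sketch of that approach, assuming you first label the node that owns the static IP with something like kubectl label nodes <node-name> static-egress=true (the label, pod name and image are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: egress-pinned-pod                     # hypothetical name
spec:
  nodeSelector:
    static-egress: "true"                     # hypothetical label on the node with the static IP
  containers:
  - name: app
    image: nginx                              # placeholder image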
edit: But for anyone reading this who does want to run something on every node in GKE, there are two things I can say:
First, DaemonSet will be enabled in GKE in version 1.2, which is planned for March. It isn't enabled in GKE in version 1.1 because it wasn't considered stable enough at the time 1.1 was cut.
Second, if you want to run something on every node before 1.2 is out, we recommend creating a replication controller with a number of replicas greater than your number of nodes and asking for a hostPort in the container spec. The hostPort will ensure that no more than one pod from the RC will be run per node.
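A sketch of that pre-1.2 workaround (the names, image and port are hypothetical; replicas just needs to exceed the 21 nodes mentioned above):
apiVersion: v1
kind: ReplicationController
metadata:
  name: per-node-agent                        # hypothetical name
spec:
  replicas: 25                                # more replicas than nodes; the extras stay Pending
  selector:
    app: per-node-agent
  template:
    metadata:
      labels:
        app: per-node-agent
    spec:
      containers:
      - name: agent
        image: example/agent:latest           # hypothetical image
        ports:
        - containerPort: 8080
          hostPort: 8080                      # at most one pod per node can bind this host port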
DaemonSets are still an alpha feature, and Google Container Engine supports only production Kubernetes features. Workaround: build your own Kubernetes cluster (GCE, AWS, bare metal, ...) and enable alpha/beta features.