I've had the opportunity to install k8s clusters on CentOS VMs. In most cases, I used flanneld as the overlay. In some other cases, though, I noticed flannel pods in the kube-system namespace. IMHO, we need not have both flanneld and flannel pods for the underlying CNI to function properly with kubernetes.
I have read plenty of documentation on how the flannel overlay fits into the kubernetes ecosystem. However, I haven't found the answers to some questions. Hope somebody can provide pointers.
What is the basis for choosing flanneld or flannel pod?
Are there any differences in functionality between flanneld and flannel pod?
How does the flannel pod provide CNI functionality? My understanding is that the pod populates etcd with IP address key/value pairs, but how is this info really used?
Do most CNI plugins have a choice between running as daemon or pod?
You are right, you don't need both of them because they do the same job. There is no functional difference between them; the only difference is where the daemon runs: in an isolated container (as a pod) or on the host as a regular system daemon. All CNI plugins are based on the CNI library and route the traffic. Flannel uses etcd as its key-value store: if you have etcd inside the kubernetes cluster it will use that, and if etcd is external it will use the external etcd. It is simply a matter of what you prefer. For example, if you are running an external etcd, people usually run flanneld as a system daemon.
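As a rough illustration of the two deployment styles (the flag names come from flannel's documentation, but the DaemonSet name, namespace, and etcd endpoint below are only examples and may differ in your cluster):

# flannel running as a pod (DaemonSet), reading per-node subnets from the Kubernetes API:
$ kubectl -n kube-system get daemonset kube-flannel-ds -o yaml | grep -A3 'args:'
        args:
        - --ip-masq
        - --kube-subnet-mgr

# flanneld running as a host daemon, reading its configuration from an external etcd:
$ flanneld --etcd-endpoints=https://etcd-1:2379 --etcd-prefix=/coreos.com/network

In both cases the same binary does the same work; only where it runs and where it reads its subnet configuration from differ.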
I have this mixed cluster which shows all the nodes as Ready (both Windows & Linux ones). However, only the Linux nodes have aws-node & kube-proxy pods. I RDPed into a Windows node and can see a kube-proxy service.
My question remains: do the Windows nodes need aws-node & kube-proxy pods in the kube-system namespace or do they work differently than Linux ones?
kube-proxy pods are part of the default installation of Kubernetes. They are automatically created, and are needed on both Linux and Windows.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
[source]
The aws-node pod is part of the AWS CNI plugin for Kubernetes:
The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name aws-node. The plugin consists of two primary components:
[...]
[source]
It is currently only supported on Linux. Windows nodes use a different CNI plugin - vpc-shared-eni.
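To see where each DaemonSet pod actually landed, you can list the kube-system pods per node (a quick check; replace the placeholder with the name of one of your Windows nodes):

$ kubectl get pods -n kube-system -o wide --field-selector spec.nodeName=<windows-node-name>

On a Windows node you would not expect to see aws-node there, while kube-proxy may run either as a pod or as a Windows host service, depending on how the node was bootstrapped.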
We have an application that requires a different container runtime than what we have right now in our kubernetes cluster. Our cluster is deployed via kubeadm on our bare-metal cluster. The k8s version is 1.21.3.
Can we install different container runtime or version to a single worker node?
Just wanted to verify, despite knowing that the k8s design is modular enough (CRI, CNI, etc.).
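For what it's worth, the runtime is reported per node, so a mixed setup shows up directly in the node listing (just a quick way to see what each worker is running; the columns come from kubectl itself):

$ kubectl get nodes -o wide

The CONTAINER-RUNTIME column shows the runtime and version each node has registered with, which at least confirms that the runtime is a per-node property rather than a cluster-wide one.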
I'm kinda new to Kubernetes, and I would like to understand the purpose of kube-proxy in an Azure AKS/regular cluster.
From what I understand, kube-proxy is updated by the cluster API server based on the various deployment configurations, and it then updates the iptables stack in the Linux kernel that is responsible for the traffic routes between pods and services.
Am I missing something important?
Thanks!!
Basically, the kube-proxy component runs on each node to provide network features. It is run as a Kubernetes DaemonSet and its configuration is stored in a Kubernetes ConfigMap. You can edit the kube-proxy DaemonSet or ConfigMap in the kube-system namespace using the commands:
$ kubectl -n kube-system edit daemonset kube-proxy
or
$ kubectl -n kube-system edit configmap kube-proxy
kube-proxy currently supports three different operation modes:
User space: This mode gets its name because the service routing takes place in kube-proxy in the user process space instead of in the kernel network stack. It is not commonly used as it is slow and outdated.
iptables: This mode uses Linux kernel-level Netfilter rules to configure all routing for Kubernetes Services. This mode is the default for kube-proxy on most platforms. When load balancing for multiple backend pods, it uses unweighted round-robin scheduling.
IPVS (IP Virtual Server): Built on the Netfilter framework, IPVS implements Layer-4 load balancing in the Linux kernel, supporting multiple load-balancing algorithms, including least connections and shortest expected delay. This kube-proxy mode became generally available in Kubernetes 1.11, but it requires the Linux kernel to have the IPVS modules loaded. It is also not as widely supported by various Kubernetes networking projects as the iptables mode.
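To check which mode your cluster is actually using, you can look at the mode field in the kube-proxy ConfigMap (a quick sketch; the ConfigMap layout can vary between distributions, and an empty value means the platform default, usually iptables):

$ kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode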
Take a look: kube-proxy, kube-proxy-article, aks-kube-proxy.
Read also: proxies-in-kubernetes.
Is there an easy way to enable Network Policies in single-node k8s cluster managed by Docker Desktop for Mac?
A single-node k8s cluster managed by Docker Desktop for Mac is simply a VM provisioned by the Docker for Mac daemon that is then bootstrapped with a Kubernetes cluster. Docker has extended this solution in some ways to make it easier for developers to use, but it is effectively similar to using Minikube.
A NetworkPolicy is a Kubernetes resource and, as you have discovered, it is not enforced in your environment by default. This is because the NetworkPolicy resource requires a controller to be installed to enable the enforcement of NetworkPolicy rules after they have been declared. Many applications can provide this functionality; the most common way is to install a CNI like Calico.
After you do this, Calico will be able to enforce the NetworkPolicy rules you have defined, and they will automatically move from the Pending to the Ready state in the cluster.
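A rough sketch of what that looks like on Docker Desktop (the Calico manifest URL and version are illustrative; check the Calico docs for the current one, and note that networking quirks in the Docker Desktop VM sometimes require extra Calico configuration):

$ kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml
$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF

Once the Calico pods are Running, the policy above blocks all ingress traffic to pods in the default namespace except what later policies explicitly allow.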
I am using flannel on kubernetes.
On every node, there is a flannel interface and a cni interface.
I.e., if I use 10.244.0.0 as the subnet, then
flannel 10.244.3.0
cni 10.244.3.1
They almost always come as a pair like above.
The question is: if I use flannel, must the number of nodes be less than or equal to 255? (10.244.1~255.0)
That is, can I only manage 255 nodes on kubernetes with flannel?
Flannel's network range is configurable in its net-conf.json; see the recommended kubernetes deployment of flannel 0.8.0 for clarification. The actual subnet given to a node is set on node join by the Kubernetes node controller and is fetched by flannel via the Kubernetes API server on startup, before network creation, when the --kube-subnet-mgr option of the flannel daemon is set.
I am not familiar with the implementation of the Kubernetes node controller, but I suspect it would assign smaller subnets to the nodes if the third octet of the CIDR is exhausted. If you want to be absolutely sure, set your flannel network to something like 10.0.0.0/8, depending on the number of nodes and pods.
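You can also see what the node controller actually handed out by listing the per-node pod CIDRs (a quick check that assumes the controller manager allocates node CIDRs, as in a standard kubeadm setup; the output below is illustrative):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
node-1	10.244.0.0/24
node-2	10.244.1.0/24
node-3	10.244.3.0/24

With a 10.244.0.0/16 cluster CIDR and the default /24 per-node mask, that gives at most 256 node subnets; widening the cluster CIDR, or changing the per-node mask via the controller manager's --node-cidr-mask-size flag, raises that limit.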