Should Windows K8s nodes have aws-node & kube-proxy pods?

I have this mixed cluster which shows all the nodes as Ready (both Windows & Linux ones). However, only the Linux nodes have aws-node & kube-proxy pods. I RDPed into a Windows node and can see a kube-proxy service.
My question remains: do the Windows nodes need aws-node & kube-proxy pods in the kube-system namespace or do they work differently than Linux ones?

kube-proxy pods are part of the default installation of Kubernetes. They are created automatically and are needed on both Linux and Windows nodes.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
[source]
The aws-node pod is part of the AWS CNI plugin for Kubernetes:
The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name aws-node. The plugin consists of two primary components:
[...]
[source]
It is currently only supported on Linux. Windows nodes use a different CNI plugin, vpc-shared-eni.
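If you want to verify this on your own cluster, a couple of read-only kubectl commands will show it (assuming an EKS-style setup where the CNI DaemonSet is named aws-node):

```shell
# List kube-system pods together with the node each one is scheduled on,
# so you can compare what runs on the Linux vs. the Windows nodes.
kubectl get pods -n kube-system -o wide

# Inspect the aws-node DaemonSet's OS selector; on EKS it targets
# Linux nodes only, which is why Windows nodes never get an aws-node pod.
kubectl get daemonset aws-node -n kube-system -o yaml | grep -A 3 'kubernetes.io/os'
```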

Related

Can kubernetes have different container runtimes/version in worker nodes?

We have an application that requires a different container runtime than what we currently have in our Kubernetes cluster. Our cluster is deployed via kubeadm on bare metal; the K8s version is 1.21.3.
Can we install a different container runtime or version on a single worker node?
Just wanted to verify, despite knowing that the k8s design is modular enough (CRI, CNI, etc.).

Azure kubernetes kube-proxy explanation

I'm kind of new to Kubernetes, and I would like to understand the purpose of kube-proxy in an Azure AKS/regular cluster.
From what I understand, kube-proxy is updated by the cluster API server based on the various deployment configurations, and it in turn updates the iptables rules in the Linux kernel that are responsible for routing traffic between pods and services.
Am I missing something important?
Thanks!!
Basically, the kube-proxy component runs on each node to provide network features. It runs as a Kubernetes DaemonSet and its configuration is stored in a Kubernetes ConfigMap. You can edit the kube-proxy DaemonSet or ConfigMap in the kube-system namespace using the commands:
$ kubectl -n kube-system edit daemonset kube-proxy
or
$ kubectl -n kube-system edit configmap kube-proxy
kube-proxy currently supports three different operation modes:
User space: This mode gets its name because the service routing takes place in kube-proxy in the user process space instead of in the kernel network stack. It is not commonly used, as it is slow and outdated.
iptables: This mode uses Linux kernel-level Netfilter rules to configure all routing for Kubernetes Services. This mode is the default for kube-proxy on most platforms. When load balancing across multiple backend pods, it picks a backend at random with equal probability.
IPVS (IP Virtual Server): Built on the Netfilter framework, IPVS implements Layer-4 load balancing in the Linux kernel, supporting multiple load-balancing algorithms, including least connections and shortest expected delay. This kube-proxy mode became generally available in Kubernetes 1.11, but it requires the Linux kernel to have the IPVS modules loaded. It is also not as widely supported by various Kubernetes networking projects as the iptables mode.
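To check which mode your kube-proxy is actually running in, you can inspect its ConfigMap or ask a running instance directly (port 10249 is kube-proxy's default metrics port; adjust if your cluster overrides it):

```shell
# An empty or missing "mode" field means the platform default
# (iptables on most Linux clusters).
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -w 'mode'

# From a node, kube-proxy reports its active mode on its metrics endpoint.
curl http://localhost:10249/proxyMode
```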
Take a look: kube-proxy, kube-proxy-article, aks-kube-proxy.
Read also: proxies-in-kubernetes.

How does kube-apiserver know where the kubelet service/process is running on a worker node?

I have bootstrapped (following Kubernetes the Hard Way by Kelsey Hightower) a k8s cluster in VirtualBox with 2 masters, 2 workers, and 1 LB in front of the 2 masters' kube-apiservers. BTW, kubelet is not running on the masters, only on the worker nodes.
Now the cluster is up and running, but I am not able to understand how the kube-apiserver on a master connects to the kubelet to fetch a node's metric data etc.
Could you please explain in detail?
The Kubernetes API server is not aware of kubelets, but kubelets are aware of the Kubernetes API server. The kubelet registers the node and reports metrics to the API server, which persists them into the etcd key-value store. Kubelets use a kubeconfig file to communicate with the API server; this kubeconfig file has the endpoint of the API server. The communication between the kubelet and the API server is secured with mutual TLS.
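You can see the result of this registration yourself: the addresses stored in each Node object are what the API server uses when it needs to reach a kubelet (for logs, exec, metrics, and so on). `<node-name>` below is a placeholder for one of your worker nodes:

```shell
# Nodes that kubelets have registered with the API server.
kubectl get nodes -o wide

# The addresses recorded at registration time (InternalIP, Hostname, ...).
kubectl get node <node-name> -o jsonpath='{.status.addresses}'
```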
In Kubernetes the Hard Way, the control plane components (API Server, Scheduler, Controller Manager) are run as systemd units, and that's why there is no kubelet running on the control plane nodes; if you run the kubectl get nodes command, you will not see the master nodes listed, as there is no kubelet to register them.
A more standard way to deploy the control plane components (API Server, Scheduler, Controller Manager) is as static pods managed by the kubelet rather than as systemd units; that's how kubeadm deploys the Kubernetes control plane.
Official documentation on Master to Cluster communication.

Is it possible to have kube-proxy without the Kubernetes environment in a VM pod, using Istio mesh expansion?

I have been working on a very innovative project which involves both Kubernetes and Istio. I have a 2-node Kubernetes cluster set up with Istio installed and its sidecars in the pods. I have already hosted the Bookinfo application on the nodes, but with part of it on a separate VM, following the procedures given in Istio Mesh Expansion.
So I have a VM where the details and mysqldb pods are present, while the other pods run in the k8s cluster, and they communicate over a private network.
The next phase of the project requires me to set up kube-proxy separately, without installing Kubernetes on the VM, so as to allow it to communicate directly with the kube-api server running on the master nodes of the k8s cluster through the private network. Can anybody suggest a way to go about this?
All components of Kubernetes need to be connected to the kube-api server; otherwise, they will not work.
So my next phase of project would require me to setup Kube-proxy separately without installing Kubernetes in the VM, so as to allow it to directly communicate to the Kube-Api Server running in the master nodes of the k8s cluster through the private network.
To access the kube-api server through a Service with a private ClusterIP address, you would already need a kube-proxy. So it is impossible to use any private ClusterIP address until you set up a kube-proxy that communicates with your kube-api server by an address outside the ClusterIP range.
The kube-api server can be exposed using a NodePort or LoadBalancer type of Service.
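As a sketch of that last point: once the kube-api server is reachable on an address outside the ClusterIP range (e.g. a NodePort or a load balancer in front of the masters), you would start kube-proxy on the VM pointing at that endpoint via a kubeconfig. The path and CIDR below are illustrative values, not taken from the question:

```shell
# Run kube-proxy on the VM against the API server's external endpoint.
# The kubeconfig's "server:" field must point at that external address,
# not at a ClusterIP.
kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr=10.200.0.0/16
```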

How to install a Kubernetes cluster on a Rancher cluster?

I am using Rancher server across several servers:
Master
Node1
Node2
Node3
Maybe I only need the Rancher agent on the node servers.
I also want to make a Kubernetes cluster on these servers, so I would install the Kubernetes master on the Rancher master and the Kubernetes nodes (kubelet) on the Rancher nodes. Is that right?
Or is it that the Kubernetes nodes can't be installed using the Rancher server, and I should do it myself?
You will need a Rancher agent on any server you want Rancher to place containers on. Rancher can deploy Kubernetes for you. I believe what you want to do is add all of the nodes, including the Rancher master, to a single Cattle environment (the default env is Cattle). When adding the Rancher server, make sure you set CATTLE_AGENT_IP=. Once the hosts are registered, you will want to set host labels on the nodes. For nodes 1, 2, 3 you will set the label compute=true. On the Rancher server host you will set 2 host labels: etcd=true and orchestration=true.
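For reference, a hypothetical Rancher 1.x agent registration with the labels baked in via the CATTLE_HOST_LABELS environment variable; the image tag and the registration URL/token come from your own Rancher UI, so treat those values below as placeholders:

```shell
# Register a compute node; repeat on nodes 1, 2, 3.
sudo docker run -d --privileged \
  -e CATTLE_HOST_LABELS='compute=true' \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/rancher:/var/lib/rancher \
  rancher/agent:v1.2.11 http://<rancher-server>:8080/v1/scripts/<token>

# On the Rancher server host, use the management labels instead,
# joined with '&':
#   -e CATTLE_HOST_LABELS='etcd=true&orchestration=true'
```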
Once the labels are set up, click on Catalog and search for Kubernetes. You can probably stick with most defaults, but CHANGE plane isolation to required.
Rancher should deploy the Kubernetes management servers on the same host as your Rancher server, and the remaining nodes will be Kubernetes minions.