I was curious whether we can use k8s and k3s together, as it would help me solve a complex edge-computing architecture.
I have a running k8s cluster set up with kubeadm. On the edge devices, I would like to run the lightweight k3s. Is it possible to use the k8s cluster's control plane to control k3s agents running on edge devices (embedded boards, routers, etc.)?
This kind of setup would open up a lot of options for me, as I'd get k8s functionality with k3s's light footprint.
I tried using the default kubeadm token on a k3s agent (worker) node, but unsurprisingly it didn't join.
kubeadm join internally performs a TLS bootstrap of the kubelet on worker nodes. You will have to TLS bootstrap the kubelet on the k3s worker nodes yourself in order to join them to the cluster created by kubeadm.
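For reference, the plain-kubelet version of that TLS bootstrap looks roughly like this (a sketch assuming kubeadm's defaults; the API server address and token are placeholders, and whether k3s's embedded kubelet can be pointed at an external control plane this way is exactly what you would be testing):

```
# On the kubeadm control plane: mint a bootstrap token
# (kubeadm tokens get RBAC to create and auto-approve node CSRs)
kubeadm token create
# -> e.g. abcdef.0123456789abcdef

# On the edge node: build a bootstrap kubeconfig for the kubelet,
# using a copy of the cluster CA (/etc/kubernetes/pki/ca.crt)
kubectl config set-cluster bootstrap \
  --kubeconfig=bootstrap-kubeconfig \
  --server=https://<apiserver>:6443 \
  --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials kubelet-bootstrap \
  --kubeconfig=bootstrap-kubeconfig \
  --token=abcdef.0123456789abcdef
kubectl config set-context bootstrap --kubeconfig=bootstrap-kubeconfig \
  --cluster=bootstrap --user=kubelet-bootstrap
kubectl config use-context bootstrap --kubeconfig=bootstrap-kubeconfig

# Hand it to the node's kubelet; it submits a CSR for a client cert
# and, once approved, writes its real kubeconfig and joins
kubelet --bootstrap-kubeconfig=bootstrap-kubeconfig \
        --kubeconfig=/var/lib/kubelet/kubeconfig \
        --cert-dir=/var/lib/kubelet/pki
```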
I have a mixed cluster that shows all the nodes as Ready (both Windows and Linux ones). However, only the Linux nodes have aws-node and kube-proxy pods. I RDPed into a Windows node and can see a kube-proxy service running there.
My question remains: do the Windows nodes need aws-node and kube-proxy pods in the kube-system namespace, or do they work differently than the Linux ones?
kube-proxy pods are part of the default installation of Kubernetes. They are automatically created and are needed on both Linux and Windows:
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
[source]
The aws-node pod is part of the AWS CNI plugin for Kubernetes:
The Amazon VPC Container Network Interface (CNI) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name aws-node. The plugin consists of two primary components:
[...]
[source]
It is currently supported on Linux only. Windows nodes use a different CNI plugin, vpc-shared-eni.
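You can verify this on the cluster itself. A quick check (DaemonSet names as they appear on EKS; the affinity detail is from memory of the aws-node manifest, so treat it as an assumption):

```
# Which kube-system DaemonSets exist and how many nodes each one covers
kubectl get daemonset -n kube-system -o wide

# aws-node pins itself to Linux nodes via node affinity, so Windows
# nodes simply never get an aws-node pod scheduled
kubectl get daemonset aws-node -n kube-system \
  -o jsonpath='{.spec.template.spec.affinity}'
```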
We have an application that requires a different container runtime than the one we currently have in our Kubernetes cluster. Our cluster is deployed via kubeadm on bare metal; the K8s version is 1.21.3.
Can we install a different container runtime, or a different runtime version, on a single worker node?
I just wanted to verify this, even though I know the k8s design is modular enough (CRI, CNI, etc.).
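For what it's worth, the runtime is a per-node kubelet setting wired through the CRI socket, so switching a single worker is plausible in principle. A minimal sketch for a kubeadm node, assuming you are moving that node to containerd (the node name worker-2 is hypothetical, and the socket path is containerd's default):

```
# Drain the node before touching its runtime
kubectl drain worker-2 --ignore-daemonsets

# On worker-2, point the kubelet at the new CRI socket by editing
# /var/lib/kubelet/kubeadm-flags.env (kubeadm writes this file):
#   KUBELET_KUBEADM_ARGS="--container-runtime=remote \
#     --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
sudo systemctl restart kubelet

# Bring the node back and confirm which runtime it now reports
kubectl uncordon worker-2
kubectl get node worker-2 \
  -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
```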
I have bootstrapped a k8s cluster (Kubernetes the Hard Way by kelseyhightower) in VirtualBox, with 2 masters, 2 workers, and 1 LB in front of the two masters' kube-apiservers. BTW, the kubelet is not running on the masters, only on the worker nodes.
Now the cluster is up and running, but I am not able to understand how the kube-apiserver on the master connects to the kubelet to fetch a node's metric data, etc.
Could you please explain this in detail?
The Kubernetes API server is not aware of kubelets, but kubelets are aware of the Kubernetes API server. The kubelet registers the node and reports metrics to the API server, which persists the data in the etcd key-value store. Kubelets use a kubeconfig file to communicate with the API server; this kubeconfig file contains the endpoint of the API server. The communication between the kubelet and the API server is secured with mutual TLS.
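You can see both halves of this from a worker node; a quick check (file paths follow Kubernetes the Hard Way's defaults, so treat them as assumptions):

```
# The kubelet's kubeconfig names the API server endpoint (here the LB
# in front of the two masters) and the client cert/key used for mTLS
grep -E 'server:|client-certificate|client-key' /var/lib/kubelet/kubeconfig

# Once each kubelet has registered itself, the workers show up here
kubectl get nodes -o wide
```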
In Kubernetes the Hard Way, the control plane components (API server, scheduler, controller manager) run as systemd units, which is why there is no kubelet running on the control plane nodes; if you run the kubectl get nodes command, you will not see the master nodes listed, because there is no kubelet to register them.
A more standard way to deploy the control plane components is via the kubelet (as static pods) rather than as systemd units; that is how kubeadm deploys the Kubernetes control plane.
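The difference is easy to see on a control-plane node (a quick check, assuming default paths):

```
# kubeadm: control plane components are static pods that the node's
# kubelet runs from manifests on disk, so masters also appear as nodes
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# Kubernetes the Hard Way: plain systemd units, no kubelet involved
systemctl list-units 'kube-*'
```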
Official documentation on Master to Cluster communication.
How do I run two Kubernetes masters without worker nodes, with one k8s master active and the other working as a standby (slave)?
You can find two solutions for Creating Highly Available clusters with kubeadm here.
It describes the steps for creating two kinds of cluster (a stacked-etcd join sketch follows the list):
Stacked control plane and etcd nodes
External etcd nodes
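For the stacked variant, the flow reduces to roughly the following (a sketch; the load balancer endpoint and the token/hash/key placeholders are illustrative):

```
# First control-plane node: initialize behind a load-balanced endpoint
sudo kubeadm init \
  --control-plane-endpoint "LOAD_BALANCER_DNS:6443" \
  --upload-certs

# Second control-plane node: join as an additional control plane
sudo kubeadm join LOAD_BALANCER_DNS:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>

# With no dedicated workers, untaint the masters so they run workloads
kubectl taint nodes --all node-role.kubernetes.io/master-
```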
Additional resources:
Install and configure a multi-master Kubernetes cluster with kubeadm - HAProxy as a load balancer
kubernetes-the-hard-way
Hope this helps.
I want to use Rancher server across several servers:
Master
Node1
Node2
Node3
Maybe I only need the Rancher agent on the node servers?
I also want to create a Kubernetes cluster on these servers, so: install the Kubernetes master on the Rancher master and install Kubernetes nodes (kubelet) on the Rancher nodes. Is that right?
So, the Kubernetes nodes can't be installed using the Rancher server, and I should install them myself?
You will need a Rancher agent on any server you want Rancher to place containers on. Rancher can deploy Kubernetes for you. I believe what you want to do is add all of the nodes, including the Rancher master, to a single Cattle environment (the default environment is Cattle). When adding the Rancher Server host, make sure you set CATTLE_AGENT_IP=. Once the hosts are registered, you will want to set host labels on the nodes: for nodes 1, 2, 3, set the label compute=true; on the Rancher Server host, set two host labels, etcd=true and orchestration=true.
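For reference, host registration with labels looks roughly like this in Rancher 1.x (a sketch; the agent version, server URL, and token are placeholders taken from the Add Host screen):

```
# On the Rancher Server host: register it with itself, with the
# etcd/orchestration labels and an explicit agent IP
sudo docker run -d --privileged \
  -e CATTLE_AGENT_IP="<rancher-server-ip>" \
  -e CATTLE_HOST_LABELS="etcd=true&orchestration=true" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.2.11 https://<rancher-server>:8080/v1/scripts/<token>

# On node1..node3: the same command, but labeled as compute nodes
#   -e CATTLE_HOST_LABELS="compute=true"
```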
Once the labels are set up, click on Catalog and search for Kubernetes. You can probably stick with most defaults, but change Plane Isolation to required.
Rancher should deploy the Kubernetes management servers on the same host as your Rancher Server, and the remaining nodes will be Kubernetes minions.