We have an application that requires a different container runtime than the one we currently use in our Kubernetes cluster. The cluster was deployed with kubeadm on bare metal, and the Kubernetes version is 1.21.3.
Can we install a different container runtime, or a different runtime version, on just a single worker node?
I just wanted to verify this, even though I know the Kubernetes design is modular enough (CRI, CNI, etc.).
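Since the runtime is configured per node, one approach (only a sketch; the socket path and placeholders below are examples, not something verified against 1.21.3) is to install the new runtime on that one worker and point kubeadm at its CRI socket when the node joins:

# On the worker that needs the other runtime, pass its CRI socket at join time
# (example socket path for containerd; use whatever socket your runtime exposes)
kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --cri-socket /run/containerd/containerd.sock

# On a node that has already joined, the endpoint normally lives in the kubelet flags
# in /var/lib/kubelet/kubeadm-flags.env, e.g.
# --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock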
I was curious whether we can use k8s and k3s together, as it would help me solve a complex edge-computing architecture.
I have a running k8s cluster prepared using kubeadm. For the edge devices, I would like to have lightweight k3s running on them. Is it possible to use the k8s cluster's control plane to control k3s agents running on edge devices (embedded devices, routers, etc.)?
This kind of setup would open up a lot of options for me, as I'd have k8s functionality with k3s's light footprint.
I tried using the default kubeadm token on a k3s agent (worker) node, but unsurprisingly it didn't join.
kubeadm join internally performs a TLS bootstrap of the kubelet on worker nodes. You will have to TLS bootstrap the kubelet on the k3s worker nodes yourself to join them to the cluster created by kubeadm.
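As a rough sketch of what that TLS bootstrap could look like (the paths and placeholders are assumptions, and it assumes a standard kubelet on the edge device rather than the one embedded in k3s):

# On the kubeadm control plane: create a bootstrap token
kubeadm token create --print-join-command

# On the edge node: build a bootstrap kubeconfig from that token and the cluster CA
# (copy ca.crt over from the control plane first; paths are examples)
kubectl config set-cluster kubeadm --server=https://<control-plane-ip>:6443 \
    --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-credentials tls-bootstrap-user --token=<token> \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config set-context default --cluster=kubeadm --user=tls-bootstrap-user \
    --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
kubectl config use-context default --kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf

# Start the kubelet against the bootstrap kubeconfig; it will request its own client certificate
kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
    --kubeconfig=/etc/kubernetes/kubelet.conf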
I have been reading for several days about how to deploy a Kubernetes cluster from scratch. Everything is fine until it comes to etcd.
I want to deploy the etcd nodes inside the Kubernetes cluster. It looks like there are many options for that, like etcd-operator (https://github.com/coreos/etcd-operator).
But, to my knowledge, a StatefulSet or a ReplicaSet itself makes use of etcd.
So, what is the right way to deploy such a cluster?
My first thought: start with a single-member etcd, either as a pod or as a local service on the master node, and, once the Kubernetes cluster is up, deploy the etcd StatefulSet and move/change/migrate the initial etcd into the new cluster.
The last part sounds weird to me: "and move/change/migrate the initial etcd into the new cluster."
Am I wrong with this approach?
I can't find useful information on this topic.
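For the migration step specifically, the usual etcd mechanism is member add/remove; roughly (endpoints, names and IDs below are placeholders, and this is only a sketch of the idea described above):

# Add a member running inside the cluster to the single-member bootstrap etcd
ETCDCTL_API=3 etcdctl --endpoints=https://<bootstrap-etcd>:2379 \
    member add etcd-1 --peer-urls=https://<new-member>:2380

# Start the new member with --initial-cluster-state=existing, wait until it is healthy,
# repeat for the remaining members, then drop the bootstrap member
ETCDCTL_API=3 etcdctl --endpoints=https://<new-member>:2379 member remove <bootstrap-member-id>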
Kubernetes has three groups of components: master components, node components, and addons.
Master components
kube-apiserver
etcd
kube-scheduler
kube-controller-manager/cloud-controller-manager
Node components
kubelet
kube-proxy
Container Runtime
When you implement Kubernetes, you have to run etcd as part of it. In a multi-node architecture you can run etcd on independent nodes or along with the master nodes, as per your requirements. You can find more details here. If you are looking for a step-by-step guide and need a multi-node architecture, follow this document. If you need single-node Kubernetes, go for minikube.
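For the independent-node option, kubeadm can be pointed at an external etcd cluster through its ClusterConfiguration. A minimal sketch (endpoints and certificate paths are placeholders):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://10.0.0.11:2379
    - https://10.0.0.12:2379
    - https://10.0.0.13:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF
kubeadm init --config kubeadm-config.yaml

If you leave the etcd section out, kubeadm instead runs a local (stacked) etcd as a static pod on the control-plane node.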
I've had the opportunity to install k8s clusters on CentOS VMs. In most cases I used flanneld as the overlay. In some other cases, though, I noticed flannel pods in the kube-system namespace. IMHO, we should not need both flanneld and flannel pods for the underlying CNI to function properly with Kubernetes.
I have read plenty of documentation on how the flannel overlay fits into the Kubernetes ecosystem. However, I haven't found the answers to some questions. I hope somebody can provide pointers.
What is the basis for choosing flanneld or flannel pod?
Are there any differences in functionality between flanneld and flannel pod?
How does the flannel pod provide CNI functionality? My understanding is the pod populates etcd with IP address k/v pairs but how is this info really used?
Do most CNI plugins have a choice between running as daemon or pod?
You are right, you don't need both of them because they do the same job. There is no difference in functionality between them, only in where the daemon runs: in an isolated container (the flannel pod) or on the host as a regular daemon (flanneld). All CNI plugins are based on the CNI library and route the traffic. Flannel uses etcd as its key-value storage: if you have etcd inside the Kubernetes cluster it will use that, and if it is external it will use the external etcd. It is simply a matter of what you prefer; for example, people running an external etcd usually run flannel as a daemon on the host.
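Concretely, the two forms look roughly like this (the manifest URL and the etcd key follow flannel's documented defaults, but treat the exact values as examples):

# Pod form: deploy flannel as a DaemonSet using the manifest the project publishes
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Daemon form: write the network config to etcd under flannel's default key
# and run flanneld on every host
ETCDCTL_API=2 etcdctl set /coreos.com/network/config '{"Network":"10.244.0.0/16","Backend":{"Type":"vxlan"}}'
flanneld --etcd-endpoints=https://10.0.0.11:2379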
I have several operational deployments on minikube locally and am trying to deploy them on GCP with Kubernetes.
When I describe a pod created by a deployment (which created a replication set that spawned the pod):
kubectl get po redis-sentinel-2953931510-0ngjx -o yaml
It indicates the pod landed on one of the Kubernetes VMs.
I'm having trouble with deployments that work fine separately failing due to lack of resources (e.g. CPU), even though I provisioned a VM above the requirements. I suspect the cluster is placing the pods on its own nodes and running out of resources.
How should I proceed?
Do I introduce a vm to be orchestrated by kubernetes?
Do I enlarge the kubernetes nodes?
Or something else altogether?
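Before resizing anything, it may help to compare what the pods request with what the nodes can actually allocate; a few standard commands (using the pod name from above):

# Look for FailedScheduling / "Insufficient cpu" events on the pod
kubectl describe pod redis-sentinel-2953931510-0ngjx

# See how much of each node is already requested
kubectl describe nodes | grep -A 8 "Allocated resources"

# Allocatable CPU/memory per node
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory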
It was a resource problem: the node pool size was inhibiting the deployments. I was mistaken in trying to provision Google Compute Engine instances and disks myself.
I ended up provisioning Kubernetes node pools with more CPU and disk space, which solved it. I also added elasticity by enabling autoscaling.
Here is the node pool documentation.
Here is a Terraform Kubernetes deployment.
Here is the machine type documentation.
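For reference, creating a larger, autoscaling node pool on GKE looks roughly like this with gcloud (the pool name, cluster name, zone, machine type and sizes are placeholders):

gcloud container node-pools create bigger-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --disk-size=100 \
    --num-nodes=1 \
    --enable-autoscaling --min-nodes=1 --max-nodes=5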
I know that OpenShift uses some Kubernetes components to orchestrate pods. Is there any way Kubernetes and OpenShift can be integrated together, meaning I would see the pods deployed with Kubernetes in the OpenShift UI and vice versa?
I followed the "OpenShift as a pod in Kubernetes" documentation, but I got stuck at step 4: I was unable to find the Kubernetes account key on the GCE cluster (/srv/kubernetes/server.key).
Or is there any way for Kubernetes nodes to join under an OpenShift cluster?