Specify containerd version on minikube or kind - kubernetes

I am trying to reproduce an issue that requires me to use containerd v1.4.4 as my container runtime and Kubernetes v1.19.8. When I use minikube to create a multi-node cluster locally, it allows me to specify the Kubernetes version, but I am unable to specify the containerd version (it always uses v1.4.9), and based on this GitHub discussion it doesn't seem to be supported. I then turned to kind but was unable to find a way to specify this in its documentation either. Is there a way, in either kind or minikube, to specify the containerd version?
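For context, this is the sort of invocation I mean; minikube exposes a runtime flag but, as far as I can tell, no version flag for containerd (flag names per the minikube docs):

minikube start --nodes=2 --kubernetes-version=v1.19.8 --container-runtime=containerd
# there is no --containerd-version (or similar) flag to pin containerd itself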

I ended up using kubeadm and set up a master and a worker node using 2 VMs. This allowed me to specify the versions I wanted on the worker node. Building a base image on kind should also work, as user Mikolaj.S mentioned.
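For reference, a minimal sketch of what that looked like on each VM, assuming Debian/Ubuntu-style packages (the exact version suffixes below are illustrative; check apt-cache madison for what your repos actually carry):

# pin the runtime and the Kubernetes components before initializing the node
apt-get install -y containerd=1.4.4-1
apt-get install -y kubelet=1.19.8-00 kubeadm=1.19.8-00 kubectl=1.19.8-00
apt-mark hold containerd kubelet kubeadm kubectl   # prevent accidental upgrades
kubeadm init --kubernetes-version v1.19.8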

Related

Can I install Kubernetes on Amazon Linux 2?

I'm having trouble installing kubeadm on my Amazon Linux 2 instance, specifically when I try to create a cluster.
When I try installing a runtime I get to choose which one to use:
containerd
CRI-O
Docker Engine
Mirantis Container Runtime
First of all, I'm wondering which of them I should use that is compatible with Amazon Linux 2, and second of all, whenever I run yum install for any CRI I get the same error.
This is the output of the command: yum install cri-o
The doc that I followed is: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
Hi, hope you are enjoying your Kubernetes journey!
First off, I want to tell you that you can use whichever container runtime you want.
You can use Docker if you are not familiar with the others, but containerd is in my opinion the best lightweight alternative (containerd is used inside Docker, but for Kubernetes you don't need all the layers Docker provides, only the container runtime itself, here containerd). You can read this for more info, though there is plenty of documentation about the topic: https://www.tutorialworks.com/difference-docker-containerd-runc-crio-oci/
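If you go with containerd on Amazon Linux 2, a minimal install sketch looks something like this (containerd comes from the docker extras topic there; package availability can vary by AMI version):

# enable the repo that carries containerd, then install and start it
sudo amazon-linux-extras enable docker
sudo yum install -y containerd
sudo systemctl enable --now containerd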
Second of all, I don't know how you are trying to install your Kubernetes cluster, but there are a few ways to do it:
The hardest but very instructive way is Kubernetes the hard way (https://github.com/kelseyhightower/kubernetes-the-hard-way).
Next, you can use kubeadm (again, there is plenty of documentation on the internet, but you can follow one of the kubeadm tutorials: https://devopscube.com/setup-kubernetes-cluster-kubeadm/).
Here is a list of tools that you can use to install your Kubernetes cluster; you can look up tutorials for each of them: https://dzone.com/articles/50-useful-kubernetes-tools
Last but not least, since you are on AWS, you can use the AWS EKS service to quickly set up a robust Kubernetes cluster (https://aws.amazon.com/fr/eks/).
That is for AWS. If you want a local k8s cluster I strongly suggest you use kind (Kubernetes in Docker).
Bguess

Purge Kubernetes Image Cache on containerd runtime with DaemonSet

I am trying to purge images from the local Kubernetes cache on a set cadence. Previously you could set up some volumeMounts on a DaemonSet and talk to the Docker runtime directly.
The latest runtime is based on containerd, but I can't seem to connect using containerd.sock: when I run ctr image ls or nerdctl, it shows no running containers or images on the node. It also returns no errors.
Is there a different method for manually purging images from the containerd runtime via a DaemonSet?
Answered in comments: most containerd commands are built for the Docker integration, which uses the default containerd namespace (note, this has nothing to do with Linux namespaces; it is administrative namespacing inside containerd). Most commands have an option to set the namespace being used, but crictl is already set up for the CRI namespace that Kubernetes uses (because it is also a CRI client).
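Concretely, something like this should show (and prune) the images Kubernetes actually pulled; ctr's --namespace flag is standard, and crictl rmi --prune needs a reasonably recent crictl:

# ctr defaults to the "default" namespace; Kubernetes images live under "k8s.io"
ctr --namespace k8s.io images ls
# crictl is already scoped to the CRI namespace
crictl images
crictl rmi --prune   # remove all unused images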

GPU worker node unable to join cluster

I have an EKS setup (v1.16) with 2 ASGs: one for compute ("c5.9xlarge") and the other for GPU ("p3.2xlarge").
Both are configured as Spot and set with desiredCapacity 0.
The K8s Cluster Autoscaler works as expected and scales out each ASG when necessary; the issue is that the newly created GPU instance is not recognized by the master, and running kubectl get nodes shows nothing.
I can see that the EC2 instance is in the Running state, and I can also SSH into the machine.
I double-checked the labels and tags and compared them to the "compute" nodegroup.
Both are configured almost identically; the only difference is that the GPU nodegroup has a few additional tags.
Since I'm using the eksctl tool (v0.35.0) and the compute nodeGroup vs. GPU nodeGroup is basically copy&paste, I can't figure out what the problem could be.
UPDATE:
After SSHing into the instance, I could see the following error in /var/log/messages:
failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
and the kubelet service crashed.
Could it be that my GPU nodegroup uses the wrong AMI (amazon-eks-gpu-node-1.18-v20201211)?
As a simple workaround, you can use preBootstrapCommands in the eksctl YAML config file:
- name: test-node-group
  preBootstrapCommands:
    - "sed -i 's/cgroupDriver:.*/cgroupDriver: cgroupfs/' /etc/eksctl/kubelet.yaml"
There is a known issue with EKS 1.16 where even Graviton-processor machines won't join the cluster. To fix it, first try upgrading your CNI version. Please refer to the documentation here:
https://docs.aws.amazon.com/eks/latest/userguide/cni-upgrades.html
If that doesn't work, upgrade your EKS version to the latest available version and it should work.
I've found the issue. It seems to be a misalignment between eksctl (v0.35.0) and the AL2 GPU AMI.
The AWS team changed the cgroup driver in Docker to "systemd" instead of "cgroupfs" (github), while the eksctl version I used hadn't absorbed that change.
A temporary solution is to edit the /etc/eksctl/kubelet.yaml file using preBootstrapCommands.
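To see which side is out of line on a node before patching, you can compare the two drivers directly (a quick diagnostic, not EKS-specific):

# what the Docker daemon is actually using
docker info --format '{{.CgroupDriver}}'
# what eksctl configured kubelet with
grep cgroupDriver /etc/eksctl/kubelet.yaml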

Change Container Runtime without destroying cluster

We are running multiple kubespray-deployed clusters with 10-100 nodes.
With 1.20, Kubernetes deprecates dockershim support -> https://github.com/kubernetes/kubernetes/blob/ab32085bf36fc7af1ded30456e2f09399dc1115f/CHANGELOG/CHANGELOG-1.20.md#deprecation
How do I change the container runtime to containerd, without removing nodes and without destroying the master?
I am not panicking, I just want to be prepared; we are at 1.19 already, so 1.22 is not that far away.
Anyway, I tested it on a smaller cluster, and it was way easier than expected:
Change container_manager to containerd (see the sketch below).
Run the kubespray cluster.yml playbook over all nodes, and boom.
I only needed a simple Ansible playbook to uninstall Docker and friends afterwards, but it also works with Docker still installed.
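For reference, the whole change amounts to something like this (the inventory path and file layout vary by kubespray version, so treat the paths as illustrative):

# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
container_manager: containerd

# then re-run the full playbook over all nodes
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml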
Please treat this answer as friendly advice.
First of all, as suggested in yesterday's fresh article Don't Panic: Kubernetes and Docker:
You do not need to panic :)
Kubernetes is only deprecating Docker as a container runtime after v1.20. They are currently planning to remove Docker runtime support only in the 1.22 release in late 2021 (almost a year away!), so please don't break your 100-node clusters before a working solution appears :)

How to change kubelet configuration via kubeadm

I'm fairly new to Kubernetes and trying to wrap my head around how to manage ComponentConfigs in already running clusters.
For example:
Recently I initialized a kubeadm cluster in a test environment running Ubuntu. When I did that, I found CoreDNS in a CrashLoopBackOff, which turned out to be because Ubuntu was configured to use systemd-resolved, so resolv.conf had a loopback resolver configured. After reading the CoreDNS docs, I found that a solution would be to change the resolvConf parameter for kubelet, either via command-line arguments or in the config.
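For context, the change I mean is this kubelet configuration field, pointing kubelet at systemd-resolved's real upstream file (a snippet, assuming the usual path on Ubuntu):

# KubeletConfiguration snippet
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /run/systemd/resolve/resolv.conf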
So how would one do this properly in a kubeadm-managed cluster?
Reading this page in the documentation, I didn't really get a clue, because it seems to be tailored to initializing a new cluster or joining new nodes.
Of course, in this particular situation I could just use kubeadm reset and initialize again with a --config parameter, but that doesn't seem to be the right solution for a running cluster.
So after digging a bit deeper I found several infos:
I could change /var/lib/kubelet/kubeadm-flags.env on the node directly, but AFAICT this only makes sense for node-specific changes.
There is a ConfigMap in the kube-system namespace named kubelet-config-1.14. This seems promising for upcoming nodes joining the cluster to get the right configuration, but would changing that CM affect the already running kubelet?
There is a marshalled version of the running config in /var/lib/kubelet/config.yaml that I could change, but AFAIU this would be overridden by kubelet itself periodically (?) or at least during a kubeadm upgrade.
There seems to be an option to specify a ConfigMap in the Node object to let kubelet dynamically load the configuration from there, but given that there is already an existing ConfigMap, it seems more sensible to change that one.
I seemingly had success with some combination of changing the aforementioned CM, running some kubeadm upgrade command afterwards, and rebooting the machine (since restarting the kubelet did not fix the CoreDNS issue ... but maybe I was too impatient).
So I am now asking:
What is the recommended way to carry out changes to the kubelet configuration (or any other configuration I could affect via kubeadm-config.yaml) that works and is upgrade-safe for cases where the configuration is not node-specific?
And if this involves running kubeadm ... config --config, how do I extract the existing kubeadm config in a way that I can feed back to kubeadm?
I am entirely happy with pointers to the right documentation, I just didn't find the right clues myself.
TIA
What you are looking for is well described in the official documentation.
The basic workflow for configuring a Kubelet is as follows:
Write a YAML or JSON configuration file containing the Kubelet’s configuration.
Wrap this file in a ConfigMap and save it to the Kubernetes control plane.
Update the Kubelet’s corresponding Node object to use this ConfigMap.
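A sketch of steps 2 and 3 with kubectl (the ConfigMap name and file name here are illustrative):

# wrap the kubelet config file in a ConfigMap on the control plane
kubectl -n kube-system create configmap my-kubelet-config --from-file=kubelet=kubelet-config.yaml
# point the Node object's spec.configSource at it
kubectl patch node <node-name> -p '{"spec":{"configSource":{"configMap":{"name":"my-kubelet-config","namespace":"kube-system","kubeletConfigKey":"kubelet"}}}}'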
In addition, there is the DynamicKubeletConfig feature gate, which is enabled by default starting from Kubernetes v1.11, but you need some additional steps to activate it. Remember that the Kubelet's --dynamic-config-dir flag must be set to a writable directory on the Node.