I am trying to purge images from the local Kubernetes image cache on a set cadence. Previously you could set up some volumeMounts on a DaemonSet and talk to the Docker runtime directly.
The latest runtime is based on containerd, but I can't seem to connect using containerd.sock - when I run ctr image ls or nerdctl it shows no running containers or images on the node. It also returns no errors.
Is there a different method for manually purging images from the containerd runtime from within a DaemonSet?
Answered in comments: most containerd commands are built for the Docker integration, which uses the default containerd namespace (note, this has nothing to do with Linux namespaces; it is administrative namespacing inside containerd). Kubernetes stores its containers and images under the k8s.io namespace instead. Most commands have an option to set the namespace being used, but crictl is already pointed at the CRI-managed state that Kubernetes uses (because it's also a CRI client).
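For example, from a DaemonSet pod with the containerd socket mounted, something like the following should work (the socket path shown is the common default, so adjust it to match your volumeMount, and note that crictl rmi --prune needs a reasonably recent cri-tools release):
# list images in the k8s.io namespace that the Kubernetes CRI plugin uses
ctr --address /run/containerd/containerd.sock -n k8s.io images ls
# crictl talks to the CRI directly, so no namespace flag is needed
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
# remove every image not referenced by a running container
crictl --runtime-endpoint unix:///run/containerd/containerd.sock rmi --prune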
I want to store kubelet logs in a specific path so I can ship them to my ELK stack. According to the Kubernetes reference, the --log-dir flag is deprecated from v1.23 (here).
How can I do this in my on-premises Kubernetes cluster (v1.26)?
OS = Oracle Linux 8.6
As per the Kubernetes v1.26 Logging Architecture documentation, log locations work as described below in the official document:
On Linux nodes that use systemd, the kubelet and container runtime write to journald by default. You use journalctl to read the systemd journal; for example: journalctl -u kubelet.
If systemd is not present, the kubelet and container runtime write to .log files in the /var/log directory. If you want to have logs written elsewhere, you can indirectly run the kubelet via a helper tool, kube-log-runner, and use that tool to redirect kubelet logs to a directory that you choose.
You can also set a logging directory using the deprecated kubelet command line argument --log-dir. However, the kubelet always directs your container runtime to write logs into directories within /var/log/pods. For more information on kube-log-runner, read System Logs.
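A minimal sketch of the kube-log-runner approach, assuming the kubelet binary lives at /usr/bin/kubelet, the kubeadm-style config path /var/lib/kubelet/config.yaml, and a target directory of /var/log/kubelet (the wrapper would normally go into the kubelet's systemd unit ExecStart; check the System Logs page for the exact flags your version ships):
# redirect kubelet output to a file instead of journald
kube-log-runner -log-file=/var/log/kubelet/kubelet.log /usr/bin/kubelet --config=/var/lib/kubelet/config.yaml
Filebeat (or whichever shipper feeds your ELK stack) can then pick up /var/log/kubelet/kubelet.log directly.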
As per the Kubernetes v1.26 deprecations and major changes doc:
Generally available (GA) or stable API versions may be marked as deprecated but must not be removed within a major version of Kubernetes.
So, you can also give the following a try:
The --log-dir flag allows you to specify the directory where kubelet logs will be stored. For example, to store the kubelet logs in the directory /var/log/kubelet, you can use the following command:
kubelet --log-dir=/var/log/kubelet
You should also ensure that the directory you specify is writable by the user running the Kubelet process.
I am trying to reproduce an issue that requires me to use containerd v1.4.4 as my container runtime and Kubernetes v1.19.8. When I try to use minikube to create a multi-node cluster locally, it lets me specify the Kubernetes version, but I am unable to specify the containerd version (i.e. it always uses v1.4.9), and based on this GitHub discussion it doesn't seem to support that. I then turned to kind but was unable to find a way to do the same in its documentation. Is there a way, either in kind or in minikube, to specify the containerd version?
I ended up using kubeadm and set up a master and a worker node using two VMs. This allowed me to specify the exact versions I wanted on the worker node. Building a custom base image for kind should also work, as user Mikolaj.S mentioned.
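A rough sketch of that version pinning on the VMs, assuming Ubuntu with the Docker and Kubernetes apt repositories already configured (package names and revision suffixes are assumptions and vary by distro, so verify what your package manager actually offers):
# pin containerd to 1.4.4 and the Kubernetes packages to 1.19.8
apt-get install -y containerd.io=1.4.4-1
apt-get install -y kubelet=1.19.8-00 kubeadm=1.19.8-00 kubectl=1.19.8-00
apt-mark hold containerd.io kubelet kubeadm kubectl
# on the master, initialise the matching control-plane version
kubeadm init --kubernetes-version=v1.19.8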
I have an EKS setup (v1.16) with 2 ASGs: one for compute ("c5.9xlarge") and the other for GPU ("p3.2xlarge").
Both are configured as Spot and set with desiredCapacity 0.
The K8s Cluster Autoscaler works as expected and scales out each ASG when necessary; the issue is that the newly created GPU instance is not recognized by the master, and running kubectl get nodes shows nothing.
I can see that the EC2 instance is in the Running state and I can also SSH into the machine.
I double checked the labels and tags and compared them to the "compute" nodegroup.
Both are configured almost identically; the only difference is that the GPU nodegroup has a few additional tags.
Since I'm using the eksctl tool (v0.35.0) and the compute nodeGroup vs. GPU nodeGroup config is basically copy & paste, I can't figure out what the problem could be.
UPDATE:
SSHing into the instance, I could see the following error in /var/log/messages:
failed to run Kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
and the kubelet service crashed.
Could it be that my GPU nodegroup uses the wrong AMI (amazon-eks-gpu-node-1.18-v20201211)?
As a simple workaround, you can use preBootstrapCommands in the eksctl YAML config file:
- name: test-node-group
  preBootstrapCommands:
    - "sed -i 's/cgroupDriver:.*/cgroupDriver: cgroupfs/' /etc/eksctl/kubelet.yaml"
There is a known issue with EKS 1.16; even Graviton processor machines won't join the cluster. To fix it, first try upgrading your CNI version. Please refer to the documentation here:
https://docs.aws.amazon.com/eks/latest/userguide/cni-upgrades.html
And if that doesn't work, upgrade your EKS cluster to the latest available version; then it should work.
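To see which CNI version the cluster is currently running, you can inspect the image tag of the aws-node DaemonSet that EKS installs by default, for example:
kubectl describe daemonset aws-node --namespace kube-system | grep Image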
I've found the issue. It seems to be a misalignment between eksctl (v0.35.0) and the AL2 GPU AMI.
The AWS team changed the cgroup driver in Docker to "systemd" instead of "cgroupfs" (github), while the eksctl version I used hadn't absorbed that change.
A temporary solution is to edit the /etc/eksctl/kubelet.yaml file using preBootstrapCommands, as in the snippet above.
We are running multiple kubespray-deployed clusters with 10-100 nodes.
With 1.20, Kubernetes deprecates dockershim support -> https://github.com/kubernetes/kubernetes/blob/ab32085bf36fc7af1ded30456e2f09399dc1115f/CHANGELOG/CHANGELOG-1.20.md#deprecation
How can we change the container runtime to containerd - without removing nodes and without destroying the master?
I'm not panicking, just want to be prepared; we are at 1.19 already, so 1.22 is not so far away.
Anyway, I tested it with a smaller cluster, and it was way easier than expected:
Change container_manager to containerd.
Run the kubespray cluster.yml playbook over all nodes, and boom (see the sketch below).
I only needed a simple Ansible playbook to uninstall Docker and friends afterwards, but it also works with Docker still installed.
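A minimal sketch of those two steps (the inventory path and group_vars file shown are the usual kubespray layout, but yours may differ):
# 1. switch the runtime in the cluster group vars
sed -i 's/^container_manager:.*/container_manager: containerd/' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
# 2. re-run the cluster playbook over all nodes
ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml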
Please treat this answer as friendly advice.
First of all, as suggested in yesterday's fresh article Don't Panic: Kubernetes and Docker:
You do not need to panic :)
Kubernetes is only deprecating Docker as a container runtime after v1.20. They are currently planning to remove Docker runtime support only in the 1.22 release, in late 2021 (almost a year away!), so please don't break your 100-node clusters until a working solution appears :)
Is it possible to take an image or a snapshot of a container running inside a pod using kubectl?
With Docker, it is possible to use the docker commit command, which creates an image from a container that we can then spawn more containers from. I wanted to understand whether there is something similar we can do with kubectl.
No, partially because that's not in the Kubernetes mental model of anything one would wish to do to a cluster, and partially because Docker is not the only container runtime Kubernetes uses. Every runtime one could use underneath Kubernetes would need to support that operation, and I doubt they all do.
You are welcome to do your own docker commit either by getting a shell on the Node, or by running a privileged Pod, connecting to docker.sock via a volumeMount, and running it that way.
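For example, from a shell on the Node (or from a privileged Pod with /var/run/docker.sock mounted), something along these lines works when the node runtime really is Docker; the container name and registry below are placeholders:
# find the container backing the pod (Docker names them k8s_<container>_<pod>_...)
docker ps --filter "name=k8s_mycontainer_mypod" --format '{{.ID}} {{.Names}}'
# snapshot it into an image and push it somewhere pullable
docker commit <container-id> registry.example.com/debug/snapshot:latest
docker push registry.example.com/debug/snapshot:latest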