How to stop/start containers on a k3s agent? - kubernetes

Docker provides the following commands to stop and start the same container.
OP46B1:/ # docker stop 18788407a60c
OP46B1:/ # docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
18788407a60c ubuntu:test "/bin/bash" 34 minutes ago Exited (0) 7 seconds ago charming_gagarin
OP46B1:/ # docker start 18788407a60c
But the k3s agent does not provide this capability. A container stopped with "k3s crictl stop" cannot be restarted with "k3s crictl start"; the following error appears. How can I stop and start the same container on a k3s agent?
OP46B1:/data # ./k3s-arm64 crictl stop 5485f899c7bb6
5485f899c7bb6
OP46B1:/data # ./k3s-arm64 crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
5485f899c7bb6 b58be220837f0 3 days ago Exited pod-webapp86 0 92a94e8eec410
OP46B1:/data# ./k3s-arm64 crictl start 5485f899c7bb6
FATA[2020-10-20T00:54:04.520056930Z] Starting the container "5485f899c7bb6" failed: rpc error: code = Unknown desc = failed to set starting state for container "5485f899c7bb6f2d294a3a131b33d8f35c9cf84df73cacb7b8af1ee48a591dcf": container is in CONTAINER_EXITED state

k3s is a distribution of kubernetes. Kubernetes is an abstraction over the container framework (containerd/docker/etc.). As such, you shouldn't try to control the containers directly using k3s crictl, but instead use the pod abstraction provided by kubernetes.
k3s kubectl get pods -A will list all the pods that are currently running in the k3s instance.
k3s kubectl delete pod -n <namespace> <pod-selector> will delete the pod(s) specified, which will stop (and delete) their containers.
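If the goal is simply to pause and resume a workload rather than a single container, the usual pattern is to manage it through a controller and scale it down and back up. A minimal sketch, assuming the pod is managed by a hypothetical Deployment named webapp in the default namespace (the standalone-pod variant reuses the pod-webapp86 name from the crictl output above and a hypothetical manifest file):
# Stop the workload by scaling its Deployment to zero replicas
k3s kubectl scale deployment webapp --replicas=0
# Start it again; a fresh pod (and container) is created
k3s kubectl scale deployment webapp --replicas=1
# For a standalone pod with no controller, delete and re-apply its manifest
k3s kubectl delete pod pod-webapp86
k3s kubectl apply -f pod-webapp86.yaml   # hypothetical manifest file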

Related

Eventual failure: kubectl exec fails with "operation not permitted: unknown"

I have some Pods that are running some Python programs. Initially I'm able to execute simple commands in the Pods. However, after some time (maybe hours?) I start to get the following error:
$ kubectl exec -it mypod -- bash
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "37a9f1042841590e48e1869f8b0ca13e64df02d25458783e74d8e8f2e33ad398": OCI runtime exec failed: exec failed: unable to start container process: open /dev/pts/0: operation not permitted: unknown
If I restart the Pods, this clears the condition. However, I'd like to figure out why this is happening, to avoid having to restart Pods each time.
The Pods are running a simple Python script, and the Python program is still running as normal (kubectl logs shows what I expect).
Also, I'm running K3s for Kubernetes across 4 nodes (1 master, 3 workers). I noticed all Pods running on certain nodes started to experience this issue. For example, initially I found all Pods running on worker2 and worker3 had this issue (but all Pods on worker1 did not). Eventually all Pods across all worker nodes start to have this problem. So it appears to be related to a condition on the node that is preventing exec from running. However restarting the Pods resets the condition.
As far as I can tell, the containers are running fine in containerd. I can log into the nodes and containerd shows the containers are running, I can check logs, etc...
What else should I check?
Why would the ability to exec stop working? (but containers are still running)
There are a couple of GitHub issues about this from the middle of August. They point to an SELinux issue that was fixed in runc v1.1.4. Check your runc version and, if it is below that version, update it.
Otherwise, if you are not working in production, you can disable SELinux:
setenforce 0
Or, if you want a more targeted solution, try this: https://github.com/moby/moby/issues/43969#issuecomment-1217629129
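As a hedged sketch, a quick way to check both pieces on an affected node (assumes a systemd-based host with runc and the SELinux tools installed):
# Check the runc version used by the container runtime
runc --version
# Check the current SELinux mode (Enforcing/Permissive/Disabled)
getenforce
# Temporarily switch to permissive mode for testing only (not for production)
sudo setenforce 0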

Error: error inspecting object: no such container minikube

I am trying to run minikube on Ubuntu 18.04 and getting an error while starting minikube. Please help. I tried minikube delete and start again, but it doesn't work.
Aspire-E5-573G:~$ minikube start --driver=podman --container-runtime=cri-o
😄 minikube v1.13.0 on Ubuntu 18.04
❗ Using podman 2 is not supported yet. your version is "2.0.6". minikube might not work. use at your own risk.
✨ Using the podman (experimental) driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
💾 Downloading Kubernetes v1.19.0 preload ...
> preloaded-images-k8s-v6-v1.19.0-cri-o-overlay-amd64.tar.lz4: 551.13 MiB /
🔄 Restarting existing podman container for "minikube" ...
🤦 StartHost failed, but will try again: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container minikube
🔄 Restarting existing podman container for "minikube" ...
😿 Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: sudo -n podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container minikube
❌ Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: sudo -n podman container inspect -f minikube: exit status 125
stdout:
stderr:
Error: error inspecting object: no such container minikube
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
As the output already indicates, podman 2 is not yet supported:
Using podman 2 is not supported yet. your version is "2.0.6". minikube might not work. use at your own risk.
The workaround for this, as described here, is to use podman v1.9.3.
Here's the merge that added the warning about podman version 2.
Yes, you are right. I tried using Docker as the driver in the arguments and it worked. Podman 2 is not yet supported.
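For reference, a minimal sketch of the Docker-driver approach that worked here (this recreates the cluster from scratch, so any state in the broken podman-based profile is lost):
minikube delete                  # remove the broken podman-based profile
minikube start --driver=docker   # recreate the cluster with the Docker driver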

minikube stop doesn't stop the pods after sudo minikube start --vm-driver none. kube-apiserver still running

I use minikube v1.6.2, kubectl 1.17.
I start minikube without Virtualbox, with:
sudo minikube start --vm-driver none
Now, to stop it, I do:
sudo minikube stop
minikube stop # I don't know which one is the good one, but I do both
but, after that, when I do:
kubectl get po
I still get the pods listing. The only way to stop it is to actually reboot my machine.
Why is this happening, and how should I fix it?
minikube stop when used with --vm-driver=none does not do any cleanup of the pods. As mentioned here:
When minikube starts without a hypervisor, it installs a local kubelet
service on your host machine, which is important to know for later.
Right now it seems that minikube start is the only command aware of
--vm-driver=none. Running minikube stop keeps resulting in errors related to docker-machine, and as luck would have it also results in
none of the Kubernetes containers terminating, nor the kubelet service
stopping.
Of course, if you wish to actually terminate minikube, you will need
to execute service kubelet stop and then ensure the k8s containers are
removed from the output in docker ps.
If you want an overview of the none (bare-metal) driver, you can find it here.
Also, as a workaround, you can stop and remove all Docker containers that have 'k8s' in their name by executing the following commands: docker stop $(docker ps -q --filter name=k8s) and docker rm $(docker ps -aq --filter name=k8s).
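Putting the quoted advice together, a minimal cleanup sketch for the none driver (assumes a systemd host where minikube installed kubelet as a service):
# Stop the kubelet service that minikube installed on the host
sudo systemctl stop kubelet
# Stop and remove the Kubernetes containers left running in Docker
docker stop $(docker ps -q --filter name=k8s)
docker rm $(docker ps -aq --filter name=k8s)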
Please let me know if that helped.

Cannot shell into the container, rpc error: code = 5 desc ... shim-log.json: no such file or directory

I'm trying to shell into the container with kubectl exec -it xxxxxx,
but it returns:
rpc error: code = 5 desc = open /var/run/docker/libcontainerd/containerd/faf3fd49262cc738e16368001eba5e1113abcb8a87e7b818cb84af3799906149/30fe901c16e0465aa15b596bf3e4f244fb12a7e4133b6e4da5aa35167a8dfb30/shim-log.json: no such file or directory
I tried rebooting the node, but it did not help.
Thanks @Prafull Ladha.
Eventually I restarted Docker (systemctl restart docker) on the node whose pods could not be shelled into, and it returned to normal.
The problem is with containerd. Once containerd restarts in the background, the Docker daemon still tries to process event streams against the old socket handles. After that, the error handling when the client can't connect to containerd leads to a CPU spike on the machine.
This is an open issue with Docker, and currently the workaround is to restart Docker:
sudo systemctl restart docker
It looks like an issue with the Docker daemon. It would help if you added the logs from the container to research the root cause.
Deploy an alpine pod and see if you can get into the container. This is to isolate whether the problem is with the platform or with the pod that you are running.
kubectl run pingpong --image=alpine -- ping 8.8.8.8
kubectl exec -it <pingpong-pod-name> -- sh
Most likely something is wrong with the pod that you are running. Share the container logs for further help.

Minikube: kubectl connection refused - did you specify the right host or port?

I'm trying to run minikube v0.22.1 and kubectl v1.7.5 on macOS with VirtualBox.
$ minikube start
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
$ minikube version
minikube version: v0.22.1
$ minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
However all kubectl commands fail with "connection refused - did you specify the right host or port?"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T19:32:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
The solution proposed here (sudo ifconfig vboxnet0 up) did not help; the vboxnet0 interface is up.
Any ideas or suggestions are highly appreciated.
If you run
kubectl config get-contexts
do you get the following?
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube
If not, that means your kubectl context is not correctly set up. To set up the context correctly, run:
kubectl config use-context minikube
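After switching the context, a quick sanity check (standard kubectl commands, shown here only as a hedged follow-up):
kubectl config current-context   # should print: minikube
kubectl cluster-info             # should reach the API server on the minikube VM IP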
You may have it stopped or saved for some reason. Sometimes, after you enable/disable addons, you may need to restart it.
1) Restart the minikube VM; stop it first:
$ minikube stop
2) Start it again, making sure you assign enough CPU/memory (the following is just an example of how to pass it; adjust it based on the resources available on your machine):
$ minikube start --memory=10000 --cpu 4
If this didn't work out, you can do the following to learn more about the underlying cause of the problem:
Check minikube status and make sure the status is Running
$ minikube status
Or check the minikube logs:
minikube logs
Finally, if you couldn't fix it, you may need to delete it and start from scratch:
$ minikube delete && minikube start
Ref: https://github.com/kubernetes/minikube/issues/1498
I will just drop this here in case anyone finds this question.
As of right now I don't know the versions of the OP's setup, so I'm going to assume he has the latest version that was available when he posted, which was 0.22.1.
Description
I had a similar issue. The cluster was timing out irregularly. One moment I got answers using kubectl cluster-info dump, the next I didn't. Then it worked again, and then it didn't. I found a GitHub bug report with a solution.
Solution
Remove your VirtualBox VM.
Remove the ./minikube folder.
Remove the minikube executable.
Install version 0.19.0.
Verify that minikube is working with: kubectl
Versions
OS: Windows 10 (Home edition)
Minikube bugged version: 0.22.2
Minikube working version: 0.19.0
Kubectl (client): v1.7.0
Kubectl (server): v1.6.0
EDIT:
I kept having some issue with minikube after I posted this original answer. I found something that fixed the issue completely.
It's related to the dynamic memory setting in Hyper-V.
Solution
1. Turn off the hyper-v minikube VM.
2. Go to the VM's settings.
3. Turn off dynamic memory allocation.
4. Assign a decent amount of memory.
5. Save and turn the VM on again.
This should work with any minikube version. See this GitHub issue for progress on an automated solution.
When debugging the minikube commands, e.g.
$ minikube dashboard --loglevel 0 --logtostderr
some proxy issues became visible and could be solved.
I ran into this situation this morning (another Monday!) on MacOS 11.3 with minikube v1.19.0.
I ran minikube status and got the following:
E0503 14:15:43.912005 7308 status.go:412] kubeconfig endpoint: got: 127.0.0.1:64041, want: 127.0.0.1:56537
minikube
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
Seemed like good advice, so I did run minikube update-context and got this:
🎉 "minikube" context has been updated to point to 127.0.0.1:56537
💗 Current context is "minikube"
After which everything worked like it did on Friday.
After Linux security OS patching and a reboot, we were unable to start the Kubernetes service and received the error below.
Error message: "The connection to the server 192.168.1.101:8443" received while starting the Kubernetes service.
This issue happened because the systemd package got updated during the security patching.
So we did the following to bring the application back up, on each master node:
1. Update the /usr/lib/systemd/system/kubelet.service file by removing the two lines below:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
2. Update the /usr/lib/systemd/system/kube-proxy.service file by removing the two lines below:
ExecStartPost=/bin/bash -c 'umask 0022; pgrep -x kubelet > /run/kubelet.pid'
ExecStopPost=/bin/bash -c 'rm -f /run/kubelet.pid'
3. Run kube-restart.sh on the master nodes.
4. Run kube-restart.sh on the worker nodes.
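kube-restart.sh is a site-specific script; the generic equivalent after editing systemd unit files is roughly the following hedged sketch (it assumes kubelet and kube-proxy run as systemd services, as the unit files above imply):
# Reload systemd so it picks up the edited unit files
sudo systemctl daemon-reload
# Restart the affected services
sudo systemctl restart kubelet kube-proxy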
Update: I am using minikube version: v1.25.2
The command mentioned in this thread did NOT work:
minikube start --memory=10000 --cpu 4 #this will FAIL
This, however, DID WORK (use --cpus instead; I also changed the values to show the minimum requirements for Docker):
minikube start --memory=1800 --cpus=2 # this will work
minikube start --memory=1800 --cpus 2 # this will also work
minikube delete && minikube start
sudo minikube start --vm-driver=none (start minikube again)
This solved my problem
minikube delete
minikube start
This just restarted the container.