Pods stuck in Terminating status - kubernetes
I tried to delete a ReplicationController with 12 pods and I could see that some of the pods are stuck in Terminating status.
My Kubernetes cluster consists of one control plane node and three worker nodes installed on Ubuntu virtual machines.
What could be the reason for this issue?
NAME READY STATUS RESTARTS AGE
pod-186o2 1/1 Terminating 0 2h
pod-4b6qc 1/1 Terminating 0 2h
pod-8xl86 1/1 Terminating 0 1h
pod-d6htc 1/1 Terminating 0 1h
pod-vlzov 1/1 Terminating 0 1h
You can use the following command to delete the pod forcefully.
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
The original question is "What could be the reason for this issue?", and the answer is discussed at https://github.com/kubernetes/kubernetes/issues/51835 and https://github.com/kubernetes/kubernetes/issues/65569; see also https://www.bountysource.com/issues/33241128-unable-to-remove-a-stopped-container-device-or-resource-busy
It's caused by a Docker mount leaking into another namespace.
You can log on to the pod's host to investigate.
minikube ssh
docker container ps | grep <id>
docker container stop <id>
Force delete the pod:
kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>
The --force flag is mandatory.
I found this command more straightforward:
for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do kubectl delete pod $p --grace-period=0 --force;done
It will delete all pods in Terminating status in the default namespace.
Delete the finalizers block from the resource (pod, deployment, ds, etc.) YAML:
"finalizers": [
"foregroundDeletion"
]
In my case the --force option didn't quite work. I could still see the pod! It was stuck in the Terminating/Unknown state. So after running
kubectl -n redis delete pods <pod> --grace-period=0 --force
I ran
kubectl -n redis patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Practical answer -- you can always delete a terminating pod by running:
kubectl delete pod NAME --grace-period=0
Historical answer -- There was an issue in version 1.1 where sometimes pods get stranded in the Terminating state if their nodes are uncleanly removed from the cluster.
I stumbled upon this recently when freeing up resources in my cluster. Here is the command to delete them all.
kubectl get pods --all-namespaces | grep Terminating | while read line; do
  pod_name=$(echo $line | awk '{print $2}')
  name_space=$(echo $line | awk '{print $1}')
  kubectl delete pods $pod_name -n $name_space --grace-period=0 --force
done
Hope this helps someone who reads this.
Force delete ALL pods in namespace:
kubectl delete pods --all -n <namespace> --grace-period 0 --force
If --grace-period=0 alone is not working, then you can add --force:
kubectl delete pods <pod> --grace-period=0 --force
I stumbled upon this recently when removing the rook-ceph namespace - it got stuck in the Terminating state.
The only thing that helped was removing the Kubernetes finalizer by calling the k8s API directly with curl, as suggested here.
kubectl get namespace rook-ceph -o json > tmp.json
Delete the Kubernetes finalizer in tmp.json (leave an empty array: "finalizers": []).
Run kubectl proxy in another terminal for auth purposes, and run the following curl request against the returned port:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json 127.0.0.1:8001/k8s/clusters/c-mzplp/api/v1/namespaces/rook-ceph/finalize
The namespace is gone.
Detailed rook ceph teardown here.
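Note that the URL above goes through a Rancher proxy path. With a plain kubectl proxy the request would typically look like the following sketch (an assumption on my part, using the default proxy port 8001; the tmp.json file is the one prepared above):
kubectl proxy &
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/rook-ceph/finalize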
I used this command to delete the pods
kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>
But when I tried to run another pod, it didn't work; it was stuck in the "Pending" state. It looked like the node itself was stuck.
For me, the solution was to recreate the node. I simply went to the GKE console, deleted the node from the cluster, and GKE started another one.
After that, everything started to work normally again.
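If you prefer the CLI over the console, roughly the same effect can be achieved like this (a hedged sketch; <node-name> and <zone> are placeholders, --delete-emptydir-data is called --delete-local-data on older kubectl versions, and on GKE the node pool replaces the deleted VM):
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <node-name>
gcloud compute instances delete <node-name> --zone <zone>   # GKE then creates a replacement node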
I had the same issue in a production Kubernetes cluster.
A pod was stuck in Terminating phase for a while:
pod-issuing mypod-issuing-0 1/1 Terminating 0 27h
I tried checking the logs and events using these commands:
kubectl describe pod mypod-issuing-0 --namespace pod-issuing
kubectl logs mypod-issuing-0 --namespace pod-issuing
but none were available to view.
How I fixed it:
I ran the command below to forcefully delete the pod:
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
This deleted the pod immediately and started creating a new one. However, I ran into the error below when another pod was being created:
Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data mypod-issuing-token-5swgg aws-iam-token]: timed out waiting for the condition
I had to wait 7 to 10 minutes for the volume to become detached from the previous pod I deleted, so that it could become available for the new pod I was creating.
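If you would rather watch the detach happen than wait blindly, something like this can help (a hedged sketch; <namespace> and <new-pod> are placeholders, and the VolumeAttachment resource is only present on clusters using CSI volumes):
kubectl get volumeattachments                                              # shows which volumes are still attached to nodes
kubectl -n <namespace> get events --field-selector involvedObject.name=<new-pod> -w   # watch mount/attach events for the new pod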
In my case I didn't want a workaround, so here are the steps:
k get pod -o wide -> this will show which node is running the pod
k get nodes -> check the status of that node... I got NotReady
I went and fixed that node. In my case it was just a matter of restarting the kubelet (see the sketch after these steps):
ssh that-node -> run swapoff -a && systemctl restart kubelet (or systemctl restart k3s in the case of k3s, or systemctl restart crio in other cases like OCP 4.x (k8s <1.23))
Now deletion of the pod should work without forcing the poor pod.
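Put together as plain commands, the steps above look roughly like this (a hedged sketch; the pod and node names are placeholders, and the restart command depends on your runtime as noted above):
kubectl get pod <pod> -o wide                       # shows which node runs the pod
kubectl get node <node>                             # look for NotReady
ssh <node>
sudo swapoff -a && sudo systemctl restart kubelet   # or systemctl restart k3s / crio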
Please try the command below:
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Before doing a force deletion I would first do some checks.
1- Node state: get the name of the node where your pod is running; you can see this with the following command:
kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME
Under the "Node" label you will see the node name.
With that you can do:
kubectl describe node NODE_NAME
Check the "Conditions" field and see if anything looks strange.
If this is fine then you can move to the next step; redo:
kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME
Check the reason why it is hanging; you can find this under the "Events" section.
I say this because you might need to take preliminary actions before force deleting the pod; force deleting the pod only deletes the pod itself, not the underlying resource (a stuck Docker container, for example).
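As a convenience, the same checks can be chained from the shell (a hedged sketch, not part of the original answer; YOUR_NAMESPACE and YOUR_PODNAME are placeholders as above):
node=$(kubectl -n YOUR_NAMESPACE get pod YOUR_PODNAME -o jsonpath='{.spec.nodeName}')   # node running the pod
kubectl describe node "$node" | sed -n '/Conditions:/,/Addresses:/p'                     # node conditions only
kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME | sed -n '/^Events:/,$p'             # why is it hanging?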
I would not recommend force deleting pods unless the container has already exited.
Verify the kubelet logs to see what is causing the issue: journalctl -u kubelet
Verify the Docker logs: journalctl -u docker.service
Check whether the pod's volume mount points still exist and whether anything holds a lock on them.
Verify whether the host is out of memory or disk space.
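A few concrete commands for those checks (a hedged sketch; <pod-uid> is a placeholder and /var/lib/kubelet/pods is the common default kubelet directory, which may differ on your distribution):
journalctl -u kubelet --since "1 hour ago"
journalctl -u docker.service --since "1 hour ago"
mount | grep <pod-uid>                            # leftover volume mounts?
ls /var/lib/kubelet/pods/<pod-uid>/volumes/       # per-pod volume directory
free -h; df -h                                    # memory and disk pressure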
You can use awk:
kubectl get pods --all-namespaces | awk '{if ($4=="Terminating") print "oc delete pod " $2 " -n " $1 " --force --grace-period=0 ";}' | sh
One reason why this happens can be turning off a node without draining it. The fix in this case is to turn the node back on; termination should then succeed.
My pods were stuck in 'Terminating' even after I tried restarting Docker and restarting the server. It was resolved after editing the pod and deleting the items below 'finalizers':
$ kubectl -n mynamespace edit pod/my-pod-name
I am going to attempt the most extensive answer, because none of the above are wrong, but they do not work in all scenarios.
The usual way to put an end to a terminating pod is:
kubectl delete pod -n ${namespace} ${pod} --grace-period=0
But you may need to remove finalizers that could be preventing the pod from stopping, using:
kubectl -n ${namespace} patch pod ${pod} -p '{"metadata":{"finalizers":null}}'
If none of that works, you can remove the pod from etcd with etcdctl:
# Define variables
export ETCDCTL_API=3
certs_path=${HOME}/.certs/e
etcd_cert_path=${certs_path}/etcd.crt
etcd_key_path=${certs_path}/etcd.key
etcd_cacert_path=${certs_path}/etcd.ca
etcd_endpoints=https://127.0.0.1:2379
namespace=myns
pod=mypod
# Call etcdctl to remove the pod
etcdctl del \
  --endpoints=${etcd_endpoints} \
  --cert ${etcd_cert_path} \
  --key ${etcd_key_path} \
  --cacert ${etcd_cacert_path} \
  --prefix \
  /registry/pods/${namespace}/${pod}
This last case should be used only as a last resort. In my case I ended up having to do it because of a deadlock that prevented Calico from starting on the node due to pods stuck in Terminating status. Those pods wouldn't be removed until Calico was up, but they had reserved enough CPU to prevent Calico, or any other pod, from initializing.
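Before deleting, it is worth confirming that the key exists and matches the pod you expect. A sketch using the same variables as above (not from the original answer):
etcdctl get \
  --endpoints=${etcd_endpoints} \
  --cert ${etcd_cert_path} \
  --key ${etcd_key_path} \
  --cacert ${etcd_cacert_path} \
  --prefix --keys-only \
  /registry/pods/${namespace}/${pod}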
The following command, using awk and xargs along with --grace-period=0 --force, can be used to delete all the pods in Terminating state.
kubectl get pods|grep -i terminating | awk '{print $1}' | xargs kubectl delete --grace-period=0 --force pod
Go templates will work without awk; for me it works without --grace-period=0 --force, but add it if you like.
This will output the commands to delete the Terminated pods:
kubectl get pods --all-namespaces -otemplate='{{ range .items }}{{ if eq .status.reason "Terminated" }}{{printf "kubectl delete pod -n %v %v\n" .metadata.namespace .metadata.name}}{{end}}{{end}}'
If you are happy with the output, you can add | sh - to execute it, as follows:
kubectl get pods --all-namespaces -otemplate='{{ range .items }}{{ if eq .status.reason "Terminated" }}{{printf "kubectl delete pod -n %v %v\n" .metadata.namespace .metadata.name}}{{end}}{{end}}' |sh -
For me, the command below resolved the issue:
oc patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'
Related
How to delete all the Terminated pods of a kubernetes cluster?
I know how to delete a specific pod:
kubectl -n <namespace> delete pod <pod-name>
Is there a way to delete all the Terminated pods at once?
What does a Terminated pod mean? If you wish to delete the finished pods of any jobs in the namespace, you can remove them with a single command:
kubectl -n <namespace> delete pods --field-selector=status.phase==Succeeded
Another approach from Kubernetes 1.23 onwards is to use the Job's TTL controller feature:
spec:
  ttlSecondsAfterFinished: 100
In your case a Terminated status means your pods are in a failed state. To remove them, just change status.phase to Failed in the field selector (https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus).
You can pipe two commands:
kubectl -n <namespace> get pods --field-selector=status.phase==Succeeded -o custom-columns=NAME:.metadata.name --no-headers | xargs kubectl -n <namespace> delete pods
I don't think there is an 'exec' option to kubectl get (like the CLI tool 'find' has, for instance). If the command fits your needs, you can always turn it into an alias or shell function.
How to reset K3s cluster pods
I have a k3s cluster with the following pods:
kube-system pod/calico-node-xxxx
kube-system pod/calico-kube-controllers-xxxxxx
kube-system pod/metrics-server-xxxxx
kube-system pod/local-path-provisioner-xxxxx
kube-system pod/coredns-xxxxx
How can I reset (stop and start the pods again) the pods, either with a command (kubectl maybe) or any script?
To reset a pod, you can just delete it. If it's managed by a deployment (the pods in your question should be), it will be recreated automatically.
kubectl delete pod <pod-name> <pod2-name> ... -n <namespace>
If the pods you want to reset have a common label, you can filter them with the --selector flag:
kubectl delete pods --selector=<label-name>=<label-value> -n <namespace>
However, if you changed the deployments somehow, you will need to apply the unmodified manifest:
kubectl apply -f <yaml-file>
Warning: this will reset your whole cluster and delete all running data. This is not the exact answer, but it is the quickest option and takes only a minute. Just uninstall by running:
sudo /usr/local/bin/k3s-uninstall.sh
Then install a fresh cluster with:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable=traefik" sh -
Then export the kubeconfig variable:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
It may also complain about access to the k3s config file, so:
sudo chmod 444 /etc/rancher/k3s/k3s.yaml
How to remove the pods of a removed node
I have removed and deleted a node from the k8s cluster using the following commands:
kubectl drain worker1 --ignore-daemonsets
kubectl delete node worker1
After that, I saw that the kube-proxy and weave daemonset pods (both for worker1) still existed (which is expected, since I ignored daemonsets), even though the node was drained and deleted.
How can I remove these pods now that the node (worker1) is drained and deleted? Thank you.
Find out the name of the pod which is scheduled on that deleted node and delete it using:
kubectl delete pods <pod_name> --grace-period=0 --force -n <namespace>
Use the command below to display more details about a pod, including the node on which it is scheduled:
kubectl get pods -n <namespace> -o wide
You could also use kubeadm reset on that node. Please note this will uninstall and remove all Kubernetes-related software from that node.
Using kubectl delete command to remove coredns pod: blocked / no activity
I found my coredns pod throwing this error: Readiness probe failed: Get http://172.30.224.7:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers).
I am deleting the pod using this command:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
but the command keeps waiting with no response. How can I know the progress of the deletion? This is the output:
[root@ops001 ~]# kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
pod "coredns-89764d78c-mbcbz" deleted
and the terminal hangs or is blocked. When I use the browser UI with the Kubernetes dashboard, the pod still exists. How do I force delete it, or fix it the right way?
You are deleting a pod which is monitored by a deployment controller. That's why when you delete one of the pods, the controller creates another to make sure the number of pods equals the replica count. If you really want to delete coredns (not recommended), delete the deployment instead of the pods:
$ kubectl delete deployment coredns -n kube-system
Answering another part of your question:
"but the command keeps waiting with no response; how can I know the progress of the deletion?"
When you're deleting a Pod and you want to see what's going on under the hood, you can additionally provide the -v flag and specify the desired verbosity level, e.g.:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system -v 8
If there is some issue with the deletion of a specific Pod, it should tell you the details.
I totally agree with @P Ekambaram's comment: "if coredns is not started, you need to check the logs and find out why it is not getting started".
You can always delete the whole coredns Deployment and re-deploy it, but generally you shouldn't do that. Looking at the Pod logs:
kubectl logs coredns-89764d78c-mbcbz -n kube-system
should also tell you some details explaining why it doesn't work properly. I would say that deleting the whole coredns Deployment is a last-resort command.
What will happen to evicted pods in kubernetes?
I just saw some of my pods get evicted by Kubernetes. What will happen to them? Will they just hang around like that, or do I have to delete them manually?
A quick workaround I use is to delete all evicted pods manually after an incident. You can use this command:
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
To delete pods in the Failed state in the default namespace:
kubectl -n default delete pods --field-selector=status.phase=Failed
Evicted pods should be manually deleted. You can use the following command to delete all pods in the Failed phase:
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
Depending on whether a soft or hard eviction threshold has been met, the containers in the Pod will be terminated with or without a grace period, the PodPhase will be marked as Failed, and the Pod deleted. If your application runs as part of, e.g., a Deployment, another Pod will be created and scheduled by Kubernetes - probably on another node that is not exceeding its eviction thresholds. Be aware that eviction does not necessarily have to be caused by thresholds: it can also be invoked via kubectl drain to empty a node, or manually via the Kubernetes API.
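To get an overview of which pods ended up in the Failed phase and why, something like this can be used (a sketch added for convenience; the REASON column assumes .status.reason is populated, as it is for evictions):
kubectl get pods -A --field-selector=status.phase=Failed -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,REASON:.status.reason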
To answer the original question: the evicted pods will hang around until the number of them reaches the terminated-pod-gc-threshold limit (it's an option of kube-controller-manager and is equal to 12500 by default); this is by-design behavior of Kubernetes (the same approach is used and documented for Jobs - https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup). Keeping the evicted pods around allows you to view their logs to check for errors, warnings, or other diagnostic output.
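For example, to inspect a single evicted pod before cleaning it up (a small sketch; <namespace> and <evicted-pod> are placeholders, and logs may already be gone if the node reclaimed them):
kubectl -n <namespace> describe pod <evicted-pod>   # the eviction reason and message are shown here
kubectl -n <namespace> logs <evicted-pod>           # may fail if the container logs were reclaimed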
The command below deletes all evicted pods from all namespaces:
kubectl get pods -A | grep Evicted | awk '{print $2 " -n " $1}' | xargs -n 3 kubectl delete pod
One more bash command to delete evicted pods:
kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
Just in case someone wants to automatically delete all evicted pods for all namespaces:
PowerShell:
Foreach( $x in (kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.name)) {kubectl delete po $x --all-namespaces }
Bash:
kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.name | xargs kubectl delete po --all-namespaces
kube-controller-manager exists by default in a working K8s installation. It appears that the default is a maximum of 12500 terminated pods before GC kicks in.
Directly from the K8s documentation: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#kube-controller-manager
--terminated-pod-gc-threshold int32     Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
In case you have pods with a Completed status that you want to keep around:
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
Another way, still with awk. To prevent any human error that could make me crazy (deleting desirable pods), I check the result of the get pods command first:
kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed
If that looks good, here we go:
kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed | \
  awk '{system("kubectl -n my-ns delete pods " $1)}'
Same thing for pods of all namespaces. Check:
kubectl get -A pods --no-headers --field-selector=status.phase=Failed
Delete:
kubectl get -A pods --no-headers --field-selector status.phase=Failed | \
  awk '{system("kubectl -n " $1 " delete pod " $2 )}'
OpenShift equivalent of Kalvin's command to delete all 'Evicted' pods:
eval "$(oc get pods --all-namespaces -o json | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"
To delete all the Evicted pods by force, you can try this one-line command:
$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/e'
Tip: using the p modifier of sed's s command instead of e will just print the commands that would do the deletion:
$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/p'
The command below will get all evicted pods from the default namespace and delete them:
kubectl get pods | grep Evicted | awk '{print$1}' | xargs -I {} kubectl delete pods/{}
Here is the 'official' guide for how to hard-code the threshold (if you do not want to see too many evicted pods): kube-controller-manager
But a known problem is how to get at kube-controller-manager to configure it in the first place...
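On a self-managed control plane (e.g. kubeadm) the flag can be added to the kube-controller-manager static pod manifest; managed clusters usually do not expose this at all. A hedged sketch of checking for it (the manifest path shown is the kubeadm default):
grep -n 'terminated-pod-gc-threshold' /etc/kubernetes/manifests/kube-controller-manager.yaml
# If absent, add "- --terminated-pod-gc-threshold=<N>" under the container's command: list;
# the kubelet then restarts the static pod automatically.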
When we have too many evicted pods in a cluster, this can lead to network load, because each pod, even though evicted, is still connected to the network; in the case of a cloud Kubernetes cluster, it will also have blocked an IP address, which can lead to exhaustion of IP addresses if you have a fixed pool for your cluster. Also, when there are too many pods in Evicted status, it becomes difficult to monitor the pods by running kubectl get pod, as you will see too many evicted pods, which can be a bit confusing at times.
To delete an evicted pod, run the following command:
kubectl delete pod <podname> -n <namespace>
If you have many evicted pods:
kubectl get pod -n <namespace> | grep Evicted | awk '{print $1}' | xargs kubectl delete pod -n <namespace>