How to delete all the Terminated pods of a Kubernetes cluster?

I know how to delete a specific pod:
kubectl -n <namespace> delete pod <pod-name>
Is there a way to delete all the Terminated pods at once?

What does "Terminated" pod mean here? If you wish to delete the finished pods of any Jobs in the namespace, you can remove them with a single command:
kubectl -n <namespace> delete pods --field-selector=status.phase==Succeeded
Another approach, available from Kubernetes 1.23 onwards, is to use the Job's TTL-after-finished controller:
spec:
  ttlSecondsAfterFinished: 100
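For context, here is a minimal Job manifest sketch with that field set (the name and image are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      containers:
      - name: example
        image: busybox:1.36
        command: ["sh", "-c", "echo done"]
      restartPolicy: Never
The Job and its pods are cleaned up automatically 100 seconds after the Job finishes.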
In your case, a Terminated status usually means the pods are in a failed state. To remove those, just change the field selector to status.phase==Failed (see https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#PodStatus).
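For example, the failed-pods variant of the command above would be:
kubectl -n <namespace> delete pods --field-selector=status.phase==Failed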

You can pipe two commands, using xargs to pass the pod names to the delete:
kubectl -n <namespace> get pods --field-selector=status.phase==Succeeded -o custom-columns=NAME:.metadata.name --no-headers | xargs kubectl -n <namespace> delete pods
I don't think kubectl get has an 'exec'-style option (the way the find CLI tool does, for instance).
If the command fits your needs, you can always convert it to an alias or shell function.
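For instance, a minimal shell-function sketch (the function name is just an example):
delete-succeeded() {
  # Delete all Succeeded pods in the namespace given as the first argument
  kubectl -n "$1" get pods --field-selector=status.phase==Succeeded \
    -o custom-columns=NAME:.metadata.name --no-headers \
    | xargs -r kubectl -n "$1" delete pods
}
# Usage: delete-succeeded <namespace>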

Related

Update the manifest of a Kubernetes object

I have a k8s cluster and I have to update metrics-server (in the kube-system namespace). I've tried:
kubectl apply -n kube-system -f my-updated-metrics-server.yaml
and
kubectl replace -n kube-system -f my-updated-metrics-server.yaml
without success. What happens is that the deployment gets updated, but after a while (10-15 min) it reverts to the state it was in before the apply/replace commands.
Any thoughts?
UPDATED (as requested)
$ kubectl get ns |grep -iE 'argo|flux'
$

What is the command to know the Kubernetes Pod running status?

I am trying to create containers, but when I build them the pod goes into a failed state. How can I see the pod status and find the root cause of the failure? Why is it not succeeding?
Get the logs for the pod:
kubectl logs -f <pod_name> -n <namespace>
Get the list of events and other information for the pod:
kubectl describe po <pod_name> -n <namespace>
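To see the pod's current phase and restart count at a glance, you can also run:
kubectl get pod <pod_name> -n <namespace> -o wide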

How to reset K3s cluster pods

I have a k3s cluster with the following pods:
kube-system pod/calico-node-xxxx
kube-system pod/calico-kube-controllers-xxxxxx
kube-system pod/metrics-server-xxxxx
kube-system pod/local-path-provisioner-xxxxx
kube-system pod/coredns-xxxxx
How can I reset (stop and start the pods again) the pods either with command (kubectl maybe) or any script?
To reset a pod, you can just delete it. If it is managed by a Deployment (the pods in your question should be), it will be recreated automatically.
kubectl delete pod <pod-name> <pod2-name> ... -n <namespace>
If the pods you want to reset have a common label, you can filter them with the --selector flag:
kubectl delete pods --selector=<label-name>=<label-value> -n <namespace>
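For example, assuming the metrics-server pods carry the conventional k8s-app label (check with kubectl get pods -n kube-system --show-labels first):
kubectl delete pods --selector=k8s-app=metrics-server -n kube-system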
However, if you changed the Deployments somehow, you will need to re-apply the unmodified manifest:
kubectl apply -f <yaml-file>
Warning: this will reset your whole cluster and delete all running data.
This is not the exact answer, but it is the quickest approach; it takes only about a minute.
Uninstall k3s by running the command below:
sudo /usr/local/bin/k3s-uninstall.sh
Then install a fresh cluster with the command below:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable=traefik" sh -
Then export the kubeconfig variable:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
It may also complain about access to the k3s config file, so:
sudo chmod 444 /etc/rancher/k3s/k3s.yaml

Pods stuck in Terminating status

I tried to delete a ReplicationController with 12 pods and saw that some of the pods were stuck in Terminating status.
My Kubernetes cluster consists of one control plane node and three worker nodes installed on Ubuntu virtual machines.
What could be the reason for this issue?
NAME        READY   STATUS        RESTARTS   AGE
pod-186o2   1/1     Terminating   0          2h
pod-4b6qc   1/1     Terminating   0          2h
pod-8xl86   1/1     Terminating   0          1h
pod-d6htc   1/1     Terminating   0          1h
pod-vlzov   1/1     Terminating   0          1h
You can use the following command to delete the pod forcefully:
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
The original question is "What could be the reason for this issue?", and the answer is discussed at https://github.com/kubernetes/kubernetes/issues/51835 and https://github.com/kubernetes/kubernetes/issues/65569; see also https://www.bountysource.com/issues/33241128-unable-to-remove-a-stopped-container-device-or-resource-busy
It is caused by a Docker mount leaking into another namespace.
You can log on to the pod's host to investigate:
minikube ssh
docker container ps | grep <id>
docker container stop <id>
Force delete the pod:
kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>
The --force flag is mandatory.
I found this command more straightforward:
for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do kubectl delete pod $p --grace-period=0 --force;done
It will delete all pods in Terminating status in the default namespace.
Delete the finalizers block from the resource (pod, deployment, ds, etc.) YAML:
"finalizers": [
  "foregroundDeletion"
]
In my case the --force option didn't quite work; I could still see the pod! It was stuck in Terminating/Unknown mode. So after running
kubectl -n redis delete pods <pod> --grace-period=0 --force
I ran
kubectl -n redis patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Practical answer -- you can always delete a terminating pod by running:
kubectl delete pod NAME --grace-period=0
Historical answer -- There was an issue in version 1.1 where pods would sometimes get stranded in the Terminating state if their nodes were uncleanly removed from the cluster.
I stumbled upon this recently while freeing up resources in my cluster. Here is the command to delete them all:
kubectl get pods --all-namespaces | grep Terminating | while read -r line; do
  pod_name=$(echo "$line" | awk '{print $2}')
  name_space=$(echo "$line" | awk '{print $1}')
  kubectl delete pods "$pod_name" -n "$name_space" --grace-period=0 --force
done
Hope this helps someone who reads this.
Force delete ALL pods in a namespace:
kubectl delete pods --all -n <namespace> --grace-period 0 --force
If --grace-period=0 alone is not working, then you can add --force:
kubectl delete pods <pod> --grace-period=0 --force
I stumbled upon this recently when removing the rook-ceph namespace; it got stuck in the Terminating state.
The only thing that helped was removing the Kubernetes finalizer by calling the k8s API directly with curl, as suggested here.
kubectl get namespace rook-ceph -o json > tmp.json
Delete the Kubernetes finalizer in tmp.json (leave an empty array: "finalizers": []).
Run kubectl proxy in another terminal for auth purposes, and send the following curl request to the returned port:
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json 127.0.0.1:8001/k8s/clusters/c-mzplp/api/v1/namespaces/rook-ceph/finalize
The namespace is gone.
A detailed rook-ceph teardown is described here.
I used this command to delete the pods:
kubectl delete pod --grace-period=0 --force --namespace <NAMESPACE> <PODNAME>
But when I tried to run another pod, it didn't work; it was stuck in the "Pending" state, and it looked like the node itself was stuck.
For me, the solution was to recreate the node. I simply went to the GKE console and deleted the node from the cluster, and GKE started another.
After that, everything started to work normally again.
I had the same issue in a production Kubernetes cluster.
A pod was stuck in Terminating phase for a while:
pod-issuing mypod-issuing-0 1/1 Terminating 0 27h
I tried checking the logs and events using the commands:
kubectl describe pod mypod-issuing-0 --namespace pod-issuing
kubectl logs mypod-issuing-0 --namespace pod-issuing
but none were available to view.
How I fixed it:
I ran the command below to forcefully delete the pod:
kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>
This deleted the pod immediately and started creating a new one. However, I ran into the error below while the new pod was being created:
Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data mypod-issuing-token-5swgg aws-iam-token]: timed out waiting for the condition
I had to wait 7 to 10 minutes for the volume to detach from the pod I had deleted so that it could become available for the new pod I was creating.
In my case, I don't like workarounds, so here are the steps:
k get pod -o wide -> this will show which node is running the pod
k get nodes -> check the status of that node... I got NotReady
I went and fixed that node. In my case, it was just a kubelet restart:
ssh that-node -> run swapoff -a && systemctl restart kubelet (or systemctl restart k3s in the case of k3s, or systemctl restart crio in other cases like OCP 4.x (k8s <1.23))
Now deletion of the pod should work without forcing the poor pod.
Please try the command below:
kubectl patch pod <pod> -p '{"metadata":{"finalizers":null}}'
Before doing a force deletion, I would first do some checks.
1- Node state: get the name of the node where your pod is running; you can see this with the following command:
"kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME"
Under the "Node" label you will see the node name.
With that you can do:
kubectl describe node NODE_NAME
Check the "conditions" field if you see anything strange.
If this is fine then you can move to the step, redo:
"kubectl -n YOUR_NAMESPACE describe pod YOUR_PODNAME"
Check the reason why it is hanging; you can find this under the "Events" section.
I say this because you might need to take preliminary actions before force deleting the pod; force deleting the pod only deletes the pod itself, not the underlying resource (a stuck Docker container, for example).
I would not recommend force deleting pods unless the container has already exited. Before forcing anything, run through these checks (a consolidated sketch follows the list):
Verify the kubelet logs to see what is causing the issue: journalctl -u kubelet
Verify the Docker logs: journalctl -u docker.service
Check whether the pod's volume mount points still exist and whether anything holds a lock on them.
Verify whether the host is out of memory or disk.
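A quick triage sketch on the node, assuming a systemd host running kubelet and Docker (the time window is illustrative):
# Recent kubelet and Docker logs
journalctl -u kubelet --since "30 min ago" | tail -n 50
journalctl -u docker.service --since "30 min ago" | tail -n 50
# Leftover pod volume mounts held by the kubelet
mount | grep /var/lib/kubelet/pods
# Host memory and disk pressure
free -m
df -h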
You can use awk:
kubectl get pods --all-namespaces | awk '{if ($4=="Terminating") print "kubectl delete pod " $2 " -n " $1 " --force --grace-period=0 ";}' | sh
One reason WHY this happens is turning off a node without draining it first. The fix in this case is to turn the node on again; then termination should succeed.
My pods were stuck in 'Terminating', even after I tried restarting Docker and restarting the server. It was resolved after editing the pod and deleting the entries under 'finalizers':
$ kubectl -n mynamespace edit pod/my-pod-name
I am going to attempt the most extensive answer, because none of the above are wrong, but they do not work in all scenarios.
The usual way to put an end to a terminating pod is:
kubectl delete pod -n ${namespace} ${pod} --grace-period=0
But you may need to remove finalizers that could be preventing the pod from stopping, using:
kubectl -n ${namespace} patch pod ${pod} -p '{"metadata":{"finalizers":null}}'
If none of that works, you can remove the pod from etcd with etcdctl:
# Define variables (hyphens are not valid in shell variable names, so underscores are used)
export ETCDCTL_API=3
certs_path=${HOME}/.certs/e
etcd_cert_path=${certs_path}/etcd.crt
etcd_key_path=${certs_path}/etcd.key
etcd_cacert_path=${certs_path}/etcd.ca
etcd_endpoints=https://127.0.0.1:2379
namespace=myns
pod=mypod
# Call etcdctl to remove the pod's key from the store
etcdctl del \
  --endpoints=${etcd_endpoints} \
  --cert ${etcd_cert_path} \
  --key ${etcd_key_path} \
  --cacert ${etcd_cacert_path} \
  --prefix \
  /registry/pods/${namespace}/${pod}
This last option should be used only as a last resort. In my case I ended up having to do it because of a deadlock that prevented Calico from starting on the node due to pods stuck in Terminating status. Those pods would not be removed until Calico was up, but they had reserved enough CPU to prevent Calico, or any other pod, from initializing.
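To confirm the key is actually gone afterwards, a read with the same variables should return nothing:
etcdctl get \
  --endpoints=${etcd_endpoints} \
  --cert ${etcd_cert_path} \
  --key ${etcd_key_path} \
  --cacert ${etcd_cacert_path} \
  --prefix --keys-only \
  /registry/pods/${namespace}/${pod}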
The following command, combining awk and xargs with --grace-period=0 --force, deletes all the pods in Terminating state:
kubectl get pods | grep -i terminating | awk '{print $1}' | xargs kubectl delete pod --grace-period=0 --force
Go templates will work without awk; for me it works without --grace-period=0 --force, but add it if you like.
This will output the commands to delete the Terminated pods:
kubectl get pods --all-namespaces -otemplate='{{ range .items }}{{ if eq .status.reason "Terminated" }}{{printf "kubectl delete pod -n %v %v\n" .metadata.namespace .metadata.name}}{{end}}{{end}}'
If you are happy with the output, you can add | sh - to execute it, as follows:
kubectl get pods --all-namespaces -otemplate='{{ range .items }}{{ if eq .status.reason "Terminated" }}{{printf "kubectl delete pod -n %v %v\n" .metadata.namespace .metadata.name}}{{end}}{{end}}' |sh -
For me, the command below resolved the issue:
oc patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'

Command to delete all pods in all Kubernetes namespaces

Upon looking at the docs, there is an API call to delete a single pod, but is there a way to delete all pods in all namespaces?
There is no command that does exactly what you asked.
Here are some close matches.
Be careful before running any of these commands. Make sure you are connected to the right cluster if you use multiple clusters. Consider running kubectl config view first.
You can delete all the pods in a single namespace with this command:
kubectl delete --all pods --namespace=foo
You can also delete all Deployments in a namespace, which will delete all pods attached to the Deployments in that namespace:
kubectl delete --all deployments --namespace=foo
You can delete all namespaces and every object in every namespace (but not un-namespaced objects, like nodes and some events) with this command:
kubectl delete --all namespaces
However, the latter command is probably not something you want to do, since it will delete things in the kube-system namespace, which will make your cluster unusable.
This command will delete all the namespaces except kube-system, which might be useful:
for each in $(kubectl get ns -o jsonpath="{.items[*].metadata.name}" | grep -v kube-system); do
  kubectl delete ns $each
done
kubectl delete daemonsets,replicasets,services,deployments,pods,rc,ingress --all --all-namespaces
to get rid of those pesky replication controllers too.
You can simply run
kubectl delete all --all --all-namespaces
The first all refers to the common resource kinds (pods, replicasets, deployments, ...):
kubectl get all == kubectl get pods,rs,deployments, ...
The second --all means to select all resources of those kinds.
Note that all does not include:
non-namespaced resources (e.g., clusterrolebindings, clusterroles, ...)
configmaps
rolebindings
roles
secrets
...
In order to clean up completely,
you could use other tools (e.g., Helm, Kustomize, ...)
you could use a namespace.
you could use labels when you create resources (a sketch follows below).
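As a sketch of the label approach, assuming your resources were created with an app=myapp label (the label is illustrative):
# Delete everything 'kubectl get all' covers that carries the label
kubectl delete all -l app=myapp -n <namespace>
# Plus kinds that 'all' misses, such as configmaps and secrets
kubectl delete configmaps,secrets -l app=myapp -n <namespace>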
You just need sed to do this:
kubectl get pods --no-headers=true --all-namespaces |sed -r 's/(\S+)\s+(\S+).*/kubectl --namespace \1 delete pod \2/e'
Explanation:
use the command kubectl get pods --all-namespaces to get the list of all pods in all namespaces;
use the --no-headers=true option to hide the headers;
use the s command of sed to fetch the first two words, which represent the namespace and the pod's name respectively, then assemble the delete command from them.
The final delete command looks like:
kubectl --namespace kube-system delete pod heapster-eq3yw
The e modifier of the s command (a GNU sed extension) executes the command assembled above, which performs the actual deletion.
To avoid deleting pods in the kube-system namespace, just add grep -v kube-system to exclude it before the sed command, as shown below.
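That kube-system-safe variant looks like this:
kubectl get pods --no-headers=true --all-namespaces | grep -v kube-system | sed -r 's/(\S+)\s+(\S+).*/kubectl --namespace \1 delete pod \2/e'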
I tried the commands from the answers listed here, but the pods were stuck in Terminating state.
The command below deletes all pods from a particular namespace forcefully, for when they are stuck in Terminating state or you are otherwise unable to delete them:
kubectl delete pods --all --grace-period=0 --force --namespace <namespace>
Hope it might be useful to someone.
Kubernetes fundamentally works on namespaces. If you would like to release all the resources related to a specific namespace, you can use the command below:
kubectl delete namespace k8sdemo-app
Steps to delete a PV:
Delete all deployments and pods or resources related to that PV:
kubectl delete --all deployments -n <namespace>
kubectl delete --all pods -n <namespace>
Edit the PV (PVs are cluster-scoped, so no namespace flag is needed):
kubectl edit pv pv_name
Remove the kubernetes.io/pv-protection finalizer.
Delete the PV:
kubectl delete pv pv_name
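If the PV still hangs after that, the same finalizer-clearing patch used for pods earlier in this thread works for PVs too (use with care):
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'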
Delete all pods in all namespaces only (their Deployments will restart them):
kubectl get pod -A -o yaml | kubectl delete -f -
You can use kubectl delete pods -l dev-lead!=carisa, or whatever label you have.
Here is a one-liner that can be extended with grep to filter by name (an example follows the command).
kubectl get pods -o jsonpath="{.items[*].metadata.name}" | \
tr " " "\n" | \
xargs -i -P 0 kubectl delete pods {}
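For example, extended with grep to keep only pods whose names start with myapp- (the prefix is illustrative):
kubectl get pods -o jsonpath="{.items[*].metadata.name}" | \
tr " " "\n" | \
grep '^myapp-' | \
xargs -i -P 0 kubectl delete pods {}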
A one-line command to delete all pods in all namespaces:
kubectl get ns -o=custom-columns=Namespace:.metadata.name --no-headers | xargs -n1 kubectl delete pods --all -n
kubectl delete po,ing,svc,pv,pvc,sc,ep,rc,deploy,replicaset,daemonset --all -A
If you already have pods which are being recreated, consider deleting all Deployments first:
kubectl delete -n <NAMESPACE> deployment <DEPLOYMENT>
Just replace NAMESPACE and DEPLOYMENT with the corresponding values; you can get information on all Deployments with the following command:
kubectl get deployments --all-namespaces
The kubectl bulk plugin (bulk-action on krew) may be useful for you; it gives you bulk operations on selected resources. This is the command for deleting pods:
kubectl bulk pods -n namespace delete
You can check the details in the plugin's documentation.
I created a Python script to delete everything in a namespace:
delall.py:
import json, sys, os

# Read 'kubectl get ... -o json' output from stdin and delete each item
obj = json.load(sys.stdin)
for item in obj["items"]:
    os.system("kubectl delete " + item["kind"] + "/" + item["metadata"]["name"] + " -n yournamespace")
and then:
kubectl get all -n kong -o json | python delall.py
If you have multiple pods which are crashing or in an error state and you want to delete only those:
kubectl get pods -n <namespace> | grep -E 'CrashLoopBackOff|Error' | awk '{print $1}' | xargs kubectl delete pods -n <namespace>
It was hinted at above, but I just thought I would helpfully point out that the shortcut for --all-namespaces is -A, with a capital A. HTH somebody. I've opened a PR to have this helpful hint added to the official Kubernetes Cheat Sheet.
If you want to delete pods in all namespaces just to have them restarted, and you are aware that some of them will be recreated, I like the following for loop:
for i in $(kubectl get pods -A | awk '{print $1}' | uniq | grep -v NAMESPACE); do kubectl delete --all pods -n $i; done
If you have an HPA, then scale it down first.