How to delete only unmounted PVCs and PVs? - kubernetes

We don't want to delete PVs and PVCs, as pods reuse them most of the time.
However, in the long term we end up with many PVs and PVCs that are no longer used.
How can we safely clean them up?

Not very elegant, but here is a bash way to delete Released PVs:
kubectl get pv | grep Released | awk '$1 {print $1}' | while read vol; do kubectl delete pv/${vol}; done

Looking through the current answers, it looks like most of them don't directly answer the question (I could be mistaken). A PVC that is Bound is not the same as one that is Mounted. The current answers should suffice to clean up unbound PVCs, but finding and cleaning up all unmounted PVCs seems unanswered.
Unfortunately, it looks like -o=go-template=... doesn't expose a field for the Mounted By: value shown in kubectl describe pvc.
Here's what I've come up with after some hacking around:
To list all PVCs in a cluster (mounted and not mounted) you can do this: kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$"
The -A returns every PVC in the cluster, in every namespace. We then filter down to show just the Name, Namespace and Mounted By fields.
The best I could come up with to then get the names and namespaces of all unmounted PVCs is this:
kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$"
Actually deleting the PVCs is somewhat tricky because we need to know both the name of the PVC and its namespace. We use cut, paste and xargs to do this:
kubectl describe -A pvc | grep -E "^Name:.*$|^Namespace:.*$|^Mounted By:.*$" | grep -B 2 "<none>" | grep -E "^Name:.*$|^Namespace:.*$" | cut -f2 -d: | paste -d " " - - | xargs -n2 bash -c 'kubectl -n ${1} delete pvc ${0}'
cut removes Name: and Namespace: since they just get in the way
paste puts the name of the PVC and its namespace on the same line
xargs -n2 bash makes it so the PVC name is ${0} and the namespace is ${1}.
I admit this probably isn't the best way to do it, but it was the only obvious way I could come up with on the CLI.
After running this your volumes will go from Bound to Unbound and the other answers in this thread have good ideas on how to clean those up.
Also, keep in mind that some of the volume controllers don't actually delete your data when the volumes are deleted in Kubernetes. You might still need to clean that up in whichever system you are using.
For example, in the NFS controller the data gets renamed with an archived- prefix and on the NFS side you can run rm -rf /persistentvolumes/archived-*. For AWS EBS you might still need to delete the EBS volumes if they are detached from any instance.
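For the AWS case, a minimal sketch of finding and removing detached (status "available") EBS volumes with the AWS CLI; this assumes the AWS CLI is configured for the right account and region, and that every "available" volume really is an orphan, so review the list before deleting anything:
# List volumes that are not attached to any instance
aws ec2 describe-volumes --filters Name=status,Values=available --query 'Volumes[].VolumeId' --output text
# After reviewing the list, delete them one by one
aws ec2 describe-volumes --filters Name=status,Values=available --query 'Volumes[].VolumeId' --output text \
| tr '\t' '\n' | xargs -n 1 aws ec2 delete-volume --volume-id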
I hope this helps!
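One more idea, if you have jq available: a hedged sketch that computes which PVCs are actually referenced by pods and diffs that against all PVCs, avoiding the kubectl describe parsing entirely (the temp file paths are just examples):
# Namespaced names of every PVC referenced by some pod
kubectl get pods -A -o json \
| jq -r '.items[] | .metadata.namespace as $ns | .spec.volumes[]? | select(.persistentVolumeClaim) | $ns + "/" + .persistentVolumeClaim.claimName' \
| sort -u > /tmp/mounted-pvcs
# Namespaced names of every PVC in the cluster
kubectl get pvc -A --no-headers | awk '{print $1 "/" $2}' | sort > /tmp/all-pvcs
# PVCs that exist but are not mounted by any pod
comm -13 /tmp/mounted-pvcs /tmp/all-pvcs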

If you'd like to remove all the unbound PVs and PVCs, you can do this:
First delete the PVCs:
$ kubectl -n <namespace> get pvc | tail -n +2 | grep -v Bound | \
awk '{print $1}' | xargs -I{} kubectl -n <namespace> delete pvc {}
Then delete the PVs (PVs are cluster-scoped, so no -n flag is needed for them):
$ kubectl get pv | tail -n +2 | grep -v Bound | \
awk '{print $1}' | xargs -I{} kubectl delete pv {}

All previous answers are valid and interesting. Here is another simple way to delete persistent volumes.
You should first delete the associated PersistentVolumeClaim, but in some cases the PersistentVolumes cannot be deleted automatically (e.g. with a "Retain" reclaim policy).
Here is a safe syntax for deleting PersistentVolumes in Released status (unused and unmounted); it only prints the delete command, so drop the echo to actually run it:
kubectl get --no-headers persistentvolumes|awk '$5=="Released" { print $1 }'|xargs echo "kubectl delete persistentvolumes"

As long as you keep the PVC, its PV will stay in the Bound state. So you can just go and delete unused PVCs with:
kubectl -n <namespace> get pvc -o name | grep myname | xargs kubectl -n <namespace> delete

Yeah, first you need to delete the unused PVCs.
With kubectl get pvc --all-namespaces you can list all of them in all namespaces along with the corresponding PVs.
In order to delete unused PVs you need to change their ReclaimPolicy, because if it's set to Retain the PVs won't be deleted but will hang in "Released" status. To do that you need to patch the PV (for some reason it's not possible to edit it manually with kubectl edit):
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
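If you have a lot of Released PVs to clean up, a minimal sketch that patches all of them at once (column 5 of kubectl get pv is STATUS; review the list printed by the first command before running the second):
kubectl get pv --no-headers | awk '$5=="Released" {print $1}'
kubectl get pv --no-headers | awk '$5=="Released" {print $1}' | xargs -n 1 -I{} kubectl patch pv {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'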

Related

Kubernetes: How can I delete all PVC that have a specific word in their name?

I want to delete all PersistentVolumeClaims in Kubernetes that have the word claim in their name.
For example: claim-me-dot-com
How can I do it? Thanks.
You can use this to delete all PVCs whose names contain claim. You should check every namespace and then delete the PVCs in that namespace; otherwise, you may delete important PVCs.
export NAMESPACE=your-name-space
export DELETE_PHRASE=claim
kubectl get pvc -n $NAMESPACE --no-headers=true | awk '{ print $1 }' | grep $DELETE_PHRASE | xargs kubectl delete pvc -n $NAMESPACE
To delete in bulk, you can use this loop to delete all matching PVCs across every namespace. Use this at your own RISK. (A way to preview what it would delete is shown after the loop.)
for ns in `kubectl get ns --no-headers=true -o custom-columns=":metadata.name"`; do
  for pvc in `kubectl get pvc -n $ns --no-headers=true -o custom-columns=":metadata.name" | grep claim`; do
    kubectl delete pvc $pvc -n $ns;
  done
done
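To preview what the loop would remove before running it for real, one hedged option (on reasonably recent kubectl) is to append --dry-run=client to the inner delete; it prints what would be deleted without touching anything:
kubectl delete pvc $pvc -n $ns --dry-run=client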
The solution below is restricted to the current namespace:
kubectl get pvc -o name | grep "claim-me-dot-com" | xargs -n 1 kubectl delete
To delete matching PVCs from all namespaces (note that -o name does not include the namespace, so we build the -n flag from the namespace column instead):
kubectl get pvc -A --no-headers | awk '$2 ~ /claim-me-dot-com/ {print $2 " -n " $1}' | xargs -n 3 kubectl delete pvc

Delete Kubernetes namespace only if it's empty?

I am using Helm to deploy multiple "components" of my application into a single namespace and using Jenkins to trigger create and destroy jobs. It doesn't seem that I can use Helm to delete the namespace thus I am looking to just use a Kubernetes command.
However, it seems that if I use kubectl delete namespace it will forcefully destroy the namespace and all its resources.
I'd like to destroy the namespace only if it is empty. Is there a command to do this?
I'd like to destroy the namespace only if it is empty. Is there a command
to do this?
No, there is no command to do that. This behavior is by design.
I would suggest a different approach. You should keep all your deployment YAMLs in a version control system for all of the components, including the namespace. When you want to create, use kubectl create -f deployment.yaml, and when you want to delete, use kubectl delete -f deployment.yaml.
See the Remove Empty Namespaces Operator; it can do exactly what you want.
Why? Because it's not so easy to iterate over the resources in a namespace to decide whether it's empty or not. After all, there are "default resources" like the default service account and probably other stuff from your tooling/operators.
So these resources should be excluded from the iteration. Bash scripting becomes too complicated this way, and one day I decided to implement it in Python.
You can run kubectl get all --namespace YOUR_NAMESPACE and then, depending on the output, call delete namespace.
Try this; it's better to iterate over the kube-api resources, and this will list every resource inside the namespace:
kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --show-kind --ignore-not-found -l <label>=<value> -n <namespace>
or another approach, in PowerShell:
kubectl api-resources --verbs=list --namespaced -o name | %{ kubectl get $_ --show-kind --ignore-not-found -l <label>=<value> -n <namespace> }
There's no simple command to check a namespace before deleting; it requires some kubectl scripting or a kube API client.
From the GitHub issue discussing the limitations of get all, liggit provides an example; adding some jq processing gives a (slow) command that errors unless every resource type in the namespace is empty (no items):
set -o pipefail
kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --ignore-not-found -n YOUR_NAMESPACE -o json \
| jq '.items[] | .kind + "/" + .metadata.name | error'
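Building on that check, a hedged wrapper that deletes the namespace only when nothing but the auto-created defaults is left; the exclusions for the default ServiceAccount, the kube-root-ca.crt ConfigMap and Events are assumptions you may need to extend for your own tooling, and YOUR_NAMESPACE is a placeholder:
ns=YOUR_NAMESPACE
# Collect everything still present in the namespace, minus the expected defaults
leftovers=$(kubectl api-resources --verbs=list --namespaced -o name \
| xargs -n 1 kubectl get --ignore-not-found -n "$ns" -o json 2>/dev/null \
| jq -r '.items[] | .kind + "/" + .metadata.name' \
| grep -v -e '^ServiceAccount/default$' -e '^ConfigMap/kube-root-ca.crt$' -e '^Event/' || true)
if [ -z "$leftovers" ]; then
  kubectl delete namespace "$ns"
else
  echo "Namespace $ns is not empty:" >&2
  echo "$leftovers" >&2
fi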
Just use the following to delete all empty namespaces:
kubectl get ns --no-headers -o custom-columns=":metadata.name" | xargs -I{} kubectl get all -n {} 2>&1 | grep "No" | cut -d " " -f 5 | xargs -I{} kubectl delete namespace {}
You can list empty namespaces like this:
kubectl get ns --no-headers -o custom-columns=":metadata.name" | xargs -I{} kubectl get all -n {} 2>&1 | grep "No" | cut -d " " -f 5

Kubernetes POD delete with Pattern Match or Wildcard

When I am using below it deletes the running POD after matching the pattern from commandline:
kubectl get pods -n bi-dev --no-headers=true | awk '/group-react/{print $1}' | xargs kubectl delete -n bi-dev pod
However, when I use this command as an alias in .bash_profile it doesn't execute.
This is how I defined it:
alias kdpgroup="kubectl get pods -n bi-dev --no-headers=true | awk '/group-react/{print $1}'| xargs kubectl delete -n bi-dev pod"
When I execute it as below I get this error on the command line:
~ $ kdpgroup
error: resource(s) were provided, but no name, label selector, or --all flag specified
When I define this in .bash_profile I get this:
~ $ . ./.bash_profile
-bash: alias: }| xargs kubectl delete -n bi-dev pod: not found
~ $
Am I missing something to delete PODs using pattern match or with a wildcard?
Thanks.
Am I missing something to delete PODs using pattern match or with a wildcard?
When using Kubernetes it is more common to use labels and selectors. E.g. if you deployed an application, you usually set a label on the pods e.g. app=my-app and you can then get the pods with e.g. kubectl get pods -l app=my-app.
Using this approach, it is easier to delete the pods you are interested in, with e.g.
kubectl delete pods -l app=my-app
or with namespaces
kubectl delete pods -l app=my-app -n default
See more on Kubernetes Labels and Selectors
Set-based selector
I have some pods running with the names "superset-react" and "superset-graphql", and I want to search with the wildcard superset and delete both of them in one command
I suggest that those pods have labels app=something-react and app=something-graphql. If you want to classify those apps, e.g. if your "superset" part varies, you could add a label app-type=react or app-type=graphql to all those types of apps.
Then you can delete pods for both app types with this command:
kubectl delete pods -l 'app-type in (react, graphql)'
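If your existing pods were created without such labels, you can also attach one after the fact and then use the selector-based delete; the pod names and the label key/value here are just illustrative:
kubectl label pods my-pod-1 my-pod-2 app-type=react -n <namespace>
kubectl delete pods -l app-type=react -n <namespace>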
As the question asks, this is about using a wild card. Let me give examples on using wild cards to delete pods.
Delete Pods which contain the word "application"
Replace <namespace> with the namespace you want to delete pods from.
kubectl get pods -n <namespace> --no-headers=true | awk '/application/{print $1}'| xargs kubectl delete -n <namespace> pod
This will give a response like the following. It will print out the deleted pods.
pod "sre-application-7fb4f5bff9-8crgx" deleted
pod "sre-application-7fb4f5bff9-ftzfd" deleted
pod "sre-application-7fb4f5bff9-rrkt2" deleted
Delete Pods which contain "application" or "service"
Replace <namespace> with the namespace you want to delete pods from.
kubectl get pods -n <namespace> --no-headers=true | awk '/application|service/{print $1}'| xargs kubectl delete -n <namespace> pod
This will give a response like the following. It will print out the deleted pods.
pod "sre-application-7fb4f5bff9-8crgx" deleted
pod "sre-application-7fb4f5bff9-ftzfd" deleted
pod "sre-service-7fb4f5bff9-rrkt2" deleted
You just need to escape the '$1' variable in the awk command:
alias kdpgroup="kubectl get pods -n bi-dev --no-headers=true | awk '/group-react/{print \$1}'| xargs kubectl delete -n bi-dev pod"
I know that escaping is annoying, and if you want to avoid it you can define it as a function in your .bash_profile:
kdpgroup() {
kubectl get pods -n default --no-headers=true | awk '{print $1}' | xargs kubectl delete pod -n default
}
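A hedged variant of that function which takes the pattern and namespace as arguments (the names are illustrative); keeping the awk program in single quotes avoids the escaping problem, and the pattern is passed in with -v:
kdp() {
  local pattern="$1"
  local ns="${2:-default}"
  # Print the names of pods whose line matches the pattern, then delete them
  kubectl get pods -n "$ns" --no-headers=true \
  | awk -v pat="$pattern" '$0 ~ pat {print $1}' \
  | xargs kubectl delete pod -n "$ns"
}
# usage: kdp group-react bi-dev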
A robust way with variables, based on @keetSugathadasa's answer:
ns="optional-namespace"
regex="pattern"
kubectl get pods ${ns:+ -n $ns} --no-headers | awk /${regex}/'{print $1}' \
| xargs kubectl delete ${ns:+ -n $ns} pod
Using grep, you can filter on a keyword and delete the matching pods like this:
kubectl get pods --no-headers=true | awk '{print $1}' | grep keyword | xargs kubectl delete pod

What will happen to evicted pods in kubernetes?

I just saw some of my pods get evicted by Kubernetes. What will happen to them? Will they just hang around, or do I have to delete them manually?
A quick workaround I use is to delete all evicted pods manually after an incident. You can use this command:
kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
To delete pods in Failed state in namespace default
kubectl -n default delete pods --field-selector=status.phase=Failed
Evicted pods should be manually deleted. You can use the following command to delete all pods in the Failed state.
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
Depending on whether a soft or hard eviction threshold has been met, the containers in the Pod will be terminated with or without a grace period, the PodPhase will be marked as Failed and the Pod deleted. If your application runs as part of e.g. a Deployment, another Pod will be created and scheduled by Kubernetes - probably on another node that is not exceeding its eviction thresholds.
Be aware that eviction does not necessarily have to be caused by thresholds but can also be invoked via kubectl drain to empty a node or manually via the Kubernetes API.
To answer the original question: the evicted pods will hang around until the number of them reaches the terminated-pod-gc-threshold limit (it's an option of kube-controller-manager and is equal to 12500 by default); this is by-design behavior of Kubernetes (the same approach is used and documented for Jobs - https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup). Keeping the evicted pods around allows you to view the logs of those pods to check for errors, warnings, or other diagnostic output.
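Before cleaning them up, it can be worth capturing why a pod was evicted; a hedged example, where the pod and namespace names are placeholders:
# Eviction reason and message recorded in the pod status
kubectl get pod <evicted-pod> -n <namespace> -o jsonpath='{.status.reason}{": "}{.status.message}{"\n"}'
# Or the full human-readable view
kubectl describe pod <evicted-pod> -n <namespace>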
The command below deletes all evicted pods from all namespaces:
kubectl get pods -A | grep Evicted | awk '{print $2 " -n " $1}' | xargs -n 3 kubectl delete pod
One more bash command to delete evicted pods
kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
Just in case someone wants to automatically delete all evicted pods for all namespaces (deleting a pod by name needs its namespace, so both columns are fetched):
Powershell
Foreach ($line in (kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.namespace,:metadata.name)) { $ns, $name = -split $line; kubectl delete po $name -n $ns }
Bash
kubectl get po --all-namespaces --field-selector=status.phase=Failed --no-headers -o custom-columns=:metadata.namespace,:metadata.name | xargs -n 2 sh -c 'kubectl delete po "$1" -n "$0"'
Kube-controller-manager exists by default with a working K8s installation. It appears that the default is a max of 12500 terminated pods before GC kicks in.
Directly from the K8s documentation:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#kube-controller-manager
--terminated-pod-gc-threshold int32 Default: 12500
Number of terminated pods that can exist before the terminated pod garbage collector starts deleting terminated pods. If <= 0, the terminated pod garbage collector is disabled.
In case you have pods with a Completed status that you want to keep around:
kubectl get pods --all-namespaces --field-selector 'status.phase==Failed' -o json | kubectl delete -f -
Another way, still with awk.
To avoid any human error that would drive me crazy (deleting desirable pods), I first check what the get pods command returns:
kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed
If that looks good, here we go:
kubectl -n my-ns get pods --no-headers --field-selector=status.phase=Failed | \
awk '{system("kubectl -n my-ns delete pods " $1)}'
Same thing with pods of all namespaces.
Check:
kubectl get -A pods --no-headers --field-selector=status.phase=Failed
Delete:
kubectl get -A pods --no-headers --field-selector status.phase=Failed | \
awk '{system("kubectl -n " $1 " delete pod " $2 )}'
OpenShift equivalent of Kalvin's command to delete all 'Evicted' pods:
eval "$(oc get pods --all-namespaces -o json | jq -r '.items[] | select(.status.phase == "Failed" and .status.reason == "Evicted") | "oc delete pod --namespace " + .metadata.namespace + " " + .metadata.name')"
To delete all the Evicted pods by force, you can try this one-line command:
$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/e'
Tip: using the p modifier of sed's s command instead of e will just print the actual delete commands instead of running them:
$ kubectl get pod -A | sed -nr '/Evicted/s/(^\S+)\s+(\S+).*/kubectl -n \1 delete pod \2 --force --grace-period=0/p'
The command below gets all evicted pods from the default namespace and deletes them:
kubectl get pods | grep Evicted | awk '{print$1}' | xargs -I {} kubectl delete pods/{}
Here is the 'official' guide for how to hard-code the threshold (if you do not want to see too many evicted pods): kube-controller-manager
But a known problem is getting access to the kube-controller-manager configuration in the first place...
When we have too many evicted pods in a cluster, this can lead to network load because each pod, even though it is evicted, is still connected to the network; in a cloud Kubernetes cluster it will also have an IP address blocked, which can lead to IP address exhaustion if you have a fixed pool of IP addresses for your cluster.
Also, when we have too many pods in Evicted status, it becomes difficult to monitor the pods by running kubectl get pod, as you will see too many evicted pods, which can be a bit confusing at times.
To delete an evicted pod, run the following command:
kubectl delete pod <podname> -n <namespace>
If you have many evicted pods:
kubectl get pod -n <namespace> | grep Evicted | awk '{print $1}' | xargs kubectl delete pod -n <namespace>

Command to delete all pods in all kubernetes namespaces

Upon looking at the docs, there is an API call to delete a single pod, but is there a way to delete all pods in all namespaces?
There is no command to do exactly what you asked.
Here are some close matches.
Be careful before running any of these commands. Make sure you are connected to the right cluster if you use multiple clusters. Consider running kubectl config view first.
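A quick way to confirm which cluster and default namespace your kubectl is pointed at before any bulk delete (these are standard kubectl subcommands):
kubectl config current-context
kubectl config view --minify -o jsonpath='{..namespace}'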
You can delete all the pods in a single namespace with this command:
kubectl delete --all pods --namespace=foo
You can also delete all deployments in a namespace, which will delete all pods attached to the deployments in that namespace:
kubectl delete --all deployments --namespace=foo
You can delete all namespaces and every object in every namespace (but not un-namespaced objects, like nodes and some events) with this command:
kubectl delete --all namespaces
However, the latter command is probably not something you want to do, since it will delete things in the kube-system namespace, which will make your cluster not usable.
This command will delete all the namespaces except kube-system, which might be useful:
for each in $(kubectl get ns -o jsonpath="{.items[*].metadata.name}" | tr ' ' '\n' | grep -v kube-system);
do
kubectl delete ns $each
done
kubectl delete daemonsets,replicasets,services,deployments,pods,rc,ingress --all --all-namespaces
to get rid of them pesky replication controllers too.
You can simply run
kubectl delete all --all --all-namespaces
The first all means the common resource kinds (pods, replicasets, deployments, ...):
kubectl get all == kubectl get pods,rs,deployments, ...
The second --all means to select all resources of the selected kinds.
Note that all does not include:
non-namespaced resources (e.g., clusterrolebindings, clusterroles, ...)
configmaps
rolebindings
roles
secrets
...
In order to clean up perfectly,
you could use other tools (e.g., Helm, Kustomize, ...)
you could use a namespace.
you could use labels when you create resources.
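If you want to see exactly which resource kinds the all shorthand expands to on your cluster, you can ask the API server; the --categories flag should be available on reasonably recent kubectl versions:
kubectl api-resources --categories=all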
You just need sed to do this:
kubectl get pods --no-headers=true --all-namespaces |sed -r 's/(\S+)\s+(\S+).*/kubectl --namespace \1 delete pod \2/e'
Explains:
use command kubectl get pods --all-namespaces to get the list of all pods in all namespaces.
use --no-headers=true option to hide the headers.
use s command of sed to fetch the first two words, which represent namespace and pod's name respectively, then assemble the delete command using them.
the final delete command is just like:
kubectl --namespace kube-system delete pod heapster-eq3yw.
use the e modifier of s command to execute the command assembled above, which will do the actual delete works.
To avoid deleting pods in the kube-system namespace, just add grep -v kube-system before the sed command.
I tried commands from the answers listed here but the pods were stuck in the terminating state.
I found the command below to delete all pods from a particular namespace when they are stuck in terminating state, or when you are otherwise unable to delete them; it deletes the pods forcefully:
kubectl delete pods --all --grace-period=0 --force --namespace namespace
Hope it might be useful to someone.
K8s works fundamentally on namespaces. If you'd like to release all the resources related to a specific namespace, you can use the following:
kubectl delete namespace k8sdemo-app
Steps to delete a PV:
Delete all deployments and pods, or other resources, related to that PV:
kubectl delete --all deployment -n <namespace>
kubectl delete --all pod -n <namespace>
Edit the PV (PVs are cluster-scoped, so the -n flag is not needed):
kubectl edit pv <pv_name>
Remove the kubernetes.io/pv-protection finalizer.
Delete the PV:
kubectl delete pv <pv_name>
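If the PV then hangs in Terminating because of that finalizer, a non-interactive alternative to kubectl edit is to patch the finalizers away (use with care, since this deliberately bypasses the protection):
kubectl patch pv <pv_name> -p '{"metadata":{"finalizers":null}}'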
Delete all pods only, in all namespaces (the deployments will restart them):
kubectl get pod -A -o yaml | kubectl delete -f -
You can use kubectl delete pods -l dev-lead!=carisa or whatever label you have.
Here is a one-liner that can be extended with grep to filter by name.
kubectl get pods -o jsonpath="{.items[*].metadata.name}" | \
tr " " "\n" | \
xargs -i -P 0 kubectl delete pods {}
One line command to delete all pods in all namespaces.
kubectl get ns -o=custom-columns=Namespace:.metadata.name --no-headers | xargs -n1 kubectl delete pods --all -n
kubectl delete po,ing,svc,pv,pvc,sc,ep,rc,deploy,replicaset,daemonset --all -A
If your pods keep getting recreated, remember to delete all deployments first:
kubectl delete -n <NAMESPACE> deployment <DEPLOYMENT>
Just replace NAMESPACE and DEPLOYMENT with the corresponding values; you can get information on all deployments with the following command:
kubectl get deployments --all-namespaces
The kubectl bulk plugin (bulk-action on krew) may be useful for you; it gives you bulk operations on selected resources. This is the command for deleting pods:
kubectl bulk pods -n namespace delete
You can check the details in its documentation.
I created a Python script to delete everything in a namespace:
delall.py
import json, sys, os

obj = json.load(sys.stdin)
for item in obj["items"]:
    os.system("kubectl delete " + item["kind"] + "/" + item["metadata"]["name"] + " -n yournamespace")
and then
kubectl get all -n kong -o json | python delall.py
If you have multiple pods which are crashing or in an error state and you want to delete them:
kubectl get pods -n <namespace> | grep -E 'Error|CrashLoopBackOff' | awk '{print $1}' | xargs kubectl delete pod -n <namespace>
It was hinted at above, but I just thought I would helpfully point out that the shortcut for --all-namespaces is -A (with a capital A). HTH somebody. I've opened a PR to have this helpful hint added to the official Kubernetes Cheat Sheet.
If you want to delete pods in all namespaces just to have them restarted, and you are aware that some of them will be recreated, I like the following for loop:
for i in $(kubectl get pods -A | awk '{print $1}' | uniq | grep -v NAMESPACE); do kubectl delete --all pods -n $i; done
If you have an HPA, scale it down first.