How can I delete a Kubernetes PVC?

I am trying to delete a PVC but I can't:
kubectl get --all-namespaces pvc
NAMESPACE      NAME            STATUS        VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-logging   es-local-pvc1   Terminating   es-local-pv1   450Gi      RWO                           21d

kubectl delete pvc es-local-pvc1
If you see this behaviour, it is most likely that the PVC is protected from deletion by a finalizer:
finalizers:
- kubernetes.io/pvc-protection
You need to edit the PVC and set the finalizers under metadata to null, using the patch below:
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
Then you should be able to delete the PVC.
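If you want to double-check that the finalizer is what is blocking the deletion before patching, you can print it directly; a quick check, assuming the PVC from the question in the test-logging namespace:
kubectl get pvc es-local-pvc1 -n test-logging -o jsonpath='{.metadata.finalizers}'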

First of all, you should try deleting it in the correct namespace:
kubectl delete pvc es-local-pvc1 -n test-logging
If that doesn't help, then I absolutely agree with the solution provided by @PEkambaram.
Sometimes you can only resolve this issue by patching the PV and PVC finalizers.
You can list finalizers by
kubectl describe pvc PVC_NAME | grep Finalizers
and clear them with
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
By the way, the same can happen with a PV too, and you can do the same:
kubectl patch pv PV-NAME -p '{"metadata":{"finalizers":null}}'
The GitHub post "PV is stuck at terminating after PVC is deleted" can also help in situations when you need to patch the pod as well:
kubectl patch pvc db-pv-claim -p '{"metadata":{"finalizers":null}}'
kubectl patch pod db-74755f6698-8td72 -p '{"metadata":{"finalizers":null}}'
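If several PVCs are stuck at once, a small loop over every Terminating claim in a namespace saves some typing. This is only a rough sketch: the namespace is a placeholder, and it assumes the default kubectl output where STATUS is the second column.
NS=<namespace>
for pvc in $(kubectl get pvc -n "$NS" --no-headers | awk '$2 == "Terminating" {print $1}'); do
  kubectl patch pvc "$pvc" -n "$NS" -p '{"metadata":{"finalizers":null}}'
done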

Related

Problem: Delete PVC (Persistent Volume Claim) stuck in Terminating status in Kubernetes

Basically, I have a problem deleting my spoc-volume-spoc-ihm-kube-test PVC. I tried with:
kubectl delete -f file.yml
kubectl delete PVC
but every time I get the same Terminating status. Also, when I delete the PVC, the console hangs in the deleting process.
Capacity: 10Gi
Storage Class: rook-cephfs
Access Modes: RWX
Here is the status in my terminal:
kubectl get pvc
NAME                             STATUS        VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
spoc-volume-spoc-ihm-kube-test   Terminating   pvc--    10Gi       RWX            rook-cephfs    3d19h
Thank you for your answers,
Stack Community :)
I fixed the problem by deleting the pods that depended on that PVC.
The Terminating status then disappeared.
You need to first check whether the volume is attached to a resource using kubectl get volumeattachments. If your volume is in the list, it means you have a resource, i.e. a pod or deployment, attached to that volume. The reason it is not terminating is that the PVC and PV metadata finalizers are set to kubernetes.io/pvc-protection and kubernetes.io/pv-protection respectively.
Solution 1:
Delete the resources that are attached to or using the volume, i.e. pods, deployments, statefulsets, etc. After you delete them, the stuck PV and PVC will terminate.
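To find out which pods actually mount the claim, describing the PVC usually lists them under Mounted By (assuming a reasonably recent kubectl; the namespace is a placeholder):
kubectl -n <namespace> describe pvc spoc-volume-spoc-ihm-kube-test | grep -A 3 "Mounted By"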
Solution 2:
If you are not sure where the volume is attached, you can delete/patch the PV and PVC metadata finalizers to null as follows:
a) Edit the PV and PVC and delete or set to null the finalizers in the metadata
kubectl edit pv {PV_NAME}
kubectl edit pvc {PVC_NAME}
b) Simply patch the PV and PVC as shown below:
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
Hope it helps.

Pod is not found when trying to delete it, but it can be patched

I have a pod that I can see on GKE. But if I try to delete it, I get the error:
kubectl delete pod my-pod --namespace=kube-system --context=cluster-1
Error from server (NotFound): pods "my-pod" not found
However, if I try to patch it, the operation completes successfully:
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
deployment.apps/my-pod patched
Same namespace, same context, same pod. Why does kubectl fail to delete the pod?
kubectl patch deployment my-pod --namespace kube-system -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secrets-update\":\"`date +'%s'`\"}}}}}" --context=cluster-1
You are patching the deployment here, not the pod.
Additionally, your pod will not be called "my-pod"; it will be named after your deployment plus a hash (a random set of letters and numbers), something like "my-pod-ace3g".
To see the pods in the namespace use
kubectl get pods -n {namespace}
Since you've put the deployment in the "kube-system" namespace, you would use
kubectl get pods -n kube-system
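Once you can see the generated name, delete that pod instead. The name below is just the illustrative one from the previous paragraph, and note that the Deployment will immediately recreate a replacement pod:
kubectl delete pod my-pod-ace3g --namespace=kube-system --context=cluster-1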
Side note: generally, don't use the kube-system namespace unless your deployment is related to cluster functionality. There's a namespace called default that you can use to test things.

Move or change a volume namespace

We are reorganising our namespaces in Kubernetes. We want to move our Persistent Volume Claims, created by a StorageClass, from one namespace to another.
(Our backup tool doesn't help.)
Option 1: use a backup tool
The easiest and safest option to migrate a PVC/PV to a new namespace is to use a backup tool (like Velero).
Option 2: no backup tool (by hand)
This is undocumented.
In this example, we use the VMware storage provider, but it should work with any StorageClass.
Prepare
Make a * Backup * Backup * Backup * Backup * Backup * !!!
Well, if you do have a backup tool for Kubernetes (like Velero) you can restore directly into the target namespace; otherwise use kubectl cp as explained in "How to copy files from kubernetes Pods to local system".
Let's set some environment variables and back up the existing PV and PVC resources:
NAMESPACE1=XXX
NAMESPACE2=XXX
PVC=mypvc
kubectl get pvc -n $NAMESPACE1 $PVC -o yaml | tee /tmp/pvc.yaml
PV=pvc-XXXXXXXXXXXXX-XXXXXXXXXXXX
kubectl get pv $PV -o yaml | tee /tmp/pv.yaml
Change ReclaimPolicy for PV
If your persistent volume (or storage provider) has persistentVolumeReclaimPolicy=Delete, make sure to change it to "Retain" to avoid data loss when removing the PVC below.
Run this:
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Then check:
kubectl describe pv "$PV" | grep -e Reclaim
Remove the PVC
Manually delete the Persistent Volume Claim (you have a copy, right?):
kubectl delete pvc -n "$NAMESPACE1" "$PVC"
Modify the Persistent Volume (PV)
A PV is attached to a namespace when it is first used by a PVC. Furthermore, the PV becomes "attached" to the PVC (by its uid:, not by its name).
Change the namespace of the PV. Temporarily use the PVC "name" to "lock" the PV for that PVC (rather than the PVC uid).
kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"namespace":"'$NAMESPACE2'","name":"'$PVC'","uid":null}}}'
Check what we have now:
kubectl get pv "$PV" -o yaml | grep -e Reclaim -e namespace -e uid: -e name: -e claimRef | grep -v " f:"
Create the new PVC
Create a PVC in the new namespace. Make sure to explicitly choose the PV to use (don't use StorageClass to provision the volume). Typically, you can copy the original PVC YAML, but drop namespace:, selfLink:, uid: in the section metadata:.
This command should work (it re-uses the previous PVC), but you can use your own kubectl apply command.
grep -v -e "uid:" -e "resourceVersion:" -e "namespace:" -e "selfLink:" /tmp/pvc.yaml | kubectl -n "$NAMESPACE2" apply -f -
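If you would rather write the claim from scratch instead of re-using the saved YAML, a minimal sketch of a PVC bound to a specific PV could look like the following; the access mode and size are placeholder assumptions, and storageClassName is left empty on purpose so that no new volume gets provisioned:
cat <<EOF | kubectl -n "$NAMESPACE2" apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: $PVC
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ""
  volumeName: $PV
EOF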
Assign PVC to PV
At this point, the PV is bound to the former PVC's name (but it may not work, and it is not the standard configuration). Running kubectl describe -n "$NAMESPACE2" pvc "$PVC" will complain with Status: Lost and/or Warning ClaimMisbound. So let's fix the problem:
Retrieve the new PVC's uid:
PVCUID=$( kubectl get -n "$NAMESPACE2" pvc "$PVC" -o custom-columns=UID:.metadata.uid --no-headers )
Then update the PV accordingly:
kubectl patch pv "$PV" -p '{"spec":{"claimRef":{"uid":"'$PVCUID'","name":null}}}'
After a few seconds the PV should be Status: Bound.
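If you prefer to watch the rebinding happen instead of re-running describe, a watch works too (purely a convenience):
kubectl get pv "$PV" -w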
Restore PV ReclaimPolicy=Delete
(This step is optional. It is needed to ensure the PV is deleted when the user deletes the PVC.)
Once the PV is in the Bound state again, you can restore the reclaim policy if you want to preserve the original behaviour (i.e. removing the PV when the PVC is removed):
kubectl patch pv "$PV" -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
# Check:
kubectl get pv $PV -o yaml | grep -e Reclaim -e namespace
Voilà
I migrated a PV that had storageClassName: nfs-client to another namespace in a different way.
The steps performed:
Change the PV reclaim policy to Retain (the default for dynamically provisioned volumes is Delete); this means that once the PVC is removed, the PV resource will not be removed:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
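You can verify that the change took effect before going any further (pv-name is a placeholder):
kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'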
Get the NFS path of the PV and copy its content to another directory, preserving its owner, permissions and everything else (the -avR flags of the cp command are really important for this task):
dirpath=$(kubectl get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR ${dirpath} /tmp/pvc_backup
Once it has been copied, you can proceed to remove the previous pvc and pv:
kubectl delete pvc <pvc-name>
kubectl delete pv <pv-name>
Create the new PVC resource with your own YAML file path/pvc.yaml (for this example, this was my PVC YAML file):
kubectl -n <target-namespace> create -f path/pvc.yaml
Once it has been created in the right namespace, proceed to copy the content of the backup directory into the newly created PVC (remember that all you need to do is create the NFS PVC, and the PV is created automatically):
nfs_pvc_dir=$(kubectl -n <target-namespace> get pv <pv-name> -o jsonpath="{.spec.nfs.path}")
\cp -avR /tmp/pvc_backup/* ${nfs_pvc_dir}/
Finally, proceed to bind your new PVC to a deployment or pod resource.
This was performed with MicroK8s on a public VPS with NFS storage:
https://microk8s.io/docs
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
Greetings :)

How to delete persistent volumes in Kubernetes

I am trying to delete persistent volumes on a Kubernetes cluster. I ran the following command:
kubectl delete pv pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2 pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2
However it showed:
persistentvolume "pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2" deleted
But the command did not exit, so I pressed CONTROL+C to force-exit it. After a few minutes, I ran:
kubectl get pv
And the status is Terminating, but the volumes don't appear to be deleting.
How can I delete these persistent volumes?
It is not recommended to delete PVs directly; that should be handled by the cloud provisioner. If you need to remove a PV, just delete the pod bound to the claim, and then the PVC. After that, the cloud provisioner should remove the PV as well.
kubectl delete pvc --all
It can sometimes take a while, so be patient.
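If you would rather delete a single claim and confirm that the provisioner cleans up the volume, something like this works; the names are placeholders, and -w simply watches until you interrupt it:
kubectl delete pvc <pvc-name> -n <namespace>
kubectl get pv -w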
Delete all the pods that are using the PVC you want to delete, then delete the PVC (PersistentVolumeClaim) and PV (PersistentVolume) in sequence.
Something like the below (in sequence):
kubectl delete pod --all    # or: kubectl delete pod <pod-name>
kubectl delete pvc --all    # or: kubectl delete pvc <pvc-name>
kubectl delete pv --all     # or: kubectl delete pv <pv-name>
I have created a diagram to help explain this better.
The kubectl commands are mentioned in other answers in this thread; the same should work.
kubectl delete sts sts-name
kubectl delete pvc pvc-name
kubectl delete pv pv-name
Some more useful info
If you see something stuck in the Terminating state, it's because of guardrails set in place by k8s. These are referred to as 'finalizers'.
If your PV is stuck in the Terminating state after deletion, it is likely because you deleted the PV before deleting the PVC.
If your PVC is stuck in the Terminating state after deletion, it is likely because your pods are still running (simply delete the pods/statefulset in such cases).
If you wish to delete a resource stuck in the Terminating state, use the commands below to bypass the PVC and PV protection finalizers.
kubectl patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'
Here is the documentation on PVC retention policy.
Here is the documentation on PV reclaim policy.
PVs are cluster resources provisioned by an administrator, whereas PVCs are a user's request for storage. I guess you still have the corresponding PVC deployed.
Delete the deployment. E.g.:
kubectl delete deployment mongo-db
List the Persistent Volume Claim. E.g.:
kubectl get pvc
Delete the corresponding PVC. E.g.:
kubectl delete pvc mongo-db
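Afterwards you can check what happened to the underlying volume; depending on its reclaim policy, the PV will either be removed or left in the Released state. A convenience view:
kubectl get pv -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,RECLAIM:.spec.persistentVolumeReclaimPolicy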

Unable to deploy MariaDB on Kubernetes using openstack-helm charts

I am trying to deploy OpenStack on Kubernetes using Helm charts. I am seeing the below error when trying to deploy MariaDB. mariadb-server-0 looks for a PVC which is in the Lost state. I tried creating the PersistentVolume and assigning it, but the pod still looks for a lost PVC, as shown in the error below.
2018-10-05T17:05:04.087573+00:00 node2: kubelet[9897]: E1005 17:05:04.087449 9897 desired_state_of_world_populator.go:273] Error processing volume "mysql-data" for pod "mariadb-server-0_openstack(c259471b-c8c0-11e8-9636-441ea14dfc98)": error processing PVC "openstack"/"mysql-data-mariadb-server-0": PVC openstack/mysql-data-mariadb-server-0 has non-bound phase ("Lost") or empty pvc.Spec.VolumeName ("pvc-74e81ef0-bb97-11e8-9636-441ea14dfc98")
Is there a way we can delete the old PVC entry from the cluster, so MariaDB doesn't look for it while deploying?
Thanks,
Ab
To delete a PVC, you can just use the typical kubectl commands.
See all the PVCs:
kubectl -n <namespace> get pvc
To delete PVCs:
kubectl -n <namespace> delete pvc <pvc-id-from-the-previous-command>
Similarly, I would check the PVs to see if there are any dangling ones.
See all the PVs (PVs are cluster-scoped, so no namespace flag is needed):
kubectl get pv
To delete PVs:
kubectl delete pv <pv-id-from-the-previous-command>
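For the StatefulSet in the question specifically, the claim name and namespace can be read from the error message, so a cleanup might look like the sketch below; the StatefulSet's volumeClaimTemplates will create a fresh PVC once the pod is recreated:
kubectl -n openstack delete pvc mysql-data-mariadb-server-0
kubectl -n openstack delete pod mariadb-server-0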