Problem: Delete PVC (Persistent Volume Claim) Kubernetes Status Terminating

Basically, I have a problem deleting my spoc-volume-spoc-ihm-kube-test PVC. I tried:
kubectl delete -f file.yml
kubectl delete PVC
but I get the same Terminating status every time. Also, when I delete the PVC, the console gets stuck in the deleting process.
Capacity: 10Gi
Storage Class: rook-cephfs
Access Modes: RWX
Here is the status in my terminal:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
spoc-volume-spoc-ihm-kube-test Terminating pvc-- 10Gi RWX rook-cephfs 3d19h
Thank You for your answers,
Stack Community :)

I fixed the problem by deleting the pods that depended on that PVC.
The Terminating status then disappeared.
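For anyone who needs to find those pods first, describing the PVC usually lists them (the exact field name, e.g. Used By, can vary by kubectl version); a rough sketch:
kubectl describe pvc spoc-volume-spoc-ihm-kube-test | grep -i -A 3 "used by"
kubectl delete pod <pod-name>   # delete each pod listed there; the PVC then finishes terminating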

You need to first check whether the volume is attached to a resource using kubectl get volumeattachment. If your volume is in the list, it means a resource (i.e. a pod, deployment, etc.) is attached to that volume. The reason it is not terminating is that the PVC and PV finalizers are still set (kubernetes.io/pvc-protection on the PVC and kubernetes.io/pv-protection on the PV).
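For example, a quick check would be (here <pv-name> is a placeholder for your PV's name, and output columns may differ between versions):
kubectl get volumeattachments | grep <pv-name>   # if the PV appears here, a node still has it attached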
Solution 1:
Delete the resources that are attached to/using the volume (pods, deployments, statefulsets, etc.). After you delete them, the stuck PV and PVC will terminate.
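For example, assuming a hypothetical deployment named my-app is the consumer of the volume:
kubectl delete deployment my-app                      # remove the workload holding the claim
kubectl delete pvc spoc-volume-spoc-ihm-kube-test     # now completes instead of hanging in Terminating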
Solution 2:
If you are not sure where the volume is attached, you can delete/patch the PV and PVC metadata finalizers to null as follows:
a) Edit the PV and PVC and delete or set to null the finalizers in the metadata
kubectl edit pv {PV_NAME}
kubectl edit pvc {PVC_NAME}
b) Simply patch the PV and PVC as shown below:
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
Hope it helps.

Related

Does a K8S Persistent Volume change work with the --record flag

I have a persistent volume (PV) and persistent volume claim (PVC) which got bound as well. Initially, the storage capacity was 2Gi for the PV and the requested storage from PVC was 1Gi.
I then edit the existing bound PV and increased the storage to 5Gi with the record flag as --record.
vagrant#mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 2Gi RWO Retain Bound test/my-pvc 106s
vagrant#mykubemaster:~/my-k8s$ kubectl edit pv my-pv --record
persistentvolume/my-pv edited
vagrant#mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWO Retain Bound test/my-pvc 2m37s
Now my question is whether there is any way I can confirm that the --record flag has actually recorded this storage change (the PV edit) in history.
With deployments, it is easy to check with the kubectl rollout history <deployment name> but I'm not sure how to check this with other objects like PV.
Please assist. Thanks.
As mentioned in the kubectl reference docs:
Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
You can run kubectl get pv my-pv -o yaml and you should see that kubernetes.io/change-cause was updated with the command that you ran. In your case, it will be kubectl edit pv my-pv --record.
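If you only want to see that one annotation rather than the full object, you can filter the output, for example:
kubectl get pv my-pv -o yaml | grep change-cause   # prints the kubernetes.io/change-cause annotation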
The rollout command that you mentioned (including rollout history) works only with the following resources:
deployments
daemonsets
statefulsets

How can I delete a Kubernetes PVC

I tried to delete a PVC but I can't:
kubectl get --all-namespaces pvc
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-logging es-local-pvc1 Terminating es-local-pv1 450Gi RWO 21d
kubectl delete pvc es-local-pvc1
If you see this problem, it is most likely that the PVC is protected from deletion:
finalizers:
- kubernetes.io/pvc-protection
You need to edit the PVC and set the finalizers under metadata to null, using the patch below.
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
Then you should be able to delete the PVC.
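As a quick sanity check (note the -n test-logging, since this PVC lives in that namespace), you could verify the finalizer before patching, for example:
kubectl get pvc es-local-pvc1 -n test-logging -o jsonpath='{.metadata.finalizers}'    # shows kubernetes.io/pvc-protection if it is still set
kubectl patch pvc es-local-pvc1 -n test-logging -p '{"metadata":{"finalizers":null}}'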
First of all you should try
kubectl delete pvc es-local-pvc1 -n test-logging
If it doesn't help, then I absolutely agree with the solution provided by @PEkambaram.
Sometimes you can resolve this issue only by patching pv and pvc finalizers.
You can list finalizers by
kubectl describe pvc PVC_NAME | grep Finalizers
and change by
kubectl patch pvc <pvc-name> -p '{"metadata":{"finalizers":null}}'
Btw, the same could happen with a PV also, and you can do the same:
kubectl patch pv PV-NAME -p '{"metadata":{"finalizers":null}}'
The GitHub post "PV is stuck at terminating after PVC is deleted" can also help in situations where you need to patch the pod as well:
kubectl patch pvc db-pv-claim -p '{"metadata":{"finalizers":null}}'
kubectl patch pod db-74755f6698-8td72 -p '{"metadata":{"finalizers":null}}'

Is it possible to create a PVC for a PV

I have a PV:
pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa 100Gi RWO Delete Bound pipeline-aws/mln13-0 performance 28h
Can I create a pvc to bind to this pv?
kubectl get pvc
doesn't show pvc mln13-0
Your PVC is bound to the PV in the namespace pipeline-aws, so you can show your PVC with the command:
kubectl get pvc -n pipeline-aws
In your case the Persistent Volume is automatically created when it is dynamically provisioned. In your example, the PVC is defined as mln13-0, and a corresponding PV pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa is created and associated with the PVC automatically.
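If you want to confirm the binding yourself, the PVC's spec.volumeName should point at that PV, for example:
kubectl get pvc mln13-0 -n pipeline-aws -o jsonpath='{.spec.volumeName}'   # should print pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa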
Notice that the RECLAIM POLICY is Delete (the default value), which is one of the two possible reclaim policies; the other one is Retain. In the case of Delete, the PV is deleted automatically when the PVC is removed, and the data on the volume is lost as well.
On the other hand, a PV with the Retain policy is not deleted when the PVC is removed; it is moved to the Released status instead, so that the data can be recovered by administrators later.
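If you prefer to keep the data when the claim goes away, you can switch the PV to Retain by patching its reclaim policy, for example:
kubectl patch pv pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'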
With the following command you can list all of the PVCs in all namespaces along with the corresponding PVs:
$ kubectl get pvc --all-namespaces
Also, interestingly, a PV can be accessed by any project/namespace; however, once it is bound to a project, it can then only be accessed by containers from that same project/namespace. A PVC is project/namespace specific, which means that if you have multiple projects you would need a new PV and PVC for each project.
You can read more about binding in official K8S documentation.

How to delete persistent volumes in Kubernetes

I am trying to delete persistent volumes on a Kubernetes cluster. I ran the following command:
kubectl delete pv pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2 pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2
However it showed:
persistentvolume "pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2" deleted
But the command did not exit, so I pressed CONTROL+C to force-exit it. After a few minutes, I ran:
kubectl get pv
And the status is Terminating, but the volumes don't appear to be deleting.
How can I delete these persistent volumes?
It is not recommended to delete a PV manually; that should be handled by the cloud provisioner. If you need to remove a PV, just delete the pod bound to the claim and then the PVC. After that, the cloud provisioner should remove the PV as well.
kubectl delete pvc --all
It can sometimes take a while, so be patient.
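Instead of re-running kubectl get, you can watch the claims disappear, for example:
kubectl get pvc --all-namespaces -w   # --watch streams status changes until you press Ctrl+C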
Delete all the pods that are using the PVC you want to delete, then delete the PVC (PersistentVolumeClaim) and PV (PersistentVolume) in sequence.
Something like below (in sequence):
kubectl delete pod --all / pod-name
kubectl delete pvc --all / pvc-name
kubectl delete pv --all / pv-name
I have created the diagram below to help explain this better.
The kubectl commands are mentioned in other answers in this thread; the same should work.
kubectl delete sts sts-name
kubectl delete pvc pvc-name
kubectl delete pv pv-name
Some more useful info
If you see something stuck in the Terminating state, it's because of guardrails set in place by k8s. These are referred to as 'finalizers'.
If your PV is stuck in the Terminating state after deletion, it is likely because you deleted the PV before deleting the PVC.
If your PVC is stuck in the Terminating state after deletion, it is likely because your pods are still running (simply delete the pods/statefulset in such cases).
If you wish to delete a resource stuck in the Terminating state, use the commands below to bypass the PVC/PV protection finalizers.
kubectl patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'
Here is the documentation on PVC retention policy.
Here is the documentation on PV reclaim policy.
PVs are cluster resources provisioned by an administrator, whereas PVCs are a user's request for storage. I guess the corresponding PVC is still deployed.
Delete the deployment. E.g.:
kubectl delete deployment mongo-db
List the Persistent Volume Claim. E.g.:
kubectl get pvc
Delete the corresponding PVC. E.g.:
kubectl delete pvc mongo-db

How are pods able to mount the same PVC with ReadWriteOnce access mode when the storage class is glusterfs, but not with the default cinder storage?

I want to understand how a PVC claimed by pod1 with accessMode: ReadWriteOnce can be shared with pod2 when the glusterfs storage class is used. Shouldn't it fail, since I would need to specify the accessMode as ReadWriteMany?
-> Created a storage class as glusterfs with type: distributed
-> A PV was created on top of the storage class above, and the PVC was created with AccessMode: ReadWriteOnce
-> The first pod attached the PVC created above
-> The second pod tried to attach the same PVC; it works and is able to access the files which the first pod created
I tried another flow without a storage class, directly creating the PVC from the cinder storage, and the error below shows up:
Warning FailedAttachVolume 28s attachdetach-controller Multi-Attach error for volume "pvc-644f3e7e-8e65-11e9-a43e-fa163e933531" Volume is already used by pod(s) pod1
I'm trying to understand why this does not happen when the storage class is created and assigned to the PV.
How am I able to access the files from the second pod when the access mode is ReadWriteOnce?
According to the k8s documentation, if multiple pods on different nodes need to access the volume, it should be ReadWriteMany.
If RWO access mode works then is it safe for both the pods to read and write? Will there be any issues?
What is the role of RWX if RWO works just fine in this case?
Would be great if some experts can give an insight into this. Thanks.
Volumes are RWO per node, not per pod. Volumes are mounted to the node and then bind-mounted into containers. As long as the pods are scheduled to the same node, an RWO volume can be bind-mounted to both containers at the same time.
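A quick way to confirm this in the scenario above is to check where the two pods were scheduled; if they share a node, an RWO volume can legitimately be mounted by both, for example:
kubectl get pods -o wide   # the NODE column shows whether pod1 and pod2 landed on the same node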