Is it possible to create a PVC for a PV - Kubernetes

I have a PV:
pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa 100Gi RWO Delete Bound pipeline-aws/mln13-0 performance 28h
Can I create a pvc to bind to this pv?
kubectl get pvc
doesn't show pvc mln13-0

Your PVC is already bound to the PV in the namespace pipeline-aws, so you can list it with:
kubectl get pvc -n pipeline-aws
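Assuming nothing else has changed, the output should look along these lines (values taken from the PV entry you posted):
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mln13-0   Bound    pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa   100Gi      RWO            performance    28h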

In your case the PersistentVolume was created automatically because it was dynamically provisioned: the PVC is named mln13-0, and the corresponding PV pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa was created and bound to it automatically.
Notice that the RECLAIM POLICY is Delete (the default for dynamically provisioned volumes); the other commonly used policy is Retain. With Delete, the PV is deleted automatically when the PVC is removed, and the data on the volume is lost as well.
A PV with the Retain policy, on the other hand, is not deleted when the PVC is removed; it moves to the Released status, so the data can be recovered by an administrator later.
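If you want to keep the data, you can switch the reclaim policy on the existing PV without recreating it; a minimal sketch using the PV name from your listing:
$ kubectl patch pv pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'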
With the following command you can list all PVCs in all namespaces along with their corresponding PVs:
$ kubectl get pvc --all-namespaces
Also interesting: a PV can be claimed from any project/namespace, but once it is bound to a project, it can only be accessed by containers from that same project/namespace. A PVC is project/namespace specific, which means that if you have multiple projects you need a separate PV and PVC for each project.
You can read more about binding in official K8S documentation.

Related

Problem : Delete PVC (Persistent Volume Claim) Kubernetes Status Terminating

Basically, I have a problem deleting my spoc-volume-spoc-ihm-kube-test PVC. I tried:
kubectl delete -f file.yml
kubectl delete PVC
but every time I get the same Terminating status. Also, when I delete the PVC, the console gets stuck in the deleting process.
Capacity: 10Gi
Storage Class: rook-cephfs
Access Modes: RWX
Here is the status in my terminal:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
spoc-volume-spoc-ihm-kube-test Terminating pvc-- 10Gi RWX rook-cephfs 3d19h
Thank You for your answers,
Stack Community :)
I fixed the problem by deleting the pods that depended on that PVC.
The TERMINATING status disappeared.
You need to first check whether the volume is attached to a resource using kubectl get volumeattachments. If your volume is in the list, it means a resource, i.e. a pod or deployment, is attached to that volume. The reason why it's not terminating is that the PVC and PV metadata finalizers are set to kubernetes.io/pvc-protection and kubernetes.io/pv-protection respectively.
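For example, to see attachments and to find which pods mount a given claim (a sketch; the namespace and PVC name are placeholders):
$ kubectl get volumeattachments
$ kubectl get pods -n <namespace> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.volumes[*].persistentVolumeClaim.claimName}{"\n"}{end}' | grep <pvc-name>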
Solution 1:
Delete the resources that are attached to or using the volume, i.e. pods, deployments, statefulsets etc. After you delete them, the stuck PV and PVC will terminate.
Solution 2:
If you are not sure where the volume is attached, you can delete/patch the PV and PVC metadata finalizers to null as follows:
a) Edit the PV and PVC and delete or set to null the finalizers in the metadata
kubectl edit pv {PV_NAME}
kubectl edit pvc {PVC_NAME}
b) Simply patch the PV and PVC as shown below:
kubectl patch pv {PV_NAME} -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc {PVC_NAME} -p '{"metadata":{"finalizers":null}}'
Hope it helps.

Does K8S Persistent Volume change work with the --record flag?

I have a persistent volume (PV) and persistent volume claim (PVC) which got bound as well. Initially, the storage capacity was 2Gi for the PV and the requested storage from PVC was 1Gi.
I then edited the existing bound PV and increased the storage to 5Gi, recording the change with the --record flag.
vagrant#mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 2Gi RWO Retain Bound test/my-pvc 106s
vagrant#mykubemaster:~/my-k8s$ kubectl edit pv my-pv --record
persistentvolume/my-pv edited
vagrant#mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWO Retain Bound test/my-pvc 2m37s
Now my question is whether there is any way I can confirm that the --record flag has actually recorded this storage change (the PV edit) in history.
With deployments, it is easy to check with the kubectl rollout history <deployment name> but I'm not sure how to check this with other objects like PV.
Please assist. thanks
As mentioned in the kubectl reference docs:
Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
You can run kubectl get pv my-pv -o yaml and you should see that kubernetes.io/change-cause was updated with the command that you ran. In your case, it will be kubectl edit pv my-pv --record.
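To check just the annotation, a quick sketch:
$ kubectl get pv my-pv -o yaml | grep change-cause
kubernetes.io/change-cause: kubectl edit pv my-pv --record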
The rollout command that you mentioned (including rollout history) works only with the following resources:
deployments
daemonsets
statefulsets

State of PV/PVC after Pod is Deleted in Kubernetes

I have a Kubernetes cluster with some pods deployed (DB, Frontend, Redis). A part that I can't fully grasp is what happens to the PVC after the pod is deleted.
For example, if I delete POD_A which is bound to CLAIM_A, I know that CLAIM_A is not deleted automatically. If I then recreate the pod, it is attached back to the same PVC, but all the data is missing.
Can anyone explain what happens? I've looked at the official documentation but it isn't making sense at the moment.
Any help is appreciated.
PVCs have a lifetime independent of pods.
If the PV still exists, it may be because it has its ReclaimPolicy set to Retain, in which case it won't be deleted even if the PVC is gone.
PersistentVolumes can have various reclaim policies, including “Retain”, “Recycle”, and “Delete”. For dynamically provisioned PersistentVolumes, the default reclaim policy is “Delete”. This means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. This automatic behavior might be inappropriate if the volume contains precious data.
Notice that the RECLAIM POLICY is Delete (the default value), which is one of the two reclaim policies in common use; the other is Retain. (A third policy, Recycle, has been deprecated.) In the case of Delete, the PV is deleted automatically when the PVC is removed, and the data on the volume is lost as well.
In that case, it is more appropriate to use the “Retain” policy. With the “Retain” policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted. Instead, it is moved to the Released phase, where all of its data can be manually recovered.
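You can check which policy a given PV carries (the PV name is a placeholder):
$ kubectl get pv <pv-name> -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'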
This may also happen when the persistent volume is protected. You should be able to cross-verify this:
Command:
$ kubectl describe pvc PVC_NAME | grep Finalizers
Output:
Finalizers: [kubernetes.io/pvc-protection]
You can fix this by setting finalizers to null using kubectl patch:
$ kubectl patch pvc PVC_NAME -p '{"metadata":{"finalizers": []}}' --type=merge
EDIT:
A PersistentVolume can be mounted on a host in any way supported by the resource provider. Each PV gets its own set of access modes describing that specific PV’s capabilities.
The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
In the CLI, the access modes are abbreviated to:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
So if you recreated the pod and the scheduler put it on a different node, and your PV's access mode is set to ReadWriteOnce, it is normal that you cannot access your data.
Claims use the same conventions as volumes when requesting storage with specific access modes. My advice is to edit the PV access mode to ReadWriteMany:
$ kubectl edit pv your_pv
You should update the access mode in the PersistentVolume as shown below:
accessModes:
  - ReadWriteMany
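Equivalently, a non-interactive sketch (the PV name your_pv is a placeholder):
$ kubectl patch pv your_pv -p '{"spec":{"accessModes":["ReadWriteMany"]}}'
Note that the underlying storage must actually support multi-node access for ReadWriteMany to work.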

How to delete persistent volumes in Kubernetes

I am trying to delete persistent volumes on a Kubernetes cluster. I ran the following command:
kubectl delete pv pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2 pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2 pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2
However it showed:
persistentvolume "pvc-08e65270-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08e87826-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ea5f97-b7ce-11e9-ba0b-0a1e280502e2" deleted
persistentvolume "pvc-08ec1cac-b7ce-11e9-ba0b-0a1e280502e2" deleted
But the command did not exit, so I pressed CTRL+C to force-exit it. After a few minutes, I ran:
kubectl get pv
And the status is Terminating, but the volumes don't appear to be deleting.
How can I delete these persistent volumes?
It is not recommended to delete a PV yourself; that should be handled by the cloud provisioner. If you need to remove a PV, just delete the pod bound to the claim and then the PVC. After that, the cloud provisioner should remove the PV as well.
kubectl delete pvc --all
It can sometimes take a while, so be patient.
Delete all the pods using the PVC you want to delete, then delete the PVC (PersistentVolumeClaim) and PV (PersistentVolume) in sequence.
Something like below (in sequence):
kubectl delete pod --all / pod-name
kubectl delete pvc --all / pvc-name
kubectl delete pv --all / pv-name
The kubectl commands are mentioned by other answers in this thread; the same should work:
kubectl delete sts sts-name
kubectl delete pvc pvc-name
kubectl delete pv pv-name
Some more useful info
If you see something stuck in the terminating state, it's because of guardrails set in place by k8s. These are referred to as 'finalizers'.
If your PV is stuck in the terminating state after deletion, it's likely because you deleted the PV before deleting the PVC.
If your PVC is stuck in the terminating state after deletion, it's likely because your pods are still running (simply delete the pods/statefulset in such cases).
If you wish to delete a resource stuck in the terminating state, use the commands below to bypass the PVC and PV protection finalizers:
kubectl patch pvc pvc_name -p '{"metadata":{"finalizers":null}}'
kubectl patch pv pv_name -p '{"metadata":{"finalizers":null}}'
Here is the documentation on PVC retention policy.
Here is the documentation on PV reclaim policy.
PVs are cluster resources provisioned by an administrator, whereas PVCs are a user's request for storage and resources. I guess you still have the corresponding PVC deployed.
Delete the deployment. E.g.:
kubectl delete deployment mongo-db
List the Persistent Volume Claim. E.g.:
kubectl get pvc
Delete the corresponding PVC. E.g.:
kubectl delete pvc mongo-db

How to rename a persistent volume claim?

Is it possible to rename a PVC? I can't seem to find any evidence that it is possible.
I'm trying to mitigate a "No space left on device" issue I just stumbled upon. Essentially my plan requires me to resize the volume on which my service persists its data.
Unfortunately I'm still on Kubernetes 1.8.6 on GKE. It does not have the PersistentVolumeClaimResize admission plugin enabled:
1.9.1: config-default.sh#L254#1.9.1
1.8.6: config-default.sh#L254#1.8.6
Therefore I have to try and save the data manually. I made the following plan:
create a new, bigger PVC,
create a temp container with the "victim" PVC and the new, bigger PVC attached,
copy the data,
drop the "victim" PVC,
rename the new, bigger PVC to take the place of the "victim".
The PVC in question is attached to a StatefulSet, so the old and new names must match (as the StatefulSet expects its volumes to follow a naming convention).
But I don't understand how to rename persistent volume claims.
The answer to your question is no. There is no way to change the metadata name of an object in Kubernetes.
But, there is a way to fulfill your requirement.
You want your new, bigger PersistentVolume to be claimed by the old PersistentVolumeClaim.
Let's say the old PVC is named victim and the new PVC is named bigger. You want the PV created for bigger to be claimed by the victim PVC, because your application is already using the victim PVC.
Follow these steps to do the hack.
Step 1: Delete your old PVC victim.
Step 2: Make PV of bigger Available.
$ kubectl get pvc bigger
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
bigger Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 30s
Edit PV pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 and set persistentVolumeReclaimPolicy to Retain, so that deleting the PVC will not delete the PV.
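Equivalently, as a non-interactive sketch:
$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'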
Now, delete PVC bigger.
$ kubectl delete pvc bigger
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Released default/bigger standard 3m
See the status, PV is Released.
Now, make this PV available to be claimed by another PVC, our victim.
Edit the PV again to remove the claimRef:
$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
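If you prefer patching over interactive editing, clearing the claimRef can be done like this (a sketch):
$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 -p '{"spec":{"claimRef":null}}'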
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Available standard 6m
Now the status of PV is Available.
Step 3: Claim the bigger PV with the victim PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: victim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
  resources:
    requests:
      storage: 10Gi
Use volumeName pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
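Assuming the manifest above is saved as victim-pvc.yaml (a hypothetical filename), recreate the claim with:
$ kubectl apply -f victim-pvc.yaml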
kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/victim Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 9s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Bound default/victim standard 9m
Finally: set persistentVolumeReclaimPolicy back to Delete if that is the behavior you want.
This is how your PVC victim ends up with the bigger PV.
With Kubernetes 1.11+ you can perform on-demand resizing by simply modifying the PVC's storage request (https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/).
GKE supports this (I've used it several times myself) and it's pretty straightforward and without the drama.
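As a sketch (the PVC name and new size are placeholders), the resize is just a patch of the claim's storage request, provided the StorageClass has allowVolumeExpansion: true:
$ kubectl patch pvc <pvc-name> -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'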
I cannot validate this, however I am fairly certain that for GKE you can go to Disks in the Google Cloud Console, find the one that the PV uses, and resize it there. Once you've done that, you should be able to log into the node to which it's attached and run resize2fs on the device. This is dirty, but I'm fairly certain it has worked for me once in the past.
You don't have to unmount or copy to do this, which can save you if the disk is live or large.