Some PVCs are not deleted after helm purge - kubernetes

In my StatefulSet I defined volumeClaimTemplates and added a StorageClass definition. After deployment the PVC, PV and SC are created, and the reclaim policy is Delete.
However, after performing helm delete <> --purge, all resources are deleted except the PVCs.
I use kubernetes.io/cinder for dynamic provisioning.
Below is the PVC:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-x-kafka-0 Bound pvc-db37bd17-fe35-11ea-8161-fa163efa0a08 16Gi RWO sc-name 7m
Below is the PV:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-db37bd17-fe35-11ea-8161-fa163efa0a08 16Gi RWO Delete Bound ns/data-x-kafka-0 sc-name 12m
Could you please point me in the right direction as to where I am mistaken?

According to https://github.com/helm/helm/issues/3313 this is working as intended: the PVCs were created by the StatefulSet, not by Helm itself, so Helm does not remove them on delete/purge.
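If you want them gone after the release is purged, you have to delete them yourself, for example (the app=kafka label is an assumption, check your chart's labels with --show-labels, and <namespace> is a placeholder):
$ kubectl get pvc -n <namespace> --show-labels
$ kubectl delete pvc -n <namespace> -l app=kafka
or simply by name:
$ kubectl delete pvc data-x-kafka-0 -n <namespace>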

Related

Does a K8S Persistent Volume change work with the --record flag?

I have a persistent volume (PV) and a persistent volume claim (PVC) which got bound as well. Initially the storage capacity was 2Gi for the PV and the requested storage from the PVC was 1Gi.
I then edited the existing bound PV and increased the storage to 5Gi, passing the record flag as --record.
vagrant@mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 2Gi RWO Retain Bound test/my-pvc 106s
vagrant@mykubemaster:~/my-k8s$ kubectl edit pv my-pv --record
persistentvolume/my-pv edited
vagrant@mykubemaster:~/my-k8s$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWO Retain Bound test/my-pvc 2m37s
Now my question is whether there is any way to confirm that this --record flag has actually recorded the storage change (the PV edit) in history.
With deployments it is easy to check with kubectl rollout history <deployment name>, but I'm not sure how to check this with other objects like PVs.
Please assist. Thanks.
As mentioned in the kubectl reference docs:
Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
You can run kubectl get pv my-pv -o yaml and you should see that kubernetes.io/change-cause was updated with the command that you ran. In your case, it will be kubectl edit pv my-pv --record.
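To pull out just that annotation, something like the following should print the recorded command (the dots in the annotation key must be escaped in jsonpath):
$ kubectl get pv my-pv -o jsonpath='{.metadata.annotations.kubernetes\.io/change-cause}'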
The rollout command that you mentioned (including rollout history) works only with the following resources:
deployments
daemonsets
statefulsets
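For those resources the history can be checked the same way as for deployments, e.g. for a hypothetical StatefulSet named web:
$ kubectl rollout history statefulset/web
There is no equivalent rollout history for PVs or PVCs, so the change-cause annotation is all you get there.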

Is it possible to create a PVC for a PV?

I have a PV:
pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa 100Gi RWO Delete Bound pipeline-aws/mln13-0 performance 28h
Can I create a PVC to bind to this PV?
kubectl get pvc
doesn't show the PVC mln13-0.
Your PVC is already bound to the PV in the namespace pipeline-aws, so you can show it with:
kubectl get pvc -n pipeline-aws
In your case the PersistentVolume is created automatically when it is dynamically provisioned. In this example the PVC is named mln13-0, and a corresponding PV pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa is created and associated with the PVC automatically.
Notice that the RECLAIM POLICY is Delete (the default), which is one of the two possible reclaim policies; the other is Retain. With Delete, the PV is deleted automatically when the PVC is removed, and the data on the volume is lost as well.
On the other hand, a PV with the Retain policy is not deleted when the PVC is removed; it moves to the Released status, so the data can be recovered by administrators later.
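If you want to keep the data even after the PVC is deleted, you can switch the bound PV to Retain first, for example (using the PV name from your output):
$ kubectl patch pv pvc-6b1a6054-c35e-11e9-afd7-0eeeeb629aaa -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'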
With the following command you can list all PVCs in all namespaces along with their corresponding PVs:
$ kubectl get pvc --all-namespaces
Also interesting: a PV can be accessed by any project/namespace, but once it is bound to a project it can then only be accessed by containers from that same project/namespace. A PVC is project/namespace specific, which means that if you have multiple projects you need a new PV and PVC for each project.
You can read more about binding in the official Kubernetes documentation.

How are pods able to mount the same PVC with ReadWriteOnce access mode when the StorageClass is glusterfs, but not with the default cinder storage?

I want to understand how pod1, which claimed a PVC with accessMode: ReadWriteOnce, is able to share it with pod2 when the glusterfs StorageClass is used. Shouldn't it fail, since I would need to specify the accessMode as ReadWriteMany?
-> Created a StorageClass for glusterfs with type: distributed
-> Created a PV on top of the StorageClass above, and a PVC with accessMode: ReadWriteOnce
-> The first pod attached the PVC created above
-> The second pod attached the same PVC; it works and is able to access the files the first pod created
I tried another flow without a StorageClass, creating the PVC directly from cinder storage, and the error below shows up:
Warning FailedAttachVolume 28s attachdetach-controller Multi-Attach error for volume "pvc-644f3e7e-8e65-11e9-a43e-fa163e933531" Volume is already used by pod(s) pod1
I am trying to understand why this does not happen when the StorageClass is created and assigned to the PV.
How am I able to access the files from the second pod when the accessMode is ReadWriteOnce?
According to the k8s documentation, if multiple pods on different nodes need access, the mode should be ReadWriteMany.
If the RWO access mode works, is it safe for both pods to read and write? Will there be any issues?
What is the role of RWX if RWO works just fine in this case?
It would be great if some experts could give an insight into this. Thanks.
Volumes are RWO per node, not per pod. Volumes are mounted to the node and then bind-mounted into containers. As long as the pods are scheduled to the same node, an RWO volume can be bind-mounted into both containers at the same time.
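So sharing an RWO volume only works while both pods land on the same node; the Multi-Attach error from cinder appears when the second pod is scheduled to a different node, because the block volume cannot be attached to two nodes at once. A minimal sketch of pinning the second pod to the first pod's node with a nodeSelector (pod, node and PVC names are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  nodeSelector:
    kubernetes.io/hostname: node-1   # the node where pod1 already runs
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-pvc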

How to rename a persistent volume claim?

Is it possible to rename a PVC? I can't seem to find any evidence that it is.
I'm trying to mitigate a "No space left on device" issue I just stumbled upon. Essentially my plan requires me to resize the volume on which my service persists its data.
Unfortunately I'm still on Kubernetes 1.8.6 on GKE, which does not have the PersistentVolumeClaimResize admission plugin enabled:
1.9.1: config-default.sh#L254#1.9.1
1.8.6: config-default.sh#L254#1.8.6
Therefore I have to try and save the data manually. I made the following plan:
create a new, bigger PVC,
create a temp container with the "victim" PVC and the new bigger PVC attached,
copy the data,
drop the "victim" PVC,
rename the new bigger PVC to take the place of "victim".
The PVC in question is attached to a StatefulSet, so the old and new names must match (the StatefulSet expects its volume naming convention to be followed).
But I don't understand how to rename persistent volume claims.
The answer to your question is no. There is no way to rename an object in Kubernetes; metadata.name cannot be changed.
But there is a way to fulfill your requirement.
You want to claim your new, bigger PersistentVolume with the old PersistentVolumeClaim.
Let's say the old PVC is named victim and the new PVC is named bigger. You want the PV created for bigger to be claimed by the victim PVC, because your application is already using the victim PVC.
Follow these steps to do the hack.
Step 1: Delete your old PVC victim.
Step 2: Make the PV of bigger Available.
$ kubectl get pvc bigger
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
bigger Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 30s
Edit the PV pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 to set persistentVolumeReclaimPolicy to Retain, so that deleting the PVC will not delete the PV.
Now, delete the PVC bigger.
$ kubectl delete pvc bigger
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Released default/bigger standard 3m
See the status: the PV is Released.
Now, make this PV available to be claimed by another PVC, our victim.
Edit the PV again to remove the claimRef:
$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Available standard 6m
Now the status of the PV is Available.
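As an alternative to the interactive edit, the claimRef can also be dropped with a JSON patch, for example:
$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'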
Step 3: Claim the bigger PV with the victim PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: victim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
  resources:
    requests:
      storage: 10Gi
Use volumeName pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/victim Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 9s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Bound default/victim standard 9m
Finally: set persistentVolumeReclaimPolicy back to Delete.
This is how your PVC victim ends up with the bigger PV.
With Kubernetes 1.11+ you can perform on-demand resizing by simply modifying the PVC's storage request (https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/).
GKE supports this (I've used it several times myself) and it's pretty straightforward and without the drama.
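On such a cluster the resize is just an edit of the PVC's storage request, for example (the PVC name is a placeholder, and the StorageClass must allow volume expansion):
$ kubectl patch pvc my-pvc -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'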
I cannot validate this, however I am fairly certain that on GKE you can go to Disks in the Google Cloud Console, find the one the PV uses and resize it there. Once you've done that you should be able to log into the node it's attached to and run resize2fs on the device. This is dirty, but I'm fairly certain it has worked for me once in the past.
You don't have to unmount or copy to do this, which can save you if the disk is live or large.
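A rough sketch of that route (disk name, zone and device are placeholders; find them with gcloud compute disks list and lsblk on the node):
$ gcloud compute disks resize my-pv-disk --size=20GB --zone=us-central1-a
# then, on the node the disk is attached to:
$ sudo resize2fs /dev/sdb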

Increasing size of persistent disks on kubernetes

Suppose I have a one-node database service (PostgreSQL, MySQL, whatever...) deployed on Kubernetes using a PersistentVolumeClaim of 10G, running on GKE or AWS or Azure (it does not really matter). What is the procedure to scale the disk up to 20G? Is there a way, for instance, to have a PVC bind to an existing disk (a snapshot of the 10G disk) or something like that?
What I want is to increase the storage size of a disk that belongs to a PVC AND keep the old data (the disk will not necessarily hold a database, so I'm not looking to restore a database backup or anything like that).
I'm looking for something like: take a snapshot of the old disk, create a bigger disk from the snapshot and "make the PVC use the new disk".
Thank you
You have a PVC with a 10G PV and you want to increase its size. Unfortunately resize is not supported yet, so you need to create a new PVC with a size of 20G.
Let's say your existing 10G PVC is called older.
Follow these steps:
Step 1: Create a new 20G PVC, let's say it's called latest.
Step 2: Mount both older and latest in a container and copy the data from older to latest (a sketch of such a copy pod is at the end of this answer).
Step 3: Delete the PVC older, we do not need it any more. The data has been copied to the latest PV.
Step 4: Make the PV of latest Available.
$ kubectl get pvc latest
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
latest Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 30s
Edit the PV pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 to set persistentVolumeReclaimPolicy to Retain, so that deleting the PVC will not delete the PV.
Now, delete the PVC latest.
$ kubectl delete pvc latest
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Released default/latest standard 3m
See the status: the PV is Released.
Now, make this latest PV available to be claimed by another PVC, our older, since we want the 20G under the PVC name older.
Edit the PV again to remove the claimRef:
$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Available standard 6m
Now the status of the PV is Available.
Step 5: Claim the latest PV with the older PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: older
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
  resources:
    requests:
      storage: 10Gi
Use volumeName pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/older Bound pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO standard 9s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 10Gi RWO Retain Bound default/older standard 9m
Finally: set persistentVolumeReclaimPolicy back to Delete.
This is how your PVC older ends up with the latest 20G PV.
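As referenced in Step 2, a minimal sketch of a throwaway copy pod mounting both claims (the pod name, image and copy command are just one way to do it; scale the application down first so the volumes are free to attach):
apiVersion: v1
kind: Pod
metadata:
  name: pvc-copy
spec:
  restartPolicy: Never
  containers:
    - name: copy
      image: busybox
      command: ["sh", "-c", "cp -a /old/. /new/"]
      volumeMounts:
        - name: old
          mountPath: /old
        - name: new
          mountPath: /new
  volumes:
    - name: old
      persistentVolumeClaim:
        claimName: older
    - name: new
      persistentVolumeClaim:
        claimName: latest
Wait for it to complete, then continue with Step 3.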
In Kubernetes v1.11 the persistent volume expansion feature is being promoted to beta.
https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
Enable this by setting the allowVolumeExpansion field to true in the StorageClass. Then any PVC created from this StorageClass can be edited to request more space. Finally, the pod(s) referencing the volume should be restarted.
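A minimal sketch of such a StorageClass (the name is a placeholder and the provisioner/parameters are just the GCE example; use whatever your cluster provisions with):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-resizable
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
allowVolumeExpansion: true
After that, increasing spec.resources.requests.storage on a PVC created from this class triggers the expansion.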