Increasing size of persistent disks on Kubernetes

Suppose I have a single-node database service (PostgreSQL, MySQL, whatever...) deployed on Kubernetes using a PersistentVolumeClaim of 10G, running on GKE, AWS, or Azure (it does not really matter). What is the procedure to scale the disk up to 20G? Is there a way, for instance, to have a PVC bind to an existing disk (a snapshot of the 10G disk) or something like that?
What I want is to increase the storage size of a disk that belongs to a PVC AND keep the old data (the disk will not necessarily hold a database, so I'm not looking to restore a database backup or anything like that).
I'm looking for something like: take a snapshot of the old disk, create a bigger disk from the snapshot, and "make the PVC use the new disk".
Thank you

You have a PVC bound to a 10G PV and you want to increase its size. Unfortunately, resize is not supported yet, so you need to create a new PVC with a size of 20G.
Let's say your existing 10G PVC is called older.
Follow these steps:
Step 1: Create a new 20G PVC; let's say it is called latest.
Step 2: Mount both older and latest in a container and copy the data from older to latest; a sketch follows below.
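A throwaway pod along these lines can do the copy (the pod name, image, and mount paths are arbitrary choices, not anything Kubernetes mandates):
apiVersion: v1
kind: Pod
metadata:
  name: datacopy              # hypothetical helper pod
spec:
  restartPolicy: Never
  containers:
  - name: copy
    image: busybox            # any image with cp will do
    # copy everything, preserving attributes, from the old volume to the new one
    command: ["sh", "-c", "cp -a /old/. /new/"]
    volumeMounts:
    - name: older
      mountPath: /old
    - name: latest
      mountPath: /new
  volumes:
  - name: older
    persistentVolumeClaim:
      claimName: older
  - name: latest
    persistentVolumeClaim:
      claimName: latest
Once the pod completes, the data is on latest and the pod can be deleted.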
Step 3: Delete the PVC older; we do not need it any more, since the data has been copied to the latest PV.
Step 4: Make the PV of latest Available.
$ kubectl get pvc latest
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
latest   Bound    pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   20Gi       RWO            standard       30s
Edit the PV pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 to set persistentVolumeReclaimPolicy to Retain, so that deleting the PVC will not delete the PV.
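If you prefer a one-liner over kubectl edit, the same change can be made with kubectl patch:
$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 \
    -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'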
Now, delete PVC latest.
$ kubectl delete pvc latest
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM            STORAGECLASS   REASON   AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   20Gi       RWO            Retain           Released   default/latest   standard                3m
Note the status: the PV is Released.
Now make this PV Available again so it can be claimed by another PVC, namely older, since we want the 20G volume under the PVC name older.
Edit the PV again to remove the claimRef:
$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
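In the editor, delete the whole spec.claimRef block. The same can be done non-interactively with a JSON patch (a sketch; claimRef lives under /spec/claimRef):
$ kubectl patch pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 \
    --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'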
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   20Gi       RWO            Retain           Available           standard                6m
Now the status of PV is Available.
Step 5: Claim the latest PV with the older PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: older
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
  resources:
    requests:
      storage: 20Gi
Note the volumeName: it must match the PV name, pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6.
$ kubectl get pvc,pv
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/older   Bound    pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   20Gi       RWO            standard       9s

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM           STORAGECLASS   REASON   AGE
pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   20Gi       RWO            Retain           Bound    default/older   standard                9m
Finally, set persistentVolumeReclaimPolicy back to Delete.
This is how your PVC older ends up with the 20G latest PV.

In Kubernetes v1.11 the persistent volume expansion feature is being promoted to beta.
https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/
Enable this by setting the allowVolumeExpansion field to true in the StorageClass. Any PVC created from that StorageClass can then be edited to request more space, and finally the pod(s) referencing the volume should be restarted.
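A minimal sketch of such a StorageClass (the name and provisioner here are examples; use whatever your cluster actually provisions with):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-expandable        # example name
provisioner: kubernetes.io/gce-pd  # example provisioner (GCE persistent disks)
allowVolumeExpansion: true         # this is the field that permits PVC resizing
After that, bumping spec.resources.requests.storage on a PVC created from this class triggers the expansion.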

Related

Can I change the access mode of a PVC from RWO (standard StorageClass) to RWX (NFS StorageClass) without losing data?

I know we can edit the PVC and change it to RWX, but there is a catch: I'm trying to do this on GKE, so for my PVC with RWO the storage class is standard, but if I edit it to RWX I need to change the storage class to NFS as well.
Is it possible to achieve this without losing the data inside the PVC?
Your existing PVC is using the standard storage class, which doesn't allow RWX, so it's not possible; even if you change it in the PVC config, it's not going to work.
The workaround is to take a backup of the existing PV data, create a new PVC with RWX mode for an NFS PV, mount that to the application, and copy the backup data to the mounted volume. A sketch of such a claim follows below.
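The claim name here is hypothetical, and the nfs StorageClass name is an assumption; use whatever name your NFS provisioner registers:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-rwx-claim        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany         # RWX, which NFS supports
  storageClassName: nfs     # assumed NFS StorageClass name
  resources:
    requests:
      storage: 10Gi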
You cannot change your StorageClass to a different one and expect the data not to be lost. You won't even be able to change most of the parameters in already created StorageClasses and PVCs. Changing the StorageClass for a PVC that stores your data will not transfer the data to a new location.
As @Manmohan Mittal said, you need to create a new PVC for the NFS storage class and copy the backup of the existing PV data to the mounted volume.
However, you can edit the PersistentVolume accessModes to RWX, which will automatically update the PVC accessModes without losing any data, in the NFS storage class.
A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers will have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities.
The Kubernetes Persistent Volumes documentation mentions that NFS supports all access modes: RWO, ROX, and RWX. accessModes in a PersistentVolumeClaim (PVC) is an immutable field and cannot be changed once applied.
You can, however, change the bound PersistentVolume (PV) accessModes, which will automatically update the PVC accessModes.
kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
my-pv   50Gi       RWX            Delete           Available           standard                2d22h
kubectl edit pv my-pv and change to the desired access mode:
accessModes:
- ReadWriteMany
This will change the PVC accessModes; the output is:
kubectl get pvc
NAME     STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Bound    pvc-xxxx-xxxx-xxx   1Gi        RWX            standard       2s
Here, the PVC now reports the RWX access mode in the standard storage class.

Does a K8s PersistentVolume change work with the --record flag?

I have a persistent volume (PV) and persistent volume claim (PVC) which got bound as well. Initially, the storage capacity was 2Gi for the PV and the requested storage from PVC was 1Gi.
I then edited the existing bound PV and increased the storage to 5Gi, recording the change with the --record flag.
vagrant@mykubemaster:~/my-k8s$ kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
my-pv   2Gi        RWO            Retain           Bound    test/my-pvc                           106s
vagrant@mykubemaster:~/my-k8s$ kubectl edit pv my-pv --record
persistentvolume/my-pv edited
vagrant@mykubemaster:~/my-k8s$ kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE
my-pv   5Gi        RWO            Retain           Bound    test/my-pvc                           2m37s
Now my question: is there any way to confirm that the --record flag actually recorded this storage change (the PV edit) in history?
With deployments it is easy to check with kubectl rollout history <deployment name>, but I'm not sure how to check this for other objects like PVs.
Please assist. Thanks.
As mentioned in the kubectl reference docs:
Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists.
You can run kubectl get pv my-pv -o yaml and you should see that kubernetes.io/change-cause was updated with the command that you ran. In your case, it will be kubectl edit pv my-pv --record.
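After the edit, the PV metadata should contain something like:
metadata:
  annotations:
    kubernetes.io/change-cause: kubectl edit pv my-pv --record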
The rollout command that you mentioned (including rollout history) works only with the following resources:
deployments
daemonsets
statefulsets

Some PVCs are not deleted after helm purge

In my StatefulSet I defined volumeClaimTemplates and added a StorageClass definition. After deployment I have the PVC, PV, and SC created. The reclaim policy is Delete.
However, after performing helm delete <> --purge, all resources are deleted except the PVCs.
I use kubernetes.io/cinder for dynamic provisioning.
Below is the PVC:
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-x-kafka-0   Bound    pvc-db37bd17-fe35-11ea-8161-fa163efa0a08   16Gi       RWO            sc-name        7m
Below is the PV:
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-db37bd17-fe35-11ea-8161-fa163efa0a08   16Gi       RWO            Delete           Bound    ns/data-x-kafka-0   sc-name                 12m
Could you please point me to where I am mistaken?
According to https://github.com/helm/helm/issues/3313 this is working as intended: the PVCs were created by the StatefulSet, not by Helm itself, so Helm does not delete them.
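They have to be removed manually after the purge, either by name as shown in your output, or by label (the app=kafka selector is an assumption; check what labels your chart puts on its claims):
$ kubectl delete pvc data-x-kafka-0 -n ns
$ kubectl delete pvc -n ns -l app=kafka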

How are pods able to mount the same PVC with the ReadWriteOnce access mode when the StorageClass is glusterfs, but not with the default cinder storage?

I want to understand how pod1, which claimed a PVC with accessMode: ReadWriteOnce, is able to share it with pod2 when the glusterfs StorageClass is used. Shouldn't it fail, since I would need to specify the accessMode as ReadWriteMany?
-> Created a StorageClass as glusterfs with type: distributed
-> A PV was created on top of the above StorageClass, and the PVC was created with AccessMode: ReadWriteOnce
-> The first pod attached the PVC created above
-> The second pod tried to attach the same PVC, and it works: it is able to access the files which the first pod created
I tried another flow without a StorageClass, directly creating the PVC from the cinder storage, and the below error shows up:
Warning FailedAttachVolume 28s attachdetach-controller Multi-Attach error for volume "pvc-644f3e7e-8e65-11e9-a43e-fa163e933531" Volume is already used by pod(s) pod1
I am trying to understand why this does not happen when the StorageClass is created and assigned to the PV.
How am I able to access the files from the second pod when the AccessMode is ReadWriteOnce?
According to the k8s documentation, if multiple pods on different nodes need access, it should be ReadWriteMany.
If RWO access mode works then is it safe for both the pods to read and write? Will there be any issues?
What is the role of RWX if RWO works just fine in this case?
Would be great if some experts can give an insight into this. Thanks.
Volumes are RWO per node, not per pod. Volumes are mounted to the node and then bind-mounted into containers. As long as the pods are scheduled to the same node, an RWO volume can be bind-mounted into both containers at the same time; a sketch follows below.
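A quick way to see this is to pin the second pod to the same node as the first (the nodeName demo-node and the claim name are placeholders); both pods then mount the RWO claim without a Multi-Attach error:
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  nodeName: demo-node            # placeholder: the node pod1 runs on
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: my-rwo-claim    # hypothetical RWO claim also used by pod1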

How to rename a persistent volume claim?

Is it possible to rename a PVC? I can't seem to find any evidence that it is.
I'm trying to mitigate a "No space left on device" issue I just stumbled upon. Essentially, my plan requires me to resize the volume on which my service persists its data.
Unfortunately I'm still on Kubernetes 1.8.6 on GKE. It does not have the PersistentVolumeClaimResize admission plugin enabled:
1.9.1: config-default.sh#L254#1.9.1
1.8.6: config-default.sh#L254#1.8.6
Therefore I have to try and save the data manually. I made the following plan:
create a new, bigger PVC,
create a temp container with the "victim" PVC and the new bigger PVC both attached,
copy the data,
drop the "victim" PVC,
rename the new bigger PVC to take the place of "victim".
The PVC in question is attached to a StatefulSet, so the old and new names must match (the StatefulSet expects its volumes to follow the naming convention).
But I don't understand how to rename persistent volume claims.
The answer to your question is no: there is no way to change any metadata name in Kubernetes.
But, there is a way to fulfill your requirement.
You want to claim your new, bigger PersistentVolume with the old PersistentVolumeClaim.
Let's say the old PVC is named victim and the new PVC is named bigger. You want to claim the PV created for bigger with the victim PVC, because your application is already using the victim PVC.
Follow these steps to do the hack.
Step 1: Delete your old PVC victim.
Step 2: Make the PV of bigger Available.
$ kubectl get pvc bigger
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
bigger   Bound    pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            standard       30s
Edit the PV pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6 to set persistentVolumeReclaimPolicy to Retain, so that deleting the PVC will not delete the PV.
Now, delete PVC bigger.
$ kubectl delete pvc bigger
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM            STORAGECLASS   REASON   AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            Retain           Released   default/bigger   standard                3m
Note the status: the PV is Released.
Now make this PV Available so it can be claimed by another PVC, our victim.
Edit the PV again to remove the claimRef:
$ kubectl edit pv pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            Retain           Available           standard                6m
Now the status of PV is Available.
Step 3: Claim the bigger PV with the victim PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: victim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6
  resources:
    requests:
      storage: 10Gi
Note the volumeName: it must match the PV name, pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6.
$ kubectl get pvc,pv
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc/victim   Bound    pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            standard       9s

NAME                                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
pv/pvc-8badc3c2-08c5-11e8-b07a-080027b3e1a6   10Gi       RWO            Retain           Bound    default/victim   standard                9m
Finally, set persistentVolumeReclaimPolicy back to Delete.
This is how your PVC victim ends up with the bigger PV.
With Kubernetes 1.11+ you can perform on-demand resizing by simply modifying the PVC's storage request (https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/).
GKE supports this (I've used it several times myself) and it's pretty straightforward and without the drama.
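With expansion enabled, the resize is a one-line patch (the claim name my-claim is a placeholder):
$ kubectl patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'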
I cannot validate this, but I am fairly certain that on GKE you can go to Disks in the Google Cloud Console, find the one that the PV uses, and resize it there. Once you've done that, you should be able to log into the node where it is attached and run resize2fs on the device. This is dirty, but I'm fairly certain it has worked for me once in the past.
You don't have to unmount or copy anything to do this, which can save you if the disk is live or large. A sketch follows below.
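The disk name, zone, and device path below are placeholders; double-check the device with lsblk on the node:
$ gcloud compute disks resize my-disk --size 20GB --zone us-central1-a
# then, on the node where the disk is attached:
$ sudo resize2fs /dev/sdb        # device path is a guess; verify with lsblk first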