How to migrate from PVC to VolumeClaimTemplate - Kubernetes

There is a StatefulSet that uses a dynamic provisioner with a StorageClass named 'some-class'
and the following simple PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: some-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The reclaimPolicy of the StorageClass is Retain, so when I delete the application the PV is retained.
But when reinstalling the app, a new PVC and a new PV are created and used, so I can't access the data on the old PV.
To get the desired behaviour I need to use volumeClaimTemplates instead of a standalone PVC.
The question is how to retain the data: either by reusing the old PV, or by using the new one with the data migrated from the old.
One solution would be to use a label selector for the PV from inside the volumeClaimTemplates, but unfortunately that doesn't work with a dynamic provisioner.
Is there any Kubernetes-level solution?
Or is this outside of K8s capabilities and should be done by hand?
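One manual, by-hand approach is to pre-bind the retained PV to the PVC name that the volumeClaimTemplate will generate (the pattern is <template-name>-<statefulset-name>-<ordinal>). The following is a sketch only; the StatefulSet name `app`, the template name `data`, and the PV name are hypothetical:

```yaml
# Sketch: edit the retained PV so it binds to the PVC a StatefulSet
# volumeClaimTemplate will create. Remove the stale claimRef uid first
# (kubectl edit pv <name>) so the PV goes from Released back to Available.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-1234-example      # the retained PV (hypothetical name)
spec:
  storageClassName: some-class
  claimRef:                   # pre-bind to the future PVC
    apiVersion: v1
    kind: PersistentVolumeClaim
    namespace: default
    name: data-app-0          # <template-name>-<statefulset-name>-<ordinal>
```

Once the StatefulSet is installed with a matching volumeClaimTemplate (same storageClassName and accessModes, and a request no larger than the PV's capacity), the generated PVC `data-app-0` should bind to the retained PV instead of triggering the dynamic provisioner.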

Related

How can I mount a PV on one node and use that same PV for pods on another node

I have attached an EBS volume to one of the nodes in my cluster, and I want any pods that come up, irrespective of the nodes they are scheduled onto, to use that EBS volume. Is this possible?
My approach was to create a PV/PVC that mounts to that volume and then use that PVC in my pod, but I am not sure whether it mounts on the same host the pod comes up on or on a different host.
YAML for Storage Class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
allowVolumeExpansion: true
reclaimPolicy: Delete
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  capacity:
    storage: 200Mi
  storageClassName: local-path
  claimRef:
    namespace: redis
    name: data-redis-0
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt2/data/redis"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-redis-0
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: local-path
Now when I try to schedule a pod, the storage also gets mounted on the same node instead.
You are using a local path, so you cannot do that.
There are different access modes for a PVC: ReadWriteOnce, ReadOnlyMany, and ReadWriteMany.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It
is similar to a Pod. Pods consume node resources and PVCs consume PV
resources. Pods can request specific levels of resources (CPU and
Memory). Claims can request specific size and access modes (e.g., they
can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see
AccessModes).
Read More at : https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Yes, you can mount multiple pods to a single PVC, but in that case you have to use ReadWriteMany. Most people use NFS or EFS for this type of use case.
EBS is ReadWriteOnce, so it won't be possible to use EBS in your case; you have to use either NFS or EFS.
You can use GlusterFS backed by EBS volumes. GlusterFS supports ReadWriteMany, and it will be faster compared to EFS since the backing is block storage (SSD).
For ReadWriteMany you can also check out: https://min.io/
Find access mode details here : https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
I have attached an EBS volume to one of the nodes in my cluster, and I want any pods that come up, irrespective of the nodes they are scheduled onto, to use that EBS volume. Is this possible?
No. An EBS volume can only be attached to at most one EC2 instance, and correspondingly, one Kubernetes node. In Kubernetes terminology, it only allows the ReadWriteOnce access mode.
It looks like the volume you're trying to create is the backing store for a Redis instance. If the volume will only be attached to one pod at a time, then this isn't a problem on its own, but you need to let Kubernetes manage the volume for you. Then the cluster will know to detach the EBS volume from the node it's currently on and reattach it to the node with the new pod. Setting this up is a cluster-administration problem and not something you as a programmer can do, but it should be set up for you in environments like Amazon's EKS managed Kubernetes.
In this environment:
Don't create a StorageClass; this is cluster-level configuration.
Don't manually create a PersistentVolume; the cluster will create it for you.
You should be able to use the default storageClass: in your PersistentVolumeClaim.
You probably should use a StatefulSet to create the PersistentVolumeClaim for you.
So for example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  selector:
    matchLabels:
      app: redis
  volumeClaimTemplates: # automatically creates PersistentVolumeClaims
    - metadata:
        name: data-redis
      spec:
        accessModes: [ReadWriteOnce] # data won't be shared between pods
        resources:
          requests:
            storage: 200Mi
        # default storageClassName:
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis # image added for completeness
          volumeMounts:
            - name: data-redis
              mountPath: /data
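For reference, the cluster-managed default StorageClass that backs such a claim in EKS looks roughly like the following. This is an approximation; the exact defaults vary by cluster version and setup:

```yaml
# Approximation of the gp2 StorageClass that EKS pre-installs as the
# default; the is-default-class annotation is what lets a PVC omit
# storageClassName entirely.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

With `WaitForFirstConsumer`, the EBS volume is only provisioned once the pod is scheduled, so it is created in the same availability zone as the node running the pod.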

PersistentVolume and PersistentVolumeClaim for multiple deployments

I have 3 deployments, a-depl, b-depl, c-depl. Now each of these 3 deployments has a db deployment: a-db-depl, b-db-depl, c-db-depl.
Now I want to persist each of these dbs. Do I need to create a single PV for all or a PV for each of the deployments?
I know that PV <-> PVC is a 1-to-1 relation, but I don't know about Depl <-> PV.
Can someone please help?
As of now I have no clue, so I am using a single PV for all of the db deployments:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/mongo"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
One PV can be bound to only one PVC at a time, so for each of your PVCs you need to create a corresponding PV. To automate PV creation, you can create a StorageClass and refer to that StorageClass in your PVCs; the StorageClass can then dynamically provision a PV for each PVC.
Whether multiple deployments can use the same PVC or PV depends on accessModes of the PVC or PV.
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany- the volume can be mounted as read-write by many nodes
ReadWriteOnce - the volume can be mounted as read-write by a single node
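A sketch of the dynamic-provisioning setup described above (the names and the provisioner here are placeholders; substitute the provisioner your cluster actually runs):

```yaml
# Hypothetical StorageClass shared by all the db deployments.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: db-storage
provisioner: kubernetes.io/aws-ebs   # placeholder provisioner
reclaimPolicy: Retain
---
# One PVC per database deployment (a-db, b-db, c-db each get their own);
# the StorageClass provisions a dedicated PV for each claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: a-db-pvc
spec:
  storageClassName: db-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Each deployment then references its own claim, so the three databases never share a volume.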
How does one run multiple replicas of a pod and have each pod use its own storage volume?
Use a StatefulSet resource, which is specifically tailored to applications whose instances must be treated as non-fungible individuals, each with a stable name and state.
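A minimal sketch of that (all names hypothetical): the `volumeClaimTemplates` section gives every replica of the StatefulSet its own PVC, and therefore its own dynamically provisioned PV.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: mongo          # example image
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:         # creates data-db-0, data-db-1, data-db-2
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 10Gi
```

Scaling the StatefulSet up creates additional claims following the same naming pattern; scaling down leaves the claims (and their data) in place.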

How do I put storage quota limitation on storage class

I want to dynamically create PersistentVolumes and mount them into my pod using PVCs, so I am following the Kubernetes Dynamic Provisioning concept. I am creating the PersistentVolumeClaim using a Kubernetes StorageClass.
I am creating the PVC using the StorageClass like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: test-sc
  resources:
    requests:
      storage: 100M
Now, I want to put restriction on StorageClasses test-sc to limit storage usage. In any case, the sum of storage used by PVCs which are created using StorageClass test-sc across all namespaces should not exceed 150M.
I am able to limit the storage usage of PVCs created using StorageClass test-sc for a single namespace as follows:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-limit-sc
  namespace: test
spec:
  hard:
    test-sc.storageclass.storage.k8s.io/requests.storage: 150Mi
How do I put this limitation at the cluster level, i.e. on the StorageClass itself?
A ResourceQuota is applicable per namespace only; there is no cluster-scoped quota for a StorageClass.
You will have to define quotas in all of your namespaces.
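In practice that means repeating the same quota in every namespace. A sketch, with placeholder namespace names:

```yaml
# Per-namespace ResourceQuota limiting storage requested from the
# test-sc StorageClass; one copy of this object is needed in each
# namespace (team-a, team-b, ... are hypothetical names).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-limit-sc
  namespace: team-a
spec:
  hard:
    test-sc.storageclass.storage.k8s.io/requests.storage: 150Mi
```

Note that each namespace gets its own 150Mi budget; per-namespace quotas cannot enforce a single 150Mi cap summed across all namespaces.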

Override StorageClass parameters from a PVC

This may be a basic question but I haven't seen any documentation on it.
Can you override parameters defined within the StorageClass using the PVC?
For example, here is a StorageClass I have created:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-gold
provisioner: hpe.com/hpe
parameters:
  provisioning: 'full'
  cpg: 'SSD_r6'
  snapcpg: 'FC_r6'
PVC
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nginx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: sc-gold
I want to use the "sc-gold" StorageClass as defined above, but be able to override/change the provisioning type from "full" to "thin" when creating the PVC, without having to create another StorageClass. I don't see any examples of how the PVC would be formatted, or whether this is even supported by the spec.
Traditionally, as storage admins, we create StorageClasses as storage "profiles" and then users are assigned/consume the SC in order to create volumes. But is there any flexibility in the spec? I just want to limit the StorageClass sprawl that I can see happening in order to accommodate any and all scenarios.
Thoughts?
No, you can't override StorageClass parameters during PVC creation. You need to create an additional StorageClass and map the required StorageClass to the PVC.
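For example, a second class that differs from sc-gold only in the provisioning parameter (the name "sc-gold-thin" is just a suggestion):

```yaml
# Hypothetical "thin" variant of the sc-gold StorageClass above;
# only the provisioning parameter changes.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-gold-thin
provisioner: hpe.com/hpe
parameters:
  provisioning: 'thin'
  cpg: 'SSD_r6'
  snapcpg: 'FC_r6'
```

A PVC then selects the variant it wants via `storageClassName: sc-gold-thin`.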

Adding a Compute Engine Disk to Container Engine as persistent volume

I have a PersistentVolumeClaim that looks like the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-config-storage
  namespace: gitlab
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
This created a disk in Google Compute Engine. I then deleted the claim and reapplied it, but this created a new disk. I would like to attach the original disk to my claim, since it already contains data. Is there a way to force GKE to use a specific disk?
By using a persistent volume claim, you are asking GKE to use a persistent disk and then always use the same volume.
However, by deleting the claim, you've essentially destroyed it.
Don't ever delete the claim if you want to continue using it.
You can attach a claim to multiple pods over its lifetime, and the disk will remain the same. As soon as you delete the claim, it will disappear.
Take a look here for more information.
You can re-attach a GCE disk to a PersistentVolumeClaim by first creating the PersistentVolume. Create a YAML file and set the proper values, e.g.:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-gitlab-config-storage
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 25Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: gitlab-config-storage
    namespace: gitlab
  gcePersistentDisk:
    pdName: <name_of_the_gke_disk>
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast
Create this with kubectl apply -f filename.yaml, then re-create your PersistentVolumeClaim with values matching the spec and claimRef. The PVC should find the matching PV and bind to it and the existing GCE disk.