This may be a basic question but I haven't seen any documentation on it.
Can you override parameters defined within the StorageClass using the PVC?
For example, here is a StorageClass I have created:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-gold
provisioner: hpe.com/hpe
parameters:
  provisioning: 'full'
  cpg: 'SSD_r6'
  snapcpg: 'FC_r6'
And here is the PVC:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nginx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: sc-gold
I want to use the "sc-gold" StorageClass as defined above but be able to override/change the provisioning type from "full" to "thin" when creating the PVC without having to create another StorageClass. I don't see any examples of how the PVC would be formatted or if this is even supported within the spec.
Traditionally, as storage admins, we create StorageClasses as storage "profiles"; users are then assigned a class and consume it to create volumes. Is there any flexibility in the spec? I just want to limit the StorageClass sprawl I can see happening in order to accommodate any and all scenarios.
Thoughts?
No, you can't override StorageClass parameters during PVC creation. You need to create an additional StorageClass and reference that class in the PVC.
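For example, a second class could reuse the sc-gold parameters and switch only the provisioning type (a sketch; the name sc-gold-thin is just an assumption):
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-gold-thin        # assumed name for the additional class
provisioner: hpe.com/hpe
parameters:
  provisioning: 'thin'      # the only parameter that differs from sc-gold
  cpg: 'SSD_r6'
  snapcpg: 'FC_r6'
The PVC would then simply set storageClassName: sc-gold-thin.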
I would like to use a single mount point on a node (i.e. /data) and have a different sub-folder for each PersistentVolumeClaim that I am going to use in my cluster.
At the moment I have a separate StorageClass and PersistentVolume for each sub-folder, for example:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus
  labels:
    type: local
spec:
  storageClassName: prometheus
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/data/prometheus"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: disk
              operator: In
              values:
                - local
As you can imagine, having a StorageClass and a PersistentVolume for each PersistentVolumeClaim looks a bit of an overkill.
I have tried to use a single StorageClass and PersistentVolume (just pointing to /data) together with the subPath option (i.e. prometheus) and multiple PersistentVolumeClaims.
But I have noticed that if the securityContext.fsGroupChangePolicy option is set, it applies the user/group changes to the root of the volume (i.e. /data), not to the subPath (i.e. /data/prometheus).
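For clarity, this is roughly what that subPath approach looks like on the pod side (a sketch; the claim name shared-data-pvc and the image are assumptions):
---
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  containers:
    - name: prometheus
      image: prom/prometheus          # image is an assumption
      volumeMounts:
        - name: data
          mountPath: /prometheus
          subPath: prometheus         # sub-folder inside the shared /data volume
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: shared-data-pvc    # assumed claim bound to the single /data PV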
Is there a better solution?
Thanks
As you can imagine, having a StorageClass and a PersistentVolume for each
PersistentVolumeClaim looks a bit of an overkill.
That's exactly how dynamic storage provisioning works: a single StorageClass referenced in a PVC used by a pod will provision a single PV for that PVC. There's nothing wrong with that. I'd suggest using it if you are OK with its default reclaim policy of Delete.
local-path-provisioner seems to be a good solution.
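For instance, assuming the default install of Rancher's local-path-provisioner (which creates a StorageClass named local-path), a claim would look roughly like this:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data          # assumed name
spec:
  storageClassName: local-path   # class created by the default local-path-provisioner install
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi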
There is a StatefulSet which uses a dynamic provisioner with a StorageClass named 'some-class'
and the following simple PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  storageClassName: some-class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The reclaimPolicy of the StorageClass is Retain.
When I delete the application, the PV is retained.
But when reinstalling the app, a new PVC and a new PV are created and used, so I can't access the data on the old PV.
For this behaviour I need to use volumeClaimTemplates instead of a plain PVC.
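For reference, a minimal sketch of what that volumeClaimTemplates block looks like inside the StatefulSet (the StatefulSet name, image and mount path are assumptions):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example                  # assumed name
spec:
  serviceName: example
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: app
          image: nginx                   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data   # assumed mount path
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: some-class
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi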
The question is: how do I retain the data, either by reusing the old PV or by migrating its data to the new one?
One solution would be to use a label selector for the PV from inside the volumeClaimTemplates, but unfortunately that doesn't work with the dynamic provisioner.
Is there any Kubernetes-level solution?
Or is this outside of K8s capabilities and should it be done by hand?
I have a bunch of standard PVs bound to PVCs in Kubernetes running in Google Kubernetes Engine. I want to change their storage class to SSD. How do I achieve that?
No, it's not possible to change the storage class of an existing PVC. You will have to create a new PVC with the desired storage class and then delete the existing one.
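For illustration, a sketch of such a replacement claim (the claim name and the SSD-backed class name ssd are assumptions; the class must already exist):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-ssd        # assumed name for the replacement claim
spec:
  storageClassName: ssd    # assumed SSD-backed StorageClass (e.g. GCE pd-ssd)
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi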
If I understood you correctly, you would like to change the type of your PVs, and the question is not "if" but "where".
The relationship between PVC, PV and StorageClass is very simple.
A PVC is just a request for storage of a particular class (specified under storageClassName) and size:
kind: PersistentVolumeClaim
spec:
  ...
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
The PV has storageClassName in its spec:
kind: PersistentVolume
...
spec:
  capacity:
    storage: 10Gi
  ...
  storageClassName: slow
The StorageClass has the type parameter:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
  replication-type: none
  # type: pd-standard or pd-ssd. Default: pd-standard
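For the SSD case the class just switches the type parameter (a sketch; the name fast is an assumption):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast               # assumed name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fstype: ext4
  replication-type: none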
Is it the info you've been looking for?
I'm fairly new to Kubernetes and find it difficult to get this working from the documentation. The Kubernetes docs say that a StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned. However, can I use a StorageClass with a PV (not dynamic allocation) to specify high-performance disk allocation such as SSD?
Without a StorageClass it worked fine for me.
Following is my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gke-pv
  labels:
    app: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gce-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gke-pvc
  labels:
    app: test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-sc
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: test
You need a StorageClass if the storage needs to be provisioned dynamically.
If you are provisioning PersistentVolumes yourself, it is called static storage provisioning. You don't need a StorageClass in this scenario.
The problem here is that statically provisioned PersistentVolumes don't have a StorageClass. However, GKE clusters are created with a standard StorageClass which is the default, so the PVC falls back to it and tries to dynamically allocate a volume.
The solution is to have the PVC request an empty storage class, which forces it to look at the statically provisioned PVs.
So you'd use a sequence like this to create a PV and then get it bound to a PVC:
Manually provision the SSD:
gcloud compute disks create --size=10GB --zone=[YOUR ZONE] --type=pd-ssd already-created-ssd-disk
Then apply a PV object that uses the statically provisioned disk, like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-for-k8s-volume
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: already-created-ssd-disk
    fsType: ext4
Then, you can claim it with a PVC like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd-demo
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
You could also use labels to refine which PVs are selected, of course, for example if you have some that are SSD and others that are regular spinning metal.
Note that the idea of using a StorageClass for static provisioning isn't really the right thing, since StorageClass is tied to how you describe storage for dynamic provisioning.
I want to dynamically create PersistentVolumes and mount them into my pod using PVCs, so I am following the Kubernetes dynamic provisioning concept and creating PersistentVolumeClaims using Kubernetes StorageClasses.
I am creating the PVC using a StorageClass like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: test-sc
  resources:
    requests:
      storage: 100M
Now, I want to put a restriction on the StorageClass test-sc to limit storage usage: the sum of storage used by PVCs created with StorageClass test-sc across all namespaces should not exceed 150M.
I am able to limit the storage usage of PVCs created using StorageClass test-sc for a single namespace as follows:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-limit-sc
  namespace: test
spec:
  hard:
    test-sc.storageclass.storage.k8s.io/requests.storage: 150Mi
How do I apply this limitation at the cluster level, i.e. on the StorageClass?
This is applicable per namespace only.
You will have to define quotas in each of your namespaces.
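For example, a sketch of the same quota repeated for a second namespace (the namespace name other-team is an assumption); one such object is needed in every namespace that uses the class:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-limit-sc
  namespace: other-team        # assumed namespace; repeat per namespace
spec:
  hard:
    test-sc.storageclass.storage.k8s.io/requests.storage: 150Mi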