How do I change the storage class of existing persistent volumes? - kubernetes

I have a bunch of standard PVs bound to PVCs in Kubernetes running in Google Kubernetes Engine. I want to change their storage class to SSD. How do I achieve that?

No, it's not possible to change the storage class of an existing PVC. You will have to create a new PVC with the desired storage class and then delete the existing one.
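On GKE, a minimal sketch of that approach could look like this; the ssd class name and my-data-ssd claim name are assumptions, the requested size should match your old claim, and copying data from the old volume to the new one is a separate step:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd                  # assumed name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd               # SSD-backed persistent disk
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-data-ssd          # assumed name for the replacement claim
spec:
  storageClassName: ssd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi           # match the size of the old claim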

If I understood you correctly, you would like to change the disk type behind your PVs, and the question is not "if" but "where".
The relationship between PVC, PV and StorageClass is quite simple.
A PVC is just a request for storage of a particular class (specified under storageClassName) and size (which has to be covered by the capacity listed in the PV).
kind: PersistentVolumeClaim
spec:
  ...
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
The PV carries a storageClassName in its spec:
kind: PersistentVolume
...
spec:
  capacity:
    storage: 10Gi
  ...
  storageClassName: slow
The StorageClass is where the disk type is set:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard   # pd-standard or pd-ssd. Default: pd-standard
  fstype: ext4
  replication-type: none
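If you want to verify how these three objects line up on a live cluster, the standard listings show the storage class for each PV and PVC:
kubectl get storageclass
kubectl get pv
kubectl get pvc --all-namespaces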
Is this the info you've been looking for?

Related

Volume GKE invalid disk size

I'm trying to create a pod with a 10 GB persistent disk volume, but it seems I cannot create a disk under 200 GB.
I can see the PV listed, but the PVC stays Pending. I can see that the PV is Available, so I cannot understand what's happening.
Please find the info below:
Invalid value for field 'resource.sizeGb': '10'. Disk size cannot be smaller than 200 GB., invalid
kubectl get pvc -n vault-ppd
NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS               AGE
pv-vault-ppd-claim   Pending                                      balanced-persistent-disk   2m45s
kubectl get pv -n vault-ppd
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                      STORAGECLASS   REASON   AGE
pv-vault-ppd   10Gi       RWO            Retain           Available   vault/pv-vault-ppd-claim
My manifest vault-ppd.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: balanced-persistent-disk
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b
    - europe-west1-c
    - europe-west1-d
---
apiVersion: v1
kind: Namespace
metadata:
  name: vault-ppd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-ppd
  namespace: vault-ppd
  labels:
    app.kubernetes.io/name: vault-ppd
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vault-ppd
spec:
  storageClassName: "balanced-persistent-disk"
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: vault
    name: pv-vault-ppd-claim
  gcePersistentDisk:
    pdName: gke-vault-volume
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-vault-ppd-claim
  namespace: vault-ppd
spec:
  storageClassName: "balanced-persistent-disk"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Thanks for the help, guys.
Your deployment uses regional persistent disks of type pd-standard (replication-type: regional-pd), which means the volumes are created as regional persistent disks. As mentioned in the documentation, the minimum capacity per disk for regional standard persistent disks is 200 GB, so a regional-pd standard disk cannot be created with a smaller size. The workaround is to either create the PVC with a larger size or use pd-ssd instead.
Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
Refer to Regional Persistent Disks for more information.
pdName: gke-vault-volume should point to a regional replicated disk with size >= 200 GB; if it does, you can just update your PVC/PV with the correct size. If it does not, you can set storageClassName: "" in both the PVC and PV so the claim binds statically to the existing standard disk instead.
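A minimal sketch of the pd-ssd route, keeping the rest of the StorageClass from the question unchanged (per the note above, smaller regional disks are possible with pd-ssd, so the existing 10Gi claim could then be satisfied; allowedTopologies stays as in the original manifest):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: balanced-persistent-disk
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd                  # pd-ssd instead of pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer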

Can I combine StorageClass with PersistentVolume in GKE?

I'm fairly new to Kubernetes and find it difficult to get this working from the documentation. The Kubernetes docs say that a StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned. However, can I use a StorageClass together with a PV (no dynamic allocation) to request high-performance disks such as SSD?
Without a StorageClass it worked fine for me.
The following is my manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gke-pv
  labels:
    app: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: gce-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gke-pvc
  labels:
    app: test
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd-sc
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: test
You need a StorageClass if the storage needs to be provisioned dynamically.
If you are provisioning the PersistentVolumes yourself, that is called static storage provisioning; you don't need a StorageClass in that scenario.
The problem here is that statically provisioned PersistentVolumes don't have a StorageClass. However, GKE clusters are created with a default standard StorageClass, so the PVC falls back to it and tries to dynamically provision a volume.
The solution is to have the PVC request an empty storage class, which forces it to look at the statically provisioned PVs.
So you'd use a sequence like this to create a PV and then get it bound to a PVC:
Manually provision the SSD:
gcloud compute disks create --size=10GB --zone=[YOUR ZONE] --type=pd-ssd already-created-ssd-disk
Then apply a PV object that uses the statically provisioned disk, like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-for-k8s-volume
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: already-created-ssd-disk
    fsType: ext4
Then, you can claim it with a PVC like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd-demo
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
You could also use labels to refine which PVs are selected, of course, for example if you have some that are SSD and others that are regular spinning metal.
Note that the idea of using a StorageClass for static provisioning isn't really the right thing, since StorageClass is tied to how you describe storage for dynamic provisioning.
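If it helps, a possible apply-and-verify sequence for the two manifests above; the file names are assumptions, the object names come from the examples:
kubectl apply -f ssd-pv.yaml -f ssd-pvc.yaml
kubectl get pv ssd-for-k8s-volume   # STATUS should go from Available to Bound
kubectl get pvc pvc-ssd-demo        # should report Bound with 10Gi capacity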

Kubernetes - PersistentVolumeClaim failed

I have a GKE-based Kubernetes setup and a pod that requires a storage volume. I attempted to use the config below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: standard
This PVC is not provisioned. I get the below error:
Failed to provision volume with StorageClass "standard": googleapi: Error 503: The zone 'projects/p01/zones/europe-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
Looking at the GKE quotas page, I don't see any issues. Deleting other PVCs is also not solving the issue. Can anyone help? Thanks.
There is no configuration problem on your side; there are simply not enough resources in the europe-west2-b zone to create a 2 TB persistent disk. Either try a smaller volume or use a different zone.
There is an example for GCE in the docs. Create a new StorageClass specifying, say, the europe-west1-b zone (which is actually cheaper than europe-west2-b), like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-europe-west1-b
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west1-b
And modify your PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: gce-pd-europe-west1-b
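As a side note, if your cluster uses the Compute Engine CSI driver (pd.csi.storage.gke.io) rather than the in-tree kubernetes.io/gce-pd provisioner, the zones parameter is not used there; a rough equivalent, with an assumed class name, pins the zone via allowedTopologies:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-pd-europe-west1-b   # assumed name
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b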

Snapshotting on google cloud/Kubernetes when using storageClass persistent volumes

StorageClasses are the new method of specifying dynamic persistent volume claim (PVC) dependencies within Kubernetes. This avoids the need to explicitly provision one directly with the cloud provider (in my case Google Container Engine (GKE)).
Definition for the StorageClasses (GKE already has a default standard class):
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: europe-west1-b
Definition for the actual PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-pvc
  namespace: staging
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100Gi
  storageClassName: "standard"
Here is the result of kubectl get storageclass:
NAME                 TYPE
fast                 kubernetes.io/gce-pd
standard (default)   kubernetes.io/gce-pd
Here is the result of kubectl get pvc:
NAME             STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
nfs-pvc          Bound    nfs                                        1Mi        RWX                          119d
nfs-server-pvc   Bound    pvc-905a810b-3f13-11e7-82f9-42010a840072   100Gi      RWO           standard       81d
I would like to continue taking snapshots of the volumes, but the dynamically generated volume names (in this case pvc-905a810b-3f13-11e7-82f9-42010a840072) mean I cannot keep using the following command, which I had been running via cron (note that the "nfs" name is now incorrect):
gcloud compute --project "XXX-XXX" disks snapshot "nfs" --zone "europe-west1-b" --snapshot-names "nfs-${DATE}"
I guess this boils down to whether Kubernetes allows explicit disk naming through StorageClass-based PVCs. The docs don't seem to allow this. Any ideas?
One approach is to manually create the PV and give it a stable name that you can use in your scripts. You can use gcloud commands to create the underlying PD disks. When you create the PV, give it a label:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: my-pv-0
labels:
pdName: my-pv-0
spec:
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteOnce"
storageClassName: fast
gcePersistentDisk:
fsType: "ext4"
pdName: "my-pd-0"
Then attach it to the PVC using a selector:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast
  selector:
    matchLabels:
      pdName: my-pv-0
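With the stable disk name my-pd-0 from the PV above, the cron command from the question can stay essentially unchanged, for example:
gcloud compute --project "XXX-XXX" disks snapshot "my-pd-0" --zone "europe-west1-b" --snapshot-names "my-pd-0-${DATE}"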

Can one persistent volume be consumed by several persistent volume claims?

Is it correct to assume that one PV can be consumed by several PVCs, and that each pod instance needs its own PVC binding? I'm asking because I created a PV and then a PVC with different size requirements, such as:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8sdisk
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-xxxxxx
    fsType: ext4
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: couchbase-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
But when I use the PVC with the pod, it shows 200 GB available instead of 5 GB.
I'm sure I'm mixing things up, but I could not find a reasonable explanation.
When you have a PVC, it will look for a PV that satisfies its requirements, and that binding is one-to-one: a PV is not shared by multiple PVCs. Multi-access modes (supported by only a limited set of backends, e.g. NFS; details in http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes) only allow the bound claim to be mounted from multiple nodes or pods. Furthermore, the size in the PVC is not a quota on the amount of data saved to the volume during the pod's life; it is just a way to match a big enough PV, and that's it.
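For completeness, a minimal sketch of the multi-access case mentioned above, backed by NFS (the server address, export path and object names are assumptions); even here the PV binds to a single PVC, but that one ReadWriteMany claim can be mounted by many pods:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv            # assumed name
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.10            # assumed NFS server address
    path: /exports/shared        # assumed export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc           # assumed name
spec:
  storageClassName: ""           # bind statically, skip the default class
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi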