PersistentVolumeClaim fails to create on Alicloud Kubernetes - kubernetes

I am trying to create a dynamic storage volume on Kubernetes in Alicloud.
First, I created a storage class.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: alicloud-pv-class
provisioner: alicloud/disk
parameters:
  type: cloud_ssd
  regionid: cn-beijing
  zoneid: cn-beijing-b
Then I tried creating a persistent volume claim as below.
apiVersion: v1
kind: List
items:
- kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: node-pv
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: alicloud-pv-class
    resources:
      requests:
        storage: 64Mi
Creation of the persistent volume fails with the following error.
Warning ProvisioningFailed 0s alicloud/disk alicloud-disk-controller-68dd8f98cc-z6ql5 5ef317c7-f110-11e8-96de-0a58ac100006 Failed to provision volume with StorageClass "alicloud-pv-class": Aliyun API Error: RequestId: 7B2CA409-3FDE-4BA1-85B9-80F15109824B Status Code: 400 Code: InvalidParameter Message: The specified parameter "Size" is not valid.
I am not sure where this Size parameter is specified. Has anyone come across a similar problem?

As pointed out in the docs, the minimum size for SSD is 20Gi, so I'd suggest changing storage: 64Mi to storage: 20Gi to fix it.
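For reference, a sketch of the claim from the question with only the request size raised to the cloud_ssd minimum:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: node-pv
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: alicloud-pv-class
  resources:
    requests:
      storage: 20Gi  # cloud_ssd minimum per the Alicloud docs
```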

Related

Volume GKE invalid disk size

I'm trying to create a pod with a 10 GB persistent disk volume, but it seems I cannot create a disk under 200 GB.
I can see the PV listed, but the PVC stays Pending. I can see that the PV is Available, so I cannot understand what's happening.
Please find info below:
Invalid value for field 'resource.sizeGb': '10'. Disk size cannot be smaller than 200 GB., invalid
kubectl get pvc -n vault-ppd
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-vault-ppd-claim Pending balanced-persistent-disk 2m45s
kubectl get pv -n vault-ppd
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-vault-ppd 10Gi RWO Retain Available vault/pv-vault-ppd-claim
My manifest vault-ppd.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: balanced-persistent-disk
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - europe-west1-b
    - europe-west1-c
    - europe-west1-d
---
apiVersion: v1
kind: Namespace
metadata:
  name: vault-ppd
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-ppd
  namespace: vault-ppd
  labels:
    app.kubernetes.io/name: vault-ppd
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vault-ppd
spec:
  storageClassName: "balanced-persistent-disk"
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  claimRef:
    namespace: vault
    name: pv-vault-ppd-claim
  gcePersistentDisk:
    pdName: gke-vault-volume
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-vault-ppd-claim
  namespace: vault-ppd
spec:
  storageClassName: "balanced-persistent-disk"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
Thanks for the help, guys.
Your deployment uses persistent disks of type pd-standard with replication-type: regional-pd, which means the volumes are created as regional persistent disks. As mentioned in the documentation, the minimum capacity per disk for regional persistent disks is 200 GB. You cannot create a regional-pd standard disk with a lower size. The workaround is to either create the PVC with a larger size or use pd-ssd instead.
Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
Refer to Regional Persistent Disks for more information.
pdName: gke-vault-volume should be a regionally replicated disk with size >= 200 GB; if it is, you can just update your PVC/PV with the correct size. If it is not, you can set storageClassName: "" in both the PVC and PV to use the default StorageClass, which provides a standard (zonal) disk.
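A minimal sketch of the size fix, assuming you keep the regional-pd StorageClass from the manifest above (the PV's capacity would need the same change):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-vault-ppd-claim
  namespace: vault-ppd
spec:
  storageClassName: "balanced-persistent-disk"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi  # regional pd-standard minimum is 200 GB
```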

How to use a storage class for a StatefulSet? Do I have to create a PVC?

How do I use a storage class for a StatefulSet? I've created the StorageClass. I also created a PVC, but I'm a bit confused whether the PVC needs to be created at all, since the PVC already requests storage and volumeClaimTemplates also requests storage. Either way, it's not working, with or without the PVC.
I get the following error:
create Pod dbhost001-0 in StatefulSet dbhost001 failed error: failed to create PVC mysql-dev-dbhost001-0: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
create Claim mysql-dev-dbhost001-0 for Pod dbhost001-0 in StatefulSet dbhost001 failed error: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
storageClass.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Statefultset.yml:
apiVersion: apps/v1
kind: StatefulSet
....
....
volumeClaimTemplates:
- metadata:
    name: mysql-dev
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: ebs-sc
    resources:
      requests:
        stroage: 2Gi
I'm not sure if the PVC is needed. I was using it for a normal ReplicaSet deployment, but I'm not sure if a StatefulSet needs it.
PersistentVolumeClaim.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-dev
  namespace: test-db-dev
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 2Gi
Figured it out.
First, there was a typo in Statefultset.yml: it should be storage instead of stroage.
Second, there is no need for a separate PersistentVolumeClaim, since volumeClaimTemplates does the same thing: it claims storage from the storage class.
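For reference, the corrected volumeClaimTemplates section from the manifest above would look like this:

```yaml
volumeClaimTemplates:
- metadata:
    name: mysql-dev
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: ebs-sc
    resources:
      requests:
        storage: 2Gi  # was misspelled "stroage"
```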

Getting an error when trying to create a persistent volume

I am trying to create a persistent volume on my kubernetes cluster running on an Amazon AWS EC2 instance (Ubuntu 18.04). I'm getting an error from kubectl when trying to create it.
I've tried looking up the error but I'm not getting any satisfactory search results.
Here is the pv.yaml file that I'm using.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
    storageClassName: manual
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
Here's the error that I am getting:
Error from server (BadRequest): error when creating "./mysql-pv.yaml":
PersistentVolume in version "v1" cannot be handled as a
PersistentVolume: v1.PersistentVolume.Spec:
v1.PersistentVolumeSpec.PersistentVolumeSource: HostPath: Capacity:
unmarshalerDecoder: quantities must match the regular expression
'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte
of ...|":"manual"},"hostPat|..., bigger context ...|city":
{"storage":"1Gi","storageClassName":"manual"},"hostPath":
{"path":"/home/ubuntu/data/pv001"},"p|...
I cannot figure out from the message what the actual error is.
Any help appreciated.
Remove the storage class from the PV definition. A storage class is needed for dynamic provisioning of PVs.
In your case, you are using a hostPath volume; it should work without a storage class.
If you are on k8s 1.14, then look at local volumes. Refer to the link below:
https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/
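A sketch of the PV from the question with the storageClassName line dropped, per the suggestion above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
```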
I don't think it's related to having quotes in the path. It's more about using the right indentation for storageClassName. It should be this instead:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
You can remove it too, and it will use the default StorageClass
Try this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  storageClassName: manual
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/ubuntu/data/pv001
storageClassName belongs under spec, at the same level as capacity (you put storageClassName under capacity, which is wrong).
Read more: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Kubernetes - PersistentVolumeClaim failed

I have a GKE based Kubernetes setup and a POD that requires a storage volume. I attempt to use the config below:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: standard
This PVC is not provisioned. I get the below error:
Failed to provision volume with StorageClass "standard": googleapi: Error 503: The zone 'projects/p01/zones/europe-west2-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
Looking at the GKE quotas page, I don't see any issues. Deleting other PVCs also does not solve the problem. Can anyone help? Thanks.
There is no configuration problem on your side: there are simply not enough resources in the europe-west2-b zone to create a 2 TB persistent disk. Either try a smaller volume or use a different zone.
There is an example for GCE in the docs. Create a new StorageClass specifying, say, the europe-west1-b zone (which is actually cheaper than europe-west2-b), like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-pd-europe-west1-b
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: europe-west1-b
And modify your PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-scratch-space
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2000Gi
  storageClassName: gce-pd-europe-west1-b

Snapshotting on google cloud/Kubernetes when using storageClass persistent volumes

StorageClasses are the new method of specifying dynamic persistent volume claim (PVC) dependencies within Kubernetes. This avoids the need to explicitly provision one directly with the cloud provider (in my case Google Container Engine (GKE)).
Definition of the StorageClass (GKE already has a default for the standard class):
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: europe-west1-b
Definition for the actual PVC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-server-pvc
  namespace: staging
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 100Gi
  storageClassName: "standard"
Here is the result of kubectl get storageclass:
NAME TYPE
fast kubernetes.io/gce-pd
standard (default) kubernetes.io/gce-pd
Here is the result of kubectl get pvc:
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
nfs-pvc Bound nfs 1Mi RWX 119d
nfs-server-pvc Bound pvc-905a810b-3f13-11e7-82f9-42010a840072 100Gi RWO standard 81d
I would like to continue taking snapshots of the volumes, but the dynamic nature of the generated volume names (in this case pvc-905a810b-3f13-11e7-82f9-42010a840072) means I cannot keep using the following command, which I had been running via cron (note that the "nfs" name is now incorrect):
gcloud compute --project "XXX-XXX" disks snapshot "nfs" --zone "europe-west1-b" --snapshot-names "nfs-${DATE}"
I guess this boils down to whether Kubernetes allows explicit naming through StorageClass-based PVCs. The docs don't seem to allow this. Any ideas?
One approach is to manually create the PV and give it a stable name that you can use in your scripts. You can use gcloud commands to create the underlying PD disks. When you create the PV, give it a label:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
  name: my-pv-0
  labels:
    pdName: my-pv-0
spec:
  capacity:
    storage: "10Gi"
  accessModes:
  - "ReadWriteOnce"
  storageClassName: fast
  gcePersistentDisk:
    fsType: "ext4"
    pdName: "my-pd-0"
Then attach it to the PVC using a selector:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast
  selector:
    matchLabels:
      pdName: my-pv-0