GKE Persistent Volume Not Saving Data - kubernetes

I have created a Persistent Volume and Volume Claim for an app I am working on in GKE. The claim and storage appear to be set up correctly; however, the data doesn't persist if the pod is restarted. I am able to save data initially and I can see the file in the pod, but it disappears after a restart.
I had asked this question previously, but didn't include my .yaml files and received a sort of generic answer as a result so I decided to repost with the .yaml files hoping someone could look at them and tell me where I am going wrong. From everything I've seen, it looks like the problem is in the Persistent Volume as the claim looks exactly like everyone else's.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod-api-meta-uploads-k8s
  namespace: default
  resourceVersion: "4500192"
  selfLink: /apis/apps/v1/namespaces/default/deployments/prod-api-meta-uploads-k8s
  uid: *******
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: prod-api-meta-uploads-k8s
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        gcb-build-id: *****
        gcb-trigger-id: ****
      creationTimestamp: null
      labels:
        app: prod-api-meta-uploads-k8s
        app.kubernetes.io/managed-by: gcp-cloud-build-deploy
        app.kubernetes.io/name: prod-api-meta-uploads-k8s
        app.kubernetes.io/version: becdb864864f25d2dcde2e62a2f70501cfd09f19
    spec:
      containers:
      - image: bitbucket.org/api-meta-uploads-k8s@sha256:7766413c0d
        imagePullPolicy: IfNotPresent
        name: prod-api-meta-uploads-k8s-sha256-1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /uploads/profileImages
          name: uploads-volume-prod
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: uploads-volume-prod
        persistentVolumeClaim:
          claimName: my-disk-claim-1
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-09-08T21:00:40Z"
    lastUpdateTime: "2020-09-10T04:54:27Z"
    message: ReplicaSet "prod-api-meta-uploads-k8s-5c8f66f886" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-09-10T06:49:41Z"
    lastUpdateTime: "2020-09-10T06:49:41Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 36
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
** Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2020-09-09T16:12:51Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: uploads-volume-prod
  namespace: default
  resourceVersion: "4157429"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/uploads-volume-prod
  uid: f93e6134
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: standard
  volumeMode: Filesystem
  volumeName: pvc-f93e6
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 30Gi
  phase: Bound
*** PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: my-disk-claim-1
  namespace: default
  resourceVersion: "4452471"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/my-disk-claim-1
  uid: d533702b
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: fast
  volumeMode: Filesystem
  volumeName: pvc-d533702b
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  phase: Bound

As you are using GKE, you don't need to prepare a PersistentVolume and PersistentVolumeClaim manually (static provisioning) in a 1:1 relationship, because GKE can use Dynamic Volume Provisioning.
It's well described in Persistent Volumes:
When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC.
In GKE you start with at least one StorageClass, named standard, which has (default) next to its name.
$ kubectl get sc
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   110m
This means that if you don't specify storageClassName in your PersistentVolumeClaim, the StorageClass set as default will be used. In your YAMLs I can see that you have used storageClassName: standard. If you check this StorageClass, you will see that its ReclaimPolicy is set to Delete. Output below:
$ kubectl describe sc standard
Name: standard
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/gce-pd
Parameters: type=pd-standard
AllowVolumeExpansion: True
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
IsDefaultClass: means this StorageClass is set as the default.
ReclaimPolicy: the reclaim policy, Delete in this case.
As ReclaimPolicy is set to Delete:
For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created.
Depending on your needs, you can use:
Recycle:
If supported by the underlying volume plugin, the Recycle reclaim policy performs a basic scrub (rm -rf /thevolume/*) on the volume and makes it available again for a new claim.
However, please keep in mind the warning: The Recycle reclaim policy is deprecated. Instead, the recommended approach is to use dynamic provisioning. I am adding this option as I didn't see which Kubernetes version you are using; in any case, it's not supported on GKE.
Retain:
The Retain reclaim policy allows for manual reclamation of the resource. When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume.
In addition, as you are using GKE, it supports only Delete and Retain.
The StorageClass "another-storageclass" is invalid: reclaimPolicy: Unsupported value: "Recycle": supported values: "Delete", "Retain"
In addition, as you specified revisionHistoryLimit: 10, the pod will be recreated after 10 restarts; in that situation the pod, PV and PVC will be deleted when ReclaimPolicy is set to Delete.
Solution
The easiest solution is to create a new StorageClass with a ReclaimPolicy other than Delete and use it in your PVC.
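For example, a minimal sketch of such a StorageClass (named another-storageclass to match the output below; the pd-standard disk type is an assumption, adjust as needed):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: another-storageclass
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard   # assumed disk type; pd-ssd is the other common option
reclaimPolicy: Retain
volumeBindingMode: Immediate
Then reference it from your PVC with storageClassName: another-storageclass instead of standard or fast.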
$ kubectl get sc,pv,pvc -A
NAME                                                PROVISIONER            AGE
storageclass.storage.k8s.io/another-storageclass    kubernetes.io/gce-pd   53s
storageclass.storage.k8s.io/standard (default)      kubernetes.io/gce-pd   130m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS           REASON   AGE
persistentvolume/pvc-67c35c06-3f38-4f55-98c8-6b2b41ae5313   1Gi        RWO            Retain           Bound    tst-dev/pvc-1     another-storageclass            43s
persistentvolume/pvc-be30a43f-e96c-4c9f-8863-464823899a8f   1Gi        RWO            Retain           Bound    tst-stage/pvc-2   another-storageclass            42s

NAMESPACE   NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
tst-dev     persistentvolumeclaim/pvc-1   Bound    pvc-67c35c06-3f38-4f55-98c8-6b2b41ae5313   1Gi        RWO            another-storageclass   46s
tst-stage   persistentvolumeclaim/pvc-2   Bound    pvc-be30a43f-e96c-4c9f-8863-464823899a8f   1Gi        RWO            another-storageclass   45s
$ kubectl delete pvc pvc-1 -n tst-dev
persistentvolumeclaim "pvc-1" deleted
user@cloudshell:~ (project)$ kubectl delete pvc pvc-2 -n tst-stage
persistentvolumeclaim "pvc-2" deleted
user@cloudshell:~ (project)$ kubectl get sc,pv,pvc -A
NAME                                                PROVISIONER            AGE
storageclass.storage.k8s.io/another-storageclass    kubernetes.io/gce-pd   7m49s
storageclass.storage.k8s.io/standard (default)      kubernetes.io/gce-pd   137m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM             STORAGECLASS           REASON   AGE
persistentvolume/pvc-67c35c06-3f38-4f55-98c8-6b2b41ae5313   1Gi        RWO            Retain           Released   tst-dev/pvc-1     another-storageclass            7m38s
persistentvolume/pvc-be30a43f-e96c-4c9f-8863-464823899a8f   1Gi        RWO            Retain           Released   tst-stage/pvc-2   another-storageclass            7m37s

Related

How to Deploy an existing EBS volume to EKS PVC?

I have an existing EBS volume in AWS with data on it. I need to create a PVC in order to use it in my pods.
Following this guide: https://medium.com/pablo-perez/launching-a-pod-with-an-existing-ebs-volume-mounted-in-k8s-7b5506fa7fa3
persistentvolume.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: jenkins-volume
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 60Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-011111111x
    fsType: ext4
[$$]>kubectl describe pv
Name: jenkins-volume
Labels: type=amazonEBS
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 60Gi
Node Affinity: <none>
Message:
Source:
Type: AWSElasticBlockStore (a Persistent Disk resource in AWS)
VolumeID: vol-011111111x
FSType: ext4
Partition: 0
ReadOnly: false
Events: <none>
persistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-pvc-shared4
  namespace: jenkins
spec:
  storageClassName: gp2
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 60Gi
[$$]>kubectl describe pvc jenkins-pvc-shared4 -n jenkins
Name: jenkins-pvc-shared4
Namespace: jenkins
StorageClass: gp2
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type    Reason                Age                From                         Message
----    ------                ----               ----                         -------
Normal  WaitForFirstConsumer  12s (x2 over 21s)  persistentvolume-controller  waiting for first consumer to be created before binding
[$$]>kubectl get pvc jenkins-pvc-shared4 -n jenkins
NAME                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jenkins-pvc-shared4   Pending                                      gp2            36s
The status is Pending (waiting for the consumer to be attached), but it should already be provisioned.
The "waiting for consumer" message suggests that your StorageClass has its volumeBindingMode set to waitForFirstConsumer.
The default value for this setting is Immediate: as soon as you register a PVC, your volume provisioner would provision a new volume.
The waitForFirstConsumer on the other hand would wait for a Pod to request usage for said PVC, before the provisioning a volume.
The messages and behavior you're seeing here sound normal. You may create a Deployment mounting that volume, to confirm provisioning works as expected.
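For instance, a minimal Pod (instead of a full Deployment) that mounts the claim and should trigger binding; this is a sketch, and the pod name, image and mount path are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: pvc-consumer-test    # hypothetical name, just to trigger binding
  namespace: jenkins
spec:
  containers:
  - name: app
    image: busybox           # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data       # placeholder mount path
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: jenkins-pvc-shared4
Once this Pod is scheduled, the claim should leave Pending and bind.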
try fsType "xfs" instead of ext4
StorageClass is empty for your PV.
According to your guide, you created the StorageClass "standard", so add the following to your PersistentVolume spec
storageClassName: standard
and also set it in the PersistentVolumeClaim instead of gp2.
The right config should be:
[$$]>cat persistentvolume2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-name
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://eu-west-2a/vol-123456-ID
  capacity:
    storage: 60Gi
  persistentVolumeReclaimPolicy: Retain
  storageClassName: gp2
[$$]>cat persistentVolumeClaim2.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: new-namespace
  labels:
    app.kubernetes.io/name: <app-name>
  name: pvc-name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 60Gi
  storageClassName: gp2
  volumeName: pv-name

Kubernetes: 2 PVCs in 2 namespaces binding to the same PV, one successful, one failed

So I have 2 PVCs in 2 namespaces binding to 1 PV:
The following are the PVCs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-dev
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-stage
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
and the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /git
in the namespace "mlo-dev", the binding is successful:
$ kubectl describe pvc pvc-git -n mlo-dev
Name: pvc-git
Namespace: mlo-dev
StorageClass: mlo-git
Status: Bound
Volume: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events: <none>
Whereas in the namespace "mlo-stage", the binding fails with the error message: storageclass.storage.k8s.io "mlo-git" not found
$ kubectl describe pvc pvc-git -n mlo-stage
Name: pvc-git
Namespace: mlo-stage
StorageClass: mlo-git
Status: Pending
Volume:
Labels: type=local
Annotations: Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events:
Type     Reason              Age                   From                         Message
----     ------              ----                  ----                         -------
Warning  ProvisioningFailed  3m4s (x302 over 78m)  persistentvolume-controller  storageclass.storage.k8s.io "mlo-git" not found
As far as I know, a PV is not scoped to a namespace, so shouldn't it be possible for PVCs in different namespaces to bind to the same PV?
+++++
Added:
+++++
when "kubectl describe pv pv-git", I got the following:
$ kubectl describe pv pv-git
Name: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: mlo-git
Status: Bound
Claim: mlo-dev/pvc-git
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /git
HostPathType:
Events: <none>
I've tried to reproduce your scenario (although for an exact reproduction you would need to provide your StorageClass YAML; I also changed the AccessMode for my tests), and in my opinion this behavior is correct (it works as designed).
If you want to check whether a specific object is namespaced, you can use the command:
$ kubectl api-resources | grep pv
persistentvolumeclaims   pvc   true    PersistentVolumeClaim
persistentvolumes        pv    false   PersistentVolume
As PVC shows true, it is namespaced; PV is not.
PersistentVolumeClaim and PersistentVolume bind in a 1:1 relationship. Once your first PVC is bound to the PV, that PV is taken and cannot be used again at that moment. You should create a second PV. This can change depending on the reclaimPolicy and what happens to the pod/deployment.
I guess you are using Static provisioning.
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
In this case you have to create one PV for each PVC.
If you were using a cloud environment, you would use Dynamic provisioning.
When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur.
For example, I tried to reproduce it on GKE and one PVC bound to the PV. As GKE uses Dynamic provisioning, when only the PVC was defined it used the default StorageClass and a PV was created automatically.
$ kubectl get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pv-git                                     1Gi        RWO            Retain           Bound    mlo-dev/pvc-git     mlo-git                 15s
persistentvolume/pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0   1Gi        RWO            Delete           Bound    mlo-stage/pvc-git   mlo-git                 6s

NAMESPACE   NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mlo-dev     persistentvolumeclaim/pvc-git   Bound    pv-git                                     1Gi        RWO            mlo-git        10s
mlo-stage   persistentvolumeclaim/pvc-git   Bound    pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0   1Gi        RWO            mlo-git        9s
Solution
To fix this issue, you should create another PersistentVolume to bind the second PVC.
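For example, a sketch of a second hostPath PV for the mlo-stage claim (the name pv-git-stage is a placeholder; point the path at a separate directory if the two claims should not share data):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git-stage   # hypothetical name for the second PV
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /git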
For more details about binding you can check this topic. If you would like more information about PVCs, check this SO thread.
If a second PV doesn't help, please provide more details about your environment (Minikube/kubeadm, Kubernetes version, OS, etc.) and your StorageClass YAML.

Why the kubernetes pvc resize can't work with filesystem?

I want to use the Kubernetes feature of dynamically resizing a PVC. After I edit the PVC size to a larger one, only the PV size changes, but the PVC status is still FileSystemResizePending. My Kubernetes version is 1.15.3; in the normal situation the filesystem would expand automatically. Even if I recreate the pod, the PVC status is still FileSystemResizePending, and the size does not change.
The CSI driver is aws-ebs-csi-driver, version is alpha.
Kubernetes version is 1.15.3.
Feature-gates like this:
--feature-gates=ExpandInUsePersistentVolumes=true,CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true,VolumeSnapshotDataSource=true,ExpandCSIVolumes=true
The StorageClass file is:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
PV status:
kubectl describe pv pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
Name: pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
Finalizers: [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass: ebs-sc
Status: Bound
Claim: default/test
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 25Gi
Node Affinity:
Required Terms:
Term 0: topology.ebs.csi.aws.com/zone in [ap-southeast-1b]
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: ebs.csi.aws.com
VolumeHandle: vol-0beb77489a4b06f4c
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1568278824948-8081-ebs.csi.aws.com
Events: <none>
PVC status:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
  creationTimestamp: "2019-09-12T09:08:09Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: test
  name: test
  namespace: default
  resourceVersion: "5467113"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/test
  uid: 44bbcd26-2d7c-4e42-a426-7803efb6a5e7
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 25Gi
  storageClassName: ebs-sc
  volumeMode: Filesystem
  volumeName: pvc-44bbcd26-2d7c-4e42-a426-7803efb6a5e7
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-09-12T09:10:29Z"
    message: Waiting for user to (re-)start a pod to finish file system resize of
      volume on node.
    status: "True"
    type: FileSystemResizePending
  phase: Bound
I expect the PVC size to change to the value I specified, but the PVC status always stays FileSystemResizePending.
Right in your PVC status you can see the reason:
message: Waiting for user to (re-)start a pod to finish file system resize of
volume on node
You should restart the pod that uses that PV; this will cause a remount of the PV, and the filesystem will be resized before the next mount.
Not all filesystems can be resized on the fly, so I think that is just compatibility behavior. It is also safer that way.
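For example, if the pod is managed by a Deployment, one way to trigger that restart is the following (a sketch; the Deployment name test-deployment is a placeholder, and kubectl rollout restart requires kubectl 1.15+):
$ kubectl rollout restart deployment test-deployment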
Basically, down the line resizefs resizes the filesystem to the amount of space requested, which may take time depending on how much space was requested.
You should not delete the pod, since unmounting the volume is not supported while a resize is in progress. It gives the following error:
Output: umount: /var/nutanix/var/lib/kubelet/pods/36862d8a-e0bf-4d0f-bdd3-c897a4ed5ccd/volumes/kubernetes.io~csi/pvc-acad4f90-9811-4371-9512-3e14ed1cbc64/mount: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
Even if the pod is deleted and gets stuck in a terminating state, it will ultimately terminate when the filesystem resize completes, and a new pod will be created.

Kubernetes : PVC binding status in pending

I created a PV and claimed it through a PVC. I see that the PV is created, but the PVC binding status is stuck in Pending. When I look at the describe pvc output, I see "no persistent volumes available for this claim and no storage class is set". From the documentation I understand that a storage class isn't mandatory, so I am unsure what's missing in the PVC file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  labels:
    pv: pv-ghost
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 3Gi
  hostPath:
    path: /ghost/data
--------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ghost
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: pv-ghost
Output of describe for the PV and PVC:
kubectl describe pv pv-ghost
Name: pv-ghost
Labels: pv=pv-ghost
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 3Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /ghost/data
HostPathType:
Events: <none>
kubectl describe pvc pvc-ghost
Name: pvc-ghost
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type    Reason         Age                  From                         Message
----    ------         ----                 ----                         -------
Normal  FailedBinding  8m44s (x8 over 10m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Normal  FailedBinding  61s (x5 over 2m3s)   persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
Mounted By: <none>
You need to specify the volume source manually.
ReadWriteMany is only available for AzureFile, CephFS, Glusterfs, Quobyte, NFS, PortworxVolume.
FlexVolume also supports it, depending on the driver, and VsphereVolume works when pods are collocated.
You can read it all in the Kubernetes docs regarding Volume Mode.
An example PV for AWS would look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-volume
spec:
  capacity:
    storage: 15Gi # Doesn't really matter, as EFS does not enforce it anyway
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  mountOptions:
  - hard
  - nfsvers=4.1
  - rsize=1048576
  - wsize=1048576
  - timeo=300
  - retrans=2
  nfs:
    path: /
    server: fs-XXX.efs.eu-central-2.amazonaws.com
In the above issue,
The capacity specified in the PersistentVolume is less than the PersistentVolumeClaim capacity. Try either increasing the capacity in the PersistentVolume to 5Gi or reducing the capacity in the PersistentVolumeClaim to 3Gi.
When you are using a hostPath in the PersistentVolume, the accessModes should be ReadWriteOnce.
The hostPath method is currently not supported in a multi-node cluster.
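Under those suggestions, a sketch of the corrected manifests (capacities matched at 5Gi and accessModes changed to ReadWriteOnce; otherwise unchanged from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  labels:
    pv: pv-ghost
spec:
  accessModes:
  - ReadWriteOnce   # hostPath volumes should use ReadWriteOnce
  capacity:
    storage: 5Gi    # raised to match the claim
  hostPath:
    path: /ghost/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ghost
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: pv-ghost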

VolumeClaim in Kubernetes Google Cloud

I am trying to create both a PersistentVolume and a PersistentVolumeClaim on Google Kubernetes Engine.
The way to link them is via labelSelector.
I am creating the objects with this definition:
volume.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
and running:
kubectl apply -f volume.yml
Both objects are created successfully; however, the VolumeClaim apparently stays Pending forever, awaiting a Volume that matches its requirements.
Could you please help me?
Thanks!
First of all, PersistentVolume resources don’t belong to any namespace. They’re cluster-level resources like nodes, but PersistentVolumeClaim objects can only be created in a specific namespace.
Seems like when you created the claim earlier, it was immediately bound to the PersistentVolume. Can you show output of the commands:
$ kubectl get pv
$ kubectl get pvc
Most likely your persistentVolumeReclaimPolicy was set to Retain, so your PersistentVolume is in Released status now. Since there is no other PersistentVolume resource matching your claim's requirements, your PersistentVolumeClaim is in Pending status.
Thanks for your help @konstantin-vustin
I found the solution. I had to specify the storageClassName: manual attribute in the spec of both objects.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
According to the doc
A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
So IMO it should have worked before; I am not sure I have understood it correctly.
This was the status before
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-test-vol   2Gi        RWO            Retain           Available           manual                  26s

NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-test   Pending                                      standard       26s
The updated definitions
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
This is the status after
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pv-test-vol   2Gi        RWO            Retain           Bound    openwhisk/pv-test   manual                  4s

NAME      STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv-test   Bound    pv-test-vol   2Gi        RWO            manual         4s