Connecting to persistent volume in Kubernetes?

I'm in the process of converting a stack to k8s. The database requires persistent storage.
I have used kubectl create -f pv.yaml
pv.yaml (with edits based on @whites11's answer):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: mongo-persisted-storage
I then create an example mongo replica set.
When I look at my k8s dashboard I see the error:
PersistentVolumeClaim is not bound: "mongo-persistent-storage-mongo-0"
(repeated 2 times)
In the Persistent Volumes tab I see the volume, which looks OK.
I'm having trouble figuring out the next step to make the volume claim happen successfully.
Edit #2
I went into the PVC page on the GUI and added a volume to the claim manually (based on feedback from @whites11). I can see that the PVC has been updated with the volume, but it is still pending.
Edit #3
I realized that since making the change suggested by @whites11, the original error message in the pod has changed. It is now "persistentvolume "pvvolume" not found (repeated 2 times)". I think I just need to figure out where I wrote pvvolume instead of pv-volume (or the "-" could have been auto-parsed out somewhere).

You need to manually bind your PV to your PVC, by adding the appropriate claimRef section to the PV spec.
In practice, edit your PV with the method you prefer, and add a section similar to this:
claimRef:
  name: mongo-persisted-storage
  namespace: <your PVC namespace>
Then, you need to edit your PVC to bind the correct volume, by adding the following to its spec section:
volumeName: "<your volume name>"
Here is an explanation of how this process works: https://docs.openshift.org/latest/dev_guide/persistent_volumes.html#persistent-volumes-volumes-and-claim-prebinding
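Putting the two halves together, a minimal sketch of the pre-bound pair using the names from the question (the storage class and size mirror the pv.yaml above; the PVC manifest itself is an assumption, since the original claim definition isn't shown):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs"
  claimRef:
    name: mongo-persisted-storage
    namespace: default
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-persisted-storage
  namespace: default
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-volume  # must match the PV name exactly (note the pvvolume / pv-volume mismatch mentioned in Edit #3)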

Related

Reuse PV in Deployment

What do I need?
A Deployment with 2 pods which read from the SAME volume (PV). The volume must be shared between the pods in RW mode.
Note: I already have a Rook Ceph cluster with a defined StorageClass "rook-cephfs" which allows this. This SC also has a Retain policy.
This is what I did:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-nginx
spec:
  accessModes:
    - "ReadWriteMany"
  resources:
    requests:
      storage: "10Gi"
  storageClassName: "rook-cephfs"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      serviceAccountName: default
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
          volumeMounts:
            - name: pvc-data
              mountPath: /data
      volumes:
        - name: pvc-data
          persistentVolumeClaim:
            claimName: data-nginx
It works! Both nginx containers share the volume.
Problem:
If I delete all the resources (except the PV) and recreate them, a NEW PV is created instead of reusing the old one. So basically, the new volume is empty.
The OLD PV gets the status "Released" instead of "Available".
I realized that if I apply a patch to the PV to remove the claimRef.uid:
kubectl patch pv $PV_NAME --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
and then redeploy, it works.
But I don't want to do this manual step. I need this automated.
I also tried the same configuration with a statefulSet and got the same problem.
Any solution?
Make sure to use reclaimPolicy: Retain in your StorageClass. It will tell Kubernetes to reuse the PV.
Ref: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
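For illustration, a sketch of a CephFS StorageClass with that reclaim policy (the provisioner name and parameters are assumptions based on a typical Rook install, not taken from the question):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com  # assumed; the prefix is the Rook operator namespace
parameters:
  clusterID: rook-ceph                      # placeholder values; depend on your Rook installation
  fsName: myfs
reclaimPolicy: Retain                       # keep the PV (and its data) after the PVC is deleted
allowVolumeExpansion: true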
But I don't want to do this manual step. I need this automated.
Based on the official documentation, it is unfortunately impossible. First look at the Reclaim Policy:
PersistentVolumes that are dynamically created by a StorageClass will have the reclaim policy specified in the reclaimPolicy field of the class, which can be either Delete or Retain. If no reclaimPolicy is specified when a StorageClass object is created, it will default to Delete.
So, we have 2 supported options for Reclaim Policy: Delete or Retain.
Delete option is not for you, because,
for volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created.
Retain option allows you for manual reclamation of the resource:
When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.
Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
Manually clean up the data on the associated storage asset accordingly.
Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.

Unable to mount NFS on Kubernetes Pod

I am working on deploying the Hyperledger Fabric test network on a Kubernetes minikube cluster. I intend to use a PersistentVolume to share crypto-config and channel artifacts among various peers and orderers. Following are my PersistentVolume.yaml and PersistentVolumeClaim.yaml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Following is the pod where the above claim is mounted on /data:
kind: Pod
apiVersion: v1
metadata:
  name: test-shell
  labels:
    name: test-shell
spec:
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "while true ; do sleep 10 ; done"]
      volumeMounts:
        - mountPath: "/data"
          name: pv
  volumes:
    - name: pv
      persistentVolumeClaim:
        claimName: persistent-volume-claim
NFS is set up on my EC2 instance. I have verified the NFS server is working fine, and I was able to mount it inside minikube. I do not understand what I am doing wrong, but any file present inside 3.128.203.245:/nfsroot is not present in test-shell:/data.
What am I missing? I even tried a hostPath mount, but to no avail. Please help me out.
I think you should check the following things to verify whether NFS is mounted successfully:
Run this command on the node where you want to mount:
$ showmount -e nfs-server-ip
In my case, $ showmount -e 172.16.10.161 gives:
Export list for 172.16.10.161:
/opt/share *
Use the df -hT command to see whether NFS is mounted or not; in my case it gives:
172.16.10.161:/opt/share  nfs4  91G  32G  55G  37%  /opt/share
If it is not mounted, use the following command:
$ sudo mount -t nfs 172.16.10.161:/opt/share /opt/share
If the above commands show an error, check whether the firewall is allowing NFS:
$ sudo ufw status
If not, allow it using:
$ sudo ufw allow from nfs-server-ip to any port nfs
I made the same setup and don't face any issues. My Fabric k8s cluster is running successfully. The Hyperledger Fabric k8s YAML files can be found in my GitHub repo, where I have deployed a consortium of banks on Hyperledger Fabric as a dynamic multi-host blockchain network, meaning you can add orgs and peers, join peers, create channels, and install and instantiate chaincode on the fly in an existing running network.
By default in minikube you should have a default StorageClass:
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
For example, NFS doesn't provide an internal provisioner, but an external provisioner can be used. There are also cases when 3rd party storage vendors provide their own external provisioner.
Change the default StorageClass
In your example this property can lead to problems.
In order to list enabled addons in minikube please use:
minikube addons list
To list all StorageClasses in your cluster use:
kubectl get sc
NAME PROVISIONER
standard (default) k8s.io/minikube-hostpath
Please note that at most one StorageClass can be marked as default. If two or more of them are marked as default, a PersistentVolumeClaim without storageClassName explicitly specified cannot be created.
In your example the most probable scenario is that you already have a default StorageClass. Applying those resources caused a new PV to be created (without a StorageClass) and a new PVC to be created (with a reference to the existing default StorageClass). In this situation there is no reference between your custom PV and PVC, so no binding occurs. As an example, please take a look:
kubectl get pv,pvc,sc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/nfs 3Gi RWX Retain Available 50m
persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX Delete Bound default/pvc-nfs standard 50m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc-nfs Bound pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 1Gi RWX standard 50m
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/standard (default) k8s.io/minikube-hostpath Delete Immediate false 103m
This example will not work due to:
new persistentvolume/nfs has been created (without reference to pvc)
new persistentvolume/pvc-8aeb802f-cd95-4933-9224-eb467aaa9871 has been created using default StorageClass. In the Claim section we can notice that this pv has been created due to dynamic pv provisioning using default StorageClass with reference to default/pvc-nfs claim (persistentvolumeclaim/pvc-nfs ).
Solution 1.
According to the information from the comments:
Also I am able to connect to it within my minikube and also my actual ubuntu system.
If you are able to mount this NFS share from inside the minikube host, i.e. you mounted the NFS share onto your minikube node, please try this example with a hostPath volume directly from your pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-shell
  namespace: default
spec:
  volumes:
    - name: pv
      hostPath:
        path: /path/shares # path to nfs mount point on minikube node
  containers:
    - name: shell
      image: ubuntu
      command: ["/bin/bash", "-c", "sleep 1000"]
      volumeMounts:
        - name: pv
          mountPath: /data
Solution 2.
If you are using PV/PVC approach:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: "" # Empty string must be explicitly set, otherwise the default StorageClass will be set / or a custom storageClassName
  nfs:
    path: "/nfsroot"
    server: "3.128.203.245"
    readOnly: false
  claimRef:
    name: persistent-volume-claim
    namespace: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-volume-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # Empty string must be explicitly set, otherwise the default StorageClass will be set / or a custom storageClassName
  volumeName: persistent-volume
Note:
If you are not referencing any provisioner associated with your StorageClass, helper programs relating to the volume type may be required for consumption of a PersistentVolume within a cluster. In this example, the PersistentVolume is of type NFS, and the helper program /sbin/mount.nfs is required to support the mounting of NFS filesystems.
Please keep in mind that when you create a PVC, the Kubernetes persistent volume controller tries to bind it to a matching PV. During this process different factors are taken into account, such as storageClassName (default/custom), accessModes, claimRef and volumeName.
In this case you can use:
PersistentVolume.spec.claimRef.name: persistent-volume-claim
PersistentVolumeClaim.spec.volumeName: persistent-volume
Note:
The control plane can bind PersistentVolumeClaims to matching PersistentVolumes in the cluster. However, if you want a PVC to bind to a specific PV, you need to pre-bind them.
By specifying a PersistentVolume in a PersistentVolumeClaim, you declare a binding between that specific PV and PVC. If the PersistentVolume exists and has not reserved PersistentVolumeClaims through its claimRef field, then the PersistentVolume and PersistentVolumeClaim will be bound.
The binding happens regardless of some volume matching criteria, including node affinity. The control plane still checks that storage class, access modes, and requested storage size are valid.
Once the PV/PVC have been created, or in case of any problem with PV/PVC binding, please use the following commands to figure out the current state:
kubectl get pv,pvc,sc
kubectl describe pv
kubectl describe pvc
kubectl describe pod
kubectl get events

Bound a PVC with status Released

Let me put you in context. I have a pod with a configuration that looks close to this:
spec:
  nodeSets:
    - name: default
      count: 3
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
            storageClassName: repd-ssd-xfs
I also have my StorageClass
apiVersion: ...
kind: StorageClass
metadata:
  name: repd-ssd-xfs
parameters:
  type: pd-ssd
  fsType: xfs
  replication-type: regional-pd
  zones: us-central1-a, us-central1-b, us-central1-f
reclaimPolicy: Retain
volumeBindingMode: Immediate
I delete the namespace of the pod and then apply it again, and I notice that the PVC my pod uses is recreated and binds to a new PV; the old PV is left in state Released. My question is: is there any way to tell the pod to use my old volume? The StorageClass policy is Retain, but does that mean I can still use a volume with status Released?
You can explicitly specify the persistent volume claim name in the pod spec if it's a deployment or a standalone pod like the code below:
volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
However, if it's a StatefulSet, it will automatically attach to the same PVC every time the pod restarts. Hope this helps.
In addition to the answer provided by @shashank tyagi:
Have a look at the documentation Persistent Volumes; in the section Retain you can find:
When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.
Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
Manually clean up the data on the associated storage asset accordingly.
Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.
It could be helpful to check the documentation Persistent volumes with Persistent Disks and this example How to set ReclaimPolicy for PersistentVolumeClaim.
UPDATE: Have a look at the article Persistent Volume Claim for StatefulSet.
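If the goal is to re-attach the old, Released PV rather than reclaim it from scratch, a common manual sketch (not taken from the answers above; the PV name is a placeholder) is to clear the stale claimRef so the PV goes back to Available, then pre-bind the new claim to it:
# The PV status should change from Released back to Available once the old claim reference is removed
kubectl patch pv <old-pv-name> --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
# Then set spec.volumeName: <old-pv-name> in the new PVC so it binds to that specific PV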

The PersistentVolume is invalid: spec: Required value: must specify a volume type

I'm trying to create a Persistent Volume on top of/based off of an existing Storage Class Name. Then I want to attach the PVC to it, so that they are bound. Running the code below will give me the "sftp-pv-claim" I want, but it is not bound to my PV ("sftp-pv-storage"); its status is "Pending".
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type". If anyone can point me in the right direction as to why I'm getting the error message, it'd be much appreciated.
Specs:
I'm creating the PV and PVC using a helm chart.
I'm using the Rancher UI to see if they are bound or not and if the PV is generated.
The storage I'm using is Ceph with Rook (to allow for dynamic provisioning of PVs).
Error:
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type".
Attempts:
I've tried using claimRef and matchLabels to no avail.
I've added "volumetype: none" to my PV specs.
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it. (Also, for deployment purposes I don't want to use hostPath.
## Create Persistent Storage for SFTP
## Ref: https://www.cloudtechnologyexperts.com/kubernetes-persistent-volume-with-rook/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
  labels:
    type: local
    name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  allowVolumeExpansion: true
  volumetype: none
---
## Create Claim (links user to PV)
## ==> If pod is created, need to automatically create PVC for user (without their input)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: sftp-pv-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The PersistentVolume "sftp-pv-storage" is invalid: spec: Requiredvalue: must specify a volume type.
In the PV manifest you must provide the type of volume. The list of all supported types is described here.
As you are using Ceph I assume you will use CephFS.
A cephfs volume allows an existing CephFS volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a cephfs volume are preserved and the volume is merely unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be "handed off" between Pods. CephFS can be mounted by multiple writers simultaneously.
An example of CephFS usage can be found on GitHub.
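As a rough sketch of what adding a volume type to the PV could look like (the monitor address, secret name and path below are placeholders, not values from the question):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  cephfs:                        # the volume type the error message is asking for
    monitors:
      - 10.16.154.78:6789        # placeholder Ceph monitor address
    path: /
    user: admin
    secretRef:
      name: ceph-secret          # placeholder Secret holding the Ceph key
    readOnly: false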
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it.
If you check the official Kubernetes docs about storageClassName:
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
The storageClassName of your PV and PVC are different.
PV:
spec:
  storageClassName: rook-ceph-block
PVC:
spec:
  storageClassName: sftp-pv-storage
Hope it will help.
You did not specify the "hostPath:" in your PersistentVolume.
Add it and the error should be resolved. See the sample below.
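A minimal sketch of that suggestion, reusing the PV from the question (the path is just an example):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:            # a volume type; satisfies "must specify a volume type"
    path: "/mnt/data"  # example path on the node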

Kubernetes Persistent Volume Claim Indefinitely in Pending State

I created a PersistentVolume sourced from a Google Compute Engine persistent disk that I already formatted and provision with data. Kubernetes says the PersistentVolume is available.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
I then created a PersistentVolumeClaim so that I could attach this volume to multiple pods across multiple nodes. However, kubernetes indefinitely says it is in a pending state.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
Any insights? I feel there may be something wrong with the selector...
Is it even possible to preconfigure a persistent disk with data and have pods across multiple nodes all be able to read from it?
I quickly realized that PersistentVolumeClaim defaults the storageClassName field to standard when not specified. However, when creating a PersistentVolume, storageClassName does not have a default, so the selector doesn't find a match.
The following worked for me:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: models-1-0-0
  labels:
    name: models-1-0-0
spec:
  capacity:
    storage: 200Gi
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
  gcePersistentDisk:
    pdName: models-1-0-0
    fsType: ext4
    readOnly: true
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-1-0-0-claim
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      name: models-1-0-0
With dynamic provisioning, you shouldn't have to create PVs and PVCs separately. In Kubernetes 1.6+, there are default provisioners for GKE and some other cloud environments, which should let you just create a PVC and have it automatically provision a PV and an underlying Persistent Disk for you.
For more on dynamic provisioning, see:
https://kubernetes.io/blog/2017/03/dynamic-provisioning-and-storage-classes-kubernetes/
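A sketch of the dynamically provisioned variant on GKE, assuming the default standard StorageClass is enabled (no PV object is created by hand; note that a dynamically provisioned disk starts empty, so this does not cover the pre-populated-disk case from the question):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: models-dynamic-claim   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce            # a dynamically provisioned GCE PD is typically requested as RWO
  storageClassName: standard   # the GKE default class backed by the GCE PD provisioner
  resources:
    requests:
      storage: 200Gi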
I had the same issue, but for another reason; that's why I am sharing it here to help the community.
If you have deleted a PersistentVolumeClaim and then re-created it with the same definition, it will be Pending forever. Why?
persistentVolumeReclaimPolicy is Retain by default in a PersistentVolume. If we have deleted the PersistentVolumeClaim, the PersistentVolume still exists and the volume is considered released. But it is not yet available for another claim because the previous claimant's data remains on the volume.
so you need to manually reclaim the volume with the following steps:
Delete the PersistentVolume (associated underlying storage asset/resource like EBS, GCE PD, Azure Disk, ...etc will NOT be deleted, still exists)
(Optional) Manually clean up the data on the associated storage asset accordingly
(Optional) Manually delete the associated storage asset (EBS, GCE PD, Azure Disk, ...etc)
If you still need the same data, you may skip cleaning and deleting the associated storage asset (steps 2 and 3 above): simply re-create a new PersistentVolume with the same storage asset definition, and then you should be good to create the PersistentVolumeClaim again.
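A sketch of that reclaim flow with kubectl, using the PV name from the question (the PVC file name is hypothetical; the GCE PD itself is untouched because the reclaim policy is Retain):
# Save the existing PV definition, delete the Released PV, then recreate it
kubectl get pv models-1-0-0 -o yaml > pv-models-1-0-0.yaml
kubectl delete pv models-1-0-0
# Remove the stale claimRef from the saved file (or just re-apply your original PV manifest),
# then recreate the PV and the PVC:
kubectl apply -f pv-models-1-0-0.yaml
kubectl apply -f pvc-models-1-0-0.yaml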
One last thing to mention, Retain is not the only option for persistentVolumeReclaimPolicy, below are some other options that you may need to use or try based on use-case scenarios:
Recycle: performs a basic scrub on the volume (e.g., rm -rf /thevolume/*) and makes it available again for a new claim. Only NFS and HostPath support recycling.
Delete: the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted.
For more information, please check kubernetes documentation.
If you still need more clarification or have any questions, please don't hesitate to leave a comment and I will be more than happy to clarify and assist.
If you're using Microk8s, you have to enable storage before you can start a PersistentVolumeClaim successfully.
Just do:
microk8s.enable storage
You'll need to delete your deployment and start again.
You may also need to manually delete the "pending" PersistentVolumeClaims because I found that uninstalling the Helm chart which created them didn't clear the PVCs out.
You can do this by first finding a list of names:
kubectl get pvc --all-namespaces
then deleting each name with:
kubectl delete pvc name1 name2 etc...
Once storage is enabled, reapplying your deployment should get things going.
I was facing the same problem, and realised that k8s actually does just-in-time provisioning, i.e.
When a pvc is created, it stays in PENDING state, and no corresponding pv is created.
The pvc & pv (EBS volume) are created only after you have created a deployment which uses the pvc.
I am using EKS with kubernetes version 1.16 and the behaviour is controlled by StorageClass Volume Binding Mode.
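The field in question is volumeBindingMode; a sketch of a StorageClass using it on EKS (the gp2/EBS provisioner shown mirrors the common EKS default and is included here as an assumption):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-delayed                       # hypothetical name
provisioner: kubernetes.io/aws-ebs        # in-tree EBS provisioner used by the default EKS gp2 class
parameters:
  type: gp2
volumeBindingMode: WaitForFirstConsumer   # PVCs stay Pending until a pod that uses them is scheduled
reclaimPolicy: Delete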
I had the same problem. My PersistentVolumeClaim YAML was originally as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  volumeName: pv
  resources:
    requests:
      storage: 1Gi
and my PVC status stayed Pending.
After removing volumeName:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
I've seen this behaviour in microk8s 1.14.1 when two PersistentVolumes have the same value for spec/hostPath/path, e.g.
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-name
  labels:
    type: local
    app: app
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/k8s-app-data"
It seems that microk8s is event-based (which isn't necessary on a one-node cluster) and throws away information about any failing operations, resulting in unnecessarily poor feedback for almost all failures.
I had this problem with the Helm chart for Apache Airflow (stable); setting storageClass to azurefile helped. What should you do in such cases with cloud providers? Just search for the storage classes that support the needed access mode. ReadWriteMany means that many processes will read and write to the storage SIMULTANEOUSLY. In this case (Azure) it is azurefile.
path: /opt/airflow/logs
## configs for the logs PVC
##
persistence:
  ## if a persistent volume is mounted at `logs.path`
  ##
  enabled: true
  ## the name of an existing PVC to use
  ##
  existingClaim: ""
  ## sub-path under `logs.persistence.existingClaim` to use
  ##
  subPath: ""
  ## the name of the StorageClass used by the PVC
  ##
  ## NOTE:
  ## - if set to "", then `PersistentVolumeClaim/spec.storageClassName` is omitted
  ## - if set to "-", then `PersistentVolumeClaim/spec.storageClassName` is set to ""
  ##
  storageClass: "azurefile"
  ## the access mode of the PVC
  ##
  ## WARNING:
  ## - must be: `ReadWriteMany`
  ##
  ## NOTE:
  ## - different StorageClass support different access modes:
  ##   https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
  ##
  accessMode: ReadWriteMany
  ## the size of PVC to request
  ##
  size: 1Gi
When you want to manually bind a PVC to a PV with an existing disk, the storageClassName should not be specified... but... the cloud provider has set the "standard" StorageClass as the default, so it always gets filled in whatever you try when patching the PVC/PV.
You can check whether your provider set it as default by running kubectl get storageclass (it will show "(default)" next to the name).
To fix this the best is to get your existing StorageClass YAML and add this annotation:
annotations:
  storageclass.kubernetes.io/is-default-class: "false"
Apply and good :)
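Equivalently (a sketch; the class name standard is the GKE/minikube default and may differ on your cluster), the annotation can be flipped without editing the full YAML:
kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'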
I am using microk8s.
I fixed the problem by running the commands below:
systemctl start open-iscsi.service
(I had installed open-iscsi earlier using apt install open-iscsi but had not started it.)
Then I enabled storage as follows:
microk8s.enable storage
Then I deleted the StatefulSets and the pending PersistentVolumeClaims from Lens so I could start over.
It worked well after that.
I faced the same issue, in which the PersistentVolumeClaim was in the Pending phase indefinitely. I tried providing the storageClassName as 'default' in the PersistentVolume, just as I did for the PersistentVolumeClaim, but it did not fix the issue.
I made one change in my persistentvolume.yml: I moved the PersistentVolumeClaim config to the top of the file, with the PersistentVolume as the second config in the YAML file. That fixed the issue.
We need to make sure that PersistentVolumeClaim is created first and the PersistentVolume is created afterwards to resolve this 'Pending' phase issue.
I am posting this answer after testing it a few times, hoping that it might help someone struggling with this.
Make sure your VM also has enough disk space.