Kubernetes PVCs sharing a single PV?

I am trying to deploy a PersistentVolume for 3 pods to work on, and I want to use the cluster's node storage, i.e. not external storage like an EBS volume spun up for it.
To achieve this I ran the following experiments:
1) I applied only the PVC resource defined below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
This spins up storage via the default storage class, which in my case was DigitalOcean's block storage, so it created a 1Gi volume.
2) Created a PV resource and a PVC resource as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
After this I see my claim is bound.
pavan@p1:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv1 Bound task-pv-volume 10Gi RWO manual 2m5s
pavan@p1:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 118m
pavan@p1:~$ kubectl describe pvc
Name: pv1
Namespace: default
StorageClass: manual
Status: Bound
Volume: task-pv-volume
Labels: io.kompose.service=pv1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 28s (x8 over 2m2s) persistentvolume-controller storageclass.storage.k8s.io "manual" not found
Below are my questions that I am hoping to get answers/pointers to:
1) The above warning, that the storage class could not be found: do I need to create one? If so, can you tell me why and how, or give any pointer? (Somehow this link does not cover that: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
2) Notice the PV has a storage capacity of 10Gi and the PVC requests 1Gi, but still the PVC was bound with 10Gi capacity? Can't I share the same PV capacity with other PVCs?
3) For question 2: if I have to create different PVs for different PVCs with the required capacity, do I have to create a storage class as well? Or use the same storage class and selectors to select the corresponding PV?

I tried to reproduce the behavior to answer all your questions. However, I don't have access to DigitalOcean, so I tested on GKE.
The above warning, that the storage class could not be found: do I need to create one?
According to the documentation and best practices, it is highly recommended to create a StorageClass and then create the PV / PVC based on it. However, there is also something called manual provisioning, which is what you did in this case.
Manual provisioning is when you create a PV manually first, and then a PVC with a matching spec.storageClassName field. Examples:
If you create a PVC with no default StorageClass on the cluster, no PV, and no storageClassName parameter (AFAIK kubeadm does not provide a default StorageClass), the PVC will be stuck in Pending with the event: no persistent volumes available for this claim and no storage class is set.
If you create a PVC without a storageClassName parameter on a cluster that has a default StorageClass, the volume will be provisioned from that default StorageClass.
If you create a PVC with a storageClassName parameter that refers to a StorageClass that does not exist (whether in the cloud, on Minikube, or with kubeadm), the PVC will also be Pending with the warning: storageclass.storage.k8s.io "manual" not found.
However, if you then create a PV with the same storageClassName parameter, the PVC will be bound shortly afterwards.
Example:
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Available manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Pending manual 4m12s
...
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Bound task-pv-volume 10Gi RWO manual 4m17s
The disadvantage of manual provisioning is that you have to create a PV for each PVC. If you use a StorageClass, you can just create the PVC.
If so, can you tell me why and how, or give any pointer?
You can use the documentation examples or check here. As you are using a cloud provider with a default StorageClass, you can export it to YAML with:
$ kubectl get sc -o yaml > storageclass.yaml
If you have more than one, you have to specify which one. StorageClass names can be obtained with $ kubectl get sc.
Later you can refer to the K8s API to customize your StorageClass.
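For reference, a minimal sketch of a StorageClass for statically provisioned volumes; the name matches the storageClassName used above, and kubernetes.io/no-provisioner means no dynamic provisioning happens, you still create the PVs by hand:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # optional; delays binding until a pod uses the claim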
Notice the PV has a storage capacity of 10Gi and the PVC requests 1Gi, but still the PVC was bound with 10Gi capacity?
You manually created a PV with 10Gi and the PVC requested 1Gi. As PVCs and PVs bind 1:1, the PVC searched for a PV that meets all its conditions and bound to it. The PVC (pv1) requested 1Gi and the PV (task-pv-volume) met those requirements, so Kubernetes bound them. Unfortunately, much of the space is wasted in this case.
Can't I share the same PV capacity with other PVCs?
Unfortunately, you cannot bind 2 PVCs to 1 PV, as the relationship between PVC and PV is 1:1, but you can configure many pods/deployments to use the same PVC.
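For example, a rough sketch of two pods using the same PVC (the pod names and image are illustrative; note that with ReadWriteOnce both pods would have to be scheduled on the same node):
apiVersion: v1
kind: Pod
metadata:
  name: app-a                # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: pv1         # the PVC from the question
---
apiVersion: v1
kind: Pod
metadata:
  name: app-b                # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: pv1         # same PVC shared by both pods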
I can advise you to look at this Stack Overflow case, as it explains the AccessMode specifics very well.
If I have to create different PVs for different PVCs with the required capacity, do I have to create a storage class as well? Or use the same storage class and selectors to select the corresponding PV?
As I mentioned before, if you manually create a PV with a specific size and bind a PVC that requests less to it, the extra space is wasted. So you either have to create PVs and PVCs with the same resource request, or let a StorageClass size the storage based on the PVC request.

Yes, you have to create a storage class (check the documentation), but I believe DigitalOcean provides a default storage class; you can check it with:
kubectl get storageclasses
You can share one PV, but only with read-only access; if you need write access for all pods, you have to create multiple PVs (check the documentation).

Related

Why does a matching Persistent Volume not bind to a matching Persistent Volume Claim (using k3s)?

I have what seems like a straightforward PV and PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-pvc
spec:
  storageClassName: ""
  volumeName: www-pv
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: www-pv
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
  capacity:
    storage: 1Mi
  accessModes:
  - ReadOnlyMany
  nfs:
    server: 192.168.1.100
    path: "/www"
For some reason these do not bind to each other and the PVC stays "pending" forever:
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/www-pv 1Mi ROX Retain Available /www-pvc 107m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/www-pvc Pending www-pv 0 107m
How can I debug the matching? Which service does the matching in k3s? Would I be looking in the log of the k3s binary (running as a service under Debian)?
In the Kubernetes documentation about Persistent Volumes you can find this information:
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources.
In the Binding section you have this information:
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
In the OpenShift documentation, Volume and Claim Pre-binding, you can find information that when you use pre-binding you skip some of the matching.
If you know exactly what PersistentVolume you want your PersistentVolumeClaim to bind to, you can specify the PV in your PVC using the volumeName field. This method skips the normal matching and binding process. The PVC will only be able to bind to a PV that has the same name specified in volumeName. If such a PV with that name exists and is Available, the PV and PVC will be bound regardless of whether the PV satisfies the PVC’s label selector, access modes, and resource requests.
Issue 1
In your PV configuration you set
capacity:
  storage: 1Mi
which means that you have a volume of 1Mi, which is ~1.05 MB.
Your PVC was configured to request 1Gi, which is ~1.07 GB.
resources:
  requests:
    storage: 1Gi
Your PV didn't fulfill your PVC request.
You can have many PVs with, for example, 5Gi of storage, but none of them will be bound if the PVC requests more than 5Gi, like 6Gi. But if the PV's storage is higher (6Gi) and the PVC request is lower (5Gi), they will be bound, though 1Gi will be wasted.
Issue 2
If you describe your PVC you will find the warning below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedBinding 2s (x2 over 17s) persistentvolume-controller volume "www-pv" already bound to a different claim.
In your configuration you are using something called Pre-Binding as you have specified volumeName in PVC and claimRef in PV.
This example is well described in OpenShift Documentation - Using Persistent Volumes. In your current setup you've used claimRef.name but you didn't specify claimRef.namespace.
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/www-pv 1Gi ROX Retain Available /www-pvc 4m28s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/www-pvc Pending www-pv 0 4m28s
But when you add claimRef.namespace it will work.
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/www-pv 1Gi ROX Retain Bound default/www-pvc 7m3s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/www-pvc Bound www-pv 1Gi ROX 7m3s
You should specify the PVC's namespace in your PV's spec.claimRef.namespace, as a PVC is a namespaced resource.
$ kubectl api-resources | grep pv
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
Solution
In your PV change spec.capacity.storage to 1Gi.
In your PV, add spec.claimRef.namespace: default as in the example below:
spec:
  storageClassName: ""
  claimRef:
    name: www-pvc
    namespace: default # added namespace: default
  capacity:
    storage: 1Gi # changed storage size
Please let me know if you were able to bind PV and PVC.
I think the problem is that the PVC is trying to get a PV of size 1Gi but your PV is of size 1Mi.
So, the bind is failing. You can fix this by either increasing the PV size or reducing the PVC size.
Use kubectl describe pvc to get more info about events and the reason for the failures.
To further clarify, a PVC is a request for storage, so if you say you need 1Gi of storage in the claim but you only provision 1Mi of actual storage, the PVC is going to stay in the Pending state. Based on this, the size defined in the PVC should always be less than or equal to the PV size.
This is an addition to the answers provided above (PV/PVC size correction).
You should make sure you have the nfs-common package installed and that you can mount that NFS export on the node itself.
Since storageClassName is empty in your definition, I advise looking into
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
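As a rough sketch, once such a provisioner is installed, a PVC can simply reference its StorageClass; the class name nfs-client below is only an assumption, it depends on how the provisioner chart is installed:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-pvc-dynamic          # illustrative name
spec:
  storageClassName: nfs-client   # assumed default class name of the provisioner
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi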
The PV size cannot be smaller than the PVC size. In other words, a PVC requesting 1Gi cannot be satisfied by a PV of 1Mi.
Please update the PV & PVC sizes.

Why ReadWriteOnce is working on different nodes?

Our platform, which runs on K8s, has different components. We need to share storage between two of these components (comp-A and comp-B), but by mistake we defined the PV and PVC for that as ReadWriteOnce, and even when those two components were running on different nodes everything was working and we were able to read and write the storage from both components.
Based on the K8s docs the ReadWriteOnce can be mounted to one node and we have to use ReadWriteMany:
ReadWriteOnce -- the volume can be mounted as read-write by a single node
ReadOnlyMany -- the volume can be mounted read-only by many nodes
ReadWriteMany -- the volume can be mounted as read-write by many nodes
So I am wondering why everything was working fine when it shouldn't have been?
More info:
We use NFS for storage, we are not using dynamic provisioning, and below is how we defined our PV and PVC (we use Helm):
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gstreamer-{{ .Release.Namespace }}
  spec:
    capacity:
      storage: 10Gi
    accessModes:
    - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    mountOptions:
    - hard
    - nfsvers=4.1
    nfs:
      server: {{ .Values.global.nfsserver }}
      path: /var/nfs/general/gstreamer-{{ .Release.Namespace }}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gstreamer-claim
    namespace: {{ .Release.Namespace }}
  spec:
    volumeName: gstreamer-{{ .Release.Namespace }}
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
Update
The output of some kubectl commands:
$ kubectl get -n 149 pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gstreamer-claim Bound gstreamer-149 10Gi RWO 177d
$ kubectl get -n 149 pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
gstreamer-149 10Gi RWO Recycle Bound 149/gstreamer-claim 177d
I think somehow it just works because the only thing the pods need to do is connect to that NFS server IP.
The accessMode concept is quite misleading, especially with NFS.
In the Kubernetes Persistent Volumes docs it's mentioned that NFS supports all access modes: RWO, ROX and RWX.
However, accessMode is more like a matching criterion, the same as storage size. It's described better in the OpenShift Access Mode documentation:
A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.
Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim’s access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO.
Direct matches are always attempted first. The volume’s modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another.
All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches.
In the next paragraph:
A volume’s AccessModes are descriptors of the volume’s capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource.
For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume’s ROX capability. Errors in the provider show up at runtime as mount errors.
Another example is that you can specify several AccessModes, as they are not constraints but matching criteria.
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exmaple-pvc
spec:
  accessModes:
  - ReadOnlyMany
  - ReadWriteMany
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
or as per GKE example:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exmaple-pvc-rwo-rom
spec:
  accessModes:
  - ReadOnlyMany
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
persistentvolumeclaim/exmaple-pvc-rwo-rom created
PVC Output
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
exmaple-pvc Pending standard 2m18s
exmaple-pvc-rwo-rom Bound pvc-d704d346-42b3-4090-af96-aebeee3053f5 1Gi RWO,ROX standard 6s
persistentvolumeclaim/exmaple-pvc created
exmaple-pvc is in Pending state, as the default GKE GCEPersistentDisk does not support ReadWriteMany.
Warning ProvisioningFailed 10s (x5 over 69s) persistentvolume-controller Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadOnlyMany ReadWriteMany ReadWriteOnce]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
However, the second PVC, exmaple-pvc-rwo-rom, was created, and you can see it has 2 access modes: RWO, ROX.
In short, accessMode is more of a requirement for the PVC/PV to bind. If an NFS volume, which provides all access modes, binds with RWO, it fulfills the requirement; however, it will still behave like RWX, as NFS provides that capability.
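For illustration, a rough sketch of an NFS PV that advertises all three modes (the server address and path are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example           # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  - ReadWriteMany
  nfs:
    server: 192.168.1.100        # placeholder NFS server
    path: /exports/data          # placeholder export path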
Hope this cleared it up a bit.
In addition, you can check other Stack Overflow threads regarding accessMode.

How persistent volume and persistent volume claim are bound to each other in Kubernetes

I am working on creating a persistent volume & persistent volume claim in Kubernetes. Both configurations below work fine and I am able to store data in the persistent volume's storage path.
I created a persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi # Size of the volume
  accessModes:
  - ReadWriteOnce # type of access
  hostPath:
    path: "/mnt/data" # host location
---
and a persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
There is no explicit connection between the persistent volume & persistent volume claim in the above configuration files. How are they bound to each other?
Persistent volume & persistent volume claim
Say, in deployment.yml we can reference the name of the persistent volume claim, so that Pod -> PVC -> PV -> host machine storage location.
Could anyone help me understand how the persistent volume & persistent volume claim are bound to each other by the above configuration files?
In a nutshell, binding between a PV and a PVC is decided by matching capacity and accessModes (and the storage class, as noted below). Since you have 1Gi and ReadWriteOnce in both the PV and the PVC, the binding was successful.
From the docs here
A user creates, or in the case of dynamic provisioning, has already created, a PersistentVolumeClaim with a specific amount of storage requested and with certain access modes. A control loop in the master watches for new PVCs, finds a matching PV (if possible), and binds them together. If a PV was dynamically provisioned for a new PVC, the loop will always bind that PV to the PVC. Otherwise, the user will always get at least what they asked for, but the volume may be in excess of what was requested. Once bound, PersistentVolumeClaim binds are exclusive, regardless of how they were bound. A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is a bi-directional binding between the PersistentVolume and the PersistentVolumeClaim.
Claims will remain unbound indefinitely if a matching volume does not exist. Claims will be bound as matching volumes become available. For example, a cluster provisioned with many 50Gi PVs would not match a PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to the cluster.
Do note that the storage classes (manual) in both the PV and PVC are the same, which is one of the reasons they are bound. If they are different, the PVC will stay in Pending status. It is imperative that they are the same for them to be bound.
Hope this helps. You can also refer to this thread for various ways to bind:
Can a PVC be bound to a specific PV?
PVC documentation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
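For illustration, a rough sketch of pinning the PVC from this question to the specific PV above via volumeName (names reused from the manifests above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: manual
  volumeName: pv-vol        # bind only to this specific PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi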

The PersistentVolume is invalid: spec: Required value: must specify a volume type

I'm trying to create a PersistentVolume based on an existing StorageClass name, and then attach a PVC to it so that they are bound. Running the code below will give me the "sftp-pv-claim" I want, but it is not bound to my PV ("sftp-pv-storage"); its status is "Pending".
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type". If anyone can point me in the right direction as to why I'm getting the error message, it'd be much appreciated.
Specs:
I'm creating the PV and PVC using a helm chart.
I'm using the Rancher UI to see if they are bound or not and if the PV is generated.
The storage I'm using is Ceph with Rook (to allow for dynamic provisioning of PVs).
Error:
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type".
Attempts:
I've tried using claimRef and matchLabels to no avail.
I've added "volumetype: none" to my PV specs.
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it. (Also, for deployment purposes I don't want to use hostPath.
## Create Persistent Storage for SFTP
## Ref: https://www.cloudtechnologyexperts.com/kubernetes-persistent-volume-with-rook/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
  labels:
    type: local
    name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  allowVolumeExpansion: true
  volumetype: none
---
## Create Claim (links user to PV)
## ==> If pod is created, need to automatically create PVC for user (without their input)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: sftp-pv-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The PersistentVolume "sftp-pv-storage" is invalid: spec: Requiredvalue: must specify a volume type.
In a PV manifest you must provide the type of volume. The list of all supported types is described here.
As you are using Ceph, I assume you will use CephFS.
A cephfs volume allows an existing CephFS volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a cephfs volume are preserved and the volume is merely unmounted. This means that a CephFS volume can be pre-populated with data, and that data can be "handed off" between Pods. CephFS can be mounted by multiple writers simultaneously.
An example of CephFS usage can be found on GitHub.
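For illustration only, a rough sketch of a CephFS-backed PV (the monitor address and secret name are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv-example          # illustrative name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  cephfs:
    monitors:
    - 10.16.154.78:6789            # placeholder Ceph monitor address
    user: admin
    secretRef:
      name: ceph-secret            # placeholder Secret holding the Ceph key
    readOnly: false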
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bonded to it.
If you check the official Kubernetes docs about storageClassName:
A claim can request a particular class by specifying the name of a StorageClass using the attribute storageClassName. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC.
The storageClassName of your PV and PVC are different.
PV:
spec:
  storageClassName: rook-ceph-block
PVC:
spec:
  storageClassName: sftp-pv-storage
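A rough sketch of the PVC with the class names aligned (everything else kept from the question):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: rook-ceph-block   # now matches the PV's storageClassName
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi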
Hope it will help.
You did not specify "hostPath:" in your PersistentVolume.
Add it and the error should be resolved. See the sample below.
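For illustration only, a minimal sketch of the PV with a hostPath volume type added (the path is a placeholder; as the question notes, hostPath may not be suitable for real deployments):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data        # placeholder path on the node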

pod has unbound immediate PersistentVolumeClaims kubernetes nfs volume

I know there are lots of discussions around this topic, but somehow I cannot get it working.
I am trying to install an Elasticsearch cluster with a StatefulSet and an NFS persistent volume on bare metal. My PV, PVC and SC configs are as below:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-storage-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    server: 172.23.240.85
    path: /servers/scratch50g/vishalg/kube
The StatefulSet has the following PVC section defined:
volumeClaimTemplates:
- metadata:
    name: beehive-pv-claim
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: manual
    resources:
      requests:
        storage: 1Gi
Now, when I try to deploy it, I get the following error on statefulset:
pod has unbound immediate PersistentVolumeClaims
When I get the pvc events, it shows:
Warning ProvisioningFailed 3s (x2 over 12s) persistentvolume-controller no volume plugin matched
I tried not giving any storage class (did not create it) and removed it from both the PV and PVC altogether. This time, I get the error below:
no persistent volumes available for this claim and no storage class is set
I also tried setting storageClassName to "" in the PVC and not mentioning it in the PV, but that did not work either.
Please help. What more can I check to get it working?
Could it be related to the NFS server and path (if by chance they are specified incorrectly)? Though I see the PV is created successfully.
EDIT1:
One issue was that the access mode of the PVC was different from the access mode of the PV. I corrected it, and now my PVC shows as Bound.
But even now, I get the following error:
pod has unbound immediate PersistentVolumeClaims
I also tried using a local volume, but got the same error again. The PV and PVC are bound correctly, but the StatefulSet shows the above error.
When using a hostPath volume, everything works fine.
Is there anything fundamental that I am doing wrong here?
EDIT2
I got the local volume working. It takes some time for the pod to bind to the PVC. After waiting for a couple of minutes, my pod got bound to the PVC.
I think the NFS binding issue may be permission related. But still, K8s should give some error for that.
Could you try matching the accessModes as well?
The PVC is requesting a ReadWriteOnce volume right now, while the PV offers ReadWriteMany.
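For example, a rough sketch of the volumeClaimTemplates with the access mode aligned to the PV:
volumeClaimTemplates:
- metadata:
    name: beehive-pv-claim
  spec:
    accessModes: [ "ReadWriteMany" ]   # now matches the PV's access mode
    storageClassName: manual
    resources:
      requests:
        storage: 1Gi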
And if you mount the NFS volume on the node manually, any access/security issue can be debugged.
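A rough sketch of that manual check on a Debian-based node (server and path taken from the PV above):
# on the node, as root
apt-get install -y nfs-common                                  # NFS client tools on Debian/Ubuntu
mkdir -p /mnt/nfs-test
mount -t nfs 172.23.240.85:/servers/scratch50g/vishalg/kube /mnt/nfs-test
touch /mnt/nfs-test/write-test && rm /mnt/nfs-test/write-test  # check write permission
umount /mnt/nfs-test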