Auto bound PVC to PV - kubernetes

Good day!
If you do not explicitly specify volumeName when creating a PVC in OpenShift, which PV will the PVC be bound to?
I think the PVC can be bound to any PV in "Available" status whose storage size satisfies the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc_name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G
  storageClassName: ""
  volumeName:
Could you explain how this process works?
Thanks.

In Kubernetes, dynamic provisioning does not apply when storageClassName is "", so Kubernetes will look through the list of existing PVs that have no class for the smallest matching one. (A PVC that omits storageClassName altogether is treated differently, depending on whether the DefaultStorageClass admission plugin is turned on.)
If a selector or access modes are provided, a logical AND is applied to the requirements, so only a PV with no class, the requested access mode, and matching labels will be chosen.
With a DefaultStorageClass set, its value is used to dynamically provision storage for PVCs that do not request any specific class.
As for the PVC sample in question, ccshih has provided nearly the exact answer: with no DefaultStorageClass enabled and configured, the smallest Available PV of size 10G (10 × 10^9 bytes ≈ 9.3 GiB) or larger, with no class and the ReadWriteOnce access mode, will be bound.
Please see Lifecycle of a volume and claim
The provisioning logic is described here: Controller workflow for provisioning volumes
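For illustration only (name, server, and path are hypothetical), a PV with no class that such a claim could bind to might look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-no-class # hypothetical name
spec:
  storageClassName: "" # no class, so it can match a PVC with storageClassName: ""
  capacity:
    storage: 10Gi # at least the requested 10G
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com # hypothetical NFS server
    path: /exports/data # hypothetical export path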

Related

What is the PersistentVolumeClaim policy for local PersistentVolume in Kubernetes?

Scenario 1:
I have 3 local-persistent-volumes provisioned, and each PV is mounted on a different node:
10.30.18.10
10.30.18.11
10.30.18.12
When I start my app with 3 replicas using:
kind: StatefulSet
metadata:
  name: my-db
spec:
  replicas: 3
  ...
  ...
  volumeClaimTemplates:
  - metadata:
      name: my-local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-local-sc"
      resources:
        requests:
          storage: 10Gi
Then I notice that pods and PVs are on the same host:
pod1 with IP 10.30.18.10 has claimed the PV that is mounted on 10.30.18.10
pod2 with IP 10.30.18.11 has claimed the PV that is mounted on 10.30.18.11
pod3 with IP 10.30.18.12 has claimed the PV that is mounted on 10.30.18.12
(What's not happening is, for example, pod1 with IP 10.30.18.10 claiming the PV that is mounted on a different node such as 10.30.18.12.)
The only config shared between the PV and PVC is storageClassName, so I didn't configure this behavior.
Question:
So, who is responsible for this magic? Kubernetes scheduler? Kubernetes provisioner?
Scenario 2:
I have 3 local-persistent-volumes provisioned:
pv1 has capacity.storage of 10Gi
pv2 has capacity.storage of 100Gi
pv3 has capacity.storage of 100Gi
Now, I start my app with 1 replica
kind: StatefulSet
metadata:
  name: my-db
spec:
  replicas: 1
  ...
  ...
  volumeClaimTemplates:
  - metadata:
      name: my-local-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-local-sc"
      resources:
        requests:
          storage: 10Gi
I want to ensure that this StatefulSet always claims pv1 (10Gi), even if it is on a different node, and never claims pv2 (100Gi) or pv3 (100Gi).
Question:
Does this happen automatically?
How do I ensure the desired behavior? Should I use a separate storageClassName to ensure this?
What is the PersistentVolumeClaim policy? Where can I find more info?
EDIT:
yml used for StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-pv
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
With local Persistent Volumes, this is the expected behaviour. Let me try to explain what happens when using local storage.
The usual setup for local storage on a cluster is the following:
A local storage class, configured to be WaitForFirstConsumer
A series of local persistent volumes, linked to the local storage class
And this is all well documented with examples in the official documentation: https://kubernetes.io/docs/concepts/storage/volumes/#local
With this done, Persistent Volume Claims can request storage from the local storage class and StatefulSets can have a volumeClaimTemplate which requests storage of the local storage class.
Let me take as an example your StatefulSet with 3 replicas, each of which requires local storage via the volumeClaimTemplate.
When the Pods are first created, they request storage of the required storageClass, for example your my-local-sc.
Since this storage class is manually created and does not support dynamic provisioning of new PVs (unlike, for example, Ceph or similar), Kubernetes checks whether a PV attached to the storage class is available to be bound.
If a PV is selected, it is bound to the newly created PVC (and from now on, that PVC can be used only with that particular PV, since it is now Bound).
Since the PV is of type local, it has a required nodeAffinity which selects a node.
This forces the Pod, now bound to that PV, to be scheduled only on that particular node.
This is why each Pod was scheduled on the same node as its bound persistent volume. And this means that the Pod is restricted to run on that node only.
You can test this easily by draining / cordoning one of the nodes and then trying to restart the Pod bound to the PV available on that particular node. What you should see is that the Pod will not start, as the PV is restricted by its nodeAffinity and the node is not available.
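For example (the node name below reuses one of the node IPs from this scenario; the exact node name will differ in your cluster), the test could look roughly like:
kubectl cordon 10.30.18.10                     # mark the node unschedulable
kubectl drain 10.30.18.10 --ignore-daemonsets  # evict the pods running there
kubectl get pod my-db-0 -o wide                # the recreated pod should stay Pending
kubectl describe pod my-db-0                   # the events typically mention a volume node affinity conflict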
Once each Pod of the StatefulSet is bound to a PV, that Pod will be scheduled only on one specific node. Pods will not change the PV that they are using unless the PVC is removed (which forces the Pod to request a new PV to bind to).
Since local storage is handled manually, PVs that were bound and whose related PVC has been removed from the cluster enter the Released state and cannot be claimed anymore; they must be handled by someone, perhaps by deleting them and recreating new ones at the same location (and maybe cleaning the filesystem as well, depending on the situation).
This means that local storage is OK to be used only:
If HA is not a problem, for example if I don't care whether my app is blocked by a single node not working.
If HA is handled directly by the app itself: for example, a StatefulSet with 3 Pods like a multi-primary database (Galera, ClickHouse, Percona, for example), or Elasticsearch, Kafka, ZooKeeper, or something similar; all of these handle HA on their own, as they can tolerate one of their nodes being down as long as there is quorum.
UPDATE
Regarding Scenario 2 of your question: let's say you have multiple Available PVs and a single Pod which starts and wants to bind to one of them. This is normal behaviour, and the control plane will select one of those PVs on its own (if they match the requests in the Claim).
There's a specific way to pre-bind a PV and a PVC, so that they will always bind together. This is described in the docs as "reserving a PV": https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reserving-a-persistentvolume
But the problem is that this cannot be applied to volume claim templates, as it requires the claim to be created manually with special properties.
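For a standalone PVC (not a template), reserving would look roughly like this sketch (claim and PV names are hypothetical, the storage class is the one from your question):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-db-claim # hypothetical claim name
  namespace: default
spec:
  storageClassName: my-local-sc
  volumeName: my-reserved-pv # ties this claim to one specific PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
(The PV can also point back at the claim through spec.claimRef, as the "reserving" docs describe, so that no other claim grabs it first.)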
The volume claim template, though, has a selector field which can be used to restrict the selection of a PV based on labels. It can be seen in the API specs ( https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#persistentvolumeclaimspec-v1-core ).
When you create a PV, you can label it however you want; for example, you could label it like the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-small-pv
  labels:
    size-category: small
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-big-pv
  labels:
    size-category: big
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node-2
And then the claim template can select a category of volumes based on the label. Or maybe it doesn't care, so it doesn't specify a selector and can use all of them (provided that the size is enough for its claim request).
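For instance, a claim template restricted to the "small" category of the labelled PVs above might look roughly like this sketch (it reuses the local-storage class of those example PVs):
volumeClaimTemplates:
- metadata:
    name: my-local-vol
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "local-storage"
    selector:
      matchLabels:
        size-category: small # only PVs carrying this label can be bound
    resources:
      requests:
        storage: 10Gi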
This could be useful, but it's not the only way to select or restrict which PVs can be selected, because when the PV is first bound, if the storage class is WaitForFirstConsumer, the following is also applied:
Delaying volume binding ensures that the PersistentVolumeClaim binding
decision will also be evaluated with any other node constraints the
Pod may have, such as node resource requirements, node selectors, Pod
affinity, and Pod anti-affinity.
Which means that if the Pod has a node affinity to one of the nodes, it will definitely select a PV on that node (if the local storage class used is WaitForFirstConsumer).
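As a sketch (the node name is hypothetical), such a constraint in the Pod template of the StatefulSet could look like:
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - example-node-1 # with WaitForFirstConsumer, only PVs on this node can be chosen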
Last, let me quote the official documentation for the parts that I think could answer your questions:
From https://kubernetes.io/docs/concepts/storage/persistent-volumes/
A user creates, or in the case of dynamic provisioning, has already
created, a PersistentVolumeClaim with a specific amount of storage
requested and with certain access modes. A control loop in the master
watches for new PVCs, finds a matching PV (if possible), and binds
them together. If a PV was dynamically provisioned for a new PVC, the
loop will always bind that PV to the PVC. Otherwise, the user will
always get at least what they asked for, but the volume may be in
excess of what was requested. Once bound, PersistentVolumeClaim binds
are exclusive, regardless of how they were bound. A PVC to PV binding
is a one-to-one mapping, using a ClaimRef which is a bi-directional
binding between the PersistentVolume and the PersistentVolumeClaim.
Claims will remain unbound indefinitely if a matching volume does not
exist. Claims will be bound as matching volumes become available. For
example, a cluster provisioned with many 50Gi PVs would not match a
PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to
the cluster.
From https://kubernetes.io/docs/concepts/storage/volumes/#local
Compared to hostPath volumes, local volumes are used in a durable and
portable manner without manually scheduling pods to nodes. The system
is aware of the volume's node constraints by looking at the node
affinity on the PersistentVolume.
However, local volumes are subject to the availability of the
underlying node and are not suitable for all applications. If a node
becomes unhealthy, then the local volume becomes inaccessible by the
pod. The pod using this volume is unable to run. Applications using
local volumes must be able to tolerate this reduced availability, as
well as potential data loss, depending on the durability
characteristics of the underlying disk.

Why is ReadWriteOnce working on different nodes?

Our platform, which runs on K8s, has different components. We need to share storage between two of these components (comp-A and comp-B), but by mistake we defined the PV and PVC for that as ReadWriteOnce, and even when those two components were running on different nodes everything was working and we were able to read and write to the storage from both components.
Based on the K8s docs, ReadWriteOnce can be mounted by only one node, and we would have to use ReadWriteMany:
ReadWriteOnce -- the volume can be mounted as read-write by a single node
ReadOnlyMany -- the volume can be mounted read-only by many nodes
ReadWriteMany -- the volume can be mounted as read-write by many nodes
So I am wondering why everything was working fine when it shouldn't have been?
More info:
We use NFS for storage, we are not using dynamic provisioning, and below is how we defined our PV and PVC (we use Helm):
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gstreamer-{{ .Release.Namespace }}
  spec:
    capacity:
      storage: 10Gi
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Recycle
    mountOptions:
      - hard
      - nfsvers=4.1
    nfs:
      server: {{ .Values.global.nfsserver }}
      path: /var/nfs/general/gstreamer-{{ .Release.Namespace }}
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gstreamer-claim
    namespace: {{ .Release.Namespace }}
  spec:
    volumeName: gstreamer-{{ .Release.Namespace }}
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
Update
The output of some kubectl commands:
$ kubectl get -n 149 pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gstreamer-claim Bound gstreamer-149 10Gi RWO 177d
$ kubectl get -n 149 pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
gstreamer-149 10Gi RWO Recycle Bound 149/gstreamer-claim 177d
I think somehow it takes care of it, because the only thing the pods need to do is connect to that IP.
The accessMode concept is quite misleading, especially with NFS.
In the Kubernetes Persistent Volumes docs it is mentioned that NFS supports all types of access: RWO, ROX and RWX.
However, accessMode works more like a matching criterion, the same as storage size. It's described better in the OpenShift Access Mode documentation:
A PersistentVolume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV’s access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV’s capabilities.
Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim’s access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO.
Direct matches are always attempted first. The volume’s modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another.
All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches.
In the next paragraph:
A volume’s AccessModes are descriptors of the volume’s capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource.
For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume’s ROX capability. Errors in the provider show up at runtime as mount errors.
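As a sketch (the pod and image are illustrative; the claim name is the one from this question), using the volume's ROX capability means marking both the claim reference and the mount as read-only:
apiVersion: v1
kind: Pod
metadata:
  name: reader-pod # hypothetical name
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true # mount the volume read-only inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: gstreamer-claim
      readOnly: true # request the claim in read-only mode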
Another example is that you can request several AccessModes, since they are not constraints but matching criteria.
$ cat <<EOF | kubectl create -f -
> apiVersion: v1
> kind: PersistentVolumeClaim
> metadata:
>   name: exmaple-pvc
> spec:
>   accessModes:
>     - ReadOnlyMany
>     - ReadWriteMany
>     - ReadWriteOnce
>   resources:
>     requests:
>       storage: 1Gi
> EOF
or as per GKE example:
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exmaple-pvc-rwo-rom
spec:
  accessModes:
    - ReadOnlyMany
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
persistentvolumeclaim/exmaple-pvc-rwo-rom created
PVC Output
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
exmaple-pvc Pending standard 2m18s
exmaple-pvc-rwo-rom Bound pvc-d704d346-42b3-4090-af96-aebeee3053f5 1Gi RWO,ROX standard 6s
persistentvolumeclaim/exmaple-pvc created
exmaple-pvc is in Pending state because the default GKE GCEPersistentDisk does not support ReadWriteMany.
Warning ProvisioningFailed 10s (x5 over 69s) persistentvolume-controller Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadOnlyMany ReadWriteMany ReadWriteOnce]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
However, the second PVC, exmaple-pvc-rwo-rom, was created, and you can see it has 2 access modes: RWO, ROX.
In short, accessMode is more like a requirement for the PVC/PV to bind. If NFS, which provides all access modes, binds with RWO, that fulfills the requirement; however, it will still behave as RWX, because NFS provides that capability.
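As a sketch (server and path are hypothetical), an NFS PV that advertises all three modes, and can therefore satisfy a claim requesting any of them, would look like:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-all-modes # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
    - ReadWriteMany
  nfs:
    server: nfs.example.com # hypothetical server
    path: /exports/data # hypothetical export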
Hope this clears it up a bit.
In addition you can check other StackOverflow threads regarding accessMode

How persistent volume and persistence volume claim bound each other in kubernetes

I am working on creating a persistent volume & persistent volume claim in Kubernetes. Both configurations below work fine and I am able to store data at the persistent volume's storage path.
I created a persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi # size of the volume
  accessModes:
    - ReadWriteOnce # type of access
  hostPath:
    path: "/mnt/data" # host location
---
and a persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
There is no explicit connection between the persistent volume & persistent volume claim in the above configuration files. How are they bound to each other?
Persistent volume & persistent volume claim
Say in deployment.yml we can point at the name of the persistent volume claim, so that Pod -> PVC -> PV -> host machine storage location.
Could anyone help me understand how the persistent volume & persistent volume claim are bound to each other by the above configuration files?
In a nutshell, binding between a PV and a PVC is decided by matching capacity and accessModes. Since you have 1Gi and ReadWriteOnce in both the PV and the PVC, the binding was successful.
From the docs here
A user creates, or in the case of dynamic provisioning, has already
created, a PersistentVolumeClaim with a specific amount of storage
requested and with certain access modes. A control loop in the master
watches for new PVCs, finds a matching PV (if possible), and binds
them together. If a PV was dynamically provisioned for a new PVC, the
loop will always bind that PV to the PVC. Otherwise, the user will
always get at least what they asked for, but the volume may be in
excess of what was requested. Once bound, PersistentVolumeClaim binds
are exclusive, regardless of how they were bound. A PVC to PV binding
is a one-to-one mapping, using a ClaimRef which is a bi-directional
binding between the PersistentVolume and the PersistentVolumeClaim.
Claims will remain unbound indefinitely if a matching volume does not
exist. Claims will be bound as matching volumes become available. For
example, a cluster provisioned with many 50Gi PVs would not match a
PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to
the cluster
Do note that the storage classes (manual) in both the PV and the PVC are the same, which is one of the reasons they are bound. If they were different, the PVC would go into Pending status. It's imperative that they are the same for the two to be bound.
Hope this helps. You can also refer to this thread for various ways to bind:
Can a PVC be bound to a specific PV?
PVC documentation: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
PVCs don't necessarily have to request a class. A PVC with its storageClassName set equal to "" is always interpreted to be requesting a PV with no class, so it can only be bound to PVs with no class (no annotation or one set equal to ""). A PVC with no storageClassName is not quite the same and is treated differently by the cluster, depending on whether the DefaultStorageClass admission plugin is turned on.
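A sketch of the two cases (names are hypothetical):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-no-class # hypothetical
spec:
  storageClassName: "" # explicitly "no class": binds only to PVs with no class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-maybe-default # hypothetical
spec:
  # storageClassName omitted: behaviour depends on the DefaultStorageClass admission plugin
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi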

The PersistentVolume is invalid: spec: Required value: must specify a volume type

I'm trying to create a Persistent Volume on top of/based off of an existing Storage Class Name. Then I want to attach the PVC to it so that they are bound. Running the code below will give me the "sftp-pv-claim" I want, but it is not bound to my PV ("sftp-pv-storage"). Its status is "Pending".
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type". If anyone can point me in the right direction as to why I'm getting the error message, it'd be much appreciated.
Specs:
I'm creating the PV and PVC using a helm chart.
I'm using the Rancher UI to see if they are bound or not and if the PV is generated.
The storage I'm using is Ceph with Rook (to allow for dynamic provisioning of PVs).
Error:
The error message I receive is: "The PersistentVolume "sftp-pv-storage" is invalid: spec: Required value: must specify a volume type".
Attempts:
I've tried using claimRef and matchLabels to no avail.
I've added "volumetype: none" to my PV specs.
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bound to it. (Also, for deployment purposes I don't want to use hostPath.)
## Create Persistent Storage for SFTP
## Ref: https://www.cloudtechnologyexperts.com/kubernetes-persistent-volume-with-rook/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
  labels:
    type: local
    name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  allowVolumeExpansion: true
  volumetype: none
---
## Create Claim (links user to PV)
## ==> If pod is created, need to automatically create PVC for user (without their input)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: sftp-pv-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
The PersistentVolume "sftp-pv-storage" is invalid: spec: Requiredvalue: must specify a volume type.
In the PV manifest you must provide the type of volume. The list of all supported types is described here.
As you are using Ceph, I assume you will use CephFS.
A cephfs volume allows an existing CephFS volume to be mounted into
your Pod. Unlike emptyDir, which is erased when a Pod is removed, the
contents of a cephfs volume are preserved and the volume is merely
unmounted. This means that a CephFS volume can be pre-populated with
data, and that data can be “handed off” between Pods. CephFS can be
mounted by multiple writers simultaneously.
An example of CephFS can be found on GitHub.
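As a rough sketch (the monitor address, user, and secret name are hypothetical), a PV carrying a cephfs volume source, which satisfies the "must specify a volume type" requirement, could look like:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.16.154.78:6789 # hypothetical Ceph monitor
    path: /
    user: admin
    secretRef:
      name: ceph-secret # hypothetical Secret holding the Ceph key
    readOnly: false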
If I add "hostPath: path: "/mnt/data"" as a spec to the PV, it will show up as an Available PV (with a local node path), but my PVC is not bound to it.
If you check the official Kubernetes docs about storageClassName:
A claim can request a particular class by specifying the name of a
StorageClass using the attribute storageClassName. Only PVs of the
requested class, ones with the same storageClassName as the PVC, can
be bound to the PVC.
The storageClassName of your PV and PVC are different:
PV:
spec:
  storageClassName: rook-ceph-block
PVC:
spec:
  storageClassName: sftp-pv-storage
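If the goal is to let Rook provision the storage dynamically, the simplest fix is to make the PVC request the Rook storage class itself, roughly like this sketch (keeping your claim name):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sftp-pv-claim
spec:
  storageClassName: rook-ceph-block # request the class, not the PV name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi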
Hope it will help.
You did not specify hostPath: in your PersistentVolume.
Add it and the error should be resolved. See the sample below.
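A minimal sketch along those lines (the path is for illustration only):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: sftp-pv-storage
  labels:
    type: local
    name: sftp-pv-storage
spec:
  storageClassName: rook-ceph-block
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data" # the volume type the error was asking for; path is illustrative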

Kubernetes NFS Persistent Volumes - multiple claims on same volume? Claim stuck in pending?

Use case:
I have an NFS directory available and I want to use it to persist data for multiple deployments & pods.
I have created a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: http://mynfs.com
    path: /server/mount/point
I want multiple deployments to be able to use this PersistentVolume, so my understanding is that I need to create multiple PersistentVolumeClaims which will all point at this PersistentVolume.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-1
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
I believe this creates a 50Mi claim on the PersistentVolume. When I run kubectl get pvc, I see:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
nfs-pvc-1 Bound nfs-pv 10Gi RWX 35s
I don't understand why I see 10Gi capacity, not 50Mi.
When I then change the PersistentVolumeClaim deployment yaml to create a PVC named nfs-pvc-2 I get this:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
nfs-pvc-1 Bound nfs-pv 10Gi RWX 35s
nfs-pvc-2 Pending 10s
PVC2 never binds to the PV. Is this expected behaviour? Can I have multiple PVCs pointing at the same PV?
When I delete nfs-pvc-1, I see the same thing:
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
nfs-pvc-2 Pending 10s
Again, is this normal?
What is the appropriate way to use/re-use a shared NFS resource between multiple deployments / pods?
Basically you can't do what you want, as the relationship between PVC and PV is one-to-one.
If NFS is the only storage you have available and you would like multiple PVs/PVCs on one NFS export, use dynamic provisioning and a default storage class.
It's not in official K8s yet, but this one is in the incubator, I've tried it, and it works well: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
This will enormously simplify your volume provisioning, as you only need to take care of the PVC, and the PV will be created as a directory on the NFS export/server that you have defined.
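Once such a provisioner is installed, each workload just creates a PVC against its storage class. A sketch (the class name nfs-client is an assumption; it depends on how the provisioner was deployed):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-pvc-dynamic # hypothetical name
  namespace: default
spec:
  storageClassName: nfs-client # assumed StorageClass created for the provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi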
From: https://docs.openshift.org/latest/install_config/storage_examples/shared_storage.html
As Baroudi Safwen mentioned, you cannot bind two PVCs to the same PV, but you can use the same PVC in two different pods:
volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc-1 # <-- use this one in both pods
A persistent volume claim is exclusively bound to a persistent volume.
You cannot bind 2 PVCs to the same PV. I guess you are interested in dynamic provisioning. I faced this issue when I was deploying StatefulSets, which require dynamic provisioning for pods. So you need to deploy an NFS provisioner in your cluster; the NFS provisioner (pod) will have access to the NFS folder (hostPath), and each time a pod requests a volume, the NFS provisioner will mount it in the NFS directory on behalf of the pod. Here is the GitHub repository to deploy it:
https://github.com/kubernetes-incubator/external-storage/tree/master/nfs/deploy/kubernetes
You have to be careful, though: you must ensure the NFS provisioner always runs on the same machine where you have the NFS folder, by making use of a node selector, since the volume is of type hostPath.
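A rough sketch of that pinning, as a fragment of the provisioner Deployment's pod template (the label value is hypothetical):
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: nfs-host-1 # the node that actually holds the hostPath folder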
For my future-self and everyone else looking for the official documentation:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#binding
Once bound, PersistentVolumeClaim binds are exclusive, regardless of
how they were bound. A PVC to PV binding is a one-to-one mapping,
using a ClaimRef which is a bi-directional binding between the
PersistentVolume and the PersistentVolumeClaim.
A few points on dynamic provisioning:
Using dynamic provisioning of NFS prevents you from changing any of the default NFS mount options. On my platform this uses an rsize/wsize of 1M. This can cause huge problems in some applications that use small files or block reading. (I've just hit this issue in a big way.)
Dynamic provisioning is a great option if it suits your needs. I'm now stuck with creating 250 PV/PVC pairs for my application that was being handled dynamically, because of the one-to-one relationship.