How to reuse existing persistent volume claims - kubernetes

I have deleted my Elasticsearch cluster, but now, after deploying a new cluster, I need to access the old data that was stored on 3 Persistent Volume Claims (PVCs), described below:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
storage-es-data-0 Bound pvc-19429b0b-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
storage-es-data-1 Bound pvc-36505962-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
storage-es-data-2 Bound pvc-422da328-ba42-11e7-979d-42010a840ff7 12Gi RWO standard 10d
This is the description of the old Persistent Volumes (PVs):
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-19429b0b-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-0 standard 10d
pvc-36505962-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-1 standard 10d
pvc-422da328-ba42-11e7-979d-42010a840ff7 12Gi RWO Delete Bound default/storage-es-data-2 standard 10d
My new deployment is described as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es-data
        image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
        imagePullPolicy: Always
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: storage-es-data-0
After exposing my pod with a LoadBalancer service, I didn't find any documents. Am I missing something? And how can I use the three PVs in the same pod?

Your deployment YAML file is correct. You should be able to find the files from the pvc-19429b0b-ba42-11e7-979d-42010a840ff7 volume inside the /data folder in your pod.
To use all three PVCs in the same pod, just add them all to your deployment YAML:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: es-data
  labels:
    component: elasticsearch
    role: data
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: elasticsearch
        role: data
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es-data
        image: docker.elastic.co/elasticsearch/elasticsearch:5.6.3
        imagePullPolicy: Always
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: storage-0
          mountPath: /data0
        - name: storage-1
          mountPath: /data1
        - name: storage-2
          mountPath: /data2
      volumes:
      - name: storage-0
        persistentVolumeClaim:
          claimName: storage-es-data-0
      - name: storage-1
        persistentVolumeClaim:
          claimName: storage-es-data-1
      - name: storage-2
        persistentVolumeClaim:
          claimName: storage-es-data-2
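Note that this only works because the old PVCs were never deleted and are still Bound. Since the PVs above have a Delete reclaim policy, deleting a claim would normally remove the underlying volume and its data as well. If a claim had been deleted but its PV retained (reclaim policy Retain), a new claim could be pre-bound to that specific PV by name. A minimal sketch, assuming the PV name from the listing above and that the PV's claimRef has been cleared:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-es-data-0
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  volumeName: pvc-19429b0b-ba42-11e7-979d-42010a840ff7  # pre-bind to the existing PV
  resources:
    requests:
      storage: 12Gi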

Related

Nexus on k3s does not persist users and data on restart

I have installed Nexus on a K3s Raspberry Pi cluster with the following setup, for Kubernetes learning purposes. First I created a StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
  namespace: dev-ops
spec:
  serviceName: "nexus"
  replicas: 1
  selector:
    matchLabels:
      app: nexus-server
  template:
    metadata:
      labels:
        app: nexus-server
    spec:
      containers:
        - name: nexus
          image: klo2k/nexus3:latest
          env:
            - name: MAX_HEAP
              value: "800m"
            - name: MIN_HEAP
              value: "300m"
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: nexusstorage
              mountPath: /sonatype-work
      volumes:
        - name: nexusstorage
          persistentVolumeClaim:
            claimName: nexusstorage
Storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexusstorage
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fsType: "ext4"
  diskSelector: "ssd"
  nodeSelector: "ssd"
pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexusstorage
  namespace: dev-ops
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nexusstorage
  resources:
    requests:
      storage: 50Gi
Service
apiVersion: v1
kind: Service
metadata:
  name: nexus-server
  namespace: dev-ops
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8081'
spec:
  selector:
    app: nexus-server
  type: LoadBalancer
  ports:
    - port: 8081
      targetPort: 8081
      nodePort: 32000
This setup spins up Nexus, but if I restart the pod the data does not persist and I have to recreate all the setup and users from scratch.
What am I missing in this case?
UPDATE
I got it working: Nexus needs write permissions on the mounted data directory. The working StatefulSet looks as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nexus
  namespace: dev-ops
spec:
  serviceName: "nexus"
  replicas: 1
  selector:
    matchLabels:
      app: nexus-server
  template:
    metadata:
      labels:
        app: nexus-server
    spec:
      securityContext:
        runAsUser: 200
        runAsGroup: 200
        fsGroup: 200
      containers:
        - name: nexus
          image: klo2k/nexus3:latest
          env:
            - name: MAX_HEAP
              value: "800m"
            - name: MIN_HEAP
              value: "300m"
          resources:
            limits:
              memory: "4Gi"
              cpu: "1000m"
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8081
          volumeMounts:
            - name: nexus-storage
              mountPath: /nexus-data
      volumes:
        - name: nexus-storage
          persistentVolumeClaim:
            claimName: nexus-storage
The important snippet to get it working:
securityContext:
  runAsUser: 200
  runAsGroup: 200
  fsGroup: 200
I'm not familiar with that image, although checking Docker Hub, they mention using a Dockerfile similar to Sonatype's. I would therefore change the mount point for your volume to /nexus-data.
This is the default path for storing data (they set this env var, then declare a VOLUME), which we can confirm by looking at the repository that most likely produced your ARM-capable image.
And following up on your last comment, let's try to also mount it in /opt/sonatype/sonatype-work/nexus3...
In your StatefulSet, change volumeMounts to this:
volumeMounts:
  - name: nexusstorage
    mountPath: /nexus-data
  - name: nexusstorage
    mountPath: /opt/sonatype/sonatype-work/nexus3
volumes:
  - name: nexusstorage
    persistentVolumeClaim:
      claimName: nexusstorage
The second volumeMount entry should not be necessary, though, as far as I understand. Maybe something's wrong with your storage provider?
Are you sure your PVC is writable? Reverting to your initial configuration, enter your pod (kubectl exec -it) and try to write a file at the root of your PVC.
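If write permissions on the volume do turn out to be the culprit (as the asker's update above suggests), a common alternative to fsGroup is an init container that chowns the data directory to the Nexus UID/GID (200, per the securityContext in the update) before the main container starts. A rough sketch under the same volume name; this is not part of the original answer:
initContainers:
  - name: fix-permissions
    image: busybox
    # chown the mounted volume to the nexus user (uid/gid 200) before Nexus starts
    command: ["sh", "-c", "chown -R 200:200 /nexus-data"]
    volumeMounts:
      - name: nexusstorage
        mountPath: /nexus-data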

How to configure pv and pvc for single pod with multiple containers in kubernetes

I need to create a single pod with multiple containers for MySQL, MongoDB and MSSQL. My question is: should I create a persistent volume and persistent volume claim for each container and specify each volume in the pod configuration, or is a single PV & PVC enough for all the containers in a single pod, as in the configs below?
Could you verify whether the configuration below is sufficient?
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypod-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypod-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod
  labels:
    app: mypod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: mypod-pvc
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          ports:
            - containerPort: 3306
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mysql"
              name: task-pv-storage
        - name: mongodb
          image: openshift/mongodb-24-centos7
          ports:
            - containerPort: 27017
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/lib/mongodb"
              name: task-pv-storage
        - name: mssql
          image: mcr.microsoft.com/mssql/server
          ports:
            - containerPort: 1433
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/opt/mssql"
              name: task-pv-storage
      imagePullSecrets:
        - name: devplat
You should not be running multiple database containers inside a single pod.
Consider running each database in a separate StatefulSet.
Follow the reference below for MySQL:
https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/
You need to adopt a similar approach for MongoDB and the other databases as well.
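For illustration only, a rough sketch (not production-ready; names and sizes are assumptions) of the "one StatefulSet per database" approach for the MySQL container from the question, with its own PVC created through volumeClaimTemplates instead of a shared PV:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql/mysql-server:latest
          # database credentials/env omitted for brevity
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: manual
        resources:
          requests:
            storage: 3Gi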

pod has unbound immediate PersistentVolumeClaims (repeated 3 times)

What is wrong with the manifest below?
# config for es data node
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: infra
  name: elasticsearch-data-config
  labels:
    app: elasticsearch
    role: data
data:
  elasticsearch.yml: |-
    cluster.name: ${CLUSTER_NAME}
    node.name: ${NODE_NAME}
    discovery.seed_hosts: ${NODE_LIST}
    cluster.initial_master_nodes: ${MASTER_NODES}
    network.host: 0.0.0.0
    node:
      master: false
      data: true
      ingest: false
    xpack.security.enabled: true
    xpack.monitoring.collection.enabled: true
---
# service for es data node
apiVersion: v1
kind: Service
metadata:
  namespace: infra
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  ports:
  - port: 9300
    name: transport
  selector:
    app: elasticsearch
    role: data
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: infra
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch-data
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms300m -Xmx300m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /data/db
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
Output of kubectl describe on the StatefulSet's pod:
Name: elasticsearch-data-0
Namespace: infra
Priority: 0
Node: <none>
Labels: app=elasticsearch-data
controller-revision-hash=elasticsearch-data-76bdf989b6
role=data
statefulset.kubernetes.io/pod-name=elasticsearch-data-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/elasticsearch-data
Init Containers:
increase-vm-max-map:
Image: busybox
Port: <none>
Host Port: <none>
Command:
sysctl
-w
vm.max_map_count=262144
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9nhmg (ro)
Containers:
elasticsearch-data:
Image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
Port: 9300/TCP
Host Port: 0/TCP
Environment:
CLUSTER_NAME: elasticsearch
NODE_NAME: elasticsearch-data
NODE_LIST: elasticsearch-master,elasticsearch-data,elasticsearch-client
MASTER_NODES: elasticsearch-master
ES_JAVA_OPTS: -Xms300m -Xmx300m
Mounts:
/data/db from elasticsearch-data-persistent-storage (rw)
/usr/share/elasticsearch/config/elasticsearch.yml from config (ro,path="elasticsearch.yml")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9nhmg (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
elasticsearch-data-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elasticsearch-data-persistent-storage-elasticsearch-data-0
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: elasticsearch-data-config
Optional: false
default-token-9nhmg:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9nhmg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 46s (x4 over 3m31s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
kubectl get sc
NAME PROVISIONER AGE
standard (default) kubernetes.io/gce-pd 5d19h
kubectl get pv
No resources found in infra namespace.
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-persistent-storage-elasticsearch-data-0 Pending gp2 8h
It looks like there is some issue with your PVC.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
elasticsearch-data-persistent-storage-elasticsearch-data-0 Pending gp2 8h
As you can see, your PV has not been created either. I think there is an issue with your storage class: it looks like the gp2 storage class is not available in your cluster.
Either apply this YAML file if you are on AWS EKS:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
or simply change the storage class to standard if you are on GCP GKE.
From the docs here
The storage for a given Pod must either be provisioned by a
PersistentVolume Provisioner based on the requested storage class, or
pre-provisioned by an admin.
There should either be a StorageClass that can dynamically provision the PV (reference its storageClassName in the volumeClaimTemplates), or a pre-provisioned PV that can satisfy the PVC.
volumeClaimTemplates:
- metadata:
    name: elasticsearch-data-persistent-storage
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: "standard"
    resources:
      requests:
        storage: 10Gi
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: default
  name: elasticsearch-data
  labels:
    app: elasticsearch
    role: data
spec:
  serviceName: "elasticsearch-data"
  replicas: 1
  template:
    metadata:
      labels:
        app: elasticsearch-data
        role: data
    spec:
      containers:
      - name: elasticsearch-data
        image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
        env:
        - name: CLUSTER_NAME
          value: elasticsearch
        - name: NODE_NAME
          value: elasticsearch-data
        - name: NODE_LIST
          value: elasticsearch-master,elasticsearch-data,elasticsearch-client
        - name: MASTER_NODES
          value: elasticsearch-master
        - name: "ES_JAVA_OPTS"
          value: "-Xms300m -Xmx300m"
        ports:
        - containerPort: 9300
          name: transport
        volumeMounts:
        - name: config
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          readOnly: true
          subPath: elasticsearch.yml
        - name: elasticsearch-data-persistent-storage
          mountPath: /data/db
      volumes:
      - name: config
        configMap:
          name: elasticsearch-data-config
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
---
This worked for me. As Avinash said, I simply changed the storage class to standard in GCP GKE.
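Alternatively, per the docs quote above, an admin can pre-provision a PV that satisfies the claim when dynamic provisioning is not available. A hedged sketch for a GKE cluster, assuming the volumeClaimTemplates request the standard storage class as shown earlier and that a GCE disk with the hypothetical name elasticsearch-data-disk-0 already exists:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-pv-0
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: elasticsearch-data-disk-0   # hypothetical pre-created GCE disk
    fsType: ext4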

Kubernetes Permission denied for mounted nfs volume

The following is the k8s definition used:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 200Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: nfs-pv-provisioning-demo
---
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    role: nfs-server
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: nfs-server
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
---
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - image: busybox
        command:
        - sh
        - -c
        - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
Now the /mnt directory in nfs-busybox should have 2000 as its gid (as per the docs). But it still has root as both user and group. Since the application runs as 1000/2000, it is not able to create any logs or data in the /mnt directory.
chmod might solve the issue, but it looks like a workaround. Is there any permanent solution for this?
Observation: if I replace nfs with some other PVC, it works fine as described in the docs.
Have you tried the initContainers method? It fixes the permissions on the exported directory:
initContainers:
- name: volume-mount-hack
  image: busybox
  command: ["sh", "-c", "chmod -R 777 /exports"]
  volumeMounts:
  - name: nfs
    mountPath: /exports
If you use a standalone NFS server on a Linux box, I suggest using the no_root_squash export option:
/exports *(rw,no_root_squash,no_subtree_check)
To manage the directory permissions on the nfs-server, you need to change its security context and raise it to privileged mode:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-server
  labels:
    role: nfs-server
spec:
  containers:
  - name: nfs-server
    image: nfs-server
    ports:
    - name: nfs
      containerPort: 2049
    securityContext:
      privileged: true

Kubernetes not claiming persistent volume - "failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected."

I'm not sure why the persistent volume is not being claimed, or what steps I could take to diagnose this further.
Should the claim size match the volume size? Should the volume size match the GCP disk size?
This is so difficult to test and figure out...
My goal here is just to be able to create a WordPress instance, even with a single replica, as long as it supports rolling deployments.
Output of kubectl get pods:
NAME READY STATUS RESTARTS AGE
wordpress-1546832918-mz4rt 0/3 Pending 0 47m
wordpress-1546832918-p0s1s 0/3 Pending 0 47m
Output of kubectl describe pods:
...truncated...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
47m 3s 168 default-scheduler Warning FailedScheduling [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected.]
Output of kubectl get pvc:
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
task-pv-claim Pending manual 4h
Output of kubectl get pv:
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 10Gi RWX Retain Available manual 4h
production.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - image: eu.gcr.io/abcxyz/wordpress:deploy-1502807720
        name: wordpress
        imagePullPolicy: "Always"
        env:
        - name: WORDPRESS_HOST
          value: localhost
        - name: WORDPRESS_DB_USERNAME
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      - image: eu.gcr.io/abcxyz/nginx:deploy-1502807720
        name: nginx
        imagePullPolicy: "Always"
        ports:
        - containerPort: 80
          name: nginx
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
          readOnly: true
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=abcxyz:europe-west1:wordpressdb2=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: "task-pv-claim"
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
pVolume.yaml
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
spec:
storageClassName: manual
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteMany"
gcePersistentDisk:
fsType: "ext4"
pdName: "wordpress-disk"
pVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
The spec.accessModes of your persistent volume claim has to match that of the persistent volume. Try changing both of them to the same value.
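For clarity, a sketch of what that first suggestion would look like applied to the claim from the question: this is the asker's pVolumeClaim.yaml with only the accessModes changed to match the PV's ReadWriteMany.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi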
If that doesn't work, you can add a spec.selector definition to your persistent volume claim, updating it to match your persistent volume's metadata.labels, like this:
apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
labels:
name: "pv0001" # can be anything as long as it matches the selector in the pvc
spec:
...
----
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
selector:
matchLabels:
name: "pv0001"
The spec.selector serves as a filter to ensure that only PVs with the specified labels are matched.