Local persistent volume: 1 node(s) didn't find available persistent volumes to bind - kubernetes

I'm getting started with persistent volumes and k8s. I'm trying to use a local folder on my RH7 box with minikube installed. I'm getting the error:
0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind.
StorageClass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /usr/local/docker/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node
PersistentVolumeClaim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 1Gi
Nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      name: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx
        name: nginx-deployment
    spec:
      containers:
      - name: nginx-deployment
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: example-local-claim
[root@localhost docker]# kubectl get pv
NAME               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
example-local-pv   1Gi        RWO            Retain           Available           local-storage            76s
[root@localhost docker]# kubectl get pvc
NAME                  STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
example-local-claim   Pending                                      local-storage   75s

The issue is in your PersistentVolume: it has the following node selector:
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - my-node
Your PV is trying to bind to a node named my-node, which it cannot find because no node with that name exists. Check your node name using:
kubectl get nodes
Then put your actual node name in place of my-node (see the example below) and it will work. Hope this helps.
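For example, on a single-node minikube install the node is usually named minikube (verify this with kubectl get nodes; the node name below is an assumption about your cluster), so the affinity block would become:
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - minikube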

If you work with Docker Desktop on Windows, try this:
Create the directory in WSL:
docker run --rm -it -v /:/k8s alpine mkdir /k8s/mnt/k8s
Create the StorageClass, volume, volume claim and pod:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage
spec:
  capacity:
    storage: 50000Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/k8s
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgredb
  namespace: xxx
spec:
  storageClassName: local-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: postgres
  name: postgres
  namespace: xxx
spec:
  serviceName: postgres
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14-alpine
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgredb
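To verify the result, you can apply the manifests and watch the claim bind once the pod is scheduled (the file name local-storage.yaml and a pre-existing xxx namespace are assumptions):
kubectl apply -f local-storage.yaml
kubectl get pvc -n xxx     # postgredb stays Pending until the pod is scheduled, then turns Bound (WaitForFirstConsumer)
kubectl get pods -n xxx    # postgres-0 should reach Running once the volume is bound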

Related

Kubernetes didn't find available persistent volumes to bind. What is the problem?

I apply the manifests below and this error appears: Warning FailedScheduling 40s default-scheduler 0/3 nodes are available: 3 node(s) didn't find available persistent volumes to bind. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Content:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres
data:
  POSTGRES_DB: db1
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
data:
  POSTGRES_PASSWORD: password
stringData:
  POSTGRES_USER: user
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /var/lib/postgres
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: "local-storage" # For PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14.7
        ports:
        - containerPort: 5432
        envFrom:
        - secretRef:
            name: postgres-secrets
        - configMapRef:
            name: postgres-configmap
        volumeMounts:
        - name: postgres-database-storage
          mountPath: /var/lib/pgsql/data
      volumes:
      - name: postgres-database-storage
        persistentVolumeClaim:
          claimName: postgres-pv-claim
I tried making the PVC request a smaller volume, but that did not help.
I still don't know what the problem was; after restarting the test cluster everything worked.

Kubernetes Minio Volume Node Affinity Conflict

I have set up a testing k3d cluster with 4 agents and a server.
I have a storage class defined thus:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
with a PV defined thus:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: basic-minio-storage
  labels:
    storage-type: object-store-path
spec:
  capacity:
    storage: 500Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data/basic_minio
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3d-test-agent-0
The PVC that I have defined is:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. Will be used in deployment below.
  name: minio-pv-claim
  labels:
    app: basic-minio
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 500Gi
  selector:
    matchLabels:
      storage-type: object-store-path
My deployment is:
# Create a simple single node Minio linked to root drive
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: basic-minio
  namespace: minio
spec:
  selector:
    matchLabels:
      app: basic-minio
  serviceName: "basic-minio"
  template:
    metadata:
      labels:
        app: basic-minio
    spec:
      containers:
      - name: basic-minio
        image: minio/minio:RELEASE.2021-10-10T16-53-30Z
        imagePullPolicy: IfNotPresent
        args:
        - server
        - /data
        env:
        - name: MINIO_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: minio-root-user
        - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: minio-root-password
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: storage
          mountPath: "/data"
      volumes:
      - name: storage
        persistentVolumeClaim:
          claimName: minio-pv-claim
In my Kubernetes dashboard, I can see that the PV is provisioned and ready, and the PVC has bound to it.
But my pod shows the error: 0/5 nodes are available: 5 node(s) had volume node affinity conflict.
What is causing this issue and how can I debug it?
Your (local) volume is created on the worker node k3d-test-agent-0, but none of your pods are scheduled to run on that node. This is not a good approach, but if you must run it this way, you can direct all pods to run on this host:
...
spec:
  nodeSelector:
    kubernetes.io/hostname: k3d-test-agent-0
  containers:
  - name: basic-minio
    ...
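After redeploying, a quick check that the pod actually landed on the node that holds the volume (the minio namespace is taken from the StatefulSet above):
kubectl get pods -n minio -o wide    # the NODE column should show k3d-test-agent-0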

PersistentVolume not utilized in Docker Desktop (MacOS) hosted Kubernetes cluster

I have deployed (using Helm) some services in the K8s cluster hosted by Docker Desktop (macOS). One of the "services" is MongoDB, for which I'm trying to set up a PersistentVolume, so that the actual data will be retained in a macOS local directory between cluster (re)installs (or MongoDB pod replacements). Everything "works" per se, but the MongoDB container process keeps setting up its local directory /data/db as if nothing is really set up in terms of persistent volumes. I've been pulling my hair out for a while now and thought an extra set of eyes might spot what's wrong or missing.
I have several other resources deployed, e.g. a small Micronaut-based backend service which exposes an API to read from the MongoDB instance. All of that works just fine.
Here are the descriptors involved for MongoDB:
PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fo-persons-mongodb
  namespace: fo
  labels:
    app: fo-persons-mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fo-persons-mongodb
  template:
    metadata:
      labels:
        app: fo-persons-mongodb
    spec:
      volumes:
      - name: fo-persons-mongodb-volume-pvc
        persistentVolumeClaim:
          claimName: persons-mongodb-pvc
      containers:
      - name: fo-persons-mongodb
        image: mongo
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: fo-persons-mongodb-volume-pvc
          mountPath: "/data/db"
Service:
apiVersion: v1
kind: Service
metadata:
  name: fo-persons-mongodb
  namespace: fo
spec:
  type: ClusterIP
  selector:
    app: fo-persons-mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /Users/mike/kubernetes/fo/persons/mongodb
Alright! I got it working. Seems I'd made two mistakes. Below are the updated descriptors for the PersistentVolumeClaim and PersistentVolume:
Error #1: Not setting the storageClassName in the spec of the PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persons-mongodb-pvc
  namespace: fo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: local-storage
Error #2: Not setting the node affinity, and using hostPath instead of local.path, both in the PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fo-persons-mongodb-volume
  labels:
    type: local
spec:
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  local:
    path: /Users/mike/kubernetes/fo/persons/mongodb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
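To confirm the fix, you can check that the claim binds to this volume and that the data now lands in the host directory (resource names and the fo namespace are taken from the descriptors above):
kubectl get pv fo-persons-mongodb-volume     # STATUS should be Bound
kubectl get pvc persons-mongodb-pvc -n fo    # VOLUME should show fo-persons-mongodb-volume
ls /Users/mike/kubernetes/fo/persons/mongodb # MongoDB files should appear here after the pod starts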

Minikube: use persistent volume (shared disk) and mount it to host

I'm trying to mount a Linux directory as a shared directory for multiple containers in minikube.
Here is my config:
minikube start --insecure-registry="myregistry.com:5000" --mount --mount-string="/tmp/myapp/k8s/:/data/myapp/share/"
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myapp-share-storage
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  local:
    path: "/data/myapp/share/"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-share-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: myapp-server
  name: myapp-server
spec:
  selector:
    matchLabels:
      io.kompose.service: myapp-server
  template:
    metadata:
      labels:
        io.kompose.service: myapp-server
    spec:
      containers:
      - name: myapp-server
        image: myregistry.com:5000/server-myapp:alpine
        ports:
        - containerPort: 80
        resources: {}
        volumeMounts:
        - mountPath: /data/myapp/share
          name: myapp-share
        env:
        - name: storage__root_directory
          value: /data/myapp/share
      volumes:
      - name: myapp-share
        persistentVolumeClaim:
          claimName: myapp-share-claim
status: {}
It works, but with pitfalls: StatefulSets are not supported; they produce deadlock errors:
pending PVC: waiting for first consumer to be created before binding
pending POD: 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind
Another option is to use a minikube PersistentVolumeClaim without a PersistentVolume (it will be created automatically). However:
The volume is created in /tmp (e.g. /tmp/hostpath-provisioner/default/myapp-share-claim)
Minikube doesn't honor the mount request
How can I make it just work?
Using your YAML file I managed to create the volumes and deploy it without issue, but I had to use the command minikube mount /mydir/:/data/myapp/share/ after starting minikube, since --mount --mount-string="/mydir/:/data/myapp/share/" wasn't working.
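A minimal sketch of that workaround, assuming the host directory /tmp/myapp/k8s from the question and that the manifests above are saved as myapp.yaml (file name assumed); minikube mount blocks, so run it in its own terminal:
minikube start --insecure-registry="myregistry.com:5000"
minikube mount /tmp/myapp/k8s/:/data/myapp/share/   # keep this terminal open
kubectl apply -f myapp.yaml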

Unable to mount local PersistentVolume in k8s 1.13

I'm trying to deploy a stateful set with a persistent volume claim on a bare metal kubernetes cluster (v1.13) but the pod times out when trying to mount the volume.
I have a local-storage storage class defined:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
I have a PV defined:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: cassandradev1
  labels:
    app: cassandra
    environment: dev
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteOnce
  local:
    path: "/data1/cassandradev1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node1
And I have a stateful set that issues a claim (truncated):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra-set
spec:
  ...
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
    spec:
      selector:
        matchLabels:
          app: "cassandra"
          environment: "dev"
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-storage"
      resources:
        requests:
          storage: 1Ti
When I try to apply the stateful set, the Pod gets scheduled but times out:
Normal Scheduled 2m13s default-scheduler Successfully assigned default/cassandra-set-0 to my-node1
Warning FailedMount 13s kubelet, my-node1 Unable to mount volumes for pod "cassandra-set-0 (dd252f77-fda3-11e8-96d3-1866dab905dc)": timeout expired waiting for volumes to attach or mount for pod "default"/"cassandra-set-0". list of unmounted volumes=[cassandra-data]. list of unattached volumes=[cassandra-data default-token-t2dg8]
If I look at the logs for the controller, I see an error message saying no volume plugin matched:
kubectl logs pod/kube-controller-manager -n kube-system
W1212 00:51:24.218114 1 plugins.go:845] FindExpandablePluginBySpec(cassandradev1) -> err:no volume plugin matched
Any ideas on where to look next?
First of all, the PV definition is incorrect: there is no hostPath in the local-storage class. This is how you should define your local-storage PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: cassandradev1
  labels:
    app: cassandra
    environment: dev
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteOnce
  local:
    path: "/data1/cassandradev1"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node1
Also keep in mind that, unlike hostPath, /data1/cassandradev1 must already exist on my-node1: local-storage does not create the path automatically, and if the path is missing when you deploy the StatefulSet, you will get a mount-related error (a sketch for creating it is shown below).
This should resolve your issue. Hope this helps.
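For example, a minimal sketch for creating the backing directory up front, assuming you have SSH access to my-node1:
ssh my-node1 'sudo mkdir -p /data1/cassandradev1'   # the path must exist before the pod tries to mount it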
EDIT: I set up the Cassandra StatefulSet with local-storage using the following YAML files. I have omitted some ConfigMaps, so it will not work as-is, but please check what is different on your side:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  generation: 1
  labels:
    state: cassandra
  name: cassandra
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cassandra
  serviceName: cassandra
  template:
    metadata:
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
      creationTimestamp: null
      labels:
        app: cassandra
    spec:
      containers:
      - args:
        - chmod -R 777 /logs/; /on_start.sh
        command:
        - /bin/sh
        - -c
        image: <image>
        imagePullPolicy: Always
        name: cassandra
        ports:
        - containerPort: 9042
          protocol: TCP
        resources: {}
        volumeMounts:
        - mountPath: /data
          name: data
      imagePullSecrets:
      - name: gcr-imagepull-json-key
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: local-storage
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: local
  name: cassandra-data-vol-0
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-cassandra-0
    namespace: default
  local:
    path: /data/cassandra-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-10-0-1-91.ec2.internal
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
Make sure /data/cassandra-0 exists before you create the PV. Let me know if you face any issues.
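As a sanity check, you can confirm that the PV is reserved for the StatefulSet's claim via its claimRef and ends up Bound (names are taken from the manifests above):
kubectl get pv cassandra-data-vol-0    # CLAIM should show default/data-cassandra-0
kubectl get pvc data-cassandra-0       # STATUS should turn Bound once cassandra-0 is scheduled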