Why is my persistent volume claim pending? - kubernetes

It's been 60 minutes now and my persistent volume claim is still pending.
My storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Minikube did not supply this one; I had to add it with the YAML above. In the dashboard I can click on it and it references the persistent volume, which is green/OK.
My persistent volume (green, ok):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: small-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /data
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - minikube
The reason I need persistent storage is that Node-RED stores its data in /data, so that's what I'm trying to do here: provide it with a persistent volume to store that data. And since this is running locally in Minikube, I can take advantage of the /data folder on the Minikube instance, which per the documentation is persistent.
My persistent volume claim for my Node-RED app:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nodered-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Whether I add the deployment or not, the persistent volume claim stays yellow/pending in the dashboard. Any reason for that? What am I missing here?
Update:
kubectl describe pvc/nodered-claim:
Type    Reason                Age                     From                          Message
----    ------                ----                    ----                          -------
Normal  WaitForFirstConsumer  2m52s (x162 over 42m)   persistentvolume-controller   waiting for first consumer to be created before binding

Update your StorageClass to use Immediate binding:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate # <-- bind as soon as the PVC is created
WaitForFirstConsumer will only bind once a Pod actually uses your PVC.
...Whether I add the deployment or not, the persistent volume claim stays yellow/pending in the dashboard.
Your deployment will also stay in the Pending state if the PVC it needs fails to bind.
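Alternatively, you can keep WaitForFirstConsumer; the claim then binds as soon as a Pod actually mounts it. A minimal sketch of a Deployment that consumes nodered-claim (the image and labels are assumptions, not taken from the question; adjust them to your Node-RED setup):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered
  template:
    metadata:
      labels:
        app: nodered
    spec:
      containers:
        - name: nodered
          image: nodered/node-red    # assumed image name
          volumeMounts:
            - name: nodered-data
              mountPath: /data       # Node-RED keeps its data here
      volumes:
        - name: nodered-data
          persistentVolumeClaim:
            claimName: nodered-claim # the claim from the question
Once this Pod is scheduled, the persistentvolume-controller should bind nodered-claim to small-pv and the claim turns green in the dashboard.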

Related

RabbitMQ cluster-operator does not work in Kubernetes

I have a Kubernetes 1.17.17 cluster with 3 nodes. I deployed it with Rancher.
Following these instructions, I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
It's OK! But...
I have created this very simple configuration for the instance, according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error:
error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! So I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I end up getting an error on the volume claim, which is being generated automatically:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (this is NOT the recommended approach).
I really recommend configuring dynamic provisioning -
it will generate PersistentVolumes for you automatically.
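For example, once a dynamic provisioner and its StorageClass are installed, you can mark that class as the cluster default so that claims which don't set storageClassName (the PVC generated by the RabbitMQ operator typically doesn't) are provisioned automatically. A sketch, assuming your class is called nfs-client:
kubectl patch storageclass nfs-client \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
kubectl get storageclasses   # the default class is listed with "(default)" next to its name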
Manually creating PersistentVolume
As I mentioned, this isn't the recommended approach, but it may be useful when we want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
And then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine where the Pod will be running.
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                          STORAGECLASS   REASON   AGE
persistentvolume/pv   10Gi       RWO            Recycle          Bound    test-rabbitmq/persistence-rabbitmq-server-0                            11m

NAMESPACE       NAME                                                  STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-rabbitmq   persistentvolumeclaim/persistence-rabbitmq-server-0   Bound    pv       10Gi       RWO                           11m

# kubectl get pod -n test-rabbitmq
NAME                READY   STATUS    RESTARTS   AGE
rabbitmq-server-0   1/1     Running   0          11m
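If you want to control which node the data ends up on instead of guessing where rabbitmq-server-0 will land, one option is a local volume pinned to a node with nodeAffinity, following the same pattern as the first question above. A sketch, assuming a node named worker-1 and the local-storage class with volumeBindingMode: WaitForFirstConsumer:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-local-pv        # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/rabbitmq          # directory must already exist on worker-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1       # assumed node name
With WaitForFirstConsumer, the scheduler takes the PV's node affinity into account and places the Pod on worker-1, where the volume lives.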

What Persistent Volume storage solutions can I use when I run my Kubernetes cluster on GCP?

I have deployed my Kubernetes cluster on GCP Compute Engine instances with 3 master nodes and 3 worker nodes (it's not a GKE cluster). Can anybody suggest what storage options I can use for my cluster? If I create a virtual disk on GCP, can I use that disk as persistent storage?
You can use GCE Persistent Disk Storage Class.
Here is how you create the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Then do the following to create the PVC (the PV is provisioned automatically by the class) and attach it to your pod.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: webserver-pd
spec:
  containers:
    - image: httpd
      name: webserver
      volumeMounts:
        - mountPath: /data
          name: dynamic-volume
  volumes:
    - name: dynamic-volume
      persistentVolumeClaim:
        claimName: gce-claim
Example taken from this blog post
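After applying these manifests, you can verify that a disk was provisioned dynamically (a quick check; the generated PV and disk names will differ in your project):
kubectl get pvc gce-claim
kubectl get pv
gcloud compute disks list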
There are two ways of provisioning Persistent Volumes: Static Provisioning and Dynamic Provisioning.
I will briefly describe each of them.
Static Provisioning
Using this approach you need to create the disk, PersistentVolume and PersistentVolumeClaim manually.
I've created a simple example for you to illustrate how it works.
First I created a disk; on GCP we can use the gcloud command:
$ gcloud compute disks create --size 10GB --zone europe-west3-c test-disk
NAME       ZONE             SIZE_GB   TYPE          STATUS
test-disk  europe-west3-c   10        pd-standard   READY
Next I created the PV and PVC using these manifest files:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  gcePersistentDisk:
    pdName: test-disk # This GCE PD must already exist.
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After applying these manifest files, we can check the status of the PV and PVC:
root@km:~# kubectl get pv,pvc
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pv-test   10Gi       RWO            Retain           Bound    default/claim-test                           12m

NAME                               STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-test   Bound    pv-test   10Gi       RWO                           12m
Finally I used the above claim as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx"
          name: vol-test
  volumes:
    - name: vol-test
      persistentVolumeClaim:
        claimName: claim-test
We can inspect the created Pod to check that it works as expected:
root@km:~# kubectl exec -it web -- bash
root@web:/# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
...
/dev/sdb        9.8G   37M   9.8G    1%  /usr/share/nginx
...
Dynamic Provisioning
In this case the volume is provisioned automatically when an application requires it.
First you need to create a StorageClass object that defines a provisioner, e.g. kubernetes.io/gce-pd.
We don't need to create a PersistentVolume anymore; it's created automatically for us via the StorageClass.
I've also created a simple example for you to illustrate how it works.
First I created a StorageClass set as the default storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
And then the PVC (the same as in the previous example) - but in this case the PV was created automatically:
root@km:~# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS   REASON   AGE
persistentvolume/pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976   10Gi       RWO            Delete           Bound    default/claim-test   standard                3m10s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/claim-test   Bound    pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976   10Gi       RWO            standard       3m12s
In more advanced cases it may be useful to create multiple StorageClasses with different persistent disk types.
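For instance, you could keep the default standard class above for pd-standard disks and add an ssd class (as in the first answer) for pd-ssd disks; a claim then opts into the faster disk type explicitly. A brief sketch with a hypothetical claim name:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  storageClassName: ssd      # pd-ssd backed class; claims without this field fall back to the default class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi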

Unable to setup couchbase operator 1.2 with persistent volume on local storage class

I am trying to set up Couchbase Operator 1.2 on my local system.
I followed these steps:
Install the Couchbase Admission Controller.
Deploy the Couchbase Autonomous Operator.
Deploy the Couchbase Cluster.
Access CouchBase from UI.
But the problem with this is that as soon as the system or Docker resets or the pod resets, the cluster's data is lost.
So I tried to do the same by adding a persistent volume with a local storage class, as mentioned in the docs, but the result was still the same: the pod still gets reset, and I am unable to find the reason why.
So can anyone advise on how to do this with a persistent volume on a local storage class? I have successfully created the storage class; I'm just having problems getting the cluster up and keeping its data consistent.
Here are the YAMLs that I used to create the storage class, PV and PV claim:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: myssd
provisioner: local
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-data-2
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: myssd
  hostPath:
    path: "/home/<user>/cb-storage/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-test-claim-2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: myssd
  resources:
    requests:
      storage: 1Gi
Thanks in advance
A persistent volume using hostPath is not durable. Use a local volume instead. Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, because the system is aware of the volume's node constraints by looking at the node affinity on the PersistentVolume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/<User>/cb-storage/
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1
                - node2
                - node3
                - node4
You don't need to create a PersistentVolume manually because the storage class will do that internally.
Also you need to configure the local volume provisioner as discussed here so that dynamic provisioning using the local storage class happens.
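Once the Couchbase Pod that consumes the claim is scheduled, the claim should move from Pending to Bound. A quick way to watch this, using the claim name from the question:
kubectl get pvc my-test-claim-2 -w
kubectl describe pvc my-test-claim-2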

Local PersistentVolumeClaim says "no volume plugin matched"

I recently started exploring Kubernetes and decided to try to deploy Kafka to k8s. However, I have a problem with creating the persistent volume. I create a storage class and a persistent volume, but the persistent volume claim stays in status Pending, saying "no volume plugin matched". These are the YAML files I used, with the dashed lines denoting a new file. Does anybody have an idea why this is happening?
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Retain
------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
---------------------------
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
As MaggieO said, changing ReadWriteMany to ReadWriteOnce was part of the problem. The other part was that I had to create the /mnt/disks/ssd1 folder on my C: drive manually and write "path: /c/mnt/disks/ssd1" instead. Something that is not present in my example, but which I was trying to do anyway and might be helpful to others: I was trying to use two PVCs for one PV, which is impossible. The PV to PVC relationship is 1 to 1.
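For reference, a sketch of how the corrected PV might look after the path fix described above (the ReadWriteOnce change applies to the PVC, shown in the next answer):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce            # matches the corrected PVC access mode
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /c/mnt/disks/ssd1    # C:\mnt\disks\ssd1, created manually on the host
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop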
Your Persistent Volume Claim configuration file should look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-storage
Just change the access mode from ReadWriteMany to ReadWriteOnce.
Let me know if it helped.
You have to bind the PersistentVolumeClaim to your PersistentVolume.
As you have mentioned, the PVC has storageClassName: local-storage.
Try it with storageClassName: kafka-pv instead, so that your PVC gets bound to the PV.
In my case, it was caused by the PV's storage size; just increase the capacity of the PV.
From our experience (under a security-hardened Kubernetes distribution called OpenShift 3.11), this provisioning error happens most often when PVCs cannot be created because the cluster has run out of PVs.
So the cluster admin needs to add some PVs and/or we need to release some other PVCs (unused but still bound to PVs).
Having requested an incorrect PVC capacity is another possible reason (when the cluster is poorly configured and accepts only a single "magic number" for capacity).
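A quick way to check for this situation is to list the volumes and claims and look for PVs stuck in Released, or for the absence of Available ones (on OpenShift, oc works the same way as kubectl here):
kubectl get pv
kubectl get pvc --all-namespaces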

Kubernetes - PVC not binding the NFS PV

I created a persistent volume using NFS and a PVC for the same volume. However, the PVC always creates an EBS disk volume instead of binding to my PV. Please see the log below:
> kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM   STORAGECLASS   REASON    AGE
mynfspv   100Gi      RWX           Retain          Available                                    7s
create PVC now
> kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
mynfspvc   Bound    pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           default        4s
> kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM              STORAGECLASS   REASON    AGE
mynfspv                                    100Gi      RWX           Retain          Available                                               50s
pvc-a081c470-3f23-11e7-9d30-024e42ef6b60   100Gi      RWX           Delete          Bound       default/mynfspvc   default                  17s
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
It looks like you have the dynamic provisioning and default StorageClass features enabled, and the default class is AWS EBS. You can check your default class with the following command:
$ kubectl get storageclasses
NAME                 TYPE
standard (default)   kubernetes.io/aws-ebs
If this is correct, then I think you'll need to specify a storage class to solve your problem.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-class
provisioner: kubernetes.io/fake-nfs
Add storageClassName to both your PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: nfs-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
and your PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  storageClassName: nfs-class
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
You can check out https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 for details.
Which version of Kubernetes is this? The root cause is as mentioned by @ddysher: in your setup the "default" storage class is EBS, as you can see in the get pv/pvc outputs. Depending on your Kubernetes version, you can also make use of a claim selector in the PVC spec. Refer to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-selectors.md
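For reference, a claim-selector sketch that targets the nfs2 label on mynfspv (assuming the PV keeps no storageClassName, as in the original nfs-pv.yaml; setting storageClassName to an empty string stops the default EBS class from provisioning a new volume):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: ""       # empty string: do not use the default (EBS) StorageClass
  selector:
    matchLabels:
      name: nfs2             # matches metadata.labels on mynfspv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi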