Default Grafana K8s app PV issue: FailedBinding persistentvolume-controller no persistent volumes available for this claim and no storage class is set - kubernetes

I am simply trying to deploy this Grafana app as-is, no changes to the YAML have been made: https://grafana.com/docs/grafana/latest/setup-grafana/installation/kubernetes/
VMs are Ubuntu 20.04 LTS. The Kubernetes cluster consists of a control-plane/master node and 3 worker nodes:
root@k8s-master:~# kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s-master    Ready    control-plane   35d     v1.24.2
k8s-worker1   Ready    worker          4h24m   v1.24.2
k8s-worker2   Ready    worker          4h24m   v1.24.2
k8s-worker3   Ready    worker          4h24m   v1.24.2
Other K8s Pods such as NGINX run without issue.
However, the Grafana pod cannot start and is stuck in a Pending state:
root@k8s-master:~# kubectl create -f grafana.yaml
persistentvolumeclaim/grafana-pvc created
deployment.apps/grafana created
service/grafana created
# time passed here...
root@k8s-master:~# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
grafana-9bd5bbd6b-k7ljz   0/1     Pending   0          3h39m
Troubleshooting this, I found an issue with the PersistentVolumeClaim (PVC):
root@k8s-master:~# kubectl get pvc
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
grafana-pvc   Pending                                                     2m22s
root@k8s-master:~#
root@k8s-master:~# kubectl describe pvc grafana-pvc
Name:          grafana-pvc
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       grafana-9bd5bbd6b-k7ljz
Events:
  Type    Reason         Age                  From                         Message
  ----    ------         ----                 ----                         -------
  Normal  FailedBinding  6s (x11 over 2m30s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
UPDATE:
I created a StorageClass and set it as default:
root@k8s-master:~# kubectl get sc
NAME                PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
generic (default)   no-provisioner   Delete          Immediate           false                  19m
I also created a PersistentVolume:
root@k8s-master:~# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                   STORAGECLASS   REASON   AGE
task-pv-volume   10Gi       RWO            Retain           Released   default/task-pv-claim   manual                  12m
However, now when I try to deploy the Grafana PVC it is still stuck - why?
root@k8s-master:~# kubectl get pvc
NAME          STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
grafana-pvc   Pending                                      generic        4m16s
root@k8s-master:~# kubectl describe pvc grafana-pvc
Name:          grafana-pvc
Namespace:     default
StorageClass:  generic
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: no-provisioner
               volume.kubernetes.io/storage-provisioner: no-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       grafana-9bd5bbd6b-mmqs6
               grafana-9bd5bbd6b-pvhtm
               grafana-9bd5bbd6b-rtwgj
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  ExternalProvisioning  12s (x19 over 4m27s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "no-provisioner" or manually created by system administrator

I tried creating the Grafana configuration file from the documentation and it deployed successfully: the pod reaches a Running state, and the PVC (PersistentVolumeClaim) shows the StorageClass as standard.
Below is the output for the PVC:
$ kubectl describe pvc grafana-pvc
Name:          grafana-pvc
Namespace:     default
StorageClass:  standard
Status:        Bound
Volume:        pvc-ee20cc5d-6ca5-4075-b5f3-d1a6323a5241
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       grafana-75789d79d4-wbgtv
Events:        <none>
In your case, however, the StorageClass field is empty. So, try deleting the existing resources and recreating them from the Grafana configuration file. If that still fails with the same error message, "no persistent volumes available for this claim and no storage class is set", you will have to create a PV (PersistentVolume).
The error means your PVC has not found a matching PV and you also haven't set any storageClass name on the claim. After you create a PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements; if it finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
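As a quick sanity check (plain kubectl commands, nothing specific to this cluster), you can compare what the claim asks for against the classes and volumes that actually exist:

kubectl get storageclass
kubectl get pv
kubectl get pvc
kubectl get pv -o custom-columns=NAME:.metadata.name,CLASS:.spec.storageClassName,CAPACITY:.spec.capacity.storage,MODES:.spec.accessModes,STATUS:.status.phase

A PV can only bind to the claim if its storageClassName, capacity, and access modes all satisfy what the PVC requests.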
To resolve your issue, create a StorageClass using the kubernetes.io/no-provisioner provisioner, then create a PV (PersistentVolume) that sets that storageClassName, and finally create the PVC and Pod/Deployment.
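As an illustration, a minimal sketch of that sequence (the PV uses a hostPath source with an illustrative path; kubernetes.io/no-provisioner is the provisioner the Kubernetes docs use for manually managed volumes; adjust names and sizes to your environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: generic
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv              # illustrative name
spec:
  storageClassName: generic     # must match the class the PVC resolves to
  capacity:
    storage: 1Gi                # must be >= what the PVC requests
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/grafana     # illustrative; any supported volume source works

Note that with volumeBindingMode: WaitForFirstConsumer the PVC will stay Pending until a pod that uses it is scheduled; that is expected. A no-provisioner class never provisions anything dynamically, so every PVC needs a manually created PV whose class, size, and access modes satisfy it.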
Refer to stackpost1 and stackpost2 for more information.

Related

Kubernetes OpenSearch Deployment | "no persistent volumes available for this claim and no storage class is set" error

We deployed OpenSearch on a 3-node Kubernetes cluster according to the documentation instructions (https://opensearch.org/docs/latest/opensearch/install/helm/). After deployment the pods are in a Pending state, and when checking them we see the following message:
"
persistentvolume-controller no persistent volumes available for this claim and no storage class is set
"
Can you please advise what could be wrong in our OpenSearch/Kubernetes deployment, or what might be missing from a configuration perspective?
Sharing some info:
Cluster nodes:
[root@I***-M1 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
ir***-m1   Ready    control-plane,master   4h34m   v1.23.4
ir***-w1   Ready    <none>                 3h41m   v1.23.4
ir***-w2   Ready    <none>                 3h19m   v1.23.4
Pods State:
[root@I****1 ~]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
opensearch-cluster-master-0   0/1     Pending   0          80m
opensearch-cluster-master-1   0/1     Pending   0          80m
opensearch-cluster-master-2   0/1     Pending   0          80m
[root@I****M1 ~]# kubectl describe pvc
Name:          opensearch-cluster-master-opensearch-cluster-master-0
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=my-deployment
               app.kubernetes.io/name=opensearch
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       opensearch-cluster-master-0
Events:
  Type    Reason         Age                       From                         Message
  ----    ------         ----                      ----                         -------
  Normal  FailedBinding  2m24s (x18125 over 3d3h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
.....
[root@IR****M1 ~]# kubectl get pv
NAME                                                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
opensearch-cluster-master-opensearch-cluster-master-0   30Gi       RWO            Retain           Available           manual                  6h24m
opensearch-cluster-master-opensearch-cluster-master-1   30Gi       RWO            Retain           Available           manual                  6h22m
opensearch-cluster-master-opensearch-cluster-master-2   30Gi       RWO            Retain           Available           manual                  6h23m
task-pv-volume                                          60Gi       RWO            Retain           Available           manual                  7h48m
[root@I****M1 ~]# kubectl get pvc
NAME                                                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
opensearch-cluster-master-opensearch-cluster-master-0   Pending                                                      3d3h
opensearch-cluster-master-opensearch-cluster-master-1   Pending                                                      3d3h
opensearch-cluster-master-opensearch-cluster-master-2   Pending                                                      3d3h
...no storage class is set...
Try upgrading your deployment with a storage class. Assuming you run on AWS EKS:
helm upgrade my-deployment opensearch/opensearch --set persistence.storageClass=gp2
If you are running on GKE, change gp2 to standard. On AKS, change it to default.
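Equivalently (a sketch assuming the same persistence.storageClass chart value used above), the override can live in a values file:

# values-storage.yaml (hypothetical file name)
persistence:
  storageClass: gp2   # gp2 on EKS; standard on GKE; default on AKS

helm upgrade my-deployment opensearch/opensearch -f values-storage.yaml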

EKS - Pod has unbound immediate PersistentVolumeClaims on t2.large instances (t2.large, Bottlerocket OS)

I've looked through several solutions but couldn't find an answer. I'm trying to run a StatefulSet on the cluster, but the pod fails to run because of an unbound claim. I'm running t2.large machines with Bottlerocket host types.
kubectl get events
28m   Warning   FailedScheduling         pod/carabbitmq-0   pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
28m   Normal    Scheduled                pod/carabbitmq-0   Successfully assigned default/carabbitmq-0 to ip-x.compute.internal
28m   Normal    SuccessfulAttachVolume   pod/carabbitmq-0   AttachVolume.Attach succeeded for volume "pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7"
28m   Normal    Pulled                   pod/carabbitmq-0   Container image "busybox:1.30.1" already present on machine
kubectl get pv,pvc + describe
NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-carabbitmq-0   Bound    pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7   30Gi       RWO            gp2            12m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7   30Gi       RWO            Retain           Bound    rabbitmq/data-carabbitmq-0   gp2                     12m
describe pv:
Name:            pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7
Labels:          failure-domain.beta.kubernetes.io/region=eu-west-1
                 failure-domain.beta.kubernetes.io/zone=eu-west-1b
Annotations:     kubernetes.io/createdby: aws-ebs-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gp2
Status:          Bound
Claim:           rabbitmq/data-carabbitmq-0
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        30Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/zone in [eu-west-1b]
             failure-domain.beta.kubernetes.io/region in [eu-west-1]
Message:
Source:
  Type:       AWSElasticBlockStore (a Persistent Disk resource in AWS)
  VolumeID:   aws://eu-west-1b/vol-xx
  FSType:     ext4
  Partition:  0
  ReadOnly:   false
Events:          <none>
describe pvc:
Name:          data-carabbitmq-0
Namespace:     rabbitmq
StorageClass:  gp2
Status:        Bound
Volume:        pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7
Labels:        app=rabbitmq-ha
               release=rabbit-mq
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      30Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    carabbitmq-0
Events:
  Type    Reason                 Age   From                         Message
  ----    ------                 ----  ----                         -------
  Normal  ProvisioningSucceeded  36m   persistentvolume-controller  Successfully provisioned volume pvc-f6e8ec20-4bc1-4539-8d11-2dd1b3dbd4d7 using kubernetes.io/aws-ebs
The storage type is gp2.
Name:                  gp2
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            encrypted=true,type=gp2
AllowVolumeExpansion:  <unset>
MountOptions:
  debug
ReclaimPolicy:         Retain
VolumeBindingMode:     Immediate
Events:                <none>
I'm not sure what I'm missing; the same configuration used to work until I switched to "t"-type EC2 instances.
So, it was weird, but I had a readiness probe that was failing its health checks, and I thought that was because the volume was not mounted properly.
The health check basically made a request to localhost, which had issues (not sure why); changing it to 127.0.0.1 made the check pass, and then the volume error disappeared.
So, if you hit this weird issue (volumes were mounted, but you still get that error), check the pod's probes.
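For illustration, the kind of change described above as it would look in a pod spec. This is a hypothetical probe (the container name, path, and port are made up); httpGet.host is the standard probe field, which defaults to the pod IP when unset:

containers:
  - name: app                   # illustrative container name
    image: busybox:1.30.1
    readinessProbe:
      httpGet:
        host: 127.0.0.1         # explicit loopback instead of relying on "localhost" resolution
        path: /healthz          # illustrative endpoint
        port: 15672             # illustrative port
      initialDelaySeconds: 10
      periodSeconds: 10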

PVC status is stuck on pending and PV status is available

I was trying to increase the PVC size from 10G to 20G. Since we are running on 1.9.3, doing it online is not possible, so I deleted the PVC and recreated it with the new value of 20G as storage.
pvc-b196868cd-bc75-12e8-ad32-075738325c   100Gi   RWO   Retain   Released   myapp/myapp-backup-pv-claim   persistent   4m
As I deleted it, the PV status turned to "Released", and when I tried to recreate the PVC it got created, but with the status "Lost":
myapp-myapp-backup-pv-claim   Lost   pvc-03b34iknca1-6fr3-19ea-af3b-0073yh2u97f   0   ntfts19-k8s-0101   13m
We are using vSphere volumes. I tried the suggested fix of kubectl patch pv pv-for-rabbitmq -p '{"spec":{"claimRef": null}}', which brought the PV back to the "Available" status, but now the PVC is stuck in the "Pending" state:
pvc-b196868cd-bc75-12e8-ad32-075738325c   100Gi   RWO   Retain   Available   myapp/myapp-backup-pv-claim   persistent   2m
myapp-myapp-backup-pv-claim   Pending   pvc-03b34iknca1-6fr3-19ea-af3b-0073yh2u97f   0   ntfts19-k8s-0101   28m
PVC describe (note: the fields here, Namespace and Volume, identify this as the claim):
Name:          myapp-myapp-backup-pv-claim
Namespace:     myapp
StorageClass:  ntfts19-k8s-0101
Status:        Pending
Volume:        pvc-03b34iknca1-6fr3-19ea-af3b-0073yh2u97f
Labels:        app=my-app
Annotations:   <none>
Finalizers:    []
Capacity:      0
Access Modes:
Events:        <none>
PV describe (the Claim and Reclaim Policy fields identify this as the volume):
Name:            pvc-b196868cd-bc75-12e8-ad32-075738325c
Labels:          <none>
Annotations:     <none>
StorageClass:    persistent
Status:          Available
Claim:           myapp/myapp-backup-pv-claim
Reclaim Policy:  Retain
Access Modes:    RWO
Capacity:        100Gi
Message:
Source:
  Type:        vSphereVolume (a Persistent Disk resource in vSphere)
  VolumePath:  [dsNTFTS19_0101] kubevols/kubernetes-dynamic-pvc-b196868cd-bc75-12e8-ad32-075738325c.vmdk
  FSType:      ext4
  StoragePolicyName:
Events:          <none>
The problem was missing annotations: since this is vSphere storage, the annotation volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/vsphere-volume is mandatory.
The storage class of the PV and the PVC should be the same. The control plane can only bind a PVC to a PV if it can find a PV with the same storage class.
Your PVC has storageClass: ntfts19-k8s-0101 and your PV has storageClass: persistent, so the control plane couldn't find a matching PV with storageClass ntfts19-k8s-0101.
Delete and recreate the PVC to match the storage class of the PV.
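A sketch of the recreated claim, using the names and sizes from the outputs above (the provisioner annotation is included per the note on vSphere storage):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-backup-pv-claim
  namespace: myapp
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
spec:
  storageClassName: persistent   # same class as the existing PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi             # fits the PV's 100Gi capacity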
Please refer to the official documentation.

PVC in pending state

I am trying to provision a PV with RBD using https://github.com/kubernetes/kubernetes/tree/release-1.7/examples/persistent-volume-provisioning/rbd
But I have hit an issue where my PVC is stuck in the Pending state without any meaningful log:
root@ubuntu:~# kubectl describe pvc
Name:          claim1
Namespace:     default
StorageClass:  fast
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/rbd
Capacity:
Access Modes:
Events:        <none>
It seems you don't have a volume defined in your PVC.
I have also hit this problem when the volumeName was incorrect or didn't exist. Indeed, there are no logs or events that show the problem.
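For reference, volumeName is the spec field that pins a claim to one specific, pre-existing PV (a sketch; the volume name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  storageClassName: fast
  volumeName: my-existing-pv    # must name a real PV in the Available state
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi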
If all is working fine, the status should be:
Status: Bound

PersistentVolumeClaim Bound but with 0 capacity using Kubernetes on Google Compute Engine

I'm experiencing a strange problem using k8s 1.3.2 on GCE. I have a 100GB disk set up, and a valid (and Bound) PersistentVolume. However, my PersistentVolumeClaim is showing up with a capacity of 0, even though its status is Bound, and the pod that is trying to use it is stuck in ContainerCreating.
Hopefully the outputs from kubectl below summarise the problem:
$ gcloud compute disks list
NAME                                  ZONE             SIZE_GB   TYPE          STATUS
disk100-001                           europe-west1-d   100       pd-standard   READY
gke-unrest-micro-pool-199acc6c-3p31   europe-west1-d   100       pd-standard   READY
gke-unrest-micro-pool-199acc6c-4q55   europe-west1-d   100       pd-standard   READY
$ kubectl get pv
NAME             CAPACITY   ACCESSMODES   STATUS   CLAIM                           REASON   AGE
pv-disk100-001   100Gi      RWO           Bound    default/graphite-statsd-claim            2m
$ kubectl get pvc
NAME                    STATUS   VOLUME           CAPACITY   ACCESSMODES   AGE
graphite-statsd-claim   Bound    pv-disk100-001   0                        3m
$ kubectl describe pvc
Name:          graphite-statsd-claim
Namespace:     default
Status:        Bound
Volume:        pv-disk100-001
Labels:        <none>
Capacity:      0
Access Modes:
$ kubectl describe pv
Name:            pv-disk100-001
Labels:          <none>
Status:          Bound
Claim:           default/graphite-statsd-claim
Reclaim Policy:  Recycle
Access Modes:    RWO
Capacity:        100Gi
Message:
Source:
  Type:       GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
  PDName:     disk100-001
  FSType:     ext4
  Partition:  0
  ReadOnly:   false
# Events for pod that is supposed to mount this volume:
Events:
  FirstSeen  LastSeen  Count  From                                            Type     Reason       Message
  ---------  --------  -----  ----                                            ----     ------       -------
  6h         1m        183    {kubelet gke-unrest-micro-pool-199acc6c-4q55}   Warning  FailedMount  Unable to mount volumes for pod "graphite-statsd-1873928417-i05ef_default(bf9fa0e5-4d8e-11e6-881c-42010af001fe)": timeout expired waiting for volumes to attach/mount for pod "graphite-statsd-1873928417-i05ef"/"default". list of unattached/unmounted volumes=[graphite-data]
  6h         1m        183    {kubelet gke-unrest-micro-pool-199acc6c-4q55}   Warning  FailedSync   Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "graphite-statsd-1873928417-i05ef"/"default". list of unattached/unmounted volumes=[graphite-data]
# Extract from deploy yaml file:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-disk100-001
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  gcePersistentDisk:
    pdName: disk100-001
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: graphite-statsd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
Any help gratefully received!
Dan, the first issue, "PVC capacity 0", looks like a bug. I opened https://github.com/kubernetes/kubernetes/issues/29425; you can track it there.
The second issue sounds like https://github.com/kubernetes/kubernetes/issues/29166 which is currently under investigation. Feel free to add your repro information on there with your logs, and I'll take a look.