Persistent volume claim not claiming the volume - kubernetes

I have this persistent volume claim
$ kubectl get pvc -ngitlab-managed-apps
NAME                           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-prometheus-server   Pending                                                     0s
$ kubectl describe pvc prometheus-prometheus-server -ngitlab-managed-apps
Name: prometheus-prometheus-server
Namespace: gitlab-managed-apps
StorageClass:
Status: Pending
Volume:
Labels: app=prometheus
chart=prometheus-9.5.2
component=server
heritage=Tiller
release=prometheus
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: prometheus-prometheus-server-78bdf8f5b7-pkvcr
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 14s (x5 over 60s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
And I have created this persistent volume
$ kubectl get pv
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                                              STORAGECLASS   REASON   AGE
prometheus-prometheus-server   8Gi        RWO            Retain           Released   gitlab-managed-apps/prometheus-prometheus-server   manual                  17m
$ kubectl describe pv prometheus-prometheus-server
Name: prometheus-prometheus-server
Labels: type=local
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"prometheus-prometheus-server"}...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Released
Claim: gitlab-managed-apps/prometheus-prometheus-server
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 8Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /var/prometheus-server
HostPathType:
Events: <none>
Why is the claim not claiming the volume? Besides the name, is there anything else that needs to match? Are there any logs I should look into? For now I only see "no persistent volumes available for this claim and no storage class is set"

To understand the error message, you need to understand how static and dynamic provisioning work.
When none of the static PVs the administrator created matches a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume especially for the PVC. This provisioning is based on StorageClasses.
After you create the PersistentVolumeClaim, the Kubernetes control plane looks for a PersistentVolume that satisfies the claim's requirements. If the control plane finds a suitable PersistentVolume with the same StorageClass, it binds the claim to the volume.
Your error says, "Your PVC hasn't found a matching PV, and you also haven't specified any storageClass name."
Your PV has StorageClass: manual, but your PVC doesn't have any storage class set (StorageClass: ""). Adding storageClassName: manual to your PVC's spec should solve your problem.
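For example, the claim could declare the class explicitly. This is a minimal sketch based on the PV shown above (the requested size and access mode simply mirror that PV); note that in practice this claim is created by the Prometheus chart (see its labels), so you may need to set the class through the chart's persistence settings rather than editing the claim directly.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-prometheus-server
  namespace: gitlab-managed-apps
spec:
  storageClassName: manual      # must match the PV's StorageClass
  accessModes:
    - ReadWriteOnce             # the PV above is RWO
  resources:
    requests:
      storage: 8Gi              # no more than the PV's 8Gi capacity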
You need to choose one of the following provisioning options.
Static provisioning:
create a PV
create a PVC with a label selector, so that it can find a PV carrying the labels mentioned (see the two snippets below).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  selector:            # <----- here
    matchLabels:
      release: "stable"
  ... ...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
  labels:              # <---- here
    release: "stable"
Dynamic provisioning:
create a StorageClass with a provisioner specified
create a PVC with that storageClass name specified
Or
create both PV and PVC with the same storageClass name specified.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: some-provisioner
... ... ...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  storageClassName: slow   # <---- here
... ... ...
This is a short description, please visit the official doc for more details.
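Either way, once a matching PV or StorageClass exists, you can watch the claim bind, for example with:
kubectl get pvc -n gitlab-managed-apps -w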

Related

RabbitMQ cluster-operator does not work in Kubernetes

RabbitMQ cluster operator does not work in Kubernetes.
I have a Kubernetes cluster 1.17.17 of 3 nodes. I deployed it with Rancher.
Following this instruction, I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
It's OK! But...
I have created this very simple configuration for the instance, according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error: error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! I added the following storage class:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and a PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I end up getting an error on the automatically generated volume claim:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (this is NOT the recommended approach).
I really recommend configuring dynamic provisioning -
this will allow you to generate PersistentVolumes automatically.
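As a rough sketch, once a dynamic NFS provisioner such as the one from the linked article is deployed, the StorageClass could look like the following; the class name, provisioner string and reclaim policy below are placeholders that must match whatever provisioner you actually install.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic             # hypothetical name
provisioner: example.com/nfs    # placeholder: must match the deployed provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
Any PVC that references this class by name (or falls back to it if you mark it as the cluster default) then gets a PersistentVolume created for it automatically.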
Manually creating a PersistentVolume
As I mentioned, this isn't the recommended approach, but it may be useful when you want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
Then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine in advance where the Pod will run.
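Once the Pod has been scheduled, you can check which of the three nodes it landed on, for example:
kubectl get pod rabbitmq-server-0 -n test-rabbitmq -o wide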
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                         STORAGECLASS   REASON   AGE
persistentvolume/pv   10Gi       RWO            Recycle          Bound    test-rabbitmq/persistence-rabbitmq-server-0                            11m

NAMESPACE       NAME                                                  STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-rabbitmq   persistentvolumeclaim/persistence-rabbitmq-server-0   Bound    pv       10Gi       RWO                           11m

# kubectl get pod -n test-rabbitmq
NAME                READY   STATUS    RESTARTS   AGE
rabbitmq-server-0   1/1     Running   0          11m

External provisioner "alicloud/disk" fails to create volume

I use a managed k8s solution on AliCloud.
I created a StorageClass like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-pv-class
parameters:
  type: cloud_ssd
  regionid: cn-beijing
  zoneid: cn-beijing-g
provisioner: alicloud/disk
reclaimPolicy: Retain
volumeBindingMode: Immediate
When I try to create the PVC:
apiVersion: v1
kind: List
items:
  - kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: node-pv
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: alicloud-pv-class
      resources:
        requests:
          storage: 8Gi
I get:
Name: node-pv
Namespace: default
StorageClass: alicloud-pv-class
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner: alicloud/disk
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 11s (x6 over 75s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "alicloud/disk" or manually created by system administrator
Even when I manually create the PVC, bind it to a PV, and install the Helm chart for ZooKeeper, I get this from the pod:
mkdir: cannot create directory '/bitnami/zookeeper/data': Permission denied
Any ideas?
I was not able to solve the issue, but the problems I had were related to Aliyun Managed Serverless K8s. Even Aliyun Support admitted that such a configuration is difficult, and they did not provide any solution. We decided to go with Managed K8s (non-serverless). We use Terraform scripts, and everything works
out of the box, including Ingress, LogTrail and PVCs, which were a real pain with serverless managed k8s.
The point is: don't waste your time with Managed Serverless K8s if you need logs and PVCs. It doesn't work - at least it hasn't worked for us, and there was not much help from Aliyun support for that matter.
If you are using alicloud ASK, this file https://github.com/AliyunContainerService/serverless-k8s-examples/blob/master/volumes/alicloud-disk-controller.yaml is working for me.
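If you want to apply that manifest from the command line, note that kubectl needs the raw file rather than the GitHub HTML page. Assuming GitHub's usual raw-URL convention, something like this should work:
curl -LO https://raw.githubusercontent.com/AliyunContainerService/serverless-k8s-examples/master/volumes/alicloud-disk-controller.yaml
kubectl apply -f alicloud-disk-controller.yaml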

Kubernetes: 2 PVCs in 2 namespaces binding to the same PV, one successful, one failed

So I have 2 PVCs in 2 namespaces binding to 1 PV:
The following are the PVCs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-dev
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-stage
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
and the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /git
In the namespace "mlo-dev", the binding is successful:
$ kubectl describe pvc pvc-git -n mlo-dev
Name: pvc-git
Namespace: mlo-dev
StorageClass: mlo-git
Status: Bound
Volume: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events: <none>
Whereas in the namespace "mlo-stage", the binding fails with the error message: storageclass.storage.k8s.io "mlo-git" not found
$ kubectl describe pvc pvc-git -n mlo-stage
Name: pvc-git
Namespace: mlo-stage
StorageClass: mlo-git
Status: Pending
Volume:
Labels: type=local
Annotations:
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 3m4s (x302 over 78m) persistentvolume-controller storageclass.storage.k8s.io "mlo-git" not found
As far as I know, a PV is not scoped to a namespace, so shouldn't it be possible for PVCs in different namespaces to bind to the same PV?
+++++
Added:
+++++
when "kubectl describe pv pv-git", I got the following:
$ kubectl describe pv pv-git
Name: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: mlo-git
Status: Bound
Claim: mlo-dev/pvc-git
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /git
HostPathType:
Events: <none>
I've tried to reproduce your scenario (although for an exact reproduction I would need your StorageClass YAML, and I changed the AccessMode for my tests), and in my opinion this behavior is correct (it works as designed).
When you want to check whether a specific object is namespaced, you can use this command:
$ kubectl api-resources | grep pv
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
Since PVC shows true, it is namespaced; PV is not.
PersistentVolumeClaim and PersistentVolume bind in a 1:1 relationship. Once your first PVC is bound to the PV, that PV is taken and cannot be used by another claim at that moment. You should create a second PV. This can change depending on the reclaimPolicy and on what happens with the pod/deployment.
I guess you are using Static provisioning.
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
In this case you have to create 1 PV for each PVC.
If you were using a cloud environment, you would use Dynamic provisioning.
When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur.
For example, I tried to reproduce this on GKE, and each PVC bound to a PV. As GKE uses Dynamic provisioning, when you define only a PVC it uses the default StorageClass and automatically creates a PV.
$ kubectl get pv,pvc -A
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/pv-git                                     1Gi        RWO            Retain           Bound    mlo-dev/pvc-git     mlo-git                 15s
persistentvolume/pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0   1Gi        RWO            Delete           Bound    mlo-stage/pvc-git   mlo-git                 6s

NAMESPACE   NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mlo-dev     persistentvolumeclaim/pvc-git   Bound    pv-git                                     1Gi        RWO            mlo-git        10s
mlo-stage   persistentvolumeclaim/pvc-git   Bound    pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0   1Gi        RWO            mlo-git        9s
Solution
To fix this issue, you should create another PersistentVolume to bind the second PVC.
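A second PV can be a near-copy of the existing one. For example (the name pv-git-2 is hypothetical, and whether two volumes should share the same hostPath directory depends on your setup):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git-2          # hypothetical name for the second volume
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /git            # assumption: reusing the same host directory as pv-git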
For more details about binding you can check this topic. If you would like more information about PVCs, check this SO thread.
If a second PV doesn't help, please provide more details about your environment (Minikube/kubeadm, K8s version, OS, etc.) and your StorageClass YAML.

Kubernetes : PVC binding status in pending

I created a PV and claimed it through a PVC. I see that the PV is created, but the PVC binding status is stuck in Pending. When I look at the describe pvc output, I see "no persistent volumes available for this claim and no storage class is set". From the documentation I understand that a storage class isn't mandatory, so I'm unsure what's missing in the PVC file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  labels:
    pv: pv-ghost
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 3Gi
  hostPath:
    path: /ghost/data
--------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ghost
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: pv-ghost
Output of describe for the PV and PVC:
kubectl describe pv pv-ghost
Name: pv-ghost
Labels: pv=pv-ghost
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 3Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /ghost/data
HostPathType:
Events: <none>
kubectl describe pvc pvc-ghost
Name: pvc-ghost
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 8m44s (x8 over 10m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Normal FailedBinding 61s (x5 over 2m3s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: <none>
You need to specify the volume source manually.
ReadWriteMany is only available for AzureFile, CephFS, Glusterfs, Quobyte, NFS, PortworxVolume.
Flexvolume (depending on the driver) and VsphereVolume (when pods are collocated) also support it.
You can read it all in Kubernetes docs regarding Volume Mode
An example PV for aws would look like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-volume
spec:
  capacity:
    storage: 15Gi # Doesn't really matter, as EFS does not enforce it anyway
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - hard
    - nfsvers=4.1
    - rsize=1048576
    - wsize=1048576
    - timeo=300
    - retrans=2
  nfs:
    path: /
    server: fs-XXX.efs.eu-central-2.amazonaws.com
In the above issue:
The capacity specified in the PersistentVolume is less than the capacity requested by the PersistentVolumeClaim. Try either increasing the capacity in the PersistentVolume to 5Gi or reducing the requested capacity in the PersistentVolumeClaim to 3Gi.
When you are using a hostPath in the PersistentVolume, the accessModes should be ReadWriteOnce.
The hostPath method is currently not supported in a multi-node cluster.
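Putting those points together, a matching pair could look like this; a sketch that keeps your label selector and simply reconciles the sizes and access mode, so adjust the values to your needs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ghost
  labels:
    pv: pv-ghost
spec:
  accessModes:
    - ReadWriteOnce        # hostPath volumes should use RWO
  capacity:
    storage: 5Gi           # at least as large as the claim's request
  hostPath:
    path: /ghost/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ghost
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      pv: pv-ghost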

storageclass.storage.k8s.io "standard" not found for pvc on bare metal Kubernetes cluster

I've created a persistentVolumeClaim on my custom Kubernetes cluster, however it seems to be stuck in pending...
Do I need to install/configure something additional? Or is this functionality only available on GCP / AWS?
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testingchris
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
describe pvc:
Name: testingchris
Namespace: diyclientapps
StorageClass: standard
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"testingchris","namespace":"diyclientapps"},"spec":{"accessModes"...
Finalizers: []
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 8s (x3 over 36s) persistentvolume-controller storageclass.storage.k8s.io "standard" not found
A PVC is just a claim, a declaration of one's requirements for persistent storage.
For a PVC to bind, a PV that matches the PVC's requirements must show up, and that can happen in two ways: manual provisioning (adding a PV from e.g. kubectl) or Dynamic Volume Provisioning.
What you are experiencing is that your current setup did not auto-provision a volume for your PVC.
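If you just want the claim to bind without setting up dynamic provisioning, a manually created PV with a matching storageClassName is enough. A minimal sketch, assuming a hostPath directory that exists on the node (the PV name and path below are hypothetical):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: testingchris-pv       # hypothetical name
spec:
  storageClassName: standard  # matches the claim's storageClassName
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data           # assumption: any existing directory on the node
Otherwise, install a dynamic provisioner suited to your bare-metal cluster and point the claim's storageClassName at the class it provides.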