I am installing a chart using Helm, but its Pod and PVC get stuck in the Pending state even though the PV is in the Available state.
I face this issue intermittently while installing the chart.
Pod describe:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 0s (x4 over 2m17s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
PVC describe:
Name: web-claim0
Namespace: edge
StorageClass: edge-custom
Status: Pending
Volume:
Labels: app.kubernetes.io/managed-by=Helm
io.kompose.service=web-claim0
Annotations: meta.helm.sh/release-name: manifest
meta.helm.sh/release-namespace: edge
volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: web-69bd64d5cf-lmnqd
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 6s (x17 over 3m54s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "docker.io/hostpath" or manually created by system administrator
I have the storage class defined as:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: {{ .Values.prefix }}-custom
provisioner: docker.io/hostpath
reclaimPolicy: Retain
volumeBindingMode: Immediate
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: web-claim0
  name: web-claim0
spec:
  storageClassName: {{ .Values.prefix }}-custom
  selector:
    matchLabels:
      for_app: {{ .Values.prefix }}-manifest-web
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.prefix }}-manifest-web-pv
  labels:
    for_app: {{ .Values.prefix }}-manifest-web
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  storageClassName: {{ .Values.prefix }}-custom
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/{{ .Values.prefix }}-manifeststorage"
On the other hand, the PV is in the Available state.
I had two PV/PVC pairs with the same app label, due to which the first PVC was getting bound to the PV that had been created for the other PVC. When I gave each PV/PVC pair its own unique app label, the issue was resolved.
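For illustration, the second PV/PVC pair ends up with its own label, roughly like the sketch below (the db names are hypothetical placeholders; the point is only that its for_app value differs from the web pair's):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.prefix }}-manifest-db-pv
  labels:
    for_app: {{ .Values.prefix }}-manifest-db   # unique to this pair
    type: local
spec:
  persistentVolumeReclaimPolicy: Retain
  storageClassName: {{ .Values.prefix }}-custom
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/{{ .Values.prefix }}-dbstorage"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-claim0
spec:
  storageClassName: {{ .Values.prefix }}-custom
  selector:
    matchLabels:
      for_app: {{ .Values.prefix }}-manifest-db   # must match the db PV above, not the web PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi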
It seems the PVC is unable to bind to the PV. What you can do is add a claimRef block inside the PV.
Something like this under the spec of the PV:
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: <the PVC name you will create after creating the PV>
  namespace: <your namespace>
Now you need to create the PVC with the name you mentioned above, inside the mentioned namespace. It will bind as soon as you create the PVC.
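For the question above, that would mean the PV's spec carries a claimRef pointing at web-claim0 in the edge namespace, roughly like this (a sketch based on the pv.yaml shown earlier):
spec:
  persistentVolumeReclaimPolicy: Retain
  storageClassName: {{ .Values.prefix }}-custom
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: web-claim0
    namespace: edge
  hostPath:
    path: "/{{ .Values.prefix }}-manifeststorage"
With the claimRef pre-filled, the PV is reserved for that one claim and cannot be grabbed by another PVC that happens to match the same class and size.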
Related
RabbitMQ cluster operator does not work in Kubernetes.
I have a Kubernetes cluster 1.17.17 with 3 nodes. I deployed it with Rancher.
Following this instruction, I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
It's OK, but...
I have created this very simple configuration for the instance according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error: error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! So I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and a PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I end up getting an error on the volume, which is generated automatically:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (which is NOT the recommended approach).
I really recommend configuring dynamic provisioning -
it will allow you to generate PersistentVolumes automatically.
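As one concrete illustration of dynamic provisioning on a cluster without a cloud provider, you could use Rancher's local-path-provisioner instead of the NFS provisioner from the article (mentioned here only as an example because your cluster was deployed with Rancher); its README documents a one-line kubectl apply of an install manifest, and once it is running, a StorageClass along these lines provides dynamically provisioned node-local volumes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path   # provisioner shipped by rancher/local-path-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
Note that the project's default install manifest already creates an equivalent local-path StorageClass, so in practice you may only need to reference it from your PVCs.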
Manually creating a PersistentVolume
As I mentioned, it isn't the recommended approach, but it may be useful when we want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq   # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
Then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine in advance where the Pod will be running.
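Once the Pod object exists and has been scheduled, you can check which node it landed on (and therefore where /mnt/rabbitmq has to exist) with the wide output, which includes a NODE column:
kubectl get pod rabbitmq-server-0 -n test-rabbitmq -o wide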
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv 10Gi RWO Recycle Bound test-rabbitmq/persistence-rabbitmq-server-0 11m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-rabbitmq persistentvolumeclaim/persistence-rabbitmq-server-0 Bound pv 10Gi RWO 11m
# kubectl get pod -n test-rabbitmq
NAME READY STATUS RESTARTS AGE
rabbitmq-server-0 1/1 Running 0 11m
How to use a storage class for a StatefulSet? I've created the StorageClass. I also created a PVC, but I'm a bit confused about whether the PVC needs to be created at all, since the PVC already requests storage and volumeClaimTemplates also requests storage. Either way, it's not working, with or without the PVC.
I get the following error:
create Pod dbhost001-0 in StatefulSet dbhost001 failed error: failed to create PVC mysql-dev-dbhost001-0: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
create Claim mysql-dev-dbhost001-0 for Pod dbhost001-0 in StatefulSet dbhost001 failed error: PersistentVolumeClaim "mysql-dev-dbhost001-0" is invalid: spec.resources[storage]: Required value
storageClass.yml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
Statefultset.yml:
apiVersion: apps/v1
kind: StatefulSet
....
....
  volumeClaimTemplates:
    - metadata:
        name: mysql-dev
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: ebs-sc
        resources:
          requests:
            stroage: 2Gi
I'm not sure if the PVC is needed. I was using this for a normal ReplicaSet deployment, but I'm not sure whether a StatefulSet needs it.
PersistentVolumeClaim.yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-dev
  namespace: test-db-dev
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 2Gi
Figured it out.
First, there was a typo in Statefultset.yml: it should be storage instead of stroage.
Second, there is no need for a separate PersistentVolumeClaim, since volumeClaimTemplates does the same thing: it claims storage from the storage class.
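For reference, the corrected volumeClaimTemplates block looks like this (only the misspelled key changes):
volumeClaimTemplates:
  - metadata:
      name: mysql-dev
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: ebs-sc
      resources:
        requests:
          storage: 2Gi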
Info:
Kubernetes Server version: 1.14
AWS Cloud Provider
EBS volume, storageclass
Details:
I have installed a StatefulSet in our Kubernetes cluster; however, it is stuck in the "ContainerCreating" status. Upon checking the logs, the error is "AttachVolume.Attach failed for volume pvc-xxxxxx: error finding instance ip-xxxxx: "instance not found""
It was successfully installed around 17 days ago, but re-installing it for an update caused the pod to get stuck in ContainerCreating.
Manually attaching the volume to the instance works, but doing it via the storage class is not working and the pod stays stuck in the ContainerCreating status.
storageclass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ssd-default
allowVolumeExpansion: true
parameters:
  encrypted: "true"
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: Immediate
pvc yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/instance: thanos-store
    app.kubernetes.io/name: thanos-store
  name: data-thanos-store-0
  namespace: thanos
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: ssd-default
  volumeMode: Filesystem
  volumeName: pvc-xxxxxx
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 3Gi
  phase: Bound
pv yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
  finalizers:
    - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/region: ap-xxx
    failure-domain.beta.kubernetes.io/zone: ap-xxx
  name: pvc-xxxx
spec:
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://xxxxx
  capacity:
    storage: 3Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-athena-thanos-store-0
    namespace: thanos
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/region
              operator: In
              values:
                - ap-xxx
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - ap-xxx
  persistentVolumeReclaimPolicy: Delete
  storageClassName: ssd-default
  volumeMode: Filesystem
status:
  phase: Bound
Describe pvc:
Name: data-athena-thanos-store-0
Namespace: athena-thanos
StorageClass: ssd-encrypted
Status: Bound
Volume: pvc-xxxx
Labels: app.kubernetes.io/instance=athena-thanos-store
app.kubernetes.io/name=athena-thanos-store
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 3Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: athena-thanos-store-0
The FailedAttachVolume error occurs when an EBS volume can’t be detached from an instance and thus cannot be attached to another. The EBS volume has to be in the available state to be attached. FailedAttachVolume is usually a symptom of an underlying failure to unmount and detach the volume.
Notice that while describing the PVC, the StorageClass name is ssd-encrypted, which is a mismatch with the config you showed earlier, where the kind: StorageClass name is ssd-default. That's why you can mount the volume manually but not via the StorageClass. You can drop and recreate the StorageClass with the correct data.
Also, I recommend going through this article and using volumeBindingMode: WaitForFirstConsumer instead of volumeBindingMode: Immediate. This setting instructs the volume provisioner to not create a volume immediately, and instead, wait for a pod using an associated PVC to run through scheduling.
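A sketch of the recreated StorageClass, assuming you keep the ssd-default name from your manifest (if the existing claim was actually provisioned under ssd-encrypted, use that name instead):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ssd-default
provisioner: kubernetes.io/aws-ebs
parameters:
  encrypted: "true"
  type: gp2
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Because fields such as volumeBindingMode cannot be changed on an existing StorageClass, dropping and recreating the object, as mentioned above, is the usual way to apply this.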
My PersistentVolumeClaim will not use the PersistentVolume I have prepared for it.
I have this PersistentVolume in monitoring-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
After I have done
kubectl apply -f monitoring-pv.yaml
I can check that it exists with kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
monitoring-volume 50Gi RWO Retain Available 5m
My PersistentVolumeClaim in monitoring-pvc.yaml looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
When I do kubectl apply -f monitoring-pvc.yaml it gets created.
I can look at my new PersistentVolumeClaim with kubectl get pvc -n monitoring and I see
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
monitoring-claim Pending manual 31s
When I look at my PersistentVolume with kubectl get pv I can see that it's still available:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
monitoring-volume 50Gi RWO Retain Available 16m
I had expected the PersistentVolume to be Bound, but it isn't. When I use a PersistentVolumeClaim with the same name as this one, a new PersistentVolumeClaim is created that is written in /tmp and is therefore not very persistent.
When I do the same operations without a namespace for my PersistentVolumeClaim everything seems to work.
I'm running minikube on Ubuntu 18.04.
What do I need to change to be able to connect the volume with the claim?
When I reviewed my question and compared it to a working solution, I noticed that I had missed the storageClassName, which was set to manual in a working example that did not use a namespace.
My updated PersistentVolume now looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
The only difference is
storageClassName: manual
My preliminary finding is that this was the silly mistake I had made.
The PersistentVolume and the PersistentVolumeClaim should be in the same namespace. You need to add namespace: monitoring. Now you can try the code below.
For the PersistentVolume:
monitoring-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
  namespace: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
For the PersistentVolumeClaim:
monitoring-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
I am trying to create both a PersistentVolume and a PersistentVolumeClaim on Google Kubernetes Engine.
The way to link them is via labelSelector.
I am creating the objects with this definition:
volume.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
and running:
kubectl apply -f volume.yml
Both objects are successfully created; however, the PersistentVolumeClaim apparently stays Pending forever, waiting for a Volume that matches its requirements.
Could you please help me?
Thanks!
First of all, PersistentVolume resources don’t belong to any namespace. They’re cluster-level resources like nodes, but PersistentVolumeClaim objects can only be created in a specific namespace.
It seems like when you created the claim earlier, it was immediately bound to the PersistentVolume. Can you show the output of the following commands?
$ kubectl get pv
$ kubectl get pvc
Most likely your persistentVolumeReclaimPolicy was set to Retain, so your PersistentVolume is in the Released status now. Since there is no other PersistentVolume resource that matches your claim's requirements, your PersistentVolumeClaim is in the Pending status.
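If the PV really is in Released and you want to reuse it (and the data on it), one common approach is to clear the stale claimRef so it becomes Available again, e.g.:
kubectl patch pv pv-test -p '{"spec":{"claimRef": null}}'
(pv-test is the PV name from your volume.yml; substitute the actual name shown by kubectl get pv.)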
Thanks for your help @konstantin-vustin.
I found the solution. I had to specify the storageClassName: manual attribute in the spec of both objects.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
According to the doc
A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
So IMO it should have worked before as well; I am not sure I have understood it clearly.
This was the status before
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-test-vol 2Gi RWO Retain Available manual 26s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-test Pending standard 26s
The updated definitions
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-test
namespace: mynamespace
labels:
pv-owner: owner
pv-usage: pv-test
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
capacity:
storage: 1Gi
hostPath:
path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-test
namespace: mynamespace
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
selector:
matchLabels:
pv-usage: pv-test
This is the status after
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-test-vol 2Gi RWO Retain Bound openwhisk/pv-test manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-test Bound pv-test-vol 2Gi RWO manual 4s