RabbitMQ cluster-operator does not work in Kubernetes

RabbitMQ cluster operator does not work in Kubernetes.
I have a Kubernetes 1.17.17 cluster of 3 nodes, deployed with Rancher.
Following these instructions, I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
That works fine, but...
I have created this very simple configuration for the instance, according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error: error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I end up getting an error on the volume claim, which is generated automatically:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!

You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (this is NOT the recommended approach).
I really recommend configuring dynamic provisioning -
this will allow PersistentVolumes to be generated automatically.
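For illustration, a minimal sketch of a StorageClass for dynamic NFS provisioning, assuming an external NFS provisioner (e.g. nfs-subdir-external-provisioner) is already deployed and registered under the provisioner name used below; both the provisioner name and the default-class annotation are assumptions that depend on your setup:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
  annotations:
    # marking it default means PVCs without a storageClassName (such as the ones
    # generated by the RabbitMQ operator) will use it automatically
    storageclass.kubernetes.io/is-default-class: "true"
# assumed provisioner name; it must match the name the NFS provisioner was deployed with
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
reclaimPolicy: Delete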
Manually creating PersistentVolume
As I mentioned, this isn't the recommended approach, but it may be useful when you want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
And then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine where the Pod will be running; see the local-volume sketch at the end of this answer for a way to pin the volume to one node.
As a result you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv 10Gi RWO Recycle Bound test-rabbitmq/persistence-rabbitmq-server-0 11m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-rabbitmq persistentvolumeclaim/persistence-rabbitmq-server-0 Bound pv 10Gi RWO 11m
# kubectl get pod -n test-rabbitmq
NAME READY STATUS RESTARTS AGE
rabbitmq-server-0 1/1 Running 0 11m
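If you would rather not guess which of the 3 worker nodes the Pod lands on, another option (a sketch, not part of the original setup) is a local PersistentVolume pinned to one node with nodeAffinity; worker-1 is a placeholder for one of your node names:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # the class from your question (WaitForFirstConsumer)
  local:
    path: /mnt/rabbitmq             # directory must already exist on that node
  nodeAffinity:                     # required for "local" volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1          # placeholder node name
With WaitForFirstConsumer the PVC stays Pending until the Pod is scheduled, and the scheduler then places the Pod on the node the volume is pinned to.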

Related

What Persistent Volume storage solutions can I use when I run my Kubernetes cluster on GCP?

I have deployed my Kubernetes cluster on GCP Compute Engine instances with 3 master nodes and 3 worker nodes (it's not a GKE cluster). Can anybody suggest what storage options I can use for my cluster? If I create a virtual disk on GCP, can I use that disk as persistent storage?
You can use GCE Persistent Disk Storage Class.
Here is how you create the storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Then do the following to create the PVC (the PV is provisioned dynamically by the StorageClass) and attach it to your pod:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gce-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: webserver-pd
spec:
  containers:
    - image: httpd
      name: webserver
      volumeMounts:
        - mountPath: /data
          name: dynamic-volume
  volumes:
    - name: dynamic-volume
      persistentVolumeClaim:
        claimName: gce-claim
Example taken from this blog post
There are two ways of provisioning Persistent Volumes: static provisioning and dynamic provisioning.
I will briefly describe each of them.
Static Provisioning
Using this approach you need to create the disk, PersistentVolume and PersistentVolumeClaim manually.
I've created a simple example to illustrate how it works.
First I created a disk; on GCP we can use the gcloud command:
$ gcloud compute disks create --size 10GB --zone europe-west3-c test-disk
NAME ZONE SIZE_GB TYPE STATUS
test-disk europe-west3-c 10 pd-standard READY
Next I created the PV and PVC using these manifest files:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  gcePersistentDisk:
    pdName: test-disk # This GCE PD must already exist.
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
After applying these manifest files, we can check the status of the PV and PVC:
root@km:~# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-test 10Gi RWO Retain Bound default/claim-test 12m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/claim-test Bound pv-test 10Gi RWO 12m
Finally I used the above claim as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx"
          name: vol-test
  volumes:
    - name: vol-test
      persistentVolumeClaim:
        claimName: claim-test
We can inspect the created Pod to check that it works as expected:
root@km:~# kubectl exec -it web -- bash
root@web:/# df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sdb 9.8G 37M 9.8G 1% /usr/share/nginx
...
Dynamic Provisioning
In this case the volume is provisioned automatically when the application requires it.
First you need to create a StorageClass object to define a provisioner, e.g. kubernetes.io/gce-pd.
We don't need to create a PersistentVolume anymore; it's created automatically via the StorageClass for us.
I've also created a simple example to illustrate how it works.
First I created a StorageClass as the default storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
And then the PVC (the same as in the previous example); but in this case the PV was created automatically:
root@km:~# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976 10Gi RWO Delete Bound default/claim-test standard 3m10s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/claim-test Bound pvc-8dcd69f1-7081-45a7-8424-cc02e61a4976 10Gi RWO standard 3m12s
In more advanced cases it may be useful to create multiple StorageClasses with different persistent disk types, for example as sketched below.
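For example, a second class backed by SSD persistent disks might look like this (a sketch; the class name ssd is just an example):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd            # example name; intentionally not marked as default
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd         # SSD-backed GCE persistent disks
  fstype: ext4
A PVC would then select it explicitly with storageClassName: ssd, while claims without a class keep using the default pd-standard class.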

Kubernetes: 2 PVCs in 2 namespaces binding to the same PV, one successful, one failed

So I have 2 PVCs in 2 namespaces binding to 1 PV:
The following are the PVCs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-dev
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-git
  namespace: mlo-stage
  labels:
    type: local
spec:
  storageClassName: mlo-git
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
and the PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git
  labels:
    type: local
spec:
  storageClassName: mlo-git
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /git
In the namespace "mlo-dev", the binding is successful:
$ kubectl describe pvc pvc-git -n mlo-dev
Name: pvc-git
Namespace: mlo-dev
StorageClass: mlo-git
Status: Bound
Volume: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWX
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events: <none>
Whereas in the namespace "mlo-stage", the binding fails with the error message: storageclass.storage.k8s.io "mlo-git" not found
$ kubectl describe pvc pvc-git -n mlo-stage
Name: pvc-git
Namespace: mlo-stage
StorageClass: mlo-git
Status: Pending
Volume:
Labels: type=local
Annotations:
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By:
...
various different pods here...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 3m4s (x302 over 78m) persistentvolume-controller storageclass.storage.k8s.io "mlo-git" not found
As far as I know, a PV is not scoped to a namespace, so shouldn't it be possible for PVCs in different namespaces to bind to the same PV?
+++++
Added:
+++++
when "kubectl describe pv pv-git", I got the following:
$ kubectl describe pv pv-git
Name: pv-git
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: mlo-git
Status: Bound
Claim: mlo-dev/pvc-git
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /git
HostPathType:
Events: <none>
I've tried to reproduce your scenario (for an exact reproduction, please provide your StorageClass YAML; I also changed the AccessMode for my tests), and in my opinion this behavior is correct (it works as designed).
When you want to check whether a specific object is namespaced, you can use this command:
$ kubectl api-resources | grep pv
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
As PersistentVolumeClaim shows true, it means a PVC is namespaced, and a PV is not.
A PersistentVolumeClaim and a PersistentVolume bind in a 1:1 relationship. When your first PVC bound to the PV, that PV was taken and cannot be used again at that moment. You should create a second PV. This can change depending on the reclaimPolicy and what happens with the pod/deployment.
I guess you are using static provisioning:
A cluster administrator creates a number of PVs. They carry the details of the real storage, which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
In this case you have to create 1 PV per PVC.
If you were using a cloud environment, you would use dynamic provisioning:
When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class for dynamic provisioning to occur.
As an example, I tried to reproduce this on GKE, and one PVC bound to the PV. As GKE uses dynamic provisioning, when you define only a PVC it uses the default storage class and automatically creates a PV:
$ kubectl get pv,pvc -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-git 1Gi RWO Retain Bound mlo-dev/pvc-git mlo-git 15s
persistentvolume/pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0 1Gi RWO Delete Bound mlo-stage/pvc-git mlo-git 6s
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mlo-dev persistentvolumeclaim/pvc-git Bound pv-git 1Gi RWO mlo-git 10s
mlo-stage persistentvolumeclaim/pvc-git Bound pvc-e7a1e950-396b-40f6-b8d1-8dffc9a304d0 1Gi RWO mlo-git 9s
Solution
To fix this issue, you should create another PersistentVolume to bind the second PVC, for example as sketched below.
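A sketch of such a second PV, copied from your pv-git but with a different name and hostPath (both are just example values):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-git-stage          # example name for the second PV
  labels:
    type: local
spec:
  storageClassName: mlo-git   # same class name, so the mlo-stage claim can match it
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /git-stage          # example path; use a separate directory for this PV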
For more details about binding you can check this topic. If you would like more information about PVCs, check this SO thread.
If a second PV doesn't help, please provide more details about your environment (Minikube/kubeadm, K8s version, OS, etc.) and your StorageClass YAML.

PersistentVolumeClaim in a namespace does not connect to a PersistentVolume

My PersistentVolumeClaim will not use the PersistentVolume I have prepared for it.
I have this PersistentVolume in monitoring-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
After I have done
kubectl apply -f monitoring-pv.yaml
I can check that it exists with kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
monitoring-volume 50Gi RWO Retain Available 5m
My PersistentVolumeClaim in monitoring-pvc.yaml looks like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring
When I do kubectl apply -f monitoring-pvc.yaml it gets created.
I can look at my new PersistentVolumeClaim with kubectl get pvc -n monitoring and I see
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
monitoring-claim Pending manual 31s
When I look at my PersistentVolume with kubectl get pv I can see that it's still available:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
monitoring-volume 50Gi RWO Retain Available 16m
I had expected the PersistentVolume to be Bound, but it isn't. When I use a PersistentVolumeClaim with the same name as this, a new PersistentVolumeClaim is created that is written to /tmp and is therefore not very persistent.
When I do the same operations without a namespace for my PersistentVolumeClaim everything seems to work.
I'm on minikube on Ubuntu 18.04.
What do I need to change to be able to connect the volume with the claim?
When I reviewed my question and compared it to a working solution, I noticed that I had missed the storageClassName, which was set to manual in the namespace-less example that worked for me.
My updated PersistentVolume now looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
The only difference is
storageClassName: manual
My preliminary finding is that this was the silly mistake I had made.
The Persistent Volume and Volume Claim should be in the same namespace. You need to add namespace: monitoring. Now you can try the code below.
for Persistent Volume
monitoring-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitoring-volume
  labels:
    usage: monitoring
  namespace: monitoring
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/k8data/monitoring
for Persistent volume claim
monitoring-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitoring-claim
  namespace: monitoring
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  selector:
    matchLabels:
      usage: monitoring

VolumeClaim in Kubernetes Google Cloud

I am trying to create both a PersistentVolume and a PersistentVolumeClaim on Google Kubernetes Engine.
The way to link them is via labelSelector.
I am creating the objects with this definition:
volume.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
and running:
kubectl apply -f volume.yml
Both objects are created successfully; however, the VolumeClaim apparently stays pending forever, awaiting a Volume that matches its requirements.
Could you please help me?
Thanks!
First of all, PersistentVolume resources don’t belong to any namespace. They’re cluster-level resources like nodes, but PersistentVolumeClaim objects can only be created in a specific namespace.
It seems like when you created the claim earlier, it was immediately bound to the PersistentVolume. Can you show the output of these commands:
$ kubectl get pv
$ kubectl get pvc
Most likely your persistentVolumeReclaimPolicy was set to Retain, so your PersistentVolume is in Released status now. Since there is no other PersistentVolume resource that matches your claim's requirements, your PersistentVolumeClaim is in Pending status.
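If the PV is indeed in Released status and you want to reuse it, one common approach (a sketch, using the pv-test name from your manifest) is to clear its claimRef so it becomes Available again:
# remove the reference to the old claim; the PV should move from Released to Available
kubectl patch pv pv-test -p '{"spec":{"claimRef": null}}'
kubectl get pv pv-test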
Thanks for your help @konstantin-vustin
I found the solution. I had to specify storageClassName: manual attribute in the spec of both objects.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class
According to the doc
A PV with no storageClassName has no class and can only be bound to PVCs that request no particular class.
So IMO it should have worked before; I am not sure I clearly understood it.
This was the status before
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-test-vol 2Gi RWO Retain Available manual 26s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-test Pending standard 26s
The updated definitions
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-test
  namespace: mynamespace
  labels:
    pv-owner: owner
    pv-usage: pv-test
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /data/test/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-test
  namespace: mynamespace
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-usage: pv-test
This is the status after
kubectl get pv pv-test-vol && kubectl get pvc pv-test --namespace openwhisk
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-test-vol 2Gi RWO Retain Bound openwhisk/pv-test manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-test Bound pv-test-vol 2Gi RWO manual 4s

Kubernetes - PVC not binding the NFS PV

I created a persistent volume using NFS and a PVC for the same volume. However, the PVC always creates an EBS disk instead of binding to the PV. Please see the log below:
> kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
mynfspv 100Gi RWX Retain Available 7s
Now create the PVC:
> kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
mynfspvc Bound pvc-a081c470-3f23-11e7-9d30-024e42ef6b60 100Gi RWX default 4s
> kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
mynfspv 100Gi RWX Retain Available 50s
pvc-a081c470-3f23-11e7-9d30-024e42ef6b60 100Gi RWX Delete Bound default/mynfspvc default 17s
nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
nfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
It looks like you have dynamic provisioning and the default StorageClass feature enabled, and the default class is AWS EBS. You can check your default class with the following command:
$ kubectl get storageclasses
NAME TYPE
standard (default) kubernetes.io/aws-ebs
If this is correct, then I think you'll need to specify a storage class to solve your problem.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-class
provisioner: kubernetes.io/fake-nfs
Add storageClassName to both your PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: nfs-class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
and your PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
  labels:
    name: nfs2
spec:
  storageClassName: nfs-class
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: dbc56.efs.us-west-2.amazonaws.com
    path: /
You can check out https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1 for details.
Which version of Kubernetes is this? The root cause is as mentioned by @ddysher. In your setup the default storage class is EBS, as you can see in the get pv/pvc outputs. Depending on your Kubernetes version, you can also make use of a 'claim selector' in the PVC spec, as sketched below. Refer to https://github.com/kubernetes/community/blob/master/contributors/design-proposals/volume-selectors.md
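A sketch of what such a claim selector could look like, reusing the name: nfs2 label from your PV; the empty storageClassName is an explicit opt-out of the default EBS class, so no dynamic provisioning is triggered:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mynfspvc
spec:
  storageClassName: ""       # empty string = do not use the default StorageClass
  selector:
    matchLabels:
      name: nfs2             # matches the label on mynfspv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi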