Cannot mount glusterfs volume on GKE - kubernetes

I have deployed a GlusterFS cluster on GCE: three nodes, one volume and one brick. I need to mount it on a pod deployed on GKE. I have successfully created the endpoint and PV, but I cannot create the PVC. If I set volumeName to refer to my PV, I get the following error:
2s Warning VolumeMismatch persistentvolumeclaim/my-glusterfs-pvc Cannot bind to requested volume "my-glusterfs-pv": storageClassName does not match
If I don't set volumeName, I get the following error:
3s Warning ProvisioningFailed persistentvolumeclaim/my-glusterfs-pvc Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
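For reference, both errors seem to involve GKE's default StorageClass standard being applied to the claim: a PVC that is meant to bind statically to a pre-created PV would presumably need to opt out of the default class with an explicit empty storageClassName, roughly like this (untested sketch):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-pvc
spec:
  storageClassName: ""          # empty string opts out of the default "standard" class
  volumeName: my-glusterfs-pv   # bind explicitly to the pre-created PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi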
I thought that GlusterFS had native support on GKE, but I cannot find it on the GCP site:
https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#storage_driver_support
Is that related to the errors I'm getting while deploying the PVC? How can I solve it?
Thanks!
I have not defined any StorageClass. I can only find a Heketi-provisioned GlusterFS StorageClass in the Kubernetes documentation. My YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-glusterfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    path: mypath
    endpoints: my-glusterfs-cluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: my-glusterfs-pv
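For completeness, the Endpoints object referenced by the PV (endpoints: my-glusterfs-cluster) would look roughly like the sketch below; the IP addresses are placeholders for the three GlusterFS node IPs:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.0.0.1   # placeholder: GlusterFS node 1
      - ip: 10.0.0.2   # placeholder: GlusterFS node 2
      - ip: 10.0.0.3   # placeholder: GlusterFS node 3
    ports:
      - port: 1        # a port value is required, but it is not used for the mount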
I have deployed Heketi on my GlusterFS cluster and it works correctly. However, I cannot deploy the PVC on GKE. The glusterfs-client is running OK, the heketi secret is available, and the gluster StorageClass has been created without errors, but the PVC remains Pending and I have found this event:
17m Warning ProvisioningFailed persistentvolumeclaim/liferay-glusterfs-pvc Failed to provision volume with StorageClass "gluster-heketi-external": failed to create volume: failed to create volume: see kube-controller-manager.log for details
Storageclass:
kind: StorageClass
#apiVersion: storage.k8s.io/v1beta1
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-heketi-external
provisioner: kubernetes.io/glusterfs
parameters:
  restauthenabled: "true"
  resturl: "http://my-heketi-node-ip:8080"
  clusterid: "idididididididididid"
  restuser: "myadminuser"
  secretName: "heketi-secret"
  secretNamespace: "my-app"
  volumetype: "replicate:3"
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi-external
  # finalizers:
  #   - kubernetes.io/pvc-protection
  name: my-app-pvc
  namespace: my-app
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
I have also created the firewall rule needed for Heketi, so that the GKE nodes can reach the Heketi resturl and port. I have checked that it is working by launching a telnet to port 8080 while sniffing the traffic on the Heketi node. Everything appears to be OK: I can see the reply and the packets. However, I cannot see any packets when I deploy the PVC, and there is nothing in syslog. I can create volumes when the heketi command is run locally.
I have tried to curl the Heketi node from a gluster-client pod and it works correctly:
kubectl exec -it gluster-client-6fqmb -- curl http://[myheketinodeip]:8080/hello
Hello from Heketi
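Since /hello does not require authentication, it may also be worth checking that the credentials stored in heketi-secret actually work against the REST API, for example with heketi-cli from one of the GlusterFS nodes (the secret value below is a placeholder):
# placeholder secret; use the same value that heketi-secret holds
heketi-cli --server http://my-heketi-node-ip:8080 \
  --user myadminuser \
  --secret 'MyHeketiAdminKey' \
  volume list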

Related

Failed to provision volume with StorageClass "gluster-heketi-storageclass" failed to create volume: see kube-controller-manager.log for details

My Env:
kubernetes:1.20.0
glusterfs-server-6.10-1.el7.x86_64
heketi-8.0.0-1.el7.x86_64
heketi-client-8.0.0-1.el7.x86_64
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi-storageclass
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Retain
parameters:
  resturl: "http://1xxxxxxx:18080"
  restauthenabled: "true"
  restuser: "admin"
  secretName: "heketi-secret"
  secretNamespace: "default"
  #volumetype: "none"
  volumetype: "replicate:3"
  clusterid: "60d0c41c0b232906f90b528fbb58400a"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-vol-pvc02
  namespace: default
spec:
  storageClassName: gluster-heketi-storageclass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
kubectl describe pvc glusterfs-vol-pvc02
Failed to provision volume with StorageClass "gluster-heketi-storageclass": failed to create volume: failed to create volume: see kube-controller-manager.log for details
I had a similar error: connectivity between the GlusterFS nodes was broken. Run the
gluster peer status
command and check the /etc/hosts file on all GlusterFS nodes. In my case, the hosts file lacked the necessary entries, and not all nodes could communicate with each other.
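For example, with a three-node cluster each node's /etc/hosts should resolve every peer (the hostnames and addresses below are placeholders), and gluster peer status on each node should report the other two as connected:
# /etc/hosts on every GlusterFS node (placeholder addresses)
10.0.0.1  gluster-node-1
10.0.0.2  gluster-node-2
10.0.0.3  gluster-node-3

# expected state on each node
gluster peer status
# Number of Peers: 2
# ...
# State: Peer in Cluster (Connected)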

RabbitMQ cluster-operator does not work in Kubernetes

I have a Kubernetes 1.17.17 cluster of 3 nodes, deployed with Rancher.
Following these instructions, I installed the RabbitMQ cluster-operator:
https://www.rabbitmq.com/kubernetes/operator/quickstart-operator.html
kubectl apply -f "https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml"
It's OK! But..
I have created this very simple configuration for the instance according to the documentation:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
I get this error: error while running "VolumeBinding" filter plugin for pod "rabbitmq-server-0": pod has unbound immediate PersistentVolumeClaims
After that I checked:
kubectl get storageclasses
and saw that there were no resources! I added the following StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then I created a PV and a PVC:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: rabbitmq-data-sigma
  labels:
    type: local
  namespace: test-rabbitmq
  annotations:
    volume.alpha.kubernetes.io/storage-class: rabbitmq-data-sigma
spec:
  storageClassName: local-storage
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/rabbitmq-data-sigma"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rabbitmq-data
  namespace: test-rabbitmq
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
I end up getting an error on the automatically generated volume claim:
FailedBinding no persistent volumes available for this claim and no storage class is set
Please help me understand this problem!
You can configure Dynamic Volume Provisioning, e.g. dynamic NFS provisioning as described in this article, or you can manually create a PersistentVolume (this is NOT the recommended approach).
I really recommend configuring dynamic provisioning;
it will allow PersistentVolumes to be generated automatically.
Manually creating a PersistentVolume
As I mentioned, this isn't the recommended approach, but it may be useful when you want to check something quickly without configuring additional components.
First you need to create a PersistentVolume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/rabbitmq  # data will be stored in the "/mnt/rabbitmq" directory on the worker node
    type: Directory
Then create the /mnt/rabbitmq directory on the node where the rabbitmq-server-0 Pod will be running. In your case you have 3 worker nodes, so it may be difficult to determine where the Pod will be running (see the local-volume sketch below for one way around this).
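If you do not want to guess the node, one alternative to the hostPath PersistentVolume above is a local volume, which lets you pin the PV to a specific node through nodeAffinity (a sketch only; worker-1 is a placeholder node name and the directory must already exist there):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # matches the local-storage class created earlier
  local:
    path: /mnt/rabbitmq             # must already exist on worker-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1          # placeholder node name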
With the hostPath PersistentVolume in place, you can see that the PersistentVolumeClaim was bound to the newly created PersistentVolume and the rabbitmq-server-0 Pod was created successfully:
# kubectl get pv,pvc -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv 10Gi RWO Recycle Bound test-rabbitmq/persistence-rabbitmq-server-0 11m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-rabbitmq persistentvolumeclaim/persistence-rabbitmq-server-0 Bound pv 10Gi RWO 11m
# kubectl get pod -n test-rabbitmq
NAME READY STATUS RESTARTS AGE
rabbitmq-server-0 1/1 Running 0 11m
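As a follow-up: the operator generates its own claim (persistence-rabbitmq-server-0 in the output above), so the size and StorageClass are best set on the RabbitmqCluster resource itself rather than through a hand-made PVC. A rough sketch, with example values:
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq
  namespace: test-rabbitmq
spec:
  persistence:
    storageClassName: local-storage  # example: the class created above
    storage: 10Gi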

kubernetes volume mount issue for google PD (kubeadm installed)

I have Kubernetes installed via kubeadm on Google Cloud and I am trying to mount a Google persistent disk (PD).
storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  fstype: ext4
volumeBindingMode: WaitForFirstConsumer
pvc-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: standard
but I'm getting the following error:
error while running "VolumeBinding" prebind plugin for pod "test-pd": Failed to bind volumes: provisioning failed for PVC
Upon testing on a GKE cluster, I was able to successfully deploy your StorageClass, your PersistentVolumeClaim, and a Deployment of one Pod using it.
To rule out the possibility of this being specific to vanilla Kubernetes, it would help to also provide the YAML for your test-pd Pod/Deployment.
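For reference, a minimal Pod consuming the claim would look something like this (the image and mount path are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - name: test-container
      image: nginx              # example image
      volumeMounts:
        - name: pd-volume
          mountPath: /data      # example mount path
  volumes:
    - name: pd-volume
      persistentVolumeClaim:
        claimName: myclaim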

dynamic storage class on gcp kubernetes

I am trying to set up a k8s cluster on CentOS 7 spun up on GCP. I created 3 masters and 2 worker nodes; installation and checks are fine. Now I am trying to create a dynamic StorageClass to use GCP disks, but somehow the claim stays in a Pending state and no error messages are found. Can anyone point me to the correct doc or steps to make sure it works?
[root@master1 ~]# cat slowdisk.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: us-east1-b
[root@master1 ~]# cat pclaim-slow.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
[root@master1 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim2 Pending 5m58s
[root@master1 ~]# kubectl describe pvc
Name: claim2
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"claim2","namespace":"default"},"spec":{"accessModes...
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events: <none>
You haven't specified the storage class name. The value storageClassName: standard needs to be added to the PVC manifest file pclaim-slow.yaml.
You are missing the storageClassName in the PVC YAML. Below is the updated pvc.yaml; check the documentation as well.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 2G
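After re-applying the claim it should move from Pending to Bound once the disk is provisioned; a quick way to check:
kubectl get pvc claim2 -w       # watch the claim until it shows Bound
kubectl describe pvc claim2     # the Events section will show any provisioning errors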
It's kind of weird that you are not getting any error events from your PersistentVolumeClaim.
You can read Provisioning regional Persistent Disks, which says:
It's in Beta
This is a Beta release of local PersistentVolumes. This feature is not covered by any SLA or deprecation policy and might be subject to backward-incompatible changes.
Also, the pd-standard type, which you have configured, is supposed to be used for drives of at least 200Gi:
Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
You can read more regarding Dynamic Persistent Volumes.
If you encounter the error PVC pending: Failed to get GCE GCECloudProvider, it means your cluster was not configured to use GCE as its cloud provider. It can be fixed by adding cloud-provider: gce to the controller manager, for example by applying the following YAML:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cloud-provider: gce
    cloud-config: /etc/kubernetes/cloud
  extraVolumes:
    - name: cloud-config
      hostPath: /etc/kubernetes/cloud-config
      mountPath: /etc/kubernetes/cloud
      pathType: FileOrCreate
kubeadm upgrade apply --config gce.yaml
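To confirm the controller manager actually picked up the flag after the upgrade, something along these lines should show cloud-provider=gce in its manifest:
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep cloud-provider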
Let me know if that helps, if not I'll try to be more helpful.

How do I use PV and PVC for *reliable* persistent volumes?

I followed the instructions in this post:
how to bound a Persistent volume claim with a gcePersistentDisk?
When I applied that, my PVC did not bind to the PV; instead I got this error in the event list:
14s 17s 2 test-pvc.155b8df6bac15b5b PersistentVolumeClaim Warning ProvisioningFailed persistentvolume-controller Failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported for dynamic provisioning on GCE
I found a github posting that suggested something that would fix this:
https://github.com/coreos/prometheus-operator/issues/323#issuecomment-299016953
But unfortunately that made no difference.
Is there a soup-to-nuts doc somewhere telling us exactly how to use PV and PVC to create truly persistent volumes? Specifically, one where you can shut down the PV and PVC and restore them later, getting all your content back? Because as it stands right now, if you lose your PVC for whatever reason, you lose the connection to your volume and there is no way to get it back.
The default StorageClass is not compatible with a gcePersistentDisk. Something like this would work:
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
EOF
Then, on your PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  labels:
    app: test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "slow"   # specify the StorageClass
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: test
You can also set "slow" as the default StorageClass, in which case you wouldn't have to specify it on your PVC:
$ kubectl patch storageclass slow -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
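One more note on the original error: claim.Spec.Selector is not supported for dynamic provisioning on GCE, so if the claim above still fails with that message, drop the selector block and let the provisioner create the disk, for example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  labels:
    app: test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "slow"
  resources:
    requests:
      storage: 2Gi
  # no selector: label selectors are not supported for dynamic GCE provisioning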