How to set the RabbitMQ data directory to a PVC in a Kubernetes pod

I tried to create a standalone RabbitMQ Kubernetes service, and the data should be mounted to my persistent volume.
.....
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-config
data:
  enabled_plugins: |
    [rabbitmq_management,rabbitmq_peer_discovery_k8s].
  rabbitmq.conf: |
    loopback_users.guest = false
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: standalone-rabbitmq
spec:
  serviceName: standalone-rabbitmq
  replicas: 1
  template:
    .....
        volumeMounts:
        - name: config-volume
          mountPath: /etc/rabbitmq
        - name: standalone-rabbitmq-data
          mountPath: /data
      volumes:
      - name: config-volume
        configMap:
          name: rabbitmq-config
          items:
          - key: rabbitmq.conf
            path: rabbitmq.conf
          - key: enabled_plugins
            path: enabled_plugins
      - name: standalone-rabbitmq-data
        persistentVolumeClaim:
          claimName: standalone-rabbitmq-pvc-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: standalone-rabbitmq-pvc-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: test-storage-class
According to my research, I understood that the data directory of RabbitMQ is RABBITMQ_MNESIA_DIR (please see https://www.rabbitmq.com/relocate.html). So I just want to set this parameter to /data, so that my new PVC (standalone-rabbitmq-pvc-test) is used to keep the data.
Can you tell me how to configure this?

So pretty much, here's my configuration. As you can see, there are three mount points: the first is the data, the second is the config and the third is the definitions, respectively, as in the YAML below.
volumeMounts:
- mountPath: /var/lib/rabbitmq
  name: rmqdata
- mountPath: /etc/rabbitmq
  name: config
- mountPath: /etc/definitions
  name: definitions
  readOnly: true
And here's the PVC template stuff.
volumeClaimTemplates:
- metadata:
    creationTimestamp: null
    name: rmqdata
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Gi
    storageClassName: nfs-provisioner
    volumeMode: Filesystem
Update: as per the comments, you can mount it directly to the data folder like this. The section where you assign rmqdata to the mount path stays the same.
volumes:
- hostPath:
    path: /data
    type: DirectoryOrCreate
  name: rmqdata

In order to add the path, create a new file, let's say 'rabbitmq.properties' and add all the environment variables you'll need, one per line:
echo "RABBITMQ_MNESIA_DIR=/data" >> rabbitmq.properties
Then run kubectl create configmap rabbitmq-config --from-file=rabbitmq.properties to generate the configmap.
If you need to aggregate multiple config files into one ConfigMap, point the --from-file argument at the folder's full path.
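For example, assuming the files live in a local folder called rabbitmq-conf/ (a hypothetical path), the whole folder becomes one ConfigMap with one key per file:
kubectl create configmap rabbitmq-config --from-file=./rabbitmq-conf/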
Then you can run kubectl get configmaps rabbitmq-config -o yaml
to display the yaml that was created:
user@k8s:~$ kubectl get configmaps rabbitmq-config -o yaml
apiVersion: v1
data:
  rabbitmq.properties: |
    RABBITMQ_MNESIA_DIR=/data
kind: ConfigMap
metadata:
  creationTimestamp: "2019-12-30T11:33:27Z"
  name: rabbitmq-config
  namespace: default
  resourceVersion: "1106939"
  selfLink: /api/v1/namespaces/default/configmaps/rabbitmq-config
  uid: 4c6b1599-a54b-4e0e-9b7d-2799ea5d9e39
If all other aspects of your ConfigMap are correct, you can just add these lines to its data section:
rabbitmq.properties: |
  RABBITMQ_MNESIA_DIR=/data
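The answer doesn't show how the pod actually consumes this ConfigMap, so here is a minimal sketch of one way to wire it up, reusing the volume and claim names from the question; it mounts the rabbitmq.properties entry as /etc/rabbitmq/rabbitmq-env.conf, which RabbitMQ sources on startup (alternatively, you can set RABBITMQ_MNESIA_DIR directly through the container's env: field). The image tag is an assumption:
containers:
- name: rabbitmq
  image: rabbitmq:3-management
  volumeMounts:
  - name: config-volume
    mountPath: /etc/rabbitmq
  - name: standalone-rabbitmq-data
    mountPath: /data
volumes:
- name: config-volume
  configMap:
    name: rabbitmq-config
    items:
    - key: rabbitmq.properties
      path: rabbitmq-env.conf   # RabbitMQ reads environment-style settings from this file
- name: standalone-rabbitmq-data
  persistentVolumeClaim:
    claimName: standalone-rabbitmq-pvc-test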

Related

Why are local persistent volumes not visible in EKS?

In order to test whether I can get self-written software deployed on Amazon using Docker images,
I have a test EKS cluster.
I have written a small test script that reads and writes a file to see if I understand how to deploy. I have successfully deployed it in minikube, using three replicas. The replicas all use a shared directory on my local file system, and in minikube that is mounted into the pods with a volume.
The next step was to deploy it in the EKS cluster. However, I cannot get it working in EKS. The problem is that the pods don't see the contents of the mounted directory.
This does not completely surprise me, since in minikube I had to create a mount first to a local directory on the server. I have not done something similar on the EKS server.
My question is what I should do to make this work (if it is possible at all).
I use this YAML file to create a pod in EKS:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  storageClassName: local-storage
  capacity:
    storage: "1Gi"
  accessModes:
    - "ReadWriteOnce"
  hostPath:
    path: /data/k8s
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: local-storage
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
So what I expect is that I have a local directory, /data/k8s, that is visible in the pods as path /config.
When I apply this YAML, I get a pod whose error message makes it clear that the data in the /data/k8s directory is not visible to the pod.
Kubectl gives me this info after creation of the volume and claim
[rdgon@NL013-PPDAPP015 probeer]$ kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv-volume                                  1Gi        RWO            Retain           Available                                               15s
persistentvolume/pvc-156edfef-d272-4df6-ae16-09b12e1c2f03   1Gi        RWO            Delete           Bound       default/pv-claim   gp2                     9s

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pvc-156edfef-d272-4df6-ae16-09b12e1c2f03   1Gi        RWO            gp2            15s
Which seems to indicate everything is OK. But it seems that the filesystem of the master node, on which I run the yaml file to create the volume, is not the location where the pods look when they access the /config dir.
On EKS, there's no storage class named 'local-storage' by default.
There is only a 'gp2' storage class, which is also used when you don't specify a storageClassName.
The 'gp2' storage class creates a dedicated EBS volume and attaches it to your Kubernetes node when required, so it doesn't use a local folder. You also don't need to create the PV manually, just the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: gp2
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
If you want a folder on the Node itself, you can use a 'hostPath' volume, and you don't need a pv or pvc for that:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
This is a bad idea, though, since the data will be lost if another node starts up and your pod is moved to the new node.
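If you do go down the hostPath route anyway, one way to keep the pod on the node that actually holds the data is a nodeSelector; a sketch, assuming a hypothetical label disk=local-data that you add to that node with kubectl label node <node-name> disk=local-data:
spec:
  nodeSelector:
    disk: local-data          # hypothetical label, must exist on the node
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s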
If it's for configuration only, you can also use a configMap, and put the files directly in your kubernetes manifest files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ruud-config
data:
  ruud.properties: |
    my ruud.properties file content...
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    configMap:
      name: ruud-config
Please check whether the PV got created and is bound to the PVC by running the commands below:
kubectl get pv
kubectl get pvc
This will show whether the objects were created properly.
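If the claim stays in Pending, a useful next step (my suggestion, not part of the original answer) is to look at its events, which usually name the reason, such as no matching PV or a wrong storage class:
kubectl describe pvc pv-claim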
The local path you refer to is not valid. Try:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: /config
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
      type: DirectoryOrCreate # <-- You need this since the directory may not exist on the node.

How to replace the in-memory storage with persistent storage using kustomize

I'm trying to replace the in-memory storage of the Grafana deployment with persistent storage using kustomize. I'm removing the in-memory storage and then mapping persistent storage, but when I deploy it, it gives me an error.
Error
The Deployment "grafana" is invalid: spec.template.spec.containers[0].volumeMounts[1].name: Not found: "grafana-storage"
Kustomize version
{Version:kustomize/v4.0.5 GitCommit:9e8e7a7fe99ec9fbf801463e8607928322fc5245 BuildDate:2021-03-08T20:53:03Z GoOs:linux GoArch:amd64}
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://github.com/prometheus-operator/kube-prometheus
- grafana-pvc.yaml
patchesStrategicMerge:
- grafana-patch.yaml
grafana-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-storage
  namespace: monitoring
  labels:
    billingType: "hourly"
    region: sng01
    zone: sng01
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: ibmc-file-bronze
grafana-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      volumes:
      # use persistent storage for storing users instead of in-memory storage
      - $patch: delete   # <---- trying to remove the previous volume
        name: grafana-storage
      - name: grafana-storage
        persistentVolumeClaim:
          claimName: grafana-storage
      containers:
      - name: grafana
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
please help.
The $patch: delete doesn't seem to work as I would expect.
It may be nice to open an issue on kustomize github: https://github.com/kubernetes-sigs/kustomize/issues and ask developers about it.
Here is the patch I tried, though, and it seems to work (setting emptyDir to null removes that field during the strategic merge, so the volume ends up backed only by the PVC):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  template:
    spec:
      volumes:
      - name: grafana-storage
        emptyDir: null
        persistentVolumeClaim:
          claimName: grafana-storage
      containers:
      - name: grafana
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
Based on https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/add-new-patchStrategy-to-clear-fields-not-present-in-patch.md
The following should also work in theory:
spec:
  volumes:
  - $retainKeys:
      - name
      - persistentVolumeClaim
    name: grafana-storage
    persistentVolumeClaim:
      claimName: grafana-storage
But in practice it doesn't, and I think that's because kustomize has its own implementation of strategic merge (different from the one built into Kubernetes).
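One thing that can help while experimenting (a suggestion beyond the original answer) is to render the manifests locally before applying them and check what the grafana volume actually ends up as:
kustomize build . | grep -A3 'grafana-storage'
# or, with a recent kubectl:
kubectl kustomize . | grep -A3 'grafana-storage'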

Multiple Persistent Volumes with the same mount path Kubernetes

I have created 3 CronJobs in Kubernetes. The format is exactly the same for every one of them except the names. These are the following specs:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test-job-1 # for others it's test-job-2 and test-job-3
  namespace: cron-test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: test-job-1 # for others it's test-job-2 and test-job-3
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - "/bin/sh"
            - "-c"
            args:
            - cd database-backup && touch $(date +%Y-%m-%d:%H:%M).test-job-1 && ls -la # for others the filename includes test-job-2 and test-job-3 respectively
            volumeMounts:
            - mountPath: "/database-backup"
              name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
          volumes:
          - name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
            persistentVolumeClaim:
              claimName: test-job-1-pvc # for others it's test-job-2-pvc and test-job-3-pvc
And also the following PersistentVolumeClaims and PersistentVolumes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-job-1-pvc # for others it's test-job-2-pvc or test-job-3-pvc
  namespace: cron-test
spec:
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  resources:
    requests:
      storage: 1Gi
  volumeName: test-job-1-pv # depending on the name it's test-job-2-pv or test-job-3-pv
  storageClassName: manual
  volumeMode: Filesystem
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-job-1-pv # for others it's test-job-2-pv and test-job-3-pv
  namespace: cron-test
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/database-backup"
So all in all there are 3 CronJobs, 3 PersistentVolumes and 3 PersistentVolumeClaims. I can see that the PersistentVolumeClaims and PersistentVolumes are bound correctly to each other. So test-job-1-pvc <--> test-job-1-pv, test-job-2-pvc <--> test-job-2-pv and so on. Also the pods associated with each PVC are the corresponding pods created by each CronJob. For example test-job-1-1609066800-95d4m <--> test-job-1-pvc and so on. After letting the cron jobs run for a bit I create another pod with the following specs to inspect test-job-1-pvc:
apiVersion: v1
kind: Pod
metadata:
  name: data-access
  namespace: cron-test
spec:
  containers:
  - name: data-access
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data-access-volume
      mountPath: /database-backup
  volumes:
  - name: data-access-volume
    persistentVolumeClaim:
      claimName: test-job-1-pvc
Just a simple pod that keeps running all the time. When I exec into that pod and look inside the /database-backup directory, I see all the files created by the pods from all 3 CronJobs.
What did I expect to see?
I expected to see only the files created by test-job-1.
Is this expected to happen? And if so, how can you separate the PersistentVolumes to avoid something like this?
I suspect this is caused by the PersistentVolume definition: if you really only changed the name, all volumes are mapped to the same folder on the host.
hostPath:
  path: "/database-backup"
Try giving each volume a unique folder, e.g.
hostPath:
  path: "/database-backup/volume1"

StorageClass of type local with a PVC gives an error in Kubernetes

I want to use a local volume that is mounted on my node at the path /mnts/drive, so I created a StorageClass (as shown in the documentation for the local StorageClass), and then a PVC and a simple pod which uses that volume.
These are the configurations used:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysampleclaim
spec:
  storageClassName: local-fast
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mysamplepod
spec:
  containers:
  - name: frontend
    image: nginx:1.13
    volumeMounts:
    - mountPath: "/var/www/html"
      name: myvolume
  volumes:
  - name: myvolume
    persistentVolumeClaim:
      claimName: mysampleclaim
When I try to create this YAML, it gives me an error and I don't know what I am missing:
Unable to mount volumes for pod "mysamplepod_default(169efc06-3141-11e8-8e58-02d4a61b9de4)": timeout expired list of unattached/unmounted volumes=[myvolume]
If you want to use a local volume that is mounted on the node at the /mnts/drive path, you just need to use a hostPath volume in your pod:
A hostPath volume mounts a file or directory from the host node’s
filesystem into your pod.
The final pod.yaml is:
apiVersion: v1
kind: Pod
metadata:
  name: mysamplepod
spec:
  containers:
  - name: frontend
    image: nginx:1.13
    volumeMounts:
    - mountPath: "/var/www/html"
      name: myvolume
  volumes:
  - name: myvolume
    hostPath:
      # directory location on host
      path: /mnts/drive
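If you would rather keep the local StorageClass approach from the question, note that the kubernetes.io/no-provisioner class requires you to create the PersistentVolume by hand, including a nodeAffinity that pins it to the node that has the disk; a sketch, assuming a hypothetical node name worker-1:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysample-local-pv
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-fast
  local:
    path: /mnts/drive              # the directory already mounted on the node
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1               # hypothetical node name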

Multiple Volume mounts with Kubernetes: one works, one doesn't

I am trying to create a Kubernetes pod with a single container which has two external volumes mounted on it. My .yml pod file is:
apiVersion: v1
kind: Pod
metadata:
  name: my-project
  labels:
    name: my-project
spec:
  containers:
  - image: my-username/my-project
    name: my-project
    ports:
    - containerPort: 80
      name: nginx-http
    - containerPort: 443
      name: nginx-ssl-https
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: /home/projects/my-project/media/upload
      name: pd-data
    - mountPath: /home/projects/my-project/backups
      name: pd2-data
  imagePullSecrets:
  - name: vpregistrykey
  volumes:
  - name: pd-data
    persistentVolumeClaim:
      claimName: pd-claim
  - name: pd2-data
    persistentVolumeClaim:
      claimName: pd2-claim
I am using PersistentVolumes and PersistentVolumeClaims, as follows:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pd-disk
  labels:
    name: pd-disk
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: "pd-disk"
    fsType: "ext4"
PVC
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pd-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Gi
I have initially created my disks using the command:
$ gcloud compute disks create --size 250GB pd-disk
The same goes for the second disk and the second PV and PVC. Everything seems to work OK when I create the pod; no errors are thrown. Now comes the weird part: one of the paths is being mounted correctly (and is therefore persistent) and the other one is being erased every time I restart the pod...
I have tried re-creating everything from scratch, but nothing changes. Also, from the pod description, both volumes seem to be correctly mounted:
$ kubectl describe pod my-project
Name:   my-project
...
Volumes:
  pd-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd-claim
    ReadOnly:   false
  pd2-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pd2-claim
    ReadOnly:   false
Any help is appreciated. Thanks.
The Kubernetes documentation states:
Volumes can not mount onto other volumes or have hard links to other volumes
I had the same issue and in my case the problem was that both volume mounts had overlapping mountPaths, i.e. both started with /var/.
They mounted without issues after fixing that.
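To illustrate the kind of overlap that can bite (my example paths, not the original poster's): a mount path nested inside another volume's mount path is easy to create by accident, while sibling paths are independent:
# Overlapping: the second mount lives inside the first one's path
volumeMounts:
- mountPath: /var/www
  name: pd-data
- mountPath: /var/www/backups
  name: pd2-data

# Non-overlapping: sibling paths
volumeMounts:
- mountPath: /var/www
  name: pd-data
- mountPath: /srv/backups
  name: pd2-data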
I do not see any direct reason for the behaviour explained above, but what I can suggest is to use a Deployment instead of a Pod, as many here recommend, especially when using PVs and PVCs. A Deployment takes care of many things to maintain the desired state. I have attached my code below for your reference; it works, and both volumes stay persistent even after deleting/terminating/restarting, since this is managed by the Deployment's desired state.
Two differences you will find between my code and yours:
I have a Deployment object instead of a Pod.
I am using GlusterFS for my volume.
Deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: platform
  labels:
    component: nginx
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
    spec:
      nodeSelector:
        role: app-1
      containers:
      - name: nginx
        image: vip-intOAM:5001/nginx:1.15.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: "/etc/nginx/conf.d/"
          name: nginx-confd
        - mountPath: "/var/www/"
          name: nginx-web-content
      volumes:
      - name: nginx-confd
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-confd-pvc
      - name: nginx-web-content
        persistentVolumeClaim:
          claimName: glusterfsvol-nginx-web-content-pvc
One of my PVs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-nginx-confd-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  glusterfs:
    endpoints: gluster-cluster
    path: nginx-confd
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-nginx-confd-pvc
    namespace: platform
PVC for the above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfsvol-nginx-confd-pvc
  namespace: platform
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi