RabbitMQ configuration file is not copying in the Kubernetes deployment - kubernetes

I'm trying to deploy RabbitMQ on a Kubernetes cluster and I'm using an initContainer to copy a file from a ConfigMap. However, the file has not been copied by the time the pod is in a running state.
Initially, I tried without an initContainer, but I was getting an error like "touch: cannot touch '/etc/rabbitmq/rabbitmq.conf': Read-only file system."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: broker01
  namespace: s2sdocker
  labels:
    app: broker01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: broker01
  template:
    metadata:
      name: broker01
      labels:
        app: broker01
    spec:
      initContainers:
      - name: configmap-copy
        image: busybox
        command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/']
        volumeMounts:
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
        - name: pre-install
          mountPath: /etc/rabbitmq
      containers:
      - name: broker01
        image: rabbitmq:3.7.17-management
        envFrom:
        - configMapRef:
            name: broker01-rabbitmqenv-cm
        ports:
        volumeMounts:
        - name: broker01-data
          mountPath: /var/lib/rabbitmq
        - name: broker01-log
          mountPath: /var/log/rabbitmq/log
        - name: broker01-definitions
          mountPath: /etc/rabbitmq/files
      volumes:
      - name: pre-install
        emptyDir: {}
      - name: broker01-data
        persistentVolumeClaim:
          claimName: broker01-data-pvc
      - name: broker01-log
        persistentVolumeClaim:
          claimName: broker01-log-pvc
      - name: broker01-definitions
        configMap:
          name: broker01-definitions-cm
The file "definitions.json" should be copied to /etc/reabbitmq folder. I have followed "Kubernetes deployment read-only filesystem error". But issue did not fix.

After making changes in the containers' volumeMounts section, I was able to copy the file into the /etc/rabbitmq folder.
Here is the modified code.
- name: broker01
  image: rabbitmq:3.7.17-management
  envFrom:
  - configMapRef:
      name: broker01-rabbitmqenv-cm
  ports:
  volumeMounts:
  - name: broker01-data
    mountPath: /var/lib/rabbitmq
  - name: broker01-log
    mountPath: /var/log/rabbitmq/log
  - name: pre-install
    mountPath: /etc/rabbitmq

Can you check the permissions on /etc/rabbitmq/?
Does the user have permission to copy the file to that location?
- name: pre-install
  mountPath: /etc/rabbitmq
I see that /etc/rabbitmq is a mount point. It is a read-only file system, and hence the file copy fails.
Can you update the permissions on the 'pre-install' mount point?
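For completeness, the pattern the question ends up using can be sketched as a single pod spec fragment (a minimal sketch reusing the names from the question: the initContainer copies definitions.json from the read-only ConfigMap mount into the writable pre-install emptyDir, which the main container then mounts at /etc/rabbitmq):
spec:
  initContainers:
  - name: configmap-copy
    image: busybox
    # copy the file from the ConfigMap mount into the writable emptyDir
    command: ['/bin/sh', '-c', 'cp /etc/rabbitmq/files/definitions.json /etc/rabbitmq/']
    volumeMounts:
    - name: broker01-definitions
      mountPath: /etc/rabbitmq/files
    - name: pre-install
      mountPath: /etc/rabbitmq
  containers:
  - name: broker01
    image: rabbitmq:3.7.17-management
    volumeMounts:
    - name: pre-install
      mountPath: /etc/rabbitmq   # writable emptyDir, now containing definitions.json
  volumes:
  - name: pre-install
    emptyDir: {}
  - name: broker01-definitions
    configMap:
      name: broker01-definitions-cm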

Related

Kubernetes says 'Deployment.apps "myDeploy" is invalid' after I deleted a duplicated volume name

I had a Deployment, and part of it looked like this after a few updates:
spec:
  containers:
  - image: myImage
    name: myImage
    volumeMounts:
    - name: data
      mountPath: /opt/data
  volumes:
  - name: data
    configMap:
      name: configmap-data
  - name: data
    configMap:
      name: configmap-data
Then I found that the Deployment had duplicated volumes, so I deleted one:
spec:
  containers:
  - image: myImage
    name: myImage
    volumeMounts:
    - name: data
      mountPath: /opt/data
  volumes:
  - name: data
    configMap:
      name: configmap-data
However, after that I could not apply/patch the new Deployment. It said:
cannot patch "myDeploy" with kind Deployment: Deployment.apps "myDeploy" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "data"
I would like to solve this without downtime. How can I do this?

How to use git-sync image as a sidecar in kubernetes that git pulls periodically

I am trying to use the git-sync image as a sidecar in Kubernetes that runs git pull periodically and mounts the cloned data to a shared volume.
Everything works fine when I configure it to sync one time. I want it to run periodically, for example every 10 minutes. However, when I configure it to run periodically, pod initialization fails.
I read the documentation but couldn't find a proper answer. It would be nice if you could help me figure out what I am missing in my configuration.
Here is my failing configuration.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "10"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      volumes:
      - name: www-data
        emptyDir: {}
Pod
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-helloworld
  name: nginx-helloworld
spec:
  containers:
  - image: nginx
    name: nginx-helloworld
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
You are using git-sync as an init container, which runs only during init (once in the pod's lifecycle).
A Pod can have multiple containers running apps within it, but it can also have one or more init containers, which are run before the app containers are started.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
init-containers
So run it as a regular container instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.3
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/musaalp/design-patterns.git" ##repo-path-you-want-to-clone
        - name: GIT_SYNC_BRANCH
          value: "master" ##repo-branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ##path-where-you-want-to-clone
        - name: GIT_SYNC_PERIOD
          value: "20"
        - name: GIT_SYNC_ONE_TIME
          value: "false"
        securityContext:
          runAsUser: 0
      - name: nginx-helloworld
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      volumes:
      - name: www-data
        emptyDir: {}

ConfigMap volume is not mounting as volume along with secret

I have references to both Secrets and ConfigMaps in my Deployment YAML file. The Secret volume is mounting as expected, but not the ConfigMap.
I created a ConfigMap with multiple files, and when I do a kubectl get configmap ... it shows all of the expected values. Also, when I create just the ConfigMap it mounts the volume fine, but not together with a Secret.
I have tried different scenarios, from keeping both in the same directory to separating them, but none of them seem to work.
Here is my YAML. Am I referencing the ConfigMap correctly?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1 # tells deployment to run 1 pod
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: XXX/yyy/image-50
        ports:
        - containerPort: 9009
          protocol: TCP
        volumeMounts:
        - mountPath: /shared
          name: configs-volume
          readOnly: true
        volumeMounts:
        - mountPath: data/secrets
          name: atp-secret
          readOnly: true
      volumes:
      - name: configs-volume
        configMap:
          name: prompts-config
          key: application.properties
      volumes:
      - name: atp-secret
        secret:
          defaultMode: 420
          secretName: dev-secret
      restartPolicy: Always
      imagePullSecrets:
      - name: ocirsecrets
You have two separate lists of volumes: and also two separate lists of volumeMounts:. When the YAML is parsed, a repeated key replaces the earlier one, so Kubernetes only sees the last volumes: list and the last volumeMounts: list.
volumes:
- name: configs-volume
volumes: # this completely replaces the list of volumes
- name: atp-secret
In both cases you need a single volumes: or volumeMounts: key, and then multiple list items underneath those keys.
volumes:
- name: configs-volume
  configMap: { ... }
- name: atp-secret   # do not repeat the volumes: key
  secret: { ... }
containers:
- name: hello
  volumeMounts:
  - name: configs-volume
    mountPath: /shared
  - name: atp-secret   # do not repeat the volumeMounts: key
    mountPath: /data/secrets
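Put back into the Deployment from the question, the pod spec would look roughly like this (a sketch reusing the question's names; the stray key: application.properties field under the configMap volume is dropped, since a configMap volume selects individual keys with an items: list rather than a key: field, and the secret mount path is written as an absolute path):
spec:
  containers:
  - name: hello
    image: XXX/yyy/image-50
    ports:
    - containerPort: 9009
      protocol: TCP
    volumeMounts:            # a single list with two entries
    - name: configs-volume
      mountPath: /shared
      readOnly: true
    - name: atp-secret
      mountPath: /data/secrets
      readOnly: true
  volumes:                   # a single list with two entries
  - name: configs-volume
    configMap:
      name: prompts-config
  - name: atp-secret
    secret:
      secretName: dev-secret
      defaultMode: 420
  restartPolicy: Always
  imagePullSecrets:
  - name: ocirsecrets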

Copying mounted files with command in Kubernetes is slow

I am creating an OrientDB cluster with Kubernetes (in Minikube) and I use a StatefulSet to create the pods. I am trying to mount all the OrientDB cluster configs into a folder named configs. Using the container's command, I copy the files after mounting into the standard /orientdb/config folder. However, after I added this step the pods are created at a slower rate, and sometimes I get exceptions like:
Unable to connect to the server: net/http: TLS handshake timeout
Before that, I tried to mount the configs directly into the /orientdb/config folder, but I got a permissions error, and from what I researched, mounting over that root-owned folder is not allowed.
How can I approach this issue? Is there a workaround?
The StatefulSet looks like this:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: orientdbservice
spec:
  serviceName: orientdbservice
  replicas: 3
  selector:
    matchLabels:
      service: orientdb
      type: container-deployment
  template:
    metadata:
      labels:
        service: orientdb
        type: container-deployment
    spec:
      containers:
      - name: orientdbservice
        image: orientdb:2.2.36
        command: ["/bin/sh","-c", "cp /configs/* /orientdb/config/ ; /orientdb/bin/server.sh -Ddistributed=true" ]
        env:
        - name: ORIENTDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: orientdb-password
              key: password.txt
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 2424
          name: port-binary
        - containerPort: 2480
          name: port-http
        volumeMounts:
        - name: config
          mountPath: /orientdb/config
        - name: orientdb-config-backups
          mountPath: /configs/backups.json
          subPath: backups.json
        - name: orientdb-config-events
          mountPath: /configs/events.json
          subPath: events.json
        - name: orientdb-config-distributed
          mountPath: /configs/default-distributed-db-config.json
          subPath: default-distributed-db-config.json
        - name: orientdb-config-hazelcast
          mountPath: /configs/hazelcast.xml
          subPath: hazelcast.xml
        - name: orientdb-config-server
          mountPath: /configs/orientdb-server-config.xml
          subPath: orientdb-server-config.xml
        - name: orientdb-config-client-logs
          mountPath: /configs/orientdb-client-log.properties
          subPath: orientdb-client-log.properties
        - name: orientdb-config-server-logs
          mountPath: /configs/orientdb-server-log.properties
          subPath: orientdb-server-log.properties
        - name: orientdb-config-plugin
          mountPath: /configs/pom.xml
          subPath: pom.xml
        - name: orientdb-databases
          mountPath: /orientdb/databases
        - name: orientdb-backup
          mountPath: /orientdb/backup
        - name: orientdb-data
          mountPath: /orientdb/bin/data
      volumes:
      - name: config
        emptyDir: {}
      - name: orientdb-config-backups
        configMap:
          name: orientdb-configmap-backups
      - name: orientdb-config-events
        configMap:
          name: orientdb-configmap-events
      - name: orientdb-config-distributed
        configMap:
          name: orientdb-configmap-distributed
      - name: orientdb-config-hazelcast
        configMap:
          name: orientdb-configmap-hazelcast
      - name: orientdb-config-server
        configMap:
          name: orientdb-configmap-server
      - name: orientdb-config-client-logs
        configMap:
          name: orientdb-configmap-client-logs
      - name: orientdb-config-server-logs
        configMap:
          name: orientdb-configmap-server-logs
      - name: orientdb-config-plugin
        configMap:
          name: orientdb-configmap-plugin
      - name: orientdb-data
        hostPath:
          path: /import_data
          type: Directory
  volumeClaimTemplates:
  - metadata:
      name: orientdb-databases
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
  - metadata:
      name: orientdb-backup
      labels:
        service: orientdb
        type: pv-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi

Kubernetes/Openshift: how to use envFile to read from a filesystem file

I've configured two containers in a pod.
The first one creates a file like:
SPRING_DATASOURCE_USERNAME: username
SPRING_DATASOURCE_PASSWORD: password
I want the second container to read from this location in order to initialize its env variables.
I'm trying to use envFrom, but I can't quite figure out how to use it.
This is my spec:
metadata:
  annotations:
    configmap.fabric8.io/update-on-change: ${project.artifactId}
  labels:
    name: wsec
  name: wsec
spec:
  replicas: 1
  selector:
    name: wsec
    version: ${project.version}
    provider: fabric8
  template:
    metadata:
      labels:
        name: wsec
        version: ${project.version}
        provider: fabric8
    spec:
      containers:
      - name: sidekick
        image: quay.io/ukhomeofficedigital/vault-sidekick:latest
        args:
        - -cn=secret:openshift/postgresql:env=USERNAME
        env:
        - name: VAULT_ADDR
          value: "https://vault.vault-sidekick.svc:8200"
        - name: VAULT_TOKEN
          value: "34f8e679-3fbd-77b4-5de9-68b99217cc02"
        volumeMounts:
        - name: sidekick-backend-volume
          mountPath: /etc/secrets
          readOnly: false
      - name: wsec
        image: ${docker.image}
        env:
        - name: SPRING_APPLICATION_JSON
          valueFrom:
            configMapKeyRef:
              name: wsec-configmap
              key: SPRING_APPLICATION_JSON
        envFrom:
          ???
      volumes:
      - name: sidekick-backend-volume
        emptyDir: {}
It looks like you are packaging the environment variables with the first container and reading them in the second container.
If that's the case, you can use initContainers.
Create a volume mapped to an emptyDir. Mount it on the init container (propertiesContainer) and the main container (springBootAppContainer) as a volumeMount. This directory is then visible to both containers.
- name: properties-container              # init container that produces the properties file
  image: properties/container/path
  command: [bash, -c]
  args: ["cp -r /location/in/properties/container /propsdir"]
  volumeMounts:
  - name: propsdir
    mountPath: /propsdir
This will put the properties in /propsdir. When the main container starts, it can read the properties from /propsdir.
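A minimal sketch of the whole pattern, assuming the first container writes the file in KEY=value form so it can be sourced (the container names, the env.properties file name, and the java -jar /app.jar entry point are illustrative assumptions, not from the original answer):
spec:
  volumes:
  - name: propsdir
    emptyDir: {}
  initContainers:
  - name: properties-container            # writes the env file into the shared volume
    image: properties/container/path      # placeholder image path from the answer
    command: [bash, -c]
    args: ["cp -r /location/in/properties/container /propsdir"]
    volumeMounts:
    - name: propsdir
      mountPath: /propsdir
  containers:
  - name: spring-boot-app                 # assumed name for the main container
    image: ${docker.image}
    # envFrom cannot point at a file on disk, so the file is sourced at startup instead
    # (assumes KEY=value lines and a hypothetical /app.jar entry point)
    command: [bash, -c]
    args: ["set -a; . /propsdir/env.properties; set +a; exec java -jar /app.jar"]
    volumeMounts:
    - name: propsdir
      mountPath: /propsdir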