How can I put multiple ConfigMaps in the same mount path in Kubernetes?

I have two Python files, testing_1.py and testing_2.py.
I then created two ConfigMaps, testing-config1 to store testing_1.py and testing-config2 to store testing_2.py, respectively.
In the Kubernetes YAML:
...
containers:
- name: spark-master
  image: bde2020/spark-master:2.4.4-hadoop2.7
  volumeMounts:
  - name: testing
    mountPath: /jobs
  - name: testing2
    mountPath: /jobs
volumes:
- name: testing
  configMap:
    name: testing-config1
- name: testing2
  configMap:
    name: testing-config2
...
In the final result, the /jobs path contains only testing_1.py.

You can do it by providing a subPath when specifying the mount path. Change it as below:
containers:
- name: spark-master
  image: bde2020/spark-master:2.4.4-hadoop2.7
  volumeMounts:
  - name: testing
    mountPath: /jobs/testing_1.py
    subPath: testing_1.py
  - name: testing2
    mountPath: /jobs/testing_2.py
    subPath: testing_2.py
volumes:
- name: testing
  configMap:
    name: testing-config1
- name: testing2
  configMap:
    name: testing-config2

You can put both files in the same ConfigMap.
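For example, a single ConfigMap carrying both scripts could look roughly like this (a sketch; the combined ConfigMap name and the script bodies are placeholders):
apiVersion: v1
kind: ConfigMap
metadata:
  name: testing-config   # hypothetical combined ConfigMap
data:
  testing_1.py: |
    # contents of testing_1.py go here
  testing_2.py: |
    # contents of testing_2.py go here
Mounting this one ConfigMap at /jobs then exposes both keys as files in that directory.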
Or, use a projected volume:
A projected volume maps several existing volume sources into the same
directory.
Currently, the following types of volume sources can be projected:
secret
downwardAPI
configMap
serviceAccountToken
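Applied to the question above, a projected volume combining testing-config1 and testing-config2 under /jobs might look like this (a sketch that keeps the names from the question; the volume name "jobs" is hypothetical):
containers:
- name: spark-master
  image: bde2020/spark-master:2.4.4-hadoop2.7
  volumeMounts:
  - name: jobs
    mountPath: /jobs
    readOnly: true
volumes:
- name: jobs              # hypothetical volume name
  projected:
    sources:
    - configMap:
        name: testing-config1
    - configMap:
        name: testing-config2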

Related

ConfigMap volume with defaultMode not working?

The YAML file is like below:
apiVersion: v1
kind: Pod
metadata:
  name: fortune-configmap-volume
spec:
  containers:
  - image: luksa/fortune:env
    env:
    - name: INTERVAL
      valueFrom:
        configMapKeyRef:
          name: fortune-config
          key: sleep-interval
    name: html-generator
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    - name: config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: html
    emptyDir: {}
  - name: config
    configMap:
      name: fortune-config
      defaultMode: 0777
As you can see, I set the configMap volume's defaultMode to 0777, but when I try to modify a file in the container under /etc/nginx/conf.d, it tells me the operation is not allowed. Why?
https://github.com/kubernetes/kubernetes/issues/62099
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location.
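One common way to follow that advice (a sketch of my own, not taken from the linked issue) is to keep the ConfigMap mount read-only and give the application a separate writable emptyDir for any derived files. Reusing the names from the question, with "writable-config" and /etc/nginx/derived as hypothetical additions:
containers:
- name: web-server
  image: nginx:alpine
  volumeMounts:
  - name: config
    mountPath: /etc/nginx/conf.d    # ConfigMap content stays read-only here
  - name: writable-config           # hypothetical writable volume for derived files
    mountPath: /etc/nginx/derived
volumes:
- name: config
  configMap:
    name: fortune-config
- name: writable-config
  emptyDir: {}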

Kubernetes says 'Deployment.apps "myDeploy" is invalid' after I deleted a duplicated volume name

I had a Deployment, and part of it looked like this after a few updates:
spec:
  containers:
  - image: myImage
    name: myImage
    volumeMounts:
    - name: data
      mountPath: /opt/data
  volumes:
  - name: data
    configMap:
      name: configmap-data
  - name: data
    configMap:
      name: configmap-data
Then I found that the Deployment had duplicate volumes, so I deleted one:
spec:
  containers:
  - image: myImage
    name: myImage
    volumeMounts:
    - name: data
      mountPath: /opt/data
  volumes:
  - name: data
    configMap:
      name: configmap-data
However, after that I could not apply/patch the new Deployment. It said:
cannot patch "myDeploy" with kind Deployment: Deployment.apps "myDeploy" is invalid: spec.template.spec.containers[0].volumeMounts[0].name: Not found: "data"
I would like to solve this without downtime. How can I do this?

ConfigMap volume is not mounting as a volume along with a Secret

I have references to both Secrets and ConfigMaps in my Deployment YAML file. The Secret volume is mounting as expected, but not the ConfigMap.
I created a ConfigMap with multiple files, and when I do a kubectl get configmap ... it shows all of the expected values. Also, when I create just the ConfigMap, the volume mounts fine, but not alongside a Secret.
I have tried different scenarios, from having both in the same directory to separating them, but none seem to work.
Here is my YAML. Am I referencing the ConfigMap correctly?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1 # tells deployment to run 1 pod
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: XXX/yyy/image-50
        ports:
        - containerPort: 9009
          protocol: TCP
        volumeMounts:
        - mountPath: /shared
          name: configs-volume
          readOnly: true
        volumeMounts:
        - mountPath: data/secrets
          name: atp-secret
          readOnly: true
      volumes:
      - name: configs-volume
        configMap:
          name: prompts-config
          key: application.properties
      volumes:
      - name: atp-secret
        secret:
          defaultMode: 420
          secretName: dev-secret
      restartPolicy: Always
      imagePullSecrets:
      - name: ocirsecrets
You have two separate lists of volumes: and also two separate lists of volumeMounts:. When Kubernetes tries to find the list of volumes: in the Pod spec, it finds the last matching one in each set.
volumes:
- name: configs-volume
volumes:   # this completely replaces the list of volumes
- name: atp-secret
In both cases you need a single volumes: or volumeMounts: key, and then multiple list items underneath those keys.
volumes:
- name: configs-volume
  configMap: { ... }
- name: atp-secret       # do not repeat the volumes: key
  secret: { ... }
containers:
- name: hello
  volumeMounts:
  - name: configs-volume
    mountPath: /shared
  - name: atp-secret     # do not repeat the volumeMounts: key
    mountPath: /data/secrets

How to mount multiple files / secrets into a common directory in Kubernetes?

I have multiple Secrets created from different files. I'd like to store all of them in a common directory, /var/secrets/. Unfortunately, I'm unable to do that because Kubernetes throws an 'Invalid value: "/var/secret": must be unique' error during the pod validation step. Below is an example of my pod definition.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: alpine
    name: alpine-secret
    volumeMounts:
    - name: xfile
      mountPath: "/var/secrets/"
      readOnly: true
    - name: yfile
      mountPath: "/var/secrets/"
      readOnly: true
  volumes:
  - name: xfile
    secret:
      secretName: my-secret-one
  - name: yfile
    secret:
      secretName: my-secret-two
How can I store files from multiple secrets in the same directory?
Projected Volume
You can use a projected volume to put the two Secrets in the same directory.
Example
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: alpine
    name: alpine-secret
    volumeMounts:
    - name: xyfiles
      mountPath: "/var/secrets/"
      readOnly: true
  volumes:
  - name: xyfiles
    projected:
      sources:
      - secret:
          name: my-secret-one
      - secret:
          name: my-secret-two
(EDIT: Never mind - I just noticed @Jonas gave the same answer earlier. +1 from me)
Starting with Kubernetes v1.11+ it is possible with projected volumes:
A projected volume maps several existing volume sources into the same
directory.
Currently, the following types of volume sources can be projected:
secret
downwardAPI
configMap
serviceAccountToken
This is an example of "... how to use a projected Volume to mount several existing volume sources into the same directory".
Maybe using subPath will help.
Example:
volumeMounts:
- name: app-redis-vol
  mountPath: /app/config/redis.yaml
  subPath: redis.yaml
- name: app-config-vol
  mountPath: /app/config/app.yaml
  subPath: app.yaml
volumes:
- name: app-redis-vol
  configMap:
    name: config-map-redis
    items:
    - key: yourKey
      path: redis.yaml
- name: app-config-vol
  configMap:
    name: config-map-app
    items:
    - key: yourKey
      path: app.yaml
Here, your ConfigMap named config-map-redis, created from the file redis.yaml, is mounted in /app/config/ as the file redis.yaml.
Likewise, the ConfigMap config-map-app is mounted in /app/config/ as app.yaml.
There is a nice article about this here: Injecting multiple Kubernetes volumes to the same directory
Edited:
@Jonas' answer is correct!
However, if you use plain volumes as I did in the question, the short answer is that you cannot do that: you have to give each volume a mountPath pointing to an unused directory, because mount paths have to be unique and separate volumes cannot be mounted into a common directory.
Solution:
What I did in the end was, instead of keeping the files in separate Secrets, I created one Secret with multiple files.
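A minimal sketch of such a combined Secret (the Secret name, key names, and contents here are hypothetical):
apiVersion: v1
kind: Secret
metadata:
  name: my-secret          # hypothetical combined Secret
type: Opaque
stringData:
  file-one.txt: contents of the first file
  file-two.txt: contents of the second file
Mounting this single Secret at /var/secrets/ then yields both files in that one directory.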

StatefulSet volume mounts for OrientDB in Kubernetes

I am trying to deploy an OrientDB cluster with Kubernetes (minikube, specifically). I am using a StatefulSet; however, when I use subPaths in the volumeMounts declaration for all of the OrientDB cluster configs, the pods are not created, although I would like to mount all the ConfigMaps into one folder. The ConfigMaps correspond to the multiple configuration files required to set up the OrientDB cluster.
The StatefulSet looks like this:
volumeMounts:
- name: orientdb-config-backups
  mountPath: /orientdb/config
  subPath: backups
- name: orientdb-config-events
  mountPath: /orientdb/config
  subPath: events
- name: orientdb-config-distributed
  mountPath: /orientdb/config
  subPath: distributed
- name: orientdb-config-hazelcast
  mountPath: /orientdb/config
  subPath: hazelcast
- name: orientdb-config-server
  mountPath: /orientdb/config
  subPath: server
- name: orientdb-config-client-logs
  mountPath: /orientdb/config
  subPath: client-logs
- name: orientdb-config-server-logs
  mountPath: /orientdb/config
  subPath: server-log
- name: orientdb-databases
  mountPath: /orientdb/databases
- name: orientdb-backup
  mountPath: /orientdb/backup
However, when I remove all the subPaths from the StatefulSet, the pods are created and the config files are placed into separate folders. The StatefulSet then looks like this:
volumeMounts:
- name: orientdb-config-backups
  mountPath: /orientdb/config/backups
- name: orientdb-config-events
  mountPath: /orientdb/config/events
- name: orientdb-config-distributed
  mountPath: /orientdb/config/distributed
- name: orientdb-config-hazelcast
  mountPath: /orientdb/config/hazelcast
- name: orientdb-config-server
  mountPath: /orientdb/config/server
- name: orientdb-config-client-logs
  mountPath: /orientdb/config/client-logs
- name: orientdb-config-server-logs
  mountPath: /orientdb/config/server-logs
- name: orientdb-databases
  mountPath: /orientdb/databases
- name: orientdb-backup
  mountPath: /orientdb/backup
- name: orientdb-data
  mountPath: /orientdb/bin/data
What could be the reason of such behavior?
The issue is that there is a bug in the hostPath volume provisioner that hits an "lstat: no such file or directory" error if a subPath field is present in a Deployment/StatefulSet, even if the field is empty. This error prevents the StatefulSet from coming up, and the pods go into containerCreatingConfigErr.
The issue is present on kubeadm as well, which is where I encountered it.
https://github.com/kubernetes/minikube/issues/2256