How do I use a relative path for a secret mountPath in a deployment configuration? - kubernetes

I'm having a hard time configuring mountPath as a relative path.
Let's say I'm running the deployment from the /user/app folder and I want the secret file to be created under /user/app/secret/secret-volume, as follows:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    # name must match the volume name below
    - name: secret-volume
      mountPath: secret/secret-volume
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
For some reason the file secret-volume is created in the root directory, at /secret/secret-volume.

That's because you have mountPath: secret/secret-volume. Change it to mountPath: /user/app/secret/secret-volume. The mountPath is a location inside the container's filesystem, not a path relative to the directory you run the deployment from, and it should be absolute; a relative path gets resolved against the container's root, which is why the file shows up at /secret/secret-volume.
Check the Kubernetes documentation on using Secrets as volumes for details.
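Applied to the manifest from the question, the fix looks like this (assuming /user/app is also the path you want inside the container, since the mount path is independent of the directory you deploy from):
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    - name: secret-volume
      # absolute path inside the container
      mountPath: /user/app/secret/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret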

Related

ConfigMap volume with defaultMode not working?

My YAML file is like below:
apiVersion: v1
kind: Pod
metadata:
  name: fortune-configmap-volume
spec:
  containers:
  - image: luksa/fortune:env
    env:
    - name: INTERVAL
      valueFrom:
        configMapKeyRef:
          name: fortune-config
          key: sleep-interval
    name: html-generator
    volumeMounts:
    - name: html
      mountPath: /var/htdocs
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
      readOnly: true
    - name: config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: html
    emptyDir: {}
  - name: config
    configMap:
      name: fortune-config
      defaultMode: 0777
As you can see, I set the configMap volume's defaultMode to 0777, but when I try to modify a file in the container under /etc/nginx/conf.d, I'm told the operation is not allowed. Why?
This is by design; see https://github.com/kubernetes/kubernetes/issues/62099:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location.
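If the application genuinely needs to modify those files, a common pattern is to copy them into a writable emptyDir volume with an init container and mount that instead. A minimal sketch (the copy-config container and writable-config volume names are illustrative, not from the question):
spec:
  initContainers:
  - name: copy-config
    image: busybox
    # copy the read-only ConfigMap files into the writable volume
    command: ["sh", "-c", "cp /config-ro/* /config-rw/"]
    volumeMounts:
    - name: config
      mountPath: /config-ro
    - name: writable-config
      mountPath: /config-rw
  containers:
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: writable-config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: config
    configMap:
      name: fortune-config
  - name: writable-config
    emptyDir: {}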

Pod unable to mount same path in two volumes

I am a newbie here, but I have a use case where I need to mount the same path into two different PVs. Whenever I try to give both the same path, my pod doesn't come up. Check the mount paths below:
- name: xxx
  mountPath: "/home/{username}"
  readOnly: false
static:
  pvcName:
  subPath: '{username}'
capacity: 10Gi
homeMountPath: '/home/{username}'
dynamic:
  storageClass: nfs-client
  pvcNameTemplate: claim-{username}{servername}
  volumeNameTemplate: volume-{username}{servername}
  storageAccessModes: [ReadWriteOnce]
But after changing the mount path, the pod comes up without any issue, for example:
mountPath: "/home/test/{username}"
Is there something I am missing?
Mounting a single Pod with multiple PVCs at the same path won't be possible unless you use ReadWriteMany.
But mounting multiple paths into a single Pod is possible, using either a single PVC or multiple PVCs, depending on the use case.
Single PVC, single Pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-test-pod
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_HOST
      value: "IP"
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: data
      subPath: path1
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /var/www/html
      name: data
      subPath: path2
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-site-data
Multiple PVCs and volume paths in a single Pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
  # List of volumes
  - name: volume1
    < volume details, see below >
  - name: volume2
    < volume details, see below >
  containers:
  - name: mycontainer
    volumeMounts:
    # will mount 'volume1' into /var/www/html
    - name: volume1
      mountPath: /var/www/html
    # will mount 'volume2' into /var/log
    - name: volume2
      mountPath: /var/log/
Reference : https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_provisioning_storage_in_kubernetes#example

Openshift 4 ConfigMap binary file is mounted as directory

I have created a ConfigMap from the OpenShift 4 CLI on Windows 10:
.\oc create configmap my-cacerts --from-file=cacerts
I can see the ConfigMap named my-cacerts and can download the binary file cacerts from it using the OpenShift 4 web interface.
Now I mount it (part of my-deployment.yaml):
containers:
  volumeMounts:
  - name: my-cacerts-volume
    mountPath: /etc/my/cacerts
volumes:
- name: my-cacerts-volume
  config-map:
    name: my-cacerts
Unfortunately, /etc/my/cacerts is mounted as an empty folder and not as a single binary file.
How can I mount cacerts as a file and not as a directory?
Update:
If I issue
.\oc get configmap my-cacerts -o yaml
I get the following output:
apiVersion: v1
binaryData:
  cacerts: ... big long base64...
kind: ConfigMap
metadata: ...
If I issue
.\oc describe pod my-pod
I get
Volumes:
  my-cacerts-volume:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Your volumes definition is incorrect: config-map is not a valid key. The API seems to silently ignore it and fall back to an EmptyDir volume here, which is why you end up with an empty directory.
When you create a ConfigMap using the oc command above, the result is a ConfigMap that looks like this (note that there is one key called "cacerts"):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-cacerts
data:
  cacerts: |
    Hello world!
In the volumes section, then use configMap: together with subPath as follows to mount only a single key ("cacerts") from your ConfigMap:
$ oc edit deployment my-deployment
[..]
spec:
  containers:
  - image: registry.fedoraproject.org/fedora-minimal:33
    name: fedora-minimal
    volumeMounts:
    - mountPath: /etc/my/cacerts
      name: my-cacerts-volume
      subPath: cacerts
[..]
  volumes:
  - configMap:
      name: my-cacerts
      defaultMode: 420
    name: my-cacerts-volume
This then results in:
$ oc rsh ...
sh-5.0$ ls -l /etc/my/cacerts
-rw-r--r--. 1 root 1000590000 13 Dec 3 19:11 /etc/my/cacerts
sh-5.0$ cat /etc/my/cacerts
Hello world!
You can also leave subPath out and set /etc/my/ as the mount destination for the same result, since there will be one file for each key:
[..]
volumeMounts:
- mountPath: /etc/my/
  name: my-cacerts-volume
[..]
volumes:
- configMap:
    name: my-cacerts
  name: my-cacerts-volume
For the right syntax, you can also check the documentation.
For OpenShift 4, defaultMode should be specified:
volumeMounts:
- mountPath: /etc/my
  name: cacerts-ref
  readOnly: true
volumes:
- name: cacerts-ref
  configMap:
    defaultMode: 420
    name: cacerts
After that, the ConfigMap contents are mapped correctly.
.\oc describe pod my-pod
Volumes:
  cacerts-ref:
    Type: ConfigMap (a volume populated by a ConfigMap)
    Name: cacerts
    Optional: false

How to mount multiple files / secrets into common directory in kubernetes?

I have multiple secrets created from different files. I'd like to store all of them in a common directory, /var/secrets/. Unfortunately, I'm unable to do that because Kubernetes throws an 'Invalid value: "/var/secret": must be unique' error during the pod validation step. Below is an example of my pod definition.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: alpine
    name: alpine-secret
    volumeMounts:
    - name: xfile
      mountPath: "/var/secrets/"
      readOnly: true
    - name: yfile
      mountPath: "/var/secrets/"
      readOnly: true
  volumes:
  - name: xfile
    secret:
      secretName: my-secret-one
  - name: yfile
    secret:
      secretName: my-secret-two
How can I store files from multiple secrets in the same directory?
Projected Volume
You can use a projected volume to have two secrets in the same directory.
Example:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: alpine-secret
  name: alpine-secret
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: alpine
    name: alpine-secret
    volumeMounts:
    - name: xyfiles
      mountPath: "/var/secrets/"
      readOnly: true
  volumes:
  - name: xyfiles
    projected:
      sources:
      - secret:
          name: my-secret-one
      - secret:
          name: my-secret-two
(EDIT: Never mind - I just noticed @Jonas gave the same answer earlier. +1 from me)
Starting with Kubernetes v1.11+ it is possible with projected volumes:
A projected volume maps several existing volume sources into the same directory.
Currently, the following types of volume sources can be projected:
secret
downwardAPI
configMap
serviceAccountToken
The example above shows exactly that: how to use a projected volume to mount several existing volume sources into the same directory.
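One detail worth adding: if the two secrets happen to contain keys with the same name, the resulting file paths would collide. Each projected secret source accepts an items list to rename the files on the way in; a small sketch (the key name config and the target paths are illustrative, not from the question):
volumes:
- name: xyfiles
  projected:
    sources:
    - secret:
        name: my-secret-one
        items:
        - key: config
          # becomes /var/secrets/one-config
          path: one-config
    - secret:
        name: my-secret-two
        items:
        - key: config
          # becomes /var/secrets/two-config
          path: two-config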
Maybe using subPath will help.
Example:
volumeMounts:
- name: app-redis-vol
  mountPath: /app/config/redis.yaml
  subPath: redis.yaml
- name: app-config-vol
  mountPath: /app/config/app.yaml
  subPath: app.yaml
volumes:
- name: app-redis-vol
  configMap:
    name: config-map-redis
    items:
    - key: yourKey
      path: redis.yaml
- name: app-config-vol
  configMap:
    name: config-map-app
    items:
    - key: yourKey
      path: app.yaml
Here, your ConfigMap named config-map-redis, created from the file redis.yaml, is mounted in /app/config/ as the file redis.yaml.
Likewise, the ConfigMap config-map-app is mounted in /app/config/ as app.yaml.
There is a nice article about this: Injecting multiple Kubernetes volumes to the same directory.
Edited:
@Jonas' answer is correct!
However, if you use plain volumes as I did in the question, the short answer is that you cannot do it: volume mount paths have to be unique, so each volume must be mounted to an unused directory rather than a common one.
Solution:
What I did in the end was, instead of keeping the files in separate secrets, create one secret with multiple files.
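For reference, a single secret holding several files can be created in one command by repeating --from-file (the file names here are illustrative):
kubectl create secret generic my-secret \
  --from-file=x.txt \
  --from-file=y.txt
Mounting that one secret at /var/secrets/ then yields one file per key, all in the same directory.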

How to allow a Kubernetes Job access to a file on the host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment), and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the > redirection actually works
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    # the referenced article pre-populates an emptyDir volume;
    # a bare hostPath without a path field is not valid
    emptyDir: {}
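A quick way to check that the init container populated the volume (assuming the manifest above is saved as pod.yaml):
kubectl apply -f pod.yaml
kubectl exec my-app -c my-app -- cat /data/config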
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519