I am trying to add files referenced in volumeMounts to .dockerignore, and I'm trying to understand the difference between subPath and mountPath. Reading the official documentation hasn't made it clear to me.
From what I've read, mountPath is the directory in the pod where volumes will be mounted.
From the official documentation: "subPath: The volumeMounts.subPath property specifies a sub-path inside the referenced volume instead of its root." https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath (this part isn't clear to me)
- mountPath: /root/test.pem
  name: test-private-key
  subPath: test.testing.com.key
In this example, should I add both test.pem and test.testing.com.key to .dockerignore?
mountPath is where the referenced volume should be mounted in the container. For instance, if you mount a volume with mountPath: /a/b/c, the volume will be available to the container under the directory /a/b/c.
Mounting a volume makes all of its content available under the mountPath. If you need to mount only part of the volume, such as a single file, you use subPath to specify the part that must be mounted. For instance, mountPath: /a/b/c with subPath: d mounts only d (whatever it is inside the volume) at the container path /a/b/c.
The difference between mountPath and subPath is that subPath is an addition to mountPath, and it exists to solve a problem.
Look at my comments inside the example Pod manifest below, where I explain the problem and how subPath solves it.
To further understand the difference, look at the "under the hood" section to see how Kubernetes treats these two properties.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  volumes:
    - name: vol1
      emptyDir: {}
  containers:
    - image: nginx
      name: mypod-container
      volumeMounts:
        # In our example, let's say we want to mount "vol1" on path "/a/b/c",
        # so we declare this:
        - name: vol1
          mountPath: /a/b/c
        # But what if we also want to use a child folder "d"?
        # If we try to use "/a/b/c/d" then we won't have access to /a/b/c,
        # because the mountPath /a/b/c is overwritten by mountPath /a/b/c/d.
        # So if we try the following mountPath we lose our above declaration:
        # - name: vol1
        #   mountPath: /a/b/c/d  # This overwrites the above mount to /a/b/c
        # The solution:
        # Using subPath we enjoy both worlds.
        # (1) We don't overwrite the info in our volume at path /a/b/c.
        # (2) We get a separate path /a/b/c/d that we can write to
        #     without affecting the content at path /a/b/c.
        # Here is how we write the correct declaration:
        - name: vol1
          mountPath: /a/b/c/d
          subPath: d
Under the hood of mountPath & subPath
Let's look under the hood of Kubernetes to see how it manages the mountPath and subPath properties differently:
1. How Kubernetes manages mountPath:
When a mountPath is declared, Kubernetes creates a directory named after the volume at the following path on the node:
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~empty-dir/<volume name>
So in our above manifest example this is what was created ("vol1" is the volume name):
/var/lib/kubelet/pods/301d...a71c/volumes/kubernetes.io~empty-dir/vol1
Now you can see that if we had declared the "/a/b/c/d" mountPath, we would have triggered the creation of another "vol1" entry in the same location, thus overwriting the original.
2. How Kubernetes manages subPath:
When a subPath is declared, Kubernetes creates an entry with the SAME volume name, but under a different path:
/var/lib/kubelet/pods/<pod-id>/volume-subpaths/<volume name>
So in our above manifest example this is what was created ("vol1" is the volume name):
/var/lib/kubelet/pods/3eaa...6d1/volume-subpaths/vol1
Conclusion:
Now you can see that subPath enables us to define an additional mount path without colliding with the root mountPath.
It does this by creating entries with the same volume name, but in different kubelet directories on the node.
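If you want to verify this yourself, here is a minimal sketch, assuming you have shell access to the node running the pod and that the pod is named mypod as in the manifest above:

# Look up the pod's UID, then inspect the kubelet-managed volume directories on the node.
POD_UID=$(kubectl get pod mypod -o jsonpath='{.metadata.uid}')
ls /var/lib/kubelet/pods/$POD_UID/volumes/kubernetes.io~empty-dir/
ls /var/lib/kubelet/pods/$POD_UID/volume-subpaths/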
subPath is used to select a specific path within the volume from which the container's mount should be served. It defaults to "" (the volume's root).
This is described in the API reference. What it means is that you can still mount a volume at the path given in your mountPath, but instead of mounting it from the volume's root, you can specify a separate sub-path within the volume to be mounted under the volumeMount directory in your container.
To clarify what this means, I created a simple volume on my minikube node.
docker@minikube:/mnt/data$ ls -lrth
total 8.0K
drwxr-xr-x 2 root root 4.0K Dec 30 16:23 path1
drwxr-xr-x 2 root root 4.0K Dec 30 16:23 path2
docker@minikube:/mnt/data$
docker@minikube:/mnt/data$ pwd
/mnt/data
As you can see, in this case I have a directory with two subdirectories created inside this volume. Under each of the path1 and path2 folders I have placed a different index file.
docker@minikube:/mnt/data$ pwd
/mnt/data
docker@minikube:/mnt/data$ cat path1/index.html
This is index file from path1
docker@minikube:/mnt/data$
docker@minikube:/mnt/data$ cat path2/index.html
This is index file from path2
docker@minikube:/mnt/data$
Now I created a sample PV using this volume on my minikube node, with the manifest below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
After this, I created the sample PVC using the manifest below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 800Mi
Now if I create an nginx pod and use this PVC for its volume, then depending on the subPath I set in the pod spec, the volume will be mounted from that specific subfolder.
That is, if I use the manifest below for my nginx pod:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        # a mount for site-data
        - name: config
          mountPath: /usr/share/nginx/html
          subPath: path1
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: task-pv-claim
and curl the pod IP, I get the index.html served from path1.
Gauravs-MBP:K8s alieninvader$ kubectl exec -it mycurlpod -- /bin/sh
/ $ curl 172.17.0.3
This is index file from path1
And if I change my pod manifest to use path2 as the subPath, so that the new manifest becomes:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: nginx
      volumeMounts:
        # a mount for site-data
        - name: config
          mountPath: /usr/share/nginx/html
          subPath: path2
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: task-pv-claim
Then, as expected, curling the nginx pod produces the output below, serving the file from path2.
Gauravs-MBP:K8s alieninvader$ kubectl exec -it mycurlpod -- /bin/sh
/ $ curl 172.17.0.3
This is index file from path2
/ $
Related
I run Jenkins in Kubernetes, and I have already mounted the /var/jenkins_home folder with a PVC. Now I want to mount /var/jenkins_home/config.xml from a ConfigMap, while the rest of the folder stays mounted from the PVC.
Below is my YAML:
volumeMounts:
  - mountPath: /var/jenkins_home
    name: jenkins-data
    subPath: jenkins
  - mountPath: /var/jenkins_home/config.xml
    name: configxml
    subPath: config.xml
volumes:
  - name: jenkins-data
    persistentVolumeClaim:
      claimName: shdr-jenkins-k-test
  - name: configxml
    configMap:
      name: jenkins-k-config
      items:
        - key: jenkins-configfile
          path: config.xml
When I open Jenkins, it says:
Also: java.nio.file.FileSystemException: /var/jenkins_home/atomic14997153162086721303tmp -> /var/jenkins_home/config.xml: Device or resource busy
at java.base/sun.nio.fs.UnixException.translateToIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixCopyFile.move(Unknown Source)
at java.base/sun.nio.fs.UnixFileSystemProvider.move(Unknown Source)
at java.base/java.nio.file.Files.move(Unknown Source)
at hudson.util.AtomicFileWriter.commit(AtomicFileWriter.java:194)
java.nio.file.FileSystemException: /var/jenkins_home/config.xml: Device or resource busy
Short Answer
You need to set the mountPath of the jenkins-data volume to /var/jenkins_home/jenkins. This will correctly configure the subPath functionality.
Detailed Explanation
If I understand correctly, what you are trying to achieve is the following:
You have a ConfigMap named jenkins-k-config. It contains a single key, config.xml, whose value is the contents of your Jenkins configuration.
You want to mount this ConfigMap in your Jenkins pod, so the pod can use the /var/jenkins_home/config.xml file.
Here's how you can do this:
You will update your pod specification to add the ConfigMap as a volume.
You will then add a volumeMount to mount the contents of that ConfigMap at the specified mount point in your pod's container.
Since your ConfigMap only contains a single key named config.xml, you don't need to specify items at all. Simply mounting the ConfigMap will work. See the manifest below:
volumeMounts:
  - mountPath: /var/jenkins_home/
    name: configxml
    subPath: config.xml
    readOnly: true  # <== Recommended, so the config remains immutable.
volumes:
  - name: configxml
    configMap:
      name: jenkins-k-config
I also noticed that once these mistakes are fixed, we end up with two volumes (jenkins-data, configxml) attempting to mount to the same mount point inside the pod. This is why you're seeing the "Device or resource busy" error: the mount point is already busy, being mounted with the jenkins-data volume.
You can change the mount point to /var/jenkins_home/jenkins. This will also put your subPath setting into effect, and you will be able to mount both volumes:
jenkins-data ====> /var/jenkins_home/jenkins and
config.xml ====> /var/jenkins_home/config.xml.
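Putting it all together, the corrected volume section would look roughly like this (a sketch reusing the claim and ConfigMap names from your manifest):

volumeMounts:
  - mountPath: /var/jenkins_home/jenkins
    name: jenkins-data
    subPath: jenkins
  - mountPath: /var/jenkins_home/config.xml
    name: configxml
    subPath: config.xml
    readOnly: true
volumes:
  - name: jenkins-data
    persistentVolumeClaim:
      claimName: shdr-jenkins-k-test
  - name: configxml
    configMap:
      name: jenkins-k-config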
I need to share a directory between two containers, myapp and monitoring. To achieve this I created an emptyDir: {} volume and added a volumeMount to both containers.
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: myapp
      volumeMounts:
        - name: shared-data
          mountPath: /etc/myapp/
    - name: monitoring
      volumeMounts:
        - name: shared-data
          mountPath: /var/read
This works fine: the data I write to the shared-data directory is visible in both containers. However, the config file that is created under /etc/myapp/myapp.config when the container starts is hidden, because the shared-data volume is mounted over the /etc/myapp path (it overlaps it).
How can I force the container to first mount the volume at /etc/myapp, and then have the Docker image place the myapp.config file under its default path /etc/myapp (which is now the mounted volume), so that the config file is accessible to the monitoring container under /var/read?
In summary: let the monitoring container read the /etc/myapp/myapp.config file sitting in the myapp container.
Can anyone advise, please?
Can you mount shared-data at /var/read in an init container and copy the config file from /etc/myapp/myapp.config to /var/read?
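A rough sketch of that idea, assuming the default config is baked into the myapp image (the image name here is a placeholder):

initContainers:
  - name: seed-config
    image: myapp-image  # placeholder: the same image used by the myapp container
    command: ["sh", "-c", "cp /etc/myapp/myapp.config /var/read/"]
    volumeMounts:
      - name: shared-data
        mountPath: /var/read  # mount the volume elsewhere so /etc/myapp is not hidden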
Consider using ConfigMaps with SubPaths.
A ConfigMap is an API object used to store non-confidential data in
key-value pairs. Pods can consume ConfigMaps as environment variables,
command-line arguments, or as configuration files in a volume.
Sometimes, it is useful to share one volume for multiple uses in a
single pod. The volumeMounts.subPath property specifies a sub-path
inside the referenced volume instead of its root.
ConfigMaps can be used as volumes. The volumeMounts inside the template.spec are the same as for any other volume. However, the volumes section is different: instead of specifying a persistentVolumeClaim or another volume type, you reference the configMap by name. Then you can add the subPath property, which would look something like this:
volumeMounts:
  - name: shared-data
    mountPath: /etc/myapp/myapp.config
    subPath: myapp.config
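And the corresponding volumes section would reference the ConfigMap by name, along these lines (a sketch; the ConfigMap is assumed here to also be named shared-data and to carry a myapp.config key):

volumes:
  - name: shared-data
    configMap:
      name: shared-data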
Here are the resources that would show you how to set it up:
Configure a Pod to Use a ConfigMap: official docs
Using ConfigMap SubPaths to Mount Files: step by step guide
Mount a file in your Pod using a ConfigMap: supplement
The big picture is: I'm trying to install WordPress with plugins in Kubernetes, for development in Minikube.
I want to use the official wp-cli Docker image to install the plugins, and I am trying to use a write-enabled persistent volume. In Minikube, I turn on the mount into the minikube cluster with the command:
minikube mount ./src/plugins:/data/plugins
Now, the PV definition looks like this:
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: wordpress-install-plugins-pv
labels:
app: wordpress
env: dev
spec:
capacity:
storage: 5Gi
storageClassName: ""
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
hostPath:
path: /data/plugins
The PVC looks like this:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-install-plugins-pvc
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""
  volumeName: wordpress-install-plugins-pv
Both the creation and the binding are successful. The Job definition for the plugin installation looks like this:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: install-plugins
  labels:
    env: dev
    app: wordpress
spec:
  template:
    spec:
      securityContext:
        fsGroup: 82 # www-data
      volumes:
        - name: plugins-volume
          persistentVolumeClaim:
            claimName: wordpress-install-plugins-pvc
        - name: config-volume
          configMap:
            name: wordpress-plugins
      containers:
        - name: wpcli
          image: wordpress:cli
          volumeMounts:
            - mountPath: "/configmap"
              name: config-volume
            - mountPath: "/var/www/html/wp-content/plugins"
              name: plugins-volume
          command: ["sh", "-c", "id; \
            touch /var/www/html/wp-content/plugins/test; \
            ls -al /var/www/html/wp-content; \
            wp core download --skip-content --force && \
            wp config create --dbhost=mysql \
              --dbname=$MYSQL_DATABASE \
              --dbuser=$MYSQL_USER \
              --dbpass=$MYSQL_PASSWORD && \
            cat /configmap/wp-plugins.txt | xargs -I % wp plugin install % --activate"]
          env:
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: username
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: password
            - name: MYSQL_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mysql-secrets
                  key: dbname
      restartPolicy: Never
  backoffLimit: 3
Again, the creation looks fine and all the steps look fine. The problem is that the permissions on the mounted volume apparently do not allow the current user to write to the folder. Here's the log output:
uid=82(www-data) gid=82(www-data) groups=82(www-data)
touch: /var/www/html/wp-content/plugins/test: Permission denied
total 9
drwxr-xr-x 3 root root 4096 Mar 1 20:15 .
drwxrwxrwx 3 www-data www-data 4096 Mar 1 20:15 ..
drwxr-xr-x 1 1000 1000 64 Mar 1 17:15 plugins
Downloading WordPress 5.3.2 (en_US)...
md5 hash verified: 380d41ad22c97bd4fc08b19a4eb97403
Success: WordPress downloaded.
Success: Generated 'wp-config.php' file.
Installing WooCommerce (3.9.2)
Downloading installation package from https://downloads.wordpress.org/plugin/woocommerce.3.9.2.zip...
Unpacking the package...
Warning: Could not create directory.
Warning: The 'woocommerce' plugin could not be found.
Error: No plugins installed.
Am I doing something wrong? I tried different minikube mount options, but nothing really helped. Has anyone run into this issue with minikube?
This is a long-standing issue that prevents a non-root user from writing to a container when mounting a hostPath PersistentVolume in Minikube.
There are two common workarounds:
Simply use the root user.
Configure a Security Context for the Pod or Container using runAsUser, runAsGroup and fsGroup. You can find detailed info with an example in the link provided, and a sketch below.
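For the second workaround, a minimal sketch of such a security context (the UID/GID 82 matches the www-data user from the question; adjust for your image):

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 82    # run container processes as this UID
    runAsGroup: 82   # primary GID for container processes
    fsGroup: 82      # group ownership applied to mounted volumes that support it
  containers:
    - name: main
      image: wordpress:cli
      command: ["sh", "-c", "id; touch /data/test; ls -l /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}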
Please let me know if that helped.
I looked deeper into the way the volume mount works in minikube, and I think I came up with a solution.
TL;DR
minikube mount ./src/plugins:/data/mnt/plugins --uid 82 --gid 82
Explanation
There are two mounting moments:
minikube mounting the directory with minikube mount
the volume being mounted in Kubernetes
minikube mount sets up the directory in the VM with the UID and GID provided as parameters, the default being the docker user and group.
When the volume is mounted in the Pod as a directory, it gets mounted with the exact same UID and GID as on the host! You can see this in my question:
drwxr-xr-x 1 1000 1000 64 Mar 1 17:15 plugins
UID=1000 and GID=1000 refer to the docker UID and GID on the minikube host. That gave me the idea to try mounting with the UID and GID of the user in the Pod.
82 is the ID of both the user and the group www-data in the wordpress:cli Docker image, and it works!
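One quick way to double-check the UID/GID for an image, assuming you have Docker available locally:

docker run --rm --entrypoint sh wordpress:cli -c 'id www-data'
# uid=82(www-data) gid=82(www-data) groups=82(www-data)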
One last thing worth mentioning: the volume is mounted as a subdirectory in the Pod (wp-content in my case). It turned out that wp-cli actually needs access to that parent directory as well, to create a temporary folder. What I ended up doing is adding an emptyDir volume, like this:
volumes:
  - name: content
    emptyDir: {}
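and mounting it at the parent directory, presumably something like this (the mountPath here is my assumption, based on the wp-content path used above):

volumeMounts:
  - name: content
    mountPath: /var/www/html/wp-content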
I hope it helps somebody! For what it's worth, my version of minikube is 1.7.3, running on OS X with the VirtualBox driver.
Unfortunately, for Minikube today, option 2 (configuring a Security Context for a Pod or Container using runAsUser, runAsGroup and fsGroup) doesn't seem to be viable, because the hostPath provisioner used under the hood doesn't honor the Security Context. There seems to be a newer hostPath provisioner that preemptively sets new mounts to 777, but the one that came with my Minikube 1.25 still returns these paths as 755.
I am trying to pass user credentials via Kubernetes secret to a mounted, password protected directory inside a Kubernetes Pod.
The NFS folder /mount/protected has user access restrictions, i.e. only certain users can access this folder.
This is my Pod configuration:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
      secret:
        secretName: my-secret
  containers:
    - name: my-container
      image: <...>
      command: ["/bin/sh"]
      args: ["-c", "python /my-volume/test.py"]
      volumeMounts:
        - name: my-volume
          mountPath: /my-volume
When applying it, I get the following error:
The Pod "my-pod" is invalid:
* spec.volumes[0].secret: Forbidden: may not specify more than 1 volume type
* spec.containers[0].volumeMounts[0].name: Not found: "my-volume"
I created my-secret according to the following guide:
https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/#create-a-secret
So basically:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  username: bXktYXBw
  password: PHJlZGFjdGVkPg==
But when I mount the folder /mount/protected with:
spec:
  volumes:
    - name: my-volume
      hostPath:
        path: /mount/protected
        type: Directory
I get a permission denied error (python: can't open file '/my-volume/test.py': [Errno 13] Permission denied) when running a Pod that mounts this volume path.
My question is: how can I tell my Pod to use specific user credentials to gain access to this mounted folder?
You're trying to tell Kubernetes that my-volume should get its content from both a host path and a Secret, and it can only have one of those.
You don't need to manually specify a host path. Kubernetes will figure out someplace appropriate to put the Secret content and it will still be visible on the mountPath you specify within the container. (Specifying hostPath: at all is usually wrong, unless you can guarantee that the path will exist with the content you expect on every node in the cluster.)
So change:
volumes:
  - name: my-volume
    secret:
      secretName: my-secret
    # but no hostPath
I eventually figured out how to pass user credentials to a mounted directory within a Pod by using the CIFS FlexVolume plugin for Kubernetes (https://github.com/fstab/cifs).
With this plugin, every user can pass their credentials to the Pod.
The user only needs to create a Kubernetes secret (cifs-secret) storing the username and password, and use this secret for the mount within the Pod.
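For reference, such a secret would look roughly like this, following the example in the plugin's README (the base64 values below are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
type: fstab/cifs
data:
  username: ZXhhbXBsZQ==               # base64 of "example" (placeholder)
  password: bXktc2VjcmV0LXBhc3N3b3Jk   # base64 of "my-secret-password" (placeholder)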
The volume is then mounted as follows:
(...)
volumes:
  - name: test
    flexVolume:
      driver: "fstab/cifs"
      fsType: "cifs"
      secretRef:
        name: "cifs-secret"
      options:
        networkPath: "//server/share"
        mountOptions: "dir_mode=0755,file_mode=0644,noperm"
I was considering using secrets to mount a single file, but it seems that you can only mount a directory, which overwrites all the other content. How can I share a single config file without mounting a directory?
For example, you have a configmap which contains 2 config files:
kubectl create configmap config --from-file <file1> --from-file <file2>
You could use subPath like this to mount a single file into an existing directory:
---
volumeMounts:
  - name: "config"
    mountPath: "/<existing folder>/<file1>"
    subPath: "<file1>"
  - name: "config"
    mountPath: "/<existing folder>/<file2>"
    subPath: "<file2>"
restartPolicy: Always
volumes:
  - name: "config"
    configMap:
      name: "config"
---
Full example here
I'd start with this working example from here. Make sure you're using at least Kubernetes 1.3.
Simply create a ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pd-plus-cfgmap
data:
  file-from-cfgmap: file data
And then create a pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-plus-cfgmap
spec:
  containers:
    - image: ubuntu
      name: bash
      stdin: true
      stdinOnce: true
      tty: true
      volumeMounts:
        - mountPath: /mnt
          name: pd
        - mountPath: /mnt/file-from-cfgmap
          name: cfgmap
          subPath: file-from-cfgmap
  volumes:
    - name: pd
      gcePersistentDisk:
        pdName: testdisk
    - name: cfgmap
      configMap:
        name: test-pd-plus-cfgmap
A useful addition to the accepted answer:
Let's say your original file is called environment.js and you want the destination file to be called destination_environment.js; then your YAML should look like this:
---
volumeMounts:
  - name: "config"
    mountPath: "/<existing folder>/destination_environment.js"
    subPath: "environment.js"
volumes:
  - name: "config"
    configMap:
      name: "config"
---
There is currently (v1.0, v1.1) no way to volume-mount a single config file. The Secret structure is naturally capable of representing multiple secrets, which means it must be a directory.
When we get config objects, single files should be supported.
In the meantime you can mount a directory and symlink to it from your image, maybe?
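For illustration, the symlink workaround could look like this in the image build, assuming the directory is mounted at /config and the app expects /etc/app.conf (both paths hypothetical):

# In the Dockerfile:
RUN ln -s /config/app.conf /etc/app.conf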
I don't have enough reputation to vote or reply to threads, so I'll post here. The most up-voted answer does not work as stated (at least in k8s 1.21.1):
volumeMounts:
  - mountPath: /opt/project/config.override.json
    name: config-override
    subPath: config.override.json
command:
  - ls
  - -l
  - /opt/project/config.override.json
This produces an empty directory /opt/project/config.override.json.
I've been digging through docs and Google for several hours already, and I am still not able to mount this single JSON file as a file.
I've also tried this:
volumeMounts:
  - mountPath: /opt/project/
    name: config-override
    subPath: config.override.json
command:
  - ls
  - -l
  - /opt/project
Quite obviously it lists /opt/project as an empty dir, since Kubernetes tries to mount a JSON file onto it. A file named config.override.json is not created in this case.
PS: the only way I could mount to a file at all is this:
volumeMounts:
  - mountPath: /opt/project/override
    name: config-override
command:
  - ls
  - -l
  - /opt/project/override
This creates a directory /opt/project/override and symlinks the original filename used at ConfigMap creation to the needed content:
lrwxrwxrwx 1 root root 27 Jun 27 14:37 config.override.json -> ..data/config.override.json
Let's say you want to mount a new log4j2.xml into a running deployment to enhance logging.
# Variables
k8s_namespace=xcs
deployment_name=orders-service
container_name=orders-service
container_working_dir=/opt/orders-service
# Create config map and patch deployment
kubectl -n ${k8s_namespace} create cm log4j \
--from-file=log4j2.xml=./log4j2.xml
kubectl -n ${k8s_namespace} patch deployment ${deployment_name} \
-p '{"spec":{"template":{"spec":{"volumes":[{"configMap":{"defaultMode": 420,"name": "log4j"},"name": "log4j"}]}}}}'
kubectl -n ${k8s_namespace} patch deployment ${deployment_name} \
-p '{"spec":{"template":{"spec":{"containers":[{"name": "'${container_name}'","volumeMounts": [{ "mountPath": "'${container_working_dir}'/log4j2.xml","name": "log4j","subPath": "log4j2.xml"}]}]}}}}'