I have a pod in which I'm running an image.
The pod is not mine but belongs to the company I work for.
Each time I mount a new image in the pod, it has access to some predefined "permanent" folders.
When I use the edit deployment command I see this:
volumeMounts:
- mountPath: /Data/logs
  name: ba-server-api-dh-pvc
  subPath: logs
- mountPath: /Data/ErrorAndAbortedBlobs
  name: ba-server-api-dh-pvc
  subPath: ErrorAndAbortedBlobs
- mountPath: /Data/SuccessfullyTransferredBlobs
  name: ba-server-api-dh-pvc
  subPath: SuccessfullyTransferredBlobs
- mountPath: /Data/BlobsToBeTransferred
  name: ba-server-api-dh-pvc
  subPath: BlobsToBeTransferred
Now I want to manually add another such mountPath so I get another folder in the pod. But when I add it to the deployment config (the one above) and try to save it, I get the following error:
"error: deployments.extensions "ba-server-api-dh-deployment" is invalid"
What can I do to add another permanent folder to the pod?
Kind regards
It looks like you haven't specified the volume. It should look something like this:
...
volumeMounts:
- mountPath: /Data/BlobsToBeTransferred
  name: ba-server-api-dh-pvc
  subPath: BlobsToBeTransferred
...
volumes:
- name: ba-server-api-dh-pvc
  persistentVolumeClaim:
    claimName: ba-server-api-dh-pvc
Note that this assumes you already have a PersistentVolumeClaim named ba-server-api-dh-pvc; otherwise you will have to create one.
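If the claim does not exist yet, a minimal sketch could look like this (the access mode and storage size are assumptions; adjust them to your cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ba-server-api-dh-pvc
spec:
  accessModes:
  - ReadWriteOnce  # assumed; pick the mode your workload needs
  resources:
    requests:
      storage: 10Gi  # assumed size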
The error:
CreateContainerConfigError: failed to prepare subPath for volumeMount "myVolumeMount" of container "myContainer"
Relevant extract from YAML:
volumeMounts:
- name: myVolumeMount
  mountPath: /var/data/crash
  subPath: files/.cores
  readOnly: false
I am occasionally seeing this failure show up when deploying. The fact that it is occasional is what makes it confusing. Is this a potential bug? I am using Kubernetes versions: client (0.22) and server (1.22).
If you want to use the path /var/data/crash/files/.cores as the subPath, you need to define the mountPath as /var/data/crash/files/.cores too. Then, define the subPath as just .cores. Thus, the final block should look like this:
volumeMounts:
- name: myVolumeMount
  mountPath: /var/data/crash/files/.cores
  subPath: .cores
  readOnly: false
This is how the documentation specifies it, too.
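For context, subPath is resolved relative to the root of the volume, so this mount expects a .cores entry at the top level of whatever volume backs myVolumeMount. A sketch of the matching volume definition, assuming a PVC-backed volume (the claim name is hypothetical):

volumes:
- name: myVolumeMount
  persistentVolumeClaim:
    claimName: my-claim  # hypothetical claim name, not from the question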
After reading a lot of manuals, I still don't understand whether it is possible to save a file to the host after running a Job or a CronJob.
My Job has a PVC and PV like this:
volumes:
- name: name1
  persistentVolumeClaim:
    claimName: name2
and:
volumeMounts:
- name: save-file
  mountPath: /mnt/dir
Let's say I'm running a script that saves its output to a file:
command: ["dosomething.py"]
args: ["--save-to-file", "output.txt"]
Is there some way to save it to the host or send it somewhere?
Mount the volume in the container and give it a mount path:
volumeMounts:
- name: "name1"
  mountPath: "/var/www/html"
The container can then save files to it.
Read more details here - https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
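Putting the pieces together, a minimal Job sketch could look like this (the image name is a placeholder; the script, volume, and claim names are taken from the question, with the mount renamed to match the volume):

apiVersion: batch/v1
kind: Job
metadata:
  name: save-file-job
spec:
  template:
    spec:
      containers:
      - name: worker
        image: my-image:latest  # placeholder; must contain dosomething.py
        command: ["dosomething.py"]
        args: ["--save-to-file", "/mnt/dir/output.txt"]
        volumeMounts:
        - name: name1  # must match the volume name below
          mountPath: /mnt/dir
      volumes:
      - name: name1
        persistentVolumeClaim:
          claimName: name2
      restartPolicy: Never

As long as the claim is backed by a PersistentVolume that outlives the Job, output.txt remains available after the Job completes.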
Could anyone clarify Persistent Volumes in Kubernetes?
In the example below, /my-test-project is in the persistent volume. Then why do I need these mounts, as technically my entire directory /my-test-project is persisted? How do mountPath and subPath help if the entire directory is persisted? Thanks!
volumeMounts:
- name: empty-dir-volume
  mountPath: /my-test-project/data-cache
  subPath: data-cache
- name: empty-dir-volume
  mountPath: /my-test-project/user-cache
  subPath: user-cache
volumes:
- name: empty-dir-volume
  emptyDir: {}
Your entire /my-test-project directory is not persisted.
The mountPath /my-test-project/data-cache is persisted in empty-dir-volume under the path data-cache.
The mountPath /my-test-project/user-cache is persisted in empty-dir-volume under the path user-cache.
This means that when you create files inside /my-test-project/data-cache, they are persisted in data-cache (the subPath) inside empty-dir-volume, and similarly for user-cache. Files you create directly inside /my-test-project/ won't be persisted: say you create /my-test-project/new-dir; new-dir will not be persisted.
For a better explanation, let's take the example below (two containers sharing the volume, but at different mountPaths):
apiVersion: v1
kind: Pod
metadata:
  name: share-empty-dir
spec:
  containers:
  - name: container-1
    image: alpine
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: empty-dir-volume
      mountPath: /my-test-project/data-cache
      subPath: data-cache-subpath
    - name: empty-dir-volume
      mountPath: /my-test-project/user-cache
      subPath: user-cache-subpath
  - name: container-2
    image: alpine
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: empty-dir-volume
      mountPath: /tmp/container-2
  volumes:
  - name: empty-dir-volume
    emptyDir: {}
In container-1:
The mountPath /my-test-project/user-cache is persisted in empty-dir-volume under the path user-cache-subpath.
The mountPath /my-test-project/data-cache is persisted in empty-dir-volume under the path data-cache-subpath.
In container-2:
The mountPath /tmp/container-2 is persisted in empty-dir-volume under the path "" (which means the volume root "/").
Observations:
touch /my-test-project/user-cache/a.txt in container-1: we can see this file in container-2 at /tmp/container-2/user-cache-subpath/a.txt, and the reverse works as well.
touch /my-test-project/data-cache/b.txt in container-1: we can see this file in container-2 at /tmp/container-2/data-cache-subpath/b.txt, and the reverse works as well.
touch /tmp/container-2/new.txt: we can never see this file in container-1, because container-1 only mounts the two subPaths, not the volume root.
Play around similarly for an even better understanding, for example with the commands below.
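For example, assuming the Pod above is running in the current namespace, the first observation can be reproduced like this:

kubectl exec share-empty-dir -c container-1 -- touch /my-test-project/user-cache/a.txt
kubectl exec share-empty-dir -c container-2 -- ls -l /tmp/container-2/user-cache-subpath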
Note: Just to be clear, you are using an emptyDir volume, which means the data is lost whenever the pod is deleted. This volume type is used only to share data between containers.
I have a volume with a secret called config-volume. I want to have that file in the /home/code/config folder, which is where the rest of the configuration files are. For that, I mount it like this:
volumeMounts:
- name: config-volumes
  mountPath: /home/code/config
The issue is that, after deploying, /home/code/config contains only the secret file and the rest of the files are gone.
So /home/code/config is an existing (non-empty) folder, and I suspect that the volumeMount overwrites the folder.
Is there a way this can be done without overwriting everything?
You can do the following, taken from this GitHub issue:
containers:
- volumeMounts:
  - name: config-volumes
    mountPath: /home/code/config
    subPath: config
volumes:
- name: config-volumes
  configMap:
    name: my-config
This assumes that your ConfigMap is called my-config and that you have a key config in it.
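For completeness, a ConfigMap with such a key could be created like this (the local file name config.yaml is a hypothetical example):

kubectl create configmap my-config --from-file=config=./config.yaml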
Kubernetes Secrets are mounted as a directory, with each key as a file in that directory. So in your case, the config-volumes secret is mounted to /home/code/config, shadowing whatever that directory was before.
You could specify your volume mount as:
volumeMounts:
- name: config-volumes
  mountPath: /home/code/config/config-volumes
which would provide a config-volumes directory inside the config directory with files for your secret's keys inside.
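In full, with the volume definition included, this variant might look like the following (the Secret name is an assumption, since it isn't shown in the question):

volumeMounts:
- name: config-volumes
  mountPath: /home/code/config/config-volumes
volumes:
- name: config-volumes
  secret:
    secretName: my-secret  # assumed Secret name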
I was considering using Secrets to mount a single file, but it seems that you can only mount a directory, which will overwrite all the other content. How can I share a single config file without mounting a directory?
For example, you have a ConfigMap which contains two config files:
kubectl create configmap config --from-file <file1> --from-file <file2>
You could use subPath like this to mount a single file into an existing directory:
---
volumeMounts:
- name: "config"
  mountPath: "/<existing folder>/<file1>"
  subPath: "<file1>"
- name: "config"
  mountPath: "/<existing folder>/<file2>"
  subPath: "<file2>"
restartPolicy: Always
volumes:
- name: "config"
  configMap:
    name: "config"
---
Full example here
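A concrete instantiation of the template above, with hypothetical file and folder names (two nginx config files mounted into /etc/nginx):

volumeMounts:
- name: "config"
  mountPath: "/etc/nginx/nginx.conf"
  subPath: "nginx.conf"
- name: "config"
  mountPath: "/etc/nginx/mime.types"
  subPath: "mime.types"
volumes:
- name: "config"
  configMap:
    name: "config"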
I'd start with this working example from here. Make sure you're using at least Kubernetes 1.3.
Simply create a ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-pd-plus-cfgmap
data:
  file-from-cfgmap: file data
And then create a pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-plus-cfgmap
spec:
  containers:
  - image: ubuntu
    name: bash
    stdin: true
    stdinOnce: true
    tty: true
    volumeMounts:
    - mountPath: /mnt
      name: pd
    - mountPath: /mnt/file-from-cfgmap
      name: cfgmap
      subPath: file-from-cfgmap
  volumes:
  - name: pd
    gcePersistentDisk:
      pdName: testdisk
  - name: cfgmap
    configMap:
      name: test-pd-plus-cfgmap
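Once the pod is running, you can verify that the single file from the ConfigMap shows up inside the volume mount without hiding the rest of /mnt:

kubectl exec -it test-pd-plus-cfgmap -- cat /mnt/file-from-cfgmap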
A useful addition to the accepted answer:
Let's say your origin file is called environment.js and you want the destination file to be called destination_environment.js; then your YAML file should look like this:
---
volumeMounts:
- name: "config"
  mountPath: "/<existing folder>/destination_environment.js"
  subPath: "environment.js"
volumes:
- name: "config"
  configMap:
    name: "config"
---
There is currently (v1.0, v1.1) no way to volume mount a single config file. The Secret structure is naturally capable of representing multiple secrets, which means it must be a directory.
When we get config objects, single files should be supported.
In the meantime you can mount a directory and symlink to it from your image, maybe?
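A rough sketch of that symlink workaround, with all paths assumed for illustration: mount the Secret at a side location such as /etc/secret-volume, and have the image's entrypoint link the one file you need into the existing config directory:

#!/bin/sh
# entrypoint.sh (hypothetical): link the single secret file into the config dir
ln -s /etc/secret-volume/config.json /home/code/config/config.json
exec "$@"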
I don't have enough reputation to vote or reply to threads, so I'll post here. The most up-voted answer does not work as stated (at least in k8s 1.21.1):
volumeMounts:
- mountPath: /opt/project/config.override.json
  name: config-override
  subPath: config.override.json
command:
- ls
- -l
- /opt/project/config.override.json
produces an empty dir /opt/project/config.override.json.
I've been digging through the docs and Google for several hours already, and I am still not able to mount this single JSON file as a JSON file.
I've also tried this:
volumeMounts:
- mountPath: /opt/project/
  name: config-override
  subPath: config.override.json
command:
- ls
- -l
- /opt/project
Quite obviously, it lists /opt/project as an empty dir, since Kubernetes tries to mount a JSON file onto it. No file named config.override.json is created in this case.
PS: the only way I can mount the file at all is this:
volumeMounts:
- mountPath: /opt/project/override
  name: config-override
command:
- ls
- -l
- /opt/project/override
It creates a directory /opt/project/override and symlinks the original filename used at ConfigMap creation to the needed content:
lrwxrwxrwx 1 root root 27 Jun 27 14:37 config.override.json -> ..data/config.override.json
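For reference, the ConfigMap in this scenario would have been created along these lines (the local file path is an assumption):

kubectl create configmap config-override --from-file=config.override.json=./config.override.json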
Let's say you want to mount a new log4j2.xml into a running deployment to enhance logging:
# Variables
k8s_namespace=xcs
deployment_name=orders-service
container_name=orders-service
container_working_dir=/opt/orders-service
# Create config map and patch deployment
kubectl -n ${k8s_namespace} create cm log4j \
--from-file=log4j2.xml=./log4j2.xml
kubectl -n ${k8s_namespace} patch deployment ${deployment_name} \
-p '{"spec":{"template":{"spec":{"volumes":[{"configMap":{"defaultMode": 420,"name": "log4j"},"name": "log4j"}]}}}}'
kubectl -n ${k8s_namespace} patch deployment ${deployment_name} \
-p '{"spec":{"template":{"spec":{"containers":[{"name": "'${container_name}'","volumeMounts": [{ "mountPath": "'${container_working_dir}'/log4j2.xml","name": "log4j","subPath": "log4j2.xml"}]}]}}}}'