I have two containers inside my Pod:
Container A, based on my own Dockerfile. Inside this Dockerfile there is a COPY ./files /my-files command. The image lives in my GitLab Docker registry.
Container B, based on an image from hub.docker.com.
I'd like to share the data stored inside /my-files in Container A with Container B. I thought I needed to create a volume inside this pod (the data doesn't need to be persisted) plus volumeMounts in the containers.
Unfortunately, when I add a volumeMount to Container A with mountPath: /my-files, this directory is emptied and the files that were added when the image was built are gone.
What should I do to keep this data and share it with Container B?
This is part of my Deployment.yaml file:
containers:
  - name: container-a
    image: "my-gitlab-registry/my-image-a-with-copied-files"
    volumeMounts:
      - name: shared-data
        mountPath: /my-files
  - name: container-b
    image: "some-public-image"
    volumeMounts:
      - name: shared-data
        mountPath: /files-from-container-a
volumes:
  - name: shared-data
    emptyDir: {}
An ugly hack: use an init container to copy the data into the emptyDir volume, then mount that volume in the second container.
initContainers:
  - name: init-config-data-copy-wait
    image: datacontainer
    command:
      - sh
      - "-ce"
      - |
        set -ex
        cp -r /src-data/* /dst-data/
        ls /dst-data/
    volumeMounts:
      - mountPath: /dst-data
        name: dst-data-volume
volumes:
  - name: dst-data-volume
    emptyDir: {}
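Adapted to the Deployment from the question, a minimal sketch could look like the following. It reuses image A itself as the init image, purely to copy its baked-in files (the /shared staging path and the copy-my-files name are illustrative, not from the question):

initContainers:
  - name: copy-my-files
    image: "my-gitlab-registry/my-image-a-with-copied-files"
    # copy the files baked into the image at /my-files onto the shared volume
    command: ["sh", "-c", "cp -r /my-files/. /shared/"]
    volumeMounts:
      - name: shared-data
        mountPath: /shared   # mounted somewhere other than /my-files, so the source files stay visible
containers:
  - name: container-a
    image: "my-gitlab-registry/my-image-a-with-copied-files"
    # no mount over /my-files needed; its original contents remain intact
  - name: container-b
    image: "some-public-image"
    volumeMounts:
      - name: shared-data
        mountPath: /files-from-container-a
volumes:
  - name: shared-data
    emptyDir: {}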
I'm deploying a basic "solr:8.9.0" image to a local Kubernetes environment.
If I mount the pod's "/var/solr" directory, it works well:
I can see the files inside /var/solr in the mounted directory.
spec:
  containers:
    - image: solr:8.6.0
      imagePullPolicy: IfNotPresent
      name: solr
      ports:
        - name: solrport
          containerPort: 8983
      volumeMounts:
        - mountPath: /var/solr/
          name: solr-volume
  volumes:
    - name: solr-volume
      persistentVolumeClaim:
        claimName: solr-pvc
But somehow I can't mount the /etc/default/ directory; that doesn't work.
I know there are files inside that directory, but they disappear.
Any idea why?
Thanks!
This is because of how volumeMounts work.
A standard volumeMount mounts the volume at the supplied directory, hiding everything that was already inside that directory.
You want to specify a subPath for the data you actually want to mount. By doing this, the original contents of the directory won't get overridden.
See the Kubernetes documentation on using subPath (https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath) for more information.
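For example, a minimal sketch of mounting a single file from the PVC into /etc/default without hiding the rest of that directory (the file name solr.in.sh is illustrative and assumes the file already exists in the volume; a subPath that does not exist is created as a directory):

volumeMounts:
  - name: solr-volume
    # only this one entry from the volume is mounted; the image's other
    # files in /etc/default stay visible
    mountPath: /etc/default/solr.in.sh
    subPath: solr.in.sh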
After reading a lot of manuals, I still don't understand whether it is possible to save a file to the host after running a Job or a CronJob.
My Job has a PVC and PV like this:
volumes:
  - name: name1
    persistentVolumeClaim:
      claimName: name2
and:
volumeMounts:
  - name: save-file
    mountPath: /mnt/dir
Let's say I'm running a script that saves the output to a file:
command: ["dosomething.py"]
args: [" --save-to-file output.txt"]
Is there some way to save it to the host or send it somewhere?
Mount the volume in the container and give it a mountPath:
volumeMounts:
  - name: "name1"
    mountPath: "/var/www/html"
The container can then save files to it.
Read more details here - https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
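Putting it together, a minimal Job sketch (the volume name, PVC, and script come from the question; the image name my-script-image and the /var/www/html output path are illustrative assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: save-file-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: my-script-image          # hypothetical image containing dosomething.py
          command: ["dosomething.py"]
          args: ["--save-to-file", "/var/www/html/output.txt"]
          volumeMounts:
            - name: name1
              mountPath: /var/www/html    # output.txt lands on the PV and outlives the pod
      volumes:
        - name: name1
          persistentVolumeClaim:
            claimName: name2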
I have a legacy app which keeps checking an empty file inside a directory and performs a certain action when the file's timestamp changes.
I am migrating this app to Kubernetes, so I want to create an empty file inside the pod. I tried subPath as below, but it doesn't create any file.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
        - name: volume-name
          mountPath: '/volume-name-path'
          subPath: emptyFile
  volumes:
    - name: volume-name
      emptyDir: {}
kubectl describe pod shows:
Containers:
  demo:
    Container ID:   containerd://0b824265e96d75c5f77918326195d6029e22d17478ac54329deb47866bf8192d
    Image:          alpine
    Image ID:       docker.io/library/alpine@sha256:08d6ca16c60fe7490c03d10dc339d9fd8ea67c6466dea8d558526b1330a85930
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      3600
    State:          Running
      Started:      Wed, 10 Feb 2021 12:23:43 -0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4gp4x (ro)
      /volume-name-path from volume-name (rw,path="emptyFile")
ls on the volume also shows nothing:
k8 exec -it demo-pod -c demo ls /volume-name-path
Any suggestions?
PS: I don't want to use a ConfigMap; I simply want to create an empty file.
If the objective is to create an empty file when the Pod starts, then the easiest way is to use either the entrypoint of the Docker image or an init container. (Note: a subPath that does not already exist in the volume is created by the kubelet as a directory, which is why your manifest produces an empty directory at /volume-name-path rather than a file.)
With an init container, you could go with something like the following (or with a more complex init image with which you build and execute a whole bash script, or something similar):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  initContainers:
    - name: create-empty-file
      image: alpine
      command: ["touch", "/path/to/the/directory/empty_file"]
      volumeMounts:
        - name: volume-name
          mountPath: /path/to/the/directory
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      volumeMounts:
        - name: volume-name
          mountPath: /path/to/the/directory
  volumes:
    - name: volume-name
      emptyDir: {}
Basically, the init container is executed first; it runs its command and, if successful, terminates so the main container can start. The two share the same volumes (and can also mount them at different paths), so in the example the init container mounts the emptyDir volume, creates an empty file, and then completes. When the main container starts, the file is already there.
Regarding your legacy application being ported to Kubernetes:
If you have control of the Dockerfile, you could simply change it to create an empty file at the path where you expect it. That way, when the app starts, the file is already there, empty, from the beginning; just as you add the application to the container, you can add other files as well.
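A minimal Dockerfile sketch of that idea (the base image and path are placeholders, not taken from the question):

FROM alpine
# pre-create the empty marker file the legacy app polls for
RUN mkdir -p /path/to/the/directory && touch /path/to/the/directory/empty_file
CMD ["sleep", "3600"]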
For more info on init containers, please check the documentation (https://kubernetes.io/docs/concepts/workloads/pods/init-containers/).
I think you may be interested in Container Lifecycle Hooks.
In this case, the PostStart hook may help create an empty file as soon as the container is started:
This hook is executed immediately after a container is created.
In the example below, I will show you how to use the PostStart hook to create an empty file named file-test.
First I created a simple manifest file:
# demo-pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: demo-pod
  name: demo-pod
spec:
  containers:
    - image: alpine
      name: demo-pod
      command: ["sleep", "3600"]
      lifecycle:
        postStart:
          exec:
            command: ["touch", "/mnt/file-test"]
After creating the Pod, we can check whether the demo-pod container contains an empty /mnt/file-test file:
$ kubectl apply -f demo-pod.yml
pod/demo-pod created
$ kubectl exec -it demo-pod -- sh
/ # ls -l /mnt/file-test
-rw-r--r-- 1 root root 0 Feb 11 09:08 /mnt/file-test
/ # cat /mnt/file-test
/ #
Could anyone clarify Persistent Volumes in Kubernetes?
In the example below, /my-test-project is on the persistent volume. Why, then, do I need these mounts, when technically my entire /my-test-project directory is already persisted? How do mountPath and subPath help if the entire directory is persisted? Thanks!
volumeMounts:
  - name: empty-dir-volume
    mountPath: /my-test-project/data-cache
    subPath: data-cache
  - name: empty-dir-volume
    mountPath: /my-test-project/user-cache
    subPath: user-cache
volumes:
  - name: empty-dir-volume
    emptyDir: {}
Your entire /my-test-project directory is not persisted.
The mountPath /my-test-project/data-cache is backed by the empty-dir-volume at path data-cache.
The mountPath /my-test-project/user-cache is backed by the empty-dir-volume at path user-cache.
This means that when you create files inside /my-test-project/data-cache, they are persisted in data-cache (the subPath) inside the empty-dir-volume, and similarly for user-cache. Files created directly inside /my-test-project/ won't be persisted: if you create /my-test-project/new-dir, new-dir will not be persisted.
For a better explanation, let's take the example below (two containers sharing the volume, but at different mountPaths):
apiVersion: v1
kind: Pod
metadata:
  name: share-empty-dir
spec:
  containers:
    - name: container-1
      image: alpine
      command:
        - "/bin/sh"
        - "-c"
        - "sleep 10000"
      volumeMounts:
        - name: empty-dir-volume
          mountPath: /my-test-project/data-cache
          subPath: data-cache-subpath
        - name: empty-dir-volume
          mountPath: /my-test-project/user-cache
          subPath: user-cache-subpath
    - name: container-2
      image: alpine
      command:
        - "/bin/sh"
        - "-c"
        - "sleep 10000"
      volumeMounts:
        - name: empty-dir-volume
          mountPath: /tmp/container-2
  volumes:
    - name: empty-dir-volume
      emptyDir: {}
In container-1:
mountPath /my-test-project/user-cache is backed by the empty-dir-volume at path user-cache-subpath
mountPath /my-test-project/data-cache is backed by the empty-dir-volume at path data-cache-subpath
In container-2:
mountPath /tmp/container-2 is backed by the empty-dir-volume at path "" (which means the volume root, "/")
Observations:
touch /my-test-project/user-cache/a.txt in container-1: we can see this file in container-2 at /tmp/container-2/user-cache-subpath/a.txt, and the reverse works as well.
touch /my-test-project/data-cache/b.txt in container-1: we can see this file in container-2 at /tmp/container-2/data-cache-subpath/b.txt, and the reverse works as well.
touch /tmp/container-2/new.txt: we can never see this file in container-1, because container-1 mounts only the two subPaths, not the volume root.
Play around with it similarly for an even better understanding.
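For instance, a quick way to check the first observation from outside the pod (a sketch assuming the share-empty-dir pod above is running):

# create a file through container-1's subPath mount...
kubectl exec share-empty-dir -c container-1 -- touch /my-test-project/user-cache/a.txt
# ...and observe it through container-2's mount of the volume root
kubectl exec share-empty-dir -c container-2 -- ls /tmp/container-2/user-cache-subpath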
Note: just to be clear, you are using an emptyDir-type volume, which means that whenever the pod is deleted, the data is lost. This volume type is used only to share data between containers.
My pod has two containers - a primary container, and a sidecar container that monitors the /var/run/utmp file in the primary container and takes action when it changes. I'm trying to figure out how to make this file visible in the sidecar container.
This page describes how to use an emptyDir volume to share directories between containers in a pod. However, this only seems to work for directories, not single files. I also can't use this strategy to share the entire /var/run/ directory in the primary container, since mounting a volume there erases the contents of the directory, which the container needs to run.
I tried to work around this by creating a symlink to utmp in another directory and mounting that directory, but it doesn't look like symlinks in volumes are resolved in the way they would need to be for this to work.
Is there any way I can make one file in a container visible to other containers in the same pod? The manifest I'm experimenting with looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: debian
      command: ["/bin/bash"]
      args: ["-c", "sleep infinity"]
      volumeMounts:
        - name: main-run
          mountPath: /var/run # or /var/run/utmp, which crashes
    - name: helper
      image: debian
      command: ["/bin/bash"]
      args: ["-c", "sleep infinity"]
      volumeMounts:
        - name: main-run
          mountPath: /tmp/main-run
  volumes:
    - name: main-run
      emptyDir: {}
If you can move the file to be shared into an empty subdirectory, this could be a simple solution.
For example, move your file to /var/run/utmp/utmp and share the /var/run/utmp directory with an emptyDir.
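Applied to the demo manifest above, a minimal sketch (it assumes the app in the main container can be pointed at the new /var/run/utmp/utmp location, which is the hard part; the volume name utmp-dir is illustrative):

containers:
  - name: main
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
      - name: utmp-dir
        # only the new subdirectory is mounted, so the rest of /var/run
        # keeps the files the container needs at startup
        mountPath: /var/run/utmp
  - name: helper
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
      - name: utmp-dir
        mountPath: /tmp/main-run
volumes:
  - name: utmp-dir
    emptyDir: {}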