Kubernetes Persistent Volume

Could anyone clarify Persistent Volumes in Kubernetes?
In the example below, /my-test-project is on the persistent volume. Why do I need these mounts when technically my entire /my-test-project directory is already persisted? How do mountPath and subPath help if the entire directory is persisted? Thanks!
volumeMounts:
- name: empty-dir-volume
  mountPath: /my-test-project/data-cache
  subPath: data-cache
- name: empty-dir-volume
  mountPath: /my-test-project/user-cache
  subPath: user-cache
volumes:
- name: empty-dir-volume
  emptyDir: {}

Your entire /my-test-project directory is not persisted.
The mountPath /my-test-project/data-cache is backed by empty-dir-volume at path data-cache.
The mountPath /my-test-project/user-cache is backed by empty-dir-volume at path user-cache.
This means that when you create files inside /my-test-project/data-cache, they are stored under data-cache (the subPath) inside empty-dir-volume, and similarly for user-cache. Files created directly inside /my-test-project/ are not persisted: if you create /my-test-project/new-dir, new-dir will not be persisted.
For a better explanation, take the example below (two containers sharing the volume, but at different mountPaths):
apiVersion: v1
kind: Pod
metadata:
  name: share-empty-dir
spec:
  containers:
  - name: container-1
    image: alpine
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: empty-dir-volume
      mountPath: /my-test-project/data-cache
      subPath: data-cache-subpath
    - name: empty-dir-volume
      mountPath: /my-test-project/user-cache
      subPath: user-cache-subpath
  - name: container-2
    image: alpine
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: empty-dir-volume
      mountPath: /tmp/container-2
  volumes:
  - name: empty-dir-volume
    emptyDir: {}
In container-1:
mountPath /my-test-project/user-cache is backed by empty-dir-volume at path user-cache-subpath
mountPath /my-test-project/data-cache is backed by empty-dir-volume at path data-cache-subpath
In container-2:
mountPath /tmp/container-2 is backed by empty-dir-volume at path "" (i.e. the volume root)
Observations:
touch /my-test-project/user-cache/a.txt in container-1: the file shows up in container-2 at /tmp/container-2/user-cache-subpath/a.txt, and the reverse works as well.
touch /my-test-project/data-cache/b.txt in container-1: the file shows up in container-2 at /tmp/container-2/data-cache-subpath/b.txt, and the reverse works as well.
touch /tmp/container-2/new.txt: this file is never visible in container-1, because container-1 only mounts the two subPaths, not the volume root.
Play around with this for an even better understanding.
Note: Just to be clear, you are using an emptyDir volume, which means the data is lost whenever the pod gets deleted. This volume type is typically used only to share data between containers of the same pod.
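If you need the cached data to survive pod deletion, the same layout works with a PersistentVolumeClaim instead of emptyDir. A minimal sketch, assuming a PVC named my-test-claim already exists (the claim name is hypothetical):
volumeMounts:
- name: pvc-volume
  mountPath: /my-test-project/data-cache
  subPath: data-cache
- name: pvc-volume
  mountPath: /my-test-project/user-cache
  subPath: user-cache
volumes:
- name: pvc-volume
  persistentVolumeClaim:
    claimName: my-test-claim  # hypothetical; the PVC must exist beforehand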


Kubernetes: creating permanent folders in pod

I have a pod in which I'm running an image.
The pod is not mine but belongs to the company I work for.
Each time I mount a new image in the pod it has access to some predefined "Permanent" folders.
When I use the edit deployment command I see this:
volumeMounts:
- mountPath: /Data/logs
  name: ba-server-api-dh-pvc
  subPath: logs
- mountPath: /Data/ErrorAndAbortedBlobs
  name: ba-server-api-dh-pvc
  subPath: ErrorAndAbortedBlobs
- mountPath: /Data/SuccessfullyTransferredBlobs
  name: ba-server-api-dh-pvc
  subPath: SuccessfullyTransferredBlobs
- mountPath: /Data/BlobsToBeTransferred
  name: ba-server-api-dh-pvc
  subPath: BlobsToBeTransferred
Now I want to manually add another such mountPath so I get another folder in the pod. But when I add it to the deployment config (the one above) and try to save it, I get the following error:
"error: deployments.extensions "ba-server-api-dh-deployment" is invalid"
What can I do to add another permanent folder to the pod?
kind regards
It looks like you haven't specified the volume. It should look something like this:
...
volumeMounts:
- mountPath: /Data/BlobsToBeTransferred
  name: ba-server-api-dh-pvc
  subPath: BlobsToBeTransferred
...
volumes:
- name: ba-server-api-dh-pvc
  persistentVolumeClaim:
    claimName: ba-server-api-dh-pvc
Note that this assumes you already have a PersistentVolumeClaim named ba-server-api-dh-pvc; otherwise you will have to create one.
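If the claim does not exist yet, a minimal PersistentVolumeClaim could look like the sketch below; the access mode and storage size are assumptions to adjust for your cluster:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ba-server-api-dh-pvc
spec:
  accessModes:
  - ReadWriteOnce      # assumption; match how your pods consume the volume
  resources:
    requests:
      storage: 10Gi    # assumption; size as needed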

Saving the file after performing a job in k8s

After reading a lot of manuals, I still don't understand whether it is possible to save a file to the host after running a job or a cronjob.
My Job has a PVC and PV like this:
volumes:
- name: name1
  persistentVolumeClaim:
    claimName: name2
and:
volumeMounts:
- name: save-file
  mountPath: /mnt/dir
Let's say I'm running a script that saves the output to a file:
command: ["dosomething.py"]
args: [" --save-to-file output.txt"]
Is there some way to save it to the host or send it somewhere?
Mount the volume in the container and give it a mount path; note that the name under volumeMounts must match the volume's name (name1, not save-file):
volumeMounts:
- name: "name1"
  mountPath: "/var/www/html"
The container can then save files to that path.
Read more details here - https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
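Putting the pieces together, here is a minimal sketch of a hostPath-backed PersistentVolume with a matching claim, following the pattern from the linked docs; the PV name, host path, storage class, and size are assumptions, and claimName name2 matches the question:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: output-pv          # hypothetical name
spec:
  storageClassName: manual # assumption; pins the claim to this PV
  capacity:
    storage: 1Gi           # assumption
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data        # directory on the node; assumption
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: name2              # the claimName referenced by the Job
spec:
  storageClassName: manual # assumption; must match the PV above
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # assumption
With this in place, anything the job writes under its mountPath ends up in /mnt/data on the node the pod ran on.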

Share data from one Kubernetes container to another inside same pod

I have two containers inside my Pod:
Container A, based on my own Dockerfile. Inside this Dockerfile there is a COPY ./files /my-files command. This image is in my GitLab Docker registry.
Container B, based on an image from hub.docker.com.
I'd like to share the data stored inside /my-files in Container A with Container B. I thought I needed to create a volume inside this pod (the data doesn't need to be persisted) plus volumeMounts in the containers.
Unfortunately, when I add a volumeMount to Container A with mountPath: /my-files, this directory is emptied and the files that were added when the image was built are gone.
What should I do to keep this data and share it with Container B?
This is part of my Deployment.yaml file:
containers:
- name: Container-A
  image: "my-gitlab-registry/my-image-a-with-copied-files"
  volumeMounts:
  - name: shared-data
    mountPath: /my-files
- name: Container-B
  image: "some-public-image"
  volumeMounts:
  - name: shared-data
    mountPath: /files-from-container-a
volumes:
- name: shared-data
  emptyDir: {}
An ugly hack: use an init container to copy the data into the emptyDir volume, then mount the volume in the second container.
initContainers:
- name: init-config-data-copy-wait
  image: datacontainer
  command:
  - sh
  - "-ce"
  - |
    set -ex
    cp -r /src-data/* /dst-data/
    ls /dst-data/
  volumeMounts:
  - mountPath: /dst-data
    name: dst-data-volume
volumes:
- name: dst-data-volume
  emptyDir: {}
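Adapted to the question above, the init container would run Container A's image and copy /my-files into the shared volume, which Container B then mounts at its usual path. A hedged sketch; the /shared staging path is hypothetical, chosen so the baked-in /my-files directory itself is not shadowed by the mount:
initContainers:
- name: copy-my-files
  image: "my-gitlab-registry/my-image-a-with-copied-files"
  command: ["sh", "-c", "cp -r /my-files/. /shared/"]  # copy image files into the volume
  volumeMounts:
  - name: shared-data
    mountPath: /shared   # hypothetical staging path
containers:
- name: container-b
  image: "some-public-image"
  volumeMounts:
  - name: shared-data
    mountPath: /files-from-container-a
volumes:
- name: shared-data
  emptyDir: {}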

Pre-populating a Local SSD disk in GCP Kubernetes for read-only multi-pod usage

What is the best way to preload large files into a local PersistentVolume SSD before it gets used by Kubernetes pods?
The goal is to have multiple pods (could be multiple instances of the same pod, or different), share the same local SSD drive in a read-only mode. The drive would need to be initialized somehow with a large dataset.
The Google Local SSD docs describe running the local volume static provisioner, but that approach only creates a PersistentVolume; it does not initialize it.
Basically, you can add an init container to your pod that initializes the SSD: adds data, etc.
apiVersion: v1
kind: Pod
metadata:
  name: "test-ssd"
spec:
  initContainers:
  - name: "init"
    image: "ubuntu:14.04"
    command: ["/bin/init_my_ssd.sh"]
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  containers:
  - name: "shell"
    image: "ubuntu:14.04"
    command: ["/bin/sh", "-c"]
    args: ["echo 'hello world' > /test-ssd/test.txt && sleep 1 && cat /test-ssd/test.txt"]
    volumeMounts:
    - mountPath: "/test-ssd/"
      name: "test-ssd"
  volumes:
  - name: "test-ssd"
    hostPath:
      path: "/mnt/disks/ssd0"
  nodeSelector:
    cloud.google.com/gke-local-ssd: "true"
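For the read-only, multi-pod goal, each consuming pod can then mount the same hostPath with readOnly set, so no pod can modify the pre-loaded dataset. A sketch of the mount in a consuming pod:
volumeMounts:
- mountPath: "/test-ssd/"
  name: "test-ssd"
  readOnly: true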

mountPath overrides the rest of the files in that same folder

I have a volume with a secret called config-volume. I want that file in the /home/code/config folder, which is where the rest of the configuration files are. For that, I mount it like this:
volumeMounts:
- name: config-volumes
  mountPath: /home/code/config
The issue is that after deploying, /home/code/config contains only the secret file and the rest of the files are gone.
So /home/code/config is an existing (non-empty) folder, and I suspect that the volumeMount overwrites it.
Is there a way this can be done without overwriting everything?
You can do the following, taken from this GitHub issue
containers:
- volumeMounts:
  - name: config-volumes
    mountPath: /home/code/config
    subPath: config
volumes:
- name: config-volumes
  configMap:
    name: my-config
This assumes that your ConfigMap is called my-config and that it has a key named config.
Kubernetes Secrets are mounted as a directory, with each key as a file in that directory. So in your case, the config-volumes secret is mounted to /home/code/config, shadowing whatever that directory was before.
You could specify your volume mount as:
volumeMounts:
- name: config-volumes
  mountPath: /home/code/config/config-volumes
which would provide a config-volumes directory inside the config directory, with files for your secret's keys inside it.
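For completeness, the corresponding volumes entry would reference the secret; the secret name below is an assumption:
volumes:
- name: config-volumes
  secret:
    secretName: my-secret  # assumption; the secret that holds your config file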