Sharing non-persistent volume between containers in a pod - kubernetes

I am trying to put two Node.js applications into the same pod because they normally sit on the same machine and are unfortunately heavily coupled, in such a way that each of them looks for the folder of the other (pos/app.js needs /pos-service, and pos-service/app.js needs /pos).
In the end, the folder is supposed to contain:
/pos
/pos-service
The volumes don't need to be persistent, so I tried sharing them with an emptyDir like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pos-deployment
  labels:
    app: pos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pos
  template:
    metadata:
      labels:
        app: pos
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
      containers:
        - name: pos-service
          image: pos-service:0.0.1
          volumeMounts:
            - name: shared-data
              mountPath: /pos-service
        - name: pos
          image: pos:0.0.3
          volumeMounts:
            - name: shared-data
              mountPath: /pos
However, when the pod is launched and I exec into each of the containers, they still seem to be isolated and each other's folders can't be seen.
I would appreciate any help, thanks.

This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
Since this issue has already been solved, or rather clarified (in fact there is nothing to be solved here), let's post a Community Wiki answer, as it is partially based on comments from a few different users.
As Matt and David Maze have already mentioned, it works as expected and in your example there is nothing that copies any content to your emptyDir volume:
With just the YAML you've shown, nothing ever copies anything into the
emptyDir volume, unless the images' startup knows to do that. – David
Maze Dec 28 '20 at 12:45
And as the name itself suggests, an emptyDir starts out completely empty, so it's your task to pre-populate it with the desired data. This can be done with an init container: temporarily mount your emptyDir at a different mount point, e.g. /mnt/my-empty-dir, copy the content of the directory or directories already present in your container (e.g. /pos and /pos-service in your example), and then mount it again at the desired location. Take a look at this example, presented in one of my older answers, as it can be done in the very same way. Your Deployment may look as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pos-deployment
  labels:
    app: pos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pos
  template:
    metadata:
      labels:
        app: pos
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}
      initContainers:
        - name: pre-populate-empty-dir-1
          image: pos-service:0.0.1
          command: ['sh', '-c', 'cp -a /pos-service/* /mnt/empty-dir-content/']
          volumeMounts:
            - name: shared-data
              mountPath: "/mnt/empty-dir-content/"
        - name: pre-populate-empty-dir-2
          image: pos:0.0.3
          command: ['sh', '-c', 'cp -a /pos/* /mnt/empty-dir-content/']
          volumeMounts:
            - name: shared-data
              mountPath: "/mnt/empty-dir-content/"
      containers:
        - name: pos-service
          image: pos-service:0.0.1
          volumeMounts:
            - name: shared-data
              mountPath: /pos-service
        - name: pos
          image: pos:0.0.3
          volumeMounts:
            - name: shared-data
              mountPath: /pos
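Once the pod is running, a quick way to confirm that the emptyDir really got pre-populated is to list the shared mount from each container (the pod name below is a placeholder taken from kubectl get pods):
kubectl get pods -l app=pos
kubectl exec <pod-name> -c pos -- ls /pos
kubectl exec <pod-name> -c pos-service -- ls /pos-service
Both listings should show the same merged content, since both mounts point at the same emptyDir volume.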
It's worth mentioning that there is nothing surprising here, as this is exactly how mount works on Linux and other Unix-like operating systems.
If you have e.g. /var/log/your-app on your main disk, populated with logs, and you then mount a new, empty disk with /var/log/your-app as its mount point, you won't see any content there. It won't be deleted from its original location on your main disk; it simply becomes unavailable, because at this location you have now mounted a completely different volume (which happens to be empty, or may have totally different content). When you unmount it and visit /var/log/your-app again, you'll see its original content. I hope it's all clear.
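As a quick illustration of the same behaviour outside Kubernetes (a sketch that assumes a Linux machine and root privileges; the paths are just placeholders):
mkdir -p /var/log/your-app
echo "old entry" > /var/log/your-app/app.log
ls /var/log/your-app              # app.log is visible
mount -t tmpfs tmpfs /var/log/your-app
ls /var/log/your-app              # empty: the new mount shadows the original content
umount /var/log/your-app
ls /var/log/your-app              # app.log is back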

Related

container level securityContext fsGroup

I'm trying to play with a single-pod, multi-container scenario.
The problem is that one of my containers (directus) is a Node app that runs as user 'node' with uid 1000.
On my first try, I used hostPath as the storage backend. With this, I had to change the host directory's mode with chmod manually.
Now I'm trying Longhorn.
Basically, I don't want to change a host directory's mode/ownership each time I deploy this deployment.
Here is my manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lh-directus
  namespace: lh-directus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lh-directus
  template:
    metadata:
      labels:
        app: lh-directus
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        isGeneralDeployment: "true"
      volumes:
        - name: lh-directus-uploads-volume
          persistentVolumeClaim:
            claimName: lh-directus-uploads-pvc
        - name: lh-directus-dbdata-volume
          persistentVolumeClaim:
            claimName: lh-directus-dbdata-pvc
      containers:
        # Redis Cache
        - name: redis
          image: redis:6
        # Database
        - name: database
          image: postgres:12
          volumeMounts:
            - name: lh-directus-dbdata-volume
              mountPath: /var/lib/postgresql/data
        # Directus
        - name: directus
          image: directus/directus:latest
          securityContext:
            fsGroup: 1000
          volumeMounts:
            - name: lh-directus-uploads-volume
              mountPath: /directus/uploads
When I apply the manifest, I get this error:
error: error validating "lh-directus.yaml": error validating data: ValidationError(Deployment.spec.template.spec.containers[2].securityContext): unknown field "fsGroup" in io.k8s.api.core.v1.SecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
I read about initContainers...
But kindly tell me how to fix this problem without an init container and without manually setting/changing the host path's ownership/mode.
Sincerely
-bino-
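For reference, the validation error above occurs because fsGroup is a field of the pod-level securityContext (PodSecurityContext), not of a container's securityContext. A minimal, untested sketch of the pod template spec with the field moved up one level might look like this (whether the group ownership of the Longhorn volume is actually adjusted also depends on the CSI driver's fsGroup support):
    spec:
      securityContext:
        fsGroup: 1000              # pod-level field: applied to supported volumes when they are mounted
      containers:
        - name: directus
          image: directus/directus:latest
          securityContext:
            runAsUser: 1000        # container-level securityContext takes fields like runAsUser, not fsGroup
          volumeMounts:
            - name: lh-directus-uploads-volume
              mountPath: /directus/uploads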

Kubernetes Persistant Volume overwrites image data

I have a pod that reads from an image that contains data within /var/www/html. I want this data to be stored in a persistent volume. This is my deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      container: app
  template:
    metadata:
      labels:
        container: app
    spec:
      containers:
        - name: app
          image: my/toolkit-app:working
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html
              name: toolkit-volume
              subPath: html
      volumes:
        - name: toolkit-volume
          persistentVolumeClaim:
            claimName: azurefile
      imagePullSecrets:
        - name: my-cred
However, when I look into the pod, I can see the directory is empty.
If I comment out the persistent volume:
#volumeMounts:
#  - mountPath: /var/www/html
#    name: toolkit-volume
#    subPath: html
I can see that the image data is there.
So it seems like the persistent volume is overwriting the existing directory. Is there a way around this? Ideally I want /var/www/html to be stored in a separate volume, and any existing files within the image should be stored there too.
This is more a problem of visibility: if you mount an empty volume at a specific path, you won't be able to see what was placed there in the container image.
From your question I assume that you want to be able to roll out updates by means of a new container image, but at the same time retain variable data that your application created in the same directory.
You could achieve this with the following method (a sketch follows below):
Use an init container with the same image and mount your persistent volume at a different path, for example /data.
As the command for the init container, copy the contents of /var/www/html to /data.
In the regular container, use the mount you already have; it will contain your variable data plus the updated data from the init container.
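A minimal sketch of how this could look with your Deployment, reusing the image, PVC name and subPath from your manifest (the init container's name copy-html and its /data mount path are just illustrative choices):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      container: app
  template:
    metadata:
      labels:
        container: app
    spec:
      initContainers:
        - name: copy-html
          image: my/toolkit-app:working            # same image as the main container
          command: ["sh", "-c", "cp -a /var/www/html/. /data/"]
          volumeMounts:
            - mountPath: /data                     # illustrative path used only for seeding the volume
              name: toolkit-volume
              subPath: html
      containers:
        - name: app
          image: my/toolkit-app:working
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /var/www/html
              name: toolkit-volume
              subPath: html
      volumes:
        - name: toolkit-volume
          persistentVolumeClaim:
            claimName: azurefile
      imagePullSecrets:
        - name: my-cred
The copy runs on every pod start, so files shipped with the image get refreshed on each rollout, while files your application created under other names stay on the volume.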

How to cp data from one container to another using kubernetes

Say we have a simple deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ikg-api-demo
  name: ikg-api-demo
spec:
  selector:
    matchLabels:
      app: ikg-api-demo
  replicas: 3
  template:
    metadata:
      labels:
        app: ikg-api-demo
    spec:
      containers:
        - name: ikg-api-demo
          imagePullPolicy: Always
          image: example.com/main_api:private_key
          ports:
            - containerPort: 80
The problem is that this image/container depends on another image/container: it needs to cp data from the other image, or use some shared volume.
How can I tell kubernetes to download another image, run it as a container, and then copy data from it to the container declared in the above file?
It looks like this article explains how, but it's not 100% clear how it works. It looks like you create some shared volume and launch the two containers using that shared volume?
So, according to that link, I added this to my deployment.yml:
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/nltk_data:latest
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/main_api:private_key
      ports:
        - containerPort: 80
My primary hesitation is that mounting /nltk_data as a shared volume will overwrite what might be there already.
So I assume what I need to do is mount it at some other location, and then make the ENTRYPOINT of the source data container:
ENTRYPOINT ["cp", "-r", "/nltk_data_source", "/nltk_data"]
so that it writes the data to the shared volume once the container is launched.
So I have two questions:
How do I run one container and finish a job before another container starts, using Kubernetes?
How do I write to a shared volume without having that shared volume overwrite what's in the image? In other words, if I have /xyz in the image/container, I don't want to have to copy /xyz to /shared_volume_mount_location if I don't have to.
How do I run one container and finish a job before another container starts, using Kubernetes?
Use initContainers. I have updated your deployment.yml below, assuming example.com/nltk_data:latest is your data image.
How do I write to a shared volume without having it overwrite what's in the image?
As you know what is in your image, you need to select an appropriate mount path. I would use /mnt/nltk_data.
Updated deployment.yml with init containers:
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  initContainers:
    - name: init-ikg-api-demo
      imagePullPolicy: Always
      # You can use command, if you don't want to change the ENTRYPOINT
      command: ['sh', '-c', 'cp -r /nltk_data_source /mnt/nltk_data']
      volumeMounts:
        - name: shared-data
          mountPath: /mnt/nltk_data
      image: example.com/nltk_data:latest
  containers:
    - name: ikg-api-demo
      imagePullPolicy: Always
      volumeMounts:
        - name: shared-data
          mountPath: /nltk_data
      image: example.com/main_api:private_key
      ports:
        - containerPort: 80

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly, but I am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens even with the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
        - name: kio-ingester
          image: busybox
          volumeMounts:
            - name: test-volume
              mountPath: /testing
          imagePullPolicy: IfNotPresent
          command: ["ls"]
          args: ["-l", "/testing/hello.txt"]
      volumes:
        - name: test-volume
          hostPath:
            # directory location on host
            path: /tmp
            # this field is optional
            # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:latest
      volumeMounts:
        - name: config-data
          mountPath: /data
  initContainers:
    - name: config-data
      image: busybox
      # run the echo through a shell so the redirection into /data/config actually happens
      command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
      volumeMounts:
        - name: config-data
          mountPath: /data
  volumes:
    - name: config-data
      hostPath:
        # hostPath requires a path; /tmp matches the directory used in the question
        path: /tmp
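Once the pod is Running, a quick way to check that the init container seeded the file (names taken from the sketch above) is:
kubectl exec my-app -c my-app -- cat /data/config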
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519

Directories creation inside the Kubernetes Persistent Volume

How would we create a directory inside a Kubernetes persistent volume so that it can be used in the container as a subPath? E.g. a mysql directory should be created while claiming the persistent volume.
I would probably put an init container into my pod spec that simply mounts the volume, runs a mkdir -p to create the directory and then exits. You could also do this in the target container itself with some kind of script.
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
This is how I implemented the wise solution of @brett-wagner with an initContainer and mkdir -p. I create two subdirectories, my-app-data and my-app-media, in my NFS server volume /exports:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nfs-server-deploy
  labels:
    app: my-nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nfs-server
  template:
    metadata:
      labels:
        app: my-nfs-server
    spec:
      containers:
        - name: my-nfs-server-cntr
          image: k8s.gcr.io/volume-nfs:0.8
          volumeMounts:
            - name: my-nfs-server-exports
              mountPath: "/exports"
      initContainers:
        - name: volume-dirs-init-cntr
          image: busybox:1.35
          command:
            - "/bin/mkdir"
          args:
            - "-p"
            - "/exports/my-app-data"
            - "/exports/my-app-media"
          volumeMounts:
            - name: my-nfs-server-exports
              mountPath: "/exports"
      volumes:
        - name: my-nfs-server-exports
          persistentVolumeClaim:
            claimName: my-nfs-server-pvc
I think you could use a readinessProbe with an execAction to create the subfolder. It would make sure your folder is ready before the container starts accepting requests.
Otherwise you could use the command option to create it, but that will only be executed after the container starts.
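A minimal sketch of the readinessProbe variant, assuming the volume is mounted at /data and the desired subdirectory is mysql (the image and all names here are placeholders):
containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
      - name: data
        mountPath: /data
    readinessProbe:
      exec:
        command: ["sh", "-c", "mkdir -p /data/mysql && test -d /data/mysql"]
      initialDelaySeconds: 2
      periodSeconds: 5
Keep in mind that the probe keeps running for the whole lifetime of the container, so the command has to stay idempotent, which mkdir -p is.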