How to mount the same directory to multiple containers in a pod - kubernetes

I'm running multiple containers in a pod. I have a persistent volume and I'm mounting the same directory into both containers.
My requirement is:
mount /opt/app/logs/app.log to container A, where the application writes data to app.log
mount /opt/app/logs/app.log to container B, to read the data back from app.log
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs/   # container A writes data here to the app.log file
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs/   # container B reads data from app.log
    name: data
The issue I'm facing is that when I mount the same directory /opt/app/logs/ to container-B, I don't see the app.log file.
Can someone help me with this, please? This should be achievable, but I'm not sure what I'm missing here.

According to your requirements, you need something like below:
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
Your application running in container-A will create or write files on the given path (/opt/app/logs), say the app.log file. Then from container-B you'll find the app.log file on the same path (/opt/app/logs). You can use any path here.
In your given spec you actually tried to mount a directory onto a file (app.log). I think that's what's causing the issue.
Update-1:
Here is a full YAML from a working example, so you can try it yourself and see how things work.
kubectl exec -ti test-pd -c test-container -- sh
go to /test-path1
create some file using the touch command, say "touch a.txt"
exit from test-container
kubectl exec -ti test-pd -c test -- sh
go to /test-path2
you will find the a.txt file here.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-path1
      name: test-volume
  - image: pkbhowmick/go-rest-api:2.0.1 # my-rest-api-server
    name: test
    volumeMounts:
    - mountPath: /test-path2
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: test-pv-claim
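For completeness, a sketch of applying the two manifests above before running the exec steps:

kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml
kubectl get pod test-pd   # wait until both containers show as Running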

From your post it seems you're using two different paths.
Container B is mounted to /opt/app/logs/logs.
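A minimal sketch of what the two mounts could look like once they point at the same directory (container and volume names taken from the question; the volume itself is unchanged):

containers:
- name: container-A
  image: nginx
  volumeMounts:
  - name: data
    mountPath: /opt/app/logs        # write app.log into this directory
- name: container-B
  image: busybox
  volumeMounts:
  - name: data
    mountPath: /opt/app/logs        # same directory, not /opt/app/logs/logs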

Have different file names for each of your containers, and also fix the mount path in the container config. Please use this as an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
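If you stay with hostPath, a second container in the same pod can mount the same volume; a sketch extending the containers section of the example above (the busybox reader container and its sleep command are my additions, the volumes section stays the same):

  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  - image: busybox                  # hypothetical second container
    name: reader-container
    command: ["sleep", "3600"]      # keep it running so you can exec in and read files
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume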

Related

Pod unable to mount same path in two volumes

I am a newbie here, but I have a use case where I need to mount the same path to two different PVs. Whenever I try to give the same path, my pod doesn't come up. Check the mount paths below:
- name: xxx
  mountPath: "/home/{username}"
  readOnly: false
static:
  pvcName:
  subPath: '{username}'
  capacity: 10Gi
  homeMountPath: '/home/{username}'
dynamic:
  storageClass: nfs-client
  pvcNameTemplate: claim-{username}{servername}
  volumeNameTemplate: volume-{username}{servername}
  storageAccessModes: [ReadWriteOnce]
But after changing the mount path, the pod comes up without any issue, for example:
mountPath: "/home/test/{username}"
Is there something I am missing?
If you want to mount multiple PVCs into a single pod at the same path, that won't be possible unless you use ReadWriteMany.
But if you want to mount multiple paths into a pod, that's possible with a single PVC or with multiple PVCs, depending on the use case.
Single PVC single POD
apiVersion: v1
kind: Pod
metadata:
  name: my-test-pod
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_HOST
      value: "IP"
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: data
      subPath: path1
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /var/www/html
      name: data
      subPath: path2
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-site-data
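To see the subPath separation, you could exec into the pod above once it is running (pod and container names taken from the example):

# write a file through the nginx container's mount; it lands in path2/ on the shared PVC
kubectl exec my-test-pod -c nginx -- sh -c 'echo hello > /var/www/html/index.html'
# the mysql container's mount (subPath: path1) does not see that file
kubectl exec my-test-pod -c nginx -- ls /var/www/html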
Multiple PVCs and volume paths in a single pod
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  volumes:
  # List of volumes
  - name: volume1
    < volume details, see below >
  - name: volume2
    < volume details, see below >
  containers:
  - name: mycontainer
    volumeMounts:
    # will mount 'volume1' into /var/www/html
    - name: volume1
      mountPath: /var/www/html
    # will mount 'volume2' into /var/log
    - name: volume2
      mountPath: /var/log/
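For instance, the elided < volume details > placeholders above could be filled in with two PVCs; a sketch with hypothetical claim names:

  volumes:
  - name: volume1
    persistentVolumeClaim:
      claimName: html-claim       # hypothetical PVC name
  - name: volume2
    persistentVolumeClaim:
      claimName: log-claim        # hypothetical PVC name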
Reference : https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_kubernetes/get_started_provisioning_storage_in_kubernetes#example

two container pod creation in kubernetes

Create a pod that runs two containers and ensure that the pod has a shared volume that both containers can use to communicate with each other: write an HTML file in one container and try accessing it from the other container.
Can anyone tell me how to do it?
Example pod with multiple containers
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Official documentation: https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
The above example uses an emptyDir, so if your pod restarts or is recreated you will lose the data.
If you have any requirement to keep the data, I would suggest using a PVC instead of the emptyDir.
I would recommend using NFS-backed storage if you can.
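For example, only the volumes section of the pod above needs to change to use a PVC; a sketch, assuming a claim named shared-data-claim already exists:

  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: shared-data-claim   # hypothetical PVC, must be created beforehand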

k8s initContainer mountPath does not exist after kubectl pod deployment

Below is the pod YAML. After deployment, I can access the pod
and I can see the mountPath "/usr/share/nginx/html", but I could not find
"/work-dir", which should have been created by the initContainer.
Could someone explain the reason to me?
Thanks and regards
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
The volume at "/work-dir" is mounted by the init container, and the "/work-dir" location only exists in the init container. When the init container completes, its filesystem is gone, so the "/work-dir" directory in that init container is gone with it. The application (nginx) container mounts the same volume too (albeit at a different location), providing a mechanism for the two containers to share its content.
Per the docs:
Init containers can run with a different view of the filesystem than
app containers in the same Pod.
The volume mount with a PVC allows you to share the contents of /work-dir/ and /usr/share/nginx/html/, but it does not mean the nginx container will have the /work-dir folder. Given this, you may think that you could just mount the path /, which would allow you to share all folders underneath. However, a mountPath does not work for /.
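As a quick check with the pod from the question, the file the init container downloaded shows up under the nginx container's mount point, not under /work-dir:

kubectl exec init-demo -c nginx -- ls /usr/share/nginx/html   # shows index.html
kubectl exec init-demo -c nginx -- ls /work-dir               # fails: no such directory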
So, how do you solve your problem? You could have another pod mount /work-dir/ in case you actually need the folder. Here is an example (pvc and deployment with mounts):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-fs-pvc
  namespace: default
  labels:
    mojix.service: default-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: shared-fs
  labels:
    mojix.service: shared-fs
spec:
  replicas: 1
  selector:
    matchLabels:
      mojix.service: shared-fs
  template:
    metadata:
      labels:
        mojix.service: shared-fs
    spec:
      terminationGracePeriodSeconds: 3
      containers:
      - name: nginx-c
        image: nginx:latest
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /var/www/static/
      - name: alpine-c
        image: alpine:latest
        command: ["/bin/sleep", "10000s"]
        lifecycle:
          postStart:
            exec:
              command: ["/bin/mkdir", "-p", "/work-dir"]
        volumeMounts:
        - name: shared-fs-volume
          mountPath: /work-dir/
      volumes:
      - name: shared-fs-volume
        persistentVolumeClaim:
          claimName: shared-fs-pvc
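To sanity-check that both containers see the same volume, something like the following could be run (container names taken from the example above; with older kubectl versions, substitute the pod name for deploy/shared-fs):

kubectl exec deploy/shared-fs -c alpine-c -- touch /work-dir/hello.txt
kubectl exec deploy/shared-fs -c nginx-c -- ls /var/www/static/   # hello.txt appears here too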

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a K8s Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like the existing data can't be accessed when the volume is mounted.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # run through a shell so the redirection into /data/config actually happens
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath: {}   # note: hostPath needs a path on the node; an emptyDir would also work here
Reference: https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519

Is there a way to get UID in pod spec

What I want to do is provide the pod with a unified log store, currently persisted to a hostPath, but I also want this path to include the pod's UID so I can easily find its path after the pod is destroyed.
For example:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
  - image: python:2.7
    name: web-server
    command:
    - "sh"
    - "-c"
    - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
    volumeMounts:
    - mountPath: /logs
      name: log-dir
  volumes:
  - name: log-dir
    hostPath:
      path: /var/log/apps/{metadata.uid}
      type: DirectoryOrCreate
metadata.uid is what I want to fill in, but I do not know how to do it.
For logging, it's better to use another strategy.
Your logs are best managed if they are streamed to stdout and grabbed by an agent.
Don't persist your logs on the filesystem; gather them with an agent and put them together for further analysis.
Fluentd is very popular and deserves to be known.
After searching the Kubernetes docs, I finally found a solution for my specific problem: expanding environment variables in a volume subPath (the subPathExpr field). This is exactly what I wanted.
So I can create the pod with:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-logging-support
spec:
  containers:
  - image: python:2.7
    name: web-server
    command:
    - "sh"
    - "-c"
    - "python -m SimpleHTTPServer > /logs/http.log 2>&1"
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    volumeMounts:
    - mountPath: /logs
      name: log-dir
      # subPathExpr (rather than subPath) expands environment variables such as $(POD_UID)
      subPathExpr: $(POD_UID)
  volumes:
  - name: log-dir
    hostPath:
      path: /var/log/apps/
      type: DirectoryOrCreate
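With subPathExpr in place, each pod's logs end up under /var/log/apps/<pod-uid> on the node; while the pod exists, its UID can be looked up with, for example:

kubectl get pod pod-with-logging-support -o jsonpath='{.metadata.uid}'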