Create a pod that runs two containers and ensure the pod has a shared volume that both containers can use to communicate with each other: write an HTML file in one container and try accessing it from the other container.
Can anyone tell me how to do it?
Example pod with multiple containers
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
Official document : https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
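To verify the sharing works, a quick check once the pod is running (pod and container names are from the example above; the debian container exits after writing the file, which is expected):
kubectl apply -f two-containers.yaml   # assuming the manifest above is saved as two-containers.yaml
kubectl exec two-containers -c nginx-container -- cat /usr/share/nginx/html/index.html
# expected output: Hello from the debian container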
The above example uses emptyDir, so if your Pod is deleted or recreated you will lose the data.
If you need the data to survive, I would suggest using a PVC instead of emptyDir; I would recommend NFS-backed storage if you can.
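As a rough sketch of that suggestion, only the volumes section of the example would change; the claim name is hypothetical and the PersistentVolumeClaim itself would have to be created separately:
volumes:
- name: shared-data
  persistentVolumeClaim:
    claimName: shared-data-pvc   # hypothetical claim name, must exist beforehand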
I have a situation where we have a list of IP addresses (coming from a ConfigMap). We need to validate these IP addresses (i.e. check whether they are accessible from this machine) and return the first accessible one, so that the application can use that IP address for further actions.
I got to know that we can use init containers for this. But my question is: how can we run a shell script in the init container to identify the accessible IP address and set it in an environment variable, so that the application can process it further?
Init containers can communicate with the other, normal containers through volumes.
You can use the emptyDir volume type, which is a directory that allows the pod to store data for the duration of its lifecycle.
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  volumes:
  - name: addresses
    emptyDir: {}
  initContainers:
  - name: ip-selector
    image: your-image
    volumeMounts:
    - name: addresses
      mountPath: /path/to/ip/addresses
  containers:
  - name: ip-handler
    image: your-image
    volumeMounts:
    - name: addresses
      mountPath: /path/to/ip/addresses/handler
      readOnly: true # optional
Your init container can now save a .env file with the addresses under /path/to/ip/addresses, and your normal container can then read this file from /path/to/ip/addresses/handler.
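As a minimal sketch of what the ip-selector command could look like, assuming the candidate addresses are passed in a hypothetical IP_LIST environment variable and that your-image has a shell and ping available:
command:
- sh
- -c
- |
  # write the first reachable address into the shared volume as a .env file
  for ip in $IP_LIST; do
    if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
      echo "ACCESSIBLE_IP=$ip" > /path/to/ip/addresses/ip.env
      break
    fi
  done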
Option 1:
Once you have the IP inside the init container, you can create a secret with that value and use it.
initContainers:
- name: secret
  image: gcr.io/cloud-builders/kubectl:latest
  command:
  - sh
  - -c
  - kubectl create secret generic mysecret ... -o yaml | kubectl apply -f -
containers:
- name: test-container
  image: image-uri:latest
  envFrom:
  - secretRef:
      name: mysecret
Option 2:
You can also use a shared volume mount to write the IP into a file, and then, inside the main container, run a command that loads the file contents before the main process starts. That way your main container starts with the IP available as an environment variable.
command: [sh, -c, ". /tmp/env && service command"]
Example
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: [sh, -c, ". /tmp/env && service command"]
    volumeMounts:
    - name: workdir
      mountPath: /tmp
  initContainers:
  - name: ip-check
    image: busybox:1.28
    command:
    - sh
    - -c
    # replace this with your command or script that checks the IPs and writes the result
    - "echo export IP=<accessible-ip> > /tmp/env"
    volumeMounts:
    - name: workdir
      mountPath: "/tmp"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
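Once the pod is up, you can confirm that the init container wrote the file before the main container started (names from the example above, assuming the main container is still running):
kubectl exec test-pod -c busybox -- cat /tmp/env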
I'm running multiple containers in a pod. I have a persistent volume and am mounting the same directories into the containers.
My requirement is:
mount /opt/app/logs/app.log to container A, where the application writes data to app.log
mount /opt/app/logs/app.log to container B, to read data back from app.log
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs/   # container A writes data here to the app.log file
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs/   # container B reads data from app.log
    name: data
The issue I'm facing is that when I mount the same directory /opt/app/logs/ to container-B, I'm not seeing the app.log file.
Can someone help me with this, please? This should be achievable, but I'm not sure what I'm missing here.
According to your requirements, you need something like below:
- name: container-A
  image: nginx
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
- name: container-B
  image: busybox
  volumeMounts:
  - mountPath: /opt/app/logs
    name: data
Your application running in container-A will create or write files in the given path (/opt/app/logs), say the app.log file. Then from container-B you'll find the app.log file in the same path (/opt/app/logs). You can use any path here.
In your given spec you actually tried to mount a directory into a file (app.log). I think that's what is causing the issue.
Update-1:
Here is a full YAML from a working example. You can try it yourself to see how things work.
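Assuming the two manifests below are saved as pvc.yaml and pod.yaml, create them first:
kubectl apply -f pvc.yaml
kubectl apply -f pod.yaml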
kubectl exec -ti test-pd -c test-container -- sh
go to /test-path1
create a file using the touch command, say "touch a.txt"
exit from test-container
kubectl exec -ti test-pd -c test -- sh
go to /test-path2
you will find the a.txt file there.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pv-claim
spec:
  storageClassName:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-path1
      name: test-volume
  - image: pkbhowmick/go-rest-api:2.0.1 # my-rest-api-server
    name: test
    volumeMounts:
    - mountPath: /test-path2
      name: test-volume
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: test-pv-claim
From your post it seems you're using two separate paths.
Container B is mounted at /opt/app/logs/logs.
Use different file names for each of your containers and also fix the mount path in the container config. Please use this as an example:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
Say we have a simple deployment.yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ikg-api-demo
  name: ikg-api-demo
spec:
  selector:
    matchLabels:
      app: ikg-api-demo
  replicas: 3
  template:
    metadata:
      labels:
        app: ikg-api-demo
    spec:
      containers:
      - name: ikg-api-demo
        imagePullPolicy: Always
        image: example.com/main_api:private_key
        ports:
        - containerPort: 80
The problem is that this image/container depends on another image/container - it needs to cp data from the other image, or use some shared volume.
How can I tell Kubernetes to download another image, run it as a container, and then copy data from it to the container declared in the above file?
It looks like this article explains how,
but it's not 100% clear how it works. It looks like you create some shared volume and launch the two containers using that shared volume?
So, according to that link, I added this to my deployment.yml:
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: ikg-api-demo
    imagePullPolicy: Always
    volumeMounts:
    - name: shared-data
      mountPath: /nltk_data
    image: example.com/nltk_data:latest
  - name: ikg-api-demo
    imagePullPolicy: Always
    volumeMounts:
    - name: shared-data
      mountPath: /nltk_data
    image: example.com/main_api:private_key
    ports:
    - containerPort: 80
My primary hesitation is that mounting /nltk_data as a shared volume will overwrite what might be there already.
So I assume what I need to do is mount it at some other location, and then make the ENTRYPOINT of the source-data container:
ENTRYPOINT ["cp", "-r", "/nltk_data_source", "/nltk_data"]
so that it will write to the shared volume once the container is launched.
So I have two questions:
How to run one container and finish a job, before another container starts using kubernetes?
How to write to a shared volume without having that shared volume overwrite what's in your image? In other words, if I have /xyz in the image/container, I don't want to have to copy /xyz to /shared_volume_mount_location if I don't have to.
How to run one container and finish a job, before another container starts using kubernetes?
Use initContainers - I've updated your deployment.yml below, assuming example.com/nltk_data:latest is your data image.
How to write to a shared volume without having that shared volume overwrite?
Since you know what is in your image, you need to pick a mount path that doesn't clash with it. I would use /mnt/nltk_data.
Updated deployment.yml with init containers
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  initContainers:
  - name: init-ikg-api-demo
    imagePullPolicy: Always
    # You can use command, if you don't want to change the ENTRYPOINT
    command: ['sh', '-c', 'cp -r /nltk_data_source/. /mnt/nltk_data/']
    volumeMounts:
    - name: shared-data
      mountPath: /mnt/nltk_data
    image: example.com/nltk_data:latest
  containers:
  - name: ikg-api-demo
    imagePullPolicy: Always
    volumeMounts:
    - name: shared-data
      mountPath: /nltk_data
    image: example.com/main_api:private_key
    ports:
    - containerPort: 80
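After applying the updated deployment, a quick way to confirm the init container populated the shared volume (recent kubectl versions can exec by deployment name; otherwise pick one of the pods):
kubectl -n ikg-api-demo rollout status deployment/ikg-api-demo
kubectl -n ikg-api-demo exec deploy/ikg-api-demo -c ikg-api-demo -- ls /nltk_data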
I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow, it really needs to be a Job, not a Deployment. I hope that is possible. My current config file, which I am launching with kubectl just for testing, is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    command: ["sh", "-c", "echo -n \"{'address':'10.0.1.192:2379/db'}\" > /data/config"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    hostPath: {}
Reference:
https://medium.com/#jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519
I am trying to "pass" a value from the init container to a container. Since values in a configmap are shared across the namespace, I figured I can use it for this purpose. Here is my job.yaml (with faked-out info):
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0
This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognizable command in the busybox image), and I am assuming that the pod can only read values from a configmap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?
Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?
(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)
You can create an emptyDir volume and mount it onto both containers. Unlike a persistent volume, emptyDir has no portability issues.
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
If for various reasons you don't want to use a shared volume, and you want to create a ConfigMap or a Secret instead, here is a solution.
First you need to use a Docker image which contains kubectl, for example gcr.io/cloud-builders/kubectl:latest (a Docker image containing kubectl, maintained by Google).
Then this (init) container needs enough rights to create resources on the Kubernetes cluster. By default, Kubernetes injects the token of the service account named "default" into the container, but I prefer to make this explicit, so add this field at the pod spec level:
...
spec:
  # Already true by default, but I prefer to make it explicit
  automountServiceAccountToken: true
  initContainers:
  - name: artifactory-snapshot
And add "edit" role to "default" service account:
kubectl create rolebinding default-edit-rb --clusterrole=edit --serviceaccount=default:myapp --namespace=default
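You can confirm the permission is in place before running the Job (assuming the default service account in the default namespace):
kubectl auth can-i create configmaps --namespace=default --as=system:serviceaccount:default:default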
Then the complete example:
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      # Already true by default, but I prefer to make it explicit.
      automountServiceAccountToken: true
      initContainers:
      - name: artifactory-snapshot
        # You need to use a docker image which contains kubectl
        image: gcr.io/cloud-builders/kubectl:latest
        command:
        - sh
        - -c
        # the "--dry-run -o yaml | kubectl apply -f -" is to make the command idempotent
        - kubectl create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url --dry-run -o yaml | kubectl apply -f -
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      # Jobs require Never or OnFailure
      restartPolicy: Never
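Once the Job has run, you can check that the ConfigMap was created and inspect the value the main container reads:
kubectl get configmap test-config -o jsonpath='{.data.artifactorySnapshotUrl}'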
First of all, kubectl is a binary. It was downloaded onto your machine before you could use the command. But in your pod, the kubectl binary doesn't exist, so you can't use the kubectl command from a busybox image.
Furthermore, kubectl uses credentials that are saved on your machine (probably under the ~/.kube path). So if you try to use kubectl from inside an image, it will fail because of missing credentials.
For your scenario, I will suggest the same as #ccshih: use volume sharing.
Here is the official doc about volume sharing between init-container and container.
The YAML used there is:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
Here the init container saves a file in the volume, and later that file is available inside the main container. Try the tutorial yourself for a better understanding.
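To confirm the shared file is actually served by nginx, a quick check once the pod above is running (assuming the manifest is saved as init-demo.yaml):
kubectl apply -f init-demo.yaml
kubectl exec init-demo -c nginx -- cat /usr/share/nginx/html/index.html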