To share a file between two containers within a pod, I have to use a volume and create the file in the mount path of that volume.
apiVersion: v1
kind: Pod
metadata:
  name: pod-5
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "if [ -f /workdir/foo ]; then sleep 3600; else exit; fi"]
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  initContainers:
  - name: install
    image: busybox
    command: ["sh", "-c", "touch /workdir/foo; hostname > /workdir/foo"]
    volumeMounts:
    - name: workdir
      mountPath: "/workdir"
  volumes:
  - name: workdir
    emptyDir: {}
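To check that this works, you can apply the manifest and read the file from the main container (assuming the manifest above is saved as pod-5.yaml; the file name is just an example):

kubectl apply -f pod-5.yaml
# the init container wrote /workdir/foo into the shared volume,
# so the main container stays up and can read it
kubectl exec pod-5 -c busybox -- cat /workdir/foo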
If I don't use a volume and instead create the file in the init container and try to read it from the other container, it does not work.
apiVersion: v1
kind: Pod
metadata:
  name: pod-5
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "if [ -f /workdir/foo ]; then sleep 3600; else exit; fi"]
  initContainers:
  - name: install
    image: busybox
    command: ["sh", "-c", "touch /workdir/foo; hostname > /workdir/foo"]
Why? I thought all containers within a pod should share both network and file system.
Containers within the same pod share the network namespace and the IPC namespace, but each container has its own mount namespace and filesystem. Hence we use volumes to share files between them. To learn more about namespaces, check the Linux namespaces documentation.
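For example, here is a minimal sketch that shows the shared network namespace (the pod and container names are just illustrative): the busybox container can reach the nginx container over localhost, even though their filesystems are separate.

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: probe
    image: busybox
    # localhost refers to the pod's shared network namespace,
    # so this reaches the nginx container without a Service
    command: ["sh", "-c", "sleep 5; wget -qO- http://localhost:80 && sleep 3600"]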
Let’s start by explaining what a Pod is in the first place. A Pod is the smallest unit that can be deployed and managed by Kubernetes. In other words, if you need to run a single container in Kubernetes, then you need to create a Pod for that container. At the same time, a Pod can contain more than one container, usually because these containers are relatively tightly coupled. How tightly coupled? Well, think of it this way: the containers in a pod represent processes that would have run on the same server in a pre-container world.
Now think of a pod as your local machine where you are trying to run your containers.
Say you have two containers: the init container (container 1) and the main container (container 2), both running in the same network on your local environment. If you create a file in one container and expect the file to be present in the other container, that simply isn't the case: the file lives in a different container, in its own filesystem, and there is no way for the other container to access it. But to share a filesystem between two containers, you can create a volume mount from your local machine into container 1 and then mount the same path into container 2. That way both containers share the filesystem.
The same thing applies to a Pod in a Kubernetes environment.
In Kubernetes, you can use a shared Kubernetes Volume as a simple and efficient way to share data between containers in a Pod. For most cases, it is sufficient to use a directory on the host that is shared with all containers within the Pod.
I've been searching and every answer seems to be the same example (https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/): in a pod you can create an empty volume, then mount it into two containers, and any content written in that mount will be seen in each container. While this is fine, my use case is slightly different.
Container A
/opt/content
Container B
/data
Container A has an install of about 4G of data. What I would like to do is mount /opt/content into Container B at /content. This way the 4G of data is accessible to Container B at runtime and I don't have to copy content or specially build Container B.
My question: is this possible? If it is, what would be the proper pod syntax?
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /opt/content
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /content
From my research and testing, the best I can tell is that within a pod two containers cannot see each other's file system. The volume mount gives each container a mount at the specified path (as the example shows), and anything written there after that point is seen by both. This works great for logs and similar data.
In my context this proves not to be possible, and creating the mount and then having Container A copy the 4G directory into it is too time consuming for this to be an option.
Best I can tell, the only way to do this is to create a Persistent Volume (or something similar) and mount it in Container B. That way Container A's contents are stored in the Persistent Volume and can easily be mounted when needed. The only issue is that the Persistent Volume would have to be set up in every Kube cluster we define, which is the pain point.
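Roughly what I have in mind is something like the sketch below (the claim name, size, and mount path are placeholders, and the volume would still need to be populated with the content once, e.g. by a job or an init step):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: content-pvc            # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi             # sized for the ~4G of content
---
apiVersion: v1
kind: Pod
metadata:
  name: container-b-pod        # placeholder name
spec:
  containers:
  - name: container-b
    image: debian
    volumeMounts:
    - name: content
      mountPath: /content      # the pre-populated data appears here
  volumes:
  - name: content
    persistentVolumeClaim:
      claimName: content-pvc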
If any of this is wrong and I just didn't find the right document please correct me. I would love to be able to do this.
Your code example in your question should work. Both containers use the same volume; you just mount it at different locations in each container.
nginx-container will have the shared-data content in /opt/content and debian-container will have it in /content.
With mountPath you specify where the volume should be mounted in the container.
When a container is started, first the container image (or, more precisely, the layers of the image) is mounted. Afterwards, your custom volumes are mounted, hiding any data from the image at and below the mount path. So sharing data from an image among several containers without copying it is not possible.
The typical solution remains to use an init container which downloads or copies the actual data into an ephemeral volume, which is then shared with one or more other containers (https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-initialization/#create-a-pod-that-has-an-init-container).
initContainers:
- name: init
  image: <image-containing-the-data-based-on-some-basic-image>
  command: ["sh", "-c", "cp -ar /opt/content/* /mnt/target/"]
  volumeMounts:
  - name: shared-data
    mountPath: /mnt/target
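For completeness, the rest of the pod spec might look roughly like this (the container name and image are just placeholders for whatever consumes the data):

containers:
- name: consumer               # placeholder main container
  image: debian
  volumeMounts:
  - name: shared-data
    mountPath: /opt/content    # the data copied by the init container shows up here
volumes:
- name: shared-data
  emptyDir: {}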
What you would actually need is a kind of container storage interface (CSI) driver that supports creating volumes from container images. I found two projects which do exactly that, but neither states that it is ready for production.
https://github.com/kubernetes-csi/csi-driver-image-populator
https://github.com/warm-metal/csi-driver-image
I have a MySQL container I'm deploying through k8s in which I am mounting a directory that contains a script; once the pod is up and running, the plan is to execute that script.
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  template:
    spec:
      volumes:
      - name: mysql-stuff
        hostPath:
          path: /home/myapp/scripts
          type: Directory
      containers:
      - name: mysql-db
        image: mysql:latest
        volumeMounts:
        - name: mysql-stuff
          mountPath: /scripts/
Once I have it up and running, I run kubectl exec -it mysql-db -- bin/sh and ls scripts, and it returns nothing; the script that should be inside it is not there, and I can't work out why. For the sake of getting this working I have added no security context and am running the container as root. Any help would be greatly appreciated.
Since you are running your pod in a minikube cluster, and minikube itself runs in a VM, the path mapping here refers to the minikube VM's filesystem, not your actual host.
However, you can map your actual host path to the minikube path, and then it will become accessible:
minikube mount /home/myapp/scripts:/home/myapp/scripts
See more here
https://minikube.sigs.k8s.io/docs/handbook/mount/
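Note that minikube mount runs in the foreground, so keep it running in a separate terminal. As an optional sanity check, you can then confirm the directory is visible inside the minikube VM:

minikube ssh -- ls /home/myapp/scripts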
I want to run a microservice which uses a DB. The DB needs to be deployed in the same Kubernetes cluster as well, using a PVC/PV. What Kubernetes service/command should I use to implement the following logic:
Deploy the DB instance
If 1 is successful, then deploy the microservice; otherwise return to 1 and retry (if it fails 100 times, stop and raise an alarm)
If 2 is successful, work with it and autoscale if needed (the Kubernetes autoscaling option)
I'm concerned mostly about 1-2: the service cannot work without the DB, but at the same time they need to be in different pods (or am I wrong, and is it better to put the two containers, DB and service, in the same pod?)
I would say you should add an initContainer to your microservice which waits for the DB service; once it is ready, the microservice will be started.
e.g.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
As for the command, simply use kubectl apply with your YAMLs (with the initContainer configured in your application).
If you want to do that in a more automated way, you can think about using FluxCD/ArgoCD.
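For example (the file names are just placeholders for your own manifests):

# deploy the database (Deployment/StatefulSet, PVC and Service) first, then the app
kubectl apply -f mydb.yaml
kubectl apply -f microservice.yaml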
As for the question from the comments, do the containers that run before the main container and the main container itself have to be in the same pod?
Yes, they have to be in the same pod. The init container keeps running until, for example, the database service becomes available, and only then does the main container start. There is a great example of this in the initContainer documentation above.
Sorry if this is a noob question:
I am creating a pod in a kubernetes cluster using a pod definition yaml file.
This pod defines just one container. I'd like to ... copy a few files to a particular directory in the container.
sort of like in docker-compose:
volumes:
  - ./testHelpers/certs:/var/private/ssl/certs
Is it possible to do that at this point (the point of defining the pod)?
If not, what could my alternatives be?
PS - I understand that the sample from docker-compose is very different, since it maps a local directory to a directory in the container.
It's better to use volumes in the pod definition.
Initialize the pod before the container runs.
Apart from this, you can also use a ConfigMap to store certs and other config files you need, and then access them in the container as volumes.
More details here
You should create a config map; you can create it from files or from a directory.
kubectl create configmap my-config --from-file=configuration/
And then mount the config map as a directory:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: test
    image: busybox
    volumeMounts:
    - name: my-config
      mountPath: /etc/config
  volumes:
  - name: my-config
    configMap:
      name: my-config
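Once the pod is running, you can check that the files from configuration/ show up in the container (note that busybox with no command exits almost immediately, so while testing you may want to give it something like command: ["sleep", "3600"]):

kubectl exec configmap-pod -- ls /etc/config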
Kubernetes features quite a few types of volumes, including emptyDir:
An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. Containers in the pod can all read and write the same files in the emptyDir volume, though that volume can be mounted at the same or different paths in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever.
...
By default, emptyDir volumes are stored on whatever medium is backing the node.
Is the emptyDir actually mounted on the node, and accessible to a container outside the pod, or to the node FS itself?
Yes, it is also accessible on the node. It is bind-mounted into the container (sort of). The source directories are under /var/lib/kubelet/pods/PODUID/volumes/kubernetes.io~empty-dir/VOLUMENAME.
You can find the location on the host like this:
sudo ls -l /var/lib/kubelet/pods/`kubectl get pod -n mynamespace mypod -o 'jsonpath={.metadata.uid}'`/volumes/kubernetes.io~empty-dir
You can list all emptyDir volumes on the host using this command:
df
To view only the entry for a specific volume:
df | grep -i cache-volume
where cache-volume is the volume name in your pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}