Share filesystem across containers in a pod - kubernetes

Is there a way to share the filesystem of two containers in a multi-container pod, without using shared volumes?
I have the following pod manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pod
  name: pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod
  template:
    metadata:
      labels:
        app: pod
    spec:
      containers:
      - image: nginx:latest
        name: nginx
      - image: jenkins
        name: jenkins
I want to access the /var/jenkins_home path, which is available in the jenkins container, from the nginx container.
This is just for experimental purposes; I am trying to learn ways to share a filesystem (and things in general) across containers in a pod.

You can't share files between containers without some sort of shared volume.
Part of the goal of a containerized system is that the container filesystems are isolated from each other. There are a huge number of practical problems with sharing files specifically (what if the containers are on different nodes? what if you have three replicas each of Jenkins and Nginx? what if they're all trying to write the same files?) and in general it's better to just avoid sharing files altogether if that's a possibility.
In the specific example you've shown, the lifecycle of a Jenkins CI system and an Nginx server will just be fundamentally different; whenever Jenkins builds something you don't want to restart it to also restart the Web server, and you could very easily want to scale up the Web tier without adding Jenkins workers. A better approach here would be to have Jenkins generate custom Docker images, push them to a registry, and then use the Kubernetes API to create a separate Nginx Deployment.
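As a rough sketch of that last step (the Deployment name and registry path here are hypothetical, not taken from your manifest), a pipeline stage could push the built image and then roll the separate web Deployment onto it:

# Hypothetical pipeline step: "web" and the registry path are placeholder names
kubectl set image deployment/web nginx=registry.example.com/site:build-42
kubectl rollout status deployment/web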
In most cases (especially because of the scaling considerations) you should avoid multi-container pods altogether.
(A more specific example of a case where this setup does make sense is if you're storing credentials somewhere like a Hashicorp Vault server. You would need an init container to connect to Vault, retrieve the credentials, and deposit them in an emptyDir volume, and then the main container can start up having gotten those credentials. As far as the main server container is concerned it's the only important part of this pod, and logically the pod is nothing more than the server container with some auxiliary stuff.)
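A minimal sketch of that init-container pattern might look like the following; the Vault image, CLI call, and secret path are only illustrative (authentication setup is omitted), not a working Vault configuration:

apiVersion: v1
kind: Pod
metadata:
  name: server-with-creds
spec:
  volumes:
  - name: creds
    emptyDir: {}
  initContainers:
  - name: fetch-creds
    image: hashicorp/vault      # placeholder image; VAULT_ADDR and token setup omitted
    # Illustrative only: write one secret field into the shared emptyDir
    command: ['sh', '-c', 'vault kv get -field=password secret/app > /creds/password']
    volumeMounts:
    - name: creds
      mountPath: /creds
  containers:
  - name: server
    image: nginx
    volumeMounts:
    - name: creds
      mountPath: /creds
      readOnly: true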

The sample below shows how to share a volume between containers:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]

Related

How to check other containers' file systems within a k8s pod?

I have a Pod which contains two containers, nginx and busybox. Now I want to access the files within the nginx container from the busybox container, say by running 'ls /etc/nginx' in the busybox container to list the files there. Is there any configuration in a k8s pod that allows me to achieve this?
Below is my current pod yaml file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginxbusybox
  name: nginxbusybox
spec:
  shareProcessNamespace: true
  containers:
  - image: nginx
    name: nginx
  - image: busybox
    name: busybox
    command:
    - sleep
    - '86400'
P.S. This is for debugging a container that doesn't have the common Linux tools inside.
OK, I've found a way to achieve this: by using kubectl debug and checking /proc/1/root/, I could see the filesystem of the nginx container.
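Roughly, the commands look like this (container and image names match the manifest above; exact flags may vary with your kubectl version):

# Attach an ephemeral debug container that targets the nginx container's process namespace
kubectl debug -it nginxbusybox --image=busybox --target=nginx -- sh
# Inside the debug shell, nginx's root filesystem is reachable through its PID 1
ls /proc/1/root/etc/nginx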

Read-only folder to be shared to another pod

I have a pod that needs to create a lot of jobs.
I'd like to share a read-only folder.
How can I do it?
Several ideas come to mind (I'm a newbie to Kubernetes):
Ephemeral volumes seem a good choice, but I've read they cannot be shared with another pod.
I think NFS is overkill, too much for my needs.
Maybe I could build a data-only Docker image, but this is a deprecated feature of Docker.
kubectl cp could copy the data from the base pod to the pod created by the job.
What would be the best solution for this?
You can use a PersistentVolume and mount it as a read-only volume inside the pod via a PersistentVolumeClaim. To mount a volume read-only, set .spec.containers[*].volumeMounts[*].readOnly to true.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
      readOnly: true
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
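For completeness, the claim referenced as myclaim might be defined roughly like this; the access mode and size are only illustrative and depend on your storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce     # use ReadOnlyMany/ReadWriteMany if pods on several nodes need the data
  resources:
    requests:
      storage: 1Gi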
Check out these links:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes

K8S cronjob scheduling on existing pod?

I have my application running in K8S pods. My application writes logs to a particular path, for which I already have a volume mounted on the pod. My requirement is to schedule a CronJob that triggers once a week, reads the logs from that pod's volume, generates a report based on my script (which basically filters the logs by some keywords), and sends the report via mail.
Unfortunately I am not sure how to proceed, as I couldn't find any doc or blog that talks about integrating a CronJob with an existing pod or volume.
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: sidecar-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "discovery-cronjob"
  labels:
    app.kubernetes.io/name: discovery
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: log-report
            image: busybox
            command: ['/bin/sh']
            args: ['-c', 'cat /var/log/nginx/access.log > nginx.log']
            volumeMounts:
            - mountPath: /log
              name: shared-logs
          restartPolicy: Never
I see two things here that you need to know:
Unfortunately, it is not possible to schedule a CronJob on an existing pod. A Job creates its own pod(s) and needs to run to completion; if it ran inside a long-lived pod, there would be no way to tell whether the job had completed. This is by design.
Also, in order for one pod to see files written by another, you must use a PVC. The logs created by your app have to be persisted if your job wants to access them. Here you can find some examples of how to create ReadWriteMany PersistentVolumeClaims on your Kubernetes cluster:
Kubernetes allows us to provision our PersistentVolumes dynamically using PersistentVolumeClaims. Pods treat these claims as volumes. The access mode of the PVC determines how many nodes can establish a connection to it. We can refer to the resource provider's docs for their supported access modes.
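As a sketch (the storage class name is a placeholder for whatever RWX-capable class your cluster provides, for example an NFS provisioner), the shared claim could look like this, and both the web server pod and the CronJob's job template would then mount it in place of the emptyDir:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs-pvc
spec:
  accessModes:
  - ReadWriteMany                # requires a backend that supports RWX (NFS, CephFS, ...)
  storageClassName: nfs-client   # placeholder name
  resources:
    requests:
      storage: 5Gi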

K8S: Is it possible to use the disk of one node as cluster shared storage?

I have two PCs available at home and want to build a test lab for K8S.
One PC has a big drive, so I'm thinking about making that storage available to both nodes.
Most info I found is about local storage or fully external storage.
Ideally I want a full k8s solution that can be autoscaled via a Deployment (just add one more node with the needed affinity and it will be scaled there as well).
So, is it possible? Any guides on how to do that?
As @David Maze already mentioned, there is no native way in Kubernetes to share one node's storage across the cluster.
What you could do is set up an NFS server on the node that has the largest disk and share it across the pods.
You could set up the NFS server inside the k8s cluster as a pod, using the Docker NFS Server image.
The NFS pod might look like this:
kind: Pod
apiVersion: v1
metadata:
  name: nfs-server-pod
  labels:
    role: nfs
spec:
  containers:
  - name: nfs-server-container
    image: cpuguy83/nfs-server
    securityContext:
      privileged: true
    args:
    # Pass the paths to share to the Docker image
    - /exports
You will also have to expose the pod using a service:
kind: Service
apiVersion: v1
metadata:
  name: nfs-service
spec:
  selector:
    role: nfs
  ports:
  # Open the ports required by the NFS server
  # Port 2049 for TCP
  - name: tcp-2049
    port: 2049
    protocol: TCP
  # Port 111 for UDP
  - name: udp-111
    port: 111
    protocol: UDP
Once that is done, you will be able to use it from any pod in the cluster:
kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  # Add the server as an NFS volume for the pod
  volumes:
  - name: nfs-volume
    nfs:
      # URL for the NFS server
      server: 10.108.211.244 # Change this!
      path: /
  # In this container, we'll mount the NFS volume
  # and write the date to a file inside it.
  containers:
  - name: app
    image: alpine
    # Mount the NFS volume in the container
    volumeMounts:
    - name: nfs-volume
      mountPath: /var/nfs
    # Write to a file inside our NFS
    command: ["/bin/sh"]
    args: ["-c", "while true; do date >> /var/nfs/dates.txt; sleep 5; done"]
This is fully described in the Kubernetes Volumes Guide by Matthew Palmer; you should also read Deploying Dynamic NFS Provisioning in Kubernetes.
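If you would rather consume the NFS server through claims than inline nfs: volumes, a static PersistentVolume/PersistentVolumeClaim pair pointing at that Service could look roughly like this (server address and sizes are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.108.211.244   # Change this to your nfs-service ClusterIP
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""       # empty string so it binds to the static PV above
  resources:
    requests:
      storage: 10Gi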

Specify Depends on in Kubernetes deployment

I have two Kubernetes deployments, say backend and frontend. The frontend deployment depends on the backend deployment, meaning the frontend pods should only be created after the backend pods are ready. How can I specify this in the deployment yaml?
The solution you are looking for is an init container. A Pod can have one or more init containers, and they run one after another before the main Pod containers are started. Please be aware that each init container runs until completion.
So you can use an init container to check the availability of your back-end application. Here is an example:
apiVersion: v1
kind: Pod
metadata:
  name: front-end
  labels:
    app: front-end
spec:
  containers:
  - name: front-end
    image: node:boron
  initContainers:
  - name: init-backend
    image: busybox
    command: ['sh', '-c', 'until <put check condition for your back-end>; do echo waiting for back-end; sleep 2; done;']
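As a concrete (hypothetical) example of the check condition, if the back-end were exposed through a Service named backend on port 80, the initContainers section could simply wait for it to answer:

  initContainers:
  - name: init-backend
    image: busybox
    # Hypothetical check: poll the "backend" Service until it responds
    command: ['sh', '-c', 'until wget -q -O- http://backend:80/; do echo waiting for back-end; sleep 2; done']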
For more information, you can go through the documentation.