Mounting container filesystem into sidecar in k8s - kubernetes

I'd like to run perf record and perf script on a process running in a container in Kubernetes (actually on OpenShift). Following the approach from this blog post I was able to get perf record working in a sidecar. However, perf script in the sidecar cannot resolve symbols, because those are present only in the main container.
Therefore I'd like to mount the complete filesystem of the main container into the sidecar, e.g. under /main, and then run perf script --symfs=/main. I don't want to copy the complete filesystem into an emptyDir. I've found another nice blog post about using an overlay filesystem; however, IIUC I would need to create the overlay in the main container, and I don't want to run that as a privileged container or require commands (like mount) to be present in it.
Is there any way to create a sort of reverse mount, exposing part of a container's filesystem so it can be mounted by other containers within the same pod?
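For context, the sidecar setup described above typically looks something like the following. This is a minimal sketch only: the image names and the shared emptyDir for perf.data are assumptions, and the open question is precisely how to make the main container's root filesystem appear under /main in the sidecar.

apiVersion: v1
kind: Pod
metadata:
  name: app-with-perf-sidecar
spec:
  shareProcessNamespace: true            # lets the sidecar see and profile the main container's process
  containers:
  - name: main
    image: registry.example.com/my-app:latest         # assumed application image
  - name: perf-sidecar
    image: registry.example.com/perf-tools:latest     # assumed image with perf installed
    command: ['sleep', 'infinity']       # kept alive so perf record / perf script can be run via kubectl exec
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]               # perf needs elevated capabilities in the sidecar
    volumeMounts:
    - name: perf-data
      mountPath: /perf-data              # shared location for perf.data
    # goal: run `perf script --symfs=/main` here, with /main showing the main container's rootfs
  volumes:
  - name: perf-data
    emptyDir: {}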

Related

Automatically transfer files between containers using Kubernetes

I want to make a container that is able to transfer files between itself and other containers on the cluster. I have multiple containers that are responsible for executing a task, and they are waiting to get an input file to do so. I want a separate container to be responsible for handling files before and after the task is executed by the other containers. As an example:
have all files on the file manager container.
let the file manager container automatically copy a file to a task executing container.
let task executing container run the task.
transfer the output of the task executing container to the file manager container.
And I want to do this automatically, so that, for example, 400 input files can be processed into output files in this way. What would be the best way to realise such a process with Kubernetes? Where should I start?
A simple approach would be to set up NFS or use a managed file system such as AWS EFS.
You can mount the file system or NFS share directly into the pods using the ReadWriteMany access mode.
ReadWriteMany - multiple pods can access the same volume.
If you don't want to use a managed service like EFS, you can also set up the storage on Kubernetes yourself; check out MinIO: https://min.io/
All files are saved on the shared file system, and each pod can simply access them from there as required.
You can create different directories to separate the outputs.
If the pods only need to read the files, you can also use the ReadOnlyMany access mode.
If you are on GCP you can check out this nice document: https://cloud.google.com/filestore/docs/accessing-fileshares
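A minimal sketch of this pattern, assuming a storage class that supports ReadWriteMany; the claim name, storage class and image names below are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files                 # assumed name
spec:
  accessModes:
  - ReadWriteMany                    # multiple pods can read and write the same volume
  storageClassName: nfs-client       # assumed RWX-capable storage class (NFS, EFS, Filestore, ...)
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: file-manager
spec:
  containers:
  - name: manager
    image: registry.example.com/file-manager:latest   # assumed image
    volumeMounts:
    - name: shared
      mountPath: /data               # e.g. /data/input and /data/output directories
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-files
---
apiVersion: v1
kind: Pod
metadata:
  name: task-executor
spec:
  containers:
  - name: executor
    image: registry.example.com/task-executor:latest  # assumed image
    volumeMounts:
    - name: shared
      mountPath: /data               # same volume, so files written by the manager are visible here
  volumes:
  - name: shared
    persistentVolumeClaim:
      claimName: shared-files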

Kubernetes pod went down

I am pretty new to Kubernetes, so I don't have much idea. The other day a pod went down and I was wondering if I would be able to recover its tmp folder.
So basically I want to know: when a pod in Kubernetes goes down, does it lose access to the "/tmp" folder?
Unless you configure otherwise, this folder will be considered storage within the container, and the contents will be lost when the container terminates.
Similarly to how you can run a container in docker, write something to the filesystem within the container, then stop and remove the container, start a new one, and find the file you wrote within the container is no longer there.
If you want to keep the /tmp folder contents between restarts, you'll need to attach a persistent volume and mount it as /tmp within the container. The caveat is that you then cannot use the same volume with other replicas in a deployment unless you use a read-write-many capable filesystem underneath, like NFS.
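A minimal sketch of that setup; the claim name, size and image are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tmp-data                     # assumed name
spec:
  accessModes:
  - ReadWriteOnce                    # single-node access; use ReadWriteMany (e.g. NFS) for multiple replicas
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: registry.example.com/my-app:latest     # assumed image
    volumeMounts:
    - name: tmp
      mountPath: /tmp                # /tmp now survives container restarts
  volumes:
  - name: tmp
    persistentVolumeClaim:
      claimName: tmp-data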

Is there a way to specify a tar file of docker image in manifest file for kubernetes?

Is there a way to specify a tar file of a docker image in a deployment manifest file for kubernetes? The nodes have access to a mounted network drive that will have the tar file. There's a post where the image is loaded by docker on each node, but I was wondering if there's a way just to specify the tar file and have Kubernetes do the loading and running.
--edit--
To be more exact: say I have a mounted network drive on each node, is there a way, with just the manifest file, to instruct Kubernetes to load that image directly from the tar file and not have to put it into a docker registry?
In general, no: Kubernetes can only access container images from a registry, not from a network drive; see the documentation.
However, you could have a private registry inside your cluster (see docs). You could also have the images locally on the nodes (pre-pulled images) and have Kubernetes access them from there by setting imagePullPolicy to Never (see docs).
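A minimal sketch of the pre-pulled approach; the image name and tag are assumptions, and the image must already have been loaded on every node that can schedule the pod (e.g. with docker load -i or ctr images import):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:1.0                # assumed tag; must match the tag inside the tar file
        imagePullPolicy: Never           # never contact a registry; fail if the image is not on the node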
You have provided quite limited information about your environment and what it looks like.
Two things come to mind.
Use an initContainer to download the file using wget or similar.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
That way you can be sure that the tar file will be downloaded before your application starts. An example can be found here.
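A minimal sketch of that idea; the URL, volume name and images are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-downloaded-tar
spec:
  initContainers:
  - name: fetch-tar
    image: busybox:1.36
    command: ['wget', '-O', '/work/image.tar', 'http://fileserver.example.com/image.tar']   # assumed URL
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: app
    image: registry.example.com/my-app:latest      # assumed image
    volumeMounts:
    - name: workdir
      mountPath: /work               # the tar file is already here when the app starts
  volumes:
  - name: workdir
    emptyDir: {}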
Use a volume mount
In your deployment, statefulset or pod (not sure what you are using), you can mount a volume into the pod. After that, the path from the volume will be accessible inside the pod. Please keep in mind that you have to use the proper access modes.
To load the .tar file you can use some bash commands like in this documentation.

How to mimic Docker ability to pre-populate a volume from a container directory with Kubernetes

I am migrating my previous deployment made with docker-compose to Kubernetes.
In my previous deployment, some containers have data created at build time under certain paths, and these paths are mounted as persistent volumes.
Therefore, as the Docker volume documentation states, the persistent volume (not a bind mount) will be pre-populated with the container directory's content.
I'd like to achieve this behavior with Kubernetes and its persistent volumes. How can I do that? Do I need to add some kind of logic, using scripts, to copy my container's files to the mounted path when the data is not present the first time the container starts?
Possibly related question: Kubernetes mount volume on existing directory with files inside the container
I think your options are
ConfigMap (are "some data" configuration files?)
Init containers (as mentioned; see the sketch at the end of this answer)
CSI Volume Cloning (a clone, combined with an init container or your first app container)
there used to be a gitRepo volume; it was deprecated in favour of init containers, from which you can clone your config and data
HostPath volume mount is an option too
An NFS volume is probably a very reasonable option and similar from an approach point of view to your Docker Volumes
Storage types: NFS, iSCSI, awsElasticBlockStore, gcePersistentDisk and others can be pre-populated. There are constraints. NFS is probably the most flexible for sharing bits & bytes.
FYI
The subPath might be of interest too depending on your use case, and
PodPreset might help in streamlining the operation across your fleet of pods
HTH
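For the init-container option, a minimal sketch of the pre-populate pattern; the images, paths and claim name are assumptions. The init container runs the same image as the app and copies the build-time data into the volume only if the volume is empty; the app then mounts the populated volume at the original path.

apiVersion: v1
kind: Pod
metadata:
  name: prepopulated-app
spec:
  initContainers:
  - name: seed-data
    image: registry.example.com/my-app:latest      # assumed: same image that contains the build-time data
    command: ['sh', '-c', '[ "$(ls -A /data)" ] || cp -a /app/data/. /data/']   # copy only if the volume is empty
    volumeMounts:
    - name: app-data
      mountPath: /data
  containers:
  - name: app
    image: registry.example.com/my-app:latest
    volumeMounts:
    - name: app-data
      mountPath: /app/data           # the path that was a named volume under docker-compose
  volumes:
  - name: app-data
    persistentVolumeClaim:
      claimName: app-data            # assumed existing PVC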

Trying to mount a Lustre filesystem in a pod: tries and problems

For our data science platform we have user directories on a Lustre filesystem (HPC world) that we want to present inside Jupyter notebooks running on OpenShift (they are spawned on demand by JupyterHub after authentication and other custom actions).
We tried the following approaches:
Mounting the Lustre filesystem in a privileged sidecar container in the same pod as the notebook, sharing the mount through an emptyDir volume (that way the notebook container does not need to be privileged); a sketch of this setup is at the end of this post. We have the mount/umount of Lustre occur in the sidecar as post-start and pre-stop hooks in the pod lifecycle. The problem is that sometimes when we delete the pod, umount does not work or hangs for whatever reason, and as emptyDirs are "destroyed" this flushes everything in Lustre because the fs is still mounted. A really bad result...
Mounting the Lustre filesystem directly on the node and create a hostPath volume in the pod and a volumeMount in the notebook container. It kind of works, but only if the container is run as privileged, which we don't want of course.
We tried more specific SCCs which authorize the hostPath volume (hostaccess, hostmount-anyuid), and also made a custom one with the hostPath volume and 'allowHostDirVolumePlugin: true', but with no success. We can see the mount from the container, but even with everything wide open (777), we get "permission denied" for whatever we try to do on the mount (ls, touch, ...). Again, it works only if the container is privileged. It does not seem to be SELinux related; at least we have no alerts.
Does anyone see where the problem is, or has another suggestion of solution we can try?
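For reference, a minimal sketch of the first approach described above; the images, the Lustre source and the mount path are assumptions, Bidirectional mount propagation requires a privileged container, and the cleanup-on-delete problem described in the question is not solved by this:

apiVersion: v1
kind: Pod
metadata:
  name: notebook-with-lustre-sidecar
spec:
  containers:
  - name: lustre-mounter
    image: registry.example.com/lustre-client:latest    # assumed image with the Lustre client
    command: ['sleep', 'infinity']        # keep the sidecar alive
    securityContext:
      privileged: true                    # needed for mount and for Bidirectional propagation
    lifecycle:
      postStart:
        exec:
          command: ['sh', '-c', 'mount -t lustre mgs@tcp:/fsname /mnt/lustre']   # assumed Lustre source
      preStop:
        exec:
          command: ['sh', '-c', 'umount /mnt/lustre']
    volumeMounts:
    - name: lustre
      mountPath: /mnt/lustre
      mountPropagation: Bidirectional     # pushes the mount back into the shared volume
  - name: notebook
    image: registry.example.com/jupyter-notebook:latest  # assumed notebook image
    volumeMounts:
    - name: lustre
      mountPath: /mnt/lustre
      mountPropagation: HostToContainer   # receives the mount created by the sidecar
  volumes:
  - name: lustre
    emptyDir: {}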