How to mount Kubernetes directory as some other directory? - kubernetes

I have a Kubernetes app that I want to copy to another environment.
The original app uses container directory /data/X and synchronizes it with NFS server.
I have the following problems:
I want to run new app in another environment and must not overwrite NFS storage of the original app
I need to create /data/X directory in the container of new app, as the developers rely on this directory inside container - they install software there and create files inside
My question is: is there any way to create a directory /data/Y inside the container in the new environment and have it act as /data/X, so that the software installs correctly inside the container, files are created, and the NFS storage of the original app is not overwritten?

Do you just want the /data/X folder to be inside the running container and not on the NFS for this new environment?
If so you can make sure that /data/X folder is created as part of the image, for example by putting this line in the Dockerfile
RUN mkdir -p /data/X
In the original environment you mount the NFS storage on /data/X. This behaves like any mount in Linux: it shadows everything in /data/X on the local file system, and reads and writes go to the NFS share.
In the new environment you are creating, you just skip mounting anything there. The developers can still use /data/X as they usually would, but the content will not be persisted and it will not be possible to share it between containers.
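As a sketch of the original environment's mount (all names here — `app`, `my-app:latest`, `nfs.example.com`, `/exports/X` — are placeholders, not taken from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest        # placeholder image; /data/X is created in its Dockerfile
      volumeMounts:
        - name: data
          mountPath: /data/X      # shadows the baked-in /data/X
  volumes:
    - name: data
      nfs:
        server: nfs.example.com   # placeholder NFS server
        path: /exports/X
```

In the new environment, omit the `volumeMounts` and `volumes` entries, and the container simply uses the /data/X directory baked into the image.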

Related

Kubernetes: Merge files from a volume into a target directory, without using subPath

Using subPath, I can mount individual files from a volume into a container directory while still retaining the other files already present in the directory.
However, subPath requires creating an entry in the manifest for each file to be mounted.
I want to mount all files in a volume without naming each file explicitly in the manifest.
As far as possible I do not want to use init containers.
Is this possible in Kubernetes?
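For context, the per-file `subPath` pattern the question wants to avoid repeating looks roughly like this (ConfigMap and file names are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest             # placeholder
      volumeMounts:
        - name: config
          mountPath: /etc/app/a.conf   # only this one file is mounted
          subPath: a.conf
        - name: config
          mountPath: /etc/app/b.conf   # each file needs its own entry
          subPath: b.conf
  volumes:
    - name: config
      configMap:
        name: app-config               # placeholder ConfigMap
```

Existing files in /etc/app remain visible; the cost is one `volumeMounts` entry per file.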

Kubernetes pod went down

I am pretty new to Kubernetes, so I don't know much yet. The other day a pod went down, and I was wondering whether I could recover the /tmp folder.
So basically I want to know that when a pod in Kubernetes goes down, does it lose access to the "/tmp" folder ?
Unless you configure otherwise, this folder is ephemeral storage within the container, and its contents are lost when the container terminates.
This is similar to Docker: run a container, write something to the filesystem inside it, stop and remove the container, then start a new one, and the file you wrote is no longer there.
If you want to keep the /tmp folder contents between restarts, you'll need to attach a persistent volume and mount it as /tmp within the container. The caveat is that you cannot use that same volume with other replicas in a deployment unless the underlying filesystem is read-write-many capable, like NFS.
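A minimal sketch of attaching a persistent volume claim as /tmp (all names and sizes are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tmp-pvc
spec:
  accessModes:
    - ReadWriteOnce          # single node; an NFS-backed class would allow ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:latest   # placeholder
      volumeMounts:
        - name: tmp
          mountPath: /tmp    # contents now survive container restarts
  volumes:
    - name: tmp
      persistentVolumeClaim:
        claimName: tmp-pvc
```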

Is there a way to specify a tar file of docker image in manifest file for kubernetes?

Is there a way to specify a tar file of a docker image in a deployment manifest file for kubernetes? The nodes have access to a mounted network drive that will have the tar file. There's a post where the image is loaded by docker on each node, but I was wondering if there's a way just to specify the tar file and have Kubernetes do the loading and running.
--edit--
To be more exact: say I have a mounted network drive on each node. Is there a way, with just the manifest file, to instruct Kubernetes to load the image directly from the tar file, without having to put it into a docker registry?
In general, no, Kubernetes can only access container images from a registry, not from a network drive, see documentation.
However, you could have a private registry inside your cluster (see docs). You could also have the images locally on the nodes (pre-pulled images) and have Kubernetes access them from there by setting imagePullPolicy to Never (see docs).
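With pre-pulled images, the manifest fragment might look like this (assuming the image was loaded on each node beforehand, e.g. with `docker load -i /mnt/share/my-app.tar`; names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: my-app:1.0        # must already be present on the node
      imagePullPolicy: Never   # never contact a registry
```

With `imagePullPolicy: Never`, the pod fails to start if the image is missing from the node, so the tar must be loaded before scheduling.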
You have provided quite limited information about your environment and what it looks like.
Two things come to my mind.
Use initContainer to download this file using wget or similar.
Init containers are exactly like regular containers, except:
Init containers always run to completion.
Each init container must complete successfully before the next one starts.
That way you can be sure that the tar file is downloaded before your application starts.
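A sketch of the init container approach (the URL, image, and volume names are invented for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
    - name: fetch
      image: busybox
      # download the tar into a shared volume before the main container starts
      command: ["wget", "-O", "/work/app.tar", "http://fileserver.local/app.tar"]
      volumeMounts:
        - name: work
          mountPath: /work
  containers:
    - name: app
      image: my-app:latest     # placeholder
      volumeMounts:
        - name: work
          mountPath: /work     # /work/app.tar is available here
  volumes:
    - name: work
      emptyDir: {}
```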
Use Mount Volume
In your deployment, statefulset, or pod (not sure which you are using), you can mount a volume into the pod. After that, the specified path from the volume will be accessible inside the pod. Please keep in mind that you have to use the proper access modes.
To unpack or run the .tar file you can use standard shell commands, as shown in the documentation.
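Mounting the network drive into the pod could be sketched with a hostPath volume (this assumes the drive is mounted at the same path on every node; all paths and names are placeholders):

```yaml
# fragment of a pod spec
spec:
  containers:
    - name: app
      image: my-app:latest        # placeholder
      volumeMounts:
        - name: share
          mountPath: /mnt/share   # tar file visible at /mnt/share/app.tar
  volumes:
    - name: share
      hostPath:
        path: /mnt/network-drive  # where the drive is mounted on the node
        type: Directory
```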

Is it possible to mount a file in read/write mode in kubernetes deployment

I have an application running inside a Kubernetes pod that updates a user configuration file, and on every deployment the data is flushed, because the file resides in a folder that can't be mounted directly. So I created an empty ConfigMap, mounted the file from it using subPath mounting, and set the file's defaultMode to 777, but my application is still unable to update the content of the file.
Is there a way I can mount a file with read/write permissions enabled for all users, so my application can update the file at runtime?
No, a ConfigMap mount is read-only, since you need to go through the API to update things. If you just want temporary scratch storage, you can use an emptyDir volume, but it sounds like you want this data to stick around, so check out the docs on persistent volumes (https://kubernetes.io/docs/concepts/storage/persistent-volumes/). There are a lot of options and a lot of complexity; you'll need to work out the best match for your use case.
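One common workaround (an assumption on my part, not something the answer spells out) is to seed a writable emptyDir from the ConfigMap in an init container; all names below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  initContainers:
    - name: seed-config
      image: busybox
      # copy the read-only ConfigMap file into a writable volume
      command: ["sh", "-c", "cp /seed/user.conf /config/user.conf"]
      volumeMounts:
        - name: seed
          mountPath: /seed
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: my-app:latest     # placeholder
      volumeMounts:
        - name: config
          mountPath: /etc/app  # user.conf is writable here at runtime
  volumes:
    - name: seed
      configMap:
        name: user-config      # placeholder ConfigMap
    - name: config
      emptyDir: {}
```

The emptyDir contents are lost when the pod is deleted, so this only helps if the file can be re-seeded on every start.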

Kubernetes persistent volume claim on /var/www/html problem

I have a Magento deployment on nginx which uses a persistent volume and a persistent volume claim. Everything works fine, but I am struggling with one problem. I use an initContainer to install Magento via the CLI (which works fine), but as soon as my pod starts and mounts the PVC on /var/www/html (my webroot), the data previously installed in the initContainer is lost (or rather, replaced by the new mount). My workaround was to install Magento into /tmp/magento in the initContainer and, as soon as the "real" pod is up, copy the data from /tmp/magento to /var/www/html. As you can imagine this takes a while and is kind of a permission hell, but it works.
Is there any way to install my app directly into the target directory, without the mount shadowing my files? I have to use a PV/PVC because I mount the pod directory via NFS, and I don't want to lose my files.
Update: The Magento deployment is inside a docker image and is installed during the docker build. So if I install the data into the target location, the Kubernetes mount replaces it with an empty mount. That's the main reason for the workaround. The goal is to have the whole installation inside the image.
If Magento is already installed inside the image at some path (say /tmp/magento) but you want it to be accessible at /var/www/html/magento, why not just create a symlink pointing to the existing location?
So your Magento will be installed during the image build process and in the entrypoint an additional command
ln -s /tmp/magento /var/www/html/magento
will be run before the Nginx server starts. No need for initContainers.
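Expressed in the pod spec rather than in the image's entrypoint, the same idea might look like this (the image name and nginx invocation are assumptions):

```yaml
# fragment of a pod spec
  containers:
    - name: magento
      image: my-magento:latest   # placeholder; Magento baked into /tmp/magento at build time
      command: ["sh", "-c"]
      args:
        - ln -s /tmp/magento /var/www/html/magento && exec nginx -g 'daemon off;'
```

The `exec` keeps nginx as PID 1 so it receives termination signals directly.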