Helm: creating a volume with original image files - kubernetes-helm

I have an image, specifically Nginx, that I want to run with readOnlyRootFilesystem: true.
For this reason I started creating empty volumes (using volumes with emptyDir, plus the matching volumeMounts) for the places the container writes to, like /etc/nginx. This way I can use readOnlyRootFilesystem while still having specific writable folders mounted.
However, there are files in /etc/nginx, like /etc/nginx/template/nginx.tmpl, that the container needs, and mounting an empty volume over the directory hides them.
How can I create the volumes in such a way that they contain the files present in the image, so that files like /etc/nginx/template/nginx.tmpl are present in the new volume while readOnlyRootFilesystem can still be used?
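For reference, a minimal sketch of the setup described above (the image name and paths are illustrative): readOnlyRootFilesystem: true with an emptyDir mounted over a path the container writes to. The emptyDir starts empty, which is exactly why the image's own files under /etc/nginx disappear.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-readonly
spec:
  containers:
  - name: nginx
    image: nginx                # illustrative image
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: etc-nginx
      mountPath: /etc/nginx     # starts empty, shadowing the image's files here
  volumes:
  - name: etc-nginx
    emptyDir: {}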

Related

Kubernetes: Merge files from a volume into a target directory, without using subPath

Using subPath, I can mount individual files from a volume into a container directory while still retaining the other files already present in the directory.
However, subPath requires creating an entry in the manifest for each file to be mounted.
I want to mount all files in a volume without naming each file explicitly in the manifest.
As far as possible I do not want to use init containers.
Is this possible in Kubernetes?
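To illustrate the per-file overhead described above, here is a minimal sketch (the volume and file names are hypothetical): each file mounted via subPath needs its own volumeMounts entry.

volumeMounts:
- name: config-vol              # hypothetical volume holding the files
  mountPath: /etc/app/a.conf
  subPath: a.conf               # one manifest entry per file...
- name: config-vol
  mountPath: /etc/app/b.conf
  subPath: b.conf               # ...so every file must be named explicitly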

Kubernetes how different mountPath share data in single pod

I read an article (linked here) in which data is shared within the same Pod between 2 different containers. Both containers have a volumeMount on the volume named shared-data, but each uses a different mountPath.
My question is: if these mountPaths are not the same, how do the containers share data? And what is the path of the volume shared-data? My thinking was that both should use the same path in order to share data, so it seems I have mistaken some concept, but I am not sure which.
Kubernetes maintains the storage internally. It doesn't have a fixed path that you can see, and it doesn't matter if it gets mounted in the same place in different containers.
By way of analogy, imagine you have an external USB drive. If you've unplugged the drive, it doesn't make sense to ask "what is its path"; and if you plug it in and mount it on /mnt/usb on one machine, that doesn't stop you from mounting it on /home/me/app/data when you plug it into a different machine.
The volume does have a name within its pod (in your example, shared-data). If the volume is backed by a PersistentVolumeClaim that will also have a name. Potentially the matching PersistentVolume is something like an AWS EBS volume, and that will have a name. But none of these names are fixed filesystem paths, and for the most part you can't directly use these to access the file content.
There is only one volume being created, "shared-data", which is declared in the pod and is initially empty:
volumes:
- name: shared-data
  emptyDir: {}
It is shared between the two containers. That volume exists at the pod level, and its existence depends only on the pod, not on the two containers. However, it is bind-mounted by both, meaning whatever you add or edit in one container or the other affects the volume (in your case, adding index.html from the debian container). And yes, you can find the path of the volume on the node: /var/lib/kubelet/pods/PODUID/volumes/kubernetes.io~empty-dir/VOLUMENAME. There is a similar question answered here.
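A sketch of that pattern, loosely following the two-container example the question refers to (image names and paths are illustrative): the same shared-data volume is mounted at a different path in each container, yet both see the same files.

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html   # same volume...
  - name: debian
    image: debian
    command: ["/bin/sh", "-c", "echo Hello > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data               # ...different path, same files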

Is it possible to mount a file in read/write mode in kubernetes deployment

I have an application running inside a Kubernetes pod that updates a user configuration file, and on every deployment the data is flushed. Since the file resides in a folder that can't be mounted directly, I created an empty ConfigMap to mount that file via subPath, and also set the file's defaultMode to 777, but my application is still unable to update the content of the file.
Is there a way to mount a file with read/write permissions enabled for all users, so my application can update the file at runtime?
No, a ConfigMap mount is read-only, since updates have to go through the API. If you just want temporary scratch storage you can use an emptyDir volume, but it sounds like you want this data to stick around, so check out the docs on persistent volumes (https://kubernetes.io/docs/concepts/storage/persistent-volumes/). There are a lot of options and some complexity; you'll need to work out the best match for your use case.
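As a hedged sketch of the persistent-volume route suggested above (the claim name, image, and mount path are hypothetical): mount a PersistentVolumeClaim-backed volume over the directory the application writes to, instead of a ConfigMap, so the file is writable and survives redeployments.

spec:
  volumes:
  - name: user-config
    persistentVolumeClaim:
      claimName: user-config-pvc   # hypothetical claim, created separately
  containers:
  - name: app
    image: my-app                  # hypothetical image
    volumeMounts:
    - name: user-config
      mountPath: /app/config       # writable mount; data survives pod restarts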

Kubernetes - Share single file between containers (within the same pod)

I have an API that describes itself through an openapi3 file. This app is contained in a pod that also has a sidecar app that is supposed to read this file at startup time.
My problem is: how can my sidecar app read the openapi file from the other container?
I know I could do it using a volume (emptyDir) and modify the command so my API copies the file at startup time. I'd rather not go this route. I have been looking for a feature where I define a volume that maps to an existing folder in my app, rather than starting empty. Is there such a thing?
One of the simplest approaches is to use emptyDir: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
In the container that generates the file that needs to be shared, mount the emptyDir volume with write access and copy the file there. In the sidecar that needs to read the file, mount the same volume as read-only and read the file.
With this pattern, all containers in the pod can have access to the same file system with read / write as needed.
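A minimal sketch of that setup, assuming hypothetical image names and a /shared mount path: the API container copies the spec into the shared volume before starting its server, and the sidecar mounts the same volume with readOnly: true.

apiVersion: v1
kind: Pod
metadata:
  name: api-with-sidecar
spec:
  volumes:
  - name: openapi
    emptyDir: {}
  containers:
  - name: api
    image: my-api                # hypothetical; copies the spec at startup
    command: ["/bin/sh", "-c", "cp /app/openapi.yaml /shared/ && exec /app/server"]
    volumeMounts:
    - name: openapi
      mountPath: /shared         # writable
  - name: sidecar
    image: my-sidecar            # hypothetical reader
    volumeMounts:
    - name: openapi
      mountPath: /shared
      readOnly: true             # the sidecar only reads the file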

Where to store files in GKE container?

I'm having trouble understanding where to store files in a GKE container. I've seen the following documentation of the filesystem layout:
https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#file_system_layout
But then there are also Dockerfile examples on the web that copy executable files to other paths not listed in the layout, such as /usr or /go. One of these examples is here:
https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/master/hello-app/Dockerfile
Another question is: if I have runtime code that needs to download certain configuration information after the container starts, can I write the configuration file to the same directory as my executable? Or do I have to choose /etc or /tmp?
And finally, the layout documentation states that /home and /var store data for the lifetime of the boot disk. What does that mean? How does that compare to the lifetime of the pod or the node?
When you want to store something in a container, the storage is either ephemeral or permanent.
For ephemeral storage, just choose a path such as /tmp, /var, or /opt (this depends on the container setup as well). Once the container is restarted, the information you have is whatever was there at the moment the container was created, for instance your binary files and initial config files.
For permanent storage, you have to mount a volume: a container path is linked to external storage. With this, if your container is restarted, the volume is mounted again once the container is ready, and you don't lose anything.
In Kubernetes this is called Persistent Volumes, and you can leverage it even if you are on another cloud provider.
Steps to use it:
1. Define a path in your source code where you will mount the volume, for example /myfiles/private
2. Create a storage class in your GKE cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd
3. Create a PersistentVolumeClaim in your GKE cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/ssd-pd
4. Reference the claim in your Kubernetes deployment
Example
Link the volume with your container:
volumeMounts:
- mountPath: /myfiles/private
  name: any-name-you-want
Relate the persistent volume claim to your deployment:
volumes:
- name: any-name-you-want
  persistentVolumeClaim:
    claimName: my-claim-name
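As a complement to the example above, a hedged sketch of the PersistentVolumeClaim itself (the access mode, storage class name, and size are illustrative assumptions; the storage class would come from the GKE how-to linked in the steps):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim-name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ssd          # illustrative; must match a StorageClass you created
  resources:
    requests:
      storage: 10Gi              # illustrative size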
This is really up to you. By default most base images leave /tmp writeable as normal, but anything written inside the container is gone if/when the container restarts for any reason. For something like config data that might be fine; for a database, probably less so. To get more stable storage you need to use a Volume. The exact type to use depends on your environment and how long the data should live. An emptyDir volume lives only as long as the pod, but can be shared between containers in the same pod. Beyond that, you would probably use a PersistentVolumeClaim to dynamically provision a new Google Cloud disk, which lasts until the claim is deleted (or forever, depending on your reclaim policy).