Mounting /etc/default directory problem with SOLR image - kubernetes

I'm deploying a basic "solr:8.6.0" image to a local Kubernetes environment.
Mounting the pod's "/var/solr" directory works well:
I can see the files from /var/solr in the mounted directory.
spec:
  containers:
    - image: solr:8.6.0
      imagePullPolicy: IfNotPresent
      name: solr
      ports:
        - name: solrport
          containerPort: 8983
      volumeMounts:
        - mountPath: /var/solr/
          name: solr-volume
  volumes:
    - name: solr-volume
      persistentVolumeClaim:
        claimName: solr-pvc
But somehow I can't mount the "/etc/default/" directory; that doesn't work.
I know there are files inside that directory, but they disappear once the volume is mounted.
Any idea why?
Thanks!

This is because of how volumeMounts work.
A standard volumeMount mounts the volume at the supplied path, shadowing everything that was already inside that directory.
You want to specify a subPath for the data you actually want to mount. By doing this, the original contents of the directory won't get overridden.
See here for more information regarding the usage of subPaths.
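For the Solr case above, a minimal sketch of what that could look like (the file name solr.in.sh is an assumption; adjust it to whatever actually lives under /etc/default in your image):
volumeMounts:
  - name: solr-volume
    mountPath: /etc/default/solr.in.sh # mount a single file, not the whole directory
    subPath: solr.in.sh # path inside the volume that backs this file
This way the rest of /etc/default stays visible from the image, and only the one file is backed by the volume.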

Related

Kubernetes copy image data to volume mounts

I need to share a directory between two containers, myapp and monitoring. To achieve this, I created an emptyDir: {} volume and added a volumeMount for it in both containers.
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: myapp
      volumeMounts:
        - name: shared-data
          mountPath: /etc/myapp/
    - name: monitoring
      volumeMounts:
        - name: shared-data
          mountPath: /var/read
This works fine: the data I write to the shared-data directory is visible in both containers. However, the config file that is created when the container starts, /etc/myapp/myapp.config, is hidden because the shared-data volume is mounted over the /etc/myapp path (they overlap).
How can I have the volume mounted at /etc/myapp while the image still places myapp.config under its default path /etc/myapp, so that the file lands on the mounted volume and becomes accessible to the monitoring container under /var/read?
Summary: let the monitoring container read the /etc/myapp/myapp.config file sitting on the myapp container.
Can anyone advise, please?
Can you mount shared-data at /var/read in an init container and copy the config file from /etc/myapp/myapp.config to /var/read?
Consider using ConfigMaps with SubPaths.
A ConfigMap is an API object used to store non-confidential data in
key-value pairs. Pods can consume ConfigMaps as environment variables,
command-line arguments, or as configuration files in a volume.
Sometimes, it is useful to share one volume for multiple uses in a
single pod. The volumeMounts.subPath property specifies a sub-path
inside the referenced volume instead of its root.
ConfigMaps can be used as volumes. The volumeMounts inside template.spec are the same as for any other volume. However, the volumes section is different: instead of specifying a persistentVolumeClaim or another volume type, you reference the configMap by name. Then you can add the subPath property, which would look something like this:
volumeMounts:
  - name: shared-data
    mountPath: /etc/myapp/myapp.config
    subPath: myapp.config
Here are the resources that would show you how to set it up:
Configure a Pod to Use a ConfigMap: official docs
Using ConfigMap SubPaths to Mount Files: step-by-step guide
Mount a file in your Pod using a ConfigMap: supplement
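Putting it together, a sketch of the full pattern (assuming a ConfigMap named my-config that holds the file content under the key myapp.config; both names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: myapp
      image: myapp:latest # illustrative image name
      volumeMounts:
        - name: shared-data
          mountPath: /etc/myapp/myapp.config # mounts only this one file
          subPath: myapp.config # key inside the ConfigMap
  volumes:
    - name: shared-data
      configMap:
        name: my-config # assumed ConfigMap name
Because only the single file is mounted, the rest of /etc/myapp keeps the contents from the image.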

Unable to access volume content using initContainers

I have a simple image (mdw:1.0.0) with some content in it:
FROM alpine:3.9
COPY /role /mdw
WORKDIR /mdw
I was expecting my 'nginx' container to see the content of the /mdw folder, but there are no files there.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
    - name: install
      image: mdw:1.0.0
      imagePullPolicy: Never
      volumeMounts:
        - name: workdir
          mountPath: "/mdw"
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: workdir
          mountPath: /mdw
      command: ["ls", "-l", "/mdw"]
  volumes:
    - name: workdir
      emptyDir: {}
Do you know what the reason is and how to fix it?
Thank you very much.
When you mount a volume onto a directory that already exists, the directory's contents get hidden. It's intentional, and there is no real fix.
The only way around it is to populate the directory after the mount is done.
Your init container doesn't do anything: the Dockerfile doesn't have a CMD and the Kubernetes pod spec doesn't set a command: either. It starts and immediately exits. (The base Linux distribution images generally have a default command to launch an interactive shell, but absent a tty this will also immediately exit.)
Meanwhile, your Kubernetes setup is also mounting an empty directory over the only content you've put into the image, which prevents the init container from having an effect.
You can build a custom nginx image that directly copies the content in:
FROM nginx
COPY /role /usr/share/nginx/html
Don't use initContainers:, and use that image as the main containers: image.
There is a Docker-specific feature, using Docker named volumes, that can populate a named volume on first use, and you're probably thinking of this feature. This comes with a couple of important caveats (it only takes effect the very first time you run a container, and ignores updates to the image; it doesn't work with bind mounts). This is a plain-Docker-specific feature: Kubernetes will never auto-populate a volume for you.
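If you do want to keep the init-container approach, the usual workaround is to mount the volume at a path that does not shadow the image's content, and copy the files across. A sketch (assuming the content sits at /mdw in mdw:1.0.0, as the Dockerfile above suggests):
initContainers:
  - name: install
    image: mdw:1.0.0
    imagePullPolicy: Never
    # copy the image's /mdw content into the volume, which is mounted elsewhere
    command: ["sh", "-c", "cp -a /mdw/. /work/"]
    volumeMounts:
      - name: workdir
        mountPath: /work # does not shadow /mdw inside this container
The nginx container can then mount workdir at /mdw as before and will see the copied files.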

stop k8s initContainers volume overwriting container folder

I need to download files into a specific folder of a container in a pod at startup. The image for this container already has an existing folder with other files in it (an example is adding plugin jars to an application).
I've attempted the example below; however, the k8s volumeMount overwrites the folder on the container.
In the example below, '/existing-folder-on-my-app-image/' is a folder on the my-app image which already contains files. With this setup I only get the downloaded plugin.jar in '/existing-folder-on-my-app-image/', and the existing files are removed.
I want to add files to this folder but still keep the files that were there to start with.
How can I stop k8s from overwriting '/existing-folder-on-my-app-image/' so that it doesn't contain only the files from the initContainer?
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
    - name: config-data
      image: joosthofman/wget:1.0
      command: ["sh", "-c", "wget https://url.to.plugins/plugin.jar --no-check-certificate"]
      volumeMounts:
        - name: config-data
          mountPath: /config
  containers:
    - name: my-app
      image: my-app:latest
      volumeMounts:
        - name: config-data
          mountPath: /existing-folder-on-my-app-image/
  volumes:
    - name: config-data
      emptyDir: {}
Volume mounts always shadow the directory they are mounted to. A volume mount is the only way for an init container to manage files that are also visible to another container in the pod. If you want to copy files into a directory that already contains files in the main container image, you'll need to perform that copy as part of the container startup.
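A sketch of what that startup copy could look like for the pod above (the entrypoint /start-my-app.sh is an assumption; substitute your image's real command):
containers:
  - name: my-app
    image: my-app:latest
    # copy the downloaded jar into the existing folder, then hand off to the app
    command: ["sh", "-c", "cp /config/plugin.jar /existing-folder-on-my-app-image/ && exec /start-my-app.sh"]
    volumeMounts:
      - name: config-data
        mountPath: /config # mounted next to, not over, the existing folder
Because the volume is now mounted at /config rather than over the existing folder, the image's original files stay in place.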

Can I share a single file between containers in a pod?

My pod has two containers - a primary container, and a sidecar container that monitors the /var/run/utmp file in the primary container and takes action when it changes. I'm trying to figure out how to make this file visible in the sidecar container.
This page describes how to use an emptyDir volume to share directories between containers in a pod. However, this only seems to work for directories, not single files. I also can't use this strategy to share the entire /var/run/ directory from the primary container, since mounting a volume there hides the contents of the directory, which the container needs in order to run.
I tried to work around this by creating a symlink to utmp in another directory and mounting that directory, but it doesn't look like symlinks in volumes are resolved in the way they would need to be for this to work.
Is there any way I can make one file in a container visible to other containers in the same pod? The manifest I'm experimenting with looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
    - name: main
      image: debian
      command: ["/bin/bash"]
      args: ["-c", "sleep infinity"]
      volumeMounts:
        - name: main-run
          mountPath: /var/run # or /var/run/utmp, which crashes
    - name: helper
      image: debian
      command: ["/bin/bash"]
      args: ["-c", "sleep infinity"]
      volumeMounts:
        - name: main-run
          mountPath: /tmp/main-run
  volumes:
    - name: main-run
      emptyDir: {}
If you can move the file to be shared into an empty subfolder, this could be a simple solution.
For example, move your file to /var/run/utmp/utmp and share the /var/run/utmp folder with an emptyDir.
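In the manifest above, the main container's mount would then become something like this (assuming whatever writes utmp can be pointed at the new /var/run/utmp/utmp path):
volumeMounts:
  - name: main-run
    mountPath: /var/run/utmp # now a directory holding the utmp file, backed by the emptyDir
The helper container keeps its mount at /tmp/main-run and reads the file as /tmp/main-run/utmp.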

mountPath overrides the rest of the files in that same folder

I have a volume with a secret called config-volume. I want to have that file in the /home/code/config folder, which is where the rest of the configuration files are. For that, I mount it like this:
volumeMounts:
  - name: config-volumes
    mountPath: /home/code/config
The issue is that, after deploying, /home/code/config contains only the secret file; the rest of the files are gone.
Since /home/code/config is an existing (non-empty) folder, I suspect the volumeMount overwrites the folder.
Is there a way this can be done without overwriting everything?
You can do the following, taken from this GitHub issue:
containers:
  - volumeMounts:
      - name: config-volumes
        mountPath: /home/code/config
        subPath: config
volumes:
  - name: config-volumes
    configMap:
      name: my-config
This assumes that your ConfigMap is called my-config and that you have a key config in it.
Kubernetes Secrets are mounted as a directory, with each key as a file in that directory. So in your case, the config-volumes secret is mounted to /home/code/config, shadowing whatever that directory was before.
You could specify your volume mount as:
volumeMounts:
  - name: config-volumes
    mountPath: /home/code/config/config-volumes
which would provide a config-volumes directory inside the config directory with files for your secret's keys inside.
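For completeness, the matching volumes entry would look something like this (assuming the Secret itself is named config-volume, as described in the question):
volumes:
  - name: config-volumes
    secret:
      secretName: config-volume # assumed Secret name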