How to place configuration files inside pods? - kubernetes

For example, I want to place an application configuration file inside:
/opt/webserver/my_application/config/my_config_file.xml
I create a ConfigMap from a file and then place it in a volume like:
/opt/persistentData/
The idea is to then run a script that does something like:
cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/
But it could be any startup.sh script that does the needed actions.
How do I run this command/script during Pod initialization, before Tomcat startup?

I would first try whether this works:
spec:
  containers:
  - volumeMounts:
    - mountPath: /opt/webserver/my_application/config/my_config_file.xml
      name: config
      subPath: my_config_file.xml
  volumes:
  - name: config
    configMap:
      name: YOUR_CONFIGMAP_NAME
      items:
      - key: KEY_OF_THE_CONFIG
        path: my_config_file.xml
If not, add an init container to copy the file.
spec:
  initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', '/bin/cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/']
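Note that for the copied file to be visible to the Tomcat container, the destination directory has to sit on a volume shared by the init container and the main container. A minimal sketch, assuming a shared emptyDir named staging and a hypothetical Tomcat image name:

spec:
  initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', 'cp /opt/persistentData/my_config_file.xml /staging/']
    volumeMounts:
    - name: persistent-data           # whatever volume already provides /opt/persistentData
      mountPath: /opt/persistentData
    - name: staging                   # shared scratch volume
      mountPath: /staging
  containers:
  - name: tomcat
    image: your-tomcat-image          # hypothetical
    volumeMounts:
    - name: staging
      mountPath: /opt/webserver/my_application/config/my_config_file.xml
      subPath: my_config_file.xml     # overlay only this one file, keep the rest of config/ from the image
  volumes:
  - name: staging
    emptyDir: {}
  - name: persistent-data
    emptyDir: {}                      # placeholder; use whatever volume actually holds your data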

How about mounting the ConfigMap where you actually want it instead of copying over?
update:
The init container @ccshih mentioned should do the job, but one can try other options too:
Build a custom image modifying the base one, using a Dockerfile. The example below takes a java+tomcat7 openshift image and adds an additional folder to the app classpath, so you can mount your ConfigMap to /mnt/config without overwriting anything, keeping both folders available.
FROM openshift/webserver31-tomcat7-openshift:1.2-6
# add classpaths to config
RUN sed -i 's/shared.loader=/shared.loader=\/mnt\/config/' /opt/webserver/conf/catalina.properties
Change the ENTRYPOINT of the application, either by modifying the image or via DeploymentConfig hooks; see: https://docs.okd.io/latest/dev_guide/deployments/deployment_strategies.html#pod-based-lifecycle-hook
With the hooks one just needs to remember to call the original entrypoint or launch script after all the custom stuff is done.
spec:
  containers:
  - name: my-app
    image: 'image'
    command:
    - /bin/sh
    args:
    - '-c'
    - cp /wherever/you/have/your-config.xml /wherever/you/want/it/ && /opt/webserver/bin/launch.sh

Related

Is there any way, in K8s, to source an env file dynamically generated during an initContainer?

I'm planning to have an initcontainer that will handle some crypto stuff and then generate a source file to be sourced by a container.
The source file will be dynamically generated and the VARS will be dynamic, which means I will never know the VAR names or their contents. This also means I cannot use k8s env.
The file name will always be the same.
I know I can change the Dockerfile of my applications and include an entrypoint that sources the file before running the workload, but, still, is this the only option?
Is there no way to achieve this in k8s?
My container can mount the dir where the file was created by the initcontainer. But can't it, somehow, source the file?
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
  namespace: default
spec:
  nodeSelector:
    env: sm
  initContainers:
  - name: genenvfile
    image: busybox
    imagePullPolicy: Always
    command: ["/bin/sh"]
    # just an example, there will be a software here that will translate some encrypted stuff into VARS and then append'em to a file
    args: ["-c", "echo MYVAR=func > /tmp/sm/filetobesourced"]
    volumeMounts:
    - mountPath: /tmp/sm
      name: tmpdir
  containers:
  - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    imagePullPolicy: IfNotPresent
    name: mypod-cm
    tty: true
    volumeMounts:
    - mountPath: /tmp/sm
      readOnly: true
      name: tmpdir
  volumes:
  - name: tmpdir
    emptyDir:
      medium: Memory
The step-by-step process I have in mind would be:
initcontainer mounts /tmp/sm and generates a file called /tmp/sm/filetobesourced
container mounts the /tmp/sm
container sources /tmp/sm/filetobesourced
workload runs using all the vars sourced by the last step
Am I missing something to get the third step done?
Change the command and/or args on the main container to be more like bash -c 'source /tmp/sm/filetobesourced && exec whatevertheoriginalcommandwas'.
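For illustration, a minimal sketch of the main container with that change, where /app/start.sh stands in for whatever the original command was (hypothetical path):

spec:
  containers:
  - name: mypod-cm
    image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
    command: ["/bin/bash"]
    args: ["-c", "source /tmp/sm/filetobesourced && exec /app/start.sh"]
    volumeMounts:
    - name: tmpdir
      mountPath: /tmp/sm
      readOnly: true

The exec keeps the original process as PID 1 of the container, so signals and restarts behave as before.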

Volume shared between two containers "is busy or locked"

I have a deployment that runs two containers. One of the containers attempts to build (during deployment) a javascript bundle that the other container, nginx, tries to serve.
I want to use a shared volume to place the javascript bundle after it's built.
So far, I have the following deployment file (with irrelevant pieces removed):
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
To the best of my ability, I have followed these guides:
https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/
One other thing to point out is that I'm trying to run this locally at the moment using minikube.
EDIT: The Dockerfile I used to build this image is:
FROM node:alpine
WORKDIR /var/app
COPY . .
RUN npm install
RUN npm install -g @vue/cli@latest
CMD ["npm", "run", "build"]
I realize that I do not need to build this when I actually run the image, but my next goal is to insert pod instance information as environment variables, and with JavaScript I unfortunately can only build once that information is available to me.
Problem
The logs from the personal-site container reveal:
- Building for production...
ERROR Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
I'm not sure why the build is trying to remove /dist, but I also have a feeling that this is irrelevant. I could be wrong?
I thought that maybe this could be related to the lifecycle of containers/volumes, but the docs suggest that "An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node".
Question
What are some reasons that a volume might not be available to me after the containers are already running? Given that you probably have much more experience than I do with Kubernetes, what would you look into next?
The best way is to customize your image's entrypoint as follows:
Once you finish building the /var/app/dist folder, copy (or move) this folder to another, empty path (e.g. /opt/dist):
cp -r /var/app/dist/* /opt/dist
Pay attention: this step must be done in the ENTRYPOINT script, not in a RUN layer.
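For illustration only, a sketch of such an entrypoint script; the file name and the final command are assumptions, not taken from the actual image:

#!/bin/sh
# entrypoint.sh (hypothetical): build, then publish the bundle to the shared emptyDir
set -e
npm run build                     # writes the bundle to /var/app/dist inside the container filesystem
cp -r /var/app/dist/* /opt/dist/  # /opt/dist is where the emptyDir will be mounted
# keep the container running afterwards (adjust to whatever the image should really do)
exec tail -f /dev/null

In the Dockerfile you would then replace the CMD with something like COPY entrypoint.sh /entrypoint.sh and ENTRYPOINT ["/entrypoint.sh"].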
Now use /opt/dist instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      hostNetwork: true
      containers:
      - name: personal-site
        image: wheresmycookie/personal-site:3.1
        volumeMounts:
        - name: build-volume
          mountPath: /opt/dist # <--- make it consistent with image's entrypoint algorithm
      - name: nginx-server
        image: nginx:1.19.0
        volumeMounts:
        - name: build-volume
          mountPath: /var/app/dist
      volumes:
      - name: build-volume
        emptyDir: {}
Good luck!
If it's not clear how to customize the entrypoint, share your image's entrypoint with us and we will implement it.

Unable to access volume content using initContainers

I have a simple image (mdw:1.0.0) with some content in it:
FROM alpine:3.9
COPY /role /mdw
WORKDIR /mdw
I was expecting my 'nginx' container to see the content of the /mdw folder, but there is no file.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: install
    image: mdw:1.0.0
    imagePullPolicy: Never
    volumeMounts:
    - name: workdir
      mountPath: "/mdw"
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /mdw
    command: ["ls", "-l", "/mdw"]
  volumes:
  - name: workdir
    emptyDir: {}
Do you know what the reason is and how to fix it?
Thank you very much
When a volume is mounted onto a directory that already exists in the image, the existing content gets hidden by the (empty) volume. It's intentional, and there is no fix, really.
The only way would be to populate the directory after the mount is done.
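A minimal sketch of that "populate after mounting" approach: mount the shared volume at a different path in the init container so the image's /mdw stays visible, then copy the content into it:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: install
    image: mdw:1.0.0
    imagePullPolicy: Never
    # mount the shared volume somewhere else, then copy the image's /mdw content into it
    command: ["sh", "-c", "cp -r /mdw/. /work-dir/"]
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /mdw
  volumes:
  - name: workdir
    emptyDir: {}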
Your init container doesn't do anything: the Dockerfile doesn't have a CMD and the Kubernetes pod spec doesn't set a command: either. It starts and immediately exits. (Base Linux distribution images generally have a default command that launches an interactive shell, but absent a tty this will also immediately exit.)
Meanwhile, your Kubernetes setup is also mounting an empty directory over the only content you've put into the image, which prevents the init container from having an effect.
You can build a custom nginx image that directly copies the content in:
FROM nginx
COPY /role /usr/share/nginx/html
Don't use initContainers:, and use that image as the main containers: image.
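A minimal sketch of using such an image directly, with an illustrative tag for the build above; no init container and no volume are needed:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: mdw-nginx:1.0.0   # hypothetical tag for the FROM nginx / COPY build above
    imagePullPolicy: Never
    ports:
    - containerPort: 80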
There is a Docker-specific feature, using Docker named volumes, that can populate a named volume on first use, and you're probably thinking of this feature. This comes with a couple of important caveats (it only takes effect the very first time you run a container, and ignores updates to the image; it doesn't work with bind mounts). This is a plain-Docker-specific feature: Kubernetes will never auto-populate a volume for you.
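For comparison, a sketch of that plain-Docker behaviour (the volume name is illustrative):

# Docker only: on first use, the empty named volume is seeded from the image's /mdw;
# later runs reuse the volume and ignore any changes to the image
docker volume create mdwdata
docker run --rm -v mdwdata:/mdw mdw:1.0.0 ls -l /mdw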

Can I share a single file between containers in a pod?

My pod has two containers - a primary container, and a sidecar container that monitors the /var/run/utmp file in the primary container and takes action when it changes. I'm trying to figure out how to make this file visible in the sidecar container.
This page describes how to use an emptyDir volume to share directories between containers in a pod. However, this only seems to work for directories, not single files. I also can't use this strategy to share the entire /var/run/ directory in the primary container, since mounting a volume there erases the contents of the directory, which the container needs to run.
I tried to work around this by creating a symlink to utmp in another directory and mounting that directory, but it doesn't look like symlinks in volumes are resolved in the way they would need to be for this to work.
Is there any way I can make one file in a container visible to other containers in the same pod? The manifest I'm experimenting with looks like this:
apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /var/run # or /var/run/utmp, which crashes
  - name: helper
    image: debian
    command: ["/bin/bash"]
    args: ["-c", "sleep infinity"]
    volumeMounts:
    - name: main-run
      mountPath: /tmp/main-run
  volumes:
  - name: main-run
    emptyDir: {}
If you can move the file to be shared into an empty subfolder, this could be a simple solution.
For example, move your file to /var/run/utmp/utmp and share the /var/run/utmp folder with an emptyDir.
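A sketch of that layout, assuming the software in the main container can be told to write the utmp file to the new /var/run/utmp/utmp path (that relocation is an assumption, not something Kubernetes does for you):

apiVersion: v1
kind: Pod
metadata:
  name: utmp-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: debian
    command: ["/bin/bash", "-c", "sleep infinity"]
    volumeMounts:
    - name: utmp-dir
      mountPath: /var/run/utmp   # now a directory; the shared file lives at /var/run/utmp/utmp
  - name: helper
    image: debian
    command: ["/bin/bash", "-c", "sleep infinity"]
    volumeMounts:
    - name: utmp-dir
      mountPath: /tmp/main-run   # the helper watches /tmp/main-run/utmp
  volumes:
  - name: utmp-dir
    emptyDir: {}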

How to pass a configuration file through YAML on Kubernetes to create a new replication controller

I am trying to pass a configuration file (which is located on the master) to the nginx container at the time of replication controller creation through Kubernetes, e.g. like using the ADD command in a Dockerfile...
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the docker ADD command). This has the advantage that it works in a way you are already familiar with, but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume.
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
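A sketch of that secret-based option, with illustrative names and mount paths; recent kubectl does the base64 encoding for you when creating the secret from a file:

kubectl create secret generic nginx-config --from-file=my.conf

Then, inside the replication controller's pod template:

    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: config
          mountPath: /etc/nginx/conf.d   # or wherever your application expects the file
          readOnly: true
      volumes:
      - name: config
        secret:
          secretName: nginx-config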
I believe you can also download the config during container initialization.
See the example below; you could download a config file instead of index.html, but I would not use this for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}