Kubernetes Execute Script Before Container Start

I want to execute a script before I run my container.
If I execute a script in the container like this:
containers:
- name: myservice
  image: myservice.azurecr.io/myservice:1.0.6019912
  volumeMounts:
  - name: secrets-store-inline
    mountPath: "/mnt/secrets-store"
    readOnly: true
  command:
  - '/bin/bash'
  - '-c'
  - 'ls /mnt/secrets-store;'
then that command replaces my entrypoint and the pod exits. How can I execute the command and still start the container's normal entrypoint afterwards?

A common way to do this is to use Init Containers, but I'm unsure what you're trying to run before you run the ENTRYPOINT.
You can apply the same volume mounts in the init container(s) if the init work requires changing the state of the mounted filesystem content.
Another solution may be to run the ENTRYPOINT's command as the last statement in the script.
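A rough sketch of the second approach, assuming the image's original entrypoint is /app/start.sh (a hypothetical path; check the actual ENTRYPOINT/CMD of your image):
containers:
- name: myservice
  image: myservice.azurecr.io/myservice:1.0.6019912
  volumeMounts:
  - name: secrets-store-inline
    mountPath: "/mnt/secrets-store"
    readOnly: true
  command:
  - '/bin/bash'
  - '-c'
  # run the pre-start work, then exec the original entrypoint so it becomes the container's main process
  - 'ls /mnt/secrets-store; exec /app/start.sh'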

Related

Unable to access volume content using initContainers

I have a simple image (mdw:1.0.0) with some content in it:
FROM alpine:3.9
COPY /role /mdw
WORKDIR /mdw
I was expecting that my container 'nginx' would see the content of the /mdw folder, but there is no file.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: install
    image: mdw:1.0.0
    imagePullPolicy: Never
    volumeMounts:
    - name: workdir
      mountPath: "/mdw"
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: workdir
      mountPath: /mdw
    command: ["ls", "-l", "/mdw"]
  volumes:
  - name: workdir
    emptyDir: {}
Do you know what the reason is and how to fix it?
Thank you very much.
When a volume is mounted onto a directory that already exists, that directory's content gets hidden. This is intentional, and there is no real fix.
The only way is to populate the directory after the mount is done.
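A rough sketch of that idea for the pod above: mount the emptyDir at a different path in the init container and copy the image's /mdw content into it (the /work mount path and the cp command are assumptions added here, not part of the original spec):
initContainers:
- name: install
  image: mdw:1.0.0
  imagePullPolicy: Never
  # copy the content baked into the image at /mdw into the shared volume
  command: ["sh", "-c", "cp -r /mdw/. /work/"]
  volumeMounts:
  - name: workdir
    mountPath: /work   # mounted somewhere other than /mdw so it does not hide the image's content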
Your init container doesn't do anything: the Dockerfile doesn't have a CMD and the Kubernetes deployment spec doesn't set a command: either. It starts and immediately exits. (The base Linux distribution images generally have a default command to launch an interactive shell, but absent a tty this will also immediately exit.)
Meanwhile, your Kubernetes setup is also mounting an empty directory over the only content you've put into the image, which prevents the init container from having an effect.
You can build a custom nginx image that directly copies the content in:
FROM nginx
COPY /role /usr/share/nginx/html
Don't use initContainers:, and use that image as the main containers: image.
There is a Docker-specific feature, using Docker named volumes, that can populate a named volume on first use, and you're probably thinking of this feature. This comes with a couple of important caveats (it only takes effect the very first time you run a container, and ignores updates to the image; it doesn't work with bind mounts). This is a plain-Docker-specific feature: Kubernetes will never auto-populate a volume for you.
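For illustration only, this is roughly what that plain-Docker behaviour looks like with a named volume in a compose file (the service and volume names are made up); Kubernetes does not do this:
services:
  mdw:
    image: mdw:1.0.0
    volumes:
    - mdw-data:/mdw   # Docker copies the image's /mdw content into the named volume on first use only
volumes:
  mdw-data: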

How to set environment variable in container from Kubernetes?

I want to set an environment variable (I'll just name it ENV_VAR_VALUE) to a container during deployment through Kubernetes.
I have the following in the pod yaml configuration:
...
...
spec:
  containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
    - containerPort: 1234
    env:
    - name: "ENV_VAR_VALUE"
      value: "some.important.value"
...
...
The container needs to use ENV_VAR_VALUE's value.
But in the container's application logs, its value always comes out empty.
So, I tried checking its value from inside the container:
$ kubectl exec -it appname-service bash
root@appname-service:/# echo $ENV_VAR_VALUE
some.important.value
root@appname-service:/#
So, the value was successfully set.
I imagine it's because the environment variables defined from Kubernetes are set after the container is already initialized.
So, I tried overriding the container's CMD from the pod yaml configuration:
...
...
spec:
  containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
    - containerPort: 1234
    env:
    - name: "ENV_VAR_VALUE"
      value: "some.important.value"
    command: ["/bin/bash"]
    args: ["-c", "application-command"]
...
...
Even so, the value of ENV_VAR_VALUE is still empty during the execution of the command.
Thankfully, the application has a restart function
-- because when I restart the app, ENV_VAR_VALUE gets used successfully.
-- I can at least do some other tests in the meantime.
So, the question is...
How should I configure this in Kubernetes so it isn't a tad too late in setting the environment variables?
As requested, here is the Dockerfile.
I apologize for the abstraction...
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
COPY application-script.sh application-script.sh
RUN ./application-script.sh
# ENV_VAR_VALUE is set in this file which is populated when application-command is executed
COPY app-config.conf /etc/app/app-config.conf
CMD ["/bin/bash", "-c", "application-command"]
You can also try running two commands in the Kubernetes pod spec:
(read in the env vars): source /env/required_envs.env (the file would come via a Secret mounted as a volume)
(main command): application-command
Like this:
containers:
- name: appname-service
  image: path/to/registry/image-name
  ports:
  - containerPort: 1234
  command: ["/bin/sh", "-c"]
  args:
  - source /env/db_cred.env;
    application-command;
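The snippet above sources /env/db_cred.env but does not show where that file comes from; a sketch of the missing volume pieces, assuming a Secret named db-creds that holds a db_cred.env key (both names are hypothetical):
  volumeMounts:
  - name: env-file
    mountPath: /env
    readOnly: true
volumes:
- name: env-file
  secret:
    secretName: db-creds   # assumed Secret containing the key db_cred.env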
Why don't you move the
RUN ./application-script.sh
below
COPY app-config.conf /etc/app/app-config.conf
It looks like the script runs before the env config is available to it.

How to place configuration files inside pods?

For example I want to place an application configuration file inside:
/opt/webserver/my_application/config/my_config_file.xml
I create a ConfigMap from file and then place it in a volume like:
/opt/persistentData/
The idea is to afterwards run a script that does something like:
cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/
But it could be any startup.sh script that does the needed actions.
How do I run this command/script (during pod initialization, before Tomcat startup)?
I would first check whether this works:
spec:
  containers:
  - volumeMounts:
    - mountPath: /opt/webserver/my_application/config/my_config_file.xml
      name: config
      subPath: my_config_file.xml
  volumes:
  - configMap:
      items:
      - key: KEY_OF_THE_CONFIG
        path: my_config_file.xml
      name: config
    name: YOUR_CONFIGMAP_NAME
If not, add an init container to copy the file.
spec:
  initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', '/bin/cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/']
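Note that for that copy to be visible to the main container, both containers would need to mount a shared volume at the target path; a rough sketch, where the volume names, the claim and the tomcat container are assumptions:
spec:
  volumes:
  - name: persistent-data          # assumed: the volume that holds my_config_file.xml
    persistentVolumeClaim:
      claimName: my-pvc            # hypothetical claim name
  - name: app-config
    emptyDir: {}
  initContainers:
  - name: copy-config
    image: busybox
    command: ['sh', '-c', 'cp /opt/persistentData/my_config_file.xml /opt/webserver/my_application/config/']
    volumeMounts:
    - name: persistent-data
      mountPath: /opt/persistentData
    - name: app-config
      mountPath: /opt/webserver/my_application/config
  containers:
  - name: tomcat                   # hypothetical main container
    image: my-tomcat-image         # placeholder image
    volumeMounts:
    - name: app-config
      # mounting here hides anything the image ships at this path, so the init container must copy everything the app needs
      mountPath: /opt/webserver/my_application/config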
How about mounting the ConfigMap where you actually want it instead of copying over?
update:
The init container @ccshih mentioned should do, but one can try other options too:
Build a custom image modifying the base one, using a Dockerfile. The example below takes a Java + Tomcat 7 OpenShift image and adds an additional folder to the app classpath, so you can mount your ConfigMap to /mnt/config without overwriting anything, keeping both folders available.
FROM openshift/webserver31-tomcat7-openshift:1.2-6
# add classpaths to config
RUN sed -i 's/shared.loader=/shared.loader=\/mnt\/config/' /opt/webserver/conf/catalina.properties
Change the ENTRYPOINT of the application, either by modifying the image, or by the DeploymentConfig hooks, see: https://docs.okd.io/latest/dev_guide/deployments/deployment_strategies.html#pod-based-lifecycle-hook
With the hooks one just needs to remember to call the original entrypoint or launch script after all the custom stuff is done.
spec:
  containers:
  - name: my-app
    image: 'image'
    command:
    - /bin/sh
    args:
    - '-c'
    - cp /wherever/you/have/your-config.xml /wherever/you/want/it/ && /opt/webserver/bin/launch.sh

What is the best way to exec into another container and access its directory?

I have a container running inside a pod and I want to be able to monitor its content every week. I want to write a Kube cronjob for it. Is there a best way to do this?
At the moment I am doing this by running a script on my local machine that does kubectl exec my-container and monitors the content of the directory in that container.
kubectl exec my-container sounds perfectly fine to me. You might want to look at this if you want to run kubectl in a pod (Kubernetes CronJob).
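If you go the CronJob route, a minimal sketch of running kubectl from a pod once a week (the service account, its RBAC permissions for pods/exec, and the pod/container/path names are all assumptions you would need to fill in):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-content-check
spec:
  schedule: "0 0 * * 0"                  # every Sunday at midnight
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: monitor-sa # assumed service account allowed to use pods/exec
          restartPolicy: OnFailure
          containers:
          - name: kubectl
            image: bitnami/kubectl       # any image that ships kubectl works
            command: ["sh", "-c", "kubectl exec my-pod -c my-container -- ls -la /data"]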
There are other ways, but depending on what you are trying to do in the long term they might be overkill. For example:
You can set up a Fluentd or tail/grep sidecar (or ls, if you are using a binary file?) to send the content or part of the content of that file to an Elasticsearch cluster.
You can set up Prometheus in Kubernetes to scrape metrics on the pod mounted filesystems. You will probably have to use a custom exporter in the pod or something else that exports files in mount points in the pod. This is a similar example.
You can run your script in another sidecar of your pod.
Define an emptyDir volume
Mount this volume as your content directory
Also mount this volume in the sidecar, so that it can access the content and monitor it
Example:
apiVersion: v1
kind: Pod
metadata:
  name: monitor-by-sidecar
spec:
  restartPolicy: Never
  volumes: # empty directory volume
  - name: shared-data
    emptyDir: {}
  containers:
  - name: container-which-produce-content # main application container, which generates content in the /usr/share/nginx/html directory
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
    command: ["/bin/bash", "-c"]
    args:
    - while true;
      do
        echo "hello world";
        echo "----------------" > /usr/share/nginx/html/index.html;
        cat /usr/share/nginx/html/index.html;
      done
  - name: container-which-run-script-to-monitor # sidecar which runs your monitoring scripts; it mounts the main application's volume at /pod-data and runs the required scripts there
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh", "-c"]
    args:
    - while true;
      do
        echo "hello";
        sleep 10;
        ls -la /pod-data/;
        cat /pod-data/index.html;
      done
Example Description
The first container (named container-which-produce-content) is the main application, which mounts an emptyDir volume at /usr/share/nginx/html. The main application generates its data in this directory.
The second container (named container-which-run-script-to-monitor) mounts the same emptyDir volume (named shared-data, the one the main application mounts at /usr/share/nginx/html) at /pod-data. So /pod-data contains all the data the main application generated in /usr/share/nginx/html, and you can run your monitoring scripts against this directory.

OpenShift's YAML execution precedence regarding volume mounting and commands

As a beginner in container administration, I can't find a clear description of OpenShift's deployment stages and related YAML statements, specifically when persistent volume mounting and shell command execution are involved. For example, in the Red Hat documentation there are a lot of examples. A simple one is 16.4. Pod Object Definition:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod
  labels:
    name: busybox-nfs-pod
spec:
  containers:
  - name: busybox-nfs-pod
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: nfsvol-2
      mountPath: /usr/share/busybox
      readOnly: false
    securityContext:
      supplementalGroups: [100003]
      privileged: false
  volumes:
  - name: nfsvol-2
    persistentVolumeClaim:
      claimName: nfs-pvc
Now the question is: does the command sleep (or any other) execute before or after the mount of nfsvol-2 is finished? In other words, is it possible to use the volume's resources in such commands? And if it's not possible in this config, then which event handlers should I use instead? I don't see any mention of an event like "volume mounted".
does the command sleep (or any other) execute before or after the
mount of nfsvol-2 is finished?
To understand this, let's dig into the underlying concepts of OpenShift.
OpenShift is a container application platform that brings Docker and Kubernetes to the enterprise. So OpenShift is essentially an abstraction layer on top of Docker and Kubernetes, along with additional features.
Regarding volumes and commands, let's consider the following example:
Run a Docker container mounting a volume that maps the host machine's home directory to the container's /root path (-v is the option to attach a volume).
$ docker run -it -v /home:/root ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
50aff78429b1: Pull complete
f6d82e297bce: Pull complete
275abb2c8a6f: Pull complete
9f15a39356d6: Pull complete
fc0342a94c89: Pull complete
Digest: sha256:f871d0805ee3ce1c52b0608108dbdf1b447a34d22d5c7278a3a9dd78fc12c663
Status: Downloaded newer image for ubuntu:latest
root@1f07f083ba79:/# cd /root/
root@1f07f083ba79:~# ls
lost+found raghavendralokineni raghu user1
root@1f07f083ba79:~/raghavendralokineni# pwd
/root/raghavendralokineni
Now execute the sleep command in the container and exit.
root@1f07f083ba79:~/raghavendralokineni# sleep 10
root@1f07f083ba79:~/raghavendralokineni#
root@1f07f083ba79:~/raghavendralokineni# exit
Check the files available in the /home path which we mounted into the container. This content is the same as that of the /root path in the container.
raghavendralokineni@iconic-glider-186709:/home$ ls
lost+found raghavendralokineni raghu user1
So when a volume is mounted into the container, any changes in the volume are reflected on the host machine as well.
Hence the volume is mounted along with the container, and the command is executed after the container has started, i.e. with the mount already in place.
Coming back to your YAML file:
volumeMounts:
- name: nfsvol-2
  mountPath: /usr/share/busybox
It says: mount the volume nfsvol-2 into the container; the information about the volume itself is given under volumes:
volumes:
- name: nfsvol-2
  persistentVolumeClaim:
    claimName: nfs-pvc
So the volume is mounted into the container and then the specified command is executed:
containers:
- name: busybox-nfs-pod
  image: busybox
  command: ["sleep", "60000"]
Hope this helps.