Execute sh command from Deployment.yaml containers postStart lifecycle hook

Below is the values.yaml file I am using in my Helm chart.
global:
  drek:
    database:
      drekSaPassword: myDatabasePassword
I want to use the .Values.global.drek.database.drekSaPassword value in the deployment.yaml file as below:
Deployment.yaml file:
spec:
  containers:
    - name: drek
      image: <drek-image>
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "
              echo ${.Values.global.drek.database.drekSaPassword}
            "]
Can someone help here and let me know the correct way to do this?
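A minimal sketch of one common approach, assuming deployment.yaml is rendered as a Helm template: expose the rendered value as an environment variable (the name DREK_SA_PASSWORD is only illustrative) and reference that variable in the hook:
spec:
  containers:
    - name: drek
      image: <drek-image>
      env:
        - name: DREK_SA_PASSWORD   # illustrative name, not from the chart
          value: {{ .Values.global.drek.database.drekSaPassword | quote }}
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo $DREK_SA_PASSWORD"]
Alternatively, the {{ .Values.global.drek.database.drekSaPassword }} expression can be templated straight into the command string, since Helm renders the whole file before Kubernetes sees it.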

Related

How to source an sh file in a pod to create env variables?

I have a pod with multiple init containers and one main container. One of the init containers creates a sh file with some export commands like:
export Foo=Bar
I want to source the file so it creates the env variable like this:
containers:
  - name: test
    command:
      - "bash"
      - "-c"
    args:
      - "source /path/to/file"
It doesn't create the env variable. But if I run the source command directly in the container it works. What is the best way to do this using the command option in the pod definition?
If you are looking to create the sh file in the init container with the variables and then use it in the "main container", here is a quick example:
manifest
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    name: mypod
spec:
  initContainers:
    - name: my-init-container
      image: alpine:latest
      command: ["sh", "-c", "echo export Foo=bar > /shared/script.sh && chmod +x /shared/script.sh"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: mycontainer
      image: mycustomimage
      resources:
        limits:
          memory: "32Mi"
          cpu: "100m"
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      emptyDir: {}   # volume source assumed; the original snippet omitted it
Dockerfile
FROM alpine:latest
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
CMD ...
entrypoint.sh
#!/bin/sh
. /shared/script.sh
env
exec "$@"
logs
$ kubectl logs pod/mypod
<...>
Foo=bar
<...>
As you can see, we created a script file in the init container with the Foo=bar variable and sourced that file in the "main container"; the script is available there because the shared volume is mounted in both containers.
In most situations we use ConfigMaps/Secrets/vaults and inject those as variables into the containers, as the other answers mention. I recommend checking whether those can solve your problem first.
A Kubernetes ConfigMap can be used to expose key-value pairs as environment variables inside a container.
Instead of using the init container, you can directly use a ConfigMap or Secret to inject the variables as environment variables into the pod.
Your script will then be able to access those variables directly.
Example : https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
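For illustration, a minimal sketch of that approach (the ConfigMap name my-env-config is made up for this example):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-config        # hypothetical name
data:
  Foo: bar
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: alpine:latest
      command: ["sh", "-c", "echo $Foo && sleep 3600"]
      envFrom:
        - configMapRef:
            name: my-env-config
With envFrom, every key in the ConfigMap becomes an environment variable in the container, so no sourcing step is needed.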

When should I use commands or args in readinessProbes

I am working my way through killer.sh for the CKAD. I encountered a pod definition file that has a command field under the readiness probe, while the container executes another command but uses args.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
    - args:
        - sh
        - -c
        - touch /tmp/ready && sleep 1d
      image: busybox:1.31.0
      name: pod6
      resources: {}
      readinessProbe:            # add
        exec:                    # add
          command:               # add
            - sh                 # add
            - -c                 # add
            - cat /tmp/ready     # add
        initialDelaySeconds: 5   # add
        periodSeconds: 10        # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
If the readiness probe weren't used and this pod were created imperatively with kubectl run, args wouldn't be used at all:
kubectl run pod6 --image=busybox:1.31.0 --dry-run=client --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml
The output YAML would look like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
    - command:
        - sh
        - -c
        - touch /tmp/ready && sleep 1d
      image: busybox:1.31.0
      name: pod6
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Why is command not used on both the readinessProbe and the container?
When do commands become args?
Is there a way to tell?
I've read through this document: https://kubernetes.io/docs/tasks/inject-data-application/_print/
but I still haven't had much luck understanding this situation and when to switch to args.
The reason you have both command and args in Kubernetes is that they give you the option to override the default ENTRYPOINT and CMD of the image you are trying to run.
In your specific case, the busybox image does not define a default ENTRYPOINT, so specifying the starting command in either command or args in the Pod YAML is essentially the same.
To your question of when commands become args: they don't. When a container is spun up from your image, it simply executes command + args, and if command is empty (in both the image and the YAML file) then only the args are executed.
The thread here may give you some more explanation.
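As a concrete illustration (a sketch, not taken from the linked thread), the Pod fields map onto the image's Dockerfile instructions like this:
# Dockerfile of a hypothetical image
FROM busybox
ENTRYPOINT ["/bin/echo"]   # corresponds to "command" in the Pod spec
CMD ["hello"]              # corresponds to "args" in the Pod spec
and in the Pod spec:
containers:
  - name: demo
    image: my-echo-image             # hypothetical image built from the Dockerfile above
    args: ["overridden", "args"]     # overrides CMD only -> container runs /bin/echo overridden args
If command were set as well, it would replace /bin/echo entirely.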

Why busybox container is in completed state instead of running state

I am using a busybox container to understand Kubernetes concepts,
but if I run a simple test-pod.yaml with the busybox image, it is in Completed state instead of Running state.
Can anyone explain the reason?
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
Look, you should understand the basic concept here: your Docker container keeps running only while its main process is alive, and it is marked Completed as soon as that main process stops.
Going step-by-step in your case:
You launch a busybox container whose main process is "/bin/sh", "-c", "ls /etc/config/", and obviously this process has an end. Once the command completes and the directory has been listed, the process exits with status 0, the container stops working, and you see a Completed pod as a result.
If you want the container to run longer, you should explicitly run some command as the main process that keeps the container alive for as long as you need.
Possible solutions
@Daniel's answer: the container will execute ls /etc/config/ and then stay alive for an additional 3600 seconds.
Use the sleep infinity option. Please be aware that a long time ago there was an issue where this option did not work properly with busybox specifically; that was fixed in 2019, more information here. This is not actually an infinite loop, but it should be enough for any testing purpose. You can find a thorough explanation in the Bash: infinite sleep (infinite blocking) thread.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-infinity
spec:
  containers:
    - command:
        - /bin/sh
        - "-c"
        # combined into one script so that sleep infinity actually runs under sh -c
        - "ls /etc/config/ && sleep infinity"
      image: busybox
      name: busybox-infinity
You can use different variations of while loops, tailing, and so on; that relies only on your imagination and needs.
Examples:
["sh", "-c", "tail -f /dev/null"]
command: ["/bin/sh"]
args: ["-c", "while true; do echo hello; sleep 10;done"]
command: [ "/bin/bash", "-c", "--" ]
args: [ "while true; do sleep 30; done;" ]
That is because busybox runs the command and exits. You can solve it by updating the command in the containers section to the following:
[ "/bin/sh", "-c", "ls /etc/config/ && sleep 3600" ]

How to set environment variable in container from Kubernetes?

I want to set an environment variable (I'll just name it ENV_VAR_VALUE) in a container during deployment through Kubernetes.
I have the following in the pod yaml configuration:
...
...
spec:
  containers:
    - name: appname-service
      image: path/to/registry/image-name
      ports:
        - containerPort: 1234
      env:
        - name: "ENV_VAR_VALUE"
          value: "some.important.value"
...
...
The container needs to use ENV_VAR_VALUE's value.
But in the container's application logs, its value always comes out empty.
So, I tried checking its value from inside the container:
$ kubectl exec -it appname-service bash
root@appname-service:/# echo $ENV_VAR_VALUE
some.important.value
root@appname-service:/#
So, the value was successfully set.
I imagine it's because the environment variables defined from Kubernetes are set after the container is already initialized.
So, I tried overriding the container's CMD from the pod yaml configuration:
...
...
spec:
  containers:
    - name: appname-service
      image: path/to/registry/image-name
      ports:
        - containerPort: 1234
      env:
        - name: "ENV_VAR_VALUE"
          value: "some.important.value"
      command: ["/bin/bash"]
      args: ["-c", "application-command"]
...
...
Even then, the value of ENV_VAR_VALUE is still empty during the execution of the command.
Thankfully, the application has a restart function
-- because when I restart the app, ENV_VAR_VALUE gets used successfully.
-- so I can at least do some other tests in the meantime.
So, the question is...
How should I configure this in Kubernetes so it isn't a tad too late in setting the environment variables?
As requested, here is the Dockerfile.
I apologize for the abstraction...
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
COPY application-script.sh application-script.sh
RUN ./application-script.sh
# ENV_VAR_VALUE is set in this file which is populated when application-command is executed
COPY app-config.conf /etc/app/app-config.conf
CMD ["/bin/bash", "-c", "application-command"]
You can also try running two commands in the Kubernetes Pod spec:
(read in the env vars): source /env/required_envs.env (this file would come via a Secret mounted in a volume)
(main command): application-command
Like this:
containers:
  - name: appname-service
    image: path/to/registry/image-name
    ports:
      - containerPort: 1234
    command: ["/bin/sh", "-c"]
    args:
      - source /env/db_cred.env;
        application-command;
Why don't you move the
RUN ./application-script.sh
below
COPY app-config.conf /etc/app/app-config.conf
It looks like the app is running before the env conf is available to it.
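For clarity, a sketch of the reordered Dockerfile (using the same file names as in the question):
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y some-dependencies
# copy the config first so application-script.sh can see it when it runs
COPY app-config.conf /etc/app/app-config.conf
COPY application-script.sh application-script.sh
RUN ./application-script.sh
CMD ["/bin/bash", "-c", "application-command"]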

PreStop hook in kubernetes never gets executed

I am trying to create a little Pod example with two containers that share data via an emptyDir volume. In the first container I am waiting a couple of seconds before it gets destroyed.
In the postStart I am writing a file to the shared volume with the name "started", in the preStop I am writing a file to the shared volume with the name "finished".
In the second container I am looping for a couple of seconds outputting the content of the shared volume but the "finished" file never gets created. Describing the pod doesn't show an error with the hooks either.
Maybe someone has an idea of what I am doing wrong:
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-example
  labels:
    app: shared-data-example
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: first-container
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "for i in {1..4}; do echo Welcome $i;sleep 1;done"]
      imagePullPolicy: Never
      env:
        - name: TERM
          value: xterm
      volumeMounts:
        - name: shared-data
          mountPath: /myshareddata
      lifecycle:
        preStop:
          exec:
            command: ["/bin/sh", "-c", "echo First container finished > /myshareddata/finished"]
        postStart:
          exec:
            command: ["/bin/sh", "-c", "echo First container started > /myshareddata/started"]
    - name: second-container
      image: ubuntu
      command: ["/bin/bash"]
      args: ["-c", "for i in {1..20}; do ls /myshareddata;sleep 1;done"]
      imagePullPolicy: Never
      env:
        - name: TERM
          value: xterm
      volumeMounts:
        - name: shared-data
          mountPath: /myshareddata
  restartPolicy: Never
This is happening because the final status of your pod is Completed and the applications inside the containers stopped on their own, without any external call.
Kubernetes runs the preStop hook only when a pod receives an external signal to stop. Hooks were made to implement a graceful custom shutdown for the applications inside a pod when you need to stop it. In your case, your application already stops gracefully by itself, so Kubernetes has no reason to call the hook.
If you want to check how the hook works, you can try to create a Deployment and then trigger a rolling update, for example by changing its image. In that case, Kubernetes will stop the old version of the application and the preStop hook will be called.
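A minimal sketch of that experiment (the Deployment name and nginx image are chosen purely for illustration):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prestop-demo                # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prestop-demo
  template:
    metadata:
      labels:
        app: prestop-demo
    spec:
      containers:
        - name: web
          image: nginx:1.24         # any long-running image works here
          lifecycle:
            preStop:
              exec:
                # write to the main process's stdout so the message shows up in the container logs
                command: ["/bin/sh", "-c", "echo preStop fired > /proc/1/fd/1; sleep 5"]
Then trigger a rolling update, e.g. kubectl set image deployment/prestop-demo web=nginx:1.25; the old pod is asked to terminate, so its preStop hook runs before shutdown.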