Helm chart init container with multiple commands

We are defining an initContainer for our Helm chart. The relevant part is the following:
initContainers:
- name: "set-volumes-init"
  image: "IMAGE AND TAG"
  command: ['sh', '-c', 'COMMAND 1 && COMMAND 2 && COMMAND 3']
  volumeMounts:
  - name: volume-summary
    mountPath: /usr/summ
The question is: how do I make the command include or omit the individual commands depending on whether a value is defined?
For example, if the value podx.val2 is defined, I want COMMAND 2 to be included, but if it is not, I don't want it.
The same goes for the other COMMANDs.

If I were doing this, I'd build a custom image that contained the shell script, and have it controlled by environment variables.
#!/bin/sh
if [ -n "$DO_COMMAND_2" ]; then
  command2
fi
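A minimal sketch of how the chart template could wire that up, assuming the script above is baked into the custom image as /entrypoint.sh and using a top-level val2 value as a stand-in for the question's podx.val2 (both names are placeholders):
initContainers:
- name: "set-volumes-init"
  image: "IMAGE AND TAG"
  # /entrypoint.sh is the script shown above, baked into the custom image
  command: ['/entrypoint.sh']
  {{- if .Values.val2 }}
  # only set when the value is defined, so the script runs command2
  env:
  - name: DO_COMMAND_2
    value: "true"
  {{- end }}
  volumeMounts:
  - name: volume-summary
    mountPath: /usr/summ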
The style you've written could work with a combination of YAML block syntax and Helm conditionals. This is probably harder to maintain and test, but something like this should work:
command:
- sh
- -c
- >-
  command1
  {{- if .Values.val2 }}
  && command2
  {{- end }}
  && command3
The YAML >- block-scalar syntax folds everything indented after it into a single line, and the {{- ... }} dashes trim away the template-only lines, which together avoid most of the whitespace-control issues.
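If you want to see exactly what that folds down to, helm template renders the chart locally without installing anything (the chart path and value name here are just examples):
# render the chart locally and inspect the generated command
helm template ./chart-directory --set val2=something
# compare with the value unset
helm template ./chart-directory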

Related

Kubernetes deployment helm running first command in shell then run outside the shell

I am trying to run some commands in my K8 deployment yaml.
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - export ABC=hijkl
    - command 2
Basically, I need to run the export command in the shell. After which, it should continue to run command 2 outside the shell. I can't seem to get the syntax right (eg. am I missing &&, or double quotes etc). Can anyone help? Thanks in advance!
The Bourne shell sh -c option takes only a single command word, so anything you want to run in that shell needs to be in a single YAML list item.
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - export ABC=hijkl; command 2
You'll frequently see YAML block scalars used in a context like this, so you can have embedded newlines and it will look more like a normal shell script.
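For example, a literal block-scalar version of the same container might look like this (command 2 is still the placeholder from the question):
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      # each line in a literal (|) block keeps its newline,
      # so this reads like a small shell script
      export ABC=hijkl
      command 2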
If you're just setting an environment variable to a fixed string, you can also do that at the Kubernetes layer and skip the intermediate shell:
spec:
  containers:
  - env:
    - name: ABC
      value: hijkl
    command:
    - command
    - '2' # (note, YAML single quotes so this is read as a string)
Can you try using the export command within backticks (`) or quotes (')?
Below is a reference:
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - export 'ABC=hijkl'
    - command 2

Kubernetes POD Command and argument

I am learning kubernetes and have the following question related to command and argument syntax for POD.
Is there any specific syntax we need to follow to write shell-script-like code in the arguments of a POD? For example:
In the following code, how would I know that while true needs to end with a semicolon? Why is there no semicolon after do but there is one after the if condition, etc.?
while true;
do
  echo $i;
  if [ $i -eq 5 ];
  then
    echo "Exiting out";
    break;
  fi;
  i=$((i+1));
  sleep "1";
done
We don't write shell scripts this way (semicolon-wise) in normal use, so why do we have to do it in a POD?
I tried the command with /bin/bash as well:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bash
  name: bash
spec:
  containers:
  - image: nginx
    name: nginx
    args:
    - /bin/bash
    - -c
    - >
      for i in 1 2 3 4 5
      do
        echo "Welcome $i times"
      done
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Error with new code
/bin/bash: -c: line 2: syntax error near unexpected token `echo'
/bin/bash: -c: line 2: ` echo "Welcome $i times"'
Are there any specific syntax that we need to follow to write a shell script kind of code in the arguments of a POD?
No, the shell syntax is the same as anywhere else.
...how will I know that the while true need to end with a semicolon
Use | for your text block to be treated like an ordinary shell script:
...
args:
- /bin/bash
- -c
- |
  for i in 1 2 3 4 5
  do
    echo "Welcome $i times"
  done
When you use >, your text block is merged into a single line, with each newline replaced by a space, and your command becomes invalid. If you want your command to be a single line, write it with ; separators, just as you would in an ordinary terminal. This is standard shell scripting and is not K8s-specific.
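For instance, a semicolon-separated version of the question's loop survives the folding, because it is already a single line:
args:
- /bin/bash
- -c
- >
  for i in 1 2 3 4 5; do echo "Welcome $i times"; done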
If you must use >, you need to either add blank lines between the commands or indent the continuation lines more deeply than the first one (more-indented lines in a folded scalar keep their newlines):
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: bash
  name: bash
spec:
  containers:
  - image: nginx
    name: nginx
    args:
    - /bin/bash
    - -c
    - >
      for i in 1 2 3 4 5
        do
          echo "Welcome $i times"
        done
  restartPolicy: Never
Run kubectl logs bash to see the five echoes, and kubectl delete pod bash to clean up.

Helm. Execute bash script to choose proper image

Helmfile:
spec:
  containers:
  - name: {{ .Values.app.name }}
    image: {{ .Values.image.name }} --> execute shell script here
    imagePullPolicy: Always
    ports:
    - containerPort: 8081
    env:
    - name: BACKEND_HOST
      value: {{ .Values.backend.host }}
I want to execute a bash script that checks whether this image exists; if it doesn't, another image should be used. How can I do this with Helm? Or is there another solution?
Helm doesn't have any way to call out to other processes, make network connections, or do any other sort of external lookup (with one specific exception where it can read Kubernetes objects out of the cluster). You'd have to pass this value in when you run the helm install command instead:
helm install release-name ./chart-directory \
  --set image.name=$(the command you want to run)
If this is getting run from part of some larger process, you may find it easier to write a JSON or YAML file that can be passed to the helm install -f option instead of dynamically calling out to the script; the helm install --set option has some unusual syntax and behavior. You can even go one step further and check that per-installation YAML file into source control, and have another step in your deployment pipeline notice the commit and actually do the installation ("GitOps" style).
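A sketch of that per-installation file approach, with a hypothetical file name and values matching the template above:
# values-staging.yaml, generated or hand-written per environment
image:
  name: registry.example.com/my-app:1.2.3
backend:
  host: backend.staging.example.com
Then the install step becomes:
helm install release-name ./chart-directory -f values-staging.yaml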

Helm and command with &&

I have the following Helm Job for a Django application to run the migrations and to collect the static files:
apiVersion: batch/v1
kind: Job
metadata:
  name: django-app-job
  labels:
    app.kubernetes.io/name: django-app-job
    helm.sh/chart: django-app
    app.kubernetes.io/instance: staging-admin
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: django-app-job
        app.kubernetes.io/instance: foobar
    spec:
      restartPolicy: OnFailure
      containers:
      - name: django-app
        command:
        - "/bin/bash"
        - "-c"
        - "python3 ./manage.py migrate"
        - "&&"
        - "python3 ./manage.py collectstatic --noinput"
But this only runs the migrate step to update the DB schema; it never runs collectstatic, even when the migration completes successfully. The job doesn't fail either, because if it did the upgrade would fail, and that doesn't happen.
But if I change the command to this:
containers:
- name: django-app
  command:
  - "/bin/bash"
  - "-c"
  - "python3 ./manage.py migrate && python3 ./manage.py collectstatic --noinput"
now the job runs the migrations and collectstatic. What is the difference between the two commands?
At a low level, all Unix commands are actually executed as a sequence of words. Normally the shell splits up command lines into words for you, but in a Kubernetes manifest, you have to manually specify one word at a time.
In your example, the Bourne shell sh -c option reads the next single word only and executes it as a command, applying the normal shell rules. Any remaining words are used as positional parameters if the command happens to use variables like $1.
You can demonstrate this outside of Kubernetes in your local shell, using quoting to force the shell to break up words the way you want:
# Option one
'/bin/sh' '-c' 'echo foo' '&&' 'echo bar'
# Prints "foo"
# Option two
'/bin/sh' '-c' 'echo foo && echo bar'
# Prints "foo", "bar"
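To see the positional-parameter behaviour mentioned above, the leftover words show up as $0, $1, and so on inside the -c script (again just a local-shell illustration):
# Option three
'/bin/sh' '-c' 'echo "script name: $0, first arg: $1"' 'foo' 'bar'
# Prints "script name: foo, first arg: bar"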
One trick that shows up somewhat often is to use YAML block scalars to write a single string across multiple lines, giving something that sort of looks like a shell script but isn't actually.
command: ['/bin/sh', '-c']
args:
- >-
  python3 ./manage.py migrate
  &&
  python3 ./manage.py collectstatic --noinput

How to pass args to pods based on Ordinal Index in StatefulSets?

Is it possible to pass different args to pods based on their ordinal index in StatefulSets? I didn't find the answer in the StatefulSets documentation. Thanks!
The recommended way is described at https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/#statefulset:
# Generate server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# ${ordinal} now holds the replica number
server_id=$((100 + $ordinal))
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
  : # do something
else
  : # do something else
fi
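For context, in the linked documentation a script like this runs in an init container of the StatefulSet; a trimmed-down sketch of that wiring (the image and the mount path are placeholders, not the exact manifest from the docs):
spec:
  initContainers:
  - name: init-config
    image: "IMAGE WITH BASH"   # placeholder; needs bash for BASH_REMATCH
    command:
    - bash
    - -c
    - |
      [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
      ordinal=${BASH_REMATCH[1]}
      # write the ordinal somewhere the main container can read it
      echo "$ordinal" > /mnt/conf.d/ordinal
    volumeMounts:
    - name: conf
      mountPath: /mnt/conf.d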
I don't know of a non-hacky way to do it, but I know a hack that works. First, each pod in a StatefulSet gets a unique, predictable name. It can discover that name via the Downward API or just by calling hostname. So I have a shell script as the entrypoint to my container, and that script gets its pod/hostname. From there it calls the "real" executable using command-line args appropriate for the specific host.
For example, one of my scripts expects the pod name to be mapped into the environment as POD_NAME via the Downward API. It then does something like:
#!/bin/bash
pet_number=${POD_NAME##*-}
if [ "$pet_number" = "0" ]
then
  : # stuff here
fi
# etc.
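The Downward API mapping that the script relies on looks roughly like this in the pod spec (the container name and image are placeholders; the fieldRef itself is standard):
containers:
- name: my-app
  image: "IMAGE AND TAG"
  command: ["/entrypoint.sh"]
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name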
I found a less hacky way to pass the ordinal index into the container, using lifecycle hooks:
containers:
- name: cp-kafka
  imagePullPolicy: Always
  image: confluentinc/cp-kafka:4.0.0
  resources:
    requests:
      memory: "2Gi"
      cpu: 0.5
  ports:
  - containerPort: 9093
    name: server
    protocol: TCP
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "export KAFKA_BROKER_ID=${HOSTNAME##*-}"]
  env:
  - name: KAFKA_ZOOKEEPER_CONNECT
    value: zk-cs.default.svc.cluster.local:2181
  - name: KAFKA_ADVERTISED_LISTENERS
    value: PLAINTEXT://localhost:9093
In case you want to track the progress of this, the ticket in question for this feature is here: https://github.com/kubernetes/kubernetes/issues/30427
The proposal involves putting the ordinal as a label on the pod and then using the Downward API to pull that out into an environment variable or something similar.
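If and when the ordinal is exposed as a label, pulling a pod label into an environment variable with the Downward API would look like this (the label key example.com/ordinal is purely hypothetical):
env:
- name: ORDINAL
  valueFrom:
    fieldRef:
      # hypothetical label key; substitute whatever label actually carries the ordinal
      fieldPath: metadata.labels['example.com/ordinal']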
You can avoid using the Downward API by using HOSTNAME:
command:
- bash
- -c
- |
  ordinal=${HOSTNAME##*-}
  if [[ "$ordinal" = "0" ]]; then
    ...
  else
    ...
  fi