Helm and command with && - kubernetes-helm

I have the following Helm Job for a Django application to run the migrations and to collect the static files:
apiVersion: batch/v1
kind: Job
metadata:
  name: django-app-job
  labels:
    app.kubernetes.io/name: django-app-job
    helm.sh/chart: django-app
    app.kubernetes.io/instance: staging-admin
    app.kubernetes.io/managed-by: Tiller
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "1"
    "helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/name: django-app-job
        app.kubernetes.io/instance: foobar
    spec:
      restartPolicy: OnFailure
      containers:
      - name: django-app
        command:
        - "/bin/bash"
        - "-c"
        - "python3 ./manage.py migrate"
        - "&&"
        - "python3 ./manage.py collectstatic --noinput"
But this only runs the migrate step to update the DB schema; it never runs collectstatic, even when the migration succeeds. The Job itself doesn't fail either, because if it did the Helm upgrade would fail, and that doesn't happen.
But if I change the command to this:
containers:
- name: django-app
  command:
  - "/bin/bash"
  - "-c"
  - "python3 ./manage.py migrate && python3 ./manage.py collectstatic --noinput"
now the Job runs both the migrations and collectstatic. What is the difference between the two commands?

At a low level, all Unix commands are actually executed as a sequence of words. Normally the shell splits up command lines into words for you, but in a Kubernetes manifest, you have to manually specify one word at a time.
In your example, the Bourne shell's -c option reads only the next single word and executes it as a command, applying the normal shell rules. Any remaining words become positional parameters, which only matter if the command string happens to reference variables like $1.
You can demonstrate this outside of Kubernetes in your local shell, using quoting to force the shell to break up words the way you want:
# Option one
'/bin/sh' '-c' 'echo foo' '&&' 'echo bar'
# Prints "foo"
# Option two
'/bin/sh' '-c' 'echo foo && echo bar'
# Prints "foo", "bar"
One trick that shows up somewhat often is to use YAML block scalars to write a single string across multiple lines, giving something that sort of looks like a shell script but isn't actually.
command: ['/bin/sh', '-c']
args:
- >-
  python3 ./manage.py migrate
  &&
  python3 ./manage.py collectstatic --noinput
Note that args must be a list, so the folded scalar is written as a single list item; >- folds the three lines back into one shell command line.
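If you prefer an actual multi-line script over the folded form, a literal block scalar (|) preserves the newlines; a minimal sketch of the same Job command, with set -e standing in for the && chaining:
command: ['/bin/sh', '-c']
args:
- |
  set -e
  python3 ./manage.py migrate
  python3 ./manage.py collectstatic --noinput
set -e aborts the script on the first failing command, which is exactly what && provided in the single-line form.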

Related

k8s cronjob run next command if current fails

I have a cronjob like below
apiVersion: batch/v1
kind: CronJob
metadata:
  name: foo-bar
  namespace: kube-system
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: foo-cleaner
          containers:
          - name: cleanup
            image: bitnami/kubectl
            imagePullPolicy: IfNotPresent
            command:
            - "bin/bash"
            - -c
            - command1;
              command2;
              command3;
            - new_command1;
              new_command2;
              new_command3;
Sometimes command2 fails and throws an error, and the CronJob execution fails. I want to run new_command1 even if any command in the previous block fails.
In the command section you need to pass the command and args as below:
command: ["/bin/sh", "-c"]
args: ["command 1 || command 2; command 3 && command 4"]
The command ["/bin/sh", "-c"] is to run a shell, and execute the following instructions. The args are then passed as commands to the shell.
In shell scripting a semicolon separates commands, and && conditionally runs the following command if the first succeeds, Grep/Pipe (||) runs command1 if it fails then runs command2 also.
As per above command it always runs command 1 if it fails or gives any error then it continues to run command2. If command3 succeeds then only it runs command4. Change accordingly in your Yaml and have a try.
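Applied to the CronJob above, a minimal sketch (command1 through new_command3 are the question's placeholders); joining the two blocks with ; guarantees the second block starts even when something in the first block failed:
command:
- "/bin/bash"
- -c
- "command1; command2; command3; new_command1; new_command2; new_command3"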
Refer to the Kubernetes documentation on CronJobs.

Kubernetes POD Command and argument

I am learning kubernetes and have the following question related to command and argument syntax for POD.
Are there any specific syntax that we need to follow to write a shell script kind of code in the arguments of a POD? For example
In the following code, how would I know that while true needs to end with a semicolon? Why is there no semicolon after do, but there is one after the if test, etc.?
while true;
do
  echo $i;
  if [ $i -eq 5 ];
  then
    echo "Exiting out";
    break;
  fi;
  i=$((i+1));
  sleep "1";
done
We don't write shell scripts this way (from a semicolon perspective), so why do we have to do this in a Pod?
I tried the command in /bin/bash format as well
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bash
  name: bash
spec:
  containers:
  - image: nginx
    name: nginx
    args:
    - /bin/bash
    - -c
    - >
      for i in 1 2 3 4 5
      do
        echo "Welcome $i times"
      done
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Error with new code
/bin/bash: -c: line 2: syntax error near unexpected token `echo'
/bin/bash: -c: line 2: ` echo "Welcome $i times"'
Are there any specific syntax that we need to follow to write a shell script kind of code in the arguments of a POD?
No; the shell syntax is the same inside and outside of Kubernetes.
...how will I know that the while true need to end with a semicolon
Use | for your text block to be treated like an ordinary shell script:
...
args:
- /bin/bash
- -c
- |
  for i in 1 2 3 4 5
  do
    echo "Welcome $i times"
  done
When you use >, your text block is folded: each newline between equally-indented lines is replaced with a space, so your command becomes invalid. If you want your command on a single line, separate the statements with ; like you would in an ordinary terminal; this is standard shell scripting and not Kubernetes-specific.
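For illustration, with > the question's block reaches bash roughly as the string below: the equally-indented for and do lines fold onto one line, while the more-indented echo line keeps its newlines, which is exactly why bash complains at the echo on line 2:
for i in 1 2 3 4 5 do
  echo "Welcome $i times"
done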
If you must use >, you need to either add empty lines between the statements or indent the continuation lines more deeply so their newlines are preserved:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: bash
  name: bash
spec:
  containers:
  - image: nginx
    name: nginx
    args:
    - /bin/bash
    - -c
    - >
      for i in 1 2 3 4 5
        do
          echo "Welcome $i times"
        done
  restartPolicy: Never
Run kubectl logs bash to see the five echoes, and kubectl delete pod bash to clean up.

Flag '-c' in kubernetes cronjobs command/args

Creating a Cron Job
What does the flag '-c' do in the Kubernetes Cronjob?
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Community wiki answer, for clarity and further searches.
@François is completely correct: /bin/sh -c comes directly from Unix and simply means "run the string that follows as a shell command". It is NOT a parameter of the Kubernetes CronJob:
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
You can also check What is /bin/sh -c?
The "-c" flag does not belong to the Cronjob, it is used by unix sh executing the command:
/bin/sh -c date; echo Hello from the Kubernetes cluster
So you need to read the documentation for unix sh, not kubernetes.
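You can check the positional-parameter behaviour quoted above in any local shell, nothing Kubernetes-specific involved:
/bin/sh -c 'echo "$0 and $1"' first second
# Prints "first and second": the words after the string became $0 and $1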

Restart a Kubernetes Job or Pod with a different command

I'm looking for a way to quickly run/restart a Job/Pod from the command line and override the command to be executed in the created container.
For context, I have a Kubernetes Job that gets executed as a part of our deploy process. Sometimes that Job crashes and I need to run certain commands inside the container the Job creates to debug and fix the problem (subsequent Jobs then succeed).
The way I have done this so far is:
Copy the YAML of the Job, save into a file
Clean up the YAML (delete Kubernetes-managed fields)
Change the command: field to tail -f /dev/null (so that the container stays alive)
kubectl apply -f job.yaml && kubectl get all && kubectl exec -ti pod/foobar bash
Run commands inside the container
kubectl delete job/foobar when I am done
This is very tedious. I am looking for a way to do something like the following
kubectl restart job/foobar --command "tail -f /dev/null"
# or even better
kubectl run job/foobar --exec --interactive bash
I cannot use the run command to create a Pod:
kubectl run --image xxx -ti
because the Job I am trying to restart has certain volumeMounts and other configuration I need to reuse. So I would need something like kubectl run --from-config job/foobar.
Is there a way to achieve this or am I stuck with juggling the YAML definition file?
Edit: the Job YAML looks approx. like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: database-migrations
  labels:
    app: myapp
    service: myapp-database-migrations
spec:
  backoffLimit: 0
  template:
    metadata:
      labels:
        app: myapp
        service: myapp-database-migrations
    spec:
      restartPolicy: Never
      containers:
      - name: migrations
        image: registry.example.com/myapp:977b44c9
        command:
        - "bash"
        - "-c"
        - |
          set -e -E
          echo "Running database migrations..."
          do-migration-stuff-here
          echo "Migrations finished at $(date)"
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: /home/example/myapp/app/config/conf.yml
          name: myapp-config-volume
          subPath: conf.yml
        - mountPath: /home/example/myapp/.env
          name: myapp-config-volume
          subPath: .env
      volumes:
      - name: myapp-config-volume
        configMap:
          name: myapp
      imagePullSecrets:
      - name: k8s-pull-project
The commands you suggested don't exist. Take a look at this reference where you can find all available commands.
Based on that documentation, the task of a Job is to create one or more Pods and retry their execution until the specified number of them terminate successfully; the Job then tracks the successful completions. You cannot simply update the Job, because these fields are not updatable. To do what you want, you have to delete the current Job and create it again.
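That delete-and-recreate cycle can be done in a single step with kubectl replace --force, which the plugin below also relies on; a sketch, assuming the Job manifest is stored in job.yaml:
# deletes the existing Job (if present) and creates it again from the file
kubectl replace --force -f job.yaml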
I recommend keeping all your configuration in files. If you have a problem with a Job's commands, common practice is to modify those settings in the YAML and apply it to the cluster; by storing the configuration in files you also have a backup if your deployment crashes.
If you are interested in improving this task, you can try the two approaches described below.
First, I created several files:
example job (job.yaml):
apiVersion: batch/v1
kind: Job
metadata:
  name: test1
spec:
  template:
    spec:
      containers:
      - name: test1
        image: busybox
        command: ["/bin/sh", "-c", "sleep 300"]
        volumeMounts:
        - name: foo
          mountPath: "/script/foo"
      volumes:
      - name: foo
        configMap:
          name: my-conf
          defaultMode: 0755
      restartPolicy: OnFailure
patch-job.yaml (the patch file used by the plugin below):
spec:
  template:
    spec:
      containers:
      - name: test1
        image: busybox
        command: ["/bin/sh", "-c", "echo 'patching test' && sleep 500"]
and configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-conf
data:
  test: |
    #!/bin/sh
    echo "script test"
If you want to automate this process, you can use a kubectl plugin:
A plugin is a standalone executable file, whose name begins with kubectl-. To install a plugin, move its executable file to anywhere on your PATH.
There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the kubectl binary. A plugin determines which command path it wishes to implement based on its name.
Here is a plugin file that can replace your Job:
kubectl-job:
#!/bin/bash
kubectl patch -f job.yaml -p "$(cat patch-job.yaml)" --dry-run=client -o yaml \
  | kubectl replace --force -f - \
  && kubectl wait --for=condition=ready pod -l job-name=test1 \
  && kubectl exec -it $(kubectl get pod -l job-name=test1 --no-headers -o custom-columns=":metadata.name") -- /bin/sh
This command uses an additional file (patch-job.yaml, see this link) in which we put our changes for the Job.
Then you should make the file executable and move it onto your PATH:
sudo chmod +x ./kubectl-job
sudo mv ./kubectl-job /usr/local/bin
It's all done. Right now you can use it.
$ kubectl job
job.batch "test1" deleted
job.batch/test1 replaced
pod/test1-bdxtm condition met
pod/test1-nh2pv condition met
/ #
As you can see, the Job has been replaced (deleted and created).
You can also use a single-line command; here is an example:
kubectl get job test1 -o json \
  | jq "del(.spec.selector)" \
  | jq "del(.spec.template.metadata.labels)" \
  | kubectl patch -f - --patch '{"spec": {"template": {"spec": {"containers": [{"name": "test1", "image": "busybox", "command": ["/bin/sh", "-c", "sleep 200"]}]}}}}' --dry-run=client -o yaml \
  | kubectl replace --force -f -
With this command you can change your Job, entering the parameters "by hand". Here is the output:
job.batch "test1" deleted
job.batch/test1 replaced
As you can see this solution works as well.

Issue Deleting Temporary pods

I am trying to delete temporary pods and other artifacts using helm delete, and I want this helm delete to run on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
  namespace: mynamespace
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args: ["delete", "--purge", "$(helm ls -a -q temppods.*)"]
            env:
            - name: TILLER_NAMESPACE
              value: mynamespace-build
            - name: KUBECONFIG
              value: /kube/config
            volumeMounts:
            - mountPath: /kube
              name: kubeconfig
          restartPolicy: OnFailure
          volumes:
          - name: kubeconfig
            configMap:
              name: cronjob-kubeconfig
I ran
oc create -f ./mycron.yaml
This created the cronjob
Every fifth minute a pod is created and the helm command that is part of the cron job runs.
I am expecting the artifacts/pods whose names begin with temppods to be deleted.
What i see in the logs of the pod is:
Error: invalid release name, must match regex ^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])+$ and the length must not longer than 53
The CronJob container spec is trying to delete a release named (literally):
$(helm ls -a -q temppods.*)
No release with that literal name exists, and the name violates helm's expected naming conventions, hence the regex error.
Why
The alpine/helm:2.9.1 container image has an entrypoint of helm. This means any arguments are passed directly to the helm binary via exec. No shell expansion ($()) occurs, as there is no shell running.
Fix
To do what you are expecting, you can use sh, which is available in alpine images (-u errors on unset variables, -e exits on the first failure, -x traces the commands, and -c reads them from the string):
sh -uexc 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases'
In a Pod spec this translates to:
spec:
  containers:
  - name: cronbox
    command: ['sh']
    args:
    - '-uexc'
    - 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases;'
Helm
As a side note, helm is not the most reliable tool when clusters or releases get into vague states. Running multiple helm commands interacting with the same release at the same time usually spells disaster, and on the surface that seems likely here. Maybe there is a question to ask about other ways to achieve the process you are implementing?