Flag '-c' in Kubernetes CronJob command/args

I'm following the "Creating a Cron Job" example. What does the '-c' flag do in this Kubernetes CronJob?
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Community wiki, for clarity and future searches.
@François is completely correct. /bin/sh -c comes directly from Unix and simply means that the shell reads the command from the string that follows. It is NOT a parameter of the Kubernetes CronJob:
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
You can also check "What is /bin/sh -c?"

The "-c" flag does not belong to the Cronjob, it is used by unix sh executing the command:
/bin/sh -c date; echo Hello from the Kubernetes cluster
So you need to read the documentation for unix sh, not kubernetes.
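A quick way to see both behaviors described in the man page excerpt above (runnable in any POSIX shell; myprog and hello are arbitrary example arguments):
# -c takes the script from the next argument;
# any later arguments become $0, $1, ...
/bin/sh -c 'echo "program=$0 first=$1"' myprog hello
# prints: program=myprog first=hello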

Related

k8s cronjob run next command if current fails

I have a CronJob like the one below:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: foo-bar
  namespace: kube-system
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: foo-cleaner
          containers:
          - name: cleanup
            image: bitnami/kubectl
            imagePullPolicy: IfNotPresent
            command:
            - "bin/bash"
            - -c
            - command1;
              command2;
              command3;
            - new_command1;
              new_command2;
              new_command3;
Sometimes command2 fails, throws an error, and the CronJob execution fails. I want new_command1 to run even if any command in the previous block fails.
In the command section you need to pass the command and args as below:
command: ["/bin/sh","-c"] args: ["command 1 || command 2; Command 3 && command 4"]
The command ["/bin/sh", "-c"] is to run a shell, and execute the following instructions. The args are then passed as commands to the shell.
In shell scripting a semicolon separates commands, and && conditionally runs the following command if the first succeeds, Grep/Pipe (||) runs command1 if it fails then runs command2 also.
As per above command it always runs command 1 if it fails or gives any error then it continues to run command2. If command3 succeeds then only it runs command4. Change accordingly in your Yaml and have a try.
Refer to the Kubernetes CronJob documentation.
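Applied to the manifest above, a minimal sketch (command1..command3 and new_command1..new_command3 are the placeholders from the question; note the shell path should be /bin/bash, not bin/bash):
command: ["/bin/bash", "-c"]
# a plain ';' between the two blocks means the new_command block
# runs even if command2 failed
args: ["command1; command2; command3; new_command1; new_command2; new_command3"]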

Script in a pod is not getting executed

I have an EKS cluster and an RDS instance (MariaDB). I am trying to make a backup of given databases through a script in a CronJob. The CronJob object looks like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysqldump
  namespace: mysqldump
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mysql-backup
            image: viejo/debian-mysqldump:latest
            envFrom:
            - configMapRef:
                name: mysqldump-config
            args:
            - /bin/bash
            - -c
            - /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
            resources:
              limits:
                cpu: "0.5"
                memory: "0.5Gi"
          restartPolicy: OnFailure
The script is called mysqldump.sh and gets all necessary details from a ConfigMap object. It dumps the databases listed in the MYSQLDUMP_DATABASES environment variable and moves the dump to an S3 bucket.
Note: I am going to move some variables to a Secret, but first I need this to work.
What happens is: nothing. The script never gets executed. I tried putting an "echo starting the backup" before the script and an "echo backup ended" after it, but I don't see either of them. If I access the container and execute the exact same command manually, it works:
root@mysqldump-27550908-sjwfm:/# /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
root@mysqldump-27550908-sjwfm:/#
Can anyone point out a possible issue?
Try changing args to command:
...
command:
- /bin/bash
- -c
- /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
...
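Why this likely helps (an assumption on my part; it depends on how viejo/debian-mysqldump:latest is built): in Kubernetes, command replaces the image's ENTRYPOINT, while args only replaces its CMD and is appended to whatever ENTRYPOINT the image defines. If that image sets its own ENTRYPOINT, the original args never start a shell at all. A minimal sketch of the mapping:
# Kubernetes field -> container runtime equivalent:
#   command        -> ENTRYPOINT (replaces it)
#   args           -> CMD (replaces it; appended after ENTRYPOINT)
command: ["/bin/bash", "-c"]       # forces a shell as the container process
args: ["/root/mysqldump.sh ..."]   # placeholder for the script invocation above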

cronjob yml file with wget command

Hi, I'm new to Kubernetes. I'm trying to run a wget command in a cronjob.yml file to fetch data from a URL each day. For now I'm testing it with a 1-minute schedule. I also added an echo command just to get some response from the job. Below is my YAML file: I change directory to the folder where I want to save the data and pass the URL of the site I'm fetching from. I tried the URL in a terminal with wget url and it works, downloading the JSON file behind the URL.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reference
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: reference
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            - cd /mnt/c/Users/path_to_folder
            - wget {url}
          restartPolicy: OnFailure
When I create the job and watch the pod logs, nothing happens with the URL; I don't get any response.
Commands I run are:
kubectl create -f cronjob.yml
kubectl get pods
kubectl logs <pod_name>
In return I only get the output of the date command.
When I leave just the wget command, nothing happens; in the pod STATUS I see CrashLoopBackOff, so the command fails to run.
command:
- cd /mnt/c/Users/path_to_folder
- wget {url}
What should the wget command look like in cronjob.yml?
The command in Kubernetes is the equivalent of ENTRYPOINT in Docker. For any container there is only one entrypoint process: either the default entrypoint baked into the image, or the one supplied via command.
Here you are using /bin/sh as that single process and everything else as its arguments. The way you were executing /bin/sh -c, only the next list item, date; echo Hello from the Kubernetes cluster, is the command string; the cd and wget items are NOT executed (they merely become positional parameters). Change your manifest to the following to feed everything to /bin/sh as one block. Note that all the commands fit into a single argument.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: reference
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: reference
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster; cd /mnt/c/Users/path_to_folder; wget {url}
          restartPolicy: OnFailure
To illustrate the problem, check the following examples. Note that only the first argument after -c is executed as the command string:
$ /bin/sh -c date
Tue 24 Aug 2021 12:28:30 PM CDT
$ /bin/sh -c echo hi

$ /bin/sh -c 'echo hi'
hi
$ /bin/sh -c 'echo hi && date'
hi
Tue 24 Aug 2021 12:28:45 PM CDT
$ /bin/sh -c 'echo hi' date   # <----- your case is similar to this; no date printed
hi
-c: Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand and the positional parameters ($1, $2, etc.) set from the remaining argument operands.
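A small refinement of the fixed manifest above ({url} and the path are the placeholders carried over from the question): chaining cd and wget with && makes wget run only if the cd succeeded, so a wrong path fails visibly instead of downloading into an unexpected directory:
command:
- /bin/sh
- -c
# '&&' aborts the chain if cd fails; ';' would continue regardless
- date; cd /mnt/c/Users/path_to_folder && wget {url}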

Pass in Dynamic Formatted Datetime to K8s Container Config

I have a CronJob that runs a process in a container in Kubernetes.
This process takes in a time window that is defined by a --since and --until flag. This time window needs to be defined at container start time (when the cron is triggered) and is a function of the current time. An example running this process would be:
$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
So for the example above, I would like the time window to be from 1 hour ago to 1 hour in the future. Is there a way in Kubernetes to pass in a formatted datetime as a command argument to a process?
An example of what I am trying to do would be the following config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-process
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-process
            image: my-image
            args:
            - my-process
            - --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")
            - --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
When doing this, the literal string "$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")" would be passed in as the --since flag.
Is something like this possible? If so, how would I do it?
Note that in your CronJob you don't run bash or any other shell, and command substitution is a shell feature: without a shell it will not work. In your example only one command, my-process, is started in the container, and as it is not a shell, it is unable to perform command substitution.
This one:
$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
will work properly because it is started in a shell, so it can take advantage of shell features such as the mentioned command substitution.
One thing: date -v -1H +"%Y-%m-%dT%H:%M:%SZ" doesn't expand properly in a bash shell with the default GNU/Linux date implementation. Among other things, the -v option is not recognized, so I guess you're running it on macOS or some kind of BSD system. In my examples below I will use the date syntax that works on Debian.
So for testing it on GNU/Linux it will be something like this:
date --date='-1 hour' +"%Y-%m-%dT%H:%M:%SZ"
For testing purposes I tried it with the simple CronJob from this example, with some modifications:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            env:
            - name: FROM
              value: $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ")
            - name: TILL
              value: $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
            args:
            - /bin/sh
            - -c
            - date; echo from $(FROM) till $(TILL)
          restartPolicy: OnFailure
It works properly: Kubernetes expands $(FROM) and $(TILL) in args to the literal values of those env variables, which are themselves $(date ...) expressions, so the shell then performs the command substitution at run time. Below you can see the result of the CronJob execution:
$ kubectl logs hello-1569947100-xmglq
Tue Oct 1 16:25:11 UTC 2019
from 2019-10-01T15:25:11Z till 2019-10-01T17:25:11Z
Apart from the example with environment variables, I tested it with the following code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            args:
            - /bin/sh
            - -c
            - date; echo from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
          restartPolicy: OnFailure
and as you can see here command substitution also works properly:
$ kubectl logs hello-1569949680-fk782
Tue Oct 1 17:08:09 UTC 2019
from 2019-10-01T16:08:09Z till 2019-10-01T18:08:09Z
It works properly because in both examples we first spawn a shell in the container, and the shell then runs the other commands, such as the simple echo, provided as its argument. You can use your my-process command instead of echo; you'll just need to provide it on one line with all its arguments, like this:
args:
- /bin/sh
- -c
- my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
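Keep in mind the earlier caveat about date flavors: -v is BSD/macOS syntax. If your container image ships GNU date (as debian does), the equivalent line would be (a sketch, my-process being the question's placeholder):
args:
- /bin/sh
- -c
- my-process --since=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ') --until=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')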
This example will not work, as there is no shell involved: the echo command, not being a shell, is unable to perform command substitution, which is a shell feature:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            args:
            - /bin/echo
            - from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
          restartPolicy: OnFailure
and the result will be a literal string:
$ kubectl logs hello-1569951180-fvghz
from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
which is similar to your case: your my-process command, like echo, isn't a shell, so it cannot perform command substitution.
To sum up: the solution is to wrap your command in a shell invocation, passing it to the shell as an argument, as the first two examples do with echo.
Maybe it is more clearly visible in the following example, with slightly different syntax:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            command: ["/bin/sh","-c"]
            args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ'); echo from $FROM till $TILL"]
          restartPolicy: OnFailure
man bash says:
-c If the -c option is present, then commands are read from the first non-option argument command_string.
so command: ["/bin/sh","-c"] basically means run a shell and execute following commands which then we pass to it using args. In bash commands should be separated with semicolon ; so they are run independently (subsequent command is executed no matter what was the result of executing previous command/commands).
In the following fragment:
args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
we provide three separate commands to /bin/sh -c:
FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ')
which sets the FROM shell variable to the output of the date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ' command,
TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')
which sets the TILL shell variable to the output of the date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ' command,
and finally we run
echo from $FROM till $TILL
which uses both variables.
Exactly the same can be done with any other command.
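For instance, applied to the question's my-process (a sketch; SINCE and UNTIL are arbitrary variable names):
command: ["/bin/sh","-c"]
# compute both bounds first, then hand them to the process
args: ["SINCE=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); UNTIL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ'); my-process --since=$SINCE --until=$UNTIL"]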

Issue Deleting Temporary pods

I am trying to delete temporary pods and other artifacts using helm delete, and I want to run this helm delete on a schedule. Here is my standalone command, which works:
helm delete --purge $(helm ls -a -q temppods.*)
However, if I try to run this on a schedule, as below, I run into issues.
Here is what mycron.yaml looks like:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronbox
  namespace: mynamespace
spec:
  serviceAccount: cron-z
  successfulJobsHistoryLimit: 1
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronbox
            image: alpine/helm:2.9.1
            args: ["delete", "--purge", "$(helm ls -a -q temppods.*)"]
            env:
            - name: TILLER_NAMESPACE
              value: mynamespace-build
            - name: KUBECONFIG
              value: /kube/config
            volumeMounts:
            - mountPath: /kube
              name: kubeconfig
          restartPolicy: OnFailure
          volumes:
          - name: kubeconfig
            configMap:
              name: cronjob-kubeconfig
I ran
oc create -f ./mycron.yaml
This created the CronJob.
Every 5 minutes a pod is created and the helm command that is part of the cron job runs.
I expect the artifacts/pods whose names begin with temppods to be deleted.
What I see in the logs of the pod is:
Error: invalid release name, must match regex ^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])+$ and the length must not longer than 53
The CronJob container spec is trying to delete a release named (literally):
$(helm ls -a -q temppods.*)
This release doesn't exist, and the name fails helm's expected naming conventions.
Why
The alpine/helm:2.9.1 container image has an entrypoint of helm. This means any arguments are passed directly to the helm binary via exec. No shell expansion ($()) occurs because there is no shell running.
Fix
To do what you are expecting, you can use sh, which is available in alpine images:
sh -uexc 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases'
In a Pod spec this translates to:
spec:
  containers:
  - name: cronbox
    command: ['sh']
    args:
    - '-uexc'
    - 'releases=$(helm ls -a -q temppods.*); helm delete --purge $releases;'
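One edge case worth guarding (a sketch using the same release filter): because -e is set, a run where no releases match would still invoke helm delete with no release name and fail the pod, so you may want to skip the delete when the filter comes back empty:
args:
- '-uexc'
# only delete when the filter matched at least one release
- 'releases=$(helm ls -a -q temppods.*); [ -n "$releases" ] && helm delete --purge $releases || true'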
Helm
As a side note, helm is not the most reliable tool when clusters or releases get into vague states. Running multiple helm commands that interact with the same release at the same time usually spells disaster, and on the surface that seems likely here. Maybe there is a question in other ways to achieve the process you are implementing?