How to run a cronjob every 10 seconds in Kubernetes?

"I just want to run a cronjob in Kubernetes in every 10 seconds. what would be the imperative command for that?"

You can't use the Kubernetes CronJob object to run anything more frequently than once per minute; the schedule field has one-minute granularity. You might be using the wrong tool for a process that has to run this often. See https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
Create an infinite loop on a Deployment (daemonize it)
You'll need to write an infinite loop that sleeps 10 seconds between executions, in a shell script (or whatever programming language you like best: Go, Java, Python or Ruby), and run it inside a Deployment. Here is an example with bash/sh:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cronjob-deployment
  labels:
    app: cronjob
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cronjob
  template:
    metadata:
      labels:
        app: cronjob
    spec:
      containers:
      - name: cronjob
        image: busybox
        args:
        - /bin/sh
        - -c
        - while true; do echo call ./script.sh here; sleep 10; done
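To try this out, save the manifest (the filename below is an assumption) and apply it, then follow the loop's output in the pod logs:

kubectl apply -f cronjob-deployment.yaml       # filename is an assumption
kubectl logs -f deployment/cronjob-deployment  # prints the echo line every 10 seconds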
Create 1 CronJob with several containers
If you still want to use a CronJob, you can do it with six containers inside the definition: one without delay, and the others with 10, 20, 30, 40 and 50 seconds of delay.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          # container names must be valid DNS-1123 labels (no underscores)
          - name: no-delay
            image: busybox
            args:
            - /bin/sh
            - -c
            - echo call ./script.sh here
          - name: 10-seconds
            image: busybox
            args:
            - /bin/sh
            - -c
            - sleep 10; echo call ./script.sh here
          - name: 20-seconds
            image: busybox
            args:
            - /bin/sh
            - -c
            - sleep 20; echo call ./script.sh here
          - name: 30-seconds
            image: busybox
            args:
            - /bin/sh
            - -c
            - sleep 30; echo call ./script.sh here
          - name: 40-seconds
            image: busybox
            args:
            - /bin/sh
            - -c
            - sleep 40; echo call ./script.sh here
          - name: 50-seconds
            image: busybox
            args:
            - /bin/sh
            - -c
            - sleep 50; echo call ./script.sh here
          restartPolicy: OnFailure
Of course, one of the problems you might encounter is that your executions might overlap (run concurrently). Whether that happens depends on how many seconds your process needs to run and on the time Kubernetes needs to schedule and create a container.
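For the CronJob-based approaches, Kubernetes can at least prevent two scheduled runs from overlapping: concurrencyPolicy: Forbid skips a new run while the previous Job is still active (note this only guards across runs, not between the staggered containers inside a single run). A minimal sketch:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid      # skip a scheduled run while the previous Job is still active
  startingDeadlineSeconds: 30    # count a run as missed if it cannot start within 30s of its schedule
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: [/bin/sh, -c, 'echo call ./script.sh here']
          restartPolicy: OnFailure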

If your task needs to run that frequently, cron is the wrong tool.
Aside from the fact that it simply won't launch jobs that frequently, you also risk some serious problems if the job takes longer to run than the interval between launches. Rewrite your task to daemonize and run persistently, then launch it from cron if necessary (while making sure that it won't relaunch if it's already running).
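One common way to guarantee "won't relaunch if it's already running" is a flock(1) guard around the daemonized loop; a minimal sketch, assuming flock is available in the image, with arbitrary script and lock-file paths:

#!/bin/sh
# run-once.sh: refuse to start a second copy of the loop.
# -n: fail immediately instead of waiting if the lock is already held.
exec flock -n /tmp/myjob.lock sh -c '
  while true; do
    ./script.sh    # your actual task (assumed to exist)
    sleep 10
  done
'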

You can write a script that executes six times with an interval of 10 seconds, and set the Kubernetes CronJob to run every minute. That way your script starts every minute, and it in turn executes the task every 10 seconds. Here is a script that runs the logic every 10 seconds, six times, each time the CronJob fires. It will print "hello world" every 10 seconds, six times:
#!/bin/bash -x
a=0
until [ $a -gt 5 ]
do
  echo "hello world"
  a=$(expr $a + 1)   # command substitution is required to capture expr's output
  sleep 10
done
CronJob sample:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image:
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - ./sample.sh
          restartPolicy: OnFailure
So in that way your CronJob executes every minute, which in turn starts your script, which runs your business logic every 10 seconds, six times per minute.
This is the idea you can follow to make a cron job work at sub-minute intervals, since Kubernetes does not accept schedules finer than one minute.
With this approach you still need a strategy to keep consecutive executions from overlapping.
For example, if your business logic takes 15 seconds to execute and you run it every 10 seconds, six times a minute, the iterations spill into each other. Since the logic takes 15 seconds, it should ideally run four times per minute rather than six, so you need to tweak the repetition count inside the script accordingly.
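One way to avoid hand-tuning the repetition count is to bound the loop by elapsed time rather than a fixed counter; a minimal sketch (the 50-second budget is an arbitrary choice that leaves headroom before the next per-minute run):

#!/bin/bash
# Repeat the task until the minute's budget is nearly used up,
# regardless of how long each individual run takes.
start=$(date +%s)
while [ $(( $(date +%s) - start )) -lt 50 ]; do
  echo "hello world"   # replace with your business logic
  sleep 10
done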

Related

k8s cronjob run next command if current fails

I have a CronJob like the one below:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: foo-bar
  namespace: kube-system
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: foo-cleaner
          containers:
          - name: cleanup
            image: bitnami/kubectl
            imagePullPolicy: IfNotPresent
            command:
            - "bin/bash"
            - -c
            - command1;
              command2;
              command3;
            - new_command1;
              new_command2;
              new_command3;
Sometimes command2 fails, throws an error, and the CronJob execution fails. I want to run new_command1 even if any command in the previous block fails.
In the command section you need to pass the command and args as below:
command: ["/bin/sh","-c"]
args: ["command1 || command2; command3 && command4"]
The command ["/bin/sh", "-c"] means: run a shell and execute the following instructions. The args are then passed as commands to the shell.
In shell scripting a semicolon separates commands and runs them unconditionally; && runs the following command only if the previous one succeeds; and the OR operator || runs the following command only if the previous one fails.
As per the command above, command1 always runs; if it fails or gives any error, command2 runs. command3 then runs regardless, and command4 runs only if command3 succeeds. Change your YAML accordingly and have a try.
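A quick way to convince yourself of these rules is to run the operators against true and false, commands that always succeed and always fail respectively:

$ false || echo "runs because the left side failed"
runs because the left side failed
$ true && echo "runs because the left side succeeded"
runs because the left side succeeded
$ false; echo "runs regardless of the left side"
runs regardless of the left side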
Refer to the Kubernetes CronJob documentation: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/

Script in a pod is not getting executed

I have an EKS cluster and an RDS (MariaDB). I am trying to make a backup of given databases through a script in a CronJob. The CronJob object looks like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysqldump
  namespace: mysqldump
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: mysql-backup
            image: viejo/debian-mysqldump:latest
            envFrom:
            - configMapRef:
                name: mysqldump-config
            args:
            - /bin/bash
            - -c
            - /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
            resources:
              limits:
                cpu: "0.5"
                memory: "0.5Gi"
          restartPolicy: OnFailure
The script is called mysqldump.sh and gets all the necessary details from a ConfigMap object. It dumps the databases listed in the MYSQLDUMP_DATABASES environment variable and moves the result to an S3 bucket.
Note: I am going to move some variables to a Secret, but first I need this to work.
What happens is: nothing. The script is never executed. I tried putting an "echo starting the backup" before the script and an "echo backup ended" after it, but I don't see either of them. If I access the container and execute the same exact command manually, it works:
root@mysqldump-27550908-sjwfm:/# /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
root@mysqldump-27550908-sjwfm:/#
Can anyone point out a possible issue?
Try changing args to command:
...
  command:
  - /bin/bash
  - -c
  - /root/mysqldump.sh "(${MYSQLDUMP_DATABASES})" > /proc/1/fd/1 2>/proc/1/fd/2 || echo KO > /tmp/healthcheck
...
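A likely explanation, assuming the viejo/debian-mysqldump image defines its own ENTRYPOINT (not verified here): in Kubernetes, args only overrides the image's CMD and gets appended to the existing ENTRYPOINT, while command replaces the ENTRYPOINT itself. With args alone, the shell invocation was handed to the entrypoint as plain arguments instead of being executed:

# image ENTRYPOINT (assumed): ["entrypoint.sh"]
# args only:  entrypoint.sh /bin/bash -c '/root/mysqldump.sh ...'   # bash never runs
# command:    /bin/bash -c '/root/mysqldump.sh ...'                 # the script runs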

Checking result of command in helm chart (helm-hooks)

I am trying to execute a pre-install job using Helm charts. Can someone help me get the result of the command (a parameter in the YAML file) that I put in the file below?
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-install-job
  annotations:
    "helm.sh/hook": "pre-install"
spec:
  template:
    spec:
      containers:
      - name: pre-install
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ['sh', '-c', 'touch somefile.txt && echo $PWD && sleep 15']
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 0
  backoffLimit: 3
  completions: 1
  parallelism: 1
I want to know where somefile.txt is created and where the echo output is printed. The reason I know it is working is the "sleep 15": I see a 15-second difference between the start and end times of the pod.
Any file you create in a container environment is created inside the container filesystem. Unless you've mounted some storage into the container, the file will be lost as soon as the container exits.
Anything a Kubernetes process writes to its stdout will be captured by the Kubernetes log system. You can retrieve it using kubectl logs pre-install-job-... -c pre-install.
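A minimal sketch of retrieving that output (the pod name suffix is generated, so look it up first via the job-name label the Job controller puts on its pods):

# find the pod created by the hook Job
kubectl get pods -l job-name=pre-install-job

# print what the container wrote to stdout (the echo $PWD output)
kubectl logs pre-install-job-abcde -c pre-install   # pod name is illustrative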

Handling cronjobs in a Pod with multiple containers

I have a requirement where I need to create a CronJob in Kubernetes, but the pod has multiple containers (with a single container it works fine).
Is it possible?
The requirement is something like this:
1. First container: Run the shell script to do a job.
2. Second container: run fluentbit conf to parse the log and send it.
Previously I had a Deployment in place and that worked fine, but since the Deployment was used only for 10-minute jobs, I thought I'd make it a CronJob.
Any help is really appreciated.
Also, about the CronJob, I am not sure whether a pod can support multiple containers for the same.
Thank you,
Sunny
Yes, you can create a CronJob with multiple containers. A CronJob is an abstraction on top of a Pod, so in the pod spec you can have multiple containers, just like in a normal pod. As an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          - name: app
            image: alpine
            command:
            - echo
            - Hello World!
          restartPolicy: OnFailure
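With more than one container in the pod, kubectl logs needs the -c flag to choose which container's output to read (the pod name suffix below is illustrative):

kubectl logs hello-1234567890-abcde -c hello   # output of the busybox container
kubectl logs hello-1234567890-abcde -c app     # output of the alpine container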
I agree with the answer provided by @Arghya Sadhu; it shows how you can run a multi-container Pod with a CronJob. Before that answer, though, I would like to draw attention to the comment provided by @Chris Stryczynski:
It's not clear whether the containers are run in parallel or sequentially
It is not entirely clear if the workload that you are trying to run:
The requirement is something like this:
First container: Run the shell script to do a job.
Second container: run fluentbit conf to parse the log and send it.
could run in parallel (both running at the same time) or requires a sequential approach (after X completes successfully, run Y).
If the workload can run in parallel, the answer provided by @Arghya Sadhu is correct; however, if one workload depends on another, I reckon you should be using initContainers instead of a multi-container Pod.
An example of a CronJob that uses an initContainer could be the following:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: ubuntu
            image: ubuntu
            command: [/bin/bash]
            args: ["-c","cat /data/hello_there.txt"]
            volumeMounts:
            - name: data-dir
              mountPath: /data
          initContainers:
          - name: echo
            image: busybox
            command: ["/bin/sh"]   # note the leading slash; a bare "bin/sh" would fail
            args: ["-c", "echo 'General Kenobi!' > /data/hello_there.txt"]
            volumeMounts:
            - name: data-dir
              mountPath: "/data"
          volumes:
          - name: data-dir
            emptyDir: {}
This CronJob writes specific text to a file with an initContainer, and then a "main" container displays the result. It's worth mentioning that the main container will not start if the initContainer does not complete successfully.
$ kubectl logs hello-1234567890-abcde
General Kenobi!
Additional resources:
Linchpiner.github.io: K8S multi container pods
What about a sidecar container for logging as the second container, one that keeps running and never exits? Even though the job itself may run fine, the Job's state still ends up failed.
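One common workaround, sketched below: share an emptyDir between the containers and have the main container drop a done file when it finishes; the sidecar polls for that file and exits, letting the Job complete. Only the pod template spec is shown, and start-log-shipper is a hypothetical placeholder for your real logging process:

spec:
  restartPolicy: Never
  volumes:
  - name: shared
    emptyDir: {}
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo doing the job; sleep 10; touch /shared/done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  - name: log-sidecar
    image: busybox   # stand-in for your fluent-bit image (assumption)
    # start-log-shipper is hypothetical; the loop exits once the done file appears
    command: ["/bin/sh", "-c", "start-log-shipper & while [ ! -f /shared/done ]; do sleep 2; done"]
    volumeMounts:
    - name: shared
      mountPath: /shared

When the polling loop sees /shared/done, the sidecar's shell exits, the container stops, and the Job can finish.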

Pass in Dynamic Formatted Datetime to K8s Container Config

I have a CronJob that runs a process in a container in Kubernetes.
This process takes in a time window that is defined by a --since and --until flag. This time window needs to be defined at container start time (when the cron is triggered) and is a function of the current time. An example running this process would be:
$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
So for the example above, I would like the time window to be from 1 hour ago to 1 hour in the future. Is there a way in Kubernetes to pass in a formatted datetime as a command argument to a process?
An example of what I am trying to do would be the following config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-process
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-process
            image: my-image
            args:
            - my-process
            - --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")
            - --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
When doing this, the literal string "$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ")" would be passed in as the --since flag.
Is something like this possible? If so, how would I do it?
Note that in your CronJob you don't run bash or any other shell, and command substitution is a shell feature; without a shell it will not work. In your example only one command, my-process, is started in the container, and since it is not a shell, it cannot perform command substitution.
This one:
$ my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
will work properly, because it is started in a shell and can therefore take advantage of shell features such as the mentioned command substitution.
One thing: date -v -1H +"%Y-%m-%dT%H:%M:%SZ" doesn't expand properly in a bash shell with the default GNU/Linux date implementation; among other things, the -v option is not recognized, so I guess you're using it on macOS or some kind of BSD system. In my examples below I will use a date invocation that works on Debian.
So for testing it on GNU/Linux it will be something like this:
date --date='-1 hour' +"%Y-%m-%dT%H:%M:%SZ"
For testing purposes I tried it with the simple CronJob from this example, with some modifications:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            env:
            - name: FROM
              value: $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ")
            - name: TILL
              value: $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
            args:
            - /bin/sh
            - -c
            - date; echo from $(FROM) till $(TILL)
          restartPolicy: OnFailure
It works properly. Below you can see the result of CronJob execution:
$ kubectl logs hello-1569947100-xmglq
Tue Oct 1 16:25:11 UTC 2019
from 2019-10-01T15:25:11Z till 2019-10-01T17:25:11Z
Apart from the example using environment variables, I tested it with the following code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            args:
            - /bin/sh
            - -c
            - date; echo from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
          restartPolicy: OnFailure
and as you can see here command substitution also works properly:
$ kubectl logs hello-1569949680-fk782
Tue Oct 1 17:08:09 UTC 2019
from 2019-10-01T16:08:09Z till 2019-10-01T18:08:09Z
It works properly because in both examples we first spawn a shell in the container, and the shell then runs the other commands, such as the simple echo, provided as its argument. You can use your my-process command instead of echo; you'll just need to provide it on one line with all its arguments, like this:
args:
- /bin/sh
- -c
- my-process --since=$(date -v -1H +"%Y-%m-%dT%H:%M:%SZ") --until=$(date -v +1H +"%Y-%m-%dT%H:%M:%SZ")
This example will not work, as there is no shell involved. The echo command, not being a shell, is unable to perform command substitution, which is a shell feature:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            args:
            - /bin/echo
            - from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
          restartPolicy: OnFailure
and the results will be a literal string:
$ kubectl logs hello-1569951180-fvghz
from $(date --date="-1 hour" +"%Y-%m-%dT%H:%M:%SZ") till $(date --date="+1 hour" +"%Y-%m-%dT%H:%M:%SZ")
which is similar to your case: your command, like echo, isn't a shell, so it cannot perform command substitution.
To sum up: the solution is to wrap your command in a shell invocation and pass it as the shell's argument. In the first two examples the echo command is passed, along with the other commands, as the shell's argument.
Maybe it is more clearly visible in the following example with slightly different syntax:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: debian
            command: ["/bin/sh","-c"]
            args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ'); echo from $FROM till $TILL"]
          restartPolicy: OnFailure
man bash says:
-c If the -c option is present, then commands are read from the first non-option argument command_string.
so command: ["/bin/sh","-c"] basically means: run a shell and execute the commands which we then pass to it using args. In shell scripting, commands separated with a semicolon ; are run independently (the subsequent command is executed no matter what the result of the previous command was).
In the following fragment:
args: ["FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ'); TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ') ;echo from $FROM till $TILL"]
we provide three separate commands to /bin/sh -c:
FROM=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ')
which sets the FROM shell variable to the result of executing the date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ' command,
TILL=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')
which sets the TILL shell variable to the result of executing the date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ' command,
and finally we run
echo from $FROM till $TILL
which uses both variables.
Exactly the same can be done with any other command.
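Applied back to the original question, a sketch of the asker's CronJob with the whole invocation wrapped in a shell (my-process and my-image are the asker's placeholders; GNU date syntax is used, since the container runs Linux):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-process
spec:
  schedule: "*/2 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-process
            image: my-image
            command: ["/bin/sh", "-c"]
            # the shell evaluates both date substitutions at container start time
            args:
            - my-process --since=$(date --date='-1 hour' +'%Y-%m-%dT%H:%M:%SZ') --until=$(date --date='+1 hour' +'%Y-%m-%dT%H:%M:%SZ')
          restartPolicy: OnFailure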