How can I set a Kubernetes CronJob to run at a specific time?

When I set the CronJob schedule to */1 * * * *, it works.
When I set the minute field to any number from 0-59, such as 30 * * * *, it works as well.
However, when I set the schedule to 30 11 * * *, it doesn't even create a job at 11:30.
The full config follows:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "33 11 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello-cronjob
            image: busybox
            command: ["bash", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure

This is probably because your cluster is running in a different timezone than the one you are using.
You can check which timezone is set inside a Pod with:
kubectl run -i --tty busybox --image=busybox --restart=Never -- date
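If the Pod reports UTC and you want the run to happen at 11:30 in your local time, you currently have to shift the schedule yourself, since the batch/v1beta1 CronJob has no timezone field. A minimal sketch, assuming (hypothetically) that your local timezone is UTC+8:
# 11:30 local time in UTC+8 is 03:30 in the cluster's UTC time
spec:
  schedule: "30 3 * * *"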
As for your YAML, it looks fine; there is no need to change anything in the spec.schedule value.
A small hint that might be helpful: check the logs from the Jobs.
When the CronJob hits its scheduled time it spawns a Job; you can list them with kubectl get jobs.
$ kubectl get jobs
NAME               DESIRED   SUCCESSFUL   AGE
hello-1552390680   1         1            7s
If you take the name of that Job, hello-1552390680, and store the matching Pod name in a variable, you can check the logs from that Job.
$ pods=$(kubectl get pods --selector=job-name=hello-1552390680 --output=jsonpath={.items..metadata.name})
You can later check logs:
$ kubectl logs $pods
Tue Mar 12 11:38:04 UTC 2019
Hello from the Kubernetes cluster
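If no Job appears at all at the scheduled time, it can also help to look at the CronJob object itself; kubectl get cronjob shows the LAST SCHEDULE column, and kubectl describe cronjob lists recent events:
$ kubectl get cronjob hello
$ kubectl describe cronjob hello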

Try this once and test the result:
0 30 11 1/1 * ? *
http://www.cronmaker.com/
Note that cronmaker generates Quartz expressions (seven fields, including seconds and a ? placeholder), while Kubernetes CronJobs use the standard five-field cron format, so the equivalent schedule is simply 30 11 * * *.

Related

kubernetes cronjob unexpected scheduling behavior

I'm using Kubernetes 1.21 CronJobs to schedule a few jobs to run at a certain time every day.
I scheduled a job to run at 4pm via kubectl apply -f <name of yaml file>. Subsequently, I updated the YAML to schedule: "0 22 * * *" so the job triggers at 10pm, applying it with the same command, kubectl apply -f <name of yaml file>.
However, after applying the configuration at around 1pm, the job still triggered at 4pm (which shouldn't have happened) and then triggered again at 10pm (the intended trigger time).
Is there an explanation as to why this happens, and can I prevent it?
Sample yaml for the cronjob below:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-name-1
spec:
  schedule: "0 16 * * *" # 4pm
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: sample-image
            name: job-name-1
            args:
            - node
            - ./built/script.js
            env:
            - name: NODE_OPTIONS
              value: "--max-old-space-size=5000"
          restartPolicy: Never
          nodeSelector:
            app: cronjob
I'm expecting the job to only trigger at 10pm.
Deleting the CronJob and reapplying it seems to eliminate such issues, but there are scenarios where I cannot delete the job (because it's still running).
Since you used kubectl apply -f <name of yaml file> to schedule the second run at 10pm, it scheduled a new Job but did not replace the existing one; that is why the 4pm Job was still scheduled and ran.
Instead, use the command below to replace the schedule of the existing CronJob:
kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "0 22 * * *"}}'
This will run the Job only at 10pm.
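You can verify that the new schedule was picked up before the next run; a quick check, assuming the CronJob is named job-name-1 as in the question:
kubectl get cronjob job-name-1 -o jsonpath='{.spec.schedule}'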
To get rid of the Job that is already running, delete it directly with kubectl rather than editing a crontab (crontab -e only edits the Linux system crontab on a node and has no effect on Kubernetes CronJobs):
kubectl delete job <job-name>

Kubernetes doesn't remove completed jobs for a Cronjob

Kubernetes doesn't delete a manually created, completed Job when a history limit is set, when using newer versions of the Kubernetes client.
mycron.yaml:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: myjob
spec:
  schedule: "* * 10 * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Creating cronjob:
kubectl create -f mycron.yaml
Creating job manually:
kubectl create job -n myjob --from=cronjob/hello hello-job
Result:
Job is completed but not removed
NAME        COMPLETIONS   DURATION   AGE
hello-job   1/1           2s         6m
Tested with kubernetes server+client versions of 1.19.3 and 1.20.0
However, when I used an older client version (1.15.5) against the 1.19/1.20 server, it worked fine.
Comparing the differences while using different client versions:
kubernetes-controller log:
Using client v1.15.5 I have this line in the log (but it is missing when using client v1.19/1.20):
1 event.go:291] "Event occurred" object="myjob/hello" kind="CronJob" apiVersion="batch/v1beta1" type="Normal" reason="SuccessfulDelete" message="Deleted job hello-job"
Job yaml:
Exactly the same, except the ownerReference part:
For client v1.19/1.20
ownerReferences:
- apiVersion: batch/v1beta1
  kind: CronJob
  name: hello
  uid: bb567067-3bd4-4e5f-9ca2-071010013727
For client v1.15
ownerReferences:
- apiVersion: apps/v1
  blockOwnerDeletion: true
  controller: true
  kind: CronJob
  name: hello
  uid: bb567067-3bd4-4e5f-9ca2-071010013727
And that is it. No other information in the logs, no errors, no warnings, nothing (I checked all the pod logs in kube-system).
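For reference, one way to inspect the ownerReferences that were actually set, assuming the Job name and namespace from above:
kubectl get job hello-job -n myjob -o jsonpath='{.metadata.ownerReferences}'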
Summary:
It seems to be a bug in the kubectl client itself rather than in the Kubernetes server, but I don't know how to proceed further.
edit:
When I let the CronJob create the Job itself (i.e. when the schedule expression fires), it removes the completed Job successfully.

How to verify a cronjob successfully completed in Kubernetes

I am trying to create a cronjob that runs the command date in a single busybox container. The command should run every minute and must complete within 17 seconds or be terminated by Kubernetes. The cronjob name and container name should both be hello.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  jobTemplate:
    metadata:
      name: hello
    spec:
      completions: 1
      activeDeadlineSeconds: 17
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - image: busybox
            name: hello
            command: ["/bin/sh", "-c", "date"]
            resources: {}
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
status: {}
I want to verify that the job executed successfully at least once.
I tried using the command k get cronjob -w, which gives me this result.
Is there another way to verify that the job executes successfully? Is it a good idea to add the date command to the container?
A CronJob internally creates a Job, which in turn creates a Pod. Watch for the Job that gets created by the CronJob:
kubectl get jobs --watch
The output is similar to this:
NAME               COMPLETIONS   DURATION   AGE
hello-4111706356   0/1                      0s
hello-4111706356   0/1           0s         0s
hello-4111706356   1/1           5s         5s
There you can see the number of COMPLETIONS.
#Replace "hello-4111706356" with the job name in your system
pods=$(kubectl get pods --selector=job-name=hello-4111706356 --output=jsonpath={.items[*].metadata.name})
Check the pod logs
kubectl logs $pods
You can check the logs of the Pods created by the CronJob resource. Have a look at this question and let me know if it solves your query.
You can check the status of the Jobs directly; a CronJob just controls Kubernetes Jobs.
Run kubectl get jobs and it will show the completion status.
> kubectl get jobs
NAME        COMPLETIONS   DURATION   AGE
datee-job   0/1 of 3      24m        24m
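If you want to check for success from a script instead of reading the table, one option (a sketch, assuming a Job named hello-4111706356 as above) is kubectl wait, which blocks until the Job reports the complete condition or the timeout expires:
kubectl wait --for=condition=complete --timeout=60s job/hello-4111706356 && echo "job succeeded"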

Check logs for a Kubernetes resource CronJob

I created a CronJob resource in Kubernetes.
I want to check the logs to validate that my crons run, but I am not able to find a way to do that. I have gone through the commands, but it looks like they are all for the Pod resource type.
I also tried the following:
$ kubectl logs cronjob/<resource_name>
error: cannot get the logs from *v1beta1.CronJob: selector for *v1beta1.CronJob not implemented
Questions:
How do I check the logs of a CronJob resource type?
If I want this resource to be in a specific namespace, how do I do that?
You need to check the logs of the Pods that are created by the CronJob. The Pods will be in the Completed state, but you can still check their logs.
# here you can get the pod_name from the stdout of the cmd `kubectl get pods`
$ kubectl logs -f -n default <pod_name>
To create a CronJob in a specific namespace, just add the namespace in the metadata section. The Pods will be created in that namespace.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: default
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
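If you don't want to look up Pod names by hand, one possible shortcut (a sketch, assuming the CronJob above in the default namespace) is to grab the newest Job and read its logs through the job/ prefix:
# newest Job spawned by the CronJob, sorted by creation timestamp
job=$(kubectl get jobs -n default --sort-by=.metadata.creationTimestamp -o jsonpath='{.items[-1:].metadata.name}')
kubectl logs -n default job/$job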
Ideally you should be sending the logs to a log aggregator system such as ELK or Splunk.
In case you create a Job from the CronJob, it works like this:
kubectl -n "namespace" logs jobs.batch/<resource_name> --tail 4

What is the difference between oc and kubectl commands?

I am trying to create a cron job in OpenShift and am having trouble doing it with oc, so I am looking for alternatives.
I have already tried: oc run cron --image={imagename} --dry-run=false
This created a different resource; there was no parameter to create a cron job.
There's already a good answer on how the two platforms overlap. You mentioned there was no parameter to create a CronJob; you can do that with oc as follows:
oc run pi --image=perl --schedule='*/1 * * * *' \
--restart=OnFailure --labels parent="cronjobpi" \
--command -- perl -Mbignum=bpi -wle 'print bpi(2000)'
Or you can do it through a yaml file like the following:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
And then run:
oc create -f cronjob.yaml -n default
oc stands for OpenShift client; it is a wrapper built on top of kubectl, created to communicate with the OpenShift API server.
It supports all the operations provided by kubectl, plus others that are specific to OpenShift, such as operations on templates, builds, build and deployment configs, image streams, etc.
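For CronJobs specifically, the two clients behave the same on recent versions; a quick sketch of the equivalent imperative commands (the image and schedule are just placeholders):
kubectl create cronjob hello --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"
oc create cronjob hello --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c "date; echo Hello from the Kubernetes cluster"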