I want to run one CronJob at different times.
Is it possible to do something like this in my YML file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule:
    - "*/10 00-08 * * *"
    - "*/5 09-18 * * *"
    - "*/10 19-23 * * *"
  concurrencyPolicy: Forbid
  ...
Or do I have to create a separate YML file for every schedule?
The short answer is: no, you cannot create one CronJob with several crontab schedules; the schedule field takes a single cron expression.
The easiest solution is to create a separate CronJob resource for each crontab line in your example. You can reuse the same image for each of the CronJobs.
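For example, a minimal sketch of two of the three schedules as separate CronJobs (the third follows the same pattern; the image name and the container spec are placeholders, not taken from the question):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob-peak
spec:
  schedule: "*/5 09-18 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: my-cronjob
              image: my-image:latest
          restartPolicy: Never
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cronjob-evening
spec:
  schedule: "*/10 19-23 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: my-cronjob
              image: my-image:latest
          restartPolicy: Never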
I'm using Kubernetes 1.21 CronJobs to schedule a few jobs to run at a certain time every day.
I scheduled a job to run at 4pm via kubectl apply -f <name of yaml file>. Subsequently, I updated the YAML to schedule: "0 22 * * *" so the job would trigger at 10pm, and applied it with the same command, kubectl apply -f <name of yaml file>.
However, after applying the configuration at around 1pm, the job still triggers at 4pm (shouldn't have happened), and then triggers again at 10pm (intended trigger time).
Is there an explanation as to why this happens, and can I prevent it?
Sample yaml for the cronjob below:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-name-1
spec:
  schedule: "0 16 * * *" # 4pm
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - image: sample-image
              name: job-name-1
              args:
                - node
                - ./built/script.js
              env:
                - name: NODE_OPTIONS
                  value: "--max-old-space-size=5000"
          restartPolicy: Never
          nodeSelector:
            app: cronjob
I'm expecting the job to only trigger at 10pm.
Deleting the CronJob and reapplying it seems to eliminate such issues, but there are scenarios where I cannot delete the job (because it's still running).
Using kubectl apply -f <name of yaml file> schedules a new Job but does not replace the run that was already scheduled from the old spec, which is why the 4pm Job was still scheduled and it ran.
Instead, use the command below to replace the schedule on the existing CronJob:
kubectl patch cronjob job-name-1 -p '{"spec":{"schedule": "0 22 * * *"}}'
This will run the Job only at 10pm.
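To confirm the patch took effect, you can read the schedule back:
kubectl get cronjob job-name-1 -o jsonpath='{.spec.schedule}'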
If you need to stop a Job that is already running, delete the Job object itself; the CronJob stays in place and keeps creating new Jobs on its schedule:
kubectl get jobs
kubectl delete job <job-name>
If you also want to stop all future runs, delete the CronJob as well:
kubectl delete cronjob job-name-1
So I've a cron job like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "0 0 31 2 *"
  failedJobsHistoryLimit: 3
  successfulJobsHistoryLimit: 1
  concurrencyPolicy: "Forbid"
  startingDeadlineSeconds: 30
  jobTemplate:
    spec:
      backoffLimit: 0
      activeDeadlineSeconds: 120
      ...
Then I trigger the job manually like so:
kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job
But it seems like I can trigger the job as often as I want and concurrencyPolicy: "Forbid" is ignored.
Is there a way so that manually triggered jobs will respect this or do I have to check this manually?
The concurrencyPolicy field only applies to Jobs created by the same CronJob, as stated in the documentation: https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy
When executing $ kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job you are essentially creating a standalone, one-time Job that only uses the spec.jobTemplate field of the CronJob as a reference. Since concurrencyPolicy is a CronJob field, it is not even evaluated.
TL;DR
This actually is the expected behavior. Manually created Jobs are not affected by concurrencyPolicy, and there is no flag you could pass to change this behavior.
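If you want to guard manual triggers against overlapping runs yourself, a rough sketch of a check before triggering (namespace and names taken from the question; this is only an eyeball check, not a race-free guard):
# list Jobs with their active pod count; <none> means the Job is finished
kubectl get jobs --namespace precompile -o custom-columns=NAME:.metadata.name,ACTIVE:.status.active
# only trigger the manual run if nothing above is still active
kubectl create job my-job --namespace precompile --from=cronjob/my-cron-job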
I have a YAML file, and I want to parameterize the schedule of my Kubernetes CronJob. In my environment ConfigMap I declared JobFrequencyInMinutes: "10".
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-mongo-cronjob
spec:
  schedule: "*/$(JobFrequencyInMinutes) * * * *"
  concurrencyPolicy: "Forbid"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: scheduled-mongo-cronjob
              image: xxxx
              env:
                - name: JobFrequencyInMinutes
                  valueFrom:
                    configMapKeyRef:
                      key: JobFrequencyInMinutes
                      name: env-conf
When I apply the above YAML I get an error:
The CronJob "scheduled-mongo-cronjob" is invalid: spec.schedule: Invalid value: "*/$(JobFrequencyInMinutes) * * * *": Failed to parse int from $(JobFrequencyInMinutes): strconv.Atoi: parsing "$(JobFrequencyInMinutes)": invalid syntax
Please guide me if there is any alternative way to achieve this.
The issue here is that the $(VAR) substitution syntax only works inside the container spec (for example in env, command and args); it is not expanded in spec.schedule, so the API server tries to parse the literal string $(JobFrequencyInMinutes) as a cron field and fails.
To achieve what you are trying to do, the value has to be substituted before the manifest reaches the API server. Whenever you want to change the schedule, set the new value and re-create (or re-apply) the CronJob.
Since the declarative way is not working with your YAML as written, you can create it imperatively and let the shell expand the variable:
kubectl run scheduled-mongo-cronjob --schedule="*/$JobFrequencyInMinutes * * * *" --restart=OnFailure --image=xxxx
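Alternatively, a rough sketch of keeping it declarative by substituting the value before the manifest is applied (the template file name is an assumption, and envsubst expects ${VAR} syntax rather than Kubernetes' $(VAR)):
# scheduled-mongo-cronjob.tmpl.yaml contains: schedule: "*/${JobFrequencyInMinutes} * * * *"
export JobFrequencyInMinutes=10
envsubst < scheduled-mongo-cronjob.tmpl.yaml | kubectl apply -f -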
I have a Kubernetes CronJob that creates a zip file, which takes about 1 hour. After its completion I want to upload this zip file to an AWS S3 bucket.
How do I tell the CronJob to only run the S3 command after the zip is created?
Should the S3 command be within the same CronJob?
Currently my YAML looks like this:
kind: CronJob
metadata:
  name: create-zip-upload
spec:
  schedule: "27 5 * * *" # everyday at 05:27 AM
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mycontainer
              image: 123456789.my.region.amazonaws.com/mycompany/myproject/rest:latest
              args:
                - /usr/bin/python3
                - -m
                - scripts.createzip
Kubernetes doesn't have a concept of relationships between resources; there isn't an official or clean way to have an event in one resource trigger an effect in another.
Because of this, the best solution is to put the S3 command into the same CronJob.
There are two ways to do this:
Add the S3 upload logic to your existing container (sketched below).
Create a second container in the same CronJob's pod that watches for the file and then runs the S3 command.
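A rough sketch of the first option, chaining the upload after the zip step inside the existing container. It switches from args to a shell command so the two steps can be chained; the bucket name, the zip's output path, and the aws CLI being available in the image are assumptions:
containers:
  - name: mycontainer
    image: 123456789.my.region.amazonaws.com/mycompany/myproject/rest:latest
    command:
      - /bin/sh
      - -c
      # the upload runs only if the zip step exits successfully
      - /usr/bin/python3 -m scripts.createzip && aws s3 cp /tmp/output.zip s3://my-bucket/output.zip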
I'm using activeDeadlineSeconds in my Job definition but it doesn't appear to have any effect. I have a CronJob that kicks off a job every minute, and I'd like that job to automatically kill off all its pods before another one is created (so 50 seconds seems reasonable). I know there are other ways to do this but this is ideal for our circumstances.
I'm noticing that the pods aren't being killed off, however. Are there any limitations with activeDeadlineSeconds? I don't see anything in the documentation for K8s 1.7 (https://v1-7.docs.kubernetes.io/docs/api-reference/v1.7/#jobspec-v1-batch), and I've also checked more recent versions.
Here is a condensed version of my CronJob definition -
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: kafka-consumer-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec: # JobSpec
      activeDeadlineSeconds: 50 # This needs to be shorter than the cron interval ## TODO - NOT WORKING!
      parallelism: 1
      ...
You can use concurrencyPolicy: "Replace". This will terminate the previously running Job's pod and then start the new one.
See the comments here: ConcurrencyPolicy
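Applied to the CronJob from the question, that is one extra line under spec (a sketch of just the changed part):
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Replace
  jobTemplate:
    ...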
It turns out this is actually a known bug in 1.7; it was fixed in version 1.8:
https://github.com/openshift/origin/issues/10755
https://github.com/kubernetes/kubernetes/issues/32149