I have a program that executes some code, sleeps for 10 minutes, then repeats; this continues in an infinite loop. I'm wondering if there's a way to let Kubernetes/GKE handle this scheduling.
I see that GKE offers cron scheduling. I could schedule a pod to run every 10 minutes. The problem is that in some scenarios the program could take more than 10 minutes to complete.
Ideally, I could let the pod run to completion, schedule it to run again in 10 minutes, and repeat. Is this possible on Kubernetes?
In K8s there's a specific resource for that goal: CronJob.
In the following example you see a schedule section with the typical cron notation:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: your-cron
spec:
  schedule: "*/20 8-19 * * 1-5"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: your-periodic-batch-job
        spec:
          containers:
          - name: redmine-cron
            image: your_image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
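With concurrencyPolicy: Forbid, a new run is skipped if the previous one is still in progress, so a job that overruns its interval never overlaps the next scheduled run. A minimal sketch for the scenario in the question (every 10 minutes; the name and image are placeholders, and batch/v1 applies to current clusters while older ones use batch/v1beta1 as above) could look like this. Note it is not literally "finish, then wait 10 minutes": the schedule is fixed, and overlapping runs are skipped rather than queued.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-periodic-task          # placeholder name
spec:
  schedule: "*/10 * * * *"        # every 10 minutes
  concurrencyPolicy: Forbid       # skip a run while the previous one is still active
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: worker
            image: your_image     # placeholder image
          restartPolicy: OnFailure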
I'm running the following cronjob in my minikube:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Allow
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - somefailure
          restartPolicy: OnFailure
I've added "somefailure" to force the job to fail. My problem is that my minikube installation (running v1.23.3) seems to ignore successfulJobsHistoryLimit and failedJobsHistoryLimit. I've checked the documentation at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ and it says that both parameters are available, but in the end Kubernetes generates up to 10 jobs. When I add ttlSecondsAfterFinished: 1, it removes the container after 1 second, but the other parameters are completely ignored.
So I wonder if I need to enable something in minikube or if these parameters are deprecated or what's the reason why it doesn't work. Any idea?
It seems it's a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/53331.
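Until that is fixed, ttlSecondsAfterFinished (which you already observed working) is the practical workaround; it belongs under jobTemplate.spec. A sketch with an assumed 300-second value:

spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 300   # assumed value; delete finished Jobs and their pods after 5 minutes
      template:
        spec:
          containers:
          - name: hello
            image: busybox
          restartPolicy: OnFailure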
Whenever a job run fails, I want to suspend the cronjob so that no further jobs are started. Is there any possible way?
k8s version: 1.10
You can configure this simply by using suspend: true:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  suspend: true
  parallelism: 2
  completions: 10
  template:
    spec:
      containers:
      - name: my-container
        image: busybox
        command: ["sleep", "5"]
      restartPolicy: Never
Any currently running jobs will complete but future jobs will be suspended.
Read more at : https://kubernetes.io/blog/2021/04/12/introducing-suspended-jobs/
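The same field exists on the CronJob itself, so for the original goal (stop further runs after a failure) whatever detects the failure could flip it with a patch; a sketch, assuming the CronJob is named my-cron:

kubectl patch cronjob my-cron -p '{"spec":{"suspend":true}}'

Setting it back to false resumes the schedule.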
If you are on an older version, you can use backoffLimit: 1 instead:
apiVersion: batch/v1
kind: Job
metadata:
  name: error
spec:
  backoffLimit: 1
  template:
.spec.backoffLimit limits the number of times a pod is restarted when running inside a Job.
If you can't suspend it, however, you can make sure the job won't get re-run by combining:
backoffLimit: the number of times the Job is retried before it is considered failed. The default is 6.
concurrencyPolicy (set to Forbid): at most one Job instance runs at a time, never more.
restartPolicy: Never: the pod won't be restarted in place on failure.
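Putting those three fields together, a sketch of a CronJob that retries a failed run only once and never overlaps itself (the name, image, and command are placeholders) might look like:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: fail-fast-cron              # placeholder name
spec:
  schedule: "0 * * * *"
  concurrencyPolicy: Forbid         # at most one Job running at a time
  jobTemplate:
    spec:
      backoffLimit: 1               # retry a failed pod only once before marking the Job failed
      template:
        spec:
          containers:
          - name: task
            image: busybox          # placeholder image
            command: ["sh", "-c", "./run-something.sh"]   # placeholder command
          restartPolicy: Never      # never restart the pod in place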
I have a couple of Helm templates in a myapp/templates/ directory, and they deploy as expected with helm install myapp.
For example, these two templates:
database.yaml
cronjob.yaml
I'd like for the cronjob.yaml to only run after the database.yaml is in a running state. I currently have an issue where database.yaml fairly regularly fails in a way we half expect (it's not ideal, but it is what it is).
I've found hooks, but I think I'm either using them incorrectly, or they don't determine whether the pod is in Running, Pending, some state of crashed, etc...
I haven't made any changes to database.yaml in order to use hooks, but in cronjob.yaml, which I only want to run if database.yaml is in a running state, I added the annotations as follows:
cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: database
  annotations:
    "helm.sh/hook": "post-install"
  labels:
    app: database
    service: database
spec:
  schedule: "* * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: customtask
            image: "{{ .Values.myimage }}"
            command:
            - /bin/sh
            - -c
            - supercooltask.sh
          restartPolicy: Never
How can I change this hook configuration to allow cronjob.yaml to only run if database.yaml deploys and runs successfully?
Use init containers in the Pod spec of the CronJob to check that the DB is up and running.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podspec-v1-core
Example:
spec:
  template:
    spec:
      initContainers:
        ..
      containers:
        ..
      restartPolicy: OnFailure
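For instance, assuming the database is exposed as a Service named database on port 5432 (adjust to your chart, and note the nc -z probe assumes the busybox image's nc supports port scanning; swap in whatever readiness check fits your database), the CronJob's pod spec could block until the database answers:

spec:
  template:
    spec:
      initContainers:
      - name: wait-for-database            # hypothetical helper container
        image: busybox
        command:
        - sh
        - -c
        - until nc -z database 5432; do echo waiting for database; sleep 5; done
      containers:
      - name: customtask
        image: "{{ .Values.myimage }}"
        command: ["/bin/sh", "-c", "supercooltask.sh"]
      restartPolicy: OnFailure

The Job's pod stays in the Init phase until the check passes, so the main container only starts once the database is reachable.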
I have one pod that I want to automatically restart once a day. I've looked at the Cronjob documentation and I think I'm close, but I keep getting an Exit Code 1 error. I'm not sure if there's an obvious error in my .yaml. If not, I can post the error log as well. Here's my code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-deployment-restart
spec:
  schedule: "0 20 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - 'kubectl'
            - 'rollout'
            - 'restart'
            - 'deployment my-deployment'
You would need to give it permissions to access the API; that means creating a ServiceAccount and some RBAC policy objects (Role, RoleBinding) and then setting serviceAccountName in your pod spec there.
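A sketch of the objects involved, with placeholder names and assuming everything lives in the same namespace (also note that 'deployment my-deployment' has to be two separate list items, or deployment/my-deployment; otherwise kubectl treats the whole string as one unknown resource type):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-restarter           # placeholder name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]              # rollout restart patches the Deployment's pod template
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restarter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restarter
subjects:
- kind: ServiceAccount
  name: deployment-restarter

Then set serviceAccountName: deployment-restarter in the CronJob's jobTemplate pod spec, next to restartPolicy.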
I have a simple cronjob that runs every 10 minutes:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: myjob
spec:
  schedule: "*/10 * * * *" # every 10 minutes
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job
            image: image
            imagePullPolicy: Always
          restartPolicy: OnFailure
It indeed runs every 10 minutes, but I would like it to also run the first time, when I deploy the cronjob. Is that possible?
You could have a one-off Job trigger the scheduled CronJob:
kubectl create job --from=cronjob/<name of cronjob> <name of job>
The one-off trigger Job would need to run after the scheduled CronJob has been created, and its image would need to include the kubectl binary. The API-server permissions needed to run kubectl within the container could be provided by linking a ServiceAccount to the one-off Job.
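Run manually against the CronJob from the question, that would be something like (the new Job's name is arbitrary):

kubectl create job --from=cronjob/myjob myjob-initial-run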