Is it possible to trigger a Kubernetes CronJob also upon deployment?

I have a simple cronjob that runs every 10 minutes:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: myjob
spec:
  schedule: "*/10 * * * *" # every 10 minutes
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: job
              image: image
              imagePullPolicy: Always
          restartPolicy: OnFailure
It indeed runs every 10 minutes, but I would also like it to run the first time I deploy the CronJob. Is that possible?

You could have a one-time Job trigger the scheduled CronJob:
kubectl create job --from=cronjob/<name of cronjob> <name of job>
The one-time Job would need to run after the scheduled CronJob has been created, and its image would need to include the kubectl binary. The API server permissions needed to run kubectl inside the container can be provided by linking a ServiceAccount to the one-time Job.
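A minimal sketch of such a one-time Job, assuming a ServiceAccount named cronjob-trigger already exists and is bound to a Role that allows creating Jobs (the Job and ServiceAccount names here are illustrative):
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob-initial-run                  # illustrative name
spec:
  template:
    spec:
      serviceAccountName: cronjob-trigger  # assumed ServiceAccount with permission to create Jobs
      restartPolicy: Never
      containers:
        - name: trigger
          image: bitnami/kubectl           # any image that contains the kubectl binary
          command:
            - kubectl
            - create
            - job
            - --from=cronjob/myjob
            - myjob-manual-1               # name of the Job that kubectl will create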

Related

Kubernetes Cronjobs are not removed

I'm running the following cronjob in my minikube:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Allow
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - somefailure
          restartPolicy: OnFailure
I've added the "somefailure" to force failing of the job. My problem is that it seems that my minikube installation (running v1.23.3) ignores successfulJobsHistoryLimit and failedJobsHistoryLimit. I've checked the documentation on https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ and it says that both parameters are available, but in the end, Kubernetes generates up to 10 jobs. When I add ttlSecondsAfterFinished: 1, it removes the container after 1 second, but the other parameters are completely ignored.
So I wonder if I need to enable something in minikube or if these parameters are deprecated or what's the reason why it doesn't work. Any idea?
It seems it's a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/53331.
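As a stopgap while the history limits are being ignored, a sketch of the ttlSecondsAfterFinished workaround mentioned in the question, placed in the Job spec inside jobTemplate (3600 seconds is an arbitrary example value):
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 3600   # delete finished Jobs (and their pods) after one hour
      template:
        spec:
          containers:
            - name: hello
              image: busybox
          restartPolicy: OnFailure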

Kubernetes: How do I find which jobs a cronjob has generated

Say that I have a job history limit > 1; is there a way to use kubectl to find which jobs have been spawned by a CronJob?
Use a label:
$ kubectl get jobs -n <namespace> -l created-by=cronjob
The created-by=cronjob label is defined in your CronJob's jobTemplate:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    metadata:
      labels:
        created-by: cronjob
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Ideally you would use a field selector (--field-selector) to get all Jobs belonging to a particular CronJob, but field selectors do not support all fields. The best way I could find is using JSONPath:
Substitute CronJobName with the name of your CronJob to get all Jobs that belong to it:
kubectl get jobs $(kubectl get jobs -o=jsonpath='{.items[?(@.metadata.ownerReferences[*].name=="CronJobName")].metadata.name}')
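For example, assuming the CronJob is named hello as in the earlier answer:
kubectl get jobs $(kubectl get jobs -o=jsonpath='{.items[?(@.metadata.ownerReferences[*].name=="hello")].metadata.name}')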

K8s Cronjob Rolling Restart Every Day

I have one pod that I want to automatically restart once a day. I've looked at the Cronjob documentation and I think I'm close, but I keep getting an Exit Code 1 error. I'm not sure if there's an obvious error in my .yaml. If not, I can post the error log as well. Here's my code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-deployment-restart
spec:
  schedule: "0 20 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment my-deployment'
You would need to give it permission to access the API; that means creating a ServiceAccount and some RBAC policy objects (Role, RoleBinding), and then setting serviceAccountName in your pod spec.
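A minimal sketch of the corrected pod spec, assuming a ServiceAccount named deployment-restart that is bound to a Role allowing get/patch on the deployment (a full RBAC example appears in the next question below); note the argument is deployment/my-deployment rather than a single quoted 'deployment my-deployment' string:
      template:
        spec:
          serviceAccountName: deployment-restart   # assumed ServiceAccount with RBAC to patch the deployment
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl
              command:
                - kubectl
                - rollout
                - restart
                - deployment/my-deployment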

How to schedule pods restart

Is it possible to restart pods automatically based on the time?
For example, I would like to restart the pods of my cluster every morning at 8.00 AM.
Use a CronJob, not to run your pods, but to schedule a Kubernetes API command that will restart the deployment every day (kubectl rollout restart). That way, if something goes wrong, the old pods will not be down or removed.
A rollout creates a new ReplicaSet and waits for it to be up before killing off the old pods and rerouting the traffic, so the Service continues uninterrupted.
You have to set up RBAC so that the Kubernetes client running from inside the cluster has permission to make the needed calls to the Kubernetes API.
---
# Service account the client will use to reset the deployment;
# by default the pods running inside the cluster can do no such things.
kind: ServiceAccount
apiVersion: v1
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
---
# Allow getting status and patching only the one deployment you want
# to restart.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments"]
    resourceNames: ["<YOUR DEPLOYMENT NAME>"]
    verbs: ["get", "patch", "list", "watch"] # "list" and "watch" are only needed
                                             # if you want to use `rollout status`
---
# Bind the role to the service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restart
subjects:
  - kind: ServiceAccount
    name: deployment-restart
    namespace: <YOUR NAMESPACE>
And the cronjob specification itself:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: <YOUR NAMESPACE>
spec:
  concurrencyPolicy: Forbid
  schedule: '0 8 * * *'              # cron spec of time, here, 8 o'clock
  jobTemplate:
    spec:
      backoffLimit: 2                # this has very low chance of failing, as all this does
                                     # is prompt kubernetes to schedule a new replica set for
                                     # the deployment
      activeDeadlineSeconds: 600     # timeout, makes most sense with the
                                     # "waiting for rollout" variant specified below
      template:
        spec:
          serviceAccountName: deployment-restart  # name of the service
                                                  # account configured above
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl # probably any kubectl image will do,
                                     # optionally specify a version, but this
                                     # should not be necessary, as long as the
                                     # version of kubectl is new enough to
                                     # have `rollout restart`
              command:
                - 'kubectl'
                - 'rollout'
                - 'restart'
                - 'deployment/<YOUR DEPLOYMENT NAME>'
Optionally, if you want the cronjob to wait for the deployment to roll out, change the cronjob command to:
command:
  - bash
  - -c
  - >-
    kubectl rollout restart deployment/<YOUR DEPLOYMENT NAME> &&
    kubectl rollout status deployment/<YOUR DEPLOYMENT NAME>
Another quick and dirty option for a pod that has a restart policy of Always (which CronJobs are not supposed to handle; see creating a cron job spec pod template) is a livenessProbe that simply tests the time and restarts the pod on a specified schedule.
For example: after startup, wait an hour, then check the hour every minute; if the hour is 3 (AM), fail the probe so the pod restarts, otherwise pass.
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - exit $(test $(date +%H) -eq 3 && echo 1 || echo 0)
  failureThreshold: 1
  initialDelaySeconds: 3600
  periodSeconds: 60
Time granularity is up to how you return the date and test ;)
Of course this does not work if you are already utilizing the liveness probe as an actual liveness probe ¯\_(ツ)_/¯
I borrowed the idea from Ryan Lowe's answer above but modified it a bit. It will restart any pod older than 24 hours:
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - "end=$(date -u +%s); start=$(stat -c %Z /proc/1 | awk '{print int($1)}'); test $(($end-$start)) -lt 86400"
There's a specific resource for that: CronJob
Here's an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: your-cron
spec:
  schedule: "*/20 8-19 * * 1-5"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: your-periodic-batch-job
        spec:
          containers:
            - name: my-image
              image: your-image
              imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
Change spec.concurrencyPolicy to Replace if you want to replace the old pod when starting a new one. With Forbid, creation of the new pod is skipped if the old pod is still running.
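For example, under the same assumptions as the manifest above, the only change would be:
spec:
  schedule: "*/20 8-19 * * 1-5"
  concurrencyPolicy: Replace   # replace the still-running pod instead of skipping the run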
According to cronjob-in-kubernetes-to-restart-delete-the-pod-in-a-deployment, you could create a kind: CronJob with a jobTemplate that has containers. Your CronJob would then start those containers with an activeDeadlineSeconds of one day (until the restart). Following your example, the schedule would be 0 8 * * * for 8:00 AM, as sketched below.
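A minimal sketch under those assumptions (the CronJob name and image are placeholders):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: daily-worker                 # placeholder name
spec:
  schedule: "0 8 * * *"              # every day at 8:00 AM
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      activeDeadlineSeconds: 86400   # terminate the Job's pod after one day
      template:
        spec:
          containers:
            - name: worker
              image: your-image      # placeholder image
          restartPolicy: OnFailure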
We were able to do this by modifying the deployment manifest from a cron job, passing a random parameter every 3 hours. We specifically used Spinnaker for triggering the deployments: we created a cron trigger in Spinnaker and added a Patch Manifest stage. Kubernetes restarts the pods whenever the pod template YAML changes; to keep that from taking all pods down at once, see the rolling-update strategy at the bottom of this post.
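The Spinnaker configuration and Patch Manifest stage were shown as screenshots in the original post; a minimal sketch of an equivalent patch, assuming an annotation on the pod template carries the changing value (the annotation key restart-trigger is illustrative):
# Strategic-merge patch for the Deployment; changing the annotation value
# makes the pod template differ, which triggers a rolling restart.
spec:
  template:
    metadata:
      annotations:
        restart-trigger: "2021-06-01T08:00:00Z"   # illustrative value, replaced on every run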
As there can be a case where all pods restart at the same time, causing downtime, we have a Rolling Restart policy where maxUnavailable is 0%:
spec:
  # replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 0%
This spawns new pods and then terminates old ones.

Schedule pod to run every X minutes

I have a program that executes some code, sleeps for 10 minutes, then repeats. This continues in an infinite loop. I'm wondering if theres a way to let Kubernetes/GKE handle this scheduling.
I see that GKE offers cron scheduling. I could schedule a pod to run every 10 minutes. The problem is that in some scenarios the program could take more than 10 minutes to complete.
Ideally, I could let the pod run to completion, schedule it to run in 10 minutes, repeat. Is this possible?
Is this possible on Kubernetes?
In K8S there's a specific resource for that goal: CronJob
In the following example you see a schedule section with the typical cron notation:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: your-cron
spec:
  schedule: "*/20 8-19 * * 1-5"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: your-periodic-batch-job
        spec:
          containers:
            - name: redmine-cron
              image: your_image
              imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
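For the ten-minute case in the question, a sketch of the relevant spec fields; with concurrencyPolicy: Forbid the next run is skipped while the previous Job is still active, so runs never overlap, though the result is a fixed schedule rather than exactly ten minutes after each completion:
spec:
  schedule: "*/10 * * * *"     # every 10 minutes
  concurrencyPolicy: Forbid    # skip a run while the previous Job is still running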