Kubernetes Cron Jobs - Run multiple pods for a cron job

Our requirement is to do batch processing every 3 hours, but a single process cannot handle the workload, so we have to run multiple pods for the same cron job. Is there any way to do that?
Thank you.

You can set parallelism: <num_of_pods> in cronjob.spec.jobTemplate.spec and the job will run that many pods at the same time.
The following is an example of a CronJob that runs 3 nginx pods every minute.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: null
  labels:
    run: cron1
  name: cron1
spec:
  concurrencyPolicy: Allow
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      parallelism: 3
      template:
        metadata:
          creationTimestamp: null
          labels:
            run: cron1
        spec:
          containers:
          - image: nginx
            name: cron1
            resources: {}
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
status: {}
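If this works as intended, each scheduled run creates one Job that starts three pods in parallel. A quick way to check (a sketch, assuming the labels and names from the example above):

# Jobs created by the CronJob, one per minute
kubectl get jobs

# pods for the current run; three should be running side by side
kubectl get pods -l run=cron1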

Related

OpenShift CronJob with ImageStream

When I update my ImageStream and then trigger a run of a CronJob, the CronJob still uses the previous ImageStream reference instead of pulling the latest, despite the fact that I have configured the CronJob to always pull.
The way I've been testing this is to (rough commands sketched below):
1. push a new image to the stream, and verify the ImageStream is updated;
2. check the CronJob object and verify that the image associated with the container still has the old ImageStream hash;
3. trigger a new run of the CronJob, which I would expect to pull a new image since the pull policy is Always -- but it does not; the CronJob starts a container using the old ImageStream reference.
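Roughly, those steps look like this with oc (a sketch only; object names are taken from the template below):

# 1. push the image, then confirm the ImageStream tag moved
oc describe imagestream cool-cron-job -n cool-namespace

# 2. check which image the CronJob's container currently points at
oc get cronjob cool-cron-job-cron-job -n cool-namespace \
  -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].image}'

# 3. kick off an ad-hoc run from the CronJob
oc create job manual-test-1 --from=cronjob/cool-cron-job-cron-job -n cool-namespace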
Here's the YAML:
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: cool-cron-job-template
parameters:
- name: ENVIRONMENT
  displayName: Environment
  required: true
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    name: cool-cron-job
    namespace: cool-namespace
    labels:
      app: cool-cron-job
      owner: cool-owner
  spec:
    lookupPolicy:
      local: true
- apiVersion: batch/v1
  kind: CronJob
  metadata:
    name: cool-cron-job-cron-job
    namespace: cool-namespace
    labels:
      app: cool-cron-job
      owner: cool-owner
  spec:
    schedule: "10 0 1 * *"
    concurrencyPolicy: "Forbid"
    startingDeadlineSeconds: 200
    suspend: false
    successfulJobsHistoryLimit: 1
    failedJobsHistoryLimit: 1
    jobTemplate:
      spec:
        template:
          metadata:
            labels:
              app: cool-cron-job
              cronjob: "true"
            annotations:
              alpha.image.policy.openshift.io/resolve-names: '*'
          spec:
            dnsPolicy: ClusterFirst
            restartPolicy: OnFailure
            securityContext: {}
            terminationGracePeriodSeconds: 0
            containers:
            - command: [ "python", "-m", "cool_cron_job.handler" ]
              imagePullPolicy: Always
              name: cool-cron-job-container
              image: cool-cron-job:latest

Kubernetes Cronjobs are not removed

I'm running the following cronjob in my minikube:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Allow
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - somefailure
          restartPolicy: OnFailure
I've added "somefailure" to force the job to fail. My problem is that my minikube installation (running v1.23.3) seems to ignore successfulJobsHistoryLimit and failedJobsHistoryLimit. I've checked the documentation at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ and it says that both parameters are available, but in the end Kubernetes generates up to 10 jobs. When I add ttlSecondsAfterFinished: 1, it removes the container after 1 second, but the other parameters are completely ignored.
So I wonder whether I need to enable something in minikube, whether these parameters are deprecated, or what else could be the reason it doesn't work. Any idea?
It seems it's a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/53331.
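Until that is fixed, the ttlSecondsAfterFinished field the question already mentions is a practical workaround; it lives on the Job spec inside the CronJob's jobTemplate, and a value of, say, 60 deletes each finished Job (and its pods) a minute after it ends. A sketch based on the manifest above:

  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 60  # clean up each finished Job 60s after it ends
      template:
        spec:
          containers:
          - name: hello
            image: busybox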

Kubernetes cronjob doesn't work correctly when schedule values are customized

I am using Rancher 2.3.3.
When I configure the cronjob with schedule values like @hourly and @daily, it works fine,
but when I configure it with values like "6 1 * * *", it doesn't work.
OS times are in sync between all cluster nodes.
My config file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure
I found the root cause:
the scheduler container has a different timezone, so the job runs with a few hours of delay.
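On recent Kubernetes versions (batch/v1 CronJob; the timeZone field is stable as of 1.27) you can pin the schedule to an explicit zone instead of relying on the controller's local time; on older clusters the usual workaround is to shift the cron expression by the offset. A sketch, with the zone name as a placeholder:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  timeZone: "Europe/Berlin"  # placeholder zone; pick the one your schedule is written for
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure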

Not able to connect to RabbitMQ from Kubernetes cron jobs

I am using RabbitMQ on a remote host (cloudamqp.com) and I created a cron job on Kubernetes. On my local machine the job works fine, and the Kubernetes CronJob schedules perfectly well, but the Job falls back to 127.0.0.1:5672 as the RabbitMQ connection URL and I get an error:
pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused
I checked the logs of the cron job and my connection URL looks fine, but when pika tries to connect it falls back to 127.0.0.1:5672; since the cron pod is not running any RabbitMQ server, the connection is refused.
CronJob.yml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scrape-news
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: scrape-news
        spec:
          containers:
          - name: scrape-news
            image: SCRAPER_IMAGE
            imagePullPolicy: Always
          restartPolicy: Never
  schedule: '* * * * *'
  successfulJobsHistoryLimit: 3
RabbitMQ Connection
print(env.RABBIT_URL)
self.params = pika.URLParameters(env.RABBIT_URL)
self.connection = pika.BlockingConnection(parameters=self.params)
self.channel = self.connection.channel() # start a channel
The connection URL is exactly the same and works on my local setup.
Based on your CronJob spec, you are not passing the environment variable RABBIT_URL.
Your code expects this variable to be set, which it is not, and that is likely why it defaults to localhost.
self.params = pika.URLParameters(env.RABBIT_URL)
You probably want something like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scrape-news
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: scrape-news
        spec:
          containers:
          - name: scrape-news
            image: SCRAPER_IMAGE
            imagePullPolicy: Always
            env:
            - name: RABBIT_URL
              value: cloudamqp.com
          restartPolicy: Never
  schedule: '* * * * *'
  successfulJobsHistoryLimit: 3
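If the connection URL carries credentials, it is more common to read it from a Secret than to put it in the manifest; a sketch of the env entry, assuming a Secret named rabbit-credentials with a key url already exists:

            env:
            - name: RABBIT_URL
              valueFrom:
                secretKeyRef:
                  name: rabbit-credentials  # assumed Secret name
                  key: url                  # assumed key holding the full amqp:// URL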

How to ensure kubernetes cronjob does not restart on failure

I have a cronjob that sends out emails to customers. It occasionally fails for various reasons. I do not want it to restart, but it still does.
I am running Kubernetes on GKE. To get it to stop, I have to delete the CronJob and then manually kill all the pods it creates.
This is bad, for obvious reasons.
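Roughly, that manual cleanup looks like this (a sketch; the job name comes from the status shown below):

# stop future runs, then remove the stuck run and its pods
kubectl delete cronjob dailytasks
kubectl delete job dailytasks-1533218400

The CronJob as it currently stands: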
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: 2018-06-21T14:48:46Z
  name: dailytasks
  namespace: default
  resourceVersion: "20390223"
  selfLink: [redacted]
  uid: [redacted]
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - command:
            - kubernetes/daily_tasks.sh
            env:
            - name: DB_HOST
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            envFrom:
            - secretRef:
                name: my-secrets
            image: [redacted]
            imagePullPolicy: IfNotPresent
            name: dailytasks
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Never
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
  schedule: 0 14 * * *
  successfulJobsHistoryLimit: 3
  suspend: true
status:
  active:
  - apiVersion: batch
    kind: Job
    name: dailytasks-1533218400
    namespace: default
    resourceVersion: "20383182"
    uid: [redacted]
  lastScheduleTime: 2018-08-02T14:00:00Z
It turns out that you have to set backoffLimit: 0 in combination with restartPolicy: Never and concurrencyPolicy: Forbid.
backoffLimit is the number of times the Job will retry before it is considered failed; the default is 6.
concurrencyPolicy set to Forbid means the CronJob will run 0 or 1 Jobs at a time, but not more.
restartPolicy set to Never means the pod won't restart on failure.
You need to do all three of these things, or your cronjob may run more than once.
spec:
  concurrencyPolicy: Forbid
  failedJobsHistoryLimit: 1
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      backoffLimit: 0  # <-- ADD THIS
      template:
        ... MORE STUFF ...
The Kubernetes CronJob resource has a field, suspend, in its spec.
You can't do it by default, but if you want to ensure it doesn't run again, you could update the script that sends the emails and have it patch the CronJob resource to set suspend: true when it fails.
Something like this:
kubectl patch cronjob <name> -p '{"spec": { "suspend": true }}'
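Once the underlying failure is fixed, the same patch with suspend: false turns the schedule back on; a quick sketch (with <name> being your CronJob's name):

# resume scheduled runs
kubectl patch cronjob <name> -p '{"spec": {"suspend": false}}'

# confirm the flag
kubectl get cronjob <name> -o jsonpath='{.spec.suspend}'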