How to run kubernetes cronjob immediately

I'm very new to Kubernetes. Here I tried a CronJob YAML in which the pods are created every 1 minute.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
But the pods are created only after 1 minute. Is it possible to run the job immediately and then every 1 minute after that?

As already stated in the comments, a CronJob is backed by a Job. What you can do is literally launch a CronJob and a Job resource using the same spec at the same time. You can do that conveniently using a Helm chart or Kustomize.
Alternatively you can place both manifests in the same file or two files in the same directory and then use:
kubectl apply -f <file/dir>
With this workaround the initial Job is started immediately, and then after some time the CronJob takes over.
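For illustration, a rough sketch of such a combined file, reusing the spec from the question (the Job name is made up here; kubectl apply -f on this file creates both objects):
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-initial        # hypothetical name for the one-off run
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        args:
        - /bin/sh
        - -c
        - date; echo Hello from the Kubernetes cluster
      restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure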
The downside of this solution is that the first Job is standalone and is not included in the CronJob's history. Another possible side effect is that the standalone Job and the first CronJob-created Job can run in parallel if the Job cannot finish its tasks fast enough; concurrencyPolicy does not take that standalone Job into consideration.
From the documentation:
A cron job creates a job object about once per execution time of its
schedule. We say "about" because there are certain circumstances where
two jobs might be created, or no job might be created. We attempt to
make these rare, but do not completely prevent them.
So if you want to keep the task execution stricter, it may be better to use a Bash wrapper script with a sleep between task executions, or design an app that forks subprocesses at the specified interval, build a container image, and run it as a Deployment.
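If you go the Deployment route, a rough sketch of the loop idea (the name and the 60-second sleep are only illustrative; the command is taken from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-loop           # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-loop
  template:
    metadata:
      labels:
        app: hello-loop
    spec:
      containers:
      - name: hello
        image: busybox
        command:
        - /bin/sh
        - -c
        - while true; do date; echo Hello from the Kubernetes cluster; sleep 60; done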

Related

kubernetes cronjob unexpected scheduling behavior

I'm using a Kubernetes 1.21 CronJob to schedule a few jobs to run at a certain time every day.
I scheduled a job to run at 4pm via kubectl apply -f <name of yaml file>. Subsequently, I updated the YAML to schedule: "0 22 * * *" to trigger the job at 10pm, using the same command kubectl apply -f <name of yaml file>.
However, after applying the configuration at around 1pm, the job still triggers at 4pm (shouldn't have happened), and then triggers again at 10pm (intended trigger time).
Is there an explanation as to why this happens, and can I prevent it?
Sample yaml for the cronjob below:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: job-name-1
spec:
  schedule: "0 16 * * *" # 4pm
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: sample-image
            name: job-name-1
            args:
            - node
            - ./built/script.js
            env:
            - name: NODE_OPTIONS
              value: "--max-old-space-size=5000"
          restartPolicy: Never
          nodeSelector:
            app: cronjob
I'm expecting the job to only trigger at 10pm.
Deleting the CronJob and reapplying it seems to eliminate such issues, but there are scenarios where I cannot delete the job (because it's still running).
You used kubectl apply -f <name of yaml file> to schedule a second Job at 10pm, which schedules a new Job but does not replace the existing one, so the Job at 4pm was still scheduled and ran.
Instead, use the command below to replace the schedule of the existing CronJob:
kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "0 22 * * *"}}'
This will run the Job only at 10pm.
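To double-check which schedule the controller currently has, a quick check like this can help (using the same placeholder name as in the patch command above):
kubectl get cronjob my-cronjob -o jsonpath='{.spec.schedule}'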
In order to delete a Job that is already running, delete the Job object itself, which also cleans up the Pods it created:
kubectl delete job <job-name>
You can find the name of the running Job created by the CronJob with:
kubectl get jobs

Horizontal Pod Autoscaling (HPA) with an initContainer that requires a Job

I have a specific scenario where I'd like to have a deployment controlled by horizontal pod autoscaling. To handle database migrations in pods when pushing a new deployment, I followed this excellent tutorial by Andrew Lock here.
In short, you must define an initContainer that waits for a Kubernetes Job to complete a process (like running db migrations) before the new pods can run.
This works well. However, I'm not sure how to handle HPA after the initial deployment, because if the system detects the need to add another Pod, the initContainer defined in my deployment requires a Job to be deployed and run, but since Jobs are one-off processes the pod cannot initialize and run properly (a ttlSecondsAfterFinished attribute removes the Job anyway).
How can I define an initContainer to run when I deploy my app so I can push my database migrations in a Job, but also allow HPA to control dynamically adding a Pod without needing an initContainer?
Here's what my deployment looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: graphql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: graphql-pod
  template:
    metadata:
      labels:
        app: graphql-pod
    spec:
      initContainers:
      - name: wait-for-graphql-migration-job
        image: groundnuty/k8s-wait-for:v1.4 # This is an image that waits for a process to complete
        args:
        - job
        - graphql-migration-job # this job is defined next
      containers:
      - name: graphql-container
        image: image(graphql):tag(graphql)
The following Job is also deployed
apiVersion: batch/v1
kind: Job
metadata:
  name: graphql-migration-job
spec:
  ttlSecondsAfterFinished: 30
  template:
    spec:
      containers:
      - name: graphql-migration-container
        image: image(graphql):tag(graphql)
        command: ["npm", "run", "migrate:reset"]
      restartPolicy: Never
So basically what happens is:
I deploy these two resources to my node
Job is initialized
initContainer on Pod waits for Job to complete using an image called groundnuty/k8s-wait-for:v1.4
Job completes
initContainer completes
Pod initializes
(after 30 TTL seconds) Job is removed from node
(LOTS OF TRAFFIC)
HPA realizes a need for another pod
initContainer for the NEW pod is started, but can't run because the Job no longer exists
...crashLoopBackOff
Would love any insight on the proper way to handle this scenario!
There is, unfortunately, no simple Kubernetes feature to resolve your issue.
I recommend extending your deployment tooling/scripts to separate the migration job and your deployment. During the deploy process, you first execute the migration job and then deploy your deployment. Without the job attached, the HPA can nicely scale your pods.
There are multiple ways to achieve this:
Have a bash (or similar) script first execute the Job, wait for it to complete, and then update your Deployment
Leverage more complex deployment tooling like Helm, which allows you to add a 'pre-install hook' to your Job so it is executed when you deploy your application (see the sketch below)
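For illustration, a hedged sketch of the Helm-hook route applied to the migration Job from the question (the apiVersion, image, and command are taken from the question; the delete policy is optional):
apiVersion: batch/v1
kind: Job
metadata:
  name: graphql-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade       # run the Job before install/upgrade
    "helm.sh/hook-delete-policy": hook-succeeded  # clean up the Job once it succeeds
spec:
  template:
    spec:
      containers:
      - name: graphql-migration-container
        image: image(graphql):tag(graphql)
        command: ["npm", "run", "migrate:reset"]
      restartPolicy: Never
With the hook in place the migration runs before the Deployment is installed or upgraded, so the wait-for initContainer can be dropped entirely and pods created later by the HPA have nothing to wait on.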

Why does a kubernetes cronjob pause

I have a cronjob that is defined by this manifest:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: trigger
spec:
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 5
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      activeDeadlineSeconds: 50
      backoffLimit: 1
      parallelism: 1
      template:
        spec:
          containers:
          - env:
            - name: ApiKey
              valueFrom:
                secretKeyRef:
                  key: apiKey
                  name: something
            name: trigger
            image: curlimages/curl:7.71.1
            args:
            - -H
            - "Content-Type: application/json"
            - -H
            - "Authorization: $(ApiKey)"
            - -d
            - '{}'
            - http://url
          restartPolicy: Never
It sort of works, but not 100%. For some reason it runs 10 jobs, then it pauses for 5-10 minutes or so, and then runs 10 new jobs. No errors are reported, but we don't understand why it pauses.
Any ideas on what might cause a cronjob in kubernetes to pause?
The most common problem when running CronJobs on k8s is spawning too many pods, which consume all cluster resources.
It is very important to set proper CronJob limitations, so try to set memory limits for the pods.
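For illustration, a hedged fragment showing where such limits would go in the container spec of the manifest above (the request and limit values are only placeholders):
          containers:
          - name: trigger
            image: curlimages/curl:7.71.1
            resources:
              requests:
                memory: "64Mi"
                cpu: "100m"
              limits:
                memory: "128Mi"
                cpu: "250m"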
Also, speaking about concurrencyPolicy: you set the concurrencyPolicy param to Forbid, which means the cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn't finished yet, the cron job skips the new job run.
The .spec.concurrencyPolicy field is optional. It specifies how to treat concurrent executions of a job that is created by this cron job. There are the following concurrency policies:
Allow (default): The cron job allows concurrently running jobs
Forbid: explained above
Replace: If it is time for a new job run and the previous job run hasn't finished yet, the cron job replaces the currently running job run with a new job run
Try to change the policy to Allow or Replace according to your needs.
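For example, a hedged one-line change in the CronJob spec above:
spec:
  concurrencyPolicy: Replace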
Speaking about a non-parallel Job: you can leave .spec.parallelism unset; when it is unset, it defaults to 1.
Take a look: cron-jobs-running-for-one-cron-execution-point-in-kubernetes, cron-job-limitations, cron-jobs.

Is there a way to delete pods automatically through YAML after they have status 'Completed'?

I have a YAML file which creates a pod on execution. This pod extracts data from one of our internal systems and uploads it to GCP. It takes around 12 minutes to do so, after which the status of the pod changes to 'Completed'; however, I would like to delete this pod once it has completed.
apiVersion: v1
kind: Pod
metadata:
  name: xyz
spec:
  restartPolicy: Never
  volumes:
  - name: mount-dir
    hostPath:
      path: /data_in/datos/abc/
  initContainers:
  - name: abc-ext2k8s
    image: registrysecaas.azurecr.io/secaas/oracle-client11c:11.2.0.4-latest
    volumeMounts:
    - mountPath: /media
      name: mount-dir
    command: ["/bin/sh","-c"]
    args: ["sqlplus -s CLOUDERA/MYY4nGJKsf#hal5:1531/dbmk #/media/ext_hal5_lk_org_localfisico.sql"]
  imagePullSecrets:
  - name: regcred
Is there a way to achieve this?
Typically you don't want to create bare Kubernetes pods. The pattern you're describing of running some moderate-length task in a pod, and then having it exit, matches a Job. (Among other properties, a job will reschedule a pod if the node it's on fails.)
Just switching this to a Job doesn't directly address your question, though. The documentation notes:
When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status.
So whatever task creates the pod (or job) needs to monitor it for completion, and then delete the pod (or job). (Consider using the watch API or equivalently the kubectl get -w option to see when the created objects change state.) There's no way to directly specify this in the YAML file since there is a specific intent that you can get useful information from a completed pod.
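As a hedged sketch of that monitor-then-delete step with plain kubectl (assuming the Pod has been converted to a Job named xyz):
kubectl wait --for=condition=complete job/xyz --timeout=20m
kubectl delete job xyz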
If this is actually a nightly task that you want to run at midnight or some such, you do have one more option. A CronJob will run a job on some schedule, which in turn runs a single pod. The important relevant detail here is that CronJobs have an explicit control for how many completed Jobs they keep. So if a CronJob matches your pattern, you can set successfulJobsHistoryLimit: 0 in the CronJob spec, and created jobs and their matching pods will be deleted immediately.
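For illustration, a rough sketch of that CronJob variant, reusing the image from the question (the schedule is a placeholder; the rest of the pod spec would be carried over from the Pod manifest above, and the apiVersion may be batch/v1beta1 on older clusters):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: xyz
spec:
  schedule: "0 0 * * *"            # placeholder: nightly at midnight
  successfulJobsHistoryLimit: 0    # delete the Job (and its Pod) as soon as it succeeds
  failedJobsHistoryLimit: 1        # keep one failed Job around for debugging
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: abc-ext2k8s
            image: registrysecaas.azurecr.io/secaas/oracle-client11c:11.2.0.4-latest
            # volumes, volumeMounts, command and args as in the Pod manifest above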

How to fail a (cron) job after a certain number of retries?

We have a Kubernetes cluster of web scraping cron jobs set up. All seems to go well until a cron job starts to fail (e.g., when a site structure changes and our scraper no longer works). It looks like every now and then a few failing cron jobs will continue to retry to the point it brings down our cluster. Running kubectl get cronjobs (prior to a cluster failure) will show too many jobs running for a failing job.
I've attempted following the note described here regarding a known issue with the pod backoff failure policy; however, that does not seem to work.
Here is our config for reference:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scrape-al
spec:
  schedule: '*/15 * * * *'
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 0
  successfulJobsHistoryLimit: 0
  jobTemplate:
    metadata:
      labels:
        app: scrape
        scrape: al
    spec:
      template:
        spec:
          containers:
          - name: scrape-al
            image: 'govhawk/openstates:1.3.1-beta'
            command:
            - /opt/openstates/openstates/pupa-scrape.sh
            args:
            - al bills --scrape
          restartPolicy: Never
      backoffLimit: 3
Ideally we would prefer that a cron job would be terminated after N retries (e.g., something like kubectl delete cronjob my-cron-job after my-cron-job has failed 5 times). Any ideas or suggestions would be much appreciated. Thanks!
You can tell your Job to stop retrying using backoffLimit.
Specifies the number of retries before marking this job failed.
In your case
spec:
  template:
    spec:
      containers:
      - name: scrape-al
        image: 'govhawk/openstates:1.3.1-beta'
        command:
        - /opt/openstates/openstates/pupa-scrape.sh
        args:
        - al bills --scrape
      restartPolicy: Never
  backoffLimit: 3
You set 3 as the backoffLimit of your Job. That means when a Job is created by the CronJob, it will retry 3 times if it fails. This controls the Job, not the CronJob.
When a Job has failed, another Job will still be created at the next scheduled time.
You want:
If I am not wrong, you want to stop scheduling new Jobs when your scheduled Jobs have failed 5 times. Right?
Answer:
In that case, this is not possible automatically.
Possible solution:
You need to suspend the CronJob so that it stops scheduling new Jobs:
suspend: true
You can do this manually. If you do not want to do it manually, you need to set up a watcher that watches your CronJob status and updates the CronJob to suspend it when necessary.
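A hedged example of the manual route, using the CronJob name from the question (setting suspend back to false resumes scheduling):
kubectl patch cronjob scrape-al -p '{"spec":{"suspend":true}}'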