Kubernetes CronJob doesn't work correctly when schedule values are customized

I am using Rancher 2.3.3.
When I configure the CronJob with schedule values like @hourly and @daily, it works fine,
but when I configure it with a value like "6 1 * * *", it doesn't work.
OS times are in sync across all cluster nodes.
My config file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure

I found the root cause:
the scheduler container has a different timezone, so it runs with a few hours of delay.
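On newer clusters, a cleaner fix than chasing the controller's clock is to pin the schedule to an explicit time zone: CronJob gained a spec.timeZone field, stable since Kubernetes 1.27. A minimal sketch, where Asia/Tehran is a placeholder zone, not taken from the question:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  timeZone: "Asia/Tehran"  # placeholder: substitute your own IANA zone name
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure
On clusters too old for that field (like the Rancher 2.3.3 one above), the usual workaround is to shift the cron expression by the controller's UTC offset.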

Related

Kubernetes patch multiple resources not working

I'm trying to apply the same job history limits to a number of CronJobs using a patch like the following, named kubeJobHistoryLimit.yml:
apiVersion: batch/v1beta1
kind: CronJob
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
My kustomization.yml looks like:
bases:
  - ../base
configMapGenerator:
  - name: inductions-config
    env: config.properties
patches:
  - path: kubeJobHistoryLimit.yml
    target:
      kind: CronJob
patchesStrategicMerge:
  - job_specific_patch_1.yml
  - job_specific_patch_2.yml
...
resources:
  - secrets-uat.yml
And at some point in my CI pipeline I have:
kubectl --kubeconfig $kubeconfig apply --force -k ./
The kubectl version is 1.21.9.
The issue is that the job history limit values don't seem to be getting picked up. Is there something wrong with the configuration or the version of K8s I'm using?
With kustomize 4.5.2, your patch as written doesn't apply; it fails with:
Error: trouble configuring builtin PatchTransformer with config: `
path: kubeJobHistoryLimit.yml
target:
kind: CronJob
`: unable to parse SM or JSON patch from [apiVersion: batch/v1
kind: CronJob
spec:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
]
This is because it's missing metadata.name, which is required, even if it's ignored when patching multiple objects. If I modify the patch to look like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ignored
spec:
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
It seems to work.
If I have base/cronjob1.yaml that looks like:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob1
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - command:
            - sleep
            - "60"
            image: docker.io/alpine:latest
            name: example
  schedule: 30 3 * * *
Then, using the above patch and an overlay/kustomization.yaml like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
patches:
  - path: kubeJobHistoryLimit.yml
    target:
      kind: CronJob
I see the following output from kustomize build overlay:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob2
spec:
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - command:
            - sleep
            - "60"
            image: docker.io/alpine:latest
            name: example
  schedule: 30 3 * * *
  successfulJobsHistoryLimit: 1
You can see the two attributes have been updated correctly.
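As a side note beyond the answer above: the same limits can also be written as an inline JSON 6902 patch, which sidesteps the dummy metadata.name requirement entirely. A sketch, assuming the same kustomization layout:
patches:
  - target:
      kind: CronJob
    patch: |-
      - op: add
        path: /spec/successfulJobsHistoryLimit
        value: 1
      - op: add
        path: /spec/failedJobsHistoryLimit
        value: 1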

Kubernetes Cronjobs are not removed

I'm running the following cronjob in my minikube:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Allow
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - somefailure
          restartPolicy: OnFailure
I've added the "somefailure" argument to force the job to fail. My problem is that my minikube installation (running v1.23.3) seems to ignore successfulJobsHistoryLimit and failedJobsHistoryLimit. I've checked the documentation at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ and it says that both parameters are available, but in the end Kubernetes keeps up to 10 jobs. When I add ttlSecondsAfterFinished: 1, it removes the job after 1 second, but the other parameters are completely ignored.
So I wonder whether I need to enable something in minikube, whether these parameters are deprecated, or what else the reason might be that it doesn't work. Any idea?
It seems it's a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/53331.
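Until that is fixed on your cluster, one workaround consistent with the asker's own observation is to lean on the TTL controller instead. Note that ttlSecondsAfterFinished belongs to the Job spec (jobTemplate.spec), not the CronJob spec. A minimal sketch, where the 300-second TTL is an arbitrary example value:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 300  # example value: keep finished jobs for 5 minutes
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - somefailure
          restartPolicy: OnFailure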

error validating data: [ValidationError(CronJob.spec.jobTemplate.spec.template.spec): unknown field "container" in io.k8s.api.core.v1.PodSpec,

This is the YAML file that I am trying to use to create a CronJob. I am getting an error like: unknown field "container" in io.k8s.api.core.v1.PodSpec
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: abc-service-cron-job
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          container:
          - name: abc-service-cron-job
            image: docker.repo1.xyz.com/hui-services/abc-application/REPLACE_ME
            imagePullPolicy: Always
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  ...
spec:
  ...
  jobTemplate:
    spec:
      template:
        spec:
          containers: # <-- you have a spelling error here: it should be "containers", not "container"
          ...
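A quick way to catch field-name typos like this before applying (my addition, not part of the original answer) is to ask the API server itself:
# List the valid fields of a pod spec; "containers" is listed, "container" is not
kubectl explain cronjob.spec.jobTemplate.spec.template.spec

# Or validate the whole manifest server-side without creating anything
# (cronjob.yaml is a placeholder for your file name)
kubectl apply --dry-run=server -f cronjob.yaml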

Kubernetes Cron Jobs - Run multiple pods for a cron job

Our requirement is to do batch processing every 3 hours, but a single process cannot handle the workload, so we have to run multiple pods for the same cron job. Is there any way to do that?
Thank you.
You can set parallelism: <num_of_pods> in cronjob.spec.jobTemplate.spec and it will run multiple pods at the same time.
The following is an example of a CronJob that runs 3 nginx pods every minute.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  creationTimestamp: null
  labels:
    run: cron1
  name: cron1
spec:
  concurrencyPolicy: Allow
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      parallelism: 3
      template:
        metadata:
          creationTimestamp: null
          labels:
            run: cron1
        spec:
          containers:
          - image: nginx
            name: cron1
            resources: {}
          restartPolicy: OnFailure
  schedule: '*/1 * * * *'
status: {}
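One caveat worth adding to this answer: with parallelism set but completions left unset, the Job is treated as a work queue and can be considered complete as soon as any one pod succeeds. If every one of the 3 pods must finish successfully, set completions as well; a sketch of just the relevant fragment:
spec:
  jobTemplate:
    spec:
      parallelism: 3
      completions: 3  # require 3 successful completions, not just one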

Not able to connect to RabbitMQ from Kubernetes cron jobs

I am using RabbitMQ on a remote host (cloudamqp.com) and I created a cron job on Kubernetes. On my local machine the job works fine, and the Kubernetes CronJob is scheduled perfectly well, but the job redirects the RabbitMQ connection URL to 127.0.0.1:5672 and I get an error.
pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused
I checked the logs of the cron job and my connection URL is perfectly fine, but when pika tries to connect to the host it falls back to 127.0.0.1:5672; since the cron pod is not running any RabbitMQ server, the connection is refused.
CronJob.yml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scrape-news
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: scrape-news
        spec:
          containers:
          - name: scrape-news
            image: SCRAPER_IMAGE
            imagePullPolicy: Always
          restartPolicy: Never
  schedule: '* * * * *'
  successfulJobsHistoryLimit: 3
RabbitMQ Connection
print(env.RABBIT_URL)
self.params = pika.URLParameters(env.RABBIT_URL)
self.connection = pika.BlockingConnection(parameters=self.params)
self.channel = self.connection.channel() # start a channel
The connection URL is exactly the same one that works on my local setup.
Based on your CronJob spec, you are not passing the environment variable RABBIT_URL.
Your code looks as if it expects this variable to be set, which it is not, and that is likely why it is defaulting to localhost.
self.params = pika.URLParameters(env.RABBIT_URL)
You probably want something like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scrape-news
spec:
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: scrape-news
        spec:
          containers:
          - name: scrape-news
            image: SCRAPER_IMAGE
            imagePullPolicy: Always
            env:
            - name: RABBIT_URL
              value: cloudamqp.com
          restartPolicy: Never
  schedule: '* * * * *'
  successfulJobsHistoryLimit: 3
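One more caveat, not in the original answer: pika.URLParameters expects a complete AMQP URL, not a bare hostname, so the value above would need to look like amqps://user:password@host.cloudamqp.com/vhost (placeholder credentials, not a real endpoint). A minimal sketch of the consuming side under that assumption:
import os
import pika

# RABBIT_URL must be a full AMQP(S) URL, e.g.
# amqps://user:password@host.cloudamqp.com/vhost (placeholder values)
rabbit_url = os.environ["RABBIT_URL"]  # raises KeyError if the variable is missing

params = pika.URLParameters(rabbit_url)
connection = pika.BlockingConnection(parameters=params)
channel = connection.channel()  # start a channel
For real credentials, prefer injecting the value from a Secret via valueFrom.secretKeyRef rather than a literal value in the manifest.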