Kubernetes CronJobs are not removed

I'm running the following cronjob in my minikube:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Allow
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - somefailure
          restartPolicy: OnFailure
I've added the "somefailure" argument to force the job to fail. My problem is that my minikube installation (running v1.23.3) seems to ignore successfulJobsHistoryLimit and failedJobsHistoryLimit. I've checked the documentation at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ and it says that both parameters are available, but in the end Kubernetes generates up to 10 jobs. When I add ttlSecondsAfterFinished: 1, it removes the finished job after 1 second, but the other parameters are completely ignored.
So I wonder: do I need to enable something in minikube, are these parameters deprecated, or what else could be the reason it doesn't work? Any idea?

It seems to be a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/53331.
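In the meantime, ttlSecondsAfterFinished (which the question already confirms works) is a practical cleanup mechanism. A minimal sketch of where it sits in the manifest, using an arbitrary 60-second TTL:
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      # Delete each finished Job (and its pods) 60 seconds after it finishes.
      # 60 is an arbitrary value; pick whatever retention you need.
      ttlSecondsAfterFinished: 60
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - somefailure
          restartPolicy: OnFailure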

Related

How do I make Helm chart Hooks post-install work if other charts are in running state

I have a couple of Helm templates in a myapp/templates/ directory, and they deploy as expected with helm install myapp.
These two templates are for example:
database.yaml
cronjob.yaml
I'd like for the cronjob.yaml to only run after the database.yaml is in a running state. I currently have an issue where database.yaml fairly regularly fails in a way we half expect (it's not ideal, but it is what it is).
I've found hooks, but I think I'm either using them incorrectly, or they don't determine whether the pod is Running, Pending, in some crashed state, etc.
I haven't made any changes to database.yaml in order to use hooks, but in cronjob.yaml, which I only want to run once database.yaml is in a running state, I added the annotations as follows:
cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: database
  annotations:
    "helm.sh/hook": "post-install"
  labels:
    app: database
    service: database
spec:
  schedule: "* * * * *"
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: customtask
            image: "{{ .Values.myimage }}"
            command:
            - /bin/sh
            - -c
            - supercooltask.sh
          restartPolicy: Never
How can I change this hook configuration to allow cronjob.yaml to only run if database.yaml deploys and runs successfully?
Use an init container in the Pod spec of the CronJob to check that the DB is up and running.
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podspec-v1-core
Example:
spec:
  template:
    spec:
      initContainers:
      ..
      containers:
      ..
      restartPolicy: OnFailure
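For example, a minimal sketch of such an init container, assuming the database is exposed through a Service named database on port 5432 (both are placeholders for your chart's actual values, and the busybox image's nc is assumed to support -z):
spec:
  template:
    spec:
      initContainers:
      # Block the main container until the database Service accepts TCP connections.
      - name: wait-for-db
        image: busybox
        command:
        - /bin/sh
        - -c
        - until nc -z database 5432; do echo waiting for database; sleep 2; done
      containers:
      - name: customtask
        image: "{{ .Values.myimage }}"
        command:
        - /bin/sh
        - -c
        - supercooltask.sh
      restartPolicy: OnFailure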

K8s Cronjob Rolling Restart Every Day

I have one pod that I want to automatically restart once a day. I've looked at the Cronjob documentation and I think I'm close, but I keep getting an Exit Code 1 error. I'm not sure if there's an obvious error in my .yaml. If not, I can post the error log as well. Here's my code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-deployment-restart
spec:
  schedule: "0 20 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - 'kubectl'
            - 'rollout'
            - 'restart'
            - 'deployment my-deployment'
You would need to give it permission to access the API; that means creating a ServiceAccount and some RBAC policy objects (Role, RoleBinding), and then setting serviceAccountName in your pod spec.
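A minimal sketch of those objects, assuming the CronJob runs in the same namespace as the Deployment (all names here are illustrative):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-restarter
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter
rules:
# rollout restart works by patching the Deployment's pod template,
# so get and patch on deployments are sufficient.
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restarter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployment-restarter
subjects:
- kind: ServiceAccount
  name: deployment-restarter
Then set serviceAccountName: deployment-restarter in the CronJob's pod spec. Note also that 'deployment my-deployment' is passed to kubectl as a single argument; it should be 'deployment/my-deployment' (or two separate list items), which on its own would explain an exit code 1.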

Is it possible to trigger a kubernetes cronjob also upon deployment?

I have a simple cronjob that runs every 10 minutes:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: myjob
spec:
  schedule: "*/10 * * * *" # every 10 minutes
  successfulJobsHistoryLimit: 1
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: job
            image: image
            imagePullPolicy: Always
          restartPolicy: OnFailure
It indeed runs every 10 minutes, but I would like it to also run the first time I deploy the CronJob. Is that possible?
You could have a one-time Job trigger the scheduled CronJob:
kubectl create job --from=cronjob/<name of cronjob> <name of job>
The one-time Job would need to run after the scheduled CronJob has been created, and its image would need to include the kubectl binary. The API server permissions needed to run kubectl inside the container can be provided by linking a ServiceAccount to the one-time Job.
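A minimal sketch of such a one-time Job, assuming a ServiceAccount named job-creator that is allowed to create Jobs and read CronJobs (the account and its RBAC are hypothetical and must already exist in your cluster):
apiVersion: batch/v1
kind: Job
metadata:
  name: myjob-initial-run
spec:
  template:
    spec:
      serviceAccountName: job-creator
      containers:
      - name: kickoff
        image: bitnami/kubectl
        command:
        - /bin/sh
        - -c
        # Create a Job from the CronJob, exactly like the manual command above.
        - kubectl create job --from=cronjob/myjob myjob-initial
      restartPolicy: Never
If you deploy with Helm, annotating this Job with "helm.sh/hook": post-install runs it automatically after each install.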

Kubernetes cronjob doesn't work correctly when schedule values are customized

I am using Rancher 2.3.3.
When I configure the cronjob with schedule values like @hourly and @daily, it works fine,
but when I configure it with values like "6 1 * * *", it doesn't work.
OS times are in sync across all cluster nodes.
My config file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure
I found the root cause: the scheduler container has a different timezone, so it runs with a few hours' delay.
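For reference, a CronJob's schedule is interpreted in the timezone of the kube-controller-manager, not of the nodes or pods. On newer clusters (the batch/v1 timeZone field is stable as of Kubernetes 1.27) you can pin the zone explicitly instead of compensating by hand; a sketch, with an illustrative zone:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  # Interpret the schedule in this IANA zone rather than the controller's local time.
  timeZone: "Europe/Stockholm"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure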

How to set minimum-container-ttl-duration in yml

I'm trying to set the minimum-container-ttl-duration property on a Kubernetes CronJob. I see a bunch of properties like this that appear to be configurable, but the documentation doesn't appear to show where, in the yml file, they can actually be set.
In this example yml, where would I put this property?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
minimum-container-ttl-duration is not a property on CronJob but is a Node-level property set via a command line parameter: kubelet ... --minimum-container-ttl-duration=x.
https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#user-configuration:
minimum-container-ttl-duration, minimum age for a finished container before it is garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
The usage of this flag is deprecated.
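If the goal is to control how long finished CronJob workloads stick around, the closest equivalents you can set in the manifest itself are the CronJob history limits and the Job-level ttlSecondsAfterFinished, for example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  # Keep at most this many finished Jobs around.
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      # Delete each finished Job (and its pods) 100 seconds after completion.
      ttlSecondsAfterFinished: 100
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure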