I'm trying to set the minimum-container-ttl-duration property on a Kubernetes CronJob. I see a number of properties like this that appear to be configurable, but the documentation doesn't show where in the YAML file they can actually be set.
In this example yml, where would I put this property?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
minimum-container-ttl-duration is not a property on CronJob; it is a node-level setting passed to the kubelet as a command-line flag: kubelet ... --minimum-container-ttl-duration=x.
From https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#user-configuration:
minimum-container-ttl-duration, minimum age for a finished container before it is garbage collected. Default is 0 minute, which means every finished container will be garbage collected.
Note that usage of this flag is deprecated.
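If the underlying goal is to control how long finished work sticks around, a spec-level alternative is ttlSecondsAfterFinished on the job template; a minimal sketch, with the 3600-second value chosen purely for illustration:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 3600 # delete each finished Job (and its pods) an hour after it completes
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure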
I'm running the following cronjob in my minikube:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Allow
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - somefailure
          restartPolicy: OnFailure
I've added "somefailure" to force the job to fail. My problem is that my minikube installation (running v1.23.3) seems to ignore successfulJobsHistoryLimit and failedJobsHistoryLimit. I've checked the documentation at https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/ and it says that both parameters are available, but in the end Kubernetes generates up to 10 jobs. When I add ttlSecondsAfterFinished: 1, it removes the finished job after 1 second, but the other parameters are completely ignored.
So I wonder whether I need to enable something in minikube, whether these parameters are deprecated, or what else the reason could be that it doesn't work. Any ideas?
It seems it's a Kubernetes bug: https://github.com/kubernetes/kubernetes/issues/53331.
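Until that is fixed, a manual workaround is to prune old jobs yourself; a small sketch, where the job name in the delete command is illustrative:

$ kubectl get jobs --sort-by=.metadata.creationTimestamp
$ kubectl delete job hello-27948122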
Say that I have a job history limit > 1; is there a way to use kubectl to find which jobs have been spawned by a CronJob?
Use a label.
$ kubectl get jobs -n namespace -l created-by=cronjob
Here created-by=cronjob is a label that you define yourself on the CronJob's job template:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    metadata:
      labels:
        created-by: cronjob
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Ideally you would use a --field-selector to get all jobs belonging to a particular CronJob, but field selectors do not support all fields. The best way I could find is using jsonpath.
Substitute CronJobName with the name of your CronJob to get all jobs that belong to it:
kubectl get jobs $(kubectl get jobs -o=jsonpath='{.items[?(@.metadata.ownerReferences[*].name=="CronJobName")].metadata.name}')
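An equivalent filter on the ownerReferences, assuming jq is installed, which can be easier to read than kubectl's jsonpath dialect:

kubectl get jobs -o json \
  | jq -r '.items[] | select(.metadata.ownerReferences[]?.name == "CronJobName") | .metadata.name'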
I have one pod that I want to automatically restart once a day. I've looked at the Cronjob documentation and I think I'm close, but I keep getting an Exit Code 1 error. I'm not sure if there's an obvious error in my .yaml. If not, I can post the error log as well. Here's my code:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-deployment-restart
spec:
  schedule: "0 20 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - 'kubectl'
            - 'rollout'
            - 'restart'
            - 'deployment my-deployment'
You would need to give it permission to access the API: that means creating a ServiceAccount and some RBAC policy objects (a Role and a RoleBinding), and then setting serviceAccountName in your pod spec. Also note that 'deployment my-deployment' is passed as one single argument; deployment and my-deployment should be separate items in the command list.
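A minimal sketch of those objects; the name deployment-restarter and the default namespace are assumptions, so adjust them to your setup:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: deployment-restarter
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restarter
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "patch"] # rollout restart works by patching the deployment's pod template
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-restarter
  namespace: default
subjects:
- kind: ServiceAccount
  name: deployment-restarter
  namespace: default
roleRef:
  kind: Role
  name: deployment-restarter
  apiGroup: rbac.authorization.k8s.io

Then add serviceAccountName: deployment-restarter next to restartPolicy in the CronJob's pod spec.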
Let's say I have the following CronJob definition:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *" # pass this value to container's env
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob-test
            image: myimage:latest
            imagePullPolicy: Never
            env:
            - name: schedule
              value: # pass the schedule value here
          restartPolicy: OnFailure
How do I pass the schedule value from the CronJob.spec into CronJob.spec.jobTemplate.spec.template.spec.containers.env? Is it even possible?
Normally, I would do something like this:
valueFrom:
  fieldRef:
    fieldPath: spec.timezone
But in this case, it won't get this value.
Thanks in advance for help!
Unfortunately, schedule has to be a concrete value at the time the CronJob is defined. The values referred to by fieldRef are only injected into the container as environment variables when the pod runs, so fieldRef cannot supply a value to schedule at definition time. For this use case a template format is the usual answer, for instance Helm or similar tools.
apiVersion: batch/v1beta1
kind: CronJob
:
spec:
  schedule: {{ .Values.schedule }}
You can replace the schedule value using Helm templating:
# values.yaml
schedule: '"*/1 * * * *"'
I'm having trouble regularly setting the result of a shell script as an argument for a Kubernetes CronJob.
Is there a good way to set a value that is refreshed every day?
I use a Kubernetes CronJob to perform a daily task. The cronjob launches a Rust application, which executes a batch process.
As one of the arguments for the Rust app, I pass a target date (a yyyy-MM-dd formatted string) as a command-line argument.
Therefore, I tried to pass the date value into the yaml definition file for the cronjob as follows, setting the ${TARGET_DATE} value with the following script. In sample.sh, the value for TARGET_DATE is exported.
cat sample.yml | envsubst | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"]
          restartPolicy: Never
I expected this to refresh TARGET_DATE every day, but it does not change from the date that was substituted the first time I applied the file.
Is there a good way to regularly set the result of a shell script into the args of a cronjob yaml?
Thanks.
You can use init containers for that: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
The idea is the following: you run the script that computes the value inside an init container and write the value into a shared emptyDir volume, then read that value from the main container. Here is an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
          - name: init-script
            image: my-init-image
            volumeMounts:
            - name: date
              mountPath: /date
            command:
            - sh
            - -c
            - "/my-script > /date/target-date.txt"
          containers:
          - name: some-container
            image: sample/some-image
            command:
            - sh
            - -c
            # read the value the init container wrote into the shared volume
            - './run "$(cat /date/target-date.txt)"'
            volumeMounts:
            - name: date
              mountPath: /date
          restartPolicy: Never
          volumes:
          - name: date
            emptyDir: {}
You can override your Docker entrypoint / k8s container command and do it in one shot:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["/bin/sh"]
            args:
            - -c
            # compute today's date at run time instead of substituting it at apply time
            - './run "$(date +%Y-%m-%d)"'
          restartPolicy: Never
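Either way, you can trigger a one-off run to test the command without waiting for the schedule (the job name test-run is arbitrary):

kubectl create job --from=cronjob/some-batch test-run -n some-namespace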