Kubernetes - Pass the CronJob schedule to container env

Let's say I have the following CronJob definition:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *" # pass this value to container's env
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob-test
            image: myimage:latest
            imagePullPolicy: Never
            env:
            - name: schedule
              value: # pass the schedule value here
          restartPolicy: OnFailure
How do I pass the schedule value from the CronJob.spec into CronJob.spec.jobTemplate.spec.template.spec.containers.env? Is it even possible?
Normally, I would do something like this:
valueFrom:
  fieldRef:
    fieldPath: spec.timezone
But in this case, it won't get this value.
Thanks in advance for help!

Unfortunately, schedule must be a fixed string at the time the CronJob is defined. The values referred to by fieldRef are passed to the container as environment variables when the pod (container) runs, so fieldRef cannot supply a value to schedule at CronJob definition time. In this use case, a templating tool is usually the appropriate approach, for instance Helm or similar.
apiVersion: batch/v1beta1
kind: CronJob
:
spec:
  schedule: {{ .Values.schedule }}
You can replace the schedule value using Helm values.
# values.yaml in the Helm chart
schedule: '"*/1 * * * *"'
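Since the original goal is to expose the same schedule to the container, the same Helm value can be injected into both places. A minimal sketch of the templated CronJob (assuming a standard chart layout, and using | quote instead of the double-quoting above):
# templates/cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: {{ .Values.schedule | quote }}
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob-test
            image: myimage:latest
            imagePullPolicy: Never
            env:
            - name: schedule
              value: {{ .Values.schedule | quote }} # same value as spec.schedule
          restartPolicy: OnFailure
Rendering with helm template (or installing with helm install) then produces a manifest where spec.schedule and the container env variable carry the same string.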

Related

Kubernetes CronJob doesn't work correctly when values are customized

I am using Rancher 2.3.3.
When I configure the CronJob with schedule values like @hourly and @daily, it works fine,
but when I configure it with values like "6 1 * * *", it doesn't work.
OS times are in sync across all cluster nodes.
My config file
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure
I found the root cause:
the scheduler container has a different timezone, so it runs with a few hours' delay.
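As an aside, on newer clusters (batch/v1 CronJob, where the timeZone field is stable since Kubernetes 1.27) you can pin the schedule to an explicit IANA time zone instead of relying on the controller's local time. A minimal sketch; the zone name is just an example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: samplename
  namespace: samplenamespace
spec:
  schedule: "6 1 * * *"
  timeZone: "Asia/Tehran" # example IANA zone; pick the one your schedule is written for
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: samplename
            image: myimageaddress:1.0
          restartPolicy: OnFailure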

Specify CronJob Schedule using config map

I have a CronJob defined in a YAML file, deployed to Istio as
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule: "*/12 * * * *"
I want to have different schedules in different environments, so tried to set the schedule from a config map:
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule:
  - valueFrom:
      configMapKeyRef:
        name: config-name
        key: service-schedule
It fails to sync with the error
invalid type for io.k8s.api.batch.v1beta1.CronJobSpec.schedule: got "array", expected "string"
Is it possible to use config map in this way?
A ConfigMap is used to set environment variables inside a container or is mounted as a volume.
I don't think you can use a ConfigMap to set the schedule in a CronJob; as the error says, the schedule field must be a plain string in the manifest.
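If you need a different schedule per environment without a templating tool, one workaround (a sketch; service-cronjob is a hypothetical name for an already-deployed CronJob) is to patch the field at deploy time:
kubectl patch cronjob service-cronjob --type merge -p '{"spec":{"schedule":"*/30 * * * *"}}'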

How to set the result of a shell script into the arguments of a Kubernetes CronJob regularly

I have trouble setting the result value of a shell script as an argument for a Kubernetes CronJob on a recurring basis.
Is there a good way to have the value refreshed every day?
I use a Kubernetes CronJob to perform a daily task.
The CronJob launches a Rust application, which executes a batch process.
As one of the arguments for the Rust app, I pass the target date (a yyyy-MM-dd formatted string) on the command line.
Therefore, I tried to pass the date value into the CronJob definition YAML file as follows, setting the ${TARGET_DATE} value with the command below.
The value for TARGET_DATE is exported in sample.sh.
cat sample.yml | envsubst | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"]
          restartPolicy: Never
I expected this to refresh the TARGET_DATE value every day, but it does not change from the date that was substituted the first time.
Is there a good way to set the result of a shell script into the args of a CronJob YAML regularly?
Thanks.
You can use init containers for that: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
The idea is the following: run the script that produces the value inside an init container and write the value into a shared emptyDir volume, then read that value from the main container. Here is an example:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          initContainers:
          - name: init-script
            image: my-init-image
            volumeMounts:
            - name: date
              mountPath: /date
            command:
            - sh
            - -c
            - "/my-script > /date/target-date.txt"
          containers:
          - name: some-container
            image: sample/some-image
            command: ["./run"]
            args: ["${TARGET_DATE}"] # adjust this part to read from file
            volumeMounts:
            - name: date
              mountPath: /date
          restartPolicy: Never
          volumes:
          - name: date
            emptyDir: {}
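To wire up the "read from file" part, one option (a sketch, assuming ./run accepts the date as its first argument) is to wrap the command in a shell so the file contents are substituted at run time:
containers:
- name: some-container
  image: sample/some-image
  command: ["sh", "-c"]
  args: ['./run "$(cat /date/target-date.txt)"'] # file written by the init container
  volumeMounts:
  - name: date
    mountPath: /date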
You can override your Docker entrypoint / Kubernetes container command and do this in one shot:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: some-batch
  namespace: some-namespace
spec:
  schedule: "00 1 * * 1-5"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: some-container
            image: sample/some-image
            command: ["/bin/sh"]
            args:
            - -c
            - "./run ${TARGET_DATE}"
          restartPolicy: Never
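Note that ${TARGET_DATE} in this variant is still substituted only once, at apply time. If the goal is a value refreshed on every run, one option (a sketch, assuming ./run takes a yyyy-MM-dd argument and the image has a POSIX date command) is to compute the date inside the container instead:
command: ["/bin/sh"]
args:
- -c
- './run "$(date +%Y-%m-%d)"' # evaluated each time the job pod starts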

CronJob: unknown field "configMapRef"

I'm applying a Kubernetes CronJob.
So far it works.
Now I want to add the environment variables. (env: -name... see below)
While tryng to apply I get the error
unknown field "configMapRef" in io.k8s.api.core.v1.EnvVarSource
I don't want to set all the individual variables here. I prefer to reference the ConfigMap so the variables are not duplicated. How can I link to the configmap.yaml variables in a CronJob file, and how is it coded?
Frank
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ad-sync
  creationTimestamp: 2019-02-15T09:10:20Z
  namespace: default
  selfLink: /apis/batch/v1beta1/namespaces/default/cronjobs/ad-sync
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  suspend: false
  schedule: "0 */1 * * *"
  jobTemplate:
    metadata:
      labels:
        job: ad-sync
    spec:
      template:
        spec:
          containers:
          - name: ad-sync
            image: foo.azurecr.io/foobar/ad-sync
            command: ["dotnet", "AdSyncService.dll"]
            args: []
            env:
            - name: AdSyncService
              valueFrom:
                configMapRef:
                  name: ad-sync-service-configmap
          restartPolicy: OnFailure
There is no such field as configMapRef under env; instead there is a field called configMapKeyRef.
To get more detail about Kubernetes objects, it's convenient to use kubectl explain --help.
For example, if you would like to check all of the keys and their types, you can use the following commands:
kubectl explain cronjob --recursive
kubectl explain cronjob.spec.jobTemplate.spec.template.spec.containers.env.valueFrom.configMapKeyRef
You should use configMapKeyRef for a single value, or configMapRef with envFrom.
It works this way:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  ...
spec:
  ...
  jobTemplate:
    metadata:
      ...
    spec:
      template:
        spec:
          containers:
          - name: ad-sync
            ...
            envFrom:
            - configMapRef:
                name: ad-sync-service-configmap
            command: ["dotnet", "AdSyncService.dll"]
There are two approaches: valueFrom for individual values, and envFrom for multiple values.
valueFrom is used inside the env attribute, like this:
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        args: []
        env:
        - name: AdSyncService
          valueFrom:
            configMapKeyRef:
              name: ad-sync-service-configmap
              key: log_level
envFrom is used directly inside the container attribute, like this:
spec:
  template:
    spec:
      containers:
      - name: ad-sync
        image: foo.azurecr.io/foobar/ad-sync
        command: ["dotnet", "AdSyncService.dll"]
        envFrom:
        - configMapRef:
            name: ad-sync-service-configmap
ConfigMap for reference:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ad-sync-service-configmap
  namespace: default
data:
  log_level: INFO
The main difference between the two:
valueFrom will inject the value of a single key from the referenced ConfigMap
envFrom will inject all ConfigMap keys as environment variables
The main issue with your example is that you used configMapRef (which belongs under envFrom) inside valueFrom, where it should actually be configMapKeyRef.
Also, configMapKeyRef needs a key attribute to identify where the data is coming from.
For more details, please check these docs.
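To verify the injection without waiting for the next scheduled run, you can trigger the CronJob once by hand (a sketch; the job name test-ad-sync is arbitrary):
kubectl create job test-ad-sync --from=cronjob/ad-sync
kubectl logs job/test-ad-sync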

How to set minimum-container-ttl-duration in yml

I'm trying to set the minimum-container-ttl-duration property on a Kubernetes CronJob. I see a bunch of properties like this that appear to be configurable, but the documentation doesn't appear to show where, in the yml file, they can actually be set.
In this example yml, where would I put this property?
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
minimum-container-ttl-duration is not a property on CronJob but a node-level property set via a kubelet command-line parameter: kubelet ... --minimum-container-ttl-duration=x.
https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#user-configuration:
minimum-container-ttl-duration, minimum age for a finished container before it is garbage collected. Default is 0 minutes, which means every finished container will be garbage collected.
The usage of this flag is deprecated.
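If the underlying goal is to clean up finished Jobs from the YAML itself, the closest in-manifest analog is the Job-level ttlSecondsAfterFinished field (handled by the TTL-after-finished controller, which may need to be enabled on older clusters). A minimal sketch:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 300 # delete each finished Job (and its pods) after 5 minutes
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure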