Specify CronJob Schedule using config map - kubernetes

I have a CronJob defined in a YAML file, deployed to Istio as:
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule: "*/12 * * * *"
I want to have different schedules in different environments, so I tried to set the schedule from a ConfigMap:
apiVersion: batch/v1beta1
kind: CronJob
spec:
  schedule:
    - valueFrom:
        configMapKeyRef:
          name: config-name
          key: service-schedule
It fails to sync with the error
invalid type for io.k8s.api.batch.v1beta1.CronJobSpec.schedule: got "array", expected "string"
Is it possible to use config map in this way?

A ConfigMap is used to set environment variables inside a container or is mounted as a volume.
I don't think you can use a ConfigMap to set the schedule in a CronJob; as the error says, spec.schedule must be a plain string, not a reference.
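The usual workaround is to template or patch the manifest per environment instead of referencing a ConfigMap. As a minimal sketch (not from the original answer), assuming a recent Kustomize, a base manifest containing a CronJob named my-cronjob, and a production overlay that only overrides the schedule:
# overlays/prod/kustomization.yaml
resources:
  - ../../base
patches:
  - target:
      kind: CronJob
      name: my-cronjob
    patch: |-
      - op: replace
        path: /spec/schedule
        value: "0 * * * *"
Each environment then gets its own overlay with its own schedule value, while the rest of the CronJob stays in the shared base.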

Related

Kubernetes: Define environment variables dependent on other ones using "envFrom"

I have two ConfigMap files. One is supposed to hold "secret" values and the other has regular values and should import the secrets.
Here's the sample secret ConfigMap:
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
And the regular ConfigMap file:
kind: ConfigMap
metadata:
  name: regular-cm
data:
  SOME_CONFIG: 123
  USING_SEKRET: $(MY_SEKRET)
And my deployment is as follows:
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my_container
          envFrom:
            - configMapRef:
                name: secret-cm
            - configMapRef:
                name: regular-cm
I was hoping that my variable USING_SEKRET would be "SEKRET" because of the order in which the envFrom sources are imported, but it just appears as "$(MY_SEKRET)" on the Pods.
I've also tried setting the dependent variable as an env entry directly on the Deployment, but it results in the same problem:
kind: Deployment
...
env:
  - name: MY_SEKRET
    # Not the expected result because the variable is openly visible but should be hidden
    value: 'SEKRET'
I was trying to follow the documentation guides, based on Define an environment dependent variable for a container, but I haven't seen examples similar to what I want to do.
Is there a way to do this?
EDIT:
To explain the idea behind this structure: the whole secret-cm file will be encrypted in the repository, so not all peers will be able to see its contents.
On the other hand, I still want to be able to show everyone where its variables are used, hence the dependency in regular-cm.
With that, authorized peers can run kubectl commands and the variable replacements from secret-cm would work properly, but for everyone else the file stays hidden.
You did not explain why you want to define two ConfigMaps (one getting a value from the other), but I am assuming that you want the env parameter name defined in the ConfigMap to be independent of the parameter name used by your container in the pod. If that is the case, then create your ConfigMap:
kind: ConfigMap
metadata:
  name: secret-cm
data:
  MY_SEKRET: 'SEKRET'
Then, in your Deployment, reference the env variable from the ConfigMap:
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: my_container
          env:
            - name: USING_SEKRET
              valueFrom:
                configMapKeyRef:
                  name: secret-cm
                  key: MY_SEKRET
Now when you access the env variable $USING_SEKRET, it will show the value 'SEKRET'.
In case your requirement is different, ignore this response and provide more details.
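As a quick verification sketch (the Deployment name my-deployment is an assumption, and printenv must be available in the image):
kubectl exec deploy/my-deployment -c my_container -- printenv USING_SEKRET
# expected output: SEKRET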

Kubernetes - Pass the cronjob schedule to container env

Let's say I have the following CronJob definition:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *" # pass this value to container's env
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cronjob-test
              image: myiamge:latest
              imagePullPolicy: Never
              env:
                - name: schedule
                  value: # pass the schedule value here
          restartPolicy: OnFailure
How do I pass the schedule value from the CronJob.spec into CronJob.spec.jobTemplate.spec.template.spec.containers.env? Is it even possible?
Normally, I would do something like this:
valueFrom:
  fieldRef:
    fieldPath: spec.timezone
But in this case, it won't get this value.
Thanks in advance for help!
Unfortunately, schedule must be a concrete value at the time the CronJob is defined. The values referenced by fieldRef are passed to the container as environment variables only when your pod (container) runs, so fieldRef cannot supply a value to schedule at definition time. Usually, in this use case, a templating format is appropriate, for instance Helm and similar tools.
apiVersion: batch/v1beta1
kind: CronJob
...
spec:
  schedule: {{ .Values.schedule }}
You can replace the schedule value using Helm's templating.
# values.yaml in the Helm chart
schedule: '"*/1 * * * *"'

Kubernetes Run job using CronJob

Is there a way I can run an existing Job using a CronJob resource?
In the CronJob spec template, can we apply a selector using labels? Something like this:
Job Spec: (Link to job docs)
apiVersion: batch/v1
kind: Job
label:
  name: pi
spec:
  template:
    spec:
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
Cron Spec:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pi-cron
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      labelSelector:
        name: pi # refer to the job created above
I came across this: Create-Job-From-Cronjob. I want to try the inverse of it.
No, you cannot do this in the way you want. kubectl only allows you to create Jobs based on a CronJob, not vice versa.
kubectl create job NAME [--image=image --from=cronjob/name] -- [COMMAND] [args...] [flags] [options]
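For example (the Job name is arbitrary), a one-off Job can be created from the CronJob above with:
kubectl create job pi-manual-run --from=cronjob/pi-cron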
Available commands right now for kubectl create:
clusterrole Create a ClusterRole.
clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole
configmap Create a configmap from a local file, directory or literal value
deployment Create a deployment with the specified name.
job Create a job with the specified name.
namespace Create a namespace with the specified name
poddisruptionbudget Create a pod disruption budget with the specified name.
priorityclass Create a priorityclass with the specified name.
quota Create a quota with the specified name.
role Create a role with single rule.
rolebinding Create a RoleBinding for a particular Role or ClusterRole
secret Create a secret using specified subcommand
service Create a service using specified subcommand.
serviceaccount Create a service account with the specified name

Kubernetes CronJob Not Correctly Using Docker Secret

I have a Kubernetes cluster that I am trying to set up a CronJob on. I have the CronJob set up in the default namespace, with the image pull secret I want to use set up in the default namespace as well. I set the imagePullSecrets in the CronJob to reference the secret I use to pull images from my private Docker registry. I can verify this secret is valid because I have Deployments in the same cluster and namespace that use it to successfully pull Docker images. However, when the CronJob pod starts, I see the following error:
no basic auth credentials
I understand this happens when the pod doesn't have credentials to pull the image from the docker registry. But I am using the same secret for my deployments in the same namespace and they successfully pull down the image. Is there a difference in the configuration of using imagePullSecrets between deployments and cronjobs?
Server Version: v1.9.3
CronJob config:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  namespace: default
  name: my-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: 30 13 * * *
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          imagePullSecrets:
            - name: my-secret
          containers:
            - image: my-image
              name: my-cronjob
              command:
                - my-command
              args:
                - my-args
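One way to narrow this down (a diagnostic sketch, not from the original post; the pod name below is hypothetical) is to check whether the pod the CronJob created actually carries the secret, and that the secret is of a Docker-registry type:
# inspect the pod created by the CronJob's job
kubectl get pod my-cronjob-1234567890-abcde -o jsonpath='{.spec.imagePullSecrets}'
# confirm the secret type (expected: kubernetes.io/dockerconfigjson or kubernetes.io/dockercfg)
kubectl get secret my-secret -o jsonpath='{.type}'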

ConfigMap Kubernetes YAML: space in value is causing error

For some strange and unknown reason, when I use a ConfigMap with key value pairs that will be set as environment variables in the pods (using envFrom), my pods fail to start.
Here is the ConfigMap portion of my YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: all-config
data:
  # DB configuration
  dbServer: "host.docker.internal"
  dbPort: "3306"
  # problematic config
  validationQuery: 'Select 1'
If I comment out the validationQuery key/value pair, the pod starts. If I leave it in, it fails. If I remove the space, it runs! Very strange behavior, as it boils down to a single whitespace character.
Any ideas on why this fails and how users have been getting around it? Can someone try to reproduce?
I honestly believe that it's something with your application not liking environment variables with spaces. I tried this myself and I can see the environment variable with the space nice and dandy when I shell into the pod/container.
PodSpec:
...
spec:
  containers:
    - command:
        - /bin/sleep
        - infinity
      env:
        - name: WHATEVER
          valueFrom:
            configMapKeyRef:
              key: myenv
              name: j
...
$ kubectl get cm j -o=yaml
apiVersion: v1
data:
  myenv: Select 1
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-10T20:44:02Z
  name: j
  namespace: default
  resourceVersion: "11111111"
  selfLink: /api/v1/namespaces/default/configmaps/j
  uid: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaa
root@mypod-xxxxxxxxxx-xxxxx:/# echo $WHATEVER
Select 1
root@mypod-xxxxxxxxxx-xxxxx:/#
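A possible explanation, offered as an assumption rather than a confirmed diagnosis: if the container's entrypoint passes the variable to a shell unquoted, the space splits the value into two words, while quoting preserves it. The run-app command below is hypothetical; the variable name matches the ConfigMap key injected by envFrom:
# unquoted: the shell splits "Select 1" into two words
sh -c 'run-app --validation-query=$validationQuery'
# quoted: the value is passed as a single argument
sh -c 'run-app --validation-query="$validationQuery"'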