CronJob for running a ConfigMap - kubernetes

I am trying to write a CronJob for executing a shell script within a ConfigMap for Kafka.
My intention is to reassign partitions at specific intervals of time.
However, I am facing issues with it. I am very new to it. Any help would be appreciated.
cron-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: partition-cron
spec:
  schedule: "*/10 * * * *"
  startingDeadlineSeconds: 20
  successfulJobsHistoryLimit: 5
  jobTemplate:
    spec:
      completions: 2
      template:
        spec:
          containers:
          - name: partition-reassignment
            image: busybox
            command: ["/configmap/runtimeConfig.sh"]
            volumeMounts:
            - name: configmap
              mountPath: /configmap
          restartPolicy: Never
          volumes:
          - name: configmap
            configMap:
              name: configmap-config
configmap-config.yaml
{{- if .Values.topics -}}
{{- $zk := include "zookeeper.url" . -}}
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: {{ template "kafka.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
  name: {{ template "kafka.fullname" . }}-config
data:
  runtimeConfig.sh: |
    #!/bin/bash
    set -e
    cd /usr/bin
    until kafka-configs --zookeeper {{ $zk }} --entity-type topics --describe || (( count++ >= 6 ))
    do
      echo "Waiting for Zookeeper..."
      sleep 20
    done
    until nc -z {{ template "kafka.fullname" . }} 9092 || (( retries++ >= 6 ))
    do
      echo "Waiting for Kafka..."
      sleep 20
    done
    echo "Applying runtime configuration using {{ .Values.image }}:{{ .Values.imageTag }}"
    {{- range $n, $topic := .Values.topics }}
    {{- if and $topic.partitions $topic.replicationFactor $topic.reassignPartitions }}
    cat << EOF > {{ $topic.name }}-increase-replication-factor.json
    {"version":1, "partitions":[
    {{- $partitions := (int $topic.partitions) }}
    {{- $replicas := (int $topic.replicationFactor) }}
    {{- range $i := until $partitions }}
    {"topic":"{{ $topic.name }}","partition":{{ $i }},"replicas":[{{- range $j := until $replicas }}{{ $j }}{{- if ne $j (sub $replicas 1) }},{{- end }}{{- end }}]}{{- if ne $i (sub $partitions 1) }},{{- end }}
    {{- end }}
    ]}
    EOF
    kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --execute
    kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --verify
    {{- end }}
    {{- end -}}
My intention is to run the runtimeConfig.sh script as a cron job at regular intervals for partition reassignment in Kafka.
I am not sure if my approach is correct.
Also, I have randomly put image: busybox in the cron-job.yaml file. I am not sure what I should be putting in there.
Information Part
$ kubectl get cronjobs
NAME             SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
partition-cron   */10 * * * *   False     1        5m              12m

$ kubectl get pods
NAME                                               READY     STATUS              RESTARTS   AGE
elegant-hedgehog-metrics-server-58995fcf8b-2vzg6   1/1       Running             0          5d
my-kafka-0                                         1/1       Running             1          12m
my-kafka-1                                         1/1       Running             0          10m
my-kafka-2                                         1/1       Running             0          9m
my-kafka-config-644f815a-pbpl8                     0/1       Completed           0          12m
my-kafka-zookeeper-0                               1/1       Running             0          12m
partition-cron-1548672000-w728w                    0/1       ContainerCreating   0          5m
$ kubectl logs partition-cron-1548672000-w728w
Error from server (BadRequest): container "partition-reassignment" in pod "partition-cron-1548672000-w728w" is waiting to start: ContainerCreating
Modified Cron Job YAML
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: partition-cron
spec:
  schedule: "*/5 * * * *"
  startingDeadlineSeconds: 20
  successfulJobsHistoryLimit: 5
  jobTemplate:
    spec:
      completions: 1
      template:
        spec:
          containers:
          - name: partition-reassignment
            image: busybox
            command: ["/configmap/runtimeConfig.sh"]
            volumeMounts:
            - name: configmap
              mountPath: /configmap
          restartPolicy: Never
          volumes:
          - name: configmap
            configMap:
              name: {{ template "kafka.fullname" . }}-config
Now the CronJob pods end up with status ContainerCannotRun.

You've set the ConfigMap to name: {{ template "kafka.fullname" . }}-config but in the job you are mounting configmap-config. Unless you installed the Helm chart using configmap as the name of the release, that Job will never start.
One way to fix it would be to define the volume as:
volumes:
  - name: configmap
    configMap:
      name: {{ template "kafka.fullname" . }}-config
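Note that even with the correct ConfigMap name, busybox is unlikely to run this script: the image ships neither bash nor the Kafka CLI tools that runtimeConfig.sh calls, and files projected from a ConfigMap default to mode 0644 (not executable), which would also explain the ContainerCannotRun status. A rough sketch of the relevant parts of the job template, assuming the chart's Kafka image (the same .Values.image:.Values.imageTag the script itself logs) has the Kafka tools on its PATH:
containers:
  - name: partition-reassignment
    image: "{{ .Values.image }}:{{ .Values.imageTag }}"   # assumption: this image provides bash and the Kafka CLI
    command: ["/bin/bash", "/configmap/runtimeConfig.sh"] # invoking via bash avoids needing an executable bit
    volumeMounts:
      - name: configmap
        mountPath: /configmap
restartPolicy: Never
volumes:
  - name: configmap
    configMap:
      name: {{ template "kafka.fullname" . }}-config
      defaultMode: 0744   # only needed if you exec the script directly instead of via bash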

Related

Kubernetes CronJob Helm template can not set value to true

I have a Helm template for a CronJob in a chart:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ include "gdp-chart.fullname" . }}-importjob
  labels:
    {{- include "gdp-chart.labels" . | nindent 4 }}
spec:
  suspend: {{ .Values.import.suspend }}
  schedule: {{ .Values.import.schedule }}
  jobTemplate:
    metadata:
      name: import-job
    spec:
      template:
        spec:
          containers:
          - image: curlimages/curl
            name: import-job
            args:
            - "$(GDP_INT_IMPORT_URL)"
            {{- with .Values.import.env}}
            env:
              {{- toYaml . | nindent 12}}
            {{- end}}
          restartPolicy: Never
I want to change spec.suspend from the command line. If I set it from true to false it works; suspend is set to false:
helm upgrade --reuse-values --set import.suspend=false gdp gdp-api
But if I try to set it from false to true, the value of suspend does not change to true:
helm upgrade --reuse-values --set import.suspend='true' gdp gdp-api
NAME                SCHEDULE     SUSPEND   ACTIVE   LAST SCHEDULE   AGE
gdp-api-importjob   0 0 31 2 *   False     0        7h32m           3d7h
Why is this the case?

helm chart getting secrets and configmap values using envFrom

I'm trying to inject env vars into my Helm chart deployment file. My values file looks like this:
values.yaml
envFrom:
  - configMapRef:
      name: my-config
  - secretRef:
      name: my-secret
I want to iterate through the Secret and ConfigMap values. This is what I did in the deployment.yaml file:
envFrom:
{{- range $item := .Values.envFrom }}
{{- $item | toYaml | nindent 14 }}
{{- end }}
But I didn't get the desired result.
You can directly use the defined value like:
...
    envFrom:
      {{- toYaml .Values.envFrom | nindent 6 }}
...
Or, instead of using range, you can use with.
Here is an example:
values.yaml:
envFrom:
  - configMapRef:
      name: my-config
  - secretRef:
      name: my-secret
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
  namespace: test
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      # {{- with .Values.envFrom }} can go here instead if you don't
      # want envFrom to appear in this container when envFrom
      # is not defined in values.yaml.
      # If you want to do that, remove the one below.
      envFrom:
        {{- with .Values.envFrom }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
  restartPolicy: Never
The output is:
c[_] > helm template test .
---
# Source: test/templates/test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
  namespace: test
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: my-config
        - secretRef:
            name: my-secret
  restartPolicy: Never

Helm Template iterating over map to create multiple jobs

I'm trying to iterate over a map in a Helm chart to create multiple Kubernetes CronJobs. Since I had trouble generating multiple manifests from a single template, I used '---' to separate the manifests; otherwise it kept generating only one manifest.
{{- range $k, $job := .Values.Jobs }}
{{- if $job.enabled }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $job.name }}
  namespace: {{ $.Release.Namespace }}
spec:
  schedule: {{ $job.schedule }}
  startingDeadlineSeconds: xxx
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: x
  failedJobsHistoryLimit: x
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: {{ $job.name }}
            image: {{ $.Values.cronJobImage }}
            command:
            - /bin/sh
            - -c
            - curl {{ $.Values.schedulerBaseUrl }}/{{ $job.url }}
          restartPolicy: Never
---
{{- end }}
{{ end }}
values.yaml
Jobs:
  - name: "xxx-job"
    enabled: true
    schedule: "00 18 * * *"
    url: "jobs/xxx"
  - name: "xxx-job"
    enabled: true
    schedule: "00 18 * * *"
    url: "jobs/xxx"
This way it works and generates all the Jobs defined in values.yaml. I was wondering, is there a better way to do this?
I have the same situation and am having trouble writing tests; we are using https://github.com/quintush/helm-unittest/blob/master/DOCUMENT.md for unit tests.
The problem is: which documentIndex do we have to use for each separated manifest? For example, in the case above, iterating four times, two of the cases will fail!
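For reference, helm-unittest lets a test case target one of the documents emitted by a multi-document template via documentIndex; documents are counted in the order they are actually rendered (split on ---), so a job with enabled: false produces no document at all. A rough sketch (the file and job names here are made up):
suite: cronjob template
templates:
  - cronjob.yaml            # hypothetical template file name
tests:
  - it: renders the first enabled job
    documentIndex: 0        # index among the documents actually rendered
    asserts:
      - equal:
          path: metadata.name
          value: xxx-job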

{{ If }} clause inside of range scope doesn't see values

The task is to range over the workers collection and, if the current worker has autoscaling.enabled=true, create an HPA for it.
I've tried to compare .autoscaling.enabled to "true" but it returned "error calling eq: incompatible types for comparison". Here people say that it actually means that .autoscaling.enabled is nil. So {{ if .autoscaling.enabled }} somehow doesn't see the variable and assumes it doesn't exist.
Values:
...
workers:
  - name: worker1
    command: somecommand1
    memoryRequest: 500Mi
    memoryLimit: 1400Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: worker2
    command: somecommand2
    memoryRequest: 512Mi
    memoryLimit: 1300Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: workerWithAutoscaling
    command: somecommand3
    memoryRequest: 600Mi
    memoryLimit: 2048Mi
    cpuRequest: 150m
    cpuLimit: 400m
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 5
      targetCPUUtilization: 50
      targetMemoryUtilization: 50
...
template:
...
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    ...
  name: "hpa-{{ .name }}-{{ $.Release.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
  {{- with .targetCPUUtilization}}
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: {{ . }}
  {{- end }}
  {{- with .targetMemoryUtilization}}
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: {{ . }}
  {{- end }}
---
{{- end }}
{{- end }}
I expect the manifest for one hpa that targets workerWithAutoscaling, but the actual output is totally empty.
Your use of {{- range .Values.workers }} and {{- if .autoscaling.enabled }} is fine. You are not getting any values because .minReplicas, .maxReplicas, etc, are inside .autoscaling scope.
See Modifying scope using with
Adding {{- with .autoscaling}} will solve the issue.
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-{{ .name }}-{{ $.Release.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  {{- with .autoscaling}}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: {{ .targetCPUUtilization}}
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: {{ .targetMemoryUtilization}}
  {{- end }}
{{- end }}
{{- end }}
helm template .
---
# Source: templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-workerWithAutoscaling-release-name"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: workerWithAutoscaling
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageUtilization: 50

Helm Charts - can you dynamically generate n StatefulSets?

I have an application that requires a configurable number of master nodes and replicas. Is there any way to dynamically generate n StatefulSets, where n is the number of master nodes I have? The number of master nodes is currently set in values.yaml.
Yes, it is possible with the until function.
values.yaml:
masterCount: 5
templates/statefulset.yaml:
{{ range $k, $v := until ( .Values.masterCount | int) }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-{{ $v }}
spec:
  serviceName: "nginx-{{ $v }}"
  replicas: 3
  selector:
    matchLabels:
      app: nginx-{{ $v }}
  template:
    metadata:
      labels:
        app: nginx-{{ $v }}
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
{{ end }}
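With masterCount: 5, until ( .Values.masterCount | int) yields the list [0 1 2 3 4], so this renders five StatefulSets named nginx-0 through nginx-4. Note that the headless Services referenced by serviceName are not created by this template; they would need to be defined separately (for example in a similar loop).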