Helm Template iterating over map to create multiple jobs

I'm trying to iterate over a map in a Helm chart to create multiple Kubernetes CronJobs. Since I had trouble generating multiple manifests from a single template, I used '---' to separate the manifests; otherwise only a single manifest was generated.
{{- range $k, $job := .Values.Jobs }}
{{- if $job.enabled }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ $job.name }}
  namespace: {{ $.Release.Namespace }}
spec:
  schedule: {{ $job.schedule }}
  startingDeadlineSeconds: xxx
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: x
  failedJobsHistoryLimit: x
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: {{ $job.name }}
              image: {{ $.Values.cronJobImage }}
              command:
                - /bin/sh
                - -c
                - curl {{ $.Values.schedulerBaseUrl }}/{{ $job.url }}
          restartPolicy: Never
---
{{- end }}
{{ end }}
values.yaml
Jobs:
  - name: "xxx-job"
    enabled: true
    schedule: "00 18 * * *"
    url: "jobs/xxx"
  - name: "xxx-job"
    enabled: true
    schedule: "00 18 * * *"
    url: "jobs/xxx"
This way it works and generates all the jobs defined in values.yaml. I was wondering, is there any better way to do this?

I have the same situation and am having trouble writing tests; we are using https://github.com/quintush/helm-unittest/blob/master/DOCUMENT.md for unit tests.
The problem is: which document index do we have to use in this case to select each manifest? For example, in the case above it iterates four times, and in two cases it will fail!
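For reference, here is a minimal sketch of what such a test could look like with helm-unittest, assuming the template file is cronjob.yaml and using hypothetical job names; each rendered manifest is selected with a zero-based documentIndex on the assertion, counted in the order the enabled jobs are emitted:
suite: cronjobs
templates:
  - cronjob.yaml
tests:
  - it: renders one CronJob per enabled job
    set:
      Jobs:
        - name: first-job
          enabled: true
          schedule: "00 18 * * *"
          url: "jobs/first"
        - name: second-job
          enabled: true
          schedule: "00 19 * * *"
          url: "jobs/second"
    asserts:
      # count may need adjusting if the trailing '---' yields an empty document
      - hasDocuments:
          count: 2
      - equal:
          path: metadata.name
          value: first-job
        documentIndex: 0
      - equal:
          path: metadata.name
          value: second-job
        documentIndex: 1
Note that disabling a job will likely shift the indices of the documents after it, so tests that pin a fixed documentIndex are safer when they set the Jobs list explicitly, as above, rather than relying on chart defaults.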

Related

Kubernetes CronJob Helm template can not set value to true

I have a Helm template for a CronJob in a chart:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ include "gdp-chart.fullname" . }}-importjob
  labels:
    {{- include "gdp-chart.labels" . | nindent 4 }}
spec:
  suspend: {{ .Values.import.suspend }}
  schedule: {{ .Values.import.schedule }}
  jobTemplate:
    metadata:
      name: import-job
    spec:
      template:
        spec:
          containers:
          - image: curlimages/curl
            name: import-job
            args:
            - "$(GDP_INT_IMPORT_URL)"
            {{- with .Values.import.env }}
            env:
            {{- toYaml . | nindent 12 }}
            {{- end }}
          restartPolicy: Never
I want to change spec.suspend from the command line. If I set it from true to false, it works; suspend is set to false:
helm upgrade --reuse-values --set import.suspend=false gdp gdp-api
but if I try to set it from false to true, the value of suspend does not change to true:
helm upgrade --reuse-values --set import.suspend='true' gdp gdp-api
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
gdp-api-importjob 0 0 31 2 * False 0 7h32m 3d7h
Why is this the case?
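One way to narrow this down is to compare what Helm has actually stored for the release with what the chart renders locally for the same override; a rough sketch, assuming Helm 3, a chart directory of ./gdp-api, and a template file named templates/cronjob.yaml (both names are guesses):
# values supplied via --set so far (what --reuse-values carries forward)
helm get values gdp
# full computed values, including chart defaults
helm get values gdp --all
# render locally with the override to check how suspend comes out
helm template gdp ./gdp-api --set import.suspend=true --show-only templates/cronjob.yaml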

helm helpers file can't evaluate field type interface array/string

I am rather new to Helm and am trying to create a chart, but I am running into values not carrying over from the values.yaml file into my generated chart.
Here is my values.yaml:
apiVersion: security.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
  namespace: ns-01
spec:
  selector:
    matchLabels:
      app: app-label
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        - user1
        - user2
Then here is my Helm template:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: {{ .Values.spec.selector.matchLabels.app }}
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences: |-
        {{- range .Values.spec.jwtRules.audiences }}
        - {{ . | title | quote }}
        {{ end }}
---
I also have a helpers file.
_helpers.tpl
{{/* vim: set filetype=mustache: */}}
{{- define "jwtRules.audiences" -}}
{{- range $.Values.spec.jwtRules.audiences }}
audiences:
- {{ . | quote }}
{{- end }}
{{- end }}
The error it's producing: at <.Values.spec.jwtRules.audiences>: can't evaluate field audiences in type interface {}
This one is simple - you don't have spec.jwtRules.audiences in your values file! jwtRules contains an array, so you'll have to use some index or iterate over it. Also, neither your indentation nor the use of |- for audiences looks correct; per the docs it should be an array of strings.
So I came up with this example (your values are unchanged):
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: {{ .Values.spec.selector.matchLabels.app }}
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        {{- with (first .Values.spec.jwtRules) }}
        {{- range .audiences }}
        - {{ . | title | quote -}}
        {{- end }}
        {{- end }}
renders into:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: app-label
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        - "User1"
        - "User2"
In this case it uses the first element of the array.
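If jwtRules can hold more than one entry, the "iterate over it" option mentioned above could look roughly like this sketch (it assumes issuer and jwksUri live in the values rather than being hard-coded):
  jwtRules:
    {{- range .Values.spec.jwtRules }}
    - issuer: {{ .issuer | quote }}
      jwksUri: {{ .jwksUri | quote }}
      forwardOriginalToken: {{ .forwardOriginalToken }}
      audiences:
        {{- range .audiences }}
        - {{ . | title | quote }}
        {{- end }}
    {{- end }}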
Thank you #andrew. I also came to a simple solution but would like feedback on it.
I removed the helpers file, then modified my Helm chart as follows.
values.yaml (kept the same as above)
Helm chart:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: {{ .Values.spec.selector.matchLabels.app }}
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      {{- with index .Values.spec.jwtRules 0 }}
      audiences:
        {{- range $a := .audiences }}
        - {{ $a -}}
        {{ end }}
      {{ end }}
---
This produces the following:
apiVersion: networking.istio.io/v1alpha2
kind: RequestAuthentication
metadata:
  name: name01
spec:
  selector:
    matchLabels:
      app: app-label
  jwtRules:
    - issuer: foo
      jwksUri: bar
      forwardOriginalToken: true
      audiences:
        - user1
        - user2
Should I continue to use a helpers file?
Thanks again for all the help.
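If you do keep a helpers file, the named template has to index or iterate the jwtRules array just like the inline version; a minimal sketch, with an illustrative helper name and an include call whose nindent you would adjust to the surrounding indentation:
{{/* in _helpers.tpl; the name "chart.audiences" is illustrative */}}
{{- define "chart.audiences" -}}
audiences:
  {{- range (index .Values.spec.jwtRules 0).audiences }}
  - {{ . | quote }}
  {{- end }}
{{- end }}

{{/* in the template, inside the first jwtRules entry */}}
{{- include "chart.audiences" . | nindent 6 }}
Whether that is worth it over the inline with/index version is mostly a question of how often the block is reused across templates.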

{{ If }} clause inside of range scope doesn't see values

The task is to range over the workers collection and, if the current worker has autoscaling.enabled=true, create an HPA for it.
I've tried to compare .autoscaling.enabled to "true" but it returned "error calling eq: incompatible types for comparison". Here people say that this actually means .autoscaling.enabled is nil. So {{ if .autoscaling.enabled }} somehow doesn't see the variable and assumes it doesn't exist.
Values:
...
workers:
  - name: worker1
    command: somecommand1
    memoryRequest: 500Mi
    memoryLimit: 1400Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: worker2
    command: somecommand2
    memoryRequest: 512Mi
    memoryLimit: 1300Mi
    cpuRequest: 50m
    cpuLimit: 150m
    autoscaling:
      enabled: false
  - name: workerWithAutoscaling
    command: somecommand3
    memoryRequest: 600Mi
    memoryLimit: 2048Mi
    cpuRequest: 150m
    cpuLimit: 400m
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 5
      targetCPUUtilization: 50
      targetMemoryUtilization: 50
...
template:
...
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
    ...
  name: "hpa-{{ .name }}-{{ $.Realeas.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
    {{- with .targetCPUUtilization }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ . }}
    {{- end }}
    {{- with .targetMemoryUtilization }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ . }}
    {{- end }}
---
{{- end }}
{{- end }}
I expect a manifest for one HPA that targets workerWithAutoscaling, but the actual output is totally empty.
Your use of {{- range .Values.workers }} and {{- if .autoscaling.enabled }} is fine. You are not getting any values because .minReplicas, .maxReplicas, etc. are inside the .autoscaling scope.
See Modifying scope using with.
Adding {{- with .autoscaling }} will solve the issue.
{{- range .Values.workers }}
{{- if .autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-{{ .name }}-{{ $.Release.Name }}"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ .name }}
  {{- with .autoscaling }}
  minReplicas: {{ .minReplicas }}
  maxReplicas: {{ .maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .targetCPUUtilization }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .targetMemoryUtilization }}
  {{- end }}
{{- end }}
{{- end }}
helm template .
---
# Source: templates/hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  labels:
  name: "hpa-workerWithAutoscaling-release-name"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: workerWithAutoscaling
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 50
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 50
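For completeness, the with block is not strictly required; since the dot inside the range is the worker itself, the nested fields can also be referenced through their parent key directly, e.g. (only the relevant lines shown as a sketch):
  minReplicas: {{ .autoscaling.minReplicas }}
  maxReplicas: {{ .autoscaling.maxReplicas }}
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .autoscaling.targetCPUUtilization }}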

CronJob for running a ConfigMap

I am trying to write a CronJob that executes a shell script from a ConfigMap for Kafka.
My intention is to reassign partitions at specific intervals.
However, I am facing issues with it. I am very new to this; any help would be appreciated.
cron-job.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: partition-cron
spec:
  schedule: "*/10 * * * *"
  startingDeadlineSeconds: 20
  successfulJobsHistoryLimit: 5
  jobTemplate:
    spec:
      completions: 2
      template:
        spec:
          containers:
            - name: partition-reassignment
              image: busybox
              command: ["/configmap/runtimeConfig.sh"]
              volumeMounts:
                - name: configmap
                  mountPath: /configmap
          restartPolicy: Never
          volumes:
            - name: configmap
              configMap:
                name: configmap-config
configmap-config.yaml
{{- if .Values.topics -}}
{{- $zk := include "zookeeper.url" . -}}
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: {{ template "kafka.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
  name: {{ template "kafka.fullname" . }}-config
data:
  runtimeConfig.sh: |
    #!/bin/bash
    set -e
    cd /usr/bin
    until kafka-configs --zookeeper {{ $zk }} --entity-type topics --describe || (( count++ >= 6 ))
    do
      echo "Waiting for Zookeeper..."
      sleep 20
    done
    until nc -z {{ template "kafka.fullname" . }} 9092 || (( retries++ >= 6 ))
    do
      echo "Waiting for Kafka..."
      sleep 20
    done
    echo "Applying runtime configuration using {{ .Values.image }}:{{ .Values.imageTag }}"
    {{- range $n, $topic := .Values.topics }}
    {{- if and $topic.partitions $topic.replicationFactor $topic.reassignPartitions }}
    cat << EOF > {{ $topic.name }}-increase-replication-factor.json
    {"version":1, "partitions":[
    {{- $partitions := (int $topic.partitions) }}
    {{- $replicas := (int $topic.replicationFactor) }}
    {{- range $i := until $partitions }}
    {"topic":"{{ $topic.name }}","partition":{{ $i }},"replicas":[{{- range $j := until $replicas }}{{ $j }}{{- if ne $j (sub $replicas 1) }},{{- end }}{{- end }}]}{{- if ne $i (sub $partitions 1) }},{{- end }}
    {{- end }}
    ]}
    EOF
    kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --execute
    kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --verify
    {{- end }}
    {{- end -}}
My intention is to run the runtimeConfig.sh script as a cron job at regular intervals for partition reassignment in Kafka.
I am not sure if my approach is correct.
Also, I have randomly put image: busybox in the cron-job.yaml file. I am not sure what I should be putting in there.
Information Part
$ kubectl get cronjobs
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
partition-cron */10 * * * * False 1 5m 12m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
elegant-hedgehog-metrics-server-58995fcf8b-2vzg6 1/1 Running 0 5d
my-kafka-0 1/1 Running 1 12m
my-kafka-1 1/1 Running 0 10m
my-kafka-2 1/1 Running 0 9m
my-kafka-config-644f815a-pbpl8 0/1 Completed 0 12m
my-kafka-zookeeper-0 1/1 Running 0 12m
partition-cron-1548672000-w728w 0/1 ContainerCreating 0 5m
$ kubectl logs partition-cron-1548672000-w728w
Error from server (BadRequest): container "partition-reassignment" in pod "partition-cron-1548672000-w728w" is waiting to start: ContainerCreating
Modified Cron Job YAML
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: partition-cron
spec:
  schedule: "*/5 * * * *"
  startingDeadlineSeconds: 20
  successfulJobsHistoryLimit: 5
  jobTemplate:
    spec:
      completions: 1
      template:
        spec:
          containers:
            - name: partition-reassignment
              image: busybox
              command: ["/configmap/runtimeConfig.sh"]
              volumeMounts:
                - name: configmap
                  mountPath: /configmap
          restartPolicy: Never
          volumes:
            - name: configmap
              configMap:
                name: {{ template "kafka.fullname" . }}-config
Now I am getting the status of the CronJob pods as ContainerCannotRun.
You've set the ConfigMap to name: {{ template "kafka.fullname" . }}-config but in the job you are mounting configmap-config. Unless you installed the Helm chart using configmap as the name of the release, that Job will never start.
One way to fix it would be to define the volume as:
volumes:
  - name: configmap
    configMap:
      name: {{ template "kafka.fullname" . }}-config

Helm Charts - can you dynamically generate n StatefulSets?

I have an application that requires a configurable number of master nodes and replicas. Is there any way to dynamically generate n StatefulSets, where n is the number of master nodes I have? The number of master nodes is currently set in values.yaml.
Yes, it is possible with the until function.
values.yaml:
masterCount: 5
templates/statefulset.yaml:
{{ range $k, $v := until ( .Values.masterCount | int) }}
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-{{ $v }}
spec:
  serviceName: "nginx-{{ $v }}"
  replicas: 3
  selector:
    matchLabels:
      app: nginx-{{ $v }}
  template:
    metadata:
      labels:
        app: nginx-{{ $v }}
    spec:
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
{{ end }}
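As a quick sanity check, until ( .Values.masterCount | int) produces the list 0 through masterCount-1, so rendering with a smaller count should emit one StatefulSet per index, for example:
helm template . --set masterCount=2
# expected: two StatefulSets, named nginx-0 and nginx-1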