Transform a helm dict into a list - kubernetes-helm

I'm using a Helm chart to control what environment variables are set for a certain container in a deployment.
In my Values.yaml, I have an entry called env which is a dictionary:
image:
  repository: xxxx.yyyyy.com/myimage
  pullPolicy: IfNotPresent

# Environment variables that will be passed to the container.
env: {}
Now, I'll pass variables to the env dict using --set:
helm upgrade mydeployment chart --set env.VARIABLE=test
However, this must be transformed into a list to adhere to Kubernetes yaml:
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          # This should come from that dict
          env:
            - name: VARIABLE
              value: "test"
I don't know how to use the template language from Helm (sprig / go) to achieve that. Is it even possible?

To iterate through the map, the core Go text/template language provides a range keyword that can iterate through maps or arrays.
{{ range $key, $value := .Values.env }}
...
{{ end }}
Inside of this you can put arbitrary text. Helm doesn't require this to be any particular kind of YAML construct, just so long as the final result is valid YAML. For this setup a typical loop would look like
env:
{{- range $key, $value := .Values.env }}
  - name: {{ quote $key }}
    value: {{ quote $value }}
{{- end }}
You do need to be careful with indentation here. As a rule of thumb, it usually works to include a - "swallow whitespace" indicator inside the opening {{ and not to include one inside the closing }}. The - name: must be indented at least as far as the env: above it (ignoring the range line), and value: must be aligned with name:. I might put all of the template-language lines (the range and end) at the first column, even when they're embedded in a more deeply nested structure.
spec:
  template:
    spec:
      containers:
        - name: {{ template "chart.fullname" . }}
          env:
{{- range $key, $value := .Values.env }}
            - name: {{ quote $key }}
              value: {{ quote $value }}
{{- end }}
          image: {{ .Values.registry }}/{{ .Values.image }}:{{ .Values.tag }}
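With the --set env.VARIABLE=test from the question, the loop renders the desired list; quote wraps both fields in double quotes:

env:
  - name: "VARIABLE"
    value: "test"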

Related

How to add a PersistentVolumeClaim to a deployment running GitLab AutoDevops?

What am I trying to achieve?
We are using a self-hosted GitLab instance and use GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) is able to access (read only) files stored on the Kubernetes cluster's node itself.
What have I tried?
Created a (hostPath) PersistentVolume (PV) on our cluster
Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim")
Now GitLab AutoDevops uses a default helm chart to deploy the applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if found). So my line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.
Therefore I modified the deployment.yaml file that can be found here. As you can see in the following code-snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "trackableappname" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "appname" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "appname" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "appname" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: {{ template "imagename" . }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
{{- if .Values.postgresql.managed }}
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: password
        - name: POSTGRES_HOST
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: privateIP
{{- end }}
        - name: DATABASE_URL
          value: {{ .Values.application.database_url | quote }}
        - name: GITLAB_ENVIRONMENT_NAME
          value: {{ .Values.gitlab.envName | quote }}
        - name: GITLAB_ENVIRONMENT_URL
          value: {{ .Values.gitlab.envURL | quote }}
        ports:
        - name: "{{ .Values.service.name }}"
          containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
        volumeMounts:
          - mountPath: /data
            name: test-pvc
      volumes:
        - name: test-pvc
          persistentVolumeClaim:
            claimName: test-api-claim
What is the problem?
Now when I trigger the Pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:
Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key
Line 71 in the script refers to the valueFrom.secretKeyRef.name in the yaml:
- name: POSTGRES_HOST
  valueFrom:
    secretKeyRef:
      name: app-postgres
      key: privateIP
The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble).
I am not using tabs in the yaml file and I double checked the indentation.
Two questions
Could there be something wrong with my yaml?
Does anyone know of another solution to achieve my desired behavior? (adding PVC to the deployment so that pods actually use it?)
General information
We use GitLab EE 13.12.11
For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7
Thanks in advance and have a nice day!
It seems that adding persistence is now supported in the default helm chart.
Check the pvc.yaml and deployment.yaml.
Given that, it should be enough to edit values in .gitlab/auto-deploy-values.yaml to meet your needs. Check defaults in values.yaml for more context.
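For example, a .gitlab/auto-deploy-values.yaml enabling a persistent volume might look roughly like this. This is only a sketch: the key names used here (persistence.enabled, persistence.volumes with mount and claim sub-keys) are an assumption and should be verified against the chart's values.yaml linked above:

persistence:
  enabled: true
  volumes:
    - name: data            # assumed volume name
      mount:
        path: /data         # where the PVC is mounted in the container
      claim:
        accessMode: ReadWriteOnce
        size: 8Gi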

Helm Chart environment from values file

I have the following values file:
MYVAR: 12123
MYVAR2: 214123
I want to iterate over them and use them as env variables in my deployment template. I tried this:
env:
{{- range .Values.examplemap }}
  - name: {{ .name }}
    value: {{ .value }}
{{- end }}
To iterate over a map in Helm, you can put this in values.yaml:
extraEnvs:
  - name: ENV_NAME_1
    value: value123
  - name: ENV_NAME_2
    value: value123
Then in your template, iterate over extraEnvs like this:
extraEnvs:
{{- range .Values.extraEnvs }}
  - name: {{ .name | quote }}
    value: {{ .value | quote }}
{{- end }}
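With that list in values.yaml, the loop renders each entry with both fields quoted:

extraEnvs:
  - name: "ENV_NAME_1"
    value: "value123"
  - name: "ENV_NAME_2"
    value: "value123"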
In the core Go text/template language, the range operator can iterate over either a list or a map. There's specific syntax to assign the key-value pairs in a map to local variables:
env:
{{- range $k, $v := .Values.examplemap }}
  - name: {{ $k }}
    value: {{ $v }}
{{- end }}
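Applied to the flat map from the question (MYVAR, MYVAR2), this renders each key/value pair as an env entry. In practice you would likely pipe both through quote, since Kubernetes expects env values to be strings:

env:
  - name: MYVAR
    value: 12123
  - name: MYVAR2
    value: 214123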

Helm iterate over keys

I'm trying to iterate over a list of secrets in my values file and mount them as env variables on the pods, but I'm having a hard time with it.
The helm template command, which renders the sample template, lists the keys and values as I expect, but when I deploy, only the name key gets mounted on the pods and nothing after that, like the valueFrom and secretKeyRef. Really appreciate any help on this :)
**DEPLOYMENT YAML**
{{- if $root.Values.secrets }}
{{- range $secrets := $root.Values.secrets }}
{{- range $data := $secrets.data }}
- name: {{ $data.name }}
  valuefrom:
    secretKeyRef:
      name: {{ $secrets.name }}
      key: {{ $data.name }}
{{ end }}
{{ end }}
{{ end }}
**VALUES YAML**
secrets:
  - name: aws-secrets
    data:
      - key: "secret1"
        name: DATABASE_HOST
      - key: "secret2"
        name: DATABASE_NAME
I fat-fingered it; it should be "valueFrom". 🤦
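For completeness, the corrected deployment fragment:

- name: {{ $data.name }}
  valueFrom:
    secretKeyRef:
      name: {{ $secrets.name }}
      key: {{ $data.name }}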

Helm - Templating variables in values.yaml

I'm trying to template variables from a map inside the values.yaml into my final Kubernetes ConfigMap YAML.
I've read through https://github.com/helm/helm/issues/2492 and https://helm.sh/docs/chart_template_guide/ but can't seem to find an answer.
For some context, this is roughly what I'm trying to do:
values.yaml
config:
  key1: value
  key2: value-{{ .Release.Name }}
configmap.yaml
kind: ConfigMap
data:
  config-file: |
    {{- range $key, $value := .Values.config }}
    {{ $key }} = {{ $value }}
    {{- end }}
Where the desired output would be:
helm template --name v1 mychart/

kind: ConfigMap
data:
  config-file: |
    key1 = value
    key2 = value-v1
I've tried a few variations using template functions and pipelining, but to no avail:
{{ $key }} = {{ tpl $value . }}
{{ $key }} = {{ $value | tpl . }}
{{ $key }} = {{ tpl $value $ }}
The above would also have worked this way:
values.yaml
config:
  key1: "value"
  key2: "value-{{ .Release.Name }}"
configmap.yaml
kind: ConfigMap
data:
  config-file: |
    {{- range $key, $value := .Values.config }}
    {{ $key }} = {{ tpl $value $ }}
    {{- end }}
What I changed: I put the values in quotes in values.yaml and used the tpl function in the ConfigMap.
I'll refer to the question's title regarding templating variables in Helm and suggest another option to use in values.yaml: YAML anchors.
Docs reference
As written in here:
The YAML spec provides a way to store a reference to a value, and later refer to that value by reference. YAML refers to this as "anchoring":
coffee: "yes, please"
favorite: &favoriteCoffee "Cappucino"
coffees:
  - Latte
  - *favoriteCoffee
  - Espresso
In the above, &favoriteCoffee sets a reference to Cappuccino.
Later, that reference is used as *favoriteCoffee.
So coffees becomes Latte, Cappuccino, Espresso.
A more practical example
Referring to a common image setup (Registry and PullPolicy) across the values.yaml.
Notice how the default values are set at Global.Image, next to the reference definitions, which start with &:
Global:
  Image:
    Registry: &global-docker-registry "12345678910.dkr.ecr.us-west-2.amazonaws.com" # <--- Default value
    PullPolicy: &global-pull-policy "IfNotPresent" # <--- Default value

Nginx:
  Image:
    Registry: *global-docker-registry
    PullPolicy: *global-pull-policy
  Version: 1.21.4
  Port: 80

MySql:
  Image:
    Registry: *global-docker-registry
    PullPolicy: *global-pull-policy
  Name: mysql
  Version: 8.0.27
  Port: 3306
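Since anchors are resolved when the YAML is parsed, the templates only ever see the final values. A fragment consuming the structure above (a sketch; names assumed from this example) would be:

image: {{ .Values.Nginx.Image.Registry }}/nginx:{{ .Values.Nginx.Version }}
imagePullPolicy: {{ .Values.Nginx.Image.PullPolicy }}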
Managed to solve this using the following syntax:
configmap.yaml
kind: ConfigMap
data:
  config-file: |
    {{- range $key, $value := .Values.config }}
    {{ $key }} = {{ tpl ($value | toString) $ }}
    {{- end }}
There is an ongoing fight in this PR here about this topic.
I know that it's possible now, but it requires the chart to be maintained in-house (e.g. Amrut's answer).
Let's summarize: to have templating in values.yaml, these are the available options:
- Helm may support it in the future (watch this thread about the topic)
- use the tpl function inside the chart
- use another tool on top of Helm: Terraform or helmfile

How to pass dynamic arguments to a helm chart that runs a job

I'd like to allow our developers to pass dynamic arguments to a helm template (Kubernetes job). Currently my arguments in the helm template are somewhat static (apart from certain values) and look like this
Args:
  --arg1
  value1
  --arg2
  value2
  --sql-cmd
  select * from db
If I were to run a task using the docker container without Kubernetes, I would pass parameters like so:
docker run my-image --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
Is there any way to templatize arguments in a helm chart in such a way that any number of arguments could be passed to a template?
For example.
cat values.yaml
...
arguments: --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
...
or
cat values.yaml
...
arguments: --arg3 value3
...
I've tried a few approaches but was not successful. Here is one example:
Args:
{{ range .Values.arguments }}
{{ . }}
{{ end }}
Yes. In values.yaml you need to give it an array instead of a space delimited string.
cat values.yaml
...
arguments: ['--arg3', 'value3', '--arg2', 'value2']
...
or
cat values.yaml
...
arguments:
  - --arg3
  - value3
  - --arg2
  - value2
...
and then, like you mentioned, in the template this should do it:
args:
{{ range .Values.arguments }}
  - {{ . }}
{{ end }}
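With the list above, that loop renders:

args:
  - --arg3
  - value3
  - --arg2
  - value2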
If you want to override the arguments on the command line you can pass an array with --set like this:
--set arguments={--arg1, value1, --arg2, value2, --arg3, value3, ....}
In your values file define arguments as:
extraArgs:
  argument1: value1
  argument2: value2
  booleanArg1:
In your template do:
args:
{{- range $key, $value := .Values.extraArgs }}
{{- if $value }}
  - --{{ $key }}={{ $value }}
{{- else }}
  - --{{ $key }}
{{- end }}
{{- end }}
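With the extraArgs map above, this renders (Go templates iterate map keys in sorted order):

args:
  - --argument1=value1
  - --argument2=value2
  - --booleanArg1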
Rico's answer needed to be improved.
Using the previous example I received errors like:
templates/deployment.yaml: error converting YAML to JSON: yaml
or
failed to get versionedObject: unable to convert unstructured object to apps/v1beta2, Kind=Deployment: cannot restore slice from string
This is my working setup with commas in the elements (the vertical format for the list is more readable):
cat values.yaml
...
arguments: [
  "--arg3,",
  "value3,",
  "--arg2,",
  "value2,",
]
...
In the template this should do it:
args: [
  {{ range .Values.arguments }}
  {{ . }}
  {{ end }}
]
Because of some limitations, I had to work with split and use a delimiter. In my case:
deployment.yaml :
{{- if .Values.deployment.args }}
args:
{{- range (split " " .Values.deployment.args) }}
  - {{ . }}
{{- end }}
{{- end }}
When using --set:
helm install --set deployment.args="--inspect server.js" ...
this results in:
- args:
  - --inspect
  - server.js
The arguments format needs to be kept consistent in such cases.
Here is my case and it works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
    instance: test
spec:
  replicas: {{ .Values.master.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.app.name }}
      instance: test
  template:
    metadata:
      labels:
        app: {{ .Values.app.name }}
        instance: test
    spec:
      imagePullSecrets:
        - name: gcr-pull-secret
      containers:
        - name: {{ .Values.app.name }}
          image: {{ .Values.app.image }}
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          args:
            [
              "--users={{ int .Values.cmd.users }}",
              "--spawn-rate={{ int .Values.cmd.rate }}",
              "--host={{ .Values.cmd.host }}",
              "--logfile={{ .Values.cmd.logfile }}",
              "--{{ .Values.cmd.role }}"
            ]
          ports:
            - containerPort: {{ .Values.container.port }}
          resources:
            requests:
              memory: {{ .Values.container.requests.memory }}
              cpu: {{ .Values.container.requests.cpu }}
            limits:
              memory: {{ .Values.container.limits.memory }}
              cpu: {{ .Values.container.limits.cpu }}
Unfortunately, the following mixed args format does not work within the container construct:
mycommand -ArgA valA --ArgB valB --ArgBool1 -ArgBool2 --ArgC=valC
The expected correct format for the above command is:
mycommand --ArgA=valA --ArgB=valB --ArgC=valC --ArgBool1 --ArgBool2
This can be achieved with the following constructs:
# Dockerfile last line
ENTRYPOINT ["mycommand"]

# deployment.yaml
containers:
  - name: {{ .Values.app.name }}
    image: {{ .Values.app.image }}
    args: [
      "--ArgA={{ .Values.cmd.ArgA }}",
      "--ArgB={{ .Values.cmd.ArgB }}",
      "--ArgC={{ .Values.cmd.ArgC }}",
      "--{{ .Values.cmd.ArgBool1 }}",
      "--{{ .Values.cmd.ArgBool2 }}" ]

# values.yaml
cmd:
  ArgA: valA
  ArgB: valB
  ArgC: valC
  ArgBool1: "ArgBool1"
  ArgBool2: "ArgBool2"
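With those values, the rendered container args come out in the consistent format shown above:

args: ["--ArgA=valA", "--ArgB=valB", "--ArgC=valC", "--ArgBool1", "--ArgBool2"]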
helm install --name "airflow" stable/airflow --set secrets.database=mydatabase,secrets.password=mypassword
So this is the helm chart in question: https://github.com/helm/charts/tree/master/stable/airflow
Now I want to overwrite the default values secrets.database and secrets.password in the helm chart, so I use the --set argument with key=value pairs separated by commas:
helm install --name "<name for your chart>" <chart> --set key0=value0,key1=value1,key2=value2,key3=value3
Did you try this?
{{ range .Values.arguments }}
{{ . | quote }}
{{ end }}
Acid R's key/value solution was the only thing that worked for me.
I ended up with this:
values.yaml
arguments:
  url1: 'http://something1.example.com'
  url2: 'http://something2.example.com'
  url3: 'http://something3.example.com'
  url4: 'http://something3.example.com'
And in my template:
args:
{{- range $key, $value := .Values.arguments }}
  - --url={{ $value }}
{{- end }}
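With the arguments map above, this renders one --url flag per entry (again, map keys iterate in sorted order):

args:
  - --url=http://something1.example.com
  - --url=http://something2.example.com
  - --url=http://something3.example.com
  - --url=http://something3.example.com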