Passing a block from values.yaml to the deployment with spaces - kubernetes-helm

I'm trying to pass the following block from values.yaml into the deployment, preserving the indentation, but I couldn't get it to work.
in values.yaml:

affinityNode: |
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodepool
            operator: In
            values:
            - loadbalancer-pool

in deployment.yaml:

{{- toYaml .Values.affinityNode }}

You could just use normal YAML key/value pairs instead of a multiline string in your values file:
values.yaml

affinityNode:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: nodepool
            operator: In
            values:
            - loadbalancer-pool

deployment.yaml

{{- with .Values.affinityNode }}
{{ toYaml . | nindent 8 }} # <- use nindent to fix indentation
{{- end }}
But if you want to keep things as they are, just add a fromYaml call:

{{- fromYaml .Values.affinityNode | toYaml }}
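Note that fromYaml parses the multiline string back into a map, so toYaml can re-emit it as structured YAML; the result still needs its indentation fixed at the point of use, just like in the first variant. A minimal sketch (the nindent width of 6 is an assumption that depends on where the block sits in your manifest):

{{- with fromYaml .Values.affinityNode }}
{{- toYaml . | nindent 6 }}
{{- end }}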

Related

combination of helper template and values in helm

I am new to Helm charts and I am building a Helm chart to deploy an app on Kubernetes. As part of it I have created a deployment template as below:
{{- $outer := . -}}
{{- range $index, $service := .Values.myservices }}
{{- with $outer }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $service.name }}
  labels:
    {{- include "myhelm.labels" $ | nindent 4 }}
spec:
  .
  .
  .
{{- end }}
{{- end }}
Here I am using a template, "myhelm.labels", which is defined in _helpers.tpl as below:
{{/*
Common labels
*/}}
{{- define "myhelm.labels" -}}
helm.sh/chart: {{ include "myhelm" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
I'd like to include some more labels, provided in values.yaml as below:
myservices:
  api:
    name: "com-api"
    labels:
      app: "com-api"
    selectorLabels:
      app: "com-cp"
    podAnnotations: {}
    container:
      image: "com-api"
      port: 24000
      name: "api"
    nodeSelector:
      app: "com-cp-api"
    affinity: {}
    tolerations: {}
  ui:
    name: "com-ui"
    labels:
      app: "com-ui"
    selectorLabels:
      app: "com-ui"
    podAnnotations: {}
    container:
      image: "com-ui"
      port: 23000
      name: "ui"
    nodeSelector:
      app: "com-cp-ui"
    affinity: {}
    tolerations: {}
Along with "myhelm.labels" (the common labels) I also want to include service-specific labels like $service.labels.
Please help me: how can I do it?
I am able to add specific labels one by one, like:

labels:
  {{- include "dlc-project-service-control-plane.labels" $ | nindent 4 }}
  app: {{ $service.labels.app }}
But I am looking for a solution that, when there are multiple labels under $service.labels in values.yaml, adds all of them with a single statement in the deployment template.
Please share a code snippet if you already know the solution; it helps.
Thanks
I was able to achieve it using toYaml; below is the code snippet:
{{- $outer := . -}}
{{- range $index, $service := .Values.myservices }}
{{- with $outer }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $service.name }}
  labels:
    {{- include "myhelm.labels" $ | nindent 4 }}
    {{- toYaml $service.labels | nindent 4 }}
spec:
  .
  .
  .
{{- end }}
{{- end }}
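For illustration, with the values above the api Deployment's metadata would render roughly like this (the chart name and version labels are assumed placeholders, since they depend on your Chart.yaml):

metadata:
  name: com-api
  labels:
    helm.sh/chart: myhelm-0.1.0
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    app: com-api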

Iterate over a YAML complex map in helm

I am using Helm v3 and trying to iterate over a complex object/map in a YAML definition file for a Kubernetes network policy, with the following content:
values.yaml:

networkPolicies:
  egress:
  - service: microservice-name
    destination:
    - podLabels:
        app=microservice-name
      namespaceLabels:
        company.com/microservices: microservice-name
    protocol: TCP
    ports:
    - 8444
In the k8s definition file I have this code:

egress-networkpolicy.yaml:
{{- range $v, $rule := .Values.networkPolicies.egress }}---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kong-to-{{ $rule.service }}-egress
  namespace: kong
  annotations:
    description: Kong egress policies
spec:
  podSelector:
    matchLabels:
      {{- range $label, $value := $rule.podSelector }}
      {{ $label }}: {{ $value }}
      {{- end }}
  egress:
  - to:
    {{- range $from := $rule.to }}
    - podSelector:
        matchLabels:
          {{- range $label, $value := $from.podLabels }}
          {{ $label }}: {{ $value }}
          {{- end }}
      {{- if has $from "namespaceLabels" }}
      namespaceSelector:
      {{- if eq (len $from.namespaceLabels) 0 }} {}
      {{- else }}
        matchLabels:
          {{- range $label, $value := $from.namespaceLabels }}
          {{ $label }}: {{ $value }}
          {{- end }}
      {{- end }}
      {{- end }}
    {{- end }}
  {{- if has $rule "ports" }}
  ports:
  {{- range $port := $rule.service.ports }}
  - protocol: {{ $port.protocol }}
  - port: {{ $port }}
  {{- end }}
  {{- end }}
  policyTypes:
  - Egress
{{ end -}}
Unfortunately, when I run helm template name-of-the-template it throws the following error:

❯ helm template name-of-the-template --debug
install.go:178: [debug] Original chart version: ""
install.go:199: [debug] CHART PATH: /Users/user/charts/name-of-the-template

Error: template: name-of-the-template/templates/egress-networkpolicy.yaml:34:13: executing "name-of-the-template/templates/egress-networkpolicy.yaml" at <has $rule "ports">: error calling has: Cannot find has on type string
helm.go:88: [debug] template: name-of-the-template/templates/egress-networkpolicy.yaml:34:13: executing "name-of-the-template/templates/egress-networkpolicy.yaml" at <has $rule "ports">: error calling has: Cannot find has on type string

I can't find the reason for Helm throwing this error at that line but not in similar code before it.
The has template function checks for membership in a list. In this context $rule is a mapping or dictionary (it is one of the items in the list under egress), and for that type you need to use hasKey instead.
{{- if hasKey $rule "ports" }}{{/* hasKey, not has */}}
ports:
...
{{- end }}
One thing that can simplify this slightly is to use the Go template with construct instead of if here. with acts just like if, except that it also rebinds the special variable . to the conditional value if it's truthy. Accessing an undefined key in a map is okay but returns nil, which is falsey. So in this context I might write:
{{- with $rule.ports }}
ports:
{{- range . }}
- protocol: {{ $rule.protocol }}
  port: {{ . }}
{{- end }}
{{- end }}
(But note that, as I've written it, the . on the third and fifth lines are different variables, and that this usage also changes the . that appears at the start of .Values or in template "name" .; in this little loop that's not going to be a practical problem.)
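With the example values above (where protocol sits on the rule itself, as in the values file), that with block would render roughly as:

ports:
- protocol: TCP
  port: 8444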

Using Helm helper.tpl to set repository and image from the values.yaml or Chart.yaml

I have a couple of charts for different products. In the first one, a helper was written to build out the repo, image name, and tag/version. That works, but as the other chart is quite different I've gone for a simpler approach, and it does not work. I get the error:
error calling include: template: MYChart/templates/_helpers.tpl:94:28: executing "getImageName" at <.Values.registryName>: nil pointer evaluating interface {}.registryName
This is the helper.
{{/*
This allows us to not have image: .Values.xxxx.ssss/.Values.xxx.xxx:.Values.ssss
in every single template.
*/}}
{{- define "imageName" -}}
{{- $registryName := .Values.registryName -}}
{{- $imageName := .Values.imageName -}}
{{- $tag := .Chart.AppVersion -}}
{{- printf "%s/%s:%s" $registryName $imageName $tag -}}
{{- end -}}
These are the values:

registry:
  registryName: "index.docker.io/myrepo"
  image_Name: "myimage"
Calling a value like the above in _helpers.tpl should work; there are plenty of examples that use this approach. What am I missing?
The template file:

{{- $root := . -}}
{{- $FullChartName := include "myapp.fullname" . -}}
{{- $ChartName := include "myapp.name" . -}}
{{- range $worker, $parameter := .Values.workerPods }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $parameter.name }}-worker
spec:
  replicas: {{ $parameter.replicas }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ $parameter.name }}-worker
      app.kubernetes.io/instance: {{ $root.Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ $parameter.name }}-worker
        app.kubernetes.io/instance: {{ $root.Release.Name }}
        autoscale: "true"
      annotations:
        {{- if $root.Values.worker.annotations }}
        {{ toYaml $root.Values.worker.annotations | indent 8 }}
        {{- end }}
    spec:
      imagePullSecrets:
        - name: myapp-registry-credentials
      containers:
        - name: {{ $parameter.name }}-worker
          image: {{ template "imageName" . }}
          imagePullPolicy: {{ $root.Values.worker.image.pullPolicy }}
          command: ["/bin/sh"]
          args: ["-c", "until /usr/bin/pg_isready -h $DATABASE_HOST; do sleep 2; done; bundle exec rake jobs:work"]
          {{- range $container, $containerResources := $root.Values.containers }}
          {{- if eq $container $parameter.size }}
          resources:
            {{- toYaml $containerResources.resources | nindent 12 }}
          {{- end }}
          {{- end }}
          envFrom:
            - configMapRef:
                name: common-env
            - secretRef:
                name: myapp-secrets
          volumeMounts:
            - name: mnt-data
              mountPath: "/mnt/data"
      volumes:
        - name: mnt-data
          persistentVolumeClaim:
            claimName: myapp-pvc
      {{- with $root.Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $root.Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with $root.Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
I also tried another approach and added the following to the Chart.yaml, but got a similar error. I'll be honest, I wasn't sure this would even work, but I'd be interested to hear others' thoughts.
annotations:
  image: "myimage"
  registry: "index.docker.io/myrepo"
And the helper looked like this.
{{/*
This allows us to not have image: .Values.xxxx.ssss/.Values.xxx.xxx:.Values.ssss
in every single template.
*/}}
{{- define "imageName" -}}
{{- $registryName := .Chart.Annotations.registry -}}
{{- $imageName := .Chart.Annotations.image -}}
{{- $tag := .Chart.AppVersion -}}
{{- printf "%s/%s:%s" $registryName $imageName $tag -}}
{{- end -}}
You're calling the template with the wrong parameter. Reducing the Helm template file to the bare minimum to demonstrate this:
{{- $root := . -}}
{{- range $worker, $parameter := .Values.workerPods }}
image: {{ template "imageName" . }}
imagePullPolicy: {{ $root.Values.worker.image.pullPolicy }}
{{- end }}
The standard Go text/template range statement rebinds the . variable (I believe to the same thing as $parameter). So when you call the imageName template, its parameter isn't the Helm root value but rather the current block from the values file; .Values is undefined there and returns nil; and then .Values.registryName is a lookup on nil, which produces the error you see.
One standard workaround is to save . to a variable outside the range loop and use that variable instead. In fact you already do this: the $root.Values.worker... reference on the following line should work correctly. You just need to change this at the point of call:
image: {{ template "imageName" $root }}
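If the helper ever needs per-worker values in addition to the root context, a common pattern is to pass a small dict instead (a sketch, not part of the original answer; the key names and the per-worker image field are assumptions):

image: {{ template "imageName" (dict "root" $root "worker" $parameter) }}

and inside the helper, read from both keys:

{{- define "imageName" -}}
{{- printf "%s/%s:%s" .root.Values.registry.registryName .worker.image .root.Chart.AppVersion -}}
{{- end -}}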

helm chart / go-template | Translate environment variables from string

I have a general Helm chart in my Kubernetes cluster that takes a multiline text field of environment variables (given as KEY=VALUE lines) and translates them into the deployment.yaml like this:
Inside the Rancher dialog: [screenshot of the multiline environment-variables field omitted]
In the deployment.yaml:
{{- if .Values.envAsMultiline }}
{{- range (split "\n" .Values.envAsMultiline) }}
- name: "{{ (split "=" .)._0 }}"
  value: "{{ (split "=" .)._1 }}"
{{- end }}
{{- end }}
This works fine so far. But the problem now is: if there is a "=" inside an environment variable's value (like in the JAVA_OPTS above), the value is cut off at the second "=" of the line:
JAVA_OPTS=-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512m
is translated to
-Xms1024m -Xmx2048m -XX:MetaspaceSize
The "=256M -XX:MaxMetaspaceSize=512m" is missing here.
How do I correct my deployment.yaml template accordingly?
Plan 1:
One of the simplest implementations: inject the YAML directly. Keep the env section in values.yaml in its final Kubernetes form, so you can write both plain key/value entries and ref-style entries exactly in the required format.
As follows:
values.yaml

env:
  - name: ENVIRONMENT1
    value: "testABC"
  - name: JAVA_OPTS
    value: "-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M"
  - name: TZ
    value: "Europe/Berlin"

deployment.yaml

containers:
  - name: {{ .Chart.Name }}
    env:
      {{ toYaml .Values.env | nindent xxx }}

(replace xxx with the actual indentation)
Plan 2:
Define env as a key/value map and render it by iterating over it:
values.yaml

env:
  ENVIRONMENT1: "testABC"
  JAVA_OPTS: "-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M"
  TZ: "Europe/Berlin"

deployment.yaml

containers:
  - name: {{ .Chart.Name }}
    env:
      {{- range $k, $v := .Values.env }}
      - name: {{ $k | quote }}
        value: {{ $v | quote }}
      {{- end }}
Plan 3:
If you want to keep your existing multiline format, you can do this:
values.yaml

env: |
  ENVIRONMENT1=testABC
  JAVA_OPTS=-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M
  TZ=Europe/Berlin

deployment.yaml

containers:
  - name: {{ .Chart.Name }}
    {{- if .Values.env }}
    env:
      {{- range (split "\n" .Values.env) }}
      - name: {{ (split "=" .)._0 }}
        value: {{ . | trimPrefix (split "=" .)._0 | trimPrefix "=" | quote }}
      {{- end }}
    {{- end }}

output:

env:
  - name: ENVIRONMENT1
    value: "testABC"
  - name: JAVA_OPTS
    value: "-Xms1024m -Xmx2048m -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M"
  - name: TZ
    value: "Europe/Berlin"

Error converting YAML to JSON: did not find expected key - error in pipeline

I am getting the below error in my deployment pipeline:

Error: YAML parse error on cnhsst/templates/deployment.yaml: error converting YAML to JSON: yaml: line 38: did not find expected key

The YAML file corresponding to this error is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ template "fullname" . }}
      release: "{{ .Release.Name }}"
  # We dont need a large deployment history limit as Helm keeps it's own
  # history
  revisionHistoryLimit: 2
  template:
    metadata:
      namespace: {{ .Values.namespace }}
      labels:
        app: {{ template "fullname" . }}
        release: "{{ .Release.Name }}"
      annotations:
        recreatePods: {{ randAlphaNum 8 | quote }}
    spec:
      containers:
      - name: {{ template "fullname" . }}
        image: {{ template "docker-image" . }}
        imagePullPolicy: Always
        ports:
          # The port that our container listens for HTTP requests on
          - containerPort: {{ default 8000 .Values.portOverride }}
            name: http
        {{- if .Values.resources }}
        resources:
{{ toYaml .Values.resources | indent 10 }}
        {{- end }}
        {{- if and (.Values.livenessProbe) (.Values.apipod)}}
        livenessProbe:
{{ toYaml .Values.livenessProbe | indent 10 }}
        {{- end }}
        {{- if and (.Values.readinessProbe) (.Values.apipod)}}
        readinessProbe:
{{ toYaml .Values.readinessProbe | indent 10 }}
        {{- end }}
      imagePullSecrets:
        - name: regcred
       securityContext:
        runAsNonRoot: true
        runAsUser: 5000
        runAsGroup: 5000
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - {{ template "fullname" . }}
              topologyKey: failure-domain.beta.kubernetes.io/zone
I have been stuck on this issue for a few hours. I have gone through numerous posts and tried online tools to track down the syntax error, but unfortunately no luck. If anyone is able to point out the issue, that would be really great.
You can see the mismatched indentation under regcred:

      imagePullSecrets:
        - name: regcred
          # <-- indented "-"
       # vvv not indented to match
       securityContext:
        runAsNonRoot: true
which, as luck would have it, is the 38th line in the output YAML:

$ helm template --debug my-chart . 2>&1 | sed -e '1,/^apiVersion:/d' | sed -ne 38p
       securityContext:
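Once the securityContext line is indented to match its sibling keys (imagePullSecrets, affinity), a quick re-check confirms the chart parses; helm template exits non-zero on a parse error, so (chart name assumed, as above):

$ helm template --debug my-chart . > /dev/null && echo OK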