Inheritance of a multiline Helm chart template - Kubernetes

I want to set resources on pods in a Helm chart, with the resources section templated from a subchart, because there need to be several different resource templates in the subchart.
I have values.yaml, main-values.yaml and templates/deployment.yaml.
The command to update the Helm chart is
helm upgrade -i mynamespace ./kubernetes/mynamespace --namespace mynamespace --create-namespace -f kubernetes/mynamespace/main-values.yaml --reset-values
Files are cut down to show just an example:
main-values.yaml:
namespace: mynamespace
baseUrl: myurl.com
customBranch: dev
components:
  postgresql:
    nodeport: 5432
  elasticsearch:
    nodeport: 9200
resources_minimum:
  requests:
    memory: "100M"
    cpu: "100m"
  limits:
    memory: "300M"
    cpu: "200m"
values.yaml:
namespace:
baseUrl:
customBranch:
components:
  service:
    name: service
    image: docker-registry.service.{{ .Values.customBranch }}
    imagePullPolicy: Always
    resources: "{{ .Values.resources_minimum }}"
    tag: latest
    port: 8080
    accessType: ClusterIP
cut
And deployment.yaml is
cut
containers:
  - name: {{ $val.name }}
    securityContext:
      {{- toYaml $.Values.securityContext | nindent 12 }}
    image: "{{ tpl $val.image $ }}:{{ $val.tag | default "latest" }}"
    imagePullPolicy: {{ $val.imagePullPolicy }}
    resources: "{{ tpl $val.resources $ }}"
cut
The resources section of the deployment does not work at all. However, the image section with the intermediate template {{ .Values.customBranch }} works, and the nodeport template works fine in services.yaml:
spec:
  type: {{ $val.accessType }}
  ports:
    - port: {{ $val.port }}
      name: mainport
      targetPort: {{ $val.port }}
      protocol: TCP
      {{ if and $val.nodeport }}
      nodePort: {{ $val.nodeport }}
I've tried $val, toYaml, tpl, and plain $.Values in the resources section of deployment.yaml and got several errors like:
error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.resources_minimum":interface {}(nil)}
or
error converting YAML to JSON: yaml: line 29: could not find expected ':'
and other errors like that.
Is it impossible to pass the multiline YAML value of resources_minimum through values.yaml to deployment.yaml?
Which syntax should I use?
What documentation can you advise me to read?

It's not possible to use template code in values.yaml files.
But you can merge several values.yaml files to reuse configuration values.
main-values.yaml
components:
  service:
    image: docker-registry.service.dev
    resources:
      requests:
        memory: "100M"
        cpu: "100m"
      limits:
        memory: "300M"
        cpu: "200m"
values.yaml
components:
  service:
    name: service
    imagePullPolicy: Always
    tag: latest
    port: 8080
    accessType: ClusterIP
If you add this to your template, it will contain values from both value files:
components: {{ deepCopy .Values.components | merge | toYaml | nindent 6 }}
merge + deepCopy will merge the values of all your values files.
toYaml will output the result in yaml syntax.
You also have to check the correct indentation. 6 is just a guess.
Call helm template --debug ...
This generates the output even when it is invalid YAML, so you can easily check the correct indentation and spot other errors.
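For example, with the chart path and values file taken from the upgrade command in the question, a debug render could look like this (just a sketch; adjust the release name and paths to your layout):
helm template mynamespace ./kubernetes/mynamespace \
  -f kubernetes/mynamespace/main-values.yaml --debug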

OK, fellows helped me with an elegant solution.
values.yaml:
resource_pool:
  minimum:
    limits:
      memory: "200M"
      cpu: "200m"
    requests:
      memory: "100M"
      cpu: "100m"
...
components:
  service:
    name: service
    image: docker.image
    imagePullPolicy: Always
    tag: latest
    resources_local: minimum
And deployment.yaml:
{{- range $keyResources, $valResources := $.Values.resource_pool }}
{{- if eq $val.resources_local $keyResources }}
{{ $valResources | toYaml | nindent 12 }}
{{- end }}
{{- end }}
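As a side note, an alternative sketch (not part of the original answer): the same lookup can be written without the range/if pair by using the built-in index function, assuming $val is the per-component variable from the enclosing range in deployment.yaml and the values layout shown above:
resources:
  {{- index $.Values.resource_pool $val.resources_local | toYaml | nindent 12 }}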
Any suggestion what to read to get familiar with all the Helm tricks?

Related

helm secrets that replace variables inside configuration files

I am trying to deploy a REST API application in Kubernetes with Helm. Some of the configuration files have credentials in them, and I would like to replace the variables inside the Helm templates during the deployment with Kubernetes secrets.
Does anyone have a pointer to documentation where I can explore this, please?
If you want to pass environment variables directly to the deployment file, you can do that for a few variables; however, best practice is to create a Secret and inject it into the deployment.
Here is a direct example of injecting values into the deployment.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Chart.Name }}-deployment"
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: "{{ .Chart.Name }}-selector"
      version: "current"
  revisionHistoryLimit: {{ .Values.revisionHistoryLimit }}
  template:
    metadata:
      labels:
        app: "{{ .Chart.Name }}-selector"
        version: "current"
    spec:
      containers:
      - name: "{{ .Chart.Name }}"
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - containerPort: {{ .Values.service.servicePort }}
        resources:
          requests:
            cpu: "{{ .Values.image.resources.requests.cpu }}"
            memory: "{{ .Values.image.resources.requests.memory }}"
        env:
          - name: PORT
            value: "{{ .Values.service.servicePort }}"
        {{- if .Values.image.livenessProbe }}
        livenessProbe:
{{ toYaml .Values.image.livenessProbe | indent 10 }}
        {{- end }}
        {{- if .Values.image.readinessProbe }}
        readinessProbe:
{{ toYaml .Values.image.readinessProbe | indent 10 }}
        {{- end }}
values.yaml
image:
  repository: nodeserver
  tag: 1.0.0
  pullPolicy: IfNotPresent
  resources:
    requests:
      cpu: 200m
      memory: 300Mi
  readinessProbe: {}
  # Example (replace readinessProbe: {} with the following):
  # readinessProbe:
  #   httpGet:
  #     path: /ready
  #     port: 3000
  #   initialDelaySeconds: 3
  #   periodSeconds: 5
  livenessProbe: {}
  # Example (replace livenessProbe: {} with the following):
  # livenessProbe:
  #   httpGet:
  #     path: /live
  #     port: 3000
  #   initialDelaySeconds: 40
  #   periodSeconds: 10
service:
  name: Node
  type: NodePort
  servicePort: 3000
You can see inside the deployment.yaml code block:
env:
  - name: PORT
    value: "{{ .Values.service.servicePort }}"
It fetches the value from the values.yaml file:
service:
  name: Node
  type: NodePort
  servicePort: 3000
If you don't want to update the values.yaml file, you can also override the value from the command line:
helm install chart my-chart -n namespace-name --set service.servicePort=5000
Create a Secret template in your templates folder. Then you can pass the values through the Helm CLI.
For example, here is my secret.yaml file:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  password: {{ .Values.password | b64enc }}
Now I can set the value for password as below:
helm install my-chart-instance my-chart -n my-namespace --set password=my-secret-value
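A minimal sketch of how a pod template could then consume that Secret as an environment variable (the container name, image and variable name are purely illustrative, not part of the original answer):
containers:
  - name: my-app                # illustrative container name
    image: my-image:latest      # illustrative image
    env:
      - name: APP_PASSWORD      # illustrative variable name
        valueFrom:
          secretKeyRef:
            name: mysecret      # the Secret rendered from secret.yaml above
            key: password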

acumos AI clio installation fails with "error converting YAML to JSON"

I have been trying to install the clio release.
VM:
ubuntu 18.04
16 Cores
32 GB RAM
500 GB Storage.
Command:
bash /home/ubuntu/system-integration/tools/aio_k8s_deployer/aio_k8s_deployer.sh all acai-server ubuntu generic
Almost all installation steps completed successfully, but during "setup-lum" I got the error below.
Error:
YAML parse error on lum-helm/templates/deployment.yaml:
error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
Workaround:
I was able to get past these errors (tested via helm install --dry-run) by:
a. removing the resources, affinity and tolerations blocks
b. replacing "Release.Name" with the actual release value (e.g. license-clio-configmap)
but when I run the full installation command, those Helm charts get overwritten again.
Full error:
...
helm install -f kubernetes/values.yaml --name license-clio --namespace default --debug ./kubernetes/license-usage-manager/lum-helm
[debug] Created tunnel using local port: '46109'
[debug] SERVER: "127.0.0.1:46109"
[debug] Original chart version: ""
[debug] CHART PATH: /deploy/system-integration/AIO/lum/kubernetes/license-usage-manager/lum-helm
YAML parse error on lum-helm/templates/deployment.yaml: error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
YAML of deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "lum-helm.fullname" . }}
  labels:
    app: {{ template "lum-helm.name" . }}
    chart: {{ template "lum-helm.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "lum-helm.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "lum-helm.name" . }}
        release: {{ .Release.Name }}
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox:1.28
          command:
            - 'sh'
            - '-c'
            - >
              until nc -z -w 2 {{ .Release.Name }}-postgresql {{ .Values.postgresql.servicePort }} && echo postgresql ok;
                do sleep 2;
              done
      containers:
        - name: {{ .Chart.Name }}
          image: nexus3.acumos.org:10002/acumos/lum-server:default
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ .Release.Name }}-postgresql
                  key: postgresql-password
            - name: NODE
          volumeMounts:
            - name: config-volume
              mountPath: /opt/app/lum/etc/config.json
              subPath: lum-config.json
          ports:
            - name: http
              containerPort: 2080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Release.Name }}-configmap
This error was resolved as per "Error trying to install Acumos Clio using AIO".
I provided an imagetag: 1.3.2 in my actual value.yaml and the lum deployment was successful.
In the Acumos setup there are two copies of setup-lum.sh and values.yaml:
the actual one:
~/system-integration/AIO/lum/kubernetes/value.yaml
and the runtime copy:
~/aio_k8s_deployer/deploy/system-integration/AIO/lum/kubernetes/value.yaml
I found this workaround:
Uncommented the IMAGE-TAG line in the values.yaml file
Commented the following lines in the setup-lum.sh file (they were already executed on the first run, and this way I skipped the overwriting problem):
rm -frd kubernetes/license-usage-manager
git clone "https://gerrit.acumos.org/r/license-usage-manager" \
kubernetes/license-usage-manager

How to refer to a whole structure from values.yaml instead of specifying attributes one by one?

I am trying to deploy a Helm chart on Minikube in a local VirtualBox VM using the Helm command shown below.
I am referring to the livenessProbe and readinessProbe configuration directly from values.yaml in deployment.yaml, as shown below. However, this approach gives me the error specified below; if I change it to refer to each attribute value independently, the chart installs and the pod deploys successfully.
livenessProbe:
  - {{ .Values.monitorConfig.liveness }}
readinessProbe:
  - {{ .Values.monitorConfig.readiness }}
Can anyone please let me know what can be done to avoid the error, and why?
Thank you.
Helm Command
helm install --debug -n pspk ./pkg/helm/my-service/
Error
Error: release pspk failed: Deployment in version "v1beta1" cannot be
handled as a Deployment: v1beta1.Deployment.Spec:
v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec:
v1.PodSpec.Containers: []v1.Container: v1.Container.LivenessProbe:
readObjectStart: expect { or n, but found [, error found in #10 byte
of ...|ssProbe":["map[failu|..., bigger context
...|"imagePullPolicy":"IfNotPresent","livenessProbe":["map[failureThreshold:3
httpGet:map[path:/greeting|...
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: "{{ .Release.Name }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "fullname" . }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: 50443
          protocol: TCP
        - name: grpc
          containerPort: 50051
          protocol: TCP
        livenessProbe:
          - {{ .Values.monitorConfig.liveness }}
        readinessProbe:
          - {{ .Values.monitorConfig.readiness }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
values.yaml
replicaCount: 2
application:
  track: stable
image:
  repository: test/go-k8s
  tag: 0.1.1
  pullPolicy: IfNotPresent
# SQL migration scripts
service:
  enabled: false
  type: NodePort
  port: 80
  grpc_port: 50051
env:
  # POSTGRES_HOST
  postgresHost: localhost
  # POSTGRES_PORT
  postgresPort: "5432"
  # POSTGRES_SSL_MODE
  postgresSSLMode: "disable"
  # POSTGRES_DB
  postgresDB: test
  # POSTGRES_USER
  postgresUser: test
  # POSTGRES_PASSWORD
  postgresPassword: "test"
monitorConfig:
  liveness:
    httpGet:
      path: "/greeting"
      port: 50443
    periodSeconds: 2
    timeoutSeconds: 10
    initialDelaySeconds: 5
    failureThreshold: 3
    successThreshold: 1
  readiness:
    httpGet:
      path: "/greeting"
      port: 50443
    periodSeconds: 2
    timeoutSeconds: 10
    initialDelaySeconds: 5
    failureThreshold: 3
    successThreshold: 1
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
You need to do two things to make this work correctly: explicitly serialize the value as YAML, and make the indentation correct. This tends to look something like
livenessProbe:
  - {{ .Values.monitorConfig.liveness | toYaml | indent 8 | trim }}
The default serialization will be a Go-native dump format, which isn't YAML and leads to the weird map[failureThreshold:1] output; toYaml fixes this. indent 8 puts spaces at the front of every line in the resulting block (you will need to adjust the "8"). trim removes leading and trailing spaces. (toYaml is Helm-specific and isn't documented well; the other two functions come from the Sprig support library.)
You should double-check this output with
helm template -n pspk ./pkg/helm/my-service/
and if it doesn't look like valid YAML, adjust it further.
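As an aside (a sketch assuming a recent Helm such as Helm 3, where Sprig's nindent is available and toYaml trims its trailing newline), this is often written without the leading dash, letting nindent handle both the newline and the indentation; the number still depends on where the block sits in your template:
        livenessProbe:
          {{- toYaml .Values.monitorConfig.liveness | nindent 10 }}
        readinessProbe:
          {{- toYaml .Values.monitorConfig.readiness | nindent 10 }}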
In your YAML:
livenessProbe:
  - {{ .Values.monitorConfig.liveness }}
readinessProbe:
  - {{ .Values.monitorConfig.readiness }}
You insert your values into sequence items. Sequence items in YAML start with -. However, the content of livenessProbe is expected to be a YAML mapping. The error message is poor, but it tells you what goes wrong:
expect { or n, but found [,
{ starts a YAML mapping (in flow style), [ starts a YAML sequence (in flow style). The message tells you that the start of a YAML mapping is expected, but the start of a YAML sequence is found. Note that since you're using block style, you don't actually use { and [ here.
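For illustration (this example is not from the original answer; it just restates the liveness block from values.yaml), here is the same data in flow and block style:
# flow style
livenessProbe: {httpGet: {path: "/greeting", port: 50443}}
# block style (equivalent)
livenessProbe:
  httpGet:
    path: "/greeting"
    port: 50443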
So to fix it, simply remove the - so that your inserted mapping (as seen in your values.yaml) is the direct value of livenessProbe and not contained in a sequence:
livenessProbe:
  {{ .Values.monitorConfig.liveness }}
readinessProbe:
  {{ .Values.monitorConfig.readiness }}
Thanks to the community answers/comments and the Helm template guide,
it can be combined into:
{{- if .Values.monitorConfig.liveness }}
livenessProbe:
{{ toYaml .Values.monitorConfig.liveness | indent 12 }}
{{- end }}
This will give more flexibility.

How can I iteratively create pods from a list using Helm?

I'm trying to create a number of pods from a YAML loop in Helm. If I run with --debug --dry-run, the output matches my expectations, but when I actually deploy to a cluster, only the last iteration of the loop is present.
Some YAML for you:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
    - name: {{ . }}
      ports:
        - containerPort: 3000
      image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
      imagePullPolicy: Always
      command: ["sleep"]
      args: ["100d"]
      resources:
        requests:
          memory: 2000Mi
          cpu: 500m
{{- end }}
{{ end }}
When I run helm upgrade --install --set componentTests="{a,b,c}" --debug --dry-run,
I get the following output:
# Source: <path-to-file>.yaml
apiVersion: v1
kind: Pod
metadata:
  name: a
  labels:
    app: a
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
    - name: content-tests
      ports:
        - containerPort: 3000
      image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/a:latest
      imagePullPolicy: Always
      command: ["sleep"]
      args: ["100d"]
      resources:
        requests:
          memory: 2000Mi
          cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: b
  labels:
    app: b
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
    - name: b
      ports:
        - containerPort: 3000
      image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/b:latest
      imagePullPolicy: Always
      command: ["sleep"]
      args: ["100d"]
      resources:
        requests:
          memory: 2000Mi
          cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: c
  labels:
    app: users-tests
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
    - name: c
      ports:
        - containerPort: 3000
      image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/c:latest
      imagePullPolicy: Always
      command: ["sleep"]
      args: ["100d"]
      resources:
        requests:
          memory: 2000Mi
          cpu: 500m
---
(some parts have been edited/removed due to sensitivity/irrelevance)
which looks to me like it does what I want it to, namely create a pod for a, another for b, and a third for c.
However, when actually installing this into a cluster, I always end up with only the pod corresponding to the last element in the list (in this case, c). It's almost as if they overwrite each other, but given that they have different names, I don't think they should. Even running with --debug but not --dry-run, the output tells me I should have 3 pods, but using kubectl get pods I can see only one.
How can I iteratively create pods from a list using Helm?
Found it!
So apparently, Helm uses --- as a separator between specifications of pods/services/what have you.
Specifying the same fields multiple times in a single chart is valid; it will use the last specified value for any given field. To avoid overwriting values and instead have multiple pods created, simply add the separator at the end of the loop:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
    - name: {{ . }}
      ports:
        - containerPort: 3000
      image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
      imagePullPolicy: Always
      command: ["sleep"]
      args: ["100d"]
      resources:
        requests:
          memory: 2000Mi
          cpu: 500m
---
{{- end }}
{{ end }}

helm: how to remove newline after toYaml function

From official documentation:
When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is. The curly brace syntax of template declarations can be modified with special characters to tell the template engine to chomp whitespace. {{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
But I have tried all the variations with no success. Does anyone have a solution for how to place YAML inside YAML? I don't want to use range.
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: test
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}
When I use this code without -}}, it adds a newline:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi

  volumes:
  - name: test
    emptyDir: {}
But when I use -}}, it gets concatenated with the next block:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi
      volumes:        <- should be at indent 2
      - name: test
        emptyDir: {}
values.yaml is
pod:
  resources:
    requests:
      cpu: 20m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi
This worked for me:
{{ toYaml .Values.pod.resources | trim | indent 6 }}
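A commonly used alternative (a sketch assuming a recent Helm, e.g. Helm 3, where nindent is available and toYaml trims its trailing newline) is to chomp the preceding whitespace and let nindent re-add the newline and the indentation:
    resources:
      {{- toYaml .Values.pod.resources | nindent 6 }}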
The below variant is correct:
{{ toYaml .Values.pod.resources | indent 6 }}
Adding a newline doesn't create any issue here.
I've tried your pod.yaml and got the following error:
$ helm install .
Error: release pilfering-pronghorn failed: Pod "app" is invalid: spec.containers[0].volumeMounts[0].mountPath: Invalid value: "test": must be an absolute path
which means that mountPath of volumeMounts should be something like /mnt.
So, the following pod.yaml works pretty well and creates a pod with the exact resources we defined in values.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: /mnt
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}
{{- toYaml .Values.pod.resources | indent 6 -}}
This removes a newline.
@Nickolay, it is not a valid YAML file according to Helm; at least Helm barfs and says:
error converting YAML to JSON: yaml: line 51: did not find expected key
For me, line 51 is the empty space, and whatever follows should not be indented to the same level.