helm: how to remove newline after toYaml function - kubernetes-helm

From official documentation:
When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is. The curly brace syntax of template declarations can be modified with special characters to tell the template engine to chomp whitespace. {{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
But I've tried all the variations with no success. Does anyone have a solution for placing YAML inside YAML? I don't want to use range.
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: test
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}
When I use this code without -}}, it adds a newline:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi

  volumes:
  - name: test
    emptyDir: {}
But when I use -}}, it gets concatenated with the following content:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi
      volumes: <- should be at indent 2
  - name: test
    emptyDir: {}
values.yaml is:
pod:
  resources:
    requests:
      cpu: 20m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi

This worked for me:
{{ toYaml .Values.pod.resources | trim | indent 6 }}
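A related option: Sprig also provides nindent, which prepends the newline for you, so the action can sit on the same line as the key. A small sketch, assuming the same values layout as above:
    resources: {{- toYaml .Values.pod.resources | nindent 6 }}
The {{- chomps the space after the colon, and nindent 6 starts the block on a fresh line indented six spaces, which avoids the stray blank line without needing trim.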

The below variant is correct:
{{ toYaml .Values.pod.resources | indent 6 }}
The added newline doesn't cause any issue here.
I've tried your pod.yaml and got the following error:
$ helm install .
Error: release pilfering-pronghorn failed: Pod "app" is invalid: spec.containers[0].volumeMounts[0].mountPath: Invalid value: "test": must be an absolute path
which means that the mountPath of the volumeMounts entry should be an absolute path, something like /mnt.
So, the following pod.yaml works fine and creates a pod with the exact resources we defined in values.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: /mnt
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}

{{- toYaml .Values.pod.resources | indent 6 -}}
This removes the newline.

@Nickolay, it is not a valid YAML file according to helm - at least helm barfs and says:
error converting YAML to JSON: yaml: line 51: did not find expected key
For me, line 51 is the blank line - and whatever follows should not be indented to the same level.

Related

Dynamically merge fields in helm chart

I'm trying to combine sections of a helm template with sections provided in the values file.
I have this in my template YAML:
{{- $name := "test" }}
{{- if hasKey .Values.provisioners $name }}
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    memory: 20
    cpu: 16
  requirements:
  - key: karpenter.k8s.aws/instance-category
    operator: In
    values:
    - t
    - m
{{- end }}
and this is my values file:
provisioners:
  gp:
    limits:
      cpu: 75
      nvidia.com/gpu: 2
  test:
    limits:
      memory: 10
      cpu: 10
    requirements:
    - key: karpenter.k8s.aws/instance-category
      operator: In
      values:
      - r
This will only install the manifest if there is a "test" section in provisioners. But what I want to do is 'inject' the limits and requirements from the matching provisioners section of the values file, or overwrite a value if the item already exists in the template.
One possible complication is that the fields in the values file won't always be static; there can be any number of limits, so it would need to copy every item in that section.
Likewise, the requirements section can contain any number of keys. If there's a matching key in the values file it needs to overwrite the template's entry, otherwise append to it.
So the resulting template would be this if $name is set to "test"
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    memory: 10
    cpu: 10
  requirements:
  - key: karpenter.k8s.aws/instance-category
    operator: In
    values:
    - r
And the resulting template would be this if $name is set to "gp"
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    memory: 20
    cpu: 75
    nvidia.com/gpu: 2
  requirements:
  - key: karpenter.k8s.aws/instance-category
    operator: In
    values:
    - t
    - m
I'm hoping someone can point me in the right direction on how this can be achieved. I have no idea where to start with this!
Thanks in advance
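Not a full answer, but one possible direction as a minimal sketch, assuming Sprig's mergeOverwrite is acceptable: keep the template's defaults in a dict and let the matching entry from .Values.provisioners override it. Note that mergeOverwrite replaces lists wholesale, so a requirements list coming from values replaces the template's list rather than being merged key by key; a true per-key merge would need extra range logic:
{{- $name := "test" }}
{{- if hasKey .Values.provisioners $name }}
{{- /* template defaults; the matching values entry takes precedence */}}
{{- $defaults := dict
      "limits" (dict "memory" 20 "cpu" 16)
      "requirements" (list (dict
        "key" "karpenter.k8s.aws/instance-category"
        "operator" "In"
        "values" (list "t" "m"))) }}
{{- $spec := mergeOverwrite $defaults (index .Values.provisioners $name) }}
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    {{- toYaml $spec.limits | nindent 4 }}
  requirements:
    {{- toYaml $spec.requirements | nindent 4 }}
{{- end }}
With the values file above, this yields the "test" and "gp" limits shown in the desired outputs; the per-key overwrite/append behaviour for requirements is the part this sketch does not cover.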

Resource Limits to be thrice of requests

In my values.yaml files (there are more than 100 of them) I only have resource requests defined. I added logic to my Helm deployment template that sets the limits to three times the requests. I am facing an issue with the units of memory and CPU: in some values.yaml files memory is given in Mi and in others in Gi, and CPU as 1 or 1000m. I tried trimming off the unit to perform the multiplication and then appending "m" back. That works when the unit is m, but how can I do it for the other units? I know this is not the best way to do this, hence I am looking for a better approach.
You can use a regex to parse your value, assuming the value contains only a float (with or without a dot) plus a suffix as text: multiply the float part and then append the suffix afterwards. Example with 2x multiplication:
values.yaml
limit: 1.6Gi
pod.yaml
{{- $limit_value := .Values.limit | toString | regexFind "[0-9.]+" -}}
{{- $limit_suffix := .Values.limit | toString | regexFind "[^0-9.]+" -}}
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: {{ mulf $limit_value 2 }}{{ $limit_suffix }}
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Result of helm template
# Source: regex/templates/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: 3.2Gi
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Notice the usage of mulf instead of mul; it's required for float multiplication. The toString function fixes a type error if the value is specified without a suffix.
The regexes are simple enough for a proof of concept; you should make them stricter.
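Applied to the original goal (limits as three times the requests), the same regex trick might look roughly like this; .Values.resources.requests is a hypothetical layout here, and the caveats above about plain numbers and stricter regexes still apply:
{{- $mem := .Values.resources.requests.memory | toString }}
{{- $cpu := .Values.resources.requests.cpu | toString }}
resources:
  requests:
    memory: {{ $mem | quote }}
    cpu: {{ $cpu | quote }}
  limits:
    memory: "{{ mulf ($mem | regexFind "[0-9.]+") 3 }}{{ $mem | regexFind "[^0-9.]+" }}"
    cpu: "{{ mulf ($cpu | regexFind "[0-9.]+") 3 }}{{ $cpu | regexFind "[^0-9.]+" }}"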
Also, please don't add images of code to your questions; paste the code directly. See Why should I not upload images of code/data/errors when asking a question?

yq - How to keep only certain keys in a (nested) object?

I have a bunch of Kubernetes resources (i.e. a lot of yaml files), and I would like to have a result with only certain paths.
My current brutal approach looks like:
cat my-list-of-deployments | yq eval 'select(.kind == "Deployment")
  | del(.metadata.labels, .spec.replicas, .spec.selector, .spec.strategy, .spec.template.metadata)
  | del(.spec.template.spec.containers.[0].env, .spec.template.spec.containers.[0].image)' -
Of course this is super inefficient.
In the path .spec.template.spec.containers.[0], I ideally want to delete everything except .spec.template.spec.containers.[*].image and .spec.template.spec.containers.[*].resources (where "*" means: keep all array elements).
I tried something like
del(.spec.template.spec.containers.[0] | select(. != "name"))
But this did not work for me. How can I make this better?
Example input:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-one
spec:
template:
spec:
containers:
- image: app-one:0.2.0
name: app-one
ports:
- containerPort: 80
name: http
resources:
limits:
cpu: 50m
memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-two
spec:
template:
spec:
containers:
- image: redis:3.2-alpine
livenessProbe:
exec:
command:
- redis-cli
- info
- server
periodSeconds: 20
name: app-two
readinessProbe:
exec:
command:
- redis-cli
- ping
resources:
limits:
cpu: 100m
memory: 128Mi
startupProbe:
periodSeconds: 2
tcpSocket:
port: 6379
Desired output:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  template:
    spec:
      containers:
      - name: app-one
        resources:
          limits:
            cpu: 50m
            memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
spec:
  template:
    spec:
      containers:
      - name: app-two
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
The key is to use the with_entries function inside the .containers array to keep only the required fields (name, resources), and the |= update operator to put the modified result back:
yq eval '
select(.kind == "Deployment").spec.template.spec.containers[] |=
with_entries( select(.key == "name" or .key == "resources") ) ' yaml
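If your yq is new enough to have the pick operator (mikefarah yq v4.22 or so, if I recall correctly), an equivalent formulation is to pick the keys to keep rather than select the entries:
yq eval '
  select(.kind == "Deployment").spec.template.spec.containers[] |=
    pick(["name", "resources"]) ' yaml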

Inheritance of multiline helm chart template

I want to set resources on pods in a Helm chart, using a resources-section template from a subchart, because there should be several different resource templates in the subchart.
I have values.yaml, main-values.yaml and templates/deployment.yaml.
The command to update the helm chart is:
helm upgrade -i mynamespace ./kubernetes/mynamespace --namespace mynamespace --create-namespace -f kubernetes/mynamespace/main-values.yaml --reset-values
The files are cut down to show just an example:
main-values.yaml:
namespace: mynamespace
baseUrl: myurl.com
customBranch: dev
components:
  postgresql:
    nodeport: 5432
  elasticsearch:
    nodeport: 9200
resources_minimum:
  requests:
    memory: "100M"
    cpu: "100m"
  limits:
    memory: "300M"
    cpu: "200m"
values.yaml
namespace:
baseUrl:
customBranch:
components:
  service:
    name: service
    image: docker-registry.service.{{ .Values.customBranch }}
    imagePullPolicy: Always
    resources: "{{ .Values.resources_minimum }}"
    tag: latest
    port: 8080
    accessType: ClusterIP
cut
And deployment.yaml is
cut
containers:
- name: {{ $val.name }}
  securityContext:
    {{- toYaml $.Values.securityContext | nindent 12 }}
  image: "{{ tpl $val.image $ }}:{{ $val.tag | default "latest" }}"
  imagePullPolicy: {{ $val.imagePullPolicy }}
  resources: "{{ tpl $val.resources $ }}"
cut
The resources section of the deployment does not work at all. However, the image value with the intermediate template {{ .Values.customBranch }} works, and the nodeport template works fine in services.yaml:
spec:
  type: {{ $val.accessType }}
  ports:
  - port: {{ $val.port }}
    name: mainport
    targetPort: {{ $val.port }}
    protocol: TCP
    {{ if and $val.nodeport }}
    nodePort: {{ $val.nodeport }}
I've tried $val, toYaml, tpl, and plain $.Values options in the resources section of deployment.yaml and got several errors like:
error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.resources_minimum":interface {}(nil)}
or
error converting YAML to JSON: yaml: line 29: could not find expected ':'
and other errors like these.
Is it impossible to push yaml values of multiline resources_minimum through values.yaml to deployment.yaml?
Which syntax should I use?
What documentation can you advise me to read?
It's not possible to use template code in values.yaml files.
But you can merge several values.yaml files to reuse configuration values.
main-values.yaml
components:
  service:
    image: docker-registry.service.dev
    resources:
      requests:
        memory: "100M"
        cpu: "100m"
      limits:
        memory: "300M"
        cpu: "200m"
values.yaml
components:
  service:
    name: service
    imagePullPolicy: Always
    tag: latest
    port: 8080
    accessType: ClusterIP
If you add this to your template, it will contain values from both value files:
components: {{ deepCopy .Values.components | merge | toYaml | nindent 6 }}
merge + deepCopy will merge the values of all your values files.
toYaml will output the result in yaml syntax.
You also have to check the correct indentation. 6 is just a guess.
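For reference, given the two values files above, the merged components block that toYaml renders should come out roughly like this (a sketch; toYaml sorts keys alphabetically, so the exact order and quoting may differ):
components:
  service:
    accessType: ClusterIP
    image: docker-registry.service.dev
    imagePullPolicy: Always
    name: service
    port: 8080
    resources:
      limits:
        cpu: "200m"
        memory: "300M"
      requests:
        cpu: "100m"
        memory: "100M"
    tag: latest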
Call helm template --debug ...
This generates output even when the YAML is invalid, so you can easily check the correct indentation and spot other errors.
OK, fellows helped me out with an elegant solution.
values.yaml:
resource_pool:
  minimum:
    limits:
      memory: "200M"
      cpu: "200m"
    requests:
      memory: "100M"
      cpu: "100m"
...
components:
  service:
    name: service
    image: docker.image
    imagePullPolicy: Always
    tag: latest
    resources_local: minimum
And deployment.yaml:
{{- range $keyResources, $valResources := $.Values.resource_pool }}
{{- if eq $val.resources_local $keyResources }}
{{ $valResources | toYaml | nindent 12}}
{{- end }}
{{- end }}
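For context, a sketch of how this fragment presumably sits in the full deployment.yaml (assuming the outer range over the components from earlier, so that $val is the current component and the indentation lines up with nindent 12):
{{- range $key, $val := $.Values.components }}
...
          resources:
            {{- range $keyResources, $valResources := $.Values.resource_pool }}
            {{- if eq $val.resources_local $keyResources }}
            {{- $valResources | toYaml | nindent 12 }}
            {{- end }}
            {{- end }}
...
{{- end }}
Each component names the resource template it wants via resources_local, and the inner loop copies the matching block from resource_pool into the container spec.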
Any suggestions on what to read to get familiar with all the Helm tricks?

How can I iteratively create pods from a list using Helm?

I'm trying to create a number of pods from a YAML loop in Helm. If I run with --debug --dry-run, the output matches my expectations, but when I actually deploy to a cluster, only the last iteration of the loop is present.
Some YAML for you:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
  - name: {{ . }}
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
{{- end }}
{{ end }}
When I run helm upgrade --install --set componentTests="{a,b,c}" --debug --dry-run, I get the following output:
# Source: <path-to-file>.yaml
apiVersion: v1
kind: Pod
metadata:
  name: a
  labels:
    app: a
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: content-tests
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/a:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: b
  labels:
    app: b
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: b
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/b:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
apiVersion: v1
kind: Pod
metadata:
  name: c
  labels:
    app: users-tests
    chart: integrationtests-0.0.1
    release: funny-ferret
    heritage: Tiller
spec:
  restartPolicy: Never
  containers:
  - name: c
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/c:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
---
(some parts have been edited/removed due to sensitivity/irrelevance)
which looks to me like it does what I want, namely create one pod for a, another for b and a third for c.
However, when actually installing this into a cluster, I always end up with only the pod corresponding to the last element in the list (in this case, c). It's almost as if they overwrite each other, but given that they have different names I don't think they should. Even running with --debug but not --dry-run, the output tells me I should have 3 pods, but with kubectl get pods I can see only one.
How can I iteratively create pods from a list using Helm?
Found it!
So apparently, helm uses --- as a separator between specifications of pods/services/whatHaveYou.
Specifying the same fields multiple times in a single chart is valid; it will use the last specified value for any given field. To avoid overwriting values and instead have multiple pods created, simply add the separator at the end of the loop:
{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
  labels:
    app: {{ . }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
  restartPolicy: Never
  containers:
  - name: {{ . }}
    ports:
    - containerPort: 3000
    image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
    imagePullPolicy: Always
    command: ["sleep"]
    args: ["100d"]
    resources:
      requests:
        memory: 2000Mi
        cpu: 500m
---
{{- end }}
{{ end }}
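For what it's worth, a common variant is to emit the separator at the top of each iteration instead; either placement works, as long as every rendered document ends up delimited:
{{- range .Values.componentTests }}
---
apiVersion: v1
kind: Pod
metadata:
  name: {{ . }}
...
{{- end }}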