Setting a list to a field in Helm - kubernetes-helm

I have the following in my values.yaml
router:
  env:
    - name: JSON_LOGGING
      value: True
In my Deployment I would simply like to set this list to the env field like so:
spec:
  containers:
    # ..
    env: {{ $.Values.router.env }}
However, it appears that this produces an incorrect YAML file:
Error: UPGRADE FAILED: YAML parse error on translation/templates/translation-router.yaml: error converting YAML to JSON: yaml: line 35: did not find expected ',' or ']'
Is there a way to make this work?

You need to render that part of your configuration back into YAML using toYaml, and control its indentation with nindent.
spec:
  containers:
    # ..
    env:
      {{- toYaml .Values.router.env | nindent 6 }}
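For reference, a rough sketch of what this should render to with the values above. Note that the unquoted True in values.yaml will come through as a YAML boolean; since container env values must be strings, you will likely also want to quote it there (value: "true"), as covered by the GRAPHITE_ENABLED question further down.
spec:
  containers:
    # ..
    env:
      - name: JSON_LOGGING
        value: "true"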

Related

helm --set to select environment

Struggling with some helm templating...
I'm trying to pass a separate yaml file with springboot parameters to helm, and have them split by environment... then I want to pass the environment to helm using --set env=staging
Feels like I've tried everything but clearly I'm lacking a fundamental understanding...
My _helpers.tpl contains these:
{{- define "env" }}
{{- printf "%s" .Values.env }}
{{- end }}
{{ define "configmap.metadata" }}
name: {{ .Values.name }}-config
{{ end }}
{{ define "configmap.properties" }}
{{ index .Values.environment (include "env" .) "properties" | indent 4 }}
{{ end }}
The template for the config map:
apiVersion: v1
kind: ConfigMap
metadata:
  {{ include "configmap.metadata" . }}
data:
  app.properties: |-
    {{ include "configmap.properties" .}}
And the yaml file containing the properties looks like this:
environment:
  staging:
    properties:
      spring:
        datasource:
          url: something
          username: something
          password: something
      app1:
        key: something
        secret: something
        baseUri: something
      app2:
        bootstrap_server: something
        bootstrap_port: something
        registry_schema: something
  production:
    properties:
      spring:
        etc, etc
And then I want to select the environment using set. I'm testing with:
helm template test . -f values.yaml -f properties.yaml --set env=staging
I think I've just tried so many things that I just can't see the wood for the trees! The error I'm seeing is:
Error: template: microservice/templates/configmap.yaml:7:7: executing "microservice/templates/configmap.yaml" at <include "configmap.properties" .>: error calling include: template: microservice/templates/_helpers.tpl:56:76: executing "configmap.properties" at <4>: wrong type for value; expected string; got map[string]interface {}
EDIT:
After tweaking, I'm still getting an error, but I'm seeing something in the configmap... I wonder if the error is due to the 8 spaces on the first line:
data:
  app.properties: |-
        app2:
      bootstrap_port: something
      bootstrap_server: something
      registry_schema: something
    app1:
      baseUri: something
      key: something
      secret: something
    spring:
      datasource:
        password: something
        url: something
        username: something
I think your actual error message comes from the way you're using the .Values.environment.staging.properties value. It's a YAML map, but the indent function expects a string. You should be able to see some odd indentation, and maybe an odd [map spring [map datasource ...]] string, if you use the helm template --debug option.
When you go to render the ConfigMap, you need to make sure to do two things. Since the data you have is structured properties, you need to use the lightly-documented toYaml function to convert it back to YAML. Its output begins at the first column, so you need to apply the indent function to it, and then you need to make sure the markup that invokes it is also at the first column (indent should be the only thing that supplies indentation).
data:
  app.properties: |-
{{ include "configmap.properties" . | indent 4 }}
{{/*- starts at column 1, but includes the `indent` function */}}
{{ define "configmap.properties" }}
{{- index .Values.environment (.Values.env) "properties" | toYaml }}
{{/*- starts at first column, includes `toYaml`, does not include `indent` */}}
{{- end }}
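With those two changes, the rendered ConfigMap data should come out roughly like this for the staging values above (a sketch; key ordering may differ):
data:
  app.properties: |-
    app2:
      bootstrap_port: something
      bootstrap_server: something
      registry_schema: something
    app1:
      baseUri: something
      key: something
      secret: something
    spring:
      datasource:
        password: something
        url: something
        username: something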

Not able to render the helm template without quotes

I have tried almost every possible way to render this helm template, but now I am out of ideas and seeking help:
values.yaml:
rollout:
  namespace: xyz
  project: default
  baseDomain: "stage.danger.zone"
  clusterDomain: cluster.local
manifest.yaml:
apps:
  certmanager:
    source:
      repoURL: 'https://artifactory.intern.example.io/artifactory/helm'
      targetRevision: 0.0.6
      chart: abc
      helm:
        releaseName: abc
        values:
          global:
            imagePullSecrets:
              - name: artifactory
            baseDomain: "{{ .Values.rollout.baseDomain }}"
To render the template, I use the line below in my main.yaml file, which produces the final result:
values: {{- tpl (toYaml $appValue.values | indent 6) $ }}
Expected result:
baseDomain: stage.danger.zone (without quotes)
What I am getting is:
baseDomain: 'stage.danger.zone'
If I try to remove the double quotes from: baseDomain: "{{ .Values.rollout.baseDomain }}", I get the following error:
[debug] error converting YAML to JSON: yaml: invalid map key: map[interface {}]interface {}{".Values.baseDomain":interface {}(nil)}
Any help or ideas to achieve the same?
This is expected behaviour from YAML.
One dirty and bad hack would be
values: {{- tpl (toYaml $appValue.values | fromYaml | toYaml | indent 6) $ }}
and then you will not see the single quotes.
However, this is not a problem at all even if you end up with single quotes around your value. You can include this variable, for example, like this:
hosts:
  - host: some-gateway.{{ .Values.rollout.baseDomain }}
    serviceName: gateway
    servicePort: 8080
    path: /
Then it will render your variable's value without single quotes.
Example rendered output:
hosts:
  - host: some-gateway.stage.danger.zone
    path: /
    serviceName: gateway
    servicePort: 8080

Error when running Helm Chart with environment variables

I am creating a Helm Chart (v3) for a Kubernetes Deployment.
In the deployment.yaml I am defining some environment variables
spec:
  ...
  env:
    - name: GRAPHITE_ENABLED
      value: {{ .Values.env.graphiteEnabled }}
    - name: GRAPHITE_HOSTNAME
      value: {{ .Values.env.graphiteHostname }}
and specifying values for these environment variables in values.yaml
env:
  graphiteEnabled: "false"
  graphiteHostname: "localhost"
When running the Chart using this command
helm install --debug api-test ./rest-api
the following error is caused:
Error: Deployment in version "v1beta1" cannot be handled as a Deployment: v1beta1.Deployment.Spec: v1beta1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found f
Turned out the issue was caused by the value "false".
After a --dry-run I saw that the output of the generated values was
- name: GRAPHITE_ENABLED
  value: false
But the environment variable value must be a string: the template renders the raw value false without quotes, and YAML then parses it back into a boolean, which Kubernetes rejects for an env var.
Using the quote function on the value in the deployment template fixed the issue:
- name: GRAPHITE_ENABLED
  value: {{ .Values.env.graphiteEnabled | quote }}
which generated the following output
- name: GRAPHITE_ENABLED
  value: "false"

Helm Config-Map with yaml file

I am trying to access a file inside my helm templates as a config map, like below. I get an error as below.
However, it works when my application.yml doesn't have nested objects (e.g. name: test). Any ideas on what I could be doing wrong?
config-map.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  {{.Files.Get "application.yml"}}
application.yml:
some-config:
  application:
    name: some-application-name
ERROR:
ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadString: expects " or n, but found {, error found in #10 byte of ...|ication"
Looks like you have an indentation issue in your application.yml file. Perhaps invalid YAML? If I try your very same files I get the following:
$ helm template ./mychart -x templates/configmap.yaml
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-configmap
data:
  some-config:
    application:
      name: some-application-name
As per documentation:
Templates should be indented using two spaces (never tabs).
Template directives should have whitespace after the opening braces and before the closing braces.
Finally, it should look like:
{{ .Files.Get "application.yml" | nindent 2 }}
or
{{- .Files.Get "application.yml" | nindent 2 }}
to chomp whitespace on the left
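As a side note beyond the original answer: ConfigMap data values ultimately have to be strings, so if the goal is to ship the whole file as a single string value, a common pattern is to key it by filename and use a block scalar, roughly:
data:
  application.yml: |-
    {{- .Files.Get "application.yml" | nindent 4 }}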

How to pull environment variables with Helm charts

I have my deployment.yaml file within the templates directory of Helm charts with several environment variables for the container I will be running using Helm.
Now I want to be able to pull the environment variables locally from whatever machine helm is run on, so I can hide the secrets that way.
How do I pass this in and have helm grab the environment variables locally when I use Helm to run the application?
Here is some part of my deployment.yaml file
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          value: "app-username"
        - name: "PASSWORD"
          value: "28sin47dsk9ik"
...
...
How can I pull the value of USERNAME and PASSWORD from local environment variables when I run helm?
Is this possible? If yes, then how do I do this?
You can export the variable and use it while running helm install.
Before that, you have to modify your chart so that the value can be set at installation time.
Skip this part if you already know how to set up template fields.
Since you don't want to expose the data, it's better to store it as a Secret in Kubernetes.
First, add these two lines to your values file, so that both values can be set from outside.
username: root
password: password
Now add a secret.yaml file inside your templates folder and copy this code snippet into it.
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  password: {{ .Values.password | b64enc }}
  username: {{ .Values.username | b64enc }}
Now tweak your deployment yaml template and change the env section like this:
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Release.Name }}-auth
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Release.Name }}-auth
...
...
If you have modified your template correctly for the --set flag, you can supply the value from an environment variable.
$ export USERNAME=root-user
Now use this variable while running helm install,
$ helm install --set username=$USERNAME ./mychart
If you run this helm install in dry-run mode, you can verify the changes,
$ helm install --dry-run --set username=$USERNAME --debug ./mychart
[debug] Created tunnel using local port: '44937'
[debug] SERVER: "127.0.0.1:44937"
[debug] Original chart version: ""
[debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart
NAME: irreverant-meerkat
REVISION: 1
RELEASED: Fri Apr 20 03:29:11 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
username: root-user
COMPUTED VALUES:
password: password
username: root-user
HOOKS:
MANIFEST:
---
# Source: mychart/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: irreverant-meerkat-auth
data:
  password: password
  username: root-user
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irreverant-meerkat
  labels:
    app: irreverant-meerkat
spec:
  replicas: 1
  template:
    metadata:
      name: irreverant-meerkat
      labels:
        app: irreverant-meerkat
    spec:
      containers:
      - name: irreverant-meerkat
        image: alpine
        env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: irreverant-meerkat-auth
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: irreverant-meerkat-auth
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
  selector:
    matchLabels:
      app: irreverant-meerkat
You can see that the username data in the secret has changed to root-user.
I have added this example to a GitHub repo.
There is also some discussion in the kubernetes/helm repo regarding this; see that issue to learn about the other ways to use environment variables.
You can pass env key/value pairs from the values yaml by setting up the deployment yaml as below:
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        {{- range $name, $value := .Values.env }}
        - name: {{ $name }}
          value: {{ $value }}
        {{- end }}
In the values.yaml (a map, so the range above iterates names and values):
env:
  USERNAME: ""
  PASSWORD: ""
When you install the chart, you can pass the username and password values:
helm install chart_name --name release_name --set env.USERNAME="app-username" --set env.PASSWORD="28sin47dsk9ik"
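With those overrides, a sketch of the rendered env section (Go templates iterate map keys in sorted order):
env:
  - name: PASSWORD
    value: 28sin47dsk9ik
  - name: USERNAME
    value: app-username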
For those looking to use data structures instead of lists for their env variable files, this has worked for me:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        {{- range $key, $val := .Values.env }}
        - name: {{ $key }}
          value: {{ $val | quote }}
        {{- end }}
values.yaml:
env:
  FOO: "BAR"
  USERNAME: "CHANGEME"
  PASSWORD: "CHANGEME"
That way I can access specific values by name in other parts of the helm chart and pass the sensitive values via helm command line.
To get away from having to set each secret manually, you can use:
export MY_SECRET=123
envsubst < values.yaml | helm install my-release . --values -
where ${MY_SECRET} is referenced in your values.yaml file like:
mychart:
  secrets:
    secret_1: ${MY_SECRET}
Helm 3.1 supports post rendering (https://helm.sh/docs/topics/advanced/#post-rendering), which passes the rendered manifest to a script before it is actually sent to the Kubernetes API. Post rendering makes it possible to manipulate the manifest in multiple ways (e.g. use kustomize on top of Helm).
The simplest form of a post renderer which replaces predefined environment values could look like this:
#!/bin/sh
envsubst <&0
Note this will replace every occurrence of $<VARNAME>, which could collide with variables in the templates, such as shell scripts in liveness probes. So it's better to explicitly define the variables you want replaced: envsubst '${USERNAME} ${PASSWORD}' <&0
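As a sketch, the safer variant as a complete post renderer script (the variable names are just the ones from this example):
#!/bin/sh
# Substitute only the variables listed explicitly; everything else is passed through untouched.
envsubst '${USERNAME} ${PASSWORD}' <&0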
Define your env variables in the shell:
export USERNAME=john PASSWORD=my-secret
In the templates (e.g. secret.yaml) use the values defined in the values.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  username: {{ .Values.username }}
  password: {{ .Values.password }}
Note that you cannot apply string transformations like b64enc to these values, as they get injected into the manifest after Helm has already processed all YAML files. Instead you can encode them in the post renderer if required.
In the values.yaml use the variable placeholders:
...
username: ${USERNAME}
password: ${PASSWORD}
The parameter --post-renderer is supported in several Helm commands e.g.
helm install --dry-run --post-renderer ./my-post-renderer.sh my-chart
By using the post renderer the variables/placeholders automatically get replaced by envsubst without additional scripting.
I guess the question is how to look up an environment variable inside the chart, by reading the environment itself rather than passing the value in with --set.
For example: I have set a key "my_db_password" and want to change its value by reading an environment variable; that is not supported.
I am not very sure about Go templates, but I guess this is disabled for the reason explained in the Helm documentation: "We removed two for security reasons: env and expandenv (which would have given chart authors access to Tiller's environment)." https://helm.sh/docs/developing_charts/#know-your-template-functions
I think one simple way is to just set the value directly. For example, in your values.yml you want to pass the service name:
...
myapp:
  service:
    name: ""
...
Your service.yml just uses this value as usual:
{{ .Values.myapp.service.name }}
Then to set the value, use --set, like: --set myapp.service.name=hello
Then, for example, if you want to use an environment variable, export it before that:
# set your env variable
export MYAPP_SERVICE=hello
# pass it to helm
helm install myapp --set myapp.service.name=$MYAPP_SERVICE
If you do debug like:
helm install myapp --set myapp.service.name=$MYAPP_SERVICE --debug --dry-run ./myapp
You can see at the beginning of the output that your "hello" value was set:
USER-SUPPLIED VALUES:
myapp:
  service:
    name: hello
As an alternative to passing local environment variables, I like to store this kind of sensitive value in a folder ignored by your VCS, and use Helm's .Files object to read the file and provide the values to your templates.
In my opinion, the advantage is that it doesn't require the host operating the Helm chart to set any OS-specific environment variables, and it makes the chart self-contained while not exposing these values.
# In a folder not committed, e.g. <chart_base_directory>/secrets
username: app-username
password: 28sin47dsk9ik
Then in your chart templates:
# In deployment.yaml file
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
{{ .Files.Get "<chart_base_directory>/secrets" | indent 2 }}
As a result, everything the chart needs is accessible from within the directory where you define everything else; instead of setting system-wide env vars, it just needs a file.
This file can be generated automatically or copied from a committed template with dummy values. Helm will also fail early on install/upgrade if the file is missing, as opposed to silently creating your secret with username="" and password="" when your env vars haven't been defined, which only becomes obvious once your changes are applied to the cluster.
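A minimal sketch of the workflow this enables, assuming the real file is called secrets and the committed dummy template is called secrets.example (both names are placeholders):
# keep the real secrets file out of the VCS but readable by Helm
echo "secrets" >> .gitignore
# create the real file from the committed dummy template, then edit it
cp secrets.example secrets
helm install my-release .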