How do I access the current user in a helm chart template - kubernetes

I have a helm chart template, and I would like to use the result of whoami as a template variable. How do I do this?
So if my values.yaml file has:
env:
  uniqueId: {{ whoami? }}
how might I do this?
Note: I am on OS X, so I believe whoami assumes a Linux environment; however, in the spirit of this being deployment agnostic, I presume there is a non-Unix way of doing this.

The Helm Chart's "values.yaml" file is typically for default values. Anything that you'd like to override should be done at time of install/upgrade of the chart.
The Helm docs show a lot of different ways in which values can be used: https://github.com/kubernetes/helm/blob/master/docs/charts.md
In this case, one option is to set the value on the command line:
helm install --set env.whoami=$(id -un) ./your-chart.tgz
You could then have a values.yaml file like:
env:
  whoami: "default"
Finally, you can use it in a template like:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Chart.Version }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    env:
      - name: WHOAMI
        value: {{ .Values.env.whoami }}
Obviously your template will vary, the above is just a snippet.
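If you want to check the rendered output before actually installing, a dry run shows the substituted value (the chart path is the same placeholder as above):
helm install --dry-run --debug --set env.whoami=$(id -un) ./your-chart.tgz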

Related

Helm private values

I could not find anything by just googling, does Helm support private values?
So I have my chart and my values.yaml
privateProp: hello
publicProp: world
I have some values that I want to expose to the end user of my chart and others that I do not, however those "private" values are used in many places.
For example: publicProp is overridable by the user of the chart, but I would like to block access to privateProp, even though it is reused in many places:
containers:
  name: {{ .Values.privateProp }}
nodeSelector:
  name: {{ .Values.privateProp }}
I saw there is {{$privateProp := "hello"}}, but it is not clear how I can access it elsewhere in my files
How can I achieve this?
Ok, I have found a solution to my problem.
You can create a file called _variables.tpl (the name does not matter) and then declare a named template in it:
{{- define "privateProp" -}}
{{- print "hello" -}}
{{- end -}}
and then you can use it wherever you want in your chart by doing this:
spec:
  containers:
    - name: {{ .Values.dashboard.containers.name }}
      image: {{ .Values.dashboard.containers.image.repository }}:{{ .Values.dashboard.containers.image.tag }}
      imagePullPolicy: Always
      ports:
        - containerPort: {{ include "privateProp" . }} # <== This
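Since the value is reused in many places, the same include works in any other template file of the chart, for example in a nodeSelector mirroring the question's snippet (the label key is only illustrative):
nodeSelector:
  name: {{ include "privateProp" . }}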

Kubernetes multiple environment variables per environment using Helm 3 go template

I have two different values files, one for stage and one for production (values.dev.yaml and values.prod.yaml), and I am using Helm 3. I would like to learn the best practice for passing environment variables per environment. For instance, we need to set different values for the NODE_ENV variable.
- Should I hard-code the variable as below and pass the environment-specific value with the --set flag when running helm upgrade/install?
- What is the correct way to do this with Go templates? Can we specify something like {{ .Values.node_env.value }}, set the value in the values yaml, and use only the -f values.yaml flag?
containers:
  - name: {{ .Chart.Name }}
    securityContext:
      {{- toYaml .Values.securityContext | nindent 12 }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
    resources:
      {{- toYaml .Values.resources | nindent 12 }}
    env:
      - name: "NODE_ENV"
        value: "stage"
      - name: "NODE_ENV"
        value: "production"
If you have one values file per environment (it is not clear to me whether this is your case), like values.prod.yaml for prod and values.dev.yaml for dev, then your template can look like this.
This will cause the template to look for extraEnv: in your values.{dev,prod}.yaml and iterate over all key/value pairs in that section.
env:
  {{- range $key, $value := .Values.extraEnv }}
  - name: {{ $key }}
    value: {{ $value | quote }}
  {{- end }}
In your values.dev.yaml file you add all the KEY: value pairs that are specific to this environment. Note that you can have multiple key/value pairs here; all of them will be loaded. In this case we have NODE_ENV, ANOTHER_KEY, YET_ANOTHER_KEY, and all of them will be loaded.
extraEnv:
  NODE_ENV: stage
  ANOTHER_KEY: value
  YET_ANOTHER_KEY: value
The same goes for your values.prod.yaml: multiple KEY: value pairs can be specified, and all of them will be loaded.
extraEnv:
  NODE_ENV: production
  ANOTHER_KEY: value
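You then pick the environment at install/upgrade time by passing the matching values file; the release and chart names below are placeholders:
# stage
helm upgrade --install myapp ./mychart -f values.dev.yaml
# production
helm upgrade --install myapp ./mychart -f values.prod.yaml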

Helm chart failing with Required value

I am trying to create a Helm chart for kafka-connect. For testing purposes, and to find out where exactly I am wrong, I am not using secrets for my access key and secret access key.
My helm chart is failing with the error:
helm install helm-kafka-0.1.0.tgz --namespace prod -f helm-kafka/values.yaml
Error: release loping-grizzly failed: Deployment.apps "kafka-connect" is invalid: spec.template.spec.containers[0].env[15].name: Required value
Based on issue: https://github.com/kubernetes/kubernetes/issues/46861
I changed my number to be a string. But still, the issue persists.
Can someone point me on how to troubleshoot/solve this?
My template/deployment.yaml
spec:
  containers:
    - name: kafka-connect
      image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
      env:
        - name: "CONNECT_LOG4J_LOGGERS"
          value: "org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR"
        - name: "CONNECT_OFFSET_STORAGE_TOPIC"
          value: "connect-offsets"
        - name: "CONNECT_PLUGIN_PATH"
          value: "/usr/share/java"
        - name: "CONNECT_PRODUCER_ACKS"
          value: "all"
        - name: "CONNECT_PRODUCER_COMPRESSION_TYPE"
          value: "snappy"
        - nane: "CONNECT_STATUS_STORAGE_TOPIC"
          value: "connect-status"
In:
- nane: "CONNECT_STATUS_STORAGE_TOPIC"
  value: "connect-status"
nane: should have an "m".
When the error message says spec.template.spec.containers[0].env[15].name you can find the first (zero-indexed) container definition, and within that the sixteenth (zero-indexed) environment variable, which has this typo.
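One way to catch this kind of typo before the API server rejects it is to render the chart locally and inspect the output; the chart directory below is assumed from the -f path in your install command:
helm template helm-kafka/ -f helm-kafka/values.yaml > rendered.yaml
grep -n "nane" rendered.yaml   # or read through the env: list of the first container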
There's something wrong with the substitution of:
image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
One or both of the values don't exist in your values.yaml, or one or both have extra characters, possibly newlines.
If you look at the upstream chart, you see that it has image and imageTag, so in your template, you would have to have something like this:
image: {{ .Values.image }}:{{ .Values.imageTag }}
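For illustration, the two values layouts would look roughly like this in values.yaml (the repository and tag shown here are placeholders, not taken from the question):
# flat keys, matching the upstream chart
image: confluentinc/cp-kafka-connect
imageTag: "5.0.0"

# or nested keys, matching the template in the question
image:
  repository: confluentinc/cp-kafka-connect
  tag: "5.0.0"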

How to pull environment variables with Helm charts

I have my deployment.yaml file within the templates directory of Helm charts with several environment variables for the container I will be running using Helm.
Now I want to be able to pull the environment variables locally from whatever machine helm is run on, so I can hide the secrets that way.
How do I pass this in and have helm grab the environment variables locally when I use Helm to run the application?
Here is some part of my deployment.yaml file
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          value: "app-username"
        - name: "PASSWORD"
          value: "28sin47dsk9ik"
...
...
How can I pull the value of USERNAME and PASSWORD from local environment variables when I run helm?
Is this possible? If yes, then how do I do this?
You can export the variable and use it while running helm install.
Before that, you have to modify your chart so that the value can be set at installation time.
Skip this part if you already know how to set up template fields.
As you don't want to expose the data, it's better to store it as a Secret in Kubernetes.
First of all, add these two lines to your values file, so that the two values can be set from outside.
username: root
password: password
Now add a secret.yaml file inside your templates folder, and copy this code snippet into that file.
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  password: {{ .Values.password | b64enc }}
  username: {{ .Values.username | b64enc }}
Now tweak your deployment yaml template and change the env section like this:
...
...
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        - name: "USERNAME"
          valueFrom:
            secretKeyRef:
              key: username
              name: {{ .Release.Name }}-auth
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              key: password
              name: {{ .Release.Name }}-auth
...
...
If you have modified your template correctly for the --set flag,
you can set the value from an environment variable.
$ export USERNAME=root-user
Now use this variable while running helm install,
$ helm install --set username=$USERNAME ./mychart
If you run this helm install in dry-run mode, you can verify the changes,
$ helm install --dry-run --set username=$USERNAME --debug ./mychart
[debug] Created tunnel using local port: '44937'
[debug] SERVER: "127.0.0.1:44937"
[debug] Original chart version: ""
[debug] CHART PATH: /home/maruf/go/src/github.com/the-redback/kubernetes-yaml-drafts/helm-charts/mychart
NAME: irreverant-meerkat
REVISION: 1
RELEASED: Fri Apr 20 03:29:11 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
username: root-user
COMPUTED VALUES:
password: password
username: root-user
HOOKS:
MANIFEST:
---
# Source: mychart/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: irreverant-meerkat-auth
data:
  password: password
  username: root-user
---
# Source: mychart/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: irreverant-meerkat
  labels:
    app: irreverant-meerkat
spec:
  replicas: 1
  template:
    metadata:
      name: irreverant-meerkat
      labels:
        app: irreverant-meerkat
    spec:
      containers:
        - name: irreverant-meerkat
          image: alpine
          env:
            - name: "USERNAME"
              valueFrom:
                secretKeyRef:
                  key: username
                  name: irreverant-meerkat-auth
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  key: password
                  name: irreverant-meerkat-auth
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
  selector:
    matchLabels:
      app: irreverant-meerkat
You can see that the data of username in secret has changed to root-user.
I have added this example to a GitHub repo.
There is also some discussion in the kubernetes/helm repo regarding this; see that issue to learn about other ways to use environment variables.
You can pass env key/value pairs from the values yaml by setting up the deployment yaml as below:
spec:
  restartPolicy: Always
  containers:
    - name: sample-app
      image: "sample-app:latest"
      imagePullPolicy: Always
      env:
        {{- range $name, $value := .Values.env }}
        - name: {{ $name }}
          value: {{ $value }}
        {{- end }}
In the values.yaml, declare env as a map of key/value pairs, since the range above iterates over a map:
env:
  USERNAME: ""
  PASSWORD: ""
When you install the chart, you can pass the username and password values:
helm install chart_name --name release_name --set env.USERNAME="app-username" --set env.PASSWORD="28sin47dsk9ik"
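For reference, with the map-style env above and those --set flags, the rendered container env would come out roughly like this (a range over a map iterates keys in sorted order; this is a sketch, not captured output):
env:
  - name: PASSWORD
    value: 28sin47dsk9ik
  - name: USERNAME
    value: app-username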
For those looking to use data structures instead of lists for their env variables, this has worked for me:
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      imagePullPolicy: {{ .Values.image.pullPolicy }}
      env:
        {{- range $key, $val := .Values.env }}
        - name: {{ $key }}
          value: {{ $val | quote }}
        {{- end }}
values.yaml:
env:
  FOO: "BAR"
  USERNAME: "CHANGEME"
  PASSWORD: "CHANGEME"
That way I can access specific values by name in other parts of the helm chart and pass the sensitive values via helm command line.
To get away from having to set each secret manually, you can use:
export MY_SECRET=123
envsubst < values.yaml | helm install my-release . --values -
where ${MY_SECRET} is referenced in your values.yaml file like:
mychart:
  secrets:
    secret_1: ${MY_SECRET}
Helm 3.1 supports post rendering (https://helm.sh/docs/topics/advanced/#post-rendering), which passes the manifest to a script before it is actually sent to the Kubernetes API. Post rendering allows you to manipulate the manifest in multiple ways (e.g. use kustomize on top of Helm).
The simplest form of a post renderer which replaces predefined environment values could look like this:
#!/bin/sh
envsubst <&0
Note this will replace every occurrence of $<VARNAME>, which could collide with variables in the templates, like shell scripts in liveness probes. So it is better to explicitly list the variables you want replaced: envsubst '${USERNAME} ${PASSWORD}' <&0
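Putting that together, the restricted post renderer can live in a small script file; the file name below just matches the command further down:
#!/bin/sh
# my-post-renderer.sh: substitute only the expected variables and
# pass everything else through untouched
envsubst '${USERNAME} ${PASSWORD}' <&0
Remember to make it executable (chmod +x my-post-renderer.sh) before passing it to --post-renderer.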
Define your env variables in the shell:
export USERNAME=john PASSWORD=my-secret
In the templates (e.g. secret.yaml), use the values defined in the values.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  username: {{ .Values.username }}
  password: {{ .Values.password }}
Note that you cannot apply string transformations like b64enc to these strings, as they get injected into the manifest after Helm has already processed all YAML files. Instead you can encode them in the post renderer if required.
In the values.yaml use the variable placeholders:
...
username: ${USERNAME}
password: ${PASSWORD}
The parameter --post-renderer is supported in several Helm commands e.g.
helm install --dry-run --post-renderer ./my-post-renderer.sh my-chart
By using the post renderer the variables/placeholders automatically get replaced by envsubst without additional scripting.
I guess the question is how to look up an environment variable from inside the chart itself, rather than passing it in with --set.
For example: I have set a key "my_db_password" and want to change its value by reading an environment variable, which is not supported.
I am not very sure about Go templates, but I guess this is disabled for the reason explained in the Helm documentation: "We removed two for security reasons: env and expandenv (which would have given chart authors access to Tiller's environment)." https://helm.sh/docs/developing_charts/#know-your-template-functions
I think one simple way is to just set the value directly. For example, in your values.yaml, you want to pass the service name:
...
myapp:
  service:
    name: ""
...
Your service.yaml just uses this value as usual:
{{ .Values.myapp.service.name }}
Then to set the value, use --set, like: --set myapp.service.name=hello
Then, for example, if you want to use the environment variable, do export before that:
#set your env variable
export MYAPP_SERVICE=hello
#pass it to helm
helm install myapp --set myapp.service.name=$MYAPP_SERVICE
If you do debug like:
helm install myapp --set myapp.service.name=$MYAPP_SERVICE --debug --dry-run ./myapp
You can see, at the beginning of the output, that your "hello" was set:
USER-SUPPLIED VALUES:
myapp:
  service:
    name: hello
As an alternative to passing local environment variables, I like to store these kinds of sensitive values in a folder ignored by your VCS, and use the Helm .Files object to read them and provide the values to your templates.
In my opinion, the advantage is that it doesn't require the host that will operate the Helm chart to set any OS-specific environment variable, and it makes the chart self-contained whilst not exposing these values.
# In a folder not committed, e.g. <chart_base_directory>/secrets
username: app-username
password: 28sin47dsk9ik
Then in your chart templates:
# In deployment.yaml file
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
stringData:
{{ .Files.Get "<chart_base_directory>/secrets" | indent 2 }}
As a result, everything the Chart needs is accessible from within the directory where you define everything else. And instead of setting system-wide env vars, it just needs a file.
This file can be generated automatically, or copied from a committed template with dummy values. Helm will also fire an error early on install/update if this isn't defined, as opposed to creating your secret with username="" and password="" if your env vars haven't been defined, which only becomes obvious once your changes are applied to the cluster.
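A sketch of that committed-template approach (file names are only a suggestion):
# committed file with dummy values, e.g. secrets.example
#   username: changeme
#   password: changeme
# copy it locally (the copy stays ignored by VCS) and fill in real values:
cp secrets.example secrets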

How to reference a value defined in a template in a sub-chart in helm for kubernetes?

I'm starting to write helm charts for our services.
There are two things I'm not sure how they are supposed to work or what to do with them.
First: the release name. When installing a chart, you specify a name which helm uses to create a release. This release name is often referenced within a chart to properly isolate chart installs from each other, as far as I can tell. For example, the postgres chart contains:
{{- define "postgresql.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
Which is then used for the service:
metadata:
  name: {{ template "postgresql.fullname" . }}
It ends up looking like "myrelease-postgresql" in Kubernetes.
I wonder what a good release name is? What is typically used for this? A version? Or some code-name like the ubuntu releases?
Second: referencing values.
My chart uses postgresql as a sub-chart. I'd like to not duplicate the way the value for the name of the postgresql service is created (see the snippet above).
Is there a way I can reference the service name of a sub-chart, or that template define {{ template "postgresql.fullname" . }}, in the parent chart? I need to pass it into my service as the database host (which works if I hardcode everything, but that cannot be the intent).
I tried:
env:
  - name: DB_HOST
    value: {{ template "mychart.postgresql.fullname" . }}
But that leads to an error message:
template "mychart.postgresql.fullname" not defined
I've seen examples of charts doing similar things, like the odoo chart, but there the logic for how the postgresql host name is created is copied and a separate define is added to the template.
So is there a way to access sub-chart names? Or values or template defines?
Thanks!
Update after some digging:
According to Subcharts and Globals the templates are shared between charts.
So what I can do is this:
In my chart in _helpers.tpl I add (overwrite) the postgres block:
{{- define "postgresql.fullname" -}}
{{- $name := .Values.global.name -}}
{{- printf "%s-%s" $name "postgresql" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
So this value is used when the sub-chart is deployed. I cannot reference all values or the chart name in here as it will be different in the sub-chart - so I used a global value.
Like this I know the value of the service that is created in the sub-chart.
Not sure if this is the best way to do this :-/
Are you pulling in postgresql as a subchart of your chart (via your chart's requirements.yaml)? If so, both the postgresql (sub) chart and your chart will have the same .Release.Name - thus, you could specify your container's environment as
env:
  - name: DB_HOST
    value: {{ printf "%s-postgresql" .Release.Name }}
if you override postgresql's name by adding the following to your chart's values.yaml:
postgresql:
  nameOverride: your-postgresql
then your container's env would be:
env:
  - name: DB_HOST
    value: {{ printf "%s-%s" .Release.Name .Values.postgresql.nameOverride }}
You can overwrite the values of the subchart with the values of the parent chart as described here:
https://helm.sh/docs/chart_template_guide/subcharts_and_globals/
I don't think it's possible (and it also doesn't make sense) to override the template name of the subchart.
What I would do is define the database service name in the .Values files both in the parent and sub charts and let helm override the one in the subchart - that way you will always have the database name in the parent chart. This would however mean that the service name of the database should not be {{ template "name" . }}, but something like {{ .Values.database.service.name }}
mychart/values.yaml
mysubchart:
  service:
    name: my-database
mychart/templates/deployment.yaml
env:
  - name: DB_HOST
    value: {{ .Values.mysubchart.service.name }}
mychart/charts/mysubchart/values.yaml
service:
  name: my-database
mychart/charts/mysubchart/templates/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.service.name }}
Another way is to use global chart values, also described in https://helm.sh/docs/chart_template_guide/subcharts_and_globals/
For values defined in a helpers .tpl file instead of values.yaml:
To access such a template from your own chart, you do the following:
{{ template "keycloak.fullname" . }}
To access it from a sub chart:
{{ template "keycloak.fullname" .Subcharts.keycloak }}
You could import values from a sub chart as described here: https://helm.sh/docs/topics/charts/#importing-child-values-via-dependencies.
However, there is a caveat: this does not work for values defined at the root level of the values.yaml.
See this issue for more information: https://github.com/helm/helm/issues/9817
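For completeness, importing child values via dependencies is declared in the parent chart's Chart.yaml roughly like this (the chart name, version and keys below are hypothetical):
dependencies:
  - name: mysubchart
    version: 0.1.0
    repository: "file://charts/mysubchart"
    import-values:
      - child: service
        parent: subchartService
The parent chart can then read the imported block as {{ .Values.subchartService.name }}; as noted above, keys defined at the root level of the child's values.yaml cannot be imported this way.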