Should I use configMap for every environment variable?

I am using helm right now. My project looks like this:
values.yaml:
environmentVariables:
  KEY1: VALUE1
  KEY2: VALUE2
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "myproject.fullname" . }}
data:
{{- range $k, $v := .Values.environmentVariables }}
  {{ $k }}: {{ $v | quote }}
{{- end }}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "myproject.fullname" . }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            {{- range $k, $v := .Values.environmentVariables }}
            - name: {{ $k }}
              valueFrom:
                configMapKeyRef:
                  name: {{ template "myproject.fullname" $ }}
                  key: {{ $k }}
            {{- end }}
...
But right now, I'm really confused. Do I really need this ConfigMap? Is there any benefit to using a ConfigMap for environment variables?

Aside from the points about separation of config from pods, one advantage of a ConfigMap is it lets you make the values of the variables accessible to other Pods or apps that are not necessarily part of your chart.
It does add a little extra complexity, though, and there can be a large element of preference about when to use a ConfigMap. Since your ConfigMap keys are the names of the environment variables, you could simplify your Deployment a little by using envFrom, as shown below.
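A minimal sketch of that simplification, reusing the ConfigMap from the question:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    # imports every key in the ConfigMap as an environment variable
    envFrom:
      - configMapRef:
          name: {{ template "myproject.fullname" . }}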

It would work even if you didn't use a ConfigMap, but a ConfigMap has some advantages:
You can update the values at runtime without updating the Deployment, which means you might not need to restart your application (pods). If you don't use a ConfigMap, every time you update a value your application (or pod) will be recreated.
Separation of concerns: deployment configuration and external values are kept separate.

I feel like this is largely a matter of taste, but I've generally been avoiding ConfigMaps for cases like these.
env:
  {{- range $k, $v := .Values.environmentVariables }}
  - name: {{ quote $k }}
    value: {{ quote $v }}
  {{- end }}
You generally want a single source of truth and Helm can be that: you don't want to be in a situation where someone has edited a ConfigMap outside of Helm and a redeployment breaks local changes. So there's not a lot of value in a ConfigMap being "more editable" than a Deployment spec.
In principle (as @Hazim notes) you can update a ConfigMap's contents without restarting a container, but that intrinsically can't update environment variables in running containers, and restarting containers is so routine that doing it once shouldn't matter much.

Related

Helm create secret from env file

Kubectl provides a nice way to convert environment variable files into secrets using:
$ kubectl create secret generic my-env-list --from-env-file=envfile
Is there any way to achieve this in Helm? I tried the below snippet but the result was quite different:
kind: Secret
metadata:
  name: my-env-list
data:
{{ .Files.Get "envfile" | b64enc }}
It appears kubectl just does the simple thing and only splits on a single = character, so the Helm way would be to replicate that behavior (Helm has regexSplit, which will suffice for our purposes):
apiVersion: v1
kind: Secret
data:
{{- range .Files.Lines "envfile" }}
{{- if . }}
{{- $parts := regexSplit "=" . 2 }}
  {{ index $parts 0 }}: {{ index $parts 1 | b64enc }}
{{- end }}
{{- end }}
The {{ if . }} is there because .Files.Lines can return an empty string (for example, for a trailing newline), which of course doesn't match the pattern.
Be aware that kubectl's version also accepts bare variable names that it looks up in the local environment, which Helm has no support for doing, so if your envfile is formatted like that, this specific implementation will fail.
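For illustration, a hypothetical envfile showing both cases:
# splits on the first "=" only, so the value can itself contain "="
DATABASE_URL=postgres://user:pass@host/db?sslmode=require
# bareword: kubectl --from-env-file would read it from the local
# environment, but the template above would fail on it
HOME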
I would like to use env files too, but it seems Helm doesn't support that yet.
Instead of using an env file you could use a YAML file.
That is, convert this env file
#envfile
MYENV1=VALUE1
MYENV2=VALUE2
to this YAML file (mind the YAML format: there must always be a space after the colon)
#envfile.yaml
MYENV1: VALUE1
MYENV2: VALUE2
After this, you should put the generated envfile.yaml in the root folder of your Helm chart (at the same level as the values files).
Then set up your secret.yaml this way:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  annotations:
    checksum/config: {{ (tpl (.Files.Glob "envfile.yaml").AsSecrets . ) | sha256sum }}
type: Opaque
data:
{{- $v := $.Files.Get "envfile.yaml" | fromYaml }}
{{- range $key, $val := $v }}
{{ $key | indent 2 }}: {{ $val | b64enc }}
{{- end }}
In the data section we iterate over the generated envfile.yaml and encode each value to base64. The resulting Secret will be:
kubectl get secret my-secret -o yaml

apiVersion: v1
data:
  MYENV1: VkFMVUUx
  MYENV2: VkFMVUUy
kind: Secret
metadata:
  annotations:
    checksum/config: 8365925e9f9cf07b2a2b7f2ad8525ff79837d67eb0d41bb64c410a382bc3fcbc
  creationTimestamp: "2022-07-09T10:25:16Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: my-secret
  resourceVersion: "645673"
  uid: fc2b3722-e5ef-435e-85e0-57c63725bd8b
type: Opaque
Also, I'm using the checksum/config annotation so the Secret object is updated every time a value changes.
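A related pattern worth knowing, from the Helm docs' "Automatically Roll Deployments" tip (sketched here, not part of the answer above): put a checksum annotation on the Deployment's pod template instead, so that pods are rolled whenever the rendered Secret changes.
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # hash the rendered secret template; a changed hash changes the
        # pod template, which triggers a rollout
        checksum/config: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}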

nil pointer evaluating interface when installing a helm chart

I'm trying to install a chart to my cluster but I'm getting an error:
Error: template: go-api/templates/deployment.yaml:18:24: executing "go-api/templates/deployment.yaml"
at <.Values.deployment.container.name>: nil pointer evaluating interface {}.name
However, I executed the same commands for another two charts and it worked fine.
The template file I'm using is this:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.namespace }}
  labels: {{- include "chart.labels" . | nindent 4 }}
  name: {{ .Values.deployment.name }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels: {{ include "chart.matchLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.deployment.container.name }}
          image: {{ .Values.deployment.container.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.deployment.container.port }}
This can happen if the Helm values you're using to install don't have that particular block:
namespace: default
deployment:
  name: a-name
  replicas: 1
  # but no container:
To avoid this specific error in the template code, it's useful to pick out the parent dictionary into a variable; then if the parent is totally absent, you can decide what to do about it. This technique is a little more useful if there are optional fields or sensible defaults:
{{- $deployment := .Values.deployment | default dict }}
metadata:
  name: {{ $deployment.name | default (include "chart.fullname" .) }}
spec:
{{- if $deployment.replicas }}
  replicas: {{ $deployment.replicas }}
{{- end }}
If you really can't work without the value, Helm has a required function that can print a more specific error message.
{{- $deployment := .Values.deployment | required "deployment configuration is required" }}
(My experience has been that required values are somewhat frustrating as an end user, particularly if you're trying to run someone else's chart, and I would try to avoid this if possible.)
Given what you show, it's also possible you're making the chart too configurable. The container name, for example, is mostly a detail that only appears if you have a multi-container pod (or are using Istio); the container port is a fixed attribute of the image you're running. You can safely fix these values in the Helm template file, and then it's reasonable to provide defaults for things like the replica count or image name (consider setting the repository name, image name, and tag as separate variables).
{{- $deployment := .Values.deployment | default dict }}
{{- $registry := $deployment.registry | default "docker.io" }}
{{- $image := $deployment.image | default "my/image" }}
{{- $tag := $deployment.tag | default "latest" }}
containers:
  - name: app # fixed
    image: {{ printf "%s/%s:%s" $registry $image $tag }}
    {{- with .Values.imagePullPolicy }}
    imagePullPolicy: {{ . }}
    {{- end }}
    ports:
      - name: http
        containerPort: 3000 # fixed
If the value is defined in your values file and you're still getting the error, the issue could be that you're accessing the value inside range or a similar construct that changes the context.
For example, to use a named template mySuffix that was defined using .Values inside a range loop, we need to pass $ to the template function instead of the usual .:
{{- define "mySuffix" -}}
{{- .Values.suffix }}
{{- end }}
...
{{- range .Values.listOfValues }}
echo {{ template "mySuffix" $ }}
{{- end }}

Helm Environment Variables in if else

When building the image path, I want to fetch the Docker registry address from a ConfigMap.
I can't hard-code the registry address in the values.yaml file because the registry address is different for each customer, and I don't want to ask customers to enter this input manually. These Helm charts are deployed via Argo CD, so fetching the registry IP via shell and then invoking the helm command is also not an option.
I tried the code below. It doesn't work because the environment variables are not available in the context where the image path is built.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  template:
    metadata:
      labels:
        app: {{ template "helm-guestbook.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          {{- if eq .Values.isOnPrem "true" }}
          image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}
          {{- else }}
          env:
            - name: DOCKER_REGISTRY_IP
              valueFrom:
                configMapKeyRef:
                  name: docker-registry-config
                  key: DOCKER_REGISTRY_IP
Any pointers on how I can solve this using Helm itself? Thanks
Check out the lookup function: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
This could get very complicated very quickly, though, so be careful not to overuse it.
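A minimal sketch of what that could look like, assuming a ConfigMap named docker-registry-config with a DOCKER_REGISTRY_IP key already exists in the release namespace. Note that lookup only resolves against a live cluster: during helm template or --dry-run it returns an empty result, so verify how it behaves under your Argo CD setup.
{{- $registryIP := "" }}
{{- $cm := lookup "v1" "ConfigMap" .Release.Namespace "docker-registry-config" }}
{{- if $cm }}
{{- $registryIP = index $cm.data "DOCKER_REGISTRY_IP" }}
{{- end }}
containers:
  - name: {{ .Chart.Name }}
    image: {{ printf "%s/%s:%s" $registryIP .Values.image.repository .Values.image.tag }}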

Inheritance helm template

I have close to 25 deployments and services. Each Helm template has the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aa1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: aa1
    spec:
      containers:
        - env:
            - name: {{ .Values.env1 }}
              value: "yes"
          volumeMounts:
            - name: {{ .Values.v1 }}
              mountPath: {{ .Values.v1mountPath }}
              subPath: {{ .Values.v1subpath }}
Similarly, I have the same env and volumeMounts defined in all 25 templates.
Is there any way I can use template inheritance in Helm?
A single Helm .tpl file can contain multiple YAML documents, separated with the --- start-of-document separator. Also, you can use all of the standard Go text/template capabilities, including its looping constructs.
That means that you could write a template like this:
{{- $top := . -}}
{{- range .Values.names -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ . }}
spec:
  ...
  template:
    spec:
      containers:
        - env:
            - name: {{ $top.Values.env1 }}
              value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: {{ . }}
spec: { ... }
{{ end -}}
What this does is first save away the current context; since range resets the current template context . to the iterator value, we need to remember what its current value is. We then iterate through every value in a list from the values file. (The Sprig list function could create the list in the template.) For each item, we create a Deployment and a Service, each starting with a --- on its own line. When we need the current name, it is .; when we need something from the Helm values, we have to explicitly look it up via $top.Values.
Another possible approach is to write separate templates for the deployment and service, and then have files invoke each of them. Technically a template only takes one parameter, but you can use list and index to have that one parameter be a list.
{{/* _deployment.tpl */}}
{{- define "deployment" -}}
{{- $name := index . 0 -}}
{{- $top := index . 1 -}}
---
apiVersion: apps/v1
kind: Deployment
...
name: {{ $name }}
...
name: {{ $top.Values.env1 }}
...
{{ end -}}

{{/* aa1.yaml */}}
{{- template "deployment" (list "aa1" .) -}}
{{- template "service" (list "aa1" .) -}}
This can get almost arbitrarily complex (you can have conditionals based on the current object name; you can use the template index function to look up values in the values object; and so on). helm template will do some validation and write the rendered template to stdout without invoking Kubernetes, which is helpful for eyeballing it and can be a basis for automated tests.
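For example (release name and chart path here are hypothetical):
helm template my-release ./mychart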
The text/template "inheritance" capabilities (using e.g. block) aren't a good match here. The documentation for (*text/template.Template).Parse notes that, if Parse is called multiple times, the last define found wins, which gives you room to Parse a base template with a placeholder block and then Parse a refined template that redefines it. Helm doesn't really use this functionality; while it loads all of the YAML and supporting template files into the same template instance, it renders all of the top-level files separately instead of rendering some root template that allows for overrides.
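A minimal illustration of that "last define wins" behavior in Helm's flat template namespace (the file names are hypothetical; Helm parses chart files in a fixed order, so the definition parsed last is the one used everywhere):
{{/* templates/_a.tpl */}}
{{- define "mychart.greeting" -}}hello{{- end }}

{{/* templates/_b.tpl: parsed later, silently replaces the first definition */}}
{{- define "mychart.greeting" -}}bonjour{{- end }}

{{/* any template in the chart now renders "bonjour" */}}
greeting: {{ include "mychart.greeting" . }}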

How do I load multiple templated config files into a helm chart?

So I am trying to build a Helm chart.
In my templates directory I've got a file like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{ Do something here to load up a set of files | indent 2 }}
I have another directory in my chart, configmaps, containing a set of JSON files that themselves have templated variables in them:
a.json
b.json
c.json
Ultimately I'd like to be sure that in my chart I can reference:
volumes:
  - name: config-a
    configMap:
      name: config-map
      items:
        - key: a.json
          path: a.json
I had the same problem a few weeks ago with adding files and templates directly to a container.
Here is the sample syntax:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-{{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  nginx_conf: {{ tpl (.Files.Get "files/nginx.conf") . | quote }}
  ssl_conf: {{ tpl (.Files.Get "files/ssl.conf") . | quote }}
  dhparam_pem: {{ .Files.Get "files/dhparam.pem" | quote }}
  fastcgi_conf: {{ .Files.Get "files/fastcgi.conf" | quote }}
  mime_types: {{ .Files.Get "files/mime.types" | quote }}
  proxy_params_conf: {{ .Files.Get "files/proxy_params.conf" | quote }}
The second step is to reference it from the deployment:
volumes:
  - name: {{ $.Release.Name }}-configmap-volume
    configMap:
      name: nginx-configmap-{{ $.Release.Name }}
      items:
        - key: dhparam_pem
          path: dhparam.pem
        - key: fastcgi_conf
          path: fastcgi.conf
        - key: mime_types
          path: mime.types
        - key: nginx_conf
          path: nginx.conf
        - key: proxy_params_conf
          path: proxy_params.conf
This is still current. Here you can see two types of import:
regular files without templating
configuration files with dynamic variables inside (rendered through tpl)
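As a sketch of the second type applied to the question's configmaps/ directory of JSON files (assuming the files live under configmaps/ in the chart root), .Files.Glob can be combined with tpl so every file is templated and keyed by its base name:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{- range $path, $_ := .Files.Glob "configmaps/*.json" }}
  {{ base $path }}: {{ tpl ($.Files.Get $path) $ | quote }}
{{- end }}
With that, the volumes block from the question (key: a.json) works unchanged.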
Please don't forget to read the official docs:
https://helm.sh/docs/chart_template_guide/accessing_files/
Good luck!
To include all files from the directory config-dir/ with {{ range }}:
my-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  {{- $files := .Files }}
  {{- range $key, $value := .Files }}
  {{- if hasPrefix "config-dir/" $key }} {{/* only when in config-dir/ */}}
  {{ $key | trimPrefix "config-dir/" }}: {{ $files.Get $key | quote }} {{/* adapt $key as desired */}}
  {{- end }}
  {{- end }}
my-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      containers:
        - name: my-pod-container
          ...
          volumeMounts:
            - name: my-volume
              mountPath: /config
              readOnly: true # is RO anyway for configMap
      volumes:
        - name: my-volume
          configMap:
            name: my-configmap
            # defaultMode: 0555 # mode rx for all
I assume that a.json, b.json, c.json, etc. is a defined list and you know all the contents (apart from the bits that you want to set as values through templated variables). I'm also assuming you only want to expose parts of the files' content to users, not let the user configure the whole file content. (But if I'm assuming wrong and you do want to let users set the whole file content, then the suggestion from @hypnoglow of following the datadog chart seems to me a good one.) If so, I'd suggest the simplest way to do it is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  a.json: |
    # content of a.json in here, including any templated stuff with {{ }}
  b.json: |
    # content of b.json in here, including any templated stuff with {{ }}
  c.json: |
    # content of c.json in here, including any templated stuff with {{ }}
I guess you'd like to mount then to the same directory. It would be tempting for cleanliness to use different configmaps but that would then be a problem for mounting to the same directory. It would also be nice to be able to load the files independently using .Files.Glob to be able to reference the files without having to put the whole content in the configmap but I don't think you can do that and still use templated variables in them... However, you can do it with Files.Get to read the file content as a string and the pass that into tpl to put it through the templating engine as #Oleg Mykolaichenko suggests in https://stackoverflow.com/a/52009992/9705485. I suggest everyone votes for his answer as it is the better solution. I'm only leaving my answer here because it explains why his suggestion is so good and some people may prefer the less abstract approach.