I am trying to build a Helm chart. In my templates directory I have a file like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{ Do something here to load up a set of files | indent 2 }}
I have another directory in my chart, configmaps, containing a set of JSON files that themselves have templated variables in them:
a.json
b.json
c.json
Ultimately, I'd like to be sure that in my chart I can reference:
volumes:
  - name: config-a
    configMap:
      name: config-map
      items:
        - key: a.json
          path: a.json
I had the same problem a few weeks ago when adding files and templates directly to a container. Here is the sample syntax:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-{{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  nginx_conf: {{ tpl (.Files.Get "files/nginx.conf") . | quote }}
  ssl_conf: {{ tpl (.Files.Get "files/ssl.conf") . | quote }}
  dhparam_pem: {{ .Files.Get "files/dhparam.pem" | quote }}
  fastcgi_conf: {{ .Files.Get "files/fastcgi.conf" | quote }}
  mime_types: {{ .Files.Get "files/mime.types" | quote }}
  proxy_params_conf: {{ .Files.Get "files/proxy_params.conf" | quote }}
The second step is to reference it from the deployment:
volumes:
  - name: {{ $.Release.Name }}-configmap-volume
    configMap:
      name: nginx-configmap-{{ $.Release.Name }}
      items:
        - key: dhparam_pem
          path: dhparam.pem
        - key: fastcgi_conf
          path: fastcgi.conf
        - key: mime_types
          path: mime.types
        - key: nginx_conf
          path: nginx.conf
        - key: proxy_params_conf
          path: proxy_params.conf
        - key: ssl_conf
          path: ssl.conf
This is still current. Here you can see two types of file import:
regular files without templating
configuration files with dynamic variables inside
Please do not forget to read the official docs:
https://helm.sh/docs/chart_template_guide/accessing_files/
Good luck!
To include all files from the directory config-dir/, use {{ range }}:
my-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
{{- $files := .Files }}
{{- range $key, $value := .Files }}
{{- if hasPrefix "config-dir/" $key }} {{/* only when in config-dir/ */}}
  {{ $key | trimPrefix "config-dir/" }}: {{ $files.Get $key | quote }} {{/* adapt $key as desired */}}
{{- end }}
{{- end }}
my-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      containers:
        - name: my-pod-container
          ...
          volumeMounts:
            - name: my-volume
              mountPath: /config
              readOnly: true # is read-only anyway for a configMap
      volumes:
        - name: my-volume
          configMap:
            name: my-configmap
            # defaultMode: 0555 # mode r-x for all
I assume that a.json, b.json, c.json, etc. form a defined list and that you know all of their contents (apart from the bits that you want to set as values through templated variables). I'm also assuming you only want to expose parts of the content of the files to users, not let the user configure the whole file content. (But if I'm assuming wrong and you do want to let users set the whole file content, then the suggestion from #hypnoglow of following the datadog chart seems a good one to me.) If so, I'd suggest the simplest way to do it is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  a.json: |
    # content of a.json in here, including any templated stuff with {{ }}
  b.json: |
    # content of b.json in here, including any templated stuff with {{ }}
  c.json: |
    # content of c.json in here, including any templated stuff with {{ }}
I guess you'd like to mount them to the same directory. It would be tempting, for cleanliness, to use separate ConfigMaps, but that would then be a problem for mounting into the same directory. It would also be nice to load the files independently using .Files.Glob, so that you could reference the files without having to put their whole content in the ConfigMap, but I don't think you can do that and still use templated variables in them... However, you can do it with .Files.Get to read the file content as a string and then pass that into tpl to run it through the templating engine, as #Oleg Mykolaichenko suggests in https://stackoverflow.com/a/52009992/9705485. I suggest everyone votes for his answer as it is the better solution. I'm only leaving my answer here because it explains why his suggestion is so good, and because some people may prefer the less abstract approach.
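To make that concrete, here is a minimal sketch of the .Files.Glob plus tpl combination, assuming the JSON files live in a configmaps/ directory inside the chart (the directory name from the question); treat it as a starting point rather than a drop-in template:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{- range $path, $_ := .Files.Glob "configmaps/*.json" }}
  {{ base $path }}: {{ tpl ($.Files.Get $path) $ | quote }} {{/* the key becomes the bare file name, e.g. a.json */}}
{{- end }}
With that in place, the volumes block from the question can reference the key a.json unchanged.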
We configured the CSI driver in our cluster for secret management and used the secret provider class template below to automatically assign secrets to a deployment's environment variables. This setup is working fine.
But there are two things I have issues with. Whenever changes are made to the secret, say adding a new secret to the YAML and to the key vault, the next release fails on the helm upgrade command, stating that the specified secret is not found.
To solve this I currently have to uninstall all helm releases and install them again, which means downtime. How can I handle this scenario without any downtime?
Secondly, is there a recommended way to restart the Pods when the secret template changes?
values.yaml for MyAppA
keyvault:
  name: mykv
  tenantId: ${tenantId}$
  clientid: "#{spid}#"
  clientsecret: "#{spsecret}#"
  secrets:
    - MyAPPA_SECRET1_NAME1
    - MyAPPA_SECRET2_NAME2
    - MyAPPA_SECRET3_NAME3
deployment.yaml, the env part is as below:
{{- if eq .Values.keyvault.enabled true }}
{{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
- name: {{ . }}
  valueFrom:
    secretKeyRef:
      name: {{ $.Release.Name }}-kvsecret
      key: {{ . }}
{{- end }}
{{- end }}
volumeMounts:
  - name: {{ $.Release.Name }}-volume
    mountPath: '/mnt/secrets-store'
    readOnly: true
volumes:
  - name: {{ $.Release.Name }}-volume
    csi:
      driver: 'secrets-store.csi.k8s.io'
      readOnly: true
      volumeAttributes:
        secretProviderClass: {{ $.Release.Name }}-secretproviderclass
      nodePublishSecretRef:
        name: {{ $.Release.Name }}-secrets-store-creds
The SecretProviderClass YAML file is as below:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: {{ $.Release.Name }}-secretproviderclass
  labels:
    app: {{ $.Release.Name }}
    chart: "{{ $.Release.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  provider: azure
  secretObjects:
    - data:
      {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - key: {{ . }}
          objectName: {{ $.Release.Name | upper }}-{{ . }}
      {{- end }}
      secretName: {{ $.Release.Name }}-kvsecret
      type: opaque
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""
    keyvaultName: {{ .Values.keyvault.name | default "mydev-kv" }}
    objects: |
      array:
      {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - |
          objectName: {{ $.Release.Name | upper }}-{{ . }}
          objectType: secret
      {{- end }}
    tenantId: {{ .Values.keyvault.tenantid }}
{{- end }}
{{- end -}}
{{- define "commonobject.secretproviderclass" -}}
{{- template "commonobject.util.merge" (append . "commonobject.secretproviderclass.tpl") -}}
{{- end -}}
The problem is not in the helm upgrade command. I discovered this is a limitation of the CSI driver / SecretProviderClass: when the deployment is already created, the SecretProviderClass resource is updated but the SecretProviderClassPodStatuses is not, so the secrets are not updated.
Two potential solutions to update secrets:
delete the secret and restart/recreate the pod => this works, but it sounds more like a workaround than an actual solution
set enableSecretRotation to true => this has been implemented in the CSI driver recently and is in an 'alpha' version
https://secrets-store-csi-driver.sigs.k8s.io/topics/secret-auto-rotation.html
Edited:
In the end, I used this command to enable automatic secret rotation in Azure Kubernetes Service:
az aks addon update -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 0.5m
You can use the following command to check if this option is enabled:
az aks addon show -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider
More info here:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation
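As for the second part of the question (restarting Pods when the secret template changes), a common Helm pattern is to put a checksum of the rendered template into the Deployment's pod annotations so that any change rolls the pods. A minimal sketch, assuming the SecretProviderClass lives in templates/secretproviderclass.yaml (adjust the file name to your chart):
spec:
  template:
    metadata:
      annotations:
        {{/* hypothetical template path; any change to it triggers a rollout */}}
        checksum/secretproviderclass: {{ include (print $.Template.BasePath "/secretproviderclass.yaml") . | sha256sum }}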
Kubectl provides a nice way to convert environment variable files into secrets using:
$ kubectl create secret generic my-env-list --from-env-file=envfile
Is there any way to achieve this in Helm? I tried the snippet below, but the result was quite different:
kind: Secret
metadata:
  name: my-env-list
data:
  {{ .Files.Get "envfile" | b64enc }}
It appears kubectl just does the simple thing and splits only on a single = character, so the Helm way would be to replicate that behavior (Helm has regexSplit, which will suffice for our purposes):
apiVersion: v1
kind: Secret
data:
{{- range .Files.Lines "envfile" }}
{{- if . }}
{{- $parts := regexSplit "=" . 2 }}
  {{ index $parts 0 }}: {{ index $parts 1 | b64enc }}
{{- end }}
{{- end }}
The {{ if . }} is there because .Files.Lines returns an empty string for blank lines (and the trailing newline), which of course doesn't match the pattern.
Be aware that kubectl's version also accepts bare keys that are looked up from the environment, which Helm has no way of doing, so if your envfile is formatted like that, this specific implementation will fail.
I would like to use env files too, but it seems Helm doesn't support that yet.
Instead of using an env file you could use a YAML file.
I mean, convert this env file:
#envfile
MYENV1=VALUE1
MYENV2=VALUE2
to this YAML file (check the YAML format; there should always be a space after the colon):
#envfile.yaml
MYENV1: VALUE1
MYENV2: VALUE2
After this, move the generated envfile.yaml to the root folder of your Helm chart (the same level as the values files).
Then set up your secret.yaml in this way:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  annotations:
    checksum/config: {{ (tpl (.Files.Glob "envfile.yaml").AsSecrets . ) | sha256sum }}
type: Opaque
data:
{{- $v := $.Files.Get "envfile.yaml" | fromYaml }}
{{- range $key, $val := $v }}
{{ $key | indent 2 }}: {{ $val | b64enc }}
{{- end }}
In the data property we iterate over the generated envfile.yaml and encode each value to base64. The resulting secret will be:
$ kubectl get secret my-secret -o yaml
apiVersion: v1
data:
  MYENV1: VkFMVUUx
  MYENV2: VkFMVUUy
kind: Secret
metadata:
  annotations:
    checksum/config: 8365925e9f9cf07b2a2b7f2ad8525ff79837d67eb0d41bb64c410a382bc3fcbc
  creationTimestamp: "2022-07-09T10:25:16Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: my-secret
  resourceVersion: "645673"
  uid: fc2b3722-e5ef-435e-85e0-57c63725bd8b
type: Opaque
Also, I'm using the checksum/config annotation to update the secret object every time a value is updated.
I'm trying to install a chart to my cluster but I'm getting an error:
Error: template: go-api/templates/deployment.yaml:18:24: executing "go-api/templates/deployment.yaml"
at <.Values.deployment.container.name>: nil pointer evaluating interface {}.name
However, I executed the same commands for two other charts and it worked fine.
The template file I'm using is this:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.namespace }}
  labels: {{- include "chart.labels" . | nindent 4 }}
  name: {{ .Values.deployment.name }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels: {{ include "chart.matchLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.deployment.container.name }}
          image: {{ .Values.deployment.container.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.deployment.container.port }}
This can happen if the Helm values you're using to install don't have that particular block:
namespace: default
deployment:
  name: a-name
  replicas: 1
  # but no container:
To avoid this specific error in the template code, it's useful to pick out the parent dictionary into a variable; then if the parent is totally absent, you can decide what to do about it. This technique is a little more useful if there are optional fields or sensible defaults:
{{- $deployment := .Values.deployment | default dict }}
metadata:
  name: {{ $deployment.name | default (include "chart.fullname" .) }}
spec:
{{- if $deployment.replicas }}
  replicas: {{ $deployment.replicas }}
{{- end }}
If you really can't work without the value, Helm has a required function that can print a more specific error message.
{{- $deployment := .Values.deployment | required "deployment configuration is required" }}
(My experience has been that required values are somewhat frustrating as an end user, particularly if you're trying to run someone else's chart, and I would try to avoid this if possible.)
Given what you show, it's also possible you're making the chart too configurable. The container name, for example, is mostly a detail that only appears if you have a multi-container pod (or are using Istio); the container port is a fixed attribute of the image you're running. You can safely fix these values in the Helm template file, and then it's reasonable to provide defaults for things like the replica count or image name (consider setting the repository name, image name, and tag as separate variables).
{{- $deployment := .Values.deployment | default dict }}
{{- $registry := $deployment.registry | default "docker.io" }}
{{- $image := $deployment.image | default "my/image" }}
{{- $tag := $deployment.tag | default "latest" }}
containers:
  - name: app # fixed
    image: {{ printf "%s/%s:%s" $registry $image $tag }}
    {{- with .Values.imagePullPolicy }}
    imagePullPolicy: {{ . }}
    {{- end }}
    ports:
      - name: http
        containerPort: 3000 # fixed
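For illustration, a hypothetical values.yaml fragment that would override those defaults (the key names simply mirror the sketch above):
deployment:
  registry: registry.example.com # hypothetical private registry
  image: my/image
  tag: v1.2.3
imagePullPolicy: IfNotPresent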
If the value is defined in your values file and you're still getting the error, the issue could be that you're accessing the value inside range or a similar function that changes the context.
For example, to use a named template mySuffix that was defined against .Values inside a range loop, we need to pass $ to the template function instead of the usual .:
{{- define "mySuffix" -}}
{{- .Values.suffix }}
{{- end }}
...
{{- range .Values.listOfValues }}
echo {{ template "mySuffix" $ }}
{{- end }}
When I am building the image path, I want to fetch the docker registry address from a ConfigMap.
I can't hard-code the registry address in the values.yaml file because the registry address is different for each customer, and I don't want to ask the customer to enter this input manually. These Helm charts are deployed via Argo CD, so fetching the registry IP via a shell script and then invoking the helm command is not an option either.
I tried the code below; it isn't working because the env variables are not available in the context where the image path is built.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  template:
    metadata:
      labels:
        app: {{ template "helm-guestbook.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          {{- if eq .Values.isOnPrem "true" }}
          image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}
          {{- else }}
          env:
            - name: DOCKER_REGISTRY_IP
              valueFrom:
                configMapKeyRef:
                  name: docker-registry-config
                  key: DOCKER_REGISTRY_IP
Any pointers on how I can solve this using Helm itself? Thanks.
Check out the lookup function, https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
Though this can get very complicated very quickly, so be careful not to overuse it.
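A minimal sketch of what that could look like, reusing the ConfigMap and key names from the question (docker-registry-config / DOCKER_REGISTRY_IP); note that lookup only queries a live cluster, so it returns an empty result during helm template or --dry-run and you should guard for that:
{{- $registryIP := "" }}
{{- $cm := lookup "v1" "ConfigMap" .Release.Namespace "docker-registry-config" }}
{{- if $cm }}
{{- $registryIP = index $cm.data "DOCKER_REGISTRY_IP" }}
{{- end }}
image: {{ printf "%s/%s:%s" $registryIP .Values.image.repository .Values.image.tag }}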
I am using Helm right now. My project is set up like this:
values.yaml:
environmentVariables:
  KEY1: VALUE1
  KEY2: VALUE2
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "myproject.fullname" . }}
data:
{{- range $k, $v := .Values.environmentVariables }}
  {{ $k }}: {{ $v | quote }}
{{- end }}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "myproject.fullname" . }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
          {{- range $k, $v := .Values.environmentVariables }}
            - name: {{ $k }}
              valueFrom:
                configMapKeyRef:
                  name: {{ template "myproject.fullname" $ }}
                  key: {{ $k }}
          {{- end }}
...
But right now I'm really confused. Do I really need this ConfigMap? Is there any benefit to using a ConfigMap for environment variables?
Aside from the points about separating config from pods, one advantage of a ConfigMap is that it makes the values of the variables accessible to other Pods or apps that are not necessarily part of your chart.
It does add a little extra complexity, though, and there can be a large element of preference about when to use a ConfigMap. Since your ConfigMap keys are the names of the environment variables, you could simplify your Deployment a little by using envFrom, as sketched below.
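A minimal sketch of that envFrom variant, reusing the image values and the fullname template from the question:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    envFrom:
      - configMapRef:
          name: {{ template "myproject.fullname" . }}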
It would work even if you don't use a ConfigMap, but a ConfigMap has some advantages:
You can update the values at runtime without updating the deployment, which means you might not need to restart your application (pods). If you don't use a ConfigMap, every time you update a value your application (or pod) will be recreated.
Separation of concerns, i.e. deployment configuration and external values are kept separate.
I feel like this is largely a matter of taste; but I've generally been avoiding ConfigMaps for cases like these.
env:
{{- range $k, $v := .Values.environmentVariables }}
  - name: {{ quote $k }}
    value: {{ quote $v }}
{{- end }}
You generally want a single source of truth and Helm can be that: you don't want to be in a situation where someone has edited a ConfigMap outside of Helm and a redeployment breaks local changes. So there's not a lot of value in a ConfigMap being "more editable" than a Deployment spec.
In principle (as #Hazim notes) you can update a ConfigMap contents without restarting a container, but that intrinsically can't update environment variables in running containers, and restarting containers is so routine that doing it once shouldn't matter much.