How to check if a key is defined in a configmap in a Helm chart? - kubernetes

I want to apply the postgres username or section in a Helm chart only if a key exists in a configmap.
I have the following deployment.yaml example:
{{- if key exists in myconfigmap }} // then do the following, is this possible?
{{- if .Values.postgres }}
{{- if .Values.postgres.postgresUsername }}
- name: {{ .Values.postgres.postgresUsername }}
{{- else }}
- name: POSTGRES_USERNAME
{{- end }}
  valueFrom:
    secretKeyRef:
      name: {{ .Values.postgres.secretfile }}
      key: username
Is that possible using a configmap in a Helm chart? Basically, I want to load a configmap called myconfigmap that is deployed on my cluster and check whether a certain key exists in it.
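One approach that may do what you are after is Helm's built-in lookup function, which can read an existing object from the cluster at install/upgrade time. Note that lookup returns an empty result during helm template or --dry-run, so a fallback is needed. A minimal sketch, assuming the ConfigMap is called myconfigmap, lives in the release namespace, and stores the desired variable name under a postgresUsername key:
{{- $cm := lookup "v1" "ConfigMap" .Release.Namespace "myconfigmap" }}
{{- $data := ($cm).data | default dict }}
{{- if hasKey $data "postgresUsername" }}
- name: {{ get $data "postgresUsername" }}
{{- else }}
- name: POSTGRES_USERNAME
{{- end }}
  valueFrom:
    secretKeyRef:
      name: {{ .Values.postgres.secretfile }}
      key: username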

Related

Helm upgrade is making the deployment fail

We configured the CSI driver in our cluster for secret management and used the secret provider class template below to automatically assign secrets to the deployments' env variables. This setup is working fine.
But there are 2 things where I have issues. Whenever new changes are made to the secrets, say adding a new secret to the YAML and to the key vault, the next release fails with the helm upgrade command, stating that the specified secret is not found.
In order to solve this, I have to uninstall all Helm releases and install them again, which means downtime. How can I handle this scenario without any downtime?
Secondly, is there any recommended way to restart the Pods when the secret template changes?
values.yaml for MyAppA
keyvault:
  name: mykv
  tenantId: ${tenantId}$
  clientid: "#{spid}#"
  clientsecret: "#{spsecret}#"
  secrets:
    - MyAPPA_SECRET1_NAME1
    - MyAPPA_SECRET2_NAME2
    - MyAPPA_SECRET3_NAME3
deployment.yaml, the env part is as below:
{{- if eq .Values.keyvault.enabled true }}
{{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
- name: {{ . }}
  valueFrom:
    secretKeyRef:
      name: {{ $.Release.Name }}-kvsecret
      key: {{ . }}
{{- end }}
{{- end }}
volumeMounts:
  - name: {{ $.Release.Name }}-volume
    mountPath: '/mnt/secrets-store'
    readOnly: true
volumes:
  - name: {{ $.Release.Name }}-volume
    csi:
      driver: 'secrets-store.csi.k8s.io'
      readOnly: true
      volumeAttributes:
        secretProviderClass: {{ $.Release.Name }}-secretproviderclass
      nodePublishSecretRef:
        name: {{ $.Release.Name }}-secrets-store-creds
The SecretProviderClass YAML file is as below:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: {{ $.Release.Name }}-secretproviderclass
  labels:
    app: {{ $.Release.Name }}
    chart: "{{ $.Release.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  provider: azure
  secretObjects:
    - data:
      {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - key: {{ . }}
          objectName: {{ $.Release.Name | upper }}-{{ . }}
      {{- end }}
      secretName: {{ $.Release.Name }}-kvsecret
      type: opaque
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""
    keyvaultName: {{ .Values.keyvault.name | default "mydev-kv" }}
    objects: |
      array:
      {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - |
          objectName: {{ $.Release.Name | upper }}-{{ . }}
          objectType: secret
      {{- end }}
    tenantId: {{ .Values.keyvault.tenantid }}
{{- end }}
{{- end -}}
{{- define "commonobject.secretproviderclass" -}}
{{- template "commonobject.util.merge" (append . "commonobject.secretproviderclass.tpl") -}}
{{- end -}}
The problem is not in the "helm upgrade" command. I discovered this is a limitation of the CSI driver or SecretProviderClass. When the deployment is already created, the SecretProviderClass resource is updated, but the "SecretProviderClassPodStatuses" is not, so the secrets are not updated.
Two potential solutions to update the secrets:
delete the secret and restart/recreate the pod => this works, but it sounds more like a workaround than an actual solution
set enableSecretRotation to true => it has been implemented in the CSI driver recently and is in an 'alpha' version
https://secrets-store-csi-driver.sigs.k8s.io/topics/secret-auto-rotation.html
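For the second part of the question (restarting the Pods when the secret template changes), one common Helm pattern, offered here as a sketch rather than something from this chart, is to add a checksum of the rendered SecretProviderClass template as a pod annotation, so any change to it rolls the deployment on the next helm upgrade. The file name secretproviderclass.yaml below is an assumption; use your template's actual path:
spec:
  template:
    metadata:
      annotations:
        checksum/secretproviderclass: {{ include (print $.Template.BasePath "/secretproviderclass.yaml") . | sha256sum }}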
Edited:
In the end, I used this command to enable automatic secret rotation in Azure Kubernetes Service:
az aks addon update -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 0.5m
You can use the following command to check if this option is enabled:
az aks addon show -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider
More info here:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation

Helm Environment Variables in if else

When building the image path, I want to fetch the docker registry address from a ConfigMap rather than hard-coding it.
I can't hard-code the registry address in the values.yaml file because the registry address is different for each customer, and I don't want to ask the customer to enter this input manually. These Helm charts are deployed via Argo CD, so fetching the registry IP via a shell script and then invoking the helm command is also not an option.
I tried the code below; it isn't working because the env variables are not available in the context where the image path is rendered.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  template:
    metadata:
      labels:
        app: {{ template "helm-guestbook.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          {{- if eq .Values.isOnPrem "true" }}
          image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}
          {{- else }}
          env:
            - name: DOCKER_REGISTRY_IP
              valueFrom:
                configMapKeyRef:
                  name: docker-registry-config
                  key: DOCKER_REGISTRY_IP
Any pointers on how I can solve this using Helm itself? Thanks.
Check out the lookup function, https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
Though this could get very complicated very quickly, so be careful to not overuse it.
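A rough sketch of what that could look like here, assuming the docker-registry-config ConfigMap already exists in the release namespace. The value .Values.image.defaultRegistry is an invented fallback, and keep in mind that lookup only has cluster access during a real helm install/upgrade; tools that render with helm template (as Argo CD does by default) get an empty result and will use the fallback:
{{- $registry := .Values.image.defaultRegistry }}
{{- $cm := lookup "v1" "ConfigMap" .Release.Namespace "docker-registry-config" }}
{{- $data := ($cm).data | default dict }}
{{- if hasKey $data "DOCKER_REGISTRY_IP" }}
{{- $registry = get $data "DOCKER_REGISTRY_IP" }}
{{- end }}
image: {{ printf "%s/%s:%s" $registry .Values.image.repository .Values.image.tag }}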

How can I reuse common configuration across different kubernetes manifests?

Assume I have this manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: initialize-assets-fixtures
spec:
  template:
    spec:
      initContainers:
        - name: wait-for-minio
          image: bitnami/minio-client
          env:
            - name: MINIO_SERVER_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: access-key
            - name: MINIO_SERVER_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: secret-key
            - name: MINIO_SERVER_HOST
              value: minio
            - name: MINIO_SERVER_PORT_NUMBER
              value: "9000"
            - name: MINIO_ALIAS
              value: minio
          command:
            - /bin/sh
            - -c
            - |
              mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
      containers:
        - name: initialize-assets-fixtures
          image: bitnami/minio
          env:
            - name: MINIO_SERVER_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: access-key
            - name: MINIO_SERVER_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: secret-key
            - name: MINIO_SERVER_HOST
              value: minio
            - name: MINIO_SERVER_PORT_NUMBER
              value: "9000"
            - name: MINIO_ALIAS
              value: minio
          command:
            - /bin/sh
            - -c
            - |
              mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
              for category in `ls`; do
                for f in `ls $category/*` ; do
                  mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
                done
              done
      restartPolicy: Never
As you can see, I have one initContainer and one container here. In both containers, I have the same configuration, i.e. the same env section.
Assume I have yet another Job manifest where I use the very same env section again.
It's a lot of duplicated configuration that I bet I can simplify drastically, but I don't know how to do it. Any hint? Any link to some documentation? After some googling, I was not able to come up with anything useful. Maybe with kustomize, but I'm not sure. Or maybe I'm doing it the wrong way with all those environment variables, but I don't think I have a choice, depending on the service I'm using (here it's minio, but I want to do the same kind of stuff with other services which might not be as flexible as minio).
Based on my knowledge, you have these 3 options:
Kustomize
Helm
ConfigMap
ConfigMap
You can use either kubectl create configmap or a ConfigMap generator in kustomization.yaml to create a ConfigMap.
The data source corresponds to a key-value pair in the ConfigMap, where
key = the file name or the key you provided on the command line
value = the file contents or the literal value you provided on the command line.
More about how to use it in a pod here.
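As a rough illustration of that option (all names here are made up), the non-secret minio settings could live in one shared ConfigMap created once:
kubectl create configmap minio-common-env \
  --from-literal=MINIO_SERVER_HOST=minio \
  --from-literal=MINIO_SERVER_PORT_NUMBER=9000 \
  --from-literal=MINIO_ALIAS=minio
Each container can then pull these in with an envFrom / configMapRef entry pointing at minio-common-env, while the access and secret keys stay as secretKeyRef entries.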
Helm
As #Matt mentioned in the comments, you can use Helm:
helm lets you template the yaml with values. Also once you get into it there are ways to create and include partial templates – Matt
By the way, Helm has its own minio chart; you might take a look at how it is done there.
Kustomize
It's well described here and here how you could do that in Kustomize.
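As a sketch, the declarative counterpart of the kubectl command above is a ConfigMap generator in kustomization.yaml (file and ConfigMap names are illustrative):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - initialize-assets-fixtures-job.yaml
configMapGenerator:
  - name: minio-common-env
    literals:
      - MINIO_SERVER_HOST=minio
      - MINIO_SERVER_PORT_NUMBER=9000
      - MINIO_ALIAS=minio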
Let me know if you have any more questions.
So, long story short: to solve my problem, I first created a new chart for my service and transformed the k8s manifests I had into helm templates. Then, I completed the _helpers.tpl with the following code:
{{/*
Common minio environment variables setup
*/}}
{{- define "minio.envvarsblock" -}}
- name: MINIO_SERVER_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: access-key
- name: MINIO_SERVER_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: secret-key
- name: MINIO_SERVER_HOST
  value: {{ .Values.minio.fullname }}
- name: MINIO_SERVER_PORT_NUMBER
  value: {{ .Values.minio.server.port | quote }}
- name: MINIO_ALIAS
  value: {{ .Values.minio.client.alias }}
{{- end -}}
{{/*
Wait for minio init container definition
*/}}
{{- define "wait-for-minio" -}}
- name: wait-for-minio
  image: {{ .Values.minio.client.image }}
  env: {{- include "minio.envvarsblock" . | nindent 4 }}
  command:
    - /bin/sh
    - -c
    - |
      mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
{{- end -}}
The first section above allows me to reuse the env section throughout all my templates, and the second allows me to reuse an initContainer that I use all over the place too. I was then able to inject those partial templates into my Helm templates like so (to take the example I put in my original post):
{{- if .Values.fixtures.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "chart.fullname" . }}-init-fixtures
  labels:
{{ include "chart.labels" . | indent 4 }}
spec:
  template:
    spec:
      initContainers:
      {{- include "wait-for-minio" . | nindent 6 }}
      containers:
        - name: {{ .Chart.Name }}-init-fixtures
          image: {{ .Values.image }}
          env: {{- include "minio.envvarsblock" . | nindent 10 }}
          command:
            - /bin/sh
            - -c
            - |
              mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
              for category in `ls`; do
                for f in `ls $category/*` ; do
                  mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
                done
              done
      restartPolicy: OnFailure
{{- end -}}

Should I use configMap for every environment variable?

I am using Helm right now. My project is like this:
values.yaml:
environmentVariables:
  KEY1: VALUE1
  KEY2: VALUE2
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "myproject.fullname" . }}
data:
{{- range $k, $v := .Values.environmentVariables }}
  {{ $k }}: {{ $v | quote }}
{{- end }}
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "myproject.fullname" . }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
          {{- range $k, $v := .Values.environmentVariables }}
            - name: {{ $k }}
              valueFrom:
                configMapKeyRef:
                  name: {{ template "myproject.fullname" $ }}
                  key: {{ $k }}
          {{- end }}
...
But right now, I'm really confused. Do I really need this configmap? Is there any benefit to using a configmap for environment variables?
Aside from the points about separation of config from pods, one advantage of a ConfigMap is that it lets you make the values of the variables accessible to other Pods or apps that are not necessarily part of your chart.
It does add a little extra complexity though, and there can be a large element of preference about when to use a ConfigMap. Since your ConfigMap keys are the names of the environment variables, you could simplify your Deployment a little by using 'envFrom', as sketched below.
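A minimal sketch of that envFrom variant, reusing the ConfigMap from the question:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    envFrom:
      - configMapRef:
          name: {{ template "myproject.fullname" . }}
Every key in the ConfigMap then becomes an environment variable, so the per-variable range loop in the Deployment disappears.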
It would work even if you don't use a configmap, but using one has some advantages:
You can update the values at runtime without updating the deployment, which means you might not need to restart your application (pods). If you don't use a configmap, every time you update a value your application (pods) will be recreated.
Separation of concerns, i.e. deployment configuration and external values are kept separate.
I feel like this is largely a matter of taste, but I've generally been avoiding ConfigMaps for cases like these:
env:
{{- range $k, $v := .Values.environmentVariables }}
  - name: {{ quote $k }}
    value: {{ quote $v }}
{{- end }}
You generally want a single source of truth and Helm can be that: you don't want to be in a situation where someone has edited a ConfigMap outside of Helm and a redeployment breaks local changes. So there's not a lot of value in a ConfigMap being "more editable" than a Deployment spec.
In principle (as #Hazim notes) you can update a ConfigMap's contents without restarting a container, but that intrinsically can't update environment variables in running containers, and restarting containers is so routine that doing it once shouldn't matter much.
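If you do want that restart after editing a ConfigMap, it is a single command; the deployment name below is hypothetical and should be whatever your chart's fullname template resolves to:
kubectl rollout restart deployment/myproject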

How do I load multiple templated config files into a helm chart?

So I am trying to build a Helm chart.
In my templates directory I've got a file like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{ Do something here to load up a set of files | indent 2 }}
I have another directory in my chart, configmaps, containing a set of JSON files that themselves have templated variables in them:
a.json
b.json
c.json
Ultimately I'd like to be sure in my chart I can reference:
volumes:
  - name: config-a
    configMap:
      name: config-map
      items:
        - key: a.json
          path: a.json
I had the same problem a few weeks ago with adding files and templates directly to a container. Here is a sample of the syntax:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-{{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  nginx_conf: {{ tpl (.Files.Get "files/nginx.conf") . | quote }}
  ssl_conf: {{ tpl (.Files.Get "files/ssl.conf") . | quote }}
  dhparam_pem: {{ .Files.Get "files/dhparam.pem" | quote }}
  fastcgi_conf: {{ .Files.Get "files/fastcgi.conf" | quote }}
  mime_types: {{ .Files.Get "files/mime.types" | quote }}
  proxy_params_conf: {{ .Files.Get "files/proxy_params.conf" | quote }}
The second step is to reference it from the deployment:
volumes:
  - name: {{ $.Release.Name }}-configmap-volume
    configMap:
      name: nginx-configmap-{{ $.Release.Name }}
      items:
        - key: dhparam_pem
          path: dhparam.pem
        - key: fastcgi_conf
          path: fastcgi.conf
        - key: mime_types
          path: mime.types
        - key: nginx_conf
          path: nginx.conf
        - key: proxy_params_conf
          path: proxy_params.conf
This is still current. Here you can see 2 types of importing:
regular files without templating
configuration files with dynamic variables inside
Please do not forget to read the official docs:
https://helm.sh/docs/chart_template_guide/accessing_files/
Good luck!
Include all files from the directory config-dir/, using {{ range }}:
my-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
{{- $files := .Files }}
{{- range $key, $value := .Files }}
{{- if hasPrefix "config-dir/" $key }} {{/* only when in config-dir/ */}}
  {{ $key | trimPrefix "config-dir/" }}: {{ $files.Get $key | quote }} {{/* adapt $key as desired */}}
{{- end }}
{{- end }}
my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      containers:
        - name: my-pod-container
          ...
          volumeMounts:
            - name: my-volume
              mountPath: /config
              readOnly: true # is RO anyway for configMap
      volumes:
        - name: my-volume
          configMap:
            name: my-configmap
            # defaultMode: 0555 # mode rx for all
I assume that a.json, b.json, c.json, etc. is a defined list and you know all the contents (apart from the bits that you want to set as values through templated variables). I'm also assuming you only want to expose parts of the content of the files to users and not let the user configure the whole file content. (But if I'm assuming wrong and you do want to let users set the whole file content, then the suggestion from #hypnoglow of following the datadog chart seems to me a good one.) If so, I'd suggest the simplest way to do it is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  a.json: |
    # content of a.json in here, including any templated stuff with {{ }}
  b.json: |
    # content of b.json in here, including any templated stuff with {{ }}
  c.json: |
    # content of c.json in here, including any templated stuff with {{ }}
# content of c.json in here, including any templated stuff with {{ }}
I guess you'd like to mount then to the same directory. It would be tempting for cleanliness to use different configmaps but that would then be a problem for mounting to the same directory. It would also be nice to be able to load the files independently using .Files.Glob to be able to reference the files without having to put the whole content in the configmap but I don't think you can do that and still use templated variables in them... However, you can do it with Files.Get to read the file content as a string and the pass that into tpl to put it through the templating engine as #Oleg Mykolaichenko suggests in https://stackoverflow.com/a/52009992/9705485. I suggest everyone votes for his answer as it is the better solution. I'm only leaving my answer here because it explains why his suggestion is so good and some people may prefer the less abstract approach.