How to append Secret/ConfigMap hash prefix properly in Helm? - kubernetes-helm

I want to append the hash of my Secret or ConfigMap contents to the name of the resource in order to trigger a rolling update and keep the old version of that resource around in case there is a mistake in the new configuration.
This can almost be achieved using "helm.sh/resource-policy": keep on the Secret/ConfigMap but these will never be cleaned up. Is there a way of saying 'keep all but the last two' in Helm or an alternative way of achieving this behaviour?
$ helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}

Automatically Roll Deployments
To update the resource when a Secret or ConfigMap changes, you can add a checksum annotation to your Deployment:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
You can revert to your previous configuration with the helm rollback command.
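For example, assuming the release is named my-release and revision 2 is the last known-good one (use helm history to find the right revision):
$ helm history my-release
$ helm rollback my-release 2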
Update:
Assuming that your ConfigMap is generated from the values.yaml file, you can add a helper function in _helpers.tpl:
{{- define "mychart.configmapChecksum" -}}
{{ printf "configmap-%s" (.Values.bar | sha256sum) }}
{{- end }}
And use {{ include "mychart.configmapChecksum" . }} both as the ConfigMap name and as the reference in the Deployment.
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.configmapChecksum" . }}
  annotations:
    "helm.sh/resource-policy": keep
data:
  config.properties: |
    foo={{ .Values.bar }}
deployment.yaml
...
        volumeMounts:
          - name: config-volume
            mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            # Provide the name of the ConfigMap containing the files you want
            # to add to the container
            name: {{ include "mychart.configmapChecksum" . }}
Please note that you have to keep the "helm.sh/resource-policy": keep annotation on the ConfigMap, telling Helm not to delete the previous versions.
You cannot use {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} directly as the ConfigMap name, because Helm rendering will fail with:
error calling include: rendering template has a nested reference name

Related

Removing secretName from ingress.yaml results in dangling certificate in the K8s cluster as it doesn't automatically get deleted, any workaround?

My ingress.yaml looks like so:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.name }}-a
  namespace: {{ .Release.Namespace }}
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "{{ .Values.canary.weight }}"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  tls:
    - hosts:
        - {{ .Values.urlFormat | quote }}
      secretName: {{ .Values.name }}-cert # <-------------- This Line
  ingressClassName: nginx-customer-wildcard
  rules:
    - host: {{ .Values.urlFormat | quote }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.name }}-a
                port:
                  number: {{ .Values.backendPort }}
Assume .Values.name = customer-tls; then secretName becomes customer-tls-cert.
On removing secretName: {{ .Values.name }}-cert, the nginx ingress starts using the default certificate, which is fine and what I expect, but this also leaves the customer-tls-cert certificate hanging around in the cluster, unused. Is there a way that, when I delete the cert from the Helm config, the certificate is also removed from the cluster?
Otherwise, is there some mechanism that will figure out which certificates are no longer in use and delete them automatically?
My nginx version is nginx/1.19.9
K8s versions:
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.24.6
I experimented with --enable-dynamic-certificates a little bit, but that's not supported anymore on the versions I am using. I am not even sure it would have solved my problem.
For now I have just manually deleted the certificate from the cluster using kubectl delete secret customer-tls-cert -n edge, where edge is the namespace where the cert resides.
Edit: This is how my certificate.yaml looks:
{{- if eq .Values.certificate.enabled true }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  namespace: edge
  annotations:
    vault.security.banzaicloud.io/vault-addr: {{ .Values.vault.vaultAddress | quote }}
    vault.security.banzaicloud.io/vault-role: {{ .Values.vault.vaultRole | quote }}
    vault.security.banzaicloud.io/vault-path: {{ .Values.vault.vaultPath | quote }}
    vault.security.banzaicloud.io/vault-namespace: {{ .Values.vault.vaultNamespace | quote }}
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.certificate.cert }}
  tls.key: {{ .Values.certificate.key }}
{{- end }}
Kubernetes in general will not delete things simply because they are not referenced. There is a notion of ownership which doesn't apply here (if you delete a Job, the cluster also deletes the corresponding Pod). If you have a Secret or a ConfigMap that's referenced by name, the object will still remain even if you delete the last reference to it.
In Helm, if a chart contains some object, and then you upgrade the chart to a newer version or values that don't include that object, then Helm will delete the object. This would require that the Secret actually be part of the chart, like
{{/* templates/cert-secret.yaml */}}
{{- if .Values.createSecret -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
...
{{ end -}}
If your chart already included this, and you ran helm upgrade with values that set createSecret to false, then Helm would delete the Secret.
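An upgrade under those assumptions might look like this (the release and chart names here are hypothetical):
$ helm upgrade my-release ./my-chart --set createSecret=false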
If you're not in this situation, though – your chart references the Secret by name, but you expect something else to create it – then you'll also need to manually destroy it, maybe with kubectl delete.

Access an input file from vault template injector

I use Vault to retrieve some secrets that I put inside a configuration file. Everything works fine until the configuration gets bigger and I want to split it into sub-configs in a folder. The issue is that those files can't be imported by the Go templating used to fill in the passwords:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  ...
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-init-first: "true"
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/secret-volume-path-my-config: "/my-path/etc"
        vault.hashicorp.com/agent-inject-file-my-config: "my-app.conf"
        vault.hashicorp.com/agent-inject-secret-my-config: secret/data/my-app/config
        vault.hashicorp.com/agent-inject-template-my-config: |
          {{- $file := .Files }}
          {{ .Files.Get "configurations/init.conf" }}
          {{- with secret "secret/data/my-app/config" -}}
          ...
          {{- end }}
The file configurations/init.conf, for example, doesn't seem to be visible to the Vault injector and so simply gets replaced by <no value>. Is there a way to make the files in configurations/* visible to the Vault injector, maybe by mounting them somewhere?

Helm create secret from env file

Kubectl provides a nice way to convert environment variable files into secrets using:
$ kubectl create secret generic my-env-list --from-env-file=envfile
Is there any way to achieve this in Helm? I tried the below snippet but the result was quite different:
kind: Secret
metadata:
  name: my-env-list
data:
  {{ .Files.Get "envfile" | b64enc }}
It appears kubectl just does the simple thing and only splits on the first = character, so the Helm way would be to replicate that behavior (Helm has regexSplit, which will suffice for our purposes):
apiVersion: v1
kind: Secret
metadata:
  name: my-env-list
data:
  {{- range .Files.Lines "envfile" }}
  {{- if . }}
  {{- $parts := regexSplit "=" . 2 }}
  {{ index $parts 0 }}: {{ index $parts 1 | b64enc }}
  {{- end }}
  {{- end }}
The {{ if . }} guard is there because .Files.Lines can return an empty string, which of course doesn't match the pattern.
Be aware that kubectl's version accepts bare keys that are looked up from the environment, which Helm has no way of doing, so if your envfile is formatted like that, this specific implementation will fail.
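For instance, with a hypothetical envfile like the one below, the first two lines render as data keys, while the bareword HOME (which, per the note above, kubectl would resolve from the local environment) makes index $parts 1 fail:
LOG_LEVEL=debug
API_KEY=abc123
HOME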
I would like to use env files, but it seems Helm doesn't support that yet.
Instead of using an env file, you could use a YAML file.
That is, convert this env file
#envfile
MYENV1=VALUE1
MYENV2=VALUE2
to this yaml file (mind the YAML format; there must always be a space after the colon)
#envfile.yaml
MYENV1: VALUE1
MYENV2: VALUE2
After this, move the generated envfile.yaml to the root folder of your Helm chart (the same level as the values yaml files).
You have to set up your secret.yaml in this way:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  annotations:
    checksum/config: {{ (tpl (.Files.Glob "envfile.yaml").AsSecrets .) | sha256sum }}
type: Opaque
data:
{{- $v := $.Files.Get "envfile.yaml" | fromYaml }}
{{- range $key, $val := $v }}
{{ $key | indent 2 }}: {{ $val | b64enc }}
{{- end }}
In the data section we iterate over the generated envfile.yaml and base64-encode each value. The resulting Secret will be:
$ kubectl get secret my-secret -o yaml
apiVersion: v1
data:
  MYENV1: VkFMVUUx
  MYENV2: VkFMVUUy
kind: Secret
metadata:
  annotations:
    checksum/config: 8365925e9f9cf07b2a2b7f2ad8525ff79837d67eb0d41bb64c410a382bc3fcbc
  creationTimestamp: "2022-07-09T10:25:16Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: my-secret
  resourceVersion: "645673"
  uid: fc2b3722-e5ef-435e-85e0-57c63725bd8b
type: Opaque
Also, I'm using the checksum/config annotation so that the Secret object is updated every time a value changes.
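To consume the Secret from a workload, you could reference it with envFrom, for example (a minimal sketch; the container name and image are hypothetical):
containers:
  - name: my-app
    image: my-app:latest
    envFrom:
      - secretRef:
          name: my-secret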

Reuse uuid in helm charts

I am writing Helm charts, and they create one Deployment and one StatefulSet component.
Now I want to generate a UUID and pass the value to both k8s components.
I am using the uuidv4 function to generate the UUID, but I need help with how to pass this value to both components.
Here is my chart folder structure --
projectdir
  chart1
    templates
      statefulset.yaml
  chart2
    templates
      deployment.yaml
  helperchart
    templates
      _helpers.tpl
I have to write the logic to generate the uuid in _helpers.tpl.
Edit: It seems defining it in the _helpers.tpl does not work - thank you for pointing it out.
I have looked it up a bit, and it seems that currently the only way to achieve this is to put both manifests, separated by ---, into the same file under templates/. See the following example, where the UUID is defined on the first line and then used in both the Deployment and the StatefulSet:
{{- $mySharedUuid := uuidv4 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "uuid-test.fullname" . }}-1
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "uuid-test.fullname" . }}-2
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
After templating, the output is:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uuid-test-app-1
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: uuid-test-app-2
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
See the same issue: https://github.com/helm/helm/issues/6456
Note that this approach will still cause the UUID to be regenerated when you do a helm upgrade. To circumvent that, you would need to use another workaround along with this one.
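One possible workaround of that kind (not from the answers here; the Secret and key names are hypothetical) is to reuse the value from an already-created Secret via Helm's lookup function. Note that lookup returns an empty map during helm template and --dry-run, so this only helps for real installs/upgrades:
{{- $uuid := uuidv4 }}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-shared-uuid" }}
{{- if $existing }}
{{- $uuid = index $existing.data "the-uuid" | b64dec }}
{{- end }}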
You should explicitly pass the value in as a Helm value; don't try to generate it in the chart.
The other answers to this question highlight a couple of the issues you'll run into. #UtkuÖzdemir notes that every time you call the Helm uuidv4 function it will create a new random UUID, so you can only call that function once in the chart ever; and #srr further notes that there's no way to persist a generated value like this, so if you helm upgrade the chart the UUID value will be regenerated, which will cause all of the involved Kubernetes objects to be redeployed.
The Bitnami RabbitMQ chart has an interesting middle road here. One of its configuration options is an "Erlang cookie", also a random string that needs to be consistent across all replicas and upgrades. On an initial install it generates a random value if one isn't provided, and tells you how to retrieve it from a Secret; but if .Release.IsUpgrade then you must provide the value directly, and the error message explains how to get it from your existing deployment.
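A minimal sketch of that "require the value on upgrade" pattern (not the actual Bitnami template; the value and Secret names are made up):
{{- if and .Release.IsUpgrade (not .Values.sharedUuid) }}
{{- fail "sharedUuid is required on upgrade; read it from the existing Secret, e.g. kubectl get secret my-shared-uuid -o jsonpath='{.data.the-uuid}' | base64 -d" }}
{{- end }}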
You may be able to get around the "only call uuidv4 once ever" problem by putting the value into a ConfigMap or Secret, and then referencing it from elsewhere. This works only if the only place you use the UUID value is in an environment variable, or something else that can have a value injected from a secret; it won't help if you need it in an annotation or label.
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.name" . }}
data:
  the-uuid: {{ .Values.theUuid | default uuidv4 | b64enc }}
  {{- /* this is the only place uuidv4 ^^^^^^ is called at all */}}
env:
  - name: THE_UUID
    valueFrom:
      secretKeyRef:
        name: {{ template "chart.name" . }}
        key: the-uuid
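If you instead pass the value in explicitly, as recommended above, the install could look like this (release and chart names are hypothetical, and uuidgen is assumed to be available):
$ helm install my-release ./my-chart --set theUuid=$(uuidgen)
On upgrade, pass the same value again (or use --reuse-values) so the objects are not needlessly redeployed.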
As suggested in the Helm issue tracker (https://github.com/helm/helm/issues/6456), we have to put both components in the same file; it looks like that's the only solution right now.
It's a surprise that Helm doesn't support caching a value to share it across charts/components. I wish Helm would support this feature in the future.

Helm chart restart pods when configmap changes

I am trying to restart the pods when there is a ConfigMap or Secret change. I have tried the same piece of code as described in: https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change
However, after updating the ConfigMap, my pod does not get restarted. Do you have any idea what could have been done wrong here?
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    {{- include "global_labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "app.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yml") . | sha256sum }}
Helm 3 has this feature now: Deployments are rolled out when there is a change in the ConfigMap template file. See https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
Neither Helm nor Kubernetes provides a specific rolling update for a ConfigMap change. The workaround for a while has been to just patch the deployment, which triggers the rolling update:
kubectl patch deployment your-deployment -n your-namespace -p '{"spec":{"template":{"metadata":{"annotations":{"date":"$(date)"}}}}}'
And you can see the status:
kubectl rollout status deployment your-deployment
Note this works on a *nix machine. This is only needed until the feature is added.
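Also note that inside single quotes the shell does not expand $(date), so repeated runs will not change the annotation after the first patch; a variant that substitutes a real timestamp (assuming a bash-like shell) is:
kubectl patch deployment your-deployment -n your-namespace \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date +%s)\"}}}}}"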
Update 05/05/2021
Helm and kubectl provide this now:
Helm: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
kubectl: kubectl rollout restart deploy WORKLOAD_NAME
It worked for me. Below is the snippet from my deployment.yaml file; make sure your ConfigMap and Secret YAML files are the same ones referred to in the annotations:
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/my-configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/my-secret.yaml") . | sha256sum }}
I deployed a pod with a ConfigMap using this feature: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments.
When I edited the ConfigMap at runtime, it didn't trigger the rolling deployment.