Re-using UUID in helm configmap - kubernetes

There's a similar question that alludes to the possibility of auto-generating a uuid in helm charts when used as a secret or configmap. I'm trying precisely to do that, but I'm getting a new uuid each time.
My test case:
---
{{- $config := (lookup "v1" "ConfigMap" .Release.Namespace "{{ .Release.Name }}-testcase") -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Release.Name }}-testcase"
  namespace: "{{ .Release.Namespace }}"
  labels:
    app.kubernetes.io/managed-by: "{{ .Release.Service }}"
    app.kubernetes.io/instance: "{{ .Release.Name }}"
    app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
data:
{{- if $config }}
  TEST_VALUE: {{ $config.data.TEST_VALUE | quote }}
{{- else }}
  TEST_VALUE: {{ uuidv4 | quote }}
{{ end }}
I initially deploy this with:
helm upgrade --install --namespace test mytest .
If I run it again, or run helm diff upgrade --namespace test mytest ., I get a new value for TEST_VALUE. When I dump the contents of $config, it's an empty map {}.
I'm using Helm v3.9.0, kubectl 1.24, and kube server is 1.22.
NOTE: I couldn't ask in a comment thread on the other post because I don't have enough reputation.

Referring to the issue you opened, which links this Stack Overflow post: https://github.com/helm/helm/issues/11187
The underlying problem is that template actions cannot be nested: in your lookup call, "{{ .Release.Name }}-testcase" is passed through as a literal string, so lookup searches for a ConfigMap literally named {{ .Release.Name }}-testcase, finds nothing, and returns the empty map {} you observed. Build the name with printf instead. (Note also that lookup always returns an empty result under helm template and --dry-run, since no cluster is contacted.)
A way to make your ConfigMap stable is to save a freshly generated value into a variable, then conditionally overwrite it with the looked-up value. Every upgrade still generates a UUID that will normally go unused, but that is harmless.
When reassigning an existing variable, := must become =. And if this were a Secret rather than a ConfigMap, you would also need to b64enc the value; ConfigMap data stays plain text.
{{- $config := uuidv4 -}}
{{- $config_lookup := lookup "v1" "ConfigMap" .Release.Namespace (printf "%s-testcase" .Release.Name) -}}
{{- if $config_lookup -}}
{{- $config = $config_lookup.data.TEST_VALUE -}}
{{- end -}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Release.Name }}-testcase"
  namespace: "{{ .Release.Namespace }}"
  labels:
    app.kubernetes.io/managed-by: "{{ .Release.Service }}"
    app.kubernetes.io/instance: "{{ .Release.Name }}"
    app.kubernetes.io/version: "{{ .Chart.AppVersion }}"
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
data:
  TEST_VALUE: {{ $config | quote }}
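With that in place, re-running the upgrade leaves TEST_VALUE untouched. A quick way to verify (the diff command assumes the helm-diff plugin is installed):

helm upgrade --install --namespace test mytest .
# lookup now finds the existing ConfigMap, so the stored UUID is re-used
helm diff upgrade --namespace test mytest .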

Related

Create kubernetes docker-registry secret from yaml file for each lookup namespaces?

I am trying to dynamically look up the available namespaces and create a secret in each of them using the Helm chart below.
templates/secrets.yaml
{{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }}
apiVersion: v1
kind: Secret
metadata:
  name: myregcred
  namespace: {{ $namespace.metadata.name }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
{{- end}}
values.yaml
imageCredentials:
  registry: quay.io
  username: someone
  password: sillyness
  email: someone@host.com
_helpers.tpl
{{- define "imagePullSecret" }}
{{- with .Values.imageCredentials }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" .registry .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
When I run this Helm chart, I get the error below:
Error: INSTALLATION FAILED: template: secrets/templates/_helpers.tpl:2:16: executing "imagePullSecret" at <.Values.imageCredentials>: nil pointer evaluating interface {}.imageCredentials
I don't know what I am doing wrong here.
When you reference the named template "imagePullSecret" inside the range, the context "." you are passing refers to the current loop element, which does not have a "Values" attribute.
Pass the root context "$" instead:
{{ range $index, $namespace := (lookup "v1" "Namespace" "" "").items }}
apiVersion: v1
kind: Secret
metadata:
  name: myregcred
  namespace: {{ $namespace.metadata.name }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" $ }}
---
{{- end}}
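Note the --- at the end of each iteration: the loop emits one Secret per namespace, and without the separator they would all run together into a single invalid YAML document. For reference, before the outer b64enc the imagePullSecret helper renders a Docker config JSON of this shape (values taken from the values.yaml above; auth is base64 of "someone:sillyness"):

{"auths":{"quay.io":{"username":"someone","password":"sillyness","email":"someone@host.com","auth":"c29tZW9uZTpzaWxseW5lc3M="}}}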

How to add a PersistentVolumeClaim to a deployment running GitLab AutoDevops?

What am I trying to achieve?
We are using a self-hosted GitLab instance and use GitLab AutoDevops to deploy our projects to a Kubernetes cluster. At the time of writing, we are only using one node within the cluster. For one of our projects it is important that the built application (i.e. the pod(s)) is able to access (read only) files stored on the Kubernetes cluster's node itself.
What have I tried?
Created a (hostPath) PersistentVolume (PV) on our cluster
Created a PersistentVolumeClaim (PVC) on our cluster (named "test-api-claim")
Now GitLab AutoDevops uses a default Helm chart to deploy applications. In order to modify its behavior, I've added this chart to the project's repository (GitLab AutoDevops automatically uses the chart in a project's ./chart directory if found). My line of thinking was to modify the chart so that the deployed pods use the PV and PVC which I created manually on the cluster.
Therefore I modified the deployment.yaml file that can be found here. As you can see in the following code-snippet, I have added the volumeMounts & volumes keys (not present in the default/original file). Scroll to the end of the snippet to see the added keys.
{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "trackableappname" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "appname" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "appname" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "appname" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
{{ toYaml .Values.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: {{ template "imagename" . }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
{{- if .Values.postgresql.managed }}
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: password
        - name: POSTGRES_HOST
          valueFrom:
            secretKeyRef:
              name: app-postgres
              key: privateIP
{{- end }}
        - name: DATABASE_URL
          value: {{ .Values.application.database_url | quote }}
        - name: GITLAB_ENVIRONMENT_NAME
          value: {{ .Values.gitlab.envName | quote }}
        - name: GITLAB_ENVIRONMENT_URL
          value: {{ .Values.gitlab.envURL | quote }}
        ports:
        - name: "{{ .Values.service.name }}"
          containerPort: {{ .Values.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
{{- end -}}
        volumeMounts:
        - mountPath: /data
          name: test-pvc
      volumes:
      - name: test-pvc
        persistentVolumeClaim:
          claimName: test-api-claim
What is the problem?
Now when I trigger the Pipeline to deploy the application (using AutoDevops with my modified helm chart), I am getting this error:
Error: YAML parse error on auto-deploy-app/templates/deployment.yaml: error converting YAML to JSON: yaml: line 71: did not find expected key
Line 71 in the script refers to the valueFrom.secretKeyRef.name in the yaml:
- name: POSTGRES_HOST
  valueFrom:
    secretKeyRef:
      name: app-postgres
      key: privateIP
The weird thing is that when I delete the volumes and volumeMounts keys, it works as expected (and the valueFrom.secretKeyRef.name is still present and causes no trouble).
I am not using tabs in the YAML file, and I double-checked the indentation.
Two questions
Could there be something wrong with my yaml?
Does anyone know of another solution to achieve my desired behavior? (adding PVC to the deployment so that pods actually use it?)
General information
We use GitLab EE 13.12.11
For auto-deploy-image (which provides the helm chart I am referring to) we use version 1.0.7
Thanks in advance and have a nice day!
It seems that adding persistence is now supported in the default Helm chart.
Check the pvc.yaml and deployment.yaml templates.
Given that, it should be enough to edit the values in .gitlab/auto-deploy-values.yaml to meet your needs; check the defaults in values.yaml for more context.
As for the parse error itself: your added keys sit after the chart's final {{- end -}}, and the trailing -}} strips the whitespace that follows it, so volumeMounts: most likely ends up glued onto the previous rendered line, which is what breaks the YAML.
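For illustration only, the values might look roughly like this; the key names (persistence.volumes, mount.path, claim.size) are an assumption based on that chart's pvc.yaml and should be verified against the values.yaml of the chart version you actually use:

# .gitlab/auto-deploy-values.yaml -- sketch, schema unverified
persistence:
  enabled: true
  volumes:
    - name: data
      mount:
        path: /data              # where the container mounts the volume
      claim:
        accessMode: ReadWriteOnce
        size: 8Gi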

Helm: "Template" keyword

Can someone explain to me what the role of the keyword "template" is in this code:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "identity-openidconnect" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    app: {{ template "microService.name" . }}
    release: "{{ .Release.Name }}"
    xxxx
    xxxxxxxxxxxx
The keyword "template" means, that Helm will find the previously created template and complete the yaml file according to the template in the template. It has to be created in advance. This type of construction allows you to refer to the same scheme many times.
For example, we can define a template to encapsulate a Kubernetes block of labels:
{{- define "mychart.labels" }}
labels:
generator: helm
date: {{ now | htmlDate }}
{{- end }}
Now we can embed this template inside of our existing ConfigMap, and then include it with the template action:
{{- define "mychart.labels" }}
labels:
generator: helm
date: {{ now | htmlDate }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
{{- template "mychart.labels" }}
data:
myvalue: "Hello World"
{{- range $key, $val := .Values.favorite }}
{{ $key }}: {{ $val | quote }}
{{- end }}
When the template engine reads this file, it will store away the reference to mychart.labels until template "mychart.labels" is called. Then it will render that template inline. So the result will look like this:
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: running-panda-configmap
  labels:
    generator: helm
    date: 2016-11-02
data:
  myvalue: "Hello World"
  drink: "coffee"
  food: "pizza"
Note: a define does not produce output unless it is called with a template, as in this example.
For more info about templates you can read this page.
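One related detail: template is a template action, not a function, so its output cannot be piped, which makes controlling indentation awkward. Helm also provides include, a function that renders the same named template but composes with pipelines such as nindent. A minimal sketch (the helper name mychart.app-labels is illustrative, not from the original chart):

{{- define "mychart.app-labels" -}}
generator: helm
date: {{ now | htmlDate }}
{{- end }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  labels: {{- include "mychart.app-labels" . | nindent 4 }}
data:
  myvalue: "Hello World"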

Using include inside range in Go templates (helm)

I have a template that gets rendered several times by a range iteration, and I can access external variables such as $.Release.Name without a problem. However, when I include templates I can't get it to work:
{{ range $key, $val := $.Values.resources }}
...
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
{{ end }}
And in secrets.yaml:
apiVersion: "v1"
kind: "Secret"
metadata:
name: {{ $.Release.Name }}-secrets
I got this error:
Error: render error in "botfront-project/templates/deployment.yaml": template: [filename] :19:28: executing [filename] at <include (print $.Template.BasePath "/secrets.yaml") .>: error calling include: template: .../secrets.yaml:4:19: executing ".../secrets.yaml" at <$.Release.Name>: nil pointer evaluating interface {}.Name
How do I access variables inside an included template?
TL;DR: replace . with $ to use the root scope instead of the local scope that range created.
Example:
{{- include "my-chart.labels" $ | nindent 4 }}
Explanation
According to the docs (https://helm.sh/docs/chart_template_guide/control_structures/#modifying-scope-using-with), we can use $ to access objects such as Release.Name from the parent scope: $ is mapped to the root scope when template execution begins, and it does not change during template execution.
With range, the scope changes inside the loop, so {{- include "my-chart.labels" . | nindent 4 }} would pass the loop's local scope . rather than the root.
If you dig further into scoping in the Helm docs, you eventually reach https://helm.sh/docs/chart_template_guide/variables/, which gives this example:
{{- range .Values.tlsSecrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
  labels:
    # Many helm templates would use `.` below, but that will not work,
    # however `$` will work here
    app.kubernetes.io/name: {{ template "fullname" $ }}
    # I cannot reference .Chart.Name, but I can do $.Chart.Name
    helm.sh/chart: "{{ $.Chart.Name }}-{{ $.Chart.Version }}"
    app.kubernetes.io/instance: "{{ $.Release.Name }}"
    # Value from appVersion in Chart.yaml
    app.kubernetes.io/version: "{{ $.Chart.AppVersion }}"
    app.kubernetes.io/managed-by: "{{ $.Release.Service }}"
type: kubernetes.io/tls
data:
  tls.crt: {{ .certificate }}
  tls.key: {{ .key }}
---
{{- end }}
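If the included file needs both the loop item and the root context at once, a common pattern is to bundle them into a dict (a sketch; the key names root and item are arbitrary):

{{- range .Values.tlsSecrets }}
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/secrets.yaml") (dict "root" $ "item" .) | sha256sum }}
{{- end }}

Inside secrets.yaml you would then write .root.Release.Name and .item.name instead of $.Release.Name and .name.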

Helm chart: reference a secret gives name: %!s(<nil>)-%!s(<nil>)

I am creating a Helm chart. When doing a dry run I get an error:
Error: YAML parse error on vstsagent/templates/vsts-buildrelease-agent.yaml: error converting YAML to JSON: yaml: line 28: found character that cannot start any token
The dry run also outputs the secret and deployment YAML file which I created. The part where it goes wrong in the deployment shows:
- name: ACCOUNT
  valueFrom:
    secretKeyRef:
      name: %!s(<nil>)-%!s(<nil>)
      key: ACCOUNT
- name: TOKEN
  valueFrom:
    secretKeyRef:
      name: %!s(<nil>)-%!s(<nil>)
      key: TOKEN
The output from the dry run for the secret looks fine.
The templates I created:
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.fullname" . }}
type: Opaque
data:
  ACCOUNT: {{ .Values.chart.secret.account }}
  TOKEN: {{ .Values.chart.secret.token }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "chart.fullname" . }}
  labels:
    app: {{ template "chart.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        release: {{ .Release.Name }}
        app: {{ template "chart.name" . }}
      annotations:
        agentVersion: {{ .Values.chart.image.tag }}
    spec:
      containers:
      - name: {{ template "chart.name" . }}
        image: {{ .Values.chart.image.name }}
        imagePullPolicy: {{ .Values.chart.image.pullPolicy }}
        env:
        - name: ACCOUNT
          valueFrom:
            secretKeyRef:
              name: {{ template "chart.fullname" }}
              key: ACCOUNT
        - name: TOKEN
          valueFrom:
            secretKeyRef:
              name: {{ template "chart.fullname" }}
              key: TOKEN
The _helpers.tpl looks like this:
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "chart.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
Where am I going wrong in this?
I missed two dots: template "chart.fullname" must be given the current context as its argument, i.e. {{ template "chart.fullname" . }}. Without it the named template is executed with no data, so .Release.Name and .Chart.Name come out as nil, and printf "%s-%s" renders them as %!s(<nil>)-%!s(<nil>):
- name: ACCOUNT
  valueFrom:
    secretKeyRef:
      name: {{ template "chart.fullname" . }}
      key: ACCOUNT
- name: TOKEN
  valueFrom:
    secretKeyRef:
      name: {{ template "chart.fullname" . }}
      key: TOKEN
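One more thing to watch, separate from the nil names: values under a Secret's data: key must be base64-encoded. A sketch of the data block with encoding added (alternatively, put plain values under stringData: and let Kubernetes encode them):

data:
  ACCOUNT: {{ .Values.chart.secret.account | b64enc | quote }}
  TOKEN: {{ .Values.chart.secret.token | b64enc | quote }}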