Helm create template for template - kubernetes-helm

Is it possible to create a Helm template for a template file inside the templates/ folder? My use case is the following: there are several Kubernetes deployment files that differ only in the deployment name and the Docker image pulled from a repo (e.g. deployments for service1, service2, etc.). I want to create one chart to deploy all those services. Currently there is a lot of copy-paste in my deployment templates, so I want to have some kind of template for those templates. Also, all those deployment templates will have different values files per environment (e.g. service1 and service2 will use values-prod.yaml in the prod environment and values-stage.yaml in the staging environment).
If that's not possible, what are the alternative solutions? Thanks

For those who are interested in a solution:
as proposed by David Maze in the comments to my question, I should use something similar to {{ range .Values.resourceNames }} in the template file (e.g. deployment.yaml) and separate the resources with --- at the beginning of each loop iteration (Kubernetes allows several resources to be defined in a single file, separated by ---), where .Values.resourceNames is just an example. With that, the deployment.yaml template could look like the following:
{{- range .Values.resourceNames }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .name }}-deployment
  labels:
    app: {{ .name }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    ...
{{- end }}
where the values.yaml file looks like the following:
resourceNames:
  - name: service1
    replicaCount: 2
  - name: service2
    replicaCount: 1
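To cover the per-environment part of the question, the usual approach is to keep one values file per environment (values-stage.yaml and values-prod.yaml, as named in the question) and select it at install time; the release and chart names below are placeholders:
helm install my-services ./mychart -f values-stage.yaml
helm install my-services ./mychart -f values-prod.yaml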

Related

Reuse uuid in helm charts

I am writing a Helm chart that creates one Deployment and one StatefulSet component.
Now I want to generate a UUID and pass the value to both k8s components.
I am using the uuid function to generate the UUID, but I need help with how to pass this value to both components.
Here is my chart folder structure:
projectdir
  chart1
    templates
      statefulset.yaml
  chart2
    templates
      deployment.yaml
  helperchart
    templates
      _helpers.tpl
I have to write the logic to generate the uuid in _helpers.tpl.
Edit: It seems defining it in the _helpers.tpl does not work - thank you for pointing it out.
I have looked it up a bit, and it seems that currently the only way to achieve this is to put both of the manifests, separated by ---, into the same file under templates/. See the following example, where the UUID is defined on the first line and then used in both the Deployment and the StatefulSet:
{{- $mySharedUuid := uuidv4 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "uuid-test.fullname" . }}-1
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "uuid-test.fullname" . }}-2
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
After templating, the output is:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uuid-test-app-1
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: uuid-test-app-2
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
See the same issue: https://github.com/helm/helm/issues/6456
Note that this approach will still cause the UUID to be regenerated when you do a helm upgrade. To circumvent that, you would need to use another workaround along with this one.
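One such workaround (a sketch only, using Helm 3's lookup function and a hypothetical Secret name) is to reuse the value stored by a previous release and only fall back to uuidv4 on the first install. Keep in mind that lookup returns an empty result during helm template and --dry-run, so the fallback also runs there:
{{- /* sketch: reuse the UUID from the Secret a previous release created, if it exists */ -}}
{{- $uuid := uuidv4 -}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-uuid-secret" -}}
{{- if $existing }}
{{- $uuid = index $existing.data "the-uuid" | b64dec -}}
{{- end }}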
You should explicitly pass the value in as a Helm value; don't try to generate it in the chart.
The other answers to this question highlight a couple of the issues you'll run into. @Utku Özdemir notes that every time you call the Helm uuidv4 function it will create a new random UUID, so you can only call that function once in the chart, ever; and @srr further notes that there's no way to persist a generated value like this, so if you helm upgrade the chart the UUID value will be regenerated, which will cause all of the involved Kubernetes objects to be redeployed.
The Bitnami RabbitMQ chart has an interesting middle road here. One of its configuration options is an "Erlang cookie", also a random string that needs to be consistent across all replicas and upgrades. On an initial install it generates a random value if one isn't provided, and tells you how to retrieve it from a Secret; but if .Release.IsUpgrade then you must provide the value directly, and the error message explains how to get it from your existing deployment.
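A rough sketch of that pattern (not the actual Bitnami template; the erlangCookie value name is only illustrative):
{{- /* generate on first install, require an explicit value on upgrade */ -}}
{{- $cookie := .Values.erlangCookie | default (randAlphaNum 32) -}}
{{- if and .Release.IsUpgrade (not .Values.erlangCookie) -}}
{{- fail "erlangCookie must be set explicitly on upgrade; read it from the existing Secret" -}}
{{- end -}}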
You may be able to get around the "only call uuidv4 once ever" problem by putting the value into a ConfigMap or Secret, and then referencing it from elsewhere. This works only if the only place you use the UUID value is in an environment variable, or something else that can have a value injected from a secret; it won't help if you need it in an annotation or label.
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.name" . }}
data:
  the-uuid: {{ .Values.theUuid | default uuidv4 | b64enc }}
  {{- /* this is the only place uuidv4 ^^^^^^ is called at all */}}

env:
  - name: THE_UUID
    valueFrom:
      secretKeyRef:
        name: {{ template "chart.name" . }}
        key: the-uuid
As suggested in the Helm issue tracker https://github.com/helm/helm/issues/6456, we have to put both components in the same file, and it looks like that's the only solution right now.
It's a surprise that Helm does not support caching a value to share across charts/components. I hope Helm supports this feature in the future.

Can configuration for a program running in a container/pod be placed in a Deployment yaml instead of a ConfigMap yaml?

Can configuration for a program running in a container/pod be placed in a Deployment yaml instead of a ConfigMap yaml, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  template:
    spec:
      containers:
        - env:
            - name: "MyConfigKey"
              value: "MyConfigValue"
Single environment
Putting values in environment variables in the Deployment works.
Problem: you should not work directly in the production environment, so you will need at least one other environment.
Using docker, containers and Kubernetes makes it very easy to create more than one environment.
Multiple environments
When you want to use more than one environment, you want to keep the differences as small as possible. This is important for detecting problems quickly and for limiting the management overhead.
Problem: maintaining the differences between environments while avoiding environment-specific problems (config drift / snowflake servers).
Therefore, keep as much as possible common across environments, e.g. use the same Deployment.
Only use unique instances of ConfigMap, Secret and probably Ingress for each app and environment, as sketched below.
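As a minimal sketch of that split (all names are made up), each environment gets its own ConfigMap with the same name but different data, and the shared Deployment pulls it in via envFrom:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config        # same name in every environment, different data
data:
  MyConfigKey: "value-for-this-environment"
and in the shared Deployment's container spec:
      containers:
        - name: myapp
          envFrom:
            - configMapRef:
                name: myapp-config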
This is my approach when you want to set env vars directly in the Deployment.
If you are using Helm:
Helm values.yaml file:
deployment:
  env:
    enabled: false
    vars:
      KEY1: VALUE1
      KEY2: VALUE2
Deployment template templates/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: ...
          {{- if .Values.deployment.env.enabled }}
          env:
            {{- range $key, $val := .Values.deployment.env.vars }}
            - name: {{ $key }}
              value: {{ $val | quote }}
            {{- end }}
          {{- end }}
          ...
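With that template you can switch the env block on per environment, either in an environment-specific values file or on the command line (the release and chart names here are placeholders):
helm upgrade --install myrelease ./mychart --set deployment.env.enabled=true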
And if you just want to apply it directly with kubectl and a plain deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: ...
          env:
            - name: key1
              value: value1
            - name: key2
              value: value2
          ...
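Then apply it with kubectl (file name as above):
kubectl apply -f deployment.yaml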

How to append Secret/ConfigMap hash prefix properly in Helm?

I want to append the hash of my Secret or ConfigMap contents to the name of the resource in order to trigger a rolling update and keep the old version of that resource around in case there is a mistake in the new configuration.
This can almost be achieved using "helm.sh/resource-policy": keep on the Secret/ConfigMap but these will never be cleaned up. Is there a way of saying 'keep all but the last two' in Helm or an alternative way of achieving this behaviour?
$ helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
Automatically Roll Deployments
In order to update the resource when the Secret or ConfigMap changes, you can add a checksum annotation to your Deployment:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
You can revert to your previous configuration with the helm rollback command.
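For example, assuming a release named myrelease, you can inspect the revision history and roll back to a specific revision:
helm history myrelease
helm rollback myrelease 2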
Update:
Assuming that your ConfigMap is generated from the values.yaml file, you can add a _helpers.tpl function:
{{- define "mychart.configmapChecksumed" -}}
{{ printf "configmap-%s" (.Values.bar | sha256sum) }}
{{- end }}
And use {{ include "mychart.configmapChecksumed" . }} both as the ConfigMap name and as the reference in the Deployment.
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.configmapChecksumed" . }}
  annotations:
    "helm.sh/resource-policy": keep
data:
  config.properties: |
    foo={{ .Values.bar }}
deployment.yaml
...
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config
      volumes:
        - name: config-volume
          configMap:
            # Provide the name of the ConfigMap containing the files you want
            # to add to the container
            name: {{ include "mychart.configmapChecksumed" . }}
Please note that you have to keep the "helm.sh/resource-policy": keep annotation on the ConfigMap, telling Helm not to delete the previous versions.
You cannot use {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} as the ConfigMap name directly, because Helm rendering will fail with:
error calling include: rendering template has a nested reference name

Inheritance helm template

I have close to 25 deployments and services. Each Helm template has the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aa1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: aa1
    spec:
      containers:
        - env:
            - name: {{ .Values.env1 }}
              value: "yes"
          volumeMounts:
            - name: {{ .Values.v1 }}
              mountPath: {{ .Values.v1monthPath }}
              subPath: {{ .Values.v1subpath }}
Similarly, I have the same env and volumeMounts defined in all 25 templates.
Is there any way I can use template inheritance in Helm?
A single Helm .tpl file can contain multiple YAML documents, separated with the --- start-of-document separator. Also, you can use all of the standard Go text/template capabilities, including its looping constructs.
That means that you could write a template like this:
{{- $top := . -}}
{{- range .Values.names -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ . }}
spec:
  ...
  template:
    spec:
      containers:
        - env:
            - name: {{ $top.Values.env1 }}
              value: "yes"
---
apiVersion: v1
kind: Service
metadata:
  name: {{ . }}
spec: { ... }
{{ end -}}
What this does is first to save away the current context; since range resets the current template default context . to the iterator value, we need to remember what its current value is. We then iterate through every value in a list from the values file. (The Sprig list function could create the list in the template.) For each item, we create a Deployment and a Service, each starting with a --- on its own line. When we need the current name, it is .; when we need something from the Helm values, we need to explicitly look it up in $top.Values.
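For completeness, the corresponding values file would just list the names to iterate over (the names key matches the template above; env1 comes from the original question):
names:
  - aa1
  - aa2
  - aa3
env1: SOME_ENV_VAR_NAME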
Another possible approach is to write separate templates for the deployment and service, and then have files invoke each of them. Technically a template only takes one parameter, but you can use list and index to have that one parameter be a list.
{{/* _deployment.tpl */}}
{{- define "deployment" -}}
{{- $name := index . 0 -}}
{{- $top := index . 1 -}}
---
apiVersion: apps/v1
kind: Deployment
...
  name: {{ $name }}
...
          name: {{ $top.Values.env1 }}
...
{{ end -}}
{{/* aa1.yaml */}}
{{- template "deployment" (list "aa1" .) -}}
{{- template "service" (list "aa1" .) -}}
This can get almost arbitrarily complex (you can have conditionals based on the current object name; you can use the template index function to look up values in the values object; and so on). helm template will do some validation and write the rendered template to stdout without invoking Kubernetes, which is helpful for eyeballing it and can be a basis for automated tests.
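For example, assuming the chart lives in the current directory:
helm template my-release . > rendered.yaml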
The text/template "inheritance" capabilities (using e.g. block) aren't a good match here. The documentation for (*text/template.Template).Parse notes that, if Parse is called multiple times, the last define found wins, which gives you room to Parse a base template with a placeholder block and then Parse a refined template that redefines it. Helm doesn't really use this functionality; while it loads all of the YAML and supporting template files into the same template instance, it renders each of the top-level files separately instead of rendering some root template that allows for overrides.

How do I customize PostgreSQL configurations using helm chart?

I'm trying to deploy an application that uses PostgreSQL as a database to my minikube. I'm using Helm as a package manager, and I have added a PostgreSQL dependency to my requirements.yaml. Now the question is, how do I set the postgres user, db and password for that deployment? Here's my templates/application.yaml:
apiVersion: v1
kind: Service
metadata:
  name: {{ template "sgm.fullname" . }}-service
spec:
  type: NodePort
  selector:
    app: {{ template "sgm.fullname" . }}
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "sgm.fullname" . }}-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ template "sgm.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ template "sgm.fullname" . }}
    spec:
      containers:
        - name: sgm
          image: mainserver/sgm
          env:
            - name: POSTGRES_HOST
              value: {{ template "postgres.fullname" . }}.default.svc.cluster.local
I've tried adding a ConfigMap as stated in the postgres Helm chart's GitHub README, but it seems like I'm doing something wrong.
This is lightly discussed in the Helm documentation: your chart's values.yaml file contains configuration blocks for the charts it includes. The GitHub page for the Helm stable/postgresql chart lists out all of the options.
Either in your chart's values.yaml file, or in a separate YAML file you pass to the helm install -f option, you can set parameters like
postgresql:
  postgresqlDatabase: stackoverflow
  postgresqlPassword: enterImageDescriptionHere
Note that the chart doesn't create a non-admin user (unlike its sibling MySQL chart). If you're okay with the "normal" database user having admin-level privileges (like creating and deleting databases) then you can set postgresqlUser here too.
In your own chart you can reference these values like any other:
- name: PGUSER
  value: {{ .Values.postgresql.postgresqlUser }}
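For the password, a common pattern is to reference the Secret that the PostgreSQL subchart creates rather than repeating the literal value. The exact Secret name and key vary by chart version, so treat the ones below as assumptions to verify against your chart:
- name: PGPASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Release.Name }}-postgresql   # assumed Secret name created by the subchart
      key: postgresql-password               # assumed key; check the chart's README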