Reuse uuid in helm charts

I am writing a Helm chart that creates one Deployment and one StatefulSet component.
Now I want to generate a UUID and pass the same value to both k8s components.
I am using the uuid function to generate the UUID, but I need help with how to pass this value to both components.
Here is my chart folder structure --
projectdir
  chart1
    templates
      statefulset.yaml
  chart2
    templates
      deployment.yaml
  helperchart
    templates
      _helpers.tpl
I have to write the logic to generate the uuid in _helpers.tpl.

Edit: It seems defining it in the _helpers.tpl does not work - thank you for pointing it out.
I have looked it up a bit, and it seems the only way to achieve this currently is to put both manifests, separated by ---, into the same file under templates/. See the following example, where the UUID is defined on the first line and then used in both the Deployment and the StatefulSet:
{{- $mySharedUuid := uuidv4 -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "uuid-test.fullname" . }}-1
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "uuid-test.fullname" . }}-2
  labels:
    {{- include "uuid-test.labels" . | nindent 4 }}
  annotations:
    my-uuid: {{ $mySharedUuid }}
spec:
  ...
After templating, the output is:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uuid-test-app-1
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: uuid-test-app-2
  labels:
    helm.sh/chart: uuid-test-0.1.0
    app.kubernetes.io/name: uuid-test
    app.kubernetes.io/instance: uuid-test-app
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    my-uuid: fe0346f5-a963-4ca1-ada0-af17405f3155
spec:
  ...
See the same issue: https://github.com/helm/helm/issues/6456
Note that this approach will still cause the UUID to be regenerated when you do a helm upgrade. To circumvent that, you would need to use another workaround along with this one.
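One workaround for the upgrade problem (not from the linked issue, just a sketch) is Helm's lookup function: read the value back from an object that already exists in the cluster and only fall back to a fresh uuidv4 on the first install. The Secret name below is hypothetical, and lookup returns an empty result during helm template and --dry-run, so the fallback branch is taken there:
{{- /* sketch: reuse the previously stored UUID if the (hypothetical) Secret already exists */}}
{{- $uuidSecret := lookup "v1" "Secret" .Release.Namespace (printf "%s-uuid" (include "uuid-test.fullname" .)) }}
{{- $mySharedUuid := uuidv4 }}
{{- if $uuidSecret }}
{{- $mySharedUuid = index $uuidSecret.data "the-uuid" | b64dec }}
{{- end }}
The same chart would then also need to render a Secret holding the value, so there is something to look up on the next upgrade.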

You should explicitly pass the value in as a Helm value; don't try to generate it in the chart.
The other answers to this question highlight a couple of the issues you'll run into. @Utku Özdemir notes that every time you call the Helm uuidv4 function it will create a new random UUID, so you can only call that function once in the chart ever; and @srr further notes that there's no way to persist a generated value like this, so if you helm upgrade the chart the UUID value will be regenerated, which will cause all of the involved Kubernetes objects to be redeployed.
The Bitnami RabbitMQ chart has an interesting middle road here. One of its configuration options is an "Erlang cookie", also a random string that needs to be consistent across all replicas and upgrades. On an initial install it generates a random value if one isn't provided, and tells you how to retrieve it from a Secret; but if .Release.IsUpgrade then you must provide the value directly, and the error message explains how to get it from your existing deployment.
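A rough sketch of that pattern (this is not the actual Bitnami template; the helper name and the auth.erlangCookie value key are assumptions):
{{- /* hypothetical helper: random on first install, must be supplied explicitly on upgrade */ -}}
{{- define "mychart.erlangCookie" -}}
{{- if .Values.auth.erlangCookie -}}
{{- .Values.auth.erlangCookie -}}
{{- else if .Release.IsUpgrade -}}
{{- fail "auth.erlangCookie is required on upgrade; read it from the existing Secret" -}}
{{- else -}}
{{- randAlphaNum 32 -}}
{{- end -}}
{{- end -}}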
You may be able to get around the "only call uuidv4 once ever" problem by putting the value into a ConfigMap or Secret, and then referencing it from elsewhere. This works only if the only place you use the UUID value is in an environment variable, or something else that can have a value injected from a secret; it won't help if you need it in an annotation or label.
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.name" . }}
data:
  the-uuid: {{ .Values.theUuid | default uuidv4 | b64enc }}
  {{- /* this is the only place uuidv4 ^^^^^^ is called at all */}}

env:
  - name: THE_UUID
    valueFrom:
      secretKeyRef:
        name: {{ template "chart.name" . }}
        key: the-uuid
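If you instead pass the value in explicitly, as suggested at the top of this answer, the commands might look like this (release and chart names are placeholders; reuse the same value on every upgrade):
helm install my-release ./my-chart --set theUuid=$(uuidgen)
helm upgrade my-release ./my-chart --set theUuid=<the value from the first install>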

As suggested in the Helm issue tracker (https://github.com/helm/helm/issues/6456), we have to put both components in the same file, and it looks like that's the only solution right now.
It's a surprise that Helm does not support caching a value so it can be shared across charts/components. I hope Helm supports this feature in the future.

Related

How do I Reference Service's Property from another Kubernetes Entity?

Is there any option to reference a Service's property from another entity, like a ConfigMap or Deployment? To be more specific, I want to put the Service's name in a ConfigMap, not by hand, but rather by linking it programmatically.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
  namespace: ConfigMap-Namespace
data:
  ServiceName: <referenced-service-name>
---
apiVersion: v1
kind: Service
metadata:
  name: service-name   # that name I want to put in the ConfigMap
  namespace: ConfigMap-Namespace
spec:
  ....
Thanks...
Using plain kubectl, there is no way to dynamically fill in content like this. There are very limited exceptions around injecting values into environment variables in Pods (and PodSpecs inside other objects) but a ConfigMap must contain fixed content.
In this example, the Service object name is fixed, and I'd just embed the same fixed string into the ConfigMap.
If you were using a templating engine like Helm then you could call the same template code to render the Service name in both places. For example:
{{- define "service.name" -}}
{{ .Release.Name }}-service
{{- end -}}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "service.name" . }}
...
---
apiVersion: v1
kind: ConfigMap
metadata: { ... }
data:
serviceName: {{ include "service.name" . }}

Access an input file from vault template injector

I use Vault to retrieve some secrets that I put inside a configuration file. Everything works fine until this configuration gets bigger and I want to split it into sub-configs in a folder. The issue is that those files can't be imported by the Go templating that is used to fill in the passwords.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-app
spec:
  ...
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-init-first: "true"
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/secret-volume-path-my-config: "/my-path/etc"
        vault.hashicorp.com/agent-inject-file-my-config: "my-app.conf"
        vault.hashicorp.com/agent-inject-secret-my-config: secret/data/my-app/config
        vault.hashicorp.com/agent-inject-template-my-config: |
          {{- $file := .Files }}
          {{ .Files.Get "configurations/init.conf" }}
          {{- with secret "secret/data/my-app/config" -}}
          ...
          {{- end }}
The file configurations/init.conf, for example, doesn't seem to be visible to the Vault injector and so simply gets replaced by <no value>. Is there a way to make the files in configurations/* visible to the Vault injector, maybe by mounting them somewhere?

Helm create template for template

Is it possible to create a helm template for a template file inside the templates/ folder? My use case is the following: there are several Kubernetes deployment files which differ only in the deployment name and the Docker image that is pulled from a repo (e.g. deployments for service1, service2, etc.). I want to create one chart to deploy all those services. Currently there is a lot of copy-paste in my deployment templates, and I want to have some kind of template for those templates. Also, all those deployment templates will have different values.yaml files in different environments (e.g. service1 and service2 will use values-prod.yaml in the prod environment and values-stage.yaml in the staging environment).
If not possible, what are the alternative solutions? Thanks
For those who are interested in a solution:
As proposed by David Maze in the comments to my question, I should use something similar to {{ range .Values.resourceNames }} in the template file (e.g. deployment.yaml) and separate the resources with --- at the beginning of each loop iteration (Kubernetes allows several resources to be defined in a single file separated by ---), where .Values.resourceNames is just an example. With that, the deployment.yaml template could look like the following:
{{- range .Values.resourceNames }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .name }}-deployment
  labels:
    app: {{ .name }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    ...
{{- end }}
where the values.yaml file looks like the following:
resourceNames:
  - name: service1
    replicaCount: 2
  - name: service2
    replicaCount: 1
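With that layout, each environment is then installed with its own values file, for example (release and chart names are placeholders):
helm install services ./mychart -f values-prod.yaml
helm install services ./mychart -f values-stage.yaml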

Helm not deleting all the related resources of a chart

I had a helm release whose deployment was not successful. I tried uninstalling it so that I can create a fresh one.
The weird thing I found is that some resources were created partially (a couple of Jobs) because of the failed deployment. Uninstalling the failed release using Helm does not remove those partially created resources, which could cause issues when I try to install the release again with some changes.
My question is: is there a way I can ask Helm to delete all the related resources of a release completely?
Since there are no details on the partially created resources, one scenario could be where helm uninstall/delete does not delete the PVCs in the namespace. We resolved this by creating a separate namespace to deploy the application into; when the Helm release is uninstalled/deleted, we delete the namespace as well. For a fresh deployment, create the namespace again and do the Helm installation in it for a clean install. Alternatively, you can change the reclaimPolicy to "Delete" while creating the StorageClass (by default the reclaimPolicy is Retain), as mentioned in the post below.
PVC issue on helm: https://github.com/goharbor/harbor-helm/issues/268#issuecomment-505822451
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exist
  clusterNamespace: rook-ceph-system
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Delete
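For the separate-namespace approach described above, the clean-up and reinstall sequence might look roughly like this (release, chart, and namespace names are placeholders):
helm uninstall my-release -n my-app-ns
kubectl delete namespace my-app-ns
kubectl create namespace my-app-ns
helm install my-release ./my-chart -n my-app-ns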
As you said in the comment, the partially created object is a Job. Helm has a concept named hooks, which also run Jobs in different situations, like pre-install, post-install, etc. I think you used one of these.
The YAML of an example is given below. If you set "helm.sh/hook-delete-policy": hook-failed instead of hook-succeeded, then the Job will be deleted when the hook fails. For more, please see the official docs on Helm hooks.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name | quote }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: {{ .Release.Name | quote }}
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
    spec:
      restartPolicy: Never
      containers:
        - name: pre-install-job
          image: "ubuntu"
          #command: ["/bin/sleep","{{ default "10" .Values.hook.job.sleepyTime }}"]
          args:
            - /bin/bash
            - -c
            - echo "pre-install hook"

How to append Secret/ConfigMap hash prefix properly in Helm?

I want to append the hash of my Secret or ConfigMap contents to the name of the resource in order to trigger a rolling update and keep the old version of that resource around in case there is a mistake in the new configuration.
This can almost be achieved using "helm.sh/resource-policy": keep on the Secret/ConfigMap but these will never be cleaned up. Is there a way of saying 'keep all but the last two' in Helm or an alternative way of achieving this behaviour?
$ helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
Automatically Roll Deployments
In order to update the Deployment when a Secret or ConfigMap changes, you can add a checksum annotation to your Deployment:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
You can revert to your previous configuration with the helm rollback command.
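For example, to inspect the revision history and roll back to a specific revision (the release name is a placeholder):
helm history my-release
helm rollback my-release 2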
Update:
Assuming that your ConfigMap is generated from the values.yaml file, you can add a _helpers.tpl function:
{{- define "mychart.configmapChecksum" -}}
{{ printf "configmap-%s" (.Values.bar | sha256sum) }}
{{- end }}
And use {{ include "mychart.configmapChecksumed" . }} both as configmap name and reference in deployment.
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.configmapChecksumed" . }}
  annotations:
    "helm.sh/resource-policy": keep
data:
  config.properties: |
    foo={{ .Values.bar }}
deployment.yaml
...
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: {{ include "mychart.configmapChecksumed" . }}
Please note that you have to keep the "helm.sh/resource-policy": keep annotation on the ConfigMap, telling Helm not to delete the previous versions.
You cannot use {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} as the ConfigMap name directly, because Helm rendering will fail with:
error calling include: rendering template has a nested reference name