Helm hook to check whether a release already exists - kubernetes

I would like to create a Helm hook that checks whether the chart release already exists before an install or upgrade. If it does not exist, a fresh install should be started. (Checking the revision would also be nice.)
I am having trouble writing the spec for this; here is what I have so far:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # hooks are defined here
    "helm.sh/hook": pre-upgrade,pre-install
    #"helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
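For reference, Helm exposes .Release.IsInstall, .Release.IsUpgrade and the current .Release.Revision inside templates, so a hook template can branch on them directly instead of probing the cluster. A minimal sketch along those lines (the job name suffix, the busybox image and the echo command are illustrative placeholders, not part of the chart above):

{{- if .Release.IsUpgrade }}
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-release-check"
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: release-check
          image: busybox:1.36
          command: ["sh", "-c"]
          # On an upgrade, .Release.Revision is the revision being deployed.
          args:
            - "echo upgrading {{ .Release.Name }} to revision {{ .Release.Revision }}"
{{- end }}

On a first install the whole Job is skipped, so the chart behaves like a plain fresh install with no extra check.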

Related

Helm lookup always empty

While deploying a Kubernetes application, I want to check whether a resource is already present. If it is, it should not be rendered. To achieve this behaviour I use Helm's lookup function, but it always seems to be empty while deploying (no dry run). Any ideas what I am doing wrong?
---
{{- if not (lookup "v1" "ServiceAccount" "my-namespace" "my-sa") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}@{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
Running the corresponding kubectl command lists the expected service account:
kubectl get ServiceAccount my-sa -n my-namespace
Helm version: 3.5.4
I think you cannot use this if-statement to validate what you want.
The lookup function returns the objects found by your lookup (an empty result when nothing matches), so if you want to validate that no ServiceAccount with the properties you specified exists, you should check that the returned result is empty.
Try something like:
---
{{ if eq (len (lookup "v1" "ServiceAccount" "my-namespace" "my-sa")) 0 }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}@{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
see: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function

DB Migration with helm

I am trying to migrate our Cassandra tables to use Liquibase. The idea is simple: have a pre-install and pre-upgrade job that runs some Liquibase scripts and manages our database upgrade.
For that purpose I have created a custom Docker image that contains the actual Liquibase CLI, which I can then invoke from the job. For example:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-update-job"
  namespace: spring-k8s
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install, pre-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-cassandra-update-job"
      namespace: spring-k8s
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: pre-install-upgrade-job
          image: "lq/liquibase-cassandra:1.0.0"
          command: ["/bin/bash"]
          args:
            - "-c"
- "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.username }} --url {{ .Values.liquibase.url | squote }} --file {{ .Values.liquibase.file }}"
Where .Values.liquibase.file == databaseChangelog.json.
So this image lq/liquibase-cassandra:1.0.0 basically has a script liquibase-cassandra.sh that, when passed some arguments, can do its magic and update the DB schema (not going to go into the details).
The problem is that last argument: --file {{ .Values.liquibase.file }}. This file obviously does not reside in the image, but in each microservice's repository.
I need a way to "copy" that file into the image so that I can invoke it. One way would be to rebuild lq/liquibase-cassandra all the time (with the same lifecycle as the project itself) and copy the file into it, but that takes time and seems cumbersome at best. What am I missing?
It turns out that Helm hooks can be used for other things, not only Jobs. As such, I can load this file into a ConfigMap before the Job even starts (the file I care about resides in resources/databaseChangelog.json):
apiVersion: v1
kind: ConfigMap
metadata:
  name: "liquibase-changelog-config-map"
  namespace: spring-k8s
  annotations:
    helm.sh/hook: pre-install, pre-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
    helm.sh/hook-weight: "1"
data:
{{ (.Files.Glob "resources/*").AsConfig | indent 2 }}
And then just reference it inside the job:
.....
spec:
  restartPolicy: Never
  volumes:
    - name: liquibase-changelog-config-map
      configMap:
        name: liquibase-changelog-config-map
        defaultMode: 0755
  containers:
    - name: pre-install-upgrade-job
      volumeMounts:
        - name: liquibase-changelog-config-map
          mountPath: /liquibase-changelog-file
      image: "lq/liquibase-cassandra:1.0.0"
      command: ["/bin/bash"]
      args:
        - "-c"
- "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.username }} --url {{ .Values.liquibase.url | squote }} --file {{ printf "/liquibase-changelog-file/%s" .Values.liquibase.file }}"

Error running script in Helm pre-install hook - script not found

I have defined a Helm pre-install hook in one of my microservices, to create a user, database and several tables in PostgreSQL before the service is started for the first time. My pre-install.yaml file is:
apiVersion: batch/v1
kind: Job
metadata:
  name: psql-init-job
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
    # hook-succeeded, hook-failed
spec:
  template:
    metadata:
      name: psql-init-job
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: psql-init-job
          image: registry.local:5000/sandbox/init-pgsql:1.0.0
          imagePullPolicy: "IfNotPresent"
          command: ["/bin/sh", "-c", "/k8s-init.sh"]
          env:
            - name: DATABASE_HOST
              value: psql-postgresql.svc.cluster.local
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_WAIT_TIME
              value: "60s"
  backoffLimit: 0
The Dockerfile includes:
# Copy the SQL script.
COPY ./docker/pgsql/psql/k8s-init.sh /k8s-init.sh
# Copy the SQL files.
COPY ./docker/pgsql/psql/k8s-init-postgres.sql /k8s-init-postgres.sql
COPY ./docker/pgsql/psql/k8s-init-test.sql /k8s-init-test.sql
The script that it's trying to run is:
#!/bin/sh
# Import the SQL files.
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/postgres --jobs 4 /k8s-init-postgres.sql
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/test --jobs 4 /k8s-init-test.sql
When I install the Helm chart that contains the pre-install hook, I get a startError from the microservice's pod:
/bin/sh: /k8s-init.sh: not found
Any idea why it can't find the k8s-init.sh script?
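One way to narrow this down is to temporarily change the hook's command so the container first lists the file and then runs it through sh explicitly; "not found" from /bin/sh usually means either the file really is not at that path or its shebang/line endings are broken (for example CRLF from a Windows checkout), and this variant separates those cases. A debugging sketch, not the chart's original command:

          command: ["/bin/sh", "-c", "ls -l /k8s-init.sh && sh /k8s-init.sh"]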

How does one pass an override file to specific Service YAML file using Helm?

I am trying to pass a toleration when deploying a chart located in stable. The toleration should be applied to a specific YAML file in the templates directory, NOT to the values.yaml file, as happens by default.
I've applied it using patch and I can see that the change I need would work if it were applied to the right service, which is a DaemonSet.
Currently I'm trying:
helm install -f tolerations.yaml --name release_here
This simply creates a one-off entry when running helm get for release_here, and it does not end up in the correct service YAML.
Quoting your requirement:
"The toleration should be applied to a specific YAML file in the templates directory."
First, in order to make that happen, your particular chart's template needs to allow such end-user customization.
Here is an example based on the stable/kiam chart, the definition of kiam/templates/server-daemonset.yaml:
{{- if .Values.server.enabled -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: {{ template "kiam.name" . }}
    chart: {{ template "kiam.chart" . }}
    component: "{{ .Values.server.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kiam.fullname" . }}-server
spec:
  selector:
    matchLabels:
      app: {{ template "kiam.name" . }}
      component: "{{ .Values.server.name }}"
      release: {{ .Release.Name }}
  template:
    metadata:
    {{- if .Values.server.podAnnotations }}
      annotations:
{{ toYaml .Values.server.podAnnotations | indent 8 }}
    {{- end }}
      labels:
        app: {{ template "kiam.name" . }}
        component: "{{ .Values.server.name }}"
        release: {{ .Release.Name }}
      {{- if .Values.server.podLabels }}
{{ toYaml .Values.server.podLabels | indent 8 }}
      {{- end }}
    spec:
      serviceAccountName: {{ template "kiam.serviceAccountName.server" . }}
      hostNetwork: {{ .Values.server.useHostNetwork }}
      {{- if .Values.server.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.server.nodeSelector | indent 8 }}
      {{- end }}
      tolerations: <---- TOLERATIONS !
{{ toYaml .Values.server.tolerations | indent 8 }}
      {{- if .Values.server.affinity }}
      affinity:
{{ toYaml .Values.server.affinity | indent 10 }}
      {{- end }}
      volumes:
        - name: tls
Override the default values.yaml with your custom values to set the toleration in the Pod spec of the DaemonSet:
server:
  enabled: true
  tolerations: ## Agent container resources
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: foo.bar.com/role
                operator: In
                values:
                  - master
Render the resulting manifest file to see how it will look when overriding the default values via the --values/--set argument of helm install/upgrade:
helm template --name my-release . -x templates/server-daemonset.yaml --values custom-values.yaml
Rendered file (output truncated):
---
# Source: kiam/templates/server-daemonset.yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: kiam
    chart: kiam-2.5.1
    component: "server"
    heritage: Tiller
    release: my-release
  name: my-release-kiam-server
spec:
  selector:
    matchLabels:
      app: kiam
      component: "server"
      release: my-release
  template:
    metadata:
      labels:
        app: kiam
        component: "server"
        release: my-release
    spec:
      serviceAccountName: my-release-kiam-server
      hostNetwork: false
      tolerations:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: foo.bar.com/role
                    operator: In
                    values:
                      - master
      volumes:
...
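To apply the override at install time, pass the same custom values file to helm install (Helm 2 syntax, matching the Tiller-era output above; the release and file names are illustrative):

helm install stable/kiam --name my-release --values custom-values.yaml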
I hope this will help you to solve your problem.

How can I rename a Helm chart once it has been created?

I have created a helm chart named "abc" with the command
helm create abc
Now when I install this chart, all the Kubernetes resources created will have a name containing "abc".
Now I have to rename the chart "abc" to "xyz". If I use
helm install --name xyz ./abc
only the chart name is changed to xyz; the resources inside it still contain "abc".
I need the entire chart (with its resources) to be renamed. Do I have any option for this?
I have met this problem many times, and the simplest solution I found takes just two steps (see the shell sketch below):
1. Find and replace the name in the chart files.
2. Rename the chart directory.
This is very simple if your chart name is reasonably unique. By default the chart name is used in these files:
Chart.yaml
templates/_helpers.tpl
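A minimal shell sketch of those two steps, assuming the chart lives in ./abc and the string abc is unique enough not to clobber anything else (GNU sed; on macOS use sed -i ''):

grep -rl 'abc' ./abc | xargs sed -i 's/abc/xyz/g'   # replace the name in Chart.yaml, _helpers.tpl, ...
mv ./abc ./xyz                                      # rename the chart directory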
Alternatively, you can access xyz with {{ .Release.Name }} and update the resource names to use {{ .Release.Name }}, so that the names are picked up dynamically every time:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/name: {{ .Values.app.dbName }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.app.dbName }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.app.dbName }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - image: mysql:5.6
          name: "{{ .Release.Name }}-mysql"  # or just {{ .Release.Name }}