I am new to Helm and I have tried to deploy a few tutorial charts. I have a couple of queries:
I have a Kubernetes Job which I need to deploy. Is it possible to deploy a Job via Helm?
Also, my Kubernetes Job currently runs from my custom Docker image and executes a bash script to complete the job. I want to pass a few parameters to this chart/job so that the bash commands take them as inputs. That's the reason I decided to move to Helm: it provides more flexibility. Is that possible?
You can use Helm. Helm installs all the Kubernetes resources (Jobs, Pods, ConfigMaps, Secrets) defined inside the templates folder, and you can control the order of installation with Helm hooks. Helm offers hooks such as pre-install, post-install, and pre-delete with respect to the deployment lifecycle. If two or more jobs use the same hook (e.g. pre-install), their weights are compared to decide the installation order.
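For example, a minimal sketch of how weights order two pre-install hook jobs (lower weights run first; the names job-a and job-b are hypothetical):

# templates/job-a.yaml -- installed first
metadata:
  name: job-a
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "1"
---
# templates/job-b.yaml -- installed second
metadata:
  name: job-b
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "2"

A typical chart layout for this approach: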
|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
Many times you need to change the variables in the script per environment, so instead of hardcoding variables in the script, you can pass parameters to it by setting them as environment variables on your custom Docker image's container. Then change the values in values.yaml instead of changing your script.
values.yaml
key1:
  someKey1: value1
key2:
  someKey2: value1
post-install.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  labels:
    provider: stackoverflow
    microservice: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        provider: stackoverflow
        microservice: {{ template "name" . }}
        release: "{{ .Release.Name }}"
        app: {{ template "fullname" . }}
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "custom-docker-image:v1"
        command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }}]
        env:
        # Setting KEY1 as an environment variable in the container;
        # its value in the container is value1 (read from values.yaml)
        - name: KEY1
          value: {{ .Values.key1.someKey1 }}
        - name: KEY2
          value: {{ .Values.key2.someKey2 }}
runjob.sh
#!/bin/sh
# The parameters are available as environment variables
echo $KEY1
echo $KEY2
# some stuff
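To change the parameters per run without editing values.yaml, you can also override them on the command line; a minimal sketch (the release name my-release is hypothetical):

helm install my-release ./mychart --set key1.someKey1=foo --set key2.someKey2=bar
# Helm 2 syntax: helm install --name my-release ./mychart --set key1.someKey1=foo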
You can use Helm hooks to run Jobs. Depending on how you set up your annotations, you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the docs is as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep", "{{ default "10" .Values.sleepyTime }}"]
You can pass your parameters as secrets or configMaps to your job as you would to a pod.
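For instance, a minimal sketch of the hook Job's container reading every key of a ConfigMap as environment variables (the ConfigMap name job-params is hypothetical):

spec:
  template:
    spec:
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        envFrom:
        - configMapRef:
            name: job-params # hypothetical ConfigMap holding the job's parameters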
I had a similar scenario where I had a job I wanted to pass a variety of arguments to. I ended up doing something like this:
Template:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job # Job names must be lowercase RFC 1123 names
spec:
  template:
    spec:
      containers:
      - name: my-job
        image: myImage
        args: {{ .Values.args }}
Command (powershell):
helm template helm-chart --set "args={arg1\, arg2\, arg3}" | kubectl apply -f -
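For what it's worth, my reading of why the escaping works: {arg1\, arg2\, arg3} is parsed by Helm as a one-element list whose single string keeps the commas, so the template renders it as

args: [arg1, arg2, arg3]

which YAML then reads back as a three-element flow sequence, exactly what the container's args field needs.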
I am trying to use a constant in skaffold and to access it in a skaffold profile.
For example: export SOME_IP=199.99.99.99 && skaffold run -p dev
skaffold.yaml
...
deploy:
  helm:
    flags:
      global:
        - "--debug"
    releases:
      - name: ***
        chartPath: ***
        imageStrategy:
          helm:
            explicitRegistry: true
        createNamespace: true
        namespace: "***"
        setValueTemplates:
          SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
and in the dev.yaml profile I need to access it somehow, something like {{ .Template.SKAFFOLD_SOME_IP }}, and it should be rendered as 199.99.99.99.
I tried to use the skaffold envTemplate and setValueTemplates fields, but had no success, and could not find any examples on the web.
Basically I found a solution which I truly don't like, but it works.
In the dev profile (values.dev.yaml) I added a placeholder:
_anchors_:
  - &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
The <IPAddr_01_TAG> will be replaced with the SOME_IP constant, which becomes 199.99.99.99 at skaffold run.
Now to run skaffold I will do:
export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
so after the above sed, the dev profile (values.dev.yaml) contains the SOME_IP constant instead of the placeholder:
_anchors_:
  - &_IPAddr_01 "199.99.99.99"
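A variation on the same idea that avoids editing the file in place is envsubst from GNU gettext: keep a template file (the .tmpl name here is hypothetical) with "$SOME_IP" as the placeholder and render it before each run:

export SOME_IP=199.99.99.99
envsubst '$SOME_IP' < values/values.dev.yaml.tmpl > values/values.dev.yaml
skaffold run -p dev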
To use the SKAFFOLD_SOME_IP variable that you have set in your skaffold.yaml, you can write the chart template for a Kubernetes Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: {{ .Values.image }}
        env:
        - name: SKAFFOLD_SOME_IP
          value: "{{ .Values.SKAFFOLD_SOME_IP }}"
This will create an environment variable SKAFFOLD_SOME_IP in the Kubernetes pods. You can then read it from your application, for example in Go:
os.Getenv("SKAFFOLD_SOME_IP")
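A minimal, self-contained sketch of that lookup in Go:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Value injected by the env block of the Deployment above.
	someIP := os.Getenv("SKAFFOLD_SOME_IP")
	fmt.Println("SKAFFOLD_SOME_IP =", someIP)
}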
I am trying to migrate our Cassandra tables to use Liquibase. Basically the idea is trivial: have a pre-install and pre-upgrade job that runs some Liquibase scripts and manages our database upgrade.
For that purpose I have created a custom Docker image that contains the actual Liquibase CLI, which I can then invoke from the job. For example:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-update-job"
  namespace: spring-k8s
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install, pre-upgrade
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-cassandra-update-job"
      namespace: spring-k8s
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: pre-install-upgrade-job
        image: "lq/liquibase-cassandra:1.0.0"
        command: ["/bin/bash"]
        args:
          - "-c"
          - "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.password }} --url {{ .Values.liquibase.url | squote }} --file {{ .Values.liquibase.file }}"
Where .Values.liquibase.file == databaseChangelog.json.
So this image lq/liquibase-cassandra:1.0.0 basically has a script liquibase-cassandra.sh that, when passed some arguments, can do its magic and update the DB schema (not going to go into the details).
The problem is that last argument: --file {{ .Values.liquibase.file }}. This file obviously resides not in the image, but in each micro-service's repository.
I need a way to "copy" that file into the image so that I can invoke it. One way would be to rebuild lq/liquibase-cassandra every time (with the same lifecycle as the project itself) and copy the file in, but that takes time and seems cumbersome at the least. What am I missing?
It turns out that Helm hooks can be used for other things, not only Jobs. As such, I can load this file into a ConfigMap before the Job even starts (the file I care about resides in resources/databaseChangelog.json):
apiVersion: v1
kind: ConfigMap
metadata:
  name: "liquibase-changelog-config-map"
  namespace: spring-k8s
  annotations:
    helm.sh/hook: pre-install, pre-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
    helm.sh/hook-weight: "1"
data:
{{ (.Files.Glob "resources/*").AsConfig | indent 2 }}
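For reference, .Files.Glob("resources/*").AsConfig renders each matched file as a filename-to-content entry, so the rendered data section should come out roughly like this (content elided):

data:
  databaseChangelog.json: |-
    { ... }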
And then just reference it inside the job:
.....
spec:
  restartPolicy: Never
  volumes:
  - name: liquibase-changelog-config-map
    configMap:
      name: liquibase-changelog-config-map
      defaultMode: 0755
  containers:
  - name: pre-install-upgrade-job
    volumeMounts:
    - name: liquibase-changelog-config-map
      mountPath: /liquibase-changelog-file
    image: "lq/liquibase-cassandra:1.0.0"
    command: ["/bin/bash"]
    args:
      - "-c"
      - "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.password }} --url {{ .Values.liquibase.url | squote }} --file {{ printf "/liquibase-changelog-file/%s" .Values.liquibase.file }}"
I have defined a Helm pre-install hook in one of my microservices, to create a user, database and several tables in PostgreSQL before the service is started for the first time. My pre-install.yaml file is:
apiVersion: batch/v1
kind: Job
metadata:
  name: psql-init-job
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
    # hook-succeeded, hook-failed
spec:
  template:
    metadata:
      name: psql-init-job
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: psql-init-job
        image: registry.local:5000/sandbox/init-pgsql:1.0.0
        imagePullPolicy: "IfNotPresent"
        command: ["/bin/sh", "-c", "/k8s-init.sh"]
        env:
        - name: DATABASE_HOST
          value: psql-postgresql.svc.cluster.local
        - name: DATABASE_PORT
          value: "5432"
        - name: DATABASE_WAIT_TIME
          value: "60s"
  backoffLimit: 0
The Dockerfile includes:
# Copy the SQL script.
COPY ./docker/pgsql/psql/k8s-init.sh /k8s-init.sh
# Copy the SQL files.
COPY ./docker/pgsql/psql/k8s-init-postgres.sql /k8s-init-postgres.sql
COPY ./docker/pgsql/psql/k8s-init-test.sql /k8s-init-test.sql
The script that it's trying to run is:
#!/bin/sh
# Import the SQL files.
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/postgres --jobs 4 /k8s-init-postgres.sql
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/test --jobs 4 /k8s-init-test.sql
When I install the Helm chart that contains the pre-install hook, I get a startError from the microservice's pod:
/bin/sh: /k8s-init.sh: not found
Any idea why it can't find the k8s-init.sh script?
I'm deploying a Kubernetes stateful set and I would like to get the pod index inside the helm chart so I can configure each pod with this pod index.
For example in the following template I'm using the variable {{ .Values.podIndex }} to retrieve the pod index in order to use it to configure my app.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: Always
        name: {{ .Values.name }}
        command: ["launch"]
        args: ["-l", "{{ .Values.podIndex }}"]
        ports:
        - containerPort: 4000
      imagePullSecrets:
      - name: gitlab-registry
You can't do this in the way you're describing.
Probably the best path is to change your Deployment into a StatefulSet. Each pod launched from a StatefulSet has an identity, and each pod's hostname gets set to the name of the StatefulSet plus an index. If your launch command looks at hostname, it will see something like name-0 and know that it's the first (index 0) pod in the StatefulSet.
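As an illustration, a minimal entrypoint sketch that derives the index from the hostname (assuming the launch command from the question):

#!/bin/sh
# In a StatefulSet, the pod hostname is <statefulset-name>-<ordinal>, e.g. name-0.
POD_INDEX="${HOSTNAME##*-}" # strip everything up to the last '-'
exec launch -l "$POD_INDEX"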
A second path would be to create n single-replica Deployments using Go templating. This wouldn't be my preferred path, but you can:
# Note: inside range, the dot is rebound to the loop element,
# so use $ to reach the root context.
{{ range $podIndex := until (int .Values.replicaCount) -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $.Values.name }}-{{ $podIndex }}
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: {{ $.Values.name }}
        command: ["launch"]
        args: ["-l", "{{ $podIndex }}"]
{{ end -}}
The actual flow here is that Helm reads in all of the template files and produces a block of YAML files, then submits these to the Kubernetes API server (with no templating directives at all), and the Kubernetes machinery acts on it. You can see what's being submitted by running helm template. By the time a Deployment is creating a Pod, all of the template directives have been stripped out; you can't make fields in the pod spec dependent on things like which replica it is or which node it got scheduled on.
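For example, to inspect the fully rendered YAML before anything reaches the cluster (the chart path and value here are hypothetical):

helm template ./mychart --set replicaCount=3
# Helm 3 also accepts a release name first: helm template my-release ./mychart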
I am trying to set the name of the Pod created by my Deployment, i.e. via name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod below:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
      - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
        name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
        ports:
        - containerPort: 1234
      imagePullSecrets:
      - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
  ingress:
    enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod to a plain string value like "podname1234", and it isn't used. I even tried removing the name setting entirely, and the resulting pod name remains the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
If you do look up the Pod and kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need to use this name is if you need to kubectl logs (or, more rarely, kubectl exec) in a multi-container pod; if you are in this case, you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
  - name: container
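Usage would then look something like this, with the generated pod name from the question and the short fixed container name:

kubectl logs chartname-project1234-module5678-dc7db787-skqvv -c container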