I have defined a Helm pre-install hook in one of my microservices, to create a user, database and several tables in PostgreSQL before the service is started for the first time. My pre-install.yaml file is:
apiVersion: batch/v1
kind: Job
metadata:
name: psql-init-job
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": hook-succeeded
# hook-succeeded, hook-failed
spec:
template:
metadata:
name: psql-init-job
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
restartPolicy: Never
containers:
- name: psql-init-job
image: registry.local:5000/sandbox/init-pgsql:1.0.0
imagePullPolicy: "IfNotPresent"
command: ["/bin/sh", "-c", "/k8s-init.sh" ]
env:
- name: DATABASE_HOST
value: psql-postgresql.svc.cluster.local
- name: DATABASE_PORT
value: "5432"
- name: DATABASE_WAIT_TIME
value: "60s"
backoffLimit: 0
The Dockerfile includes:
# Copy the SQL script.
COPY ./docker/pgsql/psql/k8s-init.sh /k8s-init.sh
# Copy the SQL files.
COPY ./docker/pgsql/psql/k8s-init-postgres.sql /k8s-init-postgres.sql
COPY ./docker/pgsql/psql/k8s-init-test.sql /k8s-init-test.sql
The script that it's trying to run is:
#!/bin/sh
# Import the SQL files.
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/postgres --jobs 4 /k8s-init-postgres.sql
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/test --jobs 4 /k8s-init-test.sql
When I install the Helm chart that contains the pre-install hook, I get a startError from the microservice's pod:
/bin/sh: /k8s-init.sh: not found
Any idea why it can't find the k8s-init.sh script?
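One sanity check (a sketch) is to run the same image locally and confirm the script was actually copied in and is marked executable:
# Should list /k8s-init.sh with execute permissions and show its shebang line.
docker run --rm --entrypoint /bin/sh registry.local:5000/sandbox/init-pgsql:1.0.0 \
  -c "ls -l /k8s-init.sh && head -n 1 /k8s-init.sh"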
Related
I would like to create a Helm hook that checks whether the chart release already exists before an install or upgrade. If it does not exist, a fresh install should be started. (A check for the revision would also be nice.)
I am having trouble writing the spec for the above; this is what I have so far (a rough CLI sketch of the kind of check I mean follows after the spec):
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# hooks are defined here
"helm.sh/hook": pre-upgrade,pre-install
#"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
spec:
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
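Outside of the hook itself, this is roughly the check I am trying to express (a CLI sketch; my-release and my-namespace are placeholders):
# Does the release already exist? helm status exits non-zero if it does not.
if helm status my-release -n my-namespace > /dev/null 2>&1; then
  echo "release exists - upgrade path"
  helm history my-release -n my-namespace   # lists the revisions, oldest first
else
  echo "no existing release - fresh install"
fi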
I am trying to use a constant in Skaffold and to access it in a Skaffold profile, for example:
export SOME_IP=199.99.99.99 && skaffold run -p dev
skaffold.yaml
...
deploy:
helm:
flags:
global:
- "--debug"
releases:
- name: ***
chartPath: ***
imageStrategy:
helm:
explicitRegistry: true
createNamespace: true
namespace: "***"
setValueTemplates:
SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
and in the dev.yaml profile I need to access it somehow, something like
{{ .Template.SKAFFOLD_SOME_IP }}, which should be rendered as 199.99.99.99.
I tried to use Skaffold's envTemplate and setValueTemplates fields, but had no success and could not find any examples on the web.
Basically I found a solution which I truly don't like, but it works:
In the dev profile (values.dev.yaml) I added a placeholder:
_anchors_:
- &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
The <IPAddr_01_TAG> will be replaced with the SOME_IP constant, which becomes 199.99.99.99 when skaffold run is executed.
Now, to run Skaffold, I do:
export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
so after the above sed, the dev profile (values.dev.yaml) contains the SOME_IP constant instead of the placeholder:
_anchors_:
- &_IPAddr_01 "199.99.99.99"
To use the SKAFFOLD_SOME_IP variable that you have set in your skaffold.yaml, you can write the chart template for the Kubernetes Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Chart.Name }}
labels:
app: {{ .Chart.Name }}
spec:
selector:
matchLabels:
app: {{ .Chart.Name }}
replicas: {{ .Values.replicaCount }}
template:
metadata:
labels:
app: {{ .Chart.Name }}
spec:
containers:
- name: {{ .Chart.Name }}
image: {{ .Values.image }}
env:
- name: SKAFFOLD_SOME_IP
value: "{{ .Values.SKAFFOLD_SOME_IP }}"
This will create an environment variable SKAFFOLD_SOME_IP in the Kubernetes pods, and you can access it from Go, for example, like this:
os.Getenv("SKAFFOLD_SOME_IP")
I am trying to migrate our Cassandra tables to use Liquibase. Basically the idea is trivial: have a pre-install and pre-upgrade job that runs some Liquibase scripts and manages our database upgrade.
For that purpose I have created a custom Docker image that contains the actual Liquibase CLI, which I can then invoke from the job. For example:
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ .Release.Name }}-update-job"
namespace: spring-k8s
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-weight": "5"
"helm.sh/hook-delete-policy": before-hook-creation
spec:
template:
metadata:
name: "{{ .Release.Name }}-cassandra-update-job"
namespace: spring-k8s
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
restartPolicy: Never
containers:
- name: pre-install-upgrade-job
image: "lq/liquibase-cassandra:1.0.0"
command: ["/bin/bash"]
args:
- "-c"
- "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.username }} --url {{ .Values.liquibase.url | squote }} --file {{ .Values.liquibase.file }}"
Where .Values.liquibase.file == databaseChangelog.json.
So this image lq/liquibase-cassandra:1.0.0 basically has a script liquibase-cassandra.sh that, when passed some arguments, can do its magic and update the DB schema (I'm not going to go into the details).
The problem is that last argument: --file {{ .Values.liquibase.file }}. This file obviously does not reside in the image, but in each microservice's repository.
I need a way to "copy" that file into the image so that I can reference it. One way would be to rebuild lq/liquibase-cassandra all the time (with the same lifecycle as the project itself) and copy the file into it, but that takes time and seems cumbersome at best. What am I missing?
It turns out that Helm hooks can be used for other resources, not only Jobs. As such, I can load this file into a ConfigMap before the Job even starts (the file I care about resides in resources/databaseChangelog.json):
apiVersion: v1
kind: ConfigMap
metadata:
name: "liquibase-changelog-config-map"
namespace: spring-k8s
annotations:
helm.sh/hook: pre-install, pre-upgrade
helm.sh/hook-delete-policy: hook-succeeded
helm.sh/hook-weight: "1"
data:
{{ (.Files.Glob "resources/*").AsConfig | indent 2 }}
And then just reference it inside the job:
.....
spec:
restartPolicy: Never
volumes:
- name: liquibase-changelog-config-map
configMap:
name: liquibase-changelog-config-map
defaultMode: 0755
containers:
- name: pre-install-upgrade-job
volumeMounts:
- name: liquibase-changelog-config-map
mountPath: /liquibase-changelog-file
image: "lq/liquibase-cassandra:1.0.0"
command: ["/bin/bash"]
args:
- "-c"
- "./liquibase-cassandra.sh --username {{ .Values.liquibase.username }} --password {{ .Values.liquibase.username }} --url {{ .Values.liquibase.url | squote }} --file {{ printf "/liquibase-changelog-file/%s" .Values.liquibase.file }}"
I am running a Django application in a Kubernetes cluster on Google Cloud. I implemented the database migration as a Helm pre-install hook that launches my app container and does the database migration. I use cloud-sql-proxy in a sidecar pattern, as recommended in the official tutorial: https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine
Basically this launches my app container and a cloud-sql-proxy container within the pod described by the job. The problem is that cloud-sql-proxy never terminates after my app has completed the migration, causing the pre-install job to time out and cancel my deployment. How do I gracefully exit the cloud-sql-proxy container after my app container completes, so that the job can complete?
Here is my Helm pre-install hook template definition:
apiVersion: batch/v1
kind: Job
metadata:
name: database-migration-job
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": pre-install,pre-upgrade
"helm.sh/hook-weight": "-1"
"helm.sh/hook-delete-policy": hook-succeeded,hook-failed
spec:
activeDeadlineSeconds: 230
template:
metadata:
name: "{{ .Release.Name }}"
labels:
app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
app.kubernetes.io/instance: {{ .Release.Name | quote }}
helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
restartPolicy: Never
containers:
- name: db-migrate
image: {{ .Values.my-project.docker_repo }}{{ .Values.backend.image }}:{{ .Values.my-project.image.tag}}
imagePullPolicy: {{ .Values.my-project.image.pullPolicy }}
env:
- name: DJANGO_SETTINGS_MODULE
value: "{{ .Values.backend.django_settings_module }}"
- name: SENDGRID_API_KEY
valueFrom:
secretKeyRef:
name: sendgrid-api-key
key: sendgrid-api-key
- name: DJANGO_SECRET_KEY
valueFrom:
secretKeyRef:
name: django-secret-key
key: django-secret-key
- name: DB_USER
value: {{ .Values.postgresql.postgresqlUsername }}
- name: DB_PASSWORD
{{- if .Values.postgresql.enabled }}
value: {{ .Values.postgresql.postgresqlPassword }}
{{- else }}
valueFrom:
secretKeyRef:
name: database-password
key: database-pwd
{{- end }}
- name: DB_NAME
value: {{ .Values.postgresql.postgresqlDatabase }}
- name: DB_HOST
{{- if .Values.postgresql.enabled }}
value: "postgresql"
{{- else }}
value: "127.0.0.1"
{{- end }}
workingDir: /app-root
command: ["/bin/sh"]
args: ["-c", "python manage.py migrate --no-input"]
{{- if eq .Values.postgresql.enabled false }}
- name: cloud-sql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.17
command:
- "/cloud_sql_proxy"
- "-instances=<INSTANCE_CONNECTION_NAME>=tcp:<DB_PORT>"
- "-credential_file=/secrets/service_account.json"
securityContext:
#fsGroup: 65532
runAsNonRoot: true
runAsUser: 65532
volumeMounts:
- name: db-con-mnt
mountPath: /secrets/
readOnly: true
volumes:
- name: db-con-mnt
secret:
secretName: db-service-account-credentials
{{- end }}
Funnily enough, if I kill the job with "kubectl delete jobs database-migration-job" after the migration is done, the helm upgrade completes and my new app version gets installed.
Well, I have a solution which will work but might be hacky. First of all, this is a feature Kubernetes is lacking, which is under discussion in this issue.
Since Kubernetes v1.17, containers in the same Pod can share the process namespace. This enables the app container to kill the proxy container's process. Since this is a Kubernetes Job, there is nothing stopping you from having the app container run one last command after the migration itself has finished.
With this solution, when your app finishes and exits normally (or abnormally), that final command kills the other process. The Job then completes with success or failure depending on how you kill the process: the process exit code becomes the container exit code, which in turn becomes the Job's result.
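Concretely, that means two changes to the Job above (a sketch, not tested against your chart): set shareProcessNamespace: true on the pod spec so the containers can see each other's processes, and chain the kill onto the end of the migration command in the db-migrate container. The lines below are what would go into the -c argument:
# Run the migration, remember its exit code, then signal the proxy so the pod can finish.
# Assumes pgrep is available in the app image and that the app container is allowed to
# signal the proxy process (e.g. it runs as root or as the same user as the proxy).
python manage.py migrate --no-input
status=$?
kill -INT "$(pgrep cloud_sql_proxy)" || true
exit $status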
I am new to Helm and I have tried to deploy a few tutorial charts. I have a couple of queries:
I have a Kubernetes Job which I need to deploy. Is it possible to deploy a Job via Helm?
Also, my Kubernetes Job is currently deployed from my custom Docker image and it runs a bash script to complete the job. I want to pass a few parameters to this chart/job so that the bash commands take them as input. That is the reason I decided to move to Helm: it provides more flexibility. Is that possible?
You can use Helm for this. Helm installs all the Kubernetes resources defined inside the templates folder, such as Jobs, Pods, ConfigMaps, and Secrets. You can control the order of installation with Helm hooks; Helm offers hooks such as pre-install, post-install, and pre-delete relative to the deployment. If two or more jobs are marked pre-install, their hook weights determine the order in which they are installed. An example chart layout:
|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
Often you need to change the variables in the script per environment, so instead of hardcoding them in the script you can pass parameters to it by setting them as environment variables on your custom Docker image's container. Then you change the values in values.yaml instead of changing your script.
values.yaml
key1:
someKey1: value1
key2:
someKey2: value1
post-install.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: post-install-job
labels:
provider: stackoverflow
microservice: {{ template "name" . }}
release: "{{ .Release.Name }}"
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
annotations:
"helm.sh/hook": pre-install,pre-upgrade,pre-rollback
"helm.sh/hook-delete-policy": before-hook-creation
"helm.sh/hook-weight": "3"
spec:
template:
metadata:
name: "{{.Release.Name}}"
labels:
provider: stackoverflow
microservice: {{ template "name" . }}
release: "{{ .Release.Name }}"
app: {{ template "fullname" . }}
spec:
restartPolicy: Never
containers:
- name: post-install-job
image: "custom-docker-image:v1"
command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }} ]
env:
#setting KEY1 as environment variable in the container,value of KEY1 in container is value1(read from values.yaml)
- name: KEY1
value: {{ .Values.key1.someKey1 }}
- name: KEY2
value: {{ .Values.key2.someKey2 }}
runjob.sh
# you can access the variable from env variable
echo $KEY1
echo $KEY2
# some stuff
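Since the script only reads environment variables, switching environments then just means overriding the values at install time instead of editing the script (a sketch; the release and chart names are placeholders):
# Install with the defaults from values.yaml ...
helm install my-release ./my-chart
# ... or override a value for a particular environment without touching runjob.sh.
helm install my-release ./my-chart --set key1.someKey1=other-value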
You can use Helm hooks to run Jobs. Depending on how you set up your annotations, you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the docs follows:
apiVersion: batch/v1
kind: Job
metadata:
name: "{{.Release.Name}}"
labels:
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
# This is what defines this resource as a hook. Without this line, the
# job is considered part of the release.
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-5"
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
metadata:
name: "{{.Release.Name}}"
labels:
app.kubernetes.io/managed-by: {{.Release.Service | quote }}
app.kubernetes.io/instance: {{.Release.Name | quote }}
helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
spec:
restartPolicy: Never
containers:
- name: post-install-job
image: "alpine:3.3"
command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
You can pass your parameters as secrets or configMaps to your job as you would to a pod.
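For example, the ConfigMap side of that could look like this (a minimal sketch; the names are placeholders, and the Job template would then pull the values in with envFrom or valueFrom.configMapKeyRef):
# Create a ConfigMap holding the job's parameters.
kubectl create configmap job-params \
  --from-literal=KEY1=value1 \
  --from-literal=KEY2=value2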
I had a similar scenario where I had a job I wanted to pass a variety of arguments to. I ended up doing something like this:
Template:
apiVersion: batch/v1
kind: Job
metadata:
name: my-job
spec:
template:
spec:
containers:
- name: my-job
image: myImage
args: {{ .Values.args }}
Command (PowerShell):
helm template helm-chart --set "args={arg1\, arg2\, arg3}" | kubectl apply -f -