Helm upgrade --install isn't picking up new changes - kubernetes-helm

I'm using the command below in my CI build so that a Helm deployment happens on each build. However, I'm noticing that the changes aren't being deployed.
helm upgrade --install --force \
  --namespace=default \
  --values=kubernetes/values.yaml \
  --set image.tag=latest \
  --set service.name=my-service \
  --set image.pullPolicy=Always \
  myService kubernetes/myservice
Do I need to tag the image each time? Does Helm skip the install if the same version already exists?

You don't have to tag the image with a new tag each time. Just add
date: "{{ now | unixEpoch }}"
under spec/template/metadata/labels and set imagePullPolicy: Always. The label changes on every upgrade, so Helm detects a change in the Deployment object and the rollout pulls the latest image each time:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-{{ .Values.app.frontendName }}-deployment"
  labels:
    app.kubernetes.io/name: {{ .Values.app.frontendName }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.app.frontendName }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.app.frontendName }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        date: "{{ now | unixEpoch }}"
    spec:
      containers:
        - name: {{ .Values.app.frontendName }}
          image: "rajesh12/myimage:latest"
          imagePullPolicy: Always
Run helm upgrade releaseName ./my-chart to upgrade your release

With Helm 3, the --recreate-pods flag is deprecated. Instead, you can use:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
This creates a random string annotation that changes on every upgrade and causes the Deployment to roll.
See the Helm docs: Automatically Roll Deployments.

Another label, perhaps more robust than the epoch seconds, that you can add is simply the release revision number:
...
metadata:
  ...
  labels:
    helm-revision: "{{ .Release.Revision }}"
...
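For context, a minimal sketch of where that label would sit inside a Deployment's pod template (the app label below is a placeholder, not from the original answer):
spec:
  template:
    metadata:
      labels:
        app: my-app                                # placeholder label
        helm-revision: "{{ .Release.Revision }}"   # increments on every upgrade, so the pod template always changes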

Yes, you need to tag each build rather than use 'latest'. Helm diffs the template rendered from your parameters against the currently deployed one; since both say 'latest', it sees no change and doesn't apply any upgrade (unless something else changed). This is why the Helm best practices guide advises that the "container image should use a fixed tag or the SHA of the image". (See also https://docs.helm.sh/chart_best_practices/ and "Helm upgrade doesn't pull new container".)
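A minimal sketch of what that looks like in a CI pipeline, assuming the image is tagged with the commit SHA (the image name and the $GIT_COMMIT variable are illustrative, not from the original answer):
# build and push an image tagged with the commit SHA
docker build -t myrepo/my-service:$GIT_COMMIT .
docker push myrepo/my-service:$GIT_COMMIT
# pass the unique tag to the chart so the rendered manifest changes on every build
helm upgrade --install \
  --namespace=default \
  --values=kubernetes/values.yaml \
  --set image.tag=$GIT_COMMIT \
  my-service kubernetes/myservice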

Related

pass constant to skaffold

I am trying to use a constant in skaffold and access it in a skaffold profile. For example:
export SOME_IP=199.99.99.99 && skaffold run -p dev
skaffold.yaml
...
deploy:
  helm:
    flags:
      global:
        - "--debug"
    releases:
      - name: ***
        chartPath: ***
        imageStrategy:
          helm:
            explicitRegistry: true
        createNamespace: true
        namespace: "***"
        setValueTemplates:
          SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
and in the dev.yaml profile I need to access it somehow, something like:
{{ .Template.SKAFFOLD_SOME_IP }} and it should be rendered as 199.99.99.99
I tried to use the skaffold envTemplate and setValueTemplates fields, but had no success and could not find any example on the web.
I basically found a solution which I truly don't like, but it works.
In the dev profile's values.dev.yaml I added a placeholder:
_anchors_:
  - &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
The <IPAddr_01_TAG> will be replaced with the SOME_IP constant, which becomes 199.99.99.99 when skaffold runs.
Now to run skaffold I do:
export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
After the above sed, the dev profile's values.dev.yaml contains the SOME_IP value instead of the placeholder:
_anchors_:
  - &_IPAddr_01 "199.99.99.99"
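For completeness, the anchor is then referenced elsewhere in the same values file with a YAML alias; the key below is a hypothetical example, not from the original post:
_anchors_:
  - &_IPAddr_01 "199.99.99.99"
someService:
  externalIP: *_IPAddr_01   # resolves to "199.99.99.99" through the YAML alias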
To use the SKAFFOLD_SOME_IP variable that you have set in your skaffold.yaml you can write the chart template for Kubernetes Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image }}
          env:
            - name: SKAFFOLD_SOME_IP
              value: "{{ .Values.SKAFFOLD_SOME_IP }}"
This creates an environment variable SKAFFOLD_SOME_IP in the Kubernetes pods, and you can read it from your application code, for example in Go:
os.Getenv("SKAFFOLD_SOME_IP")
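Outside of skaffold, the rough equivalent is passing the value to Helm yourself with --set, which is essentially what setValueTemplates does for you (release and chart names below are placeholders):
export SOME_IP=199.99.99.99
helm upgrade --install my-release ./my-chart --set SKAFFOLD_SOME_IP=$SOME_IP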

Error running script in Helm pre-install hook - script not found

I have defined a Helm pre-install hook in one of my microservices, to create a user, database and several tables in PostgreSQL before the service is started for the first time. My pre-install.yaml file is:
apiVersion: batch/v1
kind: Job
metadata:
  name: psql-init-job
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "5"
    "helm.sh/hook-delete-policy": hook-succeeded
    # hook-succeeded, hook-failed
spec:
  template:
    metaData:
      name: psql-init-job
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
        - name: psql-init-job
          image: registry.local:5000/sandbox/init-pgsql:1.0.0
          imagePullPolicy: "IfNotPresent"
          command: ["/bin/sh", "-c", "/k8s-init.sh" ]
          env:
            - name: DATABASE_HOST
              value: psql-postgresql.svc.cluster.local
            - name: DATABASE_PORT
              value: "5432"
            - name: DATABASE_WAIT_TIME
              value: "60s"
  backoffLimit: 0
The Dockerfile includes:
# Copy the SQL script.
COPY ./docker/pgsql/psql/k8s-init.sh /k8s-init.sh
# Copy the SQL files.
COPY ./docker/pgsql/psql/k8s-init-postgres.sql /k8s-init-postgres.sql
COPY ./docker/pgsql/psql/k8s-init-test.sql /k8s-init-test.sql
The script that it's trying to run is:
#!/bin/sh
# Import the SQL files.
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/postgres --jobs 4 /k8s-init-postgres.sql
pg_restore -d postgres://username:password@psql-postgresql.svc.cluster.local:5432/test --jobs 4 /k8s-init-test.sql
When I install the Helm chart that contains the pre-install hook, I get a startError from the microservice's pod:
/bin/sh: /k8s-init.sh: not found
Any idea why it can't find the k8s-init.sh script?

How can I rename a Helm chart once created?

I have created a helm chart named "abc" with the command
helm create abc
Now when I install this chart, all the Kubernetes resources created will have a name containing "abc".
Now I have to rename the chart "abc" to "xyz".
If I use
helm install --name xyz ./abc
only the release name is changed to xyz; the resources inside it remain named "abc".
I need the entire chart (with its resources) to be renamed.
Do I have any option for it?
I have met this problem many times, and I found the simplest solution.
Just two steps (see the sketch below):
1. Find and replace the name in the chart files.
2. Rename the chart directory.
Very simple if your chart name is reasonably unique.
By default the chart name is used in these files:
Chart.yaml
templates/_helpers.tpl
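A minimal sketch of those two steps in a *nix shell, assuming the chart lives in ./abc and the new name is xyz (with a short name like "abc", check for unintended matches first):
# 1. find and replace the chart name inside the chart files (GNU sed; on macOS use: sed -i '')
grep -rl "abc" ./abc | xargs sed -i 's/abc/xyz/g'
# 2. rename the chart directory
mv ./abc ./xyz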
Alternatively, you can refer to xyz via {{ .Release.Name }}: update the resource names to use {{ .Release.Name }} so that the names are picked up dynamically from the release every time:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app.kubernetes.io/name: {{ .Values.app.dbName }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.app.dbName }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.app.dbName }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - image: mysql:5.6
          name: "{{ .Release.Name }}-mysql" # or just {{ .Release.Name }}

Deploying a kubernetes job via helm

I am new to Helm and have tried to deploy a few tutorial charts. I have a couple of queries:
I have a Kubernetes job which I need to deploy. Is it possible to deploy a job via Helm?
Also, my Kubernetes job is currently deployed from my custom Docker image and it runs a bash script to complete the job. I want to pass a few parameters to this chart/job so that the bash commands take the input parameters. That's the reason I decided to move to Helm, because it provides more flexibility. Is that possible?
You can use Helm. Helm installs all the Kubernetes resources defined inside the templates folder, such as Jobs, Pods, ConfigMaps and Secrets. You can control the order of installation with Helm hooks; Helm offers hooks like pre-install, post-install and pre-delete relative to the deployment. If two or more jobs use the pre-install hook, their hook weights are compared to determine the installation order.
|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
Often you need to change variables in the script per environment, so instead of hardcoding them in the script you can pass parameters by setting them as environment variables on your custom Docker image, and change the values in values.yaml instead of changing your script (see the override example after runjob.sh).
values.yaml
key1:
  someKey1: value1
key2:
  someKey2: value1
post-install.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  labels:
    provider: stackoverflow
    microservice: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        provider: stackoverflow
        microservice: {{ template "name" . }}
        release: "{{ .Release.Name }}"
        app: {{ template "fullname" . }}
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "custom-docker-image:v1"
          command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }} ]
          env:
            # setting KEY1 as an environment variable in the container; its value is value1 (read from values.yaml)
            - name: KEY1
              value: {{ .Values.key1.someKey1 }}
            - name: KEY2
              value: {{ .Values.key2.someKey2 }}
runjob.sh
# you can access the variables from the environment
echo $KEY1
echo $KEY2
# some stuff
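Because the script reads its parameters from the environment, you can change them per environment by overriding values at install time instead of editing the script; a sketch with placeholder names (Helm 3 syntax):
helm install myjob ./my-chart --set key1.someKey1=foo --set key2.someKey2=bar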
You can use Helm Hooks to run jobs. Depending on how you set up your annotations you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the doc is as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "alpine:3.3"
          command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
You can pass your parameters to your job as Secrets or ConfigMaps, just as you would to a pod.
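For instance, a minimal sketch of feeding parameters to the hook Job's container from a ConfigMap via envFrom (the ConfigMap name here is a hypothetical example, not from the original answer):
spec:
  template:
    spec:
      containers:
        - name: post-install-job
          image: "alpine:3.3"
          envFrom:
            - configMapRef:
                # hypothetical ConfigMap holding the job's parameters
                name: "{{ .Release.Name }}-job-config"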
I had a similar scenario where I had a job I wanted to pass a variety of arguments to. I ended up doing something like this:
Template:
apiVersion: batch/v1
kind: Job
metadata:
  name: myJob
spec:
  template:
    spec:
      containers:
        - name: myJob
          image: myImage
          args: {{ .Values.args }}
Command (powershell):
helm template helm-chart --set "args={arg1\, arg2\, arg3}" | kubectl apply -f -

Helm chart restart pods when configmap changes

I am trying to restart the pods when there is a ConfigMap or Secret change. I have tried the same piece of code as described in: https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change
However, after updating the ConfigMap, my pod does not get restarted. Would you have any idea what has been done wrong here?
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
{{- include "global_labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "app.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yml") . | sha256sum }}
Helm 3 documents this technique now; deployments are rolled out when there is a change in the ConfigMap template file. See https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments.
Neither Helm nor Kubernetes provides a specific rolling update for a ConfigMap change. The long-standing workaround is to just patch the deployment, which triggers the rolling update:
kubectl patch deployment your-deployment -n your-namespace -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"$(date)\"}}}}}"
And you can watch the status:
kubectl rollout status deployment your-deployment
Note this works on a *nix machine. This was the case until the feature below was added.
Update 05/05/2021
Helm and kubectl provide this now:
Helm: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
kubectl: kubectl rollout restart deploy WORKLOAD_NAME
It worked for me. Below is the code snippet from my deployment.yaml file; make sure your ConfigMap and Secret YAML file names match what is referenced in the annotations (see the sketch after the snippet):
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/my-configmap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/my-secret.yaml") . | sha256sum }}
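For illustration, the referenced file would be the chart's own ConfigMap template at templates/my-configmap.yaml, along these lines (the resource name and data keys are hypothetical):
# templates/my-configmap.yaml -- any change in this rendered output changes the checksum annotation above
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  APP_MODE: {{ .Values.appMode | quote }}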
I deployed a pod with a ConfigMap using this feature: https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments.
When I edited the ConfigMap at runtime, it didn't trigger the deployment rollout.