How to run a post-init-container-like job in k8s

I am deploying a k8s service using Helm. Whenever I scale or update, I want to run a job. I am looking for something like a post-init-container which runs every time and is terminated when completed.
How can we achieve this on a k8s cluster? I am considering a sidecar, but wanted to know if k8s supports this case as a platform.
Thanks.

You can use a Helm post-install hook: https://helm.sh/docs/topics/charts_hooks/
Just add the "helm.sh/hook": post-install annotation.
For example:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    app.kubernetes.io/version: {{ .Chart.AppVersion }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{ default "10" .Values.sleepyTime }}"]

Why not just use a Job? Isn't that exactly what you're looking for? Why even complicate it with a hook? Does it have to run after the container is established? A Job runs once and is then terminated.
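If a plain Job fits, a minimal manifest (not tied to Helm at all) could look like the sketch below; the name, image, and command are placeholders, not from the question:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-one-off-job        # hypothetical name
spec:
  backoffLimit: 4             # number of retries before the Job is marked failed (default 6)
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: alpine:3.18    # placeholder image
        command: ["sh", "-c", "echo running one-off task"]
Note, however, that a plain Job only runs when it is first created; re-running it on every scale or upgrade is exactly what the Helm hook above adds.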

Related

Helm not deleting all the related resources of a chart

I had a Helm release whose deployment was not successful. I tried uninstalling it so that I could create a fresh one.
The weird thing I found is that some resources (a couple of Jobs) were created partially because of the failed deployment. Uninstalling the failed release with Helm does not remove those partially created resources, which could cause issues when I try to install the release again with some changes.
My question is: is there a way I can ask Helm to delete all the related resources of a release completely?
Since there are no details on the partially created resources, one scenario could be that helm uninstall/delete does not delete the PVCs in the namespace. We resolved this by deploying the application into a separate namespace; when the Helm release is uninstalled/deleted, we delete the namespace as well. For a fresh deployment, create the namespace again and install the release into it for a clean install. Alternatively, you can set the reclaimPolicy to "Delete" instead of "Retain" when creating the StorageClass, as mentioned in the post below.
PVC issue on helm: https://github.com/goharbor/harbor-helm/issues/268#issuecomment-505822451
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  blockPool: replicapool
  # The value of "clusterNamespace" MUST be the same as the one in which your rook cluster exists
  clusterNamespace: rook-ceph-system
  # Specify the filesystem type of the volume. If not specified, it will use `ext4`.
# Optional, default reclaimPolicy is "Delete". Other options are: "Retain", "Recycle" as documented in https://kubernetes.io/docs/concepts/storage/storage-classes/
reclaimPolicy: Delete
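A rough sketch of the namespace-per-release workflow described above; the release, chart, and namespace names are hypothetical:
# Tear everything down, including leftover Jobs and PVCs, by deleting the namespace
helm uninstall my-release -n my-app
kubectl delete namespace my-app

# Fresh install into a newly created namespace
kubectl create namespace my-app
helm install my-release ./my-chart -n my-app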
As you said in the comment, the partially created object is a Job. In Helm there is a concept called a hook, which can run a Job at different points such as pre-install, post-install, etc. I think you used one of these.
An example YAML is given below. If you set "helm.sh/hook-delete-policy": hook-failed instead of hook-succeeded, the Job will be deleted when the hook fails. For more, please see the official Helm hooks documentation.
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name | quote }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: {{ .Release.Name | quote }}
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
    spec:
      restartPolicy: Never
      containers:
      - name: pre-install-job
        image: "ubuntu"
        #command: ["/bin/sleep","{{ default "10" .Values.hook.job.sleepyTime }}"]
        args:
        - /bin/bash
        - -c
        - echo "pre-install hook"

How to make Helm hooks retry only one time?

I have seen that if my post-upgrade Helm hook fails, it retries 5 times before giving up. How do I make sure that the hook only tries once and gives up if it fails? Or can I make the Helm hook retry only on specific conditions, rather than always?
I could not find any documentation/parameters for this use-case here.
You can set the Job's backoff failure policy.
As per the k8s docs:
Pod backoff failure policy:
There are situations where you want to fail a Job after some amount of retries due to a logical error in configuration etc. To do so, set .spec.backoffLimit to specify the number of retries before considering a Job as failed. The back-off limit is set by default to 6.
Add backoffLimit: 1 to the Job spec (it is a sibling of template, not part of the Pod spec), e.g.:
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name:
        image:
Full example:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name | quote }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  backoffLimit: 1
  template:
    metadata:
      name: {{ .Release.Name | quote }}
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{ default "10" .Values.hook.job.sleepyTime }}"]
If you want the Helm hook to run just once, you have to set backoffLimit to 0. That means it will run once and won't back off at all.
If you want the hook to retry once after running (and failing), then you do have to set backoffLimit to 1, but that's not the same thing.
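Applied to the Job above, running the hook only once would look like this (only the relevant fields shown):
spec:
  backoffLimit: 0   # fail the Job after the first unsuccessful run, no retries
  template:
    spec:
      restartPolicy: Never
      ...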

Does helm or K8S override spec.template.metadata.labels with spec.selector.matchLabels?

I have a chart I am applying with Helm v3, and when I render it locally it looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-generic
  labels:
    app: generic
    chart: generic-1.1.2
    release: RELEASE-NAME
    heritage: Helm
    app.kubernetes.io/name: generic
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: generic
      release: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: generic
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      imagePullSecrets:
        - name: ""
      containers:
        - name: generic
          image: ":"
          imagePullPolicy: IfNotPresent
          ports:
          resources:
            {}
Note that the spec.selector.matchLabels and spec.template.metadata.labels do not match here. This is potentially a problem, but this was just for a test.
When I apply it to the cluster (GKE, latest version) and inspect the YAML there, it looks like this (roughly):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-generic
  labels:
    app: generic
    chart: generic-1.1.2
    release: RELEASE-NAME
    heritage: Helm
    app.kubernetes.io/name: generic
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/version:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: generic
      release: RELEASE-NAME
  template:
    metadata:
      labels:
        app: generic
        release: RELEASE-NAME
    spec:
      imagePullSecrets:
        - name: ""
      containers:
        - name: generic
          image: ":"
          imagePullPolicy: IfNotPresent
          ports:
          resources:
            {}
The spec.template.metadata.labels have been overwritten with the labels from spec.selector.matchLabels.
Now, this makes sense from the perspective of having a working deployment, but I cannot find this behaviour documented anywhere, in either K8S or Helm, and I'm wondering if this is actually supposed to happen, or if I'm going nuts here...
Those labels are added for finding resources managed by Helm. Below is some information from the documentation.
Helm's GitHub site might be helpful in understanding how labels and selectors work:
spec.selector.matchLabels defined in Deployments/StatefulSets/DaemonSets >=v1/beta2 must not contain the helm.sh/chart label or any label containing a version of the chart, because the selector is immutable. The chart label string contains the version, so if it is specified, whenever the Chart.yaml version changes, Helm's attempt to change this immutable field would cause the upgrade to fail.
Additional information found on Helm's site:
An item of metadata should be a label under the following conditions:
It is used by Kubernetes to identify this resource.
It is useful to expose to operators for the purpose of querying the system. For example, we suggest using helm.sh/chart: NAME-VERSION as a label so that operators can conveniently find all of the instances of a particular chart to use.
If an item of metadata is not used for querying, it should be set as an annotation instead.
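To tie this back to the rendered output above, a sketch of an upgrade-safe layout: keep only stable labels in the immutable matchLabels and put the versioned helm.sh/chart label in metadata only (label values reuse the ones from this question):
spec:
  selector:
    matchLabels:
      # immutable after creation: only stable, version-free labels here
      app.kubernetes.io/name: generic
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        # must include (at least) every label in matchLabels
        app.kubernetes.io/name: generic
        app.kubernetes.io/instance: RELEASE-NAME
        # versioned label is fine here, but must never appear in matchLabels
        helm.sh/chart: generic-1.1.2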

Deploying a kubernetes job via helm

I am new to helm and I have tried to deploy a few tutorial charts. Had a couple of queries:
I have a Kubernetes job which I need to deploy. Is it possible to deploy a job via helm?
Also, my Kubernetes Job is currently deployed from my custom Docker image, and it runs a bash script to complete the job. I wanted to pass a few parameters to this chart/job so that the bash commands take the input parameters. That's the reason I decided to move to Helm, because it provides more flexibility. Is that possible?
You can use Helm. Helm installs all the Kubernetes resources (Jobs, Pods, ConfigMaps, Secrets, etc.) defined inside the templates folder. You can control the order of installation with Helm hooks. Helm offers hooks such as pre-install, post-install, and pre-delete relative to the deployment. If two or more Jobs are pre-install, their hook weights determine the installation order.
|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
Many times you need to change the variables in the script per environment. So instead of hardcoding variables in the script, you can pass parameters to it by setting them as environment variables on your custom Docker image. Change the values in values.yaml instead of changing the script.
values.yaml
key1:
  someKey1: value1
key2:
  someKey2: value1
post-install.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  labels:
    provider: stackoverflow
    microservice: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        provider: stackoverflow
        microservice: {{ template "name" . }}
        release: "{{ .Release.Name }}"
        app: {{ template "fullname" . }}
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "custom-docker-image:v1"
        command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }} ]
        env:
        # setting KEY1 as environment variable in the container; value of KEY1 in the container is value1 (read from values.yaml)
        - name: KEY1
          value: {{ .Values.key1.someKey1 }}
        - name: KEY2
          value: {{ .Values.key2.someKey2 }}
runjob.sh
# the values passed from values.yaml are available as environment variables
echo $KEY1
echo $KEY2
# some stuff
You can use Helm hooks to run Jobs. Depending on how you set up your annotations, you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the docs is as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}"
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{ default "10" .Values.sleepyTime }}"]
You can pass your parameters as Secrets or ConfigMaps to your Job as you would to a Pod.
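For instance, a sketch of wiring a ConfigMap into the hook Job's container via envFrom; the ConfigMap name my-job-config is hypothetical, not from the answer above:
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "custom-docker-image:v1"
        # expose every key of the ConfigMap as an environment variable
        envFrom:
        - configMapRef:
            name: my-job-config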
I had a similar scenario where I had a job I wanted to pass a variety of arguments to. I ended up doing something like this:
Template:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job          # must be a lowercase DNS-compliant name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: my-job
        image: myImage
        args: {{ .Values.args }}
Command (powershell):
helm template helm-chart --set "args={arg1\, arg2\, arg3}" | kubectl apply -f -

Helm upgrade --install isn't picking up new changes

I'm using the command below in my CI build so that the Helm deployment happens on each build. However, I'm noticing that the changes aren't being deployed.
helm upgrade --install --force \
--namespace=default \
--values=kubernetes/values.yaml \
--set image.tag=latest \
--set service.name=my-service \
--set image.pullPolicy=Always \
myService kubernetes/myservice
Do I need to tag the image each time? Does helm not do the install if the same version exists?
You don't have to tag the image each time with a new tag. Just add
date: "{{ now | unixEpoch }}"
under spec/template/metadata/labels and set imagePullPolicy: Always. Helm will detect the change in the Deployment object, and the resulting rollout will pull the latest image each time:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-{{ .Values.app.frontendName }}-deployment"
  labels:
    app.kubernetes.io/name: {{ .Values.app.frontendName }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Values.app.frontendName }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Values.app.frontendName }}
        app.kubernetes.io/instance: {{ .Release.Name }}
        date: "{{ now | unixEpoch }}"
    spec:
      containers:
      - name: {{ .Values.app.frontendName }}
        image: "rajesh12/myimage:latest"
        imagePullPolicy: Always
Run helm upgrade releaseName ./my-chart to upgrade your release
With helm 3, the --recreate-pods flag is deprecated.
Instead, you can use:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        rollme: {{ randAlphaNum 5 | quote }}
This will create a random string annotation that always changes and causes the deployment to roll.
Helm - AUTOMATICALLY ROLL DEPLOYMENTS
Another label you can add, perhaps more robust than the timestamp, is simply the chart revision number:
...
metadata:
  ...
  labels:
    helm-revision: "{{ .Release.Revision }}"
...
Yes, you need to tag each build rather than use 'latest'. Helm does a diff between the template evaluated from your parameters and the currently deployed one. Since both are 'latest', it sees no change and doesn't apply any upgrade (unless something else changed). This is why the Helm best practices guide advises that the "container image should use a fixed tag or the SHA of the image". (See also https://docs.helm.sh/chart_best_practices/ and "Helm upgrade doesn't pull new container".)
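As a sketch, in CI you could tag the image with the commit SHA and pass that same value to Helm; the registry/repository name below is hypothetical, while the chart path and value names mirror the command earlier in this question:
# build and push an image tagged with the short git SHA
TAG=$(git rev-parse --short HEAD)
docker build -t myregistry/my-service:$TAG .
docker push myregistry/my-service:$TAG

# deploy that exact tag instead of 'latest'
helm upgrade --install \
  --namespace=default \
  --values=kubernetes/values.yaml \
  --set image.tag=$TAG \
  --set service.name=my-service \
  myService kubernetes/myservice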