Kubernetes exec script after helm upgrade - kubernetes

Is there a way to execute some code in pod containers when config maps are updated via Helm? Preferably without a custom sidecar doing constant file watching.
I am thinking along the lines of the postStart and preStop lifecycle events of Kubernetes, but in my case something like a "postPatch".

This might be something a post-install or post-upgrade hook would be perfect for:
https://helm.sh/docs/topics/charts_hooks/
You can trigger these jobs to start after an install (post-install) and/or after an upgrade (post-upgrade) and they will run to completion before the chart is considered installed or upgraded.
So you can do the upgrade, and as part of that upgrade the hook will trigger after the update and run your update code. I know the nginx ingress controller chart does something like this.
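A minimal sketch of such a hook Job, assuming the update logic is packaged as a script inside an image you control (the name, values, and script path are illustrative, not part of the answer above):

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-post-upgrade"
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation   # replace the old hook Job on each run
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: run-update
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"   # illustrative; any image containing your script
          command: ["/scripts/apply-config-update.sh"]                      # hypothetical update script

Note that the hook runs in its own pod, so the script would typically reach your application over the network or restart its Deployment, rather than touch the running containers directly.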

Related

Stopping all pods in Kubernetes cluster before running database migration job

I deploy my app into the Kubernetes cluster using Helm. The app works with a database, so I have to run DB migrations before installing a new version of the app. I run the migrations with a Kubernetes Job object, using a Helm "pre-upgrade" hook.
The problem is that when the migration job starts, the old-version pods are still working with the database. They can lock objects in the database, and because of that the migration job may fail.
So I want to somehow automatically stop all the pods in the cluster before the migration job starts. Is there any way to do that using Kubernetes + Helm? I will appreciate all answers.
There are two ways I can see that you can do this.
The first option is to scale down the pods before the deployment (for example, via Jenkins, CircleCI, GitLab CI, etc.):
kubectl scale deployment {deployment-name} --replicas=0 -n {namespace}
helm install .....
The second option (which might be easier depending on how you want to maintain this going forward) is to add an additional pre-upgrade hook with a lower helm.sh/hook-weight than the migrations hook (lower weights run first), so it runs before the migration job; then use that hook to do the kubectl scale-down.
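A minimal sketch of such a scale-down hook, assuming a kubectl-capable image and a ServiceAccount with permission to scale the Deployment (the names, image, and RBAC are illustrative and the RBAC objects are not shown):

apiVersion: batch/v1
kind: Job
metadata:
  name: scale-down-before-migrations        # illustrative name
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"             # lower weight than the migration hook, so it runs first
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: scaler            # needs RBAC allowing it to scale the Deployment
      restartPolicy: Never
      containers:
        - name: scale-down
          image: bitnami/kubectl:latest     # any image with kubectl will do
          command: ["kubectl", "scale", "deployment", "my-app", "--replicas=0", "-n", "my-namespace"]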

Initcontainer vs Helm Hook post-install

What is the difference between a Helm post-install hook and a Kubernetes init container? My understanding is that hooks are used to define actions at certain stages of a release's lifecycle, in this case post-install, while init containers let you run initialization steps before the main container starts in the Pod.
As I understand it, a post-install hook and an init container allow you to do the same thing, i.e. initialize a database.
Is that correct? Which is the better approach?
For database setup I would prefer a Helm hook, but even then there are some subtleties.
Say your service is running as a Deployment with replicas: 3 for a little bit of redundancy. Every one of these replicas will run an init container, if it's specified in the pod spec, without any sort of synchronization. If one of the pods crashes, or its node fails, its replacement will run the init container again. For the sort of setup task you're talking about, you don't want to repeat it that often.
The fundamental difference here is that a Helm hook is a separate Kubernetes object, typically a Job. You can arrange for this Job to be run exactly once on each helm upgrade and at no other times, which makes it a reasonable place to run things like migrations.
The one important subtlety here is that you can have multiple versions of a service running at once. Take the preceding Deployment with replicas: 3, but then helm upgrade --set tag=something-newer. The Deployment controller will first start a new pod with the new image, and only once it's up and running will it tear down an old pod, and now you have both versions going together. Similar things will happen if you helm rollback to an older version. This means you need some tolerance for the database not quite having the right schema.
If the job is more like a "seed" job that preloads some initial data, this is easier to manage: do it in a post-install hook, which you expect to run only once ever. You don't need to repeat it on every upgrade (as a post-upgrade hook) or on every pod start (as an init container).
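A minimal sketch of that kind of seed job as a post-install hook, assuming the seed script ships in your application image (the name, values, and script path are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-db-seed"       # illustrative name
  annotations:
    "helm.sh/hook": post-install            # runs once, after the first install only
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: seed
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"   # assumes the seed script is in the app image
          command: ["./seed-database.sh"]                                    # hypothetical seed script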
Helm install hooks and init containers are fundamentally different. An install hook in Helm creates a completely separate pod, which means that pod cannot reach the main pods directly via localhost and cannot share their volume mounts, whereas an init container can.
An init container, which is comparable to a Helm pre-install hook, is limited in that it can only perform initial tasks before the main containers start; it cannot run tasks that need to happen after the pod has started, for example clean-up activities.
Initialization of a DB and similar tasks need to happen before the actual container starts, and I think an init container is sufficient for this use case, but a Helm pre-install hook can also be used.
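A rough sketch of where an init container sits in a Deployment, assuming a Postgres database and using pg_isready to wait for it (the image, hostname, and command are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      initContainers:
        - name: wait-for-db
          image: postgres:15               # illustrative; any image providing pg_isready
          command: ["sh", "-c", "until pg_isready -h my-db -p 5432; do sleep 2; done"]
      containers:
        - name: app
          image: my-app:1.0                # illustrative application image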
You need to use a post-install hook, since you first have to create the DB pod and then initialize the DB. You will notice that the pod for the post hook comes up after the DB pod starts running. The post-hook pod can be removed after the hook has executed (for example via a hook-delete-policy).

How to run a script in a pod once, manually, using helm

I'm looking for the correct way to run a one-time maintenance script on my Kubernetes cluster.
I've got my deployment configured via Helm, so everything is bundled in my chart and works extremely well from an automation point of view.
The problem is running a script just once. I know Helm has hooks, but I don't think those can be configured to run manually (only pre/post upgrade/install etc.). This is compared to running kubectl apply -f my-maintenance-script.yaml, which I can do just once and be done with it.
Is there a best-practice way of doing this? I want to be able to use Helm since I can feed all my config/template values into the Job.
You can define a Kubernetes Job (or a plain Pod) in your chart, annotate it as a test hook, and run it on demand with helm test.
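A minimal sketch of the test-hook approach, shown here with a Pod (the pattern the Helm docs use for chart tests); the name, values, and script path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-maintenance"
  annotations:
    "helm.sh/hook": test                   # created only when you run `helm test`
spec:
  restartPolicy: Never
  containers:
    - name: maintenance
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"   # reuses the chart's normal values
      command: ["./run-maintenance.sh"]                                  # hypothetical one-off script

Then run it manually whenever you need it:
helm test my-release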

Deploying Kubernetes pods after a job is completed - helm

Goal - start the deployment of a pod only when a particular job has completed, using Helm.
Currently I am using Helm to deploy the ConfigMaps/pods/jobs. When I do "helm install" everything gets deployed at the same time.
Can I add a delay/trigger so that the other pods are deployed only after a particular job has completed?
I tried using an init container, but it is difficult to get the status of the job from an init container.
Helm chart hooks can do this. There's a series of points where Helm can deploy a set of resources one at a time and wait for them to be ready or completed.
For what you're describing, it is enough to use an annotation to mark a job as a pre-install hook:
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": pre-install
None of the other resources in the chart will be deployed until the hook executes successfully. If the Job fails, it will block deploying any other resources. This pre-install hook only runs on first installation, but if you want the hook to run on upgrades or rollbacks, there are corresponding hooks to be able to do this.
There are still some workflows that are hard to express this way. For instance, if your service includes a database and you want a job to run migrations or seed data, you can't really deploy the database StatefulSet, then block on a Job hook, then deploy everything else; your application still needs to tolerate things maybe not being in the exact state it expects.
This is somewhat out of the wheelhouse of Helm. You can use Hooks to get some relatively simplistic forms of this, but many people find them frustrating as complexity grows. The more complete form of this pattern requires writing an operator instead.

Dependency between pods in helm charts

I am trying to deploy a helm chart and I need help for my use case.
My requirement is that in the Helm chart templates folder I have a few deployment YAMLs and .tpl files. When I invoke the helm install command, one of the deployment YAMLs in the templates folder is deployed as kind Job, with only one pod associated with it. The other deployment YAMLs in the templates folder should wait for this job to finish successfully, and only then be deployed to Kubernetes as pods.
When I trigger the helm install command, Helm reads all the YAMLs and therefore tries to deploy all the pods at once, which I don't want. I want my job to succeed first, and only then should the other pods start getting deployed. While the job is running, all the other pods should wait and not start, since they all depend on the job being successful.
How can I achieve this using Helm? Please suggest. How can I make the other pods wait and let them know that the job has now completed successfully?
You are looking for helm hooks:
Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release's life cycle. For example, you can use hooks to:
- Load a ConfigMap or Secret during install before any other charts are loaded.
- Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
- Run a Job before deleting a release to gracefully take a service out of rotation before removing it.
Add the following annotation to your job:
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
You can even configure your hook to run before any install or upgrade (see the other hook points in the Helm documentation linked above):
metadata:
  annotations:
    "helm.sh/hook": "pre-install, pre-upgrade"
The resources that a hook creates are not tracked or managed as part of the release. Once Helm verifies that the hook has reached its ready state, it will leave your Job resource alone (or you can set a "helm.sh/hook-delete-policy" to delete it).
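For example, to have the hook resource cleaned up as soon as it succeeds (the policy values listed in the comment are the standard Helm ones):

metadata:
  annotations:
    "helm.sh/hook": "pre-install, pre-upgrade"
    "helm.sh/hook-delete-policy": hook-succeeded   # other options: before-hook-creation, hook-failed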