Dependency between pods in helm charts - kubernetes-helm

I am trying to deploy a Helm chart and need help with my use case.
In the chart's templates folder I have a few deployment YAMLs and .tpl files. When I run helm install, one of the manifests in the templates folder is deployed as kind Job, with a single pod associated with it. The other deployment YAMLs in the templates folder should wait for this Job to finish successfully, and only then be deployed to Kubernetes.
When I trigger helm install, Helm reads all the YAMLs and tries to deploy all the pods at once, which I don't want. I want the Job to succeed first, and only then should the other pods start being deployed. While the Job is running, all the other pods should wait and not start, since they all depend on the Job succeeding.
How can I achieve this with Helm? How can I make the other pods wait, and how do they know that the Job has completed successfully?

You are looking for Helm hooks:
Helm provides a hook mechanism to allow chart developers to intervene at certain points in a release's life cycle. For example, you can use hooks to:
- Load a ConfigMap or Secret during install before any other charts are loaded.
- Execute a Job to back up a database before installing a new chart, and then execute a second job after the upgrade in order to restore data.
- Run a Job before deleting a release to gracefully take a service out of rotation before removing it.
Add the following annotation to your job:
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
You can even configure your hook to be run before any install or upgrade (see other options here)
metadata:
  annotations:
    "helm.sh/hook": "pre-install, pre-upgrade"
The resources that a hook creates are not tracked or managed as part of the release. Once Tiller verifies that the hook has reached its ready state, it will leave your job resource alone (or you can set a "helm.sh/hook-delete-policy" to delete it).
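Putting it together, a minimal sketch of such a hook Job in the chart's templates folder might look like the following (the name, image, and command are placeholders, not from the original question):

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-init-job
  annotations:
    # run this Job before any other chart resources are created
    "helm.sh/hook": pre-install
    # remove the Job once it has succeeded
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: init
          image: busybox
          command: ["sh", "-c", "echo preparing environment && sleep 10"]

Because of the pre-install annotation, Helm waits for this Job to complete successfully before creating the rest of the chart's resources, which gives you the "job first, then everything else" ordering you described.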

Related

Kubernetes exec script after helm upgrade

Is there a way to execute some code in pod containers when ConfigMaps are updated via Helm, preferably without a custom sidecar doing constant file watching?
I am thinking along the lines of the postStart and preStop lifecycle events of Kubernetes, but in my case something like a "postPatch".
This might be something a post-install or post-upgrade hook would be perfect for:
https://helm.sh/docs/topics/charts_hooks/
You can trigger these jobs to start after an install (post-install) and/or after an upgrade (post-upgrade) and they will run to completion before the chart is considered installed or upgraded.
So you can do the upgrade, and as part of that upgrade the hook triggers after the update and runs your update code. I know the nginx ingress controller chart does something like this.
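As a hedged sketch, such a hook could be a short Job in the chart; the name, image, and command below are placeholders:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-post-upgrade
  annotations:
    # run after every install and every upgrade
    "helm.sh/hook": post-install,post-upgrade
    # delete the Job once it has completed successfully
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: reload-config
          image: busybox
          command: ["sh", "-c", "echo run your post-upgrade logic here"]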

Ensure ArgoCD running pre-install steps before upgrading

When installing Kong with Helm through ArgoCD, the installation fails because ArgoCD only runs helm upgrade. So the step initialising the database is not run, which causes the pre_migration pod to fail.
The docs state that I can "Annotate pre-install and post-install with hook-weight: "-1". This will make sure it runs to success before any upgrade hooks." How do I correctly add this annotation?
I tried adding:
helm.sh/hook = pre-install
helm.sh/hook = weight: -1
to the app's configuration, using the Annotations field in the UI. Strangely, this change is not reflected in manifest.yaml and also does not work.
So how do I ensure that ArgoCD runs the pre-install steps of a Helm chart first?
This probably works
annotations:
  helm.sh/hook: pre-install
  helm.sh/hook-weight: '-1'
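Note that these annotations belong on the hook resource itself inside the chart's templates (e.g. the migrations Job), not on the ArgoCD Application, which is presumably why setting them through the UI's Annotations field had no effect. A sketch, with an illustrative Job name:

apiVersion: batch/v1
kind: Job
metadata:
  name: kong-init-migrations   # illustrative name
  annotations:
    helm.sh/hook: pre-install
    helm.sh/hook-weight: "-1"   # lower weight runs before other hooks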

Kubernetes single deployment yaml file for spinning up the application

I am setting up Kubernetes for an application with 8 microservices, ActiveMQ, Postgres, Redis and MongoDB.
After configuring all the pods and deployments, is there any way to create a single master deployment YAML file which will create the entire set of services, replicas etc. for the entire application?
Note: I will be using multiple deployment YAML files, StatefulSets etc. for all the services mentioned above.
You can use this script:
#!/bin/bash
# Export the selected resource types from a namespace into per-type directories of YAML files.
NAMESPACE="your_namespace"
RESOURCES="configmap secret daemonset deployment service hpa"

for resource in ${RESOURCES}; do
  # list the names of all objects of this resource type in the namespace
  rsrcs=$(kubectl -n ${NAMESPACE} get -o json ${resource} | jq '.items[].metadata.name' | sed "s/\"//g")
  for r in ${rsrcs}; do
    dir="${NAMESPACE}/${resource}"
    mkdir -p "${dir}"
    # dump each object as YAML into <namespace>/<resource>/<name>.yaml
    kubectl -n ${NAMESPACE} get -o yaml ${resource} ${r} > "${dir}/${r}.yaml"
  done
done
Remember to specify what resources you want exported in the script.
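Running it produces one directory per resource type, roughly like the following (the object names here are just examples):

your_namespace/
  deployment/
    my-app.yaml
  service/
    my-app.yaml
  configmap/
    my-app-config.yaml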
More info here
Is there any way to create a single master deployment YAML file which will create the entire set of services, replicas etc. for the entire application?
Since you already mentioned kubernetes-helm, why don't you actually use it for that exact purpose? In short, Helm is a sort of package manager for Kubernetes, similar in spirit to yum or apt. It deploys charts, which you can think of as packaged applications: a pack of all your pre-configured applications that can be deployed as one unit. It's not a single file, but rather a collection of files that together make up a so-called Helm chart.
What are Helm charts?
They are basically Kubernetes YAML manifests combined into a single package that can be installed into your cluster, and installing the package is as simple as running a single command such as helm install. Once done, the charts are highly reusable, which reduces the time needed to create dev, test and prod environments.
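As a rough sketch (the chart and release names below are invented), a chart is just a directory layout like this, installed with one command:

myapp-chart/
  Chart.yaml          # chart metadata: name, version, description
  values.yaml         # default configuration injected into the templates
  templates/          # the Kubernetes YAML templates for every component
    deployment.yaml
    service.yaml
    configmap.yaml

helm install myapp ./myapp-chart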
As an example of a complex Helm chart deploying multiple resources, you may want to check StackStorm.
Basically, once deployed without any custom config, this chart deploys 2 replicas for each component of StackStorm as well as backends like RabbitMQ, MongoDB and Redis.

Deleting kubernetes yaml: how to prevent old objects from floating around?

I'm working on a continuous deployment routine for a Kubernetes application: every time I push a git tag, a GitHub Action runs which calls kubectl apply -f kubernetes to apply a bunch of YAML Kubernetes definitions.
Let's say I add YAML for a new service and deploy it: kubectl will add it.
But then later on, I simply delete the YAML for that service and redeploy: kubectl will NOT delete it.
Is there any way that kubectl can recognize that the service YAML is missing, and respond by deleting the service automatically during continuous deployment? In my local test, the service remains floating around.
Does the developer have to know to connect kubectl to the production cluster and delete the service manually, in addition to deleting the YAML definition?
Is there a mechanism for Kubernetes to "know what's missing"?
You need a CI/CD tool for Kubernetes to achieve this. As mentioned by Sithroo, Helm is a very good option.
Helm lets you fetch, deploy and manage the lifecycle of applications, both 3rd party products and your own.
No more maintaining random groups of YAML files (or very long ones) describing pods, replica sets, services, RBAC settings, etc. With helm, there is a structure and a convention for a software package that defines a layer of YAML templates and another layer that changes the templates called values. Values are injected into templates, thus allowing a separation of configuration, and defines where changes are allowed. This whole package is called a Helm Chart.
Essentially you create structured application packages that contain everything they need to run on a Kubernetes cluster, including dependencies the application requires. Source
Before you start, I recommend these articles, which explain its quirks and features:
The missing CI/CD Kubernetes component: Helm package manager
Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm
There's no built-in way. You can apply resources from a YAML file from anywhere, as long as you can reach the cluster and have a kubeconfig, so Kubernetes does not know how to react to a file being deleted. If you still want this, you could write a program (for example in Go) which watches the files in one place and deletes the corresponding resource whenever a file is removed.
One Kubernetes-native way is to use an operator: whenever there is any change in your files, you update the CRD that the operator uses to deploy the resources.
Before deleting the YAML file, you can run kubectl delete -f file.yaml; this way all the resources created by that file will be deleted.
However, what you are really looking for is a way of converging the cluster to a desired state. You can do this with tools like Helmfile.
Helmfile allows you to specify all the releases you want in a single file, and it will reconcile the cluster to that desired state every time you run helmfile apply.
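As a minimal sketch (the release names and chart paths here are invented), a helmfile.yaml declares every release you want, and helmfile apply installs or upgrades them all in one go:

repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  # your own chart, kept alongside the code
  - name: my-service
    namespace: default
    chart: ./charts/my-service
  # a third-party backend from a public chart repository
  - name: redis
    namespace: default
    chart: bitnami/redis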

Deploying a kubernetes pods after a job is completed - helm

Goal: start deploying a pod only after a particular Job has completed, using Helm.
Currently I am using Helm to deploy the ConfigMaps/pods/Jobs. When I run "helm install", everything gets deployed at the same time.
Can I add a delay/trigger so that the other pods are deployed only once a particular Job has completed?
I tried using an init container, but it is difficult to get the status of a Job from within an init container.
Helm chart hooks can do this. There's a series of points where Helm can deploy a set of resources one at a time and wait for them to be ready or completed.
For what you're describing, it is enough to use an annotation to mark a job as a pre-install hook:
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": pre-install
None of the other resources in the chart will be deployed until the hook executes successfully. If the Job fails, it will block deploying any other resources. This pre-install hook only runs on first installation, but if you want the hook to run on upgrades or rollbacks, there are corresponding hooks for those as well.
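For example, to have the same Job also gate upgrades, the annotation can list several hook points (a sketch using the standard hook names):

metadata:
  annotations:
    # run before first install and before every upgrade
    "helm.sh/hook": pre-install,pre-upgrade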
There are still some workflows that are hard to express this way. For instance, if your service includes a database and you want a job to run migrations or seed data, you can't really deploy the database StatefulSet, then block on a Job hook, then deploy everything else; your application still needs to tolerate things maybe not being in the exact state it expects.
This is somewhat out of the wheelhouse of Helm. You can use Hooks to get some relatively simplistic forms of this, but many people find them frustrating as complexity grows. The more complete form of this pattern requires writing an operator instead.