How to delete values introduced on airflow through helm?

helm install airflow . --namespace airflow -f my_values.yaml -f my_other_values.yaml
I executed the command above but had to interrupt it. Now I cannot re-execute it because it gives me this error:
Error: cannot re-use a name that is still in use
How can I fix it?
Thank you

Either helm uninstall the existing release
helm uninstall airflow
helm install airflow . -n airflow -f values.dev.yaml ...
or use helm upgrade to replace it with a new one
helm upgrade airflow . -n airflow -f values.dev.yaml ...
Both will have almost the same effect. You can helm rollback the upgrade but the uninstall discards that history.
Mechanically, helm install and helm upgrade just send Kubernetes manifests to the cluster, and from there the cluster takes responsibility for actually doing the work. Unless the chart has time-consuming hook jobs, it's actually possible that your current installation is fine and you don't need to do any of this (even if helm install --wait didn't report the Deployments were ready yet).
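Before choosing, you can check whether the interrupted install actually left the release in a bad state; a quick sketch, assuming the release and namespace are both named airflow:
helm status airflow -n airflow    # last recorded state of the release
helm list -n airflow              # STATUS column: deployed, failed, pending-install, ...
kubectl get pods -n airflow       # whether the pods themselves came up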
(The commands above assume you're using the current version 3 of Helm. Helm 2 has slightly different syntax and commands, but at this point is unsupported and end-of-lifed.)

Related

Command to force new deployment in helm?

I use gitlab + kubernetes.
I use this command:
helm secrets -d vault upgrade --install --atomic --namespace NAMESPACE --values VALUES.yml --set image.tag="WHATEVER" DEPLOYMENT_NAME FILE_TO_THIS_DEPLOYMENT
The moment the CI pipeline fails, I cannot restart it, because of a Kubernetes/Helm error:
another operation (install/upgrade/rollback) is in progress
I know that I can just fix this inside Kubernetes and then rerun, but this is just a shitty experience for people who don't know that much about Kubernetes/Helm.
Is there a one-shot command which really just deploys a new version and, if the old version was somehow in a failing state, deletes/fixes it beforehand?
I really just want to execute the same commands again and again and expect them to work, without manually fixing the Kubernetes state every time anything happens.
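One common way out of the pending-operation state is to roll the release back to its last completed revision; a sketch, not from the thread itself, reusing the placeholder names from the command above:
helm -n NAMESPACE history DEPLOYMENT_NAME    # find the last revision whose status is "deployed"
helm -n NAMESPACE rollback DEPLOYMENT_NAME   # with no revision given, rolls back to the previous one
# Last resort: locate the Helm release secret stuck in a pending state
kubectl -n NAMESPACE get secrets -l owner=helm,status=pending-upgrade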

What's the difference between helm uninstall, helm delete and kubectl delete

I want to remove a pod that I deployed to my cluster with helm install.
I used 3 ways to do so:
helm uninstall <release name> -> remove the pod from the cluster and from the helm list
helm delete <release name> -> remove the pod from the cluster and from the helm list
kubectl delete -n <namespace> deploy <deployment name> -> remove the pod from the cluster but not from the helm list
What's the difference between them?
Is one better practice than the other?
helm delete is an alias for helm uninstall and you can see this when you check the --help syntax:
$ helm delete --help
...
Usage:
helm uninstall RELEASE_NAME [...] [flags]
kubectl delete ... just removes the resource in the cluster.
Doing helm uninstall ... won't just remove the pod; it will remove all the resources created by helm when it installed the chart. For a single pod this might be no different from using kubectl delete..., but when you have tens or hundreds of different resources and dependent charts, doing all this manually with kubectl delete... becomes cumbersome, time-consuming and error-prone.
Generally, if you're deleting something off the cluster, use the same method you used to install it in the first place. If you used helm to install it into the cluster, use helm to remove it. If you used kubectl create or kubectl apply, use kubectl delete to remove it.
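A minimal illustration of that rule of thumb, with hypothetical names:
helm install myapp ./mychart    # installed with helm...
helm uninstall myapp            # ...so remove it with helm
kubectl apply -f app.yaml       # created with kubectl...
kubectl delete -f app.yaml      # ...so remove it with kubectl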
I will add a point that we use quite a lot: helm uninstall/install/upgrade has hooks attached to its lifecycle. This matters a lot; here is a small example.
We have database scripts that are run as part of a job. Say you prepare a release with version 1.2.3, and as part of that release you add a column in a table. You have a script for that (Liquibase/Flyway, whatever) that will run automatically when the chart is installed. In plain English, helm install allows you to say in this case: "before installing the code, upgrade the DB schema". This is awesome and allows you to tie the lifecycle of such scripts to the lifecycle of the chart.
The same works for a downgrade: you can say that when you downgrade, the schema is reverted, or any other needed action is taken. kubectl delete simply does not have such functionality.
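A minimal sketch of such a hook, as a Job template inside the chart; the image and migration script are hypothetical, while the helm.sh/hook annotations are Helm's real mechanism:
# templates/db-migrate-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/db-migrate:1.2.3   # hypothetical Liquibase/Flyway image
          command: ["./run-migrations.sh"]               # hypothetical migration entrypoint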
For me they are the same thing: uninstall, del, delete, and un in Helm (check the Aliases section):
$ helm del --help
This command takes a release name and uninstalls the release.
It removes all of the resources associated with the last release of the chart
as well as the release history, freeing it up for future use.
Use the '--dry-run' flag to see which releases will be uninstalled without actually
uninstalling them.
Usage:
helm uninstall RELEASE_NAME [...] [flags]
Aliases:
uninstall, del, delete, un
helm delete is the older command, which has now been replaced by helm uninstall. It basically uninstalls all the resources in a Helm chart that were previously deployed using helm install/upgrade.
kubectl delete deletes just that one resource, which will get redeployed again if it was deployed by a Helm chart (for example, a pod recreated by its Deployment). So this command is useful if you want to redeploy a pod, or to delete a resource that was not deployed using the Helm chart approach.
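For example, to force a Helm-managed pod to be recreated without touching the release (pod and deployment names here are hypothetical):
kubectl -n my-namespace delete pod my-app-6d4cf56db9-abcde    # the Deployment's ReplicaSet recreates it
# or restart every pod in the Deployment (kubectl 1.15+):
kubectl -n my-namespace rollout restart deployment my-app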

Kubernetes, Helm - helm upgrade fails when config is specified - JupyterHub

If I run
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
It fails, with this error: Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition. It just hangs for a bit and ultimately times out. It seems like too small of a change to cause a true timeout. I found this command in the Zero to JupyterHub docs, where it describes how to apply changes to the configuration file.
I've tried several permutations, including leaving out --cleanup-on-fail, leaving out --version, etc. The only thing I could get to work was helm upgrade jhub jupyterhub/jupyterhub, but I don't think it's producing the desired effect.
For example, when I add a line in my config.yaml to change the default to JupyterLab, it doesn't work if I run helm upgrade jhub jupyterhub/jupyterhub. I believe I need to specify config.yaml using --values or -f.
My overall project is to set up JupyterHub on a cloud Kubernetes environment. I'm using GKE and the online terminal.
Thanks
Solved: I had specified the tag incorrectly in config.yaml; I had put the image digest rather than the actual tag. The images and their tags are listed on DockerHub.
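For reference, a hypothetical config.yaml snippet showing the difference; the jupyterhub chart expects a tag under singleuser.image.tag, not a digest:
singleuser:
  image:
    name: jupyter/datascience-notebook
    # tag: "b9f6ce795cfc"   # wrong: a digest fragment copied from DockerHub
    tag: "latest"           # right: an actual image tag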

Helm Umbrella Chart, dependency on remote Chart

I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. In order to alleviate the manual burden we were thinking of creating an (umbrella) Helm chart to execute the whole list of commands.
Among the commands that I would need to run in the chart there are a few (cleanup) kubectl deletes, e.g.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, e.g.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test && \
Question1: is it possible to include kubectl command in my Helm Chart?
Question2: is it possible to add a dependency from a Chart only remotely available? I.e. the dependency from postgres above.
Question3: If you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe – you're maintaining a developer environment based on Minikube, and you want to be able to start clean – there are two good approaches to take:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster".
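A sketch of both options, reusing the postgres release from the question:
# remove just the Helm-managed resources:
helm uninstall postgres
# or throw away the whole local cluster and start fresh:
minikube delete
minikube start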

Helm re-install resources that already exist

How can I execute helm install and re-install resources that I have defined in templates? I have some custom resources that already exist, so I want to re-install them. Is it possible to do that through a parameter in the helm command?
I think your main question is:
I have some custom resources that already exist so I want to re-install them.
Which means DELETE then CREATE again.
Short answer
No... but it can be done through a workaround.
Detailed answer
Helm manages the RELEASE of the Kubernetes manifests by either:
creating (helm install)
updating (helm upgrade)
deleting (helm delete)
However, you can recreate resources following one of these approaches:
1. Twice Consecutive Upgrade
If your chart is designed to enable/disable installation of resources with values (e.g. .Values.customResources.enabled), you can do the following:
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=false
# Then another Run
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=true
So, if you are the builder of the chart, your task is to build this toggle into the design.
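A minimal sketch of what such a toggle can look like inside a chart template; the flag name, file name, and resource kind here are hypothetical:
# templates/custom-resources.yaml
{{- if .Values.customResources.enabled }}
apiVersion: example.com/v1
kind: MyCustomResource
metadata:
  name: {{ .Release.Name }}-custom
{{- end }}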
2. Using helmfile hooks
Helmfile is the Helm of Helm.
It manages your Helm releases within a single file called helmfile.yaml.
Not only that, it can also run LOCAL commands before or after installing or upgrading a Helm release.
Such a call, which happens before or after, is named a hook.
For your case, you will need the presync hook.
If we organize your Helm release as a Helmfile definition, it should be:
releases:
  - name: <helm-release>
    chart: <chart>
    namespace: <namespace>
    hooks:
      - events: ["presync"]
        showlogs: true
        command: kubectl
        args: ["-n", "{{`{{ .Release.Namespace }}`}}", "delete", "crd", "my-custom-resources"]
Now you just need to run helmfile apply
I know that CRDs are not namespaced, but I put the namespace in the hook just to demonstrate that Helmfile can give you the namespace of the release as a variable, so there is no need to repeat yourself.
You can use helm upgrade to upgrade any existing deployed chart with changes.
The upgrade arguments must be a release and a chart. The chart argument can be either a chart reference (example/mariadb), a path to a chart directory, a packaged chart, or a fully qualified URL. For chart references, the latest version will be used unless the --version flag is set.
To override values in a chart, use either the --values flag and pass in a file, or use the --set flag and pass configuration from the command line; to force string values, use --set-string. If a value is large and you therefore want to use neither --values nor --set, use --set-file to read the single large value from a file.
You can specify the --values/-f flag multiple times. Priority is given to the last (right-most) file specified. For example, if both myvalues.yaml and override.yaml contained a key called Test, the value set in override.yaml would take precedence.
For example
helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
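To make the precedence concrete, with hypothetical file contents:
# myvalues.yaml
Test: from-myvalues
# override.yaml
Test: from-override

helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
# the rendered chart sees Test: from-override (the right-most file wins)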
An easier way I follow, especially for pre-existing jobs during helm upgrade, is to first run kubectl delete job db-migrate-job --ignore-not-found.