What's the difference between helm uninstall, helm delete and kubectl delete - kubernetes

I want to remove a pod that I deployed to my cluster with helm install.
I used 3 ways to do so:
helm uninstall <release name> -> remove the pod from the cluster and from the helm list
helm delete <release name> -> remove the pod from the cluster and from the helm list
kubectl delete -n <namespace> deploy <deployment name> -> remove the pod from the cluster but not from the helm list
What's the difference between them?
Is one better practice than the other?

helm delete is an alias for helm uninstall and you can see this when you check the --help syntax:
$ helm delete --help
...
Usage:
helm uninstall RELEASE_NAME [...] [flags]
kubectl delete ... just removes the resource in the cluster.
Doing helm uninstall ... won't just remove the pod, it will remove all the resources created by helm when it installed the chart. For a single pod this might be no different from using kubectl delete..., but when you have tens or hundreds of different resources and dependent charts, doing all of this manually with kubectl delete... becomes cumbersome, time-consuming and error-prone.
Generally, if you're deleting something off the cluster, use the same method you used to install it in the first place. If you used helm to install it into the cluster, use helm to remove it. If you used kubectl create or kubectl apply, use kubectl delete to remove it.
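To make the contrast concrete, a minimal sketch with a hypothetical release named myapp:
$ helm uninstall myapp                         # removes every resource the chart created, plus the release record
$ kubectl delete -n default deploy myapp-web   # removes only this one Deployment; helm list still shows the release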

I will add a point that we use quite a lot: helm uninstall/install/upgrade has hooks attached to its lifecycle. This matters a lot; here is a small example.
We have database scripts that are run as part of a job. Say you prepare a release with version 1.2.3 and, as part of that release, you add a column in a table - you have a script for that (Liquibase/Flyway, whatever) that will run automatically when the chart is installed. In plain English, helm install allows you to say in this case: "before installing the code, upgrade the DB schema". This is awesome and allows you to tie the lifecycle of such scripts to the lifecycle of the chart.
The same works for downgrade: you could say that when you downgrade, revert the schema, or take any needed action. kubectl delete simply does not have such functionality.
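A minimal sketch of what such a hook looks like as a chart template (the helm.sh/hook annotations are standard Helm; the job name and image are hypothetical placeholders):
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myorg/db-migrate:1.2.3  # hypothetical image running the Liquibase/Flyway scripts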

For me it is the same thing: uninstall, del, delete, and un are all aliases in helm (check the Aliases section of the help output):
$ helm del --help
This command takes a release name and uninstalls the release.
It removes all of the resources associated with the last release of the chart
as well as the release history, freeing it up for future use.
Use the '--dry-run' flag to see which releases will be uninstalled without actually
uninstalling them.
Usage:
helm uninstall RELEASE_NAME [...] [flags]
Aliases:
uninstall, del, delete, un

helm delete is the older command, which has now been replaced by helm uninstall. It uninstalls all the resources in a Helm chart that were previously deployed using helm install/upgrade.
kubectl delete will delete just the one resource; if it is managed by a controller that a Helm chart deployed (e.g. a pod owned by a Deployment), it will get recreated again. So this command is useful if you want to force a pod to be redeployed, or to delete a resource that was not deployed using a Helm chart.
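For example, deleting a pod that belongs to a Deployment just causes its controller to replace it (the pod name is a hypothetical placeholder):
$ kubectl delete pod myapp-5f7d8b9c6-abcde
$ kubectl get pods   # a replacement pod appears shortly afterwards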

Related

Update Kubernetes job with Helm

I have a helm chart containing a Kubernetes job, but unfortunately helm upgrade won't work because the image name is immutable, so logically I need to do a delete and install. But then I would lose my values.yaml settings if they were customised in the first place.
How can I keep the values before deleting the chart and use them for a new install, to simulate an upgrade? I couldn't find anything in the documentation or here.
Thanks
EDIT:
First you need to get your previous values with helm get values <release-name>
So you could redirect the values to a file with:
helm get values <release-name> -o yaml > values.yaml
And then do a helm install again
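Put together, the whole simulated upgrade might look like this (release and chart names are hypothetical):
$ helm get values my-release -o yaml > values.backup.yaml
$ helm uninstall my-release
$ helm install my-release ./my-chart -f values.backup.yaml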

How to delete values introduced on airflow through helm?

helm install airflow . --namespace airflow -f my_values.yaml -f my_other_values.yaml
I executed the command above but had to interrupt it, and now I cannot re-execute it because it gives me the error:
Error: cannot re-use a name that is still in use
How can I fix it?
Thank you
Either helm uninstall the existing release
helm uninstall airflow
helm install airflow . -n airflow -f values.dev.yaml ...
or use helm upgrade to replace it with a new one
helm upgrade airflow . -n airflow -f values.dev.yaml ...
Both will have almost the same effect. You can helm rollback the upgrade but the uninstall discards that history.
Mechanically, helm install and helm upgrade just send Kubernetes manifests to the cluster, and from there the cluster takes responsibility for actually doing the work. Unless the chart has time-consuming hook jobs, it's actually possible that your current installation is fine and you don't need to do any of this (even if helm install --wait didn't report the Deployments were ready yet).
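To check whether the interrupted install actually left things in a working state, something like this is enough (standard Helm/kubectl flags):
$ helm list -n airflow --all    # shows the release even if it is stuck in a pending-install state
$ kubectl get pods -n airflow   # shows whether the workloads actually came up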
(The commands above assume you're using the current version 3 of Helm. Helm 2 has slightly different syntax and commands, but at this point it is unsupported and has reached end of life.)

Helm Umbrella Chart, dependency on remote Chart

I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. In order to alleviate the manual burden we were thinking of creating an (umbrella) Helm Chart to execute the whole list of commands.
Among the commands that I would need to run in the Chart there are a few (cleanup) kubectl deletes, i.e.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, i.e.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test && \
Question 1: is it possible to include kubectl commands in my Helm Chart?
Question 2: is it possible to add a dependency on a Chart that is only available remotely? I.e. the dependency on postgres above.
Question 3: if you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe – you're maintaining a developer environment based on Minikube, and you want to be able to start clean – there are two good approaches to take:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster".
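As a rough sketch of both options (the release name is a hypothetical placeholder):
# Option 1: uninstall each Helm release, which removes the resources it owns
$ helm uninstall postgres
# Option 2: throw the whole local cluster away and start clean
$ minikube delete
$ minikube start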

Helm re-install resources that already exist

How can I execute the "helm install" command and re-install resources that I have defined in "templates"? I have some custom resources that already exist, so I want to re-install them. Is it possible to do that through a parameter in the helm command?
I think your main question is:
I have some custom resources that already exist so I want to re-install them.
Which means DELETE then CREATE again.
Short answer
No, but it can be done through a workaround.
Detailed answer
Helm manages the RELEASE of the Kubernetes manifests by either:
creating them (helm install)
updating them (helm upgrade)
deleting them (helm delete)
However, you can recreate resources following one of these approaches:
1. Twice Consecutive Upgrade
If your chart is designed to enable/disable installation of resources with values (e.g. .Values.customResources.enabled), you can do the following:
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=false
# Then another Run
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=true
So, if you are the builder of the chart, your task is to make the design functional.
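A minimal sketch of what such a toggle might look like inside a chart template (the resource kind and API group are hypothetical placeholders):
{{- if .Values.customResources.enabled }}
apiVersion: example.com/v1
kind: MyCustomResource
metadata:
  name: my-custom-resources
{{- end }}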
2. Using helmfile hooks
Helmfile is the Helm of Helm.
It manages your helm releases within a single file called helmfile.yaml.
Not only that, but it can also call some LOCAL commands before or after installing or upgrading a Helm release.
This call, which happens before or after, is named a hook.
For your case, you will need the presync hook.
If we organize your helm release as a Helmfile definition, it should be:
releases:
  - name: <helm-release>
    chart: <chart>
    namespace: <namespace>
    hooks:
      - events: ["presync"]
        showlogs: true
        command: kubectl
        args: ["-n", "{{`{{ .Release.Namespace }}`}}", "delete", "crd", "my-custom-resources"]
Now you just need to run helmfile apply
I know that CRDs are not namespaced, but I put the namespace in the hook just to demonstrate that Helmfile can give you the namespace of the release as a variable, so there is no need to repeat yourself.
You can use helm upgrade to upgrade any existing deployed chart with changes.
The upgrade arguments must be a release and a chart. The chart argument can be either: a chart reference (example/mariadb), a path to a chart directory, a packaged chart, or a fully qualified URL. For chart references, the latest version will be used unless the --version flag is set.
To override values in a chart, use either the --values flag and pass in a file, or use the --set flag and pass configuration from the command line; to force string values, use --set-string. If a value is large and you therefore want to use neither --values nor --set, use --set-file to read the single large value from a file.
You can specify the --values/-f flag multiple times. Priority will be given to the last (right-most) file specified. For example, if both myvalues.yaml and override.yaml contained a key called 'Test', the value set in override.yaml would take precedence.
For example
helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
An easier way I follow, especially for pre-existing jobs during helm upgrade, is to run kubectl delete job db-migrate-job --ignore-not-found first.

How do I see what custom values were used in a Helm release?

When I use helm install to install a chart into a Kubernetes cluster, I can pass custom values to the command to configure the release. helm must store them somewhere, because I can later roll back to them. However, I cannot find a way to view the values in the deployed version or the previous one.
I want to see what values will change (and confirm what values are set) when I roll back a release. I thought inspect or status might help with that, but they do different things. How can I see the values that were actually deployed?
To view what was actually deployed in a release, use helm get.
If you use helm -n <namespace> get all <release-name> you get all the information for the current release of <release-name> in namespace <namespace>†. You can specify --revision to get the information for a specific version, which you can use to see what the effect of rollback will be.
You can use helm -n <namespace> get values <release-name> to just get the values install used/computed rather than the whole chart and everything, or helm -n <namespace> get manifest <release-name> to view the generated resource configurations††.
Where this information is stored depends on the version of helm you are using:
For version 3, it is (by default) in a secret named sh.helm.release.v1.<release-name>.v<version> in the namespace where the release was deployed. The content of the secret is about the same as what was in the helm version 2 configMap.
For version 2: it is in a configMap named <release-name>.v<version>, in the kube-system namespace. You can get more detail on that here.
†For helm version 2, use helm get <release-name> instead of helm get all <release-name>
††For helm version 2, release names had to be unique cluster-wide. For helm version 3, release names are scoped to namespaces, and the helm command operates on the "current" namespace unless you specify a namespace using the -n or --namespace command line option.
helm get <release-name> no longer works with Helm 3. helm get values <release-name> does show the custom values used for the release. Note: to get all possible values for reference, use helm show values <your-chart> - this doesn't show the custom values, though.
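As a quick reference, a sketch of the common inspection commands (the namespace and release names here are placeholders):
$ helm -n myns get values my-release               # user-supplied values only
$ helm -n myns get values my-release --all         # computed values, with chart defaults merged in
$ helm -n myns get values my-release --revision 2  # values as of a specific past revision
$ kubectl -n myns get secret -l owner=helm         # the Helm 3 release-state secrets themselves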