Command to force new deployment in helm?

I use gitlab + kubernetes.
I use this command:
helm secrets -d vault upgrade --install --atomic --namespace NAMESPACE --values VALUES.yml --set image.tag="WHATEVER" DEPLOYMENT_NAME FILE_TO_THIS_DEPLOYMENT
The moment the CI pipeline fails, I cannot restart it, because of a Kubernetes/Helm error:
another operation (install/upgrade/rollback) is in progress
I know that I can fix this inside Kubernetes and then rerun, but that is a poor experience for people who don't know much about Kubernetes/Helm.
Is there a one-shot command that really just deploys a new version and, if the old release was somehow left in a failed state, deletes or fixes it beforehand?
I really just want to execute the same command again and again and expect it to work, without manually fixing the Kubernetes state every time something goes wrong.

Related

UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress

Yesterday I stopped a helm upgrade while it was running in a release pipeline in Azure DevOps, and the following deployments failed.
I tried to find the failed release in order to delete it, but the release for the microservice ("auth") doesn't appear. I used the command helm list -n [namespace_of_AKS] and it doesn't show up.
What can I do to solve this problem?
Error in Azure Release Pipeline
2022-03-24T08:01:39.2649230Z Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
2022-03-24T08:01:39.2701686Z ##[error]Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
Helm List
This error can happen for a few reasons, but it most commonly occurs when there is an interruption during the upgrade/install process, as you already mentioned.
To fix this, you may need to first roll back to another revision and then reinstall or run helm upgrade again.
Try the command below to list the releases:
helm ls --namespace <namespace>
Note that when you run this command, it may not show any release information.
In that case, check the history of the previous deployment:
helm history <release> --namespace <namespace>
This usually shows that the original installation never completed successfully and that the release is stuck in a pending state, for example STATUS: pending-upgrade.
To escape from this state, use the rollback command:
helm rollback <release> <revision> --namespace <namespace>
The revision is optional, but you should try to provide it.
You may then try to issue your original command again to upgrade or reinstall.
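Putting those steps together, a minimal sketch of the full recovery sequence (the release name, namespace, chart path, and revision number here are placeholders; substitute your own):
helm history my-release -n my-namespace
helm rollback my-release 3 -n my-namespace        # 3 = the last healthy revision shown by helm history
helm upgrade --install my-release ./my-chart -n my-namespace -f values.yaml   # then retry the original upgrade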
helm ls -a -n {namespace} will list all releases within a namespace, regardless of status.
You can also use helm ls -aA instead to list all releases in all namespaces -- in case you actually deployed the release to a different namespace (I've done that before)
Try deleting the latest Helm secret for the deployment and re-run your helm upgrade command.
kubectl get secret -A | grep <app-name>
kubectl delete secret <secret> -n <namespace>
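As a concrete sketch (release and namespace names are placeholders, and this assumes Helm 3's default secret storage backend): each revision's state is stored in a secret named sh.helm.release.v1.<release>.v<revision> and labelled owner=helm, so deleting the secret for the revision that is stuck in a pending state clears the lock.
kubectl get secret -n my-namespace -l owner=helm                          # find the newest sh.helm.release.v1.my-release.vN secret
kubectl delete secret sh.helm.release.v1.my-release.v7 -n my-namespace    # v7 = the revision stuck in pending-upgrade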

What's the difference between helm uninstall, helm delete and kubectl delete

I want to remove a pod that I deployed to my cluster with helm install.
I used 3 ways to do so:
helm uninstall <release name> -> remove the pod from the cluster and from the helm list
helm delete <release name> -> remove the pod from the cluster and from the helm list
kubectl delete -n <namespace> deploy <deployment name> -> remove the pod from the cluster but not from the helm list
What's the difference between them?
Is one better practice than the others?
helm delete is an alias for helm uninstall and you can see this when you check the --help syntax:
$ helm delete --help
...
Usage:
helm uninstall RELEASE_NAME [...] [flags]
kubectl delete ... just removes the resource in the cluster.
Doing helm uninstall ... won't just remove the pod; it will remove all the resources created by Helm when it installed the chart. For a single pod, this might not be any different from using kubectl delete..., but when you have tens or hundreds of different resources and dependent charts, doing all this manually with kubectl delete... becomes cumbersome, time-consuming and error-prone.
Generally, if you're deleting something off the cluster, use the same method you used to install it in the first place. If you used Helm to install it into the cluster, use Helm to remove it. If you used kubectl create or kubectl apply, use kubectl delete to remove it.
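A small sketch that makes the difference visible (chart, release, and resource names are placeholders, and the Deployment name will vary by chart):
helm install demo bitnami/nginx -n demo-ns
kubectl delete deploy demo-nginx -n demo-ns    # the Deployment and its pods are gone...
helm ls -n demo-ns                             # ...but Helm still lists the "demo" release
helm uninstall demo -n demo-ns                 # removes everything the chart created, plus the release record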
I will add a point that we use quite a lot: helm uninstall/install/upgrade has hooks attached to its lifecycle. This matters a lot; here is a small example.
We have database scripts that are run as part of a job. Say you prepare a release with version 1.2.3, and as part of that release you add a column in a table. You have a script for that (Liquibase/Flyway or whatever) that will run automatically when the chart is installed. In plain English, helm install allows you to say in this case: "before installing the code, upgrade the DB schema". This is awesome and allows you to tie the lifecycle of such scripts to the lifecycle of the chart.
The same works for a downgrade: you can say that when you downgrade, the schema is reverted, or whatever other action is needed. kubectl delete simply does not have such functionality.
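A minimal sketch of such a hook (the chart path, image, and script names are hypothetical): a Job template annotated with helm.sh/hook that Helm runs before applying the new manifests, written here as a shell heredoc so it can be dropped into a chart's templates/ directory.
cat > my-chart/templates/db-migrate-job.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-db-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-registry/db-migrations:1.2.3    # hypothetical migration image
          command: ["./run-migrations.sh"]          # e.g. a Liquibase/Flyway wrapper
EOF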
For me they are all the same thing: uninstall, del, delete, and un are aliases in Helm (check the Aliases section):
$ helm del --help
This command takes a release name and uninstalls the release.
It removes all of the resources associated with the last release of the chart
as well as the release history, freeing it up for future use.
Use the '--dry-run' flag to see which releases will be uninstalled without actually
uninstalling them.
Usage:
helm uninstall RELEASE_NAME [...] [flags]
Aliases:
uninstall, del, delete, un
helm delete is the older command, which has now been replaced by helm uninstall. It uninstalls all the resources in the Helm chart that were previously deployed using helm install/upgrade.
kubectl delete removes just the specified resource; if that resource is managed by a controller from a Helm chart (for example a pod owned by a Deployment), it will simply be recreated. So this command is useful if you want to force a pod to be redeployed, or to delete a resource that was not deployed via a Helm chart.

How to delete values introduced on airflow through helm?

helm install airflow . --namespace airflow -f my_values.yaml -f my_other_values.yaml
I executed the command above but had to interrupt it. Now I cannot re-execute it, because it gives me this error:
Error: cannot re-use a name that is still in use
How can I fix it?
Thank you
Either helm uninstall the existing release
helm uninstall airflow
helm install airflow . -n airflow -f values.dev.yaml ...
or use helm upgrade to replace it with a new one
helm upgrade airflow . -n airflow -f values.dev.yaml ...
Both will have almost the same effect. You can helm rollback the upgrade but the uninstall discards that history.
Mechanically, helm install and helm upgrade just send Kubernetes manifests to the cluster, and from there the cluster takes responsibility for actually doing the work. Unless the chart has time-consuming hook jobs, it's actually possible that your current installation is fine and you don't need to do any of this (even if helm install --wait didn't report the Deployments were ready yet).
(The commands above assume you're using the current version 3 of Helm. Helm 2 has slightly different syntax and commands, but at this point it is unsupported and has reached end of life.)
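As a side note, if you want a single command that works whether or not the release already exists, helm upgrade --install covers both cases (the values file names here are copied from the question):
helm upgrade --install airflow . -n airflow -f my_values.yaml -f my_other_values.yaml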

Kubernetes, Helm - helm upgrade fails when config is specified - JupyterHub

If I run
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml
It fails with this error: Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition. It just hangs for a bit and ultimately times out. It seems like too small a change to cause a true timeout. I found this command in the Zero to JupyterHub docs, where it describes how to apply changes to the configuration file.
I've tried several permutations, including leaving out cleanup, leaving out version, etc. The only thing I could get to work was helm upgrade jhub jupyterhub/jupyterhub, but I don't think it's producing the desired effect.
For example, when I add a line in my config.yaml to change the default to Jupyter Lab, it doesn't work if I run helm upgrade jhub jupyterhub/jupyterhub. I believe I need to specify config.yaml using --values or -f
My overall project is to set up JupyterHub on a cloud Kubernetes environment. I'm using GKE and the online terminal.
Thanks
Solved: I specified the tag incorrectly in config.yaml. I put the digest rather than the actual tag. Here are the images on DockerHub.
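For anyone hitting the same thing: the fix can also be expressed on the command line with --set instead of editing config.yaml (singleuser.image.tag is the Zero to JupyterHub setting for the user image; the tag value below is a placeholder):
helm upgrade --cleanup-on-fail \
$RELEASE jupyterhub/jupyterhub \
--namespace $NAMESPACE \
--version=0.9.0 \
--values config.yaml \
--set singleuser.image.tag=some-dockerhub-tag   # must be a tag, not a sha256 digest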

Helm Umbrella Chart, dependency on remote Chart

I am new to Helm and Kubernetes. I am currently using a list of bash commands to create a local Minikube cluster with many containers installed. In order to alleviate the manual burden, we were thinking of creating an (umbrella) Helm chart to execute the whole list of commands.
Among the commands that I would need to run in the chart there are a few (cleanup) kubectl deletes, e.g.:
kubectl delete all,configmap --all -n system --force --grace-period=0
and also some helm installs, e.g.:
helm repo add bitnami https://charts.bitnami.com/bitnami && \
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test && \
Question1: is it possible to include kubectl command in my Helm Chart?
Question2: is it possible to add a dependency from a Chart only remotely available? I.e. the dependency from postgres above.
Question3: If you think Helm is not the correct tool for doing this, what would you suggest instead?
Thank you
You can't embed imperative kubectl commands in a Helm chart. An installed Helm chart keeps track of a specific set of Kubernetes resources it owns; you can helm delete the release, and that will delete that specific set of things. Similarly, if you have an installed Helm chart, you can helm upgrade it, and the new chart contents will replace the old ones.
For the workflow you describe – you're maintaining a developer environment based on Minikube, and you want to be able to start clean – there are two good approaches to take:
helm delete the release(s) that are already there, which will uninstall their managed Kubernetes resources; or
minikube delete the whole "cluster" (as a single container or VM), and then minikube start a new empty "cluster"; a sketch of this second approach follows below.
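A minimal sketch of the second approach, combined with the install commands from the question (the repo URL, chart, and --set values are copied from the question):
minikube delete      # throw away the whole local "cluster"
minikube start       # create a fresh, empty one
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install postgres bitnami/postgresql --set postgresqlPassword=test,postgresqlDatabase=test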