Yesterday I stopped a helm upgrade while it was running in a release pipeline in Azure DevOps, and the following deployments failed.
I tried to find the failed chart with the aim of deleting it, but the chart of the microservice ("auth") doesn't appear. I used the command helm list -n [namespace_of_AKS] and it doesn't show up.
What can I do to solve this problem?
Error in Azure Release Pipeline
2022-03-24T08:01:39.2649230Z Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
2022-03-24T08:01:39.2701686Z ##[error]Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
Helm List
This error can happen for a few reasons, but it most commonly occurs when there is an interruption during the upgrade/install process, as you already mentioned.
To fix it, you may need to first roll back to a previous revision, then reinstall or run helm upgrade again.
Try the command below to list the releases:
helm ls --namespace <namespace>
Note that when running that command, it may not return any rows of information.
Next, check the history of the previous deployment:
helm history <release> --namespace <namespace>
This usually shows that the previous operation never completed successfully and that the release is stuck in a pending state, something like STATUS: pending-upgrade.
To escape from this state, use the rollback command:
helm rollback <release> <revision> --namespace <namespace>
The revision is optional, but you should provide it if you can.
You may then try to issue your original command again to upgrade or reinstall.
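For example, assuming a release named auth stuck in pending-upgrade in a namespace called my-aks-ns (names are illustrative, not from your pipeline), the full recovery sequence might look like this:
# Find the last revision that actually reached "deployed" status
helm history auth --namespace my-aks-ns
# Roll back to that revision (here, 3) to clear the stuck pending-upgrade state
helm rollback auth 3 --namespace my-aks-ns
# Once the release shows "deployed" again, re-run the original upgrade
helm upgrade auth <chart> --namespace my-aks-ns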
helm ls -a -n {namespace} will list all releases within a namespace, regardless of status.
You can also use helm ls -aA instead to list all releases in all namespaces, in case you actually deployed the release to a different namespace (I've done that before).
Try deleting the latest Helm release secret for the deployment and re-run your helm upgrade command.
kubectl get secret -A | grep <app-name>
kubectl delete secret <secret> -n <namespace>
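For reference, Helm 3 stores each release revision in a Secret named sh.helm.release.v1.<release>.v<revision> (assuming the default Secret storage backend), so deleting the newest one discards the stuck pending revision. Using the auth release from the question as an illustration:
kubectl get secret -n <namespace> | grep sh.helm.release.v1.auth
# Suppose the newest revision is v7 (the number here is illustrative)
kubectl delete secret sh.helm.release.v1.auth.v7 -n <namespace>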
Related
I am running a New Relic chart in Helm (from this repo -> https://github.com/newrelic/helm-charts/blob/master/charts/newrelic-logging). This is my output when running it in my cluster:
helm list -A -n kube-system
NAME              NAMESPACE    REVISION  UPDATED                                STATUS    CHART                   APP VERSION
newrelic-logging  kube-system  1         2021-06-23 18:54:54.383769 +0200 CEST  deployed  newrelic-logging-1.4.7  1.4.6
I am trying to set a specific value here: https://github.com/newrelic/helm-charts/blob/master/charts/newrelic-logging/values.yaml
To do this I am using helm upgrade. I have tried:
helm upgrade newrelic-logging newrelic-logging-1.4.7 -f values.yaml -n kube-system
helm upgrade newrelic-logging-1.4.7 newrelic-logging --set updatedVal=0 -n kube-system
However with these commands I am seeing the output:
Error: failed to download "newrelic-logging-1.4.7"
and
Error: failed to download "newrelic-logging"
Why is this happening, and how do I fix it? I have also run helm repo update and it completes with no error messages.
Unfortunately I don't see how this was initially set up, as the previous employee has left the company, and it's too risky to stop and redeploy right now.
To update the current release with new values without changing the chart version, pass the chart as <repo>/<chart> (assuming the repository was added under the name newrelic; the namespace goes in -n, not in the chart reference):
helm upgrade --reuse-values -f values.yaml newrelic-logging newrelic/newrelic-logging -n kube-system
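If no chart repository is configured at all, helm cannot download the chart by name, which matches the "failed to download" errors above. A typical setup for this chart (repository name and URL as documented by New Relic, so verify against your environment) would be:
helm repo add newrelic https://helm-charts.newrelic.com
helm repo update
helm search repo newrelic-logging   # confirm the chart and version are visible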
I use GitLab + Kubernetes.
I use this command:
helm secrets -d vault upgrade --install --atomic --namespace NAMESPACE --values VALUES.yml --set image.tag="WHATEVER" DEPLOYMENT_NAME FILE_TO_THIS_DEPLOYMENT
The moment the CI pipeline fails I cannot restart it again, because of this Kubernetes/Helm error:
another operation (install/upgrade/rollback) is in progress
I know that I can just fix this inside Kubernetes and then rerun, but this is just a shitty experience for people who don't know that much about Kubernetes/Helm.
Is there a one-shot command which really just deploys a new version and, if the old version was somehow in a failing state, deletes/fixes it beforehand?
I really just want to execute the same command again and again and expect it to work, without manually fixing the Kubernetes state every time anything happens.
I want to remove a pod that I deployed to my cluster with helm install.
I used 3 ways to do so:
helm uninstall <release name> -> removes the pod from the cluster and from the helm list
helm delete <release name> -> removes the pod from the cluster and from the helm list
kubectl delete -n <namespace> deploy <deployment name> -> removes the pod from the cluster but not from the helm list
What's the difference between them?
Is one better practice than the other?
helm delete is an alias for helm uninstall and you can see this when you check the --help syntax:
$ helm delete --help
...
Usage:
helm uninstall RELEASE_NAME [...] [flags]
kubectl delete ... just removes the resource in the cluster.
Doing helm uninstall ... won't just remove the pod, but it will remove all the resources created by helm when it installed the chart. For a single pod, this might not be any different to using kubectl delete... but when you have tens or hundreds of different resources and dependent charts, doing all this manually by doing kubectl delete... becomes cumbersome, time-consuming and error-prone.
Generally, if you're deleting something off the cluster, use the same method you used to install it in the first place. If you used helm to install it into the cluster, use helm to remove it. If you used kubectl create or kubectl apply, use kubectl delete to remove it.
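A minimal sketch of that rule, with placeholder names:
# Installed via helm -> remove via helm
helm install my-release ./my-chart -n my-namespace
helm uninstall my-release -n my-namespace
# Created via kubectl apply -> remove via kubectl delete
kubectl apply -f manifest.yaml -n my-namespace
kubectl delete -f manifest.yaml -n my-namespace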
I will add a point about something we use quite a lot: helm uninstall/install/upgrade has hooks attached to its lifecycle. This matters a lot; here is a small example.
We have database scripts that are run as part of a job. Say you prepare a release with version 1.2.3, and as part of that release you add a column in a table. You have a script for that (Liquibase/Flyway, whatever) that will run automatically when the chart is installed. In plain English, helm install allows you to say in this case: "before installing the code, upgrade the DB schema". This is awesome and allows you to tie the lifecycle of such scripts to the lifecycle of the chart.
The same works for downgrades: you could say that when you downgrade, the schema is reverted, or any other needed action is taken. kubectl delete simply does not have such functionality.
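As a minimal sketch of such a hook (all names and the image are hypothetical, not from the original answer), the migration Job is just a regular manifest in the chart's templates/ directory carrying the helm.sh/hook annotation:
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate   # hypothetical Job name
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade        # run before install and upgrade
    "helm.sh/hook-delete-policy": hook-succeeded   # remove the Job once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/db-migrations:1.2.3   # hypothetical Flyway/Liquibase image
          command: ["./run-migrations.sh"]     # hypothetical migration entrypoint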
For helm, these are all the same thing: uninstall, del, delete, and un (check the Aliases section in the help output):
$ helm del --help
This command takes a release name and uninstalls the release.
It removes all of the resources associated with the last release of the chart
as well as the release history, freeing it up for future use.
Use the '--dry-run' flag to see which releases will be uninstalled without actually
uninstalling them.
Usage:
helm uninstall RELEASE_NAME [...] [flags]
Aliases:
uninstall, del, delete, un
helm delete is the older command, which has now been replaced by helm uninstall. It uninstalls all the resources in the Helm chart that were previously deployed using helm install/upgrade.
kubectl delete will delete just the one resource, and it will get redeployed again if it is managed by a controller from the Helm chart. So this command is useful if you want to force a pod to be redeployed, or to delete a resource that was not deployed using the Helm chart approach.
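To see this in action (the deployment and pod names here are illustrative), delete a Pod owned by a Deployment and watch the ReplicaSet recreate it:
kubectl delete pod my-app-6d4cf56db6-xyz12 -n default
kubectl get pods -n default   # a replacement pod appears almost immediately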
I am using Azure DevOps for my CI/CD. I get the below error in the release pipeline, at the Helm upgrade stage. I am not getting any clues as to what causes this, and I have not been able to get past this stage for the past day. I do not know what the other operation in "another operation (install/upgrade/rollback) is in progress" refers to.
Any help to resolve this would greatly be appreciated, thanks.
I managed to resolve the issue using:
helm delete <release name>
I faced the same issue. I was able to solve it by running this command (if the revision is omitted, helm rolls back to the previous release):
helm rollback <release name>
I had the same issue while releasing using the ADO pipeline. I finally figured it out using the steps below. I was not even able to list the release with the usual command:
helm list -n <name-space>
This was returning an empty result, which is confusing behavior from helm.
kubectl config get-contexts
Make sure your context is set to the correct Kubernetes cluster. Then the next step is:
helm history <release> -n <name-space> --kube-context <kube-context-name>
Then apply the rollback to the release found above:
helm rollback <release> <revision> -n <name-space> --kube-context <kube-context-name>
I get this error when the previous upgrade has failed.
I cannot upgrade without manually deleting all my pods and services.
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists.
Unable to continue with update: existing resource conflict: namespace: ns-xy, name: svc-xy, existing_kind: /v1, Kind=Service, new_kind: /v1, Kind=Service
I tried helm upgrade --force, but with no success.
One solution is to delete all the updated services and deployments, but that is slow and creates a long interruption.
How can I force the upgrade?
The OP doesn't mention which version of helm is currently being used, so assuming a version earlier than 3.1.0:
Upgrade helm to 3.2.4 (the current 3.2 version at the time of writing).
Label and annotate the resource you want to upgrade (as per #7649):
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default
kubectl annotate $KIND $NAME meta.helm.sh/release-name=$RELEASE --overwrite
kubectl annotate $KIND $NAME meta.helm.sh/release-namespace=$NAMESPACE --overwrite
kubectl label $KIND $NAME app.kubernetes.io/managed-by=Helm
Run your helm upgrade command as you were before.
This should tell Helm that it is okay to take over an existing resource and begin managing it. That procedure also works for API upgrades (like "apps/v1beta2" changing to "apps/v1") or for onboarding old elements into a namespace.
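With the variables defined above, the follow-up upgrade would then look something like this (the chart reference is a placeholder for whatever your pipeline already uses):
helm upgrade $RELEASE <repo>/<chart> --namespace $NAMESPACE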
List the services:
kubectl get service
Delete the conflicting ones:
kubectl delete service <service-name>
And then run helm upgrade as normal.
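Using the names from the error message above (the release and chart names are placeholders), the sequence would be:
kubectl get service -n ns-xy
kubectl delete service svc-xy -n ns-xy
helm upgrade <release> <chart> -n ns-xy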