Helm Release with existing resources - kubernetes

Previously we only used helm template to generate the manifests and applied them to the cluster. Recently we started planning to use helm install to manage our deployments, but we are running into the following problem:
Our deployment is a simple backend API consisting of an "Ingress", a "Service", and a "Deployment". When there is a new commit, the pipeline is triggered to deploy.
We plan to use the short commit SHA as both the image tag and the Helm release name. Here is the command:
helm upgrade --install releaseName repo/chartName -f value.yaml --set image.tag=SHA
This runs perfectly fine the first time, but when I create another release it fails with the following error message:
rendered manifests contain a resource that already exists. Unable to continue with install: Service "app-svc" in namespace "ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "rel-124": current value is "rel-123"
The error message is pretty clear about what the issue is, but I am just wondering: what is the "correct" way of using Helm in this case?
It is not practical for me to uninstall everything for each new release, and I also don't want to keep using the same release.

You are already doing it the "right" way; just don't change the release name. That name is the key Helm uses to identify resources. It seems that you previously used a different release name (rel-123) than the one you are using now (rel-124).
To fix your immediate problem, you should be able to proceed by updating the value of the meta.helm.sh/release-name annotation on the problematic resource. Something like this should do it:
kubectl annotate --overwrite service app-svc meta.helm.sh/release-name=rel-124
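If more than one resource is stuck with the old ownership metadata, you can adopt each of them into the new release. Since Helm 3.2, an install/upgrade can adopt an existing resource if it carries the release annotations and the managed-by label. A hedged sketch, reusing the names from the error above (the Ingress and Deployment names are hypothetical):
# Adopt existing resources into release rel-124:
for res in service/app-svc ingress/app-ingress deployment/app-deploy; do
  kubectl -n ns annotate --overwrite "$res" \
    meta.helm.sh/release-name=rel-124 \
    meta.helm.sh/release-namespace=ns
  kubectl -n ns label --overwrite "$res" app.kubernetes.io/managed-by=Helm
done
That said, if you keep the release name stable and only vary image.tag per commit, Helm tracks the revisions for you and this problem never comes up.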

Related

helm rollback fails to identify the failed deployments when re-triggered

I have a scenario like below,
Have two releases - Release-A and Release-B.
Currently, I am on Release-A and need an upgrade of all the microservices to Release-B.
I tried performing the helm upgrade of the microservice "mymicroservice" with the below command to deliver Release-B:
helm --kubeconfig /home/config upgrade --namespace testing --install --wait mymicroservice mymicroservice-release-b.tgz
Because of some issue, the deployment object failed to install and went into an error state.
Observing this, I performed the below rollback command:
helm --kubeconfig /home/config --namespace testing rollback mymicroservice
Due to some issue (maybe an intermittent system failure or user behaviour), Release-A's deployment object also went into a failed/CrashLoopBackOff state. Although this results in a successful helm rollback, the deployment object still never reaches the running state.
Once I have made the necessary corrections, I retry the rollback. As the deployment spec is already up to date as far as Helm is concerned, it never attempts to re-install the deployment objects even if they are in a failed state.
Is there any option in Helm to handle the above scenario?
I tried the --force flag, but then there are other errors related to replacing the Service object in the microservice when using the --force approach:
Rollback "mymicroservice -monitoring" failed: failed to replace object: Service "mymicroservice-monitoring" is invalid: spec.clusterIP: Invalid value: "": field is immutable
Maybe this can help you out:
Always use the helm upgrade --install command. I've seen you're already using it, so you're doing well. It installs the chart if it's not present and upgrades it if it is.
Use the --atomic flag to roll back changes in the event of a failed operation during helm upgrade.
And the --cleanup-on-fail flag: it allows Helm to delete resources that were newly created during the upgrade in case the upgrade fails.
From the docs:
--atomic: if set, upgrade process rolls back changes made in case of failed upgrade. The --wait flag will be set automatically if --atomic is used
--cleanup-on-fail allow deletion of new resources created in this upgrade when upgrade fails
There are cases where an upgrade creates a resource that was not present in the last release. Setting this flag allows Helm to remove those new resources if the release fails. The default is to not remove them (Helm tends to avoid destruction-as-default, and give users explicit control over this)
https://helm.sh/docs/helm/helm_upgrade/
IIRC, helm rollback rolls back to the previous revision, whether it is good or not. So if your previous attempts resulted in a failure and you roll back, you will roll back to a broken revision.
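Putting that together, a hedged sketch based on the question's own command (flag behaviour as documented above; the revision number is hypothetical, check helm history first):
# Upgrade with automatic rollback on failure; --wait is implied by --atomic:
helm --kubeconfig /home/config upgrade --namespace testing --install \
  --atomic --cleanup-on-fail \
  mymicroservice mymicroservice-release-b.tgz
# When rolling back manually, name a known-good revision explicitly
# instead of defaulting to the previous (possibly broken) one:
helm --kubeconfig /home/config --namespace testing history mymicroservice
helm --kubeconfig /home/config --namespace testing rollback mymicroservice 2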

For how long should I keep the storage driver Secret in my cluster?

I'm using Helm 3.4.2 to upgrade my charts on my AKS cluster, and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using Helm.
I was reading the docs and found that in version 3.x Helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure whether it is best to keep them all in my cluster.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that just lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 for some reason. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that just lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it is not absolutely necessary, better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To list all secrets created by Helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
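If you want to script that cleanup, here is a hedged bash sketch that keeps only the two newest revision secrets of one release. It assumes the release secrets also carry a name=<release> label next to owner=helm, and that the release is called myrelease:
# Delete all but the two newest revision secrets of "myrelease"
# (head -n -2 and xargs -r are GNU coreutils/findutils behaviour):
kubectl get secret -n <namespace> -l "owner=helm,name=myrelease" \
  --sort-by=.metadata.creationTimestamp -o name \
  | head -n -2 \
  | xargs -r kubectl delete -n <namespace>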
Btw (just FYI), taking into account the fact that Helm 3 releases are scoped to namespaces, you can simply delete a deployment by deleting its corresponding namespace.
And one last remark that might be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (default 10).
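For example, a hedged one-liner (the release and chart names are hypothetical) that caps the stored history at five revisions, so Helm prunes the older sh.helm... secrets for you:
helm upgrade --install --history-max 5 myrelease repo/chartName -f values.yaml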

Helm re-install resources that already exist

How can I execute the "helm install" command and re-install resources that I have defined in "templates"? I have some custom resources that already exist, so I want to re-install them. Is it possible to do that through a parameter in the helm command?
I think your main question is:
I have some custom resources that already exist so I want to re-install them.
Which means DELETE then CREATE again.
Short answer
No, but it can be done through a workaround.
Detailed answer
Helm manages the RELEASE of the Kubernetes manifests by either:
creating them (helm install)
updating them (helm upgrade)
deleting them (helm delete)
However, you can recreate resources following one of these approaches:
1. Twice Consecutive Upgrade
If your chart is designed to enable/disable the installation of resources with values (e.g. .Values.customResources.enabled), you can do the following:
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=false
# Then another Run
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=true
So, if you are the builder of the chart, your task is to make that design work, as in the sketch below.
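A minimal sketch of such a gate inside a chart template; the value name comes from this answer, while the resource kind and name are hypothetical:
# templates/my-custom-resource.yaml
{{- if .Values.customResources.enabled }}
apiVersion: example.com/v1
kind: MyCustomResource
metadata:
  name: my-custom-resource
spec: {}
{{- end }}
With customResources.enabled=false the resource drops out of the rendered manifests, so the first upgrade deletes it; the second upgrade renders it again, which recreates it.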
2. Using helmfile hooks
Helmfile is the Helm of Helm.
It manages your Helm releases within a single file called helmfile.yaml.
Not only that, but it can also run LOCAL commands before or after installing or upgrading a Helm release.
Such a call, which happens before or after, is named a hook.
For your case, you will need a presync hook.
If we organize your Helm release as a Helmfile definition, it should be:
releases:
  - name: <helm-release>
    chart: <chart>
    namespace: <namespace>
    hooks:
      - events: ["presync"]
        showlogs: true
        command: kubectl
        args: ["-n", "{{`{{ .Release.Namespace }}`}}", "delete", "crd", "my-custom-resources"]
Now you just need to run helmfile apply.
I know that CRDs are not namespaced, but I put the namespace in the hook just to demonstrate that Helmfile can give you the release's namespace as a variable, so there is no need to repeat yourself.
You can use helm upgrade to upgrade any existing deployed chart with changes.
The upgrade arguments must be a release and a chart. The chart argument can be either: a chart reference (example/mariadb), a path to a chart directory, a packaged chart, or a fully qualified URL. For chart references, the latest version will be used unless the --version flag is set.
To override values in a chart, either use the --values flag and pass in a file, or use the --set flag and pass configuration from the command line; to force string values, use --set-string. If a value is large and you therefore want to use neither --values nor --set, use --set-file to read the single large value from a file.
You can specify the --values/-f flag multiple times. Priority will be given to the last (right-most) file specified. For example, if both myvalues.yaml and override.yaml contained a key called 'Test', the value set in override.yaml would take precedence.
For example
helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
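And a hedged sketch of the other override flags described above (the keys and file name are hypothetical):
# --set for plain values, --set-string to keep 00123 a string,
# --set-file to read one large value from a file:
helm upgrade redis ./redis \
  --set image.tag=abc123 \
  --set-string build.id=00123 \
  --set-file initScript=scripts/init.sh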
An easier way I follow, especially for pre-existing Jobs during helm upgrade, is to run kubectl delete job db-migrate-job --ignore-not-found.

Helm cannot upgrade chart - Change service from NodePort to ClusterIP?

I'm trying to edit services created via a helm chart, and when changing the type from NodePort to ClusterIP I get this error:
The Service "<name>" is invalid: spec.ports[0].nodePort: Forbidden: may not be used when 'type' is 'ClusterIP'
I've seen solutions from other people where they just run kubectl apply -f service.yaml --force, but I'm not using kubectl, I'm using helm. Any thoughts? If it were just one service I would update/re-deploy it manually, but there are xx of them.
Found the answer to my exact question here: https://www.ibm.com/support/knowledgecenter/SSSHTQ/omnibus/helms/all_helms/wip/reference/hlm_upgrading_service_type_change.html
In short, they suggest the following:
There are three methods you can use to avoid the service conversion issue above. You will only need to perform one of these methods:
Method 1: Installing the new version of the helm chart with a different release name and update all clients to point to the new probe service endpoint if required. Then delete the old release. This is the recommended method but requires a re-configuration on the client side.
Method 2: Manually changing the service type using kubectl edit svc. This method requires more manual steps but preserves the current service name and previous revisions of the helm chart. After performing this workaround, users should be able to perform a helm upgrade.
Method 3: Deleting and purging the existing helm release, and then install the new version of helm chart with the same release name.
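For Method 2, a hedged sketch of the manual change as a non-interactive patch instead of kubectl edit (the service name and port are hypothetical; on recent Kubernetes versions the API server may clear the nodePort for you when the type changes, on older ones you null it out yourself):
# Switch the type and drop the now-forbidden nodePort in one strategic
# merge patch; entries in the ports list are merged by "port":
kubectl patch svc my-svc -p '{"spec":{"type":"ClusterIP","ports":[{"port":80,"nodePort":null}]}}'
After that, a normal helm upgrade with the chart's ClusterIP definition should go through.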

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know, you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution to your exact question.