For how long should I keep the storage driver Secret in my cluster? - kubernetes

I'm using Helm 3.4.2 to upgrade my charts on my AKS cluster, and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using Helm.
I was reading the docs and found that in version 3.x Helm uses Secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure whether keeping them all in my cluster is the best idea.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret that just lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3, keeping v4 and v5 for safety. If it's OK to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!

So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
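For example, to see which namespace your current context points at and to list releases the Helm 2 way (a quick illustration of the commands quoted above):
# Namespace of the current kubectl context
kubectl config view --minify --output 'jsonpath={..namespace}'
# Releases in that namespace only (Helm 3 default)
helm list
# Releases in every namespace, similar to Helm 2's default behaviour
helm list --all-namespaces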
So, should I keep them all in my cluster? Like, every time I deploy
something, it creates a secret that just lives there, or
can I remove the older ones?
I don't think it's good practice to remove anything manually. If it's not strictly necessary, better not to touch them. However, you can delete unused ones if you're sure you won't need the old revisions in the future.
#To check all secrets that were created by Helm:
kubectl get secret -l "owner=helm" --all-namespaces
#To delete a revision you can simply remove the appropriate secret:
kubectl delete secret -n <namespace> <secret-name>
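If you want to automate that cleanup, here is a rough sketch (the release name "myrelease" and namespace "myns" are placeholders; it relies on the owner=helm and name labels Helm puts on its release secrets, so double-check the output before deleting anything):
# List the release secrets oldest-first and delete all but the newest two
kubectl get secret -n myns -l "owner=helm,name=myrelease" \
  --sort-by=.metadata.creationTimestamp -o name | head -n -2 | xargs -r kubectl delete -n myns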
Btw (just FYI), taking into account the fact that Helm 3 is scoped to namespaces, you can simply delete a deployment by deleting its corresponding namespace.
And one last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
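For example (the release, repo and chart names below are just placeholders):
# Keep at most 3 revision secrets per release; Helm prunes older ones automatically
helm upgrade --install myrelease repo/mychart -n myns --history-max 3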

Related

Helm delete and reinstall deployment. Wait or not to wait?

I have a situation where I am deploying some chart, let's call it "myChart".
Let's suppose I have a pipeline where I do the following:
helm delete myChart_1.2 -n <myNamespace>
And right after, I am installing a new one:
helm install myChart_1.3 -n <myNamespace>
Does Kubernetes, or maybe Helm, know that all the resources should be deleted first before installing the new ones?
For instance, there might be some PVCs and PVs that are still not deleted. Is there any problem with that? Should I add some waits before deployment?
Helm delete (aka uninstall) should remove the objects managed in a given deployment before exiting.
Still, when the command returns, you could be left with resources in a Terminating state, pending actual deletion.
Usually this comes up with PVCs, which may still be attached to a running container.
Or with objects such as ReplicaSets or Pods: most likely your Helm chart installs Deployments, DaemonSets, StatefulSets, ... and the top-level objects may appear to be deleted while their child objects are still being terminated.
This shouldn't be an issue for Helm, though, assuming your application is installed using a generated name, and as long as your chart is able to create multiple instances of the same application in the same cluster/namespace without them overlapping (i.e. if all resources managed through Helm have unique names, which is not always the case).
If your chart is hosted on a public repository, let us know what to check. And if you're not one of the maintainers of that chart, beware that Helm charts can range from amazing to very bad, depending on who's contributing, what use cases have been met so far, ...
Kubernetes (and Helm by extension) will never clean up PVCs that have been created as part of StatefulSets. This is intentional (see relevant documentation) to avoid accidental loss of data.
Therefore, if you do have PVCs created from StatefulSets in your chart and if your pipeline re-installs your Helm chart under the same name, ensure that PVCs are deleted explicitly after running "helm delete", e.g. with a separate "kubectl delete" command.
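A rough sketch of such a pipeline step (the names are placeholders, and the label selector depends on how your chart labels its PVCs):
helm uninstall myChart -n myNamespace
# StatefulSet PVCs survive the uninstall, so remove them explicitly
kubectl delete pvc -n myNamespace -l app.kubernetes.io/instance=myChart
# Optionally wait until they are really gone before reinstalling
kubectl wait --for=delete pvc -n myNamespace -l app.kubernetes.io/instance=myChart --timeout=120s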

Helm Release with existing resources

Previously we only used helm template to generate the manifests and apply them to the cluster. Recently we started planning to use helm install to manage our deployments, but we are running into the following problem:
Our deployment is a simple backend API which contains an "Ingress", a "Service", and a "Deployment". When there is a new commit, the pipeline is triggered to deploy.
We plan to use the short commit SHA as the image tag and Helm release name. Here is the command:
helm upgrade --install releaseName repo/chartName -f value.yaml --set image.tag=SHA
This runs perfectly fine the first time, but when I create another release it fails with the following error message:
rendered manifests contain a resource that already exists. Unable to continue with install: Service "app-svc" in namespace "ns" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "rel-124": current value is "rel-123"
The error message is pretty clear about what the issue is, but I am just wondering what's the "correct" way of using Helm in this case?
It is not practical to uninstall everything for a new release, and I also don't want to keep using the same release.
You are already doing it the "right" way, just don't change the release name. That's the key Helm uses to identify resources. It seems that you previously used a different name for the release (rel-123) than you are using now (rel-124).
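In other words, keep the release name stable across pipeline runs and only vary the image tag, e.g. (the release name and variable below are placeholders):
helm upgrade --install my-backend repo/chartName -f value.yaml --set image.tag=$SHORT_SHA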
To fix your immediate problem, you should be able to proceed by updating the value of the meta.helm.sh/release-name annotation on the problematic resource. Something like this should do it:
kubectl annotate --overwrite service app-svc meta.helm.sh/release-name=rel-124
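If Helm still refuses to adopt the resource, Helm 3.2+ also checks a release-namespace annotation and a managed-by label, so you may need something like this (values taken from the error message above):
kubectl -n ns annotate --overwrite service app-svc meta.helm.sh/release-name=rel-124 meta.helm.sh/release-namespace=ns
kubectl -n ns label --overwrite service app-svc app.kubernetes.io/managed-by=Helm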

Dynamically refresh pods on secrets update on kubernetes while using helm chart

I am creating Deployment and Service manifest files using Helm charts, and also Secrets with Helm, but separately, not together with the Deployments and Services.
Secrets are being loaded as env variables at the pod level.
We are looking to refresh or restart Pods when we update Secrets with new content.
Kubernetes itself does not support this feature at the moment, and there is a feature in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can use one of the custom solutions available to achieve this, and one of the popular ones is Reloader.
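With Reloader installed in the cluster, annotating the Deployment is usually all that's needed (the Deployment name below is a placeholder):
# Reloader watches the Secrets/ConfigMaps referenced by this Deployment and triggers a rolling update when they change
kubectl annotate deployment my-deployment reloader.stakater.com/auto="true"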

Restart Pod when secrets gets updated

We are using Secrets as environment variables on Pods, but every time we update a Secret, we redeploy the Pods for the changes to take effect. We are looking for a mechanism where Pods get restarted automatically whenever Secrets get updated. Any help on this?
Thanks in advance.
There are many ways to handle this.
First, use a Deployment instead of "naked" Pods that are not managed. The Deployment will create new Pods for you when the Pod template is changed.
Second, managing Secrets may be a bit tricky. It would be great if you can use a setup with the Kustomize SecretGenerator: then each new Secret gets a unique name. In addition, that unique name is reflected in the Deployment automatically, and your Pods will automatically be recreated when a Secret is changed, which matches your original problem. When the Secret and Deployment are handled this way, you apply the changes with:
kubectl apply -k <folder>
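A minimal sketch of such a setup (the file names and secret content are illustrative, and deployment.yaml is assumed to reference the Secret by its generator name, here app-secret, so that Kustomize can rewrite it):
# kustomization.yaml: the generated Secret gets a content-based hash suffix,
# and Kustomize rewrites the Deployment's secretRef to that new name,
# so every content change rolls out new Pods
cat > kustomization.yaml <<'EOF'
resources:
  - deployment.yaml
secretGenerator:
  - name: app-secret
    literals:
      - API_KEY=changeme
EOF
kubectl apply -k .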
If you mount your Secrets into the Pod as a volume, they will get updated automatically and you don't have to restart your Pod, as mentioned here.
Another approach is Stakater Reloader, which can reload your Deployments based on ConfigMaps, Secrets, etc.
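A sketch of the volume-mount approach (names are placeholders; note that the kubelet refreshes the mounted files with some delay, and subPath mounts are not updated):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: creds
          mountPath: /etc/creds   # files here track the Secret's data
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: my-secret     # placeholder Secret name
EOF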
There are multiple ways of doing this:
Simply restart the pod
this can be done manually (see the example after this list), or
you could use an operator provided by VMware Carvel's kapp-controller (documentation); using kapp-controller you can reload Secrets/ConfigMaps without needing to restart the Pods (it effectively runs helm template <package> on a periodic basis and applies the changes if it finds any differences in the rendered templates), check out my design for reloading the log level without needing to restart the pod.
Use service bindings: https://servicebinding.io/
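For the manual restart mentioned above, a rolling restart of the owning Deployment is usually the simplest option (the Deployment name is a placeholder):
# Re-creates the Pods, which pick up the current Secret values, without changing the spec
kubectl rollout restart deployment my-deployment
kubectl rollout status deployment my-deployment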

How to bind kubernetes resource to helm release

If I run kubectl apply -f <some statefulset>.yaml separately, is there a way to bind the StatefulSet to a previous Helm release? (e.g. by specifying some tags in the YAML file)
As far as I know - you cannot do it.
Yes, you can always create resources via templates before installing the Helm chart.
However, I have never seen a solution for your question.