How do I see what custom values were used in a Helm release? - kubernetes-helm

When I use helm install to install a chart into a Kubernetes cluster, I can pass custom values to the command to configure the release. helm must store them somewhere, because I can later roll back to them. However, I cannot find a way to view the values in the deployed version or the previous one.
I want to see what values will change (and confirm what values are set) when I roll back a release. I thought inspect or status might help with that, but they do different things. How can I see the values that were actually deployed?

To view what was actually deployed in a release, use helm get.
If you use helm -n <namespace> get all <release-name> you get all the information for the current release of <release-name> in namespace <namespace>†. You can specify --revision to get the information for a specific version, which you can use to see what the effect of rollback will be.
You can use helm -n <namespace> get values <release-name> to get just the values the install used/computed, rather than the whole chart and everything, or helm -n <namespace> get manifest <release-name> to view the generated resource configurations††.
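For example (a sketch; my-release and prod are placeholder names, substitute your own), you can compare the values of the current revision with an earlier one before rolling back:
# values actually supplied for the current revision
helm -n prod get values my-release
# values for revision 2, to preview what a rollback to that revision would restore
helm -n prod get values my-release --revision 2
# include computed/default values as well, not just the user-supplied ones
helm -n prod get values my-release --all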
Where this information is stored depends on the version of helm you are using:
For version 3, it is (by default) in a Secret named sh.helm.release.v1.<release-name>.v<revision> in the namespace where the release was deployed. The content of the Secret is roughly the same as what was in the Helm version 2 ConfigMap.
For version 2, it is in a ConfigMap named <release-name>.v<revision>, in the kube-system namespace. You can get more detail on that here.
†For helm version 2, use helm get <release-name> instead of helm get all <release-name>
††For helm version 2, release names had to be unique cluster-wide. For helm version 3, release names are scoped to namespaces, and the helm command operates on the "current" namespace unless you specify a namespace using the -n or --namespace command line option.

helm get <release-name> no longer works with Helm 3. helm get values <release-name> does show the custom values used for the release. Note: to see all possible values of a chart for reference, use helm show values <your-chart> - this shows the chart defaults, not the custom values, though.

Related

What's the difference between helm uninstall, helm delete and kubectl delete

I want to remove a pod that I deployed to my cluster with helm install.
I used 3 ways to do so:
helm uninstall <release name> -> remove the pod from the cluster and from the helm list
helm delete <release name> -> remove the pod from the cluster and from the helm list
kubectl delete -n <namespace> deploy <deployment name> -> remove the pod from the cluster but not from the helm list
What's the difference between them?
Is one better practice than the other?
helm delete is an alias for helm uninstall and you can see this when you check the --help syntax:
$ helm delete --help
...
Usage:
helm uninstall RELEASE_NAME [...] [flags]
kubectl delete ... just removes the resource in the cluster.
Doing helm uninstall ... won't just remove the pod; it will remove all the resources created by helm when it installed the chart. For a single pod this might be no different from using kubectl delete ..., but when you have tens or hundreds of different resources and dependent charts, doing all of this manually with kubectl delete ... becomes cumbersome, time-consuming and error-prone.
Generally, if you're deleting something off the cluster, use the same method you used to install it in the first place. If you used helm to install it into the cluster, use helm to remove it. If you used kubectl create or kubectl apply, use kubectl delete to remove it. A quick way to check what a release owns before removing it is shown below.
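A minimal sketch (release, namespace and deployment names are placeholders): helm get manifest lists exactly which resources the release owns, i.e. what helm uninstall would remove, while kubectl delete only touches the single resource you name.
# list the resources that belong to the release (what helm uninstall would remove)
helm -n <namespace> get manifest <release-name>
# remove the whole release and its history
helm -n <namespace> uninstall <release-name>
# remove only one resource; helm still considers the release installed
kubectl -n <namespace> delete deploy <deployment-name>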
I will add a point that we use quite a lot: helm uninstall/install/upgrade has hooks attached to its lifecycle. This matters a lot; here is a small example.
We have database scripts that are run as part of a job. Say you prepare a release with version 1.2.3, and as part of that release you add a column in a table - you have a script for that (liquibase/flyway, whatever) that will run automatically when the chart is installed. In plain English, helm install allows you to say in this case: "before installing the code, upgrade the DB schema". This is awesome and allows you to tie the lifecycle of such scripts to the lifecycle of the chart.
The same works for a downgrade: you could say that when you downgrade, the schema is reverted, or any other needed action is taken. kubectl delete simply does not have such functionality.
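A minimal sketch of what such a hook can look like (the job name, image and command are illustrative placeholders, not from the original answer): the helm.sh/hook annotation tells Helm to run this Job before the chart's resources are installed or upgraded.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-schema-migrate
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-registry/db-migrations:1.2.3   # placeholder image containing the migration tool
          command: ["flyway", "migrate"]           # placeholder command (liquibase/flyway/etc.)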
For me it is the same thing: uninstall, del, delete, and un are all the same helm command (check the Aliases section):
$ helm del --help
This command takes a release name and uninstalls the release.
It removes all of the resources associated with the last release of the chart
as well as the release history, freeing it up for future use.
Use the '--dry-run' flag to see which releases will be uninstalled without actually
uninstalling them.
Usage:
helm uninstall RELEASE_NAME [...] [flags]
Aliases:
uninstall, del, delete, un
Helm delete is the older command, which has now been replaced by helm uninstall. It basically uninstalls all the resources in the helm chart that were previously deployed using helm install/upgrade.
kubectl delete will delete just the one resource, which will get redeployed again if it was deployed by a helm chart. So this command is useful if you want to redeploy a pod, or to delete a resource that was not deployed using the helm chart approach.

Update Kubernetes job with Helm

I have a helm chart containing a Kubernetes Job, but unfortunately helm upgrade won't work because the image name is immutable, so logically I need to do a delete and install. However, I will lose my values.yaml settings if they were customised in the first place.
How can I keep the values before deleting the chart and use them for the new install, to simulate an upgrade? I couldn't find anything in the documentation or here.
Thanks
EDIT:
First you need to get your previous values with helm get values <release-name>
So you could redirect the values to a file with:
helm get values <release-name> -o yaml > values.yaml
And then do a helm install again
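Putting it together, a minimal sketch (the release, chart and namespace names are placeholders):
# save the values that were used for the existing release
helm -n <namespace> get values <release-name> -o yaml > values.yaml
# remove the release, since the Job's immutable fields block an in-place upgrade
helm -n <namespace> uninstall <release-name>
# install again with the saved values, simulating an upgrade
helm -n <namespace> install <release-name> <chart> -f values.yaml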

For how long should I keep the storage driver Secret in my cluster?

I'm using helm 3.4.2 to upgrade my charts on my AKS cluster, and I saw that every time I deploy something new, it creates a new secret called sh.helm.v... This is the first time I'm using helm.
I was reading the docs and found that in version 3.x helm uses secrets as the default storage driver. Cool, but every time I deploy it creates a new secret, and I'm not sure whether keeping them all in my cluster is best.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret and it lives there,
or
can I remove the older ones? Like, deploy v5 now and erase v1, v2, v3 and keep v4 and v5 for some reason. If it's ok to do that, does anyone have a clue how to do it? Using bash or kubectl?
Thanks a lot!
So yes, there are a few major changes in Helm 3 compared to Helm 2.
Secrets are now used as the default storage driver
In Helm 3, Secrets are now used as the default storage driver. Helm 2 used ConfigMaps by default to store release information. In Helm 2.7.0, a new storage backend that uses Secrets for storing release information was implemented, and it is now the default starting in Helm 3.
Also
Release Names are now scoped to the Namespace
In Helm 3, information about a particular release is now stored in the same namespace as the release itself. With this greater alignment to native cluster namespaces, the helm list command no longer lists all releases by default. Instead, it will list only the releases in the namespace of your current kubernetes context (i.e. the namespace shown when you run kubectl config view --minify). It also means you must supply the --all-namespaces flag to helm list to get behaviour similar to Helm 2.
So, should I keep them all in my cluster? Like, every time I deploy something, it creates a secret and it lives there, or can I remove the older ones?
I don't think it's good practice to remove anything manually. If it is not strictly necessary, better not to touch them. However, you can delete unused ones if you are sure you will not need the old revisions in the future.
# To list all secrets that were created by helm:
kubectl get secret -l "owner=helm" --all-namespaces
# To delete a revision, you can simply remove the corresponding secret:
kubectl delete secret -n <namespace> <secret-name>
Btw (just FYI), taking into account the fact that Helm 3 is scoped to namespaces, you can simply delete a whole deployment by deleting its corresponding namespace.
And a last remark that may be useful: you can pass --history-max to helm upgrade to limit the maximum number of revisions saved per release. Use 0 for no limit (the default is 10).
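For example (release and chart names are placeholders), to cap the release history so that only the five most recent revision secrets are kept:
helm -n <namespace> upgrade <release-name> <chart> --history-max 5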

Error installing csi driver to a kubernetes namespace - This command needs 1 argument: chart name

I am a Kubernetes novice. I am trying to install a csi driver to a Kubernetes namespace in a kubernetes cluster. I am using helm version 2.16 to do the install, with the command below:
.\helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver -n csi
Error: This command needs 1 argument: chart name
I also tried running:
.\helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace csi
and get the error below:
Error: This command needs 1 argument: chart name
Can someone help me with the correct command?
.\helm version
Client: &version.Version{SemVer:"v2.16.12", GitCommit:"47f0b88409e71fd9ca272abc7cd762a56a1c613e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Thanks
According to the official docs:
This command installs a chart archive.
The install argument must be a chart reference, a path to a packaged chart, a path to an unpacked chart directory or a URL.
To override values in a chart, use either the --values flag and pass in a file or use the --set flag and pass configuration from the command line. To force string values in --set, use --set-string instead. In case a value is large and you therefore want to use neither --values nor --set, use --set-file to read the single large value from a file.
CHART REFERENCES
A chart reference is a convenient way of referencing a chart in a chart repository.
When you use a chart reference with a repo prefix ('stable/mariadb'), Helm will look in the local configuration for a chart repository named 'stable', and will then look for a chart in that repository whose name is 'mariadb'. It will install the latest version of that chart unless you also supply a version number with the '--version' flag.
To see the list of chart repositories, use 'helm repo list'. To search for charts in a repository, use 'helm search'.
helm install [CHART] [flags]
Note the:
-n, --name string release name. If unspecified, it will autogenerate one for you
and:
--namespace string namespace to install the release into. Defaults to the current kube config namespace.
The error you see means that you did not pass the chart reference properly. In Helm 2, -n is short for --name (the release name), so in your command csi is taken as the release name and both csi-secrets-store and secrets-store-csi-driver/secrets-store-csi-driver are treated as positional arguments, while helm install expects exactly one (the chart). It should look more like this:
helm install <repo/chart> --name <release-name> --namespace <namespace>
which translated into your use case becomes:
helm install --name csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace csi
*Note that I am not sure of the values you want to use, so you have to check them yourself and adjust if needed. Here I assume that csi is the namespace, csi-secrets-store is the release name and secrets-store-csi-driver/secrets-store-csi-driver is the repo/chart name.
Also make sure that the chart you want to install does not require Helm v3.0+. If so, then you will have to upgrade Helm before installing.
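For comparison (not part of the original answer), if you do move to Helm 3 the release name becomes a positional argument and -n means --namespace, so your original invocation works as written:
helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver -n csi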

Helm re-install resources that already exist

How can I execute the "helm install" command and re-install resources that I have defined in "templates"? I have some custom resources that already exist, so I want to re-install them. Is it possible to do that through a parameter in the helm command?
I think your main question is:
I have some custom resources that already exist so I want to re-install them.
Which means DELETE then CREATE again.
Short answer
No, but it can be done through a workaround.
Detailed answer
Helm manages the RELEASE of the Kubernetes manifests by either:
creating: helm install
updating: helm upgrade
deleting: helm delete
However, you can recreate resources following one of these approaches:
1. Twice Consecutive Upgrade
If your chart is designed to enable/disable the installation of resources with Values (e.g. .Values.customResources.enabled), you can do the following:
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=false
# Then another Run
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=true
So, if you are the builder of the chart, your task is to make the design functional - a template sketch is shown below.
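A minimal sketch of what that design could look like inside the chart (the file name, kind and value key are illustrative placeholders following the .Values.customResources.enabled idea above): wrapping the manifest in an if block means the resource is deleted on the first upgrade and recreated on the second.
# templates/my-custom-resource.yaml
{{- if .Values.customResources.enabled }}
apiVersion: example.com/v1
kind: MyCustomResource
metadata:
  name: {{ .Release.Name }}-custom
spec: {}
{{- end }}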
2. Using helmfile hooks
Helmfile is the Helm of Helm.
It manages your helm releases within a single file called helmfile.yaml.
Not only that, it can also run LOCAL commands before or after installing or upgrading a Helm release.
This call, which happens before or after, is named a hook.
For your case, you will need a presync hook.
If we organize your helm release as a Helmfile definition, it should be:
releases:
  - name: <helm-release>
    chart: <chart>
    namespace: <namespace>
    hooks:
      - events: ["presync"]
        showlogs: true
        command: kubectl
        args: [ "-n", "{{`{{ .Release.Namespace }}`}}", "delete", "crd", "my-custom-resources" ]
Now you just need to run helmfile apply
I know that CRDs are not namespaced, but I put the namespace in the hook just to demonstrate that Helmfile can give you the namespace of the release as a variable, so there is no need to repeat yourself.
You can use helm upgrade to upgrade any existing deployed chart with changes.
The upgrade arguments must be a release and a chart. The chart argument can be either: a chart reference (example/mariadb), a path to a chart directory, a packaged chart, or a fully qualified URL. For chart references, the latest version will be used unless the --version flag is set.
To override values in a chart, use either the --values flag and pass in a file or use the --set flag and pass configuration from the command line; to force string values, use --set-string. In case a value is large and you therefore want to use neither --values nor --set, use --set-file to read the single large value from a file.
You can specify the --values/-f flag multiple times. Priority will be given to the last (right-most) file specified. For example, if both myvalues.yaml and override.yaml contain a key called 'Test', the value set in override.yaml takes precedence.
For example
helm upgrade -f myvalues.yaml -f override.yaml redis ./redis
An easier way I follow, especially for pre-existing jobs during helm upgrade, is to run kubectl delete job db-migrate-job --ignore-not-found before the upgrade.