I am trying to use the new Pulumi helm release (https://www.pulumi.com/registry/packages/kubernetes/api-docs/helm/v3/release/) and now wondering what helm command is wrapped inside here? Is it using helm install underneath or helm upgrade --install?
Thanks
To start with, the Helm Release resource embeds Helm as a library in the provider.
As for your question, it depends on the state of the release: a fresh install behaves like helm install, while an update to the resource triggers the equivalent of helm upgrade.
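For illustration, a minimal Pulumi TypeScript program using helm.Release might look like the following sketch (the repository URL, chart name, and namespace are just example values, not taken from the question):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Declares a Helm release; the first `pulumi up` acts like `helm install`,
// and subsequent updates to this resource act like `helm upgrade`.
const nginx = new k8s.helm.v3.Release("nginx", {
    chart: "nginx",
    repositoryOpts: {
        repo: "https://charts.bitnami.com/bitnami", // example repo
    },
    namespace: "web",
    values: { service: { type: "ClusterIP" } },
});
```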
Related
when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this? The example I am using is dagster. When installing with:
helm install dagster dagster/dagster --namespace dagster --create-namespace
everything starts up fine and secrets are created. When updating the image and tag and performing an upgrade with:
helm upgrade -f charts/dagster-user-deployments/values.yaml dagster ./charts/dagster-user-deployments -n dagster
the image is upgraded, but all secrets are deleted. Why would/could this happen?
After running the upgrade command, I expect secrets to still be in place, and the new image to be pulled and run.
when performing helm upgrade, I find that secrets that are created upon initial install are deleted. Why is this?
This is currently how Helm works; here's the issue opened for discussion, and there are several workarounds provided there as well.
Helm will create a new secret every time you install or upgrade a release; these release records are stored as Secrets in the release's namespace (in Helm 3, named sh.helm.release.v1.<release>.v<revision>).
By default Helm will keep up to 10 revisions, and whenever you run commands like helm list, helm history, or helm upgrade, it uses those records to know what it has done in the past.
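You can inspect those per-revision release records directly; assuming Helm 3 and the dagster release from the question, something like the following should list one Secret per revision:

```shell
# Helm 3 labels its release-record Secrets with owner=helm and name=<release>
kubectl get secrets -n dagster -l owner=helm,name=dagster
```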
I have delved into the Helm documentation, and it is still unclear to me what the difference between the two is. Here's what I understand so far:
helm install -> install a helm chart
helm repo add -> add a repo from the internet
You can see Helm as a templating tool: it reads files from the templates directory, fills them in with values from values.yaml, and deploys the result into the Kubernetes cluster. This is all done by the helm install command. So, helm install takes your chart and deploys it into the Kubernetes cluster.
One of the features of Helm is helm package, which packages your chart into a single *.tgz file that you can then store in a Helm repository. A lot of Helm charts are distributed that way; you can browse, e.g., Artifact Hub. If you find a chart you'd like to install from such a repository, you can make that remote repo known to your local Helm client using helm repo add; then helm repo update refreshes the local index of the charts it offers. Adding a repo only makes its charts available for download; it does not deploy anything into the Kubernetes cluster. To do that, you need to use helm install.
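The distinction above can be sketched as a short shell session (repo, chart, and release names are example values):

```shell
# Make a remote chart repository known to your local Helm client
helm repo add bitnami https://charts.bitnami.com/bitnami

# Refresh the local index of charts that repo offers
helm repo update

# Nothing is running in the cluster yet; deploying requires an install
helm install my-nginx bitnami/nginx --namespace web --create-namespace
```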
I have multiple subcharts under one helm chart. I install those using the command
helm install my-app . --values values.dev.yaml
It is working fine; all subcharts are part of one release. Now other team members will start working on the individual subcharts, and each wants to upgrade their own subchart, within the same release, without deleting/upgrading the entire application's subcharts.
So, to upgrade only one subchart, say frontend, I tried
helm upgrade my-app ./charts/frontend --values values.dev.yaml
This terminates all the other pods and keeps only the pod for the frontend subchart running. Is there any way to upgrade only one subchart of the application without touching the other subcharts?
Just run helm upgrade on the top-level chart normally:
rm requirements.lock
helm dependency update
helm upgrade my-app . -f values.dev.yaml
This will "redeploy" the entire chart, including all of its subcharts, but Helm knows to not resubmit unchanged objects to Kubernetes, and Kubernetes knows to not take action when an unmodified object is submitted.
Helm subcharts have some limitations; in addition to what you describe here about not being able to separately manage subcharts' versions, they will also flatten recursive dependencies together (if A depends on B depends on Redis, and A depends on C depends on Redis, B and C will share a single Redis installation and could conflict). If you need to separately manage the versions, consider installing the charts as separate top-level releases.
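Under that last approach, each component becomes its own top-level release, and a teammate can upgrade one without touching the others. A sketch, reusing the chart paths and values file from the question (the backend chart and dev namespace are assumed for illustration):

```shell
# Install each component as an independent release
helm install frontend ./charts/frontend -f values.dev.yaml -n dev
helm install backend  ./charts/backend  -f values.dev.yaml -n dev

# Later, upgrade only the frontend; the backend release is untouched
helm upgrade frontend ./charts/frontend -f values.dev.yaml -n dev
```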
If your sub-charts are 3rd party dependencies (i.e. you are combining some charts together in a single chart), you can update the external charts by updating Helm dependencies:
Once in the Helm chart dir, where Chart.yaml lives, run
$ helm dependency update
To make sure you get the latest dependency, update Helm repos first:
$ helm repo update && helm dependency update
This will download the latest dependent charts (or the latest allowed, depending on your Chart.yaml config).
Please note that helm dependency update downloads the dependent charts as .tgz archives into the charts/ directory. If no action is taken (i.e., you don't ignore them in git), they could end up version-controlled in your git repo.
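For reference, third-party dependencies are declared in Chart.yaml (Helm 3 style); helm dependency update resolves the version constraints there and downloads the matching archives. The chart name, version range, and repository below are example values:

```yaml
# Chart.yaml (dependencies section only)
dependencies:
  - name: redis
    version: "~17.0.0"   # allows the latest 17.0.x patch release
    repository: https://charts.bitnami.com/bitnami
```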
When I recently tried to use GitLab Auto DevOps with Kubernetes, the GitLab GUI could not see the Helm Tiller application.
Why?
One possible reason could be that you are using Helm 3. Tiller doesn't exist in Helm 3.
The internal implementation of Helm 3 has changed considerably from Helm 2. The most apparent change is the removal of Tiller.
Is it possible to install new packages with helm3 while helm2 is running in the cluster with tiller?
Any potential problems to packages installed with helm2?
I don't see a problem. Helm v2 talks to Tiller, Helm v3 doesn't. Just basically keep:
Every package installed with Helm v2 managed using Helm v2
Every package installed with Helm v3 managed using Helm v3
Eventually, you will have to migrate everything to Helm v3. One thing to note is that if you install the latest Helm v3, the client executable is named helm. So you may want to rename the v2 client executable to helm2 and the v3 client executable to helm3 to keep yourself from getting confused.
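When you do migrate, the helm-2to3 plugin can automate moving v2 release data to v3. A rough sketch, assuming the renamed binaries suggested above and an example release name (try it on a non-critical release first):

```shell
# Install the migration plugin into the v3 client
helm3 plugin install https://github.com/helm/helm-2to3

# Move Helm v2 configuration (repos, plugins) over to v3
helm3 2to3 move config

# Convert a single v2 release into a v3 release
helm3 2to3 convert my-release
```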