We're in the process of removing Helm from our IaC setup and switching to just Terraform. Our system is currently running live in production, so simply deleting all the Helm charts and re-deploying with Terraform is not an option: the system must maintain uptime.
Our original idea was to:
1. Use a tool like k2tf to convert the Helm YAML to Terraform config.
2. Run tf import ... to import the existing k8s resource into Terraform state.
3. Run tf apply to allow Terraform to strip off any attached Helm metadata such as labels/annotations.
4. Update the Helm chart to no longer include the resource and deploy it.
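For illustration, this is roughly what we attempted for a single Deployment; the names my-api and my-namespace and the Terraform address kubernetes_deployment.my_api are placeholders, and the exact k2tf flags may vary by version:

# Export the live manifest and convert it to Terraform config (step 1)
kubectl get deployment my-api -n my-namespace -o yaml > my-api.yaml
k2tf -f my-api.yaml -o my-api.tf

# Import the existing object into Terraform state (step 2); the kubernetes provider expects namespace/name as the ID
terraform import kubernetes_deployment.my_api my-namespace/my-api

# Reconcile; this is where the Helm-injected labels/annotations get stripped (step 3)
terraform apply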
Unfortunately, this doesn't appear to work, as step 4 still deletes the resource. We had hoped that stripping the Helm labels/annotations in step 3 would make Helm think it no longer owned the resource and thus not delete it, but the Helm release still maintained some knowledge of it.
Any ideas on how this could be done? I know there are ways to delete a Helm chart and leave the resources in place, but as mentioned this isn't really an option for us. We want to slowly migrate resources out of the Helm chart. Is there some way to explicitly tell Helm to "disown" a resource?
After some digging through the docs, I found the "helm.sh/resource-policy": keep annotation: https://helm.sh/docs/howto/charts_tips_and_tricks/#tell-helm-not-to-uninstall-a-resource
Applying this to a resource means Helm won't delete it, even if you run a helm upgrade to a chart version with the resource removed.
So our strategy for migrating a resource from Helm to TF goes as follows:
1. Attach the "helm.sh/resource-policy": keep annotation to the desired resource, then release and deploy the Helm chart.
2. Remove the resource description from the Helm chart and release and deploy again.
3. Write the Terraform config for that resource (using k2tf from the original YAML if appropriate).
4. Run tf import ... on the now-orphaned k8s resource.
5. Run tf apply to bring everything up to date.
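As an illustration of step 1, this is roughly what the keep annotation looks like in the chart template before the removal release in step 2 (the kind, name and file path are placeholders):

# templates/deployment.yaml (placeholder)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  annotations:
    "helm.sh/resource-policy": keep

After step 2 the object is orphaned but still running, so steps 4 and 5 can import and reconcile it exactly as sketched earlier.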
Related
I have a few services running on a Kubernetes cluster, and I use a Helm chart where I placed all my services. However, I was asked to move the Helm charts into Helmfile.
If I use
helmfile import myrepo/mychart
helmfile sync
Will it redeploy and replace the existing running pods, or will it just deploy the additional services mentioned?
Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on.
Helmfile is a declarative spec for deploying helm charts. It lets you...
Keep a directory of chart value files and maintain changes in version control.
Apply CI/CD to configuration changes.
Periodically sync to avoid skew in environments.
To avoid upgrades for each iteration of helm, the helmfile executable delegates to helm - as a result, helm must be installed.
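For reference, a minimal helmfile.yaml sketch (the repository URL and the release/chart names are placeholders):

repositories:
  - name: myrepo
    url: https://charts.example.com

releases:
  - name: mychart
    namespace: default
    chart: myrepo/mychart
    values:
      - values.yaml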
As @DavidMaze suggested, run the helmfile diff command first to determine the changes, and then use helmfile sync to apply them.
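In practice that looks like the following; note that helmfile apply combines the two (it diffs and only syncs when there are changes):

helmfile diff   # show what would change against the currently deployed releases
helmfile sync   # apply the desired state from helmfile.yaml

Kubernetes only replaces pods whose rendered spec actually changed, so existing pods for unchanged services are not restarted.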
We have several resources deployed as part of a Helm (v3) chart. Some time ago, I made changes to resources deployed by that Helm chart manually, via kubectl. This caused some drift between the values in the YAML resources deployed by the Helm release (as shown by helm get values <release>) and what is actually deployed in the cluster.
Example: kubectl describe deployment <deployment> shows an updated image that was manually applied via a kubectl re-apply. Whereas helm show values <release> shows the original image used by helm for said deployment.
I realize that I should have performed a helm upgrade with a modified values.yaml file to execute the image change, but I am wondering if there is a way for me to sync the state of the values I manually updated with the values in the helm release. The goal is to create a new default values.yaml that reflect the current state of the cluster resources.
Thanks!
This is a community wiki answer posted for better visibility. Feel free to expand it.
According to Helm issue 2730, this feature will not be added to Helm, as it is outside the scope of the project.
It looks like there is no existing tool from Helm itself that would help to port/adapt the live Kubernetes resources back into existing or new Helm charts/releases.
Based on this, you can use one of the following options:
As suggested by @David Maze, the Helm Diff Plugin will show you the difference between the chart output and the cluster, but then you need to manually update values.yaml and the templates (see the sketch after this list).
The helm-adopt plugin is a helm plugin to adopt existing k8s resources into a new generated helm chart.
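A hedged sketch of the first option, assuming the helm-diff plugin is installed and using placeholder release/chart names:

helm plugin install https://github.com/databus23/helm-diff   # one-time plugin install
helm diff upgrade my-release ./my-chart -f values.yaml        # show what would change if you upgraded with this chart and values

The output highlights the fields that were changed manually; those changes then have to be folded back into values.yaml or the templates by hand.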
In the company where I currently work, we have an in-house developed RBAC Helm chart that, among other things, has a namespace resource in the templates folder, so the namespaces are now part of the Helm release.
We have now realised that it is not such a good idea to include the namespace in Helm, because there may be a number of cases (insert any reason here) where Helm may try to recreate a given resource, and if this happens to the namespace object it will delete (in order to recreate) the namespace along with everything in it.
My question: is it possible to make Helm stop tracking the namespace without actually deleting it, so that the namespace is no longer part of the Helm release?
As per Helm documentation, you can ask Helm not to delete the resource.
annotations:
  "helm.sh/resource-policy": keep
The annotation "helm.sh/resource-policy": keep instructs Helm to skip deleting this resource when a helm operation (such as helm uninstall, helm upgrade or helm rollback) would result in its deletion. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been uninstalled, but has kept resources.
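A minimal sketch of how this could look for the namespace case from the question; the template path and namespace name are placeholders. You would ship one release with the annotation in place, and then a follow-up release with the namespace template removed:

# templates/namespace.yaml (placeholder)
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  annotations:
    "helm.sh/resource-policy": keep

After the second release, the namespace keeps running in the cluster but is no longer tracked by the Helm release.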
I have all our applications in Kubernetes Helm charts using:
# values.yaml
default:
  IMAGE_REPO: myorg
  IMAGE_NAME: api
  IMAGE_TAG: latest
I understand that in order for Helm to know it has to re-deploy the pods (i.e. pull down the latest image) I have to change the IMAGE_TAG. My question is: how is this managed? Do I manually update the values.yaml file on every deploy, git commit, git pull on master, and then run helm upgrade api --values values.yaml ./?
Or is it better to just leave values.yaml on latest and update via the command line directly like:
helm upgrade api --values values.yaml ./ --set IMAGE_TAG=ab31f452
Use git (99% of the time)
For a production deployment or anywhere that needs tracking I would want it in git and pushed from there. The helm chart will also evolve over time with your app so this also means you get checkpoints of working app versions with the helm chart.
For development or snapshot environments that don't need to be reproduced, I sometimes go with the less formal method of pushing out new image tags with helm as needed, especially if you have something like Jenkins or another job runner that tracks when and how things happen.
This is very dependent on the environment the app runs in. It can range from applications that require an audit trail all the way from dev, through testing, to production deployment, where it has to be in git, over to the other end of the spectrum of pushing things to production by hand (where you end up wanting it in git anyway).
I understand that in order for Helm to know it has to re-deploy the pods (i.e. pull down the latest image) I have to change the IMAGE_TAG
This isn't entirely correct: Kubernetes will reschedule pods when the resource spec changes. You could change an annotation or label on the pod spec and the pods would be replaced. Then imagePullPolicy: Always can be set in the pod spec.
Still, don't rely on :latest. It will bite you one day.
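A common Helm pattern for the annotation approach mentioned above is to roll the pods whenever referenced config changes, or on every upgrade; this is a sketch, and the configmap.yaml path is an assumption about your chart layout:

# deployment.yaml pod template annotations (sketch)
spec:
  template:
    metadata:
      annotations:
        # re-roll pods when the configmap content changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
        # or force a roll on every helm upgrade
        rollme: {{ randAlphaNum 5 | quote }}

Both tricks come from the Helm "Charts Tips and Tricks" page and replace pods without relying on :latest plus imagePullPolicy: Always.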
The recommended approach for a production environment is to use immutable image tags, so you can easily tell which version is running on the k8s cluster. Also, you have to run the command like this, because the image tag is a nested value:
helm upgrade api --values values.yaml ./ --set default.IMAGE_TAG=ab31f452
I want to export already templated Helm Charts as YAML files. I can not use Tiller on my Kubernetes Cluster at the moment, but still want to make use of Helm Charts. Basically, I want Helm to export the YAML that gets send to the Kubernetes API with values that have been templated by Helm. After that, I will upload the YAML files to my Kubernetes cluster.
I tried to run .\helm.exe install --debug --dry-run incubator\kafka but I get the error Error: Unauthorized.
Note that I run Helm on Windows (version helm-v2.9.1-windows-amd64).
We need logs to check the Unauthorized issue.
But you can easily generate templates locally:
helm template mychart
Render chart templates locally and display the output.
This does not require Tiller. However, any values that would normally
be looked up or retrieved in-cluster will be faked locally.
Additionally, none of the server-side testing of chart validity (e.g.
whether an API is supported) is done.
More info: https://helm.sh/docs/helm/helm_template/
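For completeness, a hedged example of rendering with your own values and writing the output to a file; the chart path and file names are placeholders, and note that the syntax differs between Helm v2 and v3:

helm template --name myrelease -f values.yaml ./mychart > manifests.yaml   # Helm v2
helm template myrelease ./mychart -f values.yaml > manifests.yaml          # Helm v3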
Amrit Bera's solution will only work with a local Helm chart; per the details of your question, you want it to work with a remote Helm chart. That feature will be added in Helm v3 (currently a work in progress).
RehanSaeed posted the following workaround (https://github.com/helm/helm/issues/4527)
Basically:
mkdir yamls
helm fetch --untar --untardir . 'stable/redis' #makes a directory called redis
helm template --output-dir './yamls' './redis' #redis dir (local helm chart), export to yamls dir
The good thing about this is that you can mix this technique with Weaveworks Flux for GitOps. It also gives you another option for using Helm v2 without Tiller, in addition to the Tiller plugin (which lets you run Tiller locally, but doesn't work smoothly).
Straight from the helm install --help
To check the generated manifests of a release without installing the chart,
the '--debug' and '--dry-run' flags can be combined. This will still require a
round-trip to the Tiller server.
If you want to see only the resolved YAML you can use
helm template .
I prefer to see it in a file:
helm template . > solved.yaml
This is not the answer to the question, but this post on Stack Overflow is the first one displayed in search engines when I was searching for a solution to my problem, and I solved it by myself by reading the Helm CLI docs. I post it here anyway because maybe someone else is searching for the same use case as I did.
For already installed Helm charts on a Kubernetes cluster you can use the following command to export/download all information for a named release:
helm get all <release-name>
or
helm get all <release-name> > installed-kubernetes-resources.yaml
If you only want e.g. the manifests or values instead of all, just replace the all subcommand appropriately (get more details by using helm get --help):
Usage:
  helm get [command]

Available Commands:
  all         download all information for a named release
  hooks       download all hooks for a named release
  manifest    download the manifest for a named release
  notes       download the notes for a named release
  values      download the values file for a named release
If you want to export the information for a named release at a specific revision, you can use the --revision int flag in your get command (see helm get all --help). To list all revisions of your named release, use the command helm history <release-name>.
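For example (the release name is a placeholder):

helm history my-release                 # list all revisions of the release
helm get all my-release --revision 2    # export the information for revision 2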
My Helm CLI version:
version.BuildInfo{Version:"v3.5.0", GitCommit:"32c22239423b3b4ba6706d450bd044baffdcf9e6", GitTreeState:"clean", GoVersion:"go1.15.6"}