When packaging with Helm, the Chart.yaml must specify a version. We would like the chart version to match the app's version.
Is there a way to read our existing version.txt file instead of remembering (or not) to update the version in two places?
There isn't. To confuse things further, there's also an appVersion field in Chart.yaml saying what version of the application you're packaging, but it's also near-universal to be able to specify an image tag as a Helm value, with the same effect.
The version: field is only really used by Helm-specific tooling, for example if this chart is listed as a dependency of other charts or if you're publishing it to a central repository. If you're not doing either of these things, you can largely ignore it.
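For reference, a minimal Chart.yaml showing the two fields side by side (the name and version numbers here are placeholders):
apiVersion: v2        # chart API version; v2 is the Helm 3 chart format
name: mychart
version: 1.2.3        # the chart's own version, which this question is about
appVersion: "4.5.6"   # informational: the version of the application being packaged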
If your CI system is publishing the Helm chart to a repository, you might be forced to have it modify the Chart.yaml file before publishing. A simple sed command will work:
sed -i.bak "s/^version:.*/version: $APP_VERSION/" Chart.yaml
but it does become slightly messy to set up.
If you have a more formal "release" process then you would have to remember to update the version number in both places; writing a shell script to update the version numbers (and tag the release in source control, and do whatever other tasks you need) is probably the most straightforward answer.
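A minimal sketch of such a script, assuming version.txt sits at the repository root and the chart lives in ./chart (adjust both paths to your layout):
#!/bin/sh
# Read the single source of truth for the application version.
APP_VERSION=$(cat version.txt)
# Rewrite both version fields in the chart metadata.
sed -i.bak "s/^version:.*/version: $APP_VERSION/" chart/Chart.yaml
sed -i.bak "s/^appVersion:.*/appVersion: $APP_VERSION/" chart/Chart.yaml
# Record and tag the release in source control.
git add chart/Chart.yaml
git commit -m "Release $APP_VERSION"
git tag "v$APP_VERSION"
Newer Helm releases can also override both fields at packaging time, helm package ./chart --version "$APP_VERSION" --app-version "$APP_VERSION", which avoids editing Chart.yaml at all.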
I have a ton of Helm files, with the structure aligned to comply with Helm 2, i.e. a separate requirements.yaml file and no type: application in Chart.yaml.
Is there an option in the helm-2to3 plugin that automagically places the contents of requirements.yaml under Chart.yaml, or do I have to write a script to do this myself?
My charts are checked in to GitHub, by the way (not using a Helm repo but operating them via GitOps).
Edit: after confirmation in the answers below that helm-2to3 does not provide that functionality, I ended up using the draft script below (warning: do not use it in production :) ); you can then proceed with a simple find/xargs one-liner (shown after the script) to remove all requirements.yaml files, or give them a .bak extension to keep them around for some time.
The script should of course be run from the root directory of the project where your Helm files are kept.
import os
import time

# Walk the whole project tree looking for Helm 2 style requirements.yaml files.
for root, dirs, files in os.walk(os.path.abspath(".")):
    for file in files:
        if file == "requirements.yaml":
            path = os.path.dirname(os.path.join(root, file))
            print(path)
            os.chdir(path)
            local_files = [f for f in os.listdir('.') if os.path.isfile(f)]
            # Only touch directories that hold both halves of a Helm 2 chart.
            if "requirements.yaml" in local_files and "Chart.yaml" in local_files:
                requirements_path = os.path.join(root, file)
                print("Doing job in..: ", requirements_path)
                chart_path = os.path.join(path, "Chart.yaml")
                print(chart_path)
                # Append the dependencies: block from requirements.yaml to
                # Chart.yaml, which is where Helm 3 expects dependencies.
                with open(requirements_path) as req:
                    r = req.read()
                with open(chart_path, 'a') as chart:
                    chart.write(r)
                time.sleep(2)
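The find/xargs one-liner mentioned above could look something like this, renaming rather than deleting so the .bak copies stick around:
find . -name requirements.yaml | xargs -I{} mv {} {}.bak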
I have a ton of Helm template files, with the structure aligned to comply with Helm 2, i.e. a separate requirements.yaml file and no type: application in Chart.yaml
The template files within the Helm chart should work the same way under Helm 3. It's the Chart.yaml file in each chart that will need to be edited. Unfortunately, the helm-2to3 plugin won't do that for you; it's primarily designed to fix the Kubernetes cluster where you might previously have installed Helm 2 charts.
Helm 3 is capable of installing older Helm charts, so I suggest updating each chart one at a time.
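For a single chart, the manual edit looks roughly like this (a hedged sketch; the dependency name, version, and repository URL are placeholders). In Helm 3 the body of requirements.yaml moves into Chart.yaml under a dependencies: key, and apiVersion becomes v2:
apiVersion: v2              # v2 marks this as a Helm 3 chart
name: mychart
version: 0.1.0
dependencies:               # formerly the contents of requirements.yaml
  - name: postgresql
    version: 8.6.4
    repository: https://charts.example.com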
To conclude: we're using GitOps too. Happily, Argo CD supports both Helm 2 and Helm 3 charts and only uses Helm to generate YAML, which means there is no Helm configuration on the cluster that requires updating. (Miles has a link in his answer if you're alternatively using Flux CD.)
There is a very good plugin that does what you're looking for: https://github.com/helm/helm-2to3
There is also information on migrating a GitOps setup on the Flux CD page (assuming you're using Flux and Helm Operator for your GitOps) https://docs.fluxcd.io/projects/helm-operator/en/stable/helmrelease-guide/release-configuration/#migrating-from-helm-v2-to-v3
Even more information here: https://helm.sh/docs/topics/v2_v3_migration/
I am starting to get my arms around using Kubernetes and Helm. Most of it makes perfect sense to me, but I am missing one thing and maybe someone can answer it: why are there separate Chart.yaml and values.yaml files? It seems to me that it would make better sense, on a helm install command, to have one file with a standard name. Asking for some DevOps wisdom.
Chart.yaml contains metadata about the chart itself: its name, the chart version, a description, and similar details. In Helm 3 it can contain dependencies as well.
values.yaml contains configuration settings for the chart. This typically includes things like the image repository to pull from, where you want data to be stored, and how to make the service accessible.
When you install the chart, you can use helm install -f to supply an additional YAML file of configuration options that override things in values.yaml, or helm install --set to set a single specific value. You can't override things in Chart.yaml.
In the template code, items in Chart.yaml and values.yaml are available in the top-level data items .Chart and .Values, respectively.
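As a small illustration, a hypothetical template fragment that uses both (image.repository and image.tag are conventional value names here, not required ones):
# templates/deployment.yaml (fragment)
metadata:
  name: {{ .Chart.Name }}
spec:
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
A matching install-time override (Helm 3 syntax; the release name is a placeholder) would then be helm install myrelease ./mychart --set image.tag=v2.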
I am trying to understand the usage of the requirements.lock file. For using a dependent chart, we can make use of requirements.yaml. Based on the documentation:
requirements.lock: rebuild the charts/ directory based on the requirements.lock file
requirements.yaml: update charts/ based on the contents of requirements.yaml
Can someone explain the difference and usage of the lock file, and do we need to check the requirements.lock file into the repo too?
This article says it well:
Much like a runtime language dependency file (such as Python's requirements.txt), the requirements.yaml file allows you to manage your chart's dependencies and their versions. When updating dependencies, a lockfile is generated so that subsequent fetches of dependencies use a known, working version.
The requirements.yaml file lists only the immediate dependencies that your chart needs. This makes it easier for you to focus on your chart.
The requirements.lock file lists the exact versions of the immediate dependencies, their dependencies, and so on down the tree. This allows Helm to precisely track the entire dependency tree and recreate it exactly as it last worked, even if some of the dependencies (or their dependencies) are updated later.
Here's roughly how it works:
You create the initial requirements.yaml file. You run helm dependency update, and Helm resolves the dependency tree and writes the requirements.lock file as it populates charts/.
On the next helm dependency build (or an install from the same chart directory), Helm uses exactly the versions recorded in the requirements.lock file.
At some later date, you update the requirements.yaml file. You run helm dependency update again, and Helm notices your changes and updates the requirements.lock file to reflect them.
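In command form, that cycle looks roughly like this (the chart directory ./mychart is a placeholder):
helm dependency update ./mychart   # resolves requirements.yaml, populates charts/, writes requirements.lock
helm dependency build ./mychart    # reproduces charts/ exactly as pinned in requirements.lock
Because the lock file is what makes the dependency tree reproducible, it is typically checked into the repo alongside requirements.yaml.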
I have a Helm repository set up for my CI/CD pipeline, but the one thing I am having trouble with is Helm's versioning system, which insists on semantic versions of the form x.x.x.
I want to be able to specify tags like "staging", "latest", and "production", and although I am able to successfully upload charts with string versions:
NAME                   CHART VERSION    APP VERSION
chartmuseum/mychart    latest           1.0
Any attempt to actually access the chart fails. For example,
helm inspect chartmuseum/mychart --version=latest
generates the error:
Error: failed to download "chartmuseum/mychart" (hint: running 'helm repo update' may help)
I don't really want to get into controlled semantic versioning at this point in development, or the mess that is appending hashes to a version. Is there any way to get helm to pull non-semantically tagged chart versions?
My approach, when I do not want to version my chart (and subcharts) semantically either, is not to use a Helm repository at all and to just pull the whole chart from git in CI/CD instead. If you are publishing charts to a wider audience this may not suit you, but for our own CI/CD, which is authorized to access our repositories anyway, it works like a charm.
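A minimal sketch of that approach (the repository URL, chart path, and release name are all hypothetical):
git clone --depth 1 https://git.example.com/product-charts.git
helm upgrade --install myrelease ./product-charts/mychart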
I found something that worked for me. Since semver allows you to append a suffix after the last number, as in 0.1.0-aebcaber, I've taken to simply using 0.1.0-latest and overwriting that version in ChartMuseum on uploads.
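Roughly, that workflow could look like this (assuming a ChartMuseum instance started with ALLOW_OVERWRITE=true; the host name is a placeholder):
helm package mychart --version 0.1.0-latest
curl --data-binary "@mychart-0.1.0-latest.tgz" http://chartmuseum.example.com/api/charts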
I've checked out helm.sh, of course, but at first glance the entire setup seems a little complicated (helm client & tiller server). It seems to me like I can get away with just the helm client in most cases.
This is what I currently do
Let's say I have a project composed of three services: postgres, express, and nginx.
I create a directory called product-release that is as follows:
product-release/
  .git/
  k8s/
    postgres/
      Deployment.yaml
      Service.yaml
      Secret.mustache.yaml  # Needs to be rendered by the dev before use
    express/
      Deployment.yaml
      Service.yaml
    nginx/
      Deployment.yaml
      Service.yaml
  updates/
    0.1__0.2/
      Job.yaml    # postgres schema migration
      update.sh   # k8s API server scripts to patch/replace existing k8s objects, and run the state-change job
The usual git stuff applies now. Every time I make a change, I update the spec files, test them, write the update scripts that move from the last version to the current one, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does Helm have the Tiller server? Isn't it simpler to do the templating on the client side? Of course, if you want to separate the act of deployment from knowledge of the application (like secrets), the templating would have to happen on the server, but otherwise, why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production-ready, at least from my team's quick glance at it.
We'll stick with keeping YAML files in git together with the deployed application for now, I guess.
We are using Kubernetes/Helm (the latest/incubated version) and a central repository for Helm charts (which references container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Helm v2 needed Tiller and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
According to the idea of GitOps, what you did is a right way (performing releases from a git repo). However, if you want to push it further and make it more standard, you can plan more goals, including:
Choose a configuration management system beyond plain declarative k8s app definitions, e.g. Helm (as in the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize; both are purely client-side.
Avoid the custom release process by replacing update.sh with popular tools like kubectl apply or helm install (see the sketch after this list).
Drive change delivery from git tags/branches using a CI/CD engine such as Argo CD, Travis CI, or GitHub Actions.
Use a branching strategy so that you can try changes in test/staging/production environments before delivering them directly.
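For the second item, a hedged sketch of what could replace a hand-rolled update.sh, reusing the k8s/ layout from the question (the chart path in the last command is hypothetical):
# apply the plain manifests declaratively
kubectl apply -f k8s/postgres/ -f k8s/express/ -f k8s/nginx/
# or, once the manifests have been packaged as a chart
helm upgrade --install product-release ./product-release-chart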