Helm chart versions for CI/CD - kubernetes

I have a Helm repository set up for my CI/CD pipeline, but the one thing I am having trouble with is Helm's versioning system, which expects semantic versions of the form x.x.x.
I want to be able to specify tags like "staging", "latest", and "production", and although I am able to successfully upload charts with string versions:
NAME                 CHART VERSION  APP VERSION
chartmuseum/mychart  latest         1.0
Any attempt to actually access the chart fails, such as
helm inspect chartmuseum/mychart --version=latest
Generates the error:
Error: failed to download "chartmuseum/mychart" (hint: running 'helm repo update' may help)
I don't really want to get into controlled semantic versioning at this point in development, or the mess that is appending hashes to a version. Is there any way to get helm to pull non-semantically tagged chart versions?

My approach to this, where I also do not want to version my chart (and subcharts) semantically, is not to use a Helm repository at all and just pull the whole chart from git in CI/CD instead. If you are publishing charts to a wider audience this may not suit you, but for our own CI/CD, which is authorized to access our repositories anyway, it works like a charm.
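As a minimal sketch of what that looks like in a CI job (the repository URL, chart path, and values file below are hypothetical):

git clone --depth 1 --branch staging https://example.com/ops/charts.git
helm upgrade --install myapp ./charts/myapp -f ./charts/myapp/values-staging.yaml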

I found something that worked for me. Since SemVer allows you to append a pre-release suffix after the last number, as in 0.1.0-aebcaber, I've taken to simply using 0.1.0-latest and overwriting that version in ChartMuseum on each upload.
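A hedged sketch of that workflow (the chart name and ChartMuseum URL are assumptions; ChartMuseum must be running with ALLOW_OVERWRITE=true for the re-upload to replace the previous package):

helm package ./mychart --version 0.1.0-latest
curl --data-binary "@mychart-0.1.0-latest.tgz" http://chartmuseum.local:8080/api/charts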

How to helm upgrade with v3 and remove / overwrite any manual changes that have been applied to templates

I have a problem where we essentially discovered a piece of stale configuration in a live environment on one of our deployments (a config map was added as a volume mount). Reading through the docs here (search for 'Upgrades where live state has changed'), we can see that Helm v2 would purge changes that were introduced to a template by external actors, whereas v3 is very clever and will merge externally introduced changes alongside template changes as long as they don't conflict.
So how do we, in Helm v3, run an upgrade that purges any manual template changes that may have been introduced?
Based on the description, the --force flag should do the trick.
--force force resource updates through a replacement strategy
However, there are some issues with it as mentioned in this GitHub issue.
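For instance, assuming a release named my-release deployed from ./mychart (both names are assumptions), the forced upgrade would look like:

helm upgrade my-release ./mychart --force

Be aware that a replacement strategy may recreate resources, which can cause brief downtime; that is part of what the linked issue discusses.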

Helm Chart: How do I install dependencies first?

I've been developing a prototype chart that depends on some custom resource definitions that are defined in one of the child charts.
To be more specific, I'm trying to create the resources defined in the strimzi-kafka-operator within my helm chart and would like the dependency to be explicitly installed first. I followed the helm documentation and added the following to my Chart.yaml
dependencies:
  - name: strimzi-kafka-operator
    version: 0.16.2
    repository: https://strimzi.io/charts/
I ran:
$ helm dep up ./prototype-chart
$ helm install ./prototype-chart
> Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "KafkaTopic" in version "kafka.strimzi.io/v1beta1"
which shows that it's trying to deploy my chart before my dependency. What is the correct way to install dependencies first and then my parent chart?
(For reference, here is the question I opened on GitHub directly with Strimzi where they informed me they aren't sure how to use their helm as a dependency:
https://github.com/strimzi/strimzi-kafka-operator/issues/2552
)
Regarding CRDs: the fact that Helm by default won't manage those is a feature, not a bug. It will still install them if not present, but it won't modify or delete existing CRDs. The previous version of Helm (v2) does, but (speaking from experience) that can get you into all sorts of trouble if you're not careful. Quoting from the link you referenced:
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. [...] One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. [...] With the new crds method of CRD installation, we now ensure that Helm has completely valid information about the current state of the cluster.
The idea here is that Helm should operate only at the level of release data (adding/removing deployments, storage, etc.); but with CRDs, you're actually modifying an extension to the Kubernetes API itself, potentially inadvertently breaking other releases that use the same definitions. Consider a team that has a "library" of CRDs shared between several charts: with v2, Helm would happily let you modify or even delete those at will, with no checks on if or how they were used in other releases. Changes to CRDs are changes to your control plane / core API, and should be treated as such; you're modifying global resources.
In short: with v3, Helm positions itself more as a "developer" tool to define, template, and manage releases; CRDs, however, are meant to be managed independently, e.g. by a "cluster administrator". At the end of the day, it's a win for all sides, since developers can set up and tear down deployments at will, confident that it won't break functionality elsewhere... and whoever's on call won't have to deal with alerts if/when you accidentally delete or modify a CRD and break things in production :)
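One practical consequence for the question above: a common workaround is to install the dependency (and thus its CRDs) as a separate release first, then install the parent chart, rather than bundling it as a subchart. A sketch in Helm v3 syntax, with release names assumed:

helm repo add strimzi https://strimzi.io/charts/
helm install strimzi-operator strimzi/strimzi-kafka-operator --version 0.16.2
helm install my-prototype ./prototype-chart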
See also the extensive discussion here for more context behind this decision.
Hope this helps!

Does Helm (or Kubernetes) cache charts?

I have a chart for Helm that works fine.
I updated a couple of lines of the "template" files to set it up differently and ran helm install -n <release name> <chart dir>.
But I found that the change never gets applied.
When I try helm install --dry-run --debug, I don't see my updates.
(It might be getting the chart from remote ...)
Does Helm cache stuff? I wasn't able to find anything about it...
I am trying to set up HDFS on my cluster using this link.
I found that I had to rebuild the dependencies after making changes.
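A sketch of that rebuild step (chart path assumed): helm dependency update refreshes the packaged subcharts under charts/, so a stale copy is not reused on the next install.

helm dependency update ./mychart
helm install -n myrelease ./mychart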
It is possible to make changes to a chart that make no difference to the application when it runs, or that are not even included in the Kubernetes resources that are generated (e.g. a change within an if block whose condition evaluates to false). You can use --dry-run --debug to see what the templates evaluate to and check whether your change is present in the Kubernetes resources that would result from installing the chart. This gives you a quick way to check a chart change without installing it.
If you were publishing the chart, you could see a delay between publishing it and getting it from the hosted repo, and might need to run helm repo update; but you seem to be using the chart source directly, so I would not expect any delay.
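For example, to check whether a specific change made it into the rendered output (the setting name here is hypothetical):

helm install --dry-run --debug ./mychart | grep -n "my-new-setting"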

Helm vs Replace Tokens in VSTS

I have been asked to set up CI/CD for a new app using VSTS and Kubernetes.
It was suggested to me that we could use Helm (but it was made clear it was not mandatory).
The value I am seeing for this tool in our project is to define different values for different environments, e.g. a database connection string.
But for that we can also use the Replace Tokens VSTS task which is a lot simpler.
A definition explains that Helm is a chart manager; it sort of ties together all the resources of a system to deploy to Kubernetes.
Our system is just 1 web API (could grow later) so I feel deploying using Helm would be over-engineering the deployment process. Plus, we need this for yesterday.
Question
According to the current context, should I go with Replace Tokens VSTS task or Helm?
It just depends on your requirements: for example, which is easier to deploy, which is easier to manage, which you are more familiar with, or which better accommodates requirement changes.
You can also create a custom build task to achieve this.
I would go for Helm because it gives you more flexibility and it's more cross-platform; moreover, when adding more APIs/components or microservices it will be easier to control configuration (a single or multiple values.yaml files, using git submodules for Helm charts, and so on).
Surely it requires a slightly bigger time investment than simple value substitution in your CI/CD tool, but it has a potential payback that far outweighs the effort (again, based on my experience and the limited information about your environment).
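As a sketch of that flexibility (the chart path, release name, and values files are assumptions), the same chart can be deployed to each environment just by swapping values files:

helm upgrade --install my-api ./charts/my-api -f values-staging.yaml
helm upgrade --install my-api ./charts/my-api -f values-production.yaml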
I'm curious, what did you end up using?

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just the helm client in most cases.
This is what I currently do
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
.git/
k8s/
postgres/
Deployment.yaml
Service.yaml
Secret.mustache.yaml # Needs to be rendered by the dev before use
express/
Deployment.yaml
Service.yaml
nginx/
Deployment.yaml
Service.yaml
updates/
0.1__0.2/
Job.yaml # postgres schema migration
update.sh # k8s API server scritps to patch/replace existing k8s objects, and runs the state change job
The usual git stuff applies now. Every time I make a change, I update the spec files, test them, write the update scripts that move from the previous version to the current one, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production ready - at least from my team's quick glance at it.
We'll stick with keeping yaml files in git together with the deployed application for now I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which references container images built for our component releases).
In other words, the Helm package definitions and their dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Notice: Tiller has been removed in Helm v3. Check out this answer for details on why Tiller was needed in Helm v2 and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
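Since Helm v3 does all templating client-side, you can render the manifests locally without any server component (release and chart names here are assumed):

helm template my-release ./mychart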
According to the ideas of GitOps, what you did is the right way (performing releases from a git repo). However, if you want to push it further and make it more standard, you can aim for a few more goals:
Choose a configuration management system beyond plain declarative k8s app definitions, e.g. Helm (as in the answer above, https://stackoverflow.com/a/42053983/914967) or Kustomize; both are purely client-side.
Avoid a custom release process by replacing update.sh with popular tools like kubectl apply or helm install (see the sketch after this list).
Drive change delivery from git tags/branches using a CI/CD engine such as Argo CD, Travis CI, or GitHub Actions.
Use a branching strategy so that you can try changes in test/staging/production environments before delivering them directly.
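A minimal sketch of the second point, replacing a hand-rolled update.sh with declarative applies against the directory layout from the question:

kubectl apply -f k8s/postgres/ -f k8s/express/ -f k8s/nginx/
kubectl apply -f updates/0.1__0.2/Job.yaml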