Helm Chart: How do I install dependencies first? - kubernetes-helm

I've been developing a prototype chart that depends on some custom resource definitions that are defined in one of the child charts.
To be more specific, I'm trying to create the resources defined in the strimzi-kafka-operator within my helm chart and would like the dependency to be explicitly installed first. I followed the helm documentation and added the following to my Chart.yaml
dependencies:
  - name: strimzi-kafka-operator
    version: 0.16.2
    repository: https://strimzi.io/charts/
I ran:
$ helm dep up ./prototype-chart
$ helm install ./prototype-chart
> Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "KafkaTopic" in version "kafka.strimzi.io/v1beta1"
which shows that it's trying to deploy my chart before my dependency. What is the correct way to install dependencies first and then my parent chart?
(For reference, here is the question I opened on GitHub directly with Strimzi, where they informed me they aren't sure how to use their Helm chart as a dependency:
https://github.com/strimzi/strimzi-kafka-operator/issues/2552
)

Regarding CRDs: the fact that Helm by default won't manage those is a feature, not a bug. It will still install them if not present, but it won't modify or delete existing CRDs. The previous version of Helm (v2) does, but (speaking from experience) that can get you into all sorts of trouble if you're not careful. Quoting from the link you referenced:
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger for unintentional data loss. [...] One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. [...] With the new crds method of CRD installation, we now ensure that Helm has completely valid information about the current state of the cluster.
The idea here is that Helm should operate only at the level of release data (adding/removing deployments, storage, etc.); but with CRDs, you're actually modifying an extension to the Kubernetes API itself, potentially inadvertently breaking other releases that use the same definitions. Consider a team that has a "library" of CRDs shared between several charts, and you want to uninstall one: with v2, Helm would happily let you modify or even delete those at will, with no checks on if/how they were used in other releases. Changes to CRDs are changes to your control plane / core API and should be treated as such: you're modifying global resources.
In short: with v3, Helm positions itself more as a "developer" tool to define, template, and manage releases; CRDs, however, are meant to be managed independently, e.g. by a cluster administrator. At the end of the day, it's a win for all sides: developers can set up and tear down deployments at will, confident that it won't break functionality elsewhere... and whoever's on call won't have to deal with alerts if/when you accidentally delete or modify a CRD and break things in production :)
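For reference, in Helm 3 the "install if not present" behaviour applies to plain CRD manifests placed in a chart's top-level crds/ directory. A rough sketch of such a layout (the file names here are illustrative only, not taken from the Strimzi chart):
prototype-chart/
  Chart.yaml
  crds/
    kafkatopics.yaml     # untemplated CRD manifests; installed if missing, never upgraded or deleted by Helm
  templates/
    kafka-topic.yaml     # regular templated resources, rendered after the CRDs exist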
See also the extensive discussion here for more context behind this decision.
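As a practical workaround for the original question, one hedged option is simply to install the operator chart as its own release first, so its CRDs exist before the parent chart's manifests are validated (release names below are placeholders, Helm 3 syntax):
$ helm repo add strimzi https://strimzi.io/charts/
$ helm install strimzi-operator strimzi/strimzi-kafka-operator --version 0.16.2
$ helm install prototype ./prototype-chart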
Hope this helps!

Related

How to helm upgrade with v3 and remove / overwrite any manual changes that have been applied to templates

I have a problem where we essentially discovered a piece of stale configuration in a live environment on one of our deployments (a config map was added as a volume mount). Reading through the docs here (search for 'Upgrades where live state has changed'), we can see that helm v2 would purge changes that were introduced to a template by external actors, whereas v3 is cleverer and will merge externally introduced changes alongside template changes as long as they don't conflict.
So how, in helm v3, do we run an upgrade that purges any manual template changes that may have been introduced?
Based on the description, the --force flag should do the trick.
--force force resource updates through a replacement strategy
However, there are some issues with it as mentioned in this GitHub issue.
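A minimal sketch of such an upgrade (release and chart names are placeholders):
$ helm upgrade my-release ./my-chart --force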

How to manage external Helm Chart dependencies

I'm very curious how I should combine my own Helm chart with a set of contributed charts, e.g. the Bitnami ones.
So say I have my own chart that defines my backend application: how do I add a dependency on the contributed MySQL chart (instead of crafting it manually myself)?
Another use case would be to add ArgoCD to my own custom chart. How do I do this in a declarative manner, without running the needed commands on the cluster manually?
I'm aware of helm repo add and helm install for 3rd-party charts, but these commands don't lend themselves well to CI/CD because they are not idempotent. I would like my chart to be declarative, so that a single helm install also installs all listed dependencies. The weird thing is that I cannot find anything about this topic online.
Would love your feedback!
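For context, the declarative mechanism being asked about here is the dependencies block in Chart.yaml shown in the first question above. A hedged sketch for a Bitnami MySQL dependency, with a placeholder version range and the repository URL as published by Bitnami at the time of writing:
dependencies:
  - name: mysql
    version: 9.x.x
    repository: https://charts.bitnami.com/bitnami
Running helm dep up then vendors the subchart into charts/, so a single helm install brings it in as well.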

Is there a declarative way to install helm charts in a kubernetes cluster

I am just wondering if anyone has figured out a declarative way to have helm charts installed/configured as part of cluster initiation, one that could be checked into source control. Using Kubernetes I have very much gotten used to the "everything as code" type of workflow, and I realize that installing and configuring helm is based mostly on imperative workflows via the CLI.
The reason I am asking is that currently we have our cluster in development and will be recreating it in production. Most of our configuration has been done declaratively via the deployment.yaml file. However, we have spent a significant amount of time installing and configuring certain helm charts (e.g. Prometheus, Grafana, etc.).
There are tools like helmfile or helmsman which allow you to declare the Helm releases to be installed as code.
Here is an example from a helmfile.yaml doing so:
releases:
  # Published chart example
  - name: promnorbacxubuntu    # name of this release
    namespace: prometheus      # target namespace
    chart: stable/prometheus   # the chart being installed to create this release, referenced by `repository/chart` syntax
    set:                       # values (--set)
      - name: rbac.create
        value: false
Running helmfile charts will then ensure that all listed releases are installed.
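For completeness, a minimal invocation looks something like this (assuming a helmfile.yaml in the working directory; newer helmfile releases favour sync/apply over the older charts subcommand):
$ helmfile sync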
My team had a similar kind of problem and we solved it with Operators. The best part about Operators is that there are three kinds, and one of them is Helm-based.
So you could use a Helm-based Operator, create an associated CRD, and then declare your configurations there. Those configurations are then ported directly to the Helm chart without you, as the user, having to do anything.
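As a purely hypothetical illustration (the API group, kind, and fields below are placeholders, not from any real operator), the custom resource such a Helm-based operator consumes typically mirrors the chart's values:
apiVersion: charts.example.com/v1alpha1
kind: Prometheus
metadata:
  name: monitoring
spec:
  rbac:
    create: false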

Draft and Helm vs Ksonnet? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
As I understand it, these tools (Draft, Helm, and Ksonnet) all have overlapping functionality, such as creating a chart as well as deploying Kubernetes configurations.
I understand that the purpose of these tools is to describe and configure the application as well as the k8s environments.
Using Draft we can create a Dockerfile and a chart. Nevertheless, we can do the same thing with Helm and Ksonnet.
My question is: if these components form a CI/CD pipeline, what would the order be?
For example,
draft -> ksonnet -> helm
or
draft -> helm -> ksonnet
In short, draft and helm are more or less complementary and ksonnet is orthogonal, specifically providing an alternative to helm.
To elaborate, I will split my answer into three major sections: the first describes how draft and helm interact, the second describes how ksonnet is orthogonal to the others, and the third explains how I understand these with respect to CI/CD.
Helm & Draft
Helm and Draft are complementary in the sense that Helm, which can be considered a package management system for Kubernetes, provides a portion of the functionality offered by Draft, which is itself essentially a Kubernetes application development tool.
The relationship between Draft and Helm can be summarized by pointing out that, in pursuit of its goal of simplifying Kubernetes application development, Draft produces a Helm chart using metadata inferred from your current application type (more about that below) if one does not already exist, or uses an existing one, in order to deploy/update a development version of your application without you having to know anything about how it does that.
Helm for Kubernetes Package Management
As mentioned previously, Helm is a package management system for Kubernetes-based applications. It provides the following features:
A templating approach for defining Kubernetes manifests (called "charts")
Package management, including a basic package repository service to host released packages.
Application lifecycle management, including deployment, update, and purging of Helm applications
Package dependencies
Helm takes a templated YAML approach to parameterizing Kubernetes manifests and allows values to be shared and overridden between dependent packages. I.e., suppose Package A depends on Package B; Package A can re-use configuration values set on Package B, and it can override those parameters with values of its own. Values for all packages in a given deployment can also be overridden using the Helm command line tool.
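A hedged sketch of that sharing and overriding, with placeholder chart and key names: if package-a declares a dependency on package-b, values nested under the subchart's name in package-a's values.yaml are passed down to package-b, and the command line can override them again.
# package-a/values.yaml
package-b:
  replicaCount: 3
$ helm install ./package-a --set package-b.replicaCount=5   # command-line value wins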
Also worth mentioning is the fact that Helm depends on the availability of its cluster-side component named "Tiller" to actually do the work of reifying templates and deploying the generated Kubernetes manifests to the cluster.
Draft for Kubernetes Application Development
The aim of Draft is to dramatically simplify development of Kubernetes applications by quickly building and deploying the Helm charts/packages and corresponding docker images necessary to run a project -- provided that the following exist:
A Kubernetes cluster
Helm's Tiller pod installed in the Kubernetes cluster
A Docker registry
The draft installation guide provides details for getting these pieces set up to try it out yourself.
Draft also builds on Helm by providing a high-level "packaging" format that includes both the application helm chart and the Dockerfile, the latter giving it the ability to build docker images.
Finally, it has built-in support for specific programming languages and will to a limited extent attempt to infer which programming language and framework(s) you are using when initially creating a new Draft project using draft create.
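In practice that typically comes down to a couple of commands (a sketch only; exact behaviour varies by Draft version):
$ draft create   # inspects the project, scaffolds a Dockerfile and a Helm chart
$ draft up       # builds the image and deploys the chart to the development cluster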
Ksonnet for Kubernetes Package Management
As mentioned previously, Ksonnet is orthogonal in many ways to Helm, providing essentially the same features with respect to package management wrapped in different terminology -- see its core concepts documentation. It's worth noting that it is not compatible with nor does it address the same concerns as Draft.
I say that Ksonnet and Helm are orthogonal because they take mutually incompatible approaches to generating and deploying Kubernetes manifests. Whereas Helm uses templated YAML, Ksonnet generates Kubernetes manifests using a "data templating" language called Jsonnet. Also, rather than conceiving of "dependent" packages as is the case with Helm, Ksonnet blurs the line between dependent services by representing them as composable "prototypes". Finally, rather than depending on a cluster-side application that reifies and deploys manifest templates, Ksonnet has an apply subcommand analogous to kubectl apply.
CI/CD
So where do these pieces fit into a CI/CD workflow? Well since there are essentially two mutually incompatible toolsets, let's consider them on a case-by-case basis:
Draft + Helm
According to the Draft design Q&A section, it is meant only as a developer tool intended to abstract much of the complexity of dealing with kubernetes, helm, and docker from developers primarily interested in seeing their application run in a development cluster.
With this in mind, any CD approach involving this set of tools would have to do the following (sketched as shell commands after the list):
Build docker image(s) using the docker CLI if necessary
Build Helm package(s) using the helm CLI
Deploy Helm package(s) to Helm repository using the helm CLI
Install/update Helm package(s) on the appropriate staging/prod Kubernetes cluster(s) using the helm CLI
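A rough sketch of those steps as shell commands (image, chart, repository, and release names are placeholders; how the packaged chart gets published depends on your chart repository):
$ docker build -t registry.example.com/myapp:1.2.3 .
$ docker push registry.example.com/myapp:1.2.3
$ helm package ./charts/myapp                      # produces myapp-1.2.3.tgz
# upload myapp-1.2.3.tgz to the chart repository, then on the target cluster:
$ helm upgrade --install myapp myrepo/myapp --version 1.2.3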
Ksonnet
The Ksonnet CD workflow is somewhat abbreviated compared to the Helm workflow above (see the sketch after the list):
Build docker image(s) using the docker CLI if necessary
Apply the Ksonnet manifest using the ks CLI
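Sketched as commands (again with placeholder names; "default" stands in for whatever ksonnet environment you target):
$ docker build -t registry.example.com/myapp:1.2.3 .
$ docker push registry.example.com/myapp:1.2.3
$ ks apply default   # applies the manifests for the "default" environment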
Whereas with Helm you would deploy your application's package to a Helm repository for re-use, if your Ksonnet manifest contains re-usable prototypes that might be of use to another Ksonnet-based application, you would want to ensure it is available in a git repo as described in the Ksonnet registry documentation.
This means that how Ksonnet definitions are dealt with in CI/CD is largely dependent on which git repo(s) you decide to store them in and how they are structured.

How do I version control a kubernetes application?

I've checked out helm.sh of course, but at first glance the entire setup seems a little complicated (helm-client & tiller-server). It seems to me like I can get away with just having a helm client in most cases.
This is what I currently do:
Let's say I have a project composed of 3 services viz. postgres, express, nginx.
I create a directory called product-release that is as follows:
product-release/
  .git/
  k8s/
    postgres/
      Deployment.yaml
      Service.yaml
      Secret.mustache.yaml # Needs to be rendered by the dev before use
    express/
      Deployment.yaml
      Service.yaml
    nginx/
      Deployment.yaml
      Service.yaml
  updates/
    0.1__0.2/
      Job.yaml   # postgres schema migration
      update.sh  # k8s API server scripts to patch/replace existing k8s objects and run the state change job
The usual git stuff applies now. Every time I make a change, I update the spec files, test them, write the update scripts to help move from the last version to the current one, and then commit and tag it.
Questions:
This works for me so far, but is this "the right way"?
Why does helm have the tiller server? Isn't it simpler to do the templating on the client-side? Of course, if you want to separate the activity of the deployment from the knowledge of the application (like secrets) the templating would have to happen on the server, but otherwise why?
It seems that https://redspread.com/ (open source) addresses this particular issue, but it needs more development before it'll be production-ready - at least from my team's quick glance at it.
We'll stick with keeping YAML files in git together with the deployed application for now, I guess.
We are using kubernetes/helm (the latest/incubated version) and a central repository for Helm charts (which references container images built for our component releases).
In other words, the Helm package definitions and its dependencies are separate from the source code and image definitions that make up the several components of our web applications.
Note: Tiller has been removed in Helm v3. Check out this answer for details on why Tiller was needed in Helm v2 and why it was removed in Helm v3: https://v3.helm.sh/docs/faq/#removal-of-tiller
In line with the idea of GitOps, what you did is a reasonable way to perform releases from a git repo. However, if you want to push it further and make it more conventional, you can plan additional goals, including:
Choose a configuration management system beyond plain declarative k8s app definitions, e.g. Helm (as in the answer above: https://stackoverflow.com/a/42053983/914967) or Kustomize. Both are purely client-side.
Avoid a custom release process by replacing update.sh with popular tools like kubectl apply or helm install (see the sketch after this list).
Drive change delivery from git tags/branches by using a CI/CD engine like Argo CD, Travis CI, or GitHub Actions.
Use a branching strategy so that you can try changes in test/staging/production environments before delivering them directly.
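For the second point above, a hedged sketch of what replacing update.sh might look like (paths, release, and chart names are placeholders):
$ kubectl apply -f k8s/postgres/ -f k8s/express/ -f k8s/nginx/
# or, if the manifests are packaged as a chart:
$ helm upgrade --install product ./product-chart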