I have a parent chart which contains 4 subcharts. Of these, I want to deploy one specific subchart to a different namespace, and all the template files in that subchart refer to {{ .Release.Namespace }}. Is there any way to modify the .Release.Namespace of a subchart from the parent chart?
I don't believe this is possible using vanilla Helm and charts you don't control.
When a chart depends on a subchart, fairly little can be customized. The parent chart can provide a default set of values for the subchart, but nothing computed, and the person running helm install can override those values.
If, and only if, the subchart is specifically written to deploy into an alternate namespace
# Every object in the subchart must have this configuration
metadata:
  namespace: {{ .Values.namespace | default .Release.Namespace }}
then you could supply that value to the subchart; but this isn't a default configuration.
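For example, the parent chart could then pass the namespace down through its own values.yaml. A minimal sketch, assuming the dependency is named subchart and supports the configuration above:

# Parent chart values.yaml; the top-level key matches the dependency's name
subchart:
  namespace: other-namespace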
My general experience has been that Helm "umbrella charts" are inflexible in a couple of important ways. There are higher-level tools like Helmfile and Helmsman that provide a single-command installation of multiple Helm charts with a full set of options (Helmsman is simpler, Helmfile allows Helm-style templating almost everywhere which is both more powerful and more complex). If you need to install four charts, three into one namespace and one into another, these tools might work better.
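As an illustration, a minimal helmfile.yaml for the four-chart case might look like this (the release names, chart paths, and namespaces are all assumptions):

releases:
  - name: app-a
    chart: ./charts/app-a
    namespace: main
  - name: app-b
    chart: ./charts/app-b
    namespace: main
  - name: app-c
    chart: ./charts/app-c
    namespace: main
  - name: app-d
    chart: ./charts/app-d
    namespace: other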
Imagine one product chart contains two subcharts, app and config, where app depends on config; for example, a deployment depends on a ConfigMap mounted as a volume. The real scenario could be multiple app charts sharing the same config, but there is only one here for simplicity.
Goal
1. The product chart design needs to support the coexistence of multiple releases in the same namespace, isolated from each other.
2. The app chart's CI needs to succeed on its own; here success only means the app chart can be successfully deployed to k8s, with no functional tests needed.
Problem
To accomplish goal 1, I think the natural way is to template the config name with a {{ .Release.Name }} prefix.
But in CI, the dependency {{ .Release.Name }}-config is missing, so CI would apparently fail. That violates goal 2.
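For illustration, the prefixing pattern described above might look like this (the resource and key names are assumptions):

# config chart: the ConfigMap name carries the release prefix
metadata:
  name: {{ .Release.Name }}-config
# app chart: the deployment mounts the ConfigMap by the same computed name
volumes:
  - name: config
    configMap:
      name: {{ .Release.Name }}-config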
Possible solutions
1. Deploy config before (or during) app CI, but this adds extra effort and resource usage to CI.
2. Add if/else logic in the app chart to determine whether it is currently running in CI and, if so, skip rendering those config parts; but introducing chart logic only for CI seems foolish.
So is this a common problem in k8s Helm chart development, and what's the best practice? Is there any existing k8s mechanism to easily mock an object during a CI deployment?
I have 2 Helm child charts and 1 umbrella chart which depends on both children.
Unfortunately, both child charts contain duplicated k8s objects (which therefore also share the same names).
So on doing helm install release1 umbrella I get Error: configmaps "x-1" already exists.
Hint: We are the authors of the "child" charts (so we can change them), but we cannot avoid the name collisions.
Is it possible to do helm install even though some k8s objects are duplicated? Could we adapt the "child" charts to make it possible (something like "consider this object only if not already defined")?
I know it is possible with plain kubectl, since it will merge the duplicates without errors.
A typical Helm convention is to name objects after both the release name and the chart name. In a dependency-chart situation, each of the charts will have its own chart name, but all of the charts will share the same release name.
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-x-1
If you create a chart with helm create, it will include a "chartname.fullname" helper in _helpers.tpl that helps to construct this name in a standard way.
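For instance, a name built with that helper might look like this (a sketch, assuming the chart is named child-a):

metadata:
  name: {{ include "child-a.fullname" . }}-x-1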
This approach will get you two separate, non-conflicting ConfigMaps, but they might have duplicated content.
If you want to have only one copy of the ConfigMap, you can move it into its own chart, and have the two "child" charts depend on that. If the umbrella chart depends on A and B, and A and B both depend on some common dependency C, Helm will only install that dependency once. The service charts A and B will still know the name of the ConfigMap, since they know the release name (the same as the current .Release.Name) and the chart name (the name of the common dependency chart).
configMapRef:
  name: {{ .Release.Name }}-common-x-1
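Each "child" chart could declare that common dependency in its Chart.yaml; a sketch, where the name, version, and repository are assumptions:

dependencies:
  - name: common
    version: 0.1.0
    repository: file://../common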
I have multiple charts that all use the same templates. Is it possible to instruct Helm to use some other templates directory, or to have some shared templates that I can import/reference in some way? I would like to avoid the copy-paste and have reusable templates, but at the same time keep one chart per project/service, because in the future there will be some discrepancies.
How do you achieve DRY and re-usability in helm?
To me, that sounds like you want to use so-called "library charts" (see the Helm docs).
To create one, you define a Helm chart that does not actually create any resources but only defines reusable templates, and set the type property in the Chart.yaml to library:
apiVersion: v2
name: library-chart
description: A Helm chart for Kubernetes
type: library
version: 0.0.1
Then, you can include that helm chart as a dependency in your other charts and start using the templates defined there.
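A minimal sketch of what that can look like; the named template and label keys here are assumptions:

# library chart, templates/_labels.tpl: define a reusable named template
{{- define "library-chart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

# consuming chart, after declaring library-chart as a dependency
metadata:
  labels:
    {{- include "library-chart.labels" . | nindent 4 }}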
Currently, I have 2 Helm charts: Chart A and Chart B. Both Chart A and Chart B have the same dependency on a Redis instance, as defined in the Chart.yaml file:
dependencies:
  - name: redis
    version: 1.1.21
    repository: https://kubernetes-charts.storage.googleapis.com/
I have also overridden Redis's name, since applying the 2 charts consecutively results in 2 Redis instances, as such:
redis:
  fullnameOverride: "redis"
When I try to install Chart A and then Chart B I get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PersistentVolumeClaim, namespace: default, name: redis
I was left with the impression that 2 charts with identical dependencies would use the same instance if it's already present?
When you install a chart using Helm, it generally expects each release to have its own self-contained set of Kubernetes objects. In the basic example you show, I'd expect to see Kubernetes Service objects named something like
release-a-application-a
release-a-redis
release-b-application-b
release-b-redis
There is a general convention that objects are named starting with {{ .Release.Name }}, so the two Redises are separate.
This is actually an expected setup. A typical rule of building microservices is that each service contains its own isolated storage, and that services never share storage with each other. This Helm pattern supports that, and there's not really a disadvantage to having this setup.
If you really want the two charts to share a single Redis installation, you can write an "umbrella" chart that doesn't do anything on its own but depends on the two application charts. The chart would have a Chart.yaml file and (in Helm 2) a requirements.yaml file that references the two other charts, but not a templates directory of its own. That would cause Helm to conclude that a single Redis could support both applications, and you'd wind up with something like
umbrella-application-a
umbrella-application-b
umbrella-redis
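For reference, a sketch of such an umbrella chart's Chart.yaml (Helm 3 style; the chart names, versions, and repositories are assumptions):

apiVersion: v2
name: umbrella
version: 0.1.0
dependencies:
  - name: application-a
    version: 0.1.0
    repository: file://../application-a
  - name: application-b
    version: 0.1.0
    repository: file://../application-b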
(In my experience you usually don't want this – you do want a separate Redis per application – and so trying to manage multiple installations using an umbrella chart doesn't work especially well.)
Unfortunately, Helm can't handle multiple resources with the same name; in other words, there isn't any shared-resource capability.
You can follow this issue.
I think you can use Kustomize templates to share resources. There is a really good Kustomize-vs-Helm article.
I'm currently writing a Helm Chart for my multi-service application. In the application, I depend on CustomResources, which I apply before everything else with helm via the "helm.sh/hook": crd-install hook.
Now I want to upgrade the application. Helm fails because the CRDs are already installed.
In some GH issues, I read about the builtin .Capabilities variable in Helm templates. I want to wrap my CRDs with an "if" checking if the CRD is already installed:
{{- if (not (.Capabilities.APIVersions.Has "virtualmachineinstancepresets.kubevirt.io")) }}
Unfortunately, I misunderstood the APIVersions property.
So my question is, does Helm provide a way of checking whether a CustomAPI is already installed, so I can exclude it from my Helm pre-hook install?
The simple answer for Helm v2 is to manually pass the --no-crd-hook flag when running helm install.
Using the builtin .Capabilities variable can serve as a workaround, e.g.:
{{- if not (.Capabilities.APIVersions.Has "virtualmachineinstancepresets.kubevirt.io/v1beta1/MyResource") }}
apiVersion: ...
{{- end}}
However, it also means you will never be able to manage the installed CRDs with Helm again.
Check out the longer answer in the blog post Helm V2 CRD Management, which explains different approaches. However, I'll quote this:
CRD management in helm is, to be nice about it, utterly horrible.
Personally, I suggest managing CRDs via a separate chart, outside of the app/library charts that depend on them, since CRDs have a totally different lifecycle.
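A sketch of that split, with hypothetical chart names:

# my-crds/   a chart containing only the CRD manifests, installed first
#   Chart.yaml
#   templates/
#     virtualmachineinstancepresets-crd.yaml
# my-app/    the application chart, installed as a separate release
#   Chart.yaml
#   templates/
#     deployment.yaml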