2 Helm Charts with shared Redis dependency

Currently, I have 2 Helm Charts - Chart A, and Chart B. Both Chart A and Chart B have the same dependency on a Redis instance, as defined in the Chart.yaml file:
dependencies:
  - name: redis
    version: 1.1.21
    repository: https://kubernetes-charts.storage.googleapis.com/
I have also overridden Redis's name, since installing the two charts consecutively would otherwise result in two Redis instances:
redis:
  fullnameOverride: "redis"
When I try to install Chart A and then Chart B I get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: PersistentVolumeClaim, namespace: default, name: redis
I was under the impression that two charts with identical dependencies would reuse the same instance if one is already present?

When you install a chart using Helm, it generally expects each release to have its own self-contained set of Kubernetes objects. In the basic example you show, I'd expect to see Kubernetes Service objects named something like
release-a-application-a
release-a-redis
release-b-application-b
release-b-redis
There is a general convention that objects are named starting with {{ .Release.Name }}, so the two Redises are separate.
This is actually the expected setup. A typical rule of building microservices is that each service has its own isolated storage and that services never share storage with each other. This Helm pattern supports that, and there's not really a disadvantage to it.
If you really want the two charts to share a single Redis installation, you can write an "umbrella" chart that doesn't do anything on its own but depends on the two application charts. The chart would have a Chart.yaml file and (in Helm 2) a requirements.yaml file that references the two other charts, but not a templates directory of its own. That would cause Helm to conclude that a single Redis could support both applications, and you'd wind up with something like
umbrella-application-a
umbrella-application-b
umbrella-redis
(In my experience you usually don't want this – you do want a separate Redis per application – and so trying to manage multiple installations using an umbrella chart doesn't work especially well.)
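If you do go the umbrella route, a minimal sketch of its Chart.yaml might look like the following; the chart names application-a and application-b and the file:// repositories are placeholders for however you publish your own charts:
apiVersion: v2
name: umbrella
version: 0.1.0
dependencies:
  # both application charts (each of which depends on redis) become subcharts of this one release
  - name: application-a
    version: 0.1.0
    repository: file://../application-a   # or an internal chart repository URL
  - name: application-b
    version: 0.1.0
    repository: file://../application-b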

Unfortunately, Helm can't handle multiple resources with the same name; in other words, there isn't any shared-resource capability.
You can follow this issue.
I think you can use Kustomize to share resources. There is a really good Kustomize vs. Helm article.
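As a rough sketch of that Kustomize approach (the directory layout here is purely illustrative), a single kustomization.yaml can compose both applications with one shared Redis base:
# kustomization.yaml (illustrative layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base/redis   # the single shared Redis manifests
  - base/app-a   # manifests for application A
  - base/app-b   # manifests for application B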

Related

Apply GitHub-hosted Kubernetes file with Helm

I am trying to set up a helmfile deployment for my local kubernetes cluster which is running using 'kind' (a lightweight alternative to minikube). I have charts set up for my app which are all deploying correctly; however, I require an nginx-ingress controller. Luckily 'kind' provides one, which I am currently applying with the command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
It seems perverse that I should have everything else set up to deploy at the touch of a button, but still have to 'remember' (and also train my colleagues to remember...) to run this additional command.
I realise I could copy and paste and create my own version, but I would like to keep up to date with any changes made at source. Is it possible to create a chart that makes a reference to an external template?
I am looking at solutions using either helm or helmfile.
Your linked YAML file seems to have been generated from the ingress-nginx chart.
Subchart
You can include ingress-nginx as a subchart by adding it as a dependency to your own chart. In Helm 3, this is done with the dependencies field in Chart.yaml, e.g.:
apiVersion: v2
name: my-chart
version: 0.1.0
dependencies:
  - name: ingress-nginx
    version: ~4.0.6
    repository: https://kubernetes.github.io/ingress-nginx
    condition: ingress-nginx.enabled
This may be problematic, however, if you need to install multiple versions of your own chart in the same cluster. To handle this, you'd need to consider the implications of multiple Ingress controllers.
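With the condition key above, each release can then decide whether to install the bundled controller from its values.yaml (or with --set); this is only a sketch of the toggle, check the chart's values for anything beyond it:
ingress-nginx:
  enabled: true   # matches the condition in Chart.yaml; set to false for releases that should reuse an existing controller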
Chart
Ingress controllers are capable of handling ingresses from various releases across multiple namespaces. Therefore, I would recommend maintaining ingress-nginx separately from your own releases that depend on it. This would mean installing ingress-nginx like you already are or as a separate chart (guide).
If you go this route, there are tools that help make it easier for devs to take a hands-off approach for setting up their K8s environments. Some popular ones include Skaffold, DevSpace, Tilt, and Helmfile.
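Since you are already using helmfile, one option is a hedged sketch along these lines in helmfile.yaml, pointing at the same chart the linked manifest was generated from (the version pin below is only an example):
repositories:
  - name: ingress-nginx
    url: https://kubernetes.github.io/ingress-nginx

releases:
  - name: ingress-nginx          # the controller gets its own release, separate from your apps
    namespace: ingress-nginx
    chart: ingress-nginx/ingress-nginx
    version: 4.0.6               # example pin; bump deliberately to track upstream changes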

Deploying multiple Helm charts despite duplicated objects

I have 2 Helm child charts and 1 umbrella chart which depends on both children.
Unfortunately, both child charts contain duplicated k8s objects (so they also share the same names).
So on doing helm install release1 umbrella I get Error: configmaps "x-1" already exists.
Hint: We are the authors of the "child" charts (so we can change them), but we cannot avoid the name collisions.
Is it possible to do helm install even though some k8s objects are duplicated? Could we adapt the "child" charts to make it possible (something like "consider this object only if not already defined")?
I know this is possible with plain kubectl, since it will merge the duplicates without errors.
A typical Helm convention is to name objects after both the release name and the chart name. In a dependency-chart situation, each of the charts will have its own chart name, but all of the charts will share the same release name.
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}-x-1
If you create a chart with helm create, it will include a "chartname.fullname" helper in _helpers.tpl that helps to construct this name in a standard way.
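A trimmed-down sketch of that helper (the real one generated by helm create also handles nameOverride and a few more edge cases) looks roughly like this:
{{/* _helpers.tpl: simplified version of the generated fullname helper */}}
{{- define "chartname.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}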
This approach will get you two separate, non-conflicting ConfigMaps, but they might have duplicated content.
If you want to have only one copy of the ConfigMap, you can move it into its own chart, and have the two "child" charts depend on that. If the umbrella chart depends on A and B, and A and B both depend on some common dependency C, Helm will only install that dependency once. The service charts A and B will still know the name of the ConfigMap, since they know the release name (the same as the current .Release.Name) and the chart name (the name of the common dependency chart).
configMapRef:
  name: {{ .Release.Name }}-common-x-1
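Concretely, both child charts would declare the shared chart (called common here as a placeholder) as a dependency in their own Chart.yaml, while the umbrella chart keeps depending only on A and B:
# Chart.yaml of chart A (chart B is analogous); "common" is a placeholder name
dependencies:
  - name: common
    version: 0.1.0
    repository: file://../common   # or an internal chart repository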

Is it possible to define templates outside of a Helm chart?

I have multiple charts that all use the same templates. Is it possible to instruct helm to use some other templates directory, or have some shared templates that I can import/reference in some way? I would like to avoid the copy paste and have reusable templates, but at the same time keep the project/service per chart because in the future there will be some discrepancies.
How do you achieve DRY and re-usability in helm?
To me, that sounds like you want to use so-called "Library Charts" (link to Helm docs).
To create one, you define a Helm chart that does not actually create any resources but only defines reusable templates, and set the type property in Chart.yaml to library:
apiVersion: v2
name: library-chart
description: A Helm chart for Kubernetes
type: library
version: 0.0.1
Then, you can include that helm chart as a dependency in your other charts and start using the templates defined there.
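As a small sketch (all names are placeholders), the library chart defines a named template, and a consuming chart that lists the library under its dependencies renders it with include:
{{/* In the library chart, templates/_configmap.tpl: a reusable named template */}}
{{- define "library-chart.configmap" -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-shared-config
data:
  environment: {{ .Values.environment | default "dev" | quote }}
{{- end -}}

{{/* In the consuming chart, templates/configmap.yaml: render the library template */}}
{{- include "library-chart.configmap" . -}}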

Integration of Kubernetes Helm templates for a project deployment

Currently I am working on a project based on a microservice architecture. The project consists of 20 Spring Boot microservice projects, and in every project's root folder I have placed a Dockerfile for image building. I am deploying to a Kubernetes cluster through Helm charts.
My confusion is that when I created a Helm chart, it generated service.yaml and deployment.yaml inside the templates directory.
If I am deploying these 20 microservices, do I need to create 20 separate Helm charts, or can I define all 20 services within 1 chart?
I am new to Kubernetes and Helm, so I am confused about the standard way of organizing YAML files in a chart. Do I need to create 20 separate charts, or can I include everything in 1 chart?
What is the standard way of chart creation for my microservice projects?
What I ended up doing (working with a similar stack) is create one microservice chart, which is stored in an internal chart repository. Inside the Helm chart, I provided enough configuration options so teams have the flexibility to control their own deployments, but I made sure to set sensible defaults (e.g. make sure the Deployment uses a RollingUpdate strategy and readiness probes are configured with sensible defaults).
These configuration options are set in a values.yaml file. Teams deploy their microservice via a CI/CD pipeline, passing the values.yaml file to the helm command (with the -f flag).
I would certainly recommend you read the Helm Template Developer guide before making the decision. It really depends on how similar your microservices are, but I recommend going for 1 Helm chart if you have a homogeneous environment (which was also the case for me).
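As an illustration of that pattern (every key below is hypothetical and depends on what the shared chart actually exposes), a team's values.yaml might look like:
# values.yaml for one microservice, passed with: helm upgrade --install orders ./microservice-chart -f values.yaml
image:
  repository: registry.example.com/orders-service   # hypothetical image
  tag: "1.4.2"
replicaCount: 3
readinessProbe:
  httpGet:
    path: /actuator/health/readiness   # typical Spring Boot Actuator readiness endpoint
    port: 8080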

Is there a declarative way to install helm charts in a Kubernetes cluster

I am just wondering if anyone has figured out a declarative way to have Helm charts installed/configured as part of cluster initiation that could be checked into source control. Using Kubernetes I have very much gotten used to the "everything as code" type of workflow, and I realize that installing and configuring Helm is based mostly on imperative workflows via the CLI.
The reason I am asking is that currently we have our cluster in development and will be recreating it in production. Most of our configuration has been done declaratively via the deployment.yaml file. However, we have spent a significant amount of time installing and configuring certain Helm charts (e.g. Prometheus, Grafana etc.)
There are tools like helmfile or helmsman which allow you to declare the Helm releases to be installed as code.
Here is an example from a helmfile.yaml doing so:
releases:
  # Published chart example
  - name: promnorbacxubuntu    # name of this release
    namespace: prometheus      # target namespace
    chart: stable/prometheus   # the chart being installed to create this release, referenced by `repository/chart` syntax
    set:                       # values (--set)
      - name: rbac.create
        value: false
Running helmfile sync (formerly helmfile charts) will then ensure that all listed releases are installed.
My team had a similar kind of problem and we solved it with Operators. The best part about Operators is that there are 3 kinds, and one of them is Helm-based.
So you could use a Helm-based Operator, create an associated CRD, and then declare your configuration there. That configuration is then passed directly to the Helm chart without you, as the user, having to do anything.
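To make that concrete, with a Helm-based operator (for example one scaffolded with the Operator SDK) you declare a custom resource whose spec is handed to the wrapped chart as its values; the API group, kind, and fields below are purely illustrative:
apiVersion: charts.example.com/v1alpha1   # illustrative API group/version
kind: PrometheusStack                     # illustrative kind chosen when scaffolding the operator
metadata:
  name: monitoring
  namespace: prometheus
spec:
  # everything under spec is passed to the wrapped chart as its values
  rbac:
    create: false
  server:
    retention: 15d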