Helm chart & docker image version management - kubernetes-helm

We have around 50 microservices. Each microservice has its own Git repository, and each repository contains a chart folder for its Helm chart.
We also have an umbrella chart/parent chart for those 50 sub charts.
To differentiate dev, QA, and production Helm packages, we use different names for the umbrella chart and different versioning schemes. For example, all our development charts have versions like 0.0.0.xxxxx, while production charts have versions like 1.0.0.xxxxx.
The purpose of the different versioning strategy is so that we can pull down sub charts from different branches when building the parent chart.
When we run the pipeline from the development branch, it creates a Helm chart with version prefix 0.0.0; from the master branch, it creates a chart with version prefix 1.0.0. To keep things simple, we are not using appVersion, only the chart version, and every time we build a new Docker image we bump the chart version.
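A branch-to-prefix mapping like this is usually computed in the pipeline itself. As a minimal sketch (the variable names BRANCH and BUILD_NUMBER are assumptions, not taken from the actual pipeline):

```shell
#!/bin/sh
# Sketch of the branch-based version-prefix scheme described above.
# BRANCH and BUILD_NUMBER stand in for whatever the CI system provides.
BRANCH="${BRANCH:-development}"
BUILD_NUMBER="${BUILD_NUMBER:-42}"

if [ "$BRANCH" = "master" ]; then
  PREFIX="1.0.0"      # production builds
else
  PREFIX="0.0.0"      # development builds
fi

CHART_VERSION="${PREFIX}.${BUILD_NUMBER}"
echo "chart version: $CHART_VERSION"
# The packaging step would then be something like:
# helm package ./chart --version "$CHART_VERSION"
```

Note that four-segment versions like 0.0.0.42 are not strict SemVer 2; a SemVer-compliant variant of the same idea would use a suffix such as 0.0.0-42.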
For example, we have the following requirements.yaml in our development parent chart:
dependencies:
  - name: subchart1
    repository: 'chartmuseum'
    version: ~0.0.0
  - name: subchart2
    repository: 'chartmuseum'
    version: ~0.0.0
With this, when we run the pipeline of the development parent chart, it pulls down the dependencies that were built from the development branch.
This works well.
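The production parent chart presumably carries the mirror-image pins (a sketch based on the versioning scheme described above, not the actual file):

```yaml
# requirements.yaml of the production parent chart (sketch)
dependencies:
  - name: subchart1
    repository: 'chartmuseum'
    version: ~1.0.0
  - name: subchart2
    repository: 'chartmuseum'
    version: ~1.0.0
```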
The first problem we have now is that when multiple developers work on different microservices, they pick up each other's changes when building the parent chart.
The second problem is updating the sub charts. The YAML templates of all the charts are very similar (deployment, configmap, ingress). Sometimes, when we need to update an ingress setting for all the charts, we have to go to different Git repos to update them and merge the changes to different branches. I am now considering creating one dedicated Git repo for all the charts, but I would like some advice on chart management. What are the best practices for managing Helm charts and Docker repositories at large scale?
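One common way to attack the near-identical-templates problem is a Helm library chart that holds the shared deployment/configmap/ingress templates, which each microservice chart imports as a dependency. A minimal sketch (the chart name "common" and its version are assumptions for illustration):

```yaml
# Chart.yaml of a shared library chart (hypothetical)
apiVersion: v2
name: common
type: library
version: 0.1.0
---
# Chart.yaml of one microservice chart, pulling in the shared templates
apiVersion: v2
name: subchart1
version: 0.0.1
dependencies:
  - name: common
    version: ~0.1.0
    repository: 'chartmuseum'
```

With this layout, an ingress change becomes one commit in the library chart plus dependency bumps, instead of fifty edits across fifty repositories.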

Related

How to deal with multiple branches (like master, dev, staging, feature-123, etc.) with Helm Charts

In our project we are using umbrella Helm charts (I can't find a better term, as mentioned here), and I have a dilemma about how to manage multiple branches across multiple Git repositories under this concept.
I can't be the only person struggling with this, so I would like to ask about your experiences and solutions.
Our project structure looks like the following: we have multiple Git repositories, each containing a microservice, like the following...
Git Repository - Micro Service 1
+-helm
  +-templates
  Chart.yaml
  values.yaml
+-src
  +-main
    +-java
  +-test
    +-java
build.gradle
and a second one (an n-th one, actually)...
Git Repository - Micro Service 2
+-helm
  +-templates
  Chart.yaml
  values.yaml
+-src
  +-main
    +-java
  +-test
    +-java
build.gradle
We have a convention in our project that we only change the version number of a microservice's Helm chart when its Kubernetes manifests change, not when the only change was to the Java source code. We place the Docker image ID into the build, and the commit ID as appVersion, so it is recognizable from the outside which commit a Helm chart was created for.
Now with a pipeline, we deploy these helm charts (with the actual docker image ids) to the Helm Repository.
Then we have another umbrella Git repository (for lack of a better word) which orchestrates the Helm charts of the microservices for whole-system deployments.
Git Repository - Umbrella
+-helm
  Chart.yaml
  values.yaml
  values-dev.yaml
  values-staging.yaml
  values-prod.yaml
and Chart.yaml looks like the following:
apiVersion: v2
name: umbrella
description: A Helm chart for Umbrella
type: application
version: 1.0.0
appVersion: "89WRG344HWRHHH"
dependencies:
  - name: micro-service1
    version: 1.0.0
    repository: https://gitlab.xxx.com/api/v4/projects/XXXXX/packages/helm/stable
    condition: micro-service1.enabled
    tags:
      - application
  - name: micro-service2
    version: 1.0.0
    repository: https://gitlab.xxx.com/api/v4/projects/YYYYY/packages/helm/stable
    condition: micro-service2.enabled
    tags:
      - application
Now life would be fine and dandy if we had one environment, like 'prod', or one Git branch, 'master'. But because we keep one version number for each microservice's Helm chart (remember our convention: as long as the Kubernetes manifests do not change, we do not bump the chart version, and appVersion has no effect on Helm's dependency resolution), the branches 'master', 'dev', 'feature-123', 'feature-987', etc. would all produce the same chart version, only with different commit IDs as appVersion. Of course, we could increase the chart version for every Java code change, but keeping the umbrella chart in sync with that would be a crazy task.
So here is my potential solution: GitLab Helm chart repositories have a 'channel' property, so I can publish my charts to channels based on the branch name of the microservice repositories.
Now, this would be quite straightforward for branch names that do not change ('dev', 'staging', 'test'), but what about feature branches? All the microservice repositories would have to follow the same branch-naming convention: if 'Micro Service 1' collaborates on 'feature-123', the Git repositories of Micro Service 1 and Umbrella must both have a branch 'feature-123'; if 'Micro Service 1' and 'Micro Service 2' collaborate on 'feature-987', the repositories of Micro Service 1, Micro Service 2, and Umbrella must all have a branch 'feature-987'. This would mean we have channels 'feature-123' and 'feature-987' in the Helm repository, and I would add them to the Helm repo in the build pipeline.
This brings me to my first dilemma and my first question: as far as I can read and interpret the documentation of Nexus and Artifactory (the Helm repositories I have experience with), they have no concept of a 'channel', so this line of thought would tie my solution to GitLab's Helm repositories, which I don't want, because at any moment upper management could decide we will use Nexus or Artifactory instead.
Does anybody know whether these platforms support such a concept?
If this solution proves to be a dead end (because no other Helm repository supports the concept of channels, and creating a new Helm repository per feature branch is not really a solution), my second plan is to have the microservice pipelines upload the Helm chart with the chart version changed to '1.0.0-branch-name', like '1.0.0-feature-123', '1.0.0-feature-987', or '1.0.0-staging'. The Umbrella repository would have the same branch and would implicitly add the branch name to its dependency versions (I can do that in a pipeline).
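That second plan can be sketched as a small pipeline step (CI_COMMIT_REF_SLUG is GitLab's sanitized branch-name variable; the base version is an assumption):

```shell
#!/bin/sh
# Sketch: derive a branch-suffixed chart version, as in plan two above.
BASE_VERSION="1.0.0"
BRANCH="${CI_COMMIT_REF_SLUG:-feature-123}"

if [ "$BRANCH" = "master" ]; then
  CHART_VERSION="$BASE_VERSION"                 # release versions stay clean
else
  CHART_VERSION="${BASE_VERSION}-${BRANCH}"     # e.g. 1.0.0-feature-123
fi

echo "chart version: $CHART_VERSION"
# helm package ./helm --version "$CHART_VERSION"
```

One caveat worth checking: in SemVer, 1.0.0-feature-123 is a pre-release and sorts *before* 1.0.0, so the umbrella chart's dependency constraints must name the suffix explicitly rather than rely on range matching.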
Do you see any knock-out (KO) criteria for this?
And finally: somebody out there must be facing the same dilemmas. How are you tackling these problems?

How can I use Gitlab's Container Registry for Helm Charts with ArgoCDs CI/CD Mechanism?

My situation is as follows:
- have a kubernetes cluster with a couple of nodes
- have argocd installed on the cluster and working great
- using gitlab for my repo and build pipelines
- have another repo for storing my helm charts
- have docker images being built in gitlab and pushed to my gitlab registry
- have argocd able to point to my helm chart repo and sync the helm chart with my k8s cluster
- have helm chart archive files pushed to my gitlab repo
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart Git repo is that I can't (or don't know how to) differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, Argo CD syncs both environments with the same Helm chart repo. I could get around this with separate charts for each environment, but that isn't a valid solution for me.
The second problem I faced, while trying to get around the above, is that I can't get Argo CD to pull Helm charts from a GitLab OCI registry. I made my build pipeline push the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is great, just what I want. The problem is that Argo CD, as far as I can tell, can't pull from GitLab's container registry.
How do I go about getting my pipeline automated with gitlab as my repo and build pipeline, helm for packaging my application, and argocd for syncing my helm application with my k8s cluster?
"…is that I can't get argocd to pull helm charts from a gitlab oci registry."
You might be interested in GitLab 14.1 (July 2021):
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access, deploy, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and issue.
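Consuming charts from a project's Helm chart registry then looks roughly like this (host, project ID, username, and token below are placeholders, not real values):

```shell
# Sketch: add a GitLab project's Helm chart registry as a repo and search it.
helm repo add my-charts \
  --username my-user --password <personal-access-token> \
  https://gitlab.example.com/api/v4/projects/<project_id>/packages/helm/stable

helm repo update
helm search repo my-charts
```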

Why almost all helm packages are DEPRECATED?

I just installed Helm v3.4.2 and the command below prints many packages as DEPRECATED in the description:
helm search repo stable
Output:
stable/acs-engine-autoscaler   2.2.2    2.1.1      DEPRECATED Scales worker nodes within agent pools
stable/aerospike               0.3.5    v4.5.0.5   DEPRECATED A Helm chart for Aerospike in Kubern...
stable/airflow                 7.13.3   1.10.12    DEPRECATED - please use: https://github.com/air...
stable/ambassador              5.3.2    0.86.1     DEPRECATED A Helm chart for Datawire Ambassador
...
Why are only 18 of 284 packages not deprecated?
Does that mean that for these packages we have to add external repositories?
The underlying reason "why" is that the CNCF no longer wanted to pay the costs of hosting a single monolithic repository:
https://www.cncf.io/blog/2020/10/07/important-reminder-for-all-helm-users-stable-incubator-repos-are-deprecated-and-all-images-are-changing-location/
This means that the charts are now scattered across various repositories, hosted by a range of organisations.
The Artifact Hub aggregates these so you can search them:
https://artifacthub.io/packages/search?page=1&ts_query_web=mysql
We're now in a very confusing situation where if you want to install a package, you're very likely to find several different repositories hosting different versions and variants, and you need to decide which one to trust and go for.
Very likely many of these repos will get deprecated themselves.
It's all a bit Wild West right now, and it's a shame there is no longer a single "stable" one-stop shop.
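Artifact Hub can also be queried straight from the Helm CLI, which is often quicker than the website:

```shell
# Search Artifact Hub (the aggregator mentioned above) from the CLI.
helm search hub mysql --max-col-width 60
```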
According to cncf.io
On November 13th, 2020 the Stable and Incubator Helm chart repositories will be deprecated and all Helm-related images will no longer be available from GCR. Users who do not switch image locations to their new homes and update any chart from the stable/incubator repos to their new homes will potentially run into issues.
This is also mentioned on Helm Charts github.
This project is no longer supported.
At 1 year, support for this project will formally end, at which point the stable and incubator chart repos will be marked obsolete. At that time these chart repos will likely be garbage collected and no longer available. This git repository will remain as an archive.
This timeline gives the community (chart OWNERS, organizations, groups or individuals who want to host charts) 9 months to move charts to new Helm repos, and list these new repos on the Helm Hub before stable and incubator are de-listed.
Many maintainers have already migrated their charts to new homes. You can track the chart migration progress here.
For example, ambassador moved to Datawire:
helm/charts has been deprecated and will be obsolete by Nov 13 2020. For this reason, the datawire team has retaken ownership of this chart.
The Ambassador Chart is now hosted at datawire/ambassador-chart.
As per the Helm docs, the new locations for the stable and incubator charts are
https://charts.helm.sh/stable and https://charts.helm.sh/incubator
Use the command below to update:
helm repo add stable https://charts.helm.sh/stable --force-update

Helm chart usage

I'm working on a Kubernetes project where each microservice has its own Helm chart. Currently, the Helm chart of each microservice lives alongside it in the source code repository. Now I want to create a QA environment where the same code can be used, but I'm having trouble customizing the Helm chart per environment. My questions: what is the best approach to handling a Helm chart for a microservice, and should the Helm chart be located in the repository of the source code?
Thanks in advance.
It's OK to have the chart in each microservice's repository.
Now, to deploy your system (no matter the environment), you need to helm install all those charts. How can you do this? You have two options: either you install each one individually, or, the better approach, you create a meta chart.
What is this meta chart? Just another dummy chart with dependencies on all of your microservices, so that you end up with something like:
apiVersion: v2
name: myservice
version: 1.0.0
dependencies:
  - name: microserviceA
    version: ">=1.0.0"
    repository: "path_to_microserviceA_repo"
  - name: microserviceB
    version: ">=1.0.0"
    repository: "path_to_microserviceB_repo"
Then, ideally, you would have a different values file with configuration for each environment you're going to deploy to: QA, staging, production, a personal one for local development, etc.
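Concretely, each values file overrides per-environment settings for the dependency charts. A sketch (the service names and keys below are illustrative, not a real schema):

```yaml
# values-qa.yaml (sketch): per-environment overrides for the meta chart.
# Each top-level key matches a dependency name in Chart.yaml.
microserviceA:
  replicaCount: 1
  ingress:
    host: qa-a.example.com
microserviceB:
  replicaCount: 1
  ingress:
    host: qa-b.example.com
```

You would then deploy an environment with something like `helm install myservice ./myservice -f values-qa.yaml`.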

Customize helm chart from stable repository

So I am using the helm chart stable/traefik to deploy a reverse proxy to my cluster. I need to customise it beyond what is possible with the variables I can set for the template.
I want to enable the dashboard service while not creating an ingress for it (I set up OpenVPN to access the traefik dashboard only via VPN).
Both dashboard-ingress.yaml and dashboard-service.yaml conditionally include the ingress or the respective service based on the same variable {{- if .Values.dashboard.enabled }}
From my experience I would fork the helm chart and push the customised version to my own repository.
Is there a way to add that customization but keep the original helm chart from the stable repository?
You don't necessarily have to push to your own repository, as you could take the source code and include the chart in your own chart as source. For example, if you dig into the GitLab chart, in its charts dependencies they have included multiple other charts as source there, not as packaged .tgz files. That enables you to make changes to the chart within your own source (much as the GitLab folks have). You can get the source using helm fetch stable/traefik --untar
However, including the chart as source is still quite close to forking: if you want to upgrade to pick up fixes, you still have to reapply your changes. I believe your only other option is to raise an issue on the official chart repo. Perhaps for your case you could suggest to the maintainers that the ingress be included only when .Values.dashboard.enabled and a separate ingress condition are both met.
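Such a suggestion could look like the following guard in dashboard-ingress.yaml (a sketch of the proposed change, not the actual stable/traefik template):

```yaml
# Sketch: require a dedicated ingress flag on top of dashboard.enabled,
# so the dashboard Service can be rendered without its Ingress.
{{- if and .Values.dashboard.enabled .Values.dashboard.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-dashboard
# (rules omitted in this sketch)
{{- end }}
```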