I just installed Helm v3.4.2 and the command below prints many packages as DEPRECATED in the description:
helm search repo stable
Output:
stable/acs-engine-autoscaler 2.2.2 2.1.1 DEPRECATED Scales worker nodes within agent pools
stable/aerospike 0.3.5 v4.5.0.5 DEPRECATED A Helm chart for Aerospike in Kubern...
stable/airflow 7.13.3 1.10.12 DEPRECATED - please use: https://github.com/air...
stable/ambassador 5.3.2 0.86.1 DEPRECATED A Helm chart for Datawire Ambassador
...
Why are only 18 out of 284 packages not deprecated?
Does that mean that for these packages we have to add external repositories?
The underlying reason "why" is that the CNCF no longer wanted to pay the costs of hosting a single monolithic repository:
https://www.cncf.io/blog/2020/10/07/important-reminder-for-all-helm-users-stable-incubator-repos-are-deprecated-and-all-images-are-changing-location/
This means that the charts are now scattered across various repositories, hosted by a range of organisations.
The Artifact Hub aggregates these so you can search them:
https://artifacthub.io/packages/search?page=1&ts_query_web=mysql
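Helm 3 can also query the Artifact Hub directly from the command line, for example:
helm search hub mysql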
We're now in a rather confusing situation: if you want to install a package, you're very likely to find several different repositories hosting different versions and variants of it, and you have to decide which one to trust and go for.
Very likely many of these repos will get deprecated themselves.
It's all a bit wild west right now, and it's a shame there is no longer a single "stable" one-stop shop.
According to cncf.io
On November 13th, 2020 the Stable and Incubator Helm chart repositories will be deprecated and all Helm-related images will no longer be available from GCR. Users who do not switch image locations to their new homes and update any chart from the stable/incubator repos to their new homes will potentially run into issues.
This is also mentioned on Helm Charts github.
This project is no longer supported.
At 1 year, support for this project will formally end, at which point the stable and incubator chart repos will be marked obsolete. At that time these chart repos will likely be garbage collected and no longer available. This git repository will remain as an archive.
This timeline gives the community (chart OWNERS, organizations, groups or individuals who want to host charts) 9 months to move charts to new Helm repos, and list these new repos on the Helm Hub before stable and incubator are de-listed.
Many maintainers have already migrated their charts to new homes. You can track the chart migration progress here.
For example, ambassador moved to datawire.
helm/charts has been deprecated and will be obsolete by Nov 13 2020. For this reason, the datawire team has retaken ownership of this chart.
The Ambassador Chart is now hosted at datawire/ambassador-chart.
As per the Helm docs, the new locations for the stable and incubator charts are
https://charts.helm.sh/stable and https://charts.helm.sh/incubator
Use the command below to update it:
helm repo add stable https://charts.helm.sh/stable --force-update
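If you also use incubator charts, the same applies with the incubator URL above, followed by a refresh of the local index:
helm repo add incubator https://charts.helm.sh/incubator --force-update
helm repo update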
Related
In our project we are using umbrella Helm charts (while I can't find any better term, as mentioned here), and I have a dilemma about how to manage multiple branches over multiple Git repositories under this concept.
I can't be the only person struggling with this concept, so I'd like to ask about your experiences and solutions on the topic.
Our project structure looks like the following: we have multiple Git repositories containing microservices, like this....
Git Repository - Micro Service 1
+-helm
   +-templates
   Chart.yaml
   values.yaml
+-src
   +-main
      +-java
   +-test
      +-java
build.gradle
and a second one (an nth one, actually)..
Git Repository - Micro Service 2
+-helm
   +-templates
   Chart.yaml
   values.yaml
+-src
   +-main
      +-java
   +-test
      +-java
build.gradle
We have a convention in our project that we only change the version number of a microservice's Helm chart if the K8s manifests change, not when the only change was to the Java source code. We place the Docker image ID into the build, and also the commit ID as appVersion, so it is recognisable from outside which commit a Helm chart was created for.
Now, with a pipeline, we publish these Helm charts (with the actual Docker image IDs) to the Helm repository.
Then we have another umbrella Git repository (for lack of a better word) which orchestrates the Helm charts of the microservices for all system deployments.
Git Repository - Umbrella
+-helm
   Chart.yaml
   values.yaml
   values-dev.yaml
   values-staging.yaml
   values-prod.yaml
and Chart.yaml looks like the following:
apiVersion: v2
name: umbrella
description: A Helm chart for Umbrella
type: application
version: 1.0.0
appVersion: "89WRG344HWRHHH"
dependencies:
  - name: micro-service1
    version: 1.0.0
    repository: https://gitlab.xxx.com/api/v4/projects/XXXXX/packages/helm/stable
    condition: micro-service1.enabled
    tags:
      - application
  - name: micro-service2
    version: 1.0.0
    repository: https://gitlab.xxx.com/api/v4/projects/YYYYY/packages/helm/stable
    condition: micro-service2.enabled
    tags:
      - application
Now life would be fine and dandy if we had one environment, like 'prod', or one Git branch, 'master'. But since we keep one version number for a microservice's Helm chart (remember our convention: as long as the K8s manifests do not change, we don't change the chart version, and appVersion has no effect on Helm's dependency resolution), the branches 'master', 'dev', 'feature-123', 'feature-987', etc. would all produce the same chart version, with different commit IDs as appVersion. Of course, we could increase the chart version for every Java code change, but keeping the umbrella charts in sync with that would be a crazy task.
So my potential solution is this: GitLab Helm chart repositories have a 'channel' property, so I can publish my charts to channels based on the branch names of the microservice repositories, as sketched below.
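Sketched in pipeline terms, publishing to a branch-named channel would look roughly like this (the project ID is a placeholder, and the channel segment follows GitLab's documented Helm package registry URL layout; untested):
helm package ./helm
curl --request POST \
     --user gitlab-ci-token:$CI_JOB_TOKEN \
     --form 'chart=@micro-service1-1.0.0.tgz' \
     "https://gitlab.xxx.com/api/v4/projects/XXXXX/packages/helm/api/${CI_COMMIT_BRANCH}/charts"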
Now this would be quite straightforward for branch names that do not change ('dev', 'staging', 'test'), but what about feature branches? All the microservice repositories would have to follow the same Git branch naming convention: if 'Micro Service 1' contributes to 'feature-123', the Git repositories Micro Service 1 and Umbrella must both have a branch 'feature-123'; if 'Micro Service 1' and 'Micro Service 2' collaborate on 'feature-987', the Git repositories Micro Service 1, Micro Service 2 and Umbrella must all have a branch 'feature-987'. This would mean we would have channels 'feature-123' and 'feature-987' in the Helm repository, and I would add them to the helm repo in the build pipeline.
This brings us to my first dilemma and my first question: as far as I can read and interpret the documentation of Nexus and Artifactory (these are the Helm repositories I have experience with), they have no concept of a 'channel', so this line of thought would tie my solution to GitLab Helm repositories, which I don't want, because at any moment upper management could say we will use Nexus or Artifactory instead.
Does anybody know whether these platforms support such a concept?
If this solution proves to be a dead end (because no other Helm repository supports the concept of channels, and creating a new Helm repository per feature branch is not really a solution), my second plan is, during the microservice pipelines, to upload the Helm chart with the chart version changed to '1.0.0-branch-name', like '1.0.0-feature-123', '1.0.0-feature-987' or '1.0.0-staging', so that the umbrella repository, having the same branch, can implicitly add branch names to its dependency versions (I can do that in a pipeline), as sketched below.
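Since helm package can override the chart version at packaging time, the pipeline would not even need to rewrite Chart.yaml; roughly (the CI variable name is GitLab's, the rest is untested):
VERSION="1.0.0-${CI_COMMIT_BRANCH}"   # e.g. 1.0.0-feature-123
helm package ./helm --version "$VERSION"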
Do you see any knock-out criteria for this?
And finally, somebody out there must be facing the same dilemmas, how are you tackling these problems?
We have around 50 microservices. Each microservice has its own repo in Git, and in its repo we have a chart folder for the Helm chart.
We also have an umbrella chart/parent chart for those 50 sub-charts.
To differentiate dev, QA and production Helm packages, we used a different name for the umbrella chart and different versioning.
For example, all our development charts have versions like 0.0.0.xxxxx, and all production charts have versions like 1.0.0.xxxxx.
The purpose of the different versioning strategy is that we can pull down sub-charts from different branches when building the parent chart.
When we run the pipeline from the development branch, it creates a Helm chart with the version prefix 0.0.0, and from the master branch it creates a chart version with the prefix 1.0.0. To keep it simple, we are not using appVersion, only the chart version, and every time we build a new Docker image we bump the chart version.
For example, we have the following requirements.yaml in our development parent chart:
dependencies:
  - name: subchart1
    repository: 'chartmuseum'
    version: ~0.0.0
  - name: subchart2
    repository: 'chartmuseum'
    version: ~0.0.0
With this, when we run the pipeline of the development parent chart, it pulls down the dependencies that were built from the development branch.
This works well.
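For reference, ~0.0.0 is a patch-level range (equivalent to >= 0.0.0 and < 0.1.0), which is why the parent chart only resolves sub-charts published with the 0.0.x prefix; a minimal sketch (the ./parent-chart path is a placeholder):
# resolves each dependency to the highest published version in >= 0.0.0 and < 0.1.0
helm dependency update ./parent-chart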
The first problem we are having now is that when multiple developers work on different microservices, they include each other's changes when building the parent chart.
The second problem is with updating the sub-charts. The YAML templates of all the charts are very similar (deployment, configmap, ingress). Sometimes, when we need to update an ingress setting for all the charts, we have to go to different Git repos to update them and merge them to different branches. I am now considering creating one single dedicated Git repo for all the charts, but I would like to hear some advice on the management of charts. What are the best practices for managing Helm charts and Docker repositories at large scale?
My situation is as follows:
have a kubernetes cluster with a couple of nodes
have argocd installed on the cluster and working great
using gitlab for my repo and build pipelines
have another repo for storing my helm charts
have docker images being built in gitlab and pushed to my gitlab registry
have argocd able to point to my helm chart repo and sync the helm chart with my k8s cluster
have helm chart archive files pushed to my gitlab repo
While this is a decent setup, it's not ideal.
The first problem I faced with using a Helm chart Git repo is that I can't (or don't know how to) differentiate my staging environment from my production environment. Since I have a dev environment and a prod environment in my cluster, Argo CD syncs both environments with the Helm chart repo. I could get around this with separate charts for each environment, but that isn't a valid solution.
The second problem I faced, while trying to get around the above problem, is that I can't get Argo CD to pull Helm charts from a GitLab OCI registry. I made my build pipeline push the Helm chart archive file to my GitLab container registry with the tag dev-latest or prod-latest, which is great, just what I want. The problem is that Argo CD, as far as I can tell, can't pull from GitLab's container registry.
How do I go about getting my pipeline automated with gitlab as my repo and build pipeline, helm for packaging my application, and argocd for syncing my helm application with my k8s cluster?
"I can't get argocd to pull helm charts from a gitlab oci registry."
You might be interested in GitLab 14.1 (July 2021):
Build, publish, and share Helm charts
Helm defines a chart as a Helm package that contains all of the resource definitions necessary to run an application, tool, or service inside of a Kubernetes cluster.
For organizations that create and manage their own Helm charts, it’s important to have a central repository to collect and share them.
GitLab already supports a variety of other package manager formats.
Why not also support Helm? That’s what community member and MVP from the 14.0 milestone Mathieu Parent asked several months ago before breaking ground on the new GitLab Helm chart registry. The collaboration between the community and GitLab is part of our dual flywheel strategy and one of the reasons I love working at GitLab. Chapeau Mathieu!
Now you can use your GitLab project to publish and share packaged Helm charts.
Simply add your project as a remote, authenticating with a personal access token, deploy token, or CI/CD job token.
Once that’s done you can use the Helm client or GitLab CI/CD to manage your Helm charts.
You can also download the charts using the API or the user interface.
What’s next? First, we’d like to present additional metadata for charts.
Then we’ll start dogfooding the feature by using it as a replacement for https://charts.gitlab.io/.
So, try out the feature and let us know how it goes by commenting in the epic GitLab-#6366.
See Documentation and issue.
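As a minimal sketch of consuming such a project-level chart repository (host, project ID and credentials are placeholders following the documented URL pattern):
helm repo add my-charts https://gitlab.example.com/api/v4/projects/<project_id>/packages/helm/stable --username <username> --password <access_token>
helm repo update
helm search repo my-charts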
I've been developing a prototype chart that depends on some custom resource definitions that are defined in one of the child charts.
To be more specific, I'm trying to create the resources defined in the strimzi-kafka-operator within my Helm chart and would like the dependency to be explicitly installed first. I followed the Helm documentation and added the following to my Chart.yaml:
dependencies:
  - name: strimzi-kafka-operator
    version: 0.16.2
    repository: https://strimzi.io/charts/
I ran:
$ helm dep up ./prototype-chart
$ helm install prototype ./prototype-chart
> Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "KafkaTopic" in version "kafka.strimzi.io/v1beta1"
which shows that it's trying to deploy my chart before my dependency. What is the correct way to install dependencies first and then my parent chart?
(For reference, here is the question I opened on GitHub directly with Strimzi, where they informed me they aren't sure how to use their Helm chart as a dependency:
https://github.com/strimzi/strimzi-kafka-operator/issues/2552
)
Regarding CRDs: the fact that Helm by default won't manage those is a feature, not a bug. It will still install them if not present, but it won't modify or delete existing CRDs. The previous version of Helm (v2) does, but (speaking from experience) that can get you into all sorts of trouble if you're not careful. Quoting from the link you referenced:
There is no support at this time for upgrading or deleting CRDs using Helm. This was an explicit decision after much community discussion due to the danger of unintentional data loss. [...] One of the distinct disadvantages of the crd-install method used in Helm 2 was the inability to properly validate charts due to changing API availability (a CRD is actually adding another available API to your Kubernetes cluster). If a chart installed a CRD, helm no longer had a valid set of API versions to work against. [...] With the new crds method of CRD installation, we now ensure that Helm has completely valid information about the current state of the cluster.
The idea here is that Helm should operate only at the level of release data (adding/removing deployments, storage, etc.); but with CRDs, you're actually modifying an extension to the Kubernetes API itself, potentially inadvertently breaking other releases that use the same definitions. Consider being on a team that has a "library" of CRDs shared between several charts, and wanting to uninstall one: with v2, Helm would happily let you modify or even delete those at will, with no checks on if or how they were used in other releases. Changes to CRDs are changes to your control plane / core API, and should be treated as such: you're modifying global resources.
In short: with v3, Helm positions itself more as a "developer" tool to define, template, and manage releases; CRDs, however, are meant to be managed independently, e.g. by a "cluster administrator". At the end of the day, it's a win for all sides, since developers can set up and tear down deployments at will, with confidence that it won't break functionality elsewhere... and whoever's on call won't have to deal with alerts if/when you accidentally delete or modify a CRD and break things in production :)
See also the extensive discussion here for more context behind this decision.
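As a practical workaround for your prototype chart, one option (my assumption, not something the Strimzi project prescribes) is to drop the dependency and install the operator as its own release first, so that its CRDs exist before your chart is applied:
helm repo add strimzi https://strimzi.io/charts/
helm install strimzi-operator strimzi/strimzi-kafka-operator --version 0.16.2
helm install prototype ./prototype-chart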
Hope this helps!
I have a Helm repository set up for my CI/CD pipeline, but the one thing I am having trouble with is Helm's versioning system, which is focused on semantic versioning as in x.x.x.
I want to be able to specify tags like "staging", "latest", and "production", and although I am able to successfully upload charts with string versions:
NAME CHART VERSION APP VERSION
chartmuseum/mychart latest 1.0
Any attempt to actually access the chart fails, such as
helm inspect chartmuseum/mychart --version=latest
Generates the error:
Error: failed to download "chartmuseum/mychart" (hint: running 'helm repo update' may help)
I don't really want to get into controlled semantic versioning at this point in development, or the mess that is appending hashes to a version. Is there any way to get Helm to pull non-semantically tagged chart versions?
My approach to this, where I also do not want to version my chart (and sub-charts) semantically, is not to use a Helm repository at all and just pull the whole chart from Git in CI/CD instead. If you are publishing charts to a wider audience this may not suit you, but for our own CI/CD, which is authorized to access our repositories anyway, it works like a charm.
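A minimal sketch of that approach, assuming a Git repo holding the charts and one branch per environment (URL and names are hypothetical):
git clone --depth 1 --branch "$CI_COMMIT_BRANCH" https://gitlab.example.com/org/charts.git
helm upgrade --install myapp ./charts/myapp -f ./charts/myapp/values-prod.yaml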
I found something that worked for me. Since SemVer allows you to append values after the last number, like 0.1.0-aebcaber, I've taken to simply using 0.1.0-latest and overwriting that in ChartMuseum on uploads.
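If you take this route, note that ChartMuseum rejects re-uploads of an existing version unless overwriting is enabled (e.g. it was started with ALLOW_OVERWRITE=true); with the helm-push plugin the upload would then look like:
helm cm-push --force mychart/ chartmuseum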