I'm working on a Kubernetes project where each microservice has its own Helm chart, and the chart currently lives alongside the microservice in its source code repository. Now I want to create a QA environment where the same code can be used, but I'm having trouble customizing the Helm chart for each environment. My questions are: what is the best approach to handling a Helm chart for a microservice, and should the Helm chart be located in the repository of the source code?
Thanks in advance.
It's ok to have the chart in each microservice's repository.
Now, to deploy your system (no matter the environment), you need to helm install all of those charts. How can you do this? You have two options: either you install each one individually, or, the better approach, you create a meta chart.
What's this meta chart? Just another dummy chart, with dependencies on all of your microservices, so that you end up with something like:
apiVersion: v2
name: myservice
version: 1.0.0
dependencies:
  - name: microserviceA
    version: ">=1.0.0"
    repository: "path_to_microserviceA_repo"
  - name: microserviceB
    version: ">=1.0.0"
    repository: "path_to_microserviceB_repo"
Then, ideally, you would have a different values file with configuration for each environment you're going to deploy: QA, staging, production, a personal one for local development, etc.
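For example (the file name and the keys under each service are only illustrative), a values-qa.yaml could override settings per microservice, since values for a dependency are nested under that dependency's name:

# values-qa.yaml (hypothetical per-environment overrides for the meta chart)
microserviceA:
  replicaCount: 1
  image:
    tag: qa-latest
microserviceB:
  ingress:
    host: qa.example.com

and then you'd deploy with helm install myservice . -f values-qa.yaml.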
In our project we are using Umbrella Helm Charts (I can't find any better term; it's as mentioned here), and I have a dilemma about how to manage multiple branches over multiple Git repositories under this concept.
I can't be the only person struggling with this concept, so I'd like to ask about your experiences and solutions on the topic.
Our project structure looks like the following: we have multiple Git repositories containing microservices, like this...
Git Repository - Micro Service 1
+-helm
  +-templates
  Chart.yaml
  values.yaml
+-src
  +-main
    +-java
  +-test
    +-java
build.gradle
and a second one (an nth one, actually)...
Git Repository - Micro Service 2
+-helm
  +-templates
  Chart.yaml
  values.yaml
+-src
  +-main
    +-java
  +-test
    +-java
build.gradle
We have a convention in our project that we only change the version number of a microservice's Helm chart when its K8s manifests change, not when the only change was to the Java source code. We place the Docker image ID into the build, and also the commit ID as appVersion, so it is recognizable from the outside which commit a Helm chart was created for.
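Concretely, a microservice Chart.yaml under this convention might look like this (the values are illustrative):

apiVersion: v2
name: micro-service1
version: 1.0.0                 # bumped only when the K8s manifests change
appVersion: "89WRG344HWRHHH"   # commit ID of the build that produced the Docker image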
Now, with a pipeline, we publish these Helm charts (with the actual Docker image IDs) to the Helm repository.
Then we have another Umbrella Git repository (for lack of a better word) which orchestrates the Helm charts of the microservices for all system deployments.
Git Repository - Umbrella
+-helm
  Chart.yaml
  values.yaml
  values-dev.yaml
  values-staging.yaml
  values-prod.yaml
and Chart.yaml looks like the following:
apiVersion: v2
name: umbrella
description: A Helm chart for Umbrella
type: application
version: 1.0.0
appVersion: "89WRG344HWRHHH"
dependencies:
  - name: micro-service1
    version: 1.0.0
    repository: https://gitlab.xxx.com/api/v4/projects/XXXXX/packages/helm/stable
    condition: micro-service1.enabled
    tags:
      - application
  - name: micro-service2
    version: 1.0.0
    repository: https://gitlab.xxx.com/api/v4/projects/YYYYY/packages/helm/stable
    condition: micro-service2.enabled
    tags:
      - application
Now life would be fine and dandy if we had one environment, like 'prod', or one Git branch, 'master'. But since we have one version number per microservice Helm chart (remember our convention: as long as the K8s manifests don't change, we don't bump the chart version, and appVersion has no effect on Helm's dependency resolution), the branches 'master', 'dev', 'feature-123', 'feature-987', etc. would all produce the same Helm chart version, just with different commit IDs as appVersion. Of course, we could increase the chart version on every Java code change, but keeping the umbrella charts in sync with that would be a crazy task.
So my potential solution to that: GitLab Helm chart repositories have a 'channel' property, so I can publish my charts to channels based on the branch name of the microservice repositories.
Now this would be quite straightforward for branch names that don't change, like 'dev', 'staging', 'test'. But what about feature branches? All the microservice repositories would have to follow the same Git branch naming convention: if 'Micro Service 1' collaborates on 'feature-123', the Git repositories for Micro Service 1 and Umbrella must both have a branch 'feature-123'; if 'Micro Service 1' and 'Micro Service 2' collaborate on 'feature-987', the repositories for Micro Service 1, Micro Service 2, and Umbrella must all have a branch 'feature-987'. This would mean we'd have channels 'feature-123' and 'feature-987' in the Helm repository, and I would add them with helm repo add in the build pipeline.
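For illustration, publishing to a branch-named channel of the GitLab Helm chart repository could look roughly like this in a pipeline (the host and project ID are the placeholders from above; the channel is the last path segment of the repository URL):

curl --request POST \
     --user "gitlab-ci-token:$CI_JOB_TOKEN" \
     --form "chart=@micro-service1-1.0.0.tgz" \
     "https://gitlab.xxx.com/api/v4/projects/XXXXX/packages/helm/api/feature-123/charts"
helm repo add ms1-feature-123 https://gitlab.xxx.com/api/v4/projects/XXXXX/packages/helm/feature-123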
This brings us to my first dilemma and my first question: as far as I read and interpret the documentation of Nexus and Artifactory (the Helm repositories I have experience with), neither has a concept of 'channel'. This line of thought would tie my solution to GitLab Helm repositories, which I don't want, because at any moment upper management could say we will use Nexus, or we will use Artifactory.
Does anybody know whether these platforms support such a concept?
If this solution proves to be a dead end (because no other Helm repository supports the concept of channels, and creating a new Helm repository per feature branch is not really a solution), my second plan is to have the microservice pipelines upload the Helm chart with the chart version changed to '1.0.0-branch-name', like '1.0.0-feature-123', '1.0.0-feature-987', '1.0.0-staging'. The umbrella repository would have the same branch and would implicitly add branch names to its dependency versions (I can do that in a pipeline).
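As a sketch, the umbrella chart on branch 'feature-123' would then pin its dependencies like this (pre-release suffixes of this form are valid SemVer 2, which Helm chart versions must follow):

dependencies:
  - name: micro-service1
    version: 1.0.0-feature-123
    repository: https://gitlab.xxx.com/api/v4/projects/XXXXX/packages/helm/stable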
Do you see any knock-out (KO) criteria for this approach?
And finally, somebody out there must be facing the same dilemmas; how are you tackling these problems?
We have around 50 microservices. Each microservice has its own repo in Git, and in its repo we have a chart folder for the Helm chart.
We also have an umbrella chart/parent chart for those 50 sub charts.
To differentiate dev, QA, and production Helm packages, we use different names for the umbrella chart and different versioning.
For example, all our development charts have versions like 0.0.0.xxxxx, and production charts have versions like 1.0.0.xxxxx.
The purpose of the different versioning strategy is so that we can pull down sub charts from different branches when building the parent chart.
When we run the pipeline from the development branch, it creates a Helm chart with version prefix 0.0.0, and from the master branch it creates a chart version with prefix 1.0.0. And to keep it simple, we are not using appVersion, only the chart version, and every time we build a new Docker image we bump up the chart version.
For example, we have the following requirements.yaml in our development parent chart.
dependencies:
  - name: subchart1
    repository: 'chartmuseum'
    version: ~0.0.0
  - name: subchart2
    repository: 'chartmuseum'
    version: ~0.0.0
With this, when we run the pipeline of the development parent chart, it will pull down the dependencies that were built from the development branch.
This works well.
The first problem we are having now is that when multiple developers work on different microservices, they include each other's changes when building the parent chart.
The second problem is with updating the subcharts. The YAML templates of all the charts are very similar (deployment, configmap, ingress). Sometimes, when we need to update an ingress setting for all the charts, we have to go to the different Git repos to update them and merge them to different branches. I am now considering creating one single dedicated Git repo for all the charts, but I would like to hear some advice on chart management. What are the best practices for managing Helm charts and Docker repositories at large scale?
I have a few microservices, and one of them needs to use PostgreSQL. I configure this microservice using Helm 3. I have a different values.yaml per environment, values.stage.yaml and values.prod.yaml. So my confusion is:
1. Should I install PostgreSQL independently? What I mean is: in my source code I have a Helm chart called helm/app. Should I create one more chart for PostgreSQL? How can I configure PostgreSQL per environment?
2. In the future, if one more microservice would like to use the same PostgreSQL, what should I do to provide this feature?
Your chart should declare postgresql as a dependency; in Helm 3 this goes in its Chart.yaml file. (In Helm 2 there was a separate requirements.yaml file.) You will need to run helm dep up (helm dependency update) before deploying your chart, but then when you run helm install it will install both your application and its database dependency.
So your Chart.yaml can look roughly like
apiVersion: v2
name: app
...
dependencies:
  - name: postgresql
    version: '^8'
    repository: '@stable'
(In Helm 3 you also need to helm repo add the stable Helm charts repository.)
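Roughly, assuming your chart directory is helm/app and a release name of my-app, the sequence would be:

helm repo add stable https://charts.helm.sh/stable
helm dep up ./helm/app        # downloads the postgresql chart into helm/app/charts/
helm install my-app ./helm/app -f values.stage.yaml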
You can configure the database per environment in the same way you configure the rest of your application. Database settings would be nested under the subchart's name; at the command line you might --set postgresql.postgresqlPassword=..., and in a YAML file you'd put database settings under a postgresql: key.
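For example, a hypothetical values.stage.yaml might combine your own settings with database settings nested under the postgresql: key (postgresqlPassword is a value of the stable postgresql chart; the other keys here are assumptions):

replicaCount: 1                    # a setting of your own app
postgresql:
  postgresqlPassword: stage-secret
  persistence:
    size: 1Gi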
If you have a second service that needs PostgreSQL, it should declare a dependency in the same way and install its own independent copy of the database. With database installation isolated inside containers this isn't considered particularly heavy-weight. If your two services need to communicate, they should do it via a network (often HTTP) connection and not by sharing a database.
By default, Helm picks up the values.yaml in the root directory of the chart.
To install the same Helm chart with different values, you can do something like:
helm install my-release . -f values.stage.yaml
I am just wondering if anyone has figured out a declarative way to have Helm charts installed/configured as part of cluster initiation that could be checked into source control. Using Kubernetes, I have very much gotten used to the "everything as code" type of workflow, and I realize that installing and configuring Helm charts is based mostly on imperative workflows via the CLI.
The reason I am asking is that currently we have our cluster in development and will be recreating it in production. Most of our configuration has been done declaratively via deployment.yaml files. However, we have spent a significant amount of time installing and configuring certain Helm charts (e.g. Prometheus, Grafana, etc.).
There are tools like helmfile or helmsman which allow you to declare the Helm releases to be installed as code.
Here is an example from a helmfile.yaml doing so:
releases:
  # Published chart example
  - name: promnorbacxubuntu   # name of this release
    namespace: prometheus     # target namespace
    chart: stable/prometheus  # the chart being installed to create this release, referenced by `repository/chart` syntax
    set:                      # values (--set)
      - name: rbac.create
        value: false
Running helmfile sync (formerly helmfile charts) will then ensure that all listed releases are installed.
My team had a similar kind of problem, and we solved it with Operators. The best part about Operators is that there are three kinds, and one of them is Helm-based.
So you could use a Helm-based Operator, create an associated CRD, and then declare your configuration there. That configuration is then passed directly to the Helm chart without you, as the user, having to do anything.
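For instance, with a Helm-based Operator built with the Operator SDK, everything under the spec of the custom resource is passed to the chart as its values; a hypothetical CR could look like:

apiVersion: charts.example.com/v1alpha1   # hypothetical group/version
kind: Prometheus
metadata:
  name: my-prometheus
spec:
  rbac:
    create: false   # becomes the chart value rbac.create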
Are there any CLI tools or libraries that allow updating container images (and other parameters) in K8s YAML/JSON configuration files?
For example, I have this YAML:
apiVersion: apps/v1
kind: Deployment
<...>
spec:
  template:
    spec:
      containers:
        - name: dmp-reports
          image: example.com/my-image:v1
<...>
And I want to automatically update the image for this deployment in this file (basically, this is necessary for the CI/CD system).
We have the same issue on the Jenkins X project, where we have many git repositories, and as we change things like libraries or base Docker images we need to change lots of versions in pom.xml, package.json, Dockerfiles, Helm charts, etc.
We use a simple CLI tool called UpdateBot which automates the generation of Pull Requests on all downstream repositories. We tend to think of this as Continuous Delivery for libraries and base images ;). E.g., here are the current Pull Requests that UpdateBot has generated on the Jenkins X organisation repositories.
Then here's how we update Dockerfiles / helm charts as we release, say, new base images:
https://github.com/jenkins-x/builder-base/blob/master/jx/scripts/release.sh#L28-L29
You can use sed in your CI/CD pipeline to update the file and deploy. In Jenkins it's sh 'sed ......'.
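For the deployment shown above, a sketch of such a sed step could look like this (the NEW_TAG variable is an assumption):

sed -i 's|image: example.com/my-image:.*|image: example.com/my-image:'"$NEW_TAG"'|' deployment.yaml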
You can also use Helm: create templates, and you can specify the new image names (etc.) when deploying the release.
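For example, assuming the chart renders the image tag from a value like image.tag (the release and chart names here are hypothetical), the deploy step could be:

helm upgrade --install dmp-reports ./chart --set image.tag=v2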