I am new to this group. Glad to have connected.
I am wondering if anyone has experience using an umbrella Helm chart in a CI/CD process?
In our project, we have 2 separate developer contractors. Each contractor is responsible for specific microservices.
We are using Harbor as our repository for charts and accompanying container images and GitLab for our code repo and CI/CD orchestrator...via GitLab runners.
The plan is to use an umbrella chart to deploy all approx 60 microservices as one system.
I am interested in hearing from any groups that have taken a similar approach and how they treated/handled the umbrella chart in their CI/CD process.
Thank you for any input/guidance.
VR,
We use a similar kind of pattern, with 30+ microservices.
We have a GitHub repo for base charts.
The base-microservice chart contains all sorts of Kubernetes templates (HPA, ConfigMap, Secrets, Deployment, Service, Ingress, etc.), each of which can be enabled or disabled.
Note: the base chart can even contain other charts.
e.g. this base chart has a dependency on the nginx-ingress chart:
apiVersion: v2
name: base-microservice
description: A base helm chart for deploying a microservice in Kubernetes
type: application
version: 0.1.6
appVersion: 1
dependencies:
  - name: nginx-ingress
    version: "~1.39.1"
    repository: "alias:stable"
    condition: nginx-ingress.enabled
Below is an example template for secrets.yaml template:
{{- if .Values.secrets.enabled -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "base-microservice.fullname" . }}
type: Opaque
data:
{{- toYaml .Values.secrets.data | nindent 2 }}
{{- end }}
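For reference, the values fragment that drives this template would look something like the following; the key name under data is illustrative, not from the original post:

```yaml
secrets:
  enabled: true
  data:
    # Values must already be base64-encoded, since the template writes
    # them directly into the Secret's data field.
    API_KEY: c2VjcmV0LXZhbHVl
```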
Now when a commit happens in this base-charts repo, as part of the CI process (along with other things) we:
Check whether a Helm index already exists in the charts repository.
If it exists, download the existing index and merge the newly generated index into it: helm repo index --merge oldindex/index.yaml .
If it does not exist, create a new Helm index: helm repo index .
Then upload the archived charts and index.yaml to our charts repository.
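Sketched as a GitLab CI job (the asker's setup), those index steps might look roughly like this; CHARTS_REPO_URL, the credential variables, and the curl-based upload are placeholders, not details from the original post:

```yaml
publish-charts:
  stage: publish
  script:
    - helm package base-microservice/          # produces base-microservice-0.1.6.tgz
    - mkdir -p oldindex
    # Merge with the existing index if one exists, otherwise create a new one
    - |
      if curl -fsSo oldindex/index.yaml "$CHARTS_REPO_URL/index.yaml"; then
        helm repo index --merge oldindex/index.yaml .
      else
        helm repo index .
      fi
    # Upload the archived chart and the index to the charts repository
    - curl -fsS -u "$CHARTS_USER:$CHARTS_PASS" -T base-microservice-0.1.6.tgz "$CHARTS_REPO_URL/"
    - curl -fsS -u "$CHARTS_USER:$CHARTS_PASS" -T index.yaml "$CHARTS_REPO_URL/"
```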
Now in each of our microservices, we have a charts directory, inside which we have only 2 files:
Chart.yaml
values.yaml
Directory structure of a sample microservice:
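(The original directory listing did not survive here; based on the charts directory described above and the helm commands later in this answer, the layout is roughly:)

```text
my-service-A/
├── src/                      # application source code (assumed)
└── charts/
    └── my-service-A/
        ├── Chart.yaml
        └── values.yaml
```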
The Chart.yaml for this microservice A looks like:
apiVersion: v2
name: my-service-A
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1
dependencies:
  - name: base-microservice
    version: "0.1.6"
    repository: "alias:azure"
And the values.yaml for this microservice A contains the values that need to be overridden from the base-microservice defaults, e.g.:
base-microservice:
  nameOverride: my-service-A
  image:
    repository: myDockerRepo/my-service-A
  resources:
    limits:
      cpu: 1000m
      memory: 1024Mi
    requests:
      cpu: 300m
      memory: 500Mi
  probe:
    initialDelaySeconds: 120
  nginx-ingress:
    enabled: true
  ingress:
    enabled: true
Now while doing Continuous Deployment of this microservice, we have these steps (among others):
Fetch helm dependencies (helm dependency update ./charts/my-service-A)
Deploy my release to kubernetes (helm upgrade --install my-service-a ./charts/my-service-A)
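In GitLab CI terms, those two deployment steps could be a job along these lines (the stage name and the CHARTS_REPO_URL variable are placeholders):

```yaml
deploy-my-service-A:
  stage: deploy
  script:
    # "azure" is the repository alias referenced in the microservice's Chart.yaml
    - helm repo add azure "$CHARTS_REPO_URL"
    - helm dependency update ./charts/my-service-A
    - helm upgrade --install my-service-a ./charts/my-service-A
```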
I have 10 Deployments that are currently managed by 10 unique helm releases.
But these deployments are so similar that I wish to group them together and manage it as one helm release.
All deployments use the same helm chart, the only difference is in the environment variables passed to the deployment pod which is set in values.yaml.
Can I do this using helm chart dependencies and the alias field?
dependencies:
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: deploy-1
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: deploy-2
  - name: subchart
    repository: http://localhost:10191
    version: 0.1.0
    alias: deploy-3
I can now set values for individual deploys in one parent Chart's values.yaml:
deploy-1:
  environment:
    key: value-1
deploy-2:
  environment:
    key: value-2
deploy-3:
  environment:
    key: value-3
But 99% of the values that I need to set for all the deployments are the same.
How do I avoid duplicating them across the deploy-n keys in the parent chart's values.yaml file?
If you have 10 deployments that are similar, but have only a few differences in environment variables, you can group them together and manage them as one helm release using a single values.yaml file. Here's how you can achieve this:
Define a single dependency in your parent chart's Chart.yaml file that points to the helm chart for your deployments:
dependencies:
- name: my-deployments
version: 1.0.0
repository: https://charts.example.com
Create a single values.yaml file at the root of your parent chart (not in the templates directory) that defines the values for your deployments:
my-deployments:
  environment:
    key1: value1
    key2: value2
    key3: value3
In your deployment manifests, replace any environment variable values that vary between deployments with references to the values in the parent chart's values.yaml file. Note that a key containing a hyphen, like my-deployments, cannot be accessed with dot notation in Go templates, so the index function is used instead:
env:
  - name: ENV_VAR_1
    value: {{ index .Values "my-deployments" "environment" "key1" }}
  - name: ENV_VAR_2
    value: {{ index .Values "my-deployments" "environment" "key2" }}
  - name: ENV_VAR_3
    value: {{ index .Values "my-deployments" "environment" "key3" }}
This way, you only need to define the environment variable values that vary between deployments in the parent chart's values.yaml file, and Helm will automatically apply these values to all the deployments.
Note that this approach assumes that all of your deployments use the same Helm chart with the same structure for their deployment manifests. If this is not the case, you may need to adjust the approach accordingly to fit the specific needs of your deployments.
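One way to avoid repeating the shared 99% across the deploy-n aliases, not covered above, is plain YAML anchors in the parent chart's values.yaml, since Helm parses values files as ordinary YAML. The common key name and values below are illustrative:

```yaml
# Shared values defined once under an anchor, then merged into each alias.
# Subcharts ignore unknown top-level keys, so "common" itself is harmless.
common: &common
  image:
    repository: myrepo/app
    tag: "1.0.0"
  resources:
    limits:
      cpu: 500m

deploy-1:
  <<: *common
  environment:
    key: value-1
deploy-2:
  <<: *common
  environment:
    key: value-2
```

Be aware that the `<<:` merge key performs a shallow merge, so nested maps that differ per alias must still be written out in full.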
I have the application properties defined for each environment inside a config folder.
config/
  application-dev.yml
  application-dit.yml
  application-sit.yml
When I'm trying to deploy the application in dev, I need to create the ConfigMap from application-dev.yml, but under the name application.yml.
When deploying in dit, the ConfigMap should be created from application-dit.yml. The file name inside the ConfigMap must always be application.yml.
Any suggestions?
When using helm to manage projects, different values.yaml files are generally used to distinguish between different environments (development/pre-release/online).
Suppose your configmap file is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $.Values.cm.name }}
data:
  application.yml: |-
    {{- $.Files.Get $.Values.cm.path | nindent 4 }}
In dev, define a values-dev.yaml file:
cm:
  name: test
  path: config/application-dev.yml
When you install the chart in dev, you can use the following command:
helm install test . -f values-dev.yaml
In dit, define a values-dit.yaml file:
cm:
  name: test
  path: config/application-dit.yml
When you install the chart in dit, you can use the following command:
helm install test . -f values-dit.yaml
I'm trying to set up a Helm chart repo using GitHub Pages. Everything appears to work fine with generating the index.yaml etc. via GitHub Actions, awesome.
index.yaml
apiVersion: v1
entries:
  test:
  - apiVersion: v1
    created: "2021-08-27T09:54:44.830905882Z"
    description: Testing the chart releaser
    digest: b41b263d236ef9eee0a75e877982a10ea73face093b4999c6004263b041f0fad
    keywords:
    - test
    name: test
    urls:
    - https://github.com/xxx/xxx/releases/download/test-0.0.9/test-0.0.9.tgz
    version: 0.0.9
generated: "2021-08-27T09:54:44.587113879Z"
And a test chart
name: test
description: Testing the chart releaser
version: 0.0.9
apiVersion: v1
keywords:
  - test
sources:
home:
However, when I try to add the repo using
helm repo add test https://didactic-quibble-e0daddd0.pages.github.io/
I get the error
Error: looks like "http://didactic-quibble-e0daddd0.pages.github.io/" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: line 188: mapping values are not allowed in this context
The URL http://didactic-quibble-e0daddd0.pages.github.io/index.yaml returns the index.yaml file described above.
Any help would be much appreciated.
Cheers
I would suggest a different procedure, and hope it helps you host a Helm repo on GitHub.
I have 2 Helm charts here:
Library: this is used by other Helm charts as a dependency. In the example below, I host it as a Helm repo on GitHub.
App: this consumes the Library chart and adds extra functionality.
Library chart: in your Library directory:
helm package libchart
helm repo index .
Validate that the index file was created and the entries are correct:
more index.yaml
apiVersion: v1
entries:
  libchart:
  - apiVersion: v2
    appVersion: 1.16.0
    created: "2022-11-30T08:57:01.109116+09:00"
    description: A Helm chart for Kubernetes
    digest: 8aa38d70d61f81cf31627a7d7d9cc5c293f340bf01918c9a16ac1fac9fcc96e9
    name: libchart
    type: library
    urls:
    - libchart-0.1.0.tgz
    version: 0.1.0
generated: "2022-11-30T08:57:01.108194+09:00"
Commit the index.yaml and .tgz files to Git.
Add helm repo:
#~: helm repo add mylib --username parjun8840 --password TOPSECRET-TOKEN-FROM-GIT https://raw.githubusercontent.com/YOURGITUSER/helm-library/master
"mylib" has been added to your repositories
#~:helm-library arjunpandey$ helm repo update
App Chart: In your App Directory
#~:appchart arjunpandey$ more Chart.yaml
apiVersion: v2
name: appchart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: libchart
    version: 0.1.0
    repository: https://raw.githubusercontent.com/YOURGITUSER/helm-library/master
#~:appchart arjunpandey$ helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "mylib" chart repository
...Successfully got an update from the "newrelic" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading libchart from repo https://raw.githubusercontent.com/YOURGITUSER/helm-library/master
Deleting outdated charts
I'm learning Helm to set up my 3 AWS EKS clusters: sandbox, staging, and production.
How can I set up my templates so that some values are derived from which cluster the chart is being installed in? For example, in my myapp/templates/deployment.yaml I may want
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
I may want replicas to be 1, 2, or 4, depending on whether I'm installing the chart in my sandbox, staging, or production cluster respectively. I want to do the same trick for the CPU and memory requests and limits for my pods, for example.
I was thinking of having something like this in my values.yaml file
environments:
  - sandbox
  - staging
  - production
perClusterValues:
  replicas:
    - 1
    - 2
    - 4
  cpu:
    requests:
      - 256m
      - 512m
      - 1024m
    limits:
      - 512m
      - 1024m
      - 2048m
  memory:
    requests:
      - 1024Mi
      - 1024Mi
      - 2048Mi
    limits:
      - 2048Mi
      - 2048Mi
      - 3072Mi
So if I install a helm chart in the sandbox environment, I want to be able to do
$ helm install myapp myapp --set environment=sandbox
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  {{- if not .Values.autoscaling.enabled }}
  # In pseudo-code, in my YAML files:
  # get the index value from the .Values.environments list
  # based on the passed-in environment parameter
  {{ $myIndex = indexOf .Values.environments .Values.environment }}
  replicas: {{ .Values.perClusterValues.replicas $myIndex }}
  {{- end }}
I hope you understand my logic, but what is the correct syntax? Or is this even a good approach?
You can use the helm install -f option to pass an extra YAML values file, which takes precedence over the chart's own values.yaml file. So, using exactly the template structure you already have, you can provide alternate values files:
# sandbox.yaml
autoscaling:
  enabled: false
replicaCount: 1
# production.yaml
autoscaling:
  enabled: true
replicaCount: 5
And then when you go to deploy the chart, run it with
helm install myapp . -f production.yaml
(You can also helm install --set replicaCount=3 to override specific values, but the --set syntax is finicky and unusual; using a separate YAML file per environment is probably easier. Some tooling might be able to take advantage of JSON files also being valid YAML to write out additional deploy-time customizations.)
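For completeness, the index-based lookup sketched in the question can be written in real template syntax, though the per-environment values files above are usually simpler. A minimal sketch, assuming the environment is passed with --set environment=sandbox:

```yaml
spec:
  {{- /* Find the position of .Values.environment in the environments list */}}
  {{- $myIndex := 0 }}
  {{- range $i, $env := .Values.environments }}
  {{- if eq $env $.Values.environment }}{{- $myIndex = $i }}{{- end }}
  {{- end }}
  replicas: {{ index .Values.perClusterValues.replicas $myIndex }}
```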
I have an application which is hosted in multiple environments, and a Helm chart is used to deploy it. I have this values.yaml:
app:
  component: mobile
  type: web
  env: prd   # overridden with external parameters at deployment time (dev / stg / uat)
image:
  repository: ********.dkr.ecr.ap-south-1.amazonaws.com/mobile
  pullPolicy: IfNotPresent
  versions:
    v1:
      name: stable
      replicaCount: 2
      tag: latest
    v2:
      name: release
      replicaCount: 1
      tag: latest
The Deployment template iterates over versions v1 and v2 (canary fashion). Canary deployment is performed only in the PRD environment, so in DEV / STG / UAT only one version is deployed, and the loop needs to run only once.
{{- range $version, $val := $.Values.image.versions }}
---
apiVersion: apps/v1
kind: Deployment
I can set the required pod count for v2 to 0, but that still creates unnecessary metadata (a Deployment and a ReplicaSet).
So, is there any way to break the loop in a Helm template with a condition (env: prd), to avoid iterating over v2?
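There is no break in Go templates, but a guard inside the range can skip v2 outside prd. A minimal sketch, assuming the env value is reachable as $.Values.app.env; the metadata below is illustrative, not from the original template:

```yaml
{{- range $version, $val := $.Values.image.versions }}
{{- /* Outside prd, render only the v1 (stable) deployment */}}
{{- if or (eq $.Values.app.env "prd") (eq $version "v1") }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-{{ $val.name }}
spec:
  replicas: {{ $val.replicaCount }}
{{- end }}
{{- end }}
```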