I have 10 Deployments that are currently managed by 10 separate Helm releases.
But these deployments are so similar that I would like to group them together and manage them as one Helm release.
All deployments use the same Helm chart; the only difference is in the environment variables passed to the deployment pods, which are set in values.yaml.
Can I do this using helm chart dependencies and the alias field?
dependencies:
- name: subchart
repository: http://localhost:10191
version: 0.1.0
alias: deploy-1
- name: subchart
repository: http://localhost:10191
version: 0.1.0
alias: deploy-2
- name: subchart
repository: http://localhost:10191
version: 0.1.0
alias: deploy-3
I can now set values for individual deploys in one parent Chart's values.yaml:
deploy-1:
environment:
key: value-1
deploy-2:
environment:
key: value-2
deploy-3:
environment:
key: value-3
But 99% of the values that I need to set for all the deployments are the same.
How do I avoid duplicating them across the deploy-n keys in the parent chart's values.yaml file?
If you have 10 deployments that are similar but differ only in a few environment variables, you can group them together and manage them as one Helm release with a single values.yaml file. Here's how:
Define a single dependency in your parent chart's Chart.yaml file that points to the helm chart for your deployments:
dependencies:
- name: my-deployments
version: 1.0.0
repository: https://charts.example.com
Define the shared values for your deployments in the values.yaml file at the root of your parent chart (not in the templates directory):
my-deployments:
environment:
key1: value1
key2: value2
key3: value3
In your deployment templates, replace any environment variable values that vary between deployments with references to the values in the parent chart's values.yaml file. Because the my-deployments key contains a hyphen, it cannot be accessed with dot notation in a Go template; use the index function instead:
env:
- name: ENV_VAR_1
value: {{ index .Values "my-deployments" "environment" "key1" }}
- name: ENV_VAR_2
value: {{ index .Values "my-deployments" "environment" "key2" }}
- name: ENV_VAR_3
value: {{ index .Values "my-deployments" "environment" "key3" }}
This way, you define the shared environment variable values once in the parent chart's values.yaml file, and Helm applies them to all the deployments.
Note that this approach assumes that all of your deployments use the same Helm chart with the same structure for their deployment manifests. If this is not the case, you may need to adjust the approach accordingly to fit the specific needs of your deployments.
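Coming back to the original question about duplication across the deploy-n keys: one option (a sketch relying on plain YAML anchors and merge keys, not on anything Helm-specific; the keys and values below are illustrative) is to define the shared block once and merge it into each alias's section. Helm's YAML parser expands anchors when the values file is loaded:

```yaml
# Parent chart values.yaml -- the shared settings are written once under
# an anchored key ("common" is an illustrative name), then merged into
# each deploy-n block with the YAML merge key (<<).
common: &common
  image:
    repository: example/app   # hypothetical shared settings
    tag: "1.0.0"
  resources:
    limits:
      cpu: 500m

deploy-1:
  <<: *common
  environment:
    key: value-1

deploy-2:
  <<: *common
  environment:
    key: value-2
```

Note that merge keys only merge the top level of the anchored map; deeply nested per-deployment overrides still have to be spelled out, or pushed into the subchart's own default values.yaml.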
How to append a list to another list inside a dictionary using Helm?
I have a Helm chart specifying the key helm inside of an Argo CD Application (see snippet below).
Now given a values.yaml file, e.g.:
helm:
valueFiles:
- myvalues1.yaml
- myvalues2.yaml
I want to append helm.valueFiles to the list below. How can I achieve this? The merge function doesn't seem to satisfy my needs in this case, since precedence is given to the first dictionary.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: guestbook
# You'll usually want to add your resources to the argocd namespace.
namespace: argocd
# Add this finalizer ONLY if you want these to cascade delete.
finalizers:
- resources-finalizer.argocd.argoproj.io
# Add labels to your application object.
labels:
name: guestbook
spec:
# The project the application belongs to.
project: default
# Source of the application manifests
source:
repoURL: https://github.com/argoproj/argocd-example-apps.git # Can point to either a Helm chart repo or a git repo.
targetRevision: HEAD # For Helm, this refers to the chart version.
path: guestbook # This has no meaning for Helm charts pulled directly from a Helm repo instead of git.
# helm specific config
chart: chart-name # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts.
helm:
passCredentials: false # If true then adds --pass-credentials to Helm commands to pass credentials to all domains
# Extra parameters to set (same as setting through values.yaml, but these take precedence)
parameters:
- name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
value: mydomain.example.com
- name: "ingress.annotations.kubernetes\\.io/tls-acme"
value: "true"
forceString: true # ensures that value is treated as a string
# Use the contents of files as parameters (uses Helm's --set-file)
fileParameters:
- name: config
path: files/config.json
# Release name override (defaults to application name)
releaseName: guestbook
# Helm values files for overriding values in the helm chart
# The path is relative to the spec.source.path directory defined above
valueFiles:
- values-prod.yaml
https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml
If you only need to append helm.valueFiles to the existing .spec.source.helm.valueFiles, you can range over the list from the values file and append its items like this:
valueFiles:
- values-prod.yaml
{{- range $item := .Values.helm.valueFiles }}
- {{ $item }}
{{- end }}
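If the appended entries are plain strings, an equivalent sketch is to dump the whole list with toYaml, which also takes care of quoting; adjust the nindent width so it matches the indentation of the surrounding list items:

```yaml
valueFiles:
  - values-prod.yaml
  {{- toYaml .Values.helm.valueFiles | nindent 2 }}
```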
I am new to Helm and using Helm 3. I am trying to build a simple Helm chart which depends on the mongodb Helm chart available from Bitnami.
This is the structure of my chart:
mychart
|- charts\
|- mongodb-8.1.1.tgz
|- Chart.yaml
|- values.yaml
I am trying to override the value of mongodb.rootPassword (and some other properties) through the values.yaml file of the parent chart. However, it does not override the value specified and reverts to the default values from the mongodb chart.
It would be a great help to understand what I am doing wrong and how can I override the value of the child chart from the parent chart.
Here are the contents of my files:
Chart.yaml
apiVersion: v2
name: mychart
appVersion: "1.0"
description: mychart has the best description
version: 0.1.0
type: application
dependencies:
- name: mongodb
version: 8.1.1
repository: https://charts.bitnami.com/bitnami
condition: mongodb.enabled
values.yaml
mongodb:
global:
namespaceOverride: production
fullnameOverride: mongo-mychart
useStatefulSet: true
auth:
rootPassword: example
persistence:
size: 100Mi
This can happen if the values.yaml file is malformed. In this case, the parent chart's values.yaml contained a few stray encoded characters, which caused Helm to ignore the file and fall back to the child chart's default values.
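When a parent's overrides seem to be ignored, it can help to lint the chart and render it locally to inspect what Helm actually computed. For example (release and chart names taken from the question):

```
helm lint ./mychart                                # flags malformed YAML in the chart's values files
helm install mychart ./mychart --dry-run --debug   # debug output includes the COMPUTED VALUES section
```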
I am new to this group. Glad to have connected.
I am wondering if someone has experience in using an umbrella helm chart in a CI/CD process?
In our project, we have 2 separate developer contractors. Each contractor is responsible for specific microservices.
We are using Harbor as our repository for charts and accompanying container images and GitLab for our code repo and CI/CD orchestrator...via GitLab runners.
The plan is to use an umbrella chart to deploy all approx 60 microservices as one system.
I am interested in hearing from any groups that have taken a similar approach and how they treated/handled the umbrella chart in their CI/CD process.
Thank you for any input/guidance.
VR,
We use a similar pattern; we have 30+ microservices.
We have a GitHub repo for base charts.
The base-microservice chart has all the usual Kubernetes templates (HPA, ConfigMap, Secret, Deployment, Service, Ingress, etc.), each of which can be enabled or disabled individually.
Note - the base chart can even contain other charts too,
e.g. this base chart has a dependency on the nginx-ingress chart:
apiVersion: v2
name: base-microservice
description: A base helm chart for deploying a microservice in Kubernetes
type: application
version: 0.1.6
appVersion: 1
dependencies:
- name: nginx-ingress
version: "~1.39.1"
repository: "alias:stable"
condition: nginx-ingress.enabled
Below is an example secrets.yaml template:
{{- if .Values.secrets.enabled -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "base-microservice.fullname" . }}
type: Opaque
data:
{{- toYaml .Values.secrets.data | nindent 2}}
{{- end}}
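A matching values fragment for that template might look like this (key names follow the template above; the actual data is illustrative):

```yaml
secrets:
  enabled: true          # toggles rendering of the Secret template
  data:
    DB_PASSWORD: cGFzc3dvcmQ=   # base64-encoded, since it is copied straight into Secret data
```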
Now when a commit happens in this base-charts repo, as part of the CI process (along with other things) we:
Check whether a Helm index already exists in the charts repository.
If it exists, download the existing index and merge the newly generated index with it: helm repo index --merge oldindex/index.yaml .
If it does not exist, create a new Helm index: helm repo index .
Then we upload the archived charts and the index.yaml to our charts repository.
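The packaging and index-merge steps above can be sketched as follows (the repository URL and directory names are illustrative):

```
# Package the chart into an output directory
helm package base-microservice/ -d packaged/

# If an index already exists, merge it so entries for previously
# published charts are preserved; otherwise generate a fresh one
curl -sf -o oldindex/index.yaml https://charts.example.com/index.yaml \
  && helm repo index packaged/ --merge oldindex/index.yaml --url https://charts.example.com \
  || helm repo index packaged/ --url https://charts.example.com

# ...then upload packaged/*.tgz and packaged/index.yaml to the charts repository
```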
Now in each of our microservices, we have a charts directory, inside which we have only 2 files:
Chart.yaml
values.yaml
Directory structure of a sample microservice (the chart lives under charts/my-service-A, which is the path used by the deployment commands further down):
my-service-A-repo
|- charts\
|- my-service-A\
|- Chart.yaml
|- values.yaml
The Chart.yaml for this microservice A looks like:
apiVersion: v2
name: my-service-A
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1
dependencies:
- name: base-microservice
version: "0.1.6"
repository: "alias:azure"
And the values.yaml for microservice A contains only the values that need to override the base-microservice defaults, e.g.:
base-microservice:
nameOverride: my-service-A
image:
repository: myDockerRepo/my-service-A
resources:
limits:
cpu: 1000m
memory: 1024Mi
requests:
cpu: 300m
memory: 500Mi
probe:
initialDelaySeconds: 120
nginx-ingress:
enabled: true
ingress:
enabled: true
Now while doing Continuous Deployment of this microservice, we have these steps (among others):
Fetch helm dependencies (helm dependency update ./charts/my-service-A)
Deploy my release to kubernetes (helm upgrade --install my-service-a ./charts/my-service-A)
We are using Helm to deploy many charts, but for simplicity let's say it is two charts, a parent and a child:
helm/parent
helm/child
The parent chart has a helm/parent/requirements.yaml file which specifies:
dependencies:
- name: child
repository: file://../child
version: 0.1.0
The child chart requires a bunch of environment variables on startup for configuration, for example in helm/child/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
spec:
replicas: 1
strategy:
type: Recreate
template:
spec:
containers:
env:
- name: A_URL
value: http://localhost:8080
What's the best way to override the child's environment variable from the parent chart, so that I can run the parent using below command and set the A_URL env variable for this instance to e.g. https://www.mywebsite.com?
helm install parent --name parent-release --namespace sample-namespace
I tried adding the variable to the parent's helm/parent/values.yaml file, but to no avail
global:
repository: my_repo
tag: 1.0.0-SNAPSHOT
child:
env:
- name: A_URL
value: https://www.mywebsite.com
Is the syntax of the parent's value.yaml correct? Is there a different approach?
In the child chart you have to explicitly reference a value from the configuration. (Having made this change you probably need to run helm dependency update from the parent chart directory.)
# child/templates/deployment.yaml, in the pod spec
env:
- name: A_URL
value: {{ .Values.aUrl | quote }}
You can give it a default value for the child chart.
# child/values.yaml
aUrl: "http://localhost:8080"
Then in the parent chart's values file, you can provide an override value for that.
# parent/values.yaml
child:
aUrl: "http://elsewhere"
You can't use Helm to override or inject arbitrary YAML, except to the extent the templates allow for it.
Unless the value is set up using the templating system, there is no way to directly modify it in Helm 2.
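With the child chart wired up as above, the same override can also be applied at install time instead of through the parent's values.yaml (Helm 2 syntax, matching the question; --set on a subchart value uses the subchart's name as the key prefix):

```
helm install parent --name parent-release --namespace sample-namespace \
  --set child.aUrl=https://www.mywebsite.com
```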
I've created a custom helm chart with elastic-stack as a subchart with following configurations.
# requirements.yaml
dependencies:
- name: elastic-stack
version: 1.5.0
repository: '@stable'
# values.yaml
elastic-stack:
kibana:
# at this level enabled is not recognized (does not work)
# enabled: true
# configs like env, only work at this level
env:
ELASTICSEARCH_URL: http://foo-elasticsearch-client.default.svc.cluster.local:9200
service:
externalPort: 80
# enabled only works at root level
elasticsearch:
enabled: true
kibana:
enabled: true
logstash:
enabled: false
What I don't get is why I have to define the enabled flags outside the elastic-stack: key while all other configuration goes inside it.
Is this normal Helm behavior or a misconfiguration in the elastic-stack chart?
Helm conditions are evaluated in the top parent's values:
Condition - The condition field holds one or more YAML paths
(delimited by commas). If this path exists in the top parent’s values
and resolves to a boolean value, the chart will be enabled or disabled
based on that boolean value
Take a look at the conditions in requirements.yaml from stable/elastic-stack:
- name: elasticsearch
version: ^1.17.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: elasticsearch.enabled
- name: kibana
version: ^1.1.0
repository: https://kubernetes-charts.storage.googleapis.com/
condition: kibana.enabled
- name: logstash
version: ^1.2.1
repository: https://kubernetes-charts.storage.googleapis.com/
condition: logstash.enabled
The condition paths are elasticsearch.enabled, kibana.enabled and logstash.enabled, so you need to set them at the top level of your parent chart's values.
Those properties in the parent's values.yaml serve as switches for the subcharts.
You are supposed to use condition in your requirements.yaml to control the installation of your dependent subcharts. If no condition is provided, Helm simply proceeds to deploy the subchart without any problem.
Also, those values live in the parent's values.yaml because they are used by the parent chart itself; moreover, they cannot be referenced inside the subchart unless they are provided as global values or nested under the subchart's name key (which in your case is elastic-stack).
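As a sketch of the global alternative mentioned above (key names are illustrative): values placed under global are visible to the parent chart and to every subchart, unlike values nested under the subchart's name, which only that subchart sees:

```yaml
# parent values.yaml
global:
  # readable as .Values.global.esUrl in the parent AND in all subcharts
  esUrl: http://foo-elasticsearch-client.default.svc.cluster.local:9200
elastic-stack:
  kibana:
    env:
      # only visible inside the elastic-stack subchart
      ELASTICSEARCH_URL: http://foo-elasticsearch-client.default.svc.cluster.local:9200
```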