Include configmap with non-managed helm chart - kubernetes

I was wondering whether it is possible to include a ConfigMap, with its own values.yml file, alongside a Helm chart from a repository that I am not managing locally. This way, I can uninstall the ConfigMap together with the chart via the release name.
Example:
I am using New Relic's Helm chart repository and installing the charts by their repo name. I want to include a ConfigMap used for infrastructure settings in the same Helm deployment, without having to add it independently with kubectl apply.
I also want to avoid managing the repo locally, as I am pinning the version and other values separately from the helm upgrade --install --set invocations.

What you could do is use Kustomize. Let me show you with an example that I use for my Prometheus installation.
I'm using the kube-prometheus-stack helm chart, but I add some more custom resources, like a SecretProviderClass.
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: kube-prometheus-stack
    repo: https://prometheus-community.github.io/helm-charts
    version: 39.11.0
    releaseName: prometheus
    namespace: prometheus
    valuesFile: values.yaml
    includeCRDs: true

resources:
  - secretproviderclass.yaml
I can then build the Kustomize YAML by running kustomize build . --enable-helm from the same folder as my kustomization.yaml file.
I use this with my gitops setup, but you can use this standalone as well.
My folder structure would look something like this:
.
├── kustomization.yaml
├── secretproviderclass.yaml
└── values.yaml
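
Applied to the question above, the extra resource doesn't have to be a SecretProviderClass; a plain ConfigMap listed under resources: works the same way. A minimal sketch (the name, namespace and data key are placeholders):
configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: infra-settings   # placeholder name
  namespace: newrelic    # match the chart's target namespace
data:
  infra.yml: |
    # infrastructure agent settings go here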

Using only Helm, without any third-party tools like Kustomize, there are two solutions:
1. Depend on the configurability of the chart you are using, as described by @Akshay in the other answer.
2. Declare the chart you are looking to add a ConfigMap to as a dependency.
You can manage the Chart dependencies in the Chart.yaml file:
# Chart.yaml
dependencies:
  - name: nginx
    version: "1.2.3"
    repository: "https://example.com/charts"
With the dependency in place, you can add your own resource files (e.g., the ConfigMap) to the chart. During Helm install, all dependencies and your custom files will be merged into a single Helm deployment.
my-nginx-chart/
├── values.yaml         # defines all values, including those of the dependencies
├── Chart.yaml          # declares the dependencies
└── templates/          # custom resources to be added on top of the dependencies
    └── configmap.yaml  # the ConfigMap you want to add
To configure values for a dependency, you need to nest the parameters under the dependency's name in your values.yaml:
my-configmap-value: Hello World

nginx: # <- refers to the "nginx" dependency
  image: ...
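
For reference, a minimal sketch of a template that consumes the top-level value above; note that keys containing hyphens have to be read with index in Go templates:
templates/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-settings
data:
  greeting: {{ index .Values "my-configmap-value" | quote }}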

Related

Kustomize HelmChartInflationGeneration Error With ChartName Not Found

I have the following chartInflator.yml file:
apiVersion: builtin
kind: ChartInflator
metadata:
  name: project-helm-inflator
chartName: helm-k8s
chartHome: ../../../helm-k8s/
releaseName: project-monitoring-chart
values: ../../values.yaml
releaseNamespace: project-monitoring-ns
When I ran it using this, I got the error message below:
$ kustomize build .
Error: loading generator plugins: failed to load generator: plugin HelmChartInflationGenerator.builtin.[noGrp]/project-helm-inflator.[noNs] fails configuration: chart name cannot be empty
Here is my project structure:
project
- helm-k8s
  - values.yml
  - Chart.yml
  - templates
    - base
      - project-namespace.yml
      - grafana
        - grafana-service.yml
        - grafana-deployment.yml
        - grafana-datasource-config.yml
      - prometheus
        - prometheus-service.yml
        - prometheus-deployment.yml
        - prometheus-config.yml
        - prometheus-roles.yml
      - kustomization.yml
    - prod
      - kustomization.yml
    - test
      - kustomization.yml
I think you may have found some outdated documentation for the helm chart generator. The canonical documentation for this is here. Reading that implies several changes:
- Include the inflator directly in your kustomization.yaml in the helmCharts section.
- Use name instead of chartName.
- Set chartHome in the helmGlobals section rather than per-chart.
That gets us something like this in our kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmGlobals:
  chartHome: ../../../helm-k8s/

helmCharts:
  - name: helm-k8s
    releaseName: project-monitoring-chart
    values: ../../values.yaml
    releaseNamespace: project-monitoring-ns
I don't know if this will actually work -- you haven't provided a reproducer in your question, and I'm not familiar enough with Helm to whip one up on the spot -- but I will note that your project layout is highly unusual. You appear to be trying to use Kustomize to deploy a Helm chart that itself contains your Kustomize configuration, and it's not clear what the benefit of this layout is over just creating a Helm chart and then using Kustomize to inflate it from outside of the chart's templates directory.
You may need to add --load-restrictor LoadRestrictionsNone when calling kustomize build for this to work; by default, the chartHome location must be contained by the same directory that contains your kustomization.yaml.
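For example, from the directory containing the kustomization.yaml above:
kustomize build . --enable-helm --load-restrictor LoadRestrictionsNone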
Update
To make sure things are clear, this is what I'm recommending:
1. Remove the kustomize bits from your helm chart, so that it looks like this.
2. Publish your helm charts somewhere. I've set up GitHub Pages for that repository and published the charts at http://oddbit.com/open-electrons-deployments/.
3. Use Kustomize to deploy the chart with transformations. Here we add a -prod suffix to all the resources:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: open-electrons-monitoring
    repo: http://oddbit.com/open-electrons-deployments/

nameSuffix: -prod
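
You can then render and apply the result as usual, e.g.:
kustomize build . --enable-helm | kubectl apply -f -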

Declarative approach to deploy Helm chart by Argocd to multiple environments

I'm using ArgoCD with Helm charts. I have two environments: uat and prod.
As far as I understand, the proper approach for Helm is to have a base folder with the commons plus a folder per environment.
So I have single branch with 3 folders:
base # for commons: Chart.yaml, templates, etc.
uat # for uat values.yaml
prod # for prod values.yaml
In my Helm chart I have the following Chart.yaml (stored in the base folder):
apiVersion: v1
appVersion: 1.0.11
name: my-nice-app
version: 1.0.11
With every release I increase appVersion and version (version is used as image tag version in charts).
I use a declarative approach to deploy the Helm chart (this is the uat Application resource; prod is similar):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-nice-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: some-url
    targetRevision: HEAD
    path: base
    helm:
      version: v3
      valueFiles:
        - uat/values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: uat
  syncPolicy:
    syncOptions:
      - CreateNamespace=false
    automated:
      selfHeal: true
      prune: true
Question:
I update the uat values file.
I update Chart.yaml with a new version.
I would like to deploy uat only (but when I update base, prod is also triggered).
Where or how should I store Chart.yaml? Should I change the ArgoCD Application resource? Or is the only option to duplicate the chart per env?
I would also prefer not to store any version-related info in the ArgoCD Application resource (so I don't have to change it every time).
It would be nice not to have to use Kustomize.
You should split it into two charts: a base chart and a value chart.
The base chart is a chart dependency of the value chart; that way, if you update the base chart, the value chart won't be affected as long as you don't update the chart dependency.
The Chart.yaml of the value chart will look like this:
apiVersion: v2
name: my-nice-app-prod
description: Chart for production
type: application
version: 0.0.1
appVersion: "1.0.0"
dependencies:
  - name: my-nice-app-chart
    version: 0.1.9
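
The values.yaml of the value chart then overrides the base chart under its dependency name (a sketch; the keys inside are whatever my-nice-app-chart actually exposes):
# values.yaml of my-nice-app-prod
my-nice-app-chart:
  image:
    tag: "1.0.11"   # hypothetical value exposed by the base chart
  replicaCount: 1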
Link references:
https://helm.sh/docs/helm/helm_dependency/
You should use a variants folder.
The base directory holds configuration which is common to all environments; it is not expected to change often. If you want to make changes to multiple environments at the same time, it is best to use the "variants" folder.
Please go through this doc
https://codefresh.io/blog/how-to-model-your-gitops-environments-and-promote-releases-between-them/

helm dependencies with different namespaces

Right now, I have to install multiple Helm charts in different namespaces for my product to work. I am trying to create a super Helm chart that includes the charts of my tools (as mentioned above) and installs them in one shot. My problem is that, since these tools live in different namespaces, I am not sure where to specify the namespace key for each dependency (chart). For example, if below is the Chart.yaml of my super Helm chart:
dependencies:
  - name: first_chart
    version: 1.2.3
    repository: https://firstchart.repo
  - name: second_chart
    version: 1.5.6
    repository: https://secondchart.repo
I want my first chart to be installed in namespace foo and the second chart to be installed in namespace bar.
I was looking at using conditions, but I believe conditions will only take a boolean as a value.
I stumbled upon this link (https://github.com/helm/helm/issues/2060), which says that we can do it in Helm 3, but it is mostly about how to keep releases in different namespaces; it does not specifically answer my question.
There is no builtin way to do this with pure Helm, but there is with helmfile.
Your example as helmfile.yaml:
releases:
  - name: chart1              # name of the release (helm install <...> first_chart)
    chart: repo1/first_chart
    version: 1.2.3
    namespace: foo
  - name: chart2
    chart: repo2/second_chart
    version: 1.5.6
    namespace: bar

# in case you want helmfile to automatically update repos
repositories:
  - name: repo1
    url: https://firstchart.repo
  - name: repo2
    url: https://secondchart.repo
Then, run:
- helmfile sync => runs helm install/upgrade on all releases, or
- helmfile apply => same as sync, but does a diff first to only upgrade/install releases that changed
There is way more to helmfile, but this is the gist.
PS: if you struggle with values or want to have something similar to how umbrella Chart values are handled, have a look at helmfile: a simple trick to handle values intuitively
The way I solved this for my clusters is with ArgoCD's App of Apps cluster bootstrapping model. Of course, it requires that ArgoCD is installed in the cluster. However, for many reasons not relevant to this answer, I would highly encourage installing ArgoCD regardless of the ease of bootstrapping it provides.
Assuming ArgoCD is in place, the structure is a single Helm chart containing templates for each of the child charts it will deploy, each managed via Argo's Application CRD. You will notice there is a field in the CRD, spec.destination.namespace, which governs where the chart will be deployed.
An example Application template which governs my cert-manager chart deployment to the cert-manager namespace looks like:
{{- if .Values.certManager.enabled }}
# ref: https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#applications
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: cluster-configs
  # Source of the application manifests
  source:
    repoURL: https://github.com/yourOrg/Helm
    targetRevision: {{ .Values.targetRevision }}
    path: charts/cert-manager-chart
    # helm specific config
    helm:
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
      {{- range .Values.certManager.valueFiles }}
        - {{ . }}
      {{- end }}
      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v3
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
{{- end }}
The corresponding values.yaml file for this parent chart may look something like the following, with the path to the desired value file(s) in each child chart's directory specified.
targetRevision: v1.11.0

certManager:
  enabled: true
  valueFiles:
    - "values.yaml"

clusterAutoScaler:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"

clusterResourceLimits:
  valueFiles:
    - "values.yaml"

externalDns:
  valueFiles:
    - "envs/dev-account/saas/values.yaml"

ingressNginx:
  enabled: true
  valueFiles:
    - "values.yaml"

How to choose dependency release name for custom Helm 3 chart

The syntax for adding a dependency to a Helm 3 chart looks like this (inside of Chart.yaml).
How can you specify a release name if you need multiple instances of a dependency?
apiVersion: v2
name: shared
description: Ingress Controller and Certificate Manager
type: application
version: 0.1.1
appVersion: 0.1.0
dependencies:
  - name: cert-manager
    version: ~0.13
    repository: https://charts.jetstack.io
In the CLI it's just helm upgrade -i RELEASE_NAME CHART_NAME -n NAMESPACE
But inside of Chart.yaml the option to specify a release seems to be missing.
The next question I have is: if there's a weird way to do it, how would you write the values for each instance in the values.yaml file?
After 5 more minutes of searching I found that there's an alias field that can be added, like so:
dependencies:
  - name: cert-manager
    alias: first-one
    version: ~0.13
    repository: https://charts.jetstack.io
  - name: cert-manager
    alias: second-one
    version: ~0.13
    repository: https://charts.jetstack.io
And in the values.yaml file
first-one:
  # values go here
second-one:
  # values go here
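
Fetching and installing then works as for any other umbrella chart, for example (release and namespace names are placeholders):
helm dependency update ./shared
helm upgrade -i shared ./shared -n shared-ns --create-namespace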
Reference https://helm.sh/docs/topics/charts/#the-chartyaml-file
Using cert-manager is just an example; I can't think of a use case that would need two instances of that particular chart. I'm hoping to use this for Brigade projects.

Adjusting Kubernetes configurations depending on environment

I want to describe my services in Kubernetes template files. Is it possible to parameterise values like the number of replicas, so that I can set them at deploy time?
The goal here is to be able to run my services locally in minikube (where I'll only need one replica) and have them be as close to those running in staging/live as possible.
I'd like to be able to change the number of replicas, use locally mounted volumes and make other minor changes, without having to write separate template files that would inevitably diverge from each other.
Helm
Helm is becoming the standard for templatizing Kubernetes deployments. A Helm chart is a directory of YAML files with Go template placeholders:
---
kind: Deployment
metadata:
  name: foo
spec:
  replicas: {{ .Values.replicaCount }}
You define the default for a value in the values.yaml file:
replicaCount: 1
You can optionally override the value using the --set command-line flag:
helm install foo --set replicaCount=42
Helm can also point to an external values file:
helm install foo -f ./dev.yaml
helm install foo -f ./prod.yaml
dev.yaml
---
replicaCount: 1
prod.yaml
---
replicaCount: 42
Another advantage of Helm over simpler solutions like envsubst is that Helm supports plugins. One powerful plugin is helm-secrets, which lets you encrypt sensitive data using PGP keys: https://github.com/futuresimple/helm-secrets
If using helm + helm-secrets, your setup may look like the following, where your code is in one repo and your data is in another.
git repo with helm charts
stable
|__ mysql
    |__ values.yaml
    |__ charts
|__ apache
    |__ values.yaml
    |__ charts
incubator
|__ mysql
    |__ values.yaml
    |__ charts
|__ apache
    |__ values.yaml
    |__ charts
Then in another git repo that contains the environment specific data
values
|__ mysql
    |__ dev
    |   |__ values.yaml
    |   |__ secrets.yaml
    |__ prod
        |__ values.yaml
        |__ secrets.yaml
You then have a wrapper script that references the values and the secrets files
helm secrets upgrade foo --install -f ./values/foo/$environment/values.yaml -f ./values/foo/$environment/secrets.yaml
envsubst
As mentioned in other answers, envsubst is a very powerful yet simple way to make your own templates. An example from kiminehart
The template, mytemplate.tmpl:
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: ${GOOS}
Render it:
GOOS=amd64 envsubst < mytemplate.tmpl > mydeployment.yaml
The resulting mydeployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
# ...
architecture: amd64
Kubectl
There is a feature request to allow kubectl to support some of the same features as Helm and allow for variable substitution. There is a background document that strongly suggests the feature will never be added, and that templating is instead left to external tools like Helm and envsubst.
(edit)
Kustomize
Kustomize is a newer project developed by Google that is similar to Helm. Basically, you have two folders, base and overlays. You then run kustomize build someapp/overlays/production and it will generate the YAML for that environment.
someapp/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── configMap.yaml
│   └── service.yaml
└── overlays/
    ├── production/
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── staging/
        ├── kustomization.yaml
        └── cpu_count.yaml
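
For illustration, the production overlay might look like this (a sketch; the patches field is per recent Kustomize versions, older ones used patchesStrategicMerge, and the Deployment name must match the base):
someapp/overlays/production/kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica_count.yaml

someapp/overlays/production/replica_count.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: someapp   # must match the name used in base/deployment.yaml
spec:
  replicas: 42    # hypothetical production replica count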
It is simpler and has less overhead than helm, but does not have plugins for managing secrets. You could combine kustomize with sops or envsubst to manage secrets.
https://kubernetes.io/blog/2018/05/29/introducing-kustomize-template-free-configuration-customization-for-kubernetes/
I'm hoping someone will give me a better answer, but in the meantime, you can feed your configuration through envsubst (see gettext and this for mac).
Example config, test.yaml:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${NUM_REPLICAS}
...
Then run:
$ NUM_REPLICAS=2 envsubst < test.yaml | kubectl apply -f -
deployment "test" configured
The final dash is required. This doesn't solve the problem with volumes, of course, but it helps a little. You could write a script/makefile to automate this for each environment.
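For instance, a tiny wrapper script (a sketch; the per-environment env files are an assumption of this example):
#!/bin/sh
# usage: ./deploy.sh <environment>   e.g. ./deploy.sh prod
ENV="$1"
. "./envs/$ENV.env"                  # sets NUM_REPLICAS etc. for that environment
envsubst < test.yaml | kubectl apply -f -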