I'm working on an umbrella chart which has several child charts.
On the top level, I have a file values-ext.yaml which contains some values which are used in the child charts.
sql:
  common:
    host: <SQL Server host>
    user: <SQL user>
    pwd: <SQL password>
These settings are read in configmap.yaml of a child chart. In this case, a SQL Server connection string is built up from these settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: "childchart-config"
  labels:
    app: some-app
    chart: my-child-chart
data:
  ConnectionStrings__DbConnection: Server={{ .Values.sql.common.host }};Database=some-db
I test the chart from the umbrella chart dir like this: helm template --values values-ext.yaml .
It gives me this error:
executing "my-project/charts/child-chart/templates/configmap.yaml" at <.Values.sql.common.host>:
nil pointer evaluating interface {}.host
So, it clearly can't find the values that I want to read from the values-ext.yaml file.
I should be able to pass in additional values files like this, right?
I also tried with $.Values.sql.common.host but it doesn't seem to matter.
What's wrong here?
When the child charts are rendered, their .Values are a subset of the parent chart's values: Helm takes the block under the child chart's name (plus anything under global:) and presents it to the child as its own top-level .Values. Using $.Values to "escape" the current scope doesn't affect this at all, because a subchart's templates already get the subchart-scoped context as their root. So within child-chart, .Values in effect refers to what the parent would see as .Values.child-chart.
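A minimal sketch of that scoping, assuming the child chart is named child-chart (all names illustrative):

```yaml
# values as the parent chart sees them:
child-chart:
  replicas: 2
sql:
  common:
    host: my-sql-host

# what child-chart's templates see as their top-level .Values:
#   replicas: 2
# the top-level sql: block never reaches the child, hence the nil pointer error
```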
There are three main things you can do about this:
Move the settings down one level in the YAML file; you'd have to repeat them for each child chart, but they could be used unmodified.
child-chart:
  sql:
    common: { ... }
Move the settings under a global: key. All of the charts that referenced this value would have to reference .Values.global.sql..., but it would be consistent across the parent and child charts.
global:
  sql:
    common: { ... }

ConnectionStrings__DbConnection: Server={{ .Values.global.sql.common.host }};...
Create the ConfigMap in the parent chart and indirectly refer to it in the child charts. It can help to know that all of the charts will be installed as part of the same Helm release, and if you're using the standard {{ .Release.Name }}-{{ .Chart.Name }}-suffix naming pattern, the .Release.Name will be the same in all contexts.
# in a child chart that knows it's being included by the parent
env:
  - name: DB_CONNECTION
    valueFrom:
      configMapKeyRef:
        name: '{{ .Release.Name }}-parent-dbconfig'
        key: ConnectionStrings__DbConnection
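For the third option, the parent-chart side might look like this (a sketch; the template file name is illustrative, and the -parent-dbconfig suffix just has to match whatever name the child references):

```yaml
# in the parent chart: templates/dbconfig.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: '{{ .Release.Name }}-parent-dbconfig'
data:
  ConnectionStrings__DbConnection: Server={{ .Values.sql.common.host }};Database=some-db
```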
How to append a list to another list inside a dictionary using Helm?
I have a Helm chart specifying the key helm inside of an Argo CD Application (see snippet below).
Now given a values.yaml file, e.g.:
helm:
  valueFiles:
    - myvalues1.yaml
    - myvalues2.yaml
I want to append helm.valueFiles to the one below. How can I achieve this? The merge function doesn't seem to satisfy my needs in this case, since precedence is given to the first dictionary.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  # Add labels to your application object.
  labels:
    name: guestbook
spec:
  # The project the application belongs to.
  project: default
  # Source of the application manifests
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git  # Can point to either a Helm chart repo or a git repo.
    targetRevision: HEAD  # For Helm, this refers to the chart version.
    path: guestbook  # This has no meaning for Helm charts pulled directly from a Helm repo instead of git.
    # helm specific config
    chart: chart-name  # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts.
    helm:
      passCredentials: false  # If true then adds --pass-credentials to Helm commands to pass credentials to all domains
      # Extra parameters to set (same as setting through values.yaml, but these take precedence)
      parameters:
        - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
          value: mydomain.example.com
        - name: "ingress.annotations.kubernetes\\.io/tls-acme"
          value: "true"
          forceString: true  # ensures that value is treated as a string
      # Use the contents of files as parameters (uses Helm's --set-file)
      fileParameters:
        - name: config
          path: files/config.json
      # Release name override (defaults to application name)
      releaseName: guestbook
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-prod.yaml
https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml
If you only need to append helm.valueFiles to the existing .spec.source.helm.valueFiles, you can range through the list in the values file and add the list items like this:
valueFiles:
  - values-prod.yaml
  {{- range $item := .Values.helm.valueFiles }}
  - {{ $item }}
  {{- end }}
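With the example values file above (myvalues1.yaml and myvalues2.yaml), that template renders as:

```yaml
valueFiles:
  - values-prod.yaml
  - myvalues1.yaml
  - myvalues2.yaml
```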
While composing a Helm chart with a few sub-charts, I've encountered a collision. Long story short: I'm creating a config with some value, and the config's name is generated. But the subchart expects that generated name to be referenced directly in the values.yaml file.
It's actually a service with a PostgreSQL database, and I'm trying to install prometheus-postgres-exporter to enable Prometheus monitoring for the DB. But that's not the point.
So, I have some config for building DB connection string:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "myapp.fullname" . }}-secret
type: Opaque
data:
  PG_CONN_STRING: {{ printf "postgresql://%s:%s@%s:%v/%s" .Values.postgresql.postgresqlUsername .Values.postgresql.postgresqlPassword (include "postgresql.fullname" .) .Values.postgresql.service.port .Values.postgresql.postgresqlDatabase | b64enc | quote }}
Okay, that works fine. However, when I'm trying to install prometheus-postgres-exporter, it requires naming the specific Secret from which the DB connection string can be obtained. The problem is that the name is generated, so I can't provide an exact value. Not sure how I can reference it. Obviously, passing the same template code doesn't work, since template replacement is single-pass, not recursive.
prometheus-postgres-exporter:
  serviceMonitor:
    enabled: true
  config:
    datasourceSecret:
      name: "{{ include "myapp.fullname" . }}-secret"  # Doesn't work, unfortunately
      key: "PG_CONN_STRING"
Is there a known way to overcome this other than hardcoding values?
I am creating a Helm chart that depends on several Helm charts that are not maintained by me, and I would like to make some configuration changes to these subcharts. The changes are not too complex; I just want to add several environment variables to each of the containers. However, the env fields of the containers are not templated out in the Helm charts. I would like to avoid forking these charts and maintaining them myself, since this is such a trivial change.
Is there an easy way to provide environment variables to several containers in Kubernetes in a flexible way, either through Helm or another tool?
I am currently looking into using Kustomize to do the last-mile changes after Helm fills out the templates, but I am getting hung up on setting up Kustomize patches. In my scenario, I have the environment variables being filled out by Helm in a ConfigMap. I would like to add an envFrom field to read the ConfigMap and add the given environment variables to the containers. I want to add the envFrom to the resource YAML files through Kustomize. The snag I am hitting is that Kustomize patch.yaml files are resource-specific. Below is an example of my patch.yaml and my kustomization.yaml respectively.
patch.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: does-not-matter
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
            - configMapRef:
                name: my-env
kustomization.yaml:
resources:
  - all.yaml
patches:
  - path: patch.yaml
    target:
      kind: "StatefulSet"
      name: "*"
To perform the Kustomization, I run:
helm install perceptor ../ --post-renderer ./kustomize
Which basically just fills out the Helm templates and passes them to Kustomize to do the last-mile patches.
In the patch, I have to specify the name of the container ("server") to properly inject my ConfigMap. What I would really like to do is be able to provide those environment variables to all containers in a given deployment (as defined by the target constraints in kustomization.yaml), regardless of their name. From what I have seen, it almost looks like I will have to write a separate patch for each container, which is suboptimal. I just started working with Kubernetes, so it is possible that I am missing something that would easily solve this problem.
I understand that you don't want to break the open/closed principle of the subchart your umbrella chart depends on by forking it, but you still have the right to propose changes that make it more extensible and flexible. Yes, I would suggest you submit a Pull Request / feature request to the Helm chart project in question.
The following code snippet won't break current functionality, and gives users a chance to introduce custom environment variables based on existing ConfigMap(s) in the desired resource's spec.
helm_template.yaml
#helm template
...
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
{{- if .Values.envConfigs }}
{{- range $key, $config := $.Values.envConfigs }}
  - name: {{ $key }}
    valueFrom:
      configMapKeyRef:
        name: {{ $config }}
        key: {{ $key | quote }}
{{- end }}
{{- end }}
values.yaml
#
# values.yaml
#
envConfigs:
  Q3_CFG_MAP: Q3DM17
  Q3_CFG_TIMEOUT: 30
# if empty use:
# envConfigs: {}
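For illustration, with the values above the Q3_CFG_MAP entry would render as the following (note the convention this snippet assumes: each key in envConfigs doubles as the env var name and the ConfigMap key, while the value, here Q3DM17, names the ConfigMap to read from):

```yaml
- name: Q3_CFG_MAP
  valueFrom:
    configMapKeyRef:
      name: Q3DM17
      key: "Q3_CFG_MAP"
```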
We are using helm to deploy many charts, but for simplicity let's say it is two charts. A parent chart and a child chart:
helm/parent
helm/child
The parent chart has a helm/parent/requirements.yaml file which specifies:
dependencies:
  - name: child
    repository: file://../child
    version: 0.1.0
The child chart requires a bunch of environment variables on startup for configuration, for example in helm/child/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    spec:
      containers:
        - env:
            - name: A_URL
              value: http://localhost:8080
What's the best way to override the child's environment variable from the parent chart, so that I can run the parent using the command below and set the A_URL env variable for this instance to e.g. https://www.mywebsite.com?
helm install parent --name parent-release --namespace sample-namespace
I tried adding the variable to the parent's helm/parent/values.yaml file, but to no avail
global:
  repository: my_repo
  tag: 1.0.0-SNAPSHOT
child:
  env:
    - name: A_URL
      value: https://www.mywebsite.com
Is the syntax of the parent's values.yaml correct? Is there a different approach?
In the child chart you have to explicitly reference a value from the configuration. (Having made this change you probably need to run helm dependency update from the parent chart directory.)
# child/templates/deployment.yaml, in the pod spec
env:
  - name: A_URL
    value: {{ .Values.aUrl | quote }}
You can give it a default value for the child chart.
# child/values.yaml
aUrl: "http://localhost:8080"
Then in the parent chart's values file, you can provide an override value for that.
# parent/values.yaml
child:
  aUrl: "http://elsewhere"
You can't use Helm to override or inject arbitrary YAML, except to the extent the templates allow for it.
Unless the value is set up using the templating system, there is no way to directly modify it in Helm 2.
The plan is to move my dockerized application to Kubernetes.
The docker container uses couple of files - which I used to mount on the docker volumes by specifying in the docker-compose file:
volumes:
  - ./license.dat:/etc/sys0/license.dat
  - ./config.json:/etc/sys0/config.json
The config file would be different for different environments, and the license file would be the same across environments.
How do I define this in a Helm template file (YAML) so that it is available to the running application?
What is generally the best practice for this? Is it also possible to define the configuration values in values.yaml and have the config.json file pick them up?
Since you are dealing with JSON, a good example to follow might be the official stable/centrifugo chart. It defines a ConfigMap that contains a config.json file:
data:
  config.json: |-
{{ toJson .Values.config | indent 4 }}
So it takes a config section from the values.yaml and transforms it to JSON using the toJson function. The config can be whatever you want to define in that YAML - the chart has:
config:
  web: true
  namespaces:
    - name: public
      anonymous: true
      publish: true
  ...
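Rendered against that config section (ignoring the elided keys), the ConfigMap's data would come out roughly as below; toJson emits compact JSON with map keys sorted, and indent 4 prefixes each line so the block sits correctly under the |- scalar:

```yaml
data:
  config.json: |-
    {"namespaces":[{"anonymous":true,"name":"public","publish":true}],"web":true}
```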
In the deployment.yaml it creates a volume from the configmap:
volumes:
  - name: {{ template "centrifugo.fullname" . }}-config
    configMap:
      name: {{ template "centrifugo.fullname" . }}-config
And mounts it into the deployment's pod/s:
volumeMounts:
  - name: "{{ template "centrifugo.fullname" . }}-config"
    mountPath: "/centrifugo"
    readOnly: true
This approach would let you populate the json config file from the values.yaml so that you can set different values for different environments by supplying custom values file per env to override the default one in the chart.
To handle the license.dat you can add an extra entry to the ConfigMap to define an additional file, this time with static content embedded. Since it is a license, you may want to switch the ConfigMap to a Secret instead, which is largely a matter of replacing the word ConfigMap with Secret in the definitions (and base64-encoding the data, or using stringData). You could try it with a ConfigMap first, though.
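A sketch of the Secret variant, assuming the license text is supplied via a .Values.license entry (that key, and the Secret's name template, are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mychart.fullname" . }}-config
type: Opaque
stringData:
  license.dat: |-
{{ .Values.license | indent 4 }}
  config.json: |-
{{ toJson .Values.config | indent 4 }}
```

Using stringData lets the API server handle the base64 encoding that a Secret's data field would otherwise require you to do yourself (e.g. via b64enc).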