Is it possible to reference a generated Helm template name from the values file? - kubernetes-helm

While composing a Helm chart with a few sub-charts, I've run into a collision. Long story short, I'm creating a config with some value, and the config's name is generated. But the subchart expects that generated name to be referenced directly in the values.yaml file.
It's actually a service with a PostgreSQL database, and I'm trying to install prometheus-postgres-exporter to enable Prometheus monitoring for the DB. But that's not the point.
So, I have a Secret template for building the DB connection string:
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "myapp.fullname" . }}-secret
type: Opaque
data:
  PG_CONN_STRING: {{ printf "postgresql://%s:%s@%s:%v/%s" .Values.postgresql.postgresqlUsername .Values.postgresql.postgresqlPassword (include "postgresql.fullname" .) .Values.postgresql.service.port .Values.postgresql.postgresqlDatabase | b64enc | quote }}
Okay, that works fine. However, when I try to install prometheus-postgres-exporter, it requires naming the specific secret from which the DB connection string can be obtained. The problem is that the name is generated, so I can't provide an exact value, and I'm not sure how to reference it. Obviously, passing the same template code doesn't work, since replacement is single-pass, not recursive.
prometheus-postgres-exporter:
  serviceMonitor:
    enabled: true
  config:
    datasourceSecret:
      name: "{{ include "myapp.fullname" . }}-secret" # Doesn't work, unfortunately
      key: "PG_CONN_STRING"
Is there a known way to overcome this other than hardcoding values?
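One commonly used workaround, not from the original thread so treat it as a hedged sketch, is to stop the names from being "generated" at all: pin them with fullnameOverride, which the standard helm create scaffolding and most community charts (including bitnami/postgresql) honor. The names then become literals you can safely repeat in values.yaml:
# Sketch, assuming your charts support fullnameOverride (check their _helpers.tpl):
fullnameOverride: myapp # "myapp.fullname" now renders as plain "myapp"
postgresql:
  fullnameOverride: myapp-postgresql # likewise pins "postgresql.fullname"
prometheus-postgres-exporter:
  config:
    datasourceSecret:
      name: "myapp-secret" # deterministic; matches the Secret template above
      key: "PG_CONN_STRING"
Some charts also pass selected values through the tpl function, which would make the templated name work exactly as you wrote it; whether a given subchart does that has to be checked in its templates.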

Related

How to append a list to another list inside a dictionary using Helm?

I have a Helm chart specifying the key helm inside of an Argo CD Application (see snippet below).
Now given a values.yaml file, e.g.:
helm:
  valueFiles:
    - myvalues1.yaml
    - myvalues2.yaml
I want to append helm.valueFiles to the list below. How can I achieve this? The merge function doesn't seem to satisfy my needs in this case, since precedence is given to the first dictionary.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
  # Add labels to your application object.
  labels:
    name: guestbook
spec:
  # The project the application belongs to.
  project: default
  # Source of the application manifests
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git # Can point to either a Helm chart repo or a git repo.
    targetRevision: HEAD # For Helm, this refers to the chart version.
    path: guestbook # This has no meaning for Helm charts pulled directly from a Helm repo instead of git.
    # helm specific config
    chart: chart-name # Set this when pulling directly from a Helm repo. DO NOT set for git-hosted Helm charts.
    helm:
      passCredentials: false # If true then adds --pass-credentials to Helm commands to pass credentials to all domains
      # Extra parameters to set (same as setting through values.yaml, but these take precedence)
      parameters:
        - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
          value: mydomain.example.com
        - name: "ingress.annotations.kubernetes\\.io/tls-acme"
          value: "true"
          forceString: true # ensures that value is treated as a string
      # Use the contents of files as parameters (uses Helm's --set-file)
      fileParameters:
        - name: config
          path: files/config.json
      # Release name override (defaults to application name)
      releaseName: guestbook
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-prod.yaml
https://raw.githubusercontent.com/argoproj/argo-cd/master/docs/operator-manual/application.yaml
If you only need to append helm.valueFiles to the existing .spec.source.helm.valueFiles, you can range through the list in the values file and add the list items like this:
valueFiles:
  - values-prod.yaml
  {{- range $item := .Values.helm.valueFiles }}
  - {{ $item }}
  {{- end }}
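With the values.yaml from the question, that range renders as:
valueFiles:
  - values-prod.yaml
  - myvalues1.yaml
  - myvalues2.yaml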

Nil pointer evaluating interface with custom values file

I'm working on an umbrella chart which has several child charts.
On the top level, I have a file values-ext.yaml which contains some values which are used in the child charts.
sql:
  common:
    host: <SQL Server host>
    user: <SQL user>
    pwd: <SQL password>
These settings are read in configmap.yaml of a child chart. In this case, a SQL Server connection string is built up from these settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: "childchart-config"
  labels:
    app: some-app
    chart: my-child-chart
data:
  ConnectionStrings__DbConnection: Server={{ .Values.sql.common.host }};Database=some-db
I test the chart from the umbrella chart dir like this: helm template --values values-ext.yaml .
It gives me this error:
executing "my-project/charts/child-chart/templates/configmap.yaml" at <.Values.sql.common.host>:
nil pointer evaluating interface {}.host
So, it clearly can't find the values that I want to read from the values-ext.yaml file.
I should be able to pass in additional values files like this, right?
I also tried with $.Values.sql.common.host but it doesn't seem to matter.
What's wrong here?
When the child charts are rendered, their .Values are a subset of the parent chart values. Using $.Values to "escape" the current scope doesn't affect this at all. So within child-chart, .Values in effect refers to what .Values.child-chart would have referred to in the parent.
There are three main things you can do about this:
Move the settings down one level in the YAML file; you'd have to repeat them for each child chart, but they could be used unmodified.
child-chart:
  sql:
    common: { ... }
Move the settings under a global: key. All of the charts that referenced this value would have to reference .Values.global.sql..., but it would be consistent across the parent and child charts.
global:
  sql:
    common: { ... }
ConnectionStrings__DbConnection: Server={{ .Values.global.sql.common.host }};...
Create the ConfigMap in the parent chart and indirectly refer to it in the child charts. It can help to know that all of the charts will be installed as part of the same Helm release, and if you're using the standard {{ .Release.Name }}-{{ .Chart.Name }}-suffix naming pattern, the .Release.Name will be the same in all contexts.
# in a child chart, that knows it's being included by the parent
env:
  - name: DB_CONNECTION
    valueFrom:
      configMapKeyRef:
        name: '{{ .Release.Name }}-parent-dbconfig'
        key: ConnectionStrings__DbConnection
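For that reference to resolve, the parent chart must actually create the ConfigMap under the matching name. A minimal sketch, reusing the question's connection-string key (the template filename and the -parent-dbconfig suffix are just conventions carried over from the snippet above):
# in the parent chart, e.g. templates/parent-dbconfig.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: '{{ .Release.Name }}-parent-dbconfig'
data:
  ConnectionStrings__DbConnection: Server={{ .Values.sql.common.host }};Database=some-db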

Helm - Configmap - Read and update the file name

I have the application properties defined for each environment inside a config folder.
config/
  application-dev.yml
  application-dit.yml
  application-sit.yml
When I deploy the application in dev, I need to create the ConfigMap from application-dev.yml, but under the name application.yml.
When I deploy in dit, I need to create the ConfigMap from application-dit.yml; the file inside the ConfigMap should always be named application.yml.
Any suggestions?
When using Helm to manage projects, different values.yaml files are generally used to distinguish between environments (development/pre-release/production).
Suppose your configmap file is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $.Values.cm.name }}
data:
  application.yml: |-
    {{- $.Files.Get $.Values.cm.path | nindent 4 }}
In dev, define a values-dev.yaml file:
cm:
  name: test
  path: config/application-dev.yml
When you install the chart in dev, you can use the following command:
helm install test . -f values-dev.yaml
In dit, define a values-dit.yaml file:
cm:
  name: test
  path: config/application-dit.yml
When you install the chart in dit, you can use the following command:
helm install test . -f values-dit.yaml
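As a sanity check, if config/application-dev.yml contained the single line spring.profiles: dev (hypothetical content), the dev install would render:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  application.yml: |-
    spring.profiles: dev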

Escaping helm yml for deployment

I am trying to figure out how to escape these pieces of a YAML file in order to use them with Helm.
- name: SYSLOG_TAG
  value: '{{ index .Container.Config.Labels "io.kubernetes.pod.namespace" }}[{{ index .Container.Config.Labels "io.kubernetes.pod.name" }}]'
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
The YAML file is a DaemonSet for sending logs to Papertrail; the instructions for a standard manual Kubernetes deployment are here: https://help.papertrailapp.com/kb/configuration/configuring-centralized-logging-from-kubernetes/. Here is a link to the full YAML file: https://help.papertrailapp.com/assets/files/papertrail-logspout-daemonset.yml.
I found some answers on how to escape the curly braces and quotes, but I still can't seem to get it to work. It would be easiest if there were some way to just get Helm to not evaluate these values at all.
The last thing I tried was this, but it still results in an error:
- name: SYSLOG_TAG
  value: ''"{{" index .Container.Config.Labels \"io.kubernetes.pod.namespace\" "}}"["{{" index .Container.Config.Labels \"io.kubernetes.pod.name\" "}}"]''
- name: SYSLOG_HOSTNAME
  value: ''"{{" index .Container.Config.Labels \"io.kubernetes.container.name\" "}}"''
This is the error:
Error: UPGRADE FAILED: YAML parse error on templates/papertrail-logspout-daemonset.yml: error converting YAML to JSON: yaml: line 21: did not find expected key
I can hardcode values for both of these and it works fine. I don't quite understand how these env variables work, but what happens is that logs are sent to Papertrail for each pod on a node, tagged with the labels from each of those pods: namespace, pod name, and container name.
env:
  - name: ROUTE_URIS
    value: "{{ .Values.backend.log.destination }}"
{{ .Files.Get "files/syslog_vars.yaml" | indent 13 }}
Two sensible approaches come to mind.
One is to define a template that expands to the string {{, at which point you can use that in your variable expansion. You don't need to specially escape }}.
{{- define "cc" }}{{ printf "{{" }}{{ end -}}
- name: SYSLOG_HOSTNAME
  value: '{{ template "cc" }} index .Container.Config.Labels "io.kubernetes.container.name" }}'
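After Helm expands the template, the value is exactly the literal string logspout expects:
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'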
A second approach, longer-winded but with less escaping, is to create an external file that has these environment variable fragments.
# I am files/syslog_vars.yaml
- name: SYSLOG_HOSTNAME
  value: '{{ index .Container.Config.Labels "io.kubernetes.container.name" }}'
Then you can include the file. This doesn't apply any templating in the file, it just reads it as literal text.
env:
{{ .Files.Get "files/syslog_vars.yaml" | indent 2 }}
The important point with this last technique, and the problem you're encountering in the question, is that Helm reads an arbitrary file, expands all of the templating, and then tries to interpret the resulting text as YAML. The indent 2 part of this needs to match whatever the rest of your env: block has; if this is deep inside a deployment spec it might need to be 8 or 10 spaces. helm template will render a chart to text without trying to do additional processing, which is really helpful for debugging.
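For instance, if the env: block sits inside a full Deployment or DaemonSet pod spec, the include has to be indented to match (the depth here is illustrative; verify the rendered output with helm template):
spec:
  template:
    spec:
      containers:
        - name: logspout
          env:
{{ .Files.Get "files/syslog_vars.yaml" | indent 12 }}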

How best to have files on volumes in Kubernetes using helm charts?

The plan is to move my dockerized application to Kubernetes.
The Docker container uses a couple of files, which I used to mount as Docker volumes by specifying them in the docker-compose file:
volumes:
  - ./license.dat:/etc/sys0/license.dat
  - ./config.json:/etc/sys0/config.json
The config file would be different for different environments, and the license file would be the same across all of them.
How do I define this in a Helm template file (YAML) so that it is available to the running application?
What is generally the best practice for this? Is it also possible to define the configuration values in values.yaml so that the config.json file could pick them up?
Since you are dealing with JSON, a good example to follow might be the official stable/centrifugo chart. It defines a ConfigMap that contains a config.json file:
data:
  config.json: |-
{{ toJson .Values.config | indent 4 }}
So it takes the config section from the values.yaml and transforms it to JSON using the toJson function. The config can be whatever you want to define in that YAML; the chart has:
config:
  web: true
  namespaces:
    - name: public
      anonymous: true
      publish: true
  ...
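Rendered through the data: block above, that section becomes a single JSON document (abridged to the keys shown; toJson emits keys alphabetically):
data:
  config.json: |-
    {"namespaces":[{"anonymous":true,"name":"public","publish":true}],"web":true}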
In the deployment.yaml it creates a volume from the configmap:
volumes:
  - name: {{ template "centrifugo.fullname" . }}-config
    configMap:
      name: {{ template "centrifugo.fullname" . }}-config
Note that {{ template "centrifugo.fullname" . }}-config matches the name of the ConfigMap.
And mounts it into the deployment's pod/s:
volumeMounts:
  - name: "{{ template "centrifugo.fullname" . }}-config"
    mountPath: "/centrifugo"
    readOnly: true
This approach lets you populate the JSON config file from the values.yaml, so you can set different values for different environments by supplying a custom values file per environment that overrides the default one in the chart.
To handle the license.dat you can add an extra entry to the ConfigMap to define an additional file, but with static content embedded. Since it is a license, you may want to switch the ConfigMap to a Secret instead, which is a simple matter of replacing the word ConfigMap with Secret in the definitions. You could try it with a ConfigMap first, though.
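A minimal sketch of that Secret variant (the template filename, the mychart.fullname helper, and the files/license.dat location inside the chart are all assumptions):
# templates/license-secret.yaml (hypothetical)
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "mychart.fullname" . }}-license
type: Opaque
data:
  license.dat: {{ .Files.Get "files/license.dat" | b64enc | quote }}
Mounting it works the same way as the ConfigMap volume above, except the volume uses secret: with secretName: instead of configMap: with name:.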