Can we use template variables in Helm 3 values.yaml?

Is it possible to use templating in helm values.yaml?
Use case -
values.yaml is as below -
myservice:
  name: abc
  namespace: abc
  image:
    ecr_uri: abc.dkr.ecr.us-east-2.amazonaws.com
    repo_name: abc-123
    version: 1.0.0
I want to pass this image metadata to an environment variable within values.yaml:
env:
  - name: POD_CONFIG
    value: '{
      "1234": {
        "DEPLOYMENT": "xyz",
        "IMAGE": "{{ .Values.myservice.image.ecr_uri }}/{{ .Values.myservice.image.repo_name }}:{{ .Values.myservice.image.version }}",
        "REC_COUNTS_TO_POD_COUNTS": {"0": 0, "500": 1, "1000": 2, "1500": 3, "2000": 4, "2001": 5}
      }
    }'
Output:
$ helm template myservice does not render the actual variable values but rather returns them as a literal string:
- name: POD_CONFIG
  value: '{ "1234": { "DEPLOYMENT": "xyz", "IMAGE": "{{ .Values.myservice.image.ecr_uri }}/{{ .Values.myservice.image.repo_name }}:{{ .Values.myservice.image.version }}", "REC_COUNTS_TO_POD_COUNTS": {"0": 0, "20": 1, "40": 2, "60": 3} } }'
Is there any way to set this version in the environment variables? I will be passing a custom version with the --set flag to the helm install and helm upgrade commands, e.g.
$ helm install myservice ./myservice --set myservice.image.version=1.0.1 -n mynamespace --debug
I expect this version to be reflected in my environment variables dynamically, but that is not happening right now.
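(Not part of the original question, but a common workaround.) Helm does not template values.yaml itself; however, the template that consumes the value can pass it through the tpl function so the {{ .Values.myservice.image.* }} references inside the string are expanded at render time. A minimal sketch, assuming the env block is emitted from templates/deployment.yaml and using a hypothetical podConfigTemplate key:

# values.yaml (sketch) - hypothetical key holding the raw Go-template string
podConfigTemplate: '{ "1234": { "DEPLOYMENT": "xyz", "IMAGE": "{{ .Values.myservice.image.ecr_uri }}/{{ .Values.myservice.image.repo_name }}:{{ .Values.myservice.image.version }}" } }'

# templates/deployment.yaml (sketch) - tpl renders the string against the chart context;
# toJson re-quotes the result as a single valid YAML scalar
env:
  - name: POD_CONFIG
    value: {{ tpl .Values.podConfigTemplate . | toJson }}

With this in place, --set myservice.image.version=1.0.1 on helm install/upgrade is reflected in POD_CONFIG when the chart is rendered.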

Related

Helm not using values override?

I'm using sub-charts. Here's my directory structure
/path/microservice-base-chart
/path/myApp
I have this values.yaml for my "base" (generic) chart
# Default region and repository
aws_region: us-east-1
repository: 012234567890.dkr.ecr.us-east-1.amazonaws.com
repositories:
  us-east-1: 01234567890.dkr.ecr.us-east-1.amazonaws.com
  eu-north-1: 98765432109.dkr.ecr.eu-north-1.amazonaws.com
image:
  name: ""
  version: ""
...and this in the base chart's templates/_helpers.yaml file
{{/*
Get the repository from the AWS region
*/}}
{{- define "microservice-base-chart.reponame" -}}
{{- $repo := index .Values.repositories .Values.aws_region | default .Values.repository }}
{{- printf "%s" $repo }}
{{- end }}
...and this in the base chart's templates/deployment.yaml file
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: {{ .Values.image.name }}
          image: {{ include "microservice-base-chart.reponame" . }}/{{ .Values.image.name }}:{{ .Values.image.version }}
I have this in the Chart.yaml of a sub chart that uses the base chart.
dependencies:
  - alias: microservice-0
    name: microservice-base-chart
    version: "0.1.0"
    repository: file://../microservice-base-chart
...and this in the values.yaml of a sub chart
microservice-0:
  image:
    name: myApp
    version: 1.2.3
However, when I run this, where I set aws_region
$ helm install marcom-stats-svc microservice-chart/ \
--set image.aws_region=eu-north-1 \
--set microservice-0.image.version=2.0.0 \
--dry-run --debug
I get this for the image name of the above deployment.yaml template
image: 01234567890.dkr.ecr.us-east-1.amazonaws.com/myApp:2.0.0
instead of the expected
image: 98765432109.dkr.ecr.eu-north-1.amazonaws.com/myApp:2.0.0
What am I missing? TIA
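(Not from the original post, but a likely explanation.) Values for a dependency are scoped under its alias, and aws_region is a top-level key in the base chart's values rather than a child of image, so --set image.aws_region=... never reaches the sub-chart. A hedged sketch of the corrected override:

$ helm install marcom-stats-svc microservice-chart/ \
    --set microservice-0.aws_region=eu-north-1 \
    --set microservice-0.image.version=2.0.0 \
    --dry-run --debug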

Pass values from HelmRelease to Terraform

I have a helm release file test-helm-release.yaml as given below.
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
  values:
    key1: "value1"
    key1: "value1"
    key1: "value1"
    key1: "value1"
    gitRepository:
      url: https://github.com/test-eng/test.git
    helmRepositories:
      - name: testplatform
        url: https://test-platform/charts
While creating the Helm release, I can pass the values from the above HelmRelease to the new release using the command below:
chart=$(yq '.spec.chart.spec.chart' test-helm-release.yaml)
version=$(yq '.spec.chart.spec.version' test-helm-release.yaml)
yq '.spec.values' test-helm-release.yaml | helm upgrade --install --values - --version "$version" --namespace test-system --create-namespace test-platform "helm-repo/$chart"
The above code works perfectly and I'm able to pass the values to the Helm release using the yq command. How can I do the same yq step with the Terraform helm_release resource and the GitHub repository file data source given below?
data "github_repository_file" "test-platform" {
repository = "test-platform"
branch = "test"
file = "internal/default/test-helm-release.yaml"
}
resource "helm_release" "test-platform" {
name = "test-platform"
repository = "https://test-platform/charts"
chart = "test-environment"
namespace = "test-system"
create_namespace = true
timeout = 800
lifecycle {
ignore_changes = all
}
}
Note: I cannot use "set" because I want to fetch the values from test-helm-release.yaml dynamically. Any idea how I could fetch the .spec.values alone using the templatefile function, or a different way?
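(An assumption, not from the original post.) Terraform can do the equivalent of the yq step with its yamldecode/yamlencode functions on the content attribute of the GitHub data source, feeding the result to the values argument of helm_release. A minimal sketch, reusing the resources above:

locals {
  # Decode the HelmRelease manifest fetched from GitHub
  helmrelease = yamldecode(data.github_repository_file.test-platform.content)
}

resource "helm_release" "test-platform" {
  name             = "test-platform"
  repository       = "https://test-platform/charts"
  chart            = local.helmrelease.spec.chart.spec.chart
  version          = local.helmrelease.spec.chart.spec.version
  namespace        = "test-system"
  create_namespace = true
  timeout          = 800

  # values expects a list of YAML-formatted strings, so re-encode .spec.values
  values = [yamlencode(local.helmrelease.spec.values)]
}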

How to use environment/secret variable in Helm?

In my Helm chart, I have a few files that need credentials to be filled in.
For example
<Resource
  name="jdbc/test"
  auth="Container"
  driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
  url="jdbc:sqlserver://{{ .Values.DB.host }}:{{ .Values.DB.port }};selectMethod=direct;DatabaseName={{ .Values.DB.name }};User={{ .Values.DB.username }};Password={{ .Values.DB.password }}"
/>
I created a secret:
Name: databaseinfo
Data:
  username
  password
I then create environment variables to retrieve those secrets in my deployment.yaml:
env:
  - name: DBPassword
    valueFrom:
      secretKeyRef:
        key: password
        name: databaseinfo
  - name: DBUser
    valueFrom:
      secretKeyRef:
        key: username
        name: databaseinfo
In my values.yaml (or this other file), I need to be able to reference this secret/environment variable. I tried the following, but it does not work:
values.yaml
DB:
  username: $env.DBUser
  password: $env.DBPassword
You can't pass variables from a template back into values.yaml with Helm; values only flow from values.yaml into the templates.
The answer you are seeking was posted by mehowthe:
deployment.yaml =
env:
{{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value }}
{{- end }}
values.yaml =
env:
  - name: "DBUser"
    value: ""
  - name: "DBPassword"
    value: ""
then
helm install chart_name --name release_name --set env.DBUser="FOO" --set env.DBPassword="BAR"
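(A hedged caveat, not in the original answer.) Because env in values.yaml is defined as a list, --set overrides generally need index syntax rather than key names, for example:

helm install chart_name --name release_name --set env[0].value="FOO" --set env[1].value="BAR"

This assumes DBUser is the first list entry and DBPassword the second, as in the values.yaml above.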

Helm integer intended to be string value parsed with escape character "\u200b"

I have this Secret resource yaml:
...
stringData:
  imageTag: {{ .Values.image.tag | quote }}
...
In the value file:
image:
tag: "65977​45"
...
Running the helm template command results in a generated YAML file with the value:
...
stringData:
  imageTag: "65977\u200b45"
...
Seems like a bug in helm. To get around this issue, I have to do this:
...
stringData:
  imageTag: "{{ .Values.image.tag }}"
...
Is there a better solution? I am using helm version 2.15.2
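(An observation, not from the original post.) \u200b is the Unicode zero-width space, so the escape strongly suggests the tag string in values.yaml itself contains an invisible character between "65977" and "45", rather than Helm mis-handling the quote. Deleting the stray character from values.yaml is the cleanest fix; alternatively, a hedged sketch that strips it at render time using sprig's replace:

...
stringData:
  # strip any zero-width space (U+200B) from the tag before quoting
  imageTag: {{ .Values.image.tag | replace "\u200b" "" | quote }}
...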

helm upgrade fails with "function "X" not defined"

I'm trying to upgrade a Helm chart and I get the error function "pod" not defined, which makes sense because I really have no such function.
The "pod" comes from a JSON file that I convert into a ConfigMap; Helm reads this value as a function rather than as a plain string that is part of the JSON.
This is a snippet of my configmap:
# Generated from 'pods' from https://raw.githubusercontent.com/coreos/prometheus-operator/master/contrib/kube-prometheus/manifests/grafana-dashboardDefinitions.yaml
# Do not change in-place! In order to change this file first read following link:
# https://github.com/helm/charts/tree/master/stable/prometheus-operator/hack
{{- if and .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ printf "%s-%s" (include "prometheus-operator.fullname" $) "services-health" | trunc 63 | trimSuffix "-" }}
  labels:
    {{- if $.Values.grafana.sidecar.dashboards.label }}
    {{ $.Values.grafana.sidecar.dashboards.label }}: "1"
    {{- end }}
    app: {{ template "prometheus-operator.name" $ }}-grafana
{{ include "prometheus-operator.labels" $ | indent 4 }}
data:
  services-health.json: |-
    {
      "annotations": {
        "list": [
          {
            "builtIn": 1,
            "datasource": "-- Grafana --",
            "enable": true,
            "hide": true,
            "iconColor": "rgba(0, 211, 255, 1)",
            "name": "Annotations & Alerts",
            "type": "dashboard"
          }
        ]
      },
      "targets": [
        {
          "expr": "{__name__=~\"kube_pod_container_status_ready\", container=\"aggregation\",kubernetes_namespace=\"default\",chart=\"\"}",
          "format": "time_series",
          "instant": false,
          "intervalFactor": 2,
          "legendFormat": "{{pod}}",
          "refId": "A"
        }
      ]
    }
{{- end }}
The error I get is coming from this line: "legendFormat": "{{pod}}",
And this is the error I get:
helm upgrade --dry-run prometheus-operator-chart
/home/ubuntu/infra-devops/helm/vector-chart/prometheus-operator-chart/
Error: UPGRADE FAILED: parse error in "prometheus-operator/templates/grafana/dashboards/services-health.yaml":
template:
prometheus-operator/templates/grafana/dashboards/services-health.yaml:1213:
function "pod" not defined
I tried to escape it but nothing worked.
Does anyone have an idea of how I can work around this issue?
Escaping gotpl placeholders is possible using backticks. For example, in your scenario, instead of using {{ pod }} you could write {{` {{ pod }} `}}.
Move your dashboard JSON to a separate file, say dashboard.json.
Then in your ConfigMap file, instead of inlining the JSON, reference dashboard.json as follows:
data:
  services-health.json: |-
{{ .Files.Get "dashboard.json" | indent 4 }}
That would solve the problem!
In the case of my experiments, I replaced
"legendFormat": "{{ pod }}",
with
"legendFormat": "{{ "{{ pod }}" }}",
and it was very happy to return the syntax I needed (Specifically for the grafana-operator GrafanaDashboard CRD).
Keeping the JSON file out of the ConfigMap and sourcing it from within the ConfigMap works, but make sure to keep the JSON file out of the templates directory when using it with Helm, or else Helm will still try to parse {{ pod }} as a template; see the sketch below.
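A minimal layout sketch (hypothetical file and chart names) for the .Files.Get approach, with the dashboard kept outside templates/ so Helm does not try to render {{pod}}:

mychart/
  Chart.yaml
  values.yaml
  dashboard.json          # raw Grafana JSON; {{pod}} stays literal here
  templates/
    services-health.yaml  # the ConfigMap template using .Files.Get

# templates/services-health.yaml (relevant part)
data:
  services-health.json: |-
{{ .Files.Get "dashboard.json" | indent 4 }}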