helm values.yaml - use value from another node - kubernetes-helm

So, for example, I have:
database:
  name: x-a2d9f4
  replicaCount: 1
  repository: mysql
  tag: 5.7
  pullPolicy: IfNotPresent
  tier: database
app:
  name: x-576a77
  replicaCount: 1
  repository: wordpress
  tag: 5.2-php7.3
  pullPolicy: IfNotPresent
  tier: frontend
global:
  namespace: x-c0ecdb9f
env:
  name: WORDPRESS_DB_HOST
  value:
and I want to do something like this:
env:
  name: WORDPRESS_DB_HOST
  value: {{ .Values.database.name | lower }}
All of these are examples from the same values.yaml. Is this possible in Helm?

Yes, you can achieve this using the tpl function.
The tpl function allows developers to evaluate strings as templates inside a template. This is useful for passing a template string as a value to a chart or for rendering external configuration files. Syntax: {{ tpl TEMPLATE_STRING VALUES }}
values.yaml
database:
  name: x-a2d9f4
env:
  name: WORDPRESS_DB_HOST
  value: "{{ .Values.database.name | upper }}"
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  some: {{ tpl .Values.env.value . }}
output:
> helm template .
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-configmap
data:
  some: X-A2D9F4
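The same approach works for the env entry from the question: run the stored string through tpl wherever you emit it. A minimal sketch of the container env section (the surrounding Deployment spec is assumed and not part of the original answer):
env:
  - name: {{ .Values.env.name }}
    # tpl renders the template string stored in .Values.env.value against the chart's values
    value: {{ tpl .Values.env.value . | quote }}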

Related

How to set the value in envFrom as the value of env?

I have a configMap that stores a json file:
apiVersion: v1
data:
  config.json: |
    {{- toPrettyJson $.Values.serviceConfig | nindent 4 }}
kind: ConfigMap
metadata:
  name: service-config
  namespace: {{ .Release.Namespace }}
In my deployment.yaml, I use a volume and envFrom:
spec:
  ... ...
  volumes:
    - name: config-vol
      configMap:
        name: service-config
  containers:
    - name: {{ .Chart.Name }}
      envFrom:
        - configMapRef:
            name: service-config
      ... ...
      volumeMounts:
        - mountPath: /src/config
          name: config-vol
After I deployed the Helm chart and ran kubectl describe pods, I got this:
Environment Variables from:
  service-config  ConfigMap  Optional: false
I wonder how I can get/use this service-config in my code. My values.yaml can supply values under env, but I don't know how to extract the value of the ConfigMap. Is there a way I can move this service-config, or the JSON file stored in it, into an env variable in values.yaml? Thank you in advance!
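No answer is recorded here, but for reference, two common ways to consume such a ConfigMap are to read the mounted file at /src/config/config.json in the application, or to expose the whole JSON blob under a single, explicitly named environment variable via configMapKeyRef (with envFrom as written, each key becomes a variable, i.e. one named config.json). A sketch of the second option; the SERVICE_CONFIG name is made up and not from the original post:
containers:
  - name: {{ .Chart.Name }}
    env:
      - name: SERVICE_CONFIG        # hypothetical variable name for the JSON blob
        valueFrom:
          configMapKeyRef:
            name: service-config    # the ConfigMap rendered above
            key: config.json        # this key's value becomes the variable's value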

pass constant to skaffold

I am trying to use a constant in skaffold and to access it in a skaffold profile, for example:
export SOME_IP=199.99.99.99 && skaffold run -p dev
skaffold.yaml
...
deploy:
  helm:
    flags:
      global:
        - "--debug"
    releases:
      - name: ***
        chartPath: ***
        imageStrategy:
          helm:
            explicitRegistry: true
        createNamespace: true
        namespace: "***"
        setValueTemplates:
          SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
and in the dev.yaml profile I need to access it somehow, something like:
{{ .Template.SKAFFOLD_SOME_IP }}
and it should be rendered as 199.99.99.99.
I tried to use the skaffold envTemplate and setValueTemplates fields, but could not get it to work, and could not find any example on the web.
I basically found a solution which I truly don't like, but it works.
In the dev profile, values.dev.yaml, I added a placeholder:
_anchors_:
  - &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
The <IPAddr_01_TAG> will be replaced with the SOME_IP constant, which becomes 199.99.99.99 at skaffold run time.
Now to run skaffold I will do:
export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
So after the above sed, the dev profile values.dev.yaml contains the SOME_IP value instead of the placeholder:
_anchors_:
  - &_IPAddr_01 "199.99.99.99"
To use the SKAFFOLD_SOME_IP value that you have set in your skaffold.yaml, you can write the chart template for a Kubernetes Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.image }}
          env:
            - name: SKAFFOLD_SOME_IP
              value: "{{ .Values.SKAFFOLD_SOME_IP }}"
This will create an environment variable SKAFFOLD_SOME_IP for the Kubernetes pods, and you can read it in Go, for example, like this:
os.Getenv("SKAFFOLD_SOME_IP")
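For context on how the value travels: setValueTemplates effectively passes the rendered value to Helm the way --set does, so SKAFFOLD_SOME_IP ends up as a top-level key in .Values. A sketch of a matching values.yaml default (an assumption, not shown in the original answer):
# values.yaml (sketch)
replicaCount: 1
image: my-registry/my-app:latest   # placeholder image reference
SKAFFOLD_SOME_IP: ""               # overridden by skaffold's setValueTemplates at deploy time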

Helm post install hook on subchart

I am trying to create an umbrella Helm chart for a complex solution that is made of a few components. One of them is a database, for which I use the mariadb-galera chart. The problem I face is that I want to execute a Flyway migration once my DB is available, and I can't find a way to do it properly.
First, I want to use a version range, and I don't know how to make my hook match the DB subchart without spelling out the full version.
Second, I recently added aliases for my subcharts, and I haven't been able to trigger the hook properly since: it just triggers at installation and fails again and again until the DB eventually becomes available.
My Chart.yaml looks a bit like this:
apiVersion: v2
name: myApp
description: umbrella chart
type: application
version: 0.1.0
appVersion: "0.1-dev"
dependencies:
  - name: "portal"
    version: "0.1-dev"
    alias: "portal"
  - name: "mariadb-galera"
    version: "~5.11"
    repository: "https://charts.bitnami.com/bitnami"
    alias: "database"
  #...More dependencies...
My hook is defined as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "myApp.fullname" . }}-migration
  labels:
    {{- include "myApp.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      name: "{{ .Release.Name }}-migration"
      labels:
        app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
        app.kubernetes.io/instance: {{ .Release.Name | quote }}
        helm.sh/chart: "database-5.11.2"
    spec:
      containers:
        - name: flyway-migration
          image: flyway/flyway
          args:
            - "migrate"
            - "-password=$(DB_PASS)"
          volumeMounts:
            - name: migration-files
              mountPath: /flyway/sql/
            - name: flyway-conf
              mountPath: /flyway/conf/
          env:
            - name: DB_PASS
              valueFrom:
                secretKeyRef:
                  name: mariadb-secret
                  key: mariadb-password
      volumes:
        - name: migration-files
          configMap:
            name: migration-files
        - name: flyway-conf
          configMap:
            name: flyway-conf
Before using aliases, the helm.sh annotation used to look like:
helm.sh/chart: "mariadb-galera-5.11.2"
As you can see, it requires the full version, which I don't want to hard-code manually.
I tried to use something like:
{{ template ".Chart.name" .Subcharts.database }}
but it seems that it can't access the .Chart values of the subchart.
Is there something I missed?
I found a way to get the proper name for the dependency, thanks to helper templates embedded inside the package:
helm.sh/chart: "{{ template "common.names.chart" .Subcharts.database }}"
This produces the exact line I needed, but the job still starts without waiting for mariadb to be ready.
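The post stops there, but a common workaround (not part of the original answer) is to keep the hook Job from running Flyway until the database answers, e.g. with an initContainer that polls the service. A sketch, assuming the galera service ends up named <release>-database after the alias and that the image's nc applet supports -z; both are assumptions to verify:
spec:
  template:
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox:1.36
          command:
            - sh
            - -c
            # the service name below is a guess based on the "database" alias
            - until nc -z {{ .Release.Name }}-database 3306; do sleep 5; done
      containers:
        - name: flyway-migration
          image: flyway/flyway
          args: ["migrate", "-password=$(DB_PASS)"]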

Helm error from Jenkins pipeline: "Error: no objects visited"

I'm trying to implement a couple of subcharts in our CI/CD pipeline, but my helm upgrade --install command keeps returning the error message:
Release "testing" does not exist. Installing it now.
Error: no objects visited
It seems like this is a pretty general error message, so I'm not quite sure what it's pointing to. Any suggestions/hints/tips etc. would be greatly appreciated.
The folder structure of my helm directory is as follows:
deployment
└── helm
    └── testing
        ├── charts
        │   ├── application
        │   │   ├── templates
        │   │   │   └── deployment.yaml
        │   │   ├── .helmignore
        │   │   ├── Chart.yaml
        │   │   └── values.yaml
        │   └── configuration
        │       ├── templates
        │       │   ├── configmap.yaml
        │       │   └── secrets.yaml
        │       ├── .helmignore
        │       ├── Chart.yaml
        │       └── values.yaml
        ├── Chart.yaml
        └── values.yaml
Dependencies are defined as follows:
testing/Chart.yaml
apiVersion: v2
name: testing
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
testing/values.yaml
application:
  enabled: true
# disable configuration for easier debugging
configuration:
  enabled: false
configuration/Chart.yaml
apiVersion: v2
name: configuration
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
application/Chart.yaml
apiVersion: v2
name: application
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
application/deployment.yaml
{{- range $k, $v := .Values.region }}
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  labels:
    app: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
  name: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
  namespace: {{ $.Values.global.namespace }}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
    spec:
      containers:
        - image: {{ $.Values.image_name }}
          imagePullPolicy: Always
          name: {{ $.Release.Name }}-{{ .countryCode | lower }}-deployment
---
{{- end }}
application/values.yaml
image_name: image_name
config:
  enabled: true
region:
  - countryCode: US
volumeMounts:
  - mountPath: /app/config
    name: app-config-volume
volumes:
  - name: app-config-volume
    configMap:
      defaultMode: 420
      name: app-configmap
Finally, below is the command I'm running in the pipeline:
helm upgrade --install testing deployment/helm/testing -f deployment/helm/testing/values.yaml --set known_hosts="***" --set image_name=$DOCKER_TAG
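No answer is recorded for this one, but one detail worth checking against the layout above: with apiVersion v2, the enabled flags in testing/values.yaml only switch subcharts on and off if the parent Chart.yaml declares them as dependencies with matching condition fields. A sketch of what that block could look like (names and versions taken from the files shown; the repository field is omitted on the assumption that the charts stay vendored under testing/charts/):
# testing/Chart.yaml (sketch)
apiVersion: v2
name: testing
description: subchart demo
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
  - name: application
    version: 0.1.0
    condition: application.enabled
  - name: configuration
    version: 0.1.0
    condition: configuration.enabled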

Why doesn't helm use the name defined in the deployment template?

i.e. the name from name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod below:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
ingress:
  enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod into a plain string value like "podname1234" and it isn't used. I even tried removing the name setting entirely, and the resulting pod name remains the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
If you do look up the Pod and kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need the container name is when you kubectl logs (or, more rarely, kubectl exec) into a multi-container pod; if you are in that case, you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
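With a short name like this, getting logs from that container is just kubectl logs chartname-project1234-module5678-dc7db787-skqvv -c container, rather than spelling out the longer generated container name as well.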