We use an umbrella Helm chart to deploy our services, and the services themselves are contained in subcharts. There are also some global ConfigMaps, used by multiple services, that are deployed by the umbrella chart. A simplified structure looks like this:
├── Chart.yaml
├── templates
│ ├── global-config.yaml
├── charts
│ ├── subchart1
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── deployment1.yaml
│ ├── subchart2
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── deployment2.yaml
...
What I need is to put a checksum of global-config.yaml into deployment1.yaml and deployment2.yaml as an annotation, as described in https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
I can define a template like this in the umbrella chart:
{{- define "configmap.checksum" }}{{ include (print $.Template.BasePath "/global-config.yaml") . | sha256sum }}{{ end }}
and then use {{ include "configmap.checksum" . }} in the deployment yamls, since templates are global. But $.Template.BasePath is resolved when the include is evaluated, so it actually points to the template directory of the subchart where it is included. I have played around with '..', with .Files, and with passing a different context into the template and into the include (such as the global context $), but none of that was successful: I was always locked into the specific subchart at rendering time.
Is there a solution to this, or a different approach that solves the problem? Or is this something that simply cannot be done in Helm? I'm grateful for any solution.
I found a workaround that requires you to refactor the contents of your global ConfigMap into a named template in the umbrella chart. You won't be able to use regular values from the umbrella chart in your ConfigMap, otherwise you'll get an error when trying to render the template in a subchart; you'll have to use global values instead.
{{/*
Create the contents of the global-config ConfigMap
*/}}
{{- define "my-umbrella-chart.global-config-contents" -}}
spring.activemq.user={{ .Values.global.activeMqUsername | quote }}
spring.activemq.password={{ .Values.global.activeMqPassword | quote }}
jms.destinations.itemCreationQueue={{ .Release.Namespace }}-itemCreationQueue
{{- end }}
kind: ConfigMap
apiVersion: v1
metadata:
  name: global-config
data:
{{- include "my-umbrella-chart.global-config-contents" . | nindent 2 }}
Then you add a checksum of my-umbrella-chart.global-config-contents in deployment1.yaml and deployment2.yaml as an annotation like so:
annotations:
  # Automatically roll the deployment when the configmap contents change
  checksum/global-config: {{ include "my-umbrella-chart.global-config-contents" . | sha256sum }}
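In each subchart's deployment, the annotation has to sit on the pod template (not on the Deployment's own metadata) so that a changed checksum modifies the pod spec and triggers a rollout. A minimal sketch of deployment1.yaml, with illustrative names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
spec:
  selector:
    matchLabels:
      app: deployment1
  template:
    metadata:
      labels:
        app: deployment1
      annotations:
        # When the rendered config contents change, this hash changes,
        # the pod template changes, and Kubernetes performs a rolling update.
        checksum/global-config: {{ include "my-umbrella-chart.global-config-contents" . | sha256sum }}
    spec:
      containers:
        - name: app
          image: example/app:1.0  # placeholder
```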
How can we read an index.js file into a ConfigMap with the tpl function in Helm? That is, how do we read an index.js file like the one below:
exports.CDN_URL = 'http://100.470.255.255/';
exports.CDN_NAME = 'staticFilesNew';
exports.REDIS = [
{
host: 'redis-{{.Release.Name}}.{{.Release.Namespace}}.svc.cluster.local',
port: '26379',
},
];
file structure is below
.
├── Chart.yaml
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── configmap.yaml
│ ├── deployment.yaml
│ └── service.yaml
└── values
    ├── values.yaml
    └── index.js
I tried the solution below in configmap.yaml but am getting an error:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test_config
data:
{{- tpl (.Files.Get (printf "values/index.js" .)) . | quote 12 }}
Getting ERROR
Error: YAML parse error on app-name/templates/configmap.yaml: error converting YAML to JSON: yaml: line 6: did not find expected key
helm.go:88: [debug] error converting YAML to JSON: yaml: line 6: did not find expected key
What you are trying does work, but I think you have issues with the indentation.
Use nindent to ensure all lines are indented consistently, and don't quote the value.
kind: ConfigMap
apiVersion: v1
metadata:
  name: test
  namespace: test
data:
  index.js: | {{- tpl (.Files.Get "values/index.js") $ | nindent 4 }}
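Assuming the index.js from the question, a release named myrelease installed into namespace mynamespace (both placeholders) would then render roughly to:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: test
  namespace: test
data:
  index.js: |
    exports.CDN_URL = 'http://100.470.255.255/';
    exports.CDN_NAME = 'staticFilesNew';
    exports.REDIS = [
      {
        host: 'redis-myrelease.mynamespace.svc.cluster.local',
        port: '26379',
      },
    ];
```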
I have an umbrella helm chart which has subcharts. Some of the subcharts have pre-install/pre-upgrade hooks (jobs).
Every time I run helm upgrade <release name> <umbrella chart> the pre-upgrade hooks of all subcharts are executed, even if there are no changes in corresponding subcharts.
Is this expected behavior? And is there a way to run a subchart's hooks only when there are changes in that subchart?
UPD: more details
So this is the chart structure:
parent_chart/
├─ charts/
│ ├─ child_chart_1/
│ │ ├─ templates/
│ │ │ ├─ hooks_1.yaml
│ │ │ ├─ deployment_1.yaml
│ │ ├─ Chart.yaml
│ │ ├─ values.yaml
│ ├─ child_chart_2/
│ │ ├─ templates/
│ │ │ ├─ deployment_2.yaml
│ │ │ ├─ hooks_2.yaml
│ │ ├─ Chart.yaml
│ │ ├─ values.yaml
├─ values.yaml
├─ Chart.yaml
hook manifest looks like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migration-hook
  namespace: {{ .Values.namespace }}
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service | quote }}
    app.kubernetes.io/instance: {{ .Release.Name | quote }}
    helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install, pre-upgrade
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
...
Let's assume this chart is installed.
Then I make changes in child_chart_1/values.yaml and upgrade the umbrella chart:
helm upgrade release_name parent_chart
During the upgrade hooks from both hooks_1.yaml and hooks_2.yaml are executed, but I need only hooks_1.yaml to run, because there are no changes in child_chart_2.
Try adding the annotation checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} so that a deployment only rolls when its configuration actually changes. Otherwise, if the deployment spec is unchanged, the application keeps running with the old configuration, resulting in an inconsistent deployment.
For more information please refer to this 'Chart Development Tips and Tricks' article
UPDATE
Then it's expected behaviour. What's your Helm version? This was definitely the expected behaviour in Helm 2, and there is an issue on GitHub describing a problem similar to yours.
We can also see a resolution comment explaining the new three-way strategic merge patches that should resolve this issue somehow, but in their examples I can't see that it was fixed for the pre-upgrade hook annotation. Feel free to reopen that issue and ping them.
I am trying to access the key-values from a custom values file secretvalues.yaml passed to helm with the -f parameter. The key-value in this file is being passed to the yaml file postgres.configmap.yml.
Here is my folder structure (there are a few other charts but I have removed them for simplicity):
├── k8shelmcharts
│ ├── Chart.yaml
│ ├── charts
│ │ ├── postgres-service
│ │ │ ├── Chart.yaml
│ │ │ ├── templates
│ │ │ │ ├── postgres.configmap.yml
│ │ │ │ ├── postgres.deployment.yml
│ │ │ │ └── postgres.service.yml
│ │ │ └── values.yaml
│ └── secretvalues.yaml
The contents of the values.yaml file in the postgres-services/ folder is
config:
  postgres_admin_name: "postgres"
The contents of the secretvalues.yaml file in the k8shelmchars/ folder is
secrets:
  postgres_admin_password: "password"
and the contents of the postgres.configmap.yml file in the postgres-services/ folder is
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  # property-like keys; each key maps to a simple value
  POSTGRES_USER: {{ .Values.config.postgres_admin_name }}
  POSTGRES_PASSWORD: {{ .secretvalues.secrets.postgres_admin_password }}
I have tried several combinations here, like .secretvalues.secrets.postgres_admin_password and .Secretvalues.secrets.postgres_admin_password, and I tried removing the secrets key, but to no avail.
When I run the command to install the charts helm install -f k8shelmcharts/secretvalues.yaml testapp k8shelmcharts/ --dry-run --debug
I get the error:
Error: template: flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml:8:37: executing "flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml" at <.secretvalues.parameters.postgres_admin_password>: nil pointer evaluating interface {}.parameters
helm.go:94: [debug] template: flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml:8:37: executing "flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml" at <.secretvalues.parameters.postgres_admin_password>: nil pointer evaluating interface {}.parameters
my question is how do I access the secret.postgres_admin_password ?
I am using helm3
Thanks!
Trying to access the key-values from the secretvalues.yaml file by using POSTGRES_PASSWORD: {{ .Values.config.postgres_admin_password }} in the postgres.configmap.yml seems to pull null/nil values.
I am getting the error when I run helm install testapp k8shelmcharts/ --dry-run --debug -f k8shelmcharts/secretvalues.yaml:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POSTGRES_PASSWORD
helm.go:94: [debug] error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POSTGRES_PASSWORD
When I try to debug the template using helm template k8shelmcharts/ --debug, I see:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  # property-like keys; each key maps to a simple value
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD:
indicating helm is not able to pull values from the secretvalues.yaml file. NOTE: I have updated the key from secrets to config in the secretvalues.yaml file.
Values from all files are merged into one Values object, so you access variables from secretvalues.yaml the same way you access the others, i.e.
.Values.secrets.postgres_admin_password
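Applied to the ConfigMap from the question, the template becomes the sketch below. One caveat: since postgres.configmap.yml lives in a subchart, the keys from secretvalues.yaml may additionally need to be nested under the subchart's name (e.g. a top-level postgres-service: key) or under global: to be visible inside it.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_USER: {{ .Values.config.postgres_admin_name }}
  POSTGRES_PASSWORD: {{ .Values.secrets.postgres_admin_password }}
```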
I have an issue when trying to use a file from a library chart. Helm fails when I try to access a file from the library.
I have followed the example from the library_charts documentation.
Everything is the same as the documentation except two parts:
I have added the file mylibchart/files/foo.conf and this file is referenced in mylibchart/templates/_configmap.yaml's data key (in the documentation, data is an empty object):
├── mychart
│ ├── Chart.yaml
│ └── templates
│ └── configmap.yaml
└── mylibchart
├── Chart.yaml
├── files
│ └── foo.conf
└── templates
├── _configmap.yaml
└── _util.yaml
cat mylibchart/templates/_configmap.yaml
{{- define "mylibchart.configmap.tpl" -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name | printf "%s-%s" .Chart.Name }}
data:
  fromlib: yes
{{ (.Files.Glob "files/foo.conf").AsConfig | nindent 2 }}
{{- end -}}

{{- define "mylibchart.configmap" -}}
{{- include "mylibchart.util.merge" (append . "mylibchart.configmap.tpl") -}}
{{- end -}}
The error is caused by the fact that mychart/files/foo.conf does not exist. If I create it, Helm no longer fails, but the ConfigMap contains mychart/files/foo.conf's content instead of mylibchart/files/foo.conf's content.
The file foo.conf does exist inside the file generated by "helm dependency update" (mychart/charts/mylibchart-0.1.0.tgz).
How can I use that file from the .tgz file?
You can easily reproduce the issue by cloning the project: https://github.com/florentvaldelievre/helm-issue
Helm version:
version.BuildInfo{Version:"v3.2.3", GitCommit:"8f832046e258e2cb800894579b1b3b50c2d83492", GitTreeState:"clean", GoVersion:"go1.13.12"}
We have several microservices, and each of them has its own Helm chart and SCM repository. We also have two different clusters, one per stage and production environment. One of the microservices needs to use PostgreSQL. Due to company policy, a separate team created the Helm charts for PostgreSQL; we should use those and deploy them independently to our k8s cluster. Because we do not have our own Helm repository, I guess we need to use a ConfigMap, or Secrets, or both, to integrate PostgreSQL with our microservice.
It might be a general question, but I was not able to find specific examples of how to integrate a database without using a chart dependency. So I guess I need to add the database information as ENV variables in deployment.yaml as below, but what is the best practice for using a ConfigMap or Secrets, how should they look in the templates, and how should I pass the url, username and password per environment in a safe way?
env:
  - name: SPRING_DATASOURCE_URL
    valueFrom:
      configMapKeyRef:
        name: postgres-production
        key: jdbc-url
  - name: SPRING_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: postgres-production
        key: username
  - name: SPRING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-production
        key: password
Microservice's Tree
├── helm
│ └── oneapihub-mp
│ ├── Chart.yaml
│ ├── charts
│ ├── templates
│ │ ├── NOTES.txt
│ │ ├── _helpers.tpl
│ │ ├── deployment.yaml
│ │ ├── ingress.yaml
│ │ ├── networkpolicy.yaml
│ │ ├── service.yaml
│ │ ├── serviceaccount.yaml
│ │ └── tests
│ │ └── test-connection.yaml
│ ├── values.dev.yaml
│ ├── values.prod.yaml
│ ├── values.stage.yaml
I think this mysql helm chart answers your question.
There are 3 yamls you should check:
Deployment
Secret
Values.yaml
You can create your own password in values.yaml.
## Create a database user
##
# mysqlUser:
## Default: random 10 character string
# mysqlPassword:
It will then be picked up in the Secret and base64-encoded:
mysql-password: {{ .Values.mysqlPassword | b64enc | quote }}
Then it is consumed by the deployment:
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "mysql.secretName" . }}
      key: mysql-password
And a bit more about secrets based on kubernetes documentation.
Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image
Based on my knowledge that's how it should be done.
EDIT
I added mysql just as an example; there is a postgres chart. The idea is still the same.
I am confused about how I should create the config and secret yaml in my application?
You can create the secret before installing the chart, but you can also add a secret.yaml in templates and create the Secret there; it will be created along with the rest of the yamls when installing the chart and will take the credentials from values.dev/prod/stage.yamls.
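A minimal sketch of such a templates/secret.yaml, assuming a postgresPassword entry in the per-environment values files (both names are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-postgres
type: Opaque
data:
  # Secret data must be base64-encoded, hence the b64enc
  postgres-password: {{ .Values.postgresPassword | b64enc | quote }}
```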
Is using only this env variable enough?
Yes, as you can see in the answer: if you create a password in values.yaml, it is picked up by the Secret and referenced from the deployment with secretKeyRef.
should I deploy my application in same namespace?
I don't understand this question; you can specify a namespace for your application, or you can deploy everything in the default namespace.
I hope this answers your question. Let me know if you have any more questions.
If you have a requirement to install and manage the database separately from your container, you essentially need to treat it as an external database. Aside from the specific hostname, it doesn't really make any difference whether the database is running in the same namespace, a different namespace, or outside the cluster altogether.
If your ops team's PostgreSQL container generates the specific pair of ConfigMap and Secret you show in the question, and it will always be in the same Kubernetes namespace, then the only change I'd make is to make it parametrizable where exactly that deployment is.
# values.yaml
# postgresName has the name of the PostgreSQL installation.
postgresName: postgres-production

env:
  - name: SPRING_DATASOURCE_URL
    valueFrom:
      configMapKeyRef:
        name: {{ .Values.postgresName }}
        key: jdbc-url
If these are being provided as configuration values to Helm...
# values.yaml
postgres:
url: ...
username: ...
password: ...
...I'd probably put the username and password parts in a Secret and incorporate them as you've done here. I'd probably directly inject the URL part, though. There's nothing wrong with using a ConfigMap here and it matches the way you'd do things without Helm, but there's not a lot of obvious value to the extra layer of indirection.
env:
  - name: SPRING_DATASOURCE_URL
    value: {{ .Values.postgres.url | quote }}
  - name: SPRING_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: {{ template "chart.name" . }}
        key: postgresUsername
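The Secret referenced by that secretKeyRef would be a small template in the same chart; a sketch, reusing the assumed chart.name helper and key names from the snippet above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.name" . }}
type: Opaque
stringData:
  # stringData lets Kubernetes do the base64 encoding for us
  postgresUsername: {{ .Values.postgres.username | quote }}
  postgresPassword: {{ .Values.postgres.password | quote }}
```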
When you go to deploy the chart, you can provide an override set of Helm values. Put some sensible set of defaults in the chart's values.yaml file, but then maintain a separate values file for each environment.
helm upgrade --install myapp . -f values.qa.yaml
# where values.qa.yaml has database settings for your test environment
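A hypothetical values.qa.yaml for that layout could look like this (all values illustrative):

```yaml
# values.qa.yaml -- per-environment overrides
postgres:
  url: jdbc:postgresql://postgres-qa.internal:5432/myapp
  username: myapp_qa
  password: changeme  # prefer --set or a secrets manager for real credentials
```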