We have several microservices, and each of them has its own Helm chart and SCM repository. We also have two different clusters, one for the stage environment and one for production. One of the microservices needs to use PostgreSQL. Due to company policy, a separate team created the Helm chart for PostgreSQL; we have to use that chart and deploy it independently to our k8s cluster. Because we do not have our own Helm repository, I guess we need to use a ConfigMap, a Secret, or both to integrate PostgreSQL with our microservice.
It might be a general question, but I was not able to find specific examples of how to integrate a database without using a chart dependency. So I guess I need to add the database information as env variables in deployment.yaml as below, but what are the best practices for using a ConfigMap or a Secret, how should they look in the templates, and how should I pass the URL, username, and password per environment in a safe way?
env:
  - name: SPRING_DATASOURCE_URL
    valueFrom:
      configMapKeyRef:
        name: postgres-production
        key: jdbc-url
  - name: SPRING_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: postgres-production
        key: username
  - name: SPRING_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-production
        key: password
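For clarity, the ConfigMap and Secret those entries point at would look roughly like this (only a sketch; the host, database name, and credentials are placeholders, since the real objects come from the database team's chart):

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-production
data:
  jdbc-url: jdbc:postgresql://postgres-production:5432/mydb   # placeholder URL
---
apiVersion: v1
kind: Secret
metadata:
  name: postgres-production
type: Opaque
stringData:                # stringData takes plain values; the API server stores them base64-encoded
  username: myuser         # placeholder
  password: changeme       # placeholder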
Microservice's Tree
├── helm
│   └── oneapihub-mp
│       ├── Chart.yaml
│       ├── charts
│       ├── templates
│       │   ├── NOTES.txt
│       │   ├── _helpers.tpl
│       │   ├── deployment.yaml
│       │   ├── ingress.yaml
│       │   ├── networkpolicy.yaml
│       │   ├── service.yaml
│       │   ├── serviceaccount.yaml
│       │   └── tests
│       │       └── test-connection.yaml
│       ├── values.dev.yaml
│       ├── values.prod.yaml
│       ├── values.stage.yaml
I think this mysql helm chart answers your question.
There are 3 yamls you should check.
Deployment
Secret
Values.yaml
You can create your own password in values.yaml.
## Create a database user
##
# mysqlUser:
## Default: random 10 character string
# mysqlPassword:
This value is then picked up in the Secret and base64-encoded:
mysql-password: {{ .Values.mysqlPassword | b64enc | quote }}
Then it is consumed by the Deployment:
- name: MYSQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "mysql.secretName" . }}
      key: mysql-password
And a bit more about secrets based on kubernetes documentation.
Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Storing confidential information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image
Based on my knowledge that's how it should be done.
EDIT
I added mysql just as an example; there is also a postgres chart. The idea is still the same.
I am confused, how should I create the config and secret yaml in my application?
You can create the Secret before installing the chart, or you can add a secret.yaml in templates and define the Secret there; it will then be created together with the rest of the YAMLs when installing the chart and will take the credentials from values.dev/prod/stage.yaml.
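For example, a minimal templates/secret.yaml in your chart could look like the sketch below. The keys under .Values.postgres are assumptions (use whatever you define in values.dev/stage/prod.yaml), and the name uses the standard fullname helper that helm create puts in _helpers.tpl:

apiVersion: v1
kind: Secret
metadata:
  name: {{ include "oneapihub-mp.fullname" . }}-postgres
type: Opaque
data:
  username: {{ .Values.postgres.username | b64enc | quote }}
  password: {{ .Values.postgres.password | b64enc | quote }}

Your deployment.yaml then references this Secret with secretKeyRef, just like in your snippet, only with this Secret's name instead of postgres-production.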
is using only this env variable enough?
Yes; as you can see in the answer, if you create a password in values.yaml it is put into the Secret and then consumed by the Deployment through secretKeyRef.
should I deploy my application in the same namespace?
I don't fully understand this question; you can specify a namespace for your application, or you can deploy everything in the default namespace.
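One thing to keep in mind: configMapKeyRef and secretKeyRef can only reference objects in the Pod's own namespace, so the ConfigMap/Secret created by the PostgreSQL chart must end up in the same namespace as your microservice. For example, assuming a namespace called my-namespace (purely illustrative):

kubectl create namespace my-namespace
helm install postgres <path-to-the-postgres-chart> --namespace my-namespace
helm install oneapihub-mp ./helm/oneapihub-mp --namespace my-namespace -f helm/oneapihub-mp/values.dev.yaml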
I hope this answers your question. Let me know if you have any more questions.
If you have a requirement to install and manage the database separately from your container, you essentially need to treat it as an external database. Aside from the specific hostname, it doesn't really make any difference whether the database is running in the same namespace, a different namespace, or outside the cluster altogether.
If your ops team's PostgreSQL container generates the specific pair of ConfigMap and Secret you show in the question, and it will always be in the same Kubernetes namespace, then the only change I'd make is to parametrize exactly where that deployment is:
# values.yaml
# postgresName has the name of the PostgreSQL installation.
postgresName: postgres-production

env:
  - name: SPRING_DATASOURCE_URL
    valueFrom:
      configMapKeyRef:
        name: {{ .Values.postgresName }}
        key: jdbc-url
If these are being provided as configuration values to Helm...
# values.yaml
postgres:
  url: ...
  username: ...
  password: ...
...I'd probably put the username and password parts in a Secret and incorporate them as you've done here. I'd probably directly inject the URL part, though. There's nothing wrong with using a ConfigMap here and it matches the way you'd do things without Helm, but there's not a lot of obvious value to the extra layer of indirection.
env:
  - name: SPRING_DATASOURCE_URL
    value: {{ .Values.postgres.url | quote }}
  - name: SPRING_DATASOURCE_USERNAME
    valueFrom:
      secretKeyRef:
        name: {{ template "chart.name" . }}
        key: postgresUsername
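The Secret referenced by that secretKeyRef would then be another template in your own chart; a sketch (the key names mirror the snippet above, and the data comes from the values block shown earlier):

apiVersion: v1
kind: Secret
metadata:
  name: {{ template "chart.name" . }}
type: Opaque
data:
  postgresUsername: {{ .Values.postgres.username | b64enc | quote }}
  postgresPassword: {{ .Values.postgres.password | b64enc | quote }}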
When you go to deploy the chart, you can provide an override set of Helm values. Put some sensible set of defaults in the chart's values.yaml file, but then maintain a separate values file for each environment.
helm upgrade --install myapp . -f values.qa.yaml
# where values.qa.yaml has database settings for your test environment
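A per-environment values file might then look something like this (all values are placeholders):

# values.qa.yaml
postgres:
  url: jdbc:postgresql://postgres-qa.databases.svc.cluster.local:5432/app
  username: app_qa
  password: not-a-real-password

If you'd rather not keep the real password in a file, you can leave it out and pass it at deploy time with --set postgres.password=... instead.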
Related
I have an environment made of pods that address their target environment based on an environment variable called CONF_ENV that could be test, stage or prod.
The application running inside the Pod has the same source code across environments, the configuration file is picked according to the CONF_ENV environment variable.
I've encapsulated this CONF_ENV in *.properties files just because I may have to add more environment variables later, but I make sure that each property file contains the expected CONF_ENV, e.g.:
test.properties has CONF_ENV=test,
prod.properties has CONF_ENV=prod, and so on...
I struggle to make this work with Kustomize overlays, because I want to define a ConfigMap as a shared resource across all the pods within the same overlay, e.g. test (each pod in its own directory, along with other stuff when needed).
So the idea is:
base/ (shared) with the definition of the Namespace, the ConfigMap (and potentially other shared resources)
base/pod1/ with the definition of pod1 picking from the shared ConfigMap (this defaults to test, but in principle it could be different)
Then the overlays:
overlay/test that patches the base with CONF_ENV=test (e.g. for overlay/test/pod1/ and so on)
overlay/prod/ that patches the base with CONF_ENV=prod (e.g. for overlay/prod/pod1/ and so on)
Each directory has its own kustomization.yaml.
The above doesn't work: when I go into e.g. overlay/test/pod1/ and invoke kubectl kustomize . to check the output YAML, I get all sorts of errors depending on how I define the lists for the YAML keys bases: or resources:.
I am trying to share the ConfigMap across the entire CONF_ENV environment in an attempt to minimize the boilerplate YAML by leveraging the patching-pattern with Kustomize.
The Kubernetes / Kustomize YAML directory structure works like this:
├── base
│   ├── configuration.yaml               # I am trying to share this!
│   ├── kustomization.yaml
│   ├── my_namespace.yaml                # I am trying to share this!
│   ├── my-scheduleset-etl-misc
│   │   ├── kustomization.yaml
│   │   └── my_scheduleset_etl_misc.yaml
│   ├── my-scheduleset-etl-reporting
│   │   ├── kustomization.yaml
│   │   └── my_scheduleset_etl_reporting.yaml
│   └── test.properties                  # I am trying to share this!
└── overlay
    └── test
        ├── kustomization.yaml           # here I want to tell "go and pick up the shared resources in the base dir"
        ├── my-scheduleset-etl-misc
        │   ├── kustomization.yaml
        │   └── test.properties          # I've tried to share this one level above, but also to add it inside the "leaf" level for a given pod
        └── my-scheduleset-etl-reporting
            └── kustomization.yaml
Running kubectl with Kustomize:
sometimes complains that the shared namespace does not exist:
error: merging from generator &{0xc001d99530 { map[] map[]} {{ my-schedule-set-props merge {[CONF_ENV=test] [] [] } <nil>}}}:
id resid.ResId{Gvk:resid.Gvk{Group:"", Version:"v1", Kind:"ConfigMap", isClusterScoped:false}, Name:"my-schedule-set-props", Namespace:""}
does not exist; cannot merge or replace
sometimes doesn't allow to have shared resources inside an overlay:
error: loading KV pairs: env source files: [../test.properties]:
security; file '/my/path/to/yaml/overlay/test/test.properties'
is not in or below '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
sometimes doesn't allow cycles when I am trying to have multiple bases - the shared resources and the original pod definition:
error: accumulating resources: accumulation err='accumulating resources from '../':
'/my/path/to/yaml/overlay/test' must resolve to a file':
cycle detected: candidate root '/my/path/to/yaml/overlay/test'
contains visited root '/my/path/to/yaml/overlay/test/my-scheduleset-etl-misc'
The overlay kustomization.yaml files inside the pod dirs have:
bases:
  - ../                                       # tried with/without this to share the ConfigMap
  - ../../../base/my-scheduleset-etl-misc/
The kustomization.yaml at the root of the overlay has:
bases:
  - ../../base
The kustomization.yaml at the base dir contains this configuration for the ConfigMap:
# https://gist.github.com/hermanbanken/3d0f232ffd86236c9f1f198c9452aad9
configMapGenerator:
  - name: my-schedule-set-props
    namespace: my-ss-schedules
    envs:
      - test.properties
vars:
  - name: CONF_ENV
    objref:
      kind: ConfigMap
      name: my-schedule-set-props
      apiVersion: v1
    fieldref:
      fieldpath: data.CONF_ENV
configurations:
  - configuration.yaml
With configuration.yaml containing:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
varReference:
  - path: spec/confEnv/value
    kind: Pod
How do I do this?
How do I make sure that I minimise the amount of YAML by sharing the ConfigMap stuff and the Pod definitions as much as I can?
If I understand your goal correctly, I think you may be grossly over-complicating things. I think you want a common properties file defined in your base, but you want to override specific properties in your overlays. Here's one way of doing that.
In base, I have:
$ cd base
$ tree
.
├── example.properties
├── kustomization.yaml
└── pod1
    ├── kustomization.yaml
    └── pod.yaml
Where example.properties contains:
SOME_OTHER_VAR=somevalue
CONF_ENV=test
And kustomization.yaml contains:
resources:
  - pod1
configMapGenerator:
  - name: example-props
    envs:
      - example.properties
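For completeness, pod1/kustomization.yaml just lists the Pod manifest, and pod.yaml references the generated ConfigMap by its plain name (kustomize rewrites it to the hashed name at build time). A sketch consistent with the build output shown further down:

# pod1/kustomization.yaml
resources:
  - pod.yaml

# pod1/pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: alpine
      image: docker.io/alpine
      command:
        - sleep
        - "1800"
      envFrom:
        - configMapRef:
            name: example-props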
I have two overlays defined, test and prod:
$ cd ../overlays
$ tree
.
├── prod
│   ├── example.properties
│   └── kustomization.yaml
└── test
    └── kustomization.yaml
test/kustomization.yaml looks like this:
resources:
  - ../../base
It's just importing the base without any changes, since the value of CONF_ENV from the base directory is test.
prod/kustomization.yaml looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
configMapGenerator:
  - name: example-props
    behavior: merge
    envs:
      - example.properties
And prod/example.properties looks like:
CONF_ENV=prod
If I run kustomize build overlays/test, I get as output:
apiVersion: v1
data:
  CONF_ENV: test
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-7245222b9b
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-7245222b9b
    image: docker.io/alpine
    name: alpine
If I run kustomize build overlays/prod, I get:
apiVersion: v1
data:
  CONF_ENV: prod
  SOME_OTHER_VAR: somevalue
kind: ConfigMap
metadata:
  name: example-props-h4b5tc869g
---
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - command:
    - sleep
    - 1800
    envFrom:
    - configMapRef:
        name: example-props-h4b5tc869g
    image: docker.io/alpine
    name: alpine
That is, everything looks as you would expect given the configuration in base, but we have provided a new value for CONF_ENV.
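To apply one of the overlays to a cluster, you can use the Kustomize support built into kubectl:

kubectl apply -k overlays/test
# or
kubectl apply -k overlays/prod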
You can find all these files here.
I'm facing the problem that we use an umbrella Helm chart to deploy our services, but our services are contained in subcharts. There are some global ConfigMaps that multiple services use, and these are deployed by the umbrella chart. A simplified structure looks like this:
├── Chart.yaml
├── templates
│   ├── global-config.yaml
├── charts
│   ├── subchart1
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   ├── deployment1.yaml
│   ├── subchart2
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   ├── deployment2.yaml
...
What I need is to put a checksum of global-config.yaml into deployment1.yaml and deployment2.yaml as an annotation, like in https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
The problem is that I can define a template like this in the umbrella chart:
{{- define "configmap.checksum" }}{{ include (print $.Template.BasePath "/global-config.yaml") . | sha256sum }}{{ end }}
and then use {{ include "configmap.checksum" . }} in the deployment YAMLs, since templates are global. But $.Template.BasePath is resolved when the include happens, so it actually points to the template directory of the subchart where it is included. I have played around with '..' and with Files., and with passing another context to the template and to the include (for example the global context $), but none of that was successful: I was always locked into the specific subchart at rendering time.
Is there a solution to this, or a different approach to solve it? Or is this something that cannot be done in Helm? I'm grateful for any solution.
I found a workaround that requires you to refactor the contents of your global ConfigMap into a template in the umbrella chart. You won't be able to use regular values from the umbrella chart in your ConfigMap, otherwise you'll get an error when trying to render the template in a subchart; you'll have to use global values instead.
{{/*
Create the contents of the global-config ConfigMap
*/}}
{{- define "my-umbrella-chart.global-config-contents" -}}
spring.activemq.user={{ .Values.global.activeMqUsername | quote }}
spring.activemq.password={{ .Values.global.activeMqPassword | quote }}
jms.destinations.itemCreationQueue={{ .Release.Namespace }}-itemCreationQueue
{{- end }}
kind: ConfigMap
apiVersion: v1
metadata:
  name: global-config
data:
  {{- include "my-umbrella-chart.global-config-contents" . | nindent 2 }}
Then you add a checksum of my-umbrella-chart.global-config-contents in deployment1.yaml and deployment2.yaml as an annotation like so:
annotations:
  # Automatically roll deployment when the configmap contents changes
  checksum/global-config: {{ include "my-umbrella-chart.global-config-contents" . | sha256sum }}
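Placed in context, each subchart's Deployment carries the annotation on its Pod template, roughly like this (everything except the annotation is generic Deployment boilerplate, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1
spec:
  selector:
    matchLabels:
      app: deployment1
  template:
    metadata:
      labels:
        app: deployment1
      annotations:
        # roll the pods whenever the shared config contents change
        checksum/global-config: {{ include "my-umbrella-chart.global-config-contents" . | sha256sum }}
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0.0   # placeholder image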
I am using this example:
├── base
│   ├── kustomization.yaml
│   └── pod.yaml
├── dev
│   └── kustomization.yaml
├── kustomization.yaml
├── production
│   └── kustomization.yaml
└── staging
    └── kustomization.yaml
and in kustomization.yaml file in root:
resources:
  - ./dev
  - ./staging
  - ./production
I also have the image transformer code in the dev, staging, and production kustomization.yaml files:
images:
  - name: my-app
    newName: gcr.io/my-platform/my-app
To build a single deployment manifest, I use:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which simply works!
To build the deployment manifests for all overlays (dev, staging, production), I use:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
which uses the kustomization.yaml in root which contains all resources(dev, staging, production).
It does work and the final build is printed to the console, but without the image tag.
It seems like kustomize edit set image only updates the kustomization.yaml of the current directory.
Is there anything which can be done to handle this scenario in an easy and efficient way so the final output contains image tag as well for all deployments?
To test please use this repo
It took some time to realise what happens here. I'll explain step by step what happens and how it should work.
What happens
Firstly I re-created the same structure:
$ tree
.
├── base
│   ├── kustomization.yaml
│   └── pod.yaml
├── dev
│   └── kustomization.yaml
├── kustomization.yaml
└── staging
    └── kustomization.yaml
When you run this command for single deployment:
(cd dev && kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
you change the working directory to dev, manually override the image to gcr.io/my-platform/my-app, add the tag 0.0.2, and then render the deployment.
The thing is, the previously added transformer code gets overridden by the command above. You can remove the transformer code, run the command above, and get the same result. After running the command you will find that your dev/kustomization.yaml looks like this:
resources:
- ./../base
namePrefix: dev-
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
- name: my-app
  newName: gcr.io/my-platform/my-app
  newTag: 0.0.2
Then what happens when you run this command from the main directory:
(kustomize edit set image my-app=gcr.io/my-platform/my-app:0.0.2 && kustomize build .)
kustomize first goes into the overlays and applies the transformer code located in overlays/kustomization.yaml. When this part is finished, the image name is no longer my-app but gcr.io/my-platform/my-app.
At this point the kustomize edit command tries to find an image with the name my-app, can't find one, and therefore does NOT apply the tag.
What to do
You need to use the transformed image name if you run kustomize edit in the main working directory:
$ kustomize edit set image gcr.io/my-platform/my-app=*:0.0.4 && kustomize build .
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: dev-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: my-app
  name: stag-myapp-pod
spec:
  containers:
  - image: gcr.io/my-platform/my-app:0.0.4
    name: my-app
I am trying to access the key-values from a custom values file secretvalues.yaml passed to helm with the -f parameter. The key-value in this file is being passed to the template postgres.configmap.yml.
Here is my folder structure (there are a few other charts, but I have removed them for simplicity):
├── k8shelmcharts
│   ├── Chart.yaml
│   ├── charts
│   │   ├── postgres-service
│   │   │   ├── Chart.yaml
│   │   │   ├── templates
│   │   │   │   ├── postgres.configmap.yml
│   │   │   │   ├── postgres.deployment.yml
│   │   │   │   └── postgres.service.yml
│   │   │   └── values.yaml
│   └── secretvalues.yaml
The contents of the values.yaml file in the postgres-service/ folder are:
config:
  postgres_admin_name: "postgres"
The contents of the secretvalues.yaml file in the k8shelmcharts/ folder are:
secrets:
  postgres_admin_password: "password"
and the contents of the postgres.configmap.yml file in the postgres-service/ folder are:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  # property-like keys; each key maps to a simple value
  POSTGRES_USER: {{ .Values.config.postgres_admin_name }}
  POSTGRES_PASSWORD: {{ .secretvalues.secrets.postgres_admin_password }}
I have tried several combinations here, like .secretvalues.secrets.postgres_admin_password and .Secretvalues.secrets.postgres_admin_password, and also tried removing the secrets key, but to no avail.
When I run the command to install the charts helm install -f k8shelmcharts/secretvalues.yaml testapp k8shelmcharts/ --dry-run --debug
I get the error:
Error: template: flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml:8:37: executing "flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml" at <.secretvalues.parameters.postgres_admin_password>: nil pointer evaluating interface {}.parameters
helm.go:94: [debug] template: flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml:8:37: executing "flask-k8s-app/charts/postgresdb/templates/postgres.configmap.yml" at <.secretvalues.parameters.postgres_admin_password>: nil pointer evaluating interface {}.parameters
My question is: how do I access secrets.postgres_admin_password?
I am using helm3
Thanks!
Trying to access the key-values from the secretvalues.yaml file by using POSTGRES_PASSWORD: {{ .Values.config.postgres_admin_password }} in the postgres.configmap.yml seems to pull null/nil values.
I am getting the error when I run helm install testapp k8shelmcharts/ --dry-run --debug -f k8shelmcharts/secretvalues.yaml:
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POSTGRES_PASSWORD
helm.go:94: [debug] error validating "": error validating data: unknown object type "nil" in ConfigMap.data.POSTGRES_PASSWORD
When I try to debug the template using helm template k8shelmcharts/ --debug, I see:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  # property-like keys; each key maps to a simple value
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD:
indicating helm is not able to pull values from the secretvalues.yaml file. NOTE: I have updated the key from secrets to config in the secretvalues.yaml file.
Values from all files are merged into one object Values, so you should access variables from the secretvalues.yaml the same way you access the others, i.e.
.Values.secrets.postgres_admin_password
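Applied to the template in the question, postgres.configmap.yml would then look like this sketch. (One caveat: since postgres-service is a subchart, Helm only exposes to it the values nested under the subchart's own name or under global, so you may need to nest the secrets: block accordingly in secretvalues.yaml.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
data:
  POSTGRES_USER: {{ .Values.config.postgres_admin_name | quote }}
  POSTGRES_PASSWORD: {{ .Values.secrets.postgres_admin_password | quote }}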
My application requires us to be running multiple instances of a database, let's say InfluxDB.
The chart we are writing should allow us to run an arbitrary number of databases, based on the values passed to the chart, so I can't alias the influxdb chart a fixed number of times in the Chart.yaml file.
The way I want to solve this challenge is by having my main chart take a range of values that specify the configuration. A quick example of values.yaml:
databases:
  - type: influxdb
    name: influx1
    port: 9001
  - type: influxdb
    name: influx2
    port: 9002
I can iterate over this array with range easily, but I'm unsure of how to "call" the dependency chart from the main.yaml file. Directory tree:
main_chart
├── charts
│   └── influxdb-1.2.3.tgz
├── Chart.yaml
├── templates
│   └── main.yaml
└── values.yaml
I tried using {{- include "influxdb" .Values.some_test_config}}, but I get a No template influxdb associated with template gotpl error.
I also went through the Helm docs, but didn't find an answer.
Thanks for following through! Any thoughts?
You want to use helm chart dependencies with aliases:
https://helm.sh/docs/topics/charts/#alias-field-in-dependencies
Update your Chart.yaml to include:
dependencies:
  - name: influxdb
    repository: https://kubernetes-charts.storage.googleapis.com
    version: 1.2.3
    alias: influx1
  - name: influxdb
    repository: https://kubernetes-charts.storage.googleapis.com
    version: 1.2.3
    alias: influx2
Then values.yaml would look like this:
influx1:
  port: 9001
  <other chart values>
influx2:
  port: 9002
  <other chart values>
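After adding the dependencies to Chart.yaml, fetch them into charts/ and install as usual, for example:

helm dependency update main_chart/
helm install my-release main_chart/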