Retrieve and write TLS CRT Kubernetes secret to another pod in a Helm template

I have a Kubernetes cluster with Elasticsearch currently deployed.
The Elasticsearch coordinator node is accessible behind a service via a ClusterIP over HTTPS. It uses a self-signed TLS certificate.
I can retrieve the value of the CA:
kubectl get secret \
-n elasticsearch elasticsearch-coordinating-only-crt \
-o jsonpath="{.data.ca\.crt}" | base64 -d
-----BEGIN CERTIFICATE-----
MIIDIjCCAgqgAwIBAgIRANkAx51S
...
...
I need to provide this as a ca.crt to other app deployments.
Note: The Elasticsearch deployment is in an elasticsearch Kubernetes namespace. New deployments will be in different namespaces.
An example of this is a deployment of kafka that includes a kafka-connect-elasticsearch/ sink. The sink connector uses configuration such as:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
data:
  connect-standalone-custom.properties: |-
    bootstrap.servers={{ include "kafka.fullname" . }}-0.{{ include "kafka.fullname" . }}-headless.{{ .Release.Namespace }}.svc.{{ .Values.clusterDomain }}:{{ .Values.service.port }}
    key.converter.schemas.enable=false
    value.converter.schemas.enable=false
    offset.storage.file.filename=/tmp/connect.offsets
    offset.flush.interval.ms=10000
    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    plugin.path=/usr/local/share/kafka/plugins
  elasticsearch.properties: |-
    name=elasticsearch-sink
    connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
    tasks.max=4
    topics=syslog,nginx
    key.ignore=true
    schema.ignore=true
    connection.url=https://elasticsearch-coordinating-only.elasticsearch:9200
    type.name=kafka-connect
    connection.username=elastic
    connection.password=xxxxxxxx
    elastic.security.protocol=SSL
    elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt
    elastic.https.ssl.truststore.type=PEM
Notice the elastic.https.ssl.truststore.location=/etc/ssl/certs/elasticsearch-ca.crt; that's the file I need to put inside the kafka-based container.
What's the optimal way to do that with Helm templates?
Currently I have a fork of https://github.com/bitnami/charts/tree/master/bitnami/kafka. It adds 3 new templates under templates/:
kafka-connect-elasticsearch-configmap.yaml
kafka-connect-svc.yaml
kafka-connect.yaml
The configmap is shown above. The kafka-connect.yaml Deployment looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
spec:
  replicas: 1
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      app.kubernetes.io/component: connector
  template:
    metadata:
      labels: {{- include "common.labels.standard" . | nindent 8 }}
        app.kubernetes.io/component: connector
    spec:
      containers:
        - name: connect
          image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
          imagePullPolicy: Always
          command:
            - /bin/bash
            - -ec
            - bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
          ports:
            - name: connector
              containerPort: 8083
          volumeMounts:
            - name: configuration
              mountPath: /opt/bitnami/kafka/custom-config
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: configuration
          configMap:
            name: {{ include "kafka.fullname" . }}-connect
How can I modify these Kafka Helm charts to allow them to retrieve the value for kubectl get secret -n elasticsearch elasticsearch-coordinating-only-crt -o jsonpath="{.data.ca\.crt}" | base64 -d and write its content to /etc/ssl/certs/elasticsearch-ca.crt ?

Got this working and learned a few things in the process:
Secret resources reside in a namespace, and Secrets can only be referenced by Pods in that same namespace (ref). Therefore, I switched to using a shared namespace for elasticsearch + kafka (a cross-namespace alternative is sketched after these notes).
The secret can be used in a straightforward way as documented at https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets. This is not Helm-specific but rather a core Kubernetes feature.
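For reference, if the namespaces had to stay separate, the Secret could instead be copied into the consuming namespace. A minimal sketch, assuming jq is installed and the consumers live in a hypothetical kafka namespace (neither is part of my actual setup):
# Copy the CA secret into another namespace, stripping metadata tied to the original object
kubectl get secret elasticsearch-coordinating-only-crt -n elasticsearch -o json \
  | jq 'del(.metadata.namespace, .metadata.uid, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.ownerReferences)' \
  | kubectl apply -n kafka -f -
Note this is a point-in-time copy, so it would need to be repeated whenever the certificate rotates; the shared namespace avoids that.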
In my case this looked like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "kafka.fullname" . }}-connect
  labels: {{- include "common.labels.standard" . | nindent 4 }}
    app.kubernetes.io/component: connector
spec:
  replicas: 1
  selector:
    matchLabels: {{- include "common.labels.matchLabels" . | nindent 6 }}
      app.kubernetes.io/component: connector
  template:
    metadata:
      labels: {{- include "common.labels.standard" . | nindent 8 }}
        app.kubernetes.io/component: connector
    spec:
      containers:
        - name: connect
          image: REDACTED.dkr.ecr.REDACTED.amazonaws.com/kafka-connect-elasticsearch
          imagePullPolicy: Always
          command:
            - /bin/bash
            - -ec
            - bin/connect-standalone.sh custom-config/connect-standalone-custom.properties custom-config/elasticsearch.properties
          ports:
            - name: connector
              containerPort: 8083
          volumeMounts:
            - name: configuration
              mountPath: /opt/bitnami/kafka/custom-config
            - name: ca
              mountPath: /etc/ssl/certs
              readOnly: true
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: configuration
          configMap:
            name: {{ include "kafka.fullname" . }}-connect
        - name: ca
          secret:
            secretName: elasticsearch-coordinating-only-crt
This gets the kafka-connect pod up and running, and I can validate the certs are written there also:
$ kubectl exec -it -n elasticsearch kafka-connect-c4f4d7dbd-wbxfq \
-- ls -1 /etc/ssl/certs
ca.crt
tls.crt
tls.key
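As an extra sanity check, the mounted CA can be used to talk to Elasticsearch over HTTPS from inside the pod. This assumes curl is present in the image and uses a placeholder for the elastic password:
kubectl exec -n elasticsearch kafka-connect-c4f4d7dbd-wbxfq -- \
  curl --cacert /etc/ssl/certs/ca.crt \
       -u elastic:<password> \
       https://elasticsearch-coordinating-only.elasticsearch:9200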

Related

How to get a pod index inside a helm chart

I'm deploying a Kubernetes stateful set and I would like to get the pod index inside the helm chart so I can configure each pod with this pod index.
For example in the following template I'm using the variable {{ .Values.podIndex }} to retrieve the pod index in order to use it to configure my app.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          imagePullPolicy: Always
          name: {{ .Values.name }}
          command: ["launch"]
          args: ["-l", "{{ .Values.podIndex }}"]
          ports:
            - containerPort: 4000
      imagePullSecrets:
        - name: gitlab-registry
You can't do this in the way you're describing.
Probably the best path is to change your Deployment into a StatefulSet. Each pod launched from a StatefulSet has an identity, and each pod's hostname gets set to the name of the StatefulSet plus an index. If your launch command looks at hostname, it will see something like name-0 and know that it's the first (index 0) pod in the StatefulSet.
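For example, a small wrapper around the existing launch command could derive the index from the hostname. This is a sketch, not part of the original chart:
# entrypoint sketch: StatefulSet pods are named <statefulset-name>-<ordinal>,
# so the ordinal can be read from the hostname
ORDINAL="${HOSTNAME##*-}"   # e.g. "name-0" -> "0"
exec launch -l "$ORDINAL"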
A second path would be to create n single-replica Deployments using Go templating. This wouldn't be my preferred path, but you can:
{{ range $podIndex := until (int .Values.replicaCount) -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $.Values.name }}-{{ $podIndex }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ $.Values.name }}-{{ $podIndex }}
  template:
    metadata:
      labels:
        app: {{ $.Values.name }}-{{ $podIndex }}
    spec:
      containers:
        - name: {{ $.Values.name }}
          command: ["launch"]
          args: ["-l", "{{ $podIndex }}"]
{{ end -}}
(Note the $ prefix: inside the range block, . is rebound to the loop element, so $ is needed to reach the chart's root context.)
The actual flow here is that Helm reads in all of the template files and produces a block of YAML files, then submits these to the Kubernetes API server (with no templating directives at all), and the Kubernetes machinery acts on it. You can see what's being submitted by running helm template. By the time a Deployment is creating a Pod, all of the template directives have been stripped out; you can't make fields in the pod spec dependent on things like which replica it is or which node it got scheduled on.
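For example, from the chart directory (the release name in the Helm 3 form is arbitrary):
helm template .                  # Helm 2
helm template my-release .       # Helm 3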

How to convert k8s yaml to helm chart

Right now I'm deploying applications on k8s using yaml files.
Like the one below:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: flow
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: serviceA
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: serviceA-ingress
  namespace: flow
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - serviceA.xyz.com
      secretName: letsencrypt-prod
  rules:
    - host: serviceA.xyz.com
      http:
        paths:
          - path: /
            backend:
              serviceName: serviceA
              servicePort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: serviceA-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=serviceA-main
    server.port=8080
    logging.level.org.springframework.jdbc.core=debug
    lead.pg.url=serviceB.flow.svc:8080/lead
    task.pg.url=serviceB.flow.svc:8080/task
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: serviceA-deployment
  namespace: flow
spec:
  selector:
    matchLabels:
      app: serviceA
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: serviceA
    spec:
      containers:
        - name: serviceA
          image: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test:serviceA-v1
          command: [ "java", "-jar", "-agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n", "serviceA-service.jar", "--spring.config.additional-location=/config/application-dev.properties" ]
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: serviceA-application-config
              mountPath: "/config"
              readOnly: true
      volumes:
        - name: serviceA-application-config
          configMap:
            name: serviceA-config
            items:
              - key: application-dev.properties
                path: application-dev.properties
      restartPolicy: Always
Is there any automated way to convert this YAML into a Helm chart?
Or is there any other workaround or sample template that I can use to achieve this?
Even if there is no generic way, I would like to know how to convert this specific YAML into a Helm chart.
I also want to know which things I should keep configurable (i.e., convert into variables), as I can't just put these YAML resources into a templates folder and call it a Helm chart.
At heart a Helm chart is still just YAML, so to make that a chart, just drop the file under templates/ and add a Chart.yaml.
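A minimal Chart.yaml can be as small as the following (Helm 2 charts use apiVersion: v1, Helm 3 charts use apiVersion: v2; the chart name here is just an example):
# Chart.yaml
apiVersion: v1
name: my-app
description: Chart wrapping my existing Kubernetes manifests
version: 0.1.0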
There is no unambiguous way to map k8s YAML to a Helm chart, because the app/chart maintainer has to decide:
which app parameters can be modified by chart users
which of these parameters are mandatory
what the default values are, etc.
So creating a Helm chart is a manual process, but it contains a lot of routine steps. For example, most chart creators want to:
remove the namespace from templates to set it with helm install -n
remove fields generated by k8s
add helm release name to resource name
preserve correct links between templated names
move some obvious parameters to values.yaml like:
container resources
service type and ports
configmap/secret values
image repo:tag
I've created a CLI called helmify to automate the steps listed above.
It reads a list of k8s objects from stdin and creates a helm chart from it.
You can install it with brew: brew install arttor/tap/helmify. Then use it to generate a chart from a YAML file:
cat my-app.yaml | helmify mychart
or from a directory <my_directory> containing YAML files:
awk 'FNR==1 && NR!=1 {print "---"}{print}' /<my_directory>/*.yaml | helmify mychart
Both commands will create a mychart Helm chart directory from your k8s objects, similar to the helm create command.
Here is a chart generated by helmify from the yaml published in the question:
mychart
├── Chart.yaml
├── templates
│   ├── _helpers.tpl
│   ├── config.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── serviceA.yaml
└── values.yaml
# values.yaml
config:
  applicationDevProperties:
    lead:
      pg:
        url: serviceB.flow.svc:8080/lead
    logging:
      level:
        org:
          springframework:
            jdbc:
              core: debug
    server:
      port: "8080"
    spring:
      application:
        name: serviceA-main
    task:
      pg:
        url: serviceB.flow.svc:8080/task
deployment:
  replicas: 1
image:
  serviceA:
    repository: xyzaccount.dkr.ecr.eu-west-1.amazonaws.com/flow/test
    tag: serviceA-v1
serviceA:
  ports:
    - port: 8080
      targetPort: 8080
  type: ClusterIP
---
# templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "mychart.fullname" . }}-config
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
data:
  application-dev.properties: |
    spring.application.name={{ .Values.config.applicationDevProperties.spring.application.name | quote }}
    server.port={{ .Values.config.applicationDevProperties.server.port | quote }}
    logging.level.org.springframework.jdbc.core={{ .Values.config.applicationDevProperties.logging.level.org.springframework.jdbc.core | quote }}
    lead.pg.url={{ .Values.config.applicationDevProperties.lead.pg.url | quote }}
    task.pg.url={{ .Values.config.applicationDevProperties.task.pg.url | quote }}
---
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}-deployment
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels:
      app: serviceA
    {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app: serviceA
      {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - command:
        - java
        - -jar
        - -agentlib:jdwp=transport=dt_socket,address=9098,server=y,suspend=n
        - serviceA-service.jar
        - --spring.config.additional-location=/config/application-dev.properties
        image: {{ .Values.image.serviceA.repository }}:{{ .Values.image.serviceA.tag | default .Chart.AppVersion }}
        name: serviceA
        ports:
        - containerPort: 8080
        resources: {}
        volumeMounts:
        - mountPath: /config
          name: serviceA-application-config
          readOnly: true
      restartPolicy: Always
      volumes:
      - configMap:
          items:
          - key: application-dev.properties
            path: application-dev.properties
          name: {{ include "mychart.fullname" . }}-config
        name: serviceA-application-config
---
# templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ include "mychart.fullname" . }}-ingress
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: serviceA.xyz.com
    http:
      paths:
      - backend:
          serviceName: serviceA
          servicePort: 8080
        path: /
  tls:
  - hosts:
    - serviceA.xyz.com
    secretName: letsencrypt-prod
---
# templates/serviceA.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}-serviceA
  labels:
  {{- include "mychart.labels" . | nindent 4 }}
spec:
  type: {{ .Values.serviceA.type }}
  selector:
    app: serviceA
  {{- include "mychart.selectorLabels" . | nindent 4 }}
  ports:
  {{- .Values.serviceA.ports | toYaml | nindent 2 -}}
https://github.com/mailchannels/palinurus/tree/master
This Git repo contains a Python script that will convert basic YAML files to Helm charts.
You can use the tool helmtrans: https://github.com/codeandcode0x/helmtrans. It can transform YAML to Helm charts automatically.
Example:
➜ helmtrans yamltohelm -p [source path] -o [output path]

Deploying a kubernetes job via helm

I am new to helm and I have tried to deploy a few tutorial charts. Had a couple of queries:
I have a Kubernetes job which I need to deploy. Is it possible to deploy a job via helm?
Also, currently my Kubernetes job is deployed from my custom Docker image and it runs a bash script to complete the job. I wanted to pass a few parameters to this chart/job so that the bash commands take the input parameters. That's the reason I decided to move to Helm, because it provides more flexibility. Is that possible?
You can use Helm. Helm installs all the Kubernetes resources defined inside the templates folder, such as Jobs, Pods, ConfigMaps, and Secrets. You can control the order of installation with Helm hooks. Helm offers hooks like pre-install, post-install, and pre-delete relative to the deployment lifecycle. If two or more jobs use the same hook (e.g. pre-install), their hook weights are compared to decide the installation order.
|-scripts/runjob.sh
|-templates/post-install.yaml
|-Chart.yaml
|-values.yaml
You often need to change the variables in the script per environment, so instead of hardcoding variables in the script, you can pass parameters to it by setting them as environment variables on your custom Docker image. Change the values in values.yaml instead of changing your script.
values.yaml
key1:
  someKey1: value1
key2:
  someKey2: value1
post-install.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: post-install-job
  labels:
    provider: stackoverflow
    microservice: {{ template "name" . }}
    release: "{{ .Release.Name }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,pre-rollback
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        provider: stackoverflow
        microservice: {{ template "name" . }}
        release: "{{ .Release.Name }}"
        app: {{ template "fullname" . }}
    spec:
      restartPolicy: Never
      containers:
        - name: post-install-job
          image: "custom-docker-image:v1"
          command: ["/bin/sh", "-c", {{ .Files.Get "scripts/runjob.sh" | quote }} ]
          env:
            # setting KEY1 as environment variable in the container; the value of KEY1 in the container is value1 (read from values.yaml)
            - name: KEY1
              value: {{ .Values.key1.someKey1 }}
            - name: KEY2
              value: {{ .Values.key2.someKey2 }}
runjob.sh
# you can access the values as environment variables
echo $KEY1
echo $KEY2
# some stuff
You can use Helm Hooks to run jobs. Depending on how you set up your annotations you can run a different type of hook (pre-install, post-install, pre-delete, post-delete, pre-upgrade, post-upgrade, pre-rollback, post-rollback, crd-install). An example from the doc is as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{.Release.Name}}"
  labels:
    app.kubernetes.io/managed-by: {{.Release.Service | quote }}
    app.kubernetes.io/instance: {{.Release.Name | quote }}
    helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
  annotations:
    # This is what defines this resource as a hook. Without this line, the
    # job is considered part of the release.
    "helm.sh/hook": post-install
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: "{{.Release.Name}}"
      labels:
        app.kubernetes.io/managed-by: {{.Release.Service | quote }}
        app.kubernetes.io/instance: {{.Release.Name | quote }}
        helm.sh/chart: "{{.Chart.Name}}-{{.Chart.Version}}"
    spec:
      restartPolicy: Never
      containers:
      - name: post-install-job
        image: "alpine:3.3"
        command: ["/bin/sleep","{{default "10" .Values.sleepyTime}}"]
You can pass your parameters as secrets or configMaps to your job as you would to a pod.
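For instance, a hook Job can read a parameter from an existing Secret exactly as a regular Pod would. This is a minimal sketch; the secret name and key are placeholders, not something defined elsewhere in this question:
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-param-demo"
  annotations:
    "helm.sh/hook": post-install
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: param-demo
          image: "alpine:3.3"
          command: ["/bin/sh", "-c", "echo running with $MY_PARAM"]
          env:
            - name: MY_PARAM
              valueFrom:
                secretKeyRef:
                  name: my-job-secret   # placeholder: an existing Secret
                  key: param            # placeholder key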
I had a similar scenario where I had a job I wanted to pass a variety of arguments to. I ended up doing something like this:
Template:
apiVersion: batch/v1
kind: Job
metadata:
  name: myJob
spec:
  template:
    spec:
      containers:
        - name: myJob
          image: myImage
          args: {{ .Values.args }}
Command (powershell):
helm template helm-chart --set "args={arg1\, arg2\, arg3}" | kubectl apply -f -

Why doesn't helm use the name defined in the deployment template?

i.e. from name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod below
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
  ingress:
    enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod into a plain string value like "podname1234" and it isn't used. I even tried removing the name setting entirely and the resulting pod name remained the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
If you do look up the Pod and kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need that container name is if you need to kubectl logs (or, more rarely, kubectl exec) in a multi-container pod; if you are in that case, you'll appreciate having a shorter name, and since container names are always scoped to the specific pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container
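For example, to read the logs of that one container by name (pod name taken from the output above):
kubectl logs chartname-project1234-module5678-dc7db787-skqvv -c project1234-module5678-pod
With the shorter fixed name shown above, the flag would simply be -c container.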

Import data to config map from kubernetes secret

I'm using a Kubernetes ConfigMap that contains database configuration for an app, and there is a Secret that holds the database password.
I need to use this secret in the ConfigMap. When I add an environment variable in the ConfigMap and set its value in the pod deployment from the secret, I'm not able to connect to MySQL with the password, because the ConfigMap keeps the literal string of the variable rather than the substituted value.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  APP_CONFIG: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "$DB_PASSWORD"
and the deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: app
          image: simple-app-image
          ports:
            - name: "8080"
              containerPort: 8080
          env:
            - name: APP_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: APP_CONFIG
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "mysql-secret"
                  key: "mysql-root-password"
Note: the secret exists and I'm able to get the "mysql-root-password" value and use it to log in to the database.
Kubernetes can't make that substitution for you; you should do it with shell in the entrypoint of the container.
This is a working example. I modify the default entrypoint to create a new variable with that substitution. After this command you should append the image's desired entrypoint.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: app
          image: simple-app-image
          command:
            - /bin/bash
            - -c
          args:
            - "NEW_APP_CONFIG=$(echo $APP_CONFIG | envsubst) && echo $NEW_APP_CONFIG && <INSERT IMAGE ENTRYPOINT HERE>"
          ports:
            - name: "app"
              containerPort: 8080
          env:
            - name: APP_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: APP_CONFIG
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: "mysql-secret"
                  key: "mysql-root-password"
You could do something like this in Helm:
{{- define "getValueFromSecret" }}
{{- $len := (default 16 .Length) | int -}}
{{- $obj := (lookup "v1" "Secret" .Namespace .Name).data -}}
{{- if $obj }}
{{- index $obj .Key | b64dec -}}
{{- else -}}
{{- randAlphaNum $len -}}
{{- end -}}
{{- end }}
Then you could do something like this in configmap:
{{- include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "<secret_name>" "Length" 10 "Key" "<key>") -}}
The secret should already be present at deploy time, or you can control the order of deployment using https://github.com/vmware-tanzu/carvel-kapp-controller. Note that lookup returns an empty result when rendering with helm template or helm install --dry-run, which is why the helper above falls back to randAlphaNum.
I would transform the whole ConfigMap into a Secret and put the database password directly in there.
Then you can mount the Secret as a file in a volume and use it like a regular config file in the container. A sketch of that approach follows.
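A rough sketch, reusing the config from the question; the Secret name is made up, and stringData is used so the values can be written in plain text:
apiVersion: v1
kind: Secret
metadata:
  name: app-config            # hypothetical replacement for the "config" ConfigMap
type: Opaque
stringData:
  app-config.yaml: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "<real password here>"
The Deployment then mounts it instead of referencing the ConfigMap:
      volumes:
        - name: app-config
          secret:
            secretName: app-config
and in the container spec:
          volumeMounts:
            - name: app-config
              mountPath: /config
              readOnly: true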