How can I reuse common configuration across different Kubernetes manifests?

Assume I have this manifest:
apiVersion: batch/v1
kind: Job
metadata:
  name: initialize-assets-fixtures
spec:
  template:
    spec:
      initContainers:
      - name: wait-for-minio
        image: bitnami/minio-client
        env:
        - name: MINIO_SERVER_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: access-key
        - name: MINIO_SERVER_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: secret-key
        - name: MINIO_SERVER_HOST
          value: minio
        - name: MINIO_SERVER_PORT_NUMBER
          value: "9000"
        - name: MINIO_ALIAS
          value: minio
        command:
        - /bin/sh
        - -c
        - |
          mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
      containers:
      - name: initialize-assets-fixtures
        image: bitnami/minio
        env:
        - name: MINIO_SERVER_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: access-key
        - name: MINIO_SERVER_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: minio
              key: secret-key
        - name: MINIO_SERVER_HOST
          value: minio
        - name: MINIO_SERVER_PORT_NUMBER
          value: "9000"
        - name: MINIO_ALIAS
          value: minio
        command:
        - /bin/sh
        - -c
        - |
          mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
          for category in `ls`; do
            for f in `ls $category/*` ; do
              mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
            done
          done
      restartPolicy: Never
You see I have one initContainer and one container here. In both containers I have the same configuration, i.e. the same env section.
Assume I have yet another Job manifest where I use the very same env section again.
That's a lot of duplicated configuration that I bet can be simplified drastically, but I don't know how to do it. Any hint? Any link to some documentation? After some googling I was not able to come up with anything useful. Maybe with kustomize, but I'm not sure. Or maybe I'm doing it the wrong way with all those environment variables, but I don't think I have a choice, depending on the service I'm using (here it's minio, but I want to do the same kind of thing with other services, which might not be as flexible as minio).

To my knowledge, you have these three options:
Kustomize
Helm
ConfigMap
ConfigMap
You can use either kubectl create configmap or a ConfigMap generator in kustomization.yaml to create a ConfigMap.
Each data source corresponds to a key-value pair in the ConfigMap, where
key = the file name or the key you provided on the command line
value = the file contents or the literal value you provided on the command line.
More about how to use it in a pod here.
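For example, a minimal sketch for the question above (the ConfigMap name minio-env is hypothetical; note that envFrom exposes keys verbatim, so the secret-backed variables, whose env names differ from their secret keys, stay as explicit env entries):
apiVersion: v1
kind: ConfigMap
metadata:
  name: minio-env              # hypothetical name
data:
  MINIO_SERVER_HOST: minio
  MINIO_SERVER_PORT_NUMBER: "9000"
  MINIO_ALIAS: minio
Each container can then pull the shared values in one reference instead of repeating them:
envFrom:
  - configMapRef:
      name: minio-env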
Helm
As @Matt mentioned in the comments, you can use Helm:
helm lets you template the yaml with values. Also once you get into it there are ways to create and include partial templates – Matt
By the way, there is a minio chart maintained for Helm; you might take a look at how it is done there.
Kustomize
How you could do that with kustomize is well described here and here.
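For instance, a sketch of a kustomization.yaml using a ConfigMap generator (the name minio-env is hypothetical; kustomize appends a content hash to the generated name and rewrites references to it):
configMapGenerator:
  - name: minio-env            # hypothetical; becomes e.g. minio-env-<hash>
    literals:
      - MINIO_SERVER_HOST=minio
      - MINIO_SERVER_PORT_NUMBER=9000
      - MINIO_ALIAS=minio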
Let me know if you have any more questions.

So, long story short: to solve my problem, I first created a new chart for my service and turned the k8s manifests I had into Helm templates. Then I added the following code to _helpers.tpl:
{{/*
Common minio environment variables setup
*/}}
{{- define "minio.envvarsblock" -}}
- name: MINIO_SERVER_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: access-key
- name: MINIO_SERVER_SECRET_KEY
  valueFrom:
    secretKeyRef:
      name: {{ .Values.minio.fullname }}
      key: secret-key
- name: MINIO_SERVER_HOST
  value: {{ .Values.minio.fullname }}
- name: MINIO_SERVER_PORT_NUMBER
  value: {{ .Values.minio.server.port | quote }}
- name: MINIO_ALIAS
  value: {{ .Values.minio.client.alias }}
{{- end -}}

{{/*
Wait for minio init container definition
*/}}
{{- define "wait-for-minio" -}}
- name: wait-for-minio
  image: {{ .Values.minio.client.image }}
  env: {{- include "minio.envvarsblock" . | nindent 4 }}
  command:
  - /bin/sh
  - -c
  - |
    mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
{{- end -}}
The first define above lets me reuse the env section throughout all my templates, and the second lets me reuse an initContainer that I need all over the place too. I was then able to inject those partial templates into my Helm templates like so (taking the example from my original post):
{{- if .Values.fixtures.enabled -}}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "chart.fullname" . }}-init-fixtures
  labels:
{{ include "chart.labels" . | indent 4 }}
spec:
  template:
    spec:
      initContainers:
      {{- include "wait-for-minio" . | nindent 6 }}
      containers:
        - name: {{ .Chart.Name }}-init-fixtures
          image: {{ .Values.image }}
          env: {{- include "minio.envvarsblock" . | nindent 10 }}
          command:
          - /bin/sh
          - -c
          - |
            mc config host add ${MINIO_ALIAS} http://${MINIO_SERVER_HOST}:${MINIO_SERVER_PORT_NUMBER} ${MINIO_SERVER_ACCESS_KEY} ${MINIO_SERVER_SECRET_KEY}
            for category in `ls`; do
              for f in `ls $category/*` ; do
                mc cp $f ${MINIO_ALIAS}/$category/$(basename $f)
              done
            done
      restartPolicy: OnFailure
{{- end -}}

Related

Combining ENV variables in helm chart

Based on this SO, this should work and I'm not sure what I'm missing.
I'm trying to combine env variables in a helm chart, TARGET and TARGET_KEY, but I'm getting:
- name: TARGET_KEY # combining keys together
  value: Hello $(TARGET)
I'm expecting:
- name: TARGET_KEY # combining keys together
  value: Hello World
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: myapp
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        ports:
        - containerPort: 8080
        env:
        - name: TARGET
          value: "World"
        - name: APIKEY
          valueFrom: # get single key from secret at key
            secretKeyRef:
              name: {{ .Values.keys.name }}
              key: apiKey
        - name: TARGET_KEY # combining keys together
          value: Hello $(TARGET)
        envFrom: # set ENV variables from all the values in secret
        - secretRef:
            name: {{ .Values.keys.name }}
I am using ArgoCD to sync the Helm charts, and checking the newly deployed pod's ENV vars.
@David is correct. The ENV variable shown in the template and in the pod description keeps the template placeholder, but once I ssh'ed into the pod, printenv showed the env variable was properly filled in.
However, I did read there are issues with alphabetical sorting and ordering when mixing multiple ENV vars this way. That's a topic for another SO question.
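For reference, Kubernetes only expands $(VAR) when VAR is declared earlier in the same env list, so a safe ordering looks like:
env:
  - name: TARGET            # must be declared first
    value: "World"
  - name: TARGET_KEY        # $(TARGET) resolves because TARGET is defined above
    value: Hello $(TARGET)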

Helm Environment Variables in if else

When building the image path, I want to fetch the docker registry address from the ConfigMap.
I can't hard-code the registry address in the values.yaml file because it is different for each customer, and I don't want to ask customers to enter this input manually. These helm charts are deployed via ArgoCD, so fetching the registry IP via shell and then invoking the helm command is not an option either.
I tried the code below; it isn't working because the env variables are not available in the context where the image path is built.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "helm-guestbook.fullname" . }}
spec:
  template:
    metadata:
      labels:
        app: {{ template "helm-guestbook.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        {{- if eq .Values.isOnPrem "true" }}
        image: {{ printf "%s/%s:%s" $dockerRegistryIP .Values.image.repository .Values.image.tag }}
        {{- else }}
        env:
        - name: DOCKER_REGISTRY_IP
          valueFrom:
            configMapKeyRef:
              name: docker-registry-config
              key: DOCKER_REGISTRY_IP
        {{- end }}
Any pointers on how I can solve this using Helm itself? Thanks.
Check out the lookup function: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
Though this could get very complicated very quickly, so be careful not to overuse it.
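For illustration, a minimal sketch of lookup for this case, reading the registry IP from the existing ConfigMap at render time (one caveat: lookup returns an empty result during helm template and --dry-run, which is how ArgoCD normally renders charts, so check whether it fits the ArgoCD setup described above):
{{- /* inside the container spec; names taken from the question */}}
{{- $cm := lookup "v1" "ConfigMap" .Release.Namespace "docker-registry-config" }}
{{- if $cm }}
image: {{ printf "%s/%s:%s" (index $cm.data "DOCKER_REGISTRY_IP") .Values.image.repository .Values.image.tag }}
{{- end }}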

helm reference secret in deployment yaml

I'm looking for a possible way to reference the secrets in my deployment.yaml (one-liner).
Currently I'm using this:
containers:
  - name: {{ template "myapp.name" . }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: Always
    env:
      - name: COUCHDB_USER
        valueFrom:
          secretKeyRef:
            name: {{ .Release.Name }}-secrets
            key: COUCHDB_USER
      - name: COUCHDB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: {{ .Release.Name }}-secrets
            key: COUCHDB_PASSWORD
With the minimal modification possible, I want to achieve something like this:
containers:
  - name: {{ template "myapp.name" . }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: Always
    env:
      - name: COUCHDB_URL
        value: http://${COUCHDB_USER}:${COUCHDB_PASSWORD}@{{ .Release.Name }}-couchdb:5984
Just curious if I can do this in one step during the deployment, instead of passing two env vars and parsing them in my application.
I am not seeing any way to achieve it without setting COUCHDB_USER and COUCHDB_PASSWORD in the container env.
One workaround is: you can specify your secret in the container's envFrom, and all your secret keys will be converted to environment variables. Then you can use those environment variables to create your composite env var (i.e. COUCHDB_URL).
FYI, to create an env var from another env var in Kubernetes, the $( ) syntax is used; curly braces ${ } won't work at this very moment.
Here is a sample:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  COUCHDB_USER: YWRtaW4=
  COUCHDB_PASSWORD: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      envFrom:
        - secretRef:
            name: mysecret
      env:
        - name: COUCHDB_URL
          value: http://$(COUCHDB_USER):$(COUCHDB_PASSWORD)rest-of-the-url
You can confirm the output with:
$ kubectl exec -it secret-env-pod bash
root@secret-env-pod:/data# env | grep COUCHDB
COUCHDB_URL=http://admin:1f2d1e2e67dfrest-of-the-url
COUCHDB_PASSWORD=1f2d1e2e67df
COUCHDB_USER=admin
In your case, the yaml for the container is:
containers:
  - name: {{ template "myapp.name" . }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: {{ .Release.Name }}-secrets
    env:
      - name: COUCHDB_URL
        value: http://$(COUCHDB_USER):$(COUCHDB_PASSWORD)@{{ .Release.Name }}-couchdb:5984

Import data to config map from kubernetes secret

I'm using a kubernetes ConfigMap that contains database configurations for an app, and there is a secret that has the database password.
I need to use this secret in the ConfigMap. When I try to add an environment variable in the ConfigMap and set its value in the pod deployment from the secret, I'm not able to connect to mysql with the password, as the ConfigMap keeps the literal string of the variable.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  APP_CONFIG: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "$DB_PASSWORD"
and the deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: app
        image: simple-app-image
        ports:
        - name: "8080"
          containerPort: 8080
        env:
        - name: APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: config
              key: APP_CONFIG
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "mysql-secret"
              key: "mysql-root-password"
Note: the secret exists and I'm able to get the "mysql-root-password" value and use it to log in to the database.
Kubernetes can't make that substitution for you; you should do it with shell in the entrypoint of the container.
This is a working example. I modify the default entrypoint to create a new variable with that substitution. After this command you should add the desired entrypoint.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: app
        image: simple-app-image
        command:
        - /bin/bash
        - -c
        args:
        - "NEW_APP_CONFIG=$(echo $APP_CONFIG | envsubst) && echo $NEW_APP_CONFIG && <INSERT IMAGE ENTRYPOINT HERE>"
        ports:
        - name: "app"
          containerPort: 8080
        env:
        - name: APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: config
              key: APP_CONFIG
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "mysql-secret"
              key: "mysql-root-password"
You could do something like this in Helm:
{{- define "getValueFromSecret" }}
{{- $len := (default 16 .Length) | int -}}
{{- $obj := (lookup "v1" "Secret" .Namespace .Name).data -}}
{{- if $obj }}
{{- index $obj .Key | b64dec -}}
{{- else -}}
{{- randAlphaNum $len -}}
{{- end -}}
{{- end }}
Then you could do something like this in the configmap:
{{- include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "<secret_name>" "Length" 10 "Key" "<key>") -}}
The secret should already be present when deploying, or you can control the order of deployment using https://github.com/vmware-tanzu/carvel-kapp-controller
Alternatively, I would transform the whole ConfigMap into a Secret and put the database password directly in there.
Then you can mount the Secret as a file into a volume and use it like a regular config file in the container.
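A minimal sketch of that approach (the Secret name app-config-secret and the mount path are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: simple-app-image
      volumeMounts:
        - name: config
          mountPath: /etc/app
          readOnly: true
  volumes:
    - name: config
      secret:
        secretName: app-config-secret   # hypothetical; holds the rendered config
        items:
          - key: APP_CONFIG
            path: config.yaml           # the app reads /etc/app/config.yaml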

How do I load multiple templated config files into a helm chart?

So I am trying to build a helm chart.
In my templates directory I've got a file like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{ Do something here to load up a set of files | indent 2 }}
I have another directory in my chart, configmaps, containing a set of JSON files that themselves have templated variables in them:
a.json
b.json
c.json
Ultimately I'd like to be sure that in my chart I can reference:
volumes:
  - name: config-a
    configMap:
      name: config-map
      items:
        - key: a.json
          path: a.json
I had the same problem a few weeks ago with adding files and templates directly to a container.
Look at this sample syntax:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap-{{ .Release.Name }}
  namespace: {{ .Release.Namespace }}
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
data:
  nginx_conf: {{ tpl (.Files.Get "files/nginx.conf") . | quote }}
  ssl_conf: {{ tpl (.Files.Get "files/ssl.conf") . | quote }}
  dhparam_pem: {{ .Files.Get "files/dhparam.pem" | quote }}
  fastcgi_conf: {{ .Files.Get "files/fastcgi.conf" | quote }}
  mime_types: {{ .Files.Get "files/mime.types" | quote }}
  proxy_params_conf: {{ .Files.Get "files/proxy_params.conf" | quote }}
The second step is to reference it from the deployment:
volumes:
  - name: {{ $.Release.Name }}-configmap-volume
    configMap:
      name: nginx-configmap-{{ $.Release.Name }}
      items:
        - key: dhparam_pem
          path: dhparam.pem
        - key: fastcgi_conf
          path: fastcgi.conf
        - key: mime_types
          path: mime.types
        - key: nginx_conf
          path: nginx.conf
        - key: proxy_params_conf
          path: proxy_params.conf
This is still relevant today. Here you can find two types of import:
regular files without templating
configuration files with dynamic variables inside
Please do not forget to read the official docs:
https://helm.sh/docs/chart_template_guide/accessing_files/
Good luck!
To include all files from the directory config-dir/, use {{ range }}:
my-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
{{- $files := .Files }}
{{- range $key, $value := .Files }}
{{- if hasPrefix "config-dir/" $key }} {{/* only when in config-dir/ */}}
  {{ $key | trimPrefix "config-dir/" }}: {{ $files.Get $key | quote }} {{/* adapt $key as desired */}}
{{- end }}
{{- end }}
my-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      containers:
        - name: my-pod-container
          ...
          volumeMounts:
            - name: my-volume
              mountPath: /config
              readOnly: true # is RO anyway for configMap
      volumes:
        - name: my-volume
          configMap:
            name: my-configmap
            # defaultMode: 0555 # mode rx for all
I assume that a.json, b.json, c.json etc. is a defined list and you know all the contents (apart from the bits that you want to set as values through templated variables). I'm also assuming you only want to expose parts of the content of the files to users and not let the user configure the whole file content. (But if I'm assuming wrong and you do want to let users set the whole file content, then the suggestion from @hypnoglow of following the datadog chart seems to me a good one.) If so, I'd suggest the simplest way to do it is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
  a.json: |
    # content of a.json in here, including any templated stuff with {{ }}
  b.json: |
    # content of b.json in here, including any templated stuff with {{ }}
  c.json: |
    # content of c.json in here, including any templated stuff with {{ }}
I guess you'd like to mount them to the same directory. It would be tempting for cleanliness to use different configmaps, but that would then be a problem for mounting to the same directory. It would also be nice to be able to load the files independently using .Files.Glob, to reference the files without having to put the whole content in the configmap, but I don't think you can do that and still use templated variables in them... However, you can do it with Files.Get to read the file content as a string and then pass that into tpl to put it through the templating engine, as @Oleg Mykolaichenko suggests in https://stackoverflow.com/a/52009992/9705485. I suggest everyone votes for his answer as it is the better solution. I'm only leaving my answer here because it explains why his suggestion is so good, and some people may prefer the less abstract approach.
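For what it's worth, here is a sketch combining the two ideas, assuming the templated JSON files live under configmaps/ in the chart:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-map
data:
{{- /* one entry per file; tpl renders the templated variables inside each file */}}
{{- range $path, $_ := .Files.Glob "configmaps/*.json" }}
  {{ base $path }}: {{ tpl ($.Files.Get $path) $ | quote }}
{{- end }}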
I guess you'd like to mount then to the same directory. It would be tempting for cleanliness to use different configmaps but that would then be a problem for mounting to the same directory. It would also be nice to be able to load the files independently using .Files.Glob to be able to reference the files without having to put the whole content in the configmap but I don't think you can do that and still use templated variables in them... However, you can do it with Files.Get to read the file content as a string and the pass that into tpl to put it through the templating engine as #Oleg Mykolaichenko suggests in https://stackoverflow.com/a/52009992/9705485. I suggest everyone votes for his answer as it is the better solution. I'm only leaving my answer here because it explains why his suggestion is so good and some people may prefer the less abstract approach.