Combining ENV variables in helm chart

Based on this SO answer, this should work, and I'm not sure what I'm missing.
I'm trying to combine env variables in a Helm chart (TARGET and TARGET_KEY), but I'm getting:
- name: TARGET_KEY # combining keys together
  value: Hello $(TARGET)
I'm expecting:
- name: TARGET_KEY # combining keys together
  value: Hello World
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: myapp
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"
            - name: APIKEY
              valueFrom: # get single key from secret at key
                secretKeyRef:
                  name: {{ .Values.keys.name }}
                  key: apiKey
            - name: TARGET_KEY # combining keys together
              value: Hello $(TARGET)
          envFrom: # set ENV variables from all the values in secret
            - secretRef:
                name: {{ .Values.keys.name }}
I am using ArgoCD to sync the Helm charts, and I'm checking the newly deployed pod's ENV vars.

@David is correct. The ENV variable shown in the template and in the pod description keeps the literal $(TARGET) reference, but once I ssh'ed into the pod, printenv showed the env variable was properly filled in.
However, I did read there are ordering issues when mixing multiple ENV vars this way; that's a topic for another SO question.
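For reference, a minimal sketch of that ordering behavior (the variable names here are hypothetical): Kubernetes only expands $(VAR) references against variables defined earlier in the same env list, so a forward reference is left as the literal string.

env:
  - name: EARLY_GREETING # forward reference: TARGET is defined later
    value: Hello $(TARGET) # stays the literal string "Hello $(TARGET)"
  - name: TARGET
    value: "World"
  - name: LATE_GREETING # backward reference: TARGET is already defined
    value: Hello $(TARGET) # becomes "Hello World"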

Related

microprofile-config.properties as ConfigMap in Openliberty

This example shows how to use ConfigMaps with Open Liberty.
The problem, to me, is that you have to create a section for each env variable in every Kubernetes deployment.
containers:
  - name: system-container
    image: system:1.0-SNAPSHOT
    ports:
      - containerPort: 9080
    # Set the environment variables
    env:
      - name: CONTEXT_ROOT
        valueFrom:
          configMapKeyRef:
            name: sys-app-root
            key: contextRoot
      - name: SYSTEM_APP_USERNAME
        valueFrom:
          secretKeyRef:
            name: sys-app-credentials
            key: username
      - name: SYSTEM_APP_PASSWORD
        valueFrom:
          secretKeyRef:
            name: sys-app-credentials
            key: password
Wouldn't it just be easier to upload microprofile-config.properties as a ConfigMap and mount it as a volume at the right location?
You don't have to create an env entry for each value. You can create your ConfigMap from the microprofile-config.properties file and then just use envFrom to load all the key pairs, like this:
envFrom:
  - configMapRef:
      name: my-microprofile-config
But maybe I missed your real problem...
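Note that envFrom turns each key in the ConfigMap's data into a separate variable, so the properties file has to be loaded key-by-key (for example with kubectl create configmap my-microprofile-config --from-env-file=microprofile-config.properties) rather than as a single file-shaped blob. A sketch of the resulting ConfigMap, with hypothetical property names:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-microprofile-config
data:
  # hypothetical properties taken from microprofile-config.properties
  CONTEXT_ROOT: /dev
  SYSTEM_APP_TIMEOUT: "30"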
Since you're using Open Liberty, I also think that creating your own ConfigMap from the microprofile-config.properties file is a good idea.
According to this documentation:
MicroProfile Config allows you to define configuration values, which are referred to as "config property" values, in a huge range of locations that are known as ConfigSources.
To load all the key pairs one can use envFrom (just like @Gus suggested).
According to this documentation:
One can use envFrom to define all of the ConfigMap's data as container environment variables.
The key from the ConfigMap becomes the environment variable name in the Pod.
Here is an example of using envFrom:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
To load all the key pairs you need this snippet:
envFrom:
  - configMapRef:
      name: special-config
See also this and this documentation.
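As for the original idea of mounting the properties file as a volume instead: a minimal sketch, assuming the whole file is stored under a single ConfigMap key and that the mount path below is where your MicroProfile ConfigSource actually looks (both are assumptions to adjust for your Open Liberty setup):

spec:
  containers:
    - name: system-container
      image: system:1.0-SNAPSHOT
      volumeMounts:
        - name: mp-config
          # assumption: point this at the path your config source reads
          mountPath: /config/microprofile-config.properties
          subPath: microprofile-config.properties
  volumes:
    - name: mp-config
      configMap:
        name: my-microprofile-config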

Concatenating values from configMap and secret

I have a configMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    owner: testdb
  name: testdb-configmap
data:
  host: postgres
  port: "5432"
and a secret file:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  labels:
    owner: testdb
  name: testdb-secret
  namespace: test
data:
  user: dGVzdA==
  pwd: dGVzdA==
and I want to build an environment variable CONNECTION_STRING as below:
env:
  - name: CONNECTION_STRING
    value: "Host=<host-from-configmap>;Username=<user-from-secret>;Password=<password-from-secret>;Port=<port-from-configmap>;Pooling=False;"
I want to know if this is possible and if yes, then how? I have also looked at using .tpl (named templates) but couldn't figure out a way.
NOTE
Since I don't have access to the image, which requires CONNECTION_STRING, I have to build it this way. The ConfigMap and Secret files are also going to remain as they are.
Kubernetes can set environment variables based on other environment variables. This is a core Kubernetes Pod capability, and doesn't depend on anything from Helm.
Your value uses four components, two from the ConfigMap and two from the Secret. You need to declare each of these as separate environment variables, and then declare a main environment variable that concatenates them together. Note that $(VAR) references only resolve against variables defined earlier in the same env list, so CONNECTION_STRING must come last.
env:
  - name: TESTDB_HOST
    valueFrom:
      configMapKeyRef:
        name: testdb-configmap # {{ include "chart.name" . }}
        key: host
  - name: TESTDB_PORT
    valueFrom:
      configMapKeyRef:
        name: testdb-configmap
        key: port
  - name: TESTDB_USER
    valueFrom:
      secretKeyRef:
        name: testdb-secret
        key: user
  - name: TESTDB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: testdb-secret
        key: pwd # the Secret above names this key "pwd", not "password"
  - name: CONNECTION_STRING
    value: Host=$(TESTDB_HOST);Username=$(TESTDB_USER);Password=$(TESTDB_PASSWORD);Port=$(TESTDB_PORT);Pooling=False;
I do not believe what you're asking to do is possible.
Furthermore, do not use config maps for storing information like this. It's best practice to use secrets and then mount them into your container as files or ENV variables.
I would abandon whatever you're thinking and re-evaluate what you're trying to accomplish.
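For the mount-as-files route this answer recommends, a minimal sketch against the testdb-secret from the question (each data key becomes a file under the mount path; the container name and image are placeholders):

spec:
  containers:
    - name: my-app
      image: my-app:latest # placeholder image
      volumeMounts:
        - name: db-creds
          mountPath: /etc/db-creds
          readOnly: true
  volumes:
    - name: db-creds
      secret:
        secretName: testdb-secret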

ConfigMap value as input for another variable inside container

How can I use the $LOCAL_IP_DB variable declared in the section below as input for another variable? $LOCAL_IP_DB is a generic key defined inside the db-secret ConfigMap, but another environment variable needs its value. How do I make this work?
spec:
  containers:
    - env:
        - name: LOCAL_IP_DB
          valueFrom:
            configMapKeyRef:
              name: db-secret
              key: LOCAL_IP_DB
        - name: LOG_Files
          value: \\${LOCAL_IP_DB}\redis\files\
The key is using $() instead of ${}.
example-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: example
      image: bash
      args: [printenv]
      env:
        - name: LOCAL_IP_DB
          valueFrom:
            configMapKeyRef:
              name: db-secret
              key: LOCAL_IP_DB
        - name: LOG_FILES
          value: \$(LOCAL_IP_DB)\redis\files\
example-configmap.yaml:
apiVersion: v1
data:
  LOCAL_IP_DB: 192.168.0.1
kind: ConfigMap
metadata:
  name: db-secret
test:
controlplane $ kubectl apply -f example-pod.yaml -f example-configmap.yaml
controlplane $ kubectl logs example | grep 192
LOCAL_IP_DB=192.168.0.1
LOG_FILES=\192.168.0.1\redis\files\
You can find more information about this function here: link
Note: if you want to manage secrets, a Secret is the recommended way to do that.
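For completeness, a sketch of the same data stored as a proper Secret instead of the confusingly named ConfigMap (the value is base64-encoded):

apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: db-secret
data:
  LOCAL_IP_DB: MTkyLjE2OC4wLjE= # base64 of "192.168.0.1"

The env entry would then use secretKeyRef instead of configMapKeyRef.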

Use Kubernetes secrets as environment variables inside a config map

I have an application in a container which reads certain data from a ConfigMap, which goes like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: hello123
Now I created a secret for the password and mounted it as an env variable while starting the container.
apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
type: Opaque
stringData:
  password: hello123
My pod looks like:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.pod.name }}
spec:
  containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.image }}
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      env:
        - name: password
          valueFrom:
            secretKeyRef:
              name: appdbpassword
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /app/app-config/application.yaml
          subPath: application.yaml
  volumes:
    - name: config-volume
      configMap:
        name: app-config
I tried using this env variable inside the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: ${password}
But my application is unable to read this password. Am I missing something here?
EDIT:
I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the values.yaml (Helm) file and use it in the ConfigMap?
Your ${password} variable will not be replaced by its value, because application.yaml is mounted as a static file. It would only be replaced if something actually evaluated the file.
Consider a scenario where, instead of application.yaml, you pass this file:
application.sh: |
  echo "${password}"
Now go inside /app/app-config and you will see the application.sh file. Run sh application.sh and you will see the value of the environment variable, because the shell expands ${password} when the script runs.
I hope this clears up the point.
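Building on that idea, one possible workaround (a sketch, not from the original answers) is to render application.yaml at container start: mount the template at a staging path, substitute the variable with sed, write the result where the server expects it, then start the app. The staging path and start command below are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-rendered-config # hypothetical name
spec:
  containers:
    - name: app
      image: my-app:latest # placeholder image
      command: [ "/bin/sh", "-c" ]
      # Replace ${password} in the mounted template with the env value,
      # then exec the real server process (placeholder command).
      args:
        - sed "s|\${password}|$password|" /tmp/app-config/application.yaml > /app/app-config/application.yaml && exec my-app-start
      env:
        - name: password
          valueFrom:
            secretKeyRef:
              name: appdbpassword
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /tmp/app-config
  volumes:
    - name: config-volume
      configMap:
        name: app-config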
You cannot use a Secret inside a ConfigMap, as ConfigMaps are intended for non-sensitive data (see here).
Also, you should not pass Secrets via env variables, as this creates potential risk (read more here about why env variables shouldn't be used).
Applications often dump env variables in error reports, or even write them to the app logs at startup, which could expose Secrets.
The best way would be to mount the Secret as a file.
Here's a simple example of how to do that:
spec:
  template:
    spec:
      containers:
        - image: "my-image:latest"
          name: my-app
          ...
          volumeMounts:
            - mountPath: "/var/my-app"
              name: ssh-key
              readOnly: true
      volumes:
        - name: ssh-key
          secret:
            secretName: ssh-key
Kubernetes documentation explains well how to use and mount secrets.

Combining multiple k8s secrets into an env variable

My k8s namespace contains a Secret which is created at deployment time (by svcat), so the values are not known in advance.
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-database-credentials
data:
  hostname: ...
  port: ...
  database: ...
  username: ...
  password: ...
A Deployment needs to inject these values in a slightly different format:
...
containers:
  env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: jdbc:postgresql:<hostname>:<port>/<database> # ??
    - name: DATABASE_USERNAME
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: username
    - name: DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: password
The DATABASE_URL needs to be composed out of the hostname, port, and database from the previously defined secret.
Is there any way to do this composition?
Kubernetes allows you to use previously defined environment variables as part of subsequent environment variables elsewhere in the configuration. From the Kubernetes API reference docs:
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables.
This $(...) syntax defines interdependent environment variables for the container.
So, you can first extract the required secret values into environment variables, and then compose the DATABASE_URL with those variables.
...
containers:
  env:
    - name: DB_URL_HOSTNAME # part 1
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: hostname
    - name: DB_URL_PORT # part 2
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: port
    - name: DB_URL_DBNAME # part 3
      valueFrom:
        secretKeyRef:
          name: my-database-credentials
          key: database
    - name: DATABASE_URL # combine
      value: jdbc:postgresql:$(DB_URL_HOSTNAME):$(DB_URL_PORT)/$(DB_URL_DBNAME)
...
If all the precursor variables are already defined as env variables:
- { name: DATABASE_URL, value: '{{ printf "jdbc:postgresql:$(DATABASE_HOST):$(DATABASE_PORT)/$(DB_URL_DBNAME)" }}'}
With this statement you can also bring in values from the values.yaml file.
For example, if you have defined DB_URL_DBNAME in the values file:
- { name: DATABASE_URL, value: '{{ printf "jdbc:postgresql:$(DATABASE_HOST):$(DATABASE_PORT)/%s" .Values.database.DB_URL_DBNAME }}'}
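That assumes a matching entry in values.yaml, along these lines (the database name is hypothetical):

database:
  DB_URL_DBNAME: mydb # hypothetical value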
There are a couple of things I can think of:
Use a secrets volume and a startup script that reads the secrets from the volume and then starts your application with the DATABASE_URL environment variable set.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: your_db_container
      command: [ "yourscript.sh" ]
      volumeMounts:
        - name: mycreds
          mountPath: "/etc/credentials"
  volumes:
    - name: mycreds
      secret:
        secretName: my-database-credentials
        defaultMode: 256
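The answer leaves yourscript.sh to the reader; a minimal sketch of what it could look like, shipped as a ConfigMap and reading each mounted Secret key as a file (the ConfigMap name and final start command are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-startup-script # hypothetical; mount it and point command at yourscript.sh
data:
  yourscript.sh: |
    #!/bin/sh
    # Each Secret key appears as a file under /etc/credentials
    creds=/etc/credentials
    export DATABASE_URL="jdbc:postgresql:$(cat $creds/hostname):$(cat $creds/port)/$(cat $creds/database)"
    exec /start/yourdb # placeholder start command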
Pass the env variable in the command key of your container spec:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: your_db_container
      command: [ "/bin/sh", "-c", "DATABASE_URL=jdbc:postgresql:<hostname>:<port>/<database>/$(DATABASE_USERNAME):$(DATABASE_PASSWORD) /start/yourdb" ]
      env:
        - name: DATABASE_USERNAME
          valueFrom:
            secretKeyRef:
              name: my-database-credentials
              key: username
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-database-credentials
              key: password
There are several ways to go (in order of increasing complexity):
Mangle the parameter before putting it into the Secret (extend whatever you use to insert the info there).
Add a script to your Pod/Container to mangle the incoming parameters (environment variables or command arguments) into what is needed. If you cannot or don't want to build your own container image, you can add the extra script as a Volume to the container and set the Container's command field to override the container image start command.
Add a facility to your Kubernetes cluster to do automatic mangling "behind the scenes": you can add a Dynamic Admission Controller to do your mangling, or you can create a Kubernetes Operator with a Custom Resource Definition (the operator would be told by the CRD which Secrets to watch for changes, and would read the values and generate whatever other entries you want).