Kubernetes - Use values from Secret in multiline ConfigMap

I am relatively new to Kubernetes and I have the following problem: we use Grafana in our Kubernetes cluster, but the way our template.yaml file is currently built does not allow using a secret for the password.
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: ${APP}
    name: "${APP}-ldap-file"
  data:
    ldap.toml: |-
      [[servers]]
      ....
      # Search user bind dn
      bind_dn = "uid=tu0213,cn=users,o=company,c=de"
      bind_password = ${BIND_PASSWORD}
parameters:
- name: BIND_PASSWORD
Just using the password this way works fine, but it's in plain text in a params file in our CI/CD pipeline.
In a different repository I found this:
spec:
  containers:
  - name: nginx-auth-ldap
    image: ${REGISTRY}/${NAMESPACE}/nginx-auth-ldap:6
    imagePullPolicy: Always
    env:
    - name: LDAP_BIND_DN
      valueFrom:
        secretKeyRef:
          name: ldap-bind-dn
          key: dn
Is this valueFrom approach also possible in my use case?

You can use a secret like that, but you have to split the data into separate keys, like this:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: ${APP}
  name: "${APP}-ldap-file"
stringData:
  dn: "uid=tu0213,cn=users,o=company,c=de"

The format you specified is correct. Just create a secret named "ldap-bind-dn" and provide your password as the value there.
Path for the secret: in the OpenShift console, go to Resources -> Secrets -> Create Secret.
spec:
  containers:
  - name: nginx-auth-ldap
    image: ${REGISTRY}/${NAMESPACE}/nginx-auth-ldap:6
    imagePullPolicy: Always
    env:
    - name: LDAP_BIND_DN
      valueFrom:
        secretKeyRef:
          name: ldap-bind-dn
          key: dn
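For reference, the same Secret can also be created from the command line instead of the console (a sketch; substitute your own key names and values):

oc create secret generic ldap-bind-dn --from-literal=dn='uid=tu0213,cn=users,o=company,c=de'
# plain kubectl works the same way outside OpenShift:
kubectl create secret generic ldap-bind-dn --from-literal=dn='uid=tu0213,cn=users,o=company,c=de'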

Related

Concatenating values from configMap and secret

I have a configMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    owner: testdb
  name: testdb-configmap
data:
  host: postgres
  port: "5432"
and a secret file:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  labels:
    owner: testdb
  name: testdb-secret
  namespace: test
data:
  user: dGVzdA==
  pwd: dGVzdA==
and I want to build an environment variable CONNECTION_STRING as below:
env:
- name: CONNECTION_STRING
  value: "Host=<host-from-configmap>;Username=<user-from-secret>;Password=<password-from-secret>;Port=<port-from-configmap>;Pooling=False;"
I want to know if this is possible and if yes, then how? I have also looked at using .tpl (named templates) but couldn't figure out a way.
NOTE
Since I don't have access to the image that requires CONNECTION_STRING, I have to build it this way. The ConfigMap and Secret files are also going to remain as they are.
Kubernetes can set environment variables based on other environment variables. This is a core Kubernetes Pod capability, and doesn't depend on anything from Helm.
Your value uses four components, two from the ConfigMap and two from the Secret. You need to declare each of these as separate environment variables, and then declare a main environment variable that concatenates them together.
env:
- name: TESTDB_HOST
  valueFrom:
    configMapKeyRef:
      name: testdb-configmap # {{ include "chart.name" . }}
      key: host
- name: TESTDB_PORT
  valueFrom:
    configMapKeyRef:
      name: testdb-configmap
      key: port
- name: TESTDB_USER
  valueFrom:
    secretKeyRef:
      name: testdb-secret
      key: user
- name: TESTDB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: testdb-secret
      key: pwd   # matches the key in the Secret above
- name: CONNECTION_STRING
  value: Host=$(TESTDB_HOST);Username=$(TESTDB_USER);Password=$(TESTDB_PASSWORD);Port=$(TESTDB_PORT);Pooling=False;
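One caveat with this approach: $(VAR) references are only resolved when the referenced variable is defined earlier in the same env list, so CONNECTION_STRING must be declared after the four variables it references; a reference to a variable defined later (or not at all) is passed through as a literal string.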
I do not believe what you're asking to do is possible.
Furthermore, do not use ConfigMaps for storing information like this. It's best practice to use Secrets and then mount them into your container as files or environment variables.
I would abandon whatever you're thinking and re-evaluate what you're trying to accomplish.
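For illustration, this is what mounting the Secret from the question as files rather than environment variables could look like (a minimal sketch; the mount path is arbitrary):

spec:
  containers:
  - name: app
    volumeMounts:
    - name: db-credentials
      mountPath: /etc/db
      readOnly: true
  volumes:
  - name: db-credentials
    secret:
      secretName: testdb-secret   # each key (user, pwd) becomes a file under /etc/db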

How to supply the value of a server in an NFS mount in a k8s Deployment via a ConfigMap

I'm writing a Helm chart where I need to supply an nfs.server value for the volume mount from the ConfigMap (efs-url in the example below).
There are examples in the docs of how to pass values from a ConfigMap to env variables, or even how to mount ConfigMaps, and I understand how I can pass this value from values.yaml, but I just can't find an example of how it can be done using a ConfigMap.
I have control over this ConfigMap so I can reformat it as needed.
Am I missing something very obvious?
Is it even possible to do?
If not, what are the possible workarounds?
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-url
data:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
      - name: efs-provisioner
        image: quay.io/external_storage/efs-provisioner:latest
        env:
        - name: FILE_SYSTEM_ID
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: file.system.id
        - name: AWS_REGION
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: aws.region
        - name: PROVISIONER_NAME
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: provisioner.name
        volumeMounts:
        - name: pv-volume
          mountPath: /persistentvolumes
      volumes:
      - name: pv-volume
        nfs:
          server: <<< VALUE SHOULD COME FROM THE CONFIG MAP >>>
          path: /
Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, as a ConfigMap
is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
To read more about ConfigMaps and how they can be utilized, see the "ConfigMaps" section and the "Configure a Pod to Use a ConfigMap" section of the documentation.
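As a workaround, since this is a Helm chart anyway, the server can be templated from values.yaml instead of a ConfigMap (a sketch, assuming a value named efsUrl):

# values.yaml
efsUrl: yourEFSsystemID.efs.yourEFSregion.amazonaws.com

# in the deployment template
volumes:
- name: pv-volume
  nfs:
    server: {{ .Values.efsUrl }}
    path: /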

Kubernetes Kustomize: replace variable in patch file

Given the following kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        args:
        - --some-key=some-value
        ...
        - --git-url=https://user:${PASSWORD}@domain.de
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
The only solution I got to work so far was replacing the ${PASSWORD} with sed, but I would prefer a kustomize solution.
As @Jonas already suggested, you should consider using a Secret. It's nicely described in this article.
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
I guess your script can store the generated password as a variable or save it to some file. You can easily create a Secret as follows:
$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
or from a file:
$ kustomize edit add secret sl-demo-app --from-file=file/path
As you can read in the mentioned article:
These commands will modify your kustomization.yaml and add a
SecretGenerator inside it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- custom-env.yaml
- replica-and-rollout-strategy.yaml
secretGenerator:
- literals:
  - db-password=12345
  name: sl-demo-app
  type: Opaque
Running kustomize build in your project directory will create, among others, the following Secret:
apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
You can find more details in the article.
If we want to use this secret from our deployment, we just have, like before, to add a new layer definition which uses the secret.
For example, this file will mount the db-password value as environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: "DB_PASSWORD"
          valueFrom:
            secretKeyRef:
              name: sl-demo-app
              key: db-password   # must match the literal name from the secretGenerator
In your Deployment definition file it may look similar to this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
      - name: some-name
        env:
        - name: "PASSWORD"
          valueFrom:
            secretKeyRef:
              name: git-secret
              key: git.password
        args:
        - --some-key=some-value
        ...
        # Kubernetes expands $(PASSWORD) in args from the env list above
        - --git-url=https://user:$(PASSWORD)@domain.de
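To tie this back to the build script, the password can then be injected at build time without sed (a sketch; names match the snippets above, and how the password is obtained is up to your script):

PASSWORD="$(generate-password)"   # hypothetical command; use whatever your pipeline provides
kustomize edit add secret git-secret --from-literal=git.password="$PASSWORD"
kubectl apply -k .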

Use a secret to store sensitive data in helm kubernetes

I have a secret.yaml file inside the templates directory with the following data:
apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
stringData:
  password: password
I also have a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: appdbconfigmap
data:
  jdbcUrl: jdbc:oracle:thin:@proxy:service
  username: bhargav
I am using the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: expense-pod-sample-1
spec:
  containers:
  - name: expense-container-sample-1
    image: exm:1
    command: [ "/bin/sh", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    envFrom:
    - configMapRef:
        name: appdbconfigmap
    env:
    - name: password
      valueFrom:
        secretKeyRef:
          name: appdbpassword
          key: password
When I use the helm install command, I see the pod running, but if I try to use the environment variable ${password} in my application, it just does not work: it says the password is wrong. It does not complain about the username, which comes from the ConfigMap. This happens only if I use helm. If I don't use helm and run all the YAML files independently with kubectl, my application accesses both username and password correctly.
Am I missing anything here?
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  password: cGFzc3dvcmQ=   # base64 of "password"; beware that cGFzc3dvcmQK would encode a trailing newline
You can also add the secret like this, converting the data into base64 format yourself, whereas stringData does the encoding automatically when you create the secret.
Try adding the secret to the environment like this:
envFrom:
- secretRef:
    name: test-secret
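To verify what actually reached the container, you can print the variable from inside the pod (pod name taken from the question):

kubectl exec expense-pod-sample-1 -- printenv password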

Kubernetes - ConfigMap for nested variables

We have an image deployed in an AKS cluster for which we need to update a config entry during deployment using configmaps.
The configuration file has the following key, and we are trying to replace the value of "ChildKey" without replacing the entire file -
{
  "ParentKey": {
    "ChildKey": "123"
  }
}
The configmap looks like -
apiVersion: v1
data:
  ParentKey: |
    ChildKey: 456
kind: ConfigMap
metadata:
  name: cf
And in the deployment, the configmap is used like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey
          valueFrom:
            configMapKeyRef:
              key: ParentKey
              name: cf
The replacement is not working with the setup above. Is there a different way to declare the key names for nested structures?
We have addressed this in the following manner.
The configmap carries a simpler structure, containing only the child element:
apiVersion: v1
data:
  ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
In the deployment, the environment variable key refers to the child key like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey__ChildKey
          valueFrom:
            configMapKeyRef:
              key: ChildKey
              name: cf
Posting this for reference: use the double underscore for nested environment variables and arrays, as explained here.
To avoid explicit environment variables and typing names twice, you can use envFrom:
configMap.yaml
apiVersion: v1
data:
  ParentKey__ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
deployment.yml
containers:
- name: $(name)
  image: $(image)
  envFrom:
  - configMapRef:
      name: common-config
  - configMapRef:
      name: specific-config