I have a ConfigMap file:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    owner: testdb
  name: testdb-configmap
data:
  host: postgres
  port: "5432"
and a Secret file:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  labels:
    owner: testdb
  name: testdb-secret
  namespace: test
data:
  user: dGVzdA==
  pwd: dGVzdA==
and I want to build an environment variable CONNECTION_STRING as below:
env:
  - name: CONNECTION_STRING
    value: "Host=<host-from-configmap>;Username=<user-from-secret>;Password=<password-from-secret>;Port=<port-from-configmap>;Pooling=False;"
I want to know if this is possible and, if so, how. I have also looked at using .tpl (named templates) but couldn't figure out a way.
NOTE
Since I don't have access to the image that requires CONNECTION_STRING, I have to build it this way. The ConfigMap and Secret files are also going to remain as they are.
Kubernetes can set environment variables based on other environment variables. This is a core Kubernetes Pod capability and doesn't depend on anything from Helm.
Your value uses four components, two from the ConfigMap and two from the Secret. You need to declare each of these as a separate environment variable, and then declare a main environment variable that concatenates them together. Note that $(VAR) references are only expanded for variables defined earlier in the same env list, so the combined variable must be declared after its parts.
env:
  - name: TESTDB_HOST
    valueFrom:
      configMapKeyRef:
        name: testdb-configmap # {{ include "chart.name" . }}
        key: host
  - name: TESTDB_PORT
    valueFrom:
      configMapKeyRef:
        name: testdb-configmap
        key: port
  - name: TESTDB_USER
    valueFrom:
      secretKeyRef:
        name: testdb-secret
        key: user
  - name: TESTDB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: testdb-secret
        key: pwd
  - name: CONNECTION_STRING
    value: Host=$(TESTDB_HOST);Username=$(TESTDB_USER);Password=$(TESTDB_PASSWORD);Port=$(TESTDB_PORT);Pooling=False;
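To verify the result, you can print the assembled variable from a running Pod (assuming the Pod is named mypod; adjust to your own workload):
kubectl exec mypod -- printenv CONNECTION_STRING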
I do not believe what you're asking to do is possible.
Furthermore, don't use ConfigMaps for storing information like this. It's best practice to use Secrets and then mount them into your container as files or environment variables.
I would abandon whatever you're thinking and re-evaluate what you're trying to accomplish.
Related
I am relatively new to Kubernetes and I have the following problem: we use Grafana in our Kubernetes cluster, but currently the way our template.yaml file is built does not allow using a Secret for a password.
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: ${APP}
    name: "${APP}-ldap-file"
  data:
    ldap.toml: |-
      [[servers]]
      ....
      # Search user bind dn
      bind_dn = "uid=tu0213,cn=users,o=company,c=de"
      bind_password = ${BIND_PASSWORD}
parameters:
  - name: BIND_PASSWORD
Just using the password this way works fine, but it's in plain text in a params file in our CI/CD pipeline.
In a different repository I found this:
spec:
  containers:
    - name: nginx-auth-ldap
      image: ${REGISTRY}/${NAMESPACE}/nginx-auth-ldap:6
      imagePullPolicy: Always
      env:
        - name: LDAP_BIND_DN
          valueFrom:
            secretKeyRef:
              name: ldap-bind-dn
              key: dn
Is this valueFrom approach also possible in my use case?
You can use a secret like that but you have to split the data into separate keys like this:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: ${APP}
  name: "${APP}-ldap-file"
stringData:
  dn: "uid=tu0213,cn=users,o=company,c=de"
The format you specified is correct. Just create a Secret with the name "ldap-bind-dn" and provide your password as its value there.
Path for the Secret: in the OpenShift console, go to Resources -> Secrets -> Create Secret.
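You can also create it from the command line (a sketch; the dn value is taken from the question, adjust for your environment):
oc create secret generic ldap-bind-dn --from-literal=dn='uid=tu0213,cn=users,o=company,c=de'
Then reference it from the container spec as below.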
spec:
  containers:
    - name: nginx-auth-ldap
      image: ${REGISTRY}/${NAMESPACE}/nginx-auth-ldap:6
      imagePullPolicy: Always
      env:
        - name: LDAP_BIND_DN
          valueFrom:
            secretKeyRef:
              name: ldap-bind-dn
              key: dn
I am working on operator-sdk. In the controller, we often need to create a Deployment object, and the Deployment resource has a lot of configuration items, such as environment variables or port definitions, as in the code below. I am wondering what the best way is to get these values; I don't want to hard-code them, for example variable_a or variable_b.
You could put them in the CRD as spec fields and pass them to the operator controller; or you could put them in a ConfigMap and pass the ConfigMap name to the controller, which would then read them from the ConfigMap; or you could put them in a template file that the controller has to read.
What is the best way or best practice to deal with this situation? Thanks for sharing your ideas or points.
deployment := &appsv1.Deployment{
	ObjectMeta: metav1.ObjectMeta{
		Name:      m.Name,
		Namespace: m.Namespace,
		Labels:    ls,
	},
	Spec: appsv1.DeploymentSpec{
		Replicas: &replicas,
		Selector: &metav1.LabelSelector{
			MatchLabels: ls,
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: ls,
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Image: "....",
					Name:  m.Name,
					Ports: []corev1.ContainerPort{{
						ContainerPort: port_a,
						Name:          "tcpport",
					}},
					Env: []corev1.EnvVar{
						{
							Name:  "aaaa",
							Value: variable_a,
						},
						{
							Name:  "bbbb",
							Value: variable_b,
						},
					},
				}},
			},
		},
	},
}
Using environment variables
It can be convenient for your app to get its data as environment variables.
Environment variables from ConfigMap
For non-sensitive data, you can store your variables in a ConfigMap and then define container environment variables using the ConfigMap data.
Example from Kubernetes docs:
Create the ConfigMap first. File configmaps.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: default
data:
  log_level: INFO
Create the ConfigMap:
kubectl create -f ./configmaps.yaml
Then define the environment variables in the Pod specification, pod-multiple-configmap-env-variable.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config
              key: special.how
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: env-config
              key: log_level
  restartPolicy: Never
Create the Pod:
kubectl create -f ./pod-multiple-configmap-env-variable.yaml
Now in your controller you can read these environment variables: SPECIAL_LEVEL_KEY (which will give you the special.how value from the special-config ConfigMap) and LOG_LEVEL (which will give you the log_level value from the env-config ConfigMap).
For example:
specialLevelKey := os.Getenv("SPECIAL_LEVEL_KEY")
logLevel := os.Getenv("LOG_LEVEL")
fmt.Println("SPECIAL_LEVEL_KEY:", specialLevelKey)
fmt.Println("LOG_LEVEL:", logLevel)
Environment variables from Secret
If your data is sensitive, you can store it in a Secret and then use the Secret as environment variables.
To create a Secret manually:
You'll first need to encode your strings using base64.
# encode username
$ echo -n 'admin' | base64
YWRtaW4=
# encode password
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
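To double-check an encoded value, reverse it with base64 --decode (the flag may be spelled -d or -D depending on your platform):
$ echo 'YWRtaW4=' | base64 --decode
admin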
Then create a Secret with the above data:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
Create a Secret with kubectl apply:
$ kubectl apply -f ./secret.yaml
Note that there are other ways to create a Secret; pick the one that works best for you:
Creating a Secret using kubectl
Creating a Secret from a generator
Creating a Secret from files
Creating a Secret from string literals
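For instance, the kubectl route creates the same Secret as the manifest above in one step, and handles the base64 encoding for you:
kubectl create secret generic mysecret \
  --from-literal=username='admin' \
  --from-literal=password='1f2d1e2e67df'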
Now you can use this created Secret for environment variables.
To use a secret in an environment variable in a Pod:
Create a secret or use an existing one. Multiple Pods can reference the same secret.
Modify your Pod definition in each container that you wish to consume the value of a secret key to add an environment variable for each secret key you wish to consume. The environment variable that consumes the secret key should populate the secret's name and key in env[].valueFrom.secretKeyRef.
Modify your image and/or command line so that the program looks for values in the specified environment variables.
Here is a Pod example from Kubernetes docs that shows how to use a Secret for environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
  restartPolicy: Never
Finally, as stated in the docs:
Inside a container that consumes a secret in environment variables, the secret keys appear as normal environment variables containing the base64-decoded values of the secret data.
Now in your controller you can read these environment variables: SECRET_USERNAME (which will give you the username value from the mysecret Secret) and SECRET_PASSWORD (which will give you the password value from the mysecret Secret).
For example:
username := os.Getenv("SECRET_USERNAME")
password := os.Getenv("SECRET_PASSWORD")
Using volumes
You can also mount both ConfigMaps and Secrets as volumes in your Pods.
Populate a Volume with data stored in a ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
  restartPolicy: Never
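Each ConfigMap key becomes a file under the mount path, so with the special-config ConfigMap from above, the Pod's ls /etc/config/ output should be a single entry:
$ kubectl logs dapi-test-pod
special.how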
Using Secrets as files from a Pod:
To consume a Secret in a volume in a Pod:
Create a secret or use an existing one. Multiple Pods can reference the same secret.
Modify your Pod definition to add a volume under .spec.volumes[]. Name the volume anything, and have a .spec.volumes[].secret.secretName field equal to the name of the Secret object.
Add a .spec.containers[].volumeMounts[] to each container that needs the secret. Specify .spec.containers[].volumeMounts[].readOnly = true and .spec.containers[].volumeMounts[].mountPath to an unused directory name where you would like the secrets to appear.
Modify your image or command line so that the program looks for files in that directory. Each key in the secret data map becomes the filename under mountPath.
An example of a Pod that mounts a Secret in a volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: redis
      volumeMounts:
        - name: foo
          mountPath: "/etc/foo"
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
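Inside the container, each key of mysecret shows up as a read-only file under the mount path. A quick check (assuming the mysecret Secret created earlier):
$ kubectl exec mypod -- ls /etc/foo
password
username
$ kubectl exec mypod -- cat /etc/foo/username
admin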
I want to make some deployments in Kubernetes using Helm charts. Here is a sample override-values YAML that I use:
imageRepository: ""
ocbb:
  imagePullPolicy: IfNotPresent
  TZ: UTC
  logDir: /oms_logs
  tnsAdmin: /oms/ora_k8
  LOG_LEVEL: 3
  wallet:
    client:
    server:
    root:
db:
  deployment:
    imageName: init_db
    imageTag:
  host: 192.168.88.80
  port:
  service:
  alias:
  schemauser: pincloud
  schemapass:
  schematablespace: pincloud
  indextablespace: pincloudx
  nls_lang: AMERICAN_AMERICA.AL32UTF8
  charset: AL32UTF8
  pipelineschemauser: ifwcloud
  pipelineschemapass:
  pipelineschematablespace: ifwcloud
  pipelineindextablespace: ifwcloudx
  pipelinealias:
  queuename:
In this file I have to set some values involving credentials, for example schemapass and pipelineschemapass.
The documentation states I have to generate Kubernetes Secrets for this and add the key to my YAML file with the same path hierarchy.
I generated some Kubernetes Secrets, for example:
kubectl create secret generic schemapass --from-literal=password='pincloud'
Now I don't know how to reference this newly generated Secret in my YAML file. Any tip on how to set the schemapass field in the YAML chart to reference the Kubernetes Secret?
You cannot use a Kubernetes Secret in your values.yaml. In values.yaml you only specify the input parameters for the Helm chart, so it could be the Secret name, but not the Secret itself (or anything it resolves to).
If you want to use the secret in your container, then you can insert it as an environment variable:
env:
  - name: SECRET_VALUE_ENV
    valueFrom:
      secretKeyRef:
        name: schemapass
        key: password
You can check more in the Hazelcast Enterprise Helm Chart. We do exactly that: you specify the Secret name in values.yaml, and the Secret is injected into the container using an environment variable.
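The pattern looks roughly like this (a sketch, not the exact Hazelcast template; secretName is an assumed values.yaml key):
# values.yaml
secretName: schemapass

# templates/deployment.yaml (fragment)
env:
  - name: SCHEMA_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Values.secretName }}
        key: password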
You can reference Kubernetes values, whether Secrets or not, in Helm by specifying them in your container as environment variables.
Let your deployment be mongo.yml:
kind: Deployment
...
containers:
  ...
  env:
    - name: DB_URL
      valueFrom:
        configMapKeyRef:
          name: mongo-config
          key: mongo-url
    - name: MONGO_INITDB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mongo-secret
          key: mongo-password
Where mongo-secret is:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  mongo-user: bW9uZ291c2Vy
  mongo-password: bW9uZ29wYXNzd29yZA==
and mongo-config is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  mongo-url: mongo-service
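For reference, the base64 data above decodes to plain strings; you can produce such values yourself with base64:
$ echo -n 'mongouser' | base64
bW9uZ291c2Vy
$ echo -n 'mongopassword' | base64
bW9uZ29wYXNzd29yZA==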
My k8s namespace contains a Secret which is created at deployment time (by svcat), so the values are not known in advance.
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-database-credentials
data:
  hostname: ...
  port: ...
  database: ...
  username: ...
  password: ...
A Deployment needs to inject these values in a slightly different format:
...
containers:
  - env:
      - name: DATABASE_URL
        valueFrom:
          secretKeyRef:
            name: my-database-credentials
            key: jdbc:postgresql:<hostname>:<port>/<database> # ??
      - name: DATABASE_USERNAME
        valueFrom:
          secretKeyRef:
            name: my-database-credentials
            key: username
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: my-database-credentials
            key: password
The DATABASE_URL needs to be composed out of the hostname, port, and database values from the previously defined Secret.
Is there any way to do this composition?
Kubernetes allows you to use previously defined environment variables as part of subsequent environment variables elsewhere in the configuration. From the Kubernetes API reference docs:
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables.
This $(...) syntax defines interdependent environment variables for the container.
So, you can first extract the required secret values into environment variables, and then compose the DATABASE_URL with those variables.
...
containers:
  - env:
      - name: DB_URL_HOSTNAME  # part 1
        valueFrom:
          secretKeyRef:
            name: my-database-credentials
            key: hostname
      - name: DB_URL_PORT  # part 2
        valueFrom:
          secretKeyRef:
            name: my-database-credentials
            key: port
      - name: DB_URL_DBNAME  # part 3
        valueFrom:
          secretKeyRef:
            name: my-database-credentials
            key: database
      - name: DATABASE_URL  # combine
        value: jdbc:postgresql:$(DB_URL_HOSTNAME):$(DB_URL_PORT)/$(DB_URL_DBNAME)
...
If all the pre-variables are defined as env variables:
- { name: DATABASE_URL, value: '{{ printf "jdbc:postgresql:$(DATABASE_HOST):$(DATABASE_PORT)/$(DB_URL_DBNAME)" }}'}
With this statement you can also bring in values from the values.yaml file.
For example, if you have defined DB_URL_DBNAME in the values file:
- { name: DATABASE_URL, value: '{{ printf "jdbc:postgresql:$(DATABASE_HOST):$(DATABASE_PORT)/%s" .Values.database.DB_URL_DBNAME }}'}
There are a couple of things I can think of:
Use a secrets volume and make a startup script that reads the secrets from the volume and then starts your application with the DATABASE_URL environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: your_db_container
      command: [ "yourscript.sh" ]
      volumeMounts:
        - name: mycreds
          mountPath: "/etc/credentials"
  volumes:
    - name: mycreds
      secret:
        secretName: my-database-credentials
        defaultMode: 256
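yourscript.sh could then assemble DATABASE_URL from the mounted files and hand off to the real entrypoint (a sketch; /start/yourdb is a placeholder for your application's start command):
#!/bin/sh
# Each Secret key is mounted as a file under /etc/credentials.
DB_HOSTNAME=$(cat /etc/credentials/hostname)
DB_PORT=$(cat /etc/credentials/port)
DB_NAME=$(cat /etc/credentials/database)

# Compose the URL the application expects, then exec the real process.
export DATABASE_URL="jdbc:postgresql:${DB_HOSTNAME}:${DB_PORT}/${DB_NAME}"
exec /start/yourdb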
Pass the env variable in the command key of your container spec:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mypod
      image: your_db_container
      command: [ "/bin/sh", "-c", "DATABASE_URL=jdbc:postgresql:<hostname>:<port>/<database>/$(DATABASE_USERNAME):$(DATABASE_PASSWORD) /start/yourdb" ]
      env:
        - name: DATABASE_USERNAME
          valueFrom:
            secretKeyRef:
              name: my-database-credentials
              key: username
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: my-database-credentials
              key: password
There are several ways to go, in increasing order of complexity:
Mangle the parameter before putting it into the Secret (extend whatever you use to insert the info there).
Add a script to your Pod/Container that mangles the incoming parameters (environment variables or command arguments) into what is needed. If you cannot or don't want to build your own container image, you can add the extra script as a Volume to the container and set the container's command field to override the image's start command.
Add a facility to your Kubernetes cluster that does the mangling "behind the scenes": you can add a Dynamic Admission Controller, or you can create a Kubernetes operator with a Custom Resource Definition (the CRD would tell the operator which Secrets to watch for changes, and the operator would read the values and generate whatever other entries you want).
Say I have a Secret per user, like:
apiVersion: v1
kind: Secret
metadata:
  name: john-secret
data:
  USERNAME: abc=
  PASSWORD: def=
---
apiVersion: v1
kind: Secret
metadata:
  name: jane-secret
data:
  USERNAME: ghi=
  PASSWORD: jkl=
Then I could include them like:
env:
  - name: JOHN_USERNAME
    valueFrom:
      secretKeyRef:
        name: john-secret
        key: USERNAME
  - name: JOHN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: john-secret
        key: PASSWORD
  - name: JANE_USERNAME
    valueFrom:
      secretKeyRef:
        name: jane-secret
        key: USERNAME
  - name: JANE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: jane-secret
        key: PASSWORD
And use them in a Node.js app like process.env.JOHN_USERNAME, etc.
This works, but is there a cleaner/easier way to set secrets for a bunch of users that each have multiple fields? I imagine this would get messy with, say, 100 users x 5 fields.
You can mount the secret as a volume. Adapting the example from the linked Kubernetes documentation (note the volume name must match between volumeMounts and volumes):
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
    - name: test-container
      image: nginx
      volumeMounts:
        - name: john-secret
          mountPath: /etc/john-secret
  volumes:
    - name: john-secret
      secret:
        secretName: john-secret
If you have a bunch of secrets, you'd need to mount them all into the Pod spec, which is a maintainability problem in itself.
I don't think anything actually stops you from using a more structured data object, like a JSON or YAML file, as the value of a secret. That could work reasonably well in combination with mounting it as a volume.
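For example, a single Secret could carry one JSON document covering all users (names and values here are purely illustrative), which the app parses after reading it from the mounted volume:
apiVersion: v1
kind: Secret
metadata:
  name: all-users-secret
stringData:
  users.json: |
    {
      "john": { "username": "john", "password": "s3cret1" },
      "jane": { "username": "jane", "password": "s3cret2" }
    }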
If you truly have a lot of secrets, many "users" with many values for each, then some sort of external storage for the secrets is probably a better idea. If they're just usernames and passwords, it's very common to store a one-way hash of the password in a database (which also allows them to be updated without redeploying the system). Tools like HashiCorp's Vault can be complicated to administer, but they treat actual security of this content as a priority, and you get much more rigorous control over who can access the secrets.