As the documentation shows, you should be setting the env vars when doing a docker run like the following:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar' postgres
This sets the superuser and password to access the database instead of the defaults of POSTGRES_PASSWORD='' and POSTGRES_USER='postgres'.
However, I'm using Skaffold to spin up a k8s cluster and I'm trying to figure out how to do something similar. How does one go about doing this for Kubernetes and Skaffold?
@P Ekambaram is correct, but I would like to go further into this topic and explain the whys and hows.
When passing passwords in Kubernetes, it's highly recommended to keep them out of plain-text manifests and use Secrets. Keep in mind that Secrets are only base64-encoded by default, so enable encryption at rest (or use an external secret store) if you need actual encryption.
Creating your own Secrets (Doc)
To be able to use the secrets as described by @P Ekambaram, you need to have a Secret in your Kubernetes cluster.
The easiest way is to define the Secret with a generator and then apply the kustomization to create the object on the API server. The generator is specified in a kustomization.yaml inside a directory.
For example, to generate a Secret from literals username=admin and password=secret, you can specify the secret generator in kustomization.yaml as
# Create a kustomization.yaml file with SecretGenerator
$ cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
literals:
- username=admin
- password=secret
EOF
Apply the kustomization directory to create the Secret object.
$ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
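To check what was created, you can inspect the generated Secret; kustomize appends a hash of the contents to the name, as in the output above:
$ kubectl get secret db-user-pass-dddghtt9b5 -o yaml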
Using Secrets as Environment Variables (Doc)
This is an example of a pod that uses secrets from environment variables:
apiVersion: v1
kind: Pod
metadata:
name: secret-env-pod
spec:
containers:
- name: mycontainer
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
restartPolicy: Never
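Note that this Pod expects a Secret named mysecret with username and password keys. A minimal sketch of such a Secret (the values here are just placeholders; stringData is used so they can be written in plain text) could be:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  username: admin
  password: secret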
Source: the Kubernetes documentation on Secrets (https://kubernetes.io/docs/concepts/configuration/secret/).
Use the YAML below:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      name: postgres
template:
metadata:
labels:
name: postgres
spec:
containers:
- name: postgres
image: postgres:11.2
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB
value: "sampledb"
- name: POSTGRES_USER
value: "postgres"
- name: POSTGRES_PASSWORD
value: "secret"
volumeMounts:
- name: data
mountPath: /var/lib/postgresql
volumes:
- name: data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
type: ClusterIP
ports:
- port: 5432
selector:
name: postgres
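If you would rather not hard-code the credentials in the StatefulSet, you can pull them from a Secret instead. A sketch of the env section, assuming a hypothetical Secret named postgres-secret with a password key, would be:
env:
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: postgres-secret   # hypothetical Secret holding the password
      key: password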
Related
I have configured minikube and am trying to run Kubernetes on my local Ubuntu machine.
When I run MongoDB locally with Docker Compose, I can pass the env variables this way and it works well with the backend API:
mongo_db:
image: mongo:latest
container_name: db_container
environment:
- MONGODB_INITDB_DATABASE=contacts
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=password
ports:
- 27017:27017
volumes:
- ./mongodb_data_container:/data/db
But when I try to run the entire application (frontend, backend, and MongoDB) in Kubernetes, how do I initialize MongoDB with the env variables so the backend API can connect to the database pod instance? I'm pulling the latest MongoDB image; here's the mongo-deployment YAML file:
# MongoDB Deployment - Database
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo
spec:
selector:
matchLabels:
app: mern-stack
replicas: 1
template:
metadata:
labels:
app: mern-stack
spec:
containers:
- name: mern-stack
image: mongo:latest
ports:
- containerPort: 27017
volumeMounts:
- name: db-data
mountPath: /data
readOnly: false
volumes:
- name: db-data
persistentVolumeClaim:
claimName: mern-stack-data
I have tried to pass the env variables this way, but it doesn't seem to work:
...
volumeMounts:
- name: db-data
mountPath: /data
readOnly: false
env:
- name: MONGODB_INITDB_DATABASE
value: "contacts"
- name: MONGO_INITDB_ROOT_USERNAME
value: "root"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "password"
...
What's the quick solution? Should I eventually use a ConfigMap and a Secret?
volumeMounts are usually used for whole config files, e.g. *.conf.
It's more convenient to use a Secret in your case.
1. Create the Secret resource:
apiVersion: v1
data:
db_url: YWRtaW4=
db_password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
name: <secretname>
namespace: <namespaceuwant>
type: Opaque
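The values under data must be base64-encoded. For example, the two values above decode to admin and 1f2d1e2e67df:
$ echo -n 'admin' | base64
YWRtaW4=
$ echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm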
2. Reference it in your MongoDB container spec, for example:
containers:
- name: db_container
  image: mongo:latest
  env:
  - name: db_url             # env var your app can read
    valueFrom:
      secretKeyRef:
        name: <secretname>   # the Secret created in step 1
        key: db_url          # key inside that Secret
...
Configure kubectl to default to your namespace.
If you have not already, run the following command to execute all kubectl commands in the namespace you created:
kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
You can choose to use a cleartext password:
apiVersion: v1
kind: Secret
metadata:
name: <mms-user-1-password>
# corresponds to user.spec.passwordSecretKeyRef.name
type: Opaque
stringData:
password: <my-plain-text-password>
# corresponds to user.spec.passwordSecretKeyRef.key
...
or you can choose to use a Base64-encoded password:
---
apiVersion: v1
kind: Secret
metadata:
name: <mms-user-1-password>
# corresponds to user.spec.passwordSecretKeyRef.name
type: Opaque
data:
password: <base-64-encoded-password>
# corresponds to user.spec.passwordSecretKeyRef.key
...
Create a new User Secret YAML file.
To learn about your options for secret storage, see
https://kubernetes.io/docs/concepts/configuration/secret/
Change these values to yours:
stringData.password: plaintext password for the desired user.
data.password: Base64-encoded password for the desired user.
Save the User Secret file with a .yaml extension
Create MongoDBUser
Copy the following example MongoDBUser
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
name: <mms-scram-user-1>
spec:
passwordSecretKeyRef:
name: <mms-user-1-password>
# Match to metadata.name of the User Secret
key: password
username: "<mms-scram-user-1>"
db: "admin" #
mongodbResourceRef:
name: "<my-replica-set>"
    # Match to the MongoDB resource using authentication
roles:
- db: "admin"
name: "clusterAdmin"
- db: "admin"
name: "userAdminAnyDatabase"
- db: "admin"
name: "readWrite"
- db: "admin"
name: "userAdminAnyDatabase"
...
Add your own fields.
Add any additional roles for the user to the MongoDBUser.
Invoke the following Kubernetes command to create your database user:
kubectl apply -f <database-user-conf>.yaml
When you create a new MongoDB database user, the Kubernetes Operator automatically creates a new Kubernetes Secret. The Secret contains the following information about the new database user:
username: Username for the database user
password: Password for the database user
connectionString.standard: Standard connection string that can connect you to the database as this database user.
connectionString.standardSrv: DNS seed list connection string that can connect you to the database as this database user.
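You can read those values back from the generated Secret once it exists; for example (the generated Secret's name depends on your resource and user names, so <generated-user-secret> is a placeholder):
kubectl get secret <generated-user-secret> \
  -o jsonpath='{.data.connectionString\.standardSrv}' | base64 --decode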
For more details, see the MongoDB Kubernetes Operator documentation.
You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not.
In your Pod definition specify that the container should pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
labels:
context: docker-k8s-lab
name: mongo-pod
name: mongo-pod
spec:
containers:
- image: "mongo:latest"
name: mongo
ports:
    - containerPort: 27017
envFrom:
- secretRef:
name: mongo-secret
You can now define two different Secrets, one for production and one for dev.
dev-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
type: Opaque
data:
MYSQL_USER: bXlzcWwK
MYSQL_PASSWORD: bXlzcWwK
MYSQL_DATABASE: c2FtcGxlCg==
MYSQL_ROOT_PASSWORD: c3VwZXJzZWNyZXQK
prod-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: mongo-secret
type: Opaque
data:
MYSQL_USER: am9obgo=
MYSQL_PASSWORD: c2VjdXJlCg==
MYSQL_DATABASE: cHJvZC1kYgo=
MYSQL_ROOT_PASSWORD: cm9vdHkK
And deploy the correct secret to the correct Kubernetes cluster:
kubectl config use-context dev
kubectl create -f dev-secret.yaml
kubectl config use-context prod
kubectl create -f prod-secret.yaml
Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
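You can verify this from inside the running Pod, for example:
kubectl exec mongo-pod -- printenv | grep MYSQL_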
I have the ConfigMap below, which pulls secrets from GSM (Google Secret Manager).
apiVersion: v1
kind: ConfigMap
metadata:
name: db-config
labels:
app: poc
data:
entrypoint.sh: |
#!/usr/bin/env bash
set -euo pipefail
echo $(gcloud secrets versions access --project=<project> --secret=<secret-name>) >> /var/config/dburl.env
---
apiVersion: v1
kind: Pod
metadata:
name: poc-pod
namespace: default
spec:
initContainers:
- image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
name: init
command: ["/tmp/entrypoint.sh"]
volumeMounts:
- mountPath: /tmp
name: entrypoint
- mountPath: /var/config
name: secrets
volumes:
# volumes mounting
...
containers:
- image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
name: my-container
volumeMounts:
- mountPath: /var/config
name: secrets
env:
- name: HOST
?? # Assign value fetched in configmap
How do I assign values from the files created via the ConfigMap to the container's env variables? Or is there any other approach to achieve this?
I need to send a couple of env variables to the Spring Cloud Config service. It's hard to find any guide/documentation for this. Any help is appreciated!
One of the best ways to access secrets in Google Secret Manager from GKE is the External Secrets Operator, which you can install easily using Helm.
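For example, using the upstream chart (chart name and namespace here are the standard defaults, adjust as needed):
helm repo add external-secrets https://charts.external-secrets.io
helm repo update
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets --create-namespace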
Once it is installed, create a service account with the role roles/secretmanager.secretAccessor, download its credentials (key file), and save them in a k8s secret:
kubectl create secret generic gcpsm-secret --from-file=service-account-credentials=key.json
Then you can define your secret store (this is not exclusive to GCP; it works with other secret managers such as AWS Secrets Manager):
apiVersion: external-secrets.io/v1alpha1
kind: SecretStore
metadata:
name: gcp-secret-manager
spec:
provider:
gcpsm:
auth:
secretRef:
secretAccessKeySecretRef:
name: gcpsm-secret # the secret you created in the first step
key: service-account-credentials
projectID: <your project id>
Now, you can create an external secret, and the operator will read the secret from the secret manager and create a k8s secret for you
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
name: gcp-external-secret
spec:
secretStoreRef:
kind: SecretStore
name: gcp-secret-manager
target:
name: k8s-secret # the k8s secret name
data:
- secretKey: host # the key name in the secret
remoteRef:
key: <secret-name in gsm>
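Once the operator has synced it, a plain Kubernetes Secret named k8s-secret should appear, and you can inspect both objects as usual:
kubectl get externalsecret gcp-external-secret
kubectl get secret k8s-secret -o yaml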
Finally in your pod, you can access the secret by:
apiVersion: v1
kind: Pod
metadata:
name: poc-pod
namespace: default
spec:
initContainers:
- image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
name: init
command: ["/tmp/entrypoint.sh"]
volumeMounts:
- mountPath: /tmp
name: entrypoint
- mountPath: /var/config
name: secrets
volumes:
# volumes mounting
...
containers:
- image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
name: my-container
volumeMounts:
- mountPath: /var/config
name: secrets
env:
- name: HOST
valueFrom:
secretKeyRef:
name: k8s-secret
key: host
If you update the secret value in the secret manager, you should recreate the external secret to update the k8s secret value.
A pod created in the same (default) namespace as its Secret does not see values from it.
The Secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
name: backend-secret
data:
SECRET_KEY: <base64 of value>
DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I launch a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
name: backend
spec:
containers:
- image: backend
name: backend
ports:
- containerPort: 8000
imagePullSecrets:
- name: dockerhub-credentials
volumes:
- name: secret
secret:
secretName: backend-secret
But the pod crashes when it tries to read this environment variable via Python's os.environ['DEBUG'].
How do I make it work?
If you mount a Secret as a volume, it is mounted into the specified directory, where each key name becomes a file name.
If you want to expose Secret values to your pod as environment variables, reference the Secret in an env entry like the following:
apiVersion: v1
kind: Pod
metadata:
name: backend
spec:
containers:
- image: backend
name: backend
ports:
- containerPort: 8000
env:
- name: DEBUG
valueFrom:
secretKeyRef:
name: backend-secret
key: DEBUG
- name: SECRET_KEY
valueFrom:
secretKeyRef:
name: backend-secret
key: SECRET_KEY
imagePullSecrets:
- name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Finally, I used these lines under Deployment.spec.template.spec.containers:
containers:
- name: backend
image: zuber93/wts_backend
imagePullPolicy: Always
envFrom:
- secretRef:
name: backend-secret
ports:
- containerPort: 8000
I use minikube on Windows 10 and am trying to test Kubernetes ConfigMaps with both literal values and an external file. First, I create the manifest YAML below to make the ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
name: simple-config
data:
mysql_root_password: password
mysql_password: password
mysql_database: test
---
apiVersion: v1
kind: Pod
metadata:
name: blog-db
labels:
app: blog-mysql
spec:
containers:
- name: blog-mysql
image: mysql:latest
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
configMapKeyRef:
name: simple-config
key: mysql_root_password
- name: MYSQL_PASSWORD
valueFrom:
configMapKeyRef:
name: simple-config
key: mysql_password
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: simple-config
key: mysql_database
ports:
- containerPort: 3306
The ConfigMap YAML above throws no errors and works successfully. Next, I try to test a Kubernetes ConfigMap backed by a file.
== configmap.properties
mysql_root_password=password
mysql_password=password
mysql_database=test
But I am stuck at this part. Most ConfigMap examples use the kubectl command with the --from-file option, like below:
kubectl create configmap simple-config --from-file=configmap.properties
But I have no idea how to load the properties file using manifest YAML grammar alone. Any advice?
You cannot directly mount a properties file in a pod without first creating a ConfigMap from it. You can create the ConfigMap from an env file as below:
kubectl create configmap simple-config \
--from-env-file=configmap.properties
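If you want to stay purely declarative, you can express the same thing with a kustomize generator; a sketch, assuming configMapGenerator's envs field:
# kustomization.yaml
configMapGenerator:
- name: simple-config
  envs:
  - configmap.properties
Then kubectl apply -k . creates the ConfigMap. Note that the generator appends a content hash to the name, the same way the Secret generator did earlier.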
I have an application in a container which reads certain data from a ConfigMap that looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
username: username
password: hello123
Now I created a Secret for the password and mounted it as an env variable when starting the container.
apiVersion: v1
kind: Secret
metadata:
name: appdbpassword
type: Opaque
stringData:
password: hello123
My pod looks like:
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.pod.name }}
spec:
containers:
- name: {{ .Values.container.name }}
image: {{ .Values.image }}
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do sleep 30; done;"]
env:
- name: password
valueFrom:
secretKeyRef:
name: appdbpassword
key: password
volumeMounts:
- name: config-volume
mountPath: /app/app-config/application.yaml
subPath: application.yaml
volumes:
- name: config-volume
configMap:
name: app-config
I tried using this env variable inside the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
        username: username
        password: ${password}
But my application is unable to read this password. Am I missing something here?
EDIT:
I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the Helm values.yaml file and then use it in the ConfigMap?
Your ${password} variable will not be replaced by its value, because application.yaml is delivered as a static file; it would only be substituted if something actually processed the file.
Consider a scenario where, instead of application.yaml, you pass this file:
application.sh: |
echo "${password}"
Now go inside /app/app-config and you will see the application.sh file. Run sh application.sh and you will see the value of the environment variable.
I hope this clears things up.
You cannot use a Secret inside a ConfigMap, as ConfigMaps are intended for non-sensitive data (see https://kubernetes.io/docs/concepts/configuration/secret/).
Also, you should avoid passing Secrets through environment variables, as that creates potential risk:
applications often dump environment variables in error reports, or even write them to the app logs at startup, which could expose the Secrets.
The best way is to mount the Secret as a file.
Here's a simple example of how to mount it as a file:
spec:
template:
spec:
containers:
- image: "my-image:latest"
name: my-app
...
volumeMounts:
- mountPath: "/var/my-app"
name: ssh-key
readOnly: true
volumes:
- name: ssh-key
secret:
secretName: ssh-key
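The ssh-key Secret referenced above could itself be created from a local file, for example (the file path and key name are just illustrative):
kubectl create secret generic ssh-key --from-file=ssh-privatekey=./id_rsa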
The Kubernetes documentation explains in detail how to use and mount Secrets.