How to pass environment variables to a service in kubernetes? - mongodb

I have configured minikube and am trying to run Kubernetes on my local Ubuntu machine.
When I run MongoDB locally with Docker Compose, I can pass the env variables this way and it works well with the backend API:
mongo_db:
  image: mongo:latest
  container_name: db_container
  environment:
    - MONGODB_INITDB_DATABASE=contacts
    - MONGO_INITDB_ROOT_USERNAME=root
    - MONGO_INITDB_ROOT_PASSWORD=password
  ports:
    - 27017:27017
  volumes:
    - ./mongodb_data_container:/data/db
But when I try to run the entire application (frontend, backend, and MongoDB) in Kubernetes, how do I initialize MongoDB with the env variables so the backend API can connect to the database pod? I'm pulling the latest mongo image; here's the mongo-deployment YAML file:
# MongoDB Deployment - Database
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mern-stack
  replicas: 1
  template:
    metadata:
      labels:
        app: mern-stack
    spec:
      containers:
        - name: mern-stack
          image: mongo:latest
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: db-data
              mountPath: /data
              readOnly: false
      volumes:
        - name: db-data
          persistentVolumeClaim:
            claimName: mern-stack-data
I have tried to pass the env variables this way, but it doesn't seem to work:
...
volumeMounts:
  - name: db-data
    mountPath: /data
    readOnly: false
env:
  - name: MONGODB_INITDB_DATABASE
    value: "contacts"
  - name: MONGO_INITDB_ROOT_USERNAME
    value: "root"
  - name: MONGO_INITDB_ROOT_PASSWORD
    value: "password"
...
What's the quick solution? Should I eventually move to a ConfigMap and a Secret?

volumeMounts are usually used for mounting whole config files, e.g. *.conf.
In your case it's more convenient to use a Secret.
1. Create the Secret resource:
apiVersion: v1
data:
  db_url: YWRtaW4=
  db_password: MWYyZDFlMmU2N2Rm
kind: Secret
metadata:
  name: <secretname>
  namespace: <namespaceuwant>
type: Opaque
2. Reference it from your MongoDB container spec:
containers:
  - name: mongo
    image: mongo:latest
    env:
      - name: db_url              # env var your container will see
        valueFrom:
          secretKeyRef:
            name: <secretname>    # the Secret created in step 1
            key: db_url           # the key inside that Secret
...
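The base64 values above decode to admin and 1f2d1e2e67df. Instead of encoding them by hand, you can let kubectl build the same Secret for you:
kubectl create secret generic <secretname> \
  --namespace <namespaceuwant> \
  --from-literal=db_url=admin \
  --from-literal=db_password=1f2d1e2e67df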

Configure kubectl to default to your namespace.
If you have not already, run the following command to execute all kubectl commands in the namespace you created:
kubectl config set-context $(kubectl config current-context) --namespace=<metadata.namespace>
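On reasonably recent kubectl versions the same thing can be written more simply with the --current flag:
kubectl config set-context --current --namespace=<metadata.namespace>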
You can choose to use a cleartext password:
apiVersion: v1
kind: Secret
metadata:
  name: <mms-user-1-password>
  # corresponds to user.spec.passwordSecretKeyRef.name
type: Opaque
stringData:
  password: <my-plain-text-password>
  # corresponds to user.spec.passwordSecretKeyRef.key
...
or you can choose to use a Base64-encoded password:
---
apiVersion: v1
kind: Secret
metadata:
  name: <mms-user-1-password>
  # corresponds to user.spec.passwordSecretKeyRef.name
type: Opaque
data:
  password: <base-64-encoded-password>
  # corresponds to user.spec.passwordSecretKeyRef.key
...
Create a new User Secret YAML file.
To learn about your options for secret storage, see
https://kubernetes.io/docs/concepts/configuration/secret/
Change these values to yours:
- stringData.password: plaintext password for the desired user.
- data.password: Base64-encoded password for the desired user.
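If you opt for data.password, you can produce the Base64 value on the command line; note the -n, which avoids encoding a trailing newline into the password:
echo -n '<my-plain-text-password>' | base64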
Save the User Secret file with a .yaml extension.
Create the MongoDBUser.
Copy the following example MongoDBUser:
apiVersion: mongodb.com/v1
kind: MongoDBUser
metadata:
  name: <mms-scram-user-1>
spec:
  passwordSecretKeyRef:
    name: <mms-user-1-password>
    # Match to metadata.name of the User Secret
    key: password
  username: "<mms-scram-user-1>"
  db: "admin"
  mongodbResourceRef:
    name: "<my-replica-set>"
    # Match to the MongoDB resource using authentication
  roles:
    - db: "admin"
      name: "clusterAdmin"
    - db: "admin"
      name: "userAdminAnyDatabase"
    - db: "admin"
      name: "readWrite"
...
Add your own fields.
Add any additional roles for the user to the MongoDBUser.
Invoke the following Kubernetes command to create your database user:
kubectl apply -f <database-user-conf>.yaml
When you create a new MongoDB database user, the Kubernetes Operator automatically creates a new Kubernetes secret. The Kubernetes secret contains the following information about the new database user:
- username: username for the database user
- password: password for the database user
- connectionString.standard: standard connection string that can connect you to the database as this database user
- connectionString.standardSrv: DNS seed list connection string that can connect you to the database as this database user
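If you want to read one of those values back out, something like this should work; treat it as a sketch, since the exact generated secret name depends on your resource and user names (the backslash escapes the literal dot in the key):
kubectl get secret <generated-user-secret> -o jsonpath='{.data.connectionString\.standard}' | base64 --decode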
For more details, you can visit here

You can populate a container's environment variables through the use of Secrets or ConfigMaps. Use Secrets when the data you are working with is sensitive (e.g. passwords), and ConfigMaps when it is not.
In your Pod definition specify that the container should pull values from a Secret:
apiVersion: v1
kind: Pod
metadata:
  labels:
    context: docker-k8s-lab
    name: mongo-pod
  name: mongo-pod
spec:
  containers:
    - image: "mongo:latest"
      name: mongo
      ports:
        - containerPort: 27017
      envFrom:
        - secretRef:
            name: mongo-secret
You can now define two different Secrets, one for production and one for dev.
dev-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  MONGO_INITDB_ROOT_USERNAME: cm9vdA==       # "root"
  MONGO_INITDB_ROOT_PASSWORD: cGFzc3dvcmQ=   # "password"
  MONGO_INITDB_DATABASE: Y29udGFjdHM=        # "contacts"
prod-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  MONGO_INITDB_ROOT_USERNAME: am9obg==       # "john"
  MONGO_INITDB_ROOT_PASSWORD: c2VjdXJl       # "secure"
  MONGO_INITDB_DATABASE: cHJvZC1kYg==        # "prod-db"
And deploy the correct secret to the correct Kubernetes cluster:
kubectl config use-context dev
kubectl create -f dev-secret.yaml
kubectl config use-context prod
kubectl create -f prod-secret.yaml
Now whenever a Pod starts it will populate its environment variables from the values specified in the Secret.
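To verify the injection, print the container's environment and check that the keys from the Secret are present:
kubectl exec mongo-pod -- printenv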

Related

Assigning configmap entries to container's env variables

I have the ConfigMap below, which pulls secrets from GSM (Google Secret Manager).
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
  labels:
    app: poc
data:
  entrypoint.sh: |
    #!/usr/bin/env bash
    set -euo pipefail
    echo $(gcloud secrets versions access --project=<project> --secret=<secret-name>) >> /var/config/dburl.env
---
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: init
      command: ["/tmp/entrypoint.sh"]
      volumeMounts:
        - mountPath: /tmp
          name: entrypoint
        - mountPath: /var/config
          name: secrets
  volumes:
    # volumes mounting
    ...
  containers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: my-container
      volumeMounts:
        - mountPath: /var/config
          name: secrets
      env:
        - name: HOST
          ?? # Assign value fetched in configmap
How can I assign values from the ConfigMap-created files to the container's env variables? Or is there another approach to achieve this?
I need to send a couple of env variables to a Spring Cloud Config service. It's hard to find any guide/documentation for this. Any help is appreciated!
One of the best ways to access secrets in Google Secret Manager from GKE is the External Secrets Operator; you can install it easily using Helm.
Once it is installed, you create a service account with the role roles/secretmanager.secretAccessor, then you download the creds (key file) and save them in a k8s secret:
kubectl create secret generic gcpsm-secret --from-file=service-account-credentials=key.json
Then you can define your secret store (it is not exclusive to GCP; it works with other secret managers such as AWS Secrets Manager):
apiVersion: external-secrets.io/v1alpha1
kind: SecretStore
metadata:
  name: gcp-secret-manager
spec:
  provider:
    gcpsm:
      auth:
        secretRef:
          secretAccessKeySecretRef:
            name: gcpsm-secret # the secret you created in the first step
            key: service-account-credentials
      projectID: <your project id>
Now you can create an ExternalSecret, and the operator will read the secret from Secret Manager and create a k8s secret for you:
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: gcp-external-secret
spec:
  secretStoreRef:
    kind: SecretStore
    name: gcp-secret-manager
  target:
    name: k8s-secret # the k8s secret name
  data:
    - secretKey: host # the key name in the secret
      remoteRef:
        key: <secret-name in gsm>
Finally in your pod, you can access the secret by:
apiVersion: v1
kind: Pod
metadata:
  name: poc-pod
  namespace: default
spec:
  initContainers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: init
      command: ["/tmp/entrypoint.sh"]
      volumeMounts:
        - mountPath: /tmp
          name: entrypoint
        - mountPath: /var/config
          name: secrets
  volumes:
    # volumes mounting
    ...
  containers:
    - image: gcr.io/google.com/cloudsdktool/cloud-sdk:slim
      name: my-container
      volumeMounts:
        - mountPath: /var/config
          name: secrets
      env:
        - name: HOST
          valueFrom:
            secretKeyRef:
              name: k8s-secret
              key: host
If you update the secret value in the secret manager, you should recreate the external secret to update the k8s secret value.
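Alternatively, newer External Secrets Operator releases can re-sync on a schedule so you don't have to recreate anything by hand; a sketch (verify that the API version you installed supports spec.refreshInterval):
apiVersion: external-secrets.io/v1alpha1
kind: ExternalSecret
metadata:
  name: gcp-external-secret
spec:
  refreshInterval: 1h  # re-read the value from Secret Manager every hour
  # ...secretStoreRef, target and data as above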

Pod does not see secrets

A pod created in the same default namespace as its Secret does not see values from it.
The Secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: backend-secret
data:
  SECRET_KEY: <base64 of value>
  DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I'm launching a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
  imagePullSecrets:
    - name: dockerhub-credentials
  volumes:
    - name: secret
      secret:
        secretName: backend-secret
But the pod crashes when trying to read this environment variable via Python's os.environ['DEBUG'].
How can I make it work?
If you mount a Secret as a volume, it is mounted in the specified directory, with each key becoming a file name. For example, click here.
If you want to expose Secret values as environment variables in your pod, you need to reference the Secret in an environment variable like the following.
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
      env:
        - name: DEBUG
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: DEBUG
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: SECRET_KEY
  imagePullSecrets:
    - name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Finally, I've used these lines at Deployment.spec.template.spec.containers:
containers:
  - name: backend
    image: zuber93/wts_backend
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: backend-secret
    ports:
      - containerPort: 8000
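You can confirm both keys arrived with (backend being the pod name from the manifest above):
kubectl exec backend -- printenv DEBUG SECRET_KEY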

Setting postgres environmental variables running image

As the documentation shows, you should be setting the env vars when doing a docker run like the following:
docker run --name some-postgres -e POSTGRES_PASSWORD='foo' -e POSTGRES_USER='bar'
This sets the superuser and password to access the database instead of the defaults of POSTGRES_PASSWORD='' and POSTGRES_USER='postgres'.
However, I'm using Skaffold to spin up a k8s cluster and I'm trying to figure out how to do something similar. How does one go about doing this for Kubernetes and Skaffold?
@P Ekambaram is correct, but I would like to go further into this topic and explain the "whys and hows".
When passing passwords on Kubernetes, it's highly recommended to keep them out of plain manifests, and you can do this by using Secrets.
Creating your own Secrets (Doc)
To be able to use the secrets as described by @P Ekambaram, you need to have a secret in your Kubernetes cluster.
To easily create a Secret, you can also create it from generators and then apply the result to create the object on the API server. The generators should be specified in a kustomization.yaml inside a directory.
For example, to generate a Secret from literals username=admin and password=secret, you can specify the secret generator in kustomization.yaml as
# Create a kustomization.yaml file with SecretGenerator
$ cat <<EOF >./kustomization.yaml
secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=secret
EOF
Apply the kustomization directory to create the Secret object.
$ kubectl apply -k .
secret/db-user-pass-dddghtt9b5 created
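One caveat worth knowing: the generated Secret name carries a content-hash suffix (db-user-pass-dddghtt9b5 above), so a manifest that references a fixed name like db-user-pass will not match it. If you need a stable name, kustomize supports turning the suffix off; a minimal sketch of the same kustomization.yaml:
secretGenerator:
- name: db-user-pass
  literals:
  - username=admin
  - password=secret
generatorOptions:
  disableNameSuffixHash: true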
Using Secrets as Environment Variables (Doc)
This is an example of a pod that uses secrets from environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: redis
      env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
  restartPolicy: Never
Source: here and here.
Use the below YAML
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:11.2
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "sampledb"
            - name: POSTGRES_USER
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "secret"
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql
      volumes:
        - name: data
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
  selector:
    name: postgres
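With that Service in place, other pods in the cluster can reach Postgres at postgres:5432. Purely as an illustration (not part of the original answer), a client connection string matching the env values above would be:
postgresql://postgres:secret@postgres:5432/sampledb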

Use Kubernetes secrets as environment variables inside a config map

I have an application in a container which reads certain data from a ConfigMap, which goes like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: hello123
Now I created a secret for the password and mounted it as an env variable when starting the container.
apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
type: Opaque
stringData:
  password: hello123
My pod looks like:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.pod.name }}
spec:
  containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.image }}
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      env:
        - name: password
          valueFrom:
            secretKeyRef:
              name: appdbpassword
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /app/app-config/application.yaml
          subPath: application.yaml
  volumes:
    - name: config-volume
      configMap:
        name: app-config
I tried using this env variable inside the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: ${password}
But my application is unable to read this password. Am I missing something here?
EDIT:
I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the values.yaml (Helm) file and use it in the ConfigMap?
Your ${password} variable will not be replaced by its value, because application.yaml is mounted as a static file; nothing interprets it. If the file is executed by a shell, the variable does get expanded. Consider a scenario where, instead of application.yaml, you mount this file:
application.sh: |
  echo "${password}"
Now go inside /app/app-config and you will see the application.sh file. Run sh application.sh and you will see the value of the environment variable.
I hope this makes the point clear.
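If the server really must read a rendered application.yaml, one common workaround (not from the original answer, just a sketch) is an initContainer that substitutes the variable into the template before the app starts. It assumes an image where gettext's envsubst can be installed, and a hypothetical rendered emptyDir volume shared with the app container, which would then mount /rendered/application.yaml at /app/app-config instead of the ConfigMap:
initContainers:
  - name: render-config
    image: alpine:3.19  # assumption: any image where gettext (envsubst) is installable
    command: ["/bin/sh", "-c"]
    args:
      - apk add --no-cache gettext &&
        envsubst < /config-template/application.yaml > /rendered/application.yaml
    env:
      - name: password
        valueFrom:
          secretKeyRef:
            name: appdbpassword
            key: password
    volumeMounts:
      - name: config-volume   # the ConfigMap containing ${password}
        mountPath: /config-template
      - name: rendered        # hypothetical emptyDir shared with the app container
        mountPath: /rendered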
You cannot use a Secret in a ConfigMap, as ConfigMaps are intended for non-sensitive data (see here).
Also, you should not pass Secrets via env variables, as this creates a potential risk (read more here on why env shouldn't be used): applications often dump environment variables in error reports, or even write them to the app logs at startup, which could expose Secrets.
The best way would be to mount the Secret as a file.
Here's a simple example of how to mount it as a file:
spec:
  template:
    spec:
      containers:
        - image: "my-image:latest"
          name: my-app
          ...
          volumeMounts:
            - mountPath: "/var/my-app"
              name: ssh-key
              readOnly: true
      volumes:
        - name: ssh-key
          secret:
            secretName: ssh-key
The Kubernetes documentation explains well how to use and mount Secrets.

Authentication mongo deployed on kubernetes

I tried to configure mongo with authentication on a kubernetes cluster. I deployed the following yaml:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongodb
          image: mongo:4.0.0
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "admin"
            - name: MONGO_INITDB_ROOT_PASSWORD
              # Get password from secret
              value: "abc123changeme"
          command:
            - mongod
            - --auth
            - --replSet
            - rs0
            - --bind_ip
            - 0.0.0.0
          ports:
            - containerPort: 27017
              name: web
          volumeMounts:
            - name: mongo-ps
              mountPath: /data/db
      volumes:
        - name: mongo-ps
          persistentVolumeClaim:
            claimName: mongodb-pvc
When I tried to authenticate with username "admin" and password "abc123changeme" I received "Authentication failed.".
How can I configure mongo admin username and password (I want to get password from secret)?
Thanks
The reason the environment variables don't work is that the MONGO_INITDB_* variables are consumed by the docker-entrypoint.sh script within the image (https://github.com/docker-library/mongo/tree/master/4.0); however, when you define a command: in your Kubernetes file, you override that entrypoint (see the notes at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/).
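In other words, drop command: and let the image's entrypoint run; extra mongod flags can go into args: instead (the entrypoint prepends mongod when the first argument starts with a dash). A sketch of the container section, assuming the credentials live in a Secret named mongo-secret:
containers:
  - name: mongodb
    image: mongo:4.0.0
    # no `command:` here, so docker-entrypoint.sh still runs, consumes the
    # MONGO_INITDB_* variables and enables --auth for you
    args: ["--replSet", "rs0", "--bind_ip", "0.0.0.0"]
    env:
      - name: MONGO_INITDB_ROOT_USERNAME
        valueFrom:
          secretKeyRef:
            name: mongo-secret  # assumed Secret holding the credentials
            key: username
      - name: MONGO_INITDB_ROOT_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mongo-secret
            key: password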
See the YML below, which is adapted from a few of the examples I found online. Note the learning points for me:
- cvallance/mongo-k8s-sidecar looks for ANY mongo instance matching the POD labels REGARDLESS of namespace, so it will try to hook up with any old instance in the cluster. This caused me a few hours of head-scratching, as I'd removed the environment= labels from the example because we use namespaces to segregate our environments. Silly and obvious in retrospect, but extremely confusing in the beginning (the mongo logs were throwing all sorts of authentication and service-down errors because of the cross-talk).
- I was new to ClusterRoleBindings, and it took me a while to realise they are cluster-level (despite kubectl needing a namespace to accept them), which I know seems obvious. Mine were getting overwritten between namespaces, so make sure you create unique names per environment: a deployment in one namespace can break another, because a ClusterRoleBinding gets overwritten if its name is not unique within the cluster.
- MONGODB_DATABASE needs to be set to 'admin' for authentication to work.
- I was following this example to configure authentication, which depended on a sleep 5 in the hope that the daemon was up and running before attempting to create the adminUser. I found this wasn't long enough, so I upped it initially, as failure to create the adminUser obviously led to connection-refused issues. I later replaced the sleep with a while loop that pings mongo, which is more foolproof.
- If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in the system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
- You need at least 3 nodes in a Mongo cluster!
The YML below should spin up and configure a mongo replica set in Kubernetes with persistent storage and authentication enabled.
If you connect into the pod...
kubectl exec -ti mongo-db-0 --namespace somenamespace /bin/bash
the mongo shell is installed in the image, so you should be able to connect to the replica set with...
mongo mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0
...and see that you get either rs0:PRIMARY> or rs0:SECONDARY>, indicating the pods form a mongo replica set. Use rs.conf() to verify that from the PRIMARY.
# Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
# so we can enable authentication
apiVersion: v1
data:
  # echo -n "mongoadmin" | base64
  init.userid: bW9uZ29hZG1pbg==
  # echo -n "adminpassword" | base64
  init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
  name: mongo-init-credentials
  namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here)
apiVersion: v1
data:
  # echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
  mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
  name: mongo-key
  namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it Pod List role
# note this is a ClusterRoleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mongo-serviceaccount
  namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mongo-somenamespace-serviceaccount-view
  namespace: somenamespace
subjects:
  - kind: ServiceAccount
    name: mongo-serviceaccount
    namespace: somenamespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-viewer
  namespace: somenamespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list"]
---
# Create a Storage Class for Google Container Engine
# Note fstype: xfs isn't supported by GCE yet and the
# Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: somenamespace
  name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
allowVolumeExpansion: true
---
# Headless Service for StatefulSets
apiVersion: v1
kind: Service
metadata:
  namespace: somenamespace
  name: mongo-db
  labels:
    name: mongo-db
spec:
  ports:
    - port: 27017
      targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: somenamespace
  name: mongo-db
spec:
  serviceName: mongo-db
  replicas: 3
  template:
    metadata:
      labels:
        # Labels MUST match MONGO_SIDECAR_POD_LABELS
        # and MUST differentiate between other mongo
        # instances in the CLUSTER not just the namespace
        # as the sidecar will search the entire cluster
        # for something to configure
        app: mongo
        environment: somenamespace
    spec:
      # Run the Pod using the service account
      serviceAccountName: mongo-serviceaccount
      terminationGracePeriodSeconds: 10
      # Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - mongo
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: mongo
          image: mongo:4.0.12
          command:
            # Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
            # in order to pass the new admin user id and password in
            - /bin/sh
            - -c
            - >
              if [ -f /data/db/admin-user.lock ]; then
                echo "KUBERNETES LOG $HOSTNAME - Starting Mongo Daemon with runtime settings (clusterAuthMode)"
                # ensure wiredTigerCacheSize is set within the size of the container's memory limit
                mongod --wiredTigerCacheSizeGB 0.5 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
              else
                echo "KUBERNETES LOG $HOSTNAME - Starting Mongo Daemon with setup setting (authMode)"
                mongod --auth;
              fi;
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - >
                    if [ ! -f /data/db/admin-user.lock ]; then
                      echo "KUBERNETES LOG $HOSTNAME - no admin-user.lock file found yet"
                      # replaced simple sleep with a ping and test
                      while (! mongo --eval "db.adminCommand('ping')"); do sleep 10; echo "KUBERNETES LOG $HOSTNAME - waiting another 10 seconds for mongo to start" >> /data/db/configlog.txt; done;
                      touch /data/db/admin-user.lock
                      if [ "$HOSTNAME" = "mongo-db-0" ]; then
                        echo "KUBERNETES LOG $HOSTNAME - creating admin user ${MONGODB_USERNAME}"
                        mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
                      fi;
                      echo "KUBERNETES LOG $HOSTNAME - shutting mongod down for final restart"
                      mongod --shutdown;
                    fi;
          env:
            - name: MONGODB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.userid
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.password
          ports:
            - containerPort: 27017
          livenessProbe:
            exec:
              command:
                - mongo
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 60
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - mongo
                - --eval
                - "db.adminCommand('ping')"
            initialDelaySeconds: 5
            periodSeconds: 60
            timeoutSeconds: 10
          resources:
            requests:
              memory: "350Mi"
              cpu: 0.05
            limits:
              memory: "1Gi"
              cpu: 0.1
          volumeMounts:
            - name: mongo-key
              mountPath: "/etc/secrets-volume"
              readOnly: true
            - name: mongo-persistent-storage
              mountPath: /data/db
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            # Sidecar searches for any POD in the CLUSTER with these labels,
            # not just the namespace, so we need to ensure the POD is labelled
            # to differentiate it from other PODS in different namespaces
            - name: MONGO_SIDECAR_POD_LABELS
              value: "app=mongo,environment=somenamespace"
            - name: MONGODB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.userid
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-init-credentials
                  key: init.password
            # don't be fooled by this..it's not your DB that
            # needs specifying, it's the admin DB as that
            # is what you authenticate against with mongo.
            - name: MONGODB_DATABASE
              value: admin
      volumes:
        - name: mongo-key
          secret:
            defaultMode: 0400
            secretName: mongo-key
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
        annotations:
          volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
Supposing you created a secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
Here is a snippet to get a value from a secret in a Kubernetes YAML file:
env:
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: password
I found that this issue is related to a bug in docker-entrypoint.sh; it occurs when numactl is detected on the node.
Try this simplified code (which moves numactl out of the way):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.0.0
          command:
            - /bin/bash
            - -c
            # mv is not needed for later versions e.g. 3.4.19 and 4.1.7
            - mv /usr/bin/numactl /usr/bin/numactl1 && source docker-entrypoint.sh mongod
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "xxxxx"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "xxxxx"
          ports:
            - containerPort: 27017
I raised an issue at:
https://github.com/docker-library/mongo/issues/330
Hopefully it will be fixed at some point so no need for the hack :o)
Adding this resolved the issue for me:
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
  value: "true"
Seems like the default is set to "false". (Note that the ME_CONFIG_* variables are read by mongo-express, not by the mongo image itself.)
If you are using Kubernetes, you can check the reason for the failure by using the command:
kubectl logs <pod name>
This is what worked for me:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: my-mongodb-pod
          image: mongo:4.4.3
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: "someMongoUser"
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: "somePassword"
            - name: MONGO_REPLICA_SET
              value: "myReplicaSet"
            - name: MONGO_PORT
              value: "27017"
          # Note: disabling non-auth in mongodb is kind of complicated [4]
          # Note: the `_getEnv` function is internal and undocumented [3].
          #
          # 1. https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
          # 2. https://stackoverflow.com/a/54726708/2768067
          # 3. https://stackoverflow.com/a/67037065/2768067
          # 4. https://www.mongodb.com/features/mongodb-authentication
          command:
            - /bin/sh
            - -c
            - |
              set -x  # print commands as they are run
              set -e  # fail if any command fails
              env
              ps auxwww
              printf "\n\t mongod: start in the background \n\n"
              mongod \
                --port="${MONGO_PORT}" \
                --bind_ip_all \
                --replSet="${MONGO_REPLICA_SET}" \
                --quiet > /tmp/mongo.log.json 2>&1 &
              sleep 9
              ps auxwww
              printf "\n\t mongod: set master \n\n"
              mongo --port "${MONGO_PORT}" --eval '
                rs.initiate({});
                sleep(3000);'
              printf "\n\t mongod: add user \n\n"
              mongo --port "${MONGO_PORT}" --eval '
                db.getSiblingDB("admin").createUser({
                  user: _getEnv("MONGO_INITDB_ROOT_USERNAME"),
                  pwd: _getEnv("MONGO_INITDB_ROOT_PASSWORD"),
                  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
                });'
              printf "\n\t mongod: shutdown \n\n"
              mongod --shutdown
              sleep 3
              ps auxwww
              printf "\n\t mongod: restart with authentication \n\n"
              mongod \
                --auth \
                --port="${MONGO_PORT}" \
                --bind_ip_all \
                --replSet="${MONGO_REPLICA_SET}" \
                --verbose=v