I am trying to run MongoDB within a Kubernetes cluster, secured with a keyFile. For this I created a simple StatefulSet and a ConfigMap in which I stored the keyfile:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongodb
spec:
serviceName: mongodb
replicas: 1
selector:
matchLabels:
app: mongodb
template:
metadata:
labels:
app: mongodb
spec:
containers:
- name: mongodb
image: mongo:4.4
args:
- --bind_ip
- '0.0.0.0,::'
- --replSet
- MySetname01
- --auth
- --keyFile
- /etc/mongodb/keyfile/keyfile
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: MyUsername
- name: MONGO_INITDB_ROOT_PASSWORD
value: MyPassword
ports:
- containerPort: 27017
name: mongodb
volumeMounts:
- name: mongodb-persistent-storage
mountPath: /data/db
- name: mongodb-keyfile
mountPath: /etc/mongodb/keyfile
readOnly: True
volumes:
- name: mongodb-keyfile
configMap:
name: mongodb-keyfile
volumeClaimTemplates:
- metadata:
name: mongodb-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
name: mongodb
labels:
app: mongodb
spec:
ports:
- port: 27017
selector:
app: mongodb
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mongodb-keyfile
data:
keyfile: |
+PN6gXEU8NeRsyjlWDnTesHCoPOn6uQIEI5pNorDkphREi6RyoSHCIaXOzLrUpPq
jpSGhSc5/MZj17R7K5anjerhvR6f5JtWjBuQcrjdJdNBceck71F2bly/u9ICfCOy
STFzv6foMQJBJTBYqLwtfyEO7CQ9ywodM0K5r9jtT7x5BiJaqso+F8VN/VFtIYpe
vnzKj7uU3GwDbmw6Yduybgv6P88BGXyW3w6HG8VLMgud5aV7wxIIPE6nAcr2nYmM
1BqC7wp8G6uCcMiHx5pONPA5ONYAIF+u3zj2wAthgMe2UeQxx2L2ERx8Zdsa9HLR
qYOmy9XhfolwdCTwwYvqYRO+RqXGoPGczenC/CKJPj14yfkua+0My5NBWvpL/fIB
osu0lQNw1vFu0rcT1/9OcaJHuwFWocec2qBih9tk2C3c7jNMuxkPo3dxjv8J/lex
vN3Et6tK/wDsQo2+8j6uLYkPFQbHZJQzf/oQiekV4RaC6/pejAf9fSAo4zbQXh29
8BIMpRL3fik+hvamjrtS/45yfqGf/Q5DQ7o8foI4HYmhy+SU2+Bxyc0ZLTn659zl
myesNjB6uC9lMWtpjas0XphNy8GvJxfjvz+bckccPUVczxyC3QSEIcVMMH9vhzes
AcQscswhFMgzp1Z0fbNKy0FqQiDy1hUSir06ZZ3xBGLKeIySRsw9D1Pyh1Y11HlH
NdGwF14cLqm53TGVd9gYeIAm2siQYMKm8rEjxmecc3yGgn0B69gtMcBmxr+z3xMU
X256om6l8L2BJjm3W1zUTiZABuKzeNKjhmXQdEFPQvxhubvCinTYs68XL76ZdVdJ
Q909MmllkOXKbAhi/TMdWmpV9nhINUCBrnu3F08jAQ3UkmVb923XZBzcbdPlpuHe
Orp11/f3Dke4x0niqATccidRHf6Hz+ufVkwIrucBZwcHhK4SBY/RU90n233nV06t
JXlBl/4XjWifB7iJi9mxy/66k
The problem is that MongoDB stays in a CrashLoopBackOff because the permissions on the keyfile are too open:
{"t":{"$date":"2022-12-19T12:41:41.399+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-12-19T12:41:41.402+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-12-19T12:41:41.402+00:00"},"s":"I", "c":"ACCESS", "id":20254, "ctx":"main","msg":"Read security file failed","attr":{"error":{"code":30,"codeName":"InvalidPath","errmsg":"permissions on /etc/mongodb/keyfile/keyfile are too open"}}}
I have no explanation for this.
I already set the volume mount of the ConfigMap to read-only (as you can see in the StatefulSet above). I also experimented with commands and lifecycle hooks to chmod the file to 600/400, and I tried different MongoDB versions, but I always got the same error.
Of course I also checked whether the ConfigMap is mounted correctly, and it is (I uncommented the args and the username/password for that test).
The permissions are shown as:
lrwxrwxrwx 1 root root 14 Dec 19 12:50 keyfile -> ..data/keyfile
Maybe it is related to the fact that the file is shown as a symlink?
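To see whether the symlink itself or its target is the problem, the permissions of the resolved file can be inspected from inside the container, roughly like this (the path is the mountPath from the StatefulSet above):
# dereference the symlink and show the permissions of the actual file
ls -lL /etc/mongodb/keyfile/keyfile
# show which file the symlink finally points to
readlink -f /etc/mongodb/keyfile/keyfile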
I am looking for a Kubernetes YAML that is able to start MongoDB with a keyfile. Thank you very much.
EDIT: I tried to mount the file directly (not as a symlink) using subPath. Now I get the following permissions:
-rw-r--r-- 1 root root 1001 Dec 19 13:34 mongod.key
But sadly the database will not start with that either; it is still crashing with the same error.
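For reference, the subPath variant I tried looked roughly like this (a sketch; the key name matches the ConfigMap above, the file name is the one shown in the listing):
volumeMounts:
  - name: mongodb-keyfile
    mountPath: /etc/mongodb/keyfile/mongod.key
    subPath: keyfile
    readOnly: true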
EDIT2:
Adding defaultMode: 0600 to the volume in the StatefulSet at least led to the correct permissions, but also to another error (already mentioned in one of my comments):
file: /var/lib/mongo/mongod.key: bad file"
So I tried mounting the keyfile at different places in the Pod (here /var/lib/, for example) and I also tried providing the keyfile as a Secret, but nothing is working.
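The Secret-based attempt with defaultMode looked roughly like this (a sketch; Secret name, key and mount path are assumptions that mirror the description above):
volumes:
  - name: mongodb-keyfile
    secret:
      secretName: mongodb-keyfile   # assumed Secret holding the same key material as the ConfigMap
      defaultMode: 0600             # file mode applied to the projected file
      items:
        - key: keyfile
          path: mongod.key
# ...and in the container spec:
volumeMounts:
  - name: mongodb-keyfile
    mountPath: /var/lib/mongo
    readOnly: true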
Related
I'm running a MongoDB (5.0.12) instance as a Kubernetes pod. Suddenly the pod is failing and I need some help understanding the logs:
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"AuthorizationManager-1","msg":"WiredTiger error","attr":{"error":1,"message":"[1663094391:104664][1:0x7fc5224cc700], file:index-9--3195476868760592993.wt, WT_SESSION.open_cursor: __posix_open_file, 808: /data/db/index-9--3195476868760592993.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"STORAGE", "id":50882, "ctx":"AuthorizationManager-1","msg":"Failed to open WiredTiger cursor. This may be due to data corruption","attr":{"uri":"table:index-9--3195476868760592993","config":"overwrite=false","error":{"code":8,"codeName":"UnknownError","errmsg":"1: Operation not permitted"},"message":"Please read the documentation for starting MongoDB with --repair here: http://dochub.mongodb.org/core/repair"}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"AuthorizationManager-1","msg":"Fatal assertion","attr":{"msgid":50882,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_session_cache.cpp","line":109}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"AuthorizationManager-1","msg":"\n\n***aborting after fassert() failure\n\n"}
So why is the operation not permitted? I already ran mongod --repair, but the error still occurs.
This is how the pod is deployed:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
strategy:
type: Recreate
template:
metadata:
labels:
app: mongodb
spec:
hostname: mongodb
# securityContext:
# runAsUser: 999
# runAsGroup: 3000
# fsGroup: 2000
volumes:
- name: data
persistentVolumeClaim:
claimName: data
containers:
- name: mongodb
image: mongo:5.0.12
args: ["--auth", "--dbpath", "/data/db"]
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
volumeMounts:
- mountPath: /data/db
name: data
# securityContext:
# allowPrivilegeEscalation: false
Update
The PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
You can try checking the permissions of that file before execution:
ls -l
Then, using chmod, you can try changing the permissions and executing it again.
OR
You can refer to this page; it might help you:
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
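For example, a minimal sketch of a pod-level securityContext along the lines of that page (the numeric IDs below are placeholders, not values required by MongoDB):
spec:
  securityContext:
    runAsUser: 999    # run the container process as this UID
    runAsGroup: 999   # primary GID of the process
    fsGroup: 999      # group ownership applied to mounted volume contents
  containers:
    - name: mongodb
      image: mongo:4.4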
You should have a look at setting the umask on the directory:
http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html
That will ensure new files in the directory are created with the specified permissions/ownerships.
I am trying to deploy a single-instance MongoDB inside a Kubernetes cluster (RKE2, specifically) on an AWS EC2 instance running Red Hat 8.5. I am just trying to use the local file system, i.e. no EBS. I am having trouble getting my application to work with persistent volumes, so I have a few questions. Below is my pv.yaml:
kind: Namespace
apiVersion: v1
metadata:
name: mongo
labels:
name: mongo
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: mongodb-pv
namespace: mongo
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/home/ec2-user/database"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongodb-pvc
namespace: mongo
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
And here is my Mongo deployment (I know having the user/password in plain text is not secure, but this is for the sake of the example):
apiVersion: v1
kind: Pod
metadata:
name: mongodb-pod
namespace: mongo
labels:
app.kubernetes.io/name: mongodb-pod
spec:
containers:
- name: mongo
image: mongo:latest
imagePullPolicy: Always
ports:
- containerPort: 27017
name: mongodb-cp
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "user"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "password"
volumeMounts:
- mountPath: /data/db
name: mongodb-storage
volumes:
- name: mongodb-storage
persistentVolumeClaim:
claimName: mongodb-pvc
---
apiVersion: v1
kind: Service
metadata:
name: mongodb
namespace: mongo
spec:
selector:
app.kubernetes.io/name: mongodb-pod
ports:
- name: mongodb-cp
port: 27017
targetPort: mongodb-cp
When I run the above configuration files, I get the following errors from the mongo pod:
find: '/data/db': Permission denied
chown: changing ownership of '/data/db': Permission denied
I tried creating a mongodb user on the host with a UID and GID of 1001, since that is the process owner inside the mongo container, and chowning the hostPath mentioned above, but the error persists.
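The host-side commands were roughly the following (a sketch; UID/GID 1001 and the hostPath are the values mentioned above):
# create a matching user/group on the host
sudo groupadd -g 1001 mongodb
sudo useradd -u 1001 -g 1001 -M -s /sbin/nologin mongodb
# hand the hostPath directory over to that user
sudo chown -R 1001:1001 /home/ec2-user/database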
I have tried adding a securityContext block at both the pod and container level like so:
securityContext:
runAsUser: 1001
runAsGroup: 1001
fsGroup: 1001
which does get me further, but I now get the following error:
{"t":{"$date":"2022-06-02T20:32:13.015+00:00"},"s":"E", "c":"CONTROL", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db"}}
and then the pod dies. If I set the container securityContext to privileged
securityContext:
privileged: true
Everything runs fine. So the two questions are: is it secure to run a pod as privileged? And if not (which is my assumption), what is the correct and secure way to use persistent volumes with the above configuration/example?
I'm suddenly facing some issues in my Kubernetes application (with no event to explain them). The application has been working properly for a year, but now I'm getting a CrashLoopBackOff status.
IMPORTANT UPDATE:
I cannot update the Mongo replication controller in GKE, because when I commit changes to mongo.yml (from Git) all workloads update except mongo-controller (which is down).
In GKE, under Workloads / mongo-controller / Managed pods, I can see that the "Created on" date is from some days ago, when the app was still up. The rest of the pods are updating with my commits. I don't want to delete the Mongo pod, because I suppose we would lose the database content. (The developer who created the cluster pipeline didn't schedule a backup.)
Database: MongoDB (latest; not sure which version was running properly before)
OS: Pod running on Ubuntu 18.04
Cluster: Google Kubernetes Engine (a.k.a. GKE)
kubectl get pods
mongo-controller-dgkkg 0/1 CrashLoopBackOff 1199 4d6h
Logs of the Mongo pod:
Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
No TransportLayer configured during NetworkInterface startup"}
Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.
MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongo-controller-dgkkg"}
Build Info","attr":{"buildInfo":{"version":"4.4.1","gitVersion":"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1","openSSLVersion":"OpenSSL 1.1.1 11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}
Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}
Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}
Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}
Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]
Opening WiredTiger","attr":{"config":"create,cache_size=1336M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress]
Failed to start up WiredTiger under any compatibility version. This may be due to an unsupported upgrade or downgrade.
Terminating.","attr":{"reason":"95: Operation not supported"}}
Fatal assertion","attr":{"msgid":28595,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp","line":1101}}
\n\n***aborting after fassert() failure\n\n
My Mongo.yml:
apiVersion: v1
kind: Service
metadata:
name: mongo
namespace: $KUBE_NAMESPACE-$CI_ENVIRONMENT_SLUG
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
selector:
name: mongo
---
apiVersion: v1
kind: ReplicationController
metadata:
name: mongo-controller
namespace: $KUBE_NAMESPACE-$CI_ENVIRONMENT_SLUG
labels:
name: mongo
spec:
replicas: 1
template:
metadata:
labels:
name: mongo
spec:
containers:
- image: mongo
name: mongo
ports:
- name: mongo
containerPort: 27017
hostPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
volumes:
- name: mongo-persistent-storage
gcePersistentDisk:
pdName: mongo-disk-$CI_ENVIRONMENT_SLUG
fsType: ext4
PS: Maybe I should update my ReplicationController to a Deployment (recommended), but since this is a database container we have always configured it that way. However, I tried that and nothing changed.
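For reference, the Deployment variant I tried was essentially the same pod template wrapped in a Deployment (a sketch; labels and volumes as in the ReplicationController above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-controller
  namespace: $KUBE_NAMESPACE-$CI_ENVIRONMENT_SLUG
spec:
  replicas: 1
  strategy:
    type: Recreate        # avoid two pods attaching the same GCE disk at once
  selector:
    matchLabels:
      name: mongo
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
        - image: mongo
          name: mongo
          ports:
            - name: mongo
              containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk-$CI_ENVIRONMENT_SLUG
            fsType: ext4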
I solved this issue by editing the ReplicationController online from the Google Cloud Console.
Go to: "Kubernetes Engine" > "Workloads" > "mongo-controller" > "Managed pods" > "mongo-controller-XXXXX"
...and press the EDIT button (in the top navbar). You can edit the configuration online in real time. I simply pinned the Mongo version (4.2.10) in the image, and everything worked as expected.
spec:
replicas: 1
selector:
name: mongo
template:
metadata:
creationTimestamp: null
labels:
name: mongo
spec:
containers:
- image: mongo:4.2.10
(...)
I am trying to deploy an Express API on GKE, with a Mongo StatefulSet.
googlecloud_ssd.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: fast
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
mongo-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 2
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo,environment=test"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "fast"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
I deployed my Express app and it works perfectly. I then deployed Mongo using the above YAML config.
Having set the connection string in Express as:
"mongodb://mongo-0.mongo,mongo-1.mongo:27017/"
I can see the updated pod(s) are not starting.
Looking at the logs for that container I see:
{
insertId: "a9tu83g211w2a6"
labels: {…}
logName: "projects/<my-project-id>/logs/express"
receiveTimestamp: "2019-06-03T14:19:14.142238836Z"
resource: {…}
severity: "ERROR"
textPayload: "[ ERROR ] MongoError: no mongos proxy available
"
timestamp: "2019-06-03T14:18:56.132989616Z"
}
I am unsure how to debug / fix MongoError: no mongos proxy available
Edit
So I scaled down my replicas to 1 on each and it's now working.
I'm confused as to why this won't work with more than 1 replica.
The connection to your MongoDB database doesn't work for two reasons:
You cannot connect to a highly available MongoDB deployment running inside your Kubernetes cluster using Pod DNS names. These unique Pod names (mongo-0.mongo and mongo-1.mongo, with corresponding FQDNs mongo-0.mongo.default.svc.cluster.local and mongo-1.mongo.default.svc.cluster.local) can only be reached from within the cluster. You have an Express web application that runs on the client side (web browser) and needs to connect to your MongoDB from outside the cluster.
Connection string: you should connect to the primary node via a Kubernetes Service name, which abstracts access to the Pods behind the replica set.
Solution:
Create a separate Kubernetes Service of LoadBalancer or NodePort type for your Primary ReplicaSet, and use <ExternalIP_of_LoadBalancer>:27017 in your connection string.
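A minimal sketch of such a Service (assuming mongo-0 is the primary; the statefulset.kubernetes.io/pod-name label is added automatically to StatefulSet pods, so it can be used to target a single member):
apiVersion: v1
kind: Service
metadata:
  name: mongo-external
spec:
  type: LoadBalancer          # or NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mongo-0
  ports:
    - port: 27017
      targetPort: 27017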
I would encourage you to take a look at the official MongoDB Helm chart to see what kind of manifest files are required to satisfy your case.
Hint: use '--set service.type=LoadBalancer' with this Helm chart.
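Applied to a Helm install, the hint would look roughly like this (the repo and chart names here are assumptions; substitute whichever MongoDB chart you actually use):
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mongo bitnami/mongodb --set service.type=LoadBalancer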
I tried to configure Mongo with authentication on a Kubernetes cluster. I deployed the following YAML:
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongodb
image: mongo:4.0.0
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "admin"
- name: MONGO_INITDB_ROOT_PASSWORD
# Get password from secret
value: "abc123changeme"
command:
- mongod
- --auth
- --replSet
- rs0
- --bind_ip
- 0.0.0.0
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: mongo-ps
mountPath: /data/db
volumes:
- name: mongo-ps
persistentVolumeClaim:
claimName: mongodb-pvc
When I tried to authenticate with username "admin" and password "abc123changeme" I received "Authentication failed.".
How can I configure mongo admin username and password (I want to get password from secret)?
Thanks
The reason the environment variables don't work is that the MONGO_INITDB_* environment variables are consumed by the docker-entrypoint.sh script within the image (https://github.com/docker-library/mongo/tree/master/4.0); however, when you define a 'command:' in your Kubernetes manifest you override that entrypoint (see the notes at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/).
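In other words, if the image's entrypoint (and therefore the MONGO_INITDB_* handling) should still run, the flags can be passed via args: instead of command:, roughly like this (a sketch based on the StatefulSet in the question; keyfile/clusterAuth settings for a replica set are handled separately in the full YAML below):
containers:
  - name: mongodb
    image: mongo:4.0.0
    args:              # passed to the image's docker-entrypoint.sh, which still runs
      - --auth
      - --replSet
      - rs0
      - --bind_ip
      - 0.0.0.0
    env:
      - name: MONGO_INITDB_ROOT_USERNAME
        value: "admin"
      - name: MONGO_INITDB_ROOT_PASSWORD
        value: "abc123changeme"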
See the YAML below, which is adapted from a few of the examples I found online. Note the learning points for me:
cvallance/mongo-k8s-sidecar looks for ANY mongo instance matching the Pod labels REGARDLESS of namespace, so it will try to hook up with any old instance in the cluster. This caused me a few hours of head-scratching, as I had removed the environment= labels from the example because we use namespaces to segregate our environments. Silly and obvious in retrospect, but extremely confusing in the beginning (the mongo logs were throwing all sorts of authentication and service-down errors because of the cross talk).
I was new to ClusterRoleBindings and it took me a while to realise they are cluster-level, which I know seems obvious (despite needing to supply a namespace to get kubectl to accept it), but this was causing mine to get overwritten between namespaces. Make sure you create unique names per environment, so a deployment in one namespace doesn't mess up another: the ClusterRoleBinding gets overwritten if the names aren't unique within the cluster.
MONGODB_DATABASE needs to be set to 'admin' for authentication to work.
I was following this example to configure authentication, which depended on a sleep 5 in the hope that the daemon was up and running before attempting to create the adminUser. I found this wasn't long enough, so I initially increased it, since failure to create the adminUser obviously led to connection-refused issues. I later replaced the sleep with a while loop that pings mongo until the daemon responds, which is more foolproof.
If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
You need at least 3 nodes in a Mongo cluster!
The YAML below should spin up and configure a Mongo replica set in Kubernetes with persistent storage and authentication enabled.
If you connect into the pod...
kubectl exec -ti mongo-db-0 --namespace somenamespace /bin/bash
The mongo shell is installed in the image, so you should be able to connect to the replica set with...
mongo mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0
You should see either an rs0:PRIMARY> or rs0:SECONDARY> prompt, indicating the pods are in a Mongo replica set. Use rs.conf() from the PRIMARY to verify that.
#Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
#so we can enable authentication
apiVersion: v1
data:
#echo -n "mongoadmin" | base64
init.userid: bW9uZ29hZG1pbg==
#echo -n "adminpassword" | base64
init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
name: mongo-init-credentials
namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here
apiVersion: v1
data:
#echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
name: mongo-key
namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it Pod List role
# note this is a ClusterRoleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
name: mongo-serviceaccount
namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: mongo-somenamespace-serviceaccount-view
namespace: somenamespace
subjects:
- kind: ServiceAccount
name: mongo-serviceaccount
namespace: somenamespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-viewer
namespace: somenamespace
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list"]
---
#Create a Storage Class for Google Container Engine
#Note fstype: xfs isn't supported by GCE yet and the
#Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
namespace: somenamespace
name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
allowVolumeExpansion: true
---
#Headless Service for StatefulSets
apiVersion: v1
kind: Service
metadata:
namespace: somenamespace
name: mongo-db
labels:
name: mongo-db
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
namespace: somenamespace
name: mongo-db
spec:
serviceName: mongo-db
replicas: 3
template:
metadata:
labels:
# Labels MUST match MONGO_SIDECAR_POD_LABELS
# and MUST differentiate between other mongo
# instances in the CLUSTER not just the namespace
# as the sidecar will search the entire cluster
# for something to configure
app: mongo
environment: somenamespace
spec:
#Run the Pod using the service account
serviceAccountName: mongo-serviceaccount
terminationGracePeriodSeconds: 10
#Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongo
topologyKey: "kubernetes.io/hostname"
containers:
- name: mongo
image: mongo:4.0.12
command:
#Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
#in order to pass the new admin user id and password in
- /bin/sh
- -c
- >
if [ -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with runtime settings (clusterAuthMode)"
#ensure wiredTigerCacheSize is set within the size of the containers memory limit
mongod --wiredTigerCacheSizeGB 0.5 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
else
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with setup setting (authMode)"
mongod --auth;
fi;
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- >
if [ ! -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- no Admin-user.lock file found yet"
#replaced simple sleep, with ping and test.
while (! mongo --eval "db.adminCommand('ping')"); do sleep 10; echo "KUBERNETES LOG $HOSTNAME - waiting another 10 seconds for mongo to start" >> /data/db/configlog.txt; done;
touch /data/db/admin-user.lock
if [ "$HOSTNAME" = "mongo-db-0" ]; then
echo "KUBERNETES LOG $HOSTNAME- creating admin user ${MONGODB_USERNAME}"
mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
fi;
echo "KUBERNETES LOG $HOSTNAME-shutting mongod down for final restart"
mongod --shutdown;
fi;
env:
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
ports:
- containerPort: 27017
livenessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
readinessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
resources:
requests:
memory: "350Mi"
cpu: 0.05
limits:
memory: "1Gi"
cpu: 0.1
volumeMounts:
- name: mongo-key
mountPath: "/etc/secrets-volume"
readOnly: true
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
# Sidecar searches for any POD in the CLUSTER with these labels
# not just the namespace..so we need to ensure the POD is labelled
# to differentiate it from other PODS in different namespaces
- name: MONGO_SIDECAR_POD_LABELS
value: "app=mongo,environment=somenamespace"
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
#don't be fooled by this..it's not your DB that
#needs specifying, it's the admin DB as that
#is what you authenticate against with mongo.
- name: MONGODB_DATABASE
value: admin
volumes:
- name: mongo-key
secret:
defaultMode: 0400
secretName: mongo-key
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
Supposing you created a secret:
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
Here is a snippet to get a value from a Secret in a Kubernetes YAML file:
env:
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
I found that this issue is related to a bug in docker-entrypoint.sh that occurs when numactl is detected on the node.
Try this simplified code (which moves numactl out of the way):
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment
labels:
app: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:4.0.0
command:
- /bin/bash
- -c
# mv is not needed for later versions e.g. 3.4.19 and 4.1.7
- mv /usr/bin/numactl /usr/bin/numactl1 && source docker-entrypoint.sh mongod
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "xxxxx"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "xxxxx"
ports:
- containerPort: 27017
I raised an issue at:
https://github.com/docker-library/mongo/issues/330
Hopefully it will be fixed at some point so no need for the hack :o)
Adding this resolved the issue for me:
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
value: "true"
Seems like the default is set to "false".
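For context, ME_CONFIG_MONGODB_ENABLE_ADMIN is a mongo-express setting, so it belongs in the env block of the mongo-express container, roughly like this (container name and image tag are assumptions):
containers:
  - name: mongo-express
    image: mongo-express
    env:
      - name: ME_CONFIG_MONGODB_ENABLE_ADMIN
        value: "true"   # the default appears to be "false"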
If you are using Kubernetes, you can check the reason for the failure by using the command:
kubectl logs <pod name>
This is what worked for me:
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: my-mongodb-pod
image: mongo:4.4.3
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "someMongoUser"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "somePassword"
- name: MONGO_REPLICA_SET
value: "myReplicaSet"
- name: MONGO_PORT
value: "27017"
# Note, to disable non-auth in mongodb is kind of complicated[4]
# Note, the `_getEnv` function is internal and undocumented[3].
#
# 1. https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
# 2. https://stackoverflow.com/a/54726708/2768067
# 3. https://stackoverflow.com/a/67037065/2768067
# 4. https://www.mongodb.com/features/mongodb-authentication
command:
- /bin/sh
- -c
- >
set -x # print command been ran
set -e # fail if any command fails
env;
ps auxwww;
printf "\n\t mongod:: start in the background \n\n";
mongod \
--port="${MONGO_PORT}" \
--bind_ip_all \
--replSet="${MONGO_REPLICA_SET}" \
--quiet > /tmp/mongo.log.json 2>&1 &
sleep 9;
ps auxwww;
printf "\n\t mongod: set master \n\n";
mongo --port "${MONGO_PORT}" --eval '
rs.initiate({});
sleep(3000);';
printf "\n\t mongod: add user \n\n";
mongo --port "${MONGO_PORT}" --eval '
db.getSiblingDB("admin").createUser({
user: _getEnv("MONGO_INITDB_ROOT_USERNAME"),
pwd: _getEnv("MONGO_INITDB_ROOT_PASSWORD"),
roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
});';
printf "\n\t mongod: shutdown \n\n";
mongod --shutdown;
sleep 3;
ps auxwww;
printf "\n\t mongod: restart with authentication \n\n";
mongod \
--auth \
--port="${MONGO_PORT}" \
--bind_ip_all \
--replSet="${MONGO_REPLICA_SET}" \
--verbose=v