How to set chown in kubernetes deployment persistently? - mongodb

I'm running a mongodb instance as a kubernetes pod in a single-node cluster (a bare-metal Ubuntu machine). The volume is configured as ReadWriteOnce, since the mongodb pod is only accessed by pods on that one node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mongo
    spec:
      hostname: mongo
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: data
        - name: restore
          persistentVolumeClaim:
            claimName: restore
      containers:
        - name: mongo
          image: mongo:4.4.14
          args: ["--auth"]
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 27017
          volumeMounts:
            - mountPath: /data/db
              name: data
            - mountPath: /restore
              name: restore
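The data and restore claims themselves are ordinary ReadWriteOnce claims, roughly like this (the size here is a placeholder, not the real value):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
with an equivalent claim named restore.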
But from time to time I cannot run commands like inserting documents into a non-existing collection or running mongodump.
I then get the error MongoServerError: 1: Operation not permitted. This is caused by an ownership problem: ls -ld /data/db/ returns
drwxr-sr-x 4 nobody 4294967294 16384 Jun 28 18:19 /data/db/
I can fix the problem by running
chown mongodb:mongodb /data/db
But after some time the ownership changes back, the same problem reappears, and I have to rerun chown mongodb:mongodb /data/db.
I tried to set
securityContext:
  runAsUser: 1000
  fsGroup: 2000
But then the mongodb pod is failing:
{"t":{"$date":"2022-07-03T10:09:24.379+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-07-03T10:09:24.383+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongodb"}}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.14","gitVersion":"0b0843af97c3ec9d2c0995152d96d2aad725aab7","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2022-07-03T10:09:24.384+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"}}}}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"E", "c":"STORAGE", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db"}}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2022-07-03T10:09:24.386+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
I also tried
initContainers:
  - name: mongodb-init
    image: mongo:4.4.14
    command: ["sh", "-c", "chown 999:999 /data/db"]
    volumeMounts:
      - name: data
        mountPath: /data/db
But after some time I again get:
drwxr-sr-x 4 nobody 4294967294 16384 Jun 28 18:19 /data/db/
I do not understand what is causing this behaviour.

If you set securityContext, then according to the official documentation:
the runAsUser field specifies that for any Containers in the Pod, all processes run with user ID 1000.
so there is no risk of a mismatch between different processes in the container.
Perhaps you could give the pod root access (although this is not recommended) by setting runAsUser to 0, just to see whether the security context can fix the issue? It should.
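For that test, the pod-level securityContext (under template.spec) would look something like this; purely a diagnostic sketch, not a recommended production setting:
securityContext:
  runAsUser: 0
  runAsGroup: 0
  fsGroup: 0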

It might be useful to run ls -n /data/db to see which numeric uid owns the directory when it's failing; nobody is not useful information.
You set securityContext to runAsUser: 1000, but in the initContainer you set ownership to 999. That won't work.
What user is the container running as? Run id.
What user is the mongod process using? There is no assigned user in the Deployment spec, so I imagine it's a MongoDB config setting. (The docs say the uid is 999.)
Figure out which user id needs to own that directory. Then a possible solution is to mount the path /data with the storage provisioner and create the path /data/db with an initContainer for the MongoDB deployment, using the correct uid. That way the directories will have the correct ownership, as in the sketch below.
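A sketch of that approach, assuming the official mongo image runs mongod as uid/gid 999 (the volume name data comes from the question; the busybox image and the mount layout are my assumptions):
initContainers:
  - name: init-dbpath
    image: busybox:1.36
    # create db/ inside the volume mounted at /data and hand it to uid/gid 999
    command: ["sh", "-c", "mkdir -p /data/db && chown -R 999:999 /data/db"]
    volumeMounts:
      - name: data
        mountPath: /data
containers:
  - name: mongo
    image: mongo:4.4.14
    args: ["--auth"]
    volumeMounts:
      - name: data
        mountPath: /data   # the default dbPath /data/db now lives inside the volume
Because the initContainer re-creates and re-chowns the directory on every pod start, the ownership is reapplied even if the provisioner resets it.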

Related

MongoDB on Docker keeps waiting for connection

I'm trying to bring up mongo and mongo-express with Docker, but the DB won't initialize.
I'm following an online course and have just started with Docker and Mongo.
The teacher runs the same code and it works normally.
Could the problem be my machine? I am using WSL and VSCode Remote for the course project.
Here is the docker-compose:
services:
  app:
    build: .
    ports:
      - 3000:3000
    volumes:
      - .:/home/node/app
    depends_on:
      - mongodb
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDDB_ROOT_USERNAME=root
      - MONGO_INITDDB_ROOT_PASSWORD=root
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/test
      retries: 5
      interval: 15s
      start_period: 30s
  mongo-express:
    image: mongo-express
    restart: unless-stopped
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongodb
      - ME_CONFIG_MONGODB_AUTH_USERNAME=root
      - ME_CONFIG_MONGODB_AUTH_PASSWORD=root
      - ME_CONFIG_MONGODB_ADMINUSERNAME=root
      - ME_CONFIG_MONGODB_ADMINPASSWORD=root
    depends_on:
      mongodb:
        condition: service_healthy
And here is the log:
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.267+00:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"-","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":17},"outgoing":{"minWireVersion":6,"maxWireVersion":17},"isInternalClient":true}}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.270+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.270+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.272+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","namespace":"config.tenantMigrationDonors"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.272+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","namespace":"config.tenantMigrationRecipients"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.272+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"ShardSplitDonorService","namespace":"config.tenantSplitDonors"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.272+00:00"},"s":"I", "c":"CONTROL", "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.273+00:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"434b144655c6"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.273+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"6.0.2","gitVersion":"94fb7dfc8b974f1f5343e7ea394d0d9deedba50e","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.273+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.273+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.274+00:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.274+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:37.275+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=1301M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,remove=true,path=journal,compressor=snappy),builtin_extension_config=(zstd=(compression_level=6)),file_manager=(close_idle_time=600,close_scan_interval=10,close_handle_minimum=2000),statistics_log=(wait=0),json_output=(error,message),verbose=[recovery_progress:1,checkpoint_progress:1,compact_progress:1,backup:0,checkpoint:0,compact:0,evict:0,history_store:0,recovery:0,rts:0,salvage:0,tiered:0,timestamp:0,transaction:0,verify:0,log:0],"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.070+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":795}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.071+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.078+00:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.078+00:00"},"s":"W", "c":"CONTROL", "id":22178, "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'","tags":["startupWarnings"]}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.078+00:00"},"s":"W", "c":"CONTROL", "id":5123300, "ctx":"initandlisten","msg":"vm.max_map_count is too low","attr":{"currentValue":65530,"recommendedMinimum":1677720,"maxConns":838860},"tags":["startupWarnings"]}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.081+00:00"},"s":"I", "c":"NETWORK", "id":4915702, "ctx":"initandlisten","msg":"Updated wire specification","attr":{"oldSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":17},"outgoing":{"minWireVersion":6,"maxWireVersion":17},"isInternalClient":true},"newSpec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":17},"incomingInternalClient":{"minWireVersion":17,"maxWireVersion":17},"outgoing":{"minWireVersion":17,"maxWireVersion":17},"isInternalClient":true}}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.081+00:00"},"s":"I", "c":"REPL", "id":5853300, "ctx":"initandlisten","msg":"current featureCompatibilityVersion value","attr":{"featureCompatibilityVersion":"6.0","context":"startup"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.081+00:00"},"s":"I", "c":"STORAGE", "id":5071100, "ctx":"initandlisten","msg":"Clearing temp directory"}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.083+00:00"},"s":"I", "c":"CONTROL", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.083+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.089+00:00"},"s":"I", "c":"REPL", "id":6015317, "ctx":"initandlisten","msg":"Setting new configuration state","attr":{"newState":"ConfigReplicationDisabled","oldState":"ConfigPreStart"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.089+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.093+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.093+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
nestjs-api-testes-mongodb-1 | {"t":{"$date":"2022-10-21T19:01:38.093+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
nestjs-api-testes-app-1 |
nestjs-api-testes-app-1 | up to date in 1m
nestjs-api-testes-app-1 |
nestjs-api-testes-app-1 | 92 packages are looking for funding
nestjs-api-testes-app-1 | run `npm fund` for details
container for service "mongodb" is unhealthy
Thanks in advance.
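Two things stand out in the compose file. First, the mongod log ends with "Waiting for connections", so the server itself starts fine; it is the healthcheck that keeps failing. The image resolved to MongoDB 6.0.2, which no longer ships the legacy mongo shell, so the healthcheck command can never succeed; mongosh is its replacement there, along these lines:
healthcheck:
  test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017/test --quiet
  retries: 5
  interval: 15s
  start_period: 30s
Second, note that the variable names MONGO_INITDDB_ROOT_USERNAME and MONGO_INITDDB_ROOT_PASSWORD are misspelled (double D), so no root user is being created.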

mongodb always crashes with docker-compose file

This is my docker-compose file
version: "3.1"
services:
  mongo:
    image: mongo:focal
    user: 1000:1000
    networks:
      mynet:
        ipv4_address: "172.19.0.15"
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME="root"
      - MONGO_INITDB_ROOT_PASSWORD="mongo#123"
    command: mongod --verbose --dbpath=/data/db
    volumes:
      - /data/mongo:/data/db
  mongo-admin:
    image: mongo-express:0.54.0
    networks:
      mynet:
        ipv4_address: "172.19.0.16"
    ports:
      - 8081:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME="root"
      - ME_CONFIG_MONGODB_ADMINPASSWORD="mongo#123"
      - ME_CONFIG_MONGODB_URL="mongodb://root:mongo#123#mongo:27017/"
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: "172.19.0.0/24"
          gateway: "172.19.0.1"
Whenever I run it, it errors out:
Attaching to environment_mongo-admin_1, environment_mongo_1
mongo-admin_1 | Waiting for mongo:27017...
mongo-admin_1 | /docker-entrypoint.sh: connect: Connection refused
mongo-admin_1 | /docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Connection refused
mongo_1 | about to fork child process, waiting until server is ready for connections.
mongo_1 | forked process: 20
mongo_1 |
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.185+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"-","msg":"***** SERVER RESTARTED *****"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.187+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"-","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.200+00:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"main","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.201+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.202+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.204+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.204+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.205+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"main","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.205+00:00"},"s":"D1", "c":"NETWORK", "id":22940, "ctx":"main","msg":"file descriptor and connection resource limits","attr":{"hard":1048576,"soft":1048576,"conn":838860}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.206+00:00"},"s":"I", "c":"CONTROL", "id":5945603, "ctx":"main","msg":"Multi threading initialized"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.207+00:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":20,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"e2237d5770b6"}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.207+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.8","gitVersion":"c87e1c23421bf79614baf500fda6622bd90f674e","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.207+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.208+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"127.0.0.1","port":27017,"tls":{"mode":"disabled"}},"processManagement":{"fork":true,"pidFilePath":"/tmp/docker-entrypoint-temp-mongod.pid"},"storage":{"dbPath":"/data/db"},"systemLog":{"destination":"file","logAppend":true,"path":"/proc/1/fd/1","verbosity":1}}}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.212+00:00"},"s":"D1", "c":"NETWORK", "id":22940, "ctx":"initandlisten","msg":"file descriptor and connection resource limits","attr":{"hard":1048576,"soft":1048576,"conn":838860}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.214+00:00"},"s":"D1", "c":"EXECUTOR", "id":23104, "ctx":"OCSPManagerHTTP-0","msg":"Starting thread","attr":{"threadName":"OCSPManagerHTTP-0","poolName":"OCSPManagerHTTP"}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.214+00:00"},"s":"D1", "c":"-", "id":23074, "ctx":"initandlisten","msg":"User assertion","attr":{"error":"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db","file":"src/mongo/db/storage/storage_engine_init.cpp","line":214}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.215+00:00"},"s":"E", "c":"CONTROL", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db"}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.215+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.216+00:00"},"s":"D1", "c":"-", "id":23074, "ctx":"initandlisten","msg":"User assertion","attr":{"error":"NotWritablePrimary: not primary so can't step down","file":"src/mongo/db/repl/replication_coordinator_impl.cpp","line":2589}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.216+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.216+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.217+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.217+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.219+00:00"},"s":"I", "c":"CONTROL", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.219+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.220+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.222+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.224+00:00"},"s":"I", "c":"ASIO", "id":22582, "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.224+00:00"},"s":"I", "c":"COMMAND", "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.224+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.225+00:00"},"s":"I", "c":"CONTROL", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.225+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.225+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.226+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.226+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.226+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.227+00:00"},"s":"D1", "c":"EXECUTOR", "id":23105, "ctx":"OCSPManagerHTTP-0","msg":"Shutting down thread","attr":{"threadName":"OCSPManagerHTTP-0","poolName":"OCSPManagerHTTP"}}
mongo_1 | {"t":{"$date":"2022-06-01T09:51:29.227+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
mongo_1 | ERROR: child process failed, exited with 100
mongo_1 | To see additional information in this output, start without the "--fork" option.
environment_mongo_1 exited with code 100
mongo-admin_1 | Wed Jun 1 09:51:29 UTC 2022 retrying to connect to mongo:27017 (2/5)
mongo-admin_1 | /docker-entrypoint.sh: line 14: mongo: Try again
mongo-admin_1 | /docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
mongo-admin_1 | Wed Jun 1 09:51:35 UTC 2022 retrying to connect to mongo:27017 (3/5)
mongo-admin_1 | /docker-entrypoint.sh: line 14: mongo: Try again
mongo-admin_1 | /docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
mongo-admin_1 | Wed Jun 1 09:51:41 UTC 2022 retrying to connect to mongo:27017 (4/5)
^CGracefully stopping... (press Ctrl+C again to force)
What is the issue in my docker-compose file for mongodb?
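The line that actually kills the server is "IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db". Since the compose file both runs the container as user: 1000:1000 and bind-mounts the host directory /data/mongo over /data/db, a likely cause (an assumption, since the host-side permissions aren't shown) is that /data/mongo isn't writable by uid 1000. A quick check and fix on the host:
# show the numeric owner of the host directory backing /data/db
ls -ldn /data/mongo
# make it writable by the uid the container runs as
sudo chown -R 1000:1000 /data/mongo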

MongoDB config file is ignored after setting replication options

I had MongoDB running fine on Ubuntu 20.04, but after adding replication options to mongod.conf the config file seems to be rejected. This results in errors, because mongod attempts to write to the default storage path, which doesn't exist.
Here is my conf file:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27018
  bindIp: 127.0.0.1

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

#security:
#operationProfiling:

replication:
  replSetName: rs0

#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:
And here are the errors:
{"t":{"$date":"2022-03-12T16:05:56.861+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"thread1","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2022-03-12T16:05:56.861+00:00"},"s":"I", "c":"NETWORK", "id":4915701, "ctx":"thread1","msg":"Initialized wire specification","attr":{"spec":{"incomingExternalClient":{"minWireVersion":0,"maxWireVersion":13},"incomingInternalClient":{"minWireVersion":0,"maxWireVersion":13},"outgoing":{"minWireVersion":0,"maxWireVersion":13},"isInternalClient":true}}}
{"t":{"$date":"2022-03-12T16:05:56.864+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"thread1","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-03-12T16:05:56.864+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"thread1","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2022-03-12T16:05:56.865+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"thread1","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2022-03-12T16:05:56.865+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"thread1","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationDonorService","ns":"config.tenantMigrationDonors"}}
{"t":{"$date":"2022-03-12T16:05:56.865+00:00"},"s":"I", "c":"REPL", "id":5123008, "ctx":"thread1","msg":"Successfully registered PrimaryOnlyService","attr":{"service":"TenantMigrationRecipientService","ns":"config.tenantMigrationRecipients"}}
{"t":{"$date":"2022-03-12T16:05:56.865+00:00"},"s":"I", "c":"CONTROL", "id":5945603, "ctx":"thread1","msg":"Multi threading initialized"}
{"t":{"$date":"2022-03-12T16:05:56.866+00:00"},"s":"I", "c":"CONTROL", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":74756,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"Gabriel-PC-Ubuntu"}}
{"t":{"$date":"2022-03-12T16:05:56.866+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"5.0.6","gitVersion":"212a8dbb47f07427dae194a9c75baec1d81d9259","openSSLVersion":"OpenSSL 1.1.1f 31 Mar 2020","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu2004","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2022-03-12T16:05:56.866+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"20.04"}}}
{"t":{"$date":"2022-03-12T16:05:56.866+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{}}}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"E", "c":"CONTROL", "id":20557, "ctx":"initandlisten","msg":"DBException in initAndListen, terminating","attr":{"error":"NonExistentPath: Data directory /data/db not found. Create the missing directory or specify another path using (1) the --dbpath command line option, or (2) by adding the 'storage.dbPath' option in the configuration file."}}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"initandlisten","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":15000}}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"initandlisten","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"initandlisten","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"initandlisten","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"initandlisten","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"CONTROL", "id":4784906, "ctx":"initandlisten","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"initandlisten","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"initandlisten","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"initandlisten","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"ASIO", "id":22582, "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"COMMAND", "id":4784923, "ctx":"initandlisten","msg":"Shutting down the ServiceEntryPoint"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"initandlisten","msg":"Shutting down free monitoring"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"CONTROL", "id":4784927, "ctx":"initandlisten","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"CONTROL", "id":4784928, "ctx":"initandlisten","msg":"Shutting down the TTL monitor"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"CONTROL", "id":4784929, "ctx":"initandlisten","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"initandlisten","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"initandlisten","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"initandlisten","msg":"Now exiting"}
{"t":{"$date":"2022-03-12T16:05:56.867+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"initandlisten","msg":"Shutting down","attr":{"exitCode":100}}
Any help would be greatly appreciated!
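One observation from the log (a guess, since the exact launch command isn't shown): the failing startup prints "Options set by command line":{"options":{}} and uses the compiled-in defaults dbPath /data/db and port 27017, while the config file specifies /var/lib/mongodb and port 27018. So this mongod was started with no options at all, rather than with a rejected config file, which typically happens when mongod is launched by hand instead of through the service:
# either start the packaged service, which passes /etc/mongod.conf itself
sudo systemctl restart mongod
# or, when launching mongod manually, point it at the config file explicitly
mongod --config /etc/mongod.conf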

Getting U_STRINGPREP_PROHIBITED_ERROR when trying to deploy MongoDB on Kubernetes

I'm trying to set up MongoDB in a Kubernetes cluster on my local machine using Minikube, but I'm getting the following error. (I tried multiple MongoDB images: latest, 5.0.0, 4.0, etc.; the problem persisted.)
My mongodb-secret.yaml is as follows:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWUNCg==
  mongo-root-password: cGFzc3dvcmQNCg==
My mongodb-deployment.yaml file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    apps: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:latest
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: mongo-root-password
The output when I run kubectl get all:
NAME                                      READY   STATUS             RESTARTS   AGE
pod/mongodb-deployment-6c587ddcbb-lnk2q   0/1     CrashLoopBackOff   6          7m56s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   6d

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongodb-deployment   0/1     1            0           14m

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/mongodb-deployment-59ff8f9fd7   0         0         0       9m1s
replicaset.apps/mongodb-deployment-6c587ddcbb   1         1         0       7m56s
replicaset.apps/mongodb-deployment-7fc5cbcf9c   0         0         0       10m
replicaset.apps/mongodb-deployment-8f6675bc5    0         0         0       14m
The output when I run kubectl logs mongodb-deployment-6c587ddcbb-lnk2q:
about to fork child process, waiting until server is ready for connections.
forked process: 28
{"t":{"$date":"2021-06-16T08:15:33.738+00:00"},"s":"I", "c":"CONTROL", "id":20698, "ctx":"main","msg":"***** SERVER RESTARTED *****"}
{"t":{"$date":"2021-06-16T08:15:33.740+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-06-16T08:15:33.741+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-06-16T08:15:33.741+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2021-06-16T08:15:33.741+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":28,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"mongodb-deployment-6c587ddcbb-lnk2q"}}
{"t":{"$date":"2021-06-16T08:15:33.741+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.6","gitVersion":"72e66213c2c3eab37d9358d5e78ad7f5c1d0d0d7","openSSLVersion":"OpenSSL 1.1.1 11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2021-06-16T08:15:33.741+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
{"t":{"$date":"2021-06-16T08:15:33.741+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"127.0.0.1","port":27017,"tls":{"mode":"disabled"}},"processManagement":{"fork":true,"pidFilePath":"/tmp/docker-entrypoint-temp-mongod.pid"},"systemLog":{"destination":"file","logAppend":true,"path":"/proc/1/fd/1"}}}}
{"t":{"$date":"2021-06-16T08:15:33.741+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2021-06-16T08:15:33.742+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=12285M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2021-06-16T08:15:33.855+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1623831333:855500][28:0x7f583f1b6ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
{"t":{"$date":"2021-06-16T08:15:33.855+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1623831333:855566][28:0x7f583f1b6ac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
{"t":{"$date":"2021-06-16T08:15:33.865+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":123}}
{"t":{"$date":"2021-06-16T08:15:33.865+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-06-16T08:15:33.886+00:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}}
{"t":{"$date":"2021-06-16T08:15:33.887+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2021-06-16T08:15:33.894+00:00"},"s":"W", "c":"CONTROL", "id":22120, "ctx":"initandlisten","msg":"Access control is not enabled for the database. Read and write access to data and configuration is unrestricted","tags":["startupWarnings"]}
{"t":{"$date":"2021-06-16T08:15:33.894+00:00"},"s":"W", "c":"CONTROL", "id":22178, "ctx":"initandlisten","msg":"/sys/kernel/mm/transparent_hugepage/enabled is 'always'. We suggest setting it to 'never'","tags":["startupWarnings"]}
{"t":{"$date":"2021-06-16T08:15:33.894+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"admin.system.version","uuidDisposition":"provided","uuid":{"uuid":{"$uuid":"5118c0eb-1d88-4b4a-837c-ec78ce97b710"}},"options":{"uuid":{"$uuid":"5118c0eb-1d88-4b4a-837c-ec78ce97b710"}}}}
{"t":{"$date":"2021-06-16T08:15:33.908+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"admin.system.version","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-06-16T08:15:33.908+00:00"},"s":"I", "c":"COMMAND", "id":20459, "ctx":"initandlisten","msg":"Setting featureCompatibilityVersion","attr":{"newVersion":"4.4"}}
{"t":{"$date":"2021-06-16T08:15:33.908+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2021-06-16T08:15:33.909+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"initandlisten","msg":"createCollection","attr":{"namespace":"local.startup_log","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"5e09a18c-0532-4fad-8981-283eb3f2f429"}},"options":{"capped":true,"size":10485760}}}
{"t":{"$date":"2021-06-16T08:15:33.924+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"initandlisten","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"local.startup_log","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-06-16T08:15:33.924+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2021-06-16T08:15:33.925+00:00"},"s":"I", "c":"CONTROL", "id":20712, "ctx":"LogicalSessionCacheReap","msg":"Sessions collection is not set up; waiting until next sessions reap interval","attr":{"error":"NamespaceNotFound: config.system.sessions does not exist"}}
{"t":{"$date":"2021-06-16T08:15:33.925+00:00"},"s":"I", "c":"STORAGE", "id":20320, "ctx":"LogicalSessionCacheRefresh","msg":"createCollection","attr":{"namespace":"config.system.sessions","uuidDisposition":"generated","uuid":{"uuid":{"$uuid":"8149a51f-47fd-4180-a6a4-310574dfae9b"}},"options":{}}}
{"t":{"$date":"2021-06-16T08:15:33.925+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2021-06-16T08:15:33.926+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}}
{"t":{"$date":"2021-06-16T08:15:33.926+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
child process started successfully, parent exiting
{"t":{"$date":"2021-06-16T08:15:33.948+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"config.system.sessions","index":"_id_","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-06-16T08:15:33.948+00:00"},"s":"I", "c":"INDEX", "id":20345, "ctx":"LogicalSessionCacheRefresh","msg":"Index build: done building","attr":{"buildUUID":null,"namespace":"config.system.sessions","index":"lsidTTLIndex","commitTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-06-16T08:15:33.961+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:55144","connectionId":1,"connectionCount":1}}
{"t":{"$date":"2021-06-16T08:15:33.961+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn1","msg":"client metadata","attr":{"remote":"127.0.0.1:55144","client":"conn1","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.6"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"18.04"}}}}
{"t":{"$date":"2021-06-16T08:15:33.964+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn1","msg":"Connection ended","attr":{"remote":"127.0.0.1:55144","connectionId":1,"connectionCount":0}}
{"t":{"$date":"2021-06-16T08:15:34.003+00:00"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.0.0.1:55146","connectionId":2,"connectionCount":1}}
{"t":{"$date":"2021-06-16T08:15:34.004+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn2","msg":"client metadata","attr":{"remote":"127.0.0.1:55146","client":"conn2","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.6"},"os":{"type":"Linux","name":"Ubuntu","architecture":"x86_64","version":"18.04"}}}}
uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR :
_getErrorWithCode#src/mongo/shell/utils.js:25:13
DB.prototype.createUser#src/mongo/shell/db.js:1386:11
#(shell):1:1
Error saving history file: FileOpenFailed Unable to open() file /home/mongodb/.dbshell: No such file or directory
{"t":{"$date":"2021-06-16T08:15:34.016+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"127.0.0.1:55146","connectionId":2,"connectionCount":0}}
What am I missing?
Checking your secret values, something's wrong:
$ echo dXNlcm5hbWUNCg== | base64 --decode | cat -e
username^M$
$ echo cGFzc3dvcmQNCg== | base64 --decode | cat -e
password^M$
Try re-creating your secret without those non-printable characters (the \r as well as the \n).
Maybe, instead of pre-encoding your secret, you could try:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
stringData:
  mongo-root-username: username
  mongo-root-password: password
I got the same error, so I converted the username and password using the commands below and it worked.
Make sure you use exactly this form when converting to base64 (the -n flag keeps echo from appending a newline):
echo -n 'username' | base64
echo -n 'password' | base64
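For reference, with those values the data form of the Secret looks like this (the base64 strings are simply the output of the two commands above):
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=
  mongo-root-password: cGFzc3dvcmQ=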
I faced some issues with the Secrets configuration, and the solutions provided above did not help in my case.
The quick fix below worked for me:
I removed the configuration for the env attribute, then ran kubectl apply -f configuration_filename.yaml to apply the change.
Then I added env back and ran kubectl apply -f configuration_filename.yaml again.
Hope it works.
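In other words, the sequence is (configuration_filename.yaml stands for whatever your deployment manifest is called):
# 1. remove or comment out the env: block, then apply
kubectl apply -f configuration_filename.yaml
# 2. restore the env: block and apply once more
kubectl apply -f configuration_filename.yaml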

YAML provided on official Mongo Docker Image page doesn't work on CentOS

The YAML file below (originally provided here) works in Windows and macOS docker environments. But when I run it in a CentOS environment, mongo-express can't connect to the MongoDB service, and nothing shows up in the browser at localhost:8081. I suspect this is a DNS issue, with the mongo hostname not being mapped to the respective container IP address. How can I fix this?
# Use root/example as user/password credentials
version: '3.1'
services:
  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
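One way to test that hypothesis (hedged: on CentOS, firewalld blocking traffic on the docker bridge is a frequently reported culprit, but the log alone doesn't prove it):
# check that mongo-express can resolve and reach the mongo service by name
docker-compose exec mongo-express ping -c 1 mongo
# if that fails, trusting the docker bridge in firewalld is worth a try (CentOS-specific assumption)
sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --reload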
Log from the mongo service:
{"t":{"$date":"2021-01-01T00:15:56.059+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
killing process with pid: 29
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"s":"I", "c":"CONTROL", "id":23377, "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"s":"I", "c":"CONTROL", "id":23378, "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":83,"uid":999}}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"s":"I", "c":"CONTROL", "id":23381, "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
{"t":{"$date":"2021-01-01T00:15:56.063+00:00"},"s":"I", "c":"REPL", "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"CONTROL", "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"NETWORK", "id":23017, "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"-", "id":20520, "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"STORAGE", "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
{"t":{"$date":"2021-01-01T00:15:56.064+00:00"},"s":"I", "c":"STORAGE", "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"REPL", "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"REPL", "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"-", "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"-", "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":3}}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"COMMAND", "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"REPL", "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"INDEX", "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"REPL", "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"REPL", "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"CONTROL", "id":20609, "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2021-01-01T00:15:56.065+00:00"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2021-01-01T00:15:56.067+00:00"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2021-01-01T00:15:56.068+00:00"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":6}}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"}
{"t":{"$date":"2021-01-01T00:15:56.074+00:00"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}
MongoDB init process complete; ready for start up.
{"t":{"$date":"2021-01-01T00:15:57.181+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2021-01-01T00:15:57.185+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2021-01-01T00:15:57.185+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"973e354c35f7"}}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.2","gitVersion":"15e73dc5738d2278b688f8929aee605fe4279b0e","openSSLVersion":"OpenSSL 1.1.1 11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
{"t":{"$date":"2021-01-01T00:15:57.186+00:00"},"s":"I", "c":"CONTROL", "id":21951, "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"},"security":{"authorization":"enabled"}}}}
{"t":{"$date":"2021-01-01T00:15:57.189+00:00"},"s":"I", "c":"STORAGE", "id":22270, "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
{"t":{"$date":"2021-01-01T00:15:57.189+00:00"},"s":"I", "c":"STORAGE", "id":22297, "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
{"t":{"$date":"2021-01-01T00:15:57.189+00:00"},"s":"I", "c":"STORAGE", "id":22315, "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=6656M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
{"t":{"$date":"2021-01-01T00:15:57.861+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1609460157:861019][1:0x7f184105dac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2021-01-01T00:15:57.969+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1609460157:969647][1:0x7f184105dac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2021-01-01T00:15:58.064+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1609460158:64534][1:0x7f184105dac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Main recovery loop: starting at 1/29568 to 2/256"}}
{"t":{"$date":"2021-01-01T00:15:58.175+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1609460158:175353][1:0x7f184105dac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 1 through 2"}}
{"t":{"$date":"2021-01-01T00:15:58.249+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1609460158:249021][1:0x7f184105dac0], txn-recover: [WT_VERB_RECOVERY_PROGRESS] Recovering log 2 through 2"}}
{"t":{"$date":"2021-01-01T00:15:58.301+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1609460158:301066][1:0x7f184105dac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global recovery timestamp: (0, 0)"}}
{"t":{"$date":"2021-01-01T00:15:58.301+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"initandlisten","msg":"WiredTiger message","attr":{"message":"[1609460158:301126][1:0x7f184105dac0], txn-recover: [WT_VERB_RECOVERY | WT_VERB_RECOVERY_PROGRESS] Set global oldest timestamp: (0, 0)"}}
{"t":{"$date":"2021-01-01T00:15:58.317+00:00"},"s":"I", "c":"STORAGE", "id":4795906, "ctx":"initandlisten","msg":"WiredTiger opened","attr":{"durationMillis":1128}}
{"t":{"$date":"2021-01-01T00:15:58.317+00:00"},"s":"I", "c":"RECOVERY", "id":23987, "ctx":"initandlisten","msg":"WiredTiger recoveryTimestamp","attr":{"recoveryTimestamp":{"$timestamp":{"t":0,"i":0}}}}
{"t":{"$date":"2021-01-01T00:15:58.318+00:00"},"s":"I", "c":"STORAGE", "id":4366408, "ctx":"initandlisten","msg":"No table logging settings modifications are required for existing WiredTiger tables","attr":{"loggingEnabled":true}}
{"t":{"$date":"2021-01-01T00:15:58.320+00:00"},"s":"I", "c":"STORAGE", "id":22262, "ctx":"initandlisten","msg":"Timestamp monitor starting"}
{"t":{"$date":"2021-01-01T00:15:58.321+00:00"},"s":"I", "c":"CONTROL", "id":22161, "ctx":"initandlisten","msg":"You are running in OpenVZ which can cause issues on versions of RHEL older than RHEL6","tags":["startupWarnings"]}
{"t":{"$date":"2021-01-01T00:15:58.324+00:00"},"s":"I", "c":"STORAGE", "id":20536, "ctx":"initandlisten","msg":"Flow Control is enabled on this deployment"}
{"t":{"$date":"2021-01-01T00:15:58.325+00:00"},"s":"I", "c":"FTDC", "id":20625, "ctx":"initandlisten","msg":"Initializing full-time diagnostic data capture","attr":{"dataDirectory":"/data/db/diagnostic.data"}}
{"t":{"$date":"2021-01-01T00:15:58.334+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2021-01-01T00:15:58.334+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"0.0.0.0"}}
{"t":{"$date":"2021-01-01T00:15:58.334+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27017,"ssl":"off"}}
Log from the mongo-express service:
Mongo Express server listening at http://0.0.0.0:8081
Server is open to allow connections from anyone (0.0.0.0)
basicAuth credentials are "admin:pass", it is recommended you change this in your config.js!
/node_modules/mongodb/lib/server.js:265
process.nextTick(function() { throw err; })
^
Error [MongoError]: failed to connect to server [mongo:27017] on first connect
at Pool.<anonymous> (/node_modules/mongodb-core/lib/topologies/server.js:326:35)
at Pool.emit (events.js:314:20)
at Connection.<anonymous> (/node_modules/mongodb-core/lib/connection/pool.js:270:12)
at Object.onceWrapper (events.js:421:26)
at Connection.emit (events.js:314:20)
at Socket.<anonymous> (/node_modules/mongodb-core/lib/connection/connection.js:175:49)
at Object.onceWrapper (events.js:421:26)
at Socket.emit (events.js:314:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
Waiting for mongo:27017...
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan 1 00:17:21 UTC 2021 retrying to connect to mongo:27017 (2/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan 1 00:17:27 UTC 2021 retrying to connect to mongo:27017 (3/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan 1 00:17:33 UTC 2021 retrying to connect to mongo:27017 (4/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Fri Jan 1 00:17:39 UTC 2021 retrying to connect to mongo:27017 (5/5)
/docker-entrypoint.sh: line 14: mongo: Try again
/docker-entrypoint.sh: line 14: /dev/tcp/mongo/27017: Invalid argument
Typically the number of retries for mongo-express is 5; maybe the database takes longer to become connectable in a Docker-on-CentOS environment than on Windows/macOS. You can check whether the database itself is working with docker exec -it nameofmongocontainer mongo -uroot -pexample, and then run something like show dbs;
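For example, a non-interactive check along these lines (assuming the container is named mongo and uses root/example credentials; the root user created by the image lives in the admin database, hence the extra flag):

# prints { "ok" : 1 } once mongod is up and the credentials work
docker exec mongo mongo -u root -p example --authenticationDatabase admin --eval "db.adminCommand('ping')"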
You mentioned a DNS issue; that happened on my Docker-on-Windows environment three times before mongo-express managed to establish a connection with the database, so it is not exclusive to the CentOS Docker environment.
Potential fix: you can add a script to your docker-compose setup that increases the number of retries to 20. Here is an example based on the original mongo-express entrypoint (save the docker-entrypoint.sh script in the same location as your docker-compose file):
#!/bin/bash
set -eo pipefail

# This is a very small modification of the original docker script taken from
# https://github.com/mongo-express/mongo-express-docker/blob/master/docker-entrypoint.sh
# with the only change being max_tries=20 instead of the original 5,
# to give the main mongodb container enough time to start and become connectable.
# Also, see https://docs.docker.com/compose/startup-order/

# if the command does not start with mongo-express, run it instead of the entrypoint
if [ "${1}" != "mongo-express" ]; then
    exec "$@"
fi

function wait_tcp_port {
    local host="$1" port="$2"
    local max_tries=20 tries=1

    # see http://tldp.org/LDP/abs/html/devref1.html for a description of this syntax.
    while ! exec 6<>/dev/tcp/$host/$port && [[ $tries -lt $max_tries ]]; do
        sleep 10s
        tries=$(( tries + 1 ))
        echo "$(date) retrying to connect to $host:$port ($tries/$max_tries)"
    done
    exec 6>&-
}

# if ME_CONFIG_MONGODB_SERVER has a comma in it, we're pointing to a replica set
# (https://github.com/mongo-express/mongo-express-docker/issues/21)
if [[ "$ME_CONFIG_MONGODB_SERVER" != *,* ]]; then
    # wait for the mongo server to be available
    echo "Waiting for ${ME_CONFIG_MONGODB_SERVER}:${ME_CONFIG_MONGODB_PORT:-27017}..."
    wait_tcp_port "${ME_CONFIG_MONGODB_SERVER}" "${ME_CONFIG_MONGODB_PORT:-27017}"
fi

# run mongo-express
exec node app
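One caveat worth stating: since this script is bind-mounted over the image's entrypoint, it must be executable on the host, or the container exits immediately with a permission error. A one-time fix:

chmod +x docker-entrypoint.sh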
The docker-compose file will then be:
version: '3.1'

services:

  mongo:
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
    volumes:
      - ./docker-entrypoint.sh:/docker-entrypoint.sh
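With both files in the same directory, docker-compose up should now show mongo-express retrying for up to 20 attempts (a bit over three minutes with the 10-second sleep) instead of crashing after 5. To follow it, assuming the service names above:

docker-compose up -d
docker-compose logs -f mongo-express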
Could this be an issue with CentOS?
What you can do is run these steps separately:
docker network create mongo-test
docker run -d --name mongo -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=example --net mongo-test mongo
docker run -d --name mongo-express -e ME_CONFIG_MONGODB_ADMINUSERNAME=root -e ME_CONFIG_MONGODB_ADMINPASSWORD=example --net mongo-test -p 8081:8081 mongo-express
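If those work, you can confirm that both containers ended up attached to the same network and that mongo-express eventually connected:

docker network inspect mongo-test   # both containers should be listed under "Containers"
docker logs mongo-express           # should eventually show the server listening on 8081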
A potential issue is that CentOS is not allowing containers to use -p at all. To test that, run docker run --rm --name test --net mongo-test -p 8081:80 httpd and then check localhost:8081 (the host side of the 8081:80 mapping).
Try this:
# Use root/example as user/password credentials
version: '3.1'

services:

  mongo:
    image: mongo
    container_name: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    ports:
      - 27017:27017

  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
    links:
      - "mongo"
    depends_on:
      - "mongo"
If you want, please try this (working example):
version: '3'

services:

  mongo:
    image: mongo
    restart: always
    container_name: my_mongo
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
    volumes:
      - ./data/db:/data/db
      - ./data/configdb:/data/configdb

  mongo-express:
    image: mongo-express
    restart: always
    container_name: my_mongo_express
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: root
      ME_CONFIG_MONGODB_ADMINPASSWORD: example
      ME_CONFIG_BASICAUTH_USERNAME: user
      ME_CONFIG_BASICAUTH_PASSWORD: example
    depends_on:
      - mongo

volumes:
  node_modules:
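Once this is up, the UI should be reachable on the host, protected by the basic-auth pair configured above; for example:

docker-compose up -d
curl -u user:example http://localhost:8081   # or open it in a browser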
You can also try to expose the Mongo service with a different name or port, if you think the problem lies there. In that case, you also have to change the respective variables in the mongo-express service accordingly (ME_CONFIG_MONGODB_SERVER and ME_CONFIG_MONGODB_PORT) - details on the possible configurations are in the mongo-express documentation.
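For example, if mongo-express should reach the database under the my_mongo container_name from the working example above, the override would look roughly like this:

  mongo-express:
    environment:
      ME_CONFIG_MONGODB_SERVER: my_mongo    # must match the mongo service/container name
      ME_CONFIG_MONGODB_PORT: "27017"       # must match the port mongod listens on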
Either way, is the network involving the two services correctly created?
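A quick way to check: by default docker-compose creates a network named after the project directory plus _default, and both containers must appear in its "Containers" section (replace <projectdir> with your project directory name):

docker network ls
docker network inspect <projectdir>_default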