I have a dockerized service. It consists of a server with a REST API and a MongoDB storage system.
First of all, I have these two images from Docker Hub:
mongo:3.6.0
node:alpine
I'm trying to create a backup with a snapshot of my server. When I restored the snapshot and brought the services back up, MongoDB crashed. This is the error log:
mongo_1 | 2018-01-09T18:50:17.770+0000 F STORAGE
[initandlisten] Unable to start up mongod due to missing featureCompatibilityVersion document.
mongo_1 | 2018-01-09T18:50:17.774+0000 F STORAGE
[initandlisten] Please run with --repair to restore the document.
mongo_1 | 2018-01-09T18:50:17.774+0000 F -
[initandlisten] Fatal Assertion 40652 at src/mongo/db/db.cpp 660
mongo_1 | 2018-01-09T18:50:17.775+0000 F - [initandlisten]
This is my Dockerfile:
FROM node:alpine
RUN mkdir /CSCFUTSAL
WORKDIR /CSCFUTSAL
COPY package.json /CSCFUTSAL
RUN cd /CSCFUTSAL
RUN npm install
RUN npm install -g @angular/cli
COPY . /CSCFUTSAL
RUN ng build --prod
EXPOSE 80
And my docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.6.0
    volumes:
      - /data/mongodb/db:/data/db
    ports:
      - "27017:27017"
  api:
    build: .
    command: node ./bin/www
    volumes:
      - api-img:/CSCFUTSAL/public/plantillas
    ports:
      - "80:80"
    links:
      - mongo
volumes:
  api-img:
    driver: local
I don't know how to repair this without losing the stored data.
As you're running this inside Docker, you'll need to run the mongod --repair command in a container. Make sure you have a backup of the data you're going to repair, and then run something like:
docker container run --rm -v /data/mongodb/db:/data/db mongo:3.6.0 mongod --repair
That's assuming that your local data is at /data/mongodb/db, which your Docker compose file suggests.
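Once the repair container exits, you can bring the stack back up and, if you like, verify that the featureCompatibilityVersion document is back, with something like (assuming the mongo service name from your compose file):
docker-compose up -d mongo
docker-compose exec mongo mongo --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})'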
The answer lies within the error.
Just go to the location where "mongod.exe" is installed.
Note: For Windows OS, it is generally installed at C:\Program Files\MongoDB\Server\3.6\bin.
Open cmd at the same location.
Run the following command: mongod.exe --repair
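If your data files are in a non-default location, you may also need to pass --dbpath explicitly (the path below is only an example):
mongod.exe --repair --dbpath "C:\data\db"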
This command will perform the necessary repairs.
After that just run mongod.exe from the same location.
The above solution worked for me. I hope this will solve your problem.
I'm on Ubuntu 20.04.5 LTS, trying to create a Docker container for a MongoDB database.
The command I'm using is:
docker-compose up -d --build
the docker-compose.yml file contains:
version: '2'
services:
  ptmdocker-mongodb:
    build:
      context: ptmdocker-mongodb/
      dockerfile: Dockerfile
    environment:
      - MONGODB_ROOT_PASSWORD=xx
      - MONGODB_USERNAME=xx
      - MONGODB_PASSWORD=xx
      - MONGODB_DATABASE=xx
    ports:
      - '27017:27017'
    volumes:
      - /home/administrator/mongo_live_data:/bitnami/mongodb
and the Dockerfile contains:
FROM bitnami/mongodb:latest
VOLUME /backups
USER root
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "nano"]
EXPOSE 27017
I expect the container to start and to have an empty database. But after a few seconds the Docker container goes down; in the logs I find the following error:
mongodb 15:20:35.01
mongodb 15:20:35.01 Welcome to the Bitnami mongodb container
mongodb 15:20:35.01 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb 15:20:35.01 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb 15:20:35.01
mongodb 15:20:35.02 INFO ==> ** Starting MongoDB setup **
mongodb 15:20:35.03 INFO ==> Validating settings in MONGODB_* env vars...
mongodb 15:20:36.97 INFO ==> Initializing MongoDB...
mongodb 15:20:37.01 INFO ==> Deploying MongoDB from scratch...
mongodb 15:20:37.02 DEBUG ==> Starting MongoDB in background...
Error opening config file: Permission denied
try '/opt/bitnami/mongodb/bin/mongod --help' for more information
I'm mapping the db folder inside the container to a folder on the host:
/home/administrator/mongo_live_data
Maybe there is a permissions issue on this folder? Any ideas how I can fix this?
From https://hub.docker.com/r/bitnami/mongodb:
NOTE: As this is a non-root container, the mounted files and directories must have the proper permissions for the UID 1001
To find the user with UID 1001:
$ cat /etc/passwd | grep 1001
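In this case, giving UID 1001 ownership of the mounted host folder should be enough; for the path from the question, something like:
$ sudo chown -R 1001 /home/administrator/mongo_live_data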
More a workaround than a solution, but I installed a new Ubuntu instance; this time I used Ubuntu 20.04.2 instead of 20.04.5.
After installing Docker and docker-compose I could create the MongoDB container with none of the mentioned permission issues.
(It's fixed for me, but I'm still not sure if it's because of the Ubuntu version or if I did something wrong on my first Ubuntu install...)
I use MongoDB change streams in my code and need to create a MongoDB Docker image with change streams enabled.
The problem is that mongod should be started first with default settings to allow creating users, documents, etc.
Then mongod should be stopped, and the replica set configuration added to mongod.conf to enable change streams:
# mongod.conf
replication:
  replSetName: rs0
  oplogSizeMB: 100
After that, mongod should be started again and the replica set initialized via the MongoDB shell:
rs.slaveOk()
rs.initiate()
rs.initiate({_id:"rs0", members: [{"_id":1, "host":"127.0.0.1:27017"}]})
The MongoDB 3.6 base image provides initialization capabilities.
Do you know how to start mongod, initialize the DB, then stop it and reconfigure?
UPD:
I need to initialize the database, then add the replica set.
Therefore, I need to run mongod with the default mongod.conf, create users and collections, then restart mongod with another mongod.conf in which the replica set is enabled. I can't do that with the official MongoDB image. I've installed MongoDB 3.6.12 on an Ubuntu image. My MongoDB container works well after running the setup commands manually in its bash shell, but the same instructions are not working from the Dockerfile.
Here are the commands:
RUN mongod --fork --config /etc/mongod.conf \
    && mongo < /opt/init_mongodb.js \
    && mongod --shutdown --dbpath /var/lib/mongodb \
    && cp /etc/mongod.conf /etc/mongod.conf.orig \
    && mongod --fork --config /opt/mongod.conf \
    && mongo -u "root" -p "root" --authenticationDatabase "admin" < /opt/reconfig_mongodb.js
When I run these commands from the Dockerfile, the following error appears:
> backend#1.0.0 start /usr/src/app
> npm run babelserver
> backend#1.0.0 babelserver /usr/src/app
> babel-node --presets es2015 index.js
(node:41) UnhandledPromiseRejectionWarning: MongoError: not master and slaveOk=false
at queryCallback (/usr/src/app/node_modules/mongodb-core/lib/cursor.js:248:25)
at /usr/src/app/node_modules/mongodb-core/lib/connection/pool.js:532:18
at processTicksAndRejections (internal/process/task_queues.js:82:9)
(node:41) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
I don't think that's a clean way of setting up your replica set. It really should be much simpler.
Instead of running one of the members in background mode, setting up the replica set, saving the config file, shutting down the process, and lastly starting with the saved config file, you can just run an additional "helper" container (also a MongoDB image) whose sole purpose is to set up the replica set without acting as a member.
Something similar to this would do the job:
docker-compose.yaml
version: '3'
services:
  mongodb-rs0-1:
    image: ${your_image}
    ports: [27018:27018]
    environment:
      - PORT=27018
      - MONGO_RS=rs0/mongodb-rs0-1:27018,mongodb-rs0-2:27019,mongodb-rs0-3:27020
    volumes:
      - ./db/rs0-1:/data/db
  mongodb-rs0-2:
    image: ${your_image}
    ports: [27019:27019]
    environment:
      - PORT=27019
      - MONGO_RS=rs0/mongodb-rs0-1:27018,mongodb-rs0-2:27019,mongodb-rs0-3:27020
    volumes:
      - ./db/rs0-2:/data/db
  mongodb-rs0-3:
    image: ${your_image}
    ports: [27020:27020]
    environment:
      - PORT=27020
      - MONGO_RS=rs0/mongodb-rs0-1:27018,mongodb-rs0-2:27019,mongodb-rs0-3:27020
    volumes:
      - ./db/rs0-3:/data/db
  mongodb-replicator:
    image: ${your_image}
    environment:
      - PRIMARY=mongodb-rs0-1:27018
    depends_on:
      - mongodb-rs0-1
      - mongodb-rs0-2
      - mongodb-rs0-3
networks:
  default:
    external:
      name: mongo
and in container_start.sh:
#!/bin/sh
if [ -z "$PRIMARY" ]; then
  # Member container
  docker-entrypoint.sh --replSet rs0 --smallfiles --oplogSize 128 --port "${PORT-27017}"
else
  # Replicator container
  until mongo "$PRIMARY" /app/replicaset-setup.js; do
    sleep 10
  done
fi
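The answer never shows /app/replicaset-setup.js; a minimal sketch of what it could contain, assuming the rs0 member hostnames from the compose file above, might look like this:
// replicaset-setup.js (hypothetical sketch; adjust the hosts to match MONGO_RS)
// Only initiate the replica set if it has not been initiated yet.
var status = rs.status();
if (!status.ok) {
  rs.initiate({
    _id: "rs0",
    members: [
      { _id: 0, host: "mongodb-rs0-1:27018" },
      { _id: 1, host: "mongodb-rs0-2:27019" },
      { _id: 2, host: "mongodb-rs0-3:27020" }
    ]
  });
}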
Even though it took me hours to figure out, it's actually pretty easy to set up a dockerized MongoDB instance with replication enabled (which allows the use of change streams).
Put both of these in the same folder:
Dockerfile
FROM mongo:4.4
ADD ./init_replicaset.js /docker-entrypoint-initdb.d/init_replicaset.js
CMD ["mongod", "--replSet", "rs0", "--bind_ip_all"]
init_replicaset.js
rs.initiate({'_id':'rs0', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});
Note: All .js/.sh files in the /docker-entrypoint-initdb.d directory are automatically run when the mongo instance is initialized which is why this works.
From within the same folder, run the following:
docker build . -t mongo_repl
docker run -i -p 27017:27017 mongo_repl
That's it! You should now have a running MongoDB instance with replication enabled.
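To sanity-check that the replica set actually came up, you can exec into the running container and ask for its status, e.g.:
docker exec -it <container-id> mongo --eval "rs.status()"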
Resolved by initializing the replica set on container startup.
This script is set as the ENTRYPOINT in the Dockerfile:
#!/bin/sh
# start mongod, initialize replica set
mongod --fork --config /opt/mongod.conf
mongo --quiet < /opt/replica.js
# restart mongod
mongod --shutdown --dbpath /var/lib/mongodb
mongod --config /opt/mongod.conf
replica.js
rs.slaveOk()
rs.initiate()
rs.initiate({_id:"rs0", members: [{"_id":1, "host":"127.0.0.1:27017"}]})
I am using the official MongoDB Docker image, FROM mongo:3.2. In the entrypoint.sh I am restarting MongoDB in replica set mode. The mongod process is owned by the root user. Is there any way I can start the container as a non-root user and still be able to restart MongoDB in replica set mode? Right now I am getting the following error:
2017-10-27T20:08:23.888+0000 I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
My Dockerfile is:
FROM mongo:3.2
COPY entrypoint.sh /root/entrypoint.sh
ENTRYPOINT ["/root/entrypoint.sh"] here
Using docker-compose:
version: '3.5'
services:
  yourmongo:
    image: mongo:xenial
    container_name: yourmongo
    restart: always
    user: 1000:1000
    volumes:
      - ./yourmongo-folder:/data/db
1000:1000 are the chosen values for uid:gid (user-id:group-id)
EDIT: don't forget to create the folder with the correct permissions first: mkdir yourmongo-folder && chown 1000:1000 yourmongo-folder (this must be done on the Docker host machine BEFORE bringing up the Docker service).
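If you're unsure which uid:gid values to use, you can look up your own user's IDs on the host:
$ id -u   # your uid, e.g. 1000
$ id -g   # your gid, e.g. 1000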
You can use the USER instruction available in the Dockerfile so that the ENTRYPOINT, CMD, and RUN instructions start/execute the process as the required user.
Syntax:
USER <user>[:<group>] or
USER <UID>[:<GID>]
Example:
FROM mongo:3.2
COPY entrypoint.sh /root/entrypoint.sh
USER deploy:root
ENTRYPOINT ["/root/entrypoint.sh"]
Update from user2010672 (working solution):
FROM mongo:3.2
COPY entrypoint.sh /root/entrypoint.sh
RUN chown -R mongodb:mongodb /var/log /data/db
USER mongodb
ENTRYPOINT ["/root/entrypoint.sh"]
Thanks user2010672
You can also set the user on the docker run command using the --user flag in user:group format.
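For example, to run the official image as its built-in mongodb user:
docker run --user mongodb:mongodb mongo:3.2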
This docker-compose.yml:
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    command: "mongod --dbpath=/usr/database"
    networks:
      - backend
    volumes:
      - dbdata:/usr/database
volumes:
  dbdata:
results in this error (snipped):
database_1 | 2016-11-28T06:30:29.864+0000 I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /usr/database/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
Ditto for just trying to run the command in a container using that image directly:
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
But, if I run /bin/bash when starting the container, and THEN start mongo, we're OK:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
root#8aab722fad89:/# mongod --dbpath=/usr/database
Based on the output, the difference seems to be that in the second scenario, the command is run as root.
So, my questions are:
Why does the /bin/bash method work, when the others do not?
How can I replicate that reason, in the docker-compose?
Note: I'm on OSX, since that seems to affect whether you can mount a host directory as a volume for Mongo to use (not that I'm doing that).
To clarify, the image hub.docker.com/_/mongo is the official MongoDB image on Docker Hub, but NOT an official Docker image from MongoDB Inc.
Now to answer your questions,
Why does the /bin/bash method work, when the others do not?
This answer is based on the v3.2 Dockerfile. First, to point out: your volume mount command, -v /usr/database, essentially creates a directory in the container owned by root.
Your command below failed with permission denied because the Docker image runs the command as user mongodb (see this Dockerfile line), while the directory /usr/database is owned by root.
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
Whereas if you execute /bin/bash as below and then manually run mongod:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
You are logged in as root and executing mongod as root, which has permission to create database files in /usr/database/.
Also, if you execute the line below, it works because you're pointing to the directory /data/db, whose permissions have been corrected for user mongodb (see this Dockerfile line):
$ docker run -v db:/data/db mongo:3.2
How can I replicate that reason, in the docker-compose?
The easiest solution is to use command: "mongod --dbpath=/data/db", because the ownership of that directory has been corrected in the Dockerfile.
If you intend to use a host volume, you would probably have to add a mongodb user on your OSX host and change the permissions of the appropriate directories. Modifying the ownership of a volume mount is outside the scope of docker-compose.
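For example, if you did want a host bind mount, you could align the host directory's ownership with the image's mongodb user first (UID 999 in the official Debian-based image, but verify this for your tag):
$ mkdir -p ./database
$ sudo chown -R 999:999 ./database
$ docker run -v "$(pwd)/database:/data/db" mongo:3.2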
My client has MongoDB running in a Docker container. One morning the DB crashed, and the logs say "unclean shutdown detected". I tried to repair MongoDB but was not able to, as it is running inside a Docker container: I have to shut down mongod to run the repair command, but once I shut down mongod, Docker throws me out of the container, because only one process was running and stopping it stopped the container too. Now I am not able to run any mongod commands, as I am out of the container and the container is stopped along with the mongod instance.
So, can anyone help me out of this? How can I run the repair command from outside the Docker container?
Can I run the repair command without stopping mongod?
The example command below fixed my issue with corrupted WiredTiger (.wt) files.
docker run -it -v <path>:/data/db <image-name> mongod --repair
Just replace <path> and <image-name> with your own.
I had this same thing when my Mac decided to restart and install an update this morning, grr.
Normally my mongo container is started with a docker-compose up command, which uses a docker-compose.yml file for all the settings, something like...
version: "3"
services:
mongo:
image: mongo:3.6.12
container_name: mongo
ports:
- 27017:27017
volumes:
- ~/mongo/data/db:/data/db
- ~/mongo/db/configdb:/data/configdb
To run a basic docker command with those settings I did...
# Kill any existing mongo containers
docker rm mongo
# Repair the db
docker run -it -v ~/mongo/data/db:/data/db -v ~/mongo/db/configdb:/data/configdb --name mongo -d mongo:3.6.12 mongod --repair
# Has it finished the repair yet?
docker ps -a
After about a minute the container exited, the database was repaired and I was able to run mongo using docker-compose again.
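Once the repair container has exited, remove it and bring everything back up with compose:
docker rm mongo
docker-compose up -d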
version: '3.9'
services:
  mongo:
    image: mongo
    command: --repair
This is how we pass the --repair in docker-compose.
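With that override in place, a one-off docker-compose up mongo performs the repair and exits; afterwards, remove the command: --repair line and start normally again:
docker-compose up mongo
# remove "command: --repair" from docker-compose.yml, then:
docker-compose up -d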