I am using the official MongoDB Docker image (FROM mongo:3.2). In my entrypoint.sh I restart MongoDB in replica set mode. The mongod process is owned by the root user. Is there any way I can start the container as a non-root user and still be able to restart mongod in replica set mode? Right now I am getting the following error.
2017-10-27T20:08:23.888+0000 I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
My Dockerfile is:
FROM mongo:3.2
COPY entrypoint.sh /root/entrypoint.sh
ENTRYPOINT ["/root/entrypoint.sh"]
Thanks,
Using docker-compose:
version: '3.5'
services:
  yourmongo:
    image: mongo:xenial
    container_name: yourmongo
    restart: always
    user: "1000:1000"
    volumes:
      - ./yourmongo-folder:/data/db
1000:1000 are the chosen values for uid:gid (user-id:group-id); quote the value so it is read as a string.
EDIT: don't forget to create the folder with the correct permissions: mkdir yourmongo-folder && chown 1000:1000 yourmongo-folder (this must be done on the Docker host machine BEFORE bringing up the Docker service).
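For completeness, a minimal sketch of the host-side preparation, assuming the compose file above sits in the current directory:
mkdir -p ./yourmongo-folder
sudo chown 1000:1000 ./yourmongo-folder   # match the uid:gid used in the compose file
docker-compose up -d
docker-compose logs -f yourmongo          # check that mongod starts without permission errors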
You can use the USER instruction available in the Dockerfile; the ENTRYPOINT, CMD, and RUN instructions that follow it are executed as the specified user.
Syntax:
USER <user>[:<group>] or
USER <UID>[:<GID>]
Example:
FROM mongo:3.2
COPY entrypoint.sh /root/entrypoint.sh
USER deploy:root
ENTRYPOINT ["/root/entrypoint.sh"]
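Note that the user named in the USER instruction has to exist in the image; below is a minimal sketch that first creates a hypothetical deploy user (the useradd line and the paths are assumptions, not part of the answer above):
FROM mongo:3.2
# create the (hypothetical) deploy user before switching to it
RUN useradd --create-home --shell /bin/bash deploy
COPY entrypoint.sh /home/deploy/entrypoint.sh
RUN chown deploy:root /home/deploy/entrypoint.sh
USER deploy:root
ENTRYPOINT ["/home/deploy/entrypoint.sh"]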
Update from user2010672 (working solution):
FROM mongo:3.2
COPY entrypoint.sh /root/entrypoint.sh
RUN chown -R mongodb:mongodb /var/log /data/db
USER mongodb
ENTRYPOINT ["/root/entrypoint.sh"]
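Usage would then be something like this hedged sketch (the image tag is illustrative):
docker build -t my-replica-mongo .
docker run -d my-replica-mongo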
Thanks user2010672
You can also set the user on the docker run command line using the --user flag in user:group format.
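For example, a hedged sketch (the uid:gid and host path are illustrative):
# run the container as uid 1000 / gid 1000 instead of root
docker run --user 1000:1000 -v "$(pwd)/yourmongo-folder":/data/db mongo:3.2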
Description
I am running a Postgres container with docker-compose. I am mounting the data directory needed by Postgres into the container using a volume in the docker-compose.yml below.
Qualifications
The Postgres user must be called graph-node, and a database called graph-node must be created.
I delete the data/postgres/ folder before each docker-compose up using the boot.sh script below, for application-specific reasons. Just know that ./data/postgres is re-created on each run of docker-compose up.
Expected Behavior
Postgres boots and writes all the files it needs to the mounted ./data/postgres volume.
Actual Behavior
Postgres boots fine, but writes nothing to the volume.
Possible Reasons
This feels like a read/write permissions problem. I've added :rw in the third column of the volume as suggested, still no cigar. I run chmod -R a+rwx ./data on the data dir to get access to all files recursively.
The oddest thing is that if I manually run chmod -R a+rwx ./data after booting, Postgres suddenly IS able to write all the files it needs to the directory. But if I run it before the directory is created, as seen below (recursively for all things in ./data), it does not work.
Files
boot.sh
#!/bin/sh
# Check for the data/ dir. If found, make it recursively rwx for all users.
# Otherwise, create it and make it recursively rwx for all users.
if [ -d "./data" ]
then
  chmod -R a+rwx ./data
else
  mkdir data
  chmod -R a+rwx ./data
fi

# Remove any stale data/postgres dir so it is re-created on each run.
if [ -d "./data/postgres" ]
then
  rm -rf data/postgres
else
  echo "No data/postgres dir found. Proceeding"
fi

docker-compose -f docker-compose.yml up
docker-compose.yml
version: "3"
services:
postgres:
image: postgres
ports:
- '5432:5432'
command:
[
"postgres",
"-cshared_preload_libraries=pg_stat_statements"
]
environment:
POSTGRES_USER: graph-node
POSTGRES_PASSWORD: let-me-in
POSTGRES_DB: graph-node
volumes:
- ./data/postgres:/var/lib/postgresql/data:rw
Machine + Software Specs
Operating System: Windows 10, WSL2, Ubuntu
Docker Version: 20.10.7 (running directly on the machine since it's Ubuntu, NOT in Docker Desktop like on a Mac)
Well, not exactly an answer, but because I only needed one-run ephemeral storage for Postgres (I was deleting the data/ dir between runs anyway), I solved the problem by removing the external volume and letting Postgres write its data inside the container itself, where it certainly had the privileges.
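A minimal sketch of that workaround, assuming the rest of the setup stays the same (this is my reading of it, not the exact file used):
version: "3"
services:
  postgres:
    image: postgres
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: graph-node
      POSTGRES_PASSWORD: let-me-in
      POSTGRES_DB: graph-node
    # no volumes entry: data stays inside the container (ephemeral),
    # where the postgres user always has write permission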
I have to create a mongo image with some default collections and data. I am able to create a mongo image with this data by referring to the following link:
How to create a Mongo Docker Image with default collections and data?
So when I run the container I get the default data.
Now when I use the app, some more data is generated (by calling APIs), which gets saved in MongoDB alongside the default data.
Now, for some reason, if the Docker container is restarted, unfortunately all the run-time created data is gone and only the default data is left, even though I am saving data using volumes.
So how do I persist both the run-time data and the default data each time Docker is restarted?
I am using the following Dockerfile and docker-compose file.
Dockerfile:
FROM mongo
####### working: inserting data ##########
# Modify child mongo to use /data/db2 as dbpath (because /data/db wont persist the build)
RUN mkdir -p /data/db2 \
&& echo "dbpath = /data/db2" > /etc/mongodb.conf \
&& chown -R mongodb:mongodb /data/db2
COPY . /data/db2
RUN mongod --fork --logpath /var/log/mongodb.log --dbpath /data/db2 --smallfiles \
&& mongo 127.0.0.1:27017/usaa /data/db2/config-mongo.js \
&& mongod --dbpath /data/db2 --shutdown \
&& chown -R mongodb /data/db2
# Make the new dir a VOLUME to persists it
VOLUME /data/db2
CMD ["mongod", "--config", "/etc/mongodb.conf", "--smallfiles"]
and a part of docker-compose.yml
services:
  mongo:
    build: ./mongodb
    image: "mongo:1.2"
    container_name: "mongo"
    ports:
      - "27017:27017"
    volumes:
      - ${LOCAL_DIRECTORY}:/data/db2
    networks:
      - some-network
The reason may be that rebuilding the Docker image re-creates the /data/db2 directory with only the default data defined in the .js file. But I am not sure.
Please correct me if I am doing something wrong, or suggest a new workflow for this problem.
Thanks much!
Because Docker is stateless by default: each time you call docker run it creates a fresh container. If you want some data to persist, you have 2 general approaches:
Do not remove the container after it exits. Just give a lovely name to your container when first starting it, like docker run --name jessica mongo, and then, on subsequent calls, use docker start jessica.
Use volumes to store data and share it between containers. In this case you start your container with volume arguments, like docker run -v /home/data:/data mongo. You will also have to reconfigure your mongodb to save data in the path /data inside the container. This approach is easier and can be used to share data between different containers, as well as to provide default data for the first run.
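A minimal sketch of the volume approach, using MongoDB's default dbpath /data/db so that no reconfiguration is needed (paths and names are illustrative):
# first run: the host directory is empty, so mongod initializes it
docker run -d --name mydb -v /home/data:/data/db mongo

# ...write some data, then remove the container...
docker rm -f mydb

# second run: the same host directory is mounted again, so the data survives
docker run -d --name mydb -v /home/data:/data/db mongo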
UPD
When using docker-compose to start the containers, if you need your data to persist between sessions, you can simply use external volumes, which you create in advance.
First create the volume, let's say lovely:
docker volume create lovely
Then use it in docker-compose.yml:
version: '3'
services:
  db1:
    image: whatever
    volumes:
      - lovely:/data
  db2:
    image: whatever
    volumes:
      - lovely:/data
volumes:
  lovely:
    external: true
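To check that the data really persists between sessions, a quick hedged sketch:
docker-compose down                  # containers are gone...
docker volume inspect lovely         # ...but the external volume is still there
docker-compose up -d                 # containers reattach to the same volume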
I have a dockerized service. It consists of a server with a REST API and a MongoDB storage system.
First of all, I have these two images from Docker Hub:
mongo:3.6.0
node:alpine
I'm trying to create a backup using a snapshot of my server. When I performed the restore and started to bring the services back up, MongoDB crashed. This is the error log:
mongo_1 | 2018-01-09T18:50:17.770+0000 F STORAGE [initandlisten] Unable to start up mongod due to missing featureCompatibilityVersion document.
mongo_1 | 2018-01-09T18:50:17.774+0000 F STORAGE [initandlisten] Please run with --repair to restore the document.
mongo_1 | 2018-01-09T18:50:17.774+0000 F -       [initandlisten] Fatal Assertion 40652 at src/mongo/db/db.cpp 660
mongo_1 | 2018-01-09T18:50:17.775+0000 F -       [initandlisten]
This is my Dockerfile:
FROM node:alpine
RUN mkdir /CSCFUTSAL
WORKDIR /CSCFUTSAL
COPY package.json /CSCFUTSAL
RUN cd /CSCFUTSAL
RUN npm install
RUN npm install -g @angular/cli
COPY . /CSCFUTSAL
RUN ng build --prod
EXPOSE 80
And my docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.6.0
    volumes:
      - /data/mongodb/db:/data/db
    ports:
      - "27017:27017"
  api:
    build: .
    command: node ./bin/www
    volumes:
      - api-img:/CSCFUTSAL/public/plantillas
    ports:
      - "80:80"
    links:
      - mongo
volumes:
  api-img:
    driver: local
I don't know how to repair this without losing the stored data.
As you're running this inside Docker, you'll need to run the mongod --repair command in a container. Make sure you have a backup of the data you're going to repair, and then run something like:
docker container run --rm -v /data/mongodb/db:/data/db mongo:3.6.0 mongod --repair
That's assuming that your local data is at /data/mongodb/db, which your Docker compose file suggests.
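For example, a hedged sketch of the full sequence (the backup path is illustrative):
# back up the data directory first
sudo cp -a /data/mongodb/db /data/mongodb/db.bak
# run the repair in a throwaway container against the same data directory
docker container run --rm -v /data/mongodb/db:/data/db mongo:3.6.0 mongod --repair
# then bring the stack back up
docker-compose up -d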
The answer lies within the error.
Just go to the location where mongod.exe is installed.
Note: For Windows OS, it is generally installed at C:\Program Files\MongoDB\Server\3.6\bin.
Open a cmd prompt at that location.
Run the following command: mongod.exe --repair
This command will perform the necessary repairs.
After that, just run mongod.exe from the same location.
The above solution worked for me. I hope it solves your problem too.
This docker-compose.yml:
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    command: "mongod --dbpath=/usr/database"
    networks:
      - backend
    volumes:
      - dbdata:/usr/database
volumes:
  dbdata:
results in this error (snipped):
database_1 | 2016-11-28T06:30:29.864+0000 I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /usr/database/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
Ditto for just trying to run the command in a container using that image directly:
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
But, if I run /bin/bash when starting the container, and THEN start mongo, we're OK:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
root@8aab722fad89:/# mongod --dbpath=/usr/database
Based on the output, the difference seems to be that in the second scenario, the command is run as root.
So, my questions are:
Why does the /bin/bash method work, when the others do not?
How can I replicate that reason, in the docker-compose?
Note: I'm on OSX, since that seems to affect whether you can mount a host directory as a volume for Mongo to use - not that I'm doing that.
To clarify, this image (hub.docker.com/_/mongo) is an official Docker image on Docker Hub, but NOT an official image from MongoDB, Inc.
Now to answer your questions,
Why does the /bin/bash method work, when the others do not?
This answer is based on Dockerfile v3.2. First, note that your volume flag -v /usr/database essentially creates an anonymous volume at that path inside the container, owned by root.
The command below failed with permission denied because the Docker image runs it as the mongodb user (see this dockerfile line), while the directory /usr/database is owned by root.
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
While if you execute /bin/bash as below and then manually run mongod:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
you are logged in as root and executing mongod as root, so it has permission to create database files in /usr/database/.
Also, if you execute the line below, it works because you're pointing to the directory /data/db, whose permissions have been corrected for the mongodb user (see this dockerfile line):
$ docker run -v db:/data/db mongo:3.2
How can I replicate that reason, in the docker-compose?
The easiest solution is to use command: "mongod --dbpath=/data/db", because the ownership of that path has already been corrected in the Dockerfile.
If you intend to use a host volume, you would probably have to add a mongodb user on your OSX host and change the permissions of the appropriate directories. Modifying the ownership of a volume mount is outside the scope of docker-compose.
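A hedged sketch of that easiest fix, adapted from the compose file in the question (only the dbpath and mount target change):
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    # use the image's default data directory, whose ownership is already
    # fixed for the mongodb user in the official Dockerfile
    command: "mongod --dbpath=/data/db"
    networks:
      - backend
    volumes:
      - dbdata:/data/db
volumes:
  dbdata: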
I want to start a Mongo container with a read-only filesystem for security reasons, according to 5.12.
I have the following docker-compose.yml:
version: '2'
services:
  mongodb:
    image: mongo:3.2
    command: -f /etc/mongo.conf
    volumes:
      - ./mongo/mongo.conf:/etc/mongo.conf:ro
      - /data/db
    user: mongodb
    read_only: true
On docker-compose up it fails with the error Failed to unlink socket file /tmp/mongodb-27017.sock errno:30 Read-only file system.
OK, no problem. I could add - /tmp to the volumes.
But is it good practice to add every such path to the volumes? And are there other paths to add, like log paths and so on?
Is there a list from MongoDB?
TL;DR case
You don't need a read-only container; just run the process as a non-root user relative to the host machine, mount only the directories you really need, and manage permissions only for those mounted directories.
Full answer
Judging from the official mongo Docker image and Docker usage best practices, a much easier and more convenient approach is to use gosu. With it, MongoDB runs as a non-root user, which should be secure enough.
Directories that are not mounted from the host into the container cannot be affected on the host by anything the container does. For example, even if you remove the filesystem root inside a container where nothing is mounted, it will not affect the host's directories (but it WILL affect all mounted dirs, so be careful if you decide to try that yourself =)).
Also, /data/db is the directory where MongoDB stores all of its database files, "schemas", etc., so if it is read-only, mongodb will not work at all. This is why you see a chown -R mongodb:mongodb /data/db step before mongodb starts in the docker-entrypoint.sh of the official mongodb Docker image.
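For illustration, here is a heavily simplified sketch of that gosu pattern (a paraphrase of the idea, not the actual entrypoint script):
#!/bin/bash
# simplified sketch of the official entrypoint's gosu pattern (illustrative)
if [ "$(id -u)" = '0' ]; then
    # running as root: fix ownership of the data dir, then re-exec
    # this same script as the unprivileged mongodb user
    chown -R mongodb:mongodb /data/db
    exec gosu mongodb "$0" "$@"
fi

# from here on we are the non-root mongodb user
exec mongod "$@"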