Automatically set up a replica set and restore a database in MongoDB using Docker

This is my Dockerfile:
FROM mongo
WORKDIR /usr/src/app
COPY db /usr/src/app/db
COPY replica.js /usr/src/app/
CMD mongo
The replica.js is as follows:
rs.initiate();
This is my docker-compose file:
mongo_server:
  image: mongo
  hostname: mongo_server.$ENV_NAME
  build:
    context: ./mongo
    dockerfile: Dockerfile
  expose:
    - 27017
  ports:
    - "$MONGO_PORT:27017"
  restart: always
  networks:
    localnet:
      aliases:
        - mongo_server.$ENV_NAME
  command: --replSet $MONGO_REPLICA --bind_ip_all
  volumes:
    - "mongovolume:/data/db"
The problem: after I successfully run docker-compose up, I still need to run two commands manually:
docker exec 2b2 sh -c "mongo < /usr/src/app/replica.js" # 2b2 is id of container mongo
and
docker exec 2b2 sh -c "mongorestore --drop -d mydb /usr/src/app/db"
After that, the replica set is initiated and the database is restored. My question: can I make this happen automatically, for example by moving these commands into an entrypoint.sh called from the Dockerfile, or by configuring it in docker-compose.yml, to reduce the manual work?

There is definitely a way by adding another container in your docker-compose file:
mongo_restore:
  image: mongo
  build:
    context: ./mongo
    dockerfile: Dockerfile
  networks:
    localnet:
      aliases:
        - mongo_server.$ENV_NAME
  entrypoint:
    - sh
  command:
    - -c
    - |
      # Step 1: Wait until mongo_server is fully up and running. Please insert your own code to check.
      # Step 2: Execute your restore script, but make sure to target mongo_server instead.
  volumes:
    - "mongovolume:/data/db"
There might be some syntax errors here and there, but the idea is the same; I have used this method in some other projects :)
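For concreteness, here is a minimal sketch of what that command block could run, assuming the mongo_server hostname and the /usr/src/app paths from the question (adjust names and the readiness check to your own setup):
# Sketch only: wait for mongo_server, initiate the replica set, then restore.
until mongo --host mongo_server.$ENV_NAME --eval "db.adminCommand('ping')" > /dev/null 2>&1; do
  echo "waiting for mongo_server..."
  sleep 2
done
mongo --host mongo_server.$ENV_NAME /usr/src/app/replica.js
mongorestore --host mongo_server.$ENV_NAME --drop -d mydb /usr/src/app/db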

Related

docker-compose restore mongo database every time collection deletes

I want to be able to recreate some base data from a dump whenever the mongo-data folder is deleted and docker-compose up is called.
The problem I'm facing is that the app container does not have mongo installed.
These are my files:
docker-compose.yml
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
Dockerfile:
FROM node:14.15.5
RUN mkdir -p /testapp
WORKDIR /testapp
EXPOSE 3000
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
sh ./__backup__/db/restore.sh
sh ./__backup__/app/restore.sh
yarn install
yarn start:dev
__backup__/app/restore.sh:
#!/bin/bash
if [[ ! -d '/testapp/uploads' ]]
then
tar -xvf ./uploads.tar.gz /testapp/
fi
__backup__/db/restore.sh:
#!/bin/bash
until mongo --eval "print(\"waited for connection\")"
do
sleep 1
done
if [[ ! -d '/testapp/mongo-data' ]]
then
mongorestore --archive ./db.dump
fi
Is there any way to run these restore.sh files after the mongo service is up, or to run mongo from the app container?
If I understand the question correctly, you want to restore the MongoDB to a certain state every time your app launches, and you're asking if there's a way to do it after MongoDB container launches.
There's a tool called docker-compose-wait; quoting from its GitHub README, it's a small command-line utility to wait for other docker images to be started while using docker-compose.
It's fairly simple to use it. Add it to the image, run /wait to wait for services to be up, and get on to whatever you want next.
So according to your current setup, your Dockerfile could be like this:
FROM node:14.15.5
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
RUN mkdir -p /testapp
WORKDIR /testapp
ADD . .
EXPOSE 3000
## Launch the wait tool and then your entrypoint.sh
ENTRYPOINT /wait && /testapp/entrypoint.sh
Your entrypoint.sh already calls the restore scripts. In your docker-compose.yml, add an environment variable telling the wait tool which services to wait for:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
WAIT_HOSTS: mongo:27017
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db

docker-entrypoint-initdb.d not executing scripts

I'm using docker-compose version 1.25.5, build 8a1c60f6 &
Docker version 19.03.8, build afacb8b
I'm trying to initialise the database with users in a MongoDB container using docker-entrypoint-initdb.d, but the
js scripts aren't being executed when the container starts.
I know /data/db must be empty for them to execute, so I've been deleting it every time before starting. They still don't execute. Going into the container and manually executing mongo mongo-init.js works.
Not sure why it's not working when it should.
docker-compose.yml:
version: '3'
services:
  mongodb:
    container_name: "mongodb"
    image: tonyh308/cv-mongodb:1
    build: ./mongo
    container_name: mongodb
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_DATABASE: test
    volumes:
      - ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
      - ./mongo/mongodb:/data/db
    labels:
      com.qa.description: "MongoDb Container for application."
# other services ...
Dockerfile:
FROM mongo:3.6.18-xenial
RUN apt-get update
COPY ./Customers /Customers
COPY ./test /test
WORKDIR /data
ENTRYPOINT ["mongod", "--bind_ip_all"]
EXPOSE 27017
Feb 26 2021: still no solution

Knex Migration with Docker Compose Psql

I have a problem migrating using Knex js inside my docker-compose container.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs before the database has even been created. Is there any way to specify that npm run db should only run after the database has been created?
NOTE: if I run these npm commands in the container's terminal after it has been built, everything works fine.
Here is my docker-compose.yml:
version: '3.6'
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
and here is my Dockerfile
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
In the docker-compose.yml file, use sh to give your command a contained shell environment to run in, i.e. sh -c 'npm run db'.
Secondly, use depends_on so the server waits for the database container to start.
Your docker-compose file would now be:
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start the postgres container before the server container. It will not, however, wait for postgres to be ready. In this case that shouldn't be a problem, because postgres starts really quickly.
If you want something more solid, or depends_on doesn't do the trick, you can add entrypoint wrapping script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. There are also links to tools, so you don't have to write your own script from scratch.
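As an illustration only (not taken from the linked page verbatim), such a wrapper could look like the sketch below, assuming the postgres service name and test credentials from the compose file above, and that the psql client is available in the server image:
#!/bin/sh
# wait-for-postgres.sh (hypothetical name): poll until postgres accepts connections,
# then hand off to the migration command.
set -e
until PGPASSWORD=test psql -h postgres -U test -d interapp -c '\q' > /dev/null 2>&1; do
  echo "postgres is unavailable - sleeping"
  sleep 1
done
echo "postgres is up - running migrations"
exec npm run db
The compose command would then become sh -c './wait-for-postgres.sh' instead of sh -c 'npm run db'.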

How to create user in mongodb with docker-compose

I'm trying to create some kind of script that will create a Docker container with mongodb and automatically create a user.
I can usually manage my docker images with docker-compose but this time, I don't know how to do it.
Basically, here is what I have to do:
clean/destroy container (docker-compose down)
create a docker container with mongodb and start it (without --auth parameter)
execute a java script containing db.createUser()
stop the container
restart the same container with --auth parameter to allow login with the user created in the javascript
I can't find how to do that properly with docker-compose, because when it starts I have to give it the --auth command. If I do that, I cannot execute my javascript to add my user. MongoDB allows user creation without being logged in if there are no users yet and the --auth parameter is not provided.
I want to do that automatically, I do not want to manually do some commands. The goal is to have a script that can be executed before each integration tests to start from a clean database.
Here is my project:
integration-test/src/test/resources/scripts/docker-compose.yml
mongodb:
  container_name: mongo
  image: mongo
  ports:
    - "27017:27017"
  volumes:
    - .:/setup
  command: --auth
integration-test/src/test/resources/scripts/docker-init.sh
docker-compose down
docker-compose up -d
sleep 1
docker exec mongo bash -c "mongo myDatabase /setup/mongodb-setup.js"
integration-test/src/test/resources/scripts/mongodb-setup.js
db.createUser(
{
user: "myUser",
pwd: "myPassword",
roles: [
{ role: "readWrite", db: "myDatabase" }
]
})
Finding a way to start a container again with a new parameter (in this case --auth) would help, but I can't find how to do that (docker start does not take parameters).
Any idea how I should do this?
If not, I can still delete everything from my database with some Java code or something else but I would like a complete mongodb docker setup created with a script.
The official mongo image now supports the following environment variables, which can be used in docker-compose as below:
environment:
  - MONGO_INITDB_ROOT_USERNAME=user
  - MONGO_INITDB_ROOT_PASSWORD=password
  - MONGO_INITDB_DATABASE=test
more explanation at:
https://stackoverflow.com/a/42917632/1069610
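Assuming the values above, you can verify the created root user once the container is up, for example (the container name mongo is an assumption here):
# Connect as the root user created from the environment variables (sketch; adjust names to your setup).
docker exec -it mongo mongo -u user -p password --authenticationDatabase admin test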
This is how I do it. My requirement was to bring up a few containers along with mongodb; the other containers expect a user to be present when they come up, and this worked for me. The good part is that mongoClientTemp exits after the command is executed, so the container doesn't stick around.
version: '2'
services:
  mongo:
    image: mongo:latest
    container_name: mongo
    ports:
      - "27017:27017"
    volumes:
      - /app/hdp/mongo/data:/data/db
  mongoClientTemp:
    image: mongo:latest
    container_name: mongoClientTemp
    links:
      - mongo:mongo
    command: mongo --host mongo --eval "db.getSiblingDB('dashboard').createUser({user:'db', pwd:'dbpass', roles:[{role:'readWrite',db:'dashboard'}]});"
    depends_on:
      - mongo
  another-container:
    image: another-image:v01
    container_name: another-container
    ports:
      - "8080:8080"
    volumes:
      - ./logs:/app/logs
    environment:
      - MONGODB_HOST=mongo
      - MONGODB_PORT=27017
    links:
      - mongo:mongo
    depends_on:
      - mongoClientTemp
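To confirm that the one-shot client container did its job, you can check its logs and exit code after docker-compose up, for example:
# Inspect the temporary client container after it has run.
docker logs mongoClientTemp
docker inspect -f '{{.State.ExitCode}}' mongoClientTemp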
EDIT: tutumcloud repository is deprecated and no longer maintained, see other answers
I suggest that you use environment variables to set mongo user, database and password. tutum (owned by Docker) published a very good image
https://github.com/tutumcloud/mongodb
docker run -d -p 27017:27017 -p 28017:28017 -e MONGODB_USER="user" -e MONGODB_DATABASE="mydatabase" -e MONGODB_PASS="mypass" tutum/mongodb
You can convert these into docker-compose environment variables; you don't have to hard-code them.
environment:
  MONGODB_USER: "${db_user_env}"
  MONGODB_DATABASE: "${dbname_env}"
  MONGODB_PASS: "${db_pass}"
This configuration will read from your session's environment variables.
In your project directory, create another directory docker-entrypoint-initdb.d; the file tree then looks like this:
📦Project-directory
┣ 📂docker-entrypoint-initdb.d
┃ ┗ 📜mongo-init.js
┗ 📜docker-compose.yaml
The docker-compose.yml contains:
version: "3.7"
services:
mongo:
container_name: container-mongodb
image: mongo:latest
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
MONGO_INITDB_DATABASE: root-db
volumes:
- ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
mongo-init.js contains the JavaScript code to create a user with different roles.
print("Started Adding the Users.");
db = db.getSiblingDB("admin");
db.createUser({
user: "userx",
pwd: "1234",
roles: [{ role: "readWrite", db: "admin" }],
});
print("End Adding the User Roles.");
You can modify the mongo-init.js as you need.
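To check that the script ran, you can open a mongo shell as the new user once the container is up, for example (container name and credentials as defined above):
# Connect to the admin database as the user created by mongo-init.js (sketch).
docker exec -it container-mongodb mongo -u userx -p 1234 --authenticationDatabase admin admin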
After reading the official mongo docker page, I found that you can create an admin user a single time, even if the auth option is being used. This is not well documented, but it simply works (hopefully this is intentional and will not change).
Therefore, you can keep using the auth option all the time.
I created a github repository with scripts wrapping up the commands to be used. The most important command lines to run are:
docker exec db_mongodb mongo admin /setup/create-admin.js
docker exec db_mongodb mongo admin /setup/create-user.js -u admin -p admin --authenticationDatabase admin
The first line will create the admin user (and mongo will not complain even with auth option). The second line will create your "normal" user, using the admin rights from the first one.
The mongo image provides the /docker-entrypoint-initdb.d/ path to deploy custom .js or .sh setup scripts. Check this post for more details:
How to create a DB for MongoDB container on start up?
file: docker-compose.yaml
mongo:
  image: mongo:latest
  volumes_from:
    - data
  ports:
    - "27017:27017"
  command: --auth
  container_name: "db_mongodb"
data:
  image: mongo:latest
  volumes:
    - /var/lib/mongo
    - ./setup:/setup
  command: "true"
  container_name: "db_mongodb_data"
file: .buildMongo.sh
#!/bin/sh
docker-compose down
docker-compose up -d
sleep 1
docker exec db_mongodb mongo admin /setup/create-admin.js
docker exec db_mongodb mongo myDb /setup/create-user.js -u admin -p admin --authenticationDatabase admin
The create-admin.js and create-user.js files contain commands that you would run in the mongo shell, so they should be easy to understand. The general direction is the same as in jzqa's answer: environment variables.
So the question here is how to create a user; I think this answers that point. You can check the complete setup here: https://github.com/Lus1t4nUm/mongo_docker_bootstrap
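For illustration, create-admin.js might contain something along these lines; this is a hypothetical sketch (not the repository's exact file), expressed here as a one-off shell command:
# Hypothetical equivalent of create-admin.js: create the admin user on the admin database.
docker exec db_mongodb mongo admin --eval 'db.createUser({user: "admin", pwd: "admin", roles: [{role: "root", db: "admin"}]})'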
For initializing mongo with an initial user/password/database triple and initdb scripts using only one docker-compose.yml, without any extra configuration, you can use the bitnami/mongodb image.
In my case, my scripts under the /docker-entrypoint-initdb.d directory in the container did not run successfully after setting the environment variables MONGODB_USERNAME and MONGODB_PASSWORD (env variables specific to the bitnami image), because mongod runs with the --auth option automatically when you set these variables. Consequently, I got authentication errors while the container was executing the scripts.
That was because it was connecting to: mongodb://192.168.192.2:27017/compressors=disabled&gssapiServiceName=mongodb
TERMINAL LOG OF THE ERROR
FIRST DOCKER-COMPOSE FILE:
version: "3"
services:
mongodb:
container_name: mongodb
image: 'docker.io/bitnami/mongodb:4.2-debian-10'
ports:
- "27017:27017"
volumes:
- "mongodb_data:/bitnami/mongodb"
- "./mongodb/scripts:/docker-entrypoint-initdb.d"
environment:
- MONGODB_INITSCRIPTS_DIR=/docker-entrypoint-initdb.d
- MONGODB_USERNAME=some_username
- MONGODB_PASSWORD=some_password
- MONGODB_DATABASE=some_db_name
networks:
backend:
restart: unless-stopped
volumes:
mongodb_data:
networks:
backend:
driver: bridge
INIT JS FILE UNDER ./mongodb/scripts PATH:
let db = connect("localhost:27017/some_db_name");
db.auth("some_username", "some_password");
let collections = db.getCollectionNames();
let storeFound = false;
let index;
for (index = 0; index < collections.length; index++) {
    if ("store" === collections[index]) {
        storeFound = true;
    }
}
if (!storeFound) {
    db.createCollection("store");
    db.store.createIndex({ "name": 1 });
}
So I decided to add new environment variables to my docker-compose.yml after inspecting the https://github.com/bitnami/bitnami-docker-mongodb/blob/master/4.2/debian-10/rootfs/opt/bitnami/scripts/libmongodb.sh file.
In this sh file there is a function, mongodb_custom_init_scripts(), for executing the scripts. To execute the script files it calls the mongodb_execute() method, in which, after the mongod instance is up and running, the mongo client connects to the mongod instance using a few parameters.
########################
# Execute an arbitrary query/queries against the running MongoDB service
# Stdin:
#   Query/queries to execute
# Arguments:
#   $1 - User to run queries
#   $2 - Password
#   $3 - Database where to run the queries
#   $4 - Host (default to result of get_mongo_hostname function)
#   $5 - Port (default $MONGODB_PORT_NUMBER)
#   $6 - Extra arguments (default $MONGODB_CLIENT_EXTRA_FLAGS)
# Returns:
#   None
########################
mongodb_execute() {
    local -r user="${1:-}"
    local -r password="${2:-}"
    local -r database="${3:-}"
    local -r host="${4:-$(get_mongo_hostname)}"
    local -r port="${5:-$MONGODB_PORT_NUMBER}"
    local -r extra_args="${6:-$MONGODB_CLIENT_EXTRA_FLAGS}"
    local result
    local final_user="$user"
    # If password is empty it means no auth, do not specify user
    [[ -z "$password" ]] && final_user=""

    local -a args=("--host" "$host" "--port" "$port")
    [[ -n "$final_user" ]] && args+=("-u" "$final_user")
    [[ -n "$password" ]] && args+=("-p" "$password")
    [[ -n "$extra_args" ]] && args+=($extra_args)
    [[ -n "$database" ]] && args+=("$database")

    "$MONGODB_BIN_DIR/mongo" "${args[@]}"
}
After that, I added new environment variables to my docker-compose file: MONGODB_ADVERTISED_HOSTNAME, MONGODB_PORT_NUMBER, and MONGODB_CLIENT_EXTRA_FLAGS.
So my final docker-compose.yml looks like:
version: "3"
services:
mongodb:
container_name: mongodb
image: 'docker.io/bitnami/mongodb:4.2-debian-10'
ports:
- "27017:27017"
volumes:
- "mongodb_data:/bitnami/mongodb"
- "./mongodb/scripts:/docker-entrypoint-initdb.d"
environment:
- MONGODB_INITSCRIPTS_DIR=/docker-entrypoint-initdb.d
- MONGODB_USERNAME=some_username
- MONGODB_PASSWORD=some_password
- MONGODB_DATABASE=some_db_name
- MONGODB_ADVERTISED_HOSTNAME=localhost
- MONGODB_PORT_NUMBER=27017
- MONGODB_CLIENT_EXTRA_FLAGS=--authenticationDatabase=some_db_name
networks:
backend:
restart: unless-stopped
volumes:
mongodb_data:
networks:
backend:
driver: bridge
Now it was connecting with this URL:
mongodb://localhost:27017/?authSource=some_db_name&compressors=disabled&gssapiServiceName=mongodb
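To double-check that the init script actually ran, you can list the collections as the created user, for example (names taken from the compose file above):
# The store collection created by the init script should show up here.
docker exec -it mongodb mongo "mongodb://some_username:some_password@localhost:27017/some_db_name?authSource=some_db_name" --eval "db.getCollectionNames()"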
Add the --noauth option to the mongod command. An extract from my docker-compose.yml file:
mongors:
  image: mongo:latest
  command: mongod --noprealloc --smallfiles --replSet mongors2 --dbpath /data/db --nojournal --oplogSize 16 --noauth
  environment:
    TERM: xterm
  volumes:
    - ./data/mongors:/data/db

Docker postgres does not run init file in docker-entrypoint-initdb.d

Based on Docker's Postgres documentation, I can create any *.sql file inside /docker-entrypoint-initdb.d and have it automatically run.
I have init.sql that contains CREATE DATABASE ronda;
In my docker-compose.yaml, I have
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn ronda.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
data:
  restart: always
  build: ./postgres/
  volumes:
    - /var/lib/postgresql
  command: "true"
and my postgres Dockerfile,
FROM library/postgres
RUN mkdir -p /docker-entrypoint-initdb.d
COPY init.sql /docker-entrypoint-initdb.d/
Running docker-compose build and docker-compose up work fine, but the database ronda is not created.
This is how I use postgres in my projects and preload the database.
file: docker-compose.yml
db:
  container_name: db_service
  build:
    context: .
    dockerfile: ./Dockerfile.postgres
  ports:
    - "5432:5432"
  volumes:
    - /var/lib/postgresql/data/
This Dockerfile loads the file named pg_dump.backup (binary dump) or pg_dump.sql (plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# Convert the binary dump to SQL if present ("|| true" keeps the build going when it is absent)
RUN [ -e "pg_dump.backup" ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload database on init
RUN [ -e "pg_dump.sql" ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
If you need to retry loading the dump, you can remove the current database with the command:
docker-compose rm db
Then you can run docker-compose up to retry load the database.
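The dump files the Dockerfile expects would typically be produced from an existing database with pg_dump, for example (database name and connection details below are placeholders):
# Custom-format (binary) dump that pg_restore can read:
pg_dump -Fc -h localhost -U myuser mydb > pg_dump.backup
# Or a plain SQL dump that the entrypoint can execute directly:
pg_dump -h localhost -U myuser mydb > pg_dump.sql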
If your initialisation requirements are just to create the ronda schema, then you could just make use of the POSTGRES_DB environment variable as described in the documentation.
The bit of your docker-compose.yml file for the postgres service would then be:
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: ronda
On a side note, do not use restart: always for your data container as this container does not run any service (just the true command). Doing this you are basically telling Docker to run the true command in an infinite loop.
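Once the container is up, you can confirm that the ronda database exists with something like the following (the default postgres superuser is assumed here):
# List databases inside the running postgres service; ronda should appear in the output.
docker-compose exec postgres psql -U postgres -l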
Had the same problem with postgres 11.
Some points that helped me:
run:
docker-compose rm
docker-compose build
docker-compose up
The obvious: don't run compose in detached mode. You want to see the logs.
After adding the step docker-compose rm to the mix it worked, finally.