My docker compose of mongo fails to authenticate when I add a docker-entrypoint-initdb.d/init-mongo.sh:ro file
docker-compose
version: "3.1"
services:
  mongo1:
    container_name: mongo1
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASS}
      MONGO_INITDB_DATABASE: ${MONGO_DB_NAME}
    ports:
      - 27017:27017
      - 28017:28017
    env_file:
      - .env
    volumes:
      - volume-mongo:/data/db
      - ./mongo/init-mongo-js.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
    command: ['--auth', '--wiredTigerCacheSizeGB=1']
    networks:
      - mongo-network
  # mongo-express:
  #   container_name: mongo-express
  #   image: mongo-express
  #   restart: always
  #   ports:
  #     - '8081:8081'
  #   env_file:
  #     - .env
  #   environment:
  #     - ME_CONFIG_OPTIONS_EDITORTHEME=ambiance
  #     - ME_CONFIG_BASICAUTH_USERNAME=${MONGO_USER}
  #     - ME_CONFIG_BASICAUTH_PASSWORD=${MONGO_PASS}
  #     - ME_CONFIG_MONGODB_ENABLE_ADMIN=true
  #     - ME_CONFIG_MONGODB_AUTH_DATABASE=admin
  #     - ME_CONFIG_MONGODB_SERVER=mongo1
  #     - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_USER}
  #     - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_PASS}
  #   depends_on:
  #     - mongo1
  #   networks:
  #     - mongo-network
volumes:
  volume-mongo:
    driver: local
networks:
  mongo-network:
    driver: bridge
.env
MONGO_DB_NAME=DBX
MONGO_USER=MYNEWUSERNAME
MONGO_PASS=somesecret
# Mongo host
Attempting to log in fails:
docker exec -it mongo1 mongo admin -u MYNEWUSERNAME -p somesecret
When I remove the init-mongo.sh:ro volume mount it works, but then my initialisation script never runs.
I have been referred to this article:
https://github.com/docker-library/mongo/issues/174
I'm assuming the reason it fails is that my bash file, init-mongo.sh, is overriding this section of the image's entrypoint:
if [ "$MONGO_INITDB_ROOT_USERNAME" ] && [ "$MONGO_INITDB_ROOT_PASSWORD" ]; then
    rootAuthDatabase='admin'

    "${mongo[@]}" "$rootAuthDatabase" <<-EOJS
        db.createUser({
            user: $(_js_escape "$MONGO_INITDB_ROOT_USERNAME"),
            pwd: $(_js_escape "$MONGO_INITDB_ROOT_PASSWORD"),
            roles: [ { role: 'root', db: $(_js_escape "$rootAuthDatabase") } ]
        })
EOJS
fi
If that theory is correct, then including that block in my script along with my additional changes seems like a good solution. The only problem is that in VS Code, with the shell-format extension, the syntax highlighting of this combination doesn't look right, so I suspect it's wrong.
init-mongo.sh:ro
Attempting to combine the following:
#!/bin/bash
set -e
# set -e makes the whole script exit when a command fails, so the script
# can't fail silently and still start up mongo.

if [ "$MONGO_INITDB_ROOT_USERNAME" ] && [ "$MONGO_INITDB_ROOT_PASSWORD" ]; then
    rootAuthDatabase='admin'

    "${mongo[@]}" "$rootAuthDatabase" <<-EOJS
        db.createUser({
            user: $(_js_escape "$MONGO_INITDB_ROOT_USERNAME"),
            pwd: $(_js_escape "$MONGO_INITDB_ROOT_PASSWORD"),
            roles: [ { role: 'root', db: $(_js_escape "$rootAuthDatabase") } ]
        })
EOJS
fi

mongo <<EOF
use ${MONGO_DB_NAME}
db.createCollection("users")
db.users.insert({"name": "mike"})
EOF
Any help fixing this bash script is appreciated. Thanks.
I have experimented and removed this section from the docker-compose file:
environment:
  MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
  MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASS}
  MONGO_INITDB_DATABASE: ${MONGO_DB_NAME}
and instead create the admin user directly in the init file.
#!/bin/bash
set -e

mongo <<EOF
use admin
db.createUser(
  {
    user: "${MONGO_USER}",
    pwd: "${MONGO_PASS}",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)
use ${MONGO_DB_NAME}
db.createCollection("users")
db.users.insert({"name": "john"})
EOF
I'm getting the following error when dockerizing a Node/Postgres app that uses Sequelize as its ORM backend:
Unhandled rejection SequelizeConnectionRefusedError: connect
ECONNREFUSED 127.0.0.1:5432 app_1 | at
connection.connect.err
(/home/app/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:170:24)
These lines of code seem to be the culprit; Docker should not be using these credentials, as they are for my local setup.
if (process.env.NODE_ENV === "production") {
  var sequelize = new Sequelize(process.env.DATABASE_URL);
} else {
  // docker is looking at these credentials..... when it should not
  var sequelize = new Sequelize("elifullstack", "eli", "", {
    host: "127.0.0.1",
    dialect: "postgres",
    pool: {
      max: 100,
      min: 0,
      idle: 200000,
      // #note https://github.com/sequelize/sequelize/issues/8133#issuecomment-359993057
      acquire: 1000000,
    },
  });
}
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: ./server
    depends_on:
      - database
    ports:
      - 5000:5000
    environment:
      # database refers to the database server at the bottom called "database"
      - PSQL_HOST=database
      - PSQL_USER=postgres
      - PORT=5000
      - PSQL_NAME=elitypescript
    command: npm run server
  client:
    build: ./client
    image: react_client
    links:
      - app
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 3001:3001
    command: npm run start
    env_file:
      - ./client/.env
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - database:/var/lib/postgresql/data
    ports:
      - 3030:5432
volumes:
  database:
./server/dockerFile
FROM node:10.6.0
COPY . /home/app
WORKDIR /home/app
COPY package.json ./
RUN npm install
EXPOSE 5000
I looked at other similar questions like the following, but it ultimately did not help solve the issue.
Docker - SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
I solved it...
What I did was change this
host: "127.0.0.1",
to this
let sequelize;

if (process.env.NODE_ENV === "production") {
  sequelize = new Sequelize(process.env.DATABASE_URL);
} else {
  sequelize = new Sequelize(
    process.env.POSTGRES_DB || "elitypescript",
    process.env.POSTGRES_USER || "eli",
    "",
    {
      host: process.env.PSQL_HOST || "localhost",
      dialect: "postgres",
      pool: {
        max: 100,
        min: 0,
        idle: 200000,
        // #note https://github.com/sequelize/sequelize/issues/8133#issuecomment-359993057
        acquire: 1000000,
      },
    }
  );
}
That way the host is set by the docker-compose environment variable:
PSQL_HOST: database
and that connects to
database:
  image: postgres:9.6.8-alpine
  volumes:
    - database:/var/lib/postgresql/data
  ports:
    - 3030:5432
Edit
# docker-compose.yml
version: "3"
services:
  app:
    build: ./server
    depends_on:
      - database
    ports:
      - 5000:5000
    environment:
      PSQL_HOST: database
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_DB: ${POSTGRES_DB:-elitypescript}
    command: npm run server
  client:
    build: ./client
    image: react_client
    links:
      - app
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 3001:3001
    command: npm run start
    env_file:
      - ./client/.env
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - database:/var/lib/postgresql/data
    ports:
      - 3030:5432
volumes:
  database:
I'm trying to initialize a mongo replica set and add two nodes (swarm services) to it. I'm using docker-compose 3.4 and a shell script that I execute from an ansible playbook (version 2.7.10).
The following pieces of code are, respectively, the docker-compose file, the ansible playbook and the shell script:
version: '3.4'
services:
  mongo:
    image: mongodb:4.0.9-debian8-1
    command: mongod --smallfiles --replSet rs_mongo --port 27017
    ports:
      - 27017:27017
    volumes:
      - /opt/application/fwcwas/mongo/:/data/db
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.mongo == true
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: pwd
      MONGO_REPLICATION_MODE: RS
      MONGO_REPLICATION_REPLSETNAME: rs_mongo
    networks:
      - web
  mongo02:
    image: mongodb:4.0.9-debian8-1
    command: mongod --smallfiles --replSet rs_mongo --port 27018
    ports:
      - 27018:27017
    volumes:
      - /opt/application/fwcwas/mongo02/:/data/db
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.mongo == true
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: pwd
      MONGO_REPLICATION_MODE: RS
      MONGO_REPLICATION_REPLSETNAME: rs_mongo
    networks:
      - web
  mongo03:
    image: mongodb:4.0.9-debian8-1
    command: mongod --smallfiles --replSet rs_mongo --port 27019
    ports:
      - 27019:27017
    volumes:
      - /opt/application/fwcwas/mongo03/:/data/db
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.mongo == true
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: pwd
      MONGO_REPLICATION_MODE: RS
      MONGO_REPLICATION_REPLSETNAME: rs_mongo
    networks:
      - web
networks:
  web:
    external:
      name: mongo_network
Ansible playbook:
##########################################################
# Create a mongo replicaset using a shell script
##########################################################
- name: Ensure replicaset rs_mongo exists
  shell: chmod +x init_replica_set.sh
  args:
    chdir: /home/docker/mongo/
  when:
    - ansible_host in groups['manager_launcher']
The shell script that initializes replica set mode and adds the mongo replicas (init_replica_set.sh):
#!/bin/bash
echo "Initializing replica set on master"
replicate='rs.initiate();sleep(1000);cfg=rs.conf();cfg.members[0].host="mongo:27017";rs.reconfig(cfg);rs.add({host:"mongo02:27018",priority:0.5});rs.add({host:"mongo03:27019",priority:0.5});rs.status();'
docker exec -it $(docker ps -qf "name = mongo_mongo.1") bash -c "echo '$replicate' | mongo --port 27017 -u username -p pwd"
The script is supposed to initialize the replica set with the swarm service mongo:27017 as the primary and then add the services mongo02:27018 and mongo03:27019, but it only performed the initialization with the first service (mongo:27017). When I tried to add the nodes manually:
rs.add({host:"mongo02:27018",priority:0.5});
I got this error
{
  "operationTime" : Timestamp(1571407148, 1),
  "ok" : 0,
  "errmsg" : "Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: mongo:27017; the following nodes did not respond affirmatively: mongo02:27018 failed with Error connecting to mongo02:27018 (10.0.2.24:27018) :: caused by :: Connection refused",
  "code" : 74,
  "codeName" : "NodeNotFound",
  "$clusterTime" : {
    "clusterTime" : Timestamp(1571407148, 1),
    "signature" : {
      "hash" : BinData(0,"IAMJw09rJWNAcRle0WWba1eE8os="),
      "keyId" : NumberLong("6749137026550857730")
    }
  }
}
Any advice please?
I prefer to initialize a replica set by defining all the parameters for the replica set at the same time. i.e., call init and add together...
Also, make sure each node can reach the others by hostname.
Example:
rs.initiate(
  {
    _id: "shard1",
    version: 1,
    members: [
      { _id: 0, host: "ip-172-31-30-65.us-west-2.compute.internal:27017" },
      { _id: 1, host: "ip-172-31-17-88.us-west-2.compute.internal:27017" },
      { _id: 2, host: "ip-172-31-23-140.us-west-2.compute.internal:27017" }
    ]
  }
)
Also, looking at your docker-compose: you start mongo02 on port 27018, which is fine, but then you map 27018 to container port 27017. Shouldn't this be mapped 27018:27018 instead?
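For example, the mongo02 service's mapping could look like this (a sketch; only the relevant keys shown):

```yaml
mongo02:
  command: mongod --smallfiles --replSet rs_mongo --port 27018
  ports:
    - 27018:27018   # container side must match the port mongod listens on
```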
I need to configure a MongoDB cluster with two nodes. Each node should have all data replicated; if one server dies, the other should take over as primary.
Before I configure the cluster, I'm doing a local test using docker instances. As I saw in the documentation, I should have at least 3 instances of MongoDB. Is that correct?
First I created the three instances with docker, then I configured instance one to be the primary. The code below is my docker-compose file and the configure script.
The Docker compose:
version: '3'
services:
  mongo-2:
    container_name: mongo-2
    image: mongo:4
    ports:
      - 30102:27017
    command: mongod --replSet cnf-serv --port 27017 --oplogSize 16 --bind_ip_all
    restart: always
  mongo-3:
    container_name: mongo-3
    image: mongo:4
    ports:
      - 30103:27017
    command: mongod --replSet cnf-serv --port 27017 --oplogSize 16 --bind_ip_all
    restart: always
  mongo-1:
    container_name: mongo-1
    image: mongo:4
    ports:
      - 30101:27017
    command: mongod --replSet cnf-serv --port 27017 --oplogSize 16 --bind_ip_all
    links:
      - mongo-2:mongo-2
      - mongo-3:mongo-3
    restart: always
  mongo-setup:
    container_name: mongo-setup
    image: mongo:4
    depends_on:
      - mongo-1
      - mongo-2
      - mongo-3
    links:
      - mongo-1:mongo-1
      - mongo-2:mongo-2
      - mongo-3:mongo-3
    volumes:
      - ./scripts:/scripts
    environment:
      - MONGO1=mongo-1
      - MONGO2=mongo-2
      - MONGO3=mongo-3
      - RS=cnf-serv
      - PORT=27017
    entrypoint: [ "/scripts/setup.sh" ]
The configure script:
#!/bin/bash

mongodb1=`getent hosts ${MONGO1} | awk '{ print $1 }'`
mongodb2=`getent hosts ${MONGO2} | awk '{ print $1 }'`
mongodb3=`getent hosts ${MONGO3} | awk '{ print $1 }'`

port=${PORT:-27017}

echo "Waiting for startup.."
until mongo --host ${mongodb1}:${port} --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' &>/dev/null; do
  printf '.'
  sleep 1
done

echo "Started.."
echo setup.sh time now: `date +"%T" `

mongo --host ${mongodb1}:${port} <<EOF
var cfg = {
  "_id": "${RS}",
  "protocolVersion": 1,
  "members": [
    { "_id": 100, "host": "${mongodb1}:${port}" },
    { "_id": 101, "host": "${mongodb2}:${port}" },
    { "_id": 102, "host": "${mongodb3}:${port}" }
  ]
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
EOF
When I try to connect in any external container port, I have the same error message: Could not connect to MongoDB on the provided host and port
How can I configure an internal cluster using docker?
A 2-node replica set won't be very much help; if one of the nodes goes down, then the other node can't take over as primary because it can't command a majority of votes in the election. You get some benefit in that you have a full copy of the data on the second node, but to get the full benefit of your replica set (both redundancy and high-availability), you would need to add a 3rd node.
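The arithmetic behind this can be sketched with a small helper (an illustrative function, not part of MongoDB):

```shell
# A primary election needs a strict majority of the voting members:
# floor(n / 2) + 1 votes for n voters.
majority() { echo $(( $1 / 2 + 1 )); }

majority 2   # prints 2 -> the lone survivor of a 2-node set can never be elected
majority 3   # prints 2 -> a 3-node set still has a majority after one failure
```

This is why the documentation recommends an odd number of voting members: going from 2 to 3 nodes buys fault tolerance, while going from 3 to 4 does not.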
I'm trying to configure a mongodb replicaSet using docker-compose, but when I stop the master container it seems that it doesn't pass to the secondary.
redis:
  image: redis
  ports:
    - "6379:6379"
mongo3:
  hostname: mongo3
  image: mongo
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rs", "--journal", "--dbpath", "/data/db", "--smallfiles", "--rest" ]
  volumes:
    - ./data/mongo3:/data/db
  ports:
    - "27018:27017"
    - "28018:28017"
  restart: always
mongo2:
  hostname: mongo2
  image: mongo
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rs", "--journal", "--dbpath", "/data/db", "--smallfiles", "--rest" ]
  volumes:
    - ./data/mongo2:/data/db
  ports:
    - "27019:27017"
    - "28019:28017"
  restart: always
mongo1:
  hostname: mongo1
  image: mongo
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rs", "--journal", "--dbpath", "/data/db", "--smallfiles", "--rest" ]
  volumes:
    - ./data/mongo1:/data/db
  ports:
    - "27017:27017"
    - "28017:28017"
  links:
    - mongo2:mongo2
    - mongo3:mongo3
  restart: always
web:
  build: .
  ports:
    - "2000:2000"
  volumes:
    - .:/vip
  links:
    - redis
    - mongo1
    - mongo2
    - mongo3
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  links:
    - web:web
mongosetup:
  image: mongo
  links:
    - mongo1:mongo1
    - mongo2:mongo2
    - mongo3:mongo3
  volumes:
    - ./scripts:/scripts
  entrypoint: [ "/scripts/setup.sh" ]
setup.sh:
#!/bin/bash

MONGODB1=`ping -c 1 mongo1 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
MONGODB2=`ping -c 1 mongo2 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
MONGODB3=`ping -c 1 mongo3 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`

echo "**********************************************" ${MONGODB1}
echo "Waiting for startup.."
until curl http://${MONGODB1}:28017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

echo curl http://${MONGODB1}:28017/serverStatus\?text\=1 2>&1 | grep uptime | head -1
echo "Started.."
echo SETUP.sh time now: `date +"%T" `

mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
  "_id": "rs",
  "version": 1,
  "members": [
    { "_id": 0, "host": "${MONGODB1}:27017", "priority": 2 },
    { "_id": 1, "host": "${MONGODB2}:27017", "priority": 0 },
    { "_id": 2, "host": "${MONGODB3}:27017", "priority": 0 }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.slaveOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSlaveOk();
EOF
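As an aside, scraping `ping` output for the container IPs (as setup.sh does above) is fragile; a sketch of an alternative that reads the system resolver directly, assuming the hostnames resolve inside the setup container:

```shell
# Hypothetical helper: resolve a hostname to its first address via getent
# instead of parsing ping's output.
resolve_ip() {
  getent hosts "$1" | awk '{ print $1; exit }'
}

resolve_ip localhost   # prints the first address listed for localhost
```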
I had a similar issue and resolved it with the following compose file:
version: "3.8"
services:
  mongo1:
    image: mongo:4.2
    container_name: mongo1
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
    volumes:
      - ./data/mongo-1:/data/db
    ports:
      - 30001:30001
    healthcheck:
      test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongo1:30001\"},{_id:1,host:\"mongo2:30002\"},{_id:2,host:\"mongo3:30003\"}]}).ok || rs.status().ok" | mongo --port 30001 --quiet) -eq 1
      interval: 10s
      start_period: 30s
  mongo2:
    image: mongo:4.2
    container_name: mongo2
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30002"]
    volumes:
      - ./data/mongo-2:/data/db
    ports:
      - 30002:30002
  mongo3:
    image: mongo:4.2
    container_name: mongo3
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30003"]
    volumes:
      - ./data/mongo-3:/data/db
    ports:
      - 30003:30003
with the following in my /etc/hosts file:
127.0.0.1 mongo1
127.0.0.1 mongo2
127.0.0.1 mongo3
I documented it in a GitHub repo and a little blog post here:
https://github.com/UpSync-Dev/docker-compose-mongo-replica-set
https://www.upsync.dev/2021/02/02/run-mongo-replica-set.html
Update: This does not work! You do need to run rs.initiate()
With MongoDB 4.0, you don't need a 4th container to run a setup script. It is really simple to bring up a replicaSet of 3 containers:
version: "3"
services:
  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo:4.0-xenial
    expose:
      - 27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo:4.0-xenial
    expose:
      - 27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo:4.0-xenial
    expose:
      - 27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
More info here: https://github.com/msound/localmongo/tree/4.0
I was looking for how to start MongoDB Replica Set with one DB instance for local development and ended up here. I found the answers here too complicated, so I came up with the following solution:
docker-compose.yml
version: "3"
services:
  mongo:
    hostname: mongodb
    container_name: mongodb
    image: mongo:latest
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - ./scripts:/docker-entrypoint-initdb.d/
    command: ["--replSet", "rs0", "--bind_ip_all"]
There is a folder called 'scripts' in the current directory with a single file in it called 'init.js' (the name is not important). This folder is mounted as a volume at '/docker-entrypoint-initdb.d/', which is a special directory: when MongoDB starts with an empty data directory, every file in it is executed. The content of the file is:
init.js
rs.initiate();
I would advise you to have a look at khezen/mongo.
You can deploy a mongo replica set across a 3-node docker swarm with the following:
version: '3'
services:
  replica1:
    image: khezen/mongo:slim
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == node-1
    environment:
      RS_NAME: shard1
      SHARD_SVR: 'y'
      AUTH: 'y'
    volumes:
      - /data/mongo/replica1:/data/db
    networks:
      - mongo_cluster
  replica2:
    image: khezen/mongo:slim
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == node-2
    environment:
      RS_NAME: shard1
      SHARD_SVR: 'y'
      AUTH: 'y'
    volumes:
      - /data/mongo/replica2:/data/db
    networks:
      - mongo_cluster
  replica3:
    image: khezen/mongo:slim
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.hostname == node-3
    environment:
      RS_NAME: shard1
      SHARD_SVR: 'y'
      MASTER: replica3
      SLAVES: replica1 replica2
      AUTH: 'y'
    volumes:
      - /data/mongo/replica3:/data/db
    networks:
      - mongo_cluster
networks:
  mongo_cluster:
    driver: overlay
disclaimer: I am the maintainer of this image.
I set up a gist with a guide on how to set it up using a docker-compose file and mongoose. https://gist.github.com/harveyconnor/518e088bad23a273cae6ba7fc4643549
I had a similar problem setting up a replica set in a standalone mongodb service with authentication, and here is what I ended up with.
docker-compose.yml:
version: '3.7'
services:
  ...
  db:
    image: mongo
    restart: always
    expose:
      - 27017
    environment:
      MONGO_INITDB_DATABASE: ${DATABASE_NAME}
      MONGO_INITDB_ROOT_USERNAME: ${DATABASE_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${DATABASE_PASSWORD}
      MONGO_REPLICA_SET_NAME: ${MONGO_REPLICA_SET_NAME}
    command: ["--replSet", "${MONGO_REPLICA_SET_NAME}", "--bind_ip_all"]
    healthcheck:
      test: test $$(echo "rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 10s
      start_period: 30s
    volumes:
      - ./db:/data/db
      - ./scripts/set-credentials.sh:/docker-entrypoint-initdb.d/set-credentials.sh
  replica-setup:
    image: mongo
    restart: on-failure
    networks:
      default:
    volumes:
      - ./scripts/setup-replica.sh:/scripts/setup-replica.sh
    entrypoint: [ "bash", "/scripts/setup-replica.sh" ]
    depends_on:
      - db
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DATABASE_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${DATABASE_PASSWORD}
./scripts/setup-replica.sh:
#!/bin/bash

MONGODB1=db

echo "Waiting for MongoDB startup..."
until curl http://${MONGODB1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

# check if replica set is already initiated
RS_STATUS=$( mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.status().ok" )

if [[ $RS_STATUS != 1 ]]
then
  echo "[INFO] Replication set config invalid. Reconfiguring now."
  RS_CONFIG_STATUS=$( mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.status().codeName" )
  if [[ $RS_CONFIG_STATUS == 'InvalidReplicaSetConfig' ]]
  then
    mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD <<EOF
config = rs.config()
config.members[0].host = "db:27017" // important: set the host name of the db instance
rs.reconfig(config, {force: true})
EOF
  else
    echo "[INFO] MongoDB setup finished. Initiating replica set."
    mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.initiate()" > /dev/null
  fi
else
  echo "[INFO] Replication set already initiated."
fi
./scripts/set-credentials.sh:
#!/bin/bash
set -e

mongo -- "$MONGO_INITDB_DATABASE" <<EOF
var rootUser = '$MONGO_INITDB_ROOT_USERNAME';
var rootPassword = '$MONGO_INITDB_ROOT_PASSWORD';
var admin = db.getSiblingDB('admin');
admin.auth(rootUser, rootPassword);

var user = '$MONGO_INITDB_ROOT_USERNAME';
var password = '$MONGO_INITDB_ROOT_PASSWORD';
db.createUser({user: user, pwd: password, roles: ["readWrite"]});
EOF
What I achieved through this is:
- Set up a mongodb service with username/password authentication for a default collection
- Initialize the replica set the first time the services run
- Reconfigure the replica set member when there is previous db data
- Health-check the mongodb service by checking the replica set status
If you just need single node replica set of MongoDB via docker-compose.yml you can simply use this:
mongodb:
  image: mongo:5
  restart: always
  command: ["--replSet", "rs0", "--bind_ip_all"]
  ports:
    - 27018:27017
  healthcheck:
    test: mongo --eval "rs.initiate()"
    start_period: 5s
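Note that a client connecting from the host to a single-node set like this may need to force a direct connection, since the replica set config advertises the container's internal host name. A sketch of a connection string, assuming a driver or shell recent enough to support the directConnection URI option:

```
mongodb://localhost:27018/?directConnection=true
```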
This setup works for me. I have also setup everything in https://github.com/nguyenduyhust/docker-mongodb-replica-set.
Great if that's what you're looking for.
Dockerfile
FROM mongo
RUN mkdir /config
WORKDIR /config
COPY wait-for-it.sh .
COPY mongo-setup.js .
COPY mongo-setup.sh .
RUN chmod +x /config/wait-for-it.sh
RUN chmod +x /config/mongo-setup.sh
CMD [ "bash", "-c", "/config/wait-for-it.sh mongodb1:27011 -- /config/mongo-setup.sh"]
docker-compose.yml
version: "3"
services:
  mongodb1:
    container_name: mongo1
    image: mongo
    restart: always
    volumes:
      - ./volumes/mongodb1:/data/db
    ports:
      - "27011:27011"
    expose:
      - "27011"
    entrypoint: [ "/usr/bin/mongod", "--port", "27011", "--replSet", "rs0", "--bind_ip_all" ]
  mongodb2:
    container_name: mongo2
    image: mongo
    restart: always
    volumes:
      - ./volumes/mongodb2:/data/db
    ports:
      - "27012:27012"
    expose:
      - "27012"
    entrypoint: [ "/usr/bin/mongod", "--port", "27012", "--replSet", "rs0", "--bind_ip_all" ]
  mongodb3:
    container_name: mongo3
    image: mongo
    restart: always
    volumes:
      - ./volumes/mongodb3:/data/db
    ports:
      - "27013:27013"
    expose:
      - "27013"
    entrypoint: [ "/usr/bin/mongod", "--port", "27013", "--replSet", "rs0", "--bind_ip_all" ]
  mongosetup:
    container_name: mongosetup
    image: "mongo-setup"
    build: "./mongo-setup"
    depends_on:
      - mongodb1
  mongo-express:
    container_name: mongo-express
    image: mongo-express
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://mongodb1:27011,mongodb2:27012,mongodb3:27013/?replicaSet=rs0
    ports:
      - 8081:8081
    restart: always
    depends_on:
      - mongodb1
      - mongosetup
mongo-setup.js
rsconf = {
  _id: "rs0",
  members: [
    { "_id": 0, "host": "mongodb1:27011", "priority": 3 },
    { "_id": 1, "host": "mongodb2:27012", "priority": 2 },
    { "_id": 2, "host": "mongodb3:27013", "priority": 1 }
  ]
}
rs.initiate(rsconf);
mongo-setup.sh
#!/usr/bin/env bash

if [ ! -f /data/mongo-init.flag ]; then
  echo "Init replicaset"
  mongo mongodb://mongodb1:27011 mongo-setup.js
  touch /data/mongo-init.flag
else
  echo "Replicaset already initialized"
fi