I would like to start this MongoDB Replica Set:
version: "3"
services:
mongo1:
image: mongo
ports:
- 27017:27017
command: mongod --replSet rs0
mongo2:
image: mongo
ports:
- 27018:27017
command: mongod --replSet rs0
mongo3:
image: mongo
ports:
- 27019:27017
command: mongod --replSet rs0
Wait for those to come up, then access the Mongo shell via terminal:
docker exec -it mongo1 mongo
Then in Mongo shell do:
rs.initiate({"_id":"rs0","members":[{"_id":0,"host":"mongo1:27017"},{"_id":1,"host":"mongo2:27017"},{"_id":2,"host":"mongo3:27017"}]})
Mongo also allows mongo --eval "rs.initiate(..)", which may make things easier.
My question is how do I run this command after mongo1, mongo2, mongo3 are up?
You can do this. I recently had to run mongo --repair, then start MongoDB itself, and once MongoDB was up I needed to add my user to the DB. You can easily adapt this to run commands only after all three MongoDB instances are up.
Possible docker-compose.yml:
version: "2"
services:
mongo:
container_name: mongo
restart: on-failure:10
image: mongo
environment:
- MONGO_INITDB_ROOT_USERNAME=<user>
- MONGO_INITDB_ROOT_PASSWORD=<pass>
- MONGO_INITDB_DATABASE=db
volumes:
- ./data:/data/db
ports:
- "27017:27017"
command: bash -c "mongod --repair && mongod"
mongoClient:
image: mongo
container_name: mongoClient
links:
- mongo
volumes:
- ./deployment_scripts:/deployment_scripts
command:
- /deployment_scripts/add_user.sh
depends_on:
- mongo
app:
container_name: app
restart: always
build: .
volumes:
- .:/usr/src/app
ports:
- "3000:3000"
depends_on:
- mongoClient
links:
- mongo
My /deployment_scripts/add_user.sh script waits for MongoDB to be up:
until mongo --host mongo --eval "print(\"waited for connection\")"
do
sleep 1
done
# you can add more MongoDB waits here
echo "Adding user to MongoDB..."
mongo --host mongo --eval "db.createUser({ user: \"<user>\", pwd: \"<pass>\", roles: [ { role: \"root\", db: \"admin\" } ] });"
echo "User added."
Note that you can address each of your three MongoDB instances by replacing --host mongo with --host mongo1, --host mongo2, or --host mongo3. Do this in both of the mongo --eval commands in the script.
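Putting that note into practice for the original three-node question, an init script run from a similar client container could look roughly like this (a sketch only: the script name, the extra client container, and the mongo1/mongo2/mongo3 hostnames are assumptions based on the compose file in the question):
#!/bin/bash
# Hypothetical /deployment_scripts/init_replica_set.sh, run from a fourth "client"
# container attached to the same compose network as mongo1, mongo2 and mongo3.

# Wait until every mongod answers on its default port
for host in mongo1 mongo2 mongo3; do
  until mongo --host "$host" --eval "print('waited for connection')"; do
    sleep 1
  done
done

echo "All nodes are up, initiating replica set..."
mongo --host mongo1 --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "mongo1:27017"}, {_id: 1, host: "mongo2:27017"}, {_id: 2, host: "mongo3:27017"}]})'
echo "Replica set initiated."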
Credit to this SO answer https://stackoverflow.com/a/45060399/4295037 that I used (until mongo ...).
I assume you are using the official Mongo image; that image is configured with:
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mongod"]
If you check docker-entrypoint.sh, you will notice that you can run any command you want by overriding the CMD.
So, for each mongo container you can do:
$ docker run -d mongo
9bf0473d491a2d7ae821bcf10ed08cd49678d28e46344589622bd9440a6aca65
$ docker ps -q
9bf0473d491a
$ docker exec -ti 9bf0473d491a mongo --eval "rs.initiate(.....)"
MongoDB shell version v3.6.5
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.5
{
"ok" : 0,
"errmsg" : "This node was not started with the replSet option",
"code" : 76,
"codeName" : "NoReplicationEnabled"
}
Notice that the errmsg appears only because in my example the container was started without the --replSet option and rs.initiate() was called with an empty config; with your compose file (which runs mongod --replSet rs0) and the right config it will work for you.
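As a sketch against the compose file from the question (where mongod is started with --replSet rs0 and the question already uses docker exec -it mongo1 mongo), the same command with a full config would be:
docker exec -it mongo1 mongo --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "mongo1:27017"}, {_id: 1, host: "mongo2:27017"}, {_id: 2, host: "mongo3:27017"}]})'
Whether the container is really reachable as mongo1 depends on how compose names it; with the compose file above you may need the generated container name, or add container_name: mongo1 to each service.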
Related
I am trying to connect to a MongoDB replica set using pymongo, but I keep getting the error pymongo.errors.ServerSelectionTimeoutError: No replica set members match selector. The error message also says that my topology type is ReplicaSetNoPrimary, which is odd, as connecting with the mongo shell shows a clear primary.
Note that the replica set works fine and is usable via the mongo shell on the master node.
Also, I have added firewall rules to allow both inbound and outbound traffic on the specified ports, just to make sure this isn't the issue.
I am using docker-compose for the cluster. The file:
version: "3.9"
services:
mongo-master:
image: mongo:latest
container_name: mongo_master
volumes:
- ./data/master:/data/db
ports:
- 27017:27017
command: mongod --replSet dbrs & mongo --eval rs.initiate(`cat rs_config.json`)
stdin_open: true
tty: true
mongo-slave-1:
image: mongo:latest
container_name: mongo_slave_1
volumes:
- ./data/slave_1:/data/db
ports:
- 27018:27017
command: mongod --replSet dbrs
stdin_open: true
tty: true
mongo-slave-2:
image: mongo:latest
container_name: mongo_slave_2
volumes:
- ./data/slave_2:/data/db
ports:
- 27019:27017
command: mongod --replSet dbrs
stdin_open: true
tty: true
The rs_config.json file used above:
{
    "_id" : "dbrs",
    "members" : [
        {
            "_id" : 0,
            "host" : "mongo_master:27017",
            "priority" : 10
        },
        {
            "_id" : 1,
            "host" : "mongo_slave_1:27017"
        },
        {
            "_id" : 2,
            "host" : "mongo_slave_2:27017"
        }
    ]
}
The error is raised on the last line here:
self.__client = MongoClient(["localhost:27017", "localhost:27018", "localhost:27019"], replicaset="dbrs")
self.__collection = self.__client[self.__db_name][collection.value]
self.__collection.insert_one(dictionary_object)
I omitted some code for brevity, but you can assume all class attributes and dictionary_object are well defined according to the pymongo docs.
Also please note that I have tried many different ways to initialize MongoClient, including a connection string (as in the docs), and the connect=False optional parameter as advised in some blogs. The issue persists...
Edit: I tried adding "mongo_master" to my /etc/hosts file pointing at 127.0.0.1 and changing the connection string from localhost to that, and it works with the replica set. This is a bad workaround, but maybe it can help in figuring out a solution.
Thanks in advance for any help!
To get a connection to a MongoDB replicaset from an external client, you must be able to resolve the hostnames from the local client.
https://docs.mongodb.com/manual/tutorial/deploy-replica-set/#connectivity
Ensure that network traffic can pass securely between all members of the set and all clients in the network.
So, add the following to your /etc/hosts file:
127.0.0.1 mongodb-1
127.0.0.1 mongodb-2
127.0.0.1 mongodb-3
To be able to connect both internally and externally, you will need to run each MongoDB service on a different port.
The following script will initiate a 3-node MongoDB replicaset and run a test client. I recommend using the Bitnami image as it takes care of the replset initiation for you. (Borrowing heavily from this configuration)
#!/bin/bash
PROJECT_NAME=replset_test
MONGODB_VERSION=4.4
PYTHON_VERSION=3.9.6
PYMONGO_VERSION=4.0.1
cd "$(mktemp -d)" || exit
cat << EOF > Dockerfile
FROM python:${PYTHON_VERSION}-slim-buster
COPY requirements.txt /tmp/
RUN pip install -r /tmp/requirements.txt
COPY ${PROJECT_NAME}.py .
CMD [ "python", "./${PROJECT_NAME}.py" ]
EOF
cat << EOF > requirements.txt
pymongo==${PYMONGO_VERSION}
EOF
cat << EOF > ${PROJECT_NAME}.py
from pymongo import MongoClient
connection_string = 'mongodb://root:password123@mongodb-1:27017,mongodb-2:27018,mongodb-3:27019/mydatabase?authSource=admin&replicaSet=replicaset'
client = MongoClient(connection_string)
db = client.db
db['mycollection'].insert_one({'a': 1})
record = db['mycollection'].find_one()
if record is not None:
    print(f'{__file__}: MongoDB connection working using connection string "{connection_string}"')
EOF
cp ${PROJECT_NAME}.py ${PROJECT_NAME}_external.py
cat << EOF > docker-compose.yaml
version: '3.9'
services:
  mongodb-1:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION}
    ports:
      - 27017:27017
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-1
      - MONGODB_PORT_NUMBER=27017
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
    volumes:
      - 'mongodb_master_data:/bitnami/mongodb'
  mongodb-2:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION}
    ports:
      - 27018:27018
    depends_on:
      - mongodb-1
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-2
      - MONGODB_PORT_NUMBER=27018
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-1
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
  mongodb-3:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION}
    ports:
      - 27019:27019
    depends_on:
      - mongodb-1
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-3
      - MONGODB_PORT_NUMBER=27019
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_INITIAL_PRIMARY_HOST=mongodb-1
      - MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123
      - MONGODB_REPLICA_SET_KEY=replicasetkey123
  ${PROJECT_NAME}:
    container_name: ${PROJECT_NAME}
    build: .
    depends_on:
      - mongodb-1
      - mongodb-2
      - mongodb-3
volumes:
  mongodb_master_data:
    driver: local
EOF
docker rm --force $(docker ps -a -q --filter name=mongo) > /dev/null 2>&1
docker rm --force $(docker ps -a -q --filter name=${PROJECT_NAME}) > /dev/null 2>&1
docker-compose up --build -d
python ${PROJECT_NAME}.py
docker ps -a -q --filter name=${PROJECT_NAME}
docker logs $(docker ps -a -q --filter name=${PROJECT_NAME})
If all is ok you will get an output confirming both internal and external connectivity:
/tmp/tmp.QM9tQPE8Dj/replset_test.py: MongoDB connection working using connection string "mongodb://root:password123@mongodb-1:27017,mongodb-2:27018,mongodb-3:27019/mydatabase?authSource=admin&replicaSet=replicaset"
d53e8c41ad20
//./replset_test.py: MongoDB connection working using connection string "mongodb://root:password123@mongodb-1:27017,mongodb-2:27018,mongodb-3:27019/mydatabase?authSource=admin&replicaSet=replicaset"
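The external python run above only works because of the /etc/hosts entries added earlier: the driver discovers mongodb-1, mongodb-2 and mongodb-3 from the replica set configuration and has to resolve those names locally. A quick sanity check on a Linux host (a sketch; getent is assumed to be available):
for h in mongodb-1 mongodb-2 mongodb-3; do
  getent hosts "$h" || echo "$h does not resolve - add it to /etc/hosts"
done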
I have a MongoDB container which uses a single-node replica set, configured as in the following code. To run a replica set I need to execute another command in the MongoDB shell, rs.initiate(). How can I add that command to the container's command list so that I do not have to execute it manually in the MongoDB shell?
mongodb:
  image: mongo:5.0
  container_name: mongodb
  command: ["--replSet", "rs0", "--bind_ip_all"] # This works, but I have to go into the mongo shell manually to execute rs.initiate(), which I don't want to do
  # command: ["--replSet", "rs0", "--bind_ip_all", "mongo", "rs.initiate()"] # --> Doesn't work
  # command: ["--replSet", "rs0", "--bind_ip_all", "rs.initiate()"] # --> Doesn't work
  # command: ["--replSet", "rs0", "--bind_ip_all", "mongo rs.initiate()"] # --> Doesn't work
  networks:
    - dev
  ports:
    - 27017:27017
  volumes:
    - ${HOME}/mongodb:/data/db
OK, I am not sure if this is the correct way to solve the problem, but I got it working by using Docker's healthcheck feature. After this change, I do not have to execute rs.initiate() manually.
For details please look at docker mongo for single (primary node only) replica set (for development)?
HealthCheck: https://docs.docker.com/engine/reference/builder/#healthcheck
mongodb:
  image: mongo:5.0
  container_name: mongodb
  command: ["--replSet", "rs0", "--bind_ip_all"]
  networks:
    - dev
  ports:
    - 27017:27017
  volumes:
    - ${HOME}/mongodb:/data/db
  healthcheck:
    test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u root -p imagiaRoot --quiet) -eq 1
    interval: 10s
    start_period: 30s
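This is not part of the original answer, but to confirm the healthcheck really performed the initiation you can ask the node whether it considers itself primary (add -u/-p as in the healthcheck above if authentication is enabled); it should print true:
docker exec mongodb mongo --quiet --eval "rs.isMaster().ismaster"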
This is the portion of the docker-compose file that has served us well to date. However, now I need to convert this to be a single-node replica set (for transactions to work). I don't want any secondary or arbiter - just the primary node. What am I missing to get this working?
mongo:
  image: mongo:4.4.3
  container_name: mongo
  restart: unless-stopped
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: myPass
  command: mongod --port 27017
  ports:
    - '27017:27017'
  volumes:
    - ./data/mongodb:/data/db
    - ./data/mongodb/home:/home/mongodb/
    - ./configs/mongodb/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
Got it working. I inserted the following into the block in my question above:
hostname: mongodb
volumes:
  - ./data/mongodb/data/log/:/var/log/mongodb/
# the healthcheck avoids the need to initiate the replica set
healthcheck:
  test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u root -p imagiaRoot --quiet) -eq 1
  interval: 10s
  start_period: 30s
I was unable to initiate the replica set via the healthcheck. I used the bash script below instead. For Windows users, be sure to connect to your DB using the name of your computer. For example:
mongodb://DESKTOP-QPRKMN2:27017
run-test.sh
#!/bin/bash
echo "Running docker-compose"
docker-compose up -d
echo "Waiting for DB to initialize"
sleep 10
echo "Initiating DB"
docker exec mongo_container mongo --eval "rs.initiate();"
echo "Running tests"
# test result
if go test ./... -v
then
    echo "Test PASSED"
else
    echo "Test FAILED"
fi
# cleanup
docker-compose -f docker-compose.test.yml down
docker-compose file
version: '3.8'
services:
  mongo:
    hostname: $HOST
    container_name: mongo_container
    image: mongo:5.0.3
    volumes:
      - ./test-db.d
    expose:
      - 27017
    ports:
      - "27017:27017"
    restart: always
    command: ["--replSet", "test", "--bind_ip_all"]
This forum post was very helpful: https://www.mongodb.com/community/forums/t/docker-compose-replicasets-getaddrinfo-enotfound/14301/4
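One note on the hostname: $HOST line above: docker-compose only substitutes HOST if it is set in the calling shell, and bash does not export it by default (zsh does). If the hostname comes up empty, exporting it before bringing the stack up (an assumption about how run-test.sh is invoked) fixes it:
export HOST=$(hostname)
docker-compose up -d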
I want to create a Docker container with an instance of Mongo. In particular, I would like to create a replica set with only one node (since I'm interested in transactions and they are only available for replica sets).
Dockerfile
FROM mongo
RUN echo "rs.initiate();" > /docker-entrypoint-initdb.d/replica-init.js
CMD ["--replSet", "rs0"]
docker-compose.yml
version: "3"
services:
db:
build:
dockerfile: Dockerfile
context: .
ports:
- "27017:27017"
If I use the Dockerfile alone everything is fine, but if I use docker-compose it does not work: if I log into the container I get the prompt rs0:OTHER> instead of rs0:PRIMARY>.
I consulted these links, but the proposed solutions do not work:
https://github.com/docker-library/mongo/issues/246#issuecomment-382072843
https://github.com/docker-library/mongo/issues/249#issuecomment-381786889
This is the compose file I have used for a while now for local development. You can remove the keyfile pieces if you don't need authentication.
version: "3.8"
services:
mongodb:
image : mongo:4
container_name: mongodb
hostname: mongodb
restart: on-failure
environment:
- PUID=1000
- PGID=1000
- MONGO_INITDB_ROOT_USERNAME=mongo
- MONGO_INITDB_ROOT_PASSWORD=mongo
- MONGO_INITDB_DATABASE=my-service
- MONGO_REPLICA_SET_NAME=rs0
volumes:
- mongodb4_data:/data/db
- ./:/opt/keyfile/
ports:
- 27017:27017
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
interval: 10s
start_period: 30s
command: "--bind_ip_all --keyFile /opt/keyfile/keyfile --replSet rs0"
volumes:
mongodb4_data:
It uses Docker's health check (with a startup delay) to sneak in the rs.initiate() if it actually needs it after it's already running.
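The healthcheck line is dense; unrolled into plain shell (with compose's $$ escaping collapsed back to $), it does roughly this:
# What the healthcheck runs inside the container, approximately
STATUS=$(echo "rs.initiate().ok || rs.status().ok" | mongo -u "$MONGO_INITDB_ROOT_USERNAME" -p "$MONGO_INITDB_ROOT_PASSWORD" --quiet)
# First run: rs.initiate() succeeds, so .ok is 1.
# Later runs (on this shell version): rs.initiate() returns ok: 0, so the expression falls through to rs.status().ok.
test "$STATUS" -eq 1   # exit 0 marks the container healthy; anything else retries on the next interval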
To create a keyfile.
Mac:
openssl rand -base64 741 > keyfile
chmod 600 keyfile
Linux:
openssl rand -base64 756 > keyfile
chmod 600 keyfile
sudo chown 999 keyfile
sudo chgrp 999 keyfile
The top answer stopped working for me in later MongoDB and Docker versions. Particularly because rs.initiate().ok would throw an error if the replica set was already initiated, causing the whole command to fail. In addition, connecting from another container was failing because the replica set's sole member had some random host, which wouldn't allow the connection. Here's my new docker-compose.yml:
services:
  web:
    # ...
    environment:
      DATABASE_URL: mongodb://root:root@db/?authSource=admin&tls=false
  db:
    build:
      context: ./mongo/
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    ports:
      - '27017:27017'
    volumes:
      - data:/data/db
    healthcheck:
      test: |
        test $$(mongosh --quiet -u root -p root --eval "try { rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'db' }] }).ok } catch (_) { rs.status().ok }") -eq 1
      interval: 10s
      start_period: 30s
volumes:
  data:
Inside ./mongo/, I have a custom Dockerfile that looks like:
FROM mongo:6
RUN echo "password" > /keyfile \
&& chmod 600 /keyfile \
&& chown 999 /keyfile \
&& chgrp 999 /keyfile
CMD ["--bind_ip_all", "--keyFile", "/keyfile", "--replSet", "rs0"]
This Dockerfile is suitable for development, but you'd definitely want a securely generated and persistent keyfile to be mounted in production (and therefore strike the entire RUN command).
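In that spirit, a more production-like variant might generate the keyfile on the host and bind-mount it instead of baking it into the image (paths are assumptions; 999 is the uid/gid of the mongodb user in the official image, matching the chown/chgrp steps shown earlier):
openssl rand -base64 756 > ./mongo/keyfile
chmod 600 ./mongo/keyfile
sudo chown 999:999 ./mongo/keyfile
# then in docker-compose.yml mount it read-only, e.g.:
#   volumes:
#     - ./mongo/keyfile:/keyfile:ro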
You still need to issue replSetInitiate even if there's only one node in the RS.
See also here.
I had to do something similar to build tests around ChangeStreams which are only available when running mongo as a replica set. I don't remember where I pulled this from, so I can't explain it in detail but it does work for me. Here is my setup:
Dockerfile
FROM mongo:5.0.3
RUN echo "rs.initiate({'_id':'rs0', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});" > "/docker-entrypoint-initdb.d/init_replicaset.js"
RUN echo "12345678" > "/tmp/key.file"
RUN chmod 600 /tmp/key.file
RUN chown 999:999 /tmp/key.file
CMD ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/tmp/key.file"]
docker-compose.yml
version: '3.7'
services:
  mongo:
    build: .
    restart: always
    ports:
      - 27017:27017
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u admin -p pass --quiet) -eq 1
      interval: 10s
      start_period: 30s
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: pass
      MONGO_INITDB_DATABASE: test
Run docker-compose up and you should be good.
Connection String: mongodb://admin:pass@localhost:27017/test
Note: you obviously shouldn't use this in production; adjust the key "12345678" in the Dockerfile if security is a concern.
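For a quick smoke test from the host once the stack is up (this assumes the mongo shell is installed locally; authSource=admin is added because the root user is created in the admin database):
mongo "mongodb://admin:pass@localhost:27017/test?authSource=admin" --quiet --eval "rs.status().set"
# should print rs0 once the replica set is initiated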
If you just need a single-node MongoDB replica set via docker-compose.yml, you can simply use this:
mongodb:
  image: mongo:5
  restart: always
  command: ["--replSet", "rs0", "--bind_ip_all"]
  ports:
    - 27018:27017
  healthcheck:
    test: mongo --eval "rs.initiate()"
    start_period: 5s
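One caveat with this minimal setup (not from the original answer): because rs.initiate() is called without a config, the single member advertises the container's internal hostname, so clients on the host usually have to skip replica set discovery, for example:
# connect through the published port and bypass topology discovery
mongosh "mongodb://localhost:27018/?directConnection=true"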
This one works fine for me:
version: '3.4'
services:
  ludustack-db:
    container_name: ludustack-db
    command: mongod --auth
    image: mongo:latest
    hostname: mongodb
    ports:
      - '27017:27017'
    env_file:
      - .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
      - MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
      - MONGO_REPLICA_SET_NAME=${MONGO_REPLICA_SET_NAME}
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 60s
      start_period: 60s
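The .env file referenced above is not shown; it would just supply the four variables used in the service definition, along these lines (values are placeholders):
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=change-me
MONGO_INITDB_DATABASE=ludustack
MONGO_REPLICA_SET_NAME=rs0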
I created a docker-compose setup with two services: a Python API and MongoDB. After starting it for the first time with sudo docker-compose up, it creates the Mongo DB files inside ./data. After sudo docker-compose down and another sudo docker-compose up, Mongo can't access the files in ./data.
Obviously something is wrong with permissions, but I don't know what exactly.
However, if I just stop the containers without sudo docker-compose down and then run sudo docker-compose up again, everything is fine.
So everything goes wrong after down.
systemd service:
[Unit]
Description=RestAPI imp
Requires=docker.service
After=docker.service
[Service]
Type=simple
WorkingDirectory=/home/entrant/myserv
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down
Restart=on-failure
RestartSec=10
KillMode=process
[Install]
WantedBy=multi-user.target
docker-compose.yml
version: '3.5'
services:
  web_dev:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - .:/app
    environment:
      - ENV=PROD
      - DATABASE_URL=mongodb://mongodb:27017/myserv?authSource=admin&replicaSet=myrepl
    depends_on:
      - mongodb
    command: deploy/wait-for-it.sh mongodb:27017 -- gunicorn -b 0.0.0.0:8080 index:api.app -w 9
  mongodb:
    image: mongo:4.0.12-xenial
    container_name: "mongodb"
    environment:
      - MONGO_INITDB_DATABASE=myserv
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
    command: bash -c "
      mongod --fork --replSet myrepl --bind_ip_all --smallfiles --logpath=/dev/null
      && mongo --eval 'rs.initiate()'
      && mongod --shutdown
      && mongod --replSet myrepl --bind_ip_all --smallfiles --logpath=/dev/null
      "
networks:
  default:
    name: web_dev