I am trying to connect to a MongoDB replica set using pymongo, but I keep getting the error: pymongo.errors.ServerSelectionTimeoutError: No replica set members match selector. The error message also says that my topology type is ReplicaSetNoPrimary, which is odd, as connecting with the mongo shell shows a clear primary.
Note that the replica set works fine and is usable via the mongo shell on the master node.
Also, I have added firewall rules to allow both inbound and outbound traffic on the specified ports, just to make sure this isn't the issue.
I am using docker-compose for the cluster. The file:
version: "3.9"
services:
mongo-master:
image: mongo:latest
container_name: mongo_master
volumes:
- ./data/master:/data/db
ports:
- 27017:27017
command: mongod --replSet dbrs & mongo --eval rs.initiate(`cat rs_config.json`)
stdin_open: true
tty: true
mongo-slave-1:
image: mongo:latest
container_name: mongo_slave_1
volumes:
- ./data/slave_1:/data/db
ports:
- 27018:27017
command: mongod --replSet dbrs
stdin_open: true
tty: true
mongo-slave-2:
image: mongo:latest
container_name: mongo_slave_2
volumes:
- ./data/slave_2:/data/db
ports:
- 27019:27017
command: mongod --replSet dbrs
stdin_open: true
tty: true
The rs_config.json file used above:
{
"_id" : "dbrs",
"members" : [
{
"_id" : 0,
"host" : "mongo_master:27017",
"priority" : 10
},
{
"_id" : 1,
"host" : "mongo_slave_1:27017"
},
{
"_id" : 2,
"host" : "mongo_slave_2:27017"
}
]
}
The error is raised on the last line here:
self.__client = MongoClient(["localhost:27017", "localhost:27018", "localhost:27019"], replicaset="dbrs")
self.__collection = self.__client[self.__db_name][collection.value]
self.__collection.insert_one(dictionary_object)
I omitted some code for brevity, but you can assume all class attributes and dictionary_object are well defined according to the pymongo docs.
Also please note that I have tried many different ways to initialize MongoClient, including a connection string (as in the docs), and the connect=False optional parameter as advised in some blogs. The issue persists...
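For reference, this is a minimal sketch of how the failure can be reproduced in isolation (serverSelectionTimeoutMS is only lowered here so it fails faster; everything else mirrors the setup above):
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Minimal reproduction sketch, assuming the docker-compose stack above is running.
client = MongoClient(
    ["localhost:27017", "localhost:27018", "localhost:27019"],
    replicaSet="dbrs",
    serverSelectionTimeoutMS=5000,
)
try:
    client.admin.command("ping")
except ServerSelectionTimeoutError as exc:
    # The exception text lists the hosts pymongo is actually trying to reach.
    # In my case these are the container hostnames from the replica set config
    # (mongo_master:27017, ...), not the localhost seed list I passed in.
    print(exc)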
Edit: I tried adding "mongo_master" to my /etc/hosts file pointing at 127.0.0.1 and changing the connection string from localhost to that, and it works with the replica set. This is a bad workaround, but maybe it can help in figuring out a solution.
Thanks in advance for any help!
To get a connection to a MongoDB replica set from an external client, you must be able to resolve the hostnames from the local client.
https://docs.mongodb.com/manual/tutorial/deploy-replica-set/#connectivity
Ensure that network traffic can pass securely between all members of the set and all clients in the network.
So, add the following to your /etc/hosts file:
127.0.0.1 mongodb-1
127.0.0.1 mongodb-2
127.0.0.1 mongodb-3
To be able to connect both internally and externally, you will need to run each MongoDB service on different ports.
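For example, once the three hostnames above resolve to 127.0.0.1 and each member advertises its own published port, a plain pymongo client can discover the whole set. A minimal sketch, assuming a replica set named replicaset and no authentication (the full, authenticated version is in the script below):
from pymongo import MongoClient

# The hostnames come from /etc/hosts; each port matches the port published for
# that member, so the addresses advertised by the replica set are reachable.
client = MongoClient(
    "mongodb://mongodb-1:27017,mongodb-2:27018,mongodb-3:27019/?replicaSet=replicaset"
)
print(client.admin.command("ping"))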
The following script will initiate a 3-node MongoDB replica set and run a test client. I recommend using the Bitnami image as it takes care of the replica set initiation for you. (Borrowing heavily from this configuration)
#!/bin/bash
PROJECT_NAME=replset_test
MONGODB_VERSION=4.4
PYTHON_VERSION=3.9.6
PYMONGO_VERSION=4.0.1
cd "$(mktemp -d)" || exit
cat << EOF > Dockerfile
FROM python:${PYTHON_VERSION}-slim-buster
COPY requirements.txt /tmp/
RUN pip install -r /tmp/requirements.txt
COPY ${PROJECT_NAME}.py .
CMD [ "python", "./${PROJECT_NAME}.py" ]
EOF
cat << EOF > requirements.txt
pymongo==${PYMONGO_VERSION}
EOF
cat << EOF > ${PROJECT_NAME}.py
from pymongo import MongoClient
connection_string = 'mongodb://root:password123@mongodb-1:27017,mongodb-2:27018,mongodb-3:27019/mydatabase?authSource=admin&replicaSet=replicaset'
client = MongoClient(connection_string)
db = client.db
db['mycollection'].insert_one({'a': 1})
record = db['mycollection'].find_one()
if record is not None:
print(f'{__file__}: MongoDB connection working using connection string "{connection_string}"')
EOF
cp ${PROJECT_NAME}.py ${PROJECT_NAME}_external.py
cat << EOF > docker-compose.yaml
version: '3.9'
services:
mongodb-1:
image: docker.io/bitnami/mongodb:${MONGODB_VERSION}
ports:
- 27017:27017
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-1
- MONGODB_PORT_NUMBER=27017
- MONGODB_REPLICA_SET_MODE=primary
- MONGODB_ROOT_PASSWORD=password123
- MONGODB_REPLICA_SET_KEY=replicasetkey123
volumes:
- 'mongodb_master_data:/bitnami/mongodb'
mongodb-2:
image: docker.io/bitnami/mongodb:${MONGODB_VERSION}
ports:
- 27018:27018
depends_on:
- mongodb-1
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-2
- MONGODB_PORT_NUMBER=27018
- MONGODB_REPLICA_SET_MODE=secondary
- MONGODB_INITIAL_PRIMARY_HOST=mongodb-1
- MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123
- MONGODB_REPLICA_SET_KEY=replicasetkey123
mongodb-3:
image: docker.io/bitnami/mongodb:${MONGODB_VERSION}
ports:
- 27019:27019
depends_on:
- mongodb-1
environment:
- MONGODB_ADVERTISED_HOSTNAME=mongodb-3
- MONGODB_PORT_NUMBER=27019
- MONGODB_REPLICA_SET_MODE=secondary
- MONGODB_INITIAL_PRIMARY_HOST=mongodb-1
- MONGODB_INITIAL_PRIMARY_ROOT_PASSWORD=password123
- MONGODB_REPLICA_SET_KEY=replicasetkey123
${PROJECT_NAME}:
container_name: ${PROJECT_NAME}
build: .
depends_on:
- mongodb-1
- mongodb-2
- mongodb-3
volumes:
mongodb_master_data:
driver: local
EOF
docker rm --force $(docker ps -a -q --filter name=mongo) > /dev/null 2>&1
docker rm --force $(docker ps -a -q --filter name=${PROJECT_NAME}) > /dev/null 2>&1
docker-compose up --build -d
python ${PROJECT_NAME}.py
docker ps -a -q --filter name=${PROJECT_NAME}
docker logs $(docker ps -a -q --filter name=${PROJECT_NAME})
If all is ok you will get an output confirming both internal and external connectivity:
/tmp/tmp.QM9tQPE8Dj/replset_test.py: MongoDB connection working using connection string "mongodb://root:password123@mongodb-1:27017,mongodb-2:27018,mongodb-3:27019/mydatabase?authSource=admin&replicaSet=replicaset"
d53e8c41ad20
//./replset_test.py: MongoDB connection working using connection string "mongodb://root:password123@mongodb-1:27017,mongodb-2:27018,mongodb-3:27019/mydatabase?authSource=admin&replicaSet=replicaset"
Current State
I have a MongoDB instance running on a server without the replica set flag (--replSet)
I have some previously stored information on the Database and wish to retain the information
Aim
However, I wish to restart the container with the --replSet "my-set" flag set for the daemon and keep the previous information intact
Implementation
I am trying to follow along a tutorial for setting replica sets in MongoDB with Docker and trying it out on my local machine.
Create a standard MongoDB Docker container without the --replSet flag set, which represents the current state:
docker run -d --name mongo_rs --publish 37017:27017 mongo
Using MongoDB Compass, I connected to the DB and added some dummy information to a database called test and a collection called players
I stop the container:
docker container stop mongo_rs
From here onwards I wish to add --replSet "my-set" to the mongo_rs container and configure the replica set via the mongo shell as mentioned in the tutorial. What is a possible solution for achieving this?
Here is my .yml file
version: '3.7'
services:
node1:
image: mongo
ports:
- 30001:27017
volumes:
- $HOME/mongoclusterdata/node1:/data/db
networks:
- mongocluster
command: mongod --replSet comments
node2:
image: mongo
ports:
- 30002:27017
volumes:
- $HOME/mongoclusterdata/node2:/data/db
networks:
- mongocluster
command: mongod --replSet comments
depends_on :
- node1
node3:
image: mongo
ports:
- 30003:27017
volumes:
- $HOME/mongoclusterdata/node3:/data/db
networks:
- mongocluster
command: mongod --replSet comments
depends_on :
- node2
networks:
mongocluster:
driver: bridge
Note that the volumes section uses an absolute path that is different from the project root. Docker Compose creates its config files under the directory it runs from, so if you currently keep them under the project root, change the location to somewhere else; that way the config file settings are never deleted on docker-compose up/down.
Here is a workaround:
1- copy the entrypoint script to your host:
docker cp mongo_rs:/usr/local/bin/docker-entrypoint.sh .
2- edit the script and change the last line, exec "$@", to:
mongod --replSet my-mongo-set --port 27017
3- re-copy the script to your container:
docker cp docker-entrypoint.sh mongo_rs:/usr/local/bin/docker-entrypoint.sh
4- start your container:
docker start mongo_rs
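After the container is back up with --replSet, the set still has to be initiated once. The tutorial does this from the mongo shell; as an alternative sketch (assuming a recent pymongo and the 37017 port mapping from the question), the same step can be done from the host:
from pymongo import MongoClient

# Connect directly to the single member; directConnection avoids replica set
# discovery while the set has not been initiated yet.
client = MongoClient("localhost", 37017, directConnection=True)

# The _id must match the --replSet value used in the edited entrypoint, and
# "localhost:27017" is the address the node uses for itself inside the
# container; adjust both if your names or topology differ.
config = {"_id": "my-mongo-set", "members": [{"_id": 0, "host": "localhost:27017"}]}
client.admin.command("replSetInitiate", config)
print(client.admin.command("replSetGetStatus")["set"])  # -> my-mongo-set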
I created a docker-compose setup with two services: a Python API and MongoDB. After starting it the first time with sudo docker-compose up, MongoDB creates its files inside ./data, but after sudo docker-compose down and another sudo docker-compose up, mongo can't access the files in ./data.
Obviously something is wrong with permissions, but I don't know what exactly.
But if I just stop the containers without sudo docker-compose down and run sudo docker-compose up again, everything is OK.
So everything goes wrong after down.
systemd service:
[Unit]
Description=RestAPI imp
Requires=docker.service
After=docker.service
[Service]
Type=simple
WorkingDirectory=/home/entrant/myserv
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down
Restart=on-failure
RestartSec=10
KillMode=process
[Install]
WantedBy=multi-user.target
docker-compose.yml
version: '3.5'
services:
web_dev:
build: .
ports:
- "8080:8080"
volumes:
- .:/app
environment:
- ENV=PROD
- DATABASE_URL=mongodb://mongodb:27017/myserv?authSource=admin&replicaSet=myrepl
depends_on:
- mongodb
command: deploy/wait-for-it.sh mongodb:27017 -- gunicorn -b 0.0.0.0:8080 index:api.app -w 9
mongodb:
image: mongo:4.0.12-xenial
container_name: "mongodb"
environment:
- MONGO_INITDB_DATABASE=myserv
- MONGO_DATA_DIR=/data/db
- MONGO_LOG_DIR=/dev/null
volumes:
- ./data/db:/data/db
ports:
- 27017:27017
command: bash -c "
mongod --fork --replSet myrepl --bind_ip_all --smallfiles --logpath=/dev/null
&& mongo --eval 'rs.initiate()'
&& mongod --shutdown
&& mongod --replSet myrepl --bind_ip_all --smallfiles --logpath=/dev/null
"
networks:
default:
name: web_dev
Okay, so I know I can automate my docker run instructions like this; say I would do this without Compose:
First create the volume
docker volume create --name mongodb-shard-1-node-1
Then the container
docker run --name mongodb-node-1 -d -v mongodb-node-1:/data/db -p 27031:27017 --link mongo-node-2:mongo mongo --replSet rs0 --smallfiles --oplogSize 128
This would be the same as including this in the docker-compose.yml file:
mongodb-node-1:
image: mongo
volumes:
- "mongodb-node-1:/data/db"
ports:
- "27031:27017"
container_name: mongodb-node-1
external_links:
- "mongodb-node-3:mongo"
command: --replSet rs0 --smallfiles --oplogSize 128
But I also have to run commands inside the MongoDB shell. To do this, I first use exec to enter the shell like this:
docker exec -it mongodb-shard-1-node-1 mongo
Afterwards, inside the shell, I need to run commands such as
rs.initiate()
and others like
rs.addArb("172.17.0.6:27017")
etc...
Can I automate these last steps with docker-compose? Is it possible to automate this in docker at all?
You can't directly automate it like that, sadly.
As a workaround, you could extend the container image to add a shell script which runs on startup, starts Mongo, then runs the specified commands. You could even pass in that IP address in an environment variable if it needs to be modifiable.
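If the commands you need are only the replica set setup ones, another workaround along the same lines is to run them once from the host after docker-compose up, for example with pymongo. This is just a sketch; the hostnames, ports and arbiter address are placeholders based on the question and must match your real container network:
from pymongo import MongoClient

# Connect directly to the node that should become primary; the published port
# 27031 is taken from the question and may differ in your setup.
client = MongoClient("localhost", 27031, directConnection=True)

# A single replSetInitiate can replace both rs.initiate() and rs.addArb():
# the arbiter is simply listed as a member with arbiterOnly set to true.
config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "mongodb-node-1:27017"},
        {"_id": 1, "host": "mongodb-node-2:27017"},
        {"_id": 2, "host": "172.17.0.6:27017", "arbiterOnly": True},
    ],
}
client.admin.command("replSetInitiate", config)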
Kontena has leveraged the docker-compose.yml file format and introduced this kind of functionality by adding a post_start hook.
peer:
image: mongo:3.2
stateful: true
command: --replSet kontena --smallfiles
instances: 3
hooks:
post_start:
- cmd: sleep 10
name: sleep
instances: 1
oneshot: true
- cmd: mongo --eval "printjson(rs.initiate());"
name: rs_initiate
instances: 1
oneshot: true
- cmd: mongo --eval "printjson(rs.add('%{project}-peer-2'))"
name: rs_add2
instances: 1
oneshot: true
- cmd: mongo --eval "printjson(rs.add('%{project}-peer-3'))"
name: rs_add3
instances: 1
oneshot: true
https://github.com/kontena/examples/blob/master/mongodb-cluster/kontena.yml
Running kontena app deploy will deploy all three MongoDB peers and add them to the replica set.
I'm trying to create some kind of script that will create a Docker container with MongoDB and automatically create a user.
I can usually manage my docker images with docker-compose but this time, I don't know how to do it.
Basically, here is what I have to do:
clean/destroy container (docker-compose down)
create a docker container with mongodb and start it (without --auth parameter)
execute a java script containing db.createUser()
stop the container
restart the same container with --auth parameter to allow login with the user created in the javascript
I can't find how to do that properly with docker-compose, because when it starts I have to give it the --auth command. If I do that, I cannot execute my JavaScript to add my user. MongoDB allows user creation without being logged in if there is no user yet and the --auth parameter is not provided.
I want to do that automatically; I do not want to run commands manually. The goal is to have a script that can be executed before each integration test to start from a clean database.
Here is my project:
integration-test/src/test/resources/scripts/docker-compose.yml
mongodb:
container_name: mongo
image: mongo
ports:
- "27017:27017"
volumes:
- .:/setup
command: --auth
integration-test/src/test/resources/scripts/docker-init.sh
docker-compose down
docker-compose up -d
sleep 1
docker exec mongo bash -c "mongo myDatabase /setup/mongodb-setup.js"
integration-test/src/test/resources/scripts/mongodb-setup.js
db.createUser(
{
user: "myUser",
pwd: "myPassword",
roles: [
{ role: "readWrite", db: "myDatabase" }
]
})
Finding a way to restart a container with a new parameter (in this case --auth) would help, but I can't find how to do that (docker start does not take parameters).
Any idea how I should do what I would like?
If not, I can still delete everything from my database with some Java code or something else, but I would like a complete MongoDB Docker setup created with a script.
The official mongo image now supports the following environment variables that can be used in docker-compose as below:
environment:
- MONGO_INITDB_ROOT_USERNAME=user
- MONGO_INITDB_ROOT_PASSWORD=password
- MONGO_INITDB_DATABASE=test
more explanation at:
https://stackoverflow.com/a/42917632/1069610
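With those variables set, a client authenticates against the admin database using the root credentials. A minimal pymongo sketch, assuming the values from the environment block above:
from pymongo import MongoClient

# The root user created via MONGO_INITDB_ROOT_USERNAME lives in the admin
# database, so authSource must be "admin".
client = MongoClient(
    "localhost",
    27017,
    username="user",
    password="password",
    authSource="admin",
)
print(client["test"].list_collection_names())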
This is how I do it. My requirement was to bring up a few containers along with MongoDB; the other containers expect a user to be present when they come up, and this worked for me. The good part is that mongoClientTemp exits after the command is executed, so the container doesn't stick around. (A small client-side sketch follows the compose file below.)
version: '2'
services:
mongo:
image: mongo:latest
container_name: mongo
ports:
- "27017:27017"
volumes:
- /app/hdp/mongo/data:/data/db
mongoClientTemp:
image: mongo:latest
container_name: mongoClientTemp
links:
- mongo:mongo
command: mongo --host mongo --eval "db.getSiblingDB('dashboard').createUser({user:'db', pwd:'dbpass', roles:[{role:'readWrite',db:'dashboard'}]});"
depends_on:
- mongo
another-container:
image: another-image:v01
container_name: another-container
ports:
- "8080:8080"
volumes:
- ./logs:/app/logs
environment:
- MONGODB_HOST=mongo
- MONGODB_PORT=27017
links:
- mongo:mongo
depends_on:
- mongoClientTemp
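For completeness, a rough sketch of how another-container could pick up MONGODB_HOST and MONGODB_PORT from its environment and log in as the user that mongoClientTemp created (the credentials and database name are the ones from the --eval above; the Python client itself is just an assumption about your app code):
import os
from pymongo import MongoClient

# Host and port come from the environment block of another-container;
# the db/dbpass user was created in the dashboard database by mongoClientTemp.
host = os.environ.get("MONGODB_HOST", "mongo")
port = int(os.environ.get("MONGODB_PORT", "27017"))
client = MongoClient(host, port, username="db", password="dbpass",
                     authSource="dashboard")
print(client["dashboard"].list_collection_names())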
EDIT: the tutumcloud repository is deprecated and no longer maintained; see other answers.
I suggest that you use environment variables to set the mongo user, database and password. tutum (owned by Docker) published a very good image:
https://github.com/tutumcloud/mongodb
docker run -d -p 27017:27017 -p 28017:28017 -e MONGODB_USER="user" -e MONGODB_DATABASE="mydatabase" -e MONGODB_PASS="mypass" tutum/mongodb
You may convert these variables into docker-compose environment variables. You don't have to hard-code them.
environment:
MONGODB_USER: "${db_user_env}"
MONGODB_DATABASE: "${dbname_env}"
MONGODB_PASS: "${db_pass}"
This configuration will read from your session's environment variables.
In your project directory, create another directory docker-entrypoint-initdb.d; the file tree then looks like this:
📦Project-directory
┣ 📂docker-entrypoint-initdb.d
┃ ┗ 📜mongo-init.js
┗ 📜docker-compose.yaml
The docker-compose.yaml contains:
version: "3.7"
services:
mongo:
container_name: container-mongodb
image: mongo:latest
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
MONGO_INITDB_DATABASE: root-db
volumes:
- ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
mongo-init.js contains the JavaScript code to create users with different roles.
print("Started Adding the Users.");
db = db.getSiblingDB("admin");
db.createUser({
user: "userx",
pwd: "1234",
roles: [{ role: "readWrite", db: "admin" }],
});
print("End Adding the User Roles.");
You can modify the mongo-init.js as you need.
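To check that the init script actually ran, you can authenticate as the user it creates. A small pymongo sketch, assuming the compose file above is up and the values were not changed:
from pymongo import MongoClient

# userx is created in the admin database by mongo-init.js, so authSource=admin;
# its readWrite role on admin allows this throwaway test write.
client = MongoClient("mongodb://userx:1234@localhost:27017/?authSource=admin")
client["admin"]["smoke_test"].insert_one({"ok": 1})
print(client["admin"]["smoke_test"].find_one({"ok": 1}))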
After reading the official mongo docker page, I've found that you can create an admin user one single time, even if the auth option is being used. This is not well documented, but it simply works (hope it is not a feature).
Therefore, you can keep using the auth option all the time.
I created a github repository with scripts wrapping up the commands to be used. The most important command lines to run are:
docker exec db_mongodb mongo admin /setup/create-admin.js
docker exec db_mongodb mongo admin /setup/create-user.js -u admin -p admin --authenticationDatabase admin
The first line will create the admin user (and mongo will not complain even with auth option). The second line will create your "normal" user, using the admin rights from the first one.
The mongo image provides the /docker-entrypoint-initdb.d/ path to deploy custom .js or .sh setup scripts.
Check this post for more details:
How to create a DB for MongoDB container on start up?
file: docker-compose.yaml
mongo:
image: mongo:latest
volumes_from:
- data
ports:
- "27017:27017"
command: --auth
container_name: "db_mongodb"
data:
image: mongo:latest
volumes:
- /var/lib/mongo
- ./setup:/setup
command: "true"
container_name: "db_mongodb_data"
file: .buildMongo.sh
#!/bin/sh
docker-compose down
docker-compose up -d
sleep 1
docker exec db_mongodb mongo admin /setup/create-admin.js
docker exec db_mongodb mongo myDb /setup/create-user.js -u admin -p admin --authenticationDatabase admin
The create-admin.js and create-user.js files contain commands that you would run using the mongo shell, so they should be easy to understand. The real direction is the same as in the jzqa answer: environment variables.
So the question here is how to create a user. I think this answers that point at least; you can check the complete setup here https://github.com/Lus1t4nUm/mongo_docker_bootstrap
To initialize mongo with an initial username-password-db triple and initdb scripts using only one docker-compose.yml, without any extra configuration, you can use the bitnami/mongodb image.
In my case, the scripts under the /docker-entrypoint-initdb.d directory in the container did not run successfully after I set the environment variables MONGODB_USERNAME and MONGODB_PASSWORD (env variables specific to the bitnami image), because mongod runs with the --auth option automatically when you set these variables. Consequently, I got authentication errors while the container was executing the scripts.
This was because it was connecting to: mongodb://192.168.192.2:27017/compressors=disabled&gssapiServiceName=mongodb
TERMINAL LOG OF THE ERROR
FIRST DOCKER-COMPOSE FILE:
version: "3"
services:
mongodb:
container_name: mongodb
image: 'docker.io/bitnami/mongodb:4.2-debian-10'
ports:
- "27017:27017"
volumes:
- "mongodb_data:/bitnami/mongodb"
- "./mongodb/scripts:/docker-entrypoint-initdb.d"
environment:
- MONGODB_INITSCRIPTS_DIR=/docker-entrypoint-initdb.d
- MONGODB_USERNAME=some_username
- MONGODB_PASSWORD=some_password
- MONGODB_DATABASE=some_db_name
networks:
backend:
restart: unless-stopped
volumes:
mongodb_data:
networks:
backend:
driver: bridge
INIT JS FILE UNDER ./mongodb/scripts PATH:
let db = connect("localhost:27017/some_db_name");
db.auth("some_username", "some_password");
let collections = db.getCollectionNames();
let storeFound = false;
let index;
for(index=0; index<collections.length; index++){
if ("store" === collections[index]){
storeFound = true;
}
}
if(!storeFound ){
db.createCollection("store");
db.store.createIndex({"name": 1});
}
So, I decided to add new environment variables to my docker-compose.yml after inspecting https://github.com/bitnami/bitnami-docker-mongodb/blob/master/4.2/debian-10/rootfs/opt/bitnami/scripts/libmongodb.sh file.
In this sh file there is a function, mongodb_custom_init_scripts(), that executes the scripts. To execute all the script files it calls the mongodb_execute() function. In that function, after the mongod instance is up and running, the mongo client connects to the mongod instance using some parameters.
########################
# Execute an arbitrary query/queries against the running MongoDB service
# Stdin:
# Query/queries to execute
# Arguments:
# $1 - User to run queries
# $2 - Password
# $3 - Database where to run the queries
# $4 - Host (default to result of get_mongo_hostname function)
# $5 - Port (default $MONGODB_PORT_NUMBER)
# $6 - Extra arguments (default $MONGODB_CLIENT_EXTRA_FLAGS)
# Returns:
# None
########################
mongodb_execute() {
local -r user="${1:-}"
local -r password="${2:-}"
local -r database="${3:-}"
local -r host="${4:-$(get_mongo_hostname)}"
local -r port="${5:-$MONGODB_PORT_NUMBER}"
local -r extra_args="${6:-$MONGODB_CLIENT_EXTRA_FLAGS}"
local result
local final_user="$user"
# If password is empty it means no auth, do not specify user
[[ -z "$password" ]] && final_user=""
local -a args=("--host" "$host" "--port" "$port")
[[ -n "$final_user" ]] && args+=("-u" "$final_user")
[[ -n "$password" ]] && args+=("-p" "$password")
[[ -n "$extra_args" ]] && args+=($extra_args)
[[ -n "$database" ]] && args+=("$database")
"$MONGODB_BIN_DIR/mongo" "${args[#]}"
}
After that, I added new environment variables to my docker-compose file: MONGODB_ADVERTISED_HOSTNAME, MONGODB_PORT_NUMBER, and MONGODB_CLIENT_EXTRA_FLAGS.
So my final docker-compose.yml looks like:
version: "3"
services:
mongodb:
container_name: mongodb
image: 'docker.io/bitnami/mongodb:4.2-debian-10'
ports:
- "27017:27017"
volumes:
- "mongodb_data:/bitnami/mongodb"
- "./mongodb/scripts:/docker-entrypoint-initdb.d"
environment:
- MONGODB_INITSCRIPTS_DIR=/docker-entrypoint-initdb.d
- MONGODB_USERNAME=some_username
- MONGODB_PASSWORD=some_password
- MONGODB_DATABASE=some_db_name
- MONGODB_ADVERTISED_HOSTNAME=localhost
- MONGODB_PORT_NUMBER=27017
- MONGODB_CLIENT_EXTRA_FLAGS=--authenticationDatabase=some_db_name
networks:
backend:
restart: unless-stopped
volumes:
mongodb_data:
networks:
backend:
driver: bridge
Now, it was connecting using this URL:
mongodb://localhost:27017/?authSource=some_db_name&compressors=disabled&gssapiServiceName=mongodb
Add the --noauth option to the mongod command.
An extract from my docker-compose.yml file:
mongors:
image: mongo:latest
command: mongod --noprealloc --smallfiles --replSet mongors2 --dbpath /data/db --nojournal --oplogSize 16 --noauth
environment:
TERM: xterm
volumes:
- ./data/mongors:/data/db