I'm running this docker-compose.yml on my Mac with a fresh Docker for Mac environment.
version: '2'
services:
  replica1:
    image: mongo:3.0
    container_name: mongo1
    ports:
      - "27017:27017"
    volumes:
      - ./mongodata/replica1:/data/db
    command: mongod --smallfiles --replSet "mrmtx"
    networks:
      - mongo_cluster
  replica2:
    image: mongo:3.0
    container_name: mongo2
    ports:
      - "27017:27017"
    volumes:
      - ./mongodata/replica2:/data/db
    command: mongod --smallfiles --replSet "mrmtx"
    networks:
      - mongo_cluster
  replica3:
    image: mongo:3.0
    container_name: mongo3
    ports:
      - "27017:27017"
    volumes:
      - ./mongodata/replica3:/data/db
    command: mongod --smallfiles --replSet "mrmtx"
    networks:
      - mongo_cluster
networks:
  mongo_cluster:
    driver: overlay
I'm getting this error:
Cannot start service replica2: b'Could not attach to network mongo-rplset_mongo_cluster: rpc error: code = PermissionDenied desc = network mongo-rplset_mongo_cluster not manually attachable'
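A likely culprit is the networks section: the overlay driver is meant for Swarm, and standalone containers can only join an overlay network that was created as attachable. On a single Docker for Mac host a bridge network should be enough; a sketch of the change (not verified against this exact setup):

```yaml
networks:
  mongo_cluster:
    driver: bridge
    # or, if the stack really runs under Swarm:
    # driver: overlay
    # attachable: true
```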
I am currently trying to install a mongo cluster on docker.
We already have such a cluster with mongo 4.2, but for the new installation we wanted to use the latest version of the docker image.
I used the same docker-compose file but the data and config servers don’t want to start.
When looking at the docker logs, the error is:
BadValue: Cannot start a shardsvr as a standalone server. Please use the option --replSet to start the node as a replica set.
BadValue: Cannot start a configsvr as a standalone server. Please use the option --replSet to start the node as a replica set.
But I do have --replSet in my commands.
After some trial and error, I found that the error occurs when I add the init db environment variables to initialize the admin user:
environment:
  MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
  MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
I also did the test with mongo image version 5, and I see the same behavior.
It works fine with mongo image 4.4.18.
Here is my docker compose file:
version: '3.5'
services:
  # Router
  mongo-router-01:
    command: mongos --port 27017 --configdb ${MONGO_RS_CONFIG_NAME}/mongo-config-01:27017,mongo-config-02:27017,mongo-config-03:27017 --bind_ip_all --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_ROUTER_SERVER}-01-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-01/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-01/configdb:/data/configdb
  mongo-router-02:
    command: mongos --port 27017 --configdb ${MONGO_RS_CONFIG_NAME}/mongo-config-01:27017,mongo-config-02:27017,mongo-config-03:27017 --bind_ip_all --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_ROUTER_SERVER}-02-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-02/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ROUTER_SERVER}-02/configdb:/data/configdb
  # Config Servers
  mongo-config-01:
    command: mongod --port 27017 --configsvr --replSet ${MONGO_RS_CONFIG_NAME} --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_CONFIG_SERVER}-01-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/preprod/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-01/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-01/configdb:/data/configdb
  mongo-config-02:
    command: mongod --port 27017 --configsvr --replSet ${MONGO_RS_CONFIG_NAME} --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_CONFIG_SERVER}-02-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/preprod/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-02/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-02/configdb:/data/configdb
  mongo-config-03:
    command: mongod --port 27017 --configsvr --replSet ${MONGO_RS_CONFIG_NAME} --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_CONFIG_SERVER}-03-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-03/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_CONFIG_SERVER}-03/configdb:/data/configdb
  # Data Servers
  mongo-arbiter-01:
    command: mongod --port 27017 --shardsvr --replSet ${MONGO_RS_DATA_NAME} --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_ARBITER_SERVER}-01-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ARBITER_SERVER}-01/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_ARBITER_SERVER}-01/configdb:/data/configdb
  mongo-data-01:
    command: mongod --port 27017 --shardsvr --replSet ${MONGO_RS_DATA_NAME} --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_DATA_SERVER}-01-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-01/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-01/configdb:/data/configdb
  mongo-data-02:
    command: mongod --port 27017 --shardsvr --replSet ${MONGO_RS_DATA_NAME} --keyFile /etc/mongo-cluster.key
    container_name: ${MONGO_DATA_SERVER}-02-${ENVIRONMENT_NAME}
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_PASSWORD}
    image: mongo:${MONGO_VERSION}
    networks:
      - mongo-network
    restart: always
    volumes:
      - ./keys/${ENVIRONMENT_NAME}/mongo-cluster.key:/etc/mongo-cluster.key
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-02/db:/data/db
      - ./volumes/${ENVIRONMENT_NAME}/${MONGO_DATA_SERVER}-02/configdb:/data/configdb
networks:
  mongo-network:
    external:
      name: _preprod
EDIT 2023-02-08
I finally may have found something: https://github.com/docker-library/mongo/issues/509
It seems it is normal that it fails on the shard servers.
For config server, there is a PR: https://github.com/docker-library/mongo/pull/600
But it has not been merged yet.
So I guess until the PR is merged and a new version of the image is published, there is no way to use the environment variables at all.
So the root user insertion should be done via a script after the replica sets and routers are initialized.
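A sketch of what such a script could look like, assuming the config server container is reachable as mongo-config-01 and using placeholder credentials (the localhost exception should allow creating the first user while keyFile auth is enabled; treat this as a starting point, not a verified procedure):

```shell
# run only after rs.initiate() has completed on the replica sets
# and a PRIMARY has been elected
docker exec mongo-config-01 mongosh --port 27017 --eval '
  db.getSiblingDB("admin").createUser({
    user: "admin",             // placeholder credentials
    pwd: "secret",
    roles: [ { role: "root", db: "admin" } ]
  })
'
```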
I am not familiar with Docker, so I can only guess. When I deployed my sharded cluster I had to add readPreference=primaryPreferred to the connection string.
Otherwise, when the replica set is initiated, the current host may become a SECONDARY and the shell does not switch over automatically to the new PRIMARY.
Another common issue is that when the replica set is initiated, you must wait until it is finished before you run other actions. When I initiate a replica set, I usually do it like this:
rs.initiate(...)
while (! db.hello().isWritablePrimary ) { sleep(1000) }
And last but not least, in version 6.0 you must set setDefaultRWConcern before you can run many other operations; see Compatibility Changes in MongoDB 6.0 # Replica Sets.
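As a rough sketch, setting that default could look like this (run against a member once the replica set is up; the write concern value is an example, pick what fits your topology):

```shell
mongosh --eval '
  db.adminCommand({
    setDefaultRWConcern: 1,
    defaultWriteConcern: { w: "majority" }
  })
'
```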
I found this repository:
https://github.com/minhhungit/mongodb-cluster-docker-compose
There are some docker-compose files (with or without a cluster key).
I tested it and it seems to work.
EDIT 13/02/2023
It took a bit more time than expected, as I do not have the same setup as the example found in the repository mentioned above, but now everything is working fine, with the replica set and users initialized correctly.
I want to connect to MongoDB cluster using
mongodb://localhost:27017
It shows me an error:
getaddrinfo ENOTFOUND messenger-mongodb-1
This is my docker-compose.yml file:
version: '3'
services:
  messenger-mongodb-1:
    container_name: messenger-mongodb-1
    image: mongo:6.0.3
    command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
    ports:
      - 27017:27017
    networks:
      - messenger-mongodb-cluster
    volumes:
      - messenger-mongodb-1-data:/data/db
    depends_on:
      - messenger-mongodb-2
      - messenger-mongodb-3
    healthcheck:
      test: test $$(echo "rs.initiate({_id:\"messenger-mongodb-replica-set\",members:[{_id:0,host:\"messenger-mongodb-1\"},{_id:1,host:\"messenger-mongodb-2\"},{_id:2,host:\"messenger-mongodb-3\"}]}).ok || rs.status().ok" | mongo --quiet) -eq 1
      interval: 10s
      start_period: 30s
  messenger-mongodb-2:
    container_name: messenger-mongodb-2
    image: mongo:6.0.3
    command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
    ports:
      - 27018:27017
    networks:
      - messenger-mongodb-cluster
    environment:
      - MONGO_INITDB_DATABASE=messenger-db
    volumes:
      - messenger-mongodb-2-data:/data/db
  messenger-mongodb-3:
    container_name: messenger-mongodb-3
    image: mongo:6.0.3
    command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
    ports:
      - 27019:27017
    networks:
      - messenger-mongodb-cluster
    environment:
      - MONGO_INITDB_DATABASE=messenger-db
    volumes:
      - messenger-mongodb-3-data:/data/db
networks:
  messenger-mongodb-cluster:
volumes:
  messenger-mongodb-1-data:
  messenger-mongodb-2-data:
  messenger-mongodb-3-data:
I run it like:
docker-compose up -d
How can I connect to my replica set? I want to use it for the local development of my node.js application.
My operating system is Windows 11 Pro.
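For what it's worth, the ENOTFOUND usually means the driver discovered the members under their container names, which the host cannot resolve. One common workaround for local development is to bypass replica-set discovery and connect to a single member directly (directConnection is a standard URI option in recent drivers and mongosh):

```
mongodb://localhost:27017/?directConnection=true
```

Alternatively, mapping the three container names to 127.0.0.1 in the Windows hosts file lets the discovered names resolve, so the normal replica-set connection string works.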
I am using the following docker-compose file to set up my sharded MongoDB, which has 3 shards, 3 config servers and 2 router instances:
version: "3.5"
services:
  mongorsn1:
    container_name: mongors1n1
    hostname: mongors1n1
    image: mongo
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    ports:
      - 27017:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mongo_cluster/data1:/data/db
  mongors1n2:
    container_name: mongors1n2
    hostname: mongors1n2
    image: mongo
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    ports:
      - 27027:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mongo_cluster/data2:/data/db
  mongors1n3:
    container_name: mongors1n3
    hostname: mongors1n3
    image: mongo
    command: mongod --shardsvr --replSet mongors1 --dbpath /data/db --port 27017
    ports:
      - 27037:27017
    expose:
      - "27017"
    environment:
      TERM: xterm
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mongo_cluster/data3:/data/db
  mongocfg1:
    container_name: mongocfg1
    image: mongo
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    environment:
      TERM: xterm
    expose:
      - "27017"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mongo_cluster/config1:/data/db
  mongocfg2:
    container_name: mongocfg2
    image: mongo
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    environment:
      TERM: xterm
    expose:
      - "27017"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mongo_cluster/config2:/data/db
  mongocfg3:
    container_name: mongocfg3
    image: mongo
    command: mongod --configsvr --replSet mongors1conf --dbpath /data/db --port 27017
    environment:
      TERM: xterm
    expose:
      - "27017"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mongo_cluster/config3:/data/db
  mongos1:
    container_name: mongos1
    image: mongo
    hostname: mongos1
    depends_on:
      - mongocfg1
      - mongocfg2
    command: mongos --configdb mongors1conf/mongocfg1:27017,mongocfg2:27017,mongocfg3:27017 --port 27017
    ports:
      - 27019:27017
    expose:
      - "27017"
    volumes:
      - /etc/localtime:/etc/localtime:ro
  mongos2:
    container_name: mongos2
    image: mongo
    hostname: mongos2
    depends_on:
      - mongocfg1
      - mongocfg2
    command: mongos --configdb mongors1conf/mongocfg1:27017,mongocfg2:27017,mongocfg3:27017 --port 27017
    ports:
      - 27020:27017
    expose:
      - "27017"
    volumes:
      - /etc/localtime:/etc/localtime:ro
As you can see, it is up and running correctly:
docker exec -it mongos1 bash -c "echo 'sh.status()' | mongo "
MongoDB shell version v4.4.6
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f00e3ac7-e114-40b3-b7f8-0e5cf9c56e10") }
MongoDB server version: 4.4.6
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("609e21145937100b3f006e20")
  }
  shards:
    { "_id" : "mongors1", "host" : "mongors1/mongors1n1:27017,mongors1n2:27017,mongors1n3:27017", "state" : 1 }
  active mongoses:
    "4.4.6" : 2
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "config", "primary" : "config", "partitioned" : true }
bye
Now here is the docker-compose file I use to connect Orion and the IoT Agent to my MongoDB shards.
version: "3.5"
services:
  orion:
    image: atos/smbt/orion:2.5.2
    hostname: orion
    container_name: orion
    restart: unless-stopped
    ports:
      - "1026:1026"
    entrypoint:
      - /usr/bin/contextBroker
      - -fg
      - -multiservice
      - -ngsiv1Autocast
      - -corsOrigin
      - __ALL
    command: -dbhost mongors1n1:27017,mongors1n2:27027,mongors1n3:27037 -rplSet mongors1 -logLevel ERROR -noCache
    healthcheck:
      test: curl --fail -s http://orion:1026/version || exit 1
    networks:
      - fiware_mongo_sharding_default
  iot-agent:
    image: atos/smbt/iotagent-ul:1.15.0
    hostname: iot-agent
    container_name: iot-agent
    restart: unless-stopped
    expose:
      - "4041"
      - "7896"
    ports:
      - "4041:4041"
      - "7896:7896"
    environment:
      - IOTA_CB_HOST=orion
      - IOTA_CB_PORT=1026
      - IOTA_NORTH_PORT=4041
      - IOTA_REGISTRY_TYPE=mongodb
      - IOTA_LOG_LEVEL=ERROR
      - IOTA_TIMESTAMP=true
      - IOTA_CB_NGSI_VERSION=v2
      - IOTA_AUTOCAST=true
      - IOTA_MONGO_HOST=mongors1n1
      - IOTA_MONGO_PORT=27017
      - IOTA_MONGO_DB=iotagentul
      - IOTA_HTTP_PORT=7896
      - IOTA_PROVIDER_URL=http://iot-agent:4041
    networks:
      - fiware_mongo_sharding_default
networks:
  fiware_mongo_sharding_default:
    external: true
Orion is connected to MongoDB but with the IoT Agent I get the following error message.
time=2021-05-14T07:51:51.086Z | lvl=ERROR | corr=28202651-30e1-48dc-88a9-53de603e7e6d | trans=28202651-30e1-48dc-88a9-53de603e7e6d | op=IoTAgentNGSI.DbConn | from=n/a | srv=n/a | subsrv=n/a | msg=MONGODB-001: Error trying to connect to MongoDB: MongoNetworkError: failed to connect to server [mongos1:27019] on first connect [MongoNetworkError: connect ECONNREFUSED 172.20.0.9:27019] | comp=IoTAgent
I have tried different things such as:
      - IOTA_MONGO_HOST=mongos1
      - IOTA_MONGO_PORT=27019
but to no avail unfortunately. Is there something I am doing wrong?
Thanks in advance for your help.
I found the solution myself. What I was doing wrong is that the IoT Agent should NOT be connected to a sharded MongoDB, as it only expects one Mongo instance. That instance is used to persist and retrieve the IoT devices created when you provision a sensor via the IoT Agent. Device measurements are sent to the Orion Context Broker, which persists them into the sharded MongoDB. Hope it can help somebody else.
The app (produced by docker-compose up) works as expected. But when I enter the mongo container (docker exec -it mongo) I cannot find the db chatmongoose.
connectionString = 'mongodb://mongo:27017/chatmongoose'
> show dbs
admin    0.000GB
config   0.000GB
local    0.000GB
version: '3.7'
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile
    image: myapp-server
    container_name: myapp-node-server
    command: /usr/src/app/node_modules/.bin/nodemon server.js
    ports:
      - '5000:5000'
    links:
      - mongo
    environment:
      - NODE_ENV=development
    networks:
      - app-network
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - data-volume:/data/db
    ports:
      - '27017:27017'
    networks:
      - app-network
  client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: myapp-client
    container_name: myapp-react-client
    command: npm start
    depends_on:
      - server
    ports:
      - '3000:3000'
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  data-volume:
  node_modules:
  web-root:
    driver: local
The data in the app did work as expected, so why can't I find the db in the container?
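One quick check, assuming the container is named mongo as in the compose file: query the database directly instead of listing databases, since show dbs only lists databases that already hold data. (On recent mongo images the shell binary is mongosh rather than mongo.)

```shell
# open a shell against the chatmongoose db and list its collections
docker exec -it mongo mongo chatmongoose --eval "db.getCollectionNames()"
```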
Could you try profiles? It will probably start only mongo, but the volume will also be valid:
docker-compose --profile mongo up
mongo:
  profiles: ["mongo"]
  container_name: mongo
  image: mongo
  volumes:
    - data-volume:/data/db
  ports:
    - '27017:27017'
  networks:
    - app-network
I have tried to run a mongodb replica set locally with mongodb-community on my Mac. I followed the mongodb docs, and I can run it with this command:
mongod --port 27017 --dbpath /usr/local/var/mongodb --replSet rs0 --bind_ip localhost,127.0.0.1
but it doesn't run in the background, so every time I want to start the replica set I have to run that command again. Before I run it I have to stop mongo first, and then in the next console tab run mongo --eval "rs.initiate()" to create the replica set again.
Here is my docker compose:
version: "3.7"
services:
  mongodb_container:
    image: mongo:latest
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
volumes:
  mongodb_data_container:
How do I convert that into docker-compose? Is it possible?
Can I do docker exec CONTAINER_ID [commands] to run mongo commands like the ones above, or must I stop the mongodb running in that docker container first?
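A minimal sketch of that approach (assuming a container_name: mongodb_container is set on the service; otherwise use the name docker-compose generates, and note that older images ship mongo instead of mongosh):

```shell
# add "command: mongod --replSet rs0 --bind_ip_all" to the service
# in docker-compose.yml, then bring the stack up in the background
docker-compose up -d
# initiate the replica set once, without stopping the container
docker exec mongodb_container mongosh --eval "rs.initiate()"
```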
You can have a mongodb replica-set with this docker-compose:
services:
  mongodb-primary:
    image: "bitnami/mongodb:4.2"
    user: root
    volumes:
      - ./mongodb-persistence/bitnami:/bitnami
    networks:
      - parse_network
    environment:
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_REPLICA_SET_KEY=123456789
      - MONGODB_ROOT_USERNAME=admin-123
      - MONGODB_ROOT_PASSWORD=password-123
      - MONGODB_USERNAME=admin-123
      - MONGODB_PASSWORD=password-123
      - MONGODB_DATABASE=my_database
    ports:
      - 27017:27017
  mongodb-secondary:
    image: "bitnami/mongodb:4.2"
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_REPLICA_SET_KEY=123456789
      - MONGODB_PRIMARY_HOST=mongodb-primary
      - MONGODB_PRIMARY_PORT_NUMBER=27017
      - MONGODB_PRIMARY_ROOT_USERNAME=admin-123
      - MONGODB_PRIMARY_ROOT_PASSWORD=password-123
    networks:
      - parse_network
    ports:
      - 27027:27017
  mongodb-arbiter:
    image: "bitnami/mongodb:4.2"
    depends_on:
      - mongodb-primary
    environment:
      - MONGODB_ADVERTISED_HOSTNAME=mongodb-arbiter
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_PRIMARY_HOST=mongodb-primary
      - MONGODB_PRIMARY_PORT_NUMBER=27017
      - MONGODB_PRIMARY_ROOT_PASSWORD=password-123
      - MONGODB_REPLICA_SET_KEY=123456789
    networks:
      - parse_network
    ports:
      - 27037:27017
networks:
  parse_network:
    driver: bridge
    ipam:
      driver: default
volumes:
  mongodb_master_data:
    driver: local
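With the credentials from the compose file above, connecting from the host might look like the following (a sketch; the exact options depend on your driver):

```
mongodb://admin-123:password-123@localhost:27017/my_database?authSource=admin
```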