MongoDB replica set on three different machines

I have the following Docker Compose file, and I want to persist data on three machines (with different hostnames).
The file as-is works fine, but only on a single machine (it creates one primary and two secondaries).
My questions:
How can I set up the Docker Compose file in such a way that the primary is on hostname primary-1.com and the secondaries are on backup-1.com and backup-2.com? (See the sketch after the file below.)
Should this docker-compose file run on each hostname?
My docker-compose file:
version: "3.8"
services:
  mongo1:
    image: mongo:5
    container_name: mongo1
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
    volumes:
      - ./data/mongo-1:/data/db
    ports:
      - 30001:30001
    healthcheck:
      test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongo1:30001\"},{_id:1,host:\"mongo2:30002\"},{_id:2,host:\"mongo3:30003\"}]}).ok || rs.status().ok" | mongo --port 30001 --quiet) -eq 1
      interval: 10s
      start_period: 30s
  mongo2:
    image: mongo:5
    container_name: mongo2
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30002"]
    volumes:
      - ./data/mongo-2:/data/db
    ports:
      - 30002:30002
  mongo3:
    image: mongo:5
    container_name: mongo3
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30003"]
    volumes:
      - ./data/mongo-3:/data/db
    ports:
      - 30003:30003
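As for spreading this across machines: the file above only builds a single-host cluster, because the members address each other by Compose service name. One possible approach, sketched here under the assumption that each machine runs its own Docker Engine and that primary-1.com, backup-1.com and backup-2.com resolve to one another: run one single-member compose file per host (so yes, a copy on each hostname) and advertise the public hostnames in the replica set config. Note that which member becomes primary is decided by election; you can only bias it with member priorities. A minimal, untested sketch for primary-1.com (the backup hosts would be identical apart from the data directory):

# docker-compose.yml on primary-1.com; repeat on backup-1.com and backup-2.com
version: "3.8"
services:
  mongo:
    image: mongo:5
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
    volumes:
      - ./data/mongo:/data/db
    ports:
      - 30001:30001

Once all three hosts are up, initiate the set once from any machine, using the real hostnames instead of container names, with a higher priority for the intended primary:

echo 'rs.initiate({_id:"my-replica-set",members:[{_id:0,host:"primary-1.com:30001",priority:2},{_id:1,host:"backup-1.com:30001"},{_id:2,host:"backup-2.com:30001"}]})' | mongo --host primary-1.com --port 30001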

Related

How to connect to MongoDB replica set?

I want to connect to my MongoDB cluster using
mongodb://localhost:27017
but it shows me an error:
getaddrinfo ENOTFOUND messenger-mongodb-1
This is my docker-compose.yml file:
version: '3'
services:
  messenger-mongodb-1:
    container_name: messenger-mongodb-1
    image: mongo:6.0.3
    command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
    ports:
      - 27017:27017
    networks:
      - messenger-mongodb-cluster
    volumes:
      - messenger-mongodb-1-data:/data/db
    depends_on:
      - messenger-mongodb-2
      - messenger-mongodb-3
    healthcheck:
      test: test $$(echo "rs.initiate({_id:\"messenger-mongodb-replica-set\",members:[{_id:0,host:\"messenger-mongodb-1\"},{_id:1,host:\"messenger-mongodb-2\"},{_id:2,host:\"messenger-mongodb-3\"}]}).ok || rs.status().ok" | mongo --quiet) -eq 1
      interval: 10s
      start_period: 30s
  messenger-mongodb-2:
    container_name: messenger-mongodb-2
    image: mongo:6.0.3
    command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
    ports:
      - 27018:27017
    networks:
      - messenger-mongodb-cluster
    environment:
      - MONGO_INITDB_DATABASE=messenger-db
    volumes:
      - messenger-mongodb-2-data:/data/db
  messenger-mongodb-3:
    container_name: messenger-mongodb-3
    image: mongo:6.0.3
    command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
    ports:
      - 27019:27017
    networks:
      - messenger-mongodb-cluster
    environment:
      - MONGO_INITDB_DATABASE=messenger-db
    volumes:
      - messenger-mongodb-3-data:/data/db
networks:
  messenger-mongodb-cluster:
volumes:
  messenger-mongodb-1-data:
  messenger-mongodb-2-data:
  messenger-mongodb-3-data:
I run it like
docker-compose up -d
How can I connect to my replica set? I want to use it for local development of my Node.js application.
My operating system is Windows 11 Pro.
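For reference, the usual cause of this error: the driver connects to localhost:27017, asks the member for the replica set topology, and is handed the Compose service names (messenger-mongodb-1 and friends), which the Windows host cannot resolve, hence getaddrinfo ENOTFOUND. For local development the simplest workaround is to skip topology discovery with the directConnection URI option:

mongodb://localhost:27017/?directConnection=true

Two caveats worth checking as well: the members advertise port 27017 while messenger-mongodb-2 and -3 are published on 27018/27019, so a hosts-file mapping alone would not work for them; and the mongo:6.x images ship only mongosh, so the healthcheck above, which pipes into the legacy mongo shell, most likely never runs rs.initiate at all.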

How to connect to a MongoDB replica set created with Docker Compose from mongo-express and MongoDB Compass

Problem:
I have created a MongoDB replica set using docker-compose like this:
questionsdb:
  image: mongo:latest
  container_name: questionsdb
  hostname: questionsdb
  restart: always
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=mi1234
  volumes:
    - mongo_data:/data/dbadmin
  ports:
    - 27017:27017
    - 9229:9229
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rsmongo", "--bind_ip_all" ]
  #networks:
  #  - custom-network
questionsdb1:
  image: mongo:latest
  container_name: questionsdb1
  hostname: questionsdb1
  restart: always
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=mi1234
  volumes:
    - mongo_data1:/data/dbadmin
  ports:
    - 27018:27017
    - 9230:9229
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rsmongo", "--bind_ip_all" ]
  #networks:
  #  - custom-network
questionsdb2:
  image: mongo:latest
  container_name: questionsdb2
  hostname: questionsdb2
  restart: always
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=mi1234
  volumes:
    - mongo_data2:/data/dbadmin
  expose:
    - "27017"
  ports:
    - 27019:27017
    - 9231:9229
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rsmongo", "--bind_ip_all" ]
  #networks:
  #  - custom-network
And my mongo-express container configuration is like this.
mongo-express:
  image: mongo-express
  container_name: mongo-express
  restart: always
  ports:
    - 8111:8081
  environment:
    - ME_CONFIG_MONGODB_SERVER=questionsdb
    - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
    - ME_CONFIG_MONGODB_ADMINPASSWORD=mi1234
    - ME_CONFIG_BASICAUTH_USERNAME=admin@mi.com
    - ME_CONFIG_BASICAUTH_PASSWORD=mi#1234
  #networks:
  #  - custom-network
And I try to connect to it through MongoDB Compass like this:
mongodb://admin:mi1234@localhost:27017,localhost:27018,localhost:27019/admin?replicaSet=rsmongo
But both mongo-express and Compass fail with an authentication error.
This is the error I can see in the docker container:
{"t":{"$date":"2021-08-16T01:57:15.391+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn219","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"172.18.0.15:59638","extraInfo":{},"error":"UserNotFound: Could not find user \"admin\" for db \"admin\""}}
Can someone help me solve this? I tried hard to find a solution but was unable to. Thank you.
According to the error message, no admin user was created in your DB during container startup.
This is explained by your custom entrypoint [ "/usr/bin/mongod", "--replSet", "rsmongo", "--bind_ip_all" ], which overrides the official one; the root user creation is performed by the official entrypoint script.
Try running your docker-compose stack with only one mongo instance and without the entrypoint statement.
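Expanding on that answer with a hedged sketch: pass the mongod flags via command instead, so the image's official entrypoint still runs and creates the root user. Note that combining auth with --replSet also requires a shared keyFile for internal member authentication; the keyfile path below is illustrative (generate it with something like openssl rand -base64 756, restrict it to mode 400, and apply the same change to questionsdb1 and questionsdb2):

questionsdb:
  image: mongo:latest
  container_name: questionsdb
  environment:
    - MONGO_INITDB_ROOT_USERNAME=admin
    - MONGO_INITDB_ROOT_PASSWORD=mi1234
  # flags only; the default entrypoint still runs and creates the root user
  command: [ "--replSet", "rsmongo", "--bind_ip_all", "--keyFile", "/etc/mongo/keyfile" ]
  volumes:
    - mongo_data:/data/db    # mongod stores its data in /data/db
    - ./keyfile:/etc/mongo/keyfile:ro
  ports:
    - 27017:27017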

docker-compose MongoDB replica set data loss after some period of time

Below is the docker-compose.yml I have used.
The MongoDB container gets created, but after some 6 to 8 hours the databases inside Mongo are cleared automatically.
Even after a manual restart I could not get my data back.
I have followed two approaches and still could not figure out what the issue is.
Should I declare volumes at the top of the compose file as in approach 1, or provide a user-specific directory like "./data/mongo-1" as in approach 2?
Will adding the below help?
volumes:
  mongo_data:
    driver: local # not working
    external: true # will it work?
Approach-1: Single node replica set
version: '3.8'
volumes:
  mongo_data:
services:
  mongodb:
    hostname: mongodb
    container_name: mongodb
    image: mongo:latest
    environment:
      MONGO_INITDB_DATABASE: moviebooking
      MONGO_REPLICA_SET_NAME: rs0
    volumes:
      - ./mongo-initdb.d:/docker-entrypoint-initdb.d
      - mongo_data:/data/db
    expose:
      - 27017
    ports:
      - "27017:27017"
    restart: unless-stopped
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.slaveOk().ok || rs.status().ok" | mongo --quiet) -eq 1
      interval: 10s
      start_period: 30s
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
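A side note on Approach 1: the entrypoint override bypasses the image's official entrypoint, so the init scripts mounted at ./mongo-initdb.d and the MONGO_INITDB_DATABASE variable are never processed (the same class of problem as in the previous question). A hedged fix is to keep the default entrypoint and pass only the flags:

command: [ "--bind_ip_all", "--replSet", "rs0" ]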
Approach-2: Multi node replica set
version: '3.8'
services:
  mongo1:
    image: mongo:4.2
    container_name: mongo1
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "27017"]
    volumes:
      - ./data/mongo-1:/data/db
    ports:
      - 27017:27017
    healthcheck:
      test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongo1:27017\"},{_id:1,host:\"mongo2:27018\"},{_id:2,host:\"mongo3:27019\"}]}).ok || rs.status().ok" | mongo --port 27017 --quiet) -eq 1
      interval: 10s
      start_period: 30s
  mongo2:
    image: mongo:4.2
    container_name: mongo2
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "27018"]
    volumes:
      - ./data/mongo-2:/data/db
    ports:
      - 27018:27018
  mongo3:
    image: mongo:4.2
    container_name: mongo3
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "27019"]
    volumes:
      - ./data/mongo-3:/data/db
    ports:
      - 27019:27019
In my case, the DB was hacked by someone.
Adding a volume mount will work. Making the volume external in the compose file requires you to create it externally before running docker-compose, which I feel is safer than letting Compose decide the volume location.
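A minimal sketch of the external-volume variant described above (the volume name is illustrative):

docker volume create mongo_data

# docker-compose.yml
volumes:
  mongo_data:
    external: true

With external: true, Compose fails fast if the volume is missing instead of silently creating a fresh, empty one; driver: local can be dropped, since it is already the default. Note also that both approaches publish mongod without authentication on a public port, which matches the "hacked" scenario: automated bots routinely wipe exposed, auth-less MongoDB instances.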

Docker compose for MongoDB ReplicaSet

I have been trying to dockerize my Spring Boot application, which depends on Redis, Kafka, and MongoDB.
Following is the docker-compose.yml:
version: '3.3'
services:
  my-service:
    image: my-service
    build:
      context: ../../
      dockerfile: Dockerfile
    restart: always
    container_name: my-service
    environment:
      KAFKA_CONFLUENT_BOOTSTRAP_SERVERS: kafka:9092
      MONGO_HOSTS: mongodb:27017
      REDIS_HOST: redis
      REDIS_PORT: 6379
    volumes:
      - /private/var/log/my-service/:/var/log/my-service/
    ports:
      - 8080:8090
      - 1053:1053
    depends_on:
      - redis
      - kafka
      - mongodb
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    container_name: portainer
    ports:
      - 9000:9000
      - 9001:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  redis:
    image: redis
    container_name: redis
    restart: always
    ports:
      - 6379:6379
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - 2181:2181
    container_name: zookeeper
  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    container_name: kafka
    environment:
      KAFKA_CREATE_TOPICS: "cms.entity.change:1:1" # topic:partition:replicas
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zookeeper"
  mongodb:
    image: mongo:latest
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME:
      MONGO_INITDB_ROOT_PASSWORD:
    ports:
      - 27017:27017
    volumes:
      - ./data/db:/data/db
The issue is that this starts up Mongo as a STANDALONE instance, so the APIs in my service that persist data fail because Mongo needs to run as a REPLICA_SET.
How can I edit my docker-compose file to start Mongo as a replica set?
I had the same issue and ended up on this Stack Overflow post.
We had a requirement to use the official MongoDB Docker image (https://hub.docker.com/_/mongo) and couldn't use Bitnami as suggested in Vahid's answer.
This answer isn't exactly what the question asked for, and it comes six months late, but it should give direction to anyone who needs a throwaway standalone replica-set MongoDB instance for integration testing. If you need to use it in PROD, you'll have to provide environment variables for volumes and auth as per Vahid's answer.
version: '3.7'
services:
  mongodb:
    image: mongo:latest
    container_name: myservice-mongodb
    networks:
      - myServiceNetwork
    expose:
      - 27017
    command: --replSet singleNodeReplSet
  mongodb-replicaset:
    container_name: mongodb-replicaset-helper
    depends_on:
      - mongodb
    networks:
      - myServiceNetwork
    image: mongo:latest
    command: bash -c "sleep 5 && mongo --host myservice-mongodb --port 27017 --eval \"rs.initiate()\" && sleep 2 && mongo --host myservice-mongodb --port 27017 --eval \"rs.status()\" && sleep infinity"
  my-service:
    depends_on:
      - mongodb-replicaset
    image: myserviceimage
    container_name: myservicecontainer
    networks:
      - myServiceNetwork
    environment:
      myservice__Database__ConnectionString: mongodb://myservice-mongodb:27017/?connect=direct&replicaSet=singleNodeReplSet&readPreference=primary
      myservice__Database__Name: myserviceDb
networks:
  myServiceNetwork:
    driver: bridge
NOTE: Pay attention to how the connection string is passed as an env variable to the service that depends on the Mongo replica-set instance. You have to ensure that the name used when setting up the replica set (in my case singleNodeReplSet) is the one passed on to the service depending on it.
Edited:
My previous answer was far from right, so I changed it. I managed to make it work using bitnami/mongodb:4.0. Not sure whether that helps you, but maybe it gives you some ideas: they have a docker-compose file ready for replica-set mode.
version: '3'
services:
  mdb-primary:
    image: 'bitnami/mongodb:4.0'
    environment:
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-primary
  mdb-secondary:
    image: 'bitnami/mongodb:4.0'
    depends_on:
      - mdb-primary
    environment:
      - MONGODB_PRIMARY_HOST=mdb-primary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-secondary
  mdb-arbiter:
    image: 'bitnami/mongodb:4.0'
    depends_on:
      - mdb-primary
    environment:
      - MONGODB_PRIMARY_HOST=mdb-primary
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-arbiter
  mongo-cli:
    image: 'bitnami/mongodb:latest'
Don't forget to add volumes and map them to /bitnami on the primary node (see the sketch after the commands below).
The last container, mongo-cli, is for testing purposes, so that you can connect to the replica set using the CLI; there is an argument about that here if you'd like to read about it.
$ docker-compose exec mongo-cli bash
$ mongo "mongodb://mdb-primary:27017/test?replicaSet=replicaset"
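A hedged sketch of that volume mapping (the volume name follows the Bitnami examples but is otherwise arbitrary):

  mdb-primary:
    image: 'bitnami/mongodb:4.0'
    volumes:
      - 'mongodb_master_data:/bitnami'

volumes:
  mongodb_master_data:
    driver: local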

Can't connect to local Postgres from a Docker container

I have some Docker containers, such as php and nginx. I also have Postgres installed locally, because I have learned that a database inside a Docker container is bad practice. But I can't connect to the local Postgres from a Docker container.
So far I have done the following:
In postgresql.conf I changed listen_addresses:
listen_addresses = '*'
In pg_hba.conf I added the following line:
host all all 0.0.0.0/0 md5
Then I executed the following iptables command:
iptables -I INPUT -p tcp -m tcp -s 0.0.0.0 --dport 5432 -j ACCEPT
Then I restarted Postgres.
My database configuration:
DB_CONNECTION=pgsql
DB_HOST=my_server_ip_address
DB_PORT=5432
DB_DATABASE=mydbname
DB_USERNAME=mydbuser
DB_PASSWORD=mydbpasswd
But I still can't connect to PostgreSQL from the containers, while at the same time I can connect to Postgres via psql or PhpStorm. (A likely cause and a sketch of a fix follow after the compose file below.)
My docker-compose.yml
version: '3.7'
networks:
  backend-network:
    driver: bridge
  frontend-network:
    driver: bridge
services:
  &app-service app: &app-service-template
    container_name: k4fntr_app
    build:
      context: ./docker/php-fpm
      args:
        UID: ${UID?Use your user ID}
        GID: ${GID?Use your group ID}
        USER: ${USER?Use your user name}
    user: "${UID}:${GID}"
    hostname: *app-service
    volumes:
      - /etc/passwd/:/etc/passwd:ro
      - /etc/group/:/etc/group:ro
      - ./:/var/www/k4fntr
    environment:
      APP_ENV: "${APP_ENV}"
      CONTAINER_ROLE: app
      FPM_PORT: &php-fpm-port 9000
      FPM_USER: "${UID:-1000}"
      FPM_GROUP: "${GID:-1000}"
    networks:
      - backend-network
  &queue-service queue:
    <<: *app-service-template
    container_name: k4fntr_queue
    restart: always
    hostname: *queue-service
    depends_on:
      - app
    environment:
      CONTAINER_ROLE: queue
  &schedule-service schedule:
    <<: *app-service-template
    container_name: k4fntr_schedule
    restart: always
    hostname: *schedule-service
    depends_on:
      - app
    environment:
      CONTAINER_ROLE: scheduler
  &sportlevel-listener sportlevel_listener:
    <<: *app-service-template
    container_name: k4fntr_sl_listener
    restart: always
    hostname: *sportlevel-listener
    ports:
      - "${SPORTLEVEL_LISTEN_PORT}:${SPORTLEVEL_LISTEN_PORT}"
    depends_on:
      - app
    environment:
      CONTAINER_ROLE: sl_listener
  &php-fpm-service php-fpm:
    <<: *app-service-template
    container_name: k4fntr_php-fpm
    user: 'root:root'
    restart: always
    hostname: *php-fpm-service
    ports: [*php-fpm-port]
    entrypoint: /fpm-entrypoint.sh
    command: php-fpm --nodaemonize
    networks:
      - backend-network
      - frontend-network
  echo-server:
    container_name: k4fntr_echo
    image: oanhnn/laravel-echo-server
    volumes:
      - ./:/app
    environment:
      GENERATE_CONFIG: "false"
    depends_on:
      - app
    ports:
      - "6001:6001"
    networks:
      - backend-network
      - frontend-network
  nginx:
    container_name: k4fntr_nginx
    image: nginx
    volumes:
      - ./docker/nginx/config:/etc/nginx/conf.d
      - ./:/var/www/k4fntr
    depends_on:
      - *php-fpm-service
    ports:
      - "${NGINX_LISTEN_PORT}:80"
    networks:
      - frontend-network
  redis:
    container_name: k4fntr_redis
    image: redis
    restart: always
    command: redis-server
    volumes:
      - ./docker/redis/config/redis.conf:/usr/local/etc/redis/redis.conf
      - ./docker/redis/redis-data:/data:rw
    ports:
      - "16379:6379"
    networks:
      - backend-network
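As referenced above, the likely cause and a sketch of a fix: inside a container, localhost is the container itself, not the machine it runs on, so the host's Postgres has to be reached through the host gateway. On Docker Engine 20.10+ one option is to map the special name host.docker.internal to the gateway in the app service template and point the app at it; this is a sketch under those assumptions, not a tested fix:

services:
  app:
    extra_hosts:
      # resolves host.docker.internal to the Docker host's gateway address
      - "host.docker.internal:host-gateway"

# .env
DB_HOST=host.docker.internal

The pg_hba.conf line above (0.0.0.0/0 md5) already admits the Docker bridge subnet; also double-check that no firewall rule drops traffic arriving on the docker0 interface.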