Docker false positive on health check for MongoDB server - mongodb

I want to set up a replica set with MongoDB, and I want to detect when all the servers are ready to use.
I have configured the following docker-compose file:
version: '3.8'
services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:6.0.2
    restart: on-failure
    networks:
      - dashboard_network
    volumes:
      - ./docker/scripts/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']
  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:6.0.2
    ports:
      - 27017:27017
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - dashboard_network
  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:6.0.2
    ports:
      - 27018:27018
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - dashboard_network
  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:6.0.2
    ports:
      - 27019:27019
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - dashboard_network
My mongo-setup.sh file is:
#!/bin/bash
MONGODB_REPLICA_1=mongo_replica_1
MONGODB_REPLICA_2=mongo_replica_2
MONGODB_REPLICA_3=mongo_replica_3
echo "************ [ Waiting for startup ] **************" ${MONGODB_REPLICA_1}
until ncc -zvv ${MONGODB_REPLICA_1} 27017 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done
echo "************ [ Startup completed ] **************" ${MONGODB_REPLICA_1}
mongosh --host ${MONGODB_REPLICA_1}:27017 <<EOF
var cfg = {
  "_id": "dbrs",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    { "_id": 1, "host": "${MONGODB_REPLICA_1}:27017", "priority": 3 },
    { "_id": 2, "host": "${MONGODB_REPLICA_2}:27018", "priority": 2 },
    { "_id": 3, "host": "${MONGODB_REPLICA_3}:27019", "priority": 1 }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.secondaryOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSecondaryOk();
EOF
If I check the logs of mongo_launcher using docker logs mongo_launcher, I get:
************ [ Waiting for startup ] ************** mongo_replica_1
************ [ Startup completed ] ************** mongo_replica_1
Current Mongosh Log ID: 6367f96a8273830a6762893c
Connecting to: mongodb://mongo_replica_1:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017
************ [ Waiting for startup ] ************** mongo_replica_1
************ [ Startup completed ] ************** mongo_replica_1
Current Mongosh Log ID: 6367f96c7b96e8524fb103e7
Connecting to: mongodb://mongo_replica_1:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017
************ [ Waiting for startup ] ************** mongo_replica_1
************ [ Startup completed ] ************** mongo_replica_1
Current Mongosh Log ID: 6367f96f9fd15a9ae8bc32d6
Connecting to: mongodb://mongo_replica_1:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017
************ [ Waiting for startup ] ************** mongo_replica_1
************ [ Startup completed ] ************** mongo_replica_1
Current Mongosh Log ID: 6367f972d8f575d837789d62
Connecting to: mongodb://mongo_replica_1:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017
************ [ Waiting for startup ] ************** mongo_replica_1
************ [ Startup completed ] ************** mongo_replica_1
Current Mongosh Log ID: 6367f9746bfa236397cf67a5
Connecting to: mongodb://mongo_replica_1:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017
************ [ Waiting for startup ] ************** mongo_replica_1
************ [ Startup completed ] ************** mongo_replica_1
Current Mongosh Log ID: 6367f97842ac739549b27e4a
Connecting to: mongodb://mongo_replica_1:27017/?directConnection=true&appName=mongosh+1.6.0
MongoNetworkError: connect ECONNREFUSED 172.21.0.3:27017
************ [ Waiting for startup ] ************** mongo_replica_1
************ [ Startup completed ] ************** mongo_replica_1
Current Mongosh Log ID: 6367f97cb586d80909d5b2f8
Could anyone tell me why the script passes the

until ncc -zvv ${MONGODB_REPLICA_1} 27017 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

loop, but then fails to connect to the Mongo server? I don't want the script to get past this check if the server is not ready.
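The false positive comes from how the shell evaluates that pipeline: `until` only sees the exit status of the last command (`head`), which succeeds whether or not `grep` matched anything (and whether or not `ncc` even exists). A minimal, self-contained sketch of the pitfall:

```shell
# A pipeline's exit status is that of its LAST command, so `until` stops
# immediately even when grep matches nothing.
rc=0
printf 'still starting\n' | grep uptime | head -1 || rc=$?
echo "with head last: $rc"    # 0 -> the until loop exits at once

# Making grep the last stage (with -q) propagates its status instead,
# so the loop would keep waiting until the pattern really appears.
rc2=0
printf 'still starting\n' | grep -q uptime || rc2=$?
echo "with grep last: $rc2"   # 1 -> the until loop keeps waiting
```

Dropping `| head -1` and using `grep -q` (or enabling `set -o pipefail`) makes the loop's exit status reflect the actual match.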

Related

is it possible to run Hyperledger Explorer on ARM64 like Raspberry pi?

I am trying to see the blocks of Fabric using Hyperledger Explorer on a Raspberry Pi. Every time I run docker-compose up the result is always the same:

Creating explorerdb.mynetwork.com ... done
Creating explorer.mynetwork.com ... done
Attaching to explorerdb.mynetwork.com, explorer.mynetwork.com
explorer.mynetwork.com | exec /usr/local/bin/docker-entrypoint.sh: exec format error
explorerdb.mynetwork.com | 2022-12-13 19:59:02.094 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
explorerdb.mynetwork.com | 2022-12-13 19:59:02.096 UTC [1] LOG: listening on IPv6 address "::", port 5432
explorerdb.mynetwork.com | 2022-12-13 19:59:02.102 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
explorerdb.mynetwork.com | 2022-12-13 19:59:02.152 UTC [20] LOG: database system was shut down at 2022-12-13 19:58:39 UTC
explorerdb.mynetwork.com | 2022-12-13 19:59:02.164 UTC [1] LOG: database system is ready to accept connections
explorer.mynetwork.com exited with code 1
my docker-compose.yaml:

version: '2.1'
volumes:
  pgdata:
  walletstore:
networks:
  mynetwork.com:
    name: el_red
services:
  explorerdb.mynetwork.com:
    image: hyperledger/explorer-db:latest
    container_name: explorerdb.mynetwork.com
    hostname: explorerdb.mynetwork.com
    environment:
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWORD=password
    healthcheck:
      test: "pg_isready -h localhost -p 5432 -q -U postgres"
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork.com
  explorer.mynetwork.com:
    image: hyperledger/explorer:latest
    container_name: explorer.mynetwork.com
    hostname: explorer.mynetwork.com
    environment:
      - DATABASE_HOST=explorerdb.mynetwork.com
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWD=password
      - LOG_LEVEL_APP=debug
      - LOG_LEVEL_DB=debug
      - LOG_LEVEL_CONSOLE=info
      - LOG_CONSOLE_STDOUT=true
      - DISCOVERY_AS_LOCALHOST=false
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
      - ./organizations:/tmp/crypto
      - walletstore:/opt/explorer/wallet
    ports:
      - 8080:8080
    depends_on:
      explorerdb.mynetwork.com:
        condition: service_healthy
    networks:
      - mynetwork.com
my test-network.json:

{
  "name": "my-network",
  "version": "1.0.0",
  "client": {
    "tlsEnable": true,
    "adminCredential": {
      "id": "exploreradmin",
      "password": "exploreradminpw"
    },
    "enableAuthentication": true,
    "organization": "Org1MSP",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "channels": {
    "mychannel": {
      "peers": {
        "peer0.org1.example.com": {}
      }
    }
  },
  "organizations": {
    "Org1MSP": {
      "mspid": "Org1MSP",
      "adminPrivateKey": {
        "pem": "-----BEGIN PRIVATE KEY-----\nMIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgtkP3OchnVeSd6c0n\ns/SXp7E3JLiBQZExi1UVXuCQYcahRANCAAQgJLvV9SaRC550c1hyDVfDao1MaxJU\nlvnDq1Yi51/d2d5sLndQ4q33nuAoybKIR3eQIrvE2Wu4wTGQCL2r3t2F\n-----END PRIVATE KEY-----\n"
      },
      "peers": ["peer0.org1.example.com"],
      "signedCert": {
        "path": "/tmp/crypto//peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/signcerts/cert.pem"
      }
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "tlsCACerts": {
        "path": "/tmp/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
      },
      "url": "grpcs://peer0.org1.example.com:7051"
    }
  }
}
I wait for about an hour every time I run docker-compose up, but nothing happens. Please help. Thank you so much in advance.
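"exec format error" almost always means the image's binaries were built for a different CPU architecture than the host; the error on a Raspberry Pi suggests the pulled hyperledger/explorer image is amd64-only, while the Pi runs ARM. As a hedged sketch (assuming a Compose version that supports the platform key, and that qemu/binfmt emulation is installed), you can pin the platform explicitly, though building the image from source for ARM64 is the more robust fix:

```yaml
  explorer.mynetwork.com:
    image: hyperledger/explorer:latest
    platform: linux/amd64   # run the amd64 image under emulation on an ARM host
```

Without emulation installed, the container will still fail the same way, so checking the image's published architectures first is worthwhile.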

Cannot connect with MongoDB Compass to dockerized MongoDB replica set

I have the following docker-compose.yaml file:
version: '3.8'
services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:5.0.8
    restart: on-failure
    networks:
      - mongo_network
    volumes:
      - ./docker/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']
  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:5.0.8
    ports:
      - 27017:27017
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:5.0.8
    ports:
      - 27018:27018
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:5.0.8
    ports:
      - 27019:27019
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - mongo_network
networks:
  mongo_network:
    driver: bridge
Note that the first service will use the mongo-setup.sh script:
#!/bin/bash
MONGODB_REPLICA_1=mongo_replica_1
MONGODB_REPLICA_2=mongo_replica_2
MONGODB_REPLICA_3=mongo_replica_3
echo "************ [ Waiting for startup ] **************" ${MONGODB_REPLICA_1}
until curl http://${MONGODB_REPLICA_1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done
echo "************ [ Startup completed ] **************" ${MONGODB_REPLICA_1}
mongosh --host ${MONGODB_REPLICA_1}:27017 <<EOF
var cfg = {
  "_id": "dbrs",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    { "_id": 1, "host": "${MONGODB_REPLICA_1}:27017", "priority": 3 },
    { "_id": 2, "host": "${MONGODB_REPLICA_2}:27018", "priority": 2 },
    { "_id": 3, "host": "${MONGODB_REPLICA_3}:27019", "priority": 1 }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.secondaryOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSecondaryOk();
EOF
When I run docker-compose up -d, it succeeds and when I run docker ps I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c8f6d916e4a mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 27017/tcp, 0.0.0.0:27018->27018/tcp mongo_replica_2
c59cb625362e mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 0.0.0.0:27017->27017/tcp mongo_replica_1
cb61093e4dd0 mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 27017/tcp, 0.0.0.0:27019->27019/tcp mongo_replica_3
But when I try to connect to one of the replica-set members with MongoDB Compass, I get the following error:
So it seems the containers are all running fine; why do I get this issue? Please advise.
I could solve the issue by appending the following content to my /etc/hosts file:
127.0.0.1 mongo_replica_1
127.0.0.1 mongo_replica_2
127.0.0.1 mongo_replica_3
Could anyone come up with a more elegant way?
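A likely explanation for why the hosts-file trick works (an assumption based on how replica-set clients connect): Compass asks one member for the set configuration, then reconnects to the hostnames stored there (mongo_replica_1 and friends), which only resolve inside the Docker network; mapping them to 127.0.0.1 on the host is what lets that second step succeed. One way to sidestep discovery entirely is to connect to a single member with directConnection enabled, e.g. a connection string like:

```
mongodb://localhost:27017/?directConnection=true
```

With directConnection the client talks only to that one node and never tries to resolve the other members' hostnames, though that session then has no automatic failover.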
I tried to bypass the /etc/hosts solution, so I edited the docker-compose.yaml file and applied extra_hosts:
version: '3.8'
services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:5.0.8
    restart: on-failure
    networks:
      - mongo_network
    volumes:
      - ./docker/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']
  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:5.0.8
    ports:
      - 27017:27017
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:5.0.8
    ports:
      - 27018:27018
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:5.0.8
    ports:
      - 27019:27019
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - mongo_network
networks:
  mongo_network:
    driver: bridge
Then I edited the mongo-setup.sh to:
#!/bin/bash
echo "************ [ Waiting for startup ] **************"
until curl http://${MONGODB_REPLICA_1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done
echo "************ [ Startup completed ] **************"
mongosh --host localhost:27017 <<EOF
var cfg = {
  "_id": "dbrs",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    { "_id": 1, "host": "localhost:27017", "priority": 3 },
    { "_id": 2, "host": "localhost:27018", "priority": 2 },
    { "_id": 3, "host": "localhost:27019", "priority": 1 }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.secondaryOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSecondaryOk();
EOF
And I removed the edits I made to the /etc/hosts file. I no longer get the error from before, but I do get a timeout:

Unable to connect to MongoDB container from Flask container

#Docker compose file
version: "3.4" # optional since v1.27.0
services:
  flaskblog:
    build:
      context: flaskblog
      dockerfile: Dockerfile
    image: flaskblog
    restart: unless-stopped
    environment:
      APP_ENV: "prod"
      APP_DEBUG: "False"
      APP_PORT: 5000
      MONGODB_DATABASE: flaskdb
      MONGODB_USERNAME: flaskuser
      MONGODB_PASSWORD:
      MONGODB_HOSTNAME: mongodbuser
    volumes:
      - appdata:/var/www
    depends_on:
      - mongodb
    networks:
      - frontend
      - backend
  mongodb:
    image: mongo:4.2
    #container_name: mongodb
    restart: unless-stopped
    command: mongod --auth
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD:
      MONGO_INITDB_DATABASE: flaskdb
      MONGODB_DATA_DIR: /data/db2
      MONDODB_LOG_DIR: /dev/null
    volumes:
      - mongodbdata:/data/db2
      #- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    networks:
      - backend
networks:
  backend:
    driver: bridge
volumes:
  mongodbdata:
    driver: local
  appdata:
    driver: local
#Flask container file
FROM python:3.8-slim-buster
#python:3.6-stretch
LABEL MAINTAINER="Shekhar Banerjee"
ENV GROUP_ID=1000 \
USER_ID=1000 \
SECRET_KEY="4" \
EMAIL_USER="dankml.com" \
EMAIL_PASS="nzw"
RUN mkdir /home/logix3
WORKDIR /home/logix3
ADD . /home/logix3/
RUN pip install -r requirements.txt
RUN pip install gunicorn
RUN groupadd --gid $GROUP_ID www
RUN useradd --create-home -u $USER_ID --shell /bin/sh --gid www www
USER www
EXPOSE 5000/tcp
#CMD ["python","run.py"]
CMD [ "gunicorn", "-w", "4", "--bind", "127.0.0.1:5000", "run:app"]
These are my docker-compose file and Dockerfile for an application. The MongoDB and Flask containers work fine individually, but when I try to run them together I get no response on localhost, and there are no error messages in the logs or the containers.
Curl in the system gives this :
$ curl -i http://127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
Can anyone suggest how I can debug this?
Only the backend network is functional:
[
  {
    "Name": "mlspace_backend",
    "Id": "e7e37cad0058330c99c55ffea2b98281e0c6526763e34550db24431b30030b77",
    "Created": "2022-05-15T22:52:25.0281001+05:30",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": null,
      "Config": [
        {
          "Subnet": "172.20.0.0/16",
          "Gateway": "172.20.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": true,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "12c50f5f70d18b4c7ffc076177b59ff063a8ff81c4926c8cae0bf9e74dc2fc83": {
        "Name": "mlspace_mongodb_1",
        "EndpointID": "8278f672d9211aec9b539e14ae4eeea5e8f7aaef95448f44aab7ec1a8c61bb0b",
        "MacAddress": "02:42:ac:14:00:02",
        "IPv4Address": "172.20.0.2/16",
        "IPv6Address": ""
      },
      "75cde7a34d6b3c4517c536a06349579c6c39090a93a017b2c280d987701ed0cf": {
        "Name": "mlspace_flaskblog_1",
        "EndpointID": "20489de8841d937f01768170a89f74c37ed049d241b096c8de8424c51e58704c",
        "MacAddress": "02:42:ac:14:00:03",
        "IPv4Address": "172.20.0.3/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {
      "com.docker.compose.network": "backend",
      "com.docker.compose.project": "mlspace",
      "com.docker.compose.version": "1.23.2"
    }
  }
]
I checked the logs of the containers and found no errors; the Flask engine seems to be running, but the site won't load, and even curl gives no output.
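Two details in the files above would each produce exactly this symptom; both are assumptions to verify rather than a definitive diagnosis. First, the flaskblog service publishes no ports in the compose file, so nothing maps host port 5000 into the container. Second, gunicorn binds to 127.0.0.1, which inside a container is reachable only from that same container, so even a published port would be refused. A hedged sketch of the Dockerfile fix:

```dockerfile
# Bind to all interfaces; 127.0.0.1 inside a container is not reachable
# through a published port.
CMD ["gunicorn", "-w", "4", "--bind", "0.0.0.0:5000", "run:app"]
```

together with a ports mapping such as "5000:5000" on the flaskblog service in docker-compose.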

System.TimeoutException: A timeout occurred after 30000ms MongoDb

I am trying to compose my application (an ASP.NET Core Web API and MongoDB), but I encountered an error while trying to connect to the db:
System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : "1", Type : "Unknown", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/localhost:27017" }", EndPoint: "Unspecified/localhost:27017", ReasonChanged: "Heartbeat", State: "Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.
My appsettings.json:
{
  "DatabaseSettings": {
    "ConnectionString": "mongodb://localhost:27017",
    "DatabaseName": "CatalogDb",
    "CollectionName": "Products"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}
docker-compose.override:

version: '3.4'
services:
  catalogdb:
    container_name: catalogdb
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
  eshop.catalog.api:
    container_name: catalog.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "DatabaseSettings:ConnectionString=mongodb://catalogdb:27017"
    depends_on:
      - catalogdb
    ports:
      - "8000:80"

docker-compose:

version: '3.4'
services:
  catalogdb:
    image: mongo
  eshop.catalog.api:
    image: ${DOCKER_REGISTRY-}eshopcatalogapi
    build:
      context: .
      dockerfile: EShop.Catalog.API/Dockerfile
volumes:
  mongo_data:
Solved. I changed the compose command to docker-compose -f .\docker-compose.yml -f .\docker-compose.override.yml up -d and it works now.

Implement docker mongodb replica set config settings inside .yml file

I have set up a MongoDB replica set and it is running properly.
But I want to run the config settings from the .yml file itself, instead of initiating them inside a replica-set node.
by config settings I mean:
1.
config = {
  "_id": "comments",
  "members": [
    { "_id": 0, "host": "node1:27017" },
    { "_id": 1, "host": "node2:27017" },
    { "_id": 2, "host": "node3:27017" }
  ]
}
and the below:
2.
rs.initiate(config)
So I figured out a way; hope it helps. Below I will post my .yml file, rsinit file, and rs.sh file. All the files should be placed in the same location for this to work; the rest of the configuration is already written into them.
.yml file:
version: '3.7'
services:
  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo
    expose:
      - 27017
    ports:
      - 27017:27017
    restart: always
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]
    # volumes:
    #   - /data/db/mongotest:/data/db # This is where your volume will persist. e.g. VOLUME-DIR = ./volumes/mongodb
  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo
    expose:
      - 27017
    ports:
      - 27018:27017
    restart: always
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]
  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo
    expose:
      - 27017
    ports:
      - 27019:27017
    restart: always
    entrypoint: ["/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0"]
  rsinit:
    build:
      context: .
      dockerfile: rsinit
    depends_on:
      - mongo1
      - mongo2
      - mongo3
    entrypoint: ["sh", "-c", "rs.sh"]
rsinit (a plain text file used as a Dockerfile):
FROM mongo
ADD rs.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/rs.sh
rs.sh file:

#!/bin/bash
echo "prepare rs initiating"
check_db_status() {
  mongo1=$(mongo --host mongo1 --port 27017 --eval "db.stats().ok" | tail -n1 | grep -E '(^|\s)1($|\s)')
  mongo2=$(mongo --host mongo2 --port 27017 --eval "db.stats().ok" | tail -n1 | grep -E '(^|\s)1($|\s)')
  mongo3=$(mongo --host mongo3 --port 27017 --eval "db.stats().ok" | tail -n1 | grep -E '(^|\s)1($|\s)')
  if [[ $mongo1 == 1 ]] && [[ $mongo2 == 1 ]] && [[ $mongo3 == 1 ]]; then
    init_rs
  else
    sleep 1 # brief pause so the retry does not spin while mongod is still starting
    check_db_status
  fi
}
init_rs() {
  ret=$(mongo --host mongo1 --port 27017 --eval "rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'mongo1:27017' }, { _id: 1, host: 'mongo2:27017' }, { _id: 2, host: 'mongo3:27017' } ] })" > /dev/null 2>&1)
}
check_db_status > /dev/null 2>&1
echo "rs initiating finished"
exit 0
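The recursive check_db_status works, but each failed probe re-enters the function with no upper bound. A generic bounded-retry helper is one alternative; this is a sketch with hypothetical names (wait_for, and true/false standing in for the real mongo --eval probe):

```shell
#!/bin/bash
# wait_for ATTEMPTS CMD...: run CMD until it succeeds or ATTEMPTS runs out.
wait_for() {
  attempts=$1; shift
  while [ "$attempts" -gt 0 ]; do
    if "$@"; then
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 1
  done
  return 1
}

# In rs.sh the probe would be the mongo --eval "db.stats().ok" pipeline;
# `true`/`false` here just demonstrate the success and give-up paths.
wait_for 3 true && echo "probe succeeded"
wait_for 2 false || echo "gave up after 2 attempts"
```

Calling it once per node (wait_for 30 mongo --host mongo1 ... and so on) before init_rs keeps the startup wait bounded instead of retrying forever.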