How to use /var/run/docker.sock inside a running docker-compose container? - postgresql

I have a docker-compose.yml like this:
version: '3'
services:
  zabbix-agent:
    image: zabbix/zabbix-agent
    ports:
      - "10050:10050"
      - "10051:10051"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
      - ./zbx_env/etc/zabbix/zabbix_agentd.d:/etc/zabbix/zabbix_agentd.d:ro
      - ./zbx_env/var/lib/zabbix/modules:/var/lib/zabbix/modules:ro
      - ./zbx_env/var/lib/zabbix/enc:/var/lib/zabbix/enc:ro
      - ./zbx_env/var/lib/zabbix/ssh_keys:/var/lib/zabbix/ssh_keys:ro
    links:
      - db
    env_file:
      - .env_agent
    user: root
    privileged: true
    pid: "host"
    stop_grace_period: 5s
    labels:
      com.zabbix.description: "Zabbix agent"
      com.zabbix.company: "Zabbix SIA"
      com.zabbix.component: "zabbix-agentd"
      com.zabbix.os: "ubuntu"
  postgres-server:
    image: postgres:latest
    volumes:
      - ./zbx_env/var/lib/postgresql/data:/var/lib/postgresql/data:rw
    env_file:
      - .env_db_pgsql
    user: root
    stop_grace_period: 1m
In zabbix-agent I use a UserParameter like this:
...
UserParameter=pgsql.ping[*],/bin/echo -e "\\\timing \n select 1" | psql -qAtX $1 | tail -n 1 |cut -d' ' -f2|sed 's/,/./'
...
When I call this UserParameter from zabbix-server, I get an error saying psql does not exist. And that's correct: psql does not exist in the 'zabbix-agent' container.
How can I run the psql that lives in 'postgres-server' from 'zabbix-agent' and get the result?
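For reference, here is what the pipeline in that UserParameter does to typical psql "\timing" output. This is a simulated run (psql itself is not involved); the sample line assumes a locale with a comma decimal separator, which is why the sed 's/,/./' step is there:

```shell
# Simulated psql "\timing" output; the last line looks like "Time: 0,584 ms".
# tail keeps that line, cut grabs the second field, sed fixes the decimal comma.
out=$(printf 'Timing is on.\n1\nTime: 0,584 ms\n' | tail -n 1 | cut -d' ' -f2 | sed 's/,/./')
echo "$out"   # prints 0.584
```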

Just run:
curl -H 'Content-Type: application/json' --unix-socket /var/run/docker.sock localhost:4243/containers/zabbix-agent/exec -d '{"Cmd":["date"]}'
For examples of how to make such requests, see:
https://docs.docker.com/develop/sdk/examples/
The API reference is here:
https://docs.docker.com/engine/api/v1.27/#operation/ContainerExec
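Note that the call above only creates the exec instance; nothing actually runs until a second request starts it. A fuller sketch of the two-step flow (hedged: assumes jq is installed, a running Docker daemon, and that the Postgres container is named 'postgres-server' as in the compose file above):

```shell
# 1) Create an exec instance in the postgres container and capture its Id.
EXEC_ID=$(curl -s -H 'Content-Type: application/json' \
  --unix-socket /var/run/docker.sock \
  http://localhost/containers/postgres-server/exec \
  -d '{"AttachStdout": true, "AttachStderr": true, "Cmd": ["psql", "-U", "postgres", "-qAtX", "-c", "select 1"]}' \
  | jq -r .Id)

# 2) Start it; the command's output comes back on this response.
curl -s -H 'Content-Type: application/json' \
  --unix-socket /var/run/docker.sock \
  "http://localhost/exec/${EXEC_ID}/start" \
  -d '{"Detach": false, "Tty": false}'
```

Without a TTY, the returned stream is multiplexed with 8-byte frame headers, so expect a few leading binary bytes before the payload.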

Related

How to write bash script for init-mongo.sh:ro to add admin user and run an initialisation script

My docker compose of mongo fails to authenticate when I add a docker-entrypoint-initdb.d/init-mongo.sh:ro file
docker-compose
version: "3.1"
services:
  mongo1:
    container_name: mongo1
    image: mongo
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASS}
      MONGO_INITDB_DATABASE: ${MONGO_DB_NAME}
    ports:
      - 27017:27017
      - 28017:28017
    env_file:
      - .env
    volumes:
      - volume-mongo:/data/db
      - ./mongo/init-mongo-js.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
    command: ['--auth', '--wiredTigerCacheSizeGB=1']
    networks:
      - mongo-network
#  mongo-express:
#    container_name: mongo-express
#    image: mongo-express
#    restart: always
#    ports:
#      - '8081:8081'
#    env_file:
#      - .env
#    environment:
#      - ME_CONFIG_OPTIONS_EDITORTHEME=ambiance
#      - ME_CONFIG_BASICAUTH_USERNAME=${MONGO_USER}
#      - ME_CONFIG_BASICAUTH_PASSWORD=${MONGO_PASS}
#      - ME_CONFIG_MONGODB_ENABLE_ADMIN=true
#      - ME_CONFIG_MONGODB_AUTH_DATABASE=admin
#      - ME_CONFIG_MONGODB_SERVER=mongo1
#      - ME_CONFIG_MONGODB_ADMINUSERNAME=${MONGO_USER}
#      - ME_CONFIG_MONGODB_ADMINPASSWORD=${MONGO_PASS}
#    depends_on:
#      - mongo1
#    networks:
#      - mongo-network
volumes:
  volume-mongo:
    driver: local
networks:
  mongo-network:
    driver: bridge
.env
MONGO_DB_NAME=DBX
MONGO_USER=MYNEWUSERNAME
MONGO_PASS=somesecret
# Mongo host
Attempting to login fails
docker exec -it mongo1 mongo admin -u MYNEWUSERNAME -p somesecret
When I remove the reference to the init-mongo.sh:ro file, authentication works, but then my initialisation script doesn't run.
I have been referred to this article:
https://github.com/docker-library/mongo/issues/174
I'm assuming the reason it fails is that my bash file, init-mongo.sh, is overriding this part of the image's entrypoint script:
if [ "$MONGO_INITDB_ROOT_USERNAME" ] && [ "$MONGO_INITDB_ROOT_PASSWORD" ]; then
    rootAuthDatabase='admin'

    "${mongo[@]}" "$rootAuthDatabase" <<-EOJS
		db.createUser({
			user: $(_js_escape "$MONGO_INITDB_ROOT_USERNAME"),
			pwd: $(_js_escape "$MONGO_INITDB_ROOT_PASSWORD"),
			roles: [ { role: 'root', db: $(_js_escape "$rootAuthDatabase") } ]
		})
	EOJS
fi
If that theory is correct, then merging that code into my script along with my additional changes seems like a good solution. The only problem is that in VS Code, with the shell-format extension, the syntax highlighting of the combined code doesn't look right, so I suspect it's wrong.
init-mongo.sh:ro
Attempting to combine the following:
#!/bin/bash
set -e
# set -e causes the whole script to exit when a command fails, so the script can't silently fail and start up mongo.

if [ "$MONGO_INITDB_ROOT_USERNAME" ] && [ "$MONGO_INITDB_ROOT_PASSWORD" ]; then
    rootAuthDatabase='admin'

    "${mongo[@]}" "$rootAuthDatabase" <<-EOJS
		db.createUser({
			user: $(_js_escape "$MONGO_INITDB_ROOT_USERNAME"),
			pwd: $(_js_escape "$MONGO_INITDB_ROOT_PASSWORD"),
			roles: [ { role: 'root', db: $(_js_escape "$rootAuthDatabase") } ]
		})
	EOJS
fi

mongo <<EOF
use ${MONGO_DB_NAME}
db.createCollection("users")
db.users.insert({"name": "mike"})
EOF
Any help in fixing this bash script is appreciated. Thanks.
I have experimented with removing these environment entries from the docker-compose file:
environment:
  MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
  MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASS}
  MONGO_INITDB_DATABASE: ${MONGO_DB_NAME}
and instead create the admin user directly in the init file.
#!/bin/bash
set -e
mongo <<EOF
use admin
db.createUser(
  {
    user: "${MONGO_USER}",
    pwd: "${MONGO_PASS}",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" }, "readWriteAnyDatabase" ]
  }
)
use ${MONGO_DB_NAME}
db.createCollection("users")
db.users.insert({"name": "john"})
EOF
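One gotcha worth checking (my addition, not from the answer above): the official image only runs /docker-entrypoint-initdb.d scripts when /data/db is empty, so after changing the init script the named volume has to be recreated before re-testing the login:

```shell
# Wipe the named volume so the init script runs again, then retest the login.
docker-compose down -v
docker-compose up -d mongo1
docker exec -it mongo1 mongo admin -u MYNEWUSERNAME -p somesecret \
  --eval "db.runCommand({ connectionStatus: 1 })"
```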

Mongo ReplicaSet in Docker - couldn't add user: not master

I am trying to set up a replica set but I'm having a problem with the initialisation.
The FIRST time I run it, I get:
db_1 | uncaught exception: Error: couldn't add user: not master
And on each run after that:
db_1 | {"t":{"$date":"2020-09-16T16:06:05.341+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn1","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","principalName":"user","authenticationDatabase":"admin","client":"172.18.0.5:37916","result":"UserNotFound: Could not find user \"user\" for db \"admin\""}}
db_1 | {"t":{"$date":"2020-09-16T16:06:05.342+00:00"},"s":"I", "c":"ACCESS", "id":20249, "ctx":"conn1","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-1","principalName":"user","authenticationDatabase":"admin","client":"172.18.0.5:37916","result":"UserNotFound: Could not find user \"user\" for db \"admin\""}}
db_1 | {"t":{"$date":"2020-09-16T16:06:05.349+00:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn1","msg":"connection ended","attr":{"remote":"172.18.0.5:37916","connectionCount":0}}
setup_1 | Error: Authentication failed. :
setup_1 | connect#src/mongo/shell/mongo.js:362:17
setup_1 | #(connect):2:6
setup_1 | exception: connect failed
setup_1 | exiting with code 1
docker_setup_1 exited with code 1
my setup is:
/ docker-compose.yml
version: "3"
services:
  db:
    image: db
    build:
      context: .
      dockerfile: DockerfileDb
    environment:
      MONGO_INITDB_ROOT_USERNAME: user
      MONGO_INITDB_ROOT_PASSWORD: xxxx
    ports:
      - "34000:34000"
    volumes:
      - mongodata:/data/db
      - ./mongologs:/data/logs
  db2:
    image: db
    ports:
      - "34001:34000"
    volumes:
      - mongodata2:/data/db
  db3:
    image: db
    ports:
      - "34002:34000"
    volumes:
      - mongodata3:/data/db
  setup:
    image: setup
    build: ./replicaSetup
    depends_on:
      - db
      - db2
      - db3
    links:
      - "db:database"
      - "db2:database2"
      - "db3:database3"
volumes:
  mongodata:
  mongodata2:
  mongodata3:
/ DockerfileDb
FROM mongo
WORKDIR /usr/src/config
COPY replicaSetup/mongod.conf .
COPY replicaSetup/shared.key .
EXPOSE 34000
RUN chmod 700 shared.key
RUN chown 999:999 shared.key
CMD ["--config", "./mongod.conf"]
/ replicaSetup / mongod.conf
net:
  port: 34000
  bindIpAll: true
security:
  authorization: enabled
  keyFile: ./shared.key
replication:
  oplogSizeMB: 1024
  replSetName: amsdb
/ replicaSetup / Dockerfile
FROM mongo
# Create app directory
WORKDIR /usr/src/configs
# Install app dependencies
COPY replicaSet.js .
COPY setup.sh .
CMD ["./setup.sh"]
/ replicaSetup / setup.sh
sleep 10 | echo Sleeping
mongo mongodb://database:34000 -u "user" -p "xxxx" replicaSet.js
/ replicaSetup / replicaSet.js
rsconf = {
  _id: "amsdb",
  members: [
    { _id: 0, host: "database:34000" },
    { _id: 1, host: "database2:34001" },
    { _id: 2, host: "database3:34002" }
  ]
}
rs.initiate(rsconf);
rs.conf();
Thanks for any help!
You can do this by just using the base mongo image in docker-compose
Your setup should look like:
/ docker-compose.yml
version: "3.0"
services:
  # Worker 1
  mongo1:
    image: mongo:latest
    volumes:
      - ./replicaSetup:/opt/keyfile
      - mongodata:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: user
      MONGO_INITDB_ROOT_PASSWORD: xxxx
    ports:
      - 27017:27017
    command: 'mongod --auth --keyFile /opt/keyfile/shared.key --replSet amsdb'
  # Worker 2
  mongo2:
    image: mongo:latest
    volumes:
      - ./replicaSetup:/opt/keyfile
      - mongodata2:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: user
      MONGO_INITDB_ROOT_PASSWORD: xxxx
    ports:
      - 27018:27017
    command: 'mongod --auth --keyFile /opt/keyfile/shared.key --replSet amsdb'
  # Worker 3
  mongo3:
    image: mongo:latest
    volumes:
      - ./replicaSetup:/opt/keyfile
      - mongodata3:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: user
      MONGO_INITDB_ROOT_PASSWORD: xxxx
    ports:
      - 27019:27017
    command: 'mongod --auth --keyFile /opt/keyfile/shared.key --replSet amsdb'
volumes:
  mongodata:
  mongodata2:
  mongodata3:
/ replicaSetup / Dockerfile - stays the same
/ replicaSetup / setup.sh - stays the same
/ replicaSetup / replicaSet.js
rsconf = {
  _id: "amsdb",
  members: [
    { _id: 0, host: "172.17.0.1:27017", priority: 1 },
    { _id: 1, host: "172.17.0.1:27018", priority: 1 },
    { _id: 2, host: "172.17.0.1:27019", priority: 1 }
  ]
}
rs.initiate(rsconf);
rs.conf();
At the time of writing, "mongo:latest" resolves to v4.4.1. This answer is for that version of the entrypoint script: https://github.com/docker-library/mongo/blob/master/4.4/docker-entrypoint.sh
In order to process the MONGO_INITDB_ROOT_* environment variables and add the user to the database, the database has to be started in standalone mode first. The current implementation does not detect a replica set configured in a .conf file, only one passed through command-line arguments.
Either pass the arguments on the command line in the Dockerfile ("--bind_ip_all --replSet amsdb --port 34000", etc.) or create a PR for docker-entrypoint.sh to support a config file.
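A sketch of the first option (untested), adapting the DockerfileDb from the question so that the replica-set options travel as command-line arguments instead of mongod.conf, which lets the entrypoint detect them:

```dockerfile
FROM mongo
WORKDIR /usr/src/config
COPY replicaSetup/shared.key .
RUN chmod 400 shared.key && chown 999:999 shared.key
EXPOSE 34000
# Same settings as mongod.conf, but as command-line arguments
CMD ["--bind_ip_all", "--port", "34000", "--auth", \
     "--keyFile", "/usr/src/config/shared.key", \
     "--replSet", "amsdb", "--oplogSize", "1024"]
```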

Fiware: Create Postgresql

I have Cygnus installed and running (subscribed to Orion), and Orion receives notifications from the client (via the ioagent). How do I start and create the PostgreSQL databases for persistence?
I am accessing FIWARE from a remote server and am not sure which command to execute to start Postgres.
There is a FIWARE tutorial available about persisting historic data to PostgreSQL.
Within docker-compose, PostgreSQL can be set up as follows:
postgres-db:
  image: postgres
  hostname: postgres-db
  container_name: db-postgres
  expose:
    - "5432"
  ports:
    - "5432:5432"
  networks:
    - default
  environment:
    - "POSTGRES_PASSWORD=password"
    - "POSTGRES_USER=postgres"
    - "POSTGRES_DB=postgres"
The Cygnus configuration mirrors the PostgreSQL config, as shown:
cygnus:
  image: fiware/cygnus-ngsi
  hostname: cygnus
  container_name: fiware-cygnus
  networks:
    - default
  depends_on:
    - postgres-db
  expose:
    - "5080"
  ports:
    - "5050:5050"
    - "5080:5080"
  environment:
    - "CYGNUS_POSTGRESQL_HOST=postgres-db"
    - "CYGNUS_POSTGRESQL_PORT=5432"
    - "CYGNUS_POSTGRESQL_USER=postgres"
    - "CYGNUS_POSTGRESQL_PASS=password"
    - "CYGNUS_SERVICE_PORT=5050"
    - "CYGNUS_API_PORT=5080"
    - "CYGNUS_POSTGRESQL_ENABLE_CACHE=true"
The database instance should be created when a subscription that notifies Cygnus is fired, for example:
curl -iX POST \
  'http://orion:1026/v2/subscriptions' \
  -H 'Content-Type: application/json' \
  -H 'fiware-service: <xxxxxx>' \
  -H 'fiware-servicepath: <yyyyyy>' \
  -d '{
  "description": "Notify Cygnus of all context changes",
  "subject": {
    "entities": [
      {
        "idPattern": ".*"
      }
    ]
  },
  "notification": {
    "http": {
      "url": "http://cygnus:5050/notify"
    },
    "attrsFormat": "legacy"
  },
  "throttling": 5
}'
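To confirm that Cygnus has actually written anything, one can inspect the Postgres container directly (a sketch; the container name and credentials come from the compose snippet above, and the cygnus-ngsi PostgreSQL sink creates its tables under a schema derived from the fiware-service):

```shell
# List the schemas and tables Cygnus has created so far.
docker exec -it db-postgres psql -U postgres -d postgres -c '\dn'
docker exec -it db-postgres psql -U postgres -d postgres -c '\dt *.*'
```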

Configuring a MongoDB Cluster

I need to configure a MongoDB cluster with two nodes. Each node should have all data replicated, and if one server dies, the other should take over as primary.
Before configuring the real cluster, I'm doing a local test using docker instances. As I saw in the documentation, I should have at least 3 instances of MongoDB. Is that correct?
First I created the three instances with docker, then I configured instance one to be the primary. The code below is my docker-compose file and the configure script.
The Docker compose:
version: '3'
services:
  mongo-2:
    container_name: mongo-2
    image: mongo:4
    ports:
      - 30102:27017
    command: mongod --replSet cnf-serv --port 27017 --oplogSize 16 --bind_ip_all
    restart: always
  mongo-3:
    container_name: mongo-3
    image: mongo:4
    ports:
      - 30103:27017
    command: mongod --replSet cnf-serv --port 27017 --oplogSize 16 --bind_ip_all
    restart: always
  mongo-1:
    container_name: mongo-1
    image: mongo:4
    ports:
      - 30101:27017
    command: mongod --replSet cnf-serv --port 27017 --oplogSize 16 --bind_ip_all
    links:
      - mongo-2:mongo-2
      - mongo-3:mongo-3
    restart: always
  mongo-setup:
    container_name: mongo-setup
    image: mongo:4
    depends_on:
      - mongo-1
      - mongo-2
      - mongo-3
    links:
      - mongo-1:mongo-1
      - mongo-2:mongo-2
      - mongo-3:mongo-3
    volumes:
      - ./scripts:/scripts
    environment:
      - MONGO1=mongo-1
      - MONGO2=mongo-2
      - MONGO3=mongo-3
      - RS=cnf-serv
      - PORT=27017
    entrypoint: [ "/scripts/setup.sh" ]
The configure script:
#!/bin/bash

mongodb1=`getent hosts ${MONGO1} | awk '{ print $1 }'`
mongodb2=`getent hosts ${MONGO2} | awk '{ print $1 }'`
mongodb3=`getent hosts ${MONGO3} | awk '{ print $1 }'`
port=${PORT:-27017}

echo "Waiting for startup.."
until mongo --host ${mongodb1}:${port} --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)' &>/dev/null; do
  printf '.'
  sleep 1
done

echo "Started.."
echo setup.sh time now: `date +"%T" `
mongo --host ${mongodb1}:${port} <<EOF
var cfg = {
  "_id": "${RS}",
  "protocolVersion": 1,
  "members": [
    { "_id": 100, "host": "${mongodb1}:${port}" },
    { "_id": 101, "host": "${mongodb2}:${port}" },
    { "_id": 102, "host": "${mongodb3}:${port}" }
  ]
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
EOF
When I try to connect to any of the externally mapped container ports, I get the same error message: Could not connect to MongoDB on the provided host and port.
How can I configure an internal cluster using docker?
A 2-node replica set won't be very much help; if one of the nodes goes down, then the other node can't take over as primary because it can't command a majority of votes in the election. You get some benefit in that you have a full copy of the data on the second node, but to get the full benefit of your replica set (both redundancy and high-availability), you would need to add a 3rd node.
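If a third data-bearing node is really not affordable, a common compromise (my addition, hedged) is to run the third member as an arbiter: it votes in elections so a primary can still be chosen, but it stores no data. The service name "mongo-arbiter" below is hypothetical; the other names follow the compose file in the question:

```js
// Hypothetical third compose service "mongo-arbiter"; it only votes.
rs.initiate({
  _id: "cnf-serv",
  members: [
    { _id: 0, host: "mongo-1:27017" },
    { _id: 1, host: "mongo-2:27017" },
    { _id: 2, host: "mongo-arbiter:27017", arbiterOnly: true }
  ]
})
```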

replica Set mongo docker-compose

I'm trying to configure a mongodb replica set using docker-compose, but when I stop the primary container, a secondary doesn't seem to take over.
redis:
  image: redis
  ports:
    - "6379:6379"
mongo3:
  hostname: mongo3
  image: mongo
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rs", "--journal", "--dbpath", "/data/db", "--smallfiles", "--rest" ]
  volumes:
    - ./data/mongo3:/data/db
  ports:
    - "27018:27017"
    - "28018:28017"
  restart: always
mongo2:
  hostname: mongo2
  image: mongo
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rs", "--journal", "--dbpath", "/data/db", "--smallfiles", "--rest" ]
  volumes:
    - ./data/mongo2:/data/db
  ports:
    - "27019:27017"
    - "28019:28017"
  restart: always
mongo1:
  hostname: mongo1
  image: mongo
  entrypoint: [ "/usr/bin/mongod", "--replSet", "rs", "--journal", "--dbpath", "/data/db", "--smallfiles", "--rest" ]
  volumes:
    - ./data/mongo1:/data/db
  ports:
    - "27017:27017"
    - "28017:28017"
  links:
    - mongo2:mongo2
    - mongo3:mongo3
  restart: always
web:
  build: .
  ports:
    - "2000:2000"
  volumes:
    - .:/vip
  links:
    - redis
    - mongo1
    - mongo2
    - mongo3
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  links:
    - web:web
mongosetup:
  image: mongo
  links:
    - mongo1:mongo1
    - mongo2:mongo2
    - mongo3:mongo3
  volumes:
    - ./scripts:/scripts
  entrypoint: [ "/scripts/setup.sh" ]
setup.sh :
#!/bin/bash

MONGODB1=`ping -c 1 mongo1 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
MONGODB2=`ping -c 1 mongo2 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`
MONGODB3=`ping -c 1 mongo3 | head -1 | cut -d "(" -f 2 | cut -d ")" -f 1`

echo "**********************************************" ${MONGODB1}
echo "Waiting for startup.."
until curl http://${MONGODB1}:28017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done
echo curl http://${MONGODB1}:28017/serverStatus\?text\=1 2>&1 | grep uptime | head -1

echo "Started.."
echo SETUP.sh time now: `date +"%T" `
mongo --host ${MONGODB1}:27017 <<EOF
var cfg = {
  "_id": "rs",
  "version": 1,
  "members": [
    { "_id": 0, "host": "${MONGODB1}:27017", "priority": 2 },
    { "_id": 1, "host": "${MONGODB2}:27017", "priority": 0 },
    { "_id": 2, "host": "${MONGODB3}:27017", "priority": 0 }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.slaveOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSlaveOk();
EOF
I had a similar issue and resolved it with the following compose file:
version: "3.8"
services:
  mongo1:
    image: mongo:4.2
    container_name: mongo1
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30001"]
    volumes:
      - ./data/mongo-1:/data/db
    ports:
      - 30001:30001
    healthcheck:
      test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongo1:30001\"},{_id:1,host:\"mongo2:30002\"},{_id:2,host:\"mongo3:30003\"}]}).ok || rs.status().ok" | mongo --port 30001 --quiet) -eq 1
      interval: 10s
      start_period: 30s
  mongo2:
    image: mongo:4.2
    container_name: mongo2
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30002"]
    volumes:
      - ./data/mongo-2:/data/db
    ports:
      - 30002:30002
  mongo3:
    image: mongo:4.2
    container_name: mongo3
    command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "30003"]
    volumes:
      - ./data/mongo-3:/data/db
    ports:
      - 30003:30003
with the following in my /etc/hosts file:
127.0.0.1 mongo1
127.0.0.1 mongo2
127.0.0.1 mongo3
I documented it in a GitHub repo and with a little blog post here:
https://github.com/UpSync-Dev/docker-compose-mongo-replica-set
https://www.upsync.dev/2021/02/02/run-mongo-replica-set.html
Update: This does not work! You do need to run rs.initiate()
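Following that update, the set can be initiated manually once the containers are up (a sketch; container names and ports follow the compose file above):

```shell
# Run rs.initiate() once through the first member.
docker exec mongo1 mongo --port 30001 --quiet --eval '
  rs.initiate({
    _id: "my-replica-set",
    members: [
      { _id: 0, host: "mongo1:30001" },
      { _id: 1, host: "mongo2:30002" },
      { _id: 2, host: "mongo3:30003" }
    ]
  })'
```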
With MongoDB 4.0, you don't need a 4th container to run a setup script. It is really simple to bring up a replicaSet of 3 containers:
version: "3"
services:
  mongo1:
    hostname: mongo1
    container_name: localmongo1
    image: mongo:4.0-xenial
    expose:
      - 27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo2:
    hostname: mongo2
    container_name: localmongo2
    image: mongo:4.0-xenial
    expose:
      - 27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
  mongo3:
    hostname: mongo3
    container_name: localmongo3
    image: mongo:4.0-xenial
    expose:
      - 27017
    restart: always
    entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
More info here: https://github.com/msound/localmongo/tree/4.0
I was looking for how to start MongoDB Replica Set with one DB instance for local development and ended up here. I found the answers here too complicated, so I came up with the following solution:
docker-compose.yml
version: "3"
services:
  mongo:
    hostname: mongodb
    container_name: mongodb
    image: mongo:latest
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - ./scripts:/docker-entrypoint-initdb.d/
    command: ["--replSet", "rs0", "--bind_ip_all"]
There is a folder called 'scripts' in the current directory with a single file in it called 'init.js' (the name is not important). This folder is mounted as a volume into '/docker-entrypoint-initdb.d/', which is a special directory: when MongoDB starts with an empty data directory, every file in it is executed. The content of the file is:
init.js
rs.initiate();
I would advise you to have a look at khezen/mongo.
You can deploy a mongo replica set across a 3-node docker swarm with the following:
version: '3'
services:
  replica1:
    image: khezen/mongo:slim
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        node.hostname: node-1
    environment:
      RS_NAME: shard1
      SHARD_SVR: 'y'
      AUTH: 'y'
    volumes:
      - /data/mongo/replica1:/data/db
    networks:
      - mongo_cluster
  replica2:
    image: khezen/mongo:slim
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        node.hostname: node-2
    environment:
      RS_NAME: shard1
      SHARD_SVR: 'y'
      AUTH: 'y'
    volumes:
      - /data/mongo/replica2:/data/db
    networks:
      - mongo_cluster
  replica3:
    image: khezen/mongo:slim
    deploy:
      mode: replicated
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      placement:
        node.hostname: node-3
    environment:
      RS_NAME: shard1
      SHARD_SVR: 'y'
      MASTER: replica3
      SLAVES: replica1 replica2
      AUTH: 'y'
    volumes:
      - /data/mongo/replica3:/data/db
    networks:
      - mongo_cluster
networks:
  mongo_cluster:
    driver: overlay
disclaimer: I am the maintainer of this image.
I set up a gist with a guide on how to set it up using a docker-compose file and mongoose. https://gist.github.com/harveyconnor/518e088bad23a273cae6ba7fc4643549
I had a similar problem setting up a replica set in a standalone mongodb service with authentication, and here is what I ended up with.
docker-compose.yml:
version: '3.7'
services:
  ...
  db:
    image: mongo
    restart: always
    expose:
      - 27017
    environment:
      MONGO_INITDB_DATABASE: ${DATABASE_NAME}
      MONGO_INITDB_ROOT_USERNAME: ${DATABASE_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${DATABASE_PASSWORD}
      MONGO_REPLICA_SET_NAME: ${MONGO_REPLICA_SET_NAME}
    command: ["--replSet", "${MONGO_REPLICA_SET_NAME}", "--bind_ip_all"]
    healthcheck:
      test: test $$(echo "rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 10s
      start_period: 30s
    volumes:
      - ./db:/data/db
      - ./scripts/set-credentials.sh:/docker-entrypoint-initdb.d/set-credentials.sh
  replica-setup:
    image: mongo
    restart: on-failure
    networks:
      default:
    volumes:
      - ./scripts/setup-replica.sh:/scripts/setup-replica.sh
    entrypoint: [ "bash", "/scripts/setup-replica.sh" ]
    depends_on:
      - db
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${DATABASE_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${DATABASE_PASSWORD}
./scripts/setup-replica.sh:
#!/bin/bash
MONGODB1=db

echo "Waiting for MongoDB startup..."
until curl http://${MONGODB1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

# check if replica set is already initiated
RS_STATUS=$( mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.status().ok" )
if [[ $RS_STATUS != 1 ]]
then
  echo "[INFO] Replication set config invalid. Reconfiguring now."
  RS_CONFIG_STATUS=$( mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.status().codeName" )
  if [[ $RS_CONFIG_STATUS == 'InvalidReplicaSetConfig' ]]
  then
    mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD <<EOF
config = rs.config()
config.members[0].host = "${MONGODB1}:27017"  // important: set the host name of the db instance
rs.reconfig(config, {force: true})
EOF
  else
    echo "[INFO] MongoDB setup finished. Initiating replica set."
    mongo --quiet --host ${MONGODB1}:27017 -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --eval "rs.initiate()" > /dev/null
  fi
else
  echo "[INFO] Replication set already initiated."
fi
./scripts/set-credentials.sh:
#!/bin/bash
set -e
mongo -- "$MONGO_INITDB_DATABASE" <<EOF
var rootUser = '$MONGO_INITDB_ROOT_USERNAME';
var rootPassword = '$MONGO_INITDB_ROOT_PASSWORD';
var admin = db.getSiblingDB('admin');
admin.auth(rootUser, rootPassword);
var user = '$MONGO_INITDB_ROOT_USERNAME';
var password = '$MONGO_INITDB_ROOT_PASSWORD';
db.createUser({user: user, pwd: password, roles: ["readWrite"]});
EOF
What I achieved with this is:
- Set up a mongodb service with username/password authentication for a default collection
- Initialize the replica set the first time the services run
- Reconfigure the replica set member when there is previous db data
- Health-check the mongodb service by checking the replica set status
If you just need a single-node MongoDB replica set via docker-compose.yml, you can simply use this:
mongodb:
  image: mongo:5
  restart: always
  command: ["--replSet", "rs0", "--bind_ip_all"]
  ports:
    - 27018:27017
  healthcheck:
    test: mongo --eval "rs.initiate()"
    start_period: 5s
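One caveat when connecting to this single-node set from the host (my addition): the member advertises itself by its container hostname, which the host usually cannot resolve, so drivers and the shell need to be told to connect directly:

```shell
# directConnection stops the client from re-resolving the advertised member host.
mongosh "mongodb://localhost:27018/?directConnection=true"
```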
This setup works for me. I have also setup everything in https://github.com/nguyenduyhust/docker-mongodb-replica-set.
Great if that's what you're looking for.
Dockerfile
FROM mongo
RUN mkdir /config
WORKDIR /config
COPY wait-for-it.sh .
COPY mongo-setup.js .
COPY mongo-setup.sh .
RUN chmod +x /config/wait-for-it.sh
RUN chmod +x /config/mongo-setup.sh
CMD [ "bash", "-c", "/config/wait-for-it.sh mongodb1:27011 -- /config/mongo-setup.sh"]
docker-compose.yml
version: "3"
services:
  mongodb1:
    container_name: mongo1
    image: mongo
    restart: always
    volumes:
      - ./volumes/mongodb1:/data/db
    ports:
      - "27011:27011"
    expose:
      - "27011"
    entrypoint: [ "/usr/bin/mongod", "--port", "27011", "--replSet", "rs0", "--bind_ip_all" ]
  mongodb2:
    container_name: mongo2
    image: mongo
    restart: always
    volumes:
      - ./volumes/mongodb2:/data/db
    ports:
      - "27012:27012"
    expose:
      - "27012"
    entrypoint: [ "/usr/bin/mongod", "--port", "27012", "--replSet", "rs0", "--bind_ip_all" ]
  mongodb3:
    container_name: mongo3
    image: mongo
    restart: always
    volumes:
      - ./volumes/mongodb3:/data/db
    ports:
      - "27013:27013"
    expose:
      - "27013"
    entrypoint: [ "/usr/bin/mongod", "--port", "27013", "--replSet", "rs0", "--bind_ip_all" ]
  mongosetup:
    container_name: mongosetup
    image: "mongo-setup"
    build: "./mongo-setup"
    depends_on:
      - mongodb1
  mongo-express:
    container_name: mongo-express
    image: mongo-express
    environment:
      ME_CONFIG_MONGODB_URL: mongodb://mongodb1:27011,mongodb2:27012,mongodb3:27013/?replicaSet=rs0
    ports:
      - 8081:8081
    restart: always
    depends_on:
      - mongodb1
      - mongosetup
mongo-setup.js
rsconf = {
  _id: "rs0",
  members: [
    { "_id": 0, "host": "mongodb1:27011", "priority": 3 },
    { "_id": 1, "host": "mongodb2:27012", "priority": 2 },
    { "_id": 2, "host": "mongodb3:27013", "priority": 1 }
  ]
}
rs.initiate(rsconf);
mongo-setup.sh
#!/usr/bin/env bash
if [ ! -f /data/mongo-init.flag ]; then
  echo "Init replicaset"
  mongo mongodb://mongodb1:27011 mongo-setup.js
  touch /data/mongo-init.flag
else
  echo "Replicaset already initialized"
fi