Hi everyone. I have an odd problem (who hasn't?).
I have this docker-compose file:
version: '3.4'
services:
ludustack-web:
container_name: ludustack-web
image: ${DOCKER_REGISTRY-}ludustack-web
build:
context: .
dockerfile: LuduStack.Web/Dockerfile
networks:
- ludustack-network
ports:
- '80:80'
- '443:443'
depends_on:
- 'ludustack-db'
ludustack-db:
container_name: ludustack-db
command: mongod --auth
image: mongo:latest
hostname: mongodb
networks:
- ludustack-network
ports:
- '27017:27017'
env_file:
- .env
environment:
- MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
- MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
- MONGO_REPLICA_SET_NAME=${MONGO_REPLICA_SET_NAME}
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
interval: 60s
start_period: 60s
command: ["--replSet", "${MONGO_REPLICA_SET_NAME}", "--bind_ip_all"]
networks:
ludustack-network:
driver: bridge
The problem is that the web application only waits for the MongoDB container to be ready, not for the replica set itself. So when the application starts, it crashes because the replica set is not ready yet. Right after the crash, the logs show the replica set carrying on with its initialization.
Any tips on how to make the web application wait for the replica set to be ready?
The application did wait, for 30 seconds. You can increase the timeout by adjusting the serverSelectionTimeoutMS URI option or through language-specific means.
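For illustration, a minimal sketch of how that URI option could be passed to the web container (the environment variable name, database name and credentials are assumptions, not taken from the project; only the serverSelectionTimeoutMS query parameter matters here):

ludustack-web:
  environment:
    # hypothetical variable name; serverSelectionTimeoutMS raises the driver's
    # server selection timeout from the default 30s to 120s
    - MONGODB_CONNECTION_STRING=mongodb://appuser:apppass@ludustack-db:27017/ludustack?replicaSet=${MONGO_REPLICA_SET_NAME}&serverSelectionTimeoutMS=120000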
Related
I want to connect to a MongoDB cluster using
mongodb://localhost:27017
It shows me this error:
getaddrinfo ENOTFOUND messenger-mongodb-1
This is my docker-compose.yml file
version: '3'
services:
messenger-mongodb-1:
container_name: messenger-mongodb-1
image: mongo:6.0.3
command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
ports:
- 27017:27017
networks:
- messenger-mongodb-cluster
volumes:
- messenger-mongodb-1-data:/data/db
depends_on:
- messenger-mongodb-2
- messenger-mongodb-3
healthcheck:
test: test $$(echo "rs.initiate({_id:\"messenger-mongodb-replica-set\",members:[{_id:0,host:\"messenger-mongodb-1\"},{_id:1,host:\"messenger-mongodb-2\"},{_id:2,host:\"messenger-mongodb-3\"}]}).ok || rs.status().ok" | mongo --quiet) -eq 1
interval: 10s
start_period: 30s
messenger-mongodb-2:
container_name: messenger-mongodb-2
image: mongo:6.0.3
command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
ports:
- 27018:27017
networks:
- messenger-mongodb-cluster
environment:
- MONGO_INITDB_DATABASE=messenger-db
volumes:
- messenger-mongodb-2-data:/data/db
messenger-mongodb-3:
container_name: messenger-mongodb-3
image: mongo:6.0.3
command: mongod --replSet messenger-mongodb-replica-set --bind_ip_all
ports:
- 27019:27017
networks:
- messenger-mongodb-cluster
environment:
- MONGO_INITDB_DATABASE=messenger-db
volumes:
- messenger-mongodb-3-data:/data/db
networks:
messenger-mongodb-cluster:
volumes:
messenger-mongodb-1-data:
messenger-mongodb-2-data:
messenger-mongodb-3-data:
I run it like
docker-compose up -d
How can I connect to my replica set?
I want to use it for local development of my Node.js application.
My operating system is Windows 11 Pro.
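The ENOTFOUND error typically means the driver discovered the replica set members under their container hostnames (messenger-mongodb-1 and so on), which the Windows host cannot resolve. A minimal sketch of one common workaround for local development, not taken from this thread: bypass topology discovery and connect directly to the published port of a single member, assuming a driver recent enough to support the directConnection URI option:

# connect straight to the member published on localhost:27017,
# skipping replica set member discovery
mongodb://localhost:27017/?directConnection=true

For a connection that is aware of the whole replica set, the member names and ports advertised in rs.conf() have to be resolvable and reachable from the host exactly as advertised (for example via hosts-file entries).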
When I build the containers for the first time, the database doesn't have enough time to initialize while the web service and nginx are already up, so I can't reach the server on the first run; after a second run of the containers everything works properly. I have tried command: ["./wait-for-it.sh", "db:5432", "--", "python", "manage.py runserver 0.0.0.0:8000"] to wait until the database is initialized, but it didn't help. I've also tried the solutions from this post, but nothing worked. Please help me make my services wait until the database is initialized. Thanks in advance!
Here is my docker-compose file
version: "3.9"
services:
db:
image: postgres:13.3-alpine
container_name: db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:5432" ]
interval: 30s
timeout: 10s
retries: 5
web:
build: .
container_name: web
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
restart: on-failure
depends_on:
- db
nginx:
build: ./nginx
container_name: nginx
ports:
- "80:80"
restart: on-failure
depends_on:
- web
- db
depends_on only waits until the service has started, not until it is healthy. You should additionally define the condition service_healthy so that the dependent service waits until the dependency is healthy:
depends_on:
db:
condition: service_healthy
Here's a complete docker-compose file for reference:
version: "3.9"
services:
db:
image: postgres:13.3-alpine
container_name: db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 1s
timeout: 5s
retries: 5
web:
image: nginx:latest
container_name: web
restart: on-failure
depends_on:
db:
condition: service_healthy
I solved the problem by adding a short line of script to the command:
web:
build: .
container_name: web
command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; python manage.py runserver 0.0.0.0:8000'
volumes:
- .:/code
ports:
- "8000:8000"
restart: on-failure
depends_on:
- db
Below is the docker-compose.yml I have used.
The MongoDB container gets created, but after some 6 to 8 hours the databases inside Mongo are automatically cleared.
Even after a manual restart I am not able to get my data back.
I have tried two approaches and still can't figure out what the issue is.
Should I declare volumes at the top of the compose file as in approach 1, or provide a user-specific directory like "./data/mongo-1" as in approach 2?
Will adding the below help?
volumes:
mongo_data:
driver: local # not working
external: true # will it work?
Approach-1: Single node replica set
version: '3.8'
volumes:
mongo_data:
services:
  mongodb:
hostname: mongodb
container_name: mongodb
image: mongo:latest
environment:
MONGO_INITDB_DATABASE: moviebooking
MONGO_REPLICA_SET_NAME: rs0
volumes:
- ./mongo-initdb.d:/docker-entrypoint-initdb.d
- mongo_data:/data/db
expose:
- 27017
ports:
- "27017:27017"
restart: unless-stopped
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.slaveOk().ok || rs.status().ok" | mongo --quiet) -eq 1
interval: 10s
start_period: 30s
entrypoint: [ "/usr/bin/mongod", "--bind_ip_all", "--replSet", "rs0" ]
Approach-2: Multi node replica set
version: '3.8'
services:
mongo1:
image: mongo:4.2
container_name: mongo1
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "27017"]
volumes:
- ./data/mongo-1:/data/db
ports:
- 27017:27017
healthcheck:
test: test $$(echo "rs.initiate({_id:'my-replica-set',members:[{_id:0,host:\"mongo1:27017\"},{_id:1,host:\"mongo2:27018\"},{_id:2,host:\"mongo3:27019\"}]}).ok || rs.status().ok" | mongo --port 27017 --quiet) -eq 1
interval: 10s
start_period: 30s
mongo2:
image: mongo:4.2
container_name: mongo2
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "27018"]
volumes:
- ./data/mongo-2:/data/db
ports:
- 27018:27018
mongo3:
image: mongo:4.2
container_name: mongo3
command: ["--replSet", "my-replica-set", "--bind_ip_all", "--port", "27019"]
volumes:
- ./data/mongo-3:/data/db
ports:
- 27019:27019
In my case, the db was hacked by someone.
Adding a volume mount will work. Making it external in the compose file requires you to create the volume externally before running the compose file, which I feel is a safer way than letting Compose decide the volume location.
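A minimal sketch of that approach, assuming a volume named mongo_data (the name is illustrative): create the volume once on the host, then mark it external so Compose reuses it instead of deciding where the data lives:

docker volume create mongo_data

services:
  mongodb:
    image: mongo:latest
    volumes:
      - mongo_data:/data/db

volumes:
  mongo_data:
    external: true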
Hello, I have the following error in my Node project:
(node:51) UnhandledPromiseRejectionWarning: Error: getaddrinfo
ENOTFOUND ${DB_HOST}
I'm thinking the problem is that my Postgres is not yet started when my project starts,
and I can't figure out how to start my container only after Postgres is ready. I read something about dockerize, but I can't see how to apply it.
My Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
my docker compose:
version: '3.7'
services:
ci-api:
build: .
container_name: ci-api
volumes:
- .:/home/node/api
- /home/node/api/node_modules
ports:
- '${SERVER_PORT}:${SERVER_PORT}'
depends_on:
- ci-postgres
networks:
- ci-network
ci-postgres:
image: postgres:12
container_name: ci-postgres
ports:
- '${DB_PORT}:5432'
environment:
- ALLOW_EMPTY_PASSWORD=no
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASS}
- POSTGRES_DB=${DB_NAME}
volumes:
- ci-postgres-data:/data
networks:
- ci-network
volumes:
ci-postgres-data:
networks:
ci-network:
driver: bridge
and this is my .env
SERVER_PORT=4000
DB_HOST=ci-postgres
DB_PORT=5432
DB_USER=spirit
DB_PASS=api
DB_NAME=emasa_ci
You can reference the docker-compose.yml below, in which depends_on, healthcheck and links are added since the web service depends on the db service.
Reference:
Postgresql Container is not running in docker-compose file - Why is this?
version: "3"
services:
webapp:
build: .
container_name: webapp
ports:
- "5000:5000"
links:
- postgres
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:11-alpine
container_name: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=tmp
- POSTGRES_USER=tmp
- POSTGRES_PASSWORD=tmp_password
volumes: # Persist the db data
- database-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
database-data:
I have been trying to dockerize my spring boot application which depends on redis, kafka and mongodb.
Following is the docker-compose.yml:
version: '3.3'
services:
my-service:
image: my-service
build:
context: ../../
dockerfile: Dockerfile
restart: always
container_name: my-service
environment:
KAFKA_CONFLUENT_BOOTSTRAP_SERVERS: kafka:9092
MONGO_HOSTS: mongodb:27017
REDIS_HOST: redis
REDIS_PORT: 6379
volumes:
- /private/var/log/my-service/:/var/log/my-service/
ports:
- 8080:8090
- 1053:1053
depends_on:
- redis
- kafka
- mongodb
portainer:
image: portainer/portainer
command: -H unix:///var/run/docker.sock
restart: always
container_name: portainer
ports:
- 9000:9000
- 9001:8000
volumes:
- /var/run/docker.sock:/var/run/docker.sock
redis:
image: redis
container_name: redis
restart: always
ports:
- 6379:6379
zookeeper:
image: wurstmeister/zookeeper
ports:
- 2181:2181
container_name: zookeeper
kafka:
image: wurstmeister/kafka
ports:
- 9092:9092
container_name: kafka
environment:
KAFKA_CREATE_TOPICS: "cms.entity.change:1:1" # topic:partition:replicas
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_PORT: 9092
volumes:
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- "zookeeper"
mongodb:
image: mongo:latest
container_name: mongodb
environment:
MONGO_INITDB_ROOT_USERNAME:
MONGO_INITDB_ROOT_PASSWORD:
ports:
- 27017:27017
volumes:
- ./data/db:/data/db
The issue is that this starts up mongo as a STANDALONE instance. So the APIs in my service that persist data are failing as mongo needs to start as a REPLICA_SET.
How can I edit my docker-compose file to start mongo as a REPLICA_SET?
I had the same issue and ended up on this stackoverflow post.
We had a requirement to use the official MongoDB docker image (https://hub.docker.com/_/mongo) and couldn't use Bitnami as suggested in Vahid's answer.
This answer isn't exactly what the question asked for, and it comes six months too late, but it should point anyone who needs a standalone throwaway MongoDB replica set for integration testing in the right direction. If you need to use it in PROD, you'll have to provide environment variables for volumes and auth as per Vahid's answer.
version: '3.7'
services:
mongodb:
image: mongo:latest
container_name: myservice-mongodb
networks:
- myServiceNetwork
expose:
- 27017
command: --replSet singleNodeReplSet
mongodb-replicaset:
container_name: mongodb-replicaset-helper
depends_on:
- mongodb
networks:
- myServiceNetwork
image: mongo:latest
command: bash -c "sleep 5 && mongo --host myservice-mongodb --port 27017 --eval \"rs.initiate()\" && sleep 2 && mongo --host myservice-mongodb --port 27017 --eval \"rs.status()\" && sleep infinity"
my-service:
depends_on:
- mongodb-replicaset
image: myserviceimage
container_name: myservicecontainer
networks:
- myServiceNetwork
environment:
myservice__Database__ConnectionString: mongodb://myservice-mongodb:27017/?connect=direct&replicaSet=singleNodeReplSet&readPreference=primary
myservice__Database__Name: myserviceDb
networks:
myServiceNetwork:
driver: bridge
NOTE: Note how the connection string is passed as an environment variable to the service that depends on the Mongo replica set instance. You have to ensure that the replica set name used when setting up MongoDB (in my case singleNodeReplSet) is passed on to the service that depends on it.
Edited:
My previous answer was far off, so I changed it. I managed to make it work using 'bitnami/mongodb:4.0'. Not sure whether that will help you or not, but maybe it gives you some ideas. They have a docker-compose file ready for replica-set mode.
version: '3'
services:
mdb-primary:
image: 'bitnami/mongodb:4.0'
environment:
- MONGODB_REPLICA_SET_MODE=primary
- MONGODB_ROOT_PASSWORD=somepassword
- MONGODB_REPLICA_SET_KEY=replicasetkey
- MONGODB_ADVERTISED_HOSTNAME=mdb-primary
mdb-secondary:
image: 'bitnami/mongodb:4.0'
depends_on:
- mdb-primary
environment:
- MONGODB_PRIMARY_HOST=mdb-primary
- MONGODB_REPLICA_SET_MODE=secondary
- MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
- MONGODB_REPLICA_SET_KEY=replicasetkey
- MONGODB_ADVERTISED_HOSTNAME=mdb-secondary
mdb-arbiter:
image: 'bitnami/mongodb:4.0'
depends_on:
- mdb-primary
environment:
- MONGODB_PRIMARY_HOST=mdb-primary
- MONGODB_REPLICA_SET_MODE=arbiter
- MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
- MONGODB_REPLICA_SET_KEY=replicasetkey
- MONGODB_ADVERTISED_HOSTNAME=mdb-arbiter
mongo-cli:
image: 'bitnami/mongodb:latest'
Don't forget to add a volume and map it to /bitnami on the primary node.
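For example, a minimal sketch of that mapping on the primary (the volume name is illustrative):

services:
  mdb-primary:
    image: 'bitnami/mongodb:4.0'
    volumes:
      - 'mdb_primary_data:/bitnami'

volumes:
  mdb_primary_data: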
The last container, mongo-cli, is for testing purposes, so you can connect to the replica set using the CLI; there is a discussion about that here if you'd like to read more about it.
$ docker-compose exec mongo-cli bash
$ mongo "mongodb://mdb-primary:27017/test?replicaSet=replicaset"