Why does the docker-compose healthcheck of my mongo container always fail?

I'm using docker-compose to stand up an Express/React/Mongo app. Currently I can stand everything up by using retry logic in the Express app, but I would prefer to use Docker's healthcheck to prevent the string of errors when the containers initially spin up. However, when I add a healthcheck to my docker-compose.yml, it hangs for the interval/retry time limit and exits with:
ERROR: for collector Container "70e7aae49c64" is unhealthy.
ERROR: for server Container "70e7aae49c64" is unhealthy.
ERROR: Encountered errors while bringing up the project.
It seems that my healthcheck never returns a healthy status, and I'm not entirely sure why. The entirety of my docker-compose.yml:
version: "2.1"
services:
mongo:
image: mongo
volumes:
- ./data/mongodb/db:/data/db
ports:
- "${DB_PORT}:${DB_PORT}"
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet 1
interval: 10s
timeout: 10s
retries: 5
collector:
build: ./collector/
environment:
- DB_HOST=${DB_HOST}
- DB_PORT=${DB_PORT}
- DB_NAME=${DB_NAME}
volumes:
- ./collector/:/app
depends_on:
mongo:
condition: service_healthy
server:
build: .
environment:
- SERVER_PORT=$SERVER_PORT
volumes:
- ./server/:/app
ports:
- "${SERVER_PORT}:${SERVER_PORT}"
depends_on:
mongo:
condition: service_healthy
For the test, I've also tried:
["CMD", "nc", "-z", "localhost", "27017"]
And:
["CMD", "bash", "/mongo-healthcheck"]
I've also tried ditching the healthcheck altogether, following the advice of this guy. Everything stands up, but I get the dreaded errors in the output before a successful connection:
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect [MongoNetworkError: connect ECONNREFUSED 172.21.0.2:27017]
collector_1 | MongoDB connection with retry
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect
The ultimate goal is a clean startup output when running the docker-compose up --build. I've also looked into some of the solutions in this question, but I haven't had much luck with wait-for-it either. What's the correct way to wait for Mongo to be up and running before starting the other containers, and achieving a clean startup?

Firstly, I'd suggest updating the docker-compose.yaml file version to at least 3.4 (version: "3.5"), then please add the start_period option to your mongo healthcheck.
Note: start_period is only supported for v3.4 and higher of the compose file format.
start_period provides initialization time for containers that need time to bootstrap. Probe failures during that period will not be counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries.
So it would look something like this:
healthcheck:
  test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet
  interval: 10s
  timeout: 10s
  retries: 5
  start_period: 40s

We can use MongoDB's serverStatus command to do the health check, as the MongoDB documentation puts it:
Monitoring applications can run this command at a regular interval to collect statistics about the instance.
Because the serverStatus command requires authentication, you need to set up the health check similar to the configuration shown below:
version: '3.4'
services:
  mongo:
    image: mongo
    restart: always
    healthcheck:
      test: echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $$MONGO_INITDB_ROOT_USERNAME -p $$MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1
      interval: 10s
      timeout: 10s
      retries: 3
      start_period: 20s
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
That's it. If your MongoDB instance is healthy, you will see something similar to mine:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01ed0e02aa70 mongo "docker-entrypoint.s…" 11 minutes ago Up 11 minutes (healthy) 27017/tcp demo_mongo_1
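If the container instead stays in (health: starting) or flips to (unhealthy), Docker keeps the output of the last few probe runs, which is usually the quickest way to see why the check is failing. Using the container name from the docker ps output above:

$ docker inspect --format "{{json .State.Health}}" demo_mongo_1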

The mongo shell was removed in MongoDB 6.0. The replacement is mongosh.
Check if this works for you:
mongo:
  image: mongo
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017/test --quiet

Note that you should probably use mongosh if you use newer versions of MongoDB:
healthcheck:
  test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
  interval: 5s
  timeout: 5s
  retries: 3
  start_period: 5s

I found a solution here:
https://github.com/docker-library/healthcheck/tree/master/mongo
Note that it also explains why a health check is not included in the official image:
https://github.com/docker-library/cassandra/pull/76#issuecomment-246054271
docker-healthcheck
#!/bin/bash
set -eo pipefail

if mongo --quiet "localhost/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
    exit 0
fi

exit 1
In the example from the link, they use a host variable:
host="$(hostname --ip-address || echo '127.0.0.1')"
if mongo --quiet "$host/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
# continues the same code
It did not work for me, so I replaced the host with localhost.
In docker-compose
mongo:
build:
context: "./mongodb"
dockerfile: Dockerfile
container_name: crm-mongo
restart: always
healthcheck:
test: ["CMD", "docker-healthcheck"]
interval: 10s
timeout: 2s
retries: 10
Alternatively, you can execute the health check inside the container. Change the Dockerfile like this:
FROM mongo:4
ADD docker-healthcheck /usr/local/bin/
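One caveat I'd add as an assumption of my own, not something stated in the linked repo: the script has to be executable inside the image, since ADD preserves the source file's permissions. If the host copy of docker-healthcheck lacks the execute bit, a CMD-style check will keep failing, and an explicit chmod in the Dockerfile rules that out:

FROM mongo:4
ADD docker-healthcheck /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-healthcheck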

When I execute the echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet 1 command in the docker container, the result is:
2019-04-19T02:39:19.770+0000 E - [main] file [1] doesn't exist
failed to load: 1
Try this
healthcheck:
  test: bash -c "if mongo --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then exit 0; fi; exit 1;"

This one worked for me:
healthcheck:
  test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
  interval: 10s
  timeout: 10s
  retries: 5

Related

Docker health check always fails for mongodb and cassandra

I've been trying to start a mongoDB and a cassandra container and make them pass two simple health checks, but they keep failing no matter what health check I use.
For mongoDB, here's my yml file:
version: '3.1'
services:
  mongo:
    image: mongo:3.6.3
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    healthcheck:
      test: ["CMD", "mongo --quiet 127.0.0.1/test --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'"]
      start_period: 5s
      interval: 5s
      timeout: 120s
      retries: 24
and for cassandra:
version: '2'
services:
  cassandra:
    image: 'docker.io/bitnami/cassandra:3-debian-10'
    ports:
      - '7000:7000'
      - '9042:9042'
    volumes:
      - 'cassandra_data:/bitnami'
    environment:
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD
      - MAX_HEAP_SIZE=1G
      - HEAP_NEWSIZE=800M
    healthcheck:
      test: [ "CMD-SHELL", "cqlsh --username cassandra --password ${CASSANDRA_PASSWORD} -e 'describe cluster'" ]
      interval: 5s
      timeout: 120s
      retries: 24
Am I missing something?
I also tried running this for the health check:
echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1
I went through a lot of the discussions about the healthchecks for mongo and cassandra, but I'm still not able to make it work.
For everyone still out there trying to figure out how to set the health check for a mongo container, use this:
healthcheck:
  test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017/productiondb --quiet
  interval: 10s
  timeout: 10s
  retries: 3
  start_period: 20s
The reason why (at least for me) it was not working before is that I was using mongo instead of mongosh, while recent versions of MongoDB use the newer mongosh shell.
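If you're not sure which shell a given image ships, a quick probe of the running container settles it (the container name is a placeholder here; whichever binary exists will print its version):

$ docker exec <your-mongo-container> sh -c 'mongosh --version || mongo --version'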
First, you shouldn't use CMD-SHELL but CMD:
Related github issue
Can you share the healthcheck status and any logs that might be useful?
I think this thread can help you: Why does the docker-compose healthcheck of my mongo container always fail?
Sincerely,
I was having the same problem; I found my answer for cassandra here:
https://quantonganh.com/2021/09/09/docker-compose-healthcheck
healthcheck:
  test: cqlsh -u cassandra -p cassandra -k <YOUR_KEYSPACE_NAME>
  interval: 5s
  timeout: 10s
  retries: 6

Docker MongoDB seed notification

I want to deploy in Docker Swarm a stack with a MongoDB and a Web Server; the db must be filled with initial data and I found out this valid solution.
Given that stack services start in arbitrary order, how can I be sure that the web server will read the initial data correctly?
Probably I need some notification system (Redis?), but I am new to MongoDB, so I am looking for well-known solutions to this problem (which I think is pretty common).
I would highly suggest looking at the health checks in docker-compose.yml. You can change the health check command to a MongoDB-specific check; only once the health check passes will the web server start sending requests to the MongoDB container.
Have a look at the example file below, and change the health check as per your needs (see also the note on depends_on after the example):
version: "3.3"
services:
mongo:
image: mongo
ports:
- "27017:27017"
volumes:
- ./data/mongodb/db:/data/db
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet 1
interval: 10s
timeout: 10s
retries: 5
start_period: 40s
webserver:
image: custom_webserver_image:latest
volumes:
- $PWD:/app
links:
- mongodb
ports:
- 8000:8000
depends_on:
- mongo
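Note that the short depends_on form above only waits for the mongo container to be created and started, not for its healthcheck to pass. To actually hold the web server back until Mongo reports healthy, the long form is needed (supported by compose file format 2.1 and again by the newer Compose specification):

depends_on:
  mongo:
    condition: service_healthy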
Ref: https://docs.docker.com/compose/compose-file/#healthcheck

Mongo container with a replica set with only one node in docker-compose

I want to create a Docker container with an instance of Mongo. In particular, I would like to create a replica set with only one node (since I'm interested in transactions and they are only available for replica sets).
Dockerfile
FROM mongo
RUN echo "rs.initiate();" > /docker-entrypoint-initdb.d/replica-init.js
CMD ["--replSet", "rs0"]
docker-compose.yml
version: "3"
services:
db:
build:
dockerfile: Dockerfile
context: .
ports:
- "27017:27017"
If I use the Dockerfile alone everything is fine, but if I use docker-compose it does not work: if I then log in to the container I get the prompt rs0:OTHER> instead of rs0:PRIMARY>.
I consulted these links but the solutions proposed are not working:
https://github.com/docker-library/mongo/issues/246#issuecomment-382072843
https://github.com/docker-library/mongo/issues/249#issuecomment-381786889
This is the compose file I have used for a while now for local development. You can remove the keyfile pieces if you don't need to connect via SSL.
version: "3.8"
services:
mongodb:
image : mongo:4
container_name: mongodb
hostname: mongodb
restart: on-failure
environment:
- PUID=1000
- PGID=1000
- MONGO_INITDB_ROOT_USERNAME=mongo
- MONGO_INITDB_ROOT_PASSWORD=mongo
- MONGO_INITDB_DATABASE=my-service
- MONGO_REPLICA_SET_NAME=rs0
volumes:
- mongodb4_data:/data/db
- ./:/opt/keyfile/
ports:
- 27017:27017
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
interval: 10s
start_period: 30s
command: "--bind_ip_all --keyFile /opt/keyfile/keyfile --replSet rs0"
volumes:
mongodb4_data:
It uses Docker's health check (with a startup delay) to sneak in the rs.initiate() if it actually needs it after it's already running.
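One extra note from my own testing, offered as an assumption rather than part of the original answer: when you connect from the host instead of from another compose service, drivers may try to rediscover the replica set through the member hostname recorded by rs.initiate(), which the host often can't resolve. Forcing a direct connection in the URI avoids that:

mongodb://mongo:mongo@localhost:27017/my-service?authSource=admin&directConnection=true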
To create a keyfile:
Mac:
openssl rand -base64 741 > keyfile
chmod 600 keyfile
Linux:
openssl rand -base64 756 > keyfile
chmod 600 keyfile
sudo chown 999 keyfile
sudo chgrp 999 keyfile
(999 is the UID/GID of the mongodb user inside the official image, so the chown/chgrp let the container read the file.)
The top answer stopped working for me in later MongoDB and Docker versions. Particularly because rs.initiate().ok would throw an error if the replica set was already initiated, causing the whole command to fail. In addition, connecting from another container was failing because the replica set's sole member had some random host, which wouldn't allow the connection. Here's my new docker-compose.yml:
services:
  web:
    # ...
    environment:
      DATABASE_URL: mongodb://root:root@db/?authSource=admin&tls=false
  db:
    build:
      context: ./mongo/
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    ports:
      - '27017:27017'
    volumes:
      - data:/data/db
    healthcheck:
      test: |
        test $$(mongosh --quiet -u root -p root --eval "try { rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'db' }] }).ok } catch (_) { rs.status().ok }") -eq 1
      interval: 10s
      start_period: 30s

volumes:
  data:
Inside ./mongo/, I have a custom Dockerfile that looks like:
FROM mongo:6
RUN echo "password" > /keyfile \
    && chmod 600 /keyfile \
    && chown 999 /keyfile \
    && chgrp 999 /keyfile
CMD ["--bind_ip_all", "--keyFile", "/keyfile", "--replSet", "rs0"]
This Dockerfile is suitable for development, but you'd definitely want a securely generated and persistent keyfile to be mounted in production (and therefore strike the entire RUN command).
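As a sketch of that production variant (my assumption of one reasonable setup, not part of the original answer): generate the keyfile on the host as shown earlier in this thread, chown it to 999, and mount it read-only instead of baking a password into the image:

services:
  db:
    image: mongo:6
    command: ["--bind_ip_all", "--keyFile", "/etc/mongo-keyfile", "--replSet", "rs0"]
    volumes:
      - ./keyfile:/etc/mongo-keyfile:ro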
You still need to issue replSetInitiate even if there's only one node in the RS.
See also here.
I had to do something similar to build tests around ChangeStreams, which are only available when running mongo as a replica set. I don't remember where I pulled this from, so I can't explain it in detail, but it does work for me. Here is my setup:
Dockerfile
FROM mongo:5.0.3
RUN echo "rs.initiate({'_id':'rs0', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});" > "/docker-entrypoint-initdb.d/init_replicaset.js"
RUN echo "12345678" > "/tmp/key.file"
RUN chmod 600 /tmp/key.file
RUN chown 999:999 /tmp/key.file
CMD ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/tmp/key.file"]
docker-compose.yml
version: '3.7'
services:
mongo:
build: .
restart: always
ports:
- 27017:27017
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u admin -p pass --quiet) -eq 1
interval: 10s
start_period: 30s
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: pass
MONGO_INITDB_DATABASE: test
Run docker-compose up and you should be good.
Connection String: mongodb://admin:pass@localhost:27017/test
Note: You shouldn't use this in production obviously, adjust the key "12345678" in the Dockerfile if security is a concern.
If you just need a single-node replica set of MongoDB via docker-compose.yml, you can simply use this:
mongodb:
  image: mongo:5
  restart: always
  command: ["--replSet", "rs0", "--bind_ip_all"]
  ports:
    - 27018:27017
  healthcheck:
    test: mongo --eval "rs.initiate()"
    start_period: 5s
This one works fine for me:
version: '3.4'
services:
  ludustack-db:
    container_name: ludustack-db
    command: mongod --auth
    image: mongo:latest
    hostname: mongodb
    ports:
      - '27017:27017'
    env_file:
      - .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
      - MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
      - MONGO_REPLICA_SET_NAME=${MONGO_REPLICA_SET_NAME}
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 60s
      start_period: 60s

docker-compose mongo healthcheck failing

Hello, my mongo health check is failing. Below is my docker-compose file:
version: "2.4"
services:
production-api:
build: .
environment:
- MONGO_URI=mongodb://mongodb:27017/productiondb
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mongodb
# condition: service_healthy
mongodb:
image: mongo
ports:
- "27017:27017"
# healthcheck:
# test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/productiondb --quiet 1
# interval: 10s
# timeout: 10s
# retries: 5
And is there any way to pass MONGO_URI to the health check as a variable?
Your healthcheck should look like this:
test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/productiondb --quiet
The hostname mongo doesn't exist inside the mongodb container unless you specify hostname: mongo in your compose file, or you just use localhost, which is more common for healthchecks.
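For completeness, the hostname: mongo alternative would look something like this sketch (the rest of the service left as in the question):

mongodb:
  image: mongo
  hostname: mongo
  ports:
    - "27017:27017"
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/productiondb --quiet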
The 1 after --quiet seems to be a typo, which leads to [main] file [1] doesn't exist: https://docs.mongodb.com/manual/reference/program/mongo/#cmdoption-mongo-quiet
see also:
Simple HTTP/TCP health check for MongoDB
Why does the docker-compose healthcheck of my mongo container always fail?
You can pass MONGO_URI to the healthcheck by specifying it a second time:
mongodb:
  ..
  environment:
    - MONGO_URI=mongodb://mongodb:27017/productiondb
If you want to use one value for both, create an env_file and use it via:
env_file:
  - mongo.env  # which contains `MONGO_URI=mongodb://..`
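A sketch of how the variable could then be used by the probe itself; the doubled $$ makes Compose pass a literal $ through to the container's shell instead of interpolating it at compose time, and the URI points at localhost because the probe runs inside the mongodb container:

mongodb:
  environment:
    - MONGO_URI=mongodb://localhost:27017/productiondb
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongo $$MONGO_URI --quiet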

Prisma, MongoDB, Docker "request to http://localhost:4466/ failed, reason: connect ECONNREFUSED 127.0.0.1:4466"

After I launch my docker-compose up command, everything starts up and I run prisma deploy which also works fine, yet my application still returns the above error. I have been trying to find a solution to this for days, and there is nothing helpful online, and the few similar questions have been closed or ignored. I would appreciate getting help with this issue.
Here is my docker-compose.yml file:
version: '3'
services:
  prisma:
    env_file:
      - .env
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mongo
            uri: ${MONGODB_URI}
            host: host.docker.internal
      JWT_SECRET: ${JWT_SECRET}
  mongo:
    env_file:
      - .env
    image: mongo:3.6
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGODB_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGODB_PASSWORD}
    ports:
      - "27017:27017"
    volumes:
      - mongo:/var/lib/mongo
  web:
    env_file:
      - .env
    build: .
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "4000:4000"
    environment:
      DATABASE_URL: ${MONGODB_URI}

volumes:
  mongo:
My Dockerfile:
FROM node:8.16.0-alpine
WORKDIR /usr/app
COPY package.json .
RUN npm install --quiet
COPY . .
ENV DOCKERIZE_VERSION v0.6.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz
CMD dockerize -wait tcp://mongo:27017 -wait tcp://prisma:4466 -timeout 60m npm start
My prisma.yml:
endpoint: http://localhost:4466
datamodel:
- db/types.prisma
- db/enums.prisma
databaseType: document
generate:
- generator: javascript-client
output: ./generated/prisma-client/
My prisma deploy command works, and it generates the mongo database, but when I try to query my application at localhost:4000, it returns this error:
request to http://localhost:4466/ failed, reason: connect ECONNREFUSED 127.0.0.1:4466
But when I navigate to localhost:4466/_admin, the database is all set up fine and shows the three tables that should be there.
I checked whether anything is running on localhost:4466 by issuing this command: lsof -i :4466, and I can see that Docker started up correctly.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 5923 sguduguntla 24u IPv4 0x89ea943c9b98ff09 0t0 TCP *:4466 (LISTEN)
com.docke 5923 sguduguntla 25u IPv6 0x89ea943c87111549 0t0 TCP localhost:4466 (LISTEN)
When I run docker ps, you can also see the following output with the three images:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fa70fae37f10 prismagraphql/prisma:1.34 "/bin/sh -c /app/sta…" 35 minutes ago Up 31 minutes 0.0.0.0:4466->4466/tcp decal-board-graphql-server_prisma_1
d64b9f6dcd29 decal-board-graphql-server_web "docker-entrypoint.s…" 35 minutes ago Up 31 minutes 0.0.0.0:4000->4000/tcp decal-board-graphql-server_web_1
6f7dda5e58a0 mongo:3.6 "docker-entrypoint.s…" 35 minutes ago Up 31 minutes 0.0.0.0:27017->27017/tcp decal-board-graphql-server_mongo_1