Docker health check always fails for mongodb and cassandra

I've been trying to start a MongoDB and a Cassandra container and make them pass two simple health checks, but they keep failing no matter what health check I use.
For MongoDB, here's my yml file:
version: '3.1'
services:
  mongo:
    image: mongo:3.6.3
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
    healthcheck:
      test: ["CMD", "mongo --quiet 127.0.0.1/test --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'"]
      start_period: 5s
      interval: 5s
      timeout: 120s
      retries: 24
and for cassandra:
version: '2'
services:
  cassandra:
    image: 'docker.io/bitnami/cassandra:3-debian-10'
    ports:
      - '7000:7000'
      - '9042:9042'
    volumes:
      - 'cassandra_data:/bitnami'
    environment:
      - CASSANDRA_SEEDS=cassandra
      - CASSANDRA_PASSWORD_SEEDER=yes
      - CASSANDRA_PASSWORD
      - MAX_HEAP_SIZE=1G
      - HEAP_NEWSIZE=800M
    healthcheck:
      test: [ "CMD-SHELL", "cqlsh --username cassandra --password ${CASSANDRA_PASSWORD} -e 'describe cluster'" ]
      interval: 5s
      timeout: 120s
      retries: 24
Am I missing something?
I also tried running this for the health check:
echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1
I went through a lot of the discussions about healthchecks for mongo and cassandra, but I'm still not able to make it work.

For everyone still out there trying to figure out how to set the health check for a mongo container, use this:
healthcheck:
  test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017/productiondb --quiet
  interval: 10s
  timeout: 10s
  retries: 3
  start_period: 20s
The reason it was not working before (at least for me) is that I was using mongo instead of mongosh, while recent versions of MongoDB ship the newer mongosh shell.
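If you prefer the exec form, an equivalent check with mongosh would look roughly like this; db.adminCommand('ping') works against any database, and the timings below are just reasonable defaults to tune:
healthcheck:
  test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping')"]
  interval: 10s
  timeout: 10s
  retries: 3
  start_period: 20s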

First, you shouldn't use CMD-SHELL but CMD:
Related github issue
Can you share the healthcheck status and any logs that could be useful?
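For example, a quick way to dump the health status and the output of the recent probes is (the container name is a placeholder):
docker inspect --format '{{json .State.Health}}' <container_name>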
I think this thread can help you: Why does the docker-compose healthcheck of my mongo container always fail?
Sincerely,

I was having the same problem; I found my answer for Cassandra here:
https://quantonganh.com/2021/09/09/docker-compose-healthcheck
healthcheck:
  test: cqlsh -u cassandra -p cassandra -k <YOUR_KEYSPACE_NAME>
  interval: 5s
  timeout: 10s
  retries: 6
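Since Cassandra can take a while to bootstrap, it may also help to add a start_period (Compose file format 3.4+) so early failed probes don't count against the retry limit. A rough sketch, with timings you would tune for your machine:
healthcheck:
  test: cqlsh -u cassandra -p cassandra -e 'describe cluster'
  interval: 15s
  timeout: 10s
  retries: 10
  start_period: 90s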

Related

MongoDB Single Node Replica Set with Docker container

I have a MongoDB container that uses a single-node replica set, configured like the following code. To run a replica set I need to execute another command in the MongoDB shell, rs.initiate(). How can I add that command to the container configuration so that I don't have to execute it manually in the MongoDB shell?
mongodb:
  image: mongo:5.0
  container_name: mongodb
  command: ["--replSet", "rs0", "--bind_ip_all"] # This works, but I have to manually go into the mongo shell to execute rs.initiate(), which I don't want to do
  # command: ["--replSet", "rs0", "--bind_ip_all", "mongo", "rs.initiate()"] # --> Doesn't work
  # command: ["--replSet", "rs0", "--bind_ip_all", "rs.initiate()"] # --> Doesn't work
  # command: ["--replSet", "rs0", "--bind_ip_all", "mongo rs.initiate()"] # --> Doesn't work
  networks:
    - dev
  ports:
    - 27017:27017
  volumes:
    - ${HOME}/mongodb:/data/db
OK, I am not sure if this is the correct way to solve the problem, but I got it working by using the healthcheck feature of Docker. After this change, I do not have to execute rs.initiate() manually.
For details please look at docker mongo for single (primary node only) replica set (for development)?
HealthCheck: https://docs.docker.com/engine/reference/builder/#healthcheck
mongodb:
  image: mongo:5.0
  container_name: mongodb
  command: ["--replSet", "rs0", "--bind_ip_all"]
  networks:
    - dev
  ports:
    - 27017:27017
  volumes:
    - ${HOME}/mongodb:/data/db
  healthcheck:
    test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u root -p imagiaRoot --quiet) -eq 1
    interval: 10s
    start_period: 30s
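If another service in the same compose file should only start once MongoDB reports healthy, the long form of depends_on can reuse this health check. A minimal sketch; the api service name and image are placeholders, not part of the original setup:
api:
  image: my-api:latest # placeholder application service
  networks:
    - dev
  depends_on:
    mongodb:
      condition: service_healthy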

docker mongo for single (primary node only) replica set (for development)?

This is the portion of the docker-compose file that has served us well to date. However, now I need to convert this to be a single-node replica set (for transactions to work). I don't want any secondary or arbiter, just the primary node. What am I missing to get this working?
mongo:
  image: mongo:4.4.3
  container_name: mongo
  restart: unless-stopped
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: myPass
  command: mongod --port 27017
  ports:
    - '27017:27017'
  volumes:
    - ./data/mongodb:/data/db
    - ./data/mongodb/home:/home/mongodb/
    - ./configs/mongodb/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
Got it working. I inserted the following into the block in my question above:
hostname: mongodb
volumes:
  - ./data/mongodb/data/log/:/var/log/mongodb/
# the healthcheck avoids the need to initiate the replica set
healthcheck:
  test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u root -p imagiaRoot --quiet) -eq 1
  interval: 10s
  start_period: 30s
I was unable to initiate the replica set via the healthcheck. I used the bash script below instead. For Windows users, be sure to call your DB with the name of your computer. For example:
mongodb://DESKTOP-QPRKMN2:27017
run-test.sh
#!/bin/bash
echo "Running docker-compose"
docker-compose up -d
echo "Waiting for DB to initialize"
sleep 10
echo "Initiating DB"
docker exec mongo_container mongo --eval "rs.initiate();"
echo "Running tests"
# test result
if go test ./... -v
then
echo "Test PASSED"
else
echo "Test FAILED"
fi
# cleanup
docker-compose -f docker-compose.test.yml down
docker-compose file
version: '3.8'
services:
  mongo:
    hostname: $HOST
    container_name: mongo_container
    image: mongo:5.0.3
    volumes:
      - ./test-db.d
    expose:
      - 27017
    ports:
      - "27017:27017"
    restart: always
    command: ["--replSet", "test", "--bind_ip_all"]
This forum post was very helpful: https://www.mongodb.com/community/forums/t/docker-compose-replicasets-getaddrinfo-enotfound/14301/4

Mongo container with a replica set with only one node in docker-compose

I want to create a Docker container with an instance of Mongo. In particular, I would like to create a replica set with only one node (since I'm interested in transactions and they are only available for replica sets).
Dockerfile
FROM mongo
RUN echo "rs.initiate();" > /docker-entrypoint-initdb.d/replica-init.js
CMD ["--replSet", "rs0"]
docker-compose.yml
version: "3"
services:
db:
build:
dockerfile: Dockerfile
context: .
ports:
- "27017:27017"
If I use the Dockerfile alone everything is fine, while if I use docker-compose it does not work: in fact, if I then log into the container I get prompted with rs0:OTHER> instead of rs0:PRIMARY>.
I consulted these links but the solutions proposed are not working:
https://github.com/docker-library/mongo/issues/246#issuecomment-382072843
https://github.com/docker-library/mongo/issues/249#issuecomment-381786889
This is the compose file I have used for a while now for local development. You can remove the keyfile pieces if you don't need to connect via SSL.
version: "3.8"
services:
mongodb:
image : mongo:4
container_name: mongodb
hostname: mongodb
restart: on-failure
environment:
- PUID=1000
- PGID=1000
- MONGO_INITDB_ROOT_USERNAME=mongo
- MONGO_INITDB_ROOT_PASSWORD=mongo
- MONGO_INITDB_DATABASE=my-service
- MONGO_REPLICA_SET_NAME=rs0
volumes:
- mongodb4_data:/data/db
- ./:/opt/keyfile/
ports:
- 27017:27017
healthcheck:
test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
interval: 10s
start_period: 30s
command: "--bind_ip_all --keyFile /opt/keyfile/keyfile --replSet rs0"
volumes:
mongodb4_data:
It uses Docker's health check (with a startup delay) to sneak in the rs.initiate() if it actually needs it after it's already running.
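If you want to confirm that the trick actually promoted the node, you can check its replica-set state from the host; state 1 means PRIMARY. A quick check, assuming the mongo/mongo credentials and container name from the compose file above:
docker exec mongodb mongo admin -u mongo -p mongo --quiet --eval "rs.status().myState"
# expected output: 1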
To create a keyfile.
Mac:
openssl rand -base64 741 > keyfile
chmod 600 keyfile
Linux:
openssl rand -base64 756 > keyfile
chmod 600 keyfile
sudo chown 999 keyfile
sudo chgrp 999 keyfile
The top answer stopped working for me in later MongoDB and Docker versions, particularly because rs.initiate().ok would throw an error if the replica set was already initiated, causing the whole command to fail. In addition, connecting from another container was failing because the replica set's sole member had some random host, which wouldn't allow the connection. Here's my new docker-compose.yml:
services:
  web:
    # ...
    environment:
      DATABASE_URL: mongodb://root:root@db/?authSource=admin&tls=false
  db:
    build:
      context: ./mongo/
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: root
    ports:
      - '27017:27017'
    volumes:
      - data:/data/db
    healthcheck:
      test: |
        test $$(mongosh --quiet -u root -p root --eval "try { rs.initiate({ _id: 'rs0', members: [{ _id: 0, host: 'db' }] }).ok } catch (_) { rs.status().ok }") -eq 1
      interval: 10s
      start_period: 30s
volumes:
  data:
Inside ./mongo/, I have a custom Dockerfile that looks like:
FROM mongo:6
RUN echo "password" > /keyfile \
&& chmod 600 /keyfile \
&& chown 999 /keyfile \
&& chgrp 999 /keyfile
CMD ["--bind_ip_all", "--keyFile", "/keyfile", "--replSet", "rs0"]
This Dockerfile is suitable for development, but you'd definitely want a securely generated and persistent keyfile to be mounted in production (and therefore strike the entire RUN command).
You still need to issue replSetInitiate even if there's only one node in the RS.
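For reference, issuing it once by hand from the host looks like this (a sketch; the container name is a placeholder, and older images ship mongo instead of mongosh):
docker exec <container_name> mongosh --eval "rs.initiate()"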
See also here.
I had to do something similar to build tests around ChangeStreams which are only available when running mongo as a replica set. I don't remember where I pulled this from, so I can't explain it in detail but it does work for me. Here is my setup:
Dockerfile
FROM mongo:5.0.3
RUN echo "rs.initiate({'_id':'rs0', members: [{'_id':1, 'host':'127.0.0.1:27017'}]});" > "/docker-entrypoint-initdb.d/init_replicaset.js"
RUN echo "12345678" > "/tmp/key.file"
RUN chmod 600 /tmp/key.file
RUN chown 999:999 /tmp/key.file
CMD ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/tmp/key.file"]
docker-compose.yml
version: '3.7'
services:
  mongo:
    build: .
    restart: always
    ports:
      - 27017:27017
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u admin -p pass --quiet) -eq 1
      interval: 10s
      start_period: 30s
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: pass
      MONGO_INITDB_DATABASE: test
Run docker-compose up and you should be good.
Connection String: mongodb://admin:pass@localhost:27017/test
Note: You shouldn't use this in production obviously, adjust the key "12345678" in the Dockerfile if security is a concern.
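To sanity-check that connection string from the host, something like the following should work (assuming mongosh is installed locally; authSource=admin is added here because the root user lives in the admin database):
mongosh "mongodb://admin:pass@localhost:27017/test?authSource=admin" --eval "db.runCommand({ ping: 1 })"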
If you just need a single-node replica set of MongoDB via docker-compose.yml, you can simply use this:
mongodb:
  image: mongo:5
  restart: always
  command: ["--replSet", "rs0", "--bind_ip_all"]
  ports:
    - 27018:27017
  healthcheck:
    test: mongo --eval "rs.initiate()"
    start_period: 5s
This one works fine for me:
version: '3.4'
services:
  ludustack-db:
    container_name: ludustack-db
    command: mongod --auth
    image: mongo:latest
    hostname: mongodb
    ports:
      - '27017:27017'
    env_file:
      - .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
      - MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
      - MONGO_REPLICA_SET_NAME=${MONGO_REPLICA_SET_NAME}
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 60s
      start_period: 60s

docker-compose mongo healthcheck failing

Hello, my mongo health check is failing.
Below is my docker-compose file:
version: "2.4"
services:
production-api:
build: .
environment:
- MONGO_URI=mongodb://mongodb:27017/productiondb
volumes:
- .:/app
ports:
- "3000:3000"
depends_on:
- mongodb
# condition: service_healthy
mongodb:
image: mongo
ports:
- "27017:27017"
# healthcheck:
# test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/productiondb --quiet 1
# interval: 10s
# timeout: 10s
# retries: 5
And is there any way to pass MONGO_URI to the health check as a variable?
Your healthcheck should look like this:
test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/productiondb --quiet
The hostname mongo doesn't exist inside the mongodb container unless you specify hostname: mongo in your compose file, or you can simply use localhost, which is more common for healthchecks.
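If you do want the service name to resolve inside the container's own healthcheck, you can set it explicitly; a minimal sketch:
mongodb:
  image: mongo
  hostname: mongo
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/productiondb --quiet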
The 1 after --quiet seems to be a typo, which leads to [main] file [1] doesn't exist https://docs.mongodb.com/manual/reference/program/mongo/#cmdoption-mongo-quiet
see also:
Simple HTTP/TCP health check for MongoDB
Why does the docker-compose healthcheck of my mongo container always fail?
You can pass MONGO_URI to the healthcheck by specifying it a second time:
mongodb:
  ..
  environment:
    - MONGO_URI=mongodb://mongodb:27017/productiondb
If you want to use one value for both, create an env_file and use it via:
env_file:
  - mongo.env # which contains `MONGO_URI=mongodb://..`
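If you also want to consume that same variable inside the healthcheck, Compose lets you defer expansion to the container by escaping the dollar sign with $$; the shell inside the container then expands MONGO_URI at check time. A rough sketch (the explicit hostname is there so the service-name host in the URI also resolves from inside the container):
mongodb:
  image: mongo
  hostname: mongodb
  environment:
    - MONGO_URI=mongodb://mongodb:27017/productiondb
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongo $$MONGO_URI --quiet
    interval: 10s
    timeout: 10s
    retries: 5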

Why does the docker-compose healthcheck of my mongo container always fail?

I'm using docker-compose to stand up an Express/React/Mongo app. I can currently stand up everything using retry logic in the express app. However, I would prefer to use Docker's healthcheck to prevent the string of errors when the containers initially spin up. However, when I add a healthcheck in my docker-compose.yml, it hangs for the interval/retry time limit and exits with:
ERROR: for collector Container "70e7aae49c64" is unhealthy.
ERROR: for server Container "70e7aae49c64" is unhealthy.
ERROR: Encountered errors while bringing up the project.
It seems that my healthcheck never returns a healthy status, and I'm not entirely sure why. The entirety of my docker-compose.yml:
version: "2.1"
services:
mongo:
image: mongo
volumes:
- ./data/mongodb/db:/data/db
ports:
- "${DB_PORT}:${DB_PORT}"
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet 1
interval: 10s
timeout: 10s
retries: 5
collector:
build: ./collector/
environment:
- DB_HOST=${DB_HOST}
- DB_PORT=${DB_PORT}
- DB_NAME=${DB_NAME}
volumes:
- ./collector/:/app
depends_on:
mongo:
condition: service_healthy
server:
build: .
environment:
- SERVER_PORT=$SERVER_PORT
volumes:
- ./server/:/app
ports:
- "${SERVER_PORT}:${SERVER_PORT}"
depends_on:
mongo:
condition: service_healthy
For the test, I've also tried:
["CMD", "nc", "-z", "localhost", "27017"]
And:
["CMD", "bash", "/mongo-healthcheck"]
I've also tried ditching the healthcheck altogether, following the advice of this guy. Everything stands up, but I get the dreaded errors in the output before a successful connection:
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect [MongoNetworkError: connect
ECONNREFUSED 172.21.0.2:27017]
collector_1 | MongoDB connection with retry
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect
The ultimate goal is a clean startup output when running the docker-compose up --build. I've also looked into some of the solutions in this question, but I haven't had much luck with wait-for-it either. What's the correct way to wait for Mongo to be up and running before starting the other containers, and achieving a clean startup?
Firstly, I'd suggest updating the docker-compose.yaml file version to at least 3.4 (e.g. version: "3.5"), then please add the start_period option to your mongo healthcheck.
Note: start_period is only supported for v3.4 and higher of the compose file format.
start period provides initialization time for containers that need time to bootstrap. Probe failure during that period will not be counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries.
So it would look something like this:
healthcheck:
  test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet
  interval: 10s
  timeout: 10s
  retries: 5
  start_period: 40s
We can use MongoDB's serverStatus command to do the health check, as the MongoDB documentation puts it:
Monitoring applications can run this command at a regular interval to collect statistics about the instance.
Because the serverStatus command requires authentication, you need to set up the health check similar to the configuration shown below:
version: '3.4'
services:
  mongo:
    image: mongo
    restart: always
    healthcheck:
      test: echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1
      interval: 10s
      timeout: 10s
      retries: 3
      start_period: 20s
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
That's it. If your MongoDB instance is healthy, you will see something similar to mine:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01ed0e02aa70 mongo "docker-entrypoint.s…" 11 minutes ago Up 11 minutes (healthy) 27017/tcp demo_mongo_1
The mongo shell is removed from MongoDB 6.0. The replacement is mongosh.
Check if this works for you:
mongo:
  image: mongo
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017/test --quiet
Note that you should probably use mongosh if you are on a newer version of MongoDB:
healthcheck:
  test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
  interval: 5s
  timeout: 5s
  retries: 3
  start_period: 5s
I found a solution here:
https://github.com/docker-library/healthcheck/tree/master/mongo
Note, it explains why a health check is not included in the official image:
https://github.com/docker-library/cassandra/pull/76#issuecomment-246054271
docker-healthcheck
#!/bin/bash
set -eo pipefail
if mongo --quiet "localhost/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
exit 0
fi
exit 1
In the example from the link, they use a host variable:
host="$(hostname --ip-address || echo '127.0.0.1')"
if mongo --quiet "$host/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
# continues the same code
It did not work for me, so I replaced the host with localhost.
In docker-compose:
mongo:
  build:
    context: "./mongodb"
    dockerfile: Dockerfile
  container_name: crm-mongo
  restart: always
  healthcheck:
    test: ["CMD", "docker-healthcheck"]
    interval: 10s
    timeout: 2s
    retries: 10
Alternatively, you can bake the health check script into the image so it executes inside the container. Change the Dockerfile like this:
FROM mongo:4
ADD docker-healthcheck /usr/local/bin/
When I execute the echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet 1 command in the docker container, the result is:
2019-04-19T02:39:19.770+0000 E - [main] file [1] doesn't exist
failed to load: 1
Try this:
healthcheck:
  test: bash -c "if mongo --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then exit 0; fi; exit 1;"
This one worked for me:
healthcheck:
  test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
  interval: 10s
  timeout: 10s
  retries: 5