I want to create a replica set of 3 nodes using docker-compose and seed initial data into them. If I remove --replSet and seed data without specifying hosts, I have no problems.
docker-compose.yml
master:
image: 'mongo:3.4'
ports:
- '50000:27017'
volumes:
- './restaurants.json:/restaurants.json'
- './setup.js:/docker-entrypoint-initdb.d/00_setup.js'
- './seed.sh:/docker-entrypoint-initdb.d/01_seed.sh'
command: '--replSet rs'
slave1:
image: 'mongo:3.4'
ports:
- '50001:27017'
command: '--replSet rs'
slave2:
image: 'mongo:3.4'
ports:
- '50002:27017'
command: '--replSet rs'
seed.sh
# ...
_wait "slave1"
_wait "slave2"
echo "Starting to import data..."
mongoimport --host="rs/master:27017,slave1:27017,slave2:27017" --db db --collection restaurants --drop --file /restaurants.json > /dev/null
echo "Done."
Log
master_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/01_seed.sh
master_1 | Waiting for slave1...
master_1 | .
master_1 | Done.
master_1 | Waiting for slave2...
master_1 | Done.
master_1 | Starting to import data...
master_1 | 2017-11-26T16:06:39.148+0000 [........................] db.restaurants 0B/11.3MB (0.0%)
master_1 | 2017-11-26T16:06:39.653+0000 [........................] db.restaurants 0B/11.3MB (0.0%)
master_1 | 2017-11-26T16:06:39.653+0000 Failed: error connecting to db server: no reachable servers
master_1 | 2017-11-26T16:06:39.653+0000 imported 0 documents
mongoreplication_master_1 exited with code 1
This question is old, but I ran into the same issue recently. It's worth noting that the mongo docker-entrypoint.sh script strips the --replSet argument during the initdb phase; see:
https://github.com/docker-library/mongo/blob/master/3.6/docker-entrypoint.sh#L237
So you can't connect to the host that is running the init scripts. You can, however, create another container whose sole purpose is to initialize the replica set, and override its docker-entrypoint.sh.
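For illustration, here is a minimal sketch of that approach (untested; the file name init-rs.sh and the extra one-shot container are assumptions, while the hostnames come from the compose file above). The extra container overrides its entrypoint so docker-entrypoint.sh never runs:

#!/bin/bash
# init-rs.sh -- run from a separate container, e.g. with
# entrypoint: ['bash', '/init-rs.sh'] and links to master/slave1/slave2.
# Wait until the master answers pings.
until mongo --host master:27017 --quiet --eval 'db.adminCommand({ping: 1}).ok'; do sleep 1; done
# Initiate the replica set with all three members.
mongo --host master:27017 --eval 'rs.initiate({_id: "rs", members: [
  {_id: 0, host: "master:27017"},
  {_id: 1, host: "slave1:27017"},
  {_id: 2, host: "slave2:27017"}
]})'
# Wait for a primary to be elected, then import the seed data.
until mongo --host master:27017 --quiet --eval 'db.isMaster().ismaster' | grep -q true; do sleep 1; done
mongoimport --host="rs/master:27017,slave1:27017,slave2:27017" --db db --collection restaurants --drop --file /restaurants.json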
I have a Golang server alongside Postgres instance running inside docker compose. For some reason the Postgres is refusing connection. From all of my previous searches, usually the problem is typo, not exposing the port, having SSL and so on, but I don't have anything like that going on and still having this issue
version: "3.2"
services:
ingress:
image: jwilder/nginx-proxy
ports:
- "3000:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
auth-service:
depends_on:
- rabbitmq
- auth-db
- ingress
build: ./auth
container_name: "auth-service"
ports:
- 3001:3000
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_HOST=auth-db
- POSTGRES_DB=auth-dev
- POSTGRES_PORT=5435
- PORT=3000
- RABBITMQ_USER=guest
- RABBITMQ_PASSWORD=guest
- RABBITMQ_HOST=rabbitmq
- RABBITMQ_PORT=5672
- VIRTUAL_HOST=api.twitchy.dev
- VIRTUAL_PATH=/v1/auth/
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
# networks:
# - rabbitmq_net
# - default
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: "rabbitmq"
ports:
- 5672:5672
- 15672:15672
volumes:
- rabbitmq_data:/var/lib/rabbitmq/
- rabbitmq_log:/var/log/rabbitmq/
# networks:
# - rabbitmq_net
auth-db:
image: postgres:14.1-alpine
restart: always
container_name: "auth-db"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=auth-dev
ports:
- "5435:5432"
volumes:
- db:/var/lib/postgresql/data
chat-db:
image: postgres:14.1-alpine
restart: always
container_name: "chat-db"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=chat-dev
ports:
- "5434:5432"
volumes:
- db:/var/lib/postgresql/data
# networks:
# rabbitmq_net:
# driver: bridge
volumes:
db:
driver: local
rabbitmq_data:
rabbitmq_log:
This is the error I am getting
auth-service | Retrying connection to database...
auth-service | failed to connect to `host=auth-db user=postgres database=auth-dev`: dial error (dial tcp 172.23.0.3:5435: connect: connection refused)
And here is the Golang code I use to connect to the DB (using pgx):
dbUrl := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
os.Getenv("POSTGRES_USER"),
os.Getenv("POSTGRES_PASSWORD"),
os.Getenv("POSTGRES_HOST"),
os.Getenv("POSTGRES_PORT"),
os.Getenv("POSTGRES_DB"))
This is why I am confused:
The ports match up: I expose 5435 from Postgres, and I connect to 5435.
The host should be correct, as I am referencing the auth-db service name and they are on the same default network, so that should be fine.
The password and username match up.
The POSTGRES_DB also matches up; the default database should be auth-dev:
POSTGRES_DB
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
I have sslmode=disable as well.
Is there anything else that can cause the connection to be refused?
I tried changing the DB to template1 and postgres, as they are created by default, but neither works either.
54511e50369c postgres:14.1-alpine "docker-entrypoint.s…" 16 minutes ago Up 16 seconds 0.0.0.0:5435->5432/tcp, :::5435->5432/tcp auth-db
docker exec -it 54511e50369c psql -U postgres
psql (14.1)
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+------------+------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
The database is ready when I am trying to connect (I retry 20 times and restart the service if it crashes, so it should be available).
When you map ports in docker-compose, say like "5435:5432", you are mapping port 5435 on the HOST machine to 5432 on the CONTAINER. However, your db url in the auth-service definition is using the name of the service, auth-db, so you are actually hitting the db container directly, not going through the host machine. Because the db container does not expose 5435, you are unable to connect using port 5435.
If you were to try to connect to the database from your host machine for example, you would probably be successful using port 5435 and localhost.
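In this compose file, that would mean pointing the service at the container port; a sketch of just the changed lines (everything else unchanged):

auth-service:
  environment:
    - POSTGRES_HOST=auth-db
    - POSTGRES_PORT=5432  # port inside the container; 5435 is only reachable from the host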
I am running Airflow locally based on a Dockerfile, .env, docker-compose.yaml and entrypoint.sh,
started as "docker-compose -f docker-compose.yaml up".
Just after "airflow db init" in entrypoint.sh I am getting the following error:
[After that everything is fine and I can run Airflow. But this drives me crazy. Can anyone help me resolve it, please?]
What's strange is that the service queries the tables even before the DB has been initialized:
airflow_webserver | initiating db
airflow_webserver | DB: postgresql://airflow:***@airflow_metadb:5432/airflow
airflow_webserver | [2022-02-22 13:52:26,318] {db.py:929} INFO - Dropping tables that exist
airflow_webserver | [2022-02-22 13:52:26,570] {migration.py:201} INFO - Context impl PostgresqlImpl.
airflow_webserver | [2022-02-22 13:52:26,570] {migration.py:204} INFO - Will assume transactional DDL.
airflow_metadb | 2022-02-22 13:52:26.712 UTC [71] ERROR: relation "connection" does not exist at character 55
airflow_metadb | 2022-02-22 13:52:26.712 UTC [71] STATEMENT: SELECT connection.conn_id AS connection_conn_id
airflow_metadb | FROM connection GROUP BY connection.conn_id
airflow_metadb | HAVING count(*) > 1
airflow_metadb | 2022-02-22 13:52:26.714 UTC [72] ERROR: relation "connection" does not exist at character 55
airflow_metadb | 2022-02-22 13:52:26.714 UTC [72] STATEMENT: SELECT connection.conn_id AS connection_conn_id
airflow_metadb | FROM connection
airflow_metadb | WHERE connection.conn_type IS NULL
airflow_webserver | [2022-02-22 13:52:26,733] {db.py:921} INFO - Creating tables
airflow 2.2.3
postgres 13
in dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
in docker-compose.yaml:
webserver:
env_file: ./.env
image: airflow
container_name: airflow_webserver
restart: always
depends_on:
- postgres
environment:
<<: *env_common
AIRFLOW__CORE__LOAD_EXAMPLES: ${AIRFLOW__CORE__LOAD_EXAMPLES}
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: ${AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION}
EXECUTOR: ${EXECUTOR}
_AIRFLOW_DB_UPGRADE: ${_AIRFLOW_DB_UPGRADE}
_AIRFLOW_WWW_USER_CREATE: ${_AIRFLOW_WWW_USER_CREATE}
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD}
_AIRFLOW_WWW_USER_ROLE: ${_AIRFLOW_WWW_USER_ROLE}
_AIRFLOW_WWW_USER_EMAIL: ${_AIRFLOW_WWW_USER_EMAIL}
logging:
options:
max-size: 10m
max-file: "3"
volumes:
- ./dags:bla-bla
- ./logs:bla-bla
ports:
- ${AIRFLOW_WEBSERVER_PORT}:${AIRFLOW_WEBSERVER_PORT}
command: webserver
healthcheck:
test: ["CMD-SHELL", "[ -f /usr/local/airflow/airflow-webserver.pid ]"]
interval: 30s
timeout: 30s
retries: 3
I'm trying to install TimescaleDB using Docker Compose, but I get the following error when importing data using timescaledb-parallel-copy:
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/import_data.sh
timescaledb | panic: could not connect: dial tcp 172.18.0.2:5432: connect: connection refused
timescaledb |
timescaledb | goroutine 6 [running]:
timescaledb | main.processBatches(0xc000016730, 0xc000060780)
timescaledb |     /go/src/github.com/timescale/timescaledb-parallel-copy/cmd/timescaledb-parallel-copy/main.go:238 +0x8bb
timescaledb | created by main.main
timescaledb |     /go/src/github.com/timescale/timescaledb-parallel-copy/cmd/timescaledb-parallel-copy/main.go:148 +0x1d2
timescaledb | panic: could not connect: dial tcp 172.18.0.2:5432: connect: connection refused
Here are my Compose file and my Dockerfile:
Docker Compose:
version: "3.8"
services:
timescaledb:
container_name: timescaledb
build:
context: "./timescaledb"
dockerfile: "docker_file"
env_file:
- "./timescaledb/environment.env"
volumes:
- "./timescaledb/data:/data"
ports:
- "5432:5432/tcp"
networks:
- local_network
restart: on-failure
networks:
local_network:
Dockerfile:
FROM timescale/timescaledb:latest-pg13
ADD create_tables.sql /docker-entrypoint-initdb.d
ADD import_data.sh /docker-entrypoint-initdb.d
RUN chmod a+r /docker-entrypoint-initdb.d/*
And here's the import_data.sh script that calls timescaledb-parallel-copy:
#!/bin/bash
timescaledb-parallel-copy --connection "host=timescaledb user=postgres password=XXX sslmode=disable" --db-name YYY --table ZZZ --copy-options "CSV" -skip-header --columns "name, unit" --file "/data/data.csv" --reporting-period 30s --workers 4
I also tried using localhost, but I get the same error.
Edit: Problem solved. The host should not be specified in the connection string. According to the official documentation (https://hub.docker.com/_/postgres):
Also, as of docker-library/postgres#440, the temporary daemon started for these initialization scripts listens only on the Unix socket, so any psql usage should drop the hostname portion (see docker-library/postgres#474 (comment) for example).
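Applied to the script above, the corrected invocation would therefore drop host= from the connection string so the temporary init daemon is reached over the Unix socket (a sketch, with the same placeholder values):

#!/bin/bash
timescaledb-parallel-copy --connection "user=postgres password=XXX sslmode=disable" --db-name YYY --table ZZZ --copy-options "CSV" -skip-header --columns "name, unit" --file "/data/data.csv" --reporting-period 30s --workers 4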
I am trying to start a PostgreSQL docker container on my Mac; I use OS X 10.11.6 El Capitan with Docker Toolbox 19.03.01.
If I run:
docker run --name my_postgres -v my_dbdata:/var/lib/postgresql/data -p 54320:5432 postgres:11
everything works and I get:
my_postgres | 2019-09-17 04:51:48.908 UTC [41] LOG: database system is ready to accept connections
but if I use a .yml file like this one:
docker-compose.yml:
version: "3"
services:
db:
image: "postgres:11"
container_name: "my_postgres"
ports:
- "54320:5432"
volumes:
- my_dbdata:/var/lib/postgresql/data
volumes:
my_dbdata:
and run
docker-compose up
I get instead:
my_postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
my_postgres |
my_postgres | 2019-09-17 04:51:49.009 UTC [41] LOG: received fast shutdown request
my_postgres | 2019-09-17 04:51:49.011 UTC [41] LOG: aborting any active transactions
my_postgres | waiting for server to shut down....2019-09-17 04:51:49.087 UTC [41] LOG: background worker "logical replication launcher" (PID 48) exited with exit code 1
my_postgres | 2019-09-17 04:51:49.091 UTC [43] LOG: shutting down
my_postgres | 2019-09-17 04:51:49.145 UTC [41] LOG: database system is shut down
Why does the same thing fail with docker-compose?
Many thanks in advance.
Try the one below; it worked for me:
version: '3.1'
services:
db:
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: mypassword
volumes:
- ./postgres-data:/var/lib/postgresql/data
ports:
- 5432:5432
Then use docker-compose up. To see the logs after running the previous command, use:
docker-compose logs -f
If you are trying to access an existing volume on the host machine, you need to specify that the volume was created outside the Compose file with the external keyword, like this:
version: "3.7"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
I took the example from the Compose file reference: https://docs.docker.com/compose/compose-file/.
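Note that Compose never creates an external volume for you; it must already exist before docker-compose up, e.g. (using the volume name from the example above):

docker volume create data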
Also double-check the contents of your external volume between runs, to see if it was overridden.
Please also double-check your quotes; you don't need to put the image name in quotes, but I don't think that's the issue.
The my_dbdata named volume is not the same in the two cases.
docker run creates a volume named my_dbdata, whereas docker-compose by default creates a volume called <dir>_my_dbdata.
Run docker volume ls to list the volumes:
docker volume ls | grep my_dbdata
I suspect the volume created by docker-compose has issues, and as a consequence postgres doesn't start correctly. The initialization of the database in the my_postgres container is done only once.
Try to remove the container and the volume created by docker-compose:
docker rm my_postgres
docker volume rm <dir>_my_dbdata
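Equivalently, from the project directory you can let Compose remove its own containers and named volumes in one step:

docker-compose down --volumes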
Hope it helps
I'm using docker-compose to stand up an Express/React/Mongo app. I can currently stand up everything using retry logic in the express app. However, I would prefer to use Docker's healthcheck to prevent the string of errors when the containers initially spin up. However, when I add a healthcheck in my docker-compose.yml, it hangs for the interval/retry time limit and exits with:
ERROR: for collector Container "70e7aae49c64" is unhealthy.
ERROR: for server Container "70e7aae49c64" is unhealthy.
ERROR: Encountered errors while bringing up the project.
It seems that my healthcheck never returns a healthy status, and I'm not entirely sure why. The entirety of my docker-compose.yml:
version: "2.1"
services:
mongo:
image: mongo
volumes:
- ./data/mongodb/db:/data/db
ports:
- "${DB_PORT}:${DB_PORT}"
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet 1
interval: 10s
timeout: 10s
retries: 5
collector:
build: ./collector/
environment:
- DB_HOST=${DB_HOST}
- DB_PORT=${DB_PORT}
- DB_NAME=${DB_NAME}
volumes:
- ./collector/:/app
depends_on:
mongo:
condition: service_healthy
server:
build: .
environment:
- SERVER_PORT=$SERVER_PORT
volumes:
- ./server/:/app
ports:
- "${SERVER_PORT}:${SERVER_PORT}"
depends_on:
mongo:
condition: service_healthy
For the test, I've also tried:
["CMD", "nc", "-z", "localhost", "27017"]
And:
["CMD", "bash", "/mongo-healthcheck"]
I've also tried ditching the healthcheck altogether, following the advice of this guy. Everything stands up, but I get the dreaded errors in the output before a successful connection:
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect [MongoNetworkError: connect
ECONNREFUSED 172.21.0.2:27017]
collector_1 | MongoDB connection with retry
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect
The ultimate goal is a clean startup output when running the docker-compose up --build. I've also looked into some of the solutions in this question, but I haven't had much luck with wait-for-it either. What's the correct way to wait for Mongo to be up and running before starting the other containers, and achieving a clean startup?
Firstly, I'd suggest updating the docker-compose.yaml file version to at least 3.4 (e.g. version: "3.5"), then add the start_period option to your mongo healthcheck.
Note: start_period is only supported for v3.4 and higher of the compose file format.
start_period provides initialization time for containers that need time to bootstrap. Probe failure during that period will not be counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries.
So it would look something like this:
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet
interval: 10s
timeout: 10s
retries: 5
start_period: 40s
We can use MongoDB's serverStatus command to do the health check, as the MongoDB documentation puts it this way:
Monitoring applications can run this command at a regular interval to collect statistics about the instance.
Because the serverStatus command requires authentication, you need to set up the health check similar to the configuration shown below:
version: '3.4'
services:
mongo:
image: mongo
restart: always
healthcheck:
test: echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1
interval: 10s
timeout: 10s
retries: 3
start_period: 20s
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
That's it. If your MongoDB instance is healthy, you will see something similar to mine:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01ed0e02aa70 mongo "docker-entrypoint.s…" 11 minutes ago Up 11 minutes (healthy) 27017/tcp demo_mongo_1
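One caveat worth noting (an assumption about how the variables are resolved, not part of the original answer): in a Compose file a single $ is interpolated from the host shell when the file is parsed. If you want the healthcheck to read the container's environment at runtime instead, escape it as $$:

test: echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $$MONGO_INITDB_ROOT_USERNAME -p $$MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1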
The mongo shell is removed from MongoDB 6.0. The replacement is mongosh.
Check if this works for you :
mongo:
image: mongo
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017/test --quiet
Note that you should probably use mongosh if you are on newer versions of MongoDB:
healthcheck:
test: ["CMD","mongosh", "--eval", "db.adminCommand('ping')"]
interval: 5s
timeout: 5s
retries: 3
start_period: 5s
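Whichever variant you pick, you can verify that the probe actually passes by inspecting the container's health state (the container name is whatever Compose assigned):

docker inspect --format '{{json .State.Health}}' <container>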
I found a solution here
https://github.com/docker-library/healthcheck/tree/master/mongo
Note that it explains why a health check is not included in the official image:
https://github.com/docker-library/cassandra/pull/76#issuecomment-246054271
docker-healthcheck
#!/bin/bash
set -eo pipefail
if mongo --quiet "localhost/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
exit 0
fi
exit 1
In the example from the link, they use a host variable:
host="$(hostname --ip-address || echo '127.0.0.1')"
if mongo --quiet "$host/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
# continues the same code
It did not work for me, so I replaced the host with localhost.
In docker-compose
mongo:
build:
context: "./mongodb"
dockerfile: Dockerfile
container_name: crm-mongo
restart: always
healthcheck:
test: ["CMD", "docker-healthcheck"]
interval: 10s
timeout: 2s
retries: 10
Alternatively, you can execute the health check inside the container. Change the Dockerfile for that:
FROM mongo:4
ADD docker-healthcheck /usr/local/bin/
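One assumption to keep in mind: ADD preserves the file's permissions from the build context, so if docker-healthcheck is not executable there the probe will fail. An explicit chmod is a safe extra line:

RUN chmod +x /usr/local/bin/docker-healthcheck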
When I execute the echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet 1 command in the docker container, the result is:
2019-04-19T02:39:19.770+0000 E - [main] file [1] doesn't exist
failed to load: 1
The trailing 1 is treated as a script file to load, which doesn't exist. Try this instead:
healthcheck:
test: bash -c "if mongo --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then exit 0; fi; exit 1;"
This one worked for me:
healthcheck:
test: ["CMD","mongo", "--eval", "db.adminCommand('ping')"]
interval: 10s
timeout: 10s
retries: 5