Hello, I have the following error in my Node project:
(node:51) UnhandledPromiseRejectionWarning: Error: getaddrinfo ENOTFOUND ${DB_HOST}
I think the problem is that Postgres has not started yet when my project starts, and I can't come up with a way to start my container only after Postgres is ready. I read something about dockerize, but I can't see how to apply it.
My Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
My docker-compose.yml:
version: '3.7'

services:
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - ci-postgres
    networks:
      - ci-network
  ci-postgres:
    image: postgres:12
    container_name: ci-postgres
    ports:
      - '${DB_PORT}:5432'
    environment:
      - ALLOW_EMPTY_PASSWORD=no
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - ci-postgres-data:/data
    networks:
      - ci-network

volumes:
  ci-postgres-data:

networks:
  ci-network:
    driver: bridge
And this is my .env:
SERVER_PORT=4000
DB_HOST=ci-postgres
DB_PORT=5432
DB_USER=spirit
DB_PASS=api
DB_NAME=emasa_ci
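On the dockerize idea mentioned in the question: it is normally wired into the image's CMD so that the app only starts once the database port accepts TCP connections. A minimal sketch against the Dockerfile above, assuming the jwilder/dockerize v0.6.1 alpine release binary (the install line has to go before USER node, since /usr/local/bin is only writable by root), with ci-postgres being the DB host from the .env:

# install dockerize while the build still runs as root (i.e. before USER node)
RUN wget -qO- https://github.com/jwilder/dockerize/releases/download/v0.6.1/dockerize-alpine-linux-amd64-v0.6.1.tar.gz \
    | tar -xz -C /usr/local/bin

# wait until Postgres accepts TCP connections, then start the dev server
CMD ["dockerize", "-wait", "tcp://ci-postgres:5432", "-timeout", "60s", "yarn", "dev"]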
You can use the docker-compose.yml below as a reference; it adds depends_on, healthcheck and links so that the web service waits for the db service.
Reference: Postgresql Container is not running in docker-compose file - Why is this?
version: "3"
services:
webapp:
build: .
container_name: webapp
ports:
- "5000:5000"
links:
- postgres
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:11-alpine
container_name: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=tmp
- POSTGRES_USER=tmp
- POSTGRES_PASSWORD=tmp_password
volumes: # Persist the db data
- database-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
database-data:
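Adapted to the compose file from the question, the relevant parts might look like the sketch below (only the changed bits are shown; whether the long depends_on form with condition: service_healthy is honored under a version: '3.7' file depends on your docker-compose version, as it was dropped in the v3 file format and later reinstated by the Compose specification):

ci-postgres:
  image: postgres:12
  # ... rest as in your file ...
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
    interval: 10s
    timeout: 5s
    retries: 5

ci-api:
  build: .
  # ... rest as in your file ...
  depends_on:
    ci-postgres:
      condition: service_healthy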
Related
I have this docker-compose file:
version: '3.7'

services:
  db:
    image: "postgres:9.6"
    container_name: postgres-container
    ports: ["6543:5432"]
    environment:
      - POSTGRES_USER=odoo
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=odoo
    restart: always
    volumes:
      - ./data/postgres:/var/lib/postgresql/data
  odoo:
    #build: ./odoo-container
    image: odoo-image
    container_name: odoo-container
    ports: ["8069:8069"]
    tty: true
    command: opt/odoo/odoo-bin -c opt/odoo.conf -d teste
    depends_on:
      - db
The problem is that when I start docker-compose, the db service runs, but when Docker starts the odoo service I get this error:
psycopg2.OperationalError: FATAL: the database system is starting up
When I restart the odoo container, it works.
I added the restart option, and it works:
odoo:
  #build: ./odoo-container
  image: odoo-image
  container_name: odoo-container
  ports: ["8069:8069"]
  command: opt/odoo/odoo-bin -c opt/odoo.conf -d teste
  depends_on:
    - db
  restart: always
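restart: always works here because the odoo container keeps getting restarted until Postgres has finished starting up. If your Compose version supports the long depends_on form, an alternative is to make the ordering explicit with a healthcheck; a sketch reusing the credentials from the db service above:

db:
  image: "postgres:9.6"
  # ... rest as above ...
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U odoo -d odoo"]
    interval: 5s
    timeout: 5s
    retries: 10

odoo:
  # ... rest as above ...
  depends_on:
    db:
      condition: service_healthy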
When I build the containers for the first time, the database doesn't have enough time to initialize itself while the web service and nginx are already up, so I can't reach the server on the first run; after the containers are run a second time, everything works properly.
I have tried this command: ["./wait-for-it.sh", "db:5432", "--", "python", "manage.py runserver 0.0.0.0:8000"] to wait until the database is initialized, but it didn't help. I've also tried the solutions from this post, but nothing worked.
Please help me make my services wait until the database is initialized. Thanks in advance!
Here is my docker-compose file:
version: "3.9"
services:
db:
image: postgres:13.3-alpine
container_name: db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
healthcheck:
test: [ "CMD", "curl", "-f", "http://localhost:5432" ]
interval: 30s
timeout: 10s
retries: 5
web:
build: .
container_name: web
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
restart: on-failure
depends_on:
- db
nginx:
build: ./nginx
container_name: nginx
ports:
- "80:80"
restart: on-failure
depends_on:
- web
- db
depends_on only waits until the service has started, not until it is healthy. You should additionally define the condition service_healthy so that the dependent waits until the dependency is healthy. (Also note that the curl-based healthcheck above will never pass: Postgres doesn't speak HTTP and the postgres image doesn't ship curl, so pg_isready is the right probe.)
depends_on:
  db:
    condition: service_healthy
Here's a complete docker-compose file for reference:
version: "3.9"
services:
db:
image: postgres:13.3-alpine
container_name: db
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 1s
timeout: 5s
retries: 5
web:
image: nginx:latest
container_name: web
restart: on-failure
depends_on:
db:
condition: service_healthy
Problem solved by adding a short shell loop to command (note that </dev/tcp/db/5432 is a bash-only redirection, so the image needs bash):
web:
  build: .
  container_name: web
  command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; python manage.py runserver 0.0.0.0:8000'
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  restart: on-failure
  depends_on:
    - db
I am trying to set up a Docker stack with Laravel, MariaDB, nginx, Redis and phpMyAdmin. The Laravel site works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" that links all of the containers, but if I understand Docker correctly there is already a "default" network that does this. It didn't change the error message anyway.
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in this case the networks definition was missing:
phpmyadmin:
  image: phpmyadmin/phpmyadmin:latest
  container_name: phpmyadmin
  restart: always
  ports:
    # 8080 is the host port and 80 is the docker port
    - 8080:80
  environment:
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql
    - MYSQL_USERNAME=root
    - MYSQL_ROOT_PASSWORD=secret
  depends_on:
    - mysql
  networks:
    # define your network where all containers are connected to each other
    - laravel
  volumes:
    # define directory path where you shall store your persistent data and config
    # files of phpmyadmin
    - ./docker/phpmyadmin
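For the networks: entry in that snippet to take effect, the network also has to be declared at the top level and joined by the database service, roughly like this (the name laravel is simply the one used above):

services:
  db:
    image: mariadb:10.4
    # ... rest as in the question ...
    networks:
      - laravel

networks:
  laravel:
    driver: bridge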
Maybe your container cannot start because its volume contains incompatible data. This can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem by removing the volume and importing the database again. You may have to create a backup first.
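A rough sequence for that (a sketch; the exact volume name depends on your Compose project, so check docker volume ls first):

docker-compose down                   # stop the stack
docker volume ls                      # find the database volume, e.g. <project>_db-data
docker volume rm <project>_db-data    # remove the incompatible data
docker-compose up -d db               # start the database with a fresh, empty data directory
# then re-import your backup/dump into the new database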
Running a Docker container with Postgres and npm produces this error, while running them separately (npm and Docker) doesn't. What seems to be the problem here?
My docker-compose.yml:
version: "3.5"
services:
db:
image: postgres:12.1
ports:
- 5432:5432
environment:
- FLYWAY_URL=jdbc:postgresql://db:5432/
- POSTGRES_USER=myuser
- POSTGRES_PASSWORD=mypass
- POSTGRES_DB=mydb
volumes:
- ./db/data:/var/lib/postgresql/data
migrate:
image: boxfuse/flyway
entrypoint: ["sh", "-c", "/flyway/wait-for.sh db:5432 -- flyway migrate"]
depends_on:
- db
volumes:
- ./common/migrations/:/flyway/sql:rw
- ./common/scripts/wait-for.sh:/flyway/wait-for.sh:rw
environment:
# - FLYWAY_LOCATIONS=classpath:/common/migrations/
- FLYWAY_PASSWORD=mypass
- FLYWAY_USER=myuser
- FLYWAY_URL=jdbc:postgresql://db:5432/mydb?user=myuser&password=mypass
- FLYWAY_CONNECT_RETRIES=30
networks:
default:
name: mydb-local
services:
example-service:
build: .
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
ports:
- 3000:3000
- 9229:9229
command: npm start
Things I have checked:
postgresql.conf & pg_hba.conf accept connections.
The DB credentials are correct.
The DB is running.
I've configured my project to use Docker. I have a database that was used before Docker, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the Docker container simply doesn't see the database). If this is nonsense, please let me know. Maybe I should migrate my server database into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p#ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use Postgres in Docker as the server DB, as you suggested.
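For that setup, the data directory is mapped to a host path, for example (a sketch; /srv/pgdata is a placeholder, and note that you generally can't point it directly at the data directory of an existing non-Docker Postgres install, so dump and restore into the container instead):

services:
  db:
    restart: always
    image: postgres:latest
    volumes:
      - /srv/pgdata:/var/lib/postgresql/data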
If you only want your application to connect to the external database, declare it as an external host:
version: '2'

services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host's IP.
Regards