Can't reach database server at `postgres`:`5432` - postgresql

Trying to dockerize NestJS and Prisma.
Nest is responding correctly to curl requests, and I can connect to the Postgres server fine with this command:
docker compose exec postgres psql -h localhost -U postgres -d webapp_dev
Everything works until I try to run
npx prisma migrate dev --name init
then I get back
Error: P1001: Can't reach database server at `postgres`:`5432`
Here is my code:
docker-compose.yml
version: "2"
services:
backend:
build: .
ports:
- 3000:3000
- 9229:9229 # debugger port
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
command: yarn start:debug
restart: unless-stopped
depends_on:
- postgres
environment:
DATABASE_URL: postgres://postgres#postgres/webapp_dev
PORT: 8000
postgres:
image: postgres:14-alpine
ports:
- 5432:5432
environment:
POSTGRES_DB: webapp_dev
POSTGRES_HOST_AUTH_METHOD: trust
Dockerfile
FROM node:16
# Create app directory; this is in our container
WORKDIR /usr/src/app
# Install app dependencies
# Need to copy both package.json and the lock file for this to work
COPY package.json yarn.lock ./
RUN yarn install
COPY prisma/schema.prisma ./prisma/
RUN npx prisma generate
# Bundle app source
COPY . .
RUN yarn build
EXPOSE 8080
CMD ["node", "dist/main"]
.env
DATABASE_URL=postgres://postgres@postgres/webapp_dev

Not sure if this is the only issue, but your DB URL does not contain the database password:
DATABASE_URL: postgres://postgres:mysecret@postgres/webapp_dev?schema=public

I got the same error; I solved it by adding ?connect_timeout=300 to my DATABASE_URL.
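For example, the amended URL might look like this (a sketch building on the corrected URL from the answer above; in Prisma's PostgreSQL connection string, connect_timeout is given in seconds):
DATABASE_URL=postgres://postgres:mysecret@postgres:5432/webapp_dev?schema=public&connect_timeout=300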

Related

PrismaORM PostgreSQL create migration error inside Docker container

I have a NestJS application that uses Prisma ORM to connect to PostgreSQL. But the Docker build crashes after executing npx prisma migrate dev --name init with the error Can't reach database server at postgres:5432.
My docker-compose.yml
version: "3.8"
services:
api:
build:
dockerfile: Dockerfile
context: .
depends_on:
- postgres
env_file:
- ./.env
ports:
- "8080:5000"
postgres:
image: postgres:10.4
ports:
- "5432:5432"
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_DB: db
volumes:
- ./postgres-data:/var/lib/postgresql/data
env_file:
- ./.env
My Dockerfile
FROM node:16
WORKDIR /qmessanger/src/server
COPY package*.json ./
COPY prisma ./prisma/
COPY .env ./
COPY . .
RUN npm install
RUN npm run build
RUN npx prisma generate
RUN npx prisma migrate dev --name init
EXPOSE 8080
CMD [ "node", "dist/main" ]
My .env config
DATABASE_URL="postgresql://user:password#postgres:5432/db"
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_USER=user
POSTGRES_PASSWORD=password
POSTGRES_DB=db
My prisma config
generator client {
  provider = "prisma-client-js"
}
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
The issue is that you're trying to run the migration before the postgres service is running. You would need to run the migration as part of your startup command or entrypoint.
Here is an example of a similar problem in Django, where an entrypoint script was used to run the migration:
How do you perform Django database migrations when using Docker-Compose?
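For this Prisma setup, the same idea would be a small entrypoint script, sketched below (assuming dist/main is the built entry file, as in the Dockerfile above; prisma migrate deploy applies already-generated migrations and is the command Prisma recommends for non-interactive environments, unlike migrate dev):
#!/bin/sh
# entrypoint.sh -- apply pending migrations at container start, then start the app
set -e
npx prisma migrate deploy
exec node dist/main
You would drop the RUN npx prisma migrate dev --name init line from the Dockerfile and point the image's ENTRYPOINT (or CMD) at this script, so the migration runs at container start, when the postgres service is reachable on the Compose network.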

Docker compose: Error: role "hleb" does not exist

I kindly ask for your help with Docker and Postgres.
I have a local Postgres database and a NestJS project.
I killed the process listening on port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
  main:
    container_name: main
    build:
      context: .
    env_file:
      - .env
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - 4000:4000
      - 9229:9229
    command: yarn start:dev
    depends_on:
      - postgres
    restart: always
  postgres:
    container_name: postgres
    image: postgres:12
    env_file:
      - .env
    environment:
      PG_DATA: /var/lib/postgresql/data
      POSTGRES_HOST_AUTH_METHOD: 'trust'
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: always
volumes:
  pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that will help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it defaults to postgres.
Your .env file used by Docker Compose does not contain the Docker-specific environment variables.
So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. You will have to re-create the volume (delete it) to make this work if the data directory already exists.
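For example (this deletes the existing database data, so only do it on a development setup):
docker-compose down -v    # stop the stack and remove its named volumes, including pgdata
docker-compose up --build # with an empty data dir, Postgres re-runs its init and creates the "hleb" role and "artwine" database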

Unable to communicate with the postgreSQL database using Docker

I don't know why I am not able to fetch data from my PostgreSQL database. I always get the same error when I run docker logs web:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?
could not connect to server: Address not available
I tried a lot of things, but without effect...
Here is my docker-compose:
version: '3.7'
services:
  web:
    container_name: web
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/usr/src/web/
    ports:
      - 8000:8000
      - 3000:3000
      - 5432:5432
    stdin_open: true
    depends_on:
      - db
  db:
    container_name: db
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=mydb
      - POSTGRES_HOST=localhost
volumes:
  postgres_data:
And this is the Dockerfile:
# pull official base image
FROM python:3.8.3-alpine
# set work directory
WORKDIR /usr/src/web
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
RUN apk add zlib-dev jpeg-dev gcc musl-dev
# install nodejs
RUN apk add --update nodejs nodejs-npm
# copy project
ADD . .
# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
In my settings.py I have this for the database:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',
        'USER': 'admin',
        'PASSWORD': 'pass',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
Could you help me please?
I have absolutely no idea how to solve this; I tried a lot of things but without any effect...
For instance, I tried to change this parameter in settings:
'HOST': 'localhost' to 'HOST': 'db'
But I get this error when I run docker logs web:
django.db.utils.OperationalError: FATAL: password authentication failed for user "admin"
Thank you very much
Have you tried to EXPOSE port 5432 on the database container?
Something like:
db:
  container_name: db
  image: postgres:12.0-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
  environment:
    - POSTGRES_HOST_AUTH_METHOD=trust
    - POSTGRES_USER=admin
    - POSTGRES_PASSWORD=pass
    - POSTGRES_DB=mydb
    - POSTGRES_HOST=localhost
  expose:
    - "5432"
Or, inside the Dockerfile, EXPOSE 5432/tcp.
Here is a link where you can check out the difference between ports: and expose: in Compose:
What is the difference between docker-compose ports vs expose
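As a rough summary of that difference: ports: publishes the container port on the host, while expose: only marks the port as available for inter-container use and does not publish it on the host. Side by side:
ports:
  - "5432:5432"   # host:container mapping; reachable from the host machine
expose:
  - "5432"        # visible to other containers on the same network; not published to the host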

How to create postgres database and run migration when docker-compose up

I'm setting up a simple backend that performs CRUD actions against a Postgres database, and I want the database creation and migration to happen automatically when docker-compose up runs.
I have already tried adding the code below to the Dockerfile or to entrypoint.sh, but none of it works.
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
createdb db:migrate
This code works if run separately after Docker is fully up.
I have already tried adding - ./db-init:/docker-entrypoint-initdb.d to the volumes, but that didn't work either.
This is the Dockerfile
FROM node:10.12.0
# Create app directory
RUN mkdir -p /restify-pg
WORKDIR /restify-pg
EXPOSE 1337
ENTRYPOINT [ "./entrypoint.sh" ]
This is my docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/psotgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
entrypoint.sh (in here I get createdb: command not found)
#!/bin/bash
cd app
createdb --host=localhost -p 5432 --username=postgres --no-password pg_development
sequelize db:migrate
npm install
npm run dev
I expect that when I run Docker, the migration and the DB creation will happen.
entrypoint.sh (in here I get createdb: command not found)
Running createdb in the nodejs container will not work because it is a postgres-specific command and it is not installed by default in the nodejs image.
If you specify the POSTGRES_DB: pg_development env var on the postgres container, the database will be created automatically when the container starts. So there is no need to run createdb in the entrypoint.sh that is mounted into the nodejs container.
In order to make sequelize db:migrate work you need to:
add sequelize-cli to dependencies in package.json
run npm install so it gets installed
run npx sequelize db:migrate
Here is a proposal:
# docker-compose.yml
version: '3'
services:
  db:
    image: "postgres:11.2"
    ports:
      - "5432:5432"
    volumes:
      - ./pgData:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD:
      POSTGRES_DB: pg_development
  app:
    working_dir: /restify-pg
    entrypoint: ["/bin/bash", "./entrypoint.sh"]
    image: node:10.12.0
    ports:
      - "3000:3000"
    volumes:
      - .:/restify-pg
    environment:
      DB_HOST: db
# package.json
{
  ...
  "dependencies": {
    ...
    "pg": "^7.9.0",
    "pg-hstore": "^2.3.2",
    "sequelize": "^5.2.9",
    "sequelize-cli": "^5.4.0"
  }
}
# entrypoint.sh
npm install
npx sequelize db:migrate
npm run dev
If you can run your migrations from nodejs instead of docker, then consider this solution instead.

Knex Migration with Docker Compose Psql

I have a problem migrating using Knex.js inside my docker-compose container.
The problem is that npm run db (knex migrate:rollback && knex migrate:latest && knex seed:run) runs right before the database is even created. Is there any way to say that I would only like to run npm run db after the database has been created?
NOTE: if I run the npm commands in the Docker terminal after everything has been built, it all works fine, just FYI.
here is my docker-compose.yml
version: '3.6'
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: npm run db
    working_dir: /user/src/server
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
  # PostgreSQL database
  postgres:
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: interapp
      POSTGRES_HOST: postgres
    image: postgres
    ports:
      - "5432:5432"
and here is my Dockerfile
FROM node:10.14.0
WORKDIR /user/src/server
COPY ./ ./
RUN npm install
CMD ["/bin/bash"]
In the docker-compose.yml file, use sh (bash) to give your command a contained environment context to run in, i.e. sh -c 'npm run db'.
Secondly, use the depends_on step to wait for the database to start.
Your docker-compose file would now be:
services:
  # Backend api
  server:
    container_name: server
    build: ./
    command: sh -c 'npm run db'
    working_dir: /user/src/server
    depends_on:
      - postgres
    ports:
      - "5000:5000"
    volumes:
      - ./:/user/src/server
    environment:
      POSTGRES_URI: postgres://test:test@192.168.99.100:5432/interapp
    links:
      - postgres
Simply adding depends_on to the server service should do the trick here.
services:
  server:
    depends_on:
      - postgres
    ...
This will cause docker-compose to start the postgres container before the server container. It will not, however, wait for postgres to be ready. In this case that shouldn't be a problem, because postgres starts really quickly.
If you want something more solid, or depends_on doesn't do the trick, you can add an entrypoint wrapper script to your container. See https://docs.docker.com/compose/startup-order/, where you can read more about it. There are also links to tools there, so you don't have to write your own script from scratch.
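A minimal wrapper sketch along those lines, assuming pg_isready (part of the postgres client tools; not in the node image by default, so it would have to be installed) and the credentials from the compose file above:
#!/bin/sh
# wait-for-postgres.sh -- block until Postgres accepts connections, then run the migrations
set -e
until pg_isready -h postgres -p 5432 -U test; do
  echo "waiting for postgres..."
  sleep 1
done
npm run db
You would then point the server service's command at this script instead of running npm run db directly.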