Docker NestJS with Prisma can't connect to PostgreSQL

I have a docker-compose file, but the services can't connect to the db, or to each other.
I tried several things, but it keeps complaining that it cannot connect to the database.
Changing db to localhost doesn't work either.
I just ran: docker-compose up
The error I get is:
cinema_1 |
cinema_1 | > cinema#0.0.1 start:migrate:prod
cinema_1 | > prisma migrate deploy && npm run start:prod
cinema_1 |
quotes_1 | Prisma schema loaded from prisma/schema.prisma
quotes_1 | Datasource "db": PostgreSQL database "quotes", schema "public" at "db:5432"
docker-compose file
version: '3.8'
services:
  gateway:
    build: ./api-gateway
    restart: always
    env_file:
      - .env
    ports:
      - "3000:3000"
  quotes:
    build: ./quotes
    restart: always
    environment:
      - DATABASE_URL=postgres://myuser:mypassword@db:5432/quotes
    ports:
      - 8895:8895
  shows:
    build: ./shows
    restart: always
    environment:
      - DATABASE_URL=postgres://myuser:mypassword@db:5432/shows
    ports:
      - 8893:8893
  db:
    image: postgres:13.5
    restart: always
    environment:
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - integration-postgres:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  integration-postgres:

Could you add the schema.prisma file to the thread?
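(The schema.prisma was not posted in the thread. For a NestJS + Prisma setup like this it would typically look roughly like the sketch below, with the datasource reading the URL that docker-compose puts in DATABASE_URL; the generator block shown is the usual default and is an assumption here.)
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}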

Related

Accessing the same database in different docker containers

I have been using docker for a postgres database as I work on my project. I used this docker-compose file to spin it up
version: '3'
services:
  postgres:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4: {}
I now want to containerise my back and front ends together with the database. I made this docker-compose file to do so
version: '3.8'
services:
  frontend:
    build: ./4x4
    ports:
      - "3000:3000"
  backend:
    build: ./server
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4:
    external: true
However, when I execute the command docker-compose up on the second file, I do not access the same data as the first one -- the database is blank. If I spin up the first one again, I return to the old data (i.e. nothing is overwritten).
I presumed that the same Postgres database would be connected to.
I would appreciate any elucidation.
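(Worth noting: Compose prefixes named volumes with the project name, which defaults to the directory name, while a volume marked external: true is looked up by its literal name and is never created for you. A minimal sketch of pinning the volume to an explicit name; myproject is a placeholder, check docker volume ls for the real name:)
volumes:
  pgdata-4x4:
    external: true
    # 'myproject_pgdata-4x4' is hypothetical; use the name shown by `docker volume ls`
    name: myproject_pgdata-4x4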

Docker: PostgreSQL - An error occurred using the connection to database

I am trying to run a demo ASP.NET Core app with PostgreSQL in Docker, but the database connection is failing.
fail: Microsoft.EntityFrameworkCore.Database.Connection[20004]
An error occurred using the connection to database 'ToDoApp' on server
'tcp://postgresserver:5432'.
fail: Microsoft.EntityFrameworkCore.Query[10100]
An exception occurred while iterating over the results of a query for
context type 'Postgress_ASPNETCore.Db.ToDoContext'.
Npgsql.PostgresException (0x80004005): 3D000: database "ToDoApp" does
not exist
appsettings.json
"ConnectionStrings": {
"ToDoDb": "User ID=postgres;Password=password;Host=postgresserver;Port=5432;Database=ToDoApp;Pooling=true;"
}
docker-compose.yml
version: '2'
services:
  aspnetcore:
    build:
      context: .
      dockerfile: Dockerfile
    links:
      - postgresserver
    depends_on:
      - "postgresserver"
    ports:
      - "5000:5000"
    networks:
      - todoapp-network
  postgresserver:
    image: postgres
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - todoapp-network
networks:
  todoapp-network:
    driver: bridge
volumes:
  pgdata:
I am unable to figure out the issue. Please help!
Thanks in advance!
The problem is that the ToDoApp database is not created on startup.
To create it, you can add it as an environment variable in your docker-compose.yml:
version: '2'
services:
  ...
  postgresserver:
    image: postgres
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: ToDoApp
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - todoapp-network
...
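Note that the official postgres image only creates POSTGRES_DB when it initializes an empty data directory, so if the pgdata volume already holds data the new variable has no effect until the volume is recreated. A minimal sketch, assuming there is nothing in pgdata you need to keep:
docker-compose down -v    # -v also removes the named pgdata volume
docker-compose up --build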

Airflow via docker-compose keeps trying to access SQLite although Postgres is configured

I am trying to set up a Dockerized Airflow instance, but whatever I do (so far..) it keeps trying to access some sqlite3 database, and I do not know where that instruction comes from. I point to the Postgres instance everywhere (deemed) possible through AIRFLOW__CORE__SQL_ALCHEMY_CONN, and even AIRFLOW_CONN_METADATA_DB.
A typical error message when starting up looks like:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: job
Full docker-compose.yml:
version: '3'
x-airflow-common:
  &airflow-common
  image: apache/airflow:2.0.0
  environment:
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW_CONN_METADATA_DB=postgres+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW__CORE__FERNET_KEY=FB0o_zt4e3Ziq3LdUUO7F2Z95cvFFx16hU8jTeR1ASM=
    - AIRFLOW__CORE__LOAD_EXAMPLES=True
    - AIRFLOW__CORE__LOGGING_LEVEL=INFO
  volumes:
    - /home/x/docker/airflow/dags:/opt/airflow/dags
    - /home/x/docker/airflow/airflow-data/logs:/opt/airflow/logs
    - /home/x/docker/airflow/airflow-data/plugins:/opt/airflow/plugins
    - /home/x/docker/airflow/airflow-data/airflow.cfg:/opt/airlfow/airflow.cfg
  depends_on:
    - db
services:
  db:
    image: postgres:12
    #image: postgres:12.1-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=airflow
      - POSTGRES_PORT=9501
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 9501:9501
    command:
      - -p 9501
  airflow-init:
    << : *airflow-common
    container_name: airflow_init
    entrypoint: /bin/bash
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    command:
      - -c
      - airflow users list || ( airflow db init &&
        airflow users create
        --role Admin
        --username airflow
        --password airflow
        --email airflow@airflow.com
        --firstname airflow
        --lastname airflow )
    restart: on-failure
  airflow-webserver:
    << : *airflow-common
    command: airflow webserver
    ports:
      - 9500:8080
    container_name: airflow_webserver
    environment:
      - AIRFLOW_USERNAME=airflow
      - AIRFLOW_PASSWORD=airflow
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
  airflow-scheduler:
    << : *airflow-common
    command: airflow scheduler
    container_name: airflow_scheduler
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
Solved by following this docker-compose.yaml file:
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
Instead of trying to tweak the ports of Postgres (and Redis), I used the "expose" option, which avoids conflicts with other containers on the same host.
So not:
environment:
  POSTGRES_PORT: 9501
ports:
  - 9501:9501
But run it (internally) with the default port and do not try to publish it externally:
expose:
  - 5432
I am still not sure what the problem was with using the higher ports. It may be some default fallback to SQLite when the configured DB cannot be reached for some reason.
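For reference, a minimal sketch of the db service and the matching connection string when the default port is kept and only exposed on the internal network (adapted from the file above; not the complete official compose file):
services:
  db:
    image: postgres:12
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=airflow
    expose:
      - 5432
# and in the Airflow services:
#   - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:5432/airflow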

Docker: wait for the Postgres container

Hello, I have the following error in my Node project:
(node:51) UnhandledPromiseRejectionWarning: Error: getaddrinfo
ENOTFOUND ${DB_HOST}
I'm thinking the problem is that my Postgres is not yet started when my project starts, and I can't come up with a solution for starting my container only after Postgres is ready. I read something about dockerize, but I can't see how to apply it.
my docker file:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
my docker compose:
version: '3.7'
services:
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - ci-postgres
    networks:
      - ci-network
  ci-postgres:
    image: postgres:12
    container_name: ci-postgres
    ports:
      - '${DB_PORT}:5432'
    environment:
      - ALLOW_EMPTY_PASSWORD=no
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - ci-postgres-data:/data
    networks:
      - ci-network
volumes:
  ci-postgres-data:
networks:
  ci-network:
    driver: bridge
and this is my .env
SERVER_PORT=4000
DB_HOST=ci-postgres
DB_PORT=5432
DB_USER=spirit
DB_PASS=api
DB_NAME=emasa_ci
You can reference the docker-compose.yml below, in which depends_on, healthcheck, and links are added, since the web service depends on the db service.
Reference:
Postgresql Container is not running in docker-compose file - Why is this?
version: "3"
services:
webapp:
build: .
container_name: webapp
ports:
- "5000:5000"
links:
- postgres
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:11-alpine
container_name: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=tmp
- POSTGRES_USER=tmp
- POSTGRES_PASSWORD=tmp_password
volumes: # Persist the db data
- database-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
database-data:
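Applied to the services from the question, a sketch could look like the following (service names and variables are taken from the question's files; the $$ keeps Compose from interpolating the variable so the shell inside the ci-postgres container expands it; the condition form of depends_on needs a Compose version that supports it):
services:
  ci-api:
    build: .
    depends_on:
      ci-postgres:
        condition: service_healthy
    networks:
      - ci-network
  ci-postgres:
    image: postgres:12
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
    healthcheck:
      # runs inside the container, where POSTGRES_USER and POSTGRES_DB are set
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - ci-network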

How to connect to localhost postgres database from docker container?

I've configured my project for Docker. I have a database that was used in the non-Docker period, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the Docker container simply doesn't see the database). If this is nonsense, please let me know; maybe I should migrate my server DB into a container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p@ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use Postgres in Docker as the server DB, as you suggested.
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host IP.
Regards
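On recent Docker versions (20.10+), another option is to let Docker resolve the host address instead of hard-coding the IP; a sketch of the same idea:
services:
  web:
    build: .
    extra_hosts:
      # host-gateway resolves to the host's gateway IP (Docker 20.10+)
      - "db:host-gateway"
Your application still connects to db:5432, and the Postgres instance on the host must accept connections from the Docker bridge network.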