flyway migrate fails with "The connection attempt failed" in docker-compose - postgresql

I have these entries in my docker-compose.yml
flyway:
  container_name: flyway
  image: flyway/flyway
  command: -url=jdbc:postgresql://postgresql:5432/db_name -schemas=public -user=username -password=password -connectRetries=60 migrate -X
  volumes:
    - ./config/src/main/sql:/flyway/sql
  depends_on:
    - postgresql
postgresql:
  container_name: postgresql
  image: postgres:10.1-alpine
  command:
    - postgres
    - '-clog_connections=yes'
    - '-clog_statement=all'
  env_file:
    - ./dev.env
  networks:
    - internal
  ports:
    - '5439:5432'
  volumes:
    - volume-postgres:/var/lib/postgresql/data
When I run docker-compose up --build flyway, I get this error:
postgresql is up-to-date
Recreating flyway ... done
Attaching to flyway
flyway | WARNING: Connection error: The connection attempt failed.
flyway | (Caused by postgresql)
flyway | Retrying in 1 sec...
flyway | WARNING: Connection error: The connection attempt failed.
flyway | (Caused by postgresql)
flyway | Retrying in 2 sec...
flyway | WARNING: Connection error: The connection attempt failed.
flyway | (Caused by postgresql)
How can I debug this? In some answers I see DEBUG lines printed in the output.
What is causing it to error out?

1.
The debug logs should be produced, since you have the -X flag in the flyway command. It looks like docker-compose might be attaching to the flyway container after the initial debug output has already been logged. You should be able to see the full logs for the flyway container by running:
docker-compose logs flyway
2.
Flyway can't connect to Postgres because they're on different networks. You've configured postgresql to be on the internal network, but flyway does not have a network configured, so it will be on the default network.
There are a few ways you could fix this:
Remove the networks property from postgresql so both services are on the default network, but this isn't ideal if you're relying on other services being able to reach it on internal.
Add flyway to the internal network (see the sketch after this list).
Since you've mapped port 5432 on postgresql to 5439 on the host, you could add flyway to the host network (by giving it the property network_mode: "host") and then use the address localhost:5439 to reach postgresql.
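A minimal sketch of the second option (adding flyway to internal), assuming the internal network is, or can be, declared at the top level of the compose file:

flyway:
  container_name: flyway
  image: flyway/flyway
  command: -url=jdbc:postgresql://postgresql:5432/db_name -schemas=public -user=username -password=password -connectRetries=60 migrate -X
  volumes:
    - ./config/src/main/sql:/flyway/sql
  depends_on:
    - postgresql
  networks:
    - internal

networks:
  internal:

With both services on internal, the hostname postgresql resolves to the database container, and the existing -connectRetries=60 already covers the time Postgres needs to become ready.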

Related

Can't connect to the Postgres Docker container using SqlAlchemy

I have a Postgres Docker Container running locally, and the docker compose code for it looks like this
version: '3.9'
services:
  db:
    image: "postgres"
    container_name: db
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=dbname
The database is started using the docker compose run db command
I then find the IP address of the container once it's running, which is "192.168.240.2"
When I try to connect to the database with SQLAlchemy like the following in a Python program (this is on the same computer but outside of the container):
import sqlalchemy
engine = sqlalchemy.create_engine('postgresql://postgres:password@192.168.240.2:5432/dbname')
engine.connect()
It shows me this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "192.168.240.2", port 5432 failed: Operation timed out
Is the server running on that host and accepting TCP/IP connections?
Does anyone know what the problem is here? Thanks!
I tried searching for the error message and changed the input to create_engine() in various ways suggested by other posts, but I still get the same problem. I still imagine something is off with the connection string?
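For comparison, a minimal sketch of connecting from the host through the published port instead of the container IP; this assumes a Docker Desktop style setup where the container network is not routable from the host, while the "5432:5432" mapping above publishes the port on localhost:

import sqlalchemy

# Use the port published in the compose file ("5432:5432"), reachable from the
# host as localhost:5432, rather than the container's internal IP address.
engine = sqlalchemy.create_engine(
    "postgresql://postgres:password@localhost:5432/dbname"
)

with engine.connect() as conn:
    # simple connectivity check
    print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())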

Error: P1001: Can't reach database server at `localhost`:`5432`

I'm having a problem when running the npx prisma migrate dev command. Docker Desktop tells me that the database is running correctly on port 5432, but I still can't see the problem.
I tried adding connect_timeout=300 to the connection string and tried many versions of Postgres and Docker, but I can't get it to work.
I'm including a link to the repo and photos so you can see the details of the code.
I would greatly appreciate your help, since I have been stuck on this for a long time.
Repo: https://github.com/gabrielmcreynolds/prisma-vs-typeorm/tree/master/prisma-project
Docker-compose.yml
version: "3.1"
services:
postgres:
image: postgres
container_name: postgresprisma
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=santino2002
ports:
- "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
volumes:
postgres:
Error:
Error: P1001: Can't reach database server at localhost:5432
Please make sure your database server is running at localhost:5432.
docker ps shows this:
It looks like the application and the database are running in two separate containers. In that case, connecting to localhost:5432 from the application container will try to connect to port 5432 inside that container, not on the Docker host's localhost.
To connect to the database from the application container, use postgres:5432 (if they are on the same network) or <dockerhost>:5432.
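As an illustration, a hedged sketch of what the DATABASE_URL in the Prisma .env could look like for each case; the database name postgres is an assumption, since the compose file does not set POSTGRES_DB:

# From another container on the same Docker network (service name as host)
DATABASE_URL="postgresql://postgres:santino2002@postgres:5432/postgres"

# From the Docker host, through the published port
DATABASE_URL="postgresql://postgres:santino2002@localhost:5432/postgres"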
Your docker ps output shows that your postgres container has no ports published to your local network.
It should look something similar to this in the ports column:
0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
But yours is just 5432/tcp.
You need to publish ports for your postgres container.
The docker-compose.yml file you posted in the question is correct. You probably started the postgres container without ports first and then changed your docker-compose.yml file to include them, so you just need to recreate the container now.
Use docker compose down && docker compose up --build -d to do that.

Not able to connect to Postgres container from another container

This question has been asked many times here, here and here but these solutions are not working for me.
I have created a Postgres and an AppServer container with this docker-compose.yml file:
version: "3.7"
services:
db:
image: postgres:alpine
container_name: db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_DB: mydb
POSTGRES_INITDB_ARGS: '-A md5'
volumes:
- ./pgdata:/var/lib/postgressql/data
ports:
- "5432:5432"
api:
build: api
container_name: api
volumes:
- ./database/migrations:/migrations
ports:
- "8080:8080"
links:
- db
depends_on:
- db
After running this, I can successfully do
docker exec -it db psql -U user mydb
and connect to Postgres. I can also log into shells in both containers with
docker exec -it api bash
docker exec -it db bash
From inside bash in the api container I can ping db without any problem.
However from my api container, I cannot establish a JDBC connection to the Postgres database.
api | Flyway Community Edition 7.3.2 by Redgate
api | ERROR:
api | Unable to obtain connection from database (jdbc:postgresql://db:5432/mydb) for user 'user': Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
api | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
api | SQL State : 08001
api | Error Code : 0
api | Message : Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
api |
api | Caused by: org.postgresql.util.PSQLException: Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
api | Caused by: java.net.ConnectException: Connection refused (Connection refused)
Why am I getting connection refused when I can connect via psql? This is my flyway conf
flyway.url=jdbc:postgresql://db:5432/mydb
flyway.user=user
flyway.password=password
flyway.locations=filesystem:/migrations
Edit: If I wait for some time and then execute flyway migrate from docker exec -it api bash, everything works fine. I think what is happening above is that my flyway migrate command runs before the database is ready.
Why is this happening? I have specified the dependency, so my api container should start only when the database has fully started, but it seems that is not the case.
Specifying the database container as a dependency doesn't guarantee that it will be ready before your other services/containers. It only guarantees that it will start before your other services.
One way to get around this is to implement retry attempts in your API application when it fails to connect to the database during startup.
Here is a link to an article that uses a shell script to wait for a service to be ready.
IMO your application should be smart enough to retry a few times when it cannot establish a database connection. It will make it more robust anyway.
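Two hedged sketches of that idea, assuming your Flyway and Compose versions support them (Flyway's connectRetries setting, and the healthcheck/condition form of depends_on in the newer Compose specification):

# flyway.conf: let Flyway itself retry the initial connection
flyway.url=jdbc:postgresql://db:5432/mydb
flyway.user=user
flyway.password=password
flyway.locations=filesystem:/migrations
flyway.connectRetries=60

Alternatively (or additionally), gate the api container on a Postgres healthcheck:

services:
  db:
    image: postgres:alpine
    healthcheck:
      # pg_isready ships with the postgres image
      test: ["CMD-SHELL", "pg_isready -U user -d mydb"]
      interval: 5s
      timeout: 3s
      retries: 10
  api:
    build: api
    depends_on:
      db:
        condition: service_healthy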

What am I doing wrong in docker-compose for .netcore and postgres?

I have been banging my head against this issue for a while and can't find what the problem might be. I'm running Docker Desktop on Windows 10. I have one .NET Core 3.1 API that connects to Postgres. Both of these are run in containers.
Everything seems to work except the connection to the database. I have looked at my docker-compose.yml a million times and can't come up with any other idea.
Here is my connection string:
"Server=postgres;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;"
Here is docker-compose.yml file:
version: '3'
services:
  identityserver:
    depends_on:
      - "postgres"
    container_name: identityserver
    build:
      context: ./my_project/
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT='Development'
    ports:
      - "5000:80"
  postgres:
    image: "postgres"
    container_name: "postgres"
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345678
      - POSTGRES_DB=IdentityManager
    expose:
      - "5432"
Everything builds, but the connection to the database fails:
Unhandled exception. Npgsql.NpgsqlException (0x80004005): Exception while connecting identityserver
---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (99): Cannot assign requested address [::1]:5432
The weirdest thing is that when I run postgres alone with this same docker-compose.yml configuration, and run the application outside of a container with a slightly different connection string:
"Server=127.0.0.1;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;"
I am able to connect to the database.
I tried cleaning everything with docker system prune -a, tried restarting Docker and restarting the PC, but to no avail. Can anyone help?
Finally, I was able to resolve my own problem, and it wasn't in the docker-compose.yml file at all. Somewhere in the application code, the connection string was being changed to use localhost as the host instead of postgres.
After changing it back to postgres, everything was fine.
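One way to make this harder to get wrong (a sketch, not part of the original fix) is to inject the connection string through Compose using ASP.NET Core's double-underscore environment-variable convention; the key ConnectionStrings__DefaultConnection is an assumption about how the app reads its configuration:

identityserver:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    # Overrides ConnectionStrings:DefaultConnection from appsettings.json;
    # the key name is hypothetical - match whatever the app actually reads.
    - ConnectionStrings__DefaultConnection=Server=postgres;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;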
Try adding:
links:
  - postgres
Maybe it will help.

How to recreate Docker container?

I'm new to Docker and I'm using docker-compose. For some reason my postgres container is now broken.
I'm trying this command: docker-compose up --no-deps --build db
And it returns this:
MacBook-Pro-de-Javier:goxo.api javier$ docker-compose up --no-deps --build db
Recreating testapi_db_1
Attaching to testapi_db_1
db_1 | LOG: database system was shut down at 2017-04-20 17:19:05 UTC
db_1 | LOG: MultiXact member wraparound protections are now enabled
db_1 | LOG: database system is ready to accept connections
db_1 | LOG: autovacuum launcher started
Whenever I try to connect (with the same connection arguments as before) I get this:
db_1 | FATAL: database "test" does not exist
This is part of my docker-compose.yml
version: "3"
services:
db:
image: postgres
ports:
- "3700:5432"
environment:
POSTGRES_HOST: "127.0.0.1"
POSTGRES_DB: "test"
POSTGRES_USER: "postgres"
POSTGRES_PASSWORD: "postgres1"
tmpfs:
- /tmp
- /var/run/postgresql
volumes:
- db:/var/lib/postgresql/data
- ./config/postgres-initdb.sh:/docker-entrypoint-initdb.d/initdb.sh
Any ideas on how I can recreate the Docker image so it is how it was before? It was working as it was when it was first created.
Thanks
EDIT 1: If I run docker-compose build && docker-compose up
Terminal throws this:
db uses an image, skipping
EDIT 2: This command does not create the database again either:
docker-compose up --force-recreate --abort-on-container-exit --build db
Have you tried to rebuild your single postgres container?
docker build -t <postgrescontainer> .
or with docker-compose:
docker-compose up --build
to rebuild the images instead of reusing the old ones.
You can have a look at the images on your system with
docker images
which should show your image, and then
docker history --no-trunc your_image
should show the commands used to create the image.
This may be insufficient, because when you see something like
ADD * /opt
you do not know exactly which files were copied, or what those files contained.
There is also dockerfile-from-image:
https://github.com/CenturyLinkLabs/dockerfile-from-image
which seems to have had a bug recently (I do not know if it is fixed).