Can't connect to the Postgres Docker container using SqlAlchemy - postgresql

I have a Postgres Docker container running locally, and the docker-compose file for it looks like this:
version: '3.9'
services:
  db:
    image: "postgres"
    container_name: db
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=dbname
The database is started using the docker compose run db command
I then find the IP address of the container once it's running, which is "192.168.240.2"
When I try to connect to the database with SQLAlchemy from a Python program (on the same computer, but outside the container), like this:
import sqlalchemy
engine = sqlalchemy.create_engine('postgresql://postgres:password@192.168.240.2:5432/dbname')
engine.connect()
It shows me this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "192.168.240.2", port 5432 failed: Operation timed out
Is the server running on that host and accepting TCP/IP connections?
Does anyone know what the problem is here? Thanks!
I tried searching for the error message and changed the input to create_engine() in various ways based on other posts, but I still get the same problem. I still imagine something's off with the connection string?
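For what it's worth, since the compose file publishes 5432:5432 to the host, a connection from outside the container usually goes to localhost rather than the container's internal bridge IP, which is often unreachable from the host (especially on Docker Desktop). A minimal sketch, assuming the credentials from the compose file above:
import sqlalchemy

# Connect through the published port on the host rather than the container's
# internal bridge IP; note the '@' between the password and the host.
engine = sqlalchemy.create_engine(
    'postgresql+psycopg2://postgres:password@localhost:5432/dbname'
)
with engine.connect() as conn:
    print(conn.execute(sqlalchemy.text('SELECT 1')).scalar())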

Related

Error: P1001: Can't reach database server at `localhost`:`5432`

I'm having a problem when running the npx prisma migrate dev command. Docker Desktop tells me that the database is running correctly on port 5432, but I still can't find the problem.
I tried adding connect_timeout=300 to the connection string and tried many versions of Postgres and Docker, but I can't get it to work.
I'm leaving the link to the repo and screenshots so you can see the details of the code.
I would greatly appreciate your help, since I have been stuck on this for a long time.
Repo: https://github.com/gabrielmcreynolds/prisma-vs-typeorm/tree/master/prisma-project
Docker-compose.yml
version: "3.1"
services:
postgres:
image: postgres
container_name: postgresprisma
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=santino2002
ports:
- "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
volumes:
postgres:
Error:
Error: P1001: Can't reach database server at localhost:5432
Please make sure your database server is running at localhost:5432.
docker ps shows this:
It looks like the application and the database are running in two separate containers. In that case, connecting to localhost:5432 from the application container will try to connect to port 5432 inside that container, not on the Docker host's localhost.
To connect to the database from the application container, use postgres:5432 (if they are on the same network) or <dockerhost>:5432.
Your docker ps output shows that your postgres container has no ports published to your local network.
The ports column should look something similar to this:
0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
But yours is just 5432/tcp
You need to publish ports for your postgres container.
The docker-compose.yml file you posted in the question is correct. You probably started the postgres container without ports first and then added them to your docker-compose.yml, so you just need to recreate the container now.
Use docker compose down && docker compose up --build -d to do that.
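If you want to double-check from the host whether the port is actually published before involving Prisma at all, here is a quick sketch (assuming the host port is 5432):
import socket

# TCP-level check of the published Postgres port on the host; it says nothing
# about credentials, only whether something is listening there.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    result = sock.connect_ex(('127.0.0.1', 5432))
print('port open' if result == 0 else f'port closed (error {result})')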

Not able to connect to Postgres container from another container

This question has been asked many times here, here and here, but those solutions are not working for me.
I have created a Postgres and an AppServer container with this docker-compose.yml file:
version: "3.7"
services:
db:
image: postgres:alpine
container_name: db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_DB: mydb
POSTGRES_INITDB_ARGS: '-A md5'
volumes:
- ./pgdata:/var/lib/postgressql/data
ports:
- "5432:5432"
api:
build: api
container_name: api
volumes:
- ./database/migrations:/migrations
ports:
- "8080:8080"
links:
- db
depends_on:
- db
After running this, I can successfully do
docker exec -it db psql -U user mydb
and I connect to Postgres successfully. I can also successfully log into terminals of both containers with
docker exec -it api bash
docker exec -it db bash
From inside a bash shell in api I can ping db without any problem.
However from my api container, I cannot establish a JDBC connection to the Postgres database.
api | Flyway Community Edition 7.3.2 by Redgate
api | ERROR:
api | Unable to obtain connection from database (jdbc:postgresql://db:5432/mydb) for user 'user': Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
api | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
api | SQL State : 08001
api | Error Code : 0
api | Message : Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
api |
api | Caused by: org.postgresql.util.PSQLException: Connection to db:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
api | Caused by: java.net.ConnectException: Connection refused (Connection refused)
Why am I getting connection refused when I can connect via psql? This is my Flyway conf:
flyway.url=jdbc:postgresql://db:5432/mydb
flyway.user=user
flyway.password=password
flyway.locations=filesystem:/migrations
Edit: If I wait for some time and then execute flyway migrate from docker exec -it api bash, everything works fine. I think what is happening above is that my flyway migrate command runs before the database is ready.
Why is this happening? I have specified a dependency, so my api container should start only after the database has fully started, but it seems that is not the case.
Specifying the database container as a dependency doesn't guarantee that it will be ready before your other services/containers. It only guarantees that it will start before your other services.
One way to get around this is to implement retry attempts in your API application when it fails to connect to the database during startup.
Here is a link to an article that uses a shell script to wait for a service to be ready.
IMO your application should be smart enough to retry a few times when it cannot establish a database connection. It will make it more robust anyway.
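As an illustration of the retry idea (shown in Python with SQLAlchemy to match the main question, not the Flyway/Java setup from this post), a simple startup loop might look like this:
import time
import sqlalchemy
from sqlalchemy.exc import OperationalError

def wait_for_db(url, attempts=10, delay=2):
    """Retry the initial connection until the database accepts it."""
    engine = sqlalchemy.create_engine(url)
    for attempt in range(1, attempts + 1):
        try:
            with engine.connect():
                return engine
        except OperationalError:
            print(f'Database not ready (attempt {attempt}/{attempts}), retrying...')
            time.sleep(delay)
    raise RuntimeError('Database never became reachable')

# Hypothetical URL for illustration; use your own service name and credentials.
# engine = wait_for_db('postgresql+psycopg2://user:password@db:5432/mydb')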

What am I doing wrong in docker-compose for .netcore and postgres?

I have been banging my head against this issue for a while and can't figure out what the problem might be. I am running Docker Desktop on Windows 10. I have one .NET Core 3.1 API that connects to Postgres. Both of these are run in containers.
Everything seems to work except the connection to the database. I have looked at my docker-compose.yml a million times and can't come up with any other idea.
Here is my connection string:
"Server=postgres;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;"
Here is docker-compose.yml file:
version: '3'
services:
  identityserver:
    depends_on:
      - "postgres"
    container_name: identityserver
    build:
      context: ./my_project/
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT='Development'
    ports:
      - "5000:80"
  postgres:
    image: "postgres"
    container_name: "postgres"
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=12345678
      - POSTGRES_DB=IdentityManager
    expose:
      - "5432"
Everything builds, but the connection to the database fails:
Unhandled exception. Npgsql.NpgsqlException (0x80004005): Exception while connecting identityserver
---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (99): Cannot assign requested address [::1]:5432
The weirdest thing is that when I run postgres alone with this same docker-compose.yml configuration and run the application outside of a container with a slightly different connection string:
"Server=127.0.0.1;Port=5432;Database=IdentityManager;User Id=postgres;Password=12345678;"
I am able to connect to database.
I tried cleaning everything with docker system prune -a, restarting Docker, and restarting the PC, but to no avail. Can anyone help?
Finally, I was able to resolve my own problem, and it wasn't in the docker-compose.yml file at all. Somewhere in the application code, the connection string was changed to use localhost as the host instead of postgres.
After changing it back to postgres, everything was fine.
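One way to avoid this class of mistake is to keep the host out of the application code entirely and read it from configuration, so the same code uses postgres inside the Compose network and localhost outside it. A rough sketch of the idea in Python (DB_HOST is just a placeholder variable name):
import os
import sqlalchemy

# DB_HOST defaults to localhost for runs on the host; set DB_HOST=postgres
# in docker-compose for the containerized application.
db_host = os.environ.get('DB_HOST', 'localhost')
engine = sqlalchemy.create_engine(
    f'postgresql+psycopg2://postgres:12345678@{db_host}:5432/IdentityManager'
)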
Try adding:
  links:
    - postgres
Maybe it will help.

Using Docker: sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: password authentication failed for user "username"

Error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: password authentication failed for user "username"
Hello, I am trying to run a program locally using Docker and am getting the error in the title, even though it was working before.
I've tried reinstalling Docker, re-cloning the repo, and reinstalling PostgreSQL (the problem started when I installed it for the first time). From reading similar questions, I ensured that the passwords match. The password is 'password' for the Docker Postgres database, and I've tried changing it, but it still hasn't worked.
I'm using 'docker-compose up -d' and then running tests but I get the error in the title. I've tried running 'docker-compose down' and then redoing it, but I still get the error.
.env file:
FLASK_ENV=development
REDIS_URL=redis://localhost:6379
DATABASE_URL=postgresql+psycopg2://username:password@localhost:5432/programname
PROGRAM_API_APP_NAME=test-prod.compute.random.com
docker-compose.yml:
version: '3'
services:
  postgresql:
    image: postgres:10-alpine
    environment:
      - POSTGRES_DB=programname
      - POSTGRES_PASSWORD=password
      - POSTGRES_USER=username
    ports:
      - "5432"
  redis:
    image: redis:5.0.3
    ports:
      - "6379:6379"
(I don't have the port as 5432:5432 because it didn't work with that and I found an answer to remove the second 5432.)
My boss helped me with this. Apparently, when I installed Postgres, it created a postgres user with a process that was using port 5432, and even when we killed it, it automatically restarted. To solve this specific problem, we changed the docker-compose file to use port 5433 locally and 5432 in the container. I still have to find out how to get rid of the postgres user.
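With that remapping (ports: "5433:5432"), connections from the host go to port 5433 while anything inside the Compose network still uses 5432. A sketch of what the host-side connection would look like, assuming the credentials from the compose file above:
import sqlalchemy

# Host port 5433 is forwarded to the container's 5432 after the remap,
# so connections from outside Docker use 5433.
engine = sqlalchemy.create_engine(
    'postgresql+psycopg2://username:password@localhost:5433/programname'
)
engine.connect()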

How to connect to OpenMapTiles Docker Postgres DB

I am currently playing around with openmaptiles (https://github.com/openmaptiles/openmaptiles) and I'd like to figure out how to import my own data into the resulting mbtiles. But first I'd like to look at how the postgres database it is using is structured.
I just can't figure out how to connect to the postgres database using the GUI tool I have running locally.
I start postgres using the command provided on the help page:
docker-compose up -d postgres. Is it just not visible outside of the Docker container (I am also very new to Docker)?
And is there a way to make it visible to my local system?
docker-compose up -d postgres refers to this part of the docker-compose.yaml file:
services:
  postgres:
    image: "openmaptiles/postgis:2.9"
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - postgres_conn
    ports:
      - "5432"
    env_file: .env
  ...
As you can see in the ports: section, there is no container-to-host port mapping here. To access this postgres database from your host, try using "5432:5432". (Note that if this port is already in use on the host, you will have to pick an available one.)
For more information on the docker-compose file reference and ports, check the docs.
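Once the port is published as "5432:5432", a GUI tool or a quick script on the host can reach the database at localhost:5432. A minimal check in Python, assuming the credentials come from the project's .env file (the variable names and defaults below are placeholders, not openmaptiles' documented settings):
import os
import sqlalchemy

# PGUSER/PGPASSWORD/PGDATABASE are placeholder names here; the real values
# come from whatever the project's .env file defines.
user = os.environ.get('PGUSER', 'openmaptiles')
password = os.environ.get('PGPASSWORD', 'openmaptiles')
database = os.environ.get('PGDATABASE', 'openmaptiles')
engine = sqlalchemy.create_engine(
    f'postgresql+psycopg2://{user}:{password}@localhost:5432/{database}'
)
engine.connect()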