Cannot access Postgres instance running in Docker container from Pgadmin - postgresql

I am trying to connect to a Postgres instance running in a Docker container. In the docker-compose file, the postgres service looks like this:
flask-api-postgres:
  container_name: flask-api-postgres
  image: postgres:13.4-alpine
  env_file:
    - dev.env
  ports:
    - "5433:5433"
  networks:
    flask-network:
With docker inspect I get that the container has the address: 172.19.0.2.
The API works fine, but when trying to access the database from Pgadmin with the config shown in the image (user and password are correctly set), I get the shown error.
[Image: pgAdmin config]
I do not know how to access the postgres instance from pgadmin.

One approach: since pgAdmin is running on your host machine, connect to the Postgres container through its published port using 127.0.0.1 instead of 172.19.0.2.
Another way is to create a separate container for pgAdmin. In that case you can reach PostgreSQL by its container IP (for example 172.19.0.2). Add this to your docker-compose file:
pgadmin:
  image: dpage/pgadmin4
  depends_on:
    - flask-api-postgres
  ports:
    - "5050:80"
  environment:
    PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
    PGADMIN_DEFAULT_PASSWORD: admin
  restart: unless-stopped
  networks:
    flask-network:
Make sure both services are on the same network.
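If flask-network is not already declared at the top level of the compose file, it also needs an entry like this (a minimal sketch):
networks:
  flask-network:
    driver: bridge
With both containers on that user-defined network, the pgAdmin container can also reach the database by the service name flask-api-postgres instead of a hard-coded IP.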

Please check the port you are using. The default is 5432.
See experiment:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c4d92a623a6 postgres:latest "docker-entrypoint.s…" 14 minutes ago Up 14 minutes 5432/tcp, 0.0.0.0:5433->5433/tcp cannot-access-postgres-instance-running-in-docker-container-from-pgadmin-database-1
> docker exec -it 0c4d92a623a6 sh
# psql "host=127.0.0.1 port=5433"
psql: error: connection to server at "127.0.0.1", port 5433 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
# psql "host=127.0.0.1 port=5432"
psql: error: connection to server at "127.0.0.1", port 5432 failed: FATAL: role "root" does not exist
#
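Applied to the compose file from the question, the likely fix is to map host port 5433 to the container's default port 5432, where the server actually listens (a sketch of the relevant service; everything else stays the same):
flask-api-postgres:
  container_name: flask-api-postgres
  image: postgres:13.4-alpine
  env_file:
    - dev.env
  ports:
    - "5433:5432"   # host 5433 -> container 5432, the port Postgres listens on by default
  networks:
    flask-network:
pgAdmin on the host would then connect to 127.0.0.1, port 5433.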


How are these Docker setups different? One works and the other not

I'm running Postgres image in Docker on an M1 Mac with mapped ports "5432:5432". My app can connect to the DB from the host machine by calling localhost:5432. I'm now trying to run the app within Docker and I'm puzzled by the behavior I see.
This command works:
docker run --name api --add-host host.docker.internal:host-gateway -e DB_HOST=host.docker.internal -p 8000:8000
But when I try to replicate the same by putting the api within the docker-compose like this, it doesn't work:
services:
  postgres:
    image: postgres:14.2
    ports:
      - "5432:5432"
    networks:
      - my-network
  api:
    image: api
    environment:
      DB_HOST: host.docker.internal
    extra_hosts:
      - "host.docker.internal:host-gateway"
Connecting to the DB fails:
failed to connect to host=host.docker.internal user=postgres database=postgres: failed to receive message (unexpected EOF)
I've also tried putting the api container on the same my-network network as postgres, and changing the DB host to the DB container's service name:
api:
  image: api
  environment:
    DB_HOST: postgres
  networks:
    - my-network
but that gives me a different error:
failed to connect to host=postgres user=postgres database=postgres: dial error (dial tcp 192.168.192.2:5432: connect: connection refused)
The DB is listening at IPv4 address "0.0.0.0", port 5432 and IPv6 address "::", port 5432. Why would the docker run command work but the other two not work?
As David figured out the issue in the comments, I would like to suggest wait-for-it for such an issue, instead of waiting a bit and then starting manually again.
wait-for-it.sh is usually called from an entrypoint.sh, like this:
#!/bin/sh
# Wait for the cassandra db to be ready
./wait-for.sh cassandra:9042 --timeout=0
--timeout=0 will keep waiting until Cassandra is up and running.
It can also be used directly in docker-compose.yaml, as in the following example:
version: "2"
services:
  web:
    build: .
    ports:
      - "80:8000"
    depends_on:
      - "cassandra"
    command: ["./wait-for-it.sh", "cassandra:9042"]
  cassandra:
    image: cassandra
For more information, you can check Control startup and shutdown order in Compose.
wait-for-it on GitHub
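Adapted to the Postgres setup from this question, it could look roughly like this (assuming wait-for-it.sh is copied into the api image; the command after -- is a placeholder for your real start command):
api:
  image: api
  depends_on:
    - postgres
  networks:
    - my-network
  environment:
    DB_HOST: postgres
  command: ["./wait-for-it.sh", "postgres:5432", "--", "/your/start/command"]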

weblate connect to aws rds postgres

I'm trying to connect Weblate to an external RDS Postgres database.
I'm using a docker compose file that runs the Weblate container. To this container I add the environment variables to connect to RDS Postgres.
The Weblate container doesn't connect to RDS Postgres and gives me this error:
psql: error: connection to server at "XXXX.rds.amazonaws.com", port 5432 failed: FATAL: password authentication failed for user "postgres"
but if I try to connect to RDS Postgres from inside the Weblate container via the CLI, it works.
docker compose file:
version: '3'
services:
  weblate:
    image: weblate/weblate
    tmpfs:
      - /app/cache
    volumes:
      - weblate-data:/app/data
    env_file:
      - ./environment
    restart: always
    ports:
      - 80:8080
    environment:
      POSTGRES_PASSWORD: XXXX
      POSTGRES_USER: myuser
      POSTGRES_DATABASE: mydb
      POSTGRES_HOST: XXX.rds.amazonaws.com
      POSTGRES_PORT: 5432
It tries to connect as the postgres user while your configuration states myuser. Maybe the ./environment file overrides that?
I found the problem.
The problem was the $ character inside the password.
Maybe the library used to connect to Postgres has a bug or simply does not allow $ in the password string.
When I removed that character, it worked.
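If the $ really was the culprit, a likely explanation is docker-compose's variable interpolation rather than the driver itself: a literal $ in a compose file value has to be escaped by doubling it, for example:
environment:
  POSTGRES_PASSWORD: "my$$ecret"   # compose expands $$ to a literal $, i.e. my$ecret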

Can't connect to DB located in docker container

I'm trying to create a PostgreSQL DB inside a docker container and connect to it from my local machine. Running docker-compose up -d with this inside docker-compose.yml:
version: '3.5'
services:
  db:
    image: postgres:12.2
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
ended successfully, with no crashes or errors. But when I try to connect to it with pgAdmin 4 using these credentials:
Host name/address: localhost
Port: 5432
Maintenance database: db
Username: root
Password: root
it says to me:
Unable to connect to server:
FATAL: password authentication failed for user "root"
My OS: Windows 10 build(1809)
PostgreSQL version (installed on local machine): 12
Docker version: 19.03.13, build 4484c46d9d
UPD 1:
After re-creating the container with different ports (now 5433:5433), the pgAdmin 4 error changed:
Unable to connect to server:
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Host name/address: localhost
Port: 5432
You are trying to connect to port 5432 on localhost. Are you sure your container is using the host IP?
To make the container run with the host IP, run the container with the --network host option.
docker run --network host <rest of the command>
Note that if you use the --network host option, the port-mapping -p option is not needed.
Read https://docs.docker.com/network/host/ for more information.
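In compose terms, the equivalent of --network host would be network_mode (a sketch; note that host networking has traditionally only worked as expected on Linux hosts, not on Docker Desktop for Windows):
services:
  db:
    image: postgres:12.2
    network_mode: host   # shares the host's network stack; no ports mapping needed
    environment:
      POSTGRES_DB: db
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root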
Have you checked you've cleaned away any old instances running locally and that you're not trying to access an old instance?
You can wipe out all local docker containers with: docker rm -f $(docker ps -aq)
Once you've got a clean environment you can try to spin up the containers again locally and see if you can access the service. I copy/pasted what you have into a clean docker-compose.yaml and ran docker-compose up against the file - it worked and I logged in and was able to view the pg_user table.
If it still fails, you can try to find the IP using netstat -in | grep en0, which will show something like
en0 1500 192.168.1 192.168.1.163 15301832 - 9001208 - - -
This shows the externally accessible IP of your machine. Try using the address shown (something similar to 192.168.1.163) instead of localhost.
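Another thing worth ruling out, given the PostgreSQL 12 install listed on the local machine: if that instance is also listening on 5432, pgAdmin may be reaching it instead of the container, which would explain the failed login for "root". One way to sidestep that is to remap only the host side of the port (the container keeps listening on 5432):
ports:
  - "15432:5432"   # point pgAdmin at localhost:15432; the container still uses 5432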

How to set up a Postgres SQL database locally in my computer?

I have configured a production PostgreSQL database.
If I need to do debugging work, I don't want to be interacting with the production database or else that will affect the user base. Instead, I need to create a local environment such that nothing will be changed in the production database during debugging.
I am using PostgreSQL 10 and pgAdmin 4.
How can I achieve that?
Thanks.
You could set up a test environment with docker.
first a docker-compose.yml file:
version: "3"
services:
db:
image: postgres:10-alpine
volumes:
- ./local_path:/var/lib/postgresql/data
ports:
- "8000:5432"
expose:
- "5432"
admin:
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=admin#admin.com
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- "8080:80"
See the documentation for the docker postgres image on how to set environment variables to define user/password/db name. https://hub.docker.com/_/postgres/
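For example, under the db service you could set something like the following (the values here are just placeholders):
    environment:
      - POSTGRES_USER=testuser
      - POSTGRES_PASSWORD=testpass
      - POSTGRES_DB=testdb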
I'm not too familiar with pgAdmin, but the container has minimal setup options:
https://hub.docker.com/r/dpage/pgadmin4/
Then you start the containers with sudo docker-compose up.
The db container is publishing its port on 8000 on your host machine, so there should be no conflict with the postgres server running on the host.
To connect:
psql -h localhost -p 8000 -U postgres
The admin page should be available at port 8080 on your host machine.
When you connect the admin to the database in the UI, the hostname is db and the port is 5432
Now that you have a docker container set up, you might consider using it for production as well :)

Changing a postgres container's server port in Docker Compose

I am trying to deploy a second database container on a remote server using Docker compose. This postgresql server runs on port 5433 as opposed to 5432 as used by the first postgresql container.
When I set up the application I get this error output:
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "db" (172.17.0.2) and accepting
web_1 | TCP/IP connections on port 5433?
and my docker compose file is:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433"
  ports:
    - "5433"
  volumes:
    - ./backups:/home/backups
web:
  build: .
  command: bash -c "sleep 5 && python -u application/manage.py runserver 0.0.0.0:8081"
  volumes:
    - .:/code
  ports:
    - "81:8081"
  links:
    - db
  environment:
    - PYTHONUNBUFFERED=0
I feel the issue must be the postgresql.conf file on the server instance having the port set to 5432, causing the error when my app tries to connect to it. Is there a simple way of changing the port using a command in the compose file, as opposed to messing around with volumes to replace the file?
I am using the official postgresql container for this job.
Some people may wish to actually change the port Postgres is running on, rather than remapping the exposed port to the host using the port directive.
To do so, use command: -p 5433
In the example used for the question:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433" # Publishes 5433 to other containers but NOT to host machine
  ports:
    - "5433:5433"
  volumes:
    - ./backups:/home/backups
  command: -p 5433
Note that only the host uses the ports mapping; other containers connect directly to the port Postgres is actually listening on (5433 here).
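For example, another service in the same compose file would then point at the service name and the new port (the variable names here are just illustrative):
web:
  build: .
  links:
    - db
  environment:
    - DB_HOST=db     # the service name resolves on the compose network
    - DB_PORT=5433   # the port Postgres now listens on inside the container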
Assuming postgres is running on port 5432 in the container and you want to expose it on the host on 5433, this ports strophe:
ports:
  - "5433:5432"
will expose the server on port 5433 on the host. You can get rid of your existing expose strophe in this scenario.
If you only want to expose the service to other services declared in the compose file (and NOT localhost), just use the expose strophe and point it to the already internally exposed port 5432.
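A minimal sketch of that expose-only variant, keeping the image's default port 5432:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5432"   # reachable from other compose services as db:5432, not from the host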