How are these Docker setups different? One works and the other doesn't - postgresql

I'm running Postgres image in Docker on an M1 Mac with mapped ports "5432:5432". My app can connect to the DB from the host machine by calling localhost:5432. I'm now trying to run the app within Docker and I'm puzzled by the behavior I see.
This command works:
docker run --name api --add-host host.docker.internal:host-gateway -e DB_HOST=host.docker.internal -p 8000:8000 api
But when I try to replicate the same by putting the api within the docker-compose like this, it doesn't work:
services:
  postgres:
    image: postgres:14.2
    ports:
      - "5432:5432"
    networks:
      - my-network
  api:
    image: api
    environment:
      DB_HOST: host.docker.internal
    extra_hosts:
      - "host.docker.internal:host-gateway"
Connecting to the DB fails:
failed to connect to host=host.docker.internal user=postgres database=postgres: failed to receive message (unexpected EOF)
I've also tried putting the api container on the same my-network network as postgres and changing the DB host to the postgres service name:
api:
  image: api
  environment:
    DB_HOST: postgres
  networks:
    - my-network
but that gives me a different error:
failed to connect to host=postgres user=postgres database=postgres: dial error (dial tcp 192.168.192.2:5432: connect: connection refused)
The DB is listening at IPv4 address "0.0.0.0", port 5432 and IPv6 address "::", port 5432. Why would the docker run command work but the other two not work?

As David figured out in the comments, this is a startup-order problem: the api starts before the database is ready to accept connections. For that I would suggest wait-for-it, instead of sleeping for a bit and then starting the app again manually.
wait-for-it.sh is usually called from an entrypoint.sh, like this:
#!/bin/sh
# Wait for the cassandra db to be ready
./wait-for-it.sh cassandra:9042 --timeout=0
--timeout=0 will keep waiting until cassandra is up and running.
It can also be used directly in docker-compose.yaml, as in the following example:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "cassandra"
command: ["./wait-for-it.sh", "cassandra:9042"]
db:
image: cassandra
For more information, you can check Control startup and shutdown order in Compose and the wait-for-it GitHub repo.
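An alternative to a wait script, supported by recent versions of Compose, is a healthcheck on the database combined with depends_on using condition: service_healthy. A minimal sketch for the postgres/api setup from the question; the healthcheck command and timings are just an example, not taken from the question:
services:
  postgres:
    image: postgres:14.2
    ports:
      - "5432:5432"
    networks:
      - my-network
    healthcheck:
      # pg_isready ships with the postgres image and reports when the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
  api:
    image: api
    environment:
      DB_HOST: postgres
    networks:
      - my-network
    depends_on:
      postgres:
        condition: service_healthy
networks:
  my-network:
With this, Compose only starts the api container once pg_isready succeeds, so no wait script is needed inside the api image.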

Related

Cannot access Postgres instance running in Docker container from Pgadmin

I am trying to connect to a Postgres instance running in a Docker container. In the docker-compose file, the postgres service looks like this:
flask-api-postgres:
  container_name: flask-api-postgres
  image: postgres:13.4-alpine
  env_file:
    - dev.env
  ports:
    - "5433:5433"
  networks:
    flask-network:
With docker inspect I get that the container has the address: 172.19.0.2.
The API works fine, but when trying to access the database from Pgadmin with the config shown in the image (user and password are correctly set), I get the shown error.
Pgadmin config
I do not know how to access the postgres instance from pgadmin.
One approach: if pgAdmin is running on your host machine, access the Postgres container using 127.0.0.1 instead of 172.19.0.2.
Another way is to run pgAdmin in its own container. In that case you can reach PostgreSQL via its container IP (for example 172.19.0.2). Add this to your docker-compose file:
pgadmin:
  image: dpage/pgadmin4
  depends_on:
    - flask-api-postgres
  ports:
    - "5050:80"
  environment:
    PGADMIN_DEFAULT_EMAIL: pgadmin4@pgadmin.org
    PGADMIN_DEFAULT_PASSWORD: admin
  restart: unless-stopped
  networks:
    flask-network:
Make sure both are under same network.
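If flask-network is a user-defined network, it also needs to be declared at the top level of the compose file so that both containers can join it and resolve each other by name, roughly:
networks:
  flask-network:
    driver: bridge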
Please check the port you are using. The default is 5432.
See experiment:
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c4d92a623a6 postgres:latest "docker-entrypoint.s…" 14 minutes ago Up 14 minutes 5432/tcp, 0.0.0.0:5433->5433/tcp cannot-access-postgres-instance-running-in-docker-container-from-pgadmin-database-1
> docker exec -it 0c4d92a623a6 sh
# psql "host=127.0.0.1 port=5433"
psql: error: connection to server at "127.0.0.1", port 5433 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
# psql "host=127.0.0.1 port=5432"
psql: error: connection to server at "127.0.0.1", port 5432 failed: FATAL: role "root" does not exist
#
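So inside the container Postgres is listening on 5432, not 5433. One likely fix for the compose file above is to map host port 5433 to container port 5432 instead of publishing 5433 on both sides (a sketch, keeping the rest of the service as posted):
flask-api-postgres:
  container_name: flask-api-postgres
  image: postgres:13.4-alpine
  env_file:
    - dev.env
  ports:
    - "5433:5432"   # host 5433 -> container 5432
  networks:
    flask-network:
pgAdmin on the host would then connect to 127.0.0.1:5433, while a pgAdmin container on flask-network would connect to flask-api-postgres:5432.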

Docker-compose can't connect to Docker postgres container

My Postgres DB is running in a Docker container. When the container is started, it says it is ready to listen on 5432.
My application container is set to depend on it.
container_name: my_postgres_db
image: library/postgres:latest
network_mode: bridge
expose:
  - 5432
ports:
  - 5432:5432
environment:
  - POSTGRES_USER=admin
  - POSTGRES_PASSWORD=admin
  - POSTGRES_DB=localdb
restart: always
The config for app:
container_name: my_test_app
depends_on:
  - db
build:
  context: ./
  dockerfile: Dockerfile
image: my_test_app
ports:
  - 8080:8080
Based on solutions to similar questions, I changed localhost in the DB URL to:
spring.datasource.url=jdbc:postgresql://db:5432/localdb
That causes another error, "Unknown host exception". Even if I manage to build the app this way, it still doesn't work. The logs say:
Connection to localhost:5432 refused
What else am I missing?
Why is it still trying to connect to localhost:5432 when I have changed it to db:5432 and run gradlew clean/build?
Just change the network_mode of both the postgres and app services to host:
network_mode: host
Note that this will ignore the expose option and will use the host network as the containers' network.
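A minimal sketch of that suggestion, assuming the services are named db and app (the question's excerpt only shows depends_on: - db) and a Linux host where host networking is available:
services:
  db:
    container_name: my_postgres_db
    image: library/postgres:latest
    network_mode: host   # Postgres is reachable on localhost:5432 of the host
    environment:
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_DB=localdb
    restart: always
  app:
    container_name: my_test_app
    image: my_test_app
    network_mode: host   # the app shares the host network too
    depends_on:
      - db
With host networking the expose/ports mappings are dropped, and the JDBC URL can stay jdbc:postgresql://localhost:5432/localdb.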

Connecting to Postgres Docker server - authentication failed

I have a PostgreSQL container set up that I can successfully connect to with Adminer, but I get an authentication error when trying to connect via something like DBeaver using the same credentials.
I have tried exposing port 5432 in the Dockerfile and can see in Docker for Windows that the port is correctly bound. Since it is an authentication error, I'm guessing the issue isn't that the server can't be seen, but something with the username or password?
Docker Compose file and Dockerfile look like this.
version: "3.7"
services:
db:
build: ./postgresql
image: postgresql
container_name: postgresql
restart: always
environment:
- POSTGRES_DB=trac
- POSTGRES_USER=user
- POSTGRES_PASSWORD=1234
ports:
- 5432:5432
adminer:
image: adminer
restart: always
ports:
- 8080:8080
nginx:
build: ./nginx
image: nginx_db
container_name: nginx_db
restart: always
ports:
- "8004:8004"
- "8005:8005"
Dockerfile: (Dockerfile will later be used to copy ssl certs and keys)
FROM postgres:9.6
EXPOSE 5432
Is there something else I should be doing to make this work from other utilities?
Any help would be great.
Thanks in advance.
Update:
I tried accessing the database through the IP of the postgresql container (172.28.0.3), but the connection times out. That suggests PostgreSQL is correctly listening on 0.0.0.0:5432, and that for some reason the user and password are not usable outside of Docker, even from the host machine using localhost.
Check your pg_hba.conf file in the Postgres data folder.
The default configuration is that you can only log in from localhost (which I assume Adminer is doing) but not from external IPs.
In order to allow access from all external addresses via password authentication, add the following line to your pg_hba.conf:
host all all 0.0.0.0/0 md5
Then you can connect to your Postgres DB running in the Docker container from outside, given that you expose the port (5432).
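One way to get such a pg_hba.conf into the container with the official image is to mount (or COPY) a custom file and point the server at it via the hba_file setting. A rough sketch against the db service above; the ./postgresql/pg_hba.conf path is an assumption:
db:
  build: ./postgresql
  volumes:
    # hypothetical local copy of pg_hba.conf including the extra "host all all 0.0.0.0/0 md5" line
    - ./postgresql/pg_hba.conf:/etc/postgresql/pg_hba.conf:ro
  # start Postgres with the mounted file instead of the one in the data directory
  command: postgres -c hba_file=/etc/postgresql/pg_hba.conf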
Use the command docker container inspect ${container_number}; this will tell you which IP addresses and ports are exposed outside the container.
The command docker container ls will help identify the container number.
After updating my default db_name, I also had to update my docker-compose file by explicitly publishing the ports, as the OP did:
db:
  image: postgres:13-alpine
  volumes:
    - dev-db-data:/var/lib/postgresql/data
  environment:
    - POSTGRES_DB=devdb
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=1234
  ports:
    - 5432:5432
But the key here was restarting the server! DBeaver has connected to localhost:5432 :)

Docker-compose: App can not connect to Postgres container

I'm unable to get my Phoenix app connecting to the Postgres container when using docker-compose up.
My docker-compose.yml:
version: '3.5'
services:
  web:
    image: "solaris_cards:latest"
    ports:
      - "80:4000"
    env_file:
      - config/docker.env
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    volumes:
      - "/var/lib/postgresql/data/pgdata/var/lib/postgresql/data"
    ports:
      - "5432:5432"
    env_file:
      - config/docker.env
The application running in the web container complains that the Postgres container's hostname does not exist:
[error] Postgrex.Protocol (#PID<0.2134.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (db:5432): non-existing domain - :nxdomain
My env variables:
DATABASE_HOST=db
DATABASE_USER=postgres
DATABASE_PASS=postgres
I have tried running the Postgres container first separately and then running the web container but still have the same problem.
If I change the database host to 0.0.0.0 (which is what Postgres shows when running), it seems to connect, but then the connection is refused rather than not found.
However, Docker should be able to resolve the host name without me manually entering the IP.
Postgres was exiting because its volume already contained data.
This was solved by cleaning it up with:
docker-compose down -v
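Note that docker-compose down -v only removes anonymous volumes and named volumes declared in the compose file, not bind-mounted host directories, so a named volume makes this kind of reset explicit. A sketch for the db service above (the volume name pgdata is arbitrary):
db:
  image: postgres:10-alpine
  volumes:
    - pgdata:/var/lib/postgresql/data
  ports:
    - "5432:5432"
  env_file:
    - config/docker.env

# declared at the top level of the compose file, next to "services:"
volumes:
  pgdata: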

Changing a postgres containers server port in Docker Compose

I am trying to deploy a second database container on a remote server using Docker compose. This postgresql server runs on port 5433 as opposed to 5432 as used by the first postgresql container.
When I set up the application I get this error output:
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "db" (172.17.0.2) and accepting
web_1 | TCP/IP connections on port 5433?
and my docker compose file is:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433"
  ports:
    - "5433"
  volumes:
    - ./backups:/home/backups
web:
  build: .
  command: bash -c "sleep 5 && python -u application/manage.py runserver 0.0.0.0:8081"
  volumes:
    - .:/code
  ports:
    - "81:8081"
  links:
    - db
  environment:
    - PYTHONUNBUFFERED=0
I feel the issue must be the postgresql.conf file on the server instance having set the port to 5432 causing the error when my app tries to connect to it. Is there a simple way of changing the port using a command in the compose file as opposed to messing around with volumes to replace the file?
I am using the official postgresql container for this job.
Some people may wish to actually change the port Postgres is running on, rather than remapping the exposed port to the host using the port directive.
To do so, use command: -p 5433
In the example used for the question:
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  expose:
    - "5433"   # publishes 5433 to other containers but NOT to the host machine
  ports:
    - "5433:5433"
  volumes:
    - ./backups:/home/backups
  command: -p 5433
Note that only the host will respect the port directive. Other containers will not.
Assuming Postgres is running on port 5432 in the container and you want to expose it on the host on 5433, this ports stanza:
ports:
  - "5433:5432"
will expose the server on port 5433 on the host. You can get rid of your existing expose stanza in this scenario.
If you only want to expose the service to other services declared in the compose file (and NOT to localhost), just use the expose stanza and point it at the already internally exposed port 5432.
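Applied to the compose file from the question, that remapping approach might look roughly like this (Postgres keeps its default 5432 inside the container, and the Django settings would point at db:5432):
db:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: route_admin
    POSTGRES_USER: route_admin
  ports:
    - "5433:5432"   # host 5433 -> container 5432
  volumes:
    - ./backups:/home/backups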