Connecting to a database in another container with docker-compose - postgresql

I'm trying to set up a docker-compose file where one container has the database and the other has the application. To the best of my knowledge, I need to connect the two containers with a network.
version: "3"
services:
  psqldb:
    image: "postgres"
    container_name: "psqldb"
    environment:
      - POSTGRES_USER=usr
      - POSTGRES_PASSWORD=pwd
      - POSTGRES_DB=mydb
    ports:
      - "5432:5432"
    expose:
      - "5432"
    volumes:
      - $HOME/docker/volumes/postgres/:/var/lib/postgresql/data
    networks:
      - backend
  sbapp:
    image: "dsb"
    container_name: "dsb-cont"
    ports:
      - "8080:8080"
    expose:
      - "8080"
    depends_on:
      - psqldb
    networks:
      - backend
networks:
  backend:
I also tried setting the network driver to bridge (which didn't change the end result):
networks:
  backend:
    driver: bridge
I'm using the following URL to connect to the database:
spring.datasource.url=jdbc:postgresql://psqldb:5432/mydb
I also tried exposing the port and connecting using localhost:
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
I also tried to use port 5433 instead of 5432.
No matter what combination I used, I got the following error:
Connection to psqldb:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
in my app container. However, my database container stays up and I can connect to it fine from the host with the URL
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb
I can also connect to the database from the host if I remove the psqldb container entirely from docker-compose.yml (and use the latter connection URL).
If it makes any difference, I'm using Spring Boot for the application, with the following Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-jar", "app.jar"]
and I'm building the image with the command
docker build --build-arg JAR_FILE=target/*.jar -t dsb .
What am I missing in the two container setup?

The issue I had was that Docker Compose's depends_on only starts the containers in the defined order but doesn't wait for them to actually be ready.
https://docs.docker.com/compose/startup-order/
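One common fix is to add a healthcheck to the database service and make the application wait for it. A minimal sketch, assuming a Compose version that supports healthchecks and the condition form of depends_on (the interval and retry values are arbitrary):
services:
  psqldb:
    image: "postgres"
    environment:
      - POSTGRES_USER=usr
      - POSTGRES_PASSWORD=pwd
      - POSTGRES_DB=mydb
    healthcheck:
      # pg_isready ships with the postgres image and exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U usr -d mydb"]
      interval: 5s
      timeout: 5s
      retries: 10
    networks:
      - backend
  sbapp:
    image: "dsb"
    depends_on:
      psqldb:
        # wait for the healthcheck above to pass, not just for the container process to start
        condition: service_healthy
    networks:
      - backend
networks:
  backend:
Alternatively, the application can simply retry its initial connection until the database is ready, as the startup-order page linked above suggests.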

Related

Docker container getting connection refused from postgres container in docker-compose

I've been beating my head against this for a few days now and I'm finally asking for help after trying to find the solution myself from all over.
I have a docker-compose file that looks like this:
services:
  db:
    image: ...
    container_name: db
    ports:
      - "8095:5432"
    networks:
      - mynetwork
  springservice:
    image: ...
    container_name: springservice
    depends_on:
      - db
    ports:
      - "8090:8090"
    networks:
      - mynetwork
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:8095/dbname
      - SPRING_DATASOURCE_USER=user
      - SPRING_DATASOURCE_PASSWORD=password
networks:
  mynetwork:
    driver: bridge
    name: mynetwork
Postgres has to be on a different port because we've got 3 Postgres containers in that compose file, so each gets its own port.
Postgres's listen_addresses is set to "*".
pg_hba is set with "host all 0.0.0.0/0 md5"
Both containers come up, but when I curl from the service container to http://db:8095/, I get connection refused.
What am I missing here?
Your port mapping is meaningless inside the Docker network; it only maps the container port to the host system. Inside the network, the container is always reachable on its native port:
- SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/dbname
Also note that you don't need to publish the port at all to access it from inside the network. Publishing a database port can pose security risks; if you can, don't publish it, so that the database is only reachable from inside the Docker network.
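A minimal sketch of that variant, keeping the service names from the question (the image names are placeholders since they were elided above); the db service simply has no ports: section:
services:
  db:
    image: postgres   # placeholder for the elided image
    networks:
      - mynetwork
    # no ports: mapping - the database is reachable only from inside the compose network
  springservice:
    image: my-spring-image   # placeholder for the elided image
    depends_on:
      - db
    networks:
      - mynetwork
    environment:
      # inside the network the container's native port 5432 is used, not a host mapping
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/dbname
      - SPRING_DATASOURCE_USER=user
      - SPRING_DATASOURCE_PASSWORD=password
networks:
  mynetwork:
    driver: bridge
    name: mynetwork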

Docker-compose can't connect to Docker postgres container

My Postgres DB is running in a Docker container. When the container is started, it says it's ready to listen on 5432.
My application container is set to depend on it.
container_name: my_postgres_db
image: library/postgres:latest
network_mode: bridge
expose:
  - 5432
ports:
  - 5432:5432
environment:
  - POSTGRES_USER=admin
  - POSTGRES_PASSWORD=admin
  - POSTGRES_DB=localdb
restart: always
The config for app:
container_name: my_test_app
depends_on:
  - db
build:
  context: ./
  dockerfile: Dockerfile
image: my_test_app
ports:
  - 8080:8080
Based on solutions to similar questions, I changed the localhost in the DB URL to:
spring.datasource.url=jdbc:postgresql://db:5432/localdb
That causes another error: "Unknown host exception". Even if I manage to build the app this way, it still doesn't work. The logs say:
Connection to localhost:5432 refused
What else am I missing?
Why is it still connecting to localhost:5432 when I clearly changed it to db:5432 and ran gradlew clean/build?
Just change the network_mode of the Postgres and app services to host:
network_mode: host
Note that this will ignore the expose option and use the host network as the containers' network.
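A minimal sketch of that change for the two services from the question, assuming the setup runs on Linux (where host networking works this way) and that the service names are db and my_test_app; with network_mode: host the expose/ports entries are ignored and the app reaches Postgres on localhost:5432 again:
db:
  container_name: my_postgres_db
  image: library/postgres:latest
  network_mode: host        # shares the host's network stack; expose/ports are ignored
  environment:
    - POSTGRES_USER=admin
    - POSTGRES_PASSWORD=admin
    - POSTGRES_DB=localdb
  restart: always
my_test_app:
  container_name: my_test_app
  image: my_test_app
  network_mode: host        # the app now connects to jdbc:postgresql://localhost:5432/localdb
  depends_on:
    - db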

Connecting to Postgres Docker server - authentication failed

I have a PostgreSQL container set up that I can successfully connect to with Adminer, but I'm getting an authentication error when trying to connect via something like DBeaver using the same credentials.
I have tried exposing port 5432 in the Dockerfile and can see in Docker for Windows that the port is correctly bound. I'm guessing that because it is an authentication error, the issue isn't that the server can't be seen, but rather something with the username or password?
The Docker Compose file and Dockerfile look like this:
version: "3.7"
services:
db:
build: ./postgresql
image: postgresql
container_name: postgresql
restart: always
environment:
- POSTGRES_DB=trac
- POSTGRES_USER=user
- POSTGRES_PASSWORD=1234
ports:
- 5432:5432
adminer:
image: adminer
restart: always
ports:
- 8080:8080
nginx:
build: ./nginx
image: nginx_db
container_name: nginx_db
restart: always
ports:
- "8004:8004"
- "8005:8005"
Dockerfile (it will later be used to copy SSL certs and keys):
FROM postgres:9.6
EXPOSE 5432
Wondering if there is something else I should be doing to enable this to work via some other utility?
Any help would be great.
Thanks in advance.
Update:
Tried accessing the database through the IP of the postgresql container (172.28.0.3), but the connection times out, which suggests that PostgreSQL is correctly listening on 0.0.0.0:5432 and that for some reason the user and password are not usable outside of Docker, even from the host machine using localhost.
Check your pg_hba.conf file in the Postgres data folder.
The default configuration is that you can only log in from localhost (which I assume Adminer is doing) but not from external IPs.
In order to allow access from all external addresses via password authentication, add the following line to your pg_hba.conf:
host all all 0.0.0.0/0 md5
Then you can connect to your Postgres DB running in the Docker container from outside, provided you publish the port (5432).
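If you'd rather not edit the file inside the running container, one option is to mount an edited pg_hba.conf and point the server at it from the compose file. A minimal sketch against the db service from the question, assuming the edited file lives at ./postgresql/pg_hba.conf (a hypothetical path):
db:
  build: ./postgresql
  image: postgresql
  ports:
    - 5432:5432
  volumes:
    # mount the edited rules outside of PGDATA so the generated data directory is untouched
    - ./postgresql/pg_hba.conf:/etc/postgresql/pg_hba.conf
  # tell the server to use the mounted file instead of the generated one
  command: postgres -c hba_file=/etc/postgresql/pg_hba.conf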
Use the command docker container inspect <container_id>; this will tell you which IP addresses and ports are exposed externally by the container.
The command docker container ls will help identify the container ID.
After updating my default database name, I also had to update my docker-compose file by explicitly publishing the ports, as the OP did:
db:
  image: postgres:13-alpine
  volumes:
    - dev-db-data:/var/lib/postgresql/data
  environment:
    - POSTGRES_DB=devdb
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=1234
  ports:
    - 5432:5432
But the key here was restarting the server! DBeaver has connected to localhost:5432 :)

How can I link MongoDB with other services in docker-compose?

I've got a problem.
I made a docker-compose file that runs Mongo and Node.
The problem is that I can't reach Mongo from the container, so I cannot start my Node server.
Here is my docker-compose file:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
But when I start Mongo outside the container, my Node app can reach it; I don't know why...
Any idea?
Thanks!
Don't define ports in the DB service. Afterwards only the application will be able to access the DB, but most probably it will work then. If you still want to access it from your PC, you should define a network. Try this:
version: '3'
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    ports:
      - "8080:8080"
    depends_on:
      - database
    links:
      - database
And for creating a network:
version: '3'
networks:
  back-tier:
services:
  database:
    build: ./Database
    container_name: "dashboard_database"
    networks:
      - back-tier
    ports:
      - "27017:27017"
  backend:
    build: ./Backend
    container_name: "dashboard_backend"
    networks:
      - back-tier
    ports:
      - "8080:8080"
    depends_on:
      - database
All services in docker-compose are on the network that docker-compose creates, and can be addressed by their service names from other services. In your case the service names are database and backend, so for instance the database can be accessed by the backend with something like tcp://database:27017. You don't need links anymore.
https://runnable.com/docker/docker-compose-networking
Be aware that depends_on only waits until the process has been started; it does not wait for the process to be ready to accept connections.
https://docs.docker.com/compose/compose-file/#depends_on
https://docs.docker.com/compose/startup-order
The port mappings are only necessary if you want to make a service accessible from the local machine. In your example the backend service is accessible via localhost:8080.
If you want an external container to access a docker-compose service, localhost:8080 won't work, because localhost inside a container isn't the same localhost as on the machine where the Docker containers are running. You can manually create a Docker network and connect both the external container and the docker-compose services to it. See the docker-compose networking link and take a look at the section on pre-existing networks; a sketch follows below.
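A minimal sketch of that pre-existing-network approach, assuming a network was created beforehand on the host with docker network create shared-net (the network name is a placeholder):
# created beforehand:  docker network create shared-net
version: '3'
networks:
  shared-net:
    external: true    # reuse the pre-existing network instead of letting compose create one
services:
  database:
    build: ./Database
    networks:
      - shared-net
  backend:
    build: ./Backend
    networks:
      - shared-net
    depends_on:
      - database
# any other container can then join the same network, for example:
#   docker run --network shared-net some-image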
Does that help you?

Accessing postgres data in docker-compose network

I'm having trouble accessing a database created from a docker-compose file.
Given the following compose file, I should be able to connect to it from Java using something like:
jdbc:postgresql://eprase:eprase@database:7000/eprase
However, the connection is rejected. I can't even use pgAdmin to connect to it using the same details when creating a new server.
I've entered the database container and run psql commands to verify that the eprase user and database have been created according to the Postgres Docker documentation; everything seems fine. I can't tell whether the problem is inside the database container or something I need to change in the compose network.
The client and server services can largely be ignored; the server is a Java-based web API and the client is an Angular app.
Compose file:
version: "3"
services:
client:
image: eprase/client:latest
build: ./client/eprase-app
networks:
api:
ports:
- "5000:80"
tty: true
depends_on:
- server
server:
image: eprase/server:latest
build: ./server
networks:
api:
ports:
- "6000:8080"
depends_on:
- database
database:
image: postgres:9
volumes:
- "./database/data:/var/lib/postgresql/data"
environment:
- "POSTGRES_USER=eprase"
- "POSTGRES_PASSWORD=eprase"
- "POSTGRES_DB=eprase"
networks:
api:
ports:
- "7000:5432"
restart: unless-stopped
pgadmin:
image: dpage/pgadmin4:latest
environment:
- "PGADMIN_DEFAULT_EMAIL=admin#eprase.com"
- "PGADMIN_DEFAULT_PASSWORD=eprase"
networks:
api:
ports:
- "8000:80"
depends_on:
- database
networks:
api:
The PostgreSQL database is listening on container port 5432. The 7000:5432 line maps host port 7000 to container port 5432, which lets you connect to the database from the host on port 7000. But your services on the common network (api) should communicate with each other via the container ports.
So, from the perspective of the containers for the client and server services, the connection string should be:
jdbc:postgresql://eprase:eprase@database:5432/eprase
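For a Spring-style server, the same idea is often expressed with the credentials split out of the URL. A minimal sketch, assuming the server honours the standard SPRING_DATASOURCE_* variables (an assumption, since the server's configuration isn't shown in the question):
server:
  image: eprase/server:latest
  build: ./server
  networks:
    api:
  environment:
    # inside the api network, the database service is reached on its container port 5432
    - SPRING_DATASOURCE_URL=jdbc:postgresql://database:5432/eprase
    - SPRING_DATASOURCE_USERNAME=eprase
    - SPRING_DATASOURCE_PASSWORD=eprase
  depends_on:
    - database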