Server and PostgreSQL cannot connect when both are running in Docker: `getaddrinfo ENOTFOUND` - postgresql

After creating the Docker containers with the docker-compose file (below), I call
$ docker run myApp
However, I get
Error: getaddrinfo ENOTFOUND main_db
This only happens when both the server and PostgreSQL are running in Docker containers (I am able to connect to PostgreSQL on localhost).
I'm running a NestJS app that uses TypeORM to connect to a PostgreSQL server.
Inside app.module.ts, where the connection is bootstrapped, my config should match my Docker PostgreSQL config: the host points to the main_db container I created in Docker, and I declared it as a dependency of my server, the main service. Everything should be on the same network, webnet:
TypeOrmModule.forRoot({
  type: 'postgres',
  host: 'main_db',
  port: +process.env.POSTGRES_PORT,
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  autoLoadEntities: true,
  synchronize: true,
  logging: dbLogging,
}),
docker-compose.yml
version: '3.7'
services:
  main:
    container_name: main
    build:
      context: .
      target: development
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - ${SERVER_PORT}:${SERVER_PORT}
      - 9229:9229
    command: npm run start:dev
    env_file:
      - .env
    networks:
      - webnet
    depends_on:
      - main_db
  main_db:
    container_name: main_db
    image: postgres:12
    restart: always
    networks:
      - webnet
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
      PG_DATA: /var/lib/postgresql/data
    ports:
      - '${POSTGRES_PORT}:${POSTGRES_PORT}'
    volumes:
      - pgdata:/var/lib/postgresql/data
networks:
  webnet:
volumes:
  pgdata:
.env file
POSTGRES_PORT=5432
POSTGRES_USER=test
POSTGRES_PASSWORD=test
POSTGRES_DB=test
SERVER_PORT=3001

When you have both a Dockerfile and a docker-compose.yml and you call docker run myApp for the app defined in the Dockerfile instead of calling docker-compose up, the app will run, but none of the containers defined in the docker-compose file will be started. In the case of the NestJS server, the server was running, but it could not find the database container because that container was never spun up. Although the distinction between the app setup in the Dockerfile and the container definitions in docker-compose.yml was clear to me, I didn't realize that the docker run command doesn't reference docker-compose.yml at all. Posting here in case anyone else has a similar confusion; the commands below show the difference.
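As a minimal sketch of that difference (assuming the docker-compose.yml above sits in the project root):
# Builds the images and starts both main and main_db on the shared webnet network
$ docker-compose up --build
# By contrast, this only runs the single image; it never reads docker-compose.yml,
# so no main_db container (and no webnet network) exists for the app to resolve
$ docker run myApp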

Related

Spring Boot connection to Postgres fails when deploying with docker compose [duplicate]

I have a Java Spring Boot app that works with a Postgres database. I want to use Docker for both of them. I initially put just Postgres in Docker, and I had a docker-compose.yml file defined like this:
version: '2'
services:
  db:
    container_name: sample_db
    image: postgres:9.5
    volumes:
      - sample_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=sample
      - POSTGRES_USER=sample
      - POSTGRES_DB=sample
      - PGDATA=/var/lib/postgresql/data/pgdata
    ports:
      - 5432:5432
volumes:
  sample_db: {}
Then, when I issued the commands sudo dockerd and sudo docker-compose -f docker-compose.yml up, the database started. I could connect using pgAdmin, for example, by using localhost as the server and port 5432. Then, in my Spring Boot app, I defined the following properties inside the application.properties file.
spring.datasource.url=jdbc:postgresql://localhost:5432/sample
spring.datasource.username=sample
spring.datasource.password=sample
spring.jpa.generate-ddl=true
At this point I could run my Spring Boot app locally through Spring Suite, and it was all working fine. Then I wanted to add my Spring Boot app as a Docker image as well. First of all, I created a Dockerfile in my project directory, which looks like this:
FROM java:8
EXPOSE 8080
ADD /target/manager.jar manager.jar
ENTRYPOINT ["java","-jar","manager.jar"]
Then I went into the project directory and issued mvn clean followed by mvn install. Next, I issued docker build -f Dockerfile -t manager . followed by docker tag 9c6b1e3f1d5e myuser/manager:latest (the id is correct). Finally, I edited my existing docker-compose.yml file to look like this:
version: '2'
services:
  web:
    image: myuser/manager:latest
    ports:
      - 8080:8080
    depends_on:
      - db
  db:
    container_name: sample_db
    image: postgres:9.5
    volumes:
      - sample_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=sample
      - POSTGRES_USER=sample
      - POSTGRES_DB=sample
      - PGDATA=/var/lib/postgresql/data/pgdata
    ports:
      - 5432:5432
volumes:
  sample_db: {}
But now, if I issue the sudo docker-compose -f docker-compose.yml up command, the database again starts correctly, but I get errors and exit code 1 for the web app part. The problem is the connection string. I believe I have to change it to something else, but I don't know what it should be. I get the following error messages:
web_1 | 2017-06-27 22:11:54.418 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
web_1 |
web_1 | org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections
Any ideas?
Each container has its own network interface with its own localhost. So change how Java points to Postgres:
spring.datasource.url=jdbc:postgresql://localhost:5432/sample
To:
spring.datasource.url=jdbc:postgresql://db:5432/sample
db will resolve to the proper Postgres IP.
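If in doubt, you can check from inside the web container that the service name resolves on the Compose network. A rough sketch (getent ships with Debian-based images such as java:8):
# Print the IP address that the db service name resolves to inside the web container
$ docker-compose exec web getent hosts db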
Bonus: with docker-compose you don't need to build your image by hand. So change:
web:
  image: myuser/manager:latest
To:
web:
  build: .
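With build: . in place, Compose builds and tags the image itself. A minimal sketch of the workflow, assuming the Dockerfile sits next to docker-compose.yml:
# Rebuild the web image from the local Dockerfile and start both services
$ docker-compose up --build
# Or build the image on its own without starting anything
$ docker-compose build web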
I had the same problem and lost some time understanding and solving it:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
I'm showing all the properties so that everything is clear.
application.properties:
spring.datasource.url=jdbc:postgresql://localhost:5432/testdb
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=postgres
spring.datasource.password=postgres
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQL82Dialect
spring.jpa.hibernate.ddl-auto=update
docker-compose.yml:
version: "3"
services:
springapp:
build: .
container_name: springapp
environment:
SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/testdb
ports:
- 8000:8080
restart: always
depends_on:
- db
db:
image: postgres
container_name: db
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=testdb
- PGDATA=/var/lib/postgresql/data/pgdata
ports:
- 5000:5432
volumes:
- pgdata:/var/lib/postgresql/data
restart: always
volumes:
pgdata:
To start the Spring application against a local database we use the localhost URL.
To connect to the database container we need to change 'localhost' to the name of the database service, in my case 'db'.
Solution: add a SPRING_DATASOURCE_URL environment variable in docker-compose.yml, which overrides the spring.datasource.url value for the connection:
environment:
  SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/testdb
I hope this helps someone save some time.
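As a quick, hedged check that the override actually reaches the container:
# The variable set in docker-compose.yml should be visible inside the running container;
# this should print SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/testdb
$ docker-compose exec springapp env | grep SPRING_DATASOURCE_URL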
You can use this:
version: "2"
services:
sample_db-postgresql:
image: postgres:9.5
ports:
- 5432:5432
environment:
- POSTGRES_PASSWORD=sample
- POSTGRES_USER=sample
- POSTGRES_DB=sample
volumes:
- sample_db:/var/lib/postgresql/data
volumes:
sample_db:
You can use an ENV variable to change the db address from your docker-compose file.
Dockerfile:
FROM java:8
EXPOSE 8080
ENV POSTGRES localhost
ADD /target/manager.jar manager.jar
ENTRYPOINT exec java $JAVA_OPTS -jar manager.jar --spring.datasource.url=jdbc:postgresql://$POSTGRES:5432/sample
docker-compose:
container_name: springapp
environment:
  - POSTGRES=db
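As a rough sketch, the same ENV override can also be passed when running the image by hand outside Compose (the network name my-network is an assumption; use whatever network the db container is attached to):
# Override the POSTGRES host at run time and attach to the db container's network
$ docker run -e POSTGRES=db --network my-network myuser/manager:latest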

Unable to access postgres in docker from web app in another container

I have a sample app that I'm running locally on my machine with docker-compose. The web app is in one container and the db (Postgres) in another.
I am having a connection issue that I can't work through.
docker-compose
version: '3.8'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  app:
    build:
      context: .
      dockerfile: app/Dockerfile
    restart: always
    environment:
      APP_FRONTEND_PORT: '8080'
      DB_PORT: '5433'
      DB_HOST: 'db'
    ports:
      - '8080:8080'
    depends_on:
      - 'db'
volumes:
  postgres-db:
Dockerfile
FROM golang:latest AS build
WORKDIR /scratch
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/frontend ./...
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /go/bin/
COPY --from=build /bin/frontend /go/bin/frontend
ENTRYPOINT ["/go/bin/frontend"]
Both containers are running, I'm able to log into the running postgres container, and Postgres is running.
When I try to run an update from the UI I get a 500 error, and it does not seem like the app container can communicate with the db container. I'm not sure what I'm missing.
Client-side error when trying to make a call to update data:
encountered err: failed to begin transaction: failed to connect to `host=db user=postgres database=postgres`: dial error (dial tcp 172.29.0.2:5433: connect: connection refused)
docker ps yields:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eeed869434 sample_app "/go/bin/frontend" 48 minutes ago Up 48 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp sample_app_1
84804f00c751 postgres "docker-entrypoint.s…" 48 minutes ago Up 48 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp sample_app_db_1
$
As stated in https://docs.docker.com/network/bridge/, you need to put both services into a user-defined bridge network for them to refer to each other by container name. Here is how to do it in docker-compose.yml:
Define a custom bridge network:
networks:
  some-name:
    driver: bridge
Put both services into that network:
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - some-name
  app:
    build:
      context: .
      dockerfile: app/Dockerfile
    restart: always
    environment:
      APP_FRONTEND_PORT: '8080'
      DB_PORT: '5433'
      DB_HOST: 'db'
    ports:
      - '8080:8080'
    depends_on:
      - 'db'
    networks:
      - some-name
Force specific container names, especially for the one being referred to by the other; otherwise docker-compose adds a prefix and suffix to the service name to form the container name, e.g. sample_app_db_1:
services:
  db:
    container_name: db
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - some-name
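A quick sanity check, sketched here assuming the Compose project is named sample_app (adjust the network name to whatever docker network ls reports):
# List the containers attached to the user-defined bridge network
$ docker network inspect sample_app_some-name --format '{{range .Containers}}{{.Name}} {{end}}'
# Resolve the db service name from inside the app container (BusyBox nslookup in Alpine)
$ docker-compose exec app nslookup db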

Cannot connect to Postico from docker-compose postgresql service

I've done a docker-compose up and been able to run my web service attached to a PostgreSQL image. The problem is, I can't view the data in Postico when I try to access the database. The name of the service is db, and when I try to specify the hostname as "db" in Postico before I connect, I get an error saying the hostname was not found. I've entered my credentials, port and database name the same way I keyed them in my docker-compose file.
Does anybody know how I can find the correct setup to connect to the database inside the container?
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.phoenix.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - ./my_app:/app
    # make sure we start mongodb when we start this service
    # links:
    #   - db
    depends_on:
      - db
      - redis
    environment:
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
      FACEBOOK_CLIENT_ID: ${FACEBOOK_CLIENT_ID}
      FACEBOOK_CLIENT_SECRET: ${FACEBOOK_CLIENT_SECRET}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
  go:
    build:
      context: .
      dockerfile: Dockerfile.go.development
    ports:
      - 8080:8080
    volumes:
      - ./genesys-api:/go/src/github.com/sc4224/genesys-api
    depends_on:
      - db
      - redis
      - phoenix
  db:
    container_name: db
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
    volumes:
      - ./data/db:/data/db
    restart: always
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data/redis
    entrypoint: redis-server
    restart: always
Use localhost as the hostname.
You can't use the hostname db from outside the internal Docker network; that only works for applications running in the same network.
Since you exposed the db on port 5432, it's published as 0.0.0.0:5432->5432/tcp and is therefore accessible with localhost as the host and 5432 as the port.
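For example, a quick check from the host with psql (assuming a local psql client; since POSTGRES_DB isn't set, the default database name matches the user):
# Connect through the published port on the host, exactly as Postico would
$ psql -h localhost -p 5432 -U "$POSTGRES_USER"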

How to access a postgres Docker container from another Docker container without an IP address

How can I access a postgres Docker container from another Docker container without an IP address?
I want to store data in Postgres from myweb. In the jar the host is given as localhost:5432/db..
Here my compose file:
version: "3"
services:
myweb:
build: ./myweb
container_name: app
ports:
- "8080:8080"
- "9090:9090"
networks:
- front-tier
- back-tier
depends_on:
- "postgresdb"
postgresdb:
build: ./mydb
image: ppk:postgres9.5
volumes:
- dbdata:/var/lib/postgresql
ports:
- "5432:5432"
networks:
- back-tier
volumes:
dbdata: {}
networks:
front-tier:
back-tier:
Instead of the localhost:5432/db.. connection string, use postgresdb:5432/db..
By default, a container is reachable under the same hostname as its service name.
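To sanity-check name resolution once both containers are up, something like this should work (a sketch; pg_isready ships with the official postgres image, which the ./mydb build is presumably based on):
# Check that Postgres answers on the service name used by myweb
$ docker-compose exec postgresdb pg_isready -h postgresdb -p 5432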
Here is my minimal working example, which connects a Java client (boxfuse/flyway) to a Postgres server. The most important part is the health check, which delays the start of the myweb container until Postgres is ready to accept connections (a quick way to watch that status is shown after the compose file).
Note that this can be executed directly with docker-compose up; it doesn't have any other dependencies. Both images are from Docker Hub.
version: '2.1'
services:
  myweb:
    image: boxfuse/flyway
    command: -url=jdbc:postgresql://postgresdb/postgres -user=postgres -password=123 info
    depends_on:
      postgresdb:
        condition: service_healthy
  postgresdb:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=123
    healthcheck:
      test: "pg_isready -q -U postgres"
That is a Docker networking problem. The solution is to use postgresdb:5432/db in place of localhost:5432/db, because the two services are on the same network, named back-tier, and the Docker daemon uses the service name like a DNS name for communication between the two containers. I hope this solution helps you.

How to make persistent storage with docker-compose up-down-up?

I have a multi-container application that uses the postgres image in its docker-compose.yml file. The Postgres container has a volume on the host machine for persistent storage.
When I run docker-compose up the first time, all is fine: postgres creates its db files in my host folder.
After that I need to shut the application down temporarily with docker-compose down whenever I change the code of the web container.
When I run docker-compose up a second time, postgres overwrites all the db files, but I need that data to remain unchanged. How can I solve this issue?
My docker-compose.yml
version: '2'
services:
  web:
    build: ./web
    command: python3 main.py
    volumes:
      - ./web:/app
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
    links:
      - db:db
      - redis:redis
  db:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD:0000
    volumes:
      - ./pgdb:/var/lib/postgresql/data
  redis:
    image: redis
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
    volumes:
      - ./redisdb:/data
I solved this problem. It probably occurred because I changed the permissions on the pgdb directory as the host root user. By default I couldn't open pgdb on the host machine because its owner is the postgres user. I could be wrong, but after I stopped changing the permissions the problem went away.
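A hedged way to verify, assuming the bind-mounted directory is ./pgdb as in the compose file above:
# The data directory should stay owned by the user Postgres runs as inside the container
# (UID 999 in the official image); don't chown it to root on the host
$ sudo ls -ldn ./pgdb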