Cannot resolve postgres hostname from another service in docker-compose - postgresql

I have a docker-compose file which runs 2 containers.
One of the services is postgres and the other is my backend service.
When I try to resolve postgres or connect to it from the CLI of the backend container, it is successful.
But when I try to initialize the database with the hostname, I get the following error:
2022/09/01 08:49:53 /src/domain/domain.go:29 [error] failed to
initialize database, got error failed to connect to host=postgres user=devuser database=backenddev: hostname resolving error
(lookup postgres: device or resource busy) panic: failed to connect to
host=postgres user=devuser database=backenddev: hostname
resolving error (lookup postgres: device or resource busy)
goroutine 1 [running]:
github.com/anilkusc/backend/api.(*Api).Init(0xc00000e150)
        /src/api/api.go:34 +0x76b
github.com/anilkusc/backend/api.(*Api).Start(0xc00000e150)
        /src/api/api.go:87 +0x25
main.main()
        /src/main.go:48 +0x15f
When I try to connect to postgresql from the other container's CLI, I get the following:
root@8a0824fca084:/# nc -vz postgres 5432
Connection to postgres (172.25.0.2) 5432 port [tcp/postgresql] succeeded!
root@8a0824fca084:/# curl postgres:5432
curl: (52) Empty reply from server
Here is the related code block:
d.Database, err = gorm.Open(postgres.Open(os.Getenv("DB_CONN")), &gorm.Config{})
if err != nil {
return err
}
Here is the compose file:
version: '3.6'
services:
postgres:
image: postgres:14.5
restart: always
environment:
POSTGRES_PASSWORD: <>
POSTGRES_DB: backenddev
POSTGRES_USER: devuser
ports:
- 5432:5432
backend:
image: my-service:v0.0.2
restart: always
environment:
ENV: dev
STORE_KEY: 1234
DB_CONN: host=postgres user=devuser password=<> dbname=backenddev port=5432
ports:
- 8080:8080
depends_on:
- postgres
Here is the Dockerfile of the backend service:
FROM golang:1.18.4 as build
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY . .
RUN go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o /bin/app .
FROM alpine
WORKDIR /app
COPY --from=build /bin/app .
CMD ["./app"]
If I try to connect to the database with the external IP and port from the backend service it also succeeds, but no luck for the internal connection to postgresql.
Why can't my application resolve the postgres host even though its container can?

Thanks to @Brits.
It is probably related to an incompatibility between the alpine runtime image and the DNS libraries used by the golang:1.18.4 build image.
I changed my runtime image from alpine to debian and the issue was resolved.
FROM golang:1.18.4 as build
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY . .
RUN go build -a -ldflags "-linkmode external -extldflags '-static' -s -w" -o /bin/app .
FROM debian:buster-slim # just changed here!
WORKDIR /app
COPY --from=build /bin/app .
CMD ["./app"]
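An alternative that is sometimes suggested for this class of resolver problem (not part of the original answer, and untested against this exact setup) is to keep the alpine runtime image but build with CGO disabled, so Go's pure-Go DNS resolver is used instead of the statically linked libc one:
# Sketch: keep alpine, avoid the cgo/libc resolver by building a pure-Go static binary.
FROM golang:1.18.4 as build
WORKDIR /src
COPY go.sum go.mod ./
RUN go mod download
COPY . .
# CGO_ENABLED=0 already yields a static binary, so the external-linker flags are dropped.
RUN CGO_ENABLED=0 go build -ldflags "-s -w" -o /bin/app .
FROM alpine
WORKDIR /app
COPY --from=build /bin/app .
CMD ["./app"]
This only works if the module has no cgo dependencies; gorm's postgres driver (pgx) is pure Go, so it usually does.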

Related

Error: timeout expired on trying to connect in Docker Postgres using pgAdmin

I have created a docker container for a postgres service, but when I start it and try to connect to the database I get errors saying that the user and database I defined for the Postgres instance do not exist. I already tried changing the docker-compose file to find the problem, but I couldn't find it.
Here are the attachments:
Dockerfile:
FROM wyveo/nginx-php-fpm:latest
RUN chmod -R 775 /usr/share/nginx/
RUN export pwd=pwd
docker-compose.yml:
version: '3'
services:
laravel-app_prm:
build: .
ports:
- "8099:80"
volumes:
- ${pwd}/.docker/nginx/:/usr/share/nginx
postgres_prm:
image: postgres
restart: always
environment:
- POSTGRES_USER=db_usr
- POSTGRES_PASSWORD=postgres_password
- POSTGRES_DB=db_prm
ports:
- "5432:5440"
volumes:
- ${pwd}/.docker/dbdata:/var/lib/postgresql/data/
When I try to connect to the database directly through the container's bash, I get an error that the user and database, both entered exactly as defined in docker-compose.yml, do not exist.
sudo docker exec -it <postgres_container_id> bash
psql -h localhost -U db_usr
... and so on...
And to set up the connection in pgAdmin I got the container IP using:
sudo docker container inspect <postgres_container_id>
and getting the value from the IPAddress attribute.
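As an aside, a quick way to pull just that field is Docker's built-in inspect formatting (a sketch, not from the original post):
sudo docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <postgres_container_id>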

Can't reach database server at `postgres`:`5432`

Trying to dockerize NestJS and Prisma.
Nest is responding correctly to curl requests and I can connect to the Postgres server fine with this command:
docker compose exec postgres psql -h localhost -U postgres -d webapp_dev
Everything works until I try to run
npx prisma migrate dev --name init
then I get back
Error: P1001: Can't reach database server at `postgres`:`5432`
Here is my code:
docker-compose.yml
version: "2"
services:
backend:
build: .
ports:
- 3000:3000
- 9229:9229 # debugger port
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
command: yarn start:debug
restart: unless-stopped
depends_on:
- postgres
environment:
DATABASE_URL: postgres://postgres@postgres/webapp_dev
PORT: 8000
postgres:
image: postgres:14-alpine
ports:
- 5432:5432
environment:
POSTGRES_DB: webapp_dev
POSTGRES_HOST_AUTH_METHOD: trust
Dockerfile
FROM node:16
# Create app directory, this is in our container
WORKDIR /usr/src/app
# Install app dependencies
# Need to copy both package and lock to work
COPY package.json yarn.lock ./
RUN yarn install
COPY prisma/schema.prisma ./prisma/
RUN npx prisma generate
# Bundle app source
COPY . .
RUN yarn build
EXPOSE 8080
CMD ["node", "dist/main"]
.env
//.env
DATABASE_URL=postgres://postgres@postgres/webapp_dev
Not sure if this is the only issue, but your DB URL does not contain the DB secret in it:
DATABASE_URL: postgres://postgres:mysecret@postgres/webapp_dev?schema=public
I got the same error; I solved it after adding ?connect_timeout=300 to my DATABASE_URL.
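Putting the two suggestions together, the connection string would look something like this (mysecret and the explicit port 5432 are placeholders, not values from the original question):
DATABASE_URL=postgres://postgres:mysecret@postgres:5432/webapp_dev?schema=public&connect_timeout=300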

Error connecting a docker instance to MongoDB Atlas

I made a Go app connected to MongoDB Atlas and it works fine when run locally, but when I tried to create a docker-compose setup I got this error:
error parsing uri: lookup _mongodb._tcp.cluster0.mrknb.mongodb.net on 127.0.0.11:53: read udp 127.0.0.1:37379->127.0.0.11:53: i/o timeout
My connection string is:
mongodb+srv://apiVentas:<password>@cluster0.mrknb.mongodb.net/<dbname>?retryWrites=true&w=majority
My Dockerfile is:
FROM golang:alpine AS builder
ENV GO111MODULE=on \
CGO_ENABLED=0 \
GOOS=linux \
GOARCH=amd64
WORKDIR /build
COPY go.mod .
COPY go.sum .
RUN go mod download
COPY . .
RUN go test ./...
RUN go build -o main .
WORKDIR /dist
RUN cp /build/main .
FROM scratch
COPY --from=builder /dist/main /
ENTRYPOINT ["/main"]
And my docker-compose is:
version: "3"
services:
web:
container_name: apiVentas
restart: always
build: .
ports:
- "3000:3000"
volumes:
- .:/home/perajim/go/src/api.ventas
dns:
- 1.1.1.1
- 1.0.0.1
- 8.8.8.8
I added my IP to the access list in MongoDB Atlas.
Is some configuration necessary in docker?
MongoDB Atlas runs on port 27017; change the port binding and then try.

docker-compose - Application can't communicate with postgres container

I have a scrapy application which I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
scrapper:
container_name: scrapper
build: .
ports:
- 80:80
depends_on:
- db
links:
- db
db:
volumes:
- ./scrapper/sql:/docker-entrypoint-initdb.d
image: postgres
container_name: postgres
restart: always
ports:
- 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I receive this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name        Command                         State    Ports
----------------------------------------------------------------------------
postgres    docker-entrypoint.sh postgres   Up       0.0.0.0:5432->5432/tcp
scrapper    python3                         Exit 0
When you run docker-compose up to start db, that container runs on a network that is also created by docker compose. As such, a container started with docker run ... will not be able to connect to that instance, since it is not on the same network. But you can specify the network with:
docker run --network $network_name
To list the available docker networks, you can run:
docker network ls
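For example, if the Compose project is named scrapper (an assumption based on the scrapper_scrapper image name; confirm with docker network ls), the default network would be scrapper_default and the command would look like:
docker run -it --network scrapper_default scrapper_scrapper scrapy crawl angeloni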
I think you have to explicitly define a user network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
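A minimal sketch of what that could look like in this compose file (app_net is a made-up name and the services are abbreviated):
version: '3'
services:
  scrapper:
    build: .
    depends_on:
      - db
    networks:
      - app_net
  db:
    image: postgres
    networks:
      - app_net
networks:
  app_net:
    driver: bridge
Note that Compose already puts both services on a shared default network with DNS between them; an explicit user-defined network mainly matters for containers started outside the Compose file, as in the docker run case above.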

Issue when a Python script in docker connects to a SQL server in a stack

I have a stack in a swarm that works well on its own (at least I think it does...). It has a postgresql instance listening on port 5432 and a web server on port 80. The web server can be accessed properly from the outside.
For unit tests, I run only the database side, in stack mode:
version: "3"
services:
sql:
image: sql
networks:
- service_network
ports:
- 5432:5432
deploy:
replicas: 1
restart_policy:
condition: on-failure
volumes:
- ./local_storage_sql:/var/lib/postgresql/data
environment:
# provide your credentials here
- POSTGRES_USER=xxxx
- POSTGRES_PASSWORD=xxxx
networks:
service_network:
Then, the unit tests start by connecting to the db from another simple python container:
FROM python:latest
LABEL version="0.1.0"
LABEL org.sgnn7.name="unittest"
# Make sure we are fully up to date
RUN apt-get update -q && \
apt-get dist-upgrade -y && \
apt-get clean && \
apt-get autoclean
RUN python -m pip install psycopg2-binary
RUN mkdir /testing
COPY ./*.py /testing/
The test script fails when connecting:
conn = connect(dbname=database, user=user, host=host, password=password)
with:
File "/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
But it only fails when I run it inside the container. From outside, it works like a charm. I also tried setting up an external network and using it (same docker node):
docker run --rm --net service_network -t UnitTest-image py.test /testing
Obviously, I would have expected it to be more difficult to access the database from outside the network than from inside, so I must have missed something here, but I don't know what...
When you deploy a stack with a Compose file, the full name of the network is created by combining the stack name with the base network name. So, let's say you deployed your stack with the name foo like so:
docker stack deploy -c compose-file.yml foo
Then, the full network name will be foo_service_network.
When you run your Python container, you need to connect it to foo_service_network, not service_network.
docker container run --rm --net foo_service_network -t UnitTest-image py.test /testing
You can also customize the network name by specifying the name property in your Compose file (version 3.5 and up).
networks:
service_network:
name: service_network
In which case, you would connect your container to the network with that custom name.
docker container run --rm --net service_network -t UnitTest-image py.test /testing
Edit 1/28: Added Compose file example.
version: "3.7"
services:
sql:
image: sql
networks:
- service_network
ports:
- 5432:5432
deploy:
replicas: 1
restart_policy:
condition: on-failure
volumes:
- ./local_storage_sql:/var/lib/postgresql/data
environment:
# provide your credentials here
- POSTGRES_USER=xxxx
- POSTGRES_PASSWORD=xxxx
networks:
service_network:
name: service_network
attachable: true