I'm trying to get a connection between two Docker containers.
The first one is a Postgres database and the second one is a JBoss server.
I'm using Ansible and here is my playbook:
---
- hosts: localhost
  tasks:
    - name: start postgresql
      docker:
        name: mypostgres
        image: MYIMAGE_POSTGRES
        ports:
          - 5432:5432
        expose:
          - 5432:5432
        state: started
        env:
          DB_USER: "user"
          DB_PASS: "pass"
          DB_NAME: "name"
    - name: start jboss
      docker:
        name: jboss
        image: MYIMAGE_JBOSS
        ports:
          - 1099:1099
        expose:
          - 1099:1099
        state: running
        env:
          POSTGRES_PORT_5432_TCP_ADDR: "172.17.0.2"
          POSTGRES_PORT_5432_TCP_PORT: 5432
          HIBERNATE_CREATE_DDL: ""
          DB_NAME: "name"
          DB_USER: "user"
          DB_PASS: "pass"
If I start both containers, there is no connection between the database and JBoss.
Is there anything I have missed in my configuration?
Thanks a lot,
Pascal
You need to link the Postgres container to the JBoss one. To do that, use the links option:
...
docker:
  name: jboss
  image: MYIMAGE_JBOSS
  ports:
    - 1099:1099
  expose:
    - 1099:1099
  links:
    - mypostgres
  state: running
  env:
    POSTGRES_PORT_5432_TCP_ADDR: "mypostgres"
    POSTGRES_PORT_5432_TCP_PORT: 5432
    HIBERNATE_CREATE_DDL: ""
    DB_NAME: "name"
    DB_USER: "user"
    DB_PASS: "pass"
...
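Once the containers are linked, a quick way to sanity-check the wiring is to resolve the linked hostname from inside the JBoss container. This is only a sketch: it assumes both containers are running and that the jboss image ships getent and nc.

# "mypostgres" should resolve via the /etc/hosts entry the link creates
docker exec jboss getent hosts mypostgres

# check that Postgres actually answers on 5432 from inside the jboss container
docker exec jboss sh -c 'nc -z mypostgres 5432 && echo postgres reachable'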
I tried deploying Postgres with Odoo in Azure through Docker Compose, but I get this error when I check Odoo's container logs:
Database connection failure: connection to server at "db" (127.0.0.1),
port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections ?
Here's my docker-compose.yml
version: '3.2'
services:
  db:
    image: registryodoo.azurecr.io/samples/postgres:13
    volumes:
      - db:/var/lib/postgresql/data/pgdata
    deploy:
      restart_policy:
        condition: always
    expose:
      - 5432
    environment:
      POSTGRES_PASSWORD: odoo
      POSTGRES_DB: postgres
      POSTGRES_USER: odoo
      PGDATA: /var/lib/postgresql/data/pgdata
  odoocontainer:
    depends_on:
      - db
    image: registryodoo.azurecr.io/samples/odoo:latest
    volumes:
      - data:/var/lib/odoo
      - extra-addons:/mnt/extra-addons
    ports:
      - 8069:8069
    deploy:
      restart_policy:
        condition: always
    environment:
      HOST: db
      USER: odoo
      PASSWORD: odoo
volumes:
  data:
    driver: azure_file
    driver_opts:
      share_name: odoofileshare
      storage_account_name: odooaccount1
  db:
    driver: azure_file
    driver_opts:
      share_name: odoofileshare
      storage_account_name: odooaccount1
  extra-addons:
    driver: azure_file
    driver_opts:
      share_name: odoofileshare
      storage_account_name: odooaccount1
After looking at some answers to related questions, I tried expose: - 5432, then ports: - 5432, and then leaving both out of the compose file, but I still had the same issue.
Here's a screenshot of the container logs:
Azure container logs
I'm trying to create two Docker containers, one for dev and one for prod, using docker-compose. The two containers should be linked to separate Postgres databases.
I tried the following, but it seems to create just one container and one database every time.
docker-compose.yml
version: "3"
services:
db:
image: postgres:latest
restart: always
container_name: myinstance-postgres-database
environment:
- POSTGRES_USER= dbuser
- POSTGRES_PASSWORD= dbpass
- POSTGRES_DB= ProductionDB
ports:
- 127.17.0.1:5432:5432
volumes:
- myinstance-postgres-db:/var/lib/postgresql/data
app:
image: service/platform:latest
restart: always
container_name: prod-app
environment:
DB_SETUP: "true"
DB_VENDOR: "postgresql"
DB_HOST: db
DB_USER: "dbuser"
DB_PASSWORD: "dbpass"
DB_NAME: "ProductionDB"
DB_WAIT: 10
ports:
- 8443:8443
volumes:
- myinstance-postgres-git:/usr/local/tomcat/webapps/ROOT/WEB-INF/git
depends_on:
- db
volumes:
myinstance-postgres-db:
myinstance-postgres-git:
docker-compose.dev.yml
version: "3"
services:
db:
image: postgres:latest
restart: always
container_name: myinstancedev-postgres-database
environment:
- POSTGRES_USER= dbuser
- POSTGRES_PASSWORD= dbpass
- POSTGRES_DB= DevDB
ports:
- 127.17.0.1:5432:5432
volumes:
- myinstancedev-postgres-db:/var/lib/postgresql/data
app:
image: service/platform:latest
restart: always
container_name: dev-app
environment:
DB_SETUP: "true"
DB_VENDOR: "postgresql"
DB_HOST: db
DB_USER: "dbuser"
DB_PASSWORD: "dbpass"
DB_NAME: "DevDB"
DB_WAIT: 10
ports:
- 8444:8443
volumes:
- myinstancedev-postgres-git:/usr/local/tomcat/webapps/ROOT/WEB-INF/git
depends_on:
- db
volumes:
myinstancedev-postgres-db:
myinstancedev-postgres-git:
Then I run:
sudo docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
As a result I get only one app container (dev-app) and only one database.
Any solution?
If you want to run both of them together, use:
sudo docker-compose -f docker-compose.yml -p prod up -d && sudo docker-compose -f docker-compose.dev.yml -p dev up -d
When you pass multiple files to the same docker-compose command, it does not create separate containers as you'd like; it merges them instead. See "Share Compose configurations between files and projects".
Also note that you may get port conflict errors on the host, because both compose files bind the same host port (5432) for Postgres.
My output with 2 alpine postgres images on different ports.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74a8e5a8fad0 postgres:alpine "docker-entrypoint.s…" 1 second ago Up Less than a second 0.0.0.0:5430->5432/tcp prod_web_1
23f2b995d499 postgres:alpine "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 0.0.0.0:5431->5432/tcp dev_web_1
Also, do consider using env files for compose.
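For example, here is a minimal sketch of how the two environments could get their own project name and host ports through env files (the file names and the DB_PORT / APP_PORT variables are made up for illustration, and this assumes a docker-compose version that supports --env-file):

# .env.prod (hypothetical)
COMPOSE_PROJECT_NAME=prod
DB_PORT=5432
APP_PORT=8443

# .env.dev (hypothetical)
COMPOSE_PROJECT_NAME=dev
DB_PORT=5433
APP_PORT=8444

# In the compose file, reference the variables in the port mappings:
#   ports:
#     - "${DB_PORT}:5432"
#     - "${APP_PORT}:8443"
# and start each environment with its own env file:
#   sudo docker-compose --env-file .env.prod up -d
#   sudo docker-compose --env-file .env.dev up -d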
I set up a Prisma project recently and here is my docker-compose.yml file:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.31
    restart: always
    ports:
      - '4030:4466'
    environment:
      TZ: ${PRISMA_DB_TIME_ZONE}
      PRISMA_CONFIG: |
        port: 4466
        # managementApiSecret: my-secret
        databases:
          default:
            connector: postgres
            host: postgres
            port: 5432
            user: prisma
            password: ${PRISMA_DB_PASSWORD}
            migrations: true
            rawAccess: true
  postgres:
    image: postgres:10.3
    restart: always
    ports:
      - "3306:3306"
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: ${PRISMA_DB_PASSWORD}
      TZ: ${PRISMA_DB_TIME_ZONE}
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  prisma:
  postgres:
I can open my Prisma playground and it works without any issue, but I can't create a direct connection to the Postgres container with DBeaver.
DBeaver error message:
Connection reset
Why does my connection to the database fail?
This photo will be helpful.
Postgres listens on port 5432 by default.
In your postgres service, you should publish port 5432, not 3306.
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.31
    restart: always
    ports:
      - '4030:4466'
    environment:
      TZ: ${PRISMA_DB_TIME_ZONE}
      PRISMA_CONFIG: |
        port: 4466
        # managementApiSecret: my-secret
        databases:
          default:
            connector: postgres
            host: postgres
            port: 5432
            user: prisma
            password: ${PRISMA_DB_PASSWORD}
            migrations: true
            rawAccess: true
  postgres:
    image: postgres:10.3
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: ${PRISMA_DB_PASSWORD}
      TZ: ${PRISMA_DB_TIME_ZONE}
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  prisma:
  postgres:
If port 5432 on your host is already in use and you want to use 3306 instead, you can map the ports like below:
postgres:
  image: postgres:10.3
  restart: always
  ports:
    - "3306:5432"
  environment:
    POSTGRES_USER: prisma
    POSTGRES_PASSWORD: ${PRISMA_DB_PASSWORD}
    TZ: ${PRISMA_DB_TIME_ZONE}
  volumes:
    - postgres:/var/lib/postgresql/data
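With the 3306:5432 mapping above, a client on the host (DBeaver, or psql as a quick check) should connect to localhost on port 3306. A small sketch, assuming psql is installed on the host and that the default database has the same name as the prisma user:

# host port 3306 is forwarded to 5432 inside the container
psql -h localhost -p 3306 -U prisma -d prisma

# or just test that the port is open
nc -z localhost 3306 && echo postgres reachable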
UPDATE - 1
Reason why Prisma can access Postgres:
The ports section only makes services reachable at the host level. At the container level, if a port is open in a container, any other running container can reach that port as long as they share a network (via the networks or links section).
By default, docker-compose creates one network per docker-compose.yml file and joins all the services in that file to that network.
That's why you can use the service name as the hostname: Docker resolves that name to the IP address of the respective container (in your case, postgres).
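A minimal way to see this name resolution in action, assuming the service names from the compose file above and that the prisma image ships a shell with getent:

# resolve the "postgres" service name from inside the prisma container
docker-compose exec prisma getent hosts postgres

# the same hostname works in any connection string used on that network, e.g.
# postgresql://prisma:<password>@postgres:5432/prisma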
I have a docker-compose file with services for python, nginx, postgres and pgadmin:
services:
  postgres:
    image: postgres:9.6
    env_file: .env
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5431:5431"
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: pwdpwd
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "5050:80"
  backend:
    build:
      context: ./foobar # This refs a Dockerfile with Python and Django requirements
    command: ["/wait-for-it.sh", "postgres:5431", "--", "/gunicorn.sh"]
    volumes:
      - staticfiles_root:/foobar/static
    depends_on:
      - postgres
  nginx:
    build:
      context: ./foobar/docker/nginx
    volumes:
      - staticfiles_root:/foobar/static
    depends_on:
      - backend
    ports:
      - "0.0.0.0:80:80"
volumes:
  postgres_data:
  staticfiles_root:
  pgadmin:
When I run docker-compose up and visit localhost:5050, I see the pgadmin interface. When I try to create a new server there, with localhost or 0.0.0.0 as host name and 5431 as port, I get an error "Could not connect to server". If I remove these and instead enter postgres in the "Service" field, I get the error "definition of service "postgres" not found". How can I connect to the database with pgadmin?
The Docker container name changes when you run docker-compose: it is prefixed with the folder name (to keep container names unique). You can force the name of the container with the container_name property:
version: "3"
services:
# postgres database
postgres:
image: postgres:12.3
container_name: postgres
environment:
- POSTGRES_DB=admin
- POSTGRES_USER=admin
- POSTGRES_PASSWORD=admin
- POSTGRES_HOST_AUTH_METHOD=trust # allow all connections without a password. This is *not* recommended for prod
volumes:
- database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
ports:
- "5432:5432"
# pgadmin for managing postgis db (runs at localhost:5050)
# To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
pgadmin:
image: dpage/pgadmin4
container_name: pgadmin
environment:
- PGADMIN_DEFAULT_EMAIL=admin
- PGADMIN_DEFAULT_PASSWORD=admin
- PGADMIN_LISTEN_PORT=5050
ports:
- "5050:5050"
volumes:
database-data:
Another option is to run the Postgres container on the host network with
network_mode: host
but you lose the nice network isolation from Docker that way.
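A minimal sketch of that variant, reusing the postgres service from above (note that with host networking the ports mapping is ignored, and this only behaves as expected on a Linux host):

postgres:
  image: postgres:12.3
  container_name: postgres
  network_mode: host   # binds directly to the host's port 5432
  environment:
    - POSTGRES_PASSWORD=admin
  volumes:
    - database-data:/var/lib/postgresql/data/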
Be careful: the default Postgres port is 5432, not 5431. You should update the port mapping for the postgres service in your compose file; the wrong port might be the reason for the issues you reported. Change the port mapping and then, from pgAdmin, connect to postgres:5432. localhost:5432 will not work.
I am able to connect the servicebot service to a PostgreSQL running inside a Docker container, but I want to connect servicebot to the PostgreSQL running on the instance itself, i.e. not inside a Docker container.
I have installed PostgreSQL successfully and set the PostgreSQL-related environment variables in the docker-compose.yml as below. How can I make the container connect to the PostgreSQL running on the host?
docker-compose.yml
version: '2'
services:
  servicebot:
    image: servicebot/servicebot
    environment:
      POSTGRES_DB_PORT: "5432"
      POSTGRES_DB_HOST: "localhost"
      POSTGRES_DB_USER: "servicebot_user"
      POSTGRES_DB_PASSWORD: "servicebot_pass"
      POSTGRES_DB_NAME: "servicebot_user"
      PORT: "3000"
    volumes:
      - upload-data:/usr/src/app/uploads
      - environment-file:/usr/src/app/env
      - db-data:/var/lib/postgresql/data
    # links:
    #   - db
    ports:
      - "80:3000"
      - "443:3001"
    command: ["sh", "-c", "node /usr/src/app/bin/wait-for-it.js db 5432 && npm run-script start"]
volumes:
  upload-data:
  environment-file:
  db-data:
Previously I had a service named db for PostgreSQL and connected to it with links; as you can see, I have commented that out now.
I am very new to PostgreSQL and not able to figure out the right way. I have tried a few approaches, but without success.
Tried:
Adding extra_hosts with the IP of the instance
Using host.docker.internal instead of localhost
Error logs:
Running docker logs on the service does not show anything. The service
stops after 29-30 seconds.
Your problem is POSTGRES_DB_HOST pointing to "localhost": inside the container, "localhost" is the container running your current service, not the host.
If you want to connect to a Postgres running on your host, I think you can use the special hostname host.docker.internal.
Just a heads up that if you're running Docker on a Mac (macOS), you may need to use docker.for.mac.host.internal instead.
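On a plain Linux host, host.docker.internal does not exist by default. A hedged sketch of one way to add it yourself, assuming Docker Engine 20.10+ (which understands the host-gateway keyword):

servicebot:
  image: servicebot/servicebot
  extra_hosts:
    - "host.docker.internal:host-gateway"   # maps the name to the host's gateway IP
  environment:
    POSTGRES_DB_HOST: "host.docker.internal"
    POSTGRES_DB_PORT: "5432"

The host's PostgreSQL also has to accept connections from the Docker bridge (listen_addresses in postgresql.conf plus a matching pg_hba.conf entry), otherwise the connection will still be refused.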
I think you should add a postgresql service to the docker-compose.yml file, as follows:
version: '2'
services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=servicebot_pass
      - POSTGRES_USER=servicebot_user
      - POSTGRES_DB=postgres
    restart: always
    volumes:
      - ./postgresql:/var/lib/postgresql/data
  servicebot:
    image: servicebot/servicebot
    depends_on:
      - db
    environment:
      POSTGRES_DB_PORT: "5432"
      POSTGRES_DB_HOST: "db"   # point at the db service above, not localhost
      POSTGRES_DB_USER: "servicebot_user"
      POSTGRES_DB_PASSWORD: "servicebot_pass"
      POSTGRES_DB_NAME: "servicebot_user"
      PORT: "3000"
    volumes:
      - upload-data:/usr/src/app/uploads
      - environment-file:/usr/src/app/env
      - db-data:/var/lib/postgresql/data
    # links:
    #   - db
    ports:
      - "80:3000"
      - "443:3001"
    command: ["sh", "-c", "node /usr/src/app/bin/wait-for-it.js db 5432 && npm run-script start"]
volumes:
  upload-data:
  environment-file:
  db-data:
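A short verification sketch for either variant (the db and servicebot service names come from the compose files above):

# bring the stack up and watch whether servicebot gets past the database connection
sudo docker-compose up -d
sudo docker-compose logs -f servicebot

# if the containerized db service is used, confirm Postgres accepts connections
sudo docker-compose exec db pg_isready -U servicebot_user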