Creating Docker containers for dev and prod with multiple Postgres databases

I'm trying to create two Docker containers, one for dev and one for prod, using docker-compose. The two containers should be linked to separate Postgres databases.
I tried the following, but it seems to create just one container and one database every time.
docker-compose.yml
version: "3"
services:
  db:
    image: postgres:latest
    restart: always
    container_name: myinstance-postgres-database
    environment:
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpass
      - POSTGRES_DB=ProductionDB
    ports:
      - 127.17.0.1:5432:5432
    volumes:
      - myinstance-postgres-db:/var/lib/postgresql/data
  app:
    image: service/platform:latest
    restart: always
    container_name: prod-app
    environment:
      DB_SETUP: "true"
      DB_VENDOR: "postgresql"
      DB_HOST: db
      DB_USER: "dbuser"
      DB_PASSWORD: "dbpass"
      DB_NAME: "ProductionDB"
      DB_WAIT: 10
    ports:
      - 8443:8443
    volumes:
      - myinstance-postgres-git:/usr/local/tomcat/webapps/ROOT/WEB-INF/git
    depends_on:
      - db
volumes:
  myinstance-postgres-db:
  myinstance-postgres-git:
docker-compose.dev.yml
version: "3"
services:
  db:
    image: postgres:latest
    restart: always
    container_name: myinstancedev-postgres-database
    environment:
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpass
      - POSTGRES_DB=DevDB
    ports:
      - 127.17.0.1:5432:5432
    volumes:
      - myinstancedev-postgres-db:/var/lib/postgresql/data
  app:
    image: service/platform:latest
    restart: always
    container_name: dev-app
    environment:
      DB_SETUP: "true"
      DB_VENDOR: "postgresql"
      DB_HOST: db
      DB_USER: "dbuser"
      DB_PASSWORD: "dbpass"
      DB_NAME: "DevDB"
      DB_WAIT: 10
    ports:
      - 8444:8443
    volumes:
      - myinstancedev-postgres-git:/usr/local/tomcat/webapps/ROOT/WEB-INF/git
    depends_on:
      - db
volumes:
  myinstancedev-postgres-db:
  myinstancedev-postgres-git:
Then I run:
sudo docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
As a result I get only one app container (dev-app) and only one database is created.
Any solution?

If you want to run both of them together, run each file as its own project:
sudo docker-compose -f docker-compose.yml -p prod up -d && sudo docker-compose -f docker-compose.dev.yml -p dev up -d
When you pass multiple files to the same docker-compose command, it does not create separate containers as you'd like; it merges them instead. See "Share Compose configurations between files and projects" in the Docker documentation.
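If you want to see what that merge actually produces, docker-compose can render the combined configuration for you (a quick check, assuming both files are in the current directory):
sudo docker-compose -f docker-compose.yml -f docker-compose.dev.yml config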
Also note that you may get port conflict errors on the host, because both compose files publish host port 5432 for the database, so the second project will fail to bind it.
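One way around that is to publish the dev database on a different host port in docker-compose.dev.yml; the 5433 below is just an example, any free host port works:
services:
  db:
    ports:
      - 127.17.0.1:5433:5432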
My output with two alpine Postgres images on different host ports:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74a8e5a8fad0 postgres:alpine "docker-entrypoint.s…" 1 second ago Up Less than a second 0.0.0.0:5430->5432/tcp prod_web_1
23f2b995d499 postgres:alpine "docker-entrypoint.s…" 2 seconds ago Up 2 seconds 0.0.0.0:5431->5432/tcp dev_web_1
Also, do consider using env files with Compose.
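For example, instead of maintaining two nearly identical compose files, you could keep a single file and parameterize only the values that differ. This is just a sketch: the variable names and the prod.env/dev.env file names are illustrative, and the --env-file flag needs a reasonably recent docker-compose:
# docker-compose.yml (single file, values substituted from the env file)
services:
  db:
    image: postgres:latest
    environment:
      - POSTGRES_USER=dbuser
      - POSTGRES_PASSWORD=dbpass
      - POSTGRES_DB=${DB_NAME}
    ports:
      - 127.17.0.1:${DB_HOST_PORT}:5432
  app:
    image: service/platform:latest
    environment:
      DB_HOST: db
      DB_NAME: ${DB_NAME}
    ports:
      - ${APP_HOST_PORT}:8443

# prod.env                          # dev.env
#   DB_NAME=ProductionDB            #   DB_NAME=DevDB
#   DB_HOST_PORT=5432               #   DB_HOST_PORT=5433
#   APP_HOST_PORT=8443              #   APP_HOST_PORT=8444

# then run each environment as its own project:
#   sudo docker-compose -p prod --env-file prod.env up -d
#   sudo docker-compose -p dev  --env-file dev.env  up -d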

Related

Unable to access postgres in docker from web app in another container

I have a sample app that I'm running locally on my machine with docker-compose. The web app is in one container and the db (Postgres) in another.
I'm having a connection issue that I can't work through.
docker-compose
version: '3.8'
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  app:
    build:
      context: .
      dockerfile: app/Dockerfile
    restart: always
    environment:
      APP_FRONTEND_PORT: '8080'
      DB_PORT: '5433'
      DB_HOST: 'db'
    ports:
      - '8080:8080'
    depends_on:
      - 'db'
volumes:
  postgres-db:
Dockerfile
FROM golang:latest AS build
WORKDIR /scratch
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /bin/frontend ./...

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /go/bin/
COPY --from=build /bin/frontend /go/bin/frontend
ENTRYPOINT ["/go/bin/frontend"]
Both containers are running, I'm able to log into the running Postgres container, and Postgres is running.
When I try to run an update from the UI I get a 500 error, and it does not seem like the app container can communicate with the db container. I'm not sure what I'm missing.
Client-side error when trying to make the update call:
encountered err: failed to begin transaction: failed to connect to `host=db user=postgres database=postgres`: dial error (dial tcp 172.29.0.2:5433: connect: connection refused)
docker ps yields:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eeed869434 sample_app "/go/bin/frontend" 48 minutes ago Up 48 minutes 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp sample_app_1
84804f00c751 postgres "docker-entrypoint.s…" 48 minutes ago Up 48 minutes 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp sample_app_db_1
$
As stated in https://docs.docker.com/network/bridge/, you need to put both services into a user-defined bridge network for them to refer to each other by container name. Here is how to do it inside docker-compose.yml:
Define a custom bridge network:
networks:
  some-name:
    driver: bridge
Put both services into that network:
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - some-name
  app:
    build:
      context: .
      dockerfile: app/Dockerfile
    restart: always
    environment:
      APP_FRONTEND_PORT: '8080'
      DB_PORT: '5433'
      DB_HOST: 'db'
    ports:
      - '8080:8080'
    depends_on:
      - 'db'
    networks:
      - some-name
Force specific container names, especially for the service the other one refers to; otherwise docker-compose builds the container name from a project prefix and an index suffix around the service name, e.g. sample_app_db_1:
services:
  db:
    container_name: db
    image: postgres
    environment:
      POSTGRES_PASSWORD: 'password'
      POSTGRES_DB: 'postgres'
      POSTGRES_USER: 'postgres'
    volumes:
      - ./postgres-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    networks:
      - some-name
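Once both services are on the same network, a quick sanity check is to resolve and probe the database by its service name from inside that network. This is just a sketch: pg_isready ships with the official postgres image, and the service name is assumed to be db; note that containers connect to the container port (5432 here), not to a host-mapped port:
docker-compose exec db pg_isready -h db -p 5432
# expected output: db:5432 - accepting connections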

Connecting pgadmin to postgres in docker

I have a docker-compose file with services for python, nginx, postgres and pgadmin:
services:
  postgres:
    image: postgres:9.6
    env_file: .env
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5431:5431"
  pgadmin:
    image: dpage/pgadmin4
    links:
      - postgres
    depends_on:
      - postgres
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: pwdpwd
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "5050:80"
  backend:
    build:
      context: ./foobar # This refs a Dockerfile with Python and Django requirements
    command: ["/wait-for-it.sh", "postgres:5431", "--", "/gunicorn.sh"]
    volumes:
      - staticfiles_root:/foobar/static
    depends_on:
      - postgres
  nginx:
    build:
      context: ./foobar/docker/nginx
    volumes:
      - staticfiles_root:/foobar/static
    depends_on:
      - backend
    ports:
      - "0.0.0.0:80:80"
volumes:
  postgres_data:
  staticfiles_root:
  pgadmin:
When I run docker-compose up and visit localhost:5050, I see the pgadmin interface. When I try to create a new server there, with localhost or 0.0.0.0 as host name and 5431 as port, I get an error "Could not connect to server". If I remove these and instead enter postgres in the "Service" field, I get the error "definition of service "postgres" not found". How can I connect to the database with pgadmin?
The container name changes when you run docker-compose: the folder (project) name is added as a prefix to keep container names unique. You can force the name of the container with the container_name property:
version: "3"
services:
  # postgres database
  postgres:
    image: postgres:12.3
    container_name: postgres
    environment:
      - POSTGRES_DB=admin
      - POSTGRES_USER=admin
      - POSTGRES_PASSWORD=admin
      - POSTGRES_HOST_AUTH_METHOD=trust # allow all connections without a password. This is *not* recommended for prod
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
    ports:
      - "5432:5432"
  # pgadmin for managing postgis db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    container_name: pgadmin
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin
      - PGADMIN_DEFAULT_PASSWORD=admin
      - PGADMIN_LISTEN_PORT=5050
    ports:
      - "5050:5050"
volumes:
  database-data:
Another option is to attach the postgres container directly to the host network with
network_mode: host
but you lose Docker's nice network isolation that way.
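In a compose file that would look roughly like this (a sketch; with host networking the ports: mapping is ignored and Postgres listens directly on the host's 5432):
services:
  postgres:
    image: postgres:12.3
    network_mode: host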
Be careful: the default Postgres port is 5432, not 5431. You should update the port mapping for the postgres service in your compose file; the wrong port may be the reason for the issues you reported. Change the port mapping and then, from pgadmin, connect to postgres:5432. localhost:5432 will not work from inside the pgadmin container.
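For reference, a corrected mapping could look like this (keeping 5431 on the host is optional; the right-hand side must be the container port 5432 that Postgres actually listens on):
services:
  postgres:
    image: postgres:9.6
    ports:
      - "5431:5432"   # host port 5431 -> container port 5432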

How to access wacore container using WhatsApp Business API

I recently started using the WhatsApp Business API. I was able to install the Docker containers for WhatsApp Business and I can access WhatsApp Web using port 9090.
Ex: https://172.29.208.1:9090
But I don't know how to access MySQL and the WhatsApp core app.
I tried http://172.29.208.1:33060 but nothing happened. Please let me know how to access MySQL and wacore.
Here is my docker-compose.yml file
docker-compose.yml
version: '3'
volumes:
  whatsappData:
    driver: local
  whatsappMedia:
    driver: local
services:
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: testpass
      MYSQL_USER: testuser
      MYSQL_PASSWORD: testpass
    expose:
      - "33060"
    ports:
      - "33060:3306"
    network_mode: bridge
  wacore:
    image: docker.whatsapp.biz/coreapp:v2.19.4
    command: ["/opt/whatsapp/bin/wait_on_mysql.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    volumes:
      - whatsappData:/usr/local/waent/data
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    depends_on:
      - "db"
    network_mode: bridge
    links:
      - db
  waweb:
    image: docker.whatsapp.biz/web:v2.19.4
    command: ["/opt/whatsapp/bin/wait_on_mysql.sh", "/opt/whatsapp/bin/launch_within_docker.sh"]
    ports:
      - "9090:443"
    volumes:
      - whatsappData:/usr/local/waent/data
      - whatsappMedia:/usr/local/wamedia
    env_file:
      - db.env
    environment:
      WACORE_HOSTNAME: wacore
    depends_on:
      - "db"
      - "wacore"
    links:
      - db
      - wacore
    network_mode: bridge
MySQL is not an HTTP server, so it doesn't understand a request to http://172.29.208.1:33060.
You can run docker ps | grep mysql to get the MySQL container ID:
8dfa30ab0200 mysql:5.7.22 "docker-entrypoint.s…" 6 minutes ago Up 6 minutes 33060/tcp, 0.0.0.0:33060->3306/tcp xxxx_db_1
Then run docker exec -it 8dfa30ab0200 mysql -h localhost -P 3306 -u testuser --password=testpass to access MySQL.
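Alternatively, since the compose file publishes host port 33060 to the container's 3306, you can connect from the host with any MySQL client (assuming one is installed there):
mysql -h 127.0.0.1 -P 33060 -u testuser --password=testpass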
But because you haven't registered yet, you won't see much in MySQL. Please follow the steps in https://developers.facebook.com/docs/whatsapp/api/account to perform registration.
You don't need to access coreapp directly; you perform all API requests through the web app (https://172.29.208.1:9090).

Cannot connect to postico from docker-compose postgresql service

I've done a docker-compose up and been able to run my web service attached to a Postgres image. The problem is, I can't view the data in Postico when I try to access the database. The name of the service is db, and when I try to set the hostname to "db" in Postico before connecting, I get an error saying the hostname was not found. I've entered my credentials, port, and database name the same way I keyed them in my docker-compose file.
Does anybody know how I can find the correct settings to connect to the database running in the container?
version: '3.6'
services:
  phoenix:
    # tell docker-compose which Dockerfile it needs to build
    build:
      context: .
      dockerfile: Dockerfile.phoenix.development
    # map the port of phoenix to the local dev port
    ports:
      - 4000:4000
    # mount the code folder inside the running container for easy development
    volumes:
      - ./my_app:/app
    # make sure we start mongodb when we start this service
    # links:
    #   - db
    depends_on:
      - db
      - redis
    environment:
      GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
      GOOGLE_CLIENT_SECRET: ${GOOGLE_CLIENT_SECRET}
      FACEBOOK_CLIENT_ID: ${FACEBOOK_CLIENT_ID}
      FACEBOOK_CLIENT_SECRET: ${FACEBOOK_CLIENT_SECRET}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
  go:
    build:
      context: .
      dockerfile: Dockerfile.go.development
    ports:
      - 8080:8080
    volumes:
      - ./genesys-api:/go/src/github.com/sc4224/genesys-api
    depends_on:
      - db
      - redis
      - phoenix
  db:
    container_name: db
    image: postgres:latest
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_USER: ${POSTGRES_USER}
    volumes:
      - ./data/db:/data/db
    restart: always
  redis:
    container_name: redis
    image: redis:latest
    ports:
      - "6379:6379"
    volumes:
      - ./data/redis:/data/redis
    entrypoint: redis-server
    restart: always
Use localhost as the hostname.
You can't use the hostname db outside the internal Docker network; that only works for applications running in the same network.
Since you published the db on port 5432, it's exposed via 0.0.0.0:5432->5432/tcp and is therefore reachable from your machine with localhost as the host and 5432 as the port.
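A quick way to confirm this from the host before configuring Postico (assuming psql is installed locally and the same credentials from the compose file are in your shell environment):
psql -h localhost -p 5432 -U $POSTGRES_USER -d postgres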

How to connect to localhost postgres database from docker container?

I've configured my project to use Docker. I have a database that was used before Docker, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the Docker container simply doesn't see the database). If I'm doing something nonsensical, please let me know. Maybe I should migrate my server DB into a container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p#ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use Postgres in Docker as the server DB, as you suggested.
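That could look roughly like this (a sketch; /srv/pgdata is just an illustrative host path, and an existing non-Docker data directory can't simply be dropped in unless the Postgres major versions match):
services:
  db:
    image: postgres:latest
    volumes:
      - /srv/pgdata:/var/lib/postgresql/data   # bind-mount a host directory instead of a named volume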
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
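To verify that the alias is in place and the host database is reachable from the container, something like this should work (assuming the service is called web, a Debian-based image with getent, and that 192.168.1.2 really is your host's address; the host's Postgres must also accept remote connections via listen_addresses and pg_hba.conf):
docker-compose exec web getent hosts db   # should print: 192.168.1.2  db
docker-compose exec web python -c "import socket; socket.create_connection(('db', 5432), timeout=3); print('ok')"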
Just be sure your application references the database as db, and replace the IP I put there with your host's IP.
Regards