Docker Postgres: running multiple containers with segregated instances - postgresql

I have a requirement with docker/docker-compose to run two different instances of Postgres, but I need their data to be completely separate, as both applications control the database server completely, not just a single database.
Here is the Dockerfile; there is one in each project directory:
FROM postgres:10-alpine
COPY data/resources.sql.gz /docker-entrypoint-initdb.d/resources.sql.gz
ENV POSTGRES_USER=postgres
ENV POSTGRES_PASSWORD=123456
Here is the relevant section from each docker-compose file; again, there is one in each project directory.
Project 1
db_test:
  image: postgres:10-alpine
  container_name: postgres_test
  restart: always
  expose:
    - '5432'
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=123456
    - PGDATA=/db
  volumes:
    - ./db:/db
  networks:
    backend:
      ipv4_address: 172.16.1.6
Project 2
db:
  image: postgres:10-alpine
  container_name: postgres
  restart: always
  expose:
    - '5432'
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=123456
    - PGDATA=/db
  volumes:
    - ./db:/db
  networks:
    backend:
      ipv4_address: 172.16.0.6
I should also note that resources.sql.gz is unique to each project.
The problem I am having is that I build project 1, then stop its containers; then I build project 2, and for some reason it inherits the databases from project 1.
What I need is to completely separate the two, so that I could run them side by side on different ports (if required).

One option is to create two distinct named Docker volumes, one per service, and mount each at Postgres's default data directory. (Drop the PGDATA=/db override so the data actually lands on the mounted volume.)
docker-compose.yml
version: "3.7"
volumes:
db1-pgdata-volume:
name: db1-postgres-data
db2-pgdata-volume:
name: db2-postgres-data
services:
db_test:
image: postgres:10-alpine
container_name: postgres_test
restart: always
expose:
- '5432'
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=123456
- PGDATA=/db
volumes:
- db1-pgdata-volume:/var/lib/postgresql/data
networks:
backend:
ipv4_address: 172.16.1.6
db:
image: postgres:10-alpine
container_name: postgres
restart: always
expose:
- '5432'
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=123456
- PGDATA=/db
volumes:
- db2-pgdata-volume:/var/lib/postgresql/data
networks:
backend:
ipv4_address: 172.16.0.6
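After docker-compose up, each service writes to its own named volume. A quick way to confirm the separation from the host (the volume names match the compose file above):
$ docker volume ls | grep postgres-data
$ docker volume inspect db1-postgres-data db2-postgres-data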

You are using the same database storage for both services. It's like you're using a single database with two access points.
Change the volumes section to this:
In the db service:
volumes:
  - ./db:/var/lib/postgresql/data
In the db_test service:
volumes:
  - ./db_test:/var/lib/postgresql/data
Remove the following from the environment section:
- PGDATA=/db
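With the bind mounts separated like this, the two projects can run side by side. A minimal sketch, assuming the compose files live in directories named project1 and project2 (adjust to your actual layout):
$ (cd project1 && docker-compose up -d)
$ (cd project2 && docker-compose up -d)
$ ls project1/db_test project2/db    # each stack keeps its Postgres data in its own host directory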

Related

How to pass environment variables in Compose (docker compose)

There are multiple parts of Compose that deal with environment variables in one sense or another. So how do I pass environment variables in Compose (docker-compose)?
According to the documentation, if you have multiple environment variables, you can substitute them by adding them to a default environment variable file named .env, or by providing a path to your environment variables file using the --env-file command line option.
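For example, a .env file next to the compose file might look like this; the variable names match the compose file below, and the values are only placeholders:
DB_USER=postgres
DB_PASSWORD=changeme
DB_DATABASE=app
DB_HOST=postgres
DB_PORT=5432
DATABASE_TYPE=postgres
SERVER_PORT=80
PGADMIN_DEFAULT_EMAIL=admin@example.com
PGADMIN_DEFAULT_PASSWORD=changeme
PGADMIN_LISTEN_PORT=5400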
version: '3.9'
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
  postgres:
    container_name: postgres
    image: postgres:13-alpine
    environment:
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
    volumes:
      - ./pgdata:/var/lib/postgresql/data
      - ./database/app.sql:/docker-entrypoint-initdb.d/app.sql
    restart: always
    ports:
      - "35000:5432"
    networks:
      - app_network
  app-api:
    container_name: app-api
    build:
      dockerfile: Dockerfile
      context: ./app-api
      target: production
    environment:
      - DB_TYPE=${DATABASE_TYPE}
      - POSTGRES_HOST=${DB_HOST}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASS=${DB_PASSWORD}
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_PORT=${DB_PORT}
      - APP_PORT=${SERVER_PORT}
      - NODE_ENV=production
      ## AWS
      - AWS_S3_ACCESS_KEY=${AWS_S3_ACCESS_KEY}
      - AWS_S3_SECRET_ACCESS_KEY=${AWS_S3_SECRET_ACCESS_KEY}
      - AWS_S3_BUCKET=${AWS_S3_BUCKET}
      - AWS_S3_REGION=${AWS_S3_REGION}
    ports:
      - "5050:80"
    volumes:
      - ./pgadmin-data:/var/lib/pgadmin
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
      - PGADMIN_LISTEN_PORT=${PGADMIN_LISTEN_PORT}
    ports:
      - "5400:5400"
    depends_on:
      - postgres
    links:
      - postgres
    networks:
      - app_network
You can also pass a literal value for each environment variable directly, like this:
server:
  environment:
    - AWS_S3_ACCESS_KEY=ABCJQHWEQJHWQ
    - AWS_S3_SECRET_ACCESS_KEY=ASKJHDAKJHNAWKLHEN
    - AWS_S3_BUCKET=abc-text
    - AWS_S3_REGION=eu-west-1
  ports:
    - "5050:80"
  volumes:
    - ./pgadmin-data:/var/lib/pgadmin
  depends_on:
    - postgres
  links:
    - postgres
  networks:
    - app_network
When you run docker-compose up, the services defined above use the resolved variable values. You can verify this with the convert command, which prints your resolved application config to the terminal; use it to check whether you are passing the proper environment variables:
$ docker compose convert
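If your variables live in a file other than .env, you can point the same check at it with --env-file (the file path here is only an example):
$ docker compose --env-file ./config/.env.dev convert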
https://docs.docker.com/compose/environment-variables/

container fails curl resolve

I'm working with docker-compose.yml files and have done so for about three years now.
In my solution, I have 6 containers that reside on one network, which is defined as a bridge. One of the containers must have access to a specific internet host, which it is able to do without any issue. I'll refer to this as the "original" solution.
My next development task is to replicate the original solution on my workstation, ensuring that the replica can't interact with the original. My understanding is that I simply create another network, define it as a "bridge", and I should be good to go.
The issue is that the one container in my replica solution is unable to resolve a cURL call to the internet host. The code is nigh on identical between original and replica (I've added curl debugging to the replica, and that's how I caught the unresolved-host error), and the only thing I can see being different is the fact that it's on its own network (but it's bridged, so it should still be able to resolve the internet host).
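For debugging, a check along these lines from the host makes it easy to compare name resolution between the two stacks (the container names and test host below are only examples; adjust to whichever container makes the call):
$ docker exec -it console-www-dev3 sh -c 'getent hosts example.com; curl -v https://example.com'
$ docker exec -it console-www sh -c 'getent hosts example.com'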
The YAML files that I'm using are as follows:
original-docker-compose.yml
version: '3.8'
services:
  console-mysql:
    container_name: console-mysql
    image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "7306:3306"
    volumes:
      - ./perImageFiles/console-mysql/db:/var/lib/mysql
      - ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
    env_file:
      - ./.env
    networks:
      - dev-vlt-console
  console-www:
    image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
    container_name: console-www
    env_file:
      - ./.env
    ports:
      - 7080:80
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
      - ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
      - ./perImageFiles/console/logs-nginx:/var/log/nginx/
      - ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
    depends_on:
      - console-www-php
    networks:
      - dev-vlt-console
  console-proc:
    container_name: console-proc
    image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
    env_file:
      - ./.env
    ports:
      - 7082:9001
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ../../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
    entrypoint: /console-proc-docker-run.sh
    depends_on:
      - console-mysql
    networks:
      - dev-vlt-console
  console-www-php:
    container_name: console-www-php
    image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
    volumes:
      - ../../console-www:/var/www/secure
      - ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
      - ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
      - ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
    env_file:
      - ./.env
      - ./.setupenv
    entrypoint: /console-php-entrypoint.sh
    depends_on:
      - console-mysql
    networks:
      - dev-vlt-console
  console-dicom:
    image: jodogne/orthanc-plugins:1.9.7
    container_name: console-dicom
    depends_on: [console-mysql]
    ports: [7084:8042, 10401:10401]
    volumes:
      - ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
      - ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
    env_file:
      - ./.env
    networks:
      - dev-vlt-console
  console-vpacs:
    image: PRIVATE_ECR_ADDRESS/vpacs:latest
    container_name: console-vpacs
    depends_on: [console-mysql]
    ports: ["7085:8042"]
    secrets:
      - vpacs-orthanc.json
    networks:
      - dev-vlt-console
secrets:
  console-orthanc.json:
    file: ./perImageFiles/dicom/orthanc.json
  vpacs-orthanc.json:
    file: ./perImageFiles/vpacs/orthanc.json
networks:
  dev-vlt-console:
    name: dev-vlt-console
    driver: bridge
replica-docker-compose.yml
version: '3.8'
services:
  console-mysql-dev3:
    container_name: console-mysql-dev3
    image: PRIVATE_ECR_ADDRESS/mysql:5.7.34
    command: --default-authentication-plugin=mysql_native_password
    ports:
      - "60001:3306"
    volumes:
      - ./perImageFiles/console-mysql/db:/var/lib/mysql
      - ./perImageFiles/console-mysql/seed:/docker-entrypoint-initdb.d
    env_file:
      - ./.env
    networks:
      - dev-vlt-console-dev3
  console-www-dev3:
    image: PRIVATE_ECR_ADDRESS/bnoe-console:latest
    container_name: console-www-dev3
    env_file:
      - ./.env
    ports:
      - 60002:80
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/console/console-nginx.conf:/etc/nginx/conf.d/default.conf
      - ./perImageFiles/console/nginx.conf:/etc/nginx/nginx.conf
      - ./perImageFiles/console/logs-nginx:/var/log/nginx/
      - ./perImageFiles/console/console-www-entrypoint.sh:/console-www-entrypoint.sh
    depends_on:
      - console-www-php-dev3
    networks:
      - dev-vlt-console-dev3
  console-proc-dev3:
    container_name: console-proc-dev3
    image: PRIVATE_ECR_ADDRESS/bnoe-processor:latest
    env_file:
      - ./.env
    ports:
      - 60003:9001
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/proc/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ../console-www/scripts/docker/console-proc-docker-run.sh:/console-proc-docker-run.sh
    entrypoint: /console-proc-docker-run.sh
    depends_on:
      - console-mysql-dev3
    networks:
      - dev-vlt-console-dev3
  console-www-php-dev3:
    container_name: console-www-php-dev3
    image: PRIVATE_ECR_ADDRESS/bnoe-console-php:latest
    volumes:
      - ../console-www:/var/www/secure
      - ./perImageFiles/console/php-log.conf:/usr/local/etc/php-fpm.d/zz-log.conf
      - ./perImageFiles/console/www-php.ini:/usr/local/etc/php/conf.d/www-php.ini
      - ./perImageFiles/console/console-php-entrypoint.sh:/console-php-entrypoint.sh
    env_file:
      - ./.env
      - ./.setupenv
    entrypoint: /console-php-entrypoint.sh
    depends_on:
      - console-mysql-dev3
    networks:
      - dev-vlt-console-dev3
  console-dicom-dev3:
    image: jodogne/orthanc-plugins:1.9.7
    container_name: console-dicom-dev3
    depends_on: [console-mysql-dev3]
    ports: [60004:8042, 60005:10401]
    volumes:
      - ./perImageFiles/dicom/orthanc.json:/etc/orthanc/orthanc.json
      - ./perImageFiles/dicom/storage:/var/lib/orthanc/db-v6
      - ./perImageFiles/dicom/plugins:/usr/share/orthanc/plugins
      - ./perImageFiles/dicom/lua-scripts:/usr/share/orthanc/lua-scripts
    env_file:
      - ./.env
    networks:
      - dev-vlt-console-dev3
  console-vpacs-dev3:
    image: PRIVATE_ECR_ADDRESS/vpacs:latest
    container_name: console-vpacs-dev3
    depends_on: [console-mysql-dev3]
    ports: ["60006:8042"]
    secrets:
      - vpacs-orthanc-dev3.json
    networks:
      - dev-vlt-console-dev3
secrets:
  console-orthanc-dev3.json:
    file: ./perImageFiles/dicom/orthanc.json
  vpacs-orthanc-dev3.json:
    file: ./perImageFiles/vpacs/orthanc.json
networks:
  dev-vlt-console-dev3:
    name: dev-vlt-console-dev3
    driver: bridge
Am I missing something here? I've added tens of other containers, all with their own networks configured as "bridge" and they are all able to access the internet without issue.
I've read posts from 5 years ago about this, and the only resolution that seemed to work was rebooting the Docker host, which I've done, but it didn't help.
Any thoughts / comments?!
Thanks

Accessing the same database in different docker containers

I have been using Docker for a Postgres database as I work on my project. I used this docker-compose file to spin it up:
version: '3'
services:
  postgres:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4: {}
I now want to containerise my back and front ends together with the database. I made this docker-compose file to do so:
version: '3.8'
services:
  frontend:
    build: ./4x4
    ports:
      - "3000:3000"
  backend:
    build: ./server
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4:
    external: true
However, when I execute the command docker-compose up on the second file, I do not access the same data as the first one -- the database is blank. If I spin up the first one again, I return to the old data (i.e. nothing is overwritten).
I presumed that both compose files would connect to the same Postgres database. I would appreciate any elucidation.
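One thing worth checking is which volume each stack actually mounts, since Compose normally prefixes volume names with the project (directory) name unless the volume is marked external. A quick check (the container name is a placeholder):
$ docker volume ls
$ docker inspect <postgres-container> --format '{{ json .Mounts }}'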

Can't connect containers mariadb and phpmyadmin

I get the error "mysqli::real_connect(): (HY000/2002): No such file or directory" when trying to log in to phpMyAdmin. I verified I can connect to the DB container from localhost using mysql -h 127.0.0.1 -P 3306 -u root -p. Below is my docker-compose file:
version: "3.7"
########################### SECRETS
secrets:
mysql_root_password:
file: $DOCKERDIR/secrets/mysql_root_password
########################### SERVICES
services:
# Portainer - WebUI for Containers
portainer:
container_name: portainer
image: portainer/portainer-ce:latest
restart: unless-stopped
command: -H unix:///var/run/docker.sock
security_opt:
- no-new-privileges:true
ports:
- "$PORTAINER_PORT:9000"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- $DOCKERDIR/portainer/data:/data
environment:
- TZ=$TZ
# MariaDB - MySQL Database
db:
container_name: db
image: linuxserver/mariadb:latest
restart: always
security_opt:
- no-new-privileges:true
ports:
- "$MARIADB_PORT:3306"
volumes:
- $DOCKERDIR/mariadb/data:/config
environment:
- PUID=$PUID
- PGID=$PGID
- TZ=$TZ
- FILE__MYSQL_ROOT_PASSWORD=/run/secrets/mysql_root_password
secrets:
- mysql_root_password
# phpMyAdmin - Database management
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
container_name: phpmyadmin
restart: unless-stopped
depends_on:
- db
security_opt:
- no-new-privileges:true
ports:
- "$PHPMYADMIN_PORT:80"
volumes:
- $DOCKERDIR/phpmyadmin:/etc/phpmyadmin
environment:
- PMA_HOST=db
#- PMA_ARBITRARY=1
- MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
secrets:
- mysql_root_password
# Dozzle - Real-time Docker Log Viewer
dozzle:
image: amir20/dozzle:latest
container_name: dozzle
restart: unless-stopped
security_opt:
- no-new-privileges:true
ports:
- "$DOZZLE_PORT:8080"
environment:
DOZZLE_LEVEL: info
DOZZLE_TAILSIZE: 300
DOZZLE_FILTER: "status=running"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
For the life of me, I can't figure out what I am doing wrong to log into phpMyAdmin. Can someone explain my mistake or mistakes and point me in the right direction? Thanks
I figured the issue out: I had the network set on the phpmyadmin section but not on db; once I added the network statement to the db section as well, I was able to connect.
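For reference, a minimal sketch of that fix (the network name is just an example); the point is that both db and phpmyadmin must be attached to the same network so the PMA_HOST=db hostname resolves:
networks:
  dbnet:

services:
  db:
    # ...existing db config...
    networks:
      - dbnet
  phpmyadmin:
    # ...existing phpmyadmin config...
    networks:
      - dbnet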

Access fusionAuth-app UI from outside container (external access)?

I am using the following docker-compose.yml for deployment, cloned from https://fusionauth.io/docs/v1/tech/installation-guide/docker:
version: '3'
services:
  db:
    image: postgres:9.6
    environment:
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    # Un-comment to access the db service directly
    ports:
      - 5432:5432
    networks:
      - db
    restart: unless-stopped
    volumes:
      - db_data:/var/lib/postgresql/data
  search:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.1
    environment:
      - cluster.name=fusionauth
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS}"
    # Un-comment to access the search service directly
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - search
    restart: unless-stopped
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es_data:/usr/share/elasticsearch/data
  fusionauth:
    image: fusionauth/fusionauth-app:latest
    depends_on:
      - db
      - search
    environment:
      DATABASE_URL: jdbc:postgresql://db:5432/fusionauth
      DATABASE_ROOT_USER: ${POSTGRES_USER}
      DATABASE_ROOT_PASSWORD: ${POSTGRES_PASSWORD}
      DATABASE_USER: ${DATABASE_USER}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      FUSIONAUTH_MEMORY: ${FUSIONAUTH_MEMORY}
      FUSIONAUTH_SEARCH_SERVERS: http://search:9200
      FUSIONAUTH_URL: http://fusionauth:9011
    networks:
      - db
      - search
    restart: unless-stopped
    ports:
      - 9011:9011
    volumes:
      - fa_config:/usr/local/fusionauth/config
networks:
  db:
    driver: bridge
  search:
    driver: bridge
volumes:
  db_data:
  es_data:
  fa_config:
I am unable to access the FusionAuth UI screen from http://localhost:9011 or http://fusionauth:9011.
How can I access the UI welcome screen from outside the Docker container? Is there an enableExternal: true environment variable available for docker-compose.yml?
Is there a way to test or ping the FusionAuth app server to make sure it's up and running, other than using docker ps -a?
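For a quick liveness check beyond docker ps -a, something along these lines works from the host (the port matches the 9011:9011 mapping above):
$ curl -v http://localhost:9011/
$ docker-compose logs -f fusionauth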
This was due to an error with the docker image. An updated docker image was released hours after this question was asked, and that resolved the issue.
More details here: https://github.com/FusionAuth/fusionauth-containers/issues/47