docker-compose multiple postgres databases - docker-compose

I am trying to bring up several Postgres databases from one docker-compose file and have each database created when its container starts. The problem is that the database is not created, and I don't understand why. If I bring up only one Postgres service, it works. What am I missing?
This works well:
services:
  tenant_first:
    image: postgres
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=tenant_first
    ports:
      - 50005:5432
This doesn't work (the databases are not created when there is more than one service):
services:
  tenant_first:
    image: postgres
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=tenant_first
    ports:
      - 50005:5432
  tenant_second:
    image: postgres
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=tenant_second
    ports:
      - 50008:5432
I got it working by adding a second named volume, so that each service has its own data directory (with a single shared pgdata volume, the second container finds an already-initialized data directory and skips creating its database):
volumes:
  pgdata:
  pgdata2:

Here is the full working solution I ended up with:
version: '3.3'
services:
  tenant_first:
    image: postgres
    restart: always
    volumes:
      - pgdata:/var/lib/postgresql/data/
      - ./sql/initdb.sh:/docker-entrypoint-initdb.d/initdb.sh
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    ports:
      - 50005:5432
  tenant_fours:
    image: postgres
    restart: always
    volumes:
      - pgdata2:/var/lib/postgresql/data/
      - ./sql/initdb_f.sh:/docker-entrypoint-initdb.d/initdb_f.sh
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
    ports:
      - 50008:5432
networks:
  default:
    external:
      name: flask_mu_one_dbs
volumes:
  pgdata:
  pgdata2:
initdb.sh:
#!/bin/bash
psql -U postgres -c "create database tenant_first"
initdb_f.sh:
#!/bin/bash
psql -U postgres -c "create database tenant_fours"

Related

Accessing the same database in different docker containers

I have been using docker for a postgres database as I work on my project. I used this docker-compose file to spin it up
version: '3'
services:
  postgres:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4: {}
I now want to containerise my back and front ends together with the database. I made this docker-compose file to do so
version: '3.8'
services:
  frontend:
    build: ./4x4
    ports:
      - "3000:3000"
  backend:
    build: ./server
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4:
    external: true
However, when I run docker-compose up with the second file, I do not get the same data as with the first one: the database is blank. If I spin up the first one again, I get the old data back (i.e. nothing is overwritten).
I presumed that the same Postgres database would be connected to.
I would appreciate any elucidation.
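One likely explanation, offered as an assumption rather than a confirmed diagnosis: Compose prefixes named volumes with the project name (by default the directory name), so the first file actually created a volume called something like myproject_pgdata-4x4, while external: true in the second file refers to a volume literally named pgdata-4x4, which is a different, empty volume. A sketch of how the second file could point at the first project's volume by its real name (myproject_pgdata-4x4 is a hypothetical name; check the actual one with docker volume ls):
volumes:
  pgdata-4x4:
    external: true
    name: myproject_pgdata-4x4   # hypothetical; use the exact name listed by `docker volume ls`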

docker-compose.yml for Postgres works either way, but not the expected one

I have the following docker-compose.yml for running Postgres with Docker:
version: '3.8'
services:
  postgres:
    image: postgres:14.2-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
      POSTGRES_DB: mydatabasename
      PGDATA: /data/mydatabasename
    volumes:
      - postgres:/data/postgres
    ports:
      - '5432:5432'
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    ... placeholder
networks:
  postgres:
    driver: bridge
volumes:
  postgres:
  pgadmin:
It works. What I don't understand is that these two combinations work:
PGDATA: /data/mydatabasename
volumes:
  - postgres:/data/postgres
and
PGDATA: /data/postgres
volumes:
  - postgres:/data/mydatabasename
But this does not work:
PGDATA: /data/mydatabasename
volumes:
  - postgres:/data/mydatabasename
I would just get: error: database "mydatabasename" does not exist.
The latter was my first attempt at wiring everything up, though. So I am wondering: why do both fields not map to the actual database name? Thanks for your help!
Docker Compose File
version: '3'
services:
  postgres:
    image: postgres:alpine
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
    volumes:
      - ./data_backup:/var/lib/postgresql/data # mount the data locally
    env_file:
      - .env
    networks:
      - postgres-net
  pgadmin:
    image: dpage/pgadmin4:6
    restart: always
    ports:
      - 8080:80
    volumes:
      - pgadmin:/var/lib/pgadmin
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - postgres-net
networks:
  postgres-net:
    name: postgres-net
volumes:
  pgadmin:
.env File
# postgres
POSTGRES_PASSWORD="secure_password"
POSTGRES_DB=your_db_name
POSTGRES_USER=db_user_name
# pgadmin
PGADMIN_DEFAULT_EMAIL=ariful@firora.com
PGADMIN_DEFAULT_PASSWORD="secure_password"
PGADMIN_LISTEN_PORT=80
You may be trying to bind-mount a database folder that was never created. Instead, bind /var/lib/postgresql/data; that path will hold whatever databases you have created or will create in the future.
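To illustrate that advice against the compose file from the question above, a sketch assuming the image's default data directory is acceptable and the named volume starts out empty (the official image only creates POSTGRES_DB on a first start with an empty data directory; pgadmin and networks omitted for brevity):
services:
  postgres:
    image: postgres:14.2-alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-postgres}
      POSTGRES_DB: mydatabasename          # a database name, not a path
      # no custom PGDATA: the image defaults to /var/lib/postgresql/data
    volumes:
      - postgres:/var/lib/postgresql/data  # <volume name>:<mount path inside the container>
    ports:
      - '5432:5432'
volumes:
  postgres: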

How to connect PostgreSQL with pgAdmin in a docker-compose.yml file

This is my docker-compose.yml file:
version: "3.9"
services:
db:
image: postgres
volumes:
- ./data/db:/var/lib/postgresql/data
- ./innovators.sql:/docker-entrypoint-initdb.d/innovators.sql
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
pgadmin:
image: dpage/pgadmin4:4.18
restart: unless-stopped
environment:
- PGADMIN_DEFAULT_EMAIL=admin#domain.com
- PGADMIN_DEFAULT_PASSWORD=admin
- PGADMIN_LISTEN_PORT=80
ports:
- "8090:80"
volumes:
- ./pgadmin-data:/var/lib/pgadmin
links:
- "db:pgsql-server"
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
volumes:
pgadmin-data:
In PostgreSQL I import my own table (./innovators.sql:/docker-entrypoint-initdb.d/innovators.sql).
What should I do to connect my PostgreSQL database to pgAdmin?
The end result I would like is to be able to see the tables I imported in pgAdmin.
Access pgAdmin in the browser on your host at localhost:8090. Sign in, then navigate to Servers -> Create -> Server; in the Connection tab use db or pgsql-server as "Host name/address" and 5432 as the port.

Permission denied when docker-compose exec -T database pg_dump -U teslamate teslamate > /backuplocation/teslamate.bck

When I execute this command I receive a PERMISSION DENIED:
docker-compose exec -T database pg_dump -U teslamate teslamate > /backuplocation/teslamate.bck
What is going wrong?
I am trying to follow the instructions for making a backup of the database. The docker-compose.yml is below.
version: "3"
services:
teslamate:
image: teslamate/teslamate:latest
restart: always
environment:
- DATABASE_USER=teslamate
- DATABASE_PASS=secret
- DATABASE_NAME=teslamate
- DATABASE_HOST=database
- MQTT_HOST=mosquitto
ports:
- 4000:4000
volumes:
- ./import:/opt/app/import
cap_drop:
- all
database:
image: postgres:13
restart: always
environment:
- POSTGRES_USER=teslamate
- POSTGRES_PASSWORD=secret
- POSTGRES_DB=teslamate
volumes:
- teslamate-db:/var/lib/postgresql/data
grafana:
image: teslamate/grafana:latest
restart: always
environment:
- DATABASE_USER=teslamate
- DATABASE_PASS=secret
- DATABASE_NAME=teslamate
- DATABASE_HOST=database
ports:
- 3000:3000
volumes:
- teslamate-grafana-data:/var/lib/grafana
mosquitto:
image: eclipse-mosquitto:1.6
restart: always
ports:
- 1883:1883
volumes:
- mosquitto-conf:/mosquitto/config
- mosquitto-data:/mosquitto/data
volumes:
teslamate-db:
teslamate-grafana-data:
mosquitto-conf:
mosquitto-data:

Docker Postgres running multiple dockers with segregated instances

I have a requirement with docker/docker-compose to run two different instances of Postgres, but I need their data to be completely separate, as both applications control the database server completely, not just a single database.
Here is the Dockerfile; there is one in each project directory:
FROM postgres:10-alpine
COPY data/resources.sql.gz /docker-entrypoint-initdb.d/resources.sql.gz
ENV POSTGRES_USER=postgres
ENV POSTGRES_PASSWORD=123456
Here is the relevant section from each docker-compose file; there is one in each project directory.
Project 1
db_test:
  image: postgres:10-alpine
  container_name: postgres_test
  restart: always
  expose:
    - '5432'
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=123456
    - PGDATA=/db
  volumes:
    - ./db:/db
  networks:
    backend:
      ipv4_address: 172.16.1.6
Project 2
db:
  image: postgres:10-alpine
  container_name: postgres
  restart: always
  expose:
    - '5432'
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=123456
    - PGDATA=/db
  volumes:
    - ./db:/db
  networks:
    backend:
      ipv4_address: 172.16.0.6
I should also note that resources.sql.gz is unique to each project.
The problem I am having is that I build project 1, then stop the container; then I build project 2 and for some reason it inherits the databases from project 1.
What I need is to completely separate the two so that I can run them side by side, on different ports if required.
One option is to create two distinct docker volumes:
docker-compose.yml
version: "3.7"
volumes:
db1-pgdata-volume:
name: db1-postgres-data
db2-pgdata-volume:
name: db2-postgres-data
services:
db_test:
image: postgres:10-alpine
container_name: postgres_test
restart: always
expose:
- '5432'
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=123456
- PGDATA=/db
volumes:
- db1-pgdata-volume:/var/lib/postgresql/data
networks:
backend:
ipv4_address: 172.16.1.6
db:
image: postgres:10-alpine
container_name: postgres
restart: always
expose:
- '5432'
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=123456
- PGDATA=/db
volumes:
- db2-pgdata-volume:/var/lib/postgresql/data
networks:
backend:
ipv4_address: 172.16.0.6
You are using the same database storage for both services. It's like you're using a single database with two access points.
Change the volumes section as follows.
In the db service:
volumes:
  - ./db:/var/lib/postgresql/data
In the db_test service:
volumes:
  - ./db_test:/var/lib/postgresql/data
And remove the following from the environment section:
- PGDATA=/db
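Putting both suggestions together, the db_test service from Project 1 would look roughly like this (a sketch; the db service in Project 2 is the same, but with ./db as the host directory and its own address):
db_test:
  image: postgres:10-alpine
  container_name: postgres_test
  restart: always
  expose:
    - '5432'
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=123456
  volumes:
    - ./db_test:/var/lib/postgresql/data   # separate host directory per project, no custom PGDATA
  networks:
    backend:
      ipv4_address: 172.16.1.6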