How can I connect to postgres in docker from a host machine?
docker-compose.yml
version: '2'
networks:
  database:
    driver: bridge
services:
  app:
    build:
      context: .
      dockerfile: Application.Dockerfile
    env_file:
      - docker/Application/env_files/main.env
    ports:
      - "8060:80"
    networks:
      - database
    depends_on:
      - appdb
  appdb:
    image: postdock/postgres:1.9-postgres-extended95-repmgr32
    environment:
      POSTGRES_PASSWORD: app_pass
      POSTGRES_USER: www-data
      POSTGRES_DB: app_db
      CLUSTER_NODE_NETWORK_NAME: appdb
      NODE_ID: 1
      NODE_NAME: node1
    ports:
      - "5432:5432"
    networks:
      database:
        aliases:
          - database
docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------------
appname_app_1 /bin/sh -c /app/start.sh Up 0.0.0.0:8060->80/tcp
appname_appdb_1 docker-entrypoint.sh /usr/ ... Up 22/tcp, 0.0.0.0:5432->5432/tcp
From inside the containers I can connect successfully, both from the app container and the db container.
List of dbs and users from running psql inside container:
# psql -U postgres
psql (9.5.13)
Type "help" for help.
postgres=# \du
List of roles
Role name | Attributes | Member of
------------------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
replication_user | Superuser, Create role, Create DB, Replication | {}
www-data | Superuser | {}
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
----------------+------------------+----------+------------+------------+-----------------------
app_db | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
replication_db | replication_user | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
The DB image is not the official postgres image, but its Dockerfile on GitHub looks fine.
cat /var/lib/postgresql/data/pg_hba.conf from DB container:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres trust
#host replication postgres 127.0.0.1/32 trust
#host replication postgres ::1/128 trust
host all all all md5
host replication replication_user 0.0.0.0/0 md5
From the host I tried both users, with no luck:
$ psql -U postgres -h localhost
psql: FATAL: role "postgres" does not exist
$ psql -h localhost -U www-data appdb -W
Password for user www-data:
psql: FATAL: role "www-data" does not exist
It looks like there is already a PostgreSQL instance running on that port on my host machine. How can I check that?
I believe the problem is that you have Postgres running on the local machine on port 5432. The issue can be resolved by mapping port 5432 of the Docker container to another port on the host machine. This can be achieved with a change in docker-compose.yml.
Change
"5432:5432"
to
"5433:5432"
and restart docker-compose.
Now the containerized Postgres is reachable on 5433 (the locally installed Postgres stays on 5432).
You can try connecting to the Docker container:
psql -p 5433 -d db_name -U user -h localhost
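Any application connecting to the remapped container needs the host-side port in its connection string as well. A minimal sketch, assuming libpq-style URLs (the helper name is made up; the credentials are the ones from this example):

```python
def build_dsn(user, password, host, port, dbname):
    """Build a libpq-style connection URL. The port must be the HOST side
    of the Docker port mapping (5433 here), not the container's 5432.
    Note: passwords with special characters would need percent-encoding."""
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"

print(build_dsn("www-data", "app_pass", "localhost", 5433, "app_db"))
# postgresql://www-data:app_pass@localhost:5433/app_db
```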
I ran this on Ubuntu 16.04
$ psql -h localhost -U www-data app_db
Password for user www-data:
psql (9.5.13)
Type "help" for help.
app_db=# \du
List of roles
Role name | Attributes | Member of
------------------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
replication_user | Superuser, Create role, Create DB, Replication | {}
www-data | Superuser | {}
And below from my mac to the VM inside which docker was running (192.168.33.100 is the IP address of the docker VM)
$ psql -h 192.168.33.100 -U www-data app_db
Password for user www-data:
psql (9.6.9, server 9.5.13)
Type "help" for help.
app_db=# \du
List of roles
Role name | Attributes | Member of
------------------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
replication_user | Superuser, Create role, Create DB, Replication | {}
www-data | Superuser | {}
They both work for me.
PSQL version on VM
$ psql --version
psql (PostgreSQL) 9.5.13
PSQL version on Mac
$ psql --version
psql (PostgreSQL) 9.6.9
I had the same problem on elementary OS 5.0, though I was running the official postgres 10 Docker container. I was able to connect from the host using the IP of the running container.
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name/id
Test after getting the ip:
psql -h 172.17.0.2 -U postgres
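The same lookup can be done without the Go template by reading the raw JSON that docker inspect prints. A sketch (the JSON sample below is abbreviated from typical docker inspect output, not taken from this machine):

```python
import json

def container_ip(inspect_json):
    """Pull the first network's IPAddress out of `docker inspect <id>`
    output, which is a JSON array with one object per container."""
    data = json.loads(inspect_json)
    networks = data[0]["NetworkSettings"]["Networks"]
    return next(iter(networks.values()))["IPAddress"]

sample = '[{"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}}]'
print(container_ip(sample))  # 172.17.0.2
```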
I believe you have an issue in pg_hba.conf. Here you've specified only one host that has access: 127.0.0.1/32.
You can change it to this:
# IPv4 local connections:
host all all 0.0.0.0/0 md5
This will make sure your host (totally different IP) can connect.
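pg_hba.conf entries are matched top to bottom against the client address, first match wins. That logic can be sketched with Python's stdlib ipaddress module (the rule table is hard-coded here purely for illustration):

```python
import ipaddress

# (database, user, cidr, method) rows, in file order, as in pg_hba.conf
RULES = [
    ("all", "all", "127.0.0.1/32", "trust"),
    ("all", "all", "0.0.0.0/0", "md5"),
]

def auth_method(client_ip):
    """Return the method of the first rule whose CIDR contains client_ip,
    mirroring Postgres's first-match-wins behaviour."""
    addr = ipaddress.ip_address(client_ip)
    for _db, _user, cidr, method in RULES:
        if addr in ipaddress.ip_network(cidr):
            return method
    return None  # no matching line: the connection is rejected

print(auth_method("127.0.0.1"))   # trust (first rule wins)
print(auth_method("172.18.0.5"))  # md5
```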
To check whether an instance of PostgreSQL is already running, you can run
netstat -plnt | grep 5432. If this returns anything, you can take the PID and inspect the process itself.
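The interesting part of that netstat row is the PID/program column. A small sketch that pulls it out (the sample line is typical netstat -plnt output, not captured from this machine):

```python
import re

def listener_pid(netstat_line):
    """Extract (pid, program) from a `netstat -plnt` row such as
    'tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 1234/postgres'."""
    m = re.search(r"LISTEN\s+(\d+)/(\S+)", netstat_line)
    return (int(m.group(1)), m.group(2)) if m else None

line = "tcp   0   0 0.0.0.0:5432   0.0.0.0:*   LISTEN   1234/postgres"
print(listener_pid(line))  # (1234, 'postgres')
```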
I have a relatively similar setup, and the following works for me to open a psql session on the host machine into the docker postgres instance:
docker-compose run --rm db psql -h db -U postgres -d app_development
Where:
db is the name of the service (which doubles as its hostname on the compose network)
postgres is the name of the user
app_development is the name of the database
So for you, it would look like docker-compose run --rm appdb psql -h appdb -U www-data -d app_db.
Since you’re running it in OSX, you can always use the pre-installed Network Utility app to run a Port Scan on your host and identify if the postgres server is running (and if yes, on which port).
But I don’t think you have one running on your host. The problem is that Postgres by default runs on 5432, and the docker-compose file you are trying to run exposes the db container on the same port, i.e. 5432. If a Postgres server were already running on your host, Docker would have tried to bind a container to a port that is already in use, and would have reported an error.
Another potential solution:
As can be seen in this answer, MySQL opens a Unix socket for localhost rather than a TCP socket. Maybe something similar is happening here.
Try using 127.0.0.1 instead of localhost while connecting to the server in the container.
Related
I'm continuously hitting an error when trying to psql into a docker-composed Postgres image that has its ports forwarded. (The issue also persists when attempting to access the DB programmatically via a Node application.)
Running docker-compose up -d on the following docker compose file:
services:
  postgres:
    container_name: cnc-matches
    image: postgres:12.1-alpine
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: dbuser
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: cnc-matches
When running psql to attempt to access it I hit the following error continuously:
C:\Users\danie\Desktop\dev\cnc-db\db-setup>psql -h "localhost" -p "5432" -U dbuser
Password for user dbuser: pass
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: password authentication failed for user "dbuser"
When running docker exec I'm able to access the table and info fine:
C:\Users\danie\Desktop\dev\cnc-db\db-setup>docker exec -it cnc-matches psql -U dbuser cnc-matches
psql (12.1)
Type "help" for help.
cnc-matches=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
dbuser | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
I've tried creating a new user as well as altering the dbuser role's password in here with ALTER USER dbuser WITH PASSWORD 'pass', and I still cannot access the db with the default psql command locally.
cnc-matches=# CREATE USER tester WITH PASSWORD 'tester';
CREATE ROLE
cnc-matches=# \du
List of roles
Role name | Attributes | Member of
-----------+------------------------------------------------------------+-----------
dbuser | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
tester | | {}
C:\Users\danie\Desktop\dev\cnc-db\db-setup>psql -h "localhost" -p "5432" -U tester
Password for user tester: tester
psql: error: connection to server at "localhost" (::1), port 5432 failed: FATAL: password authentication failed for user "tester"
Not sure what I'm missing here; if relevant, I'm running this via Windows 11 cmd.
Any help/suggestions appreciated.
Looks like you also need to include the database name when connecting to the database running in the Docker container, e.g.:
psql -h "0.0.0.0" -p "5432" -U dbuser cnc-matches
If the container was successfully started, try running:
psql -U dbuser -d cnc-matches -h 0.0.0.0 --port 5432
For the port, use whatever host port you mapped the container's port to; in your case 5432.
You must specify all that explicitly or else you will get an error that the database cannot accept TCP/IP connections.
First time I am posting, sorry for any formatting mistakes.
I updated your docker-compose file with a network bridge, since the container is otherwise still isolated from the host machine's network.
Steps:
Create a Docker network with: docker network create db-net ("db-net" can be any name you want, but it must match the name used in your docker-compose file)
Run: docker-compose up -d
version: "3.9"
services:
  postgresCNC:
    container_name: cnc-matches
    image: postgres:15.0-alpine
    ports:
      - '5432:5432'
    environment:
      POSTGRES_USER: dbuser
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: cnc-matches
    networks:
      db-net:
networks:
  db-net:
    driver: bridge
A bridge network by default uses the host's network device, letting containers communicate with the host machine as well as with other containers.
More info here:
https://docs.docker.com/network/bridge/
I am not sure why I cannot connect to the Postgres instance in my Docker container from OUTSIDE the container.
docker-compose setup
db:
  container_name: postgres-container
  image: postgres:latest
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=liondancer
    - POSTGRES_PASSWORD=postgres
  volumes:
    - ../data/postgres:/var/lib/postgresql/data
With my container running via docker-compose
$ docker exec -it 451b psql -U liondancer
psql (13.1 (Debian 13.1-1.pgdg100+1))
Type "help" for help.
liondancer=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
---------------------+------------+----------+------------+------------+---------------------------
liondancer | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
journey_development | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
journey_test | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | liondancer | UTF8 | en_US.utf8 | en_US.utf8 | =c/liondancer +
| | | | | liondancer=CTc/liondancer
template1 | liondancer | UTF8 | en_US.utf8 | en_US.utf8 | =c/liondancer +
| | | | | liondancer=CTc/liondancer
(6 rows)
Here is the output of docker ps
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
451b08a85664 postgres:latest "docker-entrypoint.s…" About an hour ago Up 19 minutes 0.0.0.0:5432->5432/tcp postgres-container
However, I want my rails server and pgAdmin (currently NOT in a container) to be able to communicate with the Postgres Docker container. I thought that if my rails server, pgAdmin, or psql client connected to 0.0.0.0:5432, I should reach the Postgres server in the Docker container.
My attempts to connect have been
$ psql -h 0.0.0.0 -p 5432 -U liondancer -d journey_development
psql: error: FATAL: database "journey_development" does not exist
$ psql postgresql://liondancer:postgres@localhost:5432/
psql: error: FATAL: database "liondancer" does not exist
$ psql postgresql://liondancer:postgres@localhost:5432/journey_development
psql: error: FATAL: database "journey_development" does not exist
In rails database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: <%= ENV.fetch("POSTGRES_HOST") { "0.0.0.0" } %>
  port: <%= ENV.fetch("POSTGRES_PORT") { "5432" } %>
  username: <%= ENV.fetch("POSTGRES_USER") { "liondancer" } %>
  password: <%= ENV.fetch("POSTGRES_PASSWORD") { "postgres" } %>

development:
  <<: *default
  database: journey_development

test:
  <<: *default
  database: journey_test
Try adding this to the environment variables:
POSTGRES_DB=journey_development
To access from outside the container try this command:
psql journey_development -h localhost -U liondancer
# Password: postgres
You can checkout my blog post for a full setup with Node TS, Postgres and Docker for dev and prod builds.
https://yzia2000.github.io/blog/2021/01/15/docker-postgres-node.html
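As a side note on why the errors mention a database named "liondancer": when no database is given, psql defaults the database name to the user name, and the official postgres image only creates the database named in POSTGRES_DB (falling back to POSTGRES_USER if unset). A sketch of that defaulting logic (the helper name is made up):

```python
def effective_dbname(user, dbname=None, postgres_db_env=None):
    """Return (db psql will try, db the official image created).
    psql falls back to the user name when no -d/dbname is given;
    the image's entrypoint falls back to POSTGRES_USER when
    POSTGRES_DB is unset."""
    tried = dbname or user
    created = postgres_db_env or user
    return tried, created

# No -d flag and no POSTGRES_DB: psql tries "liondancer"
print(effective_dbname("liondancer"))
# ('liondancer', 'liondancer')

# With -d journey_development but POSTGRES_DB unset, the names diverge:
print(effective_dbname("liondancer", dbname="journey_development"))
# ('journey_development', 'liondancer')
```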
You can have an init script that creates the database and mount it into the postgres container.
Example -
db:
  image: postgres:12
  environment:
    POSTGRES_USER: liondancer
    POSTGRES_PASSWORD: postgres
  volumes:
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    - postgres_data:/var/lib/postgresql/data
Create an init.sql file in the same directory as the docker-compose.yml file with the following contents -
CREATE DATABASE journey_development WITH OWNER=liondancer LC_COLLATE='en_US.utf8' LC_CTYPE='en_US.utf8' ENCODING='UTF8';
This init.sql script is executed when the container first starts with an empty data directory, and the database will be created for you.
[Optional]
You can also create a user inside the init.sql using -
create user liondancer;
alter user liondancer with encrypted password 'postgres';
create database journey_development;
grant all privileges on database journey_development to liondancer;
I followed the tutorial from prisma.io to get started building a local server.
Here is the docker-compose.yml:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: postgres
            host: postgres
            port: 5432
            user: prisma
            password: prisma
  postgres:
    image: postgres:10.3
    restart: always
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres: ~
This builds two Docker containers: one is the Prisma server, the other is the Postgres database.
As I understood it, after running prisma deploy, the User model should create a users table in the database.
But when I check the schema in the database, I get this result:
docker exec -it myContainer psql -U prisma
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+------------+------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
prisma | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
prisma=# \z // or postgres=# \z
Access privileges
Schema | Name | Type | Access privileges | Column privileges | Policies
--------+------+------+-------------------+-------------------+----------
(0 rows)
prisma=# \dt or postgres=# \dt
Did not find any relations.
Then I tried to check the volume folder in the VM:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/var/lib/docker/volumes/first_prisma_postgres/_data # ls
PG_VERSION pg_commit_ts pg_ident.conf pg_notify pg_snapshots pg_subtrans pg_wal postgresql.conf
base pg_dynshmem pg_logical pg_replslot pg_stat pg_tblspc pg_xact postmaster.opts
global pg_hba.conf pg_multixact pg_serial pg_stat_tmp pg_twophase postgresql.auto.conf postmaster.pid
The data clearly exists on the VM, but how can I inspect it or make a dump backup from it?
Once you are connected to Postgres in your container, you can execute normal queries.
Example:
\l to list all databases
\dt to list all tables.
Connecting to the right database may be the step you are missing.
Run \c database_name to connect to the db.
Once you are connected, you can execute any normal queries.
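Those backslash meta-commands are psql shorthand for catalog queries; a rough mapping (these are simplified forms, not the exact SQL psql emits internally):

```python
# Approximate SQL behind common psql meta-commands (simplified).
META_SQL = {
    r"\l": "SELECT datname FROM pg_database;",
    r"\dt": "SELECT schemaname, tablename FROM pg_tables "
            "WHERE schemaname NOT IN ('pg_catalog', 'information_schema');",
    r"\du": "SELECT rolname FROM pg_roles;",
}

# Handy when querying from a driver that has no meta-commands:
print(META_SQL[r"\dt"])
```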
To check all volumes you can run:
docker volume ls
To check one of them:
docker volume inspect postgres_db
where postgres_db is the name of your volume.
As a result you may see something like this:
[
{
"CreatedAt": "2022-05-05T14:24:04Z",
"Driver": "local",
"Labels": {
"com.docker.compose.project": "s-deployment",
"com.docker.compose.version": "2.4.1",
"com.docker.compose.volume": "postgres_db"
},
"Mountpoint": "/var/lib/docker/volumes/s-deployment_postgres_db/_data",
"Name": "s-deployment_postgres_db",
"Options": null,
"Scope": "local"
}
]
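Programmatically, the Mountpoint can be read straight from that JSON. A sketch (the sample below is trimmed to the fields that matter):

```python
import json

def mountpoint(volume_inspect_json):
    """`docker volume inspect` prints a JSON array; Mountpoint is where
    the volume's files live on the Docker host's filesystem."""
    return json.loads(volume_inspect_json)[0]["Mountpoint"]

sample = ('[{"Name": "s-deployment_postgres_db", '
          '"Mountpoint": "/var/lib/docker/volumes/s-deployment_postgres_db/_data"}]')
print(mountpoint(sample))
# /var/lib/docker/volumes/s-deployment_postgres_db/_data
```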
I have deployed my PostgreSQL database locally with Docker. This is my Docker Compose file:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.30-alpha
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        prototype: true
        databases:
          default:
            connector: postgres
            host: postgres
            user: prisma
            password: prisma
            port: 5432
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:
I've started this with docker-compose up -d.
The Docker containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26b14120f89f prismagraphql/prisma:1.30-alpha "/bin/sh -c /app/sta…" 17 minutes ago Up 17 minutes 0.0.0.0:4466->4466/tcp newdm1_prisma_1
05dfdaeaf609 postgres "docker-entrypoint.s…" 17 minutes ago Up 17 minutes 5432/tcp newdm1_postgres_1
Now, for some reason I can't connect using a database GUI (having tried Postico and TablePlus). In both clients, I get the following error:
FATAL: password authentication failed for user "prisma"
I'm 100% sure that I'm providing prisma as the password as specified in the Docker Compose file.
Also, when I'm connecting to the database using psql from inside the postgres Docker container, it does work:
docker exec -it 05dfdaeaf609 bash
Then inside the Docker container I do this:
root@05dfdaeaf609:/# psql -U prisma -W prisma
Password:
psql (11.1 (Debian 11.1-1.pgdg90+1))
Type "help" for help.
prisma=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+--------+----------+------------+------------+-------------------
postgres | prisma | UTF8 | en_US.utf8 | en_US.utf8 |
prisma | prisma | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | prisma | UTF8 | en_US.utf8 | en_US.utf8 | =c/prisma +
| | | | | prisma=CTc/prisma
template1 | prisma | UTF8 | en_US.utf8 | en_US.utf8 | =c/prisma +
| | | | | prisma=CTc/prisma
(4 rows)
prisma=# \c prisma
Password for user prisma:
You are now connected to database "prisma" as user "prisma".
The password I've provided inside psql was prisma, similar to the ones I've provided inside Postico and TablePlus.
Is there anything special I need to do when connecting to a PostgreSQL DB in a Docker container?
PORTS
0.0.0.0:4466->4466/tcp
5432/tcp
If you check the PORTS column of the docker ps output, you will see that the Postgres port is not published for use by the host machine.
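The distinction is visible in the PORTS column itself: 0.0.0.0:4466->4466/tcp means published to the host, while a bare 5432/tcp means container-only. A small parser sketch of that column format:

```python
def published_host_port(ports_field, container_port):
    """Return the host port a container port is published on, or None if it
    is only exposed inside the Docker network. ports_field is the PORTS
    column of `docker ps`, e.g. '0.0.0.0:4466->4466/tcp, 5432/tcp'."""
    for part in ports_field.split(","):
        part = part.strip()
        if "->" in part:  # only published ports have a host->container arrow
            host_side, container_side = part.split("->")
            if container_side == f"{container_port}/tcp":
                return int(host_side.rsplit(":", 1)[1])
    return None

print(published_host_port("0.0.0.0:4466->4466/tcp, 5432/tcp", 5432))  # None
print(published_host_port("0.0.0.0:4466->4466/tcp, 5432/tcp", 4466))  # 4466
```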
To solve this, you need to add the following to the docker-compose.yml file:
ports:
- "5432:5432"
Making the full file look like this:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.30-alpha
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        prototype: true
        databases:
          default:
            connector: postgres
            host: postgres
            user: prisma
            password: prisma
            port: 5432
  postgres:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:
and then you need to run docker-compose up -d to apply the new changes. When set up correctly, the PORTS column of docker ps should look like this:
PORTS
0.0.0.0:4466->4466/tcp
0.0.0.0:5432->5432/tcp
I am trying to set up a PostgreSQL container (https://hub.docker.com/_/postgres/). I have some data from a current PostgreSQL instance. I copied it from /var/lib/postgresql/data and want to set it as a volume to a PostgreSQL container.
My part from docker-compose.yml file about PostgreSQL:
db:
  image: postgres:9.4
  ports:
    - 5432:5432
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data
  volumes:
    - /projects/own/docker_php/pgdata:/var/lib/postgresql/data
When I run docker-compose up, I get this message:
db_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
I tried to create my own image from the container, so my Dockerfile is:
FROM postgres:9.4
COPY pgdata /var/lib/postgresql/data
But I got the same error, what am I doing wrong?
Update
I produced an SQL dump using pg_dumpall and put it in /docker-entrypoint-initdb.d, but this file executes every time I do docker-compose up.
To build on irakli's answer, here's an updated solution:
use newer version 2 Docker Compose file
separate volumes section
extra settings deleted
docker-compose.yml
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
demo
Start Postgres database server:
$ docker-compose up
Show all tables in the database. In another terminal, talk to the container's Postgres:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
It'll show nothing, as the database is blank. Create a table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'create table beer()'
List the newly-created table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
Yay! We've now started a Postgres database using a shared storage volume, and stored some data in it. Next step is to check that the data actually sticks around after the server stops.
Now, kill the Postgres server container:
$ docker-compose stop
Start up the Postgres container again:
$ docker-compose up
We expect that the database server will re-use the storage, so our very important data is still there. Check:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
We've successfully used a new-style Docker Compose file to run a Postgres database using an external data volume, and checked that it keeps our data safe and sound.
storing data
First, make a backup, storing our data on the host:
$ docker exec -it $(docker-compose ps -q postgres9 ) pg_dump -Upostgres > backup.sql
Zap our data from the guest database:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'drop table beer'
Restore our backup (stored on the host) into the Postgres container.
Note: use "exec -i", not "-it", otherwise you'll get a "input device is not a TTY" error.
$ docker exec -i $(docker-compose ps -q postgres9 ) psql -Upostgres < backup.sql
List the tables to verify the restore worked:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
To sum up, we've verified that we can start a database, the data persists after a restart, and we can restore a backup into it from the host.
Thanks Tomasz!
It looks like the PostgreSQL image is having issues with mounted volumes. FWIW, it is probably more of a PostgreSQL issue than Docker's, but that doesn't matter, because mounting disks is not a recommended way of persisting database files anyway.
You should be creating data-only Docker containers, instead. Like this:
postgres9:
  image: postgres:9.4
  ports:
    - 5432:5432
  volumes_from:
    - pg_data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data/pgdata
pg_data:
  image: alpine:latest
  volumes:
    - /var/lib/postgresql/data/pgdata
  command: "true"
which I tested and worked fine. You can read more about data-only containers here: Why Docker Data Containers (Volumes!) are Good
As for: how to import initial data, you can either:
docker cp, into the data-only container of the setup, or
Use an SQL dump of the data, instead of moving binary files around (which is what I would do).
To restore your data from an existing dumped .sql file:
Create your docker-compose.yml
version: "3"
services:
  postgres:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_DB: YOUR_DATABASE
      POSTGRES_USER: your_username_if_needed
      POSTGRES_PASSWORD: your_password_if_needed
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
Launch it and stop it: docker-compose up, then docker-compose stop.
Then migrate your data from your dump:
docker exec -i YOUR_CONTAINER_NAME psql -U your_username_if_needed -W -d YOUR_DATABASE < your_dump.sql
-W is only needed if you set a password.
You should now see the console executing the copy.
When it's done, you're good to go: docker-compose up