Cannot connect to postgres client within docker container from OUTSIDE container - postgresql

I am not sure why I cannot connect to the postgres server running in my docker container from OUTSIDE the container.
docker-compose setup
db:
  container_name: postgres-container
  image: postgres:latest
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=liondancer
    - POSTGRES_PASSWORD=postgres
  volumes:
    - ../data/postgres:/var/lib/postgresql/data
With my container running via docker-compose
$ docker exec -it 451b psql -U liondancer
psql (13.1 (Debian 13.1-1.pgdg100+1))
Type "help" for help.
liondancer=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
---------------------+------------+----------+------------+------------+---------------------------
liondancer | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
journey_development | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
journey_test | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | liondancer | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | liondancer | UTF8 | en_US.utf8 | en_US.utf8 | =c/liondancer +
| | | | | liondancer=CTc/liondancer
template1 | liondancer | UTF8 | en_US.utf8 | en_US.utf8 | =c/liondancer +
| | | | | liondancer=CTc/liondancer
(6 rows)
Here is the output of docker ps
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
451b08a85664 postgres:latest "docker-entrypoint.s…" About an hour ago Up 19 minutes 0.0.0.0:5432->5432/tcp postgres-container
However, I want my rails server and pgAdmin (currently NOT in a container) to be able to communicate with the postgres server inside the docker container. I thought that if my rails server, pgAdmin, or psql client connected to 0.0.0.0:5432, it would reach the containerized server.
My attempts to connect have been
$ psql -h 0.0.0.0 -p 5432 -U liondancer -d journey_development
psql: error: FATAL: database "journey_development" does not exist
$ psql postgresql://liondancer:postgres@localhost:5432/
psql: error: FATAL: database "liondancer" does not exist
$ psql postgresql://liondancer:postgres@localhost:5432/journey_development
psql: error: FATAL: database "journey_development" does not exist
In rails database.yml
default: &default
  adapter: postgresql
  encoding: unicode
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: <%= ENV.fetch("POSTGRES_HOST") { "0.0.0.0" } %>
  port: <%= ENV.fetch("POSTGRES_PORT") { "5432" } %>
  username: <%= ENV.fetch("POSTGRES_USER") { "liondancer" } %>
  password: <%= ENV.fetch("POSTGRES_PASSWORD") { "postgres" } %>

development:
  <<: *default
  database: journey_development

test:
  <<: *default
  database: journey_test

Try adding this to the environment variables:
POSTGRES_DB=journey_development
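With that, the db service from the question would look roughly like this (note that POSTGRES_DB only takes effect when the data directory is initialised from scratch):
db:
  container_name: postgres-container
  image: postgres:latest
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_USER=liondancer
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=journey_development
  volumes:
    - ../data/postgres:/var/lib/postgresql/data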
To access from outside the container try this command:
psql journey_development -h localhost -U liondancer
# Password: postgres
You can check out my blog post for a full setup with Node TS, Postgres and Docker for dev and prod builds.
https://yzia2000.github.io/blog/2021/01/15/docker-postgres-node.html

You can have an init script for creating the database and mount that init script to the postgres container.
Example -
db:
  image: postgres:12
  environment:
    POSTGRES_USER: liondancer
    POSTGRES_PASSWORD: postgres
  volumes:
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    - postgres_data:/var/lib/postgresql/data
Create an init.sql file in the same directory as the docker-compose.yml file with the following contents -
CREATE DATABASE journey_development WITH OWNER=liondancer LC_COLLATE='en_US.utf8' LC_CTYPE='en_US.utf8' ENCODING='UTF8';
This init.sql script is executed when the container is first started with an empty data directory, and the database will be created for you.
[Optional]
You can also create a user inside the init.sql using -
create user liondancer;
alter user liondancer with encrypted password 'postgres';
create database journey_development;
grant all privileges on database journey_development to liondancer;
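Keep in mind that scripts in /docker-entrypoint-initdb.d only run when the data directory is empty, so if the container has already been initialised you may need to recreate the volume first (destructive, so development only). Roughly:
docker-compose down -v    # removes named volumes such as postgres_data
# or delete the bind-mounted host data directory if you use one
docker-compose up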

Related

Postgres inside docker compose refusing connection from my server

I have a Golang server running alongside a Postgres instance inside docker compose. For some reason Postgres is refusing the connection. From all of my previous searches, the problem is usually a typo, not exposing the port, SSL settings and so on, but I don't have anything like that going on and I'm still having this issue.
version: "3.2"
services:
  ingress:
    image: jwilder/nginx-proxy
    ports:
      - "3000:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  auth-service:
    depends_on:
      - rabbitmq
      - auth-db
      - ingress
    build: ./auth
    container_name: "auth-service"
    ports:
      - 3001:3000
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_HOST=auth-db
      - POSTGRES_DB=auth-dev
      - POSTGRES_PORT=5435
      - PORT=3000
      - RABBITMQ_USER=guest
      - RABBITMQ_PASSWORD=guest
      - RABBITMQ_HOST=rabbitmq
      - RABBITMQ_PORT=5672
      - VIRTUAL_HOST=api.twitchy.dev
      - VIRTUAL_PATH=/v1/auth/
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    # networks:
    #   - rabbitmq_net
    #   - default
  rabbitmq:
    image: rabbitmq:3-management-alpine
    container_name: "rabbitmq"
    ports:
      - 5672:5672
      - 15672:15672
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq/
      - rabbitmq_log:/var/log/rabbitmq/
    # networks:
    #   - rabbitmq_net
  auth-db:
    image: postgres:14.1-alpine
    restart: always
    container_name: "auth-db"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=auth-dev
    ports:
      - "5435:5432"
    volumes:
      - db:/var/lib/postgresql/data
  chat-db:
    image: postgres:14.1-alpine
    restart: always
    container_name: "chat-db"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=chat-dev
    ports:
      - "5434:5432"
    volumes:
      - db:/var/lib/postgresql/data
# networks:
#   rabbitmq_net:
#     driver: bridge
volumes:
  db:
    driver: local
  rabbitmq_data:
  rabbitmq_log:
This is the error I am getting
auth-service | Retrying connection to database...
auth-service | failed to connect to `host=auth-db user=postgres database=auth-dev`: dial error (dial tcp 172.23.0.3:5435: connect: connection refused)
And my Golang code used to connect to the DB (using pgx)
dbUrl := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
    os.Getenv("POSTGRES_USER"),
    os.Getenv("POSTGRES_PASSWORD"),
    os.Getenv("POSTGRES_HOST"),
    os.Getenv("POSTGRES_PORT"),
    os.Getenv("POSTGRES_DB"))
This is why I am confused
The ports match up, I expose 5435 from postgres, and I connect to 5435
The host should be correct as I am referencing the auth-db service name and they are on the same default network so that should be fine
The password and username match up
The POSTGRES_DB also matches up, the default database should be auth-dev
POSTGRES_DB
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
I have sslmode=disable as well
Is there anything else that can cause the connection to be refused?
I tried changing the db to template1 and postgres, as they are created by default, but neither of them works either.
54511e50369c postgres:14.1-alpine "docker-entrypoint.s…" 16 minutes ago Up 16 seconds 0.0.0.0:5435->5432/tcp, :::5435->5432/tcp auth-db
docker exec -it 54511e50369c psql -U postgres
psql (14.1)
Type "help" for help.
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+------------+------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
The database is ready when I am trying to connect (I am retrying 20 times and restarting the service if it crashes, so it should be available).
When you map ports in docker-compose, say like "5435:5432", you are mapping port 5435 on the HOST machine to 5432 on the CONTAINER. However, your db url in the auth-service definition is using the name of the service, auth-db, so you are actually hitting the db container directly, not going through the host machine. Because the db container does not expose 5435, you are unable to connect using port 5435.
If you were to try to connect to the database from your host machine for example, you would probably be successful using port 5435 and localhost.
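In other words, keep the service name as the host but point POSTGRES_PORT back at the container-side port. A sketch of the corrected auth-service environment from the compose file above:
environment:
  - POSTGRES_HOST=auth-db
  - POSTGRES_PORT=5432   # the port inside the container, not the published 5435
The published port 5435 only matters when connecting from the host, e.g. psql -h localhost -p 5435 -U postgres -d auth-dev.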

How can I check postgres database in docker volume?

I followed the tutorial from prisma.io to get started building a local server.
Here is the docker-compose.yml:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: postgres
            host: postgres
            port: 5432
            user: prisma
            password: prisma
  postgres:
    image: postgres:10.3
    restart: always
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres: ~
I built two docker containers, one is the prisma server, the other is the postgres database.
As I understood it, after running prisma deploy the model User should create a users table in the database.
But when I tried to check the schema in the database I got this result:
docker exec -it myContainer psql -U prisma
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+------------+------------+-----------------------
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
prisma | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
prisma=# \z // or postgres=# \z
Access privileges
Schema | Name | Type | Access privileges | Column privileges | Policies
--------+------+------+-------------------+-------------------+----------
(0 rows)
prisma=# \dt or postgres=# \dt
Did not find any relations.
Then I tried to check the volume folder in the VM machine
docker run -it --rm --privileged --pid=host justincormack/nsenter1
/var/lib/docker/volumes/first_prisma_postgres/_data # ls
PG_VERSION pg_commit_ts pg_ident.conf pg_notify pg_snapshots pg_subtrans pg_wal postgresql.conf
base pg_dynshmem pg_logical pg_replslot pg_stat pg_tblspc pg_xact postmaster.opts
global pg_hba.conf pg_multixact pg_serial pg_stat_tmp pg_twophase postgresql.auto.conf postmaster.pid
The data definitely exists on the VM, but how can I check the data or make a dump backup from it?
Once you are connected to postgres in your container, you can execute normal queries.
Example:
\l to display all the databases
\dt to display all tables.
Maybe connecting to the right database is the step you are missing.
Run \c database_name to connect to the db.
Once you are connected, you can execute any normal queries.
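Putting that together for the container in the question, assuming the Prisma tables were created in the prisma database:
docker exec -it myContainer psql -U prisma -d prisma
prisma=# \dt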
To check all volumes you can run:
docker volume ls
To check one of them:
docker volume inspect postgres_db
where postgres_db is the name of your volume.
As a result you may see something like this:
[
    {
        "CreatedAt": "2022-05-05T14:24:04Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "s-deployment",
            "com.docker.compose.version": "2.4.1",
            "com.docker.compose.volume": "postgres_db"
        },
        "Mountpoint": "/var/lib/docker/volumes/s-deployment_postgres_db/_data",
        "Name": "s-deployment_postgres_db",
        "Options": null,
        "Scope": "local"
    }
]
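To actually read or back up the data rather than poking at the raw files in the volume, a dump taken from inside the container is usually easier (container name from the question; flags may need adjusting):
docker exec -t myContainer pg_dumpall -U prisma > backup.sql
# or a single database
docker exec -t myContainer pg_dump -U prisma prisma > prisma.sql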

Connecting to PostgreSQL: Postico and TablePlus DB GUIs can't connect but `psql` in Docker works (FATAL: password authentication failed for user)

I have deployed my PostgreSQL database locally with Docker. This is my Docker Compose file:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.30-alpha
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        prototype: true
        databases:
          default:
            connector: postgres
            host: postgres
            user: prisma
            password: prisma
            port: 5432
  postgres:
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:
I've started this with docker-compose up -d.
The Docker containers are running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
26b14120f89f prismagraphql/prisma:1.30-alpha "/bin/sh -c /app/sta…" 17 minutes ago Up 17 minutes 0.0.0.0:4466->4466/tcp newdm1_prisma_1
05dfdaeaf609 postgres "docker-entrypoint.s…" 17 minutes ago Up 17 minutes 5432/tcp newdm1_postgres_1
Now, for some reason I can't connect using a database GUI (having tried Postico and TablePlus). In both clients, I get the following error:
FATAL: password authentication failed for user "prisma"
I'm 100% sure that I'm providing prisma as the password as specified in the Docker Compose file.
Also, when I'm connecting to the database using psql from inside the postgres Docker container, it does work:
docker exec -it 05dfdaeaf609 bash
Then inside the Docker container I do this:
root@05dfdaeaf609:/# psql -U prisma -W prisma
Password:
psql (11.1 (Debian 11.1-1.pgdg90+1))
Type "help" for help.
prisma=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+--------+----------+------------+------------+-------------------
postgres | prisma | UTF8 | en_US.utf8 | en_US.utf8 |
prisma | prisma | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | prisma | UTF8 | en_US.utf8 | en_US.utf8 | =c/prisma +
| | | | | prisma=CTc/prisma
template1 | prisma | UTF8 | en_US.utf8 | en_US.utf8 | =c/prisma +
| | | | | prisma=CTc/prisma
(4 rows)
prisma=# \c prisma
Password for user prisma:
You are now connected to database "prisma" as user "prisma".
The password I provided inside psql was prisma, the same as the one I provided in Postico and TablePlus.
Is there anything special I need to do when connecting to a PostgreSQL DB in a Docker container?
PORTS
0.0.0.0:4466->4466/tcp
5432/tcp
If you check the PORTS column of the docker ps output, you will see that the Postgres port is not published to the host machine.
To solve this, you need to add the following to the docker-compose.yml file:
ports:
- "5432:5432"
Making the full file look like this:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.30-alpha
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        prototype: true
        databases:
          default:
            connector: postgres
            host: postgres
            user: prisma
            password: prisma
            port: 5432
  postgres:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: prisma
      POSTGRES_PASSWORD: prisma
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:
and then you need to run docker-compose up -d to apply the new changes. When set up correctly, the PORTS column of docker ps should look like this:
PORTS
0.0.0.0:4466->4466/tcp
0.0.0.0:5432->5432/tcp
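With the port published, a quick test from the host using the credentials from the compose file would be something like:
psql -h localhost -p 5432 -U prisma -d prisma
# password: prisma
The same host, port, user and password then go into Postico or TablePlus.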

Connect to postgres in docker container from host machine

How can I connect to postgres in docker from a host machine?
docker-compose.yml
version: '2'
networks:
  database:
    driver: bridge
services:
  app:
    build:
      context: .
      dockerfile: Application.Dockerfile
    env_file:
      - docker/Application/env_files/main.env
    ports:
      - "8060:80"
    networks:
      - database
    depends_on:
      - appdb
  appdb:
    image: postdock/postgres:1.9-postgres-extended95-repmgr32
    environment:
      POSTGRES_PASSWORD: app_pass
      POSTGRES_USER: www-data
      POSTGRES_DB: app_db
      CLUSTER_NODE_NETWORK_NAME: appdb
      NODE_ID: 1
      NODE_NAME: node1
    ports:
      - "5432:5432"
    networks:
      database:
        aliases:
          - database
docker-compose ps
Name Command State Ports
-----------------------------------------------------------------------------------------------------
appname_app_1 /bin/sh -c /app/start.sh Up 0.0.0.0:8060->80/tcp
appname_appdb_1 docker-entrypoint.sh /usr/ ... Up 22/tcp, 0.0.0.0:5432->5432/tcp
From inside the containers I can connect successfully, both from the app container and from the db container.
List of dbs and users from running psql inside the container:
# psql -U postgres
psql (9.5.13)
Type "help" for help.
postgres=# \du
List of roles
Role name | Attributes | Member of
------------------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
replication_user | Superuser, Create role, Create DB, Replication | {}
www-data | Superuser | {}
postgres=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
----------------+------------------+----------+------------+------------+-----------------------
app_db | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
replication_db | replication_user | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
The DB image is not the official postgres image, but its Dockerfile on GitHub looks fine.
cat /var/lib/postgresql/data/pg_hba.conf from DB container:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres trust
#host replication postgres 127.0.0.1/32 trust
#host replication postgres ::1/128 trust
host all all all md5
host replication replication_user 0.0.0.0/0 md5
I tried both users with no luck
$ psql -U postgres -h localhost
psql: FATAL: role "postgres" does not exist
$ psql -h localhost -U www-data appdb -W
Password for user www-data:
psql: FATAL: role "www-data" does not exist
It looks like there is already PostgreSQL running on that port on my host machine. How can I check that?
I believe the problem is that you already have postgres running on the local machine at port 5432.
The issue can be resolved by mapping port 5432 of the docker container to another port on the host machine.
This can be achieved by making a change in docker-compose.yml:
Change
"5432:5432"
to
"5433:5432"
Restart docker-compose
Now the docker container postgres is running on 5433. (Locally installed postgres is on 5432)
You can try connecting to the docker container.
psql -p 5433 -d db_name -U user -h localhost
I ran this on Ubuntu 16.04
$ psql -h localhost -U www-data app_db
Password for user www-data:
psql (9.5.13)
Type "help" for help.
app_db=# \du
List of roles
Role name | Attributes | Member of
------------------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
replication_user | Superuser, Create role, Create DB, Replication | {}
www-data | Superuser | {}
And below from my mac to the VM inside which docker was running (192.168.33.100 is the IP address of the docker VM)
$ psql -h 192.168.33.100 -U www-data app_db
Password for user www-data:
psql (9.6.9, server 9.5.13)
Type "help" for help.
app_db=# \du
List of roles
Role name | Attributes | Member of
------------------+------------------------------------------------------------+-----------
postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
replication_user | Superuser, Create role, Create DB, Replication | {}
www-data | Superuser | {}
They both work for me.
PSQL version on VM
$ psql --version
psql (PostgreSQL) 9.5.13
PSQL version on Mac
$ psql --version
psql (PostgreSQL) 9.6.9
I had the same problem on elementary OS 5.0 while running the official postgres 10 docker container. I was able to connect from the host using the IP of the running container.
sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name/id
Test after getting the ip:
psql -h 172.17.0.2 -U postgres
I believe you have an issue in pg_hba.conf. Here you've specified 1 host that has access - 127.0.0.1/32.
You can change it to this:
# IPv4 local connections:
host all all 0.0.0.0/0 md5
This will make sure your host (totally different IP) can connect.
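Note that after editing pg_hba.conf inside the container you still need to reload the configuration (or simply restart the container) for the change to take effect, for example:
docker exec -it appname_appdb_1 psql -U postgres -c 'SELECT pg_reload_conf();'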
To check if there is an instance of postgresql already running, you can run:
netstat -plnt | grep 5432
If you get any result from this, you can take the PID and inspect the process itself.
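On systems without netstat, ss or lsof give the same information (output format varies):
sudo ss -tlnp | grep 5432
sudo lsof -i :5432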
I have a relatively similar setup, and the following works for me to open a psql session on the host machine into the docker postgres instance:
docker-compose run --rm db psql -h db -U postgres -d app_development
Where:
db is the name of the container
postgres is the name of the user
app_development is the name of the database
So for you, it would look like docker-compose run --rm appdb psql -h appdb -U www-data -d app_db.
Since you’re running it in OSX, you can always use the pre-installed Network Utility app to run a Port Scan on your host and identify if the postgres server is running (and if yes, on which port).
But I don’t think you have one running on your host. The problem is that Postgres runs on 5432 by default, and the docker-compose file you are trying to run publishes the db container on the same port, i.e. 5432. If a Postgres server were already running on your host, Docker would have tried to bind a container to a port that is already in use and would have given an error.
Another potential solution:
As can be seen in this answer, MySQL opens a unix socket with localhost rather than a TCP socket. Maybe something similar is happening here.
Try using 127.0.0.1 instead of localhost while connecting to the server in the container.
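For the setup in the question that would be, for example:
psql -h 127.0.0.1 -p 5432 -U www-data -d app_db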

How to use a PostgreSQL container with existing data?

I am trying to set up a PostgreSQL container (https://hub.docker.com/_/postgres/). I have some data from an existing PostgreSQL instance. I copied it from /var/lib/postgresql/data and want to mount it as a volume into the PostgreSQL container.
My part from docker-compose.yml file about PostgreSQL:
db:
  image: postgres:9.4
  ports:
    - 5432:5432
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data
  volumes:
    - /projects/own/docker_php/pgdata:/var/lib/postgresql/data
When I run docker-compose up I get this message:
db_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
I tried to create my own image from the container, so my Dockerfile is:
FROM postgres:9.4
COPY pgdata /var/lib/postgresql/data
But I got the same error, what am I doing wrong?
Update
I created an SQL dump using pg_dumpall and put it in /docker-entrypoint-initdb.d, but this file executes every time I run docker-compose up.
To build on irakli's answer, here's an updated solution:
use newer version 2 Docker Compose file
separate volumes section
extra settings deleted
docker-compose.yml
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
demo
Start Postgres database server:
$ docker-compose up
Show all tables in the database. In another terminal, talk to the container's Postgres:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
It'll show nothing, as the database is blank. Create a table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'create table beer()'
List the newly-created table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
Yay! We've now started a Postgres database using a shared storage volume, and stored some data in it. Next step is to check that the data actually sticks around after the server stops.
Now, kill the Postgres server container:
$ docker-compose stop
Start up the Postgres container again:
$ docker-compose up
We expect that the database server will re-use the storage, so our very important data is still there. Check:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
We've successfully used a new-style Docker Compose file to run a Postgres database using an external data volume, and checked that it keeps our data safe and sound.
storing data
First, make a backup, storing our data on the host:
$ docker exec -it $(docker-compose ps -q postgres9 ) pg_dump -Upostgres > backup.sql
Zap our data from the guest database:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'drop table beer'
Restore our backup (stored on the host) into the Postgres container.
Note: use "exec -i", not "-it", otherwise you'll get a "input device is not a TTY" error.
$ docker exec -i $(docker-compose ps -q postgres9 ) psql -Upostgres < backup.sql
List the tables to verify the restore worked:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
To sum up, we've verified that we can start a database, the data persists after a restart, and we can restore a backup into it from the host.
Thanks Tomasz!
It looks like the PostgreSQL image is having issues with mounted volumes. FWIW, it is probably more of a PostgreSQL issue than Docker's, but that doesn't matter because mounting disks is not a recommended way of persisting database files anyway.
You should be creating data-only Docker containers, instead. Like this:
postgres9:
  image: postgres:9.4
  ports:
    - 5432:5432
  volumes_from:
    - pg_data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data/pgdata
pg_data:
  image: alpine:latest
  volumes:
    - /var/lib/postgresql/data/pgdata
  command: "true"
which I tested and worked fine. You can read more about data-only containers here: Why Docker Data Containers (Volumes!) are Good
As for how to import the initial data, you can either:
docker cp it into the data-only container of the setup, or
use an SQL dump of the data instead of moving binary files around (which is what I would do).
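A sketch of that dump-based route, assuming the old instance is still reachable on the host (file names are illustrative):
# dump everything from the existing instance
pg_dumpall -U postgres > dump.sql
Then either mount dump.sql into /docker-entrypoint-initdb.d so it runs on first start, or feed it to psql once the new container is up:
docker exec -i <container> psql -U postgres < dump.sql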
To restore your data from an existing dump.sql file:
Create your docker-compose.yml
version: "3"
services:
  postgres:
    image: postgres
    container_name: postgres
    environment:
      POSTGRES_DB: YOUR_DATABASE
      POSTGRES_USER: your_username_if_needed
      POSTGRES_PASSWORD: your_password_if_needed
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
Launch it and stop it: docker-compose up, then docker-compose stop.
Then migrate your data from your dump:
docker exec -i YOUR_CONTAINER_NAME psql -U your_username_if_needed -W -d YOUR_DATABASE < your_dump.sql
-W is only needed if you set a password.
You should now see the restore running in your console.
When it's done, you're good to go: docker-compose up
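To confirm the restore worked, listing the tables should show the imported schema (names taken from the compose file above):
docker exec -it postgres psql -U your_username_if_needed -d YOUR_DATABASE -c '\dt'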