Load Postgres dump after docker-compose up - postgresql

I have a dump.sql file that I would like to load with docker-compose.
docker-compose.yml:
services:
  postgres:
    environment:
      POSTGRES_DB: my_db_name
      POSTGRES_USER: my_name
      POSTGRES_PASSWORD: my_password
    build:
      context: .
      dockerfile: ./devops/db/Dockerfile.db
My Dockerfile.db is really simple at the moment:
FROM postgres
MAINTAINER me <me@me.me>
COPY ./devops/db ./devops/db
WORKDIR ./devops/db
I would like to run a command like psql my_db_name < dump.sql at some point. The problem with running such a script from Dockerfile.db is that it executes at build time, before docker-compose up, so the database is not running yet.
Any idea how to do this?

Reading https://hub.docker.com/_/postgres/, the section 'Extend this image' explains that any .sql file in /docker-entrypoint-initdb.d is executed when a container is first started with an empty data directory, not at build time.
I just needed to change my Dockerfile.db to:
FROM postgres
ADD ./devops/db/dummy_dump.sql /docker-entrypoint-initdb.d
And it works!
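One caveat worth noting: the init scripts only run while the data directory is empty, so a container that was started before will not pick up the dump until its old volume is cleared. A minimal sketch (this wipes the database, so development only):
docker-compose down -v      # remove containers and their volumes
docker-compose up --build   # rebuild; initdb runs the .sql again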

Another option that doesn't require a Dockerfile is to mount your SQL file into the docker-entrypoint-initdb.d folder using the volumes attribute of docker-compose.
The official postgres image https://hub.docker.com/_/postgres/ imports and executes all SQL files placed in that folder during first-time initialization. So something like:
services:
  postgres:
    environment:
      POSTGRES_DB: my_db_name
      POSTGRES_USER: my_name
      POSTGRES_PASSWORD: my_password
    volumes:
      - ./devops/db/dummy_dump.sql:/docker-entrypoint-initdb.d/dummy_dump.sql
This will automatically populate the specified POSTGRES_DB for you.
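To confirm the dump was applied, you can list the tables once the container is up (service name and credentials as in the snippet above):
docker-compose exec postgres psql -U my_name -d my_db_name -c '\dt'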

sudo docker exec -i postgres psql -U postgres my_db_name < dump.sql
(Note the -i flag: without it, stdin is not attached and the dump is never piped to psql.)

You can use pg_restore inside the container:
cat ${BACKUP_SQL_FILE} | docker exec -i ${CONTAINER_NAME} pg_restore \
  --verbose \
  --clean \
  --no-acl \
  --no-owner \
  -U ${USER} \
  -d ${DATABASE}
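For example, with the placeholders filled in (these values are illustrative):
cat backup.dump | docker exec -i postgres pg_restore \
  --verbose --clean --no-acl --no-owner \
  -U postgres -d my_db_name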

After docker-compose up, run docker ps; it lists the active containers, and from that you can get the container ID.
Then:
docker exec -i ${CONTAINER_ID} psql -U ${USER} < ${SQL_FILE}
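If you'd rather not copy the ID by hand, docker-compose can resolve it for you (assuming the service is named postgres, as in the question):
CONTAINER_ID=$(docker-compose ps -q postgres)
docker exec -i "${CONTAINER_ID}" psql -U my_name my_db_name < dump.sql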

CONTAINER_NAME="postgres"
DB_USER=postgres
LOCAL_DUMP_PATH="..."
docker run -d --name "${CONTAINER_NAME}" -e POSTGRES_PASSWORD=postgres postgres  # -d to run in the background; the image also requires a password (or POSTGRES_HOST_AUTH_METHOD)
docker exec -i "${CONTAINER_NAME}" psql -U "${DB_USER}" < "${LOCAL_DUMP_PATH}"
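Note that the server needs a moment after docker run before it accepts connections. One way to wait is pg_isready, which ships in the postgres image (a sketch):
until docker exec "${CONTAINER_NAME}" pg_isready -U "${DB_USER}" >/dev/null 2>&1; do
  sleep 1
done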

In order to restore from a dump, I use a shell script to restore the database.
If you put a custom-format dump directly in docker-entrypoint-initdb.d, it fails with the error "The input is a PostgreSQL custom-format dump. Use the pg_restore command-line client to restore this dump to a database."
docker-compose.yaml
version: "3.9"
services:
db:
container_name: postgis_my_db_name
image: postgis/postgis:14-3.3
ports:
- "5430:5432"
# restart: always
volumes:
- ./my_db_name.sql:/my_db_name.sql
- ./restore.sh:/docker-entrypoint-initdb.d/restore.sh
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: my_password
POSTGRES_DB: my_db_name
restore.sh
pg_restore -d my_db_name my_db_name.sql
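For reference, a custom-format dump that pg_restore can consume is produced with pg_dump -Fc (a sketch; names match the compose file above):
pg_dump -U postgres -Fc my_db_name > my_db_name.sql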

You can also do it without a Dockerfile:
# start the db container
docker compose start $db_container
# dump the initial database
docker compose exec db pg_dump -U $user -Fc $database > $dump_file
# get the container id
docker ps
# copy the dump into the container
docker cp $dump_file $container_id:/var
# drop the existing database
docker compose exec $db_container dropdb -U $user $database
# use pg_restore to recreate (-C) and restore the database
docker compose exec $db_container pg_restore -U $user -C -d postgres /var/$dump_file
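An aside not in the original answer: if dropdb refuses because sessions are still connected, PostgreSQL 13+ can disconnect them forcibly:
docker compose exec $db_container dropdb -U $user --force $database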

Related

How to populate PostgreSQL database running inside a Docker container using Docker compose [duplicate]


Passing a sql backup file through the docker daemon to populate a docker database

Context
I need to populate a database inside a docker container from a backup file that I have on the host machine.
I've tried this docker command while the PostGIS container is up (see docker-compose.yml at the end):
sudo docker exec -i db_container_1 ./usr/local/bin/pg_restore --no-owner --role=postgres -h localhost -U postgres -p 5434 -d database_name < ../db/dumps/dump_prod_2020.backup
But I've got this message:
read unix @->/var/run/docker.sock: read: connection reset by peer
As documented here, I also tried using this docker-compose command but it raises the exact same strange message:
docker-compose exec -T db /usr/local/bin/pg_restore --no-owner --role=postgres -h localhost -U postgres -p 5434 -d database_name < ../db/dumps/dump_prod_2020.backup
Question
What am I doing wrong and how could I populate my docker database with my local dump?
More info
Here's the docker-compose.yml used to start the db service (docker ps outputs db_container_1 as the corresponding container name):
version: '3.6'
volumes:
  db_data:
services:
  db:
    image: mdillon/postgis:11-alpine
    environment:
      POSTGRES_DB: database_name
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - '${DB_PORT:-5434}:5432'
    restart: 'no'
    volumes:
      - './docker/db:/docker-entrypoint-initdb.d:ro'
      - 'db_data:/var/lib/postgresql/data' # to persist storage
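One thing worth checking (a hedged note, not from the original thread): the ports mapping publishes host port ${DB_PORT:-5434} to container port 5432, so inside the container postgres listens on 5432, and the -p 5434 flag in the exec'd pg_restore points at the wrong port. Dropping it should work:
sudo docker exec -i db_container_1 pg_restore --no-owner --role=postgres \
  -U postgres -d database_name < ../db/dumps/dump_prod_2020.backup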

Docker compose cannot run command service not running

I created the docker-compose.yml whose content you can find below. I navigated to the folder where the file resides and ran:
docker-compose up -d
This was shown:
Starting postgres ... done
Then I ran:
docker-compose ps
Result:
  Name                Command               State    Ports
-----------------------------------------------------------
postgres   docker-entrypoint.sh postgres    Exit 1
Now I wanted to run a command:
docker exec -it postgres psql -h localhost -p 54320 -U robert
This is what i get:
Error response from daemon: Container ae1565a84bcf0b3662b47d4f277efd2830273554b6bcf4437129e33b31c88b35 is not running
Is my container not running? Any support appreciated.
docker-compose.yml:
version: "3"
services:
# Create a service named db.
db:
# Use the Docker Image postgres. This will pull the newest release.
image: "postgres"
# Give the container the name my_postgres. You can changes to something else.
container_name: "postgres"
# Setup the username, password, and database name. You can changes these values.
environment:
- POSTGRES_USER=robert
- POSTGRES_PASSWORD=robert
- POSTGRES_DB=mydb
# Maps port 54320 (localhost) to port 5432 on the container. You can change the ports to fix your needs.
ports:
- "54320:5432"
# Set a volume some that database is not lost after shutting down the container.
# I used the name postgres-data but you can changed it to something else.
volumes:
- ./volumes/postgres:/var/lib/postgresql/data
Can you attempt to run this instead?
docker run -it postgres psql -h localhost -p 54320 -U robert
$ docker exec --help
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
Since your container has the status exit, you can't use docker exec
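To find out why it exited, inspect the container logs (a generic diagnostic step, not from the original answer):
docker logs postgres       # by container name
docker-compose logs db     # or via compose, by service name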
Can you use this docker-compose file?
version: "3"
volumes:
postgres_app: ~
services:
# Create a service named db.
postgres:
image: "postgres"
environment:
POSTGRES_USER: robert
POSTGRES_PASSWORD: robert
POSTGRES_DB: "mydb"
volumes:
- "postgres_app:/var/lib/postgresql/data"
ports:
- "54320:5432"
restart: always
Then run docker-compose exec postgres psql -U robert -d mydb.
I hope this will help!
I executed this file on my computer.

how do i backup a database in docker

I'm running my app using docker-compose with the yml file below:
postgres:
  container_name: postgres
  image: postgres:${POSTGRES_VERSION}
  volumes:
    - postgresdata:/var/lib/postgresql/data
  expose:
    - "5432"
  environment:
    - POSTGRES_DB=42EXP
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
node:
  container_name: node
  links:
    - postgres:postgres
  depends_on:
    - postgres
volumes:
  postgresdata:
As you can see here, I'm using a named volume to manage postgres state.
According to the official docs, I can back up a volume like this:
docker run --rm -v postgresdata:/var/lib/postgresql/data -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/postgresql/data
Some other tutorials suggested I use the pg_dump utility provided by postgres for backups:
pg_dump -Fc database_name_here > database.bak
I guess I would have to go inside the postgres container to run it and mount the backup directory to the host.
Is one approach better/preferable than the other?
To run pg_dump you can use the docker exec command.
To back up:
docker exec -u <your_postgres_user> <postgres_container_name> pg_dump -Fc <database_name_here> > db.dump
To drop the db (don't do this in production; for test purposes only!):
docker exec -u <your_postgres_user> <postgres_container_name> psql -c 'DROP DATABASE <your_db_name>'
To restore:
docker exec -i -u <your_postgres_user> <postgres_container_name> pg_restore -C -d postgres < db.dump
You can also use the docker-compose analog of exec; in that case you can use the short service name (postgres) instead of the full container name (composeproject_postgres).
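For example, with compose and the service above (note -T disables pseudo-TTY allocation so the redirected dump stays clean; names assumed):
docker-compose exec -T -u postgres postgres pg_dump -Fc 42EXP > db.dump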
Since you have
expose:
  - "5432"
you can run
pg_dump -U <user> -h localhost -Fc <db_name> > 1.dump
pg_dump connects to port 5432; since postgres in the container is listening on it, you will dump the db from the container. (Note that expose only makes the port reachable from other containers; connecting from the host requires a ports mapping.)
You can also run
docker-compose exec -T postgres sh -c 'pg_dump -cU $POSTGRES_USER $POSTGRES_DB' | gzip > netbox.sql.gz
where netbox.sql.gz is the name of the backup file.
Restoring would be:
gunzip -c netbox.sql.gz | docker-compose exec -T postgres sh -c 'psql -U $POSTGRES_USER $POSTGRES_DB'
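If you want this to run nightly, a crontab sketch (schedule and paths are assumptions; note that % must be escaped in crontab):
0 2 * * * cd /path/to/project && docker-compose exec -T postgres sh -c 'pg_dump -cU $POSTGRES_USER $POSTGRES_DB' | gzip > /backups/netbox-$(date +\%F).sql.gz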

How to use a PostgreSQL container with existing data?

I am trying to set up a PostgreSQL container (https://hub.docker.com/_/postgres/). I have some data from a current PostgreSQL instance. I copied it from /var/lib/postgresql/data and want to set it as a volume to a PostgreSQL container.
My part from docker-compose.yml file about PostgreSQL:
db:
  image: postgres:9.4
  ports:
    - 5432:5432
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data
  volumes:
    - /projects/own/docker_php/pgdata:/var/lib/postgresql/data
When I run docker-compose up I get this message:
db_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
I tried to create my own image from the container, so my Dockerfile is:
FROM postgres:9.4
COPY pgdata /var/lib/postgresql/data
But I got the same error, what am I doing wrong?
Update
I got an SQL dump using pg_dumpall and put it in /docker-entrypoint-initdb.d, but this file executes every time I run docker-compose up. (Those init scripts run whenever the data directory is empty, so without a persisted volume they re-run for every fresh container.)
To build on irakli's answer, here's an updated solution:
use newer version 2 Docker Compose file
separate volumes section
extra settings deleted
docker-compose.yml
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
demo
Start Postgres database server:
$ docker-compose up
Show all tables in the database. In another terminal, talk to the container's Postgres:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
It'll show nothing, as the database is blank. Create a table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'create table beer()'
List the newly-created table:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
Yay! We've now started a Postgres database using a shared storage volume, and stored some data in it. Next step is to check that the data actually sticks around after the server stops.
Now, kill the Postgres server container:
$ docker-compose stop
Start up the Postgres container again:
$ docker-compose up
We expect that the database server will re-use the storage, so our very important data is still there. Check:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
We've successfully used a new-style Docker Compose file to run a Postgres database using an external data volume, and checked that it keeps our data safe and sound.
storing data
First, make a backup, storing our data on the host:
$ docker exec -it $(docker-compose ps -q postgres9 ) pg_dump -Upostgres > backup.sql
Zap our data from the guest database:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c 'drop table beer'
Restore our backup (stored on the host) into the Postgres container.
Note: use "exec -i", not "-it", otherwise you'll get a "input device is not a TTY" error.
$ docker exec -i $(docker-compose ps -q postgres9 ) psql -Upostgres < backup.sql
List the tables to verify the restore worked:
$ docker exec -it $(docker-compose ps -q postgres9 ) psql -Upostgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
To sum up, we've verified that we can start a database, the data persists after a restart, and we can restore a backup into it from the host.
Thanks Tomasz!
It looks like the PostgreSQL image is having issues with mounted volumes. FWIW, it is probably more of a PostgreSQL issue than Docker's, but that doesn't matter, because mounting disks is not a recommended way of persisting database files anyway.
You should be creating data-only Docker containers, instead. Like this:
postgres9:
  image: postgres:9.4
  ports:
    - 5432:5432
  volumes_from:
    - pg_data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data/pgdata
pg_data:
  image: alpine:latest
  volumes:
    - /var/lib/postgresql/data/pgdata
  command: "true"
which I tested and worked fine. You can read more about data-only containers here: Why Docker Data Containers (Volumes!) are Good
As for: how to import initial data, you can either:
docker cp, into the data-only container of the setup, or
Use an SQL dump of the data, instead of moving binary files around (which is what I would do).
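If you take the SQL-dump route, a minimal sketch (connection details assumed; the resulting file can then go into /docker-entrypoint-initdb.d, as in the question's update):
pg_dumpall -U postgres -h localhost -p 5432 > dump.sql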
To restore your data from an existing dumped .sql file:
Create your docker-compose.yml:
version: "3"
services:
postgres:
image: postgres
container_name: postgres
environment:
POSTGRES_DB: YOUR_DATABASE
POSTGRES_USER: your_username_if_needed
POSTGRES_PASSWORD: your_password_if_needed
expose:
- 5432
volumes:
- data:/var/lib/postgresql/data
volumes:
data: {}
Launch it and stop it: docker-compose up, then docker-compose stop.
Then migrate your data from your dump:
docker exec -i YOUR_CONTAINER_NAME psql -U your_username_if_needed -W -d YOUR_DATABASE < your_dump.sql
-W is only needed if you set a password.
You should now see your console executing the copy.
When it's done, you're good to go: docker-compose up.
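To verify the restore once the container is back up (names as in the compose file above):
docker exec -it postgres psql -U your_username_if_needed -d YOUR_DATABASE -c '\dt'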