How to use a PostgreSQL container with existing data?

I am trying to set up a PostgreSQL container (https://hub.docker.com/_/postgres/). I have data from an existing PostgreSQL instance: I copied the contents of /var/lib/postgresql/data and want to mount it as a volume into the PostgreSQL container.
The relevant part of my docker-compose.yml:
db:
  image: postgres:9.4
  ports:
    - 5432:5432
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data
  volumes:
    - /projects/own/docker_php/pgdata:/var/lib/postgresql/data
When I run docker-compose up I get this message:
db_1 | initdb: directory "/var/lib/postgresql/data" exists but is not empty
db_1 | If you want to create a new database system, either remove or empty
db_1 | the directory "/var/lib/postgresql/data" or run initdb
db_1 | with an argument other than "/var/lib/postgresql/data".
I tried to create my own image from the container, so my Dockerfile is:
FROM postgres:9.4
COPY pgdata /var/lib/postgresql/data
But I got the same error. What am I doing wrong?
Update
I dumped the SQL with pg_dumpall and put the file in /docker-entrypoint-initdb.d, but it executes every time I run docker-compose up.
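For what it's worth, the official image only runs the /docker-entrypoint-initdb.d scripts during first-time initialization, i.e. when the data directory is empty. If they execute on every docker-compose up, the data directory is most likely not being persisted between runs; if you are using a named volume, you can confirm it exists and survives restarts (the volume name is a placeholder):
docker volume ls                      # is there a volume for the data directory at all?
docker volume inspect <volume_name>  # does it point where you expect?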

To build on irakli's answer, here's an updated solution:
- uses the newer, version 2 Docker Compose file format
- a separate volumes section
- extra settings removed
docker-compose.yml
version: '2'
services:
  postgres9:
    image: postgres:9.4
    expose:
      - 5432
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data: {}
Demo
Start Postgres database server:
$ docker-compose up
In another terminal, talk to the container's Postgres and show all tables in the database:
$ docker exec -it $(docker-compose ps -q postgres9) psql -U postgres -c '\z'
It'll show nothing, as the database is blank. Create a table:
$ docker exec -it $(docker-compose ps -q postgres9) psql -U postgres -c 'create table beer()'
List the newly-created table:
$ docker exec -it $(docker-compose ps -q postgres9) psql -U postgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
Yay! We've now started a Postgres database using a shared storage volume, and stored some data in it. Next step is to check that the data actually sticks around after the server stops.
Now, kill the Postgres server container:
$ docker-compose stop
Start up the Postgres container again:
$ docker-compose up
We expect that the database server will re-use the storage, so our very important data is still there. Check:
$ docker exec -it $(docker-compose ps -q postgres9) psql -U postgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
We've successfully used a new-style Docker Compose file to run a Postgres database using an external data volume, and checked that it keeps our data safe and sound.
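One caveat worth adding (general Compose behavior, not specific to this file): stopping or removing the containers keeps named volumes, but the -v flag removes those too, database and all.
$ docker-compose down      # removes containers; the data volume survives
$ docker-compose down -v   # also removes named volumes -- this deletes the database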
Backing up and restoring data
First, make a backup, storing our data on the host:
$ docker exec -i $(docker-compose ps -q postgres9) pg_dump -U postgres > backup.sql
Zap our data from the guest database:
$ docker exec -it $(docker-compose ps -q postgres9) psql -U postgres -c 'drop table beer'
Restore our backup (stored on the host) into the Postgres container.
Note: use "exec -i", not "-it", otherwise you'll get a "input device is not a TTY" error.
$ docker exec -i $(docker-compose ps -q postgres9) psql -U postgres < backup.sql
List the tables to verify the restore worked:
$ docker exec -it $(docker-compose ps -q postgres9) psql -U postgres -c '\z'
Access privileges
Schema | Name | Type | Access privileges | Column access privileges
--------+-----------+-------+-------------------+--------------------------
public | beer | table | |
To sum up, we've verified that we can start a database, the data persists after a restart, and we can restore a backup into it from the host.
Thanks Tomasz!

It looks like the PostgreSQL image has issues with mounted volumes. FWIW, it is probably more of a PostgreSQL issue than Docker's, but that doesn't matter, because bind-mounting host directories is not a recommended way to persist database files anyway.
You should be creating data-only Docker containers instead. Like this:
postgres9:
  image: postgres:9.4
  ports:
    - 5432:5432
  volumes_from:
    - pg_data
  environment:
    POSTGRES_PASSWORD: postgres
    POSTGRES_USER: postgres
    PGDATA: /var/lib/postgresql/data/pgdata
pg_data:
  image: alpine:latest
  volumes:
    - /var/lib/postgresql/data/pgdata
  command: "true"
which I tested, and it worked fine. You can read more about data-only containers here: Why Docker Data Containers (Volumes!) are Good
As for how to import the initial data, you can either:
- docker cp the files into the data-only container of the setup (a sketch follows below), or
- use an SQL dump of the data instead of moving binary files around (which is what I would do).
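A minimal sketch of the docker cp route, assuming a compose project prefix of myproject, so the data-only container is named myproject_pg_data_1 (both names are hypothetical; check docker ps -a for yours):
# copy an existing data directory from the host into the (stopped) data-only container
docker cp ./pgdata/. myproject_pg_data_1:/var/lib/postgresql/data/pgdata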

To restore your data from an existing dumped .sql file:
Create your docker-compose.yml:
version: "3"
services:
postgres:
image: postgres
container_name: postgres
environment:
POSTGRES_DB: YOUR_DATABASE
POSTGRES_USER: your_username_if_needed
POSTGRES_PASSWORD: your_password_if_needed
expose:
- 5432
volumes:
- data:/var/lib/postgresql/data
volumes:
data: {}
Launch it and stop it: docker-compose up, then docker-compose stop.
Then migrate your data from your dump:
docker exec -i YOUR_CONTAINER_NAME psql -U your_username_if_needed -W -d YOUR_DATABASE < your_dump.sql
-W is only needed if you set a password.
You should now see the console executing the copy.
When it's done, you're good to go: docker-compose up
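To double-check the import before moving on, listing the tables is a cheap smoke test (reusing the placeholder names from above):
docker exec -it YOUR_CONTAINER_NAME psql -U your_username_if_needed -d YOUR_DATABASE -c '\dt'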

Related

How to populate PostgreSQL database running inside a Docker container using Docker compose [duplicate]

I have a dump.sql file that I would like to load with docker-compose.
docker-compose.yml:
services:
  postgres:
    environment:
      POSTGRES_DB: my_db_name
      POSTGRES_USER: my_name
      POSTGRES_PASSWORD: my_password
    build:
      context: .
      dockerfile: ./devops/db/Dockerfile.db
My Dockerfile.db is really simple at the moment:
FROM postgres
MAINTAINER me <me@me.me>
COPY ./devops/db ./devops/db
WORKDIR ./devops/db
I would like to run a command like psql my_db_name < dump.sql at some point. If I run a script like this from the Dockerfile.db, the issue is that the script is run after build but before docker-compose up, and the database is not running yet.
Any idea how to do this?
Reading https://hub.docker.com/_/postgres/, the section 'Extend this image' explains that any .sql file in /docker-entrypoint-initdb.d is executed when the container is first started with an empty data directory.
I just needed to change my Dockerfile.db to:
FROM postgres
ADD ./devops/db/dummy_dump.sql /docker-entrypoint-initdb.d
And it works!
Another option that doesn't require a Dockerfile would be to mount your sql file into the docker-entrypoint-initdb.d folder using the volumes attribute of docker-compose.
The official postgres image https://hub.docker.com/_/postgres/ will import and execute all SQL files placed in that folder. So something like
services:
  postgres:
    environment:
      POSTGRES_DB: my_db_name
      POSTGRES_USER: my_name
      POSTGRES_PASSWORD: my_password
    volumes:
      - ./devops/db/dummy_dump.sql:/docker-entrypoint-initdb.d/dummy_dump.sql
This will automatically populate the specified POSTGRES_DB for you.
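Keep in mind that the init scripts only run when the data directory is empty, so they are skipped on subsequent starts. With a named data volume, forcing them to run again looks like this (this deletes the existing data):
docker-compose down -v   # remove containers and named volumes
docker-compose up        # fresh start: init scripts run again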
sudo docker exec -i postgres psql -U postgres my_db_name < dump.sql
You can use pg_restore inside the container:
cat ${BACKUP_SQL_File} | docker exec -i ${CONTAINER_NAME} pg_restore \
--verbose \
--clean \
--no-acl \
--no-owner \
-U ${USER} \
-d ${DATABASE}
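Note that pg_restore only understands pg_dump's archive formats; a plain-SQL dump has to go through psql instead. For reference, the custom-format archive the command above expects in ${BACKUP_SQL_File} would be produced with something like:
pg_dump -Fc -U ${USER} ${DATABASE} > backup.dump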
After docker-compose up, run docker ps; it will give you a list of active Docker containers, from which you can get the container ID.
Then,
docker exec -i ${CONTAINER_ID} psql -U ${USER} < ${SQL_FILE}
CONTAINER_NAME="postgres"
DB_USER=postgres
LOCAL_DUMP_PATH="..."
docker run -d --name "${CONTAINER_NAME}" -e POSTGRES_PASSWORD=postgres postgres
sleep 5   # give the server a moment to start accepting connections
docker exec -i "${CONTAINER_NAME}" psql -U "${DB_USER}" < "${LOCAL_DUMP_PATH}"
In order to restore from a dump, I use a shell script to restore the database.
If you feed a custom-format dump to docker-entrypoint-initdb.d directly, it fails with the error: "The input is a PostgreSQL custom-format dump. Use the pg_restore command-line client to restore this dump to a database."
docker-compose.yaml
version: "3.9"
services:
db:
container_name: postgis_my_db_name
image: postgis/postgis:14-3.3
ports:
- "5430:5432"
# restart: always
volumes:
- ./my_db_name.sql:/my_db_name.sql
- ./restore.sh:/docker-entrypoint-initdb.d/restore.sh
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: my_password
POSTGRES_DB: my_db_name
restore.sh
pg_restore -d my_db_name my_db_name.sql
You can also do it without a Dockerfile:
# start the db container
docker compose start $db_container
# dump the initial database (-T keeps the TTY from corrupting the binary dump)
docker compose exec -T $db_container pg_dump -U $user -Fc $database > $dump_file
# get the container id
docker ps
# copy the dump into the container
docker cp $dump_file $container_id:/var
# drop the existing database
docker compose exec $db_container dropdb -U $user $database
# use pg_restore to recreate and restore it (-C creates the database first)
docker compose exec $db_container pg_restore -U $user -C -d postgres /var/$dump_file

Passing a sql backup file through the docker daemon to populate a docker database

Context
I need to populate a database inside a docker container from a backup file that I have on the host machine.
I've tried this docker command while the PostGIS container is up (see docker-compose.yml at the end):
sudo docker exec -i db_container_1 ./usr/local/bin/pg_restore --no-owner --role=postgres -h localhost -U postgres -p 5434 -d database_name < ../db/dumps/dump_prod_2020.backup
But I've got this message:
read unix #->/var/run/docker.sock: read: connection reset by peer
As documented here, I also tried using this docker-compose command but it raises the exact same strange message:
docker-compose exec -T db /usr/local/bin/pg_restore --no-owner --role=postgres -h localhost -U postgres -p 5434 -d database_name < ../db/dumps/dump_prod_2020.backup
Question
What am I doing wrong and how could I populate my docker database with my local dump?
More infos
Here's the docker-compose.yml used to start the db service (docker ps outputs db_container_1 as the corresponding container name):
version: '3.6'
volumes:
  db_data:
services:
  db:
    image: mdillon/postgis:11-alpine
    environment:
      POSTGRES_DB: database_name
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
    ports:
      - '${DB_PORT:-5434}:5432'
    restart: 'no'
    volumes:
      - './docker/db:/docker-entrypoint-initdb.d:ro'
      - 'db_data:/var/lib/postgresql/data' # to persist storage

How do I back up a database in Docker?

I'm running my app using docker-compose with the below yml file:
postgres:
  container_name: postgres
  image: postgres:${POSTGRES_VERSION}
  volumes:
    - postgresdata:/var/lib/postgresql/data
  expose:
    - "5432"
  environment:
    - POSTGRES_DB=42EXP
    - POSTGRES_USER=${POSTGRES_USER}
    - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
node:
  container_name: node
  links:
    - postgres:postgres
  depends_on:
    - postgres
volumes:
  postgresdata:
As you can see, I'm using a named volume to manage Postgres state.
According to the official docs, I can back up a volume like the below:
docker run --rm -v postgresdata:/var/lib/postgresql/data -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/postgresql/data
Some other tutorials suggested I use the pg_dump function provided by Postgres for backups:
pg_dump -Fc database_name_here > database.bak
I guess I would have to go inside the postgres container to perform this function, and mount the backup directory to the host.
Is one approach better/preferable than the other?
To run pg_dump you can use docker exec command:
To backup:
docker exec -u <your_postgres_user> <postgres_container_name> pg_dump -Fc <database_name_here> > db.dump
To drop the db (don't do it in production; for test purposes only!!!):
docker exec -u <your_postgres_user> <postgres_container_name> psql -c 'DROP DATABASE <your_db_name>'
To restore:
docker exec -i -u <your_postgres_user> <postgres_container_name> pg_restore -C -d postgres < db.dump
Also, you can use the docker-compose analog of exec. In that case you can use the short service name (postgres) instead of the full container name (composeproject_postgres).
See the documentation for docker exec, docker-compose exec, and pg_restore.
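For instance, a hedged sketch of the docker-compose form, reusing the service name and database from the question (-T disables TTY allocation so the shell redirect receives clean output):
docker-compose exec -T -u <your_postgres_user> postgres pg_dump -Fc 42EXP > db.dump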
Since you have
expose:
  - "5432"
you can run
pg_dump -U <user> -h localhost -Fc <db_name> > 1.dump
pg_dump connects to port 5432, and since Postgres in the container listens on it, you dump the database from the container. (Strictly speaking, expose only makes the port available to linked containers; for this to work from the host, the port has to be published with ports:, e.g. '5432:5432'.)
You can also run
docker-compose exec -T postgres sh -c 'pg_dump -cU $POSTGRES_USER $POSTGRES_DB' | gzip > netbox.sql.gz
where netbox.sql.gz is the name of the backup file.
restoring would be
gunzip -c netbox.sql.gz | docker-compose exec -T postgres sh -c 'psql -U $POSTGRES_USER $POSTGRES_DB'

Docker - How can I run the psql command in the postgres container?

I would like to use psql in the postgres image in order to run some queries on the database.
But unfortunately, when I attach to the postgres container, I get the error that the psql command is not found...
It is a bit of a mystery to me how to run PostgreSQL queries or commands in the container.
How do I run the psql command in the postgres container? (I am new to the Docker world.)
I use Ubuntu as the host machine. I did not install Postgres on the host; I use the postgres container instead.
docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------
yiialkalmi_app_1 /bin/bash Exit 0
yiialkalmi_nginx_1 nginx -g daemon off; Up 443/tcp, 0.0.0.0:80->80/tcp
yiialkalmi_php_1 php-fpm Up 9000/tcp
yiialkalmi_postgres_1 /docker-entrypoint.sh postgres Up 5432/tcp
yiialkalmi_redis_1 docker-entrypoint.sh redis ... Up 6379/tcp
Here are the containers:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
315567db2dff yiialkalmi_nginx "nginx -g 'daemon off" 18 hours ago Up 3 hours 0.0.0.0:80->80/tcp, 443/tcp yiialkalmi_nginx_1
53577722df71 yiialkalmi_php "php-fpm" 18 hours ago Up 3 hours 9000/tcp yiialkalmi_php_1
40e39bd0329a postgres:latest "/docker-entrypoint.s" 18 hours ago Up 3 hours 5432/tcp yiialkalmi_postgres_1
5cc47477b72d redis:latest "docker-entrypoint.sh" 19 hours ago Up 3 hours 6379/tcp yiialkalmi_redis_1
And this is my docker-compose.yml:
app:
  image: ubuntu:16.04
  volumes:
    - .:/var/www/html
nginx:
  build: ./docker/nginx/
  ports:
    - 80:80
  links:
    - php
  volumes_from:
    - app
  volumes:
    - ./docker/nginx/conf.d:/etc/nginx/conf.d
php:
  build: ./docker/php/
  expose:
    - 9000
  links:
    - postgres
    - redis
  volumes_from:
    - app
postgres:
  image: postgres:latest
  volumes:
    - /var/lib/postgres
  environment:
    POSTGRES_DB: project
    POSTGRES_USER: project
    POSTGRES_PASSWORD: project
redis:
  image: redis:latest
  expose:
    - 6379
docker exec -it yiialkalmi_postgres_1 psql -U project -W project
Some explanation
docker exec -it
The command to run a command in a running container. The -it flags open an interactive tty; basically, it attaches you to the container's terminal. If you wanted to open a bash shell, you could do this:
docker exec -it yiialkalmi_postgres_1 bash
yiialkalmi_postgres_1
The container name (you could use the container ID instead, which in your case would be 40e39bd0329a).
psql -U project -W project
The command to execute in the running container.
-U: the user to connect as.
-W: tell psql that the user should be prompted for the password at connection time. This parameter is optional; without it, there is an extra connection attempt which will usually find out that a password is needed. See the PostgreSQL docs.
project: the database to connect to. There is no need for the -d parameter to mark it as the dbname when it is the first non-option argument; see the docs: -d "is equivalent to specifying dbname as the first non-option argument on the command line."
These are specified by you here
environment:
  POSTGRES_DB: project
  POSTGRES_USER: project
  POSTGRES_PASSWORD: project
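As a quick smoke test, you can also run a one-off query non-interactively, reusing the container name and credentials from the question:
docker exec -it yiialkalmi_postgres_1 psql -U project -d project -c 'SELECT version();'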
This worked for me:
Go to bash:
docker exec -it <container-name> bash
From bash:
psql -U <dataBaseUserName> <dataBaseName>
Or just this one-liner:
docker exec -it <container-name> psql -U <dataBaseUserName> <dataBaseName>
Hope this helps!
After the Postgres container is configured using docker, open the bash terminal using:
docker exec -it <containerID>(postgres container name / ID) bash
Switch to the Postgres user:
su - postgres
Then run:
psql
This opens the psql prompt for Postgres.
If you need to restore the database in a container you can do this:
docker exec -i app_db_1 psql -U postgres < app_development.back
Don't forget to add -i.
:)
You can enter the postgres container using docker-compose by typing the following:
docker-compose exec postgres bash
knowing that postgres is the name of the service. Replace it with the name of the PostgreSQL service in your docker-compose file.
If you have many docker-compose files, you have to add the specific docker-compose.yml file you want to execute the command with. Use the following command instead:
docker-compose -f <specific docker-compose.yml> exec postgres bash
For example if you want to run the command with a docker-compose file called local.yml, here the command will be
docker-compose -f local.yml exec postgres bash
Then, use psql command and specify the database name with the -d flag and the username with the -U flag
psql -U <database username you want to connect with> -d <database name>
Baammm!!!!! you are in.
If you have a running "postgres" container:
docker run -it --rm --link postgres:postgres postgres:9.6 sh -c "exec psql -h \$POSTGRES_PORT_5432_TCP_ADDR -p \$POSTGRES_PORT_5432_TCP_PORT -U postgres"
We can enter the container with a terminal (sh or bash) by using:
docker exec -it <container id | name> <sh | bash>
Assuming it is sh,
psql -U postgres
will work
RUN /etc/init.d/postgresql start &&\
    psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" &&\
    createdb -O docker docker
Just fired up a local test, not sure if -c is what you were after from the cli.
docker run -it --rm --name psql-test-connection -e PGPASSWORD=1234 postgres psql -h kubernetes.docker.internal -U awx -c "\conninfo"
You are connected to database "awx" as user "awx" on host "kubernetes.docker.internal" (address "192.168.65.4") at port "5432".
In many common setups, the PostgreSQL port is published out to the host.
postgres:
ports:
- '12345:5432'
If this is the case, you don't need to do anything Docker-specific to connect to the database. You can use the psql client directly on your host system pointing to the first ports: number.
psql -h localhost -p 12345 -U project
This approach only requires psql or another ordinary PostgreSQL client be installed on the host and that the database container be configured with ports: making it accessible from outside Docker. (The ports: are not necessary for inter-container communication and a production-oriented setup could reasonably not have them.) This does not require the ability to run docker commands and the attendant security concerns, and it can avoid multiple layers of additional command quoting from a docker exec sh -c '...' sequence.
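If you don't remember which host port is published, Compose can report it (service name postgres as in the snippet above):
$ docker-compose port postgres 5432   # prints something like 0.0.0.0:12345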
Without using an external terminal, you can run SQL commands from within the container's CLI:
psql -d [database-name] -U [username] -W
** Don't forget to replace [database-name] with your db-name & [username] with your actual username
Flags:
-d : Specify the database name you want to connect
-U : Specify the username as whom you want to connect
-W : Prompt for the password
