run pg_dump in a docker container and output file to host - postgresql

We typically have to run pg_dump against several different database versions. I want to run the command inside a Docker container with the right Postgres version and have the dump written to my host filesystem rather than inside the container.
I thought I'd achieve it like so:
docker run -it postgres:9.6.6-alpine pg_dump --file backupFile.bak --dbname=CONNECTIONSTRING --verbose --format=c --blobs > backupFile.bak
However, this just redirects the terminal output of the pg_dump command to a file, not the actual dump; I end up with a local file that contains only the verbose log of the command.
What am I missing?

I can think of two options here:
1. Mount a volume to a local folder and dump the file there. When the container exits, the file will still be there on the host. You wouldn't need to run the container interactively. The command might look something like this (not tested):
docker run --rm -v <host_folder>:<container_folder> postgres:9.6.6-alpine pg_dump --file <container_folder>/backupFile.bak --dbname=CONNECTIONSTRING --verbose --format=c --blobs
The backup file will remain in <host_folder> after the container stops. Note that --file has to point into the mounted <container_folder>; otherwise the dump is written elsewhere inside the container and is gone once the container is removed.
2. Start the container as-is, run docker cp to pull the file out of the container to the local filesystem, then stop the container. Probably not as easy or efficient as option 1; see the sketch below.
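A rough sketch of option 2, using names that are purely illustrative (the container is left around after pg_dump exits so the file can be copied out, then removed):
docker run --name pg96dump postgres:9.6.6-alpine pg_dump --file /backupFile.bak --dbname=CONNECTIONSTRING --verbose --format=c --blobs
docker cp pg96dump:/backupFile.bak ./backupFile.bak
docker rm pg96dump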

Related

Postgres + Docker: pg_dump does not work from outside

I want to set up a cron script that automatically creates dumps from a specific Postgres database running in a Docker container. I know how to execute commands in a container from outside and am also familiar with pg_dump.
Somehow, for my container and database, I can't seem to make it work:
docker exec <container> pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date).sql
I get the following error:
zsh: no such file or directory: /pg-snaps/<mydb>_<date>.sql
The directory /pg-snaps exists. I can execute the same command inside the container, and it works. I have no idea why it doesn't work with docker exec. I looked up how this is usually done, and it suggests the same thing I'm doing. Wrapping the command to be executed in quotes also results in a 'no such file or directory'.
The redirection in your command is performed by your host shell (zsh), where /pg-snaps does not exist, not inside the container. Wrap the whole command in sh -c so the redirect happens inside the container; try this example:
docker exec -it <container> sh -c 'pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date).sql'
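If the goal is a cron job on the host, a minimal crontab sketch along the same lines (the schedule and the date format are illustrative; note that % must be escaped as \% inside a crontab entry, and cron's restricted PATH may require the full path to docker):
0 3 * * * docker exec <container> sh -c 'pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date +\%F).sql'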

Dockerized PGAdmin Mapped volume + COPY not working

I have a scenario where a certain data set comes from a CSV and I need to allow a non-dev to hit PG Admin and update this data set. I want to be able to put this CSV in a mapped folder from the host system and then use the PG Admin GUI to run a COPY command. So far PG Admin is telling me:
ERROR: could not open file "/var/lib/pgadmin/data-files/some_data.csv" for reading: No such file or directory
Here are my steps so far, along with a sanity check inspect:
docker volume create --name=data-files
docker run -e PGADMIN_DEFAULT_EMAIL="pgadmin@example.com" -e PGADMIN_DEFAULT_PASSWORD=some_pass -v data-files:/var/lib/pgadmin/data-files -d -p 5050:80 --name pgadmin dpage/pgadmin4
docker volume inspect data-files --format '{{.Mountpoint}}'
/app/docker/volumes/data-files/_data
docker cp ./updated-data.csv pgadmin:/var/lib/pgadmin/data-files
And now I think that PG Admin should be able to see updated-data.csv, so I try COPY, which I know works locally on my dev system where PG Admin runs on bare metal:
COPY foo.bar(
...
)
FROM '/var/lib/pgadmin/data-files/updated-data.csv'
DELIMITER ','
CSV HEADER
ENCODING 'windows-1252';
Is there any glaring mistake here? When I do docker cp there's no feedback to stdout. No error, no mention of success or a hash or anything.
It looks like you assumed the file should be inside the pgAdmin container; however, the file must be inside the Postgres container, because COPY ... FROM is read by the Postgres server process, not by pgAdmin, so that is where the query will look for it. I suggest you copy the file to the Postgres container:
docker cp <path_from_your_local>/file.csv <postgres_container_name>:/file.csv
Then, from the query tool in pgAdmin, you can run the COPY without problems!
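For example, with the file copied to /file.csv inside the Postgres container as above, the COPY from the question only needs its path adjusted (column list omitted here, as in the question):
COPY foo.bar
FROM '/file.csv'
DELIMITER ','
CSV HEADER
ENCODING 'windows-1252';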
I hope this helps others who end up here.

Cannot get time during PostgreSQL backup in Docker container

I am trying to use the following approach in order to backup PostgreSQL database that is in a Docker container:
cmd /c 'docker exec -t postgres-dev pg_dump dvdrental -U postgres -c -v > C:\\postgresql\\backup\\dvdrental_%date:~-4,4%.%date:~-7,2%.%date:~-10,2%_%time:~0,2%.%time:~3,2%.sql'
However, it produces the following file, which has a size of 0 KB and an extension of %tme:
dvdrental_2021.05.24_%tme
I tried some different combinations, but it is still the same and I am not able to get a backup with the following name format:
dvdrental_2021.05.24_15.30
So, how can I get the expected result mentioned above? I use Windows, but I don't think that matters, since the command is executed in a Docker container.
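For what it's worth, one way to sidestep the %date%/%time% expansion problems, assuming PowerShell is available, is to build the timestamp with Get-Date, let pg_dump write the dump inside the container, and then copy it out (the /tmp path inside the container is illustrative):
$stamp = Get-Date -Format 'yyyy.MM.dd_HH.mm'
docker exec -t postgres-dev pg_dump dvdrental -U postgres -c -v -f /tmp/dvdrental.sql
docker cp postgres-dev:/tmp/dvdrental.sql "C:\postgresql\backup\dvdrental_$stamp.sql"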

Boot2Docker (on Windows) running Mongo with shared folder (This file system is not supported)

I am trying to start a Mongo container using shared folders on Windows using Boot2Docker. When starting it with docker run -it -v /c/Users/310145787/Desktop/mongo:/data/db mongo, I get a warning message inside the container saying:
WARNING: This file system is not supported.
After starting, mongo shuts down immediately.
Any hints or tips on how to solve this?
Apparently, according to this gist and Sev (sevastos), mongo doesn't support mounted volumes through VirtualBox shared folders:
See the MongoDB Production Notes:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this operation.
The easiest solution of all, and a proper way to achieve data persistence, is data volumes:
Assuming you have a container that has VOLUME ["/data"]
# Create a data volume
docker create -v /data --name yourData busybox true
# and use
docker run --volumes-from yourData ...
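For the mongo case in this question specifically, a sketch of the same pattern, assuming the data container exposes /data/db (mongo's default data directory):
# Create a data volume for mongo's data directory
docker create -v /data/db --name mongoData busybox true
# and run mongo against it
docker run -d --volumes-from mongoData mongo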
This isn't always ideal, though. The following comment is from Edward Chu (chuyik), on Mac:
I don't think it's a good solution, because the data has just moved to another container, right? It's still inside a container rather than on the local system (Mac disk).
I found another solution: use sshfs to map data between the boot2docker VM and your Mac, which may be better since the data is not stored inside a Linux container.
Create a directory to store data inside boot2docker:
boot2docker ssh
mkdir -p /mnt/sda1/dev
Use sshfs to link boot2docker and mac:
echo tcuser | sshfs docker@localhost:/mnt/sda1/dev <your mac dir path> -p 2022 -o password_stdin
Run image with mongo installed:
docker run -v /mnt/sda1/dev:/data/db <mongodb-image> mongod
The corresponding boot2docker issue points to docker issue 12590 ("Problem with -v shared folders in 1.6 #12590"), which mentions the workaround of using a double slash.
Using a double slash seems to work; I checked it locally and it works.
docker run -d -v //c/Users/marco/Desktop/data:/data <image name>
It also works with:
docker run -v /$(pwd):/data
As a workaround I just copy from a folder before the mongo daemon starts. Also, in my case I don't care about journal files, so I only copy the database files.
I've used this command in my docker-compose.yml:
command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
And every time before stopping the container I use:
docker exec <container_name> bash -c 'cd /data/db && cp $(ls *.* | grep -v "\.lock") /prev'
Note: /prev is set up as a volume (path/to/your/prev:/prev).
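For reference, a minimal docker-compose.yml sketch of how those pieces could fit together (the service name and host path are illustrative):
services:
  mongo:
    image: mongo
    command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
    volumes:
      - ./path/to/your/prev:/prev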
Another workaround is to use mongodump and mongorestore.
in docker-compose.yml:
command: bash -c "(sleep 30; mongorestore --quiet) & mongod"
in the terminal:
docker exec <container_name> mongodump
Note: I use sleep because I want to make sure that mongo started, and it takes a while.
I know this involves manual work etc, but I am happy that at least I got mongo with existing data running on my Windows 10 machine, and still can work on my Macbook when I want.
It seems like you don't need the data directory for MongoDB; removing those lines from your docker-compose.yml should let it run without problems.
The data directory is only used by mongo for cache.

How to fill in a Docker mongodb image

I don't really understand how I can populate my mongodb image with a simple script like demoInstall.js.
I have my image and I can launch a mongodb instance, but I cannot access the mongo shell of the Docker container to populate it with the script.
I tried this command:
sudo docker run --entrypoint=/bin/cat mongo/ubuntu /tmp/devInstall.js | mongo --host IPAdress
But it's using the local mongo and not the image :/
In the end my aim is simple: I need to pull my image on a fresh server and launch a basic bash script that fills the Docker DB with some data.
The command you use pipes the output of the docker command into your local mongo. You can instead run the whole pipeline inside the container with an explicit bash -c:
sudo docker run -it --rm mongo/ubuntu /bin/bash -c '/bin/cat /tmp/devInstall.js | mongo --host IPAdress'
I am not sure the IPAdress will be reachable from inside the container, though. You might want to define it via an environment variable or container linking.
I would mount a volume with this argument:
-v /local_init_script_folder:/bootstrap
And then, with a command line similar to the one you proposed, you can access the contents of the folder as /bootstrap from within the container.
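Combining the two suggestions, a rough sketch (the image name, script name and IPAdress placeholder are taken from the question; the folder name is illustrative):
sudo docker run -it --rm -v /local_init_script_folder:/bootstrap mongo/ubuntu bash -c 'mongo --host IPAdress /bootstrap/devInstall.js'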