Cannot get time during PostgreSQL backup in Docker container

I am trying to use the following approach to back up a PostgreSQL database that is in a Docker container:
cmd /c 'docker exec -t postgres-dev pg_dump dvdrental -U postgres -c -v >
C:\\postgresql\\backup\\dvdrental_%date:~-4,4%.%date:~-7,2%.%date:~-10,2%_%time:~0,2%.%time:~3,2%.sql'
However, it produces the following file, which has a size of 0 KB and ends in %tme:
dvdrental_2021.05.24_%tme
I tried several different combinations, but the result is still the same and I am not able to get a backup with the following format:
dvdrental_2021.05.24_15.30
So, how can I get the expected result as mentioned above? I use Windows, but I think that is not important, as the command is executed in the Docker container.
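The resulting file name suggests the %time:~...% expansion is being mangled before cmd runs the command; cmd only recognizes double quotes, so the single quotes are a likely culprit. One workaround (a sketch, not tested on your setup; backup.bat is a hypothetical file name, and the %date%/%time% offsets depend on your locale) is to build the timestamp inside a small batch file and run that instead:

@echo off
rem Build a YYYY.MM.DD_HH.MM timestamp; adjust the offsets to your locale's date format
set TS=%date:~-4,4%.%date:~-7,2%.%date:~-10,2%_%time:~0,2%.%time:~3,2%
rem Before 10:00 the hour starts with a space; replace it with a zero
set TS=%TS: =0%
rem Consider dropping -t: a pseudo-TTY may rewrite line endings in the dump
docker exec -t postgres-dev pg_dump dvdrental -U postgres -c -v > C:\postgresql\backup\dvdrental_%TS%.sql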

Related

Postgres + Docker: pg_dump does not work from outside

I want to set up a cron script that automatically creates dumps from a specific Postgres database running in a Docker container. I know how to execute commands in a container from outside and am also familiar with pg_dump.
Somehow, for my container and database, I can't seem to make it work:
docker exec <container> pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date).sql
I get the following error:
zsh: no such file or directory: /pg-snaps/<mydb>_<date>.sql
The directory /pg-snaps exists. I can execute the same command inside the container, and it works. I have no idea why it doesn't work with docker exec. I looked up the methodology on how to do this, and it suggests the same as I want to do. Adding quotes around the command to be executed also results in 'no such file or directory'.
The redirection in your original command is interpreted by your host shell (zsh), so the host itself would need a /pg-snaps directory; wrapping the whole pipeline in sh -c runs the redirection inside the container instead. Try this example:
docker exec -it <container> sh -c 'pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date).sql'
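Note that a plain $(date) expands with spaces in the file name; a variant with a formatted timestamp (a sketch, assuming the date command inside the container supports the usual format specifiers) would be:

docker exec <container> sh -c 'pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date +%F_%H-%M).sql'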

Docker Compose - Container Bash Forking

I am trying to run netbox based on their standard guide on Docker Hub, with one slight difference: I need our existing postgres dump to be restored when the postgres container starts.
I have tried a few approaches, such as defining a command option in the docker-compose file (and a few more combinations):
sleep 2 && psql -U netbox -f netbox.sql
The sleep is required to prevent the psql command from running before the postgres service has started.
I also tried defining a bash script that does the database restore, but all these approaches cause the container to exit after the command/script is run.
My last resort was to utilize bash forking and this is what the postgres snippet of docker-compose looks like:
postgres:
  image: postgres:13-alpine
  env_file: env/postgres.env
  command:
    - sh
    - -c
    - (sleep 3 && cd /home && psql -U netbox -f netbox.sql) & su -c postgres postgres
  volumes:
    - ./my_db:/home/
    - netbox-postgres-data:/var/lib/postgresql/data
Sadly this results in:
postgres: could not access the server configuration file
"/var/lib/postgresql/data/postgresql.conf": No such file or directory
If I omit the command section of the docker-compose file, the container starts up fine and I can navigate and ls the directory in the error message, but that is not what I really need because this container will go on to be part of a much larger jungle of an ecosystem with little to no control over it afterwards.
Could it be my bash forking, or does the problem lie somewhere else?
Thanks in advance
I was able to find a solution by going through the thread that David Maze shared in the comments.
In my case, placing the *.sql file inside /docker-entrypoint-initdb.d did not work, but when I wrote a bash script and placed it in the /docker-entrypoint-initdb.d directory, it got triggered.
The bash script was a very simple one; it would cd to the directory containing the sql dump and then restore it by running psql:
psql -U netbox -f netbox.sql
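A minimal sketch of such a script, assuming the dump is still mounted at /home as in the compose file above (the file name restore-netbox.sh is hypothetical):

#!/bin/bash
# Placed in /docker-entrypoint-initdb.d so the postgres entrypoint runs it on first init
cd /home
psql -U netbox -f netbox.sql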

run pg_dump in a docker container and output file to host

We typically have to pg_dump from multiple different versions of databases. I want to run the command inside a Docker container with the right Postgres version and have the dump output to my files rather than the container.
I thought I'd achieve it like so:
docker run -it postgres:9.6.6-alpine pg_dump --file backupFile.bak --dbname=CONNECTIONSTRING --verbose --format=c --blobs > backupFile.bak
However, this just redirects the terminal output of the pg_dump command to a file, not the actual dump; I end up with a local file that is just the verbose log of the command.
What am I missing?
I can think of two options here:
1. Mount a volume to a local folder and dump the file there; the --file path must point inside the mounted folder. When the container exits, the file will still be there on the host. You wouldn't need to run the container interactively. The command might look something like this (not tested):
docker run --rm -v <host_folder>:<container_folder> postgres:9.6.6-alpine pg_dump --file <container_folder>/backupFile.bak --dbname=CONNECTIONSTRING --verbose --format=c --blobs
The backup file will still remain in <host_folder> after the container stops.
2. Start the container as-is, run docker cp to pull the file out of the container to the local filesystem, then stop the container. Probably not as easy or efficient as option 1.
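A rough sketch of option 2 (the container name pgdump is hypothetical; docker cp also works after the container has stopped):

docker run --name pgdump postgres:9.6.6-alpine pg_dump --file /backupFile.bak --dbname=CONNECTIONSTRING --format=c --blobs
docker cp pgdump:/backupFile.bak ./backupFile.bak
docker rm pgdump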

How to enable quiet mode for Postgres commands on Heroku

When using the psql command line utility on my local machine, I have the option to use the -q or --quiet switch to tell Postgres to do its work quietly; i.e. it won't print every single INSERT statement to the console if you're doing a large import.
Here's an example of how I'm using it:
psql -q -d <SOME_DATABASE> -f <SOME_SQL_FILE>
However, when using the pg:psql command line utility in Heroku, that option doesn't seem to be available. So I'm currently having to use it like so:
heroku pg:psql DATABASE -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
which produces a lot of output to my console (hundreds of thousands of lines), because of the large size of the SQL file I'm importing. Whenever I try to use the -q or --quiet option, something like this:
heroku pg:psql DATABASE -q -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
it'll throw an error saying that -q is not a valid option.
Is there some way to enable quiet mode when running Postgres commands in Heroku?
heroku pg:psql is just a wrapper around your local psql binary (https://github.com/heroku/heroku/blob/master/lib/heroku/command/pg.rb#L151)
So, given this, you are able to do:
psql `heroku config:get DATABASE_URL -a <yourappname>`
to get a psql connection and consequently pass -q and other options accordingly.
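Putting it together with the original import (a sketch; heroku config:get simply prints the database URL, which psql accepts as a connection string):

psql -q -f <SOME_SQL_FILE> `heroku config:get DATABASE_URL -a <SOME_HEROKU_APP>`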

How to fill in a Docker mongodb image

I don't really understand how I can fill my mongodb image with a simple script like demoInstall.js.
I have my image and I can launch a mongodb instance, but I cannot access the mongo shell of the Docker image to run the script against it.
I tried this command :
sudo docker run --entrypoint=/bin/cat mongo/ubuntu /tmp/devInstall.js | mongo --host IPAdress
But it's using the local mongo and not the image :/
In the end my aim is simple: I need to pull my image on a fresh server and launch a basic bash script that fills some information into the Docker db.
The command you use pipes the output of docker on the local machine. You might call it with an explicit bash -c instead:
sudo docker run -it --rm mongo/ubuntu /bin/bash -c '/bin/cat /tmp/devInstall.js | mongo --host IPAdress'
I am not sure the IPAdress will be available, though. You might want to define it via an environment variable or container linking.
I would mount a volume with this argument:
-v /local_init_script_folder:/bootstrap
And then, with a command line similar to the one you proposed, you can access the contents of the folder as /bootstrap from within the container.
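Combined with the earlier bash -c approach, that might look like this (a sketch; it assumes devInstall.js sits in /local_init_script_folder on the host and that the mongo shell accepts a script file as its argument):

sudo docker run --rm -v /local_init_script_folder:/bootstrap mongo/ubuntu /bin/bash -c 'mongo --host IPAdress /bootstrap/devInstall.js'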