When using the psql command-line utility on my local machine, I have the option to use the -q or --quiet switch to tell psql to do its work quietly, i.e. it won't print every single INSERT statement to the console when you're doing a large import.
Here's an example of how I'm using it:
psql -q -d <SOME_DATABASE> -f <SOME_SQL_FILE>
However, when using the pg:psql command-line utility in Heroku, that option doesn't seem to be available, so I'm currently having to use it like so:
heroku pg:psql DATABASE -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
which produces a lot of output to my console (hundreds of thousands of lines), because of the large size of the SQL file I'm importing. Whenever I try to use the -q or --quiet option, something like this:
heroku pg:psql DATABASE -q -a <SOME_HEROKU_APP> < <SOME_SQL_FILE>
it'll throw an error saying that -q is not a valid option.
Is there some way to enable quiet mode when running Postgres commands in Heroku?
heroku pg:psql is just a wrapper around your local psql binary (https://github.com/heroku/heroku/blob/master/lib/heroku/command/pg.rb#L151).
So, given this, you are able to do:
psql `heroku config:get DATABASE_URL -a <yourappname>`
to get a psql connection and then pass -q and other options accordingly.
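For example, a quiet import equivalent to the local command from the question might look like this (same placeholders as above):
psql -q -f <SOME_SQL_FILE> `heroku config:get DATABASE_URL -a <yourappname>`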
I have a dump file on drive Z (a network drive).
I'm opening psql from pgAdmin 4.
This is the command I'm writing:
psql -U postgres -d postgres -f Z:\DB_BU\md_20220729.sql
And this is the error I'm getting:
Invalid command \DB_BU. Try \? for help.
When I do this:
psql -U postgres -d postgres -f i\ Z:\DB_BU\md_20220729.sql
I get:
Invalid command \DB_BU. Try \? for help.
And when I do this:
psql -U postgres -d postgres -f "Z:\DB_BU\md_20220729.sql"
I don't get any error, but it's also not restoring the file. How can I restore the file?
You're trying to call psql from within psql or pgAdmin. Since psql is a standalone program, not an SQL command you can run in pgAdmin's SQL window or one of psql's own internal meta-commands, you're getting the error
Invalid command \DB_BU. Try \? for help.
indicating that there was an attempt to interpret your entire command as an SQL query or an internal command, and that this attempt failed.
You can open the "psql tool" from within pgAdmin, but your command won't work there either, because it tries to call psql itself with command-line options, which you cannot do when you're already inside an interactive psql session. The command
psql -U postgres -d postgres -f Z:\DB_BU\md_20220729.sql
can be used outside psql and pgAdmin, in your terminal (zsh on macOS, sh/bash on Linux, cmd or PowerShell on Windows) where psql is installed and visible, along with your network path.
If you're able to open the psql tool window in pgAdmin, you can instead try using the internal psql \i meta-command, which is basically the same thing as the -f command-line option but is meant for use inside a psql session:
\i "Z:\DB_BU\md_20220729.sql"
I'd like to copy the contents of my local database to my remote one (inside a Docker container).
For some reason, it is more complicated than I expected:
When I try to copy the data to the remote one, I get this: "ERROR: CREATE DATABASE cannot run inside a transaction block".
Ok... so I got into my Docker container and added the rule \set AUTOCOMMIT inside. But I still get this error.
These are the commands I ran:
// backup
pg_dump -C -h localhost -U postgres woof | xz >backup.xz
and then in my remote computer:
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction
But each time I get this "CREATE DATABASE cannot run inside a transaction block" no matter what I try, even if I set autocommit to "on".
Here's my problem: I don't know what a transaction block is, and I don't understand why copying one DB to another needs to be such a pain: my remote DB is empty, so why is there so much fuss, and why can't psql just force what I want?
My aim is just to copy my local db to the remote one.
What happens here is: you add a CREATE DATABASE statement with the -C flag and then run psql with --single-transaction, so the contents of the script are wrapped in BEGIN; ... COMMIT;, where you can't use CREATE DATABASE.
So either remove -C and run psql against an existing database, or remove --single-transaction from the psql invocation. Make the decision based on what you really need...
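For example, here is a sketch reusing the commands from the question (assuming woof is also the target database name in the container):
# option 1: drop -C, keep --single-transaction, and restore into an existing database
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction -d woof
# option 2: keep -C, drop --single-transaction, and connect to a maintenance database such as postgres
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on -d postgres
(For option 1 the dump would also need to be taken without -C, i.e. pg_dump -h localhost -U postgres woof, and the woof database must already exist in the container.)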
from man pg_dump:
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If --clean is also specified, the script drops and recreates the target database before reconnecting to it.
from man psql:
--single-transaction
This option can only be used in combination with one or more -c and/or -f options. It causes psql to issue a BEGIN command before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
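In other words, with -C in the dump and --single-transaction on psql, the server effectively receives something like this (a sketch, using the woof database from the question):
BEGIN;
CREATE DATABASE woof;  -- ERROR: CREATE DATABASE cannot run inside a transaction block
-- ... rest of the dump ...
COMMIT;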
I'm a newbie at this. I want to write a script that I can execute from command line to run a query on a Heroku-hosted PostgreSQL database.
Right now I have a script script.sh with executable permissions that looks like:
echo "Starting pull from postgres ..."
heroku pg:psql <db> --app <app-name>
\copy (<query>) to 'file.csv' WITH CSV
\q
echo "Done!"
The echo and heroku ... commands run fine; however, once the Heroku psql session launches, the script no longer injects the commands. Only after I manually close out of the session does it run the last three lines.
I understand that this is a bash script that isn't intended to input PostgreSQL commands once the session is open, but is there a way to do this?
I get the sense that it might involve connecting to Heroku and submitting the query in one line; I searched around Heroku's documentation but didn't see anything that would be helpful.
You can use the --command flag to pass SQL commands to heroku pg:psql, together with the server-side COPY, and then redirect the output to a file:
echo "Starting pull from postgres ..."
heroku pg:psql <db> --app <app-name> --command "COPY (<query>) TO STDOUT WITH CSV" > file.csv
echo "Done!"
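Note that COPY ... TO STDOUT runs on the server and streams the rows back through psql's standard output, so the shell redirect can capture them into a local CSV file without needing the client-side \copy from the original script.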
I've got my database dump (tables, functions, triggers, etc.) in *.sql files.
At the moment I am deploying them via Jenkins by passing an execute-shell command:
sudo -u postgres psql -d my_db < /[path_to_my_file].sql
The problem is that if something is wrong in my SQL file, the build finishes as SUCCESS. I would like to get that information immediately when something fails, without looking into the log and checking whether every command executed successfully.
Is it possible (and how, if the answer is 'yes') to deploy a Postgres database via Jenkins some other way?
I changed my execution command to:
sudo -u postgres psql -v ON_ERROR_STOP=1 -d my_db < [path_to_file].sql
Make sure you have
set -e
before running the command.
If that does not work, I'd look at the return code from the command above. That can be done by running
echo $?
right after the command.
If that gives you a zero when it fails, it's Postgres's fault (since it should return something other than 0 on failure).
Perhaps there is a postgres flag to fail on wrong input.
EDIT:
-v ON_ERROR_STOP=1
passed as a flag to psql should make it fail on errors.
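Putting both suggestions together, a minimal sketch of the Jenkins execute-shell step (the dump path is a placeholder):
set -e  # abort the build as soon as any command exits non-zero
sudo -u postgres psql -v ON_ERROR_STOP=1 -d my_db < /path/to/dump.sql
With ON_ERROR_STOP=1, psql exits with a non-zero status (3 for an error in a script), so set -e makes the build fail immediately.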
I am migrating an application into Docker. One of the issues I am bumping into is figuring out the correct way to load initial data into PostgreSQL running in Docker. My typical methods of restoring a database backup file are not working. I have tried the following:
gunzip -c mydbbackup.sql.gz | psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db> -W
That does not work, because PostgreSQL is prompting for a password, and I cannot enter one because psql is reading data from STDIN. I cannot use the $PGPASSWORD environment variable, because any environment variable I set on my host is not set in my container.
I also tried a similar command to the one above, except using the -f flag and specifying the path to a SQL backup file. This does not work because the file is not on my container. I could copy the file to my container with the ADD statement in my Dockerfile, but this does not seem right.
So, I ask the community: what is the preferred method of loading PostgreSQL database backups into Docker containers?
I cannot use the $PGPASSWORD environment variable, because any environment variable I set on my host is not set in my container.
I don't use Docker, but your container looks like a remote host in the command shown, with psql running locally. So PGPASSWORD never has to be set on the remote host, only locally.
If the problem boils down to adding a password to this command:
gunzip -c mydbbackup.sql.gz |
psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db> -W
you may supply it using one of several methods (in all cases, don't use the -W option to psql):
hardcoded in the invocation:
gunzip -c mydbbackup.sql.gz |
PGPASSWORD=something psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>
typed on the keyboard:
echo -n "Enter password:"
read -s PGPASSWORD
export PGPASSWORD
gunzip -c mydbbackup.sql.gz |
psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>
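stored in a password file, which keeps it out of the command line and your shell history (a sketch; the ~/.pgpass line format is host:port:database:user:password and the file must not be world-readable):
echo '<docker_host>:<docker_port>:<db>:<dbuser>:something' >> ~/.pgpass
chmod 600 ~/.pgpass
gunzip -c mydbbackup.sql.gz |
psql -h <docker_host> -p <docker_port> -U <dbuser> -d <db>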
A note about the -W or --password option to psql:
The point of this option is to ask for a password to be typed first thing, even if the context makes it unnecessary.
It's frequently misunderstood as the equivalent of mysql's -p option. This is a mistake: while -p is required for password-protected connections in mysql, -W is never required in psql and actually gets in the way when scripting.
-W, --password
Force psql to prompt for a password before connecting to a database.
This option is never essential, since psql will automatically prompt for a password if the server demands password authentication. However, psql will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.