I have a very big SQL dump I'm working on. The overall structure looks like this:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT …;
-- … ~300K insert operations
INSERT …;
COMMIT;
The problem is that if any single statement fails, the real error is shown once, and then the message "current transaction is aborted, commands ignored until end of transaction block" is generated for EACH statement that follows it.
Why does it behave this way? Is there a way to suppress the follow-up messages? It's enough to show the real error message and skip the rest of the transaction. I don't want to see ~300K meaningless error messages.
Do I need to structure my dump differently? Or is there a flag/option I can use?
Presumably you're using psql to send the queries to the server.
You may set the ON_ERROR_STOP built-in variable to on.
From https://www.postgresql.org/docs/current/static/app-psql.html:
ON_ERROR_STOP
By default, command processing continues after an error. When this
variable is set to on, processing will instead stop
immediately. In interactive mode, psql will return to the command
prompt; otherwise, psql will exit, returning error code 3 to
distinguish this case from fatal error conditions, which are
reported using error code 1.
It may be set from outside psql with psql -v ON_ERROR_STOP=on -f script.sql, or from inside the script or interactively with the meta-command \set ON_ERROR_STOP on.
pg_dump does not have an option to add this automatically to a dump (as far as I know, it doesn't emit any psql meta-command anyway, only pure SQL commands).
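If you run the dump more than once, a minimal sketch (assuming bash; dump.sql and mydb are hypothetical names) is to prepend the meta-command to the stream, so every invocation stops at the first error:
{ echo '\set ON_ERROR_STOP on'; cat dump.sql; } | psql -X mydb
With the dump's existing BEGIN/COMMIT wrapper, psql then quits right after the first real error instead of flooding the output with "current transaction is aborted" messages.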
Related
I have two DB instances both running PG 9.4
When I issue the COPY command in one, it returns the number of rows affected; however, in the second DB, which is set up the same, it does not.
I see nothing different in the config that might affect this. The imports do not error and complete successfully on both.
The documentation states it should return as long as it's not stdout.
This line in the documentation looks pertinent, but I'm not sure it applies to my situation.
Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.
The command I'm issuing is:
COPY [tablename] from '/var/lib/pgsql/datafile.csv'
At the moment I'm down to looking at PuTTY session variables, but I'm not sure this is the way to go.
Does anyone have any ideas as to why this may be happening?
When psql is quiet, it doesn't display these messages.
Quiet mode is activated with the -q command-line option or with \set QUIET on.
Example:
test=# copy test to '/tmp/foo';
COPY 8
test=# \set QUIET on
test=# copy test to '/tmp/foo';
test=#
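To turn the tags back on later in the same session, a sketch:
test=# \set QUIET off
test=# copy test to '/tmp/foo';
COPY 8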
I executed a long-running Postgres query in the psql terminal. It displayed the result, but I pressed q too soon, exiting out of the result and back to the prompt.
Is there any way for me to see the result again without rerunning the query?
You've lost that output. The data is probably still "hot" in cache, though, since you just ran the query, so rerunning it should be faster this time.
In the future, though, you can use psql's \o [filename] meta-command to save query output to a local file, so that you don't run into this issue again.
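A minimal sketch of that workflow (the table name and file path are placeholders):
test=# \o /tmp/result.txt
test=# SELECT count(*) FROM some_table;
test=# \o
The SELECT's output goes to /tmp/result.txt instead of the screen, and \o with no argument switches query output back to the terminal.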
For a migration I have automatically built scripts containing anonymous blocks for every entity to update and keep a log table for the results.
DO $$
DECLARE
    productId varchar;
BEGIN
    productId := getProductId('9783980493017');
    update product set ...;
    ...
EXCEPTION
    WHEN OTHERS THEN
        insert into mig_messages(createddate, identifier, message)
        values (now(), '9783980493017', SQLERRM);
END $$;
This works fine so far. But when I run these scripts with psql, every DO is printed to the prompt. This may sound a bit silly, but there are lots of scripts with lots of product update blocks in them (about 5 million or more). How can I suppress this output without redirecting it completely to /dev/null or switching psql to silent mode? After all, there MAY be some output I want to see (errors, warnings, etc.).
I would prepend the script with this line:
SET client_min_messages=WARNING;
Or start psql with that setting in the environment (I am using this in bash).
env PGOPTIONS='-c client_min_messages=WARNING' psql ...
This way you still get messages with severity WARNING or higher.
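Combining that with the quiet mode mentioned earlier also suppresses the DO command tags; a sketch, assuming bash and a hypothetical script name:
env PGOPTIONS='-c client_min_messages=WARNING' psql -q -f migration.sql
Here -q keeps psql from printing the DO tags, while client_min_messages=WARNING still lets warnings and errors through.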
Related:
How to suppress INFO messages when running psql scripts
Reduce bothering notices in plpgsql
Is there an equivalent in PostgreSQL of the Oracle SQL*Plus "set echo on", so that I can get batch input statements echoed in the output?
I have a very large file of input statements that produces a few errors when I run it. I am having difficulty finding the statements that produced the errors, because psql reports only the error, not the statement that generated it.
You need to pass the -a (or --echo-all) argument to psql. It's described at https://www.postgresql.org/docs/current/static/app-psql.html under OPTIONS.
PostgreSQL also logs errors in its server logs, along with the statements that caused them. That is worth bearing in mind when debugging errors with tools other than psql that don't report errors very well.
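For example, a sketch that echoes every statement and captures both the echo and any errors in one log (script.sql and run.log are placeholder names):
psql -a -f script.sql > run.log 2>&1
Each failing statement should then show up in run.log near its error message, which makes it much easier to locate.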
Is there a way to specify that, when executing a SQL script, it stops when encountering the first error in the script? It usually continues regardless of previous errors.
I think the solution of adding the following to .psqlrc is far from perfect:
\set ON_ERROR_STOP on
There is a much simpler and more convenient way: run psql with the parameter:
psql -v ON_ERROR_STOP=1
It's better to also use the -X parameter, which turns off .psqlrc file usage.
This works perfectly for me.
P.S. I found the solution in a great post by Peter Eisentraut. Thank you, Peter!
http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html
I assume you are using psql; this might be handy to add to your ~/.psqlrc file:
\set ON_ERROR_STOP on
This will make psql abort on the first error. Without it, even with a transaction, psql will keep executing your script, but everything will fail until the end of the script.
And you probably want to use a transaction, as Paul said. That can also be done with psql --single-transaction ... if you don't want to alter the script.
So a complete example, with ON_ERROR_STOP in your .psqlrc:
psql --single-transaction --file /your/script.sql
It's not exactly what you want, but if you start your script with begin transaction; and end it with end transaction;, it will actually skip everything after the first error, and then it will roll back everything it did before the error.
I always like to reference the manual directly.
From the PostgreSQL Manual:
Exit Status
psql returns 0 to the shell if it finished normally, 1 if a fatal
error of its own occurs (e.g. out of memory, file not found), 2 if the
connection to the server went bad and the session was not interactive,
and 3 if an error occurred in a script and the variable
ON_ERROR_STOP was set.
By default, if the SQL code you are running on the PostgreSQL server raises an error, psql won't quit. It will catch the error and continue. If, as mentioned above, you set ON_ERROR_STOP to on, then when psql catches an error in the SQL code, it will exit and return 3 to the shell.
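A sketch of how a shell script might branch on those exit codes (script.sql and mydb are placeholder names):
psql -X -v ON_ERROR_STOP=1 -f script.sql mydb
case $? in
    0) echo "script completed normally" ;;
    1) echo "fatal psql error (e.g. out of memory, file not found)" ;;
    2) echo "connection to the server went bad" ;;
    3) echo "an error occurred in the script and ON_ERROR_STOP was set" ;;
esac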