Script execution prints every "DO" from my script - postgresql

For a migration I have automatically generated scripts containing an anonymous block for every entity to update, and I keep a log table for the results.
DO $$
DECLARE
    productId varchar;
BEGIN
    productId := getProductId('9783980493017');
    update product set ...;
    ...
EXCEPTION
    WHEN OTHERS THEN
        insert into mig_messages(createddate, identifier, message)
        values (now(), '9783980493017', SQLERRM);
END $$;
This works fine so far. But when I run these scripts with psql, every DO is printed on the prompt. This sounds a bit silly, but there are lots of scripts with lots of product update blocks in them (about 5 million or more). How can I suppress this output without redirecting it completely to /dev/null or switching psql to silent mode? After all, there MAY be some output I want to see (errors, warnings etc.).

I would prepend the script with this line:
SET client_min_messages = WARNING;
Or start psql with that setting in the environment (I am using this from bash):
env PGOPTIONS='-c client_min_messages=WARNING' psql ...
This way you still get messages with severity WARNING or higher.
Related:
How to suppress INFO messages when running psql scripts
Reduce bothering notices in plpgsql


PostgreSQL: when using RAISE NOTICE in psql, can I avoid the "NOTICE:" prefix in the output?

I am trying to port some functionality from Oracle PL/SQL to PostgreSQL PL/pgSQL.
As it seems to be the most common way of printing text to the screen, I would like to use RAISE NOTICE to display messages on standard output, output that would go to a file using the -o file.txt argument of psql.
The problem is that the message level (here RAISE NOTICE 'msg' => level NOTICE) goes before the message in the output (as a prefix), and I don't like it.
E.g. this PL/pgSQL code, when run in psql:
DO $$
BEGIN
    RAISE NOTICE 'my message';
END; $$;
will generate this output :
NOTICE:  my message
Is there any way to remove the message level before the message itself? (Here I would like to avoid the "NOTICE:" prefix.)
PS: I have seen this post where the same question is asked in the context of psycopg2, but here my context is the psql tool.
Also, this other post is close to what I'm trying to do here, but the answer is not satisfying to me.
NB: I'm using the latest PostgreSQL and psql (version 12.1).
No, you cannot. This is the behaviour of libpq, the Postgres client library, which psql uses.
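While psql itself cannot drop the prefix, the output can be post-processed outside psql. A minimal sketch, assuming a shell pipeline and psql's usual "NOTICE:  " formatting (the helper name strip_notice is made up for illustration):

```shell
# Strip the "NOTICE:  " severity prefix that psql prepends to RAISE NOTICE
# output. Note: psql prints two spaces after the colon.
strip_notice() {
  sed -e 's/^NOTICE:  //'
}

# Canned demonstration input; in real use you would pipe psql instead:
#   psql -f script.sql 2>&1 | strip_notice
printf 'NOTICE:  my message\n' | strip_notice
# prints "my message"
```

A localized server may emit a translated severity word, so the pattern may need adjusting.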

How to suspend PostgreSQL's ON_ERROR_STOP for just part of a psql script?

I'm maintaining a psql script which I usually want to abort immediately with a non-zero exit code when any part of it fails.
Thus I'm considering to either place
\set ON_ERROR_STOP on
at the beginning, or to instruct users to run the script with
psql -v ON_ERROR_STOP=on -f my_script.sql
However, there is a part of the script that deliberately fails (and gets rolled back). The script is for education and demonstration purposes, and that part demonstrates a CONSTRAINT actually working as it should (by making a subsequent constraint-violating INSERT fail), so I can't really "fix" that part to not fail. Therefore the accepted answer from How to save and restore value of ON_ERROR_STOP? doesn't solve my problem.
Thus, I'd like to disable ON_ERROR_STOP before that part and restore the setting after the part. If I know that ON_ERROR_STOP is enabled in general, this is easy:
\set ON_ERROR_STOP off
BEGIN;
-- [ part of the script that purposefully fails ]
ROLLBACK;
\set ON_ERROR_STOP on
or
\unset ON_ERROR_STOP
BEGIN;
-- [ part of the script that purposefully fails ]
ROLLBACK;
\set ON_ERROR_STOP on
However, this blindly (re-)enables ON_ERROR_STOP, whether it was enabled before or not.
\set previous_ON_ERROR_STOP :ON_ERROR_STOP
\unset ON_ERROR_STOP
BEGIN;
-- [ part of the script that purposefully fails ]
ROLLBACK;
\set ON_ERROR_STOP :previous_ON_ERROR_STOP
works if ON_ERROR_STOP has previously been explicitly disabled (e.g., set to off) but fails if it was unset (and thus just implicitly disabled).
I'd like the script to remain backwards compatible to PostgreSQL 9.x, so I can't yet use the \if meta commands introduced in PostgreSQL 10.
I don't think you can do that.
What you could do, however, is to use a PL/pgSQL block to run the statement and catch and report the error, somewhat like this:
DO
$$BEGIN
    INSERT INTO mytab VALUES (...);
EXCEPTION
    WHEN integrity_constraint_violation THEN
        RAISE NOTICE 'Caught error: %', SQLERRM;
END;$$;
That will report the error, but it won't cause psql to stop.
Alternatively, you can save the current value of ON_ERROR_STOP (normalizing an unset variable to off) and restore it afterwards. Note that \gset stores the result under the lowercase column alias, and that an unset variable reference is left unexpanded by psql, which is what the first CASE branch detects:
\set CACHE_ON_ERROR_STOP :ON_ERROR_STOP
SELECT CASE
           -- if ON_ERROR_STOP was unset, the cache holds the
           -- literal string ':ON_ERROR_STOP'
           WHEN :'CACHE_ON_ERROR_STOP' = ':ON_ERROR_STOP' THEN 'off'
           WHEN :'CACHE_ON_ERROR_STOP'::boolean THEN 'on'
           ELSE 'off'
       END AS cache_on_error_stop
\gset
\set ON_ERROR_STOP off
-- your code here
\set ON_ERROR_STOP :cache_on_error_stop

COPY command not returning row count

I have two DB instances both running PG 9.4
When I issue the COPY command in one, it returns the number of rows affected; however, in the second DB, which is set up the same, it does not.
I see nothing in the config that is different or that may affect this. The imports do not error and import successfully in both cases.
The documentation states it should return the count as long as it's not copying to stdout.
This line in the documentation looks pertinent, but I'm not sure it applies to my situation:
Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.
The command I'm issuing is:
COPY [tablename] from '/var/lib/pgsql/datafile.csv'
At the moment I'm down to looking at PuTTY session variables, but I'm not sure this is the way to go.
Does anyone have any ideas as to why this may be happening?
When psql is quiet, it doesn't display these messages.
Quiet mode is activated with -q or \set QUIET on.
Example:
test=# copy test to '/tmp/foo';
COPY 8
test=# \set QUIET on
test=# copy test to '/tmp/foo';
test=#

Suppress "current transaction is aborted…" messages in PostgreSQL

I have a very big SQL dump I'm working on. The overall structure looks like this:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT …;
-- … ~300K insert operations
INSERT …;
COMMIT;
The problem is that if there is an error in any single statement, the error is shown, and the message current transaction is aborted, commands ignored until end of transaction block is generated for EACH statement that follows it.
Why does it behave so weirdly? Is there a way to suppress the repeated messages? It's enough to just show the real error message and to skip the rest of the transaction. I don't want to see ~300K meaningless error messages.
Do I need to structure my dump differently? Or is there a flag/option I can use?
Presumably you're using psql to send the queries to the server.
You may set the ON_ERROR_STOP built-in variable to on.
From https://www.postgresql.org/docs/current/static/app-psql.html:
ON_ERROR_STOP
By default, command processing continues after an error. When this variable is set to on, processing will instead stop immediately. In interactive mode, psql will return to the command prompt; otherwise, psql will exit, returning error code 3 to distinguish this case from fatal error conditions, which are reported using error code 1.
It may be set from outside psql with psql -v ON_ERROR_STOP=on -f script.sql, or from inside the script or interactively with the meta-command \set ON_ERROR_STOP on.
pg_dump does not have an option to add this automatically to a dump (as far as I know, it doesn't emit any psql meta-command anyway, only pure SQL commands).
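Since pg_dump won't emit the meta-command itself, one option is to prepend it to an existing dump before feeding the dump to psql. A small sketch; the helper name add_on_error_stop and the file/database names are made up for illustration:

```shell
# Emit "\set ON_ERROR_STOP on" followed by the contents of the given
# SQL dump file, so psql aborts at the first failing statement.
add_on_error_stop() {
  printf '\\set ON_ERROR_STOP on\n'
  cat "$1"
}

# Real use might look like:
#   add_on_error_stop dump.sql | psql -d mydb
```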

How can I stop a Postgres script when it encounters an error?

Is there a way to specify that a SQL script should stop when it encounters the first error? By default, execution continues regardless of previous errors.
I think the solution of adding the following to .psqlrc is far from perfect:
\set ON_ERROR_STOP on
There is a much simpler and more convenient way: run psql with the parameter:
psql -v ON_ERROR_STOP=1
It is better to also use the -X parameter, which turns off .psqlrc file usage.
Works perfectly for me.
P.S. The solution comes from a great post by Peter Eisentraut. Thank you, Peter!
http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html
I assume you are using psql; this might be handy to add to your ~/.psqlrc file.
\set ON_ERROR_STOP on
This will make it abort on the first error. Without it, even inside a transaction psql will keep executing your script, but every statement will fail until the end of the script.
And you probably want to use a transaction, as Paul said. That can also be done with psql --single-transaction ... if you don't want to alter the script.
So a complete example, with ON_ERROR_STOP in your .psqlrc:
psql --single-transaction --file /your/script.sql
It's not exactly what you want, but if you start your script with begin transaction; and end it with end transaction;, it will effectively skip everything after the first error, and it will then roll back everything it did before the error.
I always like to reference the manual directly.
From the PostgreSQL Manual:
Exit Status
psql returns 0 to the shell if it finished normally, 1 if a fatal
error of its own occurs (e.g. out of memory, file not found), 2 if the
connection to the server went bad and the session was not interactive,
and 3 if an error occurred in a script and the variable
ON_ERROR_STOP was set.
By default, if the SQL code you are running on the PostgreSQL server raises an error, psql won't quit. It will catch the error and continue. If, as mentioned above, you set ON_ERROR_STOP to on, then when psql catches an error in the SQL code it will exit and return 3 to the shell.
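The exit statuses quoted above can be dispatched on in a wrapper script. A sketch, where the report_status helper and the database/script names are hypothetical:

```shell
# Map psql's documented exit statuses to human-readable messages.
report_status() {
  case "$1" in
    0) echo "finished normally" ;;
    1) echo "fatal psql error (e.g. out of memory, file not found)" ;;
    2) echo "connection to the server went bad" ;;
    3) echo "error in the script (ON_ERROR_STOP was set)" ;;
    *) echo "unknown status: $1" ;;
  esac
}

# Real use might look like:
#   psql -v ON_ERROR_STOP=on -d mydb -f script.sql
#   report_status $?
```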