I'm writing a script for PostgreSQL and since I want it to be executed atomically, I'm wrapping it inside a transaction.
I expected the script to look something like this:
BEGIN
-- 1) Execute some valid actions;
-- 2) Execute some action that causes an error.
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
END; -- A.k.a. COMMIT;
However, in this case pgAdmin warns me about a syntax error right after the initial BEGIN. If I terminate that command by appending a semicolon (BEGIN;), it instead reports an error near EXCEPTION.
I realize that perhaps I'm mixing up syntax for control structures and transactions, however I couldn't find any mention of how to roll back a failed transaction in the docs (nor in SO for that matter).
I also considered that perhaps the transaction is rolled back automatically on error, but it doesn't seem to be the case since the following script:
BEGIN;
-- 1) Execute some valid actions;
-- 2) Execute some action that causes an error.
COMMIT;
fails with: ERROR: current transaction is aborted, commands ignored until end of transaction block, and I then have to manually ROLLBACK; the transaction.
It seems I'm missing something fundamental here, but what?
EDIT:
I tried using DO as well like so:
DO $$
BEGIN
-- 1) Execute some valid actions;
-- 2) Execute some action that causes an error.
EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
END; $$
pgAdmin hits me back with a: ERROR: cannot begin/end transactions in PL/pgSQL. HINT: Use a BEGIN block with an EXCEPTION clause instead. which confuses me to no end, because that is exactly what I am (I think) doing.
POST-ACCEPT EDIT:
Regarding Laurenz's comment: "Your SQL script would contain a COMMIT. That ends the transaction and rolls it back." - this is not the behavior that I observe. Please consider the following example (which is just a concrete version of an example I already provided in my original question):
BEGIN;
-- Just a simple, self-referencing table.
CREATE TABLE "Dummy" (
"Id" INT GENERATED ALWAYS AS IDENTITY,
"ParentId" INT NULL,
CONSTRAINT "PK_Dummy" PRIMARY KEY ("Id"),
CONSTRAINT "FK_Dummy_Dummy" FOREIGN KEY ("ParentId") REFERENCES "Dummy" ("Id")
);
-- Foreign key violation terminates the transaction.
INSERT INTO "Dummy" ("ParentId")
VALUES (99);
COMMIT;
When I execute the script above, I'm greeted with: ERROR: insert or update on table "Dummy" violates foreign key constraint "FK_Dummy_Dummy". DETAIL: Key (ParentId)=(99) is not present in table "Dummy". which is as expected.
However, if I then try to check whether my Dummy table was created or rolled back like so:
SELECT EXISTS (
SELECT FROM information_schema."tables"
WHERE "table_name" = 'Dummy');
instead of a simple false, I get the same error that I already mentioned twice: ERROR: current transaction is aborted, commands ignored until end of transaction block. I then have to terminate the transaction manually by issuing ROLLBACK;.
So to me it seems that either the comment mentioned above is false or at least I'm heavily misinterpreting something here.
You cannot use ROLLBACK in PL/pgSQL, except in certain limited cases inside procedures.
You don't need to explicitly roll back in your PL/pgSQL code. Just let the exception propagate out of the PL/pgSQL code, and it will cause an error, which will cause the whole transaction to be rolled back.
Your comments suggest that this code is called from an SQL script. Then the solution would be to have a COMMIT in that SQL script at some place after the PL/pgSQL code. That ends the transaction, and since the transaction is in an aborted state after the error, the COMMIT is actually executed as a ROLLBACK.
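A minimal sketch of that pattern (the statement bodies are placeholders):
-- script.sql, run as a whole, e.g. with psql -f script.sql
BEGIN;

DO $$
BEGIN
    -- 1) some valid actions
    -- 2) an action that raises an error: just let it propagate
END;
$$;

COMMIT; -- executed as ROLLBACK if the error above aborted the transaction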
I think you must be using an older version, as the exact code from your question (the final DO block) runs without error for me on PostgreSQL 13.1 with pgAdmin 4.28.
It also works fine for me without the exception block: as per this comment, you can remove the exception block within a function, and if an error occurs, the transaction run within it will automatically be rolled back. That appears to be the case, from my limited testing.
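A quick way to observe that behavior, with a throwaway table t2:
CREATE TABLE t2 (x int);

DO $$
BEGIN
    INSERT INTO t2 VALUES (1);
    RAISE EXCEPTION 'boom'; -- no EXCEPTION clause, so the error propagates
END;
$$;

SELECT count(*) FROM t2; -- 0: the INSERT was rolled back with the failed DO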
Related
I am using Postgres 13.5 and I am unsure how to combine commit and error handling in a stored procedure or DO block. I know that if I include the EXCEPTION clause in my block, then I cannot include a commit.
I am new to Postgres. It has also been over 15 years since I have written SQL that was working with transactions. When I was working with transactions I was using Oracle and recall using AUTONOMOUS_TRANSACTION to resolve some of these issues. I am just not sure how to do something like that in Postgres.
Here is a very simplified DO block. As I said above, I know that the COMMITs will cause the procedure to throw an exception. But if I remove the EXCEPTION clause, then how will I trap an error if it happens? After reading many things, I still have not found a solution, so I am clearly not understanding something.
Do
$$
DECLARE
    v_Start timestamptz;
    v_id integer;
    v_message_type varchar(500);
Begin
    select current_timestamp into v_Start;
    select q.id, q.message_type into v_id, v_message_type from message_queue q;
    call Load_data(v_id, v_message_type);
    commit; -- if Load_Data completes successfully, I want to commit the data
    -- "start" and "end" are quoted because END is a reserved word
    insert into log (id, message_type, status, "start", "end")
    values (v_id, v_message_type, 'Success', v_Start, current_timestamp);
    commit; -- commit the log insert for success
EXCEPTION
    WHEN others THEN
        insert into log (id, message_type, status, "start", "end", error_message)
        values (v_id, v_message_type, 'Failure', v_Start, current_timestamp, SQLERRM || ', ' || SQLSTATE);
        commit; -- commit the log insert for failure
end;
$$
Thanks!
Since this is a pattern that I will have to do tens of times, I want to understand the right way to do this.
Since you cannot use transaction management statements in a subtransaction, you will have to move part of the processing to the client side.
But your sample code doesn't need any transaction management at all! Simply remove all the COMMIT statements, and the procedure will work just as you want it to. Remember that PostgreSQL uses the autocommit mode, so your procedure call from the client will automatically run in its own transaction and commit when it is done.
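For illustration, a minimal sketch of that block with the COMMITs removed, keeping the message_queue, log, and load_data objects from the question (the clock_timestamp() initializer is my own choice):
DO
$$
DECLARE
    v_start timestamptz := clock_timestamp();
    v_id integer;
    v_message_type varchar(500);
BEGIN
    SELECT q.id, q.message_type INTO v_id, v_message_type FROM message_queue q;
    CALL load_data(v_id, v_message_type);
    INSERT INTO log (id, message_type, status, "start", "end")
    VALUES (v_id, v_message_type, 'Success', v_start, clock_timestamp());
EXCEPTION
    WHEN OTHERS THEN
        -- the failed work is undone up to the block's implicit savepoint;
        -- this INSERT is part of the outer transaction and commits with it
        INSERT INTO log (id, message_type, status, "start", "end", error_message)
        VALUES (v_id, v_message_type, 'Failure', v_start, clock_timestamp(), SQLERRM || ', ' || SQLSTATE);
END;
$$;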
But perhaps your sample code is simplified, and you would like more complicated processing (looping etc.) in your actual use cases. So let's discuss your options:
One option is to remove the EXCEPTION handler and move only that part to the client side: if the procedure causes an error, the client rolls back and inserts a log message. Another, perhaps cleaner, method is to move the whole transaction management to the client side: replace the procedure with client code and call load_data directly from the client, as sketched below.
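A rough sketch of the second option, written as the plain SQL the client would send (the argument values are made up; the try/catch itself lives in the client language):
BEGIN;
CALL load_data(42, 'some_type');
INSERT INTO log (id, message_type, status, "start", "end")
VALUES (42, 'some_type', 'Success', now(), clock_timestamp());
COMMIT;

-- if any statement above fails, the client catches the error, issues
-- ROLLBACK; and then inserts the 'Failure' log row in a fresh transaction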
I've got an app built on top of PostgreSQL, which makes use of a custom sequence. I think I understand sequences pretty well by now: they are non-transactional, currval is defined only within the current session, etc. But I don't understand this:
2015-10-13 10:37:16 SQLSelect: SELECT nextval('commit_id_seq')
2015-10-13 10:37:16 commit_id_seq: 57
2015-10-13 10:37:16 SQLExecute: UPDATE bid SET is_archived=false,company_id=1436,contact_id=15529,...(etc)...,sharing_policy='' WHERE id = 56229
2015-10-13 10:37:16 ERROR: ERROR: currval of sequence "commit_id_seq" is not yet defined in this session
CONTEXT: SQL statement "INSERT INTO history (table_name, record_id, sec_user_id, created, action, notes, status, before, after, commit_id)
SELECT TG_TABLE_NAME, rec.id, (SELECT id FROM sec_user WHERE name = CURRENT_USER), now(), SUBSTR(TG_OP,1,1), note, stat, oldH, newH, currval('commit_id_seq')"
PL/pgSQL function log_to_history() line 28 at SQL statement
We log every call to the database, and in the case of the SELECT nextval, I also log the result. The above are the exact calls, except that I trimmed the UPDATE statement (because the original is really long).
So, you can see that we just called nextval on the sequence, got a reasonable number back, and then we do an UPDATE that invokes a trigger function that attempts to use currval on that sequence... and it fails, claiming currval is not defined.
Note that this doesn't usually happen, but once it does start happening, it does so consistently (perhaps until the user disconnects from the DB).
How can this be? And what can I do about it?
Your UPDATE statement obviously calls a trigger. The most plausible cause of this error is that the trigger function is in a different schema from where the sequence is defined and the schema of the sequence is not in the search_path. That gives you two options to resolve this:
Make the schema of the sequence visible to the trigger function using SET search_path TO .... Note that this will make all objects in the schema of the sequence visible, which may be something of a security risk, depending on your database design.
Schema-qualify the sequence name in the trigger function: currval('my_schema.commit_id_seq').
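For the first option, a minimal sketch (my_schema stands in for the schema the sequence actually lives in):
ALTER FUNCTION log_to_history() SET search_path = my_schema, public;
This pins the search_path for that function only, so the unqualified currval('commit_id_seq') resolves regardless of the caller's settings.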
Another plausible cause is connection pooling at your application end. Log the session ID (really just the start time and PID of the current session) by adding %c to the log_line_prefix parameter in postgresql.conf. In PostgreSQL, every command runs in its own transaction unless a transaction is explicitly established, and transaction-level pooling software only guarantees that statements stay on the same session while a transaction is open; outside of a transaction there are no guarantees about session persistence. So your nextval() call and the later UPDATE may have run on different sessions. If that is the case, you can wrap the whole set of commands in a BEGIN ... COMMIT block (or use the corresponding call in your pooling software), or better yet, change your code not to depend on a previous nextval() call.
Is a PostgreSQL function such as the following automatically transactional?
CREATE OR REPLACE FUNCTION refresh_materialized_view(name)
RETURNS integer AS
$BODY$
DECLARE
_table_name ALIAS FOR $1;
_entry materialized_views%ROWTYPE;
_result INT;
BEGIN
EXECUTE 'TRUNCATE TABLE ' || _table_name;
UPDATE materialized_views
SET last_refresh = CURRENT_TIMESTAMP
WHERE table_name = _table_name;
RETURN 1;
END
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
In other words, if an error occurs during the execution of the function, will any changes be rolled back? If this isn't the default behavior, how can I make the function transactional?
PostgreSQL 11 update: there is limited support for transaction control in top-level PROCEDUREs. You still cannot manage transactions in regular SQL-callable functions, so the below remains true except when using the new top-level procedures.
Functions are part of the transaction they're called from. Their effects are rolled back if the transaction rolls back. Their work commits if the transaction commits. Any BEGIN ... EXCEPTION blocks within the function operate like (and under the hood use) savepoints, similar to the SAVEPOINT and ROLLBACK TO SAVEPOINT SQL statements.
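A minimal sketch of that savepoint-like behavior (t is a throwaway table):
CREATE TABLE t (x int);

DO $$
BEGIN
    INSERT INTO t VALUES (1);
    PERFORM 1 / 0; -- raises division_by_zero
EXCEPTION
    WHEN division_by_zero THEN
        -- the INSERT above was rolled back to the block's implicit savepoint
        RAISE NOTICE 'caught';
END;
$$;

SELECT count(*) FROM t; -- 0, yet the outer transaction is still healthy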
The function either succeeds in its entirety or fails in its entirety, barring BEGIN ... EXCEPTION error handling. If an error is raised within the function and not handled, the transaction calling the function is aborted. Aborted transactions cannot commit, and if they try to commit the COMMIT is treated as ROLLBACK, same as for any other transaction in error. Observe:
regress=# BEGIN;
BEGIN
regress=# SELECT 1/0;
ERROR: division by zero
regress=# COMMIT;
ROLLBACK
See how the transaction, which is in the error state due to the zero division, rolls back on COMMIT?
If you call a function without an explicit surrounding transaction, the rules are exactly the same as for any other Pg statement:
BEGIN;
SELECT refresh_materialized_view(name);
COMMIT;
(where COMMIT will fail if the SELECT raised an error).
PostgreSQL does not (yet) support autonomous transactions in functions, where the procedure/function could commit/rollback independently of the calling transaction. This can be simulated using a new session via dblink.
BUT, things that aren't transactional, or are only imperfectly transactional, exist in PostgreSQL. If something has non-transactional behaviour in a normal BEGIN; do stuff; COMMIT; block, it has non-transactional behaviour in a function too. For example, nextval and setval changes are never rolled back, and TRUNCATE can be rolled back but is not MVCC-safe.
As my knowledge of PostgreSQL is not as deep as Craig Ringer's, I will try to give a shorter answer: yes.
If you execute a function that has an error in it, none of its steps will affect the database.
The same happens if you execute a query in pgAdmin.
For example, if you execute in a query:
update your_table yt set column1 = 10 where yt.id=20;
select anything_that_do_not_exists;
The update to the row with id = 20 of your_table will not be saved in the database.
UPDATE Sep - 2018
To clarify the concept, I have made a little example with the non-transactional function nextval.
First, let's create a sequence:
create sequence test_sequence start 100;
Then, let's execute:
update your_table yt set column1 = 10 where yt.id=20;
select nextval('test_sequence');
select anything_that_do_not_exists;
Now, if we open another query and execute
select nextval('test_sequence');
We will get 101, because the first value (100) was consumed by the previous query, even though the update was not committed. That is because sequences are not transactional.
https://www.postgresql.org/docs/current/static/plpgsql-structure.html
It is important not to confuse the use of BEGIN/END for grouping statements in PL/pgSQL with the similarly-named SQL commands for transaction control. PL/pgSQL's BEGIN/END are only for grouping; they do not start or end a transaction. Functions and trigger procedures are always executed within a transaction established by an outer query — they cannot start or commit that transaction, since there would be no context for them to execute in. However, a block containing an EXCEPTION clause effectively forms a subtransaction that can be rolled back without affecting the outer transaction. For more about that see Section 39.6.6.
At the function level there is no separate transaction handling. You call the function with
select schemaName.functionName()
and that statement runs as a single transaction (autocommit is on by default); let's name that transaction T1. All the statements inside the function belong to T1, so the whole function executes within a single transaction.
Postgres 14 update: all statements between the BEGIN and END of a procedure's or function's block are executed in a single transaction, so any error raised while executing the block automatically rolls that transaction back. This atomicity covers the work done by triggers as well.
Postgres automatically aborts transactions whenever any SQL statement terminates with an error, which includes any constraint violation. For example:
glyph=# create table foo (bar integer, constraint blug check(bar > 5));
CREATE TABLE
glyph=# begin;
BEGIN
glyph=# insert into foo values (10);
INSERT 0 1
glyph=# insert into foo values (1);
ERROR: new row for relation "foo" violates check constraint "blug"
No message has yet been issued to that effect, but the transaction is now in the aborted state and is doomed to roll back. My personal favorite line of this session is the following:
glyph=# commit;
ROLLBACK
... since "ROLLBACK" seems like an odd success-message for COMMIT. But, indeed, it's been rolled back, and there are no rows in the table:
glyph=# select * from foo;
bar
-----
(0 rows)
I know that I can create a ton of SAVEPOINTs and handle errors in SQL that way, but that involves more traffic to the database, more latency (I might have to handle an error from the SAVEPOINT after all), for relatively little benefit. I really just want to handle the error in my application language anyway (Python) with a try/except, so the only behavior I want out of the SQL is for errors not to trigger automatic rollbacks. What can I do?
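For reference, the SAVEPOINT pattern I mean looks like this, using the foo table from the session above (s1 is an arbitrary savepoint name):
BEGIN;
INSERT INTO foo VALUES (10);
SAVEPOINT s1;
INSERT INTO foo VALUES (1); -- violates the check constraint
ROLLBACK TO SAVEPOINT s1; -- undoes only the failed statement
COMMIT; -- the first insert is kept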
I'm extremely new to PostgreSQL, but one of the examples in the PostgreSQL documentation for triggers / server-side programming looks like it does exactly what you're looking for.
See: http://www.postgresql.org/docs/9.2/static/trigger-example.html
Snippet from the page: "So the trigger acts as a not-null constraint but doesn't abort the transaction."
I know this is a very old ticket, but (as of 2017) PostgreSQL still has this same behavior of automatically rolling back when something goes wrong before the commit. I'd like to share some thoughts here.
I don't know whether this behavior can be changed, and I don't need it to be; it is probably for the best to delegate the rollback to PostgreSQL (it knows what it is doing, right?). Rolling back means returning the data to its original state before the failed transaction, which means that data altered or inserted by triggers is discarded as well. In ACID terms, this is exactly what we want. Say you managed the rollback on the back end yourself: if something went wrong during your custom rollback, or the database were changed concurrently by external transactions while you rolled back, the data would become inconsistent and your whole structure would most likely collapse.
So, knowing that PostgreSQL manages its own rollback strategy, the question to ask is "how can I extend the rollback strategy?". The first thing to consider is "what caused the transaction to fail?". In your try/catch structure, handle all the possible exceptions and either run the transaction again or send feedback to the front-end application with appropriate "don't do that" messages. For me this is the best way of handling things: less code, less overhead, more control, more user-friendliness, and your database will thank you.
A last point I want to shed light on: the SQL standard defines SQLSTATE codes that can be used to communicate with back-end modules. A failing operation during a transaction returns a SQLSTATE code, and you can use these codes to take appropriate action. You can also define your own SQLSTATE codes, as long as they don't clash with the reserved ones (https://www.postgresql.org/message-id/20185.1359219138%40sss.pgh.pa.us).
For instance in a plpgsql function
...
$$
begin
-- ... do something ... and if it goes wrong:
raise exception 'custom exception message' using errcode='12345';
end
$$
...
This is an example using PDO in PHP (using the error code above):
...
$pdo->beginTransaction();
try {
$s = $pdo->prepare('...');
$s->execute([$value]);
/**
 * Simulate a null violation exception.
 * If it fails, PDO will not wait for the commit:
 * it throws the exception immediately, and the
 * code below is not executed.
 */
$s->execute([null]);
/**
 * If nothing went wrong, we commit to change
 * the database state.
 */
$pdo->commit();
}
catch (PDOException $e) {
/**
 * It is important to also call commit here.
 * On the aborted transaction it is executed as a
 * PostgreSQL ROLLBACK, and it puts the PDO object
 * back into auto-commit mode.
 */
$pdo->commit();
if ($e->getCode() === '12345') {
send_feedback_to_client('please do not hack us.');
}
}
...
I would strongly suggest SQLAlchemy and its subtransactions. You can write code like:
#some code
Session.begin(subtransactions=True)
#more stuff including sql activity, then:
try:
    # get the security; begin(nested=True) opens a SAVEPOINT that is
    # rolled back automatically if the block raises
    with Session.begin(nested=True):
        foo = MyCodeToMakeFOO(args)
        Session.add(foo)
        Session.flush()
except Exception:
    log.error("Database hated your foo(%s) but I'll do the rest" % (args,))
Most useful when the subtransaction is in a loop where you want to process the good records and log the bad ones.
I have this trigger in PostgreSQL that I just can't get to work (it does nothing). For reference, here's how I defined it:
CREATE TABLE documents (
...
modification_time timestamp with time zone DEFAULT now()
);
CREATE FUNCTION documents_update_mod_time() RETURNS trigger
AS $$
begin
new.modification_time := now();
return new;
end
$$
LANGUAGE plpgsql;
CREATE TRIGGER documents_modification_time
BEFORE INSERT OR UPDATE ON documents
FOR EACH ROW
EXECUTE PROCEDURE documents_update_mod_time();
Now to make it a bit more interesting: how do you debug triggers?
Use the following code within a trigger function, then watch the 'messages' tab in pgAdmin3 or the output in psql:
RAISE NOTICE 'myplpgsqlval is currently %', myplpgsqlval; -- either this
RAISE EXCEPTION 'failed'; -- or that
To see which triggers actually get called, how many times etc, the following statement is the life-saver of choice:
EXPLAIN ANALYZE UPDATE table SET foo='bar'; -- shows the called triggers
Note that if your trigger is not getting called and you use inheritance, it may be that you've only defined a trigger on the parent table, whereas triggers are not inherited by child tables automatically.
To step through the function, you can use the debugger built into pgAdmin3, which on Windows is enabled by default; all you have to do is execute the code found in ...\8.3\share\contrib\pldbgapi.sql against the database you're debugging, restart pgAdmin3, right-click your trigger function, hit 'Set Breakpoint', and then execute a statement that would cause the trigger to fire, such as the UPDATE statement above.
Turns out I was using inheritance in the above problem and forgot to mention it. Now for everybody who might run into this as well, here's some debugging hints:
Use the following code to debug what a trigger is doing:
RAISE NOTICE 'test'; -- either this
RAISE EXCEPTION 'failed'; -- or that
To see which triggers actually get called, how many times etc, the following statement is the life-saver of choice:
EXPLAIN ANALYZE UPDATE table SET foo='bar'; -- shows the called triggers
Then there's the one thing I didn't know before: triggers only fire when updating the exact table they're defined on. If you use inheritance, you MUST define them on the child tables as well!
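For example, with the documents trigger from the question, each child table needs its own copy (documents_child is a made-up name):
CREATE TRIGGER documents_child_modification_time
BEFORE INSERT OR UPDATE ON documents_child
FOR EACH ROW
EXECUTE PROCEDURE documents_update_mod_time();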
You can use 'raise notice' statements inside your trigger function to debug it. To debug the trigger not being called at all is another story.
If you add a 'raise exception' inside your trigger function, can you still do inserts/updates?
Also, if your update test occurs in the same transaction as your insert test, now() will be the same (since it's only calculated once per transaction) and therefore the update won't seem to do anything. If that's the case, either do them in separate transactions, or if this is a unit test and you can't do that, use clock_timestamp().
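A quick way to see the difference, run as a single transaction:
BEGIN;
SELECT now(), clock_timestamp(); -- now() is frozen for the whole transaction
SELECT pg_sleep(1);
SELECT now(), clock_timestamp(); -- now() is unchanged; clock_timestamp() has advanced
COMMIT;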
I have a unit test that depends on some time going by between transactions, so at the beginning of the unit test I have something like:
ALTER TABLE documents
ALTER COLUMN modification_time SET DEFAULT clock_timestamp();
Then in the trigger, use "set modification_time = default".
So normally it doesn't do the extra calculation, but during a unit test this allows me to do inserts with pg_sleep in between to simulate time passing and actually have that be reflected in the data.
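An alternative, not the default-based trick described above but simpler if the extra calculation is acceptable, is to call clock_timestamp() directly in the trigger:
CREATE OR REPLACE FUNCTION documents_update_mod_time() RETURNS trigger
AS $$
begin
    -- unlike now(), clock_timestamp() advances within a transaction
    new.modification_time := clock_timestamp();
    return new;
end
$$
LANGUAGE plpgsql;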