How can I prevent recursive execution of a trigger? Let's say I want to build a tree-like description of a chart of accounts. When a new record is inserted or updated, I update the parent record's down_qty, which in turn fires the update trigger recursively.
Right now my code works; I put this at the top of the UPDATE trigger:
-- prevents recursive trigger
if new.track_recursive_trigger <> old.track_recursive_trigger then
return new;
end if;
And this is the sample code from my trigger when I need to update the parent record's qty:
update account_category set
track_recursive_trigger = track_recursive_trigger + 1, -- i put this line to prevent recursive trigger
down_qty = down_qty - (old.down_qty + 1)
where account_category_id = m_parent_account;
I'm wondering whether there's a way in PostgreSQL to detect a recursive trigger without introducing a new column, something analogous to MSSQL's trigger_nestlevel.
[EDIT]
I loop over the tree myself and bubble the down_qty of each account_category up to its root. For example, when I insert a new account category, it needs to increment the down_qty of its parent account_category; likewise, when I change an account category's parent, I need to decrement the down_qty of its previous parent. Although I think PostgreSQL could handle the recursion, I'm not letting it fire the trigger recursively. I used MSSQL before, where the trigger nesting depth is limited to 16 levels.
This is what I do in PostgreSQL 9.2, although I must admit I haven't found this exact approach documented. The function pg_trigger_depth() (documented here) is what I use to distinguish the original call from nested calls in the trigger.
CREATE TRIGGER trg_taxonomic_positions
AFTER INSERT OR UPDATE OF taxonomic_position
ON taxon_concepts
FOR EACH ROW
WHEN (pg_trigger_depth() = 0)
EXECUTE PROCEDURE trg_taxonomic_positions()
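Applied to the schema from the question, a minimal sketch could look like this (the trigger/function name trg_account_category_down_qty is hypothetical; the guard simply suppresses the nested invocations caused by the parent-row update):
CREATE TRIGGER trg_account_category_down_qty
AFTER INSERT OR UPDATE
ON account_category
FOR EACH ROW
WHEN (pg_trigger_depth() = 0) -- skip calls fired from inside another trigger
EXECUTE PROCEDURE trg_account_category_down_qty();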
In Postgres, it's up to you to track trigger recursion.
If a trigger function executes SQL commands then these commands might fire triggers again. This is known as cascading triggers. There is no direct limitation on the number of cascade levels. It is possible for cascades to cause a recursive invocation of the same trigger; for example, an INSERT trigger might execute a command that inserts an additional row into the same table, causing the INSERT trigger to be fired again. It is the trigger programmer's responsibility to avoid infinite recursion in such scenarios.
https://www.postgresql.org/docs/13/trigger-definition.html
At the beginning of the trigger's definition you can disable the triggers on that particular table, and re-enable them at the end (and make sure an exception doesn't terminate execution before that point!). This has many deep holes, but may work for some lightweight implementations. Notice that for this implementation you will also need the privileges to disable triggers.
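For example, a minimal sketch of that approach (the trigger name trg_account_category_qty and the row id are placeholders; disabling a trigger requires owning the table or superuser rights):
BEGIN;
ALTER TABLE account_category DISABLE TRIGGER trg_account_category_qty;
-- ... run the updates that would otherwise re-fire the trigger ...
UPDATE account_category
SET down_qty = down_qty + 1
WHERE account_category_id = 42; -- hypothetical row
ALTER TABLE account_category ENABLE TRIGGER trg_account_category_qty;
COMMIT;
Note that ALTER TABLE ... DISABLE TRIGGER takes a strong lock on the table and blocks concurrent writers until the transaction ends, which is one of the "deep holes" mentioned above.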
To avoid unbounded recursion, see my answer here. As others have commented, if your data structure is a true tree (the roots have no parents), the recursion will always stop at the roots. For nodes with only one parent pointer, the only way to get unbounded recursion would be if loops were present. (The method in my link visits any node at most once; a sketch of such an upward walk follows.)
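A rough sketch of that upward walk as a recursive CTE, assuming a parent_account_category_id column (a NULL parent marks a root, so the recursion stops there; the starting id is hypothetical):
WITH RECURSIVE up AS (
    SELECT account_category_id, parent_account_category_id
    FROM account_category
    WHERE account_category_id = 42 -- hypothetical starting node
    UNION ALL
    SELECT c.account_category_id, c.parent_account_category_id
    FROM account_category c
    JOIN up ON up.parent_account_category_id = c.account_category_id
)
SELECT * FROM up;
In a true tree each step moves strictly closer to a root, so every node on the path is visited exactly once.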
Related
I had a problem where some of the CTEs didn't run in the order I wanted, and I had no way to reference one from the other.
WITH insert_cte AS (
INSERT INTO some_table
SELECT *
FROM (...) AS some_values_from_first_relationship
)
UPDATE some_table
-- here I had no way to reference insert_cte, and the values from the first relationship were not updated
SET <some_values_from_first_and_second_relationship>
https://dbfiddle.uk/?rdbms=postgres_14&fiddle=c584581a91fbb1ca8f51c3b32c04283f
So I created a server-side function via CREATE OR REPLACE FUNCTION/PROCEDURE and moved each CTE into its own BEGIN ... END; logic block, like:
<<main_label>>
BEGIN
<<insert_cte_analogue>>
BEGIN
[insert_cte_logic]
END;
<<update_cte_analogue>>
BEGIN
[update_cte_logic]
END;
END;
Will it run sequentially, or am I going to run into the same problem as with the CTEs?
I apologize for the comment I left on your last question suggesting that you force the execution order by referencing the previous CTE. I use that frequently for setting FK values that rely on PKs generated in prior insert CTEs and force the order by referring to what comes back from RETURNING *.
I have never tried your use case, and the docs say it is not possible to update the same row twice within a single statement:
https://www.postgresql.org/docs/current/queries-with.html#QUERIES-WITH-MODIFYING
Trying to update the same row twice in a single statement is not supported. Only one of the modifications takes place, but it is not easy (and sometimes not possible) to reliably predict which one. This also applies to deleting a row that was already updated in the same statement: only the update is performed. Therefore you should generally avoid trying to modify a single row twice in a single statement. In particular avoid writing WITH sub-statements that could affect the same rows changed by the main statement or a sibling sub-statement. The effects of such a statement will not be predictable.
Okay, it works. First the insert_cte_analogue block created the rows, then the update_cte_analogue block updated those rows. I didn't need a commit between the blocks, and everything ran without errors. I think the logical blocks will always run sequentially.
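For reference, a minimal runnable sketch of that structure (the table and column names are made up; CREATE PROCEDURE requires PostgreSQL 11+):
CREATE TABLE some_table (val int);

CREATE OR REPLACE PROCEDURE load_some_table()
LANGUAGE plpgsql AS $$
BEGIN
    <<insert_cte_analogue>>
    BEGIN
        INSERT INTO some_table (val)
        SELECT v FROM (VALUES (1), (2), (3)) AS t(v);
    END;

    <<update_cte_analogue>>
    BEGIN
        -- runs only after the block above has completed,
        -- so it sees the freshly inserted rows
        UPDATE some_table SET val = val * 10;
    END;
END;
$$;

CALL load_some_table();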
I created 5 triggers in my small (two-table) database.
After I added the last one (to change INVPOS.INVSYMBOL after INVOICE.SYMBOL has been updated), these triggers activated each other and I got a
Too many concurrent executions of the same request.
error.
Could you please look at the triggers I created and help me out?
What can I do to avoid these problems in future? Should I merge a few triggers into one?
One solution could be to check whether the interesting field(s) have changed and only run the trigger's action when really necessary (the data has changed), i.e.
CREATE TRIGGER Foo FOR T
ACTIVE AFTER UPDATE
AS
BEGIN
  -- only execute the update statement when Fld has actually changed
  IF (NEW.Fld IS DISTINCT FROM OLD.Fld) THEN
  BEGIN
    UPDATE ...
  END
END
Another option could be to check whether the trigger has already done its thing in this transaction, i.e.
CREATE TRIGGER Foo FOR T
ACTIVE AFTER UPDATE
AS
  DECLARE trgrDone VARCHAR(255);
BEGIN
  trgrDone = RDB$GET_CONTEXT('USER_TRANSACTION', 'Foo');
  IF (trgrDone IS NULL) THEN
  BEGIN
    -- the trigger hasn't been executed yet in this transaction;
    -- register the execution (capturing the return value of RDB$SET_CONTEXT)
    trgrDone = RDB$SET_CONTEXT('USER_TRANSACTION', 'Foo', 1);
    -- do the work which might cause re-entry
    UPDATE ...
  END
END
You should avoid circular references between triggers.
In general, triggers are not suitable for complex business logic; they work well for simple "if-then" business rules.
For the case you described you'd be better off implementing a stored procedure where you can prepare the data for all tables (perform data checks, calculate necessary values, etc.) and then insert them. It will lead to straightforward, fast and easy-to-maintain code.
Also, use CHECK constraints to prevent inserting 0 into AMOUNT and PRICENET, and computed fields for tasks like calculating NETVAL.
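For example, a sketch in Firebird SQL of what that could look like (AMOUNT, PRICENET and NETVAL come from this thread; the column types and the id column are assumptions):
CREATE TABLE INVPOS (
    INVPOS_ID INTEGER NOT NULL PRIMARY KEY,
    AMOUNT    NUMERIC(15,2) NOT NULL CHECK (AMOUNT <> 0),
    PRICENET  NUMERIC(15,2) NOT NULL CHECK (PRICENET <> 0),
    NETVAL    COMPUTED BY (AMOUNT * PRICENET)
);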
Lately I have been fighting with a Firebird server in a project with several clients. I can avoid deadlock problems in my programming environment, but I want to do some of the work in triggers.
Thanks to the advice I've received on Stack Overflow I'm really close to my goal, but I can't find information on how to catch a deadlock in a trigger, wait until the lock is released, and then continue the trigger procedure.
Could someone give me a link or some advice on how to deal with this?
A simple trigger definition with an update or insert inside it:
CREATE TRIGGER XYZ FOR TABLE_X ACTIVE AFTER UPDATE POSITION 0 AS
begin
UPDATE TABLE_X SET FIELD = 1 where condition
end
How can I avoid problems when the row I want to change is locked by another process?
Regards,
Artik
In your comments you say that you want to update the same row that the trigger fired for. In that case no deadlock can occur, as the current transaction already holds the lock on the row, so it is allowed to modify it again. However, your approach is wrong: if you want to modify the content of the same row the trigger fired for, you should not use an AFTER UPDATE trigger but a BEFORE UPDATE trigger, and use the NEW trigger context variables to update one or more columns.
So your trigger should be something like:
CREATE TRIGGER XYZ FOR TABLE_X ACTIVE BEFORE UPDATE POSITION 0 AS
begin
IF condition THEN
NEW.FIELD = 1;
end
I’ve got a table that stores events.
However, the way the system has been designed, events are usually recorded in batches: a set of events (10 or so) is recorded together rather than as single events.
We can assume there is a column called "batch_no" in the events table, so we know which events belong to which batch.
The question: what I am trying to do is execute a trigger function every time a batch of events has finished loading into the table. The problem is that I can't think of how the trigger would know this, rather than simply calling the function for every row.
The solution I've been thinking about involves something like: (a) define a trigger for each row; (b) in its condition, calculate count(select * from events where NEW.batchNO = events.batchNO), wait some time, calculate the same count again, and if the two counts are equal we know the batch has finished loading and we call the function.
Clearly, the solution above sounds complicated. Is there a better or simpler approach? (Or if not, any help with how I could implement what I described?)
You can pass parameters to a trigger function but only in the CREATE TRIGGER statement, which helps to use the same trigger function for multiple triggers, but does not help with your situation.
You need the trigger to fire on a condition that is not known at the time of trigger creation. I see basically three possibilities:
1) Statement-level trigger
Using the FOR EACH STATEMENT clause. The manual:
A trigger that is marked FOR EACH ROW is called once for every row that the operation modifies. For example, a DELETE that affects 10 rows will cause any ON DELETE triggers on the target relation to be called 10 separate times, once for each deleted row. In contrast, a trigger that is marked FOR EACH STATEMENT only executes once for any given operation, regardless of how many rows it modifies (in particular, an operation that modifies zero rows will still result in the execution of any applicable FOR EACH STATEMENT triggers).
Only applicable if you insert all your batches with a single INSERT command (multiple rows), but not more than one batch at a time.
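A sketch of that variant, assuming the events table from the question and a hypothetical trigger function trg_events_after_batch():
CREATE TRIGGER insert_after_batch_stmt
AFTER INSERT ON events
FOR EACH STATEMENT
EXECUTE PROCEDURE trg_events_after_batch();
Under the stated assumption, the function fires exactly once per multi-row INSERT, i.e. once per batch.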
2) WHEN condition for a row-level trigger.
Using the FOR EACH ROW clause plus a WHEN condition. You need version 9.0+ for that.
If you can tell from a single inserted row which one is the last of a batch, then your trigger definition could look like this:
CREATE TRIGGER insert_after_batch
AFTER INSERT ON tbl
FOR EACH ROW
WHEN (NEW.batch_last) -- any expression identifying the last
EXECUTE PROCEDURE trg_tbl_insert_after_batch();
This assumes a column batch_last boolean in your table, where you flag the last row of a batch. Any expression based on column values is possible.
This way the trigger only fires for the last row of each batch. Make it an AFTER trigger, so all rows are already visible in the table and you can query them together for whatever you want to do in your trigger. Probably the way to go.
3) Conditional code inside your trigger
That's basically the fallback for versions before 9.0, which lack the WHEN clause: do the same check inside the trigger function before executing the payload. It's more expensive than 2).
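A sketch of 3), reusing the batch_last flag and the function name from 2) for illustration (batch_no is the column named in the question; the NOTICE stands in for the real payload):
CREATE OR REPLACE FUNCTION trg_tbl_insert_after_batch()
RETURNS trigger AS
$$
BEGIN
    IF NEW.batch_last THEN  -- same check as the WHEN clause in 2)
        -- placeholder for the actual batch processing
        RAISE NOTICE 'batch % finished loading', NEW.batch_no;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;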
I have a trigger function for a table test which has the following code snippet:
IF TG_OP='UPDATE' THEN
IF OLD.locked > 0 AND
( OLD.org_id <> NEW.org_id OR
OLD.document_code <> NEW.document_code OR
-- other columns ...
)
THEN
RAISE EXCEPTION 'Message';
-- more code
So I am statically checking each column's new value against its previous value to ensure integrity. Now, every time my business logic changes and I have to add new columns to that table, I also have to modify this trigger. I thought it would be better if I could somehow check all the columns of the table dynamically, without explicitly typing their names.
How can it be done?
From the 9.0 beta2 documentation on the WHEN clause in triggers (the same expression can also be used within the trigger body in earlier versions):
OLD.* IS DISTINCT FROM NEW.*
or possibly (from 8.2 release notes)
IF row(new.*) IS DISTINCT FROM row(old.*)
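For example, a sketch of using that expression inside the trigger body, keeping the locked check from the question (the function name is made up):
CREATE OR REPLACE FUNCTION trg_test_check_locked()
RETURNS trigger AS
$$
BEGIN
    IF TG_OP = 'UPDATE'
       AND OLD.locked > 0
       AND row(NEW.*) IS DISTINCT FROM row(OLD.*) THEN
        RAISE EXCEPTION 'Message';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;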
Take a look at the information_schema, there is a view "columns". Execute a query to get all current columnnames from the table that fired the trigger:
SELECT
column_name
FROM
information_schema.columns
WHERE
table_schema = TG_TABLE_SCHEMA
AND
table_name = TG_TABLE_NAME;
Loop through the result and there you go!
More information can be found in the fine manual.
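A rough sketch of such a loop inside a trigger function; it compares OLD and NEW per column by converting both rows to jsonb, so it assumes PostgreSQL 9.4+ and the locked column from the question:
CREATE OR REPLACE FUNCTION trg_test_check_all_columns()
RETURNS trigger AS
$$
DECLARE
    col text;
BEGIN
    IF TG_OP = 'UPDATE' AND OLD.locked > 0 THEN
        FOR col IN
            SELECT column_name
            FROM information_schema.columns
            WHERE table_schema = TG_TABLE_SCHEMA
            AND table_name = TG_TABLE_NAME
        LOOP
            IF to_jsonb(OLD) ->> col IS DISTINCT FROM to_jsonb(NEW) ->> col THEN
                RAISE EXCEPTION 'Column % of a locked row was changed', col;
            END IF;
        END LOOP;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;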
In Postgres 9.0 or later add a WHEN clause to your trigger definition (CREATE TRIGGER statement):
CREATE TRIGGER foo
BEFORE UPDATE ON tbl
FOR EACH ROW
WHEN (OLD IS DISTINCT FROM NEW) -- parentheses required!
EXECUTE PROCEDURE ...;
Only possible for triggers BEFORE / AFTER UPDATE, where both OLD and NEW are defined. You'd get an exception trying to use this WHEN clause with INSERT or DELETE triggers.
And radically simplify the trigger function accordingly:
...
IF OLD.locked > 0 THEN
RAISE EXCEPTION 'Message';
END IF;
...
No need to test IF TG_OP='UPDATE' ... since this trigger only works for UPDATE anyway.
Or move that condition in the WHEN clause, too:
CREATE TRIGGER foo
BEFORE UPDATE ON tbl
FOR EACH ROW
WHEN (OLD.locked > 0
AND OLD IS DISTINCT FROM NEW)
EXECUTE PROCEDURE ...;
Leaving only an unconditional RAISE EXCEPTION in your trigger function, which is only called when needed to begin with.
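The remaining trigger function would then be as small as this (a sketch; the function name is made up and the RETURN is never reached, but keeps the function well-formed):
CREATE OR REPLACE FUNCTION trg_raise_locked()
RETURNS trigger AS
$$
BEGIN
    RAISE EXCEPTION 'Message';
    RETURN NULL;  -- never reached
END;
$$ LANGUAGE plpgsql;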
Read the fine print:
In a BEFORE trigger, the WHEN condition is evaluated just before the function is or would be executed, so using WHEN is not materially different from testing the same condition at the beginning of the trigger function. Note in particular that the NEW row seen by the condition is the current value, as possibly modified by earlier triggers. Also, a BEFORE trigger's WHEN condition is not allowed to examine the system columns of the NEW row (such as oid), because those won't have been set yet.
In an AFTER trigger, the WHEN condition is evaluated just after the row update occurs, and it determines whether an event is queued to fire the trigger at the end of statement. So when an AFTER trigger's WHEN condition does not return true, it is not necessary to queue an event nor to re-fetch the row at end of statement. This can result in significant speedups in statements that modify many rows, if the trigger only needs to be fired for a few of the rows.
Related:
Fire trigger on update of columnA or ColumnB or ColumnC
To also address the question title
Is it possible to dynamically loop through a table's columns?
Yes. Examples:
Handle result when dynamic SQL is in a loop
Removing all columns with given name
Iteration over RECORD variable inside trigger
Use PL/Perl or PL/Python. They are much better suited for such tasks. Much better.
You can also install hstore-new and use its row->hstore semantics, but that's definitely not a good idea when using normal datatypes.
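For illustration, a sketch using the contrib hstore extension, which also provides a record-to-hstore conversion (the trigger function name is made up):
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE OR REPLACE FUNCTION trg_report_changed_columns()
RETURNS trigger AS
$$
BEGIN
    IF hstore(NEW) <> hstore(OLD) THEN
        RAISE NOTICE 'changed columns: %', akeys(hstore(NEW) - hstore(OLD));
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;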