How to stop/fire a trigger in a stored procedure dynamically? - T-SQL

Suppose I have a table MyTab in a database, with a trigger on it, for example a delete trigger.
In a stored procedure I want to delete data from this table but suppress the delete trigger for this one deletion, and afterwards put the trigger back to normal. Is it possible to have code in the stored procedure like:
stop trigger on MyTab;
delete from MyTab where ...;
put the trigger back;

As mentioned here, you can disable and enable the trigger, though I'd probably put this in a try-catch and/or transaction so you don't get stuck with your trigger disabled because of an error.
For example:
set xact_abort on; -- Auto-rollback on any error
begin transaction;
alter table MyTab disable trigger TR_MyTab_Delete;
delete from MyTab where 1/0 = 1; -- Causes div by zero error
alter table MyTab enable trigger TR_MyTab_Delete; -- Won't run because of above error
commit;
The script above will throw an error but won't leave the trigger disabled, as set xact_abort on guarantees that my transaction is rolled back, including the disabling of the trigger.
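For completeness, here is a minimal sketch of the try-catch variant mentioned above (the table and trigger names follow the example; the delete predicate and the choice to re-raise in the catch block are assumptions):
begin try
    begin transaction;
    alter table MyTab disable trigger TR_MyTab_Delete;
    delete from MyTab where Id = 42; -- hypothetical predicate
    alter table MyTab enable trigger TR_MyTab_Delete;
    commit;
end try
begin catch
    if @@trancount > 0 rollback; -- undoes both the delete and the disable
    throw; -- re-raise the original error (SQL Server 2012+)
end catch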

Related

Postgres concurrent transactions unexpected issue

When the following transaction is run concurrently on different connections it sometimes errors with
trigger "my_trigger" for relation "my_table" already exists
What am I doing wrong?
BEGIN;
DROP TRIGGER IF EXISTS my_trigger ON my_table;
CREATE TRIGGER my_trigger
AFTER INSERT ON my_table
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE my_function();
COMMIT;
I am trying to set up a system where I can add triggers to notify about data changes in specific tables. If a table already has such a trigger, it is skipped; otherwise all CRUD triggers are created. This logic needs to run sequentially in case of concurrent requests.
After trying ISOLATION LEVEL SERIALIZABLE I noticed that conflicting transactions simply fail and are dropped (I would need to check the SQL status manually and retry). What I want instead is to queue up these transactions and run them one by one in the order they were sent.
At the moment I am trying to achieve this with a my_triggers (table_name TEXT) table that has a BEFORE INSERT OR DELETE trigger. Within this trigger I do the actual table-trigger upsert logic. Inserts and deletes on my_triggers are made with LOCK TABLE my_triggers IN ACCESS EXCLUSIVE MODE ..., which should queue up conflicting CRUD transactions?!
What happens is the following:
BEGIN....DROP TRIGGER IF EXISTS....CREATE TRIGGER....COMMIT;
..BEGIN....DROP TRIGGER IF EXISTS....CREATE TRIGGER--------EXCEPTION.
Both transactions start while the trigger is not present.
Both succeed with DROP TRIGGER because of the IF EXISTS clause.
The first transaction starts creating the trigger. For that, a SHARE ROW EXCLUSIVE lock is placed on table my_table. SHARE ROW EXCLUSIVE conflicts with itself, so no other transaction is allowed to create a trigger until the first one completes.
The second transaction blocks on CREATE TRIGGER.
The first transaction completes.
The second transaction proceeds with CREATE TRIGGER, but the trigger already exists, so the exception is raised.
What you need is to take the LOCK before the DROP TRIGGER statement. This way the whole drop-and-create sequence is serialized, so a concurrent transaction cannot re-create the trigger in between.
BEGIN;
LOCK TABLE my_table IN SHARE ROW EXCLUSIVE MODE ;
DROP TRIGGER IF EXISTS my_trigger ON my_table;
CREATE TRIGGER my_trigger
AFTER INSERT ON my_table
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE my_function();
COMMIT;
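As a side note, if locking my_table itself is too heavy-handed (it also blocks concurrent writes while the triggers are re-created), a transaction-scoped advisory lock can serve as the serialization point instead. This is a sketch of an alternative, not part of the answer above; it assumes all code that manages triggers on my_table agrees on the same lock key:
BEGIN;
-- Queues concurrent runs behind each other; the lock is released
-- automatically at COMMIT or ROLLBACK.
SELECT pg_advisory_xact_lock(hashtext('my_table_triggers'));
DROP TRIGGER IF EXISTS my_trigger ON my_table;
CREATE TRIGGER my_trigger
AFTER INSERT ON my_table
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE my_function();
COMMIT;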

Cannot drop a previously modified new table in an execute block

I'm not well acquainted with the Firebird database and its subtleties.
On executing the script, the following problem occurs:
EXECUTE ibeblock
AS
BEGIN
-- 1. Create temporary table
execute statement 'recreate GLOBAL TEMPORARY table TMPTBL (ID bigint) /*on commit delete rows*/;';
commit;
-- 2. dummy fill of temporary table
insert into tmptbl (ID)
values (0xFE);
commit; -- not necessary
-- 3. perform some actions...
-- 4. Delete temporary table
execute statement 'drop table TMPTBL;';
commit; -- FAILURE!
END
The idea of the script is simple: 1) create a temporary table; 2) fill it with records; 3) perform actions on other DB objects using the populated records; 4) drop the temp table.
For this simulation, step 3 is useless and is skipped. Step 4 leads to an error on commit: "This operation is not defined for system tables. unsuccessful metadata update. object TABLE "TMPTBL" is in use.".
Neither triggers nor constraints are applied to the table, so there should be nothing locking the temp table.
Please help with a resolution. Hopefully I just missed something.
P.S.: FB 2.5, with IBExpert 2017.12.13.1 as the DB management tool.
There are a number of problems with your code:
A global temporary table is intended as a permanent object; only its content is temporary (either per transaction or per connection). So normally you would create a global temporary table once, and not drop it, but reuse its definition instead.
Although you technically can execute DDL using execute statement, you are not supposed to, and it is not guaranteed to work. Your code is specifically an example of one of the things that will not work.
The problem here is that you are trying to drop the table in the same transaction that used it (to be honest, I'm surprised the insert even worked, because normally you can't insert into a table that was created in the same transaction).
The insert you executed on TMPTBL marks the table as in use, and since the transaction isn't committed yet, you can't drop the table: it is in use.
You shouldn't call commit in PSQL code (to be honest, I thought this wasn't even possible).
In short, you need to rethink how you use global temporary tables: define it once, and do not use execute statement to create it, but create it separately.
If you do want to create and drop it rather than retain the definition of the global temporary table, then create it before the execute block, commit, run the execute block (with only the inserts and the 'perform some actions'), commit, and then drop it (and commit).
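A minimal isql-style sketch of that flow (the GTT definition mirrors the question; with the default ON COMMIT DELETE ROWS the fill and the actions must share one transaction, hence a single execute block):
CREATE GLOBAL TEMPORARY TABLE TMPTBL (ID bigint) ON COMMIT DELETE ROWS;
COMMIT;
SET TERM #;
EXECUTE BLOCK
AS
BEGIN
  -- fill the table and perform the actions in one transaction,
  -- because the rows vanish at commit
  INSERT INTO TMPTBL (ID) VALUES (0xFE);
  -- ... perform some actions here ...
END#
SET TERM ;#
COMMIT;
DROP TABLE TMPTBL;
COMMIT;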
Alternatively, you might get away with executing the create using execute statement ... with autonomous transaction, the inserts and the 'perform some actions' in another execute statement ... with autonomous transaction, and finally the drop in yet another execute statement ... with autonomous transaction. However, that makes your code very brittle, and it is not a recommended approach.
The devops guys have again forced me to find a robust solution for DB structure upgrades. Requirements: safely combine DDL and DML statements; be able to create temporary tables (for heavy selections); leave no garbage behind. Of course, the upgrade is handled within a single connection.
Starting from the clues given by Mark, I dug deeper and ran lots of experiments.
Here is a template script that actually worked (using the native isql utility):
SET TERM #;
-- 1. Create temporary table
EXECUTE BLOCK
AS
BEGIN
execute statement 'recreate GLOBAL TEMPORARY table TMPTBL (ID bigint) /*on commit preserve rows*/;';
END#
commit#
-- Data manipulations
EXECUTE BLOCK
AS
declare xid bigint;
BEGIN
-- 2. dummy fill of temporary table
begin
insert into TMPTBL (ID) values (0xFE);
end
-- 3. perform some actions...
for
select tt.ID
from TMPTBL tt
into :xid
do
begin
-- use :xid var
end
END#
commit#
-- 4. Delete temporary table
EXECUTE BLOCK
AS
BEGIN
execute statement 'drop table TMPTBL;';
END#
commit#
SET TERM ;#
Might be useful for someone.
Damn, Firebird does drive you crazy!

Trigger on data changes of a replicated table - Postgres

I have 2 DBs: the 1st is the master DB and the 2nd is a replica (containing just some tables of the master DB). I am trying to add a trigger on a replica table to catch the moment data is replicated from the master DB tables.
I am using a trigger to catch insert, delete and update actions, but it does not seem to work: the trigger only fires for tables changed by ordinary SQL statements, not for the replicated tables.
Is there any way to catch the changes to the replicated tables? I am using Go and following the guide in this post:
https://coussej.github.io/2015/09/15/Listening-to-generic-JSON-notifications-from-PostgreSQL-in-Go/
These are the steps I did:
-- create function
CREATE OR REPLACE FUNCTION notify_event() RETURNS TRIGGER AS $$
DECLARE
data json;
notification json;
BEGIN
IF (TG_OP = 'DELETE') THEN
data = row_to_json(OLD);
ELSE
data = row_to_json(NEW);
END IF;
notification = json_build_object(
'table',TG_TABLE_NAME,
'action', TG_OP,
'data', data);
PERFORM pg_notify('events',notification::text);
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
-- create trigger
CREATE TRIGGER user_warehouse_notify_event
AFTER INSERT OR UPDATE OR DELETE ON users_warehouse_rel
FOR EACH ROW EXECUTE PROCEDURE notify_event();
-- enable replica trigger
ALTER TABLE users_warehouse_rel ENABLE REPLICA TRIGGER user_warehouse_notify_event;
It still doesn't work.
When logical replication applies its changes, session_replication_role is set to replica so that normal triggers are not triggered.
To change that for your trigger, use
ALTER TABLE atable ENABLE ALWAYS TRIGGER trigger_name;
Think twice before using this – badly defined triggers can break replication.
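Applied to the trigger from the question, that would be:
ALTER TABLE users_warehouse_rel ENABLE ALWAYS TRIGGER user_warehouse_notify_event;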

Enabling and Disabling a trigger in a Postgres function

I have a trigger on a table, and I want to catch the messages that come in during an interval. I created a function that receives an integer (a number of seconds) as a parameter.
The pseudocode is something like this:
alter table <table_name> enable trigger <trigger_name>;
PERFORM pg_sleep(<nrofseconds>);
alter table <table_name> disable trigger <trigger_name>;
The problem is that a Postgres function runs as a single transaction: the trigger is enabled and then disabled within that same transaction, which only takes effect at the end of execution when the commit happens, so nothing is ever tracked.
How can I solve this problem?
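One possible way out, assuming PostgreSQL 11 or later and hypothetical names mytable/mytrigger: use a procedure instead of a function, since procedures allow transaction control, so the enabling can be committed before the sleep. A minimal sketch:
CREATE OR REPLACE PROCEDURE capture_for(nrofseconds integer)
LANGUAGE plpgsql
AS $$
BEGIN
    ALTER TABLE mytable ENABLE TRIGGER mytrigger;
    COMMIT; -- make the enabled trigger visible to other sessions
    PERFORM pg_sleep(nrofseconds); -- collect events during the interval
    ALTER TABLE mytable DISABLE TRIGGER mytrigger;
    COMMIT;
END;
$$;
-- must be invoked with CALL, outside an explicit transaction block:
CALL capture_for(60);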

Trigger preventing insert

Is there any reason that having a trigger on a table would prevent the original insert statement from inserting? The trigger is run AFTER the row is inserted in the table, and there is no transaction rollback in the trigger.
It will happen when there is an error in the trigger: when an error occurs in a trigger, the batch is aborted and the statement that fired the trigger is rolled back, so the insert never takes effect.
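A minimal T-SQL sketch to demonstrate (table and trigger names are made up for the example):
create table dbo.Demo (Id int not null);
go
create trigger TR_Demo_Insert on dbo.Demo
after insert
as
begin
    select 1/0; -- any runtime error in the trigger aborts the batch
end;
go
insert into dbo.Demo (Id) values (1); -- fails: the error in the trigger rolls the insert back
go
select count(*) from dbo.Demo; -- returns 0: the original insert did not survive
go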