INSERT statement that does not fire an INSERT trigger - postgresql

I am using PostgreSQL 9.2 and I need to write an INSERT statement which copies data from table A to table B without firing the INSERT trigger defined on table B (perhaps some sort of bulk-insert operation?).
Many INSERT, UPDATE and DELETE operations are executed on this specific table (table B), and during each and every one of those executions a trigger must fire.
I cannot temporarily disable the triggers, because the standard, day-to-day DML operations must keep firing them.
Can anyone help me with the syntax for this non-trigger-firing INSERT statement?

Run your "privileged" inserts as a different user. That way your trigger can check the current user and exit if it shouldn't do anything.

Related

Postgres concurrent transactions unexpected issue

When the following transaction is run concurrently on different connections it sometimes errors with
trigger "my_trigger" for relation "my_table" already exists
What am I doing wrong?
BEGIN;
DROP TRIGGER IF EXISTS my_trigger ON my_table;
CREATE TRIGGER my_trigger
AFTER INSERT ON my_table
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE my_function();
COMMIT;
I am trying to set up a system where I can add triggers to notify about data changes in specific tables. If a table already has such a trigger, skip it; otherwise, create all CRUD triggers. This logic needs to run sequentially in case of concurrent requests.
After trying ISOLATION LEVEL SERIALIZABLE I noticed that conflicting transactions fail and are dropped (I would need to manually check the SQL status and retry). What I want instead is for these transactions to queue up and run one by one in the order they were sent.
At the moment I am trying to achieve this with a my_triggers (table_name TEXT) table that has a BEFORE INSERT OR DELETE trigger, inside which I do the actual per-table trigger upsert logic. Inserts and deletes on my_triggers are made with LOCK TABLE my_triggers IN ACCESS EXCLUSIVE MODE ..., which should queue up conflicting transactions, shouldn't it?
What happens is the following:
BEGIN....DROP TRIGGER IF EXISTS....CREATE TRIGGER....COMMIT;
..BEGIN....DROP TRIGGER IF EXISTS....CREATE TRIGGER--------EXCEPTION.
Both transactions start while the trigger does not exist yet.
Both succeed with DROP TRIGGER because of the IF EXISTS clause.
The first transaction starts creating the trigger. For that, a SHARE ROW EXCLUSIVE lock is placed on table my_table. SHARE ROW EXCLUSIVE conflicts with itself, so no other transaction is allowed to create a trigger until the first one completes.
The second transaction blocks on CREATE TRIGGER.
The first transaction completes.
The second transaction proceeds with CREATE TRIGGER, but the trigger already exists, so an exception is raised.
What you need is to take a LOCK before the DROP TRIGGER statement. This ensures the drop/create pair cannot interleave with a concurrent transaction:
BEGIN;
LOCK TABLE my_table IN SHARE ROW EXCLUSIVE MODE;
DROP TRIGGER IF EXISTS my_trigger ON my_table;
CREATE TRIGGER my_trigger
AFTER INSERT ON my_table
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE my_function();
COMMIT;
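As an additional sketch, not part of the answer above: a transaction-scoped advisory lock can serialize the drop/create pair as well, without taking a heavyweight lock on my_table itself; the key 42 is an arbitrary number your application reserves for this purpose:
BEGIN;
SELECT pg_advisory_xact_lock(42); -- blocks until any concurrent holder commits
DROP TRIGGER IF EXISTS my_trigger ON my_table;
CREATE TRIGGER my_trigger
AFTER INSERT ON my_table
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE my_function();
COMMIT;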

Cannot drop priorly modified new table in execute block

I'm not well acquainted with FB database and its subtleties.
When executing the script, the following problem occurs:
EXECUTE ibeblock
AS
BEGIN
-- 1. Create temporary table
execute statement 'recreate GLOBAL TEMPORARY table TMPTBL (ID bigint) /*on commit delete rows*/;';
commit;
-- 2. dummy fill of temporary table
insert into tmptbl (ID)
values (0xFE);
commit; -- not necessary
-- 3. perform some actions...
-- 4. Delete temporary table
execute statement 'drop table TMPTBL;';
commit; -- FAILURE!
END
The idea of script is primitive: 1) create temporary table; 2) fill it with records; 3) perform actions on other DB objects using populated records; 4) drop temp table.
For this simulation, step 3 is irrelevant and skipped. Step 4 leads to an error on commit: "This operation is not defined for system tables. unsuccessful metadata update. object TABLE "TMPTBL" is in use.".
Neither triggers nor constraints are defined on the table, so apparently nothing should be locking the temp table.
Please help me find a resolution; hopefully I just missed something.
P.S.: FB 2.5, with IBExpert 2017.12.13.1 as the DB management tool.
There are a number of problems with your code:
A global temporary table is intended as a permanent object; it is just the content that is temporary (for either transaction or connection duration). So normally you would create a global temporary table once and not drop it, but instead reuse its definition.
Although you technically can execute DDL using execute statement, you are not supposed to, and it is not guaranteed to work. Your code is specifically an example of one of the things that will not work.
The problem here is that you are trying to drop the table in the same transaction that used it (though, to be honest, I'm surprised the insert even worked, because normally you can't insert into a table that was created in the same transaction).
The insert you executed on TMPTBL marks the table as in use, and given that the transaction isn't committed yet, you can't drop the table: it is in use.
You shouldn't call commit in PSQL code (to be honest, I thought this wasn't even possible).
In short, you need to rethink how you use global temporary tables: define the table once, and do not use execute statement to create it, but create it separately.
If you do want to create and drop it and not retain the definition of the global temporary table, then create it before the execute block, commit, then the execute block (with only the inserts and the 'perform some actions'), commit, and then drop it (and commit).
Alternatively, you might get away with executing the create using execute statement ... with autonomous transaction, the inserts and the 'perform some actions' in another execute statement ... with autonomous transaction, and finally the drop in yet another execute statement ... with autonomous transaction. However, that makes your code very brittle, and it is not a recommended approach.
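For illustration, a minimal sketch of the create-first approach recommended above, as a plain isql script with no EXECUTE BLOCK needed for the DDL (the definition is permanent, so there is no drop step):
CREATE GLOBAL TEMPORARY TABLE TMPTBL (ID BIGINT) ON COMMIT DELETE ROWS;
COMMIT;
-- from now on, any transaction can fill and read the table;
-- ON COMMIT DELETE ROWS empties it automatically at commit
INSERT INTO TMPTBL (ID) VALUES (254);
-- ... perform some actions using TMPTBL in the same transaction ...
COMMIT;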
I have been forced again by the devops guys to find a robust solution for DB structure upgrades. The requirements: safely combine DDL and DML statements; be able to create temporary tables (for heavy selections); leave no garbage behind. Of course, the upgrade is handled within a single connection.
Following the clues given by Mark, I gained deeper insight and ran lots of experiments.
Here is a template script that actually worked (run with the native isql utility):
SET TERM #;
-- 1. Create temporary table
EXECUTE BLOCK
AS
BEGIN
execute statement 'recreate GLOBAL TEMPORARY table TMPTBL (ID bigint) /*on commit preserve rows*/;';
END#
commit#
-- Data manipulations
EXECUTE BLOCK
AS
declare xid bigint;
BEGIN
-- 2. dummy fill of temporary table
begin
insert into TMPTBL (ID) values (0xFE);
end
-- 3. perform some actions...
for
select tt.ID
from TMPTBL tt
into :xid
do
begin
-- use :xid var
end
END#
commit#
-- 4. Delete temporary table
EXECUTE BLOCK
AS
BEGIN
execute statement 'drop table TMPTBL;';
END#
commit#
SET TERM ;#
Might be useful for someone.
Damn, Firebird does drive you crazy!

insert values on trigger in temporal tables in postgres

So I am new to using procedures and triggers, and it is really confusing me.
I have used temporal tables and basically want to create a history table of records that are inserted, updated or deleted.
In fact I have created my history table, and it works fine when I use this trigger SQL:
DROP TRIGGER if exists versioning_trigger on mytable;
CREATE TRIGGER versioning_trigger BEFORE INSERT OR UPDATE OR DELETE ON mytable
FOR EACH ROW EXECUTE PROCEDURE versioning('sys_period', 'table_history', true);
This creates records of the rows that are updated or deleted: it copies the old row from mytable into the table_history table and then updates the record in mytable. But I also want to insert the updated record from mytable into table_history, so that it holds records of all types ('current active record' and 'record before update'). I also want to insert some other field values into table_history when the trigger is executed.
I want to ask
How is it possible to have different trigger events (BEFORE or AFTER) together in one CREATE TRIGGER query with temporal_tables?
Is it possible to insert new field values in table_history on trigger execution? How can I accomplish this?
https://www.postgresql.org/docs/current/static/plpgsql-trigger.html
A trigger procedure is created with the CREATE FUNCTION command,
declaring it as a function with no arguments and a return type of
trigger
and also
The same trigger can't fire both before and after an event; just create two triggers if you really need that.
https://www.postgresql.org/docs/current/static/sql-createtrigger.html
Determines whether the function is called before, after, or instead of
the event.
Use NEW instead of OLD for new values:
https://www.postgresql.org/docs/current/static/plpgsql-trigger.html
NEW
Data type RECORD; variable holding the new database row for
INSERT/UPDATE operations in row-level triggers. This variable is
unassigned in statement-level triggers and for DELETE operations.
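Putting those pieces together, a hedged sketch of a second, AFTER trigger that also copies the new row version into the history table. The column list (id, value, sys_period) and the changed_by field are hypothetical and must match your actual table_history definition:
CREATE OR REPLACE FUNCTION log_current_version() RETURNS trigger AS $$
BEGIN
    -- NEW holds the freshly inserted/updated row version
    INSERT INTO table_history (id, value, sys_period, changed_by)
    VALUES (NEW.id, NEW.value, NEW.sys_period, current_user);
    RETURN NULL;  -- return value is ignored for AFTER row-level triggers
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER versioning_after_trigger
AFTER INSERT OR UPDATE ON mytable
FOR EACH ROW EXECUTE PROCEDURE log_current_version();
This keeps the existing BEFORE versioning_trigger in place and simply adds a second trigger, matching the "create two triggers" advice above.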

How to refer to the new inserted row in TSQL Trigger

I know that in plpgsql, if one wants to refer to the newly inserted row, you can use "NEW".
How can I do this in T-SQL (Transact-SQL)?
The following is the trigger I am trying to create:
CREATE Trigger setAlertId on rules_table
FOR INSERT AS
DECLARE @max_id integer
SELECT @max_id = (select max(AlertId) from rules_table)
NEW.AlertId = @max_id+1
END
GO
I get the error message:
Incorrect syntax near 'NEW'
Thanks.
inserted and deleted pseudo tables:
DML trigger statements use two special tables: the deleted table and the inserted table. SQL Server automatically creates and manages these tables. You can use these temporary, memory-resident tables to test the effects of certain data modifications and to set conditions for DML trigger actions. You cannot directly modify the data in the tables.
In your case, why don't you use an IDENTITY on the AlertId field so that it increments itself?
If you want to do it in your trigger, you will need to select your primary key from the inserted table and then do an UPDATE on rules_table.
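For illustration, a hedged sketch of that trigger-based approach; RuleId is a hypothetical primary key column, and this version assumes single-row inserts:
CREATE TRIGGER setAlertId ON rules_table
AFTER INSERT AS
BEGIN
    SET NOCOUNT ON;
    -- MAX(AlertId) ignores the NULL AlertId of the row just inserted
    UPDATE r
    SET AlertId = (SELECT ISNULL(MAX(AlertId), 0) + 1 FROM rules_table)
    FROM rules_table r
    JOIN inserted i ON r.RuleId = i.RuleId;  -- RuleId: hypothetical primary key
END
GO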

Can "Insert Trigger For Each Row After Each Statement" use index with the newly added values?

I am using Postgres 9.3.
I just added a trigger to a table.
It is an AFTER INSERT trigger which is executed once for each statement.
I coded the trigger function assuming the index of the same table contains the newly added rows.
If this is not true, mass inserts will slow down significantly.
I googled it a bit but couldn't find an answer.
So, to sum up, my question is: after a statement, is the index updated before or after the AFTER INSERT ... FOR EACH STATEMENT trigger fires in Postgres 9.3?
Here is the trigger definition I've used:
CREATE TRIGGER trigger_name
AFTER INSERT OR UPDATE
ON table_name
FOR EACH STATEMENT
EXECUTE PROCEDURE trigger_function();
An AFTER trigger FOR EACH ROW will see that row in the table. For that to happen reliably the row must have already been added to any indexes. So the index has been updated.
However, if, within the AFTER trigger, you attempt to modify the table that caused the trigger to fire, this usually results in an infinite loop and an error. It is rarely the correct thing to do.
Usually when you're trying to do that, you actually want a BEFORE trigger that modifies the row before it is saved.
If you need to modify some other row in the same table, that often suggests a data model problem. You should very rarely, if ever, need to modify one row in a table using a trigger when a different row is modified.
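For illustration, a minimal sketch of the BEFORE-trigger pattern mentioned above; the function name and the updated_at column are hypothetical:
CREATE OR REPLACE FUNCTION set_row_defaults() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();  -- updated_at is a hypothetical column
    RETURN NEW;               -- the modified row is what gets stored and indexed
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER set_defaults_trigger
BEFORE INSERT OR UPDATE ON table_name
FOR EACH ROW EXECUTE PROCEDURE set_row_defaults();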