DB2 trigger: loop through all fields/columns in a row

I'm trying to create a trigger which will loop through all columns in the changed row, compare the new and old values, and then insert a row into a transaction history table for each column that changed (where NEW.column != OLD.column).
I can't seem to find any documentation on how to do this. I've found plenty about using cursors for rows, but not for using them for columns.
I have something like this:
CREATE OR REPLACE TRIGGER ADD_AUDIT_HISTORY AFTER INSERT OR DELETE OR UPDATE ON MY_TABLE
REFERENCING OLD AS o
NEW AS n
FOR EACH ROW MODE DB2SQL
BEGIN
-- LOOP THROUGH COLUMNS
IF newColumnValue != oldColumnValue THEN
INSERT INTO AUDIT_HISTORY (columnName, oldValue, newValue)
VALUES (columnName, oldColumnValue, newColumnValue);
END IF;
-- /LOOP THROUGH COLUMNS
END
The trigger needs to work on INSERT, UPDATE and DELETE, so I know I may also need some juggling over whether I pull the columns from OLD or NEW.
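For what it's worth, DB2 SQL PL offers no construct to iterate over the columns of a row inside a trigger, so the usual workaround is one explicit comparison per column. A minimal UPDATE-only sketch, assuming a hypothetical column COL1 (INSERT and DELETE would need their own triggers, or event predicates on newer DB2 versions, logging n.* or o.* respectively):
CREATE OR REPLACE TRIGGER ADD_AUDIT_HISTORY_UPD
AFTER UPDATE ON MY_TABLE
REFERENCING OLD AS o NEW AS n
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
-- COL1 is a placeholder; repeat this block for every audited column,
-- with the extra predicates handling NULL transitions
IF o.COL1 <> n.COL1
OR (o.COL1 IS NULL AND n.COL1 IS NOT NULL)
OR (o.COL1 IS NOT NULL AND n.COL1 IS NULL) THEN
INSERT INTO AUDIT_HISTORY (columnName, oldValue, newValue)
VALUES ('COL1', o.COL1, n.COL1);
END IF;
END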

Related

How to exclude columns passed to a procedure by a FOR EACH ROW trigger in PostgreSQL, or at least exclude them afterwards

I have an issue with creating PostgreSQL triggers.
I have a trigger executing amendments to a table on a FOR EACH ROW basis, but I need to process all columns in these rows except one.
I tried to find a way to prevent one of the columns from being passed to the procedure, but as I understand it this happens automatically and cannot be interfered with.
The OLD and NEW variables holding the old and new rows are passed to the procedure, and I was searching for a way to remove columns from OLD/NEW for further processing, but have failed so far.
I have a simple table1 with columns:
name
number
description
I have following trigger
CREATE TRIGGER table1_trigger
AFTER INSERT OR UPDATE OR DELETE ON table1
FOR EACH ROW EXECUTE PROCEDURE trigger_proc();
and following procedure
CREATE OR REPLACE FUNCTION trigger_proc() RETURNS TRIGGER AS
$$
DECLARE
old_v jsonb;
new_v jsonb;
BEGIN
old_v = json_strip_nulls(row_to_json(OLD));
new_v = json_strip_nulls(row_to_json(NEW));
RETURN NULL; -- a trigger function must end with RETURN; NULL is fine for an AFTER trigger
END;
$$ LANGUAGE plpgsql;
After executing for example:
insert into table1 (name, number, description)
values ('name', 1, 'useless text to vanish')
I need the column "description" either to be excluded
at the stage of passing the row to the procedure in "FOR EACH ROW EXECUTE PROCEDURE trigger_proc()",
or to be removed from OLD/NEW in the procedure,
or the OLD/NEW data to be copied to new variables of RECORD type, but of course without the "description" column,
or any other way where old_v and new_v will not contain data from this column!
Thanks in advance for any help!
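A column cannot be excluded at the point the row is passed to the trigger procedure, but it can be stripped afterwards. A minimal sketch of trigger_proc() using the jsonb "-" operator (Postgres 9.5+), with TG_OP guards because OLD is not defined on INSERT and NEW is not defined on DELETE:
CREATE OR REPLACE FUNCTION trigger_proc() RETURNS TRIGGER AS
$$
DECLARE
old_v jsonb;
new_v jsonb;
BEGIN
-- "- 'description'" removes that key from the jsonb value
IF TG_OP <> 'INSERT' THEN
old_v := to_jsonb(OLD) - 'description';
END IF;
IF TG_OP <> 'DELETE' THEN
new_v := to_jsonb(NEW) - 'description';
END IF;
-- ... further processing with old_v / new_v ...
RETURN NULL;
END;
$$ LANGUAGE plpgsql;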

Accidentally deleted all data from a table, insert dummy data into the table using a loop in psql

I accidentally deleted most of the rows in my Postgres table (the data is not important, it's my test environment, but I need dummy data to be inserted into these tables).
Let us take three tables,
MAIN_TABLE(main_table_id, main_fields)
ADDRESS_TABLE(address_table_id, main_table_id, address_type, other_fielsds)
CHAID_TABLE(chaid_table_id,main_table_id, shipping_address_id, chaild_fields)
I accidentally deleted most of the data from ADDRESS_TABLE.
ADDRESS_TABLE has a foreign key from MAIN_TABLE, i.e. main_table_id. For each row in MAIN_TABLE there are two entries in ADDRESS_TABLE: one whose address_type is "billing/default" and one whose address_type is "shipping".
CHAID_TABLE has two foreign keys: one from MAIN_TABLE, i.e. main_table_id, and one from ADDRESS_TABLE, i.e. shipping_address_id. This shipping_address_id is the id of the ADDRESS_TABLE row whose address_type is "shipping" and where ADDRESS_TABLE.main_table_id = CHAID_TABLE.main_table_id.
These are the things that I needed.
I need to create two dummy address entries for each row in MAIN_TABLE: one of address type "billing/default" and the other of type "shipping".
I need to insert address_table_id into CHAID_TABLE where ADDRESS_TABLE.main_table_id = CHAID_TABLE.main_table_id and address_type = 'shipping'.
If the first is done I know how to do the second, because it is a simple query, I guess.
It can be done like:
UPDATE CHAID_TABLE
SET shipping_address_id = ADDRESS_TABLE.address_table_id
FROM ADDRESS_TABLE
WHERE ADDRESS_TABLE.main_table_id = CHAID_TABLE.main_table_id
AND ADDRESS_TABLE.addres_type ='shipping';
For the first one I can use a loop in psql, i.e. loop through all the entries in MAIN_TABLE and insert two dummy rows for each row. But I don't know how to do this; please help me solve it.
I hope this solution works for you: create a function that loops through all the rows in MAIN_TABLE and, inside the loop, performs the action you want, here two INSERT statements. One issue with this solution is that you get the same data in every address.
CREATE OR REPLACE FUNCTION get_all_MAIN_TABLE () RETURNS SETOF MAIN_TABLE AS
$BODY$
DECLARE
r MAIN_TABLE%rowtype;
BEGIN
FOR r IN
SELECT * FROM MAIN_TABLE
LOOP
-- can do some processing here
INSERT INTO ADDRESS_TABLE (main_table_id, address_type, other_fielsds)
VALUES (r.main_table_id, 'shipping', 'dummy data');
INSERT INTO ADDRESS_TABLE (main_table_id, address_type, other_fielsds)
VALUES (r.main_table_id, 'billing/default', 'dummy data');
END LOOP;
RETURN;
END
$BODY$
LANGUAGE plpgsql;
SELECT * FROM get_all_MAIN_TABLE ();
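Incidentally, the loop isn't strictly needed here: a single set-based INSERT ... SELECT with a CROSS JOIN over the two address types does the same job (the 'dummy data' literal stands in for whatever other_fielsds should hold):
INSERT INTO ADDRESS_TABLE (main_table_id, address_type, other_fielsds)
SELECT m.main_table_id, t.address_type, 'dummy data'
FROM MAIN_TABLE m
CROSS JOIN (VALUES ('shipping'), ('billing/default')) t(address_type);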

Get data of multiple inserted rows in one object using trigger in postgres

I am trying to write a trigger that gets the data from the attributes table, where multiple rows are inserted for one actionId at a time, and groups all that data into one object:
Table Schema
actionId
key
value
I am firing the trigger on row insertion, so how can I handle this multiple-row insertion and how can I collect all the data?
CREATE TRIGGER attribute_changes
AFTER INSERT
ON attributes
FOR EACH ROW
EXECUTE PROCEDURE log_attribute_changes();
and the function,
CREATE OR REPLACE FUNCTION wflowr222.log_task_extendedattribute_changes()
RETURNS trigger AS
$BODY$
DECLARE
_message json;
_extendedAttributes jsonb;
BEGIN
SELECT json_agg(tmp)
INTO _extendedAttributes
FROM (
-- your subquery goes here, for example:
SELECT attributes.key, attributes.value
FROM attributes
WHERE attributes.actionId=NEW.actionId
) tmp;
_message :=json_build_object('actionId',NEW.actionId,'extendedAttributes',_extendedAttributes);
INSERT INTO wflowr222.irisevents(message)
VALUES(_message );
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
and data format is,
actionId  key     value
2         flag    true
2         image   http:test.com/image
2         status  New
I tried to do it via an INSERT trigger, but it fires for each row inserted.
Does anyone have an idea about this?
I expect that the problem is that you're using a FOR EACH ROW trigger; what you likely want is a FOR EACH STATEMENT trigger - i.e. one which only fires once for your multi-line INSERT statement. See the description at https://www.postgresql.org/docs/current/sql-createtrigger.html for a more thorough explanation.
AFAICT, you will also need to add REFERENCING NEW TABLE AS NEW in this mode to make the NEW reference available to the trigger function. So your CREATE TRIGGER syntax would need to be:
CREATE TRIGGER attribute_changes
AFTER INSERT
ON attributes
REFERENCING NEW TABLE AS NEW
FOR EACH STATEMENT
EXECUTE PROCEDURE log_attribute_changes();
I've read elsewhere that the required REFERENCING NEW TABLE ... syntax is only supported in PostgreSQL 10 and later.
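A sketch of the matching trigger function under that setup (renaming the transition table to new_rows for clarity; irisevents and the actionId/key/value columns are taken from the question):
CREATE OR REPLACE FUNCTION log_attribute_changes()
RETURNS trigger AS
$$
BEGIN
-- new_rows holds every row inserted by the triggering statement;
-- build one message per distinct actionId, in case a statement mixes several
INSERT INTO wflowr222.irisevents (message)
SELECT json_build_object('actionId', n.actionId,
'extendedAttributes',
json_agg(json_build_object('key', n.key, 'value', n.value)))
FROM new_rows n
GROUP BY n.actionId;
RETURN NULL;
END;
$$ LANGUAGE plpgsql;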
Considering the version of Postgres you have, and therefore keeping in mind that you can't use a trigger defined FOR EACH STATEMENT for your purpose, the only alternative I see is:
using a trigger AFTER INSERT in order to collect some information about changes in a utility table
using a Unix cron job that executes a procedure that does the job on the data set
For example:
Your utility table
CREATE TABLE utility (
actionid integer,
createtime timestamp
);
You can define a trigger FOR EACH ROW with a body that do something like this
INSERT INTO utility VALUES (NEW.actionid, current_timestamp);
And, finally, have a Unix crontab entry that executes a file or a procedure that does something like this:
SELECT a.* FROM utility u JOIN yourtable a ON a.actionid = u.actionid WHERE u.createtime < current_timestamp;
-- do something here with the records selected above
TRUNCATE table utility;
If you had postgres 9.5 you could have used pg_cron instead of unix cron...
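Completing that idea, a minimal sketch of the row-level trigger that feeds the utility table (the function and trigger names are made up for illustration):
CREATE OR REPLACE FUNCTION queue_action() RETURNS trigger AS
$$
BEGIN
-- record which actionId changed and when, for the cron job to pick up
INSERT INTO utility VALUES (NEW.actionid, current_timestamp);
RETURN NULL;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER attributes_queue
AFTER INSERT ON attributes
FOR EACH ROW EXECUTE PROCEDURE queue_action();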

How to check a sequence efficiently for used and unused values in PostgreSQL

In PostgreSQL (9.3) I have a table defined as:
CREATE TABLE charts
( recid serial NOT NULL,
groupid text NOT NULL,
chart_number integer NOT NULL,
"timestamp" timestamp without time zone NOT NULL DEFAULT now(),
modified timestamp without time zone NOT NULL DEFAULT now(),
donotsee boolean,
CONSTRAINT pk_charts PRIMARY KEY (recid),
CONSTRAINT chart_groupid UNIQUE (groupid),
CONSTRAINT charts_ichart_key UNIQUE (chart_number)
);
CREATE TRIGGER update_modified
BEFORE UPDATE ON charts
FOR EACH ROW EXECUTE PROCEDURE update_modified();
I would like to replace the chart_number with a sequence like:
CREATE SEQUENCE charts_chartnumber_seq START 16047;
So that, by trigger or function, adding a new chart record automatically generates a new chart number in ascending order. However, no existing chart record can have its chart number changed, and over the years there have been skips in the assigned chart numbers. Hence, before assigning a new chart number to a new chart record, I need to be sure that the "new" chart number has not yet been used, and that no chart record with an existing chart number is assigned a different number.
How can this be done?
Consider not doing it. Read these related answers first:
Gap-less sequence where multiple transactions with multiple tables are involved
Compacting a sequence in PostgreSQL
If you still insist on filling in gaps, here is a rather efficient solution:
1. To avoid searching large parts of the table for the next missing chart_number, create a helper table with all current gaps once:
CREATE TABLE chart_gap AS
SELECT chart_number
FROM generate_series(1, (SELECT max(chart_number) - 1 -- max is no gap
FROM charts)) chart_number
LEFT JOIN charts c USING (chart_number)
WHERE c.chart_number IS NULL;
2. Set charts_chartnumber_seq to the current maximum and convert chart_number to an actual serial column:
SELECT setval('charts_chartnumber_seq', max(chart_number)) FROM charts;
ALTER TABLE charts
ALTER COLUMN chart_number SET NOT NULL
, ALTER COLUMN chart_number SET DEFAULT nextval('charts_chartnumber_seq');
ALTER SEQUENCE charts_chartnumber_seq OWNED BY charts.chart_number;
Details:
How to reset postgres' primary key sequence when it falls out of sync?
Safely and cleanly rename tables that use serial primary key columns in Postgres?
3. While chart_gap is not empty fetch the next chart_number from there.
To resolve possible race conditions with concurrent transactions, without making transactions wait, use advisory locks:
WITH sel AS (
SELECT chart_number, ... -- other input values
FROM chart_gap
WHERE pg_try_advisory_xact_lock(chart_number)
LIMIT 1
FOR UPDATE
)
, ins AS (
INSERT INTO charts (chart_number, ...) -- other target columns
TABLE sel
RETURNING chart_number
)
DELETE FROM chart_gap c
USING ins i
WHERE i.chart_number = c.chart_number;
Alternatively, Postgres 9.5 or later has the handy FOR UPDATE SKIP LOCKED to make this simpler and faster:
...
SELECT chart_number, ... -- other input values
FROM chart_gap
LIMIT 1
FOR UPDATE SKIP LOCKED
...
Detailed explanation:
Postgres UPDATE ... LIMIT 1
Check the result. Once all rows are filled in, this returns 0 rows affected. (you could check in plpgsql with IF NOT FOUND THEN ...). Then switch to a simple INSERT:
INSERT INTO charts (...) -- don't list chart_number
VALUES (...); -- don't provide chart_number
In PostgreSQL, a SEQUENCE ensures the two requirements you mention, that is:
No repeats
No changes once assigned
But because of how a SEQUENCE works (see the manual), it cannot ensure there are no skips. Among others, the first two reasons that come to mind are:
how a SEQUENCE handles concurrent blocks of INSERTs (the sequence cache also makes this impossible)
user-triggered DELETEs are an uncontrollable aspect that a SEQUENCE cannot handle by itself.
In both cases, if you still do not want skips (and if you really know what you're doing), you should have a separate structure that assigns IDs instead of using a SEQUENCE: basically a system that keeps a list of 'assignable' IDs in a TABLE, with a function that pops out IDs in FIFO order. That should allow you to control DELETEs etc.
But again, this should be attempted only if you really know what you're doing! There's a reason why people don't implement SEQUENCEs themselves. There are hard corner cases (e.g. concurrent INSERTs), and most probably you're over-engineering your problem, which can probably be solved in a much better / cleaner way.
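As an illustration only, such a structure could look like the following sketch (the table and function names are made up; FOR UPDATE SKIP LOCKED needs Postgres 9.5+, on 9.3 a plain FOR UPDATE works but makes concurrent callers wait):
CREATE TABLE assignable_ids (id integer PRIMARY KEY);
CREATE OR REPLACE FUNCTION pop_id() RETURNS integer AS
$$
-- pop the lowest free ID; SKIP LOCKED keeps concurrent callers
-- from fighting over the same row
DELETE FROM assignable_ids
WHERE id = (SELECT id
FROM assignable_ids
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED)
RETURNING id;
$$ LANGUAGE sql;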
Sequence numbers usually have no meaning, so why worry? But if you really want this, then follow the below, cumbersome procedure. Note that it is not efficient; the only efficient option is to forget about the holes and use the sequence.
In order to avoid having to scan the charts table on every insert, you should scan the table once and store the unused chart_number values in a separate table:
CREATE TABLE charts_unused_chart_number AS
SELECT seq.unused
FROM (SELECT max(chart_number) FROM charts) mx,
generate_series(1, mx.max) seq(unused)
LEFT JOIN charts ON charts.chart_number = seq.unused
WHERE charts.recid IS NULL;
The above query generates a contiguous series of numbers from 1 to the current maximum chart_number value, then LEFT JOINs the charts table to it and finds the records where there is no corresponding charts data, meaning that value of the series is unused as a chart_number.
Next you create a trigger that fires on an INSERT on the charts table. In the trigger function, pick a value from the table created in the step above:
CREATE FUNCTION pick_unused_chart_number() RETURNS trigger AS $$
BEGIN
-- Get an unused chart number
SELECT unused INTO NEW.chart_number FROM charts_unused_chart_number LIMIT 1;
-- If the table is empty, get one from the sequence
IF NOT FOUND THEN
NEW.chart_number := nextval('charts_chartnumber_seq');
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tr_charts_cn
BEFORE INSERT ON charts
FOR EACH ROW EXECUTE PROCEDURE pick_unused_chart_number();
Easy. But the INSERT may fail because of some other trigger aborting the procedure or any other reason. So you need a check to ascertain that the chart_number was indeed inserted:
CREATE FUNCTION verify_chart_number() RETURNS trigger AS $$
BEGIN
-- If you get here, the INSERT was successful, so delete the chart_number
-- from the temporary table.
DELETE FROM charts_unused_chart_number WHERE unused = NEW.chart_number;
RETURN NULL; -- the return value of an AFTER ROW trigger is ignored
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tr_charts_verify
AFTER INSERT ON charts
FOR EACH ROW EXECUTE PROCEDURE verify_chart_number();
At a certain point the table with unused chart numbers will be empty, whereupon you can (1) ALTER TABLE charts to use the sequence instead of an integer for chart_number; (2) drop the two triggers; and (3) drop the table with unused chart numbers; all in a single transaction.
While what you want is possible, it can't be done using only a SEQUENCE and it requires an exclusive lock on the table, or a retry loop, to work.
You'll need to:
LOCK thetable IN EXCLUSIVE MODE
Find the first free ID by querying for the max id, then doing a left join over generate_series to find the first free entry, if there is one (see the sketch after this list).
If there is a free entry, insert it.
If there is no free entry, call nextval and return the result.
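A rough sketch of those steps, reusing the charts table from the question:
BEGIN;
LOCK TABLE charts IN EXCLUSIVE MODE;
-- lowest unused chart_number, or NULL when there is no gap
SELECT min(s.i) AS first_free
FROM generate_series(1, (SELECT max(chart_number) FROM charts)) s(i)
LEFT JOIN charts c ON c.chart_number = s.i
WHERE c.chart_number IS NULL;
-- if first_free is NULL, use nextval('charts_chartnumber_seq') instead,
-- then INSERT and COMMIT
COMMIT;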
Performance will be absolutely horrible, and transactions will be serialized. There'll be no concurrency. Also, unless the LOCK is the first thing you run that affects that table, you'll face deadlocks that cause transaction aborts.
You can make this less bad by using an AFTER DELETE .. FOR EACH ROW trigger that keeps track of entries you delete by INSERTing them into a one-column table of spare IDs. Your ID-assignment function, used as the default for the column, can then SELECT the lowest ID from that table, avoiding the need for the explicit table lock, the left join on generate_series and the max call. Transactions will still be serialized on a lock on the free-IDs table. In PostgreSQL you can even solve that using SELECT ... FOR UPDATE SKIP LOCKED. So if you're on 9.5 you can actually make this non-awful, though it'll still be slow.
I strongly advise you to just use a SEQUENCE directly, and not bother with re-using values.

PL/pgSQL: Delete (update) row matching a record

I'm writing three triggers in PL/pgSQL. In each case, I have a RECORD variable and want to insert that into a table, delete it from the table, or update it to represent a second RECORD variable.
Adding is easy: INSERT INTO mytable VALUES (NEW.*);
Deleting isn't as easy; there doesn't seem to be syntax for something like this:
DELETE FROM mytable
WHERE * = OLD.*;
Updating has the same problem. Is there an easy solution, short of generating matching SQL queries that compare each attribute using ideas from this answer?
You can use a trick for DELETE:
create table t(a int, b int);
create table ta(a int, b int);
create function t1_delete()
returns trigger as $$
begin
delete from ta where ta = old;
return null;
end
$$ language plpgsql;
But this trick doesn't work for UPDATE, so a fully generic trigger is not simple to write in PL/pgSQL.
You write about a record variable and it is, indeed, not trivial to access individual columns of an anonymous record in plpgsql.
However, in your example, you only use OLD and NEW, which are well known row types, defined by the underlying table. It is trivial to access individual columns in this case.
DELETE FROM mytable
WHERE mytable_id = OLD.mytable_id;
UPDATE mytable_b
SET some_col = NEW.some_other_col
WHERE some_id = NEW.mytable_id;
Etc.
Just be careful not to create endless loops.
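One common guard against such loops is a WHEN clause on the trigger using pg_trigger_depth() (available since Postgres 9.2), so it does not re-fire from inside another trigger; mytable_sync and sync_proc are placeholder names:
CREATE TRIGGER mytable_sync
AFTER UPDATE ON mytable
FOR EACH ROW
WHEN (pg_trigger_depth() < 1) -- skip when fired from within another trigger
EXECUTE PROCEDURE sync_proc();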
In case you just want to "update" columns of the current row, you can simply assign to the columns of NEW in the trigger. You know that, right?
NEW.some_col := 'foo';
Dynamic column names
If you don't know column names beforehand, you can still do this generically with dynamic SQL as detailed in this related answer:
Update multiple columns in a trigger function in plpgsql
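In that spirit, a minimal sketch of a generic delete using EXECUTE ... USING with a whole-row comparison (hard-coding mytable from the question, and assuming its row type matches the triggering table's, so the field-by-field record comparison applies):
CREATE OR REPLACE FUNCTION generic_delete() RETURNS trigger AS
$$
BEGIN
-- t is a whole-row alias for the target table; OLD is passed in
-- as a typed parameter and compared row-to-row
EXECUTE format('DELETE FROM %I t WHERE t = $1', 'mytable')
USING OLD;
RETURN OLD;
END;
$$ LANGUAGE plpgsql;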