Writing an audit trigger. Inside the PostgreSQL function I'm trying to do:
'INSERT INTO ' || table_name || ' (' || columns || ') VALUES ' || NEW || ';'
When NEW is turned into a string, varchar values will not have quotes around them, which causes the INSERT to fail. It would be easier to turn all the column values of NEW into varchar values and let Postgres automatically cast them back to the right types when the INSERT is executed.
Can I loop over the NEW record without turning it into JSON?
Looking around, I couldn't find a good resource explaining how to work with the Postgres record type.
If your target table's structure is identical to the structure of the NEW record, you don't really need to iterate over the columns.
Something like this will work:
create function audit_trigger()
  returns trigger
as
$$
declare
  l_columns text;
  l_table_name text;
begin
  -- build the name of the target table dynamically
  l_table_name := tg_table_name || '_audit';
  execute format('insert into %I select ($1).*', l_table_name) using new;
  return new;
end;
$$
language plpgsql;
Even if you don't want to store the changed data as a JSONB column, you can still use the JSON functions to iterate over the columns of the NEW record if you think you need that.
The following will store the list of column names of the new record in the variable l_columns:
select string_agg(quote_ident(col), ',')
into l_columns
from jsonb_each_text(to_jsonb(new)) as t(col, val);
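Putting the two together, here is a minimal sketch of a variant that builds an explicit column list (the function name and the _audit suffix are illustrative assumptions, following the example above):
create function audit_trigger_with_columns()
  returns trigger
as
$$
declare
  l_columns text;
begin
  -- collect the quoted column names of the NEW record
  select string_agg(quote_ident(col), ',')
    into l_columns
    from jsonb_each_text(to_jsonb(new)) as t(col, val);
  -- (select ($1).*) expands NEW into named columns, so the same list
  -- can serve as both the target column list and the select list
  execute format('insert into %I (%s) select %s from (select ($1).*) as s',
                 tg_table_name || '_audit', l_columns, l_columns)
    using new;
  return new;
end;
$$
language plpgsql;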
Related
How can I pass an array to a stored procedure? I tried enclosing the column names in brackets to specify which columns need to be inserted into the new table.
CREATE OR REPLACE PROCEDURE data_versioning_nonull(new_table_name VARCHAR(100),column_name VARCHAR(100)[], current_table_name VARCHAR(100))
language plpgsql
as $$
BEGIN
EXECUTE ('CREATE TABLE ' || quote_ident(new_table_name) || ' AS SELECT ' || quote_ident(column_name) || ' FROM ' || quote_ident(current_table_name));
END $$;
CALL data_versioning_nonull('sales_2019_sample', ['orderid', 'product', 'address'], 'sales_2019');
Using execute format() lets you replace all the quote_ident() calls with %I placeholders in a single format string instead of a series of concatenated snippets. %1$I lets you re-use the first argument.
It's best if you use ARRAY['a','b','c']::VARCHAR(100)[] to explicitly make it an array of your desired type. '{"a","b","c"}'::VARCHAR(100)[] works too.
You'll need to convert the array into a list of columns some other way, because when cast to text it gets curly braces, which are not allowed in column list syntax. Demo
It's not good practice to introduce arbitrary limitations: PostgreSQL doesn't limit identifier lengths to 100 characters, so you don't have to either. The default limit is 63 bytes, and since that's a compile-time setting (NAMEDATALEN), a custom build can allow identifiers far longer than 100 characters (demo). You can switch that data type to plain text. Interestingly, a varchar length specified on a routine parameter isn't even enforced: the type modifier is discarded, making it just syntax noise.
DBFiddle online demo
CREATE TABLE sales_2019(orderid INT,product INT,address INT);
CREATE OR REPLACE PROCEDURE data_versioning_nonull(
    new_table_name TEXT,
    column_names TEXT[],
    current_table_name TEXT)
LANGUAGE plpgsql AS $$
DECLARE
    list_of_columns_as_quoted_identifiers TEXT;
BEGIN
    SELECT string_agg(quote_ident(name), ',')
    INTO list_of_columns_as_quoted_identifiers
    FROM unnest(column_names) name;

    EXECUTE format('CREATE TABLE %1$I.%2$I AS SELECT %3$s FROM %1$I.%4$I',
                   current_schema(),
                   new_table_name,
                   list_of_columns_as_quoted_identifiers,
                   current_table_name);
END $$;

CALL data_versioning_nonull(
    'sales_2019_sample',
    ARRAY['orderid', 'product', 'address']::text[],
    'sales_2019');
Schema awareness: currently the procedure creates the new table in the default schema, based on a table in that same default schema. Above I made that explicit, but it's what the procedure would do without the current_schema() calls anyway. You could add new_table_schema and current_table_schema parameters, and if you don't expect them to be used most of the time, you can hide them behind procedure overloads for convenience, using current_schema() to keep the implicit behaviour. Demo
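For illustration, the convenience overload could look like this. This is a sketch: it assumes a hypothetical 5-parameter variant data_versioning_nonull(new_table_schema, new_table_name, column_names, current_table_schema, current_table_name) that contains the format() call above with the current_schema() arguments replaced by the schema parameters. The 3-parameter wrapper then just restores the implicit behaviour:
CREATE OR REPLACE PROCEDURE data_versioning_nonull(
    new_table_name TEXT,
    column_names TEXT[],
    current_table_name TEXT)
LANGUAGE plpgsql AS $$
BEGIN
    -- keep the implicit behaviour: default both schemas to current_schema()
    CALL data_versioning_nonull(
        current_schema(), new_table_name,
        column_names,
        current_schema(), current_table_name);
END $$;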
First, change your stored procedure to convert the selected columns from an array to a comma-separated list, like this:
CREATE OR REPLACE PROCEDURE data_versioning_nonull(
    new_table_name VARCHAR(100),
    column_name VARCHAR(100)[],
    current_table_name VARCHAR(100))
LANGUAGE plpgsql
AS $$
BEGIN
    EXECUTE ('CREATE TABLE ' || quote_ident(new_table_name) || ' AS SELECT '
             || array_to_string(column_name, ',')
             || ' FROM ' || quote_ident(current_table_name));
END $$;
Then call it as:
CALL data_versioning_nonull('sales_2019_sample', '{"orderid", "product", "address"}', 'sales_2019');
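One caveat with this variant: array_to_string() joins the names without quoting them, so it assumes column names that don't need quoting. If they might, combine quote_ident() with unnest() and string_agg(), as shown in the earlier answer.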
I am trying to create a trigger function that inserts a new record into the audit tables dynamically.
CREATE OR REPLACE FUNCTION schema.table()
    RETURNS trigger
    LANGUAGE plpgsql
    COST 100
    VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE
    _tablename text;
BEGIN
    _tablename := 'user_audit';
    EXECUTE 'INSERT INTO audit.' || quote_ident(_tablename) || ' VALUES ($1.*)' USING OLD;
    RETURN NEW;
END;
$BODY$;
The trigger function above works fine: it takes everything in OLD and inserts it into the audit table as expected. However, I have a tstzrange column in my tables called timestampzt_range, and I need to set its value in the audit table using LOWER(OLD.timestampzt_range) and LOWER(NEW.timestampzt_range). How can I achieve this dynamically, without an insert statement like the one below, since I would like to use this trigger function on multiple tables?
INSERT INTO audit.user_audit
(
    column_1,
    column_2,
    timestampzt_range
)
VALUES
(
    OLD.column_1,
    OLD.column_2,
    tstzrange(LOWER(OLD.timestampzt_range), LOWER(NEW.timestampzt_range))
);
I only need this on UPDATE, and the table name will be passed as a parameter to the trigger function if I can build the statement dynamically. Only the audit columns are consistent across the entire database, so it is important for me to build the insert from OLD, or to somehow dynamically extract everything from it except timestampzt_range, and then use tstzrange(LOWER(OLD.timestampzt_range), LOWER(NEW.timestampzt_range)) as the value for the range column.
First comment: you can use both the NEW and OLD keywords only in an UPDATE trigger. In an INSERT trigger OLD doesn't exist, and in a DELETE trigger NEW doesn't exist. See the manual.
Then, in your trigger function, you can replace
'INSERT INTO audit.' || quote_ident(_tablename) || ' VALUES ($1.*)' USING OLD
with
'INSERT INTO audit.' || quote_ident(_tablename) || ' VALUES (($1).column_1, ($1).column_2, tstzrange(LOWER(($1).timestampzt_range), LOWER(($2).timestampzt_range)))' USING OLD, NEW
to achieve your expected result. Note that OLD and NEW are not visible inside the statement executed by EXECUTE, so they must be passed in with USING and referenced as ($1) and ($2).
Last but not least, your dynamic statement EXECUTE 'INSERT INTO audit.' ... buys you nothing as written, because _tablename is a static value assigned inside the function.
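That said, the fully dynamic version the question asks for is possible. Here is a sketch, under these assumptions: the audit tables live in schema audit and share the source tables' column names, every source table has the tstzrange column timestampzt_range, and the audit table name arrives via a trigger argument (TG_ARGV[0]), as the question suggests:
CREATE OR REPLACE FUNCTION audit_update_generic()
RETURNS trigger
LANGUAGE plpgsql
AS $BODY$
DECLARE
    _cols text;
BEGIN
    -- every column of the source table except the range column
    SELECT string_agg(quote_ident(column_name), ',' ORDER BY ordinal_position)
    INTO _cols
    FROM information_schema.columns
    WHERE table_schema = TG_TABLE_SCHEMA
      AND table_name = TG_TABLE_NAME
      AND column_name <> 'timestampzt_range';

    -- (SELECT ($1).*) expands OLD into named columns; OLD and NEW are
    -- passed via USING because they are not visible inside EXECUTE
    EXECUTE format(
        'INSERT INTO audit.%I (%s, timestampzt_range)
         SELECT %s, tstzrange(LOWER(($1).timestampzt_range), LOWER(($2).timestampzt_range))
         FROM (SELECT ($1).*) AS o',
        TG_ARGV[0], _cols, _cols)
    USING OLD, NEW;

    RETURN NEW;
END;
$BODY$;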
I have a problem creating a PostgreSQL (9.3) ON UPDATE trigger for a table.
I want to set new values in a loop like
EXECUTE 'NEW.'|| fieldName || ':=''some prepend data'' || NEW.' || fieldName || ';';
where fieldName is set dynamically. But this string raises the error
ERROR: syntax error at or near "NEW"
How do I go about achieving that?
You can implement that rather conveniently with the hstore operator #=:
Make sure the additional module is installed properly (once per database), in a schema that's included in your search_path:
How to use % operator from the extension pg_trgm?
Best way to install hstore on multiple schemas in a Postgres database?
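If it isn't installed yet, that's a single command per database:
CREATE EXTENSION IF NOT EXISTS hstore;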
Trigger function:
CREATE OR REPLACE FUNCTION tbl_insup_bef()
  RETURNS TRIGGER AS
$func$
DECLARE
   _prefix CONSTANT text := 'some prepend data';  -- your prefix here
   _prelen CONSTANT int  := 17;  -- length of the above string (optional optimization)
   _col             text := quote_ident(TG_ARGV[0]);
   _val             text;
BEGIN
   EXECUTE 'SELECT $1.' || _col
   USING NEW
   INTO _val;

   IF left(_val, _prelen) = _prefix THEN
      -- do nothing: prefix already there!
   ELSE
      NEW := NEW #= hstore(_col, _prefix || _val);
   END IF;

   RETURN NEW;
END
$func$ LANGUAGE plpgsql;
Trigger (reuse the same func for multiple tables):
CREATE TRIGGER insup_bef
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW
EXECUTE PROCEDURE tbl_insup_bef('fieldName'); -- unquoted, case-sensitive column name
Closely related with more explanation and advice:
Assignment of a column with dynamic column name
How to access NEW or OLD field given only the field's name?
Get values from varying columns in a generic trigger
Your problem is that EXECUTE can only be used to execute SQL statements, not PL/pgSQL statements like the assignment in your question.
You can maybe work around that like this:
Let's assume that table testtab is defined like this:
CREATE TABLE testtab (
id integer primary key,
val text
);
Then a trigger function like the following will work (shown here as a complete definition; the function name is illustrative):
CREATE FUNCTION testtab_prefix_trig()
RETURNS trigger AS
$$
BEGIN
    EXECUTE 'SELECT $1.id, ''prefix '' || $1.val' INTO NEW USING NEW;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
I used hard-coded id and val in my example, but that is not necessary.
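For completeness, the wiring could look like this (trigger and function names assumed from the sketch above):
CREATE TRIGGER testtab_prefix
BEFORE INSERT OR UPDATE ON testtab
FOR EACH ROW
EXECUTE PROCEDURE testtab_prefix_trig();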
I found a working solution:
the trigger should execute after insert/update, not before. Then the desired row update takes the form
EXECUTE 'UPDATE ' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME ||
' SET ' || fieldName || '= ''prefix:'' ||''' || fieldValue || ''' WHERE id = ' || NEW.id;
I get fieldName and fieldValue this way:
FOR fieldName, fieldValue IN select key, value from each(hstore(NEW)) LOOP
    IF .... THEN
        -- build and run the UPDATE here
    END IF;
END LOOP;
Beware that an AFTER trigger that UPDATEs its own table fires itself again, so the IF condition must also stop that recursion, e.g. by skipping values that already start with the prefix.
I would like to find all column names in a table that contain a given value in any record, i.e. all columns whose value in the record matches a string like
'%ABC%' or '%QAW%' or '%IGH%'
If possible, give me all the tables and columns in a DB schema, so I do not have to query every table manually.
Update 2016-06-15:
So I got a little further: I can now get all the values from each column in each row in each table. Now I need to see if that value (v_value) exists in a list of airport codes, e.g. ['LAS','LAX','BIL'].
I have all the airports in a table that I want to read into an array.
I am having trouble with creating that array and getting the data into it.
Here is what I have so far.
Look at the TODOs.
CREATE OR REPLACE PROCEDURE "CMSDB"."TEST1" ()
    LANGUAGE SQL
    SPECIFIC SQL3
P1: BEGIN
    DECLARE v_tabschema VARCHAR(255);
    DECLARE v_tabname VARCHAR(255);
    DECLARE v_colname VARCHAR(255);
    DECLARE v_airport VARCHAR(255);
    DECLARE v_stmt VARCHAR(3000);
    DECLARE V_SQL VARCHAR(3000);
    DECLARE v_value VARCHAR(255);
    DECLARE SQLSTATE CHAR(5) DEFAULT '00000';
    DECLARE v_stmt2 STATEMENT;
    DECLARE v_value_cursor CURSOR FOR v_stmt2;
    DECLARE v_airport_cursor CURSOR FOR select IDX from CMSDB.AIRPORTS;
    DECLARE syscat_cursor CURSOR FOR
        select trim(tabschema), tabname, colname
        from cmsdb.syscat.columns
        where tabname = 'ACCTGROUP'
          and tabschema = 'CMSDB'
          and TYPENAME = 'VARCHAR'
          and colname not in ('CHGDATE','CHGPAGE','CHGPROG','CHGTYPE','CHGUSER',
                              'CREATEDATETIME','CREATEDBYID','REC_ID');

    open v_airport_cursor;
    FETCH FROM v_airport_cursor INTO v_airport;
    WHILE (SQLSTATE = '00000') DO
        call DBMS_OUTPUT.PUT_LINE(v_airport);
        -- TODO: add each value to a list (array) that can be used
        -- to check whether v_value is in the list.
        FETCH FROM v_airport_cursor INTO v_airport;
    END WHILE;
    close v_airport_cursor;

    OPEN syscat_cursor;
    FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
    WHILE (SQLSTATE = '00000') DO
        --call DBMS_OUTPUT.PUT_LINE(v_tabschema || ' ' || v_tabname || ' ' || v_colname);
        SET v_stmt = 'select ' || v_colname || ' from ' || v_tabschema || '.' || v_tabname;
        --call DBMS_OUTPUT.PUT_LINE(v_stmt);
        PREPARE v_stmt2 FROM v_stmt;
        OPEN v_value_cursor;
        FETCH FROM v_value_cursor INTO v_value;
        WHILE (SQLSTATE = '00000') DO
            -- TODO
            --IF (airportList contains v_value) THEN
            --    call DBMS_OUTPUT.PUT_LINE(v_value);
            --END IF;
            FETCH FROM v_value_cursor INTO v_value;
        END WHILE;
        CLOSE v_value_cursor;
        FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
    END WHILE;
    close syscat_cursor;
END P1
You can use sysibm.syscolumns:
select name
from sysibm.syscolumns
where tbname = 'XX'
  and (name like '%ABC%' or name like '%QAW%' or name like '%IGH%');
You'll need to create a cursor over SYSTABLES that returns all the tables in the system. Then have another cursor that returns all the column names in a given table. Once you have those, you can build a dynamic statement that checks all the columns in a given table for the value you are looking for. Fetch the next table name and do it all over again.
Obviously, if you can narrow down your search to a particular schema, or even limit it to tables/columns with a particular naming pattern, you'd be better off.
Another technique, depending on your platform and version of DB2: you might be able to do some sort of bulk export to a set of text files, then use a tool that searches the contents of those text files.
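If you don't want to hand-code the whole procedure, a generator query is a common shortcut. A sketch, assuming DB2 for LUW and that the airport codes live in CMSDB.AIRPORTS(IDX) as in the question: it emits one probe statement per VARCHAR column, and you then run the generated statements to see which columns contain airport codes.
SELECT 'SELECT ''' || TRIM(tabschema) || '.' || tabname || '.' || colname ||
       ''' AS hit FROM ' || TRIM(tabschema) || '.' || tabname ||
       ' WHERE ' || colname || ' IN (SELECT idx FROM cmsdb.airports)' ||
       ' FETCH FIRST 1 ROW ONLY'
FROM syscat.columns
WHERE tabschema = 'CMSDB'
  AND typename = 'VARCHAR';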
I have a table with many columns, one of which is a lastUpdate column.
I am writing a trigger in PL/pgSQL for Postgres 9.1 that should set a value for lastUpdate upon an UPDATE to the record.
The challenge is to exclude some pre-defined columns from that trigger; meaning, updating those specific columns shouldn't affect the lastUpdate value of the record.
Any advice?
In PostgreSQL you can access the previous values using the OLD alias and the new ones using NEW. There is even a specific example in the docs of what you need:
CREATE TRIGGER check_update
BEFORE UPDATE ON accounts
FOR EACH ROW
WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
EXECUTE PROCEDURE check_account_update();
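With the hstore extension installed, you can push the whole exclusion list into such a WHEN clause, since hstore supports deleting a set of keys from a record's hstore. A sketch (the table, the touch_lastupdate function, and the column names are illustrative):
CREATE FUNCTION touch_lastupdate() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    NEW.lastupdate := now();
    RETURN NEW;
END $$;

CREATE TRIGGER set_lastupdate
BEFORE UPDATE ON mytable
FOR EACH ROW
-- compare OLD and NEW with the excluded columns stripped out
WHEN ((hstore(OLD.*) - ARRAY['lastupdate','lastread'])
      IS DISTINCT FROM
      (hstore(NEW.*) - ARRAY['lastupdate','lastread']))
EXECUTE PROCEDURE touch_lastupdate();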
I know this is an old question, but I found myself with the same need and I managed to do it with a trigger using the information_schema.columns table.
I attach a possible solution, where the only parameters to edit are TIMEUPDATE_FIELD and EXCLUDE_FIELDS in the trigger function check_update_testtrig():
CREATE TABLE testtrig
(
    id bigserial NOT NULL,
    col1 integer,
    col2 integer,
    col3 integer,
    lastupdate timestamp NOT NULL DEFAULT now(),
    lastread timestamp,
    CONSTRAINT testtrig_pkey PRIMARY KEY (id)
)
WITH (
    OIDS=FALSE
);

CREATE OR REPLACE FUNCTION check_update_testtrig()
    RETURNS trigger AS
$BODY$
DECLARE
    TIMEUPDATE_FIELD text := 'lastupdate';
    EXCLUDE_FIELDS text[] := ARRAY['lastread'];
    PK_FIELD text := 'id';
    ROW_RES RECORD;
    IS_DISTINCT boolean := false;
    COND_RES integer := 0;
BEGIN
    FOR ROW_RES IN
        SELECT column_name
        FROM information_schema.columns
        WHERE table_schema = TG_TABLE_SCHEMA
          AND table_name = TG_TABLE_NAME
          AND column_name != TIMEUPDATE_FIELD
          AND NOT (column_name = ANY (EXCLUDE_FIELDS))
    LOOP
        EXECUTE 'SELECT CASE WHEN $1.' || ROW_RES.column_name || ' IS DISTINCT FROM $2.' || ROW_RES.column_name || ' THEN 1 ELSE 0 END'
        INTO STRICT COND_RES
        USING NEW, OLD;
        IS_DISTINCT := IS_DISTINCT OR (COND_RES = 1);
    END LOOP;

    IF (IS_DISTINCT) THEN
        EXECUTE 'UPDATE ' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME || ' SET ' || TIMEUPDATE_FIELD || ' = now() WHERE ' || PK_FIELD || ' = $1.' || PK_FIELD
        USING NEW;
    END IF;

    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

CREATE TRIGGER trigger_update_testtrig
    AFTER UPDATE ON testtrig
    FOR EACH ROW
    EXECUTE PROCEDURE check_update_testtrig();
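A note on why this doesn't loop forever: the trigger is AFTER UPDATE, and the inner UPDATE does fire it a second time. On that second invocation, however, the only changed column is TIMEUPDATE_FIELD, which the query excludes, so IS_DISTINCT stays false and no further UPDATE is issued.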
Looking at your question and your comment on Jakub Kania's answer, I would say that part of the solution is to create an extra table.
The issue is that a constraint on a column should only govern the functioning of the column itself; it should not affect the values of other columns in the table. Specifying which columns should influence the status column 'lastUpdate' is, imo, business logic.
Which columns should have an impact on the value of 'lastUpdate' changes along with the business, not with the table design. The solution should therefore, imo, consist of a table in combination with a trigger.
I would add a table with a column holding a list of column names (the column can be of an array type) that a trigger like the one described by Jakub Kania can consult. If the default behaviour should be that a new column changes the value of 'lastUpdate', then I would have the list name only the columns that do not change it. If the default behaviour is to not change 'lastUpdate', then I would add a column's name to the list whenever it should change the value of 'lastUpdate'.
If the updated column is within the list of columns, the trigger should update the field lastUpdate. A sketch of this config-table idea follows below.
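A sketch of that config-table idea (all names illustrative):
CREATE TABLE lastupdate_excluded_columns (
    table_schema text NOT NULL,
    table_name   text NOT NULL,
    column_names text[] NOT NULL,  -- columns that should NOT touch lastUpdate
    PRIMARY KEY (table_schema, table_name)
);
The trigger function would then read its exclusion list from this table instead of hard-coding it:
SELECT column_names
INTO EXCLUDE_FIELDS
FROM lastupdate_excluded_columns
WHERE table_schema = TG_TABLE_SCHEMA
  AND table_name = TG_TABLE_NAME;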