My problem is as follows: I insert or update a row in a PostgreSQL database and need to modify one field in this row. BUT when I insert a new row, I need to know its new serial PK to make a SELECT with JOINs on other tables.
I'm now stuck because I've written an AFTER INSERT OR UPDATE trigger to get the new PK (kkw_block_id). I get the value I need with the SELECT, but after that I can't modify the value in the row: modifying NEW is not possible in an AFTER INSERT OR UPDATE trigger, and if I do an UPDATE on the row, I enter an infinite loop, the trigger being called from within the trigger...
CREATE TRIGGER tsvectorupdate
AFTER INSERT OR UPDATE
ON kkw_block
FOR EACH ROW
EXECUTE PROCEDURE kkw_search_trigger();
CREATE OR REPLACE FUNCTION kkw_search_trigger()
RETURNS trigger AS
$BODY$
DECLARE
  vector_en TEXT;
  vector_fr TEXT;
  vector_de TEXT;
BEGIN
-- I need the new serial PK (kkw_block_id) in the following section.
SELECT coalesce(modell_en, '') || ', ' || coalesce(bezeichnung_en,'') || ', ' ||
       coalesce(kkw.kkw_name_en,'') || ', ' || coalesce(kkw_typ.typ_abr,'') || ', ' ||
       coalesce(kkw_typ.typ_desc_en,'') || ', ' || coalesce(kkw_typ.typ_desc_short_en,'')
INTO vector_en
FROM kkw_block
LEFT JOIN kkw ON NEW.kkw_id = kkw.kkw_id
LEFT JOIN kkw_typ ON NEW.kkw_typ_id = kkw_typ.kkw_typ_id
WHERE kkw_block_id = NEW.kkw_block_id;
-- I need to update a field of the newly created or updated row.
NEW.search_vector_en := to_tsvector('english', 'new test vector'); -- This doesn't work in an AFTER trigger.
RETURN NULL;
END
$BODY$ LANGUAGE plpgsql;
Any idea?
Drop the default for your PK and assign it in your BEFORE trigger. You will have to change that existing trigger from AFTER to BEFORE.
You can assign the PK from the sequence like this:
NEW.kkw_block_id = nextval('your_sequence_name_here');
Since you are using the same function for both INSERT and UPDATE, you need to check that it is an INSERT and only then use the sequence. I have also included a check for whether the PK is null; I suppose that alone would be enough to avoid overwriting it during an update.
IF (TG_OP = 'INSERT') AND NEW.kkw_block_id IS NULL THEN
NEW.kkw_block_id = nextval('your_sequence_name_here');
END IF;
This will be fine as long as the trigger fires for each new row, which seems to be the case. This lets you modify NEW, and the change will be reflected in the data saved to the table.
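Putting that together, here is a minimal sketch of the reworked trigger. The sequence name kkw_block_kkw_block_id_seq is an assumption (the default name for a serial column); adjust it to your schema:
-- drop the default so the BEFORE trigger owns PK assignment
ALTER TABLE kkw_block ALTER COLUMN kkw_block_id DROP DEFAULT;

CREATE OR REPLACE FUNCTION kkw_search_trigger()
RETURNS trigger AS
$BODY$
BEGIN
  IF (TG_OP = 'INSERT') AND NEW.kkw_block_id IS NULL THEN
    NEW.kkw_block_id := nextval('kkw_block_kkw_block_id_seq'); -- assumed sequence name
  END IF;
  -- build the search vector here from NEW.* and the now-known PK
  NEW.search_vector_en := to_tsvector('english', 'new test vector');
  RETURN NEW; -- a BEFORE trigger must return NEW for the changes to be saved
END
$BODY$ LANGUAGE plpgsql;

CREATE TRIGGER tsvectorupdate
BEFORE INSERT OR UPDATE ON kkw_block
FOR EACH ROW EXECUTE PROCEDURE kkw_search_trigger();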
I ended up with the following solution. I made a BEFORE trigger. The problem was the LEFT JOIN referencing the table in which the new row doesn't exist yet. It's not ideal, but here it is:
CREATE TRIGGER tsvectorupdate
BEFORE INSERT OR UPDATE
ON kkw_block
FOR EACH ROW
EXECUTE PROCEDURE kkw_search_trigger();
CREATE TYPE kkw_type_record_type AS (typ_abr TEXT, typ_desc_en TEXT, typ_desc_short_en TEXT);
CREATE TYPE kkw_record_type AS (kkw_name_en TEXT);
CREATE OR REPLACE FUNCTION kkw_search_trigger()
RETURNS trigger AS
$BODY$
DECLARE
  kkw_rec kkw_record_type;
  kkw_typ_rec kkw_type_record_type;
  vector_en TEXT;
BEGIN
-- make an individual SELECT instead of a LEFT JOIN
SELECT kkw_name_en INTO kkw_rec.kkw_name_en
FROM kkw
WHERE kkw.kkw_id = NEW.kkw_id;
-- make an individual SELECT instead of a LEFT JOIN
SELECT typ_abr, typ_desc_en, typ_desc_short_en INTO kkw_typ_rec.typ_abr, kkw_typ_rec.typ_desc_en, kkw_typ_rec.typ_desc_short_en
FROM kkw_typ
WHERE kkw_typ.kkw_typ_id = NEW.kkw_typ_id;
vector_en := coalesce(NEW.modell_en, '') || ', ' || coalesce(NEW.bezeichnung_en,'') || ', ' ||
             coalesce(kkw_rec.kkw_name_en,'') || ', ' || coalesce(kkw_typ_rec.typ_abr,'') || ', ' ||
             coalesce(kkw_typ_rec.typ_desc_en,'') || ', ' || coalesce(kkw_typ_rec.typ_desc_short_en,'');
NEW.search_vector_en := to_tsvector('english', vector_en);
RETURN NEW;
END
$BODY$ LANGUAGE plpgsql;
I have a function that creates a set of INSERT INTO ... VALUES scripts. If I uncomment the dvp.content line, the function fails with "ERROR: could not open relation with OID ###", which refers to the temp table. The content column is of type jsonb. Not sure where to begin?
CREATE OR REPLACE FUNCTION export_docs_as_sql(doc_list uuid[], to_org_id uuid)
RETURNS table(id integer, sql text)
AS $$
BEGIN
...
-- use a temp table to gather all INSERT statements
CREATE TEMP TABLE IF NOT EXISTS doc_data_export(
id serial PRIMARY KEY,
sql text
);
...
-- get doc_version_pages
INSERT INTO doc_data_export(sql)
SELECT 'INSERT INTO doc_version_pages(id, doc_version_id, persona_id, care_category_id, patient_group_id, title, content, created_at, updated_at, is_guide, is_root) VALUES (' ||
quote_literal(dvp.id::TEXT) || ', ' ||
quote_literal(dvp.doc_version_id::TEXT) || ', ' ||
CASE WHEN p.name IS NOT NULL THEN '(SELECT px.id FROM personas px WHERE px.org_id = ' || quote_literal(dv.id::TEXT) || ' AND px.name = ' || quote_literal(p.name) || '), ' ELSE 'NULL, ' END ||
CASE WHEN c.name IS NOT NULL THEN '(SELECT cx.id FROM care_categories cx WHERE cx.org_id = ' || quote_literal(to_org_id) || ' AND cx.name = ' || quote_literal(c.name) || '), ' ELSE 'NULL, ' END ||
CASE WHEN g.name IS NOT NULL THEN '(SELECT gx.id FROM patient_groups gx WHERE gx.org_id = ' || quote_literal(to_org_id) || ' AND gx.name = ' || quote_literal(g.name) || '), ' ELSE 'NULL, ' END ||
quote_literal(dvp.title::TEXT) || ', ' ||
--dvp.content || ', ' ||
quote_literal(dvp.created_at::TEXT) || ', ' ||
quote_literal(now()::timestamp) || ', ' ||
quote_literal(dvp.is_guide::TEXT) || ', ' ||
quote_literal(dvp.is_root::TEXT) || ');'
FROM unnest(doc_list) l
INNER JOIN doc_versions dv ON l = dv.doc_id
INNER JOIN doc_version_pages dvp ON dv.id = dvp.doc_version_id
LEFT JOIN personas p ON dvp.persona_id = p.id
LEFT JOIN care_categories c ON dvp.care_category_id = c.id
LEFT JOIN patient_groups g ON dvp.patient_group_id = g.id;
...
-- output all inserts
RETURN QUERY SELECT * FROM doc_data_export;
-- drop temp table
DROP TABLE doc_data_export;
END;
$$ LANGUAGE plpgsql;
The "Could Not Open Relation" problem is occurring due to the bug described here, which remains an issue as of Postgres 14.0:
What seems to be happening is that if the strings are large enough to be
toasted, then the data returned out of the function with RETURN QUERY
contains toast pointers referencing the temp table's toast table.
If you drop the temp table then those pointers will fail upon use.
To explain further: when a column value is larger than the TOAST_TUPLE_THRESHOLD (normally 2KB) and cannot be compressed, or when the column is configured with a storage setting of EXTERNAL, the value is broken into chunks and stored in a special secondary table called a TOAST table. This table lives in the pg_toast schema and is named like pg_toast.pg_toast_<table OID>.
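For reference, you can look up the TOAST table backing any given table with a catalog query; a small example (the table name here is illustrative):
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE oid = 'doc_version_pages'::regclass;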
So when you add dvp.content to the sql statement you insert into doc_data_export, some of those values exceed the threshold and are thus TOASTed. Your RETURN QUERY sends only the pointers to the values in the TOAST table. After the return is done, the temporary table and its corresponding TOAST table are dropped, so when the outer query attempts to materialize the results, it can't find the TOAST table those pointers reference, hence the cryptic error message you see.
You can avoid sending TOAST pointers for the temporary table (and thus safely DROP it after the RETURN QUERY) by performing an operation on the sql column that returns the same value:
RETURN QUERY SELECT id, sql || '' FROM doc_data_export;
The simple function below reproduces a minimal example of the TOAST bug when you call it with fail set to true, and demonstrates the workaround when fail is false.
DROP FUNCTION IF EXISTS buttered_toast(boolean);
CREATE OR REPLACE FUNCTION buttered_toast(fail boolean)
RETURNS table(id integer, enormous_data text)
AS $$
BEGIN
CREATE TEMPORARY TABLE tbl_with_toasts (
id integer PRIMARY KEY,
enormous_data text
) ON COMMIT DROP;
--generate a giant string that is sure to generate a TOAST table.
INSERT INTO tbl_with_toasts(id,enormous_data) SELECT 1, string_agg(gen_random_uuid()::text,'-') FROM generate_series(1,10000) as ints(int);
IF buttered_toast.fail THEN
-- will return pointers to tbl_with_toast's TOAST table for the "enormous_data" column.
RETURN QUERY SELECT tbl_with_toasts.id, tbl_with_toasts.enormous_data FROM tbl_with_toasts ;
ELSE
-- will generate and return new values for the "enormous_data" column
RETURN QUERY SELECT tbl_with_toasts.id, tbl_with_toasts.enormous_data || '' FROM tbl_with_toasts ;
END IF;
DROP TABLE tbl_with_toasts;
END;
$$ LANGUAGE plpgsql;
-- fails with "could not open relation"
SELECT * FROM buttered_toast(true);
-- succeeds
SELECT * FROM buttered_toast(false);
I am trying to write a plpgsql procedure to perform spatial tiling of a PostGIS table. I can perform the operation successfully using the following procedure, in which the table names are hardcoded. The procedure loops through the tiles in tile_table and, for each tile, clips area_table and inserts the result into split_table.
CREATE OR REPLACE PROCEDURE splitbytile()
AS $$
DECLARE
tile RECORD;
BEGIN
FOR tile IN
SELECT tid, geom FROM test_tiles ORDER BY tid
LOOP
INSERT INTO split_table (id, areaname, ttid, geom)
SELECT id, areaname, tile.tid,
CASE WHEN st_within(base.geom, tile.geom) THEN st_multi(base.geom)
ELSE st_multi(st_intersection(base.geom, tile.geom)) END as geom
FROM area_table as base
WHERE st_intersects(base.geom, tile.geom);
COMMIT;
END LOOP;
END;
$$ LANGUAGE plpgsql;
Having tested this successfully, I now need to convert it to a dynamic procedure to which I can pass the table names as parameters. I tried the following partial conversion, using format() for the statement inside the loop:
CREATE OR REPLACE PROCEDURE splitbytile(in_table text, grid_table text, split_table text)
AS $$
DECLARE
tile RECORD;
BEGIN
FOR tile IN
EXECUTE format('SELECT tid, geom FROM %I ORDER BY tid', grid_table)
LOOP
EXECUTE
FORMAT(
'INSERT INTO %1$I (id, areaname, ttid, geom)
SELECT id, areaname, tile.tid,
CASE WHEN st_within(base.geom, tile.geom) THEN st_multi(base.geom)
ELSE st_multi(st_intersection(base.geom, tile.geom)) END as geom
FROM %2$I as base
WHERE st_intersects(base.geom, tile.geom)', split_table, in_table
);
COMMIT;
END LOOP;
END;
$$ LANGUAGE plpgsql;
But it throws an error:
missing FROM-clause entry for table "tile"
So, how can I convert the procedure to a dynamic one? More specifically, how can I use the record variable (tile) returned by the FOR loop inside the dynamic statement? Note that it works when format() is not used.
You can use EXECUTE ... USING to supply parameters to a dynamic query:
EXECUTE
format(
'SELECT r FROM %I WHERE c = $1.val',
table_name
)
INTO result_var
USING record_var;
The first argument to USING will be used for $1, the second for $2 and so on.
See the documentation for details.
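Applied to the procedure above, the dynamic INSERT can take the tile fields as parameters. Here is a sketch under that assumption; it passes tile.tid and tile.geom as two separate parameters, since field access on a generic record parameter inside the dynamic query can be ambiguous:
CREATE OR REPLACE PROCEDURE splitbytile(in_table text, grid_table text, split_table text)
AS $$
DECLARE
    tile RECORD;
BEGIN
    FOR tile IN
        EXECUTE format('SELECT tid, geom FROM %I ORDER BY tid', grid_table)
    LOOP
        -- $1 is tile.tid, $2 is tile.geom, supplied by the USING clause
        EXECUTE format(
            'INSERT INTO %1$I (id, areaname, ttid, geom)
             SELECT id, areaname, $1,
                    CASE WHEN st_within(base.geom, $2) THEN st_multi(base.geom)
                         ELSE st_multi(st_intersection(base.geom, $2)) END AS geom
             FROM %2$I AS base
             WHERE st_intersects(base.geom, $2)', split_table, in_table)
        USING tile.tid, tile.geom;
        COMMIT;
    END LOOP;
END;
$$ LANGUAGE plpgsql;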
Personally, I use a somewhat different way to create dynamic functions: by concatenating and executing query text. You can also do it like this.
CREATE OR REPLACE FUNCTION splitbytile()
RETURNS void AS $$
DECLARE
    result1      text;
    query1       text;                    -- not declared in the original; added so the function compiles
    query2       text;                    -- same
    table_name   text := 'test_tiles';
    msi          text := '+7 9912 231';
    _body        text := 'Hello world';   -- the original declared msi twice; renamed to match its later use
    code         text := 'code_name';
    kod_type     integer := 1;            -- assumed value; not declared in the original
    time_now     timestamp := now();      -- assumed; not declared in the original
    _operator_id integer := 2;
BEGIN
    query1 := 'SELECT msisdn FROM ' || table_name || ' WHERE msisdn = ''' || msi || ''';';
    query2 := 'INSERT INTO ' || table_name || '(msisdn,usage,body,pr_code,status,sent_date,code_type,operator_id)
               VALUES(''' || msi || ''',' || true || ',''' || _body || ''',''' || code || ''',' || false || ',''' || time_now || ''',' || kod_type || ',' || _operator_id || ');';
    EXECUTE query1 INTO result1;
    EXECUTE query2;
END;
$$ LANGUAGE plpgsql;
You just build your query as text, then execute it wherever you want, perhaps checking the value of result1 inside an IF statement or something like that.
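For instance, a minimal sketch of that check, assuming query2 should only run when query1 found a row:
EXECUTE query1 INTO result1;
IF result1 IS NOT NULL THEN
    EXECUTE query2;
END IF;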
I have a problem creating a PostgreSQL (9.3) trigger on table update.
I want to set new values in the loop, as in:
EXECUTE 'NEW.'|| fieldName || ':=''some prepend data'' || NEW.' || fieldName || ';';
where fieldName is set dynamically. But this string raises an error:
ERROR: syntax error at or near "NEW"
How do I go about achieving that?
You can implement that rather conveniently with the hstore operator #=:
Make sure the additional module is installed properly (once per database), in a schema that's included in your search_path:
How to use % operator from the extension pg_trgm?
Best way to install hstore on multiple schemas in a Postgres database?
Trigger function:
CREATE OR REPLACE FUNCTION tbl_insup_bef()
RETURNS TRIGGER AS
$func$
DECLARE
_prefix CONSTANT text := 'some prepend data'; -- your prefix here
_prelen CONSTANT int := 17; -- length of above string (optional optimization)
_col text := quote_ident(TG_ARGV[0]);
_val text;
BEGIN
EXECUTE 'SELECT $1.' || _col
USING NEW
INTO _val;
IF left(_val, _prelen) = _prefix THEN
-- do nothing: prefix already there!
ELSE
NEW := NEW #= hstore(_col, _prefix || _val);
END IF;
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
Trigger (reuse the same func for multiple tables):
CREATE TRIGGER insup_bef
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW
EXECUTE PROCEDURE tbl_insup_bef('fieldName'); -- unquoted, case-sensitive column name
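A quick sanity check, assuming a table tbl with a text column "fieldName" and the trigger above in place:
INSERT INTO tbl ("fieldName") VALUES ('hello');
SELECT "fieldName" FROM tbl;               -- 'some prepend datahello'
UPDATE tbl SET "fieldName" = "fieldName";  -- prefix already present, so the value is unchanged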
Closely related with more explanation and advice:
Assignment of a column with dynamic column name
How to access NEW or OLD field given only the field's name?
Get values from varying columns in a generic trigger
Your problem is that EXECUTE can only be used to execute SQL statements, not PL/pgSQL statements like the assignment in your question.
You can maybe work around that like this:
Let's assume that table testtab is defined like this:
CREATE TABLE testtab (
id integer primary key,
val text
);
Then a trigger function like the following will work (wrapped here in a complete definition; the function name is arbitrary):
CREATE OR REPLACE FUNCTION testtab_trig()
RETURNS trigger AS $$
BEGIN
  EXECUTE 'SELECT $1.id, ''prefix '' || $1.val' INTO NEW USING NEW;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
I used hard-coded id and val in my example, but that is not necessary.
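To attach it (the trigger and function names here are the hypothetical ones from the definition above):
CREATE TRIGGER testtab_insup_bef
BEFORE INSERT OR UPDATE ON testtab
FOR EACH ROW EXECUTE PROCEDURE testtab_trig();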
I found a working solution: the trigger should execute AFTER insert/update, not BEFORE. Then the desired statement takes the form
EXECUTE 'UPDATE ' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME ||
' SET ' || fieldName || '= ''prefix:'' ||''' || fieldValue || ''' WHERE id = ' || NEW.id;
I get fieldName and fieldValue this way:
FOR fieldName, fieldValue IN SELECT key, value FROM each(hstore(NEW)) LOOP
  IF .... THEN
    -- build and EXECUTE the UPDATE here
  END IF;
END LOOP;
I would like to find all column names in a table that contain a value in any given record.
I.e. all columns whose value in the record contains a given string:
'%ABC%' or '%QAW%' or '%IGH%'
If possible, give me all the tables and columns in a DB schema, so I do not have to query every table manually.
Update (2016-06-15):
So I got a little further; I can now get all the values from each column in each row in each table. Now I need to see whether that value (v_value) exists in a list of airport codes, i.e. ['LAS','LAX','BIL'].
I have all the airports in a table that I want to read into an array.
I am having trouble creating that array and getting the data into it.
Here is what I have so far. Look at the TODOs.
CREATE OR REPLACE PROCEDURE "CMSDB"."TEST1"
()
LANGUAGE SQL
SPECIFIC SQL3
P1: BEGIN
DECLARE v_tabschema VARCHAR(255);
DECLARE v_tabname VARCHAR(255);
DECLARE v_colname VARCHAR(255);
DECLARE v_airport VARCHAR(255);
DECLARE v_stmt VARCHAR(3000);
DECLARE V_SQL VARCHAR(3000);
DECLARE v_value VARCHAR(255);
DECLARE SQLSTATE CHAR(5) DEFAULT '00000';
DECLARE v_stmt2 STATEMENT;
DECLARE v_value_cursor CURSOR FOR v_stmt2;
DECLARE v_airport_cursor CURSOR FOR select IDX from CMSDB.AIRPORTS;
DECLARE syscat_cursor CURSOR FOR select trim(tabschema), tabname, colname from cmsdb.syscat.columns where tabname = 'ACCTGROUP' and tabschema = 'CMSDB' and TYPENAME = 'VARCHAR' and colname not in ('CHGDATE','CHGPAGE','CHGPROG','CHGTYPE','CHGUSER','CREATEDATETIME','CREATEDBYID','REC_ID');
open v_airport_cursor;
FETCH FROM v_airport_cursor INTO v_airport;
WHILE (SQLSTATE = '00000') DO
call DBMS_OUTPUT.PUT_LINE(v_airport);
-- TODO Add each value to a list, arryalist that can be used to check if the v_value is in the list.
FETCH FROM v_airport_cursor INTO v_airport;
END WHILE;
close v_airport_cursor;
OPEN syscat_cursor;
FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
WHILE (SQLSTATE = '00000') DO
--call DBMS_OUTPUT.PUT_LINE(v_tabschema || ' ' || v_tabname || ' ' || v_colname);
SET v_stmt = 'select ' || v_colname || ' from ' || v_tabschema || '.' || v_tabname;
--call DBMS_OUTPUT.PUT_LINE(v_stmt);
PREPARE v_stmt2 FROM v_stmt;
OPEN v_value_cursor;
FETCH FROM v_value_cursor INTO v_value;
WHILE (SQLSTATE = '00000') DO
-- TODO
--IF ( airportList contains v_value) THEN
--call DBMS_OUTPUT.PUT_LINE(v_value);
--END IF;
FETCH FROM v_value_cursor INTO v_value;
END WHILE;
CLOSE v_value_cursor;
FETCH FROM syscat_cursor INTO v_tabschema, v_tabname, v_colname;
END WHILE;
close syscat_cursor;
END P1
You can use SYSIBM.SYSCOLUMNS:
select name
from sysibm.syscolumns
where tbname = 'XX'
  and (name like '%ABC%' or name like '%QAW%' or name like '%IGH%');
You'll need to create a cursor over SYSTABLES that returns all the tables in the system, then another cursor that returns all the column names in a given table. Once you have those, you can build a dynamic statement that checks all the columns in a given table for the value you are looking for. Fetch the next table name and do it all over again.
Obviously, if you can narrow the search down to a particular schema, or even limit it to tables/columns with a particular naming pattern, you'd be better off.
Another technique, depending on your platform and version of DB2: you might be able to do some sort of bulk export to a set of text files, then use a tool that searches the contents of those text files.
So my problem is simple. I have a schema prod with many tables, and another schema log with the exact same tables and structure (only the primary keys change).
When I do an UPDATE or DELETE in the prod schema, I want to record the old data in the log schema.
I have the following function, called after an update or delete:
CREATE FUNCTION prod.log_data() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
v RECORD;
column_names text;
value_names text;
BEGIN
-- get column names of current table and store the list in a text var
column_names = '';
value_names = '';
FOR v IN SELECT * FROM information_schema.columns WHERE table_name = quote_ident(TG_TABLE_NAME) AND table_schema = quote_ident(TG_TABLE_SCHEMA) LOOP
column_names = column_names || ',' || v.column_name;
value_names = value_names || ',$1.' || v.column_name;
END LOOP;
-- remove first char ','
column_names = substring( column_names FROM 2);
value_names = substring( value_names FROM 2);
-- execute the insert into log schema
EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' ( ' || column_names || ' ) VALUES ( ' || value_names || ' )' USING OLD;
RETURN NULL; -- no need to return, it is executed after update
END;$$;
The annoying part is that I have to fetch the column names from information_schema for each row.
I would rather use this:
EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || OLD;
But some values can be NULL, so this would execute:
INSERT INTO log.user SELECT 2,,,"2015-10-28 13:52:44.785947"
instead of
INSERT INTO log.user SELECT 2,NULL,NULL,"2015-10-28 13:52:44.785947"
Any idea to convert ",," to ",NULL,"?
First of all, I must say that in my opinion using the PostgreSQL system catalogs (like information_schema) is the proper way for such a use case, especially since you only have to write it once: you create the function prod.log_data() and you're done. Moreover, it can be dangerous to use OLD in that context (just like *), because the order of the elements is not specified.
But to answer your exact question: the only way I know is to do some operations on OLD. Observe that you cast OLD to text by the concatenation ... ' SELECT ' || OLD; the default cast creates those ugly double commas, so you can then manipulate that text. In the end I propose:
DECLARE
tmp text;
...
BEGIN
...
/*to make OLD -> text like (2,,3,4,,)*/
SELECT '' || OLD INTO tmp; /*step 1*/
/* take care of commas at the beginning and end: '(,' ',)' */
tmp := replace(replace(tmp, '(,', '(NULL,'), ',)', ',NULL)'); /*step 2*/
/* replace rest of commas to commas with NULL between them */
SELECT array_to_string(string_to_array(tmp, ',', ''), ',', 'NULL') INTO tmp; /*step 3*/
/* Now we can do EXECUTE*/
EXECUTE 'INSERT INTO log.' || TG_TABLE_NAME || ' SELECT ' || tmp;
Of course you can do steps 1-3 in one big step:
SELECT array_to_string(string_to_array(replace(replace('' || OLD, '(,', '(NULL,'), ',)', ',NULL)'), ',', ''), ',', 'NULL') INTO tmp;
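To sanity-check the munging on a literal row text (the value is just an example):
SELECT array_to_string(
         string_to_array(
           replace(replace('(2,,3,4,,)', '(,', '(NULL,'), ',)', ',NULL)'),
           ',', ''),
         ',', 'NULL');
-- returns: (2,NULL,3,4,NULL,NULL)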
In my opinion this approach isn't any better than using information_schema, but it's your call.