PostgreSQL triggers with LibreOffice Base front-end

I have the following trigger function & trigger in PostgreSQL 12.1:
create or replace function constraint_for_present()
returns trigger
as $$
BEGIN
if
new.present_status = 'viewing'
and new.name not in (select viewable_item from sourcing)
then raise exception 'a present_status of "viewing" requires that the viewable item is in sourcing';
end if;
return new;
END;
$$ language plpgsql;
create trigger constraint_for_present
before insert or update of present_status on viewable_item
for each row
execute function constraint_for_present();
These work as expected during data entry in the psql and TablePlus clients. However, the function throws an error when accessing the database via LibreOffice Base:
pq_driver: [PGRES_FATAL_ERROR]ERROR: relation "sourcing" does not exist
LINE 2: and new.name not in (select viewable_item from sourcing)
QUERY: SELECT new.present_status = 'viewing'
and new.name not in (select viewable_item from sourcing)
CONTEXT: PL/pgSQL function viewing.constraint_for_present() line 3 at IF
(caused by statement 'UPDATE "viewing"."viewable_item" SET "present_status" = 'none' WHERE "name" = 'test4'')
In Base I have a simple form set up for the trigger's table, with each foreign-key column set to list box, and the Type of list contents set to Sql (also tried Sql [Native]). The List content of each is (with appropriate table and primary key columns):
select name from viewing.cv_present_status order by name
(This database is using natural keys for now, for organizational political reasons.) The Bound field is set to 0, which is the displayed and primary key column.
So ... 2 questions:
Why is this problem happening only in Base, and how might I fix it (or at least better trouble-shoot it)?
Since Bound field appears to take only a single integer, does that in effect mean that you can't use list boxes for tables with multi-column primary keys, at least if there is a single displayed column?

This is most likely a search_path issue: psql and TablePlus evidently connect with a search_path that resolves the unqualified sourcing to viewing.sourcing, while the connection LibreOffice Base opens does not include the viewing schema. In the trigger function, you can fully qualify the table so the lookup no longer depends on the session's search_path:
...
and new.name not in (select viewable_item from viewing.sourcing)
...
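For completeness, here is a sketch of the revised function with the schema-qualified lookup (the only change from the original function is the viewing.sourcing reference); the ALTER FUNCTION lines at the end are an alternative I'd consider, not something the answer above requires:
create or replace function constraint_for_present()
returns trigger
as $$
BEGIN
if
new.present_status = 'viewing'
and new.name not in (select viewable_item from viewing.sourcing)  -- schema-qualified
then raise exception 'a present_status of "viewing" requires that the viewable item is in sourcing';
end if;
return new;
END;
$$ language plpgsql;
-- Alternative sketch: pin the search path for this function only, so unqualified
-- names resolve the same way regardless of the client's connection settings:
-- alter function constraint_for_present() set search_path = viewing, public;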

Related

How to upgrade table inside a trigger function in POSTGRESQL?

I would like to create a trigger function in my database which checks whether the newly "inserted" value (max_bid) is at least 1 greater than the largest max_bid value currently in the table.
If this is the case, the max_bid value inside the table should be updated, although not with the newly "inserted" value; instead it should be increased by 1.
For instance, if the current max_bid is 10 and the newly "inserted" max_bid is 20, the stored max_bid value should be increased by 1 (in this case to 11).
I tried to do it with a trigger, but unfortunately it doesn't work. Please help me solve this problem.
Here is my code:
CREATE TABLE bidtable (
mail_buyer VARCHAR(80) NOT NULL,
auction_id INTEGER NOT NULL,
max_bid INTEGER,
PRIMARY KEY (mail_buyer)
);
CREATE OR REPLACE FUNCTION max_bid()
RETURNS TRIGGER LANGUAGE PLPGSQL AS $$
DECLARE
current_maxbid INTEGER;
BEGIN
SELECT MAX(max_bid) INTO current_maxbid
FROM bidtable WHERE NEW.auction_id = OLD.auction_id;
IF (NEW.max_bid < (current_maxbid + 1)) THEN
RAISE EXCEPTION 'error';
RETURN NULL;
END IF;
UPDATE bidtable SET max_bid = (current_maxbid + 1)
WHERE NEW.auction_id = OLD.auction_id
AND NEW.mail_buyer = OLD.mail_buyer;
RETURN NEW;
END;
$$;
CREATE OR REPLACE TRIGGER max_bid_trigger
BEFORE INSERT
ON bidtable
FOR EACH ROW
EXECUTE PROCEDURE max_bid();
Thank you very much for your help.
In a trigger function that is called for an INSERT operation the OLD implicit record variable is null, which is probably the cause of "unfortunately it doesn't work".
Trigger function
In a case like this there is a much easier solution. First of all, disregard the value for max_bid upon input because you require a specific value in all cases. Instead, you are going to set it to that specific value in the function. The trigger function can then be simplified to:
CREATE OR REPLACE FUNCTION set_max_bid() -- Function name different from column name
RETURNS TRIGGER LANGUAGE PLPGSQL AS $$
BEGIN
SELECT COALESCE(MAX(max_bid), 0) + 1 INTO NEW.max_bid -- COALESCE covers the first bid of an auction
FROM bidtable
WHERE auction_id = NEW.auction_id;
RETURN NEW;
END; $$;
That's all there is to it for the trigger function. Update the trigger to the new function name and it should work.
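For reference, re-pointing the trigger at the new function might look like this (same timing and table as in the question; the DROP is only needed if the old trigger still exists):
DROP TRIGGER IF EXISTS max_bid_trigger ON bidtable;
CREATE TRIGGER max_bid_trigger
BEFORE INSERT ON bidtable
FOR EACH ROW
EXECUTE PROCEDURE set_max_bid();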
Concurrency
As several comments to your question pointed out, you run the risk of getting duplicates. This will currently not generate an error because you do not have an appropriate constraint on your table. Avoiding duplicates requires a table constraint like:
UNIQUE (auction_id, max_bid)
You cannot deal with any concurrency issue in the trigger function because the INSERT operation will take place after the trigger function completes with a RETURN NEW statement. What would be the most appropriate way to deal with this depends on your application. Your options are table locking to block any concurrent inserts, or looping in a function until the insert succeeds.
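As a rough illustration of the looping option, a wrapper function could simply retry the insert until it no longer collides. This is a minimal sketch, assuming the UNIQUE (auction_id, max_bid) constraint is in place, the set_max_bid trigger from above fills in max_bid, and that constraint is the only one that can fail here (place_bid is a hypothetical name):
ALTER TABLE bidtable ADD CONSTRAINT bidtable_auction_max_bid_uni UNIQUE (auction_id, max_bid);
CREATE OR REPLACE FUNCTION place_bid(_mail_buyer varchar, _auction_id integer)
RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
LOOP
BEGIN
-- max_bid is filled in by the BEFORE INSERT trigger (set_max_bid)
INSERT INTO bidtable (mail_buyer, auction_id) VALUES (_mail_buyer, _auction_id);
RETURN; -- success
EXCEPTION WHEN unique_violation THEN
NULL; -- a concurrent insert claimed the same max_bid; loop and try again
END;
END LOOP;
END $$;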
Avoid the concurrency issue altogether
If you can change the structure of the bidtable table, you can get rid of the whole concurrency issue by changing your business logic to not require the max_bid column. The max_bid column appears to indicate the order in which bids were placed for each auction_id. If that is the case then you could add a serial column to your table and use that to indicate order of bids being placed (for all auctions). That serial column could then also be the PRIMARY KEY to make your table more agile (no indexing on a large text column). The table would look something like this:
CREATE TABLE bidtable (
id SERIAL PRIMARY KEY,
mail_buyer VARCHAR(80) NOT NULL,
auction_id INTEGER NOT NULL
);
You can drop your trigger and trigger function and just depend on the proper id value being supplied by the system.
The bids for a specific action can then be extracted using a straightforward SELECT:
SELECT id, mail_buyer
FROM bidtable
WHERE auction_id = xxx
ORDER BY id;
If you require a max_bid-like value per auction (remember the id values increment over the full set of auctions, not per auction), you can derive it with a simple window function:
SELECT mail_buyer, row_number() OVER (PARTITION BY auction_id ORDER BY id) AS max_bid
FROM bidtable
WHERE auction_id = xxx;

Declare and return value for DELETE and INSERT

I am trying to remove duplicate data from some of our databases based on unique ids. All deleted data should be stored in a separate table for auditing purposes. Since it concerns quite a few databases and different schemas and tables, I wanted to start using variables to reduce the chance of errors and the amount of work it will take me.
This is the best example query I could think of, but it doesn't work:
do $$
declare #source_schema varchar := 'my_source_schema';
declare #source_table varchar := 'my_source_table';
declare #target_table varchar := 'my_target_schema' || source_table || '_duplicates'; --target schema and appendix are always the same, source_table is a variable input.
declare #unique_keys varchar := ('1', '2', '3')
begin
select into #target_table
from #source_schema.#source_table
where id in (#unique_keys);
delete from #source_schema.#source_table where export_id in (#unique_keys);
end ;
$$;
The query syntax works with hard-coded values.
Most of the time my variables are interpreted as columns or not recognized at all. :(
You need to create and then call a plpgsql procedure with input parameters:
CREATE OR REPLACE PROCEDURE duplicates_suppress
(my_target_schema text, my_source_schema text, my_source_table text, unique_keys text[])
LANGUAGE plpgsql AS
$$
BEGIN
EXECUTE FORMAT(
'WITH list AS (INSERT INTO %1$I.%3$I_duplicates SELECT * FROM %2$I.%3$I WHERE array[id] <@ %4$L :: integer[] RETURNING id)
DELETE FROM %2$I.%3$I AS t USING list AS l WHERE t.id = l.id', my_target_schema, my_source_schema, my_source_table, unique_keys :: text) ;
END ;
$$ ;
The procedure duplicates_suppress inserts into my_target_schema.my_source_table || '_duplicates' the rows from my_source_schema.my_source_table whose id is in the array unique_keys, and then deletes those rows from my_source_schema.my_source_table.
See the test result in dbfiddle.
As has been commented, you need some kind of dynamic SQL. In a FUNCTION, PROCEDURE or a DO statement to do it on the server.
You should be comfortable with PL/pgSQL. Dynamic SQL is no beginners' toy.
Example with a PROCEDURE, like Edouard already suggested. You'd need a FUNCTION instead if you want to wrap it in an outer transaction (like you very well might). See:
When to use stored procedure / user-defined function?
CREATE OR REPLACE PROCEDURE pg_temp.f_archive_dupes(_source_schema text, _source_table text, _unique_keys int[], OUT _row_count int)
LANGUAGE plpgsql AS
$proc$
-- target schema and appendix are always the same, source_table is a variable input
DECLARE
_target_schema CONSTANT text := 's2'; -- hardcoded
_target_table text := _source_table || '_duplicates';
_sql text := format(
'WITH del AS (
DELETE FROM %I.%I
WHERE id = ANY($1)
RETURNING *
)
INSERT INTO %I.%I TABLE del', _source_schema, _source_table
, _target_schema, _target_table);
BEGIN
RAISE NOTICE '%', _sql; -- debug
EXECUTE _sql USING _unique_keys; -- execute
GET DIAGNOSTICS _row_count = ROW_COUNT;
END
$proc$;
Call:
CALL pg_temp.f_archive_dupes('s1', 't1', '{1, 3}', 0);
db<>fiddle here
I made the procedure temporary, since I assume you don't need to keep it permanently. Create it once per database. See:
How to create a temporary function in PostgreSQL?
Passed schema and table names are case-sensitive strings! (Unlike unquoted identifiers in plain SQL.) Either way, be wary of SQL-injection when concatenating SQL dynamically. See:
Are PostgreSQL column names case-sensitive?
Table name as a PostgreSQL function parameter
Made _unique_keys type int[] (array of integer) since your sample values look like integers. Use the actual data type of your id columns!
The variable _sql holds the query string, so it can easily be inspected before actually executing it; that's what RAISE NOTICE '%', _sql; is for.
I suggest commenting out the EXECUTE line until you are sure.
I made the PROCEDURE return the number of processed rows. You didn't ask for that, but it's typically convenient. At hardly any cost. See:
Dynamic SQL (EXECUTE) as condition for IF statement
Best way to get result count before LIMIT was applied
Last, but not least, use DELETE ... RETURNING * in a data-modifying CTE. Since that has to find the rows only once, it comes at about half the cost of a separate SELECT and DELETE. And it's perfectly safe. If anything goes wrong, the whole transaction is rolled back anyway.
Two separate commands can also run into concurrency issues or race conditions which are ruled out this way, as DELETE implicitly locks the rows to delete. Example:
Replicating data between Postgres DBs
Or you can build the statements in a client program like psql and use \gexec. Example:
Filter column names from existing table for SQL DDL statement
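To illustrate the psql route, a rough sketch using the same hypothetical names as above (s1, t1, s2; it assumes s2.t1_duplicates already exists, as in the procedure example): a SELECT generates the statement text, and \gexec executes each resulting row as a statement.
SELECT format(
'WITH del AS (DELETE FROM %I.%I WHERE id = ANY(%L::int[]) RETURNING *)
 INSERT INTO %I.%I TABLE del'
, 's1', 't1', '{1,3}', 's2', 't1_duplicates')\gexec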
Based on Erwin's answer, minor optimization...
create or replace procedure pg_temp.p_archive_dump
(_source_schema text, _source_table text,
_unique_key int[],_target_schema text)
language plpgsql as
$$
declare
_row_count bigint;
_target_table text := '';
BEGIN
_target_table := quote_ident(_source_table) || '_' || array_to_string(_unique_key, '_');
raise notice 'the deleted table records will store in %.%',_target_schema, _target_table;
execute format('create table %I.%I as select * from %I.%I limit 0',_target_schema, _target_table,_source_schema,_source_table );
execute format('with mm as ( delete from %I.%I where id = any (%L) returning * ) insert into %I.%I table mm'
,_source_schema,_source_table,_unique_key, _target_schema, _target_table);
GET DIAGNOSTICS _row_count = ROW_COUNT;
RAISE NOTICE 'rows affected: %', _row_count;
end
$$;
If your _unique_key array is short, this solution also creates the target table for you (its name includes the key values). Obviously you need to create the target schema yourself.
If your _unique_key array is long, you may want to customize how the dumped table is named.
Let's call it.
call pg_temp.p_archive_dump('s1','t1', '{1,2}','s2');
s1 is the source schema, t1 is the source table, {1,2} are the unique key values you want to extract to the new table, and s2 is the target schema.

Automatically DROP FUNCTION when DROP TABLE on POSTGRESQL 11.7

I'm trying to, somehow, trigger an automatic function drop when a table is dropped, and I can't figure out how to do it.
TL;DR: Is there a way to trigger a function drop when a specific table is dropped? (POSTGRESQL 11.7)
Detailed explanation
I'll try to explain my problem using a simplified use case with dummy names.
I have three tables: sensor1, sensor2 and sumSensors;
A FUNCTION (sumdata) was created to INSERT data into the sumSensors table. Inside this function I fetch data from the sensor1 and sensor2 tables and insert their sum into sumSensors;
A trigger was created for each sensor table, which looks like this:
CREATE TRIGGER trig1
AFTER INSERT ON sensor1
FOR EACH ROW EXECUTE
FUNCTION sumdata();
Now, when a new row is inserted into sensor1 OR sensor2, the function sumdata is executed and inserts the sum of the latest values from both into sumSensors.
If I were to DROP FUNCTION sumdata CASCADE;, the triggers would automatically be removed from sensor1 and sensor2. So far, everything is fine! But that's not what I want.
My problem is:
Q: And if I just DROP TABLE sumSensors CASCADE;? What would happen to the function which was meant to insert on this table?
A: As expected, since there's no dependency between the sumSensors table and the sumdata function, the function won't be dropped (it still exists)! The same goes for the triggers which use it (they still exist). This means that when a new row is inserted into a sensor table, the function sumdata is executed and fails, and the failure aborts even the INSERT which fired the trigger.
Is there a way to trigger a function drop when a specific table is dropped?
Thank you in advance
There is no dependency tracking for functions in PostgreSQL (as of version 12).
You can use event triggers to maintain the dependencies yourself.
Full example follows.
More information: documentation of event triggers feature, support functions.
BEGIN;
CREATE TABLE _testtable ( id serial primary key, payload text );
INSERT INTO _testtable (payload) VALUES ('Test data');
CREATE FUNCTION _testfunc(integer) RETURNS integer
LANGUAGE SQL AS $$ SELECT $1 + count(*)::integer FROM _testtable; $$;
SELECT _testfunc(100);
CREATE FUNCTION trg_drop_dependent_functions()
RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
_dropped record;
BEGIN
FOR _dropped IN
SELECT schema_name, object_name
FROM pg_catalog.pg_event_trigger_dropped_objects()
WHERE object_type = 'table'
LOOP
IF _dropped.schema_name = 'public' AND _dropped.object_name = '_testtable' THEN
EXECUTE 'DROP FUNCTION IF EXISTS _testfunc(integer)';
END IF;
END LOOP;
END;
$$;
CREATE EVENT TRIGGER trg_drop_dependent_functions ON sql_drop
EXECUTE FUNCTION trg_drop_dependent_functions();
DROP TABLE _testtable;
ROLLBACK;

record "new" has no field "cure" postgreSQL

So here's the thing: I have two tables, apointments (with a single p) and medical_folder, and I get this:
ERROR: record "new" has no field "cure"
CONTEXT: SQL statement "insert into medical_folder(id,"patient_AMKA",cure,drug_id)
values(new.id,new."patient_AMKA",new.cure,new.drug_id)"
PL/pgSQL function new_medical() line 3 at SQL statement
create trigger example_trigger after insert on apointments
for each row execute procedure new_medical();
create or replace function new_medical()
returns trigger as $$
begin
if apointments.diagnosis is not null then
insert into medical_folder(id,"patient_AMKA",cure,drug_id)
values(new.id,new."patient_AMKA",new.cure,new.drug_id);
return new;
end if;
end;
$$ language plpgsql;
insert into apointments(id,time,"patient_AMKA","doctor_AMKA",diagnosis)
values('30000','2017-05-24 07:42:15','4017954515276','6304745877947815701','M3504');
I have checked multiple times and all of my tables and columns exist.
Please help! Thank you!
Table structures are:
create table medical_folder (
id bigInt,
patient bigInt,
cure text,
drug_id bigInt);
create table apointments (
id bigint,
time timestamp without time zone,
"patient_AMKA" bigInt,
"doctor_AMKA" bigInt);
I was facing the same issue.
Change:
values(new.id,new."patient_AMKA",new.cure,new.drug_id);
to:
values(new.id,new."patient_AMKA",new."cure",new."drug_id");
This error means the table apointments (with 1 p) doesn't have a field named cure. The trigger fires when inserting into apointments, so "new" is an apointments row. Maybe it is part of the diagnosis object?
The values for the second table are not available in the "new" row. You need a way to get and insert them, and using a trigger is not the easiest or cleanest way to go.
You can have your application do two inserts, one per table, and wrap them in a transaction so that both are committed or rolled back together. Another option, which lets you enforce data integrity more strictly, is to create a stored procedure that takes the values to be inserted into both tables and does the two inserts. You can go as far as forbidding users from writing to the tables directly, effectively leaving the stored procedure as the only way to insert the data.
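A minimal sketch of that second option, written as a plpgsql function (the name add_apointment and the parameter list are hypothetical, and the column lists follow the statements in the question rather than the incomplete table definitions shown above):
create or replace function add_apointment(
_id bigint,
_time timestamp,
_patient_amka bigint,
_doctor_amka bigint,
_diagnosis text,
_cure text,
_drug_id bigint)
returns void
language plpgsql as $$
begin
insert into apointments (id, time, "patient_AMKA", "doctor_AMKA", diagnosis)
values (_id, _time, _patient_amka, _doctor_amka, _diagnosis);
-- mirror the trigger's condition: only add a medical_folder row when there is a diagnosis
if _diagnosis is not null then
insert into medical_folder (id, "patient_AMKA", cure, drug_id)
values (_id, _patient_amka, _cure, _drug_id);
end if;
end;
$$;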

Get values from varying columns in a generic trigger

I am new to PostgreSQL and found a trigger which serves my purpose completely except for one little thing. The trigger is quite generic and runs across different tables and logs different field changes. I found it here.
What I now need to do is test for a specific field, which changes depending on the table the trigger fires on. I thought of using substr as all the columns will have the same name format, e.g. XXX_cust_no, but the XXX prefix can be 2 or 4 characters. I need to log the value in the XXX_cust_no field with every record that is written to the history_ / audit table. Using a bunch of IF / ELSE statements to accomplish this is not something I would like to do.
The trigger as it now works logs the table_name, column_name, old_value and new_value. However, I also need to log the XXX_cust_no of the record that was changed.
Basically you need dynamic SQL for dynamic column names. format helps to format the DML command. Pass values from NEW and OLD with the USING clause.
Given these tables:
CREATE TABLE tbl (
t_id serial PRIMARY KEY
,abc_cust_no text
);
CREATE TABLE log (
id int
,table_name text
,column_name text
,old_value text
,new_value text
);
It could work like this:
CREATE OR REPLACE FUNCTION trg_demo()
RETURNS TRIGGER AS
$func$
BEGIN
EXECUTE format('
INSERT INTO log(id, table_name, column_name, old_value, new_value)
SELECT ($2).t_id
, $3
, $4
,($1).%1$I
,($2).%1$I', TG_ARGV[0])
USING OLD, NEW, TG_RELNAME, TG_ARGV[0];
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
CREATE TRIGGER demo
BEFORE UPDATE ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_demo('abc_cust_no'); -- col name here.
SQL Fiddle.
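Because the column name arrives as a trigger argument (TG_ARGV[0]), the same function can be reused on other tables; just pass that table's own customer-number column when creating its trigger. A hypothetical example (other_tbl and ab_cust_no are made-up names; note the function also reads ($2).t_id, so the other table needs a t_id column, or the function needs a second argument for the id column):
CREATE TRIGGER demo
BEFORE UPDATE ON other_tbl
FOR EACH ROW EXECUTE PROCEDURE trg_demo('ab_cust_no');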
Related answer on dba.SE:
How to access NEW or OLD field given only the field's name?
List of special variables visible in plpgsql trigger functions in the manual.