PostgreSQL: Check if a value exists in a table through a trigger

I am getting acquainted with triggers in (Postgre)SQL.
What I have now is a table Vereine (which is German for "teams").
Vereine: {[team:string, punkte:int, serie:int]}
This is a very small thing I wrote just to understand how creating tables, sorting and views work, and now I'm using it for triggers. Anyway, team is obviously the name of the team and the primary key, punkte means points, and serie refers to the team's division (just so you understand what the different domains mean).
Problem starts here:
So assume I have a team, say "Juventus", already in my table Vereine. If I then want to insert another row/tuple with the same key "Juventus", instead of ending up with two entries for it, what I'd like is to update the values for the key "Juventus" (replacing the old values with the new ones). In the example below I try to do that with the points.
create table vereine(
    team varchar(20) primary key,
    punkte int not null,
    serie int not null
);

-- example of what the assignment requires
create trigger prevent_redundancy
before insert on vereine
for each row
execute procedure update_points();
create or replace function update_points()
returns trigger as
$BODY$
begin
    if (new.team in (old.team)) then
        update vereine
        set punkte = new.punkte
        where team = new.team;
    else
    end if;
end;
$BODY$
LANGUAGE plpgsql;
-- What the assignment requires is that no already existing ID gets inserted;
-- instead, the corresponding model name is changed.
insert into vereine values('JuventusFC', 50, 1);
insert into vereine values('AS Roma', 30, 1);
insert into vereine values('ParmaCalcio1913', 25, 1);
insert into vereine values('Palermo', 37, 2);
insert into vereine values('Pescara', 32, 2);
insert into vereine values('Spezia', 26, 2);
insert into vereine values('Carrarese Calcio', 34, 3);
insert into vereine values('Virtus Entella', 31, 3);
insert into vereine values('Juventus U-23', 50, 3);
select *
from vereine;

insert into vereine values('JuventusFC', 53, 1);
Here are my problems:
First of all: how would I check whether a key already exists in the table? In queries I would use things like CASE WHEN, WHERE ... IN, or just approach the problem differently. Do I need IF statements here? In other words, how would you rewrite
if (new.team in (old.team)) then
so that it checks whether the key already exists in the table?
Secondly (this may be related to the first problem): I get this error when trying to insert any tuple:
Query execution failed
Reason:
SQL Error [54001]: ERROR: stack depth limit exceeded
Hint: Increase the configuration parameter "max_stack_depth" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.
Where: SQL statement "insert into vereine values(new.team, new.punkte, new.serie)"
How can I fix my code so that it does what I want it to do?
Sorry for being too wordy; I just want to make sure you understand how this is supposed to work. Yes, I've asked a question before related to the same assignment, but this is a whole different problem. Sorry for that as well.

What you basically want is UPSERT syntax, as shown in this answer, using a UNIQUE constraint on the team column. I would suggest using this for your insert operation in the relevant function instead of a trigger.
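A minimal sketch of that UPSERT (it needs PostgreSQL 9.5 or later and relies on the primary key on team acting as the unique constraint):
insert into vereine (team, punkte, serie)
values ('JuventusFC', 53, 1)
on conflict (team)
do update set punkte = excluded.punkte;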
If you still insist on using a trigger, you could write something like this.
create or replace function update_points()
returns trigger as
$BODY$
begin
    if EXISTS (select 1 FROM vereine WHERE team = new.team) then
        update vereine
        set punkte = new.punkte
        where team = new.team;
        RETURN NULL;
    else
        RETURN NEW;
    end if;
end;
$BODY$
LANGUAGE plpgsql;
The difference between RETURN NULL; and RETURN NEW; in a BEFORE INSERT trigger function is that RETURN NULL; skips the triggering statement (i.e. the main INSERT operation), whereas RETURN NEW; lets the intended INSERT statement proceed normally.
Demo
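To see the effect of the corrected function, a quick sanity check could look like this (assuming the trigger from the question is in place; CREATE OR REPLACE FUNCTION swaps in the new body without recreating the trigger, and 'AC Milan' is just a made-up team name for illustration):
insert into vereine values ('AC Milan', 40, 1);  -- not in the table yet: the row is inserted
insert into vereine values ('AC Milan', 47, 1);  -- already there: punkte is updated to 47, no second row
select * from vereine where team = 'AC Milan';   -- one row, punkte = 47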
Now, coming to your error SQL Error [54001]: ERROR: stack depth limit exceeded:
it appears that you either have another trigger on the table that is fired by the update operation, or you have written another INSERT inside the trigger while returning NEW, so the trigger keeps firing itself recursively until the stack is exhausted. You have to decide how to handle it if it's another trigger (either drop it or modify it, but that's beyond the scope of this question, and you should ask it separately if you have further issues).
You can check for the existence of triggers on the table by simply querying information_schema.triggers or pg_trigger; refer to this answer for more details.
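For example, something along these lines (assuming the vereine table from the question):
-- triggers visible through the information schema
select trigger_name, action_timing, event_manipulation
from information_schema.triggers
where event_object_table = 'vereine';

-- or via the system catalog; tgisinternal filters out internally generated triggers
select tgname
from pg_trigger
where tgrelid = 'vereine'::regclass
  and not tgisinternal;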

Related

How to update the table's NEW values after an INSERT trigger in PostgreSQL/PostGIS?

I am trying to automate some calculations on tables in my database. I want to perform some UPDATEs on rows that are newly inserted into a table, but I have never used NEW or OLD before. I tried writing the code so that the updates happen on the new values by referring to NEW.[tablename], but it won't work. Isn't there any statement at the beginning of the trigger function to specify running the function only on new values? I cannot find useful information about this.
CREATE OR REPLACE FUNCTION cost_estimation()
RETURNS TRIGGER AS
$func$
DECLARE
a INTEGER := 3;
BEGIN
UPDATE NEW.cost_table
SET column4 = a;
UPDATE NEW.cost_table
SET column 5 = column4 - column2;
[...]
RETURN NEW;
END
$func$ language plpgsql
UPDATE:
Thank you for the answers so far.
My original code is built around the UPDATE structure and needs to be rewritten if the UPDATE is omitted. I should give a better example of my situation. Simply put: I have a table (T1) which will be filled with data from another table (T2).
After data is inserted into T1 from T2, I want to run calculations on the new values inside T1 (the code includes PostGIS functionality):
CREATE OR REPLACE FUNCTION cost_estimation()
RETURNS TRIGGER AS
$func$
BEGIN
NEW.column6 = column2 FROM external_table WHERE
St_Intersects(NEW.geom, external_table.geom) LIMIT1;
NEW.column8 = CASE
WHEN st_intersects(NEW.geom, external_table2.geom) then 'intersects'
WHEN (NEW.column9 = 'K' and NEW.column10 <= 6) then 'somethingelse'
ELSE 'nothing'
END
FROM external_table2;
[...]
RETURN NEW;
END
$func$ language plpgsql
CREATE TRIGGER table_calculation_on_new
BEFORE INSERT OR UPDATE ON cost_estimation
FOR EACH ROW EXECUTE PROCEDURE road_coast_estimation();
After inserting values into my table, no calculations are performed.
UPDATE 2: I checked my tables again and found that another trigger was blocking the table operation. The code in the lower half is working fine now, thanks to @a_horse_with_no_name.
NEW and OLD aren't "statements"; they are records that represent the row modified by the DML statement that fired the trigger.
Assuming the trigger is defined on cost_table, you can simply change the fields in the NEW record. No need to UPDATE anything:
CREATE OR REPLACE FUNCTION cost_estimation()
RETURNS TRIGGER AS
$func$
DECLARE
    a INTEGER := 3;
BEGIN
    new.column4 := a;
    new.column5 := new.column4 - new.column2;
    return new;
END;
$func$ language plpgsql;
For this to work the trigger needs to be defined as a BEFORE trigger:
create trigger cost_table_trigger
BEFORE insert or update on cost_table
for each row execute procedure cost_estimation();
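For illustration only (the question doesn't show cost_table's definition, so this assumes the function and trigger above are in place and that the columns are plain integers):
-- insert only column2 and let the BEFORE trigger fill column4 and column5
insert into cost_table (column2) values (10);
select column2, column4, column5 from cost_table;
-- expected result: column4 = 3, column5 = 3 - 10 = -7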

SAP DBTech JDBC error when a trigger is executed

I would like to ask what this error is and how I can fix it:
SAP DBTech JDBC: [7]: feature not supported: trigger execution with transition variable is not supported with piecewise lob writing.
Situation:
I read from a file that is more than 100 KB, then insert the data into an SAP HANA database.
The column being inserted into is of BLOB type.
The table has a trigger which inserts the same information into another table (for tracking purposes).
Without this trigger, the data can be inserted without any error.
I cannot give the actual code because it is confidential, so I created a sample below.
I created an hdbtrigger file and compiled it using the SAP HANA Database Module in SAP Web IDE to create the table, trigger, etc.
TRIGGER "M_IMG_T_T"
AFTER INSERT OR UPDATE OR DELETE
ON "M_IMG_T"
REFERENCING NEW ROW new_row, OLD ROW old_row
FOR EACH ROW
BEGIN
DECLARE upd_kbn VARCHAR(1) := 'U';
IF :new_row."ID_PK" IS NULL THEN
upd_kbn = 'D';
ELSEIF :old_row."ID_PK" IS NULL THEN
upd_kbn = 'I';
END IF;
IF :upd_kbn = 'I' OR :upd_kbn = 'U' THEN
INSERT INTO "M_IMG_T_H" VALUES(
:new_row."ID_PK",
:new_row."I_IMG_FILE_DATA"
);
ELSE
INSERT INTO "M_IMG_T_H" VALUES(
:new_row."ID_PK",
:new_row."I_IMG_FILE_DATA"
);
END IF;
END
When I export the catalog of the table, I get this (I copied only the insert part):
CREATE TRIGGER "M_IMG_T_T_I" AFTER INSERT ON "M_IMG_T" REFERENCING NEW ROW NEW_ROW FOR EACH ROW
BEGIN
DECLARE UPD_CLS VARCHAR(2) := 'I';
INSERT INTO "M_IMG_T_H" (
"I_ID",
"I_IMG_FILE_DATA"
) VALUES(
:NEW_ROW."I_ID",
:NEW_ROW."I_IMG_FILE_DATA"
);
END

Return the value changed by an update without a trigger

Postgres has a great RETURNING clause for INSERT, DELETE and UPDATE...and it's made me a bit greedy. In a few cases, what I'd like to get is not only the current value, but the previous value:
UPDATE analytic_productivity
SET points = 1000
WHERE points > 1000
RETURNING id,
points,
OLD.points;
I don't believe there's any way to access previous values outside of the lifespan and context of a trigger, so I'll guess that what I'd like isn't possible as such. If that's right, can anyone suggest an alternative? I'm overwriting outliers with some set values and would like to record the overwritten values in another table; this is why I don't know the current value in advance. This is a rare (and clearly suspect) operation, and I don't want to record the change on normal inserts and updates.
As an alternative, I'm thinking that I can select the outliers, revise them, and then write back the modifications. So, do most of the work on the client side with a couple of requests to Postgres. If so, can someone suggest the right locking level to apply between my initial SELECT and my following UPDATE? I believe that the FOR UPDATE lock is right.
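For what it's worth, a minimal sketch of that client-side round trip, using the table and columns from the example above; FOR UPDATE holds the row locks until the transaction ends, so nothing can change the outliers between the SELECT and the UPDATE:
begin;

-- read and lock the outlier rows; the client records id/points from this result
select id, points
from analytic_productivity
where points > 1000
for update;

-- then overwrite them in the same transaction
update analytic_productivity
set points = 1000
where points > 1000;

commit;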
Any suggestions on a smart way to capture previous values, during an update, without a trigger would be great to hear about.
Follow-up
Thanks to comments here, I experimented a bit and came up with a solution that works in my case. To make my objectives clearer:
I've got a table named outlier_rule that defines values that are too high for a specific column.
The goal is to loop over the table, and apply the rules to set outliers to a fixed value.
Stomping on outliers like this is...questionable. There must be leaks in the app's UI that allow for unreasonable values. To help track these down, I'm recording the large values in a table named outlier_change.
I'd like to push this behavior into a server-side function so that any of our servers, regardless of their codebase version, can invoke the current logic.
The client servers compose and send an email with a result summary when outliers are found and corrected.
So: a server-side function to do everything, log some data, and return a result. I've got that working, but it has the smell of You Don't Know What You're Doing So Just Keep Adding Code Until It Works. I've at least got a better handle on using FORMAT, and I think I understand now that a single function can do many things and that you can choose what to return with the RETURN clause. For reference, here are the various bits of code:
CREATE TABLE IF NOT EXISTS data.outlier_rule (
id uuid NOT NULL DEFAULT extensions.gen_random_uuid(),
schema_name text NOT NULL DEFAULT NULL,
table_name text NOT NULL DEFAULT NULL,
column_name text NOT NULL DEFAULT NULL,
threshold integer,
set_to integer,
CONSTRAINT outlier_rule_id_pkey
PRIMARY KEY (schema_name,table_name,column_name)
);
For tracking the modifications, I've got a second table named outlier_change:
------------------------------
-- Table
------------------------------
DROP TABLE IF EXISTS data.outlier_change CASCADE;
CREATE TABLE IF NOT EXISTS data.outlier_change (
id uuid NOT NULL DEFAULT NULL,
outlier_rule_id uuid NOT NULL DEFAULT NULL,
value_was integer NOT NULL DEFAULT NULL,
set_to integer NOT NULL DEFAULT NULL,
change_count integer NOT NULL DEFAULT 0,
last_changed_dts timestamptz NOT NULL DEFAULT NOW(),
CONSTRAINT outlier_change_id_pkey
PRIMARY KEY (id,outlier_rule_id)
);
ALTER TABLE data.outlier_change OWNER TO user_change_structure;
------------------------------
-- Trigger Function
------------------------------
CREATE OR REPLACE FUNCTION data.on_outlier_change_upsert()
RETURNS pg_catalog.trigger AS $BODY$
BEGIN
NEW.last_changed_dts := NOW();
NEW.change_count := OLD.change_count + 1;
RETURN NEW; -- important!
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
------------------------------
-- Trigger
------------------------------
CREATE TRIGGER outlier_change_upsert BEFORE INSERT OR UPDATE ON data.outlier_change
FOR EACH ROW
EXECUTE PROCEDURE data.on_outlier_change_upsert();
DROP FUNCTION IF EXISTS data.outlier_fix ();
CREATE OR REPLACE FUNCTION data.outlier_fix ()
RETURNS TABLE (
schema_name text,
table_name text,
column_name text,
id uuid,
value_was integer,
set_to integer,
change_count integer
)
AS $$
DECLARE
rule record;
now_ timestamptz = NOW();
BEGIN
FOR rule IN SELECT * FROM data.outlier_rule LOOP
EXECUTE FORMAT (
'INSERT INTO outlier_change (
outlier_rule_id,
set_to,
id,
value_was)
SELECT %6$L,
%5$s,
%2$I.id,
%2$I.%3$I
FROM %1$I.%2$I
WHERE %3$I > %4$s
ON CONFLICT(id,outlier_rule_id) DO UPDATE SET
value_was = EXCLUDED.value_was,
set_to = EXCLUDED.set_to
RETURNING outlier_rule_id,
id,
value_was,
set_to,
change_count;
UPDATE %1$I.%2$I
SET %3$I = %5$s
WHERE %3$I > %4$s;',
rule.schema_name,
rule.table_name,
rule.column_name,
rule.threshold,
rule.set_to,
rule.id);
END LOOP;
RETURN QUERY EXECUTE ('
SELECT outlier_rule.schema_name,
outlier_rule.table_name,
outlier_rule.column_name,
outlier_change.id,
outlier_change.value_was,
outlier_change.set_to,
outlier_change.change_count
FROM outlier_change
JOIN outlier_rule ON (outlier_rule.id = outlier_change.outlier_rule_id)
WHERE last_changed_dts = $1')
USING now_;
END;
$$ LANGUAGE plpgsql;
ALTER FUNCTION data.outlier_fix() OWNER TO user_bender;
You could achieve that with a bit of a hack: you can self-join the table in your UPDATE query like this:
UPDATE analytic_productivity NEW
SET points = 1000
FROM analytic_productivity OLD
WHERE NEW.points > 1000
and NEW.id = OLD.id
RETURNING NEW.id,
NEW.points,
OLD.points as old_points;

Update and insert records based on a list of values in a parameter

I am using PostgreSQL 9.3.
I want to create a function that updates my table (setting flag='9') and inserts new records (flag='0') with the rfidnumber values specified in a parameter.
This parameter may contain several values separated by spaces (e.g. 11 22 33 44).
CREATE OR REPLACE FUNCTION public.fcreate_rfid (
znumber varchar
)
RETURNS boolean AS
$body$
BEGIN
--update old record which has the same rfid number and flag='9' if exists
update tblrfid set flag='9' where flag='0' and rfidnumber in (znumber);
-- generate new record
insert into tblrfid(tanggal, flag, rfidnumber)
select localtimestamp, '0', regexp_split_to_table(znumber, ' ');
return true;
END;
$body$
LANGUAGE 'plpgsql';
when I call this function using:
select fcreate_rfid('11 22 33 44');
This function fails to update the old records, but succeeds in inserting the new ones.
Help me fix this problem. I know the problem is the UPDATE command, but I just don't know how to correct it.
You cannot do it this way: znumber is a VARCHAR, not a list of elements, so the IN operator is useless here. You have only one element in the list, and it is the whole string znumber, not its space-delimited elements.
I am not sure whether the code below will work, but it should give you some direction for your research:
UPDATE tblrfid SET flag='9'
WHERE flag='0' AND rfidnumber = ANY (string_to_array(znumber,' '));
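An untested sketch of the whole function with that corrected UPDATE, keeping the question's table and column names and using unnest over the same split array for the INSERT:
CREATE OR REPLACE FUNCTION public.fcreate_rfid (
    znumber varchar
)
RETURNS boolean AS
$body$
BEGIN
    -- flag old records that carry one of the passed rfid numbers
    UPDATE tblrfid
    SET flag = '9'
    WHERE flag = '0'
      AND rfidnumber = ANY (string_to_array(znumber, ' '));

    -- generate one new record per rfid number
    INSERT INTO tblrfid(tanggal, flag, rfidnumber)
    SELECT localtimestamp, '0', unnest(string_to_array(znumber, ' '));

    RETURN true;
END;
$body$
LANGUAGE plpgsql;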

postgres trigger creates index: BEFORE INSERT ON hides one row

I have a trigger AFTER INSERT ON mytable that calls a function
CREATE OR REPLACE FUNCTION myfunction() RETURNS trigger AS
$BODY$
DECLARE
index TEXT;
BEGIN
index := 'myIndex_' || NEW.id2::text;
IF to_regclass(index::cstring) IS NULL THEN
EXECUTE 'CREATE INDEX ' || index || ' ON mytable(id) WITH (FILLFACTOR=100) WHERE id2=' || NEW.id2|| ';';
RAISE NOTICE 'Created new index %',index;
END IF;
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
SECURITY DEFINER
COST 100;
ALTER FUNCTION myfunction()
OWNER TO theadmin;
This works wonderfully. For each distinct id2 I create an index. Speeds up relevant queries by a lot.
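For context, a query of roughly this shape (the values are hypothetical) is the kind these partial indexes can serve:
-- the planner can use myIndex_868 here because the query's WHERE clause
-- implies the index predicate id2 = 868
SELECT *
FROM mytable
WHERE id2 = 868
  AND id = 1391;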
As mentioned above, I fire this trigger AFTER INSERT. Before that, however, I had the trigger set to BEFORE INSERT, and the function did some strange things. (Yes, I had changed the RETURN NULL to RETURN NEW.)
Inserting a new row, insert into mytable VALUES(1391, 868, 0.5, 0.5);, creates the corresponding index myIndex_868.
The inserted row does not appear in mytable when doing a SELECT :(
Trying to insert the same row again results in ERROR: duplicate key value violates unique constraint "mytable_pkey", because of course DETAIL: Key (id, id2)=(1391, 868) already exists.
Inserting other rows for the same id2 works as expected :)
DELETE FROM mytable WHERE id = 1391 and id2 = 868 does nothing.
DROP INDEX myIndex_868; drops the index, and suddenly the initial row that never appeared in the table is there!
Why does BEFORE INSERT ON behave so differently? Is this a bug in postgres 9.4 or did I overlook something?
Just for completeness' sake:
CREATE TRIGGER mytrigger
AFTER INSERT ON mytable
FOR EACH ROW EXECUTE PROCEDURE myfunction();
vs.
CREATE TRIGGER mytrigger
BEFORE INSERT ON mytable
FOR EACH ROW EXECUTE PROCEDURE myfunction();
I'd argue that this is a bug in PostgreSQL. I could reproduce it with 9.6.
It is clear that the row is not contained in the index, since the index is created in the BEFORE trigger (before the row itself is written), but the fact that the index is not updated when the row is subsequently inserted is a bug in my opinion.
I have written to pgsql-hackers to ask for an opinion.
But apart from that, I don't see the point of the whole exercise.
Better than creating a gazillion indexes would be to create a single one:
CREATE INDEX ON mytable(id2, id);