Does the THROW statement include a ROLLBACK? - triggers

I am currently learning SQL Server 2012, and this is my trigger on the ANIMALS table that prevents inserting more than 3 animals of the same type.
There are probably better ways to implement this, but I am curious about the THROW and ROLLBACK statements.
The question is: does THROW include a ROLLBACK, and is it necessary (or which is better) to use both statements? I tried them both together and separately, and the trigger always works...
create trigger TRI_ANIMALS
on ANIMALS
after insert
as
begin
    declare @NewAnimalId int
    select @NewAnimalId = inserted.FK_ANIMAL_TYPE
    from inserted
    if (select count(*) from ANIMALS
        where @NewAnimalId = ANIMALS.FK_ANIMAL_TYPE) > 3
    begin;
        throw 50002, 'There are already 3 animals of the same type', 1;
        rollback;
    end;
end
go
/* test */
insert into ANIMALS(NAME, FK_ANIMAL_TYPE) values('Bono', 2);
insert into ANIMALS(NAME, FK_ANIMAL_TYPE) values('Cesar', 2);
insert into ANIMALS(NAME, FK_ANIMAL_TYPE) values('Ron', 2);
/* this must not be inserted */
insert into ANIMALS(NAME, FK_ANIMAL_TYPE) values('Ares', 2);
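For comparison, here is a sketch of the same check with the ROLLBACK issued before the THROW; an uncaught THROW ends the batch, so any statement written after it inside the trigger never runs:

    if (select count(*) from ANIMALS
        where @NewAnimalId = ANIMALS.FK_ANIMAL_TYPE) > 3
    begin
        rollback;   -- undo the statement that fired the trigger first
        throw 50002, 'There are already 3 animals of the same type', 1;   -- then report the error
    end;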

How to DELETE/INSERT rows in the same table using an UPDATE Trigger?

I want to create a trigger function which copies certain columns of a recently updated row and deletes the old data. After that I want to insert the copied columns into exactly the same table, in the same row (overwrite). I need the data to be INSERTED because this function will be embedded in an existing program with predefined triggers.
That's what I have so far:
CREATE OR REPLACE FUNCTION update_table()
RETURNS TRIGGER AS
$func$
BEGIN
WITH tmp AS (DELETE FROM table
WHERE table.id = NEW.id
RETURNING id, geom )
INSERT INTO table (id, geom) SELECT * FROM tmp;
END;
$func$ language plpgsql;
CREATE TRIGGER T_update
AFTER UPDATE OF geom ON table
EXECUTE PROCEDURE update_table();
But I get the error message:
ERROR: cannot perform DELETE RETURNING on relation "table"
HINT: You need an unconditional ON DELETE DO INSTEAD rule with a RETURNING clause.
Why should I use a rule here?
I'm using PostgreSQL 9.6
UPDATE:
A little bit of clarification: given two columns in my table (id, geom), after I update geom I want to make a copy of this (new) row and insert it into the same table, overwriting the updated row. (I'm not interested in any value from before the update.) I know this is odd, but I need this row to be inserted again because the program I embed this function in listens for an INSERT statement and cannot be changed by me.
Right after you update a row, its old values will no longer be available. So, if you simply want to preserve the old row in case of an update, you need to create a BEFORE UPDATE trigger so that you can still access the OLD values and create a new row, e.g.
CREATE TABLE t (id int, geom geometry(point,4326));
CREATE OR REPLACE FUNCTION update_table() RETURNS TRIGGER AS $$
BEGIN
INSERT INTO t (id, geom) VALUES (OLD.id,OLD.geom);
RETURN NEW;
END; $$ LANGUAGE plpgsql;
CREATE TRIGGER t_update
BEFORE UPDATE OF geom ON t FOR EACH ROW EXECUTE PROCEDURE update_table();
INSERT INTO t VALUES (1,'SRID=4326;POINT(1 1)');
If you update record 1 (and then record 2) ..
UPDATE t SET geom = 'SRID=4326;POINT(2 2)', id = 2 WHERE id = 1;
UPDATE t SET geom = 'SRID=4326;POINT(3 3)', id = 3 WHERE id = 2;
.. you get a new record in the same table, as you wished:
SELECT id, ST_AsText(geom) FROM t;
id | st_astext
----+------------
1 | POINT(1 1)
2 | POINT(2 2)
3 | POINT(3 3)
Demo: db<>fiddle
Unrelated note: consider upgrading your PostgreSQL version! 9.6 will reach EOL in November, 2021.
First, thanks to @JimJones for the answer. I'd like to post his answer modified for this purpose. This code "overwrites" the updated row by inserting a copy of itself and then deleting the old duplicate. That way I can trigger on INSERT.
CREATE TABLE t (Unique_id SERIAL,id int, geom geometry(point,4326));
CREATE OR REPLACE FUNCTION update_table() RETURNS TRIGGER AS $$
BEGIN
INSERT INTO t (id, geom) VALUES (NEW.id,NEW.geom);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER t_update
BEFORE UPDATE OF geom ON t FOR EACH ROW EXECUTE PROCEDURE update_table();
CREATE OR REPLACE FUNCTION delete_table() RETURNS TRIGGER AS $$
BEGIN
DELETE FROM t a
USING t b
WHERE a.Unique_id < b.Unique_id
AND a.geom = b.geom;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER t_delete
AFTER UPDATE OF geom ON t FOR EACH ROW EXECUTE PROCEDURE delete_table();
INSERT INTO t VALUES (1,1,'SRID=4326;POINT(1 1)');
UPDATE t SET geom = 'SRID=4326;POINT(2 2)' WHERE id = 1;

Postgresql trigger syntax error at or near "NEW"

Here is what I'm trying to do:
ALTER TABLE publishroomcontacts ADD COLUMN IF NOT EXISTS contactorder integer NOT NULL default 1;
CREATE OR REPLACE FUNCTION publishroomcontactorder() RETURNS trigger AS $publishroomcontacts$
BEGIN
IF (TG_OP = 'INSERT') THEN
with newcontactorder as (SELECT contactorder FROM publishroomcontacts WHERE publishroomid = NEW.publishroomid ORDER BY contactorder limit 1)
NEW.contactorder = (newcontactorder + 1);
END IF;
RETURN NEW;
END;
$publishroomcontacts$ LANGUAGE plpgsql;
CREATE TRIGGER publishroomcontacts BEFORE INSERT OR UPDATE ON publishroomcontacts
FOR EACH ROW EXECUTE PROCEDURE publishroomcontactorder();
I've been looking at a lot of examples and they all look like this, though most of them are a couple of years old. Has this changed, or why doesn't NEW work? And do I have to do the insert in the function, or does Postgres do the insert with the returned NEW object after the function is done?
I'm not sure what you're trying to do, but your syntax is wrong here:
with newcontactorder as (SELECT contactorder FROM publishroomcontacts WHERE publishroomid = NEW.publishroomid ORDER BY contactorder limit 1)
NEW.contactorder = (newcontactorder + 1);
Do not use a CTE if no SELECT comes afterwards. If you want to increment the contactorder column for a particular publishroomid whenever a new row is added, and this is your sequence (auto-increment) mechanism, then you should replace it with:
NEW.contactorder = COALESCE((
SELECT max(contactorder)
FROM publishroomcontacts
WHERE publishroomid = NEW.publishroomid
), 1);
Note the changes:
there is no CTE, just a variable assignment with a SELECT query
the MAX() aggregate function is used instead of ORDER BY + LIMIT
it is wrapped in COALESCE(x, 1) to properly insert the first contact for a room; it returns 1 when the query returns NULL
Your trigger should look like this:
CREATE OR REPLACE FUNCTION publishroomcontactorder() RETURNS trigger AS $publishroomcontacts$
BEGIN
IF (TG_OP = 'INSERT') THEN
NEW.contactorder = COALESCE((
SELECT max(contactorder) + 1
FROM publishroomcontacts
WHERE publishroomid = NEW.publishroomid
), 1);
END IF;
RETURN NEW;
END;
$publishroomcontacts$ LANGUAGE plpgsql;
Postgres will insert the row itself; you don't have to do anything, because RETURN NEW does that.
This solution does not take care of concurrent inserts, which makes it unsafe for a multi-user environment! You can work around this by performing an UPSERT!
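As one possible workaround, here is a sketch of an UPSERT-based counter, assuming PostgreSQL 9.5 or later; the publishroom_counters table and its columns are purely illustrative and not part of the original schema:

-- hypothetical counter table, one row per publishroomid
CREATE TABLE publishroom_counters (
    publishroomid integer PRIMARY KEY,
    contactorder  integer NOT NULL
);

-- inside the BEFORE INSERT trigger function: allocate the next number atomically
INSERT INTO publishroom_counters AS c (publishroomid, contactorder)
VALUES (NEW.publishroomid, 1)
ON CONFLICT (publishroomid)
DO UPDATE SET contactorder = c.contactorder + 1
RETURNING contactorder INTO NEW.contactorder;

Because ON CONFLICT DO UPDATE locks the conflicting row, a concurrent insert for the same publishroomid waits and then sees the incremented value, so two rows cannot end up with the same contactorder.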
WITH is not an assignment in PL/pgSQL.
PL/pgSQL interprets the line as an SQL statement, but it is invalid SQL because the WITH clause is followed by NEW.contactorder rather than a SELECT or another CTE.
Hence the error; it has nothing to do with NEW as such.
You probably want something like
SELECT contactorder INTO newcontactorder
FROM publishroomcontacts
WHERE publishroomid = NEW.publishroomid
ORDER BY contactorder DESC -- you want the biggest one, right?
LIMIT 1;
You'll have to declare newcontactorder in the DECLARE section.
Warning: If there are two concurrent inserts, they might end up with the same newcontactorder.
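To make the placement concrete, here is a minimal sketch of the whole trigger function with that declaration, assuming contactorder is an integer column as in the question:

CREATE OR REPLACE FUNCTION publishroomcontactorder() RETURNS trigger AS $$
DECLARE
    newcontactorder integer;   -- declared here, in the DECLARE section
BEGIN
    SELECT contactorder INTO newcontactorder
    FROM publishroomcontacts
    WHERE publishroomid = NEW.publishroomid
    ORDER BY contactorder DESC   -- the biggest existing value
    LIMIT 1;
    NEW.contactorder := COALESCE(newcontactorder, 0) + 1;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

The same warning about concurrent inserts applies to this sketch as well.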

Up-to-date dictionary of distinct values for column

I have a table with many columns and several million rows like
CREATE TABLE foo (
id integer,
thing1 text,
thing2 text,
...
stuff text);
How can I keep a dictionary of the unique values of the stuff column up to date, given that it is originally populated like this:
INSERT INTO stuff_dict SELECT DISTINCT stuff from foo;
Should I manually synchronize (check whether a new stuff value is already in stuff_dict before every insert/update), or use triggers for each insert/update/delete on the foo table? In the latter case, what is the best design for such trigger(s)?
UPDATE: a view does not suit here, because SELECT * FROM stuff_dict should run as fast as possible (even CREATE INDEX ON foo(stuff) does not help much when foo has tens of millions of records).
A materialized view seems to be the simplest option for a large table.
In the trigger function, just refresh the view. You can use the concurrently option (see pozs's comment below).
create materialized view stuff_dict as
select distinct stuff
from foo;
create or replace function refresh_stuff_dict()
returns trigger language plpgsql
as $$
begin
refresh materialized view /*concurrently*/ stuff_dict;
return null;
end $$;
create trigger refresh_stuff_dict
after insert or update or delete or truncate
on foo for each statement
execute procedure refresh_stuff_dict();
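One caveat: REFRESH MATERIALIZED VIEW CONCURRENTLY requires at least one unique index on the materialized view that covers all rows and uses only column names (no expressions, no WHERE clause). A minimal sketch of what that would look like here; the index name is illustrative:

-- required before the commented-out "concurrently" option above can be used
create unique index stuff_dict_stuff_uidx on stuff_dict (stuff);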
While the solution with a materialized view is straightforward, it may be suboptimal when the table foo is modified frequently. In that case use a regular table for the dictionary. An index will be helpful.
create table stuff_dict as
select distinct stuff
from foo;
create index on stuff_dict(stuff);
The trigger function is more complicated and should be fired for each row after insert/update/delete:
create or replace function refresh_stuff_dict()
returns trigger language plpgsql
as $$
declare
do_insert boolean := tg_op = 'INSERT' or tg_op = 'UPDATE' and new.stuff <> old.stuff;
do_delete boolean := tg_op = 'DELETE' or tg_op = 'UPDATE' and new.stuff <> old.stuff;
begin
if do_insert and not exists (select 1 from stuff_dict where stuff = new.stuff) then
insert into stuff_dict values(new.stuff);
end if;
if do_delete and not exists (select 1 from foo where stuff = old.stuff) then
delete from stuff_dict
where stuff = old.stuff;
end if;
return case tg_op when 'DELETE' then old else new end;
end $$;
create trigger refresh_stuff_dict
after insert or update or delete
on foo for each row
execute procedure refresh_stuff_dict();

postgresql trigger to make name unique

I'm using postgres 9.4; I have a table with a unique index. I would like to mutate the name by adding a suffix to ensure the name is unique.
I have created a "before" trigger which computes a suffix. It works well in autocommit mode. However, if two items with the same name are inserted in the same transaction, they both get the same unique suffix.
What is the best way to accomplish my task? Is there a way to handle it with a trigger, or should I ... hmm... wrap the insert or update in a savepoint and then handle the error?
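One way to express that savepoint idea is a PL/pgSQL exception block, which uses a savepoint internally. A rough sketch with a hypothetical helper function, assuming a unique constraint on name and omitting the sheet_id scoping mentioned below:

CREATE OR REPLACE FUNCTION insert_with_suffix(p_name text) RETURNS void
LANGUAGE plpgsql AS $$
DECLARE
    i integer := 1;
BEGIN
    LOOP
        BEGIN
            INSERT INTO vis_operation (name)
            VALUES (CASE WHEN i = 1 THEN p_name ELSE format('%s %s', p_name, i) END);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            i := i + 1;   -- failed insert rolls back to the implicit savepoint; retry with a suffix
        END;
    END LOOP;
END;
$$;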
UPDATE (re comment from @Haleemur Ali):
I don't think my question depends on the details. The salient point is that I query the subset of the collection over which I want to enforce uniqueness, and choose a new name... however, it would seem that when the queries are run on two identically named objects in the same transaction, one doesn't see the other's modification to the new value.
But ... just in case... my trigger contains ("type" is a fixed parameter to the trigger function):
select find_unique(coalesce(new.name, capitalize(type)),
'vis_operation', 'name', format(
'sheet_id = %s', new.sheet_id )) into new.name;
Where "find_unique" contains:
create or replace function find_unique(
stem text, table_name text, column_name text, where_expr text = null)
returns text language plpgsql as $$
declare
table_nt text = quote_ident(table_name);
column_nt text = quote_ident(column_name);
bstem text = replace(btrim(stem),'''', '''''');
find_re text = quote_literal(format('^%s(( \d+$)|$)', bstem));
xtct_re text = quote_literal(format('^(%s ?)', bstem));
where_ext text = case when where_expr is null then '' else 'and ' || where_expr end;
query_exists text = format(
$Q$ select 1 from %1$s where btrim(%2$s) = %3$s %4$s $Q$,
table_nt, column_nt, quote_literal(bstem), where_ext );
query_max text = format($q$
select max(coalesce(nullif(regexp_replace(%1$s, %4$s, '', 'i'), ''), '0')::int)
from %2$s where %1$s ~* %3$s %5$s
$q$,
column_nt, table_nt, find_re, xtct_re, where_ext );
last int;
i int;
begin
-- if no exact match, use exact
execute query_exists;
get diagnostics i = row_count;
if i = 0 then
return coalesce(bstem, capitalize(right(table_nt,4)));
end if;
-- find stem w/ number, use max plus one.
execute query_max into last;
if last is null then
return coalesce(bstem, capitalize(right(table_nt,4)));
end if;
return format('%s %s', bstem, last + 1);
end;
$$;
A BEFORE trigger sees rows modified by the statement that is currently running. So this should work. See demo below.
However, your design will not work in the presence of concurrency. You have to LOCK TABLE ... IN EXCLUSIVE MODE the table you're updating, otherwise concurrent transactions could get the same suffix. Or, with a UNIQUE constraint present, all but one will error out.
Personally I suggest the following (a sketch of this approach appears after the list):
Create a side table with the base names and a counter
When you create an entry, lock the side table in EXCLUSIVE mode. This will serialize all sessions that create entries, which is necessary so that you can:
UPDATE side_table SET counter = counter + 1 WHERE name = $1 RETURNING counter to get the next free ID. If you get zero rows, then instead:
Create a new entry in the side table for the base name being created, with the counter set to zero.
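A rough sketch of those steps, with illustrative table, column, and function names:

-- side table holding the highest suffix handed out for each base name
CREATE TABLE name_counters (
    base_name text PRIMARY KEY,
    counter   integer NOT NULL
);

CREATE OR REPLACE FUNCTION next_suffix(p_base text) RETURNS integer
LANGUAGE plpgsql AS $$
DECLARE
    n integer;
BEGIN
    -- serialize all sessions that allocate suffixes
    LOCK TABLE name_counters IN EXCLUSIVE MODE;

    UPDATE name_counters
       SET counter = counter + 1
     WHERE base_name = p_base
    RETURNING counter INTO n;

    IF NOT FOUND THEN
        -- first use of this base name: no suffix yet
        INSERT INTO name_counters (base_name, counter) VALUES (p_base, 0);
        n := 0;
    END IF;

    RETURN n;
END;
$$;

The EXCLUSIVE lock means two concurrent sessions cannot read and increment the same counter value at the same time.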
Demo showing that BEFORE triggers can see rows inserted in the same statement, though not the row that fired the trigger:
craig=> CREATE TABLE demo(id integer);
CREATE TABLE
craig=> \e
CREATE FUNCTION
craig=> CREATE OR REPLACE FUNCTION demo_tg() RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE
row record;
BEGIN
FOR row IN SELECT * FROM demo
LOOP
RAISE NOTICE 'Row is %',row;
END LOOP;
IF tg_op = 'DELETE' THEN
RETURN OLD;
ELSE
RETURN NEW;
END IF;
END;
$$;
CREATE FUNCTION
craig=> CREATE TRIGGER demo_tg BEFORE INSERT OR UPDATE OR DELETE ON demo FOR EACH ROW EXECUTE PROCEDURE demo_tg();
CREATE TRIGGER
craig=> INSERT INTO demo(id) VALUES (1),(2);
NOTICE: Row is (1)
INSERT 0 2
craig=> INSERT INTO demo(id) VALUES (3),(4);
NOTICE: Row is (1)
NOTICE: Row is (2)
NOTICE: Row is (1)
NOTICE: Row is (2)
NOTICE: Row is (3)
INSERT 0 2
craig=> UPDATE demo SET id = id + 100;
NOTICE: Row is (1)
NOTICE: Row is (2)
NOTICE: Row is (3)
NOTICE: Row is (4)
NOTICE: Row is (2)
NOTICE: Row is (3)
NOTICE: Row is (4)
NOTICE: Row is (101)
NOTICE: Row is (3)
NOTICE: Row is (4)
NOTICE: Row is (101)
NOTICE: Row is (102)
NOTICE: Row is (4)
NOTICE: Row is (101)
NOTICE: Row is (102)
NOTICE: Row is (103)
UPDATE 4
craig=>

How to execute the next T-SQL statement when a previous T-SQL statement throws an error?

I am working with SQLite and I have many T-SQL statements that I want to execute, all of them in this way:
T-SQL1; T-SQL2; ... T-SQLN
My T-SQL statements are:
insert into myRelationTable(IDTable1, IDTabl2) VALUES (1,1);
insert into myRelationTable(IDTable1, IDTabl2) VALUES (1,2);
insert into myRelationTable(IDTable1, IDTabl2) VALUES (1,3);
...
With these statements I want to relate records from table1 with table2. If none of the relations already exist there is no problem and all the statements execute, but if, for example, there is a problem with the second one, the first is executed while the third and the following ones are not.
My question is whether there is any way to keep executing the statements without caring that some of them throw an error, because what I want is for the relation to exist; if some relation already exists it is because another user created it, which in the end is exactly what I want, so I would like to continue with the next statement.
Is it possible?
However, if I try to delete a record that does not exist, the following statements are executed, so SQLite does not care about that error and continues. Why does it not work the same way when I try to add a new record?
Thanks.
I would strongly recommend checking whether it is OK to perform the statement rather than ignoring errors.
You can do this by:
DECLARE @count int
SET @count = (SELECT COUNT(1) FROM myRelationTable WHERE IDTable1 = 1 AND IDTabl2 = 1)
IF @count = 0 OR @count IS NULL
BEGIN
    INSERT INTO myRelationTable(IDTable1, IDTabl2) VALUES (1,1)
END
SET @count = (SELECT COUNT(1) FROM myRelationTable WHERE IDTable1 = 1 AND IDTabl2 = 2)
IF @count = 0 OR @count IS NULL
BEGIN
    INSERT INTO myRelationTable(IDTable1, IDTabl2) VALUES (1,2)
END
SET @count = (SELECT COUNT(1) FROM myRelationTable WHERE IDTable1 = 1 AND IDTabl2 = 3)
IF @count = 0 OR @count IS NULL
BEGIN
    INSERT INTO myRelationTable(IDTable1, IDTabl2) VALUES (1,3)
END
Which can very easily be wrapped within a stored procedure.
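For example, a minimal sketch of such a procedure in SQL Server syntax (the procedure name is illustrative; note that SQLite itself has no stored procedures):

CREATE PROCEDURE AddRelation
    @IDTable1 int,
    @IDTabl2  int
AS
BEGIN
    IF NOT EXISTS (SELECT 1 FROM myRelationTable
                   WHERE IDTable1 = @IDTable1 AND IDTabl2 = @IDTabl2)
    BEGIN
        INSERT INTO myRelationTable (IDTable1, IDTabl2)
        VALUES (@IDTable1, @IDTabl2);
    END
END

It would then be called as EXEC AddRelation 1, 1; and so on for each pair.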
As to your question, the answer is:
Sure, easily:
BEGIN TRY
insert into myRelationTable(IDTable1, IDTabl2) VALUES (1,1);
END TRY
BEGIN CATCH
--Do nothing
END CATCH
BEGIN TRY
insert into myRelationTable(IDTable1, IDTabl2) VALUES (1,2);
END TRY
BEGIN CATCH
--Do nothing
END CATCH
BEGIN TRY
insert into myRelationTable(IDTable1, IDTabl2) VALUES (1,3);
END TRY
BEGIN CATCH
--Do nothing
END CATCH
Actually my problem is that I am using SQLite Expert to execute the statements, and this program flags the BEGIN TRY line as a syntax error.
Another way to solve the problem is to use the ignore keyword, in this way:
insert or ignore into my table...
This means that if an error occurs it is ignored and the next statement is executed. But again, the "ignore" keyword is detected as a syntax error by SQLite Expert.
The "ignore" keyword belongs to the ON CONFLICT clause. There is more information in this link.
Thanks.