How to prevent bulk row deletion operations? - postgresql

I can prevent DELETE completely like this:
CREATE TRIGGER prevent_multiple_row_del
BEFORE DELETE ON ALL
BEGIN
RAISE EXCEPTION 'Cant delete more than 1 row at a time';
END;
But how do I check if the delete operation will lead to deletion of multiple rows? Deletion is not a problem, as long as it's limited to a constant number (1 or 5 or 10, as long as it's not unlimited).
Alternatively, how do I allow deletions but prevent deletions of full tables?

A before statement trigger is too early to know the affected rows.
As to full table deletes, use an after statement trigger. All you'd have to do is select from the table and see whether there is some record left in it.
As to deletes of up to n records, this too would have to be determined after the statement. You tagged your question PostgreSQL, but as a_horse_with_no_name pointed out, your code is Oracle. In PL/pgSQL you'd read the row count with GET DIAGNOSTICS count = ROW_COUNT, and in Oracle with SQL%ROWCOUNT. I don't know whether PostgreSQL makes the statement's row count available inside an after statement trigger. For Oracle, checking SQL%ROWCOUNT in an after statement trigger doesn't work; it's too early for this variable to be set.
So at least for Oracle the trick is to keep a custom counter: set it to zero before the statement, increment it after each row, and check it after the statement. I don't know precisely how to do that in PostgreSQL, but there will certainly be a way. In Oracle you'd use a compound trigger, i.e. a super-trigger housing the individual timing-point triggers.
CREATE OR REPLACE TRIGGER prevent_multiple_row_del
FOR DELETE ON mytable COMPOUND TRIGGER
v_count INTEGER := 0;
AFTER EACH ROW IS
BEGIN
v_count := v_count + 1;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
IF v_count > 1 THEN
raise_application_error(-20000, 'Can''t delete more than 1 row at a time');
END IF;
END AFTER STATEMENT;
END prevent_multiple_row_del;
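In PostgreSQL (v10 or later) the custom counter isn't needed: a statement-level AFTER trigger with a transition table can count the deleted rows directly. A sketch of that counterpart, assuming the same table name mytable:

```sql
CREATE OR REPLACE FUNCTION prevent_multiple_row_del()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
   -- old_rows is the transition table holding every row deleted by the statement
   IF (SELECT count(*) FROM old_rows) > 1 THEN
      RAISE EXCEPTION 'Can''t delete more than 1 row at a time';
   END IF;
   RETURN NULL;
END;
$$;

CREATE TRIGGER prevent_multiple_row_del
AFTER DELETE ON mytable
REFERENCING OLD TABLE AS old_rows
FOR EACH STATEMENT
EXECUTE FUNCTION prevent_multiple_row_del();
```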

How to make a PostgreSQL constraint only apply to a new value

I'm new to PostgreSQL and really love how constraints work with row-level security, but I'm confused about how to make them do what I want.
I have a column and I want to add a constraint that enforces a minimum length for a text column. This check works for that:
(length((column_name):: text) > 6)
BUT, it also then prevents users from updating any rows where column_name is already under 6 characters.
I want to prevent them from changing the value TO something that short, while still allowing them to update a row where that is already the case, so they can fix it as needed according to my new policy.
Is this possible?
BUT, it also then prevents users updating any rows where column_name is already under 6 characters.
Well, no. When you try to add that CHECK constraint, all existing rows are checked, and an exception is raised if any violation is found.
You would have to make it NOT VALID. Then yes.
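A sketch of that variant (table and constraint names assumed); NOT VALID makes PostgreSQL skip the check of existing rows when the constraint is added, but every later INSERT or UPDATE is still validated, including updates to old violating rows:

```sql
ALTER TABLE tbl
   ADD CONSTRAINT tbl_column_name_min_len
   CHECK (length(column_name::text) > 6) NOT VALID;
```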
You really need a trigger on INSERT or UPDATE that checks new values. Not as cheap and not as bullet-proof as a constraint, but still pretty solid. Like:
CREATE OR REPLACE FUNCTION trg_col_min_len6()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
IF TG_OP = 'UPDATE'
AND OLD.column_name IS NOT DISTINCT FROM NEW.column_name THEN
-- do nothing
ELSE
RAISE EXCEPTION 'New value for "column_name" must have at least 7 characters.';
END IF;
RETURN NEW;
END
$func$;
-- trigger
CREATE TRIGGER tbl1_column_name_min_len6
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW
WHEN (length(NEW.column_name) < 7)
EXECUTE FUNCTION trg_col_min_len6();
It should be most efficient to check in a WHEN condition to the trigger directly. Then the trigger function is only ever called for short values and can be super simple.
See:
Trigger with multiple WHEN conditions
Fire trigger on update of columnA or ColumnB or ColumnC
You can create separate triggers for INSERT and UPDATE, letting each completely define when it should fire. If completely different logic is required per DML action, this technique allows writing dedicated trigger functions. In this case that is not required, so the trigger function reduces to raise exception ....
-- Single trigger function for both INSERT and UPDATE
create or replace function trg_col_min_len6()
returns trigger
language plpgsql
as $$
begin
raise exception 'Cannot % val = ''%''. Must have at least 6 characters.'
, tg_op, new.val;
return null;
end;
$$;
-- trigger before insert
create trigger tbl_val_min_len6_bir
before insert
on tbl
for each row
when (length(new.val) < 6)
execute function trg_col_min_len6();
-- trigger before update
create trigger tbl_val_min_len6_bur
before update
on tbl
for each row
when ( length(new.val) < 6
and new.val is distinct from old.val
)
execute function trg_col_min_len6();

How to get value from table using Firebird trigger

I want to get a value from the table and compare it with inserted value in the Firebird trigger.
Here is my code.
SET TERM ^;
CREATE TRIGGER after_in_systab FOR SYSTEMTAB
ACTIVE AFTER INSERT POSITION 0
AS
declare sys_code integer;
select sys_code from system_table;
BEGIN
/* enter trigger code here */
if(sys_code == NEW.SYSTEM_CODE) then
insert into logs(log_detail)values('code matched');
end
END^
SET TERM;^
Alternatively, you can use a singleton select expression.
CREATE TRIGGER after_in_systab FOR SYSTEMTAB
ACTIVE AFTER INSERT POSITION 0
AS
declare sys_code integer;
BEGIN
sys_code = (select sys_code from system_table);
if(sys_code = NEW.SYSTEM_CODE) then
begin
insert into logs(log_detail)values('code matched');
end
END
If your select returns...
exactly one row or more, it is the same as Mark's answer (and, like his, it raises an error when multiple rows are returned).
no row at all, the expression returns NULL, while Mark's statement would do nothing (the variable keeps its value).
You may also want to look at the SQL SINGULAR existence predicate and how it differs from EXISTS:
Firebird docs, chapter 4.2.3, Existential Predicates
InterBase docs, stemming from the old pre-Firebird documentation.
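A sketch of how SINGULAR could be used here (same tables as above assumed); note the condition is slightly different, since SINGULAR is true only when exactly one matching row exists:

```sql
CREATE TRIGGER after_in_systab FOR SYSTEMTAB
ACTIVE AFTER INSERT POSITION 0
AS
BEGIN
  if (SINGULAR (select * from system_table
                where sys_code = new.system_code)) then
    insert into logs(log_detail) values ('code matched');
END
```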
You also have to decide clearly what should happen if the transaction is rolled back (because of a database or network error, or because the application issued ROLLBACK): should your LOG still contain a record about the data modification that was not persisted, or should the LOG record vanish together with the un-inserted data row it describes?
If the former, you have to insert the log records in an autonomous transaction (chapter 7.6.16).
You need to use the INTO clause:
CREATE TRIGGER after_in_systab FOR SYSTEMTAB
ACTIVE AFTER INSERT POSITION 0
AS
declare sys_code integer;
BEGIN
select sys_code from system_table into sys_code;
if(sys_code = NEW.SYSTEM_CODE) then
begin
insert into logs(log_detail)values('code matched');
end
END

How to prevent or avoid running update and delete statements without where clauses in PostgreSQL

How do I prevent or avoid running UPDATE or DELETE statements without WHERE clauses in PostgreSQL?
I need something like MySQL's SQL_SAFE_UPDATES setting, but for PostgreSQL.
For example:
UPDATE table_name SET active=1; -- Prevent this statement or throw error message.
UPDATE table_name SET active=1 WHERE id=1; -- This is allowed
My company's database has many users with INSERT and UPDATE privileges, and any one of them could run such an unsafe update.
How do I handle this scenario?
Is there a trigger or extension that can block unsafe updates in PostgreSQL?
I have switched off autocommit to avoid these errors, so I always have a transaction that I can roll back. All you have to do is modify .psqlrc:
\set AUTOCOMMIT off
\echo AUTOCOMMIT = :AUTOCOMMIT
\set PROMPT1 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT2 '%[%033[32m%]%/%[%033[0m%]%R%[%033[1;32;40m%]%x%[%033[0m%]%# '
\set PROMPT3 '>> '
You don't have to include the PROMPT settings, but they are helpful because they change the psql prompt to show the transaction status.
Another advantage of this approach is that it gives you a chance to prevent any erroneous changes.
Example (psql):
database=# SELECT * FROM my_table; -- implicit start transaction; see prompt
-- output result
database*# UPDATE my_table SET my_column = 1; -- missed where clause
UPDATE 525125 -- Oh, no!
database*# ROLLBACK; -- Puh! revert wrong changes
ROLLBACK
database=# -- I'm completely operational and all of my circuits working perfectly
There actually was a discussion on the hackers list about this very feature. It had a mixed reception, but might have been accepted if the author had persisted.
As it is, the best you can do is a statement level trigger that bleats if you modify too many rows:
CREATE TABLE deleteme
AS SELECT i FROM generate_series(1, 1000) AS i;
CREATE FUNCTION stop_mass_deletes() RETURNS trigger
LANGUAGE plpgsql AS
$$BEGIN
IF (SELECT count(*) FROM OLD) > TG_ARGV[0]::bigint THEN
RAISE EXCEPTION 'must not modify more than % rows', TG_ARGV[0];
END IF;
RETURN NULL;
END;$$;
CREATE TRIGGER stop_mass_deletes AFTER DELETE ON deleteme
REFERENCING OLD TABLE AS old FOR EACH STATEMENT
EXECUTE FUNCTION stop_mass_deletes(10);
DELETE FROM deleteme WHERE i < 100;
ERROR: must not modify more than 10 rows
CONTEXT: PL/pgSQL function stop_mass_deletes() line 1 at RAISE
DELETE FROM deleteme WHERE i < 10;
DELETE 9
This will have a certain performance impact on deletes.
This works from v10 on, when transition tables were introduced.
If you can afford to make it a little less convenient for your users, you might try revoking the UPDATE privilege from all "standard" users and creating a stored procedure like this:
CREATE FUNCTION update(table_name, col_name, new_value, condition) RETURNS void
/*
Check if condition is acceptable, create and run UPDATE statement
*/
LANGUAGE plpgsql SECURITY DEFINER
Because of SECURITY DEFINER, your users will be able to UPDATE despite not having the UPDATE privilege.
I'm not sure whether this is a good approach, but it lets you enforce UPDATE (or anything else) requirements as strictly as you wish.
Of course, the more complicated the required UPDATEs, the more complicated your procedure has to be; but if this is mostly about updating a single row by ID (as in your example), it might be worth a try.
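A minimal sketch of such a wrapper for the example above; the function name safe_update and the role name app_user are assumptions, the table and column are hard-wired rather than passed as parameters, and a real version would need to validate its inputs:

```sql
CREATE OR REPLACE FUNCTION safe_update(_id integer, _active integer)
RETURNS void
LANGUAGE plpgsql SECURITY DEFINER AS
$$
BEGIN
   -- the WHERE clause is fixed inside the function, so a mass update is impossible
   UPDATE table_name SET active = _active WHERE id = _id;
END
$$;

REVOKE UPDATE ON table_name FROM app_user;
GRANT EXECUTE ON FUNCTION safe_update(integer, integer) TO app_user;
```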

Which explicit lock to use for a trigger?

I am trying to understand which type of a lock to use for a trigger function.
Simplified function:
CREATE OR REPLACE FUNCTION max_count() RETURNS TRIGGER AS
$$
DECLARE
max_row INTEGER := 6;
association_count INTEGER := 0;
BEGIN
LOCK TABLE my_table IN ROW EXCLUSIVE MODE;
SELECT INTO association_count COUNT(*) FROM my_table WHERE user_id = NEW.user_id;
IF association_count > max_row THEN
RAISE EXCEPTION 'Too many rows';
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE CONSTRAINT TRIGGER my_max_count
AFTER INSERT OR UPDATE ON my_table
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW
EXECUTE PROCEDURE max_count();
I initially planned to use EXCLUSIVE, but it feels too heavy. What I really want is to ensure that, during this function's execution, no new rows with the user_id concerned are added to the table.
If you want to prevent concurrent transactions from modifying the table, a SHARE lock would be correct. But that could lead to a deadlock if two such transactions run at the same time — each has modified some rows and is blocked by the other one when it tries to escalate the table lock.
Moreover, all table locks that conflict with SHARE UPDATE EXCLUSIVE will lead to autovacuum cancelation, which will cause table bloat when it happens too often.
So stay away from table locks, they are usually the wrong thing.
The better way to go about this is to use no explicit locking at all, but to use the SERIALIZABLE isolation level for all transactions that access this table.
Then you can simply use your trigger (without lock), and no anomalies can occur. If you get a serialization error, repeat the transaction.
This comes with a certain performance penalty, but allows more concurrency than a table lock. It also avoids the problems described in the beginning.
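Usage could look like this (retry logic lives in the application; the column name is assumed from the trigger above):

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO my_table (user_id) VALUES (42);
COMMIT;
-- if the transaction fails with a serialization failure (SQLSTATE 40001),
-- roll back and repeat the whole transaction
```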

Mutating table in SQL for a specific case of update

I have create a trigger for table stock
The schema of the table is as follows:
create table stock(item_code varchar2(2) primary key, p_qty number(2),s_qty number(2));
The Trigger is as follows:
CREATE OR REPLACE TRIGGER TR_STOCK BEFORE UPDATE OF S_QTY ON STOCK FOR EACH ROW
DECLARE
V_P STOCK.P_QTY%TYPE;
V_S STOCK.S_QTY%TYPE;
V_I VARCHAR2(2);
BEGIN
V_S:=:NEW.S_QTY;
V_I:=:NEW.ITEM_CODE;
SELECT P_QTY INTO V_P FROM STOCK WHERE ITEM_CODE=V_I;
IF V_S>V_P THEN
RAISE_APPLICATION_ERROR(-20400,'SOLD QTY CANNOT EXCEED PURCHASED QTY...');
END IF;
END;
/
Now every time I execute an update query, it says the table is mutating and flags the following error:
update stock set s_qty=2 where item_code='i4'
*
ERROR at line 1:
ORA-04091: table HR.STOCK is mutating, trigger/function may not see it
ORA-06512: at "HR.TR_STOCK", line 8
ORA-04088: error during execution of trigger 'HR.TR_STOCK'
Any help with this specific problem?
There is no need to query the STOCK table. Simply compare the :NEW.P_QTY and :NEW.S_QTY fields directly:
CREATE OR REPLACE TRIGGER TR_STOCK BEFORE UPDATE OF S_QTY ON STOCK FOR EACH ROW
BEGIN
IF :new.s_qty > :new.p_qty THEN
RAISE_APPLICATION_ERROR(-20400,'SOLD QTY CANNOT EXCEED PURCHASED QTY...');
END IF;
END;
/
You really should consider using a database constraint to implement this logic, in which case you wouldn't need the trigger at all.
ALTER TABLE hr.stock
ADD CONSTRAINT stock_ck1
CHECK (
s_qty <= p_qty
)
Triggers have many drawbacks compared with constraints:
Triggers do not account for existing data rows; constraints can, if you desire.
A FOR EACH ROW trigger has to context-switch between the SQL engine and the PL/SQL engine for every row, which increases the overhead of the INSERT or UPDATE statement running. This adds up as your number of rows increases.
Oracle can use constraints when optimising your SQL statements (it knows that a WHERE clause that violates a CHECK constraint will never return any rows without needing to inspect the rows).
If you're using the trigger to provide an error message, you should really consider moving this into your application logic, with constraints as a safeguard.