Prevent deletion of all rows of a table - sql-server-2008-r2

Is there a way to prevent deletion of all rows in a table? I know how to use DDL triggers to prevent dropping or truncating a table. I once failed to highlight the entire command, leaving off the WHERE clause, and deleted all rows in a table. I want to prevent this from happening again unless a WHERE clause is present.

You could do this with a trigger.
CREATE TRIGGER dbo.KeepRows
ON dbo.TableName
FOR DELETE
AS
BEGIN
    SET NOCOUNT ON;

    IF NOT EXISTS (SELECT 1 FROM dbo.TableName)
    BEGIN
        RAISERROR('Naughty developer, you tried to delete the last row!', 11, 1);
        ROLLBACK TRANSACTION;
    END
END
GO
To avoid performing the delete and then rolling it back, which can be quite expensive if the table is large, you could instead implement this as an INSTEAD OF trigger, which prevents any work from happening (except the check, of course). This assumes the column id is the primary key:
CREATE TRIGGER dbo.KeepRows2
ON dbo.TableName
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM dbo.TableName WHERE id NOT IN (SELECT id FROM deleted))
    BEGIN
        DELETE dbo.TableName WHERE id IN (SELECT id FROM deleted);
    END
    -- optional:
    ELSE
    BEGIN
        RAISERROR('Nice try, you need a WHERE clause!', 11, 1);
    END
END
GO
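To see the behaviour, assuming dbo.TableName holds a handful of rows, something like this should work:
-- A DELETE that still leaves rows behind goes through:
DELETE dbo.TableName WHERE id = 1;

-- A DELETE with no WHERE clause (or one matching every row)
-- is intercepted and raises the error instead:
DELETE dbo.TableName;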


Is it safe to insert data from inside of a CTE expression?

Background
I have a function that generates detail rows and inserts these into a detail table. I want to automatically ensure that the referenced master row is available before inserting the detail rows.
A BEFORE INSERT trigger does the job, but unfortunately the job is done too well. If a detail row is prevented due to a unique index, the master row will still be inserted, leaving me with a master without children (I don't want that).
I managed to solve this by inserting the master rows inside a CTE and then inserting the detail rows in the actual query. This works, but I'm worried that it's not a safe way of doing it.
INSERT rows from inside of a CTE expression
Here is the code to try this out. The input_cte just generates static dummy data in this example. This particular example could be built with separate static INSERT statements, but in my real case the input_cte is dynamic.
Is this a safe way to solve this, or is it out of spec and could it blow up in the next PG version?
CREATE TABLE master (
    id INT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
    PRIMARY KEY (id)
);

CREATE TABLE detail (
    id INT NOT NULL GENERATED BY DEFAULT AS IDENTITY,
    master_id INT,
    PRIMARY KEY (id, master_id)
);

ALTER TABLE detail
    ADD CONSTRAINT detail_master_id_fkey
    FOREIGN KEY (master_id)
    REFERENCES master (id);

WITH input_cte AS (
    SELECT 1 AS master_id, 1 AS detail_id
    UNION SELECT 1, 2
    UNION SELECT 2, 1
),
insert_cte AS (
    -- Ignore conflicts, as the master row could already exist
    INSERT INTO master (id)
    SELECT DISTINCT master_id
    FROM input_cte
    ON CONFLICT DO NOTHING
)
INSERT INTO detail (id, master_id)
SELECT detail_id, master_id
FROM input_cte;

SELECT * FROM master;
SELECT * FROM detail;
Edit
Bummer... I just realised that this method will also insert master rows even if the detail rows are stopped by my unique index.
I see two options. Which should I choose?
1. Check exactly what I can insert before trying to insert, i.e. do the same thing as my unique index does.
2. Go ahead with the above solution or the BEFORE INSERT trigger concept and then clean up unused master rows afterwards with a separate DELETE query.
Edit 2 in response to Haleemur Ali's comment
I agree with Haleemur, but in my case it's a bit more complicated than the small example I created. The unique key on my detail table can actually contain NULL values. The index on my detail table (project_sequence) looks like this to allow NULL values:
CREATE UNIQUE INDEX
project_sequence_unique_combinations ON main.project_sequence
(project_id, controlpoint_type_id, COALESCE(drawing_id, 0), COALESCE(layer_guid, '00000000-0000-0000-0000-000000000000'));
Due to possible NULL values I cannot use these fields in my primary key, so I have a surrogate integer key. I'm calculating these key values in my CTE so they will always be unique, i.e. they can be inserted into the master table (sequence) even if the detail rows get stopped by the unique index.
To clarify, I've included my actual code below. This code works as it should, but it sure would be nice to be able to utilize the new fancy DEFERRED and REFERENCES triggers.
SELECT COALESCE(MAX(id), 0)
INTO _max_sequence_id
FROM main.sequence;

WITH cte AS (
    SELECT
        d.project_id,
        pct.controlpoint_type_id,
        d.id AS drawing_id,
        DENSE_RANK() OVER (ORDER BY d.id, sequence_group_key) + _max_sequence_id + 1 AS new_sequence_id
    FROM main.drawing d
    CROSS JOIN main.project_controlpoint_type pct
    -- This LEFT JOIN along with "ps.project_id IS NULL" is my
    -- current solution, i.e. it's "option 1" from above.
    LEFT JOIN main.project_sequence ps
        ON ps.project_id = d.project_id
        AND ps.drawing_id = d.id
        AND ps.controlpoint_type_id = pct.controlpoint_type_id
    WHERE d.project_id = _project_id
      AND pct.project_id = _project_id
      AND pct.sequence_level_id = 2
      AND ps.project_id IS NULL
),
insert_sequence_cte AS (
    INSERT INTO main.sequence
        (id, project_id, last_value)
    SELECT DISTINCT cte.new_sequence_id, cte.project_id, 0
    FROM cte
    ON CONFLICT DO NOTHING
)
INSERT INTO main.project_sequence
    (project_id, controlpoint_type_id, drawing_id, sequence_id)
SELECT
    project_id,
    controlpoint_type_id,
    drawing_id,
    new_sequence_id
FROM cte;
The manual page on WITH queries states that your use case is legitimate and supported:
You can use data-modifying statements (INSERT, UPDATE, or DELETE) in WITH.
and
... data-modifying statements are only allowed in WITH clauses that are attached to the top-level statement. However, normal WITH visibility rules apply, so it is possible to refer to the WITH statement's output from the sub-SELECT.
Further:
If a data-modifying statement in WITH lacks a RETURNING clause, then it forms no temporary table and cannot be referred to in the rest of the query. Such a statement will be executed nonetheless.
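As a small illustration of that last point, a data-modifying CTE only becomes referenceable in the rest of the query when it has a RETURNING clause. Reusing the master table from above, a minimal sketch:
WITH ins AS (
    INSERT INTO master (id)
    VALUES (100)
    ON CONFLICT DO NOTHING
    RETURNING id
)
SELECT id FROM ins;
-- Returns only the rows that were actually inserted;
-- rows skipped by ON CONFLICT DO NOTHING are not returned.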
If the rule "no master without a detail" is important to your model, you should let the DB enforce it. This frees you from "auto-inserting" master rows, reduces the potential for error, and lets you use conventional methods again.
Take a look at CONSTRAINT TRIGGERS. They allow you to just detect and reject violations of your no-master-without-detail rule while leaving the actual INSERTs to application code.
Your use case would need a CONSTRAINT TRIGGER on your master table that is DEFERRABLE INITIALLY DEFERRED. This allows you to INSERT master, then INSERT detail and still be sure that the transaction only commits if all is consistent.
From the manual linked above:
Constraint triggers must be AFTER ROW triggers on plain tables (not foreign tables). They can be fired either at the end of the statement causing the triggering event, or at the end of the containing transaction; in the latter case they are said to be deferred.
You'll need two triggers, one handling INSERT/UPDATE on master and another one handling DELETE/UPDATE on detail:
CREATE CONSTRAINT TRIGGER trigger_assert_master_has_detail
    AFTER INSERT OR UPDATE OF id ON master
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE assert_master_has_detail();

CREATE CONSTRAINT TRIGGER trigger_assert_no_leftover_master
    AFTER DELETE OR UPDATE OF master_id ON detail
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE assert_no_leftover_master();
Note that UPDATEs will only fire the trigger if they concern the FK/PK column.
The two trigger functions will then check if there are 1-n details for the master:
CREATE FUNCTION assert_master_has_detail() RETURNS trigger
AS
$$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM detail WHERE master_id = NEW.id)
    THEN
        RAISE EXCEPTION 'no detail for master_id=%', NEW.id;
    ELSE
        RETURN NEW;
    END IF;
END;
$$
LANGUAGE plpgsql;

CREATE FUNCTION assert_no_leftover_master() RETURNS trigger
AS
$$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM detail WHERE master_id = OLD.master_id)
       AND EXISTS (SELECT 1 FROM master WHERE id = OLD.master_id)
    THEN
        RAISE EXCEPTION 'last detail for master_id=% removed, but master still exists', OLD.master_id;
    ELSE
        RETURN NULL;
    END IF;
END;
$$
LANGUAGE plpgsql;
Example of a violation:
INSERT INTO master
VALUES (1);
-- ERROR: no detail for master_id=1
-- CONTEXT: PL/pgSQL function assert_master_has_detail() line 5 at RAISE
and a legal scenario (both statements in one transaction, since the trigger is deferred until commit):
BEGIN;
INSERT INTO master
VALUES (1);
INSERT INTO detail
VALUES (10, 1);
COMMIT;
-- trigger fired at end of transaction, finds everything is OK
Here's a complete solution as dbfiddle.
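One extra knob worth knowing: a pending deferred constraint trigger can be forced to fire early with SET CONSTRAINTS, so the violation is reported at the offending statement rather than at COMMIT. A sketch using the trigger above:
BEGIN;
SET CONSTRAINTS trigger_assert_master_has_detail IMMEDIATE;
INSERT INTO master VALUES (2);
-- ERROR: no detail for master_id=2 (raised here, not at COMMIT)
ROLLBACK;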

Simple IBM DB2 Trigger

I'm trying to create a trigger in DB2 (AS400) that inserts/deletes a row in one table when an insert/delete happens on a different table, but I need to use information from the triggering table.
The example I would like is like this (column 1 in table 1 and table 2 is the same, and it is unique in table 2):
CREATE TRIGGER MY_TRIGGER
AFTER INSERT OR DELETE ON DB1.TABLE1
BEGIN
    IF INSERTING
        THEN INSERT INTO DB1.TABLE2 (Col1, Col2) VALUES (Db1.TABLE1.Col1, 0);
    ELSEIF DELETING
        THEN DELETE FROM Db1.TABLE2 WHERE Col1=TABLE1.Col1;
    END IF;
END
But this doesn't work (it doesn't recognize TABLE1.Col1 on insert/delete statements).
Also, it would give me an error (I guess), since it would create a duplicate key when a second row is inserted into Table 1. How can I avoid the error (just skip the insert) when Table2.Col1 already exists?
Try adding correlation names like this:
CREATE TRIGGER MY_TRIGGER
AFTER INSERT OR DELETE ON DB1.TABLE1
REFERENCING OLD ROW AS OLD
            NEW ROW AS NEW
BEGIN
    IF INSERTING
        THEN INSERT INTO DB1.TABLE2 (Col1, Col2) VALUES (NEW.Col1, 0);
    ELSEIF DELETING
        THEN DELETE FROM Db1.TABLE2 WHERE Col1=OLD.Col1;
    END IF;
END
The trigger has access to both the old and new image of a row. You need to tell it which to use. BTW, only the update action populates both the old and new image. Insert only provides the new image, and delete only provides the old image. One might think that SQL could figure that out, but no, you still have to tell it explicitly.
EDIT This is the final trigger actually used (from the comments, thank you @MarçalTorroella):
CREATE TRIGGER MY_TRIGGER
AFTER INSERT OR DELETE ON DB1.TABLE1
REFERENCING OLD ROW AS OLD
            NEW ROW AS NEW
FOR EACH ROW MODE DB2ROW
BEGIN
    DECLARE rowcnt INTEGER;
    IF INSERTING THEN
        SELECT COUNT(*)
        INTO rowcnt
        FROM DB1.TABLE2
        WHERE Col1 = NEW.Col1;
        IF rowcnt = 0 THEN
            INSERT INTO DB1.TABLE2 (Col1, Col2)
            VALUES (NEW.Col1, 0);
        END IF;
    ELSEIF DELETING THEN
        DELETE FROM Db1.TABLE2
        WHERE Col1=OLD.Col1;
    END IF;
END
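As a side note, the COUNT(*) check could likely be folded into the INSERT itself with a NOT EXISTS predicate. A sketch of just the inserting branch, assuming the same tables:
IF INSERTING THEN
    -- Insert only when Col1 is not already present in TABLE2
    INSERT INTO DB1.TABLE2 (Col1, Col2)
    SELECT NEW.Col1, 0
    FROM SYSIBM.SYSDUMMY1
    WHERE NOT EXISTS (SELECT 1 FROM DB1.TABLE2 t WHERE t.Col1 = NEW.Col1);
END IF;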

How can I get my table to populate using a trigger function?

I have a function:
CREATE OR REPLACE FUNCTION delete_student()
RETURNS TRIGGER AS
$BODY$
BEGIN
    IF (TG_OP = 'DELETE')
    THEN
        INSERT INTO cancel(eno, excode, sno, cdate, cuser)
        VALUES ((SELECT entry.eno FROM entry
                 JOIN student ON (entry.sno = student.sno)
                 WHERE entry.sno = OLD.sno),
                (SELECT entry.excode FROM entry
                 JOIN student ON (entry.sno = student.sno)
                 WHERE entry.sno = OLD.sno),
                OLD.sno, current_timestamp, current_user);
    END IF;
    RETURN OLD;
END; $BODY$ LANGUAGE plpgsql;
and I also have the trigger:
CREATE TRIGGER delete_student
BEFORE DELETE
on student
FOR EACH ROW
EXECUTE PROCEDURE delete_student();
The idea is that when I delete a student from the student relation, the corresponding entry in the entry relation is also deleted and my cancel relation is updated.
This is what I put into my student relation:
INSERT INTO student(sno, sname, semail)
VALUES (1, 'a. adedeji', 'ayooladedeji@live.com');
And this is what I put into my entry relation:
INSERT INTO entry(excode, sno, egrade)
VALUES (1, 1, 98.56);
When I execute the command
DELETE FROM student WHERE sno = 1;
it deletes the student and also the corresponding entry, and the query returns with no errors. However, when I run a SELECT on my cancel table, the table shows up empty.
You do not show how the corresponding entry is deleted. If the entry is deleted before the student record, that would explain the problem: the INSERT in the trigger fails because the SELECT statements no longer provide any values to insert. Is the corresponding entry deleted through a cascading delete on student?
Also, your trigger can be much simpler:
CREATE OR REPLACE FUNCTION delete_student() RETURNS trigger AS $BODY$
BEGIN
    INSERT INTO cancel(eno, excode, sno, cdate, cuser)
    SELECT eno, excode, sno, current_timestamp, current_user
    FROM entry
    WHERE sno = OLD.sno;
    RETURN OLD;
END; $BODY$ LANGUAGE plpgsql;
First of all, the function only fires on a DELETE trigger, so you do not have to test for TG_OP. Secondly, in the INSERT statement you never access any data from the student relation so there is no need to JOIN to that relation; the sno does come from the student relation, but through the OLD implicit parameter.
You didn't post your DB schema and it's not very clear what your problem is, but it looks like a cascade delete is interfering somewhere. Specifically:
1. Before deleting the student, you insert something into cancel that references it.
2. Postgres proceeds to delete the row in student.
3. Postgres proceeds to honor all applicable cascade deletes.
4. Postgres deletes rows in entry and ... cancel (including the one you just inserted).
A few remarks:
Firstly, and as a rule of thumb, BEFORE triggers should never have side effects on anything but the row itself. Inserting a row in a BEFORE DELETE trigger is a big no-no: besides introducing potential problems such as Postgres reporting an incorrect FOUND value or incorrect row counts upon completing the query, consider the case where a separate BEFORE trigger cancels the delete altogether by returning NULL. As such, your trigger function should be running on an AFTER trigger -- only at that point can you be sure that the row is indeed deleted.
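Rewiring the question's trigger as an AFTER DELETE trigger would look roughly like this (keeping the delete_student() function name from the question):
DROP TRIGGER IF EXISTS delete_student ON student;

CREATE TRIGGER delete_student
AFTER DELETE ON student
FOR EACH ROW
EXECUTE PROCEDURE delete_student();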
Secondly, you don't need these inefficient, redundant, and ugly-as-sin sub-select statements. Use the insert ... select ... variety of inserts instead:
INSERT INTO cancel(eno, excode, sno, cdate, cuser)
SELECT entry.eno, entry.excode, OLD.sno, current_timestamp, current_user
FROM entry
WHERE entry.sno = OLD.sno;
Thirdly, your trigger should probably be running on the entry table, like so:
INSERT INTO cancel(eno, excode, sno, cdate, cuser)
SELECT OLD.eno, OLD.excode, OLD.sno, current_timestamp, current_user;
Lastly, there might be a few problems in your schema. If there is a unique row in entry for each row in student, and you need information in entry to make your trigger work in order to fill in cancel, it probably means the two tables (student and entry) ought to be merged. Whether you merge them or not, you might also need to remove (or manually manage) some cascade deletes where applicable, in order to enforce the business logic in the order you need it to run.

Trigger that logs changes on every INSERT, DELETE

I tried to create a trigger:
CREATE TRIGGER DataTrigger ON Data AFTER INSERT, DELETE
AS
BEGIN
    IF EXISTS(SELECT * FROM Inserted)
        INSERT INTO [dbo].[AuditTrail]
            ([ActionType]
            ,[TableName]
            ,[Name]
            ,[Time])
        ('INSERT' //Incorrect syntax near 'INSERT'
        ,'Data'
        ,SELECT col1 FROM Inserted
        ,CURRENT_TIMESTAMP //Incorrect syntax near the keyword 'CURRENT_TIMESTAMP')
END
but it keeps saying that I have those errors; can somebody show me where I went wrong?
P.S.: what is the best way to detect an update?
Thank you
CREATE TRIGGER DataTrigger ON Data AFTER INSERT, DELETE
AS
BEGIN
    INSERT INTO [dbo].[AuditTrail]
        ([ActionType]
        ,[TableName]
        ,[Name]
        ,[Time])
    SELECT 'INSERT'
        ,'Data'
        ,col1
        ,CURRENT_TIMESTAMP
    FROM Inserted
END
Just for clarification.
Your query, if syntactically correct, would have failed if more than one row was inserted; the version above allows for multiple inserts.
The IF EXISTS was redundant, which is why it was removed; if there are no rows, there will be no insert into your audit table.
If you want to audit DELETE you'll need a similar statement again, but using the Deleted table rather than Inserted.
To audit UPDATE, create a new trigger; for each updated row you get an entry in Inserted with the new data and an entry in Deleted with the old data, and you can join these if you want to track old and new.
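A rough sketch of such an UPDATE trigger, reusing the table and column names from the question (the join column is assumed to be the key; adjust it to the real key columns):
CREATE TRIGGER DataUpdateTrigger ON Data AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO [dbo].[AuditTrail] ([ActionType], [TableName], [Name], [Time])
    SELECT 'UPDATE', 'Data', i.col1, CURRENT_TIMESTAMP
    FROM Inserted AS i
    JOIN Deleted AS d
        ON i.col1 = d.col1;  -- assumes col1 identifies the row
END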
CREATE TRIGGER DataTrigger ON Data AFTER INSERT, DELETE
AS
BEGIN
    INSERT INTO [dbo].[AuditTrail]
        ([ActionType]
        ,[TableName]
        ,[Name]
        ,[Time])
    SELECT 'INSERT', 'Data', col1, CURRENT_TIMESTAMP FROM Inserted
END
I am not entirely sure what you are trying to accomplish here, so I have just corrected the syntax to get it to work, but it should help. I have also tried to handle the delete case.
CREATE TRIGGER DataTrigger
ON Data
AFTER INSERT, DELETE
AS
BEGIN
    INSERT INTO [dbo].[AuditTrail] ([ActionType],
                                    [TableName],
                                    [Name],
                                    [Time])
    SELECT 'INSERT', 'Data', col1, CURRENT_TIMESTAMP
    FROM Inserted

    INSERT INTO [dbo].[AuditTrail] ([ActionType],
                                    [TableName],
                                    [Name],
                                    [Time])
    SELECT 'DELETE', 'Data', col1, CURRENT_TIMESTAMP
    FROM Deleted
END

After Insert Trigger....Looping

What would the processing load concern be if I had an "After Insert" trigger created on a table and in that trigger I performed a While loop to iterate through "potentially" multiple rows?
The end result is that 99.999% of the time I will have only one row, but as the future is unpredictable, I also want to be able to handle multiple rows being inserted.
Trigger Model:
1) Insert information into the table
2) Create views specific to the client, via stored procedures (if possible)
What Say You? :)
I haven't fully developed it, but this is the design I am looking for; it may not be structurally sound, but it should get the point across.
CREATE TRIGGER dbo.New_Client_Setup
ON dbo.client
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @id int, @clnt nvarchar(10);

    -- Fill temp table
    SELECT * INTO #clients
    FROM inserted;

    -- Iterate through temp table
    WHILE (SELECT COUNT(*) FROM #clients) <> 0
    BEGIN
        SELECT TOP(1)
            @id = id
            , @clnt = short
        FROM #clients
        ORDER BY id DESC;

        EXECUTE dbo.sp_Create_View_Client @id, @clnt;

        -- Drop used ID
        DELETE FROM #clients
        WHERE id = @id;
    END

    DROP TABLE #clients;
END
GO
Again, observe the design of the trigger, not necessarily the syntactic sugar.
Design-wise, reading the comments, I think you do not necessarily need to do this in a trigger. I would say you should do it as part of your insert statement, in a transaction: do the insert, and then do the loop that you want to do (whatever that does; execute dbo.sp_Create_View_Client)...
The second thing I would mention is: what exactly is dbo.sp_Create_View_Client doing, and does it strictly depend on the insert? Meaning, what happens if the insert works fine and the trigger fails? I would maybe do the whole insert and the execution of the SP all in one transaction, so as to preserve data integrity.
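A minimal sketch of that transactional approach, reusing the names from the question (the client columns and sample values are placeholders):
DECLARE @new_id int = 42, @new_clnt nvarchar(10) = N'ACME';

BEGIN TRANSACTION;

-- Insert the new client (column names assumed for illustration)
INSERT INTO dbo.client (id, short)
VALUES (@new_id, @new_clnt);

-- Run the per-client setup as part of the same unit of work,
-- instead of relying on an AFTER INSERT trigger
EXECUTE dbo.sp_Create_View_Client @new_id, @new_clnt;

COMMIT TRANSACTION;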