SQL Server trigger not inserting row

I work with SQL Server 2008 R2.
I have a simple trigger:
CREATE TRIGGER [dbo].[T_Personne_ITrig] ON [dbo].[Personne] FOR INSERT AS
BEGIN
SET NOCOUNT ON
insert into syn_HistoriquePersonne
(hpers_Timestamp, Supprime, ID, Nom, Prenom, Champ1,
Champ2, Champ3, Champ4, SiteAssocie)
select GETDATE(), 0, ID, Nom, Prenom, Champ1, Champ2, Champ3,
Champ4, SiteAssocie
from inserted
END
It works properly. The problem is, I work on a program with a horrible code base, so my boss doesn't want the trigger to ever cause a rollback on the table Personne, even if the trigger itself fails. I know it's really improbable, but he's scared of timeouts in case of huge database activity... ANYWAY
So I read up on committing inside triggers, and changed the trigger to:
CREATE TRIGGER [dbo].[T_Personne_ITrig] ON [dbo].[Personne] FOR INSERT AS
BEGIN
SET NOCOUNT ON
COMMIT
insert into syn_HistoriquePersonne
(hpers_Timestamp, Supprime, ID, Nom, Prenom, Champ1,
Champ2, Champ3, Champ4, SiteAssocie)
select GETDATE(), 0, ID, Nom, Prenom, Champ1, Champ2, Champ3,
Champ4, SiteAssocie
from inserted
END
But the trigger kept raising the message:
Transaction stopped in trigger, batch aborted.
So I changed it to this:
CREATE TRIGGER [dbo].[T_Personne_ITrig] ON [dbo].[Personne] FOR INSERT AS
BEGIN
SET NOCOUNT ON
COMMIT
BEGIN TRAN
insert into syn_HistoriquePersonne
(hpers_Timestamp, Supprime, ID, Nom, Prenom, Champ1,
Champ2, Champ3, Champ4, SiteAssocie)
select GETDATE(), 0, ID, Nom, Prenom, Champ1, Champ2, Champ3,
Champ4, SiteAssocie
from inserted
END
The batch-aborted error stopped, but it never seems to insert anything into my history table... From what I've read, this should work. But it doesn't...
Has anyone else run into this problem, and how can I fix it?
I am testing the trigger with a simple INSERT INTO.
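Something like this (the values and column types here are made up; the real Personne schema isn't shown):
INSERT INTO Personne (ID, Nom, Prenom, Champ1, Champ2, Champ3, Champ4, SiteAssocie)
VALUES (1, 'Dupont', 'Jean', 'a', 'b', 'c', 'd', 1)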

Unfortunately you're barking up the wrong tree. You can't play with transactions in that way.
If the only concern was failed inserts, you could simply code a check around your insert, or just be very thorough in ensuring the constraints on your table accurately reflect its use. But as you're also concerned about the duration of the process causing a command time-out, this won't cover you entirely (in fact it'll make that time-out very slightly more likely).
The only approach that I can see working is to massively simplify the insert statement and insert something (all the data, or just the timestamp and ID?) into a holding table that has no constraints or indexes. You would then need a server-side process, called repeatedly, to work through your holding table.
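For illustration, a minimal sketch of that idea (the holding table's name and columns are assumptions, as is using a SQL Agent job for the follow-up processing):
-- bare holding table: no constraints, no indexes, so the insert can hardly fail
CREATE TABLE syn_HistoriquePersonne_Holding (hpers_Timestamp datetime, ID int)
GO
-- the trigger now does only the cheapest possible insert
ALTER TRIGGER [dbo].[T_Personne_ITrig] ON [dbo].[Personne] FOR INSERT AS
BEGIN
SET NOCOUNT ON
INSERT INTO syn_HistoriquePersonne_Holding (hpers_Timestamp, ID)
SELECT GETDATE(), ID FROM inserted
END
GO
-- a SQL Agent job would then run periodically, moving rows from the holding
-- table into syn_HistoriquePersonne and deleting them afterwards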
As your case seems to be just maintaining a history log, perhaps an option could be as simple as removing all constraints from the historique table. All the solutions are a bit dirty, but then the requirement from your boss is a bit unusual; the real answer should be capacity planning, in my opinion.
I don't know if that fits with your real world scenario, but I hope it helps.

Related

How can I do a conditional insert in Postgres when there can be concurrent inserts that create a conflict?

I am trying to write an experimentation framework where users can schedule experiments based on location IDs and time.
My table schema looks like:
TABLE experiment (
id INT NOT NULL PRIMARY KEY,
name varchar(20) NOT NULL,
locationIds varchar[] NOT NULL,
timeStart timestamp NOT NULL,
timeEnd timestamp NOT NULL,
createdAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
updatedAt timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
)
Insert operations must satisfy the condition that the locations and time do not overlap with an already-scheduled experiment.
I want to know what can be done to avoid inconsistency of the data state when there are 2 concurrent inserts whose location OR time overlap.
Ideally I want one of the inserts to succeed, but I am fine if both fail and the application has to retry.
A few approaches I have thought of:
APPROACH-1
1. Have an enable column that tells whether a given entry is valid or not.
2. Insert the experiment schedule entry with enable=FALSE.
3. Check whether there is any other entry which is enabled and overlaps with the current insert.
4. If there is such an entry, do nothing and the experiment is not scheduled; else update the entry to enable=TRUE.
Problem: if there is a concurrent conflicting insert, both will end up with enable=TRUE, because both clear step 3.
I also considered setting the transaction isolation level to READ UNCOMMITTED, but even then I can't differentiate the entries still in progress from the ones already at enable=TRUE.
Then I thought: if I make enable an enum [IN_PROGRESS, ENABLED, DISABLED], the approach looks like this.
APPROACH-2
1. Have an enable column that marks an entry as [IN_PROGRESS, ENABLED, DISABLED].
2. Insert the experiment schedule entry with enable=IN_PROGRESS.
3. Check whether there is any other entry with enable=ENABLED or enable=IN_PROGRESS that overlaps with the current insert.
4. If there is such an entry, update this entry to enable=DISABLED and the experiment is not scheduled; else update it to enable=ENABLED.
Problem: if there is a concurrent conflicting insert, both may end up with enable=DISABLED, because both clear step 3 and then see each other's overlapping entry.
With READ COMMITTED isolation this only works if each step is its own transaction, rather than the whole process being one transaction.
With READ UNCOMMITTED the whole process can be one transaction, with the DISABLED state doubling as the rollback step.
APPROACH-3
Trigger-based solution: since I am using Postgres, I can add a trigger for each insert operation that checks for an overlapping entry and, if there is none, sets the row to enable=TRUE.
CREATE OR REPLACE FUNCTION enable_if_unique()
RETURNS TRIGGER AS $$
BEGIN
IF (TG_OP = 'INSERT') THEN
-- enable the new row only if no already-enabled row overlaps it
IF NOT EXISTS (
SELECT 1
FROM experiment
WHERE enable = true AND locationIds && NEW.locationIds
AND (NEW.timeStart, NEW.timeEnd) OVERLAPS (timeStart, timeEnd)
) THEN
NEW.enable := true;
END IF;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER enable_if_unique_trigger BEFORE INSERT ON experiment FOR EACH ROW EXECUTE PROCEDURE enable_if_unique();
I am not sure about approach 3, because I feel it requires the trigger to act serially for each insert operation, so that one experiment is actually enabled while the rest of the overlapping ones are disabled.
APPROACH-4
From searching online for other possible solutions, I see inserts done via a SELECT statement, with the WHERE clause carrying the required condition.
INSERT INTO experiment(id, name, locationIds, timeStart, timeEnd)
SELECT 1, 'exp-1', ARRAY['123','234','345'], '2020-03-13 12:00:00', '2020-03-13 13:00:00'
WHERE (
SELECT count(1)
FROM experiment
WHERE enable = true
AND locationIds && ARRAY['123','234','345']
AND (timestamp '2020-03-13 12:00:00', timestamp '2020-03-13 13:00:00') OVERLAPS (timeStart, timeEnd)
) = 0;
I feel there is still a possibility of a consistency issue, as the two concurrent operations will not be able to see each other's uncommitted row in the SELECT statement that checks the constraint.
Final approach: APPROACH-2.
I would like to know the following:
Which approach is best in terms of scalability and high throughput?
Which approach actually makes sure data consistency is maintained?
Any other approach that I could have used but missed here?
I'm a newbie to Postgres and will appreciate examples or links.
As mentioned by @a_horse_with_no_name,
we can use an exclusion constraint:
-- this prevents overlaps in the locationids AND the time range
alter table experiment
add constraint no_overlap
exclude using gist (locationids with &&, tsrange(timestart, timeend) with &&);
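A quick illustration of the constraint at work (note: this assumes a GiST operator class is available for the array column, e.g. integer arrays with the intarray extension; plain varchar[] has no default GiST opclass):
-- first insert succeeds
INSERT INTO experiment (id, name, locationIds, timeStart, timeEnd)
VALUES (1, 'exp-1', ARRAY[123,234], '2020-03-13 12:00', '2020-03-13 13:00');
-- overlapping locations AND overlapping time range: rejected
INSERT INTO experiment (id, name, locationIds, timeStart, timeEnd)
VALUES (2, 'exp-2', ARRAY[234,345], '2020-03-13 12:30', '2020-03-13 13:30');
-- ERROR: conflicting key value violates exclusion constraint "no_overlap"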

Lock row, release later

I'm trying to understand how to lock a row, and only release that lock later.
I have a table like this:
create table testTable (Name varchar(100));
Some test data:
insert into testTable (name) select 'Bob';
insert into testTable (name) select 'John';
insert into testTable (name) select 'Steve';
Now, I want to select one of those rows and prevent other queries from seeing it. I achieve that like this:
begin transaction;
select * from testTable where name = 'Bob' for update;
In another window, I do this:
select * from testTable for update skip locked;
Great, I don't see 'Bob' in that result set. Now I want to do something with the retrieved row (Bob), and after I've done my work, release that row again. The simple answer would be to do:
commit transaction
However, I am running multiple transactions on the same connection, so I can't just begin and commit transactions all over the show. Ideally I would like to have a "named" transaction, something like:
begin transaction 'myTransaction';
select * from testTable where name = 'Bob' for update;
//do stuff with the data, outside sql then later call ...
commit transaction 'myTransaction';
But Postgres doesn't support that. I have found "prepare transaction", but that seems to be a pear-shaped path I don't want to go down, especially as those transactions even persist through server restarts.
Is there anyway I can have a reference to commit/rollback for a specific transaction?
You can have only one transaction in a database session, so the question as such is moot.
But I assume that you do not really want to run a transaction, you want to block access to a certain row for a while.
It is usually not a good idea to use regular database locks for such a purpose (the exception are advisory locks, which serve exactly that purpose, but are not tied to table rows). The problem is that long database transactions keep autovacuum from doing its job.
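For completeness, a tiny sketch of the advisory-lock route (the key 42 is an arbitrary number the application chooses; session-level advisory locks are independent of transaction boundaries):
SELECT pg_advisory_lock(42);   -- "lock" taken at session level
-- ... do your work, possibly spanning several transactions ...
SELECT pg_advisory_unlock(42); -- release it explicitly later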
I recommend that you add a status column to the table and change the status rather than locking the row. That would serve the same purpose in a more natural fashion and make your problem go away.
If you are concerned that the status flag might not get cleared due to application logic problems, replace it with a visible_from column of type timestamp with time zone that initially contains -infinity. Instead of locking the row, set the value to current_timestamp + INTERVAL '5 minutes'. Only select rows that fulfill WHERE visible_from < current_timestamp. That way the “lock” will automatically expire after 5 minutes.
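A minimal sketch of that idea, using testTable from the question:
ALTER TABLE testTable ADD COLUMN visible_from timestamp with time zone NOT NULL DEFAULT '-infinity';
-- "lock": hide the row for five minutes
UPDATE testTable SET visible_from = current_timestamp + INTERVAL '5 minutes' WHERE name = 'Bob';
-- readers only see rows whose "lock" has expired
SELECT * FROM testTable WHERE visible_from < current_timestamp;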

AFTER INSERT trigger causes query execution to hang up

In an MS SQL database I have a table named combo where multiple inserts, updates and deletes can happen (as well as single ones, of course). In another table named migrimi_temp I keep track of these changes in the form of queries (the query that would have to be executed in MySQL to achieve the same result).
For example, if a delete query is performed for all rows where id > 50, the trigger should activate to store the following query into the log table:
DELETE FROM combo where id > 50;
Therefore this one delete query in the combo table would result in one row in the log table.
But if instead I have an insert query inserting 2 rows, the trigger should activate to store each insert into the log table. So this one insert query on the combo table would result in 2 new rows in the log table.
I intend to handle insert, update and delete actions in separate triggers. I had managed to write triggers for single-row insert / update / delete. Then it occurred to me that multi-row statements might be performed too.
This is my attempt to handle the case of multiple inserts in one single query. I resorted to a cursor after not being able to adapt the initial trigger without one. The trigger is created successfully, but when I perform an insert (single or multiple rows) the execution hangs up indefinitely, or at least longer than reasonable.
USE [migrimi_test]
GO
/****** Object: Trigger [dbo].[c_combo] Script Date: 12/11/2017 5:33:46 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create TRIGGER [dbo].[u_combo]
ON [migrimi_test].[dbo].[combo]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON
DECLARE @c_id INT;
DECLARE @c_name nvarchar(100);
DECLARE @c_duration int;
DECLARE @c_isavailable INT;
DECLARE c CURSOR FOR
SELECT id, name, duration, isvisible FROM inserted
OPEN c
FETCH NEXT FROM c INTO @c_id, @c_name, @c_duration, @c_isavailable
WHILE @@FETCH_STATUS = 0
INSERT INTO [migrimi_temp].[dbo].[sql_query] (query)
VALUES ('INSERT INTO combo (id, name, duration, value, isavailable, createdAt, updatedAt) values ('+CAST(@c_id as nvarchar(50))+', '+'"'+@c_name+'"'+',
'+CAST(@c_duration as nvarchar(50))+', 1, '+CAST(@c_isavailable as nvarchar(50))+', Now(), Now());' )
FETCH NEXT FROM c INTO @c_id, @c_name, @c_duration, @c_isavailable
CLOSE c
END
DEALLOCATE c
GO
SQL Server version is 2012. The OS is Windows Server 2008 (though I doubt that is relevant). I based this mainly on these two resources: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/40f5635c-9034-4e9b-8fd5-c02cec44ce86/how-to-let-trigger-act-for-each-row?forum=sqlgetstarted
and How can I get a trigger to fire on each inserted row during an INSERT INTO Table (etc) SELECT * FROM Table2?
This is part of a larger idea I am trying to accomplish, and until 2 days ago I was totally unfamiliar with triggers. I am trying to balance learning with getting things done in a reasonable amount of time, but so far I'm not doing so great.
Cursors are notoriously slow in SQL Server. Worse, as written your trigger loops forever: without a BEGIN...END block, the body of the WHILE is just the single INSERT statement, so the FETCH NEXT after it is never reached inside the loop and @@FETCH_STATUS never changes. That is why your insert hangs.
Instead of using a cursor to loop over the inserted table, you can use INSERT ... SELECT, which is a set-based approach. It is much faster and is the recommended way to work in SQL:
CREATE TRIGGER [dbo].[u_combo]
ON [migrimi_test].[dbo].[combo]
AFTER INSERT
AS
BEGIN
INSERT INTO [migrimi_temp].[dbo].[sql_query] (query)
SELECT 'INSERT INTO combo (id, name, duration, value, isavailable, createdAt, updatedAt) values ('+CAST(id as nvarchar(50))+', "'+ name +'",
'+ CAST(duration as nvarchar(50)) +', 1, '+ CAST(isvisible as nvarchar(50))+ ', Now(), Now());'
FROM inserted
END
GO
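A quick sanity check (hypothetical test data; combo is assumed to have the columns the trigger reads):
INSERT INTO combo (id, name, duration, isvisible)
VALUES (1, N'alpha', 10, 1), (2, N'beta', 20, 0)
-- expect one generated INSERT statement per inserted combo row
SELECT query FROM [migrimi_temp].[dbo].[sql_query]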

Trigger Code on a table in my ERP Database

My ERP Vendor has the following trigger on a table:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER [dbo].[SOItem_DeleteCheck]
ON [dbo].[soitem]
FOR DELETE
AS
BEGIN
DECLARE @RecCnt int, @LogInfo varchar(256)
SET @RecCnt = (SELECT COUNT(*) FROM deleted)
IF @RecCnt > 150
BEGIN
RAISERROR (54010, 18, 1, 'SOItem') WITH LOG
ROLLBACK TRANSACTION
END
SET @LogInfo = 'Deleting ' + LTRIM(STR(@RecCnt)) + ' Rows From SOItem'
EXEC LogDeletes @LogInfo
END
GO
This seems very inefficient to me. Doesn't SELECT COUNT(*) take longer than COUNT(some_specific_field)?
Honestly, even if it is slower, I can run a statement like that in less than a millisecond on my largest table, which has millions of rows that this trigger is unlikely to hit. There is no real performance gain from changing it. I'm curious as to why you would want to roll back any transaction with more than 150 records, though.
I think in the past there was a benefit to count(1) vs count(*), and we were all taught to use that approach, but at this point it's more about style than performance.

How do I cancel a Delete in SQL

I want to create a trigger to check what is being deleted against business rules and then cancel the deletion if needed. Any ideas?
Use an INSTEAD OF DELETE trigger (see MSDN) and decide within the trigger what you really want to do.
The solution used the INSTEAD OF DELETE trigger. The ROLLBACK TRAN stopped the delete. I was afraid that I would have a cascade issue when I did the delete, but that didn't seem to happen. Maybe a trigger cannot trigger itself. Anyhow, thanks all for your help.
ALTER TRIGGER [dbo].[tr_ValidateDeleteForAssignedCalls]
on [dbo].[CAL]
INSTEAD OF DELETE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @RecType VARCHAR(1)
DECLARE @UserID VARCHAR(8)
DECLARE @CreateBy VARCHAR(8)
DECLARE @RecID VARCHAR(20)
SELECT @RecType = (SELECT RecType FROM DELETED)
SELECT @UserID = (SELECT UserID FROM DELETED)
SELECT @CreateBy = (SELECT CreateBy FROM DELETED)
SELECT @RecID = (SELECT RecID FROM DELETED)
-- Check to see if the type is a Call and the item was created by a different user
IF @RecType = 'C' AND NOT (@UserID = @CreateBy)
BEGIN
RAISERROR ('Cannot delete call.', 16, 1)
ROLLBACK TRAN
RETURN
END
-- Go ahead and do the update or some other business rules here
ELSE
DELETE FROM CAL WHERE RecID = @RecID
END
The trigger can roll back the current transaction, which will have the effect of cancelling the deletion. As the poster above also states, you can also use an instead of trigger.
According to the MSDN documentation about INSTEAD OF DELETE triggers: "The deleted table sent to a DELETE trigger contains an image of the rows as they existed before the DELETE statement was issued."
If I understand it correctly the DELETE is actually being executed. What am I missing?
Anyway, I don't understand why you would want to delete the records and then, if the business rules are not passed, undelete them. I would have sworn it should be easier to test whether the business rules pass before deleting the records.
And I would have said to use a transaction; I hadn't heard about INSTEAD OF triggers before.
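For what it's worth, a sketch of that check-first idea against the CAL table from the accepted answer (@RecID is a hypothetical parameter supplied by the caller):
DECLARE @RecID VARCHAR(20) = 'X123' -- hypothetical ID of the row to delete
IF EXISTS (SELECT 1 FROM CAL
           WHERE RecID = @RecID AND RecType = 'C' AND UserID <> CreateBy)
RAISERROR ('Cannot delete call.', 16, 1)
ELSE
DELETE FROM CAL WHERE RecID = @RecID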