I have a trigger I am developing, and I want it to handle updates in the sense of the UPDATE statement; I'm not so worried about DELETE or INSERT statements at the moment.
The end result is that I have two status fields, active and inactive, which are bit and datetime respectively. The active field is used to invalidate a record (manually, for the time being), and I want to make sure that the inactive field contains a value whenever active changes.
Ideally, I would like to check whether active = 0 and, if so, place a datetime stamp, using GETDATE(), in the inactive field for the record. Long-term I'm sure I will want to validate that the inactive field doesn't already have a datetime stamp, but for now overriding it is fine. I would also like it to check whether active = 1 and clear out the inactive field if any value exists.
The inactive field is nullable, so there won't be a conflict.
Trigger Definition, as of now:
CREATE TRIGGER dbo.TRG_tblRegistration_Inactive
ON dbo.tblRegistration
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(active)
    BEGIN
        UPDATE r
        SET inactive = GETDATE()
        FROM tblRegistration r
        JOIN inserted i ON r.id = i.id
        WHERE i.active = 0;
    END
END
I have a pretty good understanding of the inserted & deleted trigger tables.
Question:
Is there a better logic pattern within the trigger that will get me to the ultimate desired goal?
Are there any considerations to be had if they submit an inactive value along with active = 0 or active = 1?
Current Form of the Trigger:
ALTER TRIGGER dbo.TRG_tblRegistration_Inactive
ON dbo.tblRegistration
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(active)
    BEGIN
        UPDATE r
        SET inactive = GETDATE()
        FROM tblRegistration r
        JOIN inserted i ON r.id = i.id
        WHERE i.active = 0;

        UPDATE r
        SET inactive = NULL
        FROM tblRegistration r
        JOIN inserted i ON r.id = i.id
        WHERE i.active = 1;
    END
END
It works and does as expected, but I want to make sure that, first, there isn't a better way of doing this (at the database side or in the trigger).
I think that it'd be better to change active to a (persisted) computed column, with this definition:
CASE WHEN inactive IS NULL THEN 1 ELSE 0 END
Then you don't need any triggers at all, and you can just set inactive = GETDATE() to accomplish the purpose.
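A minimal sketch of that change, assuming the table and column names from the question (untested; any existing constraints or indexes on active would have to be dropped first):

ALTER TABLE dbo.tblRegistration DROP COLUMN active;

ALTER TABLE dbo.tblRegistration
    ADD active AS CASE WHEN inactive IS NULL THEN 1 ELSE 0 END PERSISTED;

-- Deactivate a record (the timestamp doubles as the flag):
UPDATE dbo.tblRegistration SET inactive = GETDATE() WHERE id = 42;

-- Reactivate it:
UPDATE dbo.tblRegistration SET inactive = NULL WHERE id = 42;

This way the two fields can never disagree, because one is derived from the other.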
Nonetheless, the way that your trigger is currently written (in both versions), the inactive column is going to be updated to the current date even if I run a query like this:
UPDATE tblRegistration
SET active = 0
WHERE active = 0
If you only want the date to be updated if the field changes from 1 to 0, then your update statement in the trigger would need to be:
UPDATE tblRegistration
SET inactive = GETDATE()
FROM inserted i
INNER JOIN deleted d ON i.id = d.id
WHERE tblRegistration.id = i.id AND i.active = 0 AND d.active = 1
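Putting both transitions together, the whole trigger might look like this (a sketch following the same inserted/deleted pattern, untested):

ALTER TRIGGER dbo.TRG_tblRegistration_Inactive
ON dbo.tblRegistration
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(active)
    BEGIN
        -- 1 -> 0: stamp the deactivation time
        UPDATE r
        SET inactive = GETDATE()
        FROM tblRegistration r
        JOIN inserted i ON r.id = i.id
        JOIN deleted d ON d.id = i.id
        WHERE i.active = 0 AND d.active = 1;

        -- 0 -> 1: clear the stamp
        UPDATE r
        SET inactive = NULL
        FROM tblRegistration r
        JOIN inserted i ON r.id = i.id
        JOIN deleted d ON d.id = i.id
        WHERE i.active = 1 AND d.active = 0;
    END
END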
Related
I'm relatively new to triggers, so forgive me if this doesn't look how it should. I am creating a trigger that checks a user account for its last payment date and sets a value to 0 if they haven't paid in a while. I created what I thought was a correct trigger, but I get the error "error during execution of trigger" when it's triggered. From what I understand, the select statement is causing the error, as it is selecting values which are in the process of being changed. Here is my code.
CREATE OR REPLACE TRIGGER t
BEFORE UPDATE OF LASTLOGINDATE
ON USERS
FOR EACH ROW
DECLARE
    USER_CHECK NUMBER;
    PAYMENTDATE_CHECK DATE;
    ISACTIVE_CHECK CHAR(1);
BEGIN
    SELECT U.USERID, U.ISACTIVE, UP.PAYMENTDATE
    INTO USER_CHECK, PAYMENTDATE_CHECK, ISACTIVE_CHECK
    FROM USERS U JOIN USERPAYMENTS UP ON U.USERID = UP.USERID
    WHERE UP.PAYMENTDATE < TRUNC(SYSDATE-60);

    IF ISACTIVE_CHECK = 1 THEN
        UPDATE USERS U
        SET ISACTIVE = 0
        WHERE U.USERID = USER_CHECK;

        INSERT INTO DEACTIVATEDUSERS
        VALUES (USER_CHECK, SYSDATE);
    END IF;
END;
From what I thought, since the select is in the BEGIN block, it would run before the update; nothing would be changing about the tables until after it runs through the trigger. I tried using :old in front of the select variables, but that doesn't seem to be the right use.
And here is the update statement I was trying:
UPDATE USERS
SET LASTLOGINDATE = SYSDATE
WHERE USERID = 5;
Some issues:
The select you do in the trigger sets the variable isactive_check to a payment date, and vice versa. There is an accidental switch there, which will have a negative effect on the next if;
The same select should return exactly one record, which by the looks of it is not guaranteed, since you join with table userpayments, which may have several payments for the selected user that meet the condition, or none at all. Change that select to do an aggregation.
If a user has more than one payment record, the condition might be true for one, but not for another. So if you are interested only in users who have not paid in a long while, such a user should not be included, even though they have an old payment record. Instead you should check whether all records meet the condition. This you can do with a HAVING clause (sketched below).
As the table users is mutating (the update trigger is on that table), you cannot query or modify that same table inside the row-level trigger; Oracle raises a mutating-table error (ORA-04091). This means you need to rethink what the purpose of the trigger is. As this is about an update for a specific user, you actually don't need to check the whole table, but only the record that is being changed. For that you can use the special :new record.
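For illustration, that HAVING form might look like this (a standalone sketch querying only userpayments; it is not what goes in the trigger, since the per-row fix below avoids querying users entirely):

SELECT UP.USERID
FROM USERPAYMENTS UP
GROUP BY UP.USERID
HAVING MAX(UP.PAYMENTDATE) < TRUNC(SYSDATE-60)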
I would suggest this SQL instead:
SELECT MAX(UP.PAYMENTDATE)
INTO PAYMENTDATE_CHECK
FROM USERPAYMENTS UP
WHERE UP.USERID = :NEW.USERID;
and then continue with the checks:
IF :NEW.ISACTIVE = 1 AND PAYMENTDATE_CHECK < TRUNC(SYSDATE-60) THEN
    :NEW.ISACTIVE := 0;
    INSERT INTO DEACTIVATEDUSERS (USER_ID, DEACTIVATION_DATE)
    VALUES (:NEW.USERID, SYSDATE); -- :NEW.USERID replaces the now-unused USER_CHECK variable
END IF;
Now you have avoided doing anything in the table users and have made the checks and modification via the :new "record".
Also, it is good practice to mention the column names in an insert statement, which I have done in the code above (adapt column names as needed).
Make sure the trigger is compiled and produces no compilation errors.
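Assembled, the corrected trigger might look like this (a sketch, untested; the DEACTIVATEDUSERS column names are placeholders to adapt):

CREATE OR REPLACE TRIGGER t
BEFORE UPDATE OF LASTLOGINDATE
ON USERS
FOR EACH ROW
DECLARE
    PAYMENTDATE_CHECK DATE;
BEGIN
    -- Look only at the payments of the row being updated; this avoids
    -- touching the mutating table USERS.
    SELECT MAX(UP.PAYMENTDATE)
    INTO PAYMENTDATE_CHECK
    FROM USERPAYMENTS UP
    WHERE UP.USERID = :NEW.USERID;

    -- MAX() over zero rows yields NULL, so users without payments are skipped.
    IF :NEW.ISACTIVE = 1 AND PAYMENTDATE_CHECK < TRUNC(SYSDATE-60) THEN
        :NEW.ISACTIVE := 0;
        INSERT INTO DEACTIVATEDUSERS (USER_ID, DEACTIVATION_DATE)
        VALUES (:NEW.USERID, SYSDATE);
    END IF;
END;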
I have a database purge process that uses a stored procedure to delete records from a huge table based on expire date; it runs every 3 weeks and deletes about 3 million records.
Currently it is taking about 5 hours to purge the data, which is causing a lot of problems. I know there are more efficient ways to write the code, but I'm out of ideas; please point me in the right direction.
--Stored Procedure
CREATE PROCEDURE [dbo].[pa_Expire_StoredValue_By_Date]
    @ExpireDate DATETIME, @NumExpired INT OUTPUT, @RunAgain INT OUTPUT
AS
-- This procedure expires all the StoredValue records that have an ExpireDate less than or equal to the DeleteDate provided
-- and have QtyUsed < QtyEarned
-- invoked by DBPurgeAgent
DECLARE @NumRows INT
SET NOCOUNT ON
BEGIN TRY
    BEGIN TRAN T1
    SET @RunAgain = 1;

    SELECT @NumRows = COUNT(*) FROM StoredValue WHERE ExpireACK = 1;
    IF @NumRows = 0
    BEGIN
        SET ROWCOUNT 1800; -- only flag 1800 records at a time
        UPDATE StoredValue WITH (ROWLOCK)
        SET ExpireACK = 1
        WHERE ExpireACK = 0
          AND ExpireDate < @ExpireDate
          AND QtyEarned > QtyUsed;
        SET @NumExpired = @@ROWCOUNT;
        SET ROWCOUNT 0
    END
    ELSE
    BEGIN
        SET @NumExpired = @NumRows;
    END

    IF @NumExpired = 0
    BEGIN -- stop processing when there are no rows left
        SET @RunAgain = 0;
    END
    ELSE
    BEGIN
        INSERT INTO SVHistory (LocalID, ServerSerial, SVProgramID, CustomerPK, QtyUsed, Value, ExternalID, StatusFlag, LastUpdate, LastLocationID, ExpireDate, TotalValueEarned, RedeemedValue, BreakageValue, PresentedCustomerID, PresentedCardTypeID, ResolvedCustomerID, HHID)
        SELECT
            SV.LocalID, SV.ServerSerial, SV.SVProgramID, SV.CustomerPK,
            (SV.QtyEarned - SV.QtyUsed) AS QtyUsed, SV.Value, SV.ExternalID,
            3 AS StatusFlag, GETDATE() AS LastUpdate,
            -9 AS LastLocationID, SV.ExpireDate, SV.TotalValueEarned,
            0 AS RedeemedValue,
            ((SV.QtyEarned - SV.QtyUsed) * SV.Value * ISNULL(SVUOM.UnitOfMeasureLimit, 1)) AS BreakageValue,
            PresentedCustomerID, PresentedCardTypeID,
            ResolvedCustomerID, HHID
        FROM StoredValue AS SV WITH (NOLOCK)
        LEFT JOIN SVUnitOfMeasureLimits AS SVUOM ON SV.SVProgramID = SVUOM.SVProgramID
        WHERE SV.ExpireACK = 1;

        DELETE FROM StoredValue WITH (ROWLOCK) WHERE ExpireACK = 1;
    END
    COMMIT TRAN T1;
END TRY
BEGIN CATCH
    SET @RunAgain = 0;
    IF @@TRANCOUNT > 0
    BEGIN
        ROLLBACK TRAN T1;
    END

    DECLARE @ErrorMessage NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState INT;

    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();

    RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState);
END CATCH
The logic you're running makes no sense to me. It looks like you are batching by rerunning the stored proc over and over again. You really should just do it in a WHILE loop and use smaller batches within a single run of the stored proc. You should also run in smaller transactions; this will speed things up considerably. Arguably, the way this is written, you don't need a transaction at all; you can resume safely, since you are flagging every record.
It's also not clear why you are touching the table 3 times. You shouldn't need to update a flag AND select the rows into a new table AND then delete them. You could just use an OUTPUT clause to do this in one step if desired (a rough sketch below), but you would need to clarify your logic to get help on that.
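For illustration, the one-step OUTPUT version might look roughly like this (untested; it assumes the same columns as the original INSERT...SELECT, folds the lookup join into the DELETE's FROM clause, and requires that SVHistory has no enabled triggers, since OUTPUT ... INTO forbids them):

DELETE TOP (1800) SV
OUTPUT
    deleted.LocalID, deleted.ServerSerial, deleted.SVProgramID, deleted.CustomerPK,
    deleted.QtyEarned - deleted.QtyUsed, deleted.Value, deleted.ExternalID,
    3, GETDATE(), -9, deleted.ExpireDate, deleted.TotalValueEarned, 0,
    (deleted.QtyEarned - deleted.QtyUsed) * deleted.Value * ISNULL(SVUOM.UnitOfMeasureLimit, 1),
    deleted.PresentedCustomerID, deleted.PresentedCardTypeID,
    deleted.ResolvedCustomerID, deleted.HHID
INTO SVHistory (LocalID, ServerSerial, SVProgramID, CustomerPK, QtyUsed, Value,
    ExternalID, StatusFlag, LastUpdate, LastLocationID, ExpireDate, TotalValueEarned,
    RedeemedValue, BreakageValue, PresentedCustomerID, PresentedCardTypeID,
    ResolvedCustomerID, HHID)
FROM StoredValue AS SV
LEFT JOIN SVUnitOfMeasureLimits AS SVUOM ON SV.SVProgramID = SVUOM.SVProgramID
WHERE SV.ExpireDate < @ExpireDate
  AND SV.QtyEarned > SV.QtyUsed;

This deletes a batch and archives the same rows in a single pass, so the ExpireACK flag becomes unnecessary.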
Also, why are you using ROWLOCK? Lock escalation is fine and makes things run faster (less memory holding locks). Are you running this while the system is live? If it's after hours, use TABLOCK instead.
This is some suggested pseudo-code you can flesh out. I recommend taking @BatchSize as a parameter. Also obviously missing is error handling; this is just the core of the delete logic.
WHILE 1 = 1
BEGIN
    BEGIN TRAN

    UPDATE TOP (@BatchSize) StoredValue
    SET <whatever>

    INSERT INTO SVHistory <insert statement>

    DELETE FROM StoredValue WHERE ExpireACK = 1

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRAN
        BREAK;
    END

    COMMIT TRAN
END
First, look at what is causing it to be slow by examining the execution plan. Is it the insert statement or the delete?
Due to the calculations in the insert, I would suspect it is the slower part (unless you have triggers or a cascade delete on the table). You could change the history table to have the columns you use in the calculation and allow nulls in the calculated fields. Then you can insert into that table more quickly and do the delete. In a separate step in your job, you can then update the calculations (a sketch of that step follows). This would at least tie things up for a shorter period of time, but depending on how the history table is accessed, this may or may not be possible.
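A rough sketch of that deferred-calculation step; the QtyEarnedRaw and QtyUsedRaw columns are hypothetical, added to SVHistory to carry the raw inputs:

-- Second job step: fill in the calculated field after the fast insert.
UPDATE H
SET BreakageValue = (H.QtyEarnedRaw - H.QtyUsedRaw) * H.Value
                    * ISNULL(SVUOM.UnitOfMeasureLimit, 1)
FROM SVHistory AS H
LEFT JOIN SVUnitOfMeasureLimits AS SVUOM ON H.SVProgramID = SVUOM.SVProgramID
WHERE H.BreakageValue IS NULL;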
Another out-of-the-box possibility is to rename your table StoredValue to StoredValueRaw and create a view called StoredValue that only displays the active records (see the sketch below). Then the job to delete the records could run every fifteen minutes or so and only delete a few records at a time. This might be far less disruptive to the users even if the actual deletes take longer. You still might need to put the records in the history table at the time they are identified as expired.
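A minimal sketch of the rename-plus-view idea, assuming ExpireACK = 0 marks the rows that are still live:

EXEC sp_rename 'StoredValue', 'StoredValueRaw';
GO

-- Existing queries keep using the old name but only ever see live rows.
CREATE VIEW StoredValue
AS
SELECT *
FROM StoredValueRaw
WHERE ExpireACK = 0;
GO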
You should possibly rethink running this process every three weeks as well; the fewer records you have to deal with at once, the faster it will go.
I have a table, game.FileAttachments, which has a stream_id column that links to the stream_id column of a FileTable, game.Attachments. I created an update trigger on the FileTable to update the stream_id on the linked table; the reason being, when a file in the FileTable is modified, the stream_id changes. Here's my trigger; help please!
CREATE TRIGGER game.tr_Update_Attachments
ON game.Attachments
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(stream_id)
    BEGIN
        UPDATE game.FileAttachments
        SET stream_id = i.stream_id
        FROM inserted i
        WHERE game.FileAttachments.stream_id IN
        (
            SELECT d.stream_id
            FROM deleted d
            INNER JOIN game.FileAttachments f ON f.stream_id = d.stream_id
        )
    END
END
I also tried this:
IF ( UPDATE(stream_id) )
BEGIN
    UPDATE game.FileAttachments
    SET stream_id = i.stream_id
    FROM inserted i
    INNER JOIN deleted d ON 1 = 1
    INNER JOIN game.FileAttachments f ON f.stream_id = d.stream_id
END
But that also does not work.
OK, I created a delete trigger to test a theory: the FileTable record associated with the file I am modifying is NOT updated, but is instead deleted, and a totally new record created. Well, it turns out that for an MS Word doc this is true. But I created a plain text file, and I can update it as many times as I want, and the stream_id never changes. So Microsoft Word, the application, it seems, clones the original document, giving it a temp name; then, when a user chooses to save, the original is simply deleted and the clone renamed the same as the original. That BYTES!
I think your trigger definition needs to have this:
CREATE TRIGGER game.tr_Update_Attachments
ON game.Attachments
AFTER UPDATE, DELETE
as you refer to:
SELECT d.stream_id
FROM deleted d INNER JOIN
game.FileAttachments f ON f.stream_id = d.stream_id
in your WHERE clause... either that, or you are incorrectly referring to this table and need to capture the "deleted" attachments into a temp table?
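Separately, for multi-row updates the current UPDATE can't pair each old stream_id with its new value: joining inserted to deleted on 1 = 1 just cross-joins them. A sketch of one way to correlate them, assuming game.Attachments is a FileTable whose path_locator primary key is unchanged by the update (the path_locator join is an assumption here):

CREATE TRIGGER game.tr_Update_Attachments
ON game.Attachments
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(stream_id)
    BEGIN
        -- Pair old and new rows via the primary key, then carry the
        -- old stream_id -> new stream_id mapping into FileAttachments.
        UPDATE f
        SET f.stream_id = i.stream_id
        FROM game.FileAttachments f
        INNER JOIN deleted d ON d.stream_id = f.stream_id
        INNER JOIN inserted i ON i.path_locator = d.path_locator;
    END
END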
I have a table called Order.
It has a PK id, several other columns, a bool column named active, and a column tableid. Two orders can't be active at the same time for a given tableid; that is ensured with a two-column unique constraint (with active = true) that works like an index, so finding active orders is pretty fast.
The problem is that there is another table, orderitems. I want active to be true unless all orderitems for the order id are marked as paid = true.
Using serializable transactions, I think this can be achieved in the payment code by issuing an update query once all items are paid. But I don't think this will always work: if two payments run concurrently, they might both see that there are unpaid items (due to an old snapshot), yet when they commit they would have paid all items between them without updating the active column.
Adding new items while a payment transaction tries to set active = false won't be a problem with serializable transactions, because one of them would fail.
I think triggers are a solution, but I don't know what to do exactly. Thank you for reading.
What you'll want to do is add an AFTER UPDATE OR INSERT OR DELETE FOR EACH ROW trigger on orderitems that determines whether Order.active should be changed. You'll have to do a SELECT ... FOR UPDATE on the Order row that owns those orderitems, otherwise you'll risk concurrent runs of the trigger racing against each other and doing out-of-order updates.
Presuming orderitems has a field order_id that is your foreign key reference to Order.id, try something like the following (untested, general example only) code:
CREATE OR REPLACE FUNCTION orderitems_order_active_trigger() RETURNS trigger AS $$
DECLARE
    _old_active BOOLEAN;
    _new_active BOOLEAN;
    _order_id INTEGER;
BEGIN
    IF tg_op = 'INSERT' THEN
        _order_id = NEW.order_id;
    ELSIF tg_op = 'UPDATE' THEN
        _order_id = NEW.order_id;
    ELSIF tg_op = 'DELETE' THEN
        _order_id = OLD.order_id;
    ELSE
        RAISE EXCEPTION 'Unexpected trigger operation %', tg_op;
    END IF;

    -- Lock against concurrent trigger runs and other changes in the parent record
    -- while obtaining the current value of `active`. Note that Order is a reserved
    -- word, so the table name must be quoted.
    SELECT INTO _old_active "Order".active FROM "Order" WHERE "Order".id = _order_id FOR UPDATE;

    -- Now figure out whether the order should be active. We'll say that if there are
    -- more than zero unpaid order items found we'll treat it as active.
    _new_active = EXISTS(SELECT 1 FROM orderitems WHERE orderitems.order_id = _order_id AND orderitems.paid = 'f');

    -- If the active state has flipped, update the order.
    IF _old_active IS DISTINCT FROM _new_active THEN
        UPDATE "Order" SET active = _new_active WHERE "Order".id = _order_id;
    END IF;

    -- An AFTER trigger's return value is ignored, but PL/pgSQL still requires a RETURN.
    RETURN NULL;
END;
$$ LANGUAGE plpgsql VOLATILE;
CREATE TRIGGER orderitems_ensure_active_correct AFTER INSERT OR UPDATE OR DELETE
ON orderitems FOR EACH ROW EXECUTE PROCEDURE orderitems_order_active_trigger();
I'm writing a script that converts a datetime column from server time to UTC. I have a function that does this; however, the migration script will be run as part of a larger process with the following constraints:
The script must be able to run multiple times (i.e. don't convert the column twice)
Can't leave temp tables or other data around after the migration
This is the script so far:
SET XACT_ABORT ON;
BEGIN TRANSACTION;
UPDATE SOME_TABLE
SET LastModified = [dbo].[f_ServerToUTC]( LastModified )
COMMIT TRANSACTION;
Since the milliseconds are not important in my scenario, I've considered setting the millisecond portion to some specific value, indicating that the migration has already been performed. However, I feel like the probability of encountering this value in unconverted data is too high (given enough rows).
Is there some other way I can signify that the script has been run, given my constraints?
The way we solved this is somewhat particular to our system, but may be useful to others. We have a User-Defined Type, UtcDateTime, defined as:
CREATE TYPE [dbo].[UtcDateTime] FROM [datetime] NOT NULL
Since we're updating the column to be UTC instead of server time, it made sense to change the data type as well. Therefore, we could check to see if the data type had already been changed on the column as a guard against running the conversion more than once.
IF NOT EXISTS (
    SELECT 1
    FROM sys.tables t
    INNER JOIN sys.columns c ON t.object_id = c.object_id
    INNER JOIN sys.types ty ON c.user_type_id = ty.user_type_id
    WHERE t.object_id = OBJECT_ID( 'SOME_TABLE' )
      AND c.name = 'LastModified'
      AND ty.name = 'utcdatetime'
)
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

    ALTER TABLE SOME_TABLE
    ALTER COLUMN [LastModified] UTCDATETIME;

    UPDATE SOME_TABLE
    SET LastModified = [dbo].[f_ServerToUTC]( LastModified );

    COMMIT TRANSACTION;
END