Firebird Trigger: Modify Value & Insert Record

I am new to Firebird, especially to triggers. Usually I do this manually in a script, but I am really fascinated by the idea of doing it with a trigger.
Please let me explain my tables first.
***STOCK***
CODE
NAME
TOTAL
GOOD
BROKEN
SERVICE
***DETAIL***
ID
STOCK_CODE
SERIAL
***BROKEN***
DETAIL_ID
MARK
***SERVICE***
DETAIL_ID
START_DATE
END_DATE
COST
***LOGS***
DETAIL_ID
MARK
START_DATE
END_DATE
COST
And now my problems:
How do I modify the STOCK.GOOD and STOCK.BROKEN values after inserting a new record into BROKEN? That is: STOCK.GOOD - 1, STOCK.BROKEN + 1.
How do I insert the records from BROKEN and SERVICE into LOGS before the current record in SERVICE is deleted?
I hope my questions are clear.

Below are two triggers:
CREATE TRIGGER bi_broken FOR broken
BEFORE INSERT
POSITION 0
AS
BEGIN
  /* Shift one unit from GOOD to BROKEN on the stock row
     that owns the detail being marked as broken */
  UPDATE stock
  SET good = good - 1, broken = broken + 1
  WHERE code = (SELECT d.stock_code
                FROM detail d
                WHERE d.id = NEW.detail_id);
END
CREATE TRIGGER bd_service FOR service
BEFORE DELETE
POSITION 0
AS
BEGIN
  /* Archive the row being deleted, using the OLD context variables,
     together with its mark from BROKEN */
  INSERT INTO logs (detail_id, mark, start_date, end_date, cost)
  VALUES (OLD.detail_id,
          (SELECT b.mark FROM broken b WHERE b.detail_id = OLD.detail_id),
          OLD.start_date, OLD.end_date, OLD.cost);
END
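A quick illustration of how the triggers behave (the sample values here are made up):
-- Marking detail #1 as broken shifts one unit from GOOD to BROKEN on its STOCK row:
INSERT INTO broken (detail_id, mark) VALUES (1, 'cracked case');
-- Deleting detail #1's service record first archives it, with its BROKEN mark, into LOGS:
DELETE FROM service WHERE detail_id = 1;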
By the way, what is the reason for putting mark into a separate table? It belongs to STOCK, doesn't it?

Related

Redshift insert a date value into a table

insert into table1 (ID, date)
select ID, sysdate
from table2;
Assume I insert a record into table2 with values ID: 1, date: 2023-1-1.
The expected result is to update the ID of table1 based on the ID from table2, and to update the date of table1 based on sysdate.
select *
from table1;
The expected result after running the insert statement would be:
ID          date
--------------------
1           2023-1-6
but what I get is:
ID          date
--------------------
1           2023-1-1
I see a few possibilities based on the information given:
You say "the expected result is to update the ID of table1 based on the ID from table2", and this raises the question: did ID = 1 exist in table1 BEFORE you ran the INSERT statement? If so, are you expecting that the INSERT will update the value for ID #1? Redshift doesn't enforce or check uniqueness of primary keys, and you would get 2 rows in table1 in this case. Is this what is happening?
SYSDATE on Redshift provides the start timestamp of the current transaction, NOT the current statement. Have you had the current transaction open since the 1st? (See the sketch after this list.)
You didn't COMMIT the results (or the statement failed) and are checking from a different session. It could also be that the transaction in the second session started before the COMMIT completed. Working with MVCC across multiple sessions can trip anyone up.
There are likely other possible explanations. If you could provide DDL, sample data, and a simple test case so that others can recreate what you are seeing it would greatly narrow down the possibilities.
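On the SYSDATE point, a quick sketch of the difference (per the Redshift docs, GETDATE() returns the start time of the current statement, while SYSDATE is pinned to the transaction start):
BEGIN;
SELECT sysdate, getdate();  -- both are close to the transaction start
-- ... run other statements, let some time pass ...
SELECT sysdate, getdate();  -- sysdate is unchanged; getdate() has advanced
COMMIT;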

Proc is running slow with NOT EXISTS

I'm trying to create a stored procedure, but I'm running into an issue where it runs for over 5 minutes on close to 50k records.
The process seems pretty straightforward; I'm just not sure why it is taking so long.
Essentially I have two tables:
Table_1
ApptDate ApptName ApptDoc ApptReason ApptType
-----------------------------------------------------------------------
03/15/2021 Physical Dr Smith Yearly Day
03/15/2021 Check In Dr Doe Check In Day
03/15/2021 Appt oth Dr Dee Check In Monthly
Table_2 - this table has the exact same structure as Table_1; what I am trying to achieve is simply to archive the data from Table_1:
DECLARE @Date_1 AS DATETIME;
SET @Date_1 = GETDATE() - 1;

INSERT INTO Table_2 (ApptDate, ApptName, ApptDoc, ApptReason)
SELECT ApptDate, ApptName, ApptDoc, ApptReason
FROM Table_1
WHERE ApptType = 'Day' AND ApptDate = @Date_1
AND NOT EXISTS (SELECT 1 FROM Table_2
                WHERE ApptType = 'Day' AND ApptDate = @Date_1)
So this stored procedure seems pretty straightforward; however, the NOT EXISTS is causing it to be really slow.
The reason for the NOT EXISTS is that this stored procedure is part of a bigger process that runs multiple times a day (morning, afternoon, night). I'm trying to make sure that I only keep one copy of the '03/15/2021' data; I'm basically running an archive process on the previous day's data (@Date_1).
Any thoughts on how this can be sped up?
For this query:
INSERT INTO Table_2 (ApptDate, ApptName, ApptDoc, ApptReason)
SELECT ApptDate, ApptName, ApptDoc, ApptReason
FROM Table_1 t1
WHERE ApptType = 'Day' AND
      ApptDate = @Date_1 AND
      NOT EXISTS (SELECT 1
                  FROM Table_2 t2
                  WHERE t2.ApptType = t1.ApptType AND
                        t2.ApptDate = t1.ApptDate
                 );
You want indexes on Table_1(ApptType) and, more importantly, Table_2(ApptType, ApptDate) or Table_2(ApptDate, ApptType).
Note: I changed the correlation clause to just refer to the values in the outer query. This seems more general than your version, but should have the same performance (in this case).
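For reference, a minimal sketch of those indexes in T-SQL (the index names here are made up):
CREATE INDEX IX_Table_1_ApptType ON Table_1 (ApptType);
CREATE INDEX IX_Table_2_ApptType_ApptDate ON Table_2 (ApptType, ApptDate);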

How to add unique constraint over a time range?

I have a table slot which has a start_time and end_time. I want no other slot to be created having the same start and end time. A unique constraint as shown in the schema below
CREATE TABLE slot (
    id SERIAL PRIMARY KEY,
    start_time TIMETZ NOT NULL,
    end_time TIMETZ NOT NULL,
    CONSTRAINT slot_start_end_unique UNIQUE (start_time, end_time)
);
can easily be bypassed by shifting the time one minute up or down. I want to add a constraint so that no equivalent time slot, and no subset time slot, can be created.
I am thinking of using a check constraint to prevent any practically identical slot from being created.
Can anyone please point me in the right direction?
Your idea of using a check constraint as unique enforcement could probably be made to work, but there would be issues and it should be avoided. Your requirement necessitates comparing with other rows in the table, but
PostgreSQL does not support CHECK constraints that reference table
data other than the new or updated row being checked. ...
The documentation goes on to indicate that a custom trigger is best employed, so that is the approach taken here. See Section 5.4.1. Check Constraints.
Beyond that you have a couple of issues. First, the data type TIME WITH TIME ZONE (TIMETZ) is a poor choice, and it is somewhat misleading, as the time zone is not actually used the way the name suggests. As Section 8.5.3. Time Zones puts it:
Although the date type cannot have an associated time zone, the time
type can. Time zones in the real world have little meaning unless
associated with a date as well as a time, ... PostgreSQL assumes
your local time zone for any type containing only date or time.
(emphases mine)
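To see the assumption in practice, a quick sketch: when no zone is given explicitly, the offset attached to a timetz literal comes from the session's TimeZone setting.
SELECT '10:00'::timetz;  -- e.g. 10:00:00-05 when the session TimeZone is US Eastern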
Secondly, by using time only you may have problems specifying some ranges. How, for example, do you code the range from 22:00 to 06:00, or from 23:45 to 00:15? But now back to the process.
The following trigger assumes data type TIME rather than TIMETZ and adjusts for the over-midnight issue by assuming 'the next day' whenever start_time is greater than end_time.
create or replace function is_valid_irange()
returns trigger
language plpgsql
strict
as $$
declare
  k_existing_message constant text =
      'Range Requested (%s,%s). Overlaps existing range (%s,%s).';
  l_existing_range tsrange;
  l_parm_range     tsrange;
begin
  -- Anchor the new row's times to a fixed date so they can be compared
  -- as timestamp ranges; roll end_time into 'the next day' when the
  -- slot crosses midnight.
  with p_times(new_start_time, new_end_time) as
       ( values ('1970-01-01'::timestamp + new.start_time
                ,'1970-01-01'::timestamp + new.end_time
                )
       )
  select tsrange(new_start_time, end_time, '[)')
    into l_parm_range
    from (select new_start_time
               , case when new_start_time > new_end_time
                      then new_end_time + interval '1 day'
                      else new_end_time
                 end end_time
            from p_times
         ) pr;

  -- Build the same fixed-date ranges for the existing rows
  -- (irange is the example table from the linked example; substitute
  -- your own) and look for any overlap with the new range.
  with db_range(id, existing_range) as
       ( select id, tsrange(start_time, end_time, '[)')
           from ( select id
                       , '1970-01-01'::timestamp + start_time start_time
                       , case when start_time > end_time
                              then '1970-01-02'::timestamp + end_time
                              else '1970-01-01'::timestamp + end_time
                         end end_time
                    from irange
                ) dr
       )
  select d.existing_range
    into l_existing_range
    from db_range d
   where l_parm_range && existing_range
     and d.id != new.id
   limit 1;

  if l_existing_range is not null then
    raise exception 'Invalid Range Requested:'
          using detail = format( k_existing_message
                               , lower(l_parm_range)
                               , upper(l_parm_range)
                               , lower(l_existing_range)::time
                               , upper(l_existing_range)::time
                               );
  end if;

  return new;
end;
$$;
How it works:
Postgres provides a set of built-in range data types and a set of range operator functions.
The trigger coerces the start and end times, both from the new row and from the existing table rows, into timestamps with a fixed date (the beginning of time, 1970-01-01, according to Unix).
It then employs the overlaps (&&) operator. If any overlap is found, the trigger raises an exception. Instead of an exception it could return null to suppress the insert or update but otherwise continue processing; for that it needs to become a BEFORE trigger (it is currently an AFTER trigger).
For a full example see here. Do not worry about the date; pick any you want. I just used a generator for calculating times and to provide a common base for testing.
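For completeness, a sketch of how the function would be attached (the trigger name is made up, and irange is the example table from the linked example):
-- Needs PostgreSQL 11+ for EXECUTE FUNCTION; use EXECUTE PROCEDURE on older versions.
CREATE TRIGGER irange_no_overlap
AFTER INSERT OR UPDATE ON irange
FOR EACH ROW
EXECUTE FUNCTION is_valid_irange();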
Create the table as normal, then before you INSERT data into the table perform a SELECT query to check whether the time you are looking to insert already exists. For example, say you want to enter start 1pm and end 2pm:
DECLARE @start_value INT = 1,
        @end_value INT = 2;

SELECT COUNT(ID) AS UseCheck FROM slot WHERE start_time = @start_value OR end_time = @end_value
Then apply logic to say: IF UseCheck > 0 THEN do stuff.

Update performance issues - best practice

I've just started working with PostgreSQL; I used to work with SQL Server and I'm currently migrating some of the existing processes.
The current issue which I'm facing is the performance for an Update statement.
I'm trying to update all records from one table (e.g. MyTable_History) and set new values for some columns.
In SQL Server I used the following syntax:
declare @NewEndDate datetime = (select dateadd(minute, -1, getdate()))

update MyTable_History
set isLatestVersion = 0, ValidTo = @NewEndDate, ModifiedBy = 'TestScriptSql', ModifiedTime = getdate()
The code I could come up with for PostgreSQL (since I didn't know how to simply use variables, I used a temp table) is:
CREATE TEMP TABLE dates AS VALUES (current_timestamp + (-1 ||' minutes')::interval);
with d as (
select th.validto as validto, th.islatestversion as islatestversion,
th.modifiedby as modifiedby, th.modifiedtime as modifiedtime, d.column1 as newvalidto
from MyTable_History th, dates d
)
update MyTable_History
set validto = d.newvalidto, islatestversion=false, modifiedby='test_update_script', modifiedtime=current_timestamp
from d
The SQL Server instance runs locally on my laptop (not a super config) and the PostgreSQL server runs on AWS as RDS (I don't know the exact specs).
My question is: am I doing something wrong in the PostgreSQL update statement? On a 5000+ row sample, SQL Server performs the statement instantly, while PostgreSQL takes around 50 seconds to finish successfully.
Also, it seems I've over-engineered it: on SQL Server I had 3 lines of code, while on PostgreSQL I'm using a CTE.
Regards,
I don't see why you would need a variable to begin with. current_timestamp returns the same value throughout a transaction as documented in the manual and thus will have the same value for all updated rows.
update mytable_history
set islatestversion = false,
    validto = current_timestamp - interval '1 minute',
    modifiedby = 'test_update_script',
    modifiedtime = current_timestamp;
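If you want to verify that current_timestamp is frozen for the duration of a transaction, a quick sketch:
BEGIN;
SELECT current_timestamp;  -- note the value
SELECT pg_sleep(2);        -- wait two seconds
SELECT current_timestamp;  -- same value as before
COMMIT;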
But your usage of FROM in the UPDATE statement is wrong. The semantics of FROM in an UPDATE statement are very different between Postgres and SQL Server.
The way you use it creates a cross join between the CTE and mytable_history (so essentially a cross join of the table with itself).
You need to have a join condition in the WHERE clause on the primary key:
with d as (...)
update MyTable_History
set validto = d.newvalidto, islatestversion=false,
modifiedby='test_update_script', modifiedtime=current_timestamp
from d
where d.pk_column = MyTable_History.pk_column;
But if you really want to simulate something like variables, you don't need the CTE:
update mytable_history
set islatestversion = false,
    validto = t.newvalidto,
    modifiedby = 'test_update_script',
    modifiedtime = current_timestamp
from (
  values (current_timestamp - interval '1 minute')
) t (newvalidto);
The above still creates a "cross join" but as the joined table (from (values ...)) only contains a single row, it's not really a cross join.

Where/How to Create Reference Number

I'm using Entity Framework and MSSQL...
I need to insert a custom reference number when a record is inserted. The format is YYYY-01, YYYY-02, etc., but the sequential number needs to be reset when a new year begins.
For example 2011-01, 2011-02, 2012-01
I'm curious whether I should just go with a trigger, manage this with EF, or something else?
Having the sequential numbering reset each year has me a little confused...
Thanks for any advice!
Update:
Sorry, couldn't get the Code tag to work well with the markup
--Variables
DECLARE @year INT,
        @seqNum INT;

--Try to find out if the [ComplaintCount] table already contains the current year
SET @year = (SELECT [Count_Year]
             FROM [ComplaintCount]
             WHERE [Count_Year] = YEAR(GETDATE()))

--If the current year cannot be found in the [ComplaintCount] table, a new record for the current year needs to be made
IF @year IS NULL
BEGIN
    --Get the current year and set the initial sequence number to start counting for the new year
    SET @year = YEAR(GETDATE());
    SET @seqNum = 1;

    --Insert the new default values into the [ComplaintCount] table
    INSERT INTO [ComplaintCount]
                (count_year,
                 count_current)
    VALUES      (@year,
                 @seqNum);
END
ELSE
BEGIN
    --We found a record already in the [ComplaintCount] table for the current year
    --Get the sequence number and increase it by one
    SET @seqNum = (SELECT [Count_Current]
                   FROM [ComplaintCount]
                   WHERE [Count_Year] = @year) + 1

    --Update the [ComplaintCount] table with the new sequence number
    UPDATE [ComplaintCount]
    SET [Count_Current] = @seqNum
    WHERE [Count_Year] = @year;
END

--It's now safe to insert the correct reference number into the [Complaint] table
UPDATE [Complaint]
SET [Complaint_Reference] = CAST(@year AS VARCHAR) + '-' + CAST(@seqNum AS VARCHAR)
FROM [Complaint]
INNER JOIN inserted
        ON [Complaint].[PK_Complaint_Id] = inserted.[PK_Complaint_Id]
I'd say a trigger. Create a two-column table that stores the year and the current record number, then use a trigger to look up the current year, increment the count column by one, and use that count for the new record. Build logic into the trigger so that if the new year doesn't exist yet, it inserts the new year's record. I know most people like to avoid triggers where possible, but this is a pretty legitimate use of a trigger and far less processing than trying to count records on every insert.
Having a single row for every year and its related count may also prove useful in the future when you're trying to audit a past year or answer BI questions.
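A minimal sketch of that counter table, matching the column names used in the trigger above (any constraints beyond the primary key are left to the implementation):
CREATE TABLE [ComplaintCount] (
    [Count_Year]    INT NOT NULL PRIMARY KEY,  -- e.g. 2011, 2012, ...
    [Count_Current] INT NOT NULL               -- last sequence number issued for that year
);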