Trigger to update parent table based on max value of child table - tsql

this is my first foray into SSMS/T-SQL (coming from Access). I have a trigger set up that keeps the value of a column in a parent table equal to the MAX value of a column in a child table, based on the key between them. To calculate the MAX I have a UDF defined that I think works OK.
The problem I seem to have is that the trigger executes for EVERY key in the table, not just the one that was updated/deleted/inserted (or so I can glean from the debugger).
Here is the parent table:
CREATE TABLE [dbo].[factMeasures](
    [MeasureID] [int] IDENTITY(1,1) NOT NULL,
    [QARTOD] [int] NULL,
    [Param] [char](10) NOT NULL,
    [Value] [real] NOT NULL,
    CONSTRAINT [PK_factMeasures] PRIMARY KEY CLUSTERED
    (
        [MeasureID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Here is the child table:
CREATE TABLE [dbo].[dt_QCflags](
    [QC_ID] [int] IDENTITY(1,1) NOT NULL,
    [fkMeasureID] [int] NOT NULL,
    [RuleValue] [int] NOT NULL,
    CONSTRAINT [PK_dt_QCflags] PRIMARY KEY CLUSTERED
    (
        [QC_ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[dt_QCflags] WITH CHECK ADD CONSTRAINT [FK_dt_QCflags_factMeasures] FOREIGN KEY([fkMeasureID])
REFERENCES [dbo].[factMeasures] ([MeasureID])
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[dt_QCflags] CHECK CONSTRAINT [FK_dt_QCflags_factMeasures]
GO
Here is the UDF that calculates the MAX value of [RuleValue] for the input [MeasureID]:
CREATE FUNCTION [dbo].[MaxQC](@MeasureID INT)
RETURNS INT
AS
BEGIN
    RETURN
    (SELECT
        MAX([dt_QCflags].[RuleValue]) AS Max_RuleValue
    FROM
        dbo.dt_QCflags
    WHERE
        dt_QCflags.fkMeasureID = @MeasureID
    GROUP BY
        fkMeasureID);
END
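A quick way to sanity-check the function (the MeasureID value here is just an illustrative example):

-- Returns the highest RuleValue recorded for measure 1,
-- or NULL if that measure has no QC flags yet.
SELECT dbo.MaxQC(1) AS Max_RuleValue;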
And here is the trigger on the child table:
ALTER TRIGGER [dbo].[UpdateQARTOD]
ON [dbo].[dt_QCflags]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
UPDATE factMeasures
SET QARTOD = dbo.MaxQC(MeasureID) -- by QARTOD Definition, QARTOD flag is set to the MAX of all sub-test results
END
So what I want is for the column in the parent table (factMeasures.QARTOD) to always contain the maximum of the column in the child table (dt_QCflags.RuleValue) for the given MeasureID value.
When I debug this, it seems to be running the trigger for EVERY record in the parent table, so I think I need to modify the trigger, but I'm not sure how to get the MeasureID of JUST the record that was added/deleted/modified.
I'm guessing it has something to do with the "magic tables" inserted, deleted, etc., but I can't seem to get the syntax right.
Thanks!

I would argue that unless you have a very good reason, storing values that can easily be computed at query time is a mistake.
This seems like one of many cases I've seen where people think they gain something by storing, in one table, values calculated from values in another table, when in fact the opposite is true: you now have two pieces of data that need to be synchronized at all times, and since the process synchronizing them is a trigger, you don't really have control over it. It's quite easy to disable/enable triggers, for instance.
Therefore, my advice would be to remove that trigger altogether and simply calculate the value when you need it.
Note that SQL Server supports MAX() OVER (PARTITION BY ...), so you don't even need a GROUP BY to calculate the max of a column.
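As a sketch of that approach (using the table and column names from the question), the flag can be computed at query time instead of stored:

-- Computes the QARTOD flag on the fly; note this returns one row per
-- measure/flag pair, so aggregate or add DISTINCT if you need one row per measure.
SELECT m.MeasureID,
       m.Param,
       m.Value,
       MAX(q.RuleValue) OVER (PARTITION BY m.MeasureID) AS QARTOD
FROM dbo.factMeasures AS m
LEFT JOIN dbo.dt_QCflags AS q
    ON q.fkMeasureID = m.MeasureID;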
Update: following your comments on the answer, it seems you have a good reason to store these values.
Having said all that, here's a direct answer to the question you asked.
In SQL Server triggers, you can query two special tables called inserted and deleted. These tables contain the data that was (or is about to be, in the case of INSTEAD OF triggers) inserted into or deleted from the table on which the trigger is declared.
Note that in SQL Server, triggers fire per statement, not per row. This means the inserted and deleted tables may contain 0, 1, or many rows.
If you still want to maintain the value using triggers, I would advise one trigger for insert/update and a separate trigger for deletes.
This makes for much simpler code.
In the delete trigger, you left join from the deleted table to the base table, so a measure whose last flag was removed gets a NULL max:
UPDATE T
SET QARTOD = D.MaxValue
FROM factMeasures AS T
JOIN
(
    -- Join on the measure key: the deleted rows themselves are already
    -- gone from dt_QCflags, so joining on QC_ID would match nothing.
    SELECT d.fkMeasureID, MAX(t.RuleValue) AS MaxValue
    FROM Deleted AS d
    LEFT JOIN dt_QCflags AS t
        ON d.fkMeasureID = t.fkMeasureID
    GROUP BY d.fkMeasureID
) AS D
    ON T.MeasureID = D.fkMeasureID
In the insert/update trigger, you write very similar code, but you don't need to refer to the physical table in this case, only the inserted table:
UPDATE T
SET QARTOD = I.MaxValue
FROM factMeasures AS T
JOIN
(
    SELECT fkMeasureID, MAX(RuleValue) AS MaxValue
    FROM Inserted
    GROUP BY fkMeasureID
) AS I
    ON T.MeasureID = I.fkMeasureID
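For completeness, here's a sketch of how the insert/update statement might be wrapped (the trigger name is illustrative, and SET NOCOUNT ON just suppresses the extra rowcount messages):

CREATE TRIGGER [dbo].[UpdateQARTOD_InsUpd]
ON [dbo].[dt_QCflags]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Recompute the parent flag only for the measures touched by this statement.
    UPDATE T
    SET QARTOD = I.MaxValue
    FROM factMeasures AS T
    JOIN
    (
        SELECT fkMeasureID, MAX(RuleValue) AS MaxValue
        FROM Inserted
        GROUP BY fkMeasureID
    ) AS I
        ON T.MeasureID = I.fkMeasureID;
END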

Related

Why am I unable to create an index using a low-privileged user

I created a role to which this user belongs and set EXECUTE rights on multiple schemas; however, on the Data schema I need to be able to dynamically create and delete tables. Based on the screenshot, I gave all available permissions to the role (and effectively to the user), but when I try creating an index this is the error I get:
Could not create constraint or index. See previous errors
There are no previous errors :(
This is the code that should create the table and indexes :
CREATE TABLE [Data].[24C6B124-137C-4F06-B690-F80C0C0A1347]
(
[ApplicationId] [INT] NOT NULL,
[UUID] [UNIQUEIDENTIFIER] ROWGUIDCOL NOT NULL,
CONSTRAINT [PK_24C6B124-137C-4F06-B690-F80C0C0A1347]
PRIMARY KEY CLUSTERED ([ApplicationId] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY];
ALTER TABLE [Data].[24C6B124-137C-4F06-B690-F80C0C0A1347]
ADD CONSTRAINT [DF_24C6B124-137C-4F06-B690-F80C0C0A1347_UUID]
DEFAULT (NEWID()) FOR [UUID];
ALTER TABLE [Data].[24C6B124-137C-4F06-B690-F80C0C0A1347] WITH CHECK
ADD CONSTRAINT [FK_24C6B124-137C-4F06-B690-F80C0C0A1347_Application]
FOREIGN KEY([ApplicationId]) REFERENCES [dbo].[Application] ([ApplicationId])
ON DELETE CASCADE;
rolling back...
If I execute the same query with an sa-level user, it runs with no issues.
My best guess is that this is a permission problem related to some permission required to create indexes, but my searching and experiments led me nowhere.
Just after I posted the question, I figured it out, so let me share:
I needed to add the REFERENCES permission on the table [dbo].[Application] as well; it lives in the dbo schema, unlike the table in question, which is in the Data schema.
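In T-SQL terms, that boils down to something like this (the role name is hypothetical; the permission itself is REFERENCES):

-- Allows members of the role to reference [dbo].[Application]
-- from foreign keys on the tables they create.
GRANT REFERENCES ON [dbo].[Application] TO [DataSchemaRole];  -- hypothetical role name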

Execute Function Once per Unique Value

I have two tables that contain data related to everyday business:
CREATE TABLE main_table (
    main_id serial,
    cola text,
    colb text,
    colc text,
    CONSTRAINT main_table_pkey PRIMARY KEY (main_id)
);
CREATE TABLE second_table (
    second_id serial,
    main_id integer,
    cold text,
    CONSTRAINT second_table_pkey PRIMARY KEY (second_id),
    CONSTRAINT second_table_fkey FOREIGN KEY (main_id)
        REFERENCES main_table (main_id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
);
We have a need to know when some data was updated in these tables so that exports can be generated and pushed to third parties. I've created a third table to hold the update information:
CREATE TYPE field AS ENUM ('cola', 'colb', 'colc', 'cold');
CREATE TABLE table_updates (
    main_id int,
    field field,
    updated_on date NOT NULL DEFAULT NOW(),
    CONSTRAINT table_updates_fkey FOREIGN KEY (main_id)
        REFERENCES main_table (main_id) MATCH SIMPLE
        ON UPDATE NO ACTION ON DELETE NO ACTION
);
main_table has a trigger to update table_updates before UPDATE queries, which satisfies the need to track three of the four column updates.
I can easily add the same type of trigger to second_table; however, because main_id is not unique there, the function can be executed several times for a single main_id value, which is not desirable.
How can I create a function that, when updating several rows in second_table, executes only once per main_id?
If your inserts are batched by main_id, i.e., INSERT INTO tbl (main_id, ...) VALUES (main_id, ...), (main_id, ...), (main_id, ...), you can use the rule system to fire once for the whole INSERT or UPDATE.
For the things that can be implemented by both, which is best depends on the usage of the database. A trigger is fired once for each affected row. A rule modifies the query or generates an additional query. So if many rows are affected in one statement, a rule issuing one extra command is likely to be faster than a trigger that is called for every single row and must re-determine what to do many times. However, the trigger approach is conceptually far simpler than the rule approach, and is easier for novices to get right.
Short of that, you may also want to look into LISTEN and NOTIFY, which give you the ability to use asynchronous actions. If that's your thing and you decide to keep the trigger method, consider the Trigger Change Notification module, tcn.
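As a minimal illustration of the LISTEN/NOTIFY idea (the channel name and payload here are made up for the example):

-- In the listening session (e.g., the export daemon):
LISTEN table_updates;

-- In the writing session, after a change worth exporting:
NOTIFY table_updates, 'main_id=42';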
My suggestion is to do this in the app (outside of the DB) if at all possible. Remember that in PostgreSQL temp tables are local to the session, so you can have each loader session do something like this:
BEGIN;
CREATE TEMP TABLE etl_inventory (LIKE foo);
COPY etl_inventory FROM stdin;
-- Are they different? If so, NOTIFY
-- UPSERT into foo
COMMIT;
Then have one daemon that adds to the export queue when it receives the NOTIFY event.
While Evan's answer is correct, I think this question could benefit from an example.
This is the rule definition I used with the example tables in the question:
CREATE OR REPLACE RULE update_update_table
AS ON UPDATE TO second_table
DO ALSO (
    INSERT INTO table_updates (
        main_id, field
    )
    SELECT DISTINCT OLD.main_id, 'cold'::field
    WHERE NOT EXISTS (
        SELECT TRUE
        FROM table_updates
        WHERE main_id = OLD.main_id
        AND field = 'cold'
    );
    UPDATE table_updates
    SET updated_on = NOW()
    WHERE main_id = OLD.main_id
    AND field = 'cold'
);
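With the rule in place, a statement like the following (values are illustrative) records the affected main_id in table_updates only once, no matter how many rows it touches:

-- The rule's DO ALSO runs once for the whole statement, not once per row.
UPDATE second_table
SET cold = 'exported'
WHERE main_id = 1;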

Is it possible to create a Foreign Key on 2 columns with differing Collations?

I've tried searching for about an hour through all the Foreign Key / Collation Questions but I can't find something even remotely close to my question.
I have 2 tables from two different software vendors sitting in the same database. One vendor hard codes their collations to Latin1_General_BIN while the other uses the Database Default (in this case Latin1_General_CI_AS). Is it possible, without altering any of the columns, to create Foreign Keys between these two Collation Types?
Typically you would simply change one of them, but in this case the tables are very sensitive and I'm not allowed to make such a change. However, I have to create a foreign key, because of a piece of logic in a trigger that reads data between these two tables only if it finds one:
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE
WHERE CONSTRAINT_NAME =
(
SELECT name FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID( 'Table1' )
AND referenced_object_id = OBJECT_ID( 'Table2' )
)
Any help would really be appreciated.
Adding a foreign key constraint from a field of one collation to a field of another collation can't be done. You will get error message 1757.
Either change the collation of one of the tables, create a workaround with a new column in the correct collation that is used instead, or create surrogate integer key columns to use for referencing.
If nothing else works, you really, really need this kind of constraint, and performance is not an issue, you can add check constraints and/or triggers that check the referential integrity of data put into the tables. These rules will have to cast all values in one table to the collation of the other in order to compare them, so it will be slow, and it will be really tricky to get any use out of indexes; proceed with caution.
For example, you could have an insert trigger on the referencing table that checks whether a record with the inserted string exists in the referenced table. You would then also have to add update and delete triggers on the referenced table, so that it doesn't fall outside the range of values referenced by records in the referencing table, or that cascade updates/deletes. You would basically be replicating what foreign keys do, and it gets really slow and scales horribly.
Short answer: don't do it; leave the tables untied or fix the collation of one of them.
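For what it's worth, the explicit cast such a workaround has to perform on every comparison looks like this (the table and column names are hypothetical; the collation is the vendor's hard-coded one from the question):

-- Forcing one side's collation makes the comparison possible,
-- but it generally prevents an index seek on the converted side.
SELECT t1.SomeColumn
FROM Table1 AS t1
JOIN Table2 AS t2
    ON t1.SomeColumn = t2.SomeColumn COLLATE Latin1_General_BIN;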
Sweet, I think the solution is very elegant. I'm writing it up as an answer purely because it's the full alternative that most closely resembles the required solution, but I'm going to mark your answer as the answer, since it's the one that correctly answers my original question.
Right, so first I got permission from the vendor whose trigger requires the foreign key to add a new column to their table, as a persisted computed column in the collation of the other vendor's table:
DECLARE @Collation nvarchar(100)
DECLARE @SQL nvarchar(1000)
SET @Collation = ( SELECT collation_name FROM sys.columns WHERE OBJECT_ID IN ( SELECT OBJECT_ID FROM sys.objects WHERE type = 'U' AND name = 'Vendor1Table' ) AND name = 'Vendor1Column' )
SET @SQL = 'ALTER TABLE [Vendor2Table] ADD [Vendor2ComputedColumn] AS [Vendor2Column] COLLATE ' + @Collation + ' PERSISTED'
EXECUTE( @SQL )
GO
Next up, I added a candidate key to the computed column:
ALTER TABLE [Vendor2Table] ADD CONSTRAINT [CCUNQ1_x] UNIQUE NONCLUSTERED
(
[Vendor2ComputedColumn] ASC
)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]
GO
Then, I simply created the foreign key to the computed column:
ALTER TABLE [dbo].[Vendor1Table] WITH CHECK ADD CONSTRAINT [CCFOK01_x] FOREIGN KEY ( [Vendor1Column] )
REFERENCES [dbo].[Vendor2Table] ( [Vendor2ComputedColumn] )
GO
ALTER TABLE [dbo].[Vendor1Table] CHECK CONSTRAINT [CCFOK01_x]
GO
and finally, the original SQL Script passes with flying colours:
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE
WHERE CONSTRAINT_NAME =
(
SELECT name FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID( 'Vendor1Table' )
AND referenced_object_id = OBJECT_ID( 'Vendor2Table' )
)
Hopefully this small walkthrough helps some other soul some day :)
Thanks for the assist David, appreciate it!

Can't drop Index Constraint

I'm trying to first drop the index and then the PK (because ultimately I'm going to need to truncate this table).
Here's a screenshot of this table and its constraints:
Here are the two constraints (code obtained from the clipboard after right-clicking them and scripting the CREATE statement in SQL Server 2008):
(the Primary Key)
ALTER TABLE [dbo].[Entry] ADD CONSTRAINT [PK_Entry_Id] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
(supposedly this is the create-index code, after I do the same right-click scripting to the clipboard, but it's the exact same code! Not sure why):
ALTER TABLE [dbo].[Entry] ADD CONSTRAINT [PK_Entry_Id] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
Dropping the primary key will drop the index as well, since the index was created automatically when you created the primary key.
You only need to drop foreign key constraints when you need to truncate a table; this makes sure there are no tables dependent on it.
You don't need to, and can't, drop the index associated with the primary key. All you need to do is drop the primary key and the index will be removed.
If you're going to truncate the data, you shouldn't have to drop the primary key though.
If you can't truncate the table, then you should remove the foreign keys that reference this table. I assume you already removed any foreign keys by which this table references other tables, since we don't see any in your script.
Constraint and index are not the same thing, but in some syntax an index is referred to as a constraint.
You have five constraints and one index.
You likely did not notice that it has the same name in both scripts: PK_Entry_Id.
A PK is an index, and an index does not prevent a truncate; an FK prevents a truncate.
If you are trying to drop the index, why are you creating a script to create it?
Create a script to drop it, and drop it.
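Putting the advice from these answers together, a minimal sketch might look like this (the referencing table and constraint names are hypothetical; only foreign keys referencing [dbo].[Entry] block the truncate):

-- Drop any foreign key that references the table to be truncated.
ALTER TABLE [dbo].[ChildTable] DROP CONSTRAINT [FK_ChildTable_Entry];
GO
-- The primary key itself does not block TRUNCATE and can stay in place.
TRUNCATE TABLE [dbo].[Entry];
GO
-- If you really do want the PK (and its backing index) gone as well:
ALTER TABLE [dbo].[Entry] DROP CONSTRAINT [PK_Entry_Id];
GO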

Will adding a non-clustered index to a table with fewer than 1000 rows, but accessed frequently, increase performance?

I have a table with just 400-500 rows, but it is accessed very often, so I was wondering if I should add a non-clustered index on one of its columns to see any improvement.
This table keeps the same data all the time and is rarely updated.
Here's the structure of the table
CREATE TABLE [dbo].[tbl_TimeZones](
    [country] [char](2) NOT NULL,
    [region] [char](2) NULL,
    [timezone] [varchar](50) NOT NULL
) ON [PRIMARY]
With this clustered index:
CREATE CLUSTERED INDEX [IX_tbl_TimeZones] ON [dbo].[tbl_TimeZones]
(
    [country] ASC,
    [region] ASC,
    [timezone] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
This table doesn't have a primary key because the region column can be null, which is why I haven't added one yet.
So I want to add a non-clustered index on the timezone column in order to improve performance.
Short answer: an index will probably improve performance for you.
Longer answer.
Even with just that number of records, you could see query improvement with a well-chosen index. Assuming this table is used in joins, you could see changes (improvements) in the query plans that join through that table, which may give you a bigger benefit than you might anticipate.
You seem to give the impression that you expect to index "one" column. Indexing just one column is probably not the optimal solution; a "covering" index is generally going to be a better one (Google for "covering index").
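For example, if lookups filter on timezone and return the other two columns, a covering non-clustered index might look like this (the index name is illustrative):

-- Covers queries that filter on timezone and return country/region
-- without a lookup back into the clustered index.
CREATE NONCLUSTERED INDEX [IX_tbl_TimeZones_timezone]
ON [dbo].[tbl_TimeZones] ([timezone])
INCLUDE ([country], [region]);
GO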
Now, having said all of that, I suspect the best performance may come from how the clustered index is defined. You did not indicate how the data is typically queried. But, if the queries almost always access the table in the same way (e.g., the WHERE and JOIN clauses always reference the same columns), then you might find that changing the clustered index gives the most improvement.
Also, part of the art of choosing indexes involves balancing query performance versus insert/update performance. You don't have that challenge if the data aren't changing. Keep that in mind when reading general index-tuning advice.
Bottom line: I suspect that a clustered index over the columns used in the WHERE and JOIN clauses is the answer. Column order matters. Selectivity matters.