T-SQL Update trigger with multiple rows

Consider an AFTER UPDATE trigger on table A.
For every update, the trigger should update all the records in table B.
Now consider this query:
UPDATE A SET X = Y
Obviously many rows may be updated at once, and the trigger fires only once, after the update.
Now, if the trigger uses the inserted table, and you want to update table B with every single row of the inserted pseudo-table, and MSDN recommends against using cursors, how would you do that?
Thank you

I don't know what exactly you want to do in your update trigger, but you could, for example:
UPDATE b
SET b.someColumn = i.AnotherValue
FROM dbo.B b
INNER JOIN Inserted i ON b.Criteria = i.Criteria
or something else - you need to tell us a bit more about what it is you want to do with table B! But it's definitely possible to update, insert into, or otherwise modify table B while handling multiple rows from the Inserted table, all without using a cursor.
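For context, here is a minimal sketch of the whole trigger; the trigger name and the someColumn/AnotherValue/Criteria columns are assumptions, since no schema was posted:
CREATE TRIGGER trg_A_AfterUpdate
ON dbo.A
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- set-based update of B from every row in Inserted, no cursor needed
    UPDATE b
    SET b.someColumn = i.AnotherValue
    FROM dbo.B b
    INNER JOIN Inserted i ON b.Criteria = i.Criteria;
END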

I will assume that table A is related to table B via a key (have to assume, as you posted no details).
If that is the case, you can use either sub-queries or joins with inserted to select the rows that need changing on table B.
UPDATE tableB
SET colx = someValue
WHERE tableB.id IN
(
SELECT b_id
FROM INSERTED
)
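The join form mentioned above would look something like this (a sketch; it assumes b_id in inserted is the key pointing at tableB):
UPDATE b
SET b.colx = someValue
FROM tableB b
INNER JOIN INSERTED i ON b.id = i.b_id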

Related

incorrect data update on Sybase trigger execution

I have a table test_123 with the columns:
int_1 (int),
datetime_1 (datetime),
tinyint_1 (tinyint),
datetime_2 (datetime)
When column datetime_1 is updated and the value of column tinyint_1 = 1, I have to update column datetime_2 with the value of datetime_1.
I have created the trigger below for this, but it updates the datetime_2 column of every row where tinyint_1 = 1. I just want to update the particular row where the datetime_1 value was actually updated (I mean changed).
Below is the trigger:
CREATE TRIGGER test_trigger_upd
ON test_123
FOR UPDATE
AS
FOR EACH STATEMENT
IF UPDATE(datetime_1)
BEGIN
UPDATE test_123
SET test_123.datetime_2 = inserted.datetime_1
WHERE test_123.tinyint_1 = 1
END
ROW-level triggers are not supported in ASE. There are only after-statement triggers.
As commented earlier, the problem you're facing is that you need to be able to link the rows in the 'inserted' pseudo-table to the base table itself. You can only do that if there is a key -- meaning: a column that uniquely identifies a row, or a combination of columns that does so. Without that, you simply cannot identify the row that needs to be updated, since there may be multiple rows with identical column values if uniqueness is not guaranteed.
(and on a side note: not having a key in a table is bad design practice -- and this problem is one of the many reasons why).
A simple solution is to add an identity column to the table, e.g.
ALTER TABLE test_123 ADD idcol INT IDENTITY NOT NULL
You can then add a predicate 'test_123.idcol = inserted.idcol' to the trigger join.
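Putting that together, a sketch of the corrected trigger (assuming the idcol identity column from the ALTER TABLE above has been added):
CREATE TRIGGER test_trigger_upd
ON test_123
FOR UPDATE
AS
IF UPDATE(datetime_1)
BEGIN
    -- join to inserted so only the rows that were actually updated are touched
    UPDATE test_123
    SET test_123.datetime_2 = inserted.datetime_1
    FROM test_123, inserted
    WHERE test_123.idcol = inserted.idcol
    AND test_123.tinyint_1 = 1
END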

Using a Database Trigger to move a record

I am new to the use of database triggers, so I want to get pointed in the right direction here. I would like to make a trigger that executes on INSERT of a new Invoice, or on UPDATE of BalanceDue in my Invoice table, and that takes the VendorID in Invoices, grabs the vendor row in the Vendors table, and moves some data from that row to another table for ShippingLabels. This is what I've got so far, but I'm kind of at a loss for where to go from here.
CREATE TRIGGER trSetShippingLabels
ON tblInvoices
AFTER Insert, Update
AS
INSERT INTO tblShippingLabels
SELECT VendorName, VendorAddress, VendorCity, VendorState, VendorZipCode
FROM tblVendors
JOIN tblInvoices i on i.VendorID = Vendors.VendorID
You're pretty close. You just need to use the special "inserted" table within your trigger. This table is accessible within triggers (or in conjunction with the OUTPUT clause), and holds the rows affected by the statement that fired the trigger. There is also a corresponding "deleted" table if you need the removed (or pre-update) data in a trigger.
CREATE TRIGGER trSetShippingLabels
ON tblInvoices
AFTER INSERT, UPDATE
AS
INSERT INTO tblShippingLabels
SELECT v.VendorName, v.VendorAddress, v.VendorCity, v.VendorState, v.VendorZipCode
FROM tblVendors v
JOIN Inserted i ON i.VendorID = v.VendorID

Would inserts and updates fire a delete trigger?

I am in the unfortunate situation of needing to add triggers to a table to track changes in a legacy system. I have insert, update, and delete triggers on TABLE_A; each one of them writes the values of two columns to TABLE_B, along with a bit flag that is set to 1 if the row was written by the delete trigger.
Every entry in TABLE_B shows up twice. An insert creates two rows, an update creates two rows (we believe), and a delete creates an insert entry and then a delete entry.
Is the legacy application doing this, or is SQL doing it?
EDIT (adding more detail):
body of triggers:
-- after delete
INSERT INTO TableB (col1, isdelete) SELECT col1, 1 FROM DELETED
-- after insert
INSERT INTO TableB (col1, isdelete) SELECT col1, 0 FROM INSERTED
-- after update
INSERT INTO TableB (col1, isdelete) SELECT col1, 0 FROM DELETED
I have tried profiler, and do not see any duplicate statements being executed.
It may be that the application is changing the data again when it sees the operations on its data.
It's also possible that triggers exist elsewhere - is there any possibility that there is a trigger on TableB that is creating extra rows?
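One quick way to check for such a trigger (a sketch, assuming SQL Server 2005 or later):
SELECT t.name AS trigger_name, OBJECT_NAME(t.parent_id) AS table_name, t.is_disabled
FROM sys.triggers t
WHERE t.parent_id = OBJECT_ID('dbo.TableB');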
More detail would be needed to address the question more fully.

PostgreSQL: dynamic row values (?)

Oh helloes!
I have two tables: the first one (let's call it NameTable) is preset with a set of values (id, name) and the second one (ListTable) is empty but has the same columns.
The question is: how can I insert into ListTable a value that comes from NameTable, so that if I change one name in NameTable then the values in ListTable are automagically updated as well?
Is there an INSERT for this, or do the tables have to be created in some special manner?
Tried browsing the manual but without success :(
The suggestion to use INSERT...SELECT is the best method for moving data between tables in the same database.
However, there's another way to deal with the auto-update requirement.
It sounds like these are your criteria:
Table A is defined with columns (x,y)
(x,y) is unique
Table B is also defined with columns (x,y)
Table A is a superset of Table B
Table B is to be loaded with data from Table A and needs to remain in sync with UPDATEs on Table A.
This is a job for a FOREIGN KEY with the option ON UPDATE CASCADE:
ALTER TABLE B ADD FOREIGN KEY (x,y) REFERENCES A (x,y) ON UPDATE CASCADE;
Now, not only will Table B auto-update when Table A is updated, but Table B is also protected against containing (x,y) pairs that do not exist in Table A. If you want records to auto-delete from Table B when they are deleted from Table A, add ON DELETE CASCADE.
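For example, with both cascade options (the same assumed columns as above):
ALTER TABLE B ADD FOREIGN KEY (x,y) REFERENCES A (x,y) ON UPDATE CASCADE ON DELETE CASCADE;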
Hmmm... I'm a bit confused about exactly what you want to do or why, but here are a couple of pointers towards things you might want to take a look at: table inheritance, triggers and rules.
Table inheritance in PostgreSQL allows a table to share the data of another table. So, if you add a row to the base table, it won't show up in the inherited table, but if you add a row to the inherited table, it will show up in both tables, and updates in either place will be reflected in both tables.
Triggers allow you to set up code that will be run when insert, update or delete operations happen on a table. This would allow you to add the behavior you describe manually.
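For instance, a minimal sketch of such a trigger (the function and trigger names are made up, and it assumes id is the key shared by both tables):
CREATE OR REPLACE FUNCTION sync_listtable_name() RETURNS trigger AS $$
BEGIN
    -- propagate the changed name to the matching row in ListTable
    UPDATE ListTable SET name = NEW.name WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_nametable_sync
AFTER UPDATE ON NameTable
FOR EACH ROW EXECUTE PROCEDURE sync_listtable_name();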
Rules allow you to set up a rule that will replace a matching query with an alternative query when a specific condition is met.
If you describe your problem further as in why you want this behavior, it might be easier to suggest the right way to go about things :-)

How do I INSERT and SELECT data with partitioned tables?

I set up a set of partitioned tables per the docs at http://www.postgresql.org/docs/8.1/interactive/ddl-partitioning.html
CREATE TABLE t (year int, a int);
CREATE TABLE t_1980 ( CHECK (year = 1980) ) INHERITS (t);
CREATE TABLE t_1981 ( CHECK (year = 1981) ) INHERITS (t);
CREATE RULE t_ins_1980 AS ON INSERT TO t WHERE (year = 1980)
DO INSTEAD INSERT INTO t_1980 VALUES (NEW.year, NEW.a);
CREATE RULE t_ins_1981 AS ON INSERT TO t WHERE (year = 1981)
DO INSTEAD INSERT INTO t_1981 VALUES (NEW.year, NEW.a);
From my understanding, if I INSERT INTO t (year, a) VALUES (1980, 5), it will go to t_1980, and if I INSERT INTO t (year, a) VALUES (1981, 3), it will go to t_1981. But my understanding seems to be incorrect. First, I can't understand the following from the docs:
"There is currently no simple way to specify that rows must not be inserted into the master table. A CHECK (false) constraint on the master table would be inherited by all child tables, so that cannot be used for this purpose. One possibility is to set up an ON INSERT trigger on the master table that always raises an error. (Alternatively, such a trigger could be used to redirect the data into the proper child table, instead of using a set of rules as suggested above.)"
Does the above mean that in spite of setting up the CHECK constraints and the RULEs, I also have to create TRIGGERs on the master table so that the INSERTs go to the correct tables? If that were the case, what would be the point of the db supporting partitioning? I could just set up the separate tables myself? I inserted a bunch of values into the master table, and those rows are still in the master table, not in the inherited tables.
Second question. When retrieving the rows, do I select from the master table, or do I have to select from the individual tables as needed? How would the following work?
SELECT year, a FROM t WHERE year IN (1980, 1981);
Update: Seems like I have found the answer to my own question:
"Be aware that the COPY command ignores rules. If you are using COPY to insert data, you must copy the data into the correct child table rather than into the parent. COPY does fire triggers, so you can use it normally if you create partitioned tables using the trigger approach."
I was indeed using COPY FROM to load data, so RULEs were being ignored. Will try with TRIGGERs.
Definitely try triggers.
If you think you want to implement a rule, don't (the only exception that comes to mind is updatable views). See this great article by depesz for more explanation there.
In reality, Postgres only supports partitioning on the reading side of things. You're going to have to set up the method of insertion into the partitions yourself - in most cases by TRIGGERing. Depending on the needs and applications, it can sometimes be faster to teach your application to insert directly into the partitions.
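A sketch of that trigger approach, using the t / t_1980 / t_1981 tables from the question (the function and trigger names are made up):
CREATE OR REPLACE FUNCTION t_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.year = 1980 THEN
        INSERT INTO t_1980 VALUES (NEW.*);
    ELSIF NEW.year = 1981 THEN
        INSERT INTO t_1981 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'year % is out of range', NEW.year;
    END IF;
    RETURN NULL;  -- keep the row out of the master table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_insert
BEFORE INSERT ON t
FOR EACH ROW EXECUTE PROCEDURE t_insert_trigger();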
When selecting from partitioned tables, you can indeed just SELECT ... WHERE ... on the master table, so long as your CHECK constraints are properly set up (they are in your example) and the constraint_exclusion parameter is set correctly.
For 8.4:
SET constraint_exclusion = partition;
For < 8.4:
SET constraint_exclusion = on;
All this being said, I actually really like the way Postgres does it and use it myself often.
"Does the above mean that in spite of setting up the CHECK constraints and the RULEs, I also have to create TRIGGERs on the master table so that the INSERTs go to the correct tables?"
Yes. Read point 5 (section 5.9.2).
"If that were the case, what would be the point of the db supporting partitioning? I could just set up the separate tables myself?"
Basically: the INSERTs into the child tables must be done explicitly (either by creating TRIGGERs, or by specifying the correct child table in the query). But the partitioning is transparent for SELECTs, and (given the storage and indexing advantages of this scheme) that's the point.
(Besides, because the partitioned tables are inherited, the schema is inherited from the parent, hence consistency is enforced.)
Triggers are definitely better than rules.
Today I played with partitioning a materialized view table and ran into a problem with the trigger solution.
Why?
I'm using RETURNING, and the current solution returns NULL :)
But here's the solution that works for me - correct me if I'm wrong.
1. I have 3 tables which get inserted with some data, and there's a view (let's call it viewfoo) which contains the data that needs to be materialized.
2. The insert into the last table has a trigger which inserts into the materialized view table via INSERT INTO matviewtable SELECT * FROM viewfoo WHERE recno = NEW.recno;
That works fine, and I'm using RETURNING recno; (recno is of type SERIAL - a sequence).
The materialized view (table) needs to be partitioned because it's huge; according to my tests, SELECTs are at least 10x faster in this case.
Problems with partitioning:
* The current trigger solution RETURNs NULL, so I cannot use RETURNING recno.
(The current trigger solution = the trigger explained on the depesz page.)
Solution:
I changed the trigger on my 3rd table to NOT insert into the materialized view table (that table is the parent of the partitioned tables), and instead created a new trigger which inserts into the partitioned table directly from the 3rd table and RETURNs NEW.
The materialized view table is automagically updated and RETURNING recno works fine.
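A rough sketch of that arrangement (the partition, function, trigger and source-table names here are made-up placeholders based on the description above):
-- Trigger function on the 3rd source table: materialize the row directly into
-- the relevant child partition and RETURN NEW so that INSERT ... RETURNING recno
-- still works against the source table.
CREATE OR REPLACE FUNCTION insert_matview_partition() RETURNS trigger AS $$
BEGIN
    INSERT INTO matviewtable_current  -- hypothetical child partition
    SELECT * FROM viewfoo WHERE recno = NEW.recno;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_fill_matview
AFTER INSERT ON third_table  -- hypothetical name for the 3rd table
FOR EACH ROW EXECUTE PROCEDURE insert_matview_partition();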
I'll be glad if this helps anybody.