SQL update trigger to grab updated data and also select other row data - T-SQL

I am trying to find a way so that when a specific column on a table gets updated, an update trigger (or maybe something else) can select the stop number column from the same row whose datetime was updated. I want to capture the stop number and the column's before/after values into another table. I do OK with SQL, but I'm no expert, so I just can't think of how to accomplish this.
Is it possible?

Yes, it is. Have a read through this. Basically there are two virtual tables, deleted and inserted, that you can query in a trigger. deleted contains the rows being deleted, and inserted (you guessed it) the rows being inserted.
"How does that help? I'm doing an update." Indeed, but an update is effectively a delete followed by an insert, so in an AFTER UPDATE trigger you can get at the old values in deleted and the new values in inserted.

Related

COPY support with PostgreSQL v12 triggers

We have had this pair of trigger and function on our PostgreSQL database for the longest time. Basically, the trigger is called each time a new record arrives in the main table, and each row is inserted into the monthly partition individually. Following is the trigger definition:
CREATE TRIGGER partition_mic_teams_endpoint_trg1
BEFORE INSERT ON "mic_teams_endpoint"
FOR EACH ROW
EXECUTE PROCEDURE trg_partition_mic_teams_endpoint('month');
The function we have creates monthly partitions based on a timestamp field in each row.
I have two questions:
1. Even if I try to COPY a bunch of rows from CSV into the main table, is this trigger/function going to insert each row individually? Is this efficient?
2. If that is the case, is it possible to have support for COPYing data to partitions instead of INSERTing?
Thanks,
Note: I am sorry if I did not provide enough information for an answer
Yes, a row-level trigger will be called for each row separately, and that will make COPY quite a bit slower.
One thing you could try is a statement-level AFTER trigger that uses a transition table, so that you can
INSERT INTO destination SELECT ... FROM transition_table;
That should be faster, but you should test it to be certain.
See the documentation for details.
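As a sketch of that idea (assuming PostgreSQL 10+ and, for illustration, a single hypothetical target partition; a real version would route rows by their timestamp):

CREATE OR REPLACE FUNCTION trg_bulk_to_partition() RETURNS trigger AS $$
BEGIN
    -- one set-based insert per COPY/INSERT statement instead of one per row
    INSERT INTO mic_teams_endpoint_y2020m01
    SELECT * FROM new_rows;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER partition_mic_teams_endpoint_stmt_trg
AFTER INSERT ON "mic_teams_endpoint"
REFERENCING NEW TABLE AS new_rows
FOR EACH STATEMENT
EXECUTE PROCEDURE trg_bulk_to_partition();

Unlike the BEFORE row trigger, an AFTER statement trigger cannot keep the rows out of the main table, so this fits a setup where the parent table is a staging area or its rows are cleaned up separately.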

Trigger on any column change except specific one

I want to fire a trigger on all column changes on a table except one specific column, but listing all the columns in the AFTER INSERT OR UPDATE OF clause is a problem: the table has many columns and the set of columns changes at times, which makes keeping the trigger definition in sync very error-prone. How can I do this in a way where I just specify the column to omit from the trigger? For the moment I have an event trigger that prints a warning to update the trigger whenever that table is altered, but that relies on the user noticing the warning and obviously won't help when creating a new database.
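For illustration, one way to express "anything except column X changed" without listing columns is to compare the old and new rows with that column stripped out, e.g. via jsonb (a sketch, assuming PostgreSQL 11+, a hypothetical table t, a column updated_at to ignore, and an existing audit function log_change()):

CREATE TRIGGER t_change_trg
AFTER UPDATE ON t
FOR EACH ROW
-- compare whole rows minus the ignored column
WHEN ( (to_jsonb(OLD.*) - 'updated_at') IS DISTINCT FROM (to_jsonb(NEW.*) - 'updated_at') )
EXECUTE FUNCTION log_change();

An INSERT always counts as a change, so the INSERT case can live in a separate trigger that needs no column list at all.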

"ON UPDATE" equivalent for Amazon Redshift

I want to create a table that has a column updated_date that is updated to SYSDATE every time any field in that row is updated. How should I do this in Redshift?
You should create the table definition like below; that will make sure that whenever you insert a record, it populates SYSDATE.
create table test (
    id integer not null,
    update_at timestamp DEFAULT SYSDATE
);
What about updating the field every time a row is updated?
Remember, Redshift is a DW (data warehouse) solution, not a simple OLTP database, hence updates should be avoided or minimized.
UPDATE = DELETE + INSERT
Ideally, instead of updating a record, you should delete and re-insert it; since an update is eventually DELETE + INSERT anyway, the column DEFAULT takes care of populating update_at along the way.
Also, most of us use ETLs; you may be using an stg_sales staging table to populate your data. The above solution works there too, where you could do something like below.
DELETE FROM sales WHERE id IN (SELECT id FROM stg_sales);
INSERT INTO sales (id) SELECT id FROM stg_sales;  -- in practice, list every column except update_at
Hope this answers your question.
Redshift doesn't support UPSERTs, so you should load your data into a temporary/staging table first and check for IDs in the main table which also exist in the staging table (i.e. which need to be updated).
Delete those records, and INSERT the data from the staging table, which will have the new updated_date.
Also, don't forget to run VACUUM on your tables every once in a while, because your use case involves a lot of DELETEs and UPDATEs.
Refer to this for additional info.
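Putting the pieces together, a sketch of the full staging flow (table and column names are hypothetical, with id as the business key and amount as the payload):

BEGIN;
DELETE FROM sales
USING stg_sales
WHERE sales.id = stg_sales.id;     -- remove the rows that are being "updated"
INSERT INTO sales (id, amount)     -- update_at omitted so DEFAULT SYSDATE fills it in
SELECT id, amount FROM stg_sales;
COMMIT;
VACUUM sales;                      -- reclaim space; VACUUM must run outside the transaction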

Would Postgres really update the page file when all fields are equal before and after the update?

I am working on a little website-crawler program. I use PostgreSQL to store data and use a statement like this to update it:
INSERT INTO topic (......) VALUES (......)
ON CONFLICT (...) DO UPDATE SET ... /* update all fields here */
The question is: if all the fields are really equal before and after the update, would PostgreSQL really update the row?
Postgres (like nearly all other DBMS) will not check whether the target values are different from the original ones. So the answer is: yes, it will update the row even if the values are unchanged.
However, you can easily prevent the "empty" update in this case by including a where clause:
INSERT INTO topic (......)
VALUES (......)
ON CONFLICT (...)
DO UPDATE
set ... -- update all columns
WHERE topic IS DISTINCT FROM excluded;
The WHERE clause will prevent updating a row that is identical to the one being inserted. To make that work correctly, your insert has to list all columns of the target table. Otherwise the topic IS DISTINCT FROM excluded condition will always be true, because the excluded row has fewer columns than the topic row and is thus "distinct" from it.
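As a concrete sketch, with hypothetical columns id (the conflict target), name, and body:

INSERT INTO topic (id, name, body)
VALUES (1, 'pg', 'crawler notes')
ON CONFLICT (id) DO UPDATE
SET name = excluded.name,
    body = excluded.body
WHERE topic IS DISTINCT FROM excluded;  -- whole-row comparison skips the no-op write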
Adding a check for modified values has been discussed multiple times on the mailing list and has always been rejected. The main reason is that it doesn't make sense to impose the overhead of checking for changes on every statement just to cope with a few badly written ones.

Insert data from staging table into multiple, related tables?

I'm working on an application that imports data from Access into SQL Server 2008. Currently, I'm using a stored procedure to import the data record by record. I can't go with a bulk insert or anything like that because the data is inserted into two related tables... I have a bunch of fields that go into the Account table (first name, last name, etc.) and three fields that will each get a record in an Insurance table, linked back to the Account table by the auto-incrementing AccountID that's retrieved with SCOPE_IDENTITY in the stored procedure.
Performance isn't very good due to the number of round trips from the application to the database. For this and some other reasons, I'm planning to use a staging table instead and import the data from there. Reading up on my options for approaching this, a cursor that executes the same insert stored procedure on each row of the staging table would make sense. However, it appears that cursors are evil incarnate and should be avoided.
Is there any way to insert data into one table, retrieve the auto-generated IDs, then insert data for the same records into another table using the corresponding ID, in a set-based operation? Or is a cursor my only option here?
Look at the OUTPUT clause. You should be able to add it to your INSERT statement to do what you want.
BTW, if you need to output columns into the second table that weren't inserted into the first one, use MERGE instead of INSERT (as suggested in the comment to the original question), since its OUTPUT clause supports referencing other columns from the source table(s). Otherwise, sticking with an INSERT is more straightforward, and it does give you access to the inserted identity column.
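A sketch of the MERGE variant, assuming a hypothetical StagingImport table and the Account/Insurance tables described above:

DECLARE @map TABLE (AccountID int, Ins1 varchar(100), Ins2 varchar(100), Ins3 varchar(100));

-- ON 1 = 0 never matches, so every staging row takes the INSERT branch,
-- and the OUTPUT clause may reference both inserted.* and source columns.
MERGE INTO Account AS tgt
USING StagingImport AS src
   ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (FirstName, LastName)
    VALUES (src.FirstName, src.LastName)
OUTPUT inserted.AccountID, src.Insurance1, src.Insurance2, src.Insurance3
INTO @map (AccountID, Ins1, Ins2, Ins3);

-- Unpivot the three insurance fields: one Insurance row per non-null value
INSERT INTO Insurance (AccountID, InsuranceInfo)
SELECT m.AccountID, v.InsuranceInfo
FROM @map AS m
CROSS APPLY (VALUES (m.Ins1), (m.Ins2), (m.Ins3)) AS v(InsuranceInfo)
WHERE v.InsuranceInfo IS NOT NULL;

The WHERE filter skips empty insurance fields; drop it if three Insurance rows should always be created per account.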
I did some experimenting with inserting multiple records into related tables using data binding, so you could try that as well!
Hopefully this is very helpful. Follow this link, How to insert record into related tables, for more information.