I got this trigger as a tip and I would like to know how it works with updates. It is supposed to create a record every time there is an update or insert action on my main table.
create trigger tblTriggerAuditRecord on tblOrders
after update, insert
as
begin
insert into tblOrdersAudit
(OrderID, OrderApprovalDateTime, OrderStatus, UpdatedBy, UpdatedOn)
select i.OrderID, i.OrderApprovalDateTime, i.OrderStatus, SUSER_SNAME(), getdate()
from tblOrders t
inner join inserted i on t.OrderID=i.OrderID
end
go
From my understanding, for an insert it copies the inserted rows into the stated columns of the audit table, along with a timestamp and the user. But what about an update? If I update rows in my main table, shouldn't I also have a join on the updated records?
Hope my question is clear, thanks a lot for help!
There is no updated pseudo-table when a trigger fires. In the case of an update, you'll find the old values from your main table in the deleted table, and the new values are (as with an insert) in the inserted table.
That's the same as in this example:
UPDATE tabEmployee SET Salary = Salary * 1.05
OUTPUT inserted.EmployeeName, deleted.Salary, inserted.Salary
INTO tabSalaryHistory (EmployeeName, OldSalary, NewSalary)
In this example, every employee gets a salary increase. The value before the increase is stored in the output table deleted and the new value in inserted.
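Applied to the trigger from the question, this means the join back to tblOrders isn't even needed: inserted already holds the new row images for both inserts and updates. A minimal sketch of the same trigger written against inserted alone (same columns as in the question):
create trigger tblTriggerAuditRecord on tblOrders
after update, insert
as
begin
  -- inserted carries the new values for both INSERT and UPDATE,
  -- so it can be read directly, without joining the base table
  insert into tblOrdersAudit
  (OrderID, OrderApprovalDateTime, OrderStatus, UpdatedBy, UpdatedOn)
  select i.OrderID, i.OrderApprovalDateTime, i.OrderStatus, SUSER_SNAME(), getdate()
  from inserted i
end
go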
Have a look at this for better understanding.
This may be a simple fix but I'm somehow drawing a blank. I have the code below, and I want the results from it to be added to their own column in an existing table. How would I go about doing this?
Select full_name, SUM(total) as sum_sales
FROM loyalty
where invoiceyear = 2013
GROUP BY full_name
order by sum_sales DESC;
This leaves me with one column containing the employee's name and a second with their sales for that year.
How can I just take these results and add them to the table as a new column?
Is it as simple as...
Alter table loyalty
Add column "2013 sales"
and then add in some sort of condition?
Any help would be greatly appreciated!
If I got your question right, you need to first alter the table, allowing the new column to be null (you can change that later on), then use an UPDATE statement to store the values permanently.
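A minimal sketch of that, assuming the new column is called sales_2013 (a quoted identifier like "2013 sales" would also work but is awkward), and a dialect that allows a correlated subquery over the updated table (SQL Server, Postgres and SQLite do; MySQL needs a derived-table workaround):
alter table loyalty add sales_2013 decimal(12,2) null;

-- populate the new column with each employee's 2013 total
-- (rows for employees with no 2013 invoices stay NULL)
update loyalty
set sales_2013 = (select sum(l2.total)
                  from loyalty l2
                  where l2.full_name = loyalty.full_name
                    and l2.invoiceyear = 2013);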
I have 2 table invoice_header table and invoice_line table with details below:
invoice_header columns: invoice_id, customer_id
invoice_line columns: invoice_id, line_id, item_id, quantity, line_flag
They are joined by the common column invoice_id
Values in these 2 tables are inserted by the application in a single database transaction
Tables are on Microsoft SQL Server
I created a trigger on invoice_line to update line_flag to zero if customer_id is 10. However, the trigger is not working; I believe this is because it fails to find a matching line in invoice_header, since these two tables are inserted by a database transaction at the same time.
Below is the trigger
update invoice_line
set line_flag = 0
from invoice_line l
inner join Inserted v on v.line_id = l.line_id
inner join invoice_header h on h.invoice_id = v.invoice_id
where h.customer_id = 10
If I don't join the tables, it works but updates all the lines.
I have also tried to rewrite the trigger on the invoice_header to update the lines but it's still not working.
Is there a way to write an after-insert trigger that joins tables inserted in the same database transaction?
I believe because it's failing to find a matching line in invoice_header since these two tables are inserted by a database transaction at the same time.
This is not true; the data may not yet be committed for "outsiders", but your own transaction can see the data it has itself already modified.
the trigger is not working
While your question has some details, you don't mention why the trigger doesn't work. Do you mean it does nothing? Or is there an error? If so, which one?
update invoice_line
I think this is the cause of the error. You want to update l instead, since you have already aliased your table.
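Put together, a sketch of the corrected trigger (the header is an assumption, since the question only shows the body, and the trigger name trg_invoice_line_customer10 is made up):
create trigger trg_invoice_line_customer10 on invoice_line
after insert
as
begin
  -- update the aliased instance l; naming invoice_line here would target
  -- a second, unjoined copy of the table and update every row
  update l
  set line_flag = 0
  from invoice_line l
  inner join inserted v on v.line_id = l.line_id
  inner join invoice_header h on h.invoice_id = v.invoice_id
  where h.customer_id = 10
end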
I am creating 2 tables, one called quizes with:
id, quiz_name, plays
and the second one called quizes_taken with:
session_id, quiz_id
and I would like to store the row count in the plays column of the quizes table, but I have no clue how to do it.
In more detail, what I am trying to achieve is that on every update/insert into quizes_taken, the plays column in table quizes updates with the row count for that quiz's ID in quizes_taken.
If someone could explain how to achieve it, I'd be grateful!
Thanks in advance!
You could make a view or materialized view for the expected result output:
CREATE OR REPLACE VIEW quizes_live_update AS
SELECT a.id , a.quiz_name, COUNT(b.quiz_id) as plays
FROM quizes a
JOIN quizes_taken b ON a.id = b.quiz_id
GROUP BY a.id, a.quiz_name
Postgres: Create view
Although, if you need to proceed with the table approach, you can set up triggers on the quizes_taken table to perform the update of the plays column in quizes, as sketched below.
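A sketch of those triggers, assuming PostgreSQL (the question links the Postgres docs); the function and trigger names are made up, and quiz_id is assumed to be an integer. This uses Postgres 11+ syntax (older versions use EXECUTE PROCEDURE instead of EXECUTE FUNCTION):
CREATE OR REPLACE FUNCTION refresh_plays() RETURNS trigger AS $$
DECLARE
    affected_quiz int;
BEGIN
    -- OLD is the only row image available on DELETE, NEW otherwise
    IF TG_OP = 'DELETE' THEN
        affected_quiz := OLD.quiz_id;
    ELSE
        affected_quiz := NEW.quiz_id;
    END IF;
    -- recount that quiz's rows so quizes.plays stays in step with quizes_taken
    UPDATE quizes q
    SET plays = (SELECT COUNT(*) FROM quizes_taken t WHERE t.quiz_id = q.id)
    WHERE q.id = affected_quiz;
    RETURN NULL;  -- AFTER trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER quizes_taken_refresh_plays
AFTER INSERT OR UPDATE OR DELETE ON quizes_taken
FOR EACH ROW EXECUTE FUNCTION refresh_plays();
Recounting on every change is simple and self-correcting; incrementing and decrementing plays would be cheaper per row, but can drift if quizes_taken is ever modified outside the trigger.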
I have a situation where I have multiple (potentially hundreds of) threads repeating the same task (using a Java scheduled executor, if you are curious). This task entails selecting rows of changes (from a table called change) that have not yet been processed (processed changes are tracked in an m:n join table called change_process_rel that records the process id, record id and status), processing them, then updating the status back.
My question is: what is the best way to prevent two threads from the same process from selecting the same row? Will the solution below (using FOR UPDATE to lock rows) work? If not, please suggest a working solution.
Create table change(
-- id, autogenerated pk
-- other fields
)
Create table change_process_rel(
-- change id (pk of change table)
-- process id (pk of process table)
-- status
)
The query I would use is listed below:
Select * from
change c
where c.id not in (select changeid from change_process_rel with cs) for update
Please let me know if this would work
You have to "lock" a row which you are going to process somehow. Such a "locking" should be concurrent of course with minimum conflicts / errors.
One way is as follows:
Create table change
(
id int not null generated always as identity
, v varchar(10)
) in userspace1;
insert into change (v) values '1', '2', '3';
Create table change_process_rel
(
id int not null
, pid int not null
, status int not null
) in userspace1;
create unique index change_process_rel1 on change_process_rel(id);
Now you should be able to run the same statement from multiple concurrent sessions:
SELECT ID
FROM NEW TABLE
(
insert into change_process_rel (id, pid, status)
select c.id, mon_get_application_handle(), 1
from change c
where not exists (select 1 from change_process_rel r where r.id = c.id)
fetch first 1 row only
with ur
);
Each such statement inserts 1 or 0 rows into the change_process_rel table, which is used here as a "lock" table. The corresponding ID from change is returned, and you may proceed with processing the corresponding event in the same transaction.
If the transaction completes successfully, the row inserted into the change_process_rel table is kept, so the corresponding id from change may be considered processed. If the transaction fails, the corresponding "lock" row in change_process_rel disappears, and the row may be processed later by this or another application.
The problem with this method is that when both tables become large enough, the sub-select may not work as quickly as before.
Another method is to use "Evaluate uncommitted data through lock deferral".
It requires placing the status column in the change table.
Unfortunately, Db2 for LUW doesn't have SKIP LOCKED functionality, which might help with this sort of algorithm.
If, let's say, status=0 means "not processed" and status<>0 is some processing / processed status, then after setting the DB2_EVALUNCOMMITTED and DB2_SKIP* registry variables and restarting the instance, you may "catch" the next ID for processing with the following statement.
SELECT ID
FROM NEW TABLE
(
update
(
select id, status
from change
where status=0
fetch first 1 row only
)
set status=1
);
Once you get an ID, you may do further processing of it in the same transaction, as before.
It's good to create an index for performance:
create index change1 on change(status);
and maybe set this table as volatile, or periodically collect distribution statistics on this column in addition to regular statistics on the table and its indexes.
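For instance (a sketch; the schema name in the RUNSTATS command is an assumption):
-- favor index access even when statistics are stale
alter table change volatile cardinality;

-- or collect distribution statistics on status (CLP command):
-- runstats on table myschema.change with distribution on columns (status) and indexes all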
Note that these registry variable settings have a global effect, and you should keep that in mind...
I have a table in my database and I want each row to have a unique id, with the rows numbered sequentially.
For example: I have 10 rows, each with an id, starting from 0 and ending at 9. When I remove a row from the table, let's say row number 5, a "hole" appears. And afterwards when I add more data, the "hole" is still there.
It is important for me to know the exact number of rows and to have data at every row number, so I can access my table arbitrarily by index.
Is there a way in SQLite to do this? Or do I have to manage the removal and addition of data manually?
Thank you in advance,
Ilya.
It may be worth considering whether you really want to do this. Primary keys usually should not change through the lifetime of the row, and you can always find the total number of rows by running:
SELECT COUNT(*) FROM table_name;
That said, the following trigger should "roll down" every ID number whenever a delete creates a hole:
CREATE TRIGGER sequentialize_ids AFTER DELETE ON table_name FOR EACH ROW
BEGIN
UPDATE table_name SET id=id-1 WHERE id > OLD.id;
END;
I tested this on a sample database and it appears to work as advertised. If you have the following table:
id name
1 First
2 Second
3 Third
4 Fourth
And delete where id=2, afterwards the table will be:
id name
1 First
2 Third
3 Fourth
This trigger can take a long time and has very poor scaling properties (it takes longer for each row you delete and each remaining row in the table). On my computer, deleting 15 rows at the beginning of a 1000 row table took 0.26 seconds, but this will certainly be longer on an iPhone.
I strongly suggest that you rethink your design. In my opinion you're asking for trouble in the future (e.g. if you create another table and want to have relations between the tables).
If you want to know the number of rows just use:
SELECT count(*) FROM table_name;
If you want to access rows in the order of id, just define this field using PRIMARY KEY constraint:
CREATE TABLE test (
id INTEGER PRIMARY KEY,
...
);
and get rows using ORDER BY clause with ASC or DESC:
SELECT * FROM table_name ORDER BY id ASC;
SQLite creates an index for the primary key field, so this query is fast.
I think you would be interested in reading about the LIMIT and OFFSET clauses.
The best source of information is the SQLite documentation.
If you don't want to take Stephen Jennings's very clever but performance-killing approach, just query a little differently. Instead of:
SELECT * FROM mytable WHERE id = ?
Do:
SELECT * FROM mytable ORDER BY id LIMIT 1 OFFSET ?
Note that OFFSET is zero-based, so you may need to subtract 1 from the variable you're indexing in with.
If you want to reclaim deleted row ids, the VACUUM command or pragma may be what you seek:
http://www.sqlite.org/faq.html#q12
http://www.sqlite.org/lang_vacuum.html
http://www.sqlite.org/pragma.html#pragma_auto_vacuum