T-SQL "after insert trigger" invoke by store procedure - tsql

I have a stored procedure that inserts records into a table and at the same time deletes all records older than 20 minutes.
I tried to optimize it and saw that the delete operation costs too much. So I decided to create an after insert trigger that does the delete operation.
It seems to work faster now, but the execution plan still shows the "delete" statement from the trigger - the only difference is that the delete statement now has "Query Cost (relative to the batch): 0%".
My question is: when the procedure inserts the records, will it return the results immediately, or will it wait for the after insert trigger to complete?

The trigger is part of the same transaction as the calling procedure, so the whole transaction only completes once the trigger has finished. From MSDN:
The trigger and the statement that fires it are treated as a single transaction, which can be rolled back from within the trigger. If a severe error is detected (for example, insufficient disk space), the entire transaction automatically rolls back.
MSDN - Understanding DML Triggers
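For illustration, a minimal sketch of such a trigger, assuming a table dbo.Records with a CreatedAt datetime2 column (both names are hypothetical):

CREATE TRIGGER trg_Records_Cleanup
ON dbo.Records
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Runs inside the same transaction as the INSERT that fired it,
    -- so the inserting procedure does not return until this DELETE is done.
    DELETE FROM dbo.Records
    WHERE CreatedAt < DATEADD(MINUTE, -20, SYSDATETIME());
END;

The 0% relative cost shown for the trigger's DELETE does not mean it is free; it still runs before control returns to the caller.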

Related

Is there a way to SELECT and read from the table such that no COMMIT or ROLLBACK is necessary?

In fact, the PostgreSQL documentation states that all interactions with the database are performed within a transaction.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a transaction block.
Given this, even something like SELECT name FROM users will initiate a COMMIT. Is there a way to avoid this? In other words—is there a means of looking at the data in a table without issuing a COMMIT or a ROLLBACK? In the case of statements whose sole purpose is to fetch some data, it seems like superfluous overhead.
I read this and recognize that having SELECT statements be within a transaction is important; it allows one to take a snapshot of the data and thus remain consistent about what rows are where and what data they contain—but, then, is there a way to end a transaction without the overhead of COMMIT or ROLLBACK (in the case where neither is actually necessary)?
I recognize that having SELECT statements be within a transaction is important; it allows one to take a snapshot of the data and thus remain consistent
Good.
but, then, is there a way to end a transaction without the overhead of COMMIT or ROLLBACK?
Committing a transaction that only read data does not have any overhead. All that needs to happen is that the transaction handle and the resources allocated for it are dropped.
The "implicit COMMIT" just means that the transaction is closed/exited/completed - with or without actually writing anything. You cannot have a transaction without ending it.

Cannot execute SELECT queries while a long-lasting insert transaction is running

I'm pretty new to PostgreSQL and I'm sure I'm missing something here.
The scenario: on version 11, I execute a big drop-table-and-insert transaction on a given table using the Node.js driver; it may take 30 minutes.
While that is running, if I try to query that table with a SELECT using the JDBC driver, the query waits for the transaction to finish. If I close the transaction (by finishing it or by forcing it to exit), the JDBC query becomes responsive.
I thought I could read a table with one connection while performing a transaction on another one.
What am I missing here?
Should I keep the table (without dropping it at the beginning of the transaction)?
DROP TABLE takes an ACCESS EXCLUSIVE lock on the table, which is there precisely to prevent it from taking place concurrently with any other operation on the table. After all, DROP TABLE physically removes the table.
Since all locks are held until the end of the database transaction, all access to the dropped table is blocked until the transaction ends.
Of course the files are only removed when the transaction commits, so you might wonder why PostgreSQL doesn't let concurrent transactions read in the meantime. But that would mean that a COMMIT could be blocked by a concurrent reader, or that a SELECT could hit a system error in the middle of reading, neither of which sounds appealing.
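Not from the original answer, but a common way to avoid holding the ACCESS EXCLUSIVE lock for the whole 30-minute load is to build the data in a staging table and swap it in with a short transaction (all table and column names here are hypothetical):

-- Slow part: the live table stays readable while the staging copy loads.
CREATE TABLE mytable_staging (LIKE mytable INCLUDING ALL);
INSERT INTO mytable_staging (id, payload)
SELECT id, payload FROM mytable_source;

-- Fast part: the ACCESS EXCLUSIVE lock is held only for the swap.
BEGIN;
DROP TABLE mytable;
ALTER TABLE mytable_staging RENAME TO mytable;
COMMIT;

Readers are then blocked only for the milliseconds of the rename, not for the whole load.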

How do SQL triggers affect performance?

I have a table named traffic in my PostgreSQL database. Transactions on my traffic table fire some triggers. I have two triggers, named trigger1 and trigger2. When a row is inserted, updated, or deleted, these triggers fire. I wonder: how do SQL triggers affect transaction performance?
If the insert into my traffic table takes 1 ms, trigger1 takes 2 ms, and trigger2 takes 3 ms, will an insert cost 1+2+3 = 6 ms? Or does the insert itself take 1 ms while the triggers run separately?
The triggers run as part of the data modifying statement, so when the statement is completed, so are the triggers. In your example, the INSERT will take 6 ms.
A slight exception are deferred constraint triggers, which run when the whole transaction is committed. If you run a transaction with multiple statements, these triggers run after all the statements have completed, but the total run time of the transaction stays the same.
If you're inserting or updating in bulk, you might get some benefit from a statement-level trigger. Here's a good blog post on this subject from Laurenz Albe, who already commented on your question.
https://www.cybertec-postgresql.com/en/rules-or-triggers-to-log-bulk-updates/
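For reference, a minimal sketch of a statement-level trigger with a transition table (PostgreSQL 10+ feature; the EXECUTE FUNCTION syntax needs 11+; the log table and function names are hypothetical):

CREATE TABLE traffic_log (
    logged_at  timestamptz DEFAULT current_timestamp,
    rows_added bigint
);

CREATE FUNCTION log_traffic_inserts() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- new_rows is the transition table holding every row the statement inserted.
    INSERT INTO traffic_log (rows_added) SELECT count(*) FROM new_rows;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$;

CREATE TRIGGER trigger_log_inserts
AFTER INSERT ON traffic
REFERENCING NEW TABLE AS new_rows
FOR EACH STATEMENT
EXECUTE FUNCTION log_traffic_inserts();

This fires once per INSERT statement instead of once per row, which is where the bulk-load savings come from.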

Discover if another record is being inserted right now by another transaction in PostgreSQL

Imagine there is an open, ongoing transaction in PostgreSQL that inserts a record and does something else as well.
BEGIN;
INSERT INTO films(id, name) VALUES (10, 'A comedy');
-- We are at this moment in time
-- The transaction is not yet committed
-- ...
COMMIT;
Is there any non-blocking way to discover from outside of this transaction that there is an ongoing transaction inserting record with ID=10 right now?
The only way I could think of was:
BEGIN;
SET statement_timeout TO 100;
INSERT INTO films(id, name) VALUES (10, '') ON CONFLICT (id) DO NOTHING;
ROLLBACK;
If I get a timeout from the INSERT, it means there is another ongoing transaction.
If I inserted nothing, there was a transaction which has since finished, and a conflict on the unique ID occurred.
If the INSERT succeeded and I then rolled back, it means there is no transaction currently trying to insert a row with ID=10.
However, this is less than ideal:
It is not non-blocking; I am waiting up to 100 ms here.
I am doing an INSERT operation, whereas I would prefer a read-only solution.
As far as I understand, I am actively triggering a conflict, and I cannot easily guarantee that the second transaction won't hit a deadlock or that I won't ever interrupt the work of the first transaction.
I am effectively trying to work around the lack of the READ UNCOMMITTED transaction isolation level in PostgreSQL.
I am in charge of both parts of the code, so I can change them in any way necessary to make this possible.
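No answer appears in this thread, but since both parts of the code can be changed, one commonly used non-blocking pattern is a transaction-scoped advisory lock keyed on the ID (a sketch, not from the original thread):

-- Writer: take an advisory lock tied to the row ID before inserting.
BEGIN;
SELECT pg_advisory_xact_lock(10);
INSERT INTO films(id, name) VALUES (10, 'A comedy');
-- ... other work ...
COMMIT;  -- the advisory lock is released here automatically

-- Checker: probe the same lock without blocking.
SELECT pg_try_advisory_xact_lock(10);
-- false: a transaction holding the lock (i.e. inserting ID=10) is still running.
-- true: no such transaction right now; the probe's own lock is released as
-- soon as the checker's (implicit) transaction ends.

Combined with an ordinary existence check on films.id, this distinguishes "insert in progress" from "already committed" without touching the data.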

Basic questions about T-SQL triggers

What is the difference between FOR and AFTER in a trigger definition? Is there any benefit to using one over the other?
If I issue an UPDATE statement that updates 5 rows, does the trigger (with FOR UPDATE) fire 5 times? If so, is there any way to make the trigger fire only once for the entire UPDATE statement (even though it updates multiple rows)?
Is there any chance/situation of having more than one row in the "inserted" or "deleted" table at any point in a trigger's life cycle? If so, can I have a very quick sample of that?
thanks
Triggers fire once per statement, not once per row, and should always be designed with that in mind. (As for FOR vs. AFTER: in T-SQL they are synonyms; AFTER is simply the newer keyword.) Yes, if you do a multi-row update, insert, or delete, all the affected rows will be in the inserted or deleted tables. For instance, the command
DELETE FROM table1 WHERE state = 'CA';
would put every row with a state of CA into deleted, even if there were 10,000,000 of them. That is why trigger testing is critical and why a trigger must be designed to handle multi-row actions. A trigger that works well for one row may bring the database to a screeching halt for hours, or cause data integrity issues, if it is poorly designed for multiple rows. For the most part, triggers should rely not on cursors or loops but on set-based operations, as shown in the sketch below. If you are assigning the contents of inserted or deleted to a variable, you are almost certainly expecting exactly one row, and your trigger will not work properly when someone performs a set-based operation on the table.
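To illustrate the set-based approach, a sketch of an audit trigger that handles any number of deleted rows in one statement (the audit table and the id column are hypothetical):

CREATE TRIGGER trg_table1_delete_audit
ON table1
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Set-based: one INSERT covers every row in "deleted", whether the
    -- DELETE removed a single row or 10,000,000.
    INSERT INTO table1_audit (id, state, deleted_at)
    SELECT id, state, SYSDATETIME()
    FROM deleted;
END;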
SQL Server has two basic kinds of DML triggers. AFTER triggers fire after the change has been applied to the table; these are typically used to update some other table as well. INSTEAD OF triggers (SQL Server has no true BEFORE triggers) take the place of the insert/update/delete and are usually used for special processing of the incoming data. It is important to know that an INSTEAD OF trigger will not perform the action that was sent to the table: if you still want the delete, update, or insert to happen, you must write it into the trigger yourself.
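And a minimal sketch of an INSTEAD OF trigger, showing that the original action must be re-issued inside the trigger if it is still wanted (names hypothetical):

CREATE TRIGGER trg_table1_instead_insert
ON table1
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- The original INSERT never reaches the table; to store the rows we
    -- must insert them ourselves (here, upper-casing state on the way in).
    INSERT INTO table1 (id, state)
    SELECT id, UPPER(state)
    FROM inserted;
END;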