What is the point of "ROLLBACK TRANSACTION named_transaction"? - tsql

I've read through MSDN on ROLLBACK TRANSACTION and nesting transactions. While I see the point of ROLLBACK TRANSACTION savepointname, I do not understand ROLLBACK TRANSACTION transactionname.
It only works when transactionname is the outermost transaction
ROLLBACK always rolls back the entire transaction "stack", except in the case of savepointname
Basically, as I read the documentation, except in the case of a save point, ROLLBACK rolls back all transactions (to @@TRANCOUNT = 0). The only difference I can see is this snippet:
If a ROLLBACK TRANSACTION transaction_name statement using the name of
the outer transaction is executed at any level of a set of nested
transactions, all of the nested transactions are rolled back.
If a ROLLBACK WORK or ROLLBACK TRANSACTION statement without a
transaction_name parameter is executed at any level of a set of nested
transaction, it rolls back all of the nested transactions, including
the outermost transaction.
As I read it, this suggests that when you roll back a named transaction (which must be the name of the outermost transaction), only the nested transactions are rolled back. That would give some meaning to rolling back a named transaction. So I set up a test:
CREATE TABLE #TEMP (id varchar(50))
INSERT INTO #TEMP (id) VALUES ('NO')
SELECT id AS NOTRAN FROM #TEMP
SELECT @@TRANCOUNT AS NOTRAN_TRANCOUNT
BEGIN TRAN OUTERTRAN
INSERT INTO #TEMP (id) VALUES ('OUTER')
SELECT id AS OUTERTRAN FROM #TEMP
SELECT @@TRANCOUNT AS OUTERTRAN_TRANCOUNT
BEGIN TRAN INNERTRAN
INSERT INTO #TEMP (id) VALUES ('INNER')
SELECT id AS INNERTRAN FROM #TEMP
SELECT @@TRANCOUNT AS INNERTRAN_TRANCOUNT
ROLLBACK TRAN OUTERTRAN
IF @@TRANCOUNT > 0 ROLLBACK TRAN
SELECT id AS AFTERROLLBACK FROM #TEMP
SELECT @@TRANCOUNT AS AFTERROLLBACK_TRANCOUNT
DROP TABLE #TEMP
This results in (with all "X row(s) affected" messages removed):
NOTRAN
--------------------------------------------------
NO
NOTRAN_TRANCOUNT
----------------
0
OUTERTRAN
--------------------------------------------------
NO
OUTER
OUTERTRAN_TRANCOUNT
-------------------
1
INNERTRAN
--------------------------------------------------
NO
OUTER
INNER
INNERTRAN_TRANCOUNT
-------------------
2
AFTERROLLBACK
--------------------------------------------------
NO
AFTERROLLBACK_TRANCOUNT
-----------------------
0
Note that there is no difference in the output when I change
ROLLBACK TRAN OUTERTRAN
to simply
ROLLBACK TRAN
So what is the point of ROLLBACK TRANSACTION named_transaction?

Save points are exactly as the name implies: 'save points' in the log sequence. The log sequence is always linear. If you rollback to a save point, you rollback everything your transaction did between your current log position and the save point. Consider your example:
LSN 1: BEGIN TRAN OUTERTRAN
LSN 2: INSERT INTO ...
LSN 3: BEGIN TRAN INNERTRAN
LSN 4: INSERT INTO ...
LSN 5: ROLLBACK TRAN OUTERTRAN
At Log Sequence Number (LSN) 1 the OUTERTRAN save point is created. The first INSERT creates LSN 2. Then INNERTRAN creates a save point at LSN 3. The second INSERT creates LSN 4. ROLLBACK TRAN OUTERTRAN is equivalent to 'roll back the log until LSN 1'. You cannot 'skip' portions of the log, so you must roll back every operation in the log until LSN 1 (where the save point OUTERTRAN was created) is hit.
On the other hand, if the last operation had been ROLLBACK TRAN INNERTRAN instead, the engine would have rolled back only to LSN 3 (where the INNERTRAN save point was inserted in the log), preserving LSN 1 and LSN 2 (i.e. the first INSERT).
For a practical example of save points see Exception handling and nested transactions.
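The LSN walk-through above can be sketched with save points from Python's sqlite3 (a stand-in for the T-SQL example; the table name and save point name are illustrative, not from the question):

```python
import sqlite3

# Mirror of the nested-transaction test above, using SQLite SAVEPOINTs.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
cur = conn.cursor()
cur.execute("CREATE TABLE t (id TEXT)")

cur.execute("BEGIN")                               # like BEGIN TRAN OUTERTRAN (LSN 1)
cur.execute("INSERT INTO t VALUES ('OUTER')")      # LSN 2
cur.execute("SAVEPOINT innertran")                 # like BEGIN TRAN INNERTRAN (LSN 3)
cur.execute("INSERT INTO t VALUES ('INNER')")      # LSN 4

# Rolling back to the inner save point undoes only LSN 4,
# preserving LSN 2 (the first INSERT).
cur.execute("ROLLBACK TO SAVEPOINT innertran")
cur.execute("COMMIT")

print(cur.execute("SELECT id FROM t").fetchall())  # [('OUTER',)]
```

Rolling back to the outer position instead would undo both inserts, which is exactly the behaviour the question observed with ROLLBACK TRAN OUTERTRAN.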

Related

What is a good way of rolling back a transaction in Postgres

I want to insert data into a table from a staging table but keep the data unchanged if an error happens.
What I have is a working happy path
Begin transaction;
DELETE FROM mytable;
INSERT INTO mytable SELECT * FROM mytable_staging ;
Commit transaction;
In case the insert statement is failing how can I rollback the transaction?
PostgreSQL transactions roll back on error automatically; this is the atomicity guarantee of ACID:
Atomicity − Ensures that all operations within the work unit are
completed successfully; otherwise, the transaction is aborted at the
point of failure and previous operations are rolled back to their
former state.
Consistency − Ensures that the database properly changes states upon a
successfully committed transaction.
Isolation − Enables transactions to operate independently of and
transparent to each other.
Durability − Ensures that the result or effect of a committed
transaction persists in case of a system failure.
You can rollback a Postgres transaction using the ROLLBACK [WORK | TRANSACTION] statement:
Begin transaction;
DELETE FROM mytable;
INSERT INTO mytable SELECT * FROM mytable_staging ;
Rollback transaction;
All the SQL commands are case-insensitive and the transaction part of the statement is optional, but I like to include it for clarity.
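Here is a minimal sketch of that pattern, using Python's sqlite3 as a stand-in for Postgres (table names match the question; the NOT NULL column that forces the failure is an assumption for the demo):

```python
import sqlite3

# Delete-then-reload pattern: if the INSERT fails, roll back so that
# mytable keeps its original rows, exactly as asked above.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
cur = conn.cursor()
cur.execute("CREATE TABLE mytable (id INTEGER NOT NULL)")
cur.execute("CREATE TABLE mytable_staging (id INTEGER)")
cur.execute("INSERT INTO mytable VALUES (1)")
cur.execute("INSERT INTO mytable_staging VALUES (NULL)")  # will violate NOT NULL

try:
    cur.execute("BEGIN")
    cur.execute("DELETE FROM mytable")
    cur.execute("INSERT INTO mytable SELECT id FROM mytable_staging")  # fails
    cur.execute("COMMIT")
except sqlite3.IntegrityError:
    cur.execute("ROLLBACK")  # the deleted row comes back

print(cur.execute("SELECT id FROM mytable").fetchall())  # [(1,)]
```

After the rollback, mytable still contains its original row, as if the failed load had never run.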

Locking in Postgres function

Let's say I have a transactions table and transaction_summary table. I have created following trigger to update transaction_summary table.
CREATE OR REPLACE FUNCTION doSomeThing() RETURNS TRIGGER AS
$BODY$
DECLARE
rec_cnt bigint;
BEGIN
-- lock rows which have to be updated
SELECT count(1) from (SELECT 1 FROM transaction_summary WHERE receiver = new.receiver FOR UPDATE) r INTO rec_cnt ;
IF rec_cnt = 0
THEN
-- if there are no rows then create new entry in summary table
-- lock whole table
LOCK TABLE "transaction_summary" IN ACCESS EXCLUSIVE MODE;
INSERT INTO transaction_summary( ... ) VALUES ( ... );
ELSE
UPDATE transaction_summary SET ... WHERE receiver = new.receiver;
END IF;
SELECT count(1) from (SELECT 1 FROM transaction_summary WHERE sender = new.sender FOR UPDATE) r INTO rec_cnt ;
IF rec_cnt = 0
THEN
LOCK TABLE "transaction_summary" IN ACCESS EXCLUSIVE MODE;
INSERT INTO transaction_summary( ... ) VALUES ( ... );
ELSE
UPDATE transaction_summary SET ... WHERE sender = new.sender;
END IF;
RETURN new;
END;
$BODY$
language plpgsql;
Question: will there be a deadlock? According to my understanding, a deadlock might happen like this:
_________
|__table__| <- executor #1 waits on executor #2 to be able to lock the whole table AND
|_________| executor #2 waits on executor #1 to be able to lock the whole table
|_________|
|_________| <- row is locked by executor #1
|_________|
|_________| <- row is locked by executor #2
It seems the only option is to lock the whole table at the beginning of every transaction.
Are your 'SELECT 1 FROM transactions WHERE ...' meant to access 'transactions_summary' instead? Also, notice that those two queries can at least theoretically deadlock each other if two DB transactions are inserting two 'transactions' rows, with new.sender1=new.receiver2 and new.receiver1=new.sender2.
You can't, in general, guarantee that you won't get a deadlock from a database. Even if you try to prevent them by writing your queries carefully (e.g., ordering updates) you can still get caught out, because you can't control the order of INSERT/UPDATE, or of constraint checks. In any case, comparing every transaction against every other to check for deadlocks doesn't scale as your application grows.
So, your code should always be prepared to re-run transactions when you get 'deadlock detected' errors. If you do that and you think that conflicting transactions will be uncommon then you might as well let your deadlock handling code deal with it.
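That retry logic might look something like this sketch (sqlite3 stands in here; against Postgres with psycopg2 you would catch its deadlock/serialization-failure errors instead, and the function and table names are illustrative):

```python
import sqlite3
import time

# Re-run a whole transaction when it fails with a lock/deadlock error,
# backing off between attempts and capping the number of retries.
def run_with_retry(conn, work, attempts=5):
    for attempt in range(attempts):
        try:
            with conn:                 # commits on success, rolls back on error
                return work(conn)
        except sqlite3.OperationalError:   # e.g. "database is locked"
            time.sleep(0.01 * 2 ** attempt)  # exponential back-off, then retry
    raise RuntimeError("transaction failed after retries")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
run_with_retry(conn, lambda c: c.execute("INSERT INTO t VALUES (1)"))
print(conn.execute("SELECT x FROM t").fetchall())  # [(1,)]
```

The key points are: re-run the whole transaction (not just the failing statement), back off between attempts, and cap the number of retries.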
If you think deadlocks will be common then it might cause you a performance problem - although contending on a big table lock could be, too. Here are some options:
If new.receiver and new.sender are, for example, the IDs of rows in a MyUsers table, you could require all code which inserts into 'transactions_summary' to first do 'SELECT 1 FROM MyUsers WHERE id IN (user1, user2) FOR UPDATE'. It'll break if someone forgets, but so will your table locking. By doing it that way you'll swap one big table lock for many separate row locks.
Add UNIQUE constraints to transactions_summary and look for the error when it's violated. You should probably add constraints anyway, even if you handle this another way. It'll detect bugs.
You could allow duplicate transaction_summary rows, and require users of that table to add them up. Messy, and easy for developers who don't know to create bugs (though you could add a view which does the adding). But if you really can't take the performance hit of locking and deadlocks you could do it.
You could try the SERIALIZABLE transaction isolation level and take out the table locks. By my reading, the SELECT ... FOR UPDATE should create a predicate lock (and so should a plain SELECT). That'd stop any other transaction that does a conflicting insert from committing successfully. However, using SERIALIZABLE throughout your application will cost you performance and give you a lot more transactions to retry.
Here's how the SERIALIZABLE transaction isolation level works:
create table test (id serial, x integer, total integer); ...
Transaction 1:
DB=# begin transaction isolation level serializable;
BEGIN
DB=# insert into test (x, total) select 3, 100 where not exists (select true from test where x=3);
INSERT 0 1
DB=# select * from test;
id | x | total
----+---+-------
1 | 3 | 100
(1 row)
DB=# commit;
COMMIT
Transaction 2, interleaved line for line with the first:
DB=# begin transaction isolation level serializable;
BEGIN
DB=# insert into test (x, total) select 3, 200 where not exists (select true from test where x=3);
INSERT 0 1
DB=# select * from test;
id | x | total
----+---+-------
2 | 3 | 200
(1 row)
DB=# commit;
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.

Transaction mismatch in EntityFramework only

I have a sproc
CREATE PROCEDURE [dbo].[GetNextImageRequest]
(...) AS
DECLARE @ReturnValue BIT
SET @ReturnValue = 1
-- Set paranoid level isolation: only one access at a time!
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
BEGIN TRY
...
UPDATE STATEMENT THAT THROWS FOREIGN KEY EXCEPTION
IF @@trancount > 0
BEGIN
COMMIT TRANSACTION
END
SET @ReturnValue = 0
END TRY
BEGIN CATCH
IF @@trancount > 0
BEGIN
ROLLBACK TRANSACTION
END
SET @ReturnValue = 1
END CATCH
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET NOCOUNT ON
RETURN @ReturnValue -- 0=success
GO
When I call this manually from SQL Server Management Studio, I don't get any exception.
When I call this through Entity Framework 6, I get
Transaction count after EXECUTE indicates a mismatching number of
BEGIN and COMMIT statements. Previous count = 1, current count = 0.
What am I doing wrong? The foreign key violation is causing a rollback, but I am checking @@TRANCOUNT.
Entity Framework 6 (EF6)
Add this line before ExecuteSqlCommand
db.Configuration.EnsureTransactionsForFunctionsAndCommands = false;
It turns out EF6 wraps the sproc call in its own transaction.
The workaround is to check whether a transaction is already open (@@TRANCOUNT > 0) and only issue BEGIN|COMMIT|ROLLBACK TRANSACTION if it isn't already open.

Create a stored procedure in PostgreSQL that is never rolled back?

From the PostgreSQL 9.0 manual:
Important: To avoid blocking concurrent transactions that obtain
numbers from the same sequence, a nextval operation is never rolled
back; that is, once a value has been fetched it is considered used,
even if the transaction that did the nextval later aborts. This means
that aborted transactions might leave unused "holes" in the sequence
of assigned values. setval operations are never rolled back, either.
So, how can I create a PL/pgSQL function with the same behaviour: "operation is never rolled back"?
In a call like this, whatever the function changes will NOT be rolled back:
BEGIN;
SELECT composite_nextval(...);
ROLLBACK;
You can use a savepoint after selecting composite_nextval. Then just roll back to that savepoint and commit the rest.
Something like this:
BEGIN;
SELECT composite_nextval(...);
SAVEPOINT my_savepoint;
INSERT INTO some_table(a) VALUES (2);
ROLLBACK TO SAVEPOINT my_savepoint;
COMMIT;
This way, select composite_nextval(...) will be committed, but insert into some_table will not.

insert exclusive locking

I have thought about the following SQL statements:
INSERT INTO A(a1, a2)
SELECT b1, udf_SomeFunc(b1)
FROM B
Where udf_SomeFunc performs a select on table A. As I understand it, first a shared lock is set on A (I am talking just about table A now); then, after this lock is released, an exclusive lock is obtained to insert the data. The question is: is it possible that another transaction gets an exclusive lock on table A just before the current transaction takes its exclusive lock on A?
Food for thought
create table test(id int)
insert test values(1)
GO
Now in one window run this
begin tran
insert into test
select * from test with (holdlock, updlock)
waitfor delay '00:00:30'
commit
while that is running open another connection and do this
begin tran
insert into test
select * from test with (holdlock, updlock)
commit
As you can see, the second insert doesn't happen until the first transaction is complete.
Now take out the locking hints and observe the difference.