How to avoid deadlock when deleting/updating the same record in Postgres

I have a scenario while working with Postgres.
We have one table with a primary key, and there are two concurrent processes: one updates records, the other deletes them.
We are now facing deadlocks when the two processes update and delete the same record in the table.
I googled how to avoid the deadlock, and someone suggested using "SELECT FOR UPDATE".
Suppose there are two statements as following
update table_A set name='aaaa' where cid=1;
delete from table_A where cid=1;
My question is,
(1) Do I need to add "SELECT FOR UPDATE" to both statements or just one statement in order to avoid deadlock?
(2) Could you give a complete example of how to add "SELECT FOR UPDATE"? I mean, what does it look like after you add it? I have never done it before and want to learn how.

SELECT ... FOR UPDATE locks the selected rows so that any other transaction can neither perform an update nor a SELECT ... FOR UPDATE on these rows. These transactions must wait until the transaction with the first SELECT ... FOR UPDATE releases the lock on the rows again.
If SELECT ... FOR UPDATE is the first statement in every transaction that touches the row, no deadlock can occur, because no transaction can hold a lock on other rows that a competing transaction will need later in its course.
So your two transactions should look like this:
BEGIN;
SELECT * FROM table_A WHERE cid = 1 FOR UPDATE;
-- some other statements
UPDATE table_A SET name = 'aaaa' WHERE cid = 1;
END;
and:
BEGIN;
SELECT * FROM table_A WHERE cid = 1 FOR UPDATE;
-- some other statements
DELETE FROM table_A WHERE cid = 1;
END;
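To see the blocking behavior, you can run the two transactions in separate psql sessions (the "Session 1" / "Session 2" labels below are only for illustration):
-- Session 1
BEGIN;
SELECT * FROM table_A WHERE cid = 1 FOR UPDATE;   -- acquires the row lock
-- Session 2: the same SELECT ... FOR UPDATE now blocks until session 1 finishes
BEGIN;
SELECT * FROM table_A WHERE cid = 1 FOR UPDATE;
-- Session 1
UPDATE table_A SET name = 'aaaa' WHERE cid = 1;
COMMIT;   -- session 2's SELECT returns at this point
-- Session 2
DELETE FROM table_A WHERE cid = 1;
COMMIT;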

Related

When will a select query acquire ExclusiveLock and RowExclusiveLock in PostgreSQL?

According to the official documentation, a select query only needs a share lock, but I found that my select query acquired an exclusive lock. How did that happen? Here is my select query:
select ga.id
from group_access_strategy ga
left outer join person_group pg on ga.person_group_id = pg.id
where pg.id = 3
What is different from the official documentation is that I added a left join.
Most likely you ran another command like ALTER TABLE person_group ... (Access Exclusive) or an UPDATE/INSERT/DELETE (Row exclusive) in the same transaction. Locks will persist until a transaction is completed or aborted.
So if you ran:
BEGIN; --BEGIN starts the transaction
UPDATE group_access_strategy SET some_column = 'some data' WHERE id = 1;
SELECT
ga.id
FROM
group_access_strategy ga
LEFT OUTER JOIN person_group pg ON (ga.person_group_id = pg.id)
WHERE
pg.id = 3;
The UPDATE statement would have created a Row Exclusive Lock that will not be released until you end the transaction by:
Saving all of the changes made since BEGIN:
COMMIT;
OR
nullifying any of the effects of statements since BEGIN with
ROLLBACK;
If you're new to Postgres and typically run your queries in an IDE like pgAdmin or DataGrip, the BEGIN / COMMIT / ROLLBACK commands are issued behind the scenes for you when you click the corresponding UI buttons.
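If you want to confirm which locks an open transaction still holds, a quick check from another session looks like this (pg_locks and pg_class are standard system catalogs):
SELECT l.locktype, c.relname, l.mode, l.granted, l.pid
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname IN ('group_access_strategy', 'person_group');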

Lock row, release later

I'm trying to understand how to lock a row, and only release that lock later.
I have a table like this :
create table testTable (Name varchar(100));
Some test data
insert into testTable (name) select 'Bob';
insert into testTable (name) select 'John';
insert into testTable (name) select 'Steve';
Now, I want to select one of those rows and prevent other queries from seeing it. I achieve that like this:
begin transaction;
select * from testTable where name = 'Bob' for update;
In another window, I do this :
select * from testTable for update skip locked;
Great, I don't see 'Bob' in that result set. Now I want to do something with the retrieved row (Bob), and after I have done my work, I want to release that row again. The simple answer would be to do:
commit transaction
However, I am running multiple transactions on the same connection, so I can't just begin and commit transactions all over the show. Ideally I would like to have a "named" transaction, something like :
begin transaction 'myTransaction';
select * from testTable where name = 'Bob' for update;
//do stuff with the data, outside sql then later call ...
commit transaction 'myTransaction';
But Postgres doesn't support that. I have found "PREPARE TRANSACTION", but that seems to be a pear-shaped path I don't want to go down, especially as those transactions even seem to persist through restarts.
Is there any way I can get a reference to commit/rollback a specific transaction?
You can have only one transaction open at a time in a database session, so the question as such is moot.
But I assume that you do not really want to run a transaction, you want to block access to a certain row for a while.
It is usually not a good idea to use regular database locks for such a purpose (the exception is advisory locks, which serve exactly that purpose but are not tied to table rows). The problem is that long database transactions keep autovacuum from doing its job.
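For reference, a session-level advisory lock (held until released or the session ends, independent of any transaction) looks like this; the key 42 is an arbitrary value, e.g. the row's primary key:
SELECT pg_try_advisory_lock(42);   -- returns true if the lock was acquired
-- ... do the work, with no open transaction required ...
SELECT pg_advisory_unlock(42);     -- release the lock when done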
I recommend that you add a status column to the table and change the status rather than locking the row. That would serve the same purpose in a more natural fashion and make your problem go away.
If you are concerned that the status flag might not get cleared due to application logic problems, replace it with a visible_from column of type timestamp with time zone that initially contains -infinity. Instead of locking the row, set the value to current_timestamp + INTERVAL '5 minutes'. Only select rows that fulfill WHERE visible_from < current_timestamp. That way the “lock” will automatically expire after 5 minutes.
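A minimal sketch of that approach on the testTable from the question:
ALTER TABLE testTable ADD COLUMN visible_from timestamp with time zone NOT NULL DEFAULT '-infinity';
-- "Lock" a row by hiding it for five minutes:
UPDATE testTable SET visible_from = current_timestamp + INTERVAL '5 minutes' WHERE name = 'Bob';
-- Workers only pick up rows that are currently visible:
SELECT * FROM testTable WHERE visible_from < current_timestamp;
-- If the work finishes early, make the row visible again right away:
UPDATE testTable SET visible_from = '-infinity' WHERE name = 'Bob';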

Locking in Postgres function

Let's say I have a transactions table and a transaction_summary table. I have created the following trigger to update the transaction_summary table.
CREATE OR REPLACE FUNCTION doSomeThing() RETURNS TRIGGER AS
$BODY$
DECLARE
rec_cnt bigint;
BEGIN
-- lock rows which have to be updated
SELECT count(1) from (SELECT 1 FROM transaction_summary WHERE receiver = new.receiver FOR UPDATE) r INTO rec_cnt ;
IF rec_cnt = 0
THEN
-- if there are no rows then create new entry in summary table
-- lock whole table
LOCK TABLE "transaction_summary" IN ACCESS EXCLUSIVE MODE;
INSERT INTO transaction_summary( ... ) VALUES ( ... );
ELSE
UPDATE transaction_summary SET ... WHERE receiver = new.receiver;
END IF;
SELECT count(1) from (SELECT 1 FROM transaction_summary WHERE sender = new.sender FOR UPDATE) r INTO rec_cnt ;
IF rec_cnt = 0
THEN
LOCK TABLE "transaction_summary" IN ACCESS EXCLUSIVE MODE;
INSERT INTO transaction_summary( ... ) VALUES ( ... );
ELSE
UPDATE transaction_summary SET ... WHERE sender = new.sender;
END IF;
RETURN new;
END;
$BODY$
language plpgsql;
Question: Will there be a deadlock? According to my understanding, a deadlock might happen like this:
_________
|__table__| <- executor #1 waits on executor #2 to be able to lock the whole table AND
|_________| executor #2 waits on executor #1 to be able to lock the whole table
|_________|
|_________| <- row is locked by executor #1
|_________|
|_________| <- row is locked by executor #2
It seems that the only option is to lock the whole table at the beginning of every transaction.
Are your 'SELECT 1 FROM transactions WHERE ...' meant to access 'transactions_summary' instead? Also, notice that those two queries can at least theoretically deadlock each other if two DB transactions are inserting two 'transactions' rows, with new.sender1=new.receiver2 and new.receiver1=new.sender2.
You can't, in general, guarantee that you won't get a deadlock from a database. Even if you try and prevent them by writing your queries carefully (eg, ordering updates) you can still get caught out because you can't control the order of INSERT/UPDATE, or of constraint checks. In any case, comparing every transaction against every other to check for deadlocks doesn't scale as your application grows.
So, your code should always be prepared to re-run transactions when you get 'deadlock detected' errors. If you do that and you think that conflicting transactions will be uncommon then you might as well let your deadlock handling code deal with it.
If you think deadlocks will be common then it might cause you a performance problem - although contending on a big table lock could be, too. Here are some options:
If new.receiver and new.sender are, for example, the IDs of rows in a MyUsers table, you could require all code which inserts into 'transactions_summary' to first do 'SELECT 1 FROM MyUsers WHERE id IN (user1, user2) FOR UPDATE'. It'll break if someone forgets, but so will your table locking. By doing it that way you'll swap one big table lock for many separate row locks (a minimal sketch follows the SERIALIZABLE example below).
Add UNIQUE constraints to transactions_summary and look for the error when it's violated. You should probably add constraints anyway, even if you handle this another way. It'll detect bugs.
You could allow duplicate transaction_summary rows, and require users of that table to add them up. Messy, and easy for developers who don't know to create bugs (though you could add a view which does the adding). But if you really can't take the performance hit of locking and deadlocks you could do it.
You could try the SERIALIZABLE transaction isolation level and take out the table locks. By my reading, the SELECT ... FOR UPDATE should create a predicate lock (and so should a plain SELECT). That'd stop any other transaction that does a conflicting insert from committing successfully. However, using SERIALIZABLE throughout your application will cost you performance and give you a lot more transactions to retry.
Here's how SERIALIZABLE transaction isolation level works:
create table test (id serial, x integer, total integer); ...
Transaction 1:
DB=# begin transaction isolation level serializable;
BEGIN
DB=# insert into test (x, total) select 3, 100 where not exists (select true from test where x=3);
INSERT 0 1
DB=# select * from test;
id | x | total
----+---+-------
1 | 3 | 100
(1 row)
DB=# commit;
COMMIT
Transaction 2, interleaved line for line with the first:
DB=# begin transaction isolation level serializable;
BEGIN
DB=# insert into test (x, total) select 3, 200 where not exists (select true from test where x=3);
INSERT 0 1
DB=# select * from test;
id | x | total
----+---+-------
2 | 3 | 200
(1 row)
DB=# commit;
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.
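Going back to the first option above, a minimal sketch of the parent-row locking pattern might look like this (MyUsers, the ids 1 and 2, and the total column are illustrative assumptions, not part of the original schema):
BEGIN;
-- Lock both parent rows up front; ORDER BY id keeps the lock order deterministic,
-- so two transactions touching the same pair queue here instead of deadlocking later.
SELECT 1 FROM MyUsers WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
-- With the parent rows locked, the corresponding summary rows can be
-- updated (or inserted if missing) without racing another writer.
UPDATE transaction_summary SET total = total + 100 WHERE receiver = 1;
UPDATE transaction_summary SET total = total - 100 WHERE sender = 2;
COMMIT;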

SQL Isolation levels or locks in large procedures

I have big stored procedures that handle user actions.
They consist of multiple select statements. These are filtered, most of the time returning only one row. The selects are copied into temp tables or otherwise evaluated.
Finally, a MERGE statement makes the needed changes in the DB.
All is encapsulated in a transaction.
I have concurrent input from users, and the selected rows of the select statements should be locked to keep data integrity.
How can I lock the selected Rows of all select statements, so that they aren't updated through other transactions while the current transaction is in process?
Does a table hint combination of ROWLOCK and HOLDLOCK work in a way that only the selected rows are locked, or are the whole tables locked because of the HOLDLOCK?
SELECT *
FROM dbo.Test
WITH (ROWLOCK, HOLDLOCK)
WHERE id = @testId
Can I instead use
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
right after the start of the transaction? Or does this lock the whole tables?
I am using SQL2008 R2, but would also be interested if things work differently in SQL2012.
PS: I just read about the table hints UPDLOCK and SERIALIZABLE. UPDLOCK seems to be a solution to lock only one row, and it seems that UPDLOCK always takes locks, whereas ROWLOCK only specifies that locks are row-based IF locks are applied. I am still confused about the best way to solve this...
Changing the isolation level fixed the problem (and locked on row level):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Here is how I tested it.
I created a statement in a blank page of the SQL Management Studio:
begin tran
select
*
into #message
from dbo.MessageBody
where MessageBody.headerId = 28
WAITFOR DELAY '0:00:05'
update dbo.MessageBody set [message] = 'message1'
where headerId = (select headerId from #message)
select * from dbo.MessageBody where headerId = (select headerId from #message)
drop table #message
commit tran
While executing this statement (which takes at last 5 seconds due to the delay), I called the second query in another window:
begin tran
select
*
into #message
from dbo.MessageBody
where MessageBody.headerId = 28
update dbo.MessageBody set [message] = 'message2'
where headerId = (select headerId from #message)
select * from dbo.MessageBody where headerId = (select headerId from #message)
drop table #message
commit tran
and I was rather surprised that it executed instantaneously. This was due to the default SQL Server transaction isolation level READ COMMITTED (http://technet.microsoft.com/en-us/library/ms173763.aspx). Since the update in the first script happens after the delay, there are no uncommitted changes yet while the second script runs, so row 28 is read and updated.
Changing the isolation level to SERIALIZABLE prevented this, but it also prevented concurrency - both scripts were executed consecutively.
That was OK, since both scripts read and changed the same row (headerId = 28). After changing headerId to another value in the second script, the statements were executed in parallel. So the lock from SERIALIZABLE seems to be at row level.
Adding the table hint
WITH (SERIALIZABLE)
in the first select of the first script also prevents further reads of the selected row.
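For clarity, that means the first script's initial select would look like this (same test table as above):
begin tran
select *
into #message
from dbo.MessageBody with (serializable)
where MessageBody.headerId = 28
-- rest of the first script unchanged
commit tran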

After Insert Trigger....Looping

What would the processing load concern be if I had an "After Insert" trigger created on a table and in that trigger I performed a While loop to iterate through "potentially" multiple rows?
The end result is that 99.999% of the time I will have only 1 row, but as the future is unpredictable, I also want to be able to handle multiple rows being inserted.
Trigger Model:
1) Insert information into the table
2) Create views specific to the client, via stored procedures (if possible)
What Say You? :)
I haven't fully developed this, but it is the design I am looking for; it may not be structurally sound, but it should get the point across.
CREATE TRIGGER dbo.New_Client_Setup
ON dbo.client
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
--Fill Temp Table
select * into #clients
from inserted
--Iterate through Temp Table
While (select count(*) from #clients) <> 0 BEGIN
declare @id int, @clnt nvarchar(10)
-- Grab one client row from the temp table
select top(1)
@id = id
, @clnt = short
from #clients
order by id desc
Execute dbo.sp_Create_View_Client @id, @clnt
-- Drop used ID
delete from #clients
where id = @id
END
Drop table #clients
END
GO
Again, observe the design of the trigger, not necessarily the syntactic sugar.
Design-wise, reading the comments, I think you do not necessarily need to do this in a trigger. I would say you should do it as part of your insert statement, in a transaction - i.e. do the insert, and then do the loop that you want to do (whatever that does - execute dbo.sp_Create_View_Client)...
The second thing I would mention is: what exactly is dbo.sp_Create_View_Client doing - does it strictly depend on the insert? Meaning, what happens if the insert works fine and the trigger fails? I would do the whole insert and execution of the SP in one transaction, so as to preserve data integrity.
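A minimal sketch of that suggestion, assuming dbo.client.id is an IDENTITY column and that 'ACME' and the column list are just placeholders:
BEGIN TRANSACTION;
DECLARE @id int, @clnt nvarchar(10);
-- Insert the new client (placeholder column and value)
INSERT INTO dbo.client (short)
VALUES (N'ACME');
-- Pick up the key of the row just inserted
SELECT @id = SCOPE_IDENTITY(), @clnt = N'ACME';
-- Run the per-client setup in the same transaction,
-- so a failure here rolls back the insert as well
EXECUTE dbo.sp_Create_View_Client @id, @clnt;
COMMIT TRANSACTION;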