Does the ctid chain in Postgres fork under concurrent update of a row? - postgresql

I am reading this guide and this guide to get a better idea of how postgres internals work, and mostly things make sense. However, I have a question about how concurrent updates would impact the ctid chain.
As I understand it, ctids point to the "next" (page,pointer) combo to describe revisions to a given row. However, what happens when two concurrent transactions are executing, attempting to update the same row? The transactions have an ordering imposed by their txids, but that is no guarantee of what order they attempt to change the row in. What are the possible outcomes of these updates?
For example, if we have a table t with a single column s as in the second guide and one update, we might have:
=> BEGIN;
=> UPDATE t SET s = 'BAR';
=> SELECT txid_current();
txid_current
--------------
3666
=> SELECT * FROM t;
 id |  s
----+-----
  1 | BAR
(1 row)
And when we use the heap_page function to look at the page internals, it looks like:
=> SELECT * FROM heap_page('t',0);
 ctid  | state  |   xmin   | xmax  | t_ctid
-------+--------+----------+-------+--------
 (0,1) | normal | 3664 (c) | 3666  | (0,2)
 (0,2) | normal | 3666     | 0 (a) | (0,2)
If two updates occur at the same time, how does the final table state vary with:
The serialization levels of the two updates
The order the transactions start
The order the transactions write to the page for this row?
UPDATE: Laurenz is correct. Corroborating pseudocode for how updates are executed, from later in one of the linked guides, is:
(1)  FOR each row that will be updated by this UPDATE command
(2)      WHILE true
             /* The First Block */
(3)          IF the target row is being updated THEN
(4)              WAIT for the termination of the transaction that updated the target row
(5)              IF (the status of the terminated transaction is COMMITTED)
                     AND (the isolation level of this transaction is REPEATABLE READ or SERIALIZABLE) THEN
(6)                  ABORT this transaction  /* First-Updater-Win */
                 ELSE
(7)                  GOTO step (2)
                 END IF
             /* The Second Block */
(8)          ELSE IF the target row has been updated by another concurrent transaction THEN
(9)              IF (the isolation level of this transaction is READ COMMITTED) THEN
(10)                 UPDATE the target row
                 ELSE
(11)                 ABORT this transaction  /* First-Updater-Win */
                 END IF
             /* The Third Block */
             ELSE  /* The target row is not yet modified or has been updated by a terminated transaction. */
(12)             UPDATE the target row
             END IF
         END WHILE
     END FOR
Note step (4), which describes the row-locking effect of an xmax set by another transaction.

The answer is simpler than you think: row locks prevent concurrent modifications of the same row, so the problem never arises, and the chain of updates does not fork.
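A minimal two-session sketch of what that looks like, using the same one-row table t and the heap_page helper from the question (the txids and the exact output are illustrative):
-- session 1
=> BEGIN;
=> UPDATE t SET s = 'BAR';   -- sets xmax of (0,1) and creates (0,2)
-- session 2
=> BEGIN;
=> UPDATE t SET s = 'BAZ';   -- blocks: the row is locked by session 1
-- session 1
=> COMMIT;                   -- session 2 wakes up
-- session 2, under READ COMMITTED, re-checks the row, updates the committed
-- version (0,2) and creates (0,3); under REPEATABLE READ or SERIALIZABLE it
-- would instead abort with "could not serialize access due to concurrent update"
=> COMMIT;
=> SELECT * FROM heap_page('t',0);
 ctid  | state  |   xmin   |   xmax   | t_ctid
-------+--------+----------+----------+--------
 (0,1) | normal | 3664 (c) | 3666 (c) | (0,2)
 (0,2) | normal | 3666 (c) | 3667     | (0,3)
 (0,3) | normal | 3667     | 0 (a)    | (0,3)
Whichever transaction gets the row lock first wins; each row version gets at most one successor, so t_ctid always stays a single linear chain.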

Related

Why does PostgreSQL think there is a conflict between the two serializable transactions?

I'm trying to figure out how the serializable isolation level in PostgreSQL works. In theory and according to PostgreSQL's own documentation PostgreSQL should be smart enough to somehow detect serialization conflicts and automatically roll back offending transactions. Yet when I tried to play with serializable isolation level myself I stumbled upon a lot of false positives and started to doubt my own understanding of the concept of serializability or PostgreSQL's implementation of it. Below you can find one of the simplest examples of such false positives:
create table mytab(
class integer,
value integer not null
);
create index mytab_class_idx on mytab (class);
insert into mytab (class, value) values (1, 10);
insert into mytab (class, value) values (1, 20);
insert into mytab (class, value) values (2, 100);
insert into mytab (class, value) values (2, 200);
The table data is the following:
class | value
-------+-------
1 | 10
1 | 20
2 | 100
2 | 200
Then I run two concurrent transactions. The step n comments in the code show the order in which I execute the statements. Following the advice from https://stackoverflow.com/a/42303225/3249257, I explicitly disabled sequential scans to force PostgreSQL to use an index:
SET enable_seqscan=off;
Transaction A:
begin; -- step 1
select sum(value) from mytab where class = 1; -- step 2
insert into mytab(class, value) values (3, 30); -- step 5
commit; -- step 7
Transaction B:
begin; -- step 3
select sum(value) from mytab where class = 2; -- step 4
insert into mytab(class, value) values (4, 300); -- step 6
commit; -- step 8
As I understand it, there shouldn't be any conflict between those two transactions. They don't touch the same rows. However, when I commit the second transaction, it fails with this error:
[40001] ERROR: could not serialize access due to read/write dependencies among transactions
Detail: Reason code: Canceled on identification as a pivot, during commit attempt.
Hint: The transaction might succeed if retried.
What's going on here? Is my understanding of serializable isolation level flawed? Is it a failure of PostgreSQL's heuristics mentioned in this answer https://stackoverflow.com/a/50809788/3249257?
I'm using PostgreSQL 11.5 on x86_64-apple-darwin18.6.0, compiled by Apple LLVM version 10.0.1 (clang-1001.0.46.4), 64-bit.
The problem here is with predicate locks (SIReadLock) that are used by PostgreSQL to figure out whether there is a conflict between concurrent transactions. If you run the query below while the transactions are executing, you will see these locks:
select relation::regclass, locktype, page, tuple, pid from pg_locks
where mode = 'SIReadLock';
In this case, the issue was with page locks on the mytab_class_idx index. If the concurrent transactions happen to acquire a lock on the same page of the mytab_class_idx relation, a serialization conflict occurs. If they acquire locks on different pages, they both commit successfully.
With as little data as in the question, the index entries for all rows fall on the same page, so a serialization conflict is inevitable. For big enough tables serialization conflicts will happen rarely, though not as rarely as they could.
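A sketch of how to confirm the page-granularity explanation (same schema as above; the row counts, and the assumption that the two classes then mostly occupy different index pages, are illustrative):
-- repopulate so that mytab_class_idx spans many pages
truncate mytab;
insert into mytab (class, value) select 1, g from generate_series(1, 10000) g;
insert into mytab (class, value) select 2, g from generate_series(1, 10000) g;
analyze mytab;
-- re-run transactions A and B exactly as above; while both are still open,
-- inspect the predicate locks from a third session:
select relation::regclass, locktype, page, tuple, pid
from pg_locks
where mode = 'SIReadLock';
-- if the page locks taken by the two transactions no longer overlap,
-- both transactions commit without a serialization error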

apparent transaction isolation violation in postgresql

I am using PostgreSQL 9.3.12 on CentOS Linux.
I have two processes connecting to the same database, using a default transaction isolation level of "read committed". According to the postgres docs, one process in a transaction should not "see" changes made by another process in a transaction until they are committed.
A sequence I am seeing is:
process A starts its transaction
process A deletes everything from table T
process B starts its transaction
process B attempts a select for update on one row in table T
process B comes up empty (0 rows) and calls rollback
process A repopulates table T from incoming data
process A commits its transaction
Now, table T should have been populated before both transactions began, and process B's query should have turned up one row. And it does if these processes do not run concurrently.
My understanding is that process B should see the old copy of the desired row in table T, make its changes, and have those changes clobbered by process A's deletion and repopulation of table T. I can't figure out why process B is coming up empty.
Beyond a complete misunderstanding by myself of these preconditions, can anyone think of another reason why I would see this behaviour?
Worry not about the lousy architecture, it is going away. I'm just trying to understand why this situation seems to violate the "read committed" transaction isolation as I understand it.
Thanks.
According to the postgres docs, one process in a transaction should
not "see" changes made by another process in a transaction until they
are committed.
Yes and No - as usual, it depends. The documentation strictly says that:
Read Committed is the default isolation level in PostgreSQL.
When a transaction uses this isolation level, a SELECT query (without
a FOR UPDATE/SHARE clause) sees only data committed before the query
began; it never sees either uncommitted data or changes committed
during query execution by concurrent transactions. In effect, a SELECT
query sees a snapshot of the database as of the instant the query
begins to run. However, SELECT does see the effects of previous
updates executed within its own transaction, even though they are not
yet committed. Also note that two successive SELECT commands can see
different data, even though they are within a single transaction, if
other transactions commit changes after the first SELECT starts and
before the second SELECT starts.
UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands
behave the same as SELECT in terms of searching for target rows: they
will only find target rows that were committed as of the command start
time. However, such a target row might have already been updated (or
deleted or locked) by another concurrent transaction by the time it is
found. In this case, the would-be updater will wait for the first
updating transaction to commit or roll back (if it is still in
progress). If the first updater rolls back, then its effects are
negated and the second updater can proceed with updating the
originally found row. If the first updater commits, the second updater
will ignore the row if the first updater deleted it, otherwise it will
attempt to apply its operation to the updated version of the row. The
search condition of the command (the WHERE clause) is re-evaluated to
see if the updated version of the row still matches the search
condition. If so, the second updater proceeds with its operation using
the updated version of the row. In the case of SELECT FOR UPDATE and
SELECT FOR SHARE, this means it is the updated version of the row that
is locked and returned to the client.
In other words, a plain SELECT behaves differently from SELECT FOR UPDATE, DELETE, and UPDATE.
You can create a simple test case to observe that behaviour:
Session 1
test=> START TRANSACTION;
START TRANSACTION
test=> SELECT * FROM test;
x
----
1
2
3
4
5
6
7
8
9
10
(10 rows)
test=> DELETE FROM test;
DELETE 10
test=>
Now login in another Session 2:
test=> START TRANSACTION;
START TRANSACTION
test=> SELECT * FROM test;
x
----
1
2
3
4
5
6
7
8
9
10
(10 rows)
test=> SELECT * FROM test WHERE x = 5 FOR UPDATE;
After the last command, SELECT ... FOR UPDATE, session 2 "hangs" and is waiting for something ...
Back in session 1
test=> insert into test select * from generate_series(1,10);
INSERT 0 10
test=> commit;
COMMIT
And now when you go back to session 2 you will see this:
test=> SELECT * FROM test WHERE x = 5 FOR UPDATE;
x
---
(0 rows)
test=> select * from test;
x
----
1
2
3
4
5
6
7
8
9
10
(10 rows)
That is, a plain SELECT still doesn't see any changes, while SELECT ... FOR UPDATE does see that the rows have been deleted. But it doesn't see the new rows inserted by session 1.
In fact, the sequence you are seeing is:
process A starts its transaction
process A deletes everything from table T
process B starts its transaction
process B attempts a select for update on one row in table T
process B "hangs" and is waiting until session A does a commit or rollback
process A repopulates table T from incoming data
process A commits its transaction
process B comes up empty (0 rows, after process A's commit) and calls rollback

State machine represented on DB, enforcement of state transition

I've got a finite state machine which represents the phases of a job.
I need to represent the states in a Postgres database. I would like to enforce the code correctness by forbidding updates from one state to the other unless the state machine allows so.
A naive way to accomplish my goal would be to acquire an exclusive lock on the table, check the current state and the next state within the transaction, and abort with an error in case of an invalid update.
This is clearly a performance killer, since I would be locking the Job table at each state transition.
Is there a way to accomplish the same goal via constraints?
A trigger is the answer to your problem.
Let's consider simple table:
CREATE TABLE world (id serial PRIMARY KEY, state VARCHAR);
insert into world (state) values ('big bang');
insert into world (state) values ('stars formation');
insert into world (state) values ('human era');
This is the function that will be called by the trigger; define your state machine logic here. RAISE EXCEPTION is useful, since you can provide a custom message.
CREATE FUNCTION check_world_change() RETURNS trigger as $check_world_change$
BEGIN
IF OLD.state = 'big bang' AND NEW.state = 'human era' THEN
RAISE EXCEPTION 'Dont skip stars';
END IF;
IF OLD.state = 'stars formation' AND NEW.state = 'big bang' THEN
RAISE EXCEPTION 'Impossible to reverse order of things';
END IF;
RETURN NEW;
END;
$check_world_change$ LANGUAGE plpgsql;
And define trigger for your table:
CREATE TRIGGER check_world_change BEFORE UPDATE ON world
FOR EACH ROW EXECUTE PROCEDURE check_world_change();
Now, when you try to make a forbidden update to the state of one of the rows, you'll get an error:
world=# select * from world;
id | state
----+-----------------
2 | stars formation
1 | human era
3 | big bang
(3 rows)
world=# update world set state='human era' where state='big bang';
ERROR: Dont skip stars
world=# select * from world;
id | state
----+-----------------
2 | stars formation
1 | human era
3 | big bang
(3 rows)
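For contrast, a transition that the function does not reject goes through normally (output sketched for the same data):
world=# update world set state='stars formation' where state='big bang';
UPDATE 1
Note that only the transitions you explicitly RAISE EXCEPTION for are blocked; everything else is allowed. For a real state machine you would typically invert the logic and raise unless the (OLD.state, NEW.state) pair is in a whitelist of valid transitions.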
References:
https://www.postgresql.org/docs/9.5/static/plpgsql-trigger.html
https://www.postgresql.org/docs/9.5/static/sql-createtrigger.html

Locking in Postgres function

Let's say I have a transactions table and transaction_summary table. I have created following trigger to update transaction_summary table.
CREATE OR REPLACE FUNCTION doSomeThing() RETURNS TRIGGER AS
$BODY$
DECLARE
rec_cnt bigint;
BEGIN
-- lock rows which have to be updated
SELECT count(1) from (SELECT 1 FROM transaction_summary WHERE receiver = new.receiver FOR UPDATE) r INTO rec_cnt ;
IF rec_cnt = 0
THEN
-- if there are no rows then create new entry in summary table
-- lock whole table
LOCK TABLE "transaction_summary" IN ACCESS EXCLUSIVE MODE;
INSERT INTO transaction_summary( ... ) VALUES ( ... );
ELSE
UPDATE transaction_summary SET ... WHERE receiver = new.receiver;
END IF;
SELECT count(1) from (SELECT 1 FROM transaction_summary WHERE sender = new.sender FOR UPDATE) r INTO rec_cnt ;
IF rec_cnt = 0
THEN
LOCK TABLE "transaction_summary" IN ACCESS EXCLUSIVE MODE;
INSERT INTO transaction_summary( ... ) VALUES ( ... );
ELSE
UPDATE transaction_summary SET ... WHERE sender = new.sender;
END IF;
RETURN new;
END;
$BODY$
language plpgsql;
Question: Will there be a deadlock? According to my understanding, a deadlock might happen like this:
_________
|__table__| <- executor #1 waits on executor #2 to be able to lock the whole table AND
|_________| executor #2 waits on executor #1 to be able to lock the whole table
|_________|
|_________| <- row is locked by executor #1
|_________|
|_________| <- row is locked by executor #2
It seems that the only option is to lock the whole table at the beginning of every transaction.
Are your 'SELECT 1 FROM transactions WHERE ...' meant to access 'transactions_summary' instead? Also, notice that those two queries can at least theoretically deadlock each other if two DB transactions are inserting two 'transactions' rows, with new.sender1=new.receiver2 and new.receiver1=new.sender2.
You can't, in general, guarantee that you won't get a deadlock from a database. Even if you try and prevent them by writing your queries carefully (eg, ordering updates) you can still get caught out because you can't control the order of INSERT/UPDATE, or of constraint checks. In any case, comparing every transaction against every other to check for deadlocks doesn't scale as your application grows.
So, your code should always be prepared to re-run transactions when you get 'deadlock detected' errors. If you do that and you think that conflicting transactions will be uncommon then you might as well let your deadlock handling code deal with it.
If you think deadlocks will be common then it might cause you a performance problem - although contending on a big table lock could be, too. Here are some options:
If new.receiver and new.sender are, for example, the IDs of rows in a MyUsers table, you could require all code which inserts into 'transaction_summary' to first do 'SELECT 1 FROM MyUsers WHERE id IN (user1, user2) FOR UPDATE' (see the sketch after this list). It'll break if someone forgets, but so will your table locking. By doing it that way you'll swap one big table lock for many separate row locks.
Add UNIQUE constraints to transactions_summary and look for the error when it's violated. You should probably add constraints anyway, even if you handle this another way. It'll detect bugs.
You could allow duplicate transaction_summary rows, and require users of that table to add them up. Messy, and easy for developers who don't know to create bugs (though you could add a view which does the adding). But if you really can't take the performance hit of locking and deadlocks you could do it.
You could try the SERIALIZABLE transaction isolation level and take out the table locks. By my reading, the SELECT ... FOR UPDATE should create a predicate lock (and so should a plain SELECT). That'd stop any other transaction that does a conflicting insert from committing successfully. However, using SERIALIZABLE throughout your application will cost you performance and give you a lot more transactions to retry.
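A minimal sketch of the first option inside the trigger function, assuming sender and receiver really are IDs of rows in a MyUsers table (the table and column names are illustrative, not from the original schema):
-- at the top of doSomeThing(), before touching transaction_summary:
-- lock both parent rows; the locks are taken in id order (after the sort),
-- so two transactions involving the same pair of users serialize on these
-- row locks instead of on a whole-table lock
PERFORM 1
FROM MyUsers
WHERE id IN (new.sender, new.receiver)
ORDER BY id
FOR UPDATE;
Taking the parent-row locks in a consistent order is also what avoids the lock-ordering deadlock sketched in the question.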
Here's how SERIALIZABLE transaction isolation level works:
create table test (id serial, x integer, total integer); ...
Transaction 1:
DB=# begin transaction isolation level serializable;
BEGIN
DB=# insert into test (x, total) select 3, 100 where not exists (select true from test where x=3);
INSERT 0 1
DB=# select * from test;
id | x | total
----+---+-------
1 | 3 | 100
(1 row)
DB=# commit;
COMMIT
Transaction 2, interleaved line for line with the first:
DB=# begin transaction isolation level serializable;
BEGIN
DB=# insert into test (x, total) select 3, 200 where not exists (select true from test where x=3);
INSERT 0 1
DB=# select * from test;
id | x | total
----+---+-------
2 | 3 | 200
(1 row)
DB=# commit;
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during commit attempt.
HINT: The transaction might succeed if retried.

SQL Isolation levels or locks in large procedures

I have big stored procedures that handle user actions.
They consist of multiple select statements. These are filtered, most of the time returning only one row. The selects are copied into temp tables or otherwise evaluated.
Finally, a merge-Statement does the needed changes in the DB.
All is encapsulated in a transaction.
I have concurrent input from users, and the selected rows of the select statements should be locked to keep data integrity.
How can I lock the selected Rows of all select statements, so that they aren't updated through other transactions while the current transaction is in process?
Does a table hint combination of ROWLOCK and HOLDLOCK work in a way that only the selected rows are locked, or are the whole tables locked because of the HOLDLOCK?
SELECT *
FROM dbo.Test
WITH (ROWLOCK HOLDLOCK )
WHERE id = #testId
Can I instead use
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
right after the start of the transaction? Or does this lock the whole tables?
I am using SQL2008 R2, but would also be interested if things work differently in SQL2012.
PS: I just read about the table hints UPDLOCK and SERIALIZABLE. UPDLOCK seems to be a solution for locking only one row, and it seems that UPDLOCK always takes a lock, unlike ROWLOCK, which only specifies that locks are row-based if any locks are taken. I am still confused about the best way to solve this...
Changing the isolation level fixed the problem (and locked on row level):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Here is how I tested it.
I created a statement in a blank page of the SQL Management Studio:
begin tran
select
*
into #message
from dbo.MessageBody
where MessageBody.headerId = 28
WAITFOR DELAY '0:00:05'
update dbo.MessageBody set [message] = 'message1'
where headerId = (select headerId from #message)
select * from dbo.MessageBody where headerId = (select headerId from #message)
drop table #message
commit tran
While executing this statement (which takes at least 5 seconds due to the delay), I called the second query in another window:
begin tran
select
*
into #message
from dbo.MessageBody
where MessageBody.headerId = 28
update dbo.MessageBody set [message] = 'message2'
where headerId = (select headerId from #message)
select * from dbo.MessageBody where headerId = (select headerId from #message)
drop table #message
commit tran
and I was rather surprised that it executed instantaneously. This was due to the default SQL Server transaction isolation level "Read Committed" (http://technet.microsoft.com/en-us/library/ms173763.aspx). Since the update in the first script happens after the delay, there are no uncommitted changes yet while the second script runs, so row 28 is read and updated.
Changing the isolation level to SERIALIZABLE prevented this, but it also prevented concurrency: both scripts were executed consecutively.
That was OK, since both scripts read and changed the same row (headerId = 28). After changing headerId to another value in the second script, the statements executed in parallel. So the locking from SERIALIZABLE seems to be at row level.
Adding the table hint
WITH (SERIALIZABLE)
to the first select of the first statement also prevents further reads of the selected row.
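The UPDLOCK hint mentioned in the question is another way to get the same per-row protection without changing the session's isolation level; a sketch against the same MessageBody table (not tested, just illustrating the hint):
begin tran
-- the update lock covers only the matching row(s) and is held until commit;
-- a second session running the same statement for headerId = 28 blocks here,
-- while sessions working on other headerId values are not affected
select *
into #message
from dbo.MessageBody with (updlock, rowlock)
where MessageBody.headerId = 28

update dbo.MessageBody set [message] = 'message1'
where headerId = (select headerId from #message)

drop table #message
commit tran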