As I understand it, I can tell that a transaction is holding a snapshot when either of the columns backend_xid or backend_xmin in pg_stat_activity is not NULL.
I am currently investigating cases where backend_xid is not NULL for sessions from DBeaver, and I don't understand why the transaction requires a snapshot. This is of interest because long-running transactions that hold a snapshot can cause problems, for autovacuum for instance.
My question is: can I find out (server-side) why a transaction is holding a snapshot? Is there a table where I can see the reason?
backend_xid is the transaction ID of the session and does not mean that the session has an active snapshot. The documentation says:
Top-level transaction identifier of this backend, if any.
backend_xmin is described as
The current backend's xmin horizon.
“xmin horizon” is PostgreSQL jargon and refers to the lowest transaction ID that was active when the snapshot was taken. It is the cutoff for VACUUM: dead row versions newer than the oldest xmin horizon in the system cannot be removed yet.
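To see which sessions hold an xid or an xmin at all, you can query pg_stat_activity directly; a minimal sketch (all columns used exist in that view):

SELECT pid, state, backend_xid, backend_xmin, xact_start, query
FROM pg_stat_activity
WHERE backend_xid IS NOT NULL
   OR backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC NULLS LAST;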
When I run VACUUM VERBOSE on a table, the output shows an oldest xmin value of 9696975, as shown below:
table_xxx: found 0 removable, 41472710 nonremovable row versions in 482550 out of 482550 pages
DETAIL: 41331110 dead row versions cannot be removed yet, oldest xmin: 9696975
There were 0 unused item identifiers.
But when I check in pg_stat_activity, there are no entries with the backend_xmin value that matches this oldest xmin value.
Below is the response I get when I run the query:
SELECT backend_xmin
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;
Response:
backend_xmin
------------
10134695
10134696
10134696
10134696
10134696
The issue I am facing is that VACUUM is not removing any dead tuples from the table.
I tried the methods mentioned in this post, but it didn't help.
Edit:
The PostgreSQL version is 13.6, running in an Aurora cluster.
A row is only completely dead when no live transaction can see it anymore, i.e. when no transaction that was started before the row was updated / deleted is still running. That does not necessarily involve any locks at all. The mere existence of a long-running transaction can block VACUUM from cleaning up.
So the system view to consult is pg_stat_activity. Look for zombie transactions that you can kill. Then VACUUM can proceed.
Old prepared transactions can also block for the same reason. You can check pg_prepared_xacts for those.
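A sketch of how to hunt for both; pg_terminate_backend() is the standard way to kill a session, and the pid used here is a placeholder:

-- sessions holding back the xmin horizon, oldest first:
SELECT pid, state, xact_start, backend_xmin
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;

-- terminate an offending session (12345 is a placeholder pid):
SELECT pg_terminate_backend(12345);

-- old prepared transactions:
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
ORDER BY prepared;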
Of course, VACUUM only runs on the primary server, not on replica (standby) instances, in case streaming replication has been set up.
Related:
Long running function locking the database?
What are the consequences of not ending a database transaction?
What does backend_xmin and backend_xid represent in pg_stat_activity?
Do postgres autovacuum properties persist for DB replications?
Apart from old transactions, there are some other things that can hold the “xmin horizon” back (see the query sketched after this list):
stale replication slots (see pg_replication_slots)
abandoned prepared transactions (see pg_prepared_xacts)
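Stale slots can be spotted by the xmin values they pin; a minimal sketch (the pg_prepared_xacts query shown earlier covers the second point):

SELECT slot_name, slot_type, active, xmin, catalog_xmin
FROM pg_replication_slots;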
I have a table with 500k rows. Now I want to add a new column
(type boolean, nullable = false) without a default value.
The query to do so has been running seemingly forever.
I'm using PostgreSQL 12.1, compiled by Visual C++ build 1914, 64-bit on my Windows 2012 Server
In pgAdmin I can see that the query is blocked by PID 0. But when I execute this query, I can't find any entry with pid = 0:
SELECT *
FROM pg_stat_activity
Can someone help me here? Why is the query blocked, and how can I fix this so I can add a new column to my table?
Update (my attempt):
SELECT *
FROM pg_prepared_xacts
Update
It works after rolling back all prepared transactions:
ROLLBACK PREPARED 'gid goes here';
You have got stale prepared transactions. I say that as in "you have got the measles", because it is a disease for a database.
Such prepared transactions keep holding locks and block autovacuum progress, so they will bring your database to its knees if you don't take action. In addition, such transactions are persisted, so even a restart of the database won't get rid of them.
Remove them with
ROLLBACK PREPARED 'gid goes here'; /* use the transaction names shown in the view */
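If there are several of them, you can generate the ROLLBACK statements from the view; a minimal sketch (format() with %L just quotes each gid as a literal):

SELECT format('ROLLBACK PREPARED %L;', gid)
FROM pg_prepared_xacts;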
If you use prepared transactions, you need a distributed transaction manager. That is a piece of software that keeps track of all prepared transactions and their state and persists that information, so that no distributed transaction can become stale. Even if there is a crash, the distributed transaction manager will resolve in-doubt transactions in all involved databases.
If you don't have that, don't use prepared transactions. You now know why. Best is to set max_prepared_transactions to 0 in that case.
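max_prepared_transactions is a server-level parameter, so changing it requires a restart; one way to set it:

ALTER SYSTEM SET max_prepared_transactions = 0;
-- takes effect only after a server restart; a mere configuration reload is not enough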
Can someone please explain the following situation to me:
I am using PostgreSQL 12 as the main RDBMS in my project. Several background jobs access and write to the database in parallel, and there are also some user interactions (which of course produce updates and inserts from the front end of the application).
Periodically I am getting exceptions like this one:
SQLSTATE[40P01]: Deadlock detected: 7 ERROR: deadlock detected
DETAIL: Process 18046 waits for ShareLock on transaction 212488; blocked by process 31036.
Process 31036 waits for ShareLock on transaction 212489; blocked by process 18046.
HINT: See server log for query details.
CONTEXT: while updating tuple (1637,16) in relation "my_table"
Inside my application I don't manually lock any rows or tables during my transactions, but I frequently have 'large' transactions that can modify a lot of rows in a single operation. So the questions are:
1. Do ordinary transactions produce table-wide locks or row-level locks? (I assume they produce some locks, unless this whole situation is magic.)
2. Shouldn't the RDBMS automatically resolve this kind of problem when two queries wrapped in transactions try to modify the same resource?
3. If the answer to the second question is "no", how should I handle these situations?
re 1) DML statements only lock the rows that are modified. There is no lock escalation in Postgres where the whole table is locked for writes. There is a "table lock", but it is only there to prevent concurrent DDL; a SELECT will also acquire one. Those shared locks don't prevent DML on the table.
re 2) no, the DBMS cannot resolve this, because a deadlock means tx1 is waiting for a lock to be released by tx2 and tx2 is waiting for a lock to be released by tx1. How would the DBMS know what to do? The only way the DBMS can solve this is by choosing one of the two sessions as a victim and killing that transaction (which is the error you see).
re 3) the usual approach to avoiding deadlocks is to always update rows in the same order. That usually turns the deadlock into a simple "lock wait" for the second transaction.
Assume the following UPDATE sequence
tx1              | tx2
-----------------+-----------------
update id = 1    |
                 | update id = 2
update id = 2    |
(tx1 waits)      |
                 | update id = 1
(now we have a deadlock)
If you always update the rows in e.g. ascending order this changes to:
tx1              | tx2
-----------------+-----------------
update id = 1    |
                 | update id = 1
                 | (waits)
update id = 2    |
                 |
commit;          |
(locks released) |
                 | update id = 2
                 | commit;
So you don't get a deadlock, just a wait for the second transaction.
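A minimal sketch of that ordered approach in SQL (my_table, the id values and the val column are placeholders): take the row locks in ascending id order first, then do the work:

BEGIN;
-- acquire the row locks in a deterministic (ascending id) order:
SELECT id FROM my_table WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
-- with the locks already held, the updates cannot deadlock against
-- another transaction that follows the same order:
UPDATE my_table SET val = val + 1 WHERE id IN (1, 2);
COMMIT;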
All SQL statements that affect tables will take a lock on the table (albeit not necessarily a strong one). But that doesn't seem to be your problem here.
All SQL statements that modify a row (or SELECT ... FOR UPDATE) will lock the affected rows. Your two transactions probably blocked on a row-level lock.
Yes; that is what the error message shows. PostgreSQL has resolved the deadlock by killing one of the involved transactions.
If transaction 1 holds a lock that transaction 2 is waiting for and vice versa, there is no other way to resolve the situation. The only way to release a lock is to end the transaction that holds it.
You should catch the error in your application code and retry the database transaction. A deadlock is a transient error.
If you get a lot of deadlocks, you should try to reduce them. What helps is to keep your transactions short and small. If that is not an option, make sure that all transactions that lock several rows lock them in the same order.
I would like to query for the lowest transaction_timestamp() value among the currently active (neither committed nor rolled back yet) transactions across all connections to my PostgreSQL database (I am not interested in just the transaction_timestamp() of my own session).
I can get the transaction XIDs for those with txid_current_snapshot() and related functions, but I would like to find out what their transaction timestamps are (or at least the lowest of them).
You could query the pg_stat_activity view for that:
SELECT MIN(xact_start) FROM pg_stat_activity
WHERE state IN ('active', 'idle in transaction')
Have a look at the possible values of state to decide which ones you are interested in; there is also 'idle in transaction (aborted)', for example.
How does Postgres decide which transactions are visible to a given transaction according to the isolation level?
I know that Postgres uses xmin and xmax and compares them to the transaction ID, but I haven't found articles with the proper details.
Does anyone know the process under the hood?
This depends on the current snapshot.
READ COMMITTED transactions take a new snapshot for every query, while REPEATABLE READ and SERIALIZABLE transactions take a snapshot when the first query is run and keep it for the whole duration of the transaction.
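A sketch of the difference, using a placeholder table my_table:

-- session 1:
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM my_table;  -- the snapshot is taken by this first query

-- session 2, meanwhile (autocommitted):
INSERT INTO my_table DEFAULT VALUES;

-- session 1 again:
SELECT count(*) FROM my_table;  -- same count: still the same snapshot
COMMIT;
-- under READ COMMITTED, the second count would already see the new row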
The snapshot is defined as struct SnapshotData in include/utils/snapshot.h and essentially contains the following (a live example follows the list):
a minimal transaction ID xmin: all older transactions are visible to this snapshot.
a maximal transaction ID xmax: all later transactions are not visible to this snapshot.
an array of transaction IDs xid that contains all in-between transactions that are not visible to this snapshot.
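You can inspect your own session's current snapshot in exactly this xmin:xmax:xip form from SQL (the output shown is illustrative):

SELECT txid_current_snapshot();
-- e.g. 1000:1005:1000,1003  (xmin : xmax : list of in-progress xids)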
To determine whether a tuple is visible to a snapshot, its xmin must be a committed transaction ID that is visible to the snapshot, and its xmax must not be a committed, visible transaction ID.
To determine if a transaction is committed or not, the commit log has to be consulted unless the hint bits of the tuple (which cache that information) have already been set.
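As an aside, the commit status of a given transaction ID can also be queried from SQL since PostgreSQL 12 (the xid value here is illustrative):

SELECT pg_xact_status('1234'::xid8);
-- returns 'in progress', 'committed' or 'aborted'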