I have two locks in my database that I cannot kill.
When I type
KILL '8A551D5D-887D-4776-AEB3-F603A4CDF0E0'
I get an error saying:
Distributed transaction with UOW {8A551D5D-887D-4776-AEB3-F603A4CDF0E0} is rolling back: estimated rollback completion: 0%, estimated time left 0 seconds.
It's been like that for 24 hours now, so I believe that the completion level will not change.
When I try to kill the other one, I get an error saying:
There is a connection associated with the distributed transaction with UOW {6B820CA6-5836-4CCB-BBA6-C3ED615EA933}. First, kill the connection using KILL SPID syntax.
I tried to kill all the connections using the following script:
USE master
GO
ALTER DATABASE YourDatabaseName
SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
But the script never ends.
How do I kill these locks?
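If it helps, my understanding is that the SPID mentioned in the second error can be looked up from the stuck UOW via master.dbo.syslockinfo (the req_transactionUOW column), roughly like this, and then killed with KILL:

SELECT DISTINCT req_spid
FROM master.dbo.syslockinfo
WHERE req_transactionUOW = '6B820CA6-5836-4CCB-BBA6-C3ED615EA933';
-- then: KILL <the returned spid>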
Related
We have a Postgres 12 system running one master and two async hot-standby replica servers, and we use SERIALIZABLE transactions. All the database servers have very fast SSD storage for Postgres and 64 GB of RAM. Clients connect directly to the master server if they cannot accept delayed data for a transaction. Read-only clients that accept data up to 5 seconds old use the replica servers for querying data. Read-only clients use REPEATABLE READ transactions.
I'm aware that because we use SERIALIZABLE transactions, Postgres might give us false positive serialization failures and force us to repeat transactions. This is fine and expected.
However, the problem I'm seeing is that randomly a single-row INSERT or UPDATE query stalls for a very long time. As an example, one error case was as follows (speaking directly to the master to allow modifying table data):
A simple single row insert
insert into restservices (id, parent_id, ...) values ('...', '...', ...);
stalled for 74.62 seconds before finally emitting error
ERROR 40001 could not serialize access due to concurrent update
with error context
SQL statement "SELECT 1 FROM ONLY "public"."restservices" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x"
We log all queries exceeding 40 ms so I know this kind of stall is rare. Like maybe a couple of queries a day. We average around 200-400 transactions per second during normal load with 5-40 queries per transaction.
After finally getting the above error, the client code automatically released two savepoints, rolled back the transaction and disconnected from database (this cleanup took 2 ms total). It then reconnected to database 2 ms later and replayed the whole transaction from the start and finished in 66 ms including the time to connect to the database. So I think this is not about performance of the client or the master server as a whole. The expected transaction time is between 5-90 ms depending on transaction.
Is there some PostgreSQL connection or master configuration setting that I can use to make PostgreSQL return the error 40001 faster, even if it causes more transactions to be rolled back? Does anybody know if setting
set local statement_timeout='250'
within the transaction has dangerous side-effects? According to the documentation (https://www.postgresql.org/docs/12/runtime-config-client.html), "Setting statement_timeout in postgresql.conf is not recommended because it would affect all sessions", but I could set the timeout only for transactions run by this client, which is able to automatically retry the transaction very fast.
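For example, I would scope it per transaction, relying on SET LOCAL reverting automatically at commit/rollback:

BEGIN;
SET LOCAL statement_timeout = '250';  -- milliseconds; only lasts until this transaction ends
insert into restservices (id, parent_id, ...) values ('...', '...', ...);
COMMIT;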
Is there anything else to try?
It looks like someone had the parent row to the one you were trying to insert locked. PostgreSQL doesn't know what to do about that until the lock is released, so it blocks. If you failed rather than blocking, and upon failure retried the exact same thing, the same parent row would (most likely) still be locked and so would just fail again, and you would busy-wait. Busy-waiting is not good, so blocking rather than failing is generally a good thing here. It blocks and then unblocks only to fail, but once it does fail a retry should succeed.
An obvious exception to blocking being better than failing is if, when you retry, you can pick a different parent row to retry with, if that makes sense in your context. In this case, maybe the best thing to do is explicitly lock the parent row with NOWAIT before attempting the insert. That way you can perhaps deal with failures in a more nuanced way.
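A rough sketch of that idea, reusing the table from the question ('...' stands for the parent_id value you are about to reference; NOWAIT turns a would-be block into an immediate error you can handle):

BEGIN;
-- take the same lock the FK check needs, but fail fast if another session holds it
SELECT 1 FROM restservices WHERE id = '...' FOR KEY SHARE NOWAIT;
insert into restservices (id, parent_id, ...) values ('...', '...', ...);
COMMIT;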
If you must retry with the same parent_id, then I think the only real solution is to figure out who is holding the parent row lock for so long, and fix that. I don't think that setting statement_timeout would be hazardous, but it also wouldn't solve your problem, as you would probably just keep retrying until the lock on the offending row is released. (Setting it on the other session, the one holding the lock, might be helpful, depending on what that session is doing while the lock is held.)
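To see who is actually holding the lock while one of these inserts is stalled, something like this (pg_blocking_pids is available from PostgreSQL 9.6 on) should point at the blocker:

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       wait_event_type,
       wait_event,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;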
I have a savepoint which has been running for almost 24 hrs now. It's causing other issues, like long-running queries that refresh a materialized view concurrently.
Is there a way to know which query is causing the RELEASE SAVEPOINT <savepoint-name> to sit idle in transaction? Is it safe to use SELECT pg_cancel_backend(__pid__); against its pid?
If the session is “idle in transaction”, it is not running.
What you see in pg_stat_activity is the last statement executed in that session.
There is a bug in your application that causes a transaction to remain open, and the locks held by this transaction can block concurrent sessions.
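To find the offending session, something along these lines should show every session that is sitting idle in transaction and how long its transaction has been open. Because nothing is executing, pg_cancel_backend() has nothing to cancel; ending it means pg_terminate_backend(), which rolls the open transaction back:

SELECT pid, usename, xact_start,
       now() - xact_start AS open_for,
       query   -- the last statement executed, not a currently running one
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;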
Our PostgreSQL 10.1 server ran out of connections today because a monitor process that was calling
select pg_database_size('databasename');
was getting stuck. It was NOT blocked by any obvious lock; it just never returned. The monitor dutifully logged in every few minutes, over and over, until we ran out of connections. When I run the query against other databases it works, but not against our main database.
Killing the calling process did not clear the query.
select pg_cancel_backend(1234)
doesn't kill the query. Nor does
select pg_terminate_backend(1234)
Ditto if I run the query by hand: nothing kills it in the database.
I will probably have to restart the database server to recover from this. However I'd like to prevent it from happening again.
What is this function doing that would resist signals and never return (like 8 hours after being invoked)? Is there any way to clear them from the process table without restarting the database and breaking the users who still have the few remaining connections still active in the system?
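In case it helps whoever answers: the one thing I can still run against the stuck backends is a look at their wait events (the wait_event columns exist on 10.x), along the lines of:

SELECT pid, state, wait_event_type, wait_event, query_start, query
FROM pg_stat_activity
WHERE query LIKE '%pg_database_size%';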
I have a Python script that executes multiple SQL scripts (one after another) in Redshift. Some of the tables in these SQL scripts can be queried multiple times. For example, table t1 can be SELECTed in one script and dropped/recreated in another script. This whole process runs in one transaction. Now, sometimes, I get a deadlock detected error and the whole transaction is rolled back. If there is a deadlock on a table, I would like to wait for the table to be released and then retry the SQL execution. For other types of errors, I would like to roll back the transaction. From the documentation, it looks like a table lock isn't released until the end of the transaction. I would like to achieve all-or-nothing data changes (which is accomplished by using a transaction) but also handle deadlocks. Any suggestion on how this can be accomplished?
I would execute all of the SQL you are referring to in one transaction with a retry loop. Below is the logic I use to handle concurrency issues and retry (pseudocode for brevity). I do not have the system wait indefinitely for the lock to be released. Instead I handle it in the application by retrying over time.
begin transaction
while not successful and count < 5
    try
        execute sql
        commit
        mark successful
    except
        if error code is '40P01' or '55P03'
            # deadlock detected or lock not available
            sleep a random time (200 ms to 1 sec) * number of retries
        else if error code is '40001' or '25P02'
            # serialization failure or "in failed sql transaction"
            rollback
            sleep a random time (200 ms to 1 sec) * number of retries
            begin transaction
        else if error message is 'There is no active transaction'
            sleep a random time (200 ms to 1 sec) * number of retries
            begin transaction
        increment count
The key components are catching every type of error, knowing which cases require a rollback, and having an exponential backoff for retries.
> db.currentOp().inprog.length
11587
Several minutes later, the count is still the same. I made a small script to cycle through and killOp() all the ops that originated from the offending client, but when it finishes, all of the ops are still running.
I then tried a single killOp() and checked the op count and it was the same. I tried killing 10 ops, then checking the op count, and it still hadn't changed.
Most of the queries are on the same collection, which has over 20 million documents. The client generating all the queries has been terminated, but I can't call getIndexes() to see if there's an indexing misconfiguration on the collection because that call just goes on the end of the op queue and never returns.
We're running MongoDB on a single Linux server. There's no replication in place at this point.
What should I do?
Do you know which op that is? Check the mongod log to see whether it makes progress or reports any error message. If you don't see any progress, I would suggest restarting mongod (don't kill -9; a normal kill should be OK).