DROP INDEX CONCURRENTLY first appeared in PostgreSQL 9.2, but my server runs 9.1. Unfortunately, a plain DROP INDEX locks my app for an unpredictable amount of time, which is very sad when doing it in production.
Is there a way to drop an index concurrently?
No, there's no simple workaround - otherwise DROP INDEX CONCURRENTLY would have been rather less likely to be added in 9.2.
However, you can kill all sessions to force the drop to occur promptly.
What you want to avoid is the drop sitting on a partially acquired exclusive lock: while it waits for other transactions to finish and release their share locks, its pending lock request prevents new transactions from proceeding, yet the drop itself cannot proceed either. The best way to avoid that is to kill all concurrent sessions.
So, in one session:
DROP INDEX my_index;
In another session, as a superuser, terminate all other sessions using the following untested query, which you'll need to adapt appropriately and test before use (on 9.1 the pg_stat_activity columns are procpid and current_query; they were renamed to pid and query in 9.2):
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE procpid <> (
        SELECT procpid
        FROM pg_stat_activity
        WHERE current_query = 'DROP INDEX my_index;')
  AND procpid <> pg_backend_pid();
Your well-written and well-tested application will immediately reconnect and retry its queries without bothering the user or getting upset, because it knows that transient errors are something it has to cope with, so it runs all its database access in retry loops. If it isn't well written, you'll discover that with a flood of user-visible error messages. If it's really not well written, you'll have to restart it before it gets its head together, but it's rare to see apps that are quite that broken.
This is a heavy-handed approach. You can be rather softer about it by joining against pg_locks and only terminating sessions that actually hold a lock on the relation you're interested in or the index you wish to modify. You get to enjoy writing that query, because my interest in working around limitations of older database versions is limited.
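If you do want the softer approach, a rough, untested sketch for 9.1 might look like this ('my_table' is a placeholder for the table the index belongs to):

SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE procpid IN (
        -- sessions holding any lock on the table or the index itself
        SELECT l.pid
        FROM pg_locks l
        WHERE l.relation IN ('my_table'::regclass, 'my_index'::regclass))
  AND procpid <> pg_backend_pid()
  AND current_query <> 'DROP INDEX my_index;';   -- don't kill the session doing the drop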
I'm pretty new to PostgreSQL and I'm sure I'm missing something here.
The scenario: on version 11, I execute a big transaction on a given table with the Node.js driver - it drops the table and then inserts data, and it may take 30 minutes.
While that is running, if I try to query that table with a SELECT using the JDBC driver, the query waits for the transaction to finish. If I close the transaction (by letting it finish or by forcing it to exit), the JDBC query becomes responsive.
I thought I could read a table with one connection while performing a transaction on it with another one.
What am I missing here?
Should I keep the table (without dropping it at the beginning of the transaction)?
DROP TABLE takes an ACCESS EXCLUSIVE lock on the table, which is there precisely to prevent it from taking place concurrently with any other operation on the table. After all, DROP TABLE physically removes the table.
Since all locks are held until the end of the database transaction, all access to the dropped table is blocked until the transaction ends.
Of course the files are only removed when the transaction commits, so you might wonder why PostgreSQL doesn't let concurrent transactions read in the meantime. But that would mean that a COMMIT may be blocked by a concurrent reader, or a SELECT might cause a system error in the middle of reading, neither of which sounds appealing.
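To see the effect for yourself, a minimal sketch run in separate sessions (the table name is made up):

-- Session 1:
BEGIN;
DROP TABLE my_table;   -- takes ACCESS EXCLUSIVE, held until COMMIT/ROLLBACK
-- CREATE TABLE / INSERT work here, possibly for a long time

-- Session 2: this blocks until the first transaction ends:
SELECT count(*) FROM my_table;

-- Session 3: see who is waiting on whom (PostgreSQL 9.6 or later):
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;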
I'm running backup restore on a schema every day and get this every now and then:
pg_dump: Error message from server: ERROR: relation not found (OID 86157003)
DETAIL: This can be validly caused by a concurrent delete operation on this object.
pg_dump: The command was: LOCK TABLE myschema.products IN ACCESS SHARE MODE
How can this be avoided? It seems the table was being used at the time, or someone was running something against it. Can I just kill all connections to the DB before restoring, or is there another alternative?
As far as I understand, pg_dump should be able to run even if users are doing something with the table, but that doesn't seem to be the case.
Thanks,
It is somewhat buried but the answer lies here:
https://www.postgresql.org/docs/current/app-pgdump.html
"
-j njobs
...
To detect this conflict, the pg_dump worker process requests another shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime and there is no way to continue with the dump, so pg_dump has no choice but to abort the dump.
"
Which is borne out by this part of the error message:
"LOCK TABLE myschema.products IN ACCESS SHARE MODE"
ACCESS SHARE will cooperate with all other lock modes except ACCESS EXCLUSIVE. ACCESS EXCLUSIVE is used by DROP TABLE, TRUNCATE, REINDEX, etc. See the Locks documentation for more information. So you need to do the dump at a time when the operations listed under ACCESS EXCLUSIVE are known not to happen, or by blocking/dropping connections.
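One way to check, before starting the dump, whether anything currently holds (or is waiting for) an ACCESS EXCLUSIVE lock on relations in the schema - an untested sketch, with the schema name taken from the error message above:

SELECT l.pid, c.relname, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'myschema'
  AND l.mode = 'AccessExclusiveLock';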
Somebody dropped a table between the time pg_dump took an inventory of the tables and the time it tries to dump the table.
This can happen if your application is in the habit of dropping tables all the time.
This is not an answer to your main question, but a caution regarding:
As far as I understand, pg_dump should be able to run even if users are doing something with the table, but that doesn't seem to be the case.
It assumes that the application performs every action in a single transaction. I have known of applications which accomplish some tasks using more than one.
I don't know exactly what the tasks were or if it was unavoidable that they use multiple transactions, but dumps could only be trusted when the application was idle or, better yet, when the service was stopped.
For the function that those applications performed, it wasn't a big deal to work around down times or stop services.
I don't know how you'd determine this behaviour without being told by the developers. Just something to consider.
I had an open connection from Matlab to my Postgres server, and its last query was insert into table_name ...; the session showed state idle when I looked at the processes on the database server using:
SELECT datname,pid,state,query FROM pg_stat_activity;
I tried to create a unique index on table_name and it was taking a very long time, with no discernible CPU usage for the pgAdmin process. When I closed the connection from Matlab, the query dropped out of pg_stat_activity, and the unique index was built immediately.
Was the idle query the reason why it took so long to build the index?
No, a session in state “idle” does not hold any locks and cannot block anything. It's “idle in transaction” sessions that usually are the problem, because they tend to hold locks over a long time. Such sessions are almost always caused by an application bug.
To investigate problems like that, look at the view pg_locks. A hanging CREATE INDEX statement is usually stuck acquiring its SHARE lock on the table to be indexed (SHARE conflicts with the ROW EXCLUSIVE lock taken by INSERT, UPDATE and DELETE). You should see that as a value of FALSE in the granted column of pg_locks. Then figure out which backend (pid) holds a conflicting lock on the table in question, and you have the culprit(s).
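A rough, untested way to line up waiters with potential blockers - it matches on the relation only and does not consult the full lock conflict matrix, so treat it as a starting point:

SELECT w.pid  AS waiting_pid,
       w.mode AS waited_mode,
       h.pid  AS holding_pid,
       h.mode AS held_mode,
       w.relation::regclass AS relation
FROM pg_locks w
JOIN pg_locks h ON h.relation = w.relation
               AND h.granted
               AND h.pid <> w.pid
WHERE NOT w.granted
  AND w.relation IS NOT NULL;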
I have an issue with a postgresql-9.2 DB that is causing what is effectively a deadlock of the entire DB system.
Basically I have a table that acts as an operation queue. Entries are added to the table to indicate the need for an operation to be done. Subsequently one of multiple services will update these entries to indicate that the operation has been picked up for processing, and eventually delete the entry to indicate that the operation has been completed.
All access to the table is through transactions that first acquire a transaction-level advisory lock. This is to ensure that only one service is manipulating the queue at any point in time.
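Roughly, each transaction follows this pattern (the lock key and the statement here are illustrative, not my actual code):

BEGIN;
SELECT pg_advisory_xact_lock(42);   -- serializes all queue access; released at COMMIT/ROLLBACK
DELETE FROM queue_table;            -- or the inserts/updates described above
COMMIT;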
I have seen instances where queries on this queue will basically lock up and do nothing. I can see from pg_stat_activity that the affected query is state = active, waiting = false. I can also see that all requested locks for the PID in pg_locks have been granted. But the process just sits there and does nothing.
Typically I can precipitate this by repeated mass addition and removal of (several hundred thousand) entries to the queue. All access has to go through the advisory lock, so only one thing is getting done at a time.
Once one of these queries locks up then other queries pile up behind it, waiting on the advisory lock - eventually exhausting all DB connections.
The query that locks up is typically a deletion of all entries from the queue (delete from queue_table;). I have, however, seen one instance where the query that locked up was an update of several tuples within the table.
Right now I don't see anywhere where I could be deadlocking against any other transaction. These are straightforward inserts, deletes and updates. No other tables are involved (except during addition of entries, where the data is selected from other tables).
Other pertinent facts:
All access to the table is in fact through a view (can't access the table directly, which is why I'm using an advisory lock instead of an exclusive lock on the table or similar).
The table is logged (which is probably a really bad choice in this case; I'm going to try using an unlogged table next week).
I usually also see an autovacuum (analyze) operation, also active, also waiting = false and also apparently locked up. I presume the autovacuum is coming along to re-optimize after my mass additions / removals.
Looking for any suggestions of what else I might look at to debug this issue when I next reproduce it. I'm kind of starting to feel that this might be some kind of performance optimization / DB configuration related issue.
Any suggestions of things to look at would be most welcomed!
Postgres 9.4, Ubuntu 10
I have been unable to find this exact problem here, so here it goes:
For each table t in my database, I have a table t_audit. Each delete, insert, and update on table t triggers a function that inserts a record to table t_audit.
Each night, a process truncates each t_audit table.
Last night, a select on table t prevented the truncate on t_audit from proceeding. I did not save what was in pg_stat_activity at the time, but I did save the output from blocking_locks().
Blocking pid: RowExclusiveLock, t, select * from t where ...,
Waiting pid: AccessExclusiveLock, t_audit, truncate table t_audit,
I am uncertain as to why a select on t would block the truncate on t_audit. As I did not save pg_stat_activity, the best I can assume is that the session running the select was "idle in transaction". I asked the person who was running the query at the time, and he said he was not running the update as part of a transaction. He did update table t just prior to the select. He did not close his connection, as the pid was still active until I ran pg_terminate_backend on it.
Has anyone experienced this issue before? Is there a recommended procedure for this other than running pg_terminate_backend on any pids which are "idle in transaction" just prior to calling truncates?
Thank you for reading and taking time to respond.
Are there any triggers in place that might cause even something as innocuous as a SELECT on the audit table to run at the same time as the TRUNCATE (although the fact that it's a Row Exclusive lock indicates that whatever is being triggered is something like an UPDATE instead)? From the PG 9.4 locking documentation, SELECT and TRUNCATE would indeed block each other; that is expected behavior. The relevant tidbits are these:
ACCESS SHARE
Conflicts with the ACCESS EXCLUSIVE lock mode only.
The SELECT command acquires a lock of this mode on referenced tables. In general, any query that only reads a table and does not modify it will acquire this lock mode.
ACCESS EXCLUSIVE
Conflicts with locks of all modes (ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE). This mode guarantees that the holder is the only transaction accessing the table in any way.
Acquired by the DROP TABLE, TRUNCATE, REINDEX, CLUSTER, and VACUUM FULL commands. Many forms of ALTER TABLE also acquire a lock at this level.
And even more specifically telling is this explicit tip on that page:
Tip: Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR UPDATE/SHARE) statement.
As for what to do in this scenario, if your use case is tolerant of unceremonious terminations of (possibly idle) connections, that is certainly a straightforward way of ensuring that the TRUNCATE succeeds.
A more flexible alternative may be to clear out the table with DELETE instead, and then follow up with some variation of VACUUM afterwards (DELETE and SELECT will not block each other, though DELETE will block an UPDATE of the same rows). The suitability of this approach depends a lot on things like the day-to-day growth pattern of the table (a plain VACUUM may be enough if its maximum size doesn't change much from day to day) and how badly you need that space reclaimed in the short term. If it is a huge table and you need the space quickly and badly, you may need to VACUUM FULL it during a quiet window, but VACUUM FULL is not a gentle hammer to swing by any means.
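For one of the audit tables, that alternative might look roughly like this (untested, using the t_audit naming from the question):

DELETE FROM t_audit;     -- ROW EXCLUSIVE: does not conflict with concurrent SELECTs
VACUUM t_audit;          -- reclaims space for reuse without taking ACCESS EXCLUSIVE
-- VACUUM FULL t_audit;  -- only during a quiet window: takes ACCESS EXCLUSIVE and rewrites the table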