Why do DDL statements hang frequently? - postgresql

If we run an ALTER query (such as adding a constraint or adding a column) or update a column, Postgres hangs and keeps processing the query, seemingly never finishing.
We have to kill the query explicitly.
Why do our ALTER statements frequently get stuck?

There are two scenarios in which such a query will hang for a long time:
First, such an ALTER TABLE requires an ACCESS EXCLUSIVE lock on the table, which blocks all concurrent activity and is in turn blocked by all concurrent activity.
The lock request is queued behind all transactions already holding or waiting for a lock on that table, so if there are many of them and they take a long time to finish, the ALTER TABLE will have to wait for a long time. Other transactions that request locks on the table later will also hang, because they queue behind the ALTER TABLE statement.
Note that sessions that are “idle in transaction” or prepared transactions can hold locks for a very long time. It is a mistake to leave such things lying around.
Second, many forms of ALTER TABLE have to either rewrite the table (for example, if a new column with a non-NULL default value is added) or scan the whole table (for example, if a constraint is added, it has to be checked for every row).
This can take a long time to finish if the table is large.
To disambiguate between these two cases, look at the locks in the pg_locks system view.
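As a rough sketch, you could run something like this from a second session while the ALTER TABLE hangs; if the ALTER TABLE's session shows up with granted = false, you are in the first scenario, otherwise it is busy rewriting or scanning the table:

    -- show sessions waiting for a lock, together with the query they are running
    SELECT a.pid, a.query, l.mode, l.granted
    FROM pg_locks AS l
    JOIN pg_stat_activity AS a ON a.pid = l.pid
    WHERE NOT l.granted;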

Related

Cannot execute SELECT queries while making a long-lasting insert transaction

I'm pretty new to PostgreSQL and I'm sure I'm missing something here.
The scenario is with version 11, executing a big drop table and insert transaction on a given table with the nodejs driver, which may take 30 minutes.
While doing that, if I try to query with select on that table using the jdbc driver, the query execution waits for the transaction to finish. If I close the transaction (by finishing it or by forcing it to exit), the jdbc query becomes responsive.
I thought I could read a table with one connection while performing a transaction with another one.
What am I missing here?
Should I keep the table (without dropping it at the beginning of the transaction)?
DROP TABLE takes an ACCESS EXCLUSIVE lock on the table, which is there precisely to prevent it from taking place concurrently with any other operation on the table. After all, DROP TABLE physically removes the table.
Since all locks are held until the end of the database transaction, all access to the dropped table is blocked until the transaction ends.
Of course the files are only removed when the transaction commits, so you might wonder why PostgreSQL doesn't let concurrent transactions read the table in the meantime. But that would mean that COMMIT could be blocked by a concurrent reader, or that a SELECT could hit a system error in the middle of reading, neither of which sounds appealing.
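If you want to see who is waiting on whom, here is a sketch using pg_blocking_pids() (available since PostgreSQL 9.6, so it applies to version 11): run this from a third connection while the JDBC SELECT hangs:

    -- list blocked sessions and the backends that block them
    SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
    FROM pg_stat_activity
    WHERE cardinality(pg_blocking_pids(pid)) > 0;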

Can I configure a table such that inserted rows always have a greater primary key

I would like to configure a table in Postgres to behave like an append only log. This table will have an automatically generated primary ID.
Workers will work on the items in the table in order and should only need to store the last row ID that they have completed.
How can I prevent rows from being written to the table where the row ID is less than the greatest value already in the table (perhaps because some transactions take longer than others)?
There is no way to prevent concurrent inserts in the table (short of locking the table, which is a bad idea, because it breaks autovacuum).
So there is no way to guarantee that rows are inserted in a certain order. The order in which rows are inserted isn't really a meaningful concept in PostgreSQL.
If you really want that, you have to use a different mechanism to serialize inserts, for example using PostgreSQL advisory locks or synchronization mechanisms on the client side.
Except that the numbers are assigned when the row is inserted, not when the transaction commits, so a session that starts earlier but runs longer can write a row with an ID that is less than one from a session that started later but finished sooner. So either you create your own sequence number generation that involves locking, or you use an INSERT timestamp.
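If you go the advisory lock route, a minimal sketch (the table definition and the lock key 42 are made up for illustration; every writer has to follow the same protocol) could look like this. Because the advisory lock is only released at COMMIT, ID assignment and commit are serialized, so a worker never sees a smaller ID appear after a larger one:

    -- hypothetical append-only log table
    CREATE TABLE work_log (
        id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        item text NOT NULL
    );

    BEGIN;
    SELECT pg_advisory_xact_lock(42);   -- arbitrary key that all writers agree on
    INSERT INTO work_log (item) VALUES ('some work item');
    COMMIT;                             -- lock released only here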

Will long-lasting readOnly transaction slow down other request?

For example, I want to iterate over 5000k rows via a WITHOUT HOLD cursor inside a read-only transaction, which will definitely run for a long time.
Will such a transaction slow down other requests on the same table?
No, it won't, unless concurrent transactions require an ACCESS EXCLUSIVE lock on the table (because they are running something like DROP TABLE, TRUNCATE or certain forms of ALTER TABLE). Such transactions would hang until the read-only transaction is done.
The problem with long transactions is that they keep autovacuum from doing its work, and if there is a lot of concurrent data modifying activity, you could end up with bloated tables and indexes.
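If you want to check for such long-running transactions, a rough sketch (the 30-minute threshold is just an arbitrary example) is to look at pg_stat_activity:

    -- backends whose transaction has been open for a long time
    SELECT pid, state, xact_start, query
    FROM pg_stat_activity
    WHERE xact_start < current_timestamp - interval '30 minutes'
    ORDER BY xact_start;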

truncate on one table blocked by select of another

Postgres 9.4, Ubuntu 10
I have been unable to find this exact problem here, so here it goes:
For each table t in my database, I have a table t_audit. Each delete, insert, and update on table t triggers a function that inserts a record to table t_audit.
Each night, a process truncates each t_audit table.
Last night, a select on table t prevented the truncate on t_audit from proceeding. I did not save what was in pg_stat_activity at the time, but I did save the output from blocking_locks().
Blocking pid: RowExclusiveLock, t, select * from t where ...,
Waiting pid: AccessExclusiveLock, t_audit, truncate table t_audit,
I am uncertain as to why a select on t would block the truncate on t_audit. As I did not save pg_stat_activity, the best that I can assume is that the select was "idle in transaction". I asked the person who was running the query at the time, and he said he was not running the update as part of a transaction. He did update table t just prior to the select. He did not close his connection as the pid was still active until I ran pg_terminate_backend on the pid.
Has anyone experienced this issue before? Is there a recommended procedure for this other than running pg_terminate_backend on any pids which are "idle in transaction" just prior to calling truncates?
Thank you for reading and taking time to respond.
Are there any triggers in place that might cause even something as innocuous as a SELECT on the audit table at the same time as the TRUNCATE (although the fact that it's a Row Exclusive lock indicates that whatever is being triggered is something like an UPDATE instead)? From the PG 9.4 locking documentation, SELECT and TRUNCATE would indeed block each other as expected behavior. The relevant tidbits are these:
ACCESS SHARE
Conflicts with the ACCESS EXCLUSIVE lock mode only.
The SELECT command acquires a lock of this mode on referenced tables. In general, any query that only reads a table and does not modify it will acquire this lock mode.
ACCESS EXCLUSIVE
Conflicts with locks of all modes (ACCESS SHARE, ROW SHARE, ROW EXCLUSIVE, SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE). This mode guarantees that the holder is the only transaction accessing the table in any way.
Acquired by the DROP TABLE, TRUNCATE, REINDEX, CLUSTER, and VACUUM FULL commands. Many forms of ALTER TABLE also acquire a lock at this level.
And even more specifically telling is this explicit tip on that page:
Tip: Only an ACCESS EXCLUSIVE lock blocks a SELECT (without FOR UPDATE/SHARE) statement.
As for what to do in this scenario, if your use case is tolerant of unceremonious terminations of (possibly idle) connections, that is certainly a straightforward way of ensuring that the TRUNCATE succeeds.
A more flexible alternative may be to clear out the table with DELETE instead, and then follow up with some variation of VACUUM afterwards (DELETE and SELECT will not block each other, though DELETE will block UPDATEs of the same rows). The suitability of this approach depends a lot on things like the day-to-day growth pattern of the table (a plain VACUUM may be enough if its maximum size does not differ much from day to day) and how badly you need that space reclaimed in the short term if it is a huge table. You may need to VACUUM FULL that table during a quiet window if you need the space quickly and badly, but VACUUM FULL is not a gentle hammer to swing by any means.
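A minimal sketch of that alternative, using the t_audit table from the question:

    -- DELETE only takes a ROW EXCLUSIVE lock, so concurrent SELECTs keep working
    DELETE FROM t_audit;
    -- plain VACUUM makes the dead rows reusable; it does not shrink the file like VACUUM FULL
    VACUUM t_audit;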

Does DROP COLUMN block on a PostgreSQL database

I have the following column in a PostgreSQL database:
column | character varying(10) | not null default 'default'::character varying
I want to drop it, but the database is huge, and if it blocks updates for an extended period of time I will be publicly flogged, and likely drawn and quartered. I found a blog post from Braintree, here, which suggests this is a safe operation, but it's a little vague.
The ALTER TABLE command needs to acquire an ACCESS EXCLUSIVE lock on the table, which will block everything trying to access that table, including SELECTs, and, as the name implies, needs to wait for existing operations to finish so it can be exclusive.
So, if your table is extremely busy, it may not get an opportunity to actually acquire the exclusive lock, and will simply block for what is functionally forever.
It also depends on whether this column has a lot of indexes and dependencies. If there are dependencies (e.g. foreign keys or views), you'll need to add CASCADE to the DROP COLUMN, and this will increase the work that needs to be done and the amount of time it will need to hold the exclusive lock.
So, it's not risk free. However, you should know fairly quickly after trying it whether it's likely to block for a long time. If you can try it and safely take a minute or two of potentially blocking that table, it's worth a shot -- try the drop and see. If it doesn't complete within a relatively short period of time, abort the command and you'll likely need to schedule some downtime of at least the app(s) that are hammering the table. (You can take a look at the server activity and the lock activity to try to surmise what's hammering that table.)
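One way to keep that experiment safe is to cap how long the ALTER TABLE may wait for its lock, for example (placeholder names, and the timeout value is arbitrary):

    SET lock_timeout = '5s';                      -- give up instead of queueing indefinitely
    ALTER TABLE my_table DROP COLUMN my_column;   -- errors out if the lock isn't acquired in time
    RESET lock_timeout;

If it times out, the statement fails cleanly, nothing queues up behind it, and you can retry during a quieter window.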
does drop column block a PostgreSQL database
The answer to that is no, because it does not block the database.
However, any DDL statement requires an exclusive lock on the table being changed, which means no other transaction can access the table. So the table is "blocked", not the database.
However the time to drop a column is really very short, because the column isn't physically removed from the table but only marked as no longer there.
And don't forget to commit the DDL statement (if you have turned autocommit off), otherwise the table will be blocked until you commit your change.
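With autocommit off, the whole operation would look roughly like this (placeholder names again):

    BEGIN;
    ALTER TABLE my_table DROP COLUMN my_column;
    COMMIT;   -- the ACCESS EXCLUSIVE lock is only released here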