Drop index locks - postgresql

My Postgres version is 9.6
Today I tried to drop an index from my DB with this command:
drop index index_name;
It caused a lot of locks - the whole application was stuck until I killed all the sessions of the drop (why was it divided into several sessions?).
When I checked the locks, I saw that almost all the blocked sessions were executing this query:
SELECT a.attname, format_type(a.atttypid, a.atttypmod),
       pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod
FROM pg_attribute a
LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
WHERE a.attrelid = '<table_name>'::regclass
  AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;
Does it make sense that this blocks system actions?
So I decided to drop the index with the concurrently option to prevent locks.
drop index concurrently index_name;
I executed it from pgAdmin (because you can't run it inside a normal transaction).
It has been running for over 20 minutes and hasn't finished yet. The index size is about 20 MB.
And when I check the DB for locks, I see that there is a select query on that table, and that is what blocks the drop command.
But when I took this select and executed it in another session, it was very fast (2-3 seconds).
So why is that blocking my drop? Is there another option to do this? Maybe disable the index instead?

drop index and drop index concurrently are usually very fast commands, but both of them, like all DDL commands, require exclusive access to the table.
They differ only in how they try to achieve this exclusive access. A plain drop index simply requests an exclusive lock on the table. This blocks all queries (even selects) that try to use the table after the start of the drop query, and it keeps blocking until the lock is granted - that is, until all transactions touching the table in any way that started before the drop command have finished - and the lock is then held until the transaction with the drop is committed. This explains why your application stopped working.
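If you have to run a plain drop index on a busy system, a common mitigation is to cap how long it may wait for the lock, so it fails fast instead of queueing everything behind it. A minimal sketch (lock_timeout is a standard setting; the '2s' value is just an example):
SET lock_timeout = '2s';
DROP INDEX index_name;  -- aborts with an error if the lock is not acquired within 2 seconds
RESET lock_timeout;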
The concurrently version also needs brief exclusive access, but it works differently: it does not block the table, but waits until no other query is touching it and then does its (usually brief) work. If the table is constantly busy, it may never find such a moment and will wait indefinitely. Also, I suppose it simply retries locking the table every X milliseconds until it succeeds, so a later parallel execution can be luckier and finish faster.
If you see multiple simultaneous sessions trying to drop an index, and you do not expect that, then you have a bug in your application. The database would never do this on its own.
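To see what is waiting for a lock while this happens, you can look for ungranted locks in pg_locks (a sketch using the standard catalog view):
SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
WHERE NOT granted;
While a plain drop index is stuck, you should see its pending AccessExclusiveLock here.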

Related

When will a select query acquire ExclusiveLock and RowExclusiveLock in PostgreSQL?

According to the official documentation, a select query only needs a share lock (ACCESS SHARE), but I found that my select query acquired an exclusive lock. How did that happen? Here is my select query:
select ga.id
from group_access_strategy ga
left outer join person_group pg on ga.person_group_id = pg.id
where pg.id = 3;
What is different from the documentation is that I added a left join.
Most likely you ran another command like ALTER TABLE person_group ... (Access Exclusive) or an UPDATE/INSERT/DELETE (Row Exclusive) in the same transaction. Locks persist until the transaction is committed or rolled back.
So if you ran:
BEGIN; -- BEGIN starts the transaction
UPDATE group_access_strategy SET some_column = 'some data' WHERE id = 1;
SELECT
    ga.id
FROM
    group_access_strategy ga
    LEFT OUTER JOIN person_group pg ON (ga.person_group_id = pg.id)
WHERE
    pg.id = 3;
The UPDATE statement would have created a Row Exclusive Lock that will not be released until you end the transaction, either by
saving all of the changes made since BEGIN:
COMMIT;
or by nullifying the effects of all statements since BEGIN with:
ROLLBACK;
If you're new to Postgres and typically run your queries in an IDE like pgAdmin or DataGrip, the BEGIN / COMMIT / ROLLBACK commands are issued behind the scenes for you when you click the corresponding UI buttons.
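To verify that the lock is still held, you can query pg_locks from another session while the transaction above is open. A sketch (the table name is the one from the question):
SELECT l.pid, l.mode, l.granted
FROM pg_locks l
WHERE l.relation = 'group_access_strategy'::regclass;
You should see a RowExclusiveLock from the UPDATE that disappears as soon as the transaction commits or rolls back.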

Postgresql - Unable to see exact Query or Transaction that blocked my actual Transaction

Note: I'm using DBeaver 21.1.3 as my PostgreSQL development tool.
For my testing, I have created a table:
CREATE TABLE test_sk1(n numeric(2));
Then I disabled auto-commit in DBeaver to verify whether I could see the blocking query for my other transaction.
I then executed an insert on my table:
INSERT INTO test_sk1(n) values(10);
Now this insert transaction is uncommitted, and it holds a lock on the table.
Then I opened another new SQL window and tried an ALTER command on the table:
ALTER TABLE test_sk1 ADD COLUMN v VARCHAR(2);
Now I see the ALTER transaction got blocked.
But when I checked the locks, I saw that the ALTER transaction was blocked by a "show search_path;" transaction, whereas I expected the "INSERT ..." transaction as the blocking query.
I used the query below to find the locks:
SELECT p1.pid,
       p1.query AS blocked_query,
       p2.pid AS blocked_by_pid,
       p2.query AS blocking_query
FROM pg_stat_activity p1, pg_stat_activity p2
WHERE p2.pid IN (SELECT UNNEST(pg_blocking_pids(p1.pid)))
  AND cardinality(pg_blocking_pids(p1.pid)) > 0;
Why is this happening on our databases?
Try using psql for such experiments. DBeaver and many other tools will execute many SQL statements that you didn't intend to run.
The query that you see in pg_stat_activity is just the latest query done by that process, not necessarily the one locking any resource.
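A quick way to spot such blockers: sessions sitting in an open transaction show up as "idle in transaction", and their query column holds whatever they ran last (here, possibly just show search_path). A sketch using the standard pg_stat_activity view:
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction';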

Postgres: SELECT FOR UPDATE does not see new rows after lock release

While trying to support PostgreSQL in my application, I found this strange behaviour.
Preparation:
CREATE TABLE test(id INTEGER, flag BOOLEAN);
INSERT INTO test(id, flag) VALUES (1, true);
Assume two concurrent transactions (Autocommit=false, READ_COMMITTED) TX1 and TX2:
TX1:
UPDATE test SET flag = FALSE WHERE id = 1;
INSERT INTO test(id, flag) VALUES (2, TRUE);
-- (wait, no COMMIT yet)
TX2:
SELECT id FROM test WHERE flag=true FOR UPDATE;
-- waits for TX1 to release lock
Now, if I COMMIT in TX1, the SELECT in TX2 returns an empty cursor.
This is strange to me, because the same experiment in Oracle and MariaDB results in selecting the newly created row (id=2).
I could not find anything about this behaviour in the PG documentation.
Am I missing something?
Is there any way to force the PG server to "refresh" statement visibility after acquiring the lock?
PS: PostgreSQL version 11.1
TX2 scans the table and tries to lock the results.
The scan sees the snapshot of the database from the start of the query, so it cannot see any rows that were inserted (or made eligible in some other way) by concurrent modifications that started after that snapshot was taken.
That is why you cannot see the row with the id 2.
For id 1, that is also true, so the scan finds that row. But the query has to wait until the lock is released. When that finally happens, it fetches the latest committed version of the row and performs the check again, so that row is excluded as well.
This "EvalPlanQual" recheck (to use PostgreSQL jargon) is only performed for rows that were found during the scan but were locked. The second row isn't even found during the scan, so no such processing happens there.
Admittedly, this is a bit odd. But it is not a bug, it is just the way PostgreSQL works.
If you want to avoid such anomalies, use the REPEATABLE READ isolation level. Then you will get a serialization error in such a case and can retry the transaction, thus avoiding inconsistencies like that.
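For example, rerunning TX2 at REPEATABLE READ (a sketch using the test table from the question):
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT id FROM test WHERE flag = true FOR UPDATE;
-- blocks while TX1 is open; after TX1 commits its conflicting UPDATE, this fails with:
-- ERROR: could not serialize access due to concurrent update
ROLLBACK;  -- retry the whole transaction from the start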

Force a "lock" with Postgres and GO

I am new to Postgres, so this may be obvious (or very difficult, I am not sure).
I would like to force a table or row to be "locked" for at least a few seconds at a time, causing a second operation to wait.
I am using golang with "github.com/lib/pq" to interact with the database.
The reason I need this is that I am working on a project that monitors PostgreSQL. Thanks for any help.
You can also use select ... for update to lock a row or rows for the length of the transaction.
Basically, it's like:
begin;
select * from foo where quatloos = 100 for update;
update foo set feens = feens + 1 where quatloos = 100;
commit;
This takes an exclusive row-level lock on the rows of foo where quatloos = 100. Once the select for update has run, any other transaction attempting to access those rows will be blocked until a commit or rollback is issued.
Ideally, these locks should be held for as short a time as possible.
See: https://www.postgresql.org/docs/current/static/explicit-locking.html
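Since the goal here is to hold a lock for a few seconds on purpose, one option (a sketch reusing the example names above; pg_sleep is a standard function) is to keep the transaction open artificially:
BEGIN;
SELECT * FROM foo WHERE quatloos = 100 FOR UPDATE;  -- take the row lock
SELECT pg_sleep(5);                                 -- hold it for about 5 seconds
COMMIT;                                             -- lock released here
Any session that touches the same rows during the sleep will wait, which is exactly the behaviour you want to observe from the monitoring side.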

PostgreSQL, ODBC and temp table

Could you tell me why this query works in pgAdmin, but doesn't with software using ODBC:
CREATE TEMP TABLE temp296 WITH (OIDS) ON COMMIT DROP AS
SELECT age_group AS a, male AS m, mode AS t, AVG(speed) AS avg_speed
FROM person JOIN info ON person.ppid = info.ppid
WHERE info.mode = 2
GROUP BY age_group, male, mode;
SELECT age_group,male,mode,
CASE
WHEN age_group=1 AND male=0 THEN (info_dist_km/(SELECT avg_speed FROM temp296 WHERE a=1 AND m=0))*60
ELSE 0
END AS info_durn_min
FROM person JOIN info ON person.ppid=info.ppid
WHERE info.mode IN (7) AND info.info_dist_km>2;
I got "42P01: ERROR: relation "temp296" does not exist".
I have also tried with "BEGIN; [...] COMMIT;" - I got "HY010: The cursor is open".
PostgreSQL 9.0.10, compiled by Visual C++ build 1500, 64-bit
psqlODBC 09.01.0200
Windows 7 x64
I think the reason it did not work for you is that by default ODBC works in autocommit mode. If you executed your statements serially, the very first statement
CREATE TEMP TABLE temp296 ON COMMIT DROP ... ;
must have been autocommitted after finishing, and thus dropped your temp table.
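In other words, ON COMMIT DROP is only useful when both statements run inside one explicit transaction. Conceptually (a simplified, self-contained sketch, not your actual query):
BEGIN;
CREATE TEMP TABLE temp296 ON COMMIT DROP AS
  SELECT 1 AS a, 0 AS m, 2 AS t, 42.0 AS avg_speed;
SELECT avg_speed FROM temp296 WHERE a = 1 AND m = 0;  -- works: same transaction
COMMIT;  -- temp296 is dropped here; any later reference fails with 42P01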
Unfortunately, ODBC does not support directly using statements like BEGIN TRANSACTION; ... COMMIT; to handle transactions.
Instead, you can disable auto-commit using SQLSetConnectAttr function like this:
SQLSetConnectAttr(hdbc, SQL_ATTR_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF, 0);
But, after you do that, you must remember to commit any change by using SQLEndTran like this:
SQLEndTran(SQL_HANDLE_DBC, hdbc, SQL_COMMIT);
While the WITH approach worked for you as a workaround, it is worth noting that using transactions appropriately is faster than running in autocommit mode.
For example, if you need to insert many rows into a table (thousands or millions), using transactions can be hundreds or thousands of times faster than autocommit.
It is not uncommon for temporary tables to be unavailable via SQLPrepare/SQLExecute in ODBC, i.e., in prepared statements; MS SQL Server, for example, behaves like this. The solution is usually to use SQLExecDirect.