Acquiring advisory locks in postgres - postgresql

I think there must be something basic I'm not understanding about advisory locking in postgres. If I enter the following commands on the psql command line client, the function returns true both times:
SELECT pg_try_advisory_lock(20); --> true
SELECT pg_try_advisory_lock(20); --> true
I was expecting that the second command should return false, since the lock should already have been acquired. Oddly, I do get the following, suggesting that the lock has been acquired twice:
SELECT pg_advisory_unlock(20); --> true
SELECT pg_advisory_unlock(20); --> true
SELECT pg_advisory_unlock(20); --> false
So I guess my question is, how does one acquire an advisory lock in a way that stops it being acquired again?

What happens if you try this from two different PostgreSQL sessions? Advisory locks are held per session, and the same session can acquire the same lock more than once (which is why you needed two unlocks before the third one returned false).
Check out more in the docs.
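For illustration, a hedged sketch of what you should see with two separate connections (the session labels are just for readability; the key 20 is the one from the question):

-- session A
SELECT pg_try_advisory_lock(20);   --> true   (lock acquired)

-- session B (a different connection)
SELECT pg_try_advisory_lock(20);   --> false  (session A still holds the lock)

-- session A
SELECT pg_advisory_unlock(20);     --> true   (lock released)

-- session B
SELECT pg_try_advisory_lock(20);   --> true   (now it succeeds)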

My first impression of advisory locks was similar. I also expected the second query (SELECT pg_try_advisory_lock(20)) to return false (because the first one already got the lock). But that query only confirmed that the bigint value 20 is locked; the interpretation is up to the user.
Imagine advisory locks as a table where you can store a value (normally a bigint) and take a lock on that value. It is not an explicit lock on any relation and no transaction will be delayed. It is up to you how to interpret and use the result - and it is not blocking.
I use it in my projects with the two-integer variant: SELECT pg_try_advisory_lock(classId, objId), where both parameters are integers.
To make it work across more than one table, just use the OID of the table as classId and the primary key id (here 17) as objId:
SELECT pg_try_advisory_lock((SELECT 'first_table'::regclass::oid)::integer, 17);
In this example "first_table" is the name of the table and the second integer is the primary key id (here: 17).
Using a bigint as the single parameter allows a wider range of ids, but if you use it with "second_table" then the id 17 is locked as well (because you locked the number 17, not a reference to a specific row in a specific table).
It took me some time to figure that out, so hopefully this helps in understanding the inner workings of advisory locks.
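For completeness, releasing the lock uses the same pair of keys; a small sketch with the hypothetical table and id from above:

SELECT pg_advisory_unlock(('first_table'::regclass::oid)::integer, 17);

Because "second_table" has a different OID, SELECT pg_try_advisory_lock(('second_table'::regclass::oid)::integer, 17); does not collide with the lock above, which is exactly the advantage over the single-bigint form.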

Related

Should I lock a PostgreSQL table when invoking setval for sequence with the max ID function?

I have the following SQL script which sets the sequence value corresponding to max value of the ID column:
SELECT SETVAL('mytable_id_seq', COALESCE(MAX(id), 1)) FROM mytable;
Should I lock 'mytable' in this case in order to prevent the ID changing in a parallel request, as in the example below?

request #1                request #2
MAX(id) = 5
                          inserts id 6
SETVAL = 5

Or is setval(max(id)) an atomic operation?
Your suspicion is right, this approach is subject to race conditions.
But locking the table won't help, because it won't keep a concurrent transaction from fetching new sequence values. This transaction will block while the table is locked, but will happily continue inserting once the lock is gone, using a sequence value it got while the table was locked.
If it were possible to lock sequences, that might be a solution, but PostgreSQL offers no way to lock a sequence.
I can think of two solutions:
Remove all privileges on the sequence while you modify it, so that concurrent requests to the sequence will fail. That causes errors, of course.
The pragmatic way: use
SELECT SETVAL('mytable_id_seq', COALESCE(MAX(id), 1) + 100000) FROM mytable;
Here 100000 is a value that is safely bigger than the number of rows that might get inserted while your operation is running.
You can use two requests in the same transaction:
ALTER SEQUENCE mytable_id_seq RESTART;
SELECT SETVAL('mytable_id_seq', COALESCE(MAX(id), 1)) FROM mytable;
Note: the first command will lock the sequence for other transactions
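Taken together, a minimal sketch of that approach (table and sequence names from the question; the lock taken by ALTER SEQUENCE is held until the end of the transaction):

BEGIN;
ALTER SEQUENCE mytable_id_seq RESTART;   -- blocks concurrent nextval() until COMMIT
SELECT SETVAL('mytable_id_seq', COALESCE(MAX(id), 1)) FROM mytable;
COMMIT;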

close client before finishing command postgresql

Let's say that I would like to create an index. It will take some time. I use pgAdmin. Suppose that while the query is being executed, pgAdmin crashes for some reason (for example, the computer is restarted).
What would be the status of that index? Would it keep on being created and eventually succeed, or would it fail immediately, or fail after some time?
Is there any way to check the status of an index that is being created? (I'm using Postgres version 10.x)
It is hard to answer this without knowing the reason for the application crash. If it was not caused by a server failure, it is likely that the index was created correctly. You can check this by querying the system catalog pg_index; you have to know the index name, e.g.:
select indexrelid::regclass, indisvalid
from pg_index
where indexrelid::regclass::text = 'my_table_unique_col_key'
Per the documentation:
indisvalid - If true, the index is currently valid for queries. False means the index is possibly incomplete: it must still be modified by INSERT/UPDATE operations, but it cannot safely be used for queries. If it is unique, the uniqueness property is not guaranteed true either.
If the index has not been created yet (or was not created at all due to a query failure), the above query returns no rows. You can check whether the query that creates the index is still running (see Dynamic Statistics Views):
select *
from pg_stat_activity
where query ilike 'create index%'
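If the pg_index check above shows indisvalid = false (which happens, for example, when a CREATE INDEX CONCURRENTLY fails partway through), the leftover index cannot be used and has to be rebuilt or recreated; a hedged sketch reusing the index name from above:

-- rebuild the invalid index in place
REINDEX INDEX my_table_unique_col_key;
-- or drop it and run the original CREATE INDEX statement again
-- DROP INDEX my_table_unique_col_key;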

How can pgsql sequence be undefined when I just called nextval?

I've got an app built on top of PostgreSQL, which makes use of a custom sequence. I think I understand sequences pretty well by now: they are non-transactional, currval is defined only within the current session, etc. But I don't understand this:
2015-10-13 10:37:16 SQLSelect: SELECT nextval('commit_id_seq')
2015-10-13 10:37:16 commit_id_seq: 57
2015-10-13 10:37:16 SQLExecute: UPDATE bid SET is_archived=false,company_id=1436,contact_id=15529,...(etc)...,sharing_policy='' WHERE id = 56229
2015-10-13 10:37:16 ERROR: ERROR: currval of sequence "commit_id_seq" is not yet defined in this session
CONTEXT: SQL statement "INSERT INTO history (table_name, record_id, sec_user_id, created, action, notes, status, before, after, commit_id)
SELECT TG_TABLE_NAME, rec.id, (SELECT id FROM sec_user WHERE name = CURRENT_USER), now(), SUBSTR(TG_OP,1,1), note, stat, oldH, newH, currval('commit_id_seq')"
PL/pgSQL function log_to_history() line 28 at SQL statement
We log every call to the database, and in the case of the SELECT nextval, I also log the result. The above are the exact calls, except that I trimmed the UPDATE statement (because the original is really long).
So, you can see that we just called nextval on the sequence, got a reasonable number back, and then we do an UPDATE that invokes a trigger function that attempts to use currval on that sequence... and it fails, claiming currval is not defined.
Note that this doesn't usually happen, but once it does start happening, it does so consistently (perhaps until the user disconnects from the DB).
How can this be? And what can I do about it?
Your UPDATE statement obviously calls a trigger. The most plausible cause of this error is that the trigger function is in a different schema from where the sequence is defined and the schema of the sequence is not in the search_path. That gives you two options to resolve this:
Make the schema of the sequence visible to the trigger function using SET search_path TO .... Note that this will make all objects in the schema of the sequence visible, which may be something of a security risk, depending on your database design.
Schema-qualify the sequence name in the trigger function: currval('my_schema.commit_id_seq').
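A reduced sketch of the second option, assuming the sequence lives in a hypothetical schema called my_schema, with the history table trimmed to a few columns for brevity:

CREATE OR REPLACE FUNCTION log_to_history() RETURNS trigger AS
$$
BEGIN
    INSERT INTO history (table_name, record_id, commit_id)
    VALUES (TG_TABLE_NAME, NEW.id, currval('my_schema.commit_id_seq'));  -- schema-qualified
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;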
Another plausible cause is connection pooling on the application side. Log the "session ID" (really just the start time and PID of the current session) by adding %c to the log_line_prefix parameter in postgresql.conf. In PostgreSQL every command runs in its own transaction unless a transaction is explicitly established, and connection pooling software typically hands out connections per transaction; outside of a transaction there is no guarantee that consecutive statements run in the same session. If that is the case, you can wrap the whole set of commands in a BEGIN ... COMMIT block (or use the corresponding call in your pooling software), or better yet, change your code so it does not depend on a previous nextval() call.

Options to retrieve the current (at the moment the query runs) sequence value

How is it possible to get the current sequence value in postgresql 8.4?
Note: I need the value for some statistics - just retrieve and store it. Concurrency and race conditions from manually incrementing the value are not relevant to the question.
Note 2: The sequence is shared across several tables
Note 3: currval won't work because of:
Return the value most recently obtained by nextval for this sequence in the current session
ERROR: currval of sequence "<sequence name>" is not yet defined in this session
My current idea is to parse the DDL, which is awkward.
You may use:
SELECT last_value FROM sequence_name;
Update:
this is documented in the CREATE SEQUENCE statement:
Although you cannot update a sequence directly, you can use a query
like:
SELECT * FROM name;
to examine the parameters and current state of a
sequence. In particular, the last_value field of the sequence shows
the last value allocated by any session. (Of course, this value might
be obsolete by the time it's printed, if other sessions are actively
doing nextval calls.)
If the sequence is being used for unique ids in a table, you can simply do this:
select max(id) from mytable;
The most efficient way, although postgres specific, is:
select currval('mysequence');
although technically this returns the last value generated by a call to nextval('mysequence') in the current session, which may not necessarily have been used by the caller (and if unused it would leave gaps in an auto-increment id column).
As of some point in time, a secret function (which is undocumented ... still) does exactly this:
SELECT pg_sequence_last_value('schema.your_sequence_name');
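Note that pg_sequence_last_value() only exists in versions far newer than 8.4 and returns NULL for a sequence that nextval() has never touched. On those versions the pg_sequences view exposes the same information; a hedged sketch:

SELECT schemaname, sequencename, last_value
FROM   pg_sequences
WHERE  sequencename = 'your_sequence_name';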

How do I do large non-blocking updates in PostgreSQL?

I want to do a large update on a table in PostgreSQL, but I don't need the transactional integrity to be maintained across the entire operation, because I know that the column I'm changing is not going to be written to or read during the update. I want to know if there is an easy way in the psql console to make these types of operations faster.
For example, let's say I have a table called "orders" with 35 million rows, and I want to do this:
UPDATE orders SET status = null;
To avoid being diverted to an offtopic discussion, let's assume that all the values of status for the 35 million rows are currently set to the same (non-null) value, thus rendering an index useless.
The problem with this statement is that it takes a very long time to go into effect (solely because of the locking), and all changed rows are locked until the entire update is complete. This update might take 5 hours, whereas something like
UPDATE orders SET status = null WHERE (order_id > 0 and order_id < 1000000);
might take 1 minute. Over 35 million rows, doing the above and breaking it into 35 chunks would only take 35 minutes and save me 4 hours and 25 minutes.
I could break it down even further with a script (using pseudocode here):
for (i = 0; i < 3500; i++) {
    db_operation("UPDATE orders SET status = null"
                 + " WHERE (order_id > " + (i*1000)
                 + " AND order_id < " + ((i+1)*1000) + ")");
}
This operation might complete in only a few minutes, rather than 35.
So that comes down to what I'm really asking. I don't want to write a freaking script to break down operations every single time I want to do a big one-time update like this. Is there a way to accomplish what I want entirely within SQL?
Column / Row
... I don't need the transactional integrity to be maintained across
the entire operation, because I know that the column I'm changing is
not going to be written to or read during the update.
Any UPDATE in PostgreSQL's MVCC model writes a new version of the whole row. If concurrent transactions change any column of the same row, time-consuming concurrency issues arise. Details in the manual. Knowing the same column won't be touched by concurrent transactions avoids some possible complications, but not others.
Index
To avoid being diverted to an offtopic discussion, let's assume that
all the values of status for the 35 million rows are currently set
to the same (non-null) value, thus rendering an index useless.
When updating the whole table (or major parts of it) Postgres never uses an index. A sequential scan is faster when all or most rows have to be read. On the contrary: Index maintenance means additional cost for the UPDATE.
Performance
For example, let's say I have a table called "orders" with 35 million
rows, and I want to do this:
UPDATE orders SET status = null;
I understand you are aiming for a more general solution (see below). But to address the actual question asked: this can be dealt with in a matter of milliseconds, regardless of table size:
ALTER TABLE orders DROP column status
, ADD column status text;
The manual (up to Postgres 10):
When a column is added with ADD COLUMN, all existing rows in the table
are initialized with the column's default value (NULL if no DEFAULT
clause is specified). If there is no DEFAULT clause, this is merely a metadata change [...]
The manual (since Postgres 11):
When a column is added with ADD COLUMN and a non-volatile DEFAULT
is specified, the default is evaluated at the time of the statement
and the result stored in the table's metadata. That value will be used
for the column for all existing rows. If no DEFAULT is specified,
NULL is used. In neither case is a rewrite of the table required.
Adding a column with a volatile DEFAULT or changing the type of an
existing column will require the entire table and its indexes to be
rewritten. [...]
And:
The DROP COLUMN form does not physically remove the column, but
simply makes it invisible to SQL operations. Subsequent insert and
update operations in the table will store a null value for the column.
Thus, dropping a column is quick but it will not immediately reduce
the on-disk size of your table, as the space occupied by the dropped
column is not reclaimed. The space will be reclaimed over time as
existing rows are updated.
Make sure you don't have objects depending on the column (foreign key constraints, indexes, views, ...); you would need to drop and recreate those. Barring that, tiny operations on the system catalog table pg_attribute do the job. This requires an exclusive lock on the table, which may be a problem under heavy concurrent load (as Buurman emphasizes in his comment). Apart from that, the operation is a matter of milliseconds.
If you have a column default you want to keep, add it back in a separate command. Doing it in the same command applies it to all rows immediately. See:
Add new column without table lock?
To actually apply the default, consider doing it in batches:
Does PostgreSQL optimize adding columns with non-NULL DEFAULTs?
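Putting the drop/add trick together with a default that is added back separately, a sketch (the column type text and the default value 'new' are placeholders):

ALTER TABLE orders
  DROP COLUMN status
, ADD  COLUMN status text;                -- metadata change only, no table rewrite

ALTER TABLE orders
  ALTER COLUMN status SET DEFAULT 'new';  -- separate command: affects only future rows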
General solution
dblink has been mentioned in another answer. It allows access to "remote" Postgres databases in implicit separate connections. The "remote" database can be the current one, thereby achieving "autonomous transactions": what the function writes in the "remote" db is committed and can't be rolled back.
This makes it possible to run a single function that updates a big table in smaller parts, with each part committed separately. It avoids building up transaction overhead for very large numbers of rows and, more importantly, releases locks after each part. Concurrent operations can proceed without much delay and deadlocks become less likely.
If you don't have concurrent access, this is hardly useful - except to avoid ROLLBACK after an exception. Also consider SAVEPOINT for that case.
Disclaimer
First of all, lots of small transactions are actually more expensive. This only makes sense for big tables. The sweet spot depends on many factors.
If you are not sure what you are doing: a single transaction is the safe method. For this to work properly, concurrent operations on the table have to play along. For instance: concurrent writes can move a row to a partition that's supposedly already processed. Or concurrent reads can see inconsistent intermediary states. You have been warned.
Step-by-step instructions
The additional module dblink needs to be installed first:
How to use (install) dblink in PostgreSQL?
Setting up the connection with dblink very much depends on the setup of your DB cluster and the security policies in place. It can be tricky. A related, later answer with more on how to connect with dblink:
Persistent inserts in a UDF even if the function aborts
Create a FOREIGN SERVER and a USER MAPPING as instructed there to simplify and streamline the connection (unless you have one already).
Assuming a serial PRIMARY KEY with or without some gaps.
CREATE OR REPLACE FUNCTION f_update_in_steps()
  RETURNS void AS
$func$
DECLARE
   _step int;  -- size of step
   _cur  int;  -- current ID (starting with minimum)
   _max  int;  -- maximum ID
BEGIN
   SELECT INTO _cur, _max  min(order_id), max(order_id) FROM orders;
                                        -- 100 slices (steps) hard coded
   _step := ((_max - _cur) / 100) + 1;  -- rounded, possibly a bit too small
                                        -- +1 to avoid endless loop for 0
   PERFORM dblink_connect('myserver');  -- your foreign server as instructed above

   FOR i IN 0..200 LOOP                 -- 200 >> 100 to make sure we exceed _max
      PERFORM dblink_exec(
         $$UPDATE public.orders
           SET    status = 'foo'
           WHERE  order_id >= $$ || _cur || $$
           AND    order_id <  $$ || _cur + _step || $$
           AND    status IS DISTINCT FROM 'foo'$$);  -- avoid empty update

      _cur := _cur + _step;
      EXIT WHEN _cur > _max;            -- stop when done (never loop till 200)
   END LOOP;

   PERFORM dblink_disconnect();
END
$func$ LANGUAGE plpgsql;
Call:
SELECT f_update_in_steps();
You can parameterize any part according to your needs: the table name, column name, value, ... just be sure to sanitize identifiers to avoid SQL injection:
Table name as a PostgreSQL function parameter
Avoid empty UPDATEs:
How do I (or can I) SELECT DISTINCT on multiple columns?
Postgres uses MVCC (multi-version concurrency control), thus avoiding any locking if you are the only writer; any number of concurrent readers can work on the table, and there won't be any locking.
So if it really takes 5h, it must be for a different reason (e.g. that you do have concurrent writes, contrary to your claim that you don't).
You should delegate this column to another table like this:
create table order_status (
    order_id int not null references orders(order_id) primary key,
    status int not null
);
Then your operation of setting status=NULL will be instant:
truncate order_status;
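With that layout, readers treat a missing row in order_status as a NULL status; a sketch of how the status would typically be read back:

SELECT o.*, os.status
FROM   orders o
LEFT   JOIN order_status os USING (order_id);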
First of all - are you sure that you need to update all rows?
Perhaps some of the rows already have status NULL?
If so, then:
UPDATE orders SET status = null WHERE status is not null;
As for partitioning the change - that's not possible in pure SQL; all the updates run in a single transaction.
One possible way to do it in "pure sql" would be to install dblink, connect to the same database using dblink, and then issue a lot of updates over dblink, but it seems like overkill for such a simple task.
Usually just adding a proper WHERE clause solves the problem. If it doesn't - just partition it manually. Writing a full script is too much - you can usually do it with a simple one-liner:
perl -e '
  for (my $i = 0; $i <= 3500000; $i += 1000) {
    printf "UPDATE orders SET status = null WHERE status is not null
      and order_id between %u and %u;\n", $i, $i + 999;
  }
'
I wrapped lines here for readability, generally it's a single line. Output of above command can be fed to psql directly:
perl -e '...' | psql -U ... -d ...
Or first to file and then to psql (in case you'd need the file later on):
perl -e '...' > updates.partitioned.sql
psql -U ... -d ... -f updates.partitioned.sql
I am by no means a DBA, but a database design where you'd frequently have to update 35 million rows might have… issues.
A simple WHERE status IS NOT NULL might speed up things quite a bit (provided you have an index on status) – not knowing the actual use case, I'm assuming if this is run frequently, a great part of the 35 million rows might already have a null status.
However, you can loop inside the database with a PL/pgSQL function using the LOOP statement. I'll just cook up a small example:
CREATE OR REPLACE FUNCTION nullstatus(count INTEGER) RETURNS integer AS $$
DECLARE
    i INTEGER := 0;
BEGIN
    FOR i IN 0..(count/1000 + 1) LOOP
        UPDATE orders SET status = null WHERE (order_id > (i*1000) AND order_id < ((i+1)*1000));
        RAISE NOTICE 'Count: % and i: %', count, i;
    END LOOP;
    RETURN 1;
END;
$$ LANGUAGE plpgsql;
It can then be run by doing something akin to:
SELECT nullstatus(35000000);
You might want to select the row count, but beware that the exact row count can take a lot of time. The PostgreSQL wiki has an article about slow counting and how to avoid it.
Also, the RAISE NOTICE part is just there to keep track on how far along the script is. If you're not monitoring the notices, or do not care, it would be better to leave it out.
Are you sure this is because of locking? I don't think so, and there are many other possible reasons. To find out, you can always try to do just the locking. Try this:
BEGIN;
SELECT NOW();
SELECT * FROM orders FOR UPDATE;
SELECT NOW();
ROLLBACK;
To understand what's really happening you should run an EXPLAIN first (EXPLAIN UPDATE orders SET status...) and/or EXPLAIN ANALYZE. Maybe you'll find out that you don't have enough memory to do the UPDATE efficiently. If so, SET work_mem TO 'xxxMB'; might be a simple solution.
Also, tail the PostgreSQL log to see if any performance-related problems occur.
I would use CTAS (create table as select):
begin;
create table T as select col1, col2, ..., <new value>, colN from orders;
drop table orders;
alter table T rename to orders;
commit;
Some options that haven't been mentioned:
Use the new-table trick. Probably what you'd have to do in your case is write some triggers so that changes to the original table are also propagated to your table copy, something like that... (percona is an example of a tool that does it the trigger way). Another option might be the "create a new column, then replace the old one with it" trick, to avoid locks (unclear if it helps with speed).
Possibly calculate the max ID, then generate all the queries you need and pass them in as a single batch, like update X set Y = NULL where ID < 10000 and ID >= 0; update X set Y = NULL where ID < 20000 and ID >= 10000; ... That might not do as much locking and would still be all SQL, though you do have extra logic up front to do it :( (see the sketch below).
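One way to keep that "generate all the queries" idea inside the database tooling is to build the statements with generate_series() and format() and execute them with psql's \gexec (available since psql 9.6); a hedged sketch using the numbers from the question:

SELECT format('UPDATE orders SET status = NULL WHERE order_id >= %s AND order_id < %s',
              i, i + 10000)
FROM   generate_series(0, 35000000, 10000) AS i
\gexec

Each generated UPDATE runs as its own statement in autocommit mode, so every chunk commits (and releases its locks) independently.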
PostgreSQL version 11 handles this for you automatically with the Fast ALTER TABLE ADD COLUMN with a non-NULL default feature. Please do upgrade to version 11 if possible.
An explanation is provided in this blog post.