postgres acquire row-level lock - postgresql

How do I acquire a row-specific lock in Postgres?
According to the documentation, I can acquire a table lock, but I don't want to lock the whole table; I only care about monitoring a specific row in a given transaction.
My use case is that I want to:
- read a row
- perform some (potentially expensive, potentially race-prone) operations that prepare new rows for another table
- when I'm finally ready to insert/update these new rows, make sure no other process beat me to the punch, so I think I want to:
  - acquire a row-level lock on the same row I read earlier
  - check whether the version (or a state column like that) is the same as when I read it earlier
  - if it is the same, insert/update into the other table, then increment this version, then release the lock
  - if it is not the same version, abort - I don't care about my possible new/updated rows, they are stale and I don't want to save them (so: do nothing, release the lock)
But, I don't want to lock the entire table for this whole period; I just want to lock that one specific row. I cannot figure out how to do that based on numerous blog posts and the Postgres documentation.
All I want is an example query that shows me how to explicitly acquire a row-level lock.
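For reference, SELECT ... FOR UPDATE is the explicit row-level lock being asked for: it locks only the rows it returns, not the table. A minimal sketch of the whole read-check-write cycle, where the table widget, its id/version columns, and widget_summary are invented for illustration:

BEGIN;

-- Hypothetical schema. Lock only the row we read earlier; any other
-- transaction that tries to lock or update this row now blocks until
-- we commit or roll back.
SELECT version FROM widget WHERE id = 123 FOR UPDATE;

-- In application code: compare the version just returned with the one
-- read at the start. If they match, save the prepared rows and bump
-- the version; if they differ, ROLLBACK and discard the work.
INSERT INTO widget_summary (widget_id, note) VALUES (123, 'recomputed');
UPDATE widget SET version = version + 1 WHERE id = 123;

COMMIT;  -- the row-level lock is released here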

Related

Does Postgres guarantee to lock rows in the order of supplied update-statements?

I'd like to do batch updates in Postgres. Sometimes the batch may contain multiple update-statements for the same record. (*)
To this end I need to be sure that Postgres locks rows based on the order in which the update-statements are supplied.
Is this guaranteed?
To be clear, I'm sending a sequence of single row update-statements, so not a single multi-row update-statement. E.g.:
update A set x='abc', dt='<timeN>' where id='123';
update A set x='def', dt='<timeN+1>' where id='123';
update A set x='ghi', dt='<timeN+2>' where id='123';
*) This might seem redundant: why not just save the last one? However, I have defined an after-trigger on the table so that history is created in a different table. Therefore I need the multiple updates.
The rows will definitely be locked in the order of the UPDATE statements.
Moreover, locks only affect concurrent transactions, so if all the UPDATEs take place in one database session, you don't have to be afraid of being blocked by a lock.
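As an aside, the after-trigger the question mentions could look roughly like this; the table a (with columns id, x, dt) matches the statements above, and a_history is an invented name. Because the three UPDATEs run sequentially in one session, the trigger fires in the same order and the history rows come out in statement order:

-- Hypothetical history table:
CREATE TABLE a_history (
    id         text,
    x          text,
    dt         text,
    changed_at timestamptz NOT NULL DEFAULT now()
);

CREATE FUNCTION log_a_history() RETURNS trigger AS $$
BEGIN
    -- Record every new row version in the history table.
    INSERT INTO a_history (id, x, dt) VALUES (NEW.id, NEW.x, NEW.dt);
    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER a_history_trg
AFTER UPDATE ON a
FOR EACH ROW EXECUTE PROCEDURE log_a_history();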

How to update an aggregate table in a trigger procedure while taking care of proper concurrency?

For illustration, say I'm updating a table ProductOffers and their prices. Mutations to this table are of the form: add new ProductOffer, change price of existing ProductOffer.
Based on the above changes, I'd like to update a Product-table which holds pricing info per product aggregated over all offers.
It seems logical to implement this using a row-based update/insert trigger, where the trigger runs a procedure creating/updating a Product row.
I'd like to handle concurrent updates (and thus triggers) properly. I.e.: updating ProductOffers of the same Product concurrently could lead to wrong aggregate values (because multiple triggered procedures would concurrently attempt to insert/update the same Product row).
It seems I cannot use row-based locking on the product-table (i.e.: select .. for update) because it's not guaranteed that a particular product-row already exists. Instead the first time around a Product row must be created (instead of updated) once a ProductOffer triggers the procedure. Afaik, row-locking can't work with new rows to be inserted, which totally makes sense.
So where does that leave me? Would I need to roll my own optimistic locking scheme? This would need to include:
- check that the row does not exist => create a new row, but fail if it already exists (which is possible if 2 triggers concurrently try to create the row); try again afterwards with an update
- check that the row exists and has version=x => update the row, but fail if row.version != x; try again afterwards
Would the above work, or are there better / more out-of-the-box solutions?
EDIT:
For future ref: found official example which exactly illustrates what I want to accomplish: Example 39-6. A PL/pgSQL Trigger Procedure For Maintaining A Summary Table
Things are much simpler than you think they are, thanks to the I in ACID.
The trigger you envision will run in the same transaction as the data modification that triggered it, and each modification to the aggregate table will first lock the row that it wants to update with an exclusive row-level lock.
So if two concurrent transactions cause an UPDATE on the same row in the aggregate table, the first transaction will get the lock and proceed, while the second transaction will have to wait until the first transaction commits (or rolls back) before it can get the lock on the row and modify it.
So data modifications that update the same row in the aggregate table will effectively be serialized, which may hurt performance, but guarantees exact results.
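On PostgreSQL 9.5 or later, a sketch of such a trigger using INSERT ... ON CONFLICT also handles the first-row race from the question, because the conflicting aggregate row is locked and concurrent trigger invocations serialize on it. All table and column names here are invented, and the aggregate is deliberately simplified (it only tracks the lowest price ever seen; a price increase would need a recompute):

-- Hypothetical aggregate table:
CREATE TABLE product (
    product_id int PRIMARY KEY,
    min_price  numeric NOT NULL
);

CREATE FUNCTION refresh_product_price() RETURNS trigger AS $$
BEGIN
    -- Insert the aggregate row if it is missing, otherwise update it.
    -- ON CONFLICT locks the existing row, so concurrent triggers for
    -- the same product wait for each other.
    INSERT INTO product (product_id, min_price)
    VALUES (NEW.product_id, NEW.price)
    ON CONFLICT (product_id)
    DO UPDATE SET min_price = LEAST(product.min_price, EXCLUDED.min_price);
    RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER product_offer_aggregate
AFTER INSERT OR UPDATE ON product_offer
FOR EACH ROW EXECUTE PROCEDURE refresh_product_price();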

Does DROP COLUMN block on a PostgreSQL database

I have the following column in a PostgreSQL database:
column | character varying(10) | not null default 'default'::character varying
I want to drop it, but the database is huge and if it blocks updates for an extended period of time I will be publicly flogged, and likely drawn and quartered. I found a blog post from Braintree, here, which suggests this is a safe operation, but it's a little vague.
The ALTER TABLE command needs to acquire an ACCESS EXCLUSIVE lock on the table, which will block everything trying to access that table, including SELECTs, and, as the name implies, needs to wait for existing operations to finish so it can be exclusive.
So, if your table is extremely busy, it may not get an opportunity to actually acquire the exclusive lock, and will simply block for what is functionally forever.
It also depends on whether this column has a lot of indexes and dependencies. If there are dependencies (i.e. foreign keys or views), you'll need to add CASCADE to the DROP COLUMN, which will increase the work that needs to be done and the amount of time the exclusive lock must be held.
So, it's not risk free. However, you should know fairly quickly after trying it whether it's likely to block for a long time. If you can afford a minute or two of potentially blocking that table, it's worth a shot -- try the drop and see. If it doesn't complete within a relatively short period of time, abort the command, and you'll likely need to schedule some downtime for at least the app(s) that are hammering the table. (You can take a look at the server activity and the lock activity to try to surmise what's hammering that table.)
does drop column block a PostgreSQL database
The answer to that is no, because it does not block the database.
However, any DDL statement requires an exclusive lock on the table being changed, which means no other transaction can access that table while the statement runs. So the table is "blocked", not the database.
However the time to drop a column is really very short, because the column isn't physically removed from the table but only marked as no longer there.
And don't forget to commit the DDL statement (if you have turned autocommit off), otherwise the table will be blocked until you commit your change.
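Putting both answers together, a cautious way to attempt the drop is to cap how long the ALTER TABLE may wait in the lock queue; if it cannot get the lock in time, it errors out and the table is untouched. The table and column names below are placeholders, and lock_timeout is available from PostgreSQL 9.3 on:

BEGIN;
SET LOCAL lock_timeout = '2s';  -- give up instead of queueing forever
ALTER TABLE big_table DROP COLUMN obsolete_col;  -- hypothetical names
COMMIT;  -- don't forget this if autocommit is off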

Online backup blocking truncate table

It's documented that in DB2 the TRUNCATE statement is not compatible with online backup because it gets a Z lock on the table and prevents an online backup from running concurrently.
The lock wait happens when a truncate tries to get a shared lock on an internal online backup object.
Since this is by design in the product I will have to go for workarounds, so this thread is not about a solution, but about why they can't work together. I didn't find a reasonable explanation of why there is such a limitation in DB2.
Any insights?
Thanks,
Luciano Moreira
from http://www.ibm.com/developerworks/data/library/techarticle/dm-0501melnyk/
When a table holds a Z lock, no concurrent application can read or
update data in that table.
So now we know that a Z lock means exclusive access to a table, denying read and write access to the table.
from http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0053474.html
Exclusive Access: No other session can have a cursor open on the table, or a lock held on the table (SQLSTATE 25001).
from https://sites.google.com/site/umeshdanderdbms/difference-between-truncate-and-delete
Delete is a logged operation, whereas truncate makes the table empty at the container level.
(Logged operation – DML operations are written to logs (the redo log in Oracle, the transaction log in DB2, etc.), where they are kept for commit or rollback.)
This is the most interesting part. Truncate just 'forgets' the content of the table, whereas delete removes rows one by one, processing all triggers, bells, and whistles. Therefore, when you truncate, all reading cursors would become invalid. To prevent stupid stuff like that, you can only completely empty a table when nobody is trying to access it. Online backup obviously needs to read the table. Therefore it is not possible to have both accessing the same table at the same time.
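To make the contrast concrete, a sketch in DB2 LUW syntax (the table name is a placeholder; TRUNCATE requires the IMMEDIATE keyword and must be the first statement in its unit of work):

-- Logged, row by row, fires delete triggers, can be rolled back:
DELETE FROM staging_data;

-- Empties the table at the container level with minimal logging
-- (staging_data is a hypothetical table):
TRUNCATE TABLE staging_data IMMEDIATE;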

How to wait during SELECT that pending INSERT commit?

I'm using PostgreSQL 9.2 in a Windows environment.
I'm in a 2PC (2 phase commit) environment using MSDTC.
I have a client application that starts a transaction at the SERIALIZABLE isolation level, inserts a new row of data in a table for a specific foreign key value (there is an index on the column), and votes for completion of the transaction (the transaction is PREPARED). The transaction will be COMMITTED by the Transaction Coordinator.
Immediately after that, outside of a transaction, the same client requests all the rows for this same specific foreign key value.
Because there may be a delay before the previous transaction is really committed, the SELECT may return a previous snapshot of the data. In fact, it does happen sometimes, and this is problematic. Of course the application may be redesigned, but until then, I'm looking for a lock solution. Advisory lock?
I already solved the problem for UPDATEs on specific rows by using SELECT...FOR SHARE, and it works well. The SELECT waits until the transaction commits and returns the old and new rows.
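For reference, that working UPDATE case looks roughly like this (all names are placeholders). The reader's SELECT ... FOR SHARE finds the row version in its snapshot already locked by the prepared transaction, blocks, and returns the committed data once COMMIT PREPARED runs:

-- Session 1 (the 2PC writer); account is a hypothetical table:
BEGIN;
UPDATE account SET balance = balance - 10 WHERE account_id = 7;
PREPARE TRANSACTION 'tx_a';  -- the row lock is kept by the prepared transaction

-- Session 2 (the reader) blocks here until COMMIT PREPARED 'tx_a':
SELECT * FROM account WHERE account_id = 7 FOR SHARE;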
Now I'm trying to solve it for INSERT.
SELECT...FOR SHARE does not block and returns immediately.
There is no concurrency issue here as only one client deals with a specific set of rows. I already know about MVCC.
Any help appreciated.
To wait for a not-yet-committed INSERT you'd need to take a predicate lock. There's limited predicate locking in PostgreSQL for the serializable support, but it's not exposed directly to the user.
Simple SERIALIZABLE isolation won't help you here, because SERIALIZABLE only requires that there be an order in which the transactions could've occurred to produce a consistent result. In your case this ordering is SELECT followed by INSERT.
The only option I can think of is to take an ACCESS EXCLUSIVE lock on the table before INSERTing. This will only get released at COMMIT PREPARED or ROLLBACK PREPARED time, and in the meantime any other queries will wait for the lock. You can enforce this via a BEFORE trigger to avoid the need to change the app. You'll probably get the odd deadlock and rollback if you do it that way, though, because the INSERT will take a lower-level lock first and you'll then attempt lock promotion in the trigger. If possible, it's better to run the LOCK TABLE ... IN ACCESS EXCLUSIVE MODE command before the INSERT.
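A sketch of that last approach in the 2PC setting, with all table and transaction names invented:

-- Session 1 (the writer); orders is a hypothetical table:
BEGIN;
LOCK TABLE orders IN ACCESS EXCLUSIVE MODE;  -- taken before the INSERT
INSERT INTO orders (customer_id, amount) VALUES (42, 10.00);
PREPARE TRANSACTION 'orders_tx_1';  -- the table lock survives the PREPARE

-- Session 2 (the reader) blocks on the table lock, so it cannot read
-- until the coordinator resolves the prepared transaction:
SELECT * FROM orders WHERE customer_id = 42;

-- Coordinator, later, from any session:
COMMIT PREPARED 'orders_tx_1';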
As you've alluded to, this is mostly an application mis-design problem. Expecting to see not-yet-committed rows doesn't really make any sense.