Database row level locking in JDBI and postgres - postgresql

I am using JDBI with Postgres. Since the application runs in multiple instances, locking at the application level is not an option, so I am using row-level locking in the database via SELECT ... FOR UPDATE. Ideally, though, I don't need to update the row at all; I only use FOR UPDATE to take the lock. So every time I want to release the lock, I fire an UPDATE query that just touches a last-updated-time field.
Can I simply lock the row and have the lock released when the transaction finishes, without issuing any UPDATE? Is there such an option?

When you commit or roll back your transaction, the lock is automatically released, regardless of whether you actually updated the row.
There is absolutely no need to UPDATE the row in order for the lock to be released.
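In other words, a plain SELECT ... FOR UPDATE inside a transaction is enough; the lock lives exactly as long as the transaction. A minimal sketch (the accounts table and id value are made up for illustration):

```sql
-- Take the row lock without modifying anything
BEGIN;
SELECT * FROM accounts WHERE id = 42 FOR UPDATE;
-- ... do the work that requires exclusive access to this row ...
COMMIT;  -- or ROLLBACK; either way the row lock is released here
```

Any concurrent SELECT ... FOR UPDATE or UPDATE on that same row blocks until the COMMIT/ROLLBACK, with no dummy UPDATE needed.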

Related

How does Postgres support rollback without undo logs

I was going through this link, where it is mentioned that PostgreSQL does not have an undo log. So I am wondering how PostgreSQL rolls back a transaction without one.
When a row is updated or deleted in PostgreSQL, it is not really updated or deleted. The old row version just remains in the table and is marked as removed by a certain transaction. That makes an update much like a delete of the old and insert of the new row version.
To roll back a transaction is nothing but to mark the transaction as aborted. Then the old row version automatically becomes the current row version again, without a need for undoing anything.
The Achilles' heel of this technique is that data modifications produce “dead row versions”, which have to be reclaimed later by a background procedure called “vacuuming”.
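A sketch of what this means in practice (the table is made up; note that VACUUM must be run outside a transaction block):

```sql
CREATE TABLE t (id int PRIMARY KEY, val text);
INSERT INTO t VALUES (1, 'old');

BEGIN;
UPDATE t SET val = 'new' WHERE id = 1;  -- writes a NEW row version; 'old' stays on disk
ROLLBACK;                               -- just marks the transaction aborted

SELECT val FROM t WHERE id = 1;  -- 'old' is the current version again
VACUUM t;                        -- reclaims the dead 'new' row version
```

Nothing is physically undone at ROLLBACK; the abandoned row version simply becomes a dead tuple for vacuuming to clean up.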

How to fix Liquibase database lock that does not get cleared

I'm using Liquibase in a project and it has been working fine so far.
I added a new changeset and it works well locally, but once deployed, the container hangs with the following statement:
"liquibase: Waiting for changelog lock...".
No resource limits are set on the deployment.
Updating the "databasechangeloglock" table does not work, because the pod keeps re-acquiring the lock.
How can I solve this?
See other question here. If the lock happens and the process exits unexpectedly, then the lock will stay there.
According to this answer, you can remove the lock by running SQL directly:
UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null where ID=1;
Note: Depending on your DB engine, you may need to use FALSE or 'f' instead of 0 for the LOCKED value.
If, as your question suggests, your process itself is creating a new lock and still failing every time, then most likely the process is exiting/failing for a different reason (or checking for the lock in the wrong order).
Another option is to consider the Liquibase No ChangeLog Lock extension.
Note: This is probably a last resort. The extension could be an option if you were having more trouble with the changelog lock than getting any benefit (e.g. only running one instance of the app and don't really need locking). It is likely not the "best" solution, but is certainly an option depending on what you need. The README in the link says this too.
If you are completely sure that there is no active migration (pod) running, you can manually release the lock:
UPDATE <your lock table> (e.g. DATABASECHANGELOGLOCK)
SET locked=false, lockgranted=null, lockedby=null
WHERE id=1;
Usually the lock is cleared automatically, you might want to check your isolation level for the database connection as well.
There is an extension that handles this with a session-level lock; it supports most RDBMSs. Because the lock is tied to the database connection, it is released automatically when the connection closes: https://liquibase.jira.com/wiki/spaces/CONTRIB/pages/2933293057/SessionLock

postgres acquire row-level lock

How do I acquire a row-specific lock in Postgres?
According to the documentation, I can acquire a table lock, but I don't want to lock the whole table; I only care about monitoring a specific row in a given transaction.
My use case is that I want to:
read a row
perform some (potentially expensive, potentially race-condition) operations that prepare new rows for another table
when I'm finally ready to insert/update these new rows, I want to make sure no other process beat me to the punch, so I think I want to:
acquire a row-level lock on the same row I read earlier
see if the version or state or column like that is the same as when I read it earlier
if it is the same, I want to insert/update into the other table, then increment this version, then release the lock
if it is not the same version, then I want to abort - I don't care about my possible new/updated rows, they are stale and I don't want to save them (so: do nothing, release the lock)
But, I don't want to lock the entire table for this whole period; I just want to lock that one specific row. I cannot figure out how to do that from the numerous blog posts and the Postgres documentation I have read.
All I want is an example query that shows me how to explicitly row-level lock.
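The workflow above can be sketched with SELECT ... FOR UPDATE, which locks only the rows it returns, never the whole table. The schema here (source(id, version), target(id, payload)) is made up for illustration:

```sql
BEGIN;

-- Lock just this one row; concurrent FOR UPDATE / UPDATE on it will
-- block, while the rest of the table stays fully available.
SELECT version FROM source WHERE id = 1 FOR UPDATE;

-- In the application: compare this version to the one read earlier.
-- If it is unchanged, save the prepared rows and bump the version:
INSERT INTO target (id, payload) VALUES (1, 'prepared data');
UPDATE source SET version = version + 1 WHERE id = 1;
COMMIT;    -- releases the row lock

-- If the version changed, issue ROLLBACK instead: nothing is saved
-- and the row lock is released.
```

The lock is held from the SELECT ... FOR UPDATE until COMMIT or ROLLBACK; there is no separate "release lock" statement.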

Deadlock occurs even when updating different fields

I have been trying to develop replication from one Firebird database to another.
I simply add a new field named replication_flag to the tables.
My replication program starts a read-committed transaction, selects rows, updates the replication_flag field of those rows, and then commits or rolls back.
My production clients do not update this replication_flag field and use read-committed isolation. My single replication client updates only this replication_flag field and no other fields.
I still see deadlocks and do not understand why. How can I avoid them?
It seems that your replication app uses one large transaction to update every record of every table. By the end, the whole database has probably been "locked".
You should consider using one transaction per table, or per batch of records. It is also possible to use a read-only transaction for reading and a separate transaction for writing, with frequent commits, which allows other transactions to update the records.
An interesting slideshow: http://slideplayer.us/slide/1651121/

How to wait during SELECT that pending INSERT commit?

I'm using PostgreSQL 9.2 in a Windows environment.
I'm in a 2PC (2 phase commit) environment using MSDTC.
I have a client application that starts a transaction at the SERIALIZABLE isolation level, inserts a new row of data in a table for a specific foreign key value (there is an index on the column), and votes for completion of the transaction (the transaction is PREPARED). The transaction will be COMMITTED by the Transaction Coordinator.
Immediately after that, outside of a transaction, the same client requests all the rows for this same specific foreign key value.
Because there may be a delay before the previous transaction is really committed, the SELECT may return a previous snapshot of the data. In fact, it does happen sometimes, and this is problematic. Of course the application may be redesigned, but until then I'm looking for a lock-based solution. An advisory lock?
I already solved the problem for UPDATEs on specific rows by using SELECT ... FOR SHARE, and it works well. The SELECT waits until the transaction commits and returns the old and new rows.
Now I'm trying to solve it for INSERT.
SELECT ... FOR SHARE does not block and returns immediately.
There is no concurrency issue here as only one client deals with a specific set of rows. I already know about MVCC.
Any help appreciated.
To wait for a not-yet-committed INSERT you'd need to take a predicate lock. There's limited predicate locking in PostgreSQL for the serializable support, but it's not exposed directly to the user.
Simple SERIALIZABLE isolation won't help you here, because SERIALIZABLE only requires that there be an order in which the transactions could've occurred to produce a consistent result. In your case this ordering is SELECT followed by INSERT.
The only option I can think of is to take an ACCESS EXCLUSIVE lock on the table before INSERTing. This will only be released at COMMIT PREPARED or ROLLBACK PREPARED time, and in the meantime any other queries will wait for the lock. You can enforce this via a BEFORE trigger to avoid the need to change the app. You'll probably get the odd deadlock and rollback if you do it that way, though, because the INSERT takes a lower lock and then you attempt lock promotion in the trigger. If possible it's better to run the LOCK TABLE ... IN ACCESS EXCLUSIVE MODE command before the INSERT.
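A sketch of that trigger-free variant (the table, column names, and transaction identifier are all made up):

```sql
BEGIN;
-- Strongest table lock: every concurrent query on my_table, including
-- plain SELECTs, will wait until this transaction completes.
LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE;
INSERT INTO my_table (fk_id, data) VALUES (7, 'new row');
PREPARE TRANSACTION 'tx_1';
-- The lock is held until the transaction coordinator later runs:
--   COMMIT PREPARED 'tx_1';   (or ROLLBACK PREPARED 'tx_1';)
```

Because the lock survives PREPARE TRANSACTION, the follow-up SELECT issued by the client will block until COMMIT PREPARED, which closes the window where it could see the old snapshot.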
As you've alluded to, this is mostly an application mis-design problem. Expecting to see not-yet-committed rows doesn't really make any sense.