JPA: LockModeType.NONE

Am I right with the following?
JPA: LockModeType.NONE means:
There is no explicit locking at the JPA level / application layer.
In the absence of explicit locking at the JPA level, the application will use implicit locking.
Implicit locking means that JPA delegates the whole locking responsibility to the database system.
If the database system has, for example, Read Committed as its default isolation level, then JPA-LockModeType.NONE will cause transactions to be handled as Read Committed.
So, JPA-LockModeType.NONE is not the same as Read Uncommitted, because no database system has Read Uncommitted as its default, and the name is also a little misleading because a database system always uses locks, even with Read Uncommitted.
Is this correct?
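For concreteness, here is a minimal sketch of a read under LockModeType.NONE (the Account entity and the persistence unit name "demo-pu" are made up for illustration):
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.LockModeType;
import javax.persistence.Persistence;

public class LockModeNoneDemo {
    public static void main(String[] args) {
        // "demo-pu" is a hypothetical persistence unit name.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-pu");
        EntityManager em = emf.createEntityManager();
        try {
            em.getTransaction().begin();
            // LockModeType.NONE is the default: no explicit JPA-level lock is
            // requested, so concurrency is governed by the database's
            // isolation level (e.g. Read Committed).
            // Account is a hypothetical mapped entity.
            Account a = em.find(Account.class, 1L, LockModeType.NONE);
            // ... read/modify the entity ...
            em.getTransaction().commit();
        } finally {
            em.close();
            emf.close();
        }
    }
}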

Related

EF consistency between two or more reads

On this page in Microsoft's documentation on EF, it is stated literally:
Entity Framework does not wrap queries in a transaction
If I am right, this means that SQL reads are not wrapped in transactions and thus every SELECT in our code is executed independently. But if this is so, how can we ensure that two reads are consistent with each other? In the typical scenario, is there a guarantee that the sum of the loaded amount of A and the loaded amount of B will be right (in some connection) if a transfer between A and B is started (in a different connection) between the read of A and the read of B? Would Entity Framework be able to solve this case in some way?
The built-in solution in EF is client-side optimistic concurrency. On update EF will build a query that ensures that the row to be updated has not been changed since it was read.
Properties configured as concurrency tokens are used to implement optimistic concurrency control: whenever an update or delete operation is performed during SaveChanges, the value of the concurrency token on the database is compared against the original value read by EF Core. If the values match, the operation can complete. If the values do not match, EF Core assumes that another user has performed a conflicting operation and aborts the current transaction.
You can also opt in to transactions at whatever isolation level you choose, which may provide similar protections. Or use raw SQL queries with lock hints for your target database.
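Since this thread's main topic is JPA, here is a sketch of the JPA equivalent of an EF concurrency token: a @Version attribute (the Account entity and its fields are made up for illustration):
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {
    @Id
    private Long id;

    private long balance;

    // Works like an EF concurrency token: the provider appends
    // "... WHERE id = ? AND version = ?" to UPDATE/DELETE statements and
    // throws OptimisticLockException if no row matches anymore.
    @Version
    private long version;
}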

Does a SQL statement ensure atomicity in Postgres?

I have a simple bug in my program that supports multiple users. I'm using knex to build SQL queries, and here is pseudocode that depicts the scenario:
const value = await queryBuilder().readDataFromTheDatabase(); // read the current value
// ... do some other work and compute the new value from it ...
await queryBuilder().writeValueToTheDatabase(updateValue(value));
This piece of code is used in a sort of middleware function. As you can see, there is a possible race condition: when multiple users execute this at roughly the same time, one of them gets a stale value.
My solution
So, I was thinking a possible solution would be to create a single queryBuilder statement:
queryBuilder().readAndUpdateValueInTheDatabase();
So, I'll probably have to use a little bit of plpgsql. I was wondering if this solution will be sufficient. Will the statement be executed atomically? That is, when one request has read but not yet finished its write, does another request wait before both reading and writing, or does it only wait to write but read the stale value?
I think what you are looking for here is isolation, not atomicity. You could set all transactions to the highest isolation level, serializable (which is higher than the usual default level). With that level, if data that a transaction read (and presumably relied upon) is changed, then when it tries to commit it might get a serialization failure error. I say "might", because the system could conclude the situation would be consistent with the data change having happened after the commit, in which case the commit is allowed to stand.
To avoid a race condition with such a setup, you must run both the read and the write in the same database transaction.
There are two ways to do that:
Use the default READ COMMITTED isolation level and lock the rows when you read them:
SELECT ... FROM ... FOR NO KEY UPDATE;
That locks the rows against concurrent modifications, and the lock is held until the end of the transaction.
Use the REPEATABLE READ isolation level and don't lock anything. Then your UPDATE will receive a serialization error (SQLSTATE 40001) if somebody modified the row concurrently. In that case, you roll the transaction back and try again in a new REPEATABLE READ transaction.
The first solution is usually better if you expect conflicts frequently, while the second is better if conflicts are rare.
Note that you should keep the database transaction as short as possible in both cases to keep the risk of conflicts low.
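Here is a sketch of the first approach in plain JDBC, so the shape of the transaction is explicit (the counters table and its columns are made up; knex can issue the same statements inside one of its transactions):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ReadModifyWriteDemo {
    // Sketch of the first option: READ COMMITTED plus an explicit row lock.
    // The "counters" table with columns "id" and "amount" is hypothetical.
    static void increment(Connection conn, long id) throws SQLException {
        conn.setAutoCommit(false); // both statements must share one transaction
        try {
            long amount;
            try (PreparedStatement read = conn.prepareStatement(
                     "SELECT amount FROM counters WHERE id = ? FOR NO KEY UPDATE")) {
                read.setLong(1, id);
                try (ResultSet rs = read.executeQuery()) {
                    rs.next();
                    amount = rs.getLong(1); // the row is now locked until commit
                }
            }
            try (PreparedStatement write = conn.prepareStatement(
                     "UPDATE counters SET amount = ? WHERE id = ?")) {
                write.setLong(1, amount + 1);
                write.setLong(2, id);
                write.executeUpdate();
            }
            conn.commit(); // releases the row lock
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}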
Transactions in PostgreSQL use an optimistic locking model when accessing tables, while some other DBMSs do pessimistic locking (IBM Db2) or combine the two models (MS SQL Server).
Optimistic locking takes a snapshot of the data you are working on, and the modifications are done on the snapshot until the transaction ends. When the transaction finishes, the snapshot modifications are applied to the real database (table rows), but if some other user has made a change between the moment of the snapshot capture and the commit, then the commit cannot be applied and the COMMIT is rejected as a ROLLBACK.
You can try raising the ISOLATION LEVEL (REPEATABLE READ or SERIALIZABLE) to avoid the trouble.
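In JDBC terms, raising the isolation level is a one-liner at the start of the transaction (a sketch; conn is assumed to be an open PostgreSQL connection):
import java.sql.Connection;
import java.sql.SQLException;

public class SerializableDemo {
    static void runAtSerializable(Connection conn) throws SQLException {
        conn.setAutoCommit(false);
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        // ... the reads and writes that must see one consistent snapshot ...
        conn.commit(); // can fail with SQLSTATE 40001 under conflict; retry then
    }
}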

Locking for JPA entities without @Version

Env:
Java EE 7
JPA 2.1
EJB 3.1
Hibernate 4
Recently we have been experiencing data problems in one of the tables. A couple of points:
The table is mapped to a JPA entity.
Neither the table nor the entity has a "version" column/attribute.
In other words, there is no optimistic locking available for this table. On doing RCA, it turned out to be a concurrent data modification issue.
Questions:
In such cases where @Version is not available/used (in other words, no optimistic locking), is using a singleton repository class the only option to make sure data consistency is maintained?
What about pessimistic locking in such cases?
I believe it's a common situation where an application (especially a legacy one) has some tables with a version column and some without. Are there any known patterns for handling tables/entities without a version column?
Thanks in advance,
Rakesh
JPA supports pessimistic locking, and you are free to use it in case you cannot or do not want to use optimistic locking.
In short, EntityManager provides lock methods to lock an already retrieved entity, and the overloaded em.find and em.refresh, as well as Query.setLockMode, provide means to supply lock options so that locks are applied atomically at the time the data is retrieved from the DB.
However, with pessimistic locking you should be aware that you must prevent deadlocks. The simplest way to tackle this is to lock at most one entity per transaction.
You might also consider setting a timeout for the attempt to lock an entity, so that your transaction does not wait for a long time if the entity is already locked.
In more detail, a very intelligible explanation of optimistic and pessimistic locking with JPA is provided here, including the differences between READ and WRITE lock modes and setting a lock timeout.
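As a sketch of what that looks like in code (the Account entity is made up; the hint name is the standard JPA lock timeout property, in milliseconds):
import java.util.Collections;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class PessimisticLockDemo {
    static Account loadForUpdate(EntityManager em, long id) {
        // Wait at most 5 seconds for the row lock instead of blocking forever.
        Map<String, Object> hints = Collections.<String, Object>singletonMap(
            "javax.persistence.lock.timeout", 5000);
        // The lock is acquired atomically as part of the SELECT
        // (typically SELECT ... FOR UPDATE on the database side).
        // Account is a hypothetical mapped entity.
        return em.find(Account.class, id, LockModeType.PESSIMISTIC_WRITE, hints);
    }
}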

JPA locks and database isolation levels

Is there any mutual influence between JPA locks (optimistic/pessimistic) and database isolation levels (for example http://www.postgresql.org/docs/9.1/static/transaction-iso.html)?
The EJB 3.2 spec (8.3.2 "Isolation levels") says that the Bean Provider is responsible for setting the isolation level of a transaction, so generally I shouldn't care, but I am still confused. For example, in PostgreSQL, according to the mentioned source, the default isolation level is "read committed". Does this mean that when I do not lock any entity, the transaction isolation level will still be "read committed"?
By having a @Version column on your entities and using no locking (the equivalent of using LockModeType.NONE) you are implicitly working with READ_COMMITTED isolation. This is achieved in the JPA layer because all updates are usually deferred until commit time, or an OptimisticLockException is thrown in case of an update conflict (I'm still assuming no explicit locking).
It assumes ... that writes to the database will typically occur only when the flush method has been invoked—whether explicitly by the application, or by the persistence provider runtime in accordance with the flush mode setting
At the database layer, the JPA specification also assumes you have READ_COMMITTED isolation:
It assumes that the databases to which persistence units are mapped will be accessed by the implementation using read-committed isolation (or a vendor equivalent in which long-term read locks are not held)
Of course, manual flush/refresh, queries, and flush mode settings (AUTO, COMMIT) complicate the situation. Second-level and query cache configuration might also play a role. However, with all defaults, JPA READ_COMMITTED behaves pretty predictably, and as a rule of thumb it is safe to accompany it with READ_COMMITTED isolation at the DB level.
In order to achieve REPEATABLE_READ with JPA, you have to use locks (but that's another story):
Lock modes are intended to provide a facility that enables the effect of “repeatable read” semantics
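To make that concrete (a sketch; the Account entity is hypothetical): requesting an optimistic read lock makes the provider re-check the @Version value at commit, which yields repeatable-read semantics for that entity:
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class OptimisticReadDemo {
    static void readConsistently(EntityManager em, long id) {
        // OPTIMISTIC (the former READ mode) verifies the @Version value at
        // commit; if another transaction changed the row in the meantime,
        // the commit fails with an OptimisticLockException instead of
        // silently acting on stale data.
        Account a = em.find(Account.class, id, LockModeType.OPTIMISTIC);
        // ... other reads that must stay consistent with 'a' ...
    }
}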

ADO.Net vs ADO Record Locking

I'm struggling to understand the differences between ADO and ADO.NET.
ADO "Classic" has different lock levels... I'm wondering now, what is the default lock level for ADO.NET? How would I open a connection as Batch Lock or Read Only.
What is the default behavior of ADO.NET? What sort of lock does it place on a MSSQL database when doing a .fill().
ADO.NET uses optimistic concurrency by default, but you also have to look at what is happening on the SQL Server side.
Unless you specify a hint such as NOLOCK, a shared lock will be issued. This is a lightweight lock that allows other transactions to read a resource but prevents other transactions from modifying the data. This lock is released as soon as the data has finished being read.
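For illustration, this is what opting out of that shared lock looks like at the SQL level (a sketch issued through JDBC to keep one language in this write-up; the orders table is made up, and note that NOLOCK permits reading uncommitted data):
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NoLockDemo {
    static void dirtyRead(Connection conn) throws SQLException {
        // Without the hint, this SELECT takes shared locks while reading.
        // The T-SQL hint WITH (NOLOCK) skips them, at the price of possibly
        // seeing uncommitted ("dirty") rows. "orders" is hypothetical.
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT id, total FROM orders WITH (NOLOCK)")) {
            while (rs.next()) {
                System.out.println(rs.getLong("id") + ": " + rs.getBigDecimal("total"));
            }
        }
    }
}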