JPA locking database record

I am writing a Spring Boot service that uses JPA to interact with the database. What I want to do is lock a database record as soon as it is read by JPA, i.e. once it has been read, no other thread should be able to read it until the first one has updated the record.
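A minimal sketch of how this is commonly done with Spring Data JPA, using a pessimistic lock. Account, AccountRepository and the method names are made-up examples, and the imports may be jakarta.persistence.* on newer Spring Boot versions:

```java
import java.math.BigDecimal;
import java.util.Optional;

import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface AccountRepository extends JpaRepository<Account, Long> {

    // PESSIMISTIC_WRITE typically translates to SELECT ... FOR UPDATE, so any other
    // transaction requesting the same lock blocks until this one commits or rolls back
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select a from Account a where a.id = :id")
    Optional<Account> findByIdForUpdate(@Param("id") Long id);
}

@Service
class AccountService {

    private final AccountRepository repository;

    AccountService(AccountRepository repository) {
        this.repository = repository;
    }

    // The row lock is only held while this transaction is open
    @Transactional
    public void updateBalance(Long id, BigDecimal delta) {
        Account account = repository.findByIdForUpdate(id).orElseThrow();
        account.setBalance(account.getBalance().add(delta));
        // the lock is released when the transaction commits
    }
}
```

Note that this blocks other transactions that also request the lock; plain, lock-free reads are generally still allowed by the database.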

Related

Could I use stored persistence context if DB temporarily shut down?

I have to implement some kind of cache, or temporary storage, that satisfies the following conditions:
This cache stores 4 tables from a specific (Maria) DB. len(column) < 50, len(row) < 1000
The cache is built as soon as the Spring web server starts: on startup it immediately crawls the data from the DB and stores it in the cache, to minimize direct DB querying.
The Spring web server fetches data from the cache when it receives an HTTP GET request.
The Spring web server updates the DB when it receives HTTP POST, DELETE, or PUT requests, by updating the data in the cache and then writing the changes to the tables.
The Spring web server must not throw an exception when the DB suddenly shuts down and the connection is lost. It must serve HTTP requests from the cached data, deferring any direct DB access, and synchronize the data once the DB is restored.
I'm not familiar with JPA, but it seems that Spring JPA itself supports the first 4 conditions, through the EntityManager and the persistence context.
However, I cannot find any information on making this context fault tolerant. I cannot find any option that makes the whole JPA stack check whether the DB connection is alive, and only update after that check returns true.
Since I'm asked to use JPA as far as possible, I want to find out whether meeting the conditions above with JPA is possible or not.
Thanks for any information provided.
Could I use stored persistence context if DB temporarily shut down?
No, not really.
The persistence context gets flushed on every commit, which you don't want, because you want to serve queries from the cache.
Also, it doesn't serve the results of arbitrary queries; it just serves entities.
And most importantly: when a flush event happens and the database is not available you will get an exception.

SpringBatch is blocking insertion of data in other tables

I am using Postgres as my SQL database. My Spring Boot application uses Spring Batch for processing and inserting data. I am auditing my code flow: for example, if a call I make to a 3rd-party API fails, I audit this failure event. This piece of code is in my Spring Batch writer. I see logs of my audit DTO class being created, but I don't see the data in the audit table. If I move the auditing code outside the Spring Batch writer, it works. What should be done so that the audit table insertion code inside my Spring Batch writer works?
More details would be needed to be sure, but I assume your writer writes to the 3rd-party API and you write the audit log to the same DataSource that you use for the Spring Batch meta-data.
Every write of a chunk that Spring Batch does in a writer is wrapped in a transaction. Such a transaction will be rolled back if you throw an exception in the writer.
You need to write the audit log outside of the transaction created by Spring Batch. For example by using Spring transaction management and starting a new transaction with propagation level REQUIRES_NEW.
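A minimal sketch of that approach (AuditService, AuditRepository and AuditEvent are placeholder names):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AuditService {

    private final AuditRepository auditRepository; // hypothetical repository for the audit table

    public AuditService(AuditRepository auditRepository) {
        this.auditRepository = auditRepository;
    }

    // REQUIRES_NEW suspends the chunk transaction opened by Spring Batch and commits the
    // audit row in its own, independent transaction, so it survives a rollback of the chunk
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void recordFailure(AuditEvent event) { // AuditEvent is a placeholder DTO/entity
        auditRepository.save(event);
    }
}
```

Inject this service into the writer and call recordFailure(...) from there; the call has to cross a bean boundary (not a method on the writer itself), otherwise the REQUIRES_NEW advice is not applied.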

Read from database and delete from database through spring batch

I have to read data from a relational database and delete all the data that is present in the table, through Spring Batch.
Is there any way to do so?
I have tried reading with a JdbcCursorItemReader and issuing the delete query in the ItemWriter, and also via an ItemProcessor, but failed to do it.
Being new to this makes it difficult for me to get it right.
I'm not getting any error in either case, but the data is not getting deleted.
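For reference, a minimal sketch of the setup described above (the table name and bean names are made up): a JdbcCursorItemReader that reads the ids, and an ItemWriter that issues the DELETE through a JdbcTemplate. The deletes run inside the chunk transaction that Spring Batch commits at the end of each chunk.

```java
import javax.sql.DataSource;

import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.database.builder.JdbcCursorItemReaderBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class DeleteStepConfig {

    @Bean
    public JdbcCursorItemReader<Long> idReader(DataSource dataSource) {
        return new JdbcCursorItemReaderBuilder<Long>()
                .name("idReader")
                .dataSource(dataSource)
                .sql("SELECT id FROM my_table")                 // hypothetical table
                .rowMapper((rs, rowNum) -> rs.getLong("id"))
                .build();
    }

    @Bean
    public ItemWriter<Long> deletingWriter(JdbcTemplate jdbcTemplate) {
        // Deletes each id read by the reader; runs inside the chunk transaction
        return ids -> {
            for (Long id : ids) {
                jdbcTemplate.update("DELETE FROM my_table WHERE id = ?", id);
            }
        };
    }
}
```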

Spring batch with MongoDB and transactions

I have a Spring Batch application with two databases: one SQL DB for the Spring Batch meta data, and another which is a MongoDB where all the business data is stored. The relational DB still uses a DataSourceTransactionManager.
However, I don't think the Mongo writes are done within an active transaction with rollback support. Here is the excerpt from the official Spring Batch documentation on MongoItemWriter:
A ItemWriter implementation that writes to a MongoDB store using an implementation of Spring Data's MongoOperations. Since MongoDB is not a transactional store, a best effort is made to persist written data at the last moment, yet still honor job status contracts. No attempt to roll back is made if an error occurs during writing.
However, this is no longer the case: MongoDB introduced multi-document ACID transactions in version 4.0.
How do I go about adding transactions to my writes? I could use @Transactional on my service methods when I use an ItemWriterAdapter, but I still don't know what to do with MongoItemWriter... What is the right configuration here? Thank you.
I have a Spring Batch application with two databases: one SQL DB for the Spring Batch meta data, and another which is a MongoDB where all the business data is stored.
I invite you to take a look at the following posts to understand the implications of this design choice:
How to java-configure separate datasources for spring batch data and business data? Should I even do it?
How does Spring Batch transaction management work?
In your case, you have a distributed transaction across two data sources:
SQL datasource for the job repository, which is managed by a DataSourceTransactionManager
MongoDB for your step (using the MongoItemWriter), which is managed by a MongoTransactionManager
If you want technical meta-data and business data to be committed/rolled back in the scope of the same distributed transaction, you need to use a JtaTransactionManager that coordinates the DataSourceTransactionManager and MongoTransactionManager. You can find some resources about the matter here: https://stackoverflow.com/a/56547839/5019386.
BTW, there is a feature request to use MongoDB as a job repository in Spring Batch: https://github.com/spring-projects/spring-batch/issues/877. When this is implemented, you could store both business data and technical meta-data in the same datasource (so no need for a distributed transaction anymore) and you would be able to use the same MongoTransactionManager for both the job repository and your step.
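For illustration, a sketch of the two transaction managers involved (bean names are arbitrary; older Spring Data MongoDB versions use MongoDbFactory instead of MongoDatabaseFactory). Actually coordinating them in one distributed transaction would additionally require a JTA provider behind the JtaTransactionManager mentioned above:

```java
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.MongoTransactionManager;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Configuration
public class TransactionManagersConfig {

    // Manages the relational DataSource holding the Spring Batch meta-data tables
    @Bean
    public DataSourceTransactionManager batchTransactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    // Manages the MongoDB business data; requires MongoDB 4.x running as a replica set
    @Bean
    public MongoTransactionManager mongoTransactionManager(MongoDatabaseFactory mongoDatabaseFactory) {
        return new MongoTransactionManager(mongoDatabaseFactory);
    }
}
```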

Using JPA and JDBC together

In my project, we are upgrading an old J2EE application to Java EE 6.
Since the business logic is quite complex, we want to retain the EJB + DAO code as much as possible. The old code uses plain JDBC for database persistence.
But in our new/upgraded application, we plan to use JPA.
We would like to use JPA for all the read operations on database and use JDBC for the writes (inserts/updates/deletes). Is this possible in the same transaction?
For example (a rough sketch of these steps follows the list):
obtain an Entity Manager reference and read an employee record from the database (employee entity) using entity manager find (or a named query)
convert the entity instance to a POJO
update the POJO
perform some business logic on the POJO (reuse old code as-is)
create a JDBC connection and use a prepared statement to update the employee record in database (reuse old code as-is)
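A rough sketch of those steps under container-managed JTA transactions (Employee, EmployeePojo and the JNDI name are placeholders):

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.sql.DataSource;

@Stateless
public class EmployeeService {

    @PersistenceContext
    private EntityManager em;

    @Resource(lookup = "java:app/jdbc/appDS")   // hypothetical container-managed DataSource
    private DataSource dataSource;

    // Runs inside the container-managed JTA transaction
    public void raiseSalary(long employeeId, BigDecimal newSalary) throws SQLException {
        // 1. read the employee record via JPA
        Employee entity = em.find(Employee.class, employeeId);

        // 2.-4. copy into a POJO and run the existing business logic on it
        EmployeePojo pojo = toPojo(entity);
        pojo.setSalary(newSalary);

        // 5. write via plain JDBC; a connection obtained from the same
        // container-managed DataSource enlists in the same JTA transaction
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "UPDATE employee SET salary = ? WHERE id = ?")) {
            ps.setBigDecimal(1, pojo.getSalary());
            ps.setLong(2, employeeId);
            ps.executeUpdate();
        }

        // the managed entity is now stale; refresh it (or evict it from any
        // second-level cache) before relying on its state again
        em.refresh(entity);
    }

    private EmployeePojo toPojo(Employee entity) {
        // mapping code omitted in this sketch
        return new EmployeePojo();
    }
}
```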
Questions:
If I open 2 separate connections to the database - one from JPA and another from JDBC, will it still be in the same transaction (since app server manages the JTA transaction)?
What are the potential issues with this approach?
Since I am updating the database via JDBC, the entity in persistence context will not be in sync with the database. How to handle caching in such cases?
I looked at the following related threads, but I would like to know more in detail.
Combining JPA and JDBC actions in one transaction
Hibernate and JDBC in one transaction