I have multiple Quartz workers.
Each worker picks a DB record (a printer) and then does some work on it (reading info from the printer over the network).
Each job can take 30 seconds to 1 minute to complete.
Back in the JDBC days I would run (pseudocode):
printer = "select from printer where status=? for update"
do the work, (takes 1 min)
update the printer record.
My question: is this approach with PESSIMISTIC_WRITE OK?
public interface PrinterRepo extends CrudRepository<Printer, String> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("SELECT r FROM Printer r WHERE r.status = :status")
    Printer findOneAndLock(@Param("status") String status);
}
Then the worker:
@Transactional
public void execute(JobExecutionContext jobExecutionContext) {
    Printer p = printerRepo.findOneAndLock("status");
    // do the work here (30 sec to 1 min)
    printerRepo.save(p);
}
As I understand it, the lock will be released at the end of the method annotated with @Transactional, correct?
My question is: what will happen to the other workers?
Will they starve while waiting on findOneAndLock?
Thank you
Regardless of which type and level of lock you use, and of what happens to the other workers, a long-held lock (and the long-running transaction it implies) is not a good solution. IMHO, in your case it is better to use a different approach without any locks, for example an additional table that records the printer 'locks':
create table printer_locks (
    printer_id bigint not null constraint printer_locks_pkey primary key,
    created_at timestamp not null,
    worker_id bigint not null constraint fk_printer_locks_worker_id references workers,
    constraint fk_printer_locks_printer_id foreign key (printer_id) references printers(id)
);
When a worker wants to start a job on some printer, it first tries to insert a record into this table. If the insert succeeds, it starts the job; when the job completes, the worker deletes the record.
Because printer_id is the primary key (and therefore unique), other workers cannot start working on the same printer at the same time.
Implementation:
@Entity
@Table(name = "workers")
public class Worker {
    @Id
    @GeneratedValue(...)
    private Long id;
    // other stuff...
}

@Entity
@Table(name = "printers")
public class Printer {
    @Id
    @GeneratedValue(...)
    private Long id;
    // other stuff...
}

@Entity
@Table(name = "printer_locks")
public class PrinterLock {
    @Id
    private Long id;

    private Instant createdAt = Instant.now();

    @OneToOne(fetch = FetchType.LAZY)
    @MapsId
    private Printer printer;

    @ManyToOne(fetch = FetchType.LAZY)
    private Worker worker;

    public PrinterLock(Printer printer, Worker worker) {
        this.printer = printer;
        this.worker = worker;
    }
    // other stuff...
}
public void execute(...) {
    Worker worker = ...;
    Long printerId = ...;
    printerRepo.findById(printerId)
            .ifPresentOrElse(printer -> {
                try {
                    printerLockRepo.save(new PrinterLock(printer, worker));
                    try {
                        // do the work here (30 sec to 1 min)
                    } finally {
                        printerLockRepo.deleteById(printerId);
                    }
                } catch (Exception e) {
                    log.warn("Printer {} is busy", printerId);
                }
            }, () -> {
                throw new PrinterNotFoundException(printerId);
            });
}
Note that the execute method does not even need a @Transactional annotation.
An additional advantage of this approach is the createdAt column, which allows you to detect and clean up hanging jobs.
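For example, a minimal cleanup sketch (the repository interface, the threshold, and the schedule are assumptions, not part of the original answer):

// Assumed Spring Data repository for the PrinterLock entity above.
public interface PrinterLockRepo extends CrudRepository<PrinterLock, Long> {

    // Derived delete query: removes every lock created before the cutoff.
    @Transactional
    void deleteByCreatedAtBefore(Instant cutoff);
}

// In some Spring bean: periodically clear locks left behind by crashed workers.
@Scheduled(fixedDelay = 60_000) // assumption: check every minute
public void clearStaleLocks() {
    // assumption: a lock older than 10 minutes belongs to a hung job
    printerLockRepo.deleteByCreatedAtBefore(Instant.now().minus(Duration.ofMinutes(10)));
}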
Further reading:
Row Level Locking in Mysql
Pessimistic Locking in JPA
Explicit Locking in PostgreSQL
Locking Reads in MySQL
The best way to map a #OneToOne relationship with JPA and Hibernate
The title may not be worded well, but here is, more or less, what I want to achieve.
I would like to write dynamic queries using Query by Example that join multiple tables and create a (projection?) DTO for me.
This DTO would have fields mapped to columns in the different joined tables. Consider the following:
Tables:
CREATE TABLE address
(
    id SERIAL,
    address_code VARCHAR(255) NOT NULL,
    street_name VARCHAR(255),
    building_number VARCHAR(255)
);

CREATE TABLE account
(
    id SERIAL,
    account_number BIGINT UNIQUE
);

CREATE TABLE customer
(
    id SERIAL,
    name VARCHAR(255)
);
I would like to be able to create a query whose result would be:
address.address_code, account.account_number, customer.name
so basically the result would be a custom DTO. I also mentioned Query by Example because I want to dynamically append WHERE clauses, so I thought that if I created a DTO like:
public record CustomQueryResultDTO(String addressCode, BigInteger accountNumber, String name) {}
I could simply query just as shown in the Spring R2DBC documentation.
The problem is that I am not sure what a viable solution looks like. On one hand, I would like to reuse ReactiveQueryByExampleExecutor, but that would mean creating something like:
@Repository
public interface CustomQueryResultRepository extends ReactiveCrudRepository<CustomQueryResultDTO, Integer>, ReactiveQueryByExampleExecutor<CustomQueryResultDTO> {
}
This does not seem like the way to go, since I do not have a corresponding table for CustomQueryResultDTO and therefore no real mapping for this repository interface. Or am I overthinking this and it actually is the way to go?
I think you are potentially overthinking it.
You can do it in a number of ways (note Java 17 text blocks):
Via R2DBC JPA-like @Query
Create a normal ReactiveCrudRepository but collect into a projection (DTOP):
// Repository
@Repository
public interface UserRefreshTokenRepository extends ReactiveCrudRepository<UserRefreshToken, Integer> {

    @Query(
        """
        select *
        from user.user_refresh_tokens t
        join user.user_infos c on c.user_id = t.user_id
        where c.username = :username
        """
    )
    Flux<UserRefreshTokenDtop> findAllByUsername(String username);
}
// Entity
@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
@ToString(exclude = {"refreshToken"})
@Table(schema = "user", name = "user_refresh_tokens")
public class UserRefreshToken {
    @Id private Integer id;
    private String userId;
    private String username; // will be joined
    private String ipAddr;
    private OffsetDateTime createdAt;
    private String refreshToken;
    private OffsetDateTime refreshTokenIat;
    private OffsetDateTime refreshTokenExp;
}
// DTO projection
public interface UserRefreshTokenDtop {
    Integer getId();
    String getUserId();
    String getUsername(); // will be joined
    String getIpAddr();
    OffsetDateTime getRefreshTokenIat();
    OffsetDateTime getRefreshTokenExp();
}
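A minimal usage sketch (the username value and the logging are illustrative assumptions):

// Each joined row is materialized as the interface-based projection above.
userRefreshTokenRepository.findAllByUsername("alice")
        .doOnNext(t -> log.debug("token {} for user {}", t.getId(), t.getUsername()))
        .subscribe();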
Via DatabaseClient
This one also uses TransactionalOperator to ensure query atomicity
private final DatabaseClient client;
private final TransactionalOperator operator;

@Override
public void deleteAllUsedExpiredAttempts(Duration resetInterval) {
    // language=PostgreSQL
    String allUsedExpiredAttempts = """
        select t.id failed_id, c.id disable_id, t.username
        from user.failed_sign_attempts t
        join user.disable_sign_attempts c on c.username = t.username
        where c.is_used = true
        and :now >= c.expires_at + interval '%d seconds'
        """;
    // POTENTIAL SQL injection - half-arsed, but %d ensures that only a Number is allowed
    client
        .sql(String.format(allUsedExpiredAttempts, resetInterval.getSeconds()))
        .bind("now", Instant.now())
        .fetch()
        .all()
        .flatMap(this::deleteFailed)
        .flatMap(this::deleteDisabled)
        .as(operator::transactional)
        .subscribe(v1 -> log.debug("Successfully reset {} user(s)", v1));
}
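The deleteFailed and deleteDisabled helpers are not shown above; here is a hypothetical sketch of one of them, assuming each row map carries the failed_id column selected by the query:

// Deletes one failed-attempt row, then passes the row map on for deleteDisabled.
private Mono<Map<String, Object>> deleteFailed(Map<String, Object> row) {
    return client.sql("delete from user.failed_sign_attempts where id = :id")
            .bind("id", row.get("failed_id"))
            .fetch()
            .rowsUpdated()
            .thenReturn(row);
}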
Via R2dbcEntityTemplate
I don't have a working example, but it is a pain to join via the .join() operator.
If you are interested check the docs for R2dbcEntityTemplate
13.4.3. Fluent API > Methods for the Criteria Class
https://docs.spring.io/spring-data/r2dbc/docs/current/reference/html
I am writing an API where I insert a record into a table (Postgres). I was hoping to use JPA for the work. Here is the potential challenge: the primary key for the insert is generated by a database trigger rather than from a sequence or similar. In fact, the trigger creates the primary key using the values of other fields passed in as part of the insert. So, for example,
if I have an entity class like the following:
@Entity
@Validated
@Table(name = "my_table", schema = "common")
public class MyModel {

    @Id
    @Column(name = "col_id")
    private String id;

    @Column(name = "second_col")
    private String secCol;

    @Column(name = "third_col")
    private String thirdCol;

    public MyModel() {
    }

    public MyModel(String id, String secCol, String thirdCol) {
        this.id = id;
        this.secCol = secCol;
        this.thirdCol = thirdCol;
    }
}
I would need the col_id field to somehow honor that the key is generated by the trigger, and the trigger would need to be able to read the values of second_col and third_col in order to generate the primary key. Finally, I would need the call to return the value of the primary key.
Can this be done with JPA and a repository interface such as:
public interface MyRepo extends JpaRepository<MyModel, String> {
}
and then use either the default save methods, such as myRepo.saveAndFlush(myModel), or custom save methods? I can't find anything on using JPA with DB triggers that generate keys. If it cannot be done with JPA, I would be grateful for any alternative ideas. Thanks.
OK, I was able to get this to work. It required writing a custom query that ignores the primary key field:
public interface MyRepo extends JpaRepository<MyModel, String> {

    @Transactional
    @Modifying
    @Query(value = "INSERT INTO my_table(second_col, third_col) VALUES (:second_col, :third_col)", nativeQuery = true)
    int insertMyTable(@Param("second_col") String second_col, @Param("third_col") String third_col);
}
The model class is unchanged from above. Because it is executed as a native query, Postgres is allowed to do its thing uninterrupted.
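If you also need the trigger-generated key back, as the question asked, one option is Postgres's RETURNING clause. A hypothetical sketch using a plain Spring JdbcTemplate instead of the repository above (table and column names follow the entity mapping; the jdbcTemplate bean is an assumption):

// Insert the row and read back the key the trigger generated, in one round trip.
String generatedId = jdbcTemplate.queryForObject(
        "INSERT INTO common.my_table(second_col, third_col) VALUES (?, ?) RETURNING col_id",
        String.class,
        secCol, thirdCol);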
I am using JPA 2.1 (with the EclipseLink implementation) to get a record from the database.
By default the first-level cache is enabled; it caches the record in the PersistenceContext. If I get the same record again, it comes from the first-level cache and no second query is fired against the database.
Once the transaction is over, the first-level cache should be cleared, so if I get the same entry one more time, a query should be fired and the record should come from the database. But it does not.
At the very least, a query should hit the database if I close the current entity manager, re-open it, and fetch the record again.
Even then the second query does not go to the database. After I get the record from the database the first time (at that point I can see the select query in the console logs), any further fetch comes from cache memory (I assume it comes from the cache because I do not see the query again in the console logs). No matter what I do (use a new transaction, or close and re-open the entity manager), the first-level cache is not cleared.
The code which I am using is below:
EntityManagerFactory entityManagerFactory =
        Persistence.createEntityManagerFactory("01EmployeeBasics");
EntityManager entityManager = entityManagerFactory.createEntityManager();
System.out.println("EM1 : " + entityManager);
entityManager.getTransaction().begin();
System.out.println("Tx1 : " + entityManager.getTransaction());
Employee employee = entityManager.find(Employee.class, 123);
entityManager.getTransaction().commit();
entityManager.close();

entityManager = entityManagerFactory.createEntityManager();
System.out.println("EM2 : " + entityManager);
entityManager.getTransaction().begin();
System.out.println("Tx2 : " + entityManager.getTransaction());
Employee employee2 = entityManager.find(Employee.class, 123);
entityManager.getTransaction().commit();
entityManager.close();
entityManagerFactory.close();
Employee class is as below:
package in.co.way2learn;
import javax.persistence.Entity;
import javax.persistence.Id;
@Entity
public class Employee {

    @Id
    private int id;
    private String name;
    private int salary;

    public Employee() {
        // TODO Auto-generated constructor stub
    }

    public Employee(int id, String name, int salary) {
        super();
        this.id = id;
        this.name = name;
        this.salary = salary;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        System.out.println("Employee.getName()");
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getSalary() {
        return salary;
    }

    public void setSalary(int salary) {
        this.salary = salary;
    }
}
In the database there is a record with id 123.
Now my question is: why is the first-level cache not cleared?
EclipseLink has a shared object (2nd level) cache which is enabled by default and which:
...exists for the duration of the persistence unit (EntityManagerFactory,
or server) and is shared by all EntityManagers and users of the
persistence unit.
If you disable this according to the instructions at the link below, you should see the second query firing.
https://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
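For example, a minimal sketch of disabling the shared cache for the whole persistence unit, passing the documented EclipseLink property programmatically (persistence.xml works just as well):

// Turn off EclipseLink's shared (2nd-level) cache for every entity in the unit.
Map<String, String> props = new HashMap<>();
props.put("eclipselink.cache.shared.default", "false");

EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("01EmployeeBasics", props);

Alternatively, annotating a single entity with @Cacheable(false) disables the shared cache for that entity only.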
I have been using a Play Framework REST API for a couple of months now, hosted on Heroku and using the underlying Postgres DB. All of a sudden, I started getting the following error today:
Execution exception[[PersistenceException: ERROR executing DML bindLog[] error[ERROR: duplicate key value violates unique constraint "pk_informal_sector_waste_composition"
Detail: Key (id)=(1366) already exists.
Heroku support suspected index corruption, so I followed the steps outlined here: How to reset postgres' primary key sequence when it falls out of sync?.
However, this did not help. When I checked the table causing the issue, I did find the ID mentioned above.
My understanding is that, based on the database, Ebean knows what kind of sequence generator to use for the ID field (annotated with @Id).
Is it possible that Ebean is causing this issue? It's puzzling because everything worked fine for the last couple of months and there have been no code changes since.
Below are my model objects:
@Entity
public class InformalSectorWasteComposition extends Model {

    @Id
    public String id;

    .......
    .......

    @ManyToOne
    @JsonBackReference
    public InformalSector informalSector;

    public static String create(InformalSectorWasteComposition wc) {
        // TODO: check if lead exists and update...
        wc.save();
        return wc.id;
    }
}

@Entity
public class InformalSector extends Model {

    @Id
    public String id;

    @OneToMany(cascade = CascadeType.ALL) // one lead can have many wcData
    @JsonManagedReference
    public List<InformalSectorWasteComposition> wcData;

    public static String create(InformalSector informalSector) {
        // TODO: check if lead exists... if so, update
        Long id = -1L;
        try {
            id = Long.parseLong(informalSector.id);
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (id == -1) {
            // new lead
            informalSector.save();
            return informalSector.id;
        } else { // existing
            InformalSector current = InformalSector.get(id);
            if (current != null) {
                Logger.debug(current.id);
                current = informalSector;
                current.update();
                return current.id;
            }
        }
        return null;
    }
    .....
    .....
}
Greatly appreciate any insights the community can provide.
Thanks,
RK
In JPA, I am using @GeneratedValue:
@TableGenerator(name = "idGenerator", table = "generator", pkColumnName = "Indecator", valueColumnName = "value", pkColumnValue = "man")
@Entity
@Table(name = "Man")
public class Man implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.TABLE, generator = "idGenerator")
    @Column(name = "ID")
    private long id;

    public void setId(Long i) {
        this.id = i;
    }

    public Long getId() {
        return id;
    }
}
I initially set the ID to some arbitrary value (used as a test condition later on):
public class Sear {
    public static void main(String[] args) throws Exception {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("testID");
        EntityManager em = emf.createEntityManager();

        Man man = new Man();
        man.setId(-1L);
        try {
            em.getTransaction().begin();
            em.persist(man);
            em.getTransaction().commit();
        } catch (Exception e) {
        }
        if (man.getId() == -1) {
            // id unchanged: assume persisting failed
        }
    }
}
What is the expected value of man.id after executing commit()? Should it be -1, a newly generated value, or should I expect an exception?
I want to use that check to detect any exceptions while persisting.
What is the expected value of man.id after executing commit()? Should it be -1, a newly generated value, or should I expect an exception?
You are simply not supposed to set the id when using @GeneratedValue. The behavior on persist differs from one implementation to another, so relying on it is a bad idea (non-portable).
I want to use that check to detect any exceptions while persisting.
JPA will throw a (subclass of) PersistenceException if a problem occurs. The right way to handle a problem is to catch this exception (it is a RuntimeException, by the way).
If you insist on a poor man's check, don't assign the id, and check whether you still have the default value after persist (in your case, that would be 0L).
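A minimal sketch of that suggestion (the rollback handling is an added assumption for completeness):

// Don't pre-assign the id; the primitive long field defaults to 0L.
Man man = new Man();
try {
    em.getTransaction().begin();
    em.persist(man);
    em.getTransaction().commit();
} catch (PersistenceException e) {
    // assumption: roll back so the EntityManager stays usable
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
}
if (man.getId() == 0L) {
    // the id was never generated, so the persist did not succeed
}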
Your setting the value of an auto-generated field is irrelevant. It will be (should be) set by the JPA implementation according to the specified strategy.
In EclipseLink this is configurable using the IdValidation enum with the @PrimaryKey annotation, or via the "eclipselink.id-validation" persistence unit property.
By default, null and 0 cause the id to be regenerated, but other values are used as-is. If you set the IdValidation to NEGATIVE, negative numbers will also be replaced.
You can also configure your Sequence object to always replace the value.
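A hypothetical sketch of that EclipseLink-specific configuration (the annotation and enum come from the org.eclipse.persistence.annotations package; verify against your EclipseLink version):

import org.eclipse.persistence.annotations.IdValidation;
import org.eclipse.persistence.annotations.PrimaryKey;

// With NEGATIVE validation, a pre-set negative id (such as -1L) is treated as
// "not set" and replaced with a generated value on persist.
@Entity
@PrimaryKey(validation = IdValidation.NEGATIVE)
public class Man implements Serializable {
    // ... same fields as above
}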