MyBatis with JPA in Spring Boot

I'm trying to use JPA (CrudRepository), but I also want to create my own custom controller backed by MyBatis.
I have that working, but the problem is that, for example with procedures, they don't work together.
Is it possible to make JPA and MyBatis work together?
I've been reading a lot, and I understand that MyBatis is not an ORM. Some blogs indicate that it's possible, but not how.

It's possible to manage both JPA and MyBatis together under Spring transaction management. In fact, both can be rolled back together within the same transaction. However, do take note of side effects such as the following.
For example, within the same transaction:
// Perform an insert and expect the id to be returned
TableA tableA = new TableA();
jpaRepositoryForTableA.save(tableA);
// Use tableA's id in the next MyBatis mapper call
TableB tableB = new TableB();
tableB.setTableAId(tableA.getId());
this.mapper.saveTableB(tableB);
In the scenario above, TableB will not be able to get TableA's ID.
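For reference, a minimal sketch of the shared-transaction setup itself, assuming spring-boot-starter-data-jpa and mybatis-spring-boot-starter configured against the same DataSource; TableAJpaRepository, TableBMapper and the service are hypothetical names, and TableA/TableB are the entities from the example above:

import org.apache.ibatis.annotations.Mapper;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical Spring Data JPA repository for TableA
interface TableAJpaRepository extends JpaRepository<TableA, Long> {}

// Hypothetical MyBatis mapper for TableB (the SQL lives in its XML or annotations)
@Mapper
interface TableBMapper {
    void saveTableB(TableB tableB);
}

@Service
class MixedPersistenceService {

    private final TableAJpaRepository jpaRepositoryForTableA;
    private final TableBMapper mapper;

    MixedPersistenceService(TableAJpaRepository jpaRepositoryForTableA, TableBMapper mapper) {
        this.jpaRepositoryForTableA = jpaRepositoryForTableA;
        this.mapper = mapper;
    }

    // Both writes run in the same Spring-managed transaction: the JPA transaction
    // manager exposes the underlying JDBC connection, so the MyBatis mapper joins it,
    // and an exception from either call rolls back both.
    @Transactional
    public void saveBoth(TableA tableA, TableB tableB) {
        jpaRepositoryForTableA.save(tableA);
        mapper.saveTableB(tableB);
    }
}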

I don't think it's a good idea at all. Use one or the other.
I can imagine that you could make it work, but you can't use them on top of the same tables or use transaction management across both persistence frameworks.
So if you have such a use case (you didn't explain any), I would argue that your application should be split into two separate services. Optionally, consider separating your storage into two separate DB instances.

Related

Optimistic Locking in Spring Data JDBC

I noticed that Spring Data JDBC doesn't seem to implement optimistic locking (something like JPA's @Version annotation).
I was thinking of creating a @Modifying query which takes the version field into account and returns a boolean, so I can check manually whether the update was successful. But I'm afraid this approach is limited to simple entities, not aggregates spanning multiple tables.
What's the best way to implement optimistic locking for aggregates?
It depends on your situation. If you just have 7 aggregates of which 5 are single-entity aggregates, go for the @Modifying solution for the single-entity aggregates and write custom methods for the other 2.
If you have more aggregates consisting of more than one class, consider properly implementing it and submitting a PR. The issue already exists: https://jira.spring.io/projects/DATAJDBC/issues/DATAJDBC-219
The main code changes will be in SqlGenerator, which would need to add a where clause for aggregate roots if they have a version attribute.
If you are interested in doing a PR and need more assistance, please leave a comment on the issue.
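For illustration, a rough sketch of the @Modifying approach for a single-entity aggregate, assuming a hypothetical Account aggregate whose version column is maintained by hand (Spring Data JDBC allows @Modifying query methods to return boolean, indicating whether any row was updated):

import org.springframework.data.annotation.Id;
import org.springframework.data.jdbc.repository.query.Modifying;
import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;

// Hypothetical single-entity aggregate; "version" is just a plain column here.
class Account {
    @Id Long id;
    long balance;
    long version;
}

interface AccountRepository extends CrudRepository<Account, Long> {

    // Succeeds (returns true) only if the row still carries the expected version;
    // the version is bumped in the same statement, emulating what @Version would do.
    @Modifying
    @Query("UPDATE account SET balance = :balance, version = :expectedVersion + 1 " +
           "WHERE id = :id AND version = :expectedVersion")
    boolean updateIfVersionMatches(@Param("id") Long id,
                                   @Param("balance") long balance,
                                   @Param("expectedVersion") long expectedVersion);
}

A false return value means another transaction changed the row in the meantime, so the caller can reload and retry or report a conflict.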

JPA Entity CRUD type Operations support in MyBatis

For reasons I can't change, I cannot go with a JPA vendor like Hibernate and must use MyBatis.
Is there any implementation that provides a similar facility for CRUD operations in MyBatis?
(Like a GenericDAO with save, persist, merge, etc.)
I have managed to come up with a single-interface implementation of CRUD-type operations (like a generic DAO), but each table still has to define its own queries in an XML file (since table names and column names differ).
Would it make sense to come up with a generic implementation,
where I can pass any table object to any CRUD operation through only 4 XML queries (insert, update, read, delete), passing the table name, column names, column values, etc. as arguments?
Does that look like re-inventing the wheel in MyBatis, or does MyBatis have some similar support?
You can try MyBatis-Plus. It exists for exactly these cases.
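If it helps, a minimal sketch of the kind of generic CRUD MyBatis-Plus provides, assuming the mybatis-plus-boot-starter dependency; the User entity and its "user" table are made up for illustration:

import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import org.apache.ibatis.annotations.Mapper;

@TableName("user")
class User {
    @TableId
    private Long id;
    private String name;
    // getters and setters omitted for brevity
}

// Extending BaseMapper gives generic CRUD methods (insert, deleteById, updateById,
// selectById, selectList, ...) without writing any per-table XML for the basic cases.
@Mapper
interface UserMapper extends BaseMapper<User> {
}

More complex statements can still be written in XML or annotations alongside these inherited methods.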
MyBatis is not an ORM, instead it maps the result from SQL statements to objects.
You need to write SQL.
You will have a hard time if you try to apply the JPA model to MyBatis. You need to learn how MyBatis works instead.
You may be interested in the MyBatis Generator.
The generator looks at the physical tables in an RDBMS and generates the CRUD mappings. That is half the job done; the other half is to use these mappings in your actual code.
To be clear: the generator generates only the CRUD. For more complex operations like aggregations or joins, you will need to write the mappers on your own.
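As a rough illustration of "the other half", here is what using the generated artifacts typically looks like, assuming the generator was run with its defaults against a hypothetical role table (Role, RoleMapper and the method names follow the generator's usual conventions):

import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class RoleCrud {

    private final SqlSessionFactory sqlSessionFactory;

    public RoleCrud(SqlSessionFactory sqlSessionFactory) {
        this.sqlSessionFactory = sqlSessionFactory;
    }

    public Role rename(long id, String newName) {
        // openSession(true) enables auto-commit, to keep this example short
        try (SqlSession session = sqlSessionFactory.openSession(true)) {
            RoleMapper mapper = session.getMapper(RoleMapper.class); // generated interface
            Role role = mapper.selectByPrimaryKey(id);               // generated CRUD method
            role.setName(newName);
            mapper.updateByPrimaryKey(role);                         // generated CRUD method
            return role;
        }
    }
}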

Efficient querying when using DTOs in Breeze

We are using DTOs server side, and have configured a DbContext using the Fluent API in order to give Breeze the metadata it needs. We map 1:1 to our real database entities, and each DTO contains a simple subset of the real database entity's properties.
This works great, but now I need to figure out a way to make queries efficient: if the Breeze client queries for a single item, I don't want to have to create a whole set of DTO objects before I can filter. In other words, I want to execute the filter/sort on the actual entities, but still return DTO objects.
I guess I need to figure out a way to intercept the query execution in order to query my real database entities and return a DTO instead of the real database entity.
Any ideas for how to best approach this?
It turns out that if you use projection in a LINQ statement, e.g.
From PossibleCustomer As Customer In Customers
Select New CustomerDto With {.Id = PossibleCustomer.Id,
                             .Name = PossibleCustomer.Name,
                             .Email = PossibleCustomer.Email}
.. then LINQ is smart enough to still optimize any queries to the database - i.e. if I query on the LINQ statement above to filter for a single item by Id, the database is hit with that query for just a single item and a single DTO is created. Pretty clever stuff. This only works if you do a direct projection in the LINQ statement - if you call off to a function to create your DTO then this won't work.
Just in case others are facing the same scenario, you might want to look at AutoMapper - it can create these projections for you using a model you create just once, avoiding all those huge LINQ statements that are hard to read and validate. The AutoMapper projections (assuming you stick to the simple stuff) still allow the LINQ to Entities magic that ensures you don't have to do table scans when you create your DTOs.

Create new or update existing entity at one go with JPA

I have a JPA entity that has a timestamp field and is distinguished by a complex identifier field. What I need is to update the timestamp of an entity that has already been stored, and otherwise create and store a new entity with the current timestamp.
As it turns out, the task is not as simple as it seems at first sight. The problem is that in a concurrent environment I get a nasty "Unique index or primary key violation" exception. Here's my code:
// Load existing entity, if any.
Entity e = entityManager.find(Entity.class, id);
if (e == null) {
    // Could not find entity with the specified id in the database, so create a new one.
    e = entityManager.merge(new Entity(id));
}
// Set current time...
e.setTimestamp(new Date());
// ...and finally save entity.
entityManager.flush();
Please note that in this example entity identifier is not generated on insert, it is known in advance.
When two or more threads run this block of code in parallel, they may simultaneously get null from the entityManager.find(Entity.class, id) call, so they will attempt to save two or more entities with the same identifier at the same time, resulting in the error.
I think there are a few possible solutions to the problem.
Sure, I could synchronize this code block with a global lock to prevent concurrent access to the database, but would that be the most efficient way?
Some databases support a very handy MERGE statement that updates an existing row or creates a new one if none exists. But I doubt that OpenJPA (the JPA implementation of my choice) supports it.
Even if JPA does not support SQL MERGE, I can always fall back to plain old JDBC and do whatever I want with the database. But I don't want to leave the comfortable API and mess with a hairy JDBC+SQL combination.
There is a magic trick to fix it using only the standard JPA API, but I don't know it yet.
Please help.
You are referring to the transaction isolation of JPA transactions, i.e. what the behaviour of transactions is when they access other transactions' resources.
According to this article:
READ_COMMITTED is the expected default Transaction Isolation level for using [..] EJB3 JPA
This means that - yes, you will have problems with the above code.
But JPA doesn't support custom isolation levels.
This thread discusses the topic more extensively. Depending on whether you use Spring or EJB, I think you can make use of the proper transaction strategy.
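On the Spring side, that "proper transaction strategy" could look roughly like the sketch below. Entity and its identifier are the ones from the question, the service name is made up, and even with SERIALIZABLE the database may abort one of two racing transactions, so a retry may still be needed:

import java.util.Date;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EntityTimestampService {

    @PersistenceContext
    private EntityManager entityManager;

    // Raising the isolation level for just this operation; the commit at the end of
    // the method either inserts the new row or updates the existing one.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void createOrTouch(Object id) {
        Entity e = entityManager.find(Entity.class, id);
        if (e == null) {
            e = new Entity(id);
            entityManager.persist(e); // new entity with the known identifier
        }
        e.setTimestamp(new Date());   // managed entity, flushed automatically on commit
    }
}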

How do I use JPQL to delete entries from a join table?

I have a JPA object which has a many-to-many relationship like this:
@Entity
public class Role {
    //...
    @ManyToMany(fetch=FetchType.EAGER)
    @JoinTable(
        name="RolePrivilege",
        joinColumns=
            @JoinColumn(name="role", referencedColumnName="ID"),
        inverseJoinColumns=
            @JoinColumn(name="privilege", referencedColumnName="ID")
    )
    private Set<Privilege> privs;
}
There isn't a JPA object for RolePrivilege, so I'm not sure how to write a JPQL query to delete entries from the privs field of a role object. For instance, I've tried this, but it doesn't work. It complains that Role.privs is not mapped.
DELETE FROM Role.privs p WHERE p.id=:privId
I'm not sure what else to try. I could of course just write a native query which deletes from the join table RolePrivilege. But, I'm worried that doing so would interact badly with locally cached objects which wouldn't be updated by the native query.
Is it even possible to write JPQL to remove entries from a join table like this? If not I can just load all the Role objects and remove entries from the privs collection of each one and then persist each role. But it seems silly to do that if a simple JPQL query will do it all at once.
The JPQL update and delete statements need to refer to an Entity name, not a table name, so I think you're out of luck with the approach you've suggested.
Depending on your JPA provider, you could delete the entries from the JoinTable using a simple raw SQL statement (should be 100% portable) and then programmatically interact with your cache provider API to evict the data. For instance in Hibernate you can call evict() to expire all "Role.privs" collections from the 2nd level cache:
sessionFactory.evictCollection("Role.privs", roleId); //evict a particular collection of privs
sessionFactory.evictCollection("Role.privs"); //evict all privs collections
Unfortunately I don't work with the JPA APIs enough to know for sure what exactly is supported.
I was also looking for a JPQL approach to delete a many-to-many relation (contained in a join table). Obviously, there's no concept of tables in an ORM, so we can't use DELETE operations on them... But I wish there were a special operator for these cases, something like LINK and UNLINK. Maybe a future feature?
Anyway, what I've been doing to achieve this has been to work with the collections implemented in the entity beans, the ones which map the many-to-many relations.
For example, if I have a class Student which has a many-to-many relation with Courses:
@ManyToMany
private Collection<Courses> collCourses;
I add a getter and setter for that collection to the entity. Then, for example from an EJB, I retrieve the collection using the getter, add or remove the desired Course, and finally use the setter to assign the new collection. And it's done. It works perfectly.
However, my main worry is performance. An ORM is supposed to keep a huge cache of all objects (if I'm not mistaken), but even with the cache speeding things up, I wonder whether retrieving each and every element of a collection is really efficient...
To me it seems as inefficient as retrieving all the records from a table and post-filtering them in plain Java, instead of using a query language that works directly or indirectly with the internal DB engine (SQL, JPQL...).
Java is an object-oriented language and the whole point of ORM is to hide details of join tables and such from you. Consequently even contemplating deletion from a join table without considering the consequences would be strange.
No, you can't do it with JPQL; JPQL is for entities.
Retrieve the entities at either end and clear out their collections. This will remove the join table entries. Object-oriented.
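For the Role/Privilege mapping from the question, that could look roughly like this (a container-managed EntityManager and a getPrivs() accessor on Role are assumed, and the code must run inside a transaction):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

public class RolePrivilegeService {

    @PersistenceContext
    private EntityManager entityManager;

    // Removing the Privilege from the owning side's collection makes the provider
    // delete the corresponding RolePrivilege join-table row on flush/commit, and any
    // cached collections stay consistent because the change went through the entities.
    public void revokePrivilege(Long roleId, Long privilegeId) {
        Role role = entityManager.find(Role.class, roleId);
        Privilege privilege = entityManager.find(Privilege.class, privilegeId);
        role.getPrivs().remove(privilege); // assumes Privilege has sensible equals/hashCode
        // no explicit save needed; the managed Role is flushed automatically
    }
}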