Custom operator support in naming convention - spring-data-jpa

As we know, Spring Data JPA supports repositories that generate queries based on method names, e.g. (Kotlin):

@Repository
interface LocationRepository : JpaRepository<DbLocation, UUID>,
    JpaSpecificationExecutor<DbLocation> {
    fun findOneByNameIgnoringCase(name: String): DbLocation?
}
Now, Postgres supports some custom operators, for example timerange '@>' timestamp (read: the timerange contains the timestamp).
I'd like to have a function like this, without resorting to native queries/specifications:
@Repository
interface LocationRepository : JpaRepository<DbLocation, UUID>,
    JpaSpecificationExecutor<DbLocation> {
    fun findOneBySomeFieldContainsTimestamp(something: Instant): DbLocation?
}
Is there any way to extend Spring Data JPA to also support new operators?
Thanks in advance.

Is there any way to extend Spring Data JPA to also support new operators?
Not really.
Of course, technically the answer is yes, since Spring Data is open source and you can fork it. A good starting point would be JpaQueryCreator.build(), which contains a switch statement over all the supported operators.
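Short of forking, the usual workaround is the one the question rules out: a Specification that calls a SQL function wrapping the operator. A sketch, assuming a Postgres function named timerange_contains has been created to wrap '@>' (the function name and the someField property are hypothetical):

```java
import java.time.Instant;

import javax.persistence.criteria.Expression;

import org.springframework.data.jpa.domain.Specification;

public class LocationSpecifications {

    // Assumes a SQL function wrapping the Postgres operator, e.g.:
    // CREATE FUNCTION timerange_contains(tsrange, timestamp) RETURNS boolean ...
    public static Specification<DbLocation> someFieldContains(Instant timestamp) {
        return (root, query, cb) -> {
            Expression<Boolean> contains = cb.function(
                    "timerange_contains",     // registered SQL function name (assumed)
                    Boolean.class,
                    root.get("someField"),    // the tsrange column (assumed)
                    cb.literal(timestamp));
            return cb.isTrue(contains);
        };
    }
}
```

Since the repository already extends JpaSpecificationExecutor, it could then be called as repository.findOne(LocationSpecifications.someFieldContains(instant)).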

Related

Spring data Dynamic projection without creating projection dto/interface

I use Spring Data by extending SimpleJpaRepository. Sometimes we need only a few specific fields of an entity, and sometimes other fields. If we create a projection class or interface for every need, there will be many classes that are each used in only one place. Is there any way to pass the fields/columns as a map/list to createQuery?
I use Spring data by extending SimpleJpaRepository
That is at least weird, if not wrong. You'd normally extend one or multiple of Spring Data's interfaces.
Anyway, yes this is possible like so:
Is there a way to achieve this?
Yes, there is.
Version 2.6 RC1 of Spring Data JPA introduced fluent APIs for Query By Example, Specifications, and Querydsl.
You can use this, among other things, to configure projections. Note that only interface projections are supported.
You can use projections like this:
interface SomeRepository extends CrudRepository<User, Long>, JpaSpecificationExecutor<User> {}

class MyService {
    @Autowired
    SomeRepository repository;

    void doSomething() {
        List<User> users = repository.findBy(
                someSpecification,
                q -> q.project("firstname", "roles").all()
        );
        // ...
    }
}
It will return entities, but only the fields given in the project clause will be populated.
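Since only interface projections are supported, the same fluent API can also return a dedicated projection type via as(...) instead of the string-based project(...). A sketch, assuming the User entity has firstname and roles properties (the UserNames interface is made up for illustration):

```java
import java.util.List;

// Closed interface projection: only these properties are selected and filled.
interface UserNames {
    String getFirstname();
    List<String> getRoles();
}

// Inside the service, using the same repository and specification:
List<UserNames> names = repository.findBy(
        someSpecification,
        q -> q.as(UserNames.class).all());
```

The difference to project(...) is that the result is typed as the projection interface rather than as a partially populated entity.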

Spring data repository and DAO Java Generics

Reading about using Java Generics in the DAO layer, I have a doubt about applying this to Spring Data repositories. I mean, with Spring Data repositories you have something like this:
public interface OrderRepository extends CrudRepository<Order,OrderPK>{
}
But if I have 10 other entities, I have to create 10 interfaces like the one above to execute CRUD operations and so on, and I think this is not very scalable. Java Generics with DAOs is about creating one interface and one implementation and reusing them for all entities, but with Spring Data repositories I have to create one interface for each entity, so ...
You didn't really state a question, so I just add
Is this really true? And if so, why?
and answer it:
Yes, this is (almost) correct. Almost, because you should not create one repository per entity, but one repository per Aggregate Root. See http://static.olivergierke.de/lectures/ddd-and-spring/
Spring Data repositories offer various features for which Spring Data needs to know what entity it is dealing with. For example, query methods need to know the properties of the entity in order to convert the method name to a JPA query. So you have to pass that information to Spring Data at some point, and you also have to specify which entities should be considered Aggregate Roots. The way you do both is by declaring the interface.
Do you really need that? Well, if all you want is generic CRUD functionality, you can get that straight out of the box with plain JPA. But if you want query methods, pagination, simple native queries and much more, Spring Data is a nice way to avoid lots of boilerplate code.
(Please keep in mind that I'm biased)

What is the difference between spring-data-jpa Repository pattern Vs Querydsl query pattern?

I am developing a Spring MVC + spring-data-jpa + Hibernate example. I'm using the simple repository pattern (by extending JpaRepository<T, ID extends Serializable>) to query the DataSource (DS) and get the result. I can even write any custom query as per my business needs.
While doing research, I found the "querydsl-sql" API. This API uses plugins and needs QueryDslPredicateExecutor<T> (by extending JpaRepository<T, ID extends Serializable>, QueryDslPredicateExecutor<T>). But at a high level it looks to me like this API does the same thing the repository API does.
Could someone please suggest / guide me on the difference between the two methods? One uses a simple repository and the other uses QueryDslPredicateExecutor.
Repository method
List<Customer> findByCustomerNumberAndCustomerId(Integer customerNumber, Integer customerId);
Querydsl method
@Query("select c from Customer c where c.customerNumber = :customerNumber and c.customerId = :customerId")
List<Customer> findByCustomerNumberAndCustomerId(@Param("customerNumber") Integer customerNumber, @Param("customerId") Integer customerId);
Your "Querydsl method" example is actually a Spring Data repository method.
The difference is that Querydsl offers a simple and elegant way to create dynamic database queries, i.e. it allows you to create SQL "on the fly".
This is useful when, for example, you need to retrieve a collection of entities with a complex filter, e.g. by name, date, cost etc., and the resulting SQL should contain only the conditions actually specified in the filter.
Spring Data lets you achieve this without Querydsl using the built-in Specifications API, but the Querydsl way is simpler and even more IDE-friendly.
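For example, a filter where every criterion is optional can be assembled with a BooleanBuilder. A sketch, assuming a Customer entity with name and customerNumber properties (QCustomer is the class Querydsl generates from it, and the repository is assumed to extend QueryDslPredicateExecutor<Customer>):

```java
import com.querydsl.core.BooleanBuilder;
import com.querydsl.core.types.Predicate;

public class CustomerQueries {

    // Builds a predicate containing only the criteria actually supplied.
    public static Predicate customerFilter(String name, Integer customerNumber) {
        QCustomer customer = QCustomer.customer; // generated Q-class
        BooleanBuilder where = new BooleanBuilder();
        if (name != null) {
            where.and(customer.name.containsIgnoreCase(name));
        }
        if (customerNumber != null) {
            where.and(customer.customerNumber.eq(customerNumber));
        }
        return where;
    }
}

// Usage, with null meaning "no filter on this field":
// Iterable<Customer> result = repository.findAll(CustomerQueries.customerFilter("smith", null));
```

With derived query methods you would instead need one method per combination of filters; the predicate approach keeps it to one.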
More on this:
https://spring.io/blog/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/

Spring Data: what problems could I have if I replace CrudRepository with JpaRepository?

I have a few repository classes. Initially they all extended CrudRepository. Because of the need to return pageable records, I have made one repository class extend JpaRepository, which offers more than just pageable results.
Now I am thinking about replacing all CrudRepository with JpaRepository, regardless of whether a repository class needs it at the moment.
Are there any runtime issues such as memory, speed, etc?
Thanks for any input!
Regards.
If you need only the pagination abstraction, consider using PagingAndSortingRepository instead of JpaRepository. JpaRepository adds some JPA-specific functionality that you perhaps don't need.
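Using PagingAndSortingRepository for pageable results might look like this; the User entity, its Long id, and the lastname property are assumed for illustration:

```java
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Sort;
import org.springframework.data.repository.PagingAndSortingRepository;

interface UserRepository extends PagingAndSortingRepository<User, Long> {}

class UserService {

    UserRepository userRepository;

    void printThirdPage() {
        // Page index is zero-based: page 2 is the third page of 20 rows.
        Page<User> page = userRepository.findAll(
                PageRequest.of(2, 20, Sort.by("lastname").ascending()));
        long total = page.getTotalElements(); // backed by the extra count query
        page.forEach(System.out::println);
    }
}
```

Both the page content and the total are fetched from the database; nothing is paginated in application memory.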
In terms of memory and speed, it is not relevant in most cases.
The pagination is not done in memory (SimpleJpaRepository pushes it down to the database), so you will not have memory problems; you will only notice the usual JPA overhead, which is slight in most cases.
For pagination, an additional query is necessary to count the records, so there is a small overhead.
SimpleJpaRepository provides the JPA implementation of the PagingAndSortingRepository methods. You could also write a custom implementation using JDBC, for example; see this gist for an example implementation of PagingAndSortingRepository using JdbcTemplate.
Overall, I believe that's it :)

Integration tests in Scala when using companions with Play2? -> Cake pattern?

I'm working on my first Scala application, where we use an ActiveRecord style to retrieve data from MongoDB.
I have models like User and Category, which all have a companion object that uses the trait:
class MongoModel[T <: IdentifiableModel with CaseClass] extends ModelCompanion[T, ObjectId]
ModelCompanion is a Salat class which provides common MongoDB CRUD operations.
This permits retrieving data like this:
User.profile(userId)
I never had any experience with this ActiveRecord query style, but I know Rails people use it, and I think I saw it in the Play documentation (version 1.2?) for dealing with JPA.
For now it works fine, but I want to be able to run integration tests on my MongoDB.
I can run an "embedded" MongoDB with a library. The big deal is that my host/port configuration is actually kind of hardcoded in the MongoModel class, which is extended by all the model companions.
I want to be able to specify a different host/port when I run integration tests (or any other "profile" I could create in the future).
I understand well dependency injection, using Spring for many years in Java, and the drawbacks of all this static stuff in my application. I saw that there is now a scala-friendly way to configure a Spring application, but I'm not sure using Spring is appropriate in Scala.
I have read some stuff about the Cake pattern and it seems to do what I want, being some kind of typesafe, compile-time-checked spring context.
Should I definitely go to the Cake pattern, or is there any other elegant alternative in Scala?
Can I keep using an ActiveRecord style or is it a total anti-pattern for testability?
Thanks
No static references: with the Cake pattern you get two different classes for the two namespaces/environments, each overriding the host/port resource on its own. Create a trait containing your resources, inherit from it twice (each time providing the actual host/port information for one environment) and mix it into the appropriate companion objects (one for prod, one for test). Inside MongoModel, add a self-type that is your new trait, and refactor all host/port references in MongoModel to use that self-type.
I'd definitely go with the Cake Pattern.
You can read the following article, which shows an example of how to use the Cake pattern in a Play2 application:
http://julien.richard-foy.fr/blog/2011/11/26/dependency-injection-in-scala-with-play-2-it-s-free/