Spring Data repository and DAO Java Generics

Reading about using Java Generics in the DAO layer, I have a doubt about applying this to Spring Data repositories. With Spring Data repositories, you have something like this:
public interface OrderRepository extends CrudRepository<Order, OrderPK> {
}
But if I have ten other entities, I have to create ten interfaces like the one above just to get CRUD operations, and I don't think that is very scalable. The Java Generics and DAO approach is about creating one interface and one implementation and reusing them for all entities, but with Spring Data repositories I have to create one interface per entity, so ...

You didn't really state a question, so I'll just add one:
Is this really true? And if so, why?
and answer it:
Yes, this is (almost) correct. Almost, because you should not create one repository per entity, but one repository per Aggregate Root. See http://static.olivergierke.de/lectures/ddd-and-spring/
Spring Data repositories offer various features for which Spring Data needs to know what entity it is dealing with. For example, query methods need to know the properties of the entity in order to convert the method name into a JPA query. So you have to pass that information to Spring Data at some point, and you also have to declare which entities should be considered Aggregate Roots. The way you do both is by specifying the interface.
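For example, here is a minimal sketch of such a query method (assuming Order has a customerId property; the property and method names are illustrative):

import java.util.List;
import org.springframework.data.repository.CrudRepository;

public interface OrderRepository extends CrudRepository<Order, OrderPK> {
    // Spring Data parses the method name and, because it knows the managed
    // entity is Order, derives a query along the lines of
    // "select o from Order o where o.customerId = ?1"
    List<Order> findByCustomerId(Long customerId);
}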
Do you really need that? Well, if all you want is generic CRUD functionality, you can get that straight out of the box with JPA. But if you want query methods, pagination, simple native queries and much more, Spring Data is a nice way to avoid lots of boilerplate code.
(Please keep in mind that I'm biased)

Related

Spring Data: when to use Projection interfaces and DTO projections?

I have this situation:
Spring Data JPA: Work with Pageable but with a specific set of fields of the entity
It is about working with Spring Data and a specific set of fields of an @Entity.
The two suggestions are totally valid for me:
DTO projections
Projection interfaces
Even more, in spring-data-examples both appear together (I know, for sample purposes):
CustomerRepository.java
Thus:
When is it mandatory to use one over the other, and why?
Is there a performance cost of one over the other?
Note that the Class-based Projections (DTOs) section says the following:
Another way of defining projections is by using value type DTOs (Data Transfer Objects) that hold properties for the fields that are supposed to be retrieved. These DTO types can be used in exactly the same way projection interfaces are used, except that no proxying happens and no nested projections can be applied.
The differences seem to be: no proxying happens, and no nested projections can be applied.
DTO Approach
Pro
Simple and straightforward.
Con
It will result in more code, as you have to create a DTO class with a constructor and getters/setters (unless you use Project Lombok to avoid the boilerplate code for DTOs).
No nested projections can be applied.
Projections
Pro
Less code as it uses only interfaces.
Nested projections can be applied.
Dynamic projections allow you to write one generic repository method that returns different subsets of the entity's attributes based on the client's needs.
Con
Spring generates a proxy at runtime.
The query could return the entire entity object from the database to the Spring layer, even though only a trimmed version (via the projection) is returned from the Spring layer to the client. I wasn't sure about this specific disadvantage; I hope someone will edit this answer if necessary.
If you need nested or dynamic projections, you probably want the projection approach rather than the DTO approach.
Refer to the official Spring documentation for details.
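To make both options concrete, here is a minimal sketch (assuming a Customer entity with firstName and lastName properties; all names are illustrative):

import java.util.Collection;
import org.springframework.data.repository.CrudRepository;

// Interface-based projection: Spring Data backs this with a runtime proxy.
interface CustomerName {
    String getFirstName();
    String getLastName();
}

// Class-based (DTO) projection: a plain value type, no proxying, no nesting.
class CustomerNameDto {
    private final String firstName;
    private final String lastName;

    CustomerNameDto(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
    // getters omitted for brevity
}

interface CustomerRepository extends CrudRepository<Customer, Long> {
    Collection<CustomerName> findByLastName(String lastName);          // interface projection
    Collection<CustomerNameDto> findDtoByLastName(String lastName);    // DTO projection (the "Dto" infix is ignored by the parser)
    <T> Collection<T> findByLastName(String lastName, Class<T> type);  // dynamic projection
}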
I think DTOs were the first available solution for working with a small subset of the data in the entities. Today many operations can also be done with projections, but you need to be careful about performance. If you read Janssen's post Entities or DTOs – When should you use which projection? you will note that DTOs perform better than projections for read operations.
If you don't have a performance problem, projections are the more graceful choice.

Can I use Spring Data JPA @Entity and Spring Data GemFire @Region together on the same POJO?

When I try to use the same POJO for Spring Data JPA integration with Spring Data GemFire, the repository always accesses the database for the POJO, but I want the repository to access data from GemFire, even though I added the @EnableGemfireRepositories and @EnableEntityDefinedRegions annotations.
I think it is because I added @Entity and @Region together on the same POJO class.
Can this be fixed, and if so, how? Do I need to separate it into two POJO classes, one for the database and one for GemFire?
Thanks
No, you do not need 2 separate POJOs. However, you do need 2 separate Repository interface definitions, 1 for JPA and a 2nd for GemFire. I have an example of such an implementation here, in the repository-example.
In the contacts-core module, I have an example.app.model.Contact class that is annotated with both JPA's @Entity annotation and SDG's @Region annotation, in addition to other annotations (e.g. Jackson).
I then create 2 Repository interfaces in the repository-example module, 1 for JPA, which extends o.s.d.jpa.repository.JpaRepository, and another for GemFire, which extends o.s.d.gemfire.repository.GemfireRepository. Notice too that these Repositories are separated by package (i.e. example.app.repo.jpa vs. example.app.repo.gemfire) in my example.
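In outline, the two definitions look something like this (a sketch; the layout follows the example project, but the identifier type is assumed):

// in package example.app.repo.jpa
public interface ContactRepository
        extends org.springframework.data.jpa.repository.JpaRepository<Contact, Long> { }

// in package example.app.repo.gemfire
public interface ContactRepository
        extends org.springframework.data.gemfire.repository.GemfireRepository<Contact, Long> { }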
Keep in mind, Spring Data enforces a strict policy mode that prevents ambiguity if the application Repository definition (e.g. ContactRepository) is generic, meaning that the interface extends one of the common Spring Data interfaces (o.s.d.repository.Repository, o.s.d.repository.CrudRepository or perhaps o.s.d.repository.PagingAndSortingRepository) and resides in a package covered by the "scan" for both JPA and GemFire. This is the same for any Spring Data module that supports the Repository abstraction, including, but not limited to, MongoDB and Redis.
You must be very explicit in your declarations and intent. While it is generally not a requirement to extend store-specific Repository interface definitions (e.g. o.s.d.gemfire.repository.GemfireRepository), and rather extend a common interface (e.g. o.s.d.repository.CrudRepository), it is definitely advisable to put your different, per store Repository definitions in a separate package and configure the scan accordingly. This is good practice to limit the scan in the first place.
Some users are tempted to want a single, "reusable" Repository interface definition per application domain model type (e.g. Contact) for all the stores they persist the POJO to. For example, a single ContactRepository for both JPA and GemFire. This is ill-advised.
This stems from the fact that while most stores support basic CRUD and simple queries (e.g. findById(..)), though not all do (so be careful), stores are not equal in their query capabilities (e.g. JOINs) or features (e.g. paging). For example, SDG does not, as of yet, support paging.
So the point is, use 1 domain model type, but define a Repository per store clearly separated by package. Then you can configure the Spring Data Repository infrastructure accordingly. For instance, for JPA I have a configuration which points to the JPA-based ContactRepository using the ContactRepository class (which is type-safe and better than specifying the package by name using the basePackages attribute). Then, I do the same for the GemFire-based ContactRepository here.
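A minimal sketch of that configuration (assuming the package layout above):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.gemfire.repository.config.EnableGemfireRepositories;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;

@Configuration
// type-safe: scans the package of the JPA-based ContactRepository
@EnableJpaRepositories(basePackageClasses = example.app.repo.jpa.ContactRepository.class)
class JpaRepositoryConfig { }

@Configuration
// type-safe: scans the package of the GemFire-based ContactRepository
@EnableGemfireRepositories(basePackageClasses = example.app.repo.gemfire.ContactRepository.class)
class GemFireRepositoryConfig { }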
By following this recipe, all is well, and you can inject the appropriate Repository (by type) into the service class that requires it. If you have a service class that requires both Repositories, then you must inject them appropriately, for example.
Hope this helps!

Is there a mismatch between Domain-Driven Design repositories and Spring Data ones?

DDD specifies a repository per aggregate, but when embracing Spring Data JPA we can leverage the benefits only when we declare an interface per entity. How can this impedance mismatch be resolved?
I'm hoping to try out repository interfaces encapsulated within the aggregate repository; is that an OK solution, or is anything better available?
To give an example: Customer is the aggregate root, and the entities are Demographics, Identification, AssetSummary, etc., where each entity could benefit from having its own repository interface. What is the best way to do this without violating DDD too much?
…, but when embracing Spring Data JPA, we can leverage the benefits only when we declare interface per entity…
That's wrong, and I would like to learn where you got this impression from (feel free to comment). Spring Data repositories expect exactly the same approach to your domain model design: you identify aggregates in your domain model and create repository interfaces for exactly those, and only those.
I'd argue that all you need to do is apply the DDD concept to your domain model: simply don't declare repository interfaces for entities that are not aggregate roots. In fact, if you declared those, you would basically break the concept of an aggregate, as the root could no longer enforce business constraints: the other entities could be manipulated through the repository interfaces defined for them, i.e. without going through the aggregate root.
Find an example of this applied correctly in this Spring Data example. In it, Order is an aggregate root; LineItem is just an ordinary entity. The same applies to Customer (root) and Address (ordinary entity). Repository interfaces exist only for the aggregate roots.
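In code, that design looks roughly like this (a sketch; field and method names are illustrative):

import java.util.ArrayList;
import java.util.List;
import org.springframework.data.repository.CrudRepository;

// Aggregate root: the only entry point for manipulating its line items.
class Order {

    private final List<LineItem> lineItems = new ArrayList<>();

    public void add(LineItem item) {
        // the root is where business constraints are enforced
        lineItems.add(item);
    }
}

// Ordinary entity: deliberately has no repository of its own.
class LineItem { /* ... */ }

// A repository exists for the aggregate root only; LineItems are
// persisted and loaded through their Order.
interface OrderRepository extends CrudRepository<Order, Long> { }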
In fact, that particular relationship is the fundamental principle that makes modules like Spring Data REST work in the first place. It only exposes HTTP resources for aggregate roots, embeds ordinary entities within the representations it creates, and creates links to other aggregates.

Pushing OData filters down through layers

I have a working project with the following layers:
DataAccess - The project hits multiple DBs and web services. This layer defines the interfaces to each of them and exposes the native types for each source (EF types for DBs, SOAP-defined types for web services, etc.). Let's say it exposes an EFProject and a SoapProject.
Repository - This layer stitches results from the various sources together to form a single entity, and exposes it. Let's call this ModelProject.
Service - Adds REST attributes to the entity (action links, etc.). This exposes a ProjectDTO.
WebApi - The controller spits out the ProjectDTO directly.
I'm trying to implement OData, specifically to page the results of very large queries. I've read a lot of examples, but they all seem to expose the source objects directly, and then map them to the final DTOs in the controller.
I would like to somehow push the ODataQueryOptions down to the Repository. This would allow me to keep the existing structure, and pass the query logic down to SQL. I understand that, because the ODataQueryOptions reference the ProjectDTO type, they can't be applied until an object of that type is available. Is there a way to "translate" the ODataQueryOptions from one type to another? Is there another way of doing this that I'm not aware of?
You are able to work with the ODataQueryOptions directly; controller actions with signatures like these let you handle the options yourself:
public IQueryable Get(ODataQueryOptions queryOptions)
public IQueryable Get(int key, ODataQueryOptions queryOptions)
Here is a sample about this: https://aspnet.codeplex.com/SourceControl/latest#Samples/WebApi/OData/v4/ODataQueryableSample/Controllers/OrdersController.cs
For your reference, the source code of ODataQueryOptions is: https://aspnetwebstack.codeplex.com/SourceControl/latest#src/System.Web.OData/OData/Query/ODataQueryOptions.cs
Is there a way to "translate" the ODataQueryOptions from one type to another? Is there another way of doing this that I'm not aware of?
There is a way to actually translate the IQueryable<DomainModel> to IQueryable<DtoModel>.
I've done something similar in the past by leveraging AutoMapper's projection functionality. By calling the Project<TSource>/To<TTarget> methods, you can change an IQueryable that points to your domain models into another IQueryable that targets the DTO models, without actually executing it.
This means that you can now perform any OData operations at the DTO level, and they will transfer through the projection to the DAL layer, into Entity Framework and SQL. In a scenario like this there shouldn't be any need to manually handle the query logic, so you can just use [EnableQuery] on the API route and let OData do its thing on the resulting IQueryable<DtoModel>.
I used this very successfully in one of the projects I worked on: as long as you rely just on AutoMapper projection to convert the types, it should work fine.
Granted, you can't do a lot of fancy mapping that way. The projection methods cannot apply every kind of mapping you can otherwise create, so I recommend checking the documentation on that front.
You also have to keep in mind that the original IQueryable needs to be exposed outside of the repository layer for this to work properly; otherwise the query will be executed too early. Some people find that a boundary violation and advocate materializing the query inside the repository layer, but I don't have an issue with that particular aspect.

Persistence ignorance and DDD reality

I'm trying to implement fully valid persistence ignorance with little effort. I have many questions, though:
The simplest option
It's really straightforward: is it okay to have Entities annotated with Spring Data annotations, just like in SOA (but making them really contain the logic)? What are the consequences, other than using persistence annotations in the Entities, which doesn't really follow the PI principle? I mean, is that really the case with Spring Data? It provides nice repositories which do what repositories in DDD should do. The problem is then with the Entities themselves...
The harder option
In order to make an Entity unaware of where the data it operates on came from, it is natural to inject that data as an interface through the constructor. Another advantage is that we could always perform lazy loading, which we have by default in the Neo4j graph database, for instance. The drawback is that Aggregates (which are composed of Entities) will be aware of all the data even if they don't use it; possibly this could lead to debugging difficulties, as the data is totally exposed (DAOs would be hierarchical, just like Aggregates). This would also force us to use adapters for the repositories, as they don't store real Entities anymore, and any such translation is ugly. Another thing is that we cannot instantiate an Entity without such a DAO, though there could be in-memory implementations in the domain... again, more layers. Some say that injecting DAOs breaks PI too.
The hardest option
The Entity could be wrapped in a lazy loader which decides where the data should come from. It could be both in-memory and in-database, and it could handle any operations that need transactions and so on. A complex layer, though it might be generic to some extent, perhaps...? Have a read about it here.
Do you know any other solutions? Or maybe I'm missing something in the ones mentioned. Please share your thoughts!
I achieve persistence ignorance (almost) for free, as a side effect of proper domain modeling.
In particular:
if you correctly define each context's boundary, you will obtain small entities without any need for lazy loading (which actually becomes an antipattern/code smell in a DDD project)
if you can't simply use SQL in your repository, map a set of DTOs to your DB schema and use them in factories to initialize entity classes (sketched below)
In DDD projects, persistence ignorance is relevant for the domain model itself, not for repositories, factories and other application code. Indeed, you are very unlikely to change the ORM and/or the DB in the future.
The only (but very strong) rationale behind persistence ignorance of the domain model is separation of concerns: in the domain model you should express business invariants only! Persistence is an infrastructural concern!
For example, without persistence ignorance (and with lazy loading) the domain model has to handle possible exceptions from the DB, its complexity grows, and business rules get buried under technological details.
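A minimal sketch of the DTO-plus-factory idea mentioned above (all names are illustrative; the domain types are stubbed just to make the example self-contained):

// Minimal domain types for the sketch
record OrderId(long value) { }
enum Status { OPEN, CLOSED; static Status of(String s) { return valueOf(s); } }
class Order {
    private final OrderId id;
    private final Status status;
    Order(OrderId id, Status status) { this.id = id; this.status = status; }
}

// DTO shaped after the DB schema; lives in the infrastructure layer
class OrderRow {
    long id;
    String status;
}

// Factory: the only place that knows how rows map to the domain entity
class OrderFactory {
    Order createFrom(OrderRow row) {
        return new Order(new OrderId(row.id), Status.of(row.status));
    }
}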
Personally, I find it nearly impossible to achieve a clean domain model when trying to use the same entities as the ORM.
My solution is to model my domain entities as I see fit and ensure that any ORM entities don't leak outside of the repositories. This means that my repositories accept and return domain entities.
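A minimal sketch of that boundary (assuming a JPA-mapped OrderJpaEntity and a plain Order domain class; all names are illustrative):

import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;

// Plain domain entity, free of ORM annotations.
class Order {
    final long id;
    Order(long id) { this.id = id; }
}

// JPA-mapped counterpart, internal to the persistence layer.
@Entity
class OrderJpaEntity {
    @Id Long id;
}

// The repository contract speaks only in domain terms.
interface OrderRepository {
    Order findById(long id);
    void save(Order order);
}

// The ORM entity never leaves the repository implementation.
class JpaOrderRepository implements OrderRepository {

    private final EntityManager em;

    JpaOrderRepository(EntityManager em) { this.em = em; }

    @Override
    public Order findById(long id) {
        OrderJpaEntity row = em.find(OrderJpaEntity.class, id);
        return new Order(row.id); // map ORM entity -> domain entity
    }

    @Override
    public void save(Order order) {
        OrderJpaEntity row = new OrderJpaEntity();
        row.id = order.id; // map domain entity -> ORM entity
        em.merge(row);
    }
}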
This means you lose "most of your ORM goodness" and end up "using your ORM for simple CRUD operations".
Both of these trade-offs are fine for me, I would rather have a clean domain model that I can use, rather than one polluted with artefacts from my DB or ORM. It also cuts down the amount of time I spend "wrestling with my ORM" to zero.
As a side-note, I find document databases a much better fit for DDD.
Once you provide persistence mapping in your domain model:
your code depends on the framework; if you decide to change that framework, you have to change both the persistence layer and the model layer source code: more work, more changes, more merging of code, etc.
your domain model jar file depends on Spring/Hibernate jars, etc.
your classes become larger and larger as business code and persistence-related code grow
I have to admit that I don't understand the harder and hardest options.
We used separate interfaces and implementations for domain entities, and provided separate mapping files using Hibernate, along with repositories.
Entities are created using a factory (or a repository later); the identifier is generated within the persistence layer, and the entity does not need it until it is persisted.
Lazy loading is provided by a special implementation of List once:
the mapping of an entity contains it
the entity/aggregate is fetched from the persistence layer
The only issue is related to transactions: when you use a lazy-loaded collection outside of the transaction scope, it fails.
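A minimal sketch of such a lazily loaded list (assuming a Supplier that runs the actual query; this also shows where the out-of-transaction failure occurs):

import java.util.AbstractList;
import java.util.List;
import java.util.function.Supplier;

class LazyList<T> extends AbstractList<T> {

    private final Supplier<List<T>> loader; // runs the query on first access
    private List<T> delegate;

    LazyList(Supplier<List<T>> loader) {
        this.loader = loader;
    }

    private List<T> loaded() {
        if (delegate == null) {
            // fails here if first touched outside the transaction scope
            delegate = loader.get();
        }
        return delegate;
    }

    @Override
    public T get(int index) {
        return loaded().get(index);
    }

    @Override
    public int size() {
        return loaded().size();
    }
}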
I would follow the simplest option unless I ran into a stone wall. There are also pitfalls, such as this one, when you adopt the PI principle.
Sometimes some compromises are acceptable:
public class Order {

    private String status; // my ORM does not support enums

    public Status status() {
        return Status.of(this.status);
    }

    public boolean is(Status status) {
        return status() == status; // use status() instead of getStatus() in the domain model
    }
}