I have two persistence units with the same name, one in src/main/resources, another in src/test/resources. However, some information is the same in both: the list of entity classes, some (but not all) properties, and so on. How can I avoid duplicating it? If the answer depends on the JPA implementation, I am interested in both OpenJPA and EclipseLink.
You can pass persistence unit properties at runtime via Persistence.createEntityManagerFactory(String, Map).
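For example, a minimal sketch, assuming a unit named "myUnit" defined once in src/main/resources (the property values are illustrative): the shared definition stays in persistence.xml and the tests override only what differs.

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class TestPersistence {

    // Bootstrap the shared unit, overriding test-specific properties only.
    public static EntityManagerFactory createForTests() {
        Map<String, String> overrides = new HashMap<>();
        overrides.put("javax.persistence.jdbc.url", "jdbc:h2:mem:test"); // test-only DB
        overrides.put("javax.persistence.jdbc.driver", "org.h2.Driver");
        return Persistence.createEntityManagerFactory("myUnit", overrides);
    }
}

The javax.persistence.jdbc.* property names are standard JPA 2.0, so this works with both OpenJPA and EclipseLink.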
If they are separate persistence units they should have different names...
Related
There are classes that are entities according to DDD, and there are classes that have the @javax.persistence.Entity annotation. Should they be the same classes? Or should JPA entities act just as a mechanism for a mapper (https://martinfowler.com/eaaCatalog/dataMapper.html) to load DDD entities from a database (and store them) and be kept outside the domain model?
Would it make a difference if the database metadata were separated and stored externally (for example, in XML)? If such classes are entities, where is the boundary? I think classes generated from XSD (for example, with JAXB) or even from a database with MyBatis Generator are not entities as understood in DDD.
That's really an implementation detail. They could be, or they could not be, depending on the flexibility of your ORM. For instance, if your ORM allows you to map your domain objects without polluting them with persistence concerns, then that's the approach that requires the least overhead and the one I'd go for.
On the other hand, if your ORM is not flexible enough, then you could go for a pragmatic hybrid approach where your AR and its state are two different classes, and where the state class is simple enough to be mapped easily. Note that the AR would still be responsible for protecting its state here, and the state object wouldn't be accessed directly from outside the AR. The approach is described by Vaughn Vernon here.
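A rough sketch of that hybrid split (class and method names are illustrative, not from any particular framework):

public class Order { // the aggregate root, not mapped by the ORM directly

    private final OrderState state; // simple state class, easy for the ORM to map

    public Order(OrderState state) {
        this.state = state;
    }

    // The AR still protects its state: invariants live here, not in OrderState.
    public void ship() {
        if (!"PAID".equals(state.getStatus())) {
            throw new IllegalStateException("only paid orders can be shipped");
        }
        state.setStatus("SHIPPED");
    }
}

class OrderState { // anemic on purpose; never exposed outside the AR
    private String status;

    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}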
Your JPA entities should be the domain entities. Why?
Your domain entities should express some strong constraints, e.g. by
Having parameterized constructors
Not exposing all setters
Doing validation on write operations
If possible, a domain entity should always remain a valid business entity.
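For instance, such an entity might look like this (an illustrative sketch, not taken from the question's code):

public class BankAccount {

    private long balanceInCents;

    // Parameterized constructor: the entity cannot be created in an invalid state.
    public BankAccount(long initialBalanceInCents) {
        if (initialBalanceInCents < 0) {
            throw new IllegalArgumentException("balance must not be negative");
        }
        this.balanceInCents = initialBalanceInCents;
    }

    // No setter for the balance: writes go through validated operations only.
    public void withdraw(long amountInCents) {
        if (amountInCents <= 0 || amountInCents > balanceInCents) {
            throw new IllegalArgumentException("invalid withdrawal amount");
        }
        balanceInCents -= amountInCents;
    }

    public long balanceInCents() {
        return balanceInCents;
    }
}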
By introducing some kind of mapper, you introduce a possibility to automagically write arbitrary stuff into your domain entities, which basically renders your constraints useless.
The other option would be enforcing the same constraints on both JPA and domain entities, which introduces redundancy.
Your best bet is keeping your JPA entities as ORM-agnostic as possible. Using Hibernate, this can be done with a configuration class or an XML file. But I am no Java EE/JPA guy, so it's hard for me to give good implementation advice.
After some more experience with JPA and microservices, I would say that I would most likely not separate them when using JPA, unless there's a reason that makes me do otherwise. On the other hand, entities in a single bounded context do not necessarily have to be only JPA entities. It's possible to have both entities mapped by the JPA implementation and entities mapped from DTOs using other technologies (like JSON mappers) or manually.
I agree that both ways are possible. After programming some applications with DDD in mind, I find that this heuristic works well:
If you start from having an entity and not having JPA, it will probably be too hard to refactor the entity so that it can be used by an ORM framework, so keep them separate
If you start from scratch, it is worth not distinguishing DDD entities from JPA entities
I'm aware that copying entity classes and properties into DTOs is considered an anti-pattern, so by the Exposed Domain Model pattern the same @Entity can be used both as a database entity class and as a DTO for the service and MVC layers. (see here https://codereview.stackexchange.com/questions/93511/data-transfer-objects-vs-entities-in-java-rest-server-application)
But suppose we have microservice architecture where the same set of properties is used as entity in one project with persistence, and as DTO in another project which uses the first one as a service. What's the proposed pattern in such a situation?
Because the second project doesn't need @Entity-related functionality, and if we put that class in a shared library, it will be tied unnecessarily to JPA-specific APIs and libraries. The alternative is to fall back on the separate-DTO-classes anti-pattern again.
When your requirements for a DTO model exactly match your entity model you are either in a very early stage of the project or very lucky that you just have a simple model. If your model is very simple, then DTOs won't give you many immediate benefits.
At some point, the requirements for the DTO model and the entity model will diverge though. Imagine you add some audit aspects, statistics or denormalization to your entity/persistence model. That kind of data is usually never exposed via DTOs directly, so you will need to split the models. It is also often the case that the main driver for DTOs is the fact that you don't need all the data all the time. If you display objects in e.g. a dropdown you only need a label and the object id, so why would you load the whole entity state for such a use case?
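The dropdown case can be handled with a plain JPQL constructor expression, for example (a sketch; the ProductOption class and the query are illustrative, and em is an EntityManager):

// Narrow, read-only DTO for the dropdown: just an id and a label.
public class ProductOption {
    public final Long id;
    public final String label;

    public ProductOption(Long id, String label) {
        this.id = id;
        this.label = label;
    }
}

// Loads only two columns instead of the whole entity state:
List<ProductOption> options = em.createQuery(
        "select new com.example.ProductOption(p.id, p.name) from Product p",
        ProductOption.class)
    .getResultList();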
The fact that you have annotations on your DTO models shouldn't bother you that much; what is the alternative? An XML-like mapping? Manual object wiring?
If your model is used by third parties directly, you could use subclassing, i.e. keep the main model free of annotations and have annotated subclasses in your project that extend the main model.
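A sketch of that subclassing idea (assuming the clean superclass is declared as a mapped superclass in an orm.xml file, since it must stay free of annotations; all names are illustrative):

import javax.persistence.Entity;
import javax.persistence.Table;

// Published, annotation-free model that third parties depend on.
public class CustomerModel {
    protected Long id;
    protected String name;

    public Long getId() { return id; }
    public String getName() { return name; }
}

// Internal, annotated subclass; the id mapping for the inherited fields
// is declared in orm.xml so CustomerModel itself stays untouched.
@Entity
@Table(name = "CUSTOMER")
public class CustomerEntity extends CustomerModel {
}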
To make implementing a DTO approach easier, I created Blaze-Persistence Entity Views, which not only simplifies the way you define DTOs but also improves the performance of your queries.
If you are interested, I even have an example for an external model that uses entity view subclasses to keep the main model clean.
Thank you for the answers, but the emphasis in the question is on microservice (MS) architecture and on reusing entity POJOs defined in one MS as POJOs in another. From what I've read on microservices, it's closely related to another question: should MSs share any common functionality and classes at all, or be completely independent? It seems there is no definite agreement on it, and also no definite answer or widely accepted pattern for this.
From my recent experience here is what I adopted, and it works well so far.
Have common functionality across MSs - yes, in the form of a commons project added as a dependency to all MSs, with its dependencies set as optional. Share entity classes (expose them in commons) - no.
The main reason is that entity classes are closely tied to the data store of a particular MS. And since the established rule is that MSs shouldn't share data stores, it makes sense not to share the entity classes for those data stores either. It keeps MSs more independent and gives them the freedom to manage their data stores in their own way. It means some more typing to add DTO classes and conversions between them, but that's a trade-off worth taking to retain MS independence. The reasons Christian Beikov and Maksim Gumerov mentioned apply as well.
What we do share (put in commons) is some common functionality and helper classes (for cloud, discovery, error handling, REST and JSON configuration...) and pure DTOs, where the T stands for transfer between MSs (REST entities or message payloads).
I would like to be able to inject some dependencies (by using an IoC container) into entities just after they are loaded and materialized by Entity Framework (as the result of a query, for instance).
It is possible to do so by hooking on the ObjectMaterialized event, but I'm wondering if there is a better way to achieve this, as I use EF 6 and Code First.
Any advice or ideas?
Thanks,
Riana
Although Entity Framework can be configured to allow dependencies to be injected into entities, I think it's safe to say that the general consensus (take a look at the opinions of Jimmy Bogard, Mark Seemann and me) is to not do this at all.
For me the main point is that classes like entities, DTOs and messages are very different from service classes. Entities, DTOs and messages are short-lived objects containing runtime data, while services contain behavior, are often long-lived, and simply process runtime data (such as entities).
That doesn't mean that your entities can't use services, though. As Mark describes here, not letting your entities use services leads to an Anemic Domain Model. But it does mean that services shouldn't be part of your entities' object graph.
Instead, if you are practicing DDD, your entities can simply accept dependencies in the domain methods that you define on them. Those dependencies can then be supplied by the command handlers that execute the use case. In other words, dependencies are injected into the constructor of a command handler, and when calling an entity's domain method, the command handler supplies the dependencies that the method requires (usually just one or two) to that method (method injection).
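A minimal sketch of that flow (every name here is illustrative):

interface ShippingService { /* domain service contract */ }

interface OrderRepository {
    Order getById(long orderId);
    void save(Order order);
}

class Order {
    // The entity uses the service transiently and never stores a reference to it.
    public void ship(ShippingService shippingService) {
        // ... domain logic that consults the service ...
    }
}

public class ShipOrderHandler {

    private final OrderRepository orders;
    private final ShippingService shippingService;

    // Constructor injection happens once, in the command handler.
    public ShipOrderHandler(OrderRepository orders, ShippingService shippingService) {
        this.orders = orders;
        this.shippingService = shippingService;
    }

    public void handle(long orderId) {
        Order order = orders.getById(orderId);
        order.ship(shippingService); // method injection into the entity
        orders.save(order);
    }
}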
I'm trying to implement fully valid persistence ignorance with little effort. I have many questions though:
The simplest option
It's really straightforward - is it okay to have Entities annotated with Spring Data annotations just like in SOA (but make them really do the logic)? What are the consequences, other than having to use persistence annotations in the Entities, which doesn't really follow the PI principle? I mean, is it really an issue with Spring Data - it provides nice repositories which do what repositories in DDD should do. The problem is then with the Entities themselves...
The harder option
In order to make an Entity unaware of where the data it operates on came from, it is natural to inject that data as an interface through the constructor. Another advantage is that we could always perform lazy loading - which we have by default in the Neo4j graph database, for instance. The drawback is that Aggregates (which are composed of Entities) will be totally aware of all the data even if they don't use it - possibly this could lead to debugging difficulties, as the data is totally exposed (DAOs would be hierarchical, just like Aggregates). This would also force us to use adapters for the repositories, as they don't store real Entities anymore... and any translation is ugly... Another thing is that we cannot instantiate an Entity without such a DAO - though there could be in-memory implementations in the domain... again, more layers. Some say that injecting DAOs breaks PI too.
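A sketch of what this option looks like in code (names are illustrative):

// The data the entity operates on, abstracted behind an interface.
interface OrderData {
    String status();
    void status(String newStatus);
}

// The entity never knows whether the data is in memory or lazily loaded
// from a database: it only sees the interface injected via the constructor.
public class Order {

    private final OrderData data;

    public Order(OrderData data) {
        this.data = data;
    }

    public void cancel() {
        if ("SHIPPED".equals(data.status())) {
            throw new IllegalStateException("shipped orders cannot be cancelled");
        }
        data.status("CANCELLED");
    }
}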
The hardest option
The Entity could be wrapped in a lazy-loader which decides where the data should come from. It could be both in-memory and in-database, and it could handle any operations which need transactions and so on. A complex layer, though, but it might be generic to some extent perhaps...? Have a read about it here
Do you know any other solution? Or maybe I'm missing something in mentioned ones. Please share your thoughts!
I achieve persistence ignorance (almost) for free, as a side effect of proper domain modeling.
In particular:
if you correctly define each context's boundary, you will obtain small entities without any need for lazy loading (which actually becomes an anti-pattern/code smell in a DDD project)
if you can't simply use SQL in your repository, map a set of DTOs to your db schema, and use them in factories to initialize entity classes.
In DDD projects, persistence ignorance is relevant for the domain model itself, not for repositories, factories and other application code. Indeed, you are very unlikely to change the ORM and/or the DB in the future.
The only (but very strong) rationale behind persistence ignorance of the domain model is separation of concerns: in the domain model you should express business invariants only! Persistence is an infrastructural concern!
For example, without persistence ignorance (and with lazy loading) the domain model has to handle possible exceptions from the db; its complexity grows and business rules get buried under technological details.
Personally I find it near impossible to achieve a clean domain model when trying to use the same entities as the ORM.
My solution is to model my domain entities as I see fit and ensure that any ORM entities don't leak outside of the repositories. This means that my repositories accept and return domain entities.
This means you lose "most of your ORM goodness" and end up "using your ORM for simple CRUD operations".
Both of these trade-offs are fine for me; I would rather have a clean domain model that I can use than one polluted with artefacts from my DB or ORM. It also cuts the amount of time I spend "wrestling with my ORM" down to zero.
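In code, the boundary looks roughly like this (a sketch: Customer is the domain entity, CustomerRecord a hypothetical ORM-mapped class, and CustomerRepository the domain-facing interface):

import javax.persistence.EntityManager;

public class JpaCustomerRepository implements CustomerRepository {

    private final EntityManager em;

    public JpaCustomerRepository(EntityManager em) {
        this.em = em;
    }

    // ORM entities (CustomerRecord) never leave this class: the repository
    // accepts and returns domain entities only.
    public Customer findById(long id) {
        CustomerRecord record = em.find(CustomerRecord.class, id);
        return new Customer(record.getId(), record.getName());
    }

    public void save(Customer customer) {
        em.merge(new CustomerRecord(customer.getId(), customer.getName()));
    }
}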
As a side-note, I find document databases a much better fit for DDD.
Once you provide persistence mapping in your domain model:
your code depends on the framework. If you decide to change that framework, you have to change both the persistence layer and the model layer source code - more work, more changes, more merging of code, etc.
your domain model jar file depends on spring/nhibernate jars etc.
your classes become larger and larger as business code and persistence-related code grow
I have to admit that I don't understand the harder and hardest options.
We used separate interfaces and implementations for domain entities, and provided separate Hibernate mapping files along with the repositories.
Entities are created using a factory (or a repository later); the identifier is generated within the persistence layer, and the entity does not need it until it is persisted.
Lazy loading is provided by a special implementation of List (sketched below) once:
the mapping of an entity contains it
the entity/aggregate is fetched from the persistence layer
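Such a list can be as simple as a delegate initialized on first access, for example (an illustrative sketch, not our actual implementation):

import java.util.AbstractList;
import java.util.List;
import java.util.function.Supplier;

public class LazyList<T> extends AbstractList<T> {

    private final Supplier<List<T>> loader; // e.g. a query against the persistence layer
    private List<T> delegate;

    public LazyList(Supplier<List<T>> loader) {
        this.loader = loader;
    }

    // The loader runs on first access only.
    private List<T> delegate() {
        if (delegate == null) {
            delegate = loader.get();
        }
        return delegate;
    }

    public T get(int index) { return delegate().get(index); }
    public int size() { return delegate().size(); }
}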
The only issue is related to transactions: when you use a lazy-loaded collection outside the transaction scope, it fails.
I would follow the simplest option unless I ran into a stone wall. There are also pitfalls such as this one when you adopt the PI principle.
Sometimes some compromises are acceptable:
public class Order {
    private String status; // my ORM does not support enums, so store a String

    public Status status() {
        return Status.of(this.status);
    }

    public boolean is(Status status) {
        // use status() instead of getStatus() in the domain model
        return status() == status;
    }
}
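Status itself is assumed here to be a small enum with a lookup factory, something like:

public enum Status {
    NEW, PAID, SHIPPED;

    // Maps the string stored by the ORM back to the enum constant.
    public static Status of(String value) {
        return valueOf(value);
    }
}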
I am developing a small (but growing) Java EE project based on EJB 3.1, JSF 2, CDI (Weld) and JPA 2, deployed on JBoss AS 7.1 Beta1.
As a starting point I created a Maven project based on the Knappsack Maven archetypes.
My architecture is basically the same as provided by the archetype, and as my project grows, this archetype seems to be reaching its limits. I want to modify the basic idea of the archetype according to my needs. But let me first explain how the project is organized at the moment.
The whole project is built around Seam-like Home classes. The view references them (via EL in xhtml templates). Most of the Home classes are @Named and @RequestScoped (or, in short, @Model) or @ConversationScoped, and Enterprise JavaBeans are injected (@Inject). Basically these (normally @Local) EJBs are responsible for the database access (some kind of DAOs), to get transactions managed automatically by the container. So every DAO class has its own EntityManager injected via CDI. At the moment every DAO integrates aspects which logically belong together (e.g. there is a SchoolDao in the archetype which is responsible for creating Teachers, Students and Courses).
This of course results in growing DAOs which have no well-defined task and which become hard to maintain and hard to understand. And as a painful side effect, the risk of duplicated code grows.
As a consequence I want to break up this design by having DAOs which are each responsible for one specific task (a StudentDao, a TeacherDao, and so on). And at this point I am in trouble. As each DAO has a reference to its own EntityManager, it cannot be guaranteed that something like the following will work (I think it never will :)
Teacher teacher = teacherDao.find(teacherId);
course.setTeacher(teacher);
courseDao.save(course);
The JPA implementation complains about a null value for column COURSE.TEACHER_ID (assuming Course has a non-nullable FK relationship to Teacher). Each DAO holds its own EntityManager; the teacher is managed by the one in the TeacherDao, but the other one in the CourseDao tries to merge the Course @Entity.
Maybe the archetype I used is not suitable for larger applications. But what would be an appropriate design for such an application then, IF the technologies I used are obligatory (EJB 3.1 for container-managed transactions [and later on other business-related stuff], JSF as the view technology, JPA as the database mapper, and CDI as the 'must have because it's hip' :)?
Edit:
I now have an EntityManager injected in the base class all other DAO classes inherit from. So all DAOs use the same instance (the debugger shows the same object id), but I still have the problem that all entities I read from the database are immediately detached. This makes me wonder, as it means that either there is no container-managed transaction or the transaction gets closed immediately after the entity is read. Each DAO is a @Local @Stateless EJB. They are injected into my JSF beans (@Named and @RequestScoped), from where I want to make use of the CRUD operations. Is there anything I'm missing?
Having each DAO have its own EntityManager is a very bad design.
You should have an EntityManager per transaction/request and pass it to each DAO, have them share the same one, or obtain it from the context.
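With container-managed transactions you usually don't pass the EntityManager around by hand: letting each @Stateless DAO inject the same persistence unit via @PersistenceContext achieves the sharing, because within one JTA transaction the container hands all of them the same persistence context. A sketch (the unit name and Course class are illustrative):

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CourseDao {

    // Container-managed: all DAOs injecting this unit see the same
    // persistence context while the same JTA transaction is active.
    @PersistenceContext(unitName = "myUnit")
    private EntityManager em;

    public void save(Course course) {
        em.persist(course);
    }
}

Note that if a JSF bean calls teacherDao.find(...) and courseDao.save(...) as two separate EJB calls, each call gets its own transaction by default (REQUIRED), so the teacher is detached by the time the course is saved; that also explains the behavior described in the Edit. Wrapping the whole use case in a single method of a service EJB keeps the entities managed for its duration.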