JPA Entity in JSF vs tight coupling

I have a follow-up question to the answer provided in the post below:
JPA Entity as JSF Bean?
I know this is the practice used in JSF applications, but I fail to understand how the approach provided in that post eliminates tight coupling. Could someone please explain?

The view is not really decoupled from its underlying model, as its purpose is to display the internals of that model.
However, it makes a difference what this view model is and how it is attached to the view, and there are some gains in not backing the view with a persistent entity directly.
The backing bean often has to perform service calls or at least CRUD operations. Neither should be a concern of a JPA entity. The controller might also be useful for validation, access restrictions, caching, etc.
A JPA entity usually does not control its own lifecycle. Making it a backing bean would probably require the entity to control its own transactions, at least to handle CRUD operations. This could work with an ActiveRecord implementation, but IMHO JPA entities do not lend themselves to the AR pattern too well.
In Java EE applications, the DTO pattern is often used to decouple the persistence layer from the view, from other external systems, or even from all layers of the application above the DAL. Entities are mapped to DTOs with more specific purposes, e.g. presenting a relevant selection of attributes from different entities to the frontend.
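To make that last point concrete, here is a minimal sketch (the entity and field names are hypothetical) of mapping a JPA entity to a purpose-specific DTO for the frontend:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical JPA entity (names are illustrative)
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private String email;
    private String internalNotes; // not something the frontend should see

    public Long getId() { return id; }
    public String getName() { return name; }
    // remaining getters/setters omitted for brevity
}

// Purpose-specific DTO exposing only what the view needs
class CustomerSummaryDto {

    private final Long id;
    private final String name;

    CustomerSummaryDto(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Mapping happens in a service or controller, keeping the entity out of the view
    static CustomerSummaryDto fromEntity(Customer customer) {
        return new CustomerSummaryDto(customer.getId(), customer.getName());
    }

    public Long getId() { return id; }
    public String getName() { return name; }
}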
EDIT: If I have understood your issue correctly, you have an application where the getters of backing beans perform DB operations and cause your application to be slow.
This is in fact a bad practice. The JSF lifecycle causes these getters to be called multiple times, each of the calls resulting in an unnecessary DB call.
The common practice is to fetch the complete data from the DB after the backing bean has been constructed and before the data is displayed initially (effectively caching the data for the lifetime of the backing bean). The data should be refetched from the DB only if the user explicitly refreshes it.
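A minimal sketch of that practice, assuming a CDI view-scoped bean and a hypothetical OrderService with a findAll() method:

import java.io.Serializable;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.faces.view.ViewScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@ViewScoped
public class OrderBean implements Serializable {

    @Inject
    private OrderService orderService; // hypothetical service/EJB

    private List<Order> orders; // Order is a placeholder entity/DTO type

    @PostConstruct
    public void init() {
        // single DB round trip, cached for the lifetime of the bean
        orders = orderService.findAll();
    }

    public List<Order> getOrders() {
        // called several times per JSF lifecycle, but never hits the DB
        return orders;
    }

    public void refresh() {
        // only an explicit user action refetches the data
        orders = orderService.findAll();
    }
}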
Please see this post for further explanation.

Related

MVC Business Logic + DAL + Mapping ViewModel with EF Model

I'm starting to develop an application, and I want to know the best practices for organizing the architecture of the solution.
Should I use the EF class model as my ViewModel?
Should I put all my queries and DB access in the model, or create a service to manage all DB concerns?
I'm using EF with Database First, because the DB is already developed.
Thanks!
There are much more complete descriptions of application architecture out there, but here's the $.25 description.
EF Class Models are for your communication with a data store.
Data Transfer Objects (DTOs) are how modules communicate among themselves (WebAPI to MVC, etc.).
ViewModels supply the data your UI requires.
Look up "Separation of Concerns" as it pertains to application architecture; it can save your butt. Often, developers will dual-purpose these entities, leading to some hilarious results when you find you've painted yourself into a corner. Not so funny if you are the "painter".
On the other hand, keeping these models separate requires extra effort, and the mappings take CPU cycles. Here's a concrete example:
A WebAPI accesses a People entity (EF class), maps it to a PeopleDTO (not all fields, maybe additional information), and returns this to your MVC controller. The MVC controller takes the PeopleDTO and merges it with supporting lookup tables (more WebAPI calls) to create a PeopleVM (ViewModel) that is used by your Razor page.
In the scenario I just outlined, there are three different types of People object, but each could have very different contents depending on the needs of that "layer". Lots of tools exist to make the mapping less painful.
Clear?

POCO entities in a 2-tier WPF application

I need to retrieve a large graph of entities, manipulate it in the UI (adds, updates, deletes), then persist it all back to the database. After various SO questions and experiments, I'm finding this mass "detached graph update" approach to be very problematic, so I'm now rethinking my approach.
It's only a 2-tier WPF app, so I'm now thinking of having a long-running context that exists for the duration of the UI used to manipulate the entity graph - that way it can track changes automatically. However I'm not sure how to approach this architecturally.
The application currently has three projects - the UI, business tier, and one for the edmx & generated entities. My business tier has a CustomerManager class that exposes a method to retrieve a Customer graph (orders, order lines, etc.), and a method to persist the Customer graph. Assuming that the UI holds on to the same instance of the CustomerManager class, and therefore the same context, changes to the graph (adding and changing entities) will be tracked.
Deleting an entity is a bit trickier, as the context must be used to do this, i.e.:
context.Set<Order>().Remove(orderToDelete);
Looking for some architectural advice really. Do I just expose a DeleteOrder method in my CustomerManager class that does this? Given that I have a dozen other entity types, I would presumably need to expose similar methods to delete orders, products, etc.
Is it a sensible approach for the UI to hold on to the same CustomerManager instance, or is there a better way to manage a long-running context? A logical place for the DeleteOrder method would be in my Customer entity (partial) class, but as these classes are in a separate project from the business tier (which is where the context resides), I guess I can't do this (unless I pass the context to the DeleteOrder method)?
Your long-living context idea will work only if the context lives in the UI and the UI talks to the database directly to get and persist data. Involving WCF between your UI and the context always results in serialization, which detaches the entities (i.e. changes are no longer tracked, unless you use STEs). Having a long-living context in a WCF service is too problematic and in general a bad practice.
Have you considered WCF Data Services? They provide client-side tracking to some extent by using a special client-side context.

What specific issue does the repository pattern solve?

(Note: My question raises very similar concerns to this question asked three months ago, but that one was never answered.)
I recently started working with MVC3 + Entity Framework and I keep reading that the best practice is to use the repository pattern to centralize access to the DAL. This is also accompanied by explanations that you want to keep the DAL separate from the domain and especially the view layer. But in the examples I've seen, the repository is (or appears to be) simply returning DAL entities, i.e. in my case the repository would return EF entities.
So my question is, what good is the repository if it only returns DAL entities? Doesn't this add a layer of complexity that doesn't eliminate the problem of passing DAL entities around between layers? If the repository pattern creates a "single point of entry into the DAL", how is that different from the context object? If the repository provides a mechanism to retrieve and persist DAL objects, how is that different from the context object?
Also, I read in at least one place that the Unit of Work pattern centralizes repository access in order to manage the data context object(s), but I don't grok why this is important either.
I'm 98.8% sure I'm missing something here, but from my readings I didn't see it. Of course I may just not be reading the right sources... :\
I think the term "repository" is commonly thought of in the way the "repository pattern" is described by the book Patterns of Enterprise Application Architecture by Martin Fowler.
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.
On the surface, Entity Framework accomplishes all of this, and can be used as a simple form of a repository. However, there can be more to a repository than simply a data layer abstraction.
According to the book Domain-Driven Design by Eric Evans, a repository has these advantages:
They present clients with a simple model for obtaining persistent objects and managing their life cycle
They decouple application and domain design from persistence technology, multiple database strategies, or even multiple data sources
They communicate design decisions about object access
They allow easy substitution of a dummy implementation, for unit testing (typically using an in-memory collection).
The first point roughly equates to the paragraph above, and it's easy to see that Entity Framework itself easily accomplishes it.
Some would argue that EF accomplishes the second point as well. But commonly EF is used simply to turn each database table into an EF entity and pass it through to the UI. It may be abstracting the mechanism of data access, but it's hardly abstracting away the relational data structure behind the scenes.
In simpler applications that are mostly data-oriented, this might not seem to be an important point. But as an application's domain rules / business logic become more complex, you may want to be more object-oriented. It's not uncommon that the relational structure of the data contains idiosyncrasies that aren't important to the business domain, but are side effects of the data storage. In such cases, it's not enough to abstract the persistence mechanism; you also need to abstract the nature of the data structure itself. EF alone generally won't help you do that, but a repository layer will.
As for the third advantage, EF will do nothing (from a DDD perspective) to help. Typically DDD uses the repository not just to abstract the mechanism of data persistence, but also to provide constraints around how certain data can be accessed:
We also need no query access for persistent objects that are more convenient to find by traversal. For example, the address of a person could be requested from the Person object. And most important, any object internal to an AGGREGATE is prohibited from access except by traversal from the root.
In other words, you would not have an 'AddressRepository' just because you have an Address table in your database. If your design chooses to manage how the Address objects are accessed in this way, the PersonRepository is where you would define and enforce the design choice.
Also, a DDD repository would typically be where certain business concepts relating to sets of domain data are encapsulated. An OrderRepository may have a method called OutstandingOrdersForAccount which returns a specific subset of Orders, or a CustomerRepository may contain a PreferredCustomerByPostalCode method.
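As a rough illustration of such a repository contract (sketched in Java purely for neutrality; Order, OrderId and AccountId are hypothetical domain types, and only the OutstandingOrdersForAccount idea comes from the paragraph above):

import java.util.List;
import java.util.Optional;

// Illustrative DDD-style repository interface
public interface OrderRepository {

    Optional<Order> findById(OrderId id);

    // Encapsulates a business concept instead of exposing a raw query
    List<Order> outstandingOrdersForAccount(AccountId accountId);

    void add(Order order);

    void remove(Order order);
}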
Entity Framework's DbContext classes don't lend themselves well to such functionality without the added repository abstraction layer. They do work well for what DDD calls Specifications, which can be simple boolean expressions sent to a method that evaluates the data against the expression and returns any matches.
As for the fourth advantage, while I'm sure there are certain strategies that might let one substitute for the DbContext, wrapping it in a repository makes it dead simple.
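For instance, a test double for the repository interface sketched above might look like this (again a hypothetical Java sketch; getId(), getAccountId() and isOutstanding() are assumed accessors on the hypothetical Order type):

import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// In-memory fake of the OrderRepository above, usable in unit tests without a database
public class InMemoryOrderRepository implements OrderRepository {

    private final Map<OrderId, Order> store = new ConcurrentHashMap<>();

    @Override
    public Optional<Order> findById(OrderId id) {
        return Optional.ofNullable(store.get(id));
    }

    @Override
    public List<Order> outstandingOrdersForAccount(AccountId accountId) {
        return store.values().stream()
                .filter(o -> o.getAccountId().equals(accountId) && o.isOutstanding())
                .collect(Collectors.toList());
    }

    @Override
    public void add(Order order) {
        store.put(order.getId(), order);
    }

    @Override
    public void remove(Order order) {
        store.remove(order.getId());
    }
}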
Regarding 'Unit of Work', here's what the DDD book has to say:
Leave transaction control to the client. Although the REPOSITORY will insert into and delete from the database, it will ordinarily not commit anything. It is tempting to commit after saving, for example, but the client presumably has the context to correctly initiate and commit units of work. Transaction management will be simpler if the REPOSITORY keeps its hands off.
Entity Framework's DbContext basically resembles a Repository (and a Unit of Work as well). You don't necessarily have to abstract it away in simple scenarios.
The main advantage of the repository is that your domain can stay ignorant and independent of the persistence mechanism. In a layer-based architecture, the dependencies point from the UI layer down through the domain layer (usually called the business logic layer) to the data access layer. This means the UI depends on the BLL, which itself depends on the DAL.
In a more modern architecture (as promoted by domain-driven design and other object-oriented approaches) the domain should have no outward-pointing dependencies. This means the UI, the persistence mechanism and everything else should depend on the domain, and not the other way around.
A repository will then be represented through its interface inside the domain but have its concrete implementation outside the domain, in the persistence module. This way the domain depends only on the abstract interface, not the concrete implementation.
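A minimal sketch of that dependency direction, using Java/JPA purely for illustration (Customer and the repository names are hypothetical; the same shape applies with C# interfaces and EF):

import javax.persistence.EntityManager;

// Domain module: only the abstraction lives here
interface CustomerRepository {
    Customer findById(long id);
    void save(Customer customer);
}

// Persistence module: the concrete implementation depends on the domain
// interface, not the other way around
class JpaCustomerRepository implements CustomerRepository {

    private final EntityManager entityManager;

    JpaCustomerRepository(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    @Override
    public Customer findById(long id) {
        return entityManager.find(Customer.class, id);
    }

    @Override
    public void save(Customer customer) {
        entityManager.persist(customer);
    }
}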
That basically is object-orientation versus procedural programming on an architectural level.
See also Ports and Adapters, a.k.a. Hexagonal Architecture.
Another advantage of the repository is that you can create similar access mechanisms to various data sources. Not only to databases but to cloud-based stores, external APIs, third-party applications, etc.
You're right; in those simple cases the repository is just another name for a DAO and it brings only one value: the fact that you can switch EF to another data access technique. Today you're using MSSQL, tomorrow you'll want a cloud storage, or a micro-ORM instead of EF, or to switch from MSSQL to MySQL.
In all those cases it's good that you use a repository, as the rest of the app won't care about what storage you're using now.
There's also the limited case where you get information from multiple sources (DB + file system); a repo will act as the facade, but it's still just another name for a DAO.
A 'real' repository is valid only when you're dealing with domain/business objects; for data-centric apps which won't change storage, the ORM alone is enough.
It would be useful in situations where you have multiple data sources, and want to access them using a consistent coding strategy.
For example, you may have multiple EF data models, and some data accessed using traditional ADO.NET with stored procs, and some data accessed using a 3rd party API, and some accessed from an Access database living on a Windows NT4 server sitting under a blanket of dust in your broom closet.
You may not want your business or front-end layers to care about where the data is coming from, so you build a generic repository pattern to access "data", rather than to access "Entity Framework data".
In this scenario, your actual repository implementations will be different from each other, but the code that calls them wouldn't know the difference.
Given your scenario, I would simply opt for a set of interfaces that represent what data structures (your Domain Models) need to be returned from your data layer. Your implementation can then be a mixture of EF, raw ADO.NET, or any other type of data store/provider. The key strategy here is that the implementation is abstracted away from the immediate consumer - your Domain layer. This is useful when you want to unit test your domain objects and, in less common situations, change your data provider / database platform altogether.
You should, if you haven't already, consider using an IoC container, as they make loose coupling of your solution very easy by way of Dependency Injection. There are many available; personally I prefer Ninject.
The domain layer should encapsulate all of your business logic - the rules and requirements of the problem domain, and can be consumed directly by your MVC3 web application. In certain situations it makes sense to introduce a services layer that sits above the domain layer, but this is not always necessary, and can be overkill for straightforward web applications.
Another thing to consider is that even when you know you will be working with a single data store, it still might make sense to create a repository abstraction. The reason is that there might be a feature your application needs that your ORM du jour either does badly (performance-wise), doesn't do at all, or that you just don't know how to make the ORM bend to.
If you are wrapping your ORM behind a well-thought-out repository interface, you can easily switch between different technologies as you see fit. It's not uncommon in my repositories to see some methods use EF for their work and others use something like PetaPoco, or (gasp) ADO.NET code. The repository abstraction enables you to use exactly the right tool for the job at hand without leaking these complexities into the client code.
I think there is a big misunderstanding of what many articles call a "repository," and that's why there are doubts about what real value those abstractions bring.
In my opinion the repository in its pure form is an IEnumerable, while you and many articles are talking about a "data access service."
I've blogged about it here.

EF + WCF in three-layered application with complex object graphs. Which pattern to use?

I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code generation template, and we are having a lot of trouble with tracking the state of our entities.
The user interfaces on the client are also fairly complex; sometimes an object graph with more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. It is an important goal to be able to easily decide in the BLL which of the objects have been modified on the client, and which objects have been newly created.
What would be the clearest approach to manage entities and entity states between the two layers? Self-tracking entities (STEs)? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment share their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them) but I spent some time playing with them. The main pitfalls I found are:
Coupling your data layer with your client application - you must share the entity assembly between projects (it also means it is a .NET-only solution, but that should not be a problem in your case)
Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you will pass all 50 entities back. It will require some fighting with STEs to avoid passing unnecessary data
Unnecessary updates to the database - normally when EF works with attached entities it tracks changes at the property level, but with STEs it tracks changes at the entity level. So if the user modifies a single property in an entity with 100 properties, it will generate an update setting all of them. Avoiding this requires modifying the template and adding property-level change tracking
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to implement code which moves data from the UI back to the self-tracking entity and modifies its state
They are not proxied = they don't support lazy loading (in the case of a WCF service this is good behavior)
I described a way to solve this without STEs today. There is also a related question about tracking over web services (check @Richard's answer and the provided links).
We have developed a layered application with STEs: a user interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer and the data layer with Entity Framework.
When I first read about STEs, the documentation said that they are easier than using custom DTOs. They were supposed to be the 'quick and easy way', and only on really big projects should you use hand-written DTOs.
But we've run into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example in a master-detail view), and so from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs wouldn't work without WCF. The change tracking is activated when the entities are serialized. We had originally designed an architecture where WCF could be disabled and the service calls would just be in-process (this was a requirement for our unit tests, which would run a lot faster without WCF and be easier to set up). It turned out that STEs are not the right choice for this.
I've also noticed that developers sometimes included a lot of data in their query and just sent it to the client instead of really thinking about which data they needed.
After this project we decided to use custom DTOs with AutoMapper from server to client and use the POCO template in our data layer in a new project.
So since you already state that your project is big, I would opt for custom DTOs and service functions that are specifically created for one goal, instead of 'Update(Person person)' functions that send a lot of data.
Hope this helps :)

Do Domain Classes usually get JPA or JAXB Annotations or both?

I have a Java enterprise application that provides a web service, a domain layer, and a Hibernate persistence layer. In this particular case, there is not a huge difference (at the moment) between the objects I send over the wire, the domain objects, and the persistence objects.
Currently the application uses DTOs on the persistence side and annotates the domain classes with JAXB annotations. However, the more I read and think about it, the more this seems backwards! (Not to mention there is a lot of code to support the mindless back and forth between the DTOs and the domain objects.) It seems like most architects suggest putting JPA annotations on the domain model and creating DTOs for sending objects over the wire.
In my case, could I put both the JAXB and JPA (Hibernate) annotations on my domain classes?
The thought of keeping my web service facade, my domain, and my persistence all tightly bundled together seems easy to maintain, but it does concern me, as these may need to change over time. Or would it be smarter to create a set of DTO classes for the web service side and skip the DTOs for the persistence side?
There's no functional reason not to annotate the same class with both JPA and JAXB annotations; I've done it myself on occasion. It does become a bit hard to read, though, and sometimes you want different class design trade-offs with JAXB and JPA. In my experience, these trade-offs usually mean you end up with two class models.
I agree that using the same model classes is the right approach. If you are concerned about annotation clutter, you could use a JAXB implementation (such as EclipseLink JAXB) that provides a mechanism for externalizing the metadata:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
Also, since you are using a JPA model, EclipseLink JAXB (MOXy) has extensions for making this easier:
http://bdoughan.blogspot.com/2010/07/jpa-entities-to-xml-bidirectional.html
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/JPA
Here is an example of using one model with JAXB & JPA to create a RESTful service:
Part 1 - The database
Part 2 - JPA entities
Part 3 - Mapping entities to XML using JAXB
Part 4 - The RESTful service
Part 5 - The client
There is no problem in using both annotations on the same class. I even tend to encourage this, because then you don't have to copy and paste when changes occur.
In some cases, some properties differ in behaviour - for example, an auto-generated ID might not need to be marshalled. @XmlTransient and @Transient are then combined. It does become a bit hard to read, but not too hard, because it is obvious what all the annotations mean.
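For example, a minimal sketch (field names are hypothetical) of a class carrying both sets of annotations might look like this:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Transient;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlTransient;

// Illustrative domain class annotated for both JPA and JAXB
@Entity
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @XmlTransient            // persisted, but not marshalled to XML
    private Long id;

    private String name;     // persisted and marshalled

    @Transient               // marshalled, but not persisted
    private String displayLabel;

    // getters/setters omitted for brevity
}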
Is anyone else tempted to put Atom link objects in their persisted domain because they have committed to defining their web service XML structure there? It seems strange to me to do this. HATEOAS links seem like a good idea, but the persisted domain and the service implementation (not the web service) have no interest in Atom links. Then again, using XML annotations and having Jersey serialize my domain for me certainly is convenient. Another downside of this approach, though, is that it is too easy to impact your web service consumers at runtime with persistence domain "layer" refactoring.
I know this question is a bit old, but I thought I'd weigh in anyway since this is an issue I've recently come across. My recommendation would be to leave your JAXB-annotated classes alone, since any schema change will require regenerating those classes, meaning you would have to re-enter any Hibernate annotations, etc. manually. This may be a slightly outdated solution, but I think it would be perfectly reasonable to create a Hibernate mapping file (.hbm.xml) to house the mappings externally. This is a little more flexible, less cluttered, and just as useful in my opinion.