How to decouple a data core domain from a REST domain?

I am curious about the preferred way to decouple a core domain entity from the entity served by a REST layer.
I saw in this enlightening Spring REST tutorial http://spring.io/guides/tutorials/rest/1/ that it is a good thing not to expose the core domain model directly in the REST layer, as the REST representation should evolve independently of the core domain model.
The core service handles and produces events; these events are seen as the communication ports of the application. The core service does not see any REST domain entity, and the REST controller does not see any core domain entity.
To make things simpler, let's consider the example of only one entity, the Order entity.
The tutorial shows how a REST domain Order class is passed in a REST request to a controller. The controller turns it into a core OrderDetails entity, which is used to create a CreateOrderEvent; that event is passed to a service, which returns an OrderCreatedEvent. The controller finally creates a REST domain Order entity from the returned event and sends it in the response.
We can see that for this one entity there is one class for the core domain entity, one class for the REST domain entity, and one class for the event payload entity.
Also, we can see that the events, sitting in the application core, extend base events whose names are strongly reminiscent of the HTTP methods. It is a bit surprising to see this REST-like vocabulary seeping into the application core, when what we are trying to do in the first place is to decouple the application core from the REST layer.
Any thoughts on this design, or on an alternative design? Is there a preferred way of achieving this decoupling?
Thanks for any suggestion.
Kind Regards,
Stephane
One additional question I now have...
Should I go for a NotFoundException exception, or for a notFoundEntity member in the event, given a REST domain decoupled from the data domain?
The event sent back to the controller can carry a notFoundEntity member whose state can be used in the controller.
Here is the event's notFoundEntity logic:
protected boolean notFoundEntity = false;

public boolean isNotFoundEntity() {
    return notFoundEntity;
}

public static OneAdminEvent notFound(Long id) {
    OneAdminEvent oneAdminEvent = new OneAdminEvent(id);
    oneAdminEvent.notFoundEntity = true;
    return oneAdminEvent;
}
The service sets the event member depending on whether the entity was found:
Admin admin = adminRepository.findOne(deleteAdminEvent.getId());
if (admin == null) {
    return AdminDeletedEvent.notFound(deleteAdminEvent.getId());
}
In the controller, a call checks whether the entity was found:
if (adminDeletedEvent.isNotFoundEntity()) {
}
This is in line with the decoupling design.
But I'm not sure the decoupling event should carry this information. This information can be seen as an exception: a custom business exception.
Also, using an exception makes it possible to specify a rollback attribute in the transactional annotation:
@Transactional(rollbackFor = NotFoundException.class)
With an exception, the only not-found logic left is in the service; the event does not contain any.
The service now looks like:
Admin admin = adminRepository.findOne(deleteAdminEvent.getId());
if (admin == null) {
    throw new NotFoundException("No admin was found with the id " + deleteAdminEvent.getId());
}
What rule of thumb should I use to decide when to use a member state in the event and when to use a custom business exception?

It would be hard to decouple the REST domain and core domain layers of this example application any further. Not only have the REST (a.k.a. "view") objects been cleanly separated from the core (a.k.a. "domain") objects, but their direct communication has also been decoupled via an internal event-driven architecture. The reason the core events remind you so strongly of HTTP methods is probably due more to the simplicity of the example's use cases than to necessity or design. Such can be the peril of examples. :)
While this is certainly a sound way to layer an application, the real question is whether it's necessary for your particular scenario. If your application will be very data-oriented (e.g., a data entry system with few business rules), you might not need a separate set of REST domain objects (much the same way you might decide you don't need a separate layer of "view" objects in a traditional MVC application). You could take the Spring Data REST approach (see Oliver Gierke's RESTBucks sample app). Then again, if you have some heavy business logic in the core and want to create a rich domain model, you're probably better off with a more decoupled architecture.

Related

Entity Framework and DDD - Load required related data before passing entity to business layer

Let's say you have a domain object:
class ArgumentEntity
{
    public int Id { get; set; }
    public List<AnotherEntity> AnotherEntities { get; set; }
}
And you have an ASP.NET Web API controller to deal with it:
[HttpPost("{id}")]
public IActionResult DoSomethingWithArgumentEntity(int id)
{
    ArgumentEntity entity = this.Repository.GetById(id);
    this.DomainService.DoSomething(entity);
    ...
}
It receives an entity identifier, loads the entity by id, and executes some business logic on it with the domain service.
The problem:
The problem here is with related data. ArgumentEntity has an AnotherEntities collection that will be loaded by EF only if you explicitly ask for it via the Include/Load methods.
DomainService is part of the business layer and should know nothing about persistence, related data, and other EF concepts.
The DoSomething service method expects to receive an ArgumentEntity instance with the AnotherEntities collection loaded.
You might say it's easy: just Include the required data in Repository.GetById and load the whole object with its related collection.
Now let's come back from the simplified example to the reality of a large application:
ArgumentEntity is much more complex. It contains multiple related collections, and those related entities have their own related data too.
You have multiple methods on DomainService. Each method requires a different combination of related data to be loaded.
I could imagine possible solutions, but all of them are far from ideal:
Always load the whole entity -> but it is inefficient and often impossible.
Add several repository methods (GetByIdOnlyHeader, GetByIdWithAnotherEntities, GetByIdFullData) to load specific data subsets in the controller -> but the controller becomes aware of which data to load and pass to each service method.
Add several repository methods (GetByIdOnlyHeader, GetByIdWithAnotherEntities, GetByIdFullData) to load specific data subsets in each service method -> it is inefficient: an SQL query for each service method call. What if you call 10 service methods for one controller action?
Have each domain method call a repository method to load the additional required data (e.g. EnsureAnotherEntitiesLoaded) -> it is ugly because my business logic becomes aware of the EF concept of related data.
The question:
How would you solve the problem of loading the required related data for the entity before passing it to the business layer?
In your example I can see the method DoSomethingWithArgumentEntity, which obviously belongs to the Application Layer. This method has a call to the Repository, which belongs to the Data Access Layer. I think this situation does not conform to a classic Layered Architecture: you should not call the DAL directly from the Application Layer.
So your code can be rewritten in another manner:
[HttpPost("{id}")]
public IActionResult DoSomethingWithArgumentEntity(int id)
{
    this.DomainService.DoSomething(id);
    ...
}
In the DomainService implementation you can read from the repository whatever is needed for this specific operation. This avoids your troubles in the Application Layer. In the Business Layer you will have more freedom to implement the reading: with several repository methods that each read a partially loaded entity, with EnsureXXX methods, or something else. Knowledge about what needs to be read for an operation is placed in that operation's code, and you no longer need this knowledge in the app layer.
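A minimal sketch of this suggestion, assuming a hypothetical IArgumentEntityRepository with purpose-built read methods (the repository name and its methods are illustrative, not from the original post):
// Hypothetical repository abstraction with an operation-specific read method.
public interface IArgumentEntityRepository
{
    ArgumentEntity GetByIdWithAnotherEntities(int id);
}

public class DomainService
{
    private readonly IArgumentEntityRepository repository;

    public DomainService(IArgumentEntityRepository repository)
    {
        this.repository = repository;
    }

    public void DoSomething(int id)
    {
        // This operation knows it needs AnotherEntities, so it asks for exactly that.
        ArgumentEntity entity = repository.GetByIdWithAnotherEntities(id);

        // ... business logic working on the loaded entity ...
    }
}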
Every time a situation like this emerges, it is a strong signal that your entity is not properly designed. As krzys said, the entity does not have cohesive parts. In other words, if you often need parts of an entity separately, you should split that entity.
Nice question :)
I would argue that "related data" in itself is not a strict EF concept. Related data is a valid concept with NHibernate, with Dapper, or even if you use files for storage.
I agree with the other points mostly, though. So here's what I usually do: I have one repository method, in your case GetById, which has two parameters: the id and a params Expression<Func<T,object>>[]. Then, inside the repository, I do the includes. This way you don't have any dependency on EF in your business logic (the expressions can be parsed manually for another type of data storage framework if necessary), and each BLL method can decide for itself what related data it actually needs.
public async Task<ArgumentEntity> GetByIdAsync(int id, params Expression<Func<ArgumentEntity, object>>[] includes)
{
    IQueryable<ArgumentEntity> baseQuery = ctx.ArgumentEntities; // ctx is a reference to your context
    foreach (var include in includes)
    {
        baseQuery = baseQuery.Include(include);
    }
    return await baseQuery.SingleAsync(a => a.Id == id);
}
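For what it's worth, a business-layer method could then request exactly the related data it needs. This is illustrative usage only; the repository field and method names are assumptions:
public async Task DoSomethingAsync(int id)
{
    // "repository" is assumed to be an injected instance of the repository shown above.
    var entity = await repository.GetByIdAsync(id, e => e.AnotherEntities);

    // ... business logic that relies on AnotherEntities being loaded ...
}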
Speaking in the context of DDD, it seems that you have missed some modeling aspects in your project, which led you to this issue. The entity you wrote about does not look highly cohesive. If different related data is needed for different processes (service methods), it seems like you haven't found the proper Aggregates yet. Consider splitting your entity into several Aggregates with high cohesion. Then every process correlated with a particular Aggregate will need all, or most of, the data that the Aggregate contains.
So I don't know the answer to your question, but if you can afford to take a few steps back and refactor your model, I believe you will not encounter such problems.

Entity Framework + Web API, return Entities (Complex, collections, etc) outside DbContext

Here is my situation: I use Entity Framework 4 with the Web API.
The structure of my code is quite simple: I have a service layer where all my REST API is organized, a business logic layer where business controllers manage transactions between the REST calls and the data layer, and finally a data layer with generic repositories and a DAO to access the whole thing.
In my business controllers, I use a using block with either a non-transactional (read-only methods) or a transactional (CRUD methods) DbContext.
When returning values to my REST API, I serialize them into JSON.
The problem is that I keep getting this exception: Newtonsoft.Json.JsonSerializationException
I return my entities / collections / lists outside of my using{} statement, which I think EF does not like by default.
In debug mode I sometimes manage to retrieve all the data, but not every time. Since my entities come from a query within a DbContext, I think the behavior is that loaded sub-properties are no longer available after the context has been disposed of.
Fact is, I want to keep my structure as is, and I was wondering the following:
Is there a way of returning complete (not lazy-loaded) entities after leaving the using{} statement?
Thanks a lot
I actually read more about Entity Framework's behavior. What I am seeing is standard for EF: I have to force the context to Load() my referenced entities in order to get them after leaving the context.
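A minimal sketch of that fix, using hypothetical Customer/Order entities and a ShopContext (none of these names come from the question): the related collection is loaded while the context is still alive, so the detached graph serializes cleanly afterwards.
public Customer GetCustomer(int id)
{
    Customer customer;
    using (var ctx = new ShopContext()) // hypothetical DbContext
    {
        // Eager-load the related Orders while the context is still open.
        customer = ctx.Customers
                      .Include("Orders") // string overload, available in EF 4.x
                      .Single(c => c.Id == id);
    }
    // The context is disposed here, but Orders is already populated,
    // so the JSON serializer never triggers lazy loading.
    return customer;
}
Disabling proxy creation (or lazy loading) on the context is another common way to keep the serializer from touching navigation properties that were never loaded.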

Domain Driven Design issue regarding repository

I am trying to implement DDD, so I have created the following classes:
- User [the domain model]
- UserRepository [a central factory to manage the object(s)]
- UserMapper + UserDbTable [a mapper to map application functionality and provide the CRUD implementation]
My first question is: when a model needs to communicate with the persistence layer, should it contact the repository or the mapper? Personally, I am thinking that it should ask the repository, which will contact the mapper and provide the required functionality.
Now my second concern is that there should be only one repository for all the objects of the same class, which means I will be creating a singleton. But if my application has lots of domain models (let's say 20), then there will be 20 singletons, and that doesn't feel right. The other option is to use DI (dependency injection), but the framework I am using (Zend Framework 1.11) has no support for a DI container.
My third
UserRepository [a central factory to manage the object(s)]
In DDD, a Repository is not a Factory. The Repository is responsible for the middle and end of a domain object's life; the Factory is responsible for its beginning. Conceptually, persisting and restoring happen to the domain object in its middle life.
UserMapper + UserDbTable [A Mapper to map application functionality and provide the CRUD implementation]
These classes do not belong to the domain layer; this is data access. They would all be encapsulated by the repository implementation (or would not exist at all if you use an ORM).
My first question is that when a model needs to communicate with the
persistent layer should it contact the Repository or the mapper?
Personally I am thinking that it should ask the repository which will
contact the mapper and provide the required functionality.
The model does not need to communicate with the persistence layer. In fact, you should try to make your model as persistence-agnostic as possible. From the perspective of your domain model, the Repository is just an interface. The implementation of this interface belongs to a different layer: data access. The implementation is injected later, somewhere in your Application layer. The Application layer is aware of persistence and transactions. This is where you can implement the Unit of Work pattern (which also does not belong to the domain layer).
Now my second concern is that there should be only one repository for all the objects of same class, so that means I will be creating a singleton. But if my application has lots of domain models (lets say 20), then there will be 20 singletons.
First of all, you can have more than one Repository for a given domain object. This is what happens most of the time anyway, because you want to avoid 'method explosion' on your Repository interface. Secondly, a singleton Repository is a bad idea because it would couple all consumers to a single implementation, which, among other things, would make unit testing hard. Thirdly, there is nothing wrong with having 20 or more Repositories; in fact, the more focused your classes, the better (see SRP).
UPDATE:
I think that you are confusing the regular Factory pattern and the DDD Factory. In DDD terms, when an object is restored from the database it already exists conceptually (even though it is a new object in memory), so it is the responsibility of the Repository to persist and restore it. The DDD Factory comes into play when a complex domain object begins its life, whether it will be a long-lived object (saved in the db) or not.
Answering your second question: the ZF1 way would be to create a singleton per object class. You could have a factory/registry that creates these for you and returns the previously created instance when you ask for one that already exists. Alternatively, if you are using PHP 5.3, use a DI container such as Pimple or Zend\Di.

Is it double work creating a data access component and business component?

I am designing my first layered application, which consists of a Data layer, a Business layer, and a Presentation layer.
My business component (e.g. Business.Components.UserComponent) currently has the following method:
public void Create(User entity)
{
    using (DataContext ctx = new DataContext())
    {
        ctx.Users.AddObject(entity);
        ctx.SaveChanges();
    }
}
I like this design. However, I've encountered some examples online that would recommend the following implementation:
public void Create(User entity)
{
    // Instantiate the User Data Access Component
    UserDAC dac = new UserDAC();
    dac.InsertUser(entity);
}
This would result in creating a Data Access Component for every entity, each containing the basic methods (Create, Edit, Delete, etc.).
This seems like double work, since I would have to create the Data Access Components with the basic methods as well as the Business Components that simply call the methods in the Data Access Component.
What would be considered best practice for properly implementing the basic CRUD functionalities in a Layered Application? Should they be 'coded' in the Business Component or in a Data Access Component?
It depends. If you expect your business layer to simply do CRUD operations, then you can follow your initial approach. If you plan to have substantial business logic where a business component works with many entities, the second approach is better.
The reason people like to use the second approach is testing and persistence ignorance. If you use the first approach, your business component is tightly coupled to Entity Framework. Mocking ObjectContext is not a very easy task, so testing is hard. If you use the second approach, your business layer is independent of the persistence layer. You can easily test it, and you can even change the persistence layer if you need to. But those are more advanced concepts that you are probably not looking for at the moment; your code would need some additional improvement to be testable.
The DAC can also be implemented as a Repository. There are plenty of ways to create a generic repository so that you have only one class and define the entity type when instantiating the repository:
var repository = new GenericRepository<User>();
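For illustration, a minimal sketch of such a generic repository, assuming the EF 4.1+ DbContext API rather than the ObjectContext-style code shown earlier; here the context is passed in through the constructor instead of being created with a parameterless call as in the line above, which also makes it easier to share one context across repositories:
public class GenericRepository<T> where T : class
{
    private readonly DbContext ctx;

    public GenericRepository(DbContext ctx)
    {
        this.ctx = ctx;
    }

    public void Create(T entity)
    {
        ctx.Set<T>().Add(entity);
        ctx.SaveChanges();
    }

    public T GetById(params object[] keyValues)
    {
        return ctx.Set<T>().Find(keyValues);
    }

    public void Delete(T entity)
    {
        ctx.Set<T>().Remove(entity);
        ctx.SaveChanges();
    }
}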
Also be aware that using a separate data access layer introduces new complexity, because sometimes it is reasonable to share a single context among multiple repositories. This goes together with something known as the Unit of Work pattern.
There are plenty of articles about implementing the Repository and Unit of Work patterns on the Internet. I have some of them stored as favorites at home, so if you are interested I can add them to my answer later on.

Is it really necessary to implement a Repository when using Entity Framework?

I'm studying MVC and EF at the moment, and I quite often see in articles on those subjects that the controller manipulates data via a Repository object instead of using LINQ to Entities directly.
I created a small web application to test some ideas. It declares the following interface as an abstraction of the data storage:
public interface ITinyShopDataService
{
    IQueryable<Category> Categories { get; }
    IQueryable<Product> Products { get; }
}
Then I have a class that inherits from the generated class TinyShopDataContext (inherited from ObjectContext) and implements ITinyShopDataService.
public class TinyShopDataService : TinyShopDataContext, ITinyShopDataService
{
    public new IQueryable<Product> Products
    {
        get { return base.Products; }
    }

    public new IQueryable<Category> Categories
    {
        get { return base.Categories; }
    }
}
Finally, a controller uses an implementation of ITinyShopDataService to get the data, e.g. for displaying products of a specific category.
public class HomeController : Controller
{
    private ITinyShopDataService _dataService;

    public HomeController(ITinyShopDataService dataService)
    {
        _dataService = dataService;
    }

    public ViewResult ProductList(int categoryId)
    {
        var category = _dataService.Categories.First(c => c.Id == categoryId);
        var products = category.Products.ToList();
        return View(products);
    }
}
I think the controller above has some positive properties:
It is easily testable, because the data service is injected.
It uses LINQ statements that query an abstract data storage in an implementation-neutral way.
So, I don't see any benefit of adding a Repository between the controller and the ObjectContext-derived class. What do I miss?
(Sorry, it was a bit too long for the first post)
You can test your controllers just fine with what you propose.
But how do you test your queries inside the data service? Imagine you have very complicated query behavior, with multiple, complex queries to return a result (check security, then query a web service, then query the EF, etc.).
You could put that in the controller, but that's clearly wrong.
It should go in your service layer, of course. You might mock/fake out the service layer to test the controller, but now you need to test the service layer.
It would be nice if you could do this without connecting to a DB (or web service). And that's where Repository comes in.
The data service can use a "dumb" Repository as a source of data on which it can perform complex queries. You can then test the data service by replacing the repository with a fake implementation which uses a List<T> as its data store.
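As a sketch of that idea (the IProductRepository abstraction and its members are hypothetical, not part of the question's code): the fake exposes an in-memory List<T> through AsQueryable(), so the data service's LINQ queries run unchanged in tests.
// Hypothetical abstraction consumed by the data service.
public interface IProductRepository
{
    IQueryable<Product> Products { get; }
}

// Test double: backed by an in-memory list instead of a database.
public class FakeProductRepository : IProductRepository
{
    private readonly List<Product> products;

    public FakeProductRepository(IEnumerable<Product> seed)
    {
        products = new List<Product>(seed);
    }

    public IQueryable<Product> Products
    {
        get { return products.AsQueryable(); } // requires System.Linq
    }
}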
The only thing which makes this confusing is the large number of examples that show controllers talking directly to repositories. (S#arp Architecture, I'm looking at you.) If you're looking at those examples and you already have a service layer, then I agree that it makes it difficult to see what the benefit of the repository is. But if you just consider the testing of the service layer itself, and don't plan on having controllers talk directly to the repository, then it starts to make more sense.
Ayende, of NHibernate fame among other projects, agrees with you.
Some Domain Driven Design people would suggest that you should have a service layer (not the same as a web service) that is called from your controller, but DDD is usually applied to complex domains. In simple applications, what you're doing is fine and testable.
What you're missing is the ability to update, create and delete objects. If that's not an issue, then the interface is probably good enough as it is (although I would have it derive from IDisposable to make sure you can always dispose the context, and test for this).
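For illustration only, a sketch of that suggestion; the write members are assumptions and not part of the original interface:
public interface ITinyShopDataService : IDisposable
{
    IQueryable<Category> Categories { get; }
    IQueryable<Product> Products { get; }

    // Illustrative write operations; the original interface is read-only.
    void Add(Product product);
    void Remove(Product product);
    void SaveChanges();
}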
The repository pattern has nothing to do with Entity Framework, although Entity Framework implements a "repository" pattern.
The repository interface (let's say IRepositoryProducts) lives in the domain layer. It understands domain objects, or even entities if you don't want to use domain-driven design.
But its implementation (let's say RepositoryProducts) is the actual repository pattern implementation, and it does not live in the domain layer but in the persistence layer.
This implementation can use Entity Framework or any other ORM (or none at all) to persist the information in the database.
So the answer is: it's not strictly necessary, but it's recommended to use the Repository pattern even though you are using the Entity Framework ORM, as a good practice to keep the separation between the domain layer and the persistence layer. That's the purpose of the Repository pattern: separation of concerns between the domain logic and the way the information is persisted. When using the repository pattern from the domain level, you just abstract yourself and think "I want to store this information, I don't care how it's done", or "I want a list of these things, I don't care where you get them from or how you retrieve them, just give them to me".
Entity Framework has nothing to do with the domain layer; it's only a nice way to persist objects, and it should live in the repository implementation (persistence layer).
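A minimal sketch of that layering, reusing the IRepositoryProducts/RepositoryProducts names from this answer (the context class and member signatures are illustrative assumptions):
// Domain layer: only the abstraction and the domain objects are visible.
public interface IRepositoryProducts
{
    Product GetById(int id);
    IList<Product> ListByCategory(int categoryId);
    void Add(Product product);
}

// Persistence layer: the EF-backed implementation hidden behind the interface.
public class RepositoryProducts : IRepositoryProducts
{
    private readonly ShopContext ctx; // hypothetical DbContext exposing Products

    public RepositoryProducts(ShopContext ctx)
    {
        this.ctx = ctx;
    }

    public Product GetById(int id)
    {
        return ctx.Products.Single(p => p.Id == id);
    }

    public IList<Product> ListByCategory(int categoryId)
    {
        return ctx.Products.Where(p => p.CategoryId == categoryId).ToList();
    }

    public void Add(Product product)
    {
        ctx.Products.Add(product);
        ctx.SaveChanges();
    }
}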