Entity Framework 6 Database-First and Onion Architecture

I am using Entity Framework 6 database-first. I am converting the project to implement the onion architecture to move towards better separation of concerns. I have read many articles and watched many videos but having some issues deciding on my solution structure.
I have 4 projects: Core, Infrastructure, Web & Tests.
From what I've learned, the .edmx file should be placed under my "Infrastructure" folder. However, I have also read about using the Repository and Unit of Work patterns to assist with EF decoupling and using Dependency Injection.
With this being said:
Will I have to create repository interfaces under Core for ALL entities in my model? If so, how would one maintain this on a huge database? I have looked into AutoMapper but found issues with it returning IEnumerables rather than IQueryables, although there is an extension available to help with this. I can dig deeper into that route, but I want to hear back first.
As an alternative, should I leave my edmx in Infrastructure and move the .tt T4 files for my entities to CORE? Does this present any tight coupling or a good solution?
Would a generic Repository interface work well with the suggestion you provide? Or maybe EF6 already resolves the Repository and UoW patterns issue?
Thank you for looking at my question and please present any alternative responses as well.
I found a similar post here that was not answered:
EF6 and Onion architecture - database first and without Repository pattern

Database first doesn't completely rule out Onion architecture (aka Ports and Adapters or Hexagonal Architecture, so if you see references to those they're the same thing), but it's certainly more difficult. Onion Architecture and the separation of concerns it allows fit very nicely with domain-driven design (I think you mentioned on Twitter you'd already seen some of my videos on this subject on Pluralsight).
You should definitely avoid putting the EDMX in the Core or Web projects - Infrastructure is the right location for that. At that point, with database-first, you're going to have EF entities in Infrastructure. You want your business objects/domain entities to live in Core, though. From there you basically have two options if you want to continue down this path:
1) Switch from database first to code first (perhaps using a tool) so that you can have POCO entities in Core.
2) Map back and forth between your Infrastructure entities and your Core objects, perhaps using something like AutoMapper. Before EF supported POCO entities this was the approach I followed when using it, and I would write repositories that only dealt with Core objects but internally would map to EF-specific entities.
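For option 2, a minimal sketch of what such a repository could look like, assuming hypothetical `Customer`, `CustomerEntity`, and `ShopContext` types standing in for your Core object and the EDMX-generated classes (the mapping is written by hand here; AutoMapper could replace it):

```csharp
using System.Data.Entity;

// Stand-ins for the EDMX-generated types (illustrative only).
public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<CustomerEntity> Customers { get; set; }
}

// Core project: persistence-ignorant domain type and repository contract.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
    void Add(Customer customer);
}

// Infrastructure project: EF-specific repository that maps to and from the EF entity.
public class EfCustomerRepository : ICustomerRepository
{
    private readonly ShopContext _context;

    public EfCustomerRepository(ShopContext context)
    {
        _context = context;
    }

    public Customer GetById(int id)
    {
        CustomerEntity entity = _context.Customers.Find(id);
        return entity == null ? null : new Customer { Id = entity.Id, Name = entity.Name };
    }

    public void Add(Customer customer)
    {
        _context.Customers.Add(new CustomerEntity { Name = customer.Name });
        _context.SaveChanges();
    }
}
```

The point is that callers only ever see the Core `Customer` type; the EF entity never leaves Infrastructure.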
As to your questions about Repositories and Units of Work, there's been a lot written about this already, on SO and elsewhere. You can certainly use a generic repository implementation to allow for easy CRUD access to a large set of entities, and it sounds like that may be a quick way for you to move forward in your scenario. However, my general recommendation is to avoid generic repositories as your go-to means of accessing your business objects, and instead use Aggregates (see DDD or my DDD course w/Julie Lerman on Pluralsight) with one concrete repository per Aggregate Root. You can separate out complex business entities from CRUD operations, too, and only follow the Aggregate approach where it is warranted. The benefit you get from this approach is that you're constraining how the objects are accessed, and getting similar benefits to a Facade over your (large) set of database entities.
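To make the contrast concrete, here is a rough sketch of the two shapes being described, with a hypothetical `Order` aggregate used for illustration:

```csharp
using System.Collections.Generic;

// Hypothetical aggregate root.
public class Order
{
    public int Id { get; set; }
    public bool IsOpen { get; set; }
}

// Generic CRUD repository: quick to roll out across a large model,
// but it exposes every entity in exactly the same way.
public interface IRepository<T> where T : class
{
    T GetById(int id);
    IEnumerable<T> List();
    void Add(T item);
    void Remove(T item);
}

// Aggregate-oriented alternative: one concrete repository per Aggregate Root,
// exposing only the operations the domain actually needs.
public interface IOrderRepository
{
    Order GetById(int id);
    IEnumerable<Order> ListOpenOrdersForCustomer(int customerId);
    void Add(Order order);
}
```

The aggregate-root repository constrains access much like a Facade: consumers can only do what the interface names, not arbitrary CRUD against every table.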
Don't feel like you can only have one DbContext per application. It sounds like you are evolving this design over time, not starting with a greenfield application. To that end, you could keep your .edmx file and perhaps a generic repository for CRUD purposes, but then create a new code-first DbContext for a specific set of operations that warrant POCO entities, separation of concerns, increased testability, etc. Over time, you can shift the bulk of the essential code to use this, while still keeping the existing DbContext so you don't lose any current functionality.
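As a rough illustration of running a second, code-first context next to the existing EDMX-based one (the `Shipment` entity, `ShippingContext`, and connection string name are assumptions):

```csharp
using System.Data.Entity;

// Persistence-ignorant POCO that can live in Core (illustrative).
public class Shipment
{
    public int Id { get; set; }
    public string TrackingNumber { get; set; }
}

// A small code-first context in Infrastructure, scoped to one set of operations.
// The initializer is disabled so it never tries to create or migrate the
// schema that the EDMX-based context already owns.
public class ShippingContext : DbContext
{
    static ShippingContext()
    {
        Database.SetInitializer<ShippingContext>(null);
    }

    public ShippingContext() : base("name=ShippingConnection") { }

    public DbSet<Shipment> Shipments { get; set; }
}
```

Both contexts can point at the same database; the new one simply maps the handful of tables its operations need.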

I am using Entity Framework 6.1 in my DDD project. Code first works out very well if you want to do Onion Architecture.
In my project we have completely isolated the Repository from the Domain Model. The Application Service is what uses the repository to load aggregates from, and persist aggregates to, the database. Hence, there are no repository interfaces in the domain (core).
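A minimal sketch of that arrangement, using a hypothetical `Invoice` aggregate, with the repository contract sitting next to the application service rather than in the domain:

```csharp
// Domain (core): only the aggregate, no persistence concerns.
public class Invoice
{
    public int Id { get; private set; }
    public bool IsPaid { get; private set; }

    public void MarkPaid()
    {
        IsPaid = true;
    }
}

// Application layer: repository contract and service live here (illustrative names).
public interface IInvoiceRepository
{
    Invoice GetById(int id);
    void Save(Invoice invoice);
}

public class InvoiceApplicationService
{
    private readonly IInvoiceRepository _invoices;

    public InvoiceApplicationService(IInvoiceRepository invoices)
    {
        _invoices = invoices;
    }

    public void PayInvoice(int invoiceId)
    {
        Invoice invoice = _invoices.GetById(invoiceId); // load the aggregate
        invoice.MarkPaid();                             // behaviour lives on the aggregate
        _invoices.Save(invoice);                        // persist it back
    }
}
```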
Second option of using T4 to generate POCO in a separate assembly is a good idea. Please remember that your domain model (core) should be persistence-ignorant.
While generic repositories are good for enforcing aggregate-level operations, I prefer using specific repositories, simply because not every Aggregate is going to need all of those generic repository operations.
http://codingcraft.wordpress.com/

Related

Opinion on ASP.NET MVC Onion-based architecture [closed]

What is your opinion on the following 'generic' code-first Onion-inspired ASP.NET MVC architecture:
The layers, explained:
Core - contains the Domain model, i.e. the business objects and their relationships. I am using Entity Framework to visually design the entities and the relations between them. It lets me generate a script for a database. I get automatically generated POCO-like models, which I can freely refer to in the next layer (Persistence), since they are simple (i.e. they are not database-specific).
Persistence - Repository interface and implementations. Basically CRUD operations on the Domain model.
BusinessServices - A business layer around the repository. All the business logic should be here (e.g. GetLargestTeam(), etc). Uses CRUD operations to compose return objects or get/filter/store data. Should contain all business rules and validations.
Web (or any other UI) - In this particular case it's an MVC application, but the idea behind this project is to provide UI, driven by what the Business services offer. The UI project consumes the Business layer and has no direct access to the Repository. The MVC project has its own View models, which are specific to each View situation. I am not trying to force-feed it Domain Models.
So the references go like this:
UI -> Business Services -> Repository -> Core objects
What I like about it:
I can design my objects, rather than code them manually. I am getting code-generated Model objects.
UI is driven/enforced by the Business layer. Different UI applications can be coded against the same Business model.
Mixed feelings about:
Fine, we have a pluggable repository implementation, but how often do you really have different implementations of the same persistence interface?
Same goes for the UI - we have the technical ability to implement different UI apps against the same business rules, but why would we do that, when we can simply render different views (mobile, desktop, etc)?
I am not sure if the UI should only communicate with the Business Layer via View models, or should I use Domain Models to transfer data, as I do now. For display, I am using view models, but for data transfer I am using Domain models. Wrong?
What I don't like:
The Core project is now referenced in every other project - because I want/have to access the Domain models. In classic Onion architecture, the core is referenced only by the next layer.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is. I actually want to use the .EDMX for the visual model design, but I feel like the DbContext belongs to the Persistence layer, somewhere within the database-specific repository implementation.
As a final question - what is a good architecture which is not over-engineered (such as a full-blown Onion, where we have injections, service locators, etc) but at the same time provides some reasonable flexibility, in places where you would realistically need it?
Thanks
Wow, there’s a lot to say here! ;-)
First of all, let’s talk about the overall architecture.
What I can see here is that it's not really an Onion architecture. You forgot the outermost layer, the "Dependency Resolution" layer. In an Onion architecture, it's up to this layer to wire up Core interfaces to Infrastructure implementations (which is where your Persistence project should reside).
Here's a brief description of what you should find in an Onion application. What goes in the Core layer is everything unique to the business: the Domain model, business workflows... This layer defines all technical implementation needs as interfaces (i.e. repository interfaces, logging interfaces, session interfaces...). The Core layer cannot reference any external libraries and has no technology-specific code. The second layer is the Infrastructure layer. This layer provides implementations for non-business Core interfaces. This is where you call your DB, your web services... You can reference any external libraries you need to provide implementations and deploy as many NuGet packages as you want :-). The third layer is your UI; well, you know what to put in there ;-) And the last layer is the Dependency Resolution layer I talked about above.
Direction of dependency between layers is toward the center.
Here’s how it could looks like:
The question now is: how to fit what you’ve already coded in an Onion architecture.
Core: contain the Domain model
Yes, this is the right place!
Persistence - Repository interface and implementations
Well, you’ll need to separate interfaces with implementations. Interfaces need to be moved into Core and implementations need to be moved into Infrastructure folder (you can call this project Persistence).
BusinessServices - A business layer around the repository. All the business logic should be here
This needs to be moved into Core, but you shouldn't use repository implementations here, just manipulate the interfaces!
Web (or any other UI) - In this particular case it's an MVC application
cool :-)
You will need to add a “Bootstrapper“ project, just have a look here to see how to proceed.
About your mixed feelings:
I won’t discuss about the need of having repositories or not, you’ll find plenty of answers on stackoverflow.
In my ViewModel project I have a folder called "Builder". It's up to my Builders to talk to my Business service interfaces in order to get data. The Builder will receive lists of Core.Domain objects and will map them into the right ViewModel.
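A rough sketch of such a Builder, with hypothetical `Team`, `ITeamService`, and `TeamViewModel` types:

```csharp
using System.Collections.Generic;
using System.Linq;

// Core domain object and business-service interface (illustrative).
public class Team
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ITeamService
{
    IEnumerable<Team> GetAll();
}

// ViewModel project: the Builder talks to the business-service interface,
// receives Core.Domain objects, and maps them into the right view model.
public class TeamViewModel
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

public class TeamListBuilder
{
    private readonly ITeamService _teamService;

    public TeamListBuilder(ITeamService teamService)
    {
        _teamService = teamService;
    }

    public IList<TeamViewModel> Build()
    {
        return _teamService.GetAll()
            .Select(t => new TeamViewModel { Id = t.Id, DisplayName = t.Name })
            .ToList();
    }
}
```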
About what you don’t like:
In classic Onion architecture, the core is referenced only by the next layer.
False! :-) Every layer needs Core in order to have access to all the interfaces defined there.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is
Once again, it’s not a problem as soon as it’s really easy to edit the T4 template associated with your EDMX. You just need to change the path of the generated files and you can have the EDMX in the Infrastructure layer and the POCO’s in your Core.Domain project.
Hope this helps!
I inject my services into my controllers. The services return DTOs which reside in Core.
The model you have looks good. I don't use the repository pattern, but many people do. It is difficult to work with EF in this type of architecture, which is why I chose to use NHibernate.
A possible answer to your final question.
CORE
DOMAIN
DI
INFRASTRUCTURE
PRESENTATION
SERVICES
In my opinion:
"In classic Onion architecture, the core is referenced only by the next layer."
That is not true; Core should be referenced by any layer... remember that the direction of dependency between layers is toward the center (Core).
"the layers above can use any layer beneath them" By Jeffrey Palermo http://jeffreypalermo.com/blog/the-onion-architecture-part-3/
About your EF: it is in the wrong place... it should be in the Infrastructure layer, not in the Core layer. Use the POCO Generator to create the entities (POCO classes) in Core/Model, or use an auto-mapper so you can map your Core Model (business objects) to the Entity Model (EF entities).
What you've done looks pretty good and is basically one of two standard architectures that I see a lot.
Mixed feelings about:
Fine, we have a pluggable repository implementation, but how often do you really have different implementations of the same persistence interface?
Pluggable is often touted as being good design but I've never once seen a team swap out a major implementation of something for something else. They just modify the existing thing. IMHO "pluggability" is only useful for being able to mock components for automated unit testing.
I am not sure if the UI should only communicate with the Business Layer via View models, or should I use Domain Models to transfer data, as I do now. For display, I am using view models, but for data transfer I am using Domain models. Wrong?
I reckon view models are a UI (MVC Web) concern, if you added a different type of UI for example it might not require view models or might need something different. So I think the Business layer should return domain entities and allow them to be mapped to view models in the UI layer.
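A minimal sketch of that flow in the MVC project, with hypothetical `ITeamService`, `Team`, and view model types:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

// Domain entity and business-layer interface (illustrative).
public class Team
{
    public string Name { get; set; }
    public List<string> Members { get; set; }
}

public interface ITeamService
{
    Team GetLargestTeam();
}

// UI layer: the view model and the mapping to it live here, in the MVC project.
public class LargestTeamViewModel
{
    public string Name { get; set; }
    public int MemberCount { get; set; }
}

public class TeamsController : Controller
{
    private readonly ITeamService _teamService;

    public TeamsController(ITeamService teamService)
    {
        _teamService = teamService;
    }

    public ActionResult Largest()
    {
        Team team = _teamService.GetLargestTeam(); // business layer returns a domain entity
        var model = new LargestTeamViewModel       // mapping to the UI-specific shape happens here
        {
            Name = team.Name,
            MemberCount = team.Members.Count
        };
        return View(model);
    }
}
```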
What I don't like:
The Core project is now referenced in every other project - because I want/have to access the Domain models. In classic Onion architecture, the core is referenced only by the next layer.
As others have said this is quite normal. Usually everything ends up having a dependency on the Domain.
The DbContext is implemented in the .Core project, because it is being generated by the Entity Framework, in the same place where the .edmx is. I actually want to use the .EDMX for the visual model design, but I feel like the DbContext belongs to the Persistence layer, somewhere within the database-specific repository implementation.
I think this is a consequence of Entity Framework. If you used it in "Code First" mode you actually can - and usually do - have the context and repository in the Persistence layer, with the Domain (represented as POCO classes) in what you've called Core.
As a final question - what is a good architecture which is not over-engineered (such as a full-blown Onion, where we have injections, service locators, etc) but at the same time provides some reasonable flexibility, in places where you would realistically need it?
As I touched on above I wouldn't worry about the need to swap things out except to allow for automated unit tests. Unless there is a specific requirement you know about that will make this very likely.
Good luck!

What specific issue does the repository pattern solve?

(Note: My question has very similar concerns as the person who asked this question three months ago, but it was never answered.)
I recently started working with MVC3 + Entity Framework and I keep reading that the best practice is to use the repository pattern to centralize access to the DAL. This is also accompanied with explanations that you want to keep the DAL separate from the domain and especially the view layer. But in the examples I've seen the repository is (or appears to be) simply returning DAL entities, i.e. in my case the repository would return EF entities.
So my question is, what good is the repository if it only returns DAL entities? Doesn't this add a layer of complexity that doesn't eliminate the problem of passing DAL entities around between layers? If the repository pattern creates a "single point of entry into the DAL", how is that different from the context object? If the repository provides a mechanism to retrieve and persist DAL objects, how is that different from the context object?
Also, I read in at least one place that the Unit of Work pattern centralizes repository access in order to manage the data context object(s), but I don't grok why this is important either.
I'm 98.8% sure I'm missing something here, but from my readings I didn't see it. Of course I may just not be reading the right sources... :\
I think the term "repository" is commonly thought of in the way the "repository pattern" is described by the book Patterns of Enterprise Application Architecture by Martin Fowler.
A Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.
On the surface, Entity Framework accomplishes all of this, and can be used as a simple form of a repository. However, there can be more to a repository than simply a data layer abstraction.
According to the book Domain Driven Design by Eric Evans, a repository has these advantages:
They present clients with a simple model for obtaining persistent objects and managing their life cycle
They decouple application and domain design from persistence technology, multiple database strategies, or even multiple data sources
They communicate design decisions about object access
They allow easy substitution of a dummy implementation, for unit testing (typically using an in-memory collection).
The first point roughly equates to the paragraph above, and it's easy to see that Entity Framework itself easily accomplishes it.
Some would argue that EF accomplishes the second point as well. But commonly EF is used simply to turn each database table into an EF entity and pass it through to the UI. It may be abstracting the mechanism of data access, but it's hardly abstracting away the relational data structure behind the scenes.
In simpler applications that are mostly data-oriented, this might not seem to be an important point. But as the application's domain rules / business logic become more complex, you may want to be more object-oriented. It's not uncommon that the relational structure of the data contains idiosyncrasies that aren't important to the business domain, but are side effects of the data storage. In such cases, it's not enough to abstract the persistence mechanism; you also need to abstract the nature of the data structure itself. EF alone generally won't help you do that, but a repository layer will.
As for the third advantage, EF will do nothing (from a DDD perspective) to help. Typically DDD uses the repository not just to abstract the mechanism of data persistence, but also to provide constraints around how certain data can be accessed:
We also need no query access for persistent objects that are more convenient to find by traversal. For example, the address of a person could be requested from the Person object. And most important, any object internal to an AGGREGATE is prohibited from access except by traversal from the root.
In other words, you would not have an 'AddressRepository' just because you have an Address table in your database. If your design chooses to manage how the Address objects are accessed in this way, the PersonRepository is where you would define and enforce the design choice.
Also, a DDD repository would typically be where certain business concepts relating to sets of domain data are encapsulated. An OrderRepository may have a method called OutstandingOrdersForAccount which returns a specific subset of Orders. Or a Customer repository may contain a PreferredCustomerByPostalCode method.
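Sketched as interfaces (the `Order` and `Customer` types are illustrative), those examples might look like this:

```csharp
using System.Collections.Generic;

// Illustrative domain types.
public class Order { public int Id { get; set; } }
public class Customer { public int Id { get; set; } }

// DDD-style repositories: business concepts over sets of domain data are expressed
// as named methods rather than leaking raw queries to the callers.
public interface IOrderRepository
{
    IEnumerable<Order> OutstandingOrdersForAccount(int accountId);
}

public interface ICustomerRepository
{
    IEnumerable<Customer> PreferredCustomerByPostalCode(string postalCode);
}
```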
Entity Framework's DbContext classes don't lend themselves well to such functionality without the added repository abstraction layer. They do work well for what DDD calls Specifications, which can be simple boolean expressions sent in to a simple method that will evaluate the data against the expression and return a match.
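In that simple sense, a specification is just a boolean expression handed to the context; a rough sketch, with an assumed `Order` entity and `ShopContext`:

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

// Illustrative entity and context.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public bool Shipped { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderQueries
{
    private readonly ShopContext _context;

    public OrderQueries(ShopContext context)
    {
        _context = context;
    }

    // The caller passes the specification as a boolean expression;
    // EF translates it to SQL and returns the matching rows.
    public IList<Order> Matching(Expression<Func<Order, bool>> specification)
    {
        return _context.Orders.Where(specification).ToList();
    }
}

// Usage: new OrderQueries(context).Matching(o => o.Total > 1000m && !o.Shipped);
```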
As for the fourth advantage, while I'm sure there are certain strategies that might let one substitute for the DbContext, wrapping it in a repository makes it dead simple.
Regarding 'Unit of Work', here's what the DDD book has to say:
Leave transaction control to the client. Although the REPOSITORY will insert into and delete from the database, it will ordinarily not commit anything. It is tempting to commit after saving, for example, but the client presumably has the context to correctly initiate and commit units of work. Transaction management will be simpler if the REPOSITORY keeps its hands off.
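Read against EF, that advice maps naturally onto SaveChanges: in the sketch below (all type names are assumptions), the repository only stages changes and the calling code decides when to commit the unit of work.

```csharp
using System.Data.Entity;

// Illustrative entity and context.
public class Order
{
    public int Id { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

// The repository inserts and deletes, but never commits.
public class OrderRepository
{
    private readonly ShopContext _context;

    public OrderRepository(ShopContext context)
    {
        _context = context;
    }

    public void Add(Order order)
    {
        _context.Orders.Add(order);
    }
}

// The client owns the unit of work and decides when to commit it.
public class PlaceOrderHandler
{
    private readonly ShopContext _context;
    private readonly OrderRepository _orders;

    public PlaceOrderHandler(ShopContext context, OrderRepository orders)
    {
        _context = context;
        _orders = orders;
    }

    public void Handle(Order order)
    {
        _orders.Add(order);
        // ...other repository work that belongs to the same business transaction...
        _context.SaveChanges();
    }
}
```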
Entity Framework's DbContext basically resembles a Repository (and a Unit of Work as well). You don't necessarily have to abstract it away in simple scenarios.
The main advantage of the repository is that your domain can be ignorant and independent of the persistence mechanism. In a layer based architecture, the dependencies point from the UI layer down through the domain (or usually called business logic layer) to the data access layer. This means the UI depends on the BLL, which itself depends on the DAL.
In a more modern architecture (as propagated by domain-driven design and other object-oriented approaches) the domain should have no outward-pointing dependencies. This means the UI, the persistence mechanism and everything else should depend on the domain, and not the other way around.
A repository will then be represented through its interface inside the domain but have its concrete implementation outside the domain, in the persistence module. This way the domain depends only on the abstract interface, not the concrete implementation.
That basically is object-orientation versus procedural programming on an architectural level.
See also the Ports and Adapters a.k.a. Hexagonal Architecture.
Another advantage of the repository is that you can create similar access mechanisms to various data sources. Not only to databases but to cloud-based stores, external APIs, third-party applications, etc.
You're right, in those simple cases the repository is just another name for a DAO and it brings only one value: the fact that you can switch EF to another data access technique. Today you're using MSSQL, tomorrow you'll want cloud storage, or a micro-ORM instead of EF, or to switch from MSSQL to MySQL.
In all those cases it's good that you use a repository, as the rest of the app won't care about what storage you're using now.
There's also the limited case where you get information from multiple sources (db + file system); a repo will act as the facade, but it's still just another name for a DAO.
A 'real' repository is valid only when you're dealing with domain/business objects; for data-centric apps which won't change storage, the ORM alone is enough.
It would be useful in situations where you have multiple data sources, and want to access them using a consistent coding strategy.
For example, you may have multiple EF data models, and some data accessed using traditional ADO.NET with stored procs, and some data accessed using a 3rd party API, and some accessed from an Access database living on a Windows NT4 server sitting under a blanket of dust in your broom closet.
You may not want your business or front-end layers to care about where the data is coming from, so you build a generic repository pattern to access "data", rather than to access "Entity Framework data".
In this scenario, your actual repository implementations will be different from each other, but the code that calls them wouldn't know the difference.
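One way to picture that (all names here are illustrative): a single contract with per-source implementations, so callers ask for "data" rather than "Entity Framework data".

```csharp
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The calling code only ever sees this contract.
public interface IProductRepository
{
    IList<Product> GetAll();
}

// One implementation backed by Entity Framework...
public class CatalogContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public class EfProductRepository : IProductRepository
{
    private readonly CatalogContext _context;

    public EfProductRepository(CatalogContext context)
    {
        _context = context;
    }

    public IList<Product> GetAll()
    {
        return _context.Products.ToList();
    }
}

// ...and another backed by a third-party API (details elided).
public class ApiProductRepository : IProductRepository
{
    public IList<Product> GetAll()
    {
        // Call the external service and map its response to Product objects.
        throw new System.NotImplementedException();
    }
}
```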
Given your scenario, I would simply opt for a set of interfaces that represent what data structures (your Domain Models) need to be returned from your data layer. Your implementation can then be a mixture of EF, raw ADO.NET, or any other type of data store/provider. The key strategy here is that the implementation is abstracted away from the immediate consumer - your Domain layer. This is useful when you want to unit test your domain objects and, in less common situations, change your data provider or database platform altogether.
You should, if you haven't already, consider using an IoC container, as they make loose coupling of your solution very easy by way of Dependency Injection. There are many available; personally I prefer Ninject.
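With Ninject, that wiring is a one-liner per binding; a minimal sketch of a composition root (the interface and implementation names are assumptions):

```csharp
using Ninject;

// Illustrative contract and implementation to be wired up.
public interface IProductRepository { }
public class EfProductRepository : IProductRepository { }

public static class CompositionRoot
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();

        // Each domain-facing interface is mapped onto a concrete data-layer implementation;
        // swapping the implementation later only touches this one place.
        kernel.Bind<IProductRepository>().To<EfProductRepository>();

        return kernel;
    }
}

// Usage: constructor dependencies of resolved types are injected automatically.
// var repository = CompositionRoot.BuildKernel().Get<IProductRepository>();
```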
The domain layer should encapsulate all of your business logic - the rules and requirements of the problem domain, and can be consumed directly by your MVC3 web application. In certain situations it makes sense to introduce a services layer that sits above the domain layer, but this is not always necessary, and can be overkill for straightforward web applications.
Another thing to consider is that even when you know that you will be working with a single data store it still might make sense to create a repository abstraction. The reason is that there might be a function that your application needs that your ORM du jour either does badly (performance), not at all, or you just don't know how to make the ORM bend to your needs.
If you are wrapping your ORM behind a well thought out repository interface, you can easily switch between different technologies as you see fit. It's not uncommon in my repositories to see some methods use EF for their work and others use something like PetaPoco, or (gasp) ADO.NET code. The repository abstraction enables you to use exactly the right tool for the job at hand without leaking these complexities into the client code.
I think there is a big misunderstanding of what many articles call "repository." And that's why there are doubts about what real value those abstractions bring.
In my opinion the repository in its pure form is IEnumerable, while you and many articles are talking about a "data access service."
I've blogged about it here.

Does Entity Framework DB First (EDMX) prevent proper Separation of Concerns?

I am new to entity framework and MVC, and trying to understand what constitutes a good design approach for a new application.
There are several ways of using Entity Framework. However, for my project, the best looking option is DB First. I've played around with an EDMX file, and I have got as far as using the DbContext code generator to create my wrapper classes.
I plan on using the repository and unit-of-work patterns, and using Ninject for DI.
However, it does not seem "proper", from a SoC point of view, that while my repository will hide the implementation of the data store (EF) from my code, the model classes themselves are very much EF-flavoured.
It seems that using EDMX-based approaches to EF blurs the separation of concerns. Only POCO support seems to allow a true separation, but POCO has some other limitations that I don't like.
Am I missing something, or does using EDMX have this drawback?
Are people using an auto mapper to convert between the entity model and another, clean, SoCced model?
thanks
Tian
I don't have a strong opinion on the Separation of Concerns question, but I have used both the standard ADO.NET version of EF and POCO, and it is not difficult at all to customise the output of the T4 code generation script for POCO to address any concerns you have about the structure of the objects created. That sounds like it would probably be a good starting point for what you are looking to do.
Once you know you are looking for T4 templates there are quite a few tutorials and a lot of helpful SO questions that can give you an idea of what you need to do.

Entity Framework: Data Centric vs. Object Centric

I'm having a look at Entity Framework and everything I'm reading takes a data centric approach to explaining EF. By that I mean that the fundamental relationships of the system are first defined in the database and objects are generated that reflect those relationships.
Examples
Quickstart (Entity Framework)
Using Entity Framework entities as business objects?
The EF documentation implies that it's not necessary to start from the database layer, e.g.
Developers can work with a consistent application object model that can be mapped to various storage schemas
When designing a new system (simplified version), I tend to first create a class model, then generate business objects from the model, code business layer stuff that can't be generated, and then worry about persistence (or rather work with a DBA and let him worry about the most efficient persistence strategy). That object centric approach is well supported by ORM technologies such as (n)Hibernate.
Is there a reasonable path to an object centric approach with EF? Will I be swimming upstream going that route? Any good starting points?
The Model First approach seems to be what you need.
We suggest taking a look at the ADO.NET Team Blog article as well.
A while after asking this, I discovered that EF 4 supports POCO (Plain Old CLR Objects), allowing an object-centric design with (relative) ignorance of persistence.
This article was the best one I came across discussing that approach, while this article explains how to use code generation templates to ease the work.

Is it that easy working with ADO.NET Entity Framework in real programming?

Hi guys,
I was watching these videos series about Entity Framework:
http://msdn.microsoft.com/en-us/data/ff191186.aspx
Is it that easy to build applications in real-world programming? And is it reliable? Does it have good performance?
"I am a graduate."
thanks
Entity Framework is a valid real-world data access tool. It is very easy to get up and running with EF: you simply import (or, in EF 4, create) your data model, rename entities to make them more code-friendly, and then you are off querying databases.
Performance
I have been on multiple projects that use it, some which require high throughput, others that have low performance requirements. Entity Framework out of the box is not the fastest solution in the world, so there are a lot of performance tweaks that have to go on, but it's all doable.
Reliability
We never have issues with reliability. We have never had an issue with EF in general; it's always data-content related, such as trying to insert duplicate data.
Other Tangibles
EF follows a pattern which allows you to do some fun stuff with templates and abstract classes. All entities inherit from a base class, and entities that have references inherit from other classes. All entity contexts inherit from ObjectContext ;) which provides a base set of functionality that allows you to create generic DAO implementations that can be reused throughout the enterprise.
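A rough sketch of the kind of generic DAO that enables, assuming EDMX-generated entity and context types (the names in the usage comment are hypothetical):

```csharp
using System.Collections.Generic;
using System.Data.Entity.Core.Objects; // EF6 namespace; in EF 4 these types live in System.Data.Objects
using System.Linq;

// Works against any EDMX-generated context, because they all derive from ObjectContext.
public class GenericDao<TEntity> where TEntity : class
{
    private readonly ObjectContext _context;
    private readonly ObjectSet<TEntity> _set;

    public GenericDao(ObjectContext context)
    {
        _context = context;
        _set = context.CreateObjectSet<TEntity>();
    }

    public IList<TEntity> GetAll()
    {
        return _set.ToList();
    }

    public void Add(TEntity entity)
    {
        _set.AddObject(entity);
        _context.SaveChanges();
    }
}

// Usage (Customer and MyEntities stand in for your generated entity and context):
// var customers = new GenericDao<Customer>(new MyEntities()).GetAll();
```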
If you are doing UI development, you can also use Data Services that wrap EF as a fast gateway to your database. The only downside of this is that you don't have access to the full suite of Entity Framework features.