Confusion over MVC and entity model - asp.net-mvc-2

My confusion stems from the fact that I am using two different walkthroughs on building MVC applications: Steven Sanderson's Pro ASP.NET MVC and the online MVC Music Store. The former creates a domain model, placing the entity model in there along with repositories, while the Music Store demo places the entity model in the MVC Model folder. Which of these is the better approach? Should the entity model and associated repositories live in a separate domain layer, or in the MVC project's Model folder?

Separation of concerns
The Model folder in the ASP.NET MVC project template is indeed very confusing. Developers who don't know the MVC pattern well enough often assume that the application/domain model equals the data model. Most of the time, that's not the case.
Take for instance a user entity that may exist in several different forms (sketched in the example below):
NewUser is an application model entity that has most of a user's properties, plus two password properties that can be validated declaratively
User, the data model entity, has all the usual user properties and a single password property
User, the application model entity, has all the usual properties and none for the password
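A minimal sketch of how those three shapes might look (the class and property names here are illustrative, not taken from any particular project; the application-side User is named UserModel only to avoid a name clash in the sketch):

    using System.ComponentModel.DataAnnotations;

    // Data model entity - lives in the data tier only
    public class User
    {
        public int Id { get; set; }
        public string UserName { get; set; }
        public string Email { get; set; }
        public string PasswordHash { get; set; }    // the single password property
    }

    // Application model entity used for registration - declaratively validated
    public class NewUser
    {
        [Required]
        public string UserName { get; set; }

        [Required]
        public string Email { get; set; }

        [Required]
        public string Password { get; set; }

        [Required]
        public string ConfirmPassword { get; set; } // second password property
    }

    // Application model entity shared between assemblies - no password at all
    public class UserModel
    {
        public int Id { get; set; }
        public string UserName { get; set; }
        public string Email { get; set; }
    }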
So you can see from this simple example that there are multiple models that differ from one another. And when you have a multi-assembly application, putting the application model in a separate assembly is very wise, since all assemblies will most probably communicate using these objects only. No data model entities should be transferred outside the data assembly/tier if you want to preserve separation of concerns.
So in the end it's fine to put the data model in the Model folder when building a small, simple application, but in all other cases it's probably better to use a separate application model assembly that's shared between all assemblies, and a separate data model that's used only in the data tier assembly.
I would recommend against using the Model folder; use a separate assembly instead. You'll get better separation and improved scalability.

Strategically it makes sense to place the EF model in the same folder as the repositories, because it is simply part of the data access layer inside an application.
Logically it would be better to place the EF model in the Model directory, since it creates all the classes needed to reflect the database in the application. (And if you open the Class View it looks much better to have all those classes residing in a folder called Model instead of Repositories.)
At our company we had the same problem and decided to keep the EF model in the Model folder.
In the end it's up to you. The most important thing here is to document every decision made during development (when, why and based on what).
Documenting everything can prevent later WTFs.

Related

MVC Business Logic + DAL + Mapping ViewModel with EF Model

I'm starting to develop an application and I want to know the best practices for organizing the architecture of the solution.
Should I use the EF class model as my ViewModel?
Should I put all my queries and DB access in the model, or create a service to manage all DB concerns?
I'm using EF with Database First, because the database is already developed.
Thanks!
There are much more complete descriptions of application architecture out there, but here's the $.25 description.
EF Class Models are for your communication with a data store
Data Transfer Objects (DTOs) are how modules communicate among themselves (WebAPI to MVC, etc.)
ViewModels supply the data your UI requires
Look up "Separation of Concerns" as it pertains to application architecture; it can save your butt. Developers will often dual-purpose these entities, leading to some hilarious results when you find you've painted yourself into a corner. Not so funny if you are the "painter".
On the other hand, keeping these models separate requires extra effort, and the mappings take CPU cycles. Here's a concrete example:
WebAPI accesses a People entity (EF class), maps it to a PeopleDTO (not all fields, maybe additional information) and returns that to your MVC controller. The MVC controller takes the PeopleDTO and merges it with supporting lookup tables (more WebAPI calls) to create a PeopleVM (ViewModel) that is used by your Razor view.
In the scenario I just outlined, there are three different kinds of People object, but each could have very different contents depending on the needs of that "layer". Lots of tools exist to make the mapping less painful.
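A rough sketch of what those three shapes could look like (all names and properties here are hypothetical):

    using System.Collections.Generic;
    using System.Web.Mvc;

    // EF entity - mirrors the data store
    public class People
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int DepartmentId { get; set; }       // raw foreign key
    }

    // DTO - what WebAPI hands back to the MVC controller
    public class PeopleDTO
    {
        public int Id { get; set; }
        public string FullName { get; set; }        // not all fields, some combined
        public int DepartmentId { get; set; }
    }

    // ViewModel - exactly what the view needs, lookup data merged in
    public class PeopleVM
    {
        public int Id { get; set; }
        public string FullName { get; set; }
        public string DepartmentName { get; set; }
        public IEnumerable<SelectListItem> Departments { get; set; } // dropdown support
    }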
Clear?

Getting familiar with Entity Framework when using existing database

We are currently rewriting an existing internal ASP.NET Web Forms application. Our application consists of a Web API back end, which uses Entity Framework 6 for data access, and a front end that uses AngularJS.
We have an existing large database from which I've created EF models using the Code-First Using Existing Database method, and we are using data transfer object classes as inputs/outputs to our API methods so we aren't directly exposing our model classes. So basically I'm trying to become proficient with EF, Web API and AngularJS all at the same time. For the most part I'm fairly comfortable with the latter two, but I haven't gotten completely comfortable with EF yet. I've watched a lot of the videos on Microsoft Virtual Academy, but this is the first time I've had some hands-on experience with it.
We've been working on this application for a few months and so far we've only had to work with CRUD operations on our entities (POCO DTOs), which are flat objects with simple properties. However, we've finally come across some situations where we need to deal not only with our classes, but with properties that are classes themselves: a parent-child relationship.
Therefore, I have the following questions:
I see that when we have a proper foreign key relationship in our DB, virtual properties are created in EF, which as I recall are there to support lazy loading. However, lazy loading isn't really feasible in this environment where we are using web services (Web API). Our object model allows for some really large hierarchies of classes, where a fully populated object and its children would mean passing around a large amount of data that really isn't necessary; in most cases a first-level object is all we need. In some cases, however, we do want to populate child classes, so my question is how do we do that, and where do we do that? I've looked at the automatically generated code in the DbContext, but we have also used scaffolded code to create our controllers. Where do we need to do this? I've seen code samples showing how to do this, but they haven't said specifically where this code lives. It appears to be within a controller, but I could be wrong.
If we do allow for 2- or more level hierarchy of objects, does EF automatically handle operations (updates, deletes, etc.) -- for example, if we have a "Company" object which has a collection of "Customer" objects, and we delete the "Company" object, do the related "Customer" objects get deleted too? Also, is a multi-step operation like that automatically performed within a transaction or do we need to explicitly set that up?
If I modify a model class or the DB context, since this code is automatically generated my changes could be overwritten, which is generally bad practice, so I am assuming the controller code is where I want to make my changes. I am aware of database migrations, but I have no experience with them, and I am sure I'll need to use them at some point because I am fairly confident that our database doesn't have all the foreign key relationships necessary for EF to do everything we need at the moment.
I know this is a long post, but I'd appreciate any guidance on these points, because it's not only me dealing with this: I have two other developers on my team working on this project and we are all equally inexperienced with it. Thanks
For the purpose of sending data across a web service, I'd suggest creating a DTO to hold the data you want to send and mapping your entities to the DTO instead of trying to send the entities themselves in your payload. It also protects your API from changes to your entity.
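As a rough sketch (the context, entity and DTO names here are made up, not taken from your project, and a matching Company/Customer context is assumed, like the one sketched a little further below), a Web API action that eagerly loads the children it needs with Include and then projects them into a DTO might look like this:

    using System.Collections.Generic;
    using System.Data.Entity;   // brings in the Include() extension method
    using System.Linq;
    using System.Web.Http;

    // DTOs that travel across the wire instead of the EF entities
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CompanyDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public List<CustomerDto> Customers { get; set; }
    }

    public class CompaniesController : ApiController
    {
        private readonly AppDbContext _db = new AppDbContext();

        // GET api/companies/5 - a company with its customers populated
        public CompanyDto GetCompany(int id)
        {
            var company = _db.Companies
                .Include(c => c.Customers)          // explicit eager load, no lazy loading needed
                .SingleOrDefault(c => c.Id == id);

            if (company == null)
                return null;                        // handle "not found" however you prefer

            // Map the entity graph onto flat DTOs before it leaves the API
            return new CompanyDto
            {
                Id = company.Id,
                Name = company.Name,
                Customers = company.Customers
                    .Select(cu => new CustomerDto { Id = cu.Id, Name = cu.Name })
                    .ToList()
            };
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing) _db.Dispose();
            base.Dispose(disposing);
        }
    }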
Cascading deletes are configurable, IIRC, but I'm not 100% sure what the default is. Transactions are generally not implicit, so you will want to use them where you require them.
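For example (the entity names are hypothetical; the APIs are EF6), the cascade can be made explicit in the fluent model configuration, and several SaveChanges calls can be grouped into one explicit transaction:

    using System.Collections.Generic;
    using System.Data.Entity;

    public class Company
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Customer> Customers { get; set; }
    }

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public int CompanyId { get; set; }
        public virtual Company Company { get; set; }
    }

    public class AppDbContext : DbContext
    {
        public DbSet<Company> Companies { get; set; }
        public DbSet<Customer> Customers { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Make the cascade explicit: deleting a Company deletes its Customers
            modelBuilder.Entity<Company>()
                .HasMany(c => c.Customers)
                .WithRequired(cu => cu.Company)
                .HasForeignKey(cu => cu.CompanyId)
                .WillCascadeOnDelete(true);
        }
    }

    public static class CompanyOperations
    {
        // Two SaveChanges calls wrapped in a single explicit transaction
        public static void DeleteCompany(AppDbContext db, int companyId)
        {
            using (var tx = db.Database.BeginTransaction())
            {
                var company = db.Companies.Find(companyId);
                db.Companies.Remove(company);
                db.SaveChanges();

                // ...any other work that must succeed or fail together...
                db.SaveChanges();

                tx.Commit();
            }
        }
    }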
Not exactly sure what you are asking here. In general, how your entities/tables change depends on whether you are using database-first or code-first. If you are using database-first (you will have a .edmx file in your solution whose model matches your schema), you update the SQL directly and then update your entity model via the .edmx. If you use code-first, you change the entities how you want them and run a database migration to update your database to match.
MSDN article about code-first migration: https://msdn.microsoft.com/en-us/data/jj591621.aspx
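A code-first migration generated with Add-Migration is just a class you can read and adjust before Update-Database applies it; a minimal, hypothetical one looks roughly like this:

    using System.Data.Entity.Migrations;

    // Hypothetical migration adding a column; Up() is run when the migration
    // is applied, Down() is used when rolling it back.
    public partial class AddCustomerPhone : DbMigration
    {
        public override void Up()
        {
            AddColumn("dbo.Customers", "Phone", c => c.String(maxLength: 20));
        }

        public override void Down()
        {
            DropColumn("dbo.Customers", "Phone");
        }
    }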

Entity Framework 6 Database-First and Onion Architecture

I am using Entity Framework 6 database-first. I am converting the project to implement the onion architecture to move towards better separation of concerns. I have read many articles and watched many videos, but I am having some issues deciding on my solution structure.
I have 4 projects: Core, Infrastructure, Web & Tests.
From what I've learned, the .edmx file should be placed under my "Infrastructure" folder. However, I have also read about using the Repository and Unit of Work patterns to assist with EF decoupling and using Dependency Injection.
With this being said:
Will I have to create repository interfaces under CORE for ALL entities in my model? If so, how would one maintain this on a huge database? I have looked into AutoMapper but found issues with it returning IEnumerables vs. IQueryables, though there is an extension available to help with this. I can dig deeper into that route but want to hear back first.
As an alternative, should I leave my .edmx in Infrastructure and move the .tt T4 files for my entities to CORE? Does this introduce tight coupling, or is it a good solution?
Would a generic repository interface work well with the suggestion you provide? Or does EF6 already address the Repository and UoW patterns on its own?
Thank you for looking at my question and please present any alternative responses as well.
I found a similar post here that was not answered:
EF6 and Onion architecture - database first and without Repository pattern
Database first doesn't completely rule out Onion Architecture (aka Ports and Adapters or Hexagonal Architecture, so if you see references to those, they're the same thing), but it's certainly more difficult. Onion Architecture and the separation of concerns it allows fit very nicely with domain-driven design (I think you mentioned on Twitter that you'd already seen some of my videos on this subject on Pluralsight).
You should definitely avoid putting the EDMX in the Core or Web projects - Infrastructure is the right location for it. With database-first, that means you're going to have EF entities in Infrastructure. You want your business objects/domain entities to live in Core, though. From there you basically have two options if you want to continue down this path:
1) Switch from database first to code first (perhaps using a tool) so that you can have POCO entities in Core.
2) Map back and forth between your Infrastructure entities and your Core objects, perhaps using something like AutoMapper. Before EF supported POCO entities this was the approach I followed when using it, and I would write repositories that only dealt with Core objects but internally would map to EF-specific entities.
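A rough sketch of option 2, with made-up names (a database-first context called SalesEntities and a CUSTOMERS entity generated from the EDMX are assumed):

    // Core project - persistence-ignorant domain object and repository interface
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
        void Add(Customer customer);
    }

    // Infrastructure project - repository that maps EF entities to Core objects
    public class CustomerRepository : ICustomerRepository
    {
        private readonly SalesEntities _db = new SalesEntities();  // EDMX-generated context

        public Customer GetById(int id)
        {
            var row = _db.CUSTOMERS.Find(id);                      // EDMX-generated entity
            return row == null ? null : new Customer { Id = row.CUSTOMER_ID, Name = row.NAME };
        }

        public void Add(Customer customer)
        {
            _db.CUSTOMERS.Add(new CUSTOMER { NAME = customer.Name });
            _db.SaveChanges();
        }
    }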
As to your questions about Repositories and Units of Work, there's been a lot written about this already, on SO and elsewhere. You can certainly use a generic repository implementation to allow for easy CRUD access to a large set of entities, and it sounds like that may be a quick way for you to move forward in your scenario. However, my general recommendation is to avoid generic repositories as your go-to means of accessing your business objects, and instead use Aggregates (see DDD or my DDD course w/Julie Lerman on Pluralsight) with one concrete repository per Aggregate Root. You can separate out complex business entities from CRUD operations, too, and only follow the Aggregate approach where it is warranted. The benefit you get from this approach is that you're constraining how the objects are accessed, and getting similar benefits to a Facade over your (large) set of database entities.
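In interface terms, the difference might look something like this (names are illustrative only; Order is a hypothetical aggregate root):

    using System.Collections.Generic;

    // Generic repository - convenient CRUD access for simple entities
    public interface IRepository<T> where T : class
    {
        T GetById(int id);
        IEnumerable<T> List();
        void Add(T entity);
        void Update(T entity);
        void Delete(T entity);
    }

    // One concrete repository per aggregate root - it constrains access so that,
    // for example, OrderLines are only ever reached through their Order
    public interface IOrderRepository
    {
        Order GetByIdWithLines(int orderId);
        void Save(Order order);
    }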
Don't feel like you can only have one DbContext per application. It sounds like you are evolving this design over time, not starting with a green-field application. To that end, you could keep your .edmx file and perhaps a generic repository for CRUD purposes, but then create a new code-first DbContext for a specific set of operations that warrant POCO entities, separation of concerns, increased testability, etc. Over time, you can shift the bulk of the essential code to use this, while still keeping the existing DbContext so you don't lose any current functionality.
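For instance, a second, narrowly focused code-first context (hypothetical names) can live alongside the EDMX-based one:

    using System.Data.Entity;

    // A small code-first context for one bounded set of operations, coexisting
    // with the existing EDMX-generated context
    public class OrderingContext : DbContext
    {
        public OrderingContext() : base("name=OrderingConnection") { }

        public DbSet<Order> Orders { get; set; }
        public DbSet<OrderLine> OrderLines { get; set; }
    }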
I am using entity framework 6.1 in my DDD project. Code first works out very well if you want to do Onion Architecture.
In my project we have completely isolated the repository from the domain model. An application service is what uses a repository to load aggregates from, and persist aggregates to, the database. Hence, there are no repository interfaces in the domain (core).
The second option of using T4 to generate POCOs in a separate assembly is a good idea. Please remember that your domain model (core) should be persistence-ignorant.
While generic repositories are good for enforcing aggregate-level operations, I prefer using specific repositories, simply because not every aggregate is going to need all of those generic repository operations.
http://codingcraft.wordpress.com/

Entity Framework Code First - Reverse Engineer

I am about to start work on an ASP.NET MVC 4 web application using Entity Framework Code First. Because the database already exists, I am using Code First's ability to perform reverse engineering in order to generate my domain classes.
I then wish to place these domain classes into a separate project of their own within my solution in order to keep them persistence-ignorant. However, when I run the tool to reverse engineer my database (using Entity Framework Power Tools), I find that the domain classes are created, but so is a folder called Mapping containing a mapping class for each domain class, which uses the Fluent API to map the table properties. This is all good.
But what I have also found is that the mapping classes rely on a reference to Entity Framework, and I was wondering whether this is considered poor practice. Usually when I create POCO classes they are completely persistence ignorant, i.e. there is no reference to Entity Framework in that project at all.
Your thoughts?
Thanks.
You can keep your POCO classes in one project while the mappings stay in another project; just add a reference to the project where the POCO classes are located. Another approach is to create a data access layer and move the mappings over there. That way you have three projects: your main MVC project, your model project, and a data access layer that contains the mappings and a reference to EntityFramework.
For example, your solution could be something like this:
1. Web User Interface (MVC)
2. Business layer
3. Unit of Work/Repository
4. Data access layer (Mapping from EF Reverse Engineering)
All four projects have access to a fifth project, Domain Model (the models from EF reverse engineering). Project 1 communicates with 2, 2 communicates with 3, and 3 communicates with 4. All four of them reference the Domain Model, so you don't have to perform any kind of domain model conversion between the layers.
By the way, I ignored the service layer; if you have web services or REST you could fit it in as another project, but let's not get into too many details.
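To make the split concrete, here is a minimal sketch with hypothetical names (the two classes live in two different projects; only the data access layer file needs the EntityFramework reference and its using directive):

    using System.Data.Entity.ModelConfiguration;

    // Domain Model project - persistence-ignorant POCO, no EF reference
    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    // Data access layer project - fluent mapping produced by the reverse engineering
    public class CustomerMap : EntityTypeConfiguration<Customer>
    {
        public CustomerMap()
        {
            ToTable("Customers");
            HasKey(c => c.CustomerId);
            Property(c => c.Name).HasMaxLength(100).IsRequired();
        }
    }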

MVC3 and EF Data first: what are the best practices?

It seems that most of the focus with MVC 3 and EF 4.1 is on "code first". I can't seem to find any examples or tutorials that meet the following criteria:
uses an existing SQLServer database
has separate projects for web & data access (we will have multiple web apps sharing the same data access classes)
recommendations for validation
Does such an example or tutorial exist? Are there any documented "best practices" for how to accomplish this, or rationale for NOT having a solution structured this way?
This is quite a common scenario, and the answer depends on whether you want to use an EDMX file for mapping or have the mapping defined in code (as with code first).
Both scenarios can be done database-first:
You can create an EDMX from the existing database with the built-in EF tools in Visual Studio and use the DbContext T4 generator template to get POCO classes and a DbContext-derived class.
Or you can download the EF Power Tools CTP and use its reverse-engineering feature to generate the code mapping, POCO classes and context for you.
Neither of these approaches will add data annotations. Data annotations on entities should not be used for client validation (that is bad practice) unless you are building a very simple application. Usually your views have more advanced expectations, and validation in a view can differ from validation on the entity. For example, an insert view and an update view may need different validation, and that is not possible with a single set of data annotations on the entity. Because of this, you should move the data annotations used for validation to specialized view models and transform your entities to view models and vice versa (you can use AutoMapper to simplify this).
Anyway, it is possible to add data annotations to the generated classes via buddy classes, but as mentioned, that is not good practice.
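A small illustration of that split (all names are hypothetical): the generated entity stays clean, and each view gets its own view model carrying its own validation rules:

    using System.ComponentModel.DataAnnotations;

    // Generated POCO entity - no validation attributes on it
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // View model for the insert view
    public class CreateProductViewModel
    {
        [Required, StringLength(100)]
        public string Name { get; set; }

        [Range(0.01, 100000)]
        public decimal Price { get; set; }
    }

    // View model for the update view - different rules are possible here
    public class EditProductViewModel
    {
        [Required]
        public int Id { get; set; }

        [StringLength(100)]
        public string Name { get; set; }
    }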