Use of DTO in ASP.NET MVC

In the context of ASP.NET MVC 2.0, can anybody please explain why we need to use a DTO (Data Transfer Object) if there are already Models? I have seen an example where a web service returns DTOs to ASP.NET, which are then converted to Models using a factory class. This web service talks to the database and returns data in the form of DTOs.
In my previous projects, I used to communicate with the DB using a data context and repository, which returned model objects to my controller. I then passed the model to the corresponding view. Isn't this simpler? I can't figure out the exact use of the DTO pattern.

Models represent the logical data model that your views are coded against. This may or may not map 1:1 with the source(s) of the data. In a situation where Model == DTO, I agree, the DTO is somewhat redundant.
In most situations where I've used MVC, it's been pretty rare to have a single source of data, or to lack the desire to separate the logical view from the physical sources. For example, I often make multiple service and database calls to build a single logical model.
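As a rough illustration of that last point (the type and member names below are hypothetical, not taken from the question), a small factory can combine DTOs from two different back-end calls into the one model the view is coded against:

// Hypothetical DTOs returned by two different back-end calls.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class OrderTotalsDto
{
    public int OrderCount { get; set; }
    public decimal Total { get; set; }
}

// The logical model the view is actually coded against.
public class CustomerDashboardModel
{
    public string CustomerName { get; set; }
    public int OrderCount { get; set; }
    public decimal LifetimeValue { get; set; }
}

// A small factory that flattens both DTOs into the view's model.
public static class CustomerDashboardModelFactory
{
    public static CustomerDashboardModel Create(CustomerDto customer, OrderTotalsDto totals)
    {
        return new CustomerDashboardModel
        {
            CustomerName = customer.Name,
            OrderCount = totals.OrderCount,
            LifetimeValue = totals.Total
        };
    }
}

The view only ever sees CustomerDashboardModel; where the data came from stays a controller/factory concern.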

Related

Exposed domain model in Java microservice architecture

I'm aware that copying entity classes and properties into DTOs is considered an anti-pattern, so with the Exposed Domain Model pattern the same @Entity class can be used both as the database entity and as the DTO for the service and MVC layers (see here: https://codereview.stackexchange.com/questions/93511/data-transfer-objects-vs-entities-in-java-rest-server-application).
But suppose we have a microservice architecture where the same set of properties is used as an entity in one project with persistence, and as a DTO in another project which uses the first one as a service. What's the proposed pattern in such a situation?
The second project doesn't need the @Entity-related functionality, and if we put that class in a shared library, it will be tied unnecessarily to JPA-specific APIs and libraries. The alternative is to fall back to the separate-DTO-classes anti-pattern again.
When your requirements for a DTO model exactly match your entity model you are either in a very early stage of the project or very lucky that you just have a simple model. If your model is very simple, then DTOs won't give you many immediate benefits.
At some point, the requirements for the DTO model and the entity model will diverge though. Imagine you add some audit aspects, statistics or denormalization to your entity/persistence model. That kind of data is usually never exposed via DTOs directly, so you will need to split the models. It is also often the case that the main driver for DTOs is the fact that you don't need all the data all the time. If you display objects in e.g. a dropdown you only need a label and the object id, so why would you load the whole entity state for such a use case?
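A minimal sketch of that dropdown case (written in C#/LINQ like the other snippets on this page; every name here is made up): project straight into a two-field DTO so the full entity state is never materialized.

using System.Collections.Generic;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Slim DTO: only what the dropdown needs.
public class ProductOptionDto
{
    public int Id { get; set; }
    public string Label { get; set; }
}

public static class ProductQueries
{
    // 'products' would typically be an entity set exposed by the persistence context.
    public static List<ProductOptionDto> GetDropdownOptions(IQueryable<Product> products)
    {
        // The projection happens inside the query, so only Id and Name are loaded.
        return products
            .Select(p => new ProductOptionDto { Id = p.Id, Label = p.Name })
            .ToList();
    }
}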
The fact that you have annotations on your DTO models shouldn't bother you that much; what is the alternative? An XML-like mapping? Manual object wiring?
If your model is used by third parties directly, you could use subclassing, i.e. keep the main model free of annotations and have annotated subclasses in your project that extend the main model.
Since implementing a DTO approach correctly takes some effort, I created Blaze-Persistence Entity Views, which will not only simplify the way you define DTOs but also improve the performance of your queries.
If you are interested, I even have an example for an external model that uses entity view subclasses to keep the main model clean.
Thank you for the answers, but the emphasis in the question is on microservice (MS) architecture and reusing entity POJOs defined in one MS as plain POJOs in another. From what I've read on microservices, it's closely related to another question: should MSs share any common functionality and classes at all, or be completely independent? There seems to be no definite agreement on it, and no definite answer or widely accepted pattern either.
From my recent experience here is what I adopted, and it works well so far.
Have common functionality across MSs - yes, in the form of a commons project added as a dependency to all MSs, with its dependencies marked as optional. Share entity classes (expose them in commons) - no.
The main reason is that entity classes are closely tied to the data store of a particular MS. And since the established rule is that MSs shouldn't share data stores, it makes sense not to share the entity classes for those data stores either. It keeps MSs more independent and free to manage their data stores in their own way. It does mean some more typing to add separate DTO classes and conversions between them, but it's a trade-off worth taking to retain MS independence. The reasons Christian Beikov and Maksim Gumerov mentioned apply as well.
What we do share (put in commons) is some common functionality and helper classes (for cloud, discovery, error handling, REST and JSON configuration...) plus pure DTOs, where the T really is transfer between MSs (REST entities or message payloads).

Entity Framework + Web API, return Entities (Complex, collections, etc) outside DbContext

Here is my situation: I use Entity Framework 4 with Web API.
The structure of my code is quite simple. I have a service layer where my REST API is organized, a business logic layer with business controllers that manage transactions between the REST calls and the data layer, and finally a data layer with generic repositories and a DAO to access the whole thing.
In my business controllers, I use a using block to inject either a non-transactional (read-only methods) or a transactional (CRUD methods) DbContext.
When returning values to my REST API, I serialize them into JSON.
The problem is that I keep getting this exception: Newtonsoft.Json.JsonSerializationException
I return my entities / collections / lists outside of my using {} block, which I think EF does not like by default.
In debug mode I sometimes manage to retrieve all the data, but not every time. Since my entities come from a query inside a DbContext, I think the behavior is that lazily loaded sub-properties are no longer available after the context has been disposed.
Fact is, I want to keep my structure as is, and I was wondering the following:
Is there a way of returning complete (not lazy-loaded) entities after leaving the using{} statement?
Thanks a lot
I actually read more about Entity Framework's behavior. What I am seeing is standard for EF: I have to force the context to Load() my referenced entities in order to have them available after leaving the context.
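For example (the ShopContext, Order and Item types below are stand-ins, not taken from the question), the related data can be brought into memory with Include, or with an explicit Load(), before the using block disposes the context:

using System.Collections.Generic;
using System.Data.Entity;   // lambda-based Include lives here in the DbContext API
using System.Linq;

// Minimal stand-ins for the entities (the real ones come from the EF model).
public class Item
{
    public int Id { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public virtual ICollection<Item> Items { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public class OrderService
{
    public List<Order> GetOrdersWithItems()
    {
        using (var context = new ShopContext())
        {
            // Eager-load the navigation property while the context is still alive,
            // so the data is already in memory when the JSON serializer runs.
            // Explicit alternative: context.Entry(order).Collection(o => o.Items).Load();
            return context.Orders
                          .Include(o => o.Items)
                          .ToList();
        }
    }
}

If the serializer still trips over lazy-loading proxies, turning off proxy creation (context.Configuration.ProxyCreationEnabled = false) is another commonly used lever, though whether it fits depends on the rest of the setup.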

3-Tier Architecture MVC 4 Project

I am working on a Web Application using ASP.NET MVC 4 and a 3-tier architecture, and I am stuck at certain points. I know similar threads exist here, but none were clear enough. I have already created the needed layers: UI (MVC 4 project), BLL - Business Logic Layer (class library), BOL - Business Object Layer (class library that contains ADO.NET) and DAL - Data Access Layer (class library).
Layer dependencies are as follows:
UI depends on BOL and BLL
BLL depends on BOL and DAL
DAL depends on BOL
I want you to correct me if I am wrong in the following: the BOL is the master reference layer which exchanges raw DB records with the DAL, then sends them to the BLL, which is responsible for any logical computations, gets the updated records and sends them to the controller in the UI.
Knowing the above,
Where should we place the CRUD functions?
Where and why should we create a class for declaring (plus set and get) the useful database fields?
What exactly should we put in the ViewModel folder? In other words, since we have already defined the variables in the previous step and in the entity, does it add any value to keep the Model folder?
Thank you in advance.
There can't be unambiguously correct answers to these questions, so any answer should be evaluated simply as an opinion. Here are mine:
Where should we place the CRUD functions? CRUD is a frequent pattern at different levels. A high-level Repository provides much the same methods as a low-level Table Gateway.
Where and why should we create a class for declaring (plus set and get) the useful database fields? These classes are usually simple Data Transfer Objects (DTOs) or slightly more complicated Active Records. In accordance with the Dependency Inversion Principle, the interfaces for these classes should be provided by the Business Logic Layer and the implementations by the Data Access Layer. Since DTOs are very simple classes, they can be provided "as is", without interface/implementation separation.
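A minimal sketch of that arrangement (all names are illustrative, not from the question): the BLL owns the DTO and the repository contract, and the DAL supplies the concrete implementation against the database.

using System.Collections.Generic;

// In the BLL: a plain DTO carrying the useful database fields.
public class ProductDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Also in the BLL: the CRUD contract the business code depends on.
public interface IProductRepository
{
    ProductDto GetById(int id);
    IEnumerable<ProductDto> GetAll();
    void Add(ProductDto product);
    void Update(ProductDto product);
    void Delete(int id);
}

The DAL then implements IProductRepository using ADO.NET (or any other data access technology), keeping the business code free of any reference to how the data is actually stored.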
What exactly should we put in the ViewModel folder; in other words, since we have already defined the variables in the previous step and in the entity, does it add any value to keep the Model folder? In theory entities should be our models, but in practice that's often not the case. For example, in ASP.NET MVC a model should provide not only the selected value of a drop-down field but also all of its possible values, so it requires a separate model class.
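For instance (the names here are made up), the drop-down case typically ends up as a view model that carries both the chosen value and the list of options to render:

using System.Collections.Generic;
using System.Web.Mvc;   // SelectListItem

// The entity/DTO only needs the selected value...
public class OrderDto
{
    public int Id { get; set; }
    public int ShippingMethodId { get; set; }
}

// ...while the view model also carries every possible value for the drop-down.
public class EditOrderViewModel
{
    public int Id { get; set; }
    public int ShippingMethodId { get; set; }
    public IEnumerable<SelectListItem> ShippingMethods { get; set; }
}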
The result may be that you have three or four very similar classes at different levels of the application. Of course, that's not very good, so aspect-oriented programming may be applied in this case.

AutoMapper - Convert two entity objects to a single DTO

I have two Entity Framework models that I want to combine into a single DTO. Is there a way to do this? There are a couple of ideas in the following question, but you would either have to create a composite model or lose the ability to call Mapper.AssertConfigurationIsValid to verify that all of the properties will be set.
Is it possible to map multiple DTO objects to a single ViewModel using Automapper?
From my point of view, it is highly recommended to create a composite type for merging entities. Entities are part of your business logic or your domain logic (depending on your architecture), whereas DTOs are part of the presentation logic or transport layer. You can create an explicit mapping that can be easily tested; automatic mapping (creating maps without options) is good for testing only. If you are using a DTO, then you will probably use it somewhere: in WCF? As a ViewModel?
Visual Studio and the .NET Framework can manage many files, and you don't have to sacrifice testability or simplicity (do you know about "technical debt"?).
Note: the role of Mapper.AssertConfigurationIsValid is to validate all mappings, whether generated automatically or defined explicitly. I suggest you call it every time.
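A minimal sketch of the composite-type approach with the classic static AutoMapper API (the entity and DTO shapes below are made up for the example):

using AutoMapper;

// Two EF entities (assumed shapes).
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Address
{
    public string City { get; set; }
    public string Country { get; set; }
}

// Composite source type wrapping both entities.
public class CustomerWithAddress
{
    public Customer Customer { get; set; }
    public Address Address { get; set; }
}

// The single flattened DTO.
public class CustomerDetailsDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
}

public static class MappingConfig
{
    public static void Configure()
    {
        // Explicit mapping: every DTO member gets a MapFrom,
        // so AssertConfigurationIsValid still passes.
        Mapper.CreateMap<CustomerWithAddress, CustomerDetailsDto>()
              .ForMember(d => d.Id,      o => o.MapFrom(s => s.Customer.Id))
              .ForMember(d => d.Name,    o => o.MapFrom(s => s.Customer.Name))
              .ForMember(d => d.City,    o => o.MapFrom(s => s.Address.City))
              .ForMember(d => d.Country, o => o.MapFrom(s => s.Address.Country));
        Mapper.AssertConfigurationIsValid();
    }
}

Usage would then be something like Mapper.Map<CustomerDetailsDto>(new CustomerWithAddress { Customer = customer, Address = address }); on AutoMapper 4.2 and later the same idea applies through a MapperConfiguration instance instead of the static Mapper class.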

Should I use partial classes as a business layer when using Entity Framework?

I am working on a project using Entity Framework. Is it okay to use partial classes of the EF-generated classes as the business layer? I am beginning to think that this is how EF is intended to be used.
I have attempted to use the DTO pattern and soon realized that I am just creating a bunch of mapping classes, which duplicates my effort and also causes more maintenance work and an additional layer.
I want to use self-tracking entities and pass the EF entities to all the layers. Please share your thoughts and ideas. Thanks
I had a look at using partial classes and found that exposing the database model up towards the UI layer would be restrictive.
For a few reasons:
The entity model created includes a deep relational object model which, depending on your schema, would get exposed to the UI layer (say the presenter of MVP or the ViewModel in MVVM).
The business logic layer typically exposes operations that you can code against. If you see a save method on the BLL, look at the parameters needed to do the save, and find a model that requires the construction of other entities (because of the relational nature of the entity model) just to do the save, it is not keeping the operation simple.
If you have a bunch of web services then the extra data will need to be sent across for no apparent gain.
You can create more immutable DTOs for your operations' parameters rather than encountering side effects because the same instance was modified in some other part of the application.
If you do TDD and follow YAGNI, you will tend to have a structure specifically designed for the operation you are writing, which is easier to construct tests against (not requiring you to create other objects, unrelated to the test, just because they are on the model). In this case you might have...
public class Order
{
    ...
    public Guid CustomerID { get; set; }
    ...
}
Instead of using the entity model generated by EF, which has references exposed...
public class Order
{
    ...
    public Customer Customer { get; set; }
    ...
}
This way only the ID of the customer is needed for an operation that takes an order. Why would you need to construct a Customer (and potentially other objects as well) for an operation that is concerned with taking orders?
If you are worried about the duplication and mapping, then have a look at Automapper
I would not do that, for the following reasons:
You lose the clear distinction between the data layer and the business layer
It makes the business layer more difficult to test
However, if you have some data-model-specific code, place it in a partial class to avoid it being lost when you regenerate the model.
I think partial classes would be a good idea. If the model is regenerated, you will not lose the business logic in the partial classes.
As an alternative, you can also look into EF4 Code Only so that you don't need to generate your model from the database.
I would use partial classes. There is no such thing as a data layer in DDD-ish code. There is a data tier, and it resides on SQL Server. The application code should contain only the business layer and some mappings which allow persisting business objects in that data tier.
Entity Framework is your data access code, so you shouldn't build your own. In most cases the database schema gets modified because the model has changed, not the other way around.
That being said, I would discourage you from sharing your entities across all the layers. I value the separation of the UI and the domain layer. I would use DTOs to transfer data in and out of the domain. If I had the necessary freedom, I would even use the CQRS pattern to get rid of mapping entities to DTOs: I would simply create a second EF data access project meant only for reading data for the UI, built on top of the same database. You read data through the read (anemic, without business logic) model, but you modify it by issuing commands that are executed against the real model implemented using EF and partial methods.
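A rough sketch of what such a read side can look like (every name here is assumed): a second, query-only context over the same database that projects straight into anemic DTOs for the UI.

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Anemic read model: no behavior, just what the UI needs.
public class OrderSummaryDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// Minimal stand-ins for the read-side mapping of the existing tables.
public class CustomerRow
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class OrderRow
{
    public int Id { get; set; }
    public bool IsClosed { get; set; }
    public decimal Total { get; set; }
    public virtual CustomerRow Customer { get; set; }
}

public class ReadContext : DbContext
{
    public DbSet<OrderRow> Orders { get; set; }
}

// Query service living in the read-only data access project.
public class OrderReadQueries
{
    public List<OrderSummaryDto> GetOpenOrders()
    {
        using (var db = new ReadContext())
        {
            return db.Orders
                     .Where(o => !o.IsClosed)
                     .Select(o => new OrderSummaryDto
                     {
                         Id = o.Id,
                         CustomerName = o.Customer.Name,
                         Total = o.Total
                     })
                     .ToList();
        }
    }
}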
Does this answer your question?
I wouldn't do that. Try to keep the layers as independent as possible, so that a tiny change in your database schema will not affect all your layers.
Entities can be used beyond the data layer, but they shouldn't be.
If at all, provide interfaces to be consumed and let your entities implement them (in the partial file); the BL should not know the entities, only the interfaces.
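A minimal sketch of that idea (the interface, entity and property names are made up): the hand-written half of the partial class adds the interface, and the BL codes only against the interface.

// Defined in (or referenced by) the business layer: the only shape the BL knows.
public interface ICustomer
{
    int Id { get; }
    string Name { get; }
}

// Stand-in for the EF-generated half of the partial class.
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hand-written partial file: simply declares that the entity fulfils the interface.
public partial class Customer : ICustomer
{
}

Regenerating the model only overwrites the generated half, so the interface declaration in the hand-written file survives.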