Is it possible, using Entity Framework Code First, to create a domain model of pure POCOs that are totally ignorant of Entity Framework, i.e. without decorating any classes or properties with EF-related attributes or annotations, and without marking properties virtual to support lazy loading?
Can I achieve this, or do I have to maintain two models, a persistence model and a domain model?
Related
I know the Entity Framework designer is used to create the model classes in the model-first approach, and the database is then created from those classes. But in the code-first approach it is also possible to write custom classes from which the database is created automatically. So what is the difference between the code-first approach and the model-first approach?
Code First is the more modern style of working with Entity Framework. As the name implies, you write the code first and the database model is generated for you, by using Entity Framework Migrations. In this scenario you are not using any graphical tool at all, everything is just pure code.
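For illustration, a minimal Code First sketch (the Blog/BloggingContext names are purely illustrative, not from the question): the classes are plain C#, and EF derives the schema from them by convention, with migrations applied to keep the database in sync.

using System.Data.Entity;

public class Blog
{
    public int Id { get; set; }            // becomes the primary key by convention
    public string Title { get; set; }      // becomes a column on the Blogs table
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; } // becomes the Blogs table
}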
Model first means creating the abstract database model in the designer. The code is then generated by templates. If you update the model, the code will be regenerated.
Entity Framework Code First best practice question?
Hi all, I am using EF Code First 6 on an N-tier app.
I have found that the POCO objects I am mapping to EF end up being really Entity Framework specific. Let me give you an example.
If I want to add a property to the object that is not related to EF, EF does not like it.
I've read that you can apply the [NotMapped] attribute, however that starts making the object difficult to maintain.
Also there might be developers who are not familiar with EF and will not understand the issue.
My question: is it good practice to keep the EF entity models separate and have a DTO to convert to/from a domain model, so that a developer can do what he likes with the domain model without interfering with the EF model, which is clearly 1-to-1 with the tables in the database?
Any Suggestions?
Your problem could be resolved by using the Fluent API approach instead of the attribute-based (data annotations) approach. See Entity Framework Fluent API.
You would configure your entity mappings in the DbContext rather than in the entity classes.
From the above linked article:
Specifying Not to Map a CLR Property to a Column in the Database
The following example shows how to specify that a property on a CLR type is not mapped to a column in the database.
modelBuilder.Entity<Department>().Ignore(t => t.Budget);
That would mean "ignore the Budget property in the Department entity."
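To make the wiring concrete, here is a hedged sketch of where that call would live; the Department and Budget names come from the quoted article, while CompanyContext is invented for the example. The mapping is configured by overriding OnModelCreating on the context, so the entity class itself carries no EF attributes.

using System.Data.Entity;

public class Department
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Budget { get; set; }     // used by the application, but not persisted
}

public class CompanyContext : DbContext     // illustrative context name
{
    public DbSet<Department> Departments { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Keep Budget out of the generated table without touching the entity class.
        modelBuilder.Entity<Department>().Ignore(t => t.Budget);
    }
}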
Let's say we are using DTO objects to transfer data between the service layer and the presentation (MVC) layer. In this case the presentation layer can only access DTO objects, so we can't use Entity Framework's lazy loading functionality.
Am I right here? Please give your suggestions.
(My DTO are not the entities in EF and I have implemented repository and unit of work pattern)
You can use lazy loading but only on your service side when you are working with attached entities.
First, get your definitions straight: are your DTO objects also your entities in EF 4.1? Are they (also) your models, and do they contain business logic?
If so, I would recommend turning off proxy creation ( myDbContext.Configuration.ProxyCreationEnabled = false; ), since proxies can't be serialized easily. Then use a repository for data access where, in the CRUD methods, you specify the right entity states, as described in: http://blogs.msdn.com/b/adonet/archive/2011/01/29/using-dbcontext-in-ef-feature-ctp5-part-4-add-attach-and-entity-states.aspx
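A rough sketch of that idea, assuming EF 6 namespaces; the OrderRepository/Order names are invented for the example, and the linked article covers the entity-state details:

using System.Data.Entity;

public class Order                 // illustrative entity
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderRepository       // illustrative repository
{
    private readonly DbContext _context;

    public OrderRepository(DbContext context)
    {
        _context = context;
        // Turn off proxy creation so entities handed to the presentation layer
        // are plain objects that serialize without proxy baggage.
        _context.Configuration.ProxyCreationEnabled = false;
    }

    public void Update(Order order)
    {
        // The entity arrives detached from the presentation layer; attach it
        // and set its state explicitly so SaveChanges issues an UPDATE.
        _context.Set<Order>().Attach(order);
        _context.Entry(order).State = EntityState.Modified;
        _context.SaveChanges();
    }
}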
We are in the process of designing an application with approximately 100 tables and complicated business logic. Windows Forms will be used on the client side, with WCF services and MS SQL on the server.
Custom DTOs are used for client-server communication, business entities are not distributed.
Which variant of Entity Framework to use (and why):
EF 4.0 EntityObjects
EF 4.0 POCO
EF 4.1 DbContext
Something else
Database-first approach is a requirement.
Also, is it worth implementing the Repository pattern? It seems a bit redundant, as there is one level of abstraction in the mapping itself and another one in the use of DTOs. I'm currently leaning towards auto-generated, extendable repositories for each entity returning IQueryable, just to have a place to put common queries, while still allowing the service layer to query the entity model directly.
Which variant to use? Basically, once you have custom DTOs, the only questions are: do you want control over the entities' code (their base class) and to make them independent of EF? Do you want to use code first? If the answers to all of these are no, you can use EntityObjects. If you want your entities to be persistence ignorant or to use a custom base class, go with POCOs. If you want to use code first or the new DbContext API, you will need EF 4.1. Some related topics:
EF 4.1 Code-first vs Model/Database-first
EF POCO code only VS EF POCO with Entity Data Model (this was related to CTP)
ADO.NET DbContext Generator vs. ADO.NET POCO Entity Generator
EF Model First or Code First Approach?
There are more things to consider when designing the service layer. You should be aware of the complications you will have to deal with when using EF in WCF. Your service will provide data to the WinForms application, which will work with it in "detached mode". Once the user has made all the changes he wants, he will post the data back to the service. But here comes the problem: you must tell EF what has changed. If you, for example, allow the user to change an order with all its order items (change quantities, add new items, delete some items), you must tell EF exactly what has changed, what was added and what was deleted. That is easy when you work with a single entity, but once you allow the user to change an object graph (especially many-to-many relations) it becomes quite tough. The most common solution is loading the whole graph and merging the state from the incoming DTOs into the loaded and attached graph, as sketched below. Another solution is using self-tracking entities instead of EntityObjects/POCOs + DTOs.
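A very rough sketch of that "load the graph and merge" approach; all type names (Order, OrderItem, the DTOs, OrdersContext) are invented for the example and assume EF 4.1+ with DbContext:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

// Illustrative shapes only -- none of these names come from the question.
public class OrderItem { public int Id { get; set; } public int Quantity { get; set; } }
public class Order { public int Id { get; set; } public ICollection<OrderItem> Items { get; set; } }
public class OrderItemDto { public int Id { get; set; } public int Quantity { get; set; } }
public class OrderDto { public int Id { get; set; } public List<OrderItemDto> Items { get; set; } }
public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderItem> OrderItems { get; set; }
}

public class OrderService
{
    public void UpdateOrder(OrderDto dto)
    {
        using (var context = new OrdersContext())
        {
            // Load the current graph so the context is tracking it.
            var order = context.Orders.Include(o => o.Items).Single(o => o.Id == dto.Id);

            // Remove items the client deleted.
            foreach (var item in order.Items.Where(i => dto.Items.All(d => d.Id != i.Id)).ToList())
                context.OrderItems.Remove(item);

            // Update existing items and add new ones.
            foreach (var itemDto in dto.Items)
            {
                var item = order.Items.SingleOrDefault(i => i.Id == itemDto.Id);
                if (item == null)
                    order.Items.Add(new OrderItem { Quantity = itemDto.Quantity });
                else
                    item.Quantity = itemDto.Quantity;
            }

            // The change tracker now knows exactly what was added, changed and deleted.
            context.SaveChanges();
        }
    }
}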
When discussing repositories I would refer you to this answer, which links to many other answers discussing repositories, their possible redundancy, and possible mistakes when using them just to make your code testable. Generally, each layer should be added only if there is a real need for it, such as better separation of concerns.
The main advantage of POCOs is that those classes can be your DTOs, so if you've already got custom DTOs that you're using, POCO seems a bit redundant. However, there are some other advantages which may or may not have value to you, since you didn't mention unit testing as a requirement. If you plan to write unit tests, then POCO is still the way to go. You probably won't notice much difference between 4.0 POCO and 4.1 since you won't be using the code-first feature (disclaimer: I've only used 4.0 POCO, so I'm not intimately familiar with any minor differences between the two, but they seem to be more or less the same--basically I was already using POCO in 4.0 and haven't seen anything that's made me want to update everything to use 4.1).
Also, depending on whether you plan to unit-test this layer, there's still value in implementing the repository/unit of work patterns when using Entity Framework. It serves to abstract away the data access logic (the context), not the entities themselves, and allows you to do things like mocking your context in unit tests. What I do is copy the T4 template for my context and use it to create the interface, then edit the T4 template for the context and have it implement that interface and use IObjectSet<T> instead of ObjectSet<T>. So instead of:
public class MyEntitiesContext
{
    public ObjectSet<MyClass> MyEntities { get { /* ... */ } }
    // ...
}
I end up with:
public interface IMyEntitiesContext
{
    IObjectSet<MyClass> MyEntities { get; }
}
and
public class MyEntitiesContext : IMyEntitiesContext
{
    public IObjectSet<MyClass> MyEntities { get { /* ... */ } }
    // ...
}
So I guess it really comes down to whether or not you plan to write unit tests for this layer. If you won't be doing anything that would require mocking out your context for testing, then the easiest thing to use would probably be 4.0 EntityObjects, since you aren't planning to pass your entities between layers and it would require the least effort to implement. If you plan to use mocking, then you'll probably want to use POCO and implement repository/unit of work.
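For illustration, here is a minimal fake of that interface; FakeObjectSet and FakeEntitiesContext are invented names, and MyClass/IMyEntitiesContext refer to the snippets above. A test can hand this fake to code that depends on IMyEntitiesContext and never touch a real database.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;
using System.Linq.Expressions;

// In-memory stand-in for ObjectSet<T>, good enough for most query-shaped tests.
public class FakeObjectSet<T> : IObjectSet<T> where T : class
{
    private readonly List<T> _data = new List<T>();

    public void AddObject(T entity) { _data.Add(entity); }
    public void Attach(T entity) { _data.Add(entity); }
    public void DeleteObject(T entity) { _data.Remove(entity); }
    public void Detach(T entity) { _data.Remove(entity); }

    public Type ElementType { get { return typeof(T); } }
    public Expression Expression { get { return _data.AsQueryable().Expression; } }
    public IQueryProvider Provider { get { return _data.AsQueryable().Provider; } }
    public IEnumerator<T> GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public class FakeEntitiesContext : IMyEntitiesContext
{
    private readonly FakeObjectSet<MyClass> _myEntities = new FakeObjectSet<MyClass>();
    public IObjectSet<MyClass> MyEntities { get { return _myEntities; } }
}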
What does one lose by creating POCOs using T4 templates in Entity Framework 4.0? Why is creating POCOs not the default behavior in Entity Framework 4.0?
You lose a number of things. A "pure" POCO is of limited use in an ORM, because it will not do change tracking. In other words, when you mutate the object and then save changes to the context, you would like the changed properties to be saved to the database. With a "pure" POCO you can do this with snapshot-based change tracking, which is fairly inefficient. You can also do it with runtime proxies, which force you to make your tracked properties public virtual, so you arguably don't have a "POCO" anymore. Also, using proxies means that you don't know the true runtime type of the instance.
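To make that trade-off concrete, a small sketch (Customer/TrackedCustomer/Order are invented names): the first class is a "pure" POCO that EF can only track via snapshots, the second is shaped so EF can derive runtime proxies from it.

using System.Collections.Generic;

public class Order { public int Id { get; set; } }    // stub for the example

// "Pure" POCO: nothing virtual, so EF must compare before/after snapshots
// of every property to detect changes.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Proxy-friendly POCO: every mapped property is public virtual, so EF can
// subclass it at runtime and intercept setters for change tracking (and the
// navigation property for lazy loading) -- but the class is now shaped
// around EF's needs rather than the domain's.
public class TrackedCustomer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}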
You also lose some of the convenience properties like EntityState.
"Pure" POCOs cannot do lazy loading. Again, you can work around this with proxy types, but, again, if you're using proxies, you don't really have a "pure" POCO.
On top of all of this, there is less need to use POCO entities in the Entity Framework than in some other ORMs. This is because you can always project your entity types onto POCO instances using LINQ, without having to materialize the entity instances first. So "pure" POCOs are always available in an Entity Framework application, even if you don't happen to map your entities that way.
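A minimal sketch of that projection idea (the entity and DTO shapes are invented for the example): the Select is translated to SQL, and the results come back as plain objects that were never attached to the context.

using System.Collections.Generic;
using System.Linq;

public class Order    { public int Id { get; set; } }
public class Customer { public string Name { get; set; } public ICollection<Order> Orders { get; set; } }

public class CustomerSummary    // plain class, not mapped by EF
{
    public string Name { get; set; }
    public int OrderCount { get; set; }
}

public static class Reports
{
    // Pass in context.Customers (or any entity query); no entity instances
    // are materialized, only CustomerSummary objects.
    public static List<CustomerSummary> GetSummaries(IQueryable<Customer> customers)
    {
        return customers
            .Select(c => new CustomerSummary { Name = c.Name, OrderCount = c.Orders.Count })
            .ToList();
    }
}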