Conventions Roadmap for EF7 - entity-framework-core

I'd like to get a clearer picture of the roadmap for Code First conventions for EF7. Presently there are three implementations of IEntityTypeConvention:
KeyConvention
PropertiesConvention
RelationshipDiscoveryConvention
Are there plans for achieving parity with EF6 Code First conventions? For example, I don't see anything like the PluralizingTableNameConvention. Also, what are the plans for the API for customizing conventions?

As Anthony mentioned in his comment, this is currently on our backlog to be implemented - https://github.com/aspnet/EntityFramework/issues/214.
The API for custom conventions will likely look quite different from EF6 because of the new metadata API and our change to make ModelBuilder incrementally build a model rather than storing a bunch of configuration and then building the model when needed.
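In the meantime, one common workaround is to apply a naming rule by hand while the model is built. A minimal sketch, assuming a recent EF Core version (where SetTableName is available on the mutable model) and a hypothetical Pluralize helper:

    // Sketch: approximating a pluralizing table-name convention by rewriting
    // the model in OnModelCreating. Pluralize is a hypothetical helper.
    using Microsoft.EntityFrameworkCore;

    public class AppDbContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            foreach (var entityType in modelBuilder.Model.GetEntityTypes())
            {
                // Replace the default table name (the DbSet/CLR type name).
                entityType.SetTableName(Pluralize(entityType.ClrType.Name));
            }
        }

        private static string Pluralize(string name) =>
            name.EndsWith("s") ? name : name + "s"; // naive placeholder inflector
    }

    public class Blog { public int Id { get; set; } }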

Related

Can't find the best way to apply best-practice design techniques in software development

[Pre]
I have to say that I'm a complete newbie who is trying to put together important puzzle pieces such as DDD, TDD, MVVM, and EF Core. I have about 10 years of Windows Forms development experience, done in a completely wrong manner, and after I joined Pluralsight I understood that I had simply wasted my last 10 years, and this is really sad :).
[Problem description]
I have an app that I want to rewrite from scratch using the latest and greatest techniques I've learned over the last 6 months on Pluralsight. The problem is that this new knowledge is stopping me, because I'm simply afraid I'll do it wrong again... (that is stupid, I know, but it is what it is).
So, back to my questions. I have a big problem domain and pretty well-documented business logic that I have to turn into code. I understand that my starting point is designing the data layer, and for that purpose I want to use Entity Framework Core (I saw Julie Lerman's course on Pluralsight; I think she is amazing, and she inspired me to use EF Core as the ORM for my app). But at the same time, my lack of experience produces more questions than what I've learned on Pluralsight answers, and I will try to write them all down (please don't judge me too hard):
1. It looks like I will need two or even more data-model projects in my solution. Here is why: I have multiple document-set types, and each type contains more than one reference book used to generate unique file names and data sheets. But it seems weird to have three data-model projects such as MyApp.PackType1.DataModel, MyApp.PackType2.DataModel, and so on, where each has EF Core installed and each generates its own database from its own data context. Isn't that very redundant, or is this the correct way?
2. I don't understand how to join these multiple data-model projects, including a Shared Kernel, into one nice model.
3. I don't understand the best way to design my data classes. Should they be plain POCOs, or can I design them as nicer-looking classes with private fields and public properties? What are the best practices here?
4. I also don't understand the best practice for using the MVVM pattern on top of all that, and whether MVVM is applicable at all in this case.
5. Should I keep my tests in separate projects, like MyApp.PackType1.DataModel.Tests, or in the same project?
Best regards,
Maks!
P.S.
Apologies for the unclear definitions and questions; English isn't my native language.
It's very difficult to answer your question because you have asked about a lot of details, but I am going to provide a brief answer that I hope will be helpful.
1. You can have only one model for your entities (DDD) and create sub-models from it in your end-level projects (Web API or UI).
2. See point #1.
3. Create an entity-layer project that represents your database, and then create DTOs for specific scenarios.
4. From my point of view, use Angular; you could use another UI framework such as React or Vue.js, but I prefer Angular for building UI interfaces that consume a .NET Core Web API from the client.
5. Create unit tests and integration tests for your Web API projects; additionally, you can use the in-memory database provider for tests (see the sketch below).
This guide may be useful: https://www.codeproject.com/Articles/1160586/Entity-Framework-Core-for-Enterprise
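On point 5, a minimal sketch of an in-memory test using EF Core's InMemory provider (the Microsoft.EntityFrameworkCore.InMemory package); AppDbContext, Document, and the xUnit-style test are all illustrative names:

    // Sketch: a test that never touches a real database by swapping in the
    // EF Core InMemory provider. All type names here are illustrative.
    using System.Linq;
    using Microsoft.EntityFrameworkCore;
    using Xunit;

    public class Document
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
        public DbSet<Document> Documents { get; set; }
    }

    public class DocumentTests
    {
        [Fact]
        public void Saves_and_reads_a_document()
        {
            var options = new DbContextOptionsBuilder<AppDbContext>()
                .UseInMemoryDatabase(databaseName: "DocumentTests")
                .Options;

            using (var context = new AppDbContext(options))
            {
                context.Documents.Add(new Document { Name = "Spec-001" });
                context.SaveChanges();
            }

            using (var context = new AppDbContext(options)) // same named store
            {
                Assert.Equal(1, context.Documents.Count());
            }
        }
    }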
Regards
Hm, multiple DbContexts (models) usually come about when you have distinct databases you are using. The general rule is one context = one database. Exceptions can occur when there are a lot of tables that can be grouped functionally, but there are downsides to that approach.
A DbContext is essentially a repository pattern over individual tables. Using a Unit of Work pattern and layering it with a custom repository provider would let you make it "appear" as a single database, hiding the complexity from the front-end.
Your entity classes are usually created as straight POCOs. You can get creative with different DTOs on top of them.
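For illustration, here is a plain POCO entity next to a DTO shaped for one screen (all names are made up):

    // Sketch: a straight POCO entity as EF maps it, and a narrow DTO for a view.
    using System;
    using System.Collections.Generic;

    public class Order
    {
        public int Id { get; set; }
        public DateTime PlacedOn { get; set; }
        public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
    }

    public class OrderLine
    {
        public int Id { get; set; }
        public string Product { get; set; }
        public int Quantity { get; set; }
    }

    // DTO shaped for the page, not for the database schema.
    public class OrderSummaryDto
    {
        public int Id { get; set; }
        public DateTime PlacedOn { get; set; }
        public int LineCount { get; set; }
    }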
In a nutshell, the MVVM-style flow goes like this:
1. A request goes from the UI to a controller.
2. The controller possibly issues multiple calls to the data layer to gather data.
3. The data is assembled into a single ViewModel (everything the page needs).
4. The ViewModel is returned to the UI.
The beauty of the approach is a single round trip (request/response) to the UI; see the sketch below.
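A minimal sketch of that flow in an ASP.NET Core controller; AppDbContext, Order, Product, and their properties are illustrative and assumed to exist elsewhere:

    // Sketch: one controller action gathers data, builds a single ViewModel,
    // and sends everything back in one response.
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc;

    public class DashboardViewModel
    {
        public int OpenOrders { get; set; }
        public List<Product> TopProducts { get; set; }
    }

    public class DashboardController : Controller
    {
        private readonly AppDbContext _db;
        public DashboardController(AppDbContext db) => _db = db;

        public IActionResult Index()
        {
            // Multiple data-layer calls...
            var vm = new DashboardViewModel
            {
                OpenOrders  = _db.Orders.Count(o => !o.Shipped),
                TopProducts = _db.Products.OrderByDescending(p => p.Sales)
                                          .Take(5)
                                          .ToList()
            };
            return View(vm); // ...and one round trip back to the UI
        }
    }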
A separate project, in my opinion. There are techniques to spoof the database connection with EF so you are not using "live" data.
That CodeProject article will come in handy.

Entity Framework 6 Database-First and Onion Architecture

I am using Entity Framework 6 database-first. I am converting the project to implement the onion architecture to move towards better separation of concerns. I have read many articles and watched many videos but having some issues deciding on my solution structure.
I have 4 projects: Core, Infrastructure, Web & Tests.
From what I've learned, the .edmx file should be placed under my "Infrastructure" folder. However, I have also read about using the Repository and Unit of Work patterns to assist with EF decoupling and using Dependency Injection.
With this being said:
Will I have to create repository interfaces under Core for ALL entities in my model? If so, how would one maintain this on a huge database? I have looked into AutoMapper but found issues with it presenting IEnumerables vs. IQueryables, though there is an extension available to help with this. I can dig deeper down this route, but I want to hear back first.
As an alternative, should I leave my .edmx in Infrastructure and move the .tt T4 files for my entities to Core? Does this introduce tight coupling, or is it a good solution?
Would a generic repository interface work well with whatever you suggest? Or does EF6 already resolve the Repository and UoW pattern issues?
Thank you for looking at my question and please present any alternative responses as well.
I found a similar post here that was not answered:
EF6 and Onion architecture - database first and without Repository pattern
Database first doesn't completely rule out Onion Architecture (aka Ports and Adapters or Hexagonal Architecture, so if you see references to those, they're the same thing), but it's certainly more difficult. Onion Architecture and the separation of concerns it allows fit very nicely with domain-driven design (I think you mentioned on Twitter that you'd already seen some of my videos on this subject on Pluralsight).
You should definitely avoid putting the EDMX in the Core or Web projects - Infrastructure is the right location for that. At that point, with database-first, you're going to have EF entities in Infrastructure. You want your business objects/domain entities to live in Core, though. At that point you basically have two options if you want to continue down this path:
1) Switch from database first to code first (perhaps using a tool) so that you can have POCO entities in Core.
2) Map back and forth between your Infrastructure entities and your Core objects, perhaps using something like AutoMapper. Before EF supported POCO entities this was the approach I followed when using it, and I would write repositories that only dealt with Core objects but internally would map to EF-specific entities.
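A minimal sketch of that second option, assuming the DbContext-based generator and illustrative Core/Infrastructure types (the hand-written mapping is exactly what AutoMapper could replace):

    // Sketch: a repository that exposes only Core objects and maps to the
    // EF-generated entities internally. All names are illustrative.
    public interface ICustomerRepository // lives in Core
    {
        Core.Customer GetById(int id);
        void Save(Core.Customer customer);
    }

    public class CustomerRepository : ICustomerRepository // lives in Infrastructure
    {
        private readonly StoreEntities _context; // database-first EF context

        public CustomerRepository(StoreEntities context) { _context = context; }

        public Core.Customer GetById(int id)
        {
            var row = _context.Customers.Find(id);      // Infrastructure entity
            return new Core.Customer(row.Id, row.Name); // Core domain object
        }

        public void Save(Core.Customer customer)
        {
            var row = _context.Customers.Find(customer.Id);
            row.Name = customer.Name; // map Core state back onto the EF entity
            _context.SaveChanges();
        }
    }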
As to your questions about Repositories and Units of Work, there's been a lot written about this already, on SO and elsewhere. You can certainly use a generic repository implementation to allow for easy CRUD access to a large set of entities, and it sounds like that may be a quick way for you to move forward in your scenario. However, my general recommendation is to avoid generic repositories as your go-to means of accessing your business objects, and instead use Aggregates (see DDD or my DDD course w/Julie Lerman on Pluralsight) with one concrete repository per Aggregate Root. You can separate out complex business entities from CRUD operations, too, and only follow the Aggregate approach where it is warranted. The benefit you get from this approach is that you're constraining how the objects are accessed, and getting similar benefits to a Facade over your (large) set of database entities.
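A hedged sketch of what "one concrete repository per Aggregate Root" might look like (Order as the root; all names illustrative):

    // Sketch: the repository is scoped to the aggregate root. Order lines are
    // only reachable through the root, so they get no repository of their own.
    public interface IOrderRepository
    {
        Order GetById(int id); // loads the root together with its children
        void Add(Order order);
        void Save(Order order); // persists the aggregate as a whole
    }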
Don't feel like you can only have one DbContext per application. It sounds like you are evolving this design over time, not starting with a green-field application. To that end, you could keep your .edmx file and perhaps a generic repository for CRUD purposes, but then create a new code-first DbContext for a specific set of operations that warrant POCO entities, separation of concerns, increased testability, etc. Over time, you can shift the bulk of the essential code to use this, while still keeping the existing DbContext so you don't lose any current functionality.
I am using Entity Framework 6.1 in my DDD project. Code First works out very well if you want to do Onion Architecture.
In my project we have completely isolated the Repository from the Domain Model. An Application Service is what uses repositories to load aggregates from, and persist aggregates to, the database. Hence, there are no repository interfaces in the domain (Core).
The second option, using T4 to generate POCOs in a separate assembly, is a good idea. Please remember that your domain model (Core) should be persistence-ignorant.
While generic repositories are good for enforcing aggregate-level operations, I prefer specific repositories, simply because not every Aggregate is going to need all of those generic repository operations.
http://codingcraft.wordpress.com/

Should CSLA be used with a dependency injection framework?

My development team is evaluating the various frameworks available for .NET to simplify our programming, one of which is CSLA. I have to admit to being a bit confused as to whether CSLA would benefit from being used in conjunction with a dependency injection framework, such as Spring.NET or Windsor. If we combined one of those DI frameworks with, say, Entity Framework to handle ORM duties, does that negate the need for, or the benefit of, using CSLA altogether?
I have various levels of understanding of all these frameworks, and I'm trying to get a big picture of what will best benefit our enterprise architecture and object design.
Thank you!
CSLA is a framework for creating business entities, so it has separate concerns from an IoC container or an ORM. In an enterprise application you should consider the benefits of all three.
In particular, you should consider CSLA if you want data binding built into your models, dirty checking, N-level undo, validation and business rules, as well as the data portal implementation, which allows easy configuration for n-tier deployments.
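For a flavor of what that buys you, a hedged sketch of a CSLA-style business object (CSLA 4-era API, written from memory; verify against the version you evaluate):

    // Sketch: managed properties are what power CSLA's data binding, dirty
    // checking, and rules engine. Names are illustrative.
    public class Customer : Csla.BusinessBase<Customer>
    {
        public static readonly Csla.PropertyInfo<string> NameProperty =
            RegisterProperty<string>(c => c.Name);

        public string Name
        {
            get { return GetProperty(NameProperty); }
            set { SetProperty(NameProperty, value); } // marks the object dirty
        }

        protected override void AddBusinessRules()
        {
            base.AddBusinessRules();
            BusinessRules.AddRule(new Csla.Rules.CommonRules.Required(NameProperty));
        }
    }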
Short answer: Yes.
Long answer: It requires a bit of grunt work and some experimentation to set up, but it can be done without fundamentally breaking CSLA. I put together a working prototype using StructureMap and the repository pattern, and used the BuildUp method (setter injection) to inject dependencies within CSLA. I used a method similar to the one found here to ensure that my business objects are re-injected when they are deserialized.
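A rough sketch of that technique, from memory of StructureMap 2.x's API (ObjectFactory.BuildUp performs setter injection on an existing instance; the repository type and the OnDeserialized hook shown here are illustrative):

    // Sketch: re-injecting dependencies into a CSLA object after the data
    // portal deserializes it, via StructureMap's BuildUp.
    public class CustomerEdit : Csla.BusinessBase<CustomerEdit>
    {
        // Setter-injection target; configure it in the registry, e.g. with
        // SetAllProperties(p => p.OfType<ICustomerRepository>()).
        public ICustomerRepository Repository { get; set; }

        protected override void OnDeserialized(System.Runtime.Serialization.StreamingContext context)
        {
            base.OnDeserialized(context);
            StructureMap.ObjectFactory.BuildUp(this); // re-inject after serialization
        }
    }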
I also use the registry base class of StructureMap to separate my configuration into presentation, CSLA client, CSLA server, and CSLA global settings. This way I can use the linked file feature of Visual Studio to include the CSLA server and CSLA global configuration files within the server-side Data Portal and the configuration will always be the same in both places. This was to ensure I can still change the Data Portal configuration settings in CSLA from 2 tier to 3 tier without breaking anything.
Anyway, I am still weighing the potential benefits against the drawbacks of using DI, but so far I am leaning toward using it because testing will be much easier, although I am skeptical of trying to use any of the advanced features of DI, such as interception. I recommend reading the book Dependency Injection in .NET by Mark Seemann to understand the right and wrong ways to use DI, because there is a lot of misinformation on the Internet.

Using DbContext and Database First in EF 4.1

I have started working on a new project and am switching from LinqToSQL to EF 4.1 as my ORM.
I already have a database set up to work with, so I am going with the database-first approach. By default, EF generates a context that extends ObjectContext. I wanted to know whether a good approach would be to replace it with DbContext.
Most of the available examples deal only with Code First and DbContext, but DbContext can be used with Database First too. Are there any advantages to using DbContext? From what I have read, DbContext is a simplified version of ObjectContext and makes it easier to work with. Are there any other advantages or disadvantages?
You will not replace anything manually. You will need the DbContext T4 Generator available in the VS Gallery. Don't touch your autogenerated files - your changes will be lost every time you modify the EDMX file.
I answered a similar question last year. Now my answer is mostly: for new users, the DbContext API is probably better. The DbContext API is simplified - both in terms of usage and features - but you can still get the ObjectContext from a DbContext and use features available only in the ObjectContext API. On the other hand, the DbContext API has some additional performance impact and an additional layer of bugs. In a simple project you will probably not find any disadvantage in the DbContext API - you will not see the performance impact, you will not use the corner features available only in ObjectContext, and you will not be affected by occasional bugs.
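Dropping down to the ObjectContext from a DbContext looks like this (a standard EF 4.1 mechanism; dbContext stands for your own context instance):

    // Getting the underlying ObjectContext from a DbContext when you need a
    // feature only the older API exposes.
    using System.Data.Entity.Infrastructure;
    using System.Data.Objects;

    var objectContext = ((IObjectContextAdapter)dbContext).ObjectContext;
    // e.g. objectContext.Refresh(RefreshMode.StoreWins, someEntity);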
A lot of information and blog posts have accumulated since the DbContext API was released, so you don't have to be afraid that you won't find descriptions of the API. Also, the ADO.NET team now treats the DbContext API as their flagship.
I'm not a big fan of the DbContext API, but my opinion is not related to its functionality, only to its existence - there is no need to have two APIs and split the development capacity of the ADO.NET team to maintain and fix two APIs doing the same thing. It only means there is less capacity for implementing genuinely new features.
I'm using it now with Oracle on an add-on to an existing application. The simplification that Ladislav refers to works well for me on this project, as I am short on time and resources. I have not found any gotchas as long as you stick to simple CRUD operations and fewer than ~150 tables.
You can still use metadata annotations to provide basic validation and localization, and there is enough documentation out there, but you won't find much on official Microsoft sites.
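For example, validation metadata can live on a "buddy class" so it survives regeneration of the database-first classes (all names illustrative):

    // Sketch: attaching data annotations to a generated partial class via
    // MetadataType, so regenerating the model does not wipe the attributes.
    using System.ComponentModel.DataAnnotations;

    [MetadataType(typeof(CustomerMetadata))]
    public partial class Customer { } // pairs with the generated Customer class

    public class CustomerMetadata
    {
        [Required]
        [StringLength(100)]
        public string Name { get; set; }
    }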

ADO.NET DbContext Generator vs. ADO.NET Poco Entity Generator (ObjectContext)

I am about to start implementing the data access infrastructure of a project that was architected with an approach to DDD (it's my first attempt on DDD, so be gentle ;-) ).
I will be using Entity Framework. Until now, I was looking into the method taught by Julie Lerman in her great book, Programming Entity Framework, where the ADO.NET POCO Entity Generator is used, with some changes to the T4 templates and some more custom code.
Today I started reading articles on EF4.1 and the ADO.NET DbContext Generator, using Database First approach, and I'm trying to decide with which one should I go.
DbContext and EF 4.1's approach to DDD seem to be a nicer, cleaner way than POCO entities, but I'm afraid it could lead to some issues in the near future, since EF 4.1 is still in RC.
From ADO.NET team blog, I know that EF4.1 does not include:
Enum support
Spatial data type support
Stored Procedure support in Code First
Migration support in Code First
Customizable conventions in Code First
From my understanding, since I will be using Database First, fewer of the missing features will affect me.
In conclusion, my question is:
Can I replace POCO Entities Generator with EF4.1 DbContext Generator?
From the point of view of clean POCO entity creation, there is no difference between the two generators. Both produce the same entities; however, the ADO.NET POCO Entity Generator is based on the ObjectContext API, whereas the ADO.NET DbContext Generator is based on the DbContext API.
The DbContext API has a few very nice new features (Local, Query on a navigation property, etc.), and the API is somewhat simplified, but at the same time some features used in the ObjectContext API seem to be missing from the DbContext API (or at least they haven't been explored enough yet).
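Both features look roughly like this (standard EF 4.1 DbContext API; the context, order, and Lines model are illustrative):

    // DbSet.Local: the tracked, in-memory entities as an ObservableCollection.
    var trackedOrders = context.Orders.Local;

    // Query on a navigation property: filter the relationship in the database
    // instead of loading every child row.
    var bigLines = context.Entry(order)
                          .Collection(o => o.Lines)
                          .Query()                    // IQueryable over the relationship
                          .Where(l => l.Quantity > 10)
                          .ToList();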
EF 4.1 RC is a go-live release. That means you can build a real application with it, because the API will not change in the RTW (only bugs will be fixed). Also, the RTW should ship next month, so I don't think you will have finished your application before the final version arrives.
ObjectContext API or DbContext API? The ObjectContext API is much better covered by documentation and blog posts. You can find plenty of examples about it, and its limitations are already well known. The DbContext API is a new release - a very promising one, mostly because of the code-first approach - but there is still a very limited number of blog posts, no book, and the API is not proven enough. So it depends on whether you are ready to fight with a new API. If not, the ObjectContext API is still a good choice, because you don't need the code-first approach.