How many DbContext classes should I have?

Here's what I'd like to know.
Assumptions:
It's the year 2020, and not 2010
One database containing all relevant entities
One web API
Using Entity Framework
The question:
Is it typically considered best practice to break EF entities into...
A separate DbContext (and Repository) for each entity?
One DbContext to rule them all?
Something in between?
Are performance, maintenance, and/or unit testing affected one way or the other? This being, practically speaking, my first rodeo, I'm unsure of the ramifications of my choice and would appreciate input from others who've been further down this road.
I'm asking this question today because I've got a dozen lookup tables whose data will rarely be edited. It's starting to feel like overkill to have a separate DbContext for each.
I could just throw those entities under the same context used by the table they supply look-ups to.
Or I could do even more, and pull a dozen related entities under the umbrella of a single DbContext.
Or I could just stuff literally the entire database into one DbContext.
And I cannot tell if there is a right or wrong answer or even if it matters in practice.
I found a number of related questions, but they were either asked a long time ago or weren't the same question. (In particular, I am not asking about what scope to use, as this answer is, I believe, a good one.)
I've included below some relevant links that I found thus far.
when-should-i-create-a-new-dbcontext
linq-and-datacontext
use-multiple-dbcontext-in-one-application
how-many-dbcontexts-should-i-have
how-many-dbcontext-subclasses-should-i-have-in-relation-to-my-models

Related

Entity Framework: multiple DbContexts or not? And a few other performance-related questions

I’m building a calendar/entry/statistics application using quite complex models with a large number of relationships between models.
In general I’m concerned about performance, and I am considering different strategies and looking for input before implementing the application.
I’m completely new to DbContext pooling, so please excuse possible stupid questions. But does DbContext pooling have anything to do with the use of multiple DbContext classes, or is it about improved performance regardless of a single or multiple DbContexts?
I might end up implementing a large number of DbSets, or should I avoid it? I’m considering creating multiple DbContext classes for simplicity, but will this reduce memory use and improve performance? Would it be better/smarter to split the application into smaller projects?
Is there any performance difference in using IEnumerable vs ICollection? I’m avoiding the use of lists as much as possible. Or would it be even better to use IAsyncEnumerable?
Most of your performance pain points will come from a complex architecture that you have not simplified. Managing a monolithic application results in lots of unnecessary 'compensating' logic when one use case treads on the toes of another and your relationships are intertwined.
Optimisations such as whether to use Context Pooling or IEnumerable vs ICollection can come later. They should not affect the architecture of your solution.
If your project is as complex as you suggest, then I'd recommend you read up on Domain Driven Design and Microservices and break your application up into several projects (or groups of projects).
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/microservice-ddd-cqrs-patterns/
Each project (or group of projects) will have its own DbContext to administer the entities within that project.
Further, each DbContext should start off by only exposing Aggregate Roots through its DbSets. This can mean more database activity than is strictly necessary for a particular use case, but it's best to start with a clean architecture and squeeze out that last ounce of performance (sometimes at the cost of architectural clarity) if and when needed.
For example, if you want to add an attendee to an appointment, it can be appealing to attack the Attendee table directly. But to keep things clean, and considering an attendee cannot exist without an appointment, you should make Appointment the aggregate root and expose only appointments as the entry point for the outside world. The appointment can be retrieved from the database with its attendees; you then ask the appointment to add the attendees, and save the appointment graph by calling SaveChanges on the DbContext.
In summary, your Appointment is responsible for the functionality within its graph. You should ask the Appointment to add an Attendee instead of adding an Attendee to the Appointment's list of attendees yourself. It's a subtle shift in thinking that can reduce the complexity of your solution an awful lot.
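Here's a minimal sketch of that shape, assuming EF Core; the type names (Appointment, Attendee, SchedulingContext) are illustrative, not from the original question:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Attendee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Appointment
{
    public int Id { get; set; }
    public ICollection<Attendee> Attendees { get; } = new List<Attendee>();

    // The aggregate root guards its own invariants; callers never touch Attendee rows directly.
    public void AddAttendee(string name)
    {
        if (Attendees.Any(a => a.Name == name))
            throw new InvalidOperationException($"{name} is already attending.");
        Attendees.Add(new Attendee { Name = name });
    }
}

public class SchedulingContext : DbContext
{
    // Only the aggregate root is exposed; attendees are reached through their appointment.
    public DbSet<Appointment> Appointments { get; set; }
}
```

A caller then loads the root (e.g. context.Appointments.Include(a => a.Attendees).Single(...)), asks it to AddAttendee, and a single SaveChanges persists the whole graph.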
The art is deciding where those boundaries between microservices/contexts should lie. There can be pros and cons to two different architectures, with no clear winner.
To your other questions:
DbContext Pooling is about maintaining a pool of ready-to-go instantiated DbContexts. It saves the overhead of repeated DbContext instantiation. Probably not worth it, unless you have an awful lot of separate requests coming in and your profiling shows that this is a pain point.
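For reference, a hedged sketch of how pooling is typically switched on in EF Core's DI setup; CalendarContext and the SQL Server provider are assumptions, not anything from the question:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

// Illustrative context; pooling requires a constructor taking DbContextOptions.
public class CalendarContext : DbContext
{
    public CalendarContext(DbContextOptions<CalendarContext> options) : base(options) { }
}

public static class StartupSketch
{
    public static void ConfigureServices(IServiceCollection services, string connectionString)
    {
        // Pooled contexts are reset and reused instead of constructed per request,
        // saving instantiation overhead; safe only if the context holds no state of its own.
        services.AddDbContextPool<CalendarContext>(options =>
            options.UseSqlServer(connectionString));
    }
}
```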
The number of DbSets required is alluded to above.
As for IEnumerable or ICollection or IList, it depends on what functionality you require. Here's a nice simple summary ... https://medium.com/developers-arena/ienumerable-vs-icollection-vs-ilist-vs-iqueryable-in-c-2101351453db
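To make the distinction concrete, a small hypothetical example (the Appointment type is invented):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Appointment
{
    public int Id { get; set; }
    public DateTime Start { get; set; }
}

public static class SequenceDemo
{
    public static int CountUpcoming(IQueryable<Appointment> appointments)
    {
        // IEnumerable<T>: a forward-only sequence; coming from EF it is still a
        // deferred query, so nothing has hit the database yet.
        IEnumerable<Appointment> pending = appointments.Where(a => a.Start >= DateTime.Today);

        // ICollection<T>: materialized, countable, modifiable. ToList() runs the query.
        ICollection<Appointment> loaded = pending.ToList();
        return loaded.Count; // answered from memory, no second round-trip
    }
}
```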
Would it be better/smarter to split the application into smaller projects?
Yes, absolutely! Start with architectural clarity and then tweak for performance benefits where and when required. Don't start with performance as the goal (unless you're building a millisecond sensitive solution).

Combine Code First & Database First In Single Model?

Is there a way to combine code-first and database-first in the same context? We are running into massive development-time performance problems when editing the EDMX file (it takes 1.5 minutes to save). I've moved our non-insert/update/delete UDFs/stored procs to some custom T4 templates that automatically generate model-first code, but I can't seem to get OnModelCreating to be called when EDMX is involved.
Other things we've considered, but won't work for one reason or another:
We can't (reasonably) separate our code into multiple contexts, as there is a lot of overlap in our entity relationships. It also seems like quite a few people who have gone this route regret it.
We tried having 2 different contexts, but there are quite a few joins between Entities & UDFs. This may be our last hope, but I'd REALLY like to avoid it.
We can't switch to Dapper since we have unfortunately made heavy use of IQueryable.
We tried to go completely to Code-First, but there are features that we are using in EDMX that aren't supported (mostly related to insert/update/delete stored procedure mapping).
Take a look at the following link. I answered another question in a similar fashion:
How to use Repository pattern using Database first approach in entity framework
As I mentioned in that post, I would personally try to switch to a Code First approach and get rid of the EDMX files, as EDMX is already deprecated and, most importantly, its maintenance effort is considerable and much more complex compared with the Code First approach.
It is not that hard to switch to Code First from a Model First approach. The steps are roughly:
Display all files at the project level and expand the EDMX file. You will notice that the EDMX file has a .TT file with several files nested under it: the model context and the POCO classes as .cs or .vb classes (depending on the language you are using).
Unload the project, then right-click it and choose Edit.
In the project file, note the dependencies between the context classes and the .TT file.
Remove those dependencies so the generated class files are no longer nested under the .TT file in the XML.
Repeat the procedure for the model classes (the ones with the model definition).
Reload your project and remove the EDMX file(s).
You will probably need to do some tweaks and update names/references.
I did this a few times in the past and it worked flawlessly on production. You can also look for tools that do this conversion for you.
This might be a good opportunity for you to rethink the architecture as well.
BTW: Bullet point 4 shouldn't be a show stopper for you. You can map/use Stored Procedures via EF. Look at the following link:
How to call Stored Procedure in Entity Framework 6 (Code-First)?
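As a sketch of what that mapping looks like in EF6 code-first (entity and procedure names are invented for illustration):

```csharp
using System.Data.Entity;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Route Order's insert/update/delete through stored procedures instead of
        // generated SQL, similar to what EDMX CUD mapping provided.
        modelBuilder.Entity<Order>()
            .MapToStoredProcedures(s => s
                .Insert(i => i.HasName("Order_Insert"))
                .Update(u => u.HasName("Order_Update"))
                .Delete(d => d.HasName("Order_Delete")));
    }
}
```

Note that, as the original asker observes further down, this mapping is less flexible than EDMX about which columns get passed to the procedures.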
It also seems like quite a few people who have gone this route [multiple contexts] regret it.
I'm not one of them.
Your core problem is a context that gets too large. So break it up. I know that inevitably there will be entities that should be shared among several contexts, which may give rise to duplicate class names. An easy way to solve this is to rename the classes into their context-specific names.
For example, I have an ApplicationUser table (who hasn't) that maps to a class with the same name in the main context, but to a class AuthorizationUser in my AuthorizationContext, or ReportingUser in a ReportingContext. This isn't a problem at all. Most use cases revolve around one context type anyway, so it's impossible to get confused.
I even have specialized contexts that work on the same data as other contexts, but in a more economical way. For example, a context that doesn't map to calculated columns in the database, so there are no reads after inserts and updates (apart from identity values).
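A minimal sketch of the same-table/different-class idea in EF6 (all names illustrative):

```csharp
using System.Data.Entity;

// Context-specific projection of the shared ApplicationUser table.
public class AuthorizationUser
{
    public int Id { get; set; }
    public string UserName { get; set; }
    // Only the columns this context needs; unmapped columns are simply ignored.
}

public class AuthorizationContext : DbContext
{
    public DbSet<AuthorizationUser> Users { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Point the context-specific class at the shared table.
        modelBuilder.Entity<AuthorizationUser>().ToTable("ApplicationUser");
    }
}
```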
So I'd recommend to go for it, because ...
Is there a way to combine code-first and database-first in the same context?
No, there isn't. Both approaches have different ways of building the DbModel (containing the store model, the class model, and the mappings between both). In a generated DbContext you can even see that an UnintentionalCodeFirstException is thrown from OnModelCreating, to drive home that you're not supposed to use that method.
mostly related to insert/update/delete stored procedure mapping
As said in another answer, mapping CUD actions to stored procedures is supported in EF6 code-first.
I got here from a link in your comment on a different question, where you asked:
you mentioned that code-first & database-first is "technically possible" could you explain how to accomplish that?
First, the context of the other question was completely different. The OP there was asking if it was possible to use both database-first and code-first methodologies in the same project, but importantly, not necessarily the same context. My saying that it was "technically possible" applies to the former, not the latter. There is absolutely no way to utilize both code-first and database-first in the same context. Actually, to be a bit more specific, let's say there's no way to utilize an existing database and also migrate that same database with new entities.
The terminology gets a bit confused here due to some unfortunate naming by Microsoft when EF was being developed. Originally, you had just Model-first and Database-first. Both utilized EDMX. The only difference was that Model-first would let you design your entities and create a database from that, while Database-first took an existing database and created entities from that.
With EF 4.1, Code-first was introduced, which discarded EDMX entirely and let you work with POCOs (plain old CLR objects). However, despite the name, Code-first can and always has been able to work with an existing database or create a new one. Code-first, then, is really Model-first and Database-first combined, minus the horrid EDMX. Recently, the EF team has finally taken it a step further and deprecated EDMX entirely, including both the Model-first and Database-first methodologies. It is not recommended to continue to use either one at this point, and you can expect EDMX support to be dropped entirely in future versions of Visual Studio.
With all that said, let's go with the facts. You cannot both have an existing database and an EF-managed database in a single context. You would at least need two: one for your existing tables and one for those managed by EF. More to the point, these two contexts must reference different databases. If there are any existing tables in an EF-managed database, EF will attempt to remove them. Long and short, you have to segregate your EF-managed stuff from your externally managed stuff, which means you can't create foreign keys between entities in one context and another.
Your only real option here is to just do everything "database-first". In other words, you'll have to just treat your database as existing and manually create new tables, alter columns, etc. without relying on EF migrations at all. In this regard, you should also go ahead and dump the EDMX. Generate all your entities as POCOs and simply disable the database initializer in your context. In other words, Code-first with an existing database. I have additional information, if you need it.
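A hedged sketch of that end state in EF6, with invented names: POCOs for the existing tables and the initializer switched off so EF never touches the schema:

```csharp
using System.Data.Entity;

public class Customer // hypothetical POCO generated from the existing schema
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class LegacyContext : DbContext
{
    static LegacyContext()
    {
        // Treat the database as externally managed: EF will never create,
        // migrate, or even check the schema.
        Database.SetInitializer<LegacyContext>(null);
    }

    // The connection string name is illustrative.
    public LegacyContext() : base("name=LegacyConnection") { }

    public DbSet<Customer> Customers { get; set; }
}
```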
Thank you to everyone for the well thought out and thorough answers.
Many of these other answers assume that the stored procedure mappings in EF Code-First work the same, but they do not. I'm a bit fuzzy on this as it's been about 6 months since I looked at it, but I believe as of EF 6.3 code first stored procedures require that you pass every column from your entity to your insert/update stored procedure and that you only pass the key column(s) to your delete procedure. There isn't an option to pick and choose which columns you can pass. We have a requirement to maintain who deleted a record so we have to pass some additional information besides just a simple key.
That being said, what I ended up doing was using a T4 template to automatically generate my EDMX/Context/Model files from the database (with some additional metadata). This took our development-time experience down from 1.5 minutes to about 5 seconds.
My hope is EF stored procedure mappings will be improved to achieve parity with EDMX, and I can then just code-generate the Code-First mappings and remove the EDMX generation completely.

Entity Framework TPC with multiple inheritance

I am using EF with TPC, and I have multiple inheritance. Let's say I have:
Employee (abstract)
Developer (inherits from Employee)
SeniorDeveloper (inherits from Developer)
I inserted some rows in the database and EF reads them correctly.
BUT
When I insert a new SeniorDeveloper, the values get written to the SeniorDeveloper AND Developer database tables, hence querying just the Developers (context.Employees.OfType<Developer>()) also returns the recently added SeniorDevelopers.
Is there a way to tell EF, that it should store only in one table, or why does EF fall back to TPT strategy?
Since it doesn't look like EF supports multi-level inheritance with TPC, we ended up using TPC from Employee to Developer and TPT between Developer and SeniorDeveloper...
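For illustration, the EF6 fluent mapping for that mixed strategy might look roughly like this (property names invented; also note that with TPC you generally can't rely on store-generated identity keys, since key values must be unique across the concrete tables):

```csharp
using System.Data.Entity;

public abstract class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Developer : Employee
{
    public string MainLanguage { get; set; }
}

public class SeniorDeveloper : Developer
{
    public int MentoredJuniors { get; set; }
}

public class CompanyContext : DbContext
{
    public DbSet<Employee> Employees { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // TPC from the abstract base: Developer's table holds the inherited columns too.
        modelBuilder.Entity<Developer>().Map(m =>
        {
            m.MapInheritedProperties();
            m.ToTable("Developer");
        });

        // TPT below the concrete Developer: SeniorDeveloper's table holds only its own
        // columns plus the shared key, joined back to Developer when queried.
        modelBuilder.Entity<SeniorDeveloper>().ToTable("SeniorDeveloper");
    }
}
```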
I believe there is a reason for this, although I may not see the full picture and might just be speculating.
The situation
Indeed, the only way (that I see) for EF to list only the non-senior developers (your querying use case) in a TPT scenario by reading only the Developer table would be to use a discriminator, and we know that EF doesn't use one in TPT/TPC strategies.
Why? Well, remember that all senior developers are developers, so it's only natural (and necessary) that they have a Developer record as well as a SeniorDeveloper record.
The only exception is if Developer is an abstract type, in which case you can use a TPC strategy to remove the Developer table altogether. In your case however, Developer is concrete.
The current solution
Remembering this, and without a discriminator in the Developer table, the only way to determine if any developer is a non-senior developer is by checking if it is not a senior developer; in other words, by verifying that there is no record of the developer in the SeniorDeveloper table, or any other subtype table.
That did sound a little obvious, but now we understand why the SeniorDeveloper table must be used and accessed when its base type (Developer) is concrete (non-abstract).
The current implementation
I'm writing this from memory so I hope it isn't too off, but this is also what Slauma mentioned in another comment. You probably want to fire up a SQL profiler and verify this.
The way it is implemented is by requesting a UNION of projections of the tables. These projections simply add a discriminator declaring their own type in some encoded way[1]. In the union set, the rows can then be filtered based on this discriminator.
[1] If I remember correctly, it goes something like this: 0X for the base type, 0X0X for the first subtype in the union, 0X1X for the second subtype, and so on.
Trade-off #1
We can already identify a trade-off: EF can either store a discriminator in the table, or it can "generate one" at "run time".
The disadvantages of a stored discriminator are that it is less space efficient and possibly "ugly" (if that's an argument). The advantage is lookup performance in one very specific case: when we only want the records of the base type.
The disadvantage of a "run time" discriminator is that lookup performance is not as good for that same use case. The advantage is that it is more space efficient.
At first sight, it would seem that sometimes we might prefer to trade a little bit of space for query performance, and EF wouldn't let us.
In reality, it's not always clear when; by requesting a UNION of two tables, we just look up two indexes instead of one, and the performance difference is negligible. With a single level of inheritance, it can't be worse than 2x (since all subtype sets are disjoint). But wait, there's more.
Trade-off #2
Remember that I said the performance advantage of the stored-discriminator approach would only appear in the specific use-case where we lookup records of the base type. Why is that?
Well, if you're searching for developers that may or may not be senior developers[2], you're forced to look up the SeniorDeveloper table anyway. While this, again, seems obvious, what may be less obvious is that EF can't know in advance whether the results will all be of one type or another. This means it would have to issue two queries in the worst case: one on the Developer table, and if there is even one senior developer in the result set, a second one on the SeniorDeveloper table.
Unfortunately, the extra roundtrip probably has a bigger performance impact than a UNION of the two tables. (I say probably, I haven't verified it.) Worse, it increases for each subtype for which there is a row in the result set. Imagine a type with 3, or 5, or even 10 subtypes.
And that's your trade-off #2.
[2] Remember that this kind of operation could come from any part of your application(s), while the trade-off must be resolved globally to satisfy all processes/applications/users. Couple that with the fact that the EF team must make these trade-offs for all EF users (although it is true that they could add some configuration for these kinds of trade-offs).
A possible alternative
By batching SQL queries, it would be possible to avoid the multiple roundtrips. EF would have to send some procedural logic to the server for the conditional lookups (T-SQL). But since we already established in trade-off #1 that the performance advantage is most likely negligible in many cases, I'm not sure this would ever be worth the effort. Maybe someone could open an issue ticket for this to determine if it makes sense.
Conclusion
In the future, maybe someone can optimize a few typical operations in this specific scenario with some creative solutions, then provide some configuration switches when the optimization involves such trade-offs.
Right now however, I think EF has chosen a fair solution. In a strange way, it's almost cleaner.
A few notes
I believe the use of union is an optimization applied in certain cases. In other cases, it would be an outer join, but the use of a discriminator (and everything else) remains the same.
You mentioned multiple inheritance, which sort of confused me initially. In common object-oriented parlance, multiple inheritance is a construct in which a type has multiple base types. Many object-oriented type systems don't support this, including the CTS (used by all .NET languages). You mean something else here.
You also mentioned that EF would "fallback" to a TPT strategy. In the case of Developer/SeniorDeveloper, a TPC strategy would have the same results as a TPT strategy, since Developer is concrete. If you really want a single table, you must then use a TPH strategy.

Using multiple edmx files vs. one large edmx file?

I'm new to the Entity model thing and I'm looking for advice on how to organize my entity model.
Should I create one entity model file (.edmx) containing all the tables in my database, or should I break it into logical files for users, orders, products, etc.?
Please let me know which one is better and what the pros/cons (if any) of each alternative are.
Thanks.
I'm going to go against the grain here. I've built 2 large applications with EF now, one with a single edmx and one with several. There are pros and cons but generally I found life to be much easier with one edmx. The reason is that there is almost never a real separation of domains within an app, even if there appears to be from the start. New requirements pop up asking you to relate entities in different edmx's then you have to refactor and continually move things around.
All arguments for dividing will soon be obsolete when EF 5 introduces multiple diagrams, which address the only real benefit of dividing edmx files in the first place: not having to see everything you're not working on, and avoiding the designer performance impact.
In my app with the divided edmx's we now have some duplicate entities to get the benefit of navigation properties. Maybe your app has a true separation of domains, but usually everything connects to the user. I'm considering merging two of them now, but it will be a lot of work. So I would say keep them together until it becomes a problem.
Having one big EDM containing all the entities generally is NOT a good practice and is not recommended. You should come up with different sets of domain models each containing related objects while each set is unrelated and disconnected from the other one.
Take a look at this post where I explained this matter in detail:
Does it make sense to create a single diagram for all entities?
I think we should keep multiple edmx files in our project. It's like one edmx file per aggregate (collection of related objects). Per DDD (domain-driven design), we can have more than one aggregate in our model, and we can keep one edmx file for each aggregate.

Rules of thumb for writing "queries" using ADO.NET Entity Framework

I’m currently working on a prototype of a medium-size web application, and I thought that it would be good to also experiment with Entity Framework. The problem is that the major part of the application is not the data layer and logic, so I don't have much time to play with Entity Framework. On the other hand, the database schema is quite simple.
One of the problems I’m facing is that I cannot find a consistent way to "write queries". As far as I can tell, there are four "interfaces" for the job:
LINQ to Entities
LINQ to Entities using LINQ extension methods
Entity SQL
Query builder
OK, the first two are essentially the same, but it’s good to use just one for maintenance and consistency.
I’m mostly puzzled by the fact that none of them seems to be complete and the most general. I often find myself cornered and using some ugly looking combination of several of them. My guess is that Entity SQL is the most general one, but writing queries using strings feels like a step back. The main reason I’m experimenting with something like Entity Framework is that I like the compile time checking.
Some other random thought / issues:
I often also use the ObjectQuery.Include() method, but again it takes a string. Is this the only way?
When to use ObjectQuery.Execute() (vs. ToList())? Does it actually execute the query?
Should I execute queries as soon as possible (e.g. using ToList()), or should I not care and just leave the execution to the first enumeration that comes along?
Are ObjectQuery.Skip() and ObjectQuery.Take() available only as extension methods? Is there a better way to do paging? It’s 2009 and almost every web application deals with paging.
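(For what it's worth, the usual shape is a deterministic OrderBy followed by Skip/Take, which the provider translates into a single paged SQL query. Below is a generic sketch with invented names; Skip requires a preceding OrderBy.)

```csharp
using System;
using System.Linq;
using System.Linq.Expressions;

public static class PagingSketch
{
    // A stable ordering, then Skip/Take; both translate to SQL paging constructs.
    public static IQueryable<T> Page<T, TKey>(
        IQueryable<T> source,
        Expression<Func<T, TKey>> orderBy,
        int pageIndex, int pageSize)
    {
        return source
            .OrderBy(orderBy)          // Skip/Take without a stable order is undefined
            .Skip(pageIndex * pageSize)
            .Take(pageSize);
    }
}
```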
Overall, I understand there are many difficulties when implementing an ORM, and often one has to compromise. On the other hand, direct database access (e.g. ADO.NET) is plain and simple and has a well-defined interface (tabular results, data readers), so all code, no matter who writes it and when, is consistent. I don't want to be faced with too many choices whenever writing a database query. It's too tedious, and more than likely different developers will come up with different ways.
What are your rules of thumb?
I use LINQ-to-Entities as much as possible. I also try to standardise on the lambda form, as opposed to the extended SQL-style syntax. I have to admit to having had problems enforcing relationships and making compromises on efficiency just to expedite my coding of our application (e.g. Master->Child tables may need to be manually loaded), but all in all, EF is a good product.
I do use EF's .Include() method for eager loading, which as you say does require a string input. I find no problem with this, other than that of identifying the string to use, which is relatively simple. I guess if you're keen on compile-time checking of such relations, a model similar to Parent.GetChildren() might be more appropriate.
My application does require some "dynamic" queries to be performed, though. I have two ways of meeting this:
a) I create a mediator object, eg. ClientSearchMediator, which "knows" how to search for clients by name, etc. I can then put this through a SearchHandler.Search(ISearchMediator[] mediators) call (for example). This can be used to target specific data structures and sort results accordingly using LINQ-to-Entities.
b) For a looser experience, possibly as a result of a user designing their own query (using high level tools our application provides), eSQL is ideal for this purpose. It can be made to be injection-safe.
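A hedged sketch of that injection-safe eSQL pattern follows; Client and MyEntities stand in for model-generated names, not anything from the answer:

```csharp
using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;

// In EF1 this class would be generated from the EDMX rather than hand-written;
// it is declared here only so the sketch is self-contained.
public class Client
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ClientSearch
{
    // User input travels as an ObjectParameter and is never concatenated into
    // the eSQL text, which is what makes the query injection-safe.
    public static List<Client> ByName(ObjectContext context, string userInput)
    {
        var query = new ObjectQuery<Client>(
            "SELECT VALUE c FROM MyEntities.Clients AS c WHERE c.Name = @name", context);
        query.Parameters.Add(new ObjectParameter("name", userInput));
        return query.ToList();
    }
}
```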
I don't have enough knowledge to address all of this, but I'll at least take a few stabs.
I don't know why you think ADO.NET is more consistent than Entity Framework. There are many different ways to use ADO.NET and I've definitely seen inconsistency within a single code base.
Entity Framework is currently a 1.0 release and it suffers from many 1.0 type problems (incomplete & inconsistent API, missing features, etc.).
In regards to Include, I assume you are referring to eager loading. Multiple people (outside of Microsoft) have developed solutions for getting "type safe" includes (try googling something like: Entity Framework ObjectQueryExtension Include). That said, Include is more of a hint than anything. You can't force eager loading and you have to always remember to call the IsLoaded() method to see if your request was fulfilled. As far as I know, the way "Include" works is not changing at all in the next version of Entity Framework (4.0 - to ship with VS 2010).
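For reference, the community extensions mentioned above boil down to something like the following sketch (single-level property paths only; real versions also handle nested paths):

```csharp
using System;
using System.Data.Objects;
using System.Linq.Expressions;

public static class ObjectQueryExtensions
{
    // A minimal "type safe" Include: a property expression is turned into the
    // string the built-in Include(string) overload expects.
    public static ObjectQuery<T> Include<T, TProperty>(
        this ObjectQuery<T> query, Expression<Func<T, TProperty>> path)
    {
        var member = (MemberExpression)path.Body;
        return query.Include(member.Member.Name);
    }
}
```

With that in scope, query.Include(o => o.Customer) replaces query.Include("Customer") and survives renames at compile time.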
As far as executing the Linq query as soon as it's built vs. the last possible moment, that decision is situational. Personally, I would probably execute it as soon as it's built for the most part unless there was a compelling reason not to, but I can see other people going the opposite direction.
There are more mature ORMs on the market and Entity Framework isn't necessarily your best option. For the most part, you can bend Entity Framework to your will, but you may end up rolling your own implementation of features that come out of the box with other ORMs.