Recommended lifecycle for DbContext in ASP.NET Web API? - entity-framework

Consider an ASP.NET Web API 2 application, that provides fairly straightforward access to several DB tables using Entity Framework.
Which of the following options for object lifecycles would be best in terms of servicing the most concurrent requests?
1. Instantiating a singleton DbContext to be used by all requests.
2. Instantiating one DbContext for each incoming request.
3. Instantiating one DbContext for each thread in the thread pool servicing incoming requests.
4. Other?
Follow up question - What if I change the requirement to "requiring the least amount of DB server resources"? What would then be the best option?

Based on a detailed reply to another question:
Options 1 and 3 in my question are completely invalid. The reason is that DbContext is not thread-safe, and having multiple threads access it will lead to inconsistent data states and exceptions. Even in a "per thread" setup, ASP.NET Web API is likely to shift the handling of a single request between several threads arbitrarily.
Option 2 - instantiating one DbContext for each incoming request - is the preferred way, as it ensures that only one request (and therefore one thread at a time) works with a given DbContext.
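For illustration, a minimal sketch of option 2 in a Web API 2 controller; AppDbContext and its Orders set are placeholder names, not anything from the question:

using System.Web.Http;

// One DbContext per request: it lives exactly as long as the controller
// that the framework creates for the incoming request.
public class OrdersController : ApiController
{
    private readonly AppDbContext _db = new AppDbContext();

    public IHttpActionResult Get(int id)
    {
        var order = _db.Orders.Find(id);
        if (order == null)
        {
            return NotFound();
        }
        return Ok(order);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            _db.Dispose(); // the context (and its connection) is released with the request
        }
        base.Dispose(disposing);
    }
}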

Apart from not being thread-safe, DbContexts should not be long-lived, so you should use option 2 (or even one DbContext instance per DB operation).
If your "fairly straightforward access to several DB tables" really is straightforward, I'd recommend using OData together with an advanced JS client such as breeze.js.
Please see these sites:
ASP.NET Web API OData - exposes the data as a simple REST service.
breeze.js - this library provides advanced JS functionality, similar to what a DbContext offers but on the browser side: LINQ-like query syntax, local (browser) data caching, and so on.
You can also consume the OData service directly (for example with jQuery AJAX) or with a simpler library (datajs, JayData).
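As a rough illustration of the first link (Product and AppDbContext are invented names, and the attribute/namespace differ between Web API OData package versions), an OData-style controller over EF might look like this:

using System.Linq;
using System.Web.Http;
using System.Web.OData; // or System.Web.Http.OData in the older Web API OData package

// Exposes a queryable entity set so clients (breeze.js, datajs, plain jQuery AJAX)
// can shape the data themselves with $filter, $orderby, $top, $select, ...
public class ProductsController : ODataController
{
    private readonly AppDbContext _db = new AppDbContext();

    [EnableQuery] // [Queryable] in the older package
    public IQueryable<Product> Get()
    {
        return _db.Products;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) _db.Dispose();
        base.Dispose(disposing);
    }
}

The entity set still has to be registered in the Web API configuration (an ODataConventionModelBuilder plus a MapODataServiceRoute or MapODataRoute call, depending on the package version).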

Related

How to use Autofac to inject the same instance of DbContext for processing an HTTP request without causing concurrency issues?

I'm working on an ASP.net Web API application with Autofac and Entity Framework.
I've been breaking my service classes apart into smaller classes in order to make my code more testable and more SOLID.
I'm using Autofac to inject Entity Framework DbContext into my various helper classes. This becomes problematic because if I use entities queried from DbContext in two different helper classes, I get an error when Entity Framework tries to produce a query.
The error says that Entity Framework cannot produce a query with entities from two different instances of DbContext.
Clearly, the solution is that I need to configure Autofac so that the same instance of DbContext is injected into each of the helper classes, but I'm afraid that if I try to do this, I may get concurrency issues when this application gets deployed to a production environment and many people use it at once.
How do I configure Autofac so that when a request hits my application, my API helper classes all get the same instance of DbContext, but I don't have concurrency issues across multiple requests?
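(For reference, the per-request registration being described would look roughly like this with Autofac's Web API integration; AppDbContext and the "Helper" naming convention are placeholders, not anything from the question.)

using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi; // Autofac.WebApi2 integration package

public static class AutofacConfig
{
    public static void Register(HttpConfiguration config)
    {
        var builder = new ContainerBuilder();

        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

        // One DbContext per HTTP request: every helper resolved during that
        // request shares it, and the next request gets a fresh instance.
        builder.RegisterType<AppDbContext>().AsSelf().InstancePerRequest();

        builder.RegisterAssemblyTypes(Assembly.GetExecutingAssembly())
               .Where(t => t.Name.EndsWith("Helper"))
               .AsImplementedInterfaces()
               .InstancePerRequest();

        config.DependencyResolver = new AutofacWebApiDependencyResolver(builder.Build());
    }
}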
An alternative to the action-filter approach recommended by the Autofac documentation (https://autofaccn.readthedocs.io/en/latest/faq/per-request-scope.html#no-per-request-filter-dependencies-in-web-api - see "No Per-Request Filter Dependencies in Web API") and to manually going to the DependencyResolver for everything else:
You could have a look at Mehdime's DbContextScope unit-of-work provider (https://www.nuget.org/packages/EntityFramework.DbContextScope/), compiled for both EF6 and EF Core.
The injected dependencies for your classes become a DbContextScopeFactory at the top level and an AmbientDbContextLocator in your services. These don't "break" with Web API's limitation on the request lifetime scope. The DbContextScopeFactory is initialized once and supplies the DbContext, while the locators are fed that single instance.
It may be worth a look if managing context references across services and an API action proves clunky.
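A rough sketch of that pattern; AppDbContext, Order and the method names are invented, and the exact namespaces depend on which DbContextScope package/fork you install (Mehdime's original used Mehdime.Entity):

using EntityFramework.DbContextScope.Interfaces; // assumed namespace, see note above

// The top-level class owns the scope; the nested helper resolves the same
// ambient context through the locator.
public class OrderService
{
    private readonly IDbContextScopeFactory _scopeFactory;
    private readonly OrderHelper _helper;

    public OrderService(IDbContextScopeFactory scopeFactory, OrderHelper helper)
    {
        _scopeFactory = scopeFactory;
        _helper = helper;
    }

    public void ShipOrder(int orderId)
    {
        using (var scope = _scopeFactory.Create())
        {
            _helper.MarkShipped(orderId); // works against the same ambient DbContext
            scope.SaveChanges();          // one SaveChanges for the whole unit of work
        }
    }
}

public class OrderHelper
{
    private readonly IAmbientDbContextLocator _contextLocator;

    public OrderHelper(IAmbientDbContextLocator contextLocator)
    {
        _contextLocator = contextLocator;
    }

    public void MarkShipped(int orderId)
    {
        var db = _contextLocator.Get<AppDbContext>(); // the context created by the factory above
        var order = db.Orders.Find(orderId);
        order.Shipped = true;
    }
}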

POCO entities in a 2-tier WPF application

I need to retrieve a large graph of entities, manipulate it in the UI (adds, updates, deletes), then persist it all back to the database. After various SO questions and experiments, I'm finding this mass "detached graph update" approach to be very problematic, so I'm now rethinking my approach.
It's only a 2-tier WPF app, so I'm now thinking of having a long-running context that exists for the duration of the UI used to manipulate the entity graph - that way it can track changes automatically. However I'm not sure how to approach this architecturally.
The application currently has three projects - the UI, business tier, and one for the edmx & generated entities. My business tier has a CustomerManager class that exposes a method to retrieve a Customer graph (orders, order lines, etc.), and a method to persist the Customer graph. Assuming that the UI holds on to the same instance of the CustomerManager class, and therefore the same context, changes to the graph (adding and changing entities) will be tracked.
Deleting an entity is a bit trickier, as the context must be used to do this, i.e.:
context.Set<Order>().Remove(orderToDelete);
Looking for some architectural advice really. Do I just expose a DeleteOrder method in my CustomerManager class that does this? Given that I have a dozen other entity types, I would presumably need to expose similar methods to delete orders, products, etc.
Is it a sensible approach for the UI to hold on to the same CustomerManager instance, or is there a better way to manage a long-running context? A logical place for the DeleteOrder method would be in my Customer entity (partial) class, but as these classes are in a separate project from the business tier (which is where the context resides), I guess I can't do this (unless I pass the context to the DeleteOrder method)?
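For concreteness, one possible shape for such a manager in the 2-tier case (all type names here are illustrative, not a prescription):

using System;
using System.Data.Entity;
using System.Linq;

// The manager owns a context for as long as the editing UI is open, so adds and
// updates are tracked automatically; deletes go through small explicit methods.
public class CustomerManager : IDisposable
{
    private readonly MyDbContext _context = new MyDbContext();

    public Customer LoadCustomerGraph(int customerId)
    {
        return _context.Customers
                       .Include(c => c.Orders.Select(o => o.OrderLines))
                       .Single(c => c.Id == customerId);
    }

    public void DeleteOrder(Order orderToDelete)
    {
        _context.Set<Order>().Remove(orderToDelete); // tracked until SaveChanges
    }

    public void SaveCustomerGraph()
    {
        _context.SaveChanges(); // persists all tracked adds, updates and deletes
    }

    public void Dispose()
    {
        _context.Dispose();
    }
}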
Your long-living context idea will work only if the context lives in the UI and the UI talks to the database directly to get and persist data. Putting WCF between your UI and the context always results in serialization, which detaches the entities so changes are no longer tracked (unless you use STEs). Keeping a long-living context inside a WCF service is too problematic and is generally bad practice.
Have you considered WCF Data Services? They provide client-side tracking to some extent by using a special client-side context.
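A loose sketch of what that client-side context does (the service URI, the "Orders" entity set and the Order type are all invented): the DataServiceContext remembers what you attach, add or update and plays only those operations back on SaveChanges.

using System;
using System.Data.Services.Client;  // WCF Data Services client library
using System.Data.Services.Common;

[DataServiceKey("Id")]
public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

public static class OrderSync
{
    public static void PushChanges(Order changedOrder, Order newOrder)
    {
        var ctx = new DataServiceContext(new Uri("http://example.com/CustomerData.svc"));

        ctx.AttachTo("Orders", changedOrder); // start tracking an entity fetched earlier
        ctx.UpdateObject(changedOrder);       // mark it as modified

        ctx.AddObject("Orders", newOrder);    // mark a brand-new entity as added

        ctx.SaveChanges();                    // sends only the tracked operations
    }
}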

end-to-end RIA-like client/server patterns? non-Entity Framework contexts?

I have posted this same question in the MSDN forums, but nothing yet:
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/60cf36d1-c11a-4d8a-9446-f1d299db1222
I'm working on an MVC app that will get its data via a WCF service, which may or may not be using EF underneath but will definitely be using stored procedures.
The MVC app will maintain state in the session, and the entity-tracking portion of this state would preferably function much like the RIA Services DomainContext. Whether or not this context encapsulates saves and changesets is not really all that important, but how entities are loaded into the context and relate to one another (navigation properties) are.
Question 1: Is there such a pattern/solution in existence?
Question 2: Should the MVC and WCF layers share the same DTOs/Entities via a class library? (thereby maintaining state-awareness, navigation properties, etc on both ends of the pipe?)
Question 3: Does using WCF Data Services help solve these problems?
Question 4: Is this all misguided and is there a better approach?
Pretty basic stuff here.
The solution is to use a WCF Data Service and, in the client, add a Service Reference pointing to it. The generated client-side code will include a proxy and the context classes I was looking for, similar to RIA. If you're accustomed to RIA there will be some differences and caveats, but by and large it's easy to work through, and it provides a client-side proxy to your server-side ObjectContext (or whatever repository you expose through the DataService).
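A small sketch of using that generated context, assuming a Service Reference named "CustomerService" was added; the CustomerEntities context and Orders set are whatever names the tooling actually produces in your project:

using System;
using System.Linq;

public static class RecentOrdersQuery
{
    public static void Run()
    {
        var ctx = new CustomerService.CustomerEntities(
            new Uri("http://example.com/CustomerData.svc"));

        // LINQ written against the client-side context is translated into an OData URI,
        // much like querying a DomainContext in RIA Services.
        var recentOrders = (from o in ctx.Orders
                            where o.OrderDate >= DateTime.Today.AddDays(-30)
                            orderby o.OrderDate descending
                            select o).ToList();

        foreach (var order in recentOrders)
        {
            Console.WriteLine("{0}: {1}", order.Id, order.OrderDate);
        }
    }
}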

EF + WCF in three-layered application with complex object graphs. Which pattern to use?

I have an architectural question about EF and WCF.
We are developing a three-tier application using Entity Framework (with an Oracle database), and a GUI based on WPF. The GUI communicates with the server through WCF.
Our data model is quite complex (more than a hundred tables), with lots of relations. We are currently using the default EF code generation template, and we are having a lot of trouble with tracking the state of our entities.
The user interfaces on the client are also fairly complex; sometimes an object graph with more than 50 objects is sent down to a single user interface, with several layers of aggregation between the entities. It is an important goal to be able to easily decide in the BLL layer which of the objects have been modified on the client and which have been newly created.
What would be the clearest approach to manage entities and entity states between the two layers? Self tracking entities? What are the most common pitfalls in this scenario?
Could those who have used STEs in a real production environment share their experiences?
STEs are supposed to solve this scenario, but they are not a silver bullet. I have never used them in a real project (I don't like them), but I spent some time playing with them. The main pitfalls I found are:
Coupling your data layer to your client application - you must share the entity assembly between projects (this also means it is a .NET-only solution, but that should not be a problem in your case)
Large data transfers - you pass 50 entities to the client, the client changes a single entity, and you pass all 50 entities back. It takes some fighting with STEs to avoid passing unnecessary data
Unnecessary updates to the database - normally, when EF works with attached entities, it tracks changes at the property level, but STEs track changes at the entity level. So if the user modifies a single property on an entity with 100 properties, the generated UPDATE will set all of them. Avoiding this requires modifying the template and adding property-level change tracking (a plain attached-entity alternative is sketched after this answer)
The client application should use STEs directly (binding STEs to the UI) to get the most out of their self-tracking ability. Otherwise you will have to write code that moves data from the UI back into the self-tracking entity and modifies its state
They are not proxied, so they don't support lazy loading (which, in the case of a WCF service, is good behavior)
I described earlier today a way to solve this without STEs. There is also a related question about tracking over web services (check @Richard's answer and the provided links).
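Not STE code, but for comparison, a plain DbContext sketch of the property-level alternative mentioned in the third pitfall (AppDbContext, People and Person are placeholder names): attach the detached entity and flag only the columns that actually changed, so the generated UPDATE touches just those.

using System.Data.Entity;

public static class PersonUpdater
{
    public static void SaveNameOnly(AppDbContext db, Person detachedPerson)
    {
        db.People.Attach(detachedPerson);
        db.Entry(detachedPerson).Property(p => p.Name).IsModified = true; // only Name in the SET clause
        db.SaveChanges();
    }
}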
We have developed a layered application with STEs: a user interface layer with ASP.NET and Model-View-Presenter, a business layer, a WCF service layer, and the data layer with Entity Framework.
When I first read about STEs, the documentation said they are easier than using custom DTOs - they were supposed to be the 'quick and easy way', and only on really big projects should you use hand-written DTOs.
But we've run into a lot of problems using STEs. One of the main problems is that if your entities come from multiple service calls (for example in a master-detail view), and therefore from different contexts, you will run into problems when composing the graphs on the server and trying to save them. So our server functions still have to check manually which data has changed and then recompose the object graph on the server. A lot has been written about this topic, but it's still not easy to fix.
Another problem we ran into was that STEs wouldn't work without WCF. The change tracking is only activated when the entities are serialized. We had originally designed an architecture where WCF could be disabled and the service calls would simply run in process (this was a requirement for our unit tests, which run a lot faster without WCF and are easier to set up). It turned out that STEs are not the right choice for this.
I've also noticed that developers sometimes included a lot of data in their queries and just sent it all to the client instead of really thinking about which data they needed.
After this project we decided that in a new project we would use custom DTOs with AutoMapper between server and client, and the POCO template in our data layer.
So, since you already state that your project is big, I would opt for custom DTOs and service functions that are created specifically for one goal, instead of generic 'Update(Person person)' functions that send a lot of data.
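For example, a goal-specific read contract with a hand-rolled DTO might look like this (all names are invented; this uses AutoMapper's MapperConfiguration API, whereas the older static Mapper.CreateMap style was more common at the time):

using AutoMapper;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public Customer Customer { get; set; }
}

public class Customer
{
    public string Name { get; set; }
}

// A narrow, purpose-built contract instead of the whole tracked entity graph.
public class OrderSummaryDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
    public string CustomerName { get; set; } // flattened from Order.Customer.Name by convention
}

public static class OrderMapping
{
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderSummaryDto>()).CreateMapper();

    public static OrderSummaryDto ToSummary(Order order)
    {
        return Mapper.Map<OrderSummaryDto>(order);
    }
}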
Hope this helps :)

How can I set up OData and EF without coupling to my database structure?

I really like OData (WCF Data Services). In past projects I have coded up so many web services just to allow different ways to read my data.
OData gives great flexibility for the clients to have the data as they need it.
However, in a discussion today, a co-worker pointed out that how we are doing OData is little more than giving the client application a connection to the database.
Here is how we are setting up our WCF Data Service (Note: this is the traditional way)
Create an Entity Framework (EF) Data Model of our database
Publish that model with WCF Data Services
Add Security to the OData feed
(This is where it is better than a direct connection to the SQL Server)
My co-worker (correctly) pointed out that all our clients will be coupled to the database now. (If a table or column is refactored then the clients will have to change too)
EF offers a bit of flexibility in how your data is presented and could be used to hide some minor database changes that don't affect the client apps, but I have found it to be quite limited (see this post for an example). I have found that the POCO templates (while nice for separating the model from the entities) also do not offer very much flexibility.
So, the question: What do I tell my co-worker? How do I set up my WCF Data Services so they use business-oriented contracts (as they would if every read operation used a standard WCF SOAP-based service)?
Just to be clear, let me ask this a different way: how can I decouple EF from WCF Data Services? I am fine with making up my own contracts and using AutoMapper to convert between them, but I would like to not go directly from EF to OData.
NOTE: I still want to use EF as my ORM. Rolling my own ORM is not really a solution...
If you use your own custom classes instead of the classes generated directly by EF, you will also change the provider for WCF Data Services. It means you will no longer pass the EF context as the generic parameter to the DataService base class. This is fine if you have read-only services, but once you expect any data modifications from clients you will have a lot of work to do.
Data services based on an EF context support data modifications. All other data services use the reflection provider, which is read-only by default until you implement IUpdatable on your custom "service context class".
Data services are a technology for quickly creating services that expose your data. They are coupled to their context, and it is the responsibility of the context to provide the abstraction. If you want to make quick and easy services, you are dependent on the features supported by EF mapping. You can build some abstraction into the EDMX and create projections (DefiningQuery, QueryView, etc.), but all of these features have limitations (for example, projections are read-only unless you use stored procedures for modifications).
Data services are not the same as providing a connection to the database. There is one very big difference - a database connection only enforces access and execution permissions; it does not enforce data security. WCF Data Services offer data security because you can create interceptors that add filters to queries, so only data the user is allowed to see is returned, or that check whether he is allowed to modify the data. That is the difference you can point out to your colleague.
As for abstraction - do you want a quick and easy solution or not? You can inject an abstraction layer between the service and the ORM, but then you need to implement the IUpdatable interface mentioned above and test it.
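A hedged sketch of that "custom service context" idea (CustomerDto, AppDbContext and the filter are invented): the service exposes hand-rolled contract classes through the reflection provider instead of the EF model itself.

using System;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;
using System.Linq.Expressions;

[DataServiceKey("Id")]
public class CustomerDto
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

public class BusinessContext
{
    // Read-only unless IUpdatable is implemented on this class, as noted above.
    public IQueryable<CustomerDto> Customers
    {
        get
        {
            using (var db = new AppDbContext())
            {
                return db.Customers
                         .Select(c => new CustomerDto { Id = c.Id, DisplayName = c.Name })
                         .ToList()        // materialize before the EF context is disposed
                         .AsQueryable();
            }
        }
    }
}

public class BusinessDataService : DataService<BusinessContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }

    // Interceptors are where the per-user filtering mentioned above is layered in.
    [QueryInterceptor("Customers")]
    public Expression<Func<CustomerDto, bool>> OnQueryCustomers()
    {
        return c => c.Id > 0; // stand-in for a real authorisation filter
    }
}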
The simplest approach:
DO NOT PUBLISH YOUR TABLES ;)
Make a separate schema
Add views to it
Map those views in EF and publish them.
The views are decoupled from the tables and can therefore be simplified and refactored separately.
This is a standard approach, also used for reporting.
Apart from achieving more granular data authorisation (based on certain field values, etc.), OData also makes your data accessible via open standards like JSON/XML over HTTP with OAuth. This is very useful for web/mobile applications. You could create a web service to expose your data, but that would require a change every time your clients' data requirements change (e.g. extra fields are needed), whereas OData allows this via OData queries. In a big enterprise this is also useful for designing security at the infrastructure level, as it only allows text-based (HTTP) calls, which can be inspected/verified for security threats by network firewalls.
You have some other options for your OData client. Have a look at Simple.OData.Client, described in this article: http://www.codeproject.com/Articles/686240/reasons-to-consume-OData-feeds-using-Simple-ODa
And in case you are familiar with Simple.Data microORM, there is an OData adapter for it:
https://github.com/simplefx/Simple.OData/wiki
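A quick sketch of Simple.OData.Client's typed fluent API (the service URL and Product shape are made up); the library builds the OData query URI itself, so no generated proxy is needed:

using System.Collections.Generic;
using System.Threading.Tasks;
using Simple.OData.Client;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class CatalogClient
{
    public static async Task<IEnumerable<Product>> GetCheapProductsAsync()
    {
        var client = new ODataClient("http://example.com/odata/");

        return await client
            .For<Product>()
            .Filter(p => p.Price < 20)
            .Top(10)
            .FindEntriesAsync(); // roughly .../Products?$filter=Price lt 20&$top=10
    }
}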
UPDATE: my recommendations are about the client choice, while your question is about setting up the server side, so of course they are not exactly what you are asking for. I will leave my answer, however, so you are aware of the client alternatives.