Async Issue for DbContext used in constructor of objects created via DI - entity-framework

I wonder if someone can clarify when to await and when not to. Consider this code
public Task<List<User>> GetUsersForParent(int someParentId)
{
    var qry = Context.Users.Where(u => u.parent == someParentId)
                           .OrderBy(u => u.Surname);
    return FilterActive(qry);
}
//Actually in a generic base class, but not important (I don't think)
protected Task<List<T>> FilterActive(IQueryable<T> query) where T : BaseEntity
{
    return query.Where(q => q.Active == true).ToListAsync();
}
Then it is used like this
var users = await DbHandler.GetUsersForParent(1);
So the calling method is awaited, but the others are not. Is this correct?
Should the method calling the ToListAsync() be awaited? (this I assume is now doing the work)
The reason I ask is that I am getting the dreaded "DbContext is being used by a second thread" exception. I am running out of places to look. My understanding is that the methods are building up the whole task which is then executed, but could this be messing with the DbContext?
Edit re DbContext error
Having narrowed down the potential locations for the issue via Debug.Print and SQL query profiling (just in case that helps anyone else), I can see one statement being profiled (the next entry in the profile is the logging of the exception), and I can see two methods being run via the debug print.
One of these methods is a PermissionsManager which, when constructed, initialises itself and loads the user data. This is constructed when requested via the DI framework.
The other method is the single query on the OnGet() method for the page. It is running a single query to get an entity by ID, it is awaited correctly.
My working theory at the moment is that the Thread running the DI construction and another thread running the Page initialise are colliding.
When I changed the PermissionManager to just _person = new Person() // instead of await db.users.get(userid), the issue went away. I could replicate the issue 1 in 2 or 3 refreshes before; with that line commented out I could not replicate it, despite refreshing the page 30+ times.
So my real question about async/await is probably more about DI construction: is that construction running on a different thread, and if so, is there any best practice to avoid this?

So the calling method is awaited, but the others are not. Is this correct?
I generally recommend using the async and await keywords, and only return the tasks directly if the method is extremely simple.
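For example, the GetUsersForParent method from the question could be written with the keywords like this (a sketch of the same method, assuming the same Context and FilterActive members; not a fix for the exception by itself):

public async Task<List<User>> GetUsersForParent(int someParentId)
{
    var qry = Context.Users.Where(u => u.parent == someParentId)
                           .OrderBy(u => u.Surname);
    // Awaiting here keeps this method on the stack trace if the query fails.
    return await FilterActive(qry);
}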
The reason I ask is that I am getting the dreaded "DbContext is being used by a second thread" exception. I am running out of places to look. My understanding is that the methods are building up the whole task which is then executed, but could this be messing with the DbContext?
No. At least, the code you posted cannot cause that exception. Whether the async/await keywords are used, or whether the tasks are returned directly, the methods are asynchronous and they do not attempt to do more than one thing on the DbContext at once.
It's possible that your problem is further up the stack. Task.WhenAll is a good thing to search for when tracking this down.
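The kind of pattern to look for looks something like this sketch (illustrative names), where two operations are started on the same context before either one is awaited:

var usersTask = context.Users.ToListAsync();     // first operation starts
var groupsTask = context.Groups.ToListAsync();   // second operation starts on the same context
await Task.WhenAll(usersTask, groupsTask);       // typically throws the "second operation started on this context" exception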

Should the method calling the ToListAsync() be awaited? (this I assume is now doing the work)
If you await inside either method, you will be returning the result rather than a Task of the result type (and the method must be marked async), which means the execution cannot be deferred.
Your error will be coming up because you either have multiple threads interacting with the same instance of DbContext (awaited or not, that would cause problems), or you have some code calling the ToListAsync()-containing method, or another async DbContext operation, without awaiting it.
Writing an EF data access layer that returns Task is fairly dangerous and can shoot you in the foot very easily.
Given your code structure I would recommend a couple small changes:
public async Task<List<User>> GetUsersForParent(int someParentId)
{
    var qry = Context.Users.Where(u => u.parent == someParentId)
                           .OrderBy(u => u.Surname);
    qry = FilterActive(qry);
    return await qry.ToListAsync();
}

protected IQueryable<T> FilterActive(IQueryable<T> query) where T : BaseEntity
{
    return query.Where(q => q.Active == true);
}
Notably here I would avoid returning Task to reduce risks of improper use and potentially intermittent bugs. The base-class method for FilterActive can return IQueryable<T> to apply the filter without triggering the execution of the operation. This way FilterActive can be applied whether you want a List, a Count, or simply do an Exists check.
Overall I would recommend exploring patterns that return IQueryable<TEntity> rather than List<TEntity> etc., as the latter results in either a lot of limitations on performance and flexibility, or requires a lot of boiler-plate code to handle things like:
Sorting,
Pagination,
Getting just a Count,
Performing an Exists check,
Configurable filtering,
Selectively eager loading related data, or
Projection to generate efficient queries
Doing this with methods that return List<TEntity> either results in very complex code to support some of the above considerations, has these operations applied post-execution leading to heavier queries than would otherwise be needed, or requires a lot of near-duplicate code to handle each scenario.
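As an illustration, a repository method that exposes IQueryable lets each caller decide how to execute it (a sketch with illustrative names, following the FilterActive shape above; pageIndex and pageSize are assumed inputs):

public IQueryable<User> QueryUsersForParent(int someParentId)
{
    return FilterActive(Context.Users.Where(u => u.parent == someParentId));
}

// Callers compose only what they need before execution:
var page = await QueryUsersForParent(1)
    .OrderBy(u => u.Surname)
    .Skip(pageIndex * pageSize)      // pagination
    .Take(pageSize)
    .ToListAsync();

var count  = await QueryUsersForParent(1).CountAsync();   // just a count
var exists = await QueryUsersForParent(1).AnyAsync();     // exists check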

So the constructor thing was a red herring. It was a missing await after all, just not where expected and in code that was unchanged.
I tracked down the culprit. There was a method in the basePage which hooked into the Filter of MVC pages. It took the user and loaded their permissions; however, since this loading of user permissions was made async, the method did not get awaited (it didn't need it before, as it was synchronous). I moved it to one of the async events in the page life cycle and all seems happy now (with a suitable await!). So it was a missing await, but the moral of the story is: any time you make a sync method async, check what the heck is actually using it!

Get Context instance from DbContextPool (EF Core 2.0) to use it in Task

Entity Framework Core 2.0 introduces DbContext pooling.
In my code I do a lot of work in Tasks because I run some independent, heavy operations against the database.
My old approach was:
Task.Run(() =>
{
    AppDbContext c = new AppDbContext(this.config);
    // ... heavy work with c ...
});
How can I get an instance from the EF Core 2.0 DbContext pool?
Edited:
I am using DI: public CategoryController(AppDbContext context, ...
The reason for doing this is to make the REST API method execute more quickly.
For example, I think this should complete quicker:
List<AppUser> users = null;
List<DbGroup> groups = null;
Task task1 = Task.Run(async () => {
    users = await ContextFromConnectionPool.Users.Where(t => t.Id == 1).ToListAsync();
});
Task task2 = Task.Run(async () => {
    groups = await ContextFromConnectionPool.Groups.Where(t => t.Id == 1).ToListAsync();
});
var tags = await this.context.Tags.ToListAsync();
Task.WaitAll(task1, task2);
// process all 3 results
then this:
List<AppUser> users = await this.context.Users.Where(t => t.Id == 1).ToListAsync();
List<DbGroup> groups = await this.context.Groups.Where(t => t.Id == 1).ToListAsync();
var tags = await this.context.Tags.ToListAsync();
//process all 3 results
In the second example the second query executes only after the first has completed.
If every query takes 150ms, the first example executes in approx 150ms, but the second in approx 450ms. Am I right?
The only problem is how to get a context from the pool in the first approach.
The DbContext pooling feature of ASP.NET Core 2.0 and Entity Framework Core 2.0 is not, in any way, preventing you from running the time-consuming queries at once. The entire concept of pooling is to allow a context instance to be reused across requests, instead of having to recreate an instance each time a new request comes in. Sometimes it has benefits and sometimes it has downsides. Now, for your question, there are two pathways:
Allow the framework to pool the context in the Startup class and then reuse the injected instances everywhere you need them. You can capture them inside the actions, and in any other private or local functions that you have (see the registration sketch after these two points).
Do not use DI and database context pooling, and instead keep doing what you were doing. Note that you were never using DI for those contexts, and thus there is no need to register your database context in the Startup class. But then you must take care of creating the instance, and of manually disposing it, yourself.
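For the first pathway, the registration in Startup would look roughly like this (a sketch; the connection string name "Default" is an assumption):

public void ConfigureServices(IServiceCollection services)
{
    // AddDbContextPool reuses AppDbContext instances across requests
    // instead of constructing a new one for every request.
    services.AddDbContextPool<AppDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("Default")));
}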
The second approach is not suitable, and not a good approach either, for many reasons. If you want to take the first approach, you can change your controller to accept the database context in its constructor and expose it as a property, such as:
public class YourController : Controller {
    public AppDbContext c { get; set; }

    public YourController(AppDbContext c) {
        this.c = c;
    }
}
Now if you have got that, you can use this c variable inside your tasks and run the time-consuming queries inside that function (although wrapping them in Task.Run adds little value). You can do this:
Task.Run(() =>
{
// Use c here.
});
Just remember a few points:
It is good to build your query first and then call ToListAsync(). ToList() may not be suitable here; consider using ToListAsync() and applying the await keyword to capture the data asynchronously.
Your query only gets executed on the database server when you call ToList or a similar function.
While running tasks in parallel, you must also handle any cases where your query might break database policies, such as data integrity constraints. It is always best practice to catch the exceptions.
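A small sketch of those points (illustrative names):

var query = context.Users.Where(u => u.IsActive);   // nothing has hit the database yet
try
{
    var users = await query.ToListAsync();          // the query executes here, asynchronously
}
catch (Exception ex)
{
    // log/handle database failures coming out of the parallel work
}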
In your case, as better practice, you might also want to consider wrapping your code inside a using block:
Task.Run(() => {
    using (var context = new AppDbContext(this.config)) {
        // use context here.
    }
});
This is the best I can offer to help you, since you have not shared 1) the purpose of not using DI, 2) a sample of your query (why not use LINQ to build the query and then execute it on the server?), or 3) any sample code to be used. I hope this gives you an idea of why you should consider using DI and the instances it returns.

Why is this Autofac mock's lifetime disposed in a simple MSpec test?

I've got a base class I'm using with MSpec which provides convenience methods around AutoMock:
public abstract class SubjectBuilderContext
{
    static AutoMock _container;

    protected static ISubjectBuilderConfigurationContext<T> BuildSubject<T>()
    {
        _container = AutoMock.GetLoose();
        return new SubjectBuilderConfigurationContext<T>(_container);
    }

    protected static Mock<TDouble> GetMock<TDouble>()
        where TDouble : class
    {
        return _container.Mock<TDouble>();
    }
}
Occasionally, I'm seeing an exception happen when attempting to retrieve a Mock like so:
It should_store_the_receipt = () => GetMock<IFileService>().Verify(f => f.SaveFileAsync(Moq.It.IsAny<byte[]>(), Moq.It.IsAny<string>()), Times.Once());
Here's the exception:
System.ObjectDisposedException: Instances cannot be resolved and nested lifetimes cannot be created from this LifetimeScope as it has already been disposed.
I'm guessing it has something to do with the way MSpec runs the tests (via reflection) and that there's a period of time when nothing actively holds references to any of the objects in the underlying lifetime scope being used by AutoMock, which causes the lifetime scope to get disposed. What's going on here, and is there some simple way for me to keep it from happening?
The AutoMock lifetime scope from Autofac.Extras.Moq is disposed when the mock itself is disposed. If you're getting this, it means the AutoMock instance has been disposed or has otherwise lost scope and the GC has cleaned it up.
Given that, there are a few possibilities.
The first possibility is that you've got some potential threading challenges around async method calls. Looking at the method that's being mocked, I see you're verifying the call to a SaveFileAsync method. However, I don't see any async-related code in there, and I'm not entirely sure when/how the tests are calling it given the currently posted code; but if there is a situation where an async call causes the test to run on one thread while the AutoMock loses scope or otherwise gets killed on another thread, I could see this happening.
The second possibility is the mix of static and instance items in the context. You are storing the AutoMock as a static, but it appears the context class in which it resides is a base class that is intended to supply instance-related values. If two tests run in parallel, for example, the first test will set the AutoMock to what it thinks it needs, then the second test will overwrite the AutoMock and the first will go out of scope, disposing the scope.
The third possibility is multiple calls to BuildSubject<T> in one test. The call to BuildSubject<T> initializes the AutoMock. If you call that multiple times in one test, despite changing the T type, you'll be stomping the AutoMock each time and the associated lifetime scope will be disposed.
The fourth possibility is a test ordering problem. If you only see it sometimes but not other times, it could be that certain tests inadvertently assume that some setup, like the call to BuildSubject<T>, has already been done; while other tests may not make that assumption and will call BuildSubject<T> themselves. Depending on the order the tests run, you may sometimes get lucky and not see the exception, but other times you may run into the problem where BuildSubject<T> gets called at just the wrong time and causes you pain.
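One way to make the disposal deterministic rather than relying on the GC is to dispose the container explicitly, for example (a sketch, assuming MSpec's Cleanup delegate is honored in the base class):

public abstract class SubjectBuilderContext
{
    static AutoMock _container;

    protected static ISubjectBuilderConfigurationContext<T> BuildSubject<T>()
    {
        _container?.Dispose();                 // drop any container left over from a previous context
        _container = AutoMock.GetLoose();
        return new SubjectBuilderConfigurationContext<T>(_container);
    }

    Cleanup after = () =>
    {
        _container?.Dispose();                 // dispose deterministically instead of via the GC
        _container = null;
    };
}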

Testing GWTP presenter with asynchronous calls

I'm using GWTP, adding a Contract layer to abstract the knowledge between Presenter and View, and I'm pretty satisfied with the result.
I'm testing my presenters with Mockito.
But as time passed, I found it was hard to maintain a clean presenter with its tests.
There was some refactoring I did to improve that, but I was still not satisfied.
I found the following to be the heart of the matter:
My presenters often need asynchronous calls, or more generally calls to an object's method with a callback to continue the presenter flow (the callbacks are usually nested).
For example:
this.populationManager.populate(new PopulationCallback()
{
    public void onPopulate()
    {
        doSomeStufWithTheView(populationManager.get());
    }
});
In my tests, I ended up verifying the populate() call on the mocked PopulationManager object, and then creating another test for the doSomeStufWithTheView() method.
But I discovered rather quickly that it was bad design: any change or refactoring ended up breaking a lot of my tests and forced me to create others from scratch, even though the presenter functionality did not change!
Plus, I wasn't testing whether the callback actually did what I wanted.
So I tried to use Mockito's doAnswer method so as not to break my presenter testing flow:
doAnswer(new Answer(){
    public Object answer(InvocationOnMock invocation) throws Throwable
    {
        Object[] args = invocation.getArguments();
        ((PopulationCallback)args[0]).onPopulate();
        return null;
    }
}).when(this.populationManager).populate(any(PopulationCallback.class));
I factored the code out to be less verbose (and internally less dependent on the argument position):
doAnswer(new PopulationCallbackAnswer())
.when(this.populationManager).populate(any(PopulationCallback.class));
So while mocking the populationManager, I could still test the flow of my presenter, basically like this:
@Test
public void testSomeStuffAppends()
{
    // Given
    doAnswer(new PopulationCallbackAnswer())
        .when(this.populationManager).populate(any(PopulationCallback.class));

    // When
    this.myPresenter.onReset();

    // Then
    verify(populationManager).populate(any(PopulationCallback.class)); // That was before
    verify(this.myView).displaySomething(); // Now I can do that.
}
I am wondering whether this is a good use of the doAnswer method, or whether it is a code smell and a better design could be used.
Usually, my presenters tend to just use other objects (like some Mediator pattern) and interact with the view. I have some presenters with several hundred (~400) lines of code.
Again, is this proof of bad design, or is it normal for a presenter to be verbose (because it is using other objects)?
Has anyone heard of a project which uses GWTP and tests its presenters cleanly?
I hope I explained this in a comprehensible way.
Thank you in advance.
PS: I'm pretty new to Stack Overflow, and my English is still lacking; if my question needs to be improved, please tell me.
You could use ArgumentCaptor:
Check out this blog post for more details.
If I understood correctly, you are asking about design/architecture.
This shouldn't be counted as an answer; it's just my thoughts.
If I have the following code:
public void loadEmoticonPacks() {
    executor.execute(new Runnable() {
        public void run() {
            pack = loadFromServer();
            savePackForUsageAfter();
        }
    });
}
I usually don't verify the executor itself and just check that the method does its concrete job of loading and saving. The executor here is just an instrument to keep long operations off the UI thread.
If I have something like:
accountManager.setListener(this);
....
public void onAccountEvent(AccountEvent event) {
....
}
I will first check that we subscribed for events (and unsubscribed on destruction), and I would also check that onAccountEvent handles the expected scenarios.
UPD1. Probably, in example 1, it would be better to extract a loadFromServerAndSave method and check that it is not executed on the UI thread, as well as check that it does everything as expected.
UPD2. It's better to use a framework like the Guava EventBus for event processing.
We are using this doAnswer pattern in our presenter tests as well, and usually it works just fine. One caveat though: if you test it like this, you are effectively removing the asynchronous nature of the call; that is, the callback is executed immediately after the server call is initiated.
This can lead to undiscovered race conditions. To check for those, you could make this a two-step process: when calling the server, the answer method only saves the callback. Then, when it is appropriate in your test, you call something like flush() or onSuccess() on your answer (I would suggest making a utility class for this that can be reused in other circumstances), so that you can control when the callback for the result is really called.

Entity Framework 5 Unit of Work pattern - where should I call SaveChanges?

Apologies, in advance, if this seems like a duplicate question. This question was the closest I could find, but it doesn't really solve the issues I am facing.
I'm using Entity Framework 5 in an ASP.NET MVC4 application and attempting to implement the Unit of Work pattern.
My unit of work class implements IDisposable and contains a single instance of my DbContext-derived object context class, as well as a number of repositories, each of which derives from a generic base repository class that exposes all the usual repository functionality.
For each HTTP request, Ninject creates a single instance of the Unit of Work class and injects it into the controllers, automatically disposing it when the request is complete.
Since EF5 abstracts away the data storage and Ninject manages the lifetime of the object context, it seems like the perfect way for consuming code to access in-memory entity objects without the need to explicitly manage their persistence. In other words, for optimum separation of concerns, I envisage my controller action methods being able to use and modify repository data without the need to explicitly call SaveChanges afterwards.
My first (naive) attempt to implement this idea employed a call to SaveChanges within every repository base-class method that modified data. Of course, I soon realized that this is neither performance-optimized (especially when making multiple successive calls to the same method), nor does it accommodate situations where an action method directly modifies a property of an object retrieved from a repository.
So, I evolved my design to eliminate these premature calls to SaveChanges and replace them with a single call when the Unit of Work instance is disposed. This seemed like the cleanest implementation of the Unit of Work pattern in MVC, since a unit of work is naturally scoped to a request.
Unfortunately, after building this concept, I discovered its fatal flaw - the fact that objects added to or deleted from a DbContext are not reflected, even locally, until SaveChanges has been called.
So, what are your thoughts on the idea that consuming code should be able to use objects without explicitly persisting them? And, if this idea seems valid, what's the best way to achieve it with EF5?
Many thanks for your suggestions,
Tim
UPDATE: Based on @Wahid's response, I am adding below some test code that shows some of the situations in which it becomes essential for the consuming code to explicitly call SaveChanges:
var unitOfWork = _kernel.Get<IUnitOfWork>();
var terms = unitOfWork.Terms.Entities;
// Purge the table so as to start with a known state
foreach (var term in terms)
{
terms.Remove(term);
}
unitOfWork.SaveChanges();
Assert.AreEqual(0, terms.Count());
// Verify that additions are not even reflected locally until committed.
var created = new Term { Pattern = "Test" };
terms.Add(created);
Assert.AreEqual(0, terms.Count());
// Verify that additions are reflected locally once committed.
unitOfWork.SaveChanges();
Assert.AreEqual(1, terms.Count());
// Verify that property modifications to entities are reflected locally immediately
created.Pattern = "Test2";
var another = terms.Single(term => term.Id == created.Id);
Assert.AreEqual("Test2", another.Pattern);
Assert.True(ReferenceEquals(created, another));
// Verify that queries against property changes fail until committed
Assert.IsNull(terms.FirstOrDefault(term => term.Pattern == "Test2"));
// Verify that queries against property changes work once committed
unitOfWork.SaveChanges();
Assert.NotNull(terms.FirstOrDefault(term => term.Pattern == "Test2"));
// Verify that deletions are not even reflected locally until committed.
terms.Remove(created);
Assert.AreEqual(1, terms.Count());
// Verify that additions are reflected locally once committed.
unitOfWork.SaveChanges();
Assert.AreEqual(0, terms.Count());
First of all, SaveChanges should NOT ever be in the repositories at all, because that leads you to lose the benefit of the Unit of Work.
Second, you need to add a dedicated method to save changes on the UnitOfWork.
And if you want to call this method automatically, then you may find some other solution, like an ActionFilter, or making all your Controllers inherit from a BaseController class and handling the SaveChanges there.
Anyway, the UnitOfWork should always have a SaveChanges method.
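For illustration, the ActionFilter idea could look something like this sketch (assuming an IUnitOfWork exposing SaveChanges and registered per request via Ninject; the filter name and the use of DependencyResolver are assumptions):

public class CommitUnitOfWorkAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Resolve the request-scoped unit of work (resolution mechanism is an assumption).
        var unitOfWork = DependencyResolver.Current.GetService<IUnitOfWork>();

        // Only commit if the action completed without throwing.
        if (filterContext.Exception == null)
        {
            unitOfWork.SaveChanges();
        }

        base.OnActionExecuted(filterContext);
    }
}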

Working around an ArgumentNullException

I'm trying to return the five most recent articles from my database so I can put links to them in some secondary navigation I have on my index page. I've divided up my MVC project into two sub-projects, based on Steven Sanderson's suggestion in his book - WebUI, which is the MVC portion, and Domain, which is the EF4/Domain model portion.
I have a rudimentary repository which does the heavy lifting, mainly by providing a facade to EF4, and handling other tasks like model validation. I have a simple method which returns the last five articles:
public List<Article> LastFive()
{
    return _siteDB.Articles.OrderByDescending(a => a.LastModified).Take(5).ToList();
}
My problem is that I have to use two other similar functions on my index page to show the five most recent reviews and site news items. With nothing in the db, they throw ArgumentNullExceptions (which is fine). What I'd like to do is display a simple "No articles/reviews/news exist" message instead, but since all three will throw the same exception, I'm unsure how to capture the right one and display the correct message based on the category which threw the exception.
I'm not sure if I should subclass Exception for these cases, and if I did, exactly where I'd throw them. Or, if there's a way to determine where the exception(s) came from so I could handle them properly.
I'm really confused by the results you're reporting. Entity Framework should return an empty IEnumerable when there are no results from a query. I've never seen it throw an ArgumentNullException in this case. Have you done anything weird with your Entity Framework templates?
You should be able to step through your code and home in on exactly where the ArgumentNullException is coming from. (I have a sneaking suspicion it's happening outside of the method you posted).
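If the query does come back as an empty list (which is the normal EF behaviour for no rows), the category-specific message can be driven off the count rather than an exception, something like this sketch (names and the use of ViewBag are illustrative):

var articles = repository.LastFive();      // hypothetical repository call
ViewBag.ArticlesMessage = articles.Any()
    ? null
    : "No articles exist";                 // same idea for reviews and news items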
My guess is that it's the ToList() that is crashing.
Try something like this (did not run it):
public List<Article> LastFive()
{
    var result = _siteDB.Articles.OrderByDescending(a => a.LastModified).Take(5);
    if (result != null)
        return result.ToList();
    else
        return null;
}
Your calling code should test for null and display a message if so.