EF7 alpha connection state problems - entity-framework

I'm trying to use EF in an ASP.NET vNext SPA application.
I'm registering the context class with the built-in dependency injection container using AddScoped() (just like in the examples), but when I try to perform a delete operation on an entity I get weird errors.
Sometimes the delete works, sometimes I get a
Invalid operation. The connection is closed.
and sometimes I get a
The connection was not closed. The connection's current state is open.
This only happens for delete operations, and I can't find a pattern for when the 'connection is open' and 'connection is closed' errors appear.
Here's my delete method body (the method is virtual because this is a base controller, though no overrides exist for it yet):
public virtual async Task<IActionResult> Delete(int id)
{
    var t = await Items.SingleOrDefaultAsync(i => i.ID == id);
    if (t == null)
        return new HttpStatusCodeResult((int)HttpStatusCode.NoContent);
    Items.Remove(t);
    AppContext.SaveChanges();
    return new HttpStatusCodeResult((int)HttpStatusCode.OK);
}

The problem disappeared after migrating to alpha3.
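As an aside (unrelated to the alpha3 fix), the action above mixes a synchronous SaveChanges into an otherwise async method. A fully awaited variant would look roughly like this, assuming AppContext is the injected context from the snippet above:

public virtual async Task<IActionResult> Delete(int id)
{
    var t = await Items.SingleOrDefaultAsync(i => i.ID == id);
    if (t == null)
        return new HttpStatusCodeResult((int)HttpStatusCode.NoContent);

    Items.Remove(t);
    // Await the save so the sync and async call paths are not mixed
    // and the request does not block a thread pool thread.
    await AppContext.SaveChangesAsync();
    return new HttpStatusCodeResult((int)HttpStatusCode.OK);
}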

Related

EF Core PostgreSQL error during an entity insert

I am getting an error during an entity insert. I am using Entity Framework Core with PostgreSQL.
Here is a piece of code which produces an error:
public async Task Add(AddVideoDto dto)
{
    var videoModel = mapper.Map<Video>(dto);
    await context.Videos.AddAsync(videoModel);
    await context.SaveChangesAsync();
}
Here is the error log:
fail: Microsoft.EntityFrameworkCore.Database.Connection[20004]
An error occurred using the connection to database 'oklikedb' on server 'tcp://127.0.0.1:5432'.
fail: Microsoft.AspNetCore.Server.Kestrel[13]
Connection id "0HLVLRDVR67DK", Request id "0HLVLRDVR67DK:00000001": An unhandled exception was thrown by the application.
System.InvalidOperationException: Reset() called on connector with state Connecting
at Npgsql.NpgsqlConnector.Reset()
at Npgsql.ConnectorPool.Release(NpgsqlConnector connector)
at Npgsql.NpgsqlConnection.<Close>g__FinishClose|76_1(NpgsqlConnection connection, NpgsqlConnector connector)
at Npgsql.NpgsqlConnection.Close(Boolean wasBroken, Boolean async)
at Npgsql.NpgsqlConnection.CloseAsync()
at Npgsql.NpgsqlConnection.DisposeAsync()
at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.DisposeAsync()
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.<>c__DisplayClass15_0.<<DisposeAsync>g__Await|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.Extensions.DependencyInjection.ServiceLookup.ServiceProviderEngineScope.<>c__DisplayClass15_0.<<DisposeAsync>g__Await|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Http.Features.RequestServicesFeature.<DisposeAsync>g__Awaited|9_0(RequestServicesFeature servicesFeature, ValueTask vt)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.FireOnCompletedAwaited(Task currentTask, Stack`1 onCompleted)
I am sure that I set up a connection to my db correctly. I verified that in the following way: I have another piece of code:
public async Task<List<GetVideoDto>> GetAll()
{
    var videoModels = await context
        .Videos
        .ToListAsync();
    return mapper.Map<List<GetVideoDto>>(videoModels);
}
And this piece of code works just fine. I manually inserted a value into my database and checked that it is returned by await context.Videos.ToListAsync(); both by debugging and via Postman. I can also apply migrations to the database successfully.
So the error seems to tell me that my code tries to open a connection before closing it, but I cannot understand how that could be possible.
I am well aware of the state machine behind async/await, so context.SaveChangesAsync(); in my case will definitely run only after context.Videos.AddAsync(videoModel); has completed.
UPDATE
I was able to better pin down the issue. The error is thrown due to this line:
await context.SaveChangesAsync();
So I do not get the error if I use SaveChanges instead of SaveChangesAsync. Does that mean that if I want to preserve the performance benefit of SaveChangesAsync, I should make the context not a singleton (as it is by default) but scoped?
Here is how I am adding the context right now:
services.AddDbContext<DataContext>(opt => opt.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")));
I mean here is my entire Startup.ConfigureServices:
public void ConfigureServices(IServiceCollection services)
{
    services.AddDbContext<DataContext>(opt =>
        opt.UseNpgsql(Configuration.GetConnectionString("DefaultConnection"))
    );
    services.AddCors();
    services.AddControllers();
    services.AddAutoMapper(typeof(Startup));
    services.AddScoped<IVideoService, VideoService>();
}
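(For reference: as far as I understand, AddDbContext registers the context with a scoped lifetime by default rather than as a singleton. The lifetime can also be stated explicitly via the overload that takes a ServiceLifetime, roughly like this:)

services.AddDbContext<DataContext>(
    opt => opt.UseNpgsql(Configuration.GetConnectionString("DefaultConnection")),
    ServiceLifetime.Scoped);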
And by the performance benefit of SaveChangesAsync I mean that my thread won't sit idle waiting for SaveChanges to complete, but will go back to the CLR thread pool.
I strongly feel that there should be a DDD principle that specifically targets the correct usage of SaveChangesAsync, but I cannot find one (perhaps there isn't one).
Add entities using plain old Add() rather than trying to add them asynchronously. Adding to an in-memory collection is not a long-running operation, so there is no benefit to making the Add call async.
When you add to your DbSet you are doing just that; it's the actual saving that benefits from being async, as that's what you can await.
If you read the docs on AddAsync you will also see the following:
This method is async only to allow special value generators, such as the one used by 'Microsoft.EntityFrameworkCore.Metadata.SqlServerValueGenerationStrategy.SequenceHiLo', to access the database asynchronously. For all other cases the non async method should be used.
I typically do this, and await the call to save in my controllers or whatever.
public void AddThing(Thing thingToAdd)
{
    _context.Things.Add(thingToAdd);
}

public async Task<bool> SaveChangesAsync()
{
    return (await _context.SaveChangesAsync() > 0);
}
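A typical call site in a controller might then look something like this (the Thing parameter, the _repository field, and the action shape are illustrative assumptions, not code from the answer above):

[HttpPost]
public async Task<IActionResult> Create(Thing thingToAdd)
{
    // Synchronous add: this only stages the entity in the change tracker.
    _repository.AddThing(thingToAdd);

    // The actual database round trip is awaited here.
    if (await _repository.SaveChangesAsync())
        return Ok();

    return BadRequest("Nothing was saved.");
}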

Not receiving latest data using Npgsql LISTEN/NOTIFY

I'm using a .NET Core app with a PostgreSQL database (with Npgsql), combined with SignalR, to receive real-time data and the latest data entries. However, I am not receiving the latest entry, and sometimes the Clients.All.SendAsync method sends more than one entry to the client. Here is my code:
Hub method that sends new data to client:
public async Task SendForexAsync(string name)
{
    var product = GetForex(name);
    await Clients.All.SendAsync("CurrentData", product);
    using (var conn = new NpgsqlConnection(ApplicationDbContext.GetConnectionString()))
    {
        conn.Open();
        var cmd = new NpgsqlCommand("LISTEN new_forex", conn).ExecuteNonQuery();
        conn.Notification += async (o, e) =>
        {
            var newProduct = GetForex(name);
            await Clients.All.SendAsync("NewData", newProduct);
        };
        while (true)
        {
            await conn.WaitAsync();
        }
    }
}
Console app that periodically polls for new data from an API:
var addedStocksDJI = FetchNewStocks("DJI");
if (addedStocksAAPL > 0 || addedStocksDJI > 0)
{
    using (var conn = new NpgsqlConnection(ApplicationDbContext.GetConnectionString()))
    {
        conn.Open();
        var cmd = new NpgsqlCommand("NOTIFY new_stocks", conn).ExecuteNonQuery();
    }
}
The rest of the app's code is most definitely correct, because I was receiving new and correct data before I tried implementing the LISTEN/NOTIFY feature. But now I get one (or more) entries of newProduct on my client, and it is the "old" product; that is, the query does not return the latest entries over SignalR, only the old ones. When I refresh the page manually, though, the new data is displayed correctly.
I believe it has something to do with a single connection being kept open, so I constantly receive only the "old" set of data. But even if that is the case, I cannot figure out why I sometimes get more than one packet of data, even though I am only trying to send one and I am calling NOTIFY only once.
I figured it out. Hopefully this will help someone else who gets stuck with this in the future!
The issue was that I was injecting my dbContext via .NET Core's dependency injection into my Hub class, which created the context only once for that class, and therefore once per page or WebSocket transaction. That, I assume, is why I was unable to get the latest data: the dbContext was "old" and unaware of changes.
I fixed the problem by creating the dbContext with a using block inside my methods: twice in my SendForexAsync method (once per call of the GetForex function), as well as inside the GetForex function itself. That way a dbContext is created and disposed of immediately, so the next time I poll the database for new data via GetForex (when I get a notification from the database due to the NOTIFY from the console app), a new context instance is created and it can see the new data.
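A rough sketch of what that looks like for the GetForex call (the Forex entity, its properties, and the options-based constructor are assumptions made for illustration; the original code is not shown above):

private Forex GetForex(string name)
{
    var options = new DbContextOptionsBuilder<ApplicationDbContext>()
        .UseNpgsql(ApplicationDbContext.GetConnectionString())
        .Options;

    // A fresh, short-lived context per call sees rows inserted since the
    // last notification instead of serving stale tracked data.
    using (var db = new ApplicationDbContext(options))
    {
        return db.Forex
            .Where(f => f.Name == name)
            .OrderByDescending(f => f.Id)
            .FirstOrDefault();
    }
}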

"ERROR: 57014: canceling statement due to user request" Npgsql

I am having this phantom problem in my application where one in every five requests on a specific page (in an ASP.NET MVC application) throws this error:
Npgsql.NpgsqlException: ERROR: 57014: canceling statement due to user request
at Npgsql.NpgsqlState.<ProcessBackendResponses>d__0.MoveNext()
at Npgsql.ForwardsOnlyDataReader.GetNextResponseObject(Boolean cleanup)
at Npgsql.ForwardsOnlyDataReader.GetNextRow(Boolean clearPending)
at Npgsql.ForwardsOnlyDataReader.Read()
at Npgsql.NpgsqlCommand.GetReader(CommandBehavior cb)
...
On the npgsql github page I found the following bug report: 615
It says there:
Regardless of what exactly is happening with Dapper, there's
definitely a race condition when cancelling commands. Part of this is
by design, because of PostgreSQL: cancel requests are totally
"asynchronous" (they're delivered via an unrelated socket, not as part
of the connection to be cancelled), and you can't restrict the
cancellation to take effect only on a specific command. In other
words, if you want to cancel command A, by the time your cancellation
is delivered command B may already be in progress and it will be
cancelled instead.
Although they have made "changes to hopefully make cancellations much safer" in Npgsql 3.0.2, my current code is incompatible with that version because of the migration described here.
My current (admittedly crude) workaround: I have commented out the code in Dapper that calls command.Cancel(); and the problem seems to be gone.
if (reader != null)
{
    if (!reader.IsClosed && command != null)
    {
        //command.Cancel();
    }
    reader.Dispose();
    reader = null;
}
Is there a better solution to the problem? And secondly, what am I losing with the current fix (other than having to remember to reapply the change every time I update Dapper)?
Configuration:
.NET 4.5,
Npgsql 2.2.5,
PostgreSQL 9.3
I found out why my code didn't dispose the reader, resulting in command.Cancel() being called. This only happens with the QueryMultiple method when not every refcursor is read.
Changing the code from:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    if (client != null)
    {
        client.Address = multipleResults.Read<Address>().Single();
    }
    return client;
}
To:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    var address = multipleResults.Read<Address>().SingleOrDefault();
    if (client != null)
    {
        client.Address = address;
    }
    return client;
}
This fixed the issue and now the reader is properly disposed and command.Cancel() is not invoked.
Hope this helps anyone else!
UPDATE
The Npgsql docs for version 2.2 state:
Npgsql is able to ask the server to cancel commands in progress. To do
this, call the NpgsqlCommand’s Cancel method. Note that another thread
must handle the request as the main thread will be blocked waiting for
command to finish. Also, the main thread will raise an exception as a
result of user cancellation. (The error code is 57014.)
I have also posted an issue on the Dapper github page.

properly call EF SaveChanges after each request with Autofac managing scope

I would like to put in a bit of infrastructure in my project to call SaveChanges on my db context at the end of every request.
So I created a simple piece of OWIN middleware:
app.Use(async (ctx, req) =>
{
    await req();
    var db = DependencyResolver.Current.GetService<MyDbContext>();
    await db.SaveChangesAsync();
});
This does not work and throws the error
Instances cannot be resolved and nested lifetimes cannot be created from this LifetimeScope as it has already been disposed.
If I resolve the db before completing the request
app.Use(async (ctx, req) =>
{
    var db = DependencyResolver.Current.GetService<MyDbContext>();
    await req();
    await db.SaveChangesAsync();
});
It doesn't throw an error, but it also doesn't work (as in, changes aren't saved to the db, and viewing the db in the debugger shows the DbSet's Local property throwing an InvalidOperationException about it being disposed).
I've tried with and without async, registering the middleware before and after the Autofac configuration (app.UseAutofacMiddleware(container)), and resolving the LifetimeScope directly from the OWIN environment. All give me the same results.
I've done something like this before with StructureMap, but can't seem to figure out the correct way to get Autofac to play nicely.
Steven is right that you should not be committing on request disposal, because you cannot be sure you really want to commit there, unless you abstract your UoW from the DbContext, keep a success flag on it, check that flag on disposal, and commit conditionally.
For your specific question, there are two things to clarify.
DbContext or UoW need to be registered with InstancePerRequest()
Instead of using OWIN middleware, you should use Autofac's native OnRelease(context => context.SaveMyChangesIfEverythingIsOk()) API
For example, this is how it would look for RavenDB:
builder.Register(x =>
    {
        var session = x.Resolve<IDocumentStore>().OpenAsyncSession();
        session.Advanced.UseOptimisticConcurrency = true;
        return session;
    })
    .As<IAsyncDocumentSession>()
    .InstancePerRequest()
    .OnRelease(x =>
    {
        x.SaveChangesAsync();
        x.Dispose();
    });
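For an EF DbContext, an equivalent registration might look roughly like this (MyDbContext is the context from the question; saving unconditionally on release is a simplification of the "success flag" caveat above):

builder.RegisterType<MyDbContext>()
    .AsSelf()
    .InstancePerRequest()
    .OnRelease(ctx =>
    {
        // OnRelease replaces Autofac's default disposal, so the context
        // must be disposed explicitly after committing.
        ctx.SaveChanges();
        ctx.Dispose();
    });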

structuremap entity framework 4 connection

I am using the following StructureMap bootstrapping code for my Entity Framework 4 entities:
x.For<XEntities>().LifecycleIs(Lifecycles.GetLifecycle(InstanceScope.PerRequest)).Use(() => new XEntities());
But when I do two nearly simultaneous requests, I get the following exception:
EntityException:The underlying provider failed on Open.
{"The connection was not closed. The connection's current state is connecting."}
I am using ASP.NET MVC 2, and have the following in my Application_Start():
EndRequest += new EventHandler(MvcApplication_EndRequest);

void MvcApplication_EndRequest(object sender, EventArgs e)
{
    ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
}
What can I do to fix this?
[edit]
This happens on a page with several images on it. The images come from the database, served by a controller action that reads the image from the database and sends it as a file result to the browser. I think that ASP.NET is tearing down my object context and closing my db connection when the requests for the images come in, and that is when the exception is thrown.
What I need now is a correct way to manage the lifetime of the object context.
Why are you assigning a delegate for EndRequest in Application_Start()?
Just hook directly into the event:
protected void Application_EndRequest()
{
    ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
}
Also, I have never used that syntax before; this is how I do it:
For<XEntities>().HybridHttpOrThreadLocalScoped().Use<XEntities>()
Also, at what point do you new up your Data Context? Can you show some code?
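For reference, a rough sketch of that registration in context (assuming classic ObjectFactory bootstrapping in Application_Start; adapt to your own container setup):

ObjectFactory.Initialize(x =>
{
    // Hybrid scope: per HTTP request when one exists, per thread otherwise,
    // so background work does not share a request-scoped context.
    x.For<XEntities>()
        .HybridHttpOrThreadLocalScoped()
        .Use(() => new XEntities());
});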