"ERROR: 57014: canceling statement due to user request" Npgsql - postgresql

I am having this phantom problem in my application where one in every five requests on a specific page (of an ASP.NET MVC application) throws this error:
Npgsql.NpgsqlException: ERROR: 57014: canceling statement due to user request
at Npgsql.NpgsqlState.<ProcessBackendResponses>d__0.MoveNext()
at Npgsql.ForwardsOnlyDataReader.GetNextResponseObject(Boolean cleanup)
at Npgsql.ForwardsOnlyDataReader.GetNextRow(Boolean clearPending)
at Npgsql.ForwardsOnlyDataReader.Read()
at Npgsql.NpgsqlCommand.GetReader(CommandBehavior cb)
...
On the Npgsql GitHub page I found bug report #615, which says:
Regardless of what exactly is happening with Dapper, there's
definitely a race condition when cancelling commands. Part of this is
by design, because of PostgreSQL: cancel requests are totally
"asynchronous" (they're delivered via an unrelated socket, not as part
of the connection to be cancelled), and you can't restrict the
cancellation to take effect only on a specific command. In other
words, if you want to cancel command A, by the time your cancellation
is delivered command B may already be in progress and it will be
cancelled instead.
Although they have made "changes to hopefully make cancellations much safer" in Npgsql 3.0.2, my current code is incompatible with that version because of the migration effort described here.
My current workaround (stupid): I have commented out the line in Dapper that calls command.Cancel(), and the problem seems to be gone.
if (reader != null)
{
    if (!reader.IsClosed && command != null)
    {
        //command.Cancel();
    }
    reader.Dispose();
    reader = null;
}
Is there a better solution to the problem? And secondly, what am I losing with the current fix (except that I have to remember to reapply the change every time I update Dapper)?
Configuration:
.NET 4.5,
Npgsql 2.2.5,
PostgreSQL 9.3

I found out why my code didn't dispose the reader, which resulted in command.Cancel() being called. This only happens with the QueryMultiple method when not every refcursor is read.
Changing the code from:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    if (client != null)
    {
        client.Address = multipleResults.Read<Address>().Single();
    }
    return client;
}
To:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    var address = multipleResults.Read<Address>().SingleOrDefault();
    if (client != null)
    {
        client.Address = address;
    }
    return client;
}
This fixed the issue and now the reader is properly disposed and command.Cancel() is not invoked.
Hope this helps anyone else!
UPDATE
The Npgsql docs for version 2.2 state:
Npgsql is able to ask the server to cancel commands in progress. To do
this, call the NpgsqlCommand’s Cancel method. Note that another thread
must handle the request as the main thread will be blocked waiting for
command to finish. Also, the main thread will raise an exception as a
result of user cancellation. (The error code is 57014.)
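To illustrate what the docs describe, here is a minimal sketch (not my application code; the connection string and query are placeholders, and it assumes using System, System.Threading, and Npgsql) of issuing the cancellation from a second thread while the main thread blocks on the query:
public static void CancelExample()
{
    using (var conn = new NpgsqlConnection("Server=127.0.0.1;Database=mydb;User Id=me;Password=pw")) // placeholder
    {
        conn.Open();
        var cmd = new NpgsqlCommand("SELECT pg_sleep(60)", conn); // long-running query

        // A second thread asks the server to cancel the running command.
        new Thread(() =>
        {
            Thread.Sleep(1000);
            cmd.Cancel();
        }).Start();

        try
        {
            cmd.ExecuteNonQuery(); // the main thread blocks here until the command finishes
        }
        catch (NpgsqlException ex)
        {
            Console.WriteLine(ex.Message); // "canceling statement due to user request" (error code 57014)
        }
    }
}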
I have also posted an issue on the Dapper GitHub page.

Related

Not receiving latest data using Npgsql LISTEN/NOTIFY

I'm using a .NET Core app with a PostgreSQL database (via Npgsql) combined with SignalR to receive real-time data and the latest data entries. However, I am not receiving the latest entry, and sometimes the Clients.All.SendAsync method sends more than one entry to the client. Here is my code:
Hub method that sends new data to client:
public async Task SendForexAsync(string name)
{
    var product = GetForex(name);
    await Clients.All.SendAsync("CurrentData", product);
    using (var conn = new NpgsqlConnection(ApplicationDbContext.GetConnectionString()))
    {
        conn.Open();
        new NpgsqlCommand("LISTEN new_forex", conn).ExecuteNonQuery();
        conn.Notification += async (o, e) =>
        {
            var newProduct = GetForex(name);
            await Clients.All.SendAsync("NewData", newProduct);
        };
        while (true)
        {
            await conn.WaitAsync();
        }
    }
}
Console app that periodically polls for new data from an API:
var addedStocksAAPL = FetchNewStocks("AAPL");
var addedStocksDJI = FetchNewStocks("DJI");
if (addedStocksAAPL > 0 || addedStocksDJI > 0)
{
    using (var conn = new NpgsqlConnection(ApplicationDbContext.GetConnectionString()))
    {
        conn.Open();
        new NpgsqlCommand("NOTIFY new_stocks", conn).ExecuteNonQuery();
    }
}
The rest of the app's code is most definitely correct, because I was receiving new and correct data before I tried implementing the LISTEN/NOTIFY feature. But now I get one (or more) entries of newProduct on my client, and it is the "old" product; that is, the query does not return the latest entries, only the old ones, via SignalR. When I refresh the page manually, the new data is correctly displayed, though.
I believe it has something to do with a single connection being open, so I constantly receive only the "old" set of data. But even if that is the case, I am unable to figure out why I sometimes get more than one packet of data, even though I am only trying to send one and am calling NOTIFY only once.
I figured it out. Hopefully this will help someone else who gets stuck with this in the future!
The issue was that I was getting my dbContext via .NET Core's dependency injection in my Hub class, which created the context only once for that class and, because of that, once per page or WebSocket connection. That is why I was unable to get the latest data, I assume, since the dbContext was "old" and unaware of changes.
I fixed the problem by creating a dbContext via a using block inside my methods: twice in my SendForexAsync method (once per call of the GetForex function), as well as in the GetForex function itself. That way, a dbContext is created and disposed of immediately, so the next time I poll the database for new data via the GetForex function (when I get a notification from the database due to the NOTIFY from the console app), a new instance of dbContext is created that can contain the new data.
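For illustration only, a minimal sketch of that pattern, assuming an EF Core ApplicationDbContext with a hypothetical Forex DbSet (the entity and its Name/Timestamp properties are made up, not the actual model; requires using System.Linq):
private Forex GetForex(string name)
{
    // Assumes ApplicationDbContext has a parameterless constructor that
    // configures its own connection. A fresh context per call sees the
    // latest committed data and is disposed as soon as the query completes.
    using (var db = new ApplicationDbContext())
    {
        return db.Forex
            .Where(f => f.Name == name)
            .OrderByDescending(f => f.Timestamp)
            .FirstOrDefault();
    }
}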

Why doesn't my db.collection.insert work?

I am encountering a weird issue here...
After I seem to successfully insert some data into my db collection, I can't seem to get it to reflect using db.collection.find().fetch().
Below is the code I run in my Chrome console:
merchantReviews.insert({merchantScore: "5.5"}, function() {
    console.log("Review value successfully inserted");
});
This yields:
"9sd5787kj7dsd98ycnd"
Review value successfully inserted
I think the returned value "9sd5787kj7dsd98ycnd" is an indication of a successful collection insert. Then when I run:
merchantReviews.find().fetch()
I get:
[]
Can anyone tell me what is going on here?
Looking forward to your help.
There are two possibilities here: either the insert fails on the server even though it passes on the client, or you haven't subscribed to your collection.
If the insert fails on the server (most likely due to insufficient permissions, e.g. if you have removed the insecure package but have not declared any collection.allow rules), the client code still returns the intended insert ID (in your case, "9sd5787kj7dsd98ycnd"). The callback is called once the server has confirmed that the insert has either failed or succeeded; if it has failed, the callback receives a single error argument. To catch this, you can instead insert the document like this:
merchantReviews.insert({merchantScore: "5.5"}, function(error) {
    if (error) {
        console.error(error);
    } else {
        console.log("Review value successfully inserted");
    }
});
If this still logs a successful insert, then you haven't subscribed to the collection and you have removed the autopublish package. You can read about the Meteor publish-subscribe system here. Basically, you have to publish the collection in server-side code:
Meteor.publish('reviews', function () {
    return merchantReviews.find();
});
Then in client code (or your JS console) you need to subscribe to the collection with Meteor.subscribe('reviews'). Now calling merchantReviews.find().fetch() should return all documents in the collection.

Empty response on long running query SailsJS

I'm currently running SailsJS on a Raspberry Pi and all is working well; however, when I execute a sails.models.nameofmodel.count() and attempt to respond with the result, I end up getting an empty response.
getListCount: function(req, res)
{
    var mainsource = req.param("source");
    if (mainsource)
    {
        sails.models.gatherer.find({source: mainsource}).exec(
            function(error, found)
            {
                if (error)
                {
                    return res.serverError("Error in call");
                }
                else
                {
                    sails.log("Number found " + found.length);
                    return res.ok({count: found.length});
                }
            }
        );
    }
    else
    {
        return res.ok("Error in parameter");
    }
},
I am able to see in the logs the number that was found (73689). However, when responding I still get an empty response. I am using the default stock ok.js file; however, I did stick in additional logging to try to debug and make sure it goes through the correct paths. I was able to confirm that ok.js goes through this path:
if (req.wantsJSON) {
    return res.jsonx(data);
}
I also tried adding .populate() to the call before the .exec(), and res.status(200) before sending out a res.send() instead of res.ok(). I've also updated Sails to 11.5 and am still getting the same empty response. I've also used a sails.models.gatherer.count() call with the same result.
You can try to add some logging to the beginning of your method to capture the value of mainsource. I do not believe you need to use an explicit return for any response object calls.
If all looks normal there, try to eliminate the model's find method and just evaluate the request parameter and return a simple response:
getListCount: function(req, res) {
    var mainsource = req.param("source");
    sails.log("Value of mainsource:" + mainsource);
    if (mainsource) {
        res.send("Hello!");
    } else {
        res.badRequest("Sorry, missing source.");
    }
}
If that does not work, then your model data may not actually match the criteria you are providing, and the problem may lie there; in that case, your response would be null. You mentioned that you do see the resulting count of the query in the log statement. If the res.badRequest response is also null, then you may have a problem with the version of Express installed within Sails. You mention that you have 11.5 of Sails; I will assume you mean 0.11.5.
This is what is found in the package.json of 0.11.5:
"express": "^3.21.0",
Check the GitHub issues for Sails for any possible bugs regarding Express response object handling in the above version of Express.
It may be worthwhile to perform a clean install using the latest Sails version (0.12.0) and see if that fixes your issue.
Another issue may be in how you are handling the response. In this case .exec runs the query and invokes your callback when it completes, and you are sending the response from inside that callback, so the response should only go out once the result is available.
If you can show the code that is consuming the response, that would be helpful. I am assuming that there is a view that shows the response via AJAX or some kind of form POST. If that is where you are seeing the null response, then perhaps the problem lies in the view layer rather than in the controller/model.
If you are experiencing a true timeout error via HTTP even though your query returns a result just in time, then you may need to consider using async processing with Sails. Take a look at this post on using a Promise instead.

Concurrent requests: what are the best practices?

I have a Play! Framework 2.3 project hosted on Heroku with the Postgres Addon.
It handles requests from mobile applications (posting a message).
For different reasons, I end up with duplicated rows (messages) in the database:
the app might send the request twice in a short time (less than 10 ms)
I have multiple dynos that handle requests in parallel
This happens even though I check that the message does not exist yet before writing to the DB, so I guess the first write has still not completed when the second request arrives.
I also tried writing a message footprint to the memcache before handling the request (after form validation), but I still got duplicate messages sometimes.
The solutions I found are:
have a unique constraint on some database field (like a client-side-generated message timestamp?)
regularly check for and remove duplicates
As I have no means to update the mobile apps, I will script a regular check for duplicates.
Any other idea ?
What are the best practices to handle such concurrent requests ?
Attachment: my pseudo-code
public static Result submit() {
    User user = MySecured.getCurrentUser(ctx());
    final Form<Message> filledForm = form(Message.class).bindFromRequest();
    // ... some database pre-verification
    if (filledForm.hasErrors()) {
        ObjectNode error = Json.newObject();
        error.put("error", filledForm.errorsAsJson());
        return ok(error);
    } else {
        if (Cache.get(KEY_LOCK_FLASH_WRITING + filledForm.data().get("mail")) != null) {
            return internalServerError();
        }
        // Verify this flash hasn't already been handled (requests can come twice from the client)
        Message sameMessage = Message.findSame(filledForm.get().mail, filledForm.get().message);
        if (sameMessage != null) {
            Logger.info("[Submit] message already exists " + sameMessage.id);
            ObjectNode jsonResult = Json.newObject();
            // ... processing a result ... no matter, this does not happen
            return ok(jsonResult);
        }
        final Message flash = filledForm.get();
        Cache.set(KEY_LOCK_FLASH_WRITING + flash.mail, "");
        // ... some field initializations like flash.author = new Author();
        // ... then some promises
        return ok();
    }
}

Properly disposing resources used by SmtpClient

I have a C# service that runs continuously with user credentials (i.e. not as LocalSystem; I can't change this, though I want to). For the most part the service seems to run OK, but every so often it bombs out and restarts for no apparent reason (the service manager is set to restart the service on crash).
I am doing substantial event logging, and I have a layered approach to exception handling that I believe makes at least some sort of sense:
Essentially I have top-level generic exception, null exception, and startup exception handlers.
Then I have various handlers at the "command level" (i.e. specific actions that the service runs).
Finally, a few exceptions are handled at the class level.
I have been looking at whether any resources aren't being properly released, and I am starting to suspect my mailing code (sending email). I noticed that I was not calling Dispose on the MailMessage object, and I have now rewritten the SendMail code as illustrated below.
The basic question is:
will this code properly release all resources used to send mails?
I don't see a way to dispose of the SmtpClient object?
(for the record: I am not using an object initializer, to make the sample easier to read)
private static void SendMail(string subject, string html)
{
    try
    {
        using (var m = new MailMessage())
        {
            m.From = new MailAddress("service@company.com");
            m.To.Add("user@company.com");
            m.Priority = MailPriority.Normal;
            m.IsBodyHtml = true;
            m.Subject = subject;
            m.Body = html;
            var smtp = new SmtpClient("mailhost");
            smtp.Send(m);
        }
    }
    catch (Exception ex)
    {
        throw new MyMailException("Mail error.", ex);
    }
}
I know this question is pre-.NET 4, but version 4 now supports a Dispose method that properly sends a QUIT to the SMTP server. See the MSDN reference and a newer Stack Overflow question.
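For illustration, a minimal sketch of the .NET 4+ pattern, reusing the placeholder host and addresses from the question: both the MailMessage and the SmtpClient go in using blocks, so disposing the client sends the QUIT command to the server:
private static void SendMail(string subject, string html)
{
    using (var m = new MailMessage())
    using (var smtp = new SmtpClient("mailhost")) // Dispose() issues QUIT to the SMTP server (.NET 4+)
    {
        m.From = new MailAddress("service@company.com");
        m.To.Add("user@company.com");
        m.IsBodyHtml = true;
        m.Subject = subject;
        m.Body = html;
        smtp.Send(m);
    }
}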
There are documented issues with the SmtpClient class. I recommend buying a third-party component, since they aren't too expensive. Chilkat makes a decent one.