I have a SQL Server Table-Value Function; let's call it MyTVF which I imported into my Entity Framework data model. I set its return type to an existing entity, MyEntity.
Executing the function directly takes a split-second, but when executing it via EF's dbContext it times out with the following message:
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding.
Executing the SQL generated by EF to execute the function also takes a split-second.
How do I diagnose what is going wrong here?
First, you need to see the SQL that is being produced. You can either call ToString() on the IQueryable query (before calling ToList(), ToArray(), Count(), Single(), SingleOrDefault(), etc.) or, better, you can hook up Database.Log like:
ctx.Database.Log = s => Console.WriteLine(s);
After that, try running the exact same query directly in the database and see what happens.
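For example, a minimal sketch of wiring this up (assuming an EF6 DbContext called MyDbContext that exposes the imported function as MyTVF; the context name, function name, and parameter are placeholders):
using (var ctx = new MyDbContext())
{
    // Dump every command EF sends, including parameter values and timings
    ctx.Database.Log = s => Console.WriteLine(s);

    // Executes the function import; copy the logged SQL (with its parameters) into SSMS
    var results = ctx.MyTVF(someParameter).ToList();
}
If the logged SQL runs quickly in SSMS but the EF call still times out, the usual suspects are differing SET options (such as ARITHABORT) or parameter sniffing rather than the SQL text itself.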
In the unlikely case that you need to increase the execution timeout, you can set it like this:
((IObjectContextAdapter)ctx).ObjectContext.CommandTimeout = somevalue;
Related
I have a simple query
var count = await _context.ExchangeRate.AsNoTracking().CountAsync(u => u.Currency == "GBP");
The table has only 3 columns and 10 rows of data.
When I try to execute the query from a .NET 5 project, it takes around 2.3 seconds the first time and 500ms (+/- 100) for subsequent requests. When I run the same request in SSMS it returns almost instantly (45ms as seen in SQL Profiler).
I have implemented ARITHABORT ON in EF from here
In SQL Profiler I can see it setting ARITHABORT ON, but the query still takes the same time for the first and subsequent requests.
How do I achieve the same speed as the SSMS query? I need the query to run really fast, as my project has a requirement to return the response within 1 second (I need to make at least 5 simple DB calls; if one call takes 500ms then it blows the 1-second requirement).
Edit
I even tried with ADO.NET. The execution time as seen in SQL Profiler is 40ms, whereas by the time it reaches the code it is almost 400ms. That is a big difference.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand())
{
    var sql = "select count(ExchangeRate) as cnt from ExchangeRate where Currency = 'GBP'";
    cmd.CommandText = "SET ARITHABORT ON; " + sql;
    cmd.CommandType = CommandType.Text;
    cmd.Connection = conn;
    conn.Open();

    var t1 = DateTime.Now;               // time only the ExecuteReader call
    using (var rd = cmd.ExecuteReader())
    {
        var t2 = DateTime.Now;
        TimeSpan diff = t2 - t1;
        Console.WriteLine((int)diff.TotalMilliseconds);

        while (rd.Read())
        {
            Console.WriteLine(rd["cnt"].ToString());
        }
    }
}
Your "first run" scenario is generally the one-off static initialization of the DbContext. This is where the DbContext works out its mappings for the first time and will occur when the first query is executed. The typical approach to avoid this occurring for a user is to have a simple "warm up" query that runs when the service starts up.. For instance after your service initializes, simply put something like the following:
// Warm up the DbContext
using (var context = new AppDbContext())
{
var hasUser = context.Users.Any();
}
This also serves as a quick start-up check that the database is reachable and responding. The query itself will do a very quick operation, but the DbContext will resolve its mappings at this time so any newly generated DbContext instances will respond without incurring that cost during a request.
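If the DbContext comes from dependency injection (as in an ASP.NET Core app), the same warm-up can be done by resolving it from a scope at startup. A rough sketch, reusing the AppDbContext name from above and the ExchangeRate set from the question (adjust to your real types), placed in Program.Main after the host is built but before Run():
using (var scope = host.Services.CreateScope())
{
    var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    var hasData = context.ExchangeRate.Any();   // forces the one-off model building up front
}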
As for raw performance, if it isn't a query that is expected to take a while and tie up a request, don't make it async. Asynchronous requests are not faster; they are actually a bit slower. Using async requests against the DbContext is about ensuring your web server / application thread stays responsive while potentially expensive database operations are processing. If you want a response as quickly as possible, use a synchronous call.
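In other words, for the query in the question the trade-off looks like this (a sketch using the same names as the question; in EF Core the CountAsync extension lives in Microsoft.EntityFrameworkCore):
// Asynchronous: frees the request thread while waiting, but adds state-machine overhead
var countAsync = await _context.ExchangeRate.CountAsync(u => u.Currency == "GBP");

// Synchronous: lowest per-call latency for a quick, cheap lookup
var count = _context.ExchangeRate.Count(u => u.Currency == "GBP");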
Next, ensure that any fields you are filtering against, in this case Currency, are indexed. Having a field called Currency in your entity as a String rather than a CurrencyId FK (int) pointing to a Currency record is already an extra indexing expense as indexes on integers are smaller/faster than those on strings.
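Assuming EF Core (which a .NET 5 project implies), an index on Currency can be declared in OnModelCreating; a minimal sketch:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Index the column used in the WHERE clause
    modelBuilder.Entity<ExchangeRate>()
        .HasIndex(e => e.Currency);
}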
You also don't need to bother with AsNoTracking when using a Count query. AsNoTracking applies solely when you are returning entities (ToList/ToArray/Single/First, etc.) and avoids having the DbContext hold onto a reference to each returned entity. When you use Count/Any, or project properties out of entities with Select, there is no entity returned to track.
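A quick illustration of where AsNoTracking does and does not matter (same names as above):
// Entities are materialized here, so AsNoTracking avoids change-tracking overhead
var rates = _context.ExchangeRate
    .AsNoTracking()
    .Where(u => u.Currency == "GBP")
    .ToList();

// Count returns a scalar; there is nothing to track, so AsNoTracking adds nothing
var count = _context.ExchangeRate.Count(u => u.Currency == "GBP");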
Also consider network latency between where your application code is running and the database server. Are they on the same machine, or is there a network connection in play? How does this compare with where you run your SSMS query? Using a profiler you can see what SQL EF is actually sending to the database. Everything beyond that in terms of time is the cost of getting the request to the DB, getting the resulting data back to the requester, and parsing the response (plus, when entities are returned, allocating and populating them and checking them against existing tracked references).
Lastly, to ensure you are getting peak performance, keep your DbContext lifetimes short. If a DbContext is kept open and has had a number of tracking queries run against it (selecting entities without AsNoTracking), those tracked entity references accumulate and can have a negative performance impact on future queries, even ones using AsNoTracking, because EF checks through its tracked references for entities that might be applicable/related to the new queries. Many times I see developers assume DbContexts are "expensive", so they opt to instantiate them as rarely as possible, only to end up making operations more expensive over time.
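As a sketch of the difference (reusing AppDbContext/ExchangeRate from the earlier snippets, which may not match your real types; currency codes are placeholders):
// Long-lived context: tracked entities accumulate across queries and weigh down later ones
var longLived = new AppDbContext();
var gbp = longLived.ExchangeRate.Where(u => u.Currency == "GBP").ToList();  // tracked
var eur = longLived.ExchangeRate.Where(u => u.Currency == "EUR").ToList();  // tracked
// ...many more tracked queries against the same instance...

// Short-lived context: create, query, dispose per unit of work
using (var shortLived = new AppDbContext())
{
    var count = shortLived.ExchangeRate.Count(u => u.Currency == "GBP");
}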
With all that considered, EF will never be as fast as raw SQL. It is an ORM designed to provide convenience to .NET applications when it comes to working with data. That convenience of working with entity classes, rather than sanitizing and writing your own raw SQL every time, comes with a cost.
I am getting this error, and I have not been able to resolve it:
System.Data.SqlClient.SqlException: 'The transaction operation cannot be performed because there are pending requests working on this transaction.'
What is going on is that a normal data operation is taking place as part of a Controller Action.
At the same time, a Filter is running that logs the action to a database.
this._orderEntryContext.ServerLog.Add(serverLog);
return this._orderEntryContext.SaveChanges() > 0;
This is where the error occurs.
So it seems to me that there are two SaveChanges calls going on at the same time, and so the transaction gets fouled up.
I'm not sure how to resolve this. Both calls use the same context, obtained through DI. A workaround was to create a second context manually, but I would rather stick to the DI pattern. I just don't know how to create a second DbContext through DI, or even whether that is a good idea.
Perhaps I should be using SaveChangesAsync() on both calls to ensure that they do not step on each other?
Turns out the answer to this was to make the Context a transient service:
services.AddDbContext<OrderEntryContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")), ServiceLifetime.Transient);
Then, I changed all repositories to also be transient:
services.AddTransient<AssociateRepository, AssociateRepository>();
I am using Java-based configuration for my Spring Batch job. I am calling a stored procedure with writer.setSql("call proc (:_name)").
The data is getting inserted through the procedure. However, I am getting an exception.
Thanks
Note: I am skipping "Exception.class" in my step.
The issue is due to the assertion of updates in the JdbcBatchItemWriter. The proc does not return the number of rows affected the way a SQL statement does, and the Java code throws the exception when the update count is 0. The solution to the problem stated above is to set assertUpdates to false: writer.setAssertUpdates(false).
However, the question remains as to the best writer to use to execute DB objects like procedures or functions, and how transactions should be managed.
Refer to the source code from the url below:
http://grepcode.com/file/repo1.maven.org/maven2/org.springframework.batch/spring-batch-infrastructure/3.0.0.RELEASE/org/springframework/batch/item/database/JdbcBatchItemWriter.java
I use Java configuration. Setting the writer to skip the 'assert updates' check does the job:
writer.setAssertUpdates(false);
I'm trying to understand how a Java (client) application that communicates, through JDBC, with a pgSQL database (server) can "catch" the result produced by a query that will be fired (using a trigger) whenever a record is inserted into a table.
So, to clarify: via JDBC I install a trigger procedure prepared to execute a query whenever a record is inserted into a given database table, and that query's execution will produce an output (wrapped in a ResultSet, I suppose). My problem is that I have no idea how the client will become aware of those results, which are produced asynchronously.
I wonder if JDBC supports any "callback" mechanism able to catch the results produced by a query that is fired through a trigger procedure under the "INSERT INTO table" condition. And if there is no such "callback" mechanism, what is the best approach to achieve this result?
Thank you in advance :)
Triggers can't return a resultset.
There's no way to send such a result to the JDBC driver.
There are a few dirty hacks you can use to get results from a trigger to the client, but they're all exactly that. Things like:
DECLARE a cursor for the resultset, then send the cursor name as a NOTIFY payload, so the app can FETCH ALL FROM <cursorname>;
Create a TEMPORARY table and report the name via NOTIFY
It is more typical to append anything the trigger needs to communicate to the app to a table that exists for that purpose and have the app SELECT from it after the operation that fired the trigger ran.
In most cases if you need to do this, you're probably using a trigger where a regular function is a better fit.
I was trying the Dapper ORM, and recently they added async query support. I googled about that; it is wonderful if you have heavy traffic on your site. I was trying it with PostgreSQL and Dapper. Now, if I pass a simple connection string to the connection it works fine. But according to a couple of articles it is not true async; if I want that, I need an async connection string.
Now, I don't know how to do this with PostgreSQL and Npgsql. Here is the complete article for reference, where the author explains how to do it with SQL Server.
What do I need to do if I want the same with PostgreSQL?
Please let me know if any further information is needed.
The author of this article is somewhat wrong - in .NET 4.5 the AsynchronousProcessing property is ignored because it is no longer required. You can just start calling the Async methods of SqlClient without any special connection strings.
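For example, against SQL Server something like this is already genuinely asynchronous with a plain connection string (a sketch; connectionString and SomeTable are placeholders, and it assumes using Dapper; and using System.Data.SqlClient;):
using (var conn = new SqlConnection(connectionString))
{
    await conn.OpenAsync();

    // Dapper's async extension methods ride on SqlClient's async I/O
    var value = await conn.ExecuteScalarAsync<int>("select count(*) from SomeTable");
}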
Whether the operations will execute asynchronously, depends on the database provider. For example, the default implementation of DbCommand.ExecuteDbDataReaderAsync actually executes synchronously and blocks the calling thread. SqlCommand overrides this method and executes asynchronously.
Unfortunately, NpgsqlCommand doesn't override this method so you are left with synchronous execution only.