Entity Framework timeout not triggered

Recently we ran into an issue where a process that normally took seconds to execute one day took an hour, and the next day took almost 9 hours. That is a separate issue which I am also investigating, but the issue I want to discuss here is that the query in question was called through Entity Framework with a 600-second timeout, and when the process started running long, no timeout exception was triggered.
In each instance of the process running long, it eventually completed successfully with no errors generated. We do not override anything else related to the DbContext, so even if our 600-second setting were somehow not applied, the default timeout of 30 seconds should still have kicked in.
We validated that the query was still running by checking current executions on the SQL Server instance. My first thought was that the query had actually finished, but goofiness with Entity Key columns in the entity class for this view caused it to take forever to validate. However, there is only one Entity Key column on that object, and it is the PK of the main table of the view. We also confirmed that the dataset was not returning duplicate records. So everything points to the query executing for the duration of the process with no timeout exceptions being generated.
Has anyone ever experienced such a thing with EF in their own environment?
We are on EntityFramework 6.1.3 and SQL Server Standard 2016 SP2 CU4.
CODE SAMPLE:
In app.config / web.config:
<add key="EntitiesCommandTimeout" value="600" />
if (ConfigurationManager.AppSettings.AllKeys.Contains("EntitiesCommandTimeout"))
{
    _entitiesCommandTimeout = Convert.ToInt32(ConfigurationManager.AppSettings["EntitiesCommandTimeout"]);
}
...
using (var db = new myContext())
{
    db.Database.CommandTimeout = _entitiesCommandTimeout;
    List<vwMyView> myViewItems = db.myView.Where(v => v.myColumn == myVariable).ToList();
}
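One thing we still plan to verify (a minimal sketch; the Console call just stands in for whatever logging you use) is what timeout the context actually ends up with in Prod, to confirm the config value is really being read:

using (var db = new myContext())
{
    db.Database.CommandTimeout = _entitiesCommandTimeout;
    // In EF6, Database.CommandTimeout is readable; null means the provider default (30 seconds)
    Console.WriteLine("Effective command timeout: " + db.Database.CommandTimeout);
}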
UPDATE 2019-03-19:
I forgot to mention that when one of my developers ran this process locally through Visual Studio, with the config file set to match what we have in Prod, the timeout exception was properly triggered.

Related

Entity Framework Arithabort ON, but still query is slow

I have a simple query
var count = await _context.ExchangeRate.AsNoTracking().CountAsync(u => u.Currency == "GBP");
The table has only 3 columns and 10 rows of data.
When I execute the query from the .NET 5 project, it takes around 2.3 seconds the first time and 500ms (+/- 100ms) for subsequent requests. When I run the same request in SSMS, it returns almost immediately (45ms as seen in SQL Profiler).
I have implemented ARITHABORT ON in EF from here.
I can see in SQL Profiler that ARITHABORT ON is being set, but the query still takes the same time for the first request and subsequent requests.
How do I achieve the same speed as the SSMS query? I need the query to run really fast, as my project has a requirement to return the response within 1 second (it needs to make at least 5 simple DB calls; if one call takes 500ms, it exceeds the 1-second budget).
Edit
I even tried ADO.NET. The execution time as seen in SQL Profiler is 40ms, whereas by the time it reaches the code it is almost 400ms. That is a big difference.
using (var conn = new SqlConnection(connectionString))
{
    var sql = "select count(ExchangeRate) as cnt from ExchangeRate where Currency = 'GBP'";
    SqlCommand cmd = new SqlCommand();
    cmd.CommandText = "SET ARITHABORT ON; " + sql;
    cmd.CommandType = CommandType.Text;
    cmd.Connection = conn;
    conn.Open();
    var t1 = DateTime.Now;
    var rd = cmd.ExecuteReader();
    var t2 = DateTime.Now;
    TimeSpan diff = t2 - t1;
    Console.WriteLine((int)diff.TotalMilliseconds);
    while (rd.Read())
    {
        Console.WriteLine(rd["cnt"].ToString());
    }
    conn.Close();
}
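For reference, a Stopwatch-based variant of the timing (a minimal sketch reusing the cmd from above; Stopwatch has finer resolution than DateTime.Now, and this times the full read loop rather than just ExecuteReader, which returns before the rows are consumed):

var sw = System.Diagnostics.Stopwatch.StartNew();
using (var rd = cmd.ExecuteReader())
{
    while (rd.Read())
    {
        Console.WriteLine(rd["cnt"].ToString());
    }
}
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds);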
Your "first run" scenario is generally the one-off static initialization of the DbContext. This is where the DbContext works out its mappings for the first time, and it occurs when the first query is executed. The typical approach to keep this cost from hitting a user is a simple "warm up" query that runs when the service starts up. For instance, after your service initializes, simply put something like the following:
// Warm up the DbContext
using (var context = new AppDbContext())
{
    var hasUser = context.Users.Any();
}
This also serves as a quick start-up check that the database is reachable and responding. The query itself will do a very quick operation, but the DbContext will resolve its mappings at this time so any newly generated DbContext instances will respond without incurring that cost during a request.
As for raw performance, if it isn't a query that is expected to take a while and tie up a request, don't make it async. Asynchronous requests are not faster; they are actually a bit slower. Using async requests against the DbContext is about keeping your web server / application thread responsive while potentially expensive database operations are processing. If you want a response as quickly as possible, use a synchronous call.
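For example (a minimal sketch; _context is assumed to be the same DbContext as in the question), the synchronous equivalent of the original query is simply:

var count = _context.ExchangeRate.Count(u => u.Currency == "GBP");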
Next, ensure that any fields you are filtering against, in this case Currency, are indexed. Having a field called Currency in your entity as a String rather than a CurrencyId FK (int) pointing to a Currency record is already an extra indexing expense as indexes on integers are smaller/faster than those on strings.
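If the project is on EF Core (an assumption on my part, since the question targets .NET 5), the index can be declared in OnModelCreating and applied through a migration:

// In the DbContext (EF Core fluent API): a non-unique index to support the Currency filter
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<ExchangeRate>()
        .HasIndex(e => e.Currency);
}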
You also don't need to bother with AsNoTracking when using a Count query. AsNoTracking applies solely when you are returning entities (ToList/ToArray/Single/First, etc.) to avoid having the DbContext hold onto a reference to the returned entity. When you use Count/Any, or a projection that returns properties from entities using Select, there is no entity returned to track.
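To illustrate the distinction (same assumed _context):

// AsNoTracking helps here: entities are materialized and would otherwise be tracked
var rates = _context.ExchangeRate.AsNoTracking()
    .Where(u => u.Currency == "GBP")
    .ToList();

// AsNoTracking adds nothing here: Count returns a scalar, so there is nothing to track
var count = _context.ExchangeRate.Count(u => u.Currency == "GBP");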
Also consider network latency between where your application code is running and the database server. Are they the same machine, or is there a network connection in play, and how does that compare to where you are running the SSMS query? Using a profiler you can see what SQL EF is actually sending to the database. Everything else in terms of time is the cost of getting the request to the DB, getting the resulting data back to the requester, and parsing that response (and, in the case where you are returning entities: allocating, populating, and checking them against existing tracked references).
Lastly, to ensure you are getting peak performance, keep your DbContext lifetimes short. If a DbContext is kept open and has had a number of tracking queries run against it (selecting entities without AsNoTracking), those tracked entity references accumulate and can have a negative performance impact on future queries, even ones that use AsNoTracking, because EF checks through its tracked references for entities that might be applicable/related to the new queries. Many times I see developers assume DbContexts are "expensive", so they opt to instantiate them as rarely as possible, only to end up making operations more expensive over time.
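In practice that just means giving each unit of work its own context (a minimal sketch, reusing the AppDbContext name from the warm-up example and the ExchangeRate set from the question):

// One short-lived context per unit of work; disposing it discards all tracked references
using (var context = new AppDbContext())
{
    var gbpCount = context.ExchangeRate.Count(u => u.Currency == "GBP");
    // ... use the result, then let the context be disposed
}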
With all that considered, EF will never be as fast as raw SQL. It is an ORM designed to provide convenience to .NET applications when it comes to working with data. That convenience of working with entity classes, rather than sanitizing and writing your own raw SQL every time, comes with a cost.

.NET Core 3.1, Devart for Oracle, First Call to Database is very slow

Hi everybody,
I need help with the following problem:
I have a single-page application (React, Webpack) with .NET Core 3.1 and Devart for Oracle 9.11.980.
The first call to the database is very slow (20 seconds). I tested different calls from my single-page application:
1. Call a REST API with a connection to a database. The response is a JSON object. The DB table contains only one record. (Duration: 22 seconds)
2. Call a REST API without a connection to a database, which generates the same JSON response as the call in number 1. (Duration: milliseconds)
3. Call a REST API without a connection to a database. The response is a string. (Duration: milliseconds)
So it seems to be the connection to the database.
When I restart the app (e.g. F5), the call to the REST API from number 1 takes only milliseconds.
When I stop and start the AppPool on IIS for that application, the REST API from number 1 again needs 22 seconds for the first request.
It is just a simple call to the database:
using (Data.ma06kesch_adminModel context = new Data.ma06kesch_adminModel())
{
    List<BANKOMATKARTE> query = context.BANKOMATKARTE.ToList();
}
Does anyone have a suggestion?
I am not very familiar with IIS. Maybe it is something there? I tried a few different settings, but nothing was successful :-/.
Thank you very much.
Try setting "Min Pool Size=0;" in your connection string. If this doesn't help, refer to Fixing slow initial load for IIS.
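For illustration, the option simply goes into the connection string; a hypothetical example (the user, password, and server values are placeholders, and the exact key names depend on the Devart provider version):

var connectionString = "User Id=SCOTT;Password=tiger;Server=ORCL;Min Pool Size=0;";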
In the case of a large and complex EF6 model, also look at https://www.fusonic.net/developers/2014/07/09/3-steps-for-fast-entity-framework-6-1-code-first-startup-performance/.

Spring batch instance id duplicate key error, trying to start from #1

I am copying Java code (using Spring Boot / Spring Batch) and the database from the dev server to my local desktop and running it there, and I am getting an error.
It works fine on the dev server. Locally, Spring Batch is resetting the job instance ID to 1, causing a primary key error. Is there any option in Spring Batch so that it starts with the next instance ID instead of 1? Please let me know.
I referred to the Stack Overflow link below; it seems related, but it was posted a few years back and the referenced links do not work anymore.
Duplicate Spring Batch Job Instance
@Configuration
@EnableBatchProcessing
public class Jobclass {
    // Rest of the code with the Job bean and steps, which works fine on the dev server
}
Error:
com.microsoft.sqlserver.jdbc.SQLServerException: Violation of PRIMARY KEY
constraint 'PK__BATCH_JO__4848154AFB5435C7'. Cannot insert duplicate key
in object 'dbo.BATCH_JOB_INSTANCE'. The duplicate key value is (5).
I've had the same thing happen to me when moving an anonymized production database to another system. It turns out that the anonymization tool in question (PostgreSQL Anonymizer) has a bug which results in stripping the commands that set the next value for the exported sequences, so that was the root cause.
This would also cause the ID reported in the stack trace to be incremented by 1 with every attempt, since the sequence was erroneously starting at 1 while a lot of previous executions were stored in Spring Batch's tables.
When I sidestepped the issue by setting the next value myself, the problem vanished. In my case, this amounted to:
SELECT pg_catalog.setval('public.batch_job_execution_seq', 6482, true);
SELECT pg_catalog.setval('public.batch_job_seq', 6482, true);
SELECT pg_catalog.setval('public.batch_step_execution_seq', 6482, true);
Is there any option in spring batch so that it starts with next instance id
To answer your question, the "option" you are looking for is the RunIdIncrementer. It will increment a job parameter called "run.id" each time, so you will have a new instance on each run.
However, this is not how I would fix the issue (see my comment). You need to check why this duplicate key exception is happening and fix it. Check if you are launching the job with the same parameters, resulting in the same instance (and even if that happens, you should not get such an exception if the transaction isolation level of your job repository is correctly configured; I would expect a JobInstanceAlreadyCompleteException if the last execution succeeded, or a JobExecutionAlreadyRunningException if the last execution failed and another one is currently running).

Entity Framework Query execution timeout issue

I met the following issue when working with Entity Framework v6.1.3.
The query is:
var query = DataContext.ExternalPosts.Include(e => e.ExternalUser)
.Where(e => e.EventId == eventId)
.OrderByDescending(e => e.PublishedAt)
.Take(35);
When I do
query.ToList()
I get a "The wait operation timed out" exception. But when I take the query from
query.ToString()
and execute it directly on the server (via Management Studio), it takes about 150ms.
I have increased the CommandTimeout period to 180 and managed to get the result after 50 seconds via Entity Framework.
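(In EF6 the command timeout is set like so; a minimal sketch using the DataContext from the query above:)

DataContext.Database.CommandTimeout = 180; // seconds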
When I remove '.OrderByDescending' or '.Include', it works correctly; I didn't measure the time, but it is quite fast.
Here are the statistics: http://grab.by/KsQ2
I use AzureDb.
UPDATE:
New day, new situation: today it works quite normally with the same query and on the same set of data. Could this be an Azure services issue?
Any ideas?
This might help: msdn forum link

What things should I consider when using System.Transactions in my EF project?

Background
I have both an MVC app and a Windows service that access the same data access library, which utilizes Entity Framework. The Windows service monitors certain activity on several tables and performs some calculations.
We are using the DAL project against several hundred databases, generating the connection string for the context at runtime.
We have a number of functions (both stored procedures and .NET methods which call on EF entities) which, because of the scope of data we are using, are VERY db-intensive and have the potential to block one another.
The problem
The Windows service is not so important that it can't wait. If something must be blocked, the Windows service can be. Earlier I found a number of SO questions stating that System.Transactions is the way to go, setting your transaction isolation level to READ UNCOMMITTED to minimize locks.
I tried this, and I may be misunderstanding what is going on, so I need some clarification.
The method in the windows service is structured like so:
private bool _stopMe = false;
public void Update()
{
EntContext context = new EntContext();
do
{
var transactionOptions = new System.Transactions.TransactionOptions();
transactionOptions.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
using (var transactionScope = new System.Transactions.TransactionScope( System.Transactions.TransactionScopeOption.Required, transactionOptions))
{
List<Ent1> myEnts = (from e....Complicated query here).ToList();
SomeCalculations(myEnts);
List<Ent2> myOtherEnts = (from e... Complicated query using entities from previous query here).ToList();
MakeSomeChanges(myOtherEnts);
context.SaveChanges();
}
Thread.Sleep(5000); //wait 5 seconds before allow do block to continue
}while (! _stopMe)
}
When I execute my second query, an exception gets thrown:
The underlying provider failed on Open.
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please
enable DTC for network access in the security configuration for MSDTC using the
Component Services Administrative tool.
The transaction manager has disabled its support for remote/network
transactions. (Exception from HRESULT: 0x8004D024)
Am I right to assume that I should not be calling more than one query in that using block? The first query returned just fine. This is being performed on one database at a time (other instances are being run in different threads, and nothing from this thread touches the others).
My question is, is this how it should be used or is there more to this that I should know?
Of Note: This is a monitoring function, so it must be run repeatedly.
In your code you are using a transaction scope. It looks like the first query uses a lightweight DB transaction. When the second query comes, the transaction scope promotes the transaction to a distributed transaction.
The distributed transaction uses MSDTC.
Here is where the error comes from: by default, MSDTC is not enabled. Even if it is enabled and started, it needs to be configured to allow a remote client to create a distributed transaction.
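One commonly suggested way to keep the transaction lightweight (a sketch under assumptions: EF6, the EntContext and query shape from your question, and SQL Server as the backing store) is to open the context's connection once inside the scope, so both queries run on the same connection instead of enlisting two connections and forcing promotion:

var transactionOptions = new System.Transactions.TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
};
using (var transactionScope = new System.Transactions.TransactionScope(
    System.Transactions.TransactionScopeOption.Required, transactionOptions))
using (var context = new EntContext())
{
    // Opening the connection manually stops EF6 from opening/closing it per query,
    // so the scope only ever sees one connection and can stay a local transaction.
    context.Database.Connection.Open();

    // ... run both queries and SaveChanges here ...

    // Without Complete() the scope rolls back on dispose; note that the snippet in
    // the question never calls it, so its SaveChanges would be rolled back too.
    transactionScope.Complete();
}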