.NET Core 3.1, Devart for Oracle, First Call to Database is very slow - asp.net-core-3.1

Hi everybody,
I need help with the following problem:
I have a single-page application (React, Webpack) on .NET Core 3.1 with Devart for Oracle 9.11.980.
The first call to the database is very slow (about 20 seconds). I tested different calls from my single-page application:
1. Call a REST API with a connection to a database. The response is a JSON object, and the DB table contains only one record. (Duration: 22 seconds)
2. Call a REST API without a database connection that generates the same JSON response as the call in number 1. (Duration: milliseconds)
3. Call a REST API without a database connection. The response is a string. (Duration: milliseconds)
So it seems to be the connection to the database.
When I reload the app (e.g. F5), the REST API call from number 1 only takes milliseconds.
When I stop and start the app pool in IIS for that application, the REST API from number 1 again needs 22 seconds for the first request.
It is just a simple call to the database:
using (Data.ma06kesch_adminModel context = new Data.ma06kesch_adminModel())
{
    // ToList() executes the query immediately and returns a List, not an IQueryable
    List<BANKOMATKARTE> rows = context.BANKOMATKARTE.ToList();
}
Does anyone have a suggestion?
I am not very familiar with IIS. Maybe the cause is there? I tried a few different settings, but nothing was successful.
Thank you very much.

Try setting "Min Pool Size=0;" in your connection string. If this doesn't help, refer to Fixing slow initial load for IIS.
For a large and complex EF6 model, also look at https://www.fusonic.net/developers/2014/07/09/3-steps-for-fast-entity-framework-6-1-code-first-startup-performance/.
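The pooling keywords go directly into the connection string. As a sketch (host, user, and password are placeholders, and the other keywords kept at their defaults):

```text
User Id=scott;Password=***;Server=MyOracleHost;
Min Pool Size=0;Max Pool Size=100;Pooling=true;
```

With Min Pool Size greater than zero, the provider keeps that many connections open in the pool, so an app-pool recycle does not pay the full connection cost on the first request; the suggested value of 0 rules pooling out as the cause instead.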

Related

Entity Framework timeout not triggered

Recently we ran into an issue where a process that normally took seconds to execute took an hour one day and almost 9 hours the next. That is a separate issue which I am also investigating, but the issue I want to talk about here is that the query in question was called through Entity Framework with a 600-second timeout, and when the process started running long, no timeout exception was triggered.
In each instance of the process running long, it eventually completed successfully with no errors generated. We do not override anything else related to DbContext, so the default timeout of 30 seconds should otherwise still apply.
We validated that the query was still running by checking current executions on the SQL Server instance. My first thought was that the query had finished but goofiness with the Entity Key columns in the entity class for this view caused it to take forever to validate. There is only one Entity Key column on that object, and it is the PK of the view's main table. We also confirmed that the data set was not returning duplicate records. So everything points to the query executing for the duration of the process with no timeout exceptions being generated.
Has anyone ever experienced such a thing with EF in their own environment?
We are on EntityFramework 6.1.3 and SQL Server Standard 2016 SP2 CU4.
CODE SAMPLE:
App.config setting:
EntitiesCommandTimeout=600
if (ConfigurationManager.AppSettings.AllKeys.Contains("EntitiesCommandTimeout"))
{
    // AppSettings values are already strings, so no ToString() is needed
    _entitiesCommandTimeout = Convert.ToInt32(ConfigurationManager.AppSettings["EntitiesCommandTimeout"]);
}
...
using (var db = new myContext())
{
    db.Database.CommandTimeout = _entitiesCommandTimeout;
    List<vwMyView> myViewItems = db.myView.Where(v => v.myColumn == myVariable).ToList();
}
UPDATE 2019-03-19:
I forgot to mention that when one of my developers ran this process locally through Visual Studio, with the config file set to match what we have set up in Prod, the timeout exception was properly triggered.

Extended events not capturing Entity Framework queries (read/updates)

I added an Extended Events session to track the SQL calls that are slowing down my system and leading to timeout exceptions and other errors:
CREATE EVENT SESSION [longrunning_statements] ON SERVER
ADD EVENT sqlserver.sql_statement_completed(
WHERE ([duration]>(2000000) AND [database_id]=(9)))
ADD TARGET package0.event_file(SET filename=N'c:\capture\xe_longrunning_statement.xel',metadatafile=N'c:\capture\xe_longrunning_statement.xem')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
But I noticed that it does not capture the update/read queries or procedure calls issued by Entity Framework; it only logged the queries I ran from SSMS.
Any ideas are appreciated.
UPDATE:
I use EF 6.1, which I think uses batches to save data.
Instead of capturing sqlserver.sql_statement_completed, I would capture sqlserver.sql_batch_completed and sqlserver.rpc_completed for queries and stored procedures issued by an application/API.
If that does not work, then removing the filters (or at least the duration filter, as Andrey suggests in the comments) would probably give us more insight into why the queries are not being captured.
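The event substitution can be sketched like this, keeping the duration filter, database_id, and file target from the original session (drop the WHERE clauses first if nothing shows up):

```sql
CREATE EVENT SESSION [longrunning_statements] ON SERVER
-- rpc_completed catches stored-procedure and parameterized (sp_executesql) calls,
-- which is how Entity Framework typically issues its queries
ADD EVENT sqlserver.rpc_completed(
    WHERE ([duration]>(2000000) AND [database_id]=(9))),
-- sql_batch_completed catches ad-hoc batches sent by applications
ADD EVENT sqlserver.sql_batch_completed(
    WHERE ([duration]>(2000000) AND [database_id]=(9)))
ADD TARGET package0.event_file(SET filename=N'c:\capture\xe_longrunning_statement.xel')
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY=30 SECONDS,STARTUP_STATE=OFF)
GO
```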

Play Framework Database Related endpoints hang after some up time

I have situation which is almost identical to the one described here: Play framework resource starvation after a few days
My application is simple: Play 2.6 + PostgreSQL + Slick 3.
Also, the DB retrieval operations are Slick-only and simple.
The usage scenario is that data comes in through one endpoint and gets stored in the DB (some actors store data in an async fashion, which can fail with the default strategy), and the data is served through REST endpoints.
So far so good.
After a few days, every endpoint that has anything to do with the database stops responding. The application is served on a single t3.medium instance connected to an RDS instance. The connection count to RDS is always the same and stable, mostly idling.
What I have also noticed is that the database actually gets called and the query gets executed, but the request never completes or returns any data.
The simplest endpoint (POST) is for posting feedback, basically a one-liner:
feedbackService.storeFeedback(feedback.deviceId, feedback.message).map(_ => Success)
This Success thing is a wrapper around Ok("something"), so no magic there.
The feedback service stores one record in the DB in the Slick-preferred way, nothing crazy there either.
Once the feedback POST is called, I can see in the psql client that the INSERT query has been executed and the data really ends up in the database, but the HTTP request never completes and no success data gets returned. In parallel, calls to non-DB endpoints that do return values, like the status endpoint, go through without problems.
Production logs don't show anything and restarting helps for a day or two.
I suppose some kind of resource starvation is happening, but which kind, and where, is currently beyond me.
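One starvation pattern worth checking with this stack is a mismatch between Slick's thread pool and its connection pool: if maxConnections exceeds numThreads (or queued DB actions pile up), queries can execute while their completion callbacks never get a thread, which matches "INSERT visible in psql, HTTP request never finishes". A sketch of the relevant settings in application.conf, assuming the play-slick module's configuration layout (pool sizes are illustrative, not tuned values):

```text
# application.conf -- sketch only; names assume the play-slick module
slick.dbs.default {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    driver = "org.postgresql.Driver"
    url = "jdbc:postgresql://localhost:5432/mydb"
    numThreads = 10      # Slick async executor threads
    maxConnections = 10  # keeping this equal to numThreads avoids a known deadlock
    queueSize = 1000
  }
}
```

If the counts already match, the next suspects are blocking calls (e.g. Await) running on Play's default dispatcher inside the actors mentioned above.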

Caching Issue with Teiid Virtual DB with ODATA Query

I have a business scenario where, whenever a new record is loaded into a DB table:
a) A notification is sent to the client, conveying that the data is loaded and ready for querying.
b) Upon receiving the notification, the client makes an OData query to the JBoss virtual DB. OData is supported by the Teiid VDB.
The problem is that the new records (inserted via a manual/automated SQL script) are not returned in the OData query response. It always returns the cached result for the first 5 minutes, because OData has a default cache time setting of 5 minutes.
We want Teiid to always return all records, including the newly inserted ones.
I tried the following options, but they are not working as expected (https://developer.jboss.org/wiki/AHowToGuideForMaterializationcachingViewsInTeiid):
1) Cache hints
/*+ cache(ttl:300000) */ select * from Source.UpdateProduct
2) OPTION NOCACHE
This works when I make a JDBC query to the DB.
Please suggest how to turn off this caching in the case of an OData REST query.
I think the Teiid documentation could help: https://docs.jboss.org/author/display/TEIID/OData+Support
You don't specify the version of Teiid you use, so I am linking the most current version's documentation.
When you go through the docs page, at the bottom there is a Configuration section listing several configurable options.
Doesn't the skiptoken-cache-time option serve your need? Try setting it to a lower value or zero and see if this helps. Just locate the OData WAR, open it, and change the WEB-INF/web.xml file.
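As a sketch, the change inside WEB-INF/web.xml would look something like this (the surrounding servlet declaration is abbreviated, and the exact parameter name and placement should be checked against your Teiid version's docs):

```xml
<servlet>
    <!-- ...existing OData servlet declaration... -->
    <init-param>
        <param-name>skiptoken-cache-time</param-name>
        <!-- 0 disables the result-page cache, so every query hits the VDB -->
        <param-value>0</param-value>
    </init-param>
</servlet>
```

After editing, repackage the WAR and redeploy so the container picks up the new value.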
Jan

ADO.NET SqlData Client connections never go away

An ASP.NET application I am working on may have a couple hundred users trying to connect. We get an error that the maximum number of connections in the pool has been reached. I understand the concept of connection pools in ADO.NET, although in testing I've found that a connection is left "sleeping" on the MS SQL 2005 server days after the connection was made and the browser was closed. I have tried to limit the connection lifetime in the connection string, but this has no effect. Should I raise the maximum number of connections, or have I completely misdiagnosed the real problem?
All of your database connections must either be wrapped in a try...finally:
SqlConnection myConnection = new SqlConnection(connString);
myConnection.Open();
try
{
    // ... use the connection here ...
}
finally
{
    myConnection.Close();
}
Or, much better yet, create a DAL (Data Access Layer) that calls Close() in its Dispose method, and wrap all of your DB calls in a using block:
using (MyQueryClass myQueryClass = new MyQueryClass())
{
    // DB stuff here...
}
A few notes: ASP.NET relies on you to release your connections. The GC will release them after they've gone out of scope, but you cannot count on this behavior, as it may take a very long time to kick in, much longer than you can afford before your connection pool runs out. Speaking of which: ADO.NET actually pools your connections to the database as you request them (recycling old connections rather than releasing them completely and then requesting them anew from the database). This doesn't change anything as far as you are concerned: you still must call Close()!
Also, you can control the pooling through your connection string (e.g. Min/Max Pool Size, etc.). See this article on MSDN for more information.
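For example, the pooling-related keywords in a SqlClient connection string look like this (server and database names are placeholders, and the numbers are illustrative rather than recommended values):

```text
Server=myServer;Database=myDb;Integrated Security=true;
Min Pool Size=5;Max Pool Size=200;Connection Lifetime=300;
```

Note that Connection Lifetime only retires a pooled connection when it is returned to the pool, which is why leaked (never-Closed) connections are unaffected by it; fixing the Close()/using pattern above is what actually empties the pool.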