A very strange problem is happening with Entity Framework in our ASP.NET web application hosted on IIS. We have a simple query that runs slower the longer the application has been running. So far the issue cannot be replicated when debugging locally through Visual Studio.
var query = context.Stocktakes
.Include(s => s.Items)
.Include(s => s.Items.Select(i => i.Product))
.Where(s => s.StocktakeID == stocktakeID && s.SiteID == siteID)
.AsNoTracking()
.SingleOrDefault();
After restarting the app pool, the query runs in less than ¼ of a second. Once there has been a small amount of traffic on the application, query performance starts to degrade, taking 6 seconds and, so far, up to 22 seconds after the application has been hosted for several hours.
The same query always takes less than ¼ of a second when run through SQL Server Management Studio. Clearing the execution plans has little to no effect, so I do not believe the issue is due to parameter sniffing. I have created a copy of the application running under the same app pool, and under a different app pool with only myself connecting to it, and the query performance does not degrade; it remains fast, with the query taking less than ¼ of a second.
Does anyone have any ideas on what is happening?
Edit: The problem does not relate to nested projections, as there is no projection to a non-entity class in the example given. The example shows querying entities directly from the context. The link given for nested projections also does not explain why the query performance would degrade over time.
The reason for the performance drop was the following line in the web.config file:
<trust legacyCasModel="true" level="Full" />
I assume that, because Entity Framework relies on dynamic code generation for some of its operations, the legacyCasModel setting is not compatible with Entity Framework.
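In other words, the fix was to stop opting in to the legacy CAS model: legacyCasModel defaults to false, so removing the attribute, leaving the element as something like <trust level="Full" />, was enough.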
For anyone stumbling upon this post who turned on this setting to improve the performance of their local SSRS reports, the following posts helped me disable the setting above and still generate the reports quickly.
Local report rdlc to pdf very slow
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/6d89e2ce-3528-465f-9740-7e22aa7b7aae/slow-performance-with-dynamic-grouping-and-reportviewer-in-local-mode?forum=sqlreportingservices
Related
I have already combed through this old article:
Why is Entity Framework taking 30 seconds to load records when the generated query only takes 1/2 of a second?
but had no success.
I have tested the query:
without lazy loading (not using .Include of related entities) and
without merge tracking (using AsNoTracking)
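For reference, the no-tracking variant of the test looked roughly like this; the context name is a placeholder and the entity/DbSet names come from the query further down, so treat it as a sketch rather than the verbatim test code:

// Rough sketch of the no-tracking test (MyDbContext is a placeholder name):
using (var context = new MyDbContext())
{
    var activations = context.MatchActivations
        .AsNoTracking()   // skip change/merge tracking; EF only materializes the objects
        .ToList();        // one round trip that returns every row
}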
I do not think I can easily switch to compiled queries in general, due to the complexity of the queries and the use of a Code First model, but let me know if your experience is otherwise...
Setup
Entity Framework '4.4' (.NET 4.0 with the EF 5 install)
Code First model and DbContext
Testing directly on the SQL Server 2008 machine hosting the database
Query
It's just returning simple fields from one table:
SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Active] AS [Active],
[Extent1].[ChangeUrl] AS [ChangeUrl],
[Extent1].[MatchValueSetId] AS [MatchValueSetId],
[Extent1].[ConfigValueSetId] AS [ConfigValueSetId],
[Extent1].[HashValue] AS [HashValue],
[Extent1].[Creator] AS [Creator],
[Extent1].[CreationDate] AS [CreationDate]
FROM [dbo].[MatchActivations] AS [Extent1]
The MatchActivations table has relationships with other tables, but for this purpose related entities are loaded explicitly as needed.
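As a rough sketch of what explicit loading looks like here (the navigation property name is an assumption, not the real model):

// Load the entity itself first:
var activation = context.MatchActivations.Single(a => a.Id == id);

// Then explicitly load a related collection only when it is actually needed
// (MatchValues is a hypothetical navigation property):
context.Entry(activation)
       .Collection(a => a.MatchValues)
       .Load();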
Results (from SQL Server Profiler)
For Microsoft SQL Server Management Studio Query: CPU = 78 msec., Duration = 587 msec.
For EntityFrameworkMUE: CPU = 31 msec., Duration = 8216 msec.!
Does anyone know, besides suggesting the use of compiled queries, if there is anything else to be aware of when using Entity Framework for such a simple query?
A number of people have run into problems where cached query execution plans, due to parameter sniffing, cause SQL Server to produce a very inefficient execution plan when running a query through ADO.NET, while running the exact same query directly in SQL Server Management Studio uses a different execution plan because some session options are set differently by default.
Some people have reported success in forcing a refresh of the query execution plans by running one or both of the following commands:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
But a more long-term, targeted solution to this problem would be to use query hints like OPTIMIZE FOR and OPTION (RECOMPILE), as described in this article, to help ensure that good execution plans are chosen more consistently in the first place.
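As a rough illustration (not from the original post), a single problem query can carry such a hint by dropping to raw SQL through the DbContext; the entity type and columns below are taken from the query above, and the exact usage is a sketch:

// Sketch: run the same SELECT with OPTION (RECOMPILE) so SQL Server compiles
// a fresh plan for this statement on every execution.
var activations = context.Database.SqlQuery<MatchActivation>(
    @"SELECT [Id], [Active], [ChangeUrl], [MatchValueSetId], [ConfigValueSetId],
             [HashValue], [Creator], [CreationDate]
      FROM [dbo].[MatchActivations]
      OPTION (RECOMPILE)").ToList();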
I think the framework is doing something funky if what you say is true, i.e. running the query in Management Studio takes half a second while Entity Framework takes 8.2 seconds. My hunch is that it's trying to do something with that 25K+ record set (perhaps bind it to something else).
Can you download NP .NET Profiler and profile your app once? http://www.microsoft.com/en-in/download/details.aspx?id=35370
This nifty little program is going to record every method call and its execution time, and basically give you info from under the hood on where it's spending those 7+ seconds. If that does not help, I also recommend trying out the JetBrains .NET profiler. https://www.jetbrains.com/profiler/
The previous answer suggests that the execution plan can be off, and that's true in many cases, but it's also worth looking under the hood sometimes to determine the cause.
My thanks to Kalagen and others who responded to this - I did come to a conclusion, but forgot about this post.
It turns out it is the number of records being returned multiplied by the processing time (LINQ/EF, I presume) needed to turn the raw SQL data back into objects on the client side. I set up Wireshark on the SQL server to monitor the network traffic between it and the client machines post-query and discovered:
There is a constant stream of network traffic between the SQL server and the client machine
the rate of packet processing varies greatly between different client machines (up to 8x)
While that is occurring, the SQL server CPU utilization is < 25% and no resource starvation seems to be happening (working set, virtual memory, thread and handle counts, etc.)
So it is basically the constant conversion of the results back into EF objects.
The query in question, by the way, was part of a 'performance' unit test, so we ended up culling it down to a more reasonable, typical web-page load of 100 records in under 1 second, which passes easily.
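Roughly speaking, the trimmed-down shape was something like this (entity name as above; the ordering and page size are illustrative assumptions, not the actual test code):

// Sketch: keep the materialization cost bounded by only pulling a page of rows.
var page = context.MatchActivations
    .AsNoTracking()      // no change tracking for read-only display
    .OrderBy(a => a.Id)
    .Take(100)           // a typical web-page load
    .ToList();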
If anyone wants to chime in on the details of how Entity Framework processes records post-query, I'm sure that would be useful to know.
It was an interesting discovery that the processing time depended more heavily on the client machine than on the SQL server machine (this is an intranet application).
Not sure if this is a question for here or the DBA forum, but here is some background to my problem. I have an application written in C# which uses the Gentle Framework to interface with our database. The issue I am running into has shown up on two different servers, both running SQL Server 2008 R2. One server is running Windows Server 2003 with 16GB of RAM, and the other is running Windows Server 2008 R2 with 64GB of RAM. Also, we are running a gigabit intranet, so I doubt it is either a resource or a network issue.
That being said, my issue is... when inserting into the database, each insert takes a little more time, starting at about 30ms and building up to about 1900ms, before one suddenly takes 189549ms (a little over 3 minutes). After that 3-minute insert, the time drops back down to about 10ms and starts building up again. Here is a link to my log file showing the time in ms and the query. Unfortunately, due to what has been called "proprietary", I can't share the exact inserts with you, but I can answer general questions about the details.
Some additional details:
The linked log file only shows queries which took over 10ms; there are many more queries in the log between the inserts, but they only take 1-2ms. I can link one with all of the queries.
I have looked at other questions regarding similar issues, but they have either been about RAID or about multiple inserts vs. multiple-values statements.
These are parameterized queries
The tables are indexed, and it is not possible to remove the indexes because other applications use these tables and depend on them.
I don't think I have much choice in changing how records are inserted because Gentle is generating the SQL.
I found this question which I think is close.
I believe that Gentle is doing all of this in one transaction, and I have been told that "it should be done in 1 transaction, and we are not going to break it apart".
I have an application written on Play Framework 1.2.4 with Hibernate (default C3P0 connection pooling) and a PostgreSQL database (9.1).
Recently I turned on slow query logging (>= 100 ms) in postgresql.conf and found some issues.
But when I tried to analyze and optimize one particular query, I found that it is blazing fast in psql (0.5 - 1 ms), in comparison to 200-250 ms in the log. The same thing happened with the other queries.
The application and the database server are running on the same machine and communicate over the localhost interface.
JDBC driver - postgresql-9.0-801.jdbc4
I wonder what could be wrong, because the query duration in the log is calculated from database processing time only, excluding external factors like network round trips, etc.
Possibility 1: If the slow queries occur occasionally or in bursts, it could be checkpoint activity. Enable checkpoint logging (log_checkpoints = on), make sure the log level (log_min_messages) is 'info' or lower, and see what turns up. Checkpoints that are taking a long time or happening too often suggest you probably need some checkpoint/WAL and bgwriter tuning. This isn't likely to be the cause if the same statements are always slow and others always perform well.
Possibility 2: Your query plans are different because you're running them directly in psql, while Hibernate, via PgJDBC, will at least sometimes be doing a PREPARE and EXECUTE (at the protocol level, so you won't see the actual statements). To test this, compare query performance with PREPARE test_query(...) AS SELECT ..., then EXPLAIN ANALYZE EXECUTE test_query(...). The parameters in the PREPARE are type names for the positional parameters ($1, $2, etc.); the parameters in the EXECUTE are values.
If the prepared plan is different from the one-off plan, you can set PgJDBC's prepare threshold via connection parameters to tell it never to use server-side prepared statements.
This difference between the plans of prepared and unprepared statements should go away in PostgreSQL 9.2. It's been a long-standing wart, but Tom Lane dealt with it for the upcoming release.
It's very hard to say for sure without knowing all the details of your system, but I can think of a couple of possibilities:
The query results are cached. If you run the same query twice in a short space of time, it will almost always complete much more quickly on the second pass. PostgreSQL maintains a cache of recently retrieved data for just this purpose. If you are pulling the queries from the tail of your log and executing them immediately this could be what's happening.
Other processes are interfering. The execution time for a query varies depending on what else is going on in the system. If the queries take 100ms during peak hours on your website when a lot of users are connected, but only 1ms when you try them again late at night, this could be what's happening.
The point is you are correct that the query duration isn't affected by which library or application is calling it, so the difference must be coming from something else. Keep looking, good luck!
There are several possible reasons. First, if the database was very busy when the slow queries executed, the queries may run more slowly. So you may need to observe the OS load at that moment for future analysis.
Second, the historical plan for the SQL may be different from the current session's plan, so you may need to install auto_explain to see the actual plan of the slow query.
I have several performance issues in my website.
I'm using ASP.NET MVC 2 and Entity Framework 4.0. I bought Entity Framework Profiler to see what kind of SQL requests EF generates.
For example, some pages take between 3 and 5 seconds to open. This is too much for my client.
To see if it's a performance problem with the SQL generated by EF, I used my profiler and copied and pasted the generated SQL into SQL Server Management Studio to look at the execution plan and the SQL statistics. The result shows in less than a second.
Now that I have ruled out the SQL query itself, I suspect EF's query-building step.
I followed the MSDN walkthrough step by step to pre-generate my views. I didn't see any performance gain.
How can I be sure that my queries use these pre-generated views?
Is there anything I can do to increase the performance of my website?
thanks
First of all, keep in mind that the pre-compiled queries still take just as long (in fact a little longer) the first time they are run, because the queries are compiled the first time they are invoked. After the first invocation, you should see a significant performance increase on the individual queries.
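For reference, a pre-compiled (compiled) query in EF 4 looks roughly like the sketch below; the context and entity names (MyEntities, Product) are placeholders, not code from the question:

using System;
using System.Data.Objects;   // CompiledQuery lives here in EF 4
using System.Linq;

static class ProductQueries
{
    // Compiled once; the first invocation pays the compilation cost,
    // later invocations reuse the cached translation.
    public static readonly Func<MyEntities, int, Product> GetProductById =
        CompiledQuery.Compile((MyEntities ctx, int id) =>
            ctx.Products.FirstOrDefault(p => p.Id == id));
}

// Usage: var product = ProductQueries.GetProductById(context, 42);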
However, you will find the best answer to all performance questions is: figure out what's taking the most time first, then work on improving that area. Until you have run a profiler and know where your system is blocking, any time you spend trying to speed things up is likely to be wasted.
Once you've determined what's taking the most time, there are a lot of possible techniques to use to speed things up:
Caching data that doesn't change often (a minimal sketch follows this list)
Restructuring your data accesses so you pull the data you need in fewer round trips.
Ensuring you're not pulling more data than you need when you do your database queries.
Buying better hardware.
... and many others
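As a minimal sketch of the first point, assuming .NET 4's MemoryCache; the Country type, the cache key, the 30-minute lifetime and the data-loading method are all placeholders:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class Country { public int Id; public string Name; }   // placeholder entity

public static class ReferenceData
{
    // Cache rarely-changing reference data in memory instead of hitting the
    // database (and EF materialization) on every request.
    public static IList<Country> GetCountries()
    {
        var cache = MemoryCache.Default;
        var countries = cache.Get("countries") as IList<Country>;
        if (countries == null)
        {
            countries = LoadCountriesFromDatabase();
            cache.Set("countries", countries, DateTimeOffset.Now.AddMinutes(30));
        }
        return countries;
    }

    // Placeholder standing in for the real EF query against the database.
    static IList<Country> LoadCountriesFromDatabase()
    {
        return new List<Country>();
    }
}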
One last note: In Entity Framework 5, they plan to implement automatic query caching, which will make precompiling queries practically useless. So I'd only recommend doing it where you know for sure that you'll get a significant improvement.
I have a .NET Core 2.1 application that allows users to search a large database, with the possibility of using lots of parameters. The data access is done through ADO.NET. Some of the queries generated result in long running queries (several hours). Obviously, the user gives up on waiting, but the query chugs along in SQL Server.
I realize that the root cause is the design of the app, but I would like a quick solution for now, if possible.
I have tried many solutions, but none seem to work as expected.
What I have tried:
CommandTimeout
CommandTimeout works as expected with ExecuteNonQuery but does not work with ExecuteReader, as discussed in this forum.
When you execute command.ExecuteReader(), you don't get this exception because the server responds on time. The application doesn't respond because it reads data into memory, and the ExecuteReader() method doesn't return control until all the data is read.
I have also tried using SqlDataAdapter, but this does not work either.
SQL Server query governor
SQL Server's query governor works off of the estimated execution plan, and while it does work sometimes, it does not always catch inefficient queries.
SQL Server execution time-out
Tools > Options > Query Execution > SQL Server > General
I'm not sure what this does, but after entering a value of 1, SQL Server still allows queries to run as long as they need. I tried restarting the server instance, but that did not make any difference.
Again, I realize that the cause of this problem is the way the queries are generated, but with so many parameters and so much data, fine-tuning a solution in the design of the application may take some time. As of now, we are manually killing any SPID associated with this app that has run for over 10 or so minutes.
EDIT:
I abandoned the hope of finding a simple solution. If you're having a similar issue, here is what we did to address it:
We created a .NET Core console app that polls the database for queries running over a certain allotted time. The app looks at the login name and the amount of time the query has been running, and determines whether to kill the process.
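Roughly, the polling side of that console app looks like the sketch below; the login name, the 10-minute threshold and the connection handling are placeholders rather than our exact code:

using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch of the watchdog poll: find sessions for the app's SQL login that have been
// running longer than a threshold, then kill them. The login name and the
// 10-minute threshold are placeholders, not the actual values we used.
static void KillLongRunningQueries(string connectionString)
{
    const string findLongRunners = @"
        SELECT r.session_id
        FROM sys.dm_exec_requests r
        JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
        WHERE s.login_name = @login
          AND r.total_elapsed_time > 10 * 60 * 1000;";   // elapsed time is in milliseconds

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        var sessionIds = new List<short>();
        using (var command = new SqlCommand(findLongRunners, connection))
        {
            command.Parameters.AddWithValue("@login", "searchAppLogin");
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    sessionIds.Add(reader.GetInt16(0));
            }
        }

        foreach (var sessionId in sessionIds)
        {
            // KILL cannot be parameterized; the id comes straight from the DMV.
            using (var kill = new SqlCommand($"KILL {sessionId};", connection))
                kill.ExecuteNonQuery();
        }
    }
}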
https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommand.cancel?view=netframework-4.7.2
Looking through the documentation on SqlCommand.Cancel, I think it might solve your issue.
If you were to create and start a Timer before you call ExecuteReader(), you could then keep track of how long the query is running, and eventually call the Cancel method yourself.
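That might look roughly like the sketch below; the connection string, SQL text and 10-minute limit are placeholders, and it's meant to show the idea rather than be production-hardened code:

using System;
using System.Data.SqlClient;
using System.Threading;

// Sketch: ask SQL Server to cancel the command if we are still reading after a limit.
static void RunSearchWithLimit(string connectionString, string searchSql)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(searchSql, connection))
    {
        connection.Open();

        // Fires once after 10 minutes and calls SqlCommand.Cancel on the running command.
        using (var timer = new Timer(_ => command.Cancel(), null,
                                     TimeSpan.FromMinutes(10),
                                     Timeout.InfiniteTimeSpan))
        {
            try
            {
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // process the row...
                    }
                }
            }
            catch (SqlException)
            {
                // A successful cancellation typically surfaces on the reading thread
                // as a SqlException ("Operation cancelled by user").
            }
        }
    }
}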
(Note: I wanted to add this as a comment, but I don't yet have the reputation to do so.)