I've recently noticed some extra SQL calls when using MiniProfiler.
They only seem to occur after a build, and I think they appeared after upgrading to EF6.
Are they just checking for changes in models?
And can I safely ignore them?
SELECT
[GroupBy1].[A1] AS [C1]
FROM ( SELECT
COUNT(1) AS [A1]
FROM [dbo].[__MigrationHistory] AS [Extent1]
) AS [GroupBy1]
SELECT
[GroupBy1].[A1] AS [C1]
FROM ( SELECT
COUNT(1) AS [A1]
FROM [dbo].[__MigrationHistory] AS [Extent1]
WHERE ([Extent1].[ContextKey] = @p__linq__0) AND (@p__linq__0 IS NOT NULL)
) AS [GroupBy1]
SELECT TOP (1)
[Project1].[C1] AS [C1],
[Project1].[MigrationId] AS [MigrationId],
[Project1].[Model] AS [Model]
FROM ( SELECT
[Extent1].[MigrationId] AS [MigrationId],
[Extent1].[Model] AS [Model],
1 AS [C1]
FROM [dbo].[__MigrationHistory] AS [Extent1]
WHERE ([Extent1].[ContextKey] = @p__linq__0) AND (@p__linq__0 IS NOT NULL)
) AS [Project1]
ORDER BY [Project1].[MigrationId] DESC
These DB calls refer to the migrations history table changes introduced in EF6 when using Code First:
The migrations history table is a table used by Code First Migrations to store details about migrations applied to the database. By default the name of the table in the database is __MigrationHistory and it is created when applying the first migration to the database. In Entity Framework 5 this table was a system table if the application used a Microsoft SQL Server database. This has changed in Entity Framework 6, however, and the migrations history table is no longer marked as a system table.
If you aren't using Code First Migrations, these calls shouldn't cause any harm. You can always disable the database initializer, or have EF generate DB change scripts for you instead.
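If the model never changes at runtime, one way to suppress these startup checks (a sketch, assuming a Code First context named MyContext — substitute your own DbContext type) is to turn off the database initializer:

```csharp
using System.Data.Entity;

// Passing null disables the initializer, so EF no longer queries
// __MigrationHistory or checks model compatibility on first use
// of the context. MyContext is a placeholder for your DbContext.
Database.SetInitializer<MyContext>(null);
```

This is typically done once at application startup, before the first context is instantiated.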
Related
I have an application that uses the Azure Mobile Apps .NET backend and Windows Client. One of the tables being synced between the backend and client has 30,000 rows. When syncing this table, the DTU spikes to around 60% of my 100 DTU (S3) tier, which is very bad. The monitoring graph looks like this:
The table controller for this table is pretty basic:
public async Task<IQueryable<MyBigTable>> GetMyBigTable()
{
    return Query();
}
The following is the SQL generated by Azure Mobile Apps:
exec sp_executesql N'SELECT TOP (51)
[Project2].[Field1] AS [Field1],
[Project2].[C1] AS [C1],
[Project2].[C2] AS [C2],
[Project2].[Deleted] AS [Deleted],
[Project2].[C3] AS [C3],
...
FROM ( SELECT
[Project1].[Id] AS [Id],
...
FROM ( SELECT
[Extent1].[Id] AS [Id],
...
FROM [dbo].[MyBigTable] AS [Extent1]
WHERE ([Extent1].[UpdatedAt] >= @p__linq__0)
) AS [Project1]
ORDER BY [Project1].[UpdatedAt] ASC, [Project1].[Id] ASC
OFFSET @p__linq__1 ROWS FETCH NEXT @p__linq__2 ROWS ONLY
) AS [Project2]
ORDER BY [Project2].[UpdatedAt] ASC, [Project2].[Id] ASC',N'@p__linq__0 datetimeoffset(7),@p__linq__1 int,@p__linq__2 int',@p__linq__0='2017-02-28 03:48:49.4840000 +00:00',@p__linq__1=0,@p__linq__2=50
... not very pretty, and far too complicated for its ultimate purpose, but that's another matter. The problem, I think, is that the inner SQL aliased as [Project1] always returns all rows from [UpdatedAt] to the end of the table. That inner SQL would have been the ideal place for a TOP 51 clause, but instead it appears only in the outer SQL.
So, although each paging call returns only 50 rows to the client, the first call causes the inner SQL to produce all matching rows; the next call, 50 rows fewer than the previous; and so on. This, I think, explains the shape of the graph.
Is there any way to influence how the SQL is generated, or even override the SQL with my own? Does this mean that I need to extract the OData query? What is the best way to do this?
I have a SELECT statement (NOT a Stored Procedure) that I am using to create a report in SSRS (Visual Studio 2010).
The parameter @ClassCode is the one causing trouble. In Development it works fine, but when I deploy it to Production it renders forever.
I am assuming it is parameter sniffing, and I have read about how to fix it inside a stored procedure. But I don't have an SP; I am using a SELECT statement.
What would be the workaround for SELECT statement?
And what is the difference between environments? Production is much more powerful.
My query below:
;WITH cte1
AS
(
SELECT QuoteID,
AccidentDate,
PolicyNumber,
SUM(PaidLosses) as PaidLosses
FROM tblLossesPlazaCommercialAuto
WHERE InsuredState IN (@State) AND AccidentDate BETWEEN @StartDate AND @EndDate AND TransactionDate <= @EndDate AND Coverage = 'VehicleComprehensive'
GROUP BY QuoteID,
AccidentDate,
PolicyNumber
),
cte3
AS
(
SELECT
cte1.Quoteid,
cte1.PolicyNumber,
cte1.AccidentDate,
cc.TransactionEffectiveDate,
cc.ClassCode,
CASE
WHEN ROW_NUMBER() OVER (PARTITION BY cte1.QuoteID, cte1.PolicyNumber,cc.AccidentDate ORDER BY (SELECT 0))=1 THEN cte1.PaidLosses
ELSE 0
END as PaidLosses
FROM cte1 inner join tblClassCodesPlazaCommercial cc
on cte1.PolicyNumber=cc.PolicyNumber
AND cte1.AccidentDate=cc.AccidentDate
AND cc.AccidentDate IS NOT NULL
/* This is the one that gives me problem */
WHERE cc.ClassCode IN (@ClassCode)
)
SELECT SUM(PaidLosses) as PaidLosses, c.YearNum, c.MonthNum
FROM cte3 RIGHT JOIN tblCalendar c ON c.YearNum = YEAR(cte3.AccidentDate) AND c.MonthNum = MONTH(cte3.AccidentDate)
WHERE c.YearNum <>2017
GROUP BY c.YearNum, c.MonthNum
ORDER BY c.YearNum, c.MonthNum
Used the Database Engine Tuning Advisor to see what indexes and statistics the workload needed. After creating those, everything works fine.
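For reference, the usual workaround for parameter sniffing in an ad-hoc SELECT (where there is no stored procedure to alter) is to append OPTION (RECOMPILE) to the outermost statement, which forces a fresh plan for the actual parameter values on every execution. Applied to the final SELECT of the query above, it would look like this:

```sql
-- Sketch: OPTION (RECOMPILE) defeats parameter sniffing at the cost
-- of a plan compilation on every execution of the statement.
SELECT SUM(PaidLosses) AS PaidLosses, c.YearNum, c.MonthNum
FROM cte3
RIGHT JOIN tblCalendar c ON c.YearNum = YEAR(cte3.AccidentDate)
                        AND c.MonthNum = MONTH(cte3.AccidentDate)
WHERE c.YearNum <> 2017
GROUP BY c.YearNum, c.MonthNum
ORDER BY c.YearNum, c.MonthNum
OPTION (RECOMPILE);
```

Whether this beats the tuning-advisor fix depends on how often the report runs; for an occasionally-run SSRS report the recompile cost is usually negligible.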
Can I count on the order of the special trigger tables, inserted and deleted, being the same?
If not, how do I deal with insert statements that affect the identity column(s)?
I have a production (main) database. I'm trying to set up a development database that is a layer on top of the production database. In the development database, the tables my application doesn't modify and the views are synonyms to the production database. I have views set up for the tables it does modify. When these views are updated or inserted, the data is inserted into secondary tables instead of the production database tables. The views select all of the rows from the secondary table and then any rows from the production table that don't exist in the secondary table (the comparison is on the primary key column(s)).
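A minimal sketch of such an overlay view (table and column names here are hypothetical, not taken from the real schema) might look like this:

```sql
-- Hypothetical overlay view: rows in the secondary (mock) table win
-- over rows in the production table with the same primary key.
CREATE VIEW App.SomeTable AS
SELECT s.Id, s.Col1, s.Col2
FROM MockDB.MockTables.SomeTable AS s
UNION ALL
SELECT p.Id, p.Col1, p.Col2
FROM MainDB.App.SomeTable AS p
WHERE NOT EXISTS (SELECT 1
                  FROM MockDB.MockTables.SomeTable AS s
                  WHERE s.Id = p.Id);  -- comparison on the primary key
```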
Let's say one of my production tables is MainDB.App.Data, its secondary table is MockDB.MockTables.Data, and the view is MockDB.App.Data. I have a trigger on the view that updates or inserts the mock table:
USE MockDB
CREATE TRIGGER App.DataTrigger
ON App.Data INSTEAD OF INSERT, UPDATE
NOT FOR REPLICATION AS
BEGIN
SET NOCOUNT ON
-- try updating
UPDATE T1
SET <all T1 columns to TI columns>
FROM MockDB.MockTables.Data T1
INNER JOIN (SELECT *, ROW_NUMBER() OVER (ORDER BY @@ROWCOUNT) as # FROM deleted) TD
ON T1.Identity = TD.NotificationID
INNER JOIN (SELECT *, ROW_NUMBER() OVER (ORDER BY @@ROWCOUNT) as # FROM inserted) TI
ON TD.# = TI.#
-- see if the update did something
IF @@ROWCOUNT > 0
BEGIN
RETURN
END
-- insert rows that don't already exist in the mock table
SET IDENTITY_INSERT MockDB.MockTables.Data ON
INSERT INTO MockDB.MockTables.Data (<all columns>)
SELECT * FROM inserted
SET NOCOUNT OFF
SET IDENTITY_INSERT MockDB.MockTables.Data OFF
END
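The relative order of rows in inserted and deleted is not guaranteed, so a safer pattern is to correlate the two tables on the primary key rather than on a ROW_NUMBER() over an arbitrary order. A sketch of the update step under that assumption (column names hypothetical, and this only works for an UPDATE that does not change the key):

```sql
-- Correlate old and new row images on the key instead of row position.
-- For an INSTEAD OF UPDATE trigger, inserted and deleted contain the
-- new and old images of the same rows, matched here by NotificationID.
UPDATE T1
SET T1.SomeColumn = TI.SomeColumn
FROM MockDB.MockTables.Data AS T1
INNER JOIN deleted  AS TD ON T1.NotificationID = TD.NotificationID
INNER JOIN inserted AS TI ON TI.NotificationID = TD.NotificationID;
```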
Say I have a query like this one:
var result = collection.OrderBy(orderingFunction).Skip(start).Take(length);
Will the whole query run on SQL server and return the result, or will it return the whole ordered table and then run Skip and Take in memory? I am concerned because I noticed that OrderBy returns IOrderedEnumerable.
How about something like this:
if (orderAscending)
    orderedCollection = collection.OrderBy(orderingFunction);
else
    orderedCollection = collection.OrderByDescending(orderingFunction);
var result = orderedCollection.Skip(start).Take(length);
Will the Skip and Take part run on Server or in memory in this case?
This query is translated into SQL. An Entity Framework query such as
myTable.OrderBy(row => row.Id).Skip(10).Take(20);
Will produce SQL resembling the following:
SELECT TOP (20) [Extent1].[Id] AS [Id]
FROM ( SELECT [Extent1].[Id], row_number() OVER (ORDER BY [Extent1].[Id] ASC) AS [row_number]
FROM [my_table] AS [Extent1]
) AS [Extent1]
WHERE [Extent1].[row_number] > 10
ORDER BY [Extent1].[Id] ASC
I recommend downloading LINQPad, a utility that lets you execute EF queries (and other queries) and see both the results and the corresponding SQL. It is an invaluable tool for developing high-quality EF queries.
Yes, it does translate to SQL. This is essential for paging.
You can verify this using SQL Profiler.
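Besides SQL Profiler, with the DbContext API you can inspect the generated SQL directly by calling ToString() on the unenumerated query (a sketch; myTable here stands for a DbSet on your context):

```csharp
// ToString() on an unenumerated LINQ-to-Entities IQueryable returns
// the SQL that EF would send to the server, paging clauses included.
var query = myTable.OrderBy(row => row.Id).Skip(10).Take(20);
Console.WriteLine(query.ToString());
```

If the printed SQL contains the row_number/TOP constructs shown above, the paging is happening on the server, not in memory.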
My project is EF 5, using DbContext.
I just noticed that the first time I run any LINQ query in LINQPad, there is a slight delay, and the generated SQL starts with the following. On subsequent runs there is no delay and no extra SQL.
Can anyone explain to me what this SQL is, and if I should worry about it?
SELECT TABLE_SCHEMA SchemaName, TABLE_NAME Name FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE'
GO
SELECT
[GroupBy1].[A1] AS [C1]
FROM ( SELECT
COUNT(1) AS [A1]
FROM [dbo].[__MigrationHistory] AS [Extent1]
) AS [GroupBy1]
GO
SELECT TOP (1)
[Extent1].[Id] AS [Id],
[Extent1].[ModelHash] AS [ModelHash]
FROM [dbo].[EdmMetadata] AS [Extent1]
ORDER BY [Extent1].[Id] DESC
GO
That's EF Code First, verifying that your database matches the model to be sure everything will work properly.
Don't worry about it!