Entity Framework 6 System.OutOfMemoryException

I keep getting a System.OutOfMemoryException:
Exception of type 'System.OutOfMemoryException' was thrown.
at System.Collections.Generic.List`1.set_Capacity(Int32 value)
at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)
at System.Collections.Generic.List`1.Add(T item)
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at CIGDataLibrary.Archive.ArchiveCycleData(String applicationName)
in c:\!TFS\SCCSoftware\Commercial\CIG\Wagering\CIGDataLibrary\files\Archive.cs:line 571
Code at line 571:
List<Guid?> cycleData = Queries.Current.GetCycleDataArchiveList();
The method:
public static List<Guid?> GetCycleDataArchiveList()
{
using (var dbContext = new CIGDataModels.CIGDBStoredProcModels())
{
return dbContext.usp_arch_GetCycleData().ToList();
}
}
And the meat of the stored procedure:
SELECT TOP 1000 gpCycleData_Id FROM CIGDB.dbo.cig_Cycle_gpCycleData
WHERE [TimeStamp] < DATEADD(HH, 3, DATEADD(dd, 0, DATEDIFF(dd, 0, GETDATE())))
ORDER BY [TimeStamp]
Any thoughts on why this is occurring? It should be returning 1000 records (just a list of GUIDs), so it shouldn't be throwing a fit, right? I've taken the SP down to TOP 1, but it still results in the same error.

Your production database is returning too many rows. It's likely missing the TOP n clause within the stored procedure, hence it's returning so much data that your application is running out of memory.
Since you say you have edited the stored proc to return only a single row and it's still causing an error, there must be another factor.
Assuming that there's nothing more to the procedure than you have posted, some possible reasons:
A trigger is affecting the output in some way.
Entity Framework is calling a different procedure. Check the content of your model to confirm that the correct procedure is being called.
Another procedure exists in a different schema. For example, your app logs into the database with username myApp and when it calls MyProc it's resolving to myApp.MyProc instead of dbo.MyProc.
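If it helps, a quick way to confirm which SQL EF actually sends (and therefore which procedure it resolves to) is EF6's built-in command logging; a minimal sketch using the context from the question:
using (var dbContext = new CIGDataModels.CIGDBStoredProcModels())
{
    // EF6: echo every command EF sends to the console, so you can see
    // exactly which procedure (and which schema) is being invoked.
    dbContext.Database.Log = Console.Write;
    var ids = dbContext.usp_arch_GetCycleData().ToList();
}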

OK, I figured out the issue, and it now runs as expected.
Contrary to the exception's stack trace, the error was on line 572, not 571.
That line was
List<Guid> arch_gpCycle = dbarchContext.cig_Cycle_gpCycleData.Select(s => s.gpCycleData_Id).ToList();
And that resulted in the System.OutOfMemoryException. I have since changed it to a compiled query that returns a list of Guids, and it has run successfully.
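For reference, a minimal sketch of what that compiled-query approach can look like. CompiledQuery.Compile in EF works against an ObjectContext-derived context; CIGArchiveEntities below is a hypothetical stand-in for the type of dbarchContext:
using System;
using System.Collections.Generic;
using System.Data.Entity.Core.Objects;
using System.Linq;

static class ArchiveQueries
{
    // Compiled once and reused on every call, avoiding re-translation of
    // the LINQ expression. "CIGArchiveEntities" is a hypothetical name.
    private static readonly Func<CIGArchiveEntities, IQueryable<Guid>> CycleIds =
        CompiledQuery.Compile((CIGArchiveEntities ctx) =>
            ctx.cig_Cycle_gpCycleData.Select(s => s.gpCycleData_Id));

    public static List<Guid> GetCycleIds()
    {
        using (var ctx = new CIGArchiveEntities())
        {
            return CycleIds(ctx).ToList();
        }
    }
}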

EF Core FromSQL query does not get executed immediately (PostgreSQL)

I have written a function in PostgreSQL for insertion as follows:
CREATE OR REPLACE FUNCTION public.insert_blog("Url" character)
RETURNS void AS
$BODY$Begin
Insert Into "Blogs"("Url") Values("Url");
End$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION public.insert_blog(character)
OWNER TO postgres;
The above function adds an entry into the Blogs table (Url is a parameter).
I am trying to use this function in .Net Core (Npgsql.EntityFrameworkCore.PostgreSQL) as follows:
[HttpPost]
[ValidateAntiForgeryToken]
public IActionResult Create(Blog blog)
{
if (ModelState.IsValid)
{
//This works fine
var count = _context.Blogs.FromSql("Select insert_blog({0})", blog.Url).Count();
//This doesn't work -- it gives an error of "42601: syntax error at or near "insert_blog""
//var count = _context.Blogs.FromSql("insert_blog @Url={0}", blog.Url).Count();
return RedirectToAction("Index");
}
return View(blog);
}
Can someone tell me why the second command is not working? Also, even if the first command is working, is it the right way?
Since I have to write .FromSql(...).Count() in order for it to work, if I remove .Count() item doesn't get inserted. Can someone tell me why this is happening?
Is there any good article on using .FromSql() or "Using Postgres functions in entity framework core" (I'd guess that this is a new feature and that that's why I couldn't find much data on this)?
Can someone tell me why the second command is not working? Also, even if the first command is working, is it the right way?
That's simply not how PostgreSQL syntax works; the @Url= form is SQL Server-style stored procedure syntax, which PostgreSQL doesn't understand. Select insert_blog({0}) is indeed the right way.
Since I have to write .FromSql(...).Count() in order for it to work; if I remove .Count(), the item doesn't get inserted. Can someone tell me why this is happening?
FromSql behaves just like Where and the other IQueryable operators: execution is deferred until the results are actually requested, because EF tries to compose everything into a single database query.
To make sure your query actually gets executed, you need to call a method that returns something other than IQueryable such as .Count() or .ToList(). More info can be found here: https://learn.microsoft.com/en-us/ef/core/querying/overview#when-queries-are-executed
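To make the deferred behaviour concrete, a small sketch based on the question's code (and, since insert_blog returns void, Database.ExecuteSqlCommand is a way to run it immediately without the artificial Count(); it was renamed ExecuteSqlRaw in EF Core 3.0+):
// Nothing is sent to PostgreSQL yet: FromSql only composes an IQueryable.
var query = _context.Blogs.FromSql("Select insert_blog({0})", blog.Url);

// Enumerating the query (Count, ToList, foreach, ...) executes the SQL.
var count = query.Count();

// Alternative: execute the void function directly, no IQueryable involved.
_context.Database.ExecuteSqlCommand("Select insert_blog({0})", blog.Url);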

Concurrency check not happening

I have a code first EF model with concurrency tokens on some of the entities.
These tokens are defined as byte[] properties and are decorated with [Timestamp] attributes.
[Timestamp]
public byte[] ConcurrencyStamp { get; set; }
(I've also tried any combination of Timestamp and ConcurrencyCheck)
The properties are also marked as concurrency tokens in OnModelCreating of my context:
modelBuilder.Entity<Room>().Property(x => x.ConcurrencyStamp).IsConcurrencyToken();
So, here's the scenario:
Some of the entities are serialized as JSON and passed on to external clients. When a client updates an object, that changed object is received again (as JSON) and the changes are applied to a freshly fetched object. In this process, I also copy the concurrency token value received from the client onto the object just fetched from the DB. Then, when saving changes, no concurrency error is thrown even if the values don't match.
So, to summarize:
1. fetch object from DB
2. serialize object to JSON (including concurrencytoken)
3. client messes with object
4. server receives updated object as json
5. fetch object (by id) from DB
6. apply json values to fetched object (including concurrencytoken)
7. context.savechanges
--> no error if token was changed
Checking the log, it seems that EF issues the UPDATE statement with the "fetched" concurrency token when saving changes, not with the token set manually from the external object.
UPDATE [dbo].[Rooms]
SET [RoomName] = @0, [ConcurrencyStamp] = @1
WHERE (([RoomId] = @2) AND ([ConcurrencyStamp] = @3))
-- @0: 'new room name' (Type = String, Size = -1)
-- @1: '1500' (Type = Int64)
-- @2: '1' (Type = Int32)
-- @3: '1999' (Type = Int64)
(I've used longs here, but the same applies to byte[] stamps, which I tried initially).
1999 is the current concurrencytoken value in the DB. 1500 is the token coming from the JSON object, which was set manually by setting the property.
Even though you can see EF updating the token in the statement (because I set the property) it is still using the original token value to do the check.
Changing the properties through the change tracker doesn't help, the behaviour stays the same.
Any clues? Is this scenario not supported? Am I doing something wrong?
Update
The check does work. When creating a new context in a separate thread and doing a change between the fetch and savechanges (thus between step 5 and step 7), the savechanges in step 7 barfs with a ConcurrencyException.
So it appears it works as described, but there's no way to "force" the token to be updated externally (which might make sense in a way, I guess).
You actually can force it.
You just need to set timestamp like this:
customerRequest.RowVersion = detachedRequest.RowVersion;
Context.Entry(customerRequest).Property(p => p.RowVersion).OriginalValue = customerRequest.RowVersion;
Context.Entry(customerRequest).Property(p => p.RowVersion).IsModified = false;
After that, EF will treat the token as not modified and will throw a concurrency exception on the update.
Tested on EF 6 code first.
EF always uses the OriginalValue of the timestamp fetched in step 5 in its UPDATE statement in step 7.
Setting entity.ConcurrencyStamp = viewModel.ConcurrencyStamp in step 6 only updates the CurrentValue.
To set the OriginalValue, do this in step 6 instead:
dbContext.Entry(entity).Property(e => e.ConcurrencyStamp).OriginalValue =
viewModel.ConcurrencyStamp;
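Putting it together, a sketch of steps 5 to 7 with the OriginalValue override ('incoming' stands for the hypothetical object deserialized from the client's JSON):
// Step 5: fetch the current entity by id.
var room = dbContext.Rooms.Find(incoming.RoomId);

// Step 6: apply the client's values, then point the concurrency check at
// the client's token instead of the token fetched in step 5.
room.RoomName = incoming.RoomName;
dbContext.Entry(room).Property(e => e.ConcurrencyStamp).OriginalValue =
    incoming.ConcurrencyStamp;

// Step 7: throws DbUpdateConcurrencyException if the client's token no
// longer matches what is in the database.
dbContext.SaveChanges();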

Linq To Entities: "Sequence contains no elements" with Count()

In our application we have a piece of code with some LINQ queries (EF) that sometimes throws an exception.
This has only happened to the end user, and we are not able to reproduce it so far.
From the logfile we got the following stacktrace for the exception:
System.InvalidOperationException: Sequence contains no elements
at System.Linq.Enumerable.Single[TSource](IEnumerable`1 source)
at System.Data.Objects.ELinq.ObjectQueryProvider.<GetElementFunction>b__3[TResult](IEnumerable`1 sequence)
at System.Data.Objects.ELinq.ObjectQueryProvider.ExecuteSingle[TResult](IEnumerable`1 query, Expression queryRoot)
at System.Data.Objects.ELinq.ObjectQueryProvider.System.Linq.IQueryProvider.Execute[S](Expression expression)
at System.Linq.Queryable.Count[TSource](IQueryable`1 source)
at MT3.uctXGrid.LoadLayout(String strUniqueID, Boolean rethrowException, List`1 visibleColumns)
In the method LoadLayout there are only 2 instances of Count(), and they are just operating on standard IQueryables which interrogate an entity type based on one integer field and select all fields (no aggregations or anything).
ex:
from p in cxt.genData where p.datId == ID select p
In the stacktrace, it seems like internally .Single() is being used which could throw an exception if there are no records.
But why is it using Single() if we are just calling .Count()?
How can a query like
(from p in cxt.genData where p.datId == ID select p).Count()
throw a "sequence contains no elements" exception?
We have had other strange problems with queries as well; I'm starting to wonder if there are issues with our version of EF.
We are still on 4.0 at the moment. (Standard version which came with VS2010).
Has anyone got an idea what could be going on here?
Update:
Here are the Linq-to-Entities queries we actually use
Dim qryLastLayout = From t In oContext.genGridLayouts Where t.layID = intCurrentLayoutID
If Not IsNothing(qryLastLayout) AndAlso qryLastLayout.Count <> 0 Then
Dim qryPrintSettings = From p In oContext.genPrintSettings Where p.prtDefault = True
If Not IsNothing(qryPrintSettings) AndAlso qryPrintSettings.Count <> 0 Then
Have you tried using the .Any() method?
if(cxt.genData.Any(x => x.datId == ID))
{
// do something here
}
One thing to be aware of with regard to LINQ to Entities is that the semantics of Count() are not those of .NET but those of the underlying data source (somewhat undermining the whole language-integration aspect, but oh well...). I don't think this can cause issues like yours, but you never know.
MSDN link with more details: http://msdn.microsoft.com/en-us/library/vstudio/bb738551.aspx#sectionSection5

Intersystems Cache - Maintaining Object Code to ensure Data is Compliant with Object Definition

I am new to using InterSystems Caché and face an issue where I am querying data stored in Caché, exposed by classes which do not seem to accurately represent the data in the underlying system. The data stored in the globals is almost always larger than what is defined in the object code.
As such I get errors like the one below very frequently.
Msg 7347, Level 16, State 1, Line 2
OLE DB provider 'MSDASQL' for linked server 'cache' returned data that does not match expected data length for column '[cache]..[namespace].[tablename].columname'. The (maximum) expected data length is 5, while the returned data length is 6.
Does anyone have any experience with implementing some type of quality process to ensure that the object definitions (SQL mappings) are maintained in such a way that they can accommodate the data which is being persisted in the globals?
Property columname As %String(MAXLEN = 5, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];
In this particular example the system has the column defined with a max len of 5, however the data stored in the system is 6 characters long.
How can I proactively monitor and repair such situations?
/*
I did not create these object definitions in cache
*/
It's not completely clear what "monitor and repair" would mean for you, but:
How much control do you have over the database side? Caché runs code for a data type when converting from a global to ODBC, using the LogicalToODBC method of the data-type class. If you change the property types from %String to your own class, AppropriatelyNamedString, then you can override that method to automatically truncate, if that's what you want to do. It is possible to change all the %String property types programmatically using the %Library.CompiledClass class.
It is also possible to run code within Cache to find records with properties that are above the (somewhat theoretical) maximum length. This obviously would require full table scans. It is even possible to expose that code as a stored procedure.
Again, I don't know what exactly you are trying to do, but those are some options. They probably do require getting deeper into the Cache side than you would prefer.
As far as preventing the bad data in the first place, there is no general answer. Cache allows programmers to directly write to the globals, bypassing any object or table definitions. If that is happening, the code doing so must be fixed directly.
Edit: Here is code that might work in detecting bad data. It might not work if you are doing certain funny stuff, but it worked for me. It's kind of ugly because I didn't want to break it up into methods or tags. This is meant to run from a command prompt, so it would probably have to be modified for your purposes.
{
S ClassQuery=##CLASS(%ResultSet).%New("%Dictionary.ClassDefinition:SubclassOf")
I 'ClassQuery.Execute("%Library.Persistent") b q
While ClassQuery.Next(.sc) {
If $$$ISERR(sc) b Quit
S ClassName=ClassQuery.Data("Name")
I $E(ClassName)="%" continue
S OneClassQuery=##CLASS(%ResultSet).%New(ClassName_":Extent")
I '$IsObject(OneClassQuery) continue //may not exist
try {
I 'OneClassQuery.Execute() D OneClassQuery.Close() continue
}
catch
{
D OneClassQuery.Close()
continue
}
S PropertyQuery=##CLASS(%ResultSet).%New("%Dictionary.PropertyDefinition:Summary")
K Properties
s sc=PropertyQuery.Execute(ClassName) I 'sc D PropertyQuery.Close() continue
While PropertyQuery.Next()
{
s PropertyName=$G(PropertyQuery.Data("Name"))
S PropertyDefinition=""
S PropertyDefinition=##CLASS(%Dictionary.PropertyDefinition).%OpenId(ClassName_"||"_PropertyName)
I '$IsObject(PropertyDefinition) continue
I PropertyDefinition.Private continue
I PropertyDefinition.SqlFieldName=""
{
S Properties(PropertyName)=PropertyName
}
else
{
I PropertyName'="" S Properties(PropertyDefinition.SqlFieldName)=PropertyName
}
}
D PropertyQuery.Close()
I '$D(Properties) continue
While OneClassQuery.Next(.sc2) {
B:'sc2
S ID=OneClassQuery.Data("ID")
Set OneRowQuery=##class(%ResultSet).%New("%DynamicQuery:SQL")
S sc=OneRowQuery.Prepare("Select * FROM "_ClassName_" WHERE ID=?") continue:'sc
S sc=OneRowQuery.Execute(ID) continue:'sc
I 'OneRowQuery.Next() D OneRowQuery.Close() continue
S PropertyName=""
F S PropertyName=$O(Properties(PropertyName)) Q:PropertyName="" d
. S PropertyValue=$G(OneRowQuery.Data(PropertyName))
. I PropertyValue'="" D
.. S PropertyIsValid=$ZOBJClassMETHOD(ClassName,Properties(PropertyName)_"IsValid",PropertyValue)
.. I 'PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has invalid value of "_PropertyValue
.. //I PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has VALID value of "_PropertyValue
D OneRowQuery.Close()
}
D OneClassQuery.Close()
}
D ClassQuery.Close()
}
The simplest solution is to increase the MAXLEN parameter to 6 or larger. Caché only enforces MAXLEN and TRUNCATE when saving. Within other Caché code this is usually fine, but unfortunately ODBC clients tend to expect this to be enforced more strictly. The other option is to write your SQL like SELECT LEFT(columnname, 5)...
The simplest solution, which I use for all Integration Services packages for example, is to create a query that casts all nvarchar or char data to the correct length. This way, my data never fails due to truncation.
Optional:
First run a query like:
SELECT MAX(DATALENGTH(mycolumnname)) FROM cachenamespace.tablename
Your new query:
SELECT CAST(mycolumnname AS varchar(6)) AS mycolumnname,
CONVERT(varchar(8000), memo_field) AS memo_field
FROM cachenamespace.tablename
Your pain of getting the data will be lessened but not eliminated.
If you use any type of OLE DB provider, or if you use an OPENQUERY in SQL Server,
the casts must occur in the query sent to the InterSystems Caché DB, not in the outer query that retrieves data from the inner OPENQUERY.

Can the Sequence of RecordSets in a Multiple RecordSet ADO.Net resultset be determined, controlled?

I am using code similar to this Support / KB article to return multiple recordsets to my C# program.
But I don't want the C# code to be dependent on the physical sequence of the recordsets returned in order to do its job.
So my question is: "Is there a way to determine which set of records from a multiple-recordset resultset I am currently processing?"
I know I could probably decipher this indirectly by looking for a unique column name or something per resultset, but I think/hope there is a better way.
P.S. I am using Visual Studio 2008 Pro & SQL Server 2008 Express Edition.
No, because the SqlDataReader is forward only. As far as I know, the best you can do is open the reader with KeyInfo and inspect the schema data table created with the reader's GetSchemaTable method (or just inspect the fields, which is easier, but less reliable).
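For example, the "inspect the fields" option could look like the sketch below; the column names OrderId and CustomerId are purely illustrative:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

static class MultiResultSets
{
    // Identify each result set by its column signature instead of relying
    // on its physical position in the batch.
    public static void ReadAll(SqlCommand command)
    {
        using (var reader = command.ExecuteReader(CommandBehavior.KeyInfo))
        {
            do
            {
                var columns = Enumerable.Range(0, reader.FieldCount)
                                        .Select(reader.GetName)
                                        .ToList();
                if (columns.Contains("OrderId"))
                    Console.WriteLine("Processing the orders result set");
                else if (columns.Contains("CustomerId"))
                    Console.WriteLine("Processing the customers result set");
            } while (reader.NextResult());
        }
    }
}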
I spent a couple of days on this. I ended up just living with the physical order dependency. I heavily commented both the code method and the stored procedure with !!!IMPORTANT!!!, and included an #If...#End If to output the result sets when needed to validate the stored procedure output.
The following code snippet may help you.
Helpful Code
Dim fContainsNextResult As Boolean
Dim oReader As DbDataReader = Nothing
oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)
#If DEBUG_ignore Then
'load method of data table internally advances to the next result set
'therefore, must check to see if reader is closed instead of calling next result
Do
Dim oTable As New DataTable("Table")
oTable.Load(oReader)
oTable.WriteXml("C:\" + Environment.TickCount.ToString + ".xml")
oTable.Dispose()
Loop While oReader.IsClosed = False
'must re-open the connection
Me.SelectCommand.Connection.Open()
'reload data reader
oReader = Me.SelectCommand.ExecuteReader(CommandBehavior.CloseConnection Or CommandBehavior.KeyInfo)
#End If
Do
Dim oSchemaTable As DataTable = oReader.GetSchemaTable
'!!!IMPORTANT!!! PopulateTable expects the result sets in a specific order
' Therefore, if you suddenly start getting exceptions that only a novice would make
' the stored procedure has been changed!
PopulateTable(oReader, oDatabaseTable, _includeHiddenFields)
fContainsNextResult = oReader.NextResult
Loop While fContainsNextResult
Because you're explicitly stating in which order to execute the SQL statements, the results will appear in that same order. In any case, if you want to programmatically determine which recordset you're processing, you still have to identify some columns in the result.