Logging memory used by Entity Framework Core

Can I somehow log the size of data loaded to memory by Entity Framework?
It is easy to accidentally load a lot of data into memory, for example by putting an AsEnumerable() before the Where clause so that the filtering happens client-side.
Is the peak memory usage for each query loggable?
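
As far as I know, EF Core has no built-in counter for how much data a query materializes, but you can approximate it yourself. Below is a minimal sketch, not an EF Core feature: it snapshots the CLR's per-thread allocation counter around query execution, which measures managed bytes allocated (a proxy for data loaded) rather than true peak memory. The ctx/Posts names in the usage comment are hypothetical placeholders.

    using System;

    public static class QueryMemoryLogger
    {
        // Runs a query delegate and logs the managed bytes allocated on this
        // thread while it executed. This approximates "data materialized";
        // it is not a peak-memory measurement.
        public static TResult Measure<TResult>(string label, Func<TResult> query)
        {
            long before = GC.GetAllocatedBytesForCurrentThread();
            TResult result = query();
            long after = GC.GetAllocatedBytesForCurrentThread();
            Console.WriteLine($"{label}: ~{after - before:N0} bytes allocated");
            return result;
        }
    }

    // Usage (hypothetical context/entity): the second call materializes the
    // whole table before filtering, and the logged delta shows it.
    //   QueryMemoryLogger.Measure("server-side", () => ctx.Posts.Where(p => p.Rating > 3).ToList());
    //   QueryMemoryLogger.Measure("client-side", () => ctx.Posts.AsEnumerable().Where(p => p.Rating > 3).ToList());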

Related

Speed issue with SQLite and Core Data on the iPhone

I have 40,000+ records in an SQLite database table and am using Core Data to model the data.
When deployed to a device (iPhone 3G), the data is very slow to load (it takes 5 seconds for the data to load into the table view). I was wondering if anyone had any tips on how to improve this. I've heard about indexing the data, but am not sure how this is done.
Thanks for any help.
...the 40K records are broken up into 70+ categories; the most any table view would show is 2,000 records. The categories are in a plist which then points to the SQLite db using NSFetchedResultsController.
That sounds like a bottleneck. Firstly, the categories all have to be loaded into memory at once as the plist is read in. Depending on how big the category objects/data are, that could eat quite a bit of memory.
More importantly, though, it suggests your data model is not well configured. There should be no need for any significant data external to the Core Data model; the category data should be part of the data model. If you are using a lot of external data to configure the fetched results controller, then you probably end up with complex, slow predicates for the fetch request. That will bog everything down.
Well configured, Core Data can handle very large and complex data sets without any apparent effort because the data is read only in smallish chunks.

Memory footprint benefits of using Core Data vs in-memory not evident/obvious - opinions?

I have an app that currently holds all state in memory. It fetches a bunch of information from a server as JSON and then holds on to the JSON values in memory. Each JSONObject can be ~300 bytes and there can be thousands of such objects.
I use this data simply to populate UITableView.
In order to better handle large amounts of data, I modified my code to fetch data from the server and store it using Core Data. These JSON objects can be represented as simple entities, each with 3 NSString attributes and one int32 attribute. I created an NSFetchedResultsController to use as the data source of the UITableView. My assumption was that this would reduce the resident memory usage of my application (I assume an NSFetchedResultsController effectively manages memory so as not to hold entities that aren't being displayed in the view, versus holding all my state in memory).
For the purposes of this discussion, let's assume my app purges the CoreData store and re-fetches all data each time it runs.
When I went to measure the changes in Resident Memory and Virtual Size using the VM Tracker in Instruments, I noticed that both these values remain almost identical. In fact, the Core Data-based version of my app seems to use more memory than when I hold everything entirely in memory.
While this may be true, I don't have an intuition for why this might be so. Any explanations?
From what I have said about my app, does it sound like I shouldn't bother persisting in Core Data, and why?
Core Data can use more memory as you have seen, if there is memory to be had. However, one of the benefits to Core Data is when you get into a low memory situation. When that happens then Core Data will automatically reduce its own memory footprint as much as possible.
Another thing to consider is are you letting Core Data fault these objects? If you are pulling in 1000 objects and displaying 10 of them, then the other 990 should be in a faulted state and therefore taking up less memory.
I would run through the Core Data instruments and make sure you are not fully realizing all of these objects by accident and unintentionally causing your memory usage to be higher than it needs to be.
Update
If you are not seeing any faulting going on, it sounds like you are importing this data and then not flushing Core Data properly.
Assuming you are loading this data on first launch (which I would not do btw, I would pre-load the app and avoid the plist files entirely), you want to call -reset on your NSManagedObjectContext after the load is complete so that any objects that are not being used are flushed out of memory. Then as the data is coming back into memory (on use) it will be properly faulted.
Lastly, make sure you are using a SQLite store. Otherwise this is all moot.

Core Data NSFetchedResultsController Performance Advantages Over NSArray?

Does using an NSFetchedResultsController provide any performance advantages on the iPhone over an NSArray?
I have between 4,000 and 8,000 records stored in Core Data and wanted to know if I should select one over the other. Is the NSFetchedResultsController just used to make code 'prettier'?
My concern is searching, and lag on keyboard button presses (as well as the issue of loading that many records into memory). Thanks!
Given your parameters, Core Data will be faster than an array, especially if you make any changes to the data.
The disadvantage of an array in this case is that you have to load the entire array into memory in one go.
It might seem obvious that Core Data will be slower than more primitive methods, but owing to fine-tuned optimization plus the ease of integration with the rest of the API, it is actually fairly hard to beat Core Data in real-world apps with significant amounts of data.

Does using the Entity Framework put a lot of pressure on memory, like the DataSet does?

I'm new to the Entity Framework. Some screencasts I've been watching show result sets being held in memory along with their changes. This seems like it could use a lot of memory.
Does this mean that EF isn't suitable for ASP.NET development? Does the entity framework have a memory efficient pattern similar to the SqlDataReader?
It looks like if you enumerate through the query result as objects, a DbDataReader is actually used and objects are created on the fly, so only one row at a time will be in memory as an actual EntityObject. You can also access the data at the DataReader level using its EntityClient provider, but if you're concerned about optimal performance, I suppose you should stick to plain ADO.NET.
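A short sketch of the difference described above, written against the modern DbContext API (MyContext, Customer, and Process are hypothetical placeholders): plain enumeration streams entities off the underlying DbDataReader one at a time, while ToList() buffers the whole result set.

    using System.Linq;
    using Microsoft.EntityFrameworkCore; // for AsNoTracking

    using var ctx = new MyContext();

    // Streams: each entity is materialized from the underlying DbDataReader
    // as the loop advances; AsNoTracking also keeps the change tracker from
    // holding a reference to every row.
    foreach (var c in ctx.Customers.AsNoTracking().Where(c => c.IsActive))
    {
        Process(c);
    }

    // Buffers: the entire result set is materialized into a list up front.
    var all = ctx.Customers.Where(c => c.IsActive).ToList();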
I've used Entity Framework without memory or performance problems on reasonably high traffic sites (12,000 - 20,000 unique visitors per day with 250k pageviews).
Also, you may want to wait for Entity Framework 4.0, or use the pre-release. It has lots of improvements, one of which is support for POCO (Plain Old CLR Objects).
"Does the entity framework have a memory efficient pattern similar to the SqlDataReader?"
No, the EF makes complete copies of objects in RAM, similar to a DataSet with DataRelations, etc., but it keeps them as objects instead. It then also has to keep copies of each object's changes (change sets). Each of those changes is then built up as SQL to update the database if you submit the changes back.
SqlDataReader is a forward-only, lightweight reader that grabs a single row at a time. EF loads all of your query results into object collections, with change tracking on top of it.
Is it suitable for your app? I don't know. If you need things as fast as possible with the smallest amount of RAM, then ADO.NET is the only way to go. Any abstraction placed on top of ADO.NET is going to add overhead, memory, etc.
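For contrast, this is the forward-only pattern the answer refers to, in plain ADO.NET; only the current row is buffered and nothing is change-tracked (the connection string and table/column names are hypothetical).

    using Microsoft.Data.SqlClient; // System.Data.SqlClient on older stacks

    using var conn = new SqlConnection(connectionString);
    conn.Open();
    using var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn);
    using var reader = cmd.ExecuteReader();

    // Forward-only: one row in memory at a time, no tracking overhead.
    while (reader.Read())
    {
        int id = reader.GetInt32(0);
        string name = reader.GetString(1);
        // ... use the row, then let it go
    }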
EF will use more memory than using ADO.NET directly (assuming that you use ADO.NET correctly).
Having said that, we have used EF on large ASP.NET projects with no memory problems.
The only situation I can think of is if you are handling large binary objects, and you want to stream them instead of loading them into memory.
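For that large-binary case, plain ADO.NET can stream a blob to disk without ever holding it fully in memory; a hedged sketch (table, column, and variable names are hypothetical) using CommandBehavior.SequentialAccess and GetStream:

    using System.Data;
    using System.IO;
    using Microsoft.Data.SqlClient;

    using var conn = new SqlConnection(connectionString);
    conn.Open();
    using var cmd = new SqlCommand("SELECT Payload FROM Documents WHERE Id = @id", conn);
    cmd.Parameters.AddWithValue("@id", documentId);

    // SequentialAccess tells the reader not to buffer the whole row.
    using var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
    if (reader.Read())
    {
        using Stream blob = reader.GetStream(0); // streams from the server
        using FileStream file = File.Create("payload.bin");
        blob.CopyTo(file);
    }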

XML and SQLite memory utilization and performance on the iPhone

How do the memory utilization and performance for XML or SQLite compare on the iPhone?
The initial data set for our application is 500 records with no more than 750 characters each.
How well would XML compare with SQLite for accessing, say, record 397 without going through the first 396? I know SQLite3 would have better methods for that, but how is the memory utilization?
When dealing with XML, you'll probably need to read the entire file into memory to parse it, as well as write out the entire file when you want to save. With SQLite and Core Data, you can query the database to extract only certain records, and can write only the records that have been changed or added. Additionally, Core Data makes it easy to do batched fetching.
These limited reads and writes can make your application much faster if it is using SQLite or Core Data for its data store, particularly if you take advantage of Core Data's batched fetching. As Graham says, specific numbers on performance can only be obtained by testing under your specific circumstances, but in general XML is significantly slower for all but the smallest data sets. Memory usage can also be much greater, due to the need to load and parse records you do not need at that instant.
To find out how the memory usage for your application fares, you need to measure your application :). The Instruments tool will help you.