iOS Core Data: NSFetchedResultsController performance - iPhone

In the model, I have two entities: Record and Category. Category has a one-to-many relationship with Record (with an inverse relationship). The persistent store is of SQLite type and the db is not so small, about 23MB (17k records).
I use a list-detail design to show the records table and the detailed record view. The list viewController uses NSFetchedResultsController.
Building on the device, if I don't use setFetchBatchSize:
CoreData: annotation: sql connection fetch time: 15.8800s
CoreData: annotation: total fetch execution time: 16.9198s for 17028 rows.
OMG!
If I use setFetchBatchSize:25, everything works great again:
CoreData: annotation: sql connection fetch time: 1.1736s
CoreData: annotation: total fetch execution time: 1.1900s for 17028 rows.
Yeah, that would be great! But it is not. In the list viewController, when the user taps on a record I allocate a detail viewController and pass it the record at the indexPath in the fetchedResultsController:
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
Record *record = (Record *)[fetchedResultsController objectAtIndexPath:indexPath];
RecordViewController *recordViewController= [[RecordViewController alloc] init];
recordViewController.record = record;
[self.navigationController pushViewController:recordViewController animated:YES];
[recordViewController release];
}
NOW, in the detailed viewController, I have a button to set a record as favorite or not:
- (IBAction) setFavorite {
if (![record.ISFAV intValue])
    [record setValue:[NSNumber numberWithInt:1] forKey:@"ISFAV"];
else
    [record setValue:[NSNumber numberWithInt:0] forKey:@"ISFAV"];
// ...save on the context here...
}
OK, are you ready? If I tap on the first record in the list, then add or remove it from the favorites, it happens in 0.0046 seconds, instantly! The console in SQL debug mode shows only the UPDATE statement:
CoreData: sql: BEGIN EXCLUSIVE
CoreData: sql: UPDATE ZRECORD SET ZISFAV = ?, Z_OPT = ? WHERE Z_PK = ? AND Z_OPT = ?
CoreData: sql: COMMIT
CoreData: annotation: sql execution time: 0.0046s
If I scroll the big list very fast (and I obviously see the batch requests on the console), then tap a record reached after many batch requests and add/remove it from favorites, many (too many! the more I scroll, the more there are!) SELECT statements appear in the console before the UPDATE one. The resulting total execution time is not acceptable: the UIButton freezes for a long time on the iPhone.
What's happening? The problem is clearly related to the batched fetch requests: more fetch requests mean more SELECT statements before the UPDATE statement. This is one of them:
CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, t0.ZCONTENT, t0.ZCONTENT2, t0.ZISUSER, t0.ZISFAV, t0.ZTITLE, t0.ZTITLE2, t0.ZID, t0.ZAUTHOR, t0.ZCATEGORY FROM ZRECORD t0 WHERE t0.Z_PK IN (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) ORDER BY t0.ZTITLE LIMIT 26
If I remove the setFetchBatchSize, there's no problem (but startup requires 16 seconds). It seems that when I update the ISFAV property, Core Data needs to execute again all the fetch requests that were needed to reach that record, even though I pass that record to the detail viewController as an object.
Sorry for the long post, I tried to be as clear as possible.
Thank you very much, I'm driving myself crazy...

What's happening is that the fetched results controller sees the change notification when you update the managed object, and it has to figure out what index that object was at so it can tell its delegate that the object was updated. To do this, it goes through all the batch selects again until it finds the right one. I'm not sure why it can't just have this information cached, but obviously it doesn't. Have you tried adding a section cache to the fetched results controller? That may speed things up (depending on whether the fetched results controller uses that cache in this instance). You do so simply by specifying the cache name when you call -initWithFetchRequest:managedObjectContext:sectionNameKeyPath:cacheName:.
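As a minimal sketch (the fetch request, context and cache name here are illustrative placeholders, not names from the question):

NSFetchedResultsController *frc = [[NSFetchedResultsController alloc]
        initWithFetchRequest:fetchRequest
        managedObjectContext:context
          sectionNameKeyPath:nil
                   cacheName:@"RecordsCache"];

Passing a non-nil cacheName tells the controller to persist its section and ordering information between fetches; passing nil disables the cache.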

First, I would suggest avoiding showing the user a big table with 17K records; instead, you should let the user search for records and then select one of the search results. Anyway, if you want to allow the user to select a record directly from the big table, you need to think about the fetching process.
Start by checking that you have properly indexed, in your Core Data model, the attributes you use to set up the NSPredicate associated with your NSFetchedResultsController. Think about the size of your "working set" of records. This should be as small as possible, and is usually on the order of hundreds of records.
In your case, setting setFetchBatchSize to 25 is probably not appropriate, given that you want to allow your user to browse 17K records. Since 17000 / 25 = 680, you will need that many fetches to reach the last 25 records. And fetching involves actual I/O against the underlying database, to make sure that everything is always in sync with other "possible" insert/delete/update operations done by other threads.
Even if your application does not use multiple threads with Core Data, the framework must still check whether something has changed. Now, since I/O is expensive, you need a tradeoff. Setting setFetchBatchSize to 1000 will require at worst 17 fetches to reach the last 1000 records (an improvement by a factor of 40), even though each individual fetch may take "slightly" longer.
Using the cache as suggested may provide some benefit, unless other threads modify the data. Indeed, cache hits are fast, very fast. However, cache misses are extremely expensive, requiring I/O to fetch the associated data from the database. The chance of cache misses of course increases when multiple threads work simultaneously on the same database (unless those threads only read records).
Another possible solution you may want to try involves using multiple threads. Your main thread fetches only an initial number of records and presents them to the user, while another thread, using a different managed object context, fetches another batch of records asynchronously in the background (using a proper offset). These records are then handed over to the main thread for visualization.
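A rough sketch of that approach, under the assumption that both contexts share the same persistent store coordinator (the entity name, offset variable and selector are illustrative):

- (void)fetchNextBatchInBackground {
    // Runs on a background thread, e.g. via performSelectorInBackground:
    NSManagedObjectContext *bgContext = [[NSManagedObjectContext alloc] init];
    [bgContext setPersistentStoreCoordinator:[mainContext persistentStoreCoordinator]];
    NSFetchRequest *request = [[NSFetchRequest alloc] init];
    [request setEntity:[NSEntityDescription entityForName:@"Record"
                                   inManagedObjectContext:bgContext]];
    [request setFetchLimit:1000];
    [request setFetchOffset:currentOffset]; // advance after each batch
    NSError *error = nil;
    NSArray *results = [bgContext executeFetchRequest:request error:&error];
    // Managed objects must not cross threads: hand over only the object IDs
    NSArray *objectIDs = [results valueForKey:@"objectID"];
    [self performSelectorOnMainThread:@selector(showRecordsWithIDs:)
                           withObject:objectIDs
                        waitUntilDone:NO];
    [request release];
    [bgContext release];
}

On the main thread you would then turn each NSManagedObjectID back into an object with -objectWithID: on the main context before displaying it.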
One more thing. You should not use KVC to update the values of your attributes; for performance reasons it is much better to do something like
record.ISFAV = [NSNumber numberWithInt:1];
or
[record setISFAV:[NSNumber numberWithInt:1]];
When updating just a single attribute you may not notice a difference, but if you need to work with several attributes, you may start to see huge savings.

Related

Loading Large data to UITableview from local database without ui freeze

I am retrieving 10,000 records from an Ultralite database into an array.
My query takes 3 seconds to load the values into the array. This makes the UI freeze for 3 seconds whenever I open the view controller.
I want to open the view controller immediately and show an activity indicator for 3 seconds while my query executes in the background.
If possible, I also want to show row animations and a running row count like "Number of products retrieved: 5045".
Please, can anyone help me with this?
Thanks in advance.
EDIT:
NSMutableArray *customerArray = [[DB sharedInstance] LoadCustomerOverview];
"LoadCustomerOverview" is a function containing the SELECT statement that retrieves the 10,000 records from the Ultralite database.
The above line takes 3 seconds. I checked this with NSLog before and after the statement. Using this "customerArray" I fill the UITableView in my view controller, which takes only milliseconds to prepare the cells.
The problem is with the above line.
How can I solve this problem, or otherwise improve performance?
Thanks in advance.
Apart from all the other good answers, this may be a little different. Only about 10-11 rows are visible to the user at once in a UITableView on an iPhone (and roughly double that on an iPad; I don't have the exact figure). If you don't really need to process or show all 10k records at once, you shouldn't query for all of them, since that costs memory and processing time, as you noted. Instead, fetch (query for) 1k records at once (or even a smaller amount); once you have them, query for the next batch, and do it with [self performSelectorInBackground:@selector(getRecordsFromDatabase) withObject:nil]; so your app never freezes and the user doesn't feel any interruption. This scenario only works if your data is persistent and has a unique row identifier (primary key), and if you query the data in either ascending or descending order; for other cases, e.g. if you want random data, this answer won't work.
Also note that putting the code in viewDidAppear with a UIActivityIndicator may be a solution, but if you fetch a larger number of rows, it will still be disruptive for the user.
I can't imagine why you would need 10,000 records in an array, as this can probably be optimised.
However, to answer your question, you could move your loading method to viewDidAppear: (I'm assuming you're doing your fetching in viewDidLoad:), and bring up some sort of progress bar before you start the load.
[NSThread detachNewThreadSelector:@selector(startBackgroundProcess:) toTarget:self withObject:YOUR_OBJECT];
- (void)startBackgroundProcess:(id)obj {
    // interact with the DB...
    // after firing the query and getting the data:
    [self performSelectorOnMainThread:@selector(finishedBackgroundProcess:) withObject:YOUR_RESULT waitUntilDone:NO];
}
The solution is performSelectorInBackground, which will not freeze your UI:
[self performSelectorInBackground:@selector(getRecordsFromDatabase) withObject:yourObjectHereAsArgument];
Now the method:
-(void)getRecordsFromDatabase
{
//retrieve records here
}
Update: Do you use Core Data?
What if you batch ?
[fetchRequest setFetchBatchSize:20];
https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/CoreDataFramework/Classes/NSFetchRequest_Class/NSFetchRequest.html
Regards,

How to safely increment a counter in Entity Framework

Let's say I have a table that tracks the number of times a file was downloaded, and I expose that table to my code via EF. When the file is downloaded I want to update the count by one. At first, I wrote something like this:
var fileRecord = (from r in context.Files where r.FileId == 3 select r).Single();
fileRecord.Count++;
context.SaveChanges();
But then when I examined the actual SQL generated by these statements, I noticed that the incrementing isn't happening on the DB side but in my application's memory. My program reads the counter value from the database (say 2003), performs the calculation (new value 2004), and then explicitly updates the row with the new Count value of 2004. Clearly this isn't safe from a concurrency perspective.
I was hoping the query would end up looking instead like:
UPDATE Files SET Count = Count + 1 WHERE FileId=3
Can anyone suggest how I might accomplish this? I'd prefer not to lock the row before the read and unlock it after the update, because I'm afraid of blocking reads by other users (unless there is some way to lock a row only for writes without blocking reads).
I also looked at doing a Entity SQL command but it appears Entity SQL doesn't support updates.
Thanks
You're certainly welcome to call a stored procedure with EF. Write a sproc with the SQL you show, then create a function import in your EF model mapped to that sproc.
You will need to do some locking to get this to work, but you can minimise the amount of locking.
When you read the count in order to update it, you must lock it; this can be done by placing the read and the update inside a transaction scope. This will protect you from race conditions.
When you only want to read the value, you can do so with a transaction isolation level of ReadUncommitted; that read will then not be blocked by the read/write lock above.
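As a sketch, the read-then-increment can be wrapped in a System.Transactions TransactionScope with a strict isolation level (the model names come from the question; the choice of Serializable here is an assumption, not from the original answer):

using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
{
    var fileRecord = (from r in context.Files where r.FileId == 3 select r).Single();
    fileRecord.Count++;
    context.SaveChanges();
    scope.Complete(); // commit; the row lock is released here
}

Plain reads elsewhere can use their own scope with IsolationLevel.ReadUncommitted so they are not blocked by this update.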

FetchedResultsController and UITableView in multiple views

Summary: I have two different views which use tables to show results from a fetched results controller. These tables may contain the same data. I get errors when moving rows in one table after the other table has been loaded. (Obvious, I suppose!)
To simplify, imagine an entity consisting of countries, and another entity grouping these countries together into sets of countries. We have an "editSet" view which allows you to name the set, add or delete countries in the set, as well as reorder them, all using the standard UI. We then have a "viewSet" view that shows you these countries and some value associated with them (eg. exchange rate or whatever).
Now, in the edit view, when we reorder and moveRowAtIndexPath is invoked, I set a BOOL which stops any further UI changes until the Core Data store is updated (each record has a displayOrder integer which I update before performing another fetch). This all works perfectly if you have only instantiated the "editSet" view.
Where things go wrong is if you load "viewSet", then load "editSet" with the same set and move the rows. The BOOL we set in editSet doesn't get passed along to viewSet (which is also "watching" the Core Data store), and viewSet gets upset when the data is changed programmatically. This generates:
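The gating pattern described above would look roughly like this (a sketch; the suspendUIUpdates property and the reloadData handling are illustrative, not the questioner's actual code):

- (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath {
    self.suspendUIUpdates = YES;
    // ...update displayOrder on the affected records and save the context;
    // the FRC delegate callbacks fired by the save are ignored below...
    self.suspendUIUpdates = NO;
    [self.tableView reloadData];
}

- (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
    if (self.suspendUIUpdates) return; // skip changes we caused ourselves
    [self.tableView reloadData];
}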
Serious application error. An exception was caught from the delegate of NSFetchedResultsController during a call to -controllerDidChangeContent:. *** -[NSMutableArray removeObjectAtIndex:]: index 0 beyond bounds for empty array with userInfo (null)
And all hell breaks loose.
On the other hand, if I load/appear "viewSet" with A DIFFERENT SET from the one I am editing, there are no problems.
So, what I need to do is EITHER "disconnect" the FRC and the table when leaving viewSet (perhaps do a search on nothing and reload the table?) OR pass a BOOL to the viewSet controller when saving the moved rows in editSet, to mimic what I do locally in that view controller (not quite sure how to do this, but doable I guess).
I am sure I am not the first to come across this problem, so wondering, what's the best way?
Use another managed object context for your "edits" and then merge them back in when you go back to your "viewSet". Take a look in the Apple sample code project 'CoreDataBooks' to see how you can use two contexts to perform disjoint edits.
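In sketch form, the CoreDataBooks-style merge looks like this (editingContext and viewingContext are illustrative names for the two contexts):

// In the "viewSet" controller, observe saves made by the editing context
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(editContextDidSave:)
                                             name:NSManagedObjectContextDidSaveNotification
                                           object:editingContext];

- (void)editContextDidSave:(NSNotification *)notification {
    // fold the edits into the context backing the viewing FRC
    [viewingContext mergeChangesFromContextDidSaveNotification:notification];
}

The two contexts must share the same persistent store coordinator for the merge to work.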

Limit NSFetchedResultsController results, and get more

Hi all,
I currently have an NSFetchedResultsController set up to return all rows of a table in my Core Data database. This then fills up my UITableView. The trouble is this will quickly get out of hand as the number of rows grows.
How can I limit the initial query to say 20 results, then add a button somewhere to "Get More" from where we left off?
Thanks for any guidance as always
This is controlled with NSFetchRequest's -setFetchLimit: and -setFetchOffset:.
If I recall correctly, the drawback with NSFetchedResultsController is that you can't modify the fetch request after you create the NSFetchedResultsController instance. I believe this means you'll have to create a new instance (with a new fetch request) each time you change the range you want to retrieve/display.
File an enhancement request with Apple at bugreporter.apple.com if you feel this shouldn't be the case.
To change the limit number on the fly you simply need to:
Access the fetchRequest of your NSFetchedResultsController instance, change the limit, delete the old cache if there is one, and perform a new fetch.
Code:
[yourFetchedResultsController.fetchRequest setFetchLimit:50];
[NSFetchedResultsController deleteCacheWithName:@"your cache name"];
[yourFetchedResultsController performFetch:NULL];
fetchBatchSize only affects how many objects are fetched at a time. It does not limit the number of objects in memory concurrently, so it is still possible to run out of memory. It is possible to limit the total number of concurrent objects with a combination of batch size, fetch limit, and offset, but that requires deleting the cache or storing a separate cache per "page", which seems less than ideal to me.
Another, more hacky method to get around it is to re-create the NSFetchedResultsController; the results from the old controller will be faulted if possible, and you can start with a clean slate. Really crude, but it avoids deleting the cache.
I believe that instead of setting -setFetchLimit: and limiting your NSFetchRequest (for new rows you would have to create a new request), you should set -fetchBatchSize to control how many rows are loaded into memory. Say you show 10 cells per view: set your batch size to roughly double that. As you scroll the view, the controller will automatically load the next set into memory.

Handling background changes with NSFetchedResultsController

I am having a few nagging issues with NSFetchedResultsController and CoreData, any of which I would be very grateful to get help on.
Issue 1 - Updates: I update my store on a background thread, which results in certain rows being deleted, inserted or updated. The changes are merged into the context on the main thread using the mergeChangesFromContextDidSaveNotification: method. Inserts and deletes are handled properly, but updates are not (e.g. the cell label is not updated with the change), although I have confirmed that the updates come through the contextDidSaveNotification, exactly like the inserts and deletes. My current workaround is to temporarily change the staleness interval of the context to 0, but this does not seem like the ideal solution.
Issue 2 - Deleting objects: My fetch batch size is 20. If an object within the first 20 rows is deleted by the background thread, everything works fine. But if the object is after the first 20 rows and the table has been scrolled down, a "CoreData could not fulfill a fault" error is raised. I have tried resaving the context and re-performing the FRC fetch, all to no avail. Note: in this scenario, the FRC delegate method didChangeObject:... is not called for the delete; I assume this is because the object in question had not been faulted in at that time (as it was outside the initial fetch range). But for some reason, the context still thinks the object is around, although it has been deleted from the store.
Issue 3 - Deleting sections: When the deletion of a row leads to the deletion of a section, I have gotten the "invalid number of rows in section" error. I have worked around this by removing the reloadSection line from the NSFetchedResultsChangeMove: case and replacing it with [tableView insertRowsAtIndexPaths:...]. This seems to work, but once again I am not sure if this is the best solution.
Any help would be greatly appreciated. Thank you!
I think all your problems relate to the fetched results controller's cache.
Issue 1 is caused by the FRC using the cached objects (whose IDs have not changed). Adding or removing an object changes the IDs and forces an update of the cache, but changing the attributes of an object doesn't do so reliably.
Issue 2 is caused by the FRC checking for the object in the cache. Most likely, the object has an unfaulted relationship that persists in the cache. When you delete it in the background, the FRC tries to fault in the object at the other end of the relationship and cannot.
Issue 3: Same problem. The cache does not reflect the changes.
You really shouldn't use a FRC's cache when some object other than the FRC is modifying the data model. You have two options:
(Preferred) Don't use the cache. When creating the FRC set the cache property to nil.
Clear the cache anytime the background process alters the data model.
Of course, option two defeats the purpose of using the cache in the first place.
The cache is only useful if the data is largely static and/or the FRC manages the changes. In any other circumstance you shouldn't use it, because the FRC would need to check the actual data model repeatedly to ensure that it has a current understanding of the data. It can't rely on the object copies it squirreled away, because another input may have changed the real objects.
My advice:
Detect the changes needed on the background thread
Post the changes to the main thread as a payload
Make the actual changes and save on the main thread (Managed Object Context on the main thread)
DO use the cache for the FRC; you'll get better performance
Quote from "Pro Core Data for iOS" by Michael Privat, Robert Warner:
"Core Data manages its caches intelligently so that if the results are updated by another call, the cache is removed if affected."