iPhone: Strange Core Data Caching Performance Issues

I'm working on an app that uses Core Data and NSFetchedResultsController. A major component of the app is the filtering down of items in an indexed table view based on a set of 15 or so pre-defined switches, each of which corresponds to a property or relationship of my managed objects. In most situations, I'm searching through a set of around 300-400 objects, so caching/performance is not an issue. Everything is very snappy with no caching required.
However, there is a part of my app that basically searches through all the objects in my Core Data database (~15,000 items). Here, I'm trying to enable caching on the NSFetchedResultsController to improve performance. The cache name for the NSFetchedResultsController is simply the predicate's string value. Whenever the user toggles a filter switch, I create a new predicate, create a new NSFetchedResultsController, and set the cache name to the new predicate's string value. The first hit to get all the items (unfiltered) takes ~7 seconds, with subsequent hits taking less than one.
What's strange, though - and here's my problem - is that once I proceed to the 'next step' of the table view (I push a new view controller onto the nav controller, passing it a reference to the NSFetchedResultsController's fetchedObjects), performance drops considerably. This next view is essentially a different representation (a horizontally paging scroll view) of the previous view's table list, showing one item on the screen at a time. When I page from one item to the next, accessing the previous or next object in the fetchedObjects array locks up the phone for about 5 seconds. The lock-up duration increases the further you go into the fetchedObjects array. If i == 0, there is no perceivable lag. If i == 10,000, it takes about 15 seconds to access the next object. Nuts! If I disable caching (or the query wasn't cached, so it had to pull fresh results), everything except the initial filter query is fast and snappy with zero lag.
Does enabling caching ONLY cache indexing info for a table view and not the fetched objects themselves?
I'm not sure what the deal is here. I hope I explained this well enough - let me know if you want to see some code or need additional info.
Thanks!
Billy

Alright, I've found out what my problem was...
Basically, asking my NSFetchedResultsController for a managed object via objectAtIndexPath: is IMMENSELY faster than going directly to the fetchedObjects array and asking for objectAtIndex: (which, of course, is what I was doing), especially as the index gets into the thousands. I'm not 100% sure why that is, though. I'm guessing NSFetchedResultsController does some fancy stuff to efficiently pull out single objects rather than going straight to the raw data. So, I don't think the caching had anything to do with my performance issue.
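For illustration, the change was roughly this (a minimal sketch, assuming a single-section fetch and a hypothetical Item entity class):

    // Before: walking the raw fetchedObjects array directly (slow for large indexes)
    // Item *item = [self.fetchedResultsController.fetchedObjects objectAtIndex:i];

    // After: let the controller resolve the object for a given section/row
    NSIndexPath *indexPath = [NSIndexPath indexPathForRow:i inSection:0];
    Item *item = [self.fetchedResultsController objectAtIndexPath:indexPath];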
Thanks to those who checked out my question. I hope this helps anyone else having similar issues.

Related

Scala ScalaFX: how to deal with a large set of changes to an Observable*

I am using an ObservableMap for data modeling and want to update the whole map at once. Initially the ObservableMap is empty, and it is filled asynchronously with lots of elements.
Now the problem is that an onChanged event is fired for each and every entry, which creates too many events and bogs down the GUI. I used pkgs.clear(); pkgs ++= newpkgs.
Is there a way to trigger only one onChanged, either by disabling the handler temporarily, or by having an operation on the map that updates all elements but fires only afterwards?
I'm not aware of a mechanism to disable/delay/buffer UI updates.
I don't know about other JavaFX view types, but I have experience with this using TableView in a similar situation, and I suspect other views behave similarly.
At least TableView has a property called itemsProperty. When the data in itemsProperty is updated using setItems (a wrapper for itemsProperty.set(value)), the table tries very hard to update just the minimum slice of itself.
However, for the optimizations to work, the key is that the items must be "value objects" (hashCode and equals are deep and based on the actual data being displayed, not on some random references).
In the case of TableView this may require elaborate rowFactory and cellFactory implementations, because the data in items can't be "preformatted" in any way or it would spoil the optimizations inside TableView.
Realizing the above solved my update-churn problems. Maybe other answers can provide other tips.

Sorting cells in a UITableView into sections after the TableView has been loaded

Right, so my UITableView loads and puts all the cells in alphabetical order. Information is then downloaded from a server and calculations are done, with the TableView being reloaded once everything is complete. Currently this is quite a simple procedure, as once the information is downloaded from the server the cells don't even move; they are left in their alphabetical order. Nothing really happens other than half of the information being filled in and small changes being made depending on the calculations. I was wondering if there was an easy way of putting the cells into sections depending on the calculations done after the download is complete? I did have an idea of creating 4 arrays (there will only ever be 4 sections) and, once isLoading is set to NO, changing the data source of the TableView to have sections; however, that sounds a bit... iffy. I know this is a theoretical question as opposed to a coding problem, but before I go and mess up my code, in what is sure to be a stupidly inefficient way of doing things, is there an easy way of "assigning" UITableViewCells to sections?
My main issue with my way of doing it is that should the user delete a cell, deleting the appropriate entry in Core Data will be a little tricky and prone to errors. This led me to another idea: what if I added an extra attribute to my Core Data entity? That attribute would be assigned and then saved once the calculations were done. The problem with this is that no existing databases would work. There has to be a neater way of achieving this.
Thanks for the help. If you need me to post any code just say so and I will.
You should be fine if you implement the data source methods related to sections.
For example:
numberOfSectionsInTableView:
sectionIndexTitlesForTableView:
Any time the table data is reloaded (e.g., [self.tableView reloadData]), these methods will be called and the data will be placed into its sections.
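A rough sketch of what those can look like, assuming hypothetical sectionTitles and sectionedItems properties that hold one title and one array of model objects per section:

    - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
        return self.sectionTitles.count;
    }

    - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
        return [self.sectionedItems[section] count];
    }

    - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
        return self.sectionTitles[section];
    }

    - (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView {
        return self.sectionTitles;
    }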
Keep in mind that the cells are just the visual representation of your model, which in this case is your fetched data. The cells are not assigned to sections; they are simply created however you specify for your model (via the table view data source and delegate methods).
Regarding deletion of entries while using Core Data, I suggest taking a look at NSFetchedResultsController. It will monitor any changes to your table's data and message its delegate (your table view controller) when updates are made.
For example, a deletion would start with a call to the table view data source as normal (i.e., via tableView:commitEditingStyle:forRowAtIndexPath:). Within that method, you would delete the entry from Core Data (e.g., [self.myDatabase.managedObjectContext deleteObject:entity]). Assuming you initialized the NSFetchedResultsController with the same managed object context, the deletion would automatically be reflected back to your user.
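A minimal sketch of that flow, assuming the fetched results controller and the table share the same context:

    - (void)tableView:(UITableView *)tableView
        commitEditingStyle:(UITableViewCellEditingStyle)editingStyle
         forRowAtIndexPath:(NSIndexPath *)indexPath {
        if (editingStyle == UITableViewCellEditingStyleDelete) {
            NSManagedObject *entity = [self.fetchedResultsController objectAtIndexPath:indexPath];
            [self.fetchedResultsController.managedObjectContext deleteObject:entity];
            // The NSFetchedResultsControllerDelegate callbacks then remove the row from the table.
        }
    }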
If you're using a remote DB, however, you'll also have to perform a save (however you've implemented that) to ensure the DB is updated too.
Note also that if you use an NSFetchedResultsController, you don't need to implement the section data source methods yourself, since NSFetchedResultsController can handle that for you. Just pass the key path in your data model that returns the section name (the sectionNameKeyPath) when initializing the NSFetchedResultsController.
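For example, a sketch of that initializer, assuming a hypothetical Item entity with a "category" attribute used as the section name:

    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Item"];
    request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"category" ascending:YES],
                                [NSSortDescriptor sortDescriptorWithKey:@"name" ascending:YES]];

    NSFetchedResultsController *frc =
        [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                             managedObjectContext:self.managedObjectContext
                                               sectionNameKeyPath:@"category"
                                                        cacheName:nil];
    frc.delegate = self;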

iOS: using GCD with Core Data

At the heart of it, my app will ask the user for a bunch of numbers, store them via Core Data, and then be responsible for showing the user the average of all these numbers.
So what I figure I should do is that after the user inputs a new number, I could fire up a new thread, fetch all the objects with an NSFetchRequest executed against my NSManagedObjectContext, do the proper calculations, and then update the UI on the main thread.
I'm aware that the rule for concurrency in Core Data is one thread per NSManagedObjectContext instance, so what I want to know is: do you think I can do what I just described without having my app explode 5 months down the line? I just don't think it's necessary to instantiate a whole new context just to do some measly calculations...
Based on what you have described, why not just store the numbers as they are entered into a Core Data model and also into an NSMutableArray? It seems as though you are storing these for future retrieval in case someone needs to look at (and maybe modify) a previous calculation. Under that scenario, there is no need to do a fetch after the current set of numbers is entered. Just use a mutable array and populate it with all the numbers for the current calculation. As a number is entered, save it to the model AND to the array. When the user is ready to see the average, do the math on the numbers in the already-populated array. If the user wants to modify a previous calculation, retrieve those numbers into an array and work from there.
Bottom line: you shouldn't need to work with multiple threads and merged contexts unless you are populating a model from a large data set (like the initial seeding of a phonebook, etc.). Modifying a context and calling save on it is very fast for a change as small as the one you're describing.
I would say you may want to do some testing, especially in regard to the size of the data set. If it is pretty small, the SQLite calls are pretty fast, so you may get away with doing it on the main queue. But if it is going to take some time, then it would be wise to get it off the main thread.
Apple introduced the concept of parent and child managed object contexts in 2011 to make using managed object contexts on different threads easier. You may want to check out the WWDC videos on Core Data.
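A sketch of that pattern (iOS 5 and later), assuming self.managedObjectContext is your main-queue context and a hypothetical Number entity:

    NSManagedObjectContext *background =
        [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
    background.parentContext = self.managedObjectContext;

    [background performBlock:^{
        // This block runs on the context's private queue, off the main thread.
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Number"];
        NSArray *numbers = [background executeFetchRequest:request error:NULL];
        // ... compute the average from `numbers` ...
        dispatch_async(dispatch_get_main_queue(), ^{
            // update the UI with the result
        });
    }];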
You can use NSExpression with your fetch to get really high-performance functions like min, max, average, etc. Here is a good link, and there are examples on SO:
http://useyourloaf.com/blog/2012/01/19/core-data-queries-using-expressions.html
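A minimal sketch of an average computed in the store with NSExpression, assuming a hypothetical Number entity with a numeric "value" attribute and a context variable pointing at your managed object context:

    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Number"];
    request.resultType = NSDictionaryResultType;

    NSExpression *keyPath = [NSExpression expressionForKeyPath:@"value"];
    NSExpression *average = [NSExpression expressionForFunction:@"average:" arguments:@[keyPath]];

    NSExpressionDescription *averageDescription = [[NSExpressionDescription alloc] init];
    averageDescription.name = @"averageValue";
    averageDescription.expression = average;
    averageDescription.expressionResultType = NSDoubleAttributeType;

    request.propertiesToFetch = @[averageDescription];

    NSArray *results = [context executeFetchRequest:request error:NULL];
    NSNumber *averageValue = [[results lastObject] objectForKey:@"averageValue"];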
Good luck!

Core Data fetch last 50 objects?

Like the native iPhone Messages app, I want to code AcaniChat to return the last 50 messages sorted chronologically. Let's say there are 200 messages total in Core Data.
I know I can use fetchOffset=150 & fetchLimit=50 (actually, do I even need fetchLimit in this case, since I want to fetch all the way to the end?), but can I fetch the last 50 messages without first having to fetch the message count? For example, with Redis, I could just set fetchOffset to -50.
Reverse the sort order, and grab the first 50.
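A minimal sketch, assuming a Message entity with a timestamp attribute:

    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Message"];
    request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"timestamp" ascending:NO]];
    request.fetchLimit = 50; // the newest 50 messages, newest first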
EDIT
But then, how do I display the messages in chronological order? I'm using an NSFetchedResultsController. – MattDiPasquale
That wasn't part of your question now, was it ;-)
Anyhow, the FRC is not used directly. Your view controller is asked to provide the information, and it then asks the FRC. You can do simple math to transform section/row to get the reverse order.
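For a single-section table, that math might look like this (hypothetical helper name):

    - (NSIndexPath *)fetchedIndexPathForTableIndexPath:(NSIndexPath *)indexPath {
        id <NSFetchedResultsSectionInfo> info = self.fetchedResultsController.sections[indexPath.section];
        NSInteger reversedRow = (NSInteger)info.numberOfObjects - 1 - indexPath.row;
        return [NSIndexPath indexPathForRow:reversedRow inSection:indexPath.section];
    }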
You could also use a second array internally that has a copy of the objects in the FRC, but with a different sort ordering. That's simple as well.
More complex, but more "academically interesting" is using a separate MOC with custom fetch parameters.
However, before I went too far down either path, I'd want to know what's so wrong with querying the count of objects. It's actually quite fast.
Until I had proof from Instruments that it's the bottleneck that's killing my app, I'd push for the simplest solution possible.

Any tips or best practices for adding a new item to a history while maintaining a maximum total number of items?

I'm working on some basic logging/history functionality for a Core Data iPhone app. I want to maintain a maximum number of history items.
My general plan is to ignore the maximum when adding a new item and enforce it whenever I need to fetch all the items anyway (e.g. for searching or browsing the history). Alternatively, I could do it when adding a new item: fetch the current items, add the new one, and delete the oldest one if we're at the maximum. The second way seems less efficient, since I would be fetching all the items when I otherwise wouldn't need to.
So, the questions:
Which way is better? Is there an even better way to do this that I'm not considering?
How many items would be a reasonable maximum? The history is used for text field autocompletion, so more items means better usability, unless the number of items is so huge that it's slowing stuff down.
Thanks!
Whichever method is easier to implement is the right one. You shouldn't bother with a more efficient/more complicated implementation unless it proves it's needed.
If these objects are in a to-many relationship of some kind, I'd use the relationship to manage the maximum number (override add<Whatever>Object: and delete the extraneous items there).
If you're just fetching them, then that's really your only opportunity to filter them out. If you're using an NSArrayController, you might be able to implement a subclass that detects when new objects are added and chops off the extra ones.
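A sketch of that clean-up-when-fetching idea, assuming a hypothetical HistoryItem entity with a createdAt attribute, a limit of 500, and context being your managed object context:

    static const NSUInteger kMaxHistoryItems = 500;

    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"HistoryItem"];
    request.sortDescriptors = @[[NSSortDescriptor sortDescriptorWithKey:@"createdAt" ascending:NO]];

    NSArray *items = [context executeFetchRequest:request error:NULL];
    if (items.count > kMaxHistoryItems) {
        NSRange extra = NSMakeRange(kMaxHistoryItems, items.count - kMaxHistoryItems);
        for (NSManagedObject *old in [items subarrayWithRange:extra]) {
            [context deleteObject:old];
        }
        [context save:NULL];
    }
    // The first kMaxHistoryItems entries are the history handed to the autocomplete UI.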
If the items are added manually by the user, then you can safely use the method of cleaning up later. With text data, a user won't enter more than a few hundred items at most, and text data takes up very little room. If the items are added by software, you have to check every so many entries or risk spillover.
You might not want to spend a lot of time on this. Autocomplete history is not that big, usually just a few hundred entries. I would write it the simplest way, with cleanup later, and then fiddle with it only if you hit a definite performance bottleneck.
Remember, premature optimization is the root of all programming evil. That and the dweebs in marketing.