Objective-C UITableView with 100,000 entries - iPhone

I want to create a UITableView that dynamically searches and displays matching entries. My problem is that I do not know how to store these entries. I think a plain text file or a property list is out of the question, because with those formats the whole data set is loaded at startup.
Does anyone have an alternative to these file types? I have read about SQLite, but is it suitable for 100,000 entries? And is it possible to search the entries performantly? (I currently have the table in a text file, with the entries separated by a certain string.)
Thanks in advance!

Core Data may be one API worth researching:
http://developer.apple.com/library/ios/#documentation/DataManagement/Conceptual/iPhoneCoreData01/Introduction/Introduction.html
Core Data can be backed by several storage mechanisms, one of them being SQLite. SQLite is probably your best bet for that much data.

Either SQLite or Core Data (which uses SQLite) would be appropriate and should be able to handle data sets of that size with little problem.
I think you'll find that UITableView will respond acceptably even when you have that many rows in the table. You'd never want the user to actually scroll through even 1000, let alone 100,000 rows, though, so make sure you give the user a filter to really cut down the number of rows before they have to look at the data.
Important: Make sure you use a fixed row height for your table cells. If you use a variable row height, UITableView will ask you for the height of every single row in the table just so it can figure out the total height of the table. The only way you'll get decent performance is to use a fixed height.
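For illustration, here is the difference in code (a minimal sketch, assuming a standard UITableViewController subclass):
// Preferred: assign one fixed height up front; UITableView can then
// compute its content size without consulting the delegate per row.
self.tableView.rowHeight = 44.0;

// Avoid the per-row delegate callback for very large tables -- it is
// invoked once for every single row just to size the table:
// - (CGFloat)tableView:(UITableView *)tableView
//             heightForRowAtIndexPath:(NSIndexPath *)indexPath;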

Related

How to handle selection with large amounts of data in NatTable

When using NatTable with a selection layer, if I have a huge amount of data (1 million+ columns), selecting a row takes an extremely long time (20 seconds+) or outright crashes my application. Is there a better way to handle selection of large amounts of data, or perhaps a way to select the entire amount but only visually mark the visible columns as selected, updating that as the table is scrolled?
It turns out that this is really a performance leak in NatTable. Interestingly, it has existed in that form for a long time and nobody noticed it until now.
I created a ticket [1] and am working on a fix.
Until then, you could try to remove or replace the "bad guys" in your composition. If that is not possible, you will need to wait for the fix.
ColumnReorderLayer: if you don't need column reorder support, remove it from your layer stack (when talking about millions of columns, I suppose reordering is not a required feature)
ColumnHideShowLayer: if you don't need to support hiding columns, remove it from your layer stack. Not sure if you need it for your use case of showing millions of columns.
SelectionModel: I don't know your data model, but maybe the PreserveSelectionModel performs slightly better at the moment. Or have a look at the proposed fix attached to the ticket (once it is uploaded) and use a local version of that fix in your environment by creating a custom ISelectionModel implementation based on the fix.
[1] https://bugs.eclipse.org/bugs/show_bug.cgi?id=509685

UITableView alphabetical index with large JSON data

I have a table that loads in data from a webservice that returns a bunch of JSON data. I load more data when the user scrolls down as the DB I am querying holds quite a bit of data.
The question I have is: would it be feasible to implement the right-side alphabetical index on such a table, and how could this be done? It is definitely possible if I load ALL the data, sort it locally, populate the index, and cache the data for every subsequent launch. But what if this grows to 10K rows of data or more? Maybe loading this data on the application's first launch is one option.
So in terms of performance and usability, does anyone have any recommendations of what is possible to do?
I don't think you should download all the data just to build those indexes; it would slow down refreshing and might cause memory problems.
But if you think the indexes would make a real difference, then you can add some features to your server API. I would either add a separate API call such as get_indexes, or add a POST parameter get_indexes that makes any call carrying it return an array of index letters as well.
You should also be ready to handle the cases where the user taps an index letter before any data has been downloaded, or stresses your app by scrolling the index rapidly up and down.
First see how big the data download is. If the server can gzip the data, it may be surprisingly small - JSON zips very well because of the duplicated keys.
If it's too big, I would recommend modifying the server if possible to let you specify a starting letter. That way, if the user hits the "W" in the index you should be able to request all items that begin with "W".
It would also be helpful to get a total record count from the server so you can know how many rows are in the table ahead of time. I would also return a "loading..." string for each unknown row until the actual data comes down.
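A rough sketch of how the index side of this could look. Note that -hasDataForLetter: and -fetchWordsStartingWith: are hypothetical methods standing in for your cache check and the per-letter server request described above:
- (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView {
    // One title per letter, shown down the right-hand edge.
    NSMutableArray *titles = [NSMutableArray array];
    for (unichar c = 'A'; c <= 'Z'; c++) {
        [titles addObject:[NSString stringWithCharacters:&c length:1]];
    }
    return titles;
}

- (NSInteger)tableView:(UITableView *)tableView
    sectionForSectionIndexTitle:(NSString *)title
                        atIndex:(NSInteger)index {
    if (![self hasDataForLetter:title]) {     // hypothetical cache check
        [self fetchWordsStartingWith:title];  // hypothetical async request; reload on arrival
    }
    return index; // assumes one table section per letter
}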

iPhone app memory related query

I am building an iPhone application. In the database I have 5,000 records, of which I display only 50 in the app. My question: would there be any memory issue if I initially created 5,000 empty cells in the iPhone view, even though I am displaying only 50 rows of data?
If you build your table appropriately, you will only be using a handful to perhaps a dozen actual UITableViewCell objects which are constantly recycled as things show on screen.
Even 50 would be safe.
Having 5000 data objects in memory with 50 UITableViewCells should be pretty acceptable.
Especially if those data objects are small, or you are allowing CoreData to do some work for you with managing your data set.
The important thing is DO NOT MAKE 5000 TABLE CELL VIEWS. That is extremely poor practice.
The iPhone has a limited amount of memory, so you should always be careful to display only the data that is necessary for that view. You can implement infinite scrolling: when the user reaches the bottom of the screen, you trigger an event and load the next 25-50 records.
http://nsscreencast.com/episodes/8-automatic-uitableview-paging
One thing you'll quickly learn with the canonical way of handling tables is that regardless of the size of your model (i.e., the number of rows you intend to create), only a handful of rows are actually created, therefore the memory footprint remains low.
In essence, the UITableView initially creates and renders a screenful of rows (plus a few more for good measure). When you begin scrolling down, the controller recognises that it needs to draw a new row. But, it also realises that rows from the top of the table have disappeared from view. So, rather than create a whole new cell it simply takes one of the cells no longer in view and reconfigures it with the new info. No matter how many rows your table has, only those few cells live in memory.
So in your case, the memory bottleneck will likely be the model that is feeding the cell configuration. If you loaded all your 5000 rows into memory at once then that may be slow and memory consuming. But there is help at hand: you get a hint from the table controller that basically tells you that it wants to set up the *n*th row. So your model can in effect be more targeted and only load the data you need. E.g., since you know the 15th row is being rendered, then go and grab the 15th row from your database rather than preloading the entire model up-front.
This is the approach I've used to create apps with many more than 5000 rows without the need for paging. Of course it depends on your dataset as to how your user may navigate.
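A minimal sketch of that pattern, assuming -wordAtIndex: is a hypothetical accessor that pulls a single row from the database on demand:
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *reuseID = @"WordCell";
    // Recycle a cell that scrolled out of view instead of creating a new one.
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:reuseID];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:reuseID]; // add -autorelease if not using ARC
    }
    cell.textLabel.text = [self wordAtIndex:indexPath.row]; // fetch just this one row
    return cell;
}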

Core Data - Large Datasets and Very Long Load Times

I've got about 5000-7000 objects in my Core Data store that I want to display in a table view. I'm using a fetched results controller, and I haven't got any predicates on the fetch, just a sort on an integer field. Each object consists of a few ints and a few strings holding about 10 to 50 characters. My problem is that the view takes a good 10 seconds to load. Is this normal?
I believe the FRC is designed to handle large datasets, using batching and similar techniques. Are there any common pitfalls or something like that? I'm really stumped: I've stripped my app down to a single table view, yet it still takes around 10 seconds to load, and that's with the table view left in the default style and just a string displayed in each cell.
Any advice would be greatly appreciated!
Did you check the index checkbox for the integer you are sorting on in your Core Data model?
On your fetch request, have you used -setFetchBatchSize: to minimize the number of items fetched at once (generally, the number of items onscreen, plus a few for a buffer)? Without that, you won't see as much of a performance benefit from using an NSFetchedResultsController for your table view.
You could also limit the properties being fetched by using -setPropertiesToFetch: on your fetch request. It might be best to limit your fetch to only those properties of your objects that will influence their display in the table view. The remainder can be lazy-loaded later when you need them.
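Putting those suggestions together, a sketch might look like this. The entity name "Word" and sort key "sortOrder" are placeholders for your model, and release calls are elided:
NSFetchRequest *request = [[NSFetchRequest alloc] init];
[request setEntity:[NSEntityDescription entityForName:@"Word"
                               inManagedObjectContext:context]];
[request setSortDescriptors:[NSArray arrayWithObject:
    [[NSSortDescriptor alloc] initWithKey:@"sortOrder" ascending:YES]]];
[request setFetchBatchSize:25]; // roughly a screenful of rows plus a buffer
// -setPropertiesToFetch: can trim the fetch further, as suggested above;
// check its interaction with the fetch's resultType in the NSFetchRequest docs.

NSFetchedResultsController *frc =
    [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                        managedObjectContext:context
                                          sectionNameKeyPath:nil
                                                   cacheName:@"WordList"];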

What's the fastest way to save data and read it next time in an iPhone app?

In my dictionary iPhone app I need to save an array of strings containing about 125,000 distinct words; this amounts to approximately 3.2 MB of data.
The first time the app runs, I get this data from an SQLite DB. As that query takes ages to run, I need to save the data somehow, to read it back faster on each subsequent launch.
Until now I've tried serializing the array and writing it to a file, and afterwards I tested writing directly to NSUserDefaults to see if there was any speed gain, but there is none. Either way it takes about 7 seconds on the device to load the data. It turns out that it is not reading from the file (or NSUserDefaults) that takes all that time, but the deserialization:
objectsForCharacters = [[NSKeyedUnarchiver unarchiveObjectWithData:data] retain];
Do you have any ideas how I could store this data structure so that I can read it / load it into memory faster?
UITableView is not really designed to handle tens of thousands of records; it would take a long time for a user to find what they want.
It would be better to load a portion of the table, perhaps a few hundred rows, as the user enters data, so that it appears they have all the records available to them (perhaps with a label showing the number of records remaining in their filtered view).
An SQLite DB should be perfect for this job. Add an index to the words table and then select a limited number of rows from it to show the user some progress. Adding an index makes a big difference to the performance of even this simple table.
For example, I created two tables in an SQLite DB and populated them with around 80,000 words:
-- Create and populate the indexed table
create table words(word);
.import dictionary.txt words
create unique index words_index on words (word DESC);
-- Create and populate the unindexed table
create table unindexed_words(word);
.import dictionary.txt unindexed_words
Then I ran the following queries and noted the CPU time taken for each:
.timer ON
select * from words where word like 'sn%' limit 5000;
...
CPU Time: user 0.031250 sys 0.015625
select * from unindexed_words where word like 'sn%' limit 5000;
...
CPU Time: user 0.062500 sys 0.0312
The results vary, but the indexed version was consistently faster than the unindexed one.
With fast access to parts of the dictionary through an indexed table, you can bind the UITableView to the database using NSFetchedResultsController. This class takes care of fetching records as required, caches results to improve performance, and allows predicates to be easily specified.
An example of how to use the NSFetchedResultsController is included in the iPhone Developers Cookbook. See main.m
Just keep the strings in a file on disk, and do the binary search directly in the file.
So: you say the file is 3.2 MB. Suppose the format of the file is like this:
key DELIMITER value PAIRDELIMITER
where key is a string and value is the value you want to associate with it. The DELIMITER and PAIRDELIMITER must be chosen such that they don't occur in the keys and values.
Furthermore, the file must be sorted on the key.
With this file you can do the binary search in the file itself.
Suppose the user types a letter: you go to the middle of the file and scan (forwards or backwards) to the first PAIRDELIMITER. Then check the key and see whether you have to search upwards or downwards, and repeat until you find the key you need.
I'm betting this will be fast enough.
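A minimal sketch of that idea, assuming a simpler format (one word per line, '\n' as the only delimiter, file sorted with the same byte comparison strncmp uses) and mapping the file so nothing is parsed up front:
#include <string.h>

static BOOL FileContainsWord(NSData *blob, const char *target) {
    const char *bytes = (const char *)[blob bytes];
    NSUInteger len = [blob length];
    size_t targetLen = strlen(target);
    NSUInteger lo = 0, hi = len;
    while (lo < hi) {
        NSUInteger mid = lo + (hi - lo) / 2;
        NSUInteger start = mid, end = mid;
        while (start > 0 && bytes[start - 1] != '\n') start--; // back up to line start
        while (end < len && bytes[end] != '\n') end++;         // scan to line end
        int cmp = strncmp(target, bytes + start, end - start);
        if (cmp == 0) {
            if (targetLen == end - start) return YES; // exact match
            cmp = 1; // target is longer, so it sorts after this line
        }
        if (cmp < 0) hi = start;   // keep searching earlier lines
        else         lo = end + 1; // keep searching later lines
    }
    return NO;
}

// Usage: map the file once, then search as the user types.
// NSData *blob = [NSData dataWithContentsOfMappedFile:path];
// BOOL found = FileContainsWord(blob, "snappy");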
Store your dictionary in Core Data and use NSFetchedResultsController to manage the display of these dictionary entries in your table view. Loading all 125,000 words into memory at once is a terrible idea, both performance- and memory-wise. Using the -setFetchBatchSize: method on your fetch request for loading the words for your table, you can limit NSFetchedResultsController to only handling the small subset of words that are visible at any given moment, plus a little buffer. As the user scrolls up and down the list of words, new batches of words are fetched in transparently.
A case like yours is exactly why this class (and Core Data) was added to iPhone OS 3.0.
Do you need to store/load all data at once?
Maybe you can just load the chunk of strings you need to display and load all other strings in the background.
Perhaps you can load the data into memory in one thread and search it from another? You may not get search results instantly, but having some searches feel snappy may be better than making the user wait until all the data is loaded.
Are some words searched more frequently or repeatedly than others? Perhaps you can cache frequently searched terms in a separate database or other store, and load it in a separate thread as a searchable store while the main store is still loading.
As for a data structure solution, you might look into a suffix trie, which can search for substrings in linear time. This will probably increase your storage requirements, though, which may strain the iPhone's limited memory and disk capacity.
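To make the suffix-trie idea concrete, here is a deliberately naive sketch (not a linear-time construction: every suffix of every word is inserted, which is exactly where the storage blow-up mentioned above comes from; lowercase a-z only):
#include <stdlib.h>

typedef struct TrieNode {
    struct TrieNode *child[26];
} TrieNode;

static void TrieInsertSuffixes(TrieNode *root, const char *word) {
    for (const char *s = word; *s; s++) {      // one pass per suffix
        TrieNode *n = root;
        for (const char *p = s; *p; p++) {
            int i = *p - 'a';
            if (i < 0 || i >= 26) break;       // skip non a-z bytes
            if (n->child[i] == NULL)
                n->child[i] = calloc(1, sizeof(TrieNode));
            n = n->child[i];
        }
    }
}

static BOOL TrieContainsSubstring(TrieNode *root, const char *sub) {
    // Any substring of a word is a prefix of one of its suffixes,
    // so lookup is a single O(strlen(sub)) walk from the root.
    TrieNode *n = root;
    for (const char *p = sub; *p; p++) {
        int i = *p - 'a';
        if (i < 0 || i >= 26 || n->child[i] == NULL) return NO;
        n = n->child[i];
    }
    return YES;
}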
I really don't think you're on the right path trying to load everything at once.
You've already determined that your bottleneck is the deserialization.
Regardless of what the UI does, the user only sees a handful (literally) of search results at a time.
SQLite already has a robust indexing mechanism; there is likely no need to reinvent that wheel with your own indexing.
IMHO, you need to rethink how you are using UITableView. It only needs a few screenfuls of data at a time, and you should reuse cell objects as they scroll out of view rather than creating a ton of them to begin with.
So, use SQLite's indexing and grab the "top x" rows, where x strikes the right balance between giving the user some immediately available rows to scroll through and not spending too much time loading them. Set the table's scroll bar scaling using a separate SELECT COUNT(*) query, which only needs to be re-run when the user types something different.
You can always go back and cache aggressively once you have deserialized enough to get something up on screen. A slight lag after the first flick or keystroke is more acceptable than a 7-second delay just starting the app.
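A sketch of that "top x" query using the sqlite3 C API (link libsqlite3; the table and column names are placeholders, and SQLite's keyword is LIMIT rather than TOP):
#import <sqlite3.h>

static NSArray *TopMatches(sqlite3 *db, NSString *prefix, int limit) {
    NSMutableArray *results = [NSMutableArray array];
    const char *sql =
        "SELECT word FROM words WHERE word LIKE ? ORDER BY word LIMIT ?";
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return results;
    NSString *pattern = [prefix stringByAppendingString:@"%"];
    sqlite3_bind_text(stmt, 1, [pattern UTF8String], -1, SQLITE_TRANSIENT);
    sqlite3_bind_int(stmt, 2, limit);
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const char *word = (const char *)sqlite3_column_text(stmt, 0);
        [results addObject:[NSString stringWithUTF8String:word]];
    }
    sqlite3_finalize(stmt);
    return results; // a separate SELECT COUNT(*) can size the scroll bar
}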
I currently have a somewhat similar coding problem with a large number of searchable strings.
My solution is to store the prepared data in one large in-memory block, containing both the textual data and offsets serving as links, meaning I do not allocate an object per item. This makes the data use less memory and also lets me load and save it to a file without further processing.
Not sure if this is an option for you, but it is quite an obvious solution once you've realised that the object tree is causing the slowdown.
I use one large NSData memory block, then search through it. There is more to it than that; it took me about two days to get it well optimised.
In your case I suspect you have a dictionary with many words that share similar beginnings. You could prepare them on another computer in a format that both compacts the data and facilitates fast lookup. As a first step, the words should be sorted; with that, you can already perform a binary search on them for a fast lookup. If you store it all in one large memory area, you can do the search quite fast compared to how SQLite would search, I think.
Another way would be to see the words as a kind of tree: many thousands begin with the same letter, so you divide your data accordingly, with one SQL table for each beginning letter of your word set. To look up a word, you select one of the now-smaller tables depending on its first letter; this makes the amount that has to be searched much smaller. You can do the same for the 2nd and 3rd letters as well, and you could already have quite fast access.
Did this give you some ideas?
Well, actually I figured it out myself in the end, but of course I thank you all for your quick and pertinent answers. To be concise: Objective-C, just like any other object-based programming language, is significantly slower than procedural code for this kind of work, due to introspection and other object overhead.
The solution was in fact to load all my data into one continuous chunk of memory using malloc (a char **), search it on demand, and only transform matches into objects. This resulted in a 0.5-second loading time (from file to memory) and reasonably fast operations during execution. Thank you all again, and if you have any questions, I'm here for you. Thanks!
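For anyone curious, a hedged reconstruction of what that might look like. The actual code was not posted; this sketch assumes one word per line with a trailing newline:
#include <stdlib.h>
#include <string.h>

static char  *blob  = NULL;  // one contiguous malloc'd block for all words
static char **words = NULL;  // pointers into blob, one per word
static size_t wordCount = 0;

static void LoadWords(NSString *path) {
    NSData *data = [NSData dataWithContentsOfFile:path];
    size_t len = [data length];
    blob = malloc(len + 1);
    memcpy(blob, [data bytes], len);
    blob[len] = '\0';

    for (size_t i = 0; i < len; i++)        // first pass: count words
        if (blob[i] == '\n') wordCount++;
    words = malloc(wordCount * sizeof(char *));

    size_t w = 0;
    char *cursor = blob;
    for (size_t i = 0; i < len; i++) {      // second pass: split in place
        if (blob[i] == '\n') {
            blob[i] = '\0';                 // terminate the word in place
            words[w++] = cursor;
            cursor = blob + i + 1;
        }
    }
}

// Only matches that are actually displayed become objects:
// cell.textLabel.text = [NSString stringWithUTF8String:words[i]];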