I am pulling data from a web service in order to populate my UITableView rows. It loads perfectly fine, but it takes around 4 seconds to load all the data. Is there a way I can speed up the loading? Perhaps by caching it? Or any tips and tricks on what people usually do for this?
You can display the old data, asynchronously fetch the new data on a background thread, and then reload the UITableView.
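A minimal sketch of that pattern in Swift, assuming a JSON endpoint that returns an array of strings; the names (FeedViewController, ItemCache, the URL) are all illustrative, not from the original answer:

```swift
import UIKit

// A minimal cache-then-refresh table view controller.
final class FeedViewController: UITableViewController {
    private var items: [String] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")

        // 1. Show whatever we already have (e.g. cached from the last run) right away.
        items = ItemCache.shared.load()
        tableView.reloadData()

        // 2. Fetch fresh data off the main thread.
        let url = URL(string: "https://example.com/api/items")!   // placeholder endpoint
        URLSession.shared.dataTask(with: url) { data, _, _ in
            guard let data = data,
                  let fresh = try? JSONDecoder().decode([String].self, from: data) else { return }

            // 3. Cache the new data and reload the table on the main thread.
            ItemCache.shared.save(fresh)
            DispatchQueue.main.async {
                self.items = fresh
                self.tableView.reloadData()
            }
        }.resume()
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        items.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }
}

// Trivial persistent cache; a real app might use a file, a database, or Core Data instead.
final class ItemCache {
    static let shared = ItemCache()
    private let key = "cachedItems"
    func load() -> [String] { UserDefaults.standard.stringArray(forKey: key) ?? [] }
    func save(_ items: [String]) { UserDefaults.standard.set(items, forKey: key) }
}
```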
Caching depends on what you're loading. If you're loading, say, Twitter feeds, you should cache the user avatar pictures because you know you'll be fetching them over and over again. If you're writing something like a retail app, you might show items that are on sale. If the items change every Sunday, then cache them the first time you fetch them and don't fetch them again until Sunday. That sort of thing.
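For the avatar case, a small in-memory cache keyed by URL is usually enough. Here is a hedged Swift sketch of that idea; the class name and cache policy are made up for illustration:

```swift
import UIKit

// A tiny in-memory avatar cache, assuming avatars are fetched by URL.
final class AvatarCache {
    static let shared = AvatarCache()
    private let cache = NSCache<NSURL, UIImage>()

    func image(for url: URL, completion: @escaping (UIImage?) -> Void) {
        // Return the cached image if we've already fetched this avatar.
        if let cached = cache.object(forKey: url as NSURL) {
            completion(cached)
            return
        }
        // Otherwise fetch it once and remember it for next time.
        URLSession.shared.dataTask(with: url) { data, _, _ in
            let image = data.flatMap(UIImage.init(data:))
            if let image = image {
                self.cache.setObject(image, forKey: url as NSURL)
            }
            DispatchQueue.main.async { completion(image) }
        }.resume()
    }
}
```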
Beyond that, there's not much you can do to make the internet faster. If you have control over the web service, you can make the data sent back as concise and simple as possible. You'd be surprised how many milliseconds you can burn parsing complicated XML.
If it makes sense for your app, you can show old data. For a Twitter client, it's better to just save the data you've already fetched, show it immediately, and load the new stuff in the background.
If you can't do any of that, then pretty much all you can do is put up a "Loading..." overlay of some sort so that the app doesn't just look frozen and live with the delay.
You can try Three20's TTTableViewController; a nice tutorial can be found here:
Three20
You can also add a "Load more results" button; look here.
Related
I'm fairly new to MvvmCross and the MVVM model in general. I have been trying to create my own cross-platform app for several weeks now, and I'm stuck on what good practice would be. I have two main problems that I hope somebody can help me with.
Question 1:
I have a complex model with many properties, sub items, and sub items in those sub items. Also, many values are automatically calculated based on other values.
I implemented MvxNavigatingObject everywhere, and all values are correctly notified when changes occur. So far so good.
Now I want to let people use the app to change the values in my model. But because there are so many input fields, I want to divide the data over several pages. Each page has its own view model, of course, which means reloading my large object every time the page changes.
To solve this, I created a DataHolderService, which is injected as a singleton into all the view models. The view models then change the data in the DataHolderService, and I never have to reload the data.
But I wonder, is this good practice? It feels a bit strange to be doing this. Are there other possibilities? Like using the same viewmodel on multiple pages?
Question 2:
I would like to save my data to the database so it persists between sessions. I have a SQLite database and am able to save the data using a button. But if the user forgets to use the save button and the app is put in the background until the system eventually kills it, the data would be lost.
I therefore added a timer, which periodically saves the data to the database. But I can understand that this isn't very good practice. What would be a good way to save the data back into the database without having the user needing to press a save button? Is there an event/function that will fire before the view model is disposed?
It's a bit hard to understand exactly what you are trying to achieve, but here are some thoughts.
Wouldn't it be better to use a list of questions rather than spreading them over multiple pages?
We created a similar page recently. If the data is shown as checkboxes/radio buttons/spinners etc., we save it immediately when the value changes.
For saving text, we use a one-second timer that starts when the user begins typing and is reset whenever the text changes within that second; when the timer fires without further changes, the text is saved.
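The timer-reset ("debounce") approach can be sketched like this. The original question is MvvmCross/C#, so treat this Swift version purely as an illustration of the pattern; AutoSaver and the injected save callback are made-up names:

```swift
import Foundation

// Debounced auto-save: restart a one-second timer on every text change,
// and only persist once the timer is allowed to fire.
final class AutoSaver {
    private var timer: Timer?
    private let save: (String) -> Void   // injected persistence callback

    init(save: @escaping (String) -> Void) {
        self.save = save
    }

    // Assumed to be called on the main thread, e.g. from a text field change handler.
    func textDidChange(_ text: String) {
        timer?.invalidate()   // reset on every keystroke
        timer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: false) { [weak self] _ in
            self?.save(text)  // no edits for one second: persist
        }
    }
}
```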
I have noticed that for a mobile application, saving Core Data on the main thread seems to take a while compared to other applications on the device. Is it recommended to only save Core Data when the application enters the background or when the application closes, instead of any time items are added or sent/received from the API?
That's kind of a broad question, but I've found that saving Core Data after viewDidAppear is better than in the viewWill methods. Giving the user something to engage with while the save happens makes it less noticeable than doing it during a load. However, if a user is used to waiting for something like an activity loop, adding the save to that doesn't tax it too much (IMHO).
Not sure this helps; it's just my experience.
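One standard way to keep the save itself off the main thread (not something the answer above spells out, just a common approach) is to do the work on a background context. A minimal sketch, assuming an NSPersistentContainer and an "Item" entity with a "name" attribute, both of which are illustrative:

```swift
import CoreData

// Insert records and save on a background context so the save never blocks the UI.
func insertAndSaveInBackground(container: NSPersistentContainer, names: [String]) {
    container.performBackgroundTask { context in
        for name in names {
            let item = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
            item.setValue(name, forKey: "name")
        }
        do {
            try context.save()   // slow disk I/O happens off the main thread
        } catch {
            print("Background save failed: \(error)")
        }
    }
}
```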
I am using a Rails web service and was wondering how many JSON objects I should bring back on the first call.
Options:
Bring back 200 from the web service but only show 25 in the UITableView, with a "Load more" feature?
Bring back 25 and, on tapping "Load more", fetch another 25 from the web service?
????
Without empirical data it's very hard to say, but I would guess that overall, dealing with smaller datasets and more calls would be a little better for the user. The reason being that users tend to "hurry up and wait". They tap something, and when they tap that something they want it 5 seconds ago (hurry up). Once they see the data, they probably want to actually look at it a bit before they request new data (wait).
This is also an argument for background loading while the user is playing around with things: if you can load that other info invisibly before they ask for it, all the better for a snappy UI, but you may be wasting bandwidth on your server, and their battery. Which brings us back to needing good metrics. Make it work and get it into some people's hands, see how it feels, then go from there with some real UX feedback.
If you can bring back 200 objects in a relatively short amount of time, the cellular or Wi-Fi radios may be able to go into a low-power mode for longer as the user scrolls, enhancing battery life.
If loading over 25 objects takes a long time, you might not want to keep the radios powered up until you know the user wants to see that data.
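If you go the 25-at-a-time route, a hedged Swift sketch of paged loading might look like the following; the Rails-style endpoint, the page size, and the class names are all assumptions for illustration:

```swift
import UIKit

final class PagedListViewController: UITableViewController {
    private var items: [String] = []
    private var page = 0
    private let pageSize = 25
    private var isLoading = false

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")
        loadNextPage()
    }

    private func loadNextPage() {
        guard !isLoading else { return }
        isLoading = true
        // Placeholder Rails-style endpoint; adjust to your API.
        let url = URL(string: "https://example.com/items.json?page=\(page + 1)&per_page=\(pageSize)")!
        URLSession.shared.dataTask(with: url) { data, _, _ in
            let newItems = (try? JSONDecoder().decode([String].self, from: data ?? Data())) ?? []
            DispatchQueue.main.async {
                self.items.append(contentsOf: newItems)
                self.page += 1
                self.isLoading = false
                self.tableView.reloadData()
            }
        }.resume()
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        items.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }

    // Fetch the next page when the user nears the bottom
    // (an implicit alternative to an explicit "Load more" row).
    override func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
        if indexPath.row == items.count - 1 {
            loadNextPage()
        }
    }
}
```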
I will say, don't worry about bringing the objects back; that won't take much time. If you are getting 500 objects from the web service, it's just an XML file coming over, and it shouldn't take much time to load and parse. You can easily do it on a background thread or with lazy loading. The problem comes if you are simultaneously trying to update the UI: drawing a view will consume most of the cycles, so handle it tactfully.
Has anyone heard about asynchronous execution of an EF query?
I want my items control to be filled as soon as the form loads, and the user should be able to view the list while the rest of the items are still being loaded.
Maybe by auto-splitting the execution into batches of items (i.e. a few queries per execution), all on the same connection.
I posted a feature suggestion to Microsoft; please share your ideas with them as well.
Not wanting to sound like a commercial, but I noticed that DevExpress offers features like this in their latest WPF grid. Essentially you want to load the visible count of items first, then load the rest in a background thread so your UI isn't freezing up. The background thread should probably load one more page at a time and make it available to the UI thread.
It's something you would want to think about carefully and make sure you get it right, or simply buy a control that does the hard work for you.
I take from your link that this is a web app. Is that correct?
A query must complete and return data before rendering can begin, so an EF feature will not help you here. Rather, look at breaking up your process into several processes that can run at once.
Keep in mind that ASP.NET cannot return a response to the browser until it has finished rendering the HTML.
Let me assume you are executing a single query, getting the results back, and displaying them on a page.
Best option: page your results. If you have 4000 records, show the first 50. If you show 200+ records to a user, they cannot digest that much information.
If that does not fit your needs, look at firing one query for 50 results, then make an Ajax call for the remaining records and build the UI from there in (reasonably sized) chunks.
What are good practices for asynchronously pulling large amounts of XML from a RESTful service into a Core Data store, and from this store, populating a UITableView on the fly?
I'm thinking of using libxml2's xmlParseChunk() function to parse chunks of incoming XML and translate a node and its children into the relevant managed objects, as nodes come in.
As these XML nodes are turned into managed objects, I want to generate UITableView rows in turn, say 50 rows at a time. Is this realistic?
In your experience, what do you do to accomplish this task, to maintain performance and handle, potentially, thousands of rows? Are there different, simpler approaches that work as well or better?
Sure, this is a pretty standard thing. The easiest solution is to do the loading on a background thread with one MOC, and have the UI running on the main thread with its own MOC. Whenever you get a chunk of data you want to have appear (say 50 entries), you call save: on the background MOC.
Assuming you have the foreground MOC rigged to merge changes (via mergeChangesFromContextDidSaveNotification:) then whenever you save the background MOC the foreground MOC will get all of those changes. Assuming you are using NSFetchedResultsController it has delegate methods to cope with changes in its MOC, and if you are using Apple's sample code then you probably already have everything setup correctly.
In general CoreData is going to be faster than anything you roll yourself unless you really know what you are doing and are willing to spend a ton of time tuning for your specific case. The biggest thing you can do is make sure that slow things (like XML processing and synchronous flash I/O caused by save:) are not on the main thread blocking user interaction.
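A hedged sketch of that arrangement in Swift, using the modern NSPersistentContainer convenience (automaticallyMergesChangesFromParent) in place of wiring up mergeChangesFromContextDidSaveNotification: by hand; the container and the "Item" entity are illustrative:

```swift
import CoreData

// Background import context whose saves are merged into the main (UI) context,
// along the lines the answer describes.
final class ChunkImporter {
    private let container: NSPersistentContainer

    init(container: NSPersistentContainer) {
        self.container = container
        // Merge background saves into the viewContext so an
        // NSFetchedResultsController driving the table picks them up.
        container.viewContext.automaticallyMergesChangesFromParent = true
    }

    // Import one parsed chunk (say, 50 records) and save it off the main thread.
    func importChunk(_ records: [[String: Any]]) {
        container.performBackgroundTask { context in
            for record in records {
                let object = NSEntityDescription.insertNewObject(forEntityName: "Item", into: context)
                for (key, value) in record {
                    object.setValue(value, forKey: key)
                }
            }
            try? context.save()   // triggers the merge into the UI context
        }
    }
}
```

An NSFetchedResultsController attached to container.viewContext will then surface each imported chunk to the table through its delegate callbacks.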
Joe Hewitt (the Facebook app developer) has released much of his code as open source. It is called Three20. There is a class there that is great for fetching internet data and populating a table with it, without needing the data beforehand. The classes used for this are TTTableViewController and TTTableViewDataSource.
From here, it would not be much of a stretch to store the data in Core Data; just subclass the classes as you see fit using the supplied hooks.
If you are worried about too much data, 50 at a time does sound reasonable. These classes have a built in "More" button to help you out.
From the Three20 readme:
Internet-aware table view controllers: TTTableViewController and TTTableViewDataSource help you to build tables which load their content from the Internet. Rather than just assuming you have all the data ready to go, like UITableView does by default, TTTableViewController lets you communicate when your data is loading, and when there is an error or nothing to display. It also helps you to add a "More" button to load the next page of data, and optionally supports reloading the data by shaking the device.
No one has mentioned RestKit yet? My friends ... seriously, you have to check this out. If you are doing anything with REST on iOS (and now on OS X) and particularly if you're wanting to work with Core Data ... PLEASE have a look at RestKit. I've saved countless hours implementing some pretty complex data synchronization between a server and my Core Data models on iOS. RestKit made it so damned easy, it almost makes you sick.