SwiftyDropbox batchUploadFiles vs upload - swift

I am uploading a large number of photos and annotations to Dropbox using the SwiftyDropbox SDK. I want to update the UI to reflect the upload status of each item, which is stored in Core Data. My understanding of batchUploadFiles is that you pass it an array of URLs and it uploads them asynchronously. I would like to use batch upload, but I am not sure how to tell when a certain item is finished, since batchUploadFiles operates on an array of URLs. Is there a way I can use batchUploadFiles, versus just iterating over the array with the upload function?
It seems that upload would be the correct solution, as I can just add each item to a background thread asynchronously and update each one as it finishes. Looking for arguments to persuade me either way.

The batchUploadFiles method in the SwiftyDropbox SDK is advantageous in that it only has to take one lock to upload the entire batch of files. It only calls the response block once the entire batch is done, though, and the files are committed as a batch, so you wouldn't see individual uploads completing one by one. You instead get the results for all of the files together at the end.
If you do need to see individual file uploads complete one by one for whatever reason, you would need to use individual upload calls, but that has the disadvantage of not batching the uploads, so you're more likely to run into lock contention.
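To make the trade-off concrete, here is a rough sketch of both approaches. The exact SwiftyDropbox callback shapes vary a little between SDK versions, and markItemUploaded(_:rev:) is a hypothetical helper that would flip the status flag on the matching Core Data record:

```swift
import SwiftyDropbox

let client = DropboxClientsManager.authorizedClient!

// Hypothetical helper: update the matching Core Data record's status flag.
func markItemUploaded(_ url: URL, rev: String) { /* ... */ }

// Option A: individual uploads. One response callback per file, so each
// item's Core Data status can be updated the moment that file finishes.
func uploadIndividually(_ fileURLs: [URL]) {
    for fileURL in fileURLs {
        client.files.upload(path: "/Photos/\(fileURL.lastPathComponent)", input: fileURL)
            .response { metadata, error in
                if let metadata = metadata {
                    markItemUploaded(fileURL, rev: metadata.rev) // per-file UI update
                } else if let error = error {
                    print("Upload of \(fileURL.lastPathComponent) failed: \(error)")
                }
            }
    }
}

// Option B: batch upload. One lock and one commit for the whole set, but
// only a single response block, delivered after everything is done.
func uploadAsBatch(_ fileURLs: [URL]) {
    var commitInfo = [URL: Files.CommitInfo]()
    for fileURL in fileURLs {
        commitInfo[fileURL] = Files.CommitInfo(path: "/Photos/\(fileURL.lastPathComponent)")
    }
    client.files.batchUploadFiles(fileUrlsToCommitInfo: commitInfo) { entries, batchError, urlErrors in
        // Results for every file arrive here together, at the end.
    }
}
```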

Related

Stream processing

I have a requirement that when a user uploads a file, it should work in the following manner:
1) A file upload dialog (in the browser) is presented to the user. The user picks a file.
2) The application should load only the first x records (e.g. if the total number of records is 100, get the first 10), and the user gets a chance to do a visual review of the records (read-only view).
3) The user then decides one of two things: click "Submit", which takes in all the data and streams it to the server, or click "Next" to review the next 10 records, etc.
Is scalaz-stream a good fit as an overall solution, and in particular for doing 2) and 3) above: getting only partial data, pausing the stream, then continuing to consume it, and repeating the process?
No, scalaz-stream is not a good idea here. The Play! framework has its own streaming framework, built on the Enumerator, Enumeratee, and Iteratee classes, which can be used for asynchronous processing of streams, and the file upload code is already built to use it.
You have two options:
One, use HTML5 and front-end Javascript to get access to the file. This will only work in the newest browsers. This is the only option if you don't want any of the file uploaded until the user chooses "Submit".
Two, incrementally parse the upload as it comes to the server using the Enumerator framework, and respond over long-polling AJAX/Comet/Websocket to the front-end Javascript with a subset of records as they are parsed. The Iteratee that is parsing the incoming upload will have to pause and wait for further input from the front-end. This solution would be complex and would suffer from issues with the browser timing out.
Neither of these is a very good idea. It would be much simpler to have the entire file upload all at once, have the parsed records fed back to the front-end afterwards, and have the "Submit" button actually function as a "Save" button that tells the server to keep the received upload. Unless you are shoving 100 MiB+ Excel files up a mobile connection, this is likely the easiest and most compatible solution.

How to sync Core Data with referenced files?

I've just started to read various posts about how to sync files or Core Data using iCloud. The app I'm currently developing stores data in Core Data, with filenames kept as references to the image files stored in the app's Documents sandbox. So a related file (photo) is also created in the Documents directory every time a user makes a record in the database.
Everything would be fine if we only needed to sync files OR Core Data; however, I'm looking for a way to sync Core Data AND files. So I'm worried about the case where new Core Data records arrive earlier than the image files for those records. In that case, data integrity would be broken. Ideally, all new related files would arrive first, and then all the Core Data updates. Is it possible to do that?
Not really, no. You send data to the cloud, but you have no way to control when it appears on other devices. iCloud is going to bring over your managed objects whenever it feels like it, regardless of the state of the external files. The only way you could make this happen would be to find and download any external files, wait for the download to finish, and only then bring up your Core Data stack. But that would mean locking the user out of the data store until the downloads finish, which is not a good idea.
When I faced a similar situation, I handled it like this:
Initiate downloads of all the external files and bring up the Core Data stack.
Modify the getter method for the image to check whether the file exists and has been downloaded.
If yes to both, proceed normally.
If no, display a "loading..." UI element. This could be a spinner or a progress indicator. Listen for a custom "download complete" notification.
Whenever an external file finishes downloading, post that "download complete" notification. Re-check the file, and if it's ready, replace the "loading..." UI with the image.
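A condensed sketch of that flow, assuming a hypothetical table view cell; the notification name is made up for illustration, and posting it when a download finishes is up to whatever code is watching the files:

```swift
import UIKit

extension Notification.Name {
    // Hypothetical name; post this wherever you detect a finished download.
    static let photoDownloadComplete = Notification.Name("PhotoDownloadComplete")
}

final class PhotoCell: UITableViewCell {
    private let spinner = UIActivityIndicatorView(style: .medium)
    private var fileURL: URL?

    func configure(with fileURL: URL) {
        self.fileURL = fileURL
        accessoryView = spinner
        NotificationCenter.default.removeObserver(self)
        NotificationCenter.default.addObserver(
            self, selector: #selector(downloadCompleted),
            name: .photoDownloadComplete, object: nil)
        refreshImage()
    }

    @objc private func downloadCompleted() { refreshImage() }

    private func refreshImage() {
        guard let path = fileURL?.path else { return }
        if let image = UIImage(contentsOfFile: path) {
            spinner.stopAnimating()   // file exists and is readable: show it
            imageView?.image = image
        } else {
            spinner.startAnimating()  // still downloading: "loading..." UI
        }
    }
}
```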

Optimal way to save/parse XML on iPhone

This is my first question so I will do my best to conform to the question guidelines.
I am developing an iPhone app that parses an XML feed to be displayed in a table. Parsing is not a problem, but I am not sure of the best way to optimize loading times after the initial run of the app.
Here are the different approaches I am considering:
Parse the XML feed each time the application is loaded. The easy way, but with a possibly longer loading time on each run of the app.
Grab the feed and store it locally (as .xml), then parse locally. Then, each time the app is opened, make an HTTP call to see if the feed has changed. If not, parse locally. If so, download the new feed and parse locally. The initial loading time will be longer but could be cut down on later runs (if the feed has not been updated). This option will be beneficial if the user has a bad signal but needs to see the data.
Parse the feed and store it into a local SQLite db. Then, each time the app is opened, make an HTTP call to see if the feed has changed. If so, detect which objects have been added/removed and alter the local db accordingly. If not, load data from the local db. This might be the best option, but I am not sure how long finding the changes will take.
My feed is only about 100 or so items, each with roughly 20 fields.
Initial parsing time:
Roughly 4-5sec with full bars.
Roughly 5-7sec with 3 bars.
Any insight as to which option would work best would be very much appreciated.
I think the frequency of the XML data changing should be a factor. If it's only going to change once a day/week, I'd load it, save it, and check for updates. If an update exists, download the new feed and overwrite the old one.
The third solution is clearly the best, and it will allow your app to work offline and start quickly. To detect a change, you can simply store an MD5 of the XML file in the database and match it against the MD5 of the new XML file. If the data has changed, then simply discard all previous data.
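As a sketch of just the comparison step (using CryptoKit for brevity; persisting the stored digest alongside the data is left to the caller):

```swift
import Foundation
import CryptoKit

/// Returns true when the freshly downloaded feed differs from the digest
/// recorded on the previous run.
func feedHasChanged(newFeedData: Data, storedDigest: String?) -> Bool {
    let hex = Insecure.MD5.hash(data: newFeedData)
        .map { String(format: "%02x", $0) }
        .joined()
    return hex != storedDigest
}
```

If the server cooperates, comparing its ETag or Last-Modified header is an even cheaper check, since it avoids downloading the feed at all when nothing has changed.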

Downloading multiple items in package

I need to allow users to download multiple images in a single download. This download will include an SQL file and images. Once the download completes, the SQL will execute, inserting text into an SQLite database. The text will include references to the downloaded images. The text and images are rendered in a UIWebView.
What is the best way to download everything in a single download? I was thinking of using a bundle, since it can be loaded at runtime, but I'm not sure of any limitations/restrictions in this scenario. I have tested putting the bundle into the Documents folder and then accessing resources inside of it. That seems to work fine in a simple test.
You're downloading everything through a socket, which only knows about bytes, so a bundle, or even a file, doesn't "naturally" transfer through it. The server side opens files, encodes them, and sends them into the connection; the client reads from the socket and reconstructs the original file structure.
Assuming the application has a UI for picking which items need to be transferred, it could request all the items from the server, and the server could send them all through the single connection with some delimitation you invent, so that the iPhone app can split the stream back into the individual files.
Another option is for the client to just perform individual HTTP requests for the different files, through pretty straightforward use of NSURLConnection.
The former sounds like an attempt to optimize the latter. Have you already tested and verified that the latter is too slow/inefficient? It definitely is more complex to implement.
There is a latency issue with multiple HTTP connections run in sequence, but you can mitigate it by running several download connections in parallel, for example through an NSOperationQueue with a limit of 2 to 5 concurrent download operations.
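A minimal sketch of that queue; the fileURLs parameter and saveToDocuments(_:named:) are stand-ins for your item list and persistence code:

```swift
import Foundation

func downloadAll(_ fileURLs: [URL]) {
    let downloadQueue = OperationQueue()
    downloadQueue.maxConcurrentOperationCount = 3 // somewhere in the 2-5 range

    for url in fileURLs {
        downloadQueue.addOperation {
            // A synchronous fetch keeps the sketch short; a production
            // version would use URLSession download tasks with error handling.
            if let data = try? Data(contentsOf: url) {
                saveToDocuments(data, named: url.lastPathComponent)
            }
        }
    }
}

func saveToDocuments(_ data: Data, named name: String) {
    let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    try? data.write(to: docs.appendingPathComponent(name))
}
```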

Patterns for accessing remote data with Core Data?

I am trying to write a Core Data application for the iPhone that uses an external data source. I'm not really using Core Data to persist my objects but rather for the object life-cycle management. I have a pretty good idea on how to use Core Data for local data, but have run into a few issues with remote data. I'll just use Flickr's API as an example.
The first thing is that if I need say, a list of the recent photos, I need to grab them from an external data source. After I've retrieved the list, it seems like I should iterate and create managed objects for each photo. At this point, I can continue in my code and use the standard Core Data API to set up a fetch request and retrieve a subset of photos about, say, dogs.
But what if I then want to continue and retrieve a list of the user's photos? Since there's a possibility that these two data sets might intersect, do I have to perform a fetch request on the existing data, update what's already there, and then insert the new objects?
--
In the older pattern, I would simply have separate data structures for each of these data sets and access them appropriately: a recentPhotos set and a usersPhotos set. But since the general pattern of Core Data seems to be to use one managed object context, it seems (I could be wrong) that I have to merge my data with the main pool of data. That seems like a lot of overhead just to grab a list of photos. Should I create a separate managed object context for the different set? Should Core Data even be used here?
I think what I find appealing about Core Data is that before (for a web service) I would make a request for certain data and either filter it in the request or filter it in code to produce the list I would use. With Core Data, I can just get a list of objects, add them to my pool (updating old objects as necessary), and then query against it. One problem I can see with this approach, however, is that if objects are externally deleted, I can't know, since I'm keeping my old data.
Am I way off base here? Are there any patterns people follow for dealing with remote data and Core Data? :) I've found a few posts of people saying they've done it, and that it works for them, but little in the way of examples. Thanks.
You might try a combination of two things. This strategy will give you an interface where you get the results of an NSFetchRequest twice: once synchronously, and once again when data has been loaded from the network.
1) Create your own subclass of NSFetchRequest that takes an additional block property to execute when the fetch is finished. This is for your asynchronous request to the network. Let's call it FLRFetchRequest.
2) Create a class to which you pass this request. Let's call it FLRPhotoManager. FLRPhotoManager has a method executeFetchRequest: which takes an instance of the FLRFetchRequest and:
3) Queues your network request based on the fetch request, passing along the retained fetch request to be processed again when the network request is finished.
4) Executes the fetch request against your Core Data cache and immediately returns the results.
Now when the network request finishes, update your Core Data cache with the network data, run the fetch request again against the cache, and this time pull the block from the FLRFetchRequest and pass the results of this second fetch into it, completing the second phase.
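Here is a rough Swift sketch of that shape. Rather than subclassing NSFetchRequest (awkward with its Swift generics), this wraps the request together with its callback; Photo is a hypothetical NSManagedObject subclass, and the network call is a placeholder:

```swift
import CoreData

final class Photo: NSManagedObject {} // hypothetical entity

/// A fetch request paired with the block to run once network data lands.
final class FLRFetchRequest {
    let fetchRequest: NSFetchRequest<Photo>
    let onNetworkRefresh: ([Photo]) -> Void

    init(fetchRequest: NSFetchRequest<Photo>,
         onNetworkRefresh: @escaping ([Photo]) -> Void) {
        self.fetchRequest = fetchRequest
        self.onNetworkRefresh = onNetworkRefresh
    }
}

final class FLRPhotoManager {
    private let context: NSManagedObjectContext
    init(context: NSManagedObjectContext) { self.context = context }

    /// Phase 1: synchronously returns whatever the local cache holds.
    /// Phase 2: after the network call merges new data into Core Data,
    /// runs the same fetch again and hands the results to the callback.
    func execute(_ request: FLRFetchRequest) -> [Photo] {
        fetchFromNetworkAndMerge {
            let refreshed = (try? self.context.fetch(request.fetchRequest)) ?? []
            request.onNetworkRefresh(refreshed)
        }
        return (try? context.fetch(request.fetchRequest)) ?? []
    }

    private func fetchFromNetworkAndMerge(completion: @escaping () -> Void) {
        // Placeholder: call the web API, insert/update Photo objects,
        // save the context, then invoke completion on the context's queue.
        DispatchQueue.main.async(execute: completion)
    }
}
```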
This is the best pattern I have come up with, but like you, I'm interested in other's opinions.
It seems to me that your first instincts are right: you should use fetch requests to update your existing store. The approach I used for an importer was the following: get a list of all the files that are eligible for importing and store it somewhere. I'm assuming here that getting that list is fast and lightweight (just a name and a URL or unique id), but that really importing something will take a bit more time and effort, and the user may quit the program or want to do something else before all the importing is done.
Then, on a separate background thread (this is not as hard as it sounds thanks to NSRunLoop and NSTimer; google "Core Data: Efficiently Importing Data"), take the first item from that list, get the object from Flickr or wherever, and search for it in the Core Data database (carefully read Apple's Predicate Programming Guide on setting up efficient, cached NSFetchRequests). If the remote object already lives in Core Data, update the information as necessary; if not, insert it. When that is done, remove the item from the to-be-imported list and move on to the next one.
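The update-or-insert step from that loop might look roughly like this; the Photo entity, its remoteID attribute, and RemotePhoto (whatever your API response decodes to) are all assumptions:

```swift
import CoreData

struct RemotePhoto { // stand-in for one decoded API record
    let id: String
    let title: String
}

/// Updates the Photo whose remoteID matches, or inserts a fresh one.
func importPhoto(_ remote: RemotePhoto, into context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Photo")
    request.predicate = NSPredicate(format: "remoteID == %@", remote.id)
    request.fetchLimit = 1 // cheap existence check

    let photo = try context.fetch(request).first
        ?? NSEntityDescription.insertNewObject(forEntityName: "Photo", into: context)
    photo.setValue(remote.id, forKey: "remoteID")
    photo.setValue(remote.title, forKey: "title")
}
```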
As for the problem of objects that have been deleted in the remote store, there are two solutions: periodic syncing or lazy, on-demand syncing. Does importing a photo from Flickr mean importing the original thing and all its metadata (I don't know what the policy is regarding ownership etc) or do you just want to import a thumbnail and some info?
If you store everything locally, you could just run a check every few days or weeks to see if everything in your local store is present remotely as well: if not, the user may decide to keep the photo anyway or delete it.
If you only store thumbnails or previews, then you will need to connect to Flickr each time the user wants to see the full picture. If it has been deleted, you can then inform the user and delete it locally as well, or mark it as not being accessible any more.
For a situation like this you could use Cocoa's archiving facilities to save the photo objects (and an index) to disk between sessions, and just overwrite it all every time the app calls home to Flickr.
But since you're already using Core Data, and like the features it provides, why not modify your data model to include a "source" or "callType" attribute? At the moment you're implicitly creating a bunch of objects with source "Flickr API", but you can just as easily treat the different API calls as unique sources and then store that explicitly.
To handle deletion, the simplest way would be to clear the data store each time it's refreshed. Otherwise you'd need to iterate over everything and only delete the photo objects with filenames that weren't included in the new results, as sketched below.
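That iterate-and-delete variant reduces to a set difference; a sketch, again assuming a Photo entity with a filename attribute:

```swift
import CoreData

/// Deletes Photo objects whose filenames no longer appear in the latest results.
func pruneDeletedPhotos(currentFilenames: Set<String>,
                        in context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Photo")
    request.predicate = NSPredicate(format: "NOT (filename IN %@)", currentFilenames)
    for stale in try context.fetch(request) {
        context.delete(stale)
    }
    try context.save()
}
```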
I'm planning to do something similar to this myself so I hope this helps.
PS: If you're not storing the photo objects between sessions at all, you could just use two different contexts and query them separately. As long as they're never saved, and the central store doesn't have anything in it already, it would work just like you describe.