Downloading multiple items in a package - iPhone

I need to allow users to download multiple images in a single download. This download will include an SQL file and images. Once the download completes, the SQL will execute, inserting text into an SQLite database. This text will include references to the downloaded images. The text and images are rendered in a UIWebView.
What is the best way to download everything in a single download? I was thinking of using a bundle, since it can be loaded at runtime, but I am not sure of any limitations/restrictions in this scenario. I have tested putting the bundle into the Documents folder and then accessing resources inside of it. That seems to work fine in a simple test.
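Roughly, the simple test I ran looks like this (a minimal sketch; the bundle and resource names are placeholders):

```swift
import Foundation

// Locate the downloaded bundle inside the app's Documents directory.
let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let bundleURL = documents.appendingPathComponent("Content.bundle")

// Load it at runtime and pull a resource out of it.
if let bundle = Bundle(url: bundleURL),
   let imageURL = bundle.url(forResource: "photo1", withExtension: "png") {
    let imageData = try? Data(contentsOf: imageURL)
    print("Loaded \(imageData?.count ?? 0) bytes from the downloaded bundle")
}
```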

You're downloading everything through a socket, which only knows about bytes, so a bundle, or even a file, doesn't "naturally" transfer through: the server side opens the files, encodes them, and sends them into the connection, and the client reads from the socket and reconstructs the original file structure.
Assuming the application has a UI for picking which items need to be transferred, it could request all of the items from the server, and the server could send them through the single connection with some delimitation you invent, so that the iPhone app can split the stream back into the individual files.
Another option is that the client could just perform individual HTTP requests for the different files, through pretty straightforward use of NSURLConnection.
The former sounds like an attempt to optimize the latter. Have you already tested and verified that the latter is too slow or inefficient? The former is definitely more complex to implement.
There is a latency cost to multiple HTTP connections run in sequence; however, you can mitigate it by running several download connections in parallel -- for example, through an NSOperationQueue limited to 2 to 5 concurrent download operations.
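As a rough illustration, a minimal sketch of that approach using an OperationQueue with a small concurrency limit (URLSession stands in here for the older NSURLConnection, and the item URLs are placeholders):

```swift
import Foundation

// Hypothetical list of items the user picked; each is fetched with its own request.
let fileURLs = [
    URL(string: "https://example.com/items/content.sql")!,
    URL(string: "https://example.com/items/image1.png")!,
    URL(string: "https://example.com/items/image2.png")!,
]

let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]

// Limit parallelism to a handful of downloads, as suggested above.
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4

for remote in fileURLs {
    queue.addOperation {
        // Each operation blocks until its own download finishes, so the
        // queue's concurrency limit controls how many run at once.
        let done = DispatchSemaphore(value: 0)
        URLSession.shared.downloadTask(with: remote) { tempURL, _, error in
            defer { done.signal() }
            guard let tempURL = tempURL, error == nil else { return }
            let destination = documents.appendingPathComponent(remote.lastPathComponent)
            try? FileManager.default.removeItem(at: destination)
            try? FileManager.default.moveItem(at: tempURL, to: destination)
        }.resume()
        done.wait()
    }
}

// Block (off the main thread) until every download has completed.
queue.waitUntilAllOperationsAreFinished()
```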

SwiftyDropbox batchUploadFiles vs upload

I am uploading a large number of photos and annotations to Dropbox using the SwiftyDropbox SDK. I want to update the UI to reflect the upload status of each item, which is stored in Core Data. My understanding of batchUpload is that you pass it an array of URLs and it uploads them asynchronously. I would like to use batch upload, but I am not sure how to tell when a certain item is finished with batchUpload, since it is operating on an array of URLs. Is there a way that I can use batchUpload, versus just iterating over the array with the upload function?
It seems that upload would be the correct solution, as I can just add each item to a background thread asynchronously and update each one as it finishes. I am looking for arguments to persuade me either way.
The batchUploadFiles method in the SwiftyDropbox SDK is advantageous in that it only has to take one lock to upload the entire batch of files. It only calls the response block once the entire batch is done, though, and the files are committed as a batch, so you wouldn't see individual uploads completing one by one. You instead get the results for all of them together at the end.
If you do need to see individual file uploads complete one by one for whatever reason, you would need to use individual upload calls, but that has the disadvantage of not batching the uploads, so you're more likely to run into lock contention.
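If you go the individual-upload route, a minimal sketch looks something like the following (the Dropbox paths and the markUploaded helper are hypothetical stand-ins for your own data model; check the exact upload signature against your SwiftyDropbox version):

```swift
import SwiftyDropbox

// Placeholder for updating the Core Data record / UI for one finished file.
func markUploaded(_ dropboxPath: String) { /* update status here */ }

func uploadIndividually(localFiles: [URL], client: DropboxClient) {
    for localURL in localFiles {
        let dropboxPath = "/photos/\(localURL.lastPathComponent)"
        // One request per file, so each file gets its own completion callback.
        client.files.upload(path: dropboxPath, input: localURL)
            .response { metadata, error in
                if let metadata = metadata {
                    markUploaded(metadata.name)   // this item is done; reflect it in the UI
                } else if let error = error {
                    print("Upload failed for \(dropboxPath): \(error)")
                }
            }
    }
}
```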

Stream processing

I have a requirement that when a user uploads a file, it should work in the following manner:
1) A file upload dialog (in the browser) is presented to the user. The user picks a file.
2) The application should load only the first x records (e.g. if there are 100 records in total, get the first 10), and the user will have a chance to do a visual review of the records (read-only view).
3) The user then decides one of two things: click "Submit", which takes in all the data and streams it to the server, or click "Next" to review the next 10 records, and so on.
Is scalaz-stream a good fit as an overall solution, and in particular for doing 2) and 3) above: getting only partial data, pausing the stream, then continuing to consume it, and repeating the process?
No, scalaz-stream is not a good idea. The Play! framework has its own streaming primitives -- the Enumerator, Enumeratee, and Iteratee classes -- which can be used for asynchronous processing of streams, and the file upload code is already built to use them.
You have two options:
One, use HTML5 and front-end JavaScript to get access to the file. This will only work in the newest browsers. It is the only option if you don't want any of the file uploaded until the user chooses "Submit".
Two, incrementally parse the upload as it arrives at the server using the Enumerator framework, and respond over long-polling AJAX/Comet/WebSocket to the front-end JavaScript with a subset of records as they are parsed. The Iteratee that is parsing the incoming upload would have to pause and wait for further input from the front-end. This solution would be complex and would suffer from browser timeout issues.
Neither of these is a very good idea. It would be much simpler to have the entire file upload all at once, have the parsed records fed back to the front-end afterwards, and have the "Submit" button actually function as a "Save" button that tells the server to keep the received upload. Unless you are shoving 100 MiB+ Excel files up a mobile connection, this is likely the easiest and most compatible solution.

Synchronize Directory of Files Between Server and iOS Application

I am building an internal iOS application (so - it won't ever be in the app store), and I need to keep a directory of content synchronized between a server and each of the instances of the iOS application. This would be easy enough if I just wanted to delete and re-download this content each time, but I would rather use something similar to rsync to only download the elements that have changed.
I haven't found any good way to utilize rsync. I considered Objective-Git as a possibility here, but at a quick glance it looked like a lot of the remote-repository functionality isn't supported yet.
As a final note, while this won't be in the app store, I will not be jailbreaking these devices, and I would prefer not to rely on any private APIs (although if there were an elegant solution that used private APIs, I might consider it).
Thoughts?
ADDITIONAL NOTE: This needs to be an isolated solution. I won't be relying on outside services (like Dropbox, Box.net, etc...). This needs to work solely between the device and the server (which is on a local network with the device).
Use HTTP to list the contents of each folder on the server.
Compare the last modification time of each file with the one on the device, and identify added, removed, and modified files.
Get added and modified files, remove deleted files.
It sounds like you're perhaps asking for a library that already does this, but if you don't find one, it's moderately easy to write from the ground up using stat(2) on the server and the same (or a higher-level equivalent) on the iOS devices. Have the iPhone send a tree of files with their modification dates to the server and get back a list of insert/delete/update operations, with the URL (or whatever) for each one, so you can perform them incrementally on a background thread. Have the information from the server for new/updated files include the modification date the server has, so you can set the same date on the iOS device and send it back when asking the server for the status of each file (it's a bit of a hack to use the file system to store that, but it works).
Why not just set up a RESTful interface and do it over HTTP? That way you could query the modification times easily enough to determine whether client or server files need to be updated. You might also want to keep track of which files on the client have been synced, so you can easily know which files to add or delete. This can be done with a simple .sync file, a plist, SQLite, etc.
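As a rough sketch of that comparison on the device side, assuming a hypothetical JSON manifest served over HTTP (the manifest format, field names, and function names below are made up):

```swift
import Foundation

// One entry per server-side file, e.g. {"path": "articles/1.html", "modified": 1672531200}
struct ManifestEntry: Decodable {
    let path: String
    let modified: TimeInterval   // Unix timestamp reported by the server
}

// Compares the server manifest against what is already in the local directory
// and returns what to download and what to delete.
func planSync(manifest: [ManifestEntry], localRoot: URL) -> (download: [String], delete: [String]) {
    let fm = FileManager.default
    var download: [String] = []
    var serverPaths = Set<String>()

    for entry in manifest {
        serverPaths.insert(entry.path)
        let localPath = localRoot.appendingPathComponent(entry.path).path
        let localDate = (try? fm.attributesOfItem(atPath: localPath))?[.modificationDate] as? Date
        // Fetch anything missing locally or older than the server's copy.
        if localDate == nil || localDate!.timeIntervalSince1970 < entry.modified {
            download.append(entry.path)
        }
    }

    // Anything on the device that the server no longer lists gets removed.
    let localFiles = (try? fm.subpathsOfDirectory(atPath: localRoot.path)) ?? []
    let delete = localFiles.filter { sub in
        var isDirectory: ObjCBool = false
        fm.fileExists(atPath: localRoot.appendingPathComponent(sub).path, isDirectory: &isDirectory)
        return !isDirectory.boolValue && !serverPaths.contains(sub)
    }
    return (download, delete)
}
```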
If you'll consider FTP, there are some pretty advanced client libraries available.
For example, the iOS Chilkat bundle includes an FTP client library that supports synchronization in both directions. It's not free, but it's pretty cheap -- and you get a ton of other stuff that will likely prove useful someday. Here's an example of iOS pulling down all additions and changes (mode 2):
http://www.example-code.com/ios/ftp_syncLocalTree.asp
One caveat -- judging solely from the example, it doesn't appear to synchronize deletions. If this is a requirement, you could do it yourself without too much effort immediately following a sync.
acrosync (see https://acrosync.com/library.html) seems like a good fit given the initial question; however, I haven't used it myself yet.

Is there any way to allow failed uploads to resume with a Perl CGI script?

The application is simple: an HTML form that posts to a Perl script. The problem is that our customers sometimes upload very large files (> 500 MB) and their internet connections can be unreliable at times.
Is there any way to resume a failed transfer like in WinSCP or is this something that can't be done without support for it in the client?
AFAIK, it must be supported by the client. Basically, the client and the server need to negotiate which parts of the file (likely defined as parts in a "multipart/form-data" POST) have already been uploaded, and then the server code needs to be able to merge the newly uploaded data with the existing data.
The best solution is to have custom uploader code, usually implemented in Java, though I think this may be possible in Flash as well. You might even be able to do this via JavaScript -- see the two examples below.
Here's an example of how Google did it with YouTube: http://code.google.com/apis/youtube/2.0/developers_guide_protocol_resumable_uploads.html
It uses the "308 Resume Incomplete" HTTP response, in which the server sends a Range: bytes=0-408 header to indicate what has already been uploaded.
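A minimal client-side sketch of that negotiation, loosely following the Google-style protocol above (the upload URL and the exact status-check convention are assumptions; the server, in Perl or otherwise, would need matching logic):

```swift
import Foundation

func resumeUpload(file: URL, uploadURL: URL, totalSize: Int, session: URLSession = .shared) {
    // 1. Ask the server how much it already has: an empty PUT with
    //    "Content-Range: bytes */<total>" is answered with 308 plus a
    //    "Range: bytes=0-<lastByteReceived>" header.
    var statusRequest = URLRequest(url: uploadURL)
    statusRequest.httpMethod = "PUT"
    statusRequest.setValue("bytes */\(totalSize)", forHTTPHeaderField: "Content-Range")

    session.dataTask(with: statusRequest) { _, response, _ in
        guard let http = response as? HTTPURLResponse, http.statusCode == 308,
              let range = http.value(forHTTPHeaderField: "Range"),           // e.g. "bytes=0-408"
              let lastByte = range.split(separator: "-").last.flatMap({ Int($0) })
        else { return }

        // 2. Send only the bytes the server has not seen yet.
        let offset = lastByte + 1
        guard let handle = try? FileHandle(forReadingFrom: file) else { return }
        handle.seek(toFileOffset: UInt64(offset))
        let remaining = handle.readDataToEndOfFile()

        var resumeRequest = URLRequest(url: uploadURL)
        resumeRequest.httpMethod = "PUT"
        resumeRequest.setValue("bytes \(offset)-\(totalSize - 1)/\(totalSize)",
                               forHTTPHeaderField: "Content-Range")
        session.uploadTask(with: resumeRequest, from: remaining).resume()
    }.resume()
}
```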
For additional ideas on the topic:
http://code.google.com/p/gears/wiki/ResumableHttpRequestsProposal
Someone implemented this using Google Gears on the client side and PHP on the server side (the latter you can easily port to Perl):
http://michaelshadle.com/2008/11/26/updates-on-the-http-file-upload-front/
http://michaelshadle.com/2008/12/03/updates-on-the-http-file-upload-front-part-2/
It's a shame that your clients can't use FTP uploading, since it already includes capabilities like that. There is also "chunked transfer encoding" in HTTP, although I don't know which Perl modules might support it already.

Lightweight HTTP application/server for static content

I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading. So I only need support for GET and PUT operations.
However, there are a few extra features that I need:
Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.
Support for signed access keys: Access to resources via PUT should be signed using a key like http://uri/?key=foo. The key contains information about the request, such as md5(user + path + secret), which allows me to block unwanted requests; the application/server should allow me to check for this (see the sketch after this list).
Performance: I'd like to avoid piping content as much as possible. Otherwise the whole application could be implemented in Perl/etc. in a few lines as CGI.
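For concreteness, the signed-key check I have in mind looks roughly like this (a sketch only; the function names are illustrative, and a production check should use a constant-time comparison):

```swift
import Foundation
import CryptoKit

// Recompute the key the same way the client built it: md5(user + path + secret).
func expectedKey(user: String, path: String, secret: String) -> String {
    let digest = Insecure.MD5.hash(data: Data((user + path + secret).utf8))
    return digest.map { String(format: "%02x", $0) }.joined()
}

// The request handler compares the "key" query parameter against the
// recomputed value and rejects the PUT if they differ.
func isAuthorized(requestKey: String, user: String, path: String, secret: String) -> Bool {
    return requestKey == expectedKey(user: user, path: path, secret: secret)
}
```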
Perlbal (in webserver mode) looks nice; however, the single-threaded model does not fit my database lookups, and it also does not support query strings.
Lighttpd/nginx/… have some modules for these tasks; however, it is not feasible to put everything together without ending up writing my own extensions/modules.
So how would you solve this? Are there other lightweight webservers available for this?
Should I implement an application inside of a webserver (i.e. CGI)? If so, how can I avoid or speed up piping content between the webserver and my application?
Thanks in advance!
Have a look at nodejs http://nodejs.org/
There are a few modules for static web servers and database interfaces:
http://wiki.github.com/ry/node/modules
You might have to write your own file upload handler, or use one from this example http://www.componentix.com/blog/13
nginx + spawn-fcgi + an FCGI application written in C + memcached + SQLite serves a similar task well; latency is about 20-30 ms for small payloads over fast connections on the same local network. As far as I know, the production server handles about 100-150 requests per second with no problem. On a test server I peaked at about 20k requests per second, again with no problem, with an average latency of about 60 ms. Aggressive caching and UNIX domain sockets are the key.
I don't know how that configuration will behave with frequent PUT requests; in our task they are very rare and typically batched.