Extra data usage vs. having a local DB (cache) in an app - iPhone

I'm working on an app where all the content/data will come in via JSON, and occasionally I will display an HTML page.
The client is suggesting that maybe we should have a local database (SQLite) to cache the JSON data returned, so that we use less of the user's data allowance (if they search for the same item again) and because it may be slightly faster.
Are these good enough reasons for adding the extra complexity and potential problems of having a local DB on the phone?
From my experience, I didn't think the phone was particularly slow, or that JSON or HTML were heavy in their data usage. I'd prefer having a thin client.
Facebook/Twitter/etc. work with very few problems using JSON and HTML.
Would I be wrong to try to steer away from the local DB idea?
Thanks,
-Code

Caching URL request results can improve your application's latency over a slow connection. You could use Core Data to manually manage a cache (key: URL, value: the request's response).
Another, more elegant solution would be (if you have write access to the web services) to implement the "If-Modified-Since" header server-side, so that the amount of data received per request is kept to a minimum.
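For illustration, here is a minimal Swift sketch of the conditional-request idea, using a plain on-disk file keyed by the URL instead of Core Data for brevity (the endpoint is a placeholder, not from the question):

import Foundation

// Hypothetical endpoint standing in for the app's JSON service.
let url = URL(string: "https://example.com/api/items")!
let cacheFile = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("items.json")
let lastModifiedKey = "items.lastModified"

var request = URLRequest(url: url)
// Bypass URLCache so the manual conditional check below is explicit.
request.cachePolicy = .reloadIgnoringLocalCacheData
// Ask the server to answer 304 Not Modified if nothing changed since the last fetch.
if let lastModified = UserDefaults.standard.string(forKey: lastModifiedKey) {
    request.setValue(lastModified, forHTTPHeaderField: "If-Modified-Since")
}

URLSession.shared.dataTask(with: request) { data, response, _ in
    guard let http = response as? HTTPURLResponse else { return }
    if http.statusCode == 304 {
        // Nothing changed on the server: reuse the cached copy, no payload transferred.
        let cached = try? Data(contentsOf: cacheFile)
        print("using cached response (\(cached?.count ?? 0) bytes)")
    } else if let data = data {
        // Fresh data: cache it and remember Last-Modified for the next request.
        try? data.write(to: cacheFile)
        if let lastModified = http.allHeaderFields["Last-Modified"] as? String {
            UserDefaults.standard.set(lastModified, forKey: lastModifiedKey)
        }
        print("downloaded \(data.count) bytes")
    }
}.resume()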

Related

Alamofire cache response offline. Swift 4

I am new to using Alamofire for Swift. I tried reading the documentation, but didn't succeed.
I am making a
Alamofire.request("http:json").responseJSON
and I discovered that it works and returns a response even when the phone is offline. If I'm not mistaken, the response is saved in the cache.
How long will this response stay in the cache for the user to use offline?
Should I store the response as a preference?
Thanks for the help.
You are right, Alamofire caches your response.
However, I don't think there's a way to know when your response will be evicted from the cache, as there are many variables for the system to consider - disk space, for example. You may use a custom caching policy if you think it's right for you.
I wouldn't count on the default caching policy to keep files around for offline usage, and implementing a custom policy feels wrong for that case. So if you really need your files offline, I would recommend using a different approach.
Take a look at URLCache - this is what Alamofire uses for response caching:
"Response Caching is handled on the system framework level by URLCache. It provides a composite in-memory and on-disk cache and lets you manipulate the sizes of both the in-memory and on-disk portions." (from the Alamofire documentation)
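If you do want to experiment with a custom setup anyway, here is roughly what sizing URLCache and making the request with an explicit cache policy looks like. This is a hedged sketch (Alamofire 4-era API assumed, and the URL is a placeholder), not the library's recommended offline solution:

import Alamofire

// Give the shared URL cache an explicit size (in-memory and on-disk portions).
URLCache.shared = URLCache(memoryCapacity: 4 * 1024 * 1024,
                           diskCapacity: 20 * 1024 * 1024,
                           diskPath: "alamofire_cache")

// Placeholder URL; the real endpoint goes here.
var request = URLRequest(url: URL(string: "https://example.com/data.json")!)
// Prefer the cached copy and only hit the network when nothing is cached.
request.cachePolicy = .returnCacheDataElseLoad

Alamofire.request(request).responseJSON { response in
    print(response.result)
}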

Using CouchDB as an interface. Is it an appropriate way?

Our devices (microscopes with cameras) produce images, plus additional information for each image.
Now a middleware supplier wants to connect these devices to a lab automation system. They have to acquire the data and we have to provide it. An astonishing thing for me was their interface suggestion - a very cryptic, token-separated format (ASTM E1394-97). Unfortunately, they can't even accommodate images in their protocol, and are aiming to get file paths.
I thought this is not an up-to-date approach. While looking for alternatives, I came across CouchDB.
So my idea was that our devices would import their data, including images, into CouchDB, and the middleware could get the data from there. It even seems that, using Mustache, we could produce the format they want (ASCII text), placing URLs as image references instead of paths.
My question is, has anyone already applied CouchDB to such a use case? It seems to be a slight misuse of CouchDB, as the main intention here is an interface, not data storage. Another point that disturbs me is that the inventor of CouchDB has moved on to another project, Couchbase. Could that mean a lack of support for CouchDB in the future?
Thank you very much for any insights and suggestions!
It's an OK use case, and we're actually using CouchDB in such a way - as proxying middleware between medical laboratory analyzers and a LIS. Some of them publish images or PDF data to shared folders, and we just load those into the related document as attachments.
Moreover, you may like to know that CouchDB is able to supervise external processes (aka os_daemons) and take care of their lifespan: restarting them if they terminate, and starting them right after you update config options through the HTTP interface. This helps with setting up ASTM client and server processes (since that protocol is different from HTTP, which is native for CouchDB) that communicate with the devices and create documents as regular CouchDB clients. In the same way you may set up daemons to monitor shared folders for specific files. And all of this is just CouchDB with a few loosely bound plugins.
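To give a concrete flavour of the attachment part: images can be pushed into a document over CouchDB's plain HTTP API. A rough Swift sketch (the database name, document id, revision and file path are all invented for this example):

import Foundation

// Hypothetical values: a local CouchDB, a "scans" database, an existing document revision.
let attachmentURL = URL(string: "http://localhost:5984/scans/sample-doc-1/image.png?rev=1-abc123")!
guard let imageData = FileManager.default.contents(atPath: "/tmp/capture.png") else {
    fatalError("no image to upload")
}

var request = URLRequest(url: attachmentURL)
request.httpMethod = "PUT"                                   // CouchDB stores attachments via PUT
request.setValue("image/png", forHTTPHeaderField: "Content-Type")
request.httpBody = imageData

URLSession.shared.dataTask(with: request) { data, _, error in
    // On success CouchDB answers with {"ok":true,"id":...,"rev":...}
    if let data = data { print(String(data: data, encoding: .utf8) ?? "") }
    if let error = error { print("upload failed: \(error)") }
}.resume()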

Is there any value in using Core Data for iPhone apps?

Can people give me examples of why they would use Core Data in an application?
I ask this because most apps are just clients to a central server where an API of some sort gives you the information you need.
In my case I'm writing a timesheet application for a web app which has an API, and I'm debating whether there is any value in replicating the data structure on my server in Core Data (SQLite),
e.g.
Project has many timesheets
Employee has many timesheets
It seems to me that I can just connect to the API on every call for lists of projects or existing timesheets for example.
I realize that for some kind of offline mode you could store data locally in Core Data, but this creates way more problems, because you now have a big problem with syncing that data back to the web server when you get a connection again - e.g. the project selected for a timesheet no longer exists.
Can any experienced developers shed some light on their experiences of when Core Data is the best-practice approach?
EDIT
I realise, of course, that there is value in local persistence, but the key-value store of user defaults seems to cover most applications I can think of.
You shouldn't think of Core Data simply as an SQLite database. It's not JUST an SQLite database. Sure, SQLite is an option, but there are other options as well, such as in-memory stores and, as of iOS 5, a whole slew of custom data stores.
The biggest benefit with Core Data is persistence, obviously. But even if you are using an in-memory data store, you get the benefits of a very well-structured object graph, and all of the heavy lifting of pulling information out of, or putting information into, the data store is handled by Core Data for you, without you necessarily needing to concern yourself with what is backing that data store.
Sure, today you don't care too much about persistence, so you could use an in-memory data store. What happens if tomorrow, or in a month, or a year, you decide to add a feature that would really benefit from persistence? With Core Data, you simply change or add a persistent data store, and all of your methods to get information in or out remain unchanged. The overhead for that sort of addition is minimal compared to accessing SQLite or some other data store directly. IMHO, that's the biggest benefit: abstraction. And, in essence, abstraction is one of the most powerful things behind OOP.
Granted, building the data model just for in-memory storage could be overkill for your app, depending on how involved the app is. But, just as a side note, you may want to consider what is faster: requesting information from your web service every time you want to perform some action, or requesting the information once, storing it in memory, and acting on that stored value for the remainder of the session. An in-memory data store wouldn't persist beyond that particular session.
Additionally, with Core Data you get a lot of other great features, like saving, fetching, and undo/redo.
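To make the abstraction point concrete: with the modern NSPersistentContainer API (newer than this answer, so take it as an illustrative sketch with a made-up model name), switching between an in-memory store and the SQLite-backed default is a one-line change, and none of the fetch or save code has to move:

import CoreData

// "Timesheets" is a hypothetical .xcdatamodeld name.
let container = NSPersistentContainer(name: "Timesheets")

// Flip this one line to NSSQLiteStoreType (the default) and the data persists to disk;
// the rest of the code, fetches and saves included, stays exactly the same.
container.persistentStoreDescriptions.first?.type = NSInMemoryStoreType

container.loadPersistentStores { _, error in
    if let error = error { fatalError("store failed to load: \(error)") }
}

let context = container.viewContext
// Fetching looks identical regardless of which store backs the container.
let request = NSFetchRequest<NSManagedObject>(entityName: "Timesheet")
let timesheets = try? context.fetch(request)
print(timesheets?.count ?? 0)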
There are basically two kinds of apps. Those that provide you with local functionality (games, professional applications, navigation systems...) and those that grant access to a remote service.
Your app seems to be in the second category. If you access remote services, your users will want to access new or real-time data (you don't want to read two-week-old Facebook posts), but in some cases local caching makes sense (e.g. reading your mail when you're on a train with an unstable network).
I assume that the value of accessing cached entries when not connected to a network is pretty low for your customers (internal or external) compared to the importance of accessing real-time data. So local storage might not be necessary at all.
If you don't have hundreds of entries in your timesheet, "normal" serialization (the NSCoding protocol) might be enough. If you only access some "dashboard" data, you will be able to get along with simple request/response caching (NSURLCache can do a lot of things...).
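As a rough illustration of that "normal serialization" route, here is the same idea expressed with Swift's Codable rather than NSCoding (a hedged modern equivalent; the record shape and file name are invented for this example):

import Foundation

// Hypothetical record shape for the timesheet app.
struct TimesheetEntry: Codable {
    let id: Int
    let project: String
    let hours: Double
}

let storeURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("timesheets.json")

// Persist the whole (small) list in one go after each refresh from the API.
func save(_ entries: [TimesheetEntry]) throws {
    let data = try JSONEncoder().encode(entries)
    try data.write(to: storeURL, options: .atomic)
}

// Load whatever was stored last time, e.g. when launching without a network.
func loadEntries() -> [TimesheetEntry] {
    guard let data = try? Data(contentsOf: storeURL) else { return [] }
    return (try? JSONDecoder().decode([TimesheetEntry].self, from: data)) ?? []
}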
Core Data does make more sense if you have complex data structures which should be synchronized with a server. This adds a lot of synchronization logic to your project, as well as complexity from the Core Data integration (concurrency, thread safety, in-app conflicts...).
If you want to create a "client" app with a server-driven user experience, local storage is not necessary at all, so my suggestion is: keep it as simple as possible unless there is a real need for offline storage.
It's ideal if you want to store data locally on the phone.
Seriously though, if you can't see a need for it for your timesheet app, then don't worry about it and don't use it.
Solving the sync problems that you would have with an "offline" mode comes down to the design of your app. For example - don't allow projects to be deleted. Why would you? Wouldn't you want to go back in time and look at previous data for particular projects? Instead, just put a marker on the project to show it as inactive, plus a date/time that it was made inactive. If the data being synced from the device is for that project and is from before the date/time it was marked as inactive, then it's fine to sync. Otherwise, display a message and the user will have to sort it out.
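That rule is easy to express in code; a tiny hedged sketch (the types are invented purely for illustration):

import Foundation

// Hypothetical shapes: a project that may have been retired server-side,
// and a timesheet entry recorded on the device while offline.
struct Project { let id: Int; let inactiveSince: Date? }
struct Entry   { let projectID: Int; let recordedAt: Date }

// An entry is safe to sync if its project is still active, or if it was
// recorded before the project was marked inactive.
func canSync(_ entry: Entry, against project: Project) -> Bool {
    guard let inactiveSince = project.inactiveSince else { return true }
    return entry.recordedAt < inactiveSince
}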
Whether you need to store some data locally or not depends purely on your application's design - whether it is a real standalone application or just a thin GUI client around your web service. Apart from an "offline" mode, the other reason to cache server data on the client side might be to take traffic load off your server. Just think about what it means for your server to send the whole timesheet data to the client every time, versus sending just the changes. Yes, it means more implementation on both sides, but in some cases it has serious advantages.
EDIT: example added
Say you have 1000 records per user in your timesheet application and one record is roughly 1 KB. In this case, every time a user starts your application, it has to fetch ~1 MB of data from your server. If you cache the data locally, the server can tell you that, let's say, two records were updated since your last sync, so you only have to download 2 KB. Now scale this up to several tens of thousands of users and you will immediately notice the difference in server bandwidth and CPU usage.
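In code terms, the saving comes from asking only for changes. A hypothetical "updated since" endpoint could be called like this (the URL and parameter name are assumptions, not part of the original answer):

import Foundation

// Timestamp of the last successful sync, kept on the device.
let lastSync = UserDefaults.standard.string(forKey: "lastSync") ?? "1970-01-01T00:00:00Z"

// Hypothetical endpoint that returns only records changed after the given time.
var components = URLComponents(string: "https://example.com/api/timesheets")!
components.queryItems = [URLQueryItem(name: "updatedSince", value: lastSync)]

URLSession.shared.dataTask(with: components.url!) { data, _, _ in
    guard let data = data else { return }
    // Instead of ~1 MB for all 1000 records, only the changed ones come back.
    print("delta payload: \(data.count) bytes")
    // ...merge the changed records into the local store, then update lastSync...
}.resume()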

Best way to store dynamic data on iOS App from Web Service

I want to know what is the best way to store data on the iPhone from a web service.
I want the information to be stored on the device so the person doesn't need to access the web service every time he/she needs it. The current information isn't much and contains fewer than 150 records. The records might be updated from time to time and a few new ones will be added. What is the best way to go about storing the data?
Thanks
If you use ASIHTTPRequest for your network stuff (and if you don't already, I can't sing its praises highly enough), you will find it has a cache layer built in which is perfect for situations like this.
You can activate it with a simple one-liner:
[ASIHTTPRequest setDefaultCache:[ASIDownloadCache sharedCache]];
And you have full control over the cache policy etc - just read the documentation.
The other simple approach, of course, is - on the assumption that your web service is returning JSON or XML - simply to store the response in a local file against a hash of the request parameters. Then, when you request the data again, you can first look to see if the file exists, and if it does, return that data rather than going back to the web service. You can roll your own cache policies, etc., too.
Since I discovered ASIHTTPRequest had a cache though, I've not needed to roll my own again.
I find that using Core Data or SQLite is just overkill for 99% of my requirements, and a simple cache works very well.
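A bare-bones version of that roll-your-own file cache might look like this in Swift (a sketch only: a SHA-256 of the URL string stands in for "a hash of the request parameters", and CryptoKit assumes iOS 13+):

import Foundation
import CryptoKit

let cacheDir = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]

// File name derived from the request, so the same request maps to the same file.
func cacheFile(for url: URL) -> URL {
    let digest = SHA256.hash(data: Data(url.absoluteString.utf8))
    let name = digest.map { String(format: "%02x", $0) }.joined()
    return cacheDir.appendingPathComponent(name)
}

// Return the cached response if we have one; otherwise fetch, store and return it.
func fetch(_ url: URL, completion: @escaping (Data?) -> Void) {
    let file = cacheFile(for: url)
    if let cached = try? Data(contentsOf: file) {
        completion(cached)                       // cache hit: no network at all
        return
    }
    URLSession.shared.dataTask(with: url) { data, _, _ in
        if let data = data { try? data.write(to: file) }
        completion(data)
    }.resume()
}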
If the data is relational, an SQLite database would be the best storage option you have.
Also, this helps by allowing you to retrieve from the server and to update only the records that have changed, thus saving time and bandwidth.
This is the best option from a scalability point of view as well, as you stated that the "current information isn't much", giving the impression that this is only the current situation, which may be subject to further change, probably with more records being added over time.
SQLite also gives you more control and better performance than using, for instance, Core Data. Here's an article explaining some of the details. Moreover, if you work through an Objective-C wrapper, such as FMDB, you get all the advantages without managing the complexity yourself.
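For reference, going through FMDB (here from Swift) keeps the SQLite plumbing short. A rough sketch, with table and column names invented for this example:

import FMDB

let dbPath = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("records.sqlite").path
let db = FMDatabase(path: dbPath)

guard db.open() else { fatalError("could not open database") }

// Create the table once, then upsert only the records the server says have changed.
try? db.executeUpdate(
    "CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)",
    values: nil)
try? db.executeUpdate(
    "INSERT OR REPLACE INTO records (id, name, updated_at) VALUES (?, ?, ?)",
    values: [1, "Sample record", "2017-01-01T00:00:00Z"])

// Reading back is equally direct.
if let rs = try? db.executeQuery("SELECT name FROM records", values: nil) {
    while rs.next() { print(rs.string(forColumn: "name") ?? "") }
}
db.close()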

Single request to multiple asynchronous responses

So, here's the problem. iPhones are awesome, but bandwidth and latency are serious issues with apps that have server-side requirements. My initial plan to solve this was to make multiple requests for bits of data (pun unintended) and have that be how the issue of lots of incoming/outgoing data was handled. This is a bad idea for a lot of reasons, the most obvious to me being that my poor database (MySQL) can't handle it very well. From what I understand, it's better to request large chunks all at once, especially if I'm going to ask for all of it anyway.
The problem is that now I'm waiting again for a large amount of data to come through. I was wondering if there's a way to basically send the server a bunch of IDs to fetch from the database, and have that SINGLE request come back as a lot of little responses, each one containing all the information about a single DB entry. Order is irrelevant, and ideally I'd be able to send another request to the server telling it to stop sending me things because I have what I need.
I realize this is probably NOT a simple thing to do so if you (awesome) guys could point me in the right direction that would also be incredible.
Current system is iPhone (Cocoa/Objective-C) -> PHP -> MySQL
Thanks a ton in advance.
AFAIK, a single request cannot get multiple responses. From what you are asking, it seems that you need to do this in two parts.
Part 1: Send a single call with the IDs.
Your server responds with a single message that contains the URLs or the information needed to call the unique "smaller" answers.
Part 2: Working from that list of responses, fire off multiple requests that run on their own threads.
I am thinking of this as similar to how a web page works. You call the HTML URL in a web browser. The HTML tells the browser all the places/URLs it needs to get additional pieces (images, CSS, JS, etc.) to build the full page.
Hope this helps.
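A condensed Swift sketch of that two-part idea (all URLs are placeholders standing in for your PHP scripts, and the response shape is assumed to be a JSON array of per-item URLs):

import Foundation

// Part 1: one request carrying all the IDs, answered by a list of per-item URLs.
var listRequest = URLRequest(url: URL(string: "https://example.com/api/list.php")!)
listRequest.httpMethod = "POST"
listRequest.httpBody = try? JSONSerialization.data(withJSONObject: ["ids": [1, 2, 3, 4]])

var itemTasks: [URLSessionDataTask] = []   // a real app would guard this with a serial queue

URLSession.shared.dataTask(with: listRequest) { data, _, _ in
    guard let data = data,
          let urls = try? JSONDecoder().decode([URL].self, from: data) else { return }

    // Part 2: one small request per item, all running concurrently.
    for url in urls {
        let task = URLSession.shared.dataTask(with: url) { itemData, _, _ in
            if let itemData = itemData { print("got item, \(itemData.count) bytes") }
        }
        itemTasks.append(task)
        task.resume()
    }
}.resume()

// "Stop sending me things": cancel whatever is still outstanding.
func stopFetching() { itemTasks.forEach { $0.cancel() } }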