I'm just reading about cache store modes in JPA, and I see the following:
If data is already in the cache, setting the store mode to USE will not force a refresh when data is read from the database.
This seems really strange to me, because I don't understand why it would be good not to update the cache when the database contains a different state of the entity, and I'm also curious in which cases the cache will be updated and in which it won't.
Thank You in advance.
First of all, I'm new to iOS/Swift...
I need my app to have an offline mode.
I'm using Alamofire for all networking: getting JSON, converting it to objects, and saving them into the DB (Core Data). I wanted to know whether I need an additional cache in between (like Haneke or DataCache) for the case where there is no internet connection, or whether getting the data from Core Data is enough.
Are DB requests fast/convenient enough?
Core Data is very fast (if used correctly). I don't believe it would be necessary to have an additional cache layer.
It would just be a duplication of data that you already have stored in your DB.
That said, it all depends on your project's use cases. I would not rely on temporary cached data if my app had to work without an internet connection.
To give you an idea of Core Data performance so you can choose what works best for you: https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/CoreData/Performance.html
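If it helps, here is a minimal sketch of what reading the already-saved objects back from Core Data looks like when the device is offline (the "Article" entity, its attributes, and the sort key are assumptions, not something from your project):

import CoreData

// Hypothetical entity "Article"; adjust the name, sort key and attributes
// to whatever your data model actually defines.
func loadArticles(in context: NSManagedObjectContext) throws -> [NSManagedObject] {
    let request = NSFetchRequest<NSManagedObject>(entityName: "Article")
    request.sortDescriptors = [NSSortDescriptor(key: "title", ascending: true)]
    request.fetchBatchSize = 50   // fault rows in lazily instead of loading everything at once
    return try context.fetch(request)
}

Run against the same context your Alamofire completion handler saves into, this is usually all the "cache" an offline read needs.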
I have a document that holds a big data structure in certain fields inside an array, and it is slowing down my application because of frequent reads of that data. I am thinking about a few solutions to implement, but I need advice before I proceed, and possibly an even better solution. Here are my thoughts/questions:
Would it help to cache the data?
Should I use memcached or Redis as a caching engine, and why?
Would it help to read single fields from this document instead of reading all of it every time?
Should I do something else?
Caching will help because it keeps your DB from being hit too often.
Memcached or Redis is up to you. I prefer Redis, but if you already have Memcached it's fine.
If you have a cluster of servers, consider whether you need a centralized cache or not.
Caching a full document won't help for getting a single field because you cache the result of a query without knowing what it contains.
Your question needs more clarification. For example, how big is the data you are speaking of: a couple of megabytes or gigabytes? All of these factors change the solution. But if we assume you have a couple of megabytes and you want to avoid calling the database every time, the best solution is a cache. How to choose a cache also depends entirely on your situation. If your web application runs on one server, you can use an in-memory cache like the ASP.NET cache, which is very fast. This cache is stored in your heap, so you can put all of your objects in it without serialization. But keep in mind that whenever your application is restarted, as happens with most deployments, the heap is thrown away and everything cached in it is cleared.
If you have more than one server, then you can start to think about an out-of-process cache, because two servers do not share heap memory; per-server in-memory caches just duplicate the data, and invalidation becomes a nightmare. An out-of-process cache is also more durable, since it does not live in the application's heap and survives restarts better than an in-memory cache. However, whatever you put in this kind of cache must be serializable, because the objects are transferred over a network connection, so you cannot put every object in it. Both Redis and Memcached can be used for this purpose. Redis is more complex, with more functionality than Memcached, but for your purpose Memcached is quite good.
Whatever caching system you choose, approach it with a broad perspective. Design caching into your application with the expectation that over time you will need to put more things in the cache, so it's better to prepare for that now.
Another very important aspect of caching is that whenever you put something in the cache, you have to decide when you are going to invalidate it.
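To illustrate the invalidation point, here is a minimal, single-threaded in-process cache sketch (written in Swift purely to show the idea; the TTL, key type, and names are arbitrary assumptions). The same pattern applies whether the store behind it is the ASP.NET cache, Memcached, or Redis:

import Foundation

// Minimal in-process cache with a TTL and explicit invalidation.
// Not thread-safe; purely a sketch of the idea.
final class SimpleCache<Value> {
    private struct Entry { let value: Value; let expires: Date }
    private var storage: [String: Entry] = [:]
    private let ttl: TimeInterval

    init(ttl: TimeInterval = 300) { self.ttl = ttl }

    func value(for key: String) -> Value? {
        guard let entry = storage[key], entry.expires > Date() else {
            storage[key] = nil           // drop stale entries lazily
            return nil
        }
        return entry.value
    }

    func set(_ value: Value, for key: String) {
        storage[key] = Entry(value: value, expires: Date().addingTimeInterval(ttl))
    }

    // Call this whenever the underlying document changes in the database,
    // otherwise readers keep seeing the old state.
    func invalidate(_ key: String) { storage[key] = nil }
}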
Whether or not caching will help depends on how the document is accessed. If the document is being accessed repeatedly, an extra cache may not help much, because of how MongoDB's own in-memory caching works: frequently accessed data is already kept in memory by the database itself.
First, you need to understand your data access patterns.
Can people give me examples of why they would use coreData in an application?
I ask this because most apps are just clients to a central server where an API of some sort gives you the information you need.
In my case I'm writing a timesheet application as a client for a web app that has an API, and I'm debating whether there is any value in replicating the server's data structure locally in Core Data (SQLite),
e.g.
Project has many timesheets
Employee has many timesheets
It seems to me that I can just connect to the API on every call for lists of projects or existing timesheets for example.
I realize that for some kind of offline mode you could store data locally in Core Data, but this creates way more problems, because you now have a big problem with syncing that data back to the web server when you get a connection again, e.g. the project selected for a timesheet no longer exists.
Can any experienced developer shed some light on their experiences of when Core Data is the best-practice approach?
EDIT
I realise of course there is value in local persistence, but the key-value store of user defaults seems to cover most applications I can think of.
You shouldn't think of Core Data simply as an SQLite database. It's not JUST an SQLite database. Sure, SQLite is an option, but there are other options as well, such as in-memory stores and, as of iOS 5, a whole slew of custom data stores. The biggest benefit of Core Data is persistence, obviously. But even if you are using an in-memory data store, you get the benefits of a very well structured object graph, and all of the heavy lifting of pulling information out of, or putting information into, the data store is handled by Core Data for you, without you necessarily needing to concern yourself with what is backing that store.
Sure, today you don't care much about persistence, so you could use an in-memory store. What happens if tomorrow, or in a month, or a year, you decide to add a feature that would really benefit from persistence? With Core Data, you simply change or add a persistent store, and all of your methods for getting information in and out remain unchanged. The overhead of that sort of addition is minimal compared to accessing SQLite or some other data store directly. IMHO, that's the biggest benefit: abstraction. And, in essence, abstraction is one of the most powerful ideas behind OOP.
Granted, building the data model just for in-memory storage could be overkill for your app, depending on how involved the app is. But, as a side note, you may want to consider what is faster: requesting information from your web service every time you want to perform some action, or requesting it once, storing it in memory, and acting on that stored value for the remainder of the session. An in-memory data store wouldn't persist beyond that particular session.
Additionally, with Core Data you get a lot of other great features, like saving, fetching, and undo/redo.
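As a rough sketch of that abstraction argument (this uses the newer NSPersistentContainer API rather than what was available at the time, and "Model" is an assumed data-model name): switching between an in-memory store and the default SQLite store is a configuration change, while the fetch and save code stays exactly the same.

import CoreData

// Swapping the backing store is a configuration change; the code that
// fetches and saves objects does not change at all.
func makeContainer(inMemory: Bool) -> NSPersistentContainer {
    let container = NSPersistentContainer(name: "Model")   // "Model" is a placeholder name
    if inMemory {
        let description = NSPersistentStoreDescription()
        description.type = NSInMemoryStoreType
        container.persistentStoreDescriptions = [description]
    }
    // With inMemory == false this falls through to the default SQLite store.
    container.loadPersistentStores { _, error in
        if let error = error { fatalError("Failed to load store: \(error)") }
    }
    return container
}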
There are basically two kinds of apps. Those that provide you with local functionality (games, professional applications, navigation systems...) and those that grant access to a remote service.
Your app seems to be in the second category. If you access remote services, your users will want new or real-time data (you don't want to read two-week-old Facebook posts), but in some cases local caching makes sense (e.g. reading your mail when you're on the train with an unstable network).
I assume that the value of accessing cached entries when not connected to a network is pretty low for your customers (internal or external) compared to the importance of accessing real-time data. So local storage might not be necessary at all.
If you don't have hundreds of entries in your timetable, "normal" serialization (the NSCoding protocol) might be enough. If you only access some "dashboard" data, you can get by with simple request/response caching (NSURLCache can do a lot of things...).
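A sketch of that simple request/response caching, using the current URLCache/URLRequest names (the capacities, disk path, and URL are made up):

import Foundation

// Give the shared cache some room; the sizes and disk path are arbitrary.
URLCache.shared = URLCache(memoryCapacity: 4 * 1024 * 1024,
                           diskCapacity: 32 * 1024 * 1024,
                           diskPath: "dashboard-cache")

// Serve a cached response when one is available, otherwise go to the network.
var request = URLRequest(url: URL(string: "https://example.com/api/dashboard")!)
request.cachePolicy = .returnCacheDataElseLoad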
Core Data makes more sense if you have complex data structures that should be synchronized with a server. This adds a lot of synchronization logic to your project, as well as complexity from the Core Data integration (concurrency, thread safety, in-app conflicts...).
If you want to create a "client" app with a server-driven user experience, local storage is not necessary at all, so my suggestion is: keep it as simple as possible unless there is a real need for offline storage.
It's ideal if you want to store data locally on the phone.
Seriously though, if you can't see a need for it for your timesheet app, then don't worry about it and don't use it.
Solving the sync problems that you would have with an "offline" mode comes down to the design of your app. For example: don't allow projects to be deleted. Why would you? Wouldn't you want to go back in time and look at previous data for particular projects? Instead, just put a marker on the project to show it as inactive, along with a date/time at which it was made inactive. If the data being synced from the device is for that project and predates the date/time it was marked inactive, then it's fine to sync. Otherwise, display a message and the user will have to sort it out.
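A small sketch of that rule (the type and property names are only illustrative):

import Foundation

// nil means the project is still active.
struct Project {
    let id: Int
    var inactiveSince: Date?
}

func canSync(entryDate: Date, to project: Project) -> Bool {
    guard let inactiveSince = project.inactiveSince else { return true }
    // Timesheet entries dated before the project was retired still sync;
    // anything later is flagged for the user to sort out.
    return entryDate < inactiveSince
}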
Whether you need to store some data locally depends purely on your application's design: whether it is a full application in its own right or a thin GUI client around your web service. Apart from an "offline" mode, the other reason to cache server data on the client side is to take traffic load off your server. Just think about what it means for your server to send the whole timesheet data set to the client every time, versus just the changes. Yes, it means more implementation work on both sides, but in some cases it has serious advantages.
EDIT: example added
Say you have 1000 records per user in your timesheet application and one record is roughly 1 KB. In that case, every time a user starts your application, it has to fetch ~1 MB of data from your server. If you cache the data locally, the server can tell you that, say, two records were updated since your last sync, so you only have to download 2 KB. Now scale this up to several tens of thousands of users and you will immediately notice the difference in server bandwidth and CPU usage.
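A sketch of what the client side of such a delta request could look like (the query parameter name and the numbers in the comment are assumptions):

import Foundation

// Ask the server only for records changed since the last successful sync.
func deltaURL(base: URL, lastSync: Date) -> URL {
    // Full refresh:  ~1000 records * 1 KB ≈ 1 MB per launch.
    // Delta refresh: 2 changed records * 1 KB ≈ 2 KB.
    var components = URLComponents(url: base, resolvingAgainstBaseURL: false)!
    components.queryItems = [
        URLQueryItem(name: "updated_since",
                     value: ISO8601DateFormatter().string(from: lastSync))
    ]
    return components.url!
}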
I want to know what is the best way to store data on the iPhone from a web service.
I want the information to be stored on the device so the person doesn't need to access the web service every time he/she needs it. Currently the information isn't much and contains fewer than 150 records. The records might be updated from time to time and a few new ones will be added. What is the best way to go about storing the data?
Thanks
If you use ASIHTTPRequest for your network stuff (and if you don't already, I can't sing its praises highly enough), you will find it has a cache layer built in which is perfect for situations like this.
You can activate it with a simple one-liner:
[ASIHTTPRequest setDefaultCache:[ASIDownloadCache sharedCache]];
And you have full control over the cache policy etc - just read the documentation.
The other simple approach of course is - on the assumption that your web service is returning JSON or XML - simply to store the response in a local file against a hash of the request parameters, then when you request the data again, you can first look to see if the file exists and if it does, return that data rather than going back to the website. You can roll your own cache policies etc too.
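A minimal Swift sketch of that roll-your-own approach (hashing the full request URL here rather than the individual parameters; the paths and the MD5 choice are just assumptions):

import CryptoKit
import Foundation

// The response body is written to a file named after a hash of the request URL.
func cacheFileURL(for request: URL) -> URL {
    let digest = Insecure.MD5.hash(data: Data(request.absoluteString.utf8))
    let name = digest.map { String(format: "%02x", $0) }.joined()
    let caches = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
    return caches.appendingPathComponent(name)
}

// Return the stored body if we have one; otherwise the caller goes to the network.
func cachedResponse(for request: URL) -> Data? {
    try? Data(contentsOf: cacheFileURL(for: request))
}

func store(_ body: Data, for request: URL) {
    try? body.write(to: cacheFileURL(for: request))
}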
Since I discovered ASIHTTPRequest had a cache though, I've not needed to roll my own again.
I find that using Core Data or SQLite3 is just overkill for 99% of my requirements, and a simple cache works very well.
If the data is relational, an SQLite3 database would be the best storage option you have.
It also helps by allowing you to retrieve from the server, and update locally, only the records that have changed, thus saving time and bandwidth.
This is the best option from a scalability point of view as well: you stated that the "current information isn't much", which gives the impression that this is only the current situation and may well change, probably towards more records being added over time.
SQLite3 also gives you more control and better performance than using, for instance, Core Data. Here's an article explaining some of the details. Moreover, if you work through an Objective-C wrapper such as FMDB, you get all the advantages without managing the complexity yourself.
I have an app with a very large Core Data database. I have versioned it many times over the past year.
The last time I versioned the database I made one simple change to an entity: I added a new optional attribute. For some reason it would not migrate using Light-weight Migration. I found out much later that this was due to a bug in Apple's Light-weight Migration code resulting from the 'renaming identifiers' that I had needed back in another versioning.
Anyway, I digress...
Because of the bug that kept me from using lightweight migration, I created a mapping file to help with the migration, not understanding that this was a much heavier process and would force my users to wait while the app loaded the entire database into memory to do the migration. It turns out that this is not really an option at all with very large databases, and many of my users were unable to migrate the database at all due to memory problems, etc.
So now I want to re-release my app and clear up this problem. The trouble is, some of my users have a database that is somehow marked as being 'in the middle of migrating'. Even with my new code, which gets rid of the mapping file and supports Light-weight migration, users that are in this state, 'in the middle of a migration', don't seem to get reset.
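For reference, the new build's store setup is roughly the following lightweight-migration sketch (the coordinator and store URL here are placeholders):

import CoreData

// Opt in to automatic, inferred (lightweight) migration; no mapping model is used.
func addStore(to coordinator: NSPersistentStoreCoordinator, at storeURL: URL) throws {
    let options: [AnyHashable: Any] = [
        NSMigratePersistentStoresAutomaticallyOption: true,
        NSInferMappingModelAutomaticallyOption: true
    ]
    _ = try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                           configurationName: nil,
                                           at: storeURL,
                                           options: options)
}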
What are my options for backing out a migration?
- I can detect that I am in this state because there is a '.myDB.sqlite.migrationdestination_41b5a6b5c6e848c462a8480cd24caef3' file in the Documents directory. Deleting this file does not clear up the migration. My guess is that the database is somehow flagged as being in this state, or is already partially migrated.
- I can detect this state and then delete the database altogether. But this forces my users to re-download their data.
Any Thoughts?
Thanks for your help.
The only thing that occurs to me would be to crack open the SQLite store of an affected file and look for flags or anything else that might signal that the DB is in a transitory state. You might be able to write directly to the file and alter something.
That's really ugly problem.