Core Data, among other things, is designed to keep an application's memory footprint low.
However, I could not find any information on whether or not it responds to the didReceiveMemoryWarning notification.
I assume it drops its caches, as this would be the sensible thing to do?
Core Data does something conceptually similar called faulting, which limits how many objects in the graph are in memory at any given time. I'd read up on it here: https://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CoreData/Articles/cdFaultingUniquing.html
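For illustration, here's a hedged sketch of interacting with faulting directly: once you're done with an object's data you can ask Core Data to turn it back into a fault (the context, attribute name, and person variable are assumptions, not from the question):
// Sketch: fire a fault to read a value, then re-fault the object when done.
NSString *name = [person valueForKey:@"name"];   // firing the fault loads the row into memory
NSLog(@"Loaded person: %@", name);

// NO = discard in-memory values rather than merging unsaved changes;
// the object becomes a fault again and its data can be re-fetched lazily later.
[context refreshObject:person mergeChanges:NO];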
What are the best practices to cache the data in iOS apps connected to data source via web service?
You should look at NSCache:
http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/NSCache_Class/Reference/Reference.html
An NSCache object is a collection-like container, or cache, that stores key-value pairs, similar to the NSDictionary class. Developers often incorporate caches to temporarily store objects with transient data that are expensive to create. Reusing these objects can provide performance benefits, because their values do not have to be recalculated. However, the objects are not critical to the application and can be discarded if memory is tight. If discarded, their values will have to be recomputed again when needed.
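A minimal sketch of how NSCache might be used for this (the key, count limit, and helper method are assumptions):
// Sketch: cache expensive-to-create objects; NSCache may evict them when memory is tight.
NSCache *thumbnailCache = [[NSCache alloc] init];
thumbnailCache.countLimit = 100;                      // optional: cap the number of entries

UIImage *thumb = [thumbnailCache objectForKey:photoURL];
if (thumb == nil) {
    // Expensive to create, cheap to throw away and recompute: exactly what NSCache is for.
    thumb = [self thumbnailForPhotoAtURL:photoURL];   // hypothetical helper
    [thumbnailCache setObject:thumb forKey:photoURL];
}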
Depends on the type of data.
For binary data (files):
- Cache your files in the Caches folder using NSFileManager and NSData's writeToFile: (see the sketch after this list)
For small amounts of data (ASCII/UTF-8):
- Use NSUserDefaults
For large amounts of data (ASCII/UTF-8):
- Use a sqlite3 database
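For the file case, here's a hedged sketch of caching downloaded data in the Caches directory (the file name and data variable are just examples):
// Locate the app's Caches directory and write the downloaded bytes there.
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                           NSUserDomainMask, YES) lastObject];
NSString *filePath = [cachesDir stringByAppendingPathComponent:@"feed.json"]; // example name

[downloadedData writeToFile:filePath atomically:YES];  // downloadedData: an NSData you fetched

// Later: returns nil if the file is missing (the system may purge Caches under pressure).
NSData *cached = [NSData dataWithContentsOfFile:filePath];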
It depends on how much data you want to cache, how you'll be accessing it once you have it cached, and a bunch of other cache management issues.
If you have a small amount of data, you could store that in a dictionary or array, and simply write it out and read it in. But this kind of solution can become slow if you have a lot of data; those reads and writes can take a long time. And flushing a dirty cache to disk means writing the whole object.
You could write individual files, but again, if you have a lot of files that might become a performance issue as well.
Another alternative is to use CoreData. If you have a lot of data (say, many objects) it may make sense to define what those look like as CoreData entities. Then you just store and fetch objects as you need them, falling back to fetching from your web service (and then caching) if the data is not local. You can also optimize other cache management tasks (like expiring unused entries) easily and efficiently using CoreData.
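As a rough illustration of that fetch-or-fall-back idea, assumed to live inside a method that returns a managed object (the Article entity, identifier attribute, and helper are all hypothetical):
// Look in the local Core Data store first; only hit the web service on a miss.
NSFetchRequest *request = [[NSFetchRequest alloc] init];
request.entity = [NSEntityDescription entityForName:@"Article"
                              inManagedObjectContext:context];
request.predicate = [NSPredicate predicateWithFormat:@"identifier == %@", articleID];
request.fetchLimit = 1;

NSArray *matches = [context executeFetchRequest:request error:NULL];
if (matches.count > 0) {
    return [matches objectAtIndex:0];                         // cache hit
}
// Cache miss: download, insert a new managed object, save the context, return it.
return [self fetchArticleFromServerAndCacheWithID:articleID]; // hypothetical helper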
I actually went down this road, with a couple different apps. I started with an NSDictionary, and that became quite slow. I switched to CoreData, which not only simplified a lot of my code for cache initialization and management, but gave the apps quite a performance boost in the process.
If you're using NSURLConnection, or anything that uses NSURLRequest, caching is already taken care of for you:
http://developer.apple.com/library/ios/#documentation/Cocoa/Conceptual/URLLoadingSystem/Tasks/UsingNSURLConnection.html#//apple_ref/doc/uid/20001836-169425
http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/URLLoadingSystem/Concepts/CachePolicies.html#//apple_ref/doc/uid/20001843-BAJEAIEE
By default these use the cache policies of the protocol, which for a web service would be the HTTP headers it returns. This is also true, IIRC, of ASIHTTPRequest.
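If you ever want to override those protocol defaults, a hedged sketch of setting an explicit cache policy on a request (the URL is just an example, and this uses the iOS 5-era asynchronous NSURLConnection API):
// Prefer the local URL cache; only load from the network when nothing is cached.
NSURL *url = [NSURL URLWithString:@"https://example.com/api/feed"];
NSURLRequest *request =
    [NSURLRequest requestWithURL:url
                     cachePolicy:NSURLRequestReturnCacheDataElseLoad
                 timeoutInterval:30.0];

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // data may come straight from the shared NSURLCache without touching the network
}];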
Core Data also implements its own row and object caching, which works pretty well. So the reality here is that you really don't need to worry about caching when it comes to these things - it's optimizing your use of things like NSDateFormatter that starts to become important (they're expensive to create, not thread safe, etc...)
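As one example of that kind of optimization, here is a minimal sketch of reusing a single NSDateFormatter per thread instead of creating one for every row (the dictionary key and date format are assumptions):
// NSDateFormatter is expensive to create and (historically) not thread safe,
// so keep one instance per thread in the thread dictionary and reuse it.
- (NSDateFormatter *)feedDateFormatter {
    NSMutableDictionary *threadDict = [[NSThread currentThread] threadDictionary];
    NSDateFormatter *formatter = [threadDict objectForKey:@"FeedDateFormatter"];
    if (formatter == nil) {
        formatter = [[NSDateFormatter alloc] init];
        formatter.dateFormat = @"EEE, dd MMM yyyy HH:mm:ss Z";   // e.g. an RSS pubDate
        [threadDict setObject:formatter forKey:@"FeedDateFormatter"];
    }
    return formatter;
}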
And when in doubt, use Instruments to find bottlenecks and latency.
I dropped out of the CS program at my university... So, can someone who has a full understanding of Computer Science please tell me: what is the meaning of Dirty and Resident, as relates to Virtual Memory? And, for bonus points, what the heck is Virtual Memory anyway? I am using the Allocations/VM Tracker tool in Instruments to analyze an iOS app.
*Hint - try to explain as if you were talking to an 8-year old kid or a complete imbecile.
Thanks guys.
"Dirty memory" is memory which has been changed somehow - that's memory which the garbage collector has to look at, and then decide what to do with it. Depending on how you build your data structures, you could cause the garbage collector to mark a lot of memory as dirty, having each garbage collection cycle take longer than required. Keeping this number low means your program will run faster, and will be less likely to experience noticeable garbage collection pauses. For most people, this is not really a concern.
"Resident memory" is memory which is currently loaded into RAM - memory which is actually being used. While your application may require that a lot of different items be tracked in memory, it may only require a small subset be accessible at any point in time. Keeping this number low means your application has lower loading times, plays well with others, and reduces the risk you'll run out of memory and crash as your application is running. This is probably the number you should be paying attention to, most of the time.
"Virtual memory" is the total amount of data that your application is keeping track of at any point in time. This number is different from what is in active use (what's being used is marked as "Resident memory") - the system will keep data that's tracked but not used by your application somewhere other than actual memory. It might, for example, save it to disk.
WWDC 2013, Session 410: Fixing Memory Issues explains this nicely. Well worth a watch, since it also explains some of the practical implications of dirty, resident and virtual memory.
I am using a custom UITableViewCell with 3 UIImages in a UITableView with 50-100 rows. It's similar to the UITableViewCell the Facebook iPhone app uses for its news feed view.
The application has 4 similar UITableViews which may be open at the same time via a UITabBarController.
The images are lazy loaded; there is a cache on disk so that no image is loaded twice from the server, and there is also an NSMutableDictionary for images allowing in-memory reuse of the same image, e.g. when a user's profile picture appears multiple times.
This setup is extremely fast but takes a lot of memory even after using the NSMutableDictionary for image reuse.
I tried a variation without the NSMutableDictionary where images are either loaded from the server or pulled from the disk cache every time cellForRowAtIndexPath is called. This setup is extremely memory efficient but causes a noticeable lag in the UITableView scrolling.
A mid-way approach is to free the NSMutableDictionary for images when a low memory warning is received.
I will really appreciate recommendations to optimize memory usage and speed in this scenario, and/or an insight into how the Facebook iPhone app or Three20 handle this conceptually.
I have an app that is very similar to yours in many respects (uses Three20, has several tabs across the bottom, each tab can have a table, each cell can have one or two images); and the approach I'm taking is the one you mentioned near the end of your post:
A mid-way approach is to free the NSMutableDictionary for images when a low memory warning is received.
Personally, I quite like iOS's approach to memory management, of warning me when memory is getting tight. The Mac/PC approach of "just use all the memory you want, we'll swap it out to disk if memory gets tight" has the disadvantage that even though the OS is the only one who really knows how much pressure there is on memory, it isn't telling you. I think what every polite app would really like to say (if apps could talk) is, "I'd be happy to use as much memory as you'll give me, but I don't want to be a bother, I don't want to slow down any other apps, so if you could please give me a hint as to how much memory I can use without causing problems, I would appreciate it."
Well that's what iOS's memory warnings give you, in my opinion. So, just keep as many images cached in memory as you want; and when you get a memory warning, empty the in-memory cache. To me it's really the best of both worlds.
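Concretely, here is a minimal sketch of that pattern, assuming ARC and an in-memory cache held as an _imagesByURL NSMutableDictionary inside an image-cache object of your own (the names are mine, not from this answer):
// Drop only the in-memory image cache on memory warnings; the on-disk copies
// stay put, so images can be reloaded lazily afterwards.
- (id)init {
    if ((self = [super init])) {
        _imagesByURL = [[NSMutableDictionary alloc] init];
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(handleMemoryWarning:)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (void)handleMemoryWarning:(NSNotification *)note {
    [_imagesByURL removeAllObjects];   // empty only the in-memory cache
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}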
Also, you should definitely take a look at Three20's TTURLCache, although I can't tell you a lot about it because I haven't dug into it very much. What I do know is:
If you retrieve your images via TTURLImageResponse, it will automatically cache them in TTURLCache's image cache.
You can also store and load your own images (and other data) in the TTURLCache.
Three20 seems to take an approach similar to what I am talking about. Take a look at this code from Three20Network/Sources/TTURLCache.m (the NO argument means don't remove from disk, only remove from memory):
- (void)didReceiveMemoryWarning:(void*)object {
  // Empty the memory cache when memory is low
  [self removeAll:NO];
}
In addition, that class also allows you to set a maximum size for the in-memory cache, but by default there is no maximum size.
You would purge images when they go offscreen, then read the images from the local cache on demand from a secondary worker thread when needed. Since one can zip through tables quickly, add support for read cancellation (especially for requests which come off the server). NSOperation is a good API for this.
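For illustration only, a minimal sketch of that idea, assuming ARC, an NSOperationQueue property named imageLoadQueue, and hypothetical cell/path variables (none of these names come from the answer):
// Load a cached image on a background queue with cancellation support,
// so cells that scroll offscreen can abandon their pending reads.
NSBlockOperation *op = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakOp = op;          // avoid retaining the operation inside its own block
[op addExecutionBlock:^{
    if (weakOp.isCancelled) return;            // the cell was reused before we started
    UIImage *image = [UIImage imageWithContentsOfFile:cachedImagePath]; // avoids imageNamed: caching
    if (weakOp.isCancelled || image == nil) return;
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        cell.photoView.image = image;          // update UI on the main thread
    }];
}];
[self.imageLoadQueue addOperation:op];         // cancel this operation if the cell scrolls away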
If you know your table is small, then you could opt to avoid purging in such cases.
Also, rescaling the image to the size you'll display it at is often a good idea (depending on how far you want to take the optimization). Assuming the source is larger than the displayed size, this will reduce memory requirements, drawing time, disk space, and disk read times.
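A hedged sketch of that rescaling step (the method name and target size are illustrative):
// Downscale a large source image to the size it will actually be drawn at.
// A 50x50 pt thumbnail backed by a 50x50 pt bitmap uses far less memory than the original.
- (UIImage *)thumbnailForImage:(UIImage *)source size:(CGSize)targetSize {
    UIGraphicsBeginImageContextWithOptions(targetSize, YES, 0.0);  // 0.0 = device scale
    [source drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}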
You can also read Three20's sources to see what they have done.
I built a parsing algorithm using NSXMLParser.
I'm having doubts as to what is the best strategy for keeping my memory usage to a minimum.
I have a value object (e.g. "Person"); this object has ≈30 NSString properties, and while parsing the XML I continually alloc and release a temporary Person object as the nodes are traversed.
I checked this and there is only one of these Person objects instantiated at any time.
When a node is traversed and a Person is "built", I add the Person to an NSMutableArray and release the Person. Seems no problem there. (I'll need the array for a table view.)
When I reach around 50+ Person objects in the array my app just quits; didReceiveMemoryWarning doesn't get called, no other warnings, no parseErrorOccurred, nothing?
If I limit the number of Persons in the XML, the app does just fine, and I haven't been able to find any memory leaks with Instruments.
I think that I simply can't hold 50+ Person objects in an array… seems a bit harsh, but I haven't got much memory experience with the iPhone, so this is just a guess.
The XML is search results, of which the user probably only needs a few, so persisting them to my core model to keep them around for display seems a bit crazy.
What would be a good strategy for keeping these Person objects around? or am I missing a huge memory leak since the iPhone should be able to handle much more than this?
Hope some experienced developers can point me in the right direction:)
Thank you!
Despite NSXMLParser being a SAX-based parser it does not support parsing an input stream, which means that the entire XML string you are parsing is kept in memory. This on its own is a big issue, but as you parse the problem gets worse as you start duplicating the string data from the XML in your Person objects.
If your strings are really big, you've got the second problem of having too many parsed Person objects in memory at one time.
The first problem can be solved by using AQXMLParser from Jim Dovey's AQToolkit library, which provides an NSXMLParser-like API but with support for streaming the data from disk.
The second problem can be solved using a disk-based persistence technology, like Core Data, SQLite Persistent Objects, or even just storing the Person objects on disk yourself.
How long are those strings? Generally, on the iPhone 3G and older models, your app should have a minimum of about 20 MB of memory available (much more on the 3Gs). This is no absolute rule, of course, but a decent rule of thumb. To occupy this much memory with 50 objects would mean ~400-500 KB per Person object. Is this in the ballpark? If so, you will probably need a memory management strategy that does not keep all objects in memory at the same time. Core Data can probably help you a great deal in that case.
If you did not receive a memory warning, it is probably not the reason your app is quitting. In Xcode, go to the Organizer, select the device, then click on the Console tab. If the app was shut down for memory reasons there will be a system message in the console log saying it is killing the app due to memory pressure.
The answer is to chop up the incoming stream; I wrote a post about it some time ago:
https://lukassen.wordpress.com/2010/01/15/feeding-nsxmlparser-a-stream-of-xml/
I have an application with multiple views. It works pretty well without any leaks or crashes. But when I run it with the Leaks performance tool, I see that when I switch through multiple views and come back to the home screen, the overall size of the application has increased. For example, if it was 1.53 MB, visiting 4-5 different views and coming back to the home screen pushes the consumption up to around 1.58 MB - not much, but definitely greater than 1.53 MB.
I tried resolving this issue but was not able to figure out where I am going wrong, since there are no memory leaks.
Does anyone know what could be the problem?
Will apple reject my application on this basis?
I would go back and forth between the screens many many many many times (at least one hundred times). If the memory continues to grow (linearly) during that time, you have a problem. If the memory stabilizes, you might be okay.
Definitely keep trying to fix your memory leaks. But if it's small, I doubt Apple will notice it. I mean, their own apps leak some too. You could get rejected for it, sure. But realistically, leaking a few bytes here and there shouldn't prevent an approval by itself.
(Source: 2 apps approved, one with the same issue - a tiny little memory leak I couldn't track down. I submitted it and it was approved. Shortly after, I found and fixed the leak and released the fix as part of an update.)
If an application's memory footprint keeps increasing in a known stable state (for example, a state named A) after going into and coming back from a state B which should have no persistent effect on state A, and there are no memory leaks, this problem is called (as far as I know) lingering memory.
Checklist to be sure you have a lingering memory problem:
The app has no memory leaks, or no memory leaks in non-system code, when profiled with Instruments.
State A and State B are individually stable states, like in a state machine.
State B has no permanent effect on State A or its memory. State A could be a gateway, a menu to other states like State B or State C, but child states have no or limited information about State A and make no changes to State A.
On looped state changes that start and end with the root state, for example A->B->A, A->C->A, A->B->C->A, you encounter increasing memory usage in State A. Memory usage in the other child states is not important.
To spot and solve this problem, profile your app with Instruments. But instead of monitoring leaks, you should monitor allocations and total memory. Every time your app gets to State A, including at launch, take a memory snapshot. (There is a button for that :D) After the snapshot, go to State B, State C, and use your application as it is supposed to be used. After coming back to the root state, in this example State A, take another snapshot. Instruments will show you the memory allocations and the delta in total memory between snapshots. It will also tell you, where possible, which object the memory was allocated for and when. If it was your code, you will probably see the class type and the allocation point. Instruments cannot tell you when the object should have been released, but once you have found the lingering object or memory, figuring out the missing deallocation point should be much easier.
BUT! Do not forget:
OS and framework code can have leaks and lingering memory problems, like on every OS. If you are sure that it is not your code leaking or lingering in memory, then everything is fine. That was the case in my app and it got approved (app: Tusudoku). System functions often use additional memory if it is available, but they immediately release it when a memory warning is received. Although devices have limited memory, memory that sits unused is a waste, and using it does not make the memory chip draw measurably more current. Using memory up to the limits for performance, and immediately releasing it when someone else definitely needs it, is the best possible practice. These caches do not tend to grow linearly over time, but you should force a memory warning every time the app gets back to the root state, in this example State A. That way you can be sure any cache memory allocated by the system or frameworks has been deallocated before you take the snapshot.
Most of the apps on the App Store® have memory leaks and other memory problems. The question is how this affects the user. Non-linear lingering memory whose growth rate tapers off quickly generally won't be a reason for rejection. Say you calculated the memory usage of a perfectly working version of your app as 15 MB; if the app works and you can say it will never exceed a 20 MB ceiling, you are good to go, and you can fix your memory problems in a later update. But if your application's memory usage increases linearly or worse, and it cannot release that memory when needed, that will be a critical problem.
For more information about memory usage, please consider reading the official documentation and watching the WWDC videos (that's where I learned all about fixing memory issues using Instruments).
There is no set-in-stone answer.
On the one hand, the fact that your application may have an obscure memory leak would be enough to reject it according to the posted policies.
On the other hand, documents submitted to the FCC by Apple (in the AT&T+Apple vs. Google monopoly fight) give enough detail to work out just how much goes into reviewing an app - unless Apple lied, the average app is reviewed by 2 people, and each of them spends around 5 minutes and 38 seconds (assuming Apple doesn't give breaks) to determine if your app passes or fails.
So the answer largely depends on whether this memory leak can be discovered in the first 5 minutes of examination by some of the most overworked testers in the industry.
If you are using UIImageViews in your views, then part of the extra memory could be the caching that it does. See here.
Sometimes when we load views and then switch to another, we leave the view around. For example, you might have a root view controller that holds all the views as retained properties. Normally when you remove a subview it is released, but not if you have it retained in your view controller. As you can see, that would add up to memory consumed but not freed. It's not a leak, except that it gets released only when you release or remove the root view controller.
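If you do decide to release those retained views under memory pressure, here is a minimal sketch of the classic pattern from the pre-iOS 6 viewDidUnload lifecycle (detailView is a hypothetical retained outlet, not something from the question):
// Let a memory warning tear down the offscreen view and release retained subviews.
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];   // releases self.view if it has no superview
}

- (void)viewDidUnload {
    [super viewDidUnload];
    self.detailView = nil;             // drop the retained outlet so its memory is actually freed
}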
You could try to go through and find places where memory is tied up like this, or you could justify it based on the added speed of going through views without having to wait for them to reload.
In summary, it is good to know why your views and other objects consume and hold on to memory, but you may find that all those uses are justified and you want to keep things that way. Having said that, I don't think Apple will reject your app for decisions like this. If your app crashes because of memory usage, then that would get it rejected.
You're describing very typical memory usage.
If your app runs out of memory and crashes while they're testing it, they will reject it. Beyond that, you're fine.