What do "Dirty" and "Resident" mean in relation to Virtual Memory? - iphone

I dropped out of the CS program at my university... So, can someone who has a full understanding of Computer Science please tell me: what is the meaning of Dirty and Resident, as relates to Virtual Memory? And, for bonus points, what the heck is Virtual Memory anyway? I am using the Allocations/VM Tracker tool in Instruments to analyze an iOS app.
*Hint - try to explain as if you were talking to an 8-year-old kid or a complete imbecile.
Thanks guys.

"Dirty memory" is memory which has been changed somehow - that's memory which the garbage collector has to look at, and then decide what to do with it. Depending on how you build your data structures, you could cause the garbage collector to mark a lot of memory as dirty, having each garbage collection cycle take longer than required. Keeping this number low means your program will run faster, and will be less likely to experience noticeable garbage collection pauses. For most people, this is not really a concern.
"Resident memory" is memory which is currently loaded into RAM - memory which is actually being used. While your application may require that a lot of different items be tracked in memory, it may only require a small subset be accessible at any point in time. Keeping this number low means your application has lower loading times, plays well with others, and reduces the risk you'll run out of memory and crash as your application is running. This is probably the number you should be paying attention to, most of the time.
"Virtual memory" is the total amount of data that your application is keeping track of at any point in time. This number is different from what is in active use (what's being used is marked as "Resident memory") - the system will keep data that's tracked but not used by your application somewhere other than actual memory. It might, for example, save it to disk.

WWDC 2013 Session 410, "Fixing Memory Issues", explains this nicely. Well worth a watch, since it also explains some of the practical implications of dirty, resident, and virtual memory.

Related

iPhone Memory, What to trust?

I have been having some memory problems with an app. I'm getting to a stage now where nothing really gives me a solid answer in terms of memory.
At first I used the Allocations profiler, which I don't think works that well at all; I think this is because most of my code is in Obj-C++, meaning it can't track the memory correctly.
The Allocations profiler tells me the app uses 32 MB of memory, and around this point it says it has low memory and sometimes crashes out. However, within other parts of the application it has used up to 40 MB and never crashed out.
I found this code chunk:
http://landonf.bikemonkey.org/code/iphone/Determining_Available_Memory.20081203.html
That appears to tell me I'm using 70 MB of memory, and when I get the low-memory warning it says I have 2 MB - 4 MB of unused memory left. Which seems more reasonable, but it's almost double what the profiler says!
The only thing I can think of is just ignoring it all and reducing the amount of memory used by my app as much as possible.
Ignoring it all and reducing the memory used by your app is actually a good way to proceed. Make sure that you're responding to memory warnings by purging anything in memory that you don't need. Remember that different devices have different amounts of memory, and you may need to use even less than you think, at least if you want to support those older devices.
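For example, here is a rough sketch of purging a regenerable cache in response to a memory warning. The class and property names are made up for illustration; the point is just that anything you can rebuild later gets thrown away.

    #import <UIKit/UIKit.h>

    // Hypothetical view controller that caches thumbnails it can reload from disk.
    @interface GalleryViewController : UIViewController
    @property (nonatomic, retain) NSMutableDictionary *thumbnailCache;
    @end

    @implementation GalleryViewController
    @synthesize thumbnailCache;

    - (void)didReceiveMemoryWarning {
        // Throw away anything we can rebuild later; keep only what the
        // current screen actually needs.
        [self.thumbnailCache removeAllObjects];
        [super didReceiveMemoryWarning];
    }

    @end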

If malloc() is considered causing "dirty memory", is free() cleaning it up?

I've heard rumors that calling malloc leads to so-called "dirty memory", which you can see in the VM Tracker instrument.
Now, rumors also say one must try to keep the amount of dirty memory as low as possible. But what they didn't talk much about was how to undirty it again.
Sometimes there's no other option than using malloc(). Heck, I love malloc(). For example when creating audio sources for OpenAL, one must malloc() a lot of data.
So: When my app calls malloc() and free() all over the place, I always believed that's fine. Am I having a huge problem when doing that? Or will free() always "clean it up"? I'm a bit confused because some very big guys at a very big company warned that malloc() must be avoided as much as possible, because of this dirty memory problem.
Maybe someone can un-confuse me about that.
I seriously doubt this is true. All memory allocation in Cocoa is eventually done via malloc, so sayeth Apple's Memory Usage Performance Guidelines. Quoting from that document:
Because memory is such a fundamental resource, Mac OS X and iOS both provide several ways to allocate it. Which allocation techniques you use will depend mostly on your needs, but in the end all memory allocations eventually use the malloc library to create the memory. Even Cocoa objects are allocated using the malloc library eventually.
I don't know about your big guys at your big company, but I've known big guys at big companies that didn't know squat. Just sayin'. Documentation trumps rumors every time. :)
I don't know what they mean by "dirty" and "clean". Possibly they are referring to the problem of fragmentation. Doing lots of allocs and frees can cause fragmentation problems, but it really depends on the usage patterns and block sizes you are allocating. In general, don't worry about using malloc and free. If you have a real reason to avoid the standard allocator, you can use your own allocator. Then you just call malloc once for a huge block that you can use as the basis of your custom allocator.
If you malloc and free same-size memory blocks multiple times, the memory will be reused instead of accumulating dirty VM pages. So it's perfectly safe as long as you know the maximum of all possible fragment sizes ever allocated by your app at any one time, and that keeps your app under the OS kill limit.
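As a sketch of that pattern (the buffer size and the OpenAL hand-off are illustrative, not taken from any particular app), repeatedly allocating and freeing same-sized buffers lets the allocator hand the same pages back, so dirty memory plateaus instead of growing with every iteration:

    #include <stdlib.h>
    #include <string.h>

    enum { kBufferSize = 256 * 1024 };   // hypothetical PCM chunk size

    static void processOneChunk(void) {
        short *pcm = malloc(kBufferSize);
        if (pcm == NULL) return;
        memset(pcm, 0, kBufferSize);     // fill with (silent) sample data
        // ... hand the data to OpenAL with alBufferData(), which copies it ...
        free(pcm);                       // pages go back to the allocator for reuse
    }

    void playbackLoop(void) {
        for (int i = 0; i < 1000; i++) {
            processOneChunk();
        }
    }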
You are correct - free() will simply clean it up again.

Why doesn't iOS have automatic garbage collection?

When developing with Objective-C on iOS, memory management currently must be performed by the developer. Some of the other mobile platforms use automatic garbage collection to remove the need for managing memory.
What could be the reasons why garbage collection is not used on the iOS devices?
The problem with garbage collection is that memory usage grows until it's collected, so there might be more memory allocated than necessary. That's bad for devices with restricted memory and no option to swap.
When the garbage collector runs, it scans the heap to find memory that's no longer being used, and that's an expensive process that will slow down your device until it has completed.
At WWDC 2011, Apple explained that they didn't want garbage collection on their mobile devices because they want apps to be able to run with the best use of the provided resources, and with great determinism. The problem with garbage collection is not only that objects build up over time until the garbage collector kicks in, but that you don't have any control over when the garbage collector will kick in. This causes non-deterministic behavior that can lead to slowdowns at exactly the moments you don't want them.
Bottom Line: You can't say, "OK. I know that these objects will be freed at X point in time, and it won't collide with other events that are occurring."
The main reason is probably memory load and performance. Reference counting has a smaller memory profile, whereas garbage collection lets the amount of memory used by the application grow much larger before it is reclaimed. Also, there is a performance issue in that when the garbage collector runs, other threads have to be stopped. This is not really a huge issue on Macintoshes with fast multicore processors, but could cause the UI to stutter on mobile devices.
The debate may be moot soon anyway. Clang/LLVM has just had a new feature added called automatic reference counting (ARC). This leverages the compiler's analysis capabilities to automatically put in the retains, releases, and autoreleases so that the programmer doesn't have to.
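A small sketch of what that buys you in practice: with ARC the release point is known at compile time, unlike a collector that runs whenever it pleases. Only standard Foundation calls are used here; the function itself is just illustrative.

    #import <Foundation/Foundation.h>

    void processFile(NSString *path) {
        @autoreleasepool {
            // Under ARC the compiler inserts the retain/release calls, so
            // 'data' is released at a known point: when this scope ends.
            NSData *data = [[NSData alloc] initWithContentsOfFile:path];
            NSLog(@"read %lu bytes", (unsigned long)[data length]);
        }   // 'data' released here, deterministically - no collector involved
    }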

Will Apple reject my iPhone application because of increasing memory usage?

I have an application with multiple views. It works pretty well without any leaks or crashes. But when I run the performance tool for leaks, I see that when I switch through multiple views and come back to the home screen, the overall size of the application has increased. For example, if it's 1.53 MB, visiting 4-5 different views and getting back to the home screen increases the consumption to 1.58 MB or less, but definitely greater than 1.53 MB.
I tried resolving this issue but am not able to figure out where I am going wrong, since there are no memory leaks.
Does anyone know what could be the problem?
Will Apple reject my application on this basis?
I would go back and forth between the screens many many many many times (at least one hundred times). If the memory continues to grow (linearly) during that time, you have a problem. If the memory stabilizes, you might be okay.
Definitely keep trying to fix your memory leaks. But if it's small, I doubt Apple will notice it. I mean, their own apps leak some too. You could get rejected for it, sure. But realistically, leaking a few bytes here and there shouldn't prevent an approval by itself.
(Source: 2 apps approved, one with the same issue - a tiny little memory leak I couldn't track down. I submitted it and it was approved. Shortly after, I found and fixed it and released the fix as part of an update.)
If an application has an increasing memory footprint in a known stable state (for example, State A) after going into and coming back from a State B which should have no persistent effect on State A, and there are no memory leaks, this problem is called (as far as I know) lingering memory.
Checklist to be sure if you have lingering memory problem:
The app has no memory leaks, or at least no leaks in non-system code, when profiled with Instruments.
State A and State B are individually stable states, like states in a state machine.
State B has no permanent effect on State A or its memory. State A could be a gateway, a menu leading to other states like State B or State C, but child states have no or limited info about State A and make no changes to State A.
On looped state changes that start and end with the root state, for example A->B->A, A->C->A, A->B->C->A, you encounter increasing memory usage in State A. Memory usage in the other child states is not important.
To spot and solve this problem, profile your app with Instruments. But instead of monitoring leaks, you should monitor allocations and total memory. Every time your app gets to State A, including at launch, take a memory snapshot. (There is a button for that :D) After the snapshot, go to State B, State C and use your application as it is supposed to be used. After coming back to the root state, in this example State A, take another snapshot. Instruments will show you memory allocations and the delta in total memory between snapshots. It will also tell you, where possible, which object the memory was allocated for and when. If it was your code, you will probably see the class and the allocation point. Instruments cannot tell you when the object should have been released, but once you have found the lingering object or memory, figuring out the deallocation point should be much easier.
BUT! Do not forget:
OS and framework code can have leaks and lingering-memory problems, like on every OS. If you are sure that it is not your code leaking or lingering in memory, then everything is fine. That was the case in my app, and it got approved (App: Tusudoku). System functions often use additional memory if it is available, but they release it immediately when they receive a memory warning. Although devices have limited memory, memory that sits unused is wasted, and using it does not make the memory chip draw measurably more current. Using memory to the limit for performance and releasing it immediately when someone else definitely needs it is the best possible practice. These caches do not tend to grow linearly over time, but you should force a memory warning every time the app gets to the root state, in this example State A. That way you can be sure any cache memory allocated by the system or frameworks has been deallocated before you take the snapshot.
Most of the apps on the App Store have memory leaks and other memory problems. The question is how this affects the user. Lingering memory that grows non-linearly, with its rate of increase dropping off rapidly, generally won't be a reason for rejection. Say you calculated the memory usage of a perfectly working app as 15 MB; if it works, no problem - say it will reach 20 MB at the absolute maximum and you are good to go, and you can fix your memory problems later. But if your application's memory usage increases linearly or worse and it cannot release that memory when needed, that will be a critical problem.
For more information about memory usage, please consider reading the official documentation and watching the WWDC videos (that's where I learned all about fixing memory issues using Instruments).
There is no set in stone answer.
On the one hand, the fact that your application may have an obscure memory leak would be enough to reject it according to the posted policies.
On the other hand, documents submitted to the FCC by Apple (in the AT&T+Apple vs. Google monopoly fight) give enough detail to work out just how much goes into reviewing an app - unless Apple lied, the average app is reviewed by 2 people, and each of them spends around 5 minutes and 38 seconds (assuming Apple doesn't give breaks) determining whether your app passes or fails.
So the answer largely depends on if this memory leak can be discovered in the first 5 minutes of examination by some of the most overworked testers in the industry.
If you are using UIImageViews in your views, then part of the extra memory could be the caching that it does. See here.
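For illustration, here is the difference between the caching and non-caching loaders. Both are standard UIKit/Foundation calls; the file name is made up.

    // imageNamed: goes through a system-wide cache that may hold on to the
    // decompressed bitmap even after your view is gone.
    UIImage *cached = [UIImage imageNamed:@"background.png"];

    // imageWithContentsOfFile: skips that cache, so the bitmap is released
    // as soon as the last reference to the image goes away.
    NSString *path = [[NSBundle mainBundle] pathForResource:@"background"
                                                     ofType:@"png"];
    UIImage *uncached = [UIImage imageWithContentsOfFile:path];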
Sometimes when we load views and then switch to another, we leave the view around. For example, if you have a root view controller that has all the views as retained properties. Normally when you remove a subview it is released, but not if you have it retained in your view controller. As you can see, that adds up to memory consumed but not freed. It's not a leak, except that it gets released only when you release or remove the root view controller.
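A small sketch of that situation - the property and method names here are hypothetical:

    // Hypothetical root view controller that retains a heavyweight subview.
    @property (nonatomic, retain) UIView *detailView;

    - (void)hideDetail {
        [self.detailView removeFromSuperview];
        // The view is out of the hierarchy, but self.detailView still retains
        // it, so its memory stays allocated (not leaked, just held).
        // To actually give the memory back, also drop the reference:
        self.detailView = nil;
    }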
You could try to go through and find places where memory is tied up like this, or you could justify it based on the added speed of going through views without having to wait for them to reload.
In summary, it is good to know why your views and other objects consume and hold on to memory, but you may find that all those uses are justified and you want to keep things that way. Having said that, I don't think Apple will reject your app for decisions like this. If your app crashes because of memory usage, then that would get it rejected.
You're describing very typical memory usage.
If your app runs out of memory and crashes while they're testing it, they will reject it. Beyond that, you're fine.

Understanding the memory consumption on iPhone

I am working on a 2D iPhone game using OpenGL ES and I keep hitting the 24 MB memory limit – my application keeps crashing with the error code 101. I tried real hard to find where the memory goes, but the numbers in Instruments are still much bigger than what I would expect.
I ran the application with the Memory Monitor, Object Alloc, Leaks and OpenGL ES instruments. When the application gets loaded, free physical memory drops from 37 MB to 23 MB, the Object Alloc settles around 7 MB, Leaks show two or three leaks a few bytes in size, the Gart Object Size is about 5 MB and Memory Monitor says the application takes up about 14 MB of real memory. I am perplexed as to where the memory goes – when I dig into the Object Allocations, most of the memory is in the textures, exactly as I would expect. But both my own texture allocation counter and the Gart Object Size agree that the textures should take up somewhere around 5 MB.
I am not aware of allocating anything else that would be worth mentioning, and the Object Alloc agrees. Where does the memory go? (I would be glad to supply more details if this is not enough.)
Update: I really tried to find where I could allocate so much memory, but with no results. What drives me wild is the difference between the Object Allocations (~7 MB) and real memory usage as shown by Memory Monitor (~14 MB). Even if there were huge leaks or huge chunks of memory I forgot about, they should still show up in the Object Allocations, shouldn’t they?
I’ve already tried the usual suspects, ie. the UIImage with its caching, but that did not help. Is there a way to track memory usage “debugger-style”, line by line, watching each statement’s impact on memory usage?
What I have found so far:
I really am using that much memory. It is not easy to measure the real memory consumption, but after a lot of counting I think the memory consumption is really that high. My fault.
I found no easy way to measure the memory used. The Memory Monitor numbers are accurate (these are the numbers that really matter), but the Memory Monitor can’t tell you where exactly the memory goes. The Object Alloc tool is almost useless for tracking the real memory usage. When I create a texture, the allocated memory counter goes up for a while (reading the texture into the memory), then drops (passing the texture data to OpenGL, freeing). This is OK, but does not always happen – sometimes the memory usage stays high even after the texture has been passed on to OpenGL and freed from “my” memory. This means that the total amount of memory allocated as shown by the Object Alloc tool is smaller than the real total memory consumption, but bigger than the real consumption minus textures (real – textures < object alloc < real). Go figure.
I misread the Programming Guide. The memory limit of 24 MB applies to textures and surfaces, not the whole application. The actual red line lies a bit further, but I could not find any hard numbers. The consensus is that 25–30 MB is the ceiling.
When the system gets short on memory, it starts sending the memory warning. I have almost nothing to free, but other applications do release some memory back to the system, especially Safari (which seems to be caching websites). When the free memory as shown in the Memory Monitor goes to zero, the system starts killing.
I had to bite the bullet and rewrite some parts of the code to be more efficient on memory, but I am probably still pushing it. If I were to design another game, I would certainly think of some resource paging. With the current game it’s quite hard, because the thing is in motion all the time and loading the textures gets in the way, even if done in another thread. I would be much interested in how other people solve this issue.
Please note that these are just my views that do not have to be much accurate. If I find out something more to say on this topic, I will update the question. I’ll keep the question open in case somebody who understands the issue would care to answer, since these all are more workarounds and guesses than anything else.
I highly doubt this is a bug in Instruments.
First, read this blog post by Jeff Lamarche about OpenGL textures. It has a simple example of how to load textures without causing leaks, and gives an understanding of how "small" images, once they are loaded into OpenGL, actually use "a lot" of memory.
Excerpt:
Textures, even if they're made from compressed images, use a lot of your application's memory heap because they have to be expanded in memory to be used. Every pixel takes up four bytes, so forgetting to release your texture image data can really eat up your memory quickly.
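In other words, the in-memory cost is roughly width x height x 4 bytes, whatever the compressed file size was. A quick sketch of that arithmetic (an illustrative helper, not part of any API):

    // Rough footprint of an uncompressed RGBA texture once it is expanded
    // for OpenGL: width * height * 4 bytes per pixel.
    size_t textureBytes(size_t width, size_t height) {
        return width * height * 4;
    }

    // e.g. a "small" 1024x1024 PNG becomes 1024 * 1024 * 4 = 4 MB in memory,
    // regardless of how well the file on disk was compressed.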
Second, it is possible to debug texture memory with Instruments. There are two profiling configurations: OpenGL ES Analyzer and OpenGL ES Driver. You will need to run these on the device, as the simulator doesn't use OpenGL. Simply choose Product->Profile from Xcode and look for these profiles once Instruments launches.
Armed with that knowledge, here is what I would do:
Check that you're not leaking memory -- this will obviously cause this problem.
Ensure you're not accessing autoreleased memory -- a common cause of crashes.
Create a separate test app and play with loading textures individually (and in combination) to find out what texture (or combination thereof) is causing the problem.
UPDATE: After thinking about your question, I've been reading Apple's OpenGL ES Programming Guide and it has very good information. Highly recommended!
One way is to start commenting out code and checking to see if the bug still happens. Yes it is tedious and elementary, but it might help if you knew where the bug was.
Where it is crashing, why it is crashing, etc.
Hrmm, that's not many details, but if Leaks doesn't show you where the leaks are, there are two important options:
[i] Leaks missed a leak
[ii] The memory isn't actually being leaked
Fixing [i] is quite hard, but as Eric Albert said, filing a bug report with Apple will help. [ii] means that the memory you're using is still accessible somewhere, but perhaps you've forgotten about it. Are any lists growing without throwing out old entries? Are any buffers being realloc()ed a lot?
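As an illustration of case [ii], here is a hypothetical unbounded history buffer (the names are made up). Nothing is leaked - every object is still reachable, so Leaks stays quiet - but memory climbs with every call until you cap the collection:

    // Hypothetical history buffer that grows without bound unless capped.
    @property (nonatomic, retain) NSMutableArray *frameHistory;

    - (void)recordFrame:(NSData *)frame {
        [self.frameHistory addObject:frame];
        // Cap the history so old entries are actually released:
        while ([self.frameHistory count] > 100) {
            [self.frameHistory removeObjectAtIndex:0];
        }
    }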
For those seeing this after the year 2012:
The memory really loaded into the device's physical memory is the Resident Memory in the VM Tracker instrument.
The Allocations instrument only tracks memory created by malloc/[NSObject alloc] and some framework buffers; for example, a decompressed image bitmap is not included in the Allocations instrument, even though it often takes up most of your memory.
Please watch WWDC 2012 Session 242, iOS App Performance: Memory, to get this information from Apple.
This doesn't specifically help you, but if you find that the memory tools don't provide all the data you need, please file a bug at bugreport.apple.com. Attach a copy of your app and a description of how the tools are falling short of your analysis and Apple will see if they can improve the tools. Thanks!