Memory Warning but Small Live Bytes - iPhone

In my application, after repeating an action (choosing a picture and processing it) several times, I get a level 1 memory warning, then a level 2 warning, and then a crash.
The Leaks tool doesn't show any leak. I'm also watching the Allocations tool in Instruments: my Live Bytes are roughly 4 MB, and overall I allocate 113 MB. At most I have maybe 20 MB in memory when the picture is loaded.
Since I have to repeat an action to get to the crash, it is very likely a memory leak. However, I don't know how to locate it, since my Live Bytes are only 4 MB and consist of things that are supposed to be allocated (apart from a small leak of ~100 KB in UIImagePickerController).
How much can I trust the memory leak/allocation tools? Do you have any advice to help me locate the cause of the problem?

I don't know how iPhone OS works, so this is basically just guessing, but in systems where no garbage collector compacts the heap, memory becomes fragmented over time. Having a lot of free memory does not mean that a lot of contiguous memory is free.
For example, if you always need 4MB of memory for some processing, and you have this allocation pattern:
Allocate 4MB
Allocate 1KB
Free 4MB
Allocate 1KB
(You don't free the 1KB blocks because it's the computation result, or whatever)
You may end up with only 3,999 KB of contiguous free memory in that gap, so the next time you allocate 4 MB it cannot reuse the gap and has to be placed after it, even though it almost fits. This means you can run out of memory even though almost the entire memory (or rather, address space) is free.
Granted, modern systems shouldn't suffer from this problem, but they may, especially if the application is never shut down and does not have a compacting garbage collector. Note that some systems have a low-fragmentation heap especially for situations like this (re-allocating and freeing blocks of the same size), but you usually need to explicitly request it.
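To make the pattern concrete, here is a rough C sketch of the same sequence of allocations. Whether a real allocator actually places the second 1 KB block inside the freed 4 MB hole (and thereby fragments it) depends on the allocator, so treat this purely as an illustration.
/* A literal translation of the allocation pattern above. */
#include <stdlib.h>

#define MB (1024 * 1024)
#define KB 1024

int main(void) {
    char *big = malloc(4 * MB);   /* Allocate 4 MB: working buffer for the processing step */
    char *r1  = malloc(1 * KB);   /* Allocate 1 KB: a result we keep, placed after the buffer */
    free(big);                    /* Free 4 MB: a 4 MB hole now sits before r1 */
    char *r2  = malloc(1 * KB);   /* Allocate 1 KB: may land inside the hole, leaving
                                     just under 4 MB of contiguous space there */
    char *next = malloc(4 * MB);  /* A fresh 4 MB request no longer fits in the hole and
                                     has to be placed after r2, growing the heap */
    free(r1); free(r2); free(next);
    return 0;
}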

Virtual memory location on hard-disk

I was reading about paging and swap space and I'm a little confused about how much space (and where) on the hard disk is used to page out / swap out frames. Let's think of the following scenario:
We have a single process which progressively uses newer pages in virtual memory. Each time it needs a new page, we allocate a frame for it in physical memory.
But after a while, the frames in physical memory are exhausted and we have to choose a victim frame to evict from RAM.
I have the following doubts:
Does the victim frame get swapped out to the swap space or paged out to some different location (apart from swap-space) on the hard-disk?
From what I've seen, swap space is usually around 1-2x size of RAM, so does this mean a process can use only RAM + swap-space amount of memory in total? Or would it be more than that and limited by the size of virtual memory?
Does the victim frame get swapped out to the swap space or paged out to some different location (apart from swap-space) on the hard-disk?
It gets swapped out to the swap space; that is exactly what swap space is for. A system without swap space cannot use this feature of virtual memory, though it still gets the other benefits, such as avoiding external fragmentation and providing memory protection.
From what I've seen, swap space is usually around 1-2x size of RAM, so does this mean a process can use only RAM + swap-space amount of memory in total? Or would it be more than that and limited by the size of virtual memory?
The total memory available to a process is RAM + swap space. Imagine a computer with 1 GB of RAM and 1 GB of swap space, and a process that requires 3 GB. The process's virtual memory needs exceed what is available. This will not work: eventually the process will access all of that code/data, and since the process image is bigger than RAM + swap space, the computer simply does not have enough space to hold it, so the process will be killed.
There are really only two places to keep a part of the process: either directly in RAM or in the swap space. If there is no room in either of them, the kernel has nowhere else to go, so it kills the process.
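As a rough way to see this limit in practice, the following C sketch keeps allocating and touching memory until the system refuses. Note that on Linux with default overcommit settings the process may be killed by the OOM killer instead of seeing malloc() return NULL, so the printed total is only an approximation of RAM + swap.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const size_t chunk = 64u * 1024 * 1024;   /* grab 64 MB per step */
    size_t total = 0;

    for (;;) {
        char *p = malloc(chunk);
        if (p == NULL)
            break;                 /* no more RAM + swap to back the allocation */
        memset(p, 1, chunk);       /* touch every page so it really gets a frame */
        total += chunk;
        printf("committed %zu MB\n", total / (1024 * 1024));
    }
    printf("allocation failed after ~%zu MB\n", total / (1024 * 1024));
    return 0;
}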

Interpreting .Q.w[] for potential problems?

From this page we know that .Q.w[] gives us for example:
used| 108432 / bytes malloced
heap| 67108864 / heap bytes available
peak| 67108864 / heap high-watermark in bytes
wmax| 0 / workspace limit from -w param
mmap| 0 / amount of memory mapped
syms| 537 / number of symbols interned
symw| 15616 / bytes used by 537 symbols
If I wanted to monitor the instance for memory issues (eg. memory full) should I be looking at used or heap or a combination?
If you want to monitor how much is currently being used, you would use used, but it's only a rough estimate of actual usage, as it doesn't take into account the memory used by interned strings (symbols) or memory-mapped files.
Monitoring heap is useful for getting a sense of how your memory spikes (and peak records the largest spike so far), but on its own it isn't ideal for telling you that you're close to your limit: a big enough spike will kill the process before you have a chance to observe that it was approaching the limit.
Ultimately I would monitor both (and peak) and allow yourself buffers in both cases. Have a low-level alert if heap/peak reaches, say, 50% of the limit, with higher levels at 60%, 70%, etc. Then also monitor used as a percentage of heap/peak. If used is a high percentage of heap, and heap is a high percentage of your limit, that could be alarming. Essentially your process could be in one of two states:
Behaviour 1: used is a low-to-medium percentage of heap/peak. The process is using low-to-medium memory but spiking. This is pretty harmless and expected if it is crunching a lot of data.
Behaviour 2: used is a high percentage of heap/peak, and heap/peak is a high percentage of the limit. Here you might have a process that stores more and more memory without releasing it, so used grows continually and heap/peak grows with it. This is a problem if left unchecked.
So essentially you want to capture behaviour 2 while allowing behaviour 1.
There are some other behaviour patterns as well, but this is the general gist. Whether or not automatic garbage collection is enabled also plays into it: if it isn't enabled and used is a lot less than heap, the process is hogging memory that it doesn't need.
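As an illustration only (this is not kdb+ code), here is a small C sketch of the classification logic described above; the field names mirror the .Q.w[] output and all thresholds are hypothetical examples, so tune them to your own limits.
#include <stdio.h>

typedef struct {
    long long used, heap, peak, wmax;   /* bytes, as reported by .Q.w[] */
} mem_stats;

/* 0 = fine, 1 = behaviour 1 (spiky but little retained), 2 = behaviour 2 (steady growth) */
static int classify(mem_stats m) {
    double peak_of_limit = m.wmax > 0 ? (double)m.peak / (double)m.wmax : 0.0;
    double used_of_heap  = m.heap > 0 ? (double)m.used / (double)m.heap : 0.0;

    if (peak_of_limit > 0.5)
        printf("alert: peak is %.0f%% of the -w limit\n", 100.0 * peak_of_limit);

    if (used_of_heap > 0.8 && peak_of_limit > 0.7)
        return 2;   /* used and heap both high relative to their ceilings */
    if (used_of_heap < 0.5 && peak_of_limit > 0.5)
        return 1;   /* big spikes but comparatively little retained memory */
    return 0;
}

int main(void) {
    /* sample values from the .Q.w[] output quoted in the question (wmax 0 = no -w limit set) */
    mem_stats m = { 108432, 67108864, 67108864, 0 };
    printf("behaviour class: %d\n", classify(m));
    return 0;
}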

Load 160 MB of data, Matlab memory usage jumps from 0.5 GB to 1.3 GB

When I clear all, the Matlab memory usage drops to 0.5 GB. I then load a *.mat file whose memory requirement is dominated by an object requiring 161 MB (from whos). The Matlab memory usage jumps to 1.3 GB. I then run a script that processes the data, creating small data objects/structures along the way. From whos, nothing rivals the 161 MB object in terms of memory footprint. However, Matlab's memory footprint creeps up to 2.2 GB during processing, then settles down to 1.6 GB when done. whos still shows the loaded object as the overwhelmingly dominant memory user.
Why does Matlab use so much more memory than the data it is processing? It's about 1000 times more. Is this just to give it space for intermediate results?
I'm using Windows 7, 64-bit. My code is a pretty simple post-processing script that tallies up some of the loaded data. It invokes no user-defined functions or third-party tools. I understand that readers can't analyze my code to track down the specific causes, but is a 1000x memory footprint typical? What are typical reasons for this?

Matlab: Total memory usage returned by "whos" doesn't match "Memory used by Matlab" returned by "memory"

I have a script that sometimes crashes and sometimes returns an "out of memory" message. While investigating, I discovered that when adding up the memory allocated to my variables, as returned by "whos", I get a total of about 380 MB. On the other hand, "memory" returns 1196 MB as the total memory used by Matlab. I know the program itself takes up some memory, but I'm thinking it should not be as much as 800 MB. Does anyone have an idea of where the rest of my memory is being used? Thanks in advance for any advice.
Steve
The rest of the memory is probably used by the MATLAB program itself. Unfortunately, MATLAB could consume 800 MB, since it probably has memory leaks.

Memory alignment on iPhone and Android

As I've discovered from my tests, the iPhone's malloc has 16-byte alignment. But I'm not sure whether that is guaranteed or just a coincidence.
So the question is: what is the guaranteed memory alignment when using malloc() on iOS and Android (NDK)?
On iOS, the alignment is currently 16 bytes, as you've discovered. However, that is neither guaranteed nor documented, so don't rely on it.
Assuming it is available on iOS, posix_memalign() allows for the allocation of specifically aligned memory. There may be other mechanisms.
Android is not my bailiwick.
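Assuming your toolchain exposes posix_memalign() (POSIX.1-2001), a minimal sketch of requesting explicitly aligned memory might look like this; the 64-byte alignment is just an example.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    void *p = NULL;
    /* the alignment must be a power of two and a multiple of sizeof(void *) */
    if (posix_memalign(&p, 64, 1024) != 0) {
        fprintf(stderr, "aligned allocation failed\n");
        return 1;
    }
    printf("p = %p, p %% 64 = %lu\n", p, (unsigned long)((uintptr_t)p % 64));
    free(p);
    return 0;
}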
malloc in C returns a pointer to a block of memory "which is suitably aligned for any kind of variable"; whether this alignment is or will remain at 16 bytes is not guaranteed even for different versions of the same OS. Objective-C is effectively based on C, so that should hold true there too.
If your Android NDK code is written in C or C++, the same should hold true there.
I would agree that you still shouldn't rely on this, but for something like a hash function it's pretty helpful.
It is actually documented:
https://developer.apple.com/library/ios/#documentation/performance/Conceptual/ManagingMemory/Articles/MemoryAlloc.html
Excerpt:
Allocating Small Memory Blocks Using Malloc
For small memory allocations, where small is anything less than a few virtual memory pages, malloc sub-allocates the requested amount of memory from a list (or “pool”) of free blocks of increasing size. Any small blocks you deallocate using the free routine are added back to the pool and reused on a “best fit” basis. The memory pool is itself comprised of several virtual memory pages that are allocated using the vm_allocate routine and managed for you by the system.
When allocating any small blocks of memory, remember that the granularity for blocks allocated by the malloc library is 16 bytes. Thus, the smallest block of memory you can allocate is 16 bytes and any blocks larger than that are a multiple of 16. For example, if you call malloc and ask for 4 bytes, it returns a block whose size is 16 bytes; if you request 24 bytes, it returns a block whose size is 32 bytes. Because of this granularity, you should design your data structures carefully and try to make them multiples of 16 bytes whenever possible.
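On Apple platforms you can observe this granularity with malloc_size() from <malloc/malloc.h>, which reports the usable size of a block. This is a small, Apple-specific sketch, and the exact rounding may differ between allocator versions.
#include <stdio.h>
#include <stdlib.h>
#include <malloc/malloc.h>

int main(void) {
    size_t requests[] = { 4, 16, 24, 100 };
    for (size_t i = 0; i < sizeof requests / sizeof requests[0]; i++) {
        void *p = malloc(requests[i]);
        /* malloc_size reports the usable size actually reserved for the block */
        printf("requested %3zu bytes -> block of %3zu bytes\n", requests[i], malloc_size(p));
        free(p);
    }
    return 0;
}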