Every RAM must have a stack and a heap (like the four CS, ES, DS, SS segments). But is there a stack size on the iPhone, or is only the heap available? Some tutorials say that when we increase the stack size the heap decreases, and when we increase the heap size the stack decreases. Is that true, or are the stack size and heap size fixed? Any help please?
RAM does not have a stack and a heap (these are constructs used by programs, not physical parts of memory), nor do the Intel segment registers apply to ARM.
Each thread of an application has a stack, since an app is ultimately a C program. The stack size is bounded on the device and in most cases cannot exceed a certain size (1MB for the main thread on iPhone OS), nor can it be shrunk.
The heap is also limited. The only way your stack size can influence the available heap is by creating threads, whose new stacks occupy memory that could otherwise be used by the heap allocator. On iPhone OS, the minimum stack size for a new thread is 16KB. For more information, read the threading documentation.
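Not from the answer, but as an illustration: a minimal C sketch using plain POSIX threads (which iOS supports) to create a secondary thread with an explicit stack size. The 64KB figure is only an example value, not a recommendation.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    /* large local buffers here would live on this thread's smaller stack */
    printf("worker running\n");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* stack size must be a multiple of the page size and at least the
       platform minimum (16KB for secondary threads on iPhone OS) */
    pthread_attr_setstacksize(&attr, 64 * 1024);

    int err = pthread_create(&tid, &attr, worker, NULL);
    if (err != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", err);
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}

Every stack created this way is memory the heap allocator can no longer hand out, which is the interaction described above.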
I was reading about paging and swap-space and I'm a little confused about how much space (and where) on the hard disk is used to page out / swap out frames. Let's think of the following scenario:
We have a single process which progressively uses newer pages in virtual memory. Each time for a new page, we allocate a frame in physical memory.
But after a while, frames in the physical memory get exhausted and we choose a victim frame to be removed from RAM.
I have the following doubts:
Does the victim frame get swapped out to the swap space or paged out to some different location (apart from swap-space) on the hard-disk?
From what I've seen, swap space is usually around 1-2x size of RAM, so does this mean a process can use only RAM + swap-space amount of memory in total? Or would it be more than that and limited by the size of virtual memory?
Does the victim frame get swapped out to the swap space or paged out to some different location (apart from swap-space) on the hard-disk?
It gets swapped out to the swap space; that is exactly what swap space is for. A system without swap space cannot use this feature of virtual memory, but it still gets the other benefits, such as avoiding external fragmentation and memory protection.
From what I've seen, swap space is usually around 1-2x size of RAM, so does this mean a process can use only RAM + swap-space amount of memory in total? Or would it be more than that and limited by the size of virtual memory?
The total memory available to a process is RAM + swap space. Imagine a computer with 1GB of RAM and 1GB of swap space, and a process that requires 3GB. The process's virtual memory needs exceed what is available, and this cannot work: eventually the process will touch all of that code and data, and the computer simply will not have enough space to hold it, so the process will crash.
There are really only two places a resident part of the process can live: in RAM or in swap space. If there is no room for your process in either, the kernel has nowhere else to put it, so it kills the process.
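As a rough illustration of that limit (my sketch, not part of the answer): a C loop that keeps allocating and touching memory until the system gives up. On systems that overcommit, malloc may keep succeeding and the process may instead be killed by the out-of-memory handling once the pages are actually touched.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    const size_t chunk = 64 * 1024 * 1024;   /* 64MB per step */
    size_t total = 0;

    for (;;) {
        void *p = malloc(chunk);
        if (p == NULL) {
            printf("malloc failed after ~%zu MB\n", total / (1024 * 1024));
            break;
        }
        /* touch the pages so they need real backing in RAM or swap */
        memset(p, 0xAB, chunk);
        total += chunk;
        printf("committed ~%zu MB\n", total / (1024 * 1024));
    }
    return 0;
}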
From this page we know that .Q.w[] gives us for example:
used| 108432 / bytes malloced
heap| 67108864 / heap bytes available
peak| 67108864 / heap high-watermark in bytes
wmax| 0 / workspace limit from -w param
mmap| 0 / amount of memory mapped
syms| 537 / number of symbols interned
symw| 15616 / bytes used by 537 symbols
If I wanted to monitor the instance for memory issues (eg. memory full) should I be looking at used or heap or a combination?
If you want to monitor how much memory is currently being used, look at used, but bear in mind it is only a rough estimate of actual usage, as it doesn't take into account the memory used by interned strings (symbols) or memory-mapped files.
Monitoring the heap is useful for getting a sense of how your memory spikes (and peak gives the maximum spike), but it isn't necessarily ideal for telling you that you are close to your limit: if you have a big memory spike that hits the limit, the process will die before you have a chance to observe that the spike was close to the limit.
Ultimately I would monitor both (and peak) and allow yourself buffers in both cases. Have a low-level alert if heap/peak reaches, say, 50% of the limit, with higher-severity alerts at 60%, 70%, etc. Then also monitor used as a percentage of heap/peak. If used is a high percentage of your heap, and your heap is a high percentage of your limit, that could be alarming. Essentially your process could be in one of two states:
1. Low-medium memory usage but spiking: used is generally a low-to-medium percentage of heap/peak, so the process is using a modest amount of memory but spiking. This is pretty harmless and expected when crunching a lot of data.
2. used is a high % of heap/peak, and heap/peak is a high % of the max: here you might have a process that keeps accumulating memory without releasing it, so used grows continually and heap/peak grows with it. This is a problem if left unchecked.
So essentially you want to capture behaviour 2 while allowing behaviour 1.
There are other behaviour patterns too, but that is the general gist. Whether or not automatic garbage collection is enabled also plays into it: if automatic garbage collection isn't enabled and used is a lot less than heap, then the process is holding on to memory it doesn't need.
Does a number assigned to a variable always fit in the allocated amount of RAM?
When a local variable is initialized, it is placed on the stack, not the heap. Though both the stack and the heap are part of memory, we usually only talk about allocation with respect to the heap, because the stack is controlled entirely by the running program and no calls to the OS are required to push anything onto it.
All that being said, there is a maximum size the stack can grow to, and once we grow past that size we get (fittingly) a "stack overflow". So, yes, there is a point at which creating another variable will not fit, but "allocation" is the wrong term for it.
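To make that concrete (my sketch, not the answer's), unbounded recursion in C keeps pushing stack frames until the fixed-size stack is exhausted and the process crashes, typically with SIGSEGV. Build without optimizations, since an optimizer may transform or remove the recursion.

static int recurse(int depth) {
    volatile char pad[1024];              /* each call reserves ~1KB of stack */
    pad[0] = (char)depth;
    /* the addition below keeps this from being a tail call, so every level
       keeps its frame alive; eventually the stack overflows */
    return recurse(depth + 1) + pad[0];
}

int main(void) {
    return recurse(0);                    /* crashes instead of returning */
}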
As I've discovered from my tests, iPhone's malloc has 16-byte alignment, but I'm not sure whether that is guaranteed or just a coincidence.
So the question is: what is the guaranteed memory alignment when using malloc() on iOS and Android(NDK)?
On iOS, the alignment is currently 16 bytes, as you've discovered. However, that isn't guaranteed, nor documented. I.e. don't rely on it.
Assuming it is available on iOS, posix_memalign() allows for the allocation of specifically aligned memory. There may be other mechanisms.
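For example, assuming posix_memalign() is available in your SDK (it is part of POSIX; the 64-byte alignment below is just an illustrative value), you can request a specific alignment instead of relying on malloc's default:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *p = NULL;
    /* alignment must be a power of two and a multiple of sizeof(void *) */
    int err = posix_memalign(&p, 64, 1024);
    if (err != 0) {
        fprintf(stderr, "posix_memalign failed: %d\n", err);
        return 1;
    }
    printf("p = %p, p %% 64 = %lu\n", p, (unsigned long)((uintptr_t)p % 64));
    free(p);
    return 0;
}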
Android is not my bailiwick.
malloc in C returns a pointer to a block of memory "which is suitably aligned for any kind of variable"; whether this alignment is, or will remain, 16 bytes is not guaranteed, even across different versions of the same OS. Objective-C is effectively based on C, so that should hold true there too.
If your Android NDK code is written in C or C++, the same should hold true there.
I would agree that you still shouldn't rely on this, but for something like a hash function it's pretty helpful.
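One hedged way to act on that: rather than assuming 16-byte alignment, measure what the allocator actually returns on the devices you care about before relying on it in something like a hash function. A small C sketch:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    for (size_t sz = 1; sz <= 64; sz *= 2) {
        void *p = malloc(sz);
        if (p == NULL)
            return 1;
        printf("malloc(%zu) -> %p (16-byte aligned: %s)\n",
               sz, p, ((uintptr_t)p % 16 == 0) ? "yes" : "no");
        free(p);
    }
    return 0;
}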
It is actually documented -
https://developer.apple.com/library/ios/#documentation/performance/Conceptual/ManagingMemory/Articles/MemoryAlloc.html
Excerpt:
Allocating Small Memory Blocks Using Malloc
For small memory allocations, where small is anything less than a few virtual memory pages, malloc sub-allocates the requested amount of memory from a list (or “pool”) of free blocks of increasing size. Any small blocks you deallocate using the free routine are added back to the pool and reused on a “best fit” basis. The memory pool is itself comprised of several virtual memory pages that are allocated using the vm_allocate routine and managed for you by the system.
When allocating any small blocks of memory, remember that the granularity for blocks allocated by the malloc library is 16 bytes. Thus, the smallest block of memory you can allocate is 16 bytes and any blocks larger than that are a multiple of 16. For example, if you call malloc and ask for 4 bytes, it returns a block whose size is 16 bytes; if you request 24 bytes, it returns a block whose size is 32 bytes. Because of this granularity, you should design your data structures carefully and try to make them multiples of 16 bytes whenever possible.
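Not part of the excerpt: on Apple platforms you can observe this granularity yourself with malloc_size() from <malloc/malloc.h>, which reports a block's usable size. The exact numbers printed may differ on newer allocator versions; this is only a sketch.

#include <malloc/malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(4);    /* request 4 bytes */
    void *b = malloc(24);   /* request 24 bytes */
    /* per the excerpt, these should come back as 16- and 32-byte blocks */
    printf("malloc(4)  -> usable size %zu\n", malloc_size(a));
    printf("malloc(24) -> usable size %zu\n", malloc_size(b));
    free(a);
    free(b);
    return 0;
}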
How is the maximum size of the stack and the heap set internally? How can we determine their maximum sizes? I am not using this in any of my projects; it's just out of curiosity.
iPhone/iOS has support for virtual memory (just no backing store in normal use) and a virtual address space much larger than physical RAM. So there is no fixed maximum for either the stack or the heap: each can grow until the sum of all (possibly dirty) memory use, in allocated pages, exhausts what is available to the current app process/sandbox, which will vary depending on what else is running on the system.
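One way to see the stack bound in practice (a sketch assuming an iOS or macOS toolchain, where the non-portable pthread_get_stacksize_np() is available):

#include <pthread.h>
#include <stdio.h>

int main(void) {
    /* reports the stack size reserved for the calling thread */
    size_t stack = pthread_get_stacksize_np(pthread_self());
    printf("this thread's stack size: %zu bytes\n", stack);
    return 0;
}

The heap has no equivalent fixed number; as noted above, it is bounded by what the process/sandbox can get from the system at the time.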