Memory alignment on iPhone and Android

As I've discovered from my tests, iPhone's malloc has 16-byte alignment, but I'm not sure whether that is guaranteed or just a coincidence.
So the question is: what memory alignment is guaranteed when using malloc() on iOS and Android (NDK)?

On iOS, the alignment is currently 16 bytes, as you've discovered. However, that is neither guaranteed nor documented, so don't rely on it.
Assuming it is available on iOS, posix_memalign() lets you allocate memory with a specific alignment. There may be other mechanisms as well.
Android is not my bailiwick.
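For illustration, a minimal sketch of posix_memalign() usage, assuming it is available on your iOS/NDK toolchain (the alignment must be a power of two and a multiple of sizeof(void *)):

    #include <stdlib.h>

    /* Request 1024 bytes aligned to a 16-byte boundary. */
    void *alloc_aligned_example(void)
    {
        void *buf = NULL;
        /* posix_memalign returns 0 on success and an error number on failure;
           it does not set errno and may leave buf untouched on failure. */
        if (posix_memalign(&buf, 16, 1024) != 0)
            return NULL;
        return buf;   /* release with free() as usual */
    }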

malloc in C returns a pointer to a block of memory "which is suitably aligned for any kind of variable"; whether that alignment is, or will remain, 16 bytes is not guaranteed, even between versions of the same OS. Objective-C is effectively based on C, so the same holds true there.
If your Android NDK code is written in C or C++, the same should hold true there as well.
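If you just want to see what alignment your particular malloc happens to hand out (which, again, is an observation, not a guarantee), a quick test along these lines works in plain C:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Print the low bits of a few allocation addresses; a remainder of 0
           for every size suggests (but does not prove) 16-byte alignment. */
        for (size_t sz = 1; sz <= 64; sz *= 2) {
            void *p = malloc(sz);
            printf("malloc(%zu) -> %p, address %% 16 = %u\n",
                   sz, p, (unsigned)((uintptr_t)p % 16));
            free(p);
        }
        return 0;
    }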

I would agree that you still shouldn't rely on this, but for something like a hash function it's pretty helpful.
It is actually documented -
https://developer.apple.com/library/ios/#documentation/performance/Conceptual/ManagingMemory/Articles/MemoryAlloc.html
Excerpt:
Allocating Small Memory Blocks Using Malloc
For small memory allocations, where small is anything less than a few virtual memory pages, malloc sub-allocates the requested amount of memory from a list (or “pool”) of free blocks of increasing size. Any small blocks you deallocate using the free routine are added back to the pool and reused on a “best fit” basis. The memory pool is itself comprised of several virtual memory pages that are allocated using the vm_allocate routine and managed for you by the system.
When allocating any small blocks of memory, remember that the granularity for blocks allocated by the malloc library is 16 bytes. Thus, the smallest block of memory you can allocate is 16 bytes and any blocks larger than that are a multiple of 16. For example, if you call malloc and ask for 4 bytes, it returns a block whose size is 16 bytes; if you request 24 bytes, it returns a block whose size is 32 bytes. Because of this granularity, you should design your data structures carefully and try to make them multiples of 16 bytes whenever possible.
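You can also observe that granularity directly on Apple platforms with malloc_size() from <malloc/malloc.h>, which reports the usable size of the block actually handed back (the exact numbers in the comments are typical, not guaranteed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <malloc/malloc.h>   /* Apple-specific */

    int main(void)
    {
        void *a = malloc(4);     /* typically reports a 16-byte block */
        void *b = malloc(24);    /* typically reports a 32-byte block */
        printf("requested 4  -> block of %zu bytes\n", malloc_size(a));
        printf("requested 24 -> block of %zu bytes\n", malloc_size(b));
        free(a);
        free(b);
        return 0;
    }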

Related

Difference between paging and segmentation

I am trying to understand both paradigms of memory management; however, I fail to see the big picture and the difference between the two. Paging consists of taking fixed-size pages from secondary to primary storage in order to do some task requested by a process. Segmentation consists of assigning to each unit in a process an address space, so they are allowed to grow. I don't quite see how they are related, and that's because there are still a lot of holes in my understanding. Can someone fill them in?
I think you have something confused. One problem is that the term "segment" has had multiple meanings.
Segmentation is a method of memory management. Memory is managed in segments that are of variable or fixed length, depending upon the processor. Segments originated on 16-bit processors as a means to access more than 64K of memory.
On the PDP-11, programmers used segments to map different memory into the 64K address space. At any given time a process could only access 64K of memory but the memory that made up that 64K could change.
The 8086 and its successors used segments with base registers. Each segment could have 64K (that grew with the processors) but a process could have 4 segments (more in later processors).
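To make that concrete, in 8086 real mode the 16-bit segment value was shifted left 4 bits and added to the 16-bit offset to form a 20-bit physical address; a small sketch of the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real-mode address translation: physical = segment * 16 + offset. */
    static uint32_t phys_addr(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* Different segment:offset pairs can name the same physical byte. */
        printf("1234:0010 -> %05X\n", (unsigned)phys_addr(0x1234, 0x0010)); /* 12350 */
        printf("1235:0000 -> %05X\n", (unsigned)phys_addr(0x1235, 0x0000)); /* 12350 */
        return 0;
    }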
Paging allows a process to have a larger address space than there is physical memory available.
The 8086's successors used the kludge of paging on top of segments. However, that bit of ugliness has finally gone away in 64-bit mode.
You got your answer right there: paging relates to fixed-size pages in storage, while segmentation deals with the units in a process. 'Segments' are objects in the class 'Page'.

For a 2 GB memory, suppose its memory width is 8 bits: what is the address space of the memory?

For a 2 GB memory, suppose its memory width is 8 bits…
what is the address space of the memory?
what is the address width of the memory?
I’m not looking for the answer to question, I’m just trying to understand the process of how to get there.
EDIT: all instances of Gb replaced with GB
The address space is the same as the memory size. This was not true (for example) in 32-bit operating systems that had more than 2^32 bytes of memory installed. Since the number of bits used for addressing is not specified in your question, one can only assume it is sufficient to address the installed memory. To contrast, while you could install more than 4 GB in a 32-bit system, you couldn't access more than 4 GB, since (2^32)-1 is the location of the last byte you could access. Bear in mind that this address space must also include video memory and any/all BIOSes in the system, which is why, in 32-bit Windows XP, MS limited the amount of user-accessible memory to a figure considerably less than 4 GB.
Since the memory width is 8 bits, each address points to 1 byte. Since you've got 2 GB, the number of bits used for addressing must be large enough to point to any one of those bytes.
Spoiler:
Your address space is 2 GB, and you need 31-bit-wide addresses to use it all.
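To spell out the arithmetic behind that spoiler: with an 8-bit memory width each address names exactly one byte, so 2 GB of memory means 2^31 distinct addresses, which takes 31 bits to encode. A tiny sketch of the calculation:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t bytes = 2ULL * 1024 * 1024 * 1024;  /* 2 GB, one address per byte */
        unsigned bits = 0;
        while ((1ULL << bits) < bytes)               /* smallest width that covers it */
            bits++;
        printf("%llu addressable bytes -> %u-bit addresses\n",
               (unsigned long long)bytes, bits);     /* 2147483648 -> 31 */
        return 0;
    }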

Dynamic allocation in kernel space

I have been trying to allocate space using malloc in kernel space for a driver I am working on (using malloc is a constraint here; I am not allowed to allocate space in any other manner), but if I try to allocate "too many" elements (~500 instances of a very small struct), only a fraction of the space I requested is actually allocated.
Reducing the number of allocated elements worked for me with no problems. Does dynamic allocation in kernel space have limits that might be causing the behavior I am seeing?
malloc is a user space library function. You cannot use it in kernel space. There is a function called kmalloc() which is used to allocate memory in kernel space.
You can also use vmalloc(). I suggest you read this thread, What is the difference between vmalloc and kmalloc?, for some clarification on vmalloc() and kmalloc().
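For reference, a minimal sketch of what the allocation might look like with kmalloc() (the struct name and helper are placeholders based on your description; always check the return value, and remember that kmalloc sizes are more limited than vmalloc):

    #include <linux/slab.h>

    struct my_elem {            /* placeholder for your small struct */
        int value;
    };

    static struct my_elem *alloc_elems(size_t count)
    {
        /* GFP_KERNEL may sleep; use GFP_ATOMIC in interrupt context. */
        struct my_elem *arr = kmalloc(count * sizeof(*arr), GFP_KERNEL);
        if (!arr)
            return NULL;        /* allocation failed; caller must handle it */
        return arr;             /* release with kfree(arr) when done */
    }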
I also suggest searching SO before asking, because someone has already asked this here.

Maximum array size in objective C on iPhone?

I have a VERY large array (96,000 elements of type GLfloat). It was previously 24,000 elements, until I made a couple of changes. Now I'm getting a crash. I haven't done much to debug it yet, but when I noticed how ridiculously large one of my arrays was getting I thought it might be worth looking into. So, my only question is whether 96,000 elements (or 384,000 bytes) is too much for a single array?
That should be fine on the heap, but you should avoid allocations of that size on the stack. So use malloc/free (or new[]/delete[] in C++) to create and destroy an array of that size.
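For example, a heap allocation of that size (GLfloat is just a typedef for float; the count comes from the question):

    #include <stdlib.h>

    typedef float GLfloat;      /* stand-in if the OpenGL ES headers aren't included */

    GLfloat *make_vertex_buffer(void)
    {
        /* 96,000 floats = 384,000 bytes: fine on the heap, risky as a stack array. */
        GLfloat *vertices = malloc(96000 * sizeof(*vertices));
        if (vertices == NULL)
            return NULL;        /* out of memory */
        return vertices;        /* release with free(vertices) when done */
    }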
If the device is low on memory, you can expect requests for large amounts of memory to occasionally return NULL. There are applications (such as photo/image processing) which request allocations of tens of megabytes -- many times greater than your 384 KB allocation.
There is no upper bound on the size of an array, save the amount of available RAM on the device.
I don't think it's too big. Some image resources would take up that much or more contiguous space without problem. For example, a 400x400px image would take about 160,000*4 = 640,000 bytes of memory. I think the problem is somewhere else.

Memory Warning but Small Live Bytes

In my application, I get a level 1 and then a level 2 memory warning, and finally a crash, after repeating some action (choosing a picture + processing) several times.
The Leaks tool doesn't show any leak. I'm also following the Allocations tool in Instruments: my live bytes are roughly 4 MB, and overall I allocate 113 MB. At most I have maybe 20 MB in memory when the picture is loaded.
Since I have to repeat an action to get to the crash, it is very likely a memory leak. However, I don't know how to locate it, since my live bytes are only 4 MB and only things that are supposed to be allocated remain (apart from a small leak of ~100 KB in the UIImagePickerController).
How much can I trust the memory leak/allocation tools? Would you have an advice to help me locate the reason of the problem?
I don't know how iPhone OS works, so this is basically just guessing, but in systems where no garbage collector compacts the heap memory, the heap becomes fragmented over time. Having a lot of memory free does not mean that a lot of contiguous memory is free.
For example, if you always need 4MB of memory for some processing, and you have this allocation pattern:
Allocate 4MB
Allocate 1KB
Free 4MB
Allocate 1KB
(You don't free the 1KB blocks because they hold the computation results, or whatever.)
You may end up with just under 4 MB of free contiguous memory, so the next time you allocate 4 MB, it has to be placed after the gap, even though the request almost fits. This means you can run out of memory even though almost the entire memory (or rather, address space) is free.
Granted, modern systems shouldn't suffer from this problem, but they may, especially if the application is never shut down and does not have a compacting garbage collector. Note that some systems have a low-fragmentation heap especially for situations like this (re-allocating and freeing blocks of the same size), but you usually need to explicitly request it.
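A sketch of that pattern in C (whether the final request actually fails depends entirely on the allocator and the rest of the process's heap, so treat this as an illustration rather than a reproducible test):

    #include <stdlib.h>

    void fragmentation_pattern(void)
    {
        void *work  = malloc(4 * 1024 * 1024);  /* 4 MB working buffer */
        void *keep1 = malloc(1024);             /* small result kept alive */
        free(work);                             /* leaves a ~4 MB hole */
        void *keep2 = malloc(1024);             /* may land inside that hole */
        void *again = malloc(4 * 1024 * 1024);  /* the hole may now be too small,
                                                   forcing use of fresh address space */
        free(keep1);
        free(keep2);
        free(again);
    }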