Can a block of memory be partially freed? - iPhone

I'm writing a very memory intensive program that will have dozens of malloc'd arrays. When the app receives a low memory warning, I want to dump the lower half of each of these arrays. Is there any way to do this?
I need some way that I can preserve half of the memory in each array. Obviously, if the app has low memory, I can't allocate a smaller array, copy half of my data into it, and then free the old array. Is there any function that can free a block of memory starting at pointer A and ending at pointer B or something like that?

realloc() can return the trailing portion of a malloc'd block to the allocation pool, but it can't return that memory to the OS.
realloc() also won't help with memory fragmentation, which is likely to be a problem in a low-memory situation.
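If the half you want to keep is the trailing half (as in the question), one workaround under those constraints is to move it to the front of the block and then shrink with realloc. This is only a minimal sketch; the function and variable names are hypothetical:

/* Sketch: keep the upper half of a malloc'd array by moving it to the front
 * of the block and shrinking with realloc(). As noted above, the trailing
 * memory goes back to the allocator, not necessarily to the OS. */
#include <stdlib.h>
#include <string.h>

static float *shrink_to_upper_half(float *buf, size_t count, size_t *new_count)
{
    size_t keep = count / 2;                        /* elements to preserve */
    memmove(buf, buf + (count - keep), keep * sizeof *buf);

    /* A shrinking realloc normally succeeds in place, but check the result anyway. */
    float *smaller = realloc(buf, keep * sizeof *buf);
    if (smaller == NULL && keep > 0) {
        smaller = buf;                              /* shrink failed; keep the old block */
    }
    *new_count = keep;
    return smaller;
}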

If they are NSMutableArrays, you can replace the objects in the lower end with a single instance of [NSNull null], thereby releasing all of those objects.
NSNull Class Reference
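A minimal sketch of that approach, assuming manual reference counting and a hypothetical function name:

#import <Foundation/Foundation.h>

// Sketch: replace the first half of a mutable array's contents with the shared
// NSNull singleton so the original objects can be released while the array's
// indices stay valid.
static void DumpLowerHalf(NSMutableArray *array)
{
    NSUInteger half = [array count] / 2;
    NSNull *placeholder = [NSNull null];            // shared singleton, costs nothing extra
    for (NSUInteger i = 0; i < half; i++) {
        [array replaceObjectAtIndex:i withObject:placeholder];
        // Under manual reference counting the array releases each replaced
        // object here; if nothing else retains it, its memory is recovered.
    }
}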

Related

What is advantage of using arrayWithCapacity than using array?

It is not necessary to specify the array size when creating an array, right?
Then why is arrayWithCapacity necessary?
And if I set the capacity smaller than what is actually needed, is that OK?
arrayWithCapacity is an optimization - it is not necessary. If you know the number of elements ahead of time, the system can allocate storage in one system call and in one chunk of memory. Otherwise, the system has to resize the array later as you add more elements and that tends to be slow, requiring additional allocations and possibly copying data from the old buffer to the new buffer.
array creates an empty array (and allocs memory when you add an object) while arrayWithCapacity creates an array with enough memory allocated to hold those objects, but you can always expand it when needed.
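To make the difference concrete, here is a hedged sketch (the element count of 10,000 is an arbitrary illustration):

#import <Foundation/Foundation.h>

static void FillArrays(void)
{
    NSUInteger itemCount = 10000;

    // With a capacity hint: storage for roughly itemCount elements is reserved up front.
    NSMutableArray *hinted = [NSMutableArray arrayWithCapacity:itemCount];

    // Without a hint: the array starts empty and grows (occasionally reallocating
    // and copying its internal buffer) as objects are added.
    NSMutableArray *plain = [NSMutableArray array];

    for (NSUInteger i = 0; i < itemCount; i++) {
        [hinted addObject:[NSNumber numberWithUnsignedInteger:i]];
        [plain  addObject:[NSNumber numberWithUnsignedInteger:i]];
    }
    // Exceeding the hinted capacity later is fine; the array simply grows as needed.
}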

iPhone/Instruments: what are the "malloc" entries in the object summary?

I'm performance tuning my iPhone/iPad app, and it seems that not all of the memory that should be freed actually gets freed. In Instruments, after I simulate a memory warning in the simulator, there are lots of "Malloc" entries left; what are they about? Can I get rid of them? What do they mean / what do they stand for?
Thanks a lot,
Stefan
At any time, your app will have a (possibly huge) number of live objects, even after getting a memory warning (and the subsequent memory recovery by the operating system). So it is pretty common to see many of those malloc entries.
They are not in themselves a sign that something is wrong with memory allocation; they may simply reflect the fact that your program is running.
Also have a look at this S.O. topic to learn more about the object allocation tool.
Furthermore, there are many advanced techniques you can use to detect memory allocation problems.
Here you can find a great tutorial that will allow you to go well beyond what the Leaks tool offers.
EDIT:
As for the exact meaning of those mallocs: roughly speaking, there are two broad classes of objects you can allocate: Objective-C objects, created through the Obj-C runtime, and "plain" C objects, allocated through malloc.
Many objects of the second class are allocated (without you directly calling malloc) by system libraries and by the C standard library (think of sockets or file handles, for example). These C allocations have no type information associated with them, so Instruments simply shows you the size of the allocated memory block, without any further detail.
Often such malloc blocks are created on behalf of higher-level classes, so that when you recover the memory associated with their instances, the memory allocated through malloc is freed as well.
You should not worry about them specifically, unless you see that their overall size grows indefinitely as the program runs. In that case you first need to investigate the way you alloc/release your higher-level objects and understand where in your code things get stuck.
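As a rough illustration of how a higher-level object can own a plain malloc block (a sketch, not a claim about how any particular framework class is implemented internally):

#import <Foundation/Foundation.h>

static void AnonymousMallocExample(void)
{
    // NSMutableData typically allocates its backing store through the malloc
    // system; Instruments tends to show that block as "Malloc <size>" with no
    // class name attached.
    NSMutableData *buffer = [[NSMutableData alloc] initWithLength:512 * 1024];

    // ... use [buffer mutableBytes] as a scratch area ...

    [buffer release];   // pre-ARC: the anonymous malloc block goes away with it
}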

What do those question marks mean in Windbg?

I'm getting an access violation in a program. Windbg shows that the program is trying to read at 0x09015000. It shows question marks (??) next to the address. My question is, what do these question marks indicate. Do they mean the memory location was never allocated, i.e. it's not backed by any physical memory (or page file)? Or is it something else?
It means that the virtual address is bad. Possibly a bogus pointer (i.e. uninitialized garbage), freed memory, etc.
Do they mean the memory location was never allocated
That's one possibility. Other options:
it was allocated before, but has been freed (VirtualFree()); see the sketch after this list
it's not included in the crash dump you are analyzing. This may depend on the MINIDUMP_TYPE. Also, ProcDump has an option (-mp) to exclude memory regions larger than 512 MB.
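A small illustration of the "freed memory" case (a sketch only; the exact faulting address you would see in WinDbg depends on where the OS happened to map the page):

#include <windows.h>

int main(void)
{
    /* Reserve and commit one page, then release it back to the OS. */
    char *p = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    p[0] = 42;                      /* fine: the page is committed */

    VirtualFree(p, 0, MEM_RELEASE); /* the whole region is unmapped */

    return p[0];                    /* access violation: WinDbg shows ?? at this address */
}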

How expensive is it to create an NSAutoreleasePool

I have a method which needs to run in its own thread 88 times per second (it's a callback for an audio unit). Should I avoid creating an NSAutoreleasePool each time it's called?
Creating the NSAutoreleasePool itself shouldn't be too slow, but if there are a lot of objects to be dealloc'ed when you drain the pool, that could start to get slow. It's probably worth profiling how long the pool drains are taking.
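For reference, the per-callback pattern being discussed looks roughly like this (a sketch only: the signature is the standard AURenderCallback shape, but the body and name are illustrative):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Illustrative render callback; setting up the audio unit itself is omitted.
static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // Pre-ARC form: an explicit pool object, drained before returning.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // ... fill ioData; any autoreleased objects created here land in `pool` ...

    [pool drain];
    return noErr;

    // Equivalent modern form (the @autoreleasepool statement mentioned later
    // in this thread):
    //   @autoreleasepool {
    //       ... fill ioData ...
    //   }
}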
Assuming that you've just come back from Instruments or Shark with concrete evidence that autorelease pools really are a performance concern in your app…
Creating your own autorelease pools is an answer to a dilemma. You do it when you are creating a lot of objects, in order to not create too many at once and either enter paging hell (on the Mac) or get a memory warning and/or termination (iPhone OS).
But autorelease pools are objects, too. They aren't free. The expense of a single autorelease pool is tiny, but in a loop where you're creating lots of objects, you're probably creating one pool every X objects, draining it, and creating another one for the next X objects.
Even then, the autorelease pools probably aren't that many and so won't add up to much. You should see this in your Instruments or Shark profile: Most of the time spent in -[NSAutoreleasePool drain] is, in turn, spent in -[NSObject release]. That's time you'll be spending whether you use an autorelease pool or not.
[EDIT: As of December 2011, autorelease pools can now be created without an object, with the @autoreleasepool statement. They probably are still not free (at least without ARC), but now they are even cheaper than before.]
So the real solution in such cases is simply to create fewer objects. This can mean:
Using and reusing buffers whenever possible, reallocating a previously-used buffer when the needed size changes. You may want to use the malloc_good_size function to round up the size, making it less likely that you'll need to reallocate (you can skip reallocating if the old needed size and the new needed size both round up to the same number). You may also consider only growing the buffer, never shrinking it (see the sketch after this list).
Using and reusing mutable objects. For example, if you build up a string and then write it out to a document, instead of releasing it and creating a new one, delete its entire contents, or replace the entire old contents with the first portion of the “new” string.
Adjusting the value of X (your pool-disposal threshold). Higher X means more momentary memory consumption, but fewer pools created and thrown away. Lower X means more pools, but less risk of paging out or getting a memory warning. This is unlikely to make much of a difference except when you raise X too far, or lower it from being too high.
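Here is the buffer-reuse idea from the first bullet as a sketch; the scratch_buffer helper and its names are hypothetical, while malloc_good_size itself comes from <malloc/malloc.h> on Apple platforms:

#include <malloc/malloc.h>
#include <stdlib.h>

// Grow-only scratch buffer, rounded up with malloc_good_size so that small
// fluctuations in the requested size don't force a reallocation every time.
static void  *scratch      = NULL;
static size_t scratch_size = 0;

static void *scratch_buffer(size_t needed)
{
    size_t rounded = malloc_good_size(needed);
    if (rounded > scratch_size) {           /* only grow, never shrink */
        void *bigger = realloc(scratch, rounded);
        if (bigger == NULL) {
            return NULL;                    /* keep the old buffer on failure */
        }
        scratch = bigger;
        scratch_size = rounded;
    }
    return scratch;
}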
Please see Mike Ash's Performance Comparisons of Common Operations. When he tested in 10.5, creating and destroying an autorelease pool took 0.0003577 milliseconds.
If you can avoid it, do that. If you can’t, there’s no need to worry about it, autorelease pools are created and released quite quickly. If you need a precise answer, set up a simple test and measure (which is always a good idea when speaking about performance).

Memory leak tool tells me zero leaks but memory footprint keeps rising

I'm running through some memory profiling for my application in SDK 3.2. I used the 'Leak' profiler to find all my memory leaks, and I plugged all of them up. This is a scrollView/navigationController application where there are tiles; you click a tile, which goes to a new view of tiles, and so on. I can go many levels deep and come all the way back to the top, and the 'Leak' profiler says everything is cool.
However, if I watch the memory footprint in the 'ObjectAlloc' profiler the memory footprint goes up and up as I go deeper (which seems reasonable) but as I back out of the views the memory footprint doesn't go down as I'd expect.
I know this is a vague description of the app, but I can't exactly post a gazillion lines of code :) Also, it should be noted that I'm using Core Data to store image data as I go, so the database is growing in size as more nodes are chosen; I don't know if/when that is released from memory.
What gives?
This sounds like it could be one of a few things:
Memory not given back to OS after deallocation. This is a common design for C runtimes. When you do an allocation the C runtime allocates more memory for its use and returns a chunk of it for you to use. When you do a free the C runtime simply marks it as deallocated but doesn't return it back to the OS. Thus if the Leak Tool is reading OS level statistics rather than C runtime statistics the Leak tool will fail to report a corresponding decrease in memory usage.
Misleading values reported by the Leak tool. The Leak tool could be looking at different values than the C runtime and reporting values that cause you concern even though nothing is wrong (just as people try to use Task Manager on Windows to detect leaks and get very confused by the results, because it is a very poor tool for that job).
Fragmentation. It is possible that your application is suffering from memory fragmentation. That is, when you allocate, deallocate, and then allocate again, subsequent allocations may be larger than the "holes" left by the earlier deallocations. When this happens you fragment the memory space, leaving unusable holes, preventing large contiguous memory blocks, and forcing the use of more and more memory until you run out. This is a pathological condition and the fix is typically application specific.
I think the first of these three suggestions is most likely what is happening.
Depending on how you have your object graph constructed in Core Data, its memory use can grow unexpectedly large.
A common mistake is to store large blobs inside a complex and often-faulted (loaded into memory) entity. This causes the big blob to be loaded into (and remain in) memory whenever any other part of the entity is referenced. As your object graph grows, it eats more and more memory unless you actively delete objects and then save the graph.
For example: you have a Person entity with lots of text info (name, address, etc.) as well as a large photo. If you make the photo an attribute of the Person entity, it will be in memory any time the Person entity is faulted in. If you fetch the name attribute, the photo attribute is in memory as well.
To avoid this, blobs should be in their own entity and then linked to other entities via relationships. Since related objects are not faulted in until they are accessed directly, they can remain out of memory until needed.
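A hedged sketch of that layout, assuming hypothetical Person and Photo entities (Photo holding the binary imageData, Person having a to-one photo relationship) and a managed object context passed in by the caller:

#import <CoreData/CoreData.h>

static void LogPeople(NSManagedObjectContext *context)
{
    NSError *error = nil;
    NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
    [request setEntity:[NSEntityDescription entityForName:@"Person"
                                   inManagedObjectContext:context]];
    NSArray *people = [context executeFetchRequest:request error:&error];

    BOOL needThumbnail = NO;   // hypothetical flag: only fire the fault when true
    for (NSManagedObject *person in people) {
        NSString *name = [person valueForKey:@"name"];   // cheap; no blob loaded
        NSLog(@"person: %@", name);

        if (needThumbnail) {
            // Touching the relationship fires the Photo fault and pulls the
            // blob into memory only for this one object.
            NSData *imageData = [[person valueForKey:@"photo"]
                                             valueForKey:@"imageData"];
            (void)imageData;
        }
    }
}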
Just because there are no refcount-based leaks doesn't mean that you're not stuffing something away in a Dictionary "cache" and forgetting about it; such objects won't show up as leaks because there are still valid references to them (the dict is still valid, and so are the refs to all its children). You also need to look for valid, yet unnecessary, references to objects.
The easiest way is to just let it run for too long, then sort object counts by type and see who has a gigantic number - then, track down the reference graph (might be hard in Obj-C?). If Instruments doesn't do this directly, you can definitely write a DTrace script to do so.
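For example, a deliberately simplified cache like this one never appears in the Leaks instrument, because every image remains validly referenced by the dictionary, yet the footprint only grows until you prune it (the function name is made up for illustration):

#import <UIKit/UIKit.h>

// A grow-forever "cache": nothing here is leaked in the refcount sense,
// but nothing is ever evicted either, so memory use only goes up.
static NSMutableDictionary *imageCache = nil;

UIImage *CachedImageNamed(NSString *name)
{
    if (imageCache == nil) {
        imageCache = [[NSMutableDictionary alloc] init];
    }
    UIImage *image = [imageCache objectForKey:name];
    if (image == nil) {
        image = [UIImage imageNamed:name];
        if (image != nil) {
            [imageCache setObject:image forKey:name];   // valid reference, kept forever
        }
    }
    return image;
}

// One fix in the spirit of the answer above: empty the cache on a memory
// warning, e.g. call [imageCache removeAllObjects] from didReceiveMemoryWarning.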
To reiterate:
char *str1 = malloc(1000);
char *str2 = malloc(1000);
.
.
.
char *str1000 = malloc(1000);
is not a memory leak, but
char *str1 = malloc(1000);
str1 = malloc(1000); //Note! No free(str1) in between, so the first block is now unreachable
is a memory leak
The information on Core Data memory management is good info, and technically the answer by Arthur Kalliokoski is a good answer re: the difference between a leak and object allocation. My particular problem here is related to an apparently known bug with setBackgroundImage: on a button in the simulator; it creates a memory 'leak' in that it doesn't ever release the memory for the UIImage.
You can have a continuously growing program without necessarily leaking memory. Suppose you read words from the input and store them in dynamically allocated blocks of memory in a linked list. As you read more words, the list keeps growing, but all the memory is still reachable via the list, so there is no memory leak.
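A small sketch of that scenario: every node stays reachable from the list head, so a leak checker reports nothing, but the footprint grows with the input.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct node {
    char        *word;
    struct node *next;
};

int main(void)
{
    struct node *head = NULL;
    char word[256];

    /* Read whitespace-separated words until EOF, keeping every one. */
    while (scanf("%255s", word) == 1) {
        struct node *n = malloc(sizeof *n);
        if (n == NULL) break;
        n->word = strdup(word);     /* more heap used for every word read */
        n->next = head;             /* still reachable via the list: not a leak */
        head = n;
    }

    /* The list (and all its memory) remains reachable right up to exit. */
    size_t count = 0;
    for (struct node *n = head; n != NULL; n = n->next) count++;
    printf("stored %zu words\n", count);
    return 0;
}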