Avoiding autoreleased objects, good practice or overkill? - iphone

I am just curious about the following: when I am writing code I always try to manage the memory myself by sticking to non-autoreleased objects. I know this means that objects are not hanging around in the pool, but is doing it this way in general good practice, or simply overkill?
// User Managed Memory
NSSet *buttonSizes = [[NSSet alloc] initWithObjects:@"Go", @"Going", @"Gone", nil];
[barItemLeft setPossibleTitles:buttonSizes];
[barItemRight setPossibleTitles:buttonSizes];
[buttonSizes release];
// Autoreleased Memory
NSSet *buttonSizes = [NSSet setWithObjects:@"Go", @"Going", @"Gone", nil];
[barItemLeft setPossibleTitles:buttonSizes];
[barItemRight setPossibleTitles:buttonSizes];

Total overkill. If you read the relevant documentation carefully, it does not say that you should avoid autoreleasing objects. It says that you should avoid using autorelease pools when you are in tight, object-rich loops. In those cases, you should be managing your memory explicitly (with retain and release) to ensure that objects are created and destroyed in an amortized manner.
The argument that the iPhone is a memory-constrained environment is true, but a complete red herring. Objective-C, with the Foundation and Cocoa frameworks (though they weren't called that at the time), ran just fine on a NeXTcube, which had 16MB of RAM (expandable to 64). Even the iPhone 3G, which is pretty much EOL at this point, has 128MB.
edit
Since an iPhone app is a runloop-based application, a new autorelease pool is going to be created and destroyed every time it spins. This is clearly defined in the documentation. As such, the only reasons you'd have to create your own autorelease pool are:
Spawning a background thread, where you must create your own pool (since new threads, by default, do not have a base ARPool)
Performing some operation that will generate a significant number of autoreleased objects. In this case, creating and draining a pool would be to help ensure the limited lifetime of the temporary objects.
However, in the second case, you're recommended to explicitly manage memory as much as possible. This is so that your operation doesn't come to a screeching halt when it attempts to drain a pool with a few thousand objects in it. If you manage your memory manually, you can release these objects gradually as they are no longer needed, instead of saving them up for a single all-at-once release. You can also amortize the all-at-once release by nesting ARPools.
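For example, a nested pool drained periodically inside a tight loop looks roughly like this (a minimal pre-ARC sketch; the Record class, the records array, and processRecord: are hypothetical names, not from the question):

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
NSUInteger processed = 0;
for (Record *record in records) {      // hypothetical loop over many objects
    [self processRecord:record];       // creates autoreleased temporaries
    if (++processed % 100 == 0) {      // drain periodically to amortize the release
        [pool drain];
        pool = [[NSAutoreleasePool alloc] init];
    }
}
[pool drain];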
At the end of the day, though, just do what feels natural, and then (and only then) optimize it if you have concrete evidence that you need to do so.
edit #2
OK, it turns out that there is a recommendation to avoid using autorelease. BUT that recommendation is in the "Allocate Memory Wisely" section of the "Tuning for Performance and Responsiveness" area of the iOS Application Programming Guide. In other words, avoid it if you're measuring a performance problem. But seriously: autorelease has been around for ~20 years and did just fine on computers far slower and more constrained than the iDevices of today.

Unless you see performance issues (or memory issues), I wouldn't worry about using the autorelease pools. Furthermore, autorelease can in theory be more performant than explicit release. The really dangerous place to use autorelease is in large loops (for example, when importing a large data set). In these cases you can run out of the limited iPhone memory.
I'd recommend you only focus on removing autorelease usage once you've finished your application and can identify memory management issues using the built-in Instruments tools.

When possible I like to use the autorelease methods. It eliminates the chance that I forget to release them or somebody comes in and changes the flow of the code so that the explicit release is bypassed. I vote that self-managing the releases is overkill.

It depends on whether or not an autorelease pool is in place. For instance, if you're using Objective-C objects in an AudioUnit's render callback (which is called from a separate thread without an autorelease pool), your autoreleased objects will leak. In this case it's important that you release them manually, or wrap them in an autorelease pool.
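As a rough illustration, the callback can wrap its Objective-C work in its own pool like this (a sketch only; real-time audio code should avoid allocations, and the callback body here is purely illustrative):

#import <Foundation/Foundation.h>
#import <AudioUnit/AudioUnit.h>

static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Any autoreleased objects created here would otherwise leak,
    // because this thread has no pool of its own.
    NSString *label = [NSString stringWithFormat:@"bus %lu", (unsigned long)inBusNumber];
    (void)label; // ... use it, fill ioData ...

    [pool drain];
    return noErr;
}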
In general though, I think it's up to you and your coding style

There's a balance. It depends a lot on the situation. In that particular case, assuming the barItem* variables are long-lived ivars, avoiding autorelease is pretty close to 100% pointless, because the set will be sticking around regardless of whether you release it now or two seconds from now.
In fact, in most cases, it makes little difference whether objects are released now or on the next iteration of the runloop, because the runloop is going so fast that it's actually capable of creating smooth animation. The iPhone is a particularly memory-starved platform, so it's good not to leave things alive too long. But at the same time, be realistic: One autoreleased NSNumber in a method that's called every couple of seconds isn't even going to make a dent in your app's profile. Even 100 thousand-character NSStrings will only use about 0.065% of the system RAM on an iPhone 3GS. That only becomes significant when you build up a ton of them in a very short time. So if there's no problem, don't sweat it.

I think there is only one reasonable answer: usually autorelease is not a performance issue. It is good to keep in mind that it could be problematic in tight loops, but unless a performance meter like Instruments shows a memory spike you have to get rid of, I would use it if you like.
Premature optimization is a great way of spending time for nothing. Maybe in the end you know that your solution is elegant, but it could be that the easier solution performs just as fine.

Related

Why does the callee return an autoreleased object instead of returning a retained object and letting the caller release it?

For instance:
We always write it like this (way 1):
-(NSObject *)giveMeAnObject
{
    return [[[NSObject alloc] init] autorelease];
}

-(void)useObject
{
    NSObject *object = [self giveMeAnObject];
    // use that object
}
But why don't we write it like this (way 2)?
-(NSObject *)giveMeAnObject
{
    return [[NSObject alloc] init];
}

-(void)useObject
{
    NSObject *object = [self giveMeAnObject];
    // use that object
    [object release];
}
The Cocoa SDK does things like way 1; I think that's why we all use way 1, it has become a coding convention.
But I just think that if the convention were way 2, we could gain a little performance improvement.
So is there any other reason that we use way 1 instead of way 2, other than coding convention?
Returning an autoreleased object is a form of abstraction -- a convenience for the developer, so he/she does not have to think about the reference counting of the returned object as much -- and consequently results in fewer bugs in specific categories (although you can also say autorelease pools introduce new categories of bugs or complexities). It can really simplify the client's code, although yes, there can be performance penalties. It can also be used as an abstracted optimization when no reference operation must be made -- consider when the object holds an instance of what is returned and no retain or copy need be made. Although the practice can be overused, it is also convenient for chaining statements.
Also, the static analyzer which determines reference count errors is relatively new to some of these libraries and programs. Autorelease pools preceded ARC and static analysis of objc objects' reference counts by many years -- ensuring your reference counting is correct is much simpler now with these tools. They are able to detect many of the bugs.
Of course with ARC, a lot of that changes if you favored the simplicity of returning autoreleased objects -- with ARC, you can return fewer autoreleased objects without the chance of introducing the bugs autorelease pools abstracted.
Using uniform ownership semantics also simplifies programs. If an abstract selector or collection of selectors always return using the same semantics, then it can really simplify some generic forms. For example -- if a set of selectors passed to performSelector: had different ownership-on-return semantics, then it would add a lot of complexity. So uniform ownership on return can really simplify some of the more 'generic' implementations.
Performance: Reference count operations (retain/release) can be rather weighty -- especially if you are used to working in lower levels. However, the current autorelease pool implementations are very fast. They have been recently updated, and are faster than they used to be. The compiler and runtime use several special shortcuts to keep these costs low. It's also a good idea to keep your autorelease pool sizes down -- particularly on mobile devices. Creating an autorelease pool is very fast. In practice, you may see from zero to a few percent increase from the autorelease operations themselves (i.e. it consumes much less time than objc_msgSend+variants). Some tests even ran slightly faster. This isn't an optimization many people will gain much from. It wouldn't qualify as low hanging fruit under typical circumstances, and it's actually relatively difficult to measure the effects and locality of such changes in real programs -- based off some testing I did after bbum mentioned the changes below. So the tests were limited in scope, but it appears to be better/faster in MRC and ARC.
So a lot of this comes down to the level of responsibility you want to assume, in the event you are performing your own reference counting operations. For most people, it shouldn't really change how they write. I think localizing memory issues, more deterministic object destruction, and more predictable heap sizes are the primary reasons one might favor returning an 'owning' (+1) reference if you are running on modern systems. Even then, the runtime and compiler work to reduce this for you in many cases (see bbum's answer +1). Even though autorelease pools are approximately as fast, I don't intend on using them more than I do now at this time -- so there are still justifiable reasons to minimize using them, but the reasons are lessening.
Have you measured the performance benefit? Do you have a quantifiable case where autorelease vs. CF style caller-must-release has a measurable performance impact?
If not, moot point. If so, I'd bet that there is a systemic architecture problem that far eclipses autorelease vs. not.
Regardless, if you adopt ARC, the "cost" of autorelease is minimized. The compiler and runtime can actually optimize away the autorelease across method calls.
There are three main reasons:
It imposes the concept of taking ownership.
It eliminates the issues of dangling pointers.
It returns the object created in the local autorelease pool, and so increases performance.

Class design for weapons in a game?

I enjoy making games and now for the first time try myself out on mobile devices. There, performance is of course a much bigger issue than on a nice PC and I find myself particularly struggling with weapon (or rather projectile) class design.
They need to be updated a lot, get destroyed/created a lot and generally require much updating.
At the moment I do it the obvious way, I create a projectile object each time I fire and destroy it on impact. Every frame all active projectiles get checked for collision with other objects.
Both steps seem like they could definitely need improvement. Are there common ways on how to handle such objects effectively?
In general I am looking for advice on how to do clean and performant class design, my googling skills were weak on this one so far.
I will gladly take any advice on this subject.
When you have lots of objects being created and destroyed in a short timespan, a common approach is to have a pool of instances already allocated that you simply reinitialise. Only if the pool is empty do you allocate new instances. Apple do this with MapKit and table views, among others. Studying those interfaces will probably serve you well.
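The table-view flavour of that pattern is the familiar dequeue-or-allocate dance (a sketch with an illustrative reuse identifier):

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *kCellID = @"Cell"; // illustrative reuse identifier
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:kCellID];
    if (cell == nil) {
        // The pool is empty: allocate a fresh instance.
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                       reuseIdentifier:kCellID] autorelease];
    }
    // Reinitialise (configure) the reused instance for this row.
    return cell;
}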
I don't think this is about class design. Your classes are fine; it's the algorithms that need work.
They need to be updated a lot, get destroyed/created a lot and generally require much updating.
Instead of destroying every projectile, consider putting it into a dead projectile list. Then, when you need to create a new one, instead of allocating a fresh object, pull one from the dead-list and reinitialise it. This is often quicker as you save on memory management calls.
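A bare-bones version of such a pool might look like this in Objective-C (a sketch under manual reference counting; the class and method names are illustrative, not from the question):

@interface ObjectPool : NSObject {
    NSMutableArray *_dead;
}
- (id)dequeue;                // nil if the pool is empty
- (void)recycle:(id)object;   // park an object for later reuse
@end

@implementation ObjectPool

- (id)init {
    if ((self = [super init])) {
        _dead = [[NSMutableArray alloc] init];
    }
    return self;
}

- (void)dealloc {
    [_dead release];
    [super dealloc];
}

- (id)dequeue {
    if ([_dead count] == 0) {
        return nil;           // pool empty: the caller allocates a fresh instance
    }
    id object = [[[_dead lastObject] retain] autorelease]; // keep it alive past removal
    [_dead removeLastObject];
    return object;
}

- (void)recycle:(id)object {
    [_dead addObject:object]; // keep it around instead of destroying it
}

@end

Firing a projectile then becomes dequeue-or-allocate followed by reinitialisation, and an impact recycles the instance instead of releasing it.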
As for updating, you need to update everything that changes - there's no way around that really.
Every frame all active projectiles get checked for collision with other objects.
Firstly - if you check every object against every other then each pair of objects gets compared twice. You can get away with half that number of checks by only comparing the objects that come later in the update list.
#Bad
for obj1 in all_objects:
    for obj2 in all_objects:
        if obj1 hit obj2:
            resolve_collision

#Good
for obj1 in all_objects:
    for obj2 in all_objects_after_obj1:
        if obj1 hit obj2:
            resolve_collision
How to implement 'all_objects_after_obj1' is language specific, but if you have an array or other random access structure holding your objects, you can just start the indexing from 1 after obj1.
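In Objective-C terms, assuming the active objects sit in an NSArray called objects and a hypothetical -collidesWith: test, the inner loop simply starts one index past obj1:

NSUInteger count = [objects count];
for (NSUInteger i = 0; i < count; i++) {
    id obj1 = [objects objectAtIndex:i];
    for (NSUInteger j = i + 1; j < count; j++) {   // only the objects after obj1
        id obj2 = [objects objectAtIndex:j];
        if ([obj1 collidesWith:obj2]) {            // hypothetical collision test
            // resolve the collision
        }
    }
}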
Secondly, the hit check itself can be slow. Make sure you're not performing complex mathematics to check the collision when a simpler option would do. And if the world is big, a spatial database scheme can help, e.g. a grid map or quadtree, to cut down the number of objects to check potential collisions against. But that is often awkward and a lot of work for little gain in a small game.
Both steps seem like they could definitely need improvement.
They only 'seem'? Profile the app and see where the slow parts are. It's rarely a good idea to guess at performance, because modern languages and hardware can be surprising.
As Jim wrote, you can create a pool of objects and manage them. If you're looking for a specific design pattern, there is Flyweight. Hope it will help you.

Don't worry about `retainCount`? Really?

I've been told to not worry about retain counts. I understand that I shouldn't decide to release or retain using conditional logic based on retainCount, but should I not worry about it? I thought these correspond to memory usage in some way.
For instance, if I have a bunch of subviews of UIView that I've also put into an NSArray to be able to iterate through them, doesn't that double the retain count and therefore the memory use of the application? If so, is this costly or trivial, if the subviews are, say, 500 UIControl instances? This is assuming that I need the 500 instances, of course.
The value returned by retainCount is the absolute number of times the object has been retained. A UIView comes from a framework whose implementation is opaque. The implementation details aren't something you should worry about save for the documented interface you interact with.
And within that implementation, instances of UIView may be retained any number of times as a part of the implementation. In terms of memory, the actual number of retains is meaningless; 1 is the same as 5.
The only thing you should ever be concerned with is how your code changes an object's retain count.
If your code increases the retain count, it must decrease it somewhere or the object will stick around forever. If you retain, you must release (or autorelease). If you copy, you must release or autorelease. If you new or alloc, you must release (or autorelease).
That is it.
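Concretely, under manual reference counting that balance looks like this (the variables are purely illustrative):

NSString *title = [[NSString alloc] initWithFormat:@"Score: %d", 42]; // alloc: you own it
NSString *copy  = [title copy];                                       // copy: you own it
NSString *label = [NSString stringWithFormat:@"Lives: %d", 3];        // convenience constructor:
                                                                      // autoreleased, not yours

[copy release];   // balance the copy
[title release];  // balance the alloc
// label needs no release here; the autorelease pool takes care of it.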
You're not supposed to be worrying about retainCount because it's often a misleading implementation detail of the reference counting system. What you're supposed to concern yourself with is following proper object ownership policy.
I post this once in a while from Apple's documentation:
Important: This method is typically of no value in debugging memory management issues. Because any number of framework objects may have retained an object in order to hold references to it, while at the same time autorelease pools may be holding any number of deferred releases on an object, it is very unlikely that you can get useful information from this method.
As for your last question, your memory usage is not going to double from adding objects from one array to another array. The retain count is just an unsigned integer in the object that gets incremented by 1 when something claims ownership of it. But again, don't concern yourself with that.
For instance, if I have a bunch of subviews of UIView that I've also put into an NSArray to be able to iterate through them, doesn't that double the retain count...
Yes, it will.
... and therefore the memory use of the application?
No! The conclusion is wrong. 1000000 takes as much space as 0 when stored as a 32-bit integer.
The reason they say not to worry is that the retainCount field can often be very misleading. Aside from not knowing when the autorelease pool was last flushed or how many times an object has been autoreleased since, there are also some complicated internals, and system components could be temporarily holding references in ways that cannot be predicted. So if you start to study the retainCount you'll likely spend a lot of time trying to figure out what other parts of the system are doing with various objects, which is not likely to help you get your application right.
You should design how your application works so that memory usage is not excessive.
You should worry about how many objects you have in memory, and how many times you have retained them (this is a number that will be less than the retainCount), and you should make sure you release them as many times as you retain them.
Calling retain on an object multiple times still only results in a single copy of the object being in memory.
To check for memory usage and/or leaks, you use the Leaks instrument in Instruments.
The retain count is just a number. An object whose retain count hits zero will get deallocated; aside from that, it doesn't matter if it's been retained once or fifty times.

Using an object beyond Autorelease context

under the "Guaranteeing the Foundation Ownership Policy" in Apple developer's site article on Autorelease pool
http://developer.apple.com/mac/library/documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmAutoreleasePools.html#//apple_ref/doc/uid/20000047-997594, they talk about extending an object's lifetime beyond the Autorelease pool.
Can someone give me a situation where this concept could be used?
Short answer: What the documentation is saying is that if you need to keep an object that has been autoreleased in an autorelease pool, you need to retain it.
Long answer: For instance, say I need to do a certain operation on 1000 objects. Once I'm done with these objects I'm going to autorelease them. Without a local autorelease pool, they're going to be released eventually, but holding those 1000 objects in memory can make your program really slow (at least until they're autoreleased).
In order to solve this issue, I'm creating an autorelease pool that will be drained every 100 objects. However, what happens if I need to keep the last object of the last batch around? I still need to purge those other 99 objects. What I'm going to do is send a retain message to that very last object and then drain the autorelease pool.
This way the autorelease pool will notify the system that it no longer wants those 100 items, but you've already let the system know that you do need one of them. If the object had a previous retain count of 1, then it'll still be around:
1 (original retain count) +1 (your retain) -1 (autorelease pool release) = 1.
This preserves the object after the autorelease pool is done with it.
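Sketched in code, the scenario above looks something like this (pre-ARC; makeObject and the batch size are illustrative names, not from the question):

id keeper = nil;
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

for (NSUInteger i = 0; i < 100; i++) {
    id obj = [self makeObject];   // hypothetical: returns an autoreleased object
    // ... work with obj ...
    if (i == 99) {
        keeper = [obj retain];    // +1 so it survives the drain below
    }
}

[pool drain];                     // the other 99 objects go away here
// keeper is still valid; release it when you are finished with it.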

Cocoa Touch Programming. KVO/KVC in the inner loop is super slow. How do I speed things up?

I've become a huge fan of KVO/KVC. I love the way it keeps my MVC architecture clean. However I am not in love with the huge performance hit I incur when I use KVO within the inner rendering loop of the 3D rendering app I'm designing, where messages will fire 60 times per second for each object under observation - potentially hundreds of objects.
What are the tips and tricks for speeding up KVO? Specifically, I am observing a scalar value - not an object - so perhaps the wrapping/unwrapping is killing me. I am also setting up and tearing down observation
[foo addObserver:bar forKeyPath:@"fooKey" options:0 context:NULL];
[foo removeObserver:bar forKeyPath:@"fooKey"];
within the inner loop. Perhaps I'm taking a hit for that.
I really, really, want to keep the huge flexibility KVO provides me. Any speed freaks out there who can lend a hand?
Cheers,
Doug
Objective-C's message dispatch and other features are tuned and pretty fast for what they provide, but they still don't approach the potential of tuned C for computational tasks:
NSNumber *a = [NSNumber numberWithInteger:(b.integerValue + c.integerValue)];
is way slower than:
NSInteger a = b + c;
and nobody actually does math on objects in Objective-C for that reason (well that and the syntax is awful).
The power of Objective-C is that you have a nice expressive message based object system where you can throw away the expensive bits and use pure C when you need to. KVO is one of the expensive bits. I love KVO, I use it all the time. It is computationally expensive, especially when you have lots of observed objects.
An inner loop is that small bit of code you run over and over; anything there will be done over and over. It is the place where you should be eliminating OOP features if need be, where you should not be allocating memory, and where you should be considering replacing method calls with static inline functions. Even if you somehow manage to get acceptable performance in your rendering loop, it will be much lower performance than if you got all that expensive notification and dispatch logic out of there.
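For instance, a per-frame distance test is the kind of thing you might pull out of the object system entirely (a hypothetical sketch; the struct and function are not from the question):

typedef struct {
    float x, y, z;
} Vec3;

// Plain inlined C: no message dispatch, no allocation, cheap to call
// thousands of times per frame.
static inline float Vec3SquaredDistance(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}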
If you really want to try to keep it going with KVO here are a few things you can try to make things go faster:
Switch from automatic to manual KVO in your objects. This may allow you to reduce spurious notifications (see the sketch after this list).
Aggregate updates: If your intermediate values over some time interval are not relevant, and you can defer for some amount of time (like the next animation frame), don't post the change; mark that the change needs to be posted and wait for the relevant timer to go off. That way you might avoid a bunch of short-lived intermediary updates. You might also use some sort of proxy to aggregate related changes between multiple objects.
Merge observable properties: If you have a large number of properties in one type of object that might change, you may be better off exposing a single "hasChanges" property to observe and having the observer query the properties.
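As mentioned in the first item above, switching to manual change notification looks roughly like this (a sketch; the Emitter class and its intensity property are hypothetical):

@interface Emitter : NSObject {
    float _intensity;
}
@property (nonatomic, assign) float intensity;
@end

@implementation Emitter

+ (BOOL)automaticallyNotifiesObserversForKey:(NSString *)key {
    if ([key isEqualToString:@"intensity"]) {
        return NO;  // we post willChange/didChange ourselves
    }
    return [super automaticallyNotifiesObserversForKey:key];
}

- (float)intensity {
    return _intensity;
}

- (void)setIntensity:(float)value {
    if (value == _intensity) {
        return;     // skip spurious notifications when nothing actually changed
    }
    [self willChangeValueForKey:@"intensity"];
    _intensity = value;
    [self didChangeValueForKey:@"intensity"];
}

@end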