Tracing Allocations Test Results - look good or not so good? - iPhone

I'm trying to memory test my app.
I've followed the Organizer - Documentation article entitled "Recovering Memory You Have Abandoned", but I'm not sure if the results make my tested page good or bad, or somewhere in-between.
(My test involved navigating to page 2, going back to page 1, and pressing 'Mark Heap' - repeated 25 times for good measure.)
Attached is a screenshot of my allocations test. Most of the #Persistent values are 0. But there are some anomalies. Are these typical?
(The last Heapshot, 26, was taken after stopping the recording, and pressing 'Mark Heap' at the end of the trace - as suggested in the documentation.)
I would be very grateful for some advice. Thanks.

I believe that you are using ARC, and if you are using ARC there is no need to bother much about the heap; it will take care of the retains and releases for you.
Here are the 9 simple points from Apple's docs to keep in mind while using ARC:

ARC imposes some new rules that are not present when using other compiler modes. The rules are intended to provide a fully reliable memory management model; in some cases they simply enforce best practice, in others they simplify your code or are obvious corollaries of your not having to deal with memory management. If you violate these rules, you get an immediate compile-time error, not a subtle bug that may become apparent at runtime.

1. You cannot explicitly invoke dealloc, or implement or invoke retain, release, retainCount, or autorelease. The prohibition extends to using @selector(retain), @selector(release), and so on.
2. You may implement a dealloc method if you need to manage resources other than releasing instance variables. You do not have to (indeed you cannot) release instance variables, but you may need to invoke [systemClassInstance setDelegate:nil] on system classes and other code that isn't compiled using ARC.
3. Custom dealloc methods in ARC do not require a call to [super dealloc] (it actually results in a compiler error); the chaining to super is automated and enforced by the compiler.
4. You can still use CFRetain, CFRelease, and other related functions with Core Foundation-style objects.
5. You cannot use NSAllocateObject or NSDeallocateObject. You create objects using alloc; the runtime takes care of deallocating objects.
6. You cannot use object pointers in C structures. Rather than using a struct, you can create an Objective-C class to manage the data instead.
7. There is no casual casting between id and void *. You must use special casts that tell the compiler about object lifetime. You need to do this to cast between Objective-C objects and Core Foundation types that you pass as function arguments. For more details, see "Managing Toll-Free Bridging."
8. You cannot use NSAutoreleasePool objects. ARC provides @autoreleasepool blocks instead, which have the advantage of being more efficient than NSAutoreleasePool. (Rules 7 and 8 are illustrated in the sketch after this list.)
9. You cannot use memory zones. There is no need to use NSZone any more; they are ignored by the modern Objective-C runtime anyway.
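To make rules 7 and 8 concrete, here is a minimal, hedged sketch (the function and variable names are made up for illustration) showing an @autoreleasepool block and a toll-free bridging cast:

#import <Foundation/Foundation.h>

// Hypothetical helper illustrating ARC rules 7 and 8 from the list above.
static void logNameLength(void) {
    @autoreleasepool {                          // replaces NSAutoreleasePool under ARC
        NSString *name = [NSString stringWithFormat:@"user-%d", 42];

        // Casting between Objective-C and Core Foundation types needs a bridging
        // cast that tells ARC who owns the object; __bridge transfers no ownership.
        CFStringRef cfName = (__bridge CFStringRef)name;
        CFIndex length = CFStringGetLength(cfName);

        NSLog(@"%@ has length %ld", name, (long)length);
    }   // objects autoreleased inside the block are released here
}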

Does Swift use message dispatch for methods?

I'm sure my terminology is off, so here's an example:
C/C++ has methods and virtual methods. Both have the opportunity to be inlined at compile time.
C#'s CIL has call and callvirt instructions (which closely resemble C++ non-virtual and virtual methods). Although almost all method calls in C# become callvirt (due to a language snafu), the JIT compiler is able to optimize most of them back to call instructions and then (if worthwhile) also inline them.
Objective-C method calls work very differently (and less efficiently): every method call goes through objc_msgSend, which looks up the implementation for the selector at runtime. It's a form of dynamic dispatch and can never be inlined.
Reading the Swift language documentation on functions, I can't tell whether Swift uses the same messaging system as Objective-C or something different.
Sometimes yes, sometimes no. If you have pure Swift code and do not expose your classes/protocols to Objective-C with the @objc attribute, it appears that pure-Swift method calls are not dispatched via objc_msgSend; however, in other cases they are. If the protocol your Swift object adopts is declared in Objective-C, or if the Swift protocol is decorated with @objc, then calls to protocol methods, even from Swift objects to other Swift objects, are dispatched via objc_msgSend.
The documentation is currently a little thin; I'm sure there are other nuances... but empirically speaking (i.e. I've tried it out) some Swift method calls go through objc_msgSend and others don't. I think getting the best performance will depend on keeping your code as purely Swift as possible, crossing the Objective-C/Swift boundary as little as possible, and doing so through bottleneck interfaces/protocols, so as to limit the number of Swift calls that have to be dispatched dynamically.
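For reference, here is a rough Objective-C sketch of what that dynamic dispatch amounts to (the Greeter class is made up); a message send compiles to a call through objc_msgSend, which looks the method up at runtime rather than calling it directly:

#import <Foundation/Foundation.h>
#import <objc/message.h>

@interface Greeter : NSObject
- (void)greet;
@end

@implementation Greeter
- (void)greet { NSLog(@"Hello"); }
@end

int main(void) {
    @autoreleasepool {
        Greeter *g = [[Greeter alloc] init];

        [g greet];   // compiled as (roughly) objc_msgSend(g, @selector(greet))

        // The equivalent explicit send, with objc_msgSend cast to the right type:
        ((void (*)(id, SEL))objc_msgSend)(g, @selector(greet));
    }
    return 0;
}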
I'm sure more detailed docs will emerge sooner or later.
Unlike C++, it is not necessary to designate that a method is virtual in Swift. The compiler will work out which of the following to use:
(The performance figures below of course depend on hardware.)
Inline the method: 0 ns
Static dispatch: < 1.1 ns
Virtual dispatch: 1.1 ns (like Java, C#, or C++ when a method is designated virtual)
Dynamic dispatch: 4.9 ns (like Objective-C)
Objective-C of course always uses the latter. The 4.9 ns overhead is not usually a problem, as it typically represents a small fraction of the overall method execution time; where necessary, developers could always fall back to C or C++. That is still somewhat of an option in Swift, but the compiler analyzes which of the faster mechanisms can be used and tries to decide on your behalf.
One side effect of this is that some of the powerful features afforded by dynamic dispatch may not be available, whereas previously this could be assumed to be the case for any Objective-C method. Dynamic dispatch is used for method interception, which is in turn used by:
Cocoa-style property observers.
CoreData model object instrumentation.
Aspect-Oriented Programming.
With the latest release of Swift, even if an object is marked as @objc or inherits from NSObject, the compiler may still not necessarily use dynamic dispatch. There is a dynamic attribute that can be added to a method to opt in.

Migrating to ARC with poor naming standards

I'm dealing with a codebase where naming standards have been routinely ignored. So, there are methods in some classes which return objects with reference counts of 1 even though the method name does not conform to NARC. Fantastic stuff.
I'd like to convert the project to use automatic reference counting, but I'm a little nervous due to the fact that NARC naming standards have been ignored altogether. Does anyone know whether ARC relies on NARC naming standards to work properly?
Thanks,
Sean
ARC does rely on the naming conventions to work correctly. However...
If you only used ObjC objects, then it will typically "work out" as long as you only have ARC code. For example, if you had a method like:
- (id)something {
    return [[Something alloc] init];
}
This is wrong (in non-ARC code), but ARC will balance it out by effectively adding an extra autorelease. In fact, the above is correct ARC code, so it's fine.
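For comparison, a sketch of what the manual-retain-release version of that method would have to look like to satisfy the naming conventions (Something stands in for whatever class the real code returns):

// Non-ARC (MRR): a method not named alloc/new/copy/mutableCopy must return an
// object the caller does not own, so it autoreleases the +1 from alloc/init.
- (id)something {
    return [[[Something alloc] init] autorelease];
}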
My suggestion, if this is almost all ObjC code, is to auto-convert to ARC and then run the static analyzer. The problem may actually be much smaller than you fear if it's fairly simple code that just happens to have bad naming.
If this is heavily Core Foundation toll-free bridged code, things are a little more complicated. Then I'd recommend running the static analyzer first and getting your naming right before converting. Luckily, naming conventions are something the static analyzer is very good at checking.
I had to convert several projects to ARC and so far have never encountered any problems directly due to naming conventions.
Actually, the conversion is really straightforward - so while I fully understand your state of mind about the code you have to deal with, I wouldn't worry too much.
So far I have never encountered any seriously difficult situation during conversion, as long as the code to be converted was correct in the first place and reasonably clear to understand.
In fact, I find using ARC as trouble-free as any other language with built-in GC - as far as memory issues are concerned, of course!
In the worst case you can always run the static analyzer - but even that is rarely required nowadays with ARC.
Probably the most critical situation is discussed here: What kind of leaks does automatic reference counting in Objective-C not prevent or minimize?

Why retain/release rather than new/delete?

I'm a newbie to Objective-C; I feel comfortable in C++.
My question is:
Why did the designers of Objective-C prefer retain/release rather than just new/delete (i.e. alloc/dealloc)?
Maybe my brain is only wired for new/delete-style memory management, but I can't understand why I should manage reference counts; with my C++ experience I think I know when an object has to be allocated and deallocated.
(Yes, I spent 4 hours debugging a reference-count problem that was resolved by one line calling "release".)
Can anyone explain what is better about using a reference counter (from a language-design point of view)? I think I can manage the lifecycle of an object with new/delete, but I can't with reference counting.
If you have a link to a long article that explains why reference counting is useful, that would help too.
P.S. I heard about compile-time Automatic Reference Counting at WWDC 2011 - it's really awesome, and it could be one reason to use reference counting, for example.
The short answer is that it is a way to manage object lifetimes without requiring "ownership" as one does with C++.
When creating an object using new in C++, something has to know when to delete that object later. Often this is straightforward, but it can be difficult when an object can be passed around and shared by many different "owners" with differing lifetimes.
With reference counting, as long as any other object refers to the object, it stays alive. When all other objects remove their references, it disappears. There are drawbacks to this approach (the debugging of retain/release and reference cycles being the most obvious), but it is a useful alternative to fully automatic garbage collection.
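As a minimal pre-ARC sketch of that idea (assume cacheA and cacheB are NSMutableDictionary instances owned by two unrelated parts of the program):

// Manual retain/release, illustrating shared ownership without a single owner.
NSMutableData *shared = [[NSMutableData alloc] initWithLength:1024]; // retain count 1

[cacheA setObject:shared forKey:@"blob"];   // the dictionary retains the value
[cacheB setObject:shared forKey:@"blob"];   // a second, independent owner

[shared release];   // the creator gives up its claim; the object stays alive
// The data is deallocated only after both dictionaries have released it,
// e.g. when the entries are removed or the dictionaries themselves go away.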
Objective-C is not the only language to use reference counting. In C++, it is common to use std::shared_ptr, which is the standard reference-counted smart pointer template. Windows Component Object Model programming requires it. Many languages use automated reference-counting behind the scenes as a garbage-collection strategy.
The Wikipedia article is a good place to start looking for more information: http://en.wikipedia.org/wiki/Reference_counting

Using Structs in Objective-C (for iOS): Premature Optimization?

I realize that what counts as premature optimization has a subjective component, but this is an empirical or best-practices question.
When programming for iOS, should I prefer using structs and typedefs where the "object" has no behavior (no methods, basically)? My feeling is that the struct syntax is a bit strange for a non-C person, but that it should have a much lower overhead. Then again, testing some cases with 50K NSObject instances, the object approach doesn't seem bad (relative, I know). Should I "get used to it" (use structs where possible), or are NSObject instances okay unless I run into performance problems?
The typical case would be a class with two int member variables. I've read that using a struct to hold two NSString instances (or any NSObject subclass) is a bad idea.
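For concreteness, here is a sketch of the two alternatives being compared (the names Size2D and PairObject are made up):

#import <Foundation/Foundation.h>

// Plain C struct: no retain/release, stored inline, copied by value.
typedef struct {
    int width;
    int height;
} Size2D;

// Equivalent NSObject subclass: heap-allocated and reference-counted.
@interface PairObject : NSObject
@property (nonatomic, assign) int width;
@property (nonatomic, assign) int height;
@end

@implementation PairObject
@end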
Structs with NSObject instances in them are definitely a bad idea. You need -init and -dealloc to handle the retain count correctly. Writing retain and releases from the caller side is just insane. It will never pay off.
Structs with two ints or four doubles are borderline cases. The Cocoa framework itself implements NSRect, NSPoint, etc. as structs, but that fact has confused lots and lots of newcomers. Honestly, even the distinction between primitive types and object types confuses them. It gets confusing even for me when a struct is a property of an object: you can't do
object.frame.origin.x = 10;
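The usual workaround, sketched here assuming a UIView-style view with a frame property, is to copy the struct, modify the copy, and assign it back:

CGRect frame = view.frame;   // the getter returns the struct by value
frame.origin.x = 10;         // mutate the local copy
view.frame = frame;          // the setter stores the whole struct back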
If you start making your own structs, you need to remember which is which, and that's again a hassle. I think the reason why they (NSRect etc.) are structs is basically historical.
I would prefer to make everything objects. And use garbage collection if available.
And don't ask people whether something is worth optimizing or not; measure it yourself with Instruments or whatever. Depending on the environment (PPC vs. Intel, OS X vs. iOS, iPad vs. iPhone), an approach that was faster on a previous system might be slower on a new one.
An Objective-C object has almost the same storage as a struct, except that it is 4 bytes (8 bytes on 64-bit) bigger. That's it - just one pointer to the place where the runtime holds all the class information.
If you are that tight on memory, then lose the 4 bytes, but usually that only matters for large numbers of objects: 50,000 NSObject instances vs. structs is only about 200 KB, and you get a lot of stuff for that 200 KB. For a million objects, though, the cost will add up on an iPhone.
If you want to, say, transfer the items to OpenGL, or need a C array for other purposes, then another option is to make ONE NSObject that holds a malloc'ed pointer to all 50,000 integers (see the sketch below). Then the Objective-C memory overhead is ~0, and you can encapsulate all the nasty malloc and free() stuff in the innards of one .m file.
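Here is a minimal sketch of that wrapper idea (IntBuffer is a made-up name); one object owns the C buffer and keeps the malloc/free details inside its own implementation file:

#import <Foundation/Foundation.h>
#include <stdlib.h>

// One Objective-C object wrapping a plain C array of ints (sketch).
@interface IntBuffer : NSObject
@property (nonatomic, readonly) NSUInteger count;
- (instancetype)initWithCount:(NSUInteger)count;
- (int *)values;                 // raw pointer, e.g. for handing to OpenGL
@end

@implementation IntBuffer {
    int *_values;
}

- (instancetype)initWithCount:(NSUInteger)count {
    if ((self = [super init])) {
        _count = count;
        _values = calloc(count, sizeof(int));   // the nasty C allocation lives here
    }
    return self;
}

- (int *)values { return _values; }

- (void)dealloc {
    free(_values);   // ...and the cleanup lives here
    // (call [super dealloc] here only if this file is compiled without ARC)
}
@end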
Go with regular objects until you hit a measurable performance bottleneck. I’ve used high-level code even in tight game loops without problems – messaging, collection classes, autorelease pools, no problems.
I see no problem at all with using structs to hold small quantities of primitive (i.e. non object) types where there is no behaviour required. There are already several examples of this in the Cocoa frameworks (CGRect, CGSize, CGPoint, NSRange for example).
Do not use structs to hold Objective-C objects. It complicates the memory management in the reference counted environment and may break it altogether in the GC environment.
For me, I would prefer to use regular objects, because you can easily do object work with them - retain, release, autorelease, and so on. I only see a few structs in the Cocoa frameworks, like CGSize, CGRect, and CGPoint; I think the reason they exist is that they are used so frequently.
I believe it is a good idea to use structs, especially if you are dealing with C-based frameworks - say OpenGL, CoreGraphics, or CoreText - particularly for things that require a couple or triple of ints, doubles, chars, etc. (if they are not already provided by one of Apple's frameworks: CGRect, CGPoint, CTRect, NSRange, etc.). C stuff plays and looks better with other C stuff.
I don't think I would write a subclass of NSObject containing a couple of ints. It's almost ridiculous. lol.

Variable optimized away by compiler

I started debugging some code attempting to find my mistake. When I attempt to p tlEntries from the debugger I get the
<variable optimized away by compiler>
message while stopped on the if statement. The following is my code:
NSArray *tlEntries = [[NSArray alloc] initWithArray:[self fetchJSONValueForURL:url]];
for (NSDictionary *info in tlEntries)
{
    if ([info objectForKey:@"screen_name"] != nil)
        NSLog(@"Found %@ in the timeline", [info objectForKey:@"screen_name"]);
}
Earlier debugging gives me confidence the URL is indeed returning a valid NSArray, but I don't understand why tlEntries is being "optimized away".
The proper solution is to declare the variable in a different way as follows:
volatile NSArray *tlEntries;
Indeed, the volatile keyword is used exactly to inform the compiler that it must not try to optimize the code related to that variable in any way.
Kind regards.
The compiler probably noticed that you only use tlEntries twice in the beginning, and don't use it at all in the loop. Loops create an enumeration object instead of keeping a reference to the container object, if I remember correctly. So tlEntries should be valid for the first line but then gone for the rest.
Idea: You can force the compiler to keep it by using tlEntries somewhere later in the function. Something like
NSPrint(#"IGNORE THIS LINE %p", tlEntries);
Better idea: You can set the optimization to -O0. This is highly recommended for debugging code. It should be set to -O0 automatically if you use the "Debug" instead of "Release" builds, but you can change that.
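If it helps, here is a hedged sketch of pinning that down in an xcconfig file for the Debug configuration (the same setting appears as "Optimization Level" in the Xcode build settings UI):

// Debug.xcconfig (sketch): make sure debug builds use -O0 so that
// variables are not optimized away while you are in the debugger.
GCC_OPTIMIZATION_LEVEL = 0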
When compiler optimizations are enabled, variables are frequently "optimized away" by placing them in registers or other tricks such as statement re-ordering. If you are using any -O flag other than -O0 then this can take place.
I don't think that adding additional references to the variable in your code is going to prevent this from happening. (If anything, the compiler may try even harder, since the potential gain from optimizing it away is greater.)
As a temporary workaround, you can declare the variable "volatile". This isn't generally a good long-term solution because it prevents the compiler from performing any optimizations involving that variable.
Another workaround is to use good old-fashioned logging. Something like:
NSLog(#"Entries initialized as: %#", tlEntries);
Finally, you can also compile with -O0. Many people recommend this for the Debug project profile. Although it prevents optimization it makes debugging much easier. Stepping through statements is actually predictable, and variables can be viewed. Unfortunately it has what I consider a rather nasty side-effect with the version of gcc that Apple ships: when -O0 is in effect you can't get warnings about using variables prior to initialization. (I personally find this warning useful enough that I'm willing to suffer the pain of debugging being less convenient.)
P.S. You have a memory leak in the snippet posted; for clarity an additional line should be added:
[tlEntries release];
Assuming that you never use this variable later on (which seems reasonable if the compiler optimized it away), one very important thing you can do to address the issue (similar to Dietrich's example, though better for your program) is to add:
[tlEntries release];
Otherwise, you will definitely leak that memory. This also makes the compiler see the object being used later (as the NSLog workaround does), so it will not be optimized out.