Variable optimized away by compiler - iPhone

I started debugging some code attempting to find my mistake. When I try to print tlEntries from the debugger (p tlEntries) I get the
<variable optimized away by compiler>
message while stopped on the if statement. The following is my code:
NSArray *tlEntries = [[NSArray alloc] initWithArray:[self fetchJSONValueForURL:url]];
for (NSDictionary *info in tlEntries)
{
    if ([info objectForKey:@"screen_name"] != nil)
        NSLog(@"Found %@ in the timeline", [info objectForKey:@"screen_name"]);
}
Earlier debugging gives me confidence the URL is indeed returning a valid NSArray, but I don't understand why tlEntries is being "optimized away".

The proper solution is to declare the variable in a different way as follows:
volatile NSArray *tlEntries;
Indeed, the volatile keyword exists precisely to tell the compiler that it must not optimize away accesses to that variable.
Kind regards.

The compiler probably noticed that you only use tlEntries twice at the beginning and don't use it at all inside the loop body. Fast enumeration keeps its own enumeration state rather than a reference to the container object, if I remember correctly. So tlEntries should be valid for the first line but then gone for the rest.
Idea: You can force the compiler to keep it by using tlEntries somewhere later in the function. Something like
NSLog(@"IGNORE THIS LINE %p", tlEntries);
Better idea: You can set the optimization to -O0. This is highly recommended for debugging code. It should be set to -O0 automatically if you use the "Debug" instead of "Release" builds, but you can change that.

When compiler optimizations are enabled, variables are frequently "optimized away" by placing them in registers or other tricks such as statement re-ordering. If you are using any -O flag other than -O0 then this can take place.
I don't think that adding additional references to the variable in your code is going to prevent this from happening. (If anything, the compiler may try even harder, since the potential gain from optimizing it away is greater.)
As a temporary workaround, you can declare the variable "volatile". This isn't generally a good long-term solution because it prevents the compiler from performing any optimizations involving that variable.
Another workaround is to use good old-fashioned logging. Something like:
NSLog(@"Entries initialized as: %@", tlEntries);
Finally, you can also compile with -O0. Many people recommend this for the Debug project profile. Although it prevents optimization it makes debugging much easier. Stepping through statements is actually predictable, and variables can be viewed. Unfortunately it has what I consider a rather nasty side-effect with the version of gcc that Apple ships: when -O0 is in effect you can't get warnings about using variables prior to initialization. (I personally find this warning useful enough that I'm willing to suffer the pain of debugging being less convenient.)
P.S. You have a memory leak in the snippet posted; once you are done with the array an additional line is needed:
[tlEntries release];

Assuming that you never use this variable later on (which seems reasonable if the compiler optimized it away), one important step that also solves the issue (similar to Dietrich's example, though better for your program) is to add:
[tlEntries release];
Otherwise, you will definitely leak that memory. The release also makes the compiler see the object used later (as the NSLog does), so it will not be optimized out.

Related

Object class members as pointers to avoid #include in headers - is it good practice?

This is really a question of precedence: which is more preferred in C++, avoiding pointers or avoiding #includes in header files?
"Don't Use #include in header files."
There seems to be some ambiguity based on my research. In this SO question, the top answer says "...make sure you actually need an include, [don't use one] when a forward declaration or even leaving it out completely will do." (From Header files and include best practice)
And this article explains the negative effect excess header inclusions can have on compile-time: http://blog.knatten.org/2012/11/09/another-reason-to-avoid-includes-in-headers/
As well as this tutorial, stating, "...you should try to put all of your code in the CPP class and only the class declaration in the HPP file.": https://github.com/LaurentGomila/SFML/wiki/Tutorial%3A-Basic-Game-Engine#wiki-declarations
"Don't Use Pointers."
But, there is also evidence that pointers should be avoided most often as well:
c++: when to use pointers?
https://softwareengineering.stackexchange.com/questions/56935/why-are-pointers-not-recommended-when-coding-with-c
Which preference takes precedence?
If my understanding about avoiding #includes in header files is correct, this can easily be done by changing things like class members to pointers so I can use a forward declaration instead, but is this a good idea for class members whose lifetime only lasts as long as the class itself?
It's not really a case of one or the other. Both statements are true, but you need to understand the reasoning behind them.
tl;dr: Use forward declaration where possible to reduce compile time. Use stack objects or references as much as possible and pointers only in rare cases.
"Don't Use #include in header files."
This is a rather general statement which, taken as is, would be wrong. The more important part behind this statement actually is: "Use forward declarations wherever possible." Includes in header files are not something bad per se, but they often aren't needed either.
Forward declarations can be used if the included type/class/etc. is only referred to by pointer or reference in the new type/class/etc. declaration within the given header. A forward declaration just tells the compiler: "Somewhere along the way you'll find the full declaration of type X." The include can even be removed entirely if the type isn't used in the declaration at all. The reason is that the compiler doesn't need to know anything about these types to calculate the required memory layout for the new type; a pointer, for example, always has the same size. Including the file in the header as well would only waste processing power, since the compiler has to open and parse the file, adding expensive seconds to the compile time. So in most cases you'll do yourself a favor by removing unnecessary includes from the header files and using forward declarations instead.
For the sake of completeness: forward declarations are explicitly needed if you run into circular references (class A depends on class B, which depends on class C, which depends on class A). However, this can often also reveal bad design and/or outdated coding standards, which leads us to the second topic.
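As a rough sketch of what that looks like in practice (all file and class names here are made up):

// Widget.hpp -- the header only needs forward declarations.
#ifndef WIDGET_HPP
#define WIDGET_HPP

class Renderer;   // forward declaration: enough for references and pointers
class Texture;

class Widget
{
public:
    explicit Widget(Renderer& renderer);
    void draw(const Texture& texture);

private:
    Renderer& renderer_;   // no #include "Renderer.hpp" required here
};

#endif

// Widget.cpp is the place for the real includes:
// #include "Widget.hpp"
// #include "Renderer.hpp"
// #include "Texture.hpp"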
"Don't use pointers."
Again the statement is a tiny bit too general. One might rather want to say: "Don't use raw pointers."
With C++11 and the upcoming C++1y, the language itself has changed a lot. For all the bad C++ books the world has seen, even more outdated C++ books are floating around nowadays (here's a good list, however). While in the past we were mostly stuck with raw pointers, new and delete for memory management, we've since evolved to better, more readable, less risky and essentially leak-free ways to manage data in memory. One of the magic words is RAII; since you linked something from SFML above, here's a nice demonstration of the power of RAII.
I see many people use pointers, new and delete just out of habit, or maybe because they are thinking in Java or C# terms, where objects are instantiated with the new keyword. In C++, however, objects don't need new to be allocated, and it's usually preferable to keep things on the stack instead of the heap. This works for many, many things, especially when using STL containers, which hide the dynamic management in the background. The heap is really only preferable if the data has to be dynamic, has to outlive its local scope, or is simply too large for the stack. When you do use the heap, make sure to use smart pointers such as std::unique_ptr or std::shared_ptr, depending on the use case, but certainly not raw pointers. In modern C++ raw pointers should never own an object anymore. There are cases where it's okay to return a raw pointer to reference an object, but there's really no reason in modern C++ to call new on a raw pointer.
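A small sketch of that preference, with made-up types (plain C++11):

#include <memory>
#include <vector>

struct Enemy { int health; };

void update()
{
    // Stack allocation: no new/delete, destroyed automatically at scope exit.
    Enemy boss{100};

    // STL containers manage the dynamic storage of their elements for you.
    std::vector<Enemy> wave(10, Enemy{100});

    // If heap allocation is really needed, let a smart pointer own it.
    std::unique_ptr<Enemy> spawned(new Enemy{50});
    spawned->health -= 10;
}   // no manual cleanup, no leaks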
Let's get back to the original question, though. "Don't use raw pointers" is essentially a design guideline and fairly unrelated to the whole header issue. While there might be some cases where you'll have to use pointers due to circular references, the use of forward declarations is otherwise just about compile time (and maybe cleaner code); it isn't essential to the programming itself.
In short: don't use raw pointers just to avoid includes in header files; use forward declarations wherever possible and smart pointers as much as possible.

Migrating to ARC with poor naming standards

I'm dealing with a codebase where naming standards have been routinely ignored. So, there are methods in some classes which return objects with reference counts of 1 even though the method name does not conform to NARC. Fantastic stuff.
I'd like to convert the project to use automatic reference counting, but I'm a little nervous due to the fact that NARC naming standards have been ignored altogether. Does anyone know whether ARC relies on NARC naming standards to work properly?
Thanks,
Sean
ARC does rely on the naming conventions to work correctly. However...
If you only used ObjC objects, then it will typically "work out" as long as you only have ARC code. For example, if you had a method like:
- (id)something {
    return [[Something alloc] init];
}
This is wrong (in non-ARC code), but ARC will balance it out by effectively adding an extra autorelease. In fact, the above is correct ARC code, so it's fine.
My suggestion, if this is almost all ObjC code, is to auto-convert to ARC and then run the static analyzer. The problem may actually be much smaller than you fear if it's fairly simple code that just happens to have bad naming.
If this is heavily Core Foundation toll-free bridged code, things are a little more complicated. Then I'd recommend running the static analyzer first and getting your naming right before converting. Luckily, naming conventions are something the static analyzer is very good at checking.
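To make that concrete with a hypothetical example: a method whose name accidentally puts it in the alloc/new/copy family, but which returns a non-retained object, is exactly the kind of thing the analyzer flags. If you want to keep the name, Foundation's NS_RETURNS_NOT_RETAINED annotation is one way to tell the compiler (and the analyzer) the truth about ownership:

// Hypothetical pre-ARC method: the "new" prefix makes callers expect a
// +1 (retained) result, but the body actually returns an autoreleased object.
- (NSArray *)newTimelineEntries
{
    return [NSArray arrayWithObjects:@"a", @"b", nil];
}

// Annotating the declaration documents the real ownership, so the analyzer
// and ARC callers stop assuming a retained return value:
- (NSArray *)newTimelineEntries NS_RETURNS_NOT_RETAINED;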
I had to convert several projects to ARC and so far never encountered any problems directly due to naming conventions whatsoever.
Actually the conversion is really straightforward, so while I fully understand your state of mind about the code you have to deal with, I wouldn't worry too much.
So far I have never encountered any seriously difficult situation during conversion, as long as the code to be converted was correct in the first place and reasonably clear to understand.
In fact, I find using ARC as trouble-free as any language with built-in GC - as far as memory issues are concerned, of course!
In the worst case you can always run the static analyzer - but even that is rarely required nowadays with ARC.
Probably the most critical situation is discussed here: What kind of leaks does automatic reference counting in Objective-C not prevent or minimize?

Tracing Allocations Test Results - look good or not so good?

I'm trying to memory test my app.
I've followed the Organizer - Documentation article entitled "Recovering Memory You Have Abandoned", but I'm not sure if the results make my tested page good or bad, or somewhere in-between.
(My test involved: navigating to page 2, going back to page 1, and pressing 'Mark Heap' - repeated 25 times for good measure.)
Attached is a screenshot of my allocations test. Most of the #Persistent values are 0. But there are some anomalies. Are these typical?
(The last Heapshot, 26, was taken after stopping the recording, and pressing 'Mark Heap' at the end of the trace - as suggested in the documentation.)
I would be very grateful for some advice. Thanks.
I believe that you are using ARC, and if you are using ARC there is no need to worry much about the heapshots; it will take care of everything.
Here are the main points from Apple's docs to keep in mind while using ARC:
ARC imposes some new rules that are not present when using other compiler modes. The rules are intended to provide a fully reliable memory management model; in some cases, they simply enforce best practice, in some others they simplify your code or are obvious corollaries of your not having to deal with memory management. If you violate these rules, you get an immediate compile-time error, not a subtle bug that may become apparent at runtime.
You cannot explicitly invoke dealloc, or implement or invoke retain, release, retainCount, or autorelease.
The prohibition extends to using @selector(retain), @selector(release), and so on.
You may implement a dealloc method if you need to manage resources other than releasing instance variables. You do not have to (indeed you cannot) release instance variables, but you may need to invoke [systemClassInstance setDelegate:nil] on system classes and other code that isn't compiled using ARC.
Custom dealloc methods in ARC do not require a call to [super dealloc] (it actually results in a compiler error). The chaining to super is automated and enforced by the compiler.
You can still use CFRetain, CFRelease, and other related functions with Core Foundation-style objects.
You cannot use NSAllocateObject or NSDeallocateObject.
You create objects using alloc; the runtime takes care of deallocating objects.
You cannot use object pointers in C structures.
Rather than using a struct, you can create an Objective-C class to manage the data instead.
There is no casual casting between id and void *.
You must use special casts that tell the compiler about object lifetime. You need to do this to cast between Objective-C objects and Core Foundation types that you pass as function arguments. For more details, see "Managing Toll-Free Bridging."
You cannot use NSAutoreleasePool objects.
ARC provides @autoreleasepool blocks instead. These have the advantage of being more efficient than NSAutoreleasePool.
You cannot use memory zones.
There is no need to use NSZone any more; they are ignored by the modern Objective-C runtime anyway.
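To make two of those rules concrete, here is a minimal sketch of a bridging cast and an @autoreleasepool block (variable names are made up):

// Bridging between Objective-C and Core Foundation needs an explicit cast
// that states the ownership intent; __bridge transfers no ownership.
NSString *name = @"example";
CFStringRef cfName = (__bridge CFStringRef)name;
NSLog(@"length = %ld", (long)CFStringGetLength(cfName));

// @autoreleasepool replaces NSAutoreleasePool under ARC.
@autoreleasepool {
    NSString *temporary = [NSString stringWithFormat:@"%d", 42];
    NSLog(@"%@", temporary);
}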

Xcode 4/LLVM 3.0 -- make it a little smarter about "no known instance method for selector" errors?

The following code is perfectly safe, yet Xcode 4 gives me an error for it:
if ([self respondsToSelector: @selector(foo)])
    [self foo];
I am aware that I can get around it with a dummy protocol, but I use this pattern pretty often, and it feels like that amount of work should not be necessary. Is there any way to set a setting somewhere, preferably once, so that this "error" does not bug me again?
if ([self respondsToSelector: @selector(foo)])
    [self foo];
That expression is only "perfectly safe" if there are no arguments and no return value. If any type information is required, @selector(foo) is insufficient.
Even then, I suspect that there are architectures whose ABI are such that the no-arg-no-return case would actually require that type knowledge be available to the compiler to be able to generate code that is absolutely guaranteed correct.
That is to say, your example of fooWithInteger: and/or fooWithX:y:z: could not possibly be compiled correctly without the full type information available due to the vagaries of the C language and the architecture specific ABI.
As well, to allow the compiler to compile that without warning would require the compiler to collude a runtime expression -- respondsToSelector: must be dynamically dispatched -- with a compile time expression. Compilers hate that.
To silence the compiler when following that kind of pattern, I use -performSelector:
if ([self respondsToSelector:@selector(foo)]) {
    [self performSelector:@selector(foo)];
}
I don't know of any other way.
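For completeness, the dummy-protocol workaround the question alludes to looks roughly like this (the protocol name is arbitrary, and nothing ever has to adopt it):

// Declaring the optional method in a throwaway protocol gives the compiler
// the full method signature, which silences the error at the call site.
@protocol FooMethod <NSObject>
@optional
- (void)foo;
@end

if ([self respondsToSelector:@selector(foo)]) {
    [(id<FooMethod>)self foo];
}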

How do you define 'unwanted code'?

How would you define "unwanted code"?
Edit:
IMHO, any code member with zero active callers (checked recursively) is unwanted code. (Functions, methods, properties, and variables are members.)
Here's my definition of unwanted code:
Code that does not execute is dead weight. (Unless it's a [malicious] payload for your actual code, but that's another story :-))
Code that is repeated multiple times increases the cost of the product.
Code that cannot be regression-tested increases the cost of the product as well.
You can either remove such code or refactor it, but you don't want to keep it around as it is.
Zero active calls and no possibility of use in the near future. And I prefer never to comment anything out "in case I need it later", since I use SVN (source control).
Like you said in the other thread, code that is not used anywhere at all is pretty much unwanted. As for how to find it, I'd suggest FindBugs or CheckStyle if you were using Java, for example, since these tools check whether a function is used anywhere and mark it as unused if it isn't. Very nice for getting rid of unnecessary weight.
Well, after thinking about it briefly I came up with these three points:
it can be code that should be refactored
it can be code that is not called any more (leftovers from earlier versions)
it can be code that does not conform to your style guide and way of coding
I bet there is a lot more, but that's how I'd define unwanted code.
In Java I'd mark the method or class with @Deprecated.
Any PRIVATE code member with no active calling members (checked recursively). Otherwise you do not know whether the code is used outside the scope of your analysis.
Some things are already posted but here's another:
Functions that do almost the same thing (only a small variable differs, so the whole function is copy-pasted and that one variable is changed).
Usually I tell my compiler to be as annoyingly noisy as possible; that picks up 60% of the stuff I need to examine. Unused functions that are months old (after checking with the VCS) usually get ousted, unless their author tells me when they'll actually be used. Stuff missing prototypes is also instantly suspect.
I think trying to implement automated house cleaning is like trying to make a USB device that guarantees that you 'safely' play Russian Roulette.
The hardest parts to check are components added to the build system; few people notice those, and unused kludges are left to gather moss.
Beyond that, I typically WANT the code; I just want its author to refactor it a bit and make their style the same as the rest of the project.
Another helpful tool is Doxygen, which does help you (visually) see relations in the source tree. However, if it's set not to extract static symbols/objects, it's not going to be very thorough.