Does Swift whole module optimization cause any issues when debugging?

Some time ago (a year?) we ran into problems with debug builds that had whole module optimization enabled. When tracing, the debugger would jump to unexpected addresses. Since then, we've been shy about enabling this on our debug builds. We do enable it for our release builds.
Is anyone aware of any existing issues, even subtle ones, with debugging an executable that has been optimized this way?
Or, conversely, has everything been working fine for you with this configuration?

From what I understand, it should not cause issues. Whole module optimisation mainly means the compiler can see which methods it is able to address directly (a straight call instruction instead of a vtable lookup), and it can also eliminate many of the costly retain/release pairs.
Along the same lines, it is a good idea in general to use access control in Swift right when you are writing your code: make everything private unless you really need to access it from outside, use final on classes that don't need subclassing, and so on.
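To make that concrete, here is a minimal Swift sketch (ImageCache and its members are made-up names, purely for illustration). With whole module optimization the compiler sees every use of a type across the module, so declarations like these let it devirtualize and even inline the calls:

import Foundation

// Illustrative only: with whole module optimization the compiler sees all uses
// of this type, so these annotations let it emit direct calls instead of
// vtable lookups, and possibly inline them.
final class ImageCache {                       // final: nothing can override these methods
    private var storage: [String: Data] = [:]  // private: never visible outside this file

    func data(forKey key: String) -> Data? {   // call sites can be resolved statically
        storage[key]
    }

    func insert(_ data: Data, forKey key: String) {
        storage[key] = data
    }
}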
Usually, when Xcode drops you into the assembly view, it is because debug symbols are missing for the libraries being called.
A small tip for debugging: I recently found a bug in my own code while implementing a TLS socket simply by looking up the address and the referenced method, downloading the source, and checking what was missing or causing the issue.
Now, that said, it will generally increase debug build times quite a bit, so do you have any particular reason for running whole module optimization in debug?

Related

Swift: Turn on optimization or release mode for a single class

I have a class that takes 3 seconds to process data in Debug mode, but only takes 100ms in Release mode. There is obviously something wrong with it but all tests pass so I don't need any Debug features when using it.
Is there a way to make Xcode run the project in Debug mode but do all Release optimization for this single class?
The only workaround I can think of is to turn this into a framework with a release compile flag but that seems like a bit overkill.
This is the only stable way to do this at the moment (with Xcode 13). One more option is to do some timing measurements; it might be possible to rearrange your code so that even the debug codegen gives better results.
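As an example of that kind of rearrangement (purely an illustrative sketch, with made-up function names): under -Onone, chained closure-based operations such as map and reduce are not inlined and materialize intermediate arrays, so a plain loop often performs far better in a debug build while producing the same result.

// Both functions compute the same value; the second usually behaves much
// better under -Onone because there are no closure calls and no intermediate
// array is allocated.
func sumOfSquaresChained(_ values: [Double]) -> Double {
    values.map { $0 * $0 }.reduce(0, +)
}

func sumOfSquaresLoop(_ values: [Double]) -> Double {
    var total = 0.0
    for v in values {
        total += v * v
    }
    return total
}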
If this is throwaway code, you can try marking individual functions with the underscored attribute @_optimize(speed).
I'll repeat the disclaimer at the top of the doc:
WARNING: This information is provided primarily for compiler and standard library developers. Usage of these attributes outside of the Swift monorepo is STRONGLY DISCOURAGED.
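For reference, usage looks like the sketch below. The processSamples function is just a made-up placeholder; the only point is the attribute on the declaration, and, per the warning above, it is unsupported and may change or disappear.

@_optimize(speed)   // underscored, unsupported attribute; asks for optimized codegen for this one function
func processSamples(_ samples: [Double]) -> Double {
    // Placeholder body, purely for illustration.
    samples.reduce(0, +) / Double(max(samples.count, 1))
}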

Swift compiler optimisations cause freezes

My app freezes in Release configuration only.
I tracked the issue down to the Swift compiler's Optimisation Level build setting.
It is no secret that the Swift compiler is buggy.
I have never seen a compiler crash as often as this one does.
So, is it "safe" to submit to the App Store with Optimisation Level set to "None"?
Any experience?
Apple does not recommend shipping your application with no compiler optimizations.[1]
None: The compiler does not attempt to optimize code. Use this option during development when you are focused on solving logic errors and need a fast compile time. Do not use this option for shipping your executable.
Taken from developer.apple.com.
While compiler optimization bugs exist,[2] Xcode is probably not the source of the problem, as explained in the answer provided here by the Stack Overflow user kfmfe04:
In some extremely rare cases, the debug code works, but the release code fails. When this happens, almost always, the problem is in my code; aggressive optimization in release builds can reveal bugs caused by misunderstood lifetimes of temporaries, etc.
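As a Swift-flavored illustration of that kind of lifetime bug (all of the names below are invented): in an optimized build, ARC may release an object immediately after its last use rather than at the end of its scope, so code that quietly depends on the longer debug-build lifetime can work under -Onone and crash or corrupt memory under -O.

final class Buffer {
    let storage = UnsafeMutablePointer<UInt8>.allocate(capacity: 64)
    deinit { storage.deallocate() }
}

func fillBroken() {
    let buffer = Buffer()
    let raw = buffer.storage          // last direct use of buffer
    // An optimized build may release buffer (running deinit and freeing the
    // memory) right here; a debug build typically keeps it alive to the end
    // of the scope, hiding the bug.
    raw.initialize(repeating: 0, count: 64)
}

func fillFixed() {
    let buffer = Buffer()
    // Explicitly tie the owner's lifetime to the region where the raw
    // pointer is used.
    withExtendedLifetime(buffer) {
        buffer.storage.initialize(repeating: 0, count: 64)
    }
}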
Remember that you can always track down the source of the problem by examining the compiled assembly file, but it will require some ASM knowledge to understand what the compiler is doing under the hood.
In Xcode:
Debug -> Debug Workflow -> Always Show Disassembly
Then put a breakpoint where you want to inspect the generated assembly.

Avoiding Eclipse Errors When Decompiling Android APK

I'm working on reverse engineering / decompiling an APK file - I was able to use:
http://www.decompileandroid.com/
I'm sure apktool is a better option (I'd love to hear the reasons why, though), but in this instance it worked, sorta.
My problem, and the root of my question, is that I ended up with over 4000 Eclipse errors when I import the source (thankfully they seem to be just a small handful of similar/related errors repeated many, many times).
That being said, is there a better method of going about this that avoids these errors (shown below)?
Eclipse Errors:
https://docs.google.com/document/d/1gwbZuJ8duQ37JRGeTdqIrv0o_DBNL_xWRxrG9Xxxwy4/edit?usp=sharing
I do not know of any Java decompiler that will reliably produce output that can be "round-tripped" (decompiled, then recompiled). There are a few in active development, my own included, for which you could submit bug reports. In the case of Procyon, type inference has become increasingly broken over time, particularly where generics are concerned. Then there are a host of other problems that primarily affect classes converted from Android format.
JARs created by tools like dex2jar tend to be much harder to process because those tools produce tricky exception handler tables, oddly ordered blocks, shared local variable slots, and so on. I would recommend trying a few different combinations of tools: straight Android decompilers as well as different dex-to-jar rewriters paired with various Java decompilers. You may find that one combination of tools consistently yields better results than others.
That said, I will reiterate my usual advice: never trust the output from a decompiler. Do not assume it is correct, even if it compiles cleanly.

How does Xcode know which project to debug into when multiple projects are open simultaneously?

TL;DR Version:
This question has arisen because I have multiple frameworks (which I have built) and a client project that uses those frameworks. When I open up the client project and try to debug into a framework, it doesn't work.
However, if I also have the framework's own project open, then debugging appears to work (though there are some odd issues, such as breakpoints that never get triggered).
I have looked at Apple's docs, and perhaps the answer is buried there somewhere, but I couldn't find it on a skim of the Xcode Debugging Guide.
Long Version:
The reason this question is important to me is that a coworker and I had a disagreement about how headers are imported in the frameworks we build.
I have a tendency to use framework headers (with client apps) in the fashion:
#import "FrameworkA/HeaderA.h"
#import "FrameworkB/HeaderB.h"
He, on the other hand, favors importing the framework headers (with client apps) like this:
#import "HeaderA.h"
#import "HeaderB.h"
and specifying the header search paths in the build target of the client application.
Complicating matters further is the fact that some of these frameworks have interdependencies. For example, FrameworkB has headers from FrameworkA referenced in his format:
#import "HeaderA.h"
His argument for doing this is that debugging only works if we import the headers this way. It seems dubious to me that there would be any relation between header import style and debugging, but I am not really certain how Xcode chooses the file to link to during debugging, hence the question.
Thanks in advance for any assistance with this query.
You add project references to the target, and you make sure Xcode knows where to find the debug symbols.
#import <FrameworkA/HeaderA.h>
That's the way to go, for internal and external declarations. The reason? The other approach is more likely to cause issues as libraries evolve. The additional qualification disambiguates every case (unless, of course, there are two FrameworkA/s in your search path). It's best to qualify the file explicitly now, rather than when your clients tell you they cannot use your library alongside other libraries, or that they can only use it under some conditions; then you have to go fix the issues and reship (this stuff has a way of happening at inconvenient times =p). It's one simple measure to ensure you've developed a robust interface.
Perhaps the most important part that people overlook is the location of the products: use a customized central build location for your targets. Many people use the default location, which is next to the xcodeproj, and in that case Xcode may not be able to locate the debug information.
Finally, debugging complex projects in Xcode can be quite... let's call it 'problematic'. So don't expect the debugging experience to be perfect, even if you've configured everything correctly; all the more reason to integrate assertions and unit tests into your development cycle early on. The truth is, the debugger may be useless no matter how hard you try; this is not a new issue. Hopefully LLDB will improve our debugging experiences.
Good luck

What is "incremental linking"?

I've looked at Microsoft's MSDN and all around the web, but I still haven't been able to get a really good idea of what it is.
Does it mean the completed program loads DLLs at different times during its execution, as opposed to all at once upon launch?
Am I totally way off? :)
Linking involves packaging together all of the .obj files built from your source files, as well as any .lib files you reference, into your output (e.g. an .exe or .dll).
Without incremental linking, this has to be done from scratch each time.
Incremental linking links your exe/dll in a way which makes it easier for the linker to update the existing exe/dll when you make a small change and re-compile.
So, incremental linking just makes it faster to compile and link your project.
The only runtime effect it might have is that it may make your exe/dll slightly bigger and slower, as described here:
http://msdn.microsoft.com/en-us/library/4khtbfyf.aspx
Edit: As mentioned by Logan, incremental linking is also incompatible with link-time code generation, so you lose a possible performance optimization.
You may want to use incremental linking for debug builds to speed development, but disable it for release builds to improve runtime performance.
Delay loaded DLLs may be what you are thinking of:
http://msdn.microsoft.com/en-us/library/151kt790.aspx
Also, quite importantly, incremental linking is a prerequisite for Edit & Continue: the ability to edit your code and recompile it on the fly, without restarting.
So it is a good thing to have on debug builds, but not release builds.