Speeding up compilation in Xcode - iPhone

I have a rather large project where compilation takes more than 1 hour on a Mac with i5 processor.
Just changing one little piece of code in one place triggers the complete, lengthy compilation again.
Is there any way to reduce this time?
I was thinking about "precompiling of classes" or "pre-linking" if there is anything like that.
Even uploading a little app to a device takes 10 seconds.
PS: Can anyone share experience on whether Xcode 4.3 is faster on the new Retina Macs in this context?
Many thanks!

1) Use a precompiled header and remove the imports of the framework headers (UIKit, Foundation, Cocoa, etc.) that Xcode adds when you create classes.
2) Add reasonably stable user header files to the .pch as well, to reduce the precompile work.
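As a sketch, a prefix header along those lines might look like this (the file name and the project header are hypothetical; only the framework imports are standard):

```objc
// MyProject-Prefix.pch -- illustrative name; Xcode generates one per project
#ifdef __OBJC__
    // Framework headers: precompiled once instead of parsed for every file
    #import <Foundation/Foundation.h>
    #import <UIKit/UIKit.h>

    // Reasonably stable project headers can go here too (hypothetical example):
    #import "AppConstants.h"
#endif
```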

In your classes, do most of your imports in the implementation file (.m), not the header. Use forward declarations when appropriate. See '@class vs. #import' and 'Importing header in objective c'.
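As a minimal sketch of that pattern (the class names here are made up):

```objc
// Widget.h -- the header only *declares* that Engine exists:
@class Engine;   // forward declaration; no #import of Engine.h needed here

@interface Widget : NSObject
@property (nonatomic, strong) Engine *engine;
@end

// Widget.m -- the implementation imports the full definition:
#import "Widget.h"
#import "Engine.h"   // only this .m recompiles when Engine.h changes
```

With this arrangement, a change to Engine.h no longer forces every file that imports Widget.h to rebuild.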
You might consider moving a stable and well-confined part of your main project into a separate project and including it as a static library in the main project.

Recently I removed a few libraries that I had been referencing as .a files and moved their code directly into the project. The speed increased amazingly: compilation used to take 15 minutes, now it takes 15 seconds. Indexing used to take all day to finish (in time for shut-down), but now it is really fast. The library was on a network drive, which may have been exacerbating the problem.

Related

Is there a difference in linking standard and custom dynamic library?

I don't get how a standard library like libc is linked. I use the MinGW compiler.
I see that there is no libc.dll file in its bin folder. Then how is libc linked?
How does the compiler know the difference between a custom and a standard dynamic library?
We use build tools because they are a practical way to compile code and create executables, deployables, etc.
For example, consider that you have a large Java application consisting of thousands of separate source files split across multiple packages. Suppose that it has a number of JAR file dependencies, some of them for libraries that you have developed, and others for external libraries that can be downloaded from standard places.
You could spend hours manually downloading external JAR files and putting them in the right place. Then you could manually run javac for each of the source files, and run jar multiple times, each time in the correct directory, with the correct command line arguments ... and in the correct order.
And each time you change the source code ... repeat the relevant parts of the above process.
And make sure you don't make a mistake that will cause you to waste time chasing test failures, runtime errors, etc. caused by not building correctly.
Or ... you could use a build tool that takes care of it all. And does it correctly each time.
In summary, the reasons we use build tools are:
They are less work than doing it by hand
They are more accurate
The results are more reproducible.
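As an illustration (all paths and names here are hypothetical), even a minimal Makefile captures those manual javac/jar steps so they run correctly, and in the correct order, every time:

```make
# Hypothetical sketch of what a build tool automates for a Java project
# (recipe lines must start with a tab)
SRC  := $(shell find src -name '*.java')
LIBS := lib/mylib.jar:lib/external.jar

build/app.jar: $(SRC)
	mkdir -p build/classes
	javac -cp $(LIBS) -d build/classes $(SRC)   # compile every source file
	jar cf build/app.jar -C build/classes .     # then package the classes
```

Real build tools (Ant, Maven, Gradle) add dependency downloading and incremental rebuilds on top of this.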
I want to know: why can't the compiler do it?
Because compilers are not designed to perform the entire build process. Just like your oven is not designed to cook a three-course meal and serve it up at the dinner table.
A compiler is (typically) designed to compile individual source code files. There is more to building than doing that.

What is the difference between User Defined SWIFT_WHOLE_MODULE_OPTIMIZATION and Swift Optimization Level?

I'm currently looking into optimizing my project's compile time.
I've known for a while that there's something called whole module optimization (WMO for short), but I've been afraid to try it in Build Settings since I haven't really dug deep into it yet.


As I understand it:
WMO should result in faster code execution but a slightly longer compile time, because it compiles all of the module's files as one whole instead of compiling each file separately in parallel, according to the official Swift blog post on whole-module optimizations.
So it's recommended to set Swift optimization level as follows:
For Debug configuration, set to None [-Onone]
For Release configuration, set to Fast, Whole Module Optimization [-O -whole-module-optimization], as it is not that important to have the best compile time for occasional release builds.
However, while digging for tips about how to reduce compile time for the Debug configuration, I found these User Defined settings:
SWIFT_WHOLE_MODULE_OPTIMIZATION = YES for Debug
SWIFT_WHOLE_MODULE_OPTIMIZATION = NO for Release
This setting reduced my Debug compile time almost by half.
Since I'm new to the Swift compiler and User Defined settings, I tried to find official documentation on SWIFT_WHOLE_MODULE_OPTIMIZATION, but confusingly there doesn't seem to be any online.
People just say it reduces compile time without further explanation, or their claims conflict with the Swift Optimization Level behavior mentioned above.

As I understand it, setting this to YES should increase the compile time, since it enables WMO. Therefore I think I've got WMO wrong.


Questions:


What is the difference between Swift Optimization Level settings and SWIFT_WHOLE_MODULE_OPTIMIZATION?
Why does SWIFT_WHOLE_MODULE_OPTIMIZATION reduce compile time?
Thank you!
The main difference is that whole-module optimization refers to how the compiler optimises the module as a whole, while the Swift Optimization Level refers to the compilation of each individual file. You can read more about the different flags for the Swift Optimization Level here.
SWIFT_WHOLE_MODULE_OPTIMIZATION improves compilation time because the compiler gets a more global view of all functions, methods, and the relations between files, allowing it to ignore unused functions, optimise the compile order, and make other improvements. It also focuses on recompiling only modified files, which means that even with the flag activated, if you clean your project and delete the derived data folder, you will still see a longer compilation time on the first run.
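To make the tip concrete, the Debug half of the setup described in the question could be written in an xcconfig file (assuming you drive build settings from xcconfig files; the same values can be entered directly in the Build Settings UI):

```
// Debug.xcconfig -- sketch: unoptimized code, but whole-module compilation
SWIFT_OPTIMIZATION_LEVEL = -Onone
SWIFT_WHOLE_MODULE_OPTIMIZATION = YES
```

The Release configuration would keep SWIFT_WHOLE_MODULE_OPTIMIZATION = NO and rely on the optimization level itself, as described in the question.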

Compiler optimization to reduce bytes of executable code

Is it possible for a compiler (for example, javac) to scan your whole project for unused methods and variables before compilation and then compile the project without them, so that you end up with fewer bytes of executable code?
If this were a compiler optimization, I would create one huge library containing all my helper methods and import it in all my projects, without worrying that its size could affect my software's size.
I understand this could be impossible if you do not have the source code of the libraries you are using (importing), but I am speaking of the case where you have the source code.
Is there a tool/IDE plugin that does something similar? I would think this could also be done in one step ahead of the compilation.
Java's compiler doesn't do this natively, but you can use a tool like ProGuard or any number of other Java optimizers to remove unused code.
But, in your case, why don't you just compile your big software library once and put it on your classpath? That way you don't have to duplicate it at all.
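As a sketch of the ProGuard route (the jar names and entry-point class here are hypothetical), a minimal rules file keeps only the entry point and lets the shrinker strip everything unreachable from it:

```
# proguard.pro -- minimal shrinking configuration (illustrative)
-injars  build/app.jar
-outjars build/app-shrunk.jar
-libraryjars <java.home>/lib/rt.jar

# Keep the entry point; unused methods and fields elsewhere are removed
-keep public class com.example.Main {
    public static void main(java.lang.String[]);
}
```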

Compatibility of code Progress 8 to OpenEdge 11

We have an ERP system running in our company based on Progress 8. Can you give an indication of how compatible OpenEdge 11 is with version 8? Is it like "compile the source" and it will run (with testing, of course :-)), or more like every second line will need rework?
I know it's a general question but maybe you can provide a general answer? :o)
Thanks,
Gunter
Yes. Convert the db and recompile.
Sometimes you might run across keyword conflicts. A quick fix for that is the -k parameter (the "keyword forget list"). Using -k is a quick way to get old code whose variables or table/field names have become new keywords to compile while you work on renaming them.
You might also see the occasional situation where the compiler has tightened up the rules a bit. For instance, there was some tightening of rules around defining shared variables in the v8/v9 time frame -- most of what I remember about that was looking at the impacted code and asking myself "how did that ever compile to start with?"
Another potential issue -- if your application uses a framework (such as "smart objects") whose API might change from release to release it is important to make sure that you compile against the version of that framework that your code requires -- not something newer but different.
Obviously you need to test but the overwhelmingly vast majority of code recompiles and runs without any issues.
We did the conversion from Progress 8.3E to OpenEdge 11 just a few days ago. It went much as Tom wrote: convert and recompile.
The only problem was one database that had originally been created in Progress version 7. There the conversion failed, but since it was a small database, it was quicker to dump, recreate, and load.

Does it matter if there are unused functions I put into a big CoolFunctions.h / CoolFunctions.m file that's included everywhere in my project?

I want to create one big file for all the cool functions I find somehow reusable and useful, and put them all into that single file. Well, for now I don't have many, so it's not worth thinking much about splitting them into several files, I guess. I would use pragma marks to separate them visually.
But the question: would those unused methods hurt in any way? Would my application explode or perform worse? Or is the compiler/linker clever enough to know that functions A and B are not needed, and thus not copy their code into my resulting app?
This sounds like an absolute architectural and maintenance nightmare. As a matter of practice, you should never make a huge blob file with a random set of methods you find useful. Add the methods to the appropriate classes or categories. See here for information on the blob anti-pattern, which is what you are doing here.
To directly answer your question: no, methods that are never called will not affect the performance of your app.
No, they won't directly affect your app. Keep in mind, though, that all that unused code is going to make your functions file harder to read and maintain. Plus, writing functions you're not actually using at the moment makes it easy to introduce bugs that won't become apparent until much later, when you start using those functions. That can be very confusing, because you've forgotten how they're written and will probably assume they're correct since you haven't touched them in so long.
Also, in an object oriented language like Objective-C global functions should really only be used for exceptional, very reusable cases. In most instances, you should be writing methods in classes instead. I might have one or two global functions in my apps, usually related to debugging, but typically nothing else.
So no, it's not going to hurt anything, but I'd still avoid it and focus on writing the code you need now, at this very moment.
The code would still be compiled and linked into the project; it just wouldn't be used by your code, meaning your resulting executable will be larger.
I'd probably split the functions into separate files, depending on the common areas they address, so I'd have a library of image functions separate from a library of string-manipulation functions, then include whichever are pertinent to the project at hand.
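The same point can be shown outside Objective-C. In Java, for instance, a method that is never called still ends up in the compiled class file; a small self-contained sketch (class and method names are made up):

```java
import java.lang.reflect.Method;
import java.util.Arrays;

public class CoolFunctions {
    static int usedHelper() { return 42; }
    static int neverCalled() { return -1; }  // never invoked anywhere

    public static void main(String[] args) {
        System.out.println(usedHelper());
        // The unused method is still present in the compiled class:
        boolean found = Arrays.stream(CoolFunctions.class.getDeclaredMethods())
                .anyMatch(m -> m.getName().equals("neverCalled"));
        System.out.println(found);
    }
}
```

Running this prints 42 and then true: the compiler keeps neverCalled in the bytecode even though nothing ever calls it, which is exactly why the binary grows.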
I don't think having unused functions in the .h file will hurt you in any way. If you compile all the corresponding .m files containing the unused functions into your build target, you will end up with a bigger executable than required. The same goes if you include the code via static libraries.
If you do use a function but you didn't include the right .m file or library, then you'll get a link error.