I have a stock Mac mini server (2.53GHz, 4GB RAM, 5400RPM drive) which I use for iPhone development. I am working on a smallish app, and compilation seems to be taking longer and longer (now about 20 minutes). The app is about 20 small files totaling 4,000 lines. It links against C++/Boost.
What suggestions do you have to speed up the compilation process? Cheaper will be better.
Sounds like the system is swapping, which is a common problem when compiling STL-heavy C++. GCC can be a complete memory pig in that situation, and two GCCs running at once can easily consume all available memory.
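A quick way to confirm that, assuming a stock OS X install, is to watch paging activity from Terminal while a build is running (just a sketch, not a required step):

    # steadily climbing pageout/swapout counters mean the machine is swapping
    vm_stat 5

    # or inspect swap usage directly
    sysctl vm.swapusage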
Try:
    defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1
And see if that helps.
If it doesn't, then the other answers may help.
As others have noted, that's way in excess of what you should be expecting, so it suggests a problem somewhere. Try toggling distributed builds -- depending on your setup it may improve things or slow them down (for a while we had a situation where one machine was servicing everyone else's builds but not its own). If you have a gigabit network you may get a bit of extra performance from distributed builds (especially if you have a couple of spare Mac minis). Also enable precompiled headers, and put both the Boost and Cocoa headers in them.
If things are still slow, it's worth running Activity Monitor to make sure some other process isn't interfering, or even Instruments to get more detailed information. You might find that you're compiling files on a network drive, for instance, which will certainly slow things down. Also, inside Xcode, try the option to preprocess a file (available via right-click) and look at the resulting file contents to check that you're not compiling in loads of extra code.
Also, with Boost, try slimming it down to just the bits you need. The bcp tool that comes with the distribution will copy only the packages you ask for, plus their dependencies, so you're not building anything you won't use.
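As a sketch (the module names and paths here are only examples; substitute the Boost libraries you actually use and your own paths), bcp is run against the Boost source tree roughly like this:

    # copy shared_ptr and bind, plus everything they depend on, into a slimmed-down tree
    mkdir -p /path/to/slim-boost
    bcp --boost=/path/to/boost_1_40_0 shared_ptr bind /path/to/slim-boost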
Include all your boost headers in the precompiled header for your project.
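For illustration only (the file name and the particular headers are hypothetical; include whichever Boost and Cocoa headers your code pulls in everywhere), the prefix header could look like the one written out below, set as the project's Prefix Header with "Precompile Prefix Header" enabled:

    # write a prefix header; Xcode will precompile it once and reuse it for every file
    cat > MyApp_Prefix.pch <<'EOF'
    #ifdef __OBJC__
        #import <Foundation/Foundation.h>
        #import <UIKit/UIKit.h>
    #endif

    #ifdef __cplusplus
        #include <boost/shared_ptr.hpp>
        #include <boost/bind.hpp>
    #endif
    EOF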
I concur with Eiko. We're compiling projects that are vastly larger (including boost libraries) in much shorter time.
I'd start by checking the logs. You can also view the raw compiler output in the build window (Shift+Cmd+B, or Build menu -> Build Results). Xcode just makes straight gcc or llvm calls, so this may give you a better idea of what's actually happening during the build process.
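You can get the same raw output from Terminal, which is handy for timing and for inspecting the exact compiler flags (a sketch; it assumes a single project in the current directory):

    # run the build outside the IDE and keep the full gcc/llvm command lines
    xcodebuild -configuration Debug build 2>&1 | tee build.log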
Something is messed up; we compile bigger projects in a fraction of the time.
I'm currently working on a Preact app that connects to Firebase.
My problem is this: since adding Firebase to the project, watch build times are seriously affected, jumping from an average of 10s to almost a minute.
From what I read online, this is merely informational and not an error (kind of obvious from the 'Note:' bit).
Question: Is there any way to disable optimization for specific modules?
Thanks
Enabling compact mode on its own will not have any obvious performance benefits, especially not of the magnitude you're looking for.
Babel will still parse and transpile every module you throw at it -- what compact mode does is omit whitespace from the output, making it smaller but no longer human-readable. And that's all the note is saying: the module is so large that Babel is enabling compact mode for it by itself.
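If you just want the note gone, you can set compact explicitly for the offending modules. The sketch below assumes a Babel 7-style project-wide config (a root babel.config.js, which, unlike a .babelrc, can reach files under node_modules); it only silences the note and does not make transpilation faster:

    # write a hypothetical root Babel config that forces compact output for firebase
    cat > babel.config.js <<'EOF'
    module.exports = {
      overrides: [
        {
          test: /node_modules[\\/]firebase/,
          compact: true, // acknowledge the large module; no speed benefit
        },
      ],
    };
    EOF

If the real goal is faster watch builds, the usual route is to stop running Babel over node_modules at all, which is configured in your bundler rather than in Babel itself.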
Suppose I have a source code directory and I want to run a script that scans the code in it and returns the languages, frameworks and libraries used. I've tried github/linguist; it's a great tool that GitHub itself uses to detect the programming languages in a codebase, but I'm not able to go beyond that and detect the frameworks exactly.
I've also tried tools like it-depends to fetch the dependencies, but the results are a mess.
Could someone help me figure out how to do this, either with an existing tool, or, if I have to build such a tool myself, how I should approach it?
Thanks in Advance
This is, in the general case, impossible. The halting problem precludes any program from being able to compute, in finite time, what other programs may or may not do, including which dependencies they require to run. Sure, you can make it work for some inputs, but never for all.
So you have to compromise:
Which languages do you need to support? it-depends does not try to support Java, for example. Different languages have different ways of pulling in dependencies from their source code. If you're working with C, for example, you will want to look at #includes.
Which build chains do you need to support? Parsing a standard Makefile for C is very different from, say, looking into a Maven pom.xml for Java. Additionally, build chains can perform arbitrary computation -- and again, due to the halting problem, your dependency-detection program will not be able to "statically" figure out the intended behavior. It is entirely possible to link against one library or another (or none at all) depending on what is detected to exist. What should you output in that case? For programs that have no documented build process, you simply cannot know their dependencies. Often the build process is human-documented but not machine-readable.
What do you consider a library/framework? Long-lived libraries evolve through many versions, and the fact that one version is required and not another may not be explicit in the source code. If a codebase depends on behavior found only in a specific, now superseded, version of a library, and no explicit mention of that version exists, your dependency-detection program has no way to know about it (unless you code in library-version-specific detection, which is doable, but only on a case-by-case basis, and requires deep knowledge of the differences between versions).
Therefore the answer to your question is that... it depends (the it-depends project goes into a fair amount of detail regarding its limitations). For the specific case of Java + Maven, which is not covered by it-depends, you can use Maven itself via mvn dependency:tree. Choose a subset of the problem instead of trying to solve it all at once.
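For instance, the Maven part is one command (run from the directory containing the pom.xml), and a crude source-level scan for C headers is not much more; the grep pattern below is only a rough illustration:

    # print the full resolved dependency tree of a Maven project
    mvn dependency:tree

    # list every header a C codebase pulls in (a first approximation of its dependencies)
    grep -rho '#include[[:space:]]*[<"][^>"]*[>"]' --include='*.c' --include='*.h' . | sort -u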
Is it possible to compile a Swift binary from an OS X computer so that it runs on a server running Linux as a single binary, with no extra libraries that need to be dynamically linked?
I'm thinking of something like passing a -target to the swift command, plus another parameter to have it statically link all dependencies, but I'm not sure what the exact commands are.
The exact value for -target seems to be rather elusive.
Do I need to know the exact target distribution to be able to pass the correct string to the -target parameter?
From reading the sources on GitHub: the target would be Linux, the machine would be x86_64, and this gets called by the primary build script. That, however, only answers part of the question:
The exact value for -target seems to be rather elusive.
Install a GCC toolchain for Mac OS X that can retarget Linux; one project that I can see is OSXCross, for example.
Supply the environment variable values that point GCC at that toolchain before running the script.
Unfortunately, that does not guarantee it will work, but give it a try and see what happens.
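For reference, the invocation people are aiming for has roughly the following shape. This is a sketch only: the triple is the usual LLVM form and -static-stdlib is a real swiftc flag, but none of it will work without a Linux sysroot/SDK and a matching linker installed alongside your Mac toolchain:

    # hypothetical cross-compile from OS X targeting 64-bit Linux,
    # statically linking the Swift standard library
    swiftc main.swift \
        -target x86_64-unknown-linux-gnu \
        -static-stdlib \
        -o myserver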
Is it possible to compile a Swift binary from an OS X computer so that it runs on a server running Linux as a single binary, with no extra libraries that need to be dynamically linked?
The short answer? Of course it is! Anything is possible when you put your mind to it!
Is it efficient? Inherently, no.
While I'm sure everyone here is familiar with what a compiler does, for the sake of this question and its newest users: a compiler is an application that converts human-readable code into a binary format that a computer can understand. It is worth mentioning, however, that not all computers are the same. Each operating system has its own executable format, ABI and system libraries, so a binary built for one system generally will not run on another, even on the same CPU. As stated in many prior questions, for example this one, many programming languages can be built on a variety of machines, but the resulting binaries are rarely portable across them.
~~~~~~~
So how do we fix this?
~~~~~~~
Well, there are a number of ways to fix this. The most obvious method is to simply make your environment the target environment and build your program to your heart's content. You apparently already did this through a virtual machine, and this is typically what many developers do. It is by far the easiest solution, but it defeats the purpose of the question, where you want to build straight from your OS X machine.
Earlier you said you heard people talking about compiling Windows programs on Linux machines. What makes that possible is a cross-toolchain together with the Windows headers and libraries that are not normally present on a Linux machine (Cygwin, by contrast, is the reverse case: a Unix-like environment layered on top of Windows). All it is doing is adding the binaries and headers the compiler needs so it has the proper places to map to when Windows-only calls are found in the code; in other words, it introduces the pieces necessary for a program to be successfully ported over. This ties into the second option.
Second comes a compiler that supports cross-platform compilation. While I am currently not very familiar with such compilers, this is technically a valid solution, but would I call it reliable? Probably not. If anything, you are adding more work for your compiler: not only does it have to correctly translate the program for one machine, it also has to spend time linking it against the foreign binaries. Additionally, the compiler has to carry those target-specific mappings around, which means more baggage for that one compiler.
Even then, systems like these are few and far between, and whether one is guaranteed to work depends on how well the compiler's maintainer knows their stuff, how frequently they update it, and so on. And the chance that they got all the target mappings right in the first place is not something I'd stake my life on.
The third and perhaps most ideal solution is to look into container technologies such as Docker. Containers are essentially a way to build your app and move it to new machines without having to change or modify how you build and compile it. Simply build it once, store it in a container, move it to your machine of choice, and integrate it into your current project. Come to think of it, container systems like Docker were built to prevent the very thing you are currently experiencing: source code that works on your one machine but nowhere else. Something like Docker will be able to run your code on any machine without having to recompile it for each new one.
Docker provides an interesting framework for container-to-container communication and application examples, and it has fairly straightforward documentation, so it is worth a look to see whether parts of your project could be moved to Docker.
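As a sketch (it assumes the official swift image that exists on Docker Hub these days and a Swift Package Manager layout), producing a Linux binary from a Mac can be a single command:

    # build the package inside a Linux container; the binary under .build/release
    # is a Linux executable, produced without touching the macOS toolchain
    docker run --rm -v "$PWD":/app -w /app swift swift build -c release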
This being said, there are multiple ways to fix the issue you are currently facing, so, as the software engineer, it is up to you to decide which is the most suitable way of handling your project.
I work on a small iPhone development team; in our office we have at least 4 copies of Xcode running on the network at any one time, and we're contemplating getting everyone to have it running.
We're networked together using a standard Wi-Fi access point, so network speed and latency aren't as good as on a wired network...
Just wondering: is there any real time gain to be had from using distributed builds once the relevant data has been passed back and forth over the network, at least for relatively small projects?
It depends on your project, its dependencies, and the amount of data that must be transferred.
15-20 seconds is not terrible. Certainly, there is more work overall to be done. It may be a good idea for everyone to farm builds out to a single very fast Mac Pro, rather than to each other, if you're all on dual-core machines (that info was not given).
As far as project configuration goes: if you have a bunch of dependent libraries in your projects, then it may help to disable precompiled headers. Much of the equation is the average number of dependencies and the number of objects to generate.
At 15-20 seconds, most developers would do better to optimize their build times before farming builds out. If it were a few minutes, then you might want to jump straight into distributed builds with an 8- or 12-core machine.
One easily overlooked aspect of slow builds on small projects: disable static analysis per build, and just run it manually every hour or two, fixing every issue then.
Otherwise, your project could probably be divided into smaller projects/libraries. Chances are, you won't always be editing the same dependencies.
Assuming compilation, linking, etc. are where the time is spent at this point: much of the rest falls into the typical issues involved in building C and C++ programs. Minimize your dependencies and include graphs. That's actually quite easy to accomplish with ObjC; since much of the interface uses ObjC types, you can use forward declarations.
If your library is small (e.g. fewer than 50 objects generated), then you may also gain a speedup by not using precompiled headers. If everything already depends on your inclusion of the 12 system frameworks in the pch... then try it in the next project.
Of course, you could just try timing a clean rebuild, a build with generated pch files, and several incremental builds in order to come to a conclusion.
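A rough way to get those numbers (the target name here is hypothetical; substitute your own):

    # clean rebuild vs. incremental build, timed from the command line
    time xcodebuild -target MyApp -configuration Debug clean build
    time xcodebuild -target MyApp -configuration Debug build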
I've worked on a number of products that make use of code generation. It seems to be the only way to achieve both a high degree of user-customizability and high execution speed.
The downside is that we are requiring users to install a compiler (primarily on MS Windows).
This has been an ongoing headache, because vendors like MS keep obsoleting compilers, and some users tend to have more than one compiler installed.
We're considering using GNU C, and possibly C++, but even there, there are continual version issues.
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
Ideally there would be some way to produce generated code that would be flexible, run fast, and not expose us to the whims of third-party providers.
Maybe I'm overlooking something simple, like Java. Any ideas would be appreciated. Thanks.
If you're considering C and even assembler, take a look at LLVM first: http://llvm.org
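As a rough illustration of the workflow (file names are made up, and it assumes clang is installed), the generator can emit C, lower it to LLVM IR, and hand that to the same toolchain on every platform:

    # lower generated C to LLVM IR, then let LLVM optimize and produce the native binary
    clang -S -emit-llvm generated.c -o generated.ll
    clang -O2 generated.ll -o generated_binary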
I might be missing some context here, but could you just pin yourself to a specific version? E.g., .NET 2.0 can be installed side by side with .NET 1.1 and .NET 3.5, as well as other versions that will come out in the future. So as long as your code makes use of a specific version of a compiler, what's the problem?
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
That would be called a compiler :)
Why don't you stick to C90?
I haven't heard of gcc committing severe standards violations, as long as you don't use extensions.
And you can always distribute a specific version of gcc along with your product -- say, 4.3.2 -- giving users the option to use their own compiler at their own risk.
As long as all the code is generated by you (i.e. you don't embed your instructions into other people's code), there shouldn't be any problem testing against this version and using it to compile your libraries.
If you want to generate assembly language code, you may take a look at asmjit.
One option would be to use a language/environment that provides access to the compiler in code; For example, here is a C# example.
Why not ship a GNU C compiler with your code generator? That way you have no version issues, and the client can constantly generate code that is usable.
It sounds like you're looking for LLVM.
Start here: The Code Generation conference
In the spirit of "it might not be too late to add my 2 cents", as in Alvin's answer, here is something I'd think about: if your application is meant to last for some years, it is going to face several changes in how applications and systems work.
For instance, let's say you were thinking about this 10 years ago. I was watching Dexter back then, but I guess you actually have memories of how things were at that time. From what I can tell, multithreading was not much of an issue to developers in 2000, and now it is. So Moore's law broke for them. Before that, people didn't even care about what would happen in Y2K.
Speaking of Moore's law, processors are indeed getting quite fast, so maybe certain optimizations won't even be necessary. And the array of optimizations will probably grow: some processors are getting hardware support for server-centric tasks (XML, cryptography, compression and regex! I am surprised such things can be done on a chip) while also using less energy (which is probably very important for warfare hardware...).
My point is that focusing on what exists today as a platform for tomorrow is not a good idea. Make it work today, and it will most likely keep working tomorrow (backward compatibility is especially valued by Microsoft, Apple doesn't seem bad at it either, and Linux is very liberal about letting you make things work the way you want).
There is, yes, one thing you can do: attach your technology to something that just won't (likely) die, such as JavaScript. I'm serious: JavaScript VMs are getting terribly efficient nowadays and are only going to get better, plus everyone loves the language, so it's not going to disappear suddenly. If you need more efficiency or features, maybe target the CLR or the JVM?
Also, I believe multithreading will become more and more of an issue. I have a gut feeling the number of processor cores will follow a Moore's law of its own. And architectures are more than likely to change, judging from the cloud buzz.
PS: In any case, I believe the C optimizations of the past are still quite valid under modern compilers!
I would stick to the same language you use to do the generating: you can generate and compile Java code from Java, Python code from Python, C# from C#, and even Lisp from Lisp, etc.
But it is not clear whether such languages are fast enough for you. For top speed, I would generate C++ and compile it with GCC.
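A minimal sketch of that generate-then-compile loop (file names are arbitrary, and it assumes g++ is on the PATH):

    # write the generated C++ to disk, compile it with optimizations, and run the result
    cat > generated.cpp <<'EOF'
    #include <cstdio>
    int main() { std::puts("running generated code"); return 0; }
    EOF
    g++ -O2 generated.cpp -o generated_binary && ./generated_binary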
Why not use something like SpiderMonkey or Rhino (embeddable JavaScript engines for C++ and Java, respectively)? You can export your objects into JavaScript namespaces, and your users don't have to compile anything.
Embed an interpreter for a language like Lua/Scheme into your program, and generate code in that language.