When to use the Xcode Distributed Build feature - iPhone

I work in a small iPhone development team. In our office we have at least four copies of Xcode running on the network at any one time, and we're contemplating getting everyone to enable distributed builds.
We're networked together over a standard Wi-Fi connection, so network speed and latency aren't as good as on a wired network...
Just wondering: is there any real time gain to be had from distributed builds once the relevant data has been passed back and forth over the network, at least for relatively small projects?

It depends on your project, its dependencies, and the amount of data that must be transferred.
15-20 seconds is not terrible. Certainly, there is more work to be done overall. It may be a good idea for everyone to farm builds out to a single very fast Mac Pro rather than to each other, especially if you're all on dual-core machines (that info wasn't given).
As far as project configuration goes: if you have a bunch of dependent libraries in your projects, it may help to disable precompiled headers. Much of the equation is the average number of dependencies and the number of object files to generate.
At 15-20 seconds, many developers would be better served by optimizing their build times before farming builds out. If builds took a few minutes, then you might want to jump straight to distributed builds on an 8- or 12-core machine.
One easily overlooked contributor to slow builds on small projects: disable static analysis on every build, and just run it manually every hour or two, fixing any issues then.
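If you drive builds from the command line, you can override the analyzer build setting per invocation; the setting name below (RUN_CLANG_STATIC_ANALYZER) is an assumption about your Xcode version, and the MyApp target name is hypothetical, so check both against your project first.

    # routine build with the analyzer disabled (setting name assumed)
    xcodebuild -target MyApp -configuration Debug RUN_CLANG_STATIC_ANALYZER=NO build
    # periodic analyzer pass, run manually every hour or two
    xcodebuild -target MyApp -configuration Debug RUN_CLANG_STATIC_ANALYZER=YES build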
Beyond that, your project could probably be divided into smaller projects/libraries. Chances are you won't always be editing the same dependencies.
Assuming compilation, linking, etc. are where the time is spent at this point, much of the rest comes down to the typical issues involved in building C and C++ programs: minimize your dependencies and include graphs. This is actually quite easy to do with Objective-C; since most of the interfaces use Objective-C types, you can use forward declarations (see the sketch below).
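For illustration, a minimal C++-style sketch of the idea (the class names are made up); in Objective-C the equivalent is @class in the header and #import only in the .m file.

    // Widget.h -- hypothetical header
    // Forward-declare Renderer instead of including its header; only the
    // source file that actually uses Renderer needs the full definition.
    class Renderer;

    class Widget {
    public:
        explicit Widget(Renderer* renderer);
        void draw();
    private:
        Renderer* renderer_;   // pointers to an incomplete type are fine
    };
    // Widget.cpp would #include "Renderer.h" to get the full definition.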
If your library is small (e.g. fewer than 50 object files generated), you may also gain a speedup by not using precompiled headers. If everything already depends on your inclusion of a dozen system frameworks via the PCH... then try it in the next project.
Of course, you could just try timing a clean rebuild, a build with pre-generated PCH files, and several incremental builds in order to come to a conclusion.
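For example, from the command line (the target and file names are hypothetical; adjust to your project):

    # clean rebuild
    time xcodebuild -target MyApp -configuration Debug clean build
    # incremental build after touching one source file
    touch Classes/SomeClass.m
    time xcodebuild -target MyApp -configuration Debug build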

Related

A way to leverage GWT 2.8 incremental compilation to add extra modules faster

GWT used to be able to produce and consume *.gwtar files to speed up (at least a little) the transpilation: libraries could ship with their *.gwtar files, and only modified code would need to be fully rebuilt.
I am working on a large GWT application that allows for more modules to be added by the product's end user - kind of like plugins. These modules are specifically packaged and must obey certain contracts. During deployment we rebuild the app using the combined code (existing + the module being added). The production build process already takes significant time, but we can live with it in development, where we use Super Dev Mode for the rest. However, end users see multi-minute builds (say 10-20!) when they add a module, and that is rather inconvenient. *.gwtar files no longer help/work - we noticed that they started breaking some things with GWT 2.7, actually (the compiler reported errors with 3rd-party libraries, but only when *.gwtars were used).
GWT isn't really made to be able to have independently compiled modules talk to one another without rebuilding (that would be a very welcome feature), but we are looking for a way to leverage GWT incremental compilation to speed up the process and enhance our end users' experience. I have not been able to find the documentation on where the intermediate artifacts are stored and whether/how they can be reused.
While there is a concern about the stability of this mechanism - i.e. the files may change from release to release - most of the time is taken by the base product, which also supplies the tooling. Thus we can update these artifacts with every release too, as needed; even if plugins don't come with them, the transpilation may still be faster.
Can anyone, please, help me figure out how to leverage this (GWT 2.8 incremental transpilation) for the above use case?

Auto-generation of code in order to mass-produce software. Is this sensible?

Our company plans to auto-generate our projects from the domain layer up to the presentation layer so that we can mass-produce software. The idea is that we can produce a project within 24 hours. I think this is possible, but that's not my concern.
What are the ramifications of such a plan? I just think that the quality of software produced from such a grandiose idea will be poor. First, clients have varying requirements. Even assuming we can standardize what's common among them, there will still be requirements that go beyond our original template.
Second, how can such software be reliable if it is not fully tested? Can a 24-hour period cover full unit, integration, and other types of testing?
In the end, it appears we won't be able to hit the 24-hour target, thereby defeating our original purpose.
I just believe it's better to build quality software than to mass-produce it. How do I tell my boss that this idea is wrong?
Sorry, but I don't think this is sensible.
In order to build a system that can auto-generate any kind of software to fill any kind of requirement, you will more or less have to implement more than the software you plan to generate.
Auto-generated code is great when you have repetitive tasks, information, or components that are similar enough that a one-time effort can generate all the repetitions.
But trying to write a system that can produce any kind of project is not feasible. To support a wide enough range of projects, your system will have to offer a very wide set of capabilities for describing project configuration and behavior, and the time required to describe each project's behavior will not necessarily be shorter than the time it would have taken to implement the project in the first place. You will just end up developing a development environment, and implementing projects in your own language.
So, instead, why not just take an existing development environment that is already available? Like Visual Studio?
Maybe you should have a big library repository and put everything that could be reusable into it. That way, the software you have to write for each project would be very small. There could be documentation templates associated with the library, and you would just have to copy and paste them.
However, it takes time to build such a library.
Or do it this way.

GCC/Xcode speedup suggestions?

I have a stock Mac mini server (2.53 GHz, 4 GB RAM, 5400 RPM drive) which I use for iPhone development. I am working on a smallish app. Compilation seems to be taking longer and longer (now about 20 min). The app is about 20 small files totaling 4000 lines. It links against C++/Boost.
What suggestions do you have to speed up the compilation process? Cheaper is better.
Sounds like the system is swapping, which is a common problem when compiling STL-heavy C++. GCC can be a complete memory pig in that situation, and two GCC instances at once can easily consume all available memory.
Try:
defaults write com.apple.Xcode PBXNumberOfParallelBuildSubtasks 1
And see if that helps.
If it doesn't, then the other answers may help.
As others have noted, that's way in excess of what you should be expecting, so it suggests a problem somewhere. Try toggling distributed builds -- depending on your setup it may improve things or slow them down (for a while we had a situation where one machine was servicing everyone else's builds but not its own). If you have a gigabit network then you may get a bit of extra performance from distributed builds (especially if you have a couple of spare Mac minis). Also enable precompiled headers, and put both the Boost and Cocoa headers in it.
If things are still slow then it's worth running Activity Monitor to make sure you don't have some other process interfering, or even Instruments to get more detailed information. You could find that you're trying to compile files on a network drive, for instance, which will certainly slow things down. Also, inside Xcode, try the option to preprocess a file and look at the resulting file contents (available via right-click) to check that you're not compiling in loads of extra code.
Also, with Boost, try slimming it down to just the bits you need. The bcp tool that comes with the distribution will copy only the packages you ask for, plus their dependencies, so you're not building stuff you won't use.
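For example (the module names and paths here are just placeholders for whatever parts of Boost you actually use):

    # copy shared_ptr and bind, plus everything they depend on,
    # into a local boost-subset directory
    bcp --boost=/path/to/boost shared_ptr bind ./boost-subset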
Include all your boost headers in the precompiled header for your project.
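A rough sketch of what such a prefix header could look like (file and header names are illustrative; the Objective-C imports are guarded so C++ translation units still compile):

    // MyApp_Prefix.pch -- hypothetical prefix header
    #ifdef __OBJC__
        #import <Foundation/Foundation.h>
        #import <UIKit/UIKit.h>
    #endif

    #ifdef __cplusplus
        // Boost headers used throughout the project; precompiling them once
        // avoids reparsing them for every translation unit.
        #include <boost/shared_ptr.hpp>
        #include <boost/bind.hpp>
    #endif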
I concur with Eiko. We're compiling projects that are vastly larger (including Boost libraries) in much less time.
I'd start by checking logs. You can also view the raw compiler output in the build window (Shift+CMD+B or in the Build menu->Build Results). Xcode just makes straight gcc or llvm calls, so this may give you a better idea of what's actually happening during the build process.
There is something messed up. We are compiling bigger projects in fractions of the time.

One big executable or many small DLLs?

Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs; I put everything into this one big executable.
Having one big executable has certain advantages:
Installing my application at a customer site really is just: copy and run.
Upgrades can be easily zipped and sent to the customer
There is no risk of conflicting DLLs (where the customer has version X of the EXE but version Y of the DLL)
The big disadvantage of the big EXE is that linking times seem to grow exponentially.
An additional problem is that part of the code (say about 40%) is shared with another application. Again, the advantages are:
There is no risk of having a mix of incorrect DLL versions
Every developer can make changes to the common code, which speeds up development.
But again, this has a serious impact on compilation times (everyone compiles the common code again on his PC) and on linking times.
The question Grouping DLL's for use in Executable mentions the possibility of merging DLLs into one executable, but it looks like this still requires you to bind all functions manually in your application (using LoadLibrary, GetProcAddress, ...).
What is your opinion on executable sizes, the use of DLLs, and the best 'balance' between easy deployment and easy/fast development?
A single executable has a huge positive impact on maintainability. It is easier to debug, deploy (size issues aside) and diagnose in the field. As you point out, it completely sidesteps DLL hell.
The most straightforward solution to your problem is to have two compilation modes, one that builds a single exe for production and one that builds lots of little DLLs for development.
The tenet is: reduce the number of your .NET assemblies to the strict minimum; a single assembly is the ideal. This is, for example, the case for Reflector and NHibernate, which both come as very few assemblies. My company published two free white papers on the topic of one big executable vs. many small DLLs:
Partitioning code base through .NET assemblies and Visual Studio projects (8 pages)
Defining .NET Components with Namespaces (7 pages)
The arguments developed in these white papers come with valid and invalid reasons to create an assembly, plus a case study on the code base of the tool NDepend.
The problem is that MS fostered (and is still fostering) the idea that assemblies are components, while assemblies are just physical artifacts for packaging code. A component is a logical artifact, and typically an assembly should contain several components. It is a good idea to partition components using namespaces, although it is not always practicable (especially in the case of a framework with a public API, where namespaces are used to partition the API and not necessarily the components).
One big executable is definitely beneficial: you get whole-program optimization and less overhead, and maintenance is much simpler.
As for the link time: you can have both the "many DLLs" and the "one big executable" setups at the same time. For each DLL, add a project configuration that builds a static library instead. When you debug, you compile the "DLL" configurations of the projects; when you need to ship, you compile the "static library" configurations. Sometimes you will get different behavior in different configurations, but that will have to be addressed case by case (a sketch of the usual export-macro pattern is below).
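A minimal sketch of the header pattern that lets the same code serve both configurations (the macro and class names are made up):

    // common_api.h -- hypothetical shared header
    #if defined(COMMON_BUILD_DLL)
        #define COMMON_API __declspec(dllexport)   // building the DLL itself
    #elif defined(COMMON_USE_DLL)
        #define COMMON_API __declspec(dllimport)   // linking against the DLL
    #else
        #define COMMON_API                         // static lib / one big EXE
    #endif

    class COMMON_API ReportEngine {
    public:
        void Generate();
    };

The "DLL" configurations define COMMON_BUILD_DLL (or COMMON_USE_DLL on the consuming side), while the shipping configuration defines neither, so everything links statically into the single executable.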
An easier way to maintain large programs is to compose them from smaller, manageable parts. A program can be composed of a shell and modules that add features to the shell. Large programs like Visual Studio and Outlook use the same concept. Try this approach to make your programs more maintainable and robust.

Real-time system proof-of-concept project

I'm taking an introductory course (3 months) on real-time systems design, but without doing any implementation.
I would like to build something that lets me better understand what I'll learn in theory, but since I have never built a real-time system I can't estimate how long any project will take. It would be a proof-of-concept project, or something like that, given my available time and knowledge.
Please, could you give me some idea? Thank you in advance.
I program in T-SQL, Delphi and C#, but I won't have any problem learning another language.
I suggest you consider exploring the Real-Time Specification for Java (RTSJ). While it is not a traditional environment for constructing real-time software, it is an up-and-coming technology with a lot of interest behind it. Even better, you can witness some of the ongoing debate about what matters and what doesn't in real-time systems.
Sun's JavaRTS is freely available for download, and has some interesting demonstrations available to show deterministic behavior, and show off their RT garbage collector.
In terms of a specific project, I suggest you start simple:
1) Build a work generator that you can tune to consume a given amount of CPU time.
2) Put it into a framework that can produce a distribution of work-generator tasks (as threads, or as chunks of work executed in a thread), plus a mechanism for logging the work produced.
3) Produce charts of the execution time, sojourn time, deadline, and slack/overrun of these tasks versus their priority.
4) Demonstrate that tasks running in the context of real-time threads (as opposed to time-sharing threads) behave differently.
(A rough C++ sketch of steps 1-3 follows after the next paragraph.)
Bonus points if you can measure the overhead in the scheduler by determining at what supplied load (total CPU time produced by your work generator tasks divided by wall-clock time) your tasks begin missing deadlines.
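If you would rather start in plain C++ than RTSJ, here is a minimal sketch of steps 1-3; the task parameters, the spin-based burn loop, and plain std::thread scheduling are all simplifying assumptions, and a real experiment would use CPU-time clocks and real-time priorities.

    #include <chrono>
    #include <cstdio>
    #include <thread>
    #include <vector>

    using Clock  = std::chrono::steady_clock;
    using Millis = std::chrono::milliseconds;

    // Step 1: a tunable work generator -- burn roughly 'amount' of time by spinning.
    // (Spinning on a wall clock only approximates CPU time if the task is preempted.)
    static void burn(Millis amount) {
        const auto until = Clock::now() + amount;
        while (Clock::now() < until) { /* busy-wait */ }
    }

    struct Task {
        Millis work;      // how much work to generate per release
        Millis deadline;  // relative deadline for each release
    };

    int main() {
        // Step 2: a small, arbitrary distribution of work-generator tasks.
        const std::vector<Task> tasks = { {Millis(5),  Millis(20)},
                                          {Millis(15), Millis(30)} };
        std::vector<std::thread> workers;
        for (std::size_t i = 0; i < tasks.size(); ++i) {
            const Task t = tasks[i];
            workers.emplace_back([i, t] {
                for (int release = 0; release < 10; ++release) {
                    const auto start = Clock::now();
                    burn(t.work);
                    const auto ran =
                        std::chrono::duration_cast<Millis>(Clock::now() - start);
                    // Step 3: log execution time and slack (negative = deadline miss).
                    std::printf("task %zu release %d: ran %lld ms, slack %lld ms\n",
                                i, release,
                                static_cast<long long>(ran.count()),
                                static_cast<long long>((t.deadline - ran).count()));
                }
            });
        }
        for (std::thread& w : workers) { w.join(); }
        return 0;
    }

Charting these numbers against priority, and then repeating the run with real-time thread priorities, gives you the comparison asked for in step 4.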
Try to think of real-time tasks that are time-critical, for instance video playback, which fails if tasks (e.g. decoding the next frame) are not finished in time.
You can also think of some industrial solutions, but they are probably more difficult to study in your local environment.
You should definitely consider building your system using a hardware development board equipped with a small processor (ARM, PIC, AVR, any one will do). This really helped remove my fear of the low-level when I started developing. You'll have to use C or C++ though.
You will then have two alternatives: either go bare metal, or use a real-time OS.
Going bare metal, you can learn:
How to initialize your processor from scratch and, most importantly, how to use interrupts, which are the fastest way you have to respond to an external event
How to implement lightweight threads with fast context switching, something every real-time OS implements
In order to ease this a bit, look for a dev kit which comes with lots of documentation and source code. I used Embedded Artists ARM boards and they give you a lot of material.
Going with an RTOS:
You'll fast-track your project, and you will be able to learn how to fine-tune an RTOS
You may try your hand at an open-source OS, such as Linux or the BSDs, and learn a lot from the source code
Either choice is good, you will get a really cool hands-on project to show off and hopefully better understand your course material. Good luck!
As most real-time systems are still implemented in C or C++, it may be good to brush up on your knowledge of these languages. Many real-time systems are also embedded systems, so you might want to play around with a cheap open-source board like the BeagleBoard (http://beagleboard.org/). This will also give you the chance to learn about cross-compiling, etc.