I'm asking the experts here...
Has anybody done performance tests to determine which compiler is best for iPhone apps?
Since we have the choice between:
GCC 4.2
LLVM GCC 4.2
LLVM compiler 1.5
I'm wondering which of the three offers the best performance...
I've done some quick tests myself but haven't found much of a difference.
Which compiler are you using?
If you watch the WWDC 2010 session video 300, the Developer Tools State of the Union, you will see that Apple reports significant performance increases for applications built using the LLVM compiler over GCC (up to 60% faster in certain cases). There are additional improvements that can be made by using the Clang parser with the LLVM compiler. Watch Session 312 - "What's New in the LLVM Compiler" for more on this, as well as the sessions on LLVM from WWDC 2009, if you have them.
I saw a 20% speedup going from GCC to LLVM 1.5 in an informal benchmark within one of my applications, but it wasn't a rigorous test, so consider that only anecdotal evidence.
My recommendation is to use Clang + LLVM (LLVM Compiler 1.5) if you can for faster build times, more performant applications, and much better compiler errors. If you use C++ code or something else that the Clang parser cannot handle right now, use LLVM GCC to still get the performance benefits in your compiled application. Go to GCC only if that fails for some reason. It's a simple switch to hit in your build settings to gain even a small amount of extra performance for free in your end application.
LLVM Compiler 2.0, coming with Xcode 4, has full support for C++ and promises additional optimizations for compiled applications, along with more compile-time speedups. Xcode 4 even uses Clang as the syntax highlighting / code correction engine in the IDE. It's clear which direction Apple is heading with its compilers.
I do not know about performance for the iPhone, but in other benchmarks, Clang generally compiles faster but produces slower code than GCC. Clang also has better error messages than GCC. Thus, it might be best to use Clang in development, and switch to GCC for final production builds. If you choose such an approach, make sure you have a good QA cycle, or a build system that will also build and test the GCC build, so you don't get any nasty compiler related surprises at the end.
C++ support in Clang is a bit behind that of GCC (and more C++ code has been tested and tweaked for GCC's quirks than Clang's), so if you need to use a lot of C++, GCC might be a better option.
Really, you will need to choose the best compiler for your needs. Benchmarks, and other people's results, can give you an indication of what to consider, but every program is different, so the best approach would be to benchmark your own program on the different compilers, and see which works best for you.
LLVM GCC 4.2 is what I use.
Clang does not handle C++ well enough, and it's very much a work in progress. It's a very promising toolkit, but it's just not stable enough for production at this time (in my experience).
Apple's definitely investing in Clang as their future compiler, but it is not a trivial project. Unfortunately, that puts many of us in a strange place, using one relatively old compiler and/or one very, very new one (guess how many years it will be before I can begin using C++0x features in my codebases).
I've used the GCC frontend with the LLVM backend on my codebases since it became available (at least during testing). It's been publicly available for years and is fairly stable. I've found the LLVM pass does produce smaller, faster executables in comparison to GCC alone (although I do more work targeting OS X than iOS). Frankly, I can't compile enough code with Clang alone to recommend it (plus, I have a lot of C++).
I've found the GCC+LLVM combo reliable. If reliability is your primary concern: begin with GCC, regularly test +LLVM in development, and regularly compile and test with Clang at each Clang release until you're satisfied with it. GCC+LLVM will usually be usable for today's production builds.
If speed is your concern, begin with GCC+LLVM, and test with Clang regularly (if that is an option for you -- it is not for me: too much C++).
Regarding Clang's parsing/lexing/generation: Clang aims to be extremely standards-compliant. They're doing well, but there are many features which are new or nonexistent, which is why I suggest you be cautious, especially with C++.
I believe Apple's made their preferred compiler of the future obvious so... don't wait too long to test with Clang.
I'd like to point you to this excellent article: Compiler Options in Xcode - GCC or LLVM?
You won't want to miss a single word, especially the section "Which to choose?"
Clang 1.5's C++ frontend is not the best, and I generally recommend against using it if you have to deal with any C++ code (this includes Objective-C++ code). In addition, I have experienced some weak-linking problems when using Clang, so to me, it's not ready for production if you have to deal with either of the above two cases.
That said, I have not noticed any real impact on performance between the two, though clang's errors and warnings are MUCH more useful than gcc's.
Food for thought.
I've read through the LLVM Kaleidoscope tutorial, but it's about how to use their tools. I'm looking for a way to write my own code that allows me to take an abstract syntax tree and generate the LLVM IR.
Unfortunately, I'm a little lost on how to go about doing that. My current idea was to have each node of my AST do a fill-in-the-blank style of string generation. However, this seems inelegant, and there is probably a better way to do it.
I read this question, which is similar to mine, but from my understanding of the LLVM IR (which could be completely off), it behaves more like a higher-level language than traditional assembly languages, with functions and variables (infinite registers). So I think different techniques may apply.
Not inelegant at all. Generally speaking, most compilers generate some form of IR or VM instruction set, which then gets optimized using well-documented approaches. At the end, the compiler translates that result into the target machine code.
Think of LLVM IR as that internal IR: you generate it and let the toolchain take care of optimizing it and producing machine code.
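Concretely, rather than doing fill-in-the-blank string generation, each AST node can carry a codegen method that emits instructions through llvm::IRBuilder and returns the llvm::Value it produced; parent nodes then combine the values of their children. Below is a minimal sketch of that approach against the LLVM C++ API. The Expr/Number/BinaryOp types are invented for illustration (they are not from any existing tutorial code), but the IRBuilder, Module, and Function calls are the real API:

    // Minimal sketch (LLVM C++ API). The AST types below are invented;
    // only IRBuilder, Module, Function, etc. are real LLVM classes.
    #include "llvm/ADT/APFloat.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"
    #include <memory>

    using namespace llvm;

    // A tiny expression AST: each node emits its own IR and returns the
    // resulting SSA value, instead of gluing strings together.
    struct Expr {
      virtual ~Expr() = default;
      virtual Value *codegen(IRBuilder<> &B) = 0;
    };

    struct Number : Expr {
      double Val;
      explicit Number(double V) : Val(V) {}
      Value *codegen(IRBuilder<> &B) override {
        return ConstantFP::get(B.getContext(), APFloat(Val));
      }
    };

    struct BinaryOp : Expr {
      char Op;
      std::unique_ptr<Expr> LHS, RHS;
      BinaryOp(char O, std::unique_ptr<Expr> L, std::unique_ptr<Expr> R)
          : Op(O), LHS(std::move(L)), RHS(std::move(R)) {}
      Value *codegen(IRBuilder<> &B) override {
        Value *L = LHS->codegen(B);
        Value *R = RHS->codegen(B);
        return Op == '+' ? B.CreateFAdd(L, R, "addtmp")
                         : B.CreateFMul(L, R, "multmp");
      }
    };

    int main() {
      LLVMContext Ctx;
      Module M("demo", Ctx);
      IRBuilder<> B(Ctx);

      // Wrap the expression 2.0 * 3.0 + 1.0 in a function "expr".
      FunctionType *FT = FunctionType::get(B.getDoubleTy(), /*isVarArg=*/false);
      Function *F = Function::Create(FT, Function::ExternalLinkage, "expr", &M);
      B.SetInsertPoint(BasicBlock::Create(Ctx, "entry", F));

      BinaryOp Ast('+',
                   std::make_unique<BinaryOp>('*', std::make_unique<Number>(2.0),
                                              std::make_unique<Number>(3.0)),
                   std::make_unique<Number>(1.0));
      B.CreateRet(Ast.codegen(B));

      M.print(outs(), nullptr); // prints the textual IR for inspection
      return 0;
    }

The advantage over string generation is that the builder tracks the insertion point and types for you, and the resulting in-memory module can be verified, optimized, and handed to a code generator without ever round-tripping through text.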
What is the best way to convert Scala (or bytecode) to a native binary in order to increase performance?
At the moment I see two solutions for converting JVM bytecode to a sort of self-contained native binary:
Avian - lightweight embeddable JVM with AOT features
Excelsior JET - Commercial Java native compiler
Both should be compatible with Scala.
There are no direct native compilers for Scala as far as I know. There are some projects like Scala LLVM, but they are more about research and proofs of concept than ready-to-use tools.
Although not a fully capable tool yet, the scala-native project is starting to become usable. It's still at an early stage, but it's under active development and is becoming more capable by the month. It's based on LLVM and Clang, and will compile your Scala sources to binaries, provided the libraries you depend on are among those implemented at this early stage. (It's not yet working on Windows or Cygwin, although it does work in the WSL environment.)
Update: Windows support has been improving recently (fall 2021).
Whether performance is increased or not is a separate question, although most programs are likely to start up much more quickly.
Here's a link to the User's Guide
To create your own project: Minimal sbt project
The biggest limitations are that only a subset of the Java and Scala standard libraries has been implemented so far, so you'll need to limit yourself to what's currently available, and not every project will be feasible even if you restrict yourself to 100% Scala. Also, the documentation is a work in progress.
As a test, I created a command-line tool for processing text files, and I was finally able to get it to work, although I did spend a bit of time figuring out how to accomplish various things, and mostly how to live with the available libraries. If necessary, you can also link to C/C++ libraries, although I didn't need to for my small project.
Footnote as of June 2019: I'm having good luck with GraalVM native-image. Here's the link:
https://www.graalvm.org/docs/reference-manual/aot-compilation/
Simple question really, however there doesn't seem to be a straight answer in the current developer documentation.
Does Swift compile to machine language (i.e. assembly), or does it compile to some intermediary form that then runs on a virtual machine?
(I suspect it does, but being unfamiliar with development in Apple's world, it is not as clear to me as it may be to someone who is.)
Yes, it compiles to machine language by way of LLVM Bitcode and, as @connor said, runs on top of the Objective-C runtime.
Swift not only compiles to native machine code, it has also been designed specifically for it, unlike, e.g., Java, which has been designed specifically as a JITed language. By that I mean Swift achieves its best performance with ahead-of-time compilation, while Java benefits most from JITing.
There are many reasons for these design choices, but among them is that Swift has a much bigger scope than managed languages like Java. It is supposed to work on both desktop computers and phones with more restricted hardware. You can use Swift as a systems programming language, unlike, say, C#, Java, or Python, because it has few runtime requirements and allows fairly detailed control of memory. So in theory one should be able to build an OS kernel with Swift, which would be difficult with, say, Java.
Swift, just like Objective-C, compiles to native code using LLVM.
A good explanation can be found in the article "Apple's top secret Swift language grew from work to sustain Objective C, which it now aims to replace".
From that article, talking about Swift:
"The compiler is optimized for performance, and the language is optimized for development, without compromising on either."
Swift, like Objective-C, is compiled to machine code that runs on the Objective-C runtime.
I'm a student working on optimizing GCC for multi-core processors. I tried going through the source code, but it is difficult to follow, and I need to add some code to the back end. Can anyone suggest a good resource that explains the code flow through the different phases?
Also, please suggest a development environment for debugging GCC, mainly to step through the code. Is it possible on Windows?
As a starting point see Links and Selected Readings on GCC site. Of particular interest to you, I think, are:
GNU C Compiler Internals
Compilation of Functional Programming Languages using GCC -- Tail Calls by Andreas Bauer
Porting GCC for Dunces by Hans-Peter Nilsson
If you want to develop on Windows you probably need to start from the MinGW (Minimalist GNU for Windows) Compiler Suite sources (it includes the GNU GDB debugger), which is a port of GCC to Windows.
For a comfortable development environment I cannot help much, because I don't develop in C++. But I suppose a good IDE for C/C++ is what you need: have a look at this comparison; there are plenty of free/open-source IDEs for Windows.
Update: I think ICI can also be of interest to you:
The Interactive Compilation Interface (or 'ICI' for short) is a plugin system with a high-level compiler-independent and low-level compiler-dependent API to transform current compilers into collaborative open modular interactive toolsets. The ICI framework acts as a "middleware" interface between the compiler and the user-definable plugins. It opens up and reuses the production-quality compiler infrastructure to enable program analysis and instrumentation, fine-grain program optimizations, simple prototyping of new development and research ideas while avoiding building new compilation tools from scratch. For example, it is used in MILEPOST GCC to automate compiler and architecture design and program optimizations based on statistical analysis and machine learning. It should enable universal self-tuning compilers adaptable to heterogeneous, reconfigurable, multi-core architectures ranging from supercomputers to embedded systems.
... as well as the rest of the projects under the Collective TUNING umbrella.
Note: Writing "compilers are one of the most complex programs there are", as BlueRaja wrote in the comments, is an overstatement: there are very simple compilers and very complex compilers. But in compiler theory (once you have studied it) there is nothing esoteric. GCC is as complex to understand as any other big, poorly documented program out there.¹ So, rizwanhudda, don't be discouraged: start studying the available documentation and then ask the GCC developers (on the GCC IRC channel, as suggested by nvl, or on the GCC developers mailing list) to explain what is poorly (or not at all) documented.
¹ In fact, program comprehension is an active field of research.
I would suggest you use the GCC IRC channel; it is meant for discussion of GCC development.
There are Gambit Scheme, MIT Scheme, PLT Scheme, Chicken Scheme, Bigloo, Larceny, ...; then there are all the Lisps.
Yet there's not (to my knowledge) a single popular Scheme/Lisp on LLVM, even though LLVM provides lots of nice things like:
easier to generate code than x86
easy to make C FFI calls
...
So why is it that there isn't a good Scheme/Lisp on LLVM?
LLVM provides a lot, but it's still only a small part of the runtime a functional language needs. And C FFI calls are uncomplicated because LLVM leaves memory management to be handled by someone else. Interacting with the garbage collector is what makes FFI calls difficult in languages such as Scheme.
You might be interested in HLVM, but it's still more than experimental at this point.
For CL: Clasp is a Common Lisp implementation on LLVM, and mocl implements a subset of Common Lisp on LLVM.
For Scheme: there's a self-hosting Scheme->LLVM demo and a prototype LLVM backend for Bigloo Scheme.
For Clojure: there's Rhine, which is a Clojure-inspired lisp.
There's a very small and apparently unoptimised Scheme compiler here:
http://www.ida.liu.se/~tobnu/scheme2llvm/
Taking your question literally,
Writing compilers is hard.
A poor implementation like the one linked above can block new implementations. People going to the LLVM page see that there's a Scheme already, and don't bother writing one.
There's a limited number of people who write and use Scheme (I'm one, not a hater, btw).
There are lots of existing Scheme interpreters and compilers, and there's not a screaming need to have a new one.
There's not an immediate, clear benefit to writing a new interpreter using LLVM. Would it be faster, easier, more flexible, better in some way than the other dozens of Scheme implementations?
The LLVM project went with another language (C) to demo their technology, and hasn't seen a need to implement a lot of others.
I think that it could be a lot of fun for someone to build an LLVM-based Scheme compiler. The Scheme compilers in SICP and PAIP are both good examples.
Maybe I'm completely misunderstanding the question or context, but I believe that you could use ECL, which is a Common Lisp that compiles down to C, and use the Clang compiler to target LLVM (instead of GCC).
I'm not sure what (if any) benefit this would give you, but it would give you a Lisp running on LLVM =].
One thing to keep in mind is that many of these implementations have C FFIs and native-code compilers that significantly predate LLVM.
CL-LLVM provides Common Lisp bindings for LLVM. It takes the FFI approach, rather than attempting to output LLVM assembly or bitcode directly.
This library is available via Quicklisp.
mocl is a compiler for a relatively static subset of Common Lisp. It compiles via LLVM/Clang.
there's not (to my knowledge) a single popular Scheme/Lisp on LLVM
Currently, llvm-gcc is the nearest thing to a popular implementation of any language on LLVM. In particular, there are no mature LLVM-based language implementations with garbage collection yet. I am sure LLVM will be used as the foundation for lots of exciting next-generation language implementations but that will take a lot of time and effort and it is early days for LLVM in this context.
My own HLVM project is one of the only LLVM-based implementations with garbage collection and its GC is multicore-capable but loosely bound: I used a shadow stack for an "uncooperative environment" rather than hacking the C++ code in LLVM to integrate real stack walking.
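For readers unfamiliar with the term, here is a rough sketch of the shadow stack idea. This is not HLVM's actual code, and every name in it is invented: the point is only that, instead of walking the native stack, each compiled function registers the addresses of its GC-visible locals on a side list that the collector traverses.

    // Conceptual sketch of a shadow stack; types and names are invented.
    #include <cstddef>

    using GcRef = void *;            // an opaque reference to a heap object

    struct ShadowFrame {
      ShadowFrame *prev;             // previous frame in the chain
      GcRef **slots;                 // addresses of this frame's reference-holding locals
      std::size_t numSlots;
    };

    thread_local ShadowFrame *g_shadowTop = nullptr;

    // RAII helper: compiled code pushes a frame on entry, pops it on exit.
    struct ShadowScope {
      ShadowFrame frame;
      ShadowScope(GcRef **slots, std::size_t n) {
        frame.prev = g_shadowTop;
        frame.slots = slots;
        frame.numSlots = n;
        g_shadowTop = &frame;
      }
      ~ShadowScope() { g_shadowTop = frame.prev; }
    };

    // The collector enumerates roots by walking the chain instead of the
    // machine stack, so no platform-specific stack maps are needed.
    template <typename Visitor>
    void visitRoots(Visitor &&visit) {
      for (ShadowFrame *f = g_shadowTop; f != nullptr; f = f->prev)
        for (std::size_t i = 0; i < f->numSlots; ++i)
          visit(f->slots[i]);        // visitor may update the slot if the object moves
    }

    // Roughly what generated code for a function with two live references does:
    void example() {
      GcRef a = nullptr, b = nullptr;      // imagine these hold heap objects
      GcRef *slots[] = {&a, &b};
      ShadowScope scope(slots, 2);
      // ... allocations or calls here can trigger a collection; the GC
      // still finds (and can relocate) a and b through the shadow stack ...
    }

The appeal in an "uncooperative environment" is that no cooperation from the native ABI or compiler-generated stack maps is required; the cost is a few extra stores per call frame compared to real stack walking.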
There is a Scheme2LLVM, apparently based on SICP:
The code is quite similar to the code in the book SICP (Structure and Interpretation of Computer Programs), chapter five, with the difference that it implements the extra functionality that SICP assumes that the explicit control evaluator (virtual machine) already have. Much functionality of the compiler is implemented in a subset of scheme, llvm-defines, which are compiled to llvm functions.
I don't know if it's "good".
GHC is experimenting with an LLVM backend and getting really exciting preliminary results over their native code compiler. Granted, that's Haskell. But they've recently pushed new changes into LLVM making tail calls easier, IIRC. This could be good for some Scheme implementation.