How to test generic performance with whole module optimization - swift

In WWDC 2015 Session 409, near the 18-minute mark, the discussion leads me to believe that generics can be optimized through generic specialization by enabling Whole Module Optimization. Unfortunately my tests, which I'm not confident in, revealed nothing useful.
I ran some very simple tests between the following two methods to see if the performance was similar:
func genericMax<T: Comparable>(x: T, y: T) -> T {
    return y > x ? y : x
}

func intMax(x: Int, y: Int) -> Int {
    return y > x ? y : x
}
Simple XCTest:
func testPerformanceExample() {
    self.measureBlock {
        let x: Int = Int(arc4random_uniform(9999))
        let y: Int = Int(arc4random_uniform(9999))
        for _ in 0...1000000 {
            // let _ = genericMax(x, y: y)
            let _ = intMax(x, y: y)
        }
    }
}
What happened
Without optimization the following tests were reasonably different:
genericMax: 0.018 sec
intMax: 0.005 sec
However, with Whole Module Optimization enabled, the results still weren't similar:
genericMax: 0.014 sec
intMax: 0.004 sec
What I Expected
With Whole Module Optimization enabled I expected similar times between the two method calls. This leads me to believe that my test is flawed.
Question
Assuming my tests are flawed or poorly designed, how could I better measure how Whole Module Optimization optimizes generics through generic specialization?

Your tests are flawed because they measure test performance, not app performance. Tests live in a separate executable file, and they do not benefit from whole-module optimizations themselves. Because of this, your test always uses the generic, non-specialized implementation even in places where your program doesn't.
If you want to see that whole-module optimizations are enabled in your executable, you need to test a function from your executable. (You also need to make sure that your tests either use the Release build, or that you have WMO enabled in the debug build.)
Adding this function to the executable:
func genericIntMax(x: Int, y: Int) -> Int {
    return genericMax(x, y: y)
}
and calling it from the tests in place of genericMax, I get identical performance. (Note that this is not really whole-module optimization, since the two functions live in the same file; but it shows the difference between app code and test code when it comes to optimizations.)
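For instance, a minimal sketch of the test side, assuming the genericIntMax wrapper above lives in the app target and the tests run against an optimized (Release or WMO-enabled) build; the test name is just illustrative:
func testSpecializedGenericMaxPerformance() {
    self.measureBlock {
        let x = Int(arc4random_uniform(9999))
        let y = Int(arc4random_uniform(9999))
        for _ in 0...1000000 {
            // Calls into the app module, where WMO can specialize genericMax for Int.
            let _ = genericIntMax(x, y: y)
        }
    }
}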

Related

Suitable wall-clock-time threshold for when a method should be made async?

I believe this question to be programming language agnostic, but for reference, I am most interested in Swift.
When we perform a Pythagoras calculation inside a method, we know that for a 32- or 64-bit Int this will be a fast operation.
func calculatePythagoras(sideX x: Int, sideY y: Int) -> Int {
    // sqrt operates on floating-point values, so convert and truncate back to Int
    return Int(Double(x * x + y * y).squareRoot())
}
We can call this method synchronously since we consider it to be fast enough.
Of course, it would be silly - assuming Int of max 64-bit size - to implement this in an asynchronous manner:
func calculatePythagorasAsync(sideX x: Int, sideY y: Int, done: @escaping (Int) -> Void) -> Void {
    DispatchQueue.global(qos: .userInitiated).async {
        let sideZ = Int(Double(x * x + y * y).squareRoot())
        DispatchQueue.main.async {
            done(sideZ)
        }
    }
}
This just overcomplicates the code, since we can assume that regardless of how old and slow the device we are running on is, performing two multiplications and one square root on integers of at most 64 bits will execute fast enough.
This is what I am interested in: what is fast enough? Let's say that for the sake of simplicity we constrain this discussion to one specific device D, and that the execution (wall clock) time for calculating Pythagoras on device D is 1 microsecond.
What do you think is a reasonable threshold at which a method M should be changed to asynchronous? Imagine we would like to call this method M every time we display a table view (list view) cell (item), calling M on the main thread, and we would like to keep a smooth 60 fps when scrolling. 1 microsecond is surely fast enough. Of course, 100,000 microseconds (= 0.1 seconds) is not nearly fast enough. Probably 10,000 microseconds (= 0.01 s) is not fast enough either.
Is 1,000 microseconds (= 1 millisecond = 0.001 s) fast enough?
I hope you do not think this is a silly question, I am genuinely interested.
Is there any reference to some standard regarding this? Some best practice?
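Not an official standard, but one rough way to reason about it: at 60 fps each frame has about 16,667 µs of budget, shared by layout, drawing, and any work you do on the main thread. A quick sketch of how one might sanity-check a method M against that budget (the microseconds(of:) helper and the budget constant are my own, not from any framework):
import QuartzCore

// Wall-clock cost of a block, in microseconds.
func microseconds(of block: () -> Void) -> Double {
    let start = CACurrentMediaTime()
    block()
    return (CACurrentMediaTime() - start) * 1_000_000
}

let frameBudget = 16_667.0 // µs available per frame at 60 fps
let cost = microseconds { _ = calculatePythagoras(sideX: 300, sideY: 400) }
print("M costs \(cost) µs, i.e. \(cost / frameBudget * 100)% of one frame")
If M eats more than a small fraction of that budget per cell (and it runs for several cells per frame while scrolling), that is a strong hint it should move off the main thread.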

Why sometimes Apple Accelerate framework is slow?

I am playing with C and Swift 3.0 code that uses vecLib and the Accelerate framework from Apple, as a dynamic library plus my own code, in a C-based project and in a Swift playground.
When I call Apple's SIMD wrapper functions with only 1 to 4 elements, a function like vvcospif() from the framework is slower than the plain standard cos(x * PI), for example when called in a loop around 1,000 times.
I know about the difference between vvcospif() and cos(); I should use vvcospif() specifically for x * PI.
Example in playground, you can just copy code and run it:
import Cocoa
import Accelerate
func cosine_interpolate(alpha: Float, a: Float, b: Float) -> Float {
    let ft: Float = alpha * 3.1415927
    let f: Float = (1 - cos(ft)) * 0.5
    return a + f * (b - a)
}
var start: Date = NSDate() as Date
var interp: Float;
for _ in 0..<1000 {
    interp = cosine_interpolate(alpha: 0.25, a: 1.0, b: 0.75)
}
var end = NSDate();
var timeInterval: Double = end.timeIntervalSince(start);
print("cosine_interpolate in \(timeInterval) seconds")
func fast_cosine_interpolate(alpha: Float, a: Float, b: Float) -> Float {
    var x: Float = alpha
    var count: Int32 = 1
    var result: Float = 0
    vvcospif(&result, &x, &count)
    let SINSIN_HALF_X: Float = (1 - result) * 0.5
    return a + SINSIN_HALF_X * (b - a)
}
start = NSDate() as Date
for _ in 0..<1000 {
    interp = fast_cosine_interpolate(alpha: 0.25, a: 1.0, b: 0.75)
}
end = NSDate();
timeInterval = end.timeIntervalSince(start);
print("fast_cosine_interpolate in \(timeInterval) seconds")
My question is:
Why is vvcospif() slow in this example?
Maybe because vvcospif() is a wrapper that goes through the Objective-C runtime, and converting data structures / copying memory between Intel SIMD, Objective-C, and the Swift runtime is slower than the tiny cos()?
I also have a performance issue with C code using
#include <Accelerate/Accelerate.h>
vvcospif(resultVector, inputVector, &count);
when inputVector and resultVector are small arrays with 1 or 2 elements, or just a single float variable, called in a loop about 1,000,000 times.
cos(x * PI) computation time is near 20 ms,
while
vvcospif(x) processing one float or a float array[2] takes near 80 ms! Where is the acceleration? :)
Yes, in Xcode I compile with -O and whole-module optimization enabled.
For a more detailed discussion with examples, see "Introduction to Fast Bezier (and Trying the Accelerate.framework)".
The first, fundamental problem is that non-inlined function calls are extremely expensive; you don't want function calls in performance-critical code if you can possibly help it. Within a module, the compiler can often inline functions for you, and parts of the stdlib can be inlined for you. But when you start crossing module boundaries, Swift generally cannot optimize out the call.
The point of SIMD functions is that you set up all your data in the right format, and then call them just one time. That way the cost of the function call is made up by the SIMD optimized code you're calling.
But remember, you don't have to call into Accelerate to get SIMD optimizations. The compiler is perfectly capable of noticing you've written a loop and turning it into an inline SIMD algorithm itself (and it does this all the time). So for many simple problems, the compiler is going to win anyway. Think about it: if calling vvcospif with a count of 1 were faster than calling cos, wouldn't they just implement cos that way?
I haven't played with your code much, but if you want to improve its performance with Accelerate, you want to think about how to arrange all your input data so you can call vvcospif one time with a large N. It's quite possible in that case that it will be much faster than a loop (since cos is not trivial).
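For illustration, here is a minimal sketch of the batched shape being described (the alphas array and its size are made up; only the vvcospif call itself is Accelerate API):
import Accelerate

// Compute cos(pi * alpha) for many alphas in a single Accelerate call.
let alphas: [Float] = (0..<10_000).map { Float($0) / 10_000 }
var results = [Float](repeating: 0, count: alphas.count)
var count = Int32(alphas.count)
vvcospif(&results, alphas, &count)
This is the shape where the fixed cost of the call is amortized over N elements instead of being paid once per element.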
If you want an example of Accelerate in practice, and how you need to organize your data, see PinchText. This code is computing offsets for a page full of a few thousand glyphs based on up to 10 touches in real-time, with animations (see PinchText.mov for what the result looks like). In particular look at adjustViewPositions:count:forTouchPoint:. Notice how count is large, and the data is transformed step by step with no loops. Even throwing in a (very expensive) ObjC method call into that method doesn't matter very much because it's only made one time. Getting rid of function calls in loops is a huge part of performance programming.

Scala recursion depth differences and StackOverflowError below allowed depth

The following program was written to check how deep recursion in Scala can go on my machine. I assume that it mostly depends on the stack size assigned to the program. However, after calculating the maximal depth of recursion (catching the exception when it appears) and then trying to run a recursion of that depth, I got odd outputs.
object RecursionTest {
  def sumArrayRec(elems: Array[Int]) = {
    def goFrom(i: Int, size: Int, elms: Array[Int]): Int = {
      if (i < size) elms(i) + goFrom(i + 1, size, elms)
      else 0
    }
    goFrom(0, elems.length, elems)
  }

  def main(args: Array[String]) {
    val recDepth = recurseTest(0, 0, Array(1))
    println("sumArrayRec = " + sumArrayRec((0 until recDepth).toArray))
  }

  def recurseTest(i: Int, j: Int, arr: Array[Int]): Int = {
    try {
      recurseTest(i + 1, j + 1, arr)
    } catch { case e: java.lang.StackOverflowError =>
      println("Recursion depth on this system is " + i + ".")
      i
    }
  }
}
The results vary among executions of the program.
One of them is a desirable output:
/usr/lib/jvm/java-8-oracle/bin/java
Recursion depth on this system is 7166.
sumArrayRec = 25672195
Process finished with exit code 0
Nevertheless, the second possible output which I get indicates an error:
/usr/lib/jvm/java-8-oracle/bin/java
Recursion depth on this system is 8129.
Exception in thread "main" java.lang.StackOverflowError
at RecursionTest$.goFrom$1(lab44.scala:6)
at RecursionTest$.goFrom$1(lab44.scala:6)
at RecursionTest$.goFrom$1(lab44.scala:6)
(...)
at RecursionTest$.goFrom$1(lab44.scala:6)
at RecursionTest$.goFrom$1(lab44.scala:6)
Process finished with exit code 1
I didn't observe any pattern or relationship between them; sometimes I get the first output and other times the second.
All of that leads me to the following questions:
Why do I get an error and what causes it?
Is this the stack overflow error and if yes...
Why does the size of the stack change while the program is running? Is that even possible?
When I changed println("sumArrayRec = " + sumArrayRec((0 until recDepth).toArray)) into
println("sumArrayRec = " + sumArrayRec((0 until recDepth - 5000).toArray)), i.e. much less, the behaviour remained the same.
You get a stack overflow because you cause a stack overflow
Yes
The max stack size doesn't change; the JVM sets it. Your two methods, recurseTest and sumArrayRec, push different amounts onto the stack. recurseTest is basically adding two ints to the stack with each call, while sumArrayRec is adding three ints since you have a call to elms(i), and probably more for the addition and the if/else (more instructions). It's tough to tell, since the JVM handles all of this and its purpose is to hide it from you. Also, who knows what optimizations the JVM is doing behind the scenes, which can drastically affect the stack frame created by each method call. On multiple runs you'll get different depths due to system timing and so on; maybe the JVM optimizes in the short time the program has run, maybe it won't. It's non-deterministic in this scenario. If you ran some warmup code beforehand, your tests might become more deterministic, since any optimizations will (or will not) already have taken place.
You should look into using something like JMH for your tests, it will help with the determinism.
Side note: you can also manually change your stack size with -Xss2M, common for SBT usage.

In swift which loop is faster `for` or `for-in`? Why?

Which loop should I use when I have to be extremely aware of the time it takes to iterate over a large array?
Short answer
Don’t micro-optimize like this – any difference there is could be far outweighed by the speed of the operation you are performing inside the loop. If you truly think this loop is a performance bottleneck, perhaps you would be better served by using something like the accelerate framework – but only if profiling shows you that effort is truly worth it.
And don’t fight the language. Use for…in unless what you want to achieve cannot be expressed with for…in. These cases are rare. The benefit of for…in is that it’s incredibly hard to get it wrong. That is much more important. Prioritize correctness over speed. Clarity is important. You might even want to skip a for loop entirely and use map or reduce.
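For instance, a couple of throwaway one-liners (hypothetical data, not from the question):
let numbers = [3, 1, 4, 1, 5, 9]
let doubled = numbers.map { $0 * 2 } // transform each element
let total = numbers.reduce(0, +)     // fold the array into a single value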
Longer Answer
For arrays, if you try them without the fastest compiler optimization, they perform identically, because they essentially do the same thing.
Presumably your for ;; loop looks something like this:
var sum = 0
for var i = 0; i < a.count; ++i {
    sum += a[i]
}
and your for…in loop something like this:
for x in a {
    sum += x
}
Let’s rewrite the for…in to show what is really going on under the covers:
var g = a.generate()
while let x = g.next() {
    sum += x
}
And then let’s rewrite that for what a.generate() returns, and something like what the let is doing:
var g = IndexingGenerator<[Int]>(a)
var wrapped_x = g.next()
while wrapped_x != nil {
    let x = wrapped_x!
    sum += x
    wrapped_x = g.next()
}
Here is what the implementation for IndexingGenerator<[Int]> might look like:
struct IndexingGeneratorArrayOfInt {
    private let _seq: [Int]
    var _idx: Int = 0

    init(_ seq: [Int]) {
        _seq = seq
    }

    mutating func next() -> Int? {
        if _idx != _seq.endIndex {
            return _seq[_idx++]
        }
        else {
            return nil
        }
    }
}
Wow, that’s a lot of code; surely it performs way slower than the regular for ;; loop!
Nope. Because while that might be what it is logically doing, the compiler has a lot of latitude to optimize. For example, note that IndexingGeneratorArrayOfInt is a struct, not a class. This means it has no overhead over declaring the two member variables directly. It also means the compiler might be able to inline the code in next – there is no indirection going on here, no overloaded methods and vtables or objc_msgSend. Just some simple pointer arithmetic and dereferencing. If you strip away all the syntax for the structs and method calls, you’ll find that what the for…in code ends up being is almost exactly the same as what the for ;; loop is doing.
for…in helps avoid performance errors
If, on the other hand, for the code given at the beginning, you switch compiler optimization to the faster setting, for…in appears to blow for ;; away. In some non-scientific tests I ran using XCTestCase.measureBlock, summing a large array of random numbers, it was an order of magnitude faster.
Why? Because of the use of count:
for var i = 0; i < a.count; ++i {
    // ^-- calling a.count every time...
    sum += a[i]
}
Maybe the optimizer could have fixed this for you, but in this case it hasn’t. If you pull the invariant out, it goes back to being the same as for…in in terms of speed:
let count = a.count
for var i = 0; i < count; ++i {
    sum += a[i]
}
“Oh, I would definitely do that every time, so it doesn’t matter”. To which I say, really? Are you sure? Bet you forget sometimes.
But you want the even better news? Doing the same summation with reduce was (in my, again not very scientific, tests) even faster than the for loops:
let sum = a.reduce(0,+)
But it is also so much more expressive and readable (IMO), and allows you to use let to declare your result. Given that this should be your primary goal anyway, the speed is an added bonus. But hopefully the performance will give you an incentive to do it regardless.
This is just for arrays, but what about other collections? Of course this depends on the implementation, but there’s good reason to believe it would be faster for other collections as well, such as dictionaries and custom user-defined collections.
My reason for this is that the author of the collection can implement an optimized version of generate, because they know exactly how the collection is being used. Suppose subscript lookup involves some calculation (such as pointer arithmetic in the case of an array: you have to multiply the index by the value size and then add that to the base pointer). In the case of generate, you know that what is being done is a sequential walk of the collection, and therefore you can optimize for this (for example, in the case of an array, hold a pointer to the next element, which you increment each time next is called). The same goes for specialized member versions of reduce or map.
This might even be why reduce is performing so well on arrays – who knows (you could stick a breakpoint on the function passed in if you wanted to try and find out). But it’s just another justification for using the language construct you should probably be using regardless.
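To make the idea concrete, here is a rough sketch in current Swift terms (Sequence / IteratorProtocol rather than the older SequenceType / GeneratorType) of the kind of pointer-walking iterator a collection author could provide; FastBuffer and its iterator are invented for illustration:
struct FastBuffer: Sequence {
    private let storage: UnsafeBufferPointer<Int>
    init(_ storage: UnsafeBufferPointer<Int>) { self.storage = storage }

    struct Iterator: IteratorProtocol {
        private var current: UnsafePointer<Int>?
        private var remaining: Int
        init(_ buffer: UnsafeBufferPointer<Int>) {
            current = buffer.baseAddress
            remaining = buffer.count
        }
        mutating func next() -> Int? {
            // Walk a raw pointer forward instead of re-doing a subscript calculation each time.
            guard remaining > 0, let p = current else { return nil }
            remaining -= 1
            current = p + 1
            return p.pointee
        }
    }

    func makeIterator() -> Iterator { Iterator(storage) }
}

// Usage sketch (the buffer must not outlive the closure):
// [1, 2, 3].withUnsafeBufferPointer { buf in
//     for x in FastBuffer(buf) { print(x) }
// }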
Famously stated: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" (Donald Knuth). It seems unlikely that you are in the 3%.
Focus on the bigger problem at hand. After it is working, if it needs a performance boost, then worry about for loops. But I guarantee you, in the end, bigger structural inefficiencies or poor algorithm choice will be the performance problem, not a for loop.
Worrying about for loops is oh so 1960s.
FWIW, a rudimentary playground test shows map() is about 10 times faster than for enumeration:
class SomeClass1 {
    let value: UInt32 = arc4random_uniform(100)
}

class SomeClass2 {
    let value: UInt32
    init(value: UInt32) {
        self.value = value
    }
}

var someClass1s = [SomeClass1]()
for _ in 0..<1000 {
    someClass1s.append(SomeClass1())
}
var someClass2s = [SomeClass2]()
let startTimeInterval1 = CFAbsoluteTimeGetCurrent()
someClass1s.map { someClass2s.append(SomeClass2(value: $0.value)) }
println("Time1: \(CFAbsoluteTimeGetCurrent() - startTimeInterval1)") // "Time1: 0.489435970783234"
var someMoreClass2s = [SomeClass2]()
let startTimeInterval2 = CFAbsoluteTimeGetCurrent()
for item in someClass1s { someMoreClass2s.append(SomeClass2(value: item.value)) }
println("Time2: \(CFAbsoluteTimeGetCurrent() - startTimeInterval2)") // "Time2 : 4.81457495689392"
The for loop (with a counter) just increments a counter, which is very fast. The for-in loop uses an iterator (an object that is called to produce the next element), which is slower. But in both cases you ultimately have to access the element, which makes little difference in the end.

Swift Beta performance: sorting arrays

I was implementing an algorithm in Swift Beta and noticed that the performance was very poor. After digging deeper I realized that one of the bottlenecks was something as simple as sorting arrays. The relevant part is here:
let n = 1000000
var x = [Int](repeating: 0, count: n)
for i in 0..<n {
    x[i] = random()
}
// start clock here
let y = sort(x)
// stop clock here
In C++, a similar operation takes 0.06s on my computer.
In Python, it takes 0.6s (no tricks, just y = sorted(x) for a list of integers).
In Swift it takes 6s if I compile it with the following command:
xcrun swift -O3 -sdk `xcrun --show-sdk-path --sdk macosx`
And it takes as much as 88s if I compile it with the following command:
xcrun swift -O0 -sdk `xcrun --show-sdk-path --sdk macosx`
Timings in Xcode with "Release" vs. "Debug" builds are similar.
What is wrong here? I could understand some performance loss in comparison with C++, but not a 10-fold slowdown in comparison with pure Python.
Edit: mweathers noticed that changing -O3 to -Ofast makes this code run almost as fast as the C++ version! However, -Ofast changes the semantics of the language a lot; in my testing, it disabled the checks for integer overflows and array indexing overflows. For example, with -Ofast the following Swift code runs silently without crashing (and prints out some garbage):
let n = 10000000
print(n*n*n*n*n)
let x = [Int](repeating: 10, count: n)
print(x[n])
So -Ofast is not what we want; the whole point of Swift is that we have the safety nets in place. Of course, the safety nets have some impact on the performance, but they should not make the programs 100 times slower. Remember that Java already checks for array bounds, and in typical cases, the slowdown is by a factor much less than 2. And in Clang and GCC we have got -ftrapv for checking (signed) integer overflows, and it is not that slow, either.
Hence the question: how can we get reasonable performance in Swift without losing the safety nets?
Edit 2: I did some more benchmarking, with very simple loops along the lines of
for i in 0..<n {
    x[i] = x[i] ^ 12345678
}
(Here the xor operation is there just so that I can more easily find the relevant loop in the assembly code. I tried to pick an operation that is easy to spot but also "harmless" in the sense that it should not require any checks related to integer overflows.)
Again, there was a huge difference in the performance between -O3 and -Ofast. So I had a look at the assembly code:
With -Ofast I get pretty much what I would expect. The relevant part is a loop with 5 machine language instructions.
With -O3 I get something that was beyond my wildest imagination. The inner loop spans 88 lines of assembly code. I did not try to understand all of it, but the most suspicious parts are 13 invocations of "callq _swift_retain" and another 13 invocations of "callq _swift_release". That is, 26 subroutine calls in the inner loop!
Edit 3: In comments, Ferruccio asked for benchmarks that are fair in the sense that they do not rely on built-in functions (e.g. sort). I think the following program is a fairly good example:
let n = 10000
var x = [Int](repeating: 1, count: n)
for i in 0..<n {
    for j in 0..<n {
        x[i] = x[j]
    }
}
There is no arithmetic, so we do not need to worry about integer overflows. The only thing that we do is lots of array references. And the results are here: Swift -O3 loses by a factor of almost 500 in comparison with -Ofast:
C++ -O3: 0.05 s
C++ -O0: 0.4 s
Java: 0.2 s
Python with PyPy: 0.5 s
Python: 12 s
Swift -Ofast: 0.05 s
Swift -O3: 23 s
Swift -O0: 443 s
(If you are concerned that the compiler might optimize out the pointless loops entirely, you can change it to e.g. x[i] ^= x[j], and add a print statement that outputs x[0]. This does not change anything; the timings will be very similar.)
And yes, here the Python implementation was a stupid pure Python implementation with a list of ints and nested for loops. It should be much slower than unoptimized Swift. Something seems to be seriously broken with Swift and array indexing.
Edit 4: These issues (as well as some other performance issues) seem to have been fixed in Xcode 6 beta 5.
For sorting, I now have the following timings:
clang++ -O3: 0.06 s
swiftc -Ofast: 0.1 s
swiftc -O: 0.1 s
swiftc: 4 s
For nested loops:
clang++ -O3: 0.06 s
swiftc -Ofast: 0.3 s
swiftc -O: 0.4 s
swiftc: 540 s
It seems that there is no reason anymore to use the unsafe -Ofast (a.k.a. -Ounchecked); plain -O produces equally good code.
tl;dr Swift 1.0 is now as fast as C by this benchmark using the default release optimisation level [-O].
Here is an in-place quicksort in Swift Beta:
func quicksort_swift(inout a: CInt[], start: Int, end: Int) {
    if (end - start < 2) {
        return
    }
    var p = a[start + (end - start)/2]
    var l = start
    var r = end - 1
    while (l <= r) {
        if (a[l] < p) {
            l += 1
            continue
        }
        if (a[r] > p) {
            r -= 1
            continue
        }
        var t = a[l]
        a[l] = a[r]
        a[r] = t
        l += 1
        r -= 1
    }
    quicksort_swift(&a, start, r + 1)
    quicksort_swift(&a, r + 1, end)
}
And the same in C:
void quicksort_c(int *a, int n) {
    if (n < 2)
        return;
    int p = a[n / 2];
    int *l = a;
    int *r = a + n - 1;
    while (l <= r) {
        if (*l < p) {
            l++;
            continue;
        }
        if (*r > p) {
            r--;
            continue;
        }
        int t = *l;
        *l++ = *r;
        *r-- = t;
    }
    quicksort_c(a, r - a + 1);
    quicksort_c(l, a + n - l);
}
Both work:
var a_swift:CInt[] = [0,5,2,8,1234,-1,2]
var a_c:CInt[] = [0,5,2,8,1234,-1,2]
quicksort_swift(&a_swift, 0, a_swift.count)
quicksort_c(&a_c, CInt(a_c.count))
// [-1, 0, 2, 2, 5, 8, 1234]
// [-1, 0, 2, 2, 5, 8, 1234]
Both are called in the same program as written.
var x_swift = CInt[](count: n, repeatedValue: 0)
var x_c = CInt[](count: n, repeatedValue: 0)
for var i = 0; i < n; ++i {
    x_swift[i] = CInt(random())
    x_c[i] = CInt(random())
}
let swift_start:UInt64 = mach_absolute_time();
quicksort_swift(&x_swift, 0, x_swift.count)
let swift_stop:UInt64 = mach_absolute_time();
let c_start:UInt64 = mach_absolute_time();
quicksort_c(&x_c, CInt(x_c.count))
let c_stop:UInt64 = mach_absolute_time();
This converts the absolute times to seconds:
static const uint64_t NANOS_PER_USEC = 1000ULL;
static const uint64_t NANOS_PER_MSEC = 1000ULL * NANOS_PER_USEC;
static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MSEC;
mach_timebase_info_data_t timebase_info;
uint64_t abs_to_nanos(uint64_t abs) {
    if (timebase_info.denom == 0) {
        (void)mach_timebase_info(&timebase_info);
    }
    return abs * timebase_info.numer / timebase_info.denom;
}

double abs_to_seconds(uint64_t abs) {
    return abs_to_nanos(abs) / (double)NANOS_PER_SEC;
}
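For reference, a Swift-side equivalent of those helpers might look roughly like this (a sketch using the Darwin mach APIs; absToSeconds is my own name):
import Darwin

func absToSeconds(_ abs: UInt64) -> Double {
    var timebase = mach_timebase_info_data_t()
    mach_timebase_info(&timebase)
    let nanos = abs * UInt64(timebase.numer) / UInt64(timebase.denom)
    return Double(nanos) / 1_000_000_000
}

print("quicksort_swift: \(absToSeconds(swift_stop - swift_start))s")
print("quicksort_c:     \(absToSeconds(c_stop - c_start))s")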
Here is a summary of the compiler's optimization levels:
[-Onone] no optimizations, the default for debug.
[-O] perform optimizations, the default for release.
[-Ofast] perform optimizations and disable runtime overflow checks and runtime type checks.
Time in seconds with [-Onone] for n=10_000:
Swift: 0.895296452
C: 0.001223848
Here is Swift's builtin sort() for n=10_000:
Swift_builtin: 0.77865783
Here is [-O] for n=10_000:
Swift: 0.045478346
C: 0.000784666
Swift_builtin: 0.032513488
As you can see, Swift's performance improved by a factor of 20.
As per mweathers' answer, setting [-Ofast] makes the real difference, resulting in these times for n=10_000:
Swift: 0.000706745
C: 0.000742374
Swift_builtin: 0.000603576
And for n=1_000_000:
Swift: 0.107111846
C: 0.114957179
Swift_sort: 0.092688548
For comparison, this is with [-Onone] for n=1_000_000:
Swift: 142.659763258
C: 0.162065333
Swift_sort: 114.095478272
So Swift with no optimizations was almost 1000x slower than C in this benchmark, at this stage in its development. On the other hand with both compilers set to [-Ofast] Swift actually performed at least as well if not slightly better than C.
It has been pointed out that [-Ofast] changes the semantics of the language, making it potentially unsafe. This is what Apple states in the Xcode 5.0 release notes:
A new optimization level -Ofast, available in LLVM, enables aggressive optimizations. -Ofast relaxes some conservative restrictions, mostly for floating-point operations, that are safe for most code. It can yield significant high-performance wins from the compiler.
They all but advocate it. Whether that's wise or not I couldn't say, but from what I can tell it seems reasonable enough to use [-Ofast] in a release if you're not doing high-precision floating point arithmetic and you're confident no integer or array overflows are possible in your program. If you do need high performance and overflow checks / precise arithmetic then choose another language for now.
BETA 3 UPDATE:
n=10_000 with [-O]:
Swift: 0.019697268
C: 0.000718064
Swift_sort: 0.002094721
Swift in general is a bit faster and it looks like Swift's built-in sort has changed quite significantly.
FINAL UPDATE:
[-Onone]:
Swift: 0.678056695
C: 0.000973914
[-O]:
Swift: 0.001158492
C: 0.001192406
[-Ounchecked]:
Swift: 0.000827764
C: 0.001078914
TL;DR: Yes, the only Swift language implementation is slow, right now. If you need fast numeric code (and presumably other types of code too), just go with another language for now. In the future, you should re-evaluate your choice. It might be good enough for most application code that is written at a higher level, though.
From what I'm seeing in SIL and LLVM IR, it seems like they need a bunch of optimizations for removing retains and releases, which might be implemented in Clang (for Objective-C), but they haven't ported them yet. That's the theory I'm going with (for now… I still need to confirm that Clang does something about it), since a profiler run on the last test case of this question yields this “pretty” result (profiler screenshot not reproduced here).
As was said by many others, -Ofast is totally unsafe and changes language semantics. For me, it's at the “If you're going to use that, just use another language” stage. I'll re-evaluate that choice later, if it changes.
-O3 gets us a bunch of swift_retain and swift_release calls that, honestly, don't look like they should be there for this example. The optimizer should have elided (most of) them AFAICT, since it knows most of the information about the array, and knows that it has (at least) a strong reference to it.
It shouldn't emit more retains when it's not even calling functions which might release the objects. I don't think an array constructor can return an array which is smaller than what was asked for, which means that a lot of the checks that were emitted are useless. It also knows that the integer will never be above 10k, so the overflow checks can be optimized away (not because of -Ofast weirdness, but because of the semantics of the language: nothing else is changing that var, nothing else can access it, and adding up to 10k is safe for the type Int).
The compiler might not be able to unbox the array or the array elements, though, since they're getting passed to sort(), which is an external function and has to get the arguments it's expecting. This will make us have to use the Int values indirectly, which would make it go a bit slower. This could change if the sort() generic function (not in the multi-method way) was available to the compiler and got inlined.
This is a very new (publicly) language, and it is going through what I assume are lots of changes, since there are people (heavily) involved with the Swift language asking for feedback and they all say the language isn't finished and will change.
Code used:
import Cocoa
let swift_start = NSDate.timeIntervalSinceReferenceDate();
let n: Int = 10000
let x = Int[](count: n, repeatedValue: 1)
for i in 0..n {
    for j in 0..n {
        let tmp: Int = x[j]
        x[i] = tmp
    }
}
let y: Int[] = sort(x)
let swift_stop = NSDate.timeIntervalSinceReferenceDate();
println("\(swift_stop - swift_start)s")
P.S: I'm not an expert on Objective-C nor all the facilities from Cocoa, Objective-C, or the Swift runtimes. I might also be assuming some things that I didn't write.
I decided to take a look at this for fun, and here are the timings that I get:
Swift 4.0.2 : 0.83s (0.74s with `-Ounchecked`)
C++ (Apple LLVM 8.0.0): 0.74s
Swift
// Swift 4.0 code
import Foundation
func doTest() -> Void {
    let arraySize = 10000000
    var randomNumbers = [UInt32]()
    for _ in 0..<arraySize {
        randomNumbers.append(arc4random_uniform(UInt32(arraySize)))
    }
    let start = Date()
    randomNumbers.sort()
    let end = Date()
    print(randomNumbers[0])
    print("Elapsed time: \(end.timeIntervalSince(start))")
}
doTest()
Results:
Swift 1.1
xcrun swiftc --version
Swift version 1.1 (swift-600.0.54.20)
Target: x86_64-apple-darwin14.0.0
xcrun swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 1.02204304933548
Swift 1.2
xcrun swiftc --version
Apple Swift version 1.2 (swiftlang-602.0.49.6 clang-602.0.49)
Target: x86_64-apple-darwin14.3.0
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.738763988018036
Swift 2.0
xcrun swiftc --version
Apple Swift version 2.0 (swiftlang-700.0.59 clang-700.0.72)
Target: x86_64-apple-darwin15.0.0
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.767306983470917
It seems to be the same performance if I compile with -Ounchecked.
Swift 3.0
xcrun swiftc --version
Apple Swift version 3.0 (swiftlang-800.0.46.2 clang-800.0.38)
Target: x86_64-apple-macosx10.9
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.939633965492249
xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort
Elapsed time: 0.866258025169373
There seems to have been a performance regression from Swift 2.0 to Swift 3.0, and I'm also seeing a difference between -O and -Ounchecked for the first time.
Swift 4.0
xcrun swiftc --version
Apple Swift version 4.0.2 (swiftlang-900.0.69.2 clang-900.0.38)
Target: x86_64-apple-macosx10.9
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.834299981594086
xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort
Elapsed time: 0.742045998573303
Swift 4 improves the performance again, while maintaining a gap between -O and -Ounchecked. -O -whole-module-optimization did not appear to make a difference.
C++
#include <chrono>
#include <iostream>
#include <vector>
#include <cstdint>
#include <stdlib.h>
using namespace std;
using namespace std::chrono;
int main(int argc, const char * argv[]) {
    const auto arraySize = 10000000;
    vector<uint32_t> randomNumbers;
    for (int i = 0; i < arraySize; ++i) {
        randomNumbers.emplace_back(arc4random_uniform(arraySize));
    }
    const auto start = high_resolution_clock::now();
    sort(begin(randomNumbers), end(randomNumbers));
    const auto end = high_resolution_clock::now();
    cout << randomNumbers[0] << "\n";
    cout << "Elapsed time: " << duration_cast<duration<double>>(end - start).count() << "\n";
    return 0;
}
Results:
Apple Clang 6.0
clang++ --version
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.688969
Apple Clang 6.1.0
clang++ --version
Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.3.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.670652
Apple Clang 7.0.0
clang++ --version
Apple LLVM version 7.0.0 (clang-700.0.72)
Target: x86_64-apple-darwin15.0.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.690152
Apple Clang 8.0.0
clang++ --version
Apple LLVM version 8.0.0 (clang-800.0.38)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.68253
Apple Clang 9.0.0
clang++ --version
Apple LLVM version 9.0.0 (clang-900.0.38)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.736784
Verdict
As of the time of this writing, Swift's sort is fast, but not yet as fast as C++'s sort when compiled with -O, with the above compilers & libraries. With -Ounchecked, it appears to be as fast as C++ in Swift 4.0.2 and Apple LLVM 9.0.0.
From The Swift Programming Language:
The Sort Function: Swift’s standard library provides a function called sort, which sorts an array of values of a known type, based on the output of a sorting closure that you provide. Once it completes the sorting process, the sort function returns a new array of the same type and size as the old one, with its elements in the correct sorted order.
The sort function has two declarations.
The default declaration which allows you to specify a comparison closure:
func sort<T>(array: T[], pred: (T, T) -> Bool) -> T[]
And a second declaration that only takes a single parameter (the array) and is "hardcoded to use the less-than comparator."
func sort<T : Comparable>(array: T[]) -> T[]
Example:
sort(arrayToSort) { $0 > $1 }
I tested a modified version of your code in a playground with the closure added on so I could monitor the function a little more closely, and I found that with n set to 1000, the closure was being called about 11,000 times.
let n = 1000
let x = Int[](count: n, repeatedValue: 0)
for i in 0..n {
    x[i] = random()
}
let y = sort(x) { $0 > $1 }
It is not an efficient function, and I would recommend using a better sorting implementation.
EDIT:
I took a look at the Quicksort wikipedia page and wrote a Swift implementation for it. Here is the full program I used (in a playground)
import Foundation
func quickSort(inout array: Int[], begin: Int, end: Int) {
    if (begin < end) {
        let p = partition(&array, begin, end)
        quickSort(&array, begin, p - 1)
        quickSort(&array, p + 1, end)
    }
}

func partition(inout array: Int[], left: Int, right: Int) -> Int {
    let numElements = right - left + 1
    let pivotIndex = left + numElements / 2
    let pivotValue = array[pivotIndex]
    swap(&array[pivotIndex], &array[right])
    var storeIndex = left
    for i in left..right {
        let a = 1 // <- Used to see how many comparisons are made
        if array[i] <= pivotValue {
            swap(&array[i], &array[storeIndex])
            storeIndex++
        }
    }
    swap(&array[storeIndex], &array[right]) // Move pivot to its final place
    return storeIndex
}

let n = 1000
var x = Int[](count: n, repeatedValue: 0)
for i in 0..n {
    x[i] = Int(arc4random())
}

quickSort(&x, 0, x.count - 1) // <- Does the sorting

for i in 0..n {
    x[i] // <- Used by the playground to display the results
}
Using this with n=1000, I found that
quickSort() got called about 650 times,
about 6000 swaps were made,
and there are about 10,000 comparisons
It seems that the built-in sort method is (or is close to) quick sort, and is really slow...
As of Xcode 7 you can turn on Fast, Whole Module Optimization. This should increase your performance immediately.
Swift Array performance revisited:
I wrote my own benchmark comparing Swift with C/Objective-C. My benchmark calculates prime numbers. It uses the array of previous prime numbers to look for prime factors in each new candidate, so it is quite fast. However, it does TONS of array reading, and less writing to arrays.
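The actual code is in the linked project; the gist of the algorithm is roughly this kind of trial division against the primes found so far (a sketch, not the project's code):
func primes(upTo limit: Int) -> [Int] {
    var found: [Int] = []
    var candidate = 2
    while candidate <= limit {
        var isPrime = true
        for p in found {
            if p * p > candidate { break }                    // no factor can exceed sqrt(candidate)
            if candidate % p == 0 { isPrime = false; break }  // found a prime factor
        }
        if isPrime { found.append(candidate) }
        candidate += 1
    }
    return found
}
Almost all of the work is reads of the found array in the inner loop, which is why array read performance dominates this benchmark.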
I originally did this benchmark against Swift 1.2. I decided to update the project and run it against Swift 2.0.
The project lets you select between using normal swift arrays and using Swift unsafe memory buffers using array semantics.
For C/Objective-C, you can either opt to use NSArrays, or C malloc'ed arrays.
The test results seem to be pretty similar with fastest, smallest code optimization ([-Os]) or fastest, aggressive ([-Ofast]) optimization.
Swift 2.0 performance is still horrible with code optimization turned off, whereas C/Objective-C performance is only moderately slower.
The bottom line is that C malloc'd array-based calculations are the fastest, by a modest margin.
Swift with unsafe buffers takes around 1.19x to 1.20x longer than C malloc'd arrays when using fastest, smallest code optimization. The difference seems slightly less with fast, aggressive optimization (Swift takes more like 1.18x to 1.16x longer than C).
If you use regular Swift arrays, the difference with C is slightly greater (Swift takes about 1.22x to 1.23x longer).
Regular Swift arrays are DRAMATICALLY faster than they were in Swift 1.2/Xcode 6. Their performance is so close to Swift unsafe-buffer-based arrays that using unsafe memory buffers does not really seem worth the trouble any more, which is a big deal.
BTW, Objective-C NSArray performance stinks. If you're going to use the native container objects in both languages, Swift is DRAMATICALLY faster.
You can check out my project on github at SwiftPerformanceBenchmark
It has a simple UI that makes collecting stats pretty easy.
It's interesting that sorting seems to be slightly faster in Swift than in C now, but that this prime number algorithm is still faster in Swift.
The main issue that is mentioned by others but not called out enough is that -O3 does nothing at all in Swift (and never has) so when compiled with that it is effectively non-optimised (-Onone).
Option names have changed over time so some other answers have obsolete flags for the build options. Correct current options (Swift 2.2) are:
-Onone // Debug - slow
-O // Optimised
-O -whole-module-optimization //Optimised across files
Whole module optimisation has a slower compile but can optimise across files within the module, i.e. within each framework and within the actual application code, but not between them. You should use this for anything performance critical.
You can also disable safety checks for even more speed, but then all assertions and preconditions are not just disabled but optimised on the basis that they are correct. If you ever hit an assertion, this means that you are into undefined behaviour. Use this with extreme caution, and only if you determine that the speed boost is worthwhile for you (by testing). If you do find it valuable for some code, I recommend separating that code into a separate framework and only disabling the safety checks for that module.
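To illustrate the trade-off (a sketch, not a recommendation): under -Ounchecked a precondition like the one below is assumed to hold and compiled out, so violating it becomes undefined behaviour instead of a trap.
func element(at index: Int, in buffer: [Int]) -> Int {
    // With safety checks enabled this traps on a bad index; with them disabled it is assumed true.
    precondition(index >= 0 && index < buffer.count, "index out of range")
    return buffer[index]
}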
func partition(inout list: [Int], low: Int, high: Int) -> Int {
    let pivot = list[high]
    var j = low
    var i = j - 1
    while j < high {
        if list[j] <= pivot {
            i += 1
            (list[i], list[j]) = (list[j], list[i])
        }
        j += 1
    }
    (list[i+1], list[high]) = (list[high], list[i+1])
    return i+1
}

func quikcSort(inout list: [Int], low: Int, high: Int) {
    if low < high {
        let pIndex = partition(&list, low: low, high: high)
        quikcSort(&list, low: low, high: pIndex-1)
        quikcSort(&list, low: pIndex + 1, high: high)
    }
}

var list = [7,3,15,10,0,8,2,4]
quikcSort(&list, low: 0, high: list.count-1)

var list2 = [10, 0, 3, 9, 2, 14, 26, 27, 1, 5, 8, -1, 8]
quikcSort(&list2, low: 0, high: list2.count-1)

var list3 = [1,3,9,8,2,7,5]
quikcSort(&list3, low: 0, high: list3.count-1)
This is from my blog about quicksort; see the GitHub sample Quick-Sort.
You can take a look at Lomuto's partitioning algorithm in "Partitioning the list", written in Swift.
Swift 4.1 introduces new -Osize optimization mode.
In Swift 4.1 the compiler now supports a new optimization mode which
enables dedicated optimizations to reduce code size.
The Swift compiler comes with powerful optimizations. When compiling
with -O the compiler tries to transform the code so that it executes
with maximum performance. However, this improvement in runtime
performance can sometimes come with a tradeoff of increased code size.
With the new -Osize optimization mode the user has the choice to
compile for minimal code size rather than for maximum speed.
To enable the size optimization mode on the command line, use -Osize
instead of -O.
Further reading: https://swift.org/blog/osize/