Swift Beta performance: sorting arrays

I was implementing an algorithm in Swift Beta and noticed that the performance was very poor. After digging deeper I realized that one of the bottlenecks was something as simple as sorting arrays. The relevant part is here:
let n = 1000000
var x = [Int](repeating: 0, count: n)
for i in 0..<n {
    x[i] = random()
}
// start clock here
let y = sort(x)
// stop clock here
In C++, a similar operation takes 0.06s on my computer.
In Python, it takes 0.6s (no tricks, just y = sorted(x) for a list of integers).
In Swift it takes 6s if I compile it with the following command:
xcrun swift -O3 -sdk `xcrun --show-sdk-path --sdk macosx`
And it takes as much as 88s if I compile it with the following command:
xcrun swift -O0 -sdk `xcrun --show-sdk-path --sdk macosx`
Timings in Xcode with "Release" vs. "Debug" builds are similar.
What is wrong here? I could understand some performance loss in comparison with C++, but not a 10-fold slowdown in comparison with pure Python.
Edit: mweathers noticed that changing -O3 to -Ofast makes this code run almost as fast as the C++ version! However, -Ofast changes the semantics of the language a lot; in my testing, it disabled the checks for integer overflows and array indexing overflows. For example, with -Ofast the following Swift code runs silently without crashing (and prints out some garbage):
let n = 10000000
print(n*n*n*n*n)
let x = [Int](repeating: 10, count: n)
print(x[n])
So -Ofast is not what we want; the whole point of Swift is that we have the safety nets in place. Of course, the safety nets have some impact on the performance, but they should not make the programs 100 times slower. Remember that Java already checks for array bounds, and in typical cases, the slowdown is by a factor much less than 2. And in Clang and GCC we have got -ftrapv for checking (signed) integer overflows, and it is not that slow, either.
Hence the question: how can we get reasonable performance in Swift without losing the safety nets?
Edit 2: I did some more benchmarking, with very simple loops along the lines of
for i in 0..<n {
    x[i] = x[i] ^ 12345678
}
(Here the xor operation is there just so that I can more easily find the relevant loop in the assembly code. I tried to pick an operation that is easy to spot but also "harmless" in the sense that it should not require any checks related to integer overflows.)
Again, there was a huge difference in the performance between -O3 and -Ofast. So I had a look at the assembly code:
With -Ofast I get pretty much what I would expect. The relevant part is a loop with 5 machine language instructions.
With -O3 I get something that was beyond my wildest imagination. The inner loop spans 88 lines of assembly code. I did not try to understand all of it, but the most suspicious parts are 13 invocations of "callq _swift_retain" and another 13 invocations of "callq _swift_release". That is, 26 subroutine calls in the inner loop!
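For anyone repeating this inspection with a current toolchain, a hedged sketch of the commands (flag spellings are for modern swiftc rather than the beta driver used above, and the file name is hypothetical):
xcrun swiftc -O -emit-assembly loop.swift -o loop.s
grep -c swift_retain loop.s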
Edit 3: In comments, Ferruccio asked for benchmarks that are fair in the sense that they do not rely on built-in functions (e.g. sort). I think the following program is a fairly good example:
let n = 10000
var x = [Int](repeating: 1, count: n)
for i in 0..<n {
    for j in 0..<n {
        x[i] = x[j]
    }
}
There is no arithmetic, so we do not need to worry about integer overflows. The only thing that we do is just lots of array references. And the results are here; Swift -O3 loses by a factor of almost 500 in comparison with -Ofast:
C++ -O3: 0.05 s
C++ -O0: 0.4 s
Java: 0.2 s
Python with PyPy: 0.5 s
Python: 12 s
Swift -Ofast: 0.05 s
Swift -O3: 23 s
Swift -O0: 443 s
(If you are concerned that the compiler might optimize out the pointless loops entirely, you can change it to e.g. x[i] ^= x[j], and add a print statement that outputs x[0]. This does not change anything; the timings will be very similar.)
And yes, here the Python implementation was a stupid pure Python implementation with a list of ints and nested for loops. It should be much slower than unoptimized Swift. Something seems to be seriously broken with Swift and array indexing.
Edit 4: These issues (as well as some other performance issues) seem to have been fixed in Xcode 6 beta 5.
For sorting, I now have the following timings:
clang++ -O3: 0.06 s
swiftc -Ofast: 0.1 s
swiftc -O: 0.1 s
swiftc: 4 s
For nested loops:
clang++ -O3: 0.06 s
swiftc -Ofast: 0.3 s
swiftc -O: 0.4 s
swiftc: 540 s
It seems that there is no reason anymore to use the unsafe -Ofast (a.k.a. -Ounchecked); plain -O produces equally good code.

tl;dr Swift 1.0 is now as fast as C by this benchmark using the default release optimisation level [-O].
Here is an in-place quicksort in Swift Beta:
func quicksort_swift(inout a: CInt[], start: Int, end: Int) {
    if (end - start < 2) {
        return
    }
    var p = a[start + (end - start)/2]
    var l = start
    var r = end - 1
    while (l <= r) {
        if (a[l] < p) {
            l += 1
            continue
        }
        if (a[r] > p) {
            r -= 1
            continue
        }
        var t = a[l]
        a[l] = a[r]
        a[r] = t
        l += 1
        r -= 1
    }
    quicksort_swift(&a, start, r + 1)
    quicksort_swift(&a, r + 1, end)
}
And the same in C:
void quicksort_c(int *a, int n) {
    if (n < 2)
        return;
    int p = a[n / 2];
    int *l = a;
    int *r = a + n - 1;
    while (l <= r) {
        if (*l < p) {
            l++;
            continue;
        }
        if (*r > p) {
            r--;
            continue;
        }
        int t = *l;
        *l++ = *r;
        *r-- = t;
    }
    quicksort_c(a, r - a + 1);
    quicksort_c(l, a + n - l);
}
Both work:
var a_swift:CInt[] = [0,5,2,8,1234,-1,2]
var a_c:CInt[] = [0,5,2,8,1234,-1,2]
quicksort_swift(&a_swift, 0, a_swift.count)
quicksort_c(&a_c, CInt(a_c.count))
// [-1, 0, 2, 2, 5, 8, 1234]
// [-1, 0, 2, 2, 5, 8, 1234]
Both are called in the same program as written.
var x_swift = CInt[](count: n, repeatedValue: 0)
var x_c = CInt[](count: n, repeatedValue: 0)
for var i = 0; i < n; ++i {
    x_swift[i] = CInt(random())
    x_c[i] = CInt(random())
}
let swift_start:UInt64 = mach_absolute_time();
quicksort_swift(&x_swift, 0, x_swift.count)
let swift_stop:UInt64 = mach_absolute_time();
let c_start:UInt64 = mach_absolute_time();
quicksort_c(&x_c, CInt(x_c.count))
let c_stop:UInt64 = mach_absolute_time();
This converts the absolute times to seconds:
static const uint64_t NANOS_PER_USEC = 1000ULL;
static const uint64_t NANOS_PER_MSEC = 1000ULL * NANOS_PER_USEC;
static const uint64_t NANOS_PER_SEC = 1000ULL * NANOS_PER_MSEC;
mach_timebase_info_data_t timebase_info;
uint64_t abs_to_nanos(uint64_t abs) {
    if (timebase_info.denom == 0) {
        (void)mach_timebase_info(&timebase_info);
    }
    return abs * timebase_info.numer / timebase_info.denom;
}

double abs_to_seconds(uint64_t abs) {
    return abs_to_nanos(abs) / (double)NANOS_PER_SEC;
}
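A hedged Swift equivalent of that conversion, as a sketch (it assumes mach_timebase_info is reachable via import Darwin and reuses the swift_start/swift_stop and c_start/c_stop values captured above):
import Darwin

// Convert mach_absolute_time() ticks to seconds via the timebase info.
func absToSeconds(_ abs: UInt64) -> Double {
    var info = mach_timebase_info_data_t()
    _ = mach_timebase_info(&info)
    let nanos = abs * UInt64(info.numer) / UInt64(info.denom)
    return Double(nanos) / 1_000_000_000
}

print("swift: \(absToSeconds(swift_stop - swift_start)) s")
print("c:     \(absToSeconds(c_stop - c_start)) s")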
Here is a summary of the compiler's optimization levels:
[-Onone] no optimizations, the default for debug.
[-O] perform optimizations, the default for release.
[-Ofast] perform optimizations and disable runtime overflow checks and runtime type checks.
Time in seconds with [-Onone] for n=10_000:
Swift: 0.895296452
C: 0.001223848
Here is Swift's builtin sort() for n=10_000:
Swift_builtin: 0.77865783
Here is [-O] for n=10_000:
Swift: 0.045478346
C: 0.000784666
Swift_builtin: 0.032513488
As you can see, Swift's performance improved by a factor of 20.
As per mweathers' answer, setting [-Ofast] makes the real difference, resulting in these times for n=10_000:
Swift: 0.000706745
C: 0.000742374
Swift_builtin: 0.000603576
And for n=1_000_000:
Swift: 0.107111846
C: 0.114957179
Swift_sort: 0.092688548
For comparison, this is with [-Onone] for n=1_000_000:
Swift: 142.659763258
C: 0.162065333
Swift_sort: 114.095478272
So Swift with no optimizations was almost 1000x slower than C in this benchmark, at this stage in its development. On the other hand with both compilers set to [-Ofast] Swift actually performed at least as well if not slightly better than C.
It has been pointed out that [-Ofast] changes the semantics of the language, making it potentially unsafe. This is what Apple states in the Xcode 5.0 release notes:
A new optimization level -Ofast, available in LLVM, enables aggressive optimizations. -Ofast relaxes some conservative restrictions, mostly for floating-point operations, that are safe for most code. It can yield significant high-performance wins from the compiler.
They all but advocate it. Whether that's wise or not I couldn't say, but from what I can tell it seems reasonable enough to use [-Ofast] in a release if you're not doing high-precision floating point arithmetic and you're confident no integer or array overflows are possible in your program. If you do need high performance and overflow checks / precise arithmetic then choose another language for now.
BETA 3 UPDATE:
n=10_000 with [-O]:
Swift: 0.019697268
C: 0.000718064
Swift_sort: 0.002094721
Swift in general is a bit faster and it looks like Swift's built-in sort has changed quite significantly.
FINAL UPDATE:
[-Onone]:
Swift: 0.678056695
C: 0.000973914
[-O]:
Swift: 0.001158492
C: 0.001192406
[-Ounchecked]:
Swift: 0.000827764
C: 0.001078914

TL;DR: Yes, the only Swift language implementation is slow, right now. If you need fast numeric code (and other kinds of code, presumably), just go with another language for now. In the future, you should re-evaluate your choice. It might be good enough for most application code that is written at a higher level, though.
From what I'm seeing in SIL and LLVM IR, it seems like they need a bunch of optimizations for removing retains and releases, which might be implemented in Clang (for Objective-C), but they haven't ported them yet. That's the theory I'm going with (for now… I still need to confirm that Clang does something about it), since a profiler run on the last test case of this question yields this "pretty" result.
As was said by many others, -Ofast is totally unsafe and changes language semantics. For me, it's at the “If you're going to use that, just use another language” stage. I'll re-evaluate that choice later, if it changes.
-O3 gets us a bunch of swift_retain and swift_release calls that, honestly, don't look like they should be there for this example. The optimizer should have elided (most of) them AFAICT, since it knows most of the information about the array, and knows that it has (at least) a strong reference to it.
It shouldn't emit more retains when it's not even calling functions which might release the objects. I don't think an array constructor can return an array which is smaller than what was asked for, which means that a lot of the checks that were emitted are useless. It also knows that the integer will never be above 10k, so the overflow checks can be optimized away (not because of -Ofast weirdness, but because of the semantics of the language: nothing else is changing that var or can access it, and adding up to 10k is safe for the type Int).
The compiler might not be able to unbox the array or the array elements, though, since they're getting passed to sort(), which is an external function and has to get the arguments it's expecting. This will make us have to use the Int values indirectly, which would make it go a bit slower. This could change if the sort() generic function (not in the multi-method way) was available to the compiler and got inlined.
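For readers hitting this kind of retain/release traffic today, a hedged workaround sketch (modern Swift syntax, not part of the original analysis): running the hot loop through withUnsafeMutableBufferPointer keeps the element accesses away from the reference-counted array box.
var x = [Int](repeating: 1, count: 10_000)

// The nested loop from Edit 3, but over an unsafe buffer pointer.
x.withUnsafeMutableBufferPointer { buf in
    for i in buf.indices {
        for j in buf.indices {
            buf[i] = buf[j]
        }
    }
}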
This is a very new (publicly) language, and it is going through what I assume are lots of changes, since there are people (heavily) involved with the Swift language asking for feedback and they all say the language isn't finished and will change.
Code used:
import Cocoa
let swift_start = NSDate.timeIntervalSinceReferenceDate();
let n: Int = 10000
let x = Int[](count: n, repeatedValue: 1)
for i in 0..n {
    for j in 0..n {
        let tmp: Int = x[j]
        x[i] = tmp
    }
}
let y: Int[] = sort(x)
let swift_stop = NSDate.timeIntervalSinceReferenceDate();
println("\(swift_stop - swift_start)s")
P.S.: I'm not an expert on Objective-C nor on all the facilities of the Cocoa, Objective-C, and Swift runtimes. I might also be assuming some things that I didn't write.

I decided to take a look at this for fun, and here are the timings that I get:
Swift 4.0.2 : 0.83s (0.74s with `-Ounchecked`)
C++ (Apple LLVM 8.0.0): 0.74s
Swift
// Swift 4.0 code
import Foundation
func doTest() -> Void {
    let arraySize = 10000000
    var randomNumbers = [UInt32]()
    for _ in 0..<arraySize {
        randomNumbers.append(arc4random_uniform(UInt32(arraySize)))
    }
    let start = Date()
    randomNumbers.sort()
    let end = Date()
    print(randomNumbers[0])
    print("Elapsed time: \(end.timeIntervalSince(start))")
}
doTest()
Results:
Swift 1.1
xcrun swiftc --version
Swift version 1.1 (swift-600.0.54.20)
Target: x86_64-apple-darwin14.0.0
xcrun swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 1.02204304933548
Swift 1.2
xcrun swiftc --version
Apple Swift version 1.2 (swiftlang-602.0.49.6 clang-602.0.49)
Target: x86_64-apple-darwin14.3.0
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.738763988018036
Swift 2.0
xcrun swiftc --version
Apple Swift version 2.0 (swiftlang-700.0.59 clang-700.0.72)
Target: x86_64-apple-darwin15.0.0
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.767306983470917
It seems to be the same performance if I compile with -Ounchecked.
Swift 3.0
xcrun swiftc --version
Apple Swift version 3.0 (swiftlang-800.0.46.2 clang-800.0.38)
Target: x86_64-apple-macosx10.9
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.939633965492249
xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort
Elapsed time: 0.866258025169373
There seems to have been a performance regression from Swift 2.0 to Swift 3.0, and I'm also seeing a difference between -O and -Ounchecked for the first time.
Swift 4.0
xcrun swiftc --version
Apple Swift version 4.0.2 (swiftlang-900.0.69.2 clang-900.0.38)
Target: x86_64-apple-macosx10.9
xcrun -sdk macosx swiftc -O SwiftSort.swift
./SwiftSort
Elapsed time: 0.834299981594086
xcrun -sdk macosx swiftc -Ounchecked SwiftSort.swift
./SwiftSort
Elapsed time: 0.742045998573303
Swift 4 improves the performance again, while maintaining a gap between -O and -Ounchecked. -O -whole-module-optimization did not appear to make a difference.
C++
#include <chrono>
#include <iostream>
#include <vector>
#include <cstdint>
#include <stdlib.h>
using namespace std;
using namespace std::chrono;
int main(int argc, const char * argv[]) {
    const auto arraySize = 10000000;
    vector<uint32_t> randomNumbers;
    for (int i = 0; i < arraySize; ++i) {
        randomNumbers.emplace_back(arc4random_uniform(arraySize));
    }
    const auto start = high_resolution_clock::now();
    sort(begin(randomNumbers), end(randomNumbers));
    const auto end = high_resolution_clock::now();
    cout << randomNumbers[0] << "\n";
    cout << "Elapsed time: " << duration_cast<duration<double>>(end - start).count() << "\n";
    return 0;
}
Results:
Apple Clang 6.0
clang++ --version
Apple LLVM version 6.0 (clang-600.0.54) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin14.0.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.688969
Apple Clang 6.1.0
clang++ --version
Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.3.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.670652
Apple Clang 7.0.0
clang++ --version
Apple LLVM version 7.0.0 (clang-700.0.72)
Target: x86_64-apple-darwin15.0.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.690152
Apple Clang 8.0.0
clang++ --version
Apple LLVM version 8.0.0 (clang-800.0.38)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.68253
Apple Clang 9.0.0
clang++ --version
Apple LLVM version 9.0.0 (clang-900.0.38)
Target: x86_64-apple-darwin16.7.0
Thread model: posix
clang++ -O3 -std=c++11 CppSort.cpp -o CppSort
./CppSort
Elapsed time: 0.736784
Verdict
As of the time of this writing, Swift's sort is fast, but not yet as fast as C++'s sort when compiled with -O, with the above compilers & libraries. With -Ounchecked, it appears to be as fast as C++ in Swift 4.0.2 and Apple LLVM 9.0.0.

From The Swift Programming Language:
The Sort Function Swift’s standard library provides a function called
sort, which sorts an array of values of a known type, based on the
output of a sorting closure that you provide. Once it completes the
sorting process, the sort function returns a new array of the same
type and size as the old one, with its elements in the correct sorted
order.
The sort function has two declarations.
The default declaration which allows you to specify a comparison closure:
func sort<T>(array: T[], pred: (T, T) -> Bool) -> T[]
And a second declaration that only takes a single parameter (the array) and is "hardcoded to use the less-than comparator."
func sort<T : Comparable>(array: T[]) -> T[]
Example:
sort(arrayToSort) { $0 > $1 }
I tested a modified version of your code in a playground with the closure added on so I could monitor the function a little more closely, and I found that with n set to 1000, the closure was being called about 11,000 times.
let n = 1000
let x = Int[](count: n, repeatedValue: 0)
for i in 0..n {
    x[i] = random()
}
let y = sort(x) { $0 > $1 }
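A hedged sketch of how that call count can be observed, written in current Swift syntax with a hypothetical counter (the original measurement was done in a beta playground):
// Count how many times the comparator closure runs while sorting 1000 values.
var comparisons = 0
let values = (0..<1000).map { _ in Int.random(in: 0..<1_000_000) }
let sortedValues = values.sorted { a, b in
    comparisons += 1
    return a > b
}
print("comparator called \(comparisons) times for \(sortedValues.count) elements")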
It is not an efficient function, and I would recommend using a better sorting function implementation.
EDIT:
I took a look at the Quicksort wikipedia page and wrote a Swift implementation for it. Here is the full program I used (in a playground)
import Foundation
func quickSort(inout array: Int[], begin: Int, end: Int) {
    if (begin < end) {
        let p = partition(&array, begin, end)
        quickSort(&array, begin, p - 1)
        quickSort(&array, p + 1, end)
    }
}

func partition(inout array: Int[], left: Int, right: Int) -> Int {
    let numElements = right - left + 1
    let pivotIndex = left + numElements / 2
    let pivotValue = array[pivotIndex]
    swap(&array[pivotIndex], &array[right])
    var storeIndex = left
    for i in left..right {
        let a = 1 // <- Used to see how many comparisons are made
        if array[i] <= pivotValue {
            swap(&array[i], &array[storeIndex])
            storeIndex++
        }
    }
    swap(&array[storeIndex], &array[right]) // Move pivot to its final place
    return storeIndex
}
let n = 1000
var x = Int[](count: n, repeatedValue: 0)
for i in 0..n {
    x[i] = Int(arc4random())
}
quickSort(&x, 0, x.count - 1) // <- Does the sorting
for i in 0..n {
    x[i] // <- Used by the playground to display the results
}
Using this with n=1000, I found that
quickSort() got called about 650 times,
about 6000 swaps were made,
and about 10,000 comparisons were made.
It seems that the built-in sort method is (or is close to) quick sort, and is really slow...

As of Xcode 7 you can turn on Fast, Whole Module Optimization. This should increase your performance immediately.

Swift Array performance revisited:
I wrote my own benchmark comparing Swift with C/Objective-C. My benchmark calculates prime numbers. It uses the array of previously found prime numbers to look for prime factors in each new candidate, so it is quite fast. However, it does TONS of array reading and less writing to arrays.
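The project's actual sources are on GitHub; purely as a hedged illustration of the approach described above (names and the target count are made up), the core loop looks roughly like this:
// Trial division against the primes found so far, stopping at the square root.
var primes: [Int] = [2]
var candidate = 3
while primes.count < 10_000 {
    var isPrime = true
    for p in primes {
        if p * p > candidate { break }
        if candidate % p == 0 { isPrime = false; break }
    }
    if isPrime { primes.append(candidate) }
    candidate += 2
}
print(primes.count, primes.last!)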
I originally did this benchmark against Swift 1.2. I decided to update the project and run it against Swift 2.0.
The project lets you select between using normal swift arrays and using Swift unsafe memory buffers using array semantics.
For C/Objective-C, you can either opt to use NSArrays, or C malloc'ed arrays.
The test results seem to be pretty similar with fastest, smallest code optimization ([-Os]) or fastest, aggressive ([-Ofast]) optimization.
Swift 2.0 performance is still horrible with code optimization turned off, whereas C/Objective-C performance is only moderately slower.
The bottom line is that C malloc'd array-based calculations are the fastest, by a modest margin.
Swift with unsafe buffers takes around 1.19x to 1.20x longer than C malloc'd arrays when using fastest, smallest code optimization. The difference seems slightly smaller with fast, aggressive optimization (Swift takes more like 1.18x to 1.16x longer than C).
If you use regular Swift arrays, the difference with C is slightly greater (Swift takes ~1.22x to 1.23x longer).
Regular Swift arrays are DRAMATICALLY faster than they were in Swift 1.2/Xcode 6. Their performance is so close to Swift unsafe buffer based arrays that using unsafe memory buffers does not really seem worth the trouble any more, which is big.
BTW, Objective-C NSArray performance stinks. If you're going to use the native container objects in both languages, Swift is DRAMATICALLY faster.
You can check out my project on github at SwiftPerformanceBenchmark
It has a simple UI that makes collecting stats pretty easy.
It's interesting that sorting seems to be slightly faster in Swift than in C now, but that this prime number algorithm is still faster in Swift.

The main issue, mentioned by others but not called out enough, is that -O3 does nothing at all in Swift (and never has), so when compiled with that flag the code is effectively non-optimised (-Onone).
Option names have changed over time so some other answers have obsolete flags for the build options. Correct current options (Swift 2.2) are:
-Onone // Debug - slow
-O // Optimised
-O -whole-module-optimization //Optimised across files
Whole module optimisation has a slower compile but can optimise across files within the module, i.e. within each framework and within the actual application code, but not between them. You should use this for anything performance critical.
You can also disable safety checks for even more speed, but then all assertions and preconditions are not just disabled but optimised on the basis that they are correct. If you ever hit an assertion, this means that you are into undefined behaviour. Use with extreme caution, and only if you determine that the speed boost is worthwhile for you (by testing). If you do find it valuable for some code, I recommend separating that code into a separate framework and only disabling the safety checks for that module.
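A hedged command-line sketch of that setup (file and module names are hypothetical, and the module-import/link flags for the main build are omitted for brevity):
# Hot-path module only: optimised, safety checks removed
xcrun swiftc -Ounchecked -whole-module-optimization -emit-library -module-name HotPath HotPath.swift

# Everything else: optimised, safety checks kept
xcrun swiftc -O -whole-module-optimization main.swift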

func partition(inout list: [Int], low: Int, high: Int) -> Int {
    let pivot = list[high]
    var j = low
    var i = j - 1
    while j < high {
        if list[j] <= pivot {
            i += 1
            (list[i], list[j]) = (list[j], list[i])
        }
        j += 1
    }
    (list[i+1], list[high]) = (list[high], list[i+1])
    return i+1
}

func quickSort(inout list: [Int], low: Int, high: Int) {
    if low < high {
        let pIndex = partition(&list, low: low, high: high)
        quickSort(&list, low: low, high: pIndex - 1)
        quickSort(&list, low: pIndex + 1, high: high)
    }
}

var list = [7,3,15,10,0,8,2,4]
quickSort(&list, low: 0, high: list.count-1)

var list2 = [ 10, 0, 3, 9, 2, 14, 26, 27, 1, 5, 8, -1, 8 ]
quickSort(&list2, low: 0, high: list2.count-1)

var list3 = [1,3,9,8,2,7,5]
quickSort(&list3, low: 0, high: list3.count-1)
This is from my blog about Quick Sort; see the GitHub sample Quick-Sort.
You can take a look at Lomuto's partitioning algorithm in "Partitioning the list"; it is written in Swift.

Swift 4.1 introduces new -Osize optimization mode.
In Swift 4.1 the compiler now supports a new optimization mode which
enables dedicated optimizations to reduce code size.
The Swift compiler comes with powerful optimizations. When compiling
with -O the compiler tries to transform the code so that it executes
with maximum performance. However, this improvement in runtime
performance can sometimes come with a tradeoff of increased code size.
With the new -Osize optimization mode the user has the choice to
compile for minimal code size rather than for maximum speed.
To enable the size optimization mode on the command line, use -Osize
instead of -O.
Further reading : https://swift.org/blog/osize/

Related

Why is using subscript syntax in a for-in loop faster in Swift than accessing the elements directly?

I read the famous Why is it faster to process a sorted array than an unsorted array? and I decided to play around and experiment with other languages such as Swift. I was surprised by the run time differences between 2 very similar snippets of code.
In Swift one can access elements in an array either in a direct way or with a subscript while in a for-in loop. For instance this code:
for i in 0..<size {
    sum += data[i]
}
Could be written:
for element in data {
    sum += element
}
With size the data length and data an array of summable elements.
So, I just implemented in Swift (code below) the same algorithm as in the question I mentioned in the first paragraph, and what surprised me is that the first method is roughly 5 times faster than the second method.
I don't really know the subscript implementation behind the scenes, but I thought that accessing the elements directly in a Swift for-in loop was just syntactic sugar.
Question
My question is: what is the difference between the two for-in syntaxes, and why is it faster to use the subscript?
Here is the detail of the timings. I'm using Xcode 9.4.1 with Swift 4.1 on an early 2015 MacBook Air with a Command Line Tool project.
// Using Direct Element Access
Elapsed Time: 8.506288427
Sum: 1051901000
vs
// Using Subscript
Elapsed Time: 1.483967902
Sum: 1070388000
Bonus question: why is the execution 100 times slower in Swift than in C++ (both executed on the same Mac in an Xcode project)? For instance, 100,000 repetitions in C++ take nearly the same time as 1,000 repetitions in Swift. My first guess is that Swift is a higher-level language than C++ and that Swift operates more safety checks, for instance.
Here is the Swift code I used, I only modified the second nested loop:
import Foundation
import GameplayKit
let size = 32_768
var data = [Int]()
var sum = 0
var rand = GKRandomDistribution(lowestValue: 0, highestValue: 255)
for _ in 0..<size {
    data.append(rand.nextInt())
}

// data.sort()

let start = DispatchTime.now()
for _ in 0..<1_000 {
    // Only the following for-in loop changes
    for i in 0..<size {
        if data[i] <= 128 {
            sum += data[i]
        }
    }
}
let stop = DispatchTime.now()
let nanoTime = stop.uptimeNanoseconds - start.uptimeNanoseconds
let elapsed = Double(nanoTime) / 1_000_000_000
print("Elapsed Time: \(elapsed)")
print("Sum: \(sum)")
The overall performance highly depends on the optimizations done by the compiler. If you compile your code with optimizations enabled, you will see that the difference between both solutions is minimal.
To demonstrate this, I updated your code adding two methods, one with subscripting and the other using for-in.
import Foundation
import GameplayKit
let size = 32_768
var data = [Int]()
var sum = 0
var rand = GKRandomDistribution(lowestValue: 0, highestValue: 255)
for _ in 0..<size {
    data.append(rand.nextInt())
}

// data.sort()

func withSubscript() {
    let start = DispatchTime.now()
    for _ in 0..<1_000 {
        for i in 0..<size {
            if data[i] <= 128 {
                sum += data[i]
            }
        }
    }
    let stop = DispatchTime.now()
    let elapsed = Double(stop.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000_000
    print("With subscript:")
    print("- Elapsed Time: \(elapsed)")
    print("- Sum: \(sum)")
}

func withForIn() {
    let start = DispatchTime.now()
    for _ in 0..<1_000 {
        for element in data {
            if element <= 128 {
                sum += element
            }
        }
    }
    let stop = DispatchTime.now()
    let elapsed = Double(stop.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000_000
    print("With for-in:")
    print("- Elapsed Time: \(elapsed)")
    print("- Sum: \(sum)")
}
withSubscript()
withForIn()
I saved that code into a file called array-subscripting.swift.
Then, from the command line, we can run it without any optimizations, like this:
$ swift array-subscripting.swift
With subscript:
- Elapsed Time: 0.924554249
- Sum: 1057062000
With for-in:
- Elapsed Time: 5.796038213
- Sum: 2114124000
As you mentioned in the post, there is a big difference in performance.
This difference is pretty negligible when the code is compiled with optimizations:
$ swiftc array-subscripting.swift -O
$ ./array-subscripting
With subscript:
- Elapsed Time: 0.110622556
- Sum: 1054578000
With for-in:
- Elapsed Time: 0.11670454
- Sum: 2109156000
As you can see, both solutions are way faster than before, and very similar on time execution.
Back to your original question, subscripting provides direct access to memory, which is pretty efficient in the case of contiguous arrays, where elements are stored next to each other in memory.
for-in loops, on the other hand, create an immutable copy of each element from the array, which incurs a performance hit.
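For completeness, a hedged sketch of a third variant that is not in the original answer: iterating through an unsafe buffer pointer. It reuses the data and sum globals from the code above; under -O one would expect its timing to land in roughly the same range as the subscript version.
func withUnsafeBuffer() {
    let start = DispatchTime.now()
    for _ in 0..<1_000 {
        data.withUnsafeBufferPointer { buf in
            for i in buf.indices {
                if buf[i] <= 128 {
                    sum += buf[i]
                }
            }
        }
    }
    let stop = DispatchTime.now()
    let elapsed = Double(stop.uptimeNanoseconds - start.uptimeNanoseconds) / 1_000_000_000
    print("With unsafe buffer:")
    print("- Elapsed Time: \(elapsed)")
    print("- Sum: \(sum)")
}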

Why sometimes Apple Accelerate framework is slow?

I am playing with C and Swift 3.0 code that uses vecLib and Apple's Accelerate framework, as a dynamic library plus my own code, in a C-based project and in a Swift playground.
When calling the framework's SIMD wrappers such as vvcospif() for a computation over only 1 to 4 elements, they are slower than the simple standard cos(x * PI), for example when the call sits in a loop that runs about 1,000 times.
I know about the difference between vvcospif() and cos(); I should use vvcospif() precisely for x * PI.
Example in a playground; you can just copy the code and run it:
import Cocoa
import Accelerate
func cosine_interpolate(alpha: Float, a: Float, b: Float) -> Float {
    let ft: Float = alpha * 3.1415927;
    let f: Float = (1 - cos(ft)) * 0.5;
    return a + f*(b - a);
}

var start: Date = NSDate() as Date
var interp: Float;

for index in 0..<1000 {
    interp = cosine_interpolate(alpha: 0.25, a: 1.0, b: 0.75)
}

var end = NSDate();
var timeInterval: Double = end.timeIntervalSince(start);
print("cosine_interpolate in \(timeInterval) seconds")

func fast_cosine_interpolate(alpha: Float, a: Float, b: Float) -> Float {
    var x: Float = alpha
    var count: Int32 = 1
    var result: Float = 0
    vvcospif(&result, &x, &count)
    let SINSIN_HALF_X: Float = (1 - result) * 0.5;
    return a + SINSIN_HALF_X * (b - a);
}

start = NSDate() as Date

for index in 0..<1000 {
    interp = fast_cosine_interpolate(alpha: 0.25, a: 1.0, b: 0.75)
}

end = NSDate();
timeInterval = end.timeIntervalSince(start);
print("fast_cosine_interpolate in \(timeInterval) seconds")
My question is:
Why is vvcospif() slow in this example?
Maybe because vvcospif() is a wrapper over the Objective-C runtime, and converting data structures / copying memory from Intel SIMD -> Objective-C -> the Swift runtime is slower than a tiny cos()?
I also have a performance issue with C code using
#include <Accelerate/Accelerate.h>
vvcospif(resultVector, inputVector, &count);
when inputVector and resultVector are small arrays with 1 or 2 elements, or just a float variable, and the call sits in a loop run ~1,000,000 times.
cos(x * PI) computation time is near 20 ms,
while
vvcospif(x) processing one float or a float array[2] takes near 80 ms! Where is the acceleration? :)
Yes, in Xcode I compile with -O and whole-module optimization enabled.
For a more detailed discussion with examples, see "Introduction to Fast Bezier (and Trying the Accelerate.framework)".
The first, fundamental problem is that non-inlined function calls are extremely expensive. You don't want function calls if you can possibly help it in performance-critical code. Within a module, the compiler can often inline functions for you, and parts of stdlib can be inlined for you. But when you start crossing module barriers, Swift generally cannot optimize out the call.
The point of SIMD functions is that you set up all your data in the right format, and then call them just one time. That way the cost of the function call is made up by the SIMD optimized code you're calling.
But remember, you don't have to call into Accelerate to get SIMD optimizations. The compiler is perfectly capable of noticing you've written a loop and turning it into an inline SIMD algorithm itself (and it does this all the time). So for many simple problems, the compiler is going to win anyway. Think about it: if calling vvcospif with a count of 1 were faster than calling cos, wouldn't they just implement cos that way?
I haven't played with your code much, but if you want to improve its performance with Accelerate, you want to think about how to arrange all your input data so you can call vvcospif one time with a large N. It's quite possible in that case that it will be much faster that a loop (since cos is not trivial).
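A hedged sketch of that batching idea (array contents and sizes are made up; vvcospif computes cos(x * pi) element-wise):
import Accelerate

// Fill one large input array, then make a single vvcospif call over all of it.
let alphas: [Float] = (0..<10_000).map { Float($0) / 10_000 }
var results = [Float](repeating: 0, count: alphas.count)
var count = Int32(alphas.count)

vvcospif(&results, alphas, &count)   // results[i] = cos(alphas[i] * .pi)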
If you want an example of Accelerate in practice, and how you need to organize your data, see PinchText. This code is computing offsets for a page full of a few thousand glyphs based on up to 10 touches in real-time, with animations (see PinchText.mov for what the result looks like). In particular look at adjustViewPositions:count:forTouchPoint:. Notice how count is large, and the data is transformed step by step with no loops. Even throwing in a (very expensive) ObjC method call into that method doesn't matter very much because it's only made one time. Getting rid of function calls in loops is a huge part of performance programming.

How to test generic performance with whole module optimization

In WWDC 2015 Session 409, near the 18-minute mark, the discussion leads me to believe that generics can be optimized through Generic Specialization by enabling whole module optimization mode. Unfortunately my tests, which I'm not confident in, revealed nothing useful.
I ran some very simple tests between the following two methods to see if the performance was similar:
func genericMax<T : Comparable>(x: T, y: T) -> T {
    return y > x ? y : x
}

func intMax(x: Int, y: Int) -> Int {
    return y > x ? y : x
}

Simple XCTest:

func testPerformanceExample() {
    self.measureBlock {
        let x: Int = Int(arc4random_uniform(9999))
        let y: Int = Int(arc4random_uniform(9999))
        for _ in 0...1000000 {
            // let _ = genericMax(x, y: y)
            let _ = intMax(x, y: y)
        }
    }
}
What happened
Without optimization the following tests were reasonably different:
genericMax: 0.018 sec
intMax: 0.005 sec
However, with Whole Module Optimization the timings still weren't similar:
genericMax: 0.014 sec
intMax: 0.004 sec
What I Expected
With whole module optimization enabled I expected similar times between the two methods calls. This leads me to believe that my test is flawed.
Question
Assuming my tests are flawed or poorly designed: how could I better measure how Whole Module Optimization mode optimizes generics through Generic Specialization?
Your tests are flawed because they measure test performance, not app performance. Tests live in a separate executable file, and they do not benefit from whole-module optimizations themselves. Because of this, your test always uses the generic, non-specialized implementation even in places where your program doesn't.
If you want to see that whole-module optimizations are enabled in your executable, you need to test a function from your executable. (You also need to make sure that your tests either use the Release build, or that you have WMO enabled in the debug build.)
Adding this function to the executable:
func genericIntMax(x: Int, y: Int) -> Int {
    return genericMax(x, y: y)
}
and using it from the tests in place of genericMax in the tests, I get identical performance. (Note that this is not really whole module optimization since the two functions live in the same file; but it shows the difference between app code and test code when it comes to optimizations.)

Weird casting needed in Swift

I was watching some of the videos from WWDC 2014 and trying out code I liked, but one of the weird things I noticed is that Swift keeps getting mad at me and wanting me to cast to different number types. This is easy enough, but in the videos at WWDC they did NOT need to do this. Here is an example from "What's New With Interface Builder":
-M_PI/2 keeps giving me the error: "Could not find an overload for '/' that accepts the supplied arguments"
Does anyone have a solution to this problem that does NOT simply involve casting, because there is clearly another way of doing this? I have many more examples of similar problems.
if !ringLayer {
    ringLayer = CAShapeLayer()
    let innerRect = CGRectInset(bounds, lineWidth / 2.0, lineWidth / 2.0)
    let innerPath = UIBezierPath(ovalInRect: innerRect)
    ringLayer.path = innerPath.CGPath
    ringLayer.fillColor = nil
    ringLayer.lineWidth = lineWidth
    ringLayer.strokeColor = UIColor.blueColor().CGColor
    ringLayer.anchorPoint = CGPointMake(0.5, 0.5)
    ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
    layer.addSublayer(ringLayer)
}
ringLayer.frame = layer.bounds
Edit: NB: CGFloat has changed in beta 4, specifically to make handling this 32/64-bit difference easier. Read the release notes and don't take the below as gospel now: it was written for beta 2.
After a clue from this answer I've worked it out: it depends on the selected project architecture. If I leave the Project architecture at the default of (armv7, arm64), then I get the same error as you with this code:
// Error with arm7 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
...and need to cast to a Float (well, CGFloat underneath, I'm sure) to make it work:
// Works with explicit cast on arm7 target
ringLayer.transform = CATransform3DRotate(ringLayer.transform, Float(-M_PI/2), 0, 0, 1)
However, if I change the target architecture to arm64 only, then the code works as written in the Apple example from the video:
// Works fine with arm64 target:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, -M_PI/2, 0, 0, 1)
So to answer your question, I believe this is because CGFloat is defined as double on 64-bit architecture, so it's okay to use M_PI (which is also a double)-derived values as a CGFloat parameter. However, when arm7 is the target, CGFloat is a float, not a double, so you'd be losing precision when passing M_PI (still a double)-derived expressions directly as a CGFloat parameter.
Note that Xcode by default will only build for the "active" architecture for Debug builds—I found it was possible to toggle this error by switching between iPhone 4S and iPhone 5S schemes in the standard drop-down in the menu bar of Xcode, as they have different architectures. I'd guess that in the demo video, there's a 64-bit architecture target selected, but in your project you've got a 32-bit architecture selected?
Given that a CGFloat is double-precision on 64-bit architectures, the simplest way of dealing with this specific problem would be to always cast to CGFloat.
But as a demonstration of dealing with this type of issue when you need to do different things on different architectures, Swift does support conditional compilation:
#if arch(x86_64) || arch(arm64)
ringLayer.transform = CATransform3DRotate (ringLayer.transform, -M_PI / 2, 0, 0, 1)
#else
ringLayer.transform = CATransform3DRotate (ringLayer.transform, CGFloat(-M_PI / 2), 0, 0, 1)
#endif
However, that's just an example. You really don't want to be doing this sort of thing all over the place, so I'd certainly stick to simply using CGFloat(<whatever POSIX double value you need>) to get either a 32- or 64-bit value depending on the target architecture.
Apple have added much more help for dealing with different floats in later compiler releases—for example, in early betas you couldn't even take floor() of a single-precision float easily, whereas now (currently Xcode 6.1) there are overrides for floor(), ceil(), etc. for both float and double, so you don't need to be fiddling with conditional compilation.
There seem to be issues currently with automatic conversions between Objective-C numeric types and Swift types. For this I was able to get it to work by marking lineWidth as the Float type. I don't know why they didn't have that issue in the video; I assume that is a different build they were using. Either there is an Objective-C interop setting I'm missing, or it's just a beta issue.
To verify some of the basic issues (even happening in Playground) I used:
var x:NSNumber = 1
var y:Integer = 2
var z:Int = 3
x += 5 //error
y += 6 //error
z = z + y //error
For Swift 1.2 you have to cast the second parameter to CGFloat.
This code works:
ringLayer.transform = CATransform3DRotate(ringLayer.transform, CGFloat(-M_PI/2), 0, 0, 1)

In what circumstances can a compiler change the execution order of programme statements?

If this is not a real question then feel free to close ;)
Not only can the compiler reorder execution (mostly for optimization); most modern processors do so, too. Read more about execution reordering and memory barriers.
The compiler can change the execution order of statements when it sees fit for optimization purposes, and when such changes wouldn't alter the observable behavior of the code.
A very simple example -
int func (int value)
{
    int result = value*2;

    if (value > 10)
    {
        return result;
    }
    else
    {
        return 0;
    }
}
A naive compiler can generate code for this in exactly the sequence shown. First calculate "result" and return it only if the original value is larger than 10 (if it isn't, "result" would be ignored - calculated needlessly).
A sane compiler, though, would see that the calculation of "result" is only needed when "value" is larger than 10, so it may easily move the calculation "value*2" inside the first braces and only do it if "value" is actually larger than 10 (needless to mention, the compiler doesn't really look at the C code when optimizing; it works at lower levels).
This is only a simple example. Much more complicated examples can be created. It is very possible that a C function would end up looking almost nothing like its C representation in compiled form, with aggressive enough optimizations.
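A hedged sketch of that transformation, written in Swift to match the rest of this page (the real optimizer works on intermediate representation, not on source code):
// Before: the multiplication happens unconditionally.
func before(_ value: Int) -> Int {
    let result = value * 2
    return value > 10 ? result : 0
}

// After: the multiplication is sunk into the branch that actually needs it.
func after(_ value: Int) -> Int {
    if value > 10 {
        return value * 2
    }
    return 0
}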
Many compilers use something called "common subexpression elimination". For example, if you had the following code:
for(int i=0; i<100; i++) {
    x += y * i * 15;
}
the compiler would notice that y * 15 is invariant (its value doesn't change). So it would compute y * 15, stick the result in a register and change the loop statement to "x += r0 * i". This is kind of a contrived example, but you often see expressions like this when working with array indexes or any other base + offset type of situation.