This might be a bad question but I am curious.
I was following some online data structures and algorithms courses, and I came across algorithms such as selection sort, insertion sort, bubble sort, merge sort, quick sort and heap sort. They almost never get close to O(n) when the array is reverse-sorted.
I was wondering one thing: why don't we trade space for time?
When I organise something, I pick up an item and put it where it belongs. So I thought that if we have an array of items, we could just put each value at the index equal to that value.
Here is my implementation in Swift 4:
let simpleArray = [5,8,3,2,1,9,4,7,0]
let maxSpace = 20
func spaceSort(array: [Int]) -> [Int] {
    guard array.count > 1 else {
        return array
    }
    var realResult = [Int]()
    var result = Array<Int>(repeating: -1, count: maxSpace)
    for i in 0..<array.count {
        if result[array[i]] != array[i] {
            result[array[i]] = array[i]
        }
    }
    for i in 0..<result.count {
        if result[i] != -1 {
            realResult.append(i)
        }
    }
    return realResult
}
var spaceSorted = [Int]()
var execTime = BenchTimer.measureBlock {
    spaceSorted = spaceSort(array: simpleArray)
}
print("Average execution time for simple array: \(execTime)")
print(spaceSorted)
Results I get:
Does this sorting algorithm exist already?
Is this a bad idea because it only takes unique values and loses the duplicates? Or could there be uses for it?
And why can't I use Int.max for the maxSpace?
Edit:
I get the error below
error: Execution was interrupted.
when I use let maxSpace = Int.max
MyPlayground(6961,0x7000024af000) malloc: Heap corruption detected, free list is damaged at 0x600003b7ebc0
*** Incorrect guard value: 0
MyPlayground(6961,0x7000024af000) malloc: *** set a breakpoint in malloc_error_break to debug
Thanks for the answers
This is an extreme version of radix sort. Quoted from Wikipedia:
radix sort is a non-comparative sorting algorithm. It avoids comparison by creating and distributing elements into buckets according to their radix. For elements with more than one significant digit, this bucketing process is repeated for each digit, while preserving the ordering of the prior step, until all digits have been considered. For this reason, radix sort has also been called bucket sort and digital sort.
In this case you choose your radix as maxSpace, and so you don't have any "elements with more than one significant digit" (from quote above).
Now, if you used a hash set data structure instead of an array, you would not actually need to allocate space for the whole range. You would still keep all the loop iterations (from 0 to maxSpace), checking for each value of the loop variable i whether the hash set contains it, and outputting it if so.
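To make that concrete, here is a minimal sketch of the Set-based variant in Swift (the function name and the maxSpace parameter are mine, not from the question); like the array version, it still drops duplicates:

func spaceSortWithSet(array: [Int], maxSpace: Int) -> [Int] {
    guard array.count > 1 else { return array }
    let seen = Set(array)            // space proportional to the input, not to maxSpace
    var result = [Int]()
    for i in 0..<maxSpace {          // still O(maxSpace) iterations
        if seen.contains(i) {
            result.append(i)
        }
    }
    return result
}

// spaceSortWithSet(array: [5,8,3,2,1,9,4,7,0], maxSpace: 20)  // [0, 1, 2, 3, 4, 5, 7, 8, 9]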
This can only be an efficient algorithm if maxSpace has the same order of magnitude as the number of elements in your input array. Other sorting algorithms can sort in O(n log n) time, so for cases where maxSpace is much greater than n log n, this algorithm is not compelling.
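As for the duplicates question above: the standard fix, which this answer does not cover but which is the usual counting-sort extension of the same bucket idea, is to store a count per value rather than the value itself. A rough sketch, assuming all values lie in 0..<maxSpace:

func countingSort(array: [Int], maxSpace: Int) -> [Int] {
    var counts = Array(repeating: 0, count: maxSpace)
    for value in array {
        counts[value] += 1               // remember how many times each value occurs
    }
    var result = [Int]()
    result.reserveCapacity(array.count)
    for value in 0..<maxSpace {
        result.append(contentsOf: Array(repeating: value, count: counts[value]))
    }
    return result
}

// countingSort(array: [3, 1, 3, 0, 2], maxSpace: 4)  // [0, 1, 2, 3, 3]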
Related
let str = "This is a swift bug"
let data = Data(str.utf8)
print("data size = ", data.endIndex, data.count)
let trimmed = data[2..<data.endIndex]
print("trimmed size = ", trimmed.endIndex, trimmed.count)
The result is
data size = 19 19
trimmed size = 19 17
According to the Apple doc about endIndex:
This is the “one-past-the-end” position, and will always be equal to the count.
Is it a bug? or I'm missing something?
You should open an Apple Feedback for the documentation of Data.endIndex. It's incorrect.
The startIndex of Data is not promised to be zero, and this is an example of when it isn't. Using the Int subscript on Data is unfortunately very dangerous unless you know precisely how the Data was constructed (and specifically that it has a zero index).
Data uniquely mixes two facts that make it tricky to use correctly:
It is its own Slice
Its Index is Int
For some discussion of this, and suggested patterns, see Data.popFirst(), removeFirst() adjust indices. Also see Data ranged subscript strange behavior for another version of this question.
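To make the Int-subscript hazard concrete, here is a small illustration of my own, reusing the data from the question; the safe pattern is to index relative to the slice's own startIndex rather than from zero:

import Foundation

let data = Data("This is a swift bug".utf8)
let trimmed = data[2..<data.endIndex]

// trimmed[0] is out of bounds here: valid indices start at trimmed.startIndex, which is 2.
let first = trimmed[trimmed.startIndex]                              // the byte for "i"
let third = trimmed[trimmed.index(trimmed.startIndex, offsetBy: 2)]  // two positions into the slice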
When you use an expression like array[2..<array.endIndex] you are creating a slice. A slice is a sort of window onto an array (or something similar to an array). Its startIndex is not necessarily 0 and its endIndex is not necessarily one after the last index of the original.
Example:
let arr = Array(1...10)
print(arr.startIndex) // 0
print(arr.endIndex) // 10
let slice = arr[2...4]
print(slice.startIndex) // 2
print(slice.endIndex) // 5
print(slice.count) // 3
You see how this works? The slice has its own logic. Its size (count) is the size of the slice, but its index numbers come from the original array, because the slice is nothing but a pointer into a section of the original array. It has no independent existence; it is just a way of seeing, as it were.
An important consequence is that slice[0] will crash: the first available index of slice is 2, as we have already been told. This is why it is crucial to know whether you're dealing with an original array or a slice.
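A few safe ways to get at the front of such a slice (my sketch, continuing the example above):

let first = slice[slice.startIndex]   // 3 (index relative to the slice's own startIndex)
let maybeFirst = slice.first          // Optional(3), index-free and can never trap
let rebased = Array(slice)            // [3, 4, 5], a fresh Array indexed from 0 again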
However, at least you have reason to know that this issue might exist, because slice has a special type — Array<Int>.SubSequence, meaning an ArraySlice. But the fact that you are encountering this by way of Data makes it more tricky, because trimmed is typed as a Data, not as a DataSlice! It is in fact a Data.SubSequence, but you have no simple way of finding that out! That's because Data.SubSequence is typealiased to Data itself. This is to be regarded as a flaw in the Data implementation.
Nevertheless, it is exactly the same phenomenon. These answers should look strangely familiar:
let str = "This is a swift bug"
let data = Data(str.utf8)
let trimmed = data[2...4]
print(trimmed.startIndex) // 2
print(trimmed.endIndex) // 5
print(trimmed.count) // 3
The best way to solve this is Don't Do That. To take a subrange of a Data as a true Data, use subdata:
let trimmed2 = data.subdata(in: 2..<5)
print(trimmed2.startIndex) // 0, and so on; it's an independent copy
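If it is useful, another common way to re-base a slice (my addition, not part of the answer above) is to construct a fresh Data from it; a newly built Data always starts its indices at 0:

let trimmed3 = Data(data[2..<5])   // copies the sliced bytes into an independent Data
print(trimmed3.startIndex)         // 0
print(trimmed3.count)              // 3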
Say ...
you have about 20 Thing
very often, you do a complex calculation running through a loop of say 1000 items. The end result is a varying number around 20 each time
you don't know how many there will be until you run through the whole loop
you then want to quickly (and of course elegantly!) access the result set in many places
for performance reasons you don't want to just make a new array each time. Note that, unfortunately, the count differs each time, so you can't trivially reuse the same array.
What about ...
var thingsBacking = [Thing](repeating: Thing(), count: 100) // hard limit!
var things: ArraySlice<Thing> = []
func fatCalculation() {
    var pin: Int = 0
    // happily, no need to clean out thingsBacking
    for c in ... some huge loop {
        ... only some of the items (roughly 20, say) become the result
        x = ... one of the result items
        thingsBacking[pin] = Thing(... x, y, z)
        pin += 1
    }
    // and then, magic of slices ...
    things = thingsBacking[0..<pin]
}
(Then, you can do this anywhere... for t in things { .. } )
What I am wondering is: is there a call you can make on an ArraySlice<Thing> to do that in one step, i.e. to "append to" an ArraySlice and avoid having to set the length at the end?
So, something like this ..
things = ... set it to zero length
things.quasiAppend(x)
things.quasiAppend(x2)
things.quasiAppend(x3)
With no further effort, things now has a length of three and indeed the three items are already in the backing array.
I'm particularly interested in performance here (unusually!)
Another approach,
var thingsBacking = [Thing?](repeating: Thing(), count: 100) // hard limit!
and just set the first one after your data to nil as an end-marker. Again, you don't have to waste time zeroing. But the end marker is a nuisance.
Is there a better way to solve this particular type of array-performance problem?
Based on MartinR's comments, it would seem that for the problem
the data points are incoming and
you don't know how many there will be until the last one (always less than a limit) and
you're having to redo the whole thing at high Hz
It would seem to be best to just:
(1) set up the array
var ra = [Thing](repeating: Thing(), count: 100) // hard limit!
(2) at the start of each run,
.removeAll(keepingCapacity: true)
(3) just go ahead and .append each one.
(4) you don't have to especially mark the end or set a length once finished.
It seems it will indeed then use the same array backing. And it of course "increases the length" as it were each time you append - and you can iterate happily at any time.
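Put together, a minimal sketch of that pattern (Thing and the filter in the loop are placeholders of mine; the point is only the removeAll(keepingCapacity:) plus append shape):

struct Thing { var x = 0 }

var ra = [Thing](repeating: Thing(), count: 100)   // (1) allocate once, up to the hard limit

func fatCalculation(input: [Int]) {
    ra.removeAll(keepingCapacity: true)            // (2) drop the old contents, keep the buffer
    for value in input where value % 50 == 0 {     // stand-in for "only some items become the result"
        ra.append(Thing(x: value))                 // (3) just append; the count grows as you go
    }
    // (4) no end marker or length bookkeeping needed
}

fatCalculation(input: Array(0..<1000))
// for t in ra { ... }   // iterate the current results anywhere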
Slices - get lost!
I'm getting data from my database in the reverse order of how I need it to be. In order to correctly order it I have a couple choices: I can insert each new piece of data gotten at index 0 of my array, or just append it then reverse the array at the end. Something like this:
let data = ["data1", "data2", "data3", "data4", "data5", "data6"]
var reversedArray = [String]()
for var item in data {
reversedArray.insert(item, 0)
}
// OR
reversedArray = data.reverse()
Which one of these options would be faster? Would there be any significant difference between the 2 as the number of items increased?
Appending new elements has an amortized complexity of O(1), so building the array by appending is O(n) overall. According to the documentation, reversed() also has constant complexity: it returns a lazy view of the collection (materialising an actual reversed Array from it is O(n)).
Inserting at index 0 has complexity O(n), where n is the current length of the array, and you're doing that once per element, which adds up to O(n²) overall.
So appending and then reversing should be faster. But you won't see a noticeable difference if you're only dealing with a few dozen elements.
Creating the array by repeatedly inserting items at the beginning will be slowest because it will take time proportional to the square of the number of items involved.
(Clarification: I mean building the entire array reversed will take time proportional to n^2, because each insert will take time proportional to the number of items currently in the array, which will therefore be 1 + 2 + 3 + ... + n which is proportional to n squared)
Reversing the array after building it will be much faster because it will take time proportional to the number of items involved.
Just accessing the items in reverse order will be even faster because you avoid reversing the array.
Look up 'big O notation' for more information. Also note that an algorithm with O(n^2) runtime can outperform one with O(n) for small values of n.
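For example (my sketch, not from the answer above), you can walk the existing array backwards without building a second array at all:

// Used directly in a for-in like this, reversed() on an Array yields a lazy
// ReversedCollection view rather than a new array.
for item in data.reversed() {
    print(item)                        // data6, data5, ..., data1
}

// Or, if the positions are needed as well:
for i in stride(from: data.count - 1, through: 0, by: -1) {
    print(data[i])
}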
My test results…
do {
    let start = Date()
    (1..<100).forEach { _ in
        for item in data {
            reversedArray.insert(item, at: 0)
        }
    }
    print("First: \(Date().timeIntervalSince1970 - start.timeIntervalSince1970)")
}

do {
    let start = Date()
    (1..<100).forEach { _ in
        reversedArray = data.reversed()
    }
    print("Second: \(Date().timeIntervalSince1970 - start.timeIntervalSince1970)")
}
First: 0.0124959945678711
Second: 0.00890707969665527
Interestingly, running them 10,000 times…
First: 7.67399883270264
Second: 0.0903480052947998
I am currently taking an online algorithms course in which the teacher doesn't give code to solve the algorithm, but rather rough pseudo code. So before taking to the internet for the answer, I decided to take a stab at it myself.
In this case, the algorithm we were looking at is the merge sort algorithm. After being given the pseudo code, we also dove into analyzing the algorithm's run time against n, the number of items in an array. After a quick analysis, the teacher arrived at 6n log₂(n) + 6n as an approximate run time for the algorithm.
The pseudo code given was for the merge portion of the algorithm only and was given as follows:
C = output [length = n]
A = 1st sorted array [n/2]
B = 2nd sorted array [n/2]
i = 1
j = 1

for k = 1 to n
    if A(i) < B(j)
        C(k) = A(i)
        i++
    else [B(j) < A(i)]
        C(k) = B(j)
        j++
    end
end
He basically did a breakdown of the above, arriving at 4n + 2 (2 for the declarations of i and j, plus 4 operations per loop pass: the for check, the if comparison, the array assignment, and the increment). He then loosened this, I believe for the sake of the class, to an upper bound of 6n.
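If it helps to spell out where the 6n log₂(n) + 6n figure comes from (this is just the arithmetic behind the bound, not something extra from the course): 4n + 2 ≤ 6n for every n ≥ 1, so a merge over n elements costs at most 6n operations. Merge sort has at most log₂(n) + 1 levels of recursion, and across any one level the subproblems together cover all n elements, so each level costs at most 6n in total, giving at most 6n(log₂(n) + 1) = 6n log₂(n) + 6n.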
This all makes sense to me; my question arises from the implementation I wrote, how it affects the algorithm, and what trade-offs or inefficiencies it may add.
Below is my code in swift using a playground:
func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    guard array.count > 1 else { return array }

    let lowerHalfArray = array[0..<array.count / 2]
    let upperHalfArray = array[array.count / 2..<array.count]

    let lowerSortedArray = mergeSort(Array(lowerHalfArray))
    let upperSortedArray = mergeSort(Array(upperHalfArray))

    return merge(lhs: lowerSortedArray, rhs: upperSortedArray)
}

func merge<T: Comparable>(lhs: [T], rhs: [T]) -> [T] {
    guard lhs.count > 0 else { return rhs }
    guard rhs.count > 0 else { return lhs }

    var i = 0
    var j = 0
    var mergedArray = [T]()
    let loopCount = (lhs.count + rhs.count)

    for _ in 0..<loopCount {
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            mergedArray.append(lhs[i])
            i += 1
        } else {
            mergedArray.append(rhs[j])
            j += 1
        }
    }

    return mergedArray
}
let values = [5,4,8,7,6,3,1,2,9]
let sortedValues = mergeSort(values)
My questions for this are as follows:
Do the guard statements at the start of the merge<T:Comparable> function actually make it more inefficient? Considering we are always halving the array, the only time that it will hold true is for the base case and when there is an odd number of items in the array.
This to me seems like it would actually add more processing and give minimal return since the time that it happens is when we have halved the array to the point where one has no items.
Concerning my if statement in the merge: since it is checking more than one condition, does this affect the overall efficiency of the algorithm that I have written? If so, the effect seems to me to vary based on when it breaks out of the if statement (e.g. at the first condition or the second).
Is this something that is considered heavily when analyzing algorithms, and if so how do you account for the variance when it breaks out from the algorithm?
Any other analysis/tips you can give me on what I have written would be greatly appreciated.
You will very soon learn about Big-O and Big-Theta where you don't care about exact runtimes (believe me when I say very soon, like in a lecture or two). Until then, this is what you need to know:
Yes, the guards take some time, but it is the same amount of time in every iteration. So if each iteration takes X amount of time without the guard and you do n function calls, then it takes X*n amount of time in total. Now add in the guards, which take Y amount of time in each call. You now need (X+Y)*n time in total. This is a constant factor, and when n becomes very large the (X+Y) factor becomes negligible compared to the n factor. That is, if you can reduce a function from X*n to (X+Y)*log(n), then it is worthwhile to add the Y amount of work because you do fewer iterations in total.
The same reasoning applies to your second question. Yes, checking "if X or Y" takes more time than checking "if X" but it is a constant factor. The extra time does not vary with the size of n.
In some languages you only check the second condition if the first fails. How do we account for that? The simplest solution is to realize that the upper bound of the number of comparisons will be 3, while the number of iterations can be potentially millions with a large n. But 3 is a constant number, so it adds at most a constant amount of work per iteration. You can go into nitty-gritty details and try to reason about the distribution of how often the first, second and third condition will be true or false, but often you don't really want to go down that road. Pretend that you always do all the comparisons.
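Swift is one of those languages: && and || short-circuit, so the right-hand operand is only evaluated when it can still change the result. A tiny illustration with throwaway values of my own, which is also why the compound condition in the merge above is safe once i has run off the end of lhs:

let lhs = [1, 2]
let i = 2                                  // one past the end
let safe = i < lhs.count && lhs[i] < 10    // false; lhs[2] is never evaluated, so no crash
print(safe)                                // false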
So yes, adding the guards might be bad for your runtime if you do the same number of iterations as before. But sometimes adding extra work in each iteration can decrease the number of iterations needed.
Which loop should I use when I have to be extremely aware of the time it takes to iterate over a large array?
Short answer
Don’t micro-optimize like this – any difference there is could be far outweighed by the speed of the operation you are performing inside the loop. If you truly think this loop is a performance bottleneck, perhaps you would be better served by using something like the Accelerate framework – but only if profiling shows you that effort is truly worth it.
And don’t fight the language. Use for…in unless what you want to achieve cannot be expressed with for…in. These cases are rare. The benefit of for…in is that it’s incredibly hard to get it wrong. That is much more important. Prioritize correctness over speed. Clarity is important. You might even want to skip a for loop entirely and use map or reduce.
Longer Answer
For arrays, if you try them without the fastest compiler optimization, they perform identically, because they essentially do the same thing.
Presumably your for ;; loop looks something like this:
var sum = 0
for var i = 0; i < a.count; ++i {
    sum += a[i]
}
and your for…in loop something like this:
for x in a {
    sum += x
}
Let’s rewrite the for…in to show what is really going on under the covers:
var g = a.generate()
while let x = g.next() {
    sum += x
}
And then let’s rewrite that in terms of what a.generate() returns, and roughly what the let binding is doing:
var g = IndexingGenerator<[Int]>(a)
var wrapped_x = g.next()
while wrapped_x != nil {
    let x = wrapped_x!
    sum += x
    wrapped_x = g.next()
}
Here is what the implementation for IndexingGenerator<[Int]> might look like:
struct IndexingGeneratorArrayOfInt {
    private let _seq: [Int]
    var _idx: Int = 0

    init(_ seq: [Int]) {
        _seq = seq
    }

    mutating func next() -> Int? {
        if _idx != _seq.endIndex {
            return _seq[_idx++]
        } else {
            return nil
        }
    }
}
Wow, that’s a lot of code, surely it performs way slower than the regular for ;; loop!
Nope. Because while that might be what it is logically doing, the compiler has a lot of latitude to optimize. For example, note that IndexingGeneratorArrayOfInt is a struct, not a class. This means it has no overhead over declaring the two member variables directly. It also means the compiler might be able to inline the code in next() – there is no indirection going on here, no overloaded methods and vtables or objc_MsgSend. Just some simple pointer arithmetic and dereferencing. If you strip away all the syntax for the structs and method calls, you’ll find that what the for…in code ends up being is almost exactly the same as what the for ;; loop is doing.
for…in helps avoid performance errors
If, on the other hand, for the code given at the beginning, you switch compiler optimization to the faster setting, for…in appears to blow for ;; away. In some non-scientific tests I ran using XCTestCase.measureBlock, summing a large array of random numbers, it was an order of magnitude faster.
Why? Because of the use of count:
for var i = 0; i < a.count; ++i {
    // ^-- calling a.count every time...
    sum += a[i]
}
Maybe the optimizer could have fixed this for you, but in this case it hasn’t. If you pull the invariant out, it goes back to being the same as for…in in terms of speed:
let count = a.count
for var i = 0; i < count; ++i {
    sum += a[i]
}
“Oh, I would definitely do that every time, so it doesn’t matter”. To which I say, really? Are you sure? Bet you forget sometimes.
But you want the even better news? Doing the same summation with reduce was (in my, again not very scientific, tests) even faster than the for loops:
let sum = a.reduce(0,+)
But it is also so much more expressive and readable (IMO), and allows you to use let to declare your result. Given that this should be your primary goal anyway, the speed is an added bonus. But hopefully the performance will give you an incentive to do it regardless.
This is just for arrays, but what about other collections? Of course this depends on the implementation, but there's good reason to believe it would be faster for other collections such as dictionaries and custom user-defined collections.
My reason for this is that the author of the collection can implement an optimized version of generate, because they know exactly how the collection is being used. Suppose subscript lookup involves some calculation (such as pointer arithmetic in the case of an array - you have to multiply the index by the element size and then add that to the base pointer). In the case of generate, you know the collection is being walked sequentially, and you can therefore optimize for this (for example, in the case of an array, hold a pointer to the next element which you increment each time next is called). The same goes for specialized member versions of reduce or map.
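To make the shape of that idea concrete, here is a toy sketch of my own in current Swift spelling (makeIterator()/next() rather than generate()); it only shows where such a sequential-access optimisation would live, namely inside next():

struct NumberBag: Sequence {
    var storage: [Int]

    struct Iterator: IteratorProtocol {
        private let storage: [Int]
        private var position = 0          // the iterator keeps its own cursor

        init(_ storage: [Int]) { self.storage = storage }

        mutating func next() -> Int? {
            // This is the spot a collection author can optimise for sequential walking,
            // e.g. by advancing a pointer instead of redoing index arithmetic per element.
            guard position < storage.count else { return nil }
            defer { position += 1 }
            return storage[position]
        }
    }

    func makeIterator() -> Iterator {
        return Iterator(storage)
    }
}

// for x in NumberBag(storage: [1, 2, 3]) { print(x) }   // 1 2 3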
This might even be why reduce is performing so well on arrays – who knows (you could stick a breakpoint on the function passed in if you wanted to try and find out). But it’s just another justification for using the language construct you should probably be using regardless.
As Donald Knuth famously stated: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." It seems unlikely that you are in the 3%.
Focus on the bigger problem at hand. After it is working, if it needs a performance boost, then worry about for loops. But I guarantee you, in the end, bigger structural inefficiencies or poor algorithm choice will be the performance problem, not a for loop.
Worrying about for loops is oh so 1960s.
FWIW, a rudimentary playground test shows map() is about 10 times faster than for enumeration:
class SomeClass1 {
    let value: UInt32 = arc4random_uniform(100)
}

class SomeClass2 {
    let value: UInt32
    init(value: UInt32) {
        self.value = value
    }
}

var someClass1s = [SomeClass1]()
for _ in 0..<1000 {
    someClass1s.append(SomeClass1())
}
var someClass2s = [SomeClass2]()

let startTimeInterval1 = CFAbsoluteTimeGetCurrent()
someClass1s.map { someClass2s.append(SomeClass2(value: $0.value)) }
println("Time1: \(CFAbsoluteTimeGetCurrent() - startTimeInterval1)") // "Time1: 0.489435970783234"

var someMoreClass2s = [SomeClass2]()
let startTimeInterval2 = CFAbsoluteTimeGetCurrent()
for item in someClass1s { someMoreClass2s.append(SomeClass2(value: item.value)) }
println("Time2: \(CFAbsoluteTimeGetCurrent() - startTimeInterval2)") // "Time2 : 4.81457495689392"
The for loop (with a counter) just increments a counter, which is very fast. The for-in loop uses an iterator (an object that is asked for the next element on each pass), which is slower. But in the end you want to access the element in both cases, so there will be little difference overall.