Big-O of a recursive function - swift

Just wondering, what's the big-O of this function? Let's say the initial values of the parameters are as follows:
numOfCourseIndex = 0
maximumScheduleCount = 1000
schedule = [[Section]]()
result = [[[Section]]]()
orderdGroupOfSections.count = n
func foo(numOfCourseIndex: Int, orderdGroupOfSections: [[[Section]]], maximumScheduleCount: Int) {
    if result.count >= maximumScheduleCount {
        return
    }
    for n in 0..<orderdGroupOfSections[numOfCourseIndex].count {
        for o in 0..<orderdGroupOfSections[numOfCourseIndex][n].count {
            for p in 0..<orderdGroupOfSections[numOfCourseIndex][n][o].sectionTime!.count {
                for q in 0..<orderdGroupOfSections[numOfCourseIndex][n][o].sectionTime![p].day!.count {
                    /// do something
                }
            }
        }
        if numOfCourseIndex == orderdGroupOfSections.count - 1 {
            result.append(schedule)
        } else {
            foo(numOfCourseIndex: numOfCourseIndex + 1, orderdGroupOfSections: orderdGroupOfSections, maximumScheduleCount: maximumScheduleCount)
        }
    }
}
My guess is that it's O(n!) in the worst case, but I'm not sure.

There are two simple things that you can do to help you analyze the complexity of your function. The first is to simplify the input and see how the function behaves. Instead of running the function for a large number of courses or schedules or whatever, look at what it does for just one. How many steps does it take to process one course? How many for two? Three? Four? Make a table with the results, and then look at the difference between one and two courses, two and three, three and four, etc. Can you see a pattern?
The second thing you can do is break the function down into parts and analyze the parts separately. You're probably not going to be able to just see the complexity of the whole thing because it's, well, complex. So simplify it... what's the complexity of the innermost loop? How about the second innermost loop, ignoring the innermost one? What's the complexity of the two together? Rinse and repeat.
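For example, here is a minimal, self-contained sketch of the counting idea (the countSteps function and the fixed three groups per course are purely illustrative assumptions, and the inner loops are collapsed into a single unit of work):
var steps = 0

// Stand-in for the recursion in foo: groupSizes[i] plays the role of
// orderdGroupOfSections[i].count, and each loop pass adds one "step".
func countSteps(courseIndex: Int, groupSizes: [Int], steps: inout Int) {
    for _ in 0..<groupSizes[courseIndex] {
        steps += 1
        if courseIndex < groupSizes.count - 1 {
            countSteps(courseIndex: courseIndex + 1, groupSizes: groupSizes, steps: &steps)
        }
    }
}

for numberOfCourses in 1...5 {
    var steps = 0
    let sizes = Array(repeating: 3, count: numberOfCourses)   // 3 groups per course
    countSteps(courseIndex: 0, groupSizes: sizes, steps: &steps)
    print("\(numberOfCourses) courses -> \(steps) steps")      // 3, 12, 39, 120, 363
}
Tabulate those counts and look at the ratio between consecutive rows; that pattern tells you how the work grows as you add courses.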

Clingo flexible but maximum count of literals and how to prevent negation cycles

I'm programming a Sudoku solver and have come across two problems.
1. I would like to generate a specific number of literals, but keep the total number flexible.
2. How can I prevent a negation cycle, so that I have a clean solution for declaring a digit as not possible?
Here is the general code with the generator, regarding my first question:
row(1..3). %coordinates are declared via position of sub-grid (row, col) and position of
col(1..3). %field in sub-grid (sr, sc)
sr(1..3).
sc(1..3).
num(1..9).
1 { candidate(R,C,A,B,N) : num(N) } 9 :- row(R), col(C), sr(A), sc(B).
Here I want to create all candidates for a field, which at the beginning are all the numbers from 1 to 9, i.e. candidate(1,1,1,1,1..9) for the first field and likewise for every other field. But it would be nice to keep the number of candidates for each field flexible, so that I can declare a solution if, through integrity constraints like
:- candidate(R,C,A,B,N), solution(R1,C,A1,B,N), R != R1, A != A1. %excludes candidates if digit is present in solution in same column
I have excluded all 8 other candidates:
solution(R,C,A,B,N) :- candidate(R,C,A,B,N), { N' : candidate(R,C,A,B,N') } = 1.
Regarding my second question, I basically want to declare a solution if a specific condition is fulfilled. The problem is that once I have a solution, the condition is no longer true, which leads to a negation cycle:
solution(R,C,A,B,N) :- candidate(R,C,A,B,N), { set1(R',C',A',B') } = { posDigit(N') }, { negDigit(N'') } = { set2(R'',C'',A'',B'') } - 1, not taken(R,C,A,B), not takenDigit(N).
taken(R,C,A,B) :- solution(R,C,A,B,N).
I would be glad if somebody could offer input on how to solve these problems.

In swift, is there a way to only check part of an array in a for loop (with a set beginning and ending point)

So let's say we have an array a = [20,50,100,200,500,1000]
Generally speaking we could do for number in a { print(number) } if we wanted to check the entirety of a.
How can you limit which indexes are checked? As in, have a set beginning and end index (b and e respectively), and limit the values of number that are checked to those between b and e?
For example, in a, if b is set to 1 and e is set to 4, then only a[1] through a[4] are checked.
I tried doing for number in a[b...e] { print(number) }. I also saw here someone do this,
for j in 0..<n { x[i] = x[j] }, which works if we want just an ending.
This makes me think I can do something like for number in b..<=e { print(a[number]) }
Is this correct?
I'm practicing data structures in Swift and this is one of the things I've been struggling with. Would really appreciate an explanation!
Using b..<=e is not the correct syntax. You need to use the closed range operator ... instead, i.e.
for number in b...e {
    print(a[number])
}
And since you've already tried
for number in a[b...e] {
    print(number)
}
there is nothing wrong with that syntax either. You can use it either way.
An array has a subscript that accepts a Range: array[range] returns a sub-array.
A range of integers can be defined as either b...e or b..<e (there are other ways as well), but not b..<=e.
A range itself is a sequence (something that supports a for-in loop).
So you can either do
for index in b...e {
    print(a[index])
}
or
for number in a[b...e] {
    print(number)
}
In both cases, it is on you to ensure that b...e are valid indices into the array.
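If you want to be defensive about that, a small illustrative check (one of several possible approaches, using the same array a from the question) is:
let a = [20, 50, 100, 200, 500, 1000]
let b = 1
let e = 4

// Only loop when the requested bounds really are valid indices of a.
if b <= e, a.indices.contains(b), a.indices.contains(e) {
    for number in a[b...e] {
        print(number)   // 50, 100, 200, 500
    }
}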

Merge Sort algorithm efficiency

I am currently taking an online algorithms course in which the teacher doesn't give code to solve the algorithm, but rather rough pseudo code. So before taking to the internet for the answer, I decided to take a stab at it myself.
In this case, the algorithm that we were looking at is the merge sort algorithm. After being given the pseudo code we also dove into analyzing the algorithm for run time against n, the number of items in an array. After a quick analysis, the teacher arrived at 6n log₂(n) + 6n as an approximate run time for the algorithm.
The pseudo code given was for the merge portion of the algorithm only and was given as follows:
C = output [length = n]
A = 1st sorted array [n/2]
B = 2nd sorted array [n/2]
i = 1
j = 1
for k = 1 to n
    if A(i) < B(j)
        C(k) = A(i)
        i++
    else [B(j) < A(i)]
        C(k) = B(j)
        j++
    end
end
He basically did a breakdown of the above as 4n + 2 (2 for the declarations of i and j, and 4 for the operations performed on each pass -- the for check, the if comparison, the array assignment, and the increment). He simplified this, I believe for the sake of the class, to 6n.
This all makes sense to me. My question arises from the implementation that I am writing, how it affects the algorithm, and some of the tradeoffs/inefficiencies it may add.
Below is my code in Swift, using a playground:
func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    guard array.count > 1 else { return array }
    let lowerHalfArray = array[0..<array.count / 2]
    let upperHalfArray = array[array.count / 2..<array.count]
    let lowerSortedArray = mergeSort(Array(lowerHalfArray))
    let upperSortedArray = mergeSort(Array(upperHalfArray))
    return merge(lhs: lowerSortedArray, rhs: upperSortedArray)
}

func merge<T: Comparable>(lhs: [T], rhs: [T]) -> [T] {
    guard lhs.count > 0 else { return rhs }
    guard rhs.count > 0 else { return lhs }
    var i = 0
    var j = 0
    var mergedArray = [T]()
    let loopCount = lhs.count + rhs.count
    for _ in 0..<loopCount {
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            mergedArray.append(lhs[i])
            i += 1
        } else {
            mergedArray.append(rhs[j])
            j += 1
        }
    }
    return mergedArray
}
let values = [5,4,8,7,6,3,1,2,9]
let sortedValues = mergeSort(values)
My questions for this are as follows:
Do the guard statements at the start of the merge<T:Comparable> function actually make it more inefficient? Considering we are always halving the array, the only time they will hold true is for the base case and when there is an odd number of items in the array.
This to me seems like it would add more processing for minimal return, since it only happens once we have halved the array to the point where one side has no items.
Concerning my if statement in the merge: since it checks more than one condition, does this affect the overall efficiency of the algorithm I have written? If so, the effect seems to vary based on where it breaks out of the condition (e.g. at the first check or the second).
Is this something that is considered heavily when analyzing algorithms, and if so, how do you account for the variance in where it breaks out?
Any other analysis/tips you can give me on what I have written would be greatly appreciated.
You will very soon learn about Big-O and Big-Theta where you don't care about exact runtimes (believe me when I say very soon, like in a lecture or two). Until then, this is what you need to know:
Yes, the guards take some time, but it is the same amount of time in every iteration. So if each iteration takes X amount of time without the guard and you do n function calls, then it takes X*n amount of time in total. Now add in the guards, which take Y amount of time in each call. You now need (X+Y)*n time in total. This is a constant factor, and when n becomes very large the (X+Y) factor becomes negligible compared to the n factor. That is, if you can reduce a function from X*n to (X+Y)*log(n), then it is worthwhile to add the Y amount of work, because you do fewer iterations in total.
The same reasoning applies to your second question. Yes, checking "if X or Y" takes more time than checking "if X" but it is a constant factor. The extra time does not vary with the size of n.
In some languages you only check the second condition if the first fails. How do we account for that? The simplest solution is to realize that the upper bound of the number of comparisons will be 3, while the number of iterations can be potentially millions with a large n. But 3 is a constant number, so it adds at most a constant amount of work per iteration. You can go into nitty-gritty details and try to reason about the distribution of how often the first, second and third condition will be true or false, but often you don't really want to go down that road. Pretend that you always do all the comparisons.
So yes, adding the guards might be bad for your runtime if you do the same number of iterations as before. But sometimes adding extra work in each iteration can decrease the number of iterations needed.
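To make the constant-factor point concrete, here is a rough sketch (not a rigorous benchmark) that counts how many times the merge loop's condition is evaluated. The early-return guards are dropped here, since the loop condition already copes with an exhausted side; that is exactly the kind of constant-sized difference being discussed:
// Same merge as above, instrumented with a counter so you can see that the
// number of loop passes is always lhs.count + rhs.count.
func countingMerge<T: Comparable>(lhs: [T], rhs: [T], passes: inout Int) -> [T] {
    var i = 0
    var j = 0
    var mergedArray = [T]()
    for _ in 0..<(lhs.count + rhs.count) {
        passes += 1
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            mergedArray.append(lhs[i])
            i += 1
        } else {
            mergedArray.append(rhs[j])
            j += 1
        }
    }
    return mergedArray
}

var passes = 0
let merged = countingMerge(lhs: [1, 3, 5], rhs: [2, 4, 6, 8], passes: &passes)
print(merged, passes)   // [1, 2, 3, 4, 5, 6, 8] 7
Whether each pass does two comparisons or three only changes the constant in front of n, not the shape of the growth.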

Iterating through all but the last index of an array

I understand that in Swift 3 there have been some changes from typical C Style for-loops. I've been working around it, but it seems like I'm writing more code than before in many cases. Maybe someone can steer me in the right direction because this is what I want:
let names : [String] = ["Jim", "Jenny", "Earl"]
for var i = 0; i < names.count - 1; i += 1 {
    NSLog("%@ loves %@", names[i], names[i+1])
}
Pretty simple stuff. I like to be able to get the index I'm on, and I like the for-loop to not run if names.count == 0. All in one go.
But it seems like my options in Swift 3 aren't allowing me this. I would have to do something like:
let names : [String] = ["Jim", "Jenny", "Earl"]
if names.count > 0 {
    for i in 0..<(names.count - 1) {
        NSLog("%@ loves %@", names[i], names[i+1])
    }
}
The if statement at the start is needed because my program will crash when the array is empty, since the range would then read 0..<(-1).
I also like the idea of being able to just iterate through everything without explicitly setting the index:
// Pseudocode
for name in names.exceptLastOne {
    NSLog("%@ loves %@", name, name.plus(1))
}
I feel like there is some sort of syntax that mixes all my wants, but I haven't come across it yet. Does anyone know of a way? Or at least a way to make my code more compact?
UPDATE: Someone suggested that this question has already been asked, citing an SO post where the solution was to use something to the effect of:
for (index, name) in names.enumerated() {}
The problem with this, compared to Hamish's answer, is that I am only given the current name along with its index; to get the next value I still need to do something like:
names[index + 1]
That's just one extra variable to keep track of. I prefer Hamish's approach, which is:
for i in names.indices.dropLast() {
    print("\(names[i]) loves \(names[i + 1])")
}
Short, simple, and I only have to keep track of names and i, rather than names, index, and name.
One option would be to use dropLast() on the array's indices, allowing you to iterate over a CountableRange of all but the last index of the array.
let names = ["Jim", "Jenny", "Earl"]
for i in names.indices.dropLast() {
    print("\(names[i]) loves \(names[i + 1])")
}
If the array has fewer than two elements, the loop won't be entered.
Another option would be to zip the array with the array where the first element has been dropped, allowing you to iterate through the pairs of elements and their successor elements:
for (nameA, nameB) in zip(names, names.dropFirst()) {
    print("\(nameA) loves \(nameB)")
}
This takes advantage of the fact that zip truncates the longer of the two sequences if they aren't of equal length. Therefore if the array has fewer than two elements, again, the loop won't be entered.
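As a quick sanity check (just a sketch) that both versions really are skipped for short arrays:
let loners = ["Jim"]

// indices.dropLast() is empty for a one-element array, so nothing prints.
for i in loners.indices.dropLast() {
    print("\(loners[i]) loves \(loners[i + 1])")
}

// dropFirst() is empty here, and zip stops at the shorter sequence, so nothing prints.
for (nameA, nameB) in zip(loners, loners.dropFirst()) {
    print("\(nameA) loves \(nameB)")
}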

In swift which loop is faster `for` or `for-in`? Why?

Which loop should I use when I have to be extremely aware of the time it takes to iterate over a large array?
Short answer
Don’t micro-optimize like this – any difference there is could be far outweighed by the speed of the operation you are performing inside the loop. If you truly think this loop is a performance bottleneck, perhaps you would be better served by using something like the accelerate framework – but only if profiling shows you that effort is truly worth it.
And don’t fight the language. Use for…in unless what you want to achieve cannot be expressed with for…in. These cases are rare. The benefit of for…in is that it’s incredibly hard to get it wrong. That is much more important. Prioritize correctness over speed. Clarity is important. You might even want to skip a for loop entirely and use map or reduce.
Longer Answer
For arrays, if you try them without the fastest compiler optimization, they perform identically, because they essentially do the same thing.
Presumably your for ;; loop looks something like this:
var sum = 0
for var i = 0; i < a.count; ++i {
    sum += a[i]
}
and your for…in loop something like this:
for x in a {
    sum += x
}
Let’s rewrite the for…in to show what is really going on under the covers:
var g = a.generate()
while let x = g.next() {
    sum += x
}
And then let’s rewrite that for what a.generate() returns, and something like what the let is doing:
var g = IndexingGenerator<[Int]>(a)
var wrapped_x = g.next()
while wrapped_x != nil {
    let x = wrapped_x!
    sum += x
    wrapped_x = g.next()
}
Here is what the implementation for IndexingGenerator<[Int]> might look like:
struct IndexingGeneratorArrayOfInt {
    private let _seq: [Int]
    var _idx: Int = 0
    init(_ seq: [Int]) {
        _seq = seq
    }
    mutating func next() -> Int? {
        if _idx != _seq.endIndex {
            return _seq[_idx++]
        }
        else {
            return nil
        }
    }
}
Wow, that’s a lot of code, surely it performs way slower than the regular for ;; loop!
Nope. Because while that might be what it is logically doing, the compiler has a lot of latitude to optimize. For example, note that IndexingGeneratorArrayOfInt is a struct, not a class. This means it has no overhead over declaring the two member variables directly. It also means the compiler might be able to inline the code in next – there is no indirection going on here, no overloaded methods, vtables or objc_msgSend. Just some simple pointer arithmetic and dereferencing. If you strip away all the syntax for the structs and method calls, you’ll find that what the for…in code ends up being is almost exactly the same as what the for ;; loop is doing.
for…in helps avoid performance errors
If, on the other hand, for the code given at the beginning, you switch compiler optimization to the faster setting, for…in appears to blow for ;; away. In some non-scientific tests I ran using XCTestCase.measureBlock, summing a large array of random numbers, it was an order of magnitude faster.
Why? Because of the use of count:
for var i = 0; i < a.count; ++i {
    // ^-- calling a.count every time...
    sum += a[i]
}
Maybe the optimizer could have fixed this for you, but in this case it hasn’t. If you pull the invariant out, it goes back to being the same as for…in in terms of speed:
let count = a.count
for var i = 0; i < count; ++i {
    sum += a[i]
}
“Oh, I would definitely do that every time, so it doesn’t matter”. To which I say, really? Are you sure? Bet you forget sometimes.
But you want the even better news? Doing the same summation with reduce was (in my, again not very scientific, tests) even faster than the for loops:
let sum = a.reduce(0,+)
But it is also so much more expressive and readable (IMO), and allows you to use let to declare your result. Given that this should be your primary goal anyway, the speed is an added bonus. But hopefully the performance will give you an incentive to do it regardless.
This is just for arrays, but what about other collections? Of course this depends on the implementation, but there’s a good reason to believe it would be faster for other collections too, like dictionaries or custom user-defined collections.
My reason for this is that the author of the collection can implement an optimized version of generate, because they know exactly how the collection is being used. Suppose subscript lookup involves some calculation (such as pointer arithmetic in the case of an array – you have to multiply the index by the element size and then add that to the base pointer). In the case of generate, you know that what is being done is to sequentially walk the collection, and therefore you can optimize for this (for example, in the case of an array, hold a pointer to the next element which you increment each time next is called). The same goes for specialized member versions of reduce or map.
This might even be why reduce is performing so well on arrays – who knows (you could stick a breakpoint on the function passed in if you wanted to try and find out). But it’s just another justification for using the language construct you should probably be using regardless.
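As a rough sketch of that idea (using the current Sequence/IteratorProtocol names rather than the older SequenceType/GeneratorType ones, and a made-up NumberBag type purely for illustration):
// A toy collection whose iterator just walks an index forward: the kind of
// sequential-access iterator described above.
struct NumberBag: Sequence {
    let storage: [Int]

    func makeIterator() -> Iterator {
        return Iterator(storage: storage)
    }

    struct Iterator: IteratorProtocol {
        let storage: [Int]
        var index = 0

        mutating func next() -> Int? {
            guard index < storage.count else { return nil }
            defer { index += 1 }
            return storage[index]
        }
    }
}

var total = 0
for value in NumberBag(storage: [1, 2, 3, 4]) {   // for-in calls makeIterator()/next()
    total += value
}
print(total)   // 10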
Famously stated: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil" -- Donald Knuth. It seems unlikely that you are in the 3%.
Focus on the bigger problem at hand. After it is working, if it needs a performance boost, then worry about for loops. But I guarantee you, in the end, bigger structural inefficiencies or poor algorithm choice will be the performance problem, not a for loop.
Worrying about for loops is oh so 1960s.
FWIW, a rudimentary playground test shows map() is about 10 times faster than for enumeration:
class SomeClass1 {
    let value: UInt32 = arc4random_uniform(100)
}

class SomeClass2 {
    let value: UInt32
    init(value: UInt32) {
        self.value = value
    }
}

var someClass1s = [SomeClass1]()
for _ in 0..<1000 {
    someClass1s.append(SomeClass1())
}

var someClass2s = [SomeClass2]()
let startTimeInterval1 = CFAbsoluteTimeGetCurrent()
someClass1s.map { someClass2s.append(SomeClass2(value: $0.value)) }
println("Time1: \(CFAbsoluteTimeGetCurrent() - startTimeInterval1)") // "Time1: 0.489435970783234"

var someMoreClass2s = [SomeClass2]()
let startTimeInterval2 = CFAbsoluteTimeGetCurrent()
for item in someClass1s { someMoreClass2s.append(SomeClass2(value: item.value)) }
println("Time2: \(CFAbsoluteTimeGetCurrent() - startTimeInterval2)") // "Time2 : 4.81457495689392"
The for loop (with a counter) just increments a counter, which is very fast. The for-in loop uses an iterator (an object that is called to hand back the next element), which is slower. But in both cases you ultimately have to access the element, so in the end it makes little difference.