This is a simple function that checks whether all the characters in a string are unique. I reason that the complexity should be N * N -> N^2. Is this correct, even though the second N will always be smaller than the first?
func isUnique(_ str: String) -> Bool {
    let charArr = Array(str) // `str.characters` was deprecated in Swift 4; Array(str) yields [Character]
    for (i1, char) in charArr.enumerated() {
        // Nothing left to compare against after the last character.
        guard i1 != charArr.count - 1 else {
            break
        }
        // Compare the current character against every later one.
        for other in charArr[(i1 + 1)..<charArr.count] {
            if char == other {
                return false
            }
        }
    }
    return true
}
Yes. There are a lot of myths around this problem, and when you read up on Big-O analysis you get many varying answers. The most popular question is:
"If two nested for loops contain a break statement, is my complexity still n*n, i.e. O(n^2)?"
I think the simple answer is:
Big-O notation isn't about finding exact values given your actual parameters; it is about determining asymptotic runtime. Here the inner loop runs n-1, n-2, ..., 1 times, and that sum is n(n-1)/2, which still grows quadratically, so yes: the complexity is O(n^2).
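A quick way to convince yourself (a hypothetical counting sketch, not from the original post): tabulate how many comparisons the function performs on a string of n distinct characters.

// Closed form for the inner-loop comparisons on n distinct characters:
// (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is on the order of n^2.
func comparisonCount(_ n: Int) -> Int {
    return n * (n - 1) / 2
}

for n in [10, 100, 1_000] {
    print("n = \(n): \(comparisonCount(n)) comparisons")
}
// n = 10: 45
// n = 100: 4950
// n = 1000: 499500 -- multiplying n by 10 multiplies the work by roughly 100

The second N being smaller only changes the constant factor (the 1/2), never the quadratic growth.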
Only case 4 gives a runtime error, and I looked at other answers but couldn't find a solution. I don't return the input array itself; I'm just appending elements to a new one:
func circularArrayRotation(a: [Int], k: Int, queries: [Int]) -> [Int] {
    var result = [Int]()
    for i in queries {
        if i < k {
            result.append(a[a.count - k + i])
        } else {
            result.append(a[i - k])
        }
    }
    return result
}
Isn't the time complexity of this algorithm O(n)? Am I calculating it wrong?
k can be much larger than the length of the array, so your approach fails: the computed index ends up outside the array's bounds.
To handle this correctly, first reduce k to k modulo the array length, since rotating the array by a multiple of its length leaves the ordering unchanged.
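A minimal sketch of that fix (the same function with k reduced up front, assuming a non-empty array as the problem guarantees):

func circularArrayRotation(a: [Int], k: Int, queries: [Int]) -> [Int] {
    // Rotating right by a multiple of a.count is a no-op, so only the remainder matters.
    let shift = k % a.count
    var result = [Int]()
    for i in queries {
        if i < shift {
            result.append(a[a.count - shift + i]) // wrap around the front of the array
        } else {
            result.append(a[i - shift])
        }
    }
    return result
}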
Just wondering, what's the Big-O of this function? Let's say the initial values of the parameters are as follows:
numOfCourseIndex = 0
maximumScheduleCount = 1000
schedule = [[Section]]()
result = [[[Section]]]()
orderdGroupOfSections has n elements
func foo(numOfCourseIndex: Int, orderdGroupOfSections: [[[Section]]], maximumScheduleCount: Int) {
    if result.count >= maximumScheduleCount {
        return
    }
    for n in 0..<orderdGroupOfSections[numOfCourseIndex].count {
        for o in 0..<orderdGroupOfSections[numOfCourseIndex][n].count {
            for p in 0..<orderdGroupOfSections[numOfCourseIndex][n][o].sectionTime!.count {
                for q in 0..<orderdGroupOfSections[numOfCourseIndex][n][o].sectionTime![p].day!.count {
                    // do something
                }
            }
        }
        if numOfCourseIndex == orderdGroupOfSections.count - 1 {
            result.append(schedule)
        } else {
            foo(numOfCourseIndex: numOfCourseIndex + 1, orderdGroupOfSections: orderdGroupOfSections, maximumScheduleCount: maximumScheduleCount)
        }
    }
}
I'm guessing it's O(n!) in the worst case, but I'm not sure.
There are two simple things that you can do to help you analyze the complexity of your function. The first is to simplify the input and see how the function behaves. Instead of running the function for a large number of courses or schedules or whatever, look at what it does for just one. How many steps does it take to process one course? How many for two? Three? Four? Make a table with the results, and then look at the difference between one and two courses, two and three, three and four, etc. Can you see a pattern?
The second thing you can do is break the function down into parts and analyze the parts separately. You're probably not going to be able to just see the complexity of the whole thing because it's, well, complex. So simplify it... what's the complexity of the innermost loop? How about the second innermost loop, ignoring the innermost one? What's the complexity of the two together? Rinse and repeat.
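As a concrete illustration of both tips (a hypothetical stand-in, since the real foo depends on the full Section model): instrument a stripped-down recursive function with a counter and tabulate the totals as the number of courses grows.

var steps = 0

// Stand-in for foo: one recursion level per course, `width` choices at each level.
func countSteps(level: Int, depth: Int, width: Int) {
    guard level < depth else { return }
    for _ in 0..<width {
        steps += 1 // stands in for the "do something" work
        countSteps(level: level + 1, depth: depth, width: width)
    }
}

for depth in 1...4 {
    steps = 0
    countSteps(level: 0, depth: depth, width: 3)
    print("\(depth) course(s): \(steps) steps")
}
// 1 course: 3, 2 courses: 12, 3 courses: 39, 4 courses: 120 -- each extra course
// multiplies the work by roughly `width`, which points at exponential growth rather than n!.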
I have a LeetCode problem:
Given an M x N matrix, return True if and only if the matrix is Toeplitz.
A matrix is Toeplitz if every diagonal from top-left to bottom-right has the same element.
My solution is (Swift):
func isToeplitzMatrix(_ matrix: [[Int]]) -> Bool {
    if matrix.count == 1 { return true }
    for i in 0 ..< matrix.count - 1 {
        if matrix[i].dropLast() != matrix[i + 1].dropFirst() { return false }
    }
    return true
}
As I understand Big-O notation, my algorithm's time complexity is O(n), while the LeetCode top answers' is O(n^2).
An example of a top answer:
func isToeplitzMatrix(_ matrix: [[Int]]) -> Bool {
    for i in 0..<matrix.count - 1 {
        for j in 0..<matrix[0].count - 1 {
            if matrix[i][j] != matrix[i + 1][j + 1] {
                return false
            }
        }
    }
    return true
}
Still, my algorithm takes 36 ms (according to LeetCode), while the top answer takes 28 ms.
When I commented out if matrix.count == 1 { return true }, it took even more time to run: 56 ms.
Your function's time complexity is also O(n^2): dropLast and dropFirst themselves are cheap, but comparing the two resulting slices with != walks them element by element, which is O(n) per pair of rows, repeated for each of the n rows.
Edit:
As Rup and melpomene also pointed out, it is the array comparison that takes the complexity up to O(n^2).
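To make the hidden loop explicit, here is roughly what that slice comparison does per pair of rows (a sketch, assuming non-empty rows of equal length):

func rowsShiftEqual(_ upper: [Int], _ lower: [Int]) -> Bool {
    // Equivalent to upper.dropLast() == lower.dropFirst():
    // m - 1 element comparisons for rows of length m.
    for j in 0..<(upper.count - 1) {
        if upper[j] != lower[j + 1] { return false }
    }
    return true
}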
Also, Big-O notation describes how the algorithm scales in response to n, the size of the input; it drops all constant factors for brevity. An algorithm that is O(1) can therefore be slower than one that is O(n^3) if the input is small enough: a flat cost of 10,000 operations loses to n^3 operations for every n up to 21, since 21^3 = 9,261.
I am trying to work on a LeetCode problem that asks:
Given an array of integers where 1 ≤ a[i] ≤ n (n = size of array), some elements appear twice and others appear once.
Find all the elements of [1, n] inclusive that do not appear in this array.
My solution to the problem is:
func findDisappearedNumbers(_ nums: [Int]) -> [Int] {
    var returnedArray = [Int]()
    if nums.isEmpty == false {
        for i in 1...nums.count {
            if nums.contains(i) == false {
                returnedArray.append(i)
            }
        }
    } else {
        returnedArray = nums
    }
    return returnedArray
}
However, LeetCode tells me that my solution is "Time limit exceeded".
Shouldn't my solution be O(n)? I am not sure where I made it greater than O(n).
If I haven't missed anything, your algorithm is O(n^2).
You iterate over each element of the array, which is O(n); but for each element you call contains, which has to iterate over all the elements again, and you end up with O(n^2).
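You can see the quadratic behavior empirically by timing the loop on inputs with no missing numbers (a rough, hypothetical harness; absolute times will vary by machine):

import Foundation

for n in [1_000, 2_000, 4_000] {
    let nums = Array(1...n)
    let start = Date()
    var missing = 0
    // contains(i) scans the array, so this loop performs on the order of n * n / 2 comparisons.
    for i in 1...n where !nums.contains(i) {
        missing += 1
    }
    let elapsed = Date().timeIntervalSince(start)
    print("n = \(n): \(missing) missing, \(String(format: "%.3f", elapsed)) s")
}
// Doubling n roughly quadruples the time -- the signature of O(n^2).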
I'll refrain from giving you the solution, since it is a LeetCode problem.
I am currently taking an online algorithms course in which the teacher doesn't give code to solve the problems, just rough pseudocode. So before taking to the internet for the answer, I decided to take a stab at it myself.
In this case, the algorithm we were looking at is merge sort. After being given the pseudocode, we also dove into analyzing the algorithm's running time as a function of the number of items n in an array. After a quick analysis, the teacher arrived at approximately 6n log2(n) + 6n as the running time of the algorithm.
The pseudocode given covered only the merge portion of the algorithm:
C = output [length = n]
A = 1st sorted array [n/2]
B = 2nd sorted array [n/2]
i = 1
j = 1
for k = 1 to n
    if A(i) < B(j)
        C(k) = A(i)
        i++
    else [B(j) < A(i)]
        C(k) = B(j)
        j++
    end
end
He basically did a breakdown of the above, arriving at 4n + 2 (2 for the declarations of i and j, and 4 for the operations performed in each iteration: the for check, the if comparison, the array assignment, and the increment). He simplified this, I believe for the sake of the class, to 6n.
This all makes sense to me. My question arises from the implementation I wrote, how it affects the algorithm, and some of the tradeoffs/inefficiencies it may add.
Below is my code in Swift, using a playground:
func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    guard array.count > 1 else { return array }
    let lowerHalfArray = array[0..<array.count / 2]
    let upperHalfArray = array[array.count / 2..<array.count]
    // The parameter is unlabeled, so the recursive calls must not use an `array:` label.
    let lowerSortedArray = mergeSort(Array(lowerHalfArray))
    let upperSortedArray = mergeSort(Array(upperHalfArray))
    return merge(lhs: lowerSortedArray, rhs: upperSortedArray)
}
func merge<T: Comparable>(lhs: [T], rhs: [T]) -> [T] {
    guard lhs.count > 0 else { return rhs }
    guard rhs.count > 0 else { return lhs }
    var i = 0
    var j = 0
    var mergedArray = [T]()
    let loopCount = lhs.count + rhs.count
    for _ in 0..<loopCount {
        // Take from lhs when rhs is exhausted, or while lhs's next element is the smaller one.
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            mergedArray.append(lhs[i])
            i += 1
        } else {
            mergedArray.append(rhs[j])
            j += 1
        }
    }
    return mergedArray
}
let values = [5,4,8,7,6,3,1,2,9]
let sortedValues = mergeSort(values)
My questions for this are as follows:
Do the guard statements at the start of the merge<T:Comparable> function actually make it less efficient? Considering we always halve the array, the only time they fire is in the base case or when there is an odd number of items in the array.
To me this seems to add processing for minimal return, since it only happens once we have halved the array down to the point where one side has no items.
Concerning the if statement in merge: since it checks more than one condition, does this affect the overall efficiency of the algorithm I have written? If so, the effect seems to vary based on where it breaks out of the condition (e.g. at the first check or the second).
Is this something that is considered heavily when analyzing algorithms, and if so, how do you account for the variance in where it breaks out?
Any other analysis/tips you can give me on what I have written would be greatly appreciated.
You will very soon learn about Big-O and Big-Theta where you don't care about exact runtimes (believe me when I say very soon, like in a lecture or two). Until then, this is what you need to know:
Yes, the guards take some time, but it is the same amount of time in every iteration. So if each iteration takes X amount of time without the guard and you do n function calls, then it takes X*n time in total. Now add in the guards, which take Y amount of time per call: you now need (X+Y)*n time in total. This is a constant factor, and when n becomes very large the (X+Y) term becomes negligible compared to the n factor. Put differently, if adding Y amount of work per call lets you reduce a function from X*n to (X+Y)*(log n), it is worthwhile, because you end up doing far fewer iterations in total.
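For concreteness (made-up cost numbers, purely to illustrate the trade-off):

import Foundation

// Hypothetical per-call costs: X = 5 units without the guard, Y = 3 extra for it.
let n = 1_000_000.0
let linearWithoutGuard = 5 * n        // X * n = 5,000,000 units
let logWithGuard = (5 + 3) * log2(n)  // (X + Y) * log n, roughly 160 units
print(linearWithoutGuard, logWithGuard)
// Paying Y on every call is a bargain if it cuts the call count from n to log n.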
The same reasoning applies to your second question. Yes, checking "if X or Y" takes more time than checking "if X", but it is a constant factor; the extra time does not vary with the size of n.
In some languages (Swift included) the second condition is only evaluated when the first one doesn't already settle the answer. How do we account for that? The simplest approach is to note that the number of comparisons per iteration is at most 3, while the number of iterations can be in the millions for a large n. Since 3 is a constant, it adds at most a constant amount of work per iteration. You can go into the nitty-gritty details and try to reason about the distribution of how often the first, second, and third conditions are true or false, but you usually don't want to go down that road: just pretend you always do all the comparisons.
So yes, adding the guards might be bad for your runtime if you do the same number of iterations as before. But sometimes adding extra work in each iteration can decrease the number of iterations needed.