Swift: how does my function exceed O(n)?

I am trying to work on a LeetCode problem that asks:
Given an array of integers where 1 ≤ a[i] ≤ n (n = size of array), some elements appear twice and others appear once.
Find all the elements of [1, n] inclusive that do not appear in this array.
My solution to the problem is:
func findDisappearedNumbers(_ nums: [Int]) -> [Int] {
    var returnedArray = [Int]()
    if nums.isEmpty == false {
        for i in 1...nums.count {
            if nums.contains(i) == false {
                returnedArray.append(i)
            }
        }
    } else {
        returnedArray = nums
    }
    return returnedArray
}
However, LeetCode tells me that my solution is "Time limit exceeded".
Shouldn't my solution be O(n)? I am not sure where I made it greater than O(n).

If I haven't missed anything, your algorithm is O(n^2).
First, you iterate over each element of the array, which is O(n), but for each element you call contains, which has to iterate over all the elements again, so you end up with O(n^2).
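To make the hidden cost visible, here is your function with contains written out as the linear scan it performs internally (a conceptual sketch, not the actual standard-library code; the function is renamed only to keep it distinct):
func findDisappearedNumbersExpanded(_ nums: [Int]) -> [Int] {
    guard nums.isEmpty == false else { return [] }
    var returnedArray = [Int]()
    for i in 1...nums.count {               // n iterations
        var found = false                   // this block is what nums.contains(i) does:
        for value in nums {                 // up to n comparisons per candidate i
            if value == i {
                found = true
                break
            }
        }
        if found == false {
            returnedArray.append(i)
        }
    }
    return returnedArray
}
// Roughly n * n comparisons in the worst case, hence O(n^2).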
I'll refrain from telling you the solution since it is for LeetCode.

I'm working on an algorithm and it gives an error in only one case (Runtime Error)

Only case 4 gives a runtime error. I looked at other answers but couldn't find a solution. I don't return the array; I'm just appending elements:
func circularArrayRotation(a: [Int], k: Int, queries: [Int]) -> [Int] {
    var result = [Int]()
    for i in queries {
        if i < k {
            result.append(a[a.count-k+i])
        } else {
            result.append(a[i-k])
        }
    }
    return result
}
Isn't the time complexity of this algorithm O(n)? Am I calculating it wrong?
k can be much larger than the length of the array, so your approach fails because the computed index ends up larger than the array length.
To handle this correctly, set k to k modulo the array length, since rotating the array array_length times leaves the ordering unchanged.
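For example, applied to the code above it would look roughly like this (your original logic, with k reduced before it is used as an index):
func circularArrayRotation(a: [Int], k: Int, queries: [Int]) -> [Int] {
    // Rotating a.count times leaves the array unchanged, so only k % a.count matters.
    let k = k % a.count
    var result = [Int]()
    for i in queries {
        if i < k {
            result.append(a[a.count - k + i])
        } else {
            result.append(a[i - k])
        }
    }
    return result
}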

Is the first(where:) method always O(n), or can it be O(1) when used with a Set or Dictionary?

I'd like to know: if I use a Set instead of an Array, can my call to first(where:) become O(1)?
Apple says that the first(where:) method is O(n). Is that always the case, or does it depend on how we use it?
For example, look at these two ways of writing it:
var numbers: [Int] = [Int]()
numbers = [3, 7, 4, -2, 9, -6, 10, 1]
if let searchResult = numbers.first(where: { value in value == -2 }) {
    print("The number \(searchResult) Exist!")
} else {
    print("The number does not Exist!")
}
and this:
var numbers: Set<Int> = Set<Int>()
numbers = [3, 7, 4, -2, 9, -6, 10, 1]
if let searchResult = numbers.first(where: { value in value == -2 }) {
    print("The number \(searchResult) Exist!")
} else {
    print("The number does not Exist!")
}
Can we say that in the second version the complexity is O(1)?
It's still O(n) even when you use a Set. .first(where:) is defined on a sequence, and it is necessary to check the items in the sequence one at a time to find the first one that makes the predicate true.
Your example is simply checking if the item exists in the Set, but since you are using .first(where:) and a predicate { value in value == -2} Swift will run that predicate for each element in the sequence in turn until it finds one that returns true. Swift doesn't know that you are really just checking to see if the item is in the set.
If you want O(1), then use .contains(-2) on the Set.
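A small example of the difference, reusing the Set from the question (Set.contains is a hash lookup, so it is O(1) on average):
let numbers: Set<Int> = [3, 7, 4, -2, 9, -6, 10, 1]

// O(n): evaluates the predicate element by element until one matches.
let foundViaPredicate = numbers.first(where: { $0 == -2 }) != nil

// O(1) on average: a direct hash lookup, no predicate involved.
let foundViaContains = numbers.contains(-2)

print(foundViaPredicate, foundViaContains)   // true true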
I recommend learning more about Big-O notation. O(1) is a strict subset of O(n); thus every function that is O(1) is also in O(n).
That said, Apple's documentation is actually misleading, as it does not take the complexity of the predicate function into account. The following is clearly O(n^2):
numbers.first(where: { value in numbers.contains(value + 42) })
Both Set and Dictionary conform to the Sequence protocol, which is the one that exposes the first(where:) function. And this function has the following requirement, taken from the documentation:
Complexity: O(n), where n is the length of the sequence.
Now, this is the upper limit of the function's complexity; it might well be that some sequences optimize the search based on their data type and storage details.
Bottom line: you need to read the documentation for a particular type if you want to know more about the performance of some feature. However, if all you have is the protocol's documentation, then you should assume the "worst" - that is, what the protocol documentation states.
This is the implementation of the first(where:) function on Sequence:
/// - Complexity: O(*n*), where *n* is the length of the sequence.
@inlinable
public func first(
    where predicate: (Element) throws -> Bool
) rethrows -> Element? {
    for element in self {
        if try predicate(element) {
            return element
        }
    }
    return nil
}
From the Swift source code on GitHub.
As you can see, it's a simple for loop, and the complexity is O(n) (assuming the predicate itself is O(1) 🤷🏻‍♂️).
The predicate executes at most n times, so the worst case is O(n).
Set does not have a specialized overload of this function (there would be no point: a Set has no duplicates, so when you are looking for a specific value there is nothing beyond the first match). If you know the sequence and are just looking for a value (not an arbitrary predicate), use contains or firstIndex(of:) instead; those two have Set overloads with O(1) complexity.

How to properly call another function within a function in Swift?

I'm learning Swift and I've written two functions; I have tried them on their own and they both work well. However, when I try to call one function within the other, I can't seem to get the output I'm after.
The task at hand is that one function should print prime numbers, while the other calculates and checks whether a number is prime. I am supposed to call the prime check from the prime-printing function.
Below is my code.
This function calculates whether or not x: Int is a prime number. It returns a Bool because I'm supposed to print "true" or "false" in the function below it.
func isPrime(_ x: Int) -> Bool {
    if x % 2 == 0 || x % 3 == 0 {
        if x == 2 || x == 3 {
            return true
        }
        return false
    } else {
        // if the number is less than or equal to 1, we'll say it's not prime
        if x <= 1 {
            return false
        }
    }
    return true
}
This piece handles printing the prime numbers from 1 to n.
func PrintPrimes(upTo n: Int) {
    for x in 1...n {
        var count = 0
        for num in 1..<x {
            isPrime(x)
            count += 1
        }
        if count <= 1 {
            print(isPrime(x))
        }
    }
}
This piece only prints twice and I'm not exactly sure why. I don't know if it's because I'm not calling isPrime correctly or whether I have to change some of the calculations.
All help is appreciated.
EDIT:
Here is the original printPrimes() before I decided to call isPrime within the function. It calculates the prime numbers and prints them, up to n.
func printPrimes(upTo n: Int) {
    for x in 1...n {
        var count = 0
        for num in 1..<x {
            if x % num == 0 {
                count += 1
            }
        }
        if count <= 1 {
            print(x)
        }
    }
}
Your second routine prints only two values because it calls isPrime but never does anything conditional on the returned value; it just increments count regardless. And since you print only when count is <= 1, that happens only for the first two values of x.
But let's say you were trying to print the prime numbers up to a certain number; you could do:
func printPrimes(upTo n: Int) {
    for x in 1...n {
        if isPrime(x) {
            print(x)
        }
    }
}
(As a matter of convention, in Swift, when we say “through n”, we’d iterate 1...n, and if someone said “up to n”, we’d iterate 1..<n. But because your original code snippet uses upTo in conjunction with 1...n, I’ll use that here, but just note that this isn’t very consistent with standard Swift API patterns.)
Unfortunately, isPrime is not correct either, so you'll have to fix that first. For example, consider 25: it is not divisible by 2 or 3, so your function reports it as prime, but it isn't (25 = 5 × 5).
If you look at the original printPrimes that was provided, what it effectively asks is "by how many whole numbers less than x is x evenly divisible?" If x is divisible by only one such number (namely 1), then it's prime. That logic, although not efficient, is correct, and you can go ahead and use it inside your isPrime routine. But the "is divisible by 2 or 3" logic is not correct.
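For illustration, a minimal isPrime built on that divisor-counting idea might look like this (a sketch based on the logic above, not the only or fastest way to do it):
func isPrime(_ x: Int) -> Bool {
    // Numbers below 2 are not prime.
    if x < 2 { return false }
    // Count how many numbers in 1..<x divide x evenly.
    var count = 0
    for num in 1..<x {
        if x % num == 0 {
            count += 1
        }
    }
    // x is prime if its only divisor below itself is 1.
    return count <= 1
}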
You can do it this way: in printPrimes, loop up to the number you want and check whether each number is prime by calling the function with that number. But you have to fix your isPrime function. printPrimes should only do what its name says (print the prime numbers up to n), and all the logic for checking whether a number is prime should live in isPrime.
Also, it's good practice to use camelCase for function names, so you should rename your function to printPrimes instead of PrintPrimes.
func printPrimes(upTo n: Int) {
    for x in 1...n {
        if isPrime(x) {
            print(x)
        }
    }
}

Swift Mini-Max Sum One Test Case Failed - HackerRank

Before anything else, I checked whether this kind of question fits Stack Overflow, and based on one similar (JavaScript) question and on this question: https://meta.stackexchange.com/questions/129598/which-computer-science-programming-stack-exchange-sites-do-i-post-on -- it does.
So here it goes. The challenge is pretty simple, in my opinion:
Given five positive integers, find the minimum and maximum values that
can be calculated by summing exactly four of the five integers. Then
print the respective minimum and maximum values as a single line of
two space-separated long integers.
For example, given arr = [1, 3, 5, 7, 9], our minimum sum is 1 + 3 + 5 + 7 = 16 and our maximum sum is
3 + 5 + 7 + 9 = 24. We would print
16 24
Input Constraint:
1 <= arr[i] <= (10^9)
My solution is pretty simple. This is the best I could do:
func miniMaxSum(arr: [Int]) -> Void {
    let sorted = arr.sorted()
    let reversed = Array(sorted.reversed())
    var minSum = 0
    var maxSum = 0
    _ = sorted
        .filter({ $0 != sorted.last! })
        .map { minSum += $0 }
    _ = reversed
        .filter({ $0 != reversed.last! })
        .map { maxSum += $0 }
    print("\(minSum) \(maxSum)")
}
As you can see, I have two sorted arrays: one ascending and one descending. I remove the last element of each newly sorted array. The way I remove the last element is with filter, which is probably what creates the problem. But from there, I thought I could easily get the minimum and maximum sums of the four elements.
I had 13 out of 14 test cases pass. My question is: what could be a test case on which this solution is likely to fail?
Problem link: https://www.hackerrank.com/challenges/mini-max-sum/problem
Here
_ = sorted
    .filter({ $0 != sorted.last! })
    .map { minSum += $0 }
your expectation is that all but the largest element are added. But that is only correct if the largest element is unique. (And similarly for the maximal sum.)
Choosing an array in which all entries are identical makes the problem more apparent:
miniMaxSum(arr: [1, 1, 1, 1, 1])
// 0 0
A simpler solution would be to compute the sum of all elements once, and then get the results by subtracting the largest and the smallest array element, respectively. I'll leave the implementation to you :)
Here is the O(n) solution:
func miniMaxSum(arr: [Int]) {
    var smallest = Int.max
    var greatest = Int.min
    var sum = 0
    for x in arr {
        sum += x
        smallest = min(smallest, x)
        greatest = max(greatest, x)
    }
    print(sum - greatest, sum - smallest, separator: " ")
}
I know this isn't codereview.stackexchange.com, but I think some clean up is in order, so I'll start with that.
let reversed = Array(sorted.reversed())
The whole point of the ReversedCollection that is returned by Array.reversed() is that it doesn't cause a copy of the elements, and it doesn't take any extra memory or time to produce. It's merely a wrapper around a collection that intercepts indexing operations and changes them to imitate a buffer that's been reversed. Ask for .first? It'll give you the wrapped collection's .last. Ask for .last? It'll return .first, etc.
By initializing a new Array from sorted.reversed(), you're causing an unnecessary copy and defeating the point of ReversedCollection. There are some circumstances where this might be necessary (e.g. you want to pass a pointer to a buffer of reversed elements to a C API), but this isn't one of them.
So we can just change that to let reversed = sorted.reversed()
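As a quick illustration of how the reversed view just remaps accesses onto the original array (a small sketch; the variable names are only for this example):
let sorted = [1, 2, 3, 4, 5]
let reversedView = sorted.reversed()        // ReversedCollection - no copy is made
print(reversedView.first == sorted.last)    // true: .first is remapped to the base's .last
print(Array(reversedView))                  // [5, 4, 3, 2, 1] - elements are only walked here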
-> Void doesn't do anything, omit it.
sorted.filter({ $0 != sorted.last!}) is inefficient.
... but more than that, this is the source of your error. There's a bug in it. If you have an array like [1, 1, 2, 3, 3], your minSum will be 4 (the sum of [1, 1, 2]) when it should be 7 (the sum of [1, 1, 2, 3]). Similarly, the maxSum will be 8 (the sum of [2, 3, 3]) rather than 9 (the sum of [1, 2, 3, 3]).
You're doing a scan of the whole array, performing sorted.count equality checks, only to discard an element with a known position (the last element). Instead, use dropLast(), which returns a collection that wraps the input but whose operations mask the existence of the last element.
_ = sorted
    .dropLast()
    .map { minSum += $0 }
_ = reversed
    .dropLast()
    .map { maxSum += $0 }
_ = someCollection.map(f)
... is an anti-pattern. The distinguishing feature of map compared with forEach is that it produces a resulting array storing the closure's return value for every input element. If you're not going to use the result, use forEach:
sorted.dropLast().forEach { minSum += $0 }
reversed.dropLast().forEach { maxSum += $0 }
However, there's an even better way. Rather than summing by mutating a variable and manually adding to it, instead use reduce to do so. This is ideal because it allows you to remove the mutability of minSum and maxSum.
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = reversed.dropLast().reduce(0, +)
You don't really need the reversed variable at all. You could just achieve the same thing by operating over sorted and using dropFirst() instead of dropLast():
func miniMaxSum(arr: [Int]) {
    let sorted = arr.sorted()
    let minSum = sorted.dropLast().reduce(0, +)
    let maxSum = sorted.dropFirst().reduce(0, +)
    print("\(minSum) \(maxSum)")
}
Your code assumes the input size is always 5. It's good to document that in the code:
func miniMaxSum(arr: [Int]) {
    assert(arr.count == 5)
    let sorted = arr.sorted()
    let minSum = sorted.dropLast().reduce(0, +)
    let maxSum = sorted.dropFirst().reduce(0, +)
    print("\(minSum) \(maxSum)")
}
A generalization of your solution uses a lot of extra memory, which you might not have available to you.
This problem fixes the number of summed numbers (always 4) and the number of input numbers (always 5). It could be generalized to picking summedElementCount numbers out of an arr of any size. In that case, sorting and summing twice is inefficient:
Your solution has a space complexity of O(arr.count)
This is caused by the need to hold the sorted array. If you were allowed to mutate arr in place, this could be reduced to O(1).
Your solution has a time complexity of O((arr.count * log_2(arr.count)) + summedElementCount)
Derivation: sorting first (which takes O(arr.count * log_2(arr.count))), and then summing the first and last summedElementCount elements (each of which is O(summedElementCount)):
O(arr.count * log_2(arr.count)) + (2 * O(summedElementCount))
= O(arr.count * log_2(arr.count)) + O(summedElementCount) // Annihilation of multiplication by a constant factor
= O((arr.count * log_2(arr.count)) + summedElementCount) // Addition law for big O
This problem could instead be solved with a bounded priority queue, like the MinMaxPriorityQueue in Google's Guava library for Java. It's simply a wrapper around a min-max heap that maintains a fixed number of elements; when an element is added beyond that size, the greatest element (according to the provided comparator) is evicted. If you had something like this available to you in Swift, you could do:
func miniMaxSum(arr: [Int], summedElementCount: Int) {
    let minQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: <)
    let maxQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: >)
    for i in arr {
        minQueue.offer(i)
        maxQueue.offer(i)
    }
    let (minSum, maxSum) = (minQueue.reduce(0, +), maxQueue.reduce(0, +))
    print("\(minSum) \(maxSum)")
}
This solution has a space complexity of only O(summedElementCount) extra space, needed to hold the two queues, each of max size summedElementCount.
This is less than the previous solution, because summedElementCount <= arr.count
This solution has a time complexity of O(arr.count * log_2(summedElementCount))
Derivation: the for loop does arr.count iterations, each consisting of a log_2(summedElementCount) operation on both queues.
O(arr.count) * (2 * O(log_2(summedElementCount)))
= O(arr.count) * O(log_2(summedElementCount)) // Annihilation of multiplication by a constant factor
= O(arr.count * log_2(summedElementCount)) // Multiplication law for big O
It's unclear to me whether this is better or worse than O((arr.count * log_2(arr.count)) + summedElementCount). If you know, please let me know in the comments below!
Try this one (accepted):
func miniMaxSum(arr: [Int]) -> Void {
    let sorted = arr.sorted()
    let minSum = sorted[0...3].reduce(0, +)
    let maxSum = sorted[1...4].reduce(0, +)
    print("\(minSum) \(maxSum)")
}
Try this:
func miniMaxSum(arr: [Int]) -> Void {
    var minSum = 0
    var maxSum = 0
    var minChecked = false
    var maxChecked = false
    let numMax = arr.reduce(Int.min, { max($0, $1) })
    print("Max number in array: \(numMax)")
    let numMin = arr.reduce(Int.max, { min($0, $1) })
    print("Min number in array: \(numMin)")
    for item in arr {
        if !minChecked && numMin == item {
            minChecked = true
        } else {
            maxSum = maxSum + item
        }
        if !maxChecked && numMax == item {
            maxChecked = true
        } else {
            minSum = minSum + item
        }
    }
    print("\(minSum) \(maxSum)")
}
Try this:
func miniMaxSum(arr: [Int]) -> Void {
    let min = arr.min()
    let max = arr.max()
    let total = arr.reduce(0, +)
    print(total - max!, total - min!, separator: " ")
}

Why does O(n) take longer than O(n^2)?

I have a LeetCode problem:
Given an M x N matrix, return True if and only if the matrix is Toeplitz.
A matrix is Toeplitz if every diagonal from top-left to bottom-right has the same element.
My solution is (Swift):
func isToeplitzMatrix(_ matrix: [[Int]]) -> Bool {
    if matrix.count == 1 { return true }
    for i in 0 ..< matrix.count - 1 {
        if matrix[i].dropLast() != matrix[i + 1].dropFirst() { return false }
    }
    return true
}
As I understand Big O notation, my algorithm's time complexity is O(n), while the top LeetCode answers' is O(n^2).
Top answer example:
func isToeplitzMatrix(_ matrix: [[Int]]) -> Bool {
    for i in 0..<matrix.count-1 {
        for j in 0..<matrix[0].count-1 {
            if (matrix[i][j] != matrix[i+1][j+1]) {
                return false;
            }
        }
    }
    return true;
}
Still, my algorithm takes 36 ms (according to LeetCode), while the top answer takes 28 ms.
When I commented out if matrix.count == 1 { return true }, it took even longer to run - 56 ms.
Your function's time complexity is also O(n^2), because the work inside the loop is not constant.
Edit:
As Rup and melpomene also mentioned, comparing two rows with != is itself O(n): the dropLast()/dropFirst() slices are compared element by element, which is what takes the overall complexity up to O(n^2).
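In other words, the slice comparison hides an inner loop. Written out, your check is roughly equivalent to this (a conceptual sketch of the same function with the comparison expanded):
func isToeplitzMatrix(_ matrix: [[Int]]) -> Bool {
    if matrix.count == 1 { return true }
    for i in 0 ..< matrix.count - 1 {
        // What `matrix[i].dropLast() != matrix[i + 1].dropFirst()` amounts to:
        // an element-by-element comparison across the row.
        for j in 0 ..< matrix[i].count - 1 {
            if matrix[i][j] != matrix[i + 1][j + 1] {
                return false
            }
        }
    }
    return true
}
// Two nested loops over rows and columns - the same order of work as the top answer.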
Also, Big O notation describes how an algorithm scales with n, the amount of input data; it discards constant factors for brevity. Therefore, an algorithm that is O(1) can be slower than an algorithm that is O(n^3) if the input data is small enough.
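A tiny numeric sketch of that point (the figures are invented purely for illustration):
// Suppose one routine always does 1_000_000 units of work (O(1)),
// while another does n^3 units of work (O(n^3)).
let constantCost = 1_000_000
let n = 50
let cubicCost = n * n * n              // 125_000 for n = 50
print(cubicCost < constantCost)        // true: the "worse" complexity wins on small input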