Before anything else, I checked whether this kind of question fits Stack Overflow, and based on one similar question (JavaScript) and on this Meta question: https://meta.stackexchange.com/questions/129598/which-computer-science-programming-stack-exchange-sites-do-i-post-on -- it does.
So here it goes. The challenge is pretty simple, in my opinion:
Given five positive integers, find the minimum and maximum values that
can be calculated by summing exactly four of the five integers. Then
print the respective minimum and maximum values as a single line of
two space-separated long integers.
For example, given arr = [1, 3, 5, 7, 9], our minimum sum is 1 + 3 + 5 + 7 = 16 and our maximum sum is 3 + 5 + 7 + 9 = 24. We would print
16 24
Input Constraint:
1 <= arr[i] <= (10^9)
My solution is pretty simple. This is what I could do best:
func miniMaxSum(arr: [Int]) -> Void {
    let sorted = arr.sorted()
    let reversed = Array(sorted.reversed())
    var minSum = 0
    var maxSum = 0
    _ = sorted
        .filter({ $0 != sorted.last! })
        .map { minSum += $0 }
    _ = reversed
        .filter({ $0 != reversed.last! })
        .map { maxSum += $0 }
    print("\(minSum) \(maxSum)")
}
As you can see, I have two sorted arrays. One is ascending, the other descending, and I remove the last element of each newly sorted array. The way I remove the last element is with filter, which probably creates the problem. But from there, I thought I could easily get the minimum and maximum sum of the 4 elements.
I passed 13 of 14 test cases. My question is: what kind of test case would this solution likely fail on?
Problem link: https://www.hackerrank.com/challenges/mini-max-sum/problem
Here
_ = sorted
    .filter({ $0 != sorted.last! })
    .map { minSum += $0 }
your expectation is that all but the largest element are added. But that is only correct if the largest element is unique. (And similarly for the maximal sum.)
Choosing an array with all identical elements makes the problem more apparent:
miniMaxSum(arr: [1, 1, 1, 1, 1])
// 0 0
A simpler solution would be to compute the sum of all elements once, and then get the result by subtracting the largest respectively smallest array element. I'll leave the implementation to you :)
Here is the O(n) solution:
func miniMaxSum(arr: [Int]) {
    var smallest = Int.max
    var greatest = Int.min
    var sum = 0
    for x in arr {
        sum += x
        smallest = min(smallest, x)
        greatest = max(greatest, x)
    }
    print(sum - greatest, sum - smallest, separator: " ")
}
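For example, using the sample values assumed from the expected output in the question, this prints the same pair:

miniMaxSum(arr: [1, 3, 5, 7, 9])   // prints "16 24"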
I know this isn't codereview.stackexchange.com, but I think some clean up is in order, so I'll start with that.
let reversed = Array(sorted.reversed())
The whole point of the ReversedCollection that is returned by Array.reversed() is that it doesn't cause a copy of the elements, and it doesn't take up any extra memory or time to produce. It's merely a wrapper around a collection, and intercepts indexing operations and changes them to imitate a buffer that's been reversed. Asked for .first? It'll give you .last of its wrapped collection. Asked for .last? It'll return .first, etc.
By initializing a new Array from sorted.reversed(), you're causing an unnecessary copy, and defeating the point of ReversedCollection. There are some circumstances where this might be necessary (e.g. you want to pass a pointer to a buffer of reversed elements to a C API), but this isn't one of them.
So we can just change that to let reversed = sorted.reversed()
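For example (a small illustration, not part of the original code), the reversed view simply forwards its accesses to the sorted array:

let sorted = [1, 2, 3, 4, 5]
let reversed = sorted.reversed()   // ReversedCollection<[Int]>: a view, no copy is made
print(reversed.first!)             // 5, forwarded to sorted.last
print(reversed.last!)              // 1, forwarded to sorted.first
print(Array(reversed))             // [5, 4, 3, 2, 1], elements are only walked in reverse when actually needed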
-> Void doesn't do anything, omit it.
sorted.filter({ $0 != sorted.last!}) is inefficient.
... but more than that, this is the source of your error. There's a bug in this. If you have an array like [1, 1, 2, 3, 3], your minSum will be 4 (the sum of [1, 1, 2]), when it should be 7 (the sum of [1, 1, 2, 3]). Similarly, the maxSum will be 8 (the sum of [2, 3, 3]) rather than 9 (the sum of [1, 2, 3, 3]).
You're doing a scan of the whole array, doing sorted.count equality checks, only to discard an element with a known position (the last element). Instead, use dropLast(), which returns a collection that wraps the input, but whose operations mask the existence of the last element.
_ = sorted
    .dropLast()
    .map { minSum += $0 }
_ = reversed
    .dropLast()
    .map { maxSum += $0 }
_ = someCollection.map(f)
... is an anti-pattern. The distinguishing feature between map and forEach is that map produces a resulting array that stores the return values of the closure as evaluated for every input element. If you're not going to use the result, use forEach:
sorted.dropLast().forEach { minSum += $0 }
reversed.dropLast().forEach { maxSum += $0 }
However, there's an even better way. Rather than summing by mutating a variable and manually adding to it, instead use reduce to do so. This is ideal because it allows you to remove the mutability of minSum and maxSum.
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = reversed.dropLast().reduce(0, +)
You don't really need the reversed variable at all. You could just achieve the same thing by operating over sorted and using dropFirst() instead of dropLast():
func miniMaxSum(arr: [Int]) {
    let sorted = arr.sorted()
    let minSum = sorted.dropLast().reduce(0, +)
    let maxSum = sorted.dropFirst().reduce(0, +)
    print("\(minSum) \(maxSum)")
}
Your code assumes the input size is always 5. It's good to document that in the code:
func miniMaxSum(arr: [Int]) {
    assert(arr.count == 5)
    let sorted = arr.sorted()
    let minSum = sorted.dropLast().reduce(0, +)
    let maxSum = sorted.dropFirst().reduce(0, +)
    print("\(minSum) \(maxSum)")
}
A generalization of your solution uses a lot of extra memory, which you might not have available to you.
This problem fixes the number of summed numbers (always 4) and the number of input numbers (always 5). This problem could be generalized to picking summedElementCount numbers out of any sized arr. In this case, sorting and summing twice is inefficient:
Your solution has a space complexity of O(arr.count)
This is caused by the need to hold the sorted array. If you were allowed to mutate arr in place, this could reduce to O(1) (a sketch follows after the time-complexity derivation below).
Your solution has a time complexity of O((arr.count * log_2(arr.count)) + summedElementCount)
Derivation: Sorting first (which takes O(arr.count * log_2(arr.count))), and then summing the first and last summedElementCount (which is each O(summedElementCount))
O(arr.count * log_2(arr.count)) + (2 * O(summedElementCount))
= O(arr.count * log_2(arr.count)) + O(summedElementCount) // Annihilation of multiplication by a constant factor
= O((arr.count * log_2(arr.count)) + summedElementCount) // Addition law for big O
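As a side note on the space point above, a minimal sketch of the in-place idea (assuming the caller allows arr to be mutated and accepts a summedElementCount parameter, neither of which the HackerRank stub provides) might look like:

func miniMaxSum(arr: inout [Int], summedElementCount: Int) {
    arr.sort()                                                 // in-place sort: no second copy of the array is kept
    let minSum = arr.prefix(summedElementCount).reduce(0, +)   // sum of the smallest elements
    let maxSum = arr.suffix(summedElementCount).reduce(0, +)   // sum of the largest elements
    print("\(minSum) \(maxSum)")
}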
This problem could instead be solved with a bounded priority queue, like the MinMaxPriorityQueue in Google's Guava library for Java. It's simply a wrapper around a min-max heap that maintains a fixed number of elements; when an element is added to a full queue, the greatest element (according to the provided comparator) is evicted. If you had something like this available to you in Swift, you could do:
func miniMaxSum(arr: [Int], summedElementCount: Int) {
    let minQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: <)
    let maxQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: >)
    for i in arr {
        minQueue.offer(i)
        maxQueue.offer(i)
    }
    let (minSum, maxSum) = (minQueue.reduce(0, +), maxQueue.reduce(0, +))
    print("\(minSum) \(maxSum)")
}
This solution has a space complexity of only O(summedElementCount) extra space, needed to hold the two queues, each of max size summedElementCount.
This is less than the previous solution, because summedElementCount <= arr.count
This solution has a time complexity of O(arr.count * log_2(summedElementCount))
Derivation: The for loop does arr.count iterations, each consisting of a log_2(summedElementCount) operation on both queues.
O(arr.count) * (2 * O(log_2(summedElementCount)))
= O(arr.count) * O(log_2(summedElementCount)) // Annihilation of multiplication by a constant factor
= O(arr.count * log_2(summedElementCount)) // Multiplication law for big O
It's unclear to me whether this is better or worse than O((arr.count * log_2(arr.count)) + summedElementCount). If you know, please let me know in the comments below!
Try this one, it was accepted:
func miniMaxSum(arr: [Int]) -> Void {
    let sorted = arr.sorted()
    let minSum = sorted[0...3].reduce(0, +)
    let maxSum = sorted[1...4].reduce(0, +)
    print("\(minSum) \(maxSum)")
}
Try this-
func miniMaxSum(arr: [Int]) -> Void {
    var minSum = 0
    var maxSum = 0
    var minChecked = false
    var maxChecked = false
    let numMax = arr.reduce(Int.min, { max($0, $1) })
    print("Max number in array: \(numMax)")
    let numMin = arr.reduce(Int.max, { min($0, $1) })
    print("Min number in array: \(numMin)")
    for item in arr {
        if !minChecked && numMin == item {
            minChecked = true
        } else {
            maxSum = maxSum + item
        }
        if !maxChecked && numMax == item {
            maxChecked = true
        } else {
            minSum = minSum + item
        }
    }
    print("\(minSum) \(maxSum)")
}
Try this:
func miniMaxSum(arr: [Int]) -> Void {
    let min = arr.min()
    let max = arr.max()
    let total = arr.reduce(0, +)
    print(total - max!, total - min!, separator: " ")
}
This is a demo of the iOS Charts library (LineChart), and I want to feed in my own data instead of the arc4random data.
My data is in an array, so I have to work with indices, but I can't understand the (0..<count).map { (i) -> ChartDataEntry in ... } code.
func setChartValues(_ count: Int = 24) {
    let values = (0..<count).map { (i) -> ChartDataEntry in
        let val = Double(arc4random_uniform(UInt32(count)) + 3)
        return ChartDataEntry(x: Double(i), y: val)
    }
    let set1 = LineChartDataSet(entries: values, label: "DataSet 1")
    let data = LineChartData(dataSet: set1)
    self.lineChartView.data = data
}
It seems you are new to iOS and Swift. What you are looking for is an understanding of how closures work in Swift, plus the map function, which is called a higher-order function.
From the Apple documentation (https://developer.apple.com/documentation/swift/array/3017522-map):
Returns an array containing the results of mapping the given closure over the sequence’s elements.
In other words it maps your array into another array, according to the trailing closure you passed as a parameter.
In your specific case, here is how to read it:
(0..<count): creates a range of count consecutive integers.
Example: if count = 4 then (0..<count) yields 0, 1, 2, 3.
As said previously, the map function will transform each of these elements into another one (therefore keeping the length of the sequence).
In your case, val = Double(arc4random_uniform(UInt32(count)) + 3) will be a random number derived from the count value, and the closure creates a new ChartDataEntry with this random value as y and the index i as x.
To sum it up, the whole code is just saying "create a count-length array of random ChartDataEntry values", I guess as a mockup; a runnable sketch of the same shape follows below.
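If it helps, here is a minimal runnable analogue of that closure that swaps ChartDataEntry for a plain tuple, so it can be tried in a playground without the Charts library (the names points and count are just for illustration):

let count = 4
// Map each index 0, 1, 2, 3 to an (x, y) point, mirroring the structure of the original closure.
let points = (0..<count).map { (i) -> (x: Double, y: Double) in
    let val = Double(Int.random(in: 3..<(count + 3)))   // stand-in for arc4random_uniform(UInt32(count)) + 3
    return (x: Double(i), y: val)
}
print(points)   // e.g. [(x: 0.0, y: 5.0), (x: 1.0, y: 3.0), (x: 2.0, y: 6.0), (x: 3.0, y: 4.0)]

To feed your own values instead, the same map shape applies; assuming your data is a [Double] called myData, something like myData.enumerated().map { ChartDataEntry(x: Double($0.offset), y: $0.element) } should work.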
I suggest you read about closures here:
https://medium.com/the-andela-way/closures-in-swift-8aef8abc9474
and higher-order functions (such as map(_:)) here:
https://medium.com/#abhimuralidharan/higher-order-functions-in-swift-filter-map-reduce-flatmap-1837646a63e8
let values = (0..<count).map { (i) -> ChartDataEntry in
    let val = Double(arc4random_uniform(UInt32(count)) + 3)
    return ChartDataEntry(x: Double(i), y: val)
}
The value that is mapped and returned is, you could say, a random value (arc4random).
The index i just sets the X axis of the chart, like 0, 1, 2, etc.,
and your graph's Y value is set according to the function's return value (arc4random).
How does subscripting a lazy filter work?
let ary = [0,1,2,3]
let empty = ary.lazy.filter { $0 > 4 }.map { $0 + 1 }
print(Array(empty)) // []
print(empty[2]) // 3
It looks like it just ignores the filter and does the map anyway. Is this documented somewhere? What other lazy collections have exceptional behavior like this?
It comes down to subscripting a LazyFilterCollection with an integer which in this case ignores the predicate and forwards the subscript operation to the base.
For example, if we're looking for the strictly positive integers in an array :
let array = [-10, 10, 20, 30]
let lazyFilter = array.lazy.filter { $0 > 0 }
print(lazyFilter[3]) // 30
Or, if we're looking for the lowercase characters in a string :
let str = "Hello"
let lazyFilter = str.lazy.filter { $0 > "Z" }
print(lazyFilter[str.startIndex]) //H
In both cases, the subscript is forwarded to the base collection.
The proper way of subscripting a LazyFilterCollection is using a LazyFilterCollection<Base>.Index as described in the documentation :
let start = lazyFilter.startIndex
let index = lazyFilter.index(start, offsetBy: 1)
print(lazyFilter[index])
Which yields 20 for the array example, or l for the string example.
In your case, trying to access the index 3:
let start = empty.startIndex
let index = empty.index(start, offsetBy: 3)
print(empty[index])
would raise the expected runtime error :
Fatal error: Index out of range
To add to Carpsen90's answer, you run into one of Collection's particularities: it's neither recommended nor safe to access collections by an absolute index, even if the type system allows it, because the collection you receive might be a subset of another one.
Let's take a simpler example, array slicing:
let array = [0, 1, 2, 3, 4]
let slice = array[2..<3]
print(slice) // [2]
print(slice.first) // Optional(2)
print(slice[0]) // crashes with array index out of bounds
Even if slice is a collection indexable by an integer, it's still unsafe to use absolute integers to access elements of that collection, as the collection might have a different set of indices.
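A short sketch of the safer patterns, using the slice's own indices rather than assuming they start at 0 (the values here are just an illustration):

let array = [0, 1, 2, 3, 4]
let slice = array[2..<4]            // ArraySlice that keeps the original indices 2 and 3
print(slice[slice.startIndex])      // 2 -- the slice's first valid index is 2, not 0
for i in slice.indices {
    print(i, slice[i])              // prints "2 2" then "3 3"
}
print(Array(slice)[0])              // 2 -- or copy into a fresh Array if you need zero-based indexing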
I was coding a calculator app in Swift, and I am very new to Swift, so I am lost with the syntax and everything. When I run my code I get an error saying division by zero. I have debugged everything, but I have no idea how to solve it; any help would be greatly appreciated, as I am just starting out with Swift and iOS. The application I am making runs in the Mac terminal, so my program takes the user's input as a string and then converts it to an Int.
This is the code I am working with
    var average = 0;
    let count = nums.count - 1
    for index in 0...nums.count - 2 {
        let nextNum = Int(nums[index])
        average += nextNum!
    }
    return average / count
}
You are subtracting one from the array's element count, I assume because of the idea that zero-based numbering affects it, but there is no need for that in this case.
You should check for an empty array since this will cause a division by zero. Also you can use reduce to simply sum up an array of numbers then divide by the count.
func average(of nums: [Float]) -> Float? {
    let count = nums.count
    if count == 0 { return nil }
    return nums.reduce(0, +) / Float(count)
}
There might be some case where the divisor is 0. As @MartinR said, if there is only one object in nums then count = nums.count - 1 would be zero, and dividing by 0 is undefined.
One more issue I found is that you are looping over 0...nums.count - 2, but it should be 0...nums.count - 1. You can also write it with a less-than condition as
0..<nums.count or 0..<count
Use,
var average = 0;
let count = nums.count
for index in 0..<count {
    let nextNum = Int(nums[index])
    average += nextNum!
}
return average / count
You can use Swift's higher-order functions for a more concise solution, as:
let count = nums.count
let avg = nums.reduce(0, +) / count
Try this:
var sum = 0;
let count = nums.count
for index in 0...nums.count - 1 {
    let nextNum = Int(nums[index])
    sum += nextNum!
}
return count != 0 ? sum / count : 0
var occurences: [Int : Int] = [:]
for number in numbers {
    if var value = occurences[number] {
        occurences[number] = ++value
    } else {
        occurences[number] = 1
    }
}
I understand the first 2 lines: it declares an empty dictionary, and I have an array of numbers to iterate over in a for-in loop. But can someone explain the 4th and 5th lines, please? I just don't get how it decides which one is the key and which one is the value. Thank you so much, I've been stuck here for like 2 days.
This line
if var value = occurences[number]
means that it checks whether occurences has some value stored for the key number, and then in the next line
occurences[number] = ++value
it increments the value by using ++ and then saves that to the occurences dict.
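Side note: the ++ operator was removed in Swift 3, so on a current compiler the same counting loop is usually written with the dictionary's defaulting subscript. A small self-contained version (the numbers array here is just a sample input):

var occurences: [Int: Int] = [:]
let numbers = [1, 2, 2, 3, 3, 3]       // sample input
for number in numbers {
    // number is the key; the stored count is the value
    occurences[number, default: 0] += 1
}
print(occurences)                      // [1: 1, 2: 2, 3: 3] (order may vary)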