What does the error mean "Cannot convert value of type"? - swift5

The Python code reproduced below was taken as a basis.
Errors occur in every loop. There are a lot of errors (please help fix them):
[screenshot of the compiler errors]
Code:
var t = readLine()!
var s = readLine()!
var len_s = s.count
var t_lis = Set(t)
let character:[Character] = Array(s)
var c_s:[Character: Int] = Dictionary(uniqueKeysWithValues: zip(character, Array(repeating: 1, count: character.count)))
let character2:[Character] = Array(t_lis)
var c_t:[Character: Int] = Dictionary(uniqueKeysWithValues: zip(character2, Array(repeating: 1, count: character2.count)))
var c_res = [String: String]()
var summ = 0
for e in c_s{
c_res[e] = [c_s[e], min( c_s[e], c_t[e] )]
summ += c_res[e][1]}
for i in 0..<((t.count-s.count)+1) {
if summ == len_s-1{
print(i)
break
}
for j in c_res{
if t[i] = c_res[j]{
if c_res[t[i]][1] > 0{
c_res[t[i]][1] -= 1
summ -= 1
}}}
for l in c_res {
if (i+len_s < t.count && t[i+len_s]) = c_res{
if c_res[ t[i+len_s] ][1] < c_res[ t[i+len_s] ][0]{
c_res[ t[i+len_s] ][1] += 1
summ += 1
}}}
}

For reference, here is the original Python code that OP linked to:
from collections import Counter

t = input('t = ')
s = input('s = ')
len_s = len(s)
t_lis = list(t)
c_s = Counter(s)
c_t = Counter(t_lis[:len_s])
c_res = dict()
summ = 0
for e in c_s:
    c_res[e] = [c_s[e], min(c_s[e], c_t[e])]
    summ += c_res[e][1]
for i in range(len(t) - len(s) + 1):
    if summ == len_s - 1:
        print(i)
        break
    if t[i] in c_res:
        if c_res[t[i]][1] > 0:
            c_res[t[i]][1] -= 1
            summ -= 1
    if i + len_s < len(t) and t[i + len_s] in c_res:
        if c_res[t[i + len_s]][1] < c_res[t[i + len_s]][0]:
            c_res[t[i + len_s]][1] += 1
            summ += 1
else:
    print(-1)
First I want to mention that the Python code that was linked to is pretty bad. By that, I mean that nothing is clearly named. It's totally obtuse as to what it's trying to accomplish. I'm sure it would be clearer if I spoke Russian or whatever the language on that page is, but it's not either of the ones I speak. I know Python programming has a different culture around it than Swift programming, since Python is often written for ad hoc solutions, but it really should be refactored, with portions extracted into well-named functions. That would make it a lot more readable, and might have helped you in your translation of it into Swift. I won't try to do those refactorings here, but once the errors are fixed, if you want to use it in any kind of production environment, you really should clean it up.
As you acknowledge, you have a lot of errors. You ask what the errors mean, but presumably you want to know how to fix the problems, so I'll address both. The errors start on this line:
c_res[e] = [c_s[e], min( c_s[e], c_t[e] )]
The first error is Cannot assign value of type '[Any]' to subscript of type 'String'
This means you are building an array containing elements of type Any and trying to assign it to c_res[e]. c_res is Dictionary with keys of type String and values of type String. So assuming e were a String, which it isn't - more on that in a sec - then c_res[e] would have the type of the value, a String.
The natural question would be why the right-hand side is an array of Any. It comes down to the fact that the definition of the array isn't legal, and the compiler is choking on it (basically, a by-product of other errors). The reason is that min expects all of its parameters to be of a single type that conforms to the Comparable protocol, but c_s[e] and c_t[e] are illegal... and that's because both are subscripts into a Dictionary<Character, Int>, so they expect an index of type Character, but e isn't a Character. It's a tuple, (Character, Int). The reason is to be found on the preceding line:
for e in c_s{
Since c_s is Dictionary<Character, Int>, its elements are tuples containing a Character and an Int. That might be surprising for a Python programmer new to Swift. To iterate over the keys you have to specify that explicitly, so let's correct that:
for e in c_s.keys {
With that fixed, previous errors go away, but a new problem is exposed. When you index into a Dictionary in Swift you get an optional value, because it might be nil if there is no value stored for that key, so it needs to be unwrapped. If you're sure that neither c_s[e] nor c_t[e] will be nil you could force-unwrap them like this:
c_res[e] = [c_s[e]!, min( c_s[e]!, c_t[e]! )]
But are you sure? It's certainly not obvious that they must be. So we need to handle the optional, either with optional binding, or with nil coalescing to provide a default value if it is nil.
for (e, csValue) in c_s {
    let ctValue = c_t[e] ?? Int.max
    c_res[e] = [csValue, min(csValue, ctValue)]
    summ += c_res[e][1]
}
Note that we've gone back to iterating over c_s instead of c_s.keys, but now we're using tuple binding to assign just the key to e and the value to csValue. This avoids optional handling for the c_s elements. For the value from c_t[e] I use nil coalescing to default it to the maximum integer; that way, if c_t[e] is nil, min will still return csValue, which seems to be the intent.
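If the nil-coalescing operator is new to you, here is a tiny standalone illustration (the dictionary and its values are made up just for this example):
let counts: [Character: Int] = ["a": 2]
print(counts["a"] ?? Int.max) // 2 - the stored value
print(counts["z"] ?? Int.max) // 9223372036854775807 (Int.max) - the fallback, because "z" has no entry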
But again we have exposed another problem. Now the compiler complains that we can't assign Array<Int> to c_res[e] which is expected to be a String... In Swift, String is not an Array<Int>. I'm not sure why c_res is defined to have values of type String when the code puts arrays of Int in it... so let's redefine c_res.
var c_res = [String: [Int]]()
var summ = 0
for (e, csValue) in c_s {
    let ctValue = c_t[e] ?? Int.max
    c_res[e] = [csValue, min(csValue, ctValue)]
    summ += c_res[e][1]
}
To paraphrase a Nirvana lyric, "Hey! Wait! There is a new complaint!" Specifically, e is type Character, but c_res is Dictionary<String, Array<Int>>, so let's just make c_res a Dictionary<Character, Array<Int>> instead.
var c_res = [Character: [Int]]()
var summ = 0
for (e, csValue) in c_s {
    let ctValue = c_t[e] ?? Int.max
    c_res[e] = [csValue, min(csValue, ctValue)]
    summ += c_res[e][1]
}
Yay! Now we've resolved all the errors on the line we started with... but there's now one on the next line: Value of optional type '[Int]?' must be unwrapped to refer to member 'subscript' of wrapped base type '[Int]'
Again this is because when we index into a Dictionary, the value for our key might not exist. But we just computed the value we want to add to summ in our call to min, so let's save that off and reuse it here.
var c_res = [Character: [Int]]()
var summ = 0
for (e, csValue) in c_s {
    let ctValue = c_t[e] ?? Int.max
    let minC = min(csValue, ctValue)
    c_res[e] = [csValue, minC]
    summ += minC
}
Now we finally have no errors in the first loop. Remaining errors are in the nested loops.
For starters, the code uses = to test for equality. As with all languages in the C family, Swift uses == as the equality operator. That change, which is needed in a couple of places, is pretty straightforward, so I won't show that iteration. Once those are fixed, we get one of my favorite (not) errors in Swift, Type of expression is ambiguous without more context, on this line:
if t[i] == c_res[j] {
These ambiguity errors can mean one of a few things. The main one is that elements of the expression match several definitions, and the compiler doesn't have a way to figure out which one should be used. That flavor is often accompanied by references to the possible matches. It also seems to happen when multiple type-check failures combine in a way the compiler can't turn into a clearer error. I think that's the version that's happening here. The source of this problem goes back to the outer loop
for i in 0..<((t.count-s.count)+1) {
which makes the loop variable, i, be of type Int, combined with using i to index into t, which is a String. The problem is that you can't index into a String with an Int. You have to use String.Index. The reason comes down to String consisting of Unicode characters (stored internally as UTF-8), which means that its characters have different lengths. Indexing with an Int the same way you would for an element of an Array would require O(n) work, but subscripting a collection is expected to be O(1). String.Index solves this by using String methods like index(after:) to compute indices from other indices. Basically, indexing into a String is kind of a pain, so in most cases Swift programmers do something else, usually relying on the many methods String supports for manipulating it. As I'm writing this, I haven't yet put together what the code is supposed to be doing, which makes it hard to figure out which String methods might be helpful here, so let's just convert t to [Character]; then we can use integers to index into it:
var t = [Character](readLine()!)
That still gives an ambiguous expression error though, so I looked at the equivalent line in the Python code. This revealed a logic error in translation. Here's the Python:
if t[i] in c_res:
    if c_res[t[i]][1] > 0:
        c_res[t[i]][1] -= 1
        summ -= 1
There is no loop. It looks like the loop was introduced to mimic the check to see if t[i] is in c_res, which is one way to do it, but it was done incorrectly. Swift has a way to do that more succinctly:
if c_res.keys.contains(t[i]) {
    if c_res[t[i]]![1] > 0 {
        c_res[t[i]]![1] -= 1
        summ -= 1
    }
}
But we can use optional binding to clean that up further:
let tChar = t[i]
if let cResValue = c_res[tChar] {
    if cResValue[1] > 0 {
        c_res[tChar][1] -= 1
        summ -= 1
    }
}
But again we have the problem of indexing into a Dictionary returning an optional which needs unwrapping on the line,
c_res[tChar][1] -= 1
Fortunately we just ensured that c_res[tChar] exists when we bound it to cResValue, and the only reason we need to index into again is because we need to update the dictionary value... this is a good use of a force-unwrap:
let tChar = t[i]
if let cResValue = c_res[tChar] {
    if cResValue[1] > 0 {
        c_res[tChar]![1] -= 1
        summ -= 1
    }
}
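As a side note (my own variation, not something the original answer uses), optional chaining on the assignment also avoids the force-unwrap; if the key were somehow missing, the mutation would simply be a no-op:
let tChar = t[i]
if let cResValue = c_res[tChar], cResValue[1] > 0 {
    c_res[tChar]?[1] -= 1 // optional chaining instead of force-unwrapping
    summ -= 1
}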
The last loop also seems to be the result of testing for existence in c_res and the loop variable isn't even used. Here's the original Python:
if i + len_s < len(t) and t[i + len_s] in c_res:
    if c_res[t[i + len_s]][1] < c_res[t[i + len_s]][0]:
        c_res[t[i + len_s]][1] += 1
        summ += 1
We can use optional binding here combined with the comma-if syntax, and another force-unwrap. Note that the bounds check has to happen before we subscript t, otherwise the last iteration would index past the end of the array:
if i + len_s < t.count, let cResValue = c_res[t[i + len_s]] {
    tChar = t[i + len_s]
    if cResValue[1] < cResValue[0] {
        c_res[tChar]![1] += 1
        summ += 1
    }
}
Of course, since we're re-using tChar with a new value, it has to be changed from let to var.
Now it all compiles. It definitely needs refactoring, but here it is altogether:
import Foundation

var t = [Character](readLine()!)
var s = readLine()!
var len_s = s.count
var t_lis = Set(t)
let character: [Character] = Array(s)
var c_s: [Character: Int] = Dictionary(uniqueKeysWithValues: zip(character, Array(repeating: 1, count: character.count)))
let character2: [Character] = Array(t_lis)
var c_t: [Character: Int] = Dictionary(uniqueKeysWithValues: zip(character2, Array(repeating: 1, count: character2.count)))
var c_res = [Character: [Int]]()
var summ = 0
for (e, csValue) in c_s {
    let ctValue = c_t[e] ?? Int.max
    let minC = min(csValue, ctValue)
    c_res[e] = [csValue, minC]
    summ += minC
}
for i in 0..<((t.count - s.count) + 1) {
    if summ == len_s - 1 {
        print(i)
        break
    }
    var tChar = t[i]
    if let cResValue = c_res[tChar] {
        if cResValue[1] > 0 {
            c_res[tChar]![1] -= 1
            summ -= 1
        }
    }
    if i + len_s < t.count, let cResValue = c_res[t[i + len_s]] {
        tChar = t[i + len_s]
        if cResValue[1] < cResValue[0] {
            c_res[tChar]![1] += 1
            summ += 1
        }
    }
}
If all of this makes you wonder why anyone would use such a picky language, there are two things to consider. The first is that you don't get so many errors when writing code originally in Swift, or even when translating from another strongly typed language. Converting from a dynamically typed language like Python is a problem, because apart from other, subtler differences, you also have to pin down its overly flexible view of data to some concrete type - and it's not always obvious how to do that. The other thing is that strongly typed languages let you catch huge classes of bugs early... because the type system won't even let them compile.

Related

Make variable immutable after initial assignment

Is there a way to make a variable immutable after initializing/assigning it, so that it can change at one point, but later become immutable? I know that I could create a new let variable, but is there a way to do so without creating a new variable?
If not, what is best practice to safely ensure a variable isn't changed after it needs to be?
Example of what I'm trying to accomplish:
var x = 0 //VARIABLE DECLARATION
while x < 100 { //VARIABLE IS CHANGED AT SOME POINT
x += 1
}
x = let x //MAKE VARIABLE IMMUTABLE AFTER SOME FUNCTION IS PERFORMED
x += 5 //what I'm going for: ERROR - CANNOT ASSIGN TO IMMUTABLE VARIABLE
You can initialize a variable with an inline closure:
let x: Int = {
    var x = 0
    while x < 100 {
        x += 1
    }
    return x
}()
There's no way I know of that lets you change a var variable into a let constant later on. But you could declare your constant as let to begin with and not immediately give it a value.
let x: Int /// No initial value
x = 100 /// This works.
x += 5 /// Mutating operator '+=' may not be used on immutable value 'x'
As long as you assign a value to your constant sometime before you use it, you're fine, since the compiler can figure out that it will eventually be populated. For example if else works, since one of the conditional branches is guaranteed to get called.
let x: Int
if 5 < 10 {
    x = 0 /// This also works.
} else {
    x = 1 /// Either the first block or the `else` will be called.
}
x += 5 /// Mutating operator '+=' may not be used on immutable value 'x'
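The same definite-initialization rule applies to other exhaustive control flow, for example a switch. This is a small sketch of my own, not from the original question:
let y: Int
switch Int.random(in: 0..<2) {
case 0:
    y = 10
default:
    y = 20 /// every path assigns exactly once, so `y` can stay a constant
}
print(y)
// y += 5 /// Mutating operator '+=' may not be used on immutable value 'y'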

Swift Mini-Max Sum One Test Case Failed - HackerRank

Before anything else, I checked if this kind of question fits Stackoverflow, and based on one similar question (javascript) and from this question: https://meta.stackexchange.com/questions/129598/which-computer-science-programming-stack-exchange-sites-do-i-post-on -- it does.
So here it goes. The challenge is pretty simple, in my opinion:
Given five positive integers, find the minimum and maximum values that
can be calculated by summing exactly four of the five integers. Then
print the respective minimum and maximum values as a single line of
two space-separated long integers.
For example, given arr = [1, 3, 5, 7, 9], our minimum sum is 1 + 3 + 5 + 7 = 16 and our maximum sum is 3 + 5 + 7 + 9 = 24. We would print
16 24
Input Constraint:
1 <= arr[i] <= (10^9)
My solution is pretty simple. This is what I could do best:
func miniMaxSum(arr: [Int]) -> Void {
let sorted = arr.sorted()
let reversed = Array(sorted.reversed())
var minSum = 0
var maxSum = 0
_ = sorted
.filter({ $0 != sorted.last!})
.map { minSum += $0 }
_ = reversed
.filter({ $0 != reversed.last!})
.map { maxSum += $0 }
print("\(minSum) \(maxSum)")
}
As you can see, I have two sorted arrays. One is incrementing, and the other one is decrementing. And I'm removing the last element of the two newly sorted arrays. The way I remove the last element is using filter, which probably creates the problem. But from there, I thought I could get easily the minimum and maximum sum of the 4 elements.
I had 13/14 test cases passed. And my question is, what could be the test case on which this solution is likely to fail?
Problem link: https://www.hackerrank.com/challenges/mini-max-sum/problem
Here
_ = sorted
.filter({ $0 != sorted.last!})
.map { minSum += $0 }
your expectation is that all but the largest element are added. But that is only correct if the largest element is unique. (And similarly for the maximal sum.)
Choosing an array in which all elements are identical makes the problem more apparent:
miniMaxSum(arr: [1, 1, 1, 1, 1])
// 0 0
A simpler solution would be to compute the sum of all elements once, and then get the results by subtracting the largest and the smallest array element, respectively. I'll leave the implementation to you :)
Here is the O(n) solution:
func miniMaxSum(arr: [Int]) {
    var smallest = Int.max
    var greatest = Int.min
    var sum = 0
    for x in arr {
        sum += x
        smallest = min(smallest, x)
        greatest = max(greatest, x)
    }
    print(sum - greatest, sum - smallest, separator: " ")
}
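A quick check of this version against the all-identical input that breaks the filter-based version, plus the example from the problem statement (expected output worked out by hand):
miniMaxSum(arr: [1, 1, 1, 1, 1]) // prints "4 4"
miniMaxSum(arr: [1, 3, 5, 7, 9]) // prints "16 24"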
I know this isn't codereview.stackexchange.com, but I think some clean up is in order, so I'll start with that.
let reversed = Array(sorted.reversed())
The whole point of the ReversedCollection that is returned by Array.reversed() is that it doesn't cause a copy of the elements, and it doesn't take up any extra memory or time to produce. It's merely a wrapper around a collection, and it intercepts indexing operations and changes them to imitate a buffer that's been reversed. Asked for .first? It'll give you .last of its wrapped collection. Asked for .last? It'll return .first, etc.
By initializing a new Array from sorted.reversed(), you're causing an unnecessary copy, and defeating the point of ReversedCollection. There are some circumstances where this might be necessary (e.g. you want to pass a pointer to a buffer of reversed elements to a C API), but this isn't one of them.
So we can just change that to let reversed = sorted.reversed()
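A small illustration of that wrapper behaviour (the values are just for the example):
let sorted = [1, 2, 3, 4, 5]
let reversed = sorted.reversed() // ReversedCollection<[Int]> - no elements are copied
print(reversed.first == sorted.last) // true - the index math is simply remapped
print(Array(reversed)) // [5, 4, 3, 2, 1] - a copy happens only when you explicitly ask for one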
-> Void doesn't do anything, omit it.
sorted.filter({ $0 != sorted.last!}) is inefficient.
... but more than that, this is the source of your error. There's a bug in this. If you have an array like [1, 1, 2, 3, 3], your minSum will be 4 (the sum of [1, 1, 2]), when it should be 7 (the sum of [1, 1, 2, 3]). Similarly, the maxSum will be 8 (the sum of [2, 3, 3]) rather than 9 (the sum of [1, 2, 3, 3]).
You're doing a scan of the whole array, doing sorted.count equality checks, only to discard an element with a known position (the last element). Instead, use dropLast(), which returns a collection that wraps the input, but whose operations mask the existence of the last element.
_ = sorted
.dropLast()
.map { minSum += $0 }
_ = reversed
.dropLast()
.map { maxSum += $0 }
_ = someCollection.map(f)
... is an anti-pattern. The distinguishing feature of map, compared to forEach, is that it produces a resulting array that stores the return values of the closure as evaluated for every input element. If you're not going to use the result, use forEach:
sorted.dropLast().forEach { minSum += $0 }
reversed.dropLast().forEach { maxSum += $0 }
However, there's an even better way. Rather than summing by mutating a variable and manually adding to it, instead use reduce to do so. This is ideal because it allows you to remove the mutability of minSum and maxSum.
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = reversed.dropLast().reduce(0, +)
You don't really need the reversed variable at all. You could just achieve the same thing by operating over sorted and using dropFirst() instead of dropLast():
func miniMaxSum(arr: [Int]) {
let sorted = arr.sorted()
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = sorted.dropFirst().reduce(0, +)
print("\(minSum) \(maxSum)")
}
Your code assumes the input size is always 5. It's good to document that in the code:
func miniMaxSum(arr: [Int]) {
assert(arr.count == 5)
let sorted = arr.sorted()
let minSum = sorted.dropLast().reduce(0, +)
let maxSum = sorted.dropFirst().reduce(0, +)
print("\(minSum) \(maxSum)")
}
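With the assertion in place, a wrong-sized input now fails loudly in debug builds instead of silently producing a meaningless answer (example call of my own):
miniMaxSum(arr: [1, 2, 3]) // assertion failure in a debug build: arr.count == 5 does not hold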
A generalization of your solution uses a lot of extra memory, which you might not have available to you.
This problem fixes the number of summed numbers (always 4) and the number of input numbers (always 5). This problem could be generalized to picking summedElementCount numbers out of any sized arr. In this case, sorting and summing twice is inefficient:
Your solution has a space complexity of O(arr.count)
This is caused by the need to hold the sorted array. If you were allowed to mutate arr in-place, this could reduce to O(1).
Your solution has a time complexity of O((arr.count * log_2(arr.count)) + summedElementCount)
Derivation: Sorting first (which takes O(arr.count * log_2(arr.count))), and then summing the first and last summedElementCount (which is each O(summedElementCount))
O(arr.count * log_2(arr.count)) + (2 * O(summedElementCount))
= O(arr.count * log_2(arr.count)) + O(summedElementCount) // Annihilation of multiplication by a constant factor
= O((arr.count * log_2(arr.count)) + summedElementCount) // Addition law for big O
This problem could instead be solved with a bounded priority queue, like the MinMaxPriorityQueue in Google's Guava library for Java. It's simply a wrapper around a min-max heap that maintains a fixed number of elements and that, when added to, evicts the greatest element (according to the provided comparator). If you had something like this available to you in Swift, you could do:
func miniMaxSum(arr: [Int], summedElementCount: Int) {
let minQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: <)
let maxQueue = MinMaxPriorityQueue<Int>(size: summedElementCount, comparator: >)
for i in arr {
minQueue.offer(i)
maxQueue.offer(i)
}
let (minSum, maxSum) = (minQueue.reduce(0, +), maxQueue.reduce(0, +))
print("\(minSum) \(maxSum)")
}
This solution has a space complexity of only O(summedElementCount) extra space, needed to hold the two queues, each of max size summedElementCount.
This is less than the previous solution, because summedElementCount <= arr.count
This solution has a time complexity of O(arr.count * log_2(summedElementCount))
Derivation: the for loop does arr.count iterations, each consisting of a log_2(summedElementCount) operation on both queues.
O(arr.count) * (2 * O(log_2(summedElementCount)))
= O(arr.count) * O(log_2(summedElementCount)) // Annihilation of multiplication by a constant factor
= O(arr.count * log_2(summedElementCount)) // Multiplication law for big O
It's unclear to me whether this is better or worse than O((arr.count * log_2(arr.count)) + summedElementCount). If you know, please let me know in the comments below!
Try this one (accepted):
func miniMaxSum(arr: [Int]) -> Void {
    let sorted = arr.sorted()
    let minSum = sorted[0...3].reduce(0, +)
    let maxSum = sorted[1...4].reduce(0, +)
    print("\(minSum) \(maxSum)")
}
Try this:
func miniMaxSum(arr: [Int]) -> Void {
    var minSum = 0
    var maxSum = 0
    var minChecked = false
    var maxChecked = false
    let numMax = arr.reduce(Int.min, { max($0, $1) })
    print("Max number in array: \(numMax)")
    let numMin = arr.reduce(Int.max, { min($0, $1) })
    print("Min number in array: \(numMin)")
    for item in arr {
        if !minChecked && numMin == item {
            minChecked = true
        } else {
            maxSum = maxSum + item
        }
        if !maxChecked && numMax == item {
            maxChecked = true
        } else {
            minSum = minSum + item
        }
    }
    print("\(minSum) \(maxSum)")
}
Try this:
func miniMaxSum(arr: [Int]) -> Void {
let min = arr.min()
let max = arr.max()
let total = arr.reduce(0, +)
print(total - max!, total - min!, separator: " ")
}

Passing hashset itself as parameter for reduce in swift

Given a hashset h = [1,2,3,4,5] for example, the purpose is to count the number of unique elements i such that h.contains(i+1).
I can write down swift code using reduce() as
h.reduce(0,{h.contains($1+1) ? $0 + 1 : $0})
But what if h is an array containing duplicates instead of a hashset? I first need to convert it into hashset and then using the above expression:
Set(h).reduce(0,{Set(h).contains($1+1) ? $0 + 1 : $0})
But this way Set(h) gets computed Set(h).count + 1 times, as pointed out by @carpsen90. Is there any way to write the code like
Set(h).reduce(0,{self.contains($1+1) ? $0 + 1 : $0})
without using a temporary variable to store Set(h)?
Every time you call Set(h) a new set is calculated, so in your example Set(h).reduce(0,{Set(h).contains($1+1) ? $0 + 1 : $0}), Set(h) will be calculated Set(h).count + 1 times: once for the set being iterated, plus once per element inside the closure. Having a variable let set = Set(h) is the way to go here:
let set = Set(h)
let result = set.reduce(0) {set.contains($1+1) ? $0 + 1 : $0}
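As a side note (my own variation, not part of the answer above), the same count can also be written with filter, at the cost of building an intermediate array:
let set = Set(h)
let result = set.filter { set.contains($0 + 1) }.count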
Here is an alternative way of getting the desired result:
Let's create a dictionary that indicates whether a number has appeared in h:
var dict = [Int: Bool].init(minimumCapacity: h.count)
for x in h {
    if dict[x] == nil {
        dict[x] = true
    }
}
And then, for each element of h, check whether its successor appears in the dictionary:
var count = 0
for entry in dict {
    if dict[entry.key + 1] == true {
        count += 1
    }
}
And you could check the result :
print(count) //4
The problem here is that your array might contain duplicates, and the easiest way to filter out the duplicates is to convert it into a Set. The correct way to do that is to save the set in a new variable, so the extra variable is unavoidable.
Though you can still use the reduce method without converting your array into a set like this:
var tempH = [Int]()
let c = h.reduce(0) { (result, item) in
    if !tempH.contains(item) {
        tempH.append(item)
        return h.contains(item + 1) ? (result + 1) : result
    } else {
        return result
    }
}
But, as you can see in the above code, we have to use a temporary array to track the duplicates, so an extra variable seems unavoidable here as well, even though no Set is being used.
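For a quick sanity check of either approach, with sample values of my own:
let h = [1, 2, 2, 3, 5]
let set = Set(h)
print(set.reduce(0) { set.contains($1 + 1) ? $0 + 1 : $0 }) // 2 - only 1 (→ 2) and 2 (→ 3) have their successor in h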

Calculator is not taking the Average

I was coding a calculator app in Swift, and I am very new to Swift, so I am lost with the syntax and everything. When I run my code I get an error saying division by 0. I have debugged everything but I have no idea how to solve it; any help would be greatly appreciated, I am just starting out with Swift and iOS. The application I am making right now is for the mac terminal, so my program takes the user's input as a string and then converts it to an Int.
This is the code I am working with
var average = 0;
let count = nums.count - 1
for index in 0...nums.count - 2 {
let nextNum = Int(nums[index])
average += nextNum!
}
return average / count
}
You are subtracting one from the array's element count, I assume on the idea that zero-based indexing requires it, but there is no need for that in this case.
You should check for an empty array since this will cause a division by zero. Also you can use reduce to simply sum up an array of numbers then divide by the count.
func average(of nums: [Float]) -> Float? {
let count = nums.count
if (count == 0) { return nil }
return nums.reduce(0, +) / Float(count)
}
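Example usage, with values invented purely for illustration:
print(average(of: [2, 4, 6]) ?? 0) // 4.0
print(average(of: []) ?? -1) // -1.0 - nil is returned for an empty array, so the fallback is used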
There might be some reason for the divisor to be 0. As @MartinR said, if there is only 1 object in nums then count = nums.count - 1 would be zero, and dividing by 0 is undefined.
One more issue I found is that you are looping over 0...nums.count - 2, but it should be 0...nums.count - 1. You can also write it with a half-open range as
0..<nums.count or 0..<count
Use,
var average = 0;
let count = nums.count
for index in 0..<count {
let nextNum = Int(nums[index])
average += nextNum!
}
return average / count
You can use Swift's higher-order functions for a more concise solution. Note that you still have to guard against an empty nums array yourself, otherwise the division by count will trap:
let count = nums.count
let avg = count > 0 ? nums.reduce(0, +) / count : 0
Let's try this:
var sum = 0
let count = nums.count
for index in 0..<count {
    let nextNum = Int(nums[index])
    sum += nextNum!
}
return count != 0 ? sum / count : 0

what does ++ exactly mean in Swift?

I am learning Swift with a book aimed at people with little experience. One thing bothering me is the ++ syntax. The following is taken from the book:
var counter = 0
let incrementCounter = {
counter++
}
incrementCounter()
incrementCounter()
incrementCounter()
incrementCounter()
incrementCounter()
The book said counter is 5.
But I typed this code into an Xcode playground, and it shows 4!
I am confused.
The post-increment and post-decrement operators increase (or decrease) the value of their operand by 1, but the value of the expression is the operand's original value prior to the increment (or decrement) operation.
So what the playground shows for that line is the value of the expression, i.e. the value of counter before the increment.
But after the closure is evaluated, the value of counter has changed, and you can see the updated value on the next line.
The x++ operator is used in multiple languages - C, C++, Java (see the C answer to the same question).
It is called post-increment. It increments the given variable by one but after the current expression is evaluated. For example:
var x = 1
var y = 2 + x++
// the value of x is now 2 (has been incremented)
// the value of y is now 3 (2 + 1, x has been evaluated before increment)
This differs from the ++x (pre-increment) operator:
var x = 1
var y = 2 + ++x
// the value of x is now 2 (has been incremented)
// the value of y is now 4 (2 + 2, x has been evaluated after the increment)
Note the operator is getting removed in the next version of Swift, so you shouldn't use it anymore.
It's always better to just write x += 1 instead of complex expressions with side effects.
The value of counter after your five calls to the incrementCounter closure will be 5, but the return of each call to incrementCounter will seemingly "lag" one step behind. As Sulthan writes in his answer, this is due to x++ being a post-increment operator: the result of the expression will be returned prior to incrementation
var x = 0
print(x++) // 0
print(x) // 1
Also, as I've written in my comment above, you shouldn't use the ++ and -- operators as they will be deprecated in Swift 2.2 and removed in Swift 3. However, if you're interested in the details of post- vs pre-increment operator, you can find good answers here on SO tagged to other languages, but covering the same subject, e.g.
What is the difference between ++i and i++?
It's worth mentioning, however, a point that is relevant for Swift > 2.1 and that doesn't really relate to the ++ operator specifically.
When you declare the closure incrementCounter as
var someInt: Int = 0
let incrementCounter = {
    someInt
}
The closure is implicitly inferred to be of type () -> Int: a closure taking zero arguments and with a single return value of type Int.
let incrementCounter: () -> Int = {
return someInt
}
Hence, what you seemingly "see" in your playground is the unused (non-assigned) return value of the call to the incrementCounter closure; i.e., the result of the expression incrementCounter().
The value of counter itself is never actually printed in the results pane of your playground (unless you write a line whose expression evaluates to counter).
++ and -- before the identifier add/subtract one, and then return its value.
++ and -- after the identifier return its value, and then add/subtract 1.
They were removed in Swift 3.0, but you can add them back:
prefix operator --
prefix operator ++
postfix operator --
postfix operator ++
prefix func ++(_ a : inout Int) -> Int {
a += 1
return a
}
prefix func --(_ a : inout Int) -> Int {
a -= 1
return a
}
postfix func ++(_ a: inout Int) -> Int {
defer { a += 1 }
return a
}
postfix func --(_ a: inout Int) -> Int {
defer { a -= 1 }
return a
}
var a = 11
print(a++) // 11
print(a) // 12
var b = 5
print(--b) // 4
print(b) // 4
Even though there are a lot of answers, and all of them are clear, I added this snippet to show you how to replace your code with the 'new' syntax, where ++ and -- are deprecated. First, your own code:
var counter = 0
let incrementCounter = {
counter++
}
let i0 = incrementCounter() // 0
let i1 = incrementCounter() // 1
// .....
How do we rewrite it in future Swift syntax? Let's try the recommended replacement ...
var counter = 0
let ic = {
counter += 1
}
let i0 = ic() // () aka Void !!!
let i1 = ic() // ()
but now the result of ic() is Void! Hm ... OK, the next attempt could look like
var counter = 0
let ic = {
counter += 1
return counter
}
but now the code doesn't compile, with error: unable to infer closure return type in current context :-), so we have to declare it (that was not necessary in our original version)
var counter = 0
let ic:()->Int = {
counter += 1
return counter
}
let i0 = ic() // 1
let i1 = ic() // 2
// .....
It works, but the results are not the same. That is because in the original code the ++ operator was used as a post-increment operator. So, we need another adjustment of our 'new' version
var counter = 0
let ic:()->Int = {
let ret = counter
counter += 1
return ret
}
let i0 = ic() // 0
let i1 = ic() // 1
// .....
Yes, I would like to see my familiar unary ++ and/or -- in future versions of Swift too.
What you are doing here is a post-increment.
First, learn the difference between pre- and post-increment:
With post-increment, the counter itself ends up holding the incremented value (i.e. 5), but the value returned by the expression is the old value (i.e. 4).
With pre-increment, both the variable and the returned value are incremented.
Now let's look at your code:
counter++ makes a copy, increases counter, and returns the copy (the old value).
So if you print counter it will show the incremented value (i.e. 5), but the value returned from the closure (which is what incrementCounter gives you) is the old value (i.e. 4).
That is why incrementCounter only appears to count up to 4.
Solution: change counter++ to ++counter.
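For completeness, this is what the suggested fix looks like in a Swift 2.x playground (or in current Swift with the custom prefix ++ operator from the earlier answer, since ++ no longer exists in the language itself):
var counter = 0
let incrementCounter = {
    ++counter // pre-increment: bump first, then return the new value
}
incrementCounter() // 1
incrementCounter() // 2
incrementCounter() // 3
incrementCounter() // 4
incrementCounter() // 5 - counter and the returned value now agree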