Incomprehensible errors in Swift

This works:
func averageOf(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return Float(sum) / Float(numbers.count)
}
averageOf() // (not a number)
averageOf(42, 597, 12) // (217.0)
But this doesn't:
func averageOf(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return Float(sum / numbers.count)
}
averageOf()
averageOf(42, 597, 12)
It gives me this error on the } line:
Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
I stumbled upon another question with the same first and second snippets of code and the author apparently doesn't get the same errors.
Let's remove that cast:
func averageOf(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum / numbers.count
}
averageOf()
averageOf(42, 597, 12)
It gives me this error, on the division sign:
Cannot invoke '/' with an argument list of type '(#lvalue Int, Int)'
If I then change the return type of the function to Int:
func averageOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum / numbers.count
}
averageOf()
averageOf(42, 597, 12)
I get the same EXC_BAD_INSTRUCTION error.
If I cast only numbers.count:
func averageOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum / Float(numbers.count)
}
averageOf()
averageOf(42, 597, 12)
I get this error on the division sign:
Cannot invoke 'init' with an argument list of type '(#lvalue Int, $T5)'
I also get this error if I change the return type back to Float.
All of this makes no sense to me. Is it Xcode going postal, or have I missed something subtle?

As already explained, the problem in the second example is due to invoking averageOf() without arguments, which results in a division by zero. But the first averageOf() works, again without arguments; why? Let me add a few more details.
In the first case you reported, you get no error and averageOf() works, because you are casting the two Int operands to Float before the division.
In the world of Float numbers, division by zero is well defined: IEEE 754 floating-point arithmetic specifies a result instead of trapping. If you try 0.0 / 0.0 in a Playground, you won't get an error; the output will instead be not a number (NaN).
In the second case, however, you're trying to divide 0 by 0 before converting to Float. Therefore, we are still in the realm of Int numbers when the division is performed, and integer division by zero has no defined result: it is an error. If you try 0 / 0 in a Playground, you'll get an error.
All the other cases, the ones not explained by the Int vs Float behavior, come down to the fact that Swift requires you to convert between numeric types explicitly, even where another language would convert the operands implicitly.
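A minimal Playground sketch of that difference (the exact diagnostics vary by Swift version):

// Floating-point division by zero is defined by IEEE 754:
let floatResult = Float(0) / Float(0)
floatResult.isNaN // true: "not a number", no runtime error

// Integer division by zero traps instead:
let zero = 0
// let a = 0 / 0       // rejected at compile time: division by zero
// let b = zero / zero // compiles, but traps when executed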

This error occurs because of your function call averageOf(). If you pass no values to the variadic parameter numbers, it creates an empty array, so its count property returns 0. And you can't divide by 0.
This is also the reason why it says BAD_INSTRUCTION.
If you remove averageOf() from your second code example, it works.
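As a sketch (not part of the answer above), one way to make the second version safe against the empty call is to bail out before dividing; returning 0 for an empty input is an arbitrary choice here:

func averageOf(numbers: Int...) -> Float {
    // Guard against the empty variadic call, which would otherwise divide by zero.
    guard !numbers.isEmpty else { return 0 }
    var sum = 0
    for number in numbers {
        sum += number
    }
    return Float(sum / numbers.count)
}

averageOf()            // 0.0 instead of a runtime trap
averageOf(42, 597, 12) // 217.0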

Related

Swift's stride with Doubles is unreliable

Using stride with Swift 4 sometimes produces wrong results, as demonstrated by this piece of code:
import UIKit
import PlaygroundSupport

struct Demo {
    let omegaMax = 10.0
    let omegaC = 1.0
    var omegaSignal: Double { get { return 0.1 * omegaC } }
    var dOmega: Double { get { return 0.1 * omegaSignal } }
    var omegaArray: [Double] { get { return Array(stride(from: 0.0, through: omegaMax, by: dOmega)) } }
}
The variable dOmega is expected to hold the value 0.01. The array is expected to have 1001 elements, where the last element should have the value 10.0.
However, these assumptions are not true, as you can see in the following code section:
let demo = Demo()
let nbrOfElements = demo.omegaArray.count
let lastElement = demo.omegaArray[nbrOfElements-1]
print("\(nbrOfElements)") // -> 1000
print("\(lastElement)") // -> 9.990000000000002
What is going on there? Inspecting dOmega gives the answer.
print("\(demo.dOmega)") // -> 0.010000000000000002
The increment value dOmega is not exactly 0.01 as expected, but carries a very small approximation error, which is normal for a Double. However, this leads to the situation that the expected array element number 1001 would have the value 10.000000000000002, which is bigger than the given maximum value of 10.0, and so this element is not generated.
Depending on whether the values fed into the stride function carry rounding errors or not, the result is either the expected one or not.
My question is: what is the right way to use stride with Doubles so that it produces the expected result in every case?
You are using the wrong overload of stride!
There are two overloads:
stride(from:to:by:)
stride(from:through:by:)
You should have used the second one, because it includes the end value passed as through, whereas the first one stops before the value passed as to.
print(Array(stride(from: 0, through: 10.0, by: 0.001)).last!) // 10.0
The little difference you see is just because of the imprecise nature of Double.
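A small Playground sketch of the difference between the two overloads, using a step that is exactly representable in binary so no rounding is involved:

// to: stops before the end value
Array(stride(from: 0.0, to: 1.0, by: 0.25))      // [0.0, 0.25, 0.5, 0.75]

// through: includes the end value when a step lands exactly on it
Array(stride(from: 0.0, through: 1.0, by: 0.25)) // [0.0, 0.25, 0.5, 0.75, 1.0]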

Swift - rounding numbers

I am trying to round a number in swift, and I found a solution using this:
func roundTo(number: Double, precision: Int) -> Double {
    var power: Double = 1
    for _ in 1...precision {
        power *= 10
    }
    let rounded = Double(round(power * number) / power)
    return rounded
}
I have a model class, let's call it MyObject.
class MyObject {
    var preciseNumber: Double?
}
I am fetching a number, for example:
var myNumber = 10.0123456789
I use my roundTo function to round it so I have 10.0123456 (7 digits after the decimal point).
When I print a statement:
print("myNumber rounded: \(roundTo(myNumber, precision: 7))") // 10.0123456 as a result. Great!
Next I want to assign the rounded myNumber to my class variable preciseNumber, so:
let roundedNumber = roundTo(myNumber, precision: 7)
print("Rounded number is: \(roundedNumber)") // 10.01234567 as a result
self.preciseNumber = roundedNumber
print("Precise number is now: \(self.preciseNumber)") // 10.01234599999997 as a result
What might be causing this? I want to be as precise as possible.
So it sounds like your issue is really about comparing floating-point numbers. The best way to do this is to decide on the degree of precision you need: rather than just checking numOne == numTwo, use something like abs(one - two) <= 0.000001.
You can create a Swift operator to handle this for you pretty easily:
// `===` is just used as an example
func === (one: Double, two: Double) -> Bool {
    return abs(one - two) <= 0.000001
}
Then you can just check numOne === numTwo and it will use a better floating point equality check.
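Hypothetical usage, assuming the operator above is in scope:

let a = 0.1 + 0.2
let b = 0.3
a == b  // false: a is actually 0.30000000000000004
a === b // true: within the 0.000001 tolerance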
There is also a power function that will help simplify your rounding function:
let power = pow(10.0, Double(precision))
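A sketch of the rounding helper rewritten with pow; it assumes a non-negative precision and behaves like the loop version above:

import Foundation

func roundTo(number: Double, precision: Int) -> Double {
    let power = pow(10.0, Double(precision)) // 10^precision, replaces the loop
    return round(number * power) / power
}

roundTo(number: 10.0123456789, precision: 7) // 10.0123457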

binary operator * cannot be applied to operands of type Int and Double

I'm trying to build a simple Swift app to calculate VAT (value added tax = 20%).
func taxesFree(number: Int) -> Double {
    var textfield = self.inputTextField.text.toInt()!
    let VAT = 0.2
    var result = textfield * VAT
    return result
}
For some reason I keep getting
Binary operator * cannot be applied to operands of type Int and Double
on the line
var result = textfield * VAT
You should convert one type to the other so that both operands have the same type:
var result: Double = Double(textfield) * VAT
It's because you're trying to multiply an Int (textfield) with a Double (VAT). Because such an operation could lose the precision of the Double, Swift doesn't convert one to the other for you, so you need to explicitly convert the Int to a Double ...
var result = Double(textfield) * VAT
The problem here is that the error message is literally true: Swift is strongly typed and doesn't coerce implicitly. I just had a similar case myself with "binary operator '-' cannot be applied to operands of type 'Date' and 'Int'".
If you write:
var result = 10 * 0.2
...that's fine, but if you write:
var number = 10
var result = number * 0.2
...that's not fine. This is because untyped literal values get an appropriate type selected by the compiler, so the first line is effectively taken as var result = Double(10) * Double(0.2). After all, as a human being you might mean 10 to be floating-point or an integer; you normally wouldn't say which and would expect that to be clear from context. It might be a bit of a pain, but the idea of strong types is that after the code is parsed it can only have one valid compiled expression.
In general you would build a new value using the constructor, so var result = Double(textfield) * VAT in your case. This is different from casting (textfield as Double) because Int is not a subclass of Double; what you are doing instead is asking for a completely new Double value to be built at runtime, losing some accuracy if the value is very high or low. This is what loosely typed languages do implicitly with pretty much all immediate values, at a small but significant time cost.
In your specific case, it wasn't valuable to have an Int in the first place (even if no fraction part is possible) so what you needed was:
func taxesFree(number: Int) -> Double {
    var textfield = Double(self.inputTextField.text)!
    let VAT = 0.2
    var result = textfield * VAT
    return result
}
In my case it was just casting to CGFloat:
self.cnsMainFaqsViewHight.constant = CGFloat(self.mainFaqs.count) * 44.0
You can convert it like this:
var result: Double = Double(textfield)
I was misunderstanding the Closed Range Operator in Swift.
You should not wrap the range in an array: [0...10]
for i in [0...10] {
    // error: binary operator '+' cannot be applied to operands of type 'CountableClosedRange<Int>' and 'Int'
    let i = i + 1
}
for i in 0...10 {
    // ok!
    let i = i + 1
}
The range is a collection that can itself be iterated. No need to wrap it in an array, as perhaps you would have in Objective-C.
0...3 -> [0, 1, 2, 3]
[0...3] -> [[0, 1, 2, 3]]
Once you realize your object is a nested collection, rather than an array of Ints, it's easy to see why you cannot use numeric operators on the object.
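A quick Playground check of the difference (the exact type names in the comments vary a little between Swift versions):

let wrapped = [0...10]  // an array with one element: the range itself
wrapped.count           // 1
type(of: wrapped)       // Array<ClosedRange<Int>>

let range = 0...10      // eleven Int values when iterated
Array(range).count      // 11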
This worked for me when I got the same error message in Playground:
func getMilk(howManyCartons: Int) {
    print("Buy \(howManyCartons) cartons of milk")
    let priceToPay: Float = Float(howManyCartons) * 2.35
    print("Pay $\(priceToPay)")
}
getMilk(howManyCartons: 2)

About Swift: Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)

I am just learning to code, so I decided to start with Swift. I am following the tour that Apple has for it here, and I am at the section where it calculates a sum of numbers and then tells you to try writing a function that calculates an average of numbers.
func averageOf(numbers: Int...) -> Int {
    var sum = 0
    var total = 0
    var average = 0
    for number in numbers {
        sum += number
        total++
    } // <- Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
    average = sum/total
    return average
}
What am I doing wrong(what do I need to learn to do it right)?
I’m guessing you’ve called your function with no arguments, that is:
averageOf()
This is allowed with variadic arguments, and numbers will be an empty array. This will result in you attempting to divide an unchanged sum by an unchanged total (because you will go round the loop no times for no elements in numbers), so dividing 0 by 0, and you’re getting a divide-by-zero error.
To prevent this from being a possibility, you could require the user to supply at least one number:
func averageOf(first: Int, rest: Int...) -> Double {
    var sum = first
    var total = 1.0
    for number in rest {
        sum += number
        total++
    }
    return Double(sum)/total
}
This way, if you try to call it with no arguments, you’ll get a compiler error.
BTW, I altered your version to return a Double rather than an Int; you might want to experiment with the two versions to see why.
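For example (a sketch using the same pre-Swift 3 call syntax as the rest of this answer):

averageOf(1)           // 1.0
averageOf(42, 597, 12) // 217.0
// averageOf()         // compile-time error: missing argument for parameter 'first'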
(this technique is similar to how the standard lib max function is declared, which requires at least 2 arguments:
func max<T : Comparable>(x: T, y: T) -> T
but has an overloaded version for 3 or more:
func max<T : Comparable>(x: T, y: T, z: T, rest: T...) -> T
the reason for having the first version, instead of cutting straight to a variadic version that takes at least two, is that you can then pass it into things like reduce to find the max in a collection, e.g. reduce(a, 0, max))
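In current Swift syntax that reduce trick looks roughly like this (my example, not from the answer above):

let a = [3, 9, 4]
let maximum = a.reduce(Int.min, max) // 9: max is applied pairwise across the array
// or simply: a.max()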
For me, this error happened because an implicitly unwrapped property was not set; setting it fixed the issue.

Why can't I divide integers in swift?

In the Swift "Tour" documentation, there's an exercise where you build on the following function to average a set of numbers:
func sumOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum
}
I can make this work using something like the following:
func averageOf(numbers: Double...) -> Double {
    var sum: Double = 0, countOfNumbers: Double = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = sum / countOfNumbers
    return result
}
My question is, why do I have to cast everything as a Double to make it work? If I try to work with integers, like so:
func averageOf(numbers: Int...) -> Double {
    var sum = 0, countOfNumbers = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = sum / countOfNumbers
    return result
}
I get the following error: Could not find an overload for '/' that accepts the supplied arguments
The OP seems to know how the code has to look, but he is explicitly asking why it is not working the other way.
So, "explicit" is part of the answer he is looking for. Apple writes in the Language Guide, in the chapter "The Basics" -> "Integer and Floating-Point Conversion":
"Conversions between integer and floating-point numeric types must be made explicit."
You just need to do this:
func averageOf(numbers: Int...) -> Double {
    var sum = 0, countOfNumbers = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = Double(sum) / Double(countOfNumbers)
    return result
}
You are assigning the output of / to a variable of type Double, so Swift thinks you want to call this function:
func /(lhs: Double, rhs: Double) -> Double
But the arguments you're passing it are not Doubles and Swift doesn't do implicit casting.
This may be helpful:
func averageOf(numbers: Int...) -> Double {
    var sum = 0, countOfNumbers = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = Double(sum) / Double(countOfNumbers)
    return result
}
Or, overloading the / operator is also an option; in Swift 4.x that would look like this:
infix operator /: MultiplicationPrecedence

public func /<T: FixedWidthInteger>(lhs: T, rhs: T) -> Double {
    return Double(lhs) / Double(rhs)
}
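With that overload in scope, an integer division whose result is used as a Double picks it up; a small sketch (without the overload, the last line would not compile):

let sum = 651
let count = 3
let average: Double = sum / count // 217.0, resolved to the FixedWidthInteger overload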
I don't see a necessity for a forced division; the normal division operator works, though.
In the following code,
func average(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    var average: Float = 0
    average = (Float(sum) / Float(numbers.count))
    return average
}
let averageResult = average(20,10,30)
averageResult
Here, two Float values are divided (after explicit conversion, of course), as I am storing the result in a Float variable and returning the same.
Note: I have not used an extra variable to count the number of parameters.
"numbers" is treated as an array, since a variadic parameter in Swift collects its arguments into an array.
"numbers.count" (similar to Objective-C) returns the number of parameters being passed.
Try this, but notice that Swift doesn't like dividing by integers that are initialized to zero or could become zero, so you must use &/ to force the division. This code is a little verbose, but it is easy to understand, and it gives the correct answer as an integer rather than a floating-point or double value.
func sumOf(numbers: Int...) -> Int {
    var sum = 0
    var i = 0
    var avg = 1
    for number in numbers {
        sum += number
        i += 1
    }
    avg = sum &/ i
    return avg
}
sumOf()
sumOf(42, 597, 12)
There's no reason to manually keep track of the number of arguments when you can just get it directly.
func sumOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    let average = sum &/ numbers.count
    return average
}
sumOf()
sumOf(42, 597, 12)
Way late to the party, but the reason is that when dividing two Ints in Swift, the result is always an Int.
This is done by truncating the value after the decimal point (i.e. 5 / 2 = 2; the mathematically exact result would be 2.5).
To get the true average (the non-truncated value) you need to convert to Double, so that the value after the decimal point is retained. Otherwise, it is lost.
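A minimal Playground illustration of that truncation:

let intResult = 5 / 2                    // 2: integer division truncates
let doubleResult = Double(5) / Double(2) // 2.5: the fractional part is kept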