Why can't I divide integers in Swift?

In the Swift "Tour" documentation, there's an exercise where you build on the following function to average a set of numbers:
func sumOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum
}
I can make this work using something like the following:
func averageOf(numbers: Double...) -> Double {
    var sum: Double = 0, countOfNumbers: Double = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = sum / countOfNumbers
    return result
}
My question is, why do I have to cast everything as a Double to make it work? If I try to work with integers, like so:
func averageOf(numbers: Int...) -> Double {
    var sum = 0, countOfNumbers = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = sum / countOfNumbers
    return result
}
I get the following error: Could not find an overload for '/' that accepts the supplied arguments

The OP seems to know what the code has to look like, but he is explicitly asking why it does not work the other way.
So "explicitly" is part of the answer he is looking for: Apple writes in the Language Guide, in the chapter "The Basics", under "Integer and Floating-Point Conversion":
Conversions between integer and floating-point numeric types must be
made explicit

You just need to do this:
func averageOf(numbers: Int...) -> Double {
    var sum = 0, countOfNumbers = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = Double(sum) / Double(countOfNumbers)
    return result
}

You are assigning the output of / to a variable of type Double, so Swift thinks you want to call this function:
func /(lhs: Double, rhs: Double) -> Double
But the arguments you're passing it are not Doubles, and Swift doesn't do implicit conversion.
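A minimal sketch of the overload resolution at play (the names sum and count here are chosen only for illustration):
let sum = 10
let count = 4

// With two Int operands, the Int overload of '/' is chosen; the result is 2, an Int.
let truncated = sum / count

// Converting both operands selects func /(lhs: Double, rhs: Double) -> Double instead.
let precise: Double = Double(sum) / Double(count) // 2.5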

This may be helpful:
func averageOf(numbers: Int...) -> Double {
    var sum = 0, countOfNumbers = 0
    for number in numbers {
        sum += number
        countOfNumbers++
    }
    var result: Double = Double(sum) / Double(countOfNumbers)
    return result
}
OR
Overloading the / operator can also be a solution; in Swift 4.x that would look like:
infix operator /: MultiplicationPrecedence

public func /<T: FixedWidthInteger>(lhs: T, rhs: T) -> Double {
    return Double(lhs) / Double(rhs)
}

I don't see a need for forced division; the normal division operator works.
In the following code,
func average(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    var average: Float = 0
    average = (Float(sum) / Float(numbers.count))
    return average
}
let averageResult = average(20, 10, 30)
averageResult
Here, two Float values are divided, after type conversion of course, as I am storing the result in a Float variable and returning it.
Note: I have not used an extra variable to count the number of parameters.
"numbers" is treated as an array, since Swift collects a function's variadic arguments into an array.
"numbers.count" (similar to Objective-C) returns the count of the parameters being passed.

Try this, but note that Swift traps when an integer division's divisor is zero (which happens here when the function is called with no arguments), so this answer uses the early-Swift masking division operator &/, which returns 0 instead of trapping (it was removed in later Swift versions). This code is a little verbose, but it is easy to understand, and it gives the answer as an Int rather than a Float or Double.
func sumOf(numbers: Int...) -> Int {
    var sum = 0
    var i = 0
    var avg = 1
    for number in numbers {
        sum += number
        i += 1
    }
    avg = sum &/ i
    return avg
}
sumOf()
sumOf(42, 597, 12)

There's no reason to manually keep track of the number of arguments when you can just get it directly.
func sumOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    let average = sum &/ numbers.count
    return average
}
sumOf()
sumOf(42, 597, 12)

Way late to the party, but the reason is that when you divide two Ints in Swift, the result is always an Int.
The compiler does this by truncating the value after the decimal point (e.g. 5 / 2 = 2, whereas the true result is 2.5).
To get the true average (the non-truncated value) you need to convert to Double, so that the value after the decimal point is retained. Otherwise it is lost.
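A short example of the difference (values chosen here for illustration):
let a = 5
let b = 2
let truncated = a / b             // 2   - integer division drops the fractional part
let exact = Double(a) / Double(b) // 2.5 - converting first keeps it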

Related

Create an Array of square numbers with a variadic integer as an input in Swift

I'm new to this and maybe this is a very bad approach, so please let me know.
I want to write a function that takes a variadic number of integers as input and returns the squares of those integers in an array. But I ended up having to initialize the return array with a bogus integer, so the result starts with a 0. How do I eliminate this 0 from the returned array, and/or what is the correct solution to this problem?
Thanks.
func square(numbers: Int...) -> [Int] {
    var x = 0
    var y = [0]
    var counter = 0
    for i in numbers {
        repeat {
            x = i * i
            counter += 1
            y.append(x)
        } while counter == numbers.count
    }
    return y
}
You can simply use map(_:) on numbers to return the squares, i.e.
func square(numbers: Int...) -> [Int] {
    numbers.map { $0 * $0 }
}
There is no need to write the boilerplate code for what Swift has already provided.
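For example, a quick check of the map-based version above (current Swift requires the numbers: argument label at the call site):
square(numbers: 1, 2, 3) // [1, 4, 9]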

How to multiply a float by a bool value of 0 or 1?

My code below gives a breakpoint error:
func doScore(num: Float, binaural: Bool, noise: Bool) -> Float {
    if 50 ... 100 ~= num {
        let numDoubled = num + (Float(noise.intValue()!) * weighting) // <--- this is where I get my error
        return numDoubled.rounded()
    }
All I want to do is multiply the number I pass into the function by the value of binaural or noise, which are Boolean values. To do this I am getting the Int value; however, I need it as a Float of 0 or 1, since the number I am passing in is a Float. Why would this cause a crash? Thanks.
By doing a simple workaround, you can fix it easily
let numDoubled = num + (noise ? weighting : 0.0)
Converting a Bool into an Int is covered elsewhere, but with this solution there is no need to do the work twice (convert to Int and then convert again to Float).
Updated as per vacawama's comment.
Updated answer from your comment:
let numDoubled = num + ((noise && binaural) ? weighting : 0.0)
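Putting it together, a minimal sketch of the whole function with the ternary approach (weighting is assumed to be a Float defined elsewhere, as in the question; the else branch is invented here so every path returns):
let weighting: Float = 1.5 // assumed value; the question doesn't show what weighting is

func doScore(num: Float, binaural: Bool, noise: Bool) -> Float {
    if 50...100 ~= num {
        // add the weighting only when both flags are true; no Bool-to-Int conversion needed
        let numDoubled = num + ((noise && binaural) ? weighting : 0)
        return numDoubled.rounded()
    }
    return num // fallback for values outside 50...100
}

doScore(num: 72.4, binaural: true, noise: true) // 74.0 with the assumed weighting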
Unlike in (Objective-)C, where nil, 0, and false are treated as the same value depending on the context, Int and Bool are unrelated types in Swift, and a Swift Bool doesn't have an intValue.
Practically, you want to add weighting to num if noise is true.
let numDoubled = noise ? num + weighting : num
If you really need to convert false to 0 and true to 1 write
let boolAsInt = aBool ? 1 : 0
Even though implicit conversion between Bool and numeric types has been removed, you can explicitly convert a Bool to a numeric type by first converting it to an NSNumber.
let numDoubled = num + Float(truncating: NSNumber(booleanLiteral: noise)) * weighting
You can also write Float(truncating: noise) as a shorthand, but that call implicitly converts the Bool to an NSNumber, so for clarity it's better to write out the conversion explicitly.

Swift - rounding numbers

I am trying to round a number in swift, and I found a solution using this:
func roundTo(number: Double, precision: Int) -> Double {
    var power: Double = 1
    for _ in 1...precision {
        power *= 10
    }
    let rounded = Double(round(power * number) / power)
    return rounded
}
I have a model class, let's call it MyObject:
class MyObject {
    var preciseNumber: Double?
}
I am fetching a number, for example:
var myNumber = 10.0123456789
I use my roundTo function to round it so that I have 10.0123456 (7 digits after the decimal point).
When I print a statement:
print("myNumber rounded: \(roundTo(myNumber, precision: 7))") // 10.0123456 as a result. Great!
Then I want to assign the rounded myNumber to my class variable preciseNumber, so:
let roundedNumber = roundTo(myNumber, precision: 7)
print("Rounded number is: \(roundedNumber)") // 10.01234567 as a result
self.preciseNumber = roundedNumber
print("Precise number is now: \(self.preciseNumber)") // 10.01234599999997 as a result
What might be causing this? I want to be as precise as possible.
So it sounds like your issue is really about comparing floating-point numbers. The best way to handle this is to decide on the degree of precision you need: rather than just checking numOne == numTwo, use something like abs(numOne - numTwo) <= 0.000001.
You can create a Swift operator to handle this for you pretty easily:
// `===` is just used as an example
func === (one: Double, two: Double) -> Bool {
    return abs(one - two) <= 0.000001
}
Then you can just check numOne === numTwo and it will use a better floating point equality check.
There is also a power function that will help simplify your rounding function:
let power = pow(10.0, Double(precision))
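Put together, a minimal sketch of the rounding helper using pow (essentially the same behaviour as the loop version above; pow comes from Foundation):
import Foundation

func roundTo(number: Double, precision: Int) -> Double {
    let power = pow(10.0, Double(precision))
    return (number * power).rounded() / power
}

roundTo(number: 10.0123456789, precision: 7) // ≈ 10.0123457; the result is still a binary Double, so it may print with extra trailing digits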

binary operator * cannot be applied to operands of type Int and Double

I'm trying to build a simple Swift app to calculate VAT (Value Added Tax = 20%).
func taxesFree(number: Int) -> Double {
    var textfield = self.inputTextField.text.toInt()!
    let VAT = 0.2
    var result = textfield * VAT
    return result
}
For some reason I keep getting
Binary operator * cannot be applied to operands of type Int and Double
on the line
var result = textfield * VAT
You should convert one type to the other so that both operands have the same type:
var result: Double = Double(textfield) * VAT
It's because you're trying to multiply an Int (textfield) with a Double (VAT). Because such a conversion could lose precision, Swift doesn't do it implicitly, so you need to explicitly convert the Int to a Double...
var result = Double(textfield) * VAT
The problem here is that the error message is literally true: Swift is strongly typed and doesn't coerce implicitly. I just had a similar case myself with "binary operator '-' cannot be applied to operands of type 'Date' and 'Int'".
If you write:
var result = 10 * 0.2
...that's fine, but if you write:
var number = 10
var result = number * 0.2
...that's not fine. This is because untyped literal values have an appropriate type selected by the compiler, so in fact the first line is taken as being var result = Double(10) * Double(0.2). After all, as a human being you might mean 10 to be floating-point or an integer - you normally wouldn't say which and would expect that to be clear from context. It might be a bit of a pain, but the idea of strong typing is that after the code is parsed it can have only one valid compiled expression.
In general you would build a new value using the initializer, so var result = Double(textfield) * VAT in your case. This is different from casting (textfield as Double), because Int is not a subclass of Double; what you are doing instead is asking for a completely new Double value to be built at runtime, losing some accuracy if the value is very high or low.
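A tiny example of that distinction (the variable names are only for illustration):
let units = 3                  // Int
// let wrong = units as Double // error: an 'as' cast can't turn an Int into a Double
let converted = Double(units)  // 3.0 - a brand-new Double value built at runtime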
In your specific case, it wasn't valuable to have an Int in the first place (even if no fraction part is possible) so what you needed was:
func taxesFree(number: Int) -> Double {
    var textfield = Double(self.inputTextField.text)!
    let VAT = 0.2
    var result = textfield * VAT
    return result
}
In my case it was just casting to CGFloat:
self.cnsMainFaqsViewHight.constant = CGFloat(self.mainFaqs.count) * 44.0
You can convert it like this:
var result: Double = Double(textfield)
I was misunderstanding the Closed Range Operator in Swift.
You should not wrap the range in an array: [0...10]
for i in [0...10] {
    // error: binary operator '+' cannot be applied to operands of type 'CountableClosedRange<Int>' and 'Int'
    let i = i + 1
}

for i in 0...10 {
    // ok!
    let i = i + 1
}
The range is a collection that can itself be iterated. No need to wrap it in an array, as perhaps you would have in Objective-C.
0...3 -> [0, 1, 2, 3]
[0...3] -> [[0, 1, 2, 3]]
Once you realize your object is a nested collection, rather than an array of Ints, it's easy to see why you cannot use numeric operators on the object.
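A short sketch of why the wrapped version fails (using reduce just to make the element types visible):
let range = 0...3     // ClosedRange<Int> - the elements are Ints
let wrapped = [0...3] // [ClosedRange<Int>] - a single element, which is itself a range

range.reduce(0, +)      // 6: summing Ints is fine
// wrapped.reduce(0, +) // error: the elements are ranges, and '+' can't add an Int and a range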
This worked for me when I got the same error message in Playground:
func getMilk(howManyCartons: Int) {
    print("Buy \(howManyCartons) cartons of milk")
    let priceToPay: Float = Float(howManyCartons) * 2.35
    print("Pay $\(priceToPay)")
}
getMilk(howManyCartons: 2)

Incomprehensible errors in Swift

This works:
func averageOf(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return Float(sum) / Float(numbers.count)
}
averageOf() // (not a number)
averageOf(42, 597, 12) // (217.0)
But this doesn't:
func averageOf(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return Float(sum / numbers.count)
}
averageOf()
averageOf(42, 597, 12)
It gives me this error on the } line:
Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
I stumbled upon another question with the same first and second snippets of code and the author apparently doesn't get the same errors.
Let's remove that cast:
func averageOf(numbers: Int...) -> Float {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum / numbers.count
}
averageOf()
averageOf(42, 597, 12)
It gives me this error, on the division sign:
Cannot invoke '/' with an argument list of type '(#lvalue Int, Int)'
If I then change the return type of the function to Int:
func averageOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum / numbers.count
}
averageOf()
averageOf(42, 597, 12)
I get the same EXC_BAD_INSTRUCTION error.
If I cast only numbers.count:
func averageOf(numbers: Int...) -> Int {
    var sum = 0
    for number in numbers {
        sum += number
    }
    return sum / Float(numbers.count)
}
averageOf()
averageOf(42, 597, 12)
I get this error on the division sign:
Cannot invoke 'init' with an argument list of type '(#lvalue Int, $T5)'
I also get this error if I change the return type back to Float.
All of this makes no sense to me. Is it Xcode going postal, or have I missed something subtle?
As already explained, the problem in the second example is due to invoking averageOf() without arguments, which results in a division by zero. But then why does the first averageOf() work, again without arguments? Let me add a few more details.
In the first case you reported, you get no error and averageOf() works because you are converting the two Int operands to Float before the division.
In the world of floating-point numbers, division by zero is well defined: 0.0 / 0.0 produces "not a number" (NaN) rather than an error. If you try 0.0 / 0.0 in a Playground, you won't get an error; the output will simply be "not a number".
In the second case, however, you're trying to divide 0 by 0 before converting to Float. Therefore, we are still in the realm of Int numbers when the division is performed, and integer division by zero traps, so the result is an error. If you try 0 / 0 in a Playground, you'll get an error.
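A quick illustration of the difference (shown with variables, because a literal 0 / 0 is already rejected at compile time):
let floatZero: Float = 0
let nanResult = floatZero / floatZero // "not a number" (NaN), no runtime error

let intZero = 0
// let crash = intZero / intZero      // error: integer division by zero (a runtime trap, or flagged by the compiler when it can see the zero)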
All the other cases not explained by the Int vs Float behavior are due to the fact that Swift requires you to explicitly convert between types, even when another language would convert the operands implicitly.
This error occurs because of your function call averageOf(). If you pass no values to the variadic parameter numbers, it creates an empty array, so its count property returns 0, and you can't divide by 0.
This is also the reason why it says BAD_INSTRUCTION.
If you remove averageOf() from your second code example, it works.
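If you want the Float-returning version to be safe to call with no arguments, one option (a sketch, not from the original answers) is to guard against the empty case explicitly:
func averageOf(numbers: Int...) -> Float {
    guard !numbers.isEmpty else { return Float.nan } // or return 0, or make the result optional
    var sum = 0
    for number in numbers {
        sum += number
    }
    return Float(sum) / Float(numbers.count)
}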