Division returns something different in functions - Swift

Paste the following code into a playground:
5.0 / 100
func test(anything: Float) -> Float {
    return anything / 100
}
test(5.0)
The first line returns 0.05, as expected. The function test returns 0.0500000007450581. Why?

It has nothing to do with functions. Your first example uses the type Double, which represents floating-point numbers more precisely by using 64 bits. If you were to change your second example to:
func test(anything: Double) -> Double {
    return anything / 100
}
test(5.0)
you would get the result you expect. Float uses only 32 bits of data, so it provides a less precise representation of the number. Also, floating-point numbers are stored as binary values and are frequently only an approximation of the base-10 representation. That is why 0.05 shows up as 0.0500000007450581 when stored as a Float.
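You can see both approximations in a playground by asking for more digits than the default description shows. A minimal sketch; the exact trailing digits come from the IEEE 754 encodings, not from Swift:

import Foundation

let f: Float = 0.05
let d: Double = 0.05
// Neither type stores 0.05 exactly; Double just gets much closer.
print(String(format: "%.17f", f)) // 0.05000000074505806
print(String(format: "%.17f", d)) // 0.05000000000000000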

Why does Int(Float(Int.max)) give me an error?

I observed something really strange. If you run this code in Swift:
Int(Float(Int.max))
It crashes with the error message:
fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
This is really counter-intuitive, so I expanded the expression into 3 lines and tried to see what happens in each step in a playground:
let a = Int.max
let b = Float(a)
let c = Int(b)
It crashes with the same message. This time, I see that a is 9223372036854775807 and b is 9.223372e+18. It looks as if a is greater than b by 36854775807. I also understand that floating-point numbers are inaccurate, so I expected something less than Int.max, with the last few digits being 0.
I also tried this with Double; it crashes too.
Then I thought, maybe this is just how floating point numbers behave, so I tested the same thing in Java:
long a = Long.MAX_VALUE;
float b = (float)a;
long c = (long)b;
System.out.println(c);
It prints the expected 9223372036854775807!
What is wrong with Swift?
There aren't enough bits in the mantissa of a Double or Float to accurately represent 19 significant digits, so you are getting a rounded result.
If you print the Float using String(format:) you can see a more accurate representation of the value of the Float:
let a = Int.max
print(a) // 9223372036854775807
let b = Float(a)
print(String(format: "%.1f", b)) // 9223372036854775808.0
So the value represented by the Float is 1 larger than Int.max.
Many values will be converted to the same Float value. The question becomes: how much would you have to reduce Int.max before it results in a different Double or Float value?
Starting with Double:
var y = Int.max
while Double(y) == Double(Int.max) {
    y -= 1
}
print(Int.max - y) // 512
So with Double, the last 512 Ints all convert to the same Double.
Float has fewer bits to represent the value, so there are more values that all map to the same Float. Switching to decrements of 1000 so that it runs in a reasonable time:
var y = Int.max
while Float(y) == Float(Int.max) {
    y -= 1000
}
print(Int.max - y) // 274877907000
So, your expectation that a Float could accurately represent a specific Int was misplaced.
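As a cross-check, Swift 3 and later expose an ulp property (the gap to the next representable value), and the numbers line up with the loop results above:

let d = Double(Int.max) // rounds up to 2^63
print(d.ulp) // 2048.0; the gap just below 2^63 is half that, 1024,
             // and the 512 integers within half of *that* gap round up to 2^63
let f = Float(Int.max)  // also rounds up to 2^63
print(f.ulp) // about 1.0995116e+12, i.e. 2^40; the gap below 2^63 is 2^39,
             // half of which is 2^38 = 274877906944, matching 274877907000
             // to within the step size of 1000 used above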
Follow up question from the comments:
If Float does not have enough bits to represent Int.max, how is it able to represent a number one larger than that?
Floating point numbers are represented as two parts: a mantissa and an exponent. The mantissa represents the significant digits (in binary) and the exponent represents the power of 2. As a result, a floating-point number can exactly express a power of 2 by having a mantissa of 1 with an exponent that represents the power.
Numbers that are not exact powers of 2 may have a binary pattern that contains more digits than can be represented in the mantissa. This is the case for Int.max (which is 2^63 - 1), because in binary that is 111111111111111111111111111111111111111111111111111111111111111 (63 1's). A 32-bit Float has only 24 bits of mantissa (23 stored, plus an implicit leading 1), so it cannot hold 63 significant bits; the value has to be rounded or truncated. In the case of Int.max, rounding up by 1 results in the value
1000000000000000000000000000000000000000000000000000000000000000. Starting from the left, there is only 1 significant bit to be represented by the mantissa (the trailing 0's come for free), so this number is a mantissa of 1 and an exponent of 63 (that is, 1.0 × 2^63).
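You can inspect this decomposition directly. A short sketch, assuming Swift 3 or later (which added the exponent and significand properties):

let b = Float(Int.max)
print(b.exponent)    // 63
print(b.significand) // 1.0
print(b == Float(sign: .plus, exponent: 63, significand: 1.0)) // true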
See @MartinR's answer for an explanation of what Java is doing.
Swift and Java behave differently when converting a "too large" floating point
number to an integer. Java clamps any floating point value
larger than Long.MAX_VALUE = 2^63-1 to Long.MAX_VALUE:
long c = (long)(1.0E+30f);
System.out.println(c);
// 9223372036854775807
Swift expects the value to be in the range of Int, and aborts
with a runtime error otherwise:
/// Creates a new instance by rounding the given floating-point value toward
/// zero.
///
/// - Parameter other: A floating-point value. When `other` is rounded toward
/// zero, the result must be within the range `Int.min...Int.max`.
public init(_ value: Float)
Example:
let c = Int(Float(1.0E30))
print(c)
// fatal error: Float value cannot be converted to Int because the result would be greater than Int.max
The same happens with your value Float(Int.max), which is the
floating point representable value closest to Int.max and happens
to be larger than Int.max.
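If you want a conversion that fails gracefully instead of trapping, later Swift versions (Swift 4 and up, if I recall correctly) provide a failable initializer. A minimal sketch:

let b = Float(Int.max) // 2^63, which is one more than Int.max
if let c = Int(exactly: b) {
    print(c)
} else {
    print("not representable as Int") // this branch runs, since 2^63 > Int.max
}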

Accountant rounding in Swift

I'm not sure how to round numbers in the following manner in Swift:
6.51, 6.52, 6.53, 6.54 should be rounded down to 6.50
6.56, 6.57, 6.58, 6.59 should be rounded down to 6.55
I have already tried
func roundDown(number: Double, toNearest: Double) -> Double {
    return floor(number / toNearest) * toNearest
}
with no success. Any thoughts?
Here's your problem (and it has nothing to do with Swift whatsoever): floating-point arithmetic is not exact. Let's say you try to divide 6.55 by 0.05 and expect a result of 131.0. In reality, 6.55 is "some number close to 6.55" and 0.05 is "some number close to 0.05", so the result you get is "some number close to 131.0". That result is likely just a tiny bit smaller than 131.0, maybe 130.999999999999, and floor() returns 130.0.
What you do: decide on the smallest value that you still want to round up. For example, you'd want 130.999999999999 to give a result of 131.0, and you'd probably want 130.9999 to give 131.0 as well. So add a small epsilon to the quotient before flooring:
return floor(number / toNearest + 0.0001) * toNearest
This will round 6.549998 up to 6.55, so check whether you are OK with that. Also, floor() works in a possibly unexpected way for negative input, so -6.57 would be rounded down to -6.60, which is likely not what you want.
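Putting that together with the original function signature (a sketch; Swift 3 call syntax, and the 0.0001 epsilon is an arbitrary choice you should tune to the precision of your data):

import Foundation

func roundDown(number: Double, toNearest: Double) -> Double {
    // Nudge the quotient up slightly so a value stored just below a
    // multiple (6.55 is really 6.5499999...) still floors to that multiple.
    return floor(number / toNearest + 0.0001) * toNearest
}

print(String(format: "%.2f", roundDown(number: 6.53, toNearest: 0.05))) // 6.50
print(String(format: "%.2f", roundDown(number: 6.57, toNearest: 0.05))) // 6.55
print(String(format: "%.2f", roundDown(number: 6.55, toNearest: 0.05))) // 6.55, not 6.50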

Using arc4random_uniform to return both whole and non-whole doubles

Using Swift, I am trying to figure out how to use arc4random_uniform to return a number like 37.7. The guidance I must abide by is that it must be done in a function and the random double must be between 0 and 300. I have been able to build a function that randomly returns whole doubles in that range, but I can't find anything that leads me to outputting random non-whole numbers.
// function to randomly generate a double number like 105.3
func makeRandDbl() -> Double {
    let randGenerator: Double = Double(arc4random_uniform(301))
    print(randGenerator)
    return randGenerator
}
makeRandDbl()
To generate a Double in the range 0.0 to 300.0 (with one digit after the decimal point):
Double(arc4random_uniform(3001))/10.0
You can extend this to more decimal places. For two decimal places (0.00 to 300.00):
Double(arc4random_uniform(30001))/100.0
For three decimal places (0.000 to 300.000):
Double(arc4random_uniform(300001))/1000.0
This has the advantage of being able to actually generate whole values. In the first case 10% of the numbers will be whole. In the second case 1% of the numbers will be whole. And in the third, 0.1% of the numbers will be whole.
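Wrapped in the function shape the question asks for, a sketch of the same idea with one decimal place:

import Foundation // for arc4random_uniform on Apple platforms

func makeRandDbl() -> Double {
    // 0...3000 tenths, scaled down to 0.0...300.0 in steps of 0.1
    return Double(arc4random_uniform(3001)) / 10.0
}
print(makeRandDbl()) // e.g. 37.7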
This is your function, I believe:
extension Double {
    /// Generates a random `Double` within `0.0...1.0`
    public static func random() -> Double {
        return random(0.0...1.0)
    }

    /// Generates a random `Double` inside of the closed interval.
    public static func random(interval: ClosedInterval<Double>) -> Double {
        return interval.start + (interval.end - interval.start) * (Double(arc4random()) / Double(UInt32.max))
    }
}
Usage example:
Double.random(0...300)
It is taken from the RandomKit library, which looks very useful for various purposes.
One approach would be to convert the result of arc4random_uniform to Double, divide it by UInt32.max, and then multiply the result by 300.
let rand = 300 * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
This would produce a value in the range 0 (inclusive) to 300 (exclusive), since arc4random_uniform(UInt32.max) returns values from 0 through UInt32.max - 1. The number of possible values you can get is UInt32.max.
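For what it's worth, if you can target Swift 4.2 or later, the standard library has this built in and you can skip arc4random entirely:

// Uniform random Double in a closed range (Swift 4.2+)
let r = Double.random(in: 0...300)
// Or, mirroring the fixed-decimal approach above (one decimal place):
let tenths = Double(Int.random(in: 0...3000)) / 10.0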

Swift: Double conversion inconsistency. How to correctly compare Doubles?

I have a very simple function to convert temperature from ˚K to ˚C.
func convertKelvinToCelsius(temp: Double) -> Double {
    return temp - 273.15
}
And I have a unit test to drive this function. This is where the problem is:
func testKelvinToCelsius() {
    let check1 = conv.convertKelvinToCelsius(200.00) // -73.149999999999977
    let check2 = 200.00 - 273.15 // -73.149999999999977
    let check3 = Double(-73.15) // -73.150000000000006
    // Passes
    XCTAssert(conv.convertKelvinToCelsius(200.00).description == Double(-73.15).description, "Should convert from kelvin to celsius")
    // Fails
    XCTAssert(conv.convertKelvinToCelsius(200.00) == Double(-73.15), "Should convert from kelvin to celsius")
}
When you add a breakpoint and check the values of check1, check2 and check3, they are very interesting:
check1 Double -73.149999999999977
check2 Double -73.149999999999977
check3 Double -73.150000000000006
Questions:
1. Why does Swift return different values for check1/check2 and check3?
2. How can I get the second test to pass? Writing it the way I did in the first assert smells. Why should I have to convert Doubles to Strings to be able to compare them?
3. Finally, when I println check1, check2 and check3, they all print as -73.15. Why? Why not print accurately and avoid confusing the programmer!?
To Reproduce:
Just type 200 - 273.15 == -73.15 in your playground and watch it evaluate to false!!
This is expected behavior for floating-point values. Most of them cannot be represented with 100% accuracy.
You can use the XCTAssertEqualWithAccuracy function to assert that two floating-point values are within a given range of each other.
The reason println prints the same value for all three is that it internally rounds them to two decimals (I assume).
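For example, the failing assertion could be rewritten along these lines (using the older spelling of the API; newer Xcode releases rename it to XCTAssertEqual(_:_:accuracy:)):

XCTAssertEqualWithAccuracy(conv.convertKelvinToCelsius(200.00), -73.15, accuracy: 1e-9)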
This is not a Swift-specific issue; it comes from how decimal numbers are represented in computers and the precision of that representation. You will need to work with DBL_EPSILON.
Swift, like most languages, uses binary floating point numbers.
With binary floating point numbers, some numbers can be represented exactly, but most can't. What can be represented exactly are integers unless they are very large (for example, 100000000000000.0 is fine), and such integers multiplied or divided by powers of two (7.375 is fine, it is 59.0 / 8, but 7.3 isn't).
Every floating point operation gives you the exact result, rounded to the nearest floating-point number. So you get
200.0 -> Exactly 200
273.15 -> A number very close to 273.15
200 - 273.15 -> A number very close to -73.15
-73.15 -> A number very close to -73.15
If you compare two numbers that are both very, very close to -73.15, they are not necessarily equal. That's not a problem with the == operator; it correctly determines whether the two values are equal or not. The problem is that the two values can actually be different.
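Outside of unit tests, the usual fix is to compare with an explicit tolerance instead of ==. A minimal sketch; the default tolerance of 1e-9 is an arbitrary choice here, and for values of widely varying magnitude you would scale it (for example using .ulp or DBL_EPSILON) rather than hardcode it:

func almostEqual(_ a: Double, _ b: Double, tolerance: Double = 1e-9) -> Bool {
    return abs(a - b) <= tolerance
}

almostEqual(200.0 - 273.15, -73.15) // true
(200.0 - 273.15) == -73.15          // false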