Why are doubles printed differently in dictionaries? - swift

let dic : [Double : Double] = [1.1 : 2.3, 2.3 : 1.1, 1.2 : 2.3]
print(dic)// [2.2999999999999998: 1.1000000000000001, 1.2: 2.2999999999999998, 1.1000000000000001: 2.2999999999999998]
let double : Double = 2.3
let anotherdouble : Double = 1.1
print(double) // 2.3
print(anotherdouble) // 1.1
I don't get why the values from the dictionary are printed differently.
I'm on Swift 3, Xcode 8. Is this a bug, or some weird way of optimizing stuff, or something?
EDIT
What's even weirder is that:
Some values go over, some go below, and some stay as they are! 1.1 is less than 1.1000000000000001, while 2.3 is more than 2.2999999999999998, and 1.2 is just 1.2.

As already mentioned in the comments, a Double cannot store
the value 1.1 exactly. Swift uses (like many other languages)
binary floating point numbers according to the IEEE 754
standard.
The closest number to 1.1 that can be represented as a Double is
1.100000000000000088817841970012523233890533447265625
and the closest number to 2.3 that can be represented as a Double is
2.29999999999999982236431605997495353221893310546875
Printing that number means that it is converted to a string with
a decimal representation again, and that is done with different
precision, depending on how you print the number.
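You can even ask for far more digits than a Double holds and see the stored value directly, for example from a playground (a quick sketch; the output should match the exact values quoted above, padded with trailing zeros):
import Foundation
print(String(format: "%.55f", 1.1)) // the exact stored value of 1.1, plus padding zeros
print(String(format: "%.55f", 2.3)) // likewise for 2.3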
From the source code at HashedCollections.swift.gyb one can see that the description method of
Dictionary uses debugPrint() for both keys and values,
and debugPrint(x) prints the value of x.debugDescription
(if x conforms to CustomDebugStringConvertible).
On the other hand, print(x) calls x.description if x conforms
to CustomStringConvertible.
So what you see is the different output of description
and debugDescription of Double:
print(1.1.description) // 1.1
print(1.1.debugDescription) // 1.1000000000000001
From the Swift source code one can see
that both use the swift_floatingPointToString()
function in Stubs.cpp, with the Debug parameter set to false and true, respectively.
This parameter controls the precision of the number to string conversion:
int Precision = std::numeric_limits<T>::digits10;
if (Debug) {
    Precision = std::numeric_limits<T>::max_digits10;
}
For the meaning of those constants, see std::numeric_limits:
digits10 – number of decimal digits that can be represented without change,
max_digits10 – number of decimal digits necessary to differentiate all values of this type.
So description creates a string with less decimal digits. That
string can be converted to a Double and back to a string giving
the same result.
debugDescription creates a string with more decimal digits, so that
any two different floating point values will produce a different output.
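The same distinction can be reproduced with printf-style format strings, independent of the Swift version (a small sketch: %.15g plays the role of digits10 and %.17g the role of max_digits10; nextDown is the adjacent smaller Double):
import Foundation
let a = 2.3
let b = a.nextDown // the neighbouring, slightly smaller Double
// 15 significant digits cannot tell the two values apart ...
print(String(format: "%.15g", a)) // 2.3
print(String(format: "%.15g", b)) // 2.3
// ... but 17 significant digits always can:
print(String(format: "%.17g", a)) // 2.2999999999999998
print(String(format: "%.17g", b)) // 2.2999999999999994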

Yes, Swift uses binary floating point numbers when storing them in a dictionary.
One workaround is to declare the dictionary values as Any (the example below uses [String: Any]), use Float if your number fits in 32 bits, and then upcast to AnyObject.
See the example below:
let strDecimalNumber = "8.37"
var myDictionary : [String: Any] = [:]
myDictionary["key1"] = Float(strDecimalNumber) as AnyObject // 8.369999999999999
myDictionary["key2"] = Double(strDecimalNumber) as AnyObject //8.369999999999999
myDictionary["key3"] = Double(8.37) as AnyObject //8.369999999999999
myDictionary["key4"] = Float(8.37) as AnyObject //8.37
myDictionary["key5"] = 8.37 // 8.3699999999999992
myDictionary["key6"] = strDecimalNumber // "8.37" it is String
myDictionary["key7"] = strDecimalNumber.description // "8.37" it is String
myDictionary["key8"] = Float(10000000.01) // 10000000.0
myDictionary["key9"] = Float(100000000.01) // 100000000.0
myDictionary["key10"] = Float(1000000000.01) // 1e+09
myDictionary["key11"] = Double(1000000000.01) // 1000000000.01
print(myDictionary)
myDictionary will be printed as
["key1": 8.37 , "key2": 8.369999999999999, "key3": 8.369999999999999,
"key4": 8.37, "key5": 8.3699999999999992, "key6": "8.37", "key7": "8.37" , "key8":
10000000.0, "key9": 100000000.0, "key10": 1e+09 ,"key11": 1000000000.01]
As mentioned by Martin R in the answer above, using .description produces a String, not an actual Float.
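If the goal is simply readable output, another option (not from either answer above) is to format the dictionary entries yourself when printing, since string interpolation uses description rather than debugDescription:
let dic: [Double: Double] = [1.1: 2.3, 2.3: 1.1, 1.2: 2.3]
let pretty = dic.map { "\($0.key): \($0.value)" }.joined(separator: ", ")
print("[\(pretty)]") // e.g. [2.3: 1.1, 1.2: 2.3, 1.1: 2.3] (entry order is not guaranteed)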

Related

String value converted to double automatically change to decimal value in Swift

Working with Swift.
I am converting a value from String to Double.
let value: String = "8"
var size: Double
size = Double(value)! // size is now 8.0
The result should be 8 unless the string value is "8.0".
Double only stores a numeric magnitude. "8" and "8.0" have the same magnitude, so they get represented by the same Double value. Whether you show trailing 0s or not is a consequence of how you choose to format and present your values.
print does it one way for debugging, but NumberFormatter gives you more control to format numbers for real, non-debugging purposes.
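For example, a small sketch of how NumberFormatter can drop the trailing .0 while still allowing decimals when they are present (the digit limits here are assumptions you would tune for your app):
import Foundation
let size = Double("8")! // the magnitude eight; prints as 8.0
let formatter = NumberFormatter()
formatter.minimumFractionDigits = 0 // no forced trailing zeros
formatter.maximumFractionDigits = 2 // but keep up to two decimals when present
print(formatter.string(for: size)!) // 8
print(formatter.string(for: 8.25)!) // 8.25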

String Format Specifiers : rounding rule used for decimal values

I am using String(format:) to convert a Float. I thought the number would be rounded.
Sometimes it is.
String(format: "%.02f", 1.455) //"1.46"
Sometimes not.
String(format: "%.02f", 1.555) //"1.55"
String(round(1.555 * 100) / 100.0) //"1.56"
I guess 1.555 cannot be represented exactly in binary, and that it becomes something like 1.5549999XXXX.
But NumberFormatter doesn't seem to cause the same problem... Why? Should it be preferred over String(format:)?
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
if let string = formatter.string(for: 1.555) {
    print(string) // 1.56
}
References to the problem (using String(format:) to round a decimal number) can be found in the answers (or, more often, the comments) to these questions: Rounding a double value to x number of decimal places in swift and How to format a Double into Currency - Swift 3. But the underlying problem (math with FloatingPoint) has been dealt with many times on SO (for all languages).
String(format:) does not have the function of rounding a decimal number (even if it is unfortunately proposed in some answers) but of formatting it (as its name suggests). This formatting sometimes causes rounding, that is true. But we have to keep in mind that the number 1.555 is... not actually 1.555.
In Swift, Double and Float, which conform to the FloatingPoint protocol, follow the IEEE 754 specification. However, some values cannot be represented exactly in the IEEE 754 formats.
"In the same way that you can't represent a third exactly in a (finite) decimal expansion, there are lots of numbers which look simple in decimal, but which have long or infinite expansions in a binary expansion." (source)
To be convinced of this, we can use The Float Converter to convert between the decimal representation of numbers (like "1.02") and the binary format used by all modern CPUs (IEEE 754 floating point). For 1.555, the value actually stored in a Float is 1.55499994754791259765625.
So the problem does not come from String(format:). For example, if we try another way to round to the thousandth, we find the same problem:
round(8.45 * pow(10.0, 3.0)) / pow(10.0, 3.0)
// 8.449999999999999
That is how it is: "Binary floating point arithmetic is fine so long as you know what's going on and don't expect values to be exactly the decimal ones you put in your program".
So the real question is: is this really a problem for your use case? It depends on the app. Generally, if we convert a number into a String with limited precision (by rounding), it is because we consider that extra precision not useful to the user. If this is the kind of data we're talking about, then it's fine to use a FloatingPoint type.
However, to format it, it may be more relevant to use a NumberFormatter. Not necessarily for its rounding algorithm, but rather because it lets you localize the format:
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
formatter.locale = Locale(identifier: "fr_FR")
formatter.string(for: 1.55)!
// 1,55
formatter.locale = Locale(identifier: "en_US")
formatter.string(for: 1.55)!
// 1.55
Conversely, if we are in a case where precision matters, we must abandon Double/Float and use Decimal. To stay with our rounding example, we can use this extension (which may be the best answer to the question "Rounding a double value to x number of decimal places in swift"):
extension Double {
    func roundedDecimal(to scale: Int = 0, mode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        var decimalValue = Decimal(self)
        var result = Decimal()
        NSDecimalRound(&result, &decimalValue, scale, mode)
        return result
    }
}
1.555.roundedDecimal(to: 2)
// 1.56

String encoding returns wrong values. 33.48 becomes 33.47999999999999488 [duplicate]

This question already has an answer here:
Swift JSONEncoder number rounding
I'm trying to create a hash of a given object after converting it to a string in Swift, but the values encoded in the string are different.
print(myObjectValues.v) // 33.48
let mydata = try JSONEncoder().encode(myObjectValues)
let string = String(data: mydata, encoding: .utf8)!
print(string) // 33.47999999999999488
Here myObjectValues contains a Decimal value of 33.48. If I encode mydata to a string, the value returned is 33.47999999999999488. I've tried to round the decimal value to 2 places, but the final string keeps showing this number. I've tried to save it as a String and then convert back to Decimal, but the value returned in the encoded string is still this 33.479999999ish.
I can't use the string to calculate and compare the hash, as the hash value returned from the server is for 33.48, which will never be equal to what I get on my end with this long value.
Decimal values created with underlying Double values will always create these issues.
Decimal values created with underlying String values won't create these issues.
What you can try to do is:
Have a private String value as a backing storage that's there just for safely encoding and decoding this decimal value.
Expose another computed Decimal value that uses this underlying String value.
import Foundation
class Test: Codable {
    // Not exposed: used only for encoding & decoding
    private var decimalString: String = "33.48"
    // Work with this in your app
    var decimal: Decimal {
        get {
            Decimal(string: decimalString) ?? .zero
        }
        set {
            decimalString = "\(newValue)"
        }
    }
}
do {
    let encoded = try JSONEncoder().encode(Test())
    print(String(data: encoded, encoding: .utf8))
    // Optional("{\"decimalString\":\"33.48\"}")
    let decoded = try JSONDecoder().decode(Test.self, from: encoded)
    print(decoded.decimal)          // 33.48
    print(decoded.decimal.nextUp)   // 33.49
    print(decoded.decimal.nextDown) // 33.47
} catch {
    print(error)
}
I'm trying to create a hash of a give object after converting it to string in swift but the values encoded returned in string are different.
Don't do this. Just don't. It's not sensible.
I'll explain it by analogy: Imagine if you represented numbers with six decimal digits of precision. You have to use some amount of precision, right?
Now, 1/3 would be represented as 0.333333. But 2/3 would be represented by 0.666667. So now, if you multiply 1/3 by two, you will not get 2/3. And if you add 1/3 to 1/3 to 1/3, you will not get 1.
So the hash of 1 will be different depending on how you got that 1! If you add 1/3 to 1/3 to 1/3, you will get a different hash than if you added 2/3 to 1/3.
This is simply not the right data type to hash. Don't use doubles for this purpose. Rounding will work until it doesn't.
And you are using 33 + 48/100 -- a value that cannot be represented exactly in base two just as 1/3 cannot be represented exactly in base ten.
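The same effect is easy to reproduce in binary with Double (a small illustration of the analogy, not taken from the question's code):
import Foundation
let summed = 0.1 + 0.2 // "0.3" built up from parts
let literal = 0.3      // "0.3" written directly
print(summed == literal)                // false
print(String(format: "%.17g", summed))  // 0.30000000000000004
print(String(format: "%.17g", literal)) // 0.29999999999999999
// Hash or encode these two values and you get different results,
// which is why hashing a string derived from a Double is fragile.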

How to store 1.66 in NSDecimalNumber

I know float or double are not good for storing decimal number like money and quantity. I'm trying to use NSDecimalNumber instead. Here is my code in Swift playground.
let number:NSDecimalNumber = 1.66
let text:String = String(describing: number)
NSLog(text)
The console output is 1.6599999999999995904
How can I store the exact value of the decimal number 1.66 in a variable?
In
let number:NSDecimalNumber = 1.66
the right-hand side is a floating point number which cannot represent
the value "1.66" exactly. One option is to create the decimal number
from a string:
let number = NSDecimalNumber(string: "1.66")
print(number) // 1.66
Another option is to use arithmetic:
let number = NSDecimalNumber(value: 166).dividing(by: 100)
print(number) // 1.66
With Swift 3 you may consider to use the "overlay value type" Decimal instead, e.g.
let num = Decimal(166)/Decimal(100)
print(num) // 1.66
Yet another option:
let num = Decimal(sign: .plus, exponent: -2, significand: 166)
print(num) // 1.66
Addendum:
Related discussions in the Swift forum:
Exact NSDecimalNumber via literal
ExpressibleByFractionLiteral
Related bug reports:
SR-3317
Literal protocol for decimal literals should support precise decimal accuracy, closed as a duplicate of
SR-920
Re-design builtin compiler protocols for literal convertible types.

Double String formatting madness

I have the following in playground:
let a:CGFloat = 1.23
let b:Double = 3.45
let c:Float = 6.78
String(format: "(%2.2f|%2.2f|%2.2f)",a,b,c) // prints "(3.45|6.78|0.00)"
let d = NSMakePoint(9.87,6.54)
String(format: "(%2.2f|%2.2f)",d.x,d.y) // prints "(0.00|0.00)"
So why is c:Float rendered as 0.00? I possibly need something other than f (Apple's documentation on this formatting function is, to be polite, quite thin, with a limit going to zero).
BUT: Why is the CGFloat in the first place rendered correctly while the two CGFloats inside the NSPoint get rendered as 0.00?
And no: it's not a duplicate of Precision String Format Specifier In Swift and its pendant.
P.S.
String(format: "(%2.2f|%2.2f)",Double(d.x),d.y) // prints "(9.87|0.00)"
which is a work around but no explanation.
And a PPS: Isn't %2.2f supposed to print " 9.87" instead of "9.87" (2 places for the leading digits)? It seems to ignore the number. Specifying %02.2f also prints "9.87" rather than "09.87".
CGFloat isn't a Float or a Double; it is its own struct. If you want the Double or Float value of a CGFloat (which depends on whether the architecture is 32 or 64 bit), you can access it with a.native. In other words, try:
String(format: "(%2.2f|%2.2f|%2.2f)",a.native,b,c)
You'd see similar behavior if you tried to pass other non-float or non-double arguments to the %f formatter. For example:
var str = "Hello, playground"
String(format: "%2.2f|%2.2f|%2.2f", arguments: [str,b,c])
would result in "3.45|6.78|0.00". It appears to be looking for another float in your arguments to satisfy the last %f, and defaults to 0.00
As for the PPS: %2.2f means two decimal places and a minimum total field width of 2 characters. If you wanted a minimum of two digits before the decimal point, you'd want %5.2f: 5 because the two decimals and the decimal point itself also take places.
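A quick check of the width behaviour (a sketch assuming macOS, so that NSMakePoint is available; the CGFloats are converted to Double before formatting):
import Foundation
let p = NSMakePoint(9.87, 6.54)
print(String(format: "(%5.2f|%5.2f)", Double(p.x), Double(p.y))) // ( 9.87| 6.54)
print(String(format: "(%05.2f)", 9.87))                          // (09.87)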