Unable to reproduce Decimal to unit test a function using NSDecimalRound [duplicate] - swift

Duplicate of: How to store 1.66 in NSDecimalNumber
let decimalA: Decimal = 3.24
let decimalB: Double = 3.24
let decimalC: Decimal = 3.0 + 0.2 + 0.04
print(decimalA) // Prints 3.240000000000000512
print(decimalB) // Prints 3.24
print(decimalC) // Prints 3.24
I'm totally confused. Why does this happen? I know why binary floating-point numbers lose precision, but I can't understand why Decimal loses precision when storing decimal numbers.
How can I initialize a Decimal without losing precision? An explanation of why this happens would also be very helpful.
Sorry for my poor English.

The problem is that Decimal's floating-point-literal initializer takes a Double, so the literal 3.24 is first converted to the nearest binary Double, and that conversion has already lost precision before the Decimal is created. Unfortunately Swift has no way to hand the literal's decimal digits to Decimal directly.
If you want to keep precision, you need to initialise Decimal from a String literal rather than a floating point literal.
let decimalA = Decimal(string: "3.24")!
let double = 3.24
let decimalC: Decimal = 3.0 + 0.2 + 0.04
print(decimalA) // Prints 3.24
print(double) // Prints 3.24
print(decimalC) // Prints 3.24
Bear in mind this issue only happens with floating-point literals, so if your numbers are generated or parsed at runtime (such as reading from a file or parsing JSON), you shouldn't face this precision loss.
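If the value arrives as text at runtime, the same Decimal(string:) initializer applies. A minimal sketch (the "3.24" here just stands in for whatever string you read at runtime):
import Foundation
let raw = "3.24" // e.g. read from a file, user input, or a JSON string field
if let price = Decimal(string: raw) {
    print(price) // 3.24 (never passes through Double, so no precision loss)
}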

Related

String Format Specifiers: rounding rule used for decimal values

I am using String(format:) to convert a Float. I thought the number would be rounded.
Sometimes it is.
String(format: "%.02f", 1.455) //"1.46"
Sometimes not.
String(format: "%.02f", 1.555) //"1.55"
String(round(1.555 * 100) / 100.0) //"1.56"
I guess 1.555 cannot be represented exactly in binary, and that it becomes something like 1.554999XXXX.
But NumberFormatter doesn't seem to cause the same problem... Why? Should it be preferred over String(format:)?
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
if let string = formatter.string(for: 1.555) {
    print(string) // 1.56
}
References to the problem (using String(format:) to round a decimal number) can be found in the answers (or, more often, the comments) to these questions: Rounding a double value to x number of decimal places in swift and How to format a Double into Currency - Swift 3. But the underlying problem (math with FloatingPoint types) has been dealt with many times on SO, for all languages.
String(format:) is not meant to round a decimal number (even though that is unfortunately suggested in some answers) but to format it, as its name suggests. This formatting sometimes causes rounding, that is true. But we have to keep in mind that the number 1.555 is... not actually 1.555.
In Swift, Double and Float, which conform to the FloatingPoint protocol, follow the IEEE 754 specification. However, some values cannot be exactly represented in the IEEE 754 format.
"In the same way that you can't represent a third exactly in a (finite) decimal expansion, there are lots of numbers which look simple in decimal, but which have long or infinite expansions in a binary expansion." (source)
To be convinced of this, we can use The Float Converter to convert between the decimal representation of numbers (like "1.02") and the binary format used by all modern CPUs (IEEE 754 floating point). For 1.555, the value actually stored in a Float is 1.55499994754791259765625.
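If you'd rather see this in code than in an online converter, printing with an oversized precision exposes the stored approximation. A small sketch (the precision 20 is arbitrary, just enough to reveal the error on a typical Apple toolchain):
import Foundation
String(format: "%.20f", Double(Float(1.555))) // "1.55499994754791259766"
String(format: "%.20f", 1.555)                // "1.55499999999999993783"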
So the problem does not come from String(format:). For example, we can try another way to round to the thousandth and we run into the same problem:
round(8.45 * pow(10.0, 3.0)) / pow(10.0, 3.0)
// 8.449999999999999
That is how it is: "Binary floating point arithmetic is fine so long as you know what's going on and don't expect values to be exactly the decimal ones you put in your program".
So the real question is: is this actually a problem for your use case? It depends on the app. Generally, if we convert a number into a String while limiting its precision (by rounding), it is because we consider that extra precision not useful to the user. If this is the kind of data we're talking about, then it's okay to use a FloatingPoint type.
However, to format it, it may be more relevant to use a NumberFormatter. Not necessarily for its rounding algorithm, but rather because it lets you localize the format:
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
formatter.locale = Locale(identifier: "fr_FR")
formatter.string(for: 1.55)!
// 1,55
formatter.locale = Locale(identifier: "en_US")
formatter.string(for: 1.55)!
// 1.55
Conversely, if we are in a case where precision matters, we must abandon Double/Float and use Decimal. Sticking with our rounding example, we can use this extension (which may be the best answer to the question "Rounding a double value to x number of decimal places in swift"):
extension Double {
    func roundedDecimal(to scale: Int = 0, mode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        var decimalValue = Decimal(self)
        var result = Decimal()
        NSDecimalRound(&result, &decimalValue, scale, mode)
        return result
    }
}
1.555.roundedDecimal(to: 2)
// 1.56
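Since mode is just an NSDecimalNumber.RoundingMode, the other rounding rules come for free. A quick sketch reusing the extension above, with 2.5 chosen because it is exact in both Double and Decimal, so only the rounding rule matters:
2.5.roundedDecimal(to: 0, mode: .plain)   // 3 (halfway values round away from zero)
2.5.roundedDecimal(to: 0, mode: .bankers) // 2 (halfway values round to the even neighbour)
2.5.roundedDecimal(to: 0, mode: .down)    // 2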

NSDecimalNumber usage for precision with currency in Swift [duplicate]

Duplicate of: How to store 1.66 in NSDecimalNumber
I've read a lot that NSDecimalNumber is the best format to use when using currency.
However, I'm still getting floating point issues.
For example.
let a: NSDecimalNumber = 0.07 //0.07000000000000003
let b: NSDecimalNumber = 7.dividing(by: 100) //0.06999999999999999
I know I could use Decimal and b would be what I'm expecting:
let b: Decimal = 7 / 100 //0.07
I'm using Core Data in my app, so I'm stuck with NSDecimalNumber, unless I want to convert a lot of NSDecimalNumbers to Decimals.
Can someone help me get 0.07?
The problem is that you're effectively doing floating-point math (with the problems a Double has in faithfully capturing fractional decimal values) and then creating a Decimal (or NSDecimalNumber) from a Double value that has already introduced this discrepancy. Instead, you want to create your Decimal values before doing your division (or before having a fractional Double value at all, even as a literal).
So, the following is equivalent to your example, whereby it is building a Double representation (with the limitations that entails) of 0.07, and you end up with a value that is not exactly 0.07:
let value = Decimal(7.0 / 100.0) // or NSDecimalNumber(value: 7.0 / 100.0)
Whereas this does not suffer this problem because we are dividing a decimal 7 by a decimal 100:
let value = Decimal(7) / Decimal(100) // or NSDecimalNumber(value: 7).dividing(by: 100)
Or, other ways to create 0.07 value but avoiding Double in the process include using strings:
let value = Decimal(string: "0.07") // or NSDecimalNumber(string: "0.07")
Or by specifying the significand (mantissa) and exponent:
let value = Decimal(sign: .plus, exponent: -2, significand: 7) // or NSDecimalNumber(mantissa: 7, exponent: -2, isNegative: false)
Bottom line, avoid Double representations entirely when using Decimal (or NSDecimalNumber), and you won't suffer the problem you described.
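Since the question mentions being stuck with NSDecimalNumber because of Core Data: Decimal and NSDecimalNumber bridge to each other, so no wholesale conversion is needed. A minimal sketch (the names are illustrative):
import Foundation
let price = Decimal(string: "0.07")!   // exact decimal value, no Double involved
let stored = price as NSDecimalNumber  // bridges for an NSDecimalNumber Core Data attribute
let back = stored.decimalValue         // and back to Decimal for arithmetic
print(stored, back)                    // 0.07 0.07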

Why is 0.29999999999999998 converted to 0.3?

How does it work internally?
How does it decide to convert 0.29999999999999998 to 0.3, even though 0.3 cannot be represented in binary?
Here are some more examples:
scala> 0.29999999999999998
res1: Double = 0.3
scala> 0.29999999999999997
res2: Double = 0.3
scala> 0.29999999999999996
res3: Double = 0.29999999999999993
scala> 0.29999999999999995
res4: Double = 0.29999999999999993
There are two conversions involved.
First 0.29999999999999998 is converted to 0.299999999999999988897769753748434595763683319091796875, the nearest representable number.
Next, 0.299999999999999988897769753748434595763683319091796875 is converted to decimal for printing. 0.3 is also one of the numbers that converts to 0.299999999999999988897769753748434595763683319091796875, and it is the one that gets printed because it is so short.
Every finite double number is exactly representable as a decimal fraction. Generally, default output does not attempt to print the exact value, because it can be very long - far longer than the example above. A common choice is to print the shortest decimal fraction that would convert to the double on input. Both conversions are done using non-trivial algorithms. See Algorithm to convert an IEEE 754 double to a string? for some discussion and references to output algorithms.
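Both conversions are easy to observe directly. A sketch in Swift (the same IEEE 754 doubles are involved whatever the language; the precision 54 is just enough to show the full stored value):
import Foundation
let x = Double("0.29999999999999998")!
print(x) // 0.3 (the shortest decimal that converts back to the same double)
print(String(format: "%.54f", x))
// 0.299999999999999988897769753748434595763683319091796875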
==============================================================
There has been some discussion in comments on the value 0.30000000000000004. I agree with the comments by Rick Regan and Jesper, but thought it might be useful to add to this answer.
The exact value of the closest double to 0.30000000000000004 is 0.3000000000000000444089209850062616169452667236328125. All decimal numbers in the range [0.3000000000000000166533453693773481063544750213623046875, 0.3000000000000000721644966006351751275360584259033203125] convert to that value, and no numbers even slightly outside that range do so. 0.3000000000000000 is outside the range, so it does not have enough digits. 0.30000000000000004 is inside the range, so there is no need for more digits to correctly identify the double.
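A quick illustration of that last point (shown in Swift, but any IEEE 754 double behaves the same):
let target = 0.30000000000000004
print(target)                                  // 0.30000000000000004 (all 17 digits are needed)
print(0.1 + 0.2 == target)                     // true: 0.1 + 0.2 yields exactly this double
print(Double("0.3000000000000000")! == target) // false: 16 digits parse to a different double (0.3)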
Note that for Scala's Double (see the IEEE 754 Standard and IEEE Floating-Point Arithmetic), the original declared value is rounded to the nearest representable value,
val x = 0.29999999999999998
x: Double = 0.3
"0.29999999999999998".toDouble
Double = 0.3
in as much as
0.2999999999999999999999999999999999999999999999999999999999998
Double = 0.3
Also, in BigDecimal, the arbitrary-precision decimal floating-point representation (see the API), the original value of type Double (the parameter to the constructor) is first rounded, namely
BigDecimal(0.29999999999999998) == 0.3
Boolean = true
BigDecimal(0.29999999999999998)
scala.math.BigDecimal = 0.3
However, a textual declaration of the original value is not interpreted as a Double and hence is not rounded,
BigDecimal("0.29999999999999998") == 0.3
Boolean = false
namely,
BigDecimal("0.29999999999999998")
scala.math.BigDecimal = 0.29999999999999998

Double String formatting madness

I have the following in playground:
let a:CGFloat = 1.23
let b:Double = 3.45
let c:Float = 6.78
String(format: "(%2.2f|%2.2f|%2.2f)",a,b,c) // prints "(3.45|6.78|0.00)"
let d = NSMakePoint(9.87,6.54)
String(format: "(%2.2f|%2.2f)",d.x,d.y) // prints "(0.00|0.00)"
So why is c:Float rendered as 0.00? I possibly need something other than f (Apple's documentation on this formatting function is, to be polite, quite thin, with a limit approaching zero).
BUT: Why is the CGFloat in the first place rendered correctly while the two CGFloats inside the NSPoint get rendered as 0.00?
And no: it's not a duplicate of Precision String Format Specifier In Swift and its pendant.
P.S.
String(format: "(%2.2f|%2.2f)",Double(d.x),d.y) // prints "(9.87|0.00)"
which is a workaround but not an explanation.
And a PPS: isn't %2.2f supposed to print " 9.87" instead of "9.87" (2 places for the leading digits)? It seems to ignore that number. Specifying %02.2f also prints "9.87" rather than "09.87".
CGFloat isn't a Float or a Double; it is its own struct. If you want the Double or Float value of a CGFloat (which depends on whether the architecture is 32- or 64-bit), you can access it with a.native. In other words, try:
String(format: "(%2.2f|%2.2f|%2.2f)",a.native,b,c)
You'd see similar behavior if you tried to pass other non-float or non-double arguments to the %f formatter. For example:
var str = "Hello, playground"
String(format: "%2.2f|%2.2f|%2.2f", arguments: [str,b,c])
would result in "3.45|6.78|0.00". It appears to look for another floating-point value among your arguments to satisfy the last %f, and defaults to 0.00 when it can't find one.
As for the PPS: %2.2f means two decimal places and a minimum total field width of 2 characters. If you wanted at least two digits before the decimal point, you'd want %5.2f: 5 because the decimal point and the two fractional digits each take a place.
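A quick sketch of the width behaviour with plain Doubles (so CGFloat is not a factor here):
import Foundation
String(format: "%2.2f", 9.87)  // "9.87" (width 2 is already exceeded, so it changes nothing)
String(format: "%5.2f", 9.87)  // " 9.87" (padded with a space to a total width of 5)
String(format: "%05.2f", 9.87) // "09.87" (zero-padded instead of space-padded)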

double datatype+iphone

I have an application. I know the range of a double is 1.7E +/- 308 (15 digits), but in my application I have to divide a text box's value by 100.0. My code is
double value = [strPrice doubleValue] / 100.0;
NSString *stramoount = [@"" stringByAppendingFormat:@"%0.2f", value];
When I divide 34901234566781212 by 100 it gives me 349012345667812.12, but when I type 349012345667812124 and divide by 100 it gives me 3490123456678121.00, which is wrong. Should I change the datatype, or how can I change my code?
The number 349012345667812124 has 18 decimal digits. The double format only provides slightly less than 16 decimal digits of precision (the actual figure is not an integer because the format's binary digits do not correspond directly to decimal ones). Thus it is completely expected that the last 2 or 3 digits cannot be represented accurately, and this already happens when the literal "349012345667812124" is parsed into the double format, before any calculation happens.
The fact that you get the expected result with the number 34901234566781212 means nothing; it just happens to be close enough to the nearest value the double format can represent.
To avoid this problem, use the NSDecimal or NSDecimalNumber types.
Use
NSDecimalNumber *dec = [[NSDecimalNumber decimalNumberWithString:value.text locale:[NSLocale currentLocale]] decimalNumberByDividingBy:[NSDecimalNumber decimalNumberWithString:@"100" locale:[NSLocale currentLocale]]];
NSLog(@"%@", dec);
instead of Double
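For a Swift codebase, the equivalent with Decimal would look roughly like this (strPrice is assumed to hold the text field's string, as in the question):
import Foundation
let strPrice = "349012345667812124" // assumed text-field contents
if let amount = Decimal(string: strPrice) {
    let value = amount / 100 // exact decimal division, no binary rounding
    print(value)             // 3490123456678121.24
}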