String format specifiers: rounding rule used for decimal values - Swift

I am using String(format:) to convert a Float. I thought the number would be rounded.
Sometimes it is.
String(format: "%.02f", 1.455) //"1.46"
Sometimes not.
String(format: "%.02f", 1.555) //"1.55"
String(round(1.555 * 100) / 100.0) //"1.56"
I guess 1.555 cannot be represented exactly in binary, and that it becomes something like 1.554999XXXX.
But NumberFormatter doesn't seem to cause the same problem... Why? Should it be preferred over String(format:)?
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
if let string = formatter.string(for: 1.555) {
    print(string) // 1.56
}

References to the problem (using String(format:) to round a decimal number) can be found in the answers (or, more often, the comments) to these questions: Rounding a double value to x number of decimal places in swift and How to format a Double into Currency - Swift 3. The underlying problem (math with FloatingPoint) has been dealt with many times on SO, for all languages.
String(format:) is not meant to round a decimal number (even though it is unfortunately suggested in some answers) but to format it (as its name indicates). This formatting does sometimes round. But we have to keep in mind that the number 1.555 is... not actually 1.555.
In Swift, Double and Float, which conform to the FloatingPoint protocol, follow the IEEE 754 specification. However, some values cannot be represented exactly in the IEEE 754 format.
"In the same way that you can't represent a third exactly in a (finite) decimal expansion, there are lots of numbers which look simple in decimal, but which have long or infinite expansions in a binary expansion." (source)
To be convinced of this, we can use the Float Converter to convert between the decimal representation of numbers (like "1.02") and the binary format used by all modern CPUs (IEEE 754 floating point). For 1.555, the value actually stored in a Float is 1.55499994754791259765625.
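We can also see this from Swift itself by printing far more fraction digits than the literal had (a small sketch; the trailing digits shown are what I would expect on a 64-bit platform):
import Foundation

let f: Float = 1.555
let d: Double = 1.555

// Asking for 25 fraction digits exposes the values actually stored.
// The Float must be widened to Double explicitly for the %f specifier.
print(String(format: "%.25f", Double(f))) // 1.5549999475479125976562500
print(String(format: "%.25f", d))         // 1.5549999999999999378275106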
So the problem does not come from String(format:). For example, if we try another way to round to the thousandth, we find the same problem:
round(8.45 * pow(10.0, 3.0)) / pow(10.0, 3.0)
// 8.449999999999999
That is just how it is: "Binary floating point arithmetic is fine so long as you know what's going on and don't expect values to be exactly the decimal ones you put in your program".
So the real question is: is this actually a problem for your use case? It depends on the app. Generally, if we convert a number into a String while limiting its precision (by rounding), it is because we consider that extra precision not useful to the user. If that is the kind of data we're talking about, then it's fine to use a FloatingPoint type.
However, to format it, it may be more relevant to use a NumberFormatter. Not necessarily for its rounding algorithm, but rather because it lets you localize the format:
let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
formatter.minimumFractionDigits = 2
formatter.locale = Locale(identifier: "fr_FR")
formatter.string(for: 1.55)!
// 1,55
formatter.locale = Locale(identifier: "en_US")
formatter.string(for: 1.55)!
// 1.55
Conversely, if we are in a case where precision matters, we should abandon Double/Float and use Decimal. To stay with our rounding example, we can use this extension (which may be the best answer to the question "Rounding a double value to x number of decimal places in swift"):
extension Double {
    func roundedDecimal(to scale: Int = 0, mode: NSDecimalNumber.RoundingMode = .plain) -> Decimal {
        var decimalValue = Decimal(self)
        var result = Decimal()
        NSDecimalRound(&result, &decimalValue, scale, mode)
        return result
    }
}
1.555.roundedDecimal(to: 2)
// 1.56
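As a usage note, the mode parameter exposes NSDecimalNumber's rounding rules, so the same call can also force the direction of the rounding:
1.555.roundedDecimal(to: 2, mode: .down) // 1.55 (NSRoundDown: always rounds down)
1.555.roundedDecimal(to: 2, mode: .up)   // 1.56 (NSRoundUp: always rounds up)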

How do you get integer values in an NSPoint to display as integer rather than as decimal?
let temp: NSPoint = NSMakePoint(5, 7)
print(temp.x) // displays 5.0
I only want it to display 5, not 5.0
You ask:
How do you get integer values in an NSPoint to display as integer rather than as decimal.
This question may be based upon a false premise. The parameters you supply to NSMakePoint(_:_:) are actually CGFloat (as are the underlying properties, x and y). So although you supplied integers, Swift stores them as CGFloat values for x and y regardless.
So, the question is “How do you display the CGFloat properties of an NSPoint without any fractional digits?”
Certainly, as others have suggested, you can create an integer from the CGFloat properties with Int(point.x), etc., but I might suggest that you really just want to display these CGFloat values without decimal places. There are a bunch of ways of doing this.
Consider:
let point = NSPoint(x: 5000, y: 7000)
You can use NSMakePoint, too, but this is a little more natural in Swift.
Then to display this without decimal places, you have a few options:
In macOS, NSStringFromPoint will generate the full CGPoint without decimal places:
print(NSStringFromPoint(point)) // "{5000, 7000}"
You can use String(format:_:), using a format string that explicitly specifies no decimal places:
print(String(format: "%.0f", point.x)) // "5000"
You can use NumberFormatter and specify the minimumFractionDigits:
let formatter = NumberFormatter()
formatter.minimumFractionDigits = 0
formatter.numberStyle = .decimal
print(formatter.string(for: point.x)!) // "5,000"
Needless to say, you can save that formatter and reuse it as needed if you want to keep your code from getting too verbose (see the sketch after the last option below).
If you’re just logging this to the console, you can use Unified Logging with import os.log and then do:
os_log("%.0f", point.x) // "5000"
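Regarding saving the formatter for reuse, one common pattern (a sketch; the wholeNumber property name is just an example, not an API requirement) is a shared instance configured once:
extension NumberFormatter {
    // A hypothetical shared formatter, built once and reused everywhere.
    static let wholeNumber: NumberFormatter = {
        let formatter = NumberFormatter()
        formatter.numberStyle = .decimal
        formatter.maximumFractionDigits = 0  // drop the fraction entirely
        return formatter
    }()
}

print(NumberFormatter.wholeNumber.string(for: point.x)!) // "5,000" (grouping depends on the locale)
This variant uses maximumFractionDigits, so non-integral values would be rounded rather than shown with a fraction.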

NSDecimalNumber usage for precision with currency in Swift [duplicate]

This question already has an answer here:
How to store 1.66 in NSDecimalNumber
I've read a lot that NSDecimalNumber is the best format to use when using currency.
However, I'm still getting floating point issues.
For example.
let a: NSDecimalNumber = 0.07 //0.07000000000000003
let b: NSDecimalNumber = 7.dividing(by: 100) //0.06999999999999999
I know I could use Decimal and b would be what I'm expecting:
let b: Decimal = 7 / 100 //0.07
I'm using Core Data in my app, so I'm stuck with NSDecimalNumber, unless I want to convert a lot of NSDecimalNumbers to Decimals.
Can someone help me get 0.07?
The problem is that you’re effectively doing floating point math (with the problems it has faithfully capturing fractional decimal values in a Double) and creating a Decimal (or NSDecimalNumber) from the Double value that already has introduced this discrepancy. Instead, you want to create your Decimal values before doing your division (or before having a fractional Double value, even if a literal).
So, the following is equivalent to your example, in that it builds a Double representation (with the limitations that entails) of 0.07, and you end up with a value that is not exactly 0.07:
let value = Decimal(7.0 / 100.0) // or NSDecimalNumber(value: 7.0 / 100.0)
Whereas this does not suffer this problem because we are dividing a decimal 7 by a decimal 100:
let value = Decimal(7) / Decimal(100) // or NSDecimalNumber(value: 7).dividing(by: 100)
Other ways to create the 0.07 value while avoiding Double in the process include using strings:
let value = Decimal(string: "0.07") // or NSDecimalNumber(string: "0.07")
Or specifying the mantissa/significand and exponent:
let value = Decimal(sign: .plus, exponent: -2, significand: 7) // or NSDecimalNumber(mantissa: 7, exponent: -2, isNegative: false)
Bottom line, avoid Double representations entirely when using Decimal (or NSDecimalNumber), and you won't suffer the problem you described.
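Since the question mentions Core Data, note that Decimal bridges to NSDecimalNumber, so you can keep the arithmetic in Decimal and only bridge at the boundary (a sketch; the order and unitPrice names are made up):
import Foundation

let price = Decimal(7) / Decimal(100)   // exactly 0.07
let boxed = price as NSDecimalNumber    // lossless bridge to NSDecimalNumber
print(boxed)                            // 0.07
// order.unitPrice = boxed              // hypothetical Core Data attribute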

How do I convert a fractional decimal to a whole number in Swift

What is the easiest way to convert the fractional part of a decimal float (the part to the right of the decimal point) to a whole-number integer?
For example:
0.25 converts to 25
0.09 converts to 9
0.90 converts to 90
I've tried several ways, including converting the float to a string and extracting the fraction, but for some reason it leaves off any trailing zeros. For example 0.90 would convert to a string as 0.9.
Here is an example:
let a = 0.90
let fractionalPart = a.truncatingRemainder(dividingBy: 1.0)
let modifiedFractionalPart = Int(fractionalPart * 100.0)
let string = String(modifiedFractionalPart)
print(string) // prints "90"
If you can't simply multiply by 100.0 (that is, you don't want to limit the fractional part to two decimal places, but need everything after the "."), then use the following:
let a = 0.09017
let fractionalPart = String(a).components(separatedBy: ".")[1] // "09017"
Then if you have to convert it to an Int just do:
let fractionalPartInt = Int(fractionalPart) // Optional(9017); note the leading zero is lost
Apple's Foundation Framework provides a way to do this:
let numberFormatter = NumberFormatter()
numberFormatter.multiplier = 100
print(numberFormatter.string(from: 0.25)!) // "25"
Specific documentation for NumberFormatter is also available
The advantage of using the NumberFormatter is that it is:
More modular -- if you need to change the conversion factor, just change the multiplier
More expressive -- having self-documenting code is extremely useful when viewing an old project
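To illustrate the modularity point, switching the conversion factor really is just a change of the multiplier (a sketch; this assumes the default .none number style, which shows no fraction digits):
let perMilleFormatter = NumberFormatter()
perMilleFormatter.multiplier = 1000            // thousandths instead of hundredths
print(perMilleFormatter.string(from: 0.256)!)  // "256"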

How to store 1.66 in NSDecimalNumber

I know float or double are not good for storing decimal numbers like money and quantities. I'm trying to use NSDecimalNumber instead. Here is my code in a Swift playground.
let number:NSDecimalNumber = 1.66
let text:String = String(describing: number)
NSLog(text)
The console output is 1.6599999999999995904
How can I store the exact value of the decimal number 1.66 in a variable?
In
let number:NSDecimalNumber = 1.66
the right-hand side is a floating point number which cannot represent the value "1.66" exactly. One option is to create the decimal number from a string:
let number = NSDecimalNumber(string: "1.66")
print(number) // 1.66
Another option is to use arithmetic:
let number = NSDecimalNumber(value: 166).dividing(by: 100)
print(number) // 1.66
With Swift 3 you may consider using the "overlay value type" Decimal instead, e.g.
let num = Decimal(166)/Decimal(100)
print(num) // 1.66
Yet another option:
let num = Decimal(sign: .plus, exponent: -2, significand: 166)
print(num) // 1.66
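If exact constants like this come up often, a small convenience initializer can keep call sites readable (a sketch; Decimal(exactString:) is a made-up helper, not a Foundation API):
import Foundation

extension Decimal {
    // Hypothetical convenience: build a Decimal from a string you control,
    // trapping on malformed input instead of returning an optional.
    init(exactString string: String) {
        guard let value = Decimal(string: string) else {
            preconditionFailure("Not a valid decimal string: \(string)")
        }
        self = value
    }
}

let amount = Decimal(exactString: "1.66")
print(amount) // 1.66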
Addendum:
Related discussions in the Swift forum:
Exact NSDecimalNumber via literal
ExpressibleByFractionLiteral
Related bug reports:
SR-3317: Literal protocol for decimal literals should support precise decimal accuracy (closed as a duplicate of SR-920: Re-design builtin compiler protocols for literal convertible types).

Double String formatting madness

I have the following in playground:
let a:CGFloat = 1.23
let b:Double = 3.45
let c:Float = 6.78
String(format: "(%2.2f|%2.2f|%2.2f)",a,b,c) // prints "(3.45|6.78|0.00)"
let d = NSMakePoint(9.87,6.54)
String(format: "(%2.2f|%2.2f)",d.x,d.y) // prints "(0.00|0.00)"
So why is c:Float rendered as 0.00? I possibly need something other than f (Apple's documentation on this formatting function is, to be polite, quite thin, with a limit going to zero).
BUT: Why is the CGFloat in the first place rendered correctly while the two CGFloats inside the NSPoint get rendered as 0.00?
And no: it's not a duplicate of Precision String Format Specifier In Swift and its pendant.
P.S.
String(format: "(%2.2f|%2.2f)",Double(d.x),d.y) // prints "(9.87|0.00)"
which is a workaround but not an explanation.
And a PPS: Isn't %2.2f supposed to print " 9.87" instead of "9.87" (2 places for leading digits)? It seems to ignore the number. Specifying %02.2f also prints "9.87" rather than "09.87".
CGFloat isn't a Float or a Double; it is its own struct. If you want the Double or Float value of a CGFloat (which depends on the architecture being 32- or 64-bit), you can access it with a.native. In other words, try:
String(format: "(%2.2f|%2.2f|%2.2f)",a.native,b,c)
You'd see similar behavior if you tried to pass other non-float or non-double arguments to the %f formatter. For example:
var str = "Hello, playground"
String(format: "%2.2f|%2.2f|%2.2f", arguments: [str,b,c])
would result in "3.45|6.78|0.00". It appears to be looking for another float in your arguments to satisfy the last %f, and defaults to 0.00
As for the PPS: %2.2f means two decimal places and a minimum field width of 2 characters. If you wanted two digits minimum before the decimal, you'd want %5.2f: 5 because the decimal point and the two fraction digits also count toward the width.
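Putting both points together, a sketch of a portable fix for the original snippet (macOS playground, as in the question) is to widen every value to Double before formatting and to use a field width of 5 if you want the leading space:
import Foundation

let a: CGFloat = 1.23
let c: Float = 6.78
let d = NSMakePoint(9.87, 6.54)

// Promote everything to Double so each %f receives a 64-bit value,
// and use %5.2f for a minimum field width of 5 characters.
let s = String(format: "(%5.2f|%5.2f|%5.2f)", Double(a), Double(c), Double(d.x))
print(s) // "( 1.23| 6.78| 9.87)"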