How to make sure an NSDecimalNumber represents no fractional digits?

I want to do some fairly complex arithmetic that requires very high precision, i.e. calculating
10000000000 + 0.00000000001 = 10000000000.00000000001
10000000000.00000000001 * 3 = 30000000000.00000000003
I want to use NSDecimalNumber for this kind of math, but the problem is: how do I feed it these values?
The documentation says:
- (id)initWithMantissa:(unsigned long long)mantissa exponent:(short)exponent isNegative:(BOOL)flag
The first problem I see is the mantissa. It requires an unsigned long long. As I understand that data type, it is a floating point, right? So if it is, at this point the entered value is already "dirty". It may have unwanted fractional digits somewhere at the end of it. I couldn't find good documentation on "unsigned long long" from Apple, but I remember a code snippet where someone fed the mantissa with a CGFloat, so that's why I assume it's a floating-point type.
Well, if it is indeed some super floating-point data type, then the hard question is: how do I get a clean, really clean integer into this thing? So clean that I could multiply it by half a trillion without getting wrong results?
Are there good tutorials on the usage of NSDecimalNumber in practice?
Edit: No problem here! Thanks everyone!

If you really are concerned about feeding in less precise types, I'd recommend using -initWithString:, -initWithString:locale:, +decimalNumberWithString:, or +decimalNumberWithString:locale:. Using the string description avoids ever having to convert the numerical representation to a floating point or other numerical type before generating your NSDecimalNumber.
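For example, the exact arithmetic from the question can be sketched as a minimal program built this way (variable names are illustrative; error handling omitted):

#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Build the operands from strings so no binary floating-point
        // conversion ever takes place.
        NSDecimalNumber *big  = [NSDecimalNumber decimalNumberWithString:@"10000000000"];
        NSDecimalNumber *tiny = [NSDecimalNumber decimalNumberWithString:@"0.00000000001"];

        NSDecimalNumber *sum     = [big decimalNumberByAdding:tiny];
        NSDecimalNumber *product =
            [sum decimalNumberByMultiplyingBy:[NSDecimalNumber decimalNumberWithString:@"3"]];

        NSLog(@"%@", sum);     // 10000000000.00000000001
        NSLog(@"%@", product); // 30000000000.00000000003
    }
    return 0;
}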

Related

What's the correct number type for financial variables in Swift?

I am used to programming in Java, where the BigDecimal type is the best choice for storing financial values, since there are ways to specify rounding rules for the calculations.
In the latest Swift version (2.1 at the time this post is written), which native type best supports correct calculations and rounding for financial values? Is there any equivalent to Java's BigDecimal? Or anything similar?
You can use NSDecimal or NSDecimalNumber for high-precision base-ten numbers.
See more on NSDecimalNumber's reference page.
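As a sketch of the rounding control you get (comparable to Java's RoundingMode), an NSDecimalNumberHandler can pin results to two decimal places with bankers' rounding. The names below are illustrative, and the code assumes it runs inside a method with Foundation imported:

// Behavior object: round to 2 decimal places using bankers' rounding.
NSDecimalNumberHandler *bankers =
    [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundBankers
                                                           scale:2
                                                raiseOnExactness:NO
                                                 raiseOnOverflow:YES
                                                raiseOnUnderflow:YES
                                             raiseOnDivideByZero:YES];

// 100.00 / 3, rounded to 2 places: 33.33
NSDecimalNumber *share =
    [[NSDecimalNumber decimalNumberWithString:@"100.00"]
        decimalNumberByDividingBy:[NSDecimalNumber decimalNumberWithString:@"3"]
                     withBehavior:bankers];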
If you are concerned about storing, for example, $1.23 in a float or double, and about the inaccuracies you will get from floating-point precision errors, and you actually want to stick to whole amounts of cents or pence (or whatever else), then use an integer to store your value and make the pence/cent your unit instead of pounds/dollars. You will then be 100% accurate when dealing in whole amounts of pence/cents, and it's easier than using a class like NSDecimalNumber. The display of that value is then purely a presentation issue.
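For instance (a sketch, assuming the amounts fit in a long long):

// Prices kept as integer cents; arithmetic is exact as long as values stay integral.
long long priceInCents = 123;              // $1.23
long long totalInCents = priceInCents * 3; // exactly 369

NSLog(@"$%lld.%02lld", totalInCents / 100, totalInCents % 100); // $3.69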
If, however, you need to deal with fractions of a penny/cent, then NSDecimalNumber is probably what you want.
I recommend looking into how classes like this actually work, and how floating-point numbers work too. Understanding this will help you see why precision errors arise, what the precision limits of a class like NSDecimalNumber are, why it's better for storing decimal numbers, and why floats are good at storing numbers like 17/262144 (i.e. where the denominator is a power of two) but can't store 1/100 exactly.

Losing accuracy with double division

I am having a problem with a simple division of two integers. I need it to be as accurate as possible, but for some reason the double type is behaving strangely.
For example, if I execute the following code:
double res = (29970.0/1000.0);
The result is 29.969999999999999, when it should be 29.970.
Any idea why this is happening?
Thanks
Any idea why this is happening?
Because the double representation is finite. The IEEE 754 double-precision format, for example, has 52 bits for the fraction, so not all real numbers are covered; some values simply cannot be represented exactly. In your case the result is about 10^-15 away from the ideal.
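You can see this by printing the same value at two different precisions (a minimal, self-contained sketch):

#include <stdio.h>

int main(void) {
    double res = 29970.0 / 1000.0;
    // Print enough digits to see the nearest representable double:
    printf("%.17g\n", res);   // 29.969999999999999
    // With default 6-significant-digit formatting the same value looks exact:
    printf("%g\n", res);      // 29.97
    return 0;
}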
I need it to be as accurate as possible
You shouldn't use doubles, then. In Java, for example, you would use BigDecimal instead (most languages provide a similar facility). double operations are intrinsically inaccurate to some degree, due to the internal representation of floating-point numbers.
Floating-point numbers of type float and double are stored in binary format, so they can't represent every decimal value precisely; values are quantized instead. If you hypothetically had a number type with only 2 fraction bits, you could represent only multiples of 2^-2: 0.00, 0.25, 0.50, 0.75, and nothing in between.
I need it to be as accurate as possible
There is no silver bullet, but if you want only basic arithmetic operations (which map ℚ to ℚ), and you REALLY want exact results, then your best bet is a rational type composed of two unlimited integers (a.k.a. BigInteger, BigInt, etc.). But even then, memory is not infinite, and you must keep that in mind.
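As a sketch of the idea, with fixed-width long long standing in for the unlimited integers (a real implementation would need big integers to avoid overflow; all names here are illustrative):

typedef struct { long long num, den; } Rational;

static long long gcdll(long long a, long long b) {
    // Euclid's algorithm; the result is made non-negative.
    while (b != 0) { long long t = a % b; a = b; b = t; }
    return a < 0 ? -a : a;
}

static Rational makeRational(long long num, long long den) {
    // Normalize by the greatest common divisor so values stay small.
    long long g = gcdll(num, den);
    if (g != 0) { num /= g; den /= g; }
    return (Rational){ num, den };
}

static Rational addRational(Rational x, Rational y) {
    // Exact: no rounding ever happens, only (possible) overflow.
    return makeRational(x.num * y.den + y.num * x.den, x.den * y.den);
}

With this, 29970/1000 normalizes to exactly 2997/100 instead of the nearest binary double.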
For the rest of the question, please read about fixed-size floating-point numbers; there are plenty of good sources.

Arbitrary precision Float numbers on JavaScript

I have some inputs on my site representing floating-point numbers with up to ten precision digits (in decimal). At some point, in the client-side validation code, I need to compare a couple of those values to see if they are equal or not, and here, as you would expect, the intrinsics of IEEE 754 make that simple check fail with things like (2.0000000000 == 2.0000000001) evaluating to true.
I could break the floating-point number into two longs, one for each side of the dot, make each side a 64-bit long, and do my comparisons manually, but it looks so ugly!
Is there any decent JavaScript library to handle arbitrary (or at least guaranteed) precision floating-point numbers?
Thanks in advance!
PS: A GWT-based solution gets a ++
There is the GWT-MATH library at http://code.google.com/p/gwt-math/.
However, I warn you, it's a GWT JSNI overlay of a Java-to-JavaScript automated conversion of java.math.BigDecimal (actually the old com.ibm.math.BigDecimal).
It works, but speedy it is not. (Nor lean: it will add a good 70k to your project.)
At my workplace, we are working on a fixed point simple decimal, but nothing worth releasing yet. :(
Use an arbitrary precision integer library such as silentmatt’s javascript-biginteger, which can store and calculate with integers of any arbitrary size.
Since you want ten decimal places, you’ll need to store the value n as n×10^10. For example, store 1 as 10000000000 (ten zeroes), 1.5 as 15000000000 (nine zeroes), etc. To display the value to the user, simply place a decimal point in front of the tenth-last character (and then cut off any trailing zeroes if you want).
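The same idea expressed in C-style code (a sketch, with long long standing in for the big integer; the question's comparison then becomes an exact integer comparison):

static const long long kScale = 10000000000LL; // 10^10: ten decimal places

long long a = 2 * kScale;     // represents 2.0000000000
long long b = 2 * kScale + 1; // represents 2.0000000001

BOOL equal = (a == b); // NO: exact comparison, no IEEE 754 involved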
Alternatively you could store a numerator and a denominator as bigintegers, which would then allow you arbitrarily precise fractional values (but beware – fractional values tend to get very big very quickly).

What's the most precise data type for floating point calculations on iPhone OS?

I always thought it was double, until I accidentally hit floo+ESC and it told me there is a floorl(<#long double #>) function. So is long double the solution to all big inaccuracy problems? ;-)
Or is there even something more precise than that?
One thing to be aware of is that long double simply acts as double on the iPhone hardware. You don't get any additional precision from the larger type. It will give you more precision in the Simulator, because you're running on a Mac there, so that can confuse you.
As is noted here (and by other commenters), NSDecimal or NSDecimalNumber is the way to go for precision (up to 38 digits), and calculations performed with it use true decimal math, not binary floating point. This avoids many of the errors that you see with normal IEEE 754 math.
I think long double is the limit, but it really depends what you want to do. Have you actually been running into inaccuracy problems?
For even more precision, look at the NSDecimalNumber class. As the other answer says, have you actually found any inaccuracy problems? Also, more accuracy will be slower.
Adding more precision is one approach to solving the problem, but the real problem sometimes (usually?) lies in the way you are performing the computation. In that case, I prescribe a healthy dose of RTFM. Any primer on FP arithmetic will cover why certain forms of equations are disastrous and how to avoid the gaping maw of oblivion.
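A classic illustration of this point (a self-contained sketch): for large x, sqrt(x+1) - sqrt(x) loses most of its significant digits to cancellation, while the algebraically identical form keeps them. Rearranging the formula recovers digits that the naive form throws away.

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1e12;
    double naive  = sqrt(x + 1.0) - sqrt(x);         // heavy cancellation
    double stable = 1.0 / (sqrt(x + 1.0) + sqrt(x)); // same value, rewritten
    printf("%.17g vs %.17g\n", naive, stable);       // only the first few digits agree
    return 0;
}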

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there were problems writing numbers like this as plain-text literals in source code. Do I have to wrap the literal in some function or directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers directly?
The problem is with floating-point arithmetic, not with writing literals in source code. It is not designed to be exact. The best way around it is to avoid the built-in double: use integers only (if possible) with power-of-10 scale factors, sum everything up, and display the final figure only after rounding.
Standard floating-point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to do math on. They are imprecise at surprisingly modest precision levels. But fast.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to Java's java.math.BigDecimal class. That class exists precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue of java.math.BigDecimal. No port required.
As usual, you need to read stuff like this to learn more about how floating-point numbers work on computers. You cannot expect to store any arbitrary fraction with perfect results, just as you can't expect to store any arbitrary integer. There are only so many bits underneath, and their number is limited.