NSDecimalNumber for big number operations on iPhone

I need to use big numbers for precision in my application; float or double are not enough.
I also have int and float numbers, and I have to do operations with all of them.
I think that NSDecimalNumber is good for the precision I need, but I would like to do operations with other kinds of numbers, and the formula is complex. So this class doesn't look appropriate for complex formulas (it gets too complicated to chain the decimalWith... or decimalBy... methods) when there are lots of terms.
Does anyone know what to use in order to manipulate big numbers easily, and do operations on them with different types (float, decimal, int)?
Thank you.

NSDecimalNumbers are simply wrappers around NSDecimal structs, which have a bunch of useful functions for manipulation without requiring the allocation of new objects.
I've used them a bit, and have come up with some other useful additions to the built-in ones: https://github.com/davedelong/DDMathParser/blob/master/DDMathParser/_DDDecimalFunctions.m
I would recommend using NSDecimals unless you can come up with a compelling reason not to.
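For example, a minimal sketch of the struct-based approach, mixing an int with a high-precision decimal (the values and names are purely illustrative):
NSDecimal price = [[NSDecimalNumber decimalNumberWithString:@"0.0000000123456789"] decimalValue];
NSDecimal count = [@1000000 decimalValue];   // any NSNumber exposes -decimalValue

NSDecimal total;
NSCalculationError err = NSDecimalMultiply(&total, &price, &count, NSRoundPlain);
if (err == NSCalculationNoError) {
    NSLog(@"%@", NSDecimalString(&total, nil));   // 0.0123456789
}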

Thanks guys!
I finally used the double type, which is enough for me. I was confused because I was printing my number with NSLog and %f, and the output wasn't what I wanted. I used %e instead to check that the number in scientific notation was the right one, and it is. So I just do all my calculations with double and it works.
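For anyone hitting the same confusion, a quick illustration of the formatting difference (the value is made up):
double verySmall = 0.00000000001;
NSLog(@"%f", verySmall);   // prints 0.000000 -- %f shows six decimal places by default
NSLog(@"%e", verySmall);   // prints 1.000000e-11 -- the value was correct all along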

Related

What's the correct number type for financial variables in Swift?

I am used to programming in Java, where the BigDecimal type is the best choice for storing financial values, since there are ways to specify rounding rules for calculations.
In the latest Swift version (2.1 at the time of writing), which native type best supports correct calculations and rounding for financial values? Is there any equivalent of Java's BigDecimal, or anything similar?
You can use NSDecimal or NSDecimalNumber for high-precision base-10 numbers (up to 38 significant digits, with explicit control over rounding).
See NSDecimalNumber's reference page for more.
If you are concerned about storing, for example, $1.23 in a float or double and about the inaccuracies you will get from floating-point precision errors (that is, if you actually want to stick to integer amounts of cents or pence, or whatever else), then store your value as an integer and use the pence/cent as your unit instead of pounds/dollars. You will then be 100% accurate when dealing in integer amounts of pence/cents, and it's easier than using a class like NSDecimalNumber. The display of that value is then purely a presentation issue.
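A rough sketch of that idea (in Objective-C, since the rest of this page uses it; the names and amounts are made up):
NSInteger priceInCents = 123;                 // $1.23, stored exactly as 123 cents
NSInteger totalInCents = priceInCents * 3;    // exact integer arithmetic: 369 cents
NSString *display = [NSString stringWithFormat:@"$%ld.%02ld",
                     (long)(totalInCents / 100), (long)(totalInCents % 100)];
// display is @"$3.69"; formatting is purely a presentation concern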
If however you need to deal with fractions of a pence/cent, then NSDecimalNumber is probably what you want.
I recommend looking into how classes like this actually work, and into how floating-point numbers work, because understanding them will help you see why precision errors arise, what the precision limits of a class like NSDecimalNumber are, why it's better for storing decimal numbers, and why floats are good at storing numbers like 17/262144 (i.e. where the denominator is a power of two) but can't store 1/100.
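A small illustration of that last point (the printed digits are approximate and may vary slightly by platform):
double dyadic = 17.0 / 262144.0;   // denominator is 2^18, so this is exact in binary
double cent   = 1.0 / 100.0;       // 1/100 has no finite binary representation
NSLog(@"%.20f", dyadic);   // 0.00006484985351562500 -- exact
NSLog(@"%.20f", cent);     // about 0.01000000000000000021 -- not exact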

How should I store percentage values in Core Data that are read off UISliders?

Sliders are my app's primary user interaction element (go figure...). I use them to record percentage values which are then stored as Readings in my Core Data store. Based on the nature of percentage values, I would store them as decimal values between 0 and 1, and set the sliders to have a range from 0 to 1 and fire their value-changed actions continuously. When I need to display them I fetch them from the data store and display them as typical percentage values, e.g. 67%.
Now here's the catch: the value property of UISlider is of type float. That means I will run into rounding errors from the get-go. I'm nutty for accuracy, so I'm hoping to reduce the error margin as much as possible when I deal with them throughout my app.
What options are there for managing the percentage values which I'm reading off my sliders, storing in Core Data and displaying in the rest of my app? Or better yet, which of these options that I've come up with would be best for my app in terms of code maintainability, memory usage/performance and accuracy of the values obtained?
Use raw floats or doubles
Actually this is BAD, obviously because of rounding errors. Even 0.64 turns into 0.63 at some point in time (I've tested with my sliders and these come up a lot). With a difference potentially as large as 0.01, my app is definitely intolerant of this. Why am I even writing this as an option anyway?
Set slider range to between 0 and 100, read them, round up/down and store as integers
This is an interesting choice given that I'll only ever display the values as ##% and not 0.##. I won't need to perform arithmetic on these values either, so doing this looks alright:
// Recording and storing
// Slider range is 0 to 100
int percentage = (int) floor(slider.value); // Or ceil()
[reading setValue:[NSNumber numberWithInt:percentage]]; // Value is an integer
// Displaying
NSNumber *val = reading.value;
[readingLabel setText:[NSString stringWithFormat:@"%d%%", [val intValue]]];
This is not going to completely take away the rounding errors when I first read them off my sliders (they're still floats!), but floor() or ceil() should cut it, I think. I'm not entirely certain though; maybe someone here can provide more insight, which is the point of me asking this question.
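A quick illustration of why rounding to the nearest integer is generally safer than flooring float input, using the 0-to-1 scaling from the 0.64 test above (the value is made up):
float sliderValue = 0.64f;                        // actually stored as ~0.639999986
int floored = (int)floor(sliderValue * 100.0);    // 63
int rounded = (int)lround(sliderValue * 100.0);   // 64
NSLog(@"floor: %d, round: %d", floored, rounded);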
Convert to, and store as, NSDecimalNumber objects
I've taken a look at the NSDecimalNumber class — percentages are base-10 factors after all — but given that it produces immutable objects, I'm not too sure I like the idea of writing long method calls all over my code repeatedly and creating a crapton of autoreleased objects everywhere.
Even creating NSDecimalNumbers out of primitives is a pain. Currently I can only think of doing this:
// Recording and storing
// Slider range is 0 to 1
NSDecimalNumber *decimalValue = [NSDecimalNumber decimalNumberWithString:
[NSString stringWithFormat:@"%0.2f",
slider.value]];
[reading setValue:decimalValue]; // Value is a decimal number
Multiplying them by 100 since I display them as ##% (like I said above) also gets really tedious and creates unnecessary extra objects which are then converted to NSStrings. I would go on, but I'd likely turn this post into a rant which is definitely not what I'm going for.
With that said, semantically NSDecimalNumber should make the most sense because, as I said above, I believe percentages are meant to be stored as decimal values between 0 and 1. However, looking at how my app works with percentage values, I don't know whether to choose multiplying them by 100 and storing as integers or working with NSDecimalNumber.
You can probably see that I'm already leaning slightly toward the second option (x100, use integers) because it's just more convenient and looks to be slightly better for performance. However, I'd like to see if anyone thinks the third is a better (more future-proof?) one.
By the way, I don't mind modifying my data model to change the data type of the value attribute in my Reading entity, and modifying my existing code to accommodate the changes. I haven't done too much just yet because I've spent the rest of my time worrying about this.
Which of the above options do you think I should go for?
If all you need to do is read from a slider, save the values, and present them, dealing with primitive integers sourced from the scaled, rounded float slider input would be what I'd recommend. A slider isn't all that precise an input method, anyway, so your users won't notice if the rounding doesn't exactly match the pixel placement of the slider head.
I'd use the same sort of reasoning here as you do when determining the number of significant digits from a calculation that takes real-world data as its input (like a temperature reading, etc.). There's no sense in reporting a calculation to four significant digits if your input source only has a precision of two digits.
NSDecimals and NSDecimalNumbers are needed when you want to perform more extended calculations on decimal values, but want to avoid floating point errors. For example, I use them when running high-precision calculations in my application or if I need to manipulate currency in some way. Of the two, I stick with NSDecimal for performance reasons, using NSDecimalNumber on the occasions that I need to interact with Core Data or need to import numerical values into NSDecimal structs.
In this case, it seems like NSDecimal would be overkill, because you're not really adjusting the slider values in any way, just displaying them onscreen. If you did need to perform later manipulations (cutting in half, running through a formula, etc.), I'd recommend rounding the slider to an integer representation, creating an NSDecimal struct from that, performing calculations as NSDecimals, and then using the string output routines for NSDecimal / NSDecimalNumber to display the result onscreen. You can then easily save the calculated value as an NSDecimalNumber in Core Data or as a string representation in SQLite or another file.
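A rough sketch of that flow, assuming a 0-to-100 slider and using halving as the example manipulation (the variable names are placeholders):
// 1. Round the float slider value to a clean integer.
int percentage = (int)lroundf(slider.value);

// 2. Build an exact decimal from that integer.
NSDecimalNumber *stored = [NSDecimalNumber decimalNumberWithMantissa:percentage
                                                            exponent:0
                                                          isNegative:NO];

// 3. Do any later math on NSDecimal structs to avoid binary floating-point error.
NSDecimal half, value = [stored decimalValue], two = [@2 decimalValue];
NSDecimalDivide(&half, &value, &two, NSRoundPlain);

// 4. Use the built-in string conversion for display; `stored` can go straight into Core Data.
NSLog(@"%@%% (half: %@%%)", stored, NSDecimalString(&half, nil));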
Also, I wouldn't worry too much about performance on simple calculations like this until something like Instruments tells you they are a hotspot. Odds are, these calculations with NSDecimal, etc. won't be.

double_t in C99

I just read that C99 has double_t, which should be at least as wide as double. Does this imply that it gives more precision digits after the decimal place? More than the usual 15 digits for double?
Secondly, how to use it: Is only including
#include <float.h>
enough? I read that one has to set FLT_EVAL_METHOD to 2 for long double. How do I do this? As I work with numerical methods, I would like maximum precision without using an arbitrary-precision library.
Thanks a lot...
No. double_t is at least as wide as double; i.e., it might be the same as double. Footnote 190 in the C99 standard makes the intent clear:
The types float_t and double_t are intended to be the implementation's most efficient types at least as wide as float and double, respectively.
As Michael Burr noted, you can't set FLT_EVAL_METHOD.
If you want the widest floating-point type on any system available using only C99, use long double. Just be aware that on some platforms it will be the same as double (and could even be the same as float).
Also, if you "work with numerical methods", you should be aware that for many (most even) numerical methods, the approximation error of the method is vastly larger than the rounding error of double precision, so there's often no benefit to using wider types. Exceptions exist, of course. What type of numerical methods are you working on, specifically?
Edit: seriously, either (a) just use long double and call it a day or (b) take a few weeks to learn about how floating-point is actually implemented on the platforms that you're targeting, and what the actual accuracy requirements are for the algorithms that you're implementing.
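For what it's worth, a tiny C99 sketch of option (a); how wide long double actually is depends entirely on the platform:
#include <stdio.h>

int main(void) {
    long double x = 1.0L / 3.0L;   /* the widest standard floating type C99 guarantees */
    printf("long double: %.21Lf (%zu bytes)\n", x, sizeof(long double));
    printf("double:      %.21f (%zu bytes)\n", 1.0 / 3.0, sizeof(double));
    return 0;
}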
Note that you don't get to set FLT_EVAL_METHOD - it is set by the compiler's headers to let you determine how the library does certain things with floating point.
If your code is very sensitive to exactly how floating point operations are performed, you can use the value of that macro to conditionally compile code to handle those differences that might be important to you.
So for example, in general you know that double_t will be at least a double in all cases. If you want your code to do something different if double_t is a long double then your code can test if FLT_EVAL_METHOD == 2 and act accordingly.
Note that if FLT_EVAL_METHOD is something other than 0, 1, or 2, you'll need to look at the compiler's documentation to know exactly what type double_t is.
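A minimal sketch of that kind of check (per C99, float.h defines FLT_EVAL_METHOD and math.h defines float_t and double_t; the printed messages are just illustrative):
#include <float.h>  /* FLT_EVAL_METHOD */
#include <math.h>   /* float_t, double_t */
#include <stdio.h>

int main(void) {
#if FLT_EVAL_METHOD == 2
    printf("float_t and double_t are long double\n");
#elif FLT_EVAL_METHOD == 1
    printf("float_t and double_t are double\n");
#elif FLT_EVAL_METHOD == 0
    printf("float_t is float, double_t is double\n");
#else
    printf("implementation-defined evaluation method: %d\n", FLT_EVAL_METHOD);
#endif
    printf("sizeof(double_t) = %zu bytes\n", sizeof(double_t));
    return 0;
}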
double_t may be defined by typedef double double_t; — of course, if you plan to rely on implementation specifics, you need to look at your own implementation.

How to make sure an NSDecimalNumber represents no fractional digits?

I want to do some fairly complex arithmetics that require very high precision, i.e. calculating
10000000000 + 0.00000000001 = 10000000000.00000000001
10000000000.00000000001 * 3 = 30000000000.00000000003
I want to use NSDecimalNumber for this kind of math, but the problem is: How to feed it with these values?
The documentation says:
- (id)initWithMantissa:(unsigned long long)mantissa exponent:(short)exponent isNegative:(BOOL)flag
The first problem I see is the mantissa. It requires an unsigned long long. As I understand that data type, it is a floating point, right? So if it is, at this point the entered value is already "dirty". It may have unwanted fractional digits somewhere at the end of it. I couldn't find good documentation on unsigned long long from Apple, but I remember a code snippet where someone fed the mantissa with a CGFloat, so that's why I assume it's a floating-point type.
Well, if it is indeed some super floating-point data type, then the hard question is: how do I get a clean, really clean integer into this thing? So clean that I could multiply it by half a trillion without getting wrong results?
Are there good tutorials on the usage of NSDecimalNumber in practice?
Edit: No problem here! Thanks everyone!
If you really are concerned about feeding in less precise types, I'd recommend using -initWithString:, -initWithString:locale:, +decimalNumberWithString:, or +decimalNumberWithString:locale:. Using the string description avoids ever having to convert the numerical representation to a floating point or other numerical type before generating your NSDecimalNumber.
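For example, with the values from the question (decimalNumberByAdding: and decimalNumberByMultiplyingBy: are NSDecimalNumber's standard arithmetic methods):
NSDecimalNumber *big  = [NSDecimalNumber decimalNumberWithString:@"10000000000"];
NSDecimalNumber *tiny = [NSDecimalNumber decimalNumberWithString:@"0.00000000001"];
NSDecimalNumber *sum  = [big decimalNumberByAdding:tiny];
NSDecimalNumber *product = [sum decimalNumberByMultiplyingBy:
                                [NSDecimalNumber decimalNumberWithString:@"3"]];
NSLog(@"%@", sum);     // 10000000000.00000000001
NSLog(@"%@", product); // 30000000000.00000000003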

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there being problems with writing numbers like this as plain literals in source code. Do I have to wrap it in some function or directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers directly?
The problem is with floating-point arithmetic, not with writing literals in source code. It is not designed to be exact. The best way around it is not to use the built-in double: use integers only (if possible) with power-of-10 coefficients, sum everything up, and display the final useful figure after rounding.
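A tiny sketch of that idea, assuming nano-units (1e-9) are chosen as the integer base unit (the names are made up):
long long verySmallNano = 1;                   // 0.000000001, represented exactly as 1 nano-unit
long long timesTenNano  = verySmallNano * 10;  // exact integer arithmetic
NSLog(@"%.9f", timesTenNano / 1e9);            // prints 0.000000010; conversion happens only for display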
Standard floating-point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to perform math on. They are imprecise at surprisingly small precision levels. But fast.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to the java.math.BigDecimal class in Java. That class exists precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
As usual, you need to read up on how floating-point numbers work on computers in order to understand this. You cannot expect to store any arbitrary fraction with perfect results, just as you can't expect to store arbitrarily large integers. There is only a finite number of bits underneath, and that limits what can be represented exactly.