Arbitrary precision float numbers in JavaScript (GWT)

I have some inputs on my site representing floating point numbers with up to ten precision digits (in decimal). At some point, in the client-side validation code, I need to compare a couple of those values to see whether they are equal, and here, as you would expect, the intricacies of IEEE 754 make that simple check fail, with things like (2.0000000000 == 2.0000000001) evaluating to true.
I could break the floating point number into two 64-bit longs, one for each side of the decimal point, and do my comparisons manually, but it looks so ugly!
Is there any decent JavaScript library for handling arbitrary (or at least guaranteed) precision floating point numbers?
Thanks in advance!
PS: A GWT based solution has a ++

There is the GWT-MATH library at http://code.google.com/p/gwt-math/.
However, I warn you: it's a GWT JSNI overlay of an automated Java-to-JavaScript conversion of java.math.BigDecimal (actually the old com.ibm.math.BigDecimal).
It works, but speedy it is not. (Nor lean. It will add a good 70 KB to your project.)
At my workplace, we are working on a simple fixed-point decimal, but nothing worth releasing yet. :(

Use an arbitrary precision integer library such as silentmatt’s javascript-biginteger, which can store and calculate with integers of any arbitrary size.
Since you want ten decimal places, you’ll need to store the value n as n×10^10. For example, store 1 as 10000000000 (ten zeroes), 1.5 as 15000000000 (nine zeroes), etc. To display the value to the user, simply place a decimal point in front of the tenth-last character (and then cut off any trailing zeroes if you want).
Alternatively you could store a numerator and a denominator as bigintegers, which would then allow you arbitrarily precise fractional values (but beware – fractional values tend to get very big very quickly).
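If it helps, here is a minimal sketch of that scaled-integer idea in plain Java (which GWT can compile, using java.math.BigInteger); the parseScaled helper is purely illustrative and is not part of the javascript-biginteger library:

import java.math.BigInteger;

// Sketch: parse a decimal string with up to 10 fractional digits into an exact
// value scaled by 10^10, so equality becomes an exact integer comparison.
public class ScaledDecimal {
    static final int SCALE_DIGITS = 10;

    static BigInteger parseScaled(String s) {
        String[] parts = s.split("\\.", 2);
        String fracPart = parts.length > 1 ? parts[1] : "";
        StringBuilder digits = new StringBuilder(parts[0]).append(fracPart);
        for (int i = fracPart.length(); i < SCALE_DIGITS; i++) {
            digits.append('0');   // right-pad the fraction to exactly 10 digits
        }
        return new BigInteger(digits.toString());
    }

    public static void main(String[] args) {
        System.out.println(parseScaled("2.0000000000").equals(parseScaled("2.0000000001"))); // false
        System.out.println(parseScaled("2").equals(parseScaled("2.0000000000")));            // true
    }
}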

Related

Losing accuracy with double division

I am having a problem with a simple division of two integers. I need it to be as accurate as possible, but for some reason the double type is behaving strangely.
For example, if I execute the following code:
double res = (29970.0/1000.0);
The result is 29.969999999999999, when it should be 29.970.
Any idea why this is happening?
Thanks
Any idea why this is happening?
Because the double representation is finite. For example, the IEEE 754 double-precision format has 52 bits for the fraction, so not all real numbers can be represented; some values cannot be exact. In your case the result is about 10^-15 away from the ideal value.
I need it to be as accurate as possible
You shouldn't use doubles, then. In Java, for example, you would use BigDecimal instead (most languages provide a similar facility). double operations are intrinsically inaccurate to some degree, due to the internal representation of floating point numbers.
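For example, a quick sketch of the BigDecimal route in Java (exact here because 29970/1000 has a terminating decimal expansion):

import java.math.BigDecimal;

public class ExactDivision {
    public static void main(String[] args) {
        double res = 29970.0 / 1000.0;
        System.out.println(new BigDecimal(res));   // prints the true stored value, just below 29.97

        // BigDecimal built from strings divides exactly when the quotient terminates.
        BigDecimal exact = new BigDecimal("29970").divide(new BigDecimal("1000"));
        System.out.println(exact);                 // 29.97
    }
}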
Floating point numbers of type float and double are stored in binary format, so they can't hold exact decimal values; the values are quantized instead. If you hypothetically had a number type with only 2 fraction bits, you could represent only multiples of 2^-2: 0.00, 0.25, 0.50, 0.75, and nothing in between.
I need it to be as accurate as possible
There is no silver bullet, but if you only need basic arithmetic operations (which map ℚ to ℚ), and you REALLY want exact results, then your best bet is a rational type composed of two unlimited integers (a.k.a. BigInteger, BigInt, etc.), but even then memory is not infinite, and you must keep that in mind.
For the rest of the question, please read up on fixed-size floating-point numbers; there are plenty of good sources.

Irrational number representation in computer

We can write a simple Rational Number class using two integers representing A/B with B != 0.
If we want to represent an irrational number class (storing and computing), the first thing that comes to mind is to use floating point, which means using the IEEE 754 standard (binary fractions), because an irrational number must be approximated.
Is there another way to write an irrational number class other than using binary fractions (whether or not it conserves memory space)?
I studied jsbeuno's solution using Python: Irrational number representation in any programming language?
He's still using the built-in floating point to store.
This is not homework.
Thank you for your time.
By a cardinality argument, there are far more irrational numbers than rational ones (and the number of IEEE 754 floating point values is finite, at most 2^64 for doubles).
You can represent numbers with something other than fractions (e.g. logarithmically).
jsbeuno is storing the number as a base and a radix and using those when doing calculations with other irrational numbers; he's only using the float representation for output.
If you want to get fancier, you can define the base and the radix as rational numbers (with two integers) as described above, or make them themselves irrational numbers.
To make something thoroughly useful, though, you'll end up replicating a symbolic math package.
You can always use symbolic math, where items are stored exactly as they are and calculations are deferred until they can be performed with precision above some threshold.
For example, say you perform two operations on a rational number like 2: first take the square root, then square the result. With limited precision, you may get something like:
(√2)²
= 1.414213562²
= 1.999999999
However, storing symbolic math would allow you to store the result of √2 as √2 rather than an approximation of it, then realise that (√x)² is equivalent to x, removing the possibility of error.
Now, that obviously involves a more complicated encoding than simple IEEE 754, but it's not impossible to achieve.
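A toy sketch of that idea in Java (nowhere near a real symbolic math package): keep √n as the integer n, and only approximate when you must display it.

// Stores sqrt(n) symbolically as the integer n, so squaring recovers n exactly.
final class SymbolicSqrt {
    final long radicand;                              // the n in sqrt(n), kept exact

    SymbolicSqrt(long radicand) { this.radicand = radicand; }

    long squared()  { return radicand; }              // (sqrt(n))^2 == n, no rounding
    double approx() { return Math.sqrt(radicand); }   // approximation, for display only

    public static void main(String[] args) {
        SymbolicSqrt root2 = new SymbolicSqrt(2);
        System.out.println(root2.squared());                  // 2 (exact)
        System.out.println(root2.approx() * root2.approx());  // 2.0000000000000004 (lossy)
    }
}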

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there were problems with writing numbers like this in plain text in source code. Do I have to wrap it in some function or a directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers in as text?
The problem is with floating point arithmetic, not with writing literals in source code; it is not designed to be exact. The best way around it is to not use the built-in double: use integers only (if possible) with power-of-10 coefficients, sum everything up, and display the final useful figure after rounding.
Standard floating point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to perform math on. They are imprecise at surprisingly small precision levels. But fast.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to the java.math.BigDecimal class in Java. That class is designed precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
As usual, you need to read up on how floating-point numbers work on computers in order to learn more. You cannot expect to be able to store any arbitrary fraction with perfect results, just as you can't expect to store any arbitrary integer: there are only so many bits at the bottom, and their number is limited.

From Fraction to Decimal and Back?

Is it possible to store a fraction like 3/6 in a variable of some sort?
When I try this it only stores the numbers before the /. I know that I can use 2 variables and divide them, but the input is from a single text field. Is this at all possible?
I want to do this because I need to convert the fractions to decimal odds.
A bonus question ;) - Is there an easy way to convert a decimal value to a fraction? Thanks.
Well, in short, there is no true way to recover the original fraction from a decimal.
Example: take 5/10.
You will get 0.5.
Now, 0.5 also translates back to 1/2, 2/4, 3/6, etc.
Your best bet is to store each integer separately, and perform the calculation later on.
The best thing to do is to implement a fraction class (or rational number class). Normally it would take a numerator and denominator and be able to provide a double, and do basic math with other fraction objects. It should also be able to parse and format fractions.
Rational Arithmetic on Rosetta Code looks like something good to start with.
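Here is a rough sketch of such a class in Java (the names are mine, not from any particular library): it parses "3/6", reduces it with gcd, and converts to decimal odds on demand.

import java.math.BigInteger;

// Minimal rational-number sketch: numerator/denominator, reduced on construction.
final class Fraction {
    final BigInteger num, den;

    Fraction(BigInteger num, BigInteger den) {
        if (den.signum() == 0) throw new ArithmeticException("denominator is zero");
        BigInteger g = num.gcd(den);
        this.num = num.divide(g);
        this.den = den.divide(g);
    }

    static Fraction parse(String s) {                 // e.g. "3/6" from a text field
        String[] parts = s.split("/");
        return new Fraction(new BigInteger(parts[0].trim()),
                            new BigInteger(parts[1].trim()));
    }

    Fraction multiply(Fraction other) {               // basic math with other fractions
        return new Fraction(num.multiply(other.num), den.multiply(other.den));
    }

    double toDouble() { return num.doubleValue() / den.doubleValue(); }

    @Override public String toString() { return num + "/" + den; }

    public static void main(String[] args) {
        Fraction f = Fraction.parse("3/6");
        System.out.println(f);            // 1/2 (automatically reduced)
        System.out.println(f.toDouble()); // 0.5 (decimal odds)
    }
}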
I'm afraid there aren't any easy answers for you on this. For creating the fraction, you'll have to split the text field on the '/', convert the two halves to doubles, and divide them out. As for converting it back to a fraction, you'll have to crack open a math textbook and figure it out. (Even worse, a double is not actually precise—you may think it has 0.1 in it, but it really has 0.09999999999999998726 or something like that, so you'll have to choose a precision and go for it, or write some sort of fraction class that's based on a pair of integers.)
The method, as been said, is to store the numerator and denominator, much in the way you can write it on paper.
For C, use the GNU Multiple Precision Arithmetic Library; look for 'rational' in the docs.
Is there an easy way to convert a decimal value to a fraction?
If you limit your decimal values to a certain number of decimal points you could create a lookup table.
0.3333, 1/3
0.6666, 2/3
0.0625, 1/16
0.1250, 1/8
0.2500, 1/4
0.5000, 1/2
0.7500, 3/4
etc...
So if the user inputs 0.5, you pad it with 0s until you have 4 decimal places. You would then use the lookup table to return "1/2". The lookup table should probably be a dictionary of some sort.
It wouldn't be too difficult to do estimating either. For example, if the user entered 0.0624 you could easily select the value in the table closest to that decimal. In this case it would return "1/16."
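A sketch of that lookup-table idea in Java (the table entries and names are illustrative only); it rounds the input to 4 decimal places and returns the nearest fraction in the table:

import java.util.LinkedHashMap;
import java.util.Map;

public class FractionLookup {
    // Decimals rounded to 4 places mapped to fraction strings (not exhaustive).
    private static final Map<Double, String> TABLE = new LinkedHashMap<>();
    static {
        TABLE.put(0.3333, "1/3");
        TABLE.put(0.6666, "2/3");
        TABLE.put(0.0625, "1/16");
        TABLE.put(0.1250, "1/8");
        TABLE.put(0.2500, "1/4");
        TABLE.put(0.5000, "1/2");
        TABLE.put(0.7500, "3/4");
    }

    static String toFraction(double value) {
        // Round to 4 decimal places (the "pad with zeros" step), then pick the closest entry.
        double rounded = Math.round(value * 10000.0) / 10000.0;
        String best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Map.Entry<Double, String> e : TABLE.entrySet()) {
            double d = Math.abs(e.getKey() - rounded);
            if (d < bestDistance) { bestDistance = d; best = e.getValue(); }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(toFraction(0.5));    // 1/2
        System.out.println(toFraction(0.0624)); // 1/16 (nearest match)
    }
}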
Don't let typing/entering the finite set of decimal/fraction pairs scare you (it's really not that large, depending on the precision you choose).
If all else fails, perhaps a Google search would reveal a library that does this sort of thing for you.

Decimal vs Double Speed

I write financial applications where I constantly battle the decision to use a double vs using a decimal.
All of my math works on numbers with no more than 5 decimal places that are no larger than ~100,000. I have a feeling that all of these can be represented as doubles without rounding error, but I have never been sure.
I would go ahead and make the switch from decimals to doubles for the obvious speed advantage, except that at the end of the day, I still use the ToString method to transmit prices to exchanges, and need to make sure it always outputs the number I expect. (89.99 instead of 89.99000000001)
Questions:
Is the speed advantage really as large as naive tests suggest? (~100 times)
Is there a way to guarantee the output from ToString to be what I want? Is this assured by the fact that my number is always representable?
UPDATE: I have to process ~10 billion price updates before my app can run, and I have implemented it with decimal right now for the obvious protective reasons, but it takes ~3 hours just to start up; doubles would dramatically reduce my startup time. Is there a safe way to do it with doubles?
Floating point arithmetic will almost always be significantly faster because it is supported directly by the hardware. So far, almost no widely used hardware supports decimal arithmetic in hardware (although this is changing).
Financial applications should always use decimal numbers; the number of horror stories stemming from using binary floating point in financial applications is endless, and you should be able to find many such examples with a Google search.
While decimal arithmetic may be significantly slower than floating point arithmetic, unless you are spending a significant amount of time processing decimal data the impact on your program is likely to be negligible. As always, do the appropriate profiling before you start worrying about the difference.
There are two separable issues here: whether a double has enough precision to hold all the digits you need, and whether it can represent your numbers exactly.
As for the exact representation, you are right to be cautious, because an exact decimal fraction like 1/10 has no exact binary counterpart. However, if you know that you only need 5 decimal digits of precision, you can use scaled arithmetic in which you operate on numbers multiplied by 10^5. So for example if you want to represent 23.7205 exactly you represent it as 2372050.
Let's see if there is enough precision: double precision gives you 53 bits of precision, which is equivalent to 15+ decimal digits (2^53 ≈ 9.0×10^15). So this would allow you five digits after the decimal point and ten digits before it, which seems ample for your application.
I would put this C code in a .h file:
#include <math.h>   /* for floor() */
#include <stdio.h>  /* for FILE, fprintf() */

typedef double scaled_int;
#define SCALE_FACTOR 1.0e5 /* 10^5: five digits after the decimal point */

static inline scaled_int adds(scaled_int x, scaled_int y) { return x + y; }
static inline scaled_int muls(scaled_int x, scaled_int y) { return x * y / SCALE_FACTOR; }
static inline scaled_int scaled_of_int(int x) { return (scaled_int) x * SCALE_FACTOR; }
static inline int intpart_of_scaled(scaled_int x) { return (int) floor(x / SCALE_FACTOR); }
static inline int fraction_of_scaled(scaled_int x) { return (int) (x - SCALE_FACTOR * intpart_of_scaled(x)); }

static void fprint_scaled(FILE *out, scaled_int x) {
    fprintf(out, "%d.%05d", intpart_of_scaled(x), fraction_of_scaled(x));
}
There are probably a few rough spots but that should be enough to get you started.
There is no overhead for addition; the cost of a multiply or divide doubles.
If you have access to C99, you can also try scaled integer arithmetic using the int64_t 64-bit integer type. Which is faster will depend on your hardware platform.
Always use Decimal for any financial calculations or you will be forever chasing 1-cent rounding errors.
Yes; software arithmetic really is 100 times slower than hardware. Or, at least, it is a lot slower, and a factor of 100, give or take an order of magnitude, is about right. Back in the bad old days when you could not assume that every 80386 had an 80387 floating-point co-processor, then you had software simulation of binary floating point too, and that was slow.
No; you are living in a fantasy land if you think that pure binary floating point can ever exactly represent all decimal numbers. Binary numbers can combine halves, quarters, eighths, etc., but since the exact decimal 0.01 requires two factors of one fifth and one factor of one quarter (1/100 = (1/4)*(1/5)*(1/5)), and since one fifth has no exact representation in binary, you cannot exactly represent all decimal values with binary values. 0.01 is a counter-example which cannot be represented exactly, and it is representative of a huge class of decimal numbers with the same problem.
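A small illustration (written in Java here, but the same holds for C# doubles): adding a hundred binary-approximated pennies does not give exactly 1.0.

public class PennyDrift {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 100; i++) sum += 0.01;   // one hundred "exact" cents
        System.out.println(sum);                      // 1.0000000000000007
        System.out.println(sum == 1.0);               // false
    }
}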
So, you have to decide whether you can deal with the rounding before you call ToString() or whether you need to find some other mechanism that will deal with rounding your results as they are converted to a string. Or you can continue to use decimal arithmetic since it will remain accurate, and it will get faster once machines are released that support the new IEEE 754 decimal arithmetic in hardware.
Obligatory cross-reference: What Every Computer Scientist Should Know About Floating-Point Arithmetic. That's one of many possible URLs.
Information on decimal arithmetic and the new IEEE 754:2008 standard at this Speleotrove site.
Just use a long and multiply by a power of 10. After you're done, divide by the same power of 10.
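For instance, a minimal sketch with 5 decimal places (the scale and the sample figures are made up):

// Sketch of the scaled-long idea: prices with 5 decimal places are stored as
// longs holding value * 100000, so all intermediate arithmetic is exact integer math.
public class ScaledLong {
    static final long SCALE = 100_000L;   // 10^5 for 5 decimal places

    public static void main(String[] args) {
        long price = 8_999_000L;          // represents 89.99000
        long fee   =     1_500L;          // represents  0.01500
        long total = price + fee;         // exact integer addition

        // Divide by the same power of 10 only when formatting for output.
        System.out.printf("%d.%05d%n", total / SCALE, total % SCALE);  // 90.00500
    }
}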
Decimals should always be used for financial calculations. The size of the numbers isn't important.
The easiest way for me to explain is via some C# code.
double one = 3.05;
double two = 0.05;
System.Console.WriteLine((one + two) == 3.1);
That bit of code will print out False even though 3.1 is equal to 3.1...
Same thing...but using decimal:
decimal one = 3.05m;
decimal two = 0.05m;
System.Console.WriteLine((one + two) == 3.1m);
This will now print out True!
If you want to avoid this sort of issue, I recommend you stick with decimals.
I refer you to my answer given to this question.
Use a long, store the smallest amount you need to track, and display the values accordingly.