From a Scala application I'm using the Play Framework to write JSON that includes floating-point values, but I want to round the floats as they are serialized. How can this be done?
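One way this is often handled (a sketch only, assuming Play's play-json Writes API; the two-decimal scale and HALF_UP rounding mode are illustrative choices, not anything the question specifies) is to supply a custom Writes[Double] that routes the value through BigDecimal before it becomes a JsNumber:

```scala
import play.api.libs.json.{JsNumber, Writes}

// Illustrative only: round every Double to 2 decimal places as it is serialized.
implicit val roundedDoubleWrites: Writes[Double] = Writes { d =>
  // Going through BigDecimal makes the rounding mode explicit.
  JsNumber(BigDecimal(d).setScale(2, BigDecimal.RoundingMode.HALF_UP))
}

// With this implicit in scope, Json.toJson should pick it up for Double values:
// Json.toJson(3.14159) == JsNumber(BigDecimal("3.14"))
```

Because the local implicit takes precedence over Play's default Writes[Double], case-class Writes derived in the same scope should pick up the rounding as well.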
I am used to programming in Java, where the BigDecimal type is the best choice for storing financial values, since there are ways to specify rounding rules for calculations.
In the latest Swift version (2.1 at the time this post is written), which native type best supports correct calculations and rounding for financial values? Is there any equivalent to Java's BigDecimal, or anything similar?
You can use NSDecimal or NSDecimalNumber for arbitrary precision numbers.
See more on NSDecimalNumber's reference page.
If you are concerned about storing, for example, $1.23 in a float or double and about the inaccuracies you will get from floating-point precision errors (that is, if you actually want to stick to whole amounts of cents or pence, or whatever else), then use an integer to store your value, with the pence/cent as your unit instead of pounds/dollars. You will then be 100% accurate when dealing in whole amounts of pence/cents, and it's easier than using a class like NSDecimalNumber. How that value is displayed is then purely a presentation issue.
If, however, you need to deal with fractions of a cent or penny, then NSDecimalNumber is probably what you want.
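To make the whole-cents idea concrete, here is a minimal sketch (written in Scala only because this page spans several languages; the Cents type and its methods are made up for illustration):

```scala
// Sketch of the whole-cents approach: the amount is a Long number of cents,
// so addition and comparison are exact. (Negative amounts are ignored here.)
final case class Cents(value: Long) {
  def +(other: Cents): Cents = Cents(value + other.value)
  // Formatting is purely a presentation concern.
  def display: String = f"$$${value / 100}%d.${value % 100}%02d"
}

val price = Cents(123)          // $1.23, stored exactly
val total = price + Cents(77)   // 200 cents, no rounding error possible
println(total.display)          // $2.00
```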
I recommend looking into how classes like this actually work, and into how floating-point numbers work, too. Understanding this will help you see why precision errors arise, what the precision limits of a class like NSDecimalNumber are, why it's better for storing decimal numbers, and why floats are good at storing numbers like 17/262144 (i.e. where the denominator is a power of two) but can't store 1/100 exactly.
I understand that double math can carry a margin of error of around .000000000000001, and that multiplication makes this margin of error larger. With that said, is it possible to round off every calculation to a certain number of significant digits (maybe 4 decimal places) to achieve consistency across all platforms? Would it simply be more efficient to use decimal math, or would decimal math require similar rounding?
I will be using this for my lockstep RTS game, which requires a deterministic physics engine for synchronous multiplayer. I'm using C#. Some calculations I wish to perform include Sqrt, Sin, and Pow from the System.Math library.
I've actually been thinking about the whole matter in the wrong way. Instead of trying to minimize errors with greater accuracy (and more overhead), I should just use a type that stores and operates deterministically. The answer here: Fixed point math in c#? helped me create a fixed-point type that works perfectly and efficiently.
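For readers who land here, the shape of such a fixed-point type looks roughly like the sketch below (given in Scala rather than C# purely for illustration; the linked answer has a fuller C# version). The value is a 64-bit integer holding the number times 2^32, so identical inputs give bit-identical results on every platform.

```scala
// Minimal Q32.32 fixed-point sketch: 'raw' holds the value times 2^32 in a Long.
// All arithmetic stays in integers, so results are bit-identical on every platform.
final case class Fixed(raw: Long) {
  def +(o: Fixed): Fixed = Fixed(raw + o.raw)
  def -(o: Fixed): Fixed = Fixed(raw - o.raw)
  def *(o: Fixed): Fixed =
    // Widen to BigInt so the intermediate product cannot overflow.
    Fixed(((BigInt(raw) * BigInt(o.raw)) >> 32).toLong)
  def toDouble: Double = raw.toDouble / Fixed.One
}

object Fixed {
  val One: Long = 1L << 32
  def fromInt(i: Int): Fixed = Fixed(i.toLong << 32)
}

// 3 * 2 == 6, computed entirely with integer operations.
println((Fixed.fromInt(3) * Fixed.fromInt(2)).toDouble)   // 6.0
```

Functions like Sqrt, Sin, and Pow from System.Math still operate on doubles, so a deterministic engine needs its own integer-based versions of those (lookup tables or Newton-style iteration are the usual choices).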
I have some inputs on my site representing floating-point numbers with up to ten digits of decimal precision. At some point, in the client-side validation code, I need to compare a couple of those values to see whether they are equal, and here, as you would expect, the nature of IEEE 754 makes that simple check fail with things like (2.0000000000 == 2.0000000001) = true.
I could break the floating-point number into two longs, one for each side of the decimal point, make each side a 64-bit long, and do my comparisons manually, but it looks so ugly!
Is there any decent JavaScript library for handling arbitrary (or at least guaranteed) precision floating-point numbers?
Thanks in advance!
PS: A GWT-based solution has a ++
There is the GWT-MATH library at http://code.google.com/p/gwt-math/.
However, I warn you: it's a GWT JSNI overlay of an automated Java-to-JavaScript conversion of java.math.BigDecimal (actually the old com.ibm.math.BigDecimal).
It works, but speedy it is not. (Nor lean: it will add a good 70k to your project.)
At my workplace, we are working on a simple fixed-point decimal type, but there's nothing worth releasing yet. :(
Use an arbitrary-precision integer library such as silentmatt's javascript-biginteger, which can store and calculate with integers of arbitrary size.
Since you want ten decimal places, you’ll need to store the value n as n×10^10. For example, store 1 as 10000000000 (ten zeroes), 1.5 as 15000000000 (nine zeroes), etc. To display the value to the user, simply place a decimal point in front of the tenth-last character (and then cut off any trailing zeroes if you want).
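As an illustration of that scaling scheme (sketched in Scala with BigInt rather than the JavaScript library above, and with hypothetical parse/display helpers), every value is kept as the integer n×10^10 and the decimal point only appears at display time:

```scala
// Sketch of the "store n as n * 10^10" scheme using arbitrary-precision integers.
// (Sign handling and input validation are omitted for brevity.)
val Scale = BigInt(10).pow(10)

def parse(s: String): BigInt = {
  val (whole, frac) = s.span(_ != '.')   // "1.5" -> ("1", ".5")
  BigInt(whole) * Scale + BigInt((frac.drop(1) + "0" * 10).take(10))
}

def display(n: BigInt): String = {
  val digits = n.toString.reverse.padTo(11, '0').reverse
  s"${digits.dropRight(10)}.${digits.takeRight(10)}"
}

println(parse("1.5"))                                     // 15000000000
println(display(parse("1.5")))                            // 1.5000000000
println(parse("2.0000000000") == parse("2.0000000001"))   // false: exact comparison
```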
Alternatively you could store a numerator and a denominator as bigintegers, which would then allow you arbitrarily precise fractional values (but beware – fractional values tend to get very big very quickly).
I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
That's 9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there being problems with writing numbers like this as plain-text literals in source code. Do I have to wrap the value in some function or directive in order to feed it correctly to the compiler, or is it fine to type such small numbers as literals?
The problem is with floating-point arithmetic, not with how the literal is written in source code; floating point is simply not designed to be exact. The best way around it is to avoid the built-in double: use integers only (if possible) with power-of-10 scale factors, sum everything up, and display the final figure only after rounding.
Standard floating-point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to do math on. They become imprecise at surprisingly modest levels of precision, but they are fast. More here.
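One quick way to see what is actually stored (a sketch using java.math.BigDecimal from Scala/Java, purely to illustrate the point; the original question is Objective-C): constructing a BigDecimal directly from the double exposes the exact binary value behind the literal.

```scala
// The literal 0.000000001 is stored as the nearest representable double.
val verySmall: Double = 0.000000001

// Constructing java.math.BigDecimal directly from the double (no rounding step)
// reveals the exact value that is really stored.
println(new java.math.BigDecimal(verySmall))
// Prints a long terminating decimal slightly above 1E-9,
// beginning 1.00000000000000006228...E-9, rather than exactly 1E-9.
```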
If you're dealing with very small numbers, you'll want to see whether Objective-C or Cocoa provides something analogous to Java's java.math.BigDecimal class. That class exists precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
As usual, you need to read stuff like this in order to learn more about how floating-point numbers work on computers. You cannot expect to store any random fraction with perfect results, just as you can't expect to store any arbitrary integer exactly. There are bits at the bottom, and there are only so many of them.
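To put a number on the "limited bits" point (a small Scala/Java sketch, chosen only because this page mixes languages): a double carries 53 significant bits, so large integers start colliding and the gap between adjacent values grows with magnitude.

```scala
// A double has 53 significant bits, so consecutive large integers collapse...
println(9007199254740992.0 == 9007199254740993.0)   // true: 2^53 and 2^53 + 1 round to the same double

// ...and the gap (ulp) between adjacent representable values grows with magnitude.
println(Math.ulp(1.0))      // 2.220446049250313E-16
println(Math.ulp(1.0e15))   // 0.125
```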