What's the most precise data type for floating point calculations on iPhone OS?

I always thought it was double, until I accidentally hit floo+ESC and it told me there is a floorl(<#long double #>) function. So long double is the solution to all big inaccuracy problems? ;-)
Or is there even something more precise than that?

One thing to be aware of is that long double simply acts as double on the iPhone hardware; you don't get any additional precision from the larger type. It will give you more precision in the Simulator, because there you're running on a Mac, which can be confusing.
As is noted here (and by other commenters), NSDecimal or NSDecimalNumber is the way to go for precision (up to 38 significant digits), and calculations performed with it use true decimal math, not binary floating point. This avoids many of the errors that you see with normal IEEE 754 math.
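As a quick illustration of the difference (a minimal sketch, not from the thread itself): adding 0.1 to itself ten times is exact in decimal math but accumulates error in binary doubles.

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // Binary double: 0.1 has no exact representation, so error accumulates.
            double d = 0.0;
            for (int i = 0; i < 10; i++) d += 0.1;
            NSLog(@"double:          %.17f", d); // 0.99999999999999989

            // NSDecimalNumber: true decimal math, 0.1 is represented exactly.
            NSDecimalNumber *sum = [NSDecimalNumber zero];
            NSDecimalNumber *tenth = [NSDecimalNumber decimalNumberWithString:@"0.1"];
            for (int i = 0; i < 10; i++) sum = [sum decimalNumberByAdding:tenth];
            NSLog(@"NSDecimalNumber: %@", sum); // exactly 1
        }
        return 0;
    }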

I think long double is the limit, but it really depends on what you want to do. Have you actually been running into inaccuracy problems?

For even more precision, look at the NSDecimalNumber class. As the other answer says: have you actually found any inaccuracy problems? Also, keep in mind that more accuracy will be slower.

Adding more precision is one approach to solving the problem, but the real problem sometimes (usually?) lies in the way you are performing the computation. In that case, I prescribe a healthy dose of RTFM. Any primer on FP arithmetic will cover why certain forms of equations are disastrous and how to avoid the gaping maw of oblivion.
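To make that concrete, here is one classic "disastrous form" such a primer covers (a minimal sketch, not from the original answer): subtracting two nearly equal numbers throws away all the significant digits, while an algebraically equivalent rearrangement keeps them.

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double x = 1.0e-8;

        // Naive form: cos(x) rounds to exactly 1.0 for tiny x, so the
        // subtraction cancels catastrophically and every digit is lost.
        double naive = 1.0 - cos(x);

        // Rearranged with the identity 1 - cos(x) = 2*sin^2(x/2):
        // no subtraction of nearly equal values, full precision kept.
        double s = sin(x / 2.0);
        double stable = 2.0 * s * s;

        printf("naive:  %.17g\n", naive);   // 0
        printf("stable: %.17g\n", stable);  // ~5e-17 (correct)
        return 0;
    }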

Related

Is Cross-Platform Double Math Determinism Possible with Rounding?

I understand that there can be a .000000000000001 margin of error for double math, and this is made worse by multiplication, which enlarges the margin of error. With that said, is it possible to round off every calculation to a significant digit (maybe 4 decimal places) to achieve consistency across all platforms? Would it simply be more efficient to use decimal math, or will decimal math require similar rounding?
I will be using this for my lockstep RTS game, which requires a deterministic physics engine for synchronous multiplayer. I'm using C#. Some calculations I wish to perform include Sqrt, Sin, and Pow from the System.Math library.
I've actually been thinking about the whole matter in the wrong way. Instead of trying to minimize errors with greater accuracy (and more overhead), I should just use a type that stores and operates deterministically. I used the answer here: Fixed point math in c#? which helped me create a fixed point type that works perfectly and efficiently.
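The linked answer is in C#, but the idea is language-agnostic. A minimal sketch of such a type in C (the fix_* names are mine, and a real implementation needs a wider intermediate type in fix_mul and fix_div to avoid overflow on large values):

    #include <stdint.h>
    #include <stdio.h>

    // 64-bit fixed point with 16 fractional bits: every value is an
    // integer scaled by 2^16, so add/sub/mul/div are pure integer math
    // and therefore bit-identical on every platform and compiler.
    typedef int64_t fixed;
    #define FIX_SHIFT 16
    #define FIX_ONE   ((fixed)1 << FIX_SHIFT)

    static fixed  fix_from_int(int64_t i)   { return i << FIX_SHIFT; }
    static double fix_to_double(fixed f)    { return (double)f / FIX_ONE; }
    static fixed  fix_mul(fixed a, fixed b) { return (a * b) >> FIX_SHIFT; }
    static fixed  fix_div(fixed a, fixed b) { return (a << FIX_SHIFT) / b; }

    int main(void) {
        fixed half = fix_div(fix_from_int(1), fix_from_int(2)); // 0.5
        fixed v    = fix_mul(half, fix_from_int(6));            // 3.0
        printf("%f\n", fix_to_double(v));                       // 3.000000
        return 0;
    }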

Compilation optimization for iPhone : floating point or fixed point?

I'm building a library for the iPhone (speex, but I'm sure it will apply to a lot of other libs too) and the make script has an option to use fixed point instead of floating point.
As the iPhone ARM processor has the VFP extension and performs floating point calculations very well, do you think it's a better choice to use the fixed point option?
If someone has already benchmarked this and wants to share, I would be really thankful.
Well, it depends on the setup of your application. Here are some guidelines:
First try setting the optimization level to -Os (Fastest, Smallest)
Turn on Relax IEEE Compliance
If your application can easily process floating point numbers in contiguous memory locations independently, you should look at the ARM NEON intrinsics and assembly instructions; they can process up to 4 floating point numbers in a single instruction (see the sketch after this list).
If you are already heavily using floating point math, try switching some of your logic to fixed point (but keep in mind that moving from a NEON register to an integer register results in a full pipeline stall)
If you are already heavily using integer math, try changing some of your logic to floating point math.
Remember to profile before optimization
And above all, better algorithms will always beat micro-optimizations such as the above.
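For the NEON point above, here is a minimal sketch of what "up to 4 floating point numbers in a single instruction" looks like with the arm_neon.h intrinsics (the function is my own example, and it assumes n is a multiple of 4; a real version would handle the tail):

    #include <arm_neon.h>

    // Scale an array of floats by a constant, four lanes at a time.
    void scale_floats(float *data, int n, float factor) {
        float32x4_t vfactor = vdupq_n_f32(factor); // broadcast to 4 lanes
        for (int i = 0; i < n; i += 4) {
            float32x4_t v = vld1q_f32(data + i);   // load 4 floats
            v = vmulq_f32(v, vfactor);             // 4 multiplies at once
            vst1q_f32(data + i, v);                // store 4 floats
        }
    }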
If you are dealing with large blocks of sequential data, NEON is definitely the way to go.
Float or fixed, that's a good question. NEON is somewhat faster at dealing with fixed point, but I'd keep the native input format, since conversions take time and, eventually, extra memory.
Even if the lib offers different output formats as an option, that almost always means lib-internal conversions. So I guess float is the native format in this case. Stick to it.
No one prevents you from micro-optimizing better algorithms. And usually, the better the algorithm, the more performance can be gained through micro-optimizations, due to the pipelining of modern machines.
I'd stay away from intrinsics though. There are so many posts on the net complaining about intrinsics doing something crazy, especially when dealing with immediate values.
It can and will get very troublesome, and you can hardly optimize anything with intrinsics either.

Double vs float on the iPhone

I have just heard that the iPhone cannot do double natively, thereby making doubles much slower than regular float.
Is this true? Evidence?
I am very interested in the issue because my program needs high precision calculations, and I will have to compromise on speed.
The iPhone can do both single and double precision arithmetic in hardware. On the 1176 (original iPhone and iPhone3G), they operate at approximately the same speed, though you can fit more single-precision data in the caches. On the Cortex-A8 (iPhone3GS, iPhone4 and iPad), single-precision arithmetic is done on the NEON unit instead of VFP, and is substantially faster.
Make sure to turn off thumb mode in your compile settings for armv6 if you are doing intensive floating-point computation.
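If you keep your build settings in an xcconfig file, that might look like the following (GCC_THUMB_SUPPORT is the Xcode build setting for the GCC-era toolchain; the per-architecture variant shown is a common pattern, sketched here rather than taken from the thread):

    // Compile as ARM rather than Thumb: Thumb-1 on armv6 cannot access
    // the VFP registers, so FP-heavy code pays a mode-switch penalty.
    GCC_THUMB_SUPPORT = NO

    // Or disable Thumb only for armv6; Thumb-2 on armv7 can use VFP/NEON.
    GCC_THUMB_SUPPORT[arch=armv6] = NO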
This slide show gives insight into why there isn't good floating point support and why there is (the vector floating point unit). Apparently, it is important to check the "thumb mode" setting, which influences whether or not floating point support is enabled; this is not always an improvement. It also shows how to find the right instructions in the assembly code.
It also depends on which version of the phone you want your code to run on. The most recent one seems "more capable" at floating point math.
EDIT: here's an interesting read on floating point optimizations on ARM with VFP and NEON.
The ARM1176JZF-S manual says that it supports double precision floating point numbers. You should be in good shape. Here's a link to the PDF documentation. Later iPhones are Cortex chips, and certainly shouldn't be less capable.

Is it a good idea to use NSDecimalNumber for floating point arithmetics instead of plain double?

I wonder what the point of NSDecimalNumber is. It offers some arithmetic methods, but why should I use NSDecimalNumber and not just double or NSNumber? Did Apple take care of some floating point arithmetic ugliness there? Would it make life easier when making heavy use of high precision and big floating point math?
This all depends on your needs.
It is a trade-off between precision, speed, and size of data.
If you are writing an accounting application, you cannot lose any precision and so might well use NSDecimalNumber.
If you are doing complex numerical analysis, speed could matter, and then NSDecimalNumber would be too slow. But even in that case, your analysis should look at the precision and errors you can afford, and there could be cases where you need more precision than doubles etc. give you.
NSNumber is a separate case: it is a class cluster that allows storage of C-type numbers in other objects, and has other uses in Cocoa.
If your software deals with money, or other non-integer numbers of interest to accountants, you are well advised to use decimal numbers for that (rather than the binary ones that the underlying HW is optimized to process); that's why all sorts of general purpose languages and databases bend over backwards to support decimal non-integer numbers, not just binary ones.
Rounding issues with binary non-integers might easily result in fractions-of-a-cent discrepancies that, at the limit, might even land you in legal trouble, and, more realistically, will be perceived by accountants and others dealing with money &c as errors in your program, no matter how staunchly you may argue otherwise!-)
NSDecimalNumber is a fixed precision (and scale) integer scaled to a certain size to represent fractional numbers. This is a little different from a floating point number (where the point, obviously, floats...)
As an example, say you need to represent money from 0.00 to 999.99; you could store this in an integer from 0 to 99999 as an amount in pennies. The scale (in digits) is 2 and the precision is 5. With a floating point number of precision 5, you could represent anything from .00001 up to 99999., but not 999.999, for example.
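Expressed in code, that 999.99 example maps directly onto NSDecimalNumber's mantissa/exponent initializer (a minimal sketch):

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // 999.99 stored as the integer 99999 scaled by 10^-2:
            // precision 5 digits, scale 2, exactly as described above.
            NSDecimalNumber *price =
                [[NSDecimalNumber alloc] initWithMantissa:99999
                                                 exponent:-2
                                               isNegative:NO];
            NSLog(@"%@", price); // 999.99, exactly
        }
        return 0;
    }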

How to make sure an NSDecimalNumber represents no fractional digits?

I want to do some fairly complex arithmetic that requires very high precision, i.e. calculating
10000000000 + 0.00000000001 = 10000000000.00000000001
10000000000.00000000001 * 3 = 30000000000.00000000003
I want to use NSDecimalNumber for this kind of math, but the problem is: How to feed it with these values?
The documentation says:
- (id)initWithMantissa:(unsigned long long)mantissa exponent:(short)exponent isNegative:(BOOL)flag
The first problem I see is the mantissa. It requires an unsigned long long. As I understand that data type, it is a floating point, right? So if it is, at this point the entered value is already "dirty". It may have unwanted fractional digits somewhere at the end of it. I couldn't find good documentation on "unsigned long long" from Apple, but I remember a code snippet where someone fed the mantissa with a CGFloat, so that's why I assume it's a floating-point type.
Well, if it is indeed some super floating point datatype, then the hard question is: how do I get a clean, really clean integer into this thing? So clean that I could multiply it by half a trillion without getting wrong results?
Are there good tutorials on the usage of NSDecimalNumber in practise?
Edit: No problem here! Thanks everyone!
If you really are concerned about feeding in less precise types, I'd recommend using -initWithString:, -initWithString:locale:, +decimalNumberWithString:, or +decimalNumberWithString:locale:. Using the string description avoids ever having to convert the numerical representation to a floating point or other numerical type before generating your NSDecimalNumber.
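For the numbers in the question, that might look like this (a minimal sketch using the string-based factory methods):

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // Build every operand from a string so no double ever touches it.
            NSDecimalNumber *big   = [NSDecimalNumber decimalNumberWithString:@"10000000000"];
            NSDecimalNumber *tiny  = [NSDecimalNumber decimalNumberWithString:@"0.00000000001"];
            NSDecimalNumber *three = [NSDecimalNumber decimalNumberWithString:@"3"];

            NSDecimalNumber *sum     = [big decimalNumberByAdding:tiny];
            NSDecimalNumber *product = [sum decimalNumberByMultiplyingBy:three];

            NSLog(@"%@", sum);     // 10000000000.00000000001
            NSLog(@"%@", product); // 30000000000.00000000003
        }
        return 0;
    }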