Long Double on the iPhone

I'm working on an application that requires me to use a long double variable, which, in C/C++/ObjC, should be precise to about 15 significant digits (1.123456789012345). The only issue is that on the iPhone I can only seem to display up to 6 decimal places (1.123456) using
NSString *display = [NSString stringWithFormat:@"%Lf", value];
I was reading that the iPhone bottlenecks these values but haven't found much on it. Does anyone have any ideas how to get it to return 15 digits like it should? Thanks!
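Note that part of what you see is just the format string: %Lf, like %f, prints six fractional digits unless you request more. A minimal sketch (the extra variable names are just illustrative):

    long double value = 1.123456789012345L;
    // The precision field of the format string controls how many fractional
    // digits are printed; the default for %f / %Lf is 6.
    NSString *sixDigits     = [NSString stringWithFormat:@"%Lf", value];     // 1.123457
    NSString *fifteenDigits = [NSString stringWithFormat:@"%.15Lf", value];  // 1.123456789012345
    NSLog(@"%@ / %@", sixDigits, fifteenDigits);

That only changes the formatting, though; as the answers below point out, on the device the type itself has no more precision than a double.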

NSDecimalNumber will give you up to 38 digits of precision.
NSDecimalNumbers are not native long doubles, but some type conversions are supported.
See also this question.

Long double simply maps to double on the iPhone hardware. You unfortunately don't get the extra precision that you'd think you would. This tripped me up early on, because the iPhone Simulator will handle long doubles correctly, as it is running on a Mac.
As Charles suggests, you'll need to use NSDecimal or NSDecimalNumber to get that extra precision. Additionally, math involving those types will be free of the normal floating point issues you see when handling decimals.
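For example, here's a rough sketch of the difference you'd see when a high-precision decimal (the literal below is just an illustrative value) is pushed through a double versus an NSDecimalNumber:

    // A 25-significant-digit value, written as a string so nothing is
    // rounded before we pick a representation for it.
    NSString *digits = @"1.123456789012345678901234";

    double d = [digits doubleValue];                                         // keeps ~15-16 significant digits
    NSDecimalNumber *dec = [NSDecimalNumber decimalNumberWithString:digits]; // keeps all of them (up to 38)

    NSLog(@"double:          %.24f", d);   // diverges after the 16th digit or so
    NSLog(@"NSDecimalNumber: %@", dec);    // 1.123456789012345678901234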

Why is Time.fixedTime and family a float and not a double?

Just want to know if I'm missing something. I'm from ObjC land, where NSTimeInterval is a double, which gives "sub-millisecond precision within a range of 10,000 years". Compare this to Unity, which, since it uses a float for time, starts to break down after a day (maybe even sooner). Math.Approximately(1 day, a day + 1 frame) returns true, for example (whereas 1 hour vs 1 hour + 1 frame correctly returns false). I actually experienced this when I left my game open all night and came back to it, noticing strange behavior in things that were time-dependent.
Unity internally uses a double to track time. The double is converted to a float before being passed to user code.
This is largely because Unity uses floats in most other locations (floats are much better supported on a range of graphics cards/platforms).
Since Unity uses a double internally, you don't need to worry about it losing count / failing to increment time.
What you do need to worry about is that after the game has been running for a number of hours, the representable values become more and more sparse.
This can cause things that moved smoothly to start stuttering.
Personally, I tend to keep my own (float) time value and reset it to zero at some sensible interval, ideally at a point where it makes no difference (which depends on what you're using it for).
If Unity uses float for time then Unity is making a huge mistake and you should use other sources for time. The 24-bit precision of a float means that after a day you will only have about 4 ms accuracy. Bugs may start showing up long before that, and some games actually need time to be stable for much longer than a day.
Even if it doesn't seem like you need much time precision, using float means that your time precision is dropping as the game goes on, adding an extra cause for bugs.
There are many games that have had bugs because they use floats for time, and burning that bad decision into a game engine is a terrible idea. I discussed this problem a few years ago after seeing this mistake repeated many times in the games I was working on at the time:
https://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
The main recommendation is to use double or int64 for time, and if you use double then start it with a value of about 5 billion so that your precision will be consistent throughout your game, instead of gradually dropping.
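To put numbers on that, here's a small C sketch (it compiles as part of an Objective-C project too) that prints the gap between adjacent representable values near one day's worth of float seconds, and near the suggested ~5 billion starting offset for a double:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        // Gap between adjacent representable floats near one day (86,400 s).
        float day = 86400.0f;
        float floatStep = nextafterf(day, HUGE_VALF) - day;
        printf("float step near 86400 s: %g s\n", floatStep);   // ~0.0078 s, i.e. roughly +/- 4 ms of rounding

        // Gap for a double that starts counting at about 5 billion, as suggested above.
        double offset = 5e9;
        double doubleStep = nextafter(offset, HUGE_VAL) - offset;
        printf("double step near 5e9 s: %g s\n", doubleStep);   // ~1e-6 s, and it stays in that ballpark
        return 0;
    }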
Unity3d uses floats for many components in the engine, so you will find that a lot of functions and values return or store floats. Once you have been programming in Unity3d for a while you will even get the inside joke in their version numbers -- they usually look like this: 4.3.1f -- everything is a float.
You should be able to use .NET to get time as a double if you use C#. I also highly recommend, for some things, using the .NET Math class instead of Unity's Mathf: the .NET class works in doubles, while Unity's works in floats.
Float is used not only for time but pretty much everywhere in Unity and in most engines, simply because it's good enough for games and uses fewer resources. By "good enough" I mean that you probably won't need more precision in most situations. Like the example you gave, it's very rare that someone will run into this situation.
In this question, there's this very nice answer:
Floats have no problem doing precise integer arithmetic up to 2^24, which is what most people are (mostly irrationally) afraid of. Doubles do not solve the problem of common values being unrepresentable exactly - whether you get 0.10000001 or 0.10000000000000001, you still must make sure your code considers it "equal" to 0.09999999.
Doubles are slower on every platform you'll care about writing games, take twice the memory bandwidth, and have fewer specialized CPU instructions available.
Single-precision floats remain the standard for hardware graphics interfaces, both on the C/C++ side and inside shaders.
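As a quick illustration of both points - the 2^24 integer limit and the fact that 0.1 is inexact in float and double alike - here is a short C sketch:

    #include <stdio.h>

    int main(void) {
        // Above 2^24, adjacent integers are no longer distinguishable in a float.
        float big = 16777216.0f;            // 2^24
        printf("%d\n", big + 1.0f == big);  // prints 1: 2^24 + 1 rounds back down to 2^24

        // 0.1 is not exactly representable in either type; double just pushes
        // the error further to the right.
        printf("%.20f\n", 0.1f);            // 0.10000000149011611938
        printf("%.20f\n", 0.1);             // 0.10000000000000000555
        return 0;
    }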

NSDecimalNumber for big number operations on iPhone

I need to use big numbers for precision in my application; float or double are not enough.
I also have int and float numbers, and I have to do operations with all of them.
I think that NSDecimalNumber gives the precision I need, but I would like to do operations with the other kinds of numbers too, and my formula is complex. So this class doesn't look appropriate for complex formulas (it's too cumbersome to chain the decimalWith... or decimalBy... methods) when you have lots of terms.
Does anyone know what to use in order to manipulate big numbers easily, and do operations on them with different types (float, decimal, int)?
Thank you.
NSDecimalNumbers are simply wrappers around NSDecimal structs, which have a bunch of useful functions for manipulation without requiring the allocation of new objects.
I've used them a bit, and have come up with some other useful additions to those built-in: https://github.com/davedelong/DDMathParser/blob/master/DDMathParser/_DDDecimalFunctions.m
I would recommend using NSDecimals unless you can come up with a compelling reason not to.
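If it helps, this is roughly what the C-level NSDecimal functions look like in use (the values here are just placeholders); NSDecimalAdd, NSDecimalSubtract, NSDecimalDivide and friends all follow the same pattern:

    // Multiply two decimal values without allocating intermediate
    // NSDecimalNumber objects for every step.
    NSDecimal a = [[NSDecimalNumber decimalNumberWithString:@"1234567890.123456789"] decimalValue];
    NSDecimal b = [[NSDecimalNumber decimalNumberWithString:@"42"] decimalValue];

    NSDecimal result;
    NSCalculationError err = NSDecimalMultiply(&result, &a, &b, NSRoundPlain);
    if (err == NSCalculationNoError) {
        NSLog(@"%@", NSDecimalString(&result, nil));
    }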
Thanks guys!
I finally used the double type, which is enough for me. I was confused because I was using NSLog with %f to print my number and it wasn't what I expected. I used %e instead to check that the number in scientific notation was the right one, and it was. So I just do all my calculations with doubles and it works.

What's the most precise data type for floating point calculations on iPhone OS?

I always thought it was double, until I accidentally hit floo+ESC and it told me there is a floorl(<#long double #>) function. So is long double the solution to all big inaccuracy problems? ;-)
Or is there even something more precise than that?
One thing to be aware of is that long double simply acts as double on the iPhone hardware. You don't get any additional precision from the larger type. It will give you more precision in the Simulator, because you're running on a Mac there, so that can confuse you.
As is noted here (and by other commenters), NSDecimal or NSDecimalNumber is the way to go for precision (up to 38 digits), and calculations performed using it are done with true decimal math, not binary floating point. This avoids many of the errors that you see with normal IEEE 754 math.
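If you want to confirm the long double behavior for yourself, printing the <float.h> constants makes it obvious (plain C, nothing iPhone-specific about the check):

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        // On the device, long double is the same 64-bit IEEE 754 format as double,
        // so both lines report the same precision. An Intel-based Simulator build
        // uses the 80-bit extended format and reports more digits for long double.
        printf("double:      %zu bytes, %d significant decimal digits\n",
               sizeof(double), DBL_DIG);
        printf("long double: %zu bytes, %d significant decimal digits\n",
               sizeof(long double), LDBL_DIG);
        return 0;
    }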
I think long double is the limit, but it really depends what you want to do. Have you actually been running into inaccuracy problems?
For even more precision, look at the NSDecimalNumber class. As the other answer asks: have you actually found any inaccuracy problems? Also, more accuracy will be slower.
Adding more precision is one approach to solving the problem, but the real problem sometimes (usually?) lies in the way you are performing the computation. In that case, I prescribe a healthy dose of RTFM. Any primer on FP arithmetic will cover why certain forms of equations are disastrous and how to avoid the gaping maw of oblivion.

How to make sure an NSDecimalNumber represents no fractional digits?

I want to do some fairly complex arithmetic that requires very high precision, e.g. calculating
10000000000 + 0.00000000001 = 10000000000.00000000001
10000000000.00000000001 * 3 = 30000000000.00000000003
I want to use NSDecimalNumber for this kind of math, but the problem is: how do I feed it these values?
The documentation says:
- (id)initWithMantissa:(unsigned long long)mantissa exponent:(short)exponent isNegative:(BOOL)flag
The first problem I see is the mantissa. It requires an unsigned long long. As I understand that data type, it is a floating-point type, right? So if it is, then at this point the entered value is already "dirty" - it may have unwanted fractional digits somewhere at the end. I couldn't find good documentation on unsigned long long from Apple, but I remember a code snippet where someone fed the mantissa with a CGFloat, which is why I assume it's a floating-point type.
Well, if it is indeed some super floating-point data type, then the hard question is: how do I get a clean, really clean integer into this thing? So clean that I could multiply it by half a trillion without getting wrong results?
Are there good tutorials on the usage of NSDecimalNumber in practise?
Edit: No problem here! Thanks everyone!
If you really are concerned about feeding in less precise types, I'd recommend using -initWithString:, -initWithString:locale:, +decimalNumberWithString:, or +decimalNumberWithString:locale:. Using the string description avoids ever having to convert the numerical representation to a floating point or other numerical type before generating your NSDecimalNumber.
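For instance, a quick sketch using the string-based constructors with the values from the question:

    // Built from strings, so no binary floating-point conversion ever
    // touches the digits.
    NSDecimalNumber *big   = [NSDecimalNumber decimalNumberWithString:@"10000000000"];
    NSDecimalNumber *tiny  = [NSDecimalNumber decimalNumberWithString:@"0.00000000001"];
    NSDecimalNumber *three = [NSDecimalNumber decimalNumberWithString:@"3"];

    NSDecimalNumber *sum     = [big decimalNumberByAdding:tiny];
    NSDecimalNumber *product = [sum decimalNumberByMultiplyingBy:three];

    NSLog(@"%@", sum);      // 10000000000.00000000001
    NSLog(@"%@", product);  // 30000000000.00000000003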

Problem with very small numbers?

I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
That's 9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there being problems with writing numbers like this as plain literals in source code. Do I have to wrap it in some function or directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers as literals?
The problem is with floating-point arithmetic, not with writing literals in source code. It is not designed to be exact. The best way around it is to not use the built-in double: use integers only (if possible) with power-of-10 scale factors, sum everything up, and display the final useful figure after rounding.
Standard floating-point numbers are not stored in an exact decimal format; they're stored in a format that's fairly compact and fairly easy to do math on. They are imprecise at surprisingly modest precision levels, but fast. More here.
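You can see the representation error directly by printing the stored value with more digits than the default six (a quick sketch):

    #include <stdio.h>

    int main(void) {
        double verySmall = 0.000000001;
        // The decimal literal 1e-9 has no exact binary representation, so the
        // stored value is a very close, but not identical, neighbor.
        printf("%.25f\n", verySmall);         // something like 0.0000000010000000000000001
        printf("%.25f\n", verySmall * 10.0);  // something like 0.0000000100000000000000002
        return 0;
    }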
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to the java.math.BigDecimal class in Java. This is precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
As usual, you need to read stuff like this in order to learn more about how floating-point numbers work on computers. You cannot expect to be able to store any arbitrary fraction with perfect results, just as you can't expect to store any arbitrary integer. There are only so many bits underneath, and they can only represent a limited set of values.