High precision floating point numbers in Swift

How do I handle floating point issues in Swift if Double's precision is not enough? Is there an analog of Java's BigDecimal?

Swift's Float80 is higher precision (though not arbitrary precision).
http://swiftdoc.org/v2.0/type/Float80/
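For a sense of what that buys you, here is a minimal sketch comparing Double and Float80 (note that Float80 is only available on x86 targets, and the printed digit counts are approximate):

let third: Double = 1.0 / 3.0
let third80: Float80 = 1.0 / 3.0

print(third)     // 0.3333333333333333 (roughly 15-16 significant digits)
print(third80)   // 0.3333333333333333333... (roughly 18-19 significant digits)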

Related

What are scenarios where you should use Float in Swift?

Learning about the difference between Floats and Doubles in Swift. I can't think of any reasons to use Float. I know there are, and I know I am just not experienced enough to understand them.
So my question is why would you use float in Swift?
why would you use float in Swift
Left to your own devices, you likely never would. But there are situations where you have to. For example, the value of a UISlider is a Float. So when you retrieve that number, you are working with a Float. It’s not up to you.
And so with all the other numerical types. Swift includes a numerical type corresponding to every numerical type that you might possibly encounter as you interface with Cocoa and the outside world.
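For example, reading a UISlider hands you a Float whether you want one or not, and you typically convert it before doing Double-based math (a minimal sketch; the slider and its value here are just illustrative):

import UIKit

let slider = UISlider()
slider.value = 0.25                  // UISlider.value is declared as Float

let raw: Float = slider.value        // so this is what you get back
let value = Double(raw)              // convert explicitly before doing Double math
print(value)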
Float is a typealias for Float32. Float32 and Float16 are incredibly useful for GPU programming with Metal. They both will feel as archaic someday on the GPU as they do on the CPU, but that day is years off.
https://developer.apple.com/metal/
Double
Represents a 64-bit floating-point number.
Has a precision of at least 15 decimal digits.
Float
Represents a 32-bit floating-point number.
Has a precision of as little as 6 decimal digits.
The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred.
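A quick way to see that difference is to assign the same literal to both types and print the results (a rough sketch; the exact digits shown may vary slightly):

let pi32: Float  = 3.14159265358979323846
let pi64: Double = 3.14159265358979323846

print(pi32)   // 3.1415927 (only about 6-7 significant digits survive)
print(pi64)   // 3.141592653589793 (about 15-16 significant digits survive)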

What's the correct number type for financial variables in Swift?

I am used to programming in Java, where the BigDecimal type is best for storing financial values, since there are ways to specify rounding rules for the calculations.
In the latest Swift version (2.1 at the time this post is written), which native type better supports correct calculations and rounding for financial values? Is there any equivalent to Java's BigDecimal? Or anything similar?
You can use NSDecimal or NSDecimalNumber for base-10 arithmetic. It is not arbitrary precision (the mantissa can hold up to 38 decimal digits), but it is exact for typical decimal values.
See more on NSDecimalNumber's reference page.
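Here is a minimal sketch of NSDecimalNumber with an explicit rounding policy (written with current Swift method names; older Swift versions import the same Objective-C methods under longer names):

import Foundation

let price    = NSDecimalNumber(string: "1.23")
let quantity = NSDecimalNumber(string: "3")

// Example policy: round results to 2 decimal places using banker's rounding.
let behavior = NSDecimalNumberHandler(roundingMode: .bankers,
                                      scale: 2,
                                      raiseOnExactness: false,
                                      raiseOnOverflow: false,
                                      raiseOnUnderflow: false,
                                      raiseOnDivideByZero: false)

let total = price.multiplying(by: quantity, withBehavior: behavior)
print(total)   // 3.69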
If your concern is storing, for example, $1.23 in a float or double and the inaccuracies you will get from floating-point precision errors, and if you actually want to stick to whole amounts of cents or pence (or whatever else), then store your value in an integer and use the pence/cent as your unit instead of pounds/dollars. You will then be 100% accurate when dealing in whole amounts of pence/cents, and it's easier than using a class like NSDecimalNumber. The display of that value is then purely a presentation issue.
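A sketch of that integer-of-cents approach, with currency formatting kept purely at the display step (the names here are just illustrative):

// Keep all arithmetic in whole cents.
let priceInCents = 1_23                      // $1.23
let quantity = 3
let totalInCents = priceInCents * quantity   // 369, exactly

// Presentation only: split into dollars and cents when displaying.
let display = String(format: "$%ld.%02ld", totalInCents / 100, totalInCents % 100)
print(display)   // $3.69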
If however you need to deal with fractions of a pence/cent, then NSDecimalNumber is probably what you want.
I recommend looking into how classes like this actually work, and how floating-point numbers work too. Understanding that will help you see why precision errors arise, what the precision limits of a class like NSDecimalNumber are, why it's better for storing decimal numbers, and why floats are good at storing numbers like 17/262144 (i.e. where the denominator is a power of two) but can't store 1/100 exactly.
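You can see that distinction directly in Swift: a fraction whose denominator is a power of two is stored exactly, while repeatedly adding 1/100 drifts (a small illustration; the exact drifted value may differ in the last digit or two):

// 262144 is 2^18, so 17/262144 has an exact binary representation.
let exact = 17.0 / 262144.0
print(exact * 262144.0 == 17.0)   // true, no rounding error at all

// 1/100 has no exact binary representation, so the error accumulates.
var sum = 0.0
for _ in 0..<100 { sum += 0.01 }
print(sum == 1.0)                 // false
print(sum)                        // very close to, but not exactly, 1 (e.g. 1.0000000000000007)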

More precision than Double in Swift

Are there are any floating points more accurate than Double available in Swift? I know that in C there is the long double, but I can't seem to find its equivalent in Apple's new programming language.
Any help would be greatly appreciated!
Yes, there is! There is Float80 for exactly that: it stores 80 bits (duh), i.e. 10 bytes. You can use it like any other floating-point type. Note that there are Float32, Float64 and Float80 in Swift, where Float32 is just a typealias for Float and Float64 is one for Double.
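A quick check of those relationships, plus a guard for the fact that Float80 only exists on x86 targets (a minimal sketch):

// Float32 and Float64 are just alternate names for Float and Double.
print(Float32.self == Float.self)    // true
print(Float64.self == Double.self)   // true

#if arch(x86_64) || arch(i386)
// Float80 is the x87 extended-precision type, available on Intel only.
let x: Float80 = 1.0
print(type(of: x))                   // Float80
#endif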
Currently iOS 11+ runs only on 64-bit platforms, and Double holds the highest precision of the standard floating-point types there.
Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred.
CGFloat, however, is backed by a native type that is Float on 32-bit architectures and Double on 64-bit architectures.
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/TheBasics.html
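A quick way to confirm what CGFloat is backed by on the platform you are running on (a small sketch, assuming CoreGraphics is available):

import CoreGraphics

print(CGFloat.NativeType.self)       // Double on 64-bit, Float on 32-bit
print(MemoryLayout<CGFloat>.size)    // 8 bytes on 64-bit platforms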

Xcode Obj-C Why is 7/10 not 0.7

int a = 7;
int b = 10;
float answer = (float)a / b;
// answer == 0.699999988 (I expected 0.7?)
The short version is: floating-point numbers are not exact. They are only a finite set of bits, and a finite set of bits cannot represent an infinite set of numbers.
The longer version is here: What Every Computer Scientist Should Know About Floating-Point Arithmetic
See also:
How is floating point stored? When does it matter?
Why is my number being rounded incorrectly?
Floating point numbers are accurate only to a certain finite number of digits of precision. You will need to do some rounding to get whole numbers.
If you need more precision, use the double data type, or the NSDecimalNumber class (which will preserve your decimal digits at the expense of some complexity).
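The same thing happens in Swift, and rounding or formatting at the presentation step is the usual fix (a small illustration):

let a = 7
let b = 10
let answer = Float(a) / Float(b)

print(answer)                                 // 0.7 (the default printer rounds for display)
print(String(format: "%.9f", answer))         // 0.699999988 (the value actually stored)
print((Double(answer) * 10).rounded() / 10)   // 0.7, rounded to one decimal place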
It is because floating point calculations are not precise.
The only thing I rely on is the existence of exact small integers (namely -2, -1, 0, 1, 2, as you might use for representing [0,1] plus some special values), and some people frown on using that too.

Is it a good idea to use NSDecimalNumber for floating point arithmetic instead of plain double?

I wonder what's the point of NSDecimalNumber. It offers some arithmetic methods, but why should I use NSDecimalNumber and not just double or NSNumber? Did Apple take care of some floating-point arithmetic ugliness there? Would it make life easier when making heavy use of high-precision and big floating-point maths?
This all depends on your needs.
It is a trade off between precision, speed and size of data.
If you are writing an accounting application you cannot lose any precision and so might well use NSDecimalNumber.
If you are doing complex numerical analysis, speed could matter, and NSDecimalNumber would be too slow. But even in that case your analysis should look at the precision and errors you can afford, and there could be cases where you need more precision than doubles etc. give you.
NSNumber is a separate case: it is a class cluster that allows storage of C-type numbers in other objects, and it has other uses in Cocoa.
If your software deals with money, or other non-integer numbers of interest to accountants, you are well advised to use decimal numbers for that (rather than the binary ones that the underlying HW is optimized to process); that's why all sorts of general purpose languages and databases bend over backwards to support decimal non-integer numbers, not just binary ones.
Rounding issues with binary non-integers might easily result in fractions-of-a-cent discrepancies that, at the limit, might even land you in legal trouble, and, more realistically, will be perceived by accountants and others dealing with money &c as errors in your program, no matter how staunchly you may argue otherwise!-)
NSDecimalNumber is a fixed precision (and scale) integer scaled to a certain size to represent fractional numbers. This is a little different from a floating point number (where the point, obviously, floats...)
As an example, say you need to represent money from 0.00 to 999.99. You could store this in an integer from 0 to 99999 as an amount in pennies; the scale (in digits) is 2 and the precision is 5. With a floating (rather than fixed) point and the same 5 digits of precision, you could represent anything from .00001 up to 99999, but not 999.999, for example.
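That scaled-integer idea maps directly onto how an NSDecimalNumber can be constructed: an integer mantissa plus a power-of-ten exponent (a small illustration):

import Foundation

// 12345 x 10^-2, i.e. an integer mantissa with a decimal scale of 2.
let amount = NSDecimalNumber(mantissa: 12345, exponent: -2, isNegative: false)
print(amount)   // 123.45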