Swift: Difference Between Double and Float64

In the Swift programming language's official documentation, it says:
Double represents a 64-bit floating-point number.
Float represents a 32-bit floating-point number.
Link: https://docs.swift.org/swift-book/LanguageGuide/TheBasics.html#ID321
Then why is there a Float64? What is the difference between the two? Or are they the same?

The headers, found by hitting command+shift+o and searching for Float64, say:
/// A 64-bit floating point type.
public typealias Float64 = Double
/// A 32-bit floating point type.
public typealias Float32 = Float
and
Base floating point types
Float32 32 bit IEEE float: 1 sign bit, 8 exponent bits, 23 fraction bits
Float64 64 bit IEEE float: 1 sign bit, 11 exponent bits, 52 fraction bits
Float80 80 bit MacOS float: 1 sign bit, 15 exponent bits, 1 integer bit, 63 fraction bits
Float96 96 bit 68881 float: 1 sign bit, 15 exponent bits, 16 pad bits, 1 integer bit, 63 fraction bits
Note: These are fixed size floating point types, useful when writing a floating
point value to disk. If your compiler does not support a particular size
float, a struct is used instead.
Use one of the NCEG types (e.g. double_t) or an ANSI C type (e.g. double) if
you want a floating-point representation that is natural for any given
compiler, but might be a different size on different compilers.
As a general rule, unless you're writing code that depends on the binary representation, you should use the standard Float and Double names. But if you are writing something where binary compatibility is needed (e.g. writing/parsing binary Data to be exchanged with some other platform), then you can use the data types that bear the number of bits in the name, e.g. Float32 vs. Float64 vs. Float80.
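For instance, here is a minimal sketch (not from the answer above) of using the fixed-width name for binary interchange: the value is serialized as exactly 8 little-endian bytes via Float64's bit pattern and read back, so any platform that agrees on "64-bit IEEE 754, little-endian" can decode it.

import Foundation

let value: Float64 = 3.14159

// Encode: take the IEEE 754 bit pattern and write it out as 8 little-endian bytes.
let bits = value.bitPattern.littleEndian            // UInt64
let data = withUnsafeBytes(of: bits) { Data($0) }   // exactly 8 bytes

// Decode: reassemble the bit pattern and reinterpret it as a Float64.
var decodedBits: UInt64 = 0
_ = withUnsafeMutableBytes(of: &decodedBits) { data.copyBytes(to: $0) }
let decoded = Float64(bitPattern: UInt64(littleEndian: decodedBits))

print(decoded == value)   // true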

Go to the definition of Float64:
/// A 64-bit floating point type.
public typealias Float64 = Double
/// A 32-bit floating point type.
public typealias Float32 = Float
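Because they are typealiases, the fixed-width names are completely interchangeable with Float and Double. A quick check (an illustration, not part of the answer above):

let a: Float64 = 1.5
let b: Double = a                     // no conversion needed; it's the same type
print(Float64.self == Double.self)    // true
print(type(of: a) == type(of: b))     // true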

Related

Long Double in Swift

In C, there is a type called long double, which on my machine is 16 bytes (128 bits). Is there any way to store a long double in Swift? I've tried type aliases (typedefs) of long double in an Objective-C header, but they don't show up in Swift. Functions returning long double (powl, sqrtl) don't show up either.
I know there's a Float80 but that's only 80 bits, not 128 bits.
Unless you have a POWER processor, your 128-bit long double is actually an 80-bit extended-precision number plus 48 bits of padding.
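As a rough illustration (assuming an Intel build of Swift, where Float80 is available), Float80 is Swift's counterpart of that 80-bit extended-precision format:

#if arch(i386) || arch(x86_64)
let third80: Float80 = 1 / 3    // extended precision: roughly 19 significant decimal digits
let third64: Double  = 1 / 3    // double precision: roughly 16 significant decimal digits

print(third80)                            // prints more digits of 1/3 than the Double below
print(third64)                            // 0.3333333333333333
print(Float80(Double.pi) == Float80.pi)   // false: Double.pi has already lost precision
#endif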

Swift: Numeric Literals Decimal Floats Exponents [duplicate]

According to The Swift Programming Language :
For example, 0xFp2 represents 15 ⨉ 2^2, which evaluates to 60.
Similarly, 0xFp-2 represents 15 ⨉ 2^(-2), which evaluates to 3.75.
Why is 2 used as the base for the exponent instead of 16? I'd have expected 0xFp2 == 15 * (16**2) instead of 0xFp2 == 15 * (2**2).
Swift's hexadecimal notation for floating-point numbers is just a variation of the notation introduced for C in the C99 standard for both input and output (with the printf %a format).
The purpose of that notation is to be both easy to interpret by humans and to let the bits of the IEEE 754 representation be somewhat recognizable. The IEEE 754 representation uses base two. Consequently, for a normal floating-point number, when the number before p is between 1 and 2, the number after p is directly the (unbiased) exponent of the IEEE 754 representation. This is in line with the dual objectives of human-readability and closeness to the bit representation:
$ cat t.c
#include <stdio.h>
int main(){
printf("%a\n", 3.14);
}
$ gcc t.c && ./a.out
0x1.91eb851eb851fp+1
The number 0x1.91eb851eb851fp+1 can be seen to be slightly above 3 because the exponent is 1 and the significand is near 0x1.9, slightly above 0x1.8, which indicates the exact middle between two powers of two.
This format helps remember that numbers that have a compact representation in decimal are not necessarily simple in binary. In the example above, 3.14 uses all the digits of the significand to approximate (and even so, it isn't represented exactly).
Hexadecimal is used for the number before the p, which corresponds to the significand in the IEEE 754 format, because it is more compact than binary. The significand of an IEEE 754 binary64 number requires 13 hexadecimal digits after 0x1. to represent fully, which is a lot, but in binary 52 digits would be required, which is frankly impractical.
The choice of hexadecimal actually has its drawbacks: because of this choice, the several equivalent representations for the same number are not always easy to recognize as equivalent. For instance, 0x1.3p1 and 0x2.6p0 represent the same number, although their digits have nothing in common. In binary, the two notations would correspond to 0b1.0011p1 and 0b10.011p0, which would be easier to see as equivalent. To take another example, 3.14 can also be represented as 0xc.8f5c28f5c28f8p-2, which is extremely difficult to see as the same number as 0x1.91eb851eb851fp+1. This problem would not exist if the number after the p represented a power of 16, as you suggest in your question, but unicity of the representation was not an objective when C99 was standardized: closeness to the IEEE 754 representation was.
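The same notation works in Swift literals, so the examples above can be checked directly (a small illustration, not part of the original answer):

print(0xFp2 == 60.0)          // true: 15 × 2^2
print(0xFp-2 == 3.75)         // true: 15 × 2^(-2)
print(0x1.3p1 == 0x2.6p0)     // true: two spellings of the same number, 2.375
print(0x1.91eb851eb851fp+1)   // 3.14 (the closest Double to 3.14)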

LReal Vs Real Data Types

In PLC Structured Text, what is the main difference between an LReal and a Real data type? Which would you use when replacing a double or float while converting from a C-based language to Structured Text on a PLC?
LReal is a double-precision floating-point type stored in 64 bits, whereas Real is a single-precision floating-point type stored in 32 bits. So an LReal stores more, which makes LReal the closer match for a C double and Real the match for a float. Another thing to keep in mind is that, depending on the PLC, it may convert a Real into an LReal for calculations anyway. An LReal also gives you about 15 decimal places versus about 9 for a Real. So if you need more than 9 decimal places I would recommend LReal, but if you need fewer I would stick with Real, because with LReal you may have to convert from an Integer to a Real to an LReal; staying with Real saves you a step.
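A rough Swift analogy (an illustration, not PLC Structured Text): Real behaves like Float and LReal like Double, so the precision difference looks like this:

let real: Float   = 0.123456789123456789   // Real  ≈ Float  (32-bit, single precision)
let lreal: Double = 0.123456789123456789   // LReal ≈ Double (64-bit, double precision)

print(real)    // the value is cut off after only a few digits
print(lreal)   // much closer to the value as written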

More precision than double in swift

Are there any floating-point types more precise than Double available in Swift? I know that in C there is the long double, but I can't seem to find its equivalent in Apple's new programming language.
Any help would be greatly appreciated!
Yes, there is! There is Float80 exactly for that: it stores 80 bits (10 bytes). You can use it like any other floating-point type. Note that there are Float32, Float64, and Float80 in Swift, where Float32 is just a typealias for Float and Float64 is one for Double.
Currently, iOS 11+ runs only on 64-bit platforms, and Double offers the highest precision of the standard types.
Double has a precision of at least 15 decimal digits, whereas the
precision of Float can be as little as 6 decimal digits. The
appropriate floating-point type to use depends on the nature and range
of values you need to work with in your code. In situations where
either type would be appropriate, Double is preferred.
CGFloat, however, is backed by a native type that is Float on 32-bit architectures and Double on 64-bit architectures.
https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/TheBasics.html
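A quick way to confirm this on a given platform (a small sketch; CGFloat comes from CoreGraphics on Apple platforms):

import CoreGraphics

print(MemoryLayout<CGFloat>.size)               // 8 on 64-bit platforms, 4 on 32-bit
print(CGFloat.NativeType.self == Double.self)   // true on 64-bit platforms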

Maximum allowed digits after decimal for double

Hi,
I want to know how many digits are allowed after the decimal point for the primitive double datatype in Java, without actually getting rounded off.
It depends on the number; read up on floating point representations for more information.
Use the BigDecimal class for fixed-scale arithmetic.
Taken literally, the number is 0. Most decimal fractions with at least one digit after the decimal point get rounded on conversion to double. For example, the decimal fraction 0.1 is rounded to 0.1000000000000000055511151231257827021181583404541015625 on conversion to double.
The problem is that double is a binary floating-point system, and most decimal fractions can no more be exactly represented in it than 1/3 can be exactly represented as a decimal fraction with a finite number of significant digits.
As already recommended, if truly exact representation of decimal fractions is important, use BigDecimal.
The primitive double has, for normal numbers, 53 significant bits, about 15.9 decimal digits. If very, very close is good enough, you can use double.
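The same effect is easy to reproduce in Swift, whose Double is the same IEEE 754 binary64 format; Foundation's Decimal plays a role roughly analogous to Java's BigDecimal here (a sketch, not part of the answer above):

import Foundation

// Printing 0.1 with enough digits reveals the value it was rounded to on conversion.
print(String(format: "%.55f", 0.1))
// 0.1000000000000000055511151231257827021181583404541015625

let exactTenth = Decimal(string: "0.1")!   // a true base-10 representation
print(exactTenth)                          // 0.1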