How is eps() calculated in MATLAB?

The eps routine in MATLAB returns the positive distance from a floating-point number to the next larger floating-point number. It can take an optional argument, too.
My question: how does MATLAB calculate this value? (Does it use a lookup table, or some algorithm to calculate it at runtime, or something else?)
Related: how could it be calculated in any language providing bit access, given a floating point number?

Wikipedia has quite the page on it.
Specifically for MATLAB, eps is 2^(-52), as MATLAB uses double precision by default (note that Wikipedia's convention defines machine epsilon as 2^(-53), half of MATLAB's eps). The 64-bit layout is one bit for the sign, 11 bits for the exponent, and the remaining 52 for the fraction.
The MATLAB documentation on floating point numbers also shows this.
d = eps(x), where x has data type single or double, returns the positive distance from abs(x) to the next larger floating-point number of the same precision as x.
Because floating-point numbers are not equally spaced on the number line, numbers with different exponents have different distances to the next representable number of the same precision. Compare 1.0 and 0.9; their bit representations are:
1.0 = 0 01111111111 0000000000000000000000000000000000000000000000000000
0.9 = 0 01111111110 1100110011001100110011001100110011001100110011001101
The sign of both is positive (0), but the exponents differ, and of course their fractions are vastly different. Because the exponents differ, the distances to the next floating-point numbers also differ:
dec2bin(typecast(eps(1.0), 'uint64'), 64) = 0 01111001011 0000000000000000000000000000000000000000000000000000
dec2bin(typecast(eps(0.9), 'uint64'), 64) = 0 01111001010 0000000000000000000000000000000000000000000000000000
which are not the same, hence eps(0.9)~=eps(1.0).
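To answer the bit-access part of the question: given the layout above, the gap can be computed by reinterpreting the bits of x as an integer, incrementing by one, and reinterpreting back. A minimal MATLAB sketch (the name my_eps is made up here; it assumes x is a positive, finite, normal double):
function d = my_eps(x)
    bits = typecast(x, 'uint64');        % raw 64-bit pattern of x
    next = typecast(bits + 1, 'double'); % next larger representable double
    d = next - x;                        % the gap, i.e. eps(x)
end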

Here is some insight into eps which will help you to write an algorithm.
See that eps(1) = 2^(-52). Now, say you want to compute the eps of 17179869183.9. Note that I have chosen a number which is 0.1 less than 2^34 (in other words, something like 2^(33.9999...)). To compute the eps of this, you can take log2 of the number, which would be ~33.9999... as mentioned before. Take floor() of that and add it to -52, since eps(1) = 2^(-52) and the given number is ~2^(33.9999...). Therefore, log2(eps(17179869183.9)) = -52 + 33 = -19, i.e. eps(17179869183.9) = 2^(-19).
If you take a number which is fractionally more than 2^34, e.g. 17179869184.1, then log2(eps(17179869184.1)) = -18. This also shows that the eps value changes at the numbers that are integer powers of your base (or radix), in this case 2. Since the eps value only changes at integer powers of 2, we take the floor of the power. You will be able to get the exact value of eps for any normal number using this. I hope it is clear.
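For illustration, that recipe written out in MATLAB (a sketch for positive, normal doubles; note floor(log2(x)) can be off by one when x sits within a rounding error of a power of 2):
x = 17179869183.9;
e = floor(log2(abs(x)));   % unbiased exponent of x, here 33
2^(e - 52) == eps(x)       % true: the gap is one unit in the 52-bit fraction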

MATLAB (along with many other languages) uses the IEEE 754 standard for representing real floating point numbers.
In this format the bits allocated for approximating the actual[1] real number, usually 32 for single or 64 for double precision, are grouped into 3 fields:
1 bit for determining the sign, s.
8 (or 11) bits for the exponent, e.
23 (or 52) bits for the fraction, f.
Then a real number, n, is approximated by the following three-term relation:
n = (-1)^s * 2^(e - bias) * (1 + f)
where the bias offsets the stored exponent values negatively[2], so that exponents can describe numbers between 0 and 1 as well as numbers greater than 1.
Now, the gap reflects the fact that real numbers do not map perfectly onto their finite, 32- or 64-bit, representations. Moreover, a whole range of real numbers that differ in absolute value by less than eps maps to a single value in computer memory. That is, if you assign values near val to variables var_1 ... var_n:
var_1 = val - offset
...
var_i = val;
...
var_n = val + offset
where
offset < eps(val) / 2
Then:
var_1 = var_2 = ... = var_i = ... = var_n.
The gap is determined by the second term, the one containing the exponent (or characteristic):
2^(e - bias)
in the above relation[3]. This term determines the "scale" of the "line" on which the approximated numbers are located: the larger the numbers, the larger the distance between them and the less precise they are; and vice versa, the smaller the numbers, the more densely located their representations are, and consequently the more accurate.
In practice, to determine the gap at a specific number, eps(number), you can add or subtract a gradually increasing small number until the stored value of the number of interest changes; this gives you the gap in that (positive or negative) direction, i.e. roughly eps(number) / 2.
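For example, a crude probing sketch along those lines (an illustration, not how MATLAB actually implements eps):
x = 1.0;
delta = 2^-60;         % start well below the expected gap
while x + delta == x   % while the increment is absorbed by rounding
    delta = delta * 2;
end
delta == eps(x)        % true: the first power of 2 that changes x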
To check possible implementations of MATLAB's eps (or ULP - unit in the last place, as it is called in other languages), you could search for ULP implementations in C, C++ or Java, which are the languages MATLAB is written in.
1. Real numbers are infinitely precise, i.e. they can be written with arbitrary precision, with any number of digits after the decimal point.
2. Usually by around half the range: in single precision the 8 exponent bits encode decimal values from 0 to 2^8 - 1 = 255; around half of that range is 127, i.e. 2^(e - 127).
3. The term 2^(e - bias) can be thought of as representing the most significant digits of the number, i.e. the digits that describe how big the number is, as opposed to the least significant digits that describe its precise location. The larger the term containing the exponent, the smaller the significance of the 23 bits of the fraction.

Related

MATLAB numeric precision when generating a numeric sequence

I was testing an operation like this:
[input] 3.9/0.1 : 4.1/0.1
[output] 39 40
I don't know why 4.1/0.1 is effectively truncated to 40 here. If I add a round(), it goes as expected:
[input] 3.9/0.1 : round(4.1/0.1)
[output] 39 40 41
What's wrong with the first operation?
In this Q&A I go into detail on how the colon operator works in MATLAB to create a range. But the detail that causes the issue described in this question is not covered there.
That post includes the full code for a function that imitates exactly what the colon operator does. Let's follow that code. We start with start = 3.9/0.1, which is exactly 39, and stop = 4.1/0.1, which, due to rounding errors, is just slightly smaller than 41, and step = 1 (the default if it's not given).
It starts by computing a tolerance:
tol = 2.0*eps*max(abs(start),abs(stop));
This tolerance is intended to be used so that the stop value, if within tol of an exact number of steps, is still used, if the last step would step over it. Without a tolerance, it would be really difficult to build correct sequences using floating-point end points and step sizes.
However, then we get this test:
if start == floor(start) && step == 1
% Consecutive integers.
n = floor(stop) - start;
elseif ...
If the start value is an exact integer, and the step size is 1, then it forces the sequence to be an integer sequence. Unfortunately, it does so by taking the number of steps as the distance between floor(stop) and start. That is, it is not using the tolerance computed earlier in determining the right stop! If stop is slightly above an integer, that integer will be in the range. If stop is slightly below an integer (as in the case of the OP), that integer will not be part of the range.
It could be debated whether MATLAB should round the stop number up in this case or not. MATLAB chose not to. All of the sequences produced by the colon operator use the start and stop values exactly as given by the user. It leaves it up to the user to ensure the bounds of the sequence are as required.
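To make the two branches concrete, here is a hedged sketch of the logic described above (the variable names and the non-integer branch are my paraphrase of that Q&A, not MATLAB's actual source):
start = 3.9/0.1;  stop = 4.1/0.1;  step = 1;
tol = 2.0*eps*max(abs(start),abs(stop));
if start == floor(start) && step == 1
    n = floor(stop) - start;              % integer branch: tol is ignored
else
    n = floor((stop - start + tol)/step); % general branch: tol rescues stop
end
% start is exactly 39, so the integer branch gives n = 40 - 39 = 1,
% i.e. n+1 = 2 elements: 39 and 40.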
However, if the colon operator hadn't special-cased the sequence of integers, the result would have been less surprising in this case. Let's add a very small number to the start value, so it's not an integer:
>> a = 3.9/0.1 : 4.1/0.1
a =
39 40
>> b = 3.9/0.1 + eps(39) : 4.1/0.1
b =
39.0000 40.0000 41.0000
Floating-point numbers suffer from loss of precision when represented with a fixed number of bits (64-bit in MATLAB by default). This is because there is an infinite number of real numbers (even within a small range, say 0.0 to 0.1), while an n-bit binary pattern can represent only 2^n distinct numbers. Hence, not all real numbers can be represented: the nearest approximation is used instead, resulting in a loss of accuracy.
The exact quotient of the double-precision values of 4.1 and 0.1 is
4.1/0.1 ≈ 40.9999999999999941713291207...
which rounds to the nearest representable double just below 41. So, in essence, 4.1/0.1 < 41.0 and that's what you get from the range. If you subtract, for example, 41 - 4.1/0.1 = 7.105427357601002e-15 (exactly one ulp at 41). But when you round, you get the closest value of 41.0 as expected.
The representation scheme for 64-bit double precision according to the IEEE 754 standard is:
The most significant bit is the sign bit (S), 0 for positive numbers and 1 for negative numbers.
The following 11 bits represent the exponent (E).
The remaining 52 bits represent the fraction (F).
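These claims are easy to check in a MATLAB session (a sketch; the display depends on your format setting):
>> format long
>> 4.1/0.1
ans =
  40.999999999999993
>> 41 - 4.1/0.1
ans =
     7.105427357601002e-15
>> 4.1/0.1 == 41
ans =
     0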

Matlab representation of floating point numbers

In Matlab, realmax('single') returns ans = 3.4028e+38. I am trying to understand how this number arises from the computer's binary representation, but I am a little bit confused.
I understand that realmax('single') is the highest floating point number representable in single precision, which is 32 bits. That means the binary representation consists of 1 bit for the sign, 23 bits for the mantissa and 8 bits for the exponent. And 3.4028e+38 is the decimal representation of the highest single-precision floating point number, but I don't know how that number was derived.
Now, typing in 2^128 gives me the same answer as 3.4028e+38, but I don't understand the correlation.
Can help me understand why 3.4028e+38 is the largest returned result for a floating point number in a 32 bit format, coming from a binary representation perspective? Thank you.
As IEEE 754 specifies, the largest magnitude exponent is Emax = 127 (decimal) = 7F (hex) = 0111 1111 (binary). This is encoded as 254 (decimal) = FE (hex) = 1111 1110 (binary) in the 8 exponent bits. The exponent 255 (decimal) = FF (hex) = 1111 1111 (binary) is reserved for representing infinity, so 254 is the largest available. The exponent bias 127 is subtracted from the encoded exponent, leading to 254 - 127 = 127. The largest mantissa is attained when all 23 mantissa bits are set to 1. The largest value that can be represented is then 1.11111111111111111111111 (binary) x 2^127 = 3.4028235 x 10^38.
This site lets you set the bits of the representation and see the IEEE754 value represented in decimal, binary, and hexadecimal.
Also note that the largest value is less than 2^128. You can see a more precise representation of the numbers output in MATLAB by using format long. The reason they are similar is that 1.11111111111111111111111 (binary) x 2^127 is close to 2 x 2^127 = 2^128.
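For instance, you can rebuild this largest value from its bit pattern (a sketch using typecast; hex 7F7FFFFF is sign 0, exponent 1111 1110, mantissa all ones):
bits = uint32(hex2dec('7F7FFFFF'));  % 0 11111110 11111111111111111111111
x = typecast(bits, 'single');        % reinterpret the bits as a single
isequal(x, realmax('single'))        % true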
As a precursor to the single precision binary floating point representation of a number in a computer, I start with discussing what is known as "scientific notation" for a decimal number.
Using a base 10 number system, every positive decimal number has a first non-zero leading digit in the set {1..9}. (All other digits are in the set {0..9}.) The number's decimal point may always be shifted to the immediate right of this leading digit by multiplying by an appropriate power 10^n of the number base 10. E.g. 0.012345 = 1.2345*10^n where n = -2. This means that the decimal representation of every non-zero number x can be made to take the form
x = (+/-)(i.jklm...)*10^n ; where i is in the set {1,2,3,4,5,6,7,8,9}.
The leading decimal factor (i.jklm...), known as the "mantissa" of x, is a number in the interval [1,10), i.e. greater than or equal to 1 and less than 10. The largest the mantissa can be is 9.9999... so that the real "size" of the number x is an integer stored in the exponential factor 10^n. If the number x is very large, n >> 0, while if x is very small n << 0.
We now want to revisit these ideas using the base 2 number system associated with computer storage of numbers. Computers represent a number internally using the base 2 rather than the more familiar base 10. All the digits used in a "binary" representation of a number belong to the set {0,1}. Using the same kind of thinking to represent x in a binary representation as we did in its decimal representation, we see that every positive number x has the form
x = (+/-)(i.jklm...)*2^n ; where i = 1,
while the remaining digits belong to {0,1}.
Here the leading binary factor (mantissa) i.jklm... lies in the interval [1,2), rather than the interval [1,10) associated with the mantissa in the decimal system. Here the mantissa is bounded by the binary number 1.1111..., which is always less than 2, since in practice there will never be an infinite number of digits. As before, the real "size" of the number x is stored in the integer exponential factor 2^n. When x is very large then n >> 0 and when x is very small n << 0. The exponent n is itself expressed in the binary number system. Therefore every digit in the binary floating point representation of x is either a 0 or a 1. Each such digit is one of the "bits" used in the computer's memory for storing x.
The standard convention for a (single precision) binary representation of x is accomplished by storing exactly 32 bits (0's or 1's) in computer memory. The first bit is used to signify the arithmetic "sign" of the number. This leaves 31 bits to be distributed between the mantissa (i.jklm...) of x and the exponential factor 2^n. (Recall i = 1 in i.jklmn... so none of the 31 bits is required for its representation.) At this point an important "trade off" comes into play:
The more bits that are dedicated to the mantissa (i.jkl...) of x, the fewer that are available to represent the exponent n in its exponential factor 2^n. Conventionally 23 bits are dedicated to the mantissa of x. (It is not hard to show that this allows approximately 7 digits of accuracy for x when regarded in the decimal system, which is adequate for most scientific work.) With the very first bit dedicated to storing the sign of x, this leaves 8 bits that can be used to represent n in the factor 2^n. Since we want to allow for very large x and very small x, the decision has been made to store 2^n in the form
2^n = 2^(m-127) ; n = m - 127,
where the exponent m is stored rather than n. Utilizing 8 bits, this means m belongs to the set of binary integers {00000000, 00000001, ..., 11111111}. Since it is easier for humans to think in the decimal system, this means that m belongs to the set of values {0, 1, ..., 255}. Subtracting 127, this means in turn that n belongs to the number set {-127, -126, ..., 0, 1, 2, ..., 128}, i.e.
-127 <= n <= 128.
However, the two extreme values of m are reserved by the IEEE format: m = 0 encodes zero and the subnormal numbers, and m = 255 encodes infinities and NaNs. For normal numbers the exponent is therefore limited to
-126 <= n <= 127,
and the largest exponential factor 2^n of our binary floating point representation of x is 2^n = 2^127.
Summarizing, the largest number x that can possibly be stored in single precision floating point in a computer under the IEEE format is of the form
x = y*2^127.
Here the mantissa y lies in the (half-closed, half-open) interval [1,2), and the largest mantissa is y = 2 - 2^(-23), with all 23 fraction bits set. The largest representable number is therefore
max_x = (2 - 2^(-23))*2^127 = 3.4028...*10^38,
which is just below 2^128 (use any calculator to evaluate 2^128 = 3.4028...*10^38). This is why Matlab reports the same leading digits for realmax('single') and 2^128.
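A short MATLAB check of these quantities (format long reveals the difference that the default 5-digit display hides):
>> format long
>> realmax('single')
ans =
  3.4028235e+38
>> 2^128
ans =
     3.402823669209385e+38
>> (2 - 2^-23) * 2^127 == realmax('single')
ans =
     1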

Calculating floating points as binary

The question is :
x and y are two floating point numbers in 32-bit IEEE floating-point format
(8-bit exponent with bias 127) whose binary representation is as follows:
x: 1 10000001 00010100000000000000000
y: 0 10000010 00100001000000000000000
Compute their product z = x*y and give the result in binary IEEE floating-point format.
So I've found out that x = -4.3125 and y = 9.03125. I can multiply them and get -38.947265625. I don't know how to show it in IEEE format. Thanks in advance for the help.
I agree with the comment that it should be done in binary, rather than by conversion to decimal and decimal multiplication. I used Exploring Binary to do the arithmetic.
The first step is to find the actual binary significands. Neither input is subnormal, so they are 1.000101 and 1.00100001.
Multiply them, getting 1.00110111100101.
Similarly, subtract the bias, binary 1111111, from the exponents, getting 10 and 11. Add those, getting 101, then add back the bias, 10000100.
The sign bit for multiplying two numbers with different sign bits will be 1.
Now pack it all back together. The significand came out in the [1,2) range so there is no need to normalize and adjust the exponent. We are still in the normal range, so drop the 1 before the binary point in the significand. The significand is narrow enough to fit without rounding - just add enough trailing zeros.
1 10000100 00110111100101000000000
You've made it harder for yourself by converting to decimal, since you'd then have to convert the result back. It's not that it can't be done that way, but it's harder by hand.
Without converting, the algorithm to multiply two floats is (roughly) this:
put the implicit 1 back (if applicable)
multiply, to full size (don't truncate) (you can get away with using just Guard and Sticky, if you know how they work)
add the exponents
xor the signs
normalize/round/handle special cases (under-/overflow)
So here, multiply (look up how binary multiplication works if you forgot):
1.00010100000000000000000 *
1.00100001000000000000000 =
1.00100001000000000000000 +
0.000100100001000000000000000 +
0.00000100100001000000000000000 =
1.00110111100101000000000000000
Add exponents (mind the bias), 2+3 = 5 in this case, so 132 = 10000100.
Xor the signs, get 1.
No rounding is necessary because the dropped bits are all zero anyway.
Result: 1 10000100 00110111100101000000000
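If you want to double-check such hand computations, you can let MATLAB do the bit reinterpretation (a sketch using bin2dec and typecast on the single-precision patterns from the question):
x = typecast(uint32(bin2dec('11000000100010100000000000000000')), 'single'); % -4.3125
y = typecast(uint32(bin2dec('01000001000100001000000000000000')), 'single'); %  9.03125
z = x * y;                           % -38.947265625
dec2bin(typecast(z, 'uint32'), 32)   % 1 10000100 00110111100101000000000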

Maximum double value (float) possible in MATLAB (64-bit)

I'm aware that double is the default data-type in MATLAB.
When you compare two double numbers that have no fractional part, MATLAB appeared accurate up to the 17th digit in my testing.
a=12345678901234567 ; b=12345678901234567; isequal(a,b) --> TRUE
a=123456789012345671; b=123456789012345672; isequal(a,b) --> printed as TRUE
I have found a conservative estimate to be to use (non-fractional) numbers only up to the 13th digit, as other functions can become unreliable after that (such as ismember, or the MEX functions ismembc etc.).
Is there a similar cutoff for floating values? E.g., if I use shares-outstanding for a company which can be very very large with decimal places, when do I start losing decimal accuracy?
a = 1234567.89012345678 ; b = 1234567.89012345679 ; isequal(a,b) --> printed as TRUE
a = 123456789012345.678 ; b = 123456789012345.677 ; isequal(a,b) --> printed as TRUE
isequal may not be the right tool for comparing such numbers. I'm more concerned about how many decimal places I should trust once the integer part of a number starts growing.
It's usually not a good idea to test the equality of floating-point numbers. The behavior of binary floating-point numbers can differ drastically from what you may expect from base-10 decimals. Consider the example:
>> isequal(0.1, 0.3/3)
ans =
0
Ultimately, you have 53 bits of precision. This means that integers can be represented exactly (with no loss in accuracy) up to 2^53 (which is a little over 9 x 10^15). After that, well:
>> (2^53 + 1) - 2^53
ans =
0
>> 2^53 + (1 - 2^53)
ans =
1
For non-integers, you are almost never going to represent them exactly, even simple-looking decimals such as 0.1 (as shown in that first example). However, double precision still guarantees at least 15 significant figures of precision.
This means that if you take any number and round it to the nearest number representable as a double-precision floating point, then this new number will match your original number at least up to the first 15 digits (regardless of where these digits are with respect to the decimal point).
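For example (a quick session sketch; %.0f prints the exact integer value actually stored):
>> isequal(123456789012345671, 123456789012345672)
ans =
     1
>> fprintf('%.0f\n', 123456789012345671)
123456789012345664
Both 18-digit literals round to the same double, 123456789012345664.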
You might want to use variable-precision arithmetic (vpa) in MATLAB. It computes expressions to a given number of significant digits, which may be quite large. See the vpa documentation.
Check out the MATLAB function flintmax, which tells you the largest consecutive integer that can be stored in either double or single precision. From that page:
flintmax returns the largest consecutive integer in IEEE® double precision, which is 2^53. Above this value, double-precision format does not have integer precision, and not all integers can be represented exactly.

The number of correct decimal digits in a product of doubles with a large number of terms

What is a tight lower bound on the number N of irrational numbers, expressed as doubles in Matlab on a 64-bit machine, that I can multiply together while retaining confidence in k decimal digits of the product? For example, what precision could I expect after multiplying together ~10^12 doubles encoding different random chunks of pi?
If you ask for a tight bound, the answer of @EricPostpischil below gives the absolute error bound when all operations are performed in IEEE 754 double precision.
If you ask for confidence, I understand it as the statistical distribution of errors. Assuming a uniform distribution of error in [-e/2, e/2], you could ask for the theoretical distribution of error after M operations on Math Stack Exchange... I guess the tight bound is very conservative.
Let's illustrate an experimental estimation of those stats with some Smalltalk code (any language having large integer/fraction arithmetic would do):
nOp := 500.                "number of factors in each product"
relativeErrorBound := ((1 + (Float epsilon asFraction / 2)) raisedTo: nOp * 2 - 1) - 1.0. "(1+e)^(2*nOp-1) - 1"
nToss := 1000.             "number of random trials"
stats := (1 to: nToss)
collect: [:void |
| fractions exactProduct floatProduct relativeError |
fractions := (1 to: nOp) collect: [:e | 10000 atRandom / 3137]. "exact random fractions"
exactProduct := fractions inject: 1 into: [:prod :element | prod * element].   "exact rational product"
floatProduct := fractions inject: 1.0 into: [:prod :element | prod * element]. "float product, rounded at each step"
relativeError := (floatProduct asFraction - exactProduct) / exactProduct.
relativeError].
s1 := stats detectSum: [:each | each].             "sum of relative errors"
s2 := stats detectSum: [:each | each squared].     "sum of squared relative errors"
maxEncounteredError := (stats detectMax: [:each | each abs]) abs asFloat.
estimatedMean := (s1 / nToss) asFloat.
estimatedStd := (s2 / (nToss - 1) - (s1 / nToss) squared) sqrt.
I get these results for the multiplication of nOp=20 doubles:
relativeErrorBound -> 4.440892098500626e-15
maxEncounteredError -> 1.250926201710214e-15
estimatedMean -> -1.0984634797115124e-18
estimatedStd -> 2.9607828266493842e-16
For nOp=100:
relativeErrorBound -> 2.220446049250313e-14
maxEncounteredError -> 2.1454964094158273e-15
estimatedMean -> -1.8768492273800676e-17
estimatedStd -> 6.529482793500846e-16
And for nOp=500:
relativeErrorBound -> 1.1102230246251565e-13
maxEncounteredError -> 4.550696454362764e-15
estimatedMean -> 9.51007740905571e-17
estimatedStd -> 1.4766176010100097e-15
You can observe that the standard deviation grows much more slowly than the error bound.
UPDATE: to a first approximation, (1+e)^m = 1 + m*e + O((m*e)^2), so the distribution is approximately a sum of m uniforms in [-e,e] as long as m*e is small enough, and this sum is very near a normal (Gaussian) distribution of variance m*(2e)^2/12. You can check that std(sum(rand(100,5000))) is near sqrt(100/12) in Matlab.
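For instance (the sum of 100 uniform [0,1] draws has variance 100/12; the sampled value varies from run to run):
std(sum(rand(100,5000)))   % random, but consistently near 2.89
sqrt(100/12)               % 2.8868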
We can consider this still true for m = 2*10^12 - 1, which is approximately m = 2^41, so that m*e = 2^-12. In that case the global error is quasi-normally distributed, and the standard deviation of the global error is sigma = 2^-52*sqrt(2^41/12), or approximately sigma = 10^-10.
See http://en.wikipedia.org/wiki/Normal_distribution to compute P(abs(error)>k*sigma)
In 68% of cases (1 sigma), you'll have 10 digits of precision or more.
erfc(10/sqrt(2)) gives the probability of having less than 9 digits of precision, about 1 case out of 6*10^22, so I let you compute the probability of having only 4 digits of precision (you can't evaluate it in double precision, it underflows)!
My experimental standard deviations were a bit smaller than the theoretical ones (4e-16, 9e-16 and 2e-15 for 20, 100 and 500 doubles), but this must be due to the biased distribution of my input errors, i/3137 for i = 1..10000...
That's a good reminder that the result will be dominated by the distribution of errors in your inputs, which might exceed e if they themselves result from floating point operations like M_PI*num/den.
Also, as Eric said, using only * is quite an ideal case; things might degenerate quicker if you mix +.
Last note: we can try to craft a list of inputs that reaches the maximum error bound by setting all elements to (1+e), which will be rounded to 1.0, giving the maximum theoretical error bound with an admittedly very biased input distribution! HEM, WRONG: since all the multiplications are then exact, we get only (1+e)^n, not (1+e)^(2n-1), so only about half the error...
UPDATE 2: the inverse problem.
Since you want the inverse: what is the length n of the sequence such that I get k digits of precision with a level of confidence 10^-c?
I'll answer only for k >= 8, because (m*e) << 1 is required for the above approximations.
Let's take c = 7. Getting k digits with a confidence of 10^-7 means 5.3*sigma < 10^-k.
sigma = 2*e*sqrt((2*n-1)/12), that is n = 0.5 + 1.5*(sigma/e)^2 with e = 2^-53.
Thus n ~ 3*2^105*sigma^2; as sigma^2 < 10^(-2*k)/5.3^2, we can write n < 3*2^105*10^(-2*k)/5.3^2.
Numerically, the probability of having less than k = 9 digits is less than 10^-7 for a length n = 4.3e12, and around n = 4.3e10 for 10 digits.
We would reach n = 4 numbers for 15 digits, but here our normal distribution hypothesis is very rough and does not hold, especially in the distribution tail at 5 sigmas, so use with caution (the Berry-Esseen theorem bounds how far from normal such a distribution is: http://en.wikipedia.org/wiki/Berry-Esseen_theorem).
The relative error in M operations as described is at most (1+2^-53)^M - 1, assuming all input, intermediate, and final values do not underflow or overflow.
Consider converting a real number a0 to double precision. The result is some number a0•(1+e), where -2^-53 ≤ e ≤ 2^-53 (because conversion to double precision should always produce the closest representable value, the quantum for double precision values is 2^-52 of the highest bit, and the closest value is always within half a quantum). For further analysis, we will consider the worst case value of e, 2^-53.
When we multiply one (previously converted) value by another, the mathematically exact result is a0•(1+e) • a1•(1+e). The result of the calculation has another rounding error, so the calculated result is a0•(1+e) • a1•(1+e) • (1+e) = a0•a1•(1+e)^3. Obviously, this is a relative error of (1+e)^3 - 1. We can see the error accumulates simply as (1+e)^M for these operations: each operation multiplies all previous error terms by (1+e).
Given N inputs, there will be N conversions and N-1 multiplications, so the worst error will be (1+e)^(2N-1) - 1.
Equality for this error is achieved only for N≤1. Otherwise, the error must be less than this bound.
Note that an error bound this simple is possible only in a simple problem, such as this one with homogeneous operations. In typical floating-point arithmetic, with a mixture of addition, subtraction, multiplication, and other operations, computing a bound so simply is generally not possible.
For N = 10^12 (M = 2•10^12 - 1), the above bound is less than 2.000222062•10^12 units of 2^-53, and is less than .0002220693. So the calculated result is good to something under four decimal digits. (Remember, though, you need to avoid overflow and underflow.)
(Note on the strictness of the above calculation: I used Maple to calculate 1000 terms of the binomial (1+2^-53)^(2•10^12-1) exactly (having removed the initial 1 term) and to add a value that is provably larger than the sum of all remaining terms. Then I had Maple evaluate that exact result to 1000 decimal digits, and it was less than the bound I report above.)
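In MATLAB you can reproduce this figure without Maple by using log1p and expm1 to avoid the loss of precision near 1 (a quick check, assuming only that the bound is (1+2^-53)^M - 1):
M = 2e12 - 1;
expm1(M * log1p(2^-53))   % ~2.2206926e-04, just under the .0002220693 above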
A 64-bit floating point number, per the IEEE 754 standard, has 52+1 bits of mantissa.
That means the relative precision is between 1.0000...0 and 1.0000...1, where the number of binary digits after the point is 52. (You can think of the 1.0000...0 as what is stored in binary in the mantissa, AKA the significand.)
The error is 1/2 to the power of 52, divided by 2 (half the resolution). Note that I chose the relative precision as close to 1.0 as possible, because that is the worst case (between 1.111...11 and 1.111...10 the relative precision is better).
In decimal, the worst case relative precision of a double is 1.11E-16.
If you multiply N doubles with this precision, the new relative precision (assuming no additional error due to intermediate rounding) is:
1 - (1 - 1.11E-16)^N
So if you multiply pi (or any double) 10^12 times, the upper bound on the error is:
1.1102e-004
That means you can have confidence in about 4-5 digits.
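This bound is easy to evaluate directly in MATLAB (eps/2 = 1.1102e-16 is the worst-case relative error used above):
N = 1e12;
1 - (1 - eps/2)^N   % ~1.1102e-04, i.e. confidence in about 4-5 digits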
You can ignore intermediate rounding error if your CPU has support for extended precision floating point numbers for intermediate results.
If there is no extended precision FPU (floating point unit) used, rounding in intermediate steps introduces additional error (comparable to the error due to each multiplication). That means a strict bound on the error is calculated as:
1 -
((1 - 1.11E-16) * (1 - 1.11E-16) * (1 - 1.11E-16)
* (1 - 1.11E-16) * (1 - 1.11E-16) % for multiplication, then rounding
... (another N-4 lines here) ...
* (1 - 1.11E-16) * (1 - 1.11E-16))
= 1-(1-1.11E-16)^(N*2-1)
If N is too large, evaluating this term by term takes too long to run. For N = 10^12 the possible error (with intermediate rounding) is 2.2204e-004, which is double that without intermediate rounding: 1-(1-1.11E-16)^N = 1.1102e-004.
Approximately, we can say that intermediate rounding doubles the error.
If you multiplied pi 10^12 times and there was no extended precision FPU used (this might be because you write intermediate steps to memory, and perhaps do something else, before continuing; just make sure the compiler hasn't reordered your instructions so that there is no FPU result accumulation), then a strict upper bound on your relative error is:
2.22e-004
Note that confidence in a number of decimal places doesn't mean the result will always be correct to exactly that many places.
For example, if the answer is:
1.999999999999, and the error is 1E-5, then the actual answer could be 2.000001234.
In this case, even the first decimal digit was wrong. But that really depends on how lucky you are (whether the answer falls near a boundary such as this).
This solution assumes that the doubles (including the answer) are all normalized. For denormalized results, obviously, the number of binary digits by which the result is denormalized reduces the accuracy by that many digits.