I've written a function that takes a tolerance as one of its inputs, and I'm wondering how to format the output with the same number of decimal places as the tolerance. I know that sprintf can print a fixed number of decimal places, but I can't work out how to make that number match the number of decimal places in my tolerance. I want the result rounded to the precision specified by the input tol; for example, if the tolerance is 0.001, I want three decimal places. This is what I'm finding complicated to do.
Just use round; it has this functionality built in:
round(4.235,2)
ans =
4.23
To round to the nearest multiple of a given tolerance, divide by that tolerance, round to the nearest integer, and multiply the result by the original tolerance:
round(x/tol)*tol
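For example, a minimal sketch (x and tol here are illustrative values):
x = 4.2357;      % example value
tol = 0.001;     % desired precision
round(x/tol)*tol % ans = 4.2360, the nearest multiple of 0.001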
If you mean to assume that tol represents a whole number of decimal places, you can determine that number with log10:
places = round(-log10(tol));
You can use this result directly in sprintf:
output = sprintf('%.*f', places, x);
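Putting the pieces together, a hedged sketch (using pi just as a sample value):
tol = 0.001;
places = round(-log10(tol)); % places = 3
sprintf('%.*f', places, pi)  % ans = '3.142'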
I am using MATLAB Online to subtract a negative number very close to 0 from a positive number very close to 0, and am wondering why this doesn't result in significant round-off error. Is MATLAB doing some kind of optimisation to use addition instead of subtraction?
Numerical cancellation is a consequence of using floating-point arithmetic and is in that sense unrelated to the specific programming language being used. MATLAB is therefore also affected.
The code below shows the round-off error in a particular MATLAB example. It calculates a finite difference approximation to the derivative of exp(x)-1 at the point x=0. The symmetric difference quotient requires exp(+epsilon)-1 and exp(-epsilon)-1, thereby reproducing the subtraction of a slightly positive and a slightly negative number. The absolute approximation error clearly behaves erratically for small epsilon. This is the round-off error; the effect increases as epsilon gets smaller (read: as the values get closer together).
I can imagine two reasons why you did not observe the numerical round-off error.
The round-off error is quite small and might remain unobserved in practice.
There are specific situations in which the rounding errors cancel exactly. For example, you might modify this code to compute the derivative of x^2 at x=0. The same rounding errors then occur when calculating (-epsilon)^2 and (+epsilon)^2, and the numerical derivative works out just fine.
I hope this helps.
EpsilonList = logspace(-12,-2);
MyFunction = @(x) exp(x)-1;
TrueDerivative = 1; % derivative of exp(x)-1 is exp(x), which is 1 at x=0
% Initialize ErrorList
ErrorList = NaN(length(EpsilonList), 1);
%--- Compute ---%
for iter = 1:length(EpsilonList)
% Increment
epsilon = EpsilonList(iter);
% Symmetric (central) difference approximation
FiniteDiffApprox = ( MyFunction(epsilon) - MyFunction(-epsilon) ) / (2*epsilon);
% Approximation error
DiffApproxError = FiniteDiffApprox - TrueDerivative;
% Store
ErrorList(iter) = DiffApproxError;
end
%--- Create plot ---%
loglog(EpsilonList, abs(ErrorList), 'LineWidth', 3)
xlabel('epsilon', 'FontSize', 20)
ylabel('Absolute approximation error', 'FontSize', 20)
Floating-point numbers, in the case of MATLAB doubles by default, are represented by three parts: a sign bit (positive/negative), an exponent (11 bits, stored with a bias) and a fraction (52 bits); see the Wikipedia page on double-precision floating-point format.
The value represented by doubles is
(-1)^{sign} * 1.{fraction} * 2^{exponent}
or
(-1)^{sign} * 0.{fraction} * 2^{-1022}
where {fraction} is the fractional part of the significand (the middle factor). In the first case, two adjacent numbers whose significand is near 1 differ by 2^{-52}; that is the value of eps in MATLAB.
In the second case, where the number itself is denormalized and very small, two adjacent numbers can differ by as little as 2^{-52} * 2^{-1022}.
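A quick check of these spacings in MATLAB (a minimal sketch; the values assume IEEE double precision):
eps    % 2^-52, about 2.2204e-16: spacing of doubles just above 1
eps(0) % 2^-1074, about 4.9407e-324: spacing in the denormal range
eps(0) == 2^-52 * 2^-1022 % true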
The Matlab function eps(x) returns "the positive distance from abs(x) to the next larger floating-point number of the same precision as x." I use this to calculate the smallest floating-point number greater than x, via x + eps(x). I would also like to obtain the largest floating point number less than x, but I'm unaware of a function similar to eps that would facilitate this. How can I find the largest floating point number less than x?
You can subtract eps(x) in almost all cases.
However, as you have probably realized, this does not apply when the exponent changes, in other words when you want to step down from a power of two (there the spacing below x is half the spacing above it).
A negative-side eps is then easy to implement, since eps(x) is smaller than the distance from x to the next lower power of two that triggers the step change. Therefore, the eps of our number minus its own eps does the trick:
function out = neps(in)
    out = eps(in - eps(in)); % spacing on the lower side of in
end
This seems to work fine:
eps(2)
4.440892098500626e-16
neps(2)
2.220446049250313e-16
For all numbers except subnormals (i.e. those smaller than realmin = 2.2250738585072014e-308) you can do:
v = 1 - eps()/2;
b = x / v; % next larger in magnitude
c = x * v; % next smaller in magnitude
(based on this answer).
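For instance, at x = 2, where the naive x - eps(x) overshoots, a quick sanity check (illustrative only):
x = 2;
v = 1 - eps/2;
c = x*v;        % 2 - 2^-52, the double immediately below 2
c == x - eps(1) % true: the gap below 2 is eps(1), not eps(2)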
In real-number probability, there is a 0% chance that a random number p, selected uniformly from all of the real numbers in the interval (0,1), will be 0.5. However, what are the odds that
rand == 0.5
in MATLAB? I suppose this is like asking how many double-precision numbers are between zero and one, or maybe there are other factors at play.
No particular info on MATLAB's generator...
In general, even simple pseudo-random generators have cycles long enough to cover all values representable by a double.
If MATLAB uses some other form of generating random numbers, so much the better - so let's assume it covers the whole range of double values uniformly.
I believe the probability would then be: the distance between representable numbers around the value you are interested in, divided by the length of the interval. See What is the minimal step in double data type? (.NET) for a discussion of that distance.
Looking at this question, we see that there are 2^62 - 2^52 doubles in the interval (0,1). Therefore, the probability of picking any single one (like 0.5) would be roughly equal to one divided by this number, or
>> p = 1/(2^62-2^52)
ans =
2.170523997312134e-019
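That count can itself be confirmed by comparing IEEE 754 bit patterns (a minimal sketch; typecast reinterprets the bytes, and consecutive positive doubles have consecutive bit patterns):
n = typecast(1.0, 'uint64') - typecast(0.0, 'uint64') - 1; % doubles strictly between 0 and 1
n == uint64(2)^62 - uint64(2)^52 - 1 % true (endpoints excluded)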
However, as horchler already indicates, it also depends on the type of random number generator used, as well as MATLAB's implementation thereof. Sadly, I have only basic knowledge of the implementation details, but you can look here for a list of available random number generators in MATLAB and google a bit further for more precise numbers.
I am not sure whether Alexei was trying to say this, but inspired by him I think the probability will indeed be approximately the distance between representable doubles around 0.5.
Therefore I expect the probability to be approximately:
eps(0.5)
which evaluates to 1.1102e-16.
Given the monotonic nature of the spacing between adjacent doubles, I would actually expect the following to hold:
eps(0.5-eps(0.5)) <= yourprobability <= eps(0.5)
Implying a range of 5.5511e-17 to 1.1102e-16
realmin "returns the smallest positive normalized floating point number in IEEE double precision". eps(X) "is the positive distance from ABS(X) to the
next larger in mangitude floating point number of the same precision as X".
If I am interpreting the above documentation correctly, then realmin -- the smallest positive number that can be represented -- must be smaller than eps
(0). But:
>> realmin % 2.2251e-308
>> eps(0) % 4.9407e-324
Obviously, eps(0), which is even smaller, can be represented too. Could someone explain this to me?
This is a floating point issue. You should go read up on denormal numbers.
Briefly, realmin returns the smallest positive normalized floating point number. But it's possible to have denormal numbers that are smaller than this and still representable in floating point, which is what eps(0) returns.
Quick explanation of denormal numbers
A binary floating point number looks like this:
1.abcdef * 2^M
where a, b, c, d, e, f are each either 0 or 1, and M is an integer in the range -1022 <= M <= 1023. These are called normalized floating point numbers. The smallest possible normalized floating point number is 1 * 2^(-1022).
The denormal numbers look like this:
0.abcdef * 2^(-1022)
so they can take values that are smaller than the smallest normalized floating point number. The denormal numbers are equally spaced between -realmin and realmin.
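A small illustration in MATLAB (a sketch; the printed values assume IEEE double precision):
realmin       % 2^-1022, smallest positive normalized double
eps(0)        % 2^-1074, spacing of the denormals
realmin/2 > 0 % true: realmin/2 is a denormal, not zero
eps(0)/2      % underflows to exactly 0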
Perhaps it is a matter of definition; this is what I see in the documentation of eps:
For all X of class double such that abs(X) <= realmin, eps(X) = 2^(-1074)
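A one-line confirmation of that (a quick sketch):
eps(realmin)   % 2^-1074: already at the denormal spacing
eps(realmin/2) % 2^-1074: the same, uniform across the denormal range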
eps returns the distance from 1.0 to the next larger double-precision number, so I can use it to judge how trustworthy the digits in the low (fractional) weight positions of a number near 1 are. But for a very large number, whose value occupies the high positive weight positions, what can I use?
I mean that I need some reference by which to discount computation noise in the numbers obtained in MATLAB.
Have you read "What Every Computer Scientist Should Know About Floating-Point Arithmetic"?
It discusses rounding error (what you're calling "computation noise"), the IEEE 754 standard for representation of floating-point numbers, and implementations of floating-point math on computers.
I believe that reading this paper would answer your question, or at least give you more insight into exactly how floating point math works.
Some clarifications to aid your understanding - too big to fit in the comments on @Richante's post:
Firstly, the difference between realmin and eps:
realmin is the smallest normalised floating point number. You can represent smaller numbers in denormalised form.
eps is the smallest increment between distinct numbers. realmin = eps(realmin) * 2^52.
"Normalised" and "denormalised" floating point numbers are explained in the paper linked above.
Secondly, rounding error is no indicator of how much you can "trust" the nth digit of a number.
Take, for example, this:
>> ((0.1+0.1+0.1)^512)/(0.3^512)
ans =
1.0000
In exact arithmetic we are dividing 0.3^512 by itself, so the answer should be exactly one, right? We should be able to trust every digit up to eps(1).
The error in this calculation is actually more than 400 times eps:
>> ((0.1+0.1+0.1)^512)/(0.3^512) - 1
ans =
9.4591e-014
>> ans / eps(1)
ans =
426
The calculation error, i.e. the extent to which the nth digit is untrustworthy, is far greater than eps, the floating-point roundoff error in the representation of the answer. Note that we only did six floating-point operations here! You can easily rack up millions of FLOPs to produce one result.
I'll say it one more time: eps() is not an indicator of the error in your calculation. Do not attempt to display "My result is 1234.567 +/- eps(1234.567)" - that is meaningless and deceptive, because it implies your numbers are more precise than they actually are.
eps, the rounding error in the representation of your answer, is only about one part in 10^16. Your real enemy is the error that accumulates every time you do a floating point operation, and that is what you need to track for a meaningful estimate of the error.
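As a rough illustration of that accumulation (a sketch; the exact error varies, but it reliably dwarfs eps of the result):
n = 1e6;
s = 0;
for k = 1:n
    s = s + 0.1; % each addition can contribute up to eps(s)/2 of rounding error
end
abs(s - 1e5) % accumulated error: many orders of magnitude larger than...
eps(1e5)     % ...the representation spacing, about 1.5e-11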
Easier to digest than the paper Li-aung Yip recommends would be the Wikipedia article on machine epsilon. Then read What Every Computer Scientist ...
Your question isn't very well worded, but I think you want something that gives the distance from a number to the next smaller double-precision number? If so, you can just use:
x = 100;
x + eps(x)  % next larger double-precision number
x - eps(-x) % next smaller double-precision number
Double-precision numbers have a single sign bit, so counting up from a negative number is the same as counting down from a positive one.
Edit:
According to help eps, "For all X, EPS(X) is equal to EPS(ABS(X))", which really confuses me; I can't see how that can be consistent with doubles having a single sign bit and values not being equally spaced.
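This is also where the naive x - eps(-x) can misbehave, as discussed above for neps: since eps(-x) equals eps(x), it skips a value when x sits exactly at a power of two. A quick check (a sketch):
x = 2;
x - eps(-x)  % 2 - 2^-51: skips the true previous double
x - eps(x)/2 % 2 - 2^-52: the double immediately below 2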