Matlab: Find next lesser floating point number

The Matlab function eps(x) returns "the positive distance from abs(x) to the next larger floating-point number of the same precision as x." I use this to calculate the smallest floating-point number greater than x, via x + eps(x). I would also like to obtain the largest floating point number less than x, but I'm unaware of a function similar to eps that would facilitate this. How can I find the largest floating point number less than x?

You can subtract eps(x) in almost all cases.
However, as you have probably realized, this does not work when the exponent changes, in other words when you want to subtract from a power of two: just below a power of two the spacing between doubles halves, so x - eps(x) steps down too far.
A negative-side eps is then easy to implement, because stepping down by eps(x) always lands at or below that boundary, where the smaller spacing applies. Therefore, the eps of (our number minus its eps) does the trick.
function out = neps(in)
% negative-side eps: measure the spacing just below in
out = eps(in - eps(in));
end
This seems to work fine:
>> eps(2)
ans =
   4.440892098500626e-16
>> neps(2)
ans =
   2.220446049250313e-16
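As a further sanity check at a power of two (using the neps defined above), stepping down by neps(x) really does land on the adjacent double:

x = 2;
y = x - neps(x);   % intended: the largest double below 2
y < x              % true
x - y              % 2.220446049250313e-16 = eps(1), the spacing just below 2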

For all numbers except subnormals (i.e. those smaller in magnitude than realmin = 2.2250738585072014e-308) you can do:
v = 1 - eps()/2;
b = x / v; % next larger in magnitude
c = x * v; % next smaller in magnitude
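A minimal check that this lands on the adjacent double even at a power of two, where plain x - eps(x) overshoots:

x = 2;
v = 1 - eps()/2;   % 1 - 2^-53, the largest double below 1
x * v              % 2 - 2^-52: the double immediately below 2
x - eps(x)         % 2 - 2^-51: one spacing too far down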

Related

Matlab: Why does subtracting a positive number from a negative number not result in significant roundoff error when they are both very close to 0?

I am using Matlab online to subtract a negative number very close to 0 from a positive number very close to 0, and am wondering why this doesn't result in significant roundoff error. Is Matlab doing some kind of optimisation to use addition instead of subtraction?
Numerical cancellation is a consequence of using floating-point arithmetic and in that sense unrelated to the specific programming language being used. Matlab is thus also affected.
The code below shows the round-off error in a particular Matlab example. It calculates the finite difference approximation of the derivative of exp(x)-1 at the point x=0. The symmetric difference quotient requires exp(+epsilon)-1 and exp(-epsilon)-1, thereby reproducing the subtraction of a slightly positive and a slightly negative number. The absolute approximation error clearly behaves erratically for small epsilon. This is the round-off error. The effect increases as epsilon gets smaller (read: the values get closer together).
I can imagine two reasons as to why you did not observe the numerical round-off error.
The round-off error is quite small and might remain unobserved in practice.
There are specific situations in which rounding errors exactly cancel. For example, you might modify this code to compute the derivative of x^2 at x=0. There, the same rounding errors occur when calculating (-epsilon)^2 and (+epsilon)^2, and the numerical derivative works out just fine.
I hope this helps.
EpsilonList = logspace(-12,-2);
MyFunction = @(x) exp(x)-1;
TrueDerivative = 1; % derivative of exp(x)-1 is exp(x); at x=0 it equals 1
% Initialize ErrorList
ErrorList = NaN(length(EpsilonList), 1);
%--- Compute ---%
for iter = 1:length(EpsilonList)
% Increment
epsilon = EpsilonList(iter);
% Symmetric difference approximation
FiniteDiffApprox = ( MyFunction(epsilon) - MyFunction(-epsilon) ) / (2*epsilon);
% Approximation error
DiffApproxError = FiniteDiffApprox - TrueDerivative;
% Store
ErrorList(iter) = DiffApproxError;
end
%--- Create plot ---%
loglog(EpsilonList, abs(ErrorList), 'LineWidth', 3)
xlabel('epsilon', 'FontSize', 20)
ylabel('Absolute approximation error', 'FontSize', 20)
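To see the exact-cancellation case from the second point in action, here is a small self-contained sketch (same epsilon grid as above):

% For f(x) = x.^2 at x = 0, the rounding errors in (+epsilon).^2 and
% (-epsilon).^2 are identical, so the symmetric difference cancels exactly.
EpsilonList = logspace(-12, -2);
Quotient = arrayfun(@(e) (e^2 - (-e)^2) / (2*e), EpsilonList);
max(abs(Quotient))   % exactly 0: no visible round-off error here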
Floating point numbers, which in Matlab are double by default, are represented by 3 parts: a sign bit (positive/negative), an exponent (11 bits, stored with a bias of 1023 rather than its own sign bit) and a fraction (52 bits); see the wikipedia page.
The value represented by a double is
(-1)^{sign} * 1.{fraction} * 2^{exponent - 1023}
for normalised numbers, or
(-1)^{sign} * 0.{fraction} * 2^{-1022}
for denormalised (subnormal) numbers, where {fraction} is the fractional part of the significand. In the first case, two adjacent numbers whose unbiased exponent is 0 (so a significand around 1) differ by 2^{-52}. That is the value of eps in Matlab.
In the second case, where the fraction itself represents a very small number, two adjacent numbers can differ by as little as 2^{-52} * 2^{-1022}.
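These spacings can be checked directly at the Matlab prompt (a quick check using only built-ins):

eps(1)            % 2^-52, prints as 2.220446049250313e-16: spacing near 1.0
eps(1) == 2^-52   % true
eps(realmin)      % 2^-1074 = 2^-52 * 2^-1022: spacing in the subnormal range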

Optimization algorithm in Matlab

I want to calculate the maximum of the CROSS-IN-TRAY function, which (writing out the formula that was shown as an image) is

f(x1, x2) = -0.0001 * ( abs( sin(x1)*sin(x2)*exp(abs(100 - sqrt(x1^2 + x2^2)/pi)) ) + 1 )^0.1
So I have made this function in Matlab:
function f = CrossInTray2(x)
%the CrossInTray2 objective function
%
f = 0.0001 * ((abs(sin(x(:,1)) .* sin(x(:,2)) .* exp(abs(100 - sqrt(x(:,1).^2 + x(:,2).^2)/3.14159))) + 1).^0.1);
end
I multiplied the whole formula by (-1) to invert the function, so that looking for the minimum of the inverted formula actually gives the maximum of the original one.
Then when I go to the optimization tools, select the GA algorithm and define the lower and upper bounds as -3 and 3, it shows me the result after about 60 iterations, which is about 0.13, and the final point is something like [0, 9.34].
How is it possible that the final point is not in the range defined by the bounds? And what is the actual maximum of this function?
The maximum is at (0,0) (actually, wherever either input is 0, and periodically at multiples of pi). After you negate, you're looking for a minimum of a positive quantity. Just looking at the outer absolute value, it obviously can't get lower than 0, and that trivially occurs when either sine factor is 0.
Plugging in, you have f_min = f(0,0) = 0.0001 * (0 + 1)^0.1 = 1e-4
This expression is trivial to evaluate and plot over a 2d grid. Do that until you figure out what you're looking at and what the approximate answer should be, and only then invoke an actual optimizer. GA does not sound like a good candidate for a relatively smooth expression like this. The reason you're getting strange answers is that only one of the input parameters has to be 0: once the optimizer finds one of them, the other input could be anything.
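For instance, a minimal sketch of that grid evaluation (assuming the CrossInTray2 function defined above is on the path):

[X, Y] = meshgrid(linspace(-3, 3, 201));
Z = reshape(CrossInTray2([X(:), Y(:)]), size(X));
surf(X, Y, Z, 'EdgeColor', 'none')
xlabel('x_1'), ylabel('x_2')
% The flat valleys along x_1 = 0 and x_2 = 0 show why the minimiser is not
% unique: any point with either coordinate 0 attains the minimum 1e-4.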

Have a variable output length in Matlab

I've written a function which has a tolerance as one of its inputs, and I'm wondering how to set the output to have the same number of decimal places as the tolerance. I know you can use sprintf to set a number of decimal points, but I can't work out how to make this equal to the number of decimal places in my tolerance. I want the result rounded to the degree of precision specified by the input tol; so if the tolerance is 0.001, that is the precision I want, and this is the part I'm finding complicated.
Just use round, it has this functionality built-in:
round(4.235,2)
ans =
4.23
To round to the nearest unit of a given tolerance, divide through by that tolerance, round to the nearest integer and multiply the result by the original tolerance:
round(x/tol)*tol
If you can assume that tol is a whole number of decimal places (i.e. a power of ten), you can determine that number with log10:
places = round(-log10(tol));
You can use this result directly in sprintf:
output = sprintf('%.*f', places, x);
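Putting the two together, a small sketch (the values of x and tol here are made-up placeholders):

tol = 0.001;                    % tolerance with 3 decimal places
x = 4.23571;                    % hypothetical value to report
places = round(-log10(tol));    % gives 3
output = sprintf('%.*f', places, round(x/tol)*tol)
% output = '4.236'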

Solving equations involving dozens of ceil and floor functions in MATLAB?

I am tackling a problem which uses lots of equations of the form

q_i(x) = c_i(x) + sum_{j=1}^{N} floor( q_i(x) / P_j ) * C_j

(the equation was shown as an image; this form is inferred from the answers below), where q_i(x) is the only unknown and c_i, C_j, P_j are always positive. We have two cases: the first where c_i, C_j, P_j are integers, and the second where they are real. C_j < P_j for all j.
How is this type of problem solved efficiently in MATLAB, especially when the number of terms N is between 20 and 100?
What I was doing: q_i(x) - c_i(x) must equal a sum of integers, so I was doing an exhaustive search for a q_i(x) which satisfies both ends of the equation. Clearly this is computationally expensive.
What if c_i(x) is a floating point number? This makes it even more difficult to find a real q_i(x).
MORE INFO: These equations are from the paper "Integrating Preemption Threshold to Fixed Priority DVS Scheduling Algorithms" by Yang and Lin.
Thanks
You can use the bisection method to numerically find zeros of almost any well-behaved function.
Convert your equation into a zero-finding problem by moving everything to one side of the equals sign, then find x such that f(x) = 0.
Apply a bisection-method equation solver.
That's it! Or maybe not...
If you have specific range(s) where the roots should fall, just perform the bisection method for each range. If not, you still have to give a maximum estimate (you don't want to try numbers larger than that), and use that as the range.
The problem with this method is that for each given range it can only find one root, because it always picks the left (or right) half of the range. That's OK if P_j is an integer, since you can then find a minimum step of the function: say P_j = 1, then only a change in q_i larger than 1 leads to another segment (and thus a possibly different root), so within each range shorter than 1 there is at most one solution.
If P_j is an arbitrary number (such as 1e-10), then unless you have a lower limit on P_j you are most likely out of luck: you can't tell how fast the function will jump, which essentially means f(x) is not a well-behaved function, making it hard to solve.
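For completeness, a minimal bisection sketch along those lines (the function and bracket below are illustrative only, borrowed from the example in the next answer):

% f(q) = q - (c + sum_j floor(q/P_j)*C_j); the root q = 2.3 lies in [0, 3]
f = @(q) q - (2 + floor(q/3)*0.5 + floor(q/4)*3 + floor(q/2)*0.3);
lo = 0; hi = 3;                 % bracket with f(lo) < 0 < f(hi)
while hi - lo > 1e-12
    mid = (lo + hi) / 2;
    if sign(f(mid)) == sign(f(lo))
        lo = mid;               % the sign change is in the right half
    else
        hi = mid;
    end
end
root = (lo + hi) / 2            % -> 2.3000
% Caveat: f is a step function, so bisection homes in on a sign change,
% which may be a jump rather than a true zero; verify with f(root).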
The sum is a step function. You can discretize the problem by calculating where each floor function jumps to its next value; this is periodic for every j. Then you overlay the N "rhythms" (each with its own speed, specified by P_j) and get all the locations where the sum jumps. Each segment can have exactly 0 or 1 intersections with q_i(x). You should visualize the problem for intuitive understanding, like this:
f = @(q) 2 + (floor(q/3)*0.5 + floor(q/4)*3 + floor(q/2)*0.3);
xx = -10:0.01:10;
plot(xx,f(xx),xx,xx)
For each step, it can be checked analytically whether an intersection exists.
jumps = unique([0:3:10,0:4:10,0:2:10]); % Vector with position of jumps
lBounds = jumps(1:end-1); % Vector with lower bounds of stairs
uBounds = jumps(2:end); % Vector with upper bounds of stairs
middle = (lBounds+uBounds)/2; % center of each stair
fStep = f(middle); % height of the stairs
intersection = fStep; % Solution of linear function q=fStep
% Check if intersection is within the bounds of the specific step
solutions = intersection(intersection>=lBounds & intersection<uBounds)
solutions =
    2.3000    6.9000

Matlab: reverse of eps? Accuracy on positive weight?

eps returns the distance from 1.0 to the next largest double-precision number, so I can use it to interpret a number's precision in the low-order (fractional) digit positions. But for a very large number, whose imprecision is in the high-order digit positions, what can I use?
I mean that I need some reference for estimating the computational noise in numbers obtained from Matlab.
Have you read "What Every Computer Scientist Should Know About Floating-Point Arithmetic"?
It discusses rounding error (what you're calling "computation noise"), the IEEE 754 standard for representation of floating-point numbers, and implementations of floating-point math on computers.
I believe that reading this paper would answer your question, or at least give you more insight into exactly how floating point math works.
Some clarifications to aid your understanding - too big to fit in the comments on @Richante's post:
Firstly, the difference between realmin and eps:
realmin is the smallest normalised floating point number. You can represent smaller numbers in denormalised form.
eps is the smallest increment between distinct numbers. realmin = eps(realmin) * 2^52.
"Normalised" and "denormalised" floating point numbers are explained in the paper linked above.
Secondly, rounding error is no indicator of how much you can "trust" the nth digit of a number.
Take, for example, this:
>> ((0.1+0.1+0.1)^512)/(0.3^512)
ans =
1.0000
We're dividing 0.3^512 by itself, so the answer should be exactly one, right? We should be able to trust every digit up to eps(1).
The error in this calculation is actually around 400 * eps:
>> ((0.1+0.1+0.1)^512)/(0.3^512) - 1
ans =
9.4591e-014
>> ans / eps(1)
ans =
426
The calculation error, i.e. the extent to which the nth digit is untrustworthy, is far greater than eps, the floating-point roundoff error in the representation of the answer. Note that we only did a handful of floating-point operations here! You can easily rack up millions of FLOPs to produce one result.
I'll say it one more time: eps() is not an indicator of the error in your calculation. Do not attempt to display "My result is 1234.567 +/- eps(1234.567)". That is meaningless and deceptive, because it implies your numbers are more precise than they actually are.
eps, the rounding error in the representation of your answer, is only about one part in 10^16. Your real enemy is the error that accumulates every time you do a floating point operation, and that is what you need to track for a meaningful estimate of the error.
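A small illustration of that accumulation (a sketch; the exact error magnitude varies by machine and Matlab version, but it will be many times the eps of the result):

% Summing 0.1 a million times: 0.1 is not exactly representable, and
% every addition rounds, so the errors pile up far beyond one ulp.
s = 0;
for k = 1:1e6
    s = s + 0.1;
end
abs(s - 1e5)      % many ulps: compare with eps(1e5), which is about 1.5e-11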
Easier to digest than the paper Li-aung Yip recommends would be the Wikipedia article on machine epsilon. Then read What Every Computer Scientist ...
Your question isn't very well worded, but I think you want something that gives the distance from a number to the next smallest double-precision number? If so, you can just use:
x = 100;
x + eps(x) %Next largest double-precision number
x - eps(-x) %Next smallest double-precision number
Double-precision numbers have a single sign bit, so counting up from a negative number is the same as counting down from a positive.
Edit:
According to help eps, "For all X, EPS(X) is equal to EPS(ABS(X))", which really confuses me; I can't see how that can be consistent with doubles having a single sign bit and values not being equally spaced.
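For what it's worth, the two statements are compatible: the double format is mirror-symmetric about zero, so the spacing at X equals the spacing at -X. What eps(X) does not tell you is that the spacing just below abs(X) may be half the spacing just above it, which is exactly the power-of-two issue from the first question. A quick check:

eps(2) == eps(-2)   % true: eps depends only on magnitude
2 - eps(2)          % 2 - 2^-51: NOT the adjacent double below 2
2 - eps(1)          % 2 - 2^-52: the actual adjacent double below 2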