I'm trying to optimize a function whose result I already know, but MATLAB is giving me weird results. Here's what I'm trying to do:
max: f(x)= -1815·x1 - 379·x2
subject to:
-1475·x1 - 112013·x2 >= -700000
(x1,x2) <= 80
(x1,x2) >= 0
Here is my actual code:
f = [1815;379]
A = [-1475 -11203]
b = [-700000]
ub = (ones(1,2)*80)'
lb = zeros(2,1)
x = linprog(f,A,b,[],[],lb,ub)
How would you do it?
This problem can easily be solved analytically.
As mentioned in the comments, you would currently expect 0. If, however, you change your constraint from greater-than into less-than, the optimal solution is actually close to what MATLAB gives you.
It would basically be 700000/112013 = 6.249...
It is off by a factor of 10, but I assume that you made a typo somewhere (the code uses 11203 where the problem statement says 112013).
If you are struggling with how this function works, just try a simple case first (one that you can easily verify by hand) and then increase the complexity. Either way, your Excel solution is nowhere near what would come out of the problem description.
Your linear constraint has the wrong sign with respect to how linprog expects it (linprog takes inequality constraints in the form A*x <= b).
As with many linear problems, it's actually easiest to just make a plot:
[x1,x2] = meshgrid(0:80);
f = -1815*x1 - 379*x2;
f(-1475*x1 - 112013*x2 < -7e5) = NaN;
surf(x1,x2,f, 'edgecolor', 'none')
xlabel('x1'), ylabel('x2')
This makes it obvious that (0,0) is the solution.
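For completeness, a minimal sketch of what the corrected call could look like, with the >= constraint flipped into linprog's A*x <= b form (this is a rearrangement of the problem as stated, using the 112013 coefficient from the problem description rather than the 11203 in the posted code):
f = [1815;379];        % minimizing 1815*x1 + 379*x2 is the same as maximizing -1815*x1 - 379*x2
A = [1475 112013];     % -1475*x1 - 112013*x2 >= -700000  <=>  1475*x1 + 112013*x2 <= 700000
b = 700000;
lb = zeros(2,1);
ub = 80*ones(2,1);
x = linprog(f,A,b,[],[],lb,ub)   % the optimum of this formulation is (0,0)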
So this problem seems pretty straightforward. We are given:
mean of x = 10,281 and sigma of x = 4112.4
We are asked to determine P(X<15,000)
Now I thought the code for this in MATLAB should be super straightforward:
mu = 10281
sigma = 4112.4
p = logncdf(15000,10281,4112.4)
However this gives
p = .0063
The given answer is .8790, and just looking at p you can tell it is wrong: 15000 is above the mean, so the probability should be above .5. What is the deal with this function?
I saw somewhere that you might need to pass exp(15000) as x to the function, but that results in a probability of 1, which is too high.
Any pointers would be much appreciated.
%If X is lognormally distributed with mean and standard deviation:
mu = 10281;
sigma = 4112.4;
%then log(X) is normally distributed with the following parameters:
mew_actual = log((mu^2)/sqrt(sigma^2+mu^2));
sigma_actual = sqrt(log((sigma^2)/(mu^2) +1));
Now you can use either of the following to compute the CDF:
p = cdf('Normal',log(15000),mew_actual,sigma_actual)
or
p=logncdf(15000,mew_actual,sigma_actual)
which gives 0.8796
(which I believe is the correct answer)
The answer given to you is 0.8790 because if you solve the question by hand you'll get something like z = 1.172759, and when you look this value up in a table you can only find z = 1.17 (without the rest of the decimal places), for which φ(z) = 0.8790.
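For reference, the hand computation can be checked in MATLAB with the converted parameters from above (a quick sanity check, not part of the original answer):
z = (log(15000) - mew_actual)/sigma_actual   % roughly 1.17
p = normcdf(z)                               % roughly 0.8796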
You can verify the exact answer using this calculator.
I have an integrated error expression E = int[ abs(x-p)^2 ]dx with limits x|0 to x|L. The variable p is a polynomial of the form 2*(a*sin(x)+b(a)*sin(2*x)+c(a)*sin(3*x)). In other words, both coefficients b and c are known expressions of a. An additional equation is given as dE/da = 0. If the upper limit L is defined, the system of equations is closed and I can solve for a, giving the three coefficients.
I managed to get an optimization routine to solve for a purely based on maximizing L. This is confirmed by setting optimize=0 in the code below. It gives the same solution as if I solved the problem analytically. Therefore, I know the equations to solve for the coefficient a are correct.
I know the example I presented can be solved with pencil and paper, but I'm trying to build an optimization function that is generalized for this type of problem (I have a lot to evaluate). Ideally, the polynomial is given as an input argument to a function which then outputs xsol. Obviously, I need to get the optimization to work for the polynomial presented here before I can worry about generalizations.
Anyway, I now need to further optimize the problem with some constraints. To start, L is chosen. This allows me to calculate a. Once a is known, the polynomial is a known function of x only, i.e. p(x). I then need to determine the largest INTERVAL from 0->x over which the following constraint is satisfied: |dp(x)/dx - 1| < tol. This gives me a measure of the performance of the polynomial with the coefficient a. The interval is what I call the "bandwidth". I would like to emphasize two things: 1) The "bandwidth" is NOT the same as L. 2) All values of x within the "bandwidth" must meet the constraint. The function dp(x)/dx oscillates in and out of the tolerance criterion, so testing the criterion for a single value of x does not work; it must be tested over an interval, and the first instance of violation defines the bandwidth. I need to maximize this "bandwidth"/interval. For output, I also need to know which L led to such an optimization, so that I know the correct a to choose for the given constraints. That is the formal problem statement. (I hope I got it right this time.)
Now my problem is setting this whole thing up with MATLAB's optimization tools. I tried to follow ideas from the following articles:
Tutorial for the Optimization Toolbox™
The optimize=1 branch of the if statement is where the constrained optimization should go. I thought somehow nested optimization was involved, but I couldn't get anything to work. I have provided known solutions to the problem from the IMSL optimization library to compare/check against; they are written below the optimization routine. Anyway, here is the code I've put together so far:
function [history] = testing()
% History
history.fval = [];
history.x = [];
history.a = [];
%----------------
% Equations
polynomial = @(x,a) 2*sin(x)*a + 2*sin(2*x)*(9/20 -(4*a)/5) + 2*sin(3*x)*(a/5 - 2/15);
dpdx = @(x,a) 2*cos(x)*a + 4*cos(2*x)*(9/20 -(4*a)/5) + 6*cos(3*x)*(a/5 - 2/15);
% Upper limit of integration
IC = 0.8; % initial
LB = 0; % lower
UB = pi/2; % upper
% Optimization
tol = 0.003;
% Coefficient
% --------------------------------------------------------------------------------------------
dpda = @(x,a) 2*sin(x) + 2*sin(2*x)*(-4/5) + 2*sin(3*x)*1/5;
dEda = @(L,a) -2*integral(@(x) (x-polynomial(x,a)).*dpda(x,a),0,L);
a_of_L = @(L) fzero(@(a)dEda(L,a),0); % Calculate the value of "a" for a given "L"
EXITFLAG = @(L) get_outputs(@()a_of_L(L),3); % Be sure a zero is actually calculated
% NL Constraints
% --------------------------------------------------------------------------------------------
% Equality constraint (No inequality constraints for parent optimization)
ceq = @(L) EXITFLAG(L) - 1; % Just make sure fzero finds unique solution
confun = @(L) deal([],ceq(L));
% Objective function
% --------------------------------------------------------------------------------------------
% (Set optimize=0 to test the coefficient equations and proper maximization of L)
optimize = 1;
if optimize
%%%% Plug in solution below
else
% Optimization options
options = optimset('Algorithm','interior-point','Display','iter','MaxIter',500,'OutputFcn',@outfun);
% Optimize objective
objective = #(L) -L;
xsol = fmincon(objective,IC,[],[],[],[],LB,UB,confun,options);
% Known optimized solution from IMSL library
% a = 0.799266;
% lim = pi/2;
disp(['IMSL coeff (a): 0.799266 Upper bound (L): ',num2str(pi/2)])
disp(['code coeff (a): ',num2str(history.a(end)),' Upper bound: ',num2str(xsol)])
end
% http://stackoverflow.com/questions/7921133/anonymous-functions-calling-functions-with-multiple-output-forms
function varargout = get_outputs(fn, ixsOutputs)
output_cell = cell(1,max(ixsOutputs));
[output_cell{:}] = (fn());
varargout = output_cell(ixsOutputs);
end
function stop = outfun(x,optimValues,state)
stop = false;
switch state
case 'init'
case 'iter'
% Concatenate current point and objective function
% value with history. x must be a row vector.
history.fval = [history.fval; optimValues.fval];
history.x = [history.x; x(1)];
history.a = [history.a; a_of_L(x(1))];
case 'done'
otherwise
end
end
end
I could really use some help setting up the constrained optimization. I'm not only new to optimization, I've also never used MATLAB to do it. I should also note that what I have above does not work and is incorrect for the constrained optimization.
UPDATE: I added a for loop in the optimize=1 branch of the if statement to show what I'm trying to achieve with the optimization. Obviously, I could just use this, but it seems very inefficient, especially if I increase the resolution of range and have to run this optimization many times. If you uncomment the plots, it will show how the bandwidth behaves. By looping over the full range, I'm basically testing every L, but surely there's got to be a more efficient way to do this?
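For illustration, here is a rough sketch of that kind of brute-force loop (a reconstruction, not the exact code from the update; it assumes the a_of_L and dpdx handles and the tol value defined in the function above):
range = linspace(0,pi/2,200);           % candidate upper limits L (also used as the x grid)
bandwidth = zeros(size(range));
for i = 1:numel(range)
    L = range(i);
    a = a_of_L(L);                      % coefficient for this choice of L
    xs = range(range <= L);             % x values from 0 up to L
    viol = find(abs(dpdx(xs,a) - 1) > tol, 1, 'first');
    if isempty(viol)
        bandwidth(i) = L;               % constraint satisfied on the whole interval
    else
        bandwidth(i) = xs(viol);        % first violation defines the bandwidth
    end
end
[~,imax] = max(bandwidth);
L_best = range(imax);                   % L that maximizes the bandwidth
a_best = a_of_L(L_best);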
UPDATE: Solved
So it seems fmincon is not the only tool for this job; in fact, I couldn't even get it to work. Below, fmincon gets "stuck" on the IC and refuses to do anything... why is a question for a different post! Using the same layout and formulation, fminbnd finds the correct solution. The only difference, as far as I know, is that the former was using a conditional. But my conditional is nothing fancy, and really unneeded. So it's got to have something to do with the algorithm. I guess that's what you get when using a "black box". Anyway, after a long, drawn-out, painful learning experience, here is a solution:
options = optimset('Display','iter','MaxIter',500,'OutputFcn',@outfun);
% Conditional
index = @(L) min(find(abs([dpdx(range(range<=L),a_of_L(L)),inf] - 1) - tol > 0,1,'first'),length(range));
% Optimize
%xsol = fmincon(@(L) -range(index(L)),IC,[],[],[],[],LB,UB,confun,options);
xsol = fminbnd(@(L) -range(index(L)),LB,UB,options);
I would like to especially thank @AndrasDeak for all their support. I wouldn't have figured it out without the assistance!
I have to implement an EKF (extended Kalman filter) without actually having a good mathematical understanding of it. (Great... not...) So far I have been doing well, but since I tried to implement the prediction step, things started going wrong.
The agent that uses EKF (red) shoots off in a random direction
Eventually some variables (pose, Sigma, S, K) become NaNs and the simulation fails
I base my code on the code from Thrun's "Probabilistic Robotics", page 204. This is the part of the code that seems to be messing things up:
% Get variables
[x,y,th] = getPose(mu_bar);
numoffeatures = size(map,2);
for f = 1:numoffeatures
j = c(f);
[fx,fy] = getFeatures(map,f);
q = (fx-x).^2 + (fy-y).^2;
z_hat = [sqrt(q);
atan2(fy-y,fx-x)-th;
j];
H = [(-fx-x)/sqrt(q) (-fy-y)/sqrt(q) 0;
(fy-y)/q (-fx-x)/q -1;
0 0 0];
S = H*Sigma_bar*H'+Q;
K = Sigma_bar*H'/inv(S);
mu_bar = mu_bar+K*(z(:,j)-z_hat);
Sigma_bar = (eye(3)-K*H)*Sigma_bar;
end
I am totally clueless... Any ideas and hints will be appreciated. Thank you.
UPDATE
The reason the agent shoots off is an error when computing the difference between two angles, which are computed using atan2. Although I know what the problem is, I still can't figure out how to fix it.
Let's imagine that after computing atan2 for two objects I end up with a = 135 and b = 45 (in degrees). I computed the difference between them for both possibilities, 90 degrees and 270 degrees, but the agent still doesn't behave the way it is supposed to. I've never really encountered atan2 before. Is my understanding of calculating the difference between atan2 values wrong?
Is Q the process noise?
You cannot set the process noise as
Q = randn*eye(3);
because randn can be negative, which would give a negative covariance, and that doesn't make sense.
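A minimal sketch of a fixed, positive semi-definite alternative (the variance values below are placeholders, not taken from the post):
sigma_x = 0.1;     % std. dev. of x-position process noise
sigma_y = 0.1;     % std. dev. of y-position process noise
sigma_th = 0.05;   % std. dev. of heading process noise (rad)
Q = diag([sigma_x^2, sigma_y^2, sigma_th^2]);   % diagonal covariance, never negative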
I'm simply trying to find the exact minimum of a simple function in MATLAB. I've been experimenting with the use of built-in functions such as "fminbnd" and inline function definition, but I don't think I quite know what I'm doing.
My code is below. I want to find the x and y of Error's minimum.
clear all
A = 5;
tau = linspace(1,4,500); %Array of many tau values between 1 and 4
E1 = qfunc(((-tau) + 5) /(sqrt(2.5)));
E0 = qfunc((tau)/(sqrt(2.5)));
Error = 0.5*E0 + 0.5*E1;
figure
subplot (311), plot(tau, E0);
xlabel('Threshold (Tau)'), ylabel('E0')
title('Error vs. Threshold (E0, 1 <= T <= 4)')
subplot (312), plot(tau, E1);
xlabel('Threshold (Tau)'), ylabel('E1')
title('Error vs. Threshold (E1, 1 <= T <= 4)')
subplot (313), plot(tau, Error);
xlabel('Threshold (Tau)'), ylabel('Pr[Error]');
title('Error vs. Threshold (Pr[Error], 1 <= T <= 4)')
I mean, I can use the data cursor when the function is graphed to get close (though not right at the point where the minimum occurs, Threshold = 2.5), but there must be a method to just print the number to the window. So far I have tried:
fminbnd('Error', 'E0', 'E1')
And many other variants. I also tried using anonymous and inline function definitions, with no luck.
Can anyone point me in the right direction? Feel foolish for being stuck with this simple problem... Any help greatly appreciated!
See fminbnd
You should try something like this:
Error =#(tau) 0.5*qfunc(((-tau) + 5) /(sqrt(2.5))) + 0.5*qfunc((tau)/(sqrt(2.5)));
x = fminbnd(Error,0,10)
The first argument of fminbnd(f,x1,x2) is the function and the other arguments are the bounds. I did f=Error, x1=0 and x2=10.
Output:
x=2.5000
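Since you also asked for the y value at the minimum, note that fminbnd's second output returns the objective value at the minimizer, e.g.:
[x,fval] = fminbnd(Error,0,10)   % fval is the minimum error, x the threshold where it occurs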
Another way is to save your error function in .m file. See the webpage above.
I don't understand why you're using E0 and E1 as the limits of the range where the minimum should be found. Or am I misunderstanding something in your code?
Alternatively, since you have your function as a discrete collection of samples (as implied by the way you construct it, Error is going to be a vector), you could use the min command: http://www.mathworks.es/es/help/matlab/ref/min.html
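A minimal sketch of that approach, using the tau and Error vectors you already computed:
[minError,idx] = min(Error);   % smallest sampled error and its index
tauAtMin = tau(idx)            % threshold value at which the sampled minimum occurs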
Hope this helped!
Does anyone know how to make the following MATLAB code approximate the exponential function more accurately when dealing with large negative real numbers?
For example, when x = 1 the code works well, but when x = -100 it returns an answer of 8.7364e+31 when it should be closer to 3.7201e-44.
The code is as follows:
% (assumes x has been set beforehand, e.g. x = 1 or x = -100)
s = 1;    % running sum, starting with the k = 0 term
a = 1;    % holds 1/k!
y = 1;    % holds x^k
for k = 1:40
    a = a/k;
    y = y*x;
    s = s + a*y;   % add the k-th Taylor term x^k/k!
end
s
Any assistance is appreciated, cheers.
EDIT:
Ok so the question is as follows:
Which mathematical function does this code approximate? (I say the exponential function.) Does it work when x = 1? (Yes.) Unfortunately, using it when x = -100 produces the answer s = 8.7364e+31. Your colleague believes that there is a silly bug in the program and asks for your assistance. Explain the behaviour carefully and give a simple fix which produces a better result. [You must suggest a modification to the above code, or to its use. You must also check that your simple fix works.]
So I somewhat understand the problem: when there are 16 (or more) orders of magnitude between terms, precision is lost, but the solution eludes me.
Thanks
EDIT:
So in the end I went with this:
s = 1;
x = -100;
a = 1;
y = 1;
x1 = 1;
for k=1:40
x1 = x/10;
a = a/k;
y = y*x1;
s = s + a*y;
end
s = s^10;
s
Not sure if it's completely correct, but it returns a good approximation:
exp(-100) = 3.720075976020836e-044
s = 3.722053303838800e-044
After further analysis (and unfortunately after submitting the assignment), I realised that increasing the number of iterations, and thus the number of terms, further improves the accuracy. In fact the following was even more accurate:
s = 1;
x = -100;
a = 1;
y = 1;
x1 = 1;
for k=1:200
x1 = x/200;
a = a/k;
y = y*x1;
s = s + a*y;
end
s = s^200;
s
Which gives:
exp(-100) = 3.720075976020836e-044
s = 3.720075976020701e-044
As John points out in a comment, you have an error inside the loop. The y = y*k line does not do what you need. Look more carefully at the terms in the series for exp(x).
Anyway, I assume this is why you have been given this homework assignment, to learn that series like this don't converge very well for large values. Instead, you should consider how to do range reduction.
For example, can you use the identity
exp(x+y) = exp(x)*exp(y)
to your advantage? Suppose you store the value of exp(1) = 2.7182818284590452353...
Now, if I were to ask you to compute the value of exp(1.3), how would you use the above information?
exp(1.3) = exp(1)*exp(0.3)
But we KNOW the value of exp(1) already. In fact, with a little thought, this will let you reduce the range for an exponential down to needing the series to converge rapidly only for abs(x) <= 0.5.
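For illustration, a rough sketch of that idea in MATLAB (one possible arrangement of the reduction, with exp(1) stored as a constant; not the answer's exact method):
series_exp = @(z) sum(z.^(0:40) ./ factorial(0:40));   % truncated Taylor series, fine for small abs(z)
e1 = 2.718281828459045;                                % stored value of exp(1)
x = 1.3;
n = floor(x);                                          % integer part
r = x - n;                                             % fractional part, 0 <= r < 1
y = e1^n * series_exp(r)                               % exp(1.3) = exp(1)*exp(0.3), about 3.6693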
Edit: There is a second way one can do range reduction using a variation of the same identity.
exp(x) = exp(x/2)*exp(x/2) = exp(x/2)^2
Thus, suppose you wish to compute the exponential of a large number, perhaps 12.8. Getting this to converge acceptably fast will take many terms in the simple series, and (for large negative arguments) there will be a great deal of subtractive cancellation, so you won't get good accuracy anyway. However, if we recognize that
12.8 = 2*6.4 = 2*2*3.2 = ... = 16*0.8
then IF you could efficiently compute the exponential of 0.8, then the desired value is easy to recover, perhaps by repeated squaring.
exp(12.8)
ans =
362217.449611248
a = exp(0.8)
a =
2.22554092849247
a = a*a;
a = a*a;
a = a*a;
a = a*a
a =
362217.449611249
exp(0.8)^16
ans =
362217.449611249
Note that WHENEVER you do range reduction using methods like this, while you may incur numerical problems due to the additional computations necessary, you will usually come out way ahead due to the greatly enhanced convergence of your series.
Why do you think that's the wrong answer? Look at the last term of that sequence and its size, and tell me why you expect an answer that's close to 0.
My original answer stated that roundoff error was the problem. That will be a problem with this basic approach, but why do you think 40 terms are enough for the appropriate mathematical (as opposed to computer floating-point arithmetic) answer?
100^40 / 40! ~= 10^32.
woodchips has the right idea with range reduction. That's the typical approach people use to implement these kinds of functions very quickly. Once you get that all figured out, you deal with the roundoff errors of alternating series by summing adjacent terms within the loop, stepping with k = 1:2:40 (for instance). That doesn't work here until you use woodchips's idea, because for x = -100 the summands grow for a very long time. You need |x| < 1 to guarantee the intermediate terms are shrinking, and then such a rewrite will work.
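To make that concrete, here is a rough sketch of the adjacent-term pairing (an illustration only; it assumes the range reduction has already been done so that abs(x) < 1):
x = -0.5;                        % example input after range reduction
s = 1;                           % the k = 0 term
t = 1;                           % running term x^k/k!
for k = 1:2:39
    t = t*x/k;                   % term k
    s = s + t*(1 + x/(k+1));     % add terms k and k+1 together to limit cancellation
    t = t*x/(k+1);               % advance the running term to k+1
end
s - exp(x)                       % error of the paired summation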