I'm working on an optimization problem in Matlab, but unfortunately, I'm stuck.
I want to maximize $\theta$ using the function fmincon, but this particular problem depends on $n$, and $n$ can get very large. There are $n-1$ inequality constraints, all defined by the relation:
For all $i \neq j \leq n$: $\theta - (x_i - x_j)^2 - (y_i - y_j)^2 \leq 0$.
So $c(x)$ is an $(n-1) \times 1$ vector.
I'm looking for a way to implement this so that I don't have to write a new MATLAB file for each different $n$ (and as you can imagine, when $n$ gets large, that would be one heck of a job).
Any help would be dearly appreciated.
Cheers!
EDIT: I have now created an extra m-file, just for this constraint:
function constraint(n)
%this is a function which creates the constraints of the distance.
for i= 1: n
for j= 1:n
if j==i
continue;
end
(x(i)-x(j))^2 + (y(i)-y(j))^2;
end
end
But the problem now is that matlab goes over the elements one by one. For example: it doesn't calculate (x(1) - x(4))^2 + (y(1) - y(4))^2.
Any idea on how to solve this one?
Thanks again!
I don't see why your function wouldn't at some point calculate that value (when i = 1 and j = 4). The main issue seems to be that your function doesn't return anything or take x as an input. According to the fmincon documentation, a constraint function should return two things:
Nonlinear constraint functions must return both c and ceq, the
inequality and equality constraint functions, even if they do not both
exist. Return empty [] for a nonexistent constraint.
So first, we need to make sure our constraints are saved into an output vector, c, that c and an empty ceq are returned, and that our function takes both x and n. There might be prettier ways of doing it, but:
function [c, ceq] = constraint(x,n)
%this is a function which creates the constraints of the distance.
% (Assumes the y coordinates are available here as well, e.g. packed into x
% or defined in this file; otherwise pass them in as another input.)
counter = 1;
for i = 1:n
    for j = 1:n
        if j == i
            continue;
        end
        c(counter) = (x(i)-x(j))^2 + (y(i)-y(j))^2;
        counter = counter + 1;
    end
end
ceq = [];
end
Next problem: this function takes two inputs, but the nonlcon input to fmincon must take only one, x. We get around this by wrapping the function in an anonymous function (n needs to be predefined), so in the actual fmincon call you would set it to something like @(x)constraint(x,n)
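For concreteness, a rough sketch of what the full call might then look like (the objective, the layout of x, and the starting point below are placeholders I made up, not from your problem):
n  = 10;                          % must be defined before building the handle
x0 = rand(2*n, 1);                % hypothetical starting point; choose the layout of x to match your problem
nonlcon = @(x) constraint(x, n);  % wraps the two-input constraint into a one-input handle
xopt = fmincon(@myObjective, x0, [], [], [], [], [], [], nonlcon);  % myObjective is a placeholder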
I have 100 equations with 5 variables. Is there a function in Matlab which I can use to find the optimal solution of these equations?
My problem is to find argmin ||(a-ic)^2 + (b-jd)^2 + e - h(i,j)|| over all i, j from -10 to 10, i.e.
%% Note: not Matlab code. Just showing the Math.
for i = -10:10
for j = -10:10
(a-ic)^2 + (b-jd)^2 + e = h(i,j)
known: h(i,j) is a 10*10 matrix,and i,j are indexes
expected: the optimal result of a,b,c,d,e
You can try using lsqnonlin as follows.
%% define a helper function in your .m file
function f = fun(x)
a=x(1); b=x(2); c=x(3); d=x(4); e=x(5); % Using variable names from your question. In other situations, be careful when overwriting e.
f = zeros(21*21, 1); % 21 values of i times 21 values of j; you should make this a variable for good practice.
for i = -10:10
    for j = -10:10
        f(21*(i+10) + (j+10+1)) = (a-i*c)^2 + (b-j*d)^2 + e - h(i,j); % 21 points per grid direction; h must be accessible here
    end
end
end
(Aside, why is your h(i,j) taking negative indices??)
In your main function you can simply write
function out=myproblem(x0)
out=lsqnonlin(@fun,x0);
end
In the command window, you can call it with a specific initial guess such as
myproblem([0,0,0,0,0])
I chose a helper function over an anonymous one because, in my experience, helpers get sped up by the JIT while anonymous functions do not. I also opted to index into the flat vector inside the loops, as opposed to actually calling reshape afterwards, because I expect reshape to cost significant extra time. Remember that an O(1) cost inside fun is not O(1) overall, since lsqnonlin calls fun many times.
(As always, a solution to a nonlinear problem is not guaranteed.)
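A hedged aside, not part of the answer above: if your h matrix lives in a variable rather than being accessible inside fun, one way to hand it over without a global is a small wrapper. myproblem2, fun2, and the positive-index shift below are my own assumptions, so adjust to however h is actually stored:
function out = myproblem2(x0, h)
    out = lsqnonlin(@(x) fun2(x, h), x0);   % h is captured by the anonymous wrapper
end

function f = fun2(x, h)
    % identical to fun above, except h arrives as an input and is indexed with positive offsets
    a=x(1); b=x(2); c=x(3); d=x(4); e=x(5);
    f = zeros(21*21, 1);
    for i = -10:10
        for j = -10:10
            f(21*(i+10) + (j+10+1)) = (a-i*c)^2 + (b-j*d)^2 + e - h(i+11, j+11);
        end
    end
end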
I am trying to apply Newton's method in Matlab, and I wrote a script:
syms f(x)
f(x) = x^2-4
g = diff(f)
x_1=1 %initial point
while f(['x_' num2str(i+1)])<0.001;% tolerance
for i=1:1000 %it should be stopped when tolerance is reached
['x_' num2str(i+1)]=['x_' num2str(i)]-f(['x_' num2str(i)])/g(['x_' num2str(i)])
end
end
I am getting this error:
Error: An array for multiple LHS assignment cannot contain M_STRING.
Newton's method formula is $x_{n+1} = x_n - f(x_n)/f'(x_n)$, iterated until the value of $f(x_n)$ gets close enough to zero.
All of the main pieces are present in the code presented. However, there are some problems.
The main problem is the assumption that string concatenation creates a variable in the workspace; it does not. The primary culprit is this line:
['x_' num2str(i+1)]=['x_' num2str(i)]-f(['x_' num2str(i)])/g(['x_' num2str(i)])
['x_' num2str(i+1)] is a string, and the MATLAB language does not support assignment to character arrays (which is my interpretation of An array for multiple LHS assignment cannot contain M_STRING.).
My answer, though others' may vary, would be:
1. Convert the symbolic functions to handles via matlabFunction (since Newton's method is almost always implemented numerically, symbolic functions should be dropped once they have served their purpose).
2. Replace the string creations with a double array for x (much, much cleaner, faster, and overall better code).
3. Put an if-test with a break in the for-loop instead of the current while/for construction.
My suggestions, implemented, would look like this:
syms f(x)
f(x) = x^2-4;
g = diff(f);
f = matlabFunction(f);
g = matlabFunction(g);
nmax = 1000;
tol = 0.001;% tolerance
x = zeros(1, nmax);
x(1) = 1; %initial point
fk = f(x(1));
for k = 1:nmax
    if (abs(fk) < tol)
        break;
    end
    x(k+1) = x(k) - f(x(k))/g(x(k));
    fk = f(x(k+1)); % evaluate f at the new iterate so the stopping test sees it
end
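For what it's worth, a small usage note (my addition, not part of the suggested script above): after the loop, x holds the iterates up to index k, the rest being preallocated zeros.
% assuming the tolerance was reached and the loop exited via break at index k
x = x(1:k);        % drop the unused preallocated zeros
root = x(end);     % for f(x) = x^2 - 4 and x(1) = 1 this approaches 2
fprintf('root ~ %.6f after %d iteration(s)\n', root, k-1);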
I'm trying to fit some data with Matlab, using the least square method.
I found best fit parameters, and I want to determine the uncertainty on them now.
To determine the uncertainty on the first parameter, say a, we have seen in the course that one should apply a variation to that parameter until the function value at the varied parameter minus the original function value equals 1.
That is, I have a vector called [bestparam] in my Matlab code, containing the four parameters a, b, c and d.
I also have a function defined in another file, called chi-square, which I evaluated at the best parameters.
I now want to apply a small variation to the parameter a, and keep doing this until chi-square(a + variation) - chi-square = 1. The difference must be exactly one. I implemented the following code for this:
i = 0;
a_new = a + i;
%small variation on the parameter a
new_param = [a_new b c d];
%my new parameters at which I want the function chisquare to be evaluated
newchisquare = feval(@chisquare, [new_param], X, Y, dY);
%the function value
while newchisquare - chisquarevalue ~= 1
i = i + 0.0001;
a_new = a_new + i;
new_param = [a_new b c d];
newchisquare = feval(@chisquare, [new_param], X, Y, dY);
end
disp(a_new);
disp(newchisquare);
But when I execute this loop, it never stops running. When I change the condition to < 1 (i.e. keep looping while the difference is smaller than one), it does stop after about 5 seconds. But then the difference between the function values is no longer exactly one; for example, my original function value is 63.5509 and the new one is 64.6145, which is not exactly 1 larger.
So is there some way to implement the code, and to keep updating the parameter a until the difference is exactly one? Help is appreciated.
When performing numerical methods, I wouldn't recommend using operations like == or ~= unless you are sure that you are comparing two integers. Even small deviations in your value may cause your code to never stop. You can apply a tolerance threshold to make your code stop when it is approximately correct:
TOL = 1e-2;
while (abs(newchisquare - chisquarevalue) <= 1 - TOL)
% your code
end
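As a side note, here is a rough alternative sketch, not a drop-in for your loop: you could let a root finder pick the variation instead of stepping manually. This assumes chisquare, a, b, c, d, X, Y, and dY exist exactly as in your question, and the bracket [0, 1] is a guess that may need widening.
chisquarevalue = chisquare([a b c d], X, Y, dY);                  % baseline value
gap = @(da) chisquare([a + da, b, c, d], X, Y, dY) - chisquarevalue - 1;
da  = fzero(gap, [0, 1]);                                         % variation on a giving a difference of ~1
disp(a + da);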
I have an integrated error expression $E = \int_0^L |x - p|^2 \, dx$. The variable p is a polynomial of the form 2*(a*sin(x)+b(a)*sin(2*x)+c(a)*sin(3*x)). In other words, both coefficients b and c are known expressions of a. An additional equation is given as dE/da = 0. If the upper limit L is defined, the system of equations is closed and I can solve for a, giving the three coefficients.
I managed to get an optimization routine to solve for a purely based on maximizing L. This is confirmed by setting optimize=0 in the code below. It gives the same solution as if I solved the problem analytically. Therefore, I know the equations to solve for the coefficient a are correct.
I know the example I presented can be solved with pencil and paper, but I'm trying to build an optimization function that is generalized for this type of problem (I have a lot to evaluate). Ideally, polynomial is given as an input argument to a function which then outputs xsol. Obviously, I need to get the optimization to work for the polynomial I presented here before I can worry about generalizations.
Anyway, I now need to further optimize the problem with some constraints. To start, L is chosen. This allows me to calculate a. Once a is known, the polynomial is a known function of x only, i.e. p(x). I need to then determine the largest INTERVAL from 0->x over which the following constraint is satisfied: |dp(x)/dx - 1| < tol. This gives me a measure of the performance of the polynomial with the coefficient a. The interval is what I call the "bandwidth". I would like to emphasize two things: 1) The "bandwidth" is NOT the same as L. 2) All values of x within the "bandwidth" must meet the constraint. The function dp(x)/dx does oscillate in and out of the tolerance criterion, so testing the criterion for a single value of x does not work. It must be tested over an interval. The first instance of violation defines the bandwidth. I need to maximize this "bandwidth"/interval. For output, I also need to know which L led to such an optimization, so that I know the correct a to choose for the given constraints. That is the formal problem statement. (I hope I got it right this time)
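To make the "bandwidth" concrete, here is a rough grid-based sketch of how I evaluate it; dpdx and a are the handle and coefficient defined in the code further down, and the grid resolution is an arbitrary choice:
tol   = 0.003;
xgrid = 0:1e-4:pi;                         % test grid; resolution is an assumption
ok    = abs(dpdx(xgrid, a) - 1) < tol;     % where the slope criterion holds
firstBad = find(~ok, 1, 'first');          % first violation defines the bandwidth
if isempty(firstBad)
    bandwidth = xgrid(end);                % criterion holds on the whole grid
else
    bandwidth = xgrid(firstBad);           % [0, bandwidth) satisfies the constraint
end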
Now my problem is setting this whole thing up with MATLAB's optimization tools. I tried to follow ideas from the following articles:
Tutorial for the Optimization Toolbox™
Setting optimize=1 in the if statement is where the constrained optimization should go. I thought somehow nested optimization was involved, but I couldn't get anything to work. I provided known solutions to the problem from the IMSL optimization library to compare/check against. They are written below the optimization routine. Anyway, here is the code I've put together so far:
function [history] = testing()
% History
history.fval = [];
history.x = [];
history.a = [];
%----------------
% Equations
polynomial = @(x,a) 2*sin(x)*a + 2*sin(2*x)*(9/20 -(4*a)/5) + 2*sin(3*x)*(a/5 - 2/15);
dpdx = @(x,a) 2*cos(x)*a + 4*cos(2*x)*(9/20 -(4*a)/5) + 6*cos(3*x)*(a/5 - 2/15);
% Upper limit of integration
IC = 0.8; % initial
LB = 0; % lower
UB = pi/2; % upper
% Optimization
tol = 0.003;
% Coefficient
% --------------------------------------------------------------------------------------------
dpda = @(x,a) 2*sin(x) + 2*sin(2*x)*(-4/5) + 2*sin(3*x)*1/5;
dEda = @(L,a) -2*integral(@(x) (x-polynomial(x,a)).*dpda(x,a),0,L);
a_of_L = @(L) fzero(@(a)dEda(L,a),0); % Calculate the value of "a" for a given "L"
EXITFLAG = @(L) get_outputs(@()a_of_L(L),3); % Be sure a zero is actually calculated
% NL Constraints
% --------------------------------------------------------------------------------------------
% Equality constraint (No inequality constraints for parent optimization)
ceq = @(L) EXITFLAG(L) - 1; % Just make sure fzero finds unique solution
confun = @(L) deal([],ceq(L));
% Objective function
% --------------------------------------------------------------------------------------------
% (Set optimize=0 to test coefficient equations and proper maximization of L )
optimize = 1;
if optimize
%%%% Plug in solution below
else
% Optimization options
options = optimset('Algorithm','interior-point','Display','iter','MaxIter',500,'OutputFcn',@outfun);
% Optimize objective
objective = @(L) -L;
xsol = fmincon(objective,IC,[],[],[],[],LB,UB,confun,options);
% Known optimized solution from IMSL library
% a = 0.799266;
% lim = pi/2;
disp(['IMSL coeff (a): 0.799266 Upper bound (L): ',num2str(pi/2)])
disp(['code coeff (a): ',num2str(history.a(end)),' Upper bound: ',num2str(xsol)])
end
% http://stackoverflow.com/questions/7921133/anonymous-functions-calling-functions-with-multiple-output-forms
function varargout = get_outputs(fn, ixsOutputs)
output_cell = cell(1,max(ixsOutputs));
[output_cell{:}] = (fn());
varargout = output_cell(ixsOutputs);
end
function stop = outfun(x,optimValues,state)
stop = false;
switch state
case 'init'
case 'iter'
% Concatenate current point and objective function
% value with history. x must be a row vector.
history.fval = [history.fval; optimValues.fval];
history.x = [history.x; x(1)];
history.a = [history.a; a_of_L(x(1))];
case 'done'
otherwise
end
end
end
I could really use some help setting up the constrained optimization. I'm not only new to optimizations, I've never used MATLAB to do so. I should also note that what I have above does not work and is incorrect for the constrained optimization.
UPDATE: I added a for loop in the if optimize section to show what I'm trying to achieve with the optimization. Obviously, I could just use this, but it seems very inefficient, especially if I increase the resolution of range and have to run this optimization many times. If you uncomment the plots, it will show how the bandwidth behaves. By looping over the full range, I'm basically testing every L, but surely there's got to be a more efficient way to do this?
UPDATE: Solved
So it seems fmincon is not the only tool for this job. In fact, I couldn't even get it to work. Below, fmincon gets "stuck" on the IC and refuses to do anything... why? That's for a different post! Using the same layout and formulation, fminbnd finds the correct solution. The only difference, as far as I know, is that the former was using a conditional. But my conditional is nothing fancy, and really unneeded. So it's got to have something to do with the algorithm. I guess that's what you get when using a "black box". Anyway, after a long, drawn-out, painful learning experience, here is a solution:
options = optimset('Display','iter','MaxIter',500,'OutputFcn',@outfun);
% Conditional
index = @(L) min(find(abs([dpdx(range(range<=L),a_of_L(L)),inf] - 1) - tol > 0,1,'first'),length(range));
% Optimize
%xsol = fmincon(@(L) -range(index(L)),IC,[],[],[],[],LB,UB,confun,options);
xsol = fminbnd(@(L) -range(index(L)),LB,UB,options);
I would like to especially thank @AndrasDeak for all their support. I wouldn't have figured it out without the assistance!
I have a model with linear constraints and a nonlinear objective function, and I'm trying to use the fmincon function from MATLAB's Optimization Toolbox to solve it. Actually, the Aineq is a 24*13 matrix, and the Aeq is a 24*13 matrix as well. But when I insert this command:
>> [x , lambda] = fmincon(@MP_ObjF,Aineq,bineq,Aeq,beq);
I encounter this error:
Warning: Trust-region-reflective method does not currently solve this type of
problem, using active-set (line search) instead.
> In fmincon at 439
??? Error using ==> fmincon at 692
Aeq must have 312 column(s).
What is probably wrong with it? Why should Aeq have 312 columns?!? I will appreciate any help. Thanks.
If you look at the documentation for fmincon (doc fmincon) you'll see an input called opt. In this you can set the algorithm used by MATLAB to solve your minimization problem. If you run
Opt=optimset('fmincon');
Then you can modify the algorithm option using
Opt.Algorithm = 'active-set';
Just send Opt to fmincon and then MATLAB won't have this problem anymore. Take a look inside Opt and you'll find a ton of options you can change to modify the optimization routine.
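Equivalently, as a small aside, you can set the algorithm in the optimset call itself:
Opt = optimset('Algorithm', 'active-set');   % same effect as the two lines above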
As for the number of columns: if you're using linear constraints, then the input argument for MP_ObjF must be a column vector with n rows and 1 column. Then A must be m x n, where m is the number of constraints and n is the number of variables. This is so that the matrix multiplication is well defined.
I'm sorry if my first answer was ambiguous. Maybe it will help if I do an example, as I saw several suspicious things in your comments. Let's say we want to minimize x^2 + y^2 + (z-1)^2 subject to x + y + z = 1, 2x + 3y - 4z <= 5, x,y,z >= -5. The solution is obviously (0,0,1)...
We first have to make our objective function,
fun = @(vec) vec(1)^2 + vec(2)^2 + (vec(3)-1)^2;
For fmincon to work, there can only be one input to the function, but that input can be a vector. So here x = vec(1) and so on... I think your comments are indicating that your objective function has multiple inputs. If you need to pass some parameters that aren't being optimized, there is documentation for this on Matlab's site (http://www.mathworks.com/help/optim/ug/passing-extra-parameters.html)
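For instance, a quick sketch of that pattern, with p as a made-up fixed parameter (not from your problem):
p = 2;                                                   % fixed parameter, not optimized
fun_p = @(vec) p*vec(1)^2 + vec(2)^2 + (vec(3)-1)^2;     % p is captured by the anonymous function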
Then we can set the optimization settings
opt = optimset('fmincon');
opt.Algorithm = 'active-set';
You may also have to modify the large-scale setting for the algorithm warning to go away, I can't remember...
Then we can set
Aeq = [1,1,1]; % equality constraint, if you had another eq constraint, it would be another row to Aeq
beq = 1; % equality constraint
A = [2,3,-4]; % inequality
b = 5; % inequality
lb = [-5;-5;-5]; % lower bound
x0 = [0.5;0.5;0]; % initial feasible guess, needs to be a column vector
[x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,[],[],opt);
Then hopefully this finds x = [0;0;1]