Update the number of nonlinear constraints in fmincon in Matlab

I am trying to use fmincon inside a while loop, so that fmincon is executed repeatedly until the loop's exit condition is satisfied. Each time fmincon fails to satisfy a specific condition (e.g., x(N)-7.6 <= Tol), the number N of nonlinear constraints should be updated (increased). How is this possible with fmincon?
Suppose I initially have 18 nonlinear equality equations (ceq(1) ... ceq(18)). When the while condition can't be satisfied, the number of nonlinear equality equations should be increased to 23 (ceq(1) ... ceq(23)) on the next iteration.
Thanks for your idea. Let me give more details about what I want to do. I have a set of nonlinear algebraic equations, so I need to work with NLP (nonlinear programming) solvers; besides, my cost function is a minimum-time problem. My nonlinear constraint equations are dynamic governing equations that are discretized in the time coordinate, and N is the number of discretization points. Based on the Lagrange optimization technique, to find the optimal solution, the gradient of a scalar function (the Lagrangian) must be added to the system. As I mentioned in my question, I need to test my problem with an initial N; then, if the x output of fmincon can't meet the constraints, the number of discretization points needs to be increased. This continues until the optimal answer from fmincon gets close enough to my desired answer.

Peeling back the onion, it appears to me your question doesn't address your real problem, and you're going about solving your problem numerically in somewhat the wrong way. How you solve a problem numerically is related to, but rather different from, how you would solve it analytically, writing out equations by hand. See my further comment at the end; I'd also advise talking to fellow students in your lab and/or the prof.
Several comments:
Your objective function @(x) norm(myfoctest(x)) will always return 0, because myfoctest returns the empty array [] as its first output, and in Matlab, norm([]) is defined as 0.
Instead of minimizing 0 subject to f(x)==0, it appears you intend to solve the problem: minimize norm(f(x)) subject to f(x)==0? I don't understand the purpose of the constraint f(x)==0 in this context. Why not just minimize norm(f(x))?
In your function myfoctest, why is Ceq(2*N)=x(5*N+12)-x(5*N+11)-x(4*N+2) inside the for loop? You're assigning the value of x(32)-x(31)-x(18) to Ceq(8) four separate times (i.e., for j=1:4). Is this what you intend? This error suggests to me there might be other errors in the way you've written myfoctest.
A number of these constraints are linear constraints. Entering them as non-linear constraints will make fmincon's job harder.
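For example, constraints like Ceq(N+j) = x(4*N+1+j) - x(4*N+j) in your code below are linear, so they belong in fmincon's Aeq/beq arguments instead. A minimal sketch of the idea (assuming N = 4 and the 5*N+13 variable layout from your code):
% Sketch: move the linear equalities x(4*N+1+j) - x(4*N+j) == 0, j = 1..N-1,
% out of the nonlinear constraint function and into Aeq*x == beq
N = 4;
nvars = 5*N + 13;
Aeq = zeros(N-1, nvars);
for j = 1:N-1
    Aeq(j, 4*N+1+j) = 1;    % coefficient of x(4*N+1+j)
    Aeq(j, 4*N+j)   = -1;   % coefficient of x(4*N+j)
end
beq = zeros(N-1, 1);
% Pass Aeq and beq to fmincon, and delete the corresponding rows from Ceq.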
I don't know the original problem, but this feels to me that you're going about numerically solving it in a haphazard, screwy way. I've spotted several errors just glancing at the code, and I'd be concerned there are more if I actually understood the problem.
You have 5*N+13 variables and 5*N+13 non-linear equality constraints. Your feasible set may be a single point! Digression: many optimization algorithms start with a feasible point and take a step in a feasible direction. If the feasible set is a single point, there's no feasible direction... In your problem, the whole game is finding the one feasible point (if it even exists)?!
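If the problem really is a square system (as many equality constraints as unknowns), a root-finder such as fsolve attacks it more directly than fmincon. A minimal sketch, using a hypothetical wrapper getCeq around the myfoctest function shown below:
% Hypothetical wrapper (getCeq is my own name): fsolve wants a function
% that returns the residual vector, i.e. only the Ceq output of myfoctest
function F = getCeq(x)
    [~, F] = myfoctest(x);
end

% In the main script: solve Ceq(x) = 0 directly as a root-finding problem
x0 = rand(5*4+13, 1);                               % N = 4, as in the code below
xsol = fsolve(@getCeq, x0, optimset('Display','iter'));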
I doubt the main problem is that you need to "Update the number of nonlinear constraints in fmincon in Matlab."
It vaguely sounds like you've calculated the first order conditions of the Lagrangian, and you're entering those as constraints for the optimization problem? If so, that's very likely not what you should be doing to solve numerically.
Some suggestions...
I think you need to take a step back to the beginning: write down what you're trying to solve in a clean way, and then figure out how to solve it numerically in an efficient manner. My gut reaction is that this so far is a somewhat confused mess.
Further comment
Say you have some minimization problem (eg. an optimal control problem):
minimize f(x) subject to g(x) <= 0.
where f and g are convex, Slater's condition holds, and the first order conditions are necessary and sufficient to achieve a minimum. You might solve this mathematically and get some first order conditions:
dL/dx = 0
You might think that the way to solve this problem numerically is to numerically solve the system of equations dL/dx = 0 (the first order conditions). If dL/dx = 0 is a system of linear equations, this might be true, but in general, that's often an intractable way to go about solving the problem. Instead, you want to feed f and g directly to your optimization algorithm.
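To make that concrete, here is a minimal sketch in MATLAB (the objective and constraint are placeholders, not your actual problem):
% Hand fmincon the objective f and constraint g directly,
% rather than the first order conditions of the Lagrangian
f = @(x) (x(1)-1)^2 + (x(2)-2)^2;      % placeholder convex objective
g = @(x) x(1) + x(2) - 1;              % placeholder constraint g(x) <= 0
nonlcon = @(x) deal(g(x), []);         % fmincon expects [c, ceq] from nonlcon
x0 = [0; 0];
x = fmincon(f, x0, [], [], [], [], [], [], nonlcon);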
General points to keep in mind:
Solving a system of linear equations is efficient and fast.
Solving a convex optimization problem is efficient and fast.
In general, solving a system of non-linear equations or non-convex optimization can be horrible, horrible problems.

You have two cases: in the first, some condition is satisfied; in the second, it's not. Make the while statement hold true in each case. In your loop, add a flag variable that changes its value if your condition is not satisfied. For example, you can put something like:
flag = (x(N)-7.6<Tol);
which would return 1 if the condition is satisfied and 0 otherwise.
In your mycon function, add flag as an input variable:
function [c,ceq] = mycon(all_variables_you_had_before,flag)
Then, add a logical block in mycon that looks like:
if flag == 1
    ceq = [___]; %// put your 18 conditions here
else
    ceq = [___]; %// put your 23 conditions here
end
Finally, do not forget to use mycon(all_variables_you_had_before,flag) in the fmincon call in your main script:
x = fmincon(@myfun,x0,A,b,Aeq,beq,lb,ub,@(all_variables_you_had_before) mycon(all_variables_you_had_before,flag))
So, if the condition is satisfied, your fmincon would get the constraints as usual. But if the condition is not satisfied, the constraints would change. Hope that helps.
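Putting the pieces together, a minimal sketch of the surrounding loop (x0, A, b, Aeq, beq, lb, ub, N, and Tol are assumed to exist in your main script, and myfun/mycon are as above):
flag = 1;                                % start with the 18-constraint case
done = false;
while ~done
    nonlcon = @(x) mycon(x, flag);       % bind the current flag into the handle
    x = fmincon(@myfun, x0, A, b, Aeq, beq, lb, ub, nonlcon);
    if x(N) - 7.6 <= Tol                 % the condition from the question
        done = true;                     % satisfied: keep this solution
    else
        flag = 0;                        % switch to the 23-constraint case, retry
    end
end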

function [x] = runnested(x0, N)
r = ones(4,1);
N = length(r);
Tol = 0.001;
for k = 1:N
    for i = 1:N
        x0 = rand(5*N+13, 1)
        options = optimset('Largescale','off','algorithm','interior-point','Display','iter');
        [x(i,:), fval, exitflag, output] = fmincon(@(x) norm(myfoctest(x)), x0, [], [], [], [], [], [], @myfoctest, options)
    end
    if x(N) - 7.61 <= Tol
        break;
    else
        N = N + 1;
    end
end
function [C,Ceq] = myfoctest(x, N, r)
    C = [];
    r = ones(4,1);
    N = length(r);
    f = 3.5e-6; %km/s^2
    i1 = 10*(pi/180);
    Ts = 110; %sec
    V0 = 7.79; %km/sec
    a1 = 7.61; %km/sec
    b1 = 0.01*a1;
    a2 = 20*(pi/180); % rad %10 deg
    b2 = 0.01*a2; %rad
    Omeg0 = 10*(pi/180); %rad
    Ceq = zeros(5*N+13,1);
    for j = 1:N-1
        Ceq(j) = x(3*N+1+j) - x(3*N+j) - 2*x(4*N+1+j)*Ts*f*sin(x(2*N+1+j))./(pi*sin(i1)*x(j)^2)
        Ceq(N) = x(5*N+10) - x(5*N+9) - x(3*N+2) %x(5*N+10)-x(5*N+9)-x(4*N+7)
        Ceq(N+j) = x(4*N+1+j) - x(4*N+j)
        Ceq(2*N) = x(5*N+12) - x(5*N+11) - x(4*N+2)
        Ceq(2*N+1) = x(3*N+1)*Ts*f*sin(x(2*N+1)) + 2*x(4*N+1)*Ts*f*cos(x(2*N+1))/(pi*V0*sin(i1))
        Ceq(2*N+1+j) = x(3*N+1+j)*Ts*f*sin(x(2*N+1+j)) + 2*x(4*N+1+j)*Ts*f*cos(x(2*N+1+j))./(pi*x(j)*sin(i1))
        Ceq(3*N+1) = 1 - x(5*N+9)*b1 - x(5*N+10)*b1 - x(5*N+11)*b2 - x(5*N+12)*b2 - x(5*N+8)*N*Ts/100 - x(5*N+13)
        Ceq(3*N+2) = -2*x(5*N+8)*x(5*N+2)
        Ceq(3*N+3) = -2*x(5*N+9)*x(5*N+3)
        Ceq(3*N+4) = -2*x(5*N+10)*x(5*N+4)
        Ceq(3*N+5) = -2*x(5*N+11)*x(5*N+5)
        Ceq(3*N+6) = -2*x(5*N+12)*x(5*N+6)
        Ceq(3*N+7) = 2*x(5*N+13)*cos(x(5*N+7))*sin(x(5*N+7))
        Ceq(3*N+8) = V0 - x(1) - Ts*f*cos(x(2*N+1))
        Ceq(3*N+8+j) = x(j) - x(j+1) - Ts*f*cos(x(2*N+1+j))
        Ceq(4*N+8) = Omeg0 - x(N+1) + 2*Ts*f*sin(x(2*N+1))/(pi*V0*sin(i1))
        Ceq(4*N+8+j) = Omeg0 - x(j+1) + 2*Ts*f*sin(x(2*N+1+j))./(pi*x(j)*sin(i1))
        Ceq(5*N+8) = -x(5*N+2)^2 - N*Ts/100 - N*Ts*x(3*N+1)/100
        Ceq(5*N+9) = -x(5*N+3)^2 - x(N) + a1 + b1 - b1*x(3*N+1) + 7.61/100
        Ceq(5*N+10) = -x(5*N+4)^2 + x(N) + a1 + b1 - b1*x(3*N+1) - 7.61/100
        Ceq(5*N+11) = -x(5*N+5)^2 - x(2*N) + a2 + b2 - b2*x(3*N+1) + 0.35/100
        Ceq(5*N+12) = -x(5*N+6)^2 + x(2*N) + a2 + b2 - b2*x(3*N+1) - 0.35/100
        Ceq(5*N+13) = -(sin(x(5*N+7)))^2 - x(5*N+1)
    end
end
end

Related

How to stop quadprog when Hessian is not symmetric?

I am trying to solve a quadratic optimization problem using MATLAB's function quadprog. Actually, I am trying to solve many (not one) quadratic optimization problems in series using a for loop, where each QP depends on the previous QP's results. The thing is that sometimes, depending on the initial point, the warning "Your Hessian is not symmetric. Resetting H=(H+H')/2." appears.
Questions 1 & 2: Does that mean that the solver did not produce a (correct) solution? Or is it the case that, when the new Hessian is used, the resulting decision vector can be considered correct, i.e. the one that minimizes the objective function?
My (ignorant) idea is that in such a case I should stop the simulation and try a different initial point. But I expected that the exit flag would change from 1 to some other value, so that a simple if exitflag ~= 1, return, end block would stop the process. However, this is not the case: the exit flag does not change, yet the decision variable vector does not seem to be a correct answer.
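One pragmatic workaround, sketched below, is to test H for symmetry yourself before each call and stop (or restart) when the test fails, instead of relying on the exit flag. H, f, A, b, and the loop index k are placeholders for your own variables:
tolSym = 1e-10;                          % assumed symmetry tolerance
if norm(H - H', inf) > tolSym
    warning('H not symmetric at iteration %d; stopping.', k);
    return;                              % or: restart from a different initial point
end
x = quadprog(H, f, A, b);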

The right package/software for non-linear optimization with semidefinite constraints

I am struggling to solve, numerically, an optimization problem of the following (generic) form.
minimize F(x)
such that:
(1): 0 < x < 1
(2): M(x) >= 0.
where M(x) is a matrix whose elements are quadratic functions of x. The last constraint means that M(x) must be a positive semidefinite matrix. Furthermore F(x) is a callable function. For the more curious, here is a similar minimum-working-example.
I have tried a few options, but to no success.
PICOS, CVXPY and CVX -- In the first two cases, I cannot find a way of encoding a minimax problem such as mine. In the third one, which is implemented in MATLAB, the matrices involved in a semidefinite constraint must be affine, so my problem falls outside this criterion.
fmincon -- How can we encode a matrix positivity constraint? One way is to compute the eigenvalues of the matrix M(x) analytically and constrain each one to be positive. But the analytic expressions for the eigenvalues can be horrendous.
MOSEK -- The objective function must be expressible in a standard form. I cannot find an example of a user-defined objective function.
scipy.optimize -- Along with the objective function and the constraints, it is necessary to provide their derivatives as well. That is fine for my objective function, but if I were to express the matrix positivity constraint (as well as its derivative) with an analytic expression for the eigenvalues, it could be very tedious.
My apologies for not providing a MWE to illustrate my attempts with each of the above packages/softwares.
Can anyone please suggest a package/software which could be useful to me in solving my optimization problem?
Have a look at a nonlinear optimization package with box constraints, where different types of constraints may be coded via penalty or barrier techniques.
Look at the following URL
merlin.cs.uoi.gr
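To illustrate the penalty idea for the semidefinite constraint combined with fmincon's box constraints (a rough sketch only; F and M are placeholders, and mu is a penalty weight you would tune or grow in an outer loop):
F   = @(x) sum(x.^2);                        % placeholder objective
M   = @(x) [1, x(1); x(1), x(2)];            % placeholder symmetric matrix in x
mu  = 1e3;                                   % penalty weight (assumed)
pen = @(x) mu * max(0, -min(eig(M(x))))^2;   % zero iff M(x) is positive semidefinite
obj = @(x) F(x) + pen(x);
n   = 2;
x   = fmincon(obj, 0.5*ones(n,1), [], [], [], [], zeros(n,1), ones(n,1));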

Why getting different solutions by feeding constraint to fmincon in two similar way?

I am using fmincon to solve a problem. The problem has some linear inequality constraints, written in the matrices A and B.
I can write these constraints in two ways and should get analogous results. But, weirdly, I am getting different solutions. Why is that?
1) In the first way, I can feed the constraints to the fmincon function as follows:
[Xsqp, FUN, FLAG, Options] = fmincon(@(X)SQP(X,Dat),X,A,B,[],[],lb,ub,@(X)SQPnolcon(X,Dat,A,B),options);
% Here I comment out the line 'C=A*X'-B;' in the function 'SQPnolcon' and put C=[] instead, because A and B are already passed to fmincon
2) In the second way, I can write it like this:
[Xsqp, FUN, FLAG, Options] = fmincon(@(X)SQP(X,Dat),X,[],[],[],[],lb,ub,@(X)SQPnolcon(X,Dat,A,B),options);
and also the constraint function as follows:
function [C,Ceq] = SQPnolcon(X,Dat,A,B)
    C = A*X' - B;
    Ceq = [];
end
In the first case, you're supplying A and B both as linear inequality constraints and as nonlinear inequality constraints; in the second, you're only supplying them as nonlinear inequality constraints.
I get why you might expect these to be equivalent, since they're the same constraints either way. But the linear inequality constraints are applied in a different context than the nonlinear inequality constraints, and that leads the optimization algorithm to find a different solution.
I'm afraid I'm not able to explain exactly how the two different types of constraints are applied, and at what points in the algorithm - and in any case, this would vary depending on which algorithm you're asking fmincon to use (active-set, trust-region and so on). For that level of detail, you might need to ask MathWorks. But the basic answer is that you're getting different results because you're asking the algorithm to do two different things.

Matlab issue: linprog

Good evening,
I've got an issue with the linprog function of Matlab, here is the message I get:
Exiting due to infeasibility: an all-zero row in the constraint
matrix does not have a zero in corresponding right-hand-side entry.
According to the help, it means that I have at least one row in Aeq that is full of zeros (say, row i) while beq(i) is not equal to zero.
I checked my matrices doing:
checkmat = full(sum(abs(Aeq')))';
checkmat = horzcat(checkmat, beq);
for i = 1:length(checkmat)
    if (checkmat(i,1)==0 && checkmat(i,2)~=0) || (checkmat(i,2)==0 && checkmat(i,1)~=0)
        i
    end
end
but it seems to be alright. Does anybody have an idea where this could come from?
If any information is missing, I will gladly try to gather it.
The default interior-point method used by linprog performs some preprocessing steps before the actual iterations begin. Therefore, while your Aeq might not contain an all-zero row for which the corresponding element of beq is non-zero, this might occur after the preprocessing.
You could try running linprog using another algorithm (use optimset('LargeScale', 'off') and/or optimset('LargeScale', 'off', 'Simplex', 'on')) and see what the output is in that case.
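For example (a minimal sketch; f, A, b, Aeq, beq, lb, and ub are your existing problem data):
% Try the medium-scale algorithms instead of the default interior-point one
opts = optimset('LargeScale', 'off');                    % active-set
% opts = optimset('LargeScale', 'off', 'Simplex', 'on'); % or simplex
[x, fval, exitflag] = linprog(f, A, b, Aeq, beq, lb, ub, [], opts);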
However, I suspect that in all cases you'll get an "infeasible problem" exit flag, since your equality constraints seem to be impossible to satisfy.
More info on Matlab's available linear programming algorithms and their preprocessing steps.

MATLAB code help. Backward Euler method

Here is the MATLAB/FreeMat code I have for solving an ODE numerically using the backward Euler method. However, the results are inconsistent with my textbook's results, and sometimes even ridiculously so. What is wrong with the code?
function [x,y] = backEuler(f,xinit,yinit,xfinal,h)
%f - this is your y prime
%xinit - initial X
%yinit - initial Y
%xfinal - final X
%h - step size
n = (xfinal-xinit)/h; %Calculate steps
%Initialize arrays...
%The first elements take xinit and yinit correspondingly; the rest are filled with 0s.
x = [xinit zeros(1,n)];
y = [yinit zeros(1,n)];
%Numeric routine
for i = 1:n
    x(i+1) = x(i) + h;
    ynew = y(i) + h*(f(x(i),y(i)));
    y(i+1) = y(i) + h*f(x(i+1),ynew);
end
end
Your method is a method of a new kind. It is neither backward nor forward Euler. :-)
Forward Euler: y1 = y0 + h*f(x0,y0)
Backward Euler solve in y1: y1 - h*f(x1,y1) = y0
Your method: y1 = y0 + h*f(x1, y0 + h*f(x0,y0))
Your method is not backward Euler.
You don't solve for y1; you just estimate it with the forward Euler method and then apply a single correction. I don't want to pursue the analysis of your method, but since you never actually solve the implicit equation, it is not backward Euler and will not have backward Euler's stability properties.
Here is the closest method to yours that I can think of, explicit as well, which should give much better results. It's Heun's method:
y1 = y0 + h/2*(f(x0,y0) + f(x1, y0 + h*f(x0,y0)))
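For reference, a minimal sketch of Heun's method in the same style as the code above (the function name heun is my own; the arguments match backEuler):
function [x,y] = heun(f, xinit, yinit, xfinal, h)
% Explicit Heun's method: forward Euler predictor + trapezoidal corrector
n = round((xfinal - xinit)/h);   % number of steps
x = xinit + h*(0:n);             % all grid points at once
y = [yinit zeros(1,n)];
for i = 1:n
    ypred  = y(i) + h*f(x(i), y(i));                        % predictor
    y(i+1) = y(i) + h/2*(f(x(i), y(i)) + f(x(i+1), ypred)); % corrector
end
end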
The only issue I can spot is that the line:
n=(xfinal-xinit)/h
Should be:
n = abs((xfinal-xinit)/h)
To avoid a negative step count. If you are moving in the negative-x direction, make sure to give the function a negative step size.
Your answers probably deviate because of how coarsely you are approximating the solution. To get a semi-accurate result, your step size h has to be very, very small.
PS. This isn't the "backward Euler method," it is just regular old Euler's method.
If this is homework please tag it so.
Have a look at numerical recipes, specifically chapter 16, integration of ordinary differential equations. Euler's method is known to have problems:
There are several reasons that Euler’s method is not recommended for practical use, among them, (i) the method is not very accurate when compared to other, fancier, methods run at the equivalent stepsize, and (ii) neither is it very stable
So unless you know your textbook is using Euler's method, I wouldn't expect the results to match. Even if it is, you probably have to use an identical step size to get an identical result.
Unless you really want to solve an ODE via Euler's method that you've written by yourself you should have a look at built-in ODE solvers.
On a sidenote: you don't need to create x(i) inside the loop like this: x(i+1) = x(i)+h;. Instead, you can simply write x = xinit:h:xfinal;. Also, you may want to write n = round((xfinal-xinit)/h); to avoid warnings.
Here are the solvers implemented by MATLAB.
ode45 is based on an explicit Runge-Kutta (4,5) formula, the Dormand-Prince pair. It is a one-step solver: in computing y(tn), it needs only the solution at the immediately preceding time point, y(tn-1). In general, ode45 is the best function to apply as a first try for most problems.
ode23 is an implementation of an explicit Runge-Kutta (2,3) pair of Bogacki and Shampine. It may be more efficient than ode45 at crude tolerances and in the presence of moderate stiffness. Like ode45, ode23 is a one-step solver.
ode113 is a variable order Adams-Bashforth-Moulton PECE solver. It may be more efficient than ode45 at stringent tolerances and when the ODE file function is particularly expensive to evaluate. ode113 is a multistep solver: it normally needs the solutions at several preceding time points to compute the current solution.
The above algorithms are intended to solve nonstiff systems. If they appear to be unduly slow, try using one of the stiff solvers below.
ode15s is a variable order solver based on the numerical differentiation formulas (NDFs). Optionally, it uses the backward differentiation formulas (BDFs, also known as Gear's method), which are usually less efficient. Like ode113, ode15s is a multistep solver. Try ode15s when ode45 fails or is very inefficient and you suspect that the problem is stiff, or when solving a differential-algebraic problem.
ode23s is based on a modified Rosenbrock formula of order 2. Because it is a one-step solver, it may be more efficient than ode15s at crude tolerances. It can solve some kinds of stiff problems for which ode15s is not effective.
ode23t is an implementation of the trapezoidal rule using a "free" interpolant. Use this solver if the problem is only moderately stiff and you need a solution without numerical damping. ode23t can solve DAEs.
ode23tb is an implementation of TR-BDF2, an implicit Runge-Kutta formula with a first stage that is a trapezoidal rule step and a second stage that is a backward differentiation formula of order two. By construction, the same iteration matrix is used in evaluating both stages. Like ode23s, this solver may be more efficient than ode15s at crude tolerances.
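As a quick usage example (a sketch; the right-hand side is a placeholder, with f(x,y) = y' in the same sense as in backEuler above):
f = @(x, y) -2*x*y;                  % placeholder ODE right-hand side
[xout, yout] = ode45(f, [0, 2], 1);  % integrate on [0, 2] with y(0) = 1
plot(xout, yout);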
I think this idea could work: at each step, actually solve the implicit backward Euler equation for y(i+1), for example numerically with fzero. Try this:
for i = 1:n
    t(i+1) = t(i) + dt;
    % Solve y1 - y(i) - dt*f(t(i+1), y1) = 0 for y1: a true backward Euler step
    y(i+1) = fzero(@(y1) y1 - y(i) - dt*f(t(i+1), y1), y(i));
end
The code is almost fine; you just have to add another loop within the for loop, iterating the correction until it is self-consistent (a fixed-point iteration):
while abs((y(i+1) - ynew)/ynew) > 1e-10
    ynew = y(i+1);
    y(i+1) = y(i) + h*f(x(i+1), ynew);
end
I checked for a dummy function and the results were promising.