How to stop quadprog when Hessian is not symmetric? - matlab

I am trying to solve a quadratic optimization problem using MATLAB's quadprog function. Actually, I am solving many (not one) quadratic programs in series using a for loop, where each QP depends on the results of the previous one. The thing is that sometimes, depending on the initial point, the warning "Your Hessian is not symmetric. Resetting H=(H+H')/2." appears.
Question 1 & 2: Does that mean that the solver did not produce a (correct) solution? Or is the resulting decision vector still correct, i.e. the one that minimizes the objective function, once the symmetrized Hessian is used?
My (ignorant) idea is that in such a case I should stop the simulation and try a different initial point. BUT I expected the exit flag to change from 1 to some other value, so that a simple if exitflag ~= 1, return, end check would stop the process. However, this is not the case: the exit flag does not change, yet the decision variable vector does not seem to be a correct answer.
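A minimal sketch of such a guard (assumed names: H, f, A, b stand for the QP data built in each loop iteration, and tolSym is a made-up tolerance):

tolSym = 1e-10;                               % assumed tolerance for acceptable asymmetry
asym = norm(H - H', 'fro');                   % how far H actually is from symmetric
if asym > tolSym
    warning('Hessian asymmetric by %g; trying a different initial point.', asym);
    return                                    % quadprog would otherwise reset H silently
end
[xOpt, fval, exitflag] = quadprog(H, f, A, b);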

Related

Update the number of nonlinear constraints in fmincon in Matlab

I am trying to use fmincon in a while loop, so that fmincon is executed repeatedly as long as the while condition is not satisfied. Each time fmincon fails to satisfy a specific condition (e.g., abs(x(N)-7.6) <= Tol), the number N of nonlinear constraints should be updated (increased). How is this possible with fmincon?
Suppose I initially have 18 nonlinear equality constraints (ceq(1) ... ceq(18)). When the while condition cannot be satisfied, the number of nonlinear equality constraints should be increased to 23 (ceq(1) ... ceq(23)) on the next iteration.
Thanks for your idea. Let me give you more details about what I want to do. I have a set of nonlinear algebraic equations, so I need to work with NLP (nonlinear programming) solvers; besides, my cost function is a minimum-time objective. My nonlinear equality constraints are actually dynamic governing equations discretized in the time coordinate, and N is the number of discretization points. Based on the Lagrangian optimization technique, the gradient of a scalar function (the Lagrangian) must be added to the system to find the optimal solution. As I mentioned in my question, I need to test my problem with an initial N; then, if the x output of fmincon cannot meet the constraints, the number of discretization points must be increased. This continues until the optimal answer output by fmincon gets close enough to my desired answer.
Peeling back the onion, it appears to me that your question doesn't address your real problem, and you're going about solving your problem numerically in somewhat the wrong way. How you solve a problem numerically is related to, but rather different from, how you would solve it analytically by writing out equations by hand. See my further comment at the end, and I'd advise talking to fellow students in your lab and/or the prof.
Several comments:
Your objective function @(x) norm(myfoctest(x)) will always return 0, because myfoctest returns the empty array [] as its first output, and in Matlab, norm([]) is defined as 0.
Instead of minimizing 0 subject to f(x) == 0, it appears you intend to solve the problem: minimize norm(f(x)) subject to f(x) == 0? I don't understand the purpose of the constraint f(x) == 0 in this context. Why not just minimize norm(f(x))?
In your function myfoctest, why is Ceq(2*N) = x(5*N+12) - x(5*N+11) - x(4*N+2) inside the for loop? You're assigning the value of x(32) - x(31) - x(18) to Ceq(8) four separate times (i.e., for j = 1:4). Is this what you intend? This error suggests to me there might be other errors in the way you've written myfoctest.
A number of these constraints are linear constraints. Entering them as non-linear constraints will make fmincon's job harder.
I don't know the original problem, but it feels to me like you're going about solving it numerically in a haphazard, screwy way. I've spotted several errors just glancing at the code, and I'd be concerned there are more if I actually understood the problem.
You have 5*N+13 variables and 5*N+13 non-linear equality constraints. Your feasible set may be a single point! Digression: many optimization algorithms start with a feasible point and take a step in a feasible direction. If the feasible set is a single point, there's no feasible direction... In your problem, the whole game is finding the one feasible point (if it even exists)?!
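As an aside, if the problem really is 5*N+13 equations in 5*N+13 unknowns, a root-finder such as fsolve is a more natural tool than fmincon. A rough sketch (assuming myfoctest is callable as a standalone function and that its second output holds the equality residuals):

function xRoot = solveSquareSystem(N)
    x0 = rand(5*N + 13, 1);                  % random start, as in the posted code
    xRoot = fsolve(@eqs, x0, optimoptions('fsolve', 'Display', 'iter'));
    function Ceq = eqs(x)
        [~, Ceq] = myfoctest(x);             % keep only the equality residuals
    end
end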
I doubt the main problem is that you need to "Update the number of nonlinear constraints in fmincon in Matlab."
It vaguely sounds like you've calculated the first-order conditions of the Lagrangian and are entering those as constraints for the optimization problem? If so, that's very likely not the right way to solve the problem numerically.
Some suggestions...
I think you need to take a step back to the beginning: write down what you're trying to solve in a clean way, and then figure out how to solve it numerically in an efficient manner. My gut reaction is that what you have so far is a somewhat confused mess.
Further comment
Say you have some minimization problem (e.g., an optimal control problem):
minimize f(x) subject to g(x) <= 0.
where f and g are convex, Slater's condition holds, and the first-order conditions are necessary and sufficient for a minimum. You might solve this mathematically and get some first-order conditions:
dL/dx = 0
You might think that the way to solve this problem numerically is to numerically solve the system of equations dL/dx = 0 (the first-order conditions). If dL/dx = 0 is a system of linear equations, this might be true, but in general that's often an intractable way to go about solving the problem. Instead, you want to feed f and g directly to your optimization algorithm, as in the sketch below.
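For instance, a minimal sketch with a made-up f and g (not your actual problem; fmincon's nonlcon must return the inequality constraints first and the equalities second):

f = @(x) (x(1)-1)^2 + (x(2)-2)^2;      % hypothetical convex objective
g = @(x) x(1)^2 + x(2)^2 - 4;          % hypothetical convex constraint, g(x) <= 0
nonlcon = @(x) deal(g(x), []);         % c = g(x), ceq = [] (no equalities)
x = fmincon(f, [0; 0], [], [], [], [], [], [], nonlcon);

fmincon never sees the first-order conditions; it works with f and g directly.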
General points to keep in mind:
Solving a system of linear equations is efficient and fast.
Solving a convex optimization problem is efficient and fast.
In general, solving a system of non-linear equations or non-convex optimization can be horrible, horrible problems.
You have two cases: in the first, some condition is satisfied; in the second, it is not. Make the while statement hold true in each case. In your loop, add a flag variable that changes its value when your condition is not satisfied. For example, you can put something like:
flag = (abs(x(N)-7.6) < Tol);
which returns 1 (true) if the condition is satisfied and 0 (false) otherwise.
In your mycon function add flag as an input variable:
function [c,ceq] = mycon(all_variables_you_had_before,flag)
Then, add a logical block in mycon that looks like:
c = []; % the inequality output must be assigned too, even if it is empty
if flag == 1
    ceq = [___]; % put your 18 conditions here
else
    ceq = [___]; % put your 23 conditions here
end
Finally, do not forget to pass mycon(all_variables_you_had_before,flag) to fmincon in your main script:
x = fmincon(@myfun,x0,A,b,Aeq,beq,lb,ub,@(all_variables_you_had_before) mycon(all_variables_you_had_before,flag))
So, if the condition is satisfied, your fmincon would get the constraints as usual. But if the condition is not satisfied, the constraints would change. Hope that helps.
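One caveat (a MATLAB semantics note, not part of the answer above): an anonymous function captures the value of flag at the moment the handle is created, so the handle must be rebuilt inside the loop after flag is updated, e.g.:

flag = (abs(x(N)-7.6) < Tol);              % recompute the flag each iteration
nonlcon = @(xx) mycon(xx, flag);           % rebuild so the handle sees the new flag
x = fmincon(@myfun, x0, A, b, Aeq, beq, lb, ub, nonlcon);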
function x = runnested(x0, N)
    r = ones(4,1);
    N = length(r);
    Tol = 0.001;
    for k = 1:N
        for i = 1:N
            x0 = rand(5*N+13, 1);
            options = optimset('LargeScale','off', 'Algorithm','interior-point', 'Display','iter');
            [x(i,:), fval, exitflag, output] = fmincon(@(x) norm(myfoctest(x)), x0, [], [], [], [], [], [], @myfoctest, options);
        end
        if x(N) - 7.61 <= Tol
            break;
        else
            N = N + 1;
        end
    end

    function [C, Ceq] = myfoctest(x, N, r)
        C = [];
        r = ones(4,1);
        N = length(r);
        f = 3.5e-6;            % km/s^2
        i1 = 10*(pi/180);
        Ts = 110;              % sec
        V0 = 7.79;             % km/sec
        a1 = 7.61;             % km/sec
        b1 = 0.01*a1;
        a2 = 20*(pi/180);      % rad %10 deg
        b2 = 0.01*a2;          % rad
        Omeg0 = 10*(pi/180);   % rad
        Ceq = zeros(5*N+13, 1);
        for j = 1:N-1
            Ceq(j)       = x(3*N+1+j) - x(3*N+j) - 2*x(4*N+1+j)*Ts*f*sin(x(2*N+1+j))./(pi*sin(i1)*x(j)^2);
            Ceq(N)       = x(5*N+10) - x(5*N+9) - x(3*N+2); % x(5*N+10)-x(5*N+9)-x(4*N+7)
            Ceq(N+j)     = x(4*N+1+j) - x(4*N+j);
            Ceq(2*N)     = x(5*N+12) - x(5*N+11) - x(4*N+2);
            Ceq(2*N+1)   = x(3*N+1)*Ts*f*sin(x(2*N+1)) + 2*x(4*N+1)*Ts*f*cos(x(2*N+1))/(pi*V0*sin(i1));
            Ceq(2*N+1+j) = x(3*N+1+j)*Ts*f*sin(x(2*N+1+j)) + 2*x(4*N+1+j)*Ts*f*cos(x(2*N+1+j))./(pi*x(j)*sin(i1));
            Ceq(3*N+1)   = 1 - x(5*N+9)*b1 - x(5*N+10)*b1 - x(5*N+11)*b2 - x(5*N+12)*b2 - x(5*N+8)*N*Ts/100 - x(5*N+13);
            Ceq(3*N+2)   = -2*x(5*N+8)*x(5*N+2);
            Ceq(3*N+3)   = -2*x(5*N+9)*x(5*N+3);
            Ceq(3*N+4)   = -2*x(5*N+10)*x(5*N+4);
            Ceq(3*N+5)   = -2*x(5*N+11)*x(5*N+5);
            Ceq(3*N+6)   = -2*x(5*N+12)*x(5*N+6);
            Ceq(3*N+7)   = 2*x(5*N+13)*cos(x(5*N+7))*sin(x(5*N+7));
            Ceq(3*N+8)   = V0 - x(1) - Ts*f*cos(x(2*N+1));
            Ceq(3*N+8+j) = x(j) - x(j+1) - Ts*f*cos(x(2*N+1+j));
            Ceq(4*N+8)   = Omeg0 - x(N+1) + 2*Ts*f*sin(x(2*N+1))/(pi*V0*sin(i1));
            Ceq(4*N+8+j) = Omeg0 - x(j+1) + 2*Ts*f*sin(x(2*N+1+j))./(pi*x(j)*sin(i1));
            Ceq(5*N+8)   = -x(5*N+2)^2 - N*Ts/100 - N*Ts*x(3*N+1)/100;
            Ceq(5*N+9)   = -x(5*N+3)^2 - x(N) + a1 + b1 - b1*x(3*N+1) + 7.61/100;
            Ceq(5*N+10)  = -x(5*N+4)^2 + x(N) + a1 + b1 - b1*x(3*N+1) - 7.61/100;
            Ceq(5*N+11)  = -x(5*N+5)^2 - x(2*N) + a2 + b2 - b2*x(3*N+1) + 0.35/100;
            Ceq(5*N+12)  = -x(5*N+6)^2 + x(2*N) + a2 + b2 - b2*x(3*N+1) - 0.35/100;
            Ceq(5*N+13)  = -(sin(x(5*N+7)))^2 - x(5*N+1);
        end
    end
end

How to detect infeasible solutions faster with fmincon in Matlab?

I am solving a multi-objective optimisation problem. The pseudocode is as follows:
for h = 1:10                       % number of points
    1. Update some constraints     % update the constraints to find new points
    2. Minimize F(X)               % the objective function is not updated between iterations
       subject to the updated constraints
end
For each iteration, a new solution is obtained. But for some iterations the solution is infeasible and fmincon returns an exitflag of -2, so these must be discarded from the problem's solution set.
fmincon consumes a lot of time recognising an infeasible point, whereas points with exitflag > 0 are fine. The total run time of the problem is therefore significantly inflated by these infeasible points.
How can I set the fmincon options so that infeasible points are detected faster, considering the fact that I can set the options only once?
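A sketch of one possible tactic (my own illustration with assumed budgets; MaxIterations and MaxFunctionEvaluations are standard optimoptions settings, and F, x0, A, b, Aeq, beq, lb, ub, nonlcon are placeholders for the problem data): build one options object before the loop that caps the per-solve budget, so runs that are heading nowhere give up quickly:

options = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
    'MaxIterations', 200, ...               % assumed budget, tune to your problem
    'MaxFunctionEvaluations', 2000);        % assumed budget
for h = 1:10
    % ... update the constraints for this point ...
    [x, fval, exitflag] = fmincon(F, x0, A, b, Aeq, beq, lb, ub, nonlcon, options);
    if exitflag <= 0
        continue                            % discard infeasible or budget-exhausted runs
    end
end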

Matlab understanding ode solver

I have a system of coupled differential equations that I am solving with the ode23 solver. When a certain threshold is reached, one of the parameters changes, which reverses the slope of my function.
I followed the behavior of the ODE with the debugger and noticed that it starts to jump back in "time" around this point; basically, it generates more data points. However, not all of these are represented in the final solution vector.
Can somebody explain this behavior, especially why not all calculated values find their way into the solution vector?
//Edit: To clarify, the behavior starts when v changes from 0 to any other value. (When I write every value of v to a vector, it has more than 1000 components, while the ODE solver solution only has ~300.)
Find the code of my equations below:
% chemostat model, based on:
% DCc = -v0*Cc/V + umax*Cs*Cc/(Ks+Cs) - rd
% DCs = (v0/V)*(Cs0-Cs) - Cc*(Ys*umax*Cs/(Ks+Cs) - m)
function dydt = systemEquationsRibose(t,y,funV0Ribose,V,umax,Ks,rd,Cs0,Ys,m)
v = funV0Ribose(t,y);   % funV0Ribose determines v dependent on y(1)
if y(2) < 0
    y(2) = 0;
end
dydt = [-(v/V)*y(1) + (umax*y(1)*y(2))/(Ks+y(2)) - rd;
        (v/V)*(Cs0-y(2)) - ((1/Ys)*(umax*y(2)*y(1))/(Ks+y(2)))];
Thanks in advance!
Cheers,
dahlai
The first conditional can also be expressed as
y(2) = max(0, y(2)).
As one can see, this is still a continuous function, but with a kink, i.e., a discontinuity in the first derivative. One can also interpret this as a point with curvature radius 0, i.e., infinite curvature.
ode23 uses an order 2 method to integrate, an order 3 method to estimate the error and probably the order 1 Euler step to estimate stiffness.
An integration step over the kink renders all discretization errors order 1 (or 2, depending on the convention), confounding the logic of the step-size control. This forces a rather radical step-size reduction, but since that small step then most probably falls short of the kink, the correct orders are found again, resulting in a step-size increase in the next step, which could again step over the kink, and so on.
The return array only contains successful integration steps, not the failed attempts of the step-size control.
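To see the difference between attempted and accepted steps yourself, you can count the right-hand-side evaluations. A self-contained sketch with a made-up kinked RHS (not the chemostat model):

function demoKinkSteps
    nCalls = 0;                                  % counts every RHS evaluation
    [t, ~] = ode23(@rhs, [0 10], 1);
    fprintf('RHS evaluations: %d, accepted steps: %d\n', nCalls, numel(t));
    function dydt = rhs(~, y)
        nCalls = nCalls + 1;                     % rejected attempts are counted too
        dydt = -max(0, y - 0.5);                 % kink at y = 0.5, like max(0, y(2))
    end
end

The evaluation count will noticeably exceed the number of returned time points, especially near the kink.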

Forcing matlab ODE solvers to use dy/dx = 0 IF dy/dx is negative

I need to numerically integrate the following system of ODEs:
dA/dR = f(R,A,B)
dB/dR = g(R,A,B)
I'm solving the ODEs for an initial-value stability problem. In this problem, the system is initially stable but goes unstable at some radius. However, while stable, I don't want the amplitude to decay away from the starting value (to O(10^-5), for example), as this is non-physical: the system's stability is limited by the background noise amplitude. The amplitude should remain at the starting value of 1 until the system destabilises. Hence, I want to overwrite the derivative estimate with zero whenever it is negative.
I've written some 4th-order Runge-Kutta code that achieves this, but I'd much prefer to simply pass ode45 (or any of the built-in solvers) a parameter that makes it overwrite the derivative whenever it is negative. Is this possible?
A simple, fast, efficient way to implement this is via the max function. For example, if you want to make sure all of your derivatives remain non-negative, in your integration function:
function ydot = f(x,y)
ydot(1) = ...
ydot(2) = ...
...
ydot = max(ydot,0);
Note that this is not the same thing as the output states returned by ode45 remaining non-negative. The above should ensure that your state variables never decay.
Note, however, that this effectively makes your integration function stiff. You might consider using a solver like ode15s instead, or at least confirm that the results are consistent with those from ode45. Alternatively, you could use a continuous sigmoid function instead of the discontinuous, step-like max. This is partly a modeling decision.
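A self-contained toy version of the pattern (the RHS here is made up purely for illustration). Note also odeset's built-in 'NonNegative' option, which keeps states non-negative but has different semantics from clamping the derivative:

% Clamp negative derivatives to zero so the state cannot decay below its start.
rhs = @(t, y) max(-0.2*y + 0.05*t, 0);   % hypothetical RHS: negative early, clamped to 0
[t, y] = ode45(rhs, [0 20], 1);          % y stays at 1 until the unclamped RHS turns positive
% Built-in alternative when you only need y >= 0 (different semantics):
% opts = odeset('NonNegative', 1);
% [t, y] = ode45(@(t, y) -0.2*y + 0.05*t, [0 20], 1, opts);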

Managing oscillation and torsion in the Newton-Raphson method without limiting iterations

I've been trying to write a MATLAB function that calculates a root of a function simply by using Newton-Raphson. The problem with this algorithm is that it diverges near torsion points and oscillates for some functions (e.g., for x^2+2, after 10 iterations with initial guess -1, the method diverges). Is there a satisfying condition for identifying oscillation and torsion that doesn't count iterations in a really inefficient way?
You may be interested in the Matlab File Exchange entry called "Newton Raphson Solver with adaptive Step Size". It implements the Newton-Raphson method to extract the roots of a polynomial.
In particular, this function has a while statement on line 147. Simply replace
while( err > ConvCrit && n < maxIter)
with
while( err > ConvCrit) %removing the maximum iteration criterion
I'd argue that your initial estimate of -1 is poor; hence the estimation error is large, and this is probably what causes the algorithm to overshoot and oscillate (and eventually diverge).
You could consider doing successive over-relaxation by multiplying the Newton step f(xn)/f'(xn) by a positive factor. I recommend you look up methods for adaptive successive over-relaxation (which I won't elaborate on here), which set the relaxation parameter iteratively based on the observed behavior of the converging process.
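A rough sketch of the damped-Newton flavour of this idea, with a divergence flag instead of an iteration cap (my own illustration, not the File Exchange code):

function [x, ok] = dampedNewton(f, fp, x, tol)
% Shrink the Newton step until |f| decreases; flag divergence if no step helps.
ok = true;
while abs(f(x)) > tol
    step = f(x)/fp(x);
    a = 1;                                       % relaxation (damping) factor
    while abs(f(x - a*step)) >= abs(f(x))        % backtrack until the residual drops
        a = a/2;
        if a < 1e-8                              % no productive step exists:
            ok = false;                          % oscillation/divergence detected
            return
        end
    end
    x = x - a*step;
end
end

For example, [~, ok] = dampedNewton(@(x) x.^2 + 2, @(x) 2*x, -1, 1e-10) comes back with ok = false, consistent with the comment below that x^2+2 has no real roots.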
I think you are trying to find the roots of a function that does not have any real roots, so Newton-Raphson is not going to give you a result for any initial guess.