How to solve non-linear constrained optimization in Matlab? - matlab

I have to solve a nonlinear constrained optimization problem in MATLAB, and I am not familiar with its commands.
The problem is:
minimize E(b,c)
subject to k1 < c*b^0.5 < k2 and c/6 > k3
Note: E(b,c) is a nonlinear function. I would also like to know how to solve this easier variant:
minimize E(b,c)
subject to c*b^0.5 = k2 and c/6 > k3
For simplicity, imagine E(b,c) = b^2 + sqrt(c) + c with k1 = 8, k2 = 12, k3 = 5.
I must use a MATLAB m-file. Please suggest what I should do!
I would like to plot E(b,c) subject to the given constraints and find the (b,c) pairs, if that is possible.
I am not sure whether I really need the Optimization Toolbox.
Please give me a short MATLAB script if possible.
Thanks in advance.

Your problem is a nonlinear constrained optimization. Check whether your objective function is convex or not. Save your objective function and constraints in an m-file. Then use the Optimization Toolbox and select the solver that best suits your problem. [Please refer to http://in.mathworks.com/help/optim/ug/choosing-a-solver.html?refresh=true for choosing the right solver and the right algorithm.]
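For the toy example you give (E(b,c) = b^2 + sqrt(c) + c, k1 = 8, k2 = 12, k3 = 5), a minimal sketch with fmincon could look like the following, assuming x(1) = b and x(2) = c and treating the strict inequalities as non-strict, which is how fmincon handles them:
% --- E_obj.m : the objective E(b,c), with x(1) = b and x(2) = c ---
function E = E_obj(x)
E = x(1)^2 + sqrt(x(2)) + x(2);
% --- E_con.m : nonlinear constraints in fmincon form, c(x) <= 0 and ceq(x) = 0 ---
function [c, ceq] = E_con(x)
k1 = 8; k2 = 12; k3 = 5;
c   = [k1 - x(2)*sqrt(x(1));   % k1 <= c*sqrt(b)
       x(2)*sqrt(x(1)) - k2;   % c*sqrt(b) <= k2
       k3 - x(2)/6];           % c/6 >= k3
ceq = [];                      % for the equality variant use ceq = x(2)*sqrt(x(1)) - k2
% --- driver script ---
x0 = [0.1; 31];                % a feasible starting guess
lb = [0; 0];                   % keep the square roots real
[xopt, Eopt] = fmincon(@E_obj, x0, [], [], [], [], lb, [], @E_con)
For the plot of the feasible (b,c) pairs you could evaluate E on a meshgrid of b and c, set the infeasible points to NaN, and use surf or contour; no toolbox is needed for that part, but fmincon itself does require the Optimization Toolbox.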

Related

Why does fmincon yield different solutions

I am very new to MATLAB, so I am sorry if this is very basic.
I use a function called fmincon to find a solution that minimizes a function. Why do I get different solutions on different runs of fmincon?
I would like a satisfying or convincing mathematical or programming explanation for getting different solutions from fmincon.
Check these limitations in the MATLAB documentation.
fmincon is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.
The function is very delicate and it is best if you can avoid it. It only works neatly on problems that are neatly defined to begin with. Any deviation can lead to local instead of global minima, and these can depend (among other things) on your initial solution estimate or starting point.
Because fmincon is sensitive to the initial point, setting a different starting point can give a different solution on each run. You can find one of the algorithms used by fmincon here.
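As a small hypothetical illustration (not from the question): the objective below has two local minima, and fmincon typically returns whichever one lies nearest the chosen starting point.
% Hypothetical 1-D example of starting-point sensitivity: this function has
% local minima near x = -1.30 (the global one) and x = 1.13 (a local one).
f = @(x) x.^4 - 3*x.^2 + x;
opts = optimoptions('fmincon', 'Display', 'off');
xa = fmincon(f, -2, [], [], [], [], -3, 3, [], opts)   % converges near -1.30
xb = fmincon(f,  2, [], [], [], [], -3, 3, [], opts)   % converges near  1.13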

Alternative optimization Tool to fmincon

I am currently trying to minimize a function with linear inequality and equality constraints. The problem is that fmincon (the MATLAB tool) cannot find a feasible solution. I have already tried everything on this list: http://de.mathworks.com/help/optim/ug/when-the-solver-fails.html
Maybe the problem is too large for fmincon. I have to solve it with ~3300 inequality constraints and 1 equality constraint. The objective is a scalar function of 9 variables: S = sum((X_i - 1)^2).
In addition, I have to solve this problem ~3300 times (the number of inequality constraints), so I cannot wait too long for a single minimization.
I do not know whether fmincon is capable of handling this problem, and I would like to hear suggestions for alternative optimization tools. MATLAB would be perfect (or C/C++), and I cannot afford to purchase any software.
I hope you can help me.
So you want to solve a quadratic problem with ~3300 inequality constraints and you expect it to be fast. I think the real problem isn't the programming; you will have to do more analysis of your problem rather than just using brute force.
If you think there is nothing more to exploit, one idea could be to use heuristics, but then you are not sure to get the exact solution. Using heuristics requires that you know your problem well enough to apply the right one.
Another possibility is to try to figure out which constraints are really the ones that matter. Maybe you can identify 10 such constraints, solve the problem with those, and then apply the remaining constraints one after another, with the previous solution as the initial guess, hoping that the solution does not change suddenly. A rough sketch of that idea follows.
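This sketch assumes the linear constraints are already assembled as A*x <= b and Aeq*x = beq, and that x0 is some starting point; these names are placeholders, not from the question.
S = @(x) sum((x - 1).^2);                    % the scalar objective from the question
active = 1:10;                               % an initial guess at the rows that matter
x = fmincon(S, x0, A(active,:), b(active), Aeq, beq);
for k = setdiff(1:size(A,1), active)
    active = [active, k];                    % add one more inequality row
    if A(k,:)*x > b(k)                       % only re-solve if the new row is violated
        x = fmincon(S, x, A(active,:), b(active), Aeq, beq);   % warm start at the previous x
    end
end
Whether this beats a single large solve depends on how many of the ~3300 rows actually end up binding.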

Fit data with a numerical ode solution (on Matlab)

I am looking for a way to fit my experimental data with a theoretical model that is described by a nonlinear differential equation.
Unfortunately, the latter can only be solved numerically (it is a second-order, nonlinear differential equation).
I manage to solve the differential equation for a given set of parameters using the ode45 MATLAB solver, but now I want to find the proper fit parameters of the model. I should also mention that my ode45 integration starts at z = zmax (zmax being large, so I can treat it as infinity) with y(zmax) = y0 and yprime(zmax) = yprime0, and I solve backward (from zmax to z = 0).
I am quite new to this kind of numerical problem; are there classical ways to solve such problems?
Does anyone know of a MATLAB procedure that would help me solve this? On which principles is it based/constructed? (If possible, I'd like to know the theoretical trick for solving this problem in a smart way, not by trying all possible sets of parameters, which would be very time consuming: I have 5 fit parameters!)
Thank you for your precious help!
There are fancy methods in the Optimization Toolbox. In case you don't have access to it, you could do it manually by:
Selecting a cost function between the experimental and model data, for example the mean squared error.
Doing a heuristic optimization of that cost function, for example with the Nelder-Mead method.
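A rough sketch of that manual route, assuming the measurements are stored in z_data and y_data, the model's right-hand side is odefun(z, y, p) (a placeholder name), and p holds the 5 fit parameters; the cost is the mean squared error between the backward ode45 solution and the data, minimized with fminsearch (MATLAB's Nelder-Mead implementation, which needs no toolbox):
p_fit = fminsearch(@(p) msecost(p, z_data, y_data, zmax, y0, yprime0), p_guess);
function mse = msecost(p, z_data, y_data, zmax, y0, yprime0)
    % integrate backward from zmax to 0, as in the question
    sol = ode45(@(z, y) odefun(z, y, p), [zmax 0], [y0; yprime0]);
    y_model = deval(sol, z_data);                 % model values at the measured z
    mse = mean((y_model(1, :) - y_data(:).').^2); % compare the first component to the data
end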

Find minimum of nonlinear system of equations with nonlinear equality and inequality constraints in MATLAB

I need to solve the problem described in the title. The idea is that I have two nonlinear equations in four variables, together with two nonlinear inequality constraints. I have found that fmincon is probably the best approach, as it lets me set up everything I require in this situation (please let me know otherwise). However, I'm having some doubts at the implementation stage. Below I present the complete case; I think it's simple enough to show in its real form.
The first thing I did was to define the objective function in a separate file.
function fcns=eqns(x,phi_b,theta_b,l_1,l_2)
fcns=[sin(theta_b)*(x(1)*x(4)-x(2)*x(3))+x(4)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(2)*sqrt(x(3)^2+x(4)^2-l_1^2);
cos(theta_b)*sin(phi_b)*(x(1)*x(4)-x(2)*x(3))+x(3)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(1)*sqrt(x(3)^2+x(4)^2-l_1^2)];
Then the inequality constraints, also in another file.
function [c,ceq]=nlinconst(x,phi_b,theta_b,l_1,l_2)
c=[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
ceq=[];
The next step was to actually run it in a script. Below, since the objective function requires extra variables, I defined an anonymous function f. In the next line, I did the same for the constraint (anonymous function). After that, it's pretty self explanatory.
f=@(x)norm(eqns(x,phi_b,theta_b,l_1,l_2));
f_c=@(x)nlinconst(x,phi_b,theta_b,l_1,l_2);
x_0=[15 14 16 18],
LB=0.5*[l_2 l_2 l_1 l_1];
UB=1.5*[l_2 l_2 l_1 l_1];
[res,fval]=fmincon(f,x_0,[],[],[],[],LB,UB,f_c),
The first thing to notice is that I had to transform my original objective function by using norm, otherwise I'd get a "User supplied objective function must return a scalar value." error message. So, is this the best approach, or is there a better way to go about it?
This actually works, but according to my research (one question from Stack Overflow, actually!) you can guide the optimization procedure if you define an equality constraint from the objective function, which makes sense. I did that with the following line in the constraint file:
ceq=eqns(x,phi_b,theta_b,l_1,l_2);
After that, I found out I could use the deal function and define the constraints within the script.
c=@(x)[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
f_c=@(x)deal(c(x),f(x));
So, which is the best method to do it? Through the constraint file or with this function?
Additionally, I found in MATLAB's documentation that it is suggested in these cases to set:
f=@(x)0;
This is because the original objective function is already contained in the equality constraints. However, the optimization then obviously doesn't go beyond the initial guess (the cost value is already 0 for every point), which makes sense but leaves me wondering why it is suggested in the documentation (last section here: http://www.mathworks.com/help/optim/ug/nonlinear-systems-with-constraints.html).
Any input will be valued, and sorry for the long text, I like to go into detail if you didn't pick up on it yet... Thank you!
I believe fmincon is well suited for your problem. Naturally, as with most minimization problems, the objective function is a multivariate scalar function. Since you are dealing with a vector function, fmincon complained about that.
Is using the norm the "best" approach? The short answer is: it depends. The reason I say this is that norm in MATLAB is, by default, the Euclidean (or L2) norm, which is the most natural choice for most problems. Sometimes, however, it may be easier (or more physically meaningful) to solve a problem using the L1 norm or the more stringent infinity norm, as in the snippet below. I defer a thorough discussion of norms to the following superb blog post: https://rorasa.wordpress.com/2012/05/13/l0-norm-l1-norm-l2-norm-l-infinity-norm/
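For reference, all of these are one-liners on the residual vector:
r = eqns(x, phi_b, theta_b, l_1, l_2);   % residual of the two equations
n2   = norm(r);                          % Euclidean (L2) norm, MATLAB's default
n1   = norm(r, 1);                       % L1 norm: sum of absolute residuals
ninf = norm(r, Inf);                     % infinity norm: the worst single residual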
As for why the example on Mathworks is formulated the way it is: they are solving a system of nonlinear equations - not minimizing a function. They first use the standard approach, using fsolve, but then they propose alternate methods of solving the same problem.
One such way is to reformulate solving the nonlinear equations as a minimization problem with an equality constraint. By using f=@(x)0 with fmincon, the objective function f is naturally already minimized, and the only thing that has to be satisfied in this case is the equality constraint - which would be the solution to the system of nonlinear equations. Clever indeed.
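Putting the pieces from the question together, that reformulation could look like this (a sketch reusing the names already defined in the thread):
f0  = @(x) 0;                                                       % constant objective
c   = @(x) [-x(1)^2 - x(2)^2 + l_2^2; -x(3)^2 - x(4)^2 + l_1^2];    % the inequalities
ceq = @(x) eqns(x, phi_b, theta_b, l_1, l_2);                       % the equations themselves
f_c = @(x) deal(c(x), ceq(x));
[res, fval] = fmincon(f0, x_0, [], [], [], [], LB, UB, f_c);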

How can I implicitly solve a single equation in Matlab?

The following equation is to be solved for M by MATLAB:
(Atemp/At)^2=1/M^2*((2/(gamma+1))*(1+(gamma-1)*M^2/2))^((gamma+1)/(gamma-1))
It is not possible to solve this equation symbolically. In Maple it is easily possible to solve such an equation implicitly; is there also a pre-made function in MATLAB that does this for me? I could program one myself, but since my skills are limited, its performance would not meet my needs.
I would try using fzero, or if that encounters problems because of complex values/infinities, fminbnd.
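A minimal fzero sketch for the equation above, assuming Atemp, At and gamma are already defined; the equation is rewritten as residual(M) = 0, and since the area-Mach relation has both a subsonic and a supersonic root for a given area ratio, the initial guess selects the branch:
residual = @(M) (Atemp/At)^2 - 1./M.^2 .* ...
    ((2/(gamma+1)) * (1 + (gamma-1)*M.^2/2)).^((gamma+1)/(gamma-1));
M_sub = fzero(residual, 0.5);   % subsonic branch: start below M = 1
M_sup = fzero(residual, 2.0);   % supersonic branch: start above M = 1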