Why does fmincon yield different solutions - matlab

I am very new to MATLAB, so I am sorry if this is very basic.
I use a function called fmincon to find a solution that minimizes a function. Why do I get different solutions on different runs of fmincon?
I would like a convincing mathematical or programming explanation for why fmincon can return different solutions.

Check these limitations in the MATLAB documentation.
fmincon is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.
The function is delicate, and it is best avoided if you can. It only works reliably on problems that are neatly defined to begin with. Any deviation can lead to a local instead of a global minimum, and which one you get can depend (among other things) on your initial solution estimate, or starting point.

Because fmincon is sensitive to the initial point, setting a different starting point can give you a different solution on each run. You can find a description of one of fmincon's algorithms here.
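As an illustration (a made-up objective, not from the question): the same call from two different starting points can land in two different local minima.
fun = @(x) sin(3*x) + 0.1*x.^2;                   % a simple non-convex objective with several local minima
x_a = fmincon(fun, -4, [], [], [], [], -10, 10)   % one starting point ...
x_b = fmincon(fun,  4, [], [], [], [], -10, 10)   % ... a different starting point, different minimum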

Related

How to ensure my optimization algorithm has found the solution?

I am performing a numerical optimization where I try to find the parameters of a statistical model that best match certain moments of the data. I have 6 parameters in total to find. I have written a MATLAB function that takes the parameters as input and returns the sum of squared deviations from the empirical moments as output. I use the fminsearch function to find the parameters, and it gives me a solution.
However, I am unsure whether this is really a global minimum. What types of checks could I do to ensure the numerical solution is correct? Plotting the function is challenging due to the high dimensionality. Any general advice on solving this type of problem is also appreciated.
You are describing the difficulties of a global optimization problem.
As mentioned in one of the comments, fminsearch() and the related function fminunc() will return a local minimum. They provide no guarantee that you will get a global minimum.
A simple way to check whether the answer you get really is a global minimum is to run the function multiple times from various starting points. If the answers all converge to the same value, it might be a global minimum. If you find an answer with a lower error value, then the previous answer was not the global minimum.
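As a concrete sketch of this multi-start check (here objFun stands in for your own objective function handle, and the random starting points would need to be scaled to a sensible range for your parameters):
nStarts = 50;
fBest = inf(nStarts, 1);
xBest = zeros(nStarts, 6);
for k = 1:nStarts
    x0 = randn(1, 6);                          % random starting point (scale as needed)
    [xBest(k, :), fBest(k)] = fminsearch(objFun, x0);
end
[fMin, idx] = min(fBest)                       % best value found across all runs
If most runs end at the same value as fMin, that value is a plausible candidate for the global minimum.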
The only way to be perfectly sure that you have the global minimum is to know whether your function is convex (i.e., your function has only a single minimum). This has to be established analytically.
If that cannot be done analytically, there are many global optimization methods you may want to consider, including some available in this MATLAB toolbox.

Matlab - output of the algorithm

I have a program that uses the PSO algorithm with a penalty function for constraint satisfaction. But when I run the program, the output at every iteration is:
"Iteration 1: Best Cost = Inf"
...
Does anyone know why I always get an Inf answer?
There could be many reasons for that, and none of our guesses will be accurate unless you provide an MWE with the code you have already tried, or some context for the function you are analysing.
For instance, while studying the PSO algorithm, you might first use it on functions that have analytical solutions. That way you can study the behaviour of the algorithm before applying it to a harder problem, and fine-tune its parameters.
My guess is that you might not be providing the right function (I have done that before; getting a sign wrong is easy!), or the right constraints (the same logic applies), or that your weights for the penalty function and velocity update are way off.
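On that last point: a common way Best Cost stays Inf is a penalty function that returns Inf for any infeasible particle, so an entirely infeasible initial swarm never produces a finite cost. A finite quadratic penalty avoids this; in the sketch below, baseCost and constraintViolation are hypothetical stand-ins for your own functions:
penaltyWeight = 1e3;                           % tune to the scale of your problem
cost = @(x) baseCost(x) + ...
    penaltyWeight * sum(max(0, constraintViolation(x)).^2);   % finite even when infeasible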

Find minimum of nonlinear system of equations with nonlinear equality and inequality constraints in MATLAB

I need to solve the problem described in the title. The idea is that I have two nonlinear equations in four variables, together with two nonlinear inequality constraints. I have found that the function fmincon is probably the best approach, as it lets you set up everything I require in this situation (please let me know otherwise). However, I have some doubts at the implementation stage. Below I lay out the complete case; I think it is simple enough to be shown in its real form.
The first thing I did was to define the objective function in a separate file.
function fcns=eqns(x,phi_b,theta_b,l_1,l_2)
fcns=[sin(theta_b)*(x(1)*x(4)-x(2)*x(3))+x(4)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(2)*sqrt(x(3)^2+x(4)^2-l_1^2);
cos(theta_b)*sin(phi_b)*(x(1)*x(4)-x(2)*x(3))+x(3)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(1)*sqrt(x(3)^2+x(4)^2-l_1^2)];
Then the inequality constraints, also in another file.
function [c,ceq]=nlinconst(x,phi_b,theta_b,l_1,l_2)
c=[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
ceq=[];
The next step was to actually run it in a script. Below, since the objective function requires extra variables, I defined an anonymous function f. In the next line, I did the same for the constraint (another anonymous function). After that, it's pretty self-explanatory.
f=@(x)norm(eqns(x,phi_b,theta_b,l_1,l_2));
f_c=@(x)nlinconst(x,phi_b,theta_b,l_1,l_2);
x_0=[15 14 16 18],
LB=0.5*[l_2 l_2 l_1 l_1];
UB=1.5*[l_2 l_2 l_1 l_1];
[res,fval]=fmincon(f,x_0,[],[],[],[],LB,UB,f_c),
The first thing to notice is that I had to transform my original objective function by the use of norm, otherwise I'd get a "User supplied objective function must return a scalar value." error message. So, is this the best approach or is there a better way to go around this?
This actually works, but according to my research (one question from stackoverflow actually!) you can guide the optimization procedure by defining an equality constraint from the objective function, which makes sense. I did that with the following code in the constraint file:
ceq=eqns(x,phi_b,theta_b,l_1,l_2);
After that, I found out I could use the deal function and define the constraints within the script.
c=@(x)[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
f_c=@(x)deal(c(x),f(x));
So, which is the best method to do it? Through the constraint file or with this function?
Additionally, I found in MATLAB's documentation that it is suggested in these cases to set:
f=@(x)0;
This is because the original objective function is already contained in the equality constraints. However, the optimization then doesn't go beyond the initial guess (the cost value is already 0 for every solution), which makes sense but leaves me wondering why it is suggested in the documentation (last section here: http://www.mathworks.com/help/optim/ug/nonlinear-systems-with-constraints.html).
Any input will be appreciated, and sorry for the long text; I like to go into detail, if you didn't pick up on that yet... Thank you!
I believe fmincon is well suited to your problem. Naturally, as with most minimization problems, the objective function must be a multivariate scalar function; since you supplied a vector-valued function, fmincon complained.
Is using the norm the "best" approach? The short answer is: it depends. The reason I say this is that norm in MATLAB is, by default, the Euclidean (or L2) norm, which is the most natural choice for most problems. Sometimes, however, it may be easier to solve a problem (or more physically meaningful) to use the L1 norm or the more stringent infinity norm. I defer a thorough discussion of norms to the following superb blog post: https://rorasa.wordpress.com/2012/05/13/l0-norm-l1-norm-l2-norm-l-infinity-norm/
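For reference, the three norms mentioned are easy to compare directly in MATLAB:
v = [3; -4];
norm(v)        % L2 (Euclidean) norm, the default: sqrt(3^2 + 4^2) = 5
norm(v, 1)     % L1 norm: |3| + |-4| = 7
norm(v, Inf)   % infinity norm: max(|3|, |-4|) = 4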
As for why the example on MathWorks is formulated the way it is: they are solving a system of nonlinear equations, not minimizing a function. They first use the standard approach, fsolve, but then propose alternative ways of solving the same problem.
One such way is to reformulate solving the nonlinear equations as a minimization problem with an equality constraint. By using f=@(x)0 with fmincon, the objective function f is naturally already minimized, and the only thing that has to be satisfied is the equality constraint, whose solution is the solution to the system of nonlinear equations. Clever indeed.
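Putting the pieces from the question together, the reformulated solve would look something like this sketch (the parameter values are placeholders, not values from the question):
phi_b = 0.1; theta_b = 0.2; l_1 = 10; l_2 = 10;  % placeholder data
f = @(x) 0;                                      % constant objective: nothing to minimize
f_c = @(x) deal([-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2], ... % c: the inequalities
                eqns(x, phi_b, theta_b, l_1, l_2));               % ceq: the equations
x_0 = [15 14 16 18];
LB = 0.5*[l_2 l_2 l_1 l_1];
UB = 1.5*[l_2 l_2 l_1 l_1];
[res, fval] = fmincon(f, x_0, [], [], [], [], LB, UB, f_c)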

Why does GlobalSearch return different solutions each run?

When running the GlobalSearch solver on a nonlinear constrained optimization problem, I often get very different solutions on each run. For the cases where I have an analytical solution, the numerical results are less dispersed than in the non-analytical cases, but they still differ on each run. It would be nice to get the same results, at least for the analytical cases, so that I know the optimization routine is working properly. Is there a good explanation of this in the Global Optimization Toolbox User Guide that I missed?
Also, why does GlobalSearch use a different number of local solver runs each run?
Thanks!
A full description of how the GlobalSearch algorithm works can be found here.
In summary, the GlobalSearch method iteratively performs local optimization. It starts by using fmincon to search for a local minimum near the initial conditions you have provided. Then a set of "trial points", chosen based on how good the initial result was, is generated using the "scatter search" algorithm, followed by more local optimization and a rating of how good the minima around those points are.
There are a couple of things that can cause the algorithm to give you different answers:
1. Changing the initial conditions you give it
2. The scatter search algorithm itself
The fact that you are getting different answers each time likely means that your function is highly non-convex. The best thing that I know of that you can do in this scenario is just to try the optimization algorithm at several different initial conditions and see what result you get back the most frequently.
It also looks like there is a 'PlotFcns' option that would allow you to get a better idea of what the functions the solver is generating look like.
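One practical note on reproducibility: the scatter-search trial points come from MATLAB's random number generator, so seeding it before each run should make runs repeatable. A sketch, where the problem structure is built from your own objective and constraints:
rng default                                      % fixed seed: same trial points each run
gs = GlobalSearch;
problem = createOptimProblem('fmincon', 'objective', fun, ...
    'x0', x0, 'lb', LB, 'ub', UB);               % fun, x0, LB, UB: your problem data
[x, fval] = run(gs, problem);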
You can also use the ga or gamultiobj solvers from the same Global Optimization Toolbox; I would recommend this. A local solver can't guarantee a global solution to a non-convex problem, and even genetic algorithms don't guarantee one. But if you run ga and then use its final minimum as the start of your fmincon search, it should give the same answer consistently. There may be better approaches, but if the search space is unknown you may never know.
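A sketch of that hand-off, with fun, nvars, and nonlcon standing in for your own problem definition:
xga = ga(fun, nvars, [], [], [], [], LB, UB, nonlcon);    % broad stochastic search
x = fmincon(fun, xga, [], [], [], [], LB, UB, nonlcon);   % local refinement from ga's result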

Alternatives to FMINCON

Are there any faster, more efficient solvers than fmincon? I'm using fmincon for a specific problem, and I run out of memory even for a modest-sized vector variable. I don't have any supercomputers or cloud computing options at my disposal, either. I know that any alternative solver may still run out of memory, but I'm just trying to see where the problem is.
P.S. I don't want a solution that would change the way I'm approaching the actual problem. I know convex optimization is the way to go and I have already done enough work to get up until here.
P.P.S. I saw the other question about open-source alternatives. That's not what I'm looking for. I'm looking for more efficient solvers, in case someone has faced the same problem and switched to a better one.
Hmmm...
Without further information, I'd guess that fmincon runs out of memory because it needs the full Hessian, which, for a decision vector with 10^4 elements, is a 10^4-by-10^4 matrix.
It also takes a lot of time to compute the Hessian, because fmincon normally uses finite differences for that if you don't supply derivatives explicitly.
There's a couple of things you can do to speed things up here.
If you know beforehand that there will be a lot of zeros in your Hessian, you can pass the sparsity pattern of the Hessian matrix via the HessPattern option. This saves a lot of memory and computation time.
If it is fairly easy to come up with an explicit formula for the Hessian of your objective function, create a function that computes it and pass it to fmincon via the HessFcn option in optimset.
The same holds for the gradients: the GradConstr option (for your nonlinear constraint functions) and/or the GradObj option (for your objective function) apply here, as sketched below.
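Wiring those options together might look like this sketch; myObj and myCon are hypothetical functions that return gradients as extra outputs when the flags are on, and Hsparsity is a sparse 0/1 matrix marking the Hessian's nonzero entries:
options = optimset('GradObj', 'on', ...        % myObj returns [f, gradf]
    'GradConstr', 'on', ...                    % myCon returns [c, ceq, gradc, gradceq]
    'HessPattern', Hsparsity);                 % sparsity pattern of the Hessian
[x, fval] = fmincon(myObj, x0, [], [], [], [], LB, UB, myCon, options);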
There are probably a few options I forgot here that could also help you. Just go through all the options in the Optimization Toolbox's optimset and see whether they could help.
If all this doesn't help, you'll really have to switch optimizers. Given that fmincon is the pride and joy of MATLAB's optimization toolbox, there really isn't anything much better readily available, and you'll have to search elsewhere.
TOMLAB is a very good commercial solution for MATLAB. If you don't mind going to C or C++, there's SNOPT (which is what TOMLAB/SNOPT is based on), and there are a bunch of things you could try in the GSL (although I haven't seen anything quite as advanced as SNOPT in there...).
I don't know which version of MATLAB you have, but I know for a fact that in R2009b (and possibly also later), fmincon has a few real weaknesses for certain types of problems. I know this very well, because I once lost a very prestigious competition (the GTOC) because of it. Our approach turned out to be exactly the same as that of the winners, except that they had access to SNOPT, which made their few-million-variable optimization problem converge in a couple of iterations, whereas fmincon could not be brought to converge at all, whatever we tried (and trust me, WE TRIED). To this day I still don't know exactly why this happens, but I verified it myself when I had access to SNOPT. Once, when I have an infinite amount of time, I'll find out and report it to the MathWorks. But until then... I've lost a bit of trust in fmincon :)