Alternative optimization tool to fmincon - MATLAB

I am currently trying to minimize a function subject to linear inequality and equality constraints, but fmincon (a MATLAB Optimization Toolbox solver) cannot find a feasible solution. I have already tried everything on this list: http://de.mathworks.com/help/optim/ug/when-the-solver-fails.html
Maybe the problem is too large for fmincon: there are ~3300 inequality constraints and 1 equality constraint. The objective is a scalar function of 9 variables: S = sum((x_i - 1)^2).
In addition, I have to solve this problem ~3300 times (once per inequality constraint), so I cannot wait too long for a single minimization.
I do not know whether fmincon is simply not capable of this problem, and I would like to hear suggestions for alternative optimization tools. MATLAB would be perfect (or C/C++), and I cannot afford to purchase any software.
I hope you can help me.

So you want to solve a quadratic problem with ~3300 inequality constraints, and you expect it to be fast. I think the real issue isn't the programming: you'll have to analyze your problem more carefully instead of just using brute force.
If you think there is nothing more to exploit, one idea could be to use heuristics, but then you are not guaranteed an exact solution. Applying heuristics well requires that you understand your problem, so that you can pick an appropriate one.
Another possibility is to figure out which constraints actually matter. Maybe you can identify, say, 10 such constraints, solve the problem with only those, and then add one additional constraint after another, using the previous solution as the initial guess and hoping that the solution does not change abruptly.
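For what it's worth, since the objective sum((x_i - 1)^2) is quadratic and all constraints are linear, this is a quadratic program, and a dedicated QP solver such as MATLAB's quadprog may be a much better fit than the general nonlinear solver fmincon. A minimal sketch, with random placeholder data standing in for the real constraints:
n = 9;
H = 2*eye(n);       % sum((x-1).^2) = 0.5*x'*(2*I)*x - 2*ones(1,n)*x + n
f = -2*ones(n, 1);  % the constant n does not affect the minimizer
% Placeholder data standing in for the poster's ~3300 inequality
% constraints A*x <= b and the single equality constraint Aeq*x = beq:
m = 3300;
A   = randn(m, n);   b   = A*ones(n, 1) + 1;  % constructed so x = ones(n,1) is feasible
Aeq = ones(1, n);    beq = n;                 % hypothetical equality constraint
x = quadprog(H, f, A, b, Aeq, beq);           % returns x close to ones(n,1), quickly
Solving ~3300 such QPs in a loop is then far more tractable than ~3300 fmincon runs.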


Is there a way to initialize the starting point of scipy.optimize.linprog?

I have a sequence of linear programs to solve, where each instance differs only slightly from the previous one in the constraint matrix A, the bounds, and the costs. Intuitively, the solution of the previous problem should help with the next one. How can I go about implementing that?
scipy.optimize.linprog has an option x0:
x0: 1-D array, optional
Starting values of the independent variables, which will be refined by the optimization algorithm. For the revised simplex method, these must correspond with a basic feasible solution.
which appears to do exactly that, but it doesn't seem to work if I simply pass the result of the previous optimization (res.x) as x0. It fails with the following error:
6 : Guess x0 cannot be converted to a basic feasible solution
The error means that res.x from the problem you just solved does not satisfy the constraints of the problem you are now trying to solve.
Why is that? The solution of a linear program always lies at a vertex of the feasible set, i.e. on the boundary of what your constraints allow. If your next problem differs even slightly from the one you just solved, it is highly likely that the previous solution violates the new constraints: it sat on the boundary, and small changes to the problem moved the boundary and left the previous point outside. Without knowing the details of your optimization problem, it is hard to recommend a sensible strategy here. For example, if you know that the point (0,...,0) is always feasible, you could scale all coordinates of res.x down until you get into the feasible set.
It has been a while so I am not sure, but you can try method='interior-point', as it may be more forgiving of an x0 outside the feasible set. Otherwise, Google 'how to find a feasible solution for a linear programming problem'.

Why does fmincon yield different solutions

I am very new to MATLAB, so I am sorry if this is very basic.
I use the function fmincon to minimize a function. Why do I get different solutions on different runs of fmincon?
I would like a convincing mathematical or programming explanation for why fmincon can return different solutions.
Check these limitations from the MATLAB documentation:
fmincon is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.
The function is quite delicate: it only works reliably on problems that satisfy these assumptions to begin with. Any deviation can lead to a local rather than a global minimum, and which one you end up in can depend (among other things) on your initial estimate, i.e. the starting point.
Because fmincon is sensitive to the starting point, a different starting point can produce a different solution on each run. You can find a description of one of fmincon's algorithms here.
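To make this concrete, here is a minimal sketch (a toy one-dimensional objective, not the poster's problem) where fmincon lands in a different local minimum depending purely on the starting point:
f  = @(x) x.^4 - 3*x.^2 + x;  % non-convex, with two local minima
lb = -3; ub = 3;
x_left  = fmincon(f, -2, [],[],[],[], lb, ub)  % converges to the minimum near x = -1.3
x_right = fmincon(f,  2, [],[],[],[], lb, ub)  % converges to the minimum near x = 1.1
Both runs terminate successfully; they simply report different local minima because the objective is not convex.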

Find minimum of nonlinear system of equations with nonlinear equality and inequality constraints in MATLAB

I need to solve the problem described in the title: I have two nonlinear equations in four variables, together with two nonlinear inequality constraints. From what I have found, fmincon is probably the best approach, since it lets me set up everything this situation requires (please let me know otherwise). However, I have some doubts at the implementation stage. Below I present the complete case; I think it is simple enough to show in its real form.
The first thing I did was to define the objective function in a separate file.
function fcns=eqns(x,phi_b,theta_b,l_1,l_2)
fcns=[sin(theta_b)*(x(1)*x(4)-x(2)*x(3))+x(4)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(2)*sqrt(x(3)^2+x(4)^2-l_1^2);
cos(theta_b)*sin(phi_b)*(x(1)*x(4)-x(2)*x(3))+x(3)*sqrt(x(1)^2+x(2)^2-l_2^2)-x(1)*sqrt(x(3)^2+x(4)^2-l_1^2)];
Then the inequality constraints, also in another file.
function [c,ceq]=nlinconst(x,phi_b,theta_b,l_1,l_2)
c=[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
ceq=[];
The next step was to actually run it in a script. Since the objective function requires extra variables, I defined an anonymous function f; on the next line I did the same for the constraint function. After that, it's pretty self-explanatory.
f=@(x)norm(eqns(x,phi_b,theta_b,l_1,l_2));
f_c=@(x)nlinconst(x,phi_b,theta_b,l_1,l_2);
x_0=[15 14 16 18],
LB=0.5*[l_2 l_2 l_1 l_1];
UB=1.5*[l_2 l_2 l_1 l_1];
[res,fval]=fmincon(f,x_0,[],[],[],[],LB,UB,f_c),
The first thing to notice is that I had to wrap my original objective function in norm, otherwise I'd get a "User supplied objective function must return a scalar value." error message. So, is this the best approach, or is there a better way to go about it?
This actually works, but according to my research (one question from Stack Overflow, actually!) you can guide the optimization procedure if you define an equality constraint from the objective function, which makes sense. I did that with the following line in the constraint file:
ceq=eqns(x,phi_b,theta_b,l_1,l_2);
After that, I found out I could use the deal function and define the constraints within the script.
c=@(x)[-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
f_c=@(x)deal(c(x),f(x));
So, which is the best method to do it? Through the constraint file or with this function?
Additionally, I found in MATLAB's documentation that it is suggested in these cases to set:
f=@(x)0;
since the original objective function is already contained in the equality constraints. However, the optimization then obviously doesn't move beyond the initial guess (the cost is already 0 for every point), which makes sense, but it leaves me wondering why this is suggested in the documentation (last section here: http://www.mathworks.com/help/optim/ug/nonlinear-systems-with-constraints.html).
Any input will be valued, and sorry for the long text; I like to go into detail, if you hadn't picked up on that yet... Thank you!
I believe fmincon is well suited to your problem. As with most minimization routines, the objective must be a multivariate scalar function; since you supplied a vector-valued function, fmincon complained.
Is using norm the "best" approach? The short answer is: it depends. By default, norm in MATLAB is the Euclidean (L2) norm, which is the most natural choice for most problems. Sometimes, however, it may be easier to solve a problem (or more physically meaningful) using the L1 norm or the more stringent infinity norm. I defer a thorough discussion of norms to the following superb blog post: https://rorasa.wordpress.com/2012/05/13/l0-norm-l1-norm-l2-norm-l-infinity-norm/
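For illustration, the alternatives just mentioned would look like this, reusing the eqns function and parameters from the question:
f2   = @(x) norm(eqns(x,phi_b,theta_b,l_1,l_2));       % Euclidean (L2) norm, MATLAB's default
f1   = @(x) norm(eqns(x,phi_b,theta_b,l_1,l_2), 1);    % L1 norm: sum of absolute residuals
finf = @(x) norm(eqns(x,phi_b,theta_b,l_1,l_2), Inf);  % infinity norm: largest single residual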
As for why the example on Mathworks is formulated the way it is: they are solving a system of nonlinear equations - not minimizing a function. They first use the standard approach, using fsolve, but then they propose alternate methods of solving the same problem.
One such way is to reformulate the system of nonlinear equations as a minimization problem with an equality constraint. With f=@(x)0 passed to fmincon, the objective is trivially minimized already, so the only thing left to satisfy is the equality constraint, which is exactly the solution to the system of nonlinear equations. Clever indeed.
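Put together, the documentation's reformulation would look roughly like the sketch below, reusing the eqns function from the question (the parameter values are hypothetical):
phi_b = 0.1; theta_b = 0.2; l_1 = 10; l_2 = 12;        % hypothetical values for illustration
f   = @(x) 0;                                          % constant objective: nothing to minimize
c   = @(x) [-x(1)^2-x(2)^2+l_2^2; -x(3)^2-x(4)^2+l_1^2];
f_c = @(x) deal(c(x), eqns(x,phi_b,theta_b,l_1,l_2));  % the equations become ceq
x_0 = [15 14 16 18];
LB  = 0.5*[l_2 l_2 l_1 l_1];
UB  = 1.5*[l_2 l_2 l_1 l_1];
res = fmincon(f, x_0, [],[],[],[], LB, UB, f_c);       % any feasible point solves the system
Note that fmincon does move away from x_0 here: with a constant objective, all of the solver's work goes into satisfying the constraints, and it stops at the first feasible point it finds.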

Why does GlobalSearch return different solutions each run?

When running the GlobalSearch solver on a nonlinear constrained optimization problem I have, I often get very different solutions on each run. For the cases where I have an analytical solution, the numerical results are less dispersed than in the non-analytical cases, but they still differ from run to run. It would be nice to get the same results, at least for these analytical cases, so that I know the optimization routine is working properly. Is there a good explanation of this in the Global Optimization Toolbox User Guide that I missed?
Also, why does GlobalSearch use a different number of local solver runs each run?
Thanks!
A full description of how the GlobalSearch algorithm works can be found here.
In summary, GlobalSearch iteratively performs local optimization. It starts by using fmincon to search for a local minimum near the initial conditions you provide. Then a set of "trial points", chosen based on how good the initial result was, is generated by the "scatter search" algorithm, followed by more local optimization and a rating of how good the minima around these points are.
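For reference, a minimal GlobalSearch call looks like the sketch below (a toy objective is assumed); fixing the random seed with rng is the documented way to make the scatter-search trial points, and therefore the runs, repeatable:
rng default                                    % fix the seed so repeated runs match
fun = @(x) x(1)^4 - 3*x(1)^2 + x(1) + x(2)^2;  % toy non-convex objective
problem = createOptimProblem('fmincon', 'objective', fun, ...
    'x0', [2 1], 'lb', [-3 -3], 'ub', [3 3]);
gs = GlobalSearch;
[x, fval] = run(gs, problem);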
There are a couple of things that can cause the algorithm to give you different answers:
1. Changing the initial conditions you give it
2. The scatter search algorithm itself
The fact that you are getting different answers each time likely means that your function is highly non-convex. The best remedy I know of in this scenario is simply to run the optimization from several different initial conditions and see which result comes back most frequently.
It also looks like there is a 'PlotFcns' property that would let you get a better idea of what the functions the solver generates look like.
You can also use the ga or gamultiobj solvers from the same Global Optimization Toolbox; I would recommend this (a sketch follows below). Local, gradient-based optimizers cannot guarantee the global optimum of a non-convex problem, and even genetic algorithms don't guarantee the solution. But if you run ga and then use its final minimum as the starting point of your fmincon search, you should get the same answer much more consistently. There may be better approaches, but if the search space is unknown you may never know.
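A sketch of that ga-then-fmincon hybrid (toy objective assumed):
rng default                                       % ga is stochastic; fix the seed for repeatability
fun = @(x) x(1)^4 - 3*x(1)^2 + x(1) + x(2)^2;     % toy non-convex objective
lb = [-3 -3]; ub = [3 3];
x_ga  = ga(fun, 2, [],[],[],[], lb, ub);          % global (stochastic) search, 2 variables
x_opt = fmincon(fun, x_ga, [],[],[],[], lb, ub);  % local polish starting from ga's best point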

Alternatives to FMINCON

Are there any faster and more efficient solvers than fmincon? I'm using fmincon for a specific problem, and I run out of memory even for a modestly sized vector variable. I don't have any supercomputers or cloud-computing options at my disposal either. I know that an alternative solver might still run out of memory, but I'm trying to pin down where the problem is.
P.S. I don't want a solution that changes the way I'm approaching the actual problem. I know convex optimization is the way to go, and I have already done enough work to get to this point.
P.P.S. I saw the other question about open-source alternatives; that's not what I'm looking for. I'm looking for more efficient solvers, in case someone faced the same problem and switched to a better one.
Hmmm...
Without further information, I'd guess that fmincon runs out of memory because it needs the Hessian of the objective, which, if your decision variable has on the order of 10^4 elements, is a 10^4-by-10^4 matrix (roughly 800 MB in doubles).
It also takes a lot of time to determine the values of the Hessian, because fmincon normally uses finite differences for that if you don't specify derivatives explicitly.
There's a couple of things you can do to speed things up here.
If you know beforehand that there will be a lot of zeros in your Hessian, you can pass sparsity patterns of the Hessian matrix via HessPattern. This saves a lot of memory and computation time.
If it is fairly easy to come up with explicit formulae for the Hessian of your objective function, create a function that computes the Hessian and pass it on to fmincon via the HessFcn option in optimset.
The same holds for the gradients: the GradConstr option (for your nonlinear constraint functions) and/or the GradObj option (for your objective function) apply here.
There are probably a few options I forgot that could also help. Just go through all the options in the Optimization Toolbox's optimset and see whether any of them apply; a sketch of the gradient and Hessian options follows below.
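Here is a hedged sketch of those options on a toy objective. The option names follow the older optimset interface discussed above; recent releases use optimoptions with SpecifyObjectiveGradient and HessianFcn instead:
function demo_grad_hess
opts = optimset('Algorithm','interior-point', 'GradObj','on', ...
                'Hessian','user-supplied', 'HessFcn',@myhess);
x0 = zeros(1e4, 1);
x = fmincon(@myobj, x0, [],[],[],[], [],[], [], opts);
end

function [f, g] = myobj(x)
f = sum((x - 1).^2);   % toy objective, minimum at x = ones(1e4,1)
if nargout > 1
    g = 2*(x - 1);     % analytic gradient: no finite differences needed
end
end

function H = myhess(x, lambda) %#ok<INUSD>
% Hessian of the Lagrangian; with no nonlinear constraints this is just the
% objective Hessian, which here is constant and sparse.
H = 2*speye(numel(x));
end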
If all this doesn't help, you'll really have to switch optimizers. Given that fmincon is the pride and joy of MATLAB's Optimization Toolbox, there isn't anything much better readily available in MATLAB itself, and you'll have to search elsewhere.
TOMLAB is a very good commercial solution for MATLAB. If you don't mind going to C or C++, there's SNOPT (which is what TOMLAB/SNOPT wraps). And there are a number of things you could try in the GSL, although I haven't seen anything quite as advanced as SNOPT in there.
I don't know which version of MATLAB you have, but I know for a fact that in R2009b (and possibly also later) fmincon has a few real weaknesses for certain types of problems. I know this very well, because I once lost a very prestigious competition (the GTOC) because of it. Our approach turned out to be exactly the same as that of the winners, except that they had access to SNOPT, which made their few-million-variable optimization problem converge in a couple of iterations, whereas fmincon could not be brought to converge at all, whatever we tried (and trust me, WE TRIED). To this day I still don't know exactly why this happens, but I verified it myself when I had access to SNOPT. Once, when I have an infinite amount of time, I'll figure this out and report it to The MathWorks. But until then... I've lost a bit of trust in fmincon :)