MATLAB optimization - Variable bounds have a "donut hole"

I'm trying to solve a problem using MATLAB's genetic algorithm and fmincon functions, where the variables' values do not have single upper and lower bounds. Instead, each variable should be allowed to take the value x = 0 or satisfy lb <= x <= ub. This is a turbine allocation problem, where each turbine can either be turned off (x = 0) or operate within the lower and upper cavitation limits (lb and ub). Of course, I can trick the problem by creating a constraint that is violated for values between 0 and lb, but I'm finding that the solver has a hard time converging with this formulation. Is there an easier way to do this that will trim down the search space?

If the number of variables is small enough (say, 10 or 15 or fewer), you can try every subset of variables that are allowed to be non-zero and see which subset gives you the optimal value. If you can't make assumptions about the structure of your optimization problem (e.g. you have penalties for non-zero variables but your main objective function is "exotic"), this is essentially the best you can do; a sketch of the enumeration is below.
If you are willing to settle for an approximate solution, you can add a so-called "L1" penalty to your objective function: a constant times the sum of the absolute values of the variables. This encourages some variables to be exactly zero, and if your main objective function is convex, the penalized objective is still convex, because the absolute value function is convex and a sum of convex functions is convex. Convex functions are much easier to minimize, since any local minimum is also a global minimum, which you can reach with any number of optimization routines (including the ones implemented in MATLAB).
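For concreteness, here is a minimal sketch of the subset enumeration, assuming a generic objective myObj (a hypothetical placeholder for your turbine model) and a small number of turbines; each subset fixes some turbines at exactly x = 0 and runs fmincon on the remaining ones within [lb, ub]:

    % Sketch: enumerate every on/off subset; myObj is a hypothetical objective.
    n  = 4;                                 % number of turbines (small by assumption)
    lb = [1 1 1 1];  ub = [4 4 4 4];        % cavitation limits (example values)
    bestVal = inf;  bestX = zeros(1,n);
    for mask = 0:2^n-1
        on = bitget(mask, 1:n) == 1;        % which turbines are running
        if ~any(on)
            x   = zeros(1,n);               % all turbines off
            val = myObj(x);
        else
            obj = @(z) myObj(embed(z, on, n));
            x0  = (lb(on) + ub(on))/2;      % start mid-range
            [z, val] = fmincon(obj, x0, [],[],[],[], lb(on), ub(on));
            x = embed(z, on, n);
        end
        if val < bestVal, bestVal = val; bestX = x; end
    end

    function x = embed(z, on, n)
    % Place the optimized "on" values into a full vector, zeros elsewhere.
    x = zeros(1,n);  x(on) = z;
    end

This costs 2^n fmincon runs, which is why it is only sensible for a small number of turbines.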

Related

Forcing fmincon to consider candidate solutions only from a specific set

Is it possible to force MATLAB's fmincon to generate candidate points from a specific set of points that I supply? I ask because if this were possible, it would greatly reduce the computational complexity and knock a couple of constraints out of the optimization.
Edited to add more info: Essentially, I'm looking for a set of points in a high-dimensional space which satisfy a particular function (belonging to that space), along with a couple of other constraints. The objective function minimizes the total length of the path formed by these points. Think of a generalized geodesic problem, attempted via non-linear optimization. While I manage to get near a solution using fmincon, it is painfully slow and prone to getting stuck in local minima. I already have a sizeable set of data points which satisfy this high-dimensional function, and there is no requirement that the new set of points lie outside this pre-existing set.

Solving non-convex optimization with global optimization algorithm using MATLAB

I have a simple unconstrained non-convex optimization problem. Since problems of this type have multiple local minima, I am looking for a global optimization algorithm that yields a unique/global minimum. On the internet I came across global optimization algorithms like genetic algorithms, simulated annealing, etc., but for solving a simple one-variable unconstrained non-convex optimization problem, using such heavyweight algorithms doesn't seem like a good idea. Could anyone recommend a simple global algorithm for solving such a simple one-variable unconstrained non-convex optimization problem? I would highly appreciate ideas on this.
"Since problems of these type have multiple local minima". It's not true, the real situation is the following:
Maybe you have one local minimum
Maybe you have infinite set of local miminums
Maybe you have finite number of local minimums
Maybe minimum is not attained
Maybe problem is unbounded below
Also, the big picture is that there are methods which truly solve such problems (numerically, and they are slow), but it is common slang to also say a method "solves" a problem when it does not necessarily find the minimum value of the function.
In fact, M^n has the same cardinality as M for any finite n and any infinite set M, so the fact that your problem is one-dimensional means nothing: from a theoretical point of view, it is as hard as a problem with 1,000,000 parameters drawn from the same set M.
If you are interested in solving the problem approximately with known precision epsilon over a domain, split the domain into 1/epsilon regions, sample (evaluate) the function at the midpoint of each region, and select the minimum; a sketch of this is below.
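A sketch of that grid search on an interval [a, b], with an example non-convex function standing in for the real objective:

    % Evaluate f at the midpoint of each epsilon-wide cell and keep the smallest.
    f = @(x) sin(3*x) + 0.5*x.^2;      % example non-convex function (assumption)
    a = -3;  b = 3;  eps0 = 1e-3;      % search interval and precision
    mids = (a + eps0/2) : eps0 : b;    % midpoints of the cells
    [fBest, k] = min(f(mids));
    xBest = mids(k);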
The method I will describe below is a precise method. Other methods - particle estimation, sequential convex programming, alternating direction methods, particle swarm, the Nelder-Mead simplex method, multistart gradient/subgradient descent, or any descent algorithm like Newton's method or coordinate descent - have no guarantees for non-convex problems, and some of them cannot even be applied if the function is non-convex.
If you are interested in truly solving the problem to some precision on the function value, then pay attention to the method called branch-and-bound, which truly finds the minimum. I don't think the algorithms you described solve the problem and find the minimum in this strong sense:
The basic idea of branch-and-bound is to partition the domain into convex sets and improve the lower/upper bounds; in your case, the sets are intervals.
You need a routine to find an upper bound on the optimal (min.) value: you can do this e.g. by sampling the subdomain and taking the smallest value, or by running a local optimization method from a random starting point.
But you also need a lower bound on the optimal (min.) value obtained from some principle, and this is the hard part:
convex relaxation of integer variables to make them real variables
use the Lagrange dual function
use a Lipschitz constant of the function, etc.
This is the sophisticated step.
If these two bounds are close, we're done; otherwise, partition or refine the partition.
Get the lower and upper bounds of the child subproblems, then take the minimum of the children's upper bounds and the minimum of their lower bounds. If a child returns a worse lower bound than its parent, it can be upgraded to the parent's bound. A minimal sketch of this loop for a one-dimensional problem is below.
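Here is a minimal sketch of that loop for a one-dimensional problem, using a Lipschitz-constant lower bound (the third option above); the objective and the constant L are assumptions, and a real implementation would bound L properly:

    % 1-D branch-and-bound. On [a,b] with midpoint m, a Lipschitz constant L
    % gives the lower bound f(x) >= f(m) - L*(b-a)/2.
    f   = @(x) sin(3*x) + 0.5*x.^2;     % example objective (assumption)
    L   = 4;                            % assumed Lipschitz constant on [-3,3]
    tol = 1e-4;
    stack = [-3, 3, -inf];              % rows: [a, b, inherited lower bound]
    U = inf;  xBest = NaN;              % best upper bound found so far
    while ~isempty(stack)
        iv = stack(end,:);  stack(end,:) = [];
        if iv(3) > U - tol, continue; end       % prune: cannot beat the best
        a = iv(1);  b = iv(2);
        m = (a + b)/2;  fm = f(m);
        if fm < U, U = fm; xBest = m; end       % midpoint refines upper bound
        lo = fm - L*(b - a)/2;                  % Lipschitz lower bound on [a,b]
        if lo < U - tol                         % gap still too large: branch
            stack = [stack; a, m, lo; m, b, lo];
        end
    end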
References:
For a more thorough explanation, please look into:
EE364B, Lecture 18, Prof. Stephen Boyd, Stanford University. It's available on YouTube and on iTunes U. If you are new to this area, I recommend the EE263, EE364A, and EE364B courses by Stephen P. Boyd. You will love them.
Since this is a one dimensional problem, things are easier.
A simple steepest descent procedure may be used as follows.
Suppose the interval of search is a<x<b.
Start the steepest descent from a, minimizing your function, say f(x). You recover the first minimum x_m1. You should use a fine step, not too large.
Shift this point by adding a small positive constant, x_m1 + ε. Then maximize f (or minimize -f) starting from this point. You get a maximum of f; perturb it by ε and start a minimization from there, and so on and so forth, until you pass b. The smallest of the collected minima is your answer; a rough sketch is below.
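A rough sketch of this scan, using fminsearch for the 1-D descents and ascents; the objective is an assumption, and the loop cap guards against the iterates failing to advance:

    % Alternate descents and ascents across (a, b), collecting local minima.
    f = @(x) sin(3*x) + 0.5*x.^2;      % example objective (assumption)
    a = -3;  b = 3;  epsStep = 1e-2;
    x = a;  minima = [];
    for k = 1:50                        % safety cap on the number of sweeps
        xm = fminsearch(f, x);          % descend to a local minimum
        if xm > b, break; end
        minima(end+1) = xm;             %#ok<AGROW>
        xM = fminsearch(@(t) -f(t), xm + epsStep);  % climb to the next maximum
        if xM <= xm, break; end         % no forward progress: stop
        x = xM + epsStep;               % perturb past the maximum and repeat
    end
    [fBest, iBest] = min(f(minima));
    xBest = minima(iBest);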

Tolerances in Numerical quadrature - MATLAB

What is the difference between AbsTol and RelTol in MATLAB when performing numerical quadrature?
I have a triple integral that is supposed to yield a number between 0 and 1, and I am wondering what the best tolerance for my application would be.
Any other ideas on decreasing the execution time of integral3?
Also, does anyone know whether integral3 or quadgk is faster?
When performing the integration, MATLAB (or most any other integration software) computes a low-order solution qLow and a high-order solution qHigh.
There are a number of different methods of computing the true error (i.e., how far either qLow or qHigh is from the actual solution qTrue), but MATLAB simply computes an absolute error as the difference between the high and low order integral solutions:
errAbs = abs(qLow - qHigh).
If the integral is truly a large value, that difference may be large in an absolute sense but not in a relative sense. For example, errAbs might be 1E3 while qTrue is 1E12; in that case, the method could be said to converge relatively, since at least 8 digits of accuracy have been reached.
So MATLAB also considers the relative error:
errRel = abs(qLow - qHigh)/abs(qHigh).
You'll notice I'm treating qHigh as qTrue since it is our best estimate.
Over a given sub-region, if the error estimate falls below either the absolute limit or the relative limit times the current integral estimate, the integral is considered converged. If not, the region is divided, and the calculation repeated.
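In MATLAB terms the acceptance test amounts to the following (variable names are illustrative, not MATLAB internals):

    errAbs    = abs(qLow - qHigh);
    converged = errAbs <= max(absTol, relTol * abs(qHigh));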
For the integral function and for integral2/integral3 with the iterated method, the low-high solutions are a Gauss-Kronrod 7-15 pair (the same 7th-order/15th-order pair used by quadgk).
For the integral2/integral3 functions with the tiled method, the low-high solutions are a Gauss-Kronrod 3-7 pair (I've never used this option, so I'm not sure how it compares to others).
Since all of these methods come down to a Gauss-Kronrod quadrature rule, I'd say sticking with integral3 and letting it do the adaptive refinement as needed is the best course.
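As a usage sketch, both tolerances can be passed to integral3 directly; the integrand here is just an example, and since your result is known to lie in [0, 1], AbsTol and RelTol act on a similar scale:

    % Tighten both tolerances for a triple integral over the unit cube.
    f = @(x,y,z) exp(-(x.^2 + y.^2 + z.^2));   % example integrand (assumption)
    q = integral3(f, 0, 1, 0, 1, 0, 1, 'AbsTol', 1e-9, 'RelTol', 1e-6);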

Is there an fmincon algorithm that always satisfies linear constraints?

I'm trying to perform a linearly constrained optimization in MATLAB with a fairly complicated objective function. This objective function, as is, will yield errors for input values that don't meet the linear inequality constraints I've defined. I know there are a few algorithms that enforce strict adherence to bounds at every iteration, but does anyone know of any algorithms (or other mechanisms) that enforce strict adherence to linear (inequality) constraints at each iteration?
I could make my objective function return zero at any such points, but I'm worried about introducing large discontinuities.
Disclaimer: I'm not an optimization maven. A few ideas though:
Log barrier function to represent constraints
To expand on DanielTheRocketMan's suggestion, you can use a log barrier function to represent the constraint. If you have a constraint g(x) <= 0 and the objective to minimize is f(x), then you can define a new objective:
fprim(x) = f(x) - (1/t) * log(-g(x))
where t is a parameter defining how sharp to make the constraint. As g(x) approaches 0 from below, -log(-g(x)) goes to infinity, penalizing the objective function for getting close to violating the constraint. A higher value of t lets g(x) get closer to 0.
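A small sketch of that barrier objective, with an example f, g, and starting point (all assumptions); returning Inf outside the feasible region keeps the solver away from points where log(-g(x)) is undefined:

    % Log-barrier objective for a single constraint g(x) <= 0.
    f  = @(x) (x(1)-2)^2 + (x(2)-1)^2;    % example objective (assumption)
    g  = @(x) x(1) + x(2) - 1;            % example constraint: x1 + x2 <= 1
    t  = 10;                              % barrier sharpness parameter
    x0 = [0; 0];                          % must start strictly feasible: g(x0) < 0
    xOpt = fminsearch(@(x) barrierObj(x, f, g, t), x0);

    function v = barrierObj(x, f, g, t)
    if g(x) >= 0
        v = Inf;                          % infeasible: reject this step
    else
        v = f(x) - (1/t) * log(-g(x));    % barrier blows up as g(x) -> 0-
    end
    end

Increasing t over successive solves, warm-starting each from the previous solution, recovers the classic barrier method.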
You answered your own question? Use fmincon with one of the algorithms that satisfy strict feasibility of the constraints?
If your constraints are linear, that should be easy to pass to fmincon. Use one of the algorithms that satisfy strict feasibility.
Sounds like this wouldn't work for you, but CVX is an awesome package for some convex problems, though horrible/unworkable for others. If your problem is (i) convex and (ii) the objective function and constraints aren't too complicated, then CVX is a really cool package. There's a bit of a learning curve to using it, though.
Obvious point, but if your problem isn't convex, you may have big problems with getting stuck at local optima rather than finding the global optimum. Always something to be aware of.
If MATLAB's built-in algorithms are not working for you, you can implement the so-called interior-point penalty method yourself [you need to change your objective function]. See equations (1) and (2) [from the Wikipedia page on interior-point methods]. Note that because the barrier is interior, the penalty diverges when x is close to the constraint boundary [c(x) is close to zero]. This handles the inequality constraints. You can also control the value of mu over time; the best approach is to let mu decrease over time, which means you have to solve a sequence of optimizations. If mu is different from zero, the solution is always affected by the barrier. Furthermore, note that with this method your problem is no longer linear.
In the case of equality constraints, the only simple (and general) way to deal with them is to use the constraint equation directly. For instance, given x1 + x2 + x3 = 3, rewrite it as x1 = 3 - x2 - x3 and use this to replace x1 in all the other equations. Since your constraint is linear, this should work; a sketch is below.
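A quick sketch of that elimination, with a made-up three-variable objective:

    % Eliminate x1 via x1 + x2 + x3 = 3, then optimize over [x2, x3] only.
    f3 = @(x) x(1)^2 + 2*x(2)^2 + 3*x(3)^2;           % example objective (assumption)
    f2 = @(y) f3([3 - y(1) - y(2); y(1); y(2)]);      % reduced 2-variable objective
    yOpt = fminsearch(f2, [1; 1]);
    xOpt = [3 - yOpt(1) - yOpt(2); yOpt(1); yOpt(2)]; % recover the full solution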

Is normalization useful/necessary in optimization?

I am trying to optimize a device design using the MATLAB optimization toolbox (the fmincon function, to be precise). To get my point across quickly, I am providing a small variable set {l_m, r_m, l_c, r_c} whose starting value is {4mm, 2mm, 1mm, 0.5mm}.
Though MATLAB doesn't specifically recommend normalizing the input variables, my professor advised me to normalize the variables to the maximum value of {l_m, r_m, l_c, r_c}. Thus the variables will now take values from 0 to 1 (instead of, say, 3mm to 4.5mm in the case of l_m). Of course, I have to modify my objective function to convert back to the proper values before doing the calculations.
My question is: do optimization functions like fmincon care whether the input variables are normalized? Is it reasonable to expect a change in performance on account of normalization? The point to consider is how the optimizer varies a variable like l_m: in one case it changes it from 4mm to 4.1mm, and in the other from 0.75 to 0.76.
It is usually much easier to optimize when the input is normalized. You can expect an improvement in both the speed of convergence and the accuracy of the output.
For instance, as you can see in these notes ( http://www-personal.umich.edu/~mepelman/teaching/IOE511/Handouts/511notes07-7.pdf ), the convergence rate of gradient descent is better bounded when the ratio of the largest and smallest eigenvalues of the Hessian is small. When your variables are normalized to comparable scales, this ratio tends to be much closer to 1 (the optimum).
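As an illustration, here is a minimal sketch of optimizing in normalized coordinates with fmincon; deviceCost is a hypothetical stand-in for the real objective, and the bounds are made up:

    % Scale each variable by its starting value, optimize in scaled space,
    % then map the result back to physical units.
    scale = [4e-3; 2e-3; 1e-3; 0.5e-3];    % {l_m, r_m, l_c, r_c} starting values, in meters
    objN  = @(xn) deviceCost(xn .* scale); % deviceCost is hypothetical
    x0n   = ones(4,1);                     % starting point maps to all ones
    lbN   = 0.5*ones(4,1);  ubN = 1.5*ones(4,1);   % example bounds (assumption)
    xnOpt = fmincon(objN, x0n, [],[],[],[], lbN, ubN);
    xOpt  = xnOpt .* scale;                % back to physical units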