Does Matlab's fmincon() do automatic gradient scaling?

I'm currently working with Matlab on a constrained nonlinear optimization problem where I supply analytic Jacobians for both the objective and constraints. fmincon() is able to find a solution to this problem without scaling the constraint and objective functions (i.e. with ScaleProblem set to false), but I'm wondering whether fmincon() scales the analytic Jacobians automatically, as I'm having trouble replicating the result with other packages (e.g. IPOPT) without gradient scaling. If scaling is being performed, how exactly is it done?

In the documentation:
ScaleProblem true causes the algorithm to normalize all constraints and the objective function. Disable by setting to the default false.
For optimset, the values are 'obj-and-constr' or 'none'. See Current and Legacy Option Names.
So it normalises the objective function and all constraints. If the function is non-obfuscated, you can use edit fmincon to open it and see whether that can shed more light on the matter. If it's obfuscated/built-in though, the only ones with more knowledge would be The MathWorks themselves.
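The option itself is toggled through optimoptions. A minimal sketch, assuming the interior-point algorithm (the objective, constraint, and starting point below are just placeholders):

fun = @(x) x(1)^2 + 10*x(2)^2;        % illustrative objective
A   = [1 1];  b = 1;                  % illustrative linear constraint x1 + x2 <= 1
x0  = [0.2; 0.2];

opts = optimoptions('fmincon', ...
    'Algorithm', 'interior-point', ...   % ScaleProblem is documented for this algorithm
    'ScaleProblem', true);               % normalize objective and constraints

x = fmincon(fun, x0, A, b, [], [], [], [], [], opts);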

Related

Interior-point linear programming solver in MATLAB, with target barrier parameter option

Is there any linear programming solver, written for MATLAB, that (a) solves with the primal-dual interior-point method, and (b) lets the user set the target barrier parameter (the lowest value of the barrier parameter for which the KKT system is solved)?
I currently use IPOPT, which has the target barrier parameter options.
However, at convergence, the complementarity products dual*slack only seem to be approximately equal to the target (with an error of about ±1e-7 for a target parameter of 1e-5).
I have tried to play around with the tolerances, but to no avail.
For MATLAB use, I recommend CVX, which can call solvers such as Gurobi, MOSEK, GLPK, and SDPT3. All of these can solve linear programs very efficiently.
CVX is very easy to use in MATLAB.
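For illustration, a small LP in CVX looks roughly like this (the data are made up):

n = 3;
c = [1; 2; 3];
A = [1 1 0; 0 1 1];
b = [4; 5];

cvx_begin
    variable x(n)
    minimize( c' * x )
    subject to
        A * x <= b
        x >= 0
cvx_end

If you have Gurobi or MOSEK installed and licensed, cvx_solver gurobi (or cvx_solver mosek) selects that backend before cvx_begin.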

Define initial parameters of a nonlinear fit with no information

I was wondering whether there is a systematic way to choose initial parameters for this kind of problem (as they can take virtually any form). My question arises from the fact that my solution depends somewhat on the initial parameters (as usual). My fit consists of 10 parameters and approximately 5120 data points (x, y, z), and has nonlinear constraints. I have been doing this by brute force, that is, trying parameters randomly and trying to observe a pattern, but it has led me nowhere.
I also have tried using MATLAB's Genetic Algorithm (to find a global optimum) but with no success as it seems my function has a ton of local minima.
For the purposes of my problem, I need to justify in some manner the reasons behind my choice of initial parameters.
Without any insight into the model and the likely values of the parameters, the search space is too large for anything feasible. Consider that trying just ten values for each of the ten parameters already amounts to ten billion combinations.
There is no magical black box.
You can try Bayesian optimization to find a global optimum of expensive black-box functions. MATLAB describes its implementation, bayesopt, as
Select optimal machine learning hyperparameters using Bayesian optimization
but you can use it to optimize any function. Bayesian Optimization works by updating a prior belief over a distribution of functions with the observed data.
To speed up the optimization I would recommend adding your existing data via the InitialX and InitialObjective input arguments.
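A rough sketch of that, assuming two parameters; the variable names, bounds, and the myFitResidual helper are placeholders for your own model:

vars = [optimizableVariable('p1', [-10 10]), ...
        optimizableVariable('p2', [0 5])];

% bayesopt passes a one-row table of variable values to the objective
obj = @(p) myFitResidual(p.p1, p.p2);     % myFitResidual is your own cost function

% Seed the search with points you have already evaluated by brute force
initX   = table([1; -2], [0.5; 3.0], 'VariableNames', {'p1', 'p2'});
initObj = [12.3; 4.7];                    % illustrative objective values at initX

results = bayesopt(obj, vars, ...
    'InitialX', initX, ...
    'InitialObjective', initObj, ...
    'MaxObjectiveEvaluations', 60);
best = bestPoint(results);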

Why does fmincon yield different solutions

I am very new to MATLAB, so I am sorry if this is very basic.
I use a function called fmincon to find a solution that minimizes a function. Why do I get different solutions when running fmincon?
I would like to know a satisfying or convincing mathematical or programming explanation for having different solutions using fmincon.
Check these limitations in the MATLAB documentation.
fmincon is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.
The function is quite delicate and it is best avoided if you can. It only works well on problems that are well defined to begin with. Any deviation can lead to local instead of global minima, and these can depend (among other things) on your initial solution estimate or starting point.
Because fmincon is sensitive to the initial point, setting a different starting point may give a different solution on each run. You can find one of the algorithms used by fmincon here.
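A small demonstration of that sensitivity; the objective below is deliberately multimodal and purely illustrative:

fun = @(x) sin(3*x) + 0.1*x.^2;   % several local minima on [-5, 5]
lb = -5;  ub = 5;

x1 = fmincon(fun, -4, [], [], [], [], lb, ub)   % one starting point
x2 = fmincon(fun,  3, [], [], [], [], lb, ub)   % another; typically a different local minimum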

Patternsearch discrete variables

I want to optimize a multi-variable function with the patternsearch function in MATLAB. The function requires lower and upper bounds and searches within those bounds over a continuous domain.
However, I have a discrete set of values in an Excel file and would like the algorithm to search within this discrete domain instead of the continuous one.
Is this possible with patternsearch?
Maybe I don't understand your question correctly, but if you have a (discrete and finite) set of values, why don't you just compute the function's value at each of these points and return the minimum?
In short, no. That is not what patternsearch is intended for. Optimization techniques for discrete and continuous search spaces are quite expectedly different.
If you're looking for an approximate answer however, it is possible to use spline, polyfit, etc. to arrive at an approximate continuous function for your data and then apply patternsearch on it.
If you provide greater detail about your problem, I or someone else may be able to suggest a more suitable way of working with your data.
The best optimization tool for this is the genetic algorithm. This optimization tool comes with MATLAB's Global Optimization Toolbox and allows optimization over both continuous and discrete variables at the same time.
In the genetic algorithm, variables that take integer values have to be declared as such (via the IntCon argument); undeclared variables are treated as continuous by default.
Check the Global Optimization Toolbox guide for information on how it works: http://it.mathworks.com/help/pdf_doc/gads/gads_tb.pdf.
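A rough sketch of that approach; the file name, the discrete value set, and the someCost helper are all illustrative:

vals = readmatrix('discrete_values.xlsx');   % column of allowed discrete values
fun  = @(x) someCost(x(1), vals(x(2)));      % someCost is your own objective
nvars = 2;                                   % x(1) continuous, x(2) an index into vals
lb = [0, 1];
ub = [10, numel(vals)];
IntCon = 2;                                  % declare variable 2 as integer

xbest = ga(fun, nvars, [], [], [], [], lb, ub, [], IntCon);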

Is there an fmincon algorithm that always satisfies linear constraints?

I'm trying to perform a linearly constrained optimization in Matlab with a fairly complicated objective function. This objective function, as is, yields errors for input values that don't meet the linear inequality constraints I've defined. I know there are a few algorithms that enforce strict adherence to bounds at every iteration, but does anyone know of any algorithms (or other mechanisms) that enforce strict adherence to linear (inequality) constraints at each iteration?
I could make my objective function return zero at any such points, but I'm worried about introducing large discontinuities.
Disclaimer: I'm not an optimization maven. A few ideas though:
Log barrier function to represent constraints
To expand on DanielTheRocketMan's suggestion, you can use a log barrier function to represent the constraint. If you have the constraint g(x) <= 0 and the objective to minimize is f(x), then you can define a new objective:
fprim(x) = f(x) - (1/t) * log(-g(x))
where t is a parameter defining how sharp to make the constraint. As g(x) approaches 0 from below, -log(-g(x)) goes to infinity, penalizing the objective function for getting close to violating the constraint. A higher value of t lets g(x) get closer to 0.
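As a concrete sketch for linear inequalities A*x <= b (all data below are made up), the barrier can be added like this, returning Inf whenever an iterate leaves the strictly feasible region so that a derivative-free solver such as fminsearch stays inside:

f = @(x) (x(1) - 3)^2 + (x(2) + 1)^2;   % illustrative objective
A = [1 1];  b = 2;                      % illustrative constraint x1 + x2 <= 2
t = 100;                                % larger t lets x get closer to the boundary

fbar = @(x) barrierObj(x, f, A, b, t);
x0 = [0; 0];                            % must be strictly feasible
xopt = fminsearch(fbar, x0);

function val = barrierObj(x, f, A, b, t)
    s = b - A*x;                        % slacks; must stay strictly positive
    if any(s <= 0)
        val = Inf;                      % outside the strictly feasible region
    else
        val = f(x) - (1/t) * sum(log(s));
    end
end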
You answered your own question? Use fmincon with one of the algorithms that satisfy strict feasibility of the constraints?
If your constraints are linear, that should be easy to pass to fmincon. Use one of the algorithms that satisfy strict feasibility.
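For reference, linear inequality constraints are passed to fmincon directly as the A and b arguments rather than through the nonlinear-constraint function; a minimal sketch with made-up data:

fun = @(x) (x(1) - 2)^2 + x(1)*x(2) + (x(2) - 1)^2;   % placeholder objective
A = [1 1; -1 2];  b = [3; 4];                         % A*x <= b
x0 = [0; 0];                                          % feasible starting point

opts = optimoptions('fmincon', 'Algorithm', 'interior-point');
x = fmincon(fun, x0, A, b, [], [], [], [], [], opts);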
It sounds like this wouldn't work for you, but cvx is an awesome package for some convex problems and horrible/unworkable for others. If your problem is (i) convex and (ii) the objective function and constraints aren't too complicated, then cvx is a really cool package. There's a bit of a learning curve to using it, though.
Obvious point, but if your problem isn't convex, you may have big problems with getting stuck at local optima rather than finding the global optimum. Always something to be aware of.
If Matlab is not working for you, you can implement the so-called interior-point penalty (barrier) method yourself [you need to change your objective function]. See equations (1) and (2) [on the Wikipedia page]. Note that because an interior barrier is used, the penalty diverges when x is close to the constraint [c(x) is close to zero]. This approach deals with the inequality constraints. You can also control the value of mu over time; the usual approach is to decrease mu over time, which means you need to deal with a sequence of optimizations. As long as mu is nonzero, the solution is always perturbed. Furthermore, note that with this method your problem is no longer linear.
In the case of equality constraints, the only simple (and general) way to deal with them is to use the constraint equation directly. For instance, given x1 + x2 + x3 = 3, rewrite it as x1 = 3 - x2 - x3 and use it to replace x1 in all other equations. Since your constraints are linear, this should work.
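A tiny sketch of that substitution, with an illustrative objective: eliminate x1 via x1 = 3 - x2 - x3 and optimize over the remaining two variables only.

fullObj = @(x) (x(1) - 1)^2 + (x(2) - 2)^2 + x(3)^2;     % original f(x1, x2, x3)
redObj  = @(y) fullObj([3 - y(1) - y(2); y(1); y(2)]);   % y = [x2; x3]

y = fminsearch(redObj, [0; 0]);
x = [3 - y(1) - y(2); y(1); y(2)];                       % recover the full solution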