MATLAB: Global optimization where one parameter has to be an odd integer

I want to find the global minimum of a function f that takes two parameters a and b. While a is continuous, b has to be an odd integer. How can I approach this problem, given MATLAB's built in functions and those that come with the Optimization and Global Optimization Toolboxes?
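One common way to handle this is to reparameterize: write b = 2k + 1 so that any integer k gives an odd b, and then declare k an integer variable to ga() from the Global Optimization Toolbox. A minimal sketch, assuming a user-supplied objective f(a, b) and illustrative bounds:

```matlab
% Substitute b = 2*k + 1 so every integer k yields an odd b, then let
% ga() treat the second variable as an integer.
fun = @(z) f(z(1), 2*z(2) + 1);   % z(1) = a (continuous), z(2) = k (integer)
lb = [-10, -5];                   % illustrative bounds (assumption)
ub = [ 10,  5];
intcon = 2;                       % second variable is integer-constrained
z = ga(fun, 2, [], [], [], [], lb, ub, [], intcon);
a = z(1);
b = 2*z(2) + 1;                   % recover the odd integer
```

Note that ga() is a stochastic solver, so this gives no hard guarantee of the global minimum either; running it several times is prudent.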

Related

How to ensure my optimization algorithm has found the solution?

I am performing a numerical optimization where I try to find the parameters of a statistical model that best match certain moments of the data. I have 6 parameters in total I need to find. I have written a matlab function which takes the parameters as input and gives the sum of squared deviations from the empirical moments as output. I use the fminsearch function to find the parameters and it gives me a solution.
However, I am unsure if this is really a global minimum. What type of checks I could do to ensure the numerical solution is correct? Plotting the function is challenging due to high dimensionality. Any general advice in solving this type of problem is also appreciated.
You are describing the difficulties of a global optimization problem.
As mentioned in one of the comments, fminsearch() and the related function fminunc() return a local minimum. They provide no guarantee that you will get a global minimum.
A simple way to check whether the answer you get really is a global minimum is to run the optimization multiple times from various starting points. If all the runs converge to the same value, it might be a global minimum. If any run finds an answer with a lower error value, then the previous answer was not the global minimum.
The only way to be perfectly sure that you have the global minimum is to know whether your function is convex (i.e. your function has only a single minimum). This has to be established analytically.
If that cannot be done analytically, there are many global optimization methods you may want to consider, including some available in MATLAB's Global Optimization Toolbox.
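The multistart check described above can be sketched as follows, assuming your objective is a function sumSqDev that takes a 6-element parameter vector (name and starting-point distribution are illustrative):

```matlab
% Rough multistart check: run fminsearch from many random starting points
% and keep the best result found.
nStarts = 50;
best = inf;
for i = 1:nStarts
    x0 = randn(6, 1);                       % random starting point (assumed scale)
    [x, fval] = fminsearch(@sumSqDev, x0);
    if fval < best
        best = fval;
        xBest = x;
    end
end
% If many starts converge to (nearly) the same xBest and best, that is weak
% evidence of a global minimum; a single run with a lower fval disproves it.
```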

ILP variable mapping in converting Matlab problem-based formulation to solver-based formulation

I am spinning up on ILPs using Matlab tools as the "vehicle" for this. They have both a conventional "solver" based (SB) formulation and a "problem" based (PB) formulation at a higher level of abstraction.
The difference between SB and PB is that, for PB, the user doesn't have to worry about what problem variables map to elements of the ILP column vector of variables. An optimization problem object accepts the optimization function and equality/inequality constraints in symbolic form, and the class methods handle the bookkeeping of defining a column vector of problem variables, the coefficients of the optimization function and the matrices & associated RHSs for equality & inequality constraints.
One can actually examine the SB counterpart to the PB formulation by using prob2struct to convert from PB to SB. Unfortunately, it isn't clear how prob2struct decides which PB variables map to which elements of the column vector of variables.
I tried searching through the optimization problem object in the PB formulation to see if it contains its own internal SB formulation details (regardless if it matches those of prob2struct), or at least the variable mapping. I couldn't find such details.
For prob2struct, is there a reliable rule for us to know which symbolic PB variables map to which elements of the SB's column vector of variables?
Try the varindex function. It was introduced in R2019a.
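A minimal sketch of varindex on a toy problem (the variable names and constraint are illustrative):

```matlab
% Build a small problem-based formulation, then recover the mapping from
% problem variables to positions in the solver-based variable vector.
x = optimvar('x', 3, 'Type', 'integer', 'LowerBound', 0);
y = optimvar('y', 2);
prob = optimproblem('Objective', sum(x) + sum(y));
prob.Constraints.c1 = sum(x) + sum(y) <= 10;
idx = varindex(prob);         % struct: idx.x and idx.y list column positions
problem = prob2struct(prob);  % solver-based counterpart
% idx.x and idx.y now tell you which entries of the SB column vector of
% variables correspond to x and y.
```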

How to let fminsearch only search over integers?

I'm using the fminsearch Method of Matlab to minimize a function:
c = cvpartition(200,'KFold',10);
minfn = @(z)kfoldLoss(fitcsvm(cdata,grp,'CVPartition',c,...
    'KernelFunction','rbf','BoxConstraint',exp(z(2)),...
    'KernelScale',exp(z(1))));
opts = optimset('TolX',5e-4,'TolFun',5e-4);
[searchmin, fval] = fminsearch(minfn,randn(2,1),opts)
The minimization is over two parameters.
Now I would like to minimize over a third parameter, but this parameter can only take positive integer values, i.e. 1, 2, 3, ...
How can I tell fminsearch to only consider positive integers?
Second, if my third parameter gets initialized to 10 but its actual best value is 100, does fminsearch converge quickly in such cases?
You can't tell fminsearch to consider only integers. The algorithm it uses is not suitable for discrete optimization, which in general is much harder than continuous optimization.
If there are only relatively few plausible values for your integer parameter(s), you could just loop over them all, but that might be too expensive. Or you could cook up your own 1-dimensional discrete optimization function and have it call fminsearch for each value of the integer parameter it tries. (E.g., you could imitate some standard 1-dimensional continuous optimization algorithm, and just return once you've found a parameter value that's, say, better than both its neighbours.) You may well be able to adapt this function to the particular problem you're trying to solve.
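The "loop over all plausible values" idea can be sketched like this, assuming an objective minfn2 that takes the continuous pair z and the integer n, and an assumed plausible range of 1 to 20:

```matlab
% Brute-force outer loop over the integer parameter: for each candidate n,
% run fminsearch over the continuous parameters and keep the best result.
bestFval = inf;
for n = 1:20                                   % assumed plausible range
    [z, fval] = fminsearch(@(z) minfn2(z, n), randn(2, 1), opts);
    if fval < bestFval
        bestFval = fval;
        bestZ = z;
        bestN = n;
    end
end
```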
As @Gareth McCaughan said, you can't tell fminsearch to restrict the search space to integers. If you want to search for solvers that can handle this type of problem, search for "mixed integer programming." Mixed integer is for part continuous, part integer programming. And "programming" is jargon for optimization (a horribly confusing name, but like the QWERTY keyboard, we're stuck with it).
Be aware though that integer programming is in general NP-hard! Larger problems may be entirely intractable.
In the case I handled, I looked for a vector index which satisfies a condition. The vector index is a positive integer.
My workaround for fminsearch was to interpolate the error function. Suppose fminsearch proposes 5.1267 as the new index. Then I evaluated the error function at indexes 5 and 6 and returned an interpolated value. This led to stable and satisfying results.
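This interpolation idea can be sketched as a continuous surrogate that fminsearch can minimize, assuming an error function errAt that is defined only at integer indices:

```matlab
% Linear interpolation between the two neighbouring integer indices, so the
% surrogate agrees with errAt at every integer and is continuous in between.
smoothErr = @(x) (1 - (x - floor(x))) .* errAt(floor(x)) + ...
                 (x - floor(x))       .* errAt(floor(x) + 1);
xOpt = fminsearch(smoothErr, 5);   % 5 is an illustrative starting index
kOpt = round(xOpt);                % snap back to the nearest integer index
```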

Constraints on dependent variable Matlab

I am working on a simulation-optimization routine in Matlab. My program solves a set of Differential Algebraic Equations (DAEs) which depend on a set of design variables x and computes a cost function (objective function). The value of the objective function is passed to fmincon, which decides how to update x so that the constraints are fulfilled.
I was wondering if there is a way to impose bounds not only on x, but also on internal variables. I have, for instance, a physical limitation on the area of a piece of equipment, but this value does not belong to x (it is a dependent variable). I know that one can add penalty functions to the objective function to account for these variables, but I would like to know if there is a way to make these internal variables "visible" to the optimizer, so that it runs according to bounds or inequality conditions on them.
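One standard way to expose a dependent quantity to fmincon is through its nonlinear constraint function, which can run the simulation itself. A minimal sketch, where solveDAE, areaOf, and Amax are assumed names for the simulation, the dependent-variable extraction, and the physical limit:

```matlab
% Nonlinear constraint function: run the simulation for design x, compute
% the dependent variable (equipment area), and bound it via c(x) <= 0.
function [c, ceq] = myNonlcon(x)
    Amax = 50;                  % assumed physical area limit
    sol = solveDAE(x);          % run the DAE simulation for this design
    A   = areaOf(sol);          % dependent variable: equipment area
    c   = A - Amax;             % enforce A <= Amax
    ceq = [];                   % no nonlinear equality constraints
end
% Passed to the solver as:
% x = fmincon(@objective, x0, [], [], [], [], lb, ub, @myNonlcon);
```

If the simulation is expensive, it is common to cache the last solution so the objective and constraint functions do not each re-solve the DAEs for the same x.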

Matlab optimization - Variable bounds have "donut hole"

I'm trying to solve a problem using Matlab's genetic algorithm and fmincon functions where the variables' values do not have single upper and lower bounds. Instead, each variable should be allowed to take the value x=0 or satisfy lb<=x<=ub. This is a turbine allocation problem, where a turbine can either be turned off (x=0) or operate within the lower and upper cavitation limits (lb and ub). Of course I can trick the problem by creating a constraint which is violated for values between 0 and lb, but I'm finding that the problem has a hard time converging this way. Is there an easier way to do this that will trim down the search space?
If the number of variables is small enough (say, 10 or 15 or fewer), then you can try every subset of variables that are allowed to be non-zero and see which subset gives you the optimal value. If you can't make assumptions about the structure of your optimization problem (e.g. you have penalties for non-zero variables but your main objective function is "exotic"), this is essentially the best you can do.
If you are willing to settle for an approximate solution, you can add a so-called "L1" penalty to your objective function: a constant times the sum of the absolute values of the variables. This encourages some variables to be exactly zero, and if your main objective function is convex then the resulting objective function is still convex, because the absolute value is convex and a sum of convex functions is convex. Convex functions are much easier to minimize, because any local minimum of a convex function is also a global minimum, which you can reach using any number of optimization routines (including the ones implemented in Matlab).
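The subset-enumeration approach can be sketched as follows, where costFor is an assumed helper that runs fmincon over only the turbines marked "on" (with bounds lb and ub) and returns the resulting cost:

```matlab
% Brute-force over all on/off subsets of a small number of turbines.
n = 4;                                    % number of turbines (must be small)
bestCost = inf;
for mask = 0:2^n - 1
    on = bitget(mask, 1:n) == 1;          % logical vector: which turbines run
    [cost, x] = costFor(on);              % continuous fmincon over "on" subset
    if cost < bestCost
        bestCost = cost;
        bestOn = on;
        bestX = x;
    end
end
```

This is exponential in n (2^n subproblems), which is why it only makes sense for a handful of turbines; beyond that, the mixed-integer or L1-penalty routes are more practical.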