I am trying to find an optimal point with MultiStart/fmincon. MATLAB finds a local minimum, for which the following message is given in the output struct:
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
Optimization completed: The relative first-order optimality measure, 7.337955e-07, is less than options.OptimalityTolerance = 1.000000e-06, and the relative maximum constraint violation, 7.082693e-07, is less than options.ConstraintTolerance = 1.000000e-06.
I have 13 nonlinear constraints, of which one is violated.
This is the constraint:
c(10) = abs(L_1 - (L_2 - L_3)) - 0.001;
I want to achieve:
abs(L_1 - (L_2 - L_3)) <= 0.001;
If I check this constraint by hand, I get
abs(L_1 - (L_2 - L_3)) = 0.0011;
which is larger than the desired value 0.001.
The constraint tolerance in optimoptions is set to 1e-6 by default.
How can this happen? The output clearly says that all the constraints are met, but in reality this one is not.
I think you've misunderstood what fmincon provides. Solutions are guaranteed to satisfy the bounds. There is no assurance that all constraints (linear or non-linear) will be satisfied.
I will admit that you have to read between the lines when looking at the MATLAB documentation.
For the default 'interior-point' algorithm, fmincon sets components of x0 that violate the bounds lb ≤ x ≤ ub, or are equal to a bound, to the interior of the bound region. For the 'trust-region-reflective' algorithm, fmincon sets violating components to the interior of the bound region. For other algorithms, fmincon sets violating components to the closest bound. Components that respect the bounds are not changed. See Iterations Can Violate Constraints.
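Note also the wording of the quoted message: what is compared against ConstraintTolerance is the relative maximum constraint violation (7.08e-07), a scaled quantity, while the violation checked by hand (0.0011 - 0.001 = 1e-4) is the raw one, so a scaled measure can pass the tolerance even when the unscaled constraint fails. A minimal sketch for checking raw feasibility yourself after the solve (assuming the nonlinear constraint function, options object, and returned solution are named nonlcon, opts, and x):
[c, ceq] = nonlcon(x);                   % re-evaluate nonlinear constraints at x
maxViol = max([c(:); abs(ceq(:)); 0]);   % worst raw (unscaled) violation
if maxViol > opts.ConstraintTolerance
    fprintf('Raw violation %g exceeds ConstraintTolerance %g\n', ...
        maxViol, opts.ConstraintTolerance)
end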
I am trying to solve a system of 12 equations in MATLAB. Because I have constraints on the minimum and maximum values of the variables, I use lsqnonlin rather than fsolve. However, I would like the optimizer to stop once the output (the sum of squared deviations from the point where each equation holds) is sufficiently close to zero. Is there a way to specify such a stopping criterion?
The standard stopping criteria compare the change in the output value against the previous iteration, but that is less relevant for me.
Use the fmincon function to solve the equations with bound constraints.
Because you have not provided any code, follow the example from the MATLAB documentation:
fun = @(x)1+x(1)/(1+x(2)) - 3*x(1)*x(2) + x(2)*(1+x(1)); % objective function
lb = [0,0]; % lower bounds
ub = [1,2]; % upper bounds
x0 = (lb + ub)/2; % initial estimate
x = fmincon(fun,x0,[],[],[],[],lb,ub)
This specifies the ranges 0 <= x(1) <= 1 and 0 <= x(2) <= 2 for the variables (the bounds are inclusive).
The fmincon function also lets you change the default options. To specify the tolerance on the objective function, set FunctionTolerance:
options = optimoptions('fmincon','Display','iter','FunctionTolerance',1e-10);
This sets fmincon options to have iterative display, and to have a FunctionTolerance of 1e-10. Call the fmincon function with these nonstandard options:
x = fmincon(fun,x0,[],[],[],[],lb,ub,[],options)
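If the goal is specifically to halt once the sum of squared deviations is close enough to zero, one option is an output function, which stops the solver as soon as it returns true. A sketch (the 1e-8 threshold is an assumed value, to be adapted to your problem):
stopFcn = @(x, optimValues, state) optimValues.fval < 1e-8; % stop when objective is near zero
options = optimoptions('fmincon','OutputFcn',stopFcn);
x = fmincon(fun,x0,[],[],[],[],lb,ub,[],options)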
I am using the differential evolution optimizer in scipy and I don't understand the intuition behind the tol argument. Specifically, the documentation says:
tol: float, optional
When the mean of the population energies, multiplied by tol, divided by the standard deviation of the population energies, is greater than 1, the solving process terminates:
convergence = mean(pop) * tol / stdev(pop) > 1
What does setting tol represent from a user perspective?
Maybe the formula in the documentation is easier to understand in the following form (see lines 508 and 526 in the code):
std(population_energies) / mean(population_energies) < tol
It means that convergence is reached when the standard deviation of the energies of the individuals in the population, normalized by the average, is smaller than the given tolerance value.
The optimization algorithm is iterative. At every iteration a better solution is found. The tolerance parameter is used to define a stopping condition. The stopping condition is actually that all the individuals (parameter sets) have approximately the same energy, i.e. the same cost function value. Then, the parameter set giving the lowest energy is returned as the solution.
It also implies that all the individuals are relatively close to each other in the parameter space. So, no better solution can be expected on the following generations.
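A worked example with illustrative numbers: if the population energies at some generation are 9.9, 10.0, and 10.1, then std/mean ≈ 0.008, so with tol = 0.01 the criterion is met and the solver stops, whereas with tol = 0.001 it keeps iterating.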
I'm trying to solve a nonlinear constrained optimization problem using MATLAB's fmincon function with the SQP algorithm. This solver has been successfully applied to my problem, as I found out during my literature research.
I know my problem's solution, but fmincon struggles to find it reliably. When running the optimization 100 times with randomly generated start values within my bounds, I got about 40% good results. 'Good' means that the results are close enough to the optimum that I would accept them, although those 'good' results correspond to different exit flags. Most common are ExitFlags 2 and -2:
ExitFlag = 2
Local minimum possible. Constraints satisfied.
fmincon stopped because the size of the current step is less than the selected value of the step size tolerance and constraints are satisfied to within the selected value of the constraint tolerance.
ExitFlag = -2
No feasible solution found.
fmincon stopped because the size of the current step is less than the selected value of the step size tolerance but constraints are not satisfied to within the selected value of the constraint tolerance.
The 'non-good' results deviate by about 2% from the optimal solution and correspond to ExitFlags 2 and -2 as well.
I played around with the tolerances, but without success. When relaxing the constraint tolerance, the number of ExitFlag -2 cases decreases and some ExitFlag 1 cases occur, but consequently the deviation from the optimal solution rises.
A big problem seems to be the step size, which violates its tolerance. Often the solver exits after 2 or 3 iterations because the step size / norm of the step is too small (relative change in X is below TolX). Is there a way to counteract these problems? I'd like to tune the solver in a way that gives appropriate results reliably.
For your information, the options used:
options=optimset('fmincon');
options=optimset(options,...
'Algorithm','sqp',...
'ScaleProblem','obj-and-constr',...
'TypicalX',[3, 50, 3, 40, 50, 50],...
'TolX',1e-12,...%12
'TolFun',1e-8,...%6
'TolCon',1e-3,...%6
'MaxFunEvals',1000,... %1000
'DiffMinChange',1e-10);
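For what it's worth, the repeated random starts can be automated with MultiStart from the Global Optimization Toolbox. A sketch (fun and nonlcon stand in for the actual objective and nonlinear constraint functions of the problem):
problem = createOptimProblem('fmincon','objective',fun,...
    'x0',[3, 50, 3, 40, 50, 50],'lb',lb,'ub',ub,...
    'nonlcon',nonlcon,'options',options);
ms = MultiStart('StartPointsToRun','bounds'); % run only start points satisfying the bounds
[xBest, fBest, flag] = run(ms, problem, 100); % best of 100 random starts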
I would like to use the MATLAB function fminsearch to search for the best hyperparameters of my SVM classifier with a weighted RBF kernel. fminsearch uses the Nelder-Mead simplex method.
Let's say I have the following hyperparameters: C, gamma, w1, ..., wn, where the wi are the weights of the kernel.
Additionally, I have the constraint that sum(wi) = 1, i.e. all weights must sum up to one.
Is there a possibility to use Nelder-Mead with this equality constraint? I know that there is the fminsearchbnd function for MATLAB, but I think it can handle only bound (inequality) constraints.
Edit: I'm using an SVM classifier, and the weights are used in a weighted RBF kernel (one weight per feature). The parameters to estimate are thus C, gamma, and the weights. The cost function is the accuracy.
Can you substitute out one of the w(i)? That is, replace e.g. w1 by 1 - w2 - w3 - ... and drop the constraint. Otherwise have a look at fmincon, which allows explicit constraints. In addition you may need 0 <= w(i) <= 1.
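A minimal sketch of the substitution idea (svm_accuracy is a hypothetical function returning the accuracy for given C, gamma, and weight vector; fminsearch minimizes, so the accuracy is negated):
n = 4;                                   % number of kernel weights (example value)
obj = @(p) -svm_accuracy(p(1), p(2), ... % p(1) = C, p(2) = gamma
    [1 - sum(p(3:end)); p(3:end)]);      % w1 = 1 - (w2 + ... + wn)
p0 = [1; 0.5; repmat(1/n, n-1, 1)];      % initial guess
pOpt = fminsearch(obj, p0);
Note that nothing here keeps the reconstructed w1 (or the other weights) inside [0, 1]; if that matters, add a penalty term or switch to fmincon as suggested above.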
I am struggling to solve an optimization problem, numerically, of the following (generic) form.
minimize F(x)
such that:
(1) 0 < x < 1
(2) M(x) >= 0
where M(x) is a matrix whose elements are quadratic functions of x. The last constraint means that M(x) must be a positive semidefinite matrix. Furthermore, F(x) is a callable function. For the more curious, here is a similar minimum working example.
I have tried a few options, but to no success.
PICOS, CVXPY and CVX -- In the first two cases, I cannot find a way of encoding a minimax problem such as mine. In the third one, which is implemented in MATLAB, the matrices involved in a semidefinite constraint must be affine, so my problem falls outside this criterion.
fmincon -- How can we encode a matrix positivity constraint? One way is to compute the eigenvalues of the matrix M(x) analytically and constrain each one to be nonnegative. But the analytic expressions for the eigenvalues can be horrendous.
MOSEK -- The objective function must be expressible in a standard form. I cannot find an example of a user-defined objective function.
scipy.optimize -- Along with the objective function and the constraints, it is necessary to provide the derivatives of these functions as well. In my case that is fine for the objective function, but expressing the matrix positivity constraint (and its derivative) through an analytic expression for the eigenvalues would be very tedious.
My apologies for not providing an MWE to illustrate my attempts with each of the above packages.
Can anyone please suggest a package/software which could be useful to me in solving my optimization problem?
Have a look at a nonlinear optimization package with box constraints, where different types of constraints may be coded via penalty or barrier techniques.
Look at the following URL: merlin.cs.uoi.gr
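For the fmincon route mentioned in the question, one common workaround is to impose the semidefinite constraint through the numerically computed smallest eigenvalue rather than analytic expressions. A sketch with a toy objective and matrix (stand-ins, not the actual problem); be aware that the smallest eigenvalue is nonsmooth where eigenvalues cross, which can slow the solver near such points:
F = @(x) (x(1) - 0.8)^2 + (x(2) - 0.2)^2;  % toy objective
M = @(x) [x(2), x(1)^2; x(1)^2, 1 - x(2)]; % symmetric, entries quadratic in x
nonlcon = @(x) deal(-min(eig(M(x))), []);  % fmincon wants c(x) <= 0, so require lambda_min >= 0
lb = zeros(2,1); ub = ones(2,1);           % constraint (1): 0 < x < 1
x0 = [0.5; 0.5];
xOpt = fmincon(F, x0, [], [], [], [], lb, ub, nonlcon);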