I was using MATLAB's fmincon, but the optimization stopped with the following message:
fmincon stopped because the size of the current step is less than
the selected value of the step size tolerance.
I have TolX set to 1e-10 and TolFun set to 1e-10 as well.
I checked the logs, and the first-order optimality measure was 198, so this is definitely not the optimal solution. What could be going wrong?
Furthermore, I ran the same code and data in two different MATLAB versions, R2013b and R2014a, and they gave different results. Is there something wrong with fmincon in MATLAB R2013b?
When you have a question about the difference between two versions of MATLAB or, as in this case, two toolbox versions, the first place to check is the release notes: for MATLAB and for the Optimization Toolbox. Indeed, it seems that fmincon was updated. Do you specify an 'Algorithm'? If not, the difference may be due to the different default in R2014a. In this case, you may be able to specify the use of the 'interior-point' algorithm in R2013b and get better results.
You can read about the algorithms used by fmincon here.
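For example, here is a minimal sketch of pinning the algorithm and tolerances explicitly so that both releases use the same method; myObjective, x0, and the constraint matrices are placeholders for your actual problem:

% Force the interior-point algorithm and the tolerances in both releases.
% myObjective, x0, A, b, Aeq, beq, lb, ub are placeholders.
opts = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
                    'TolX', 1e-10, 'TolFun', 1e-10, 'Display', 'iter');
[x, fval, exitflag, output] = fmincon(@myObjective, x0, ...
                                      A, b, Aeq, beq, lb, ub, [], opts);
disp(output.firstorderopt)   % first-order optimality measure at the solution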
Is there any linear programming solver, written for MATLAB, that (a) solves with the primal-dual interior-point method, and (b) lets the user set the target barrier parameter (the lowest value of the barrier parameter for which the KKT system is solved)?
I currently use IPOPT, which has a target barrier parameter option.
However, at convergence, the condition on the product dual*slack seems to be only approximately satisfied (with an error of about ±1e-7 for a target parameter of 1e-5).
I have tried to play around with the tolerances, but to no avail.
For use from MATLAB, I recommend CVX, which supports solvers such as Gurobi, MOSEK, GLPK, and SDPT3. All of those can solve a linear program very efficiently.
CVX is very easy to use in MATLAB.
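As a rough illustration, here is a minimal CVX sketch for a small random LP; the problem data are placeholders, and it assumes CVX is installed with one of its solvers (e.g. SDPT3):

% Minimal CVX sketch for a linear program: minimize c'*x s.t. A*x <= b, x >= 0.
% The random data below are just placeholders.
n = 10;
A = randn(5, n);  b = rand(5, 1);  c = rand(n, 1);
cvx_begin
    variable x(n)
    minimize( c' * x )
    subject to
        A * x <= b
        x >= 0
cvx_end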
I am performing a numerical optimization where I try to find the parameters of a statistical model that best match certain moments of the data. I need to find 6 parameters in total. I have written a MATLAB function which takes the parameters as input and returns the sum of squared deviations from the empirical moments as output. I use the fminsearch function to find the parameters, and it gives me a solution.
However, I am unsure whether this is really a global minimum. What kind of checks could I do to ensure the numerical solution is correct? Plotting the function is challenging due to the high dimensionality. Any general advice on solving this type of problem is also appreciated.
You are describing the difficulties of a global optimization problem.
As mentioned in one of the comments, fminsearch() and the related function fminunc() return a local minimum; they provide no guarantee that you will get a global minimum.
A simple way to check whether the answer you get really is a global minimum is to run the optimization multiple times from various starting points (see the sketch after this answer). If the answers all converge to the same value, it might be a global minimum. If you find an answer with a lower error value, then the previous answer was not the global minimum.
The only way to be perfectly sure that you have the global minimum is to know whether or not your function is convex (i.e. your function has only a single minimum). This has to be established analytically.
If that cannot be done analytically, there are many global optimization methods you may want to consider, including some available in this MATLAB toolbox.
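For instance, here is a rough multi-start sketch of the check described above; momentObjective is a hypothetical stand-in for your sum-of-squared-deviations function, and the sampling range is an arbitrary assumption:

% Run fminsearch from several random starting points and keep the best result.
% momentObjective and the [-5, 5] sampling range are placeholders.
nParams = 6;
nStarts = 20;
bestFval = Inf;
for k = 1:nStarts
    x0 = -5 + 10 * rand(1, nParams);              % random start in [-5, 5]
    [x, fval] = fminsearch(@momentObjective, x0);
    if fval < bestFval
        bestFval = fval;                          % best objective so far
        bestX = x;                                % best parameters so far
    end
end
fprintf('Best objective value found: %g\n', bestFval);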
I know that fitcsvm is a new command in the latest MATLAB version, and the latest documentation says that svmtrain will be removed. Are the two commands the same? I have actually noticed that they give different results in my recent work. Can anyone help me with this strange problem?
In my experiments, I found that the two functions are indeed different. By default, fitcsvm takes the empirical class distribution into consideration, i.e. the prior is determined by the numbers of positive and negative samples. However, svmtrain just takes this distribution as [0.5 0.5], which amounts to assuming no prior knowledge.
Another possible source of difference is whether the data have been standardized; see the related SVM documentation for more on this.
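Based on that, here is a minimal sketch of nudging fitcsvm toward the old svmtrain defaults; fisheriris is a dataset shipped with MATLAB, and treating a uniform prior plus standardization as equivalent to svmtrain is an assumption, not a guarantee:

% Compare default fitcsvm with a version using a uniform prior and scaling,
% which should be closer to the legacy svmtrain defaults.
load fisheriris                      % small example dataset shipped with MATLAB
X = meas(51:end, 3:4);               % two classes, two features
y = species(51:end);

mdlDefault = fitcsvm(X, y);                          % empirical class prior
mdlLegacy  = fitcsvm(X, y, 'Prior', 'uniform', ...   % uniform prior, like [0.5 0.5]
                           'Standardize', true);     % svmtrain autoscales by default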
From the fitcsvm documentation:
fitcsvm and svmtrain use, among other algorithms, SMO for optimization. The software implements SMO differently between the two functions, but numerical studies show that there is sensible agreement in the results.
From the Wikipedia article Sequential minimal optimization:
SMO is an iterative algorithm for solving the optimization problem ...
I have a program using fminbnd, and it works perfectly on my new version of MATLAB. Some of my colleagues have the older version R2010b, and there it yields an error message. Have there been any major changes to this function over the past two years?
Do you use the LargeScale algorithm (the default)?
It was improved in R2011b:
Enhanced Robustness in Nonlinear Solvers
More solvers now attempt to recover from errors in the evaluation of objective functions and nonlinear constraint functions during iteration steps, or, for some algorithms, during gradient estimation. The errors include results that are NaN or Inf for all solvers, or complex for fmincon and fminunc. If there is such an error, the algorithms attempt to take different steps. The following solvers are enhanced:
[...]
fminunc LargeScale algorithm
[...]
See the release notes.
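As a rough, hypothetical illustration of the kind of failure the note is about (not your colleagues' actual code):

% fragileObjective.m -- hypothetical objective that is NaN for x <= 0.
% Per the release note, newer solvers try a different step when they hit
% NaN/Inf during iterations; older releases may simply stop or error.
% Usage, e.g.: [xmin, fmin] = fminunc(@fragileObjective, 0.5)
function f = fragileObjective(x)
    if x <= 0
        f = NaN;                  % undefined region
    else
        f = log(x) + (x - 2)^2;   % smooth elsewhere, minimum near x = 1.7
    end
end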
I have an fsolve call in an M-file, and it works perfectly in version R2011b. However, the fsolve fails every time in R2012a. Has there been a major change to either the function or the options that would cause this?
Here is what the R2012a release notes say about fsolve:
Levenberg-Marquardt Algorithm Tweak
The fsolve, lsqcurvefit, and lsqnonlin solvers no longer use the magnitude of the Levenberg-Marquardt regularization parameter as a stopping criterion, so they no longer return an exit flag of -3 when using the levenberg-marquardt algorithm. Instead, they use the TolX tolerance in all internal calculations.
http://www.mathworks.com/help/toolbox/optim/rn/bs86_xz.html#btd80ns
You might want to compare the documentation of the current release with that of the older releases.
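One way to make the comparison concrete is to pin the algorithm and tolerances explicitly in both releases; the two-equation system below is just a stock placeholder, not your actual problem:

% Solve a small placeholder system with an explicitly chosen algorithm and
% tolerances, so the stopping behavior does not depend on release defaults.
F = @(x) [ 2*x(1) - x(2) - exp(-x(1));
          -x(1) + 2*x(2) - exp(-x(2))];
opts = optimset('Algorithm', 'levenberg-marquardt', ...
                'TolX', 1e-10, 'TolFun', 1e-10, 'Display', 'iter');
[x, fval, exitflag] = fsolve(F, [-5; -5], opts);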