How to ensure my optimization algorithm has found the solution? - matlab

I am performing a numerical optimization where I try to find the parameters of a statistical model that best match certain moments of the data. I have 6 parameters in total to find. I have written a MATLAB function that takes the parameters as input and returns the sum of squared deviations from the empirical moments as output. I use the fminsearch function to find the parameters, and it gives me a solution.
However, I am unsure whether this is really a global minimum. What types of checks could I do to ensure the numerical solution is correct? Plotting the function is challenging due to the high dimensionality. Any general advice on solving this type of problem is also appreciated.

You are describing the difficulties of a global optimization problem.
As mentioned in one of the comments, fminsearch() and the related function fminunc() return a local minimum; they provide no guarantee that you will get a global minimum.
A simple way to check whether the answer you get really is a global minimum is to run the optimization multiple times from various starting points. If all runs converge to the same value, it might be a global minimum. If any run finds an answer with a lower error value, then the earlier answer was not the global minimum.
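A minimal sketch of that multi-start check, assuming your objective is wrapped in a function handle objfun (a placeholder name) mapping a 6-element parameter vector to the sum of squared deviations:

% Restart fminsearch from many random points and keep the best result.
nStarts = 50;
rng(1);                               % fix the seed so the experiment is repeatable
bestFval = Inf;
for k = 1:nStarts
    x0 = -5 + 10*rand(6, 1);          % random start in [-5, 5]^6; adjust to your parameter scale
    [x, fval] = fminsearch(@objfun, x0);
    if fval < bestFval
        bestFval = fval;
        bestX = x;
    end
end
% If most runs end near the same bestFval and bestX, that point is a
% plausible global minimum; widely scattered fvals indicate local minima.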
The only way to be perfectly sure that you have the global minimum is to know whether or not your function is convex (i.e., your function has only a single minimum). This has to be established analytically.
If that cannot be done analytically, there are many global optimization methods you may want to consider, including some available in MATLAB's Global Optimization Toolbox.

Related

Why does fmincon yield different solutions

I am very new to MATLAB, so I am sorry if this is very basic.
I use a function called fmincon to find a solution that minimizes a function. Why do I get different solutions each time I run fmincon?
I would like a convincing mathematical or programming explanation for why fmincon returns different solutions.
Check these limitations in the MATLAB documentation:
fmincon is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.
The function is quite delicate, and it is best avoided unless your problem is neatly defined to begin with. Any deviation can lead to a local instead of a global minimum, and the result can depend (among other things) on your initial solution estimate or starting point.
Because fmincon is sensitive to the initial point, setting a different starting point can give you a different solution on each run. You can find one of the algorithms used by fmincon here.
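To see that sensitivity concretely, here is a toy illustration (the two-well objective below is made up purely for demonstration):

% A simple non-convex objective with two local minima, near x = -0.95 and x = 1.03.
f = @(x) (x.^2 - 1).^2 + 0.1*(x - 2).^2;
opts = optimoptions('fmincon', 'Display', 'off');
[x1, f1] = fmincon(f, -2, [], [], [], [], -5, 5, [], opts);   % start on the left
[x2, f2] = fmincon(f, 3, [], [], [], [], -5, 5, [], opts);    % start on the right
fprintf('start -2 -> x = %.4f, f = %.4f\n', x1, f1);
fprintf('start  3 -> x = %.4f, f = %.4f\n', x2, f2);
% Nothing changed except the starting point, yet the two runs converge to
% different local minima.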

Any suggestion for solving linear equations with two unknowns to be assumed?

I am trying to solve a "linearized" linear system of equations, which requires two parameters to be estimated by iteration because of the linearization. The underlying problem is actually nonlinear, but it is linearized using a Fourier series method.
I have been solving the linear system with matrices and SVDs, which does not take much time, but these matrices depend on the two parameters that have to be solved for iteratively. In the end I just need to make sure that one of the parameters I solve for iteratively matches the response I get from the system. This is the criterion to be minimized.
I have been using fmincon with MultiStart to solve for the two parameters, and I get some results, but it is taking longer than I expect. There is a local minima issue too, which is why I had to include MultiStart.
Does anyone have an idea whether another method would make this problem easier to solve?
I really appreciate it.
A global optimization method you may want to try is simulated annealing.
MATLAB's Global Optimization Toolbox has a relevant routine, simulannealbnd.
There is also free simulated annealing software that you may try.
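A minimal sketch of simulannealbnd, assuming your mismatch criterion is wrapped in a handle errfun over the two parameters (the function name, starting point, and bounds below are placeholders):

errfun = @(p) myIterationError(p(1), p(2));   % myIterationError is a stand-in for your own routine
p0 = [1; 1];                                  % initial guess for the two parameters
lb = [0; 0]; ub = [10; 10];                   % placeholder bounds; set these from your problem
[pBest, errBest] = simulannealbnd(errfun, p0, lb, ub);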
I made some progress on my problem. I originally replied in the comments, but I think it is worth putting here, since what I did revealed something unexpected:
I ran a Monte Carlo simulation over the two variables to be iteratively solved, and plotted how the error changes with respect to the input variables. I realized that there are tons of local minima in the error of the response; that is why fmincon could not solve the problem on its own: it was quickly falling into one of those local-minima holes, and I needed a very refined multi-start for fmincon to reach the global minimum. This is a very interesting observation, because I wasn't expecting such a rough error distribution with respect to just two parameters.
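Roughly, the scan looked like the following sketch (errfun here is a stand-in for my actual error evaluation, and the ranges are placeholders):

% Randomly sample the two parameters and visualize the error surface.
rng(0);                                       % repeatable sampling
nSamples = 5000;
lb = [0 0]; ub = [10 10];                     % placeholder parameter ranges
p1 = lb(1) + (ub(1) - lb(1))*rand(nSamples, 1);
p2 = lb(2) + (ub(2) - lb(2))*rand(nSamples, 1);
err = arrayfun(@(a, b) errfun([a; b]), p1, p2);
scatter3(p1, p2, err, 10, err, 'filled');     % color by error to expose the local-minima holes
xlabel('parameter 1'); ylabel('parameter 2'); zlabel('error');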
Is there any efficient solver/optimizer in MATLAB that you know of to get the global minimum in cases where there are many local minima? Or any other method?
Thanks,

Why does GlobalSearch return different solutions each run?

When running the GlobalSearch solver on a nonlinear constrained optimization problem, I often get very different solutions on each run. For the cases where I have an analytical solution, the numerical results are less dispersed than in the non-analytical cases, but they still differ from run to run. It would be nice to get the same results at least for these analytical cases, so that I know the optimization routine is working properly. Is there a good explanation of this in the Global Optimization Toolbox User Guide that I missed?
Also, why does GlobalSearch use a different number of local solver runs each run?
Thanks!
A full description of how the GlobalSearch algorithm works can be found here.
In summary, the GlobalSearch method iteratively performs local optimization. It starts by using fmincon to search for a local minimum near the initial point you provide. Then a set of "trial points" is generated with the scatter-search algorithm, seeded according to how good the initial result was. More local optimization follows, along with a rating of how good the minima around these trial points are.
There are a couple of things that can cause the algorithm to give you different answers:
1. Changing the initial conditions you give it
2. The inherent randomness of the scatter-search algorithm
The fact that you are getting different answers each time likely means that your function is highly non-convex. The best approach I know of in this scenario is to run the optimization from several different initial conditions and see which result you get back most frequently.
It also looks like there is a 'PlotFcns' property that would let you get a better idea of what the solver is doing and what the functions it explores look like.
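Since the scatter-search trial points are drawn randomly, you can also fix the random seed before calling run to make runs comparable. A minimal sketch (objfun, x0, and the bounds are placeholders):

rng(0);                                       % fix the scatter-search randomness
problem = createOptimProblem('fmincon', 'objective', @objfun, ...
    'x0', x0, 'lb', lb, 'ub', ub);
gs = GlobalSearch('Display', 'iter', 'PlotFcns', @gsplotbestf);
[xBest, fBest] = run(gs, problem);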
You can use the ga or gamultiobj functions from the same toolbox, and I would recommend this. Local gradient-based solvers won't reliably find the global minimum of a non-convex problem, and even genetic algorithms don't guarantee it. However, if you run ga and then use its final minimum as the starting point for your fmincon search, it should give you the same answer consistently. There may be better approaches, but if the search space is unknown you may never know.
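A sketch of that ga-then-fmincon hand-off; the toolbox can even automate it through ga's 'HybridFcn' option (objfun, nvars, and the bounds are placeholders):

% Let ga explore globally, then have fmincon polish ga's best point.
opts = optimoptions('ga', 'HybridFcn', @fmincon);
[xBest, fBest] = ga(@objfun, nvars, [], [], [], [], lb, ub, [], opts);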

Getting -Inf or NaN as the result for a genetic algorithm in MATLAB

I sometimes get -Inf or NaN as the final value of my target function when I use the MATLAB ga toolbox to do the minimization. But if I run the optimization again with exactly the same options, I get a finite answer... Could anyone tell me why this happens, and how I could solve the problem? Thanks very much!
The documentation and examples for ga are bad about this and barely mention the stochastic nature of the method (though if you're using it, maybe you are already aware). If you want repeatable results, you should always specify a seed value when performing stochastic simulations. This can be done in at least two ways. You can use the rng function:
rng(0);
where 0 is the seed value. Alternatively, you may be able to use the 'rngstate' field if you specify the optimization as a problem structure. See more here on reproducing results.
If you're doing any sort of experiment, you should be specifying a seed. That way you can repeat a run if necessary, to check why something happened or to obtain more finely-grained data. Just change the seed to another value if you want a different run.
The genetic algorithm is a stochastic method, which means it does not explore the same region of the problem space every time you run it. On each run it tries different candidate solutions, and occasionally it hits a solution on which your target function is ill-behaved.
Without knowing more about your specific problem, all I can really suggest is taking a closer look at your target function to see whether you can restrict it so that it does not blow up to negative infinity. Look at the solutions returned by the GA when you get these extreme target values, and see if you can adjust your target function so that it does not return infinite values for such solutions.
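One common defensive pattern is to wrap the target function so ga never sees a non-finite value; a sketch, with myFitness standing in for your actual target:

function f = safeFitness(x)
% Wrap the real target so ga never receives -Inf or NaN.
f = myFitness(x);                 % your original target function
if ~isfinite(f)
    f = 1e10;                     % large finite penalty steers ga away from bad regions
end
end

Then pass @safeFitness to ga in place of the original handle.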

MATLAB's fsolve converges *but* seems to give the wrong solution

I am trying to solve a system of nonlinear equations using fsolve; let's say
F(x; lambda) = 0, where lambda is a vector of parameters and x is the vector I want to solve for.
I am using MATLAB's fsolve.
I have 2 values of the parameter lambda for which I want to solve the system. For one value of lambda I get a solution, which seems alright.
For the other value of lambda I also get a solution (MATLAB exits with a flag of 1). However, I know this is not an actual solution: for example, I know that some of the components of x have to be equal to each other, and this is not the case in the solution I get from fsolve.
I have tried both the trust-region and the Levenberg-Marquardt algorithms, and I am not getting any better results. (Explicitly enforcing those x's to be equal still gives solutions that are not consistent with what I would expect from the properties of the system.)
My question is: do the algorithms used by fsolve depend on any kind of stability of the system? Could it be that changing the parameter lambda in the second case makes the system unstable, and could that make it hard for fsolve to solve it correctly?
Thank you, George
fsolve isn't "failing" - as commented by jucestain, it is giving you a local minimum of the residual, which is not necessarily a global minimum (i.e. an actual root). This is what it is designed to do.
To improve your chances of obtaining a global minimum you need to either:
1. Know that your initial guess is good
2. Run the optimisation several times from a grid of initial guesses, and pick the best result (sketched below)
3. Add constraints to prevent the solver straying into areas you know to contain local minima
4. Modify your cost function to remove local minima
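A rough sketch of the grid-of-guesses approach for fsolve (myF and the grid ranges are placeholders; the residual norm tells you which run to trust):

% Try fsolve from a grid of starting points and keep the smallest residual.
opts = optimoptions('fsolve', 'Display', 'off');
bestRes = Inf;
[g1, g2] = meshgrid(linspace(-2, 2, 5));      % placeholder 2-D grid; extend to your dimension
for k = 1:numel(g1)
    x0 = [g1(k); g2(k)];
    [x, Fval, flag] = fsolve(@(x) myF(x, lambda), x0, opts);
    if flag > 0 && norm(Fval) < bestRes
        bestRes = norm(Fval);
        bestX = x;
    end
end
% Reject any "solution" whose residual norm is large or whose structure is
% wrong, even if the exit flag is positive.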
If you ever come across a non-linear solver that can guarantee a global minimum, do let us know!