MATLAB's fsolve converges *but* seems to give a wrong solution

I am trying to solve a system of nonlinear equations with MATLAB's fsolve; let's say
F(x; lambda) = 0, where lambda is a vector of parameters, and x is the vector I want to solve for.
I have two values of the parameter lambda for which I want to solve the system. For one value of lambda I get a solution, which seems alright.
For the other value of lambda I also get a solution (MATLAB exits with a flag of 1). However, I know this is not an actual solution: for example, I know that some of the components of x have to be equal to each other, and this is not the case in the solution I get from fsolve.
I have tried both the trust-region and the Levenberg-Marquardt algorithms, and I am not getting any better results. (Explicitly enforcing those components of x to be the same still gives solutions that are not consistent with what I would expect from the properties of the system.)
My question is: do the algorithms used by fsolve depend on any kind of stability of the system? Could it be that, by changing the parameter lambda in the second case I mention above, I make the system unstable, and could that make it hard for fsolve to solve it correctly?
Thank you, George

fsolve isn't "failing" - as commented by jucestain, it's giving you a local minimum, which is not necessarily a global minimum. This is what it's designed to do.
To improve your chances of obtaining a global minimum you need to either:
Know that your initial guess is good
Run the optimisation several times with a grid of initial guesses, and pick the best result (see the sketch after this list)
Add constraints to prevent the solver from straying into areas you know to contain local minima
Modify your cost function to remove local minima
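A minimal sketch of the grid-of-initial-guesses approach, assuming fsolve from the Optimization Toolbox; the residual function F and the grid range below are placeholders for your own system:

F = @(x) [x(1)^2 + x(2)^2 - 1; x(1) - x(2)];   % placeholder system F(x) = 0
opts = optimoptions('fsolve', 'Display', 'off'); % use optimset on older releases
bestX = []; bestRes = Inf;
[g1, g2] = meshgrid(-2:0.5:2, -2:0.5:2);         % grid of initial guesses
for k = 1:numel(g1)
    [x, fval, flag] = fsolve(F, [g1(k); g2(k)], opts);
    if flag > 0 && norm(fval) < bestRes          % keep the best converged run
        bestX = x;
        bestRes = norm(fval);
    end
end

If many different starts converge to the same bestX, that gives you more confidence the solution is genuine.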
If you ever come across a non-linear solver that can guarantee a global minimum, do let us know!

Related

How to ensure my optimization algorithm has found the solution?

I am performing a numerical optimization where I try to find the parameters of a statistical model that best match certain moments of the data. I have 6 parameters in total that I need to find. I have written a MATLAB function which takes the parameters as input and gives the sum of squared deviations from the empirical moments as output. I use the fminsearch function to find the parameters, and it gives me a solution.
However, I am unsure if this is really a global minimum. What types of checks could I do to ensure the numerical solution is correct? Plotting the function is challenging due to its high dimensionality. Any general advice on solving this type of problem is also appreciated.
You are describing the difficulties of a global optimization problem.
As mentioned in one of the comments, fminsearch() and the related function fminunc() will return a local minimum. They provide no guarantee that you will get a global minimum.
A simple way to check whether the answer you get really is a global minimum would be to run the function multiple times from various starting points. If the answers all converge to the same value, it might be a global minimum. If you find an answer with a lower error value, then the earlier answer was not the global minimum.
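For example, here is a minimal random-restart sketch; sumSquaredDev and the parameter bounds are hypothetical placeholders for your own moment-matching function and ranges:

nParams = 6;
lb = -5 * ones(1, nParams);                 % placeholder lower bounds
ub =  5 * ones(1, nParams);                 % placeholder upper bounds
best = Inf; bestP = [];
for k = 1:50                                % 50 random restarts
    p0 = lb + (ub - lb) .* rand(1, nParams); % random start in [lb, ub]
    [p, fval] = fminsearch(@sumSquaredDev, p0); % sumSquaredDev is hypothetical
    if fval < best
        best = fval;                        % keep the lowest error found
        bestP = p;
    end
end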
The only way to be perfectly sure that you have the global minimum is to know whether or not your function is convex (i.e., your function has only a single minimum). This has to be established analytically.
If that cannot be done analytically, there are many global optimization methods you may want to consider, including some available in this MATLAB toolbox.

Any suggestion for solving linear equations with two unknown to be assumed?

I am trying to solve a "linearized" linear system of equations, which requires two parameters to be estimated by iteration because of the linearization. The actual problem is nonlinear, but it is linearized using a Fourier series method.
I have been solving the linear system with plain matrix operations and SVDs, which does not take much time, but these matrices depend on the two parameters that are to be solved iteratively. In the end I just need to make sure that one of the parameters I solve for iteratively matches the response I get from the system. This is the criterion to be minimized.
I have been using "fmincon" and "MultiStart" to solve for the two parameters, and I get some results, but it is taking longer than I expect. There is a local-minima issue too, so I had to include "MultiStart".
Does anyone have an idea whether any other method would make this problem easier to solve?
I really appreciate it.
A global optimization method that one may use is simulated annealing.
Maybe MATLAB has a relevant routine.
There is also free simulated-annealing software that you may try.
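MATLAB does in fact ship one with the Global Optimization Toolbox: simulannealbnd. A minimal sketch, where the error wrapper, starting point, and bounds are hypothetical placeholders:

objFun = @(p) responseError(p(1), p(2));   % responseError is a hypothetical wrapper
p0 = [1; 1];                               % placeholder initial guess
lb = [0; 0]; ub = [10; 10];                % placeholder bounds
pBest = simulannealbnd(objFun, p0, lb, ub);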
I made some progress on my problem. I originally replied in the comments, but I think it is worth putting it here, since what I did turned up something unexpected:
So I ran a Monte Carlo simulation over the two variables to be solved iteratively, and plotted how the error changes with respect to the input variables. I realized that the error of the response has tons of local minima; that is why fmincon could not manage on its own: it was quickly jumping into one of those local-minimum holes, and I needed a very refined multi-start for fmincon to reach the global minimum. This is a very interesting observation, because I wasn't expecting such a rough error distribution with respect to two parameters.
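A sketch of that kind of parameter scan, with a hypothetical error function err(p1, p2) standing in for the author's criterion and placeholder ranges:

p1 = linspace(0, 10, 100);                 % placeholder parameter ranges
p2 = linspace(0, 10, 100);
E = zeros(numel(p2), numel(p1));
for i = 1:numel(p1)
    for j = 1:numel(p2)
        E(j, i) = err(p1(i), p2(j));       % err is a hypothetical stand-in
    end
end
surf(p1, p2, E); shading interp            % visualise the local-minima landscape
xlabel('p1'); ylabel('p2'); zlabel('error')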
Is there any efficient solver/optimizer in MATLAB that you know of for finding the global minimum in cases where there are many local minima? Or any other method?
Thanks,

How to Supply the Jacobian to Fsolve?

pow = fsolve(@eqns, pop);
This is the code I am using to solve a 2x2 non-linear system of equations, defined in the function eqns.m.
pop is a 2x1 initialisation vector pretty close to the solution. When I run it, the output says:
No solution found. fsolve stopped because the relative size of the current step is less than the default value of the step size tolerance squared, but the vector of function values is not near zero as measured by the default value of the function tolerance. <stopping criteria details>
Any way out? I tried intentionally moving the initial point further away from the solution, and it still does not work. How do I set the tolerance or some other parameter? Some posts gave me the impression that supplying the Jacobian to MATLAB can be helpful, but how do I do that? Please note that I need the solution in the form of code that I can put in a function file to be called repeatedly. I believe the interactive optimtool toolbox would not help here. Any help, please?
Also, from the documentation, fsolve can employ three different algorithms. Is any of them more helpful than the others for certain problem structures? Where can I find a comparative study of them, suitable for a non-expert in optimisation?
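For reference, fsolve accepts an analytic Jacobian when your function returns it as a second output and the corresponding option is enabled. A minimal sketch; the 2x2 system below is hypothetical, so substitute your own eqns:

function demo
    opts = optimoptions('fsolve', 'SpecifyObjectiveGradient', true);
    % On older releases use: opts = optimset('Jacobian', 'on');
    % Tolerances can be tightened via 'FunctionTolerance' and 'StepTolerance'.
    pop = [0.5; 0.5];                   % initial guess
    pow = fsolve(@eqns, pop, opts)
end

function [F, J] = eqns(x)
    F = [x(1)^2 + x(2) - 1;             % placeholder equation 1
         x(1) - x(2)^2];                % placeholder equation 2
    J = [2*x(1), 1;                     % analytic Jacobian, J(i,j) = dF_i/dx_j
         1, -2*x(2)];
end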

Why does GlobalSearch return different solutions each run?

When running the GlobalSearch solver on a nonlinear constrained optimization problem I have, I often get very different solutions on each run. For the cases where I have an analytical solution, the numerical results are less dispersed than in the non-analytical cases, but they are still different on each run. It would be nice to get the same results, at least for these analytical cases, so that I know the optimization routine is working properly. Is there a good explanation of this in the Global Optimization Toolbox User Guide that I missed?
Also, why does GlobalSearch use a different number of local solver runs each time?
Thanks!
A full description of how the GlobalSearch algorithm works can be found here.
In summary, the GlobalSearch method iteratively performs local optimization. Basically, it starts out by using fmincon to search for a local minimum near the initial conditions you have provided. Then a set of "trial points", chosen based on how good the initial result was, is generated using the "scatter search" algorithm. Then there is some more local optimization and a rating of "how good" the minima around these points are.
There are a couple of things that can cause the algorithm to give you different answers:
1. Changing the initial conditions you give it
2. The scatter search algorithm itself
The fact that you are getting different answers each time likely means that your function is highly non-convex. The best thing I know of that you can do in this scenario is to try the optimization algorithm from several different initial conditions and see which result you get back most frequently.
It also looks like there is a 'PlotFcns' property which would let you get a better idea of what the functions the solver is generating look like.
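A minimal sketch of a GlobalSearch run (Global Optimization Toolbox), with the random seed fixed so the scatter-search trial points, and hence the result, repeat from run to run; the objective and bounds are placeholders:

rng default                                   % fix the seed for repeatable runs
gs = GlobalSearch('NumTrialPoints', 2000);    % more trial points, more robust
problem = createOptimProblem('fmincon', ...
    'objective', @(x) peaks(x(1), x(2)), ...  % placeholder objective
    'x0', [0; 0], 'lb', [-3; -3], 'ub', [3; 3]);
[xBest, fBest] = run(gs, problem);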
You can use the ga or gamultiobj functions alongside the GlobalSearch API; I would recommend this. Convex optimizers won't be able to solve a non-convex problem. Even then, genetic algorithms don't guarantee the solution. If you run ga and then use its final minimum as the start of your fmincon search, it should give the same answer consistently. There may be better options, but if the search space is unknown you may never know.
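A sketch of that two-stage approach, assuming the Global Optimization Toolbox and a placeholder objective and bounds:

fun = @(x) x(1)^4 - 2*x(1)^2 + x(2)^2;             % placeholder non-convex objective
lb = [-2 -2]; ub = [2 2];                          % placeholder bounds
xGA = ga(fun, 2, [], [], [], [], lb, ub);          % coarse global search
xRef = fmincon(fun, xGA, [], [], [], [], lb, ub);  % local refinement from ga's result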

Solving a non-polynomial equation numerically

I've got a problem with an equation that I am trying to solve numerically using both MATLAB and the Symbolic Math Toolbox. I have been through several pages of MATLAB help, picked up a few tricks, and tried most of them, still without a satisfying result.
My goal is to solve a set of three non-polynomial equations in the angles q1, q2, and q3. Those variables represent joint angles in my industrial manipulator, and what I'm trying to achieve is to solve the inverse kinematics of this model. My set of equations looks like this: http://imgur.com/bU6XjNP
I'm solving it with
numeric::solve([z1,z2,z3], [q1=x1..x2,q2=x3..x4,q3=x5..x6], MultiSolutions)
changing the xn constants according to my needs. Yet I still get some odd results: the q1 variable is off by approximately 0.1 rad, and q2 and q3 are off by ~0.01 rad. I don't have much experience with numeric::solve, so I just need to know: is it supposed to behave like that?
And if not, what option do you suggest I take next? Maybe transforming the equations to polynomial form, or using a different toolbox?
Or, if trying to do this in MATLAB, how can you limit your solutions when using solve()? I'm thinking of an equivalent to the Symbolic Math Toolbox's assume() and assumeAlso().
I would be grateful for your help.
The numerical solution of a system of nonlinear equations is generally set up as an iterative minimization process: finding the global minimum of the norm of the difference between the left- and right-hand sides of the equations. For example, fsolve essentially uses Newton iterations. Those methods perform a "deterministic" optimization: they start from an initial guess and then move through the space of unknowns, essentially along the opposite of the gradient, until a solution is found.
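For intuition, here is a bare-bones Newton iteration for F(x) = 0; the system and its Jacobian below are placeholders:

F = @(x) [x(1)^2 + x(2)^2 - 2; x(1) - x(2)];  % placeholder system
J = @(x) [2*x(1), 2*x(2); 1, -1];             % its Jacobian
x = [2; 0];                                   % initial guess
for k = 1:20
    dx = -J(x) \ F(x);                        % Newton step
    x = x + dx;
    if norm(dx) < 1e-12, break; end           % stop when steps become tiny
end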
You then have two kinds of issues:
Local minima: the stopping rule of the iteration is related to the gradient of the functional. When the gradient becomes small, the iterations are stopped. But the gradient can become small at local minima, besides the desired global one. When the initial guess is far from the actual solution, you can get stuck at a false solution.
Ill-conditioning: small variations of the data can be reflected in large variations of the unknowns. So small numerical errors in the data (for example, machine rounding) can lead to large errors in the computed unknowns.
Due to the above problems, the solution found by your numerical algorithm is likely to differ (even significantly) from the actual one.
I recommend that you run a consistency test: choose a starting guess, for example when using fsolve, very close to the actual solution, and verify that your final result is accurate. You will then discover that, as you move the initial guess farther away from the actual solution, your result is likely to show some (even large) errors. Of course, the magnitude of the errors depends on the nature of the system of equations; in some lucky cases, the errors may also remain very small.
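A sketch of that consistency test, with a placeholder system standing in for yours; the first solve, started near a believed solution, serves as the reference:

F = @(x) [exp(x(1)) - x(2); x(1) + x(2) - 2];  % placeholder system
xRef = fsolve(F, [0.4; 1.5]);                  % reference solve, started close
for d = [0.01 0.1 1 10]                        % growing offsets of the guess
    x = fsolve(F, xRef + d);
    fprintf('offset %g -> deviation %g\n', d, norm(x - xRef));
end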