Sensitivity analysis in MATLAB - matlab

I have a large-scale linear programming problem. I can solve it within MATLAB using "linprog". However, it sits inside a loop, and I would like to avoid re-solving it from scratch from the second iteration to the end of the loop. It is a simple LP of the form:
Minimize sum a_i b_i
st. ...
Where the a_i are my variables and the b_i are coefficients. In each loop iteration only the b_i change slightly. I want the new values of my variables after this change. (Please note that MATLAB does not use the simplex method for large-scale problems.)
Is there any way I can save time in the loop and avoid solving the LP from scratch multiple times?
Thanks

Note that Sensitivity Analysis for LPs/IPs is not one of MATLAB's strengths.
Option 1: If you can use CPLEX or SAS, they both have "warm-start" methods that reuse your previous basis and come up with a solution fast. (This is true sensitivity analysis.)
Here's an IBM/CPLEX link on setting an initial solution.
Similarly, SAS/OR also has warmstart options.
Option 2: If you only have access to MATLAB
From MATLAB's documentation, here's how to "force" it to use the simplex method.
To use the simplex method, set 'LargeScale' to 'off' and 'Simplex' to 'on' in options.
options = optimset('LargeScale','off','Simplex','on')
Note: If the default interior-point method is much better suited to your particular LP, first solve it as you do in iteration 1. Then set the upper and lower bounds of your basic variables to their solution values, and set the linprog options to invoke simplex. It will solve the problem trivially.
Try switching the solution engine to simplex, and see if that helps in the second and subsequent iterations of the LP with slightly changed coefficients.
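A minimal sketch of that workflow, assuming the older optimset/linprog interface quoted above (the constraint data A, bineq, the bounds, and update_coeffs are illustrative placeholders; here f holds the changing coefficients, i.e. the question's b_i, and x the variables a_i):

opts = optimset('LargeScale','off','Simplex','on');

f = f0;                                          % coefficients for iteration 1
x = linprog(f, A, bineq, [], [], lb, ub);        % first solve with the default method
for k = 2:nIter
    f = update_coeffs(f, k);                     % hypothetical slight change in the coefficients
    x = linprog(f, A, bineq, [], [], lb, ub, [], opts);  % re-solve with the simplex engine
end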

Related

How to ensure my optimization algorithm has found the solution?

I am performing a numerical optimization where I try to find the parameters of a statistical model that best match certain moments of the data. I have 6 parameters in total to find. I have written a MATLAB function which takes the parameters as input and gives the sum of squared deviations from the empirical moments as output. I use the fminsearch function to find the parameters, and it gives me a solution.
However, I am unsure whether this is really a global minimum. What type of checks could I do to ensure the numerical solution is correct? Plotting the function is challenging due to the high dimensionality. Any general advice on solving this type of problem is also appreciated.
You are describing the difficulties of a global optimization problem.
As mentioned in one of the comments, fminsearch() and the related function fminunc() return a local minimum. They provide no guarantee that you will get a global minimum.
A simple way to check whether the answer you get really is a global minimum is to run the optimization multiple times from various starting points. If the answers all converge to the same value, it might be a global minimum. If you find an answer with a lower error value, then the previous answer was not the global minimum.
The only way to be perfectly sure that you have the global minimum is to know whether or not your function is convex (i.e. your function has only a single minimum). This has to be established analytically.
If that is not possible analytically, there are many global optimization methods you may want to consider, including some available in MATLAB's Global Optimization Toolbox.
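As a rough illustration of the multi-start check suggested above, here is a minimal sketch (moment_error stands for the poster's sum-of-squared-deviations function; the number of starts and the sampling range for the 6 parameters are assumptions):

nStarts = 50;
bestFval = Inf;  bestX = [];
for s = 1:nStarts
    x0 = -5 + 10*rand(6,1);                    % random starting point in [-5,5]^6
    [x, fval] = fminsearch(@moment_error, x0);
    if fval < bestFval                         % keep the lowest objective value found
        bestFval = fval;  bestX = x;
    end
end

If most starts converge to (nearly) the same bestX, a global minimum becomes more plausible, though it is still not a guarantee.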

Explain Matlab ode45 output. Is ode45 an iterative algorithm?

I tried to use ode45 to solve an equation and get output like the following. I get the idea that it estimates using nearby points (as explained here https://www.mathworks.com/videos/solving-odes-in-matlab-6-ode45-117537.html). By my understanding, it should solve the equation in one round of computation? But the output looks like ode45 is an iterative algorithm (it generates output that repeats the '... steps ... failed attempts ... function evaluations' lines over and over again). If it is iterative, could you give some detail or references? Thanks!
ode45 is an iterative, adaptive ODE solver. That is, it uses a 5th order (FSAL) method to propose an update using some step size h. It then does the same with a 4th order method and compares the two updates: if the difference is less than some local tolerance, the proposed update is accepted; if the difference is larger, the update is rejected and the step size is reduced (in some smart way).
To reduce the cost of using both a 4th and a 5th order method, the two methods use (roughly) the same function evaluations.
As for your output, it is, as also noted by @LutzL, not the standard output, which might point to an error in your code.
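For what it's worth, counters of the "... successful steps ... failed attempts ... function evaluations" kind can be produced with the solver statistics option; a small sketch (the ODE and interval are purely illustrative):

opts = odeset('Stats','on');               % print step, rejection and evaluation counts
f = @(t,y) -2*y + sin(t);                  % illustrative scalar ODE
[t, y] = ode45(f, [0 10], 1, opts);

The failed attempts are exactly the rejected steps from the accept/reject loop described above.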

How to Supply the Jacobian to Fsolve?

pow = fsolve(@eqns, pop);
This is the code I am using to solve a 2x2 non-linear system of equations, defined in the function eqns.m.
pop is a 2x1 initialisation vector pretty close to the solution. When I run it, the output says
No solution found. fsolve stopped because the relative size of the current step is less than the default value of the step size tolerance squared, but the vector of function values is not near zero as measured by the default value of the function tolerance. <stopping criteria details>
Any way out? I tried moving the initial point further away from the solution intentionally, but it still does not work. How do I set the tolerance or some other parameter? Some posts gave me the impression that supplying the Jacobian to MATLAB can be helpful, but how do I do that? Please note that I need the solution as code that I can put in a function file to be called repeatedly; I believe the interactive optimtool toolbox would not help here. Any help please?
Also, from the documentation, fsolve can employ three different algorithms. Is any of them more helpful than the others for certain problem structures? Where can I get a comparative study of them, suitable for a non-expert in optimisation?
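For reference, a minimal sketch of how a Jacobian is typically supplied to fsolve: the objective function returns the residual vector F and the Jacobian J as a second output, and an option tells fsolve to use it (the 2x2 system below is purely illustrative; older releases use optimset('Jacobian','on') instead of optimoptions):

function demo_fsolve_jacobian
    pop  = [1; 1];                                          % initial guess
    opts = optimoptions('fsolve', 'SpecifyObjectiveGradient', true);
    pow  = fsolve(@eqns, pop, opts)
end

function [F, J] = eqns(x)
    F = [x(1)^2 + x(2) - 3;                                 % residuals
         x(1)    - x(2) + 1];
    J = [2*x(1),  1;                                        % analytic Jacobian dF/dx
         1,      -1];
end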

using "Refine" option in Matlab's ode45

I am trying to use ode45 in MATLAB and want to fix the number of points that MATLAB uses (number of time steps). Using the 'Refine' option in ode45 does not seem to help. For instance, if I set 'Refine' to 10, MATLAB returns an array of 101.
Changing 'RelTol' and 'AbsTol' does not help either. I know that it is possible to write tspan as [0,t1,t2,t3,...,tn], and that solves the issue, but I'd like to fix the number of points via the 'Refine' option.
Perhaps you misunderstand what the 'Refine' option actually does. From the documentation for odeset:
Refine — If Refine is 1, the solver returns solutions only at the end of each time step. If Refine is n >1, the solver subdivides each time step into n smaller intervals and returns solutions at each time point. Refine does not apply when length(tspan)>2 or the ODE solver returns the solution as a structure.
In other words, setting 'Refine' to 10 does not guarantee that you'll get 10 output points but rather that you'll get 10 output points per integration time step. In the case of an adaptive step size method like ode45, the solver chooses how big the steps are based on many criteria. If you want a given number of output points you must specify fixed time steps as you've already done via tspan. The linspace function might be helpful to you.
Another possibility is that you're not actually applying your options. Simply calling odeset is not sufficient; you must also remember to pass its output into ode45.
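A short sketch of both points (the ODE itself is illustrative):

f = @(t,y) -y;

% 'Refine' changes the number of output points per accepted step, not the
% total, and the options structure must actually be passed to ode45:
opts = odeset('Refine', 10);
[t1, y1] = ode45(f, [0 10], 1, opts);

% To get an exact number of output points, specify them in tspan instead:
tspan = linspace(0, 10, 101);              % exactly 101 output points
[t2, y2] = ode45(f, tspan, 1);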

Matlab's fsolve converges *but* seems to give wrong solution

I am trying to solve a system of non-linear equations using fsolve; let's say
F(x;lambda) = 0, where lambda is a vector of parameters and x is the vector I want to solve for.
I am using Matlab's fsolve.
I have 2 values of the parameter lambda that I want to solve the system for. For one value of lambda I get a solution, which seems alright.
For the other value of lambda I also get a solution (MATLAB exits with a flag of 1). However, I know this is not an actual solution: for example, I know that some of the dimensions of x have to be equal to each other, and this is not the case in the solution I get from fsolve.
I have tried both the trust-region and the Levenberg-Marquardt algorithms, and I am not getting any better results. (Explicitly enforcing those x's to be equal still gives solutions that are not consistent with what I would expect from the properties of the system.)
My question is: do the algorithms used by fsolve depend on any kind of stability of the system? Could it be that by changing the parameter lambda in the second case above I make the system unstable, and could that make it hard for fsolve to solve it correctly?
Thank you, George
fsolve isn't "failing" - as commented by jucestain, it's giving you a local minimum, which is not necessarily a global minimum. This is what it's designed to do.
To improve your chances of obtaining a global minimum you need to either:
Know that your initial guess is good
Run the optimisation several times with a grid of initial guesses, and pick the best result (a rough sketch follows below)
Add constraints to prevent the solver straying into areas you know to have local minima
Modify your cost function to remove local minima
If you ever come across a non-linear solver that can guarantee a global minimum, do let us know!
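As a rough sketch of the "grid of initial guesses" idea for a two-dimensional x (the grid range, its spacing and the residual-based ranking are assumptions; myF and lambda stand for the poster's system and parameters):

[g1, g2] = ndgrid(linspace(-2, 2, 5));       % 5-by-5 grid of starting points
bestRes = Inf;  bestX = [];
opts = optimoptions('fsolve', 'Display', 'off');
for k = 1:numel(g1)
    x0 = [g1(k); g2(k)];
    [x, F, flag] = fsolve(@(x) myF(x, lambda), x0, opts);
    if flag > 0 && norm(F) < bestRes         % keep the converged run with the smallest residual
        bestRes = norm(F);  bestX = x;
    end
end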