MATLAB fsolve changes in version R2012a

I have an fsolve call in an M-file and it works perfectly in R2011b. However, the fsolve fails every time in R2012a. Has there been a major change to either the function or its options that would cause this?

Here is what the R2012a Release Notes say about fsolve:
Levenberg-Marquardt Algorithm Tweak
The fsolve, lsqcurvefit, and lsqnonlin solvers no longer use the
magnitude of the Levenberg-Marquardt regularization parameter as a
stopping criterion, so they no longer return an exit flag of -3 when
using the levenberg-marquardt algorithm. Instead, they use the TolX
tolerance in all internal calculations.
http://www.mathworks.com/help/toolbox/optim/rn/bs86_xz.html#btd80ns

You might want to compare the documentation of the current release against that of the older releases.
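For what it's worth, pinning down the algorithm and tolerances explicitly makes behavior much easier to compare across releases. A minimal sketch; the two-equation system below is a stock test example of my own, not from the question:

fun = @(x) [2*x(1) - x(2) - exp(-x(1));      % hypothetical test system
            -x(1) + 2*x(2) - exp(-x(2))];
x0 = [-5; -5];
opts = optimset('Algorithm', 'levenberg-marquardt', ...
                'TolX', 1e-10, 'Display', 'iter');
[x, fval, exitflag] = fsolve(fun, x0, opts);
% Per the release note above, from R2012a on the levenberg-marquardt
% path no longer returns exitflag -3, so branch on the exitflag values
% you actually observe rather than on release-specific codes.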

Related

Interior-point linear programming solver in MATLAB, with target barrier parameter option

Is there any linear programming solver, written for MATLAB, that (a) solves with the primal-dual interior-point method, and (b) gives the user the option to set the target barrier parameter (the lowest value of the barrier parameter for which the KKT system is solved)?
I currently use IPOPT, which has the target barrier parameter option.
However, at convergence, the products dual*slack seem to be satisfied only approximately (with an error of about ±1e-7 for a target parameter of 1e-5).
I have tried to play around with the tolerances, but to no avail.
For MATLAB use, I recommend CVX, which can use Gurobi, MOSEK, GLPK, and SDPT3 as back-end solvers. All of these can solve a linear program very efficiently.
CVX is very easy to use in MATLAB.
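For a sense of the syntax, here is a minimal LP sketch in CVX with made-up data (A, b, and c are random placeholders; SDPT3 works out of the box because it ships with CVX):

m = 20; n = 10;                      % hypothetical problem size
A = randn(m, n);
b = A * rand(n, 1) + 1;              % guarantees a strictly feasible point
c = rand(n, 1);                      % nonnegative cost keeps the LP bounded

cvx_begin
    variable x(n)
    minimize( c' * x )
    subject to
        A * x <= b
        x >= 0
cvx_end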

What kind of numerical method does MATLAB's 'pdepe' function use?

I'm using MATLAB's function pdepe to solve a problem involving some partial differential equations; in particular, a parabolic one.
I need to know what kind of numerical method the function uses, because I have to state this in a report.
The description of the function in the MathWorks documentation is "Solve initial-boundary value problems for systems of parabolic and elliptic PDEs in one space variable and time". Is it a finite difference method?
Thanks for helping me.
Taken from the MATLAB R2016b documentation for pdepe:
The time integration is done with ode15s. pdepe exploits the
capabilities of ode15s for solving the differential-algebraic
equations that arise when Equation 1-3 contains elliptic equations,
and for handling Jacobians with a specified sparsity pattern.
Also, from the ode15s documentation:
ode15s is a variable-step, variable-order (VSVO) solver based on the
numerical differentiation formulas (NDFs) of orders 1 to 5.
Optionally, it can use the backward differentiation formulas (BDFs,
also known as Gear's method) that are usually less efficient.
As indicated by Alessandro Trigilio, ode15s is used to advance the solution forward in time. Exactly what the function is advancing in time is a semi-discrete, second-order Galerkin formulation for non-singular problems or a semi-discrete, second-order Petrov-Galerkin formulation for singular problems (polar or spherical meshes that include the origin). As such, the spatial discretization is finite element in nature.
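To make that division of labor concrete, here is a minimal pdepe sketch for the standard heat equation u_t = u_xx with u(0,t) = u(1,t) = 0 and u(x,0) = sin(pi*x). This toy example is my own, not from the question; local functions at the end of a script need R2016b or newer:

m = 0;                                  % slab (Cartesian) geometry
x = linspace(0, 1, 50);                 % spatial mesh (discretized in space by pdepe)
t = linspace(0, 0.5, 20);               % output times (time-stepping done by ode15s)
sol = pdepe(m, @pdefun, @icfun, @bcfun, x, t);
surf(x, t, sol(:, :, 1))                % u(x,t)

function [c, f, s] = pdefun(x, t, u, dudx)   % c*u_t = d/dx(f) + s
    c = 1; f = dudx; s = 0;
end
function u0 = icfun(x)                       % u(x,0) = sin(pi*x)
    u0 = sin(pi * x);
end
function [pl, ql, pr, qr] = bcfun(xl, ul, xr, ur, t)   % u = 0 at both ends
    pl = ul; ql = 0;
    pr = ur; qr = 0;
end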

Some issues with fmincon in MATLAB

I was using MATLAB's fmincon, but the optimization stopped with the following message:
fmincon stopped because the size of the current step is less than
the selected value of the step size tolerance.
I have TolX set to 10^-10 and TolFun to 10^-10 as well.
I checked the logs, and the first-order optimality was 198, so this is definitely not an optimal solution. What could possibly be going wrong?
Furthermore, I used different versions of MATLAB, R2013b and R2014a, and for the same code and data they give different results. Is there something wrong with fmincon in MATLAB R2013b?
When you have a question about the difference between two versions of MATLAB or, as in this case, two toolbox versions, the first place to check is the release notes: for MATLAB and for the Optimization Toolbox. Indeed, it seems that fmincon was updated. Do you specify an 'Algorithm'? If not, the difference may be due to a different default in R2014a. In this case, you may be able to specify the use of the 'interior-point' algorithm in R2013b (see the sketch below) and get better results.
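A minimal sketch of forcing the same algorithm and tolerances in both releases; the objective, constraints, and starting point are the question's own, shown here as placeholders:

% optimset works in both R2013b and R2014a
opts = optimset('Algorithm', 'interior-point', ...
                'TolX', 1e-10, 'TolFun', 1e-10, 'Display', 'iter');
[x, fval, exitflag, output] = fmincon(@myobj, x0, A, b, ...
                                      Aeq, beq, lb, ub, @mycon, opts);
output.firstorderopt   % large at exit => the step-size stop was premature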
You can read about the algorithms used by fmincon here.

MATLAB changes to fminbnd

I have a program using fminbnd and it works perfectly on my new version of MATLAB. Some of my colleagues have an older version, R2010b, and there it yields an error message. Have there been any major changes to this function over the past two years?
Do you use the LargeScale algorithm? (the default case)
It was improved in R2011b:
Enhanced Robustness in Nonlinear Solvers
More solvers now attempt to recover from errors in the evaluation of
objective functions and nonlinear constraint functions during
iteration steps, or, for some algorithms, during gradient estimation.
The errors include results that are NaN or Inf for all solvers, or
complex for fmincon and fminunc. If there is such an error, the
algorithms attempt to take different steps. The following solvers are enhanced:
[...]
fminunc LargeScale algorithm
[...]
See the release notes.

Looking for ODE integrator/solver with a relaxed attitude to derivative precision

I have a system of (first-order) ODEs whose derivatives are fairly expensive to compute.
However, the derivatives can be computed considerably more cheaply to within given error bounds, either because the derivatives are computed from a convergent series and bounds can be placed on the maximum contribution from dropped terms, or through use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers that can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE, and the GNU Scientific Library's solver.)
That is, for all the solvers I've seen, you provide a derivs callback that accepts a t and an array of x and returns an array of dx/dt; ideally I'm looking for one that gives the callback t, xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person wanting something like this) would be greatly appreciated.
Roughly speaking, if you know f' up to an absolute error eps and integrate from x0 to x1, the error in the integral caused by the error in the derivative will be <= eps*(x1 - x0). There is also discretization error, coming from your ODE solver. Consider how big eps*(x1 - x0) can be for you, and feed the ODE solver f' values computed with error <= eps.
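A quick numerical check of that bound, with a made-up scalar ODE and an artificial derivative error of known size:

eps_d = 1e-6;                                 % bound on the derivative error
f      = @(t, x) cos(t);                      % exact right-hand side
f_pert = @(t, x) cos(t) + eps_d*sin(50*t);    % |f_pert - f| <= eps_d
opts = odeset('RelTol', 1e-10, 'AbsTol', 1e-12);
[t1, x1] = ode45(f,      [0 10], 0, opts);
[~,  x2] = ode45(f_pert, t1,     0, opts);    % same output times
max(abs(x2 - x1))   % stays below eps_d*(10 - 0) = 1e-5, as predicted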
I'm not sure this is a well-posed question.
In many algorithms, e.g., nonlinear equation solving f(x) = 0, an estimate of the derivative f'(x) is all that's required for use in something like Newton's method, since you only need to move in the "general direction" of the answer.
However, in this case, the derivative is a primary part of the (ODE) equation you're solving - get the derivative wrong, and you'll just get the wrong answer; it's like trying to solve f(x) = 0 with only an approximation for f(x).
As another answer has suggested, if you set up your ODE as f(x) + g(x), where g(x) is an error term, you should be able to relate errors in your derivatives to errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably the key. My derivs function basically returns intervals, and an integrator using interval arithmetic would maintain the x's as intervals. All I'm interested in is obtaining a sufficiently small error bound on the xs at a final t. An obvious approach would be to iteratively re-integrate, improving the quality of the sample that introduces the most error on each iteration, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" with regard to overall efficiency). I suspect adaptive step-size control could fit in nicely in such a scheme, with the step size chosen to keep the "implicit" discretization error comparable to the "explicit" error (i.e., the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant stuff (VNODE and its references in particular).
If you have a stiff system, you will be using some form of implicit method in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you strict quadratic convergence on the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large) you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
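In MATLAB terms, handing a deliberately rough Jacobian to a stiff solver looks like the sketch below; the system and the dropped coupling are invented for illustration:

f = @(t, y) [-1000*y(1) + y(2); y(1) - 2*y(2)];   % hypothetical stiff system
Japprox = @(t, y) [-1000 0; 0 -2];                % off-diagonal coupling dropped
opts = odeset('Jacobian', Japprox, 'RelTol', 1e-6);
[t, y] = ode15s(f, [0 1], [1; 1], opts);
% ode15s only uses the Jacobian inside its implicit iterations, so an
% inexact one costs extra iterations or smaller steps, not a wrong answer.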
Have you looked into using odeset? It allows you to set options for an ODE solver; you then pass the options structure as the fourth argument to whichever solver you call. The error-control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I can come up with, having last used the MATLAB ODE functions years ago.
In addition: For the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need error limits to be passed from the solver?
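For reference, a minimal sketch of the relevant options (odefun and y0 stand in for your own system):

opts = odeset('RelTol', 1e-3, 'AbsTol', 1e-6, 'NormControl', 'on');
[t, y] = ode45(@odefun, [0 10], y0, opts);
% These control the solver's per-step local error estimate; they don't
% bound errors you commit inside the derivative evaluation itself.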
Not sure I'm contributing much, but in the pharma modeling world we use LSODE, DVERK, and DGPADM. DVERK is a nice, fast, simple order-5/6 Runge-Kutta solver. DGPADM is a good matrix-exponential solver. If your ODEs are linear, the matrix exponential is best by far. But your problem is a little different.
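For the linear case, MATLAB's built-in expm plays the same role as DGPADM; a toy sketch with an invented system:

A  = [0 1; -4 -0.5];          % hypothetical linear system x' = A*x
x0 = [1; 0];
t  = 2.5;
x_t = expm(A * t) * x0;       % exact solution x(t) = expm(A*t)*x0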
By the way, the t argument is only there for generality. I've never seen an actual system that depended on t.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, I seem to recall there are special methods for that, based on conic-section curves.
Look into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions, at the cost of two evaluations of each of the above functions per element. A sketch of the linear-basis version follows.
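A minimal sketch of the linear-basis/midpoint-quadrature assembly in MATLAB, with invented coefficient functions and homogeneous Dirichlet boundary conditions; the diffusion term is handled in weak form, with k frozen at each element midpoint:

k = @(x) 1 + 0*x;                   % hypothetical coefficients
b = @(x) 0.5 + 0*x;
f = @(x) sin(pi*x);

n = 100;                            % number of elements on [0, 1]
x = linspace(0, 1, n+1)';           % node coordinates
A = zeros(n+1);                     % dense for clarity; use sparse for large n
F = zeros(n+1, 1);

for e = 1:n                         % one k, b, f evaluation per element
    h  = x(e+1) - x(e);
    xm = (x(e) + x(e+1)) / 2;       % the single quadrature point
    Ke = k(xm)/h * [1 -1; -1 1];    % diffusion (weak form)
    Be = b(xm)/2 * [-1 1; -1 1];    % convection, linear basis
    Fe = f(xm)*h/2 * [1; 1];        % load
    idx = [e, e+1];
    A(idx, idx) = A(idx, idx) + Ke + Be;
    F(idx) = F(idx) + Fe;
end

A(1,:) = 0; A(1,1) = 1; F(1) = 0;           % u(0) = 0
A(end,:) = 0; A(end,end) = 1; F(end) = 0;   % u(1) = 0

u = A \ F;
plot(x, u)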