Explain Matlab ode45 output. Is ode45 an iterative algorithm? - matlab

I tried to use ode45 to solve an equation and got output like the following. I get the idea that it is estimating using nearby points (as explained here https://www.mathworks.com/videos/solving-odes-in-matlab-6-ode45-117537.html). By my understanding, it should solve the equation in one round of computation, but the output looks like ode45 is an iterative algorithm (it generates output that repeats the '... steps ... failed attempt ... function evaluations' lines over and over again). If it is iterative, could you give some detail or references? Thanks!

ode45 is an iterative, adaptive ODE solver. That is, it uses a 5th-order (FSAL) method to propose an update using some stepsize h. Then it does the same again, but now with a 4th-order method, and compares the two updates to one another. If the difference is smaller than some local tolerance, it accepts the proposed update; if the difference is larger than the local tolerance, the update is rejected and the stepsize is lowered (in some smart way).
To reduce the cost of using both a 4th- and a 5th-order method, the two methods use (roughly) the same function evaluations.
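For illustration, here is a minimal sketch of that accept/reject loop in MATLAB. It uses a simple embedded Euler/Heun pair of orders 1 and 2 instead of ode45's actual Dormand-Prince 4/5 pair, and the test equation and tolerances are made up, purely to show the mechanism:

    f    = @(t, y) -2*y;          % example ODE: y' = -2y, y(0) = 1
    t    = 0;  y = 1;  h = 0.5;  tEnd = 5;
    rtol = 1e-6;  atol = 1e-9;
    while t < tEnd
        h  = min(h, tEnd - t);
        k1 = f(t, y);                   % function evaluation shared by both methods
        k2 = f(t + h, y + h*k1);
        yLow  = y + h*k1;               % 1st-order (Euler) proposal
        yHigh = y + h*(k1 + k2)/2;      % 2nd-order (Heun) proposal, reuses k1
        err = abs(yHigh - yLow);        % local error estimate: difference of the two
        tol = atol + rtol*abs(yHigh);   % local tolerance
        if err <= tol                   % accept: advance using the higher-order value
            t = t + h;  y = yHigh;
        end                             % otherwise reject: t and y stay, retry with smaller h
        h = h * min(5, max(0.2, 0.9*(tol/max(err, eps))^0.5));  % adapt h (ode45 uses exponent 1/5)
    end

ode45 counts the accepted steps, the rejected ("failed") attempts, and the function evaluations along the way, which is what the printed statistics refer to when the 'Stats' option is turned on.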
As for your output: as also noted by @LutzL, it is not the standard output, which might point to an error in your code.

Related

Solving Delayed Differential Equations using ode45 Matlab

I am trying to solve a DDE using ode45 in Matlab. My question is about the way I am solving this equation. I don't know whether my approach is right or whether I should use dde23 instead.
I have the following equation:
xdot(t) = A x(t) + B U(t-td) + E(t), where U(t-td) = K x(t-td) and K is a constant.
Normally, when there is no delay in my equation, I solve it using ode45. Now, with the delay, I am again using ode45 to get the result: I have the exact value of U(t-td) at each step, substitute it, and solve the equation.
Is my solution correct or should I use dde23?
You have two problems here:
ode45 is a solver with adaptive step size. This means that your sampling steps are not necessarily equivalent to the actual integration steps. Instead, the integrator splits a sampling step into several integration steps as needed to achieve the desired accuracy (see this question on Scientific Computing for more information).
As a consequence, you may not be providing the correct delayed value of U at each step of the integration, even if you believe you are.
However, if your sampling steps are sufficiently small, you will indeed have one time step per sampling step. The reason for this is that you effectively disable the adaptive integration by making your time step smaller than needed (and thus waste computation time).
Higher-order Runge–Kutta methods such as ode45 make use of the value of the derivative not only at each integration step, but also at points in between (and no, they cannot provide a usable solution value for these in-between time points).
For example, suppose that your delay and integration step are td=16. To make the integration step from t=32 to t=48, you need to evaluate U not only at t = 32−16 = 16 and t = 48−16 = 32, but also at t = 40−16 = 24. Now, you might say: Okay, let’s integrate such that we have an integration step at all those time points. But for these integration steps, you again need steps in the middle, e.g., if you want to integrate from t=16 to t=24, you need to evaluate U at t=0, t=4, and t=8. You get a never-ending cascade of smaller and smaller time steps.
Due to problem 2, it is impossible to provide the exact states from the past with anything but a single-stage integrator such as the explicit Euler method, and using one is probably not a good idea in your case. For this reason, it is inevitable to use some sort of interpolation to obtain past values if you want to integrate DDEs with a higher-order integrator. dde23 does this in a sophisticated way, using a good interpolation.
If you only provide U at the integration steps, you are essentially performing a piecewise-constant interpolation, which is the worst possible interpolation and therefore requires you to use very small integration steps. While you can do this if you really want to, dde23 with its more sophisticated piecewise cubic Hermite interpolation can work with much larger time steps and integrate adaptively, and therefore will be much faster. Also, it’s less likely that you somehow make a mistake. Finally, dde23 can deal with very small delays (smaller than the integration step), if you’re into that sort of thing.
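As a rough illustration (the matrices, delay, history, and input below are placeholders, not taken from your actual system), the equation from the question could be handed to dde23 along these lines:

    A  = -1;  B = 1;  K = -0.5;  td = 0.3;        % scalar stand-ins for A, B, K and the delay
    E  = @(t) sin(t);                             % external input E(t)
    ddefun  = @(t, x, Z) A*x + B*K*Z(:,1) + E(t); % Z(:,1) holds x(t - td)
    history = @(t) 0;                             % state for t <= 0
    sol = dde23(ddefun, td, history, [0 10]);     % dde23 handles the interpolation of past values
    plot(sol.x, sol.y)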

about backpropagation and sigmoid function

I have been reading this ebook about ANNs: https://www4.rgu.ac.uk/files/chapter3%20-%20bp.pdf
and have a question about the effect of the sigmoid function when calculating ErrorB. The text says that if I have a threshold neuron I can use:
Target-Output
but because I have a sigmoid function involved I should add:
Output(1-Output)
and end up with:
ErrorB=OutputB(1-OutputB)(TargetB-OutputB)
I mean, why should I add the O(1-O) part? I have tried different values, but I really do not get the intuition for why it should be that way.
Any help?
Thanks
As Kelu stated, that part of the equation is based on derivatives of your transfer function (in this case sigmoid). To understand why you need derivatives, you need to understand how the delta rule works(*):
Your overall goal is to minimize the error in the network's output using gradient descent. Gradient descent itself tries to find a minimum in the error function (E) by taking steps proportional to the negative of the gradient. A gradient is simply the derivative, and the reason you're working with derivatives mathematically is that gradients point in the direction of the greatest rate of increase of the (error) function. Conclusion: since you want to minimize the error, you go the opposite way of the gradient.
This is the intuitive reason for using gradients. If you want the mathematical derivation, you should check this basic wiki article (additional comment as it's not mentioned anywhere: the g'(x) in the article is the first derivative of g(x))
Other transfer functions can be used, e.g. linear (in this case there is no g'(x) term as the derivative is simply a constant) or hyperbolic tangent in which case the derivative is something different again.
(*) The equation is derived from minimizing the squared error of the output, E = 1/2 (TargetB - OutputB)^2 with OutputB = sigmoid(net): differentiating with respect to net gives dE/dnet = -(TargetB - OutputB) * OutputB(1 - OutputB), which is (up to sign) the ErrorB expression above.
It is like that because Output(1-Output) is the derivative of the sigmoid function (simplified). In general, this part is based on derivatives: you can try different functions (other than the sigmoid), but then you have to use their derivatives too to get proper learning.
If you want you can take a look at my implementation (it's far from perfect, but maybe you will get some idea from it ;)), it's a simple project I made on my university - https://github.com/kelostrada/neuron-network
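As a small numerical check (with made-up values, not taken from the book) that Output(1-Output) really is the derivative of the sigmoid, and of the resulting ErrorB:

    sigmoid = @(x) 1 ./ (1 + exp(-x));
    x = 0.7;                                                    % arbitrary pre-activation value
    s = sigmoid(x);
    dNumeric  = (sigmoid(x + 1e-6) - sigmoid(x - 1e-6)) / 2e-6; % central finite difference
    dAnalytic = s * (1 - s);                                    % the Output(1-Output) term
    fprintf('numeric: %.6f  analytic: %.6f\n', dNumeric, dAnalytic)

    Target = 1;  Output = s;
    ErrorB = Output * (1 - Output) * (Target - Output)          % the rule from the text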

using "Refine" option in Matlab's ode45

I am trying to use ode45 in Matlab and want to fix the number of points that Matlab uses (the number of time steps). Using the 'Refine' option in ode45 does not seem to help: for instance, if I set 'Refine' to 10, Matlab returns an array of 101 points.
Changing 'RelTol' and 'AbsTol' does not help either. I know that it is possible to write tspan as [0,t1,t2,t3,...,tn] and that solves this issue, but I'd like to fix the number of points via the 'Refine' option.
Perhaps you misunderstand what the 'Refine' option actually does. From the documentation for odeset:
Refine — If Refine is 1, the solver returns solutions only at the end of each time step. If Refine is n >1, the solver subdivides each time step into n smaller intervals and returns solutions at each time point. Refine does not apply when length(tspan)>2 or the ODE solver returns the solution as a structure.
In other words, setting 'Refine' to 10 does not guarantee that you'll get 10 output points but rather that you'll get 10 output points per integration time step. In the case of an adaptive step size method like ode45, the solver chooses how big the steps are based on many criteria. If you want a given number of output points you must specify fixed time steps as you've already done via tspan. The linspace function might be helpful to you.
Another possibility is that you're not actually applying your options. Simply calling odeset is not sufficient. You must also remember to pass the output into ode45.
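A minimal sketch of the two approaches (the ODE and the numbers are placeholders); note how the options structure has to be passed into ode45:

    f = @(t, y) -y;

    % 'Refine' densifies the output per accepted integration step; the total count still varies:
    opts = odeset('Refine', 10);
    [t1, y1] = ode45(f, [0 10], 1, opts);   % opts passed as the fourth argument
    numel(t1)                               % roughly 10 points per step, however many steps ode45 takes

    % For an exact number of output points, list them explicitly in tspan:
    tspan = linspace(0, 10, 101);           % exactly 101 requested output times
    [t2, y2] = ode45(f, tspan, 1);
    numel(t2)                               % 101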

maximum of a polynomial

I have a polynomial of order N (where N is even). This polynomial tends to minus infinity as x goes to plus or minus infinity (thus it has a maximum). What I am doing right now is taking the derivative of the polynomial using polyder, then finding the roots of the resulting (N-1)th-order polynomial with Matlab's roots function, which returns N-1 solutions. Then I pick the real root that actually maximizes the polynomial. The problem is that I update my polynomial a lot, and at each time step I use the above procedure to find the maximizer. As a result, the roots function takes too much computation time and makes my application slow. Is there a way, either in Matlab or as a proposed algorithm, to do this maximization in a computationally efficient fashion (i.e., finding just one solution instead of N-1 solutions)? Thanks.
Edit: I would also like to know whether there is a routine in Matlab that returns only the real roots, instead of roots, which returns all real and complex ones.
I think that you are probably out of luck. If the coefficients of the polynomial change at every time step in an arbitrary fashion, then ultimately you are faced with a distinct and unrelated optimisation problem at every stage. There is insufficient information available to consider calculating just a subset of roots of the derivative polynomial: how could you know which derivative root provides the maximum stationary point of the polynomial without comparing the function value at ALL of the derivative roots? If your polynomial coefficients were being perturbed at each step by only a (bounded) small amount or in a predictable manner, then it is conceivable that you could try something iterative to refine the solution at each step (for example, something crude such as using your previous roots as the starting point for a new set of Newton iterations to identify the updated derivative roots), but the question does not suggest that this is in fact the case, so I am just guessing. I could be completely wrong here, but you might just be out of luck in getting something faster unless you can provide more information or have some kind of relationship between the polynomials generated at each step.
There is a file exchange submission by Steve Morris which finds all real roots of functions on a given interval. It does so by interpolating the polynomial by a Chebychev polynomial, and finding its roots.
You can modify the eig evaluation of the companion matrix in there, to eigs. This allows you to find only one (or a few) roots and save time (there's a fair chance it's also possible to compute the roots or extrema of a Chebychev analytically, although I could not find a good reference for that (or even a bad one for that matter...)).
Another attempt that you can make in speeding things up, is to note that polyder does nothing more than
Pprime = (numel(P)-1:-1:1) .* P(1:end-1);
for your polynomial P. Also, roots does nothing more than find the eigenvalues of the companion matrix, so you could find these eigenvalues yourself, which avoids a call to roots. Both of these can be beneficial, because calls to non-builtin functions inside a loop prevent Matlab's JIT compiler from translating the loop to machine language, which could otherwise give you a large speed gain (factors of 100 or more are not uncommon).
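A hedged sketch of how that might look (the polynomial is just an example; coefficients are in MATLAB's usual descending order):

    P = [-1 0 3 2 -5];                          % example: P(x) = -x^4 + 3x^2 + 2x - 5
    Pprime = (numel(P)-1:-1:1) .* P(1:end-1);   % same as polyder(P)

    c = Pprime / Pprime(1);                     % make the derivative monic
    C = diag(ones(numel(c)-2, 1), -1);          % companion matrix of the derivative
    C(1, :) = -c(2:end);
    r = eig(C);                                 % equivalent to roots(Pprime)

    r = real(r(abs(imag(r)) < 1e-10));          % keep the (numerically) real critical points
    [~, idx] = max(polyval(P, r));              % evaluate P and pick the maximizer
    xmax = r(idx)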

Looking for ODE integrator/solver with a relaxed attitude to derivative precision

I have a system of (first-order) ODEs whose derivatives are fairly expensive to compute.
However, the derivatives can be computed considerably cheaper to within given error bounds, either because the derivatives are computed from a convergent series and bounds can be placed on the maximum contribution from dropped terms, or through use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers which can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE and the Gnu Scientific Library's solver).
I.e., for all the solvers I've seen, you provide a derivs callback function accepting a t and an array of x, and returning an array of dx/dt back; but ideally I'm looking for one which gives the callback t, xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations possible.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person wanting something like this) would be greatly appreciated.
Roughly speaking, if you know f' up to absolute error eps, and integrate from x0 to x1, the error of the integral coming from the error in the derivative is going to be <= eps*(x1 - x0). There is also discretization error, coming from your ODE solver. Consider how big eps*(x1 - x0) can be for you and feed the ODE solver with f' values computed with error <= eps.
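A rough numerical illustration of that bound (the ODE, the perturbation, and the value of eps are all invented for the example):

    f     = @(t, y) cos(t);                      % exact derivative
    epsd  = 1e-4;                                % allowed absolute error in the derivative
    fpert = @(t, y) cos(t) + epsd*sin(100*t);    % derivative perturbed within +/- epsd
    tspan = [0 10];
    opts  = odeset('RelTol', 1e-10, 'AbsTol', 1e-12);  % keep the discretization error small
    [~, y1] = ode45(f,     tspan, 0, opts);
    [~, y2] = ode45(fpert, tspan, 0, opts);
    abs(y1(end) - y2(end))                       % stays below epsd*(x1 - x0) = 1e-3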
I'm not sure this is a well-posed question.
In many algorithms, e.g, nonlinear equation solving, f(x) = 0, an estimate of a derivative f'(x) is all that's required for use in something like Newton's method since you only need to go in the "general direction" of the answer.
However, in this case, the derivative is a primary part of the (ODE) equation you're solving - get the derivative wrong, and you'll just get the wrong answer; it's like trying to solve f(x) = 0 with only an approximation for f(x).
As another answer has suggested, if you write your approximate derivative as f(x) + g(x), where g(x) is an error term, you should be able to relate errors in your derivatives to errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably key. My derivs function basically returns intervals. An integrator using interval arithmetic would maintain the x's as intervals. All I'm interested in is obtaining a sufficiently small error bound on the xs at a final t. An obvious approach would be to iteratively re-integrate, improving the quality of the sample that introduces the most error each iteration, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" with regards to overall efficiency). I suspect adaptive step size control could fit in nicely in such a scheme, with the step size chosen to keep the "implicit" discretization error comparable with the "explicit" error (i.e., the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant stuff (VNODE and its references in particular).
If you have a stiff system, you will be using some form of implicit method in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you strict quadratic convergence on the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large) you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
Have you looked into using odeset? It allows you to set options for an ODE solver, then you pass the options structure as the fourth argument to whichever solver you call. The error control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I could come up with, having last used the MATLAB ODE functions years ago.
In addition: For the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need error limits to be passed from the solver?
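For what it's worth, the pattern looks like this (the solver, tolerance values, and ODE are placeholders; the option names are the real ones from odeset):

    opts = odeset('RelTol', 1e-2, 'AbsTol', 1e-4);  % looser than the defaults of 1e-3 and 1e-6
    f    = @(t, y) -y + sin(t);                     % stand-in for the expensive derivative
    [t, y] = ode45(f, [0 10], 1, opts);             % options go in as the fourth argument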
Not sure I'm contributing much, but in the pharma modeling world, we use LSODE, DVERK, and DGPADM. DVERK is a nice fast simple order 5/6 Runge-Kutta solver. DGPADM is a good matrix-exponent solver. If your ODEs are linear, matrix exponent is best by far. But your problem is a little different.
BTW, the T argument is only in there for generality. I've never seen an actual system that depended on T.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, it seems to me I've heard of special methods used for that, based on conic-section curves.
Check into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions, with two evaluations of each of the above functions per element.
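A minimal sketch of such an assembly in MATLAB (the coefficients, domain, and homogeneous Dirichlet boundary conditions on [0,1] are all invented for the example). Each element touches k, b, and f exactly once, at its midpoint:

    k = @(x) 1 + 0*x;                      % example coefficient functions
    b = @(x) 0.5 + 0*x;
    f = @(x) sin(pi*x);

    n = 50;                                % number of elements
    x = linspace(0, 1, n+1)';              % node coordinates
    A = sparse(n+1, n+1);                  % global system matrix
    F = zeros(n+1, 1);                     % global load vector

    for e = 1:n
        h = x(e+1) - x(e);
        m = (x(e) + x(e+1)) / 2;           % element midpoint: one evaluation of k, b, f here
        Ke = k(m)/h   * [1 -1; -1 1];      % diffusion term, int k u' v' dx
        Be = b(m)/2   * [-1 1; -1 1];      % convection term, int b u' v dx (hat functions = 1/2 at m)
        Fe = f(m)*h/2 * [1; 1];            % load term, int f v dx
        idx = [e, e+1];
        A(idx, idx) = A(idx, idx) + Ke + Be;
        F(idx)      = F(idx) + Fe;
    end

    free = 2:n;                            % impose u(0) = u(1) = 0
    u = zeros(n+1, 1);
    u(free) = A(free, free) \ F(free);
    plot(x, u)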