I asked a question regarding how matlabFunction works (here), which spurred a question related to the ode45 function. Using the example I gave in my post on matlabFunction, when I pass this function through ode45 with some initial conditions, does ode45 read the derivative -s.*(x-y) as approximating the unknown function x, with the same being said of -y+r.*x-x.*z and y, and of -b.*z+x.*y and z? More specifically, we have
syms t x y z s r b
matlabFunction([s*(y-x); -x*z+r*x-y; x*y-b*z], ...
    'vars', {t,[x;y;z],[s;r;b]}, 'file', 'Example2');
and then we use
[tout,yout] = ode45(@(t,Y) Example2(t,Y,[10;5;8/3]), [0,50], [1;0;0]);
to approximately solve for the unknown functions x, y, and z. How does ode45 know to take the functions, which are defined as the variables [x;y;z], and approximate them? I have a feeling that my question is rather vague, but I would just like to know the connection between these things.
The semantics of your commands is that x'(t)=s*(y(t)-x(t)), y'(t)=-x(t)*z(t)+r*x(t)-y(t), and z'(t)=x(t)*y(t)-b*z(t), with the constants you have given for s, r, and b. MATLAB will follow the commands you have given and compute a numerical approximation to this system. I am not entirely sure what you mean by your question,
How does ode45 know to take the functions, […] and approximate them?
That is exactly what you told it to do, and it is the only thing ode45 ever does. (By the time you call ode45, the names x, y, z are completely irrelevant, by the way. The function only cares about a vector of values.) If you are asking about the algorithm behind approximating the solution of an ODE given this way, you can easily find any number of books and lectures on the topic using Google or any other search engine.
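To make this concrete: the file that matlabFunction generates is just an ordinary function of a scalar t and a state vector. A hand-written equivalent might look like the following sketch (the name lorenzRHS is mine; it shows roughly what the generated Example2 contains):

function dY = lorenzRHS(~, Y, p)
    % Y is just a column vector; Y(1), Y(2), Y(3) play the roles of x, y, z
    s = p(1); r = p(2); b = p(3);
    dY = [ s*(Y(2) - Y(1));
          -Y(1)*Y(3) + r*Y(1) - Y(2);
           Y(1)*Y(2) - b*Y(3) ];
end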
You may be interested in the function odeToVectorField, by the way, which simplifies getting these functions from a differential equation written in more traditional form.
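For example (essentially the van der Pol example from the odeToVectorField documentation):

syms y(t)
V = odeToVectorField(diff(y,2) == (1 - y^2)*diff(y) - y);  % van der Pol equation
F = matlabFunction(V, 'vars', {'t','Y'});
[tout, Yout] = ode45(F, [0 20], [2; 0]);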
My task is to model a certain physical problem and use MATLAB to solve its differential equations. I made the model, but it seems far more complex than what I've learned so far, so I have no idea how to solve this.
The black color means it's a constant
I assume that by "solve" you seek a closed-form solution of the form x(t) = ..., z(t) = ... Unfortunately, it's very likely you cannot solve this system of differential equations. Only very specific canonical systems actually have a closed-form solution, and they are the simplest ones (few terms and dependent variables). See Wikipedia's entry for Ordinary Differential Equations, in particular the section Summary of exact solutions.
Nevertheless, the procedure for attempting to solve with Matlab's Symbolic Math Toolbox is described here.
If instead you were asking for numerical integration, then I will give you some pointers, but you must carry out the math:
Convert the second-order system to a first-order system by using the substitution w(t) = dx/dt, allowing you to replace the d2x/dt2 term by dw/dt. Example here.
Read the documentation for ode15i and implement your transformed model as an implicit differential equation system (a minimal sketch follows this list).
N.B. You must supply numerical values for your constants.
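For instance, a sketch of steps 1 and 2 for a generic damped oscillator m*x'' + c*x' + k*x = 0, with made-up constants (your model will differ):

m = 1;  c = 0.5;  k = 4;                          % made-up numerical constants
res = @(t, Y, Yp) [Yp(1) - Y(2);                  % the substitution: x' - w = 0
                   m*Yp(2) + c*Y(2) + k*Y(1)];    % m*w' + c*w + k*x = 0, written implicitly
Y0  = [1; 0];                                     % initial x and w
Yp0 = [0; -k/m];                                  % initial derivatives
[Y0, Yp0] = decic(res, 0, Y0, [1 1], Yp0, []);    % make the initial conditions consistent
[t, Y] = ode15i(res, [0 20], Y0, Yp0);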
I am studying Stochastic calculus, and occasionally we need to compute an integral (from -infinity to +infinity) for some complex distribution. In this case, it was
with the answer on the right. This is the code I put into Matlab (and I have the symbolic math toolbox), which Matlab simply cannot process:
>> syms x t
>> f = exp(1+2*x)*(1/((2*pi*t)^0.5))*exp(-(x^2)/(2*t))
>> int(f,-inf,inf)
ans =
-((2^(1/2)*pi^(1/2)*exp(2*t + 1)*limit(erf((2^(1/2)*((x*1i)/t - 2i))/(2*(-1/t)^(1/2))), x, -Inf)*1i)/(2*(-1/t)^(1/2)) - (2^(1/2)*pi^(1/2)*exp(2*t + 1)*limit(erf((2^(1/2)*((x*1i)/t - 2i))/(2*(-1/t)^(1/2))), x, Inf)*1i)/(2*(-1/t)^(1/2)))/(2*pi*t)^(1/2)
This answer at the end looks like nonsense, while Wolfram (via their free tool) gives me the answer that the picture above has. Am I missing something fundamental about doing such integrations in Matlab that the basic MathWorks pages don't cover? Where am I going wrong?
In order to explain what is happening, we need some theory:
Symbolic systems such as Matlab or Mathematica calculate integrals symbolically using the Risch algorithm (yes, there is a method for mechanically calculating integrals, just as there is for derivatives).
However, the Risch algorithm works differently from applying derivative rules. Strictly speaking, it is not an algorithm but a semi-algorithm; that is, it is not a deterministic one (as algorithms are).
This (semi-)algorithm makes a series of transformations on the input expression (the one to be integrated), and at a specific point it must ask whether the transformed expression equals zero, because if it is zero, it cannot continue (the input is not integrable using a finite set of terms).
The problem (and the reason for the "semi-algorithmicity") is that the (apparently simple) equation
E = 0
is undecidable (this is also called the constant problem). It means that no formal method can exist that solves the constant problem for every expression E. Of course, we know how to solve the constant problem for specific forms of the expression E (e.g., polynomials), but it is impossible to solve it in the general case.
It also means that the Risch algorithm cannot be perfect (able to solve every integral that is integrable in finite terms). In other words, the Risch algorithm will be as powerful as our ability to solve the constant problem for as many forms of the expression E as we can, but with no hope of solving it in the general case.
Different symbolic systems have similar but different methods for trying to solve equations (and therefore the constant problem), which explains why some of them can "solve" different sets of integrals than others.
Generalizing: because no symbolic system will ever be able to solve the constant problem in the general case, none will ever be able to solve every integral that is integrable in finite terms.
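As a practical corollary for the integral in the question: giving the system extra information often lets it decide the zero/sign questions it would otherwise get stuck on. Declaring t positive (safe here, since it plays the role of a variance) should let MATLAB evaluate the integral cleanly; a sketch, assuming a reasonably recent Symbolic Math Toolbox:

syms x
syms t positive                  % resolves the (-1/t)^(1/2) branch ambiguity seen in the output above
f = exp(1+2*x)*(1/sqrt(2*pi*t))*exp(-x^2/(2*t));
simplify(int(f, x, -inf, inf))   % expected: exp(2*t + 1)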
The second parameter of int() needs to be the variable you're integrating over (which looks like t in this case):
syms x t
f = exp(1+2*x)*(1/((2*pi*t)^0.5))*exp(-(x^2)/(2*t))
int(f, t, -inf, inf) % <- integrate over t
Is there a way to check if a rational function is a polynomial in Matlab?
I have a big rational function, call it R, that I am trying to show is a polynomial. I've tried the simplify and simplifyFraction functions and the following (not very effective) procedure:
Split it into numerator and denominator:
[num,den] = numden(R);
Calculate the roots of both polynomials:
r_num = roots(sym2poly(num));
r_den = roots(sym2poly(den));
Check if all the elements of r_den belong to r_num:
Because of numerical imprecision, I haven't been able to come up with a reliable way of doing this.
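One symbolic route that avoids root finding entirely is to perform the polynomial division symbolically and test whether the remainder vanishes. A sketch using the Symbolic Math Toolbox (the toy R here stands in for your big rational function):

syms x
R = (x^3 - 1)/(x - 1);                         % toy stand-in for your big rational function
[num, den] = numden(R);
[q, r] = quorem(num, den, x);                  % polynomial division: num = q*den + r
isPolynomial = isequal(simplify(r), sym(0))    % true iff den divides num exactly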
This is a not-so-easy problem, and finding the greatest common divisor of polynomials is a very active area of research. There are tons of publications, and you can find them online.
The main problem is that root finding is an ill-conditioned problem. Recently, a few experts have been trying to combine numerical computation with symbolic representations. If you google for the ERES method, you will find an entry point, together with the thesis of Christou.
This problem is particularly important for signals and control people because of transfer-function representations and pole-zero cancellations. Matlab goes a long way to make sure that all is OK, and a minimal neighborhood of each pole/zero is accepted as a cancellation.
So as a quick remedy, convert your polynomial coefficients to 1D vectors, say a and b, and use minreal(tf(a,b)). Then you can extract the num and den of that transfer representation.
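A sketch of that remedy, assuming the Control System Toolbox is available and num/den are the symbolic parts from numden above:

a = sym2poly(num);  b = sym2poly(den);   % coefficient vectors
G = minreal(tf(a, b));                   % cancels pole/zero pairs within a tolerance
[~, denv] = tfdata(G, 'v');              % denominator after cancellation
isPolynomial = isscalar(denv)            % a scalar denominator means R reduced to a polynomial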
Shameless plug: I am the author of a Python 3 library in which I also implemented a system-theoretical approach. Here and here are the full implementation details, with citations, for the LCM and GCD operations.
I have confronted an equation containing Bessel functions of the first type on one side and modified Bessel functions of the second type on the other. I want to know its exact solutions (values of u). The equation is as follows:
u*besselj(s-1,u)/besselj(s,u) = -w*besselk(s-1,w)/besselk(s,w)
where s is an arbitrary integer number, for example 2.
w can be written as a function of u:
w=sqrt(1-u^2);
and so this equation has only one variable: u
I'm new to MATLAB. I have no idea about how I should approach this. Could anyone please help me?
A quick thing to try may be the fzero function, a generic nonlinear zero finder. To learn how to use it, you can work through the examples given in the documentation. Then rewrite your equation in the form F(u) = 0 so it can be passed to fzero, and see what you get.
(Note: I haven't tried this, but I just noticed there were no replies yet so maybe it's better than nothing.)
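For what it's worth, a minimal sketch of that approach (the order s = 2 is from the question; note that w = sqrt(1-u^2) is only real for |u| <= 1, and the starting guess 0.5 is arbitrary):

s = 2;
w = @(u) sqrt(1 - u.^2);
F = @(u) u.*besselj(s-1,u)./besselj(s,u) + w(u).*besselk(s-1,w(u))./besselk(s,w(u));
u0 = fzero(F, 0.5)                       % a zero of F is a solution of the original equation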
I have a system of (first order) ODEs with fairly expensive to compute derivatives.
However, the derivatives can be computed considerably cheaper to within given error bounds, either because the derivatives are computed from a convergent series and bounds can be placed on the maximum contribution from dropped terms, or through use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers which can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE, and the GNU Scientific Library's solver.)
I.e., for all the solvers I've seen, you provide a derivs callback accepting a t and an array of x, and returning an array of dx/dt back; but ideally I'm looking for one which gives the callback t, xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations possible.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person wanting something like this) would be greatly appreciated.
Roughly speaking, if you know f' up to absolute error eps and integrate from x0 to x1, the error in the integral coming from the error in the derivative will be <= eps*(x1 - x0); for example, with eps = 1e-6 over an interval of length 10, that contribution is at most 1e-5. There is also discretization error, coming from your ODE solver. Consider how big eps*(x1 - x0) can be for you, and feed the ODE solver f' values computed with error <= eps.
I'm not sure this is a well-posed question.
In many algorithms, e.g, nonlinear equation solving, f(x) = 0, an estimate of a derivative f'(x) is all that's required for use in something like Newton's method since you only need to go in the "general direction" of the answer.
However, in this case, the derivative is a primary part of the (ODE) equation you're solving - get the derivative wrong, and you'll just get the wrong answer; it's like trying to solve f(x) = 0 with only an approximation for f(x).
As another answer has suggested, if you set up your ODE as f(x) + g(x), where g(x) is an error term, you should be able to relate errors in your derivatives to errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably key: my derivs function basically returns intervals, so an integrator using interval arithmetic would maintain the x's as intervals too. All I'm interested in is obtaining a sufficiently small error bound on the xs at a final t. An obvious approach would be to iteratively re-integrate, improving the quality of the sample introducing the most error each iteration, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" as regards overall efficiency). I suspect adaptive step-size control could fit in nicely in such a scheme, with the step size chosen to keep the "implicit" discretization error comparable with the "explicit" error (i.e. the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant stuff (VNODE and its references in particular).
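To make the idea concrete, here is a toy (decidedly non-rigorous, unlike VNODE) MATLAB sketch of interval-valued Euler steps for the hypothetical scalar system x' = -x; the state and the derivative are carried as [lo, hi] pairs, and truncation error is ignored:

x = [1.0, 1.0];                          % initial state as a degenerate interval
h = 0.01;                                % step size
df = @(xi) [-xi(2), -xi(1)];             % interval enclosure of f(x) = -x
for n = 1:100
    d = df(x);
    x = [x(1) + h*d(1), x(2) + h*d(2)];  % naive interval Euler update
end
x   % approximates exp(-1); a rigorous method would also enclose the discretization error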
If you have a stiff system, you will be using some form of implicit method in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you strict quadratic convergence on the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large) you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
Have you looked into using odeset? It allows you to set options for an ODE solver, then you pass the options structure as the fourth argument to whichever solver you call. The error control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I could come up with, having last used the MATLAB ODE functions years ago.
In addition: for the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need error limits to be passed in from the solver?
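For reference, a minimal sketch of the odeset pattern (myDerivs and y0 are hypothetical placeholders):

opts   = odeset('RelTol', 1e-6, 'AbsTol', 1e-9);  % tighter error control than the defaults
[t, y] = ode45(@myDerivs, [0 50], y0, opts);      % options structure as the 4th argument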
Not sure I'm contributing much, but in the pharma modeling world, we use LSODE, DVERK, and DGPADM. DVERK is a nice, fast, simple order 5/6 Runge-Kutta solver. DGPADM is a good matrix-exponent solver. If your ODEs are linear, matrix exponent is best by far. But your problem is a little different.
BTW, the T argument is only in there for generality. I've never seen an actual system that depended on T.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, seems to me I heard of special methods used for that, based on conic-section curves.
Check into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions, with two evaluations of each of the above functions per element.
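A minimal sketch of the linear-basis/midpoint-quadrature scheme in MATLAB (the coefficients k, b, f below are made-up examples; with these choices the exact solution on [0,1] with zero boundary values is sin(pi*x)):

n  = 50;  xg = linspace(0, 1, n+1)';  h = diff(xg);
k  = @(x) 1 + 0*x;  b = @(x) 0*x;  f = @(x) pi^2*sin(pi*x);
A  = zeros(n+1);  F = zeros(n+1, 1);
for e = 1:n
    i  = [e, e+1];
    xm = (xg(e) + xg(e+1))/2;                 % the single midpoint evaluation per element
    Ke = k(xm)/h(e) * [1 -1; -1 1] ...        % diffusion contribution
       + b(xm)/2    * [-1 1; -1 1];           % advection contribution
    A(i,i) = A(i,i) + Ke;
    F(i)   = F(i) + f(xm)*h(e)/2;             % midpoint-rule load
end
A(1,:) = 0;    A(1,1) = 1;        F(1) = 0;   % Dirichlet BC u(0) = 0
A(end,:) = 0;  A(end,end) = 1;    F(end) = 0; % Dirichlet BC u(1) = 0
u = A \ F;                                    % nodal values approximating sin(pi*x)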