My task is to model a certain physical problem and use MATLAB to solve its differential equations. I made the model, but it seems far more complex than what I've learned so far, so I have no idea how to solve this.
The black color means it's a constant
I assume that by "solve" you seek a closed-form solution of the form x(t) = ..., z(t) = ... Unfortunately, it's very likely you cannot solve this system of differential equations. Only very specific canonical systems actually have a closed-form solution, and they are the simplest ones (few terms and dependent variables). See Wikipedia's entry for Ordinary Differential Equations, in particular the section Summary of exact solutions.
Nevertheless, the procedure for attempting to solve with Matlab's Symbolic Math Toolbox is described here.
If instead you were asking for numerical integration, then I will give you some pointers, but you must carry out the math:
Convert the second order system to a first order system by using a substitution w(t) = dx/dt, allowing you to replace the d2x/dt2 term by dw/dt. Example here.
Read the documentation for ode15i and implement your transformed model as an implicit differential equation system (a sketch covering both steps follows below).
N.B. You must supply numerical values for your constants.
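To make those two steps concrete, here is a minimal sketch for a placeholder second-order model (a damped, driven oscillator; your own equations and constant values would replace the ones below):

    % Hypothetical model: m*x'' + c*x' + k*x = F0*cos(wf*t)
    % Substitution: y(1) = x, y(2) = dx/dt, so dy(1)/dt = y(2) and
    % m*dy(2)/dt + c*y(2) + k*y(1) - F0*cos(wf*t) = 0.
    m = 1.0; c = 0.5; k = 2.0; F0 = 1.0; wf = 1.3;      % numerical values for the constants

    % ode15i wants the residual F(t, y, yp) = 0 of the implicit system
    res = @(t, y, yp) [ yp(1) - y(2);
                        m*yp(2) + c*y(2) + k*y(1) - F0*cos(wf*t) ];

    y0  = [0; 0];                                  % initial x and dx/dt
    yp0 = [0; F0/m];                               % derivatives consistent with the equation at t = 0
    [y0, yp0] = decic(res, 0, y0, [1 1], yp0, []); % let MATLAB enforce full consistency

    [t, y] = ode15i(res, [0 20], y0, yp0);
    plot(t, y(:,1));                               % y(:,1) is x(t)

If your system can be rearranged explicitly into dy/dt = f(t, y), the same substitution with ode45 or ode15s is simpler; ode15i is only needed when the derivatives cannot be isolated.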
How do I check whether an ODE is stiff or not? Unfortunately, my problem is large, so I cannot run different MATLAB ODE solvers and compare their finish times. I hope I can tell from the ODE itself directly. What are some characteristics of a stiff ODE?
Unfortunately, it is extremely hard to tell whether an ODE is stiff without running some integration methods, especially if it's a large problem.
You could try to reduce the problem to a simpler one, e.g. by a Taylor expansion, taking only the first 2 or 3 terms into consideration. Solving this reduced problem could give you a hint as to whether your problem is stiff or not.
Note that I assumed your problem is continuous. Then, according to the Weierstraß approximation theorem, a series of polynomials (your Taylor expansion) would converge uniformly to your problem. This means that if the problem with the Taylor series is stiff, your original problem might be too.
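One additional rule of thumb (a standard heuristic, not part of the reduction above, and no guarantee): stiffness usually shows up as widely separated time scales, i.e. eigenvalues of the Jacobian of the right-hand side whose real parts differ by orders of magnitude. You can probe that at a few representative states without running a full integration; the f and y0 below are placeholders for your own system:

    % f: right-hand side of dy/dt = f(t, y); y0: a representative state
    f  = @(t, y) [ -1000*y(1) + y(2);  y(1) - 0.01*y(2) ];   % toy placeholder system
    y0 = [1; 1];  t0 = 0;

    % Finite-difference Jacobian at (t0, y0)
    n = numel(y0);  J = zeros(n);  h = 1e-7;
    f0 = f(t0, y0);
    for j = 1:n
        e = zeros(n, 1);  e(j) = h;
        J(:, j) = (f(t0, y0 + e) - f0) / h;
    end

    lambda = eig(J);
    stiffness_ratio = max(abs(real(lambda))) / max(min(abs(real(lambda))), eps);
    % Ratios of roughly 1e3 or more across the region of interest point toward a stiff solver (ode15s, ode23s).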
I know that MATLAB can solve a system of 2 coupled PDEs using pdex4; however, is there something similar that can solve a system of more coupled PDEs, say 6? The bigger system has the same structure (dependence on partial derivatives, boundary conditions, type of initial condition, etc.) as the system of 2 equations.
Thanks.
With the FEATool Matlab FEM toolbox you can set up and solve an arbitrary number of coupled PDEs.
The function pdefun (that you pass as an input to pdepe) defines your system of equations and has the general form,
[c,f,s] = pdefun(x,t,u,dudx)
c, f, and s are coefficients in the PDE (see Eq. 1-3 here). They can be column vectors to allow for any number of coupled equations. In the pdex4 example these vectors have 2 elements; in your case they would have 6.
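As a purely illustrative sketch (the diffusion coefficients and coupling terms below are invented placeholders, not your model), a 6-equation pdefun might look like this:

    function [c, f, s] = pdefun(x, t, u, dudx)
    % u and dudx are 6-by-1: one row per coupled equation.
    D = [1; 1; 0.5; 0.5; 0.1; 0.1];     % illustrative diffusion coefficients

    c = ones(6, 1);                     % coefficients of du/dt
    f = D .* dudx;                      % flux terms
    s = [ -u(1)*u(2);                   % illustrative coupling source terms
           u(1)*u(2) - u(3);
           u(3) - u(4);
           u(4) - u(5);
           u(5) - u(6);
           u(6) ];
    end

Your icfun and bcfun then also have to return 6-element column vectors (pl, ql, pr, qr in the case of bcfun), and the call remains sol = pdepe(m, @pdefun, @icfun, @bcfun, x, t).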
MATLAB's Partial Differential Equation Toolbox allows you to solve systems of multiple equations. For coupling of source terms, you can solve the initial PDE for the source, then use that as an input for a second PDE model which will give the final results. More info can be found here.
I am looking for a way to fit my experimental data by a theoretical model which is described by a non-linear differential equation.
Unfortunately, the latter can only be solved numerically (by solving this second-order, non-linear differential equation).
I managed to solve the differential equation for a set of parameters using the ode45 MATLAB solver, but now I want to find the proper fit parameters of the model. I should also mention that my ode45 integration starts at z = zmax (zmax being large enough that I can assume it is infinity) with y(zmax) = y0 and yprime(zmax) = yprime0, and I solve backward (from zmax to z = 0).
I am quite new to this kind of numerical problem; are there classical ways to solve such problems?
Does anyone know if there is a MATLAB procedure which would help me solve this? On which principles is it based/constructed? If possible, I'd like to know the theoretical trick for solving this problem in a smart way, not by trying all possible sets of parameters, which would be very time-consuming (I have 5 fit parameters!).
Thank you for your precious help!
You have fancy methods in the Optimization Toolbox. In case you don't have access to it, you could do it manually by:
Selecting a cost function between the experimental and model data. For example, the mean squared error.
Doing heuristic optimization of the cost function. For example, the Nelder-Mead method (a sketch follows after this list).
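A minimal sketch of that manual route, assuming a model dy/dz = f(z, y; p) with parameter vector p, measurements (zdata, ydata), and the backward integration from zmax described in the question (fminsearch is MATLAB's built-in Nelder-Mead implementation; everything below is a placeholder for your own model and data):

    % --- placeholders: replace with your model, data, and boundary values ---
    odefun = @(z, y, p) [ y(2);  -p(1)^2*y(1) + p(2)*exp(-p(3)^2*z) + p(4)*cos(p(5)*z) ];
    zdata  = linspace(0, 10, 50).';        % measurement positions
    ydata  = rand(size(zdata));            % your experimental data
    zmax   = 50;  y0 = 1;  yprime0 = 0;    % conditions imposed at "infinity"

    p0   = [1 1 1 1 1];                    % initial guess for the 5 fit parameters
    cost = @(p) costfun(p, odefun, zdata, ydata, zmax, y0, yprime0);
    pfit = fminsearch(cost, p0);

    % Cost: integrate backward from zmax to 0, sample at the data points, compare
    % (put this at the end of the script, R2016b+, or in its own file)
    function J = costfun(p, odefun, zdata, ydata, zmax, y0, yprime0)
        sol    = ode45(@(z, y) odefun(z, y, p), [zmax 0], [y0; yprime0]);
        ymodel = deval(sol, zdata);                   % solution evaluated at the measurement z's
        J      = mean((ymodel(1, :).' - ydata).^2);   % mean squared error
    end

If you do have the Optimization Toolbox, lsqnonlin or lsqcurvefit can take the vector of residuals directly instead of a scalar cost, which usually converges faster for least-squares fits.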
I have a system of (first order) ODEs with fairly expensive to compute derivatives.
However, the derivatives can be computed considerably more cheaply to within given error bounds, either because the derivatives are computed from a convergent series and bounds can be placed on the maximum contribution from dropped terms, or through use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers which can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE, and the GNU Scientific Library's solver.)
That is, for all the solvers I've seen, you provide a derivs callback function accepting a t and an array of x, and returning an array of dx/dt back; but ideally I'm looking for one which gives the callback t, xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations possible.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person wanting something like this) would be greatly appreciated.
Roughly speaking, if you know f' up to absolute error eps, and integrate from x0 to x1, the error of the integral coming from the error in the derivative is going to be <= eps*(x1 - x0). There is also discretization error, coming from your ODE solver. Consider how big eps*(x1 - x0) can be for you and feed the ODE solver with f' values computed with error <= eps.
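A quick numerical illustration of that bound on a toy problem (not the asker's system): perturbing the derivative by a constant eps shifts the endpoint by at most eps*(x1 - x0):

    eps_d = 1e-4;  x0 = 0;  x1 = 10;
    exact     = @(t, y) cos(t);             % "true" derivative
    perturbed = @(t, y) cos(t) + eps_d;     % derivative known only to within eps_d

    opts = odeset('RelTol', 1e-10, 'AbsTol', 1e-12);   % make discretization error negligible
    [~, ya] = ode45(exact,     [x0 x1], 0, opts);
    [~, yb] = ode45(perturbed, [x0 x1], 0, opts);

    err   = abs(yb(end) - ya(end));          % observed error; here it equals eps_d*(x1 - x0) = 1e-3
    bound = eps_d * (x1 - x0);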
I'm not sure this is a well-posed question.
In many algorithms, e.g, nonlinear equation solving, f(x) = 0, an estimate of a derivative f'(x) is all that's required for use in something like Newton's method since you only need to go in the "general direction" of the answer.
However, in this case, the derivative is a primary part of the (ODE) equation you're solving - get the derivative wrong, and you'll just get the wrong answer; it's like trying to solve f(x) = 0 with only an approximation for f(x).
As another answer has suggested, if you set up your ODE as f(x) + g(x), where g(x) is an error term, you should be able to relate errors in your derivatives to errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably key. My derivs function basically returns intervals. An integrator using interval arithmetic would maintain the x's as intervals. All I'm interested in is obtaining a sufficiently small error bound on the xs at a final t. An obvious approach would be to iteratively re-integrate, improving the quality of the sample introducing the most error each iteration, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" with regards to overall efficiency). I suspect adaptive step size control could fit nicely into such a scheme, with the step size chosen to keep the "implicit" discretization error comparable with the "explicit" error (i.e. the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant stuff (VNODE and its references in particular).
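For what it's worth, the core idea can be sketched in a few lines of (non-rigorous) interval-style Euler stepping; the derivs bounds below are placeholders, and a real validated solver such as VNODE does considerably more work to guarantee that the bounds hold over each step:

    % x is carried as an interval [xlo, xhi]; derivs returns bounds [dlo, dhi] on dx/dt.
    % The update below is only a valid enclosure if the returned bounds hold for every
    % state in the box over the whole step -- here that is simply assumed, not verified.
    derivs = @(t, xlo, xhi) deal(-xhi - 1e-2, -xlo + 1e-2);   % placeholder: dx/dt = -x, known to +/- 1e-2

    t = 0;  h = 1e-3;  tf = 1;
    xlo = 1;  xhi = 1;                     % exactly known initial condition
    while t < tf
        [dlo, dhi] = derivs(t, xlo, xhi);
        xlo = xlo + h*dlo;                 % lower edge moves by the smallest admissible derivative
        xhi = xhi + h*dhi;                 % upper edge moves by the largest admissible derivative
        t   = t + h;
    end
    final_width = xhi - xlo;               % accumulated uncertainty in x(tf)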
If you have a stiff system, you will be using some form of implicit method in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you strict quadratic convergence on the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large) you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
Have you looked into using odeset? It allows you to set options for an ODE solver, then you pass the options structure as the fourth argument to whichever solver you call. The error control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I could come up with, having last used the MATLAB ODE functions years ago.
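For instance, a minimal sketch (the solver, time span, and right-hand side are placeholders for whatever you are using):

    opts   = odeset('RelTol', 1e-8, 'AbsTol', 1e-10, 'NormControl', 'on');
    [t, y] = ode45(@myDerivs, [0 10], y0, opts);   % myDerivs and y0 stand in for your own system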
In addition: For the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need error limits to be passed from the solver?
Not sure I'm contributing much, but in the pharma modeling world, we use LSODE, DVERK, and DGPADM. DVERK is a nice, fast, simple order 5/6 Runge-Kutta solver. DGPADM is a good matrix-exponential solver. If your ODEs are linear, the matrix exponential is best by far. But your problem is a little different.
BTW, the T argument is only in there for generality. I've never seen an actual system that depended on T.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, seems to me I heard of special methods used for that, based on conic-section curves.
Check into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions, with 2 evaluations of each of the above functions per element.
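A minimal sketch of that kind of assembly on a uniform 1D mesh over [0, 1] with homogeneous Dirichlet boundary conditions (the k, b, and f below are placeholders); each of k, b, and f is evaluated exactly once per element, at its midpoint:

    % Placeholder coefficient and source functions
    k = @(x) 1 + 0*x;   b = @(x) 2 + 0*x;   f = @(x) sin(pi*x);

    N  = 100;                       % number of elements
    x  = linspace(0, 1, N+1).';     % node coordinates
    h  = x(2) - x(1);

    A = sparse(N+1, N+1);  F = zeros(N+1, 1);
    for e = 1:N
        idx = [e, e+1];
        xm  = 0.5*(x(e) + x(e+1));               % midpoint: the single quadrature point
        km  = k(xm);  bm = b(xm);  fm = f(xm);   % one evaluation of each function per element

        Ke = km/h * [ 1 -1; -1 1 ] ...           % diffusion: km * integral of u'*v'
           + bm/2 * [ -1 1; -1 1 ];              % advection: bm * integral of u'*v, linear basis
        Fe = fm * h/2 * [1; 1];                  % load: midpoint quadrature of f*v

        A(idx, idx) = A(idx, idx) + Ke;
        F(idx)      = F(idx) + Fe;
    end

    % Homogeneous Dirichlet BCs: u(0) = u(1) = 0
    free    = 2:N;
    u       = zeros(N+1, 1);
    u(free) = A(free, free) \ F(free);
    plot(x, u);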