Solving ODEs in NetLogo, Euler's vs R-K vs R solver - netlogo

In my model each agent solves a system of ODEs at each tick. I have employed Euler's method (similar to the systems dynamics modeler in NetLogo) to solve these first-order ODEs. However, for a stable solution, I am forced to use a very small time step (dt), which means the simulation proceeds very slowly with this method. I'm curious if anyone has advice on a method to solve the ODEs more quickly? I am considering implementing Runge-Kutta (with a larger time step?) as was done here (http://academic.evergreen.edu/m/mcavityd/netlogo/Bouncing_Ball.html). I would also consider using the R extension and an ODE solver in R. But again, the ODEs are solved by each agent, so I don't know if this is an efficient method.
I'm hoping someone has a feel for the performance of these methods and could offer some advice. If not, I will try to share what I find out.

In general your idea is correct. For a method of order p to reach a global error level tol over an integration interval of length T, you will need a step size of magnitude
h=pow(tol/T,1.0/p).
However, not only the discretization error accumulates over the N=T/h steps, but also the floating-point error. This gives a lower bound for useful step sizes of magnitude h=pow(T*mu,1.0/(p+1)), where mu is the machine precision.
Example: For T=1, mu=1e-15 and tol=1e-6:
the Euler method of order 1 would need a step size of about h=1e-6 and thus N=1e+6 steps and function evaluations. The range of step sizes where reasonable results can be expected is bounded below by h=3e-8.
the improved Euler or Heun method has order 2, which implies a step size of 1e-3, N=1000 steps and 2N=2000 function evaluations; the lower bound for useful step sizes is 1e-5.
the classical Runge-Kutta method has order 4, which gives a required step size of about h=3e-2 with about N=30 steps and 4N=120 function evaluations. The lower bound is 1e-3.
So there is a significant gain to be had by using higher-order methods. At the same time, the range of step sizes over which reducing the step size actually lowers the global error gets significantly narrower with increasing order, even though the achievable accuracy increases. So one has to recognize when that point is reached and leave well enough alone.
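As a rough numerical illustration of this (a sketch, not from the original post: the test equation x' = -x and the step counts are chosen for convenience rather than matching the tolerances above), Euler with 1000 steps and classical RK4 with only 30 steps can be compared directly:

```python
import math

def f(t, x):
    return -x          # test ODE x' = -x, exact solution exp(-t)

def euler(h, n):
    # forward Euler, order 1: one function evaluation per step
    t, x = 0.0, 1.0
    for _ in range(n):
        x += h * f(t, x)
        t += h
    return x

def rk4(h, n):
    # classical Runge-Kutta, order 4: four function evaluations per step
    t, x = 0.0, 1.0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

exact = math.exp(-1.0)
err_euler = abs(euler(1e-3, 1000) - exact)   # 1000 steps over T = 1
err_rk4   = abs(rk4(1.0 / 30, 30) - exact)   # only 30 steps over T = 1
print(err_euler, err_rk4)
```

Despite using roughly eight times fewer function evaluations, the RK4 result is several orders of magnitude more accurate, which is exactly the gain described above.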
The implementation of RK4 in the ball example, as in general for the numerical integration of ODEs, is for an ODE system x'=f(t,x), where x is the, possibly very large, state vector.
A second order ODE (system) is transformed to a first order system by making the velocities members of the state vector. x''=a(x,x') gets transformed to [x',v']=[v, a(x,v)]. The big vector of the agent system is then composed of the collection of the pairs [x,v] or, if desired, as the concatenation of the collection of all x components and the collection of all v components.
In an agent based system it is reasonable to store the components of the state vector belonging to the agent as internal variables of the agent. Then the vector operations are performed by iterating over the agent collection and computing the operation tailored to the internal variables.
Taking into consideration that in NetLogo there are no explicit parameters for function calls, the evaluation of dotx = f(t,x) needs to first set the correct values of t and x before calling the function evaluation of f:
save t0=t, x0=x
evaluate k1 = f_of_t_x
set t=t0+h/2, x=x0+h/2*k1
evaluate k2=f_of_t_x
set x=x0+h/2*k2
evaluate k3=f_of_t_x
set t=t0+h, x=x0+h*k3
evaluate k4=f_of_t_x
set x=x0+h/6*(k1+2*(k2+k3)+k4)

Related

How to choose step size when using fixed step-size solver in Dymola?

I want to do a real-time simulation with the fixed step-size solver in Dymola. With different step sizes the results differ a little, so is there any standard procedure to choose the step size? Or do I have to do a lot of calculations to prove step-size independence, just like proving grid independence in the CFD area?
I don't know if there is a standard procedure, but proving numerical stability is not straightforward for the numerical solution of nonlinear/hybrid models. Therefore I would go with a not strictly mathematical procedure. As it seems you are free to choose the step size, I would do the following.
Option 1 (with at least a little mathematical background):
Linearize the model using the "Tools -> Linear Analysis -> Poles"
The result is a plot containing the eigenvalues and a table in the "Commands" window. The latter should contain a column freq. [Hz]. (Additional information can be generated by running a "Full Linear Analysis".)
Take the highest frequency from the table and derive the necessary step size from it, given the solver's properties (e.g. its stability region)
For Forward Euler it would make sense to use StepSize = 1/max(freq) * 1/10
For others the relation can be very different, but for most explicit solvers, this should be a good starting point
Note: Probably other functions of the "Linear Analysis" contain useful information as well, so it is worth a try to run them.
The problem with the above method is that the poles of a non-LTI system can depend on the inputs/states of the model. Therefore it can go wrong, as the result depends on the state of the system or the time of linearization, respectively.
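Option 1 can be sketched as follows (the 2x2 system matrix is made up for illustration; in Dymola the poles come from the Linear Analysis, and for larger systems one would use numpy or similar for the eigenvalues):

```python
import cmath
import math

# made-up linearized system dx/dt = A x: a lightly damped oscillator
A = [[0.0, 1.0],
     [-400.0, -4.0]]

# eigenvalues (poles) of a 2x2 matrix from its trace and determinant
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4.0 * det)
poles = [(tr + disc) / 2.0, (tr - disc) / 2.0]

max_freq = max(abs(p) for p in poles) / (2.0 * math.pi)   # fastest pole in Hz

# rule of thumb from above for Forward Euler: StepSize = 1/max(freq) * 1/10
step_size = 1.0 / max_freq / 10.0
print(step_size)
```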
Option 2 (just go by trial and error):
Given you have a rough idea what the step-size should be you can do this:
Pick a solver and select a rather small step-size. This should provide a good result but slow simulation (e.g. 100ns in your case).
Then increase the step-size by e.g. a factor of 10, until the difference is getting to a level where you consider it too big to continue.
Then reduce the changes in step-size to find a sweet-spot for the trade-off between performance and precision.
Note: The above steps could be flipped, by starting with a big step size and reducing it until the results match well enough.
Validation/Finetuning
To prove that the result of either of the two above options is not totally off, it would make sense to do the following:
Create a reference result with a proven well-working solver (in Dymola I would use DASSL with a reasonable relative tolerance).
Double-check the reference result with a second solver, ideally something rather different (in Dymola this could be Radau; CVode is similar to DASSL).
Compare the results of the reference solver with your fixed-step solver and check if you are fine with the difference.
If the results are similar enough, you can try to increase the step-size to a point where the difference gets too big (finetuning)
For both Options
Note that when you change the system's properties (poles) or inputs, the above procedure(s) should be repeated - at least the validation part.

Can changing a parameter of the controlling system make the system stiff?

I got a model working fine with the following controlling system parameters,
but if I change one of the parameters, the system becomes stiff and there is no chance to solve it at all.
So my question is:
Why would changing just one parameter make the system stiff?
If I meet the stiff problem again, how could I locate the exact parameter that causes the problem?
DASSL is an implicit solver and should therefore be able to deal with stiff systems pretty well. Still, it seems there are more than 500 steps it has to take within less than 2 s, which is your output interval (this is what triggers the message). In your case this could relate to fast dynamics within the model.
Regarding your questions:
If the model simulates to the end, check the controlled variables and see if fast oscillations (frequency > 100 Hz) occur. This can happen when increasing the proportional gain of the controller, which makes the overall system "less stable".
A general advice on this is pretty difficult, but the linearSystems2 library can help. Creating a "Full Linear Analysis" gives a list of states and how they correlate to poles. The poles with highest frequency are usually responsible for the stiffness and from seeing which states relate to poles of interest, indicates which states to investigate. The way from the state to the parameter is up to the modeler - at least I don't know a general advice on this.
For 2. applied to Modelica.Blocks.Examples.PID_Controller, the result (a pole/state plot, omitted here) shows that the spring likely causes the fastest states in the system.
The answer is yes! Changing only one parameter value may cause the system to be stiff.
Assuming that a given model maps to an explicit ODE system:
dx/dt = f(x,p,...)
Conventionally, a system can be characterized as stiff via stiffness indices expressed in terms of the eigenvalues of the Jacobian df/dx. For instance, one of these indices is the stiffness ratio: the ratio of the largest to the smallest eigenvalue of the Jacobian (in magnitude). If this ratio is large (some literature assumes > 10^5), then the system is characterized as stiff around the chosen initial and parameter values.
The Jacobian df/dx, as well as its eigenvalues, is a time-dependent function of p and of the initial values. So, theoretically and depending on the given system, a single parameter could be capable of causing such undesired system behavior.
Given a way to access the Jacobian and perform eigenvalue analysis, together with parametric sensitivity analysis (e.g. via computation of dynamic parameter sensitivities), identifying such problematic parameters is possible.
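A minimal sketch of such a stiffness-ratio check (the 2x2 linear system dx/dt = A(p) x and the parameter p are made up; for a nonlinear model, A would be the Jacobian df/dx evaluated along a trajectory, e.g. by finite differences, and one would use a linear algebra library):

```python
import cmath

def eig2(M):
    # eigenvalues of a 2x2 matrix from its trace and determinant
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return [(tr + disc) / 2.0, (tr - disc) / 2.0]

def stiffness_ratio(M):
    # ratio of the largest to the smallest eigenvalue magnitude
    mags = [abs(lam) for lam in eig2(M)]
    return max(mags) / min(mags)

def A(p):
    # p plays the role of the "one parameter": here a second rate constant
    return [[-1.0, 0.0],
            [1.0, -p]]

print(stiffness_ratio(A(2.0)))   # ratio 2: benign
print(stiffness_ratio(A(1e6)))   # ratio 1e6: stiff by the > 1e5 criterion
```

Sweeping p this way shows how a single parameter can push the stiffness ratio past the threshold while leaving the rest of the system untouched.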

MATLAB optimization: objective function with "steps"

I am trying to find a minimum using fmincon in MATLAB, and I am facing a following problem:
Optimization completed because the size of the gradient at the initial point
is less than the default value of the function tolerance.
My objective function's surface shows "steps", and therefore it has the same values over certain ranges of the input variables (so the size of the gradient is zero there, if I am correct).
When moving from the initial point, the solver doesn't see any changes in the objective function's value, and finishes the optimization:
Iteration Func-count f(x) Step-size optimality
0 3 581.542 0
Initial point is a local minimum.
Optimization completed because the size of the gradient at the initial point
is less than the default value of the function tolerance.
Is there any way to make the solver move forward when the objective function keeps its value unchanged (until the objective function starts to increase)?
Thanks for your help.
I post my extended comment as an answer in the hope that it will be easier for future answer seekers to find the solution:
Probably you would get reasonable results with a non-gradient-based solver, e.g. ga, if the evaluation of the objective function is not too costly. These solvers do not depend on the gradient and perform well on non-smooth functions. It is also worth reading the following guide before selecting a solver algorithm: How to choose a solver.
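To illustrate why a gradient-based solver stalls on such plateaus and how a derivative-free strategy escapes them, here is a toy expanding-probe search in Python (this is not fmincon or ga; MATLAB's patternsearch uses a more robust variant of the same idea of growing the probe until the objective finally changes):

```python
def objective(x):
    # piecewise-constant "steps": flat over unit-wide ranges of x
    return abs(round(x))

def plateau_search(f, x0, h0=0.5, h_max=1e6, max_iter=100):
    # From the current point probe x +/- h; if f is flat at both probes,
    # double h until a probe finally leaves the plateau (or h_max is hit).
    x, fx = x0, f(x0)
    for _ in range(max_iter):
        h, moved = h0, False
        while h < h_max and not moved:
            for cand in (x - h, x + h):
                if f(cand) < fx:
                    x, fx = cand, f(cand)
                    moved = True
                    break
            h *= 2.0            # flat in both directions: expand the probe
        if not moved:
            return x, fx        # no descent at any probed scale: done
    return x, fx

x_opt, f_opt = plateau_search(objective, 7.3)
print(x_opt, f_opt)
```

A gradient-based method started at x = 7.3 sees a zero gradient and stops immediately; the expanding probe walks down the staircase to the flat bottom step.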
The answer is right there:
Initial point is a local minimum.
The point you are giving as the initial point is already a local minimum. So the algorithm finds that minimum and sticks there.
In order to find another local minimum, or maybe the global one, change the initial point to something far from this local minimum.
In order to find the global minimum use a global optimization technique.

Can setting tspan=[t0:very_small_step:tf] affect the ode45 solver's step size?

I know that the ode45 solver has an adaptive step size controlled by MATLAB itself. The description below is given on the MATLAB website:
Specifying tspan with more than two elements does not affect the internal time steps that the solver uses to traverse the interval from tspan(1) to tspan(end). All solvers in the ODE suite obtain output values by means of continuous extensions of the basic formulas. Although a solver does not necessarily step precisely to a time point specified in tspan, the solutions produced at the specified time points are of the same order of accuracy as the solutions computed at the internal time points.
However, if I specify very_small_step in tspan=[t0:very_small_step:tf], will this affect the program-controlled step size? Will this force the step size to be less than very_small_step, or will MATLAB interpolate to get the corresponding result at each specified time point?
From your quote
Specifying tspan with more than two elements does not affect the internal time steps
Also there exists the MaxStep property to configure the maximum step size.
For steps in between the solvers use continuous extension formulas as described here.
Why are you asking anyway? What problem do you encounter?
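The quoted behavior can be illustrated with a toy adaptive integrator (a sketch, not MATLAB's actual implementation: a simple Euler/Heun pair with cubic Hermite interpolation standing in for the ODE suite's continuous extensions). The internal steps are chosen by the error control alone; the 2001-point output grid is filled purely by interpolation:

```python
import bisect
import math

def f(t, x):
    return -x    # test ODE x' = -x with exact solution exp(-t)

def integrate(f, t0, x0, t_end):
    # crude adaptive integrator (Euler/Heun pair): internal steps are chosen
    # only by the local error control, never by requested output times
    ts, xs, ds = [t0], [x0], [f(t0, x0)]
    t, x, h = t0, x0, 0.1
    while t < t_end:
        h = min(h, t_end - t)
        k1 = f(t, x)
        k2 = f(t + h, x + h * k1)
        if abs(h * (k2 - k1) / 2) < 1e-6:        # error estimate acceptable
            t, x = t + h, x + h * (k1 + k2) / 2  # Heun update
            ts.append(t); xs.append(x); ds.append(f(t, x))
            h *= 1.5
        else:
            h /= 2
    return ts, xs, ds

def dense_eval(ts, xs, ds, t):
    # "continuous extension": cubic Hermite interpolation on the internal
    # step containing t, using the stored states and derivatives
    i = min(max(bisect.bisect_right(ts, t) - 1, 0), len(ts) - 2)
    h = ts[i + 1] - ts[i]
    s = (t - ts[i]) / h
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * xs[i] + h10 * h * ds[i] + h01 * xs[i + 1] + h11 * h * ds[i + 1]

ts, xs, ds = integrate(f, 0.0, 1.0, 2.0)
# output grid far finer than the internal steps: purely interpolated,
# yet of the same order of accuracy as the internal solution
max_err = max(abs(dense_eval(ts, xs, ds, 0.001 * k) - math.exp(-0.001 * k))
              for k in range(2001))
print(len(ts), max_err)
```

Requesting more output points only adds dense_eval calls; ts (the internal steps) stays exactly the same, which is the point of the quoted documentation.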

Looking for ODE integrator/solver with a relaxed attitude to derivative precision

I have a system of (first order) ODEs with fairly expensive to compute derivatives.
However, the derivatives can be computed considerably cheaper to within given error bounds, either because the derivatives are computed from a convergent series and bounds can be placed on the maximum contribution from dropped terms, or through use of precomputed range information stored in kd-tree/octree lookup tables.
Unfortunately, I haven't been able to find any general ODE solvers which can benefit from this; they all seem to just give you coordinates and want an exact result back. (Mind you, I'm no expert on ODEs; I'm familiar with Runge-Kutta, the material in the Numerical Recipes book, LSODE and the GNU Scientific Library's solver).
I.e. for all the solvers I've seen, you provide a derivs callback function accepting a t and an array of x, and returning an array of dx/dt back; but ideally I'm looking for one which gives the callback t, xs, and an array of acceptable errors, and receives dx/dt_min and dx/dt_max arrays back, with the derivative range guaranteed to be within the required precision. (There are probably numerous equally useful variations possible.)
Any pointers to solvers which are designed with this sort of thing in mind, or alternative approaches to the problem (I can't believe I'm the first person wanting something like this) would be greatly appreciated.
Roughly speaking, if you know f' up to absolute error eps, and integrate from x0 to x1, the error of the integral coming from the error in the derivative is going to be <= eps*(x1 - x0). There is also discretization error, coming from your ODE solver. Consider how big eps*(x1 - x0) can be for you and feed the ODE solver with f' values computed with error <= eps.
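This bound is easy to check numerically (a sketch; the answer's x0, x1 are the integration limits, called t here, and the contractive test system x' = -x makes the bound conservative):

```python
# Integrate the same system twice: once with the true derivative and once
# with a derivative that is consistently wrong by +eps (the worst case).
eps = 1e-4                      # worst-case absolute error in the derivative
T = 2.0                         # length of the integration interval (x1 - x0)
n = 20000
h = T / n

x_true, x_approx = 1.0, 1.0
for _ in range(n):
    x_true += h * (-x_true)
    x_approx += h * (-x_approx + eps)   # derivative off by +eps everywhere
drift = abs(x_approx - x_true)
print(drift, eps * T)           # the drift stays below eps * (x1 - x0)
```

So as long as the cheap derivative is accurate to within eps, the extra error it contributes is controlled by eps times the interval length, independently of the solver's own discretization error.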
I'm not sure this is a well-posed question.
In many algorithms, e.g, nonlinear equation solving, f(x) = 0, an estimate of a derivative f'(x) is all that's required for use in something like Newton's method since you only need to go in the "general direction" of the answer.
However, in this case, the derivative is a primary part of the (ODE) equation you're solving - get the derivative wrong, and you'll just get the wrong answer; it's like trying to solve f(x) = 0 with only an approximation for f(x).
As another answer has suggested, if you set up your ODE as f(x) + g(x), where g(x) is an error term, you should be able to relate errors in your derivatives to errors in your inputs.
Having thought about this some more, it occurred to me that interval arithmetic is probably the key. My derivs function basically returns intervals. An integrator using interval arithmetic would maintain the x's as intervals. All I'm interested in is obtaining a sufficiently small error bound on the xs at a final t. An obvious approach would be to iteratively re-integrate, improving the quality of the sample that introduces the most error each iteration, until we finally get a result with acceptable bounds (although that sounds like it could be a "cure worse than the disease" with regard to overall efficiency). I suspect adaptive step-size control could fit in nicely in such a scheme, with the step size chosen to keep the "implicit" discretization error comparable with the "explicit" error (i.e. the interval range).
Anyway, googling "ode solver interval arithmetic" or just "interval ode" turns up a load of interesting new and relevant stuff (VNODE and its references in particular).
If you have a stiff system, you will be using some form of implicit method in which case the derivatives are only used within the Newton iteration. Using an approximate Jacobian will cost you strict quadratic convergence on the Newton iterations, but that is often acceptable. Alternatively (mostly if the system is large) you can use a Jacobian-free Newton-Krylov method to solve the stages, in which case your approximate Jacobian becomes merely a preconditioner and you retain quadratic convergence in the Newton iteration.
Have you looked into using odeset? It allows you to set options for an ODE solver, then you pass the options structure as the fourth argument to whichever solver you call. The error control properties (RelTol, AbsTol, NormControl) may be of most interest to you. Not sure if this is exactly the sort of help you need, but it's the best suggestion I could come up with, having last used the MATLAB ODE functions years ago.
In addition: For the user-defined derivative function, could you just hard-code tolerances into the computation of the derivatives, or do you really need error limits to be passed from the solver?
Not sure I'm contributing much, but in the pharma modeling world, we use LSODE, DVERK, and DGPADM. DVERK is a nice fast simple order 5/6 Runge-Kutta solver. DGPADM is a good matrix-exponent solver. If your ODEs are linear, matrix exponent is best by far. But your problem is a little different.
BTW, the T argument is only in there for generality. I've never seen an actual system that depended on T.
You may be breaking into new theoretical territory. Good luck!
Added: If you're doing orbital simulations, seems to me I heard of special methods used for that, based on conic-section curves.
Check into a finite element method with linear basis functions and midpoint quadrature. Solving the following ODE requires only one evaluation each of f(x), k(x), and b(x) per element:
-k(x)u''(x) + b(x)u'(x) = f(x)
The answer will have pointwise error proportional to the error in your evaluations.
If you need smoother results, you can use quadratic basis functions with 2 evaluations of each of the above functions per element.
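A minimal sketch of this scheme (assuming homogeneous Dirichlet conditions u(0) = u(1) = 0 on [0, 1]; k is taken as its midpoint value on each element, so the weak form matches -(k u')' elementwise, and the advection term is assembled but the check below uses b = 0):

```python
def solve_fem(k, b, f, n):
    # P1 finite elements with one midpoint quadrature point per element for
    # -k(x) u'' + b(x) u' = f(x): one evaluation of k, b, f per element
    h = 1.0 / n
    # unknowns are the n-1 interior nodal values; tridiagonal system
    lower = [0.0] * (n - 1)
    diag = [0.0] * (n - 1)
    upper = [0.0] * (n - 1)
    rhs = [0.0] * (n - 1)
    for e in range(n):                  # element [e*h, (e+1)*h]
        m = (e + 0.5) * h               # midpoint quadrature point
        km, bm, fm = k(m), b(m), f(m)   # one evaluation of each per element
        # local matrices for linear hat functions:
        # diffusion km/h*[[1,-1],[-1,1]], advection bm*[[-1/2,1/2],[-1/2,1/2]]
        Ke = [[km / h - bm / 2, -km / h + bm / 2],
              [-km / h - bm / 2, km / h + bm / 2]]
        Fe = [fm * h / 2, fm * h / 2]
        for a in range(2):
            i = e + a - 1               # interior index of local node a
            if not 0 <= i < n - 1:
                continue                # boundary node: value fixed to 0
            diag[i] += Ke[a][a]
            rhs[i] += Fe[a]
            j = e + (1 - a) - 1         # interior index of the other node
            if 0 <= j < n - 1:
                if j == i + 1:
                    upper[i] += Ke[a][1 - a]
                else:
                    lower[i] += Ke[a][1 - a]
    # Thomas algorithm for the tridiagonal solve
    for i in range(1, n - 1):
        w = lower[i] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i]
    return u                            # u at interior nodes x_i = (i+1)*h

# check: -u'' = 1 with u(0) = u(1) = 0 has exact solution u = x(1-x)/2
n = 50
u = solve_fem(lambda x: 1.0, lambda x: 0.0, lambda x: 1.0, n)
err = max(abs(u[i] - (i + 1) / n * (1 - (i + 1) / n) / 2) for i in range(n - 1))
print(err)
```

Each element touches k, b and f exactly once, at its midpoint, which is the property claimed above; the pointwise error then tracks the error in those evaluations.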