How to choose step size when using fixed step-size solver in Dymola? - modelica

I want to do a real-time simulation. If I use the fixed step-size solver in Dymola with different step sizes, the results can differ slightly. Is there any standard procedure to choose the step size, or do I have to do a lot of calculations to prove step-size independence, just as in the CFD area I would need to prove grid independence?

I don't know if there is a standard procedure, but proving numerical stability is not straightforward for the numerical solution of nonlinear/hybrid models. Therefore I would go with a not strictly mathematical procedure. As it seems you are free to choose the step size, I would do the following.
Option 1 (with at least a little mathematical background):
Linearize the model using the "Tools -> Linear Analysis -> Poles"
The result is a plot containing the Eigenvalues and a table in the "Commands"-window. The latter should contain a column freq. [Hz] (Additional information can be generated by running a "Full Linear Analysis")
Take the highest value for the frequency from the table and derive the necessary step-size for it, given the solvers properties (e.g. stability region)
For Forward Euler it would make sense to use StepSize = 1/max(freq) * 1/10
For others the relation can be very different, but for most explicit solvers, this should be a good starting point
Note: Probably other functions of the "Linear Analysis" contain useful information as well, so it is worth a try to run them.
The problem with the above method is that the poles of a non-LTI system can depend on the inputs/states of the model. Therefore it can go wrong, as the result depends on the state of the system or the time of linearization respectively.
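If the poles are available outside Dymola (e.g. as the state matrix A of the linearized model), the rule of thumb from Option 1 can be scripted. A minimal Python sketch, with a hypothetical two-state system standing in for the real model:

import numpy as np

def suggest_step_size(A, safety_factor=10.0):
    # highest eigenfrequency in Hz, as listed in the "Poles" table: |lambda| / (2*pi)
    max_freq = np.max(np.abs(np.linalg.eigvals(A))) / (2.0 * np.pi)
    # StepSize = 1/max(freq) * 1/10, as suggested above
    return 1.0 / (safety_factor * max_freq)

A = np.array([[0.0, 1.0], [-1.0e4, -20.0]])   # hypothetical damped oscillator, ~16 Hz
print(suggest_step_size(A))                   # ~6e-3 s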
Option 2 (just go by trial and error):
Given you have a rough idea what the step-size should be you can do this:
Pick a solver and select a rather small step-size. This should provide a good result but slow simulation (e.g. 100ns in your case).
Then increase the step-size by e.g. a factor of 10, until the difference is getting to a level where you consider it too big to continue.
Then reduce the changes in step-size to find a sweet-spot for the trade-off between performance and precision.
Note: The above steps could be flipped, by starting with a big step-size and reducing it until the results match well enough.
Validation/Finetuning
To prove that the result of either of the two above options is not totally off, it would make sense to do the following:
Create a reference result with a proven well-working solver (in Dymola I would use DASSL with a reasonable relative tolerance).
Double-check the reference result with a second solver, ideally something rather different (in Dymola this could be Radau; CVode is similar to DASSL).
Compare the results of the reference solver with your fixed-step solver and check if you are fine with the difference.
If the results are similar enough, you can try to increase the step-size to a point where the difference gets too big (finetuning)
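A minimal sketch of the comparison step, assuming the reference and fixed-step trajectories have been exported as arrays (t_ref, y_ref, t_fix, y_fix are placeholder names):

import numpy as np

def relative_deviation(t_ref, y_ref, t_fix, y_fix):
    # resample the fixed-step result onto the reference time grid and compare
    y_fix_on_ref = np.interp(t_ref, t_fix, y_fix)
    scale = max(np.max(np.abs(y_ref)), 1e-12)
    return np.max(np.abs(y_fix_on_ref - y_ref)) / scale

# e.g. accept the step size while relative_deviation(...) stays below 1e-2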
For both Options
Note that when you change the system's properties (poles) or inputs, the above procedure(s) should be repeated - at least the validation part.

Related

How to test whether the ODE integration has reached equilibrium?

I am using Matlab for this project. I have introduced some modifications to the ode45 solver.
I sometimes use up to 64 components, all in the [0,1] interval, and the components sum up to 1.
At some intervals I halt the integration process in order to run a quick check to see whether further integration is needed, and I am looking for some clever way to figure this out efficiently.
I have found four cases and I should be able to detect each of them during a check:
1: The system has settled into an equilibrium and all components are unchanged.
2: Three or more components are wildly fluctuating in a periodic manner.
3: One or two components are changing very rapidly with low amplitude and high frequency.
4: None of the above is true and the integration must be continued.
To give an idea: I have found it to be good practice to pass the last ~5k states generated by the ode45 solver to a function for this purpose.
In short: how does one detect equilibrium or a nonchanging periodic pattern during ODE integration?
Steady state only occurs when the time derivatives your model function computes are all 0. A periodic solution like you described corresponds rather to a limit cycle, i.e. oscillations around an unstable equilibrium. I don't know if there are methods to detect these cycles. I might update my answer to give more info on that. Maybe an idea would be to see if the last part of the signal correlates with itself (with a delay corresponding to the cycle period).
Note that if you are only interested in the steady state, an implicit method like ode15s may be more efficient, as it can "dissipate" all the transient fluctuations and use much larger time steps than explicit methods, which must resolve the transient accurately to avoid exploding. However, they may also dissipate small-amplitude limit cycles. A pragmatic solution is then to slightly perturb the steady-state values and see if an explicit integration converges towards the unperturbed steady-state.
Something I often do is to look at the norm of the difference between the solution at each step and the solution at the previous step. If this difference is small for a sufficiently high number of steps, then steady state is reached. You can also observe how the norm $\|\frac{dy}{dt}\|$ converges to zero.
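A rough sketch of both checks, assuming y holds the last ~5k states as rows, t the corresponding times, and f the right-hand-side function (all placeholder names; the thresholds are arbitrary):

import numpy as np

def is_steady_state(t, y, f, atol=1e-8, window=100):
    # steady state: ||dy/dt|| ~ 0 and the solution barely moves over the window
    dydt = np.array([f(ti, yi) for ti, yi in zip(t[-window:], y[-window:])])
    return (np.max(np.linalg.norm(dydt, axis=1)) < atol
            and np.linalg.norm(y[-1] - y[-window]) < atol)

def looks_periodic(y, min_corr=0.95):
    # crude limit-cycle check: does one component correlate with a delayed copy of itself?
    s = y[:, 0] - np.mean(y[:, 0])
    acf = np.correlate(s, s, mode="full")[s.size - 1:]
    acf = acf / (acf[0] + 1e-30)
    return np.max(acf[10:]) > min_corr   # strong secondary peak -> likely periodic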
This question is actually better suited for the computational science forum I think.

Would changing a parameter of the controlling system cause the system to become stiff?

I got a model working fine with the following controlling-system parameters,
but if I change one of the parameters, the system becomes stiff and there is no chance to solve it at all.
So my question is:
Why would changing just one parameter cause the system to become stiff?
If I run into a stiffness problem again, how can I locate the exact parameter that causes it?
DASSL is an implicit solver and should therefore be able to deal with stiff systems pretty well. Still, it seems there are many (>500) steps it has to take within one output interval (<2 s), which causes the message. In your case this could relate to fast dynamics that happen within the model.
Regarding your questions:
If the model simulates to the end, check the controlled variables and see if fast oscillations (frequency > 100 Hz) occur. This can happen when increasing the proportional gain of the controller, which makes the overall system "less stable".
General advice on this is pretty difficult, but the linearSystems2 library can help. Creating a "Full Linear Analysis" gives a list of states and how they correlate to poles. The poles with the highest frequency are usually responsible for the stiffness, and seeing which states relate to the poles of interest indicates which states to investigate. The way from the state to the parameter is up to the modeler - at least I don't know a general approach for this.
For 2., applied to Modelica.Blocks.Examples.PID_Controller, the result shows that the spring likely causes the fastest states in the system.
The answer is yes! Changing only one parameter value may cause the system to be stiff.
Assuming that a given model maps to an explicit ODE system:
dx/dt = f(x,p,...)
Conventionally, a system can be characterized as stiff via some stiffness indices expressed in terms of the eigenvalues of the Jacobian df/dx. For instance, one of these indices is the stiffness ratio: the ratio of the largest to the smallest eigenvalue magnitude of the Jacobian. If this ratio is large (some literature assumes > 10^5), then the system is characterized as stiff around the chosen initial and parameter values.
The Jacobian df/dx, as well as its eigenvalues, is a time-dependent function of p and the initial values. So, theoretically and depending on the given system, a single parameter could be capable of causing such undesired system behavior.
Having a way to access the Jacobian and to perform eigenvalue analysis, together with parametric sensitivity analysis (e.g. via computation of dynamic parameter sensitivities), makes it possible to identify such problematic parameters.
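A sketch of such an analysis with a finite-difference Jacobian (f, x and p are placeholders for the right-hand side, the state and the parameter vector):

import numpy as np

def stiffness_ratio(f, x, p, eps=1e-7):
    n = len(x)
    f0 = f(x, p)
    J = np.empty((n, n))
    for j in range(n):                        # finite-difference Jacobian df/dx, column by column
        dx = np.zeros(n)
        dx[j] = eps * max(1.0, abs(x[j]))
        J[:, j] = (f(x + dx, p) - f0) / dx[j]
    re = np.abs(np.real(np.linalg.eigvals(J)))
    re = re[re > 0]                           # ignore purely imaginary eigenvalues
    return re.max() / re.min()                # >> 1 (e.g. > 1e5) indicates stiffness

Evaluating this ratio along a trajectory, and for perturbed parameter values, shows which parameter drives it up.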

What does Default Solver Iterations mean?

I'm trying to understand the Unity physics engine (PhysX). Can somebody explain what exactly Default Solver Iterations and Default Solver Velocity Iterations are?
This is from the Unity documentation:
Default Solver Iterations: Define how many solver processes Unity runs
on every physics frame. Solvers are small physics engine tasks which
determine a number of physics interactions, such as the movements of
joints or managing contact between overlapping Rigidbody components.
This affects the quality of the solver output and it’s advisable to
change the property in case non-default Time.fixedDeltaTime is used,
or the configuration is extra demanding. Typically, it’s used to
reduce the jitter resulting from joints or contacts.
Please provide some examples of how it works and how increasing or decreasing it affects the final result.
I asked this question on Unity Forum and Hyblademin answered it:
In mathematics, an iterative solution method is any algorithm which
approximately solves a system of unknown values like [x1, x2, x3 ...
xn] by repeating a set of steps (iterating). Often, the system of
interest is a set of linear equations exactly like those seen in
algebra class but with a prohibitively high number of unknowns.
Starting with a guess for the solution to each unknown, which could be
based on a similar, known system or could be from a common starting
point like [1, 1, 1 ... 1], a procedure is carried out which gives an
approximate solution which will be closer to the exact values. After
only one iteration, the approximation won't be a very good one unless
the initial guess was already close. But the procedure can be repeated
with the first approximation as the new input, which will give a
closer approximation.
After repeating a few more times, we can expect a reliable
approximation. It still isn't exact, which we could confirm by just
plugging in our answers into our original system and seeing that it
isn't quite right (after simplifying, we would end up with things like
10=10.001 or something to that effect). That said, if the
approximation is close enough for our application, we stop iterating
and use it.
These lecture notes courtesy of a Notre Dame course give a nice
example of this in action using the well-known Jacobi method. Carrying
out an iteration of an iterative method outputs an approximation that
is better than the input because the methods are defined in a way that
causes this to happen, and this is a property called convergence. When
looking at why any given method converges, things get abstract pretty
quickly. I think this is outside the scope of your question,
especially since I don't know what method(s) Unity uses anyway.
When physics is calculated in Unity, we end up with a lot of systems
of equations. We could draw a free-body diagram to show forces and
torques during a collision for a given FixedUpdate in a Unity runtime
to show this. We could try to solve them "directly", which means to
use logical relationships to determine the exact results of the values
(like solving for x in algebra class), but even if the systems are on
the simple side, doing a lot of them will slow the execution to a
crawl. Luckily, iterative, "indirect" methods can be used to get a
pretty good approximation at a fraction of the computing cost.
Increasing the number of iterations will lead to more precise
approximate solutions. There is a point where increasing the number of
iterations gives an increase in precision that is not at all worth the
processing overhead of doing another iteration. But the number of
iterations for this point depends on what you need your project to do.
Sometimes a given arrangement of physics objects will result in jitter
with the default settings that might be improved with more solver
iterations, which is mentioned in the manual entry. There isn't a
great way to determine if changing solver iteration counts will
improve behavior or performance in the way that you need, except for
just trial and error (use the Profiler for a more-objective indication
of performance impact).
https://forum.unity.com/threads/what-does-default-solver-iteration-means.673912/#post-4512004
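To make the idea of an iterative method concrete, here is a small sketch of the Jacobi method mentioned in the quoted answer (purely illustrative; the solver PhysX actually uses is different and not public):

import numpy as np

def jacobi(A, b, x0, iterations):
    D = np.diag(A)                  # diagonal part
    R = A - np.diagflat(D)          # off-diagonal remainder
    x = x0.astype(float)
    for _ in range(iterations):
        x = (b - R @ x) / D         # one Jacobi sweep
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])     # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
for n in (1, 5, 20):
    print(n, jacobi(A, b, np.ones(2), n))  # the approximation improves with each extra iteration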

Solving ODEs in NetLogo, Eulers vs R-K vs R solver

In my model each agent solves a system of ODEs at each tick. I have employed Euler's method (similar to the System Dynamics Modeler in NetLogo) to solve these first-order ODEs. However, for a stable solution, I am forced to use a very small time step (dt), which means the simulation proceeds very slowly with this method. I'm curious if anyone has advice on a method to solve the ODEs more quickly? I am considering implementing Runge-Kutta (with a larger time step?) as was done here (http://academic.evergreen.edu/m/mcavityd/netlogo/Bouncing_Ball.html). I would also consider using the R extension and using an ODE solver in R. But again, the ODEs are solved by each agent, so I don't know if this is an efficient method.
I'm hoping someone has a feel for the performance of these methods and could offer some advice. If not, I will try to share what I find out.
In general your idea is correct. For a method of order p to reach a global error level tol over an integration interval of length T you will need a step size in the magnitude range
h=pow(tol/T,1.0/p).
However, not only the discretization error accumulates over the N=T/h steps, but also the floating point error. This gives a lower bound for useful step sizes of magnitude h=pow(T*mu,1.0/(p+1)).
Example: For T=1, mu=1e-15 and tol=1e-6
the Euler method of order 1 would need a step size of about h=1e-6 and thus N=1e+6 steps and function evaluations. The range of step sizes where reasonable results can be expected is bounded below by h=3e-8.
the improved Euler or Heun method has order 2, which implies a step size of 1e-3, N=1000 steps and 2N=2000 function evaluations; the lower bound for useful step sizes is 1e-5.
the classical Runge-Kutta method has order 4, which gives a required step size of about h=3e-2 with about N=30 steps and 4N=120 function evaluations. The lower bound is 1e-3.
So there is a significant gain to be had by using higher order methods. At the same time, the range of step sizes where a further reduction actually lowers the global error gets significantly narrower with increasing order, while the achievable accuracy increases. So one has to be aware of the point at which further step-size reduction stops paying off and leave well enough alone.
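A few lines reproducing the estimates above:

T, mu, tol = 1.0, 1e-15, 1e-6
for name, p, stages in [("Euler", 1, 1), ("Heun", 2, 2), ("classical RK4", 4, 4)]:
    h = (tol / T) ** (1.0 / p)              # required step size
    h_min = (T * mu) ** (1.0 / (p + 1))     # lower bound of useful step sizes
    N = T / h                               # number of steps
    print(name, h, N, stages * N, h_min)    # steps and function evaluations per order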
The implementation of RK4 in the ball example, as in general for the numerical integration of ODEs, is for an ODE system x'=f(t,x), where x is the (possibly very large) state vector.
A second order ODE (system) is transformed to a first order system by making the velocities members of the state vector. x''=a(x,x') gets transformed to [x',v']=[v, a(x,v)]. The big vector of the agent system is then composed of the collection of the pairs [x,v] or, if desired, as the concatenation of the collection of all x components and the collection of all v components.
In an agent based system it is reasonable to store the components of the state vector belonging to the agent as internal variables of the agent. Then the vector operations are performed by iterating over the agent collection and computing the operation tailored to the internal variables.
Taking into consideration that in the LOGO language there are no explicit parameters for function calls, the evaluation of dotx = f(t,x) needs to first fix the correct values of t and x before calling the function evaluation of f:
save t0=t, x0=x
evaluate k1 = f_of_t_x
set t=t0+h/2, x=x0+h/2*k1
evaluate k2=f_of_t_x
set x=x0+h/2*k2
evaluate k3=f_of_t_x
set t=t0+h, x=x0+h*k3
evaluate k4=f_of_t_x
set x=x0+h/6*(k1+2*(k2+k3)+k4)
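Outside of LOGO the same scheme fits in a few lines. A generic sketch in Python, using free fall (as in the bouncing-ball example) transformed to a first-order system as described above:

import numpy as np

def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * (k2 + k3) + k4)

def f(t, state):                        # x'' = a(x, v) written as [x', v'] = [v, a(x, v)]
    x, v = state
    return np.array([v, -9.81])         # constant gravity

state, t, h = np.array([10.0, 0.0]), 0.0, 0.05
while state[0] > 0.0:                   # integrate until the ball reaches the ground
    state = rk4_step(f, t, state, h)
    t += h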

3rd-order rate limiter in Simulink? How to generate smooth triggered signals?

First, for those who are not familiar with Simulink, there is a conceivable outside-Simulink partial solution:
I need to create a vector satisfying the following conditions:
known initial value a1
known final value a2
it has a pre-defined step size, but the length is not pre-determined
the first derivative over the whole range is limited to ±v_max
the second derivative over the whole range is limited to ±a_max
the third derivative over the whole range is limited to ±j_max
at the first and the final point all derivatives are zero.
Before you ask "what have you tried so far": I just had the idea to solve it outside Simulink, and I tried all of the stuff below ;)
But maybe you guys have a good idea, while I keep working on my own solution.
I'd like to generate smooth ramp signals (3rd derivative limited) based on a trigger signal in Simulink.
To get a triggered step I created a triggered subsystem propagating the trigger output. It looks like this:
But I actually don't want a step, I need a very smooth ramp with limited derivatives up to the 3rd order. The math behind is:
displacement: x
speed: v = x'
acceleration: a = v' = x''
jerk: j = a' = v'' = x'''
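A quick outside-Simulink way to check whether a sampled signal respects these limits is to difference it numerically (a sketch; the limit values are whatever your application prescribes):

import numpy as np

def check_limits(x, dt, v_max, a_max, j_max):
    v = np.diff(x) / dt                 # speed        v = x'
    a = np.diff(v) / dt                 # acceleration a = x''
    j = np.diff(a) / dt                 # jerk         j = x'''
    return (np.max(np.abs(v)) <= v_max,
            np.max(np.abs(a)) <= a_max,
            np.max(np.abs(j)) <= j_max)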
(If this looks familiar to you, I once had a very similar question. I thought about a bounty on it, but after the necessary edit of the question both answers would have been invalid)
As there are only rate limiters of 1st order, I used two derivatives and a double integration to solve my problem. But there is a major drawback that I can no longer ignore. For the sake of illustration I chose a relatively big step size of 0.1.
The complete minimal example (Fixed Step, stepsize: 0.1, ode4): Download here
It can be seen that the signal does not even reach the intended step height of 10 and, furthermore, is not constant at the end.
Over the development process of my whole model, this approach was satisfactory enough for small step sizes. But I reached the point where I really need the smooth ramp as intended. That means I need a finally constant signal at exactly the value, specified by the step height gain.
I have already spent days trying to resolve the problem, and hope to find some help here now.
Some of my ideas:
dynamically increase the step height above the actually desired value and saturate the final output. If the rate limits, step height and simulation step size weren't flexible, one could probably find a satisfying solution. But as everything has to be flexible, there are too many cases where the acceleration and jerk limits are violated.
I tried to use the Matlab function block and write my own 3rd-order rate limiter. Though it seems possible to me for the trigger moment, I have no solution for how to smooth the "deceleration" at the end of the ramp. Also I'd need C compilers, which would make it hard to use my model on other systems without problems. (At least I think so.)
The solver can not be changed significantly (either ode3 or ode4) and a fixed step size is mandatory (0.00001 to 0.01).
Currently used, not really useful approach:
For a dynamic amplification of 1.07 I get the following output (all values normalised on their limits):
Though the displacement looks nice, the violation of the acceleration limit is very harmful.
For a dynamic amplification of 1.05 I get the following output (all values normalised on their limits):
The acceleration stays within its boundaries, but the displacement does not reach the intended value (not really clear in the picture). The jerk is still too big. (I could live with that, but it's not nice.)
So it appears to me that an inside-Simulink solution is far from reality. Any ideas how to create a well-behaving custom function block?
Simulation step size, step height, and the rate limits are known before the simulation starts. (But I have a lot of these triggered smooth ramps in a row; they should feed an event-discrete control.) So I could imagine creating the whole smooth ramp outside Simulink, saving it as a timeseries object, and appending it to the current signal when the trigger is activated.
The problems you see are because the difference is not conditioned very well.
Taking the difference amplifies the numerical noise that exists in your simulation.
Also, the jerk will always be large if you try to apply an actual step.
I guess for your approach it would be better to work the other way around:
i.e. construct the jerk, acceleration and velocity profiles with which your step is achieved.
I think you're looking for something like the ref3 block:
http://www.dct.tue.nl/home_of_ref3.htm
Note the disclaimer on the site and that it is a little cumbersome to use.
An easy (yet to be improved) way is to use a rate limiter and then a state space model with a filter. From the filter you get the velocity, which in turn you can apply a rate limiter to. You continue with rate-limiter and filters until you have the desired curve.
Otherwise you can come up with numerical rate limiters of higher order using e.g. Runge-Kutta formulas or finite differences. However, as was pointed out, they may suffer from bad conditioning.
What I usually do is use one rate limiter and a filter of 3rd order and just tune the time constant (one triple pole) such that my needs are met. This works well.
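A minimal discrete-time sketch of that rate-limiter-plus-filter idea (all parameter values are placeholders; in Simulink this corresponds to a Rate Limiter block followed by a transfer function 1/(tau*s+1)^3):

import numpy as np

dt, v_max, tau = 0.001, 20.0, 0.05       # step size, rate limit, filter time constant
t = np.arange(0.0, 1.0, dt)
target = 10.0 * (t >= 0.1)               # triggered step of height 10

ramp = np.zeros_like(t)                  # 1st-order rate limiter
for k in range(1, len(t)):
    ramp[k] = ramp[k - 1] + np.clip(target[k] - ramp[k - 1], -v_max * dt, v_max * dt)

smooth = ramp.copy()                     # triple pole: three cascaded 1st-order lags
for _ in range(3):
    y = np.zeros_like(smooth)
    for k in range(1, len(t)):
        y[k] = y[k - 1] + dt / tau * (smooth[k] - y[k - 1])
    smooth = y
# smooth settles at the step height; its derivatives are bounded, but the jerk
# limit is only respected indirectly, by choosing tau large enough.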
Integrator chains of length > 1 are unstable!
There is a huge field of research dealing with trajectory planning. The easiest way might be to use FIR filters (Biagiotti et al.) or to implement an online trajectory planner (Ezair et al. 2014 / Knierim et al. 2012).