Reducing calculation time for derivative blocks in SimMechanics - Simulink

I have a program in SimMechanics that uses 6 derivative blocks (du/dt). It takes about 24 hours to simulate 10 seconds. Is there any way to reduce the calculation time of the Simulink derivative blocks?

You don't say what your integration time step is. If it's on the order of milliseconds, and you're simulating a 10 sec total transient time, that means 10,000 time steps.
The stability limit of the time step is determined by the characteristics of the dynamic system you're simulating.
It's also affected by the integration scheme you're using. Explicit integration is well-known to have stability problems for larger time steps, so if you're using an Euler method of integration you'll be forced to use a small time step.
Maybe you can switch your integration scheme to an implicit method, a 5th-order Runge-Kutta method with error control, or Bulirsch-Stoer. See your documentation for details.
You've given no useful information about the physics of the system of interest, the size of the model, or your simulation choices, so all this is an educated guess on my part.
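If you want to experiment with solver choices from the command line rather than the model configuration dialog, something like the following sketch works; the model name 'myModel', the tolerance, and the stop time are placeholders, not values taken from your setup.

% Minimal sketch: switch a Simulink model to a stiff (implicit) variable-step
% solver from the MATLAB command line. 'myModel' is a placeholder model name.
load_system('myModel');                          % load without opening the GUI
set_param('myModel', 'SolverType', 'Variable-step', ...
                     'Solver', 'ode15s', ...     % implicit solver, good for stiff models
                     'RelTol', '1e-4', ...       % relax the tolerance if acceptable
                     'MaxStep', 'auto');
simOut = sim('myModel', 'StopTime', '10');       % run the 10 s simulation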

Runge-Kutta methods (ode45 or ode23 in MATLAB parlance) are not always well suited to mechanical problems, because they perform best with a variable-step setup. Move to a fixed-step setup and select the solver based on the error order you can accept. Refer to the MATLAB documentation (and to a numerical analysis text or two, :-) ) for deeper detail.
Also consider whether your problem needs a stiff solver. Very large coefficients can drive your solver unstable if not handled properly.
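To see the stiffness point concretely, here is a small MATLAB sketch (not your SimMechanics model, purely an illustration) comparing ode45 and ode15s on the classic stiff van der Pol equation; the stiff solver finishes almost immediately while the non-stiff one crawls.

% Stiff van der Pol oscillator, mu = 1000: a standard stiffness illustration.
mu  = 1000;
vdp = @(t, y) [y(2); mu*(1 - y(1)^2)*y(2) - y(1)];

tic; [t1, y1] = ode15s(vdp, [0 3000], [2; 0]); toc   % stiff solver: fast
tic; [t2, y2] = ode45 (vdp, [0 3000], [2; 0]); toc   % non-stiff solver: expect this to
                                                     % take orders of magnitude longer
plot(t1, y1(:,1));   % the solutions agree, but the cost is very different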

Related

Performance of inline integration in Dymola compared to normal calculation mode using DASSL

I am trying to use inline integration in Dymola to do real-time simulation. I take Modelica.Fluid.Examples.HeatingSystem as an example, but no matter which inline integration method I choose, the simulation always fails.
When I choose an explicit method, Dymola is unable to start the integration.
When I choose an implicit method, Dymola gets stuck.
The exception is the Rosenbrock method, where the error shows that Dymola fails to differentiate some equations.
In my understanding, inline integration means adding the discretization equations to the model equations, so that Dymola can do more symbolic manipulation and obtain a new BLT form. I understand this method can create more algebraic loops and make them hard for the Newton method to solve.
My questions are:
Compared to the normal method in Dymola, what kind of model is more suitable for the inline integration method?
Inline integration is designed to increase the simulation speed, but it can struggle when the nonlinear algebraic loops are hard to solve, so is there a limitation or rule of thumb for using the inline integration method?
Take the Modelica.Fluid.Examples.HeatingSystem as the case, how could I adjust the model to use inline integration?
I know that Dymola only supports the inline integration method with the Euler integrator (fixed-step-size integration algorithms), so why does inline integration only support fixed step sizes? Is it unnecessary to use variable step sizes? If not limited to real-time simulation, I just want to use inline integration to increase the simulation speed; is it possible to combine the inline integration method with DASSL?
It seems stuck for most implicit solvers because:
It is a reasonably sized model that you integrate with a millisecond time step for 6,000 seconds; that means 6 million steps (each involving systems of equations).
There is less feedback during inline integration (since giving that feedback takes too much time).
But implicit Euler isn't stuck - it just takes a couple of minutes to complete.
However, you can increase the step size for implicit Euler a lot for this model; it actually works fine with 1 s and then completes in less than a second.
Inline explicit Euler fails unless you use a much smaller step size (same as non-inline explicit Euler).
Note: The inline solvers in Dymola are all fixed-step-size solvers, so too short a step size will slow down the simulation and too long a step size will cause it to fail, whereas dassl, lsodar, radau, esdirk* all adjust the step size during the integration to avoid both of those problems.
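The stability gap between explicit and implicit Euler that allows the 1 s step here is easy to reproduce outside Dymola. The sketch below is plain MATLAB on the stiff test equation y' = -1000*y, purely as an illustration of why the implicit scheme tolerates much larger steps; it is not Dymola code.

% Stability comparison on y' = lambda*y, lambda = -1000; the exact solution decays to 0.
lambda = -1000;
h      = 0.01;                 % step size well outside the explicit stability region (|1 + h*lambda| > 1)
N      = 500;
ye = 1;  yi = 1;               % initial condition for explicit / implicit Euler
for k = 1:N
    ye(k+1) = ye(k) + h*lambda*ye(k);      % explicit Euler: y_{k+1} = y_k + h*f(y_k)
    yi(k+1) = yi(k) / (1 - h*lambda);      % implicit Euler: y_{k+1} = y_k + h*f(y_{k+1})
end
fprintf('explicit Euler: %g (diverged)\n', ye(end));
fprintf('implicit Euler: %g (decayed towards 0)\n', yi(end));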

How to test whether the ODE integration has reached equilibrium?

I am using Matlab for this project. I have introduced some modifications to the ode45 solver.
I sometimes use up to 64 components, all in the [0,1] interval, and the components sum to 1.
At some intervals I halt the integration process to run a quick check of whether further integration is needed, and I am looking for a clever way to figure this out efficiently.
I have found four cases and I should be able to detect each of them during a check:
1: The system has settled into an equilibrium and all components are unchanged.
2: Three or more components are wildly fluctuating in a periodic manner.
3: One or two components are changing very rapidly, with low amplitude and high frequency.
4: None of the above is true and the integration must be continued.
To give an idea: I have found it good practice to pass the last ~5k states generated by the ode45 solver to a function for this purpose.
In short: how does one detect equilibrium or a nonchanging periodic pattern during ODE integration?
Steady state only occurs when the time derivatives your model function computes are all 0. A periodic solution like you described corresponds rather to a limit cycle, i.e. oscillations around an unstable equilibrium. I don't know if there are methods to detect these cycles; I might update my answer to give more info on that. Maybe an idea would be to see if the last part of the signal correlates with itself (with a delay corresponding to the cycle period).
Note that if you are only interested in the steady state, an implicit method like ode15s may be more efficient, as it can "dissipate" all the transient fluctuations and use much larger time steps than explicit methods, which must resolve the transient accurately to avoid exploding. However, they may also dissipate small-amplitude limit cycles. A pragmatic solution is then to slightly perturb the steady-state values and see if an explicit integration converges towards the unperturbed steady-state.
Something I often do is to look at the norm of the difference between the solution at each step and the solution at the previous step. If this difference is small for a sufficiently high number of steps, then steady state is reached. You can also observe how the norm $\|\frac{dy}{dt}\|$ converges to zero.
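A minimal sketch of these checks (the tolerances, window length, and trial lag are arbitrary assumptions, not prescriptions), given the last block of ode45 output y, the matching time vector t, and the right-hand-side handle f you already pass to ode45:

% y: N-by-64 matrix of the last ~5k states, t: matching times, f: the ODE right-hand side
tolState = 1e-8;               % assumed tolerance on step-to-step state change
tolDeriv = 1e-8;               % assumed tolerance on the derivative norm

stateChange = max(vecnorm(diff(y), 2, 2));        % largest change between consecutive states
derivNorm   = norm(f(t(end), y(end, :).'));       % ||dy/dt|| at the last state

atEquilibrium = (stateChange < tolState) && (derivNorm < tolDeriv);

% Crude periodicity check: correlate the last segment with a delayed copy of itself.
lag = 200;                                         % assumed trial lag (in samples)
seg = y(end-999:end, 1);                           % last 1000 samples of one component
c   = corrcoef(seg(1:end-lag), seg(lag+1:end));    % c(1,2) close to 1 suggests a periodic pattern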
This question is actually better suited for the computational science forum I think.

Solving Delayed Differential Equations using ode45 Matlab

I am trying to solve a DDE using ode45 in Matlab. My question is about the way I am solving this equation: I don't know whether my approach is correct, or whether I am wrong and should use dde23 instead.
I have a following equation:
xdot(t) = A*x(t) + B*U(t-td) + E(t),  where  U(t-td) = K*x(t-td)  and  K is constant.
Normally, when there is no delay in the equation, I solve it using ode45. Now, with the delay, I am again using ode45: I have the exact value of U(t-td) at each step, substitute it, and solve the equation.
Is my solution correct or should I use dde23?
You have two problems here:
ode45 is a solver with adaptive step size. This means that your sampling steps are not necessarily equivalent to the actual integration steps. Instead, the integrator splits a sampling step into several integration steps as needed to achieve the desired accuracy (see this question on Scientific Computing for more information).
As a consequence, you may not be providing the correct delayed value of U at each step of the integration, even if you believe you are.
However, if your sampling steps are sufficiently small, you will indeed have one time step per sampling step. The reason for this is that you effectively disable the adaptive integration by making your time step smaller than needed (and thus waste computation time).
Higher-order Runge-Kutta methods such as ode45 not only make use of the value of the derivative at each integration step, but also evaluate it in between (and no, they cannot provide a usable solution for these in-between time points).
For example, suppose that your delay and integration step are both td = 16. To make the integration step from t = 32 to t = 48, you need to evaluate U not only at t = 32−16 = 16 and t = 48−16 = 32, but also at t = 40−16 = 24. Now, you might say: okay, let's integrate such that we have an integration step at all those time points. But for these integration steps, you again need steps in the middle, e.g., if you want to integrate from t = 16 to t = 24, you need to evaluate U at t = 0, t = 4, and t = 8. You get a never-ending cascade of smaller and smaller time steps.
Due to problem 2, it is impossible to provide the exact states from the past with anything but an Euler-like integrator that only evaluates the derivative at the integration steps themselves – and using one is probably not a good idea in your case. For this reason, it is inevitable to use some sort of interpolation to obtain past values if you want to integrate DDEs with a higher-order integrator. dde23 does this in a sophisticated way, using a good interpolation.
If you only provide U at the integration steps, you are essentially performing a piecewise-constant interpolation, which is the worst possible interpolation and therefore requires you to use very small integration steps. While you can do this if you really want to, dde23 with its more sophisticated piecewise cubic Hermite interpolation can work with much larger time steps and integrate adaptively, and therefore will be much faster. Also, it’s less likely that you somehow make a mistake. Finally, dde23 can deal with very small delays (smaller than the integration step), if you’re into that sort of thing.
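For reference, a minimal dde23 sketch for the system in the question, assuming a constant pre-history; A, B, K, E, td, the history, and the time span below are all made-up placeholders, not values from the question.

% x'(t) = A*x(t) + B*U(t-td) + E(t), with U(t-td) = K*x(t-td)
A  = [0 1; -2 -3];             % placeholder system matrix
B  = [0; 1];                   % placeholder input matrix
K  = [-1 -1];                  % placeholder feedback gain
E  = @(t) [0; 0.1*sin(t)];     % placeholder disturbance
td = 0.5;                      % the delay

ddefun  = @(t, x, Z) A*x + B*(K*Z(:, 1)) + E(t);   % Z(:,1) is x(t - td)
history = [1; 0];                                  % constant state for t <= 0
sol     = dde23(ddefun, td, history, [0 10]);

plot(sol.x, sol.y(1, :));      % dde23 returns a solution structure with its own (dense) output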

time integration stability in modelica

I am constructing a finite-volume model in Dymola which evolves in time and space. The spatial discretization is hard-coded in the equation section; the time evolution is implemented with a term involving der(phi).
Is the time integration of Dymola always numerically stable when using a variable step size algorithm? If not, can I do something about that?
Is the Euler integration algorithm from Dymola the explicit or implicit Euler method?
The Dymola Euler solver is explicit by default (if an inline solver is not selected).
The stability of time integration is going to depend on your integrator. Generally speaking, implicit methods are going to be much better than explicit ones.
But since you mention spatial and time discretization, I think it is worth pointing out that for certain classes of problems things can get pretty sticky. In general, I think elliptic and parabolic PDEs are pretty safe to solve in this way. But hyperbolic PDEs can get very tricky.
For example, the Courant-Friedrichs-Lewy (CFL) condition will affect the overall stability of the solution method. By discretizing in space first, you leave the solver with information only about time, so it cannot check or conform to the CFL condition. My guess is that a variable-step integrator will detect the error introduced by violating the CFL condition, but that it will struggle to identify the proper time step and will probably end up permitting an unacceptably unstable solution.
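For reference, for a 1D advection-dominated problem with wave speed $u$, grid spacing $\Delta x$, and time step $\Delta t$, the explicit-scheme restriction is roughly

$$\frac{|u| \, \Delta t}{\Delta x} \le C_{\max},$$

with $C_{\max}$ on the order of 1 for typical explicit schemes. Once the equations are discretized in space inside the Modelica model, $\Delta x$ is invisible to the time integrator, so it can only discover this restriction indirectly through its error estimates.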

Running a Simulink xPC block at a faster rate than the continuous rate

I have a Simulink xPC Target application that has blocks with discrete states at several different sample rates and some sections using continuous states. My intention in keeping the continuous states is better numerical integration.
What creates the problem: one block reads a device at a very fast rate (500 Hz). The rest of the application can and should run at a slower rate (say, 25 or 50 Hz), because running it at the highest rate would be overkill and because the processor simply cannot squeeze a full application cycle into the 0.002 s of the faster rate. So I need both rates. However, the continuous states in Simulink run, by definition, at the fastest discrete rate of the whole application! This means that everywhere I have continuous states, they're now forced to run at 500 Hz when 25 Hz would do!
Is there a way to force the continuous states in xPC Target to a rate that is not the fastest in the application? Or, alternatively, is there a way to allow a certain block to run at a faster rate than the rest of the application?
You are thinking about continuous solvers in the wrong way - continuous doesn't just mean that it's run as fast as possible; it uses a fundamentally different algorithm to solve the equations than a discrete solver does. Because of this, they must be run at least as fast as the discrete solvers.
From Using Simulink:
Continuous solvers use numerical integration to compute a model's continuous states at the current time step from the states at previous time steps and the state derivatives. Continuous solvers rely on the model's blocks to compute the values of the model's discrete states at each time step.
Mathematicians have developed a wide variety of numerical integration techniques for solving the ordinary differential equations (ODEs) that represent the continuous states of dynamic systems. Simulink provides an extensive set of fixed-step and variable-step continuous solvers, each implementing a specific ODE solution method (see Solvers).
Discrete solvers exist primarily to solve purely discrete models. They compute the next simulation time step for a model and nothing else. They do not compute continuous states and they rely on the model's blocks to update the model's discrete states.
So the upshot is that, no, you can't have the continuous states run more slowly than the fastest discrete rate - otherwise they would, by definition, not be continuous. You should reconsider why you are specifying them as continuous.
What are you trying to accomplish by slowing down the continuous solvers? Is this a simulation time/performance issue?
-Adam
My take on this is that it cannot be done. One way to approach it is to replace the continuous states with discrete ones (perhaps at an intermediate rate, say 100 Hz) and cross my fingers that the loss of precision is bearable.
Maybe it's possible to isolate a block and run it separately at a faster rate somehow, but I don't know.
Truly continuous computation is impossible in a digital processor such as your computer's.
What MATLAB/Simulink means by "continuous" is "I will (dynamically) try to guess what discrete step size is small enough so that discretization error is very small in your application".
If you already know, from knowing your application, that 20 ms (50 Hz) is small enough, then use a discrete solver at 50 Hz.
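If you take the route of replacing the continuous states with discrete ones at a known intermediate rate, one way to do it is to discretize the dynamics offline and drop the result into a discrete block. This is a sketch, assuming the continuous section can be expressed as an LTI transfer function and that the Control System Toolbox is available; the dynamics below are a placeholder.

% Discretize a continuous-time transfer function at 100 Hz (Ts = 0.01 s).
s    = tf('s');
sysc = 1/(0.05*s + 1);              % placeholder continuous dynamics (50 ms time constant)
Ts   = 0.01;                        % intermediate rate: 100 Hz
sysd = c2d(sysc, Ts, 'tustin');     % bilinear (Tustin) discretization

% sysd can now go into a Discrete Transfer Fcn block with sample time Ts,
% so only the 500 Hz device-reading block has to run at the fast rate.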