How large a nonlinear system model can Dymola solve?

In Dymola, I often run into nonlinear-system initialization failures, or a stiff system that is hard to solve, when working with large thermo-fluid systems; a simple system would not show this kind of problem. My questions are:
What is the largest nonlinear system Dymola is able to solve? For example, how many nonlinear equations can I include in my model at most?
Is there any setting in Dymola that allows increasing its capability to solve nonlinear systems?
How can I decrease the number of nonlinear equations in the model without hurting the accuracy of the model?

These are pretty difficult questions to answer in a generally valid fashion. Still, I'll try to share some of my experience with Dymola and non-linear systems.
There is no hard number limiting the size. It depends more on how strongly non-linear the equations are than on their number. I have simulated models containing non-linear systems of size 150 that are pretty stable, while others of size 10 can break...
There are multiple perspectives on this:
I have worked on models that made the C compiler run out of memory during compilation. If you have this sort of problem, forcing 64-bit compilation by setting Advanced.CompileWith64=2 can help; then you shouldn't run out of memory any more. This refers only to the size of the system.
Performance for non-linear systems can be improved by activating DAE mode via Advanced.Define.DAEsolver=true. This does not work with all solvers, though.
In addition to the above, it can help to set Advanced.MoveEquationsToDynamics=true, for which the manual states: "It forces the integrator to solve the nonlinear equations each integrator step and thereby it also updates the initial guesses more often."
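These flags can be set in the Dymola command window or collected in a small script; a minimal sketch (flag names as given above; defaults and availability may differ between Dymola versions):

    // Dymola command-window script collecting the flags mentioned above;
    // run it before translating the model.
    Advanced.CompileWith64 = 2;               // force 64-bit compilation
    Advanced.Define.DAEsolver = true;         // activate DAE mode (not all solvers)
    Advanced.MoveEquationsToDynamics = true;  // re-solve nonlinear equations each step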
As mentioned by Erik, the homotopy() operator can be very important, as it helps the solver converge in case of difficult initialization.
This is very specific to the model. Decoupling can help, e.g. splitting the system into smaller systems by adding energy-storing elements/states. This can be done based on the physics of the system and is the preferable solution if possible. As a (more artificial) alternative, filters/delays can be added, as sketched below. Usually this has a negative effect on accuracy.
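As a minimal Modelica sketch of the artificial alternative, a first-order filter block that turns an algebraic signal into a state and thereby cuts the loop at that point (the block and its time constant are made up for illustration):

    block DecouplingFilter
      "Hypothetical first-order filter used to cut an algebraic loop"
      parameter Real T = 1e-3 "Filter time constant; too large distorts the dynamics";
      Modelica.Blocks.Interfaces.RealInput u "Signal entering the loop";
      Modelica.Blocks.Interfaces.RealOutput y "Filtered signal, now a state";
    equation
      der(y) = (u - y)/T; // y becomes a state, so the algebraic loop is broken here
    end DecouplingFilter;

The smaller T is chosen, the smaller the accuracy penalty, but a very small T makes the system stiffer.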

I very much agree with Markus's advice, but would also like to remind you of Modelica's homotopy operator. A well-chosen simplified model can greatly help Dymola initialize a model with a large and difficult non-linear system.
In general, good initial guesses are very important when solving non-linear systems. Using homotopy is simply an implicit way to provide these good guesses.
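As an illustration, a minimal sketch of the homotopy() operator (the flow law and parameter values are made up; the operator itself is standard Modelica):

    model HomotopyExample
      "Sketch: blend from a linear to a nonlinear flow law during initialization"
      parameter Real k = 2 "Hypothetical nonlinear flow coefficient";
      parameter Real R = 10 "Hypothetical linear resistance for the simplified law";
      Real dp = 1e5*sin(time) "Made-up pressure drop";
      Real m_flow "Mass flow rate";
    equation
      // actual: hard-to-solve square-root law; simplified: linear law that gives
      // the solver an easy starting point during initialization
      m_flow = homotopy(actual = k*sign(dp)*sqrt(abs(dp)),
                        simplified = dp/R);
    end HomotopyExample;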

Related

What impacts simulation runtime in Modelica?

In order to make my model simulations in Modelica run faster, I am asking the following question:
What impacts simulation runtime in Modelica?
I will appreciate any help.
Edit: More details can be found in my book "Modelica by Application -- Power Systems" (URL)
What impacts the runtime performance?
I. Applied compilation techniques
Naturally, object-oriented Modelica models, even simple ones, can correspond to a large-scale system of equations. Modelica simulation environments usually optimize such generated models:
reduce the number of equations by removing trivial ones (i.e. alias equations; see the toy example below),
decompose a large block of the equation system, via the so-called BLT transformation, into smaller cascaded blocks of equation systems that can be solved faster in a sequential manner rather than as a single block of equations,
solve so-called large algebraic loops using tearing methods.
A compiler can theoretically even go further and attempt to solve blocks of the equation system analytically, where possible, instead of conducting expensive numerical integration.
Thus, the runtime performance is influenced by the underlying Modelica compiler and how far it exploits equation-based compiler methods. Usually some extra settings need to be activated to exploit all such techniques, and digging through the documentation to enable them is needed.
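As a toy illustration of the first point, alias elimination (made-up model; the compiler removes y symbolically, so only one state equation is actually integrated):

    model AliasExample
      Real x(start = 1);
      Real y;
    equation
      der(x) = -x;
      y = x; // alias equation, eliminated during symbolic preprocessing
    end AliasExample;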
II. The nature of the model
The nature of the model influences the runtime performance, particularly:
Is the model a large-scale system or a small-scale one?
Is it strongly nonlinear or semi-linear?
Is the optimized equation system corresponding to the model sparse (i.e. a large set of equations, each involving only a few variables, e.g. power-system network models) or dense (e.g. multibody systems and biochemical networks)?
Is it a stiff system (e.g. a system with several subsystems, some exhibiting very quick dynamics and others very slow dynamics)?
Does the system exhibit a large number of state events?
...
III. The choice of the solver
The mentioned characteristics of a given model typically influence the ideal choice of solver. The solver can largely influence the runtime performance (and accuracy). A strategy for choosing the solver could proceed in the following order:
For a non-stiff, weakly nonlinear model, the ideal choice would be an explicit method, e.g. a single-step Runge-Kutta or a multi-step Adams-Bashforth method of higher order. If accuracy is less significant, one can attempt an explicit method of lower order, which executes faster. Naturally, increasing the solver error tolerance also speeds up the simulation.
However, it can happen, particularly for large-scale systems, that numerical stability is more difficult to guarantee. Then smaller step sizes (and/or a smaller error tolerance) should be attempted for explicit solvers. In this case, an implicit solver with a larger error tolerance can be comparable to an explicit solver with a smaller tolerance.
Actually, it is wise to try both methods, comparing the accuracy of the results and figuring out whether explicit methods produce comparably accurate results; a scripted comparison is sketched below. As a warning, this is just a heuristic, since the system does not necessarily show the same behavior over the entire space of admissible parameter values.
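In Dymola, for instance, such a comparison can be scripted with the standard simulateModel command; a sketch (the model name, solver choices and tolerances are placeholders):

    // Run the same model once with an explicit and once with an implicit solver
    simulateModel("MyPlant", stopTime=100, method="Dopri45", tolerance=1e-4);
    simulateModel("MyPlant", stopTime=100, method="Dassl", tolerance=1e-6);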
For increasing nonlinearity of the model, the choice tends more towards modern solvers making use of variable step-size techniques. Here I would start with implicit variable-step Runge-Kutta (i.e. single-step) methods and/or the implicit variable-step multi-step Adams-Moulton methods. For both of these classes, one can enlarge the solver tolerance and/or lower the solver error order and check whether the simulation produces comparably accurate solutions (but with a faster runtime).
Implementations of the previous classes of methods are usually less conservative with error control. Therefore, for increasing stiffness of the model, or for poorly scaled models, the choice tends more towards modern solvers implementing the numerically more stable backward differentiation formulas (BDF), such as DASSL, CVODE and IDA. These solvers (can) also make use of the so-called Jacobian of the system for adaptive step-size control.
A modern solver like LSODAR, which switches between explicit and implicit solvers and also performs automatic error-order control (switching between different orders), is a good choice if one does not know much about the behavior of the model. Some Modelica environments may have an advanced solver making use of such automatic switching. However, if one knows the behavior of the model in advance, it is also wise to consider the other suggested methods, since LSODAR may not perform the most optimal switching when needed.
x. ...
Comparisons between solvers from classes 3, 4 and 5 are not straightforward to judge, and they also depend on whether the system is continuous or hybrid, i.e. on the underlying root-finding algorithms.
Usually DASSL can be slower, as it is more conservative with step-size/error control, so IDA and others may be faster. Some published works exist that give some intuition regarding such comparisons. It would be nice to have a Modelica library including all possible types of models and running all possible benchmarks w.r.t. accuracy and runtime, in order to draw more solver/model-specific conclusions. A library that could be used and extended for such a purpose is the ScalableTestSuite Modelica library.
IV. Advanced aspects
There have been some published works in the Modelica community on making use of sparse solvers to exploit the expected sparsity of the Jacobian. If such a feature is provided by the simulation environment, it usually improves the runtime performance of large-scale models significantly.
For models with a massive number of events, numerical integration in the standard way can be extremely inefficient. Particularly challenging is the situation where a triggered event triggers further sets of state events, so that a whole queue of state events has to be evaluated. The root-finding algorithm can trigger yet more events, and the solver can end up hanging in a so-called chattering situation. There are advanced strategies for such situations, so-called sliding-mode handling; however, I am not sure how far Modelica simulation environments handle this issue. A common model-level mitigation is sketched below.
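A minimal sketch of a classic chattering source and one common model-level mitigation, smoothing the discontinuity (made-up model; eps is an artificial boundary-layer width and trades accuracy for robustness):

    model Chattering
      Real x(start = 1);
      parameter Real eps = 1e-3 "Hypothetical smoothing width";
    equation
      // der(x) = -sign(x);                // chatters: an event at every zero crossing
      der(x) = -noEvent(x/(abs(x) + eps)); // smooth approximation, no events
    end Chattering;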
One set of suggested solutions (also for systems with a high degree of stiffness) is to employ so-called QSS (quantized state system) methods. These are significantly beneficial, particularly for models that cannot be solved using explicit solvers. There are both explicit and implicit QSS methods. There have also been other worth-trying numerical integration strategies where only subsets of the entire equation system are evaluated when approximating a state event. Here I am not sure about the availability of such solvers.
Some simulation environments differentiate between two simulation modes, which can influence the simulation runtime: ODE mode and DAE mode. In the former, the system is reduced to an ODE system with potentially additional cascaded blocks of nonlinear equation systems; in DAE mode, the system is reduced to a DAE system of index one. ODE mode is beneficial for dense systems exhibiting large cascaded blocks of nonlinear equations, which are solved using so-called tearing methods instead of numerical integration. DAE mode is beneficial for large-scale sparse systems solved using sparse solvers. I think ODE mode is usually activated by choosing CVODE or LSODAR, while DAE mode is activated by choosing IDA or DASSL, but digging through the documentation here is also recommended.
There are also some published works on so-called multirate numerical integration solvers. Here, in each numerical integration step, only the numerically significant portion of the equation system is integrated, not the entire system. Hence, this is significantly beneficial for large-scale stiff systems.
x. ...
V. Parallelization
Obviously, making use of multicore CPUs / GPUs to execute the numerical integration in parallel, among other approaches to parallelization, can speed up the computations.
VI. Quite advanced topics
To draw attention to some excellent research attempts, some of which can be exploited for speeding up the simulation runtime of large-scale (loosely coupled) hybrid networked models, I list them here as well. Speed-ups can be obtained by making use of hybrid paradigms, the agent-based modeling paradigm and/or the multimode paradigm. The idea behind these is that a loosely coupled system can be described as several smaller subsystems, with communication among the subsystems conducted only when necessary. This can be beneficial, and the reasons can be traced by searching for the relevant publications. There has been excellent work in some of the mentioned directions, and it is worth continuing it where it has stopped, if that is the case.
Remark: Not every solver mentioned here is present in all Modelica simulation environments. If a solver is not provided as a choice, one can still produce an FMU-ME (Functional Mock-up Unit for Model Exchange) and write code that numerically integrates this FMU with the desired solver.
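In Dymola, for instance, the export step can be scripted; a minimal sketch (the model name is a placeholder, and the available option arguments differ between versions):

    // Export an FMU that an external tool can then integrate
    // with a solver of its own choice ("MyPlant" is made up)
    translateModelFMU("MyPlant");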
Warning: Some of the above aspects are based on personal experience with particular types of models and are not necessarily true for all model types.
A few suggested readings (I am certainly missing a lot of key publications):
F. Casella, Simulation of Large-Scale Models in Modelica: State of the Art and Future Perspectives, Modelica 2016
Liu Liu, Felix Felgner and Georg Frey, Comparison of 4 numerical solvers for stiff and hybrid systems simulation, Conference 2010
Willi Braun, Francesco Casella and Bernhard Bachmann, Solving large-scale Modelica models: new approaches and experimental results using OpenModelica, Modelica 2017
Erik Henningsson, Hans Olsson and Luigi Vanfretti, DAE Solvers for Large-Scale Hybrid Models, Modelica 2019
Tamara Beltrame and François Cellier, Quantised state system simulation in Dymola/Modelica using the DEVS formalism, Modelica 2006
Victorino Sanz, Federico Bergero and Alfonso Urquia, An approach to agent-based modeling with Modelica, Simpra 2010

How to deal with failure to solve nonlinear equations during simulation in Dymola?

I built a model of a solar power plant with a Rankine cycle in Dymola. Even though the initialization works fine, at a time of 2840 s there is a failure to solve the nonlinear equations, which causes the simulation to stop.
As shown in the following screenshot, Dymola recommends giving better start values, but that doesn't make sense to me, because the failure happens during the simulation instead of during initialization. And with better start values, the situation doesn't improve at all.
My question is:
How should I deal with the failure to solve nonlinear equations during simulation?
The error message indicates that the solver was not able to find a solution to a nonlinear system of equations in your model. This could mean either that there is no solution to the system or just that the solver is not able to find it.
The first easy steps would be to try a lower tolerance in the simulation settings, and to use another solver that might suit your problem better. Dassl can handle nonlinear systems quite well. If these steps don't work, it might be that there just isn't a solution to your problem and you made a wrong assumption or accidentally have a wrong parameter somewhere.

Managing Navier-Stokes PDEs by means of SBF in Dymola

Has anyone tried to implement the Navier-Stokes Partial Differential Equations (PDEs) in Modelica?
I found the method of spatial basis functions (SBF), which, by means of numerical manipulations, yields Ordinary Differential Equations (ODEs) that can be handled by Dymola.
Regards,
Victor
The aim of the method I mentioned before is to convert PDEs into ODEs, so the issues with the CFL coefficient would disappear. The problem is that the Modelica.Fluid elements define their equations only in terms of the variables at the two ends of each component,
e.g. dp = port_a.p - port_b.p,
but with that sort of methodology, variables such as pressure, density and mass flow would also be functions of the surrounding components... it would be a kind of massive interaction between all the components.
I would like to see an example in Modelica, because I have hardly found any information about that topic linked to Modelica.
Modelica is a language for modeling behavior described by DAEs. As such, as long as you can create a system of ODEs, you should be able to express your problem in Modelica.
However, if your PDEs are hyperbolic, the wave dynamics in the equations might cause some issues with simulation. This is because the CFL condition imposes limits on the time step that an ordinary differential equation solver will be unaware of. If the solver includes error control, it will probably manage to get a solution, but it may run quite slowly because it won't know how to explicitly limit the simulation step size. If it doesn't include error control and it violates the CFL condition, the system will go unstable. Note that this only applies to systems where the CFL condition applies.
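For reference, the 1-D advection form of the CFL condition, with wave speed v, time step Δt and grid spacing Δx, is:
v·Δt/Δx ≤ C_max
where C_max ≈ 1 for typical explicit schemes; implicit schemes can tolerate larger values at the cost of accuracy.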

Factors limiting MILP solver calculation speed?

I am fairly new to Mixed Integer Linear Programming, and I was hoping someone could clarify a performance question for me. Basically, I am performing a calculation with about 34 decision variables, and my calculation time is around 5 seconds. I would ideally like to get the calculation time down into the sub-1-second range.
Currently I am using the CBC solver with MATLAB, but as I understand it, this is a single-threaded solver. Most of the MILP solvers I've seen seem to pride themselves on large-project performance with 1k+ variables, knocking compute times down from days to hours, but only a few of the expensive ones are even multi-threaded. Processor speed would seem to only go so far with such a problem, so there has to be something that can be done on the software side.
In a situation like mine, what factors play a role in the calculation time? In theory, would a solver like Gurobi be able to ramp up and make a discernible difference over CBC on such a small problem?
When solving MIPs, multi-threading usually doesn't help that much compared to other applications. One thing more sophisticated solvers do better than CBC is presolving. It may very well happen that all decision variables can be eliminated/fixed in the root node. In general, it is not possible to predict the solving time of a MIP based on its size; the solving process is simply too complex.
To answer your question: I do suppose that you will see a significant speedup when trying another solver (Gurobi, Xpress, CPLEX, SCIP, etc.). It is not guaranteed though, and you might also witness a slowdown.
Most of the commercial MILP solvers would probably consider most problems with only 1000 variables to be tiny. It is very common to solve problems with hundreds of thousands or millions of variables. Note, however, that there are also some really small problems that are considered very hard, or that are still open in the sense that nobody has solved them to a proven optimal solution. Basically, the time taken to solve these problems is enormously problem-dependent.
If you want to see how fast another solver can solve your problem, try submitting it as e.g. an MPS file to the NEOS solver service at http://www.neos-server.org/neos/

Time integration stability in Modelica

I am constructing a finite-volume model in Dymola which evolves in time and space. The spatial discretization is hard-coded in the equation section; the time evolution is implemented with a term consisting of der(phi).
Is the time integration in Dymola always numerically stable when using a variable step-size algorithm? If not, can I do something about it?
Is the Euler integration algorithm from Dymola the explicit or implicit Euler method?
The Dymola Euler solver is explicit by default (if an inline solver is not selected).
The stability of time integration is going to depend on your integrator. Generally speaking, implicit methods are going to be much better than explicit ones.
But since you mention spatial and time discretization, I think it is worth pointing out that for certain classes of problems things can get pretty sticky. In general, I think elliptic and parabolic PDEs are pretty safe to solve in this way. But hyperbolic PDEs can get very tricky.
For example, the Courant-Friedrichs-Lewy (CFL) condition will affect the overall stability of the solution method. But by discretizing in space first, you leave the solver with information only about time, so it cannot check or conform to the CFL condition. My guess is that a variable-step integrator will detect the error introduced by violating the CFL condition, but that it will struggle to identify the proper time step and will probably end up permitting an unacceptably unstable solution.
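To make the setting concrete, here is a minimal hard-coded 1-D upwind finite-volume advection model of the kind described in the question (all names and values are made up). For an explicit fixed-step solver, the CFL limit v*dt/dx <= 1 applies to this grid:

    model AdvectionFV "Hypothetical 1-D upwind finite-volume advection"
      parameter Integer n = 50 "Number of volumes";
      parameter Real L = 1.0 "Domain length";
      parameter Real v = 1.0 "Advection velocity";
      parameter Real dx = L/n "Cell size";
      Real phi[n](each start = 0) "Transported quantity";
    equation
      der(phi[1]) = -v*(phi[1] - 1.0)/dx; // inlet boundary value 1.0 (made up)
      for i in 2:n loop
        der(phi[i]) = -v*(phi[i] - phi[i - 1])/dx; // first-order upwind scheme
      end for;
    end AdvectionFV;

With a variable-step, error-controlled solver, the step size will typically be forced down near the stability limit, which matches the behavior described above.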