I am new to the topic of co-simulation. I am familiar with the definitions (based on Trcka, "Comparison of Co-Simulation Approaches for Building and HVAC/R System Simulation"):
Quasi-dynamic coupling, also called loose coupling or ping-pong coupling, where distributed models run in sequence, and one model uses the known output values of the coupled model, based on the values at the previous time steps.
Fully-dynamic coupling, also called strong coupling or onion coupling, where distributed models iterate within each time step until the error estimate falls within a predefined tolerance.
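To check my understanding, here is a minimal runnable sketch of the two schemes (my own toy construction: two coupled scalar models standing in for FMUs, each integrating a macro step with its input held constant):

```python
# Two coupled scalar models dx1/dt = -x1 + u1 and dx2/dt = -x2 + u2,
# where each model's input u is the other's output.

def advance(x, u, H, n=10):
    for _ in range(n):                 # internal explicit Euler sub-steps
        x += (H / n) * (-x + u)
    return x

def loose_step(x1, x2, H):
    # ping-pong: both models use the other's output from the
    # previous communication point (no iteration within the step)
    return advance(x1, x2, H), advance(x2, x1, H)

def strong_step(x1, x2, H, tol=1e-10):
    # onion: redo the same macro step, feeding back updated outputs,
    # until the coupling signal converges (fixed-point iteration)
    y2 = x2
    while True:
        x1_new = advance(x1, y2, H)    # restart from the same state x1
        x2_new = advance(x2, x1_new, H)
        if abs(x2_new - y2) < tol:
            return x1_new, x2_new
        y2 = x2_new

x1 = x2 = 1.0
for _ in range(100):
    x1, x2 = strong_step(x1, x2, 0.05)
print(x1, x2)
```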
My question: Is FMI co-simulation a loose coupling method? And what is FMI model exchange? From my understanding, it is not a strong coupling method. Am I understanding correctly that in model exchange, the tool that imports the FMU collects all ODE and algebraic equations and solves the entire system with a single solver? So it is more a standard for describing models in a unified way so that they can be integrated into different simulation environments?
Thank you very much for your help
FMI/Model-exchange is targeted at the distribution of models (systems of differential algebraic equations), whereas FMI/Co-Simulation targets the distribution of models along with an appropriate solver.
Due to the many challenges in coding solvers with appropriate support for rollback, it is hard to come by exported FMUs that can be used in a strongly coupled co-simulation.
So, to answer your question: it depends on the scenario. If you wish to simulate a strongly coupled physical system using FMI co-simulation, and you wish to do so with multiple FMUs, they had better support rollback to avoid stability issues. If you have, for example, a scenario where one FMU simulates the physical system and another FMU simulates a controller, then you may do well with a loose coupling approach.
It is hard to pinpoint exactly how strongly coupled two FMUs need to be before you need to apply a stabilization technique.
Have a look at the following experiment, which compares a strong coupling master with a loose coupling one.
Both masters are used for the co-simulation of a strongly coupled mechanical system:
https://github.com/into-cps/case-study_mass-springer-damper
Also, see the following report (disclosure: I contributed to it :) ) for an introduction to these concepts:
https://arxiv.org/pdf/1702.00686v1
I'm not an expert on simulation solvers, but I'm involved in an implementation of an FMI co-simulation slave.
First, you are entirely right about model exchange.
Regarding co-simulation: the solver sets the input values, does a step, and reads the output values. There are no interactions within a time step. I would say that is more of a quasi-dynamic coupling.
But it is possible for the solver to cancel the previous step in order to refine the time step and redo the computation, and so on until the error estimate falls within a predefined tolerance. That is closer to a fully-dynamic coupling.
Because it is the responsibility of the solver (the co-simulation master) to set/get the input/output values and to perform steps (and refine time steps), the kind of coupling with another model will depend on the master. A sketch of such a step-refining master loop is shown below.
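A minimal runnable sketch of step refinement with rollback; the ToySlave class below is a stand-in for an FMU, and its save_state/restore_state methods mirror FMI 2.0's fmi2GetFMUstate/fmi2SetFMUstate (the error estimator is a crude step-halving comparison, my own choice):

```python
class ToySlave:
    """Stand-in for a co-simulation slave with rollback support."""
    def __init__(self):
        self.x = 1.0
    def do_step(self, t, dt):            # one explicit Euler macro step
        self.x += dt * (-50.0 * self.x)  # stiff-ish test dynamics
    def save_state(self):
        return self.x
    def restore_state(self, s):
        self.x = s

def step_with_rollback(slave, t, dt, tol=1e-4, dt_min=1e-8):
    while dt > dt_min:
        saved = slave.save_state()
        slave.do_step(t, dt)             # try the full step
        x_full = slave.x
        slave.restore_state(saved)
        slave.do_step(t, dt / 2)         # compare against two half steps
        slave.do_step(t + dt / 2, dt / 2)
        if abs(slave.x - x_full) < tol:  # crude error estimate
            return t + dt                # accept (keeps half-step result)
        slave.restore_state(saved)       # roll back and refine the step
        dt /= 2
    raise RuntimeError("step size underflow")

slave, t = ToySlave(), 0.0
while t < 1.0:
    t = step_with_rollback(slave, t, 0.1)
print(slave.x)
```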
Regards,
In order to make my model simulations in Modelica run faster, I am asking the following question:
What impacts simulation runtime in Modelica?
I will appreciate any help possible.
Edit: more details can be found in my book "Modelica by Application -- Power Systems" (URL).
What impacts the runtime performance?
I. Applied compilation techniques
Naturally, object-oriented Modelica models, even trivial ones, correspond to large-scale systems of equations. Modelica simulation environments usually optimize such generated models:
reduce the number of equations by removing trivial ones (i.e. alias equations),
decompose a large block of the equation system with the so-called BLT transformation into smaller cascaded blocks that can be solved faster in a sequential manner rather than as a single block of equations,
solve so-called large algebraic loops using tearing methods.
Compilers can theoretically even go as far as solving blocks of the equation system analytically, where possible, instead of conducting expensive numerical integration.
Thus, runtime performance is influenced by the underlying Modelica compiler and by how far it exploits equation-based compiler methods. Usually some extra settings need to be activated to exploit all such techniques; digging into the documentation to enable these settings is needed.
II. The nature of the model
The nature of the model would influence the runtime performance, particularly:
Is the model a large-scale system or a small-scale one?
Is it strongly nonlinear or only weakly nonlinear?
Is the optimized equation system corresponding to the model sparse (i.e. a large set of equations, each involving a small number of variables, e.g. power system network models) or dense (e.g. multibody systems and biochemical networks)?
Is it a stiff system (e.g. a system with several subsystems, some exhibiting very quick dynamics and others very slow dynamics)?
Does the system exhibit a large number of state events?
...
III. The choice of the solver
The mentioned characteristics of a given model typically influence the ideal choice of the solver. The solver can largely influence the runtime performance (and accuracy). A strategy for solver choice could proceed in the following order (a small solver-comparison sketch follows the list):
For a non-stiff, weakly nonlinear model, the ideal choice would be an explicit method, e.g. single-step Runge-Kutta or multi-step Adams-Bashforth of higher order. If accuracy is less significant, one can attempt an explicit method of a lower order, which executes faster. Naturally, increasing the solver error tolerance also speeds up the simulation.
However, it can happen, particularly for large-scale systems, that numerical stability is more difficult to guarantee. Then smaller step sizes (and/or tighter error tolerances) should be attempted with explicit solvers. In this case, an implicit solver with a larger error tolerance can be comparable with an explicit solver with a smaller tolerance.
Actually, it is wise to try both methods, compare the accuracy of the results, and figure out whether explicit methods produce comparably accurate results. As a warning, though, this is just a heuristic, since the system does not necessarily have the same behavior over the entire space of admissible parameter values.
For increasing nonlinearity of the model, the choice tends more towards modern solvers making use of variable step-size techniques. Here I would start with implicit variable-step Runge-Kutta (i.e. single-step) methods and/or the implicit variable-step multi-step Adams-Moulton methods. For both of these classes, one can enlarge the solver tolerance and/or lower the solver error order and figure out whether the simulation produces comparably accurate solutions (but with a faster runtime).
Implementations of the previous classes of methods are usually less conservative with error control; therefore, for increasing stiffness of the model or badly scaled models, the choice tends more towards modern solvers implementing the numerically more stable backward differentiation formulas (BDF), such as DASSL, CVODE, and IDA. These solvers (can) also make use of the Jacobian of the system for adaptive step-size control.
A modern solver like LSODAR, which switches between explicit and implicit solvers and also performs automatic error-order control (switching between different orders), is a good choice if one does not know much about the behavior of the model. Maybe some Modelica environments have an advanced solver making use of automatic switching. However, if one knows the behavior of the model in advance, it is also wise to use the other suggested methods, since LSODAR may not perform the most optimal switching when needed.
x. ...
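As a quick way to experiment with the explicit-versus-implicit trade-off outside any Modelica tool, here is a small sketch using SciPy's solve_ivp on the stiff Van der Pol oscillator; the solver names, tolerances, and the value of MU are my own choices, not prescriptions from the list above:

```python
import time
from scipy.integrate import solve_ivp

MU = 100.0  # larger MU makes the Van der Pol oscillator stiffer

def vdp(t, y):
    # Van der Pol oscillator in first-order form
    return [y[1], MU * (1.0 - y[0] ** 2) * y[1] - y[0]]

# RK45 is explicit, BDF is implicit, LSODA switches automatically
for method in ("RK45", "LSODA", "BDF"):
    t0 = time.perf_counter()
    sol = solve_ivp(vdp, (0.0, 200.0), [2.0, 0.0], method=method,
                    rtol=1e-6, atol=1e-8)
    print(f"{method:6s} steps={sol.t.size:7d} "
          f"time={time.perf_counter() - t0:.2f}s")
```

Increasing MU makes the cost of the explicit method explode while the implicit methods remain cheap, which is exactly the stiffness effect described above.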
The comparisons between solvers from classes 3, 4 and 5 are not straightforward to judge, and they also depend on whether the system is continuous or hybrid, i.e. on the underlying root-finding algorithms.
Usually DASSL can be slower, as it is more conservative with step-size/error control, so IDA and others appear faster. Some published works exist that can give some intuition regarding such comparisons. It would be nice to have a Modelica library including all possible types of models and running all possible benchmarks w.r.t. accuracy and runtime, to draw more solver/model-specific conclusions. A library that could be used and extended for such a purpose is the ScalableTestSuite Modelica library.
IV. Advanced aspects
There have been some published works in the Modelica community on making use of sparse solvers to exploit the expected sparsity of the Jacobian. If such a feature is provided by the simulation environment, it usually improves the runtime performance of large-scale models significantly.
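The idea is easy to try in SciPy: passing a Jacobian sparsity pattern lets the implicit methods factorize a sparse rather than a dense matrix. A toy semi-discretized 1-D heat equation (my own example, not from any Modelica tool):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.sparse import diags

N = 2000                       # number of finite-volume cells
dx = 1.0 / N

def heat(t, u):
    # central-difference semi-discretization of u_t = u_xx, insulated ends
    du = np.empty_like(u)
    du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    du[0] = (u[1] - u[0]) / dx**2
    du[-1] = (u[-2] - u[-1]) / dx**2
    return du

# the Jacobian is tridiagonal; telling the solver so avoids a dense factorization
pattern = diags([1, 1, 1], [-1, 0, 1], shape=(N, N))

u0 = np.exp(-((np.linspace(0, 1, N) - 0.5) ** 2) / 0.01)
sol = solve_ivp(heat, (0.0, 0.1), u0, method="BDF", jac_sparsity=pattern)
print(sol.t.size, "time steps")
```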
For models with a massive number of events, numerical integration in the standard way can be extremely inefficient. Particularly challenging is when an event is triggered and other sets of state events are triggered in turn, so that a queue of state events has to be evaluated. The root-finding algorithm can trigger further events, and the solver can hang in a so-called chattering situation. There are advanced strategies for such situations (so-called sliding-mode handling), but I am not sure how far Modelica simulation environments handle this issue.
One set of suggested solutions (also for systems with a high degree of stiffness) is to employ so-called QSS (quantized state system) methods. These would be significantly beneficial, particularly for models that cannot be solved using explicit solvers. There are both explicit and implicit QSS methods. There have also been other worthwhile numerical integration strategies where only a subset of the entire equation system is evaluated when approximating a state event. Here I am not sure about the availability of such solvers.
Some simulation environments differentiate between two simulation modes which can influence the simulation runtime: ODE mode and DAE mode. In the first mode, the system is reduced to an ODE system with potentially additional cascaded blocks of nonlinear equation systems. In DAE mode, the system is reduced to a DAE system of index one. The former mode would be beneficial for dense systems exhibiting large cascaded blocks of nonlinear equations, which can be solved using so-called tearing methods instead of numerical integration. DAE mode would be beneficial for large-scale sparse systems solved using sparse solvers. I think ODE mode is usually activated by choosing CVODE or LSODAR, while DAE mode is activated by choosing IDA or DASSL, but digging into the documentation here and there is also recommended.
There are also some published works regarding so-called multirate numerical integration solvers. Here, in each numerical integration step, only the numerically significant portion of the equation system, not the entire equation system, is integrated. Hence, this is significantly beneficial for large-scale stiff systems.
x. ...
V. Parallelization
Obviously, making use of multiple cores / GPUs for executing numerical integration in parallel, among other approaches to parallelization, can speed up computations.
VI. Quite advanced topics
To draw attention to some excellent research attempts, some of which can be exploited for speeding up the simulation runtime performance of large-scale (loosely coupled) hybrid networked models, I am listing this here as well. Speed-ups can be obtained by making use of hybrid paradigms, the agent-based modeling paradigm, and/or the multimode paradigm. The idea behind these is that it is possible to describe a loosely coupled system as several smaller subsystems and conduct the communication among the subsystems only when necessary. This can be beneficial, and the reasons can be traced by searching for relevant publications. There has been some excellent work in some of the mentioned directions, and it is worth continuing it where it has stopped, if that is the case.
Remark: not every mentioned solver is necessarily present in all Modelica simulation environments. If a solver is not provided as a choice, one can still produce an FMU-ME (Functional Mock-up Unit for Model Exchange) and write code that numerically integrates this FMU with a desired solver.
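A rough sketch of the idea: once the FMU-ME exposes its state derivatives, any external integrator can drive it. Here fmu_derivatives is a plain Python stand-in for what would really be wrapper calls around fmi2SetTime / fmi2SetContinuousStates / fmi2GetDerivatives (e.g. via a library such as FMPy); a real driver must additionally handle events:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fmu_derivatives(t, x):
    # stand-in dynamics; in reality this would set the time and the
    # continuous states on the FMU and read back the derivatives
    return [-2.0 * x[0] + np.sin(t)]

x0 = [1.0]   # in reality obtained from the FMU's initial continuous states
sol = solve_ivp(fmu_derivatives, (0.0, 10.0), x0, method="LSODA")
print(sol.y[0, -1])
```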
Warning: some of the above aspects are based on personal experience with particular types of models and are not necessarily true for all model types.
A few suggested readings (and I am definitely missing a lot of key publications):
F. Casella, Simulation of Large-Scale Models in Modelica: State of the Art and Future Perspectives, Modelica 2016
Liu Liu, Felix Felgner and Georg Frey, Comparison of 4 numerical solvers for stiff and hybrid systems simulation, Conference 2010
Willi Braun, Francesco Casella and Bernhard Bachmann, Solving large-scale Modelica models: new approaches and experimental results using OpenModelica, Modelica 2017
Erik Henningsson and Hans Olsson and Luigi Vanfretti, DAE Solvers for Large-Scale Hybrid Models, Modelica 2019
Tamara Beltrame and François Cellier, Quantised state system simulation in Dymola/Modelica using the DEVS formalism, Modelica 2006
Victorino Sanz and Federico Bergero and Alfonso Urquia, An approach to agent-based modeling with Modelica, Simpra 2010
We want to publish an open-source tool for integrating reinforcement learning into smart grid optimization.
We use OpenModelica as the GUI and PyFMI for the import into Python and Gym.
Nearly everything is running, but a possibility to connect or disconnect additional loads during the simulation is missing. All we can do for now is vary the parameters of existing loads, which gives some flexibility, but way less than the possibility to switch loads on and off.
Using the switches implemented in OpenModelica is not really an option. They just place a resistor at that spot, giving it either a very low or a very high resistance. First, it's not really decoupled, and second, high resistances make the ODE system stiff, which makes it really hard (and costly) to solve. In tests, our LSODA solver (in stiff cases basically a BDF method) often ran into numerical errors, regardless of how the Jacobian was calculated (analytically by directional derivatives or with finite differences).
Has anyone an idea how we can implement a real "switching effect"?
Best regards,
Henrik
Ideal connection and disconnection of components during simulation requires structure variability, which is not fully supported by Modelica (yet). See also this answer: https://stackoverflow.com/a/30487641/8725275
One solution for this problem is to translate all possible model structures in advance and switch the simulation model when certain conditions are met (a rough sketch of such a switching loop is shown below). As there is some overhead involved, this approach only makes sense when the model is not switched very often.
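A minimal runnable sketch of the idea (my own toy construction, not DySMo's actual API): two pre-built model structures, an event that triggers the switch, and a state transfer between them. Here an RC circuit disconnects its load when the capacitor voltage crosses a threshold:

```python
from scipy.integrate import solve_ivp

# Each "structure" is a separately translated model; here, a separate RHS.
def mode_connected(t, v):      # load draws current
    return [(1.0 - v[0]) / 1.0 - v[0] / 2.0]

def mode_disconnected(t, v):   # load switched off
    return [(1.0 - v[0]) / 1.0]

def switch_event(t, v):        # disconnect when v reaches 0.6
    return v[0] - 0.6
switch_event.terminal = True   # stop the integration at the event

t, v = 0.0, [0.0]
for mode in (mode_connected, mode_disconnected):
    events = switch_event if mode is mode_connected else None
    sol = solve_ivp(mode, (t, 5.0), v, events=events, max_step=0.01)
    t, v = sol.t[-1], sol.y[:, -1]   # transfer final state to next structure
print(f"finished at t={t:.2f}, v={v[0]:.3f}")
```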
There is a Python framework that was built to support this process: DySMo. The tool was written by Alexandra Mehlhase, who has made a lot of interesting publications regarding structure variability, e.g. An example of beneficial use of variable-structure modeling to enhance an existing rocket model.
The paper Simulating a Variable-structure Model of an Electric Vehicle for Battery Life Estimation Using Modelica/Dymola and Python by Moritz Stueber is also worth a look. It contains a nice introduction to variable-structure systems and available solutions.
I am reading people's implementations of DCGAN, especially this one in TensorFlow.
In that implementation, the author draws the losses of the discriminator and of the generator, which is shown below (images come from https://github.com/carpedm20/DCGAN-tensorflow):
Neither the discriminator loss nor the generator loss seems to follow any pattern, unlike in general neural networks, whose loss decreases as training iterations increase. How should I interpret the loss when training GANs?
Unfortunately, as you've said, for GANs the losses are very non-intuitive. Mostly this comes down to the fact that the generator and discriminator are competing against each other, so an improvement in one means a higher loss for the other, until that other learns better on the received loss, which in turn screws up its competitor, and so on.
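For reference, the two standard losses being plotted are roughly these; a plain-NumPy sketch of the formulas (my notation, not taken from the linked repository):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # d_real = D(x) on real data, d_fake = D(G(z)) on generated data,
    # both in (0, 1). Binary cross-entropy: -log D(x) - log(1 - D(G(z)))
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # non-saturating generator loss: -log D(G(z)); it falls when the
    # generator fools the discriminator, so pushing one loss down
    # tends to push the other up
    return -np.mean(np.log(d_fake))
```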
Now one thing that should happen often enough (depending on your data and initialisation) is that both discriminator and generator losses converge to some steady numbers, like this:
(it's OK for the loss to bounce around a bit - it's just evidence of the model trying to improve itself)
This loss convergence would normally signify that the GAN model has found some optimum where it can't improve further, which should also mean that it has learned well enough. (Also note that the numbers themselves usually aren't very informative.)
Here are a few side notes that I hope will be of help:
if the losses haven't converged very well, it doesn't necessarily mean that the model hasn't learned anything - check the generated examples; sometimes they come out good enough. Alternatively, you can try changing the learning rate and other parameters.
if the model converged well, still check the generated examples - sometimes the generator finds one or a few examples that the discriminator can't distinguish from the genuine data. The trouble is that it then always gives out these few and doesn't create anything new; this is called mode collapse. Usually introducing some diversity into your data helps.
as vanilla GANs are rather unstable, I'd suggest using some version of the DCGAN models, as they contain features like convolutional layers and batch normalisation that are supposed to help with the stability of convergence. (The picture above is a result of a DCGAN rather than a vanilla GAN.)
This is common sense, but still: as with most neural net structures, tweaking the model, i.e. changing its parameters and/or architecture to fit your particular needs/data, can improve the model or screw it up.
I am constructing a finite volume model in Dymola which evolves in time and space. The spatial discretization is hard-coded in the equations section; the time evolution is implemented with terms consisting of der(phi).
Is the time integration of Dymola always numerically stable when using a variable step size algorithm? If not, can I do something about that?
Is the Euler integration algorithm from Dymola the explicit or implicit Euler method?
The Dymola Euler solver is explicit by default (if an inline solver is not selected).
The stability of time integration is going to depend on your integrator. Generally speaking, implicit methods are going to be much better than explicit ones.
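A quick way to see the difference is the linear test equation y' = -λy: explicit Euler is only stable for step sizes h < 2/λ, while implicit Euler is stable for any h. A small standalone sketch (my example, not Dymola code):

```python
lam, h, n = 50.0, 0.1, 20          # note: h > 2/lam = 0.04
y_exp = y_imp = 1.0
for _ in range(n):
    y_exp = y_exp + h * (-lam * y_exp)   # explicit Euler: blows up
    y_imp = y_imp / (1.0 + lam * h)      # implicit Euler: decays as it should
print(f"explicit: {y_exp:.3e}, implicit: {y_imp:.3e}")
```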
But since you mention spatial and time discretization, I think it is worth pointing out that for certain classes of problems things can get pretty sticky. In general, I think elliptic and parabolic PDEs are pretty safe to solve in this way. But hyperbolic PDEs can get very tricky.
For example, the Courant-Friedrichs-Lewy (CFL) condition will affect the overall stability of the solution method. But by discretizing in space first, you leave the solver with information only regarding time, and it cannot check or conform to the CFL condition. My guess is that a variable-step integrator will detect the error introduced by not following the CFL condition, but that it will struggle to identify the proper time step and will probably end up permitting an unacceptably unstable solution.
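For the hyperbolic case, the CFL constraint is easy to compute by hand; a short sketch for 1-D advection (my own numbers):

```python
# CFL condition for 1-D advection u_t + v*u_x = 0 with an explicit scheme:
# the Courant number C = |v|*dt/dx must satisfy C <= 1.
v, dx = 2.0, 0.01          # wave speed and spatial cell size
dt_max = dx / abs(v)       # largest stable explicit time step
for dt in (0.8 * dt_max, 1.2 * dt_max):
    C = abs(v) * dt / dx
    print(f"dt={dt:.4f}  Courant number C={C:.2f}  "
          f"{'stable' if C <= 1.0 else 'UNSTABLE for explicit schemes'}")
```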
I am currently developing a large and complex thermo-hydraulic system in the Modelica/Dymola environment using the ThermoPower library by Prof. Francesco Casella. At present, I have completed building our system model (which contains several closed-loop hydraulic circuits) and am concentrating on designing controllers for the developed model. Given the complexity of the system, I have about 25 PI controllers controlling various valve openings, pumps, condensers and boilers. At this stage, I am tuning the controller gains using a judicious trial-and-error method. I tried to look into the literature to see if there is any formal design methodology or rule of thumb for designing controllers for such a multi-input multi-output (MIMO) thermo-hydraulic system. Consequently, I would like to ask if anyone can provide some pointers or literature/papers which deal with controller design for such systems, because my knowledge of controller design (sliding mode, linear control, root locus, etc.) is not helping me here, as most of these methodologies are based on available model equations.
Furthermore, for such a large thermo-hydraulic system, how does one set the initial conditions? Does one just provide some reasonable guess values and expect Dymola to take care of the rest?
Well, I have to qualify my response by pointing out that I am NOT a controls engineer, so take everything I say with a grain of salt.
To some extent, it really depends on what tool you are using since different tools specialize in different analysis features and offer different capabilities. For example, if you are using Dymola, you can use the "linearize" function to linearize your system. This will give you an entry into the formal controller design methods you are familiar with. The problem is, of course, that your system is probably highly non-linear so you will have to formulate a strategy to determine over what range of state space you need to control and then potentially develop strategies to adjust your gains accordingly.
On the other hand, if you are using tools like SystemModeler (from Wolfram) or MapleSim (from Maplesoft), I'm pretty sure you have the option to elaborate the Modelica model into a symbolic system of equations. As a result, you can again revisit the classical techniques that require the model equations to be available. Since these are not linearized, you will have full visibility of the non-linearities in symbolic form, and you can take whatever measures are possible to address them.
Does that help?
I would try Model Predictive Control (MPC) in your case, as long as your system is only active in an approximately linear region or can be made approximately linear (a toy sketch follows at the end of this answer).
Here is some info:
http://www.stanford.edu/class/archive/ee/ee392m/ee392m.1056/Lecture14_MPC.pdf
But I would recommend getting a good control engineering book that describes this in more detail.
It has been quite a few years since I did an example of this, so maybe this suggestion is outdated now.
Note that when you implement this in Modelica/Dymola, you will have to simulate the model using a fixed-step solver.
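To make the idea concrete, here is a toy sketch of linear MPC on a discrete-time double integrator, using cvxpy; the system, horizon, weights, and library choice are all my own assumptions, not from the lecture notes above:

```python
import cvxpy as cp
import numpy as np

# discrete-time double integrator (position/velocity), sample time dt
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N = 20                                  # prediction horizon

x = cp.Variable((2, N + 1))             # predicted states
u = cp.Variable((1, N))                 # control moves
x0 = np.array([1.0, 0.0])               # initial state; drive it to origin

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0]     # actuator limits

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
# receding horizon: apply only the first move, then re-solve next sample
print("first control move:", u.value[:, 0])
```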