How can JModelica (v2.14) enable directional derivatives? - modelica

I'm using PyFMI.master to do co-simulation and the FMUs are compiled by JModelica.
However, PyFMI.master said that "UserWarning: The model, room_interface_3, does not support directional derivatives which is necessary in-case of an algebraic loop. The simulation might become unstable..."
It is indeed unstable: my co-simulation crashed with an absurd result (e.g. the temperature fell below its minimum limit).
So, are there any solutions to this problem?
Thanks in advance!
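A minimal sketch of the workflow usually suggested for this warning, i.e. recompiling the FMUs so that they export directional derivatives and then coupling them with the master algorithm, is shown below. The compiler option name generate_ode_jacobian, the model/file names and the connection list are assumptions/placeholders; please check them against your JModelica version before relying on them.

    from pymodelica import compile_fmu
    from pyfmi import load_fmu
    from pyfmi.master import Master

    # Recompile the co-simulation FMU and ask JModelica to also generate the
    # Jacobian / directional-derivative code (option name assumed from memory).
    fmu_path = compile_fmu("Rooms.RoomInterface3", "rooms.mo",
                           target="cs", version="2.0",
                           compiler_options={"generate_ode_jacobian": True})

    # Load the recompiled FMUs and couple them as before (placeholder names).
    room = load_fmu(fmu_path)
    other = load_fmu("other_model.fmu")
    models = [room, other]
    connections = [(room, "y_out", other, "u_in")]

    master = Master(models, connections)
    res = master.simulate(final_time=3600.0)

If the warning disappears after recompilation, the master algorithm can use the directional derivatives to stabilize the algebraic loop between the FMUs.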

Related

What is the largest nonlinear system model that Dymola can solve?

In Dymola, I often run into a nonlinear system initialization failure, or a stiff system that is hard to solve, in large thermo-fluid systems; for a simple system this kind of problem does not occur. My questions are:
1. How large a nonlinear system model can be solved? For example, how many nonlinear equations can I include in my model at most?
2. Is there any setting in Dymola which allows increasing the capability of solving nonlinear systems?
3. How can I decrease the number of nonlinear equations in the model without damaging the accuracy of the model?
These are pretty difficult questions to answer in a generally valid fashion. Still, I'll try to share some of my experience with Dymola and non-linear systems.
There is no hard number which limits the size. It depends more on how strongly non-linear the equations are than on their number. I have simulated models with non-linear systems of size 150 that are pretty stable, while others of size 10 can break...
There are multiple perspectives to this
I have worked on some models that made the C compiler run out of memory during compilation. If you have this sort of problem, forcing 64-bit compilation by setting Advanced.CompileWith64=2 can help; then you shouldn't run out of memory any more. This refers to the sheer size of the system only.
Performance for non-linear systems can be improved by activating DAE-mode by setting Advanced.Define.DAEsolver=true. This does not work with all solvers though.
In addition to the above, it can help to set Advanced.MoveEquationsToDynamics=true, for which the manual states: "It forces the integrator to solve the nonlinear equations each integrator step and thereby it also updates the initial guesses more often."
As mentioned by Erik, the homotopy()-operator can be very important, as it helps the solver converge in case of a difficult initialization.
This is very specific to the model. Decoupling can help, e.g. by splitting the system into smaller systems through additional energy-storing elements/states. This can be done based on the physics of the system and is the preferable solution if possible. As a (more artificial) alternative, filters/delays can be added. Usually this has a negative effect on accuracy.
I very much agree with Markus's advice but would also like to remind you about Modelica's homotopy operator. A well-chosen simplified model can greatly help Dymola to initialize a model with a large and difficult non-linear system.
In general good initial guesses are very important when solving non-linear systems. Using homotopy is simply an implicit way to provide these good guesses.
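To make the role of homotopy() concrete: during initialization the tool can replace every homotopy(actual, simplified) expression by a blend of the two expressions and trace the solution from the easy problem to the real one. A common interpretation (the exact continuation strategy is tool-dependent) is

    homotopy(actual, simplified) = \lambda \cdot actual + (1 - \lambda) \cdot simplified, \qquad \lambda : 0 \rightarrow 1,

so at \lambda = 0 the solver only sees the simplified, well-behaved equations, and the intermediate solutions act as the good initial guesses mentioned above while \lambda is pushed towards 1.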

Change Equation set in FMU for Model Exchange

We want to publish an open-source project for integrating reinforcement learning into smart-grid optimization.
We use OpenModelica as the GUI, PyFMI for the import into Python, and Gym.
Nearly everything is running, but a way to connect or disconnect additional loads during the simulation is missing. All we can do for now is vary the parameters of existing loads, which gives some flexibility, but far less than the ability to switch loads on and off.
Using the switches implemented in OpenModelica is not really an option. They just place a resistor at that spot and give it either a very low or a very high resistance. First, it's not really decoupled, and second, high resistances make the ODE system stiff, which makes it really hard (and costly) to solve. In tests, our LSODA solver (in stiff cases basically a BDF) often ran into numerical errors, regardless of how the Jacobian was calculated (analytically via directional derivatives or with finite differences).
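To put a rough number on the stiffness problem: if the switch resistance R ends up in series with an inductance L, the time constant of that branch is

    \tau = L / R,

so toggling R between, say, 10^{-3}\,\Omega and 10^{6}\,\Omega (illustrative values only) spreads \tau over nine orders of magnitude, which is exactly the kind of time-scale separation that forces the solver into tiny steps or stiff-mode iterations.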
Does anyone have an idea how we can implement a real "switching effect"?
Best regards,
Henrik
Ideal connection and disconnection of components during simulation requires structure variability, which is not fully supported by Modelica (yet). See also this answer: https://stackoverflow.com/a/30487641/8725275
One solution for this problem is to translate all possible model structures in advance and to switch the simulation model if certain conditions are met. As there is some overhead involved, this approach only makes sense when the model is not switched very often.
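A minimal sketch of that pre-translate-and-switch idea with PyFMI could look like the following. The FMU file names, the switching time and the transferred variable names are placeholders; in a real model every state shared by both configurations has to be handed over, and whether a value can simply be set before initialization depends on how the model exposes its start values.

    from pyfmi import load_fmu

    T_SWITCH = 10.0   # placeholder: instant at which the extra load is connected
    T_END = 20.0

    # Pre-compiled FMUs for the two structural variants (placeholder file names).
    grid_without_load = load_fmu("GridWithoutExtraLoad.fmu")
    grid_with_load = load_fmu("GridWithExtraLoad.fmu")

    # 1) Simulate the first variant up to the switching instant.
    res1 = grid_without_load.simulate(final_time=T_SWITCH)

    # 2) Hand the relevant final values over to the second variant
    #    (placeholder variable names; this only works for values the FMU
    #    allows to be set before initialization).
    grid_with_load.set("busVoltage", res1["busVoltage"][-1])
    grid_with_load.set("line.i", res1["line.i"][-1])

    # 3) Continue with the second variant from the switching instant.
    res2 = grid_with_load.simulate(start_time=T_SWITCH, final_time=T_END)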
There is a Python framework that was built to support this process: DySMo. The tool was written by Alexandra Mehlhase, who has published a lot of interesting work on structure variability, e.g. "An example of beneficial use of variable-structure modeling to enhance an existing rocket model".
The paper "Simulating a Variable-structure Model of an Electric Vehicle for Battery Life Estimation Using Modelica/Dymola and Python" by Moritz Stueber is also worth a look. It contains a nice introduction to variable-structure systems and the available solutions.

How to speed up the simulation of a Simscape-based physical model?

I am working on the modeling and control of a hydraulic system. The system is modeled in MATLAB Simscape in the Simulink environment, which looks like this:
For basic control of the piston position (Piston Pos in the figure) I have set up a simple feedback loop to check the position.
When I run the simulation, it takes too much time once the position control becomes active. For example, if I give a desired piston position of 300 mm, then as the output approaches roughly 290-294 mm the simulation time reaches about 5.18 s and gets stuck there for a long time.
I want to know: is there any way to speed up the simulation?
I am using the MATLAB Simulink solver ode23t because of the Simscape modeling.
Speeding up simulations is, in general, a vast subject. It seems the issue here is an event which triggers multiple small time-steps in the variable-step solver.
This can be perfectly normal, for example a clutch engaging, or a valve opening.
To check whether or not this is the case, you can execute the following (make sure time logging is enabled):
semilogy(tout(2:end), diff(tout))
Sharp downward spikes indicate small time-steps were taken. For a more in-depth analysis you can use the Solver Profiler:
https://www.mathworks.com/help/simulink/ug/examine-solver-behavior-using-solver-profiler.html
This will give you detailed information as to which components are causing solver resets.
Such behavior can be difficult to debug if you're not used to the tool. I'd highly recommend getting in touch with MathWorks tech support if the behavior persists. They'll be able to look at your model and diagnose the issue.

Factors limiting MILP solver calculation speed?

I am fairly new to Mixed Integer Linear Programming and I was hoping someone could clarify a performance question for me. Basically I am performing a calculation with about 34 decision variables and my calculation time is around 5 seconds. I would like to ideally get the calculation time down into the sub 1 second range.
Currently I am using the CBC solver & MATLAB, but as I understand it this is a single-threaded solver. Most of the MILP solvers I've seen seem to pride themselves on large-project performance with 1k+ variables, knocking compute times down from days to hours, but only a few of the expensive ones are even multi-threaded. Processor speed would seem to only go so far with such a problem, so there has to be something that can be done on the software side.
In a situation like mine, what factors play a role in the calculation time? In theory, would a solution like Gurobi be able to ramp up and make a discernible difference over CBC on such a small problem?
When solving MIPs, multi-threading usually doesn't help that much compared to other applications. One thing more sophisticated solvers do better than CBC is presolving. It may very well happen that all decision variables can be eliminated/fixed in the root node. In general, it is not possible to predict the solving time of a MIP based on its size. The solving process is simply too complex.
To answer your question: I do suppose that you will see a significant speedup when trying another solver (Gurobi, Xpress, CPLEX, SCIP, etc.). It is not guaranteed though, and you might also witness a slowdown.
Most of the commercial MILP solvers would probably consider most problems with only 1000 variables to be tiny. It is very common to solve problems with hundreds of thousands or millions of variables. Note however, that there are also some really small problems that are considered to be very hard or still open in that nobody has solved them to a proven optimal solution. Basically, the time taken to solve these problems is enormously problem dependent.
If you want to see how fast another solver can solve your problem, try submitting it as e.g. an MPS file to the NEOS solver service at http://www.neos-server.org/neos/
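If you want to try this quickly from Python, a tiny gurobipy script along the following lines builds a small binary model, solves it, reports the runtime and writes an MPS file that could also be uploaded to NEOS. The knapsack-style model is only a placeholder for your real problem.

    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("small_milp")

    # Placeholder problem: pick items to maximize value under a weight budget.
    values = [4, 2, 7, 5, 3]
    weights = [3, 1, 5, 4, 2]
    x = m.addVars(len(values), vtype=GRB.BINARY, name="x")

    m.setObjective(gp.quicksum(values[i] * x[i] for i in range(len(values))),
                   GRB.MAXIMIZE)
    m.addConstr(gp.quicksum(weights[i] * x[i] for i in range(len(weights))) <= 8,
                "budget")

    m.optimize()
    print("status:", m.Status, "runtime [s]:", m.Runtime)

    # Export in MPS format, e.g. for submission to the NEOS server.
    m.write("small_milp.mps")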

Time integration stability in Modelica

I am constructing a finite volume model in Dymola which evolves in time and space. The spatial discretization is hard-coded in the equations section; the time evolution is implemented with a term consisting of der(phi).
Is the time integration of Dymola always numerically stable when using a variable step size algorithm? If not, can I do something about that?
Is the Euler integration algorithm from Dymola the explicit or implicit Euler method?
The Dymola Euler solver is explicit by default (if an inline solver is not selected).
The stability of time integration is going to depend on your integrator. Generally speaking, implicit methods are going to be much better than explicit ones.
But since you mention spatial and time discretization, I think it is worth pointing out that for certain classes of problems things can get pretty sticky. In general, I think elliptic and parabolic PDEs are pretty safe to solve in this way. But hyperbolic PDEs can get very tricky.
For example, the Courant-Friedrichs-Lewy condition will affect the overall stability of the solution method. But by discretizing in space first, you leave the solver with information only regarding time and it cannot check or conform to the CFL condition. My guess is that a variable time step integrator will detect the error being introduced by not following the CFL condition but that it will struggle to identify the proper time step and probably also end up permitting an unacceptably unstable solution.
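For reference, for a 1D advection-type (hyperbolic) problem with cell size \Delta x and characteristic speed u, the CFL condition restricts the admissible explicit time step roughly to

    \Delta t \le C \, \frac{\Delta x}{|u|}, \qquad C \lesssim 1,

where the exact constant depends on the scheme. Once the spatial discretization is hard-coded into the equations, the integrator no longer sees \Delta x or u, so it can only react to the error estimates that a violation of this condition produces.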