How to make the dynamic model in Dymola agree with the steady-state design result?

Modelica modeling is first-principles modeling, so knowing how to test the model and set an effective benchmark is important. For example, I can design a fluid network as I wish, but when building the dynamic simulation model I need the detailed geometry and parameters to set up every piece of it. Usually I first build a steady-state model using simple energy and mass conservation laws, and then design each piece of equipment based on the corresponding design manual. But when I put the dynamic components together and simulate until steady state, the result differs more or less from the steady-state model. So I am wondering whether I should modify my workflow to make the dynamic model agree with the steady-state model. Any suggestions are welcome.
#dymola #modelica

To my understanding of the question, your parameter values are fixed and physically known. I would attempt the following approach as a heuristic to identify the (few) component(s) that one needs to investigate carefully in order to understand how they influence or violate the assumed first principles.
This is just a first attempt and could be subject to further improvement and fine-tuning.
Consider the set of significant variables x_d(p,t) in R^n and the parameters p in R^m, where p contains only the additional parameters not available in the steady-state model. Note that p also includes significant start values.
Denote the corresponding variables of the steady-state model by x_s.
Denote a time point where the dynamic model is "numerically" in a "semi-" steady state by t*.
Consider the function C(x_d(p,t*), x_s) = ||D||^2 with D = x_d(p,t*) - x_s.
It could be beneficial to treat C as a vector rather than a single-valued function.
Compute the partial derivatives of C w.r.t. p, expressed in terms of dx_d/dp, i.e.
dC/dp = d[D^T D]/dp
      = d[(x_d - x_s)^T (x_d - x_s)]/dp
      = 2 (dx_d/dp)^T D
Consider scaling the above quantity, i.e. (dC/dp)*(p/C), guarding against the expected numerical issues (e.g. C or a parameter close to zero) with some epsilon tricks.
This gives you a ranking of the most significant parameters causing the apparent differences. The (hopefully few) components containing these parameters are the ones likely to be causing the violation.
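As a first numerical attempt, the cost term can be evaluated alongside the dynamic model itself. The following is only a minimal sketch with placeholder names and values; in a real model the elements of xd would be bound to existing component variables (pressures, temperatures, mass flow rates, ...) and xs would hold the corresponding steady-state design values:

model SteadyStateDeviation
  "Sketch only: compare selected dynamic variables with steady-state design values"
  parameter Integer n = 3 "Number of compared variables";
  parameter Real xs[n] = {1.0e5, 350.0, 0.5} "Placeholder steady-state design values";
  Real xd[n] "Selected dynamic variables (bind these to component variables)";
  Real D[n] "Scaled deviations";
  Real C "Scalar cost ||D||^2, to be evaluated at the (semi-)steady time t*";
equation
  // Dummy trajectory standing in for real bindings such as xd[1] = pump.port_b.p
  xd = xs*(1.05 + 0.2*exp(-time/100));
  D = (xd - xs) ./ xs "Relative deviations; guard against xs close to zero in practice";
  C = D*D;
end SteadyStateDeviation;

The scaled sensitivities can then be approximated without any special language support by perturbing one parameter p_k at a time by a few percent, re-simulating to t*, and forming the finite difference of C multiplied by p_k/C.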
If this still does not help, perhaps due to high correlation among the parameters, I would go further and set up a dummy parameter-identification problem, from which a more rigorous ranking of the significant model parameters can be obtained.
If the Modelica language had capabilities for expressing dynamic parameter sensitivities, all of the above computations could easily be carried out within a single Modelica model (with a slightly modified formulation).
For instance, if we had something like der(x,p) corresponding to dx/dp, one could simply state
dcdp = der(C,p)
An alternative approach is proposed via the DerXP library.


A general question about Modelica initialization

How should I set values for all the variables that could possibly be used as iteration variables? For example, a heat exchanger includes a few connectors, and each connector includes a few variables. I cannot know in advance which variables will be used as iteration variables, so when dealing with initialization, do I need to set a value for every variable, so that whichever variable is chosen as an iteration variable has a reasonable value?
Marvel,
I think that you are a bit on the wrong track for finding a solution: setting values for all variables that could possibly become iteration variables is usually far too many and will lead to errors and problems. But I think I can give you some useful advice in any case.
Alias variables: there are many alias variables in Modelica models. You should always try to select only one of them for setting start values.
Feedback between start values and iteration variables: most Modelica tools prefer to select iteration variables that have start values set. Selecting fewer can thus guide the algorithm towards selecting good ones. Therefore: don't overdo it.
General advice for selecting iteration variables: for a pure ODE, the states will always be a complete set of start variables, even if sometimes not the best one. For a DAE you can start with the following exercise: think of all equations that result from a singular perturbation of the complete physics as differential equations with states. For example, in a heat exchanger, consider the dynamic momentum balance instead of the usual static reduction to an algebraic pressure loss, i.e. add the mass flow rate as a state. Similarly for chemical reactions: think of them as kinetics, not equilibrium reactions. That gives you a pretty good starting point, even though often not the best one.
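For instance, a minimal sketch of a pipe segment that keeps the dynamic momentum balance, so that the mass flow rate becomes a state with a start value (all names and values are illustrative and not taken from any particular library):

model DynamicMomentumPipe
  "Sketch only: keep the dynamic momentum balance so that m_flow becomes a state"
  parameter Modelica.SIunits.Length L = 1 "Pipe length";
  parameter Modelica.SIunits.Area A = 0.01 "Cross-sectional area";
  parameter Real k = 1e6 "Hypothetical friction coefficient in Pa.s2/kg2";
  Modelica.SIunits.Pressure dp "Driving pressure difference";
  Modelica.SIunits.MassFlowRate m_flow(start = 1, fixed = true) "Mass flow rate, now a state";
equation
  dp = if time < 10 then 2e5 else 1e5 "Illustrative boundary condition";
  // Dynamic momentum balance; the usual static reduction would be dp = k*m_flow*abs(m_flow)
  (L/A)*der(m_flow) = dp - k*m_flow*abs(m_flow);
end DynamicMomentumPipe;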
If your troubles don't quite resolve from that, I recommend that you contact us via www.modelon.com: we have advanced ways of dealing with hard initialization and steady-state problems in our Modelica tool. :-)
There is also a simple approach that works quite well with fluid models.
Given that you are using a dynamic model, what you need to initialize are the state variables of your system. To know the state variables, either you know the type of model you are working with, or you can find them using options like 'List continuous time states selected' in Dymola (I do not know about other tools), which lists the state variables in the translation log.
In the case of fluid models, most of the time these are pressure and energy (enthalpy or temperature). All other variables will be calculated from them.
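For example, here is a minimal sketch of a closed, heated volume where start values are set only on the pressure and enthalpy states (the medium choice, the heat input and the numbers are purely illustrative):

model ClosedVolume
  "Sketch only: start values on the p and h states of a closed, heated volume"
  replaceable package Medium = Modelica.Media.Water.StandardWater;
  parameter Modelica.SIunits.Volume V = 0.01 "Fixed volume";
  parameter Modelica.SIunits.HeatFlowRate Q_flow = 1e3 "Constant heat input";
  // Start values (and initial conditions) only on the expected states p and h
  Medium.BaseProperties medium(
    preferredMediumStates = true,
    p(start = 1e5, fixed = true),
    h(start = 2e5, fixed = true));
  Modelica.SIunits.Mass m "Mass in the volume";
  Modelica.SIunits.Energy U "Internal energy in the volume";
equation
  m = medium.d*V;
  U = m*medium.u;
  der(m) = 0 "Closed volume: no mass exchange";
  der(U) = Q_flow "Energy balance";
end ClosedVolume;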
For complex (or not so complex) models, this approach shows its limits, which can sometimes be overcome by changing/correcting the structure of the model.
Static models are something else...
Hope this can help :)

The initialization process in Dymola and the use of the start attribute

For a simple model in Dymola, the start attribute works to provide initial guesses for the ODE initialization, as in the following example.
model QuiescentModelUsingStart "Find steady-state solutions to LotkaVolterra equations"
  parameter Real alpha = 0.1 "Reproduction rate of prey";
  parameter Real beta = 0.02 "Mortality rate of predator per prey";
  parameter Real gamma = 0.4 "Mortality rate of predator";
  parameter Real delta = 0.02 "Reproduction rate of predator per prey";
  Real x(start = 10) "Prey population";
  Real y(start = 10) "Predator population";
initial equation
  der(x) = 0;
  der(y) = 0;
equation
  der(x) = x*(alpha - beta*y);
  der(y) = y*(delta*x - gamma);
end QuiescentModelUsingStart;
But for a complicated model, like a power plant model, which is strongly nonlinear, it is a lot more involved.
According to Modelica by Example (https://mbe.modelica.university/behavior/equations/variables/), the start attribute may also be used as an initial guess if the variable has been chosen as an iteration variable.
So, what is the process of initializing a model in Dymola? Does Dymola take the "equation" section into consideration during initialization and set the derivatives to zero, so that it can find the steady state as the initial condition?
Or does Dymola just use the start attributes and the "initial equation" section to obtain a set of initial values?
How can I ensure that the initialization values I use constitute a steady state?
Probably an excerpt from the Modelica Language Specification describes what you are looking for:
Before any operation is carried out with a Modelica model [e.g., simulation or linearization], initialization takes place to assign consistent values for all variables present in the model. During this phase, also the derivatives, der(..), and the pre-variables, pre(..), are interpreted as unknown algebraic variables. The initialization uses all equations and algorithms that are utilized in the intended operation [such as simulation or linearization].
This is the first part of Section 8.6, which is about three pages long and should give you pretty good insight into what happens during initialization. It also discusses the start attribute with fixed=true/false.
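As a small illustration of what that section says about the start attribute (this model is only a sketch, not taken from the specification): with fixed = true the start value is enforced as an initial condition, while with fixed = false it only serves as a guess value for the initialization solver.

model StartFixedSketch
  "Sketch only: fixed = true versus fixed = false for the start attribute"
  Real x(start = 1, fixed = true) "x(0) = 1 is enforced as an initial condition";
  Real y(start = 2, fixed = false) "start = 2 is only a guess for the initial equation below";
initial equation
  y^3 + y = 12 "Nonlinear initial condition for y, solved starting from the guess y = 2";
equation
  der(x) = -x;
  der(y) = 1 - y;
end StartFixedSketch;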

How could I redefine or change the value of a predefined parameter in Dymola during the simulation?

I am building a model in Dymola. I have defined the mass of this model as a parameter, because it is transferred into other modules and used there. But the mass should change during the simulation in different time intervals. For example, during the first 100 seconds the mass should remain 500 kg, and between 100 and 200 seconds a passenger gets in, so a new mass should be calculated that includes the mass of the passenger. However, I get the error "The problem is structurally singular", because the parameter is assigned twice. Could someone give me some tips to solve this problem? Thanks a lot.
If you define the mass of your component as an input rather than a parameter then you can change it during simulation by assigning e.g. the output from a TimeTable to it. For example
model Component
  input Modelica.SIunits.Mass mass "Passenger dependent mass";
equation
  ...
end Component;

model systemModel
  TimeTable timeTable;
  Component component(mass = timeTable.y);
  OtherComponent otherComponent(mass = component.mass);
equation
  ...
end systemModel;
Note that the other components using the mass must also have their internal mass 'parameters' defined as input to allow higher variability than parameters.
Best regards
Rene Just Nielsen
Modelica parameters are defined by the fact that they don't change over time. Therefore you would need to stop the simulation, change the parameter and restart the simulation (see another question). Given your description I would rather not use this possibility, as it seems your variable is designed to change over time.
A better alternative seems to be defining the mass as a (time-varying) variable. If this is done, you can:
Transfer this variable from one model to the others using interfaces. This could be a bit tedious depending on the number of classes using the variable.
Use inner/outer (basically global variables), which is a feasible concept for this use case. This concept is used in the MultiBody library's world model.
With both solutions you will have to modify the original mass model, as m would then have to be a time-varying variable instead of a parameter.
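A minimal sketch of the inner/outer variant (all names, the 80 kg passenger mass and the boarding ramp are purely illustrative):

package MassSharing "Sketch only: sharing a time-varying mass via inner/outer"
  model Body "A component that reads the shared mass"
    outer Modelica.SIunits.Mass m "Supplied by the enclosing system model";
    Modelica.SIunits.Force fWeight "Weight force";
  equation
    fWeight = m*Modelica.Constants.g_n;
  end Body;

  model VehicleSystem "Top-level model that defines the shared mass"
    inner Modelica.SIunits.Mass m "Total mass seen by all outer declarations";
    Body body;
  equation
    // 500 kg, plus an assumed 80 kg passenger boarding between 100 s and 200 s
    m = 500 + 80*min(max((time - 100)/100, 0), 1);
  end VehicleSystem;
end MassSharing;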

Algorithm generation

I have a rather large (not too large, but possibly 50+) set of conditions that must be placed on a set of data (or rather, the data should be manipulated to fit the conditions).
For example, suppose I have a sequence of binary numbers of length n;
if n = 5, then an element in the data might be {0,1,1,0,0} or {0,0,0,1,1}, etc.
BUT there might be a set of conditions such as
x_3 + x_4 = 2
sum(x_even) <= 2
x_2*x_3 = x_4 mod 2
etc...
Because the conditions are quite complex, in that they come from experiment (although they can be written down in logical form) and are hard to diagnose, I would instead like to use a large sample set of valid data, i.e. data that I know satisfies the conditions and is a pretty large set. In other words, it is easier to collect the data than it is to deduce the conditions that the data must abide by.
Having said that, what I'm doing is basically very similar to neural networks. The difference is that I would like an actual algorithm, in some sense optimal, in some form of code that I can run instead of the network.
It might not be clear what I'm actually trying to do. What I have is a set of data in some raw format that is unique and unambiguous but not appropriate for my needs (in a sense, the amount of data is too large).
I need to map the data into another set that is ambiguous to some degree but also has a certain specific set of constraints that all the data follows (certain things just cannot happen while others are preferred).
The exact constraints and preferences are hard to figure out. That is, the mapping from the non-ambiguous set to the ambiguous set is hard to describe (which is why it is ambiguous). The goal, actually, is to arrive at an unambiguous map by supplying the right constraints, if at all possible.
So, in the vein of my initial example, I'm given (or supply) a set of elements and need some way to derive a list of constraints similar to what I've listed.
In a sense, I simply have a set of valid data and train on it, very much like a neural network.
Then, after this "training", I'm given a mapping function that I can use on any element in my dataset, and it will produce a new element satisfying the constraints, or, if it can't, will give as close to an unambiguous result as possible.
The main difference between neural networks and what I'm trying to achieve is that I'd like to end up with an algorithm in code that can be used instead of having to run a neural network. Such an algorithm would probably be a lot less complex, not need potential retraining, and be a lot faster.
Here is a simple example.
Suppose my "training set" are the binary sequences and mappings
01000 => 10000
00001 => 00010
01010 => 10100
00111 => 01110
then from the "Magical Algorithm Finder"(tm) I would get a mapping out like
f(x) = x rol 1 (rol = rotate left)
or whatever way one would want to express it.
Then I could simply apply f(x) to any other element, such as x = 01110, and it would hopefully generate an unambiguous output.
Of course there are many such functions that will work on this example, but the goal is to supply enough of the dataset to narrow it down to, hopefully, a few functions that make the most sense (and at the very least will always map the training set correctly).
In my specific case I could easily convert my problem into mapping the set of binary digit strings of length m to the set of base-B digit strings of length n. The constraints prevent some numbers from having an inverse, i.e. the mapping is injective but not surjective.
My algorithm could be a simple collection of if statements acting on the digits if need be.
I think what you are looking for here is an application of Learning Classifier Systems, LCS -wiki. There are actually quite a few open-source LCS applications available, but you may need to experiment with the parameters in order to get a good result.
LCS/XCS/ZCS have the features that you are looking for, including individual rules that can be heavily optimized, pressure to reduce the rule set, and of course a human-readable/understandable set of rules (unlike a neural net).

Modelica.Media: BaseProperties versus setState_XXX

The Modelica Standard Library comes with the Modelica.Media library which makes available thermodynamic properties of fluids.
Quoting from the Modelica.Media documentation:
Media models in Modelica.Media are provided by packages, inheriting
from the partial package Modelica.Media.Interfaces.PartialMedium.
Every package defines:
[...]
A BaseProperties model, to compute the basic thermodynamic properties of the fluid;
setState_XXX functions to compute the thermodynamic state record from different input arguments (such as density, temperature, and composition which would be setState_dTX);
[...]
There are - as stated above - two different basic ways of using the Media library
which will be described in more details in the following section.
One way is to use the model BaseProperties.
[...]
The second way is to use the setState_XXX functions to compute the thermodynamic state record from which all other thermodynamic state variables can be computed [...]
My colleague prefers BaseProperties (he spends most time modeling components),
I prefer the setState_XXX functions (I spend most time writing a property library).
Now we want to develop a simple and small component library together, and we should probably agree to use one of the two approaches.
Can you recommend a publication that explains the advantages/disadvantages of the two approaches? Publications that promote the use of the setState_XXX function are preferred of course... ;-)
Are there some simple rules to decide which one of the two approaches to use when modeling a component (e.g. a very simple turbine)?
The components in Modelica.Fluid seem to use both.
The two patterns for computing properties can both be used for all types of components, but BaseProperties has been designed to make life easy for the modeller for components with dynamic states, i.e. usually for the storage of mass and energy in volumes. You just write the conservation equations, instantiate BaseProperties, equate the relevant variables, and you are done. That is often overkill (more equations than minimally needed) for components with a stationary mass and energy balance, like simple valves, pumps and turbines. For that type of component (no dynamic states), the setState_XXX functions provide a way to work with the minimally necessary number of equations. I think that is also what you will see in Modelica.Fluid: BaseProperties is used together with dynamic equations for mass and energy storage, and setState elsewhere.
The minimum number of equations is not the whole story w.r.t. computational efficiency, but in general models should not compute more than what is actually needed.
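To make the second pattern concrete, here is a minimal sketch of a static component built around setState_phX (the medium choice, the boundary values and the valve law are purely illustrative; in a real component the boundary values would come from connectors):

model StaticValveSketch
  "Sketch only: the setState_XXX pattern for a component without dynamic states"
  replaceable package Medium = Modelica.Media.Water.StandardWater;
  parameter Medium.AbsolutePressure p_in = 3e5 "Inlet pressure (placeholder boundary value)";
  parameter Medium.AbsolutePressure p_out = 1e5 "Outlet pressure (placeholder boundary value)";
  parameter Medium.SpecificEnthalpy h_in = 2e5 "Inlet specific enthalpy (placeholder)";
  parameter Real Kv = 1e-4 "Hypothetical flow coefficient";
  // One state record; only the properties that are actually needed are computed from it
  Medium.ThermodynamicState state_in = Medium.setState_phX(p_in, h_in);
  Medium.Density d_in = Medium.density(state_in) "Inlet density";
  Modelica.SIunits.MassFlowRate m_flow "Mass flow rate";
equation
  m_flow = Kv*sqrt(d_in*max(p_in - p_out, 0)) "Crude valve law, for illustration only";
end StaticValveSketch;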