I tried to use this code:
Real x,y;
Boolean trigger(start = true);
equation
when x < y and trigger then
trigger = false;
end when;
I want the when clause to fire only once, but my code doesn't work.
How can I generate complex events for a when statement in Modelica?
In Dymola you get the following error message:
The computational causality analysis requires the variables trigger to be solved from the equation: when x < y and trigger then trigger = false; end when; however, the when condition also depends on the unknowns. You may be able to cut the loop by putting 'pre' around these references in the when condition.
Thus the solution would be:
Real x,y;
Boolean trigger(start = true);
equation
when x < y and pre(trigger) then
trigger = false;
end when;
As you can see, this is quite simple (and it simulates in Dymola), but I haven't checked it in OpenModelica.
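For completeness, here is a self-contained sketch of that solution with arbitrary definitions of x and y (chosen by me, not part of the original question) so it can be simulated directly; the when clause fires once at time = 5 and never again:
model OneShotWhen "Self-contained sketch of the pre(trigger) pattern"
  Real x, y;
  Boolean trigger(start = true);
equation
  x = 10 - time;  // arbitrary signals so that x < y becomes true at time = 5
  y = 5;
  when x < y and pre(trigger) then
    trigger = false;
  end when;
end OneShotWhen;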
The problem you seem to be hitting is the first error message, "Internal error BackendDAETransform.analyseStrongComponentBlock failed (Sorry - Support for Discrete Equation Systems is not yet implemented)". This seems to be https://trac.openmodelica.org/OpenModelica/ticket/1232, and I think it is caused by redefining part of your condition variable within the when statement.
You can work around this with reinit. See also the Bouncing ball example and the reference. It needs to act on a state variable; that's why I put der(trigger) in there.
model test_when
Real trigger(start = 1.0, fixed = true);
equation
der(trigger) = 0;
when trigger > 0.5 and time > 5 then
reinit(trigger, 0);
end when;
annotation(
experiment(StartTime = 0, StopTime = 10, Tolerance = 1e-06, Interval = 0.02));
end test_when;
Probably there is a nicer way to achieve this. Anybody else got input on this?
You can check the compilation log (Statistics - events) to confirm that only one event was fired.
I am using the default dassl integrator. In my model, a volume is controlled using Booleans to open or close 4 valves (2 work together). A few milliseconds after the state of the Boolean changes (from 1 to 0), I receive this error message:
Is there any way to find out more about what is causing the problem?
model CONTROLLER
Modelica.Blocks.Interfaces.RealInput V_LT_min;
Modelica.Blocks.Interfaces.RealInput V_LT_max;
Modelica.Blocks.Interfaces.RealInput V_LT_lev;
Modelica.Blocks.Interfaces.BooleanOutput open1(start=true);
Modelica.Blocks.Interfaces.BooleanOutput open2(start=false);
equation
when (V_LT_lev <= V_LT_max) then
open1 = true;
elsewhen (V_LT_lev < V_LT_min) then
open1 = false;
end when;
open2 = not open1;
end CONTROLLER;
model EV_LT
package SI = Modelica.SIunits;
package Medium = Modelica.Media.Water.WaterIF97_ph;
Thermofluid_connector port_e;
Thermofluid_connector port_s;
parameter Real Kv=3.79;
Modelica.Blocks.Interfaces.BooleanInput open;
Real dbM;
Real delta_p;
equation
//dbM=port_e.dbM;
delta_p = (port_e.p - port_s.p)/10^5;
if (delta_p >= 10^(-5)) then
dbM = Kv*sqrt(delta_p)*1000/3600;
else
dbM = 0;
end if;
port_e.dbM = if open then dbM else 0;
port_e.dbM + port_s.dbM = 0;
port_s.dbH = port_s.dbM*port_s.h;
port_e.h = port_s.h;
end EV_LT;
connector Thermofluid_connector
package SI = Modelica.SIunits;
SI.AbsolutePressure p;
flow SI.MassFlowRate dbM;
SI.SpecificEnthalpy h;
flow SI.EnthalpyFlowRate dbH;
equation
end Thermofluid_connector;
I looked a little bit deeper into this model, and I don't know how any of this works on your system. I tried to simulate the model EV_LT and ran into a structural singularity, which is quite obvious when looking at the system.
You define the equation port_e.dbM + port_s.dbM = 0.0;, and because neither port_e nor port_s is connected to anything, the equations port_e.dbM = 0.0 and port_s.dbM = 0.0 are generated (flow variables are set to zero for unconnected connectors). That gives three equations with only two unknowns, hence the structural singularity. The compiler actually tries to resolve this by differentiating these equations (index reduction), but as you might imagine that does not yield any additional information whatsoever.
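To make that concrete, here is a minimal illustration of my own (not your model) of what happens when a component with two unconnected flow connectors is simulated stand-alone:
connector F
  Real e "Potential variable";
  flow Real f "Flow variable";
end F;

model Standalone "Structurally singular when simulated without connections"
  F c1, c2;
equation
  c1.e = c2.e;      // relates the potentials
  c1.f + c2.f = 0;  // the component's own flow balance
  // Because c1 and c2 are unconnected, the tool additionally generates
  // c1.f = 0 and c2.f = 0, so the two flows are governed by three equations
  // (and the two potentials are left with only one equation, so the system
  // is both over- and underdetermined).
end Standalone;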
Are you sure you simulated the exact model stated above?
If you are talking about the simulation of only the controller, that does work but is pretty unexciting since it does not depend on time at all.
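If you do want the controller alone to do something time-dependent, a test harness along these lines drives it with a varying level (the block names and values are mine, and I assume an MSL 3.2.x Sine block with the freqHz parameter):
model CONTROLLER_Test "Sketch: drive CONTROLLER with a time-varying level"
  CONTROLLER controller;
  Modelica.Blocks.Sources.Constant V_min(k = 0.2);
  Modelica.Blocks.Sources.Constant V_max(k = 0.8);
  Modelica.Blocks.Sources.Sine level(amplitude = 0.5, freqHz = 0.1, offset = 0.5);
equation
  connect(V_min.y, controller.V_LT_min);
  connect(V_max.y, controller.V_LT_max);
  connect(level.y, controller.V_LT_lev);
end CONTROLLER_Test;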
EDIT: Can you provide the omc version you are working with? I tested it with 1.14 and 1.16~dev. Maybe you have an older version and should upgrade.
I am using the spatialDistribution() operator in Dymola and get the following message when using Hidden.PrintFailureToDifferentiate = true;
"Can only compute non-scalar gradients of functions specifying derivatives and not for: spatialDistribution"
I call the operator like this:
(time_rev,time_flow) = spatialDistribution(time,time,x/length,v_water>=0,{0.0,1.0}, {time,time});
and use it to calculate the outlet temperature of my pipe.
Anyone got an idea where the issue lies? I don't really understand the error message.
More complete example:
cp_in = //Calculates specific Heatcap
cp_out = //Calculates specific Heatcap
cp = (cp_in+cp_out)*0.5;
C = (Modelica.Constants.pi*(1/4))*diameter_i^2*fluidInlet.d*cp;
R= // Calculates Heatresistance
//---------Conservation of mass flow and composition
//The usual stuff equal massflow,xi and p at both connectors
//----------Spatial
tau_nom = C*R;
v_water = //Calc Speed of water from Geometric data and inlet rho
der(x) = v_water;
(time_reversed,time_flow) = spatialDistribution(time,time,x/length,v_water>=0, {0.0,1.0}, {time,time});
tau_delay= time - time_flow;
tau_reversed= time - time_reversed; //Not used right now
if inlet.m_flow >= 0 then
T_out = (T_amb + (T_in - heat.T)*exp(-tau_delay/tau_nom));
heat.Q_flow = -inlet.m_flow*cp*(T_in - T_out);
inlet.h = inStream(outlet.h);
else
outlet.h = inStream(inlet.h);
T_in = T_out;
heat.Q_flow = -inlet.m_flow*cp*(T_in - T_out);
end if;
The reason for getting this error message is that Dymola cannot compute a gradient that is likely needed as part of computing a Jacobian for a non-linear system of equations.
If you look at the translation log I would expect that "Number of numerical Jacobians: " is non-zero.
A missing Jacobian for a non-linear system of equations is normally not a major issue.
However, that the non-linear system needs the gradient for spatialDistribution does not seem right, since it indicates that the delayed variables are implicitly given in some odd way.
It could be that the delay of the spatial distribution should solve that and in that case Dymola 2019 FD01 might remove the issue if you set Advanced.BreakDelayLoops=true; (but it is difficult to say without the complete model).
(It seems you have an earlier version, and the flag does not work there.)
I know this is a bit of a late answer, but it was difficult to investigate without a complete model.
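For reference, here is a minimal, stand-alone sketch of spatialDistribution used as a pure transport delay (the model and parameter values are mine, not from the question); isolating the operator like this can help check whether the message comes from the operator itself or from the surrounding non-linear system:
model SpatialDelaySketch "Transport delay of the entry time along a pipe"
  parameter Real length = 1 "Pipe length";
  parameter Real v_water = 0.2 "Constant transport velocity (assumed)";
  Real x(start = 0, fixed = true) "Travelled distance";
  Real time_rev "Entry time seen at the inlet for reversed flow";
  Real time_flow "Entry time of the fluid that is now at the outlet";
  Real tau_delay "Transport delay at the outlet";
equation
  der(x) = v_water;
  // initial entry times along the pipe are assumed to be zero
  (time_rev, time_flow) = spatialDistribution(time, time, x/length, v_water >= 0,
                                              {0.0, 1.0}, {0.0, 0.0});
  tau_delay = time - time_flow;
end SpatialDelaySketch;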
I'm trying to create a model where one Modelica variable is a triangular wave of another variable. First I tried the floor() function as below:
model test1
final constant Real pi=2*Modelica.Math.asin(1.0);
parameter Real b = 1;
parameter Real a = 1;
Real x,p,u;
equation
if sign(sin(x*pi/b))>=0 then
p=a*(x-b*floor(x/b));
else
p=a*(b-(x-b*floor(x/b)));
end if;
x=time;
u = floor(x/b);
end test1;
(x=time; is arbitrary so the model compiles)
but the result is weird: zooming in on the plot, about 0.005 seconds before each step the floor function behaves unexpectedly and ramps linearly towards the next value instead of jumping.
Then I tried the ceil() function. Everything seemed right until I realised the same problem happens with ceil() at other values (e.g. x = 13).
I would appreciate it if you could:
help me understand why this "glitch" happens, and whether it is intentional by design or a bug;
explain how I can fix it;
suggest alternatives for creating a triangular wave function.
P.S. I am using this "wave function" to model the interaction between two jagged bodies.
If you are allowed to use the Modelica Standard Library, you can build a parametrized, time-based zigzag signal using the CombiTimeTable block with linear interpolation and periodic extrapolation. For example:
model Test4
parameter Real a=2 "Amplitude";
parameter Real b=3 "Period";
Real y=zigzag.y[1] "Zigzag";
Modelica.Blocks.Sources.CombiTimeTable zigzag(
table=[0,0;b/4,a;b/4,a;b/2,0;b/2,0;3*b/4,-a;3*b/4,-a;b,0],
extrapolation=Modelica.Blocks.Types.Extrapolation.Periodic)
annotation(Placement(transformation(extent={{-80,60},{-60,80}})));
Modelica.Blocks.Sources.Trapezoid trapezoid(
amplitude=2*a,
rising=b/2,
width=0,
falling=b/2,
period=b,
offset=-a)
annotation(Placement(transformation(extent={{-80,25},{-60,45}})));
annotation(uses(Modelica(version="3.2.2")));
end Test4;
I don't have an explanation for the glitches in your simulation.
However, I would take another approach to the triangular function: I see it as an integrator integrating +1 and -1 upwards and downwards. The integration time determines the amplitude and period of the triangular wave.
The pictures below show an implementation using MSL blocks and one using code. The simulation results below are the same for both implementations.
Best regards,
Rene Just Nielsen
Block diagram:
Code:
model test3
parameter Real a=2 "amplitude";
parameter Real b=3 "period";
Real u, y;
initial equation
u = 1;
y = 0;
equation
4*a/b*u = der(y);
when y > a then
u = -1;
elsewhen y < -a then
u = 1;
end when;
end test3;
Simulation result:
I guess the problem is due to floating-point representation and events not occurring at exact times.
Consider x - floor(x) and 1 - (x - floor(x)): at time = 0.99 they are 0.99 and 0.01, while at time = 1.00 they are 0.0 and 1.0, which causes your problems.
For a=b=1, you can use the following equation for p:
p = min(mod(x,2), 2 - mod(x,2));. You can even wrap it in noEvent (see the sketch after the model below), and you can consider the signal continuous (but not differentiable).
model test
parameter Real b = 1;
parameter Real a = 3;
Real x, p;
equation
p = 2*a*min(1 / b * mod(x, b ),1 - 1/b * mod(x, b));
x = time;
end test;
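As mentioned above, the expression can also be wrapped in noEvent; a minimal sketch for a = b = 1 (the model name is mine):
model test_noEvent "Triangle wave via mod() without event generation (a = b = 1)"
  Real x, p;
equation
  x = time;
  // noEvent suppresses event generation; the signal is continuous
  // but not differentiable at the peaks.
  p = noEvent(min(mod(x, 2), 2 - mod(x, 2)));
end test_noEvent;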
My first advice would be to remove the sign function, since there is no benefit of doing sign(foo) >= 0 compared to foo >= 0.
Interestingly enough, that seems to fix the problem in Dymola - and I assume also in OpenModelica:
model test1 "almost original"
final constant Real pi=2*Modelica.Math.asin(1.0);
parameter Real b = 1;
parameter Real a = 1;
Real x,p,u;
equation
if sin(x*pi/b)>=0 then
p=a*(x-b*floor(x/b));
else
p=a*(b-(x-b*floor(x/b)));
end if;
x=time;
u = floor(x/b);
end test1;
Now I only have to explain that. The reason is that sin(x*pi/b) is slightly out of sync with the floor function, but if you use sin(x*pi/b)>=0 that difference is within the root-finding epsilon and nothing strange happens.
When you use sign(sin(x*pi/b))>=0 that is no longer possible: instead of sin(x*pi/b) being an epsilon below zero it is now -1, and instead of an epsilon above zero it is 1.
The real solution is thus slightly more complicated:
model test2 "working"
parameter Real b = 1;
parameter Real a = 1;
Real x,p,u;
Real phase=mod(x,b*2);
equation
if phase<b then
p=a/b*phase;
else
p=a-a/b*(phase-b);
end if;
x=time;
u = floor(x/b);
end test2;
which was improved based on a suggested solution:
model test3 "almost working"
parameter Real b = 1;
parameter Real a = 1;
Real x,p,u;
equation
if mod(x,2*b)<b then
p=a/b*mod(x,b);
else
p=a-a/b*mod(x,b);
end if;
x=time;
u = floor(x/b);
end test3;
The key point in this solution, test2, is that there is only one problematic event-generating expression, mod(x,2*b), and the < comparison will not get out of sync with it.
In practice test3 will almost certainly also work, but in unlikely cases the event generation might get out of sync between mod(x,2*b) and mod(x,b), with unknown consequences.
Note that all three examples are now modified to generate output that looks similar.
I would like to ask a Modelica question about the when clause; the following source code does not work properly. The variable Pstart_CONV is an initial condition for der(x_calc) in the if statement, and the value of Pstart_CONV is given by x when the when condition becomes true. Because x is a step function, I want to assign an initial condition for der(x_calc) so that x can be continued over the whole domain.
Thank you very much,
Source:
model Unnamed4
Real Pstart_CONV;
Real P_crit_ratio;
parameter Real P_crit_ratio_criteria = 2.00;
Real x;
Real x_calc(start=0);
equation
P_crit_ratio = 10-time;
when P_crit_ratio <= P_crit_ratio_criteria then
Pstart_CONV = x;
end when;
if P_crit_ratio >= P_crit_ratio_criteria then
x = time^2;
x_calc = 0;
else
der(x_calc) = time * 5;
x = x_calc + Pstart_CONV;
end if;
end Unnamed4;
There are two issues I see with this code. The main one has to do with the fact that this is what is called a "variable index" problem. I'll address that. But first, I want to point out that your if and when clauses are not properly synchronized. What I mean by that is that the change in behavior represented by your if statement will not necessarily occur at the same instant that the when clause is activated.
To address this, you can easily refactor your model to look like this:
model Model1
Real Pstart_CONV;
Real P_crit_ratio;
parameter Real P_crit_ratio_criteria=2.00;
Real x;
Real x_calc(start=0);
Boolean trigger(start=false);
equation
P_crit_ratio = 10-time;
when P_crit_ratio <= P_crit_ratio_criteria then
Pstart_CONV = x;
trigger = true;
end when;
if trigger then
der(x_calc) = time * 5;
x = x_calc + Pstart_CONV;
else
x_calc = 0;
x = time^2;
end if;
end Model1;
Now both the if and when clauses are tied to the trigger variable, and we can address your main problem, which is that on one side of your if statement you have:
der(x_calc) = time * 5;
...and on the other side you have:
x_calc = 0;
In practice, what this means is that for part of the simulation you solve x_calc using a differential equation while during the other part of the simulation you solve x_calc using an algebraic equation. This leads to the "variable index" problem because the "index" of the DAE changes depending on whether the value of trigger is true or false.
One approach is to modify the equations slightly. Instead of using the equation x_calc = 0, we specify an initial condition of 0 for x_calc and then enforce a differential equation that says the value of x_calc doesn't change, i.e., der(x_calc) = 0. In other words, we get the same behavior by removing the algebraic equation that sets x_calc to a constant and replacing it with an initial equation that sets x_calc to the desired value plus a differential equation that, in effect, simply says the value of x_calc doesn't change.
Making such a change in your case leads to the following model:
model Model2
Real Pstart_CONV;
Real P_crit_ratio;
parameter Real P_crit_ratio_criteria=2.0;
Real x;
Real x_calc(start=0);
Boolean trigger(start=false);
initial equation
x_calc = 0;
equation
P_crit_ratio = 10-time;
when P_crit_ratio <= P_crit_ratio_criteria then
Pstart_CONV = x;
trigger = true;
end when;
if trigger then
der(x_calc) = time * 5;
x = x_calc + Pstart_CONV;
else
der(x_calc) = 0;
x = time^2;
end if;
end Model2;
I tested it, and this model ran using SystemModeler (although I don't know enough about your problem or the expected results to truly validate the results).
I hope that helps.
I apologize for the poor title, but I found it hard to describe the problem in a comprehensible way.
What I want to do is to solve an ODE, but I don't want to start integrating at time = 0. I want the initial value, i.e. the starting point of the integration, to be accessible for changes until the integration starts. I'll try to illustrate this with a piece of code:
model testModel "A test"
parameter Real startTime = 10 "Starting time of integration";
parameter Real a = 0.1 "Some constant";
Real x;
input Real x_init = 3;
initial equation
x = x_init;
equation
if time <= startTime then
x = x_init;
else
der(x) = -a*x;
end if;
end testModel;
Notice that x_init is declared as input and can be changed continuously. This code yields an error message, and as far as I can tell this is because I define x through both der(x) = ... and x = .... The error message is:
Error: Singular inconsistent scalar system for der(x) = ( -(if time <= 10 then x-x_init else a*x))/((if time <= 10 then 0.0 else 1.0)) = -1e-011/0
I thought about writing
der(x) = 0
instead of
x = x_init
in the if-statement, which avoids the error message. The problem with such an approach, however, is that I lose the ability to modify x_init, i.e. the starting point of the integration, before the integration starts. Let's say, for instance, that x_init changes from 3 to 4 at time = 7.
Is there a workaround to do what I want? Thanks.
(I'm going to use this to simulate several submodels as part of a network, but the submodels are not going to be started at the same time, hence the startTime variable and the ability to change the initial condition before integration.)
Suggested solution: I've tried out the following:
when time >= startTime then
reinit(x,x_init);
end when;
in combination with the der(x) = 0 alternative. This seems to work. Other suggestions are welcome.
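For completeness, here is a minimal sketch of how those pieces fit together (the model name is mine); x stays frozen at its initial value before startTime and is then reinitialized to the current value of x_init:
model testModelReinit "Sketch of the reinit workaround described above"
  parameter Real startTime = 10 "Starting time of integration";
  parameter Real a = 0.1 "Some constant";
  Real x;
  input Real x_init = 3;
initial equation
  x = x_init;
equation
  if time <= startTime then
    der(x) = 0;
  else
    der(x) = -a*x;
  end if;
  when time >= startTime then
    reinit(x, x_init);  // pick up the current value of x_init at startTime
  end when;
end testModelReinit;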
If your input is differentiable, this should work:
model testModel "A test"
parameter Real startTime = 10 "Starting time of integration";
parameter Real a = 0.1 "Some constant";
Real x;
input Real x_init = 3;
initial equation
x = x_init;
equation
if time <= startTime then
der(x) = der(x_init);
else
der(x) = -a*x;
end if;
end testModel;
Otherwise, I suspect the best you could do would be to have your x variable be a very fast first-order tracker before startTime.
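A minimal sketch of that tracker idea (the time constant tau and its value are my assumptions):
model testModelTracker "Sketch: x tracks x_init quickly before startTime"
  parameter Real startTime = 10 "Starting time of integration";
  parameter Real a = 0.1 "Some constant";
  parameter Real tau = 1e-3 "Small tracking time constant (assumed value)";
  Real x;
  input Real x_init = 3;
initial equation
  x = x_init;
equation
  if time <= startTime then
    der(x) = (x_init - x)/tau;  // x follows x_init closely before integration starts
  else
    der(x) = -a*x;
  end if;
end testModelTracker;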
The fundamental issue here is that you are trying to model a variable index DAE. None of the Modelica tools I'm aware of support variable index systems like this.