For simple test models, I commonly use syntax similar to:
// Assuming the start time is 0 and stop time is 1
x = xMin + (xMax - xMin) * time;
y = f(x);
To be correct regardless of the simulation setup, I would like to use:
x = xMin + (xMax - xMin) * (time - startTime) / (stopTime - startTime);
y = f(x);
However, I am unsure how I can reference the values defined in the Simulation Setup / General form.
I have tried simply referencing StartTime, startTime, starttime, timestart, timeStart, etc. with no success.
I understand that it is possible to set StartTime and StopTime using an annotation, but those values are only set the first time a model is opened and so may not truly reflect a simulation's start time and stop time.
It is currently not possible to access the stop-time of a simulation inside Dymola to use in the model, but you can get the start-time as follows:
parameter Real startTime(fixed=false);
initial equation
startTime=time;
Note that if you use Simulation > Continue > Continue, startTime will not be updated; it keeps its original value.
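To show how this ties back to the original question, here is a minimal sketch of my own that combines the trick above with the normalization from the question. The names xMin, xMax and the sin placeholder for f are just illustrations, and stopTime still has to be entered by hand since the stop-time cannot be read from inside the model:
model NormalizedSweep
  parameter Real xMin = 0;
  parameter Real xMax = 1;
  parameter Real stopTime = 1 "Must be kept in sync with the simulation setup by hand";
  parameter Real startTime(fixed=false) "Picked up automatically at initialization";
  Real x "Normalized sweep variable";
  Real y;
initial equation
  startTime = time;
equation
  x = xMin + (xMax - xMin)*(time - startTime)/(stopTime - startTime);
  y = sin(x) "Placeholder for f(x)";
end NormalizedSweep;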
Not perfect, but you could provide the information from outside:
Add the start- and stop-time as parameters to your model
parameter Modelica.SIunits.Time startTime = 0;
parameter Modelica.SIunits.Time stopTime = 1;
and use a function to perform the simulations
function sim
  input Modelica.SIunits.Time startTime = 1;
  input Modelica.SIunits.Time stopTime = 2;
algorithm
  DymolaCommands.SimulatorAPI.simulateExtendedModel(
    "model-name", startTime, stopTime,
    initialNames={"startTime", "stopTime"},
    initialValues={startTime, stopTime});
end sim;
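The function can then be called from the Dymola command line, for example (the time values below are arbitrary, and "model-name" in the function still has to be replaced by the actual model name):
sim(startTime=0, stopTime=10);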
Related
I built a simple model in Dymola and chose to use i_R1 to set the initialization condition, as shown in the following code.
model circuit1
  Real i_gen(unit="A") "Current of the generator";
  Real i_R1(start=1, fixed=true, unit="A") "Current of R1";
  Real i_R2(unit="A") "Current of R2";
  Real i_C(unit="A") "Current of the capacitor";
  Real i_D(unit="A") "Current of the diode";
  Real u_1(unit="V") "Voltage of generator";
  Real u_2(unit="V") "Output voltage";
  // Voltage generator
  constant Real PI = 3.1415926536;
  parameter Real U0(unit="V") = 5;
  parameter Real frec(unit="Hz") = 100;
  parameter Real w(unit="rad/s") = 2*PI*frec;
  parameter Real phi(unit="rad") = 0;
  // Resistors
  parameter Real R1(unit="Ohm") = 100;
  parameter Real R2(unit="Ohm") = 100;
  // Capacitor
  parameter Real C(unit="F") = 1e-6;
  // Diode
  parameter Real Is(unit="A") = 1e-9;
  parameter Real Vt(unit="V") = 0.025;
equation
  // Node equations
  i_gen = i_R1;
  i_R1 = i_D + i_R2 + i_C;
  // Constitutive relationships
  u_1 = U0*sin(w*time + phi);
  u_1 - u_2 = i_R1*R1;
  i_D = Is*(exp(u_2/Vt) - 1);
  u_2 = i_R2*R2;
  C*der(u_2) = i_C;
end circuit1;
But after translation, dsin.txt shows that i_R1 is a free variable while u_2 is fixed.
My question is: why does Dymola set u_2 as fixed instead of i_R1?
The first column in dsin.txt is now primarily used for continue simulation in Dymola, and is otherwise sort of ignored.
If you want to know which values are relevant for starting the simulation, i.e. parameters and variables with fixed=true, you should instead look at the 6th column and test x&8; that shows that i_R1, U0, frec, phi, R1, R2, C, Is, and Vt will influence a normal simulation.
For continue simulation it is instead x&16 that matters, so u_2 instead of i_R1.
The x above is the value in the 6th column, and &8 denotes a bitwise AND with 8. In Modelica you could use mod(div(x, 8), 2) == 1 to test such bits.
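Wrapped up as a small helper, the mod/div test above could look like this (my own sketch, not something shipped with Dymola), where bitIsSet(x, 3) corresponds to x&8 and bitIsSet(x, 4) to x&16:
function bitIsSet "True if bit k (counting from 0) of x is set"
  input Integer x;
  input Integer k;
  output Boolean result;
protected
  Integer p;
algorithm
  p := 1;
  for i in 1:k loop
    p := 2*p;
  end for;
  result := mod(div(x, p), 2) == 1;
end bitIsSet;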
Better read Hans Olsson's answer, he knows this better; still, here is what I wrote:
I didn't implement it, so take everything with a grain of salt:
dsmodel.mof for the posted example contains the following code:
// Initial Section
u_1 := U0*sin(w*time+phi);
u_2 := u_1-i_R1*R1;
Using the values from the example results in u_1 = 0 and u_2 = -100. So it seems the fixed start value of i_R1 is used to compute the initial value of u_2 via the above equations.
This should be the reason the fixed attributes in dsin.txt differ from the original Modelica code: information from the model is used to compute the initial value of the state (u_2) from the start value of an auxiliary variable (i_R1), and in the executed code it is the state that is being initialized.
Speculation: u_2 is unknown when creating dsin.txt, so it is set to zero and computed later. This should correspond to the first case described in dsin.txt in
"Initial values are calculated according to the following procedure:"
which is when all states are fixed.
I think it is a bug: even though it is marked as fixed, the voltage u_2 starts at -100 V instead of 0 V when I simulate the model, and i_R1 starts at 1 A.
Speculation: Perhaps the sorting algorithms used during translation are allowed to move fixed initial values onto more meaningful variables, as long as the condition given by the Modelica code (i_R1 = 1, in your case) is met. If that's the case, it would still count as a bug for me, but it might explain what's going on.
I'm trying to calculate the DT value from a model I set up in Sim4Life. First, I'd like to say that I am a complete beginner and I am trying to understand how programming works in general.
Now, I have a function with some constants and two variables: one is the time Dt (going from 1 s to 900 s) and the other is the initial DT_i value. I want to calculate the increase of temperature for every second, i.e. create a loop that replaces DT_i with the next value DT_(i+1) and also calculates the increased temperature DT_(i+1). The recurrence looks like this: DT_(i+1) = DT_i + Dt.
I know it is a very simple problem but I couldn't work my way through other similar questions. Any help would be appreciated.
Temperature variation:
You need an initial temperature variation; I used 0.
T(i+1) stands for the next temperature variation, T(i) for the present temperature variation, and i for the time step dt.
Read through the comments in my code.
Time:
Use a for loop to step through time: for i = 1:900 ... end.
i = 1:900 just means the first run uses time = 1 s, the second run time = 2 s, and so on up to 900 s.
The code is as follows:
% Constants from the problem statement
d = 1.3;
c = 3.7;
S_i = 3*10^3;
t_reg = 900;
% Time vector
t = 1:900;
% Length of time, used to know the size of the variable to initialize
l = length(t);
% Preallocate the variable used to store DT; this speeds up computation.
% The initial temperature variation DT(1) is set to zero, unless you have some data.
DT = zeros(1, l);
for i = 1:l-1
    % The value of i represents dt: first run i = 1, dt = 1; second run
    % i = 2, dt = 2; and so on. Stopping at l-1 keeps the index i+1 in range.
    DT(i+1) = DT(i) + (i./t_reg).*(d.*sqrt(c*S_i) - DT(i));
end
Is there an alternative to the sample function in OpenModelica that accepts arguments which are not of type parameter? That is, the alternative should permit a variable sampling range during a simulation.
The end goal is to create a class with which I can measure the RMS value of a real signal during a simulation. The RMS value is used as a control variable. The real signal has a continuously changing frequency, so in order to get better measurements I want to either vary the sampling range continuously during the simulation or discretely in some sections/periods of the oscillation.
Is it also possible to have a "running RMS" function so that the output is continuous?
In short, I would like to calculate the RMS value over a variable sampling range, where the sample gains only one new term or value per iteration rather than a completely new set of values.
Some possible solutions (you probably should check my math and just use them for inspiration; also check the RootMeanSquare block in the standard library which for some reason samples the Mean block):
Running RMS from beginning of time (no frequency).
model RMS
  Real signal = sin(time);
  // Assume the start-time is 0; one can also integrate the denominator using der(denom) = 1
  // for a portable solution. Remember to guard the first instants against division by zero.
  Real rms = if time < 1e-10 then signal else sqrt(i_sq/time);
  Real i_sq(start=0, fixed=true) "Integrated square of the signal";
equation
  der(i_sq) = signal^2;
end RMS;
With a fixed window, f:
model RMS
  constant Real f = 2*2*asin(1.0) "Window length (2*pi)";
  Real signal = sin(time);
  Real rms = if time < f then (if time < 1e-10 then signal else sqrt(i_sq/time)) else sqrt(i_sq_f/f);
  Real i_sq(start=0, fixed=true);
  Real i_sq_f = i_sq - delay(i_sq, f) "Integrated square over the last window";
equation
  der(i_sq) = signal^2;
end RMS;
With a variable window, f (limited by f_max):
model RMS
  constant Real f_max = 2*2*asin(1.0) "Upper bound for the window length (2*pi)";
  Real f = 1 + abs(2*asin(min(time, 1))) "Variable window length (argument clamped to asin's domain)";
  Real signal = sin(time);
  Real rms = if time < f then (if time < 1e-10 then signal else sqrt(i_sq/time)) else sqrt(i_sq_f/f);
  Real i_sq(start=0, fixed=true);
  Real i_sq_f = i_sq - delay(i_sq, f, f_max) "Integrated square over the last window";
equation
  der(i_sq) = signal^2;
end RMS;
Variable time for sampling in synchronous Modelica: https://trac.modelica.org/Modelica/ticket/2022
Variable time for sampling in older Modelica:
when time >= nextEvent then
  doSampleStuff(...);
  nextEvent = calculateNextSampleTime(...);
end when;
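To make the old-style approach concrete, here is a self-contained sketch of my own (the names VariableSampler, sampled and nextEvent are just illustrative) that samples a signal at a variable interval computed inside the model rather than from a parameter:
model VariableSampler
  Real signal = sin(time*time) "Signal with increasing frequency";
  discrete Real sampled(start=0, fixed=true) "Last sampled value";
  discrete Real nextEvent(start=0.1, fixed=true) "Time of the next sample";
equation
  when time >= pre(nextEvent) then
    sampled = signal;
    // Shrink the sampling interval as time (and hence the signal frequency) grows
    nextEvent = pre(nextEvent) + 1/(1 + time);
  end when;
end VariableSampler;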
I need to use etime to calculate how many seconds a computation takes. I thought about something like this:
t1 = datetime('now');
% Do some computation
t2 = datetime('now');
temp = etime(t2, t1)
But I am getting this error message:
Error using etime(line 40), Index exceeds matrix dimensions.
What's wrong with it?
The inputs to etime are expected to be vectors in the same format as the output of clock, not datetime objects.
t1 = clock;
t2 = clock;
elapsed = etime(t2, t1)
It is likely easier to surround your code with tic and toc, which will automatically compute the elapsed time.
tmr = tic;
% do stuff
elapsed = toc(tmr);
That being said, if you want an accurate measurement of execution time, it is far better to use timeit.
I apologize for the poor title, but I found it hard to describe the problem in a comprehensible way.
What I want to do is to solve an ODE, but I don't want to start integrating at time = 0. I want the initial value, i.e. the starting point of the integration, to be accessible for changes until the integration starts. I'll try to illustrate this with a piece of code:
model testModel "A test"
  parameter Real startTime = 10 "Starting time of integration";
  parameter Real a = 0.1 "Some constant";
  Real x;
  input Real x_init = 3;
initial equation
  x = x_init;
equation
  if time <= startTime then
    x = x_init;
  else
    der(x) = -a*x;
  end if;
end testModel;
Notice that x_init is declared as an input and can be changed continuously. This code yields an error message, and as far as I can tell this is because I have given x both an x = ... equation and a der(x) = ... equation. The error message is:
Error: Singular inconsistent scalar system for der(x) = ( -(if time <= 10 then x-x_init else a*x))/((if time <= 10 then 0.0 else 1.0)) = -1e-011/0
I thought about writing
der(x) = 0
instead of
x = x_init
in the if-statement, which will avoid the error message. The problem with such an approach, however, is that I lose the ability to modify x_init, i.e. the starting point of the integration, before the integration starts. Let's say, for instance, that x_init changes from 3 to 4 at time = 7.
Is there a work-around to perform what I want? Thanks.
(I'm going to use this to simulate several submodels as part of a network, but the submodels are not going to be started at the same time, hence the startTime variable and the ability to change the initial condition before integration.)
Suggested solution: I've tried out the following:
when time >= startTime then
  reinit(x, x_init);
end when;
in combination with the der(x) = 0 alternative. This seems to work. Other suggestions are welcome.
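Putting the pieces together, a complete version of this suggested work-around could look like the sketch below (my own assembly of the question's model with the der(x) = 0 branch and the reinit; the name testModelReinit is just illustrative):
model testModelReinit "Hold x until startTime, then integrate"
  parameter Real startTime = 10 "Starting time of integration";
  parameter Real a = 0.1 "Some constant";
  Real x;
  input Real x_init = 3;
initial equation
  x = x_init;
equation
  if time <= startTime then
    der(x) = 0;
  else
    der(x) = -a*x;
  end if;
  when time >= startTime then
    // Pick up whatever value x_init has reached when integration starts
    reinit(x, x_init);
  end when;
end testModelReinit;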
If your input is differentiable, this should work:
model testModel "A test"
  parameter Real startTime = 10 "Starting time of integration";
  parameter Real a = 0.1 "Some constant";
  Real x;
  input Real x_init = 3;
initial equation
  x = x_init;
equation
  if time <= startTime then
    der(x) = der(x_init);
  else
    der(x) = -a*x;
  end if;
end testModel;
Otherwise, I suspect the best you could do would be to have your x variable be a very fast first-order tracker before startTime.
The fundamental issue here is that you are trying to model a variable index DAE. None of the Modelica tools I'm aware of support variable index systems like this.
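For completeness, here is a minimal sketch of the "fast first-order tracker" idea mentioned above (tau and the name testModelTracker are my own illustrative choices, not from the original answer): before startTime, x chases x_init with a small time constant, and afterwards it follows the actual dynamics.
model testModelTracker
  parameter Real startTime = 10 "Starting time of integration";
  parameter Real a = 0.1 "Some constant";
  parameter Real tau = 1e-3 "Small tracking time constant";
  Real x;
  input Real x_init = 3;
initial equation
  x = x_init;
equation
  if time <= startTime then
    // Fast first-order tracking of x_init before integration starts
    der(x) = (x_init - x)/tau;
  else
    der(x) = -a*x;
  end if;
end testModelTracker;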