OMEdit simulation flags “-override” and “-overrideFile” usage

Can someone share simple examples of how to use the simulation flags “-override” and “-overrideFile” in OpenModelica's OMEdit?

Here is a simple .mos script showing how to use -override and -overrideFile:
loadString("model M Real r(start=1.0) = der(r); end M;");
simulate(M);
val(r, 0.5);
simulate(M, simflags="-override r=2.0");
val(r, 0.5);
writeFile("a.txt", "r=4.0\n");
simulate(M, simflags="-overrideFile=a.txt");
val(r, 0.5);
The three val(r, 0.5) calls return 1.65, 3.30, and 6.59, showing that the start value is overridden; you can also override some parameters, depending on how they are defined and used in the model.
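If you prefer driving the same OMC session from Python rather than a .mos script, the OMPython package (assuming it is installed alongside your OpenModelica distribution) exposes the same scripting API; a minimal sketch:
from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()  # starts an OpenModelica compiler session
omc.sendExpression('loadString("model M Real r(start=1.0) = der(r); end M;")')

omc.sendExpression('simulate(M)')
print(omc.sendExpression('val(r, 0.5)'))   # ~1.65 with the default start value

# override the start value via the simulation flag
omc.sendExpression('simulate(M, simflags="-override r=2.0")')
print(omc.sendExpression('val(r, 0.5)'))   # ~3.30

# or collect several overrides in a file
omc.sendExpression('writeFile("a.txt", "r=4.0\\n")')
omc.sendExpression('simulate(M, simflags="-overrideFile=a.txt")')
print(omc.sendExpression('val(r, 0.5)'))   # ~6.59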

Related

How to use functions in the command window of Dymola?

I am working with Dymola and trying to use the functions provided by the Modelica Standard Library in the command window, but it seems that I can't use them, and I can't declare a variable of a specific type either. I am wondering if there is some kind of limit on the commands I can use in Dymola's command window. Where can I find all the allowed commands?
I tried to use some functions from Modelica.Media, and it seems the input variables are out of range, although I tried many times with different unit systems. I find that I can't declare a variable of pressure type in the command window, but Modelica.Media.Water.IF97_Utilities.h_pT() requires that I provide the inputs as pressure and temperature; is this the reason I can't use this function in the command window?
Modelica.Media.Water.IF97_Utilities.h_pT(1e6,800,1)
Failed to expand Modelica.Media.Water.IF97_Utilities.h_props_pT(
1000000.0,
800,
Modelica.Media.Common.IF97BaseTwoPhase(
phase = 1,
region = 1,
p = 1000000.0,
T = 800.0,
h = 9.577648835649013E+20,
R = 461.526,
cp = 1.8074392528071426E+20,
cv = -3.7247229288028774E+18,
rho = 5.195917767496603E-13,
s = 1.2052984524009106E+18,
pt = 645518.9415389205,
pd = 6.693617079374418E+18,
vt = 357209983199.2206,
vp = -553368.7088215105,
x = 0.0,
dpT = 645518.9415389205
)).
Failed to expand Modelica.Media.Water.IF97_Utilities.h_pT(1000000.0, 800, 1).
Assuming the inputs are valid, there seems to be an issue specifically related to evaluating some media functions interactively in Dymola (since they shouldn't be evaluated in models). It will be corrected in Dymola 2022x.
A temporary work-around is to first set the flag Advanced.SemiLinear = false; and then:
Modelica.Media.Water.IF97_Utilities.h_pT(1e6,800,1)
= 9.577648835649013E+20
(I'm not sure how valid the formulation is in that region.)
But please remember to set Advanced.SemiLinear = true; before translating and simulating any models, in particular models using media functions.
The problem is that you are giving the function an invalid input. Based on the screenshot and logs you provided, it seems Dymola does not give you the error message for this. I tried it in OpenModelica and got:
Modelica.Media.Water.IF97_Utilities.h_pT(100e5, 500e3)
[Modelica 4.0.0/Media/Water/IF97_Utilities.mo:2245:9-2246:77] Error: assert triggered: IF97 medium function g5: input temperature (= 500000 K) is higher than limit of 2273.15K in region 5
By using a value within the limits, it returns a value:
Modelica.Media.Water.IF97_Utilities.h_pT(100e5, 1e3)

How to eliminate dead code in Dymola/Modelica

I am trying to slim down a very complex model to improve performance, and I noticed big performance changes when I add variables to or remove them from the signal bus, especially multi-body frames.
I am wondering if there is any setting that can eliminate code that isn't involved in generating outputs from the model.
I tried setting the bus connector to "protected" to ensure it doesn't become an output, but the code to calculate its variables is still being generated.
I also tried these flags, but they don't eliminate the dead code:
Advanced.Embedded.OptimizeForOutputs=true;
Advanced.SubstituteVariablesUsedOnce=true;
Evaluate=true;
Advanced.EvaluateAlsoTop=true;
This is a simple model to replicate the scenario:
model TestBusConnector
  extends Modelica.Icons.Example;
protected
  Modelica.Blocks.Examples.BusUsage_Utilities.Interfaces.ControlBus controlBus
    annotation (Placement(transformation(extent={{-20,-20},{20,20}})));
public
  Modelica.Blocks.Sources.Sine sine(freqHz=1)
    annotation (Placement(transformation(extent={{-40,-50},{-20,-30}})));
  Modelica.Blocks.Sources.Constant const(k=0)
    annotation (Placement(transformation(extent={{-10,50},{10,70}})));
  Modelica.Blocks.Interfaces.RealOutput y
    annotation (Placement(transformation(extent={{90,-10},{110,10}})));
equation
  connect(y, const.y) annotation (Line(points={{100,0},{60,0},{60,60},{11,60}}, color={0,0,127}));
  connect(sine.y, controlBus.testBusVariable)
    annotation (Line(points={{-19,-40},{0,-40},{0,0}}, color={0,0,127}));
  annotation (experiment(__Dymola_fixedstepsize=0.001, __Dymola_Algorithm="Euler"),
    __Dymola_experimentFlags(Advanced(
      InlineMethod=0,
      InlineOrder=2,
      InlineFixedStep=0.001)),
    __Dymola_experimentSetupOutput(
      states=false,
      derivatives=false,
      inputs=false,
      outputs=false,
      auxiliaries=false,
      equidistant=false,
      events=false));
end TestBusConnector;
Code generated from Dymola 2019 FD01 is shown below:
#include <dsblock6.c>
PreNonAliasNew(0)
StartNonAlias(0)
DeclareVariable("sine.amplitude", "Amplitude of sine wave", 1, 0.0,0.0,0.0,0,513)
DeclareVariable("sine.freqHz", "Frequency of sine wave [Hz]", 1, 0.0,0.0,0.0,0,513)
DeclareVariable("sine.phase", "Phase of sine wave [rad|deg]", 0, 0.0,0.0,0.0,0,513)
DeclareVariable("sine.offset", "Offset of output signal", 0, 0.0,0.0,0.0,0,513)
DeclareVariable("sine.startTime", "Output = offset for time < startTime [s]", 0,\
0.0,0.0,0.0,0,513)
DeclareVariable("sine.y", "Connector of Real output signal", 0.0, 0.0,0.0,0.0,0,512)
DeclareVariable("const.k", "Constant output value", 0, 0.0,0.0,0.0,0,513)
DeclareVariable("const.y", "Connector of Real output signal", 0, 0.0,0.0,0.0,0,513)
DeclareOutput("y", "", 0, 0.0, 0.0,0.0,0.0,0,513)
DeclareAlias2("controlBus.testBusVariable", "Connector of Real output signal", \
"sine.y", 1, 5, 5, 1028)
EndNonAlias(0)
#define DymolaHaveUpdateInitVars 1
#include <dsblock5.c>
DYMOLA_STATIC void UpdateInitVars(double*time, double* X_, double* XD_, double* U_, double* DP_, int IP_[], Dymola_bool LP_[], double* F_, double* Y_, double* W_, double QZ_[], double duser_[], int iuser_[], void*cuser_[],struct DYNInstanceData*did_,int initialCall) {
}
StartDataBlock
EndDataBlock
The translated Modelica code (dsmodel.mof) still contains the calculation for the sine block:
// Translated Modelica model generated by Dymola from Modelica model
// TEMP.TEST.TestBusConnector
// -----------------------------------------------------------------------------
// Initial Section
sine.amplitude := 1;
sine.freqHz := 1;
sine.phase := 0;
sine.offset := 0;
sine.startTime := 0;
const.k := 0;
const.y := 0;
y := 0.0;
// -----------------------------------------------------------------------------
// Conditionally Accepted Section
sine.y := (if time < 0 then 0 else sin(6.283185307179586*time));
// -----------------------------------------------------------------------------
// Eliminated alias variables
// To have eliminated alias variables listed, set
// Advanced.OutputModelicaCodeWithAliasVariables = true
// before translation. May give much output.
Ideally, I would like the model to translate to:
y := 0.0;
The reason the other answers don't work is that your model is not consistent with your question:
"I am wondering if there is any setting that can eliminate code that isn't involved in generating outputs from the model."
By connecting the control bus to sine.y you implicitly create an output, and thus sine.y is involved in generating outputs from the model.
That can be avoided in one of the following ways:
Remove the connection between sine.y and controlBus
Change controlBus to be protected
Change so that controlBus isn't at the top-level
It's not a direct answer to your question, but it could still help to improve performance. Part of the computational effort you are trying to avoid is spent computing and storing variables for the result file. This can be avoided with the settings below, set as an annotation in the model itself:
annotation (__Dymola_experimentSetupOutput(
  states=false,
  derivatives=false,
  inputs=false,
  auxiliaries=false));
There is another flag that could help. It does not give the result you expected, but it might still be useful:
Advanced.Define.AutoRemoveAuxiliaries = true;
The Dymola User Manual 2 describes the flag as follows:
Removes code for auxiliary variables that neither influences the
simulation state nor the outputs. This improves performance a bit.
From this description my expectation was that the code would be generated as you asked for, but unfortunately that is not the case.

How to implement an exponentially decaying learning rate in Keras by following the global step

Look at the following example:
# encoding: utf-8
import numpy as np
import pandas as pd
import random
import math
from keras import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import Adam, RMSprop
from keras.callbacks import LearningRateScheduler

X = [i*0.05 for i in range(100)]

def step_decay(epoch):
    initial_lrate = 1.0
    drop = 0.5
    epochs_drop = 2.0
    lrate = initial_lrate * math.pow(drop,
                                     math.floor((1+epoch)/epochs_drop))
    return lrate

def build_model():
    model = Sequential()
    model.add(Dense(32, input_shape=(1,), activation='relu'))
    model.add(Dense(1, activation='linear'))
    adam = Adam(lr=0.5)
    model.compile(loss='mse', optimizer=adam)
    return model

model = build_model()
lrate = LearningRateScheduler(step_decay)
callback_list = [lrate]

for ep in range(20):
    X_train = np.array(random.sample(X, 10))
    y_train = np.sin(X_train)
    X_train = np.reshape(X_train, (-1, 1))
    y_train = np.reshape(y_train, (-1, 1))
    model.fit(X_train, y_train, batch_size=2, callbacks=callback_list,
              epochs=1, verbose=2)
In this example, the LearningRateScheduler does not change the learning rate at all, because each iteration of ep calls fit() with epochs=1, so the scheduler always sees the same epoch index. The learning rate therefore stays constant (1.0, according to step_decay). In fact, instead of setting epochs > 1 directly, I have to use an outer loop as shown in the example, and inside each loop I run just 1 epoch. (This is the case when I implement deep reinforcement learning, instead of supervised learning.)
My question is how to set an exponentially decaying learning rate in my example, and how to get the learning rate in each iteration of ep.
The function you pass to LearningRateScheduler can actually take two arguments.
According to Keras documentation, the scheduler is
a function that takes an epoch index as input (integer, indexed from
0) and current learning rate and returns a new learning rate as output
(float).
So, basically, simply replace your initial_lrate with the function parameter, like so:
def step_decay(epoch, lr):
    # initial_lrate = 1.0  # no longer needed
    drop = 0.5
    epochs_drop = 2.0
    lrate = lr * math.pow(drop, math.floor((1+epoch)/epochs_drop))
    return lrate
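As for the second part of the question (how to get the learning rate in each iteration of ep), you can read it back from the optimizer through the backend after each fit() call; a minimal sketch, assuming the Keras version used in the question:
from keras import backend as K

for ep in range(20):
    model.fit(X_train, y_train, batch_size=2, callbacks=callback_list,
              epochs=1, verbose=2)
    # the scheduler has written the new rate into the optimizer; read it back
    print('lr after iteration', ep, '=', float(K.get_value(model.optimizer.lr)))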
The actual function you implement is not exponential decay (as you mention in your title) but a staircase function.
Also, you mention your learning rate does not change inside your loop. That's true because you set model.fit(..., epochs=1, ...) and epochs_drop = 2.0 at the same time. I am not sure whether this is what you intended; you are providing a toy example, so it's not clear.
I would like to add the more common case where you don't mix a for loop with fit() and just provide a different epochs parameter in your fit() function. In this case you have the following options:
First of all, Keras provides decay functionality itself in the predefined optimizers. For example, in your case with Adam(), the actual code is:
lr = lr * (1. / (1. + self.decay * K.cast(self.iterations, K.dtype(self.decay))))
which is not exactly exponential either, and it's somewhat different from TensorFlow's. Also, it is obviously only applied when decay > 0.0.
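Using this built-in decay only requires passing a non-zero decay argument when constructing the optimizer; a minimal sketch (the decay value here is an arbitrary example):
from keras.optimizers import Adam

# lr is multiplied by 1/(1 + decay*iterations) on every batch
adam = Adam(lr=0.5, decay=1e-4)
model.compile(loss='mse', optimizer=adam)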
To follow the TensorFlow convention of exponential decay you should implement:
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
Depending on your needs, you can either implement a Callback subclass and define the update function within it (see the steps below), or use LearningRateScheduler, which is exactly that with some extra checking: a Callback subclass that updates the learning rate at the end of each epoch.
If you want finer-grained handling of your learning-rate policy (per batch, for example), you have to implement your own subclass, since as far as I know there is no built-in callback for this task. The good part is that it's super easy:
Create a subclass
class LearningRateExponentialDecay(Callback):
and add the __init__() function, which will initialize your instance with all needed parameters and also create a global_step variable to keep track of the iterations (batches):
def __init__(self, init_learning_rate, decay_rate, decay_steps):
    self.init_learning_rate = init_learning_rate
    self.decay_rate = decay_rate
    self.decay_steps = decay_steps
    self.global_step = 0
Finally, add the actual function inside the class:
def on_batch_begin(self, batch, logs=None):
    # decay from the initial rate (the TensorFlow convention), otherwise the
    # decay would compound; note ** not ^, since ^ is bitwise XOR in Python
    decayed_learning_rate = self.init_learning_rate * self.decay_rate ** (
        self.global_step / self.decay_steps)
    K.set_value(self.model.optimizer.lr, decayed_learning_rate)
    self.global_step += 1
The really cool part is that if you want the above subclass to update every epoch instead, you can use on_epoch_begin(self, epoch, logs=None), which conveniently has epoch as a parameter in its signature. That case is even easier: you can skip global_step altogether (no need to keep track of it, unless you want a fancier way to apply your decay) and use epoch in its place.
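Putting the fragments above together, a complete minimal sketch of the per-batch callback and its usage (the class name and constructor arguments come from the fragments above; the hyperparameter values are arbitrary examples):
from keras import backend as K
from keras.callbacks import Callback

class LearningRateExponentialDecay(Callback):
    # applies lr = init_lr * decay_rate ** (global_step / decay_steps) per batch
    def __init__(self, init_learning_rate, decay_rate, decay_steps):
        self.init_learning_rate = init_learning_rate
        self.decay_rate = decay_rate
        self.decay_steps = decay_steps
        self.global_step = 0

    def on_batch_begin(self, batch, logs=None):
        decayed = self.init_learning_rate * self.decay_rate ** (
            self.global_step / self.decay_steps)
        K.set_value(self.model.optimizer.lr, decayed)
        self.global_step += 1

# global_step keeps counting across repeated fit() calls, which is exactly
# what the outer training loop in the question needs
lr_decay = LearningRateExponentialDecay(init_learning_rate=0.5,
                                        decay_rate=0.5,
                                        decay_steps=100)
model.fit(X_train, y_train, batch_size=2, epochs=1, verbose=2,
          callbacks=[lr_decay])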

What is the default kernel_initializer in Keras?

The user manual shows the different kernel_initializer options:
https://keras.io/initializers/
Their main purpose is to initialize the weight matrices in the neural network.
Does anyone know what the default initializer is? The documentation doesn't say.
Usually, it's glorot_uniform by default. Different layer types might have different default kernel_initializers. When in doubt, just look at the source code. For example, for the Dense layer:
class Dense(Layer):
    ...
    def __init__(self, units,
                 activation=None,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
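If you want something other than the default, just pass the initializer explicitly when constructing the layer; a minimal sketch:
from keras.layers import Dense
from keras import initializers

layer = Dense(64, kernel_initializer='he_normal')  # by name...
layer = Dense(64, kernel_initializer=initializers.RandomNormal(stddev=0.01))  # ...or as a configured instance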
For glorot_uniform, Keras uses Glorot initialization with a uniform distribution on [-r, r], where:
r = √(3/fan_avg)
fan_avg = (fan_in + fan_out) / 2
fan_in = number of inputs to the layer
fan_out = number of neurons in the layer
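Equivalently r = √(6/(fan_in + fan_out)); a quick sketch checking that glorot_uniform samples stay inside [-r, r] (the layer shape is an arbitrary example):
import numpy as np
from keras import backend as K
from keras.initializers import glorot_uniform

fan_in, fan_out = 32, 64
r = np.sqrt(3.0 / ((fan_in + fan_out) / 2.0))  # = sqrt(6/(fan_in+fan_out)) = 0.25

W = K.eval(glorot_uniform()(shape=(fan_in, fan_out)))
print(W.min() >= -r and W.max() <= r)          # True: samples lie in [-r, r]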

MATLAB: dynamic variable definitions

For a numerical simulation in MATLAB I have parameters defined in an .m file.
% Parameters as simple definitions
amb.T = 273.15+25;          % ambient temperature [K]
amb.P = 101325;             % ambient pressure [Pa]
combustor.T = 273.15+800;   % [K]
combustor.P = 100000;       % [Pa]
combustor.lambda = 1.1;
fuel.x.CH4 = 0.5;           % [0..1]
fuel.n = 1;
air.x.O2 = 0.21;
% more complex definitions consisting of other params
air.P = combustor.P;
air.T = amb.T;
air.n = fuel.x.CH4 * 2 * fuel.n * combustor.lambda / air.x.O2;
Consider this set the 'default' definitions. For running one simulation, these definitions work fine.
It gets more complicated when I want to change one of these parameters programmatically for a parameter study (examining the effect of changing parameters on the results), that is, performing multiple simulations in a for loop. In the script performing this study I want to change the definitions of several parameters beforehand, i.e. overwrite the default definitions. Is there a way to do this without touching the default definitions in code (commenting them out / literally overwriting them)? It should be possible to change any parameter in the study-performing script while picking up the remaining default definitions from the listing above (or the other way round).
Let me illustrate the problem with an example: if I want to vary combustor.lambda (say, running from 0.9 to 1.3), the field air.n has to be evaluated again for the change to take effect in the actual simulation. I could evaluate the listing again, but then I would lose the study-defined combustor.lambda to the default one.
I am thinking about these solutions, but I cannot work out how to implement them:
Use references/handles in such a way that the struct fields only hold the definitions, not the actual values. This would allow changing default definitions before 'parsing' the whole struct to get the actual values.
Have a function evaluate the default definition set while taking preliminarily defined (non-default) definitions into account, i.e. skipping those lines of the default definition set during evaluation.
Any OOP approach. Of course, this is not limited to struct data types, but on the other hand, maybe there are useful functions for structs?
Edit:
The purpose of the default set is to leave the programmer as free as possible in choosing the varying parameters, with all other parameters keeping their default definitions, which can be independent (= values) as well as dependent (= equations like air.n).
% one default parameter set
S = struct('T', 25, 'P', 101000, 'lambda', .5, 'fuel', .5);
GetNByLambda = @(fuel, lambda) fuel * 2 * lambda;
T = struct('P', S.P, 'n', GetNByLambda(S.fuel, S.lambda));
% add more sets
S(end+1) = struct('T', 200, 'P', 10000, 'lambda', .8, 'fuel', .7);
T(end+1) = struct('P', S(end).P, 'n', GetNByLambda(S(end).fuel, S(end).lambda));
% iterate over parameter sets
for ii = 1:length(S)
    disp(S(ii))
    disp(T(ii))
end