Pause JModelica and Pass Incremental Inputs During Simulation - modelica

Hi Modelica Community,
I would like to run two models in parallel in JModelica, but I'm not sure how to pass variables between the models. One model is a Python model and the other is an EnergyPlusToFMU model.
The examples in the JModelica documentation have the inputs for the full simulation period defined before the model is simulated. I don't understand how one would configure a model that pauses for inputs, which is a key feature of FMUs and co-simulation.
Can someone provide me with an example or piece of code that shows how this could be implemented in JModelica?
Do I put the simulate command in a loop? If so, how do I handle warm-up periods and initialization without losing data from prior timesteps?
Thank you for your time,
Justin

Late answer, but in case it is picked up by others...
You can indeed put the simulation into a loop; you just need to keep track of the state of your system so that you can re-initialize it at every iteration. Consider the following example:
Ts = 100   # step length
x_k = x_0  # initial state: dict of '_start_*' parameter names -> values
for k in range(100):
    # Do whatever you need to get your input here
    u_k = ...
    FMU.reset()
    FMU.set(list(x_k.keys()), list(x_k.values()))
    sim_res = FMU.simulate(
        start_time=k*Ts,
        final_time=(k+1)*Ts,
        input=u_k
    )
    # Grab the state at the end of this step
    x_k = get_state(FMU, sim_res, -1)
Now, I have written a small function to grab the state, x_k, of the system:
# Get state names and their values at the given result index
def get_state(fmu, results, index):
    # Identify states as variables with a "_start_" parameter
    identifier = "_start_"
    keys = fmu.get_model_variables(filter=identifier + "*").keys()
    # Now, loop through all states, get their value and put it in x
    x = {}
    for name in keys:
        x[name] = results[name[len(identifier):]][index]
    # Return the state as a dict keyed by the "_start_" parameter names
    return x
This relies on compiling the model with the "state_initial_equations": True compiler option.
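For reference, here is a minimal sketch of how such an FMU might be compiled and loaded with that option; the model class MyModel and file MyModel.mo are placeholders:
from pymodelica import compile_fmu
from pyfmi import load_fmu

# state_initial_equations gives every state x a settable _start_x parameter,
# which is what allows the re-initialization trick above.
fmu_path = compile_fmu('MyModel', 'MyModel.mo',
                       compiler_options={'state_initial_equations': True})
FMU = load_fmu(fmu_path)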

Related

Use unit-testing framework in matlab to test data

I would like to test datasets with a variable number of values. Each value should be tested, and I would like a standardized output that I can read back in afterwards. I am working in MATLAB.
Example:
The use case would be a dataset which includes, e.g., 14 values that need to be tested. The comparison itself is already completely handled by my implementation. So I have 14 values, which I would like to compare against some tolerance or similar and get an output like
1..14
ok value1
ok value2
not ok value3
...
ok value14
Current solution:
I am trying to use the unit-testing framework and the corresponding TAPPlugin, which would produce exactly such an output (TAP), one entry per unit test. My main problem is that the unit-testing framework does not take any input parameters. I already read about parameterization, but I do not see how it helps me. I could put the values into a parameter as a list, but how do I pass them there? As far as I know, the unit-test class does not allow additional parameters during initialization, so I cannot include this in the program the way I want.
I would like to avoid having to format the TAP output myself, because it already exists, but only for unit-test objects. Unfortunately, I cannot see how to implement this cleanly.
How can I implement Test Anything Protocol (TAP) output in MATLAB when I have a variable number of comparisons (values)?
If you are using class-based unit tests, you could access their properties from outside the test.
So let's say you have the following unit test:
classdef MyTestCase < matlab.unittest.TestCase
    properties
        property1 = false;
    end
    methods(Test)
        function test1(testCase)
            verifyTrue(testCase, testCase.property1)
        end
    end
end
You could access and change properties from outside:
test = MyTestCase;
test.property1 = true;
test.run;
This should now succeed, since you changed the property from false to true. If you want a more flexible approach, you could have a list of values and a list of requirements and then cycle through both in one of the test functions:
properties
    variables = [];
    requirements = [];
end
methods(Test)
    function test1(testCase)
        for i = 1:length(testCase.variables)
            verifyEqual(testCase, testCase.variables(i), testCase.requirements(i))
        end
    end
end
Now you would set variables and requirements:
test = MyTestCase;
test.variables = [1,2,3,4,5,6];
test.requirements = [1,3,4,5,5,6];
test.run;
Please note that, in theory, you should not have multiple assertions in one test.
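Since the question specifically asks about TAP, here is a sketch (not part of the original answer) of how the test class above could be run through the TAPPlugin; the output file name results.tap is arbitrary:
import matlab.unittest.TestRunner
import matlab.unittest.TestSuite
import matlab.unittest.plugins.TAPPlugin
import matlab.unittest.plugins.ToFile

suite = TestSuite.fromClass(?MyTestCase);
runner = TestRunner.withTextOutput;
runner.addPlugin(TAPPlugin.producingOriginalFormat(ToFile('results.tap')));
result = runner.run(suite);
Note that the runner builds fresh instances of MyTestCase, so properties set on a manually created instance (as above) are not visible here; to combine both, you would need default property values or the framework's parameterization support.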

Doc2Vec Clustering with kmeans for a new document

I have a corpus trained with Doc2Vec as follows:
d2vmodel = Doc2Vec(vector_size=100, min_count=5, epochs=10)
d2vmodel.build_vocab(train_corpus)
d2vmodel.train(train_corpus, total_examples=d2vmodel.corpus_count, epochs=d2vmodel.epochs)
Using the vectors, the documents are clustered with kmeans:
kmeans_model = KMeans(n_clusters=NUM_CLUSTERS, init='k-means++', random_state = 42)
X = kmeans_model.fit(d2vmodel.docvecs.vectors_docs)
labels=kmeans_model.labels_.tolist()
I would like to use the k-means to cluster a new document and know which cluster it belongs to. I've tried the following but I don't think the input for predict is correct.
from numpy import array
testdocument = gensim.utils.simple_preprocess('Microsoft excel')
cluster_label = kmeans_model.predict(array(testdocument))
Any help is appreciated!
Your kmeans_model expects a feature vector similar to those it was given during its original clustering, not the list of string tokens you'll get back from gensim.utils.simple_preprocess().
In fact, you want to use the Doc2Vec model to take such lists-of-tokens and turn them into model-compatible vectors, via its infer_vector() method. For example:
testdoc_words = gensim.utils.simple_preprocess('Microsoft excel')
testdoc_vector = d2vmodel.infer_vector(testdoc_words)
cluster_label = kmeans_model.predict(testdoc_vector.reshape(1, -1))  # predict expects a 2D array
Note that both Doc2Vec training and inference work better on documents that are at least tens of words long (not tiny two-word phrases like your test here), and that inference may also often benefit from a larger-than-default optional epochs parameter (especially with short documents).
Note also that your test documents should really be preprocessed and tokenized exactly the same way as your training data, so if some other process was used to prepare train_corpus, use that same process for post-training documents. (Words the Doc2Vec model does not recognize, because they weren't present during training, are silently ignored, so a mistake like applying a different style of case-flattening at inference time will weaken results a lot.)
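Putting those notes together, a small sketch (the epochs value of 50 is arbitrary, and in gensim 3.x the keyword is steps rather than epochs):
import gensim

# Re-use one preprocessing function for both training and inference
def preprocess(text):
    return gensim.utils.simple_preprocess(text)

testdoc_words = preprocess('Microsoft excel')
# More inference passes often help, especially on short documents
testdoc_vector = d2vmodel.infer_vector(testdoc_words, epochs=50)
cluster_label = kmeans_model.predict(testdoc_vector.reshape(1, -1))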

Avoiding eval in matlab function

I use the symbolic toolbox in matlab to generate some very long symbolic expressions. Then I use matlabFunction to generate a function file.
Say there are three parameters: p1, p2 and p3.
I have a cell with strings {'p1', 'p2', 'p3'}.
In the derivation of the model I generate symbolic variables p1, p2 and p3 out of them using eval in a loop and stack them in a vector par.
Then, when calling matlabFunction, I specify par as an input.
Moreover, I save the cell string in a .mat file.
Then, when I want to simulate this model, I can construct the parameter array using that cell of strings from the .mat file, picking the needed values out of the 30 available parameters.
Advantages: No need to keep track of the different parameters if I add one to the model. I can change the order, mess around, and older models still work.
Disadvantage:
Turning things into a function file leads to this error (psi is one of the parameters):
Error: File: f_derive_model.m Line: 96 Column: 5
"psi" previously appeared to be used as a function or
command, conflicting with its use here as the name of a
variable.
A possible cause of this error is that you forgot to
initialize the variable, or you have initialized it
implicitly using load or eval.
Apparently some unnecessary checking is going on, because the variable will be initialized in an eval statement.
Question: How can I avoid eval but keep the list of parameters independent from the rest of the model code?
Code deriving the long equations:
% Model parameters
mdl.parameters = {'mp','mb','lp','lb','g','d','mP','mM','k','kt'};
par = [];
for i = 1:length(mdl.parameters)
    eval(strcat(mdl.parameters{i}, '=sym(''', mdl.parameters{i}, ''');'));
    eval(sprintf(['par = [par;' mdl.parameters{i} '];']));
end
%% Calculate stuff
matlabFunction(MM,'file',[modelName '_mass'],'vars',{par},'outputs',{'M'});
Code using the generated file:
getparams
load('m3d_1')
par = [];
for i = 1:length(mdl.parameters)
    eval(sprintf(['par = [par;params.' mdl.parameters{i} '];']));
end
See how, as long as I assign the correct value to, for example, params.mp, it always ends up in the entry of par corresponding to the symbolic variable mp. I do not want to lose that and have to keep track of the ordering, nor do I want to call my functions with all the parameters one by one.
Actually, I see nothing wrong with your approach, even though the general consensus is that it is better to avoid the eval function. An alternative would be the assignin function, used as follows:
% use 'caller' instead of 'base' if this code runs within a function
for i = 1:numel(mdl.parameters)
    var_name = mdl.parameters{i};
    assignin('base', var_name, sym(var_name));
end
In the second case (the one concerning the par variable) I would instead use the getfield function:
par_len = numel(mdl.parameters);
par = cell(par_len,1);
for i = 1:par_len
    par{i} = getfield(params, mdl.parameters{i});
end
or, alternatively, this approach:
par_len = numel(mdl.parameters);
par = cell(par_len,1);
for i = 1:par_len
    par{i} = params.(mdl.parameters{i});
end
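If the goal is specifically to rebuild the two par vectors without eval, a variation on the dynamic-field idea (a sketch, assuming the mdl.parameters cell and params struct from the question) could look like this:
% Symbolic parameter vector for matlabFunction, in the order of mdl.parameters
sym_cells = cellfun(@sym, mdl.parameters, 'UniformOutput', false);
par = vertcat(sym_cells{:});

% Numeric parameter vector in the same order, taken from the params struct
par_values = cellfun(@(name) params.(name), mdl.parameters(:));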

Avoid repeat calling simulation/function with same arguments

This is a general algorithm question, but my primary environment is Matlab.
I have a function
out = f(arg1, arg2, ...)
which takes a long time to execute and is expensive to compute (i.e. cluster time). A given argument argn can be a string, an integer, a vector, or even a function handle.
For this reason, I want to avoid calling f(args) for the same argument values. Inside my program, this can occur in ways that are not necessarily controllable by the programmer.
So, I want to call f() once for each possible value of args, and save the results to disk. Then, whenever it is called the next time, check if there is currently a result for those argument values. If so, I would load it from disk.
My current idea is to create a cell variable with one row for each function call. The first column holds out; columns 2:N hold the values of each argn, and I would check each of them for equivalence individually.
Since the variable types of the arguments vary, how would I go about doing this?
Is there a better algorithm?
More generally, how do people deal with saving simulation results to disk and storing metadata? (other than cramming everything into a filename!)
You can implement a function that looks something like this:
function result = myfun(input)
    persistent cache
    if isempty(cache)
        cachedInputs = [];
        cachedOutputs = [];
        cache = {cachedInputs, cachedOutputs};
    end
    [isCached, idx] = ismember(input, cache{1});
    if isCached
        result = cache{2}(idx);
    else
        result = doHardThingOnCluster(input);
        cache{1}(end+1) = input;
        cache{2}(end+1) = result;
    end
end
This simple example assumes that your inputs and outputs are both scalar numbers that can be stored in an array. If you have to deal with strings, or anything more complicated, you could use a cell array for caching rather than an array. Or in fact, maybe a containers.Map might be even better. Alternatively, if you have to cache really massive results, you might be better off saving it to a file and caching the file name, then loading the file in if you find it's been cached.
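As a concrete illustration of the containers.Map idea (a sketch, not the original answer's code; it assumes the expensive function is called f and that every argument is numeric, a char array, or a function handle):
function out = cached_f(varargin)
    % In-memory cache keyed by a text serialization of the arguments
    persistent cache
    if isempty(cache)
        cache = containers.Map('KeyType', 'char', 'ValueType', 'any');
    end
    % Build a key from the arguments (adapt this step for other types)
    keyParts = cell(1, nargin);
    for k = 1:nargin
        a = varargin{k};
        if isa(a, 'function_handle')
            keyParts{k} = func2str(a);
        else
            keyParts{k} = mat2str(a);
        end
    end
    key = strjoin(keyParts, '|');
    if isKey(cache, key)
        out = cache(key);        % cache hit: reuse the stored result
    else
        out = f(varargin{:});    % cache miss: do the expensive call
        cache(key) = out;
    end
end
To persist results across MATLAB sessions, the same key could double as (part of) a file name passed to save/load instead of a Map entry.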
Hope that helps!

Changing scope of a parameter in a SimBiology reaction from the command line - MATLAB SimBiology Toolbox

I am using SimBiology to construct a model; actually, I am reading the model in from an SBML file. Here is what I get after I load the model:
m1
SimBiology Model - Model1
Model Components:
Compartments: 1
Events: 0
Parameters: 200
Reactions: 200
Rules: 0
Species: 100
However,
m1.Parameters gives
ans =
Empty matrix: 0-by-1
I believe the reason is that all the parameters have "Reaction" scope. How can I make all of them "Model" scoped from the command line?
Also, I was not able to access the parameters (value or scope) through the Reaction object. How do I access a parameter's value and scope if it is scoped to a Reaction?
Any help here would be much appreciated.
Thanks!
Ayesha
P.S. - I also posted the same enquiry on the Mathworks Newsreader (user forum). Hope someone replies from there or here.
Pramod also posted an answer on the user forum, but I wanted to copy it here for completeness.
-Arthur
The following code illustrates how to change the scope of a parameter from reaction to model.
% Load lotka.
m1 = sbmlimport('lotka')
% There are no Parameters at the model level
m1.Parameters
% Copy the parameters from reactions to the model
for i = 1:numel(m1.Reactions)
    p = m1.Reactions(i).KineticLaw.Parameters;
    copyobj(p, m1)
    delete(p)
end
m1.Parameters
Note that if there is more than one parameter with the same name there will be an error because the model requires unique names for the parameters.
As shown in the code above, you can access a reaction-scoped parameter via
reaction.KineticLaw.Parameters
You probably don't want to change the scope of the parameters to Model just to view them - that would change the structure of the model and potentially make it impossible to simulate.
You can view all the parameters in a model using the command
sbioselect(m1, 'Type', 'parameter')
When a parameter is scoped to a Reaction rather than the model, its parent is the Reaction's KineticLaw, rather than the Reaction itself. So if r is your reaction of interest, you can get its parameters with r.KineticLaw.Parameters.
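For example (a sketch, assuming a freshly imported lotka model as above; the reaction and parameter indices are arbitrary):
r = m1.Reactions(1);
p = r.KineticLaw.Parameters;   % parameters scoped to this reaction
p(1).Name                      % inspect the name
p(1).Value                     % read the value
p(1).Value = 10;               % change it in place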
Hope that helps!