My objective is to create a 1D/2D lookup table that can read a variable-sized array from a JSON file without having to specify a statically sized Modelica array parameter.
I started out by trying to extend ExternData to implement a custom table, based on a suggestion in a GitHub issue: https://github.com/modelica-3rdparty/ExternData/issues/34#issuecomment-718552210
The steps outlined were:
Create your own copy of a CombiTable, modified to accept a data-access object and a table name (see the code snippet in the linked issue comment).
Create a duplicate of the ExternalCombiTable1D external object, which instead references your own CombiTable data object.
Create a C function which reads the data directly from the JSON file, stores it into a table object, and passes it to the MSL function ModelicaStandardTables_CombiTable1D_init2 (a hedged sketch follows).
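For step 3, a minimal sketch of such a C function, assuming ExternData's bundled parson JSON parser and the init2 signature declared in ModelicaStandardTables.h (check the header shipped with your MSL version; error handling mostly omitted, all names hypothetical):

#include <stdlib.h>
#include "parson.h"                    /* JSON parser bundled with ExternData */
#include "ModelicaStandardTables.h"

/* Hypothetical constructor for the external object: reads a 2D JSON array
   named arrayName from fileName and hands it to the MSL table code. */
void* jsonCombiTable1D_init(const char* fileName, const char* arrayName,
                            int* columns, size_t nCols,
                            int smoothness, int extrapolation) {
    JSON_Value* root = json_parse_file(fileName);
    JSON_Array* rows = json_object_dotget_array(
        json_value_get_object(root), arrayName);
    size_t nRow = json_array_get_count(rows);
    size_t nCol = json_array_get_count(json_array_get_array(rows, 0));
    double* table = (double*)malloc(nRow * nCol * sizeof(double));
    for (size_t i = 0; i < nRow; i++) {
        JSON_Array* row = json_array_get_array(rows, i);
        for (size_t j = 0; j < nCol; j++) {
            table[i * nCol + j] = json_array_get_number(row, j);
        }
    }
    /* "NoName" marks the table as passed in memory rather than read from
       file by the MSL itself. */
    void* tableID = ModelicaStandardTables_CombiTable1D_init2(
        "NoName", "NoName", table, nRow, nCol,
        columns, nCols, smoothness, extrapolation, 0 /* verbose */);
    free(table);            /* assumes the default MSL build, which copies
                               the table data (NO_TABLE_COPY not defined) */
    json_value_free(root);
    return tableID;
}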
I've implemented all three steps to make a custom CombiTable1D block which populates a dynamic sized table from ExternData JSON functions. Package code below:
https://github.com/vsskanth/ExternData.CustomTable
In this package, there are 3 experiments of relevance to this issue. All experiments compile, but only one works:
ExternData.Examples.JSONTestVariableArrayBroken.mo - Single instance of custom CombiTable1D - does not initialize
ExternData.Examples.JSONTestArrayCombi2DBroken.mo - Custom CombiTable1D and instance of Modelica CombiTable2D - does not initialize
ExternData.Examples.JSONTestVariableArrayWorking.mo - Custom CombiTable1D and a couple of instances of Modelica CombiTable1D - works as expected
I am trying to find out why my custom CombiTable1D implementation initializes and runs only when there is at least one instance of the Modelica CombiTable1D present in the experiment. I made sure to include ModelicaStandardTables.h and link the ModelicaStandardTables.lib library in my own implementation, and I checked for warnings in dsbuild.txt; compilation seems to be fine.
For some reason, the constructor function for the custom ExternalCombiTable1D external object (ExternData.Types.ExternalCombiTable1D) does not return, and hence the custom CombiTable1D block (ExternData.Tables.CombiTable1D) cannot initialize when there are no instances of the Modelica CombiTable1D in the model.
I would appreciate any thoughts on why this is happening and how to overcome it.
IDE - Dymola 2021x
OS - Windows 10
Compiler - Visual Studio 2019
@tbeu has been generous enough to add JSON support to https://github.com/tbeu/ModelicaTableAdditions, which renders this issue moot since I can just use that package with ExternData.
It's still interesting to know why this happens, though. It seems like a Dymola translation bug.
Context: I have a huge Simulink model that is going to be used for automated simulations on Debian 10. It therefore has to be built as standalone C code using MATLAB Coder; this code is then called to start the simulation.
What I need: I need a way to initialize my built model with ~500 parameters. These change with each simulation run and are stored in an SQLite file. The goal is to have the parameters written to the database, then start the model, which reads the parameters from SQLite during initialization (presumably using the InitFcn model callback, although I'm open to alternatives).
What I have tried:
Direct SQL interface: I tried to use a direct MATLAB-SQL interface such as JDBC (since I don't have access to the Database Toolbox), but those are not supported for code generation.
Write a C function that reads the SQLite file, then call it during initialization in the InitFcn callback using coder.ceval like this:
data = 0;                                                        % preallocate output
err = coder.ceval('read_function', 4, 2, 12, coder.wref(data)); % C function writes into data
parameter = data;
The problem here is that coder.wref is not supported in MATLAB execution and therefore doesn't work in the InitFcn. (Please correct me if I'm wrong.)
It only seems to work inside a MATLAB Function block:
Error evaluating 'InitFcn' callback of block_diagram 'Model'.
Caused by:
The coder.wref function is not supported in MATLAB.
So my problem with the second approach is that I can't call the C function during initialization.
Using a MATLAB Function block to read the parameters isn't really an option, since I would have to route all the signals out, which makes maintaining and further developing the model really hard. Also, my suspicion is that the model would not even run, because the parameters are needed to initialize the model.
Questions:
Is there a way to make one of the above approaches work? If yes, how? Where is my mistake?
Is there another (simpler) option to pass the data as an array or struct to my model?
The database looks like this:
Identifier               Default
latitude                 52.5
longitude                13.4
electricity_consumption  4000.0
ventilation_stream       50.0
PV_peak                  30.0
PV_orientation           0.0
no_vessels               28.0
heatpump_exists          1.0
hotwater_consumption     1000.0
...
After having spent so much time on this issue, I would like to share my experience on this problem:
SQLite: This approach did not work for me because the direct SQL-MATLAB interfaces are not supported for code generation.
It is in fact possible to write a C function that reads from SQLite and to call it in a MATLAB Function block via coder.ceval, which allows reading in a signal during simulation (a sketch of such a reader follows below). This works for code generation (Simulink Coder) as well. However, it will not work for initialization (see question).
So none of my original approaches ended up working.
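For reference, a reader of this kind might look roughly like this (a sketch against the standard sqlite3 C API, not the exact function from the question; the table name "parameters" is an assumption):

#include "sqlite3.h"

/* Look up the Default value for one Identifier.
   Returns 0 (SQLITE_OK) on success, a SQLite error code otherwise. */
int read_parameter(const char* dbPath, const char* identifier, double* value)
{
    sqlite3* db = NULL;
    sqlite3_stmt* stmt = NULL;
    int rc = sqlite3_open(dbPath, &db);
    if (rc != SQLITE_OK) { sqlite3_close(db); return rc; }

    /* "Default" is an SQL keyword, so the column name must be quoted. */
    rc = sqlite3_prepare_v2(db,
        "SELECT \"Default\" FROM parameters WHERE Identifier = ?1;",
        -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, identifier, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            *value = sqlite3_column_double(stmt, 0);
            rc = SQLITE_OK;
        } else {
            rc = SQLITE_NOTFOUND;   /* identifier not present */
        }
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return rc;
}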
Workaround: I ended up switching to an approach based on the Simulink RSIM target, which generates code (also for Linux) and can be parametrized via a .mat file containing all the parameters. The .mat file can be modified to update parameters; this required some additional code to automate that step (a sketch follows below). Also, the model configuration for RSIM is a bit tricky.
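The update step can be scripted roughly like this, assuming the documented rsimgetrtp/rsimsetrtpparam workflow (the model and parameter names are placeholders):

% On the host, after building the RSIM target of 'myModel':
rtp = rsimgetrtp('myModel');                  % get the run-time parameter set
rtp = rsimsetrtpparam(rtp, 'latitude', 52.5); % update individual parameters
rtp = rsimsetrtpparam(rtp, 'PV_peak', 30.0);
save('params.mat', 'rtp');

% On the target (e.g. the Debian machine), pass the file to the executable:
%   ./myModel -p params.mat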
I'm porting a large Simulink model from Simulink R2010a → R2017b.
The main model is basically a glue-layer for many interwoven reference models. My objective is to generate a standalone executable out of this main model using Coder.
Parameter tunability in this context is not done via the Signals and Parameters section on the Optimization tab in the Model Configuration Parameters dialog (as is the case in stand-alone models), but rather, via constructing Simulink.Parameter objects in the base workspace, and referencing those in the respective referenced models, or in their respective model workspaces.
Now, AFAIK, in R2010a it was enough to set
new_parameter.RTWInfo.StorageClass = 'Auto';
new_parameter.RTWInfo.CustomStorageClass = 'Define';
to make the parameter non-tunable and convert it into a #define in the generated code. In R2017b, this is no longer allowed; the StorageClass must be 'Custom' if you set a non-empty CustomStorageClass:
new_parameter.CoderInfo.StorageClass = 'Custom'; % <- can't be 'Auto'
new_parameter.CoderInfo.CustomStorageClass = 'Define';
But apparently, this does not make the parameter non-tunable:
Warning: Parameter 'OutPortSampleTime' of '[...]/Rate Transition1' is non-tunable but refers to tunable variables (Simulation_compiletimeConstant (base workspace))
I can't find anything in the R2017b documentation on making parameters non-tunable programmatically; I can only find how to do it in stand-alone models via the dialog, but that's not what I want here.
Can anyone point me in the right direction?
NOTE: Back in the day, Simulink Coder was called Real-Time Workshop (well, Real-Time Workshop split into Simulink Coder and several other things), hence the difference between RTWInfo and CoderInfo. Note that RTWInfo still works in R2017b, but it issues a warning and gets converted into CoderInfo automatically.
In generated code it should appear as #define, the way you specified it.
https://www.mathworks.com/help/rtw/ug/choose-a-built-in-storage-class-for-controlling-data-representation-in-the-generated-code.html
Btw, yes, it's a bit confusing: in an M-file you specify CustomStorageClass = 'Define';, in the GUI you specify the storage class as Define (custom), but the documentation says Storage Class as Defined.
I am not sure why the warning about tunability shows up.
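For reference, the complete construction might look like this (the parameter name and value are placeholders):

% Base-workspace parameter emitted as a #define in the generated code:
Simulation_compiletimeConstant = Simulink.Parameter;
Simulation_compiletimeConstant.Value = 1.0;
Simulation_compiletimeConstant.CoderInfo.StorageClass = 'Custom';
Simulation_compiletimeConstant.CoderInfo.CustomStorageClass = 'Define';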
As far as I know, the current usage of a compiler plugin is to define an attribute for the compiler to recognize; the compiler then invokes the code defined and registered in the plugin.
I am wondering whether it is possible to build a compiler plugin that has a post-processor: I could somehow first register the structs encountered by proc_macro_derive in a data structure within the plugin, and the post-processor could then generate code according to that data structure.
My intention is to generate a symbol table from the derived structs, so I can do some experiments with dynamic typing in Rust. I am not sure whether this can be achieved at compile time without manually registering the structs one by one at runtime.
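For illustration, the closest registration-based workaround I have found has this shape, using the third-party inventory crate (the crate choice and all names here are my assumption, not an established solution):

// Each #[derive(...)] expansion would emit an inventory::submit! block, so
// every derived struct registers itself; the "post-processing" then walks
// the registry at startup instead of inside the compiler.
// Requires inventory = "0.3" in Cargo.toml.

pub struct TypeEntry {
    pub name: &'static str,
}

inventory::collect!(TypeEntry);

// What the derive macro would expand to for `struct Foo`:
inventory::submit! {
    TypeEntry { name: "Foo" }
}

fn main() {
    // A runtime symbol table of all derived structs.
    for entry in inventory::iter::<TypeEntry> {
        println!("registered: {}", entry.name);
    }
}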
I need to implement a custom ResultHandler but I am confused about how to actually integrate my custom class into the software package.
I have read this: http://elki.dbs.ifi.lmu.de/wiki/HowTo/InvokingELKIFromJava but my question is: how are you meant to implement a custom result handler so that it shows up in the GUI?
The only way I can think of doing it is by extracting the elki.jar package and manually inserting my custom class into the source code, and then re-jarring the package. However I am fairly sure this is not the way it is meant to be done.
Also, in my result handler I need to output all the rows to a single text file, with the cluster each row belongs to displayed. Any tips on how I can achieve this?
There are two questions in here.
In order to make your class instantiable by the UIs (both MiniGUI and command line), the classes must implement our Parameterization API. There are essentially two choices to make your class instantiable:
Add a public constructor without parameters (the UI won't know how to set your parameters!)
Add an inner static class Parameterizer that handles parameterization
In order to add your class to autocompletion (the dropdown menu), the class must be discovered by the MiniGUI/CLI/other UIs. ELKI uses two methods of discovery:
for .jar files, it reads the META-INF/elki/interfacename service files. This is a classic service-loader approach; except that we also allow ordering instances.
for directories only, ELKI will also scan for all .class files, and inspect them. This is mostly meant for development time, to avoid having to update the service files all the time. For performance reasons, we do not inspect the contents of .jar files; these are expected to use service files.
You do not need your class to be in the dropdown menu; you can always type the full class name. If that does not work, adding the name to the service file will not help either: it means ELKI either cannot find the class at all or cannot instantiate it.
There is also a tutorial on implementing a custom result handler, but it does not discuss how to add it to the menu. In "development mode" - when having a folder with .class files - it will show up automatically.
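A minimal skeleton of the Parameterizer approach might look like this (sketched against the 0.7-era de.lmu.ifi.dbs.elki API; method and package names changed between ELKI versions, so check the ResultHandler interface of your release):

package example;

import de.lmu.ifi.dbs.elki.result.Result;
import de.lmu.ifi.dbs.elki.result.ResultHandler;
import de.lmu.ifi.dbs.elki.result.ResultHierarchy;
import de.lmu.ifi.dbs.elki.utilities.optionhandling.AbstractParameterizer;

public class ClusterWriter implements ResultHandler {
  @Override
  public void processNewResult(ResultHierarchy hier, Result newResult) {
    // Find Clustering results in the hierarchy (e.g. via ResultUtil), then
    // write one line per point with its cluster label to the output file.
  }

  /** Inner static Parameterizer class makes the handler instantiable
      (and configurable) from the UIs. */
  public static class Parameterizer extends AbstractParameterizer {
    @Override
    protected ClusterWriter makeInstance() {
      return new ClusterWriter();
    }
  }
}

To have it appear in the dropdown of a packaged jar, its fully qualified name would additionally go into the META-INF/elki/de.lmu.ifi.dbs.elki.result.ResultHandler service file mentioned above.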
I have created a couple of activities and stored them as XAML.
Opening them in the WorkflowDesigner works great, and I can execute them.
Now I would like to create a new Activity and add the activities I created to it.
Basically, I want to load them from XAML into the designer as part of another activity/flow.
I have tried adding my activities to the toolbox, but they render as DynamicActivity and (understandably) do not work.
Any suggestions?
Is it even possible?
/Jimmy
DynamicActivity and the toolbox were basically not designed to work together that way. The toolbox expects to deal with types, not class instances.
One thing you can do instead is implement IActivityTemplateFactory and return the DynamicActivity from its Create() method (a sketch follows below). But you will probably hit some really weird issues once you try to save a XAML file that contains such dynamic activities, because the designer does no special treatment for DynamicActivity: it will not be serialized as any kind of 'logical reference' to the XAML file it was created from.
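A sketch of that factory approach (the XAML path is a placeholder):

using System.Activities;
using System.Activities.Presentation;
using System.Activities.XamlIntegration;
using System.Windows;

public sealed class XamlActivityFactory : IActivityTemplateFactory
{
    public Activity Create(DependencyObject target)
    {
        // ActivityXamlServices.Load returns a DynamicActivity when the
        // root of the XAML document is an <Activity>.
        return ActivityXamlServices.Load(@"C:\Activities\MyActivity.xaml");
    }
}

You would then register the XamlActivityFactory type (not an instance) with the toolbox, e.g. via a ToolboxItemWrapper.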
Tim