ModelStructure/Outputs error when simulating a combined FMU container

I am trying to combine three FMUs into a single FMU that contains all three. Specifically, I have one FMU of a pandapower electricity network and two FMUs that are CSV files converted to FMUs using the PythonFMU tool. All of the FMUs have been checked with FMU Check, and they have been simulated together to verify that everything works fine.
I then use the FMPy tool to combine them and successfully export the final FMU.
However, when I try to validate this FMU, I get the following error:
ModelStructure/Outputs must have exactly one entry for each variable with causality="output".
Any idea what is wrong here?

Your problem seems to be fixed in https://github.com/CATIA-Systems/FMPy/issues/281#issuecomment-879092943
You should try to re-generate the containerized FMU with the development branch of FMPy.
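If you want to try the fix without waiting for a release, you can install FMPy straight from GitHub, e.g. pip install git+https://github.com/CATIA-Systems/FMPy.git@develop (the branch name is an assumption; check the repository). After re-generating the container, you can re-run the validation from Python. A minimal sketch, assuming the validate_fmu helper from fmpy.validation and a placeholder filename:

from fmpy.validation import validate_fmu

# list any validation problems of the regenerated container FMU
# ('combined.fmu' is a placeholder filename)
problems = validate_fmu('combined.fmu')
for problem in problems:
    print(problem)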

Related

Add input txt in FMU for co-simulation

I have an FMU for co-simulation, and I want to add a .txt file as input to this model via a CombiTable and then export it again as an FMU. My question is how I can achieve that, since OpenModelica cannot import FMUs for co-simulation.
There are several Modelica tools that allow for the re-export of FMUs, e.g. Dymola and SimulationX.
If you want to do it with open source software, you could export the CombiTable as a second FMU and create a containerized FMU out of these two FMUs with fmpy; see https://github.com/CATIA-Systems/FMPy/blob/master/tests/test_fmu_container.py for an example, and the sketch below.
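For reference, here is a rough sketch of what the containerized FMU could look like with fmpy's fmucontainer module. The class and field names follow the linked test file and may differ between FMPy versions; the filenames, instance names, and connector names here are placeholders:

from fmpy.fmucontainer import create_fmu_container, Configuration, Component, Connection, Variable

configuration = Configuration(
    components=[
        # the two FMUs to pack together (placeholder filenames)
        Component(filename='CombiTable.fmu', name='table'),
        Component(filename='Model.fmu', name='model'),
    ],
    connections=[
        # route the CombiTable output into the model input (connector names are assumptions)
        Connection('table', 'y', 'model', 'u'),
    ],
    variables=[
        # expose the model output at the container boundary
        Variable(type='Real', variability='continuous', causality='output',
                 name='y_out', mapping=[('model', 'y')]),
    ],
)

create_fmu_container(configuration, 'container.fmu')

The resulting container.fmu can then be imported like any other co-simulation FMU.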
I'm not sure if there are any tools that support such a way to pack FMUs.
You could, however, create a co-simulation SSP via OMEdit.
First, create another FMU that contains the CombiTable.
Then create a new SSP and add both of these FMUs.

Darknet model to ONNX

I am currently working with Darknet on YOLOv4, with one class.
I need to export those weights to ONNX format for TensorRT inference.
I've tried multiple techniques, such as using ultralytics to convert or going from TensorFlow to ONNX, but none seems to work. Is there a direct way to do it?
Check this GitHub repo: https://github.com/Tianxiaomo/pytorch-YOLOv4
By running the demo_darknet2onnx.py script, you'll be able to generate the ONNX model from the Darknet .cfg and .weights files.
Usage example:
python demo_darknet2onnx.py <cfgFile> <weightFile> <imageFile> <batchSize>
You can also choose the batch size for the inference calls of the converted model.
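Before feeding the exported model to TensorRT, it can be worth checking that the file is a well-formed ONNX graph. A small sketch, assuming the onnx Python package is installed; the filename is an assumption, since the script derives the output name from the network configuration:

import onnx

# load the exported model and run the structural checker
model = onnx.load('yolov4_1_3_416_416_static.onnx')  # placeholder filename
onnx.checker.check_model(model)
print('ONNX model is well-formed')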
The following repo exports YOLOv3 models from Darknet to ONNX for TensorRT inference. You can use it as a reference for your model.
https://github.com/jkjung-avt/tensorrt_demos/tree/master/yolo
You can convert Scaled-YOLO models (yolov4, yolov4-csp, yolov4x-mish, yolov4-P5, etc.) to ONNX, and they work perfectly fine.
https://github.com/linghu8812/tensorrt_inference

Run Simulink from Matlab function

I am running Simulink using FastRestart, as I need to start and stop the simulation multiple times while changing parameters. When I run Simulink from the main script, there are no problems. However, as soon as I make the script a function so that I can run it for different input data, I get an error that is clearly related to Simulink not seeing the MATLAB workspace within the function.
To be more precise, say sfile is my Simulink file; then I run the following lines AFTER having initialized all the variables I need in structures in MATLAB:
load_system(sfile);                             % load the model without opening it
set_param(sfile,'FastRestart','on');            % enable Fast Restart
set_param(sfile,'SaveFinalState','on');
set_param(sfile,'SaveCompleteFinalSimState','on');
set_param(sfile,'SimulationCommand','update');  % the error occurs here
At the last line, I get an error that Simulink does not recognize mdl.tStep (the time step), because mdl is not a recognized structure. In fact it is, and if I run Simulink from the main script, everything is fine.
Now, in the past, I would have used
options = simset('SrcWorkspace','current');
However, an expert I know has advised me against simset (as it may be deprecated in the future) and encouraged me to use set_param instead. I have looked up the options for set_param online, but I could not find the setting for the MATLAB workspace.
Any help would be greatly appreciated. Thank you in advance!
In many instances it is better to use the Model Workspace rather than the Base Workspace:
hws = get_param(model, 'modelworkspace');  % handle to the model workspace
hws.assignin('mdl',mdl);                   % write the struct into the model workspace
At least be aware that this option exists.
A solution to your problem might be to use the assignin function for each variable whose value you want to pass to Simulink via your MATLAB base workspace. To do so, just use
assignin('base','mdl',mdl)
Having the variable in your base workspace should allow Simulink to see it.

Embedded Coder not recognizing tokens in default code generation template

I recently obtained a license to use Embedded Coder with an existing Simulink model that we have developed. In attempting to generate C code from the model for the first time, I am working through several errors. At first, we had no code generation template (.cgt) files defined in the model parameters. After some hunting, I found the default template that comes with MATLAB (matlabroot/toolbox/rtw/targets/ecoder/ert_code_template.cgt).
The latest problem is that I get errors on nearly every token in this default code generation template.
Since I'm just trying to get something to build, at first I commented out the offending lines (things like RTWFileVersion, etc.), but now I am noticing that it gives me errors for things that are mandatory (i.e. Types). Types is one of several required items that must be in the .cgt file, so what is wrong that causes MATLAB not to recognize these tokens? I'm guessing something may be messed up with my installation, such as a path.
Other details:
Simulink R2013a (32-bit)
Target is a Freescale device
Thanks to Matthias W for getting me to check other configuration options. It turns out I had selected a .tlc file that was probably incompatible with Embedded Coder.
Under Code Generation, for "System target file", I have now selected ert.tlc, and I am able to build the parts of my model I'm interested in.

Save array to base workspace from Simulink model

I'm using a MATLAB function block within a Simulink model. I build this model and run it on a dSPACE system at 1 kHz. To evaluate my experiment, I need the data (a 20x20 double array) that is calculated in my MATLAB function block. Is it possible to export the data to the base workspace?
The easiest way to read a variable from your system is to use ControlDesk. Create a project and download/start your experiment using ControlDesk; it is then automatically aware of the running application and can read the variable. You then have to configure a Measurement (or Capture in older versions) and export the results to a MAT file. You can find detailed instructions in the documentation from dSPACE, called HelpDesk.
Alternatively, you can use the XIL API or HIL API to automate the above steps.