Add a Simulink block to set system-wide parameters - matlab

I have a Simulink model with a number of system-wide parameters that affect many different blocks. The way I deal with this right now is by encapsulating the entire model inside of a masked subsystem at the top level and managing the parameters there. Doing this makes the parameters visible to all blocks. However, I would rather have my model reside at the top level and include a parameters block there that I can use to manipulate the system parameters.
I don't know if pictures will help here, but they can't hurt:
The picture above shows an example of my current setup. Notice that the entire design is nested inside of a masked subsystem called "System Parameters".
This picture shows how I would like the top level to appear. This seems like a much more intuitive interface. It would also make it much easier to copy my parameters block between models, which is my main motivation: I would really like to convert it into a library block that I can use in a handful of models based on the same hardware system. However, the problem is that the parameters within the System Parameters block are not visible to the rest of the blocks in the model (at least not directly).
Is there a way that a block like the one in the second image could make its parameters easily available to the rest of the model?

For the parameters to be available to the other blocks, they need to live either in the model workspace or in the base workspace. You could add an initialization callback to your block that copies the mask parameters into one of those workspaces, but in my opinion a much better practice is to define all the parameters in a MATLAB script and run it from the model's InitFcn callback. You then just distribute that script along with your model for the end user.
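As a sketch of that approach (all parameter names below are made up for illustration, not taken from the original model), the script is just plain assignments:

```matlab
% init_params.m -- defines system-wide parameters in the base workspace.
% Parameter names here are examples only.
Kp   = 2.5;    % controller gain
Ts   = 0.01;   % sample time [s]
Vmax = 12;     % supply voltage [V]
```

Attach it once to the model, e.g. `set_param('myModel', 'InitFcn', 'init_params')`, and every block in the model can then reference `Kp`, `Ts`, and `Vmax` directly in its dialog fields.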

Related

Best way to initialize Matlab parameters based on the machine

I am currently at a stage where I would like my code to be modularized and to follow software-engineering practices that make it reusable and understandable. In particular, I run my code either on my laptop or on an external server.
My goal is to keep the main part of the code exactly the same on the laptop and the server, but with different initialization parameters in the two cases (on the server, for instance, I will increase the number of iterations and change other variables). What is the common practice for doing this, apart from an obvious if-else statement in the main script?
I was thinking of an initialization file (like a JSON file) that differs between the laptop and the server, so I only need to modify the values. Or a MATLAB function that initializes the variables, but that would still contain an if-else statement.
Any other suggestions? Keep in mind I might want to extend the algorithmic part and introduce new parameters in the future.
Thanks
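One way to realize the configuration-file idea without any if-else in the main script is to key the file name off the machine's hostname. A minimal sketch (file names and config fields are assumptions for illustration):

```matlab
% loadConfig.m -- pick a per-machine JSON config; fall back to a default.
[status, host] = system('hostname');
host = strtrim(host);
cfgFile = ['config_' host '.json'];
if status ~= 0 || ~isfile(cfgFile)
    cfgFile = 'config_default.json';   % shared fallback settings
end
cfg = jsondecode(fileread(cfgFile));   % e.g. cfg.iterations, cfg.tolerance
```

Adding a new parameter later then only means adding a field to the JSON files; the code that loads them stays unchanged.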

Accessing Simulink Functions from inside Atomic Subchart

For reasons far outside my control, I have now been thrust into doing MATLAB/Simulink/Stateflow work. I've done the On-Ramp training, and already I despise how unintuitive it is to do things that are just common and easy in any text-based programming language.
So here's my issue: I am trying to create Stateflow subcharts that I can reuse like a function, like a series of steps to take when requesting a series of responses from an SPI bus. I want to be able to use this subchart from within other subcharts in the same parent Stateflow diagram. My research has so far led me to Atomic Subcharts.
The problem is, my subchart relies on a number of Simulink Functions, which in turn call S-Functions to communicate with an STM32 target. I can make the subchart, no problem, with the Simulink Functions in the root Stateflow diagram. But when I convert the subchart into an Atomic Subchart, it can no longer detect the Simulink Functions, and gives me errors for them in the subchart.
All of this I am doing inside of a library as common code for a particular chip we use on a number of in-house circuit boards. As a final wrinkle, this whole thing is being used inside a much larger system and uses the "after()" transition between states so that the RTOS can go do other things. As far as I am aware, I cannot do the same thing inside of a Simulink or MATLAB function and HAVE to do this in Stateflow, which means I can't just make a normal "do all SPI reads" function but need a "Stateflow function".
Is there any way to access Simulink Functions from inside an Atomic Subchart?
Is there any other way to reuse a Stateflow diagram like a function, so that I can update the root diagram without having to modify the same copy/pasted diagram code in multiple places?
I also cannot use graphical functions because these diagrams have loops, and apparently you can't backtrack inside of a graphical function.
So I found an answer while working through this problem with someone who has more Stateflow experience than me. He mentioned the idea of "concurrently running states".
What you do is, at the root of the chart, group your normal states into a subchart. Then, for any reusable logic you want to call like a function, make a separate parallel state machine with its own substates that defaults to an Idle state. You can then declare a local event that your main state raises to move the "function" state machine out of Idle, and have your main state wait until the "function" returns to its Idle state.
Each "function call" in your main states will always take two waiting steps: first wait for the "function" to be in its Idle state (then raise the event), then wait again until it returns to Idle. But if your subchart is of sufficient complexity, this is a lot more compact than copying and pasting the same behavior into multiple subcharts and having to modify each copy later. Plus, you get function-like behavior: you only have to modify the "function" state machine to change the behavior everywhere in your Stateflow diagram.
Information about how to create parallel running states can be found here:
https://www.mathworks.com/help/stateflow/gs/parallelism.html
https://www.mathworks.com/help/stateflow/gs/events.html

Scala legacy code: how to access input parameters at different points in execution path?

I am working with a legacy Scala codebase, and as is always the case, modifying the code is difficult without touching many different parts.
One of my new requirements is to make several decisions based on some input parameters. The problem is that these decisions have to be made at various points along the execution path. One option is to encapsulate all those parameters in a case class instance and pass it along, but that means modifying multiple method signatures, and I want to avoid that approach as much as possible.
Another approach can be to create a global object containing all those input parameters and accessible from different points in the execution. Is it a good approach in Scala?
No, using global mutable variables to pass "hidden" parameters is not a good idea, not in Scala and not in any other programming language. It makes the code hard to understand and modify, because a function's behaviour now depends on which functions were invoked earlier. And it's extremely fragile, because you might forget to set one of those global parameters before invoking the function, which means it will use whatever value was stored there before. This is the kind of thing that can appear to work for years and then break when you modify a completely unrelated part of the program.
I can't stress this enough: do not use global mutable variables, period. The solution is to man up and change those method signatures. Depending on the details, dependency injection may or may not help in your particular case.
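As a minimal sketch of the explicit-parameter approach (all names here are illustrative, not from the actual codebase):

```scala
// Bundle the input parameters in one immutable case class and pass it
// down the call chain explicitly, instead of reading hidden globals.
case class RunConfig(iterations: Int, threshold: Double)

object Pipeline {
  def prepare(data: Seq[Double], cfg: RunConfig): Seq[Double] =
    data.filter(_ > cfg.threshold)   // decision point 1 uses cfg

  def run(data: Seq[Double], cfg: RunConfig): Int = {
    val kept = prepare(data, cfg)    // cfg travels along explicitly
    kept.size * cfg.iterations       // decision point 2 uses cfg
  }
}
```

Adding a new decision parameter later means adding one field to `RunConfig`; the method signatures stay unchanged. If the signature noise bothers you, Scala also lets you mark the config parameter `implicit`, which keeps the dependency visible in the signatures while removing it from most call sites.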

MATLAB organise external toolboxes or turn them into packages to prevent shadowing

I'm working on a large data analysis that incorporates lots of different elements and hence I make heavy use of external toolboxes and functions from file exchange and github. Adding them all to the path via startup.m is my current working method but I'm running into problems of shadowing function names across toolboxes. I don't want to manually change function names or turn them into packages, since a) it's a lot of work to check for shadowing and find all function calls and more importantly b) I'm often updating the toolboxes via git. Since I'm not the author all my changes would be lost.
Is there a programmatic way of packaging the toolboxes so that they get their own namespaces (with as little overhead as possible)?
Thanks for the help
You can achieve this. The basic idea is to put all of the toolbox functions in a private folder and expose only a single entry point. That entry point is the only file that sees the toolbox, and at the same time it sees the toolbox functions first regardless of the order of the search path, because private functions take precedence over functions on the path.
toolbox/exampleToolbox.m
function varargout = exampleToolbox(fcn, varargin)
% Dispatcher: FCN is the name of a function in the private folder;
% all remaining arguments are forwarded to it unchanged.
fcn = str2func(fcn);
varargout = cell(1, nargout);
[varargout{:}] = fcn(varargin{:});
end
with toolbox/exampleToolbox/private/foo.m being an example function.
Now you can call foo(1,2,3) via exampleToolbox('foo',1,2,3).
The same technique could be used to generate a class. That way you could use exampleToolbox.foo(1,2,3).
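A sketch of that class-based variant (hypothetical, following the same pattern, with one forwarding method per toolbox function):

```matlab
% exampleToolbox.m -- classdef variant: static methods forward into the
% private folder that sits next to this file.
classdef exampleToolbox
    methods (Static)
        function varargout = foo(varargin)
            varargout = cell(1, nargout);
            % The unqualified name 'foo' resolves to private/foo.m, not
            % to this static method: static methods must be called with
            % the class name prefix, so there is no recursion here.
            [varargout{:}] = foo(varargin{:});
        end
    end
end
```

The per-function boilerplate buys you tab completion on `exampleToolbox.foo(...)`, which the string-dispatch version cannot offer.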

Searching for a concept like 'verbosity' in Modelica

I'm struggling with the size of output files for large Modelica models. Of course, I can protect some objects in order to remove them completely from the result file. However, that gives rise to two problems:
it's not possible to redeclare protected objects
if I want to test my model in detail (e.g. for a short time period), I need to declare those objects public again in order to see their variables
I wonder if there's a trick to set the 'verbosity' of a Modelica model. Maybe what I would like is a third keyword next to public and protected, e.g. transparent. Then, when setting up a simulation, I would want to be able to set the verbosity level to 1 or 2, with the following effect:
1 --> consider all transparent elements as protected
2 --> consider all transparent elements as public
This effect would propagate to all models and submodels.
I don't think this already exists. But is there an easy workaround?
Thanks,
Roel
As Michael Tiller wrote, this is not handled the same way in all Modelica tools and there is no definitive answer. To give an OpenModelica-specific answer: it's possible to use simulate(ModelName, outputFilter="regex") to store only the variables that fully match the given regex (the default is .*, matching any variable).
Roel,
I know several people wrestling with this issue. At the moment, all of this depends on the tool being used. I don't know how other tools handle filtering of results, but in Dymola you control it (as you point out) by giving the signals special qualifiers (e.g. protected).
One thing I've done in the past is to extend from a model and then add a bunch of output signals for things I'm interested in. Then you can select "Outputs" in Dymola to make sure those get in the results file. This is far from perfect because a) listing everything you want can get tedious and b) referencing protected variables is not strictly allowed (although Dymola lets you get away with it but issues a warning).
At Dassault, we are actively discussing this idea and hope to provide some better functionality along these lines. It isn't clear whether such functionality will be strictly tool specific or whether it will involve the language somehow. But if it is language related, we will (of course) work with the design group to formulate a specification that other tool vendors can support as well.
In SystemModeler, you go to the Settings tab of the Experiment Browser in Simulation Center. Click Output at the bottom and select which variables to store.
(The options are state variables, derivatives, algebraic variables, parameters and protected variables. If you also mark the Store simulation log option, you get some interesting statistics on events over time and function evaluations, which opens another way to track down the parts of the simulation and model that cause the most evaluations.)
I am not sure if this helps you, but in Dymola you can go to Simulation->Setup->Output and mark a checkbox saying "Store Protected variables". That way it is possible to declare most variables as protected: during normal simulation they are not stored, but when debugging your model, you just mark that checkbox and they are stored.
Of course that is not the same as your suggested keyword transparent, but maybe it helps a little...
A bit late, but in Dymola 2013 FD01 and later you can select which variables to store based on names (and model names) using the annotation __Dymola_selections, and even filter on user-defined tags - so you could create a tag name "transparent" in the model. See "Matching and variable selections" in the manual.