What impact does marking a subsystem block as "Treat as atomic unit" have on the generated code? Even with the option unchecked, the keyword "virtual" never appears in the generated code. Please explain.
As a very general statement, consider:
During simulation, non-atomic (virtual) subsystems are just visual grouping elements.
During simulation, Simulink runs the content of atomic subsystems as if they were in a separate function of their own.
Atomic subsystems can also appear as separate functions in the generated code (although this also depends on their function-packaging settings).
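As a minimal sketch, the same settings can also be made programmatically with set_param; the model name myModel and subsystem path myModel/Controller below are only placeholders:

    load_system('myModel')   % placeholder model name

    % Make the subsystem atomic, i.e. no longer a purely virtual grouping element.
    set_param('myModel/Controller', 'TreatAsAtomicUnit', 'on')

    % Request a separate (non-inlined) function for it in the generated code
    % ('Nonreusable function' is one of the function-packaging options).
    set_param('myModel/Controller', 'RTWSystemCode', 'Nonreusable function')

With the function packaging left at its default (Auto), the code generator is still free to inline an atomic subsystem, which is why marking it atomic by itself does not guarantee a separate function in the generated code.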
I am currently at a stage where I would like to modularize my code and follow software-engineering practices to make it reusable and understandable. In particular, I run my code either on my laptop or on an external server.
My goal is to have the main part of the code exactly the same on the laptop and the server, but with different initialization parameters in the two cases (on the server I will, for instance, increase the number of iterations and change other variables). What is the common practice for doing this, apart from an obvious if-else statement in the main script?
I was thinking of an initialization file (such as a JSON file) that differs between the laptop and the server, so I only need to modify the values there. Or a MATLAB function that initializes the variables, but that would still contain an if-else statement.
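Something like this minimal sketch is what I have in mind; the file name config.json, the field names, and run_main_algorithm are just placeholders:

    % The JSON file differs between laptop and server; the shared code only
    % ever reads the decoded struct.
    cfg = jsondecode(fileread('config.json'));

    nIterations = cfg.nIterations;
    tolerance   = cfg.tolerance;

    % The main algorithm stays identical on both machines, e.g.:
    % results = run_main_algorithm(nIterations, tolerance);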
Other suggestions? Keep in mind I might want to extend the algorithmic part and introduce new parameters in the future.
Thanks
For reasons far outside my control, I have now been thrust into doing MATLAB/Simulink/Stateflow work. I've done the On-Ramp training, and already I despise how unintuitive it is to do things that are just common and easy in any text-based programming language.
So here's my issue: I am trying to create Stateflow subcharts that I can reuse like a function, like a series of steps to take when requesting a series of responses from an SPI bus. I want to be able to use this subchart from within other subcharts in the same parent Stateflow diagram. My research has so far led me to Atomic Subcharts.
The problem is, my subchart relies on a number of Simulink Functions, which in turn call S-Functions to communicate with an STM32 target. I can make the subchart, no problem, with the Simulink Functions in the root Stateflow diagram. But when I convert the subchart into an Atomic Subchart, it can no longer detect the Simulink functions, giving me errors in the subchart for them.
All of this I am doing inside of a library as common code for a particular chip we use on a number of in-house circuit boards. As a final wrinkle, this whole thing is being used inside a much larger system and uses the "after()" transition between states so that the RTOS can go do other things. As far as I am aware, I cannot do the same thing inside of a Simulink or MATLAB function and HAVE to do this in Stateflow, which means I can't just make a normal "do all SPI reads" function but need a "Stateflow function".
Is there any way to access Simulink Functions from inside an Atomic Subchart?
Is there any other way to reuse a Stateflow diagram like a function, so that I can update the root diagram without having to modify the same copy/pasted diagram code in multiple places?
I also cannot use graphical functions because these diagrams have loops, and apparently you can't backtrack inside of a graphical function.
So I found an answer while working through this problem with someone who has more Stateflow experience than me. He mentioned the idea of "concurrently running states" (parallel states).
What you do is, at the root level, group your normal states into a subchart. Then, for any reusable code you want to call like a function, make it a separate parallel state machine with its own substates, defaulting to an Idle state. You can then declare a local event that your main state uses to move the "function" state machine out of its Idle state, and have your main state wait for the "function" to return to its Idle state.
Your "function calls" in your main states are always going to take two states, since you first wait for the "function" to be in its Idle state and then wait again until it returns to its Idle state. But if you've got a subchart of sufficient complexity, this is a lot more compact than copying and pasting the same behavior into multiple subcharts and having to modify every copy later. Plus, you get the function-like behavior of only having to modify your "function" subchart to change the behavior everywhere across your Stateflow diagram.
Information about how to create parallel running states can be found here:
https://www.mathworks.com/help/stateflow/gs/parallelism.html
https://www.mathworks.com/help/stateflow/gs/events.html
I am currently working on the implementation of some C-Code in a Simulink model using the S-Function Builder block.
The code uses various timers and counters, which are defined as static variables so that their values remain accessible in subsequent simulation steps.
However, when I start the simulation, MATLAB crashes without an error message ('Fatal Exception'). As a test, I defined the variables without the 'static' keyword. The simulation then runs, but (logically) the S-Function produces wrong results.
Has anybody else faced similar issues or knows how to declare static variables in Simulink?
P.S.
I know I could use Work Vectors, which I do not intend to do, since adapting the function to use them would take a huge effort.
Furthermore, I could simply build a feedback loop in the model using a Memory block. With approximately 100 variables, this solution would also be pretty impractical.
Not a solution, but a possible workaround is to use the coder.ceval functionality. I have used this to wrap a C function with similar behavior (static variables used as counters). The coder.ceval call is then placed in a MATLAB Function block (formerly the Embedded MATLAB block). Possibly some definitions of the interfaces must also be made (structures / bus objects).
Check coder.ceval, coder.rref and coder.wref for the call structure.
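A minimal sketch of such a wrapper, assuming a hypothetical C function unsigned int my_counter_step(void), declared in a hypothetical header my_counters.h, that keeps its state in a static variable:

    function y = counter_wrapper() %#codegen
    % Intended for a MATLAB Function block; the C function, header and return
    % type below are placeholders for your own code.
    y = uint32(0);
    if ~coder.target('MATLAB')
        % coder.ceval is only valid in generated code, so guard against
        % plain MATLAB execution.
        coder.cinclude('my_counters.h');
        y = coder.ceval('my_counter_step');
    end
    end

The C source itself still has to be added to the model's custom code settings so that it is compiled and linked for simulation and code generation.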
It seems like it was a bug in Simulink or the MinGW compiler. I tore the code down until it ended up crashing on the access of one specific variable. Since I could not find any error in the syntax, I renamed the variable, and now everything works fine...
The variable name contained various underscores and capital letters, in case anyone runs into a similar issue.
Scheme offers a primitive called call-with-current-continuation, commonly abbreviated call/cc, which has no equivalent in the ANSI Common Lisp specification (although there are some libraries that try to implement it).
Does anybody know why the decision was made not to include a similar primitive in the ANSI Common Lisp specification?
Common Lisp has a detailed file-compilation model as part of the standard language. The model supports compiling a program to object files in one environment and loading them into an image in another environment. There is nothing comparable in Scheme: no eval-when, compile-file or load-time-value, and no concepts such as what an externalizable object is, or how the semantics of compiled code must agree with those of interpreted code. Lisp has a way to have functions inlined or not inlined, so basically you control with great precision what happens when a compiled module is re-loaded.
By contrast, until a recent revision of the Scheme report, the Scheme language was completely silent on the topic of how a Scheme program is broken into multiple files. No functions or macros were provided for this. Look at R5RS, under 6.6.4 System Interface. All that you have there is a very loosely defined load function:
optional procedure: (load filename)
Filename should be a string naming an existing file containing Scheme source code. The load procedure reads expressions and definitions from the file and evaluates them sequentially. It is unspecified whether the results of the expressions are printed. The load procedure does not affect the values returned by current-input-port and current-output-port. Load returns an unspecified value.
Rationale: For portability, load must operate on source files. Its operation on other kinds of files necessarily varies among implementations.
So if that is the extent of your vision about how applications are built from modules, and all details beyond that are left to implementors to work out, then of course the sky is the limit regarding inventing programming-language semantics. Note in particular the Rationale part: if load is defined as operating on source files (with anything else being a bonus courtesy of the implementors), then it is nothing more than a textual inclusion mechanism like #include in the C language, and the Scheme application is really just one body of text that is physically spread across multiple text files pulled together by load.
If you're thinking about adding any feature to Common Lisp, you have to think about how it fits into its detailed dynamic loading and compilation model, while preserving the good performance that users expect.
If the feature you're thinking of requires global, whole-program optimization (whereby the system needs to see the structural source code of everything) in order that users' programs not run poorly (and in particular programs which don't use that feature) then it won't really fly.
Specifically with regard to the semantics of continuations, there are issues. In the usual semantics of a block scope, once we leave a scope and perform cleanup, that is gone; we cannot go back to that scope in time and resume the computation. Common Lisp is ordinary in that way. We have the unwind-protect construct which performs unconditional cleanup actions when a scope terminates. This is the basis for features like with-open-file, which provides an open file handle object to a block scope and ensures that it is closed no matter how the block scope terminates.
If a continuation escapes from that scope, that continuation no longer has a valid file. We cannot simply not close the file when we leave the scope, because there is no assurance that the continuation will ever be used; that is to say, we have to assume that the scope is in fact being abandoned forever and clean up the resource in a timely way. The band-aid solution for this kind of problem is dynamic-wind, which lets us add handlers on entry and exit to a block scope. Thus we can re-open the file when the block is restarted by a continuation. And not only re-open it, but actually position the stream at exactly the same position in the file and so on. If the stream was halfway through decoding some UTF-8 character, we must put it into the same state.
So if Lisp got continuations, either they would be broken by the various with- constructs that perform cleanup (poor integration), or else those constructs would have to acquire much hairier semantics.
There are alternatives to continuations. Some uses of continuations are non-essential: essentially the same code organization can be obtained with closures or restarts. Also, there is a powerful language/operating-system construct that can compete with the continuation: namely, the thread. While continuations have aspects that are not modeled nicely by threads (not to mention that they do not introduce deadlocks and race conditions into the code), they also have disadvantages compared to threads, like the lack of actual concurrency for using multiple processors, or of prioritization.
Many problems expressible with continuations can be expressed with threads almost as easily. For instance, continuations let us write a recursive-descent parser which looks like a stream-like object that just returns progressive results as it parses. The code is actually a recursive-descent parser and not a state machine which simulates one. Threads let us do the same thing: we can put the parser into a thread wrapped in an "active object" which has some "get next thing" method that pulls items from a queue. As the thread parses, instead of returning a continuation, it just throws objects into the queue (and possibly blocks until some other thread removes them). Continuation of execution is provided by resuming that thread; its thread context is the continuation.
Not all threading models suffer from race conditions (as much); there is, for instance, cooperative threading, under which one thread runs at a time and thread switches only take place when a thread makes an explicit call into the threading kernel. Major Common Lisp implementations have had light-weight threads (typically called "processes") for decades, and have gradually moved toward more sophisticated threading with multiprocessing support. The support for threads lessens the need for continuations, and it is a greater implementation priority, because language run-times without thread support are at a technological disadvantage: they cannot take full advantage of the hardware resources.
This is what Kent M. Pitman, one of the designers of Common Lisp, had to say on the topic (from comp.lang.lisp):
The design of Scheme was based on using function calls to replace most common control structures. This is why Scheme requires tail-call elimination: it allows a loop to be converted to a recursive call without potentially running out of stack space. And the underlying approach of this is continuation-passing style.
Common Lisp is more practical and less pedagogic. It doesn't dictate implementation strategies, and continuations are not required to implement it.
Common Lisp is the result of a standardization effort on several flavors of practical (applied) Lisps (thus "Common"). CL is geared towards real life applications, thus it has more "specific" features (like handler-bind) instead of call/cc.
Scheme was designed as a small, clean language for teaching CS, so it has the fundamental call/cc, which can be used to implement other tools.
See also Can call-with-current-continuation be implemented only with lambdas and closures?
I'm struggling with the size of output files for large Modelica models. Of course, I can protect some objects in order to remove them completely from the result file. However, that gives rise to two problems:
it's not possible to redeclare protected objects
if I want to test my model in detail (e.g. for a short time period), I need to declare those objects as public again in order to see their variables
I wonder if there's a trick to set the 'verbosity' of a Modelica model. Maybe what I would like is a third keyword next to public and protected, e.g. transparent. Then, when setting up a simulation, I want to be able to set the verbosity level to 1 or 2, with the following effect:
1 --> consider all transparent elements as protected
2 --> consider all transparent elements as public
This effect would propagate to all models and submodels.
I don't think this already exists. But is there an easy workaround?
Thanks,
Roel
As Michael Tiller wrote above, this is not handled the same way in all Modelica tools and there is no definite answer. To give an OpenModelica-specific answer, it's possible to use simulate(ModelName, outputFilter="regex") to store only the variables that fully match the given regex (the default is .*, which matches any variable).
Roel,
I know several people wrestling with this issue. At the moment, all of this depends on the tool being used. I don't know how other tools handle filtering of results, but in Dymola you control it (as you point out) by giving the signals special qualifiers (e.g. protected).
One thing I've done in the past is to extend from a model and then add a bunch of output signals for things I'm interested in. Then you can select "Outputs" in Dymola to make sure those get in the results file. This is far from perfect because a) listing everything you want can get tedious and b) referencing protected variables is not strictly allowed (although Dymola lets you get away with it but issues a warning).
At Dassault, we are actively discussing this idea and hope to provide some better functionality along these lines. It isn't clear whether such functionality will be strictly tool specific or whether it will involve the language somehow. But if it is language related, we will (of course) work with the design group to formulate a specification that other tool vendors can support as well.
In SystemModeler, you go to the Settings tab in the Experiment Browser in Simulation Center. Click on Output at the bottom and select which variables to store.
(The options are state variables, derivatives, algebraic variables, parameters and protected variables. If you also mark the Store simulation log option, you get some interesting statistics on events over time and on function evaluations, which opens another possibility for tracking down the parts of the simulation and model that cause more evaluations.)
I am not sure if this helps you, but in Dymola you can go to Simulation -> Setup -> Output and mark a checkbox saying "Store protected variables". That way it is possible to declare most variables as protected: during normal simulation they are not stored, but when debugging your model you just mark that checkbox and they are stored.
Of course that is not the same as your suggested keyword transparent, but maybe it helps a little...
A bit late, but in Dymola 2013 FD01 and later you can select which variables to store based on names (and model names) using the annotation __Dymola_selections, and you can even filter on user-defined tags - so you could create a tag named "transparent" in the model. See "Matching and variable selections" in the manual.