Difference between a Simulink library and a model reference

What are the differences (if any) between a Simulink library and a model reference? Are there advantages to using one or the other in different situations?

The main purpose of libraries and model references is the same: to facilitate the reuse of Simulink models. When you work with libraries, Simulink "imports" the content of the referenced library blocks into the main model. Sometimes this leads to the developer dealing with gigantic models (more than 50k blocks), which can be time consuming. When you are designing a library, the library file cannot be run on its own; you have to "instantiate" it in the main model. On the other hand, model referencing deals with separate models. They are put together when you press the simulate button, but at design time you deal with completely separate models. With model references you can also select acceleration methods (the model is basically compiled), which cannot be done with libraries.

Adding some more to danielmsd's answer:
- Configuration Management: model references can easily be put into version control and developers can work independently of each other. A library is a single file, so its blocks cannot be versioned individually and developers cannot work on it in parallel. You can also protect model references.
- Code Generation: incremental builds are only possible with model referencing.
BUT: model referencing has quite a few limitations, so check them out carefully before picking this option. See Model Referencing Limitations in the documentation.

From a system design perspective, model references should be used for the components of your system, that is, the different parts your system is made of. Libraries should be used for utilities, that is, reusable blocks that are used throughout your design.
For example, a robot control system includes components such as navigation, control, path_planner, etc. These are components and should be implemented as model references. In that case they are developed as independent models and can be tested independently.
Inside the components you may need utility blocks such as low_pass_filter, error_state_handler and check_input_range; these belong in libraries.

Advantages of model references:
- Code generation: model references allow partial builds when using the coder product. Assuming you have a really large model with 100k blocks that takes 20 minutes to build, splitting it up into model references will reduce the build time, since only the changed models need to be rebuilt.
- Model update: only changed model references are updated on "Ctrl+D", which helps when you have really large models.
- Simulation: in simulation, model references are generated as DLLs, which makes your simulations much faster (the effect is much bigger than the difference between normal and accelerator mode).
Disadvantages / pains:
- In general, model referencing is somewhat of a pain to use due to its limitations.
- There is no way to pass a Simulink.Parameter object that has a tree structure. (When using a bus object as the data type, only the Value property has a structure; the other properties don't.)
- When a subsystem has a bus signal as input, a model reference needs a bus object to specify the input interface, whereas a library block doesn't (even though it is quite ugly to use unspecified bus inputs in a library block). Note that bus objects are always global in the base workspace, so there is a risk of naming collisions.

Scala, GUI and immutability

I created an algorithm that calculates certain things. This can be considered the model. The algorithm is implemented in a fully functional way, so it uses immutable classes only.
Now, using this model, I would like to develop a GUI layer on top of it. However, I do not know anything about best practices for building GUIs in Scala. I intend to use ScalaFX.
My problem is the following: in ScalaFX (similarly to JavaFX) you can bind values from the GUI to object properties. This clearly violates the functional paradigm, but seems very convenient.
This would require rewriting my classes to use bindable properties which would feel like the tail wagging the dog — the model would depend on the GUI.
On the other hand, I could have an independent GUI layer. In this case I would need proxy objects to bind to and I would have to create my model objects based on these proxy objects. This would feel more idiomatic but implies a lot of code duplication and extra work. My model and the proxy objects would have to be kept in sync and I would have to take care of copying the attributes.
What is a good way of doing this? A GUI is always full of mutability so functional programming does not feel right here. Nevertheless I love Scala so I would like to keep using it for the GUI, too.
Despite the extra effort, take the second approach. Create small mutable "view" instances for each of your model classes. Bind the views to the widgets and install observers or hooks that update the view proxies based on changes in your model. Don't let the GUI API dictate what your concurrency approach and model should look like.
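For illustration, here is a minimal sketch of such a view proxy in ScalaFX, assuming an immutable Person case class produced by the functional core; the names Person, PersonView and refresh are invented for the example, not taken from the question:

    import scalafx.beans.property.{IntegerProperty, StringProperty}

    // Immutable model value produced by the purely functional algorithm.
    final case class Person(name: String, age: Int)

    // Small mutable "view" proxy; only this layer knows about ScalaFX properties.
    final class PersonView(initial: Person) {
      val name: StringProperty = StringProperty(initial.name)
      val age: IntegerProperty = IntegerProperty(initial.age)

      // Copy the attributes of a fresh immutable model instance into the view.
      def refresh(model: Person): Unit = {
        name.value = model.name
        age.value = model.age
      }

      // Rebuild an immutable model object from the current view state.
      def toModel: Person = Person(name.value, age.value)
    }

A widget would then bind against the proxy (for instance something like textField.text <==> view.name using ScalaFX's bidirectional binding operator), while an observer on the functional core calls refresh whenever a new model value is produced, so the model itself never depends on the GUI.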
I believe there are a few open source libraries around that provide a more functional and/or reactive abstraction layer on top of plain Scala-Swing or ScalaFX.

Simulink model interface to external C++ application

I have a fairly complex Simulink model with multiple referenced models that I am trying to interface with external C++ code. I am using 2014a and generating code using Simulink Coder. Below are two important requirements I would like to satisfy while doing this:
1. Have the ability to create multiple instances of the model in my external C++ code.
2. Have the ability to overwrite data that is input to the model and have it reflected in the external application.
Given these requirements, what is the best method of interfacing with external I/O at the top level: data stores or I/O ports?
From my understanding, using data stores (or Simulink.Signal objects) and specifying the appropriate storage class, I may be able to satisfy requirement 2 above, but in doing so I would have to specify signal names, and this would not allow me to satisfy requirement 1.
If I use I/O port blocks at the top level, I may be able to satisfy 1 but not 2.
Update: constraint 2 has been removed due to a design change, hence the I/O port approach works for me.

Use Model Reference in Simulink to improve performance

I have a huge Simulink model and I am testing some options to improve its performance. The model is implemented using a library for reusable components and subsystems for hierarchy and organization.
I was wondering whether converting some subsystems to model references would improve performance, besides its other advantages. The problem is that every single library component is itself a masked subsystem, and so far I couldn't figure out how to convert a masked subsystem to a model reference. An error message appears when trying to convert:
Invalid usage of convertToModelReference. The subsystem to be converted must be an atomic, or triggered subsystem block. Cannot convert a virtual subsystem to a model
Reading the Simulink documentation did not help.
My questions are:
Is it possible to convert a masked subsystem to a model reference?
If yes, do I need to make some adjustments in every masked subsystem/library block throughout the whole hierarchy, or only in the respective top-tier block that I am trying to convert?
What is this error message about?
In addition to what Phil said, you can only convert an atomic subsystem into a referenced model. An atomic subsystem executes as a whole, rather than having its hierarchy flattened into the parent model during compilation, as happens with virtual subsystems. For more details, see the documentation. You can have library components within referenced models, but there are a number of limitations; see Simulink Model Referencing Requirements and Model Referencing Limitations.
Despite the constraints, model referencing is generally advised as the way to approach large-scale modelling and it should give you performance improvements, as the referenced models run in accelerated mode (unless otherwise configured), and are rebuilt only when structural changes are made to the model. See Componentization Guidelines in the documentation for more details.
In my experience that's a catch-all error message, which basically means that something is wrong (but not necessarily the obvious virtual subsystem problem that it seems to indicate) and that the converter doesn't have enough "smarts" to give a very specific fix for the problem.
Sometimes the problem is that parameters needed by the reference block aren't being passed through the mask to it correctly.
But you most likely need to look at the various limitations of model referencing and work through the potential issues.
Depending on how many conversions you have to do, you might find that manually converting the subsystems (by copying them into new models and configuring the new models by hand) is less frustrating than trying to figure out why the automatic conversion is not working.

Best Data Structure for an Entity-Component-System Framework

I've been reading a lot about Entity Frameworks and now I want to implement one in my game. An Entity Framework is based on making the game entities simple containers of Components, where a Component contains a certain characteristic of an Entity (and all the variables/accessors which describe this characteristic).
The game logic is then modularized by creating Systems. Each System implements and runs a certain aspect of the game logic (e.g. collisions, rendering, animation). Each System has to be able to access every Entity that has a certain combination of Components (e.g. the RenderSystem has to get only the Entities which have a PositionComponent and an AnimationComponent).
My question regards the best data structure for achieving such functionality.
My current idea is to create a Vector (with N cells, where N is the number of possible Components) of Lists of Entities. So whenever I create an Entity (instantiate it and add certain Components), I would also reference this Entity from each List corresponding to a Component it contains. "Killing" an Entity would require removing each of those references from each List. The problem would be querying which Entities have to be processed by a certain System, because the search key would be a combination of Components rather than a single Component, adding overhead to the operation (many searches and comparisons would have to be done).
Is my idea good? Is there any better data structure I could use? Note that everything in the game is supposed to be an Entity, summing up to thousands of Entities in a single level (I could possibly use some space partitioning).
There are two ways of doing it.
The purely data-oriented approach would lead you to not have an Entity class at all, but just components sharing an ID. In this case a vector or a hash map for every system wouldn't be a problem, as lookups in these data structures are fast. If you want several components per system per entity, you can aggregate your components in one data structure adapted to each system.
The problem is that a purely data-oriented system can be less usable than a more pragmatic approach where you keep all the features of the system described above, but also keep an entity class that holds references to its components (or aggregated component structures) for every system. Processing an entity (deleting or inspecting it) becomes much easier, because all the information about what the entity is, i.e. what it is made of and not what state it is in, can still be found in one place instead of having to query every system.
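As a rough sketch of the first, purely data-oriented variant (every name below is invented for the example): components live in per-type stores keyed by an entity ID, and a system iterates over the intersection of the stores it needs.

    import scala.collection.mutable

    // Example component types; an "entity" is nothing but an Int ID.
    final case class Position(x: Double, y: Double)
    final case class Animation(frame: Int)

    object World {
      // One store per component type, keyed by entity ID.
      val positions  = mutable.HashMap.empty[Int, Position]
      val animations = mutable.HashMap.empty[Int, Animation]

      // A RenderSystem would process only the entities present in BOTH stores.
      def renderables: Iterator[(Int, Position, Animation)] =
        positions.iterator.collect {
          case (id, pos) if animations.contains(id) => (id, pos, animations(id))
        }

      // "Killing" an entity just removes its ID from every store.
      def kill(id: Int): Unit = {
        positions.remove(id)
        animations.remove(id)
      }
    }

The second, more pragmatic variant keeps the same stores but adds an entity class that remembers which components belong to it, so deleting or inspecting a single entity becomes a direct lookup instead of a query over every system.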
In your case, the best thing is to try. It's quite easy and fast to implement a rough engine in both ways, and once you've played with the two you'll be able to decide which one suits you better.
This article is valuable in that it suggests four iterations of the data structure, though in my opinion none of them is a good solution. I still recommend reading it, because it contains a detailed analysis of the problem, nice estimates in terms of memory, and other good material.

Why is it called "EMF Generator _Model_"?

In the Eclipse Modeling Framework (EMF), there are ecore files to define a model. From this model, code (and other things) can be generated. This generation step is described by an "EMF Generator Model". Now my question is why this file is called a "model" instead of a "configuration" or something like that. In my opinion, it does not model anything, but rather describes a generation step...
While the other answers are perfectly correct, there is one additional difference between a "model" and a "configuration". All EMF models (including this generator model) can be modified, transformed and so on by every already available EMF tool (because they all use the same meta model).
This is a huge difference compared to the situation that the configuration can only be read by another tool, if it knows the exact format of the configuration serialization.
So you can create a UML diagram of the generator model, use it in a model-based graphical editor, transform it using model-to-model transformation plugins, put it into EMFStore, and so on, without any of these tools having been prepared specifically for that model.
The current implementation of EMF was created with a bootstrapping approach.
At first, the models that describe the data stored in ecore and genmodel files were written by hand. As soon as EMF was stable enough, these were modeled and generated with EMF itself.
This means that the ecore and genmodel files are in every way EMF models.
This is similar to how many compilers for new programming languages are developed. The initial implementation has to be written in a second language, but as soon as the compiler is complete, you can use the new language to write a new implementation, add features, and then use the binaries of the previous version of the compiler to create the next.
From the creator of EMF, Ed Merks:
After all, EMF's generator model generates both the Ecore model and itself, so we're not actually in a position to delete our generated code. We need it to bootstrap the environment. It's a prickly problem.
http://ed-merks.blogspot.de/2008/10/hand-written-and-generated-code-never.html
Actually, the genmodel files as well as the ecore files are, technically speaking, also EMF models, so it is not a surprise that it is called this way.
In fact, you have to understand that EMF allows you to describe any kind of structured information, so it can be used to describe your own semantics as well as code generation configurations, or even itself (ecore).