What are the best practices for large-scale modeling in Simulink when it comes to connecting blocks? Would you use the same structure for all I/O ports of your blocks to facilitate their interconnection (although there will obviously be a lot of redundant signals), or would you define custom structures for each I/O port type with only the necessary information?
For example:
A reactor is modeled as a single block with 4 inputs and 1 output:
I1. Feed, which is a structure containing: Flow and Concentrations (7 species);
I2. Mass flow of enzymes - scalar;
I3. Mass flow of water - scalar;
I4. Outflow - which is adjusted by a controller to keep a constant mass in the tank - scalar;
O1. The outstream, which is a struct: Flow and Concentrations (let's say 10 species).
Now imagine this reactor block is only a tiny piece of an entire process. There are enzyme and water tanks connected to it, some other downstream processes, etc.
Would you use a single structure for all I/O ports (even if it scales up to 50-100 components while each block needs only a few of them, or even just one component, like the scalar ports I2, I3 and I4 above)? Is this regarded as bad programming practice?
Or would you customize the I/O port structure for each block? Of course you would group and reuse them somehow, but without redundant information.
Thanks!
You might find the following useful: http://www.mathworks.co.uk/videos/tips-and-tricks-for-large-scale-model-based-design-part-2-81873.html.
I would personally use a single bus input and a single bus output for your reactor block. You can then group buses together to form larger bus signals as you move up the hierarchy of your model. Look at the Bus Creator and Bus Selector blocks.
I have been working with AnyLogic for about 6 months now, and my goal is to model a generic energy supply chain for a given energy demand (e.g. electricity and heat for a house). From this I want to evaluate how well the components in the energy supply chain can meet the energy demand.
My idea is to model the components (e.g. PV -> Battery Storage -> House) as agents. I would model the energy flow within the agents with SD, and the individual events of the components (e.g. charging and discharging of the battery) via statecharts.
Currently I have two problems:
What possibilities are there to create a variable interconnection of my components (agents)? For example, suppose I do not want to evaluate the scenario PV -> Battery Storage -> House, but PV -> Electrolysis -> Tank -> Fuel Cell -> House. My current approach would be to visually connect the agents with ports and connectors and then pass input and output variables for the SD calculation via set and get functions. Are there other possibilities, e.g. to realize such a connection via an Excel input file? I have seen a similar solution in the video "How to Build a True Digital Twin with Self-Configuring Models Using the Material Handling Library" by Benjamin Schumann, but I am not sure whether this approach can be applied to SD.
To evaluate the energy supply chain, I would like to attach information to the energy flow, for example its type (electricity, heat), its generation price (depending on which components the energy flow has passed through), and so on. Is there a way to attach such information to a flow in SD? My current approach would be to model the energy flow as an agent population with appropriate parameters and variables. Agents could then die when energy is consumed or converted from the electricity type to the heat type. However, I don't know whether this fits with the SD modeling of the energy flow.
Maybe you can help me with these problems? I would basically be interested in the opinion of more experienced AnyLogic users on whether my approaches are feasible, or whether there are other or easier approaches. If you know of any tutorial videos or example models that address similar problems, I would also be happy to learn from them.
Best
Christoph
It sounds like what you need is a model that combines the agent-based and system dynamics approaches, with agents populating the stocks (in your case energy, which then gets converted into heat) depending on their connections. There is an example of an AB-SD combination model among the 'Example' models, and I also found one on cloud.anylogic.com, although it is from a different domain.
Perhaps if you can put together a simple example and share it, I'll be able to provide more help.
The Intel Optimization Reference Manual
https://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
discusses the advantage of the Structure-Of-Arrays (SoA) data layout for SIMD processing compared to the traditional Array-Of-Structures (AoS) layout. This is clear.
However, there's one argument I don't understand. On page 4-23 it says "SoA can have the disadvantage of requiring more independent memory stream references. A computation that uses arrays X, Y, and Z (see Example 4-20) would require three separate data streams. This can require the use of more prefetches, additional address generation calculations, as well as having a greater impact on DRAM page access efficiency." To mitigate this problem they recommend a hybrid approach (Example 4-22).
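For concreteness, the layouts under discussion look roughly like this (the type and field names and the block width of 8 are mine, not the manual's):

```cpp
constexpr int N = 1024;

// AoS: a single contiguous stream; x, y and z of one element are interleaved.
struct VertexAoS { float x, y, z; };
VertexAoS aos[N];

// SoA: three independent arrays, i.e. three separate memory streams that
// prefetchers and address generation have to track in parallel.
struct VerticesSoA {
    float x[N];
    float y[N];
    float z[N];
};

// Hybrid (the Example 4-22 approach): small SIMD-width blocks of each field,
// so a pass over the data becomes a single forward stream again.
struct VertexBlock {
    float x[8];
    float y[8];
    float z[8];
};
VertexBlock hybrid[N / 8];
```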
Can somebody please explain the "three separate data streams", the "prefetches" and "additional address generation calculations", and "impact on DRAM page access efficiency"?
At https://stackoverflow.com/a/40169187/3852630 Peter Cordes discusses two effects: three different data streams for X, Y, and Z would tie up three registers for the addresses, and if the three arrays were mapped to the same cache lines, frequent cache evictions would be a problem. However, registers are not a scarce resource on modern CPUs, and multi-way set-associative caches should mitigate the cache-aliasing problem.
In the Thermal Power Library from Modelon, there are two kinds of connectors: flow connector and volume connector.
Based on the tutorial shipped with the library, these two kinds of connectors should NOT be connected with the same kind of connector.
But I checked their code, and the two connector definitions appear to be identical.
I also checked the code in the ThermoSysPro library from EDF and in the ThermoPower library. They use two kinds of connectors as well, and the recommended connection principle is the same.
So I read the code of "MixVolume" and "SteamTurbineStodola", which include volume connectors and flow connectors respectively, but I am still not sure about the difference between these two kinds of connectors.
My question is:
Could someone explain the philosophy behind using these two kinds of connectors in thermo-hydraulic systems, and how I should handle them in the code of each component so that they work as designed?
Here is a very short and simplified explanation applying to thermo-hydraulic systems.
In flow models (pipes, valves, etc.) enthalpy is unchanged, and mass flow and pressure drop are related by a static equation.
In volume models pressure and enthalpy are dynamic state variables, that is, mass and energy conservation is "elastic".
As a rule of thumb, you should build thermo-hydraulic system models from alternating flow and volume models (a staggered-grid scheme) to decouple the nonlinear equation systems.
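In much simplified form (lumped models, ignoring details such as flow reversal, and with symbols that are mine): a flow model imposes the static relations $\dot{m} = f(p_{in} - p_{out})$ and $h_{out} = h_{in}$, while a volume model adds the dynamic balances $\frac{dM}{dt} = \sum_j \dot{m}_j$ and $\frac{dU}{dt} = \sum_j \dot{m}_j h_j$. Each volume thus supplies the pressure and enthalpy states that its neighbouring flow models read, and each flow model supplies the mass flows that its neighbouring volumes integrate, which is what breaks the large nonlinear system apart.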
For the dynamic pipe model in the top figure of your post, the connectors merely indicate that, internally, the pipe model begins with a volume model and ends with a flow model.
Claytex has a nice blog post on the subject here https://www.claytex.com/blog/how-to-avoid-computationally-expensive-fluid-networks-in-dymola/
The authors of the Modelica Buildings Library have also made a great effort to explain this in various papers. See e.g. https://buildings.lbl.gov/publications/simulation-speed-analysis-and
These kinds of connectors are indeed identical, which follows from the Modelica language specification. You can only connect two connectors that are interchangeable, i.e. that have exactly the same number and types of flow and potential variables. At every connection node, all flow variables have to sum to zero and all potentials have to be equal, so the connectors have to be type-consistent.
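For example, when three connectors $a$, $b$ and $c$, each carrying a potential $p$ and a flow variable $\dot{m}$, meet at a node, the connections generate $a.p = b.p = c.p$ and $a.\dot{m} + b.\dot{m} + c.\dot{m} = 0$, irrespective of which of them are nominally volume or flow ports.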
The difference is purely informational, for the modeler or for someone trying to understand the model, and all components have been designed with this convention in mind. It is easiest to understand with electrical components, where you have positive and negative pins which indicate the direction in which the current is expected to flow, but this is never actually enforced. Positive and negative pins are, apart from their names, identical.
Although I don't know the connectors you are talking about, I would assume that the VolumePort is the connector of something that has a volume and passes that information along, whereas the FlowPort passes the information about the mass flow rate, usually of a pipe, I guess. Broken down to abstract DAE theory, one could say the names indicate whether the potential or the flow variable is considered unknown for the component.
I have to emphasize that these are only indicators and that this is never actually enforced by the model or the compiler. It is just how things should logically resolve in the end if you respect the restriction of only connecting VolumePort to FlowPort connectors.
I need to make a Simulink block which receives a concatenation of a number of bus signals and performs the same math operations on the signals contained in the bus for each pair of consecutive buses. The bus signals are of the same type and are non-virtual.
For the sake of the question, let's assume we have a concatenation of 4 simple buses, each containing an x and a y field. A bus of signals composed of a = x1 + x2 and b = y1 - y2 needs to be made out of input buses 1,2 and 3,4. So the output of the block should be a concatenation of 2 buses, the first containing information from the first pair of input buses, and the second from the second pair.
A hard-to-scale way to do it is the following.
Are there any built-in bus math operation possibilities, or better ways to implement this? I could not find anything in the MathWorks documentation, and simple math operation blocks generate incompatibility errors.
You need to use the For Each Subsystem block, as shown in this example. Note that I called the bus BusTest and made the dimensions and data types of the signals visible:
Now set the Signal Width parameter of that block to 2, so that it divides the input array into chunks of length 2:
Then move your logic into that block:
I understand the need for vector clocks: scalar logical clocks fail to provide enough information to tell whether there is an update conflict, for example in a key-value store update.
But I am not sure which problem is still unsolved by vector clocks and then solved by the bulkier matrix clocks.
In an eventually consistent environment, all messages ever created by the system need to be kept until every peer has received them (that is what eventual consistency requires). But you don't want to keep messages forever, so you need a way to tell which messages have been received by all nodes and can therefore be deleted; this is what matrix clocks are used for.
A matrix clock is a list of vector clocks, so you know the current state of each node in the system. Based on this you can tell which messages each peer has already received. When you exchange messages with another node in the system, you compare the matrix clocks and always keep the highest value for each entry. Afterwards you can delete messages which were sent before that, because the node must already have received them.
This is a very brief description of the TSAE (timestamped anti-entropy) protocol. You can read more about it in the PhD dissertation Weak-consistency group communication and membership by Richard Andrew Golding, 1992 (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.7385&rep=rep1&type=pdf), starting from chapter 5.
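To make the pruning rule concrete, here is a minimal sketch of a matrix clock with that check (the names and layout are mine, not taken from the paper):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// mt[k][l] = what this site believes site k knows about site l's
// progress, expressed as a local timestamp of site l.
struct MatrixClock {
    std::vector<std::vector<uint64_t>> mt;

    explicit MatrixClock(std::size_t n)
        : mt(n, std::vector<uint64_t>(n, 0)) {}

    // On message exchange: keep the element-wise maximum, i.e. the
    // highest value known for each (k, l) pair.
    void merge(const MatrixClock& other) {
        for (std::size_t k = 0; k < mt.size(); ++k)
            for (std::size_t l = 0; l < mt.size(); ++l)
                mt[k][l] = std::max(mt[k][l], other.mt[k][l]);
    }

    // A message created by site i at local time t can be deleted once
    // min_k mt[k][i] >= t, i.e. every site must already have received it.
    bool canDiscard(std::size_t i, uint64_t t) const {
        uint64_t m = mt[0][i];
        for (std::size_t k = 1; k < mt.size(); ++k)
            m = std::min(m, mt[k][i]);
        return m >= t;
    }
};
```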
The distinctions among the Lamport clock (scalar logical clock, in your terms), the vector clock, and the matrix clock lie in the fact that they represent different levels of knowledge.
For a vector clock $vt_i[1 \ldots n]$ at site $S_i$, the entry $vt_i[k]$ represents the knowledge site $S_i$ has about site $S_k$. The knowledge has the form "$i$ knows that $k \ldots$".
For a matrix clock $mt_i[1 \ldots n, 1 \ldots n]$ at site $S_i$, the entry $mt_i[k,l]$ represents the knowledge site $S_i$ has about the knowledge $S_k$ has about site $S_l$. The knowledge here has the form "$i$ knows that $k$ knows that $l \ldots$".
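For instance, $vt_i[k] = 7$ says "$S_i$ knows that $S_k$ has progressed through at least its first 7 local events", while $mt_i[k,l] = 7$ says "$S_i$ knows that $S_k$ knows that $S_l$ has progressed through at least its first 7 local events".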
Intuitively, we can do more things with more knowledge.
The following description is mainly quoted from [1]:
Vector clocks and matrix clocks are widely used in asynchronous distributed message-passing systems.
Some example areas using vector clocks are checkpointing, causal memory, maintaining consistency of replicated files, global snapshot, global time approximation, termination detection, bounded multiwriter construction of shared variables, mutual exclusion and debugging (predicate detection).
Some example areas that use matrix clocks are designing fault-tolerant protocols and distributed database protocols, including protocols to discard obsolete information in distributed databases, and protocols to solve the replicated log and replicated dictionary problems.
For the matrix clock, we notice that
$\min_k(mt_i[k,i]) \ge t$ means that site $S_i$ knows that every other site $S_k$ knows $S_i$'s progress up to its local time $t$.
It is this property that allows a site to stop sending information with a local timestamp $\le t$, or to discard such obsolete information.
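For example, with $n = 3$ sites, if the first column of $S_1$'s matrix clock reads $mt_1[1,1] = 7$, $mt_1[2,1] = 5$, $mt_1[3,1] = 6$, then $\min_k(mt_1[k,1]) = 5$: every site is known to have seen $S_1$'s events up to local time 5, so $S_1$ can discard any message it created with a timestamp $\le 5$.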
[1] Ajay D. Kshemkalyani. Concurrent Knowledge and Logical Clock Abstractions. 2000.