I am using OMNeT++ as my simulation engine for an arbitrary network topology simulation. I have created different custom OMNeT++ modules to simulate the different entities in my simulation. I am also using OMNeT++ signals and statistics for result gathering.
Can I collect data originating from different modules via separate signals, but have it gathered, processed, and recorded in the output file by the same statistic?
I know I could probably get away with registering and using a separate statistic per module, but since the documentation states that collection and recording happen at a higher level in the OMNeT++ module hierarchy, and thus across different instances of a module, I think this should be possible.
It turns out I can get the intended result by retrieving a reference to the module instance that created the statistic and signal, and emitting the value I want through it, even when handling an event in a different module.
A relevant code snippet:
// retrieve the module instance that owns the signal and the statistic
auto ref = dynamic_cast<ModuleClass *>(getParentModule()->getSubmodule("ModuleName"));
if (ref == nullptr) {
    // instance retrieval failed; handle the error, e.g. by throwing
    throw cRuntimeError("Submodule 'ModuleName' not found");
}
// emit through the owning module, even though this code runs in a different module
ref->emit(ref->relevantSignal, ValueToEmit);
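For context, a minimal sketch of the owning module, assuming it registers the signal itself; the class and signal names are carried over from the snippet above, and the @statistic that records the signal would be declared in this module's NED file:

#include <omnetpp.h>
using namespace omnetpp;

class ModuleClass : public cSimpleModule
{
  public:
    simsignal_t relevantSignal;   // public so other modules can emit through this instance

  protected:
    virtual void initialize() override {
        // register the signal once; the matching @statistic lives in
        // this module's NED declaration, so recording stays configurable
        relevantSignal = registerSignal("relevantSignal");
    }
};

Define_Module(ModuleClass);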
Since the generator generates test stimuli for the DUT (Design Under Test), why not feed them directly? What is the need for a driver? Please enlighten me with an example if possible.
UVM is based on transaction-level modeling (TLM). This means that you can model things like memory reads and writes at a high level of abstraction using a transaction object. For example, a memory transaction can be represented by the following (see the sketch after the list):
Data value
Address
Direction (read or write)
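For illustration, here is a minimal sketch of such a transaction object as a plain C++ struct; UVM transactions are actually SystemVerilog classes derived from uvm_sequence_item, and the field names here are assumptions:

#include <cstdint>

// One memory access at transaction level: no clocks or wait-states,
// just the information that matters at this level of abstraction.
enum class Direction { READ, WRITE };

struct MemTransaction {
    std::uint64_t address;
    std::uint32_t data;
    Direction     direction;
};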
In UVM, we typically create a uvm_agent which consists of a sequencer, a driver and a monitor. The sequencer plays the role of the transaction generator. You will also create a sequence which is attached to the sequencer to generate several transactions. Each transaction object is sent from the sequencer to the driver.
When the driver receives a transaction object, it looks at the object properties and assigns the transaction values to the interface signals. In the testbench, the interface signals are connected directly to the DUT ports.
The driver contains more details than the sequencer or transaction. It controls the timing of the signals, adding delays, wait-states, etc.
See also chapter 1 of the UVM User's Guide
The UVM was developed using the Separation of Concerns principle from software engineering. In object-oriented programming, this usually means giving a class one basic responsibility and passing data on to another class with another responsibility.
The UVM separates the job of generating stimulus from driving the signals into the DUT, to enable re-use at different levels of abstraction for unit testing at different stages. In the UVM, you will see that even the passing of data is abstracted into a separate set of classes called TLM ports and exports.
An example might be a design that processes an encrypted image received through a serial port. Your design will probably have stages that deserialize the serial input, decrypt it, and finally store the image in memory. Depending on which stage you are testing, you would want to use the same image generator, send the image through a sequence, and have layered sets of sequences that translate it for the particular stage under test.
In:
https://docs.omnetpp.org/tutorials/tictoc/part5/
and
https://doc.omnetpp.org/omnetpp/manual/#sec:simple-modules:declaring-statistics
it's shown how network statistics can be processed after a simulation.
Is it possible to get network parameters dynamically?
TL;DR: Use signals (not statistics), hook up your own simple module to these signals, and compute the required statistics in that module.
You cannot access the value of @statistic measurements in your code, and there is a reason for this: it would be an anti-pattern. NED-based statistics were introduced as a way to add calculations and measurements to your model without modifying your model's behavior and code. This means that statistics are NOT considered part of a model, but rather part of its configuration. Changing a statistic (i.e. deciding that you want to measure something else) should never change the behavior of your model. That's why the actual value of a given statistic is not (easily) exposed to the C++ code. You could dig it out, but doing so is highly discouraged.
Now, this does not mean that what you want to achieve is not legitimate, but then the statistics gathering must be an integral part of your model. I.e. you should not aim to use built-in statistics, but rather create an explicit statistics-gathering submodule that subscribes to the necessary signals (https://doc.omnetpp.org/omnetpp/manual/#sec:simple-modules:subscribing-to-signals) and does the actual statistics computation you need in its C++ code. After that, other modules are free to access this information and make decisions based on it.
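A minimal sketch of such a statistics-gathering module, assuming a double-valued signal named "queueLength" is emitted somewhere in the model (both the module and signal names are placeholders):

#include <omnetpp.h>
using namespace omnetpp;

class StatsCollector : public cSimpleModule, public cListener
{
  protected:
    double sum = 0;
    long count = 0;

    virtual void initialize() override {
        // subscribing at the system module level catches the signal no
        // matter which module in the network emits it
        getSimulation()->getSystemModule()->subscribe("queueLength", this);
    }

    virtual void receiveSignal(cComponent *source, simsignal_t signalID,
                               double value, cObject *details) override {
        sum += value;    // update the running statistic
        count++;
    }

    virtual void finish() override {
        recordScalar("avgQueueLength", count > 0 ? sum / count : 0);
    }
};

Define_Module(StatsCollector);

Other modules can then call a getter on this module to read the computed values at runtime.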
From what I read, aggregates must only contain properties that are used to protect their invariants.
I also read that sagas can be aggregates, which makes sense to me.
Now I modeled a registration process using a saga: on a RegistrationStarted event it sends a ReserveEmail command, which triggers either an EmailReserved or an EmailReservationFailed event depending on whether the email is free. A listener will then either send a validation link or a message telling the user an account already exists.
I would like to use data from the RegistrationStarted event in this listener (say the IP and user-agent). How should I do it?
Storing these data in the saga? But they’re not used to protect invariants.
Pushing them through ReserveEmail command and the resulting event? Sounds tedious.
Project the saga to the read model? What about eventual consistency?
Another way?
Rinat Abdullin wrote a good overview of sagas / process managers.
The usual answer is that the saga has copies of the events that it cares about, and uses the information in those events to compute the command messages to send.
List[Command] processManager(List[Event] events)
Pushing them through ReserveEmail command and the resulting event?
Yes, that's the usual approach; we get a list [RegistrationStarted], and we use that to calculate the result [ReserveEmail]. Later on, we'll get [RegistrationStarted, EmailReserved], and we can use that to compute the next set of commands (if any).
Sounds tedious.
The data has to travel between the two capabilities somehow. So you are either copying the data from one message to another, or you are copying a correlation identifier from one message to another and then allowing the consumer to decide how to use the correlation identifier to fetch a copy of the data.
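A minimal sketch of that copying, with assumed event and command shapes (not your actual model):

#include <string>
#include <vector>

struct RegistrationStarted { std::string email, ip, userAgent; };
struct ReserveEmail        { std::string email, ip, userAgent; };

// The process manager computes the next commands from the events it
// has seen so far; the data from RegistrationStarted simply travels
// along by being copied onto the outgoing command.
std::vector<ReserveEmail> processManager(const std::vector<RegistrationStarted>& events)
{
    std::vector<ReserveEmail> commands;
    for (const auto& e : events)
        commands.push_back({e.email, e.ip, e.userAgent});
    return commands;
}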
Storing these data in the saga? But they’re not used to protect invariants.
You are typically going to be storing events in the sagas (to keep track of what has happened). That gives you a copy of the data provided in the event. You don't have an invariant to protect because you are just caching a copy of a decision made somewhere else. You won't usually have the process manager running queries to collect additional data.
What about eventual consistency?
By their nature, sagas are always going to be "eventually consistent"; the "state" of an instance of a saga is just cached copies of data controlled elsewhere. The data is probably nanoseconds old by the time the saga sees it, there's no point in pretending that the data is "now".
If I understand correctly, I could model my saga as a Registration aggregate storing all the events whose correlation identifier is its own identifier?
Udi Dahan, writing about CQRS:
Here’s the strongest indication I can give you to know that you’re doing CQRS correctly: Your aggregate roots are sagas.
As I am reading some CQRS resources, there is a recurrent point I do not get. For instance, let's say a client emits a command. This command is handled by the domain, so it can refresh its domain model (DM). On the other hand, the command is persisted in an event store. That is the most common scenario.
1) When we say the DM is refreshed, I suppose the data is persisted in the underlying database (if any). Am I right? Otherwise, we would be dealing with a memory-transient model, which, I suppose, would not be a good thing? (State is not supposed to remain in memory on the server side outside a client request.)
2) If data is persisted, I suppose the read model that relies on it is automatically updated, as each client that requests it generates a new "state/context" in the application (in the case of a web application or a RESTful architecture)?
3) If the command is persisted, does that mean we are dealing with Event Sourcing (by construction, when we use CQRS)? Does Event Sourcing invalidate the database update process? (If state is reconstructed from the event store, maintaining the database seems useless.)
Does CQRS only apply to multi-database systems (where data is propagated across separate databases), and, if it deals with memory-transient models, does it fit well with web applications or RESTful services?
1) As already said, the only things that are really stored are the events.
The only thing commands do is perform consistency checks prior to raising events. In pseudo-code:
public void BorrowBook(BorrowableBook dto) {
    if (dto is valid)
        RaiseEvent(new BookBorrowedEvent(dto))
    else
        throw exception
}

public void Apply(BookBorrowedEvent evt) {
    this.aProperty = evt.aProperty;
    ...
}
Current state is retrieved by applying the events sequentially. Because of this, you have to pay great attention in the design phase, as there are common pitfalls to avoid (maybe you have already read it, but let me suggest this article by Martin Fowler).
So far so good, but this is just Event Sourcing. CQRS comes into play if you decide to use a different database to persist the state of an aggregate.
In my project we have a projection that, every x minutes, applies the new events (from the event store) to the aggregate and saves the results to a separate instance of MongoDB (the presentation layer accesses this DB for reading). This model is clearly eventually consistent, but this way you really separate the Command (write) side from the Query (read) side.
2) If you have decided to divide the write model from the read model, there are various options you can use to keep them synchronized:
Every x seconds, apply the events since the last checkpoint (some solutions offer snapshots to avoid reapplying heavy streams of events)
A projection that subscribes to events and updates the read model as soon as each event is raised (see the sketch below)
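A minimal sketch of the second option, with an assumed event shape and an in-memory stand-in for the read database:

#include <map>
#include <string>

struct BookBorrowedEvent { std::string bookId, borrower; };

class BookListProjection
{
    std::map<std::string, std::string> readModel;   // bookId -> borrower

public:
    // Called by the event bus for every newly raised event; the read
    // model is a denormalized view optimized for queries.
    void onEvent(const BookBorrowedEvent& evt) {
        readModel[evt.bookId] = evt.borrower;
    }
};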
3) The only things stored are the events. In fact we have an event store, not a command store :)
Is the database useless? It depends! How many events do you need to reapply to bring the aggregate to its current state?
Three? Maybe you don't need a database for the read model.
The thing to grok is that the ONLY things stored are the events*. The domain model is rebuilt from the events.
So yes, the domain model is memory-transient, as you say, in that no representation of the domain model is stored*; only the events that happened to the domain to put the model in its current state.
When an element from the domain model is loaded, a new instance of the element is created, and then the events that affect that instance are replayed one after the other, in the right order, to put the element into the correct state.
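A minimal sketch of that replay, with assumed types for illustration:

#include <memory>
#include <vector>

struct Event { virtual ~Event() = default; };

class Element
{
public:
    // Mutates state according to the concrete event type.
    void apply(const Event& evt) { /* ... */ }
};

// Rebuild an element by replaying its event stream in stored order.
Element load(const std::vector<std::unique_ptr<Event>>& stream)
{
    Element element;               // fresh, empty instance
    for (const auto& e : stream)
        element.apply(*e);         // one event at a time, oldest first
    return element;
}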
You could keep instances of your domain objects around, subscribed to new events, so that they can be kept up to date without loading them from all the events every time; but usually it's quick enough just to load all the events from the database and apply them every time, the same way you might load the instance from the database on every call to your web service.
*Unless you have snapshots of your domain object to reduce the number of events you need to load/process
Persistence of data is not strictly needed. It might be sufficient to have enough copies in enough different locations (e.g. GigaSpaces). So no, a database is not required. This is (or at least was, a few years ago) used in production by the Dutch equivalent of eBay.
Short version
I am considering using BusObjects to implement hard interface control in a (large industrial) application using Simulink, and I would like to store the BusObjects (hundreds of them) in a MATLAB structure so that the entire application interface specification is well organized. However, it seems that BusObjects can't be contained in structures, nor can they reside in any workspace other than the MATLAB base workspace. Any idea how to handle this?
Long version
I would like the interface specification to be hierarchical and centralized in some way. I mean, I would like to specify the external interface of my application, then the internal interfaces, then the internal interfaces of the internal interfaces, and so on. And I would like this information to be stored in one object that resembles the hierarchy. I was thinking of using a structure with BusObjects as elements.
Unfortunately, it seems that, for a bus object to work, it must be declared in the MATLAB base workspace as an independent variable of the bus object class. It can't be an element of a structure, an element of a cell array whose elements are BusObjects, or an element of a BusObject vector.
Any suggestion on how to handle this? Take into account that if you have a model with dozens and dozens of blocks and more than 3 hierarchy levels, you end up with hundreds of bus objects in the MATLAB workspace without any particular structure... I think that is too messy to leave as is...
Bus objects are always stored in the MATLAB base workspace.
Send a request to MathWorks if you want this to change.