In:
https://docs.omnetpp.org/tutorials/tictoc/part5/
and
https://doc.omnetpp.org/omnetpp/manual/#sec:simple-modules:declaring-statistics
it's shown how network statistics can be processed after a simulation.
Is it possible to get network parameters dynamically?
TL;DR: Use signals (not statistics), hook your own simple module up to those signals, and compute the required statistics in that module.
You cannot access the value of @statistic properties in your code, and there is a reason for this: it would be an anti-pattern. NED-based statistics were introduced as a way to add calculations and measurements to your model without modifying the model's behavior or code. This means that statistics are NOT considered part of a model; rather, they are considered configuration. Changing a statistic (i.e. deciding that you want to measure something else) should never change the behavior of your model. That's why the actual value of a given statistic is not (easily) exposed to the C++ code. You could dig it out, but doing so is highly discouraged.
Now, this does not mean that what you want to achieve is not legitimate, but then the statistics gathering must be an integral part of your model. That is, you should not aim to use the built-in statistics, but rather create an explicit statistics-gathering submodule that subscribes to the necessary signals (https://doc.omnetpp.org/omnetpp/manual/#sec:simple-modules:subscribing-to-signals) and performs the statistics computation you need in its C++ code. After that, other modules are free to access this information and make decisions based on it.
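A minimal sketch of such a collector (the module name StatsCollector and the signal name "queueLength" are hypothetical; substitute whatever signal your modules emit):

#include <omnetpp.h>
using namespace omnetpp;

class StatsCollector : public cSimpleModule, public cListener
{
  private:
    double sum = 0;
    long count = 0;
  protected:
    virtual void initialize() override {
        // Subscribing at the system module level means the collector sees
        // the signal no matter which submodule emits it.
        getSimulation()->getSystemModule()->subscribe("queueLength", this);
    }
  public:
    virtual void receiveSignal(cComponent *source, simsignal_t signalID,
                               double value, cObject *details) override {
        sum += value;
        count++;
    }
    // Other modules can look this module up and query the running statistic.
    double getMean() const { return count > 0 ? sum / count : 0; }
};

Define_Module(StatsCollector);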
Related
I am constructing an experiment in AnyLogic, which saves data in the Parameter Variation tab under a custom-class list. The model needs to perform a lot of simulations and repetitions to optimize setting variables in the model itself. After x iterations, I use a Python connector to run some code that finds new candidate parameters for the underlying model.
The problem I am having right now is that around simulation run number 200, memory usage hits its maximum (4 GB) and everything proceeds to run extremely slowly. I have found some interesting ways to cut memory usage, but I believe only one thing can help me at this point: letting the system release the memory used by past iterations. The data of each simulation is stored after the iteration finishes, so I am fine with AnyLogic deleting the logs of that specific simulation afterwards.
Is such a thing possible? If so, how can I implement that?
Java uses a garbage collector to manage memory, and you have no direct control over it. Every now and then, based on its internal logic, it collects and removes all objects in memory that are no longer reachable through any active reference.
Thus, to reduce memory usage you must ensure that instances which are no longer needed are not referenced by any of the objects currently active in your model.
To identify these, use a Java profiler like JProfiler or one of the free alternatives.
This will show you exactly which classes are using up your memory, and with some deep diving you should be able to identify what is keeping references to them.
I'm doing x86-64 binary obfuscation research, and fundamentally one of the key challenges in the offense/defense cat-and-mouse game of executing a known-bad program and detecting it (even when obfuscated) is system call sequence analysis.
Put simply, obfuscation is just achieving the same effects on the system through a different sequence of instructions and memory states, in order to minimize observable analysis channels. But at the end of the day, you need to execute certain system calls in a certain order to achieve certain input/output behaviors for a program.
Or do you? The question I want to study is this: could the intended outcome of some or all system calls be achieved through different system calls? Let's say system call D, when executed 3 times consecutively with certain parameters, can be heuristically attributed to malicious behavior. If system calls A, B, and C could be found to achieve the same effect desired from system call D (perhaps with additional side effects), then it would be possible to evade kernel hooks designed to trace and heuristically analyze system call sequences.
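As a hypothetical illustration of that overlap (paths and buffers are placeholders, error checks omitted): the same observable outcome, a file ending up with given contents, can be reached either through write() or through an mmap()-based sequence that never issues a write() at all.

#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <cstring>

// Sequence 1: open + write + close
void viaWrite(const char *path, const char *buf, size_t len) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    write(fd, buf, len);
    close(fd);
}

// Sequence 2: open + ftruncate + mmap + memcpy + msync + munmap + close.
// The bytes reach the file through the page cache, so a hook that only
// watches for write() never fires.
void viaMmap(const char *path, const char *buf, size_t len) {
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    ftruncate(fd, (off_t)len);
    char *p = (char *)mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    memcpy(p, buf, len);
    msync(p, len, MS_SYNC);
    munmap(p, len);
    close(fd);
}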
To determine how often this system call outcome overlap exists in a given OS, I don't want to rely on documentation and manual analysis, for a few reasons:
undocumented behavior
lots of work, repeated for every OS and even different versions
So instead, I'm interested in performing black-box analysis to fuzz system calls with various arguments and observe the effects. My problem is that I'm not sure how to measure the effects. Once I execute a system call, what mechanism could I use to observe exactly which changes result from it? Is there any reliable way, aside from completely iterating over entire forensic snapshots of the machine before and after?
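For the fuzzing half, a minimal sketch of what a driver might look like (the syscall-number bound and the blind random arguments are placeholder assumptions, and the observation step is exactly the open question above; run anything like this only in a disposable VM):

#include <unistd.h>
#include <sys/syscall.h>
#include <cerrno>
#include <cstdio>
#include <random>

int main() {
    std::mt19937_64 rng(12345);  // fixed seed keeps runs reproducible
    for (int i = 0; i < 1000; i++) {
        long nr = (long)(rng() % 335);  // hypothetical x86-64 syscall-number range
        long a1 = (long)rng(), a2 = (long)rng(), a3 = (long)rng();
        errno = 0;
        long ret = syscall(nr, a1, a2, a3);
        printf("syscall(%ld, %#lx, %#lx, %#lx) = %ld (errno=%d)\n",
               nr, a1, a2, a3, ret, errno);
        // TODO: observe side effects here -- the open problem. A real harness
        // would also fork a child per call so that exit()-like syscalls
        // don't kill the driver.
    }
    return 0;
}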
I am using OMNeT++ as my simulation engine for an arbitrary network topology simulation. I have created different custom OMNeT modules to simulate different entities in my simulation. I am also using OMNeT signals and statistics for result gathering.
I am wondering whether I can collect data originating from different modules via separate signals, but have it gathered, processed, and recorded into the output file by the same statistic?
I know I could probably get away with just registering and using separate statistics per module, but since the documentation states that the resulting collection and recording happen at a higher level in the OMNeT++ inheritance hierarchy, and thus across different instances of a module, I am thinking that this should be possible.
So it turns out I can get the intended result by retrieving a reference to the module instance that declared the statistic and signal, and emitting the value I want through it, even when handling an event in a different module.
A relevant code snippet is below:
auto *ref = dynamic_cast<ModuleClass *>(getParentModule()->getSubmodule("ModuleName"));
if (ref == nullptr)
{
    // the submodule was not found (or has the wrong type) -- handle the
    // error here, e.g. by throwing a cRuntimeError
}
ref->emit(ref->relevantSignal, ValueToEmit);
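For what it's worth, OMNeT++ also provides check_and_cast<>, which folds the null check into the cast and throws a runtime error if the submodule is missing or has the wrong type:

auto ref = check_and_cast<ModuleClass *>(getParentModule()->getSubmodule("ModuleName"));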
I have a model in Modelica, and I use Dymola to compile it. In my model I need the simulation setting "Output Interval length". I have searched for it but could not find any useful information. Is there any other possible way to access this simulation information?
If you are simply trying to get the results reported at specific intervals, you can use the sample operator to achieve that. It forces the solution to be computed at specific times without directly specifying something like the time step.
The important point to understand here is that a model whose behavior depends on the numerical integration is highly suspect, and I've never seen a case where the behavior couldn't be described without knowledge of the solution method. Said another way, "mother nature" doesn't know anything about "time steps". :-)
You could use a clocked system with an integrator.
For an example, see File --> Libraries --> Modelica_Synchronous --> Examples --> Systems --> Controlled_mixing_unit in Dymola.
There, the period (in this case the time step of the explicit Euler method) is a parameter of the periodic clock.
Modelica by design prohibits accessing numerical solver internals, so you cannot read it from within the model. The output interval length also cannot be reliably determined by the model, since the solver may take internal steps longer than the output interval and then interpolate values for the result file.
You could create a function that reads the dsin.txt file and extracts that information.
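A rough sketch of that idea (it assumes the Dymola convention that dsin.txt annotates each experiment value with a trailing comment such as "# Increment" for the output interval; verify against the file your Dymola version actually writes):

#include <fstream>
#include <sstream>
#include <string>

// Scan dsin.txt for the line whose trailing comment names the experiment
// setting we want, and return the leading numeric value on that line.
double readExperimentSetting(const std::string &dsinPath, const std::string &name) {
    std::ifstream in(dsinPath);
    std::string line;
    while (std::getline(in, line)) {
        if (line.find("# " + name) == std::string::npos)
            continue;
        std::istringstream fields(line);
        double value;
        if (fields >> value)
            return value;
    }
    return -1;  // setting not found
}

// Usage: double interval = readExperimentSetting("dsin.txt", "Increment");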
I read about how FoundationDB does its network testing/simulation here: http://www.slideshare.net/FoundationDB/deterministic-simulation-testing
I would like to implement something very similar, but cannot figure out how they actually implemented it. How would one go about writing, for example, a C++ class that does what they do? Is it possible to do the kind of simulation they do without doing any code generation (as they presumably do)?
Also: how can a simulation be repeated if it contains random events? Each time, the simulation would have to choose new random values and would thus not be the same run as the one before. Maybe I am missing something here... I hope somebody can shed a bit of light on the matter.
You can find a little bit more detail in the talk that went along with those slides here: https://www.youtube.com/watch?v=4fFDFbi3toc
As for the determinism question, you're right that a simulation cannot be repeated exactly unless all possible sources of randomness and other non-determinism are carefully controlled. To that end:
(1) Generate all random numbers from a PRNG that you seed with a known value.
(2) Avoid any sort of branching or conditionals based on facts about the world which you don't control (e.g. the time of day, the load on the machine, etc.), or if you can't help that, then pseudo-randomly simulate those things too.
(3) Ensure that whatever mechanism you pick for concurrency has a mode in which it can guarantee a deterministic execution order.
Since it's easy to mess all those things up, you'll also want to have a way of checking whether determinism has been violated.
All of this is covered in greater detail in the talk that I linked above.
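To make point (1) concrete, here is a minimal C++ sketch (the "delay" events are made up; the point is that replaying the same seed reproduces the same sequence):

#include <random>
#include <cstdio>

int main() {
    const uint64_t seed = 42;   // record this with each run; replay with the same value
    std::mt19937_64 rng(seed);  // every random decision in the sim draws from this one PRNG

    // Note: std::uniform_real_distribution is deterministic for a given
    // standard library but may differ across implementations; for full
    // portability, derive values from rng() yourself.
    std::uniform_real_distribution<double> delay(0.0, 1.0);

    for (int i = 0; i < 3; i++)
        printf("event %d: delay %.6f\n", i, delay(rng));  // identical on every replay of seed 42
    return 0;
}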
In the sims I've built, the biggest issue with repeatability ends up being proper seed management (as per the previous answer). You want your simulations to give different results only when you supply a different seed to your random number generators than before.
After that, the biggest issue I've seen tends to be making sure you don't iterate over collections with nondeterministic ordering. For instance, in Java you'd use a LinkedHashMap instead of a HashMap.
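The same trap exists in C++; a small sketch, with std::map's sorted iteration standing in for LinkedHashMap's defined ordering:

#include <map>
#include <unordered_map>
#include <string>
#include <cstdio>

int main() {
    // std::unordered_map: iteration order is unspecified and may differ
    // across standard libraries, so any event processing driven by it
    // is a repeatability hazard.
    std::unordered_map<std::string, int> fragile = {{"a", 1}, {"b", 2}, {"c", 3}};

    // std::map: iteration order is defined (sorted by key), so every run
    // visits the entries in the same order.
    std::map<std::string, int> stable(fragile.begin(), fragile.end());
    for (const auto &kv : stable)
        printf("%s -> %d\n", kv.first.c_str(), kv.second);
    return 0;
}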