Multiple Simulation Runs (OMNeT++)

I implemented a 100 km long highway scenario using the Veins framework for OMNeT++.
To get more reliable results, how many simulation runs are required for each set of experiments?
How can we define and control the number of simulation runs?

Quicker simulations:
You can make your simulations run faster in three ways:
run SUMO without the GUI by starting ./sumo-launchd.py with sumo instead of sumo-gui at the end,
run simulations using Cmdenv instead of Tkenv,
compile your Veins project code in release mode. You can achieve that with:
make MODE=release -j <number-of-cores>
These steps can improve simulation run time by up to 50%.
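For concreteness, here is roughly what those three steps can look like; I am assuming a standard Veins example folder where simulations are started via the ./run helper script and SUMO is launched through sumo-launchd.py, and the config name Default is only a placeholder:
./sumo-launchd.py -vv -c sumo
./run -u Cmdenv -c Default
make MODE=release -j 4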
In the Veins FAQ you have the following questions:
I've launched a simulation in the OMNeT++ Tkenv; why is it running so awfully slow?
I've launched a simulation in the OMNeT++ Cmdenv; can I speed it up further?
There are some suggestions given in the FAQ which might help you run simulations quicker.
Number of simulation runs:
As far as the number of simulation runs is concerned, it is hard to give a single answer. You can compute confidence intervals for your results to see how tight they are; in any case, I would suggest starting with 5 repetitions.
Automatic control of simulation runs:
This can be accomplished using the repeat parameter in the .ini file, as explained here (a short sketch follows below).
For how to do that from the OMNeT++ IDE, follow this answer (note the comments as well).
To run parallel simulations through the command line, follow this answer.
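As a rough sketch (the config name Highway is a placeholder and ./run is assumed to be your simulation launcher), the .ini entry could look like this:
[Config Highway]
repeat = 5
seed-set = ${repetition}   # a different seed set for every repetition
and all repetitions can then be executed from the command line, four processes at a time, with OMNeT++'s opp_runall tool:
opp_runall -j 4 ./run -u Cmdenv -c Highway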

a) This is an open-ended question, as you have not defined what 'more reliable' means. To get a more reliable result, you need more runs. That is all that can be said.
b) Use repeat = 2 in the .ini file to get two repetitions.
I also suggest reading the corresponding chapter of the manual:
https://omnetpp.org/doc/omnetpp/manual/usman.html#sec341
(Chapter 10 is also related to your question)

Related

Run Matlab commands with reduced priority

I frequently encounter the following problem: I start a time-consuming (sometimes parallelized) script, and while that script is running Matlab becomes very slow and unresponsive (I would like to keep editing files). I suspect that part of the problem is that the running script is eating up all the CPU capacity.
Hence my question: is there a way to start all commands from within Matlab with a reduced process priority, while not reducing the priority of the Matlab GUI from which these processes are started? I'd be interested in solutions for Windows & Linux.
E.g., on Linux I know I can increase the niceness of the sub-processes using renice on the command line, but I obviously do not want to do so manually each time. I also checked whether there's a way to start the parallel worker threads with a modified priority, but I could not find anything in the documentation. Ideally - as in many other IDEs - there would be a setting somewhere in Matlab where one can configure how to run commands, and I would change it from matlab ... to nice -10 matlab ....
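One workaround I can think of on Linux (only a sketch, and the script name myLongScript is a placeholder) is to keep the interactive Matlab session at its normal priority and push the heavy work into a separate, niced, non-interactive Matlab process:
% start a second Matlab at niceness 10 so this GUI session stays responsive
system('nice -n 10 matlab -nodesktop -nosplash -r "myLongScript; exit" &');
The trailing ampersand returns control to the shell immediately, so the call does not block the interactive session.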

Running Dynare from Matlab

I am new to Dynare++ and I have a really quick question I cannot seem to find the answer to.
What is the difference between these two commands and why is the output different?
!dynare++ --per 50 --sim 3 file_name.mod
dynare file_name.mod
With the first command it is unable to find steady-state values based on my initial values, while with the second it can. Why?
The first command calls the standalone Dynare++ instead of the Matlab-based Dynare. The latter uses Matlab's solvers for numerically finding the steady state. Note that there is a dedicated forum for Dynare at https://forum.dynare.org

Faster way to run a Simulink simulation repeatedly a large number of times

I want to run a simulation which includes SimEvents blocks (so only Normal mode is available for sim) a large number of times, at least 1000. When I use sim it compiles the model every time, and I wonder whether there is another solution that simply reruns the simulation faster. I disabled the Rebuild option in the Configuration Parameters and that does make it faster, but it still takes ages to run around 100 times.
A single simulation run does not take long at all.
Thank you!
It's difficult to say why the model compiles every time without actually seeing the model and what's inside it. However, the Parallel Computing Toolbox provides you with the ability to distribute the iterations of your model across several cores, or even several machines (with the MATLAB Distributed Computing Server). See Run Parallel Simulations in the documentation for more details.
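As an illustration of that approach, here is a minimal sketch using a parfor loop; the model name myModel, the stop time, and the assumption that the runs are independent are placeholders rather than details from the original question:
model = 'myModel';    % placeholder model name
numRuns = 1000;
out = cell(numRuns, 1);
parfor k = 1:numRuns
    % each worker simulates its own copy of the model in Normal mode
    out{k} = sim(model, 'SimulationMode', 'normal', 'StopTime', '10');
end
On recent releases, a Simulink.SimulationInput array together with parsim is the more idiomatic route, but the idea is the same.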

Spawn multiple copies of matlab on the same machine

I am facing a huge problem. I built a complex C application with embedded Matlab functions that I call using the Matlab engine (engOpen() and such ...). The following happens:
I spawn multiple instances of this application on a machine, one for each core
However, the application then slows almost to a halt. In fact, on my 16-core machine, the application slows down by approximately a factor of 16.
Now I realize this is because only a single Matlab engine is started per machine, and all my 16 instances share the same copy of Matlab!
I tried to replicate this with the Matlab GUI and it's the same problem: I run a program in one GUI and it takes 14 seconds; then I run it in two GUIs at the same time and it takes 28 seconds.
This is a huge problem for me, because I will miss my deadline if I have to reprogram my entire C application without Matlab. I know that Matlab has commands for parallel programming, but my Matlab calls are embedded in the C application and I want to run multiple instances of the C application. Again, I cannot refactor my entire C application because I will miss the deadline.
Can anyone please let me know if there is a solution for this (e.g. really starting multiple Matlab processes on the same machine)? I am willing to pay for extra licenses. I currently have fully licensed Matlab installed on all machines.
Thank you so so much!
EDIT
Thank you Ben Voigt for your help. I found that a single instance of Matlab already uses multiple cores: running one instance shows full utilization of 4 cores, and running two copies of Matlab gives full utilization of 8 cores. Hence it is actually running in parallel. However, even though 2 instances seem to take up double the processing power, I still get a 2x slowdown. In other words, two instances produce twice the result while using roughly four times the total compute (twice the cores for twice the time). Why could that be?
Your slowdown is not caused by stuffing all N instances into a single MATLAB instance on a single core, but by the fact that there are no longer 16 cores at the disposal of each instance. Many MATLAB vector operations use parallel computation even without explicit parallel constructs, so more than one core per instance is needed for optimal efficiency.
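If you do want to run one instance per core, you can stop the instances from competing for threads by switching off Matlab's implicit multithreading, either with the -singleCompThread startup flag or from code; a minimal sketch:
% restrict this Matlab (or engine) session to one computational thread
maxNumCompThreads(1);
Whether that helps depends on how much your code benefits from the built-in multithreaded math libraries.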
MATLAB libraries are not thread-safe. If you create multithreaded applications, make sure only one thread accesses the engine application.
I think the Matlab engine is the wrong technique. For Windows platforms, you can try using the COM Automation server, which has the .Single option, which starts one Matlab instance for each COM client you open.
Alternatives are:
Generate C++ code for the functions.
Create a .NET library (MATLAB Builder NE).
Run Matlab via the command line.

How to fully use the CPU in Matlab [Improving performance of a repetitive, time-consuming program]

I'm working on an adaptive, fully automatic segmentation algorithm for varying light conditions. The core of this algorithm uses Particle Swarm Optimization (PSO) to tune the fuzzy system, and believe me, it's very time-consuming: for only 5 particles and 100 iterations I have to wait 2 to 3 hours, and that's just to process one image from my data set of over 100 photos!
I'm using Matlab R2013 with an Intel Core i7-2670QM @ 2.2 GHz, 8.00 GB RAM, and a 64-bit operating system.
The problem is: when I start the program it uses only 12-16% of my CPU and only one core is working!
I've searched a lot and came across matlabpool, so I added this line to my code:
matlabpool open 8
Now when I start the program the Task Manager shows 98% CPU usage, but only for a few seconds; after that it drops back to 12-13% CPU usage.
Do you have any idea how I can get this code to run faster?
12 percent sounds like Matlab is using only one thread/core, and that one at full load, which is normal.
matlabpool open 8 is not enough; this simply opens the workers. You have to use commands like parfor to assign work to them.
Further to Daniel's suggestion, ideally to apply PARFOR you'd find a time-consuming FOR loop in your algorithm where the iterations are independent and convert that to PARFOR. Generally, PARFOR works best when applied at the outermost level possible. It's also definitely worth using the MATLAB profiler to help you optimise your serial code before you start adding parallelism.
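For instance, if the fitness evaluation of the particles is independent, the particle loop is a natural candidate; here is a rough sketch with dummy data, where the quadratic fitness is only a stand-in for the expensive fuzzy-system evaluation:
matlabpool open 8                      % on newer releases: parpool(8)
numParticles = 5;
particles = rand(numParticles, 10);    % dummy particle positions
fitness = zeros(numParticles, 1);
parfor p = 1:numParticles
    % replace this line with your costly fitness evaluation
    fitness(p) = sum(particles(p, :).^2);
end
With only 5 particles, most of the 8 workers would sit idle, so it may pay off more to parallelize over the 100+ images of your data set instead.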
With my own simulations I find that I cannot recode them using parfor; the for loops I have are too intertwined to take advantage of multiple cores.
HOWEVER:
You can open a second (and third, and fourth, etc.) instance of Matlab and tell this additional instance to run another job. Each open instance of Matlab will use a different core. So if you have a quad-core machine, you can have 4 instances open and keep all 4 cores busy by running code in all of them.
So I gained efficiency by having multiple instances of Matlab open at the same time, each running a job. My jobs took 8 to 27 hours at a time, and, as one might imagine, without liquid cooling I burnt out my CPU fan and had to replace it.
Also, do look into optimizing your Matlab code; I recently optimized my code and now it runs 40% faster.