Run Matlab commands with reduced priority

I frequently encounter the following problem: I start a time-consuming (sometimes parallelized) script, and while that script is running Matlab becomes very slow and unresponsive (I would like to keep editing files). I suspect that part of the problem is that the running script is eating up all available CPU capacity.
Hence my question: is there a way to start all commands from within Matlab with a reduced process priority, while not reducing the priority of the Matlab GUI from which these processes are started? I'd be interested in solutions for Windows & Linux.
E.g., on Linux I know I can increase the niceness of the sub-processes using renice on the command line, but I obviously do not want to do this manually each time. I also checked whether there is a way to start the parallel worker threads with a modified priority, but I could not find anything in the documentation. Ideally - as in many other IDEs - there would be a setting somewhere in Matlab to configure how commands are run, and I would change it from matlab ... to nice -10 matlab ....
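One workaround on Linux, sketched below, is to renice only the pool workers from within Matlab, leaving the GUI process untouched. This is not a built-in setting: it assumes a recent release with parpool/parfevalOnAll from the Parallel Computing Toolbox, and it relies on the undocumented feature('getpid'):

parpool;                                        % start the worker pool
f = parfevalOnAll(@() system(sprintf( ...
    'renice -n 10 -p %d', feature('getpid'))), 0);
wait(f);                                        % workers now run at niceness 10

The client session keeps its normal priority, so the editor stays responsive; only the heavy computations on the workers inherit the lower priority.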

Related

Matlab library code on cluster - not using more than one core/thread?

I am running Matlab on a university cluster. The code has no parfor loops but makes extensive use of vectorized code, so when I run it on my local machine it often uses several threads.
However, on the cluster, even though I allocate 76 cores to the program, it never uses more than 1.
I am not sure if there is any specific instruction I need to add to the beginning of the code or to the sbatch command.
Any ideas?
You can use maxNumCompThreads to control the number of computational threads MATLAB will use.
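A minimal sketch, assuming the job is submitted with sbatch so that Slurm sets the SLURM_CPUS_PER_TASK environment variable; put this at the top of the script:

n = str2double(getenv('SLURM_CPUS_PER_TASK'));   % cores allocated by sbatch
if ~isnan(n)
    maxNumCompThreads(n);                        % raise MATLAB's thread limit to match
end

Also note that some cluster installations start MATLAB with the -singleCompThread flag, which would pin it to one computational thread regardless of how many cores you request.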

Multiple Simulation Runs (OMNeT++)

I implemented a 100 km long highway scenario using the Veins framework for OMNeT++.
In order to get more reliable results, how many simulation runs are required for each set of experiments?
How can we define and control the number of simulation runs?
Quicker simulations:
You can make your simulations run faster in 3 possible ways:
run SUMO without the GUI: when starting ./sumo-launchd.py, specify sumo instead of sumo-gui as the command,
run simulations using Cmdenv and not Tkenv,
compile your Veins project code in release mode. You can achieve that by running:
make MODE=release -j <number-of-cores>
These steps can improve simulation run time by up to 50%.
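For example, a release build run under Cmdenv might be launched like this (the ./run wrapper script and the config name Highway are assumptions about your project layout):

./run -u Cmdenv -c Highway omnetpp.ini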
In the Veins FAQ you have the following questions:
I've launched a simulation in the OMNeT++ Tkenv; why is it running so awfully slow?
I've launched a simulation in the OMNeT++ Cmdenv; can I speed it up further?
There are some suggestions given in the FAQ which might help you run simulations quicker.
Number of simulation runs:
As far as the number of simulation runs is concerned, it is hard to tell. You can use confidence intervals for your results to see how fine-grained they are; in any case, I would suggest starting with 5 repetitions.
Automatic control of simulation runs:
This can be accomplished using the repeat parameter in the .ini file, as explained here.
On how to do that from the OMNeT++ IDE follow this answer (note the comments as well).
To run parallel simulations through the command line, follow this answer.
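As a minimal sketch, the relevant part of omnetpp.ini could look like this (the config name Highway is an assumption; seed-set = ${repetition} gives each repetition its own set of RNG seeds):

[Config Highway]
repeat = 5
seed-set = ${repetition}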
a) This is an open-ended question, as you have not defined what 'more reliable' means. To get a more reliable result you need more runs; that's all that can be said.
b) use repeat = 2 in the ini file to get two repetitions
I also suggest reading the corresponding chapter of the manual:
https://omnetpp.org/doc/omnetpp/manual/usman.html#sec341
(Chapter 10 is also related to your question)

How to fully use the CPU in Matlab [Improving performance of a repetitive, time-consuming program]

I'm working on an adaptive and fully automatic segmentation algorithm for varying light conditions. The core of this algorithm uses Particle Swarm Optimization (PSO) to tune the fuzzy system, and believe me, it's very time-consuming: for only 5 particles and 100 iterations I have to wait 2 to 3 hours, and that's just to process one image from my data set of over 100 photos!
I'm using Matlab R2013 with an Intel Core i7-2670QM @ 2.2 GHz, 8.00 GB RAM, and a 64-bit operating system.
The problem is: when starting the program it uses only 12%-16% of my CPU, and only one core is working!
I've searched a lot and came into matlabpool so I added this line to my code :
matlabpool open 8
now when I start the program the Task Manager shows 98% CPU usage, but only for a few seconds! After that it drops back to 12-13% CPU usage.
Do you have any idea how I can get this code to run faster?
12 percent sounds like Matlab is using only one thread/core, and that one at full load, which is normal.
matlabpool open 8 is not enough; this simply opens the workers. You have to use commands like parfor to assign work to them.
Further to Daniel's suggestion, ideally to apply PARFOR you'd find a time-consuming FOR loop in your algorithm where the iterations are independent and convert that to PARFOR. Generally, PARFOR works best when applied at the outermost level possible. It's also definitely worth using the MATLAB profiler to help you optimise your serial code before you start adding parallelism.
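A sketch of what that conversion usually looks like, where numImages, imageFiles and processOneImage are placeholders for your own data and code:

matlabpool open 8                      % R2013-era syntax; newer releases use parpool
results = cell(1, numImages);
parfor k = 1:numImages                 % iterations must be independent of each other
    results{k} = processOneImage(imageFiles{k});   % hypothetical per-image function
end
matlabpool close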
With my own simulations I find that I cannot recode them using parfor; the for loops I have are too intertwined to take advantage of multiple cores.
HOWEVER:
You can open a second (and third, and fourth, etc.) instance of Matlab and tell each additional instance to run another job. Each instance of Matlab will use a different core, so if you have a quad-core you can have 4 instances open and get 100% utilization by running code in all 4.
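For example, on Linux you could launch the extra instances in the background, one script per core (the script names job_part1 and job_part2 are placeholders):

matlab -nodesktop -nosplash -r "job_part1; exit" &
matlab -nodesktop -nosplash -r "job_part2; exit" &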
So, I gained efficiency by having multiple instances of Matlab open at the same time, each running a job. My jobs took 8 to 27 hours at a time, and as one might imagine, without liquid cooling I burnt out my CPU fan and had to replace it.
Also, do look into optimizing your Matlab code; I recently optimized mine and now it runs 40% faster.

Determine the Maximum Number of Processors Available for matlabpool (MATLAB Parallel Toolbox)

I'm currently writing some code in MATLAB that uses the parfor loop to speed up some tedious calculations.
My issue is that the code will be run on a remote cluster, and could be run on 4-core, 8-core or 12-core machines (I won't know which one in advance)...
I basically need a code snippet that will allow MATLAB to determine the maximum number of cores that can be used in matlabpool. If we call this variable maxcores, I can then go ahead and use
matlabpool('open',maxcores).
so that I can make sure that I am using all the cores that are available to me.
You can get the number of cores on the machine through feature('numCores'), which is undocumented but seems unlikely to break. (source)
Someone claims there that getNumberOfComputationalThreads also works since R2007a, but it doesn't on my R2012a.
Beyond Dougal's response, I found getenv('NUMBER_OF_PROCESSORS') returns the number of threads on my Windows systems.
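Putting the two suggestions together, a short sketch (feature('numCores') is undocumented, and NUMBER_OF_PROCESSORS is Windows-only and counts threads rather than cores):

maxcores = feature('numCores');                             % undocumented core count
if isempty(maxcores)                                        % defensive fallback
    maxcores = str2double(getenv('NUMBER_OF_PROCESSORS'));  % Windows-only
end
matlabpool('open', maxcores);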

Gforth parallel processing

I have written a Forth Mandelbrot fractal plotter, and as much a technical exercise as anything else, I would like to try to speed it up with some parallel processing.
For the time being I would be happy if I could just use both of my cores (have one core do one half of the image and the other the other half).
I have noticed that Windows XP will quite happily manage two instances of Gforth and try to use as much processor capacity as possible, so running two processes could be a start. However, I am not sure if they can share memory, or if they can both write to a file at the same time (or how to tell one process to start writing at x bytes from the start of the file).
In summary, how can I do parallel processing using Gforth on Windows XP?
You could have each program do a grid of pixels rather than a single pixel, and then recombine them at the end.
AFAIK, pixels in Mandelbrot sets are independent of each other (someone correct me if I am wrong); however, the computation time for each pixel varies unpredictably, which makes it hard to parallelize properly without some kind of central dispatch thread (and then you run into potential problems with contention).
See GForth Pipes.