Matlabpool very slow to open workers - matlab

I have just put together a new rig (i7-4770K, 512 GB SSD, 16 GB DDR3-2133 RAM) and installed MATLAB R2013a. When I invoke the matlabpool command, it takes an extremely long time to open each individual worker (the processes you see appear in Task Manager). On my old rig it took about 10 seconds, but my new one takes about a minute. I have tried with just one additional worker and it still takes a long time.
Any help would be appreciated.
Thanks

You can try executing the following to disable one of the newer features of the local scheduler that has caused some people (including myself) problems:
distcomp.feature( 'LocalUseMpiexec', false )
MathWorks changed the way the local scheduler launches the workers in R2010a; this setting reverts to the R2009b behaviour.
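A minimal sketch of applying that workaround just before opening the pool (the worker count of 4 is only an example):

% Disable the mpiexec-based launcher for the local scheduler,
% then time how long the pool takes to start.
distcomp.feature( 'LocalUseMpiexec', false )
tic
matlabpool open 4
toc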

This is indeed quite weird. In fact, the MathWorks website states:
For disk-intensive MATLAB applications or to improve the start-up time of MATLAB, you can take advantage of technologies such as solid-state drives or RAID.
You can also try this toolbox; it may solve your problem.

Related

Spawn multiple copies of matlab on the same machine

I am facing a huge problem. I built a complex C application with embedded MATLAB functions that I call using the MATLAB engine (engOpen() and such). The following happens:
I spawn multiple instances of this application on a machine, one for each core.
However, the application then slows down to a halt. In fact, on my 16-core machine, it slows down approximately by a factor of 16.
I realized this is because only a single MATLAB engine is started per machine, and all 16 of my instances share the same copy of MATLAB!
I tried to replicate this with the MATLAB GUI and it's the same problem: a program that takes 14 seconds in one GUI takes 28 seconds when I run it in two GUIs at the same time.
This is a huge problem for me, because I will miss my deadline if I have to rewrite my entire C application without MATLAB. I know that MATLAB has commands for parallel programming, but my MATLAB calls are embedded in the C application, and I want to run multiple instances of that C application. Again, I cannot refactor my entire C application, because I will miss the deadline.
Can anyone please let me know if there is a solution for this (e.g. a way to really start multiple MATLAB processes on the same machine)? I am willing to pay for extra licenses; I currently have fully licensed MATLAB installed on all machines.
Thank you so much!
EDIT
Thank you Ben Voigt for your help. I found that a single instance of MATLAB already uses multiple cores: running one instance shows full utilization of 4 cores, and running two copies shows full utilization of 8 cores. Hence it actually is running in parallel. However, even though 2 instances seem to take up double the processing power, I still get a 2x slowdown. In other words, 2 instances seem to get twice the result with 4x the total compute power. Why could that be?
Your slowdown is not caused by stuffing all N instances into a single MATLAB process on a single core, but by the fact that there are no longer 16 cores at the disposal of each instance. Many MATLAB vector operations use parallel computation even without explicit parallel constructs, so more than one core per instance is needed for optimal efficiency.
MATLAB libraries are not thread-safe. If you create multithreaded applications, make sure only one thread accesses the engine application.
I think the MATLAB engine is the wrong technique here. On Windows platforms, you can try using the COM Automation Server instead, which has a .Single variant that starts one MATLAB instance for each COM client you open (a sketch follows the list below).
Alternatives are:
Generate C++ code for the functions.
Create a .NET library (MATLAB Builder NE).
Run MATLAB via the command line.
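For what it's worth, here is a hedged sketch of the .Single behaviour, driven from MATLAB itself via actxserver (from C you would create the same COM object through the usual COM APIs; the ProgID shown is the standard single-use one on Windows):

% Each call creates a dedicated, non-shared MATLAB automation server.
h1 = actxserver('matlab.application.single');
h2 = actxserver('matlab.application.single');  % a second, independent process
h1.Execute('a = 1;');  % runs in the first instance only
h2.Execute('b = 2;');  % runs in the second instance only
h1.Quit;
h2.Quit;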

How to fully use the CPU in Matlab [Improving performance of a repetitive, time-consuming program]

I'm working on an adaptive, fully automatic segmentation algorithm for varying lighting conditions. The core of this algorithm uses Particle Swarm Optimization (PSO) to tune the fuzzy system, and believe me, it's very time-consuming: for only 5 particles and 100 iterations I have to wait 2 to 3 hours, and that's just to process one image from my data set of over 100 photos!
I'm using MATLAB R2013 with an Intel Core i7-2670QM @ 2.2 GHz, 8.00 GB RAM, and a 64-bit operating system.
The problem is: when I start the program, it uses only 12%-16% of my CPU and only one core is working!
I've searched a lot and came across matlabpool, so I added this line to my code:
matlabpool open 8
Now when I start the program, Task Manager shows 98% CPU usage, but only for a few seconds! After that it drops back to 12-13% CPU usage.
Do you have any idea how I can get this code to run faster?
12 percent sounds like MATLAB is using only one thread/core, and that one at full load, which is normal.
matlabpool open 8 is not enough; this simply opens the workers. You have to use commands like parfor to assign work to them.
Further to Daniel's suggestion, ideally to apply PARFOR you'd find a time-consuming FOR loop in your algorithm where the iterations are independent and convert that to PARFOR. Generally, PARFOR works best when applied at the outermost level possible. It's also definitely worth using the MATLAB profiler to help you optimise your serial code before you start adding parallelism.
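A minimal sketch of that approach, assuming the outermost loop runs over the image data set and that a hypothetical segmentImage function performs the PSO-tuned segmentation on one photo (the function and folder names are placeholders, not from the original post):

% Open a pool of workers (matlabpool syntax as used in R2013).
matlabpool open 8
files = dir(fullfile('dataset', '*.jpg'));  % the >100 photos
results = cell(numel(files), 1);
% Iterations are independent, so each worker segments a different photo.
parfor k = 1:numel(files)
    img = imread(fullfile('dataset', files(k).name));
    results{k} = segmentImage(img);  % placeholder for the PSO-tuned step
end
matlabpool close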
With my own simulations I find that I cannot recode them using parfor; the for loops I have are too intertwined to take advantage of multiple cores.
HOWEVER:
You can open a second (and third, and fourth, etc.) instance of MATLAB and tell each additional instance to run another job. Each open instance of MATLAB will use a different core. So if you have a quad-core, you can have 4 instances open and get 100% utilization by running code in all 4.
So I gained efficiency by having multiple instances of MATLAB open at the same time, each running a job. My jobs took 8 to 27 hours at a time and, as one might imagine, without liquid cooling I burnt out my CPU fan and had to replace it.
Also, do look into optimizing your MATLAB code; I recently optimized mine and now it runs 40% faster.
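If you would rather script this than open the instances by hand, a hedged sketch for Windows (run_job.m is a hypothetical job script; the matlab startup flags shown are standard):

% Spawn four independent MATLAB processes from an existing session;
% 'start' returns immediately, so all four run concurrently.
for k = 1:4
    system(sprintf('start matlab -nodesktop -nosplash -r "run_job(%d); exit"', k));
end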

Deployed Matlab application using significantly more memory than Matlab scripts

I was testing a stand-alone application we developed in MATLAB when I noticed that its memory usage, according to Windows Task Manager, was peaking several times above 16 GB. I decided to run MATLAB's profiler with profile -memory on on the scripts behind the compiled version to see where the memory peaks were occurring, using the exact same input. However, the highest peak memory it found was 2400860.00 Kb, or about 1/4 as much, for the function that essentially acts as the program's main().
Thus, I was wondering if people have noticed huge memory usage differences between running a compiled MATLAB program and running the original scripts in MATLAB. I noticed it took a lot longer running in MATLAB, but I figured that was due to the profiler keeping track of all of the memory allocations and deallocations, rather than reading and writing to a swap space on disk.
To give a quick answer to this question: yes, compiled MATLAB applications run with more overhead than MATLAB scripts.
This is because deployed MATLAB applications start the MCR (MATLAB Compiler Runtime), a version of MATLAB that is loaded into memory. The MCR runs with more overhead than MATLAB itself.
One thing that I have found useful in situations like this is to recompile and see if that helps at all. If it doesn't, you could try to lower the memory usage by running the calculations in segments.
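A minimal sketch of the "segments" idea, assuming a hypothetical processChunk function and an input array that splits cleanly by rows (all names here are placeholders):

% Process a large array in fixed-size row blocks so that only one
% block's intermediate results are alive at a time, lowering peak memory.
blockSize = 100000;
n = size(data, 1);  % 'data' stands in for the large input
out = zeros(n, 1);
for first = 1:blockSize:n
    last = min(first + blockSize - 1, n);
    out(first:last) = processChunk(data(first:last, :));  % placeholder
end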
This might be helpful for better memory usage: http://www.mathworks.com/help/matlab/matlab_prog/strategies-for-efficient-use-of-memory.html
Source:
http://www.mathworks.com/matlabcentral/newsreader/view_thread/306814
Matlab executable too slow
Comment if you have questions.

Determine the Maximum Number of Processors Available for matlabpool (MATLAB Parallel Toolbox)

I'm currently writing some code in MATLAB that uses the parfor loop to speed up some tedious calculations.
My issue is that the code will be run on a remote cluster, and could be run on 4-core, 8-core or 12-core machines (I won't know which one in advance)...
I basically need a code snippet that will allow MATLAB to determine the maximum number of cores that can be used in matlabpool. If we call this variable maxcores, I can then go ahead and use
matlabpool('open', maxcores)
so that I can make sure that I am using all the cores that are available to me.
You can get the number of cores on the machine through feature('numCores'), which is undocumented but seems unlikely to break. (source)
Someone claims there that getNumberOfComputationalThreads also works since R2007a, but it doesn't on my R2012a.
Beyond Dougal's response, I found getenv('NUMBER_OF_PROCESSORS') returns the number of threads on my Windows systems.
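Putting those together, a hedged snippet (feature('numCores') is undocumented, so the Windows environment variable serves as a guarded fallback):

% Determine the available core count, then open that many workers.
try
    maxcores = feature('numCores');  % undocumented but widely used
catch
    % Fallback on Windows: reports the number of logical processors.
    maxcores = str2double(getenv('NUMBER_OF_PROCESSORS'));
end
matlabpool('open', maxcores);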

How to utilise parallel processing in Matlab

I am working on a time-series-based calculation where each iteration of the calculation is independent. Could anyone share some tips or online primers on utilising parallel processing in MATLAB? How can this be specified inside the actual code?
Since you have access to the Parallel toolbox, I suggest that you first check whether you can do it the easy way.
Basically, instead of writing
for i=1:lots
out(:,i)=do(something);
end
You write
parfor i=1:lots
out(:,i)=do(something);
end
Then, you use matlabpool to create a number of workers (you can have a maximum of 8 on your local machine with the toolbox, and tons on a remote cluster if you also have a Distributed Computing Server license), and you run the code, and see nice speed gains when your iterations are run by 8 cores instead of one.
Even though the parfor route is the easiest, it may not work right out of the box, since you might do your indexing wrong, or you may be referencing an array in a problematic way, etc. Look at the mlint warnings in the editor, read the documentation, and rely on good old trial and error, and you should figure it out reasonably fast. If you have nested loops, it's often best to parallelize only the innermost one and ensure it does tons of iterations - this is not only good design, it also reduces the amount of code that could give you trouble.
Note that especially if you run the code on a local machine, you may run into memory issues (which might manifest as really slow execution in parallel mode because you're paging): every worker gets a copy of the workspace, so if your calculation involves creating a 500 MB array, 8 workers will need a total of 4 GB of RAM - and then you haven't even started counting the RAM of the parent process! In addition, it can be good to use only N-1 cores on your machine, so that there is still one core left for other processes that may run on the computer (such as a mandatory antivirus...).
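For example, a hedged one-liner for the N-1 suggestion (feature('numCores') is undocumented, as noted elsewhere on this page):

% Leave one core free for the OS, antivirus, and the parent MATLAB process.
matlabpool('open', max(1, feature('numCores') - 1));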
MathWorks offers its own Parallel Computing Toolbox. If you do not want to purchase that, there are a few options:
You could write your own MEX file and use pthreads or OpenMP.
However, make sure you do not call any MEX API functions in the parallel part of the code, because they aren't thread-safe.
If you want coarser-grained parallelism via MPI, you can try pMatlab.
The same goes for Parmatlab.
Edit: adding a link: Parallel MATLAB with OpenMP mex files
I have only tried the first.
Don't forget that many MATLAB functions are already multithreaded. With careful programming you may be able to take advantage of them - check the documentation for your version, as MathWorks seem to be increasing the range and number of multithreaded functions with each new release. For example, it seems that R2010a has a multithreaded fft, which may be useful for time series processing.
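As an illustration of leaning on that built-in multithreading (timings are machine- and version-dependent, so treat this as a sketch):

% fft applied to a whole matrix transforms each column using the
% multithreaded built-in; no Parallel Computing Toolbox is needed.
x = randn(2^20, 8);  % 8 time series of about a million samples each
tic; Y = fft(x); toc
maxNumCompThreads    % query how many computational threads MATLAB may use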
If the intrinsic multithreading is not what you need, then, as @srean suggests, the Parallel Computing Toolbox is available. For my money (or rather, my employer's money) it's the way to go, allowing you to program in parallel in MATLAB rather than having to bolt things on. I have to admit, too, that I'm quite impressed by the toolbox and the facilities it offers.