Can I measure the speedup from parallelization in matlab?

If I assume that a problem is a candidate for parallelization, e.g. matrix multiplication or some other problem, and I use an Intel i7 Haswell dual-core, is there some way I can compare a parallel execution to a sequential version of the same program, or will Matlab optimize a program to my architecture (dual-core, quad-core, ...)? I would like to know the speedup from adding more processors, measured with a good parallel benchmark program.

Unfortunately there is no such thing as a benchmark parallel program. If you measure a speedup for one benchmark algorithm, that does not mean that all algorithms will benefit from parallelization.
Since your target architecture has only 2 cores, you might be better off avoiding explicit parallelization altogether and letting Matlab and the operating system optimize the execution. Anyway, here are the steps I followed.
Determine whether your problem is suited to parallelization by calculating the theoretical speedup. Some problems, like matrix multiplication or Gaussian elimination, are well studied. Since I assume your problem is more complicated than that, try to decompose your algorithm into simple blocks and determine, block by block, the benefit of parallelizing each one.
If you find that several parts of your algorithm could profit from parallelization, study those parts separately.
Obtain statistical information about the runtime of your sequential algorithm. That is, run your program X times under similar conditions (and with similar inputs) and average the running times.
Obtain statistical information about the runtime of your parallel algorithm in the same way.
Measure with the profiler. Many people recommend using functions like tic and toc, but the profiler will give you a more accurate picture of your running times, as well as detailed information per function. See the documentation for details on how to use the profiler.
Don't make the mistake of overlooking the time Matlab takes to open the pool of workers (I assume you are working with the Parallel Computing Toolbox). Depending on your number of workers, the pool takes more or less time to start, and on some occasions it can be up to a minute (2011b)!
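A rough sketch of the timing procedure above, assuming the Parallel Computing Toolbox is available; the loop body is just a placeholder workload, and gcp may be matlabpool in older releases:

    % Open the pool first so its startup cost is not charged to the loops.
    pool = gcp;                          % e.g. parpool(2) on a dual-core machine

    nTrials = 10;                        % repeat and average, as suggested above
    n = 200;                             % number of independent tasks (illustrative)
    tSerial = zeros(nTrials, 1);
    tParallel = zeros(nTrials, 1);

    for k = 1:nTrials
        r = zeros(n, 1);

        tic
        for i = 1:n
            A = rand(500);
            r(i) = sum(sum(A * A));      % placeholder workload
        end
        tSerial(k) = toc;

        tic
        parfor i = 1:n
            A = rand(500);
            r(i) = sum(sum(A * A));      % same placeholder workload
        end
        tParallel(k) = toc;
    end

    fprintf('Mean serial:   %.3f s\n', mean(tSerial));
    fprintf('Mean parallel: %.3f s\n', mean(tParallel));
    fprintf('Speedup:       %.2fx\n', mean(tSerial) / mean(tParallel));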

You can try "Run and time" feature on MATLAB.
Or simply put some tic and toc to the first and end of your code, respectively.

Matlab provides a number of timing functions to help you assess the performance of your code: read the documentation here and select the function you deem most appropriate for your case! In particular, be aware of the difference between tic/toc and the cputime function.
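A small illustration of that difference (the matrix size is arbitrary): tic/toc measures elapsed wall-clock time, while cputime sums CPU time across threads, so for a multithreaded operation such as a large matrix multiply the cputime figure can exceed the elapsed time.

    A = rand(4000);                      % arbitrary large matrix

    t0 = cputime;
    tic
    B = A * A;                           % multithreaded under the hood
    wallTime = toc;                      % elapsed (wall-clock) seconds
    cpuTime  = cputime - t0;             % CPU seconds summed over threads

    fprintf('tic/toc : %.3f s\n', wallTime);
    fprintf('cputime : %.3f s\n', cpuTime);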

Related

Parallel Optimization in Matlab: Gradient or Loop

I am optimizing a rather messy likelihood function in Matlab, where I need to run about 1,000 separate runs of the optimization algorithm (fmincon) from different initial points, with something like 32 free parameters.
Unfortunately I cannot parallelize both the 1,000 runs of the optimization algorithm and the computation of the finite-difference gradient at the same time; I must choose one.
Does anyone know whether it is more efficient to parallelize the outer loop, so that each optimization runs on its own core, or the finite-difference gradient computation?
Thanks!
This is impossible to answer exactly without knowing more about your code and/or hardware.
If you have more than 32 cores, then some of them will have nothing to do during a parallel gradient computation, since a finite-difference gradient over 32 parameters only involves a few dozen independent function evaluations. In this case, running the 1,000 optimization runs in parallel might be faster.
On the other hand, computing the gradients in parallel might enable your CPU(s) to use their caches more efficiently, in that there will be fewer cache misses. You may have a look at Why does the order of the loops affect performance when iterating over a 2D array? or What is “cache-friendly” code?.
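For concreteness, here is a rough sketch of the two alternatives, assuming a release with optimoptions and a hypothetical objective function myLikelihood; bounds and constraints are omitted for brevity:

    nStarts = 1000;                      % number of initial points
    nParams = 32;                        % free parameters
    x0 = rand(nStarts, nParams);         % illustrative initial points
    xBest = zeros(nStarts, nParams);
    fBest = zeros(nStarts, 1);

    % Option A: parallelize the outer loop over starting points.
    optsA = optimoptions('fmincon', 'UseParallel', false);
    parfor k = 1:nStarts
        [xBest(k, :), fBest(k)] = fmincon(@myLikelihood, x0(k, :), ...
            [], [], [], [], [], [], [], optsA);
    end

    % Option B: serial outer loop, parallel finite-difference gradient.
    optsB = optimoptions('fmincon', 'UseParallel', true);
    for k = 1:nStarts
        [xBest(k, :), fBest(k)] = fmincon(@myLikelihood, x0(k, :), ...
            [], [], [], [], [], [], [], optsB);
    end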

Neural Nets: computing weight deltas is a bottleneck

After profiling my neural net code I've realized that the method which computes the weight change for each arc in the network (-rate*gradient + momentum*previous_delta - decay*rate*weight), given the gradient, is the bottleneck (55% inclusive samples).
Is there any trick to compute these values in an efficient manner?
This is normal behaviour. I am assuming that you are using an iterative process to solve for the weights at each evolution step (such as backpropagation?). If the number of neurons is large and the training (back-testing) algorithm is short, then it is normal for weight updates like this to consume a large fraction of the compute time during training of the neural network.
Did you get this result using a simple XOR problem or something similar? If so, you will probably find that as you start to solve more complex problems (such as pattern detection in multidimensional arrays, image processing, etc.), those functions will begin to consume an insignificant fraction of the compute time.
If you are profiling, I would suggest profiling with a problem that is closer to the purpose for which the neural network is designed (I am guessing you didn't design it to solve XOR or play tic-tac-toe); you will probably find that optimising code such as -rate*gradient + momentum*previous_delta - decay*rate*weight is more or less a waste of time, at least in my experience.
If you do find that this code is compute-intensive in real-world applications, then I would suggest trying to reduce the number of times this line of code is executed via structural changes. Neural network optimisation is a rich field and I can't possibly give you useful advice from such a broad question, but I will say that if your program is unusually slow, you're unlikely to see significant improvements by tinkering with code at such a low level. I will, however, suggest the following from my own experience:
Consider parallelisation. Many search algorithms, such as those used in back-propagation techniques, are amenable to parallel attempts to improve convergence. Since weight adjustments are identical in terms of computational demand for a given network, think static loops in OpenMP.
Modify the convergence criterion (the critical convergence rate before you stop adjusting weights) so that fewer of these calculations are performed.
Consider an alternative to deterministic solutions such as back-propagation, which is somewhat prone to local optima anyway. Consider Gaussian mutation (all things being equal, Gaussian mutation will 1) reduce the time spent on mutation relative to back-testing, 2) increase convergence time, and 3) be less prone to getting caught in local minima of the error search space).
Please note that this is a non-technical answer to what I have interpreted as a non-technical question.
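That said, if the per-arc update really does dominate in your case, a first step in Matlab-style code is to vectorize the rule from the question so the whole weight matrix is updated at once. A minimal sketch, assuming the weights, gradients, and previous deltas are stored as same-sized matrices (names, sizes, and constants are illustrative):

    rate = 0.01; momentum = 0.9; decay = 1e-4;

    W = randn(256, 128);                 % weights for one layer
    G = randn(256, 128);                 % gradients, same size as W
    prevDelta = zeros(256, 128);         % deltas from the previous step

    % delta = -rate*gradient + momentum*previous_delta - decay*rate*weight,
    % applied element-wise to the whole matrix in one shot.
    delta = -rate .* G + momentum .* prevDelta - decay .* rate .* W;
    W = W + delta;
    prevDelta = delta;                   % keep for the next iteration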

Is it possible to improve speed in ODE solvers from matlab? (ode45 ode15s etc)

I wrote code to solve a system using ode45 and ode15s in matlab. I am wondering if I can improve the speed of the code by using multiple cores (or parallel code) in my script.
Has anyone tried this?
Thanks
No, you can't.
All numerical integrators, ode45 and friends included, use some form of iterative scheme to solve the user-implemented (coupled) non-linear (partial) differential equations.
Each new step in the iterative schemes of ode45/ode15s/... (computing the new state of the system) depends on the previous step (the old state of the system); therefore, these numerical integrators cannot be parallelized effectively.
The only speedup you can do that's likely to have a big impact is to optimize your implementation of the differential equation.
From my experience, the only way to use multiple cores for the ODE suite solvers in MATLAB is to use a parfor loop to start multiple computations at the same time. Your single computation will not be any faster, but you can start many with different parameters and have multiple solutions after that long wait. So if you need to run the ODE solver many times, that might speed up your work.
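A rough sketch of that pattern; the right-hand side and parameter values are illustrative placeholders:

    paramValues = linspace(0.1, 1.0, 50);    % illustrative parameter sweep
    tspan = [0 10];
    y0 = [1; 0];
    solutions = cell(numel(paramValues), 1);

    parfor k = 1:numel(paramValues)
        p = paramValues(k);
        odefun = @(t, y) [y(2); -p * y(1)];  % placeholder right-hand side
        [t, y] = ode45(odefun, tspan, y0);
        solutions{k} = struct('t', t, 'y', y, 'p', p);
    end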
To speed up a single ODE solve, it is also a good idea to play with the RelTol and AbsTol settings (this can change run times from seconds to hours); using the JPattern option can also be very helpful (my almost-tridiagonal pattern made it run twice as fast). If your differential equation function is simple, maybe try to compile it first, or at least vectorize it (I used to write part of the code in Java and then point MATLAB at the compiled .class file). Obviously the length of your solution vector plays an important role, so don't make it more than a few hundred elements.
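A sketch of those solver options; the tolerance values are illustrative, and the sparsity pattern here is a simple tridiagonal example:

    n = 200;
    S = spdiags(ones(n, 3), -1:1, n, n);     % tridiagonal Jacobian sparsity pattern

    opts = odeset('RelTol', 1e-4, ...        % loosen/tighten to trade accuracy for speed
                  'AbsTol', 1e-6, ...
                  'JPattern', S);

    odefun = @(t, y) S * y - y.^3;           % placeholder system matching the pattern
    y0 = ones(n, 1);
    [t, y] = ode15s(odefun, [0 1], y0, opts);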

Matlab and GPU/CUDA programming

I need to run several independent analyses on the same data set.
Specifically, I need to run bunches of 100 glm (generalized linear models) analyses and was thinking to take advantage of my video card (GTX580).
As I have access to Matlab and the Parallel Computing Toolbox (and I'm not good with C++), I decided to give it a try.
I understand that a single GLM is not ideal for parallel computing, but as I need to run 100-200 in parallel, I thought that using parfor could be a solution.
My problem is that it is not clear to me which approach I should follow. I wrote a gpuArray version of the matlab function glmfit, but using parfor doesn't give any advantage over a standard for loop.
Does this have anything to do with the matlabpool setting? It is not even clear to me how to set it up to "see" the GPU card. By default, it is set to the number of cores in the CPU (4 in my case), if I'm not wrong.
Am I completely wrong on the approach?
Any suggestion would be highly appreciated.
Edit
Thanks. I'm aware of GPUmat and Jacket, and I could start writing in C without too much effort, but I'm testing the GPU computing possibilities for a department where everybody uses Matlab or R. The final goal would be a cluster based on C2050 and the Matlab Distribution Server (or at least this was the first project).
Reading the ads from Mathworks, I was under the impression that parallel computing was possible even without C skills. It is impossible to ask the researchers in my department to learn C, so I'm guessing that GPUmat and Jacket are the better solutions, even if the limitations are quite significant and support for several commonly used routines like glm is non-existent.
How can they be interfaced with a cluster? Do they work with some job distribution system?
I would recommend you try either GPUmat (free) or AccelerEyes Jacket (commercial, but it has a free trial) rather than the Parallel Computing Toolbox. The toolbox doesn't have as much functionality.
To get the most performance, you may want to learn some C (no need for C++) and code in raw CUDA yourself. Many of these high level tools may not be smart enough about how they manage memory transfers (you could lose all your computational benefits from needlessly shuffling data across the PCI-E bus).
Parfor will help you utilize multiple GPUs, but not a single GPU. The thing is that a single GPU can do only one thing at a time, so parfor on a single GPU and for on a single GPU will achieve exactly the same effect (as you are seeing).
Jacket tends to be more efficient, as it can combine multiple operations and run them more efficiently, and it has more features; but most departments already have the Parallel Computing Toolbox and not Jacket, so that can be an issue. You can try the demo to check.
No experience with gpumat.
The Parallel Computing Toolbox is getting better; what you need is large matrix operations. GPUs are good at doing the same thing many times over, so you need to either combine your code somehow into one operation or make each operation big enough. We are talking about a need for at least ~10,000 things in parallel, although that's not a set of 1e4 small matrices but rather one large matrix with at least 1e4 elements.
I do find that with the Parallel Computing Toolbox you still need quite a bit of inline CUDA code to be effective (it's still pretty limited). It does, however, let you inline kernels and transform MATLAB code into kernels.
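For example, arrayfun on gpuArray inputs is the usual way to turn purely element-wise MATLAB code into a GPU kernel without writing CUDA; a small sketch (the function body is just a placeholder):

    x = gpuArray.rand(1e6, 1);               % data created directly on the GPU
    y = gpuArray.rand(1e6, 1);

    f = @(a, b) a .* exp(-b) + sqrt(abs(a - b));   % element-wise operations only
    z = arrayfun(f, x, y);                   % compiled to and executed as a GPU kernel

    result = gather(z);                      % copy back to the CPU when needed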

CUDA and MATLAB for loop optimization

I'm going to attempt to optimize some code written in MATLAB, by using CUDA. I recently started programming CUDA, but I've got a general idea of how it works.
So, say I want to add two matrices together. In CUDA, I could write an algorithm that would utilize a thread to calculate the answer for each element in the result matrix. However, isn't this technique probably similar to what MATLAB already does? In that case, wouldn't the efficiency be independent of the technique and attributable only to the hardware level?
The technique might be similar, but remember that with CUDA you have hundreds of threads running simultaneously. If MATLAB is using threads and those threads are running on a quad core, you are only going to get 4 threads executed per clock cycle, while you might get a couple of hundred threads to run on CUDA in that same clock cycle.
So to answer your question: yes, the efficiency in this example is independent of the technique and attributable only to the hardware.
The answer is unequivocally yes: all the efficiencies are at the hardware level. I don't know exactly how Matlab works, but the advantage of CUDA is that multiple threads can be executed simultaneously, unlike in Matlab.
On a side note, if your problem is small or requires many read/write operations, CUDA will probably only be an additional headache.
CUDA has official support for matlab.
[need link]
You can make use of MEX files to run code on the GPU from MATLAB.
The bottleneck is the speed at which data is transferred from CPU RAM to the GPU, so if transfers are minimized and done in large chunks, the speedup is great.
For simple things, it's better to use the gpuArray support in the Matlab PCT. You can check it here
http://www.mathworks.de/de/help/distcomp/using-gpuarray.html
For things like adding gpuArrays, multiplications, mins, maxes, etc., the implementation they use tends to be OK. I did find that for batch operations on small matrices, like abs(y-Hx).^2, you're better off writing a small kernel that does it for you.
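For reference, the basic gpuArray workflow from the link above looks roughly like this (sizes and operations are illustrative): move the data to the GPU once, do the heavy work there with the overloaded operators, and bring only the result back.

    A = rand(2000);                          % data on the CPU
    G = gpuArray(A);                         % one transfer, CPU -> GPU

    R = G * G + 2 * G;                       % runs on the GPU via overloaded operators
    r = gather(max(R(:)));                   % one small transfer back, GPU -> CPU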