Parallel scripts on MATLAB

I have two systems running on MATLAB: the control system and the computer vision system.
The control system needs to receive three variables generated by the computer vision system periodically. However, I can't run both systems in a single thread, because the computer vision system's latency is too high compared with the control system's.
I tried to run each program in a different MATLAB session and use a .mat file as the interface between the two sessions, but it did not work.
I'm not familiar with the Parallel Computing Toolbox, so I was wondering if someone can help with this, or at least give me a starting idea, because, as I've said, I'm only now starting to learn the toolbox.

I think the function in the Parallel Computing Toolbox you may be looking for is parfeval. It lets you spawn an asynchronous task and fetch its result whenever it is ready.
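As a minimal sketch for your control/vision split: processFrame, getNextFrame, doControlStep, and controlActive below are hypothetical placeholders for your own code, with processFrame assumed to return the three variables the controller needs.

    pool = gcp();                                          % get or start a parallel pool
    f = parfeval(pool, @processFrame, 3, getNextFrame());  % 3 outputs, runs on a worker
    [a, b, c] = deal(0);                                   % last known vision results
    while controlActive()
        doControlStep(a, b, c);                            % fast control loop keeps running
        if strcmp(f.State, 'finished')                     % new vision result ready?
            [a, b, c] = fetchOutputs(f);                   % collect the three variables
            f = parfeval(pool, @processFrame, 3, getNextFrame());  % relaunch the vision task
        end
    end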

In addition to parfeval, as suggested by @Dima, you might also want to look into labSendReceive
and the associated functions labSend and labReceive, which allow you to share data between individual workers in your parallel pool. I guess which one is best for you depends on the type of calculation you want to do.
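For example, inside an spmd block (which runs on the workers of an open pool), one worker can pass data directly to another; the payload here is just placeholder data:

    spmd
        if labindex == 1
            data = rand(1, 3);         % e.g. the three vision variables
            labSend(data, 2);          % send to worker 2
        elseif labindex == 2
            data = labReceive(1);      % blocks until worker 1's data arrives
        end
    end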

Related

Is it beneficial to run Matlab calculations in parallel on a multi-core computer?

I have a laptop with a multi-core processor and I would like to run a lengthy loop in which Simulink simulations are performed. Is it beneficial to split the loop into two parts (it is possible in my case), open the Matlab application twice, and run a Matlab script in each of them?
Someone told me that Matlab/Simulink always uses one core per opened Matlab application. Is that correct?
MATLAB splits some built-in functions across multiple cores, but standard MATLAB code uses just one core. Generally, if you are running several independent iterations, the computation time can benefit from parallelization. You can do this easily using either parfor (if you have the Parallel Computing Toolbox) or batch_job.
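A minimal sketch of the parfor case: runOneIteration is a placeholder for your Simulink-driven computation, and note that calling Simulink from workers needs some extra setup in practice.

    nIter = 100;                          % assumed number of independent runs
    results = zeros(1, nIter);
    parfor k = 1:nIter
        results(k) = runOneIteration(k);  % iterations must be independent of each other
    end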

Data from LabVIEW to Matlab for processing

I want to make a biometric identification system based on the ECG/EKG.
Given that Matlab does not perform real-time data acquisition (for monitoring), is there any way to do the monitoring and data acquisition in LabVIEW and then work simultaneously with Matlab for signal processing?
You could just get a MATLAB-compatible DAQ and run everything in MATLAB: http://www.mathworks.com/products/daq/
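As a rough sketch of what that looks like with the session-based Data Acquisition Toolbox interface (an NI card and the device name 'Dev1' are assumptions):

    s = daq.createSession('ni');                         % vendor-specific acquisition session
    addAnalogInputChannel(s, 'Dev1', 'ai0', 'Voltage');  % one ECG channel
    s.Rate = 1000;                                       % 1 kHz sampling
    s.DurationInSeconds = 10;                            % 10 s acquisition
    [data, t] = s.startForeground();                     % blocking acquisition
    plot(t, data);                                       % quick look at the signal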
You can indeed do the data acquisition with LabVIEW and work simultaneously with Matlab for signal processing by calling the MATLAB script node, which executes some Matlab code during VI execution.
You may have some performance issues, though, because both LabVIEW and Matlab have to run on your machine simultaneously.
Question:
is there any way to make the monitoring and data acquisition on LabView and then work simultaneously with Matlab for signal processing
Answers:
LabVIEW has a "MathScript" node, which is basic MatLab built into an add-on. It is not the MatLab toolboxes; it runs native MatLab code, and it runs slightly faster when LabVIEW compiles updates to the code. If your code runs there, then LabVIEW will pass data natively to your code. This node does not have direct MatLab toolbox access, so if you use any toolbox-specific calls, then that can cause a problem.
If you have MatLab on the box, then you can call the external MatLab function/code using MathScript (link), and MatLab will run the function.
Clarification:
Real time just means "bounded time" (link), not "instant". If your bounds are loose enough, then many systems can work for them. You do not state it in your question, but what do you consider an acceptable response time?
I've worked a lot with LabVIEW and Matlab. Personally, I would not use the MathScript node and would opt for the Matlab Automation Server. You can call Matlab from LabVIEW using the ActiveX palette in LabVIEW (see Functions >> Connectivity >> ActiveX >> Automation Open). A couple of reasons why I'd go for ActiveX and NOT the MathScript node:
The MathScript node does not allow you to change code dynamically. You must hardcode it into the MathScript node, and any future change would require a change to LabVIEW's G code and therefore a recompile of your EXE.
The MathScript node does not support all functions when compiled to an executable, most notably graphing functions. See the help file here to read more on this.
Calling Matlab from ActiveX is going to give you a lot more flexibility in regards to how data is passed and processed.
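On the MATLAB side, very little is needed: an existing session can register itself as an Automation server, and LabVIEW then attaches to the 'Matlab.Application' COM object and drives it with methods such as Execute, GetFullMatrix, and PutFullMatrix.

    % Run once in the MATLAB session you want LabVIEW to control:
    enableservice('AutomationServer', true);   % expose this session via COM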

How to run Multiple Tasks in MATLAB

I want to process multiple tasks in MatLab simultaneously.
For example:
I want to run two different analyses at the same time. Presently I open two instances of MATLAB.
I was wondering if there is any way to accomplish this using one instance of MATLAB, that is, one MATLAB window?
Take a look at the Parallel Computing Toolbox for MATLAB.
http://www.mathworks.co.uk/products/parallel-computing/?s_cid=sol_compbio_sub2_relprod4_parallel_computing_toolbox
You should use multithreading; see the Instructables tutorial here.
Also check out how to better utilize multiple cores via this MathWorks article dealing with Built-in Multithreading and Parallelism Using MATLAB Workers; see the MathWorks explanation here.
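For completeness, a hedged sketch of running two analyses from a single MATLAB window using batch jobs (Parallel Computing Toolbox required; analysisA and analysisB stand in for your two analyses):

    job1 = batch(@analysisA, 1, {});          % each job runs on a background worker
    job2 = batch(@analysisB, 1, {});
    wait(job1); out1 = fetchOutputs(job1);    % 1-by-1 cell holding the result
    wait(job2); out2 = fetchOutputs(job2);
    delete(job1); delete(job2);               % clean up the finished jobs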

Dedicated distributed system to handle matlab jobs

I'm a software engineer, and I'm currently looking to set up a distributed system at my laboratory so that I can process some Matlab jobs on it. I have looked into MATLAB MPI, but I want to know if there is some way to set up such a system without paying any fee.
I have spent a lot of time looking at that very issue, and the short answer is: nope, not possible.
There are two long answers. First, if you're constrained to using Matlab, then all roads lead back to MathWorks. One possibility is that you could compile your code (you'd need to buy the MATLAB Compiler from MathWorks, though) and then run the compiled code on whatever grid infrastructure you wish, such as Hadoop.
Second, and for this reason, I have found it much better to simply port code to another language, usually an open-source one. For the work I tend to do, Octave is a poor replacement for Matlab. Instead, R and Python are great for most of the same functionality. Personally, I lean a lot more toward R than Python, but that's because R is a better fit for these applications (i.e. they're very statistical in nature).
I've ported a lot of Matlab code to R and it's not too bad. Porting to Python would be easier in general, though, and there is a very large Matlab refugee community that has switched to Python.
Once you're in either Python or R, there are a lot of options for MPI, multicore tools, distributed systems, GPU tools, and more. In fact, you may find the migration easier by writing some of the distributed functions in Python or R, loading up an easy-to-use grid system, and then having Matlab submit the job to the server. Your local code could stay the same, while you work on porting only the gridded parts, which you'd probably have to devote some time to writing in Matlab anyway.
I wouldn't say it's completely impossible; you can use TCP/IP sockets to build a client/server application (you will find many MEX implementations of BSD sockets on the File Exchange).
The architecture is simple: your main MATLAB client script sends jobs (code along with any needed data, serialized) to the nodes to evaluate, and they send back the results when done. The nodes would be distributed MATLAB instances running the server part, which listens for connections and runs anything it receives through the EVAL function.
Obviously it is up to you to write code that can be divided into separable tasks.
This is not as sophisticated as what is offered by the Distributed Computing Toolbox, but it basically does the same thing...
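A heavily simplified sketch of the node-side loop: tcpListen, tcpAccept, tcpRecv, and tcpSend are hypothetical stand-ins for whichever File Exchange MEX socket wrapper you pick.

    % Hypothetical socket API; substitute your MEX wrapper's actual calls.
    srv = tcpListen(5000);              % listen on an agreed port
    while true
        conn = tcpAccept(srv);          % wait for the client script to connect
        task = tcpRecv(conn);           % serialized code string (plus any data)
        result = eval(task);            % run the job (trusted code only!)
        tcpSend(conn, result);          % ship the answer back to the client
    end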

Matlab and GPU/CUDA programming

I need to run several independent analyses on the same data set.
Specifically, I need to run batches of 100 GLM (generalized linear model) analyses and was thinking of taking advantage of my video card (GTX 580).
As I have access to Matlab and the Parallel Computing Toolbox (and I'm not good with C++), I decided to give it a try.
I understand that a single GLM is not ideal for parallel computing, but as I need to run 100-200 in parallel, I thought that using parfor could be a solution.
My problem is that it is not clear to me which approach I should follow. I wrote a gpuArray version of the matlab function glmfit, but using parfor doesn't have any advantage over a standard "for" loop.
Does this have anything to do with the matlabpool setting? It is not even clear to me how to set this to "see" the GPU card. By default, it is set to the number of cores in the CPU (4 in my case), if I'm not wrong.
Am I completely wrong on the approach?
Any suggestion would be highly appreciated.
Edit
Thanks. I'm aware of GPUmat and Jacket, and I could start writing in C without too much effort, but I'm testing the GPU computing possibilities for a department where everybody uses Matlab or R. The final goal would be a cluster based on C2050 cards and the MATLAB Distributed Computing Server (or at least this was the first project).
Reading the ads from MathWorks, I was under the impression that parallel computing was possible even without C skills. It is impossible to ask the researchers in my department to learn C, so I'm guessing that GPUmat and Jacket are the better solutions, even if the limitations are quite big and support for several commonly used routines like glm is non-existent.
How can they be interfaced with a cluster? Do they work with some job distribution system?
I would recommend you try either GPUmat (free) or AccelerEyes Jacket (paid, but it has a free trial) rather than the Parallel Computing Toolbox. The toolbox doesn't have as much functionality.
To get the most performance, you may want to learn some C (no need for C++) and code in raw CUDA yourself. Many of these high-level tools may not be smart enough about how they manage memory transfers (you could lose all your computational benefit from needlessly shuffling data across the PCI-E bus).
parfor will help you utilize multiple GPUs, but not a single GPU. The thing is that a single GPU can do only one thing at a time, so parfor on a single GPU and a plain for loop on a single GPU will achieve exactly the same effect (as you are seeing).
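If you do have several GPUs, a common pattern is to bind each pool worker to its own device before the loop. A minimal sketch, assuming a pool is already open (matlabpool then, parpool now), at least as many GPUs as workers, and a hypothetical myGpuFit standing in for your gpuArray glmfit:

    spmd
        gpuDevice(labindex);              % worker k claims GPU k
    end
    nCoef = 20;                           % assumed number of model coefficients
    B = zeros(nCoef, 200);
    parfor k = 1:200
        B(:, k) = gather(myGpuFit(k));    % each fit runs on its worker's GPU
    end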
Jacket tends to be more efficient, as it can combine multiple operations and run them more efficiently, and it has more features; but most departments already have the Parallel Computing Toolbox and not Jacket, so that can be an issue. You can try the demo to check.
No experience with gpumat.
The Parallel Computing Toolbox is getting better; what you need is some large matrix operations. GPUs are good at doing the same thing many times over, so you need to either combine your code somehow into one operation or make each operation big enough. We are talking about a need for ~10000 things in parallel at least, although it's not a set of 1e4 matrices but rather a large matrix with at least 1e4 elements.
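To make the batching idea concrete, here is a toy illustration: instead of 200 small solves in a loop, stack the responses and do one big solve on the card. Plain least squares stands in for the GLM fit here, and the sizes are made up.

    X = gpuArray.rand(1e4, 20);     % one shared 10000-by-20 design matrix on the GPU
    Y = gpuArray.rand(1e4, 200);    % 200 response vectors, one column per model
    B = X \ Y;                      % 200 least-squares fits in a single batched operation
    B = gather(B);                  % bring the coefficients back to the CPU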
I do find that with the Parallel Computing Toolbox you still need quite a bit of inline CUDA code to be effective (it's still pretty limited). It does, however, let you inline kernels and transform MATLAB code into kernels.