Parallel Computing in MATLAB using drange

I have a code that goes like this which I want to run using parpool:
result = zeros(J,K);
build = zeros(size(old1,1), 2); % preallocate the two-column input
for k = 1:K
    for j = 1:J
        build(:,1) = old1(:,j,k);
        build(:,2) = old2(:,j,k);
        result(j,k) = call_function(build); % takes a long time to run
    end
end
This code takes a long time to run, and I have to run it multiple times for my simulation, so I want to run the outermost loop (k = 1:K) in parallel in MATLAB.
From what I have read, I cannot use parfor, since each iteration uses the same variables old1 and old2. I could use spmd and distribute my matrices old1 and old2, but I read that this creates as many copies of the variables as there are workers, which I do not want. I could use drange, but I am not sure how exactly it works, and I am finding it difficult to apply what I have been reading in the MATLAB references. Any resources and pointers would be of great help!
Constraints are as follows:
Must not create multiple copies of the variables old1 and old2. But I can slice them across workers, as no iteration requires the results of any other.
Have to distribute the outermost loop only, for ease of accessing the data outside this block of code.
Thank you.

old1 and old2 can be used, I think. Initialize them as constants using:
old1 = parallel.pool.Constant(old1);
old2 = parallel.pool.Constant(old2);
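A minimal sketch of how the loop could then look with parfor (untested; it assumes call_function only reads build). Each worker gets one copy of each Constant rather than one copy per iteration, and the inner loop's results are collected in a temporary column so that result stays a sliced variable:
old1C = parallel.pool.Constant(old1);
old2C = parallel.pool.Constant(old2);
result = zeros(J,K);
parfor k = 1:K
    tmp = zeros(J,1); % local column, written to result as one slice
    for j = 1:J
        build = [old1C.Value(:,j,k), old2C.Value(:,j,k)];
        tmp(j) = call_function(build);
    end
    result(:,k) = tmp; % sliced: indexed by the loop variable
end
Note that a Constant is still copied once per worker; if even that is too much, distributed arrays with for-drange avoid it, at the cost of more setup.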
Have you seen this post?
https://www.mathworks.com/help/distcomp/improve-parfor-performance.html

Related

How can I make these four loops compute in parallel?

I have a problem with the MathWorks Parallel Computing Toolbox in MATLAB. See my code below:
for k = 1:length(Xab)
    n1 = length(Z)*(k-1)+1 : length(Z)*k;
    MX_j(1,n1) = JXab{k};
    MY_j(1,n1) = JYab{k};
    MZ_j(1,n1) = Z;
end
for k = length(Xab)+1 : length(Xab)+length(Xbc)
    n2 = length(Z)*(k-1)+1 : length(Z)*k;
    MX_j(1,n2) = JXbc{k-length(Xab)};
    MY_j(1,n2) = JYbc{k-length(Yab)};
    MZ_j(1,n2) = Z;
end
for k = length(Xab)+length(Xbc)+1 : length(Xab)+length(Xbc)+length(Xcd)
    n3 = length(Z)*(k-1)+1 : length(Z)*k;
    MX_j(1,n3) = JXcd{k-length(Xab)-length(Xbc)};
    MY_j(1,n3) = JYcd{k-length(Yab)-length(Ybc)};
    MZ_j(1,n3) = Z;
end
for k = length(Xab)+length(Xbc)+length(Xcd)+1 : length(Xab)+length(Xbc)+length(Xcd)+length(Xda)
    n4 = length(Z)*(k-1)+1 : length(Z)*k;
    MX_j(1,n4) = JXda{k-length(Xab)-length(Xbc)-length(Xcd)};
    MY_j(1,n4) = JYda{k-length(Yab)-length(Ybc)-length(Ycd)};
    MZ_j(1,n4) = Z;
end
If I change the for-loops to parfor-loops, MATLAB warns me that MX_j is not an efficient variable. I have no idea how to solve this. How can I make these for-loops compute in parallel?
To me, it looks like you can combine these into one loop. Create combined cell arrays:
JX = cat(2,JXab, JXbc, JXcd, JXda);
JY = cat(2,JYab, JYbc, JYcd, JYda);
Check for the right dimension here. If your JXab/JXbc/... cell arrays are column arrays, use cat(1,...) instead.
After doing that, one single loop should do it:
n = length(Xab)+length(Xbc)+length(Xcd)+length(Xda);
for k = 1:n
    k2 = length(Z)*(k-1)+1 : length(Z)*k;
    MX_j(1,k2) = JX{k};
    MY_j(1,k2) = JY{k};
    MZ_j(1,k2) = Z;
end
Before parallelizing anything, check that this is still valid; I haven't tested it. If everything works, you can switch to parfor.
When using parfor, the arrays must be preallocated. Note also that parfor cannot slice MX_j when it is indexed by a computed range like k2, so the version below fills one column per iteration and reshapes afterwards. The following code could work (untested due to lack of test data):
n = length(Xab)+length(Xbc)+length(Xcd)+length(Xda);
lz = length(Z);
MX = zeros(lz,n);
MY = zeros(lz,n);
MZ = zeros(lz,n);
parfor k = 1:n
    MX(:,k) = JX{k}(:); % sliced: indexed by the loop variable
    MY(:,k) = JY{k}(:);
    MZ(:,k) = Z(:);
end
MX_j = reshape(MX,1,[]); % back to 1-by-(n*lz) row vectors
MY_j = reshape(MY,1,[]);
MZ_j = reshape(MZ,1,[]);
Note: as far as I can see, the parfor loop will be much slower here. You simply assign some values, with no calculation at all, so the setup of the worker pool will take 99.9% of the total execution time.

Matlab parfor and input files

I have an algorithm myAlgo() which uses a parameter par1 in order to analyze a set of data (about 1000 .mat files). The paths to the .mat files are in a cell array which I also pass to myAlgo(). The myAlgo() function contains classes and other functions. For every value of par1 I have to test all 1000 .mat files, so it would be a lot faster if I could use a parallel loop, since I have an independent (?) problem.
I use the following code with parfor:
par1 = linspace(1,10,100);
myFiles % cell array with the .mat file locations
myResult = zeros(length(par1),1);
parfor k = 1:length(par1)
    myPar = par1(k);
    myResult(k) = myAlgo(myPar, myFiles);
end
% do something with myResult
function theResult = myAlgo(myPar, myFiles)
    tempResult = zeros(length(myFiles),1);
    for ii = 1:length(myFiles)
        tempResult(ii) = initAlgo(myPar, myFiles{ii});
    end
    theResult = sum(tempResult);
end
So for every parameter in par1 I do the same thing. Unfortunately, the processing time does not decrease, yet if I check the workload of the CPU (an i5), all cores are quite active.
Now my question: is it possible that parfor does not work in this case because every worker initialized by parfor needs to access the folder with the 1000 .mat files, and therefore they cannot do their jobs at the same time? Is there a way to handle this?
First of all, check whether you have a license for the Parallel Computing Toolbox (PCT). If you do not have one, parfor will behave just like a normal for loop, WITHOUT any actual parallel processing (for compatibility reasons).
Second, make sure to open a parpool first.
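For example, a minimal check (sketch) that opens a pool only if none is running:
p = gcp('nocreate'); % returns [] instead of opening a new pool
if isempty(p)
    parpool; % uses the default profile and worker count
end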
Another problem may be that you are using parallel processing for the outer loop with 100 iterations, but not for the larger inner loop with 1000 iterations. You should rephrase your problem as one big loop that allows parfor to parallelize all 100*1000 = 100000 tasks, not just the 100 outer iterations. This excellent post explains the problem nicely and offers several solutions.
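For illustration, a hedged sketch of that flattening using the names from the question (untested; it assumes initAlgo returns a scalar per parameter/file pair):
nPar = length(par1);
nFiles = length(myFiles);
partial = zeros(nPar, nFiles);
parfor idx = 1:nPar*nFiles
    [k, ii] = ind2sub([nPar, nFiles], idx); % recover (parameter, file) pair
    partial(idx) = initAlgo(par1(k), myFiles{ii});
end
myResult = sum(partial, 2); % sum over the files for each parameter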

Sending data to workers

I am trying to create a piece of parallel code to speed up the processing of a very large (couple of hundred million rows) array. In order to parallelise this, I chopped my data into 8 pieces (my number of cores) and tried sending each worker one piece. Looking at my RAM usage, however, it seems each piece is sent to each worker, effectively multiplying my RAM usage by 8. A minimum working example:
A = 1:16;
data = cell(1,8);
for ii = 1:8
    data{ii} = A(2*ii-1:2*ii);
end
Now, when I send this data to the workers using parfor, it seems to send the full cell instead of just the desired piece:
output = cell(1,8);
parfor ii = 1:8
    output{ii} = data{ii};
end
I actually use some function within the parfor loop, but this illustrates the case. Does MATLAB actually send the full cell data to each worker, and if so, how can I make it send only the desired piece?
In my personal experience, I have found that parfeval is better regarding memory usage than parfor. In addition, your problem seems to be easily divisible, so you can use parfeval to submit many smaller jobs to the MATLAB workers.
Let's say that you have workerCnt MATLAB workers to which you are going to assign jobCnt jobs. Let data be a cell array of size jobCnt-by-1, each element of which corresponds to one data input for the function getOutput, which does the analysis on the data. The results are then stored in the cell array output of size jobCnt-by-1.
In the following code, jobs are assigned in the first for-loop and the results are retrieved in the second while-loop. The logical array doneJobs indicates which jobs are done.
poolObj = parpool(workerCnt);
jobCnt = length(data); % number of jobs
output = cell(jobCnt,1);
for jobNo = 1:jobCnt
    future(jobNo) = parfeval(poolObj, @getOutput, ...
        nargout('getOutput'), data{jobNo});
end
doneJobs = false(jobCnt,1);
while ~all(doneJobs)
    [idx, result] = fetchNext(future);
    output{idx} = result;
    doneJobs(idx) = true;
end
Also, you can take this approach one step further if you want to save more memory. What you could do is, after fetching the results of a done job, delete the corresponding member of future. The reason is that this object stores all the input and output data of the getOutput function, which is probably going to be huge. But you need to be careful, as deleting members of future results in an index shift.
The following is the code I wrote for this purpose.
poolObj = parpool(workerCnt);
jobCnt = length(data); % number of jobs
output = cell(jobCnt,1);
for jobNo = 1:jobCnt
    future(jobNo) = parfeval(poolObj, @getOutput, ...
        nargout('getOutput'), data{jobNo});
end
doneJobs = false(jobCnt,1);
while ~all(doneJobs)
    [idx, result] = fetchNext(future);
    future(idx) = []; % remove the finished future object
    oldIdx = 0;
    % find the index offset and correct the index accordingly
    while oldIdx ~= idx
        doneJobsInIdxRange = sum(doneJobs((oldIdx+1):idx));
        oldIdx = idx;
        idx = idx + doneJobsInIdxRange;
    end
    output{idx} = result;
    doneJobs(idx) = true;
end
The comment from @m.s is correct - when parfor slices an array, each worker is sent only the slice necessary for the loop iterations that it is working on. However, you might well see the RAM usage increase beyond what you originally expect, because unfortunately copies of the data are required as it is passed from the client to the workers via the parfor communication mechanism.
If you need the data only on the workers, then the best solution is to create/load/access it only on the workers if possible. It sounds like you're after data parallelism rather than task parallelism, for which spmd is indeed a better fit (as @Kostas suggests).
I would suggest using the spmd command of MATLAB.
You can write code almost as you would for a non-parallel implementation, and you also have access to the current worker's index through the labindex "system" variable.
Have a look here:
http://www.mathworks.com/help/distcomp/spmd.html
And also at this SO question about spmd vs parfor:
SPMD vs. Parfor
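For the 8-piece example from the question, a hedged sketch of the spmd approach (untested; process is a hypothetical stand-in for your per-piece function):
dataDist = distributed(data); % split the cell across the workers
spmd
    localPart = getLocalPart(dataDist); % only this worker's pieces
    localOut = cellfun(@process, localPart, 'UniformOutput', false);
end
% localOut is a Composite; collect the per-worker results on the client
output = {};
for w = 1:length(localOut)
    output = [output, localOut{w}]; %#ok<AGROW>
end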

Fill an array with spmd in Matlab

I have a p-by-1 array R in Matlab (with p large). I initialize this array with all entries equal to 0, and the i-th element of R shall receive the output of myfunction applied to parameters(i). In other words:
R = zeros(p,1);
for i = 1:p
    R(i) = myfunction(parameters(i));
end
The same function myfunction is applied multiple times with different inputs. Because p might become large, I recognized an SPMD problem (single program, multiple data) and thought that using the spmd construct would help the previous code run faster.
If I run matlabpool, I obtain n_workers different labs. My idea is to break the array R into n_workers parts and ask each available worker to fill one part of the array. I would like to do something like this:
q = ((p-1) - mod(p-1, n_workers-1)) / (n_workers-1);
lab 1:
    for j = 1:q
        R(j) = myfunction(parameters(j));
    end
lab 2:
    for j = (q+1):(2*q+1)
        R(j) = myfunction(parameters(j));
    end
...
lab n_workers:
    for j = (q*(n_workers-1)+1):p
        R(j) = myfunction(parameters(j));
    end
However, since I'm new to parallel programming, I don't know how to write this properly in Matlab. Instead of subdividing the array R myself, could I use a codistributed array instead?
Firstly, if your function evaluations are independent, you might well be better off using parfor, like so:
R = zeros(p,1);
parfor i = 1:p
    R(i) = myfunction(parameters(i));
end
spmd is generally only useful when you need communication between iterations. In any case, you can run this sort of thing inside spmd using the for-drange construct like so:
spmd
    R = codistributed.zeros(p, 1);
    for i = drange(1:p)
        R(i) = myfunction(parameters(i));
    end
end
In that case, you'd probably also want to make parameters a distributed array too, to avoid having multiple copies in memory. (parfor automatically avoids that problem by "slicing" both R and parameters.)
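A hedged sketch of that combination (untested; it assumes the default partitions of the two arrays line up): distribute parameters before the spmd block so only portions are sent to each worker, then gather R back on the client afterwards:
pDist = distributed(parameters); % each worker receives only its portion
spmd
    R = codistributed.zeros(p, 1);
    for i = drange(1:p)
        R(i) = myfunction(pDist(i)); % local access under matching partitions
    end
end
Rfull = gather(R); % collect the complete vector on the client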

Simple parallel execution in MATLAB

I have figured out some awesome ways of speeding up my MATLAB code: vectorizing, arrayfun, and basically just getting rid of for loops (not using parfor). I want to take it to the next step.
Suppose I have 2 function calls that are computationally intensive.
x = fun(a);
y = fun(b);
They are completely independent, and I want to run them in parallel rather than serially. I don't have the Parallel Computing Toolbox. Any help is appreciated.
Thanks.
If I am optimistic, I think you are asking "How can I simply do parallel processing in Matlab?". In that case the answer would be:
Parallel processing can most easily be done with the parallel computing toolbox. This gives you access to things like parfor.
I guess you can do the following. Note that plain assignments like x = fun(a) inside parfor are temporary variables and would not survive the loop, so the results are collected in a sliced cell array:
out = cell(1,2);
parfor t = 1:2
    if t == 1, out{t} = fun(a); end
    if t == 2, out{t} = fun(b); end
end
x = out{1};
y = out{2};
Of course there are other ways, but that should be the simplest.
The MATLAB interpreter is single-threaded, so the only way to achieve parallelism across MATLAB functions is to run multiple instances of MATLAB. Parallel Computing Toolbox does this for you, and gives you a convenient interface in the form of PARFOR/SPMD/PARFEVAL etc. You can run multiple MATLAB instances manually, but you'll probably need to do a fair bit of work to organise the work that you want to be done.
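If you do get the toolbox, a minimal sketch with parfeval (untested) would run the two calls asynchronously on the current pool:
fx = parfeval(@fun, 1, a); % 1 = number of requested outputs
fy = parfeval(@fun, 1, b);
x = fetchOutputs(fx); % blocks until fx has finished
y = fetchOutputs(fy);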
The usual examples involve parfor, which is probably the easiest way to get parallelism out of MATLAB's Parallel Computing Toolbox (PCT). The parfeval function is also quite easy, as demonstrated in this other post. A less frequently discussed functionality of the PCT is the system of jobs and tasks, which is probably the most appropriate solution for your simple case of two completely independent function calls. Spoiler: the batch command can help to simplify the creation of simple jobs (see the bottom of this post).
Unfortunately, it is not as straightforward to implement; for the sake of completeness, here's an example:
% Build a cluster from the default profile
c = parcluster();
% Create an independent job object
j = createJob(c);
% Use cells to pass inputs to the tasks
taskdataA = {field1varA,...};
taskdataB = {field1varB,...};
% Create one task per input cell, each with 2 outputs
nTaskOutputs = 2;
t = createTask(j, @myCoarseFunction, nTaskOutputs, {taskdataA, taskdataB});
% Start the job and wait for it to finish the tasks
submit(j); wait(j);
% Get the outputs from each task
taskoutput = get(t, 'OutputArguments');
delete(j); % do not forget to remove the job or your APPDATA folder will fill up!
% Get the outputs
out1A = taskoutput{1}{1};
out1B = taskoutput{2}{1};
out2A = taskoutput{1}{2};
out2B = taskoutput{2}{2};
The key here is the function myCoarseFunction, given to createTask as the function to evaluate in the task objects it creates. This can be your fun, or a wrapper if you have complicated inputs/outputs that might require a struct container; a sketch of such a wrapper is shown below.
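For illustration, a hypothetical wrapper (a sketch only; the argument names follow the field1varA pattern above and are assumptions, not fixed by createTask):
function [out1, out2] = myCoarseFunction(field1var, field2var)
% Each task receives the contents of its input cell as its argument list.
out1 = fun(field1var); % 'fun' as in the question
out2 = fun(field2var); % hypothetical second computation
end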
Note that for a single task, the entire workflow above of creating a job and task, then starting them with submit can be simplified with batch as follows:
c = parcluster();
jobA = batch(c, @myCoarseFunction, 1, taskdataA, ...
    'Pool', c.NumWorkers/2 - 1, 'CaptureDiary', true);
Also, keep in mind that, as with matlabpool (now called parpool), using parcluster requires time to start up the MATLAB.exe processes that will run your job.
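To get the results back afterwards (a sketch; fetchOutputs on a job returns a cell array of its output arguments):
wait(jobA); % block until the batch job finishes
outA = fetchOutputs(jobA); % {out1A} here, since one output was requested
delete(jobA); % clean up, as above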