Performance difference between subscript indexing and linear indexing - MATLAB

I have a 2D matrix in MATLAB and I use two different ways to access its elements: one based on subscript indexing, the other on linear indexing. I test both methods with the following code:
N = 512; it = 400; im = zeros(N);
%// linear indexing
[ind_x,ind_y] = ndgrid(1:2:N,1:2:N);
index = sub2ind(size(im),ind_x,ind_y);
tic
for i=1:it
im(index) = im(index) + 1;
end
toc %// cost 0.45 seconds on my machine (MATLAB R2015b, Thinkpad T410)
%// subscript indexing
x = 1:2:N;
y = 1:2:N;
tic
for i=1:it
im(x,y) = im(x,y) +1;
end
toc %// cost 0.12 seconds on my machine (MATLAB R2015b, Thinkpad T410)
%// someone pointed out that double vs. uint32 might be an issue, so we turn both into uint32
%//uint32 for linear indexing
index = uint32(index);
tic
for i=1:it
im(index) = im(index) +1;
end
toc %// cost 0.25 seconds on my machine (MATLAB R2015b, Thinkpad T410)
%//uint32 for the subscript indexing
x = uint32(1:2:N);
y = uint32(1:2:N);
tic
for i=1:it
im(x,y) = im(x,y) +1;
end
toc %// cost 0.11 seconds on my machine (MATLAB R2015b, Thinkpad T410)
%% /*********************comparison with others*****************/
%//third way of indexing, loops
tic
for i=1:it
for j=1:2:N
for k=1:2:N
im(j,k) = im(j,k)+1;
end
end
end
toc %// cost 0.74 seconds on my machine (MATLAB R2015b, Thinkpad T410)
It seems that directly using subscript indexing is faster than the linear indexing obtained from sub2ind. Does anyone know why? I thought they were almost the same.

The intuition
As Daniel mentioned in his answer, the linear index takes up more space in RAM while the subscripts are much smaller.
For subscripted indexing, Matlab will not create the linear index internally; it uses a compiled double loop to cycle through all elements.
The linear-indexing version, on the other hand, has to loop through all the linear indices passed from outside, which requires more reads from memory and thus takes longer.
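A quick way to see the size difference is to inspect the index variables directly (a minimal sketch; the byte counts assume default double indices and N = 512):
N = 512;
x = 1:2:N; y = 1:2:N; %// two 256-element vectors, ~2 KB each
[ind_x,ind_y] = ndgrid(x,y);
index = sub2ind([N N],ind_x,ind_y); %// one 256x256 matrix, ~512 KB
whos x y index %// prints the sizes in bytes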
Claims
Linear indexing is faster
...as long as the total number of indices is the same
Timings
From the timings we see a direct confirmation for the first claim and we can infer the second with some additional testing (below).
LOOPED
subs assignment: 0.2878s
linear assignment: 0.0812s
VECTORIZED
subs assignment: 0.0302s
linear assignment: 0.0862s
First claim
We can test it with loops. The number of subref operations is the same but the linear index points directly to the element of interest while subscripts, internally, need to be converted.
The functions of interest:
function B = subscriptedIndexing(A,row,col)
n = numel(row);
B = zeros(n);
for r = 1:n
for c = 1:n
B(r,c) = A(row(r),col(c));
end
end
end
function B = linearIndexing(A,index)
B = zeros(size(index));
for ii = 1:numel(index)
B(ii) = A(index(ii));
end
end
Second claim
This claim is an inference from the observed difference in speed when using the vectorized approach.
First, the vectorized approach (as opposed to the looped) speeds up the subscripted assignment while linear indexing is slightly slower (probably not statistically significant).
Second, the only difference in the two indexing methods comes from the size of the indices/subscripts. We want to isolate this as the only possible cause of the difference in the timings. One other major player could be JIT optimization.
The testing functions:
function B = subscriptedIndexingVect(A,row,col)
n = numel(row);
B = zeros(n);
B = A(row,col);
end
function B = linearIndexingVect(A,index)
B = zeros(size(index));
B = A(index);
end
NOTE: I keep the superfluous preallocation of B, to keep the vectorized and looped approaches comparable. In other words, differences in timings should only come from indexing and the internal implementation of the loops.
All tests are run with:
function testFun(N)
A = magic(N);
row = 1:2:N;
col = 1:2:N;
[ind_x,ind_y] = ndgrid(row,col);
index = sub2ind(size(A),ind_x,ind_y);
% isequal(linearIndexing(A,index), subscriptedIndexing(A,row,col))
% isequal(linearIndexingVect(A,index), subscriptedIndexingVect(A,row,col))
fprintf('LOOPED\n')
fprintf('   subs assignment: %.4fs\n', timeit(@()subscriptedIndexing(A,row,col)))
fprintf(' linear assignment: %.4fs\n\n',timeit(@()linearIndexing(A,index)))
fprintf('VECTORIZED\n')
fprintf('   subs assignment: %.4fs\n', timeit(@()subscriptedIndexingVect(A,row,col)))
fprintf(' linear assignment: %.4fs\n', timeit(@()linearIndexingVect(A,index)))
end
Turning JIT on/off has NO impact:
feature accel off
testFun(5e3)
...
VECTORIZED
subs assignment: 0.0303s
linear assignment: 0.0873s
feature accel on
testFun(5e3)
...
VECTORIZED
subs assignment: 0.0303s
linear assignment: 0.0871s
This excludes the possibility that subscripted assignment's superior speed comes from JIT optimization, which leaves us with the only plausible cause: the number of RAM accesses. It is true that the final matrix has the same number of elements. However, the linear assignment has to read every element of the index array in order to fetch the numbers (for testFun(5e3), the linear index alone is a 2500x2500 double array, about 50 MB, while the subscripts are two 2500-element vectors, about 40 KB).
SETUP
Tested on Win7 64 bit with MATLAB R2015b. Prior versions of Matlab will give different results due to recent changes in Matlab's execution engine.
In fact, turning JIT off in Matlab R2014a affects timings, but only for the loops (expected result):
feature accel off
testFun(5e3)
LOOPED
subs assignment: 7.8915s
linear assignment: 6.4418s
VECTORIZED
subs assignment: 0.0295s
linear assignment: 0.0878s
This again confirms that the difference in timings between linear and subscripted assignment comes from the number of RAM accesses, since JIT plays no role in the vectorized approach.

It does not really surprise me that the subscript indexing is much faster here. If you take a look at your input data, the index is much smaller in this case. For the subscript indexing case you have 512 elements while for the linear indexing case you have 65536 elements.
When you apply your example to a vector instead, you will notice that there is no difference between both methods.
Here is the slightly modified code I used to evaluate different matrix sizes:
it = 400; im = zeros(512*512,1);
x = 1:2:size(im,1);
y = 1:2:size(im,2);
%// linear indexing
[ind_x,ind_y] = ndgrid(x,y);
index = sub2ind(size(im),ind_x,ind_y);
tic
for i=1:it
im(index) = im(index) + 1;
end
toc
%// subscript indexing
tic
for i=1:it
im(x,y) = im(x,y) +1;
end
toc

A very good question. Right off the bat, I don't know the correct answer; however, you can analyze the behavior. Save the first toc into t1 and the second one into t2, and at the end calculate t1/t2. You will notice that changing the number of iterations or the size of your matrix does (almost) not change the factor.
I propose:
The number of iterations only improves the quality of the tic/toc measurement. (Obvious?)
The size of the matrix has no influence, i.e. there must be a constant overhead in the syntax.
I imagine that there is simply an internal check or transformation from the linear index to subscript indexing, i.e. the internal addition (operation) you perform is exactly the same. It appears more natural to use subscript indexing than linear indexing, so maybe MathWorks simply optimized the former.
UPDATE:
You can also simply access a single element of your matrix; you will see that using a subscript index is faster than using a linear index. That supports the theory that a slow conversion from linear to subscript indices is done internally.
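A minimal sketch of that single-element test (the exact timings will of course vary by machine):
im = zeros(512);
k = sub2ind(size(im),300,200); %// linear index of element (300,200)
tic; for i = 1:1e6, v = im(300,200); end; toc %// subscript access
tic; for i = 1:1e6, v = im(k); end; toc %// linear access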

DISCLAIMER: I don't have a MATLAB license at the moment, so the code I provide below is admittedly untested. However, if anyone decides to test, please comment on this answer accordingly.
Depending on your release of MATLAB (are you using R2015b?), there is a possibility that you have not paid the full upfront cost of preallocation when invoking zeros. It is possible that you pay for the allocation on the first get/set of im, which would cause additional but hidden overhead when you first access the values inside im.
See: http://undocumentedmatlab.com/blog/preallocation-performance
As an initial test, I suggest switching the order that you are profiling the code:
N = 512; it = 400; im = zeros(N);
%// subscript indexing
x = 1:2:N;
y = 1:2:N;
tic
for i=1:it
im(x,y) = im(x,y) +1;
end
toc %// What's the cost now?
%// linear indexing
[ind_x,ind_y] = ndgrid(1:2:N,1:2:N);
index = sub2ind(size(im),ind_x,ind_y);
tic
for i=1:it
im(index) = im(index) + 1;
end
toc %// What's the cost now?
To profile subscript vs. linear indexing more fairly, I suggest one of two possible methods:
Make sure you incur allocation costs on both methods by creating two separate im matrices, im1 and im2, both initially set to zeros(N), and use each matrix for a separate indexing method.
Run a full get/set on each element of im before actually profiling between subscript vs. linear indexing.
Method 1:
N = 512; it = 400; im1 = zeros(N); im2 = zeros(N);
%// subscript indexing
x = 1:2:N;
y = 1:2:N;
tic
for i=1:it
im1(x,y) = im1(x,y) + 1;
end
toc %// What's the cost now?
%// linear indexing
[ind_x,ind_y] = ndgrid(1:2:N,1:2:N);
index = sub2ind(size(im2),ind_x,ind_y);
tic
for i=1:it
im2(index) = im2(index) + 1;
end
toc %// What's the cost now?
Method 2:
N = 512; it = 400; im = zeros(N);
%// Run a full get/set on each element to force allocation
tic
for i=1:N^2
im(i) = im(i) +1;
end
toc
%// subscript indexing
x = 1:2:N;
y = 1:2:N;
tic
for i=1:it
im(x,y) = im(x,y) +1;
end
toc %// What's the cost now?
%// linear indexing
[ind_x,ind_y] = ndgrid(1:2:N,1:2:N);
index = sub2ind(size(im),ind_x,ind_y);
tic
for i=1:it
im(index) = im(index) + 1;
end
toc %// What's the cost now?
I have a second hypothesis, which is that you incur some additional overhead when you explicitly declare each and every element to be accessed, versus letting MATLAB infer the elements for you. excasa's "duplicate post" reference (not exactly a duplicate, in my humble opinion) has the same general insight, but uses different data points to come to this conclusion. I won't write examples of this here, but basically, creating a straight-up giant index array, compared to the smaller subscript indices x and y, gives MATLAB less room for internal optimizations. I don't know what inside MATLAB would perform these specific optimizations, but perhaps they come from the black magic you may know as MATLAB's JIT/LXE. If you honestly want to check whether JIT is the culprit here (and are working in R2014b or prior), you can try disabling it and then running the code above.
There are several ways to disable the JIT:
Use the undocumented feature method (sketched below).
Copy/paste the commands into the command prompt, as opposed to running them straight from the script editor.
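For the first option, the toggle used elsewhere in this thread looks like this (works up to roughly R2014b):
feature accel off %// disable the accelerator/JIT (undocumented)
%// ... run the timing code under test ...
feature accel on %// restore the default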
Unfortunately, I do not know of a way to turn off LXE in R2015a and later, and trying to diagnose whether LXE is the culprit may be a bit of an uphill battle. If this is where you are stuck, perhaps you can delve even further via MathWorks' technical support or MATLAB Central. You may be surprised to find some astounding experts from either source.

Related

Fastest approach to copying/indexing variable parts of 3D matrix

I have large sets of 3D data consisting of 1D signals acquired in 2D space.
The first step in processing this data is thresholding all signals to find the arrival of a high-amplitude pulse. This pulse is present in all signals and arrives at different times.
After thresholding, the 3D data set should be reordered so that every signal starts at the arrival of the pulse and what came before is thrown away (the end of the signals is of no importance; as of now I concatenate zeros to the end of all signals so the data remains the same size).
Now, I have implemented this in the following manner:
First, I calculate the sample number of the first sample exceeding the threshold in all signals:
M = randn(1000,500,500); % example matrix of realistic size
threshold = 0.25*max(M(:,1,1)); % 25% of the maximum in the first signal as threshold
[~,index] = max(M>threshold); % indices of first sample exceeding threshold in all signals
Next, I want all signals to be shifted so that they all start with the pulse. For now, I have implemented it this way:
outM = zeros(size(M)); % preallocation for speed
for i = 1:size(M,2)
for j = 1:size(M,3)
outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
end
end
This works fine, and I know for-loops are not that slow anymore, but this easily takes a few seconds for my datasets on my machine. A single iteration of the for-loop takes about 0.05-0.1 sec, which seems slow to me for just copying a vector of 500-2000 double values.
Therefore, I have looked into the best way to tackle this, but for now I haven't found anything better.
I have tried several things: 3D masks, linear indexing, and parallel loops (parfor).
For the 3D masks, I checked to see if any improvements are possible. I first construct a logical mask, and then compare the speed of the logical-mask indexing/copying to the double nested for loop.
%% set up for logical mask copying
AA = true(500,1); % only copy the first 500 values after the threshold value
Mask = false(size(M));
Jepla = zeros(500,size(M,2),size(M,3));
for i = 1:size(M,2)
for j = 1:size(M,3)
Mask(index(1,i,j):index(1,i,j)+499,i,j) = AA;
end
end
%% speed comparison
tic
Jepla = M(Mask);
toc
tic
for i = 1:size(M,2)
for j = 1:size(M,3)
outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
end
end
toc
The for-loop is faster every time, even though it copies more data.
Next, linear indexing.
%% setup for linear index copying
%put all indices in 1 long column
LongIndex = reshape(index,numel(index),1);
% convert to linear indices and store in new variable
linearIndices = sub2ind(size(M),LongIndex,repmat(1:size(M,2),1,size(M,3))',repelem(1:size(M,3),size(M,2))');
% extend linear indices with those of all values to copy
k = zeros(numel(M),1);
count = 1;
for i = 1:numel(LongIndex)
values = linearIndices(i):size(M,1)*i;
k(count:count+length(values)-1) = values;
count = count + length(values);
end
k = k(1:count-1);
% get linear indices of locations in new matrix
l = zeros(length(k),1);
count = 1;
for i = 1:numel(LongIndex)
values = repelem(LongIndex(i)-1,size(M,1)-LongIndex(i)+1);
l(count:count+length(values)-1) = values;
count = count + length(values);
end
l = k-l;
% create new matrix
outM = zeros(size(M));
%% speed comparison
tic
outM(l) = M(k);
toc
tic
for i = 1:size(M,2)
for j = 1:size(M,3)
outM(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
end
end
toc
Again, the alternative approach, linear indexing, is (a lot) slower.
After this failed, I learned about parallelisation and thought this would surely speed up my code.
By reading some of the documentation around parfor and trying it out a bit, I changed my code to the following:
gcp;
outM = zeros(size(M));
inM = mat2cell(M,size(M,1),ones(size(M,2),1),size(M,3));
tic
parfor i = 1:500
for j = 1:500
outM(:,i,j) = [inM{i}(index(1,i,j):end,1,j);zeros(index(1,i,j)-1,1)];
end
end
toc
I changed it so that outM and inM would both be sliced variables, as I read this is best. Still, this is very slow, a lot slower than the original for loop.
So now the question, should I give up on trying to improve the speed of this operation? Or is there another way in which to do this? I have searched a lot, and for now do not see how to speed this up.
Sorry for the long question, but I wanted to show what I tried.
Thank you in advance!
Not sure if it is an option in your situation, but it looks like cell arrays are actually faster here:
outM2 = cell(size(M,2),size(M,3));
tic;
for i = 1:size(M,2)
for j = 1:size(M,3)
outM2{i,j} = M(index(1,i,j):end,i,j);
end
end
toc
And a second idea, which also came out faster: batch all data which has to be shifted by the same amount:
tic;
for i = unique(index).'
outM(1:size(M,1)+1-i,index==i) = M(i:end,index==i);
end
toc
It totally depends on your data if this approach is actually faster.
And yes, integer-valued and logical indexing can be mixed.
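As a sanity check of the batching idea, the result can be compared against the question's double loop (a sketch, assuming M, index, and the batched outM from above are in the workspace):
outM_loop = zeros(size(M));
for i = 1:size(M,2)
for j = 1:size(M,3)
outM_loop(1:size(M,1)+1-index(1,i,j),i,j) = M(index(1,i,j):end,i,j);
end
end
isequal(outM,outM_loop) %// expected: true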

Vectorizing the solution of a linear equation system in MATLAB

Summary: This question deals with the improvement of an algorithm for the computation of linear regression.
I have a 3D array (dlMAT) representing monochrome photographs of the same scene taken at different exposure times (the vector IT). Mathematically, every vector along the 3rd dimension of dlMAT represents a separate linear regression problem that needs to be solved. The equation whose coefficients need to be estimated is of the form:
DL = R*IT^P, where DL and IT are obtained experimentally and R and P must be estimated.
The above equation can be transformed into a simple linear model after applying a logarithm:
log(DL) = log(R) + P*log(IT) => y = a + b*x
Presented below is the most "naive" way to solve this system of equations, which essentially involves iterating over all "3rd-dimension vectors" and fitting a polynomial of order 1 to (IT, DL(ind1,ind2,:)):
%// Define some nominal values:
R = 0.3;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(3)+P;
rMAT = 0.1*randn(3)+R;
%// Generate "fake" observation data:
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%// Regression:
sol = cell(size(rMAT)); %// preallocation
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedP = cellfun(@(x)x(1),sol); %// Estimate of pMAT
fittedR = cellfun(@(x)exp(x(2)),sol); %// Estimate of rMAT
The above approach seems like a good candidate for vectorization, since it does not utilize MATLAB's main strength, which is MATrix operations. For this reason it does not scale very well and takes much longer to execute than I think it should.
There exist alternative ways to perform this computation based on matrix division, as demonstrated here and here, which involve something like this:
sol = [ones(size(x)),log(x)]\log(y);
That is, appending a vector of 1s to the observations, followed by mldivide to solve the equation system.
The main challenge I'm facing is how to adapt my data to the algorithm (or vice versa).
Question #1: How can the matrix-division-based solution be extended to solve the problem presented above (and potentially replace the loops I am using)?
Question #2 (bonus): What is the principle behind this matrix-division-based solution?
The secret ingredient behind the solution that includes matrix division is the Vandermonde matrix. The question discusses a linear problem (linear regression), and those can always be formulated as a matrix problem, which \ (mldivide) can solve in a mean-square-error sense. Such an algorithm, solving a similar problem, is demonstrated and explained in this answer.
Below is benchmarking code that compares the original solution with two alternatives suggested in chat:
function regressionBenchmark(numEl)
clc
if nargin<1, numEl=10; end
%// Define some nominal values:
R = 5;
IT = 600:600:3000;
P = 0.97;
%// Impose some believable spatial variations:
pMAT = 0.01*randn(numEl)+P;
rMAT = 0.1*randn(numEl)+R;
%// Generate "fake" measurement data using the relation "DL = R*IT.^P"
dlMAT = bsxfun(@times,rMAT,bsxfun(@power,permute(IT,[3,1,2]),pMAT));
%% // Method1: loops + polyval
disp('-------------------------------Method 1: loops + polyval')
tic; [fR,fP] = method1(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method2: loops + Vandermonde
disp('-------------------------------Method 2: loops + Vandermonde')
tic; [fR,fP] = method2(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
%% // Method3: vectorized Vandermonde
disp('-------------------------------Method 3: vectorized Vandermonde')
tic; [fR,fP] = method3(IT,dlMAT); toc;
fprintf(1,'Regression performance:\nR: %d\nP: %d\n',norm(fR-rMAT,1),norm(fP-pMAT,1));
function [fittedR,fittedP] = method1(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = polyfit(log(IT(:)),log(squeeze(dlMAT(ind1,ind2,:))),1);
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method2(IT,dlMAT)
sol = cell(size(dlMAT,1),size(dlMAT,2));
for ind1 = 1:size(dlMAT,1)
for ind2 = 1:size(dlMAT,2)
sol{ind1,ind2} = flipud([ones(numel(IT),1) log(IT(:))]\log(squeeze(dlMAT(ind1,ind2,:)))).';
end
end
fittedR = cellfun(@(x)exp(x(2)),sol);
fittedP = cellfun(@(x)x(1),sol);
function [fittedR,fittedP] = method3(IT,dlMAT)
N = 1; %// Degree of polynomial
VM = bsxfun(@power, log(IT(:)), 0:N); %// Vandermonde matrix
result = fliplr((VM\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
%// Compressed version:
%// result = fliplr(([ones(numel(IT),1) log(IT(:))]\log(reshape(dlMAT,[],size(dlMAT,3)).')).');
fittedR = exp(real(reshape(result(:,2),size(dlMAT,1),size(dlMAT,2))));
fittedP = real(reshape(result(:,1),size(dlMAT,1),size(dlMAT,2)));
The reason why method 2 can be vectorized into method 3 is essentially that matrix multiplication can be separated by the columns of the second matrix. If A*B produces matrix X, then by definition A*B(:,n) gives X(:,n) for any n. Moving A to the right-hand side with mldivide, this means that the divisions A\X(:,n) can be done in one go for all n with A\X. The same holds for an overdetermined system (linear regression problem), in which there is no exact solution in general, and mldivide finds the matrix that minimizes the mean-square error. In this case too, the operations A\X(:,n) (method 2) can be done in one go for all n with A\X (method 3).
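A tiny numeric illustration of this column separation (a self-contained sketch with made-up data):
A = [ones(5,1),(1:5).']; %// 5x2 design matrix (degree-1 Vandermonde)
X = A*[1 3; 2 4] + 0.01*randn(5,2); %// two noisy right-hand sides
sol_all = A\X; %// both least-squares fits in one call
sol_one = A\X(:,1); %// equals sol_all(:,1) up to round-off
norm(sol_all(:,1)-sol_one) %// ~0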
The implications of improving the algorithm grow with the size of dlMAT (benchmark figure omitted). For the case of 500*500 (or 2.5E5) elements, the speedup from Method 1 to Method 3 is about x3500!
It is also interesting to observe the output of profile (here, for the case of 500*500; the profiler screenshots for Methods 1, 2 and 3 are omitted).
From the above it is seen that rearranging the elements via squeeze and flipud takes up about half (!) of the runtime of Method 2. It is also seen that some time is lost on the conversion of the solution from cells to matrices.
Since the 3rd solution avoids all of these pitfalls, as well as the loops altogether (which mostly means re-evaluation of the script on every iteration), it unsurprisingly results in a considerable speedup.
Notes:
There was very little difference between the "compressed" and the "explicit" versions of Method 3, in favor of the "explicit" version. For this reason the "compressed" version was not included in the comparison.
A solution was attempted where the inputs to Method 3 were gpuArray-ed. This did not provide improved performance (and even degraded it somewhat), possibly due to a wrong implementation or the overhead associated with copying matrices back and forth between RAM and VRAM.

Is there a way to speed up concatenation in MATLAB?

I want to concatenate along the third dimension
z = cat(3,A,B,C);
Many, many times. If I were doing that along the second dimension, then
z = [A,B,C];
Would be faster than
z = cat(2,A,B,C);
Can a similar thing be done along the third dimension or is there any other way to speed this up?
There are some indexing options that give slightly better performance than cat(3,...).
Both solutions use U(30,30,3)=0; instead of zeros(30,30,3) to preallocate, but this is unsafe, as it will result in a subscripted assignment dimension mismatch when U already exists as a larger variable.
The first option is to assign the different slices individually.
%fast but unsafe preallocation
U(30,30,3)=0;
%robust alternative:
%U=zeros(30,30,3)
U(:,:,3)=C;
U(:,:,1)=A;
U(:,:,2)=B;
The second option is to use linear indexing. For z1 = cat(3,A,B,C); and z2 = [A,B,C] it is true that z1(:) == z2(:).
%fast but unsafe preallocation
U(30,30,3)=0;
%robust alternative:
%U=zeros(30,30,3)
U(:)=[A,B,C];
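A quick check of that layout claim (a sketch with small matrices):
A = magic(3); B = A+9; C = A+18;
z1 = cat(3,A,B,C);
z2 = [A,B,C];
isequal(z1(:),z2(:)) %// true: identical column-major memory order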
I benchmarked the solutions, comparing them to cat(3,A,B,C) and the 2D concatenation [A;B;C]. The linear indexing solution is only slightly slower than the 2D concatenation.
0.392289 s for 2D CAT
0.476525 s for Assign slices
0.588346 s for cat(3...)
0.392703 s for linear indexing
Code for benchmarking:
N=30;
A=randn(N,N);
B=randn(N,N);
C=randn(N,N);
T=containers.Map;
cycles=10^5;
tic;
for i=1:cycles
W=[A;B;C];
X=W+1;
end
T('2D CAT')=toc;
tic;
for i=1:cycles
W=cat(3,A,B,C);
X=W+1;
end
T('cat(3...)')=toc;
U=zeros(N,N,3);
tic;
for i=1:cycles
U(N,N,3)=0;
U(:,:,3)=C;
U(:,:,1)=A;
U(:,:,2)=B;
V=U+1;
end
T('Assign slices')=toc;
tic;
for i=1:cycles
U(N,N,3)=0;
U(:)=[A,B,C];
V=U+1;
end
T('linear indexing')=toc;
for X=T.keys
fprintf('%f s for %s\n',T(X{1}),X{1})
end

Apply function to rolling window

Say I have a long list A of values (say of length 1000) for which I want to compute the std over rolling windows of 100, i.e. I want to compute std(A(1:100)), std(A(2:101)), std(A(3:102)), ..., std(A(901:1000)).
In Excel/VBA one can easily accomplish this by writing e.g. =STDEV(A1:A100) in one cell and then filling down in one go. Now my question is, how could one accomplish this efficiently in Matlab without having to use any expensive for-loops.
edit: Is it also possible to do this for a list of time series, e.g. when A has dimensions 1000 x 4 (i.e. 4 time series of length 1000)? The output matrix should then have dimensions 901 x 4.
Note: For the fastest solution see Luis Mendo's answer
So firstly using a for loop for this (especially if those are your actual dimensions) really isn't going to be expensive. Unless you're using a very old version of Matlab, the JIT compiler (together with pre-allocation of course) makes for loops inexpensive.
Secondly - have you tried for loops yet? Because you should really try out the naive implementation first before you start optimizing prematurely.
Thirdly - arrayfun can make this a one liner but it is basically just a for loop with extra overhead and very likely to be slower than a for loop if speed really is your concern.
Finally some code:
n = 1000;
A = rand(n,1);
l = 100;
for loop (hardly bulky, likely to be efficient):
S = zeros(n-l+1,1); %//Pre-allocation of memory like this is essential for efficiency!
for t = 1:(n-l+1)
S(t) = std(A(t:(t+l-1)));
end
A vectorized (memory-inefficient!) solution:
[X,Y] = meshgrid(1:(n-l+1), 1:l);
S = std(A(X+Y-1));
A probably better vectorized solution (and a one-liner), but still memory-inefficient:
S = std(A(bsxfun(@plus, 0:n-l, (1:l)')));
Note that with all these methods you can replace std with any function, so long as it applies itself to the columns of the matrix (which is the standard in Matlab).
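For instance, swapping in mean gives a rolling mean with the same index trick (a sketch, reusing n, l, and A from above):
M = mean(A(bsxfun(@plus, 0:n-l, (1:l)'))); %// rolling mean over the same windows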
Going 2D:
To go 2D we need to go 3D
n = 1000;
k = 4;
A = rand(n,k);
l = 100;
ind = bsxfun(@plus, permute(0:n:(k-1)*n, [3,1,2]), bsxfun(@plus, 0:n-l, (1:l)'));
S = squeeze(std(A(ind)));
M = squeeze(mean(A(ind)));
%// etc...
OR
[X,Y,Z] = meshgrid(1:(n-l+1), 1:l, 0:n:(k-1)*n);
ind = X+Y+Z-1;
S = squeeze(std(A(ind)));
M = squeeze(mean(A(ind)));
%// etc...
OR
ind = bsxfun(@plus, 0:n-l, (1:l)');
S = zeros(n-l+1,k);
M = zeros(n-l+1,k);
for t = 1:k
S(:,t) = std(A(ind + (t-1)*n)).';
M(:,t) = mean(A(ind + (t-1)*n)).';
%// etc...
end
OR (taken from Luis Mendo's answer - note in his answer he shows a faster alternative to this simple loop)
S = zeros(n-l+1,k);
M = zeros(n-l+1,k);
for t = 1:(n-l+1)
S(t,:) = std(A(t:(t+l-1),:));
M(t,:) = mean(A(t:(t+l-1),:));
%// etc...
end
What you're doing is basically a filter operation.
If you have access to the image processing toolbox,
stdfilt(A,ones(101,1)) %# stdfilt requires odd window sizes, hence 101; assumes that data series are in columns
will do the trick (no matter the dimensionality of A). Note that if you also have access to the parallel computing toolbox, you can let filter operations like these run on a GPU, although your problem might be too small to generate noticeable speedups.
To minimize the number of operations, you can exploit the fact that the standard deviation can be computed as a difference involving the second and first moments,
and moments over a rolling window are obtained efficiently with a cumulative sum (using cumsum):
A = randn(1000,4); %// random data
N = 100; %// window size
c = size(A,2);
A1 = [zeros(1,c); cumsum(A)];
A2 = [zeros(1,c); cumsum(A.^2)];
S = sqrt( (A2(1+N:end,:)-A2(1:end-N,:) ...
- (A1(1+N:end,:)-A1(1:end-N,:)).^2/N) / (N-1) ); %// result
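A quick spot check of the first window against a direct computation (a sketch, using S, A, and N from the block above; expect only round-off differences):
max(abs(S(1,:) - std(A(1:N,:)))) %// should be on the order of 1e-15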
Benchmarking
Here's a comparison against a loop-based solution, using timeit. The loop approach is as in Dan's solution, but adapted to the 2D case, exploiting the fact that std works along each column in a vectorized manner.
%// File loop_approach.m
function S = loop_approach(A,N);
[n, p] = size(A);
S = zeros(n-N+1,p);
for k = 1:(n-N+1)
S(k,:) = std(A(k:(k+N-1),:));
end
%// File bsxfun_approach.m
function S = bsxfun_approach(A,N);
[n, p] = size(A);
ind = bsxfun(@plus, permute(0:n:(p-1)*n, [3,1,2]), bsxfun(@plus, 0:n-N, (1:N).'));
S = squeeze(std(A(ind)));
%// File cumsum_approach.m
function S = cumsum_approach(A,N);
c = size(A,2);
A1 = [zeros(1,c); cumsum(A)];
A2 = [zeros(1,c); cumsum(A.^2)];
S = sqrt( (A2(1+N:end,:)-A2(1:end-N,:) ...
- (A1(1+N:end,:)-A1(1:end-N,:)).^2/N) / (N-1) );
%// Benchmarking code
clear all
A = randn(1000,4); %// Or A = randn(1000,1);
N = 100;
t_loop = timeit(@() loop_approach(A,N));
t_bsxfun = timeit(@() bsxfun_approach(A,N));
t_cumsum = timeit(@() cumsum_approach(A,N));
disp(' ')
disp(['loop approach: ' num2str(t_loop)])
disp(['bsxfun approach: ' num2str(t_bsxfun)])
disp(['cumsum approach: ' num2str(t_cumsum)])
disp(' ')
disp(['bsxfun/loop gain factor: ' num2str(t_loop/t_bsxfun)])
disp(['cumsum/loop gain factor: ' num2str(t_loop/t_cumsum)])
Results
I'm using Matlab R2014b, Windows 7 64 bits, dual core processor, 4 GB RAM:
4-column case:
loop approach: 0.092035
bsxfun approach: 0.023535
cumsum approach: 0.0002338
bsxfun/loop gain factor: 3.9106
cumsum/loop gain factor: 393.6526
Single-column case:
loop approach: 0.085618
bsxfun approach: 0.0040495
cumsum approach: 8.3642e-05
bsxfun/loop gain factor: 21.1431
cumsum/loop gain factor: 1023.6236
So the cumsum-based approach seems to be the fastest: about 400 times faster than the loop in the 4-column case, and 1000 times faster in the single-column case.
Several functions can do the job efficiently in Matlab.
On one side, you can use functions such as colfilt or nlfilter, which perform computations on sliding blocks. colfilt is way more efficient than nlfilter, but can be used only if the order of the elements inside a block does not matter. Here is how to use it on your data:
S = colfilt(A, [100,1], 'sliding', @std);
or
S = nlfilter(A, [100,1], @std);
On your example, you can clearly see the difference in performance. But there is a catch: both functions pad the input array so that the output vector has the same size as the input array. To get only the relevant part of the output vector, you need to skip the first floor((100-1)/2) = 49 elements and take the next 1000-100+1 = 901 values:
S(50:end-50)
But there is also another solution, close to colfilt but more efficient. colfilt calls im2col to reshape the input vector into a matrix, on which it applies the given function to each distinct column. This transforms your input vector of size [1000,1] into a matrix of size [100,901]. But colfilt also pads the input array with 0s or 1s, which you don't need. So you can skip the padding step: call im2col yourself, then apply std to each column; this is easy because std applied to a matrix returns a row vector of the stds of the columns. Finally, transpose it to get a column vector if you want. In brief, in one line:
S = std(im2col(A,[100 1],'sliding')).';
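As a quick check of the one-liner against the naive loop (a sketch, assuming A is the 1000x1 vector from the question):
S2 = zeros(901,1);
for t = 1:901
S2(t) = std(A(t:t+99));
end
max(abs(std(im2col(A,[100 1],'sliding')).' - S2)) %// expect ~1e-15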
Remark: if you want to apply a more complex function, see the code of colfilt, lines 144 and 147 (for R2013b).
If your concern is the speed of the for loop, you can greatly reduce the number of loop iterations by folding your vector into an array (using reshape) whose columns have the number of elements you want to apply your function to.
This lets Matlab and the JIT perform the optimization (and in most cases they do that way better than us) by calculating your function on each column of your array.
You then reshape an offset version of your array and do the same. You will still need a loop, but the number of iterations will only be l (so 100 in your example case), instead of the n-l+1 = 901 iterations of a classic for loop (one window at a time).
When you're done, you reshape the array of results into a vector; you then still need to calculate the last window manually, but overall it is still much faster.
Taking the same input notation as Dan:
n = 1000;
A = rand(n,1);
l = 100;
It will take this shape:
width = (n/l)-1 ; %// width of each line in the temporary result array
tmp = zeros( l , width ) ; %// preallocation never hurts
for k = 1:l
tmp(k,:) = std( reshape( A(k:end-l+k-1) , l , [] ) ) ; %// calculate your stat on the array (reshaped vector)
end
S2 = [tmp(:) ; std( A(end-l+1:end) ) ] ; %// "unfold" your results then add the last window calculation
If I tic...toc the complete loop version and the folded one, I obtain these averaged results:
Elapsed time is 0.057190 seconds. %// windows by window FOR loop
Elapsed time is 0.016345 seconds. %// "Folded" FOR loop
I know tic/toc is not the way to go for perfect timing, but I don't have the timeit function in my Matlab version. Besides, the difference is significant enough to show that there is an improvement (albeit not precisely quantifiable by this method). I removed the first run of course, and I checked that the results are consistent across different matrix sizes.
Now regarding your "one liner" request, I suggest you wrap this code into a function like so:
function out = foldfunction( func , vec , nPts )
n = length( vec ) ;
width = (n/nPts)-1 ;
tmp = zeros( nPts , width ) ;
for k = 1:nPts
tmp(k,:) = func( reshape( vec(k:end-nPts+k-1) , nPts , [] ) ) ;
end
out = [tmp(:) ; func( vec(end-nPts+1:end) ) ] ;
Which in your main code allows you to call it in one line:
S = foldfunction( #std , A , l ) ;
The other great benefit of this format is that you can use the very same subfunction for other statistical functions. For example, if you want the mean of your windows, you call the same function, just changing the func argument:
S = foldfunction( #mean , A , l ) ;
The only restriction is that, as written, it only works for vector inputs, but with a bit of rework it could be made to take arrays too.

MATLAB fast (componentwise) vector operations are...really fast

I have been writing MATLAB scripts for some time and I still do not understand how it works "under the hood". Consider the following script, which does some computation using (big) vectors in three different ways:
MATLAB vector operations;
a simple for loop that does the same computation component-wise;
an optimized loop that is supposed to be faster than 2., since it avoids some allocations and assignments.
Here is the code:
N = 10000000;
A = linspace(0,100,N);
B = linspace(-100,100,N);
C = linspace(0,200,N);
D = linspace(100,200,N);
% 1. MATLAB Operations
tic
C_ = C./A;
D_ = D./B;
G_ = (A+B)/2;
H_ = (C_+D_)/2;
I_ = (C_.^2+D_.^2)/2;
X = G_ .* H_;
Y = G_ .* H_.^2 + I_;
toc
tic
X;
Y;
toc
% 2. Simple cycle
tic
C_ = zeros(1,N);
D_ = zeros(1,N);
G_ = zeros(1,N);
H_ = zeros(1,N);
I_ = zeros(1,N);
X = zeros(1,N);
Y = zeros(1,N);
for i = 1:N,
C_(i) = C(i)/A(i);
D_(i) = D(i)/B(i);
G_(i) = (A(i)+B(i))/2;
H_(i) = (C_(i)+D_(i))/2;
I_(i) = (C_(i)^2+D_(i)^2)/2;
X(i) = G_(i) * H_(i);
Y(i) = G_(i) * H_(i)^2 + I_(i);
end
toc
tic
X;
Y;
toc
% 3. Optimized cycle
tic
X = zeros(1,N);
Y = zeros(1,N);
for i = 1:N,
X(i) = (A(i)+B(i))/2 * (( C(i)/A(i) + D(i)/B(i) ) /2);
Y(i) = (A(i)+B(i))/2 * (( C(i)/A(i) + D(i)/B(i) ) /2)^2 + ( (C(i)/A(i))^2 + (D(i)/B(i))^2 ) / 2;
end
toc
tic
X;
Y;
toc
I know that one should always try to vectorize computations, MATLAB being built around matrices/vectors (though, nowadays, that is not always the best choice), so I am expecting that something like:
C = A .* B;
is faster than:
for i = 1:N
C(i) = A(i) * B(i);
end
What I am not expecting is that the vectorized version is actually faster even in the above script, despite the fact that the second and third methods go through only one loop, whereas the first method performs many vector operations (each of which is, theoretically, a "for" loop in itself). This forces me to conclude that MATLAB has some magic that permits (for example):
C = A .* B;
D = C .* C;
to run faster than a single "for" loop with some operations inside it.
So:
What is the magic that allows the 1st part to execute so fast?
When you write "D = A .* B", does MATLAB actually do a component-wise computation with a "for" loop, or does it simply keep track that D contains some multiplication of "bla" and "bla"?
EDIT
Suppose I want to implement the same computation using C++ (maybe using some library). Will the first MATLAB method be faster even than the third one implemented in C++? (I'll answer this question myself; just give me some time.)
EDIT 2
As requested, here there are the experiment runtimes:
Part 1: 0.237143
Part 2: 4.440132
of which 0.195154 for allocation
Part 3: 2.280640
of which 0.057500 for allocation
and without JIT:
Part 1: 0.337259
Part 2: 149.602017
of which 0.033886 for allocation
Part 3: 82.167713
of which 0.010852 for allocation
The first one is the fastest because the vectorized code can be translated into a small number of calls to optimized C++ library routines. Matlab could also optimize it at a higher level: for example, it could replace G*H+I with an optimized mul_add(G,H,I) instead of add(mul(G,H),I) in its core.
The second one can't be converted to C++ calls easily. It has to be interpreted or compiled. The most modern approach for scripting languages is JIT compilation. The Matlab JIT compiler is not very good, but that doesn't mean it has to stay that way; I don't know why MathWorks doesn't improve it. Thus #2 is so slow that #1 is faster even though it performs more "mathematical" operations.
The Julia language was invented as a compromise between Matlab expressiveness and C++ speed. The same non-vectorized code (julia vs matlab) runs very fast there because its JIT compilation is very good.
Regarding performance optimization, I follow @memyself's suggestion of using the profiler for both approaches, as mentioned in "'for' loop vs vectorization in MATLAB".
For educational purposes it makes sense to experiment with numerical algorithms; for anything else I would go with well-proven libraries.