I am working on computing some features of an image data set and saving the features for later use. Below is the code:
tic
l = 9907; % size of image data set
% pre-allocating space for variables in the for loop
Icolor = cell(1,l);
Iwave = cell(1,l);
IglrlFeatures = cell(1,l);
for i = 1:l % l = size of image data set = 9907
IDB{1,i} = imread(strcat(path, num2str(i), '.jpg'));
Icolor{1,i} = colorMoments(IDB{1,i}); % 6-features in each cell
Iwave{1,i} = waveletTransform(IDB{1,i}); % 8-features in each cell
IglrlFeatures{1,i} = textureFeatures(IDB{1,i}); % 44-features in each cell
ICW{1,i} = [Icolor{1,i} Iwave{1,i} IglrlFeatures{1,i}];
end
toc
Here is the computation time for each function on a single image:
colorMoments(single_image) = Elapsed time is 0.009689 seconds.
waveletTransform(single_image) = Elapsed time is 0.018069 seconds.
textureFeatures(single_image) = Elapsed time is 0.022902 seconds.
l = data set size = 9907 images
Computational times for different data set sizes (l):
l = 10; Elapsed time is 0.402629 seconds.
l = 100; Elapsed time is 2.233971 seconds.
l = 1000; Elapsed time is 21.178395 seconds.
l = 2000; Elapsed time is 44.510071 seconds.
l = 5000; Elapsed time is 111.393866 seconds.
l = 9907; Elapsed time is 238.924998 seconds. (~4 minutes)
I want to decrease this computational time, any suggestions?
Thanks,
Gopi
The measured times indicate a computational complexity of order O(n). I doubt that the order can be reduced further for this type of problem, so at best we can hope for a constant-factor speedup.
One thing you should look into is whether the code is using multiple processor cores. If not, try restructuring the for-loop to be able to use parfor instead.
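For example, here is a minimal sketch of the loop restructured for parfor. This assumes the Parallel Computing Toolbox is available, the three feature functions are independent per image, and only the concatenated feature vector ICW is needed afterwards; note that parfor startup overhead and the disk I/O in imread may limit the gains.
% hedged sketch: requires the Parallel Computing Toolbox
l = 9907;
ICW = cell(1,l);
parfor i = 1:l
img = imread(strcat(path, num2str(i), '.jpg')); % path as defined in the question
% each iteration is independent, so the body can run on any worker
ICW{i} = [colorMoments(img) waveletTransform(img) textureFeatures(img)];
end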
I have some problems with the efficiency of my code. Basically my code works like this:
a = zeros(1,50000);
for n = 1:50000
a(n) = 10.*n - 5;
end
sum(a);
What is the fastest way to compute the sum of all the elements of this vector?
First, you can remove the for loop by vectorizing the computation:
tic
a = zeros(1,50000);
b = [1:50000];
a = 10.*b-5;
result = sum(a);
toc
Elapsed time is 0.008504 seconds.
An alternative is to simplify the operation itself: you are multiplying each of 1 to 50000 by 10, subtracting 5, and then taking the sum (a single number), which is equivalent to:
tic
result = sum(1:50000)*10 - 5*50000;
toc
Elapsed time is 0.003851 seconds.
Or, if you are really into math (a pure closed-form approach):
tic
result = (1+50000)*(50000/2)*10 - 5*50000;
toc
Elapsed time is 0.003702 seconds.
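This closed form follows from the arithmetic-series identity 1 + 2 + ... + N = N*(N+1)/2 with N = 50000, so the result is 10*(50000*50001/2) - 5*50000 = 12,500,000,000.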
As you can see, a little math can do more good than purely efficient programming. Also, loops are not always slow; in this case, the loop is actually faster than the vectorized method:
tic
a = zeros(1,50000);
for n = 1:50000
a(n)=10.*n-5;
end
sum(a);
toc
Elapsed time is 0.006431 seconds.
Timing
Let's do some timing and see the results. A function to run the comparison yourself is provided at the bottom. The approximate execution time execTime is in seconds, and the improvement impPercentage is in %.
Results
R2016a on OSX 10.11.4
execTime impPercentage
__________ _____________
loop 0.00059336 0
vectorized 0.00014494 75.574
adiel 0.00010468 82.359
math 9.3659e-08 99.984
Code
The following function can be used to generate the output. Note that it requires at least R2013b, for the built-in timeit function and table.
function timings
%feature('accel','on') % commented out because it's undocumented
cycleCount = 100;
execTime = zeros(4,cycleCount);
names = {'loop';'vectorized';'adiel';'math'};
w = warning;
warning('off','MATLAB:timeit:HighOverhead');
for k = 1:cycleCount
execTime(1,k) = timeit(@()loop,1);
execTime(2,k) = timeit(@()vectorized,1);
execTime(3,k) = timeit(@()adiel,1);
execTime(4,k) = timeit(@()math,1);
end
warning(w);
execTime = min(execTime,[],2);
impPercentage = (1 - execTime/max(execTime)) * 100;
table(execTime,impPercentage,'RowNames',names)
function result = loop
a = zeros(1,50000);
for n = 1:50000
a(n) = 10.*n - 5;
end
result = sum(a);
function result = vectorized
b = 1:50000;
a = 10.*b - 5;
result = sum(a);
function result = adiel
result = sum(1:50000)*10 - 5*50000;
function result = math
result = (1+50000)*(50000/2)*10 - 5*50000;
I commonly need to summarize an irregularly-timed time series with a given aggregation function (e.g., sum, average, etc.). However, my current solution seems inefficient and slow.
Take the aggregation function:
function aggArray = aggregate(array, groupIndex, collapseFn)
groups = unique(groupIndex, 'rows');
aggArray = nan(size(groups, 1), size(array, 2));
for iGr = 1:size(groups,1)
grIdx = all(groupIndex == repmat(groups(iGr,:), [size(groupIndex,1), 1]), 2);
for iSer = 1:size(array, 2)
aggArray(iGr,iSer) = collapseFn(array(grIdx,iSer));
end
end
end
Note that both array and groupIndex can be 2D. Every column in array is an independent series to be aggregated, but the columns of groupIndex should be taken together (as a row) to specify a period.
Then, when we feed an irregular time series to it (note that the second period is one base period longer than the first), the timing results are poor:
a = rand(20006,10);
b = transpose([ones(1,5) 2*ones(1,6) sort(repmat((3:4001), [1 5]))]);
tic; aggregate(a, b, @sum); toc
Elapsed time is 1.370001 seconds.
Using the profiler, we can find that the grIdx line takes about 1/4 of the execution time (0.28 s) and the iSer loop takes about 3/4 (1.17 s) of the total (1.48 s).
Compare this with the period-indifferent case:
tic; cumsum(a); toc
Elapsed time is 0.000930 seconds.
Is there a more efficient way to aggregate this data?
Timing Results
Taking each response and putting it in a separate function, here are the timing results I get using timeit in MATLAB R2015b on Windows 7 with an Intel i7:
original | 1.32451
felix1 | 0.35446
felix2 | 0.16432
divakar1 | 0.41905
divakar2 | 0.30509
divakar3 | 0.16738
matthewGunn1 | 0.02678
matthewGunn2 | 0.01977
Clarification on groupIndex
An example of a 2D groupIndex would be where both the year number and week number are specified for a set of daily data covering 1980-2015:
a2 = rand(36*52*5, 10);
b2 = [sort(repmat(1980:2015, [1 52*5]))' repmat(1:52, [1 36*5])'];
Thus a "year-week" period is uniquely identified by a row of groupIndex. This is effectively handled by calling unique(groupIndex, 'rows') and taking the third output, so feel free to disregard this portion of the question.
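For illustration, here is a minimal sketch of that 2D-to-1D reduction using the third output of unique, with b2 as defined above:
% collapse each "year-week" row of groupIndex into a single integer label
[~, ~, idx] = unique(b2, 'rows'); % idx(k) is the 1D group id of row k of b2
% idx can now be used as a plain 1D group label vector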
Method #1
You can create the mask corresponding to grIdx across all
groups in one go with bsxfun(@eq,..). Now, for collapseFn as @sum, you can bring in matrix multiplication and thus have a completely vectorized approach, like so -
M = squeeze(all(bsxfun(@eq,groupIndex,permute(groups,[3 2 1])),2))
aggArray = M.'*array
For collapseFn as @mean, you need to do a bit more work, as shown here -
M = squeeze(all(bsxfun(@eq,groupIndex,permute(groups,[3 2 1])),2))
aggArray = bsxfun(@rdivide,M,sum(M,1)).'*array
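As a usage sketch for the @sum case, with groups assumed to be obtained the same way as in the original code:
groups = unique(groupIndex, 'rows');
M = squeeze(all(bsxfun(@eq,groupIndex,permute(groups,[3 2 1])),2));
aggArray = M.'*array; % each group's rows are summed in one matrix multiply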
Method #2
In case you are working with a generic collapseFn, you can use the 2D mask M created with the previous method to index into the rows of array, thus changing the complexity from O(n^2) to O(n). Some quick tests suggest this to give appreciable speedup over the original loopy code. Here's the implementation -
n = size(groups,1);
M = squeeze(all(bsxfun(@eq,groupIndex,permute(groups,[3 2 1])),2));
out = zeros(n,size(array,2));
for iGr = 1:n
out(iGr,:) = collapseFn(array(M(:,iGr),:),1);
end
Please note that the 1 in collapseFn(array(M(:,iGr),:),1) denotes the dimension along which collapseFn would be applied, so that 1 is essential there.
Bonus
Going by its name, groupIndex seems like it would hold integer values, which can be exploited to build M more efficiently: treat each row of groupIndex as an indexing tuple, convert each row into a single scalar, and thereby obtain a 1D version of groupIndex. This should be more efficient, as the data size would be O(n) now. This M can be fed to all the approaches listed in this post. So, we would have M like so -
dims = max(groupIndex,[],1);
agg_dims = cumprod([1 dims(end:-1:2)]);
[~,~,idx] = unique(groupIndex*agg_dims(end:-1:1).');
m = size(groupIndex,1);
M = false(m,max(idx));
M((idx-1)*m + (1:m)') = true;
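This M drops straight into the loop from Method #2, for example; only the construction of M and the group count change:
n = size(M,2); % number of groups after the 1D reduction
out = zeros(n,size(array,2));
for iGr = 1:n
out(iGr,:) = collapseFn(array(M(:,iGr),:),1);
end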
Mex Function 1
HAMMER TIME: Mex function to crush it:
The base case test with the original code from the question took 1.334139 seconds on my machine. IMHO, the 2nd fastest answer, from @Divakar, is:
groups2 = unique(groupIndex);
aggArray2 = squeeze(all(bsxfun(@eq,groupIndex,permute(groups2,[3 2 1])),2)).'*array;
Elapsed time is 0.589330 seconds.
Then my MEX function:
[groups3, aggArray3] = mg_aggregate(array, groupIndex, @(x) sum(x, 1));
Elapsed time is 0.079725 seconds.
Testing that we get the same answer: norm(groups2-groups3) returns 0 and norm(aggArray2 - aggArray3) returns 2.3959e-15. Results also match original code.
Code to generate the test conditions:
array = rand(20006,10);
groupIndex = transpose([ones(1,5) 2*ones(1,6) sort(repmat((3:4001), [1 5]))]);
For pure speed, go MEX. If the thought of compiling C++ code and the added complexity is too much of a pain, go with Divakar's answer. Another disclaimer: I haven't subjected my function to robust testing.
Mex Approach 2
Somewhat surprisingly to me, this code appears even faster than the full MEX version in some cases (e.g., this test took about 0.05 seconds). It uses a MEX function mg_getRowsWithKey to figure out the indices of groups. I think that may be because the array copying in my full MEX function isn't as fast as it could be, and/or because of the overhead from calling feval. It's basically the same algorithmic complexity as the other version.
[unique_groups, map] = mg_getRowsWithKey(groupIndex);
results = zeros(length(unique_groups), size(array,2));
for iGr = 1:length(unique_groups)
array_subset = array(map{iGr},:);
% apply your collapse function to array_subset, e.g.:
results(iGr,:) = sum(array_subset, 1);
end
When you do array(groups(1)==groupIndex,:) to pull out the array entries associated with a group, you're searching through the ENTIRE length of groupIndex. If you have millions of row entries, this will totally suck. array(map{1},:) is far more efficient.
There's still unnecessary copying of memory and other overhead associated with calling feval on the collapse function. If the aggregator function is implemented efficiently in C++ so as to avoid copying memory, probably another 2x speedup could be achieved.
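For readers without the MEX file, a rough pure-MATLAB stand-in for mg_getRowsWithKey (assumed, as used above, to return the unique group rows plus a cell array mapping each group to its row indices) might look like this; it will be slower than the MEX version but produces the same map structure:
[unique_groups, ~, idx] = unique(groupIndex, 'rows');
% collect the row numbers belonging to each group into one cell per group
map = accumarray(idx, (1:numel(idx))', [], @(r){sort(r)});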
A little late to the party, but a single loop using accumarray makes a huge difference:
function aggArray = aggregate_gnovice(array, groupIndex, collapseFn)
[groups, ~, index] = unique(groupIndex, 'rows');
numCols = size(array, 2);
aggArray = nan(size(groups, 1), numCols);
for col = 1:numCols
aggArray(:, col) = accumarray(index, array(:, col), [], collapseFn);
end
end
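For reference, the call with the question's sample data looks like this:
a = rand(20006,10);
b = transpose([ones(1,5) 2*ones(1,6) sort(repmat((3:4001), [1 5]))]);
aggArray = aggregate_gnovice(a, b, @sum);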
Timing this using timeit in MATLAB R2016b for the sample data in the question gives the following:
original | 1.127141
gnovice | 0.002205
Over a 500x speedup!
Doing away with the inner loop, i.e.
function aggArray = aggregate(array, groupIndex, collapseFn)
groups = unique(groupIndex, 'rows');
aggArray = nan(size(groups, 1), size(array, 2));
for iGr = 1:size(groups,1)
grIdx = all(groupIndex == repmat(groups(iGr,:), [size(groupIndex,1), 1]), 2);
aggArray(iGr,:) = collapseFn(array(grIdx,:));
end
and calling the collapse function with a dimension parameter
res = aggregate(a, b, @(x)sum(x,1));
already gives some speedup (3x on my machine) and avoids the errors that e.g. sum or mean produce when they encounter a single row of data without a dimension parameter and then collapse across columns rather than rows.
If you had just one group label vector, i.e. same group labels for all columns of data, you could speed further up:
function aggArray = aggregate(array, groupIndex, collapseFn)
ng=max(groupIndex);
aggArray = nan(ng, size(array, 2));
for iGr = 1:ng
aggArray(iGr,:) = collapseFn(array(groupIndex==iGr,:));
end
The latter function gives identical results for your example, with a 6x speedup, but it cannot handle different group labels per data column.
Assuming a 2D test case for the group index (provided here with 10 different columns for groupIndex):
a = rand(20006,10);
B=[]; % make random length periods for each of the 10 signals
for i=1:size(a,2)
n0=randi(10);
b=transpose([ones(1,n0) 2*ones(1,11-n0) sort(repmat((3:4001), [1 5]))]);
B=[B b];
end
tic; erg0=aggregate(a, B, @sum); toc % original method
tic; erg1=aggregate2(a, B, @(x)sum(x,1)); toc % just remove the inner loop
tic; erg2=aggregate3(a, B, @(x)sum(x,1)); toc % use the function below
Elapsed time is 2.646297 seconds.
Elapsed time is 1.214365 seconds.
Elapsed time is 0.039678 seconds (!!!!).
function aggArray = aggregate3(array, groupIndex, collapseFn)
% assumes identical rows of groupIndex are contiguous, so each group
% occupies a single block of rows (true for a sorted time series)
[groups,ix1,~] = unique(groupIndex, 'rows','first'); % first row of each group
[~,ix2,~] = unique(groupIndex, 'rows','last'); % last row of each group
ng=size(groups,1);
aggArray = nan(ng, size(array, 2));
for iGr = 1:ng
aggArray(iGr,:) = collapseFn(array(ix1(iGr):ix2(iGr),:));
end
I think this is as fast as it gets without using MEX. Thanks to Matthew Gunn for the suggestion!
Profiling shows that unique is really cheap here, and extracting just the first and last index of each block of repeated rows in groupIndex speeds things up considerably. I get an 88x speedup with this iteration of the aggregation.
Well, I have a solution that is almost as quick as the MEX one, but using only MATLAB.
The logic is the same as in most of the answers above, creating a dummy 2D matrix, but instead of using @eq I initialize the logical array directly.
Elapsed time for mine is 0.172975 seconds.
Elapsed time for Divakar's is 0.289122 seconds.
function aggArray = aggregate(array, group, collapseFn)
[m,~] = size(array);
n = max(group);
D = false(m,n); % logical mask: D(i,k) is true if row i belongs to group k
row = (1:m)';
idx = m*(group(:) - 1) + row; % linear indices of the true entries
D(idx) = true;
aggArray = zeros(n,size(array,2)); % one output row per group
for ii = 1:n
aggArray(ii,:) = collapseFn(array(D(:,ii),:),1);
end
end
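A usage sketch with the question's test data; note that this variant, like the other label-vector versions, requires group to be a vector of positive integers:
a = rand(20006,10);
b = transpose([ones(1,5) 2*ones(1,6) sort(repmat((3:4001), [1 5]))]);
res = aggregate(a, b, @sum); % sum(x,1) is applied to each group's rows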