Convert point cloud to voxels via averaging - matlab

I have the following data:
N = 10^3;
x = randn(N,1);
y = randn(N,1);
z = randn(N,1);
f = x.^2+y.^2+z.^2;
Now I want to split this continuous 3D space into nB bins.
nB = 20;
[~,~,x_bins] = histcounts(x,nB);
[~,~,y_bins] = histcounts(y,nB);
[~,~,z_bins] = histcounts(z,nB);
And put in each cube average f or nan if no observations happen in the cube:
F = nan(nB,nB,nB);
for iX = 1:nB
for iY = 1:nB
for iZ = 1:nB
idx = (x_bins==iX)&(y_bins==iY)&(z_bins==iZ);
F(iX,iY,iZ) = mean(f(idx));
end
end
end
isosurface(F,0.5)
This code does what I want. My problem is the speed. This code is extremely slow when N > 10^5 and nB = 100.
How can I speed up this code?
I also tried the accumarray() function:
subs = [x_bins, y_bins, z_bins];
F2 = accumarray(subs,f,[],@mean);
all(F(:) == F2(:)) % false
However, this code produces a different result.

The problem with the code in the OP is that it tests all N elements of the data for each element of the output array. The output array has nB^3 elements, the data has N elements, so the algorithm is O(N*nB^3). Instead, one can loop over the N elements of the input and set the corresponding element in the output array, which is O(N) (2nd code block below).
The accumarray solution in the OP needs to use the fillval parameter, setting it to NaN (3rd code block below).
To compare the results, one needs to explicitly test that both arrays have NaN in the same locations, and have equal non-NaN values elsewhere:
all( ( isnan(F(:)) & isnan(F2(:)) ) | ( F(:) == F2(:) ) )
% \-------same NaN values------/ \--same values--/
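Alternatively, isequaln (available in MATLAB and recent Octave) treats NaN values as equal, so the same check is simply:
isequaln(F,F2)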
Here is the code. All three versions produce identical results. Timings in Octave 4.4.1 (no JIT); in MATLAB the loop code should be faster. (Using input data from the OP, with N = 10^3 and nB = 20.)
%% OP's code, O(N*nB^3)
tic
F = nan(nB,nB,nB);
for iX = 1:nB
for iY = 1:nB
for iZ = 1:nB
idx = (x_bins==iX)&(y_bins==iY)&(z_bins==iZ);
F(iX,iY,iZ) = mean(f(idx));
end
end
end
toc
% Elapsed time is 1.61736 seconds.
%% Looping over input, O(N)
tic
s = zeros(nB,nB,nB);
c = zeros(nB,nB,nB);
ind = sub2ind([nB,nB,nB],x_bins,y_bins,z_bins);
for ii=1:N
s(ind(ii)) = s(ind(ii)) + f(ii);
c(ind(ii)) = c(ind(ii)) + 1;
end
F2 = s ./ c;
toc
% Elapsed time is 0.0606539 seconds.
%% Other alternative, using accumarray
tic
ind = sub2ind([nB,nB,nB],x_bins,y_bins,z_bins);
F3 = accumarray(ind,f,[nB,nB,nB],@mean,NaN);
toc
% Elapsed time is 0.14113 seconds.
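A further variant worth benchmarking (an addition, not timed in the original answer): accumarray with a custom function handle such as @mean is typically much slower than its default summing mode, so accumulating the sum and the count separately and dividing often helps, with 0/0 conveniently yielding the required NaN for empty voxels:
%% Sketch: accumarray without a custom function handle
ind = sub2ind([nB,nB,nB],x_bins,y_bins,z_bins);
s = accumarray(ind, f, [nB^3,1]);   % per-voxel sum
c = accumarray(ind, 1, [nB^3,1]);   % per-voxel count
F4 = reshape(s./c, [nB,nB,nB]);     % 0/0 gives NaN where a voxel is empty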


Serious performance issue with iterating simulations

I recently stumbled upon a performance problem while implementing a simulation algorithm. I managed to find the bottleneck function (seemingly, it's the internal call to arrayfun that slows everything down):
function sim = simulate_frequency(the_f,k,n)
r = rand(1,n);
x = arrayfun(@(x) find(x <= the_f,1,'first'),r);
sim = (histcounts(x,[1:k Inf]) ./ n).';
end
It is being used in other parts of code as follows:
h0 = zeros(1,sims);
for i = 1:sims
p = simulate_frequency(the_f,k,n);
h0(i) = max(abs(p - the_p));
end
Here are some possible values:
% Test Case 1
sims = 10000;
the_f = [0.3010; 0.4771; 0.6021; 0.6990; 0.7782; 0.8451; 0.9031; 0.9542; 1.0000];
k = 9;
n = 95;
% Test Case 2
sims = 10000;
the_f = [0.0413; 0.0791; 0.1139; 0.1461; 0.1760; 0.2041; 0.2304; 0.2552; 0.2787; 0.3010; 0.3222; 0.3424; 0.3617; 0.3802; 0.3979; 0.4149; 0.4313; 0.4471; 0.4623; 0.4771; 0.4913; 0.5051; 0.5185; 0.5314; 0.5440; 0.5563; 0.5682; 0.5797; 0.5910; 0.6020; 0.6127; 0.6232; 0.6334; 0.6434; 0.6532; 0.6627; 0.6720; 0.6812; 0.6901; 0.6989; 0.7075; 0.7160; 0.7242; 0.7323; 0.7403; 0.7481; 0.7558; 0.7634; 0.7708; 0.7781; 0.7853; 0.7923; 0.7993; 0.8061; 0.8129; 0.8195; 0.8260; 0.8325; 0.8388; 0.8450; 0.8512; 0.8573; 0.8633; 0.8692; 0.8750; 0.8808; 0.8864; 0.8920; 0.8976; 0.9030; 0.9084; 0.9138; 0.9190; 0.9242; 0.9294; 0.9344; 0.9395; 0.9444; 0.9493; 0.9542; 0.9590; 0.9637; 0.9684; 0.9731; 0.9777; 0.9822; 0.9867; 0.9912; 0.9956; 1.000];
k = 90;
n = 95;
The scalar sims must be in the range 1000 to 1000000. The vector of cumulated frequencies the_f never contains more than 100 elements. The scalar k represents the number of elements in the_f. Finally, the scalar n represents the number of elements in the empirical sample vector, and can be very large (up to 10000 elements, as far as I can tell).
Any clue about how to improve the computation time of this process?
This seems to be slightly faster for me in the second test case, not the first. The time differences might be larger for longer the_f and larger values of n.
function sim = simulate_frequency(the_f,k,n)
r = rand(1,n);
[row,col] = find(r <= the_f); % Implicit singleton expansion going on here!
[~,ind] = unique(col,'first');
x = row(ind);
sim = (histcounts(x,[1:k Inf]) ./ n).';
end
I'm using implicit singleton expansion in r <= the_f; use bsxfun if you have an older version of MATLAB (but you know the drill).
find then returns the row and column of every location where r is less than or equal to the_f. unique finds the indices into the result for the first element of each column.
Credit: Andrei Bobrov over on MATLAB Answers
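For reference, here is a sketch of the bsxfun form of that comparison for pre-R2016b MATLAB (an equivalent of the implicit-expansion line above, not from the original answer):
mask = bsxfun(@le, r, the_f);   % same k-by-n logical matrix as r <= the_f
[row,col] = find(mask);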
Another option (derived from this other answer) is a bit shorter but also a bit more obscure IMO:
mask = r <= the_f;
[x,~] = find(mask & (cumsum(mask,1)==1));
(cumsum(mask,1)==1 is true only at the first true element of each column, so find returns exactly one row index per column.)
If I want performance, I would avoid arrayfun. Even this for loop is faster:
function sim = simulate_frequency(the_f,k,n)
r = rand(1,n);
for i = 1:numel(r)
x(i) = find(r(i)<=the_f,1,'first');
end
sim = (histcounts(x,[1:k Inf]) ./ n).';
end
Running 10000 sims with the first set of the sample data gives the following timing.
Your arrayfun function:
Elapsed time is 2.848206 seconds.
The for loop function:
Elapsed time is 0.938479 seconds.
Inspired by Cris Luengo's answer, I suggest the following: since the_f is sorted in ascending order (it is a cumulative distribution), the first index i with r <= the_f(i) equals one plus the number of elements of the_f that lie strictly below r, so the find can be replaced by a sum:
function sim = simulate_frequency(the_f,k,n)
r = rand(1,n);
x = sum(r > the_f)+1;
sim = (histcounts(x,[1:k Inf]) ./ n)';
end
Time:
Elapsed time is 0.264146 seconds.
You can use histcounts with r directly as its input: since only the bin counts are needed, not the per-sample indices x, binning r with edges [-inf; the_f] yields the same histogram:
r = rand(1,n);
sim = (histcounts(r,[-inf ;the_f]) ./ n).';
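As a quick check that the two pipelines agree (a sketch added here, using the r, the_f, k and n from above; exact equality can only fail on boundary ties r == the_f(i), which have probability zero for continuous r):
x = arrayfun(@(v) find(v <= the_f,1,'first'), r);
sim1 = (histcounts(x,[1:k Inf]) ./ n).';
sim2 = (histcounts(r,[-inf; the_f]) ./ n).';
isequal(sim1, sim2)   % true, barring boundary ties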
If histc is used instead of histcounts the whole simulation can be vectorized:
r = rand(n,sims);
p = histc(r, [-inf; the_f],1);
p = [p(1:end-2,:) ;sum(p(end-1:end,:))]./n;
h0 = max(abs(p-the_p(:))); %h0 = max(abs(bsxfun(@minus,p,the_p(:))));

How can I vectorize the loops of this function in Octave?

I want to vectorize the for-loops of this function so that I can then parallelize it in Octave. Can these for-loops be vectorized? Thank you very much in advance!
I attach the code of the function commenting on the start and end of each for-loop and if-else.
function [par]=pem_v(tsm,pr)
% tsm and pr are arrays of N by n.
% par is an array of N by 8
tss=[27:0.5:32];
tc=[20:0.01:29];
N=size(tsm,1);
% main-loop
for ii=1:N
% I extract the rows in each loop because each one represents a sample
sst=tsm(ii,:); sst=sst'; %then I convert each sample to column vectors
pre=pr(ii,:); pre=pre';
% main-condition
if isnan(nanmean(sst))==1;
par(ii,1:8)=NaN;
else
% first sub-loop
for k=1:length(tss);
idxx=find(sst>=tss(k)-0.25 & sst<=tss(k)+0.25);
out(k)=prctile(pre(idxx),90);
end
% end first sub-loop
tp90=tss(find(max(out)==out));
% second sub-loop
for j=1:length(tc)
cond1=find(sst>=tc(j) & sst<=tp90);
cond2=find(sst>=tp90);
pem=zeros(length(sst),1);
A=[sst(cond1),ones(length(cond1),1)];
B=regress(pre(cond1),A);
pt90=B(1)*(tp90-tc(j));
AA=[(sst(cond2)-tp90)];
BB=regress(pre(cond2)-pt90,AA);
pem(cond1)=max(0,B(1)*(sst(cond1)-tc(j)));
pem(cond2)=max(0,(BB(1)*(sst(cond2)-tp90))+pt90);
clear A B AA BB;
E(j)=sqrt(nansum((pem-pre).^2)/length(pre));
clear pem;
end
% end second sub-loop
tcc=tc(find(E==min(E)));
% sub-condition
if(isempty(tcc)==1);
par(ii,1:9)=NaN;
else
cond1=find(sst>=tcc & sst<=tp90);
cond2=find(sst>=tp90);
pem=zeros(length(sst),1);
A=[sst(cond1),ones(length(cond1),1)];
B=regress(pre(cond1),A);
pt90=B(1)*(tp90-tcc);
AA=[sst(cond2)-tp90];
BB=regress(pre(cond2)-pt90,AA);
pem(cond1)=max(0,B(1)*(sst(cond1)-tcc));
pem(cond2)=max(0,(BB(1)*(sst(cond2)-tp90))+pt90);
RMSE=sqrt(nansum((pem-pre).^2)/length(pre));
% outputs
par(ii,1)=tcc;
par(ii,2)=tp90;
par(ii,3)=B(1);
par(ii,4)=BB(1);
par(ii,5)=RMSE;
par(ii,6)=nanmean(sst);
par(ii,7)=nanmean(pre);
par(ii,8)=nanmean(pem);
end
% end sub-condition
clear pem pre sst RMSE BB B tp90 tcc
end
% end main-condition
end
% end main-loop
You haven't given any example inputs, so I've created some like so:
N = 5; n = 800;
tsm = rand(N,n)*5+27; pr = rand(N,n);
Then, before you even consider vectorising your code, you should keep 4 things in mind...
Avoid calculating the same thing (like the size of a vector) every loop; instead, do it once before the loop
Pre-allocate arrays where possible (declare them as zeros/NaNs etc)
Don't use find to convert logical indices into linear indices; there is no need, and it will slow down your code
Don't repeatedly use clear, especially many times within loops. It is slow! Instead, use pre-allocation to ensure the variables are as you expect each loop.
Using the above random inputs, and taking account of these 4 things, the below code is ~65% quicker than your code. Note: this is without even doing any vectorising!
function [par]=pem_v(tsm,pr)
% tsm and pr are arrays of N by n.
% par is an array of N by 8
tss=[27:0.5:32];
tc=[20:0.01:29];
N=size(tsm,1);
% Transpose once here instead of every loop
tsm = tsm';
pr = pr';
% Pre-allocate memory for output 'par'
par = NaN(N, 8);
% Don't compute these every loop, do it before the loop.
% numel simpler than length for vectors, and size is clearer still
ntss = numel(tss);
nsst = size(tsm,1);
ntc = numel(tc);
npr = size(pr, 1);
for ii=1:N
% Extract the columns in each loop because each one represents a sample
sst=tsm(:,ii);
pre=pr(:,ii);
% main-condition. Previously isnan(nanmean(sst))==1, but that's only true if all(isnan(sst))
% We don't need to assign par(ii,1:8)=NaN since we initialised par to a matrix of NaNs
if ~all(isnan(sst));
% first sub-loop, initialise 'out' first
out = zeros(1, ntss);
for k=1:ntss;
% Don't use FIND on an indexing vector. Use the logical index raw, it's quicker
idxx = (sst>=tss(k)-0.25 & sst<=tss(k)+0.25);
% We need a check that some values of idxx are true, otherwise prctile will error.
if nnz(idxx) > 0
out(k) = prctile(pre(idxx), 90);
end
end
% Again, no need for FIND, just reduces speed. This is a theme...
tp90=tss(max(out)==out);
% Pre-allocate E too (per tip 2; the original omitted this)
E = zeros(1, ntc);
for jj=1:ntc
cond1 = (sst>=tc(jj) & sst<=tp90);
cond2 = (sst>=tp90);
% Use nnz (number of non-zeros) instead of length, since cond1 is now a logical vector over all elements
A = [sst(cond1),ones(nnz(cond1),1)];
B = regress(pre(cond1), A);
pt90 = B(1)*(tp90-tc(jj));
AA = [(sst(cond2)-tp90)];
BB = regress(pre(cond2)-pt90,AA);
pem=zeros(nsst,1);
pem(cond1) = max(0, B(1)*(sst(cond1)-tc(jj)));
pem(cond2) = max(0, (BB(1)*(sst(cond2)-tp90))+pt90);
E(jj) = sqrt(nansum((pem-pre).^2)/npr);
end
tcc = tc(E==min(E));
if ~isempty(tcc);
cond1 = (sst>=tcc & sst<=tp90);
cond2 = (sst>=tp90);
A = [sst(cond1),ones(nnz(cond1),1)];
B = regress(pre(cond1),A);
pt90 = B(1)*(tp90-tcc);
AA = [sst(cond2)-tp90];
BB = regress(pre(cond2)-pt90,AA);
pem = zeros(length(sst),1);
pem(cond1) = max(0, B(1)*(sst(cond1)-tcc));
pem(cond2) = max(0, (BB(1)*(sst(cond2)-tp90))+pt90);
RMSE = sqrt(nansum((pem-pre).^2)/npr);
% Outputs, which we might as well assign all at once!
par(ii,:)=[tcc, tp90, B(1), BB(1), RMSE, ...
nanmean(sst), nanmean(pre), nanmean(pem)];
end
end
end
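A quick way to sanity-check the rewrite against the original (a sketch, assuming the original function is kept on the path under a hypothetical name pem_v_orig):
tsm = rand(5,800)*5+27; pr = rand(5,800);
isequaln(pem_v_orig(tsm,pr), pem_v(tsm,pr))   % isequaln treats NaNs as equal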

Speed up calculation of maximum of normxcorr2

I need to calculate the maximum of the normalized cross correlation of millions of particles. The two inputs of normxcorr2 are small (54*54 and 56*56 in the example below). I can't parallelize the calculations. Is there any suggestion to speed up the code, especially since I don't need all the results but only the maximum value of each cross correlation (to know the displacement)?
Example of the algorithm
%The choice of 170 particles is because each time
%the code detects 170 particles, so over 10000 images that's 1 700 000 particles
particle_1=rand(54,54,170);
particle_2=rand(56,56,170);
for i=1:170
C=normxcorr2(particle_1(:,:,i),particle_2(:,:,i));
L(i)=max(C(:));
end
I don't have MATLAB so I ran the following code on this site: https://www.tutorialspoint.com/execute_matlab_online.php which actually runs Octave. So I implemented a "naive" normalized cross correlation, and indeed for these small image sizes the naive version performs better:
Elapsed time is 2.62645 seconds - for normxcorr2
Elapsed time is 0.199034 seconds - for my naive_normxcorr2
The code is based on the article http://scribblethink.org/Work/nvisionInterface/nip.pdf which describes how to calculate the standard deviation needed for the normalization in an efficient way using an integral image; this is the box_corr2 function below.
Also, MATLAB's normxcorr2 returns a padded image so I took the max on the unpadded part.
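For intuition, here is a minimal self-contained check of the integral-image box-sum identity that box_corr2 relies on (an illustration added here, not from the original answer):
img = magic(6);
I = cumsum(cumsum(img,2),1);          % integral image
I = [zeros(1,7); zeros(6,1) I];       % zero-pad top and left
a = 1; b = 4; c = 2; d = 5;           % window = rows a+1..b, cols c+1..d
boxsum = I(b+1,d+1) - I(b+1,c+1) - I(a+1,d+1) + I(a+1,c+1);
isequal(boxsum, sum(sum(img(a+1:b, c+1:d))))   % true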
pkg load image
function [N] = naive_corr(pat,img)
[n,m] = size(img);
[np,mp] = size(pat);
N = zeros(n-np+1,m-mp+1);
for i = 1:n-np+1
for j = 1:m-mp+1
N(i,j) = sum(dot(pat,img(i:i+np-1,j:j+mp-1)));
end
end
end
%w_arr: the array of weights (coefficients) for the boxes
%box_arr: size [k,4] where k is the number of boxes, each box given as
%[row_start, row_end, col_start, col_end]
function [C] = box_corr2(img,box_arr,w_arr,n_p,m_p)
% construct integral image + zeros pad (for boundary problems)
I = cumsum(cumsum(img,2),1);
I = [zeros(1,size(I,2)+2); [zeros(size(I,1),1) I zeros(size(I,1),1)]; zeros(1,size(I,2)+2)];
% initialize result matrix
[n,m] = size(img);
C = zeros(n-n_p+1,m-m_p+1);
%C = zeros(n,m);
jump_x = 1;
jump_y = 1;
x_start = ceil(n_p/2);
x_end = n-x_start+mod(n_p,2);
x_span = x_start:jump_x:x_end;
y_start = ceil(m_p/2);
y_end = m-y_start+mod(m_p,2);
y_span = y_start:jump_y:y_end;
arr_a = box_arr(:,1) - x_start;
arr_b = box_arr(:,2) - x_start+1;
arr_c = box_arr(:,3) - y_start;
arr_d = box_arr(:,4) - y_start+1;
% cumulate box responses
k = size(box_arr,1); % == numel(w_arr)
for i = 1:k
a = arr_a(i);
b = arr_b(i);
c = arr_c(i);
d = arr_d(i);
C = C ...
+ w_arr(i) * ( I(x_span+b,y_span+d) ...
- I(x_span+b,y_span+c) ...
- I(x_span+a,y_span+d) ...
+ I(x_span+a,y_span+c) );
end
end
function [NCC] = naive_normxcorr2(temp,img)
[n_p,m_p]=size(temp);
M = n_p*m_p;
% compute template mean & std
temp_mean = mean(temp(:));
temp = temp - temp_mean;
temp_std = sqrt(sum(temp(:).^2)/M);
% compute windows' mean & std
wins_mean = box_corr2(img,[1,n_p,1,m_p],1/M, n_p,m_p);
wins_mean2 = box_corr2(img.^2,[1,n_p,1,m_p],1/M,n_p,m_p);
wins_std = real(sqrt(wins_mean2 - wins_mean.^2));
NCC_naive = naive_corr(temp,img);
NCC = NCC_naive ./ (M .* temp_std .* wins_std);
end
n = 170;
particle_1=rand(54,54,n);
particle_2=rand(56,56,n);
[n_p1,m_p1,c_p1]=size(particle_1);
[n_p2,m_p2,c_p2]=size(particle_2);
L1 = zeros(n,1);
L2 = zeros (n,1);
tic
for i=1:n
C1=normxcorr2(particle_1(:,:,i),particle_2(:,:,i));
C1_unpadded = C1(n_p1:n_p2 , m_p1:m_p2);
L1(i)=max(C1_unpadded(:));
end
toc
tic
for i=1:n
C2=naive_normxcorr2(particle_1(:,:,i),particle_2(:,:,i));
L2(i)=max(C2(:));
end
toc

How do I index codistributed arrays in a spmd block

I am doing a very large calculation (atmospheric absorption) that has a lot of individual narrow peaks that all get added up at the end. For each peak, I have pre-calculated the range over which the value of the peak shape function is above my chosen threshold, and I am then going line by line and adding the peaks to my spectrum. A minimum example is given below:
X = 1:1e7;
K = numel(a); % count the number of peaks I have.
spectrum = zeros(size(X));
for k = 1:K
grid = X >= rng(1,k) & X <= rng(2,k);
spectrum(grid) = spectrum(grid) + peakfn(X(grid),a(k),b(k),c(k));
end
Here, each peak has some parameters that define the position and shape (a,b,c), and a range over which to do the calculation (rng). This works great, and on my machine it benchmarks at around 220 seconds to do a complete data set. However, I have a 4 core machine and I would eventually like to run this on a cluster, so I'd like to parallelize it and make it scaleable.
Because each loop relies on the results of the previous iteration, I cannot use parfor, so I am taking my first step into learning how to use spmd blocks. My first try looked like this:
X = 1:1e7;
cores = matlabpool('size');
K = numel(a);
spectrum = zeros(numel(X),cores);
spmd
n = labindex:cores:K;
N = numel(n);
for k = 1:N
grid = X >= rng(1,n(k)) & X <= rng(2,n(k));
spectrum(grid,labindex) = spectrum(grid,labindex) + peakfn(X(grid),a(n(k)),b(n(k)),c(n(k)));
end
end
finalSpectrum = sum(spectrum,2);
This almost works. The program crashes at the last line because spectrum is of type Composite, and the documentation for 2013a is spotty on how to turn Composite data into a matrix (cell2mat does not work). This also does not scale well because the more cores I have, the larger the matrix is, and that large matrix has to get copied to each worker, which then ignores most of the data. Question 1: how do I turn a Composite data type into a usable array?
The second thing I tried was to use a codistributed array.
spmd
spectrum = codistributed.zeros(K,cores);
disp(size(getLocalPart(spectrum)))
end
This tells me that each worker has a single vector of size [K 1], which I believe is what I want, but when I try to then meld the above methods
spmd
spectrum = codistributed.zeros(K,cores);
n = labindex:cores:K;
N = numel(n);
for k = 1:N
grid = X >= rng(1,n(k)) & X <= rng(2,n(k));
spectrum(grid) = spectrum(grid) + peakfn(X(grid),a(n(k)),b(n(k)),c(n(k)));
end
finalSpectrum = gather(spectrum);
end
finalSpectrum = sum(finalSpectrum,2);
I get Matrix dimensions must agree errors. Since it's in a parallel block, I can't use my normal debugging crutch of stepping through the loop and seeing what the size of each block is at each point to see what's going on. Question 2: what is the proper way to index into and out of a codistributed array in an spmd block?
Regarding question#1, the Composite variable in the client basically refers to a non-distributed variant array stored on the workers. You can access the array from each worker by {}-indexing using its corresponding labindex (e.g: spectrum{1}, spectrum{2}, ..).
For your code that would be: finalSpectrum = sum(cat(2,spectrum{:}), 2);
Now I tried this problem myself using random data. Below are three implementations to compare (see here to understand the difference between distributed and nondistributed arrays). First we start with the common data:
len = 100; % spectrum length
K = 10; % number of peaks
X = 1:len;
% random position and shape parameters
a = rand(1,K); b = rand(1,K); c = rand(1,K);
% random peak ranges (lower/upper thresholds)
ranges = sort(randi([1 len], [2 K]));
% dummy peakfn() function
fcn = @(x,a,b,c) x+a+b+c;
% prepare a pool of MATLAB workers
matlabpool open
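(A side note added here: on newer MATLAB releases matlabpool has been removed; assuming the default cluster profile, the equivalent is:)
parpool;                    % open a worker pool
% ... run the spmd blocks ...
delete(gcp('nocreate'));    % shut the pool down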
1) Serial for-loop:
spectrum = zeros(size(X));
for i=1:size(ranges,2)
r = ranges(:,i);
idx = (r(1) <= X & X <= r(2));
spectrum(idx) = spectrum(idx) + fcn(X(idx), a(i), b(i), c(i));
end
s1 = spectrum;
clear spectrum i r idx
2) SPMD with Composite array
spmd
spectrum = zeros(1,len);
ind = labindex:numlabs:K;
for i=1:numel(ind)
r = ranges(:,ind(i));
idx = (r(1) <= X & X <= r(2));
spectrum(idx) = spectrum(idx) + ...
feval(fcn, X(idx), a(ind(i)), b(ind(i)), c(ind(i)));
end
end
s2 = sum(vertcat(spectrum{:}));
clear spectrum i r idx ind
3) SPMD with co-distributed array
spmd
spectrum = zeros(numlabs, len, codistributor('1d',1));
ind = labindex:numlabs:K;
for i=1:numel(ind)
r = ranges(:,ind(i));
idx = (r(1) <= X & X <= r(2));
spectrum(labindex,idx) = spectrum(labindex,idx) + ...
feval(fcn, X(idx), a(ind(i)), b(ind(i)), c(ind(i)));
end
end
s3 = sum(gather(spectrum));
clear spectrum i r idx ind
All three results should be equal (to within an acceptably small margin of error)
>> max([max(s1-s2), max(s1-s3), max(s2-s3)])
ans =
2.8422e-14

Optimizing repetitive estimation (currently a loop) in MATLAB

I've found myself needing to do a least-squares (or similar matrix-based operation) for every pixel in an image. Every pixel has a set of numbers associated with it, and so it can be arranged as a 3D matrix.
(This next bit can be skipped)
Quick explanation of what I mean by least-squares estimation:
Let's say we have some quadratic system that is modeled by Y = Ax^2 + Bx + C and we're looking for those A,B,C coefficients. With a few samples (at least 3) of X and the corresponding Y, we can estimate them by:
Arrange the (let's say 10) X samples into a matrix like X = [x(:).^2 x(:) ones(10,1)];
Arrange the Y samples into a similar matrix: Y = y(:);
Estimate the coefficients A,B,C by solving: coeffs = (X'*X)^(-1)*X'*Y;
Try this on your own if you want:
A = 5; B = 2; C = 1;
x = 1:10;
y = A*x(:).^2 + B*x(:) + C + .25*randn(10,1); % added some noise here
X = [x(:).^2 x(:) ones(10,1)];
Y = y(:);
coeffs = (X'*X)^-1*X'*Y
coeffs =
5.0040
1.9818
0.9241
START PAYING ATTENTION AGAIN IF I LOST YOU THERE
MAJOR REWRITE: I've modified this to bring it as close as possible to the real problem I have while still keeping it a minimum working example.
Problem Setup
%// Setup
xdim = 500;
ydim = 500;
ncoils = 8;
nshots = 4;
%// matrix size for each pixel is ncoils x nshots (an overdetermined system)
%// each pixel has a matrix stored in the 3rd and 4th dimensions
regressor = randn(xdim,ydim, ncoils,nshots);
regressand = randn(xdim, ydim,ncoils);
So my problem is that I have to do a (X'*X)^-1*X'*Y (least-squares or similar) operation for every pixel in an image. While that operation itself is vectorized/matrixized, the only way I know to do it for every pixel is in a for loop, like:
Original code style
%// Actual work
tic
estimate = zeros(xdim,ydim);
for col=1:size(regressor,2)
for row=1:size(regressor,1)
X = squeeze(regressor(row,col,:,:));
Y = squeeze(regressand(row,col,:));
B = X\Y;
% B = (X'*X)^(-1)*X'*Y; %// equivalently
estimate(row,col) = B(1);
end
end
toc
Elapsed time = 27.6 seconds
EDITS in response to comments and other ideas
I tried some things:
1. Reshaped into a long vector and removed the double for loop. This saved some time.
2. Removed the squeeze (and in-line transposing) by permute-ing the picture beforehand: this saved a lot more time.
Current example:
%// Actual work
tic
estimate2 = zeros(xdim*ydim,1);
regressor_mod = permute(regressor,[3 4 1 2]);
regressor_mod = reshape(regressor_mod,[ncoils,nshots,xdim*ydim]);
regressand_mod = permute(regressand,[3 1 2]);
regressand_mod = reshape(regressand_mod,[ncoils,xdim*ydim]);
for ind=1:size(regressor_mod,3) % for every pixel
X = regressor_mod(:,:,ind);
Y = regressand_mod(:,ind);
B = X\Y;
estimate2(ind) = B(1);
end
estimate2 = reshape(estimate2,[xdim,ydim]);
toc
Elapsed time = 2.30 seconds (avg of 10)
isequal(estimate2,estimate) % == 1
Rody Oldenhuis's way
N = xdim*ydim*ncoils; %// number of rows
M = xdim*ydim*nshots; %// number of columns
ii = repmat(reshape(1:N,[ncoils,xdim*ydim]),[nshots 1]); %// row indices
jj = repmat(1:M,[ncoils 1]); %// column indices
X = sparse(ii(:),jj(:),regressor_mod(:));
Y = regressand_mod(:);
B = X\Y;
B = reshape(B(1:nshots:end),[xdim ydim]);
Elapsed time = 2.26 seconds (avg of 10)
or 2.18 seconds (if you don't include the definition of N,M,ii,jj)
SO THE QUESTION IS:
Is there an (even) faster way?
(I don't think so.)
You can achieve a ~factor of 2 speed up by precomputing the transposition of X. i.e.
for x=1:size(picture,2) % second dimension b/c already transposed
X = picture(:,x);
XX = X';
Y = randn(n_timepoints,1);
%B = (X'*X)^-1*X'*Y;
B = (XX*X)^-1*XX*Y;
est(x) = B(1);
end
Before: Elapsed time is 2.520944 seconds.
After: Elapsed time is 1.134081 seconds.
EDIT:
Your code, as it stands in your latest edit, can be replaced by the following
tic
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
% Actual work
picture = randn(xdim,ydim,n_timepoints);
picture = reshape(picture, [xdim*ydim,n_timepoints])'; % note transpose
YR = randn(n_timepoints,size(picture,2));
% (XX*X).^-1 = sum(picture.*picture).^-1;
% XX*Y = sum(picture.*YR);
est = sum(picture.*picture).^-1 .* sum(picture.*YR);
est = reshape(est,[xdim,ydim]);
toc
Elapsed time is 0.127014 seconds.
This is an order of magnitude speed up on the latest edit, and the results are all but identical to the previous method.
EDIT2:
Okay, so if X is a matrix, not a vector, things are a little more complicated. We basically want to precompute as much as possible outside of the for-loop to keep our costs down. We can also get a significant speed-up by computing XT*X manually - since the result will always be a symmetric matrix, we can cut a few corners to speed things up. First, the symmetric multiplication function:
function XTX = sym_mult(X) % X is a 3-d matrix
n = size(X,2);
XTX = zeros(n,n,size(X,3));
for i=1:n
for j=i:n
XTX(i,j,:) = sum(X(:,i,:).*X(:,j,:));
if i~=j
XTX(j,i,:) = XTX(i,j,:);
end
end
end
Now the actual computation script
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
Y = randn(10,xdim*ydim);
picture = randn(xdim,ydim,n_timepoints); % 500x500x10
% Actual work
tic % start timing
picture = reshape(picture, [xdim*ydim,n_timepoints])';
% Here we precompute the (XT*Y) calculation to speed things up later
picture_y = [sum(Y);sum(Y.*picture)];
% initialize
est = zeros(size(picture,2),1);
picture = permute(picture,[1,3,2]);
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture);
XTX = sym_mult(XTX); % precompute (XT*X) for speed
X = zeros(2,2); % preallocate for speed
XY = zeros(2,1);
for x=1:size(picture,2) % second dimension b/c already transposed
%For some reason this is a lot faster than X = XTX(:,:,x);
X(1,1) = XTX(1,1,x);
X(2,1) = XTX(2,1,x);
X(1,2) = XTX(1,2,x);
X(2,2) = XTX(2,2,x);
XY(1) = picture_y(1,x);
XY(2) = picture_y(2,x);
% Here we utilise the fact that A\B is faster than inv(A)*B
% We also use the fact that (A*B)*C = A*(B*C) to speed things up
B = X\XY;
est(x) = B(1);
end
est = reshape(est,[xdim,ydim]);
toc % end timing
Before: Elapsed time is 4.56 seconds.
After: Elapsed time is 2.24 seconds.
This is a speed up of about a factor of 2. This code should be extensible to X being any dimensions you want. For instance, in the case where X = [1 x x^2], you would change picture_y to the following
picture_y = [sum(Y);sum(Y.*picture);sum(Y.*picture.^2)];
and change XTX to
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture,picture.^2);
You would also change a lot of 2s to 3s in the code, and add XY(3) = picture_y(3,x) to the loop. It should be fairly straightforward, I believe; a generic sketch follows.
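For larger coefficient counts p the element-by-element copy becomes tedious; the generic form of the loop body is simply (a sketch; per the comment in the code above, the explicit copies were measured to be faster for the 2x2 case):
X = XTX(:,:,x);        % p-by-p normal matrix for this pixel
XY = picture_y(:,x);   % p-by-1 right-hand side
B = X\XY;
est(x) = B(1);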
Results
I sped up your original version, since your edit 3 was actually not working (and also does something different).
So, on my PC:
Your (original) version: 8.428473 seconds.
My obfuscated one-liner given below: 0.964589 seconds.
First, for no other reason than to impress, I'll give it as I wrote it:
%%// Some example data
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
estimate = zeros(xdim,ydim); %// initialization with explicit size
picture = randn(xdim,ydim,n_timepoints);
%%// Your original solution
%// (slightly altered to make my version's results agree with yours)
tic
Y = randn(n_timepoints,xdim*ydim);
ii = 1;
for x = 1:xdim
for y = 1:ydim
X = squeeze(picture(x,y,:)); %// or similar creation of X matrix
B = (X'*X)^(-1)*X' * Y(:,ii);
ii = ii+1;
%// sometimes you keep everything and do
%// estimate(x,y,:) = B(:);
%// sometimes just the first element is important and you do
estimate(x,y) = B(1);
end
end
toc
%%// My version
tic
%// UNLEASH THE FURY!!
estimate2 = reshape(sparse(1:xdim*ydim*n_timepoints, ...
builtin('_paren', ones(n_timepoints,1)*(1:xdim*ydim),:), ...
builtin('_paren', permute(picture, [3 2 1]),:))\Y(:), ydim,xdim).'; %'
toc
%%// Check for equality
max(abs(estimate(:)-estimate2(:))) % (always less than ~1e-14)
Breakdown
First, here's the version that you should actually use:
%// Construct sparse block-diagonal matrix
%// (Type "help sparse" for more information)
N = xdim*ydim; %// number of columns
M = N*n_timepoints; %// number of rows
ii = 1:N;
jj = ones(n_timepoints,1)*(1:N);
s = permute(picture, [3 2 1]);
X = sparse(ii,jj(:), s(:));
%// Compute ALL the estimates at once
estimates = X\Y(:);
%// You loop through the *second* dimension first, so to make everything
%// agree, we have to extract elements in the "wrong" order, and transpose:
estimate2 = reshape(estimates, ydim,xdim).'; %'
Here's an example of what picture and the corresponding matrix X looks like for xdim = ydim = n_timepoints = 2:
>> clc, picture, full(X)
picture(:,:,1) =
-0.5643 -2.0504
-0.1656 0.4497
picture(:,:,2) =
0.6397 0.7782
0.5830 -0.3138
ans =
-0.5643 0 0 0
0.6397 0 0 0
0 -2.0504 0 0
0 0.7782 0 0
0 0 -0.1656 0
0 0 0.5830 0
0 0 0 0.4497
0 0 0 -0.3138
You can see why sparse is necessary -- it's mostly zeros, but will grow large quickly. For xdim = ydim = 500 and n_timepoints = 10, the full X would be a 2,500,000-by-250,000 double matrix, on the order of 5 TB, which would quickly consume all your RAM, while the sparse one will not consume much more than the original picture matrix does.
With this matrix X, the new problem
X·b = Y
now contains all the problems
X1 · b1 = Y1
X2 · b2 = Y2
...
where
b = [b1; b2; b3; ...]
Y = [Y1; Y2; Y3; ...]
so, the single command
X\Y
will solve all your systems at once.
This offloads all the hard work to a set of highly specialized, compiled to machine-specific code, optimized-in-every-way algorithms, rather than the interpreted, generic, always-two-steps-away from the hardware loops in MATLAB.
It should be straightforward to convert this to a version where X is a matrix; you'll end up with something like what blkdiag does, which can also be used by mldivide in exactly the same way as above.
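To illustrate with two hypothetical small blocks X1, X2 and right-hand sides Y1, Y2 (a sketch; for many blocks, building the sparse triplet form as above scales much better than repeated blkdiag calls):
Xblk = blkdiag(sparse(X1), sparse(X2));   % sparse block-diagonal system
b = Xblk \ [Y1; Y2];                      % b = [b1; b2], solved in one go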
I had a wee play around with an idea, and I decided to stick it in as a separate answer, as it's a completely different approach to my other idea, and I don't actually condone what I'm about to do. I think this is the fastest approach so far:
Original (unoptimised): 13.507176 seconds.
Fast Cholesky-decomposition method: 0.424464 seconds
First, we've got a function to quickly do the X'*X multiplication. We can speed things up here because the result will always be symmetric.
function XX = sym_mult(X)
n = size(X,2);
XX = zeros(n,n,size(X,3));
for i=1:n
for j=i:n
XX(i,j,:) = sum(X(:,i,:).*X(:,j,:));
if i~=j
XX(j,i,:) = XX(i,j,:);
end
end
end
Then we have a function that performs an LDL Cholesky decomposition of a 3D matrix (possible because the (X'*X) matrix is always symmetric) and then does forward and backward substitution to solve the system:
function Y = fast_chol(X,XY)
n=size(X,2);
L = zeros(n,n,size(X,3));
D = zeros(n,n,size(X,3));
B = zeros(n,1,size(X,3));
Y = zeros(n,1,size(X,3));
% These loops compute the LDL decomposition of the 3D matrix
for i=1:n
D(i,i,:) = X(i,i,:);
L(i,i,:) = 1;
for j=1:i-1
L(i,j,:) = X(i,j,:);
for k=1:(j-1)
L(i,j,:) = L(i,j,:) - L(i,k,:).*L(j,k,:).*D(k,k,:);
end
D(i,j,:) = L(i,j,:);
L(i,j,:) = L(i,j,:)./D(j,j,:);
if i~=j
D(i,i,:) = D(i,i,:) - L(i,j,:).^2.*D(j,j,:);
end
end
end
for i=1:n
B(i,1,:) = XY(i,:);
for j=1:(i-1)
B(i,1,:) = B(i,1,:)-D(i,j,:).*B(j,1,:);
end
B(i,1,:) = B(i,1,:)./D(i,i,:);
end
for i=n:-1:1
Y(i,1,:) = B(i,1,:);
for j=n:-1:(i+1)
Y(i,1,:) = Y(i,1,:)-L(j,i,:).*Y(j,1,:);
end
end
Finally, we have the main script which calls all of this
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
Y = randn(10,xdim*ydim);
picture = randn(xdim,ydim,n_timepoints); % 500x500x10
tic % start timing
picture = reshape(picture, [xdim*ydim,n_timepoints])';
% Here we precompute the (XT*Y) calculation
picture_y = [sum(Y);sum(Y.*picture)];
% initialize
est2 = zeros(size(picture,2),1);
picture = permute(picture,[1,3,2]);
% Now we calculate the X'*X matrix
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture);
XTX = sym_mult(XTX);
% Call our fast Cholesky decomposition routine
B = fast_chol(XTX,picture_y);
est2 = B(1,:);
est2 = reshape(est2,[xdim,ydim]);
toc
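A quick sanity check against the earlier loop-based version (a sketch; est is assumed to hold that version's result for the same data):
max(abs(est2(:) - est(:)))   % should be on the order of 1e-14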
Again, this should work equally well for a Nx3 X matrix, or however big you want.
I use Octave, so I can't say anything about the resulting performance in MATLAB, but I would expect this code to be slightly faster:
pictureT = picture';
est = arrayfun(@(x) (pictureT(x,:)*picture(:,x))^-1 * pictureT(x,:)*randn(n_timepoints,1), ...
    1:size(picture,2));