MATLAB: Optimization of reducing vector size

I have developed a function to reduce the size of an initial vector X = [x,y]. But for an X of 500,000 points and points_limit = 10000, MATLAB needs 16 seconds to complete this function.
Are there any ways to optimize this, maybe by removing the loop using matrix operations (vectorisation)?
function X = reduce_vector_size(X,points_limit)
    while length(X) > points_limit
        k = 1;
        X2 = zeros(round(length(X(:,1))/2),2);
        X = sortrows(X);
        for i = 1:2:length(X(:,1))-1
            X2(k,1) = mean([X(i,1), X(i+1,1)]);
            X2(k,2) = mean([X(i,2), X(i+1,2)]);
            k = k + 1;
        end
        X = X2;
    end
Another idea is to take a new approach:
Ratio = ceil(length(X(:,1))/points_limit);
X = ceil(X);
X = sortrows(X,1);
X = sortrows(X,2);
X1 = [];
for i = 1:points_limit - 1
    X1 = [X1; mean(X(i*Ratio:(i+1)*Ratio,1)), mean(X(i*Ratio:(i+1)*Ratio,2))];
end
X = X1;
The objective is to reduce the number of points in a vector: a form of compression function for 2D vectors.
Do you know if I can do this new method without a loop?
What do you think about my compression algorithm?

You can easily vectorize the inner for loop:
k = 1;
X = rand(5e5,2);
X2 = zeros(round(length(X(:,1))/2),2);

tic
for i = 1:2:length(X(:,1))-1
    X2(k,1) = mean([X(i,1), X(i+1,1)]);
    X2(k,2) = mean([X(i,2), X(i+1,2)]);
    k = k + 1;
end
toc % Elapsed time is 1.988739 seconds.

tic
X3 = (X(1:2:length(X(:,1))-1,:) + X(2:2:length(X(:,1)),:))/2;
toc % Elapsed time is 0.014575 seconds.

isequal(X2,X3) % true
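Folding that vectorized step back into the original function gives something along these lines (a sketch only; note that, unlike the original, it simply drops an odd trailing point instead of carrying a zero row into the next pass):

function X = reduce_vector_size(X, points_limit)
    while size(X,1) > points_limit
        X = sortrows(X);
        n = 2*floor(size(X,1)/2);              % largest even number of rows
        X = (X(1:2:n-1,:) + X(2:2:n,:)) / 2;   % average consecutive pairs
    end
end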


Mel-frequency function: error with matrix dimensions

I'm trying to make a prototype audio recognition system by following this link: http://www.ifp.illinois.edu/~minhdo/teaching/speaker_recognition/. It is quite straightforward so there is almost nothing to worry about. But my problem is with the mel-frequency function. Here is the code as provided on the website:
function m = melfb(p, n, fs)
% MELFB Determine matrix for a mel-spaced filterbank
%
% Inputs: p number of filters in filterbank
% n length of fft
% fs sample rate in Hz
%
% Outputs: x a (sparse) matrix containing the filterbank amplitudes
% size(x) = [p, 1+floor(n/2)]
%
% Usage: For example, to compute the mel-scale spectrum of a
% column-vector signal s, with length n and sample rate fs:
%
% f = fft(s);
% m = melfb(p, n, fs);
% n2 = 1 + floor(n/2);
% z = m * abs(f(1:n2)).^2;
%
% z would contain p samples of the desired mel-scale spectrum
%
% To plot filterbanks e.g.:
%
% plot(linspace(0, (12500/2), 129), melfb(20, 256, 12500)'),
% title('Mel-spaced filterbank'), xlabel('Frequency (Hz)');
f0 = 700 / fs;
fn2 = floor(n/2);
lr = log(1 + 0.5/f0) / (p+1);
% convert to fft bin numbers with 0 for DC term
bl = n * (f0 * (exp([0 1 p p+1] * lr) - 1));
b1 = floor(bl(1)) + 1;
b2 = ceil(bl(2));
b3 = floor(bl(3));
b4 = min(fn2, ceil(bl(4))) - 1;
pf = log(1 + (b1:b4)/n/f0) / lr;
fp = floor(pf);
pm = pf - fp;
r = [fp(b2:b4) 1+fp(1:b3)];
c = [b2:b4 1:b3] + 1;
v = 2 * [1-pm(b2:b4) pm(1:b3)];
m = sparse(r, c, v, p, 1+fn2);
But it gave me an error:
Error using * Inner matrix dimensions must agree.
Error in MFFC (line 17) z = m * abs(f(1:n2)).^2;
When I include these 2 lines just before line 17:
size(m)
size(abs(f(1:n2)).^2)
It gave me:
ans =
20 65
ans =
1 65
So should I transpose the second matrix? Or should I interpret this as a row-wise multiplication and modify the code?
Edit: Here is the main function (I simply run MFCC()):
function result = MFFC()
[y Fs] = audioread('s1.wav');
% sound(y,Fs)
Frames = Frame_Blocking(y,128);
Windowed = Windowing(Frames);
spectrum = FFT_After_Windowing(Windowed);
%imagesc(mag2db(abs(spectrum)))
p = 20;
S = size(spectrum);
n = S(2);
f = spectrum;
m = melfb(p, n, Fs);
n2 = 1 + floor(n/2);
size(m)
size(abs(f(1:n2)).^2)
z = m * abs(f(1:n2)).^2;
result = z;
And here are the auxiliary functions:
function f = Frame_Blocking(y,N)
% Parameters: M = 100, N = 256
% Default : M = 100; N = 256;
M = fix(N/3);
Frames = [];
first = 1; last = N;
len = length(y);
while last <= len
    Frames = [Frames; y(first:last)'];
    first = first + M;
    last = last + M;
end
if last < len
    first = first + M;
    Frames = [Frames; y(first : len)];
end
f = Frames;

function f = Windowing(Frames)
S = size(Frames);
N = S(2);
M = S(1);
Windowed = zeros(M,N);
nn = 1:N;
wn = 0.54 - 0.46*cos(2*pi/(N-1)*(nn-1));
for ii = 1:M
    Windowed(ii,:) = Frames(ii,:).*wn;
end
f = Windowed;

function f = FFT_After_Windowing(Windowed)
spectrum = fft(Windowed);
f = spectrum;
Transpose s or transpose the resulting f (it's just a matter of convention).
There is nothing wrong with the melfb function you are using, merely with the dimensions of the signal in the example you are trying to run (in the commented lines 14-17).
% f = fft(s);
% m = melfb(p, n, fs);
% n2 = 1 + floor(n/2);
% z = m * abs(f(1:n2)).^2;
The example assumes that you are using a "column-vector signal s". From the size of your Fourier-transformed f (computed via fft, which preserves the input signal's orientation), your input signal s is a row-vector signal.
The part that gives you the error is the actual filtering operation, which requires multiplying a p x n2 matrix with an n2 x 1 column vector (i.e., each filter's response is multiplied pointwise with the Fourier transform of the input signal). Since your input s is 1 x n, your f will be 1 x n, and the final matrix-to-vector multiplication for z will give an error.
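A minimal sketch of the fix, assuming s holds a single frame as a row vector (the variable names follow the commented example in melfb):

s = s(:);                  % force the column-vector shape the example assumes
f = fft(s);                % now n-by-1
n2 = 1 + floor(n/2);
z = m * abs(f(1:n2)).^2;   % (p x n2) * (n2 x 1) -- dimensions agree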
Thanks to gevang's answer, I was able to find out my mistake. Here is how I modified the code:
function result = MFFC()
[y Fs] = audioread('s2.wav');
% sound(y,Fs)
Frames = Frame_Blocking(y,128);
Windowed = Windowing(Frames);
%spectrum = FFT_After_Windowing(Windowed');
%imagesc(mag2db(abs(spectrum)))
p = 20;
%S = size(spectrum);
%n = S(2);
%f = spectrum;
S1 = size(Windowed);
n = S1(2);
n2 = 1 + floor(n/2);
%z = zeros(S1(1),n2);
z = zeros(20,S1(1));
for ii = 1:S1(1)
    s = (FFT_After_Windowing(Windowed(ii,:)'));
    f = fft(s);
    m = melfb(p,n,Fs);
    % n2 = 1 + floor(n/2);
    z(:,ii) = m * abs(f(1:n2)).^2;
end
%f = FFT_After_Windowing(Windowed');
%S = size(f);
%n = S(2);
%size(f)
%m = melfb(p, n, Fs);
%n2 = 1 + floor(n/2);
%size(m)
%size(abs(f(1:n2)).^2)
%z = m * abs(f(1:n2)).^2;
result = z;
As you can see, I naively assumed that the function deals with row-wise matrices, but in fact it deals with column vectors (and maybe column-wise matrices). So I iterate through each column of the input matrix and then combine the results.
But I don't think this is efficient, vectorized code. Also, I still can't figure out how to do column-wise operations on the input matrix (Windowed, after the windowing step) instead of using a loop.
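One possible vectorized version (a sketch under the assumption that Windowed holds one frame per row, as above, and applies a single FFT per frame): fft operates column-wise on a matrix, so transposing Windowed once lets a single matrix multiplication filter all frames.

F = fft(Windowed.');          % each column is the FFT of one frame
n2 = 1 + floor(n/2);
m = melfb(p, n, Fs);          % p x n2 filterbank
z = m * abs(F(1:n2,:)).^2;    % p x (number of frames), one column per frame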

Can't recover the parameters of a model using ode45

I am trying to simulate the rotation dynamics of a system. I am testing my code to verify that it's working using simulation, but I can never recover the parameters I pass to the model. In other words, I can't re-estimate the parameters I chose for the model.
I am using MATLAB for that and specifically ode45. Here is my code:
% Load the input-output data
[torque outputs] = DataLogs2();
u = torque;
% using the simulation data
Ixx = 1.00;
Iyy = 2.00;
Izz = 3.00;
x0 = [0; 0; 0];
Ts = .02;
t = 0:Ts:Ts * ( length(u) - 1 );
[ T, x ] = ode45( @(t,x) rotationDyn( t, x, u(1+floor(t/Ts),:), Ixx, Iyy, Izz), t, x0 );
w = x';
N = length(w);
q = 1; % a counter for the A and B matrices
% The Algorithm
for k = 1:1:N
    w_telda = [  0      -w(3,k)  w(2,k); ...
                 w(3,k)  0      -w(1,k); ...
                -w(2,k)  w(1,k)  0 ];
    if k == N % to handle the problem of the last iteration
        w_dash(:,k) = (-w(:,k))/Ts;
    else
        w_dash(:,k) = (w(:,k+1)-w(:,k))/Ts;
    end
    a = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
    A(q:q+2,:) = a; % a 3N*9 matrix
    B(q:q+2,:) = u(k,:)'; % a 3N*1 matrix % u(:,k)
    q = q + 3;
end
% Forcing J to be diagonal. This is the case when we consider our quadcopter as two thin uniform
% rods crossed at the origin with a point mass (motor) at the end of each.
A_new = [A(:, 1) A(:, 5) A(:, 9)];
vec_J_diag = A_new\B;
J_diag = diag([vec_J_diag(1), vec_J_diag(2), vec_J_diag(3)])
eigenvalues_J_diag = eig(J_diag)
error = norm(A_new*vec_J_diag - B)
where my dynamic model is defined as:
function [dw, y] = rotationDyn(t, w, tau, Ixx, Iyy, Izz, varargin)
% The output equation
y = [w(1); w(2); w(3)];
% State equation
% dw = (I^-1)*( tau - cross(w, I*w) );
dw = [Ixx^-1 * tau(1) - ((Izz-Iyy)/Ixx)*w(2)*w(3);
      Iyy^-1 * tau(2) - ((Ixx-Izz)/Iyy)*w(1)*w(3);
      Izz^-1 * tau(3) - ((Iyy-Ixx)/Izz)*w(1)*w(2)];
end
Practically, what this code should do is calculate the eigenvalues of the inertia matrix, J, i.e. recover the Ixx, Iyy, and Izz that I passed to the model at the very beginning (1, 2 and 3), but all I get is wrong results.
Is the problem with using ode45?
Well, the problem wasn't with the ode45 call. The problem is that in system identification you can only form an (N-1)-sample signal of differences from an N-sample signal, so the loop has to end at N-1 in the above code.
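In code, the fix amounts to changing the loop bound so the forward difference is always defined (a sketch of the corrected loop; the last-iteration special case disappears):

for k = 1:N-1
    w_telda = [  0      -w(3,k)  w(2,k); ...
                 w(3,k)  0      -w(1,k); ...
                -w(2,k)  w(1,k)  0 ];
    w_dash(:,k) = (w(:,k+1) - w(:,k))/Ts;   % forward difference, valid for k <= N-1
    a = kron( w_dash(:,k)', eye(3) ) + kron( w(:,k)', w_telda );
    A(q:q+2,:) = a;
    B(q:q+2,:) = u(k,:)';
    q = q + 3;
end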

Vectorize MATLAB for loop

I have the following lines of code
y = zeros(n, 1);
for i = 1:n
    b = L * [u(i:-1:max(1,i-M+1)); zeros((-i+M)*(i-M<0),1)];
    y(i) = b' * gamma;
end
u is nx1, gamma is Mx1 and L is MxM
n takes very large values, so are there any ideas on how to vectorize the for loop?
Discussion and solution code
Initial approach
Matrix-multiplications based approach -
u_pad = [zeros(M-1,1) ; u];          %// Pad u with zeros
idx = bsxfun(@plus,[M:-1:1]',0:n-1); %// Calculate sliding indices for u
u_pad_indexed = u_pad(idx);          %// Index into padded u
y_vectzed = gamma.'*L*u_pad_indexed; %// Matrix-multiplications for final o/p
Modified approach
Now, you have huge data sizes to work with. So, to optimize for such a case, the data can be broken into smaller chunks and the operations performed iteratively, with each iteration computing a part of the output array.
With this new strategy, the initial setup can be done once and reused within each iteration. The modified approach would look something like this -
div_factor = [...] %// Make sure it is a divisor of n
nrows = n/div_factor;
start_idx = bsxfun(@plus,[M:-1:1]',0:nrows-1);
u_pad = [zeros(M-1,1) ; u];
y_vectorized = zeros(div_factor,n/div_factor);
for iter = 1:div_factor
    u_pad_indexed = u_pad((iter-1)*nrows + start_idx);
    y_vectorized(iter,:) = gamma.'*L*u_pad_indexed;
end
y_vectorized = reshape(y_vectorized.',[],1);
Benchmarking
%// Size parameters and input arrays
n = 4000000;
M = 1000;
u = rand(n,1);
gamma = rand(M,1);
L = rand(M,M);

%// Warm up tic/toc.
for k = 1:50000
    tic(); elapsed = toc();
end

disp('----------- With Original loopy code');
tic
y = zeros(n, 1);
for i = 1:n
    b = L * [u(i:-1:max(1,i-M+1)); zeros((-i+M)*(i-M<0),1)];
    y(i) = b' * gamma;
end
toc
clear b y

disp('----------- With Proposed solution code');
tic
..... Proposed Modified Code with div_factor = 200
toc
Runtimes
----------- With Original loopy code
Elapsed time is 498.563049 seconds.
----------- With Proposed solution code
Elapsed time is 44.273299 seconds.
Edit: I am redoing the solution because I found out that MATLAB does not handle anonymous functions well, so I changed the call from an anonymous function to a normal function. With this change:
Test 1
Comparison(40E3, 3E3)
Elapsed time is 21.731176 seconds.
Elapsed time is 251.327347 seconds.
|y2-y1| = 3.1519e-06
Test 2
Comparison(40E3, 1E3)
Elapsed time is 6.407259 seconds.
Elapsed time is 25.551116 seconds.
|y2-y1| = 2.8402e-07
Test 3
Comparison(20E3, 3E3)
Elapsed time is 10.484422 seconds.
Elapsed time is 125.033313 seconds.
|y2-y1| = 1.5462e-06
Test 4
Comparison(20E3, 1E3)
Elapsed time is 3.153404 seconds.
Elapsed time is 13.200649 seconds.
|y2-y1| = 1.5627e-07
The function is:
function Comparison(n, M)
    u = rand(n, 1);
    L = rand(M);
    gamma = rand(M, 1);

    tic
    y1 = vectorized(u, L, gamma);
    toc
    tic
    y2 = looped(u, L, gamma);
    toc

    disp(['|y2-y1| = ', num2str(norm(y2 - y1, 1))])
end

function y = vectorized(u, l, gamma)
    global a Column
    M = length(gamma);
    Column = l' * gamma;
    x = bsxfun(@plus, -(1:M)', (1:length(u)) + 1);
    x(x < 1) = 1;
    a = u(x);
    a(1:M, 1:M) = a(1:M, 1:M) .* triu(ones(M));
    a = a';
    m = 1:size(a,1);
    y = arrayfun(@VectorY, m)';
end

function yi = VectorY(j)
    global a Column
    yi = a(j,:) * Column;
end

function y = looped(U, l, gamma)
    n = length(U);
    M = length(gamma);
    u = U';
    L = l';
    y = zeros(n, 1);
    for i = 1:n
        y(i) = [u(i:-1:max(1,i-M+1)), zeros(1,(-i+M)*(i-M<0))] * L * gamma;
    end
end

MATLAB sum series function

I am very new to MATLAB. I am trying to implement the sum of the series 1 + x + x^2/2! + x^3/3! + ..., but I could not figure out how to do it. So far I have only summed plain numbers. Help please.
for ii = 1:length(a)
    sum_a = sum_a + a(ii)
    sum_a
end
n = 0 : 10; % elements of the series
x = 2; % value of x
s = sum(x .^ n ./ factorial(n)); % sum
The second part of your answer is:
n = 0:input('variable?')
Cheery's approach is perfectly valid when the number of terms of the series is small. For large values, a faster approach is as follows. This is more efficient because it avoids repeating multiplications:
m = 10;
x = 2;
result = 1+sum(cumprod(x./[1:m]));
Example running time for m = 1000; x = 1;
tic
for k = 1:1e4
    result = 1+sum(cumprod(x./[1:m]));
end
toc
tic
for k = 1:1e4
    result = sum(x.^(0:m)./factorial(0:m));
end
toc
gives
Elapsed time is 1.572464 seconds.
Elapsed time is 2.999566 seconds.
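As a quick sanity check (not part of the original answer), both formulas compute the same partial sum of the Taylor series of exp(x), so they can be compared against each other and against exp:

m = 10; x = 2;
s1 = 1 + sum(cumprod(x./(1:m)));
s2 = sum(x.^(0:m)./factorial(0:m));
abs(s1 - s2)       % ~0, round-off only
abs(s1 - exp(x))   % small truncation error for moderate m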

Optimizing repetitive estimation (currently a loop) in MATLAB

I've found myself needing to do a least-squares (or similar matrix-based operation) for every pixel in an image. Every pixel has a set of numbers associated with it, and so it can be arranged as a 3D matrix.
(This next bit can be skipped)
Quick explanation of what I mean by least-squares estimation :
Let's say we have some quadratic system that is modeled by Y = Ax^2 + Bx + C and we're looking for those A,B,C coefficients. With a few samples (at least 3) of X and the corresponding Y, we can estimate them by:
Arrange the (let's say 10) X samples into a matrix like X = [x(:).^2 x(:) ones(10,1)];
Arrange the Y samples into a similar matrix: Y = y(:);
Estimate the coefficients A,B,C by solving: coeffs = (X'*X)^(-1)*X'*Y;
Try this on your own if you want:
A = 5; B = 2; C = 1;
x = 1:10;
y = A*x(:).^2 + B*x(:) + C + .25*randn(10,1); % added some noise here
X = [x(:).^2 x(:) ones(10,1)];
Y = y(:);
coeffs = (X'*X)^-1*X'*Y
coeffs =
5.0040
1.9818
0.9241
START PAYING ATTENTION AGAIN IF I LOST YOU THERE
*MAJOR REWRITE* I've modified this to bring it as close as possible to the real problem I have, while still keeping it a minimum working example.
Problem Setup
%// Setup
xdim = 500;
ydim = 500;
ncoils = 8;
nshots = 4;
%// matrix size for each pixel is ncoils x nshots (an overdetermined system)
%// each pixel has a matrix stored in the 3rd and 4th dimensions
regressor = randn(xdim,ydim, ncoils,nshots);
regressand = randn(xdim, ydim,ncoils);
So my problem is that I have to do a (X'*X)^-1*X'*Y (least-squares or similar) operation for every pixel in an image. While that operation itself is vectorized/matrixized, the only way I have to do it for every pixel is in a for loop, like:
Original code style
%// Actual work
tic
estimate = zeros(xdim,ydim);
for col = 1:size(regressor,2)
    for row = 1:size(regressor,1)
        X = squeeze(regressor(row,col,:,:));
        Y = squeeze(regressand(row,col,:));
        B = X\Y;
        % B = (X'*X)^(-1)*X'*Y; %// equivalently
        estimate(row,col) = B(1);
    end
end
toc
Elapsed time = 27.6 seconds
EDITS in response to comments and other ideas
I tried some things:
1. Reshaped into a long vector and removed the double for loop. This saved some time.
2. Removed the squeeze (and in-line transposing) by permuting the picture beforehand: this saved a lot more time.
Current example:
%// Actual work
tic
estimate2 = zeros(xdim*ydim,1);
regressor_mod = permute(regressor,[3 4 1 2]);
regressor_mod = reshape(regressor_mod,[ncoils,nshots,xdim*ydim]);
regressand_mod = permute(regressand,[3 1 2]);
regressand_mod = reshape(regressand_mod,[ncoils,xdim*ydim]);
for ind = 1:size(regressor_mod,3) % for every pixel
    X = regressor_mod(:,:,ind);
    Y = regressand_mod(:,ind);
    B = X\Y;
    estimate2(ind) = B(1);
end
estimate2 = reshape(estimate2,[xdim,ydim]);
toc
Elapsed time = 2.30 seconds (avg of 10)
isequal(estimate2,estimate) == 1;
Rody Oldenhuis's way
N = xdim*ydim*ncoils; %// number of columns
M = xdim*ydim*nshots; %// number of rows
ii = repmat(reshape(1:N,[ncoils,xdim*ydim]),[nshots 1]); %// column indices
jj = repmat(1:M,[ncoils 1]); %// row indices
X = sparse(ii(:),jj(:),regressor_mod(:));
Y = regressand_mod(:);
B = X\Y;
B = reshape(B(1:nshots:end),[xdim ydim]);
Elapsed time = 2.26 seconds (avg of 10)
or 2.18 seconds (if you don't include the definition of N,M,ii,jj)
SO THE QUESTION IS:
Is there an (even) faster way?
(I don't think so.)
You can achieve a ~factor of 2 speed up by precomputing the transposition of X. i.e.
for x = 1:size(picture,2) % second dimension b/c already transposed
    X = picture(:,x);
    XX = X';
    Y = randn(n_timepoints,1);
    % B = (X'*X)^-1*X'*Y;
    B = (XX*X)^-1*XX*Y;
    est(x) = B(1);
end
Before: Elapsed time is 2.520944 seconds.
After: Elapsed time is 1.134081 seconds.
EDIT:
Your code, as it stands in your latest edit, can be replaced by the following
tic
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
% Actual work
picture = randn(xdim,ydim,n_timepoints);
picture = reshape(picture, [xdim*ydim,n_timepoints])'; % note transpose
YR = randn(n_timepoints,size(picture,2));
% (XX*X).^-1 = sum(picture.*picture).^-1;
% XX*Y = sum(picture.*YR);
est = sum(picture.*picture).^-1 .* sum(picture.*YR);
est = reshape(est,[xdim,ydim]);
toc
Elapsed time is 0.127014 seconds.
This is an order of magnitude speed up on the latest edit, and the results are all but identical to the previous method.
EDIT2:
Okay, so if X is a matrix, not a vector, things are a little more complicated. We basically want to precompute as much as possible outside of the for-loop to keep our costs down. We can also get a significant speed-up by computing XT*X manually - since the result will always be a symmetric matrix, we can cut a few corners to speed things up. First, the symmetric multiplication function:
function XTX = sym_mult(X) % X is a 3-d matrix
n = size(X,2);
XTX = zeros(n,n,size(X,3));
for i = 1:n
    for j = i:n
        XTX(i,j,:) = sum(X(:,i,:).*X(:,j,:));
        if i~=j
            XTX(j,i,:) = XTX(i,j,:);
        end
    end
end
Now the actual computation script
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
Y = randn(10,xdim*ydim);
picture = randn(xdim,ydim,n_timepoints); % 500x500x10
% Actual work
tic % start timing
picture = reshape(picture, [xdim*ydim,n_timepoints])';
% Here we precompute the (XT*Y) calculation to speed things up later
picture_y = [sum(Y);sum(Y.*picture)];
% initialize
est = zeros(size(picture,2),1);
picture = permute(picture,[1,3,2]);
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture);
XTX = sym_mult(XTX); % precompute (XT*X) for speed
X = zeros(2,2); % preallocate for speed
XY = zeros(2,1);
for x = 1:size(XTX,3) % one 2x2 system per pixel
    % For some reason this is a lot faster than X = XTX(:,:,x);
    X(1,1) = XTX(1,1,x);
    X(2,1) = XTX(2,1,x);
    X(1,2) = XTX(1,2,x);
    X(2,2) = XTX(2,2,x);
    XY(1) = picture_y(1,x);
    XY(2) = picture_y(2,x);
    % Here we utilise the fact that A\B is faster than inv(A)*B
    % We also use the fact that (A*B)*C = A*(B*C) to speed things up
    B = X\XY;
    est(x) = B(1);
end
est = reshape(est,[xdim,ydim]);
toc % end timing
Before: Elapsed time is 4.56 seconds.
After: Elapsed time is 2.24 seconds.
This is a speed up of about a factor of 2. This code should be extensible to X being any dimensions you want. For instance, in the case where X = [1 x x^2], you would change picture_y to the following
picture_y = [sum(Y);sum(Y.*picture);sum(Y.*picture.^2)];
and change XTX to
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture,picture.^2);
You would also change a lot of 2s to 3s in the code, and add XY(3) = picture_y(3,x) to the loop. It should be fairly straightforward, I believe; a sketch of those changes follows.
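For reference, here is a rough sketch of the quadratic case (my own adaptation, following the same structure as the script above; the element-by-element copy trick can of course be kept if it proves faster):

% picture_y is built before the permute, as before, but with a squared term:
picture_y = [sum(Y); sum(Y.*picture); sum(Y.*picture.^2)];
% ... reshape/initialize/permute as above, then:
XTX = cat(2, ones(n_timepoints,1,size(picture,3)), picture, picture.^2);
XTX = sym_mult(XTX);        % now 3 x 3 x (xdim*ydim)
X = zeros(3,3);             % preallocate the 3x3 system
XY = zeros(3,1);
for x = 1:size(XTX,3)
    X(:,:) = XTX(:,:,x);
    XY(:) = picture_y(:,x);
    B = X\XY;
    est(x) = B(1);
end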
Results
I sped up your original version, since your edit 3 was actually not working (and also does something different).
So, on my PC:
Your (original) version: 8.428473 seconds.
My obfuscated one-liner given below: 0.964589 seconds.
First, for no other reason than to impress, I'll give it as I wrote it:
%%// Some example data
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
estimate = zeros(xdim,ydim); %// initialization with explicit size
picture = randn(xdim,ydim,n_timepoints);

%%// Your original solution
%// (slightly altered to make my version's results agree with yours)
tic
Y = randn(n_timepoints,xdim*ydim);
ii = 1;
for x = 1:xdim
    for y = 1:ydim
        X = squeeze(picture(x,y,:)); %// or similar creation of X matrix
        B = (X'*X)^(-1)*X' * Y(:,ii);
        ii = ii+1;
        %// sometimes you keep everything and do
        %// estimate(x,y,:) = B(:);
        %// sometimes just the first element is important and you do
        estimate(x,y) = B(1);
    end
end
toc
%%// My version
tic
%// UNLEASH THE FURY!!
estimate2 = reshape(sparse(1:xdim*ydim*n_timepoints, ...
    builtin('_paren', ones(n_timepoints,1)*(1:xdim*ydim),:), ...
    builtin('_paren', permute(picture, [3 2 1]),:))\Y(:), ydim,xdim).';
toc

%%// Check for equality
max(abs(estimate(:)-estimate2(:))) % (always less than ~1e-14)
Breakdown
First, here's the version that you should actually use:
%// Construct sparse block-diagonal matrix
%// (Type "help sparse" for more information)
N = xdim*ydim; %// number of columns
M = N*n_timepoints; %// number of rows
ii = 1:M; %// one row index per element of picture
jj = ones(n_timepoints,1)*(1:N);
s = permute(picture, [3 2 1]);
X = sparse(ii,jj(:), s(:));
%// Compute ALL the estimates at once
estimates = X\Y(:);
%// You loop through the *second* dimension first, so to make everything
%// agree, we have to extract elements in the "wrong" order, and transpose:
estimate2 = reshape(estimates, ydim, xdim).';
Here's an example of what picture and the corresponding matrix X looks like for xdim = ydim = n_timepoints = 2:
>> clc, picture, full(X)
picture(:,:,1) =
-0.5643 -2.0504
-0.1656 0.4497
picture(:,:,2) =
0.6397 0.7782
0.5830 -0.3138
ans =
-0.5643 0 0 0
0.6397 0 0 0
0 -2.0504 0 0
0 0.7782 0 0
0 0 -0.1656 0
0 0 0.5830 0
0 0 0 0.4497
0 0 0 -0.3138
You can see why sparse is necessary -- it's mostly zeros, but will grow large quickly. The full matrix would quickly consume all your RAM, while the sparse one will not consume much more than the original picture matrix does.
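A rough way to see this for yourself (my own check, not from the answer):

nnz(X)            % one nonzero per element of picture, i.e. xdim*ydim*n_timepoints
whos X picture    % the sparse X is the same order of magnitude as picture
% A full M-by-N matrix would need roughly M*N*8 bytes (terabytes here),
% so do not call full(X) at the real problem size.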
With this matrix X, the new problem
X·b = Y
now contains all the problems
X1 · b1 = Y1
X2 · b2 = Y2
...
where
b = [b1; b2; b3; ...]
Y = [Y1; Y2; Y3; ...]
so, the single command
X\Y
will solve all your systems at once.
This offloads all the hard work to a set of highly specialized, compiled-to-machine-specific-code, optimized-in-every-way algorithms, rather than the interpreted, generic, always-two-steps-away-from-the-hardware loops in MATLAB.
It should be straightforward to convert this to a version where X is a matrix; you'll end up with something like what blkdiag does, which can also be used by mldivide in exactly the same way as above.
I had a wee play around with an idea, and I decided to stick it in as a separate answer, as it's a completely different approach to my other idea, and I don't actually condone what I'm about to do. I think this is the fastest approach so far:
Original (unoptimised): 13.507176 seconds.
Fast Cholesky-decomposition method: 0.424464 seconds
First, we've got a function to quickly do the X'*X multiplication. We can speed things up here because the result will always be symmetric.
function XX = sym_mult(X)
n = size(X,2);
XX = zeros(n,n,size(X,3));
for i = 1:n
    for j = i:n
        XX(i,j,:) = sum(X(:,i,:).*X(:,j,:));
        if i~=j
            XX(j,i,:) = XX(i,j,:);
        end
    end
end
Then we have a function to do an LDL Cholesky decomposition of a 3D matrix (we can do this because the (X'*X) matrix will always be symmetric), followed by forward and backward substitution to solve the resulting LDL system.
function Y = fast_chol(X,XY)
n = size(X,2);
L = zeros(n,n,size(X,3));
D = zeros(n,n,size(X,3));
B = zeros(n,1,size(X,3));
Y = zeros(n,1,size(X,3));
% These loops compute the LDL decomposition of the 3D matrix
for i = 1:n
    D(i,i,:) = X(i,i,:);
    L(i,i,:) = 1;
    for j = 1:i-1
        L(i,j,:) = X(i,j,:);
        for k = 1:(j-1)
            L(i,j,:) = L(i,j,:) - L(i,k,:).*L(j,k,:).*D(k,k,:);
        end
        D(i,j,:) = L(i,j,:);
        L(i,j,:) = L(i,j,:)./D(j,j,:);
        if i~=j
            D(i,i,:) = D(i,i,:) - L(i,j,:).^2.*D(j,j,:);
        end
    end
end
for i = 1:n
    B(i,1,:) = XY(i,:);
    for j = 1:(i-1)
        B(i,1,:) = B(i,1,:) - D(i,j,:).*B(j,1,:);
    end
    B(i,1,:) = B(i,1,:)./D(i,i,:);
end
for i = n:-1:1
    Y(i,1,:) = B(i,1,:);
    for j = n:-1:(i+1)
        Y(i,1,:) = Y(i,1,:) - L(j,i,:).*Y(j,1,:);
    end
end
Finally, we have the main script which calls all of this
xdim = 500;
ydim = 500;
n_timepoints = 10; % for example
Y = randn(10,xdim*ydim);
picture = randn(xdim,ydim,n_timepoints); % 500x500x10
tic % start timing
picture = reshape(picture, [xdim*ydim,n_timepoints])';
% Here we precompute the (XT*Y) calculation
picture_y = [sum(Y);sum(Y.*picture)];
% initialize
est2 = zeros(size(picture,2),1);
picture = permute(picture,[1,3,2]);
% Now we calculate the X'*X matrix
XTX = cat(2,ones(n_timepoints,1,size(picture,3)),picture);
XTX = sym_mult(XTX);
% Call our fast Cholesky decomposition routine
B = fast_chol(XTX,picture_y);
est2 = B(1,:);
est2 = reshape(est2,[xdim,ydim]);
toc
Again, this should work equally well for a Nx3 X matrix, or however big you want.
I use Octave, so I can't say anything about the resulting performance in Matlab, but I would expect this code to be slightly faster:
pictureT = picture';
est = arrayfun(@(x)( (pictureT(x,:)*picture(:,x))^-1*pictureT(x,:)*randn(n_timepoints,1) ), 1:size(picture,2));