Optimizing a kind-of-correlation computation - matlab

I have some code that currently reads:
data = repmat(1:10, 1, 2);
N = 6;
period = 10;
result = NaN * zeros(1, period);
for i=1:period
range_indices = i:i+N;
temp_data = data(range_indices);
result(i) = sum( temp_data .* fliplr(temp_data));
end
I'm trying to make this faster (for larger datasets, e.g. period = 2000 and N = 1600), but I'm unable to get this into a form where it's a matrix operation (e.g. by using conv or xcorr).

You should be able to completely linearise this. Firstly, consider the range_indices. These have the form:
1 -> N+1
2 -> N+2
...
P -> N+P
where P is the period. We can set up a matrix of these values like so:
range_indices = bsxfun(@plus,1:N+1,(1:period)'-1);
We can use these to grab the data directly, like so:
temp_data = data(range_indices);
It is then fairly simple to complete the function:
result = sum(temp_data.*fliplr(temp_data), 2).';
Finally, this isn't really related to the question, but just something I thought I'd point out - in future if you need to generate a matrix of NaN values, you should use nan(1,period) instead.
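Putting the pieces together, here is a minimal sketch (it reuses the question's small sample data, which is an assumption about scale, not the real dataset) that builds the vectorised result and checks it against the original loop:
data   = repmat(1:10, 1, 2);
N      = 6;
period = 10;
range_indices = bsxfun(@plus, 1:N+1, (1:period)'-1);   % row i holds i:i+N
temp_data     = data(range_indices);                   % period-by-(N+1)
result_vec    = sum(temp_data .* fliplr(temp_data), 2).';
result_loop = nan(1, period);
for i = 1:period
    temp = data(i:i+N);
    result_loop(i) = sum(temp .* fliplr(temp));
end
isequal(result_vec, result_loop)   % should return true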

Related

How to speed up this for-loop code (for large matrix `H_sparse`)?

H_sparse is a large matrix with size 20,000-by-5,000. The matrix-vector product dk = A * Del_H; in the code below is time consuming. How can I speed up this code?
This code is another way to get an equivalent result to the built-in function pinv(H_Sparse) in MATLAB. I think MATLAB uses mex files and bsxfun in pinv, so it's fast.
But in theory the below algorithm is faster:
function PINV_H_Spp = Recur_Pinv_Comp( H_Sparse )
L = 1;
H_candidate = H_Sparse(:,L);
A = pinv( H_candidate );
for L = 1:size( H_Sparse, 2 ) - 1
L = L + 1;
Del_H = H_Sparse(:,L);
dk = A * Del_H;
Ck = Del_H - H_candidate * dk;
Gk = pinv( Ck );
A = A - dk * Gk;
A(end+1,:) = Gk;
H_candidate(:,end+1) = Del_H;
end
PINV_H_Spp = A;
The code can be compared with pinv(H_Sparse), using H_Sparse = rand(20000, 5000) as sample data.
A few points of improvement:
You can change your loop index to 2:size(H_Sparse, 2) and remove the line L = L + 1;.
There's no need to create a separate variable H_candidate, since it just holds the first columns of H_Sparse. Instead, index H_Sparse directly and you'll save on memory.
Instead of building your matrix A row-by-row, you can preallocate it and update it using indexing. This usually provides some speed-up.
Return A as your output. No need to put it in another variable.
Here's a new version of the code incorporating the above improvements:
function A = Recur_Pinv_Comp(H_Sparse)
[nRows, nCols] = size(H_Sparse);
A = [pinv(H_Sparse(:, 1)); zeros(nCols-1, nRows)];  % A ends up nCols-by-nRows, like pinv(H_Sparse)
for L = 2:nCols
Del_H = H_Sparse(:, L);
dk = A(1:L-1, :)*Del_H;
Ck = Del_H - H_Sparse(:, 1:L-1)*dk;
Gk = pinv(Ck);
A(1:L-1, :) = A(1:L-1, :) - dk*Gk;
A(L, :) = Gk;
end
end
In addition, it looks like your calls to pinv only ever operate on column vectors, so you may be able to replace them with a simple array transpose and scaling by the sum of the squares of the vector (which might speed things up a little more):
Gk = Ck.'./(Ck.'*Ck);
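As a quick sanity check (a sketch, not part of the answer above), you can compare the refactored function against pinv on a smaller random matrix before running it at full size; the question itself claims the recursion reproduces pinv:
H  = rand(500, 100);              % smaller stand-in for the 20000-by-5000 H_Sparse
A1 = Recur_Pinv_Comp(H);
A2 = pinv(H);
max(abs(A1(:) - A2(:)))           % should be small, up to numerical tolerance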

vectorize two nested for-loops in MATLAB

I have two nested for-loops that I use to format data that I load. The loops have the following construction:
data = magic(20000);
data = data(:,1:3);
for i=0:10
for j=0:10
data_tmp = data((1:100)+100*j+100*10*i,:);
vx(:, i+1,j+1) = data_tmp(:,1);
vy(:, i+1,j+1) = data_tmp(:,2);
vz(:, i+1,j+1) = data_tmp(:,3);
end
end
I pre-allocate the arrays vx, vy and vz to their desired size. However, is there a way to vectorize the for-loops to increase efficiency? I'm not convinced there is, because of the first line inside the inner loop, data((1:100)+100*j+100*10*i,:). Is there a better way to do this?
It turns out that your loop reads some blocks more than once: the indices at i=k, j=10 coincide with those at i=k+1, j=0 for k<10.
For example, you read (1:100) + 100*10 + 100*10*0 and then (1:100) + 100*0 + 100*10*1, which are identical.
Reshape w/ Repeated index
If this was what you intended to do, then vectorization needs one more step (index generation).
Following is my suggestion (N=100, M=10 where N is the length of data_tmp and M is the maximum loop variable)
index = bsxfun(@plus,bsxfun(@plus,(1:N)',reshape(N*(0:M),1,1,M+1)),M*N*(0:M)); % index generation
vx = data(index);
vy = data(index + size(data,1));
vz = data(index + size(data,1)*2);
This is not that desirable, but it will work.
When I tested it on my laptop, it was about twice as fast as your original code with pre-allocation. As I increase the size of data, the gap gets smaller and smaller.
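A quick way to convince yourself that the generated index matches the loop's rows (a sketch; it assumes N and M are set as above, N = 100 and M = 10, and the particular i and j are just example values):
i = 2; j = 3;                                        % any 0 <= i, j <= M
isequal(index(:, i+1, j+1), (1:N)' + N*j + N*M*i)    % expected: true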
Reshape w/o Repeated index
If not (i.e., you want to reshape each column along the 3rd dimension first and the 2nd dimension last), then the following will work.
Firstly, this is how I interpreted your code:
data = magic(20000);
data = data(:,1:3);
N = 100; M = 10;
for i=0:(M-1)
for j=0:(M-1)
data_tmp = data((1:N)+N*j+N*M*i,:);
vx(:, i+1,j+1) = data_tmp(:,1);
vy(:, i+1,j+1) = data_tmp(:,2);
vz(:, i+1,j+1) = data_tmp(:,3);
end
end
Note that the loops end at M-1.
Following is my suggestion.
vx = permute(reshape(data(1:N*M*M,1), N, M, M),[1,3,2]);
vy = permute(reshape(data(1:N*M*M,2), N, M, M),[1,3,2]);
vz = permute(reshape(data(1:N*M*M,3), N, M, M),[1,3,2]);
On my laptop, it is 4 times faster than the original code. As I increase the size, the speed-up approaches 2x.
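Here is a short sketch (not from the original answer) that checks the reshape/permute version against the interpreted double loop, using a smaller random matrix as a stand-in for the data:
N = 100; M = 10;
data = rand(N*M*M, 3);
vx_loop = zeros(N, M, M);
for i = 0:(M-1)
    for j = 0:(M-1)
        vx_loop(:, i+1, j+1) = data((1:N) + N*j + N*M*i, 1);
    end
end
vx_reshaped = permute(reshape(data(1:N*M*M, 1), N, M, M), [1, 3, 2]);
isequal(vx_loop, vx_reshaped)   % expected: true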
Just in case you want to stick with the loop, here is a much faster way to do this:
data = randi(100,20000,3);
[vx,vy,vz] = deal(zeros(100,11,11));
[J,I] = ndgrid(1:11,1:11);
c = 1;
for k = 0:100:11000
vx(:,I(c),J(c)) = data((1:100)+k,1);
vy(:,I(c),J(c)) = data((1:100)+k,2);
vz(:,I(c),J(c)) = data((1:100)+k,3);
c = c+1;
end
My guess is that the reshape from @Dohyun's answer is what you're looking for (it's 10x faster than this, and 10000x faster than your code), but for the next time you use loops, this may be useful.
And here is another option to do this without reshape, in a similar time to the reshape version:
[vx,vy,vz] = deal(zeros(100,10,11));
vx(:) = data(1:11000,1);
vy(:) = data(1:11000,2);
vz(:) = data(1:11000,3);
vx = permute(vx,[1 3 2]);
vy = permute(vy,[1 3 2]);
vz = permute(vz,[1 3 2]);
The idea is that you define the shape of [vx,vy,vz] while allocating them.

how to vectorize array reformatting?

I have a .csv file with data on each line in the format (x,y,z,t,f), where f is the value of some function at location (x,y,z) at time t. So each new line in the .csv gives a new set of coordinates (x,y,z,t), with accompanying value f. The .csv is not sorted.
I want to use imagesc to create a video of this data in the xy-plane, as time progresses. The way I've done this is by reformatting M into something more easily usable by imagesc. I'm doing three nested loops, roughly like this
M = csvread('file.csv');
uniqueX = unique(M(:,1));
uniqueY = unique(M(:,2));
uniqueT = unique(M(:,4));
M_reformatted = zeros(length(uniqueX), length(uniqueY), length(uniqueT));
for i = 1:length(uniqueX)
for j = 1:length(uniqueY)
for k = 1:length(uniqueT)
M_reformatted(i,j,k) = M( ...
M(:,1)==uniqueX(i) & ...
M(:,2)==uniqueY(j) & ...
M(:,4)==uniqueT(k), ...
5 ...
);
end
end
end
Once I have M_reformatted, I can loop through time steps k and use imagesc on M_reformatted(:,:,k). But the nested loops above are very slow. Is it possible to vectorize them? If so, an outline of the approach would be very helpful.
Edit: as noted in the answers/comments below, I made a mistake: there are several possible z-values, which I haven't taken into account. With only a single z-value, the above would be fine.
This vectorized solution allows for negative values of x and y and is many times faster than the non-vectorized solution (close to 20x for the test case at the bottom).
The idea is to sort the x, y, and t values in lexicographical order using sortrows and then using reshape to build the time slices of M_reformatted.
The code:
idx = find(M(:,3)==0); %// find rows where z==0
M2 = M(idx,:); %// M2 has only the rows where z==0
M2(:,3) = []; %// delete z coordinate in M2
M2(:,[1 2 3]) = M2(:,[3 1 2]); %// change from (x,y,t,f) to (t,x,y,f)
M2 = sortrows(M2); %// sort rows by t, then x, then y
numT = numel(unique(M2(:,1))); %// number of unique t values
numX = numel(unique(M2(:,2))); %// number of unique x values
numY = numel(unique(M2(:,3))); %// number of unique y values
%// fill the time slice matrix with data
M_reformatted = reshape(M2(:,4), numY, numX, numT);
Note: I am assuming y refers to the columns of the image and x refers to the rows. If you want these flipped, use M_reformatted = permute(M_reformatted,[2 1 3]) at the end of the code.
The test case I used for M (to compare the result to other solutions) has a (2N+1)-by-(2N+1)-by-(2N+1) spatial grid with T time slices:
N = 10;
T = 10;
[x,y,z] = meshgrid(-N:N,-N:N,-N:N);
numPoints = numel(x);
x=x(:); y=y(:); z=z(:);
s = repmat([x,y,z],T,1);
t = repmat(1:T,numPoints,1);
M = [s, t(:), rand(numPoints*T,1)];
M = M( randperm(size(M,1)), : );
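For completeness, the animation step mentioned in the question could then look something like this (a sketch; the pause length and axis settings are arbitrary choices, not part of the answer):
for k = 1:size(M_reformatted, 3)
    imagesc(M_reformatted(:, :, k));    % one time slice in the xy-plane
    axis image; colorbar;
    title(sprintf('time step %d of %d', k, size(M_reformatted, 3)));
    pause(0.1);
end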
I don't think you need to vectorize. I think you should change your algorithm.
You only need one loop to step through the lines of the CSV file. For every line, you have (x,y,z,t,f) so just store it in M_reformatted where it belongs. Something like this:
M_reformatted = zeros(max(M(:,1)), max(M(:,2)), max(M(:,4)));
for line = 1:size(M,1)
z = M(line, 3);
if z ~= 0, continue; end;
x = M(line, 1);
y = M(line, 2);
t = M(line, 4);
f = M(line, 5);
M_reformatted(x, y, t) = f;
end
Also note that pre-allocating M_reformatted is a very good idea, but your code may have been getting the size wrong (depending on the data). Using max like I did should get the size right as long as the coordinates are positive integers.
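If the coordinates are not positive integers (for example, the negative x and y values in the other answer's test case), one possible variation, not part of either answer, is to map each coordinate to its rank with unique and then assign everything in a single vectorized step:
[~, ~, ix] = unique(M(:,1));    % x value -> row index
[~, ~, iy] = unique(M(:,2));    % y value -> column index
[~, ~, it] = unique(M(:,4));    % t value -> slice index
keep = (M(:,3) == 0);           % keep only the z == 0 plane, as in the loop above
M_reformatted = zeros(max(ix), max(iy), max(it));
M_reformatted(sub2ind(size(M_reformatted), ix(keep), iy(keep), it(keep))) = M(keep, 5);
Like the loop above, this assumes each (x, y, t) combination appears exactly once for z == 0.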

Matlab - How to improve efficiency of two port matrix calculations?

I'm looking for a way to speed up some simple two port matrix calculations. See the code example below for what I'm doing currently. In essence, I create an [Nx1] frequency vector first. I then loop through the frequency vector and create the [2x2] matrices H1 and H2 (all functions of f). A bit of simple matrix math, including a matrix left division '\', later, and I get my result pb as an [Nx1] vector. The problem is the loop - it takes a long time to calculate, and I'm looking for a way to improve the efficiency of the calculations. I tried assembling the problem using [2x2xN] transfer matrices, but the mtimes operation cannot handle 3-D multiplications.
Can anybody please give me an idea how I can approach such a calculation without the need for looping through f?
Many thanks: svenr
% calculate frequency and wave number vector
f = linspace(20,200,400);
w = 2.*pi.*f;
% calculation for each frequency w
for i=1:length(w)
H1(i,1) = {[1, rho*c*k(i)^2 / (crad*pi); 0,1]};
H2(i,1) = {[1, 1i.*w(i).*mp; 0, 1]};
HZin(i,1) = {H1{i,1}*H2{i,1}};
temp_mat = HZin{i,1}*[1; 0];
Zin(i,1) = temp_mat(1,1)/temp_mat(2,1);
temp_mat= H1{i,1}\[1; 1/Zin(i,1)];
pb(i,1) = temp_mat(1,1); Ub(i,:) = temp_mat(2,1);
end
Assuming that length(w) == length(k) returns true, that rho, c, crad and mp are all scalars, and that the last line should read Ub(i,1) = temp_mat(2,1) rather than Ub(i,:) = temp_mat(2,1):
temp = repmat(eye(2), [1 1 length(w)]);          % stack of 2x2 identity matrices
temp1 = temp;
temp1(1,2,:) = rho*c*(k.^2)/crad/pi;             % upper-right entries of the H1 matrices
temp2 = temp;
temp2(1,2,:) = (1i.*w)*mp;                       % upper-right entries of the H2 matrices
H1 = permute(num2cell(temp1,[1 2]),[3 2 1]);
H2 = permute(num2cell(temp2,[1 2]),[3 2 1]);
HZin = cellfun(@(a,b) a*b, H1, H2, 'UniformOutput', 0);
temp_cell = cellfun(@(a,b) a*b, HZin, repmat({[1;0]}, length(w), 1), 'UniformOutput', 0);
Zin_cell = cellfun(@(a) a(1,1)/a(2,1), temp_cell, 'UniformOutput', 0);
Zin = cell2mat(Zin_cell);
temp2_cell = cellfun(@(a) [1; 1/a], Zin_cell, 'UniformOutput', 0);
temp3_cell = cellfun(@(a,b) pinv(a)*b, H1, temp2_cell, 'UniformOutput', 0);
temp4 = cell2mat(temp3_cell);
pb(:,1) = temp4(1:2:end-1);
Ub(:,1) = temp4(2:2:end);
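On newer MATLAB releases the [2x2xN] approach the question asked about also works directly, without cell arrays: pagemtimes (R2020b and later) multiplies stacks of matrices, and pagemldivide (R2022a and later) does the page-wise left division. The following is a sketch under the same assumptions as above (k the same length as w; rho, c, crad, mp scalars), not part of the original answer:
n  = length(w);
H1 = repmat(eye(2), [1 1 n]);
H2 = repmat(eye(2), [1 1 n]);
H1(1,2,:) = rho*c*(k.^2)/(crad*pi);
H2(1,2,:) = 1i.*w.*mp;
HZin = pagemtimes(H1, H2);                    % 2x2xn stack of matrix products
Zin  = squeeze(HZin(1,1,:) ./ HZin(2,1,:));   % HZin*[1;0] is just the first column of HZin
rhs  = cat(1, ones(1,1,n), reshape(1./Zin, 1, 1, n));
tmp  = pagemldivide(H1, rhs);                 % H1 \ [1; 1/Zin] for each frequency
pb   = squeeze(tmp(1,1,:));
Ub   = squeeze(tmp(2,1,:));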

lookup table in matlab

I am trying to implement a kind of lookup table in MATLAB.
I have data generated from a script in which three variables are swept, let's say var_a, var_b and var_c. These are nested sweeps (var_a -> var_b -> var_c).
And there are 10 outputs, out_01, out_02, ..., out_10.
Now I have arranged each output as out_01 = f(var_a, var_b, var_c), i.e., simply rearranged the data as in a nested loop.
My question is: how can I build a lookup table for such data?
I want to give an input like: get out_01 at a certain var_a(X), var_b(Y), var_c(Z).
I have tried the following.
idx1_var_a = max(find(data.var_a <= options.var_a));
idx2_var_a = min(find(data.var_a >= options.var_a));
idx1_var_b = max(find(data.var_b <= options.var_b));
idx2_var_b = min(find(data.var_b >= options.var_b));
idx1_var_c = max(find(data.var_c <= options.var_c));
idx2_var_c = min(find(data.var_c >= options.var_c));
Y1 = interpn(data.var_c,data.var_b,data.var_a,data.out_01,data.var_c(idx1_var_c),data.var_b(idx1_var_b),data.var_a(idx1_var_a))
Y2 = interpn(data.var_c,data.var_b,data.var_a,data.out_01,data.var_c(idx2_var_c),data.var_b(idx2_var_b),data.var_a(idx2_var_a))
if Y1 == Y2
Y = Y1
else
% Here I am unable to figure out how to interpolate between these two output values, Y1 and Y2!
end
Any help is welcome.
I think you are looking for this:
Suppose you have:
var_a = 1:3;
var_b = 0:0.3:0.9;
var_c = 1:2;
[A, B, C] = ndgrid(var_a, var_b, var_c)
F = A.^3+B.^2+C;
Now you can directly access the function at all existing points:
F(1,2,2)
Or alternatively
F(var_a==1,var_b==0.3,var_c==2)
Now if you are interested in values between the gridpoints, you can use interp3
Vq = interp3(F,1.5,2.5,1.5)
Note that in this form interp3 takes the query location in index units (the position within the grid vectors), not in the units of var_a, var_b and var_c.
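If you would rather query in the actual variable units, one option (a sketch, assuming the ndgrid arrays A, B and C from above are still in scope) is interpn, which accepts ndgrid-style coordinate arrays:
Vq = interpn(A, B, C, F, 1.5, 0.45, 1.5)   % value near var_a = 1.5, var_b = 0.45, var_c = 1.5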