Decomposition of 3D FFT using 1D FFT in dimension z - matlab

I have a 3D matrix:
A = [5 7 8; 0 1 9; 4 3 6];
A(:,:,2) = [1 0 4; 3 5 6; 9 8 7]
I want to apply a 3D FFT to this matrix using the decomposition into 1D FFTs. I read that I should apply a 1D FFT along each dimension.
How can I do this?
For x and y, I do this:
for k=0:2
y1 = A(:,k+1,:);
A(:,k+1,:) = fft(y1);
end
for k=0:2
y2 = A(k+1,:,:);
A(k+1,:,:) = fft(y2);
end
For the dimension z, I don't know how to do this.

The fft function accepts a third input specifying the dimension, and is vectorized with respect to the other dimensions. So you can simply use:
result = fft(fft(fft(A, [], 1), [], 2), [], 3);

First, your loops should look like this:
for k=1:size(A,2)
y = A(:,k,:);
A(:,k,:) = fft(y);
end
Second, the loop above is identical to (as @Luis Mendo said in his answer):
A = fft(A,[],2);
There is no need to write a loop at all.
Third, to compute the 1D FFT along the 3rd dimension, you use:
fft(A,[],3);
You could write this as a loop (just to answer your explicit question, I don't recommend you do this):
for k=1:size(A,3)
y = A(:,:,k);
A(:,:,k) = fft(y);
end
If, for some reason, that doesn't work in your version of MATLAB because of the shape of y, you can reshape y to be a column vector:
... fft(y(:));
Finally, to compute the 3D FFT using 1D decompositions, you can simply write
A = fftn(A);
This follows the exact same process you are trying to implement, except it does it much faster.
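As a quick sanity check (a minimal sketch using the example matrix from the question), the chained 1D FFTs reproduce fftn up to floating-point round-off:
A = [5 7 8; 0 1 9; 4 3 6];
A(:,:,2) = [1 0 4; 3 5 6; 9 8 7];
B = fft(fft(fft(A, [], 1), [], 2), [], 3); % one 1D FFT per dimension
C = fftn(A);                               % full 3D FFT
max(abs(B(:) - C(:)))                      % should be on the order of eps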

Related

3D matrix, multiplication on last dimensions

I have a 3D matrix A of size, let's say, 3x12x100. The first two dimensions define 3×12 matrices; the last one is simply the linear index. I want a very simple operation on these 100 matrices: each one multiplied by its conjugate transpose. With a very simple for loop, I can do this:
% data is the 3x12x100 matrix described above
A = zeros(100, 12, 12);
for i=1:100
A(i, :, :) = data(:, :, i)'*data(:, :, i);
end
But I like clean code, so I don't really like this for-loop. I have done some searching and sometimes find something like mtimesx (a user-contributed MATLAB function from 2010). I think I am missing something very obvious (as usual), because this seems like a fairly simple operation (it's just an "element-wise" matrix multiplication).
The size of my actual matrix is 3x12x862400. My original script takes about 10 minutes or longer; a variant of what @FangQ posted fixes it in a matter of seconds. My new code is as follows (note that it is still under construction and I still need to validate it):
data = rand(3, 12, 862400) + 1i*rand(3, 12, 862400); % 862400 complex 3x12 matrices (stand-in for my data)
data2 = conj(permute(data, [2 1 3])); % conjugate transpose of each matrix
Ap = permute(data2, [2 1 4 3]);
Bp = permute(data, [1 4 2 3]);
M = Ap.*Bp;
M = sum(M, 1);
M = permute(M, [2 3 4 1]);
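To validate the vectorized version, here is a small sketch comparing it against the plain loop on the first few matrices (only a sanity check, not part of the original code):
n = 5;
M_loop = zeros(12, 12, n);
for k = 1:n
    M_loop(:, :, k) = data(:, :, k)' * data(:, :, k); % reference result
end
diffs = M(:, :, 1:n) - M_loop;
max(abs(diffs(:))) % should be close to zero (round-off only)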
@Cris was right; you can find an example in this MATLAB Central post:
https://www.mathworks.com/matlabcentral/answers/10161-3d-matrix-multiplication#answer_413531

making two vectors the same length using interpolation in matlab

I want to interpolate a vector y1 of length 3 to get a vector y2 of length 6. Which of the functions interp1 or resample should I use?
ex.
y1=[1 2 3];
y2=[1 2 3 4 5 6 ];
resample(y1,length(y2),length(y1))
Use interp1.
Ex: You have a sinusoidal signal sampled every pi/4.
x = 0:pi/4:2*pi;
v = sin(x);
Now you want a finer sampling xq (every pi/16):
xq = 0:pi/16:2*pi;
The result will be:
vq1 = interp1(x,v,xq);
where vq1 is a vector whose values are interpolated from v to satisfy the new sampling xq.
PS: You can also pass as a parameter which type of interpolation you want: 'linear', 'nearest', 'cubic', etc.
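Applied to the vectors in the question, a minimal sketch (the choice of query points is an assumption about what "length 6" should mean):
y1 = [1 2 3];
x1 = 1:numel(y1);                % original sample positions
xq = linspace(1, numel(y1), 6);  % 6 query points over the same range
y2 = interp1(x1, y1, xq)         % y2 = [1 1.4 1.8 2.2 2.6 3]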

How can this code be vectorized in MATLAB? Which kinds of code can be vectorized? [duplicate]

I have a matrix a and I want to calculate the distance from one chosen point to all other points. The outcome matrix should have a zero at the point I have chosen, and the values should appear as a sort of circle of numbers around that specific point.
This is what I have already, but I can't seem to get the correct outcome.
a = [1 2 3 4 5 6 7 8 9 10]
for i = 2:20
a(i,:) = a(i-1,:) + 1;
end
N = 10
for I = 1:N
for J = 1:N
dx = a(I,1)-a(J,1);
dy = a(I,2)-a(J,2);
distance(I,J) = sqrt(dx^2 + dy^2)
end
end
Your a matrix is a 1D vector and is incompatible with the nested loop, which computes distances in 2D space from each point to each other point. So the following answer applies to the problem of finding all pairwise distances in an N-by-D matrix, as your loop does for the case D=2.
Option 1 - pdist
I think you are looking for pdist with the 'euclidean' distance option.
a = randn(10, 2); % 2D, 10 samples
D = pdist(a,'euclidean'); % euclidean distance
Follow that by squareform to get the square matrix with zero on the diagonal as you want it:
distances = squareform(D);
Option 2 - bsxfun
If you don't have pdist, which is in the Statistics Toolbox, you can do this easily with bsxfun:
da = bsxfun(@minus,a,permute(a,[3 2 1]));
distances = squeeze(sqrt(sum(da.^2,2)));
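In MATLAB R2016b and newer you can also rely on implicit expansion instead of bsxfun (a minor variant of the same computation, not part of the original answer):
da = a - permute(a, [3 2 1]);             % implicit expansion replaces bsxfun(@minus, ...)
distances = squeeze(sqrt(sum(da.^2, 2)));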
Option 3 - reformulated equation
You can also use an alternate form of Euclidean (2-norm) distance,
||A-B|| = sqrt ( ||A||^2 + ||B||^2 - 2*A.B )
Writing this in MATLAB for two data arrays u and v of size NxD,
dot(u-v,u-v,2) == dot(u,u,2) + dot(v,v,2) - 2*dot(u,v,2) % useful identity
% there are actually small differences from floating point precision, but...
abs(dot(u-v,u-v,2) - (dot(u,u,2) + dot(v,v,2) - 2*dot(u,v,2))) < 1e-15
With the reformulated equation, the solution becomes:
aa = a*a';
a2 = sum(a.*a,2); % diag(aa)
a2 = bsxfun(@plus,a2,a2');
distances = sqrt(a2 - 2*aa);
You might use this method if Option 2 eats up too much memory.
Timings
For a random data matrix of size 1e3-by-3 (N-by-D), here are timings for 100 runs (Core 2 Quad, 4GB DDR2, R2013a).
Option 1 (pdist): 1.561150 sec (0.560947 sec in pdist)
Option 2 (bsxfun): 2.695059 sec
Option 3 (bsxfun alt): 1.334880 sec
Findings: (i) When computing manually, use bsxfun with the alternate formula. (ii) The pdist+squareform option has comparable performance. (iii) The reason squareform takes about twice as long as pdist is probably that pdist only computes the triangular part, since the distance matrix is symmetric. If you can do without the square matrix, you can skip squareform and do the computation in about 40% of the time required by the manual bsxfun approach (0.5609/1.3348).
This is what I was looking for, but thanks for all the suggestions.
A = rand(5, 5);
select_cell = [3 3];
distance = zeros(size(A, 1), size(A, 2));
for i = 1:size(A, 1)
for j = 1:size(A, 2)
distance(i, j) = sqrt((i - select_cell(1))^2 + (j - select_cell(2))^2);
end
end
disp(distance)
Also, you can improve it by using vectorisation:
distances = sqrt((x - xCenter).^2 + (y - yCenter).^2);
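For completeness, here is a small sketch of how x, y, xCenter and yCenter could be built for the 5x5 example above (the meshgrid construction is an assumption, not part of the original answer):
A = rand(5, 5);
select_cell = [3 3];
[x, y] = meshgrid(1:size(A, 2), 1:size(A, 1)); % x holds column indices, y holds row indices
xCenter = select_cell(2);
yCenter = select_cell(1);
distances = sqrt((x - xCenter).^2 + (y - yCenter).^2); % same result as the nested loops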
IMPORTANT: data_matrix is D X N, where D is number of dimensions and N is number of data points!
final_dist_pairs = data_matrix'*data_matrix;                           % Gram matrix (N x N)
norms = diag(final_dist_pairs);                                        % squared norm of each point
final_dist_pairs = bsxfun(@plus, norms, norms') - 2*final_dist_pairs;  % squared pairwise distances; take sqrt for Euclidean distance
Hope it helps!
Another important thing: never use MATLAB's pdist function. It is a sequential evaluation, something like a for loop, and takes a lot of time, maybe O(N^2).

Matlab formula optimization without for loops

I am trying to implement the Hough transform algorithm. The algorithm works, but it is slow.
Currently I calculate rho with this equation in two for loops:
for i = 1 : length(x)
j=1;
for theta = -pi/2:nBinsTheta:pi/2-nBinsTheta
ro =round(x(i).*cos(theta) + y(i).*sin(theta));
....
j = j + 1;
end
end
How can I simplify this to work without for loops?
I need to calculate ro without loops, but how can I do this so that all possible thetas are covered?
EDIT: Now I need to know how to add 1 to designated cells in an accumulator matrix, given x and y coordinate vectors. For example, let's say that I have vectors like:
x: [1 2 1 3]
y: [1 3 1 4]
I'd like to solve this problem without loops. I know that I need to convert to linear indices using sub2ind, but the problem is that there will be many repeated linear indices; in the example I gave, the index for coordinate (1,1) appears twice. If you try to add 1 like so:
A([1 1]) = A([1 1]) + 1;
it adds 1 only once; that's my problem.
Assuming x and y are row vectors, you can pre-calculate all ro values in a 2D matrix with the following code, which should speed up the rest of the work you do with the ro values inside the nested loops:
theta_vec = (-pi/2:nBinsTheta:pi/2-nBinsTheta).';
ro_vals = round( cos(theta_vec)*x + sin(theta_vec)*y );
assert(all(size(x) == size(y)), 'dimension mismatch: x, y')
theta = (-pi/2:nBinsTheta:pi/2-nBinsTheta)';
assert(all(size(theta) == size(y)), 'dimension mismatch: theta, y')
rho = x.*cos(theta) + y.*sin(theta);
rho_rounded = round(rho);
do you really need j?
PS: the previous answer might not work because it uses the matrix multiplication operator * instead of the elementwise .*
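For the accumulator update described in the edit, one way to count repeated indices without a loop is accumarray; this is only a sketch using the example coordinates from the edit (the 5x5 accumulator size and the row/column convention are assumptions):
x = [1 2 1 3];
y = [1 3 1 4];
A = zeros(5, 5);                              % accumulator; size assumed for illustration
idx = sub2ind(size(A), y, x);                 % linear indices, with repeats
counts = accumarray(idx(:), 1, [numel(A) 1]); % counts each index as often as it occurs
A(:) = A(:) + counts;                         % repeated indices now add more than once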

Is there a function to get the skew diagonal of a matrix in matlab?

A=[a_11, a_12; a_21, a_22]
The skew diagonal is [a_12, a_21]. Right now, I flip the matrix around and use diag.
As an alternative to fliplr and diag, you can index directly into the matrix like this:
A = magic(3);
s = length(A);
idx = s:(s-1):(s*(s-1)+1);
% for the anti-diagonal in the opposite order (top-right to bottom-left), use:
% idx = (s*(s-1)+1):(-s+1):s;
skewDiag = A(idx)
skewDiag =
4 5 6
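For comparison, the fliplr/diag approach mentioned in the question returns the same elements read from the other end (a small sketch):
skewDiag2 = diag(fliplr(A)).' % top-right to bottom-left
skewDiag2 =
6 5 4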