Huge Fourier matrix - MATLAB

I need to create a Fourier matrix in order to apply it to a huge matrix that I needed to define as sparse using spalloc. I tried:
F=dftmtx(N);
but N is too large so I can't create it.
Is there any way to solve this problem?
Thank you for your help!

For each column, you can form a reduced DFT matrix by leaving out the entries that will multiply zeros. Something like
X = my_matrix;                 % the large sparse matrix
c = column_index;              % column currently being transformed
x = X(:,c);
N = length(x);
inds = find(x);                % positions of the nonzero entries
F = exp( -1j * 2*pi/N * (0:N-1)' * (inds-1).' );  % N-by-numel(inds) reduced DFT matrix
Xdft(:,c) = F * x(inds);
You'll have to iterate over the columns unless the sparsity pattern is the same in every column. However, the above still seems silly to me; I'd just pull off one column at a time and use fft(), as sketched below.
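A minimal sketch of that per-column fft() approach (my_matrix is a placeholder name; fft() does not accept sparse input, so each column is converted with full(), and the result Xdft is a full matrix):
X = my_matrix;                    % large sparse input
[N, ncols] = size(X);
Xdft = complex(zeros(N, ncols));  % the DFT output is generally dense
for c = 1:ncols
    Xdft(:,c) = fft(full(X(:,c)));  % transform one column at a time
end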

Related

Mixing 3D arrays into a bigger 3D array

I wish to fill an M x N x W matrix ‘S’ with the data from matrices ‘P’ and ‘Q’. They are defined below and illustrated in the attached image. Also, we know for sure that n_1 + n_2 = N and m < M, so all the data fit in the ‘S’ matrix.
S = zeros(M,N,W);
P = rand(m,n_1,W);
Q = rand(m,n_2,W);
I wish to combine ‘P’ and ‘Q’ in a manner specified by 3 other matrices, ‘Line_num’, ‘P_col’ and ‘Q_col’, described below and in the middle part of the attached image.
P_col = randperm(N); P_col = P_col(1:n_1); % 1 x n_1 matrix
Q_col = setxor(P_col, 1:1:N); % 1 x n_2 matrix
Line_num is a matrix composed of W vectors of m increasing row indices, chosen at random for each vector (in the simplest case of the form aa:1:bb with bb - aa = m - 1).
The important thing is that the data along the 3rd dimension of all these matrices represent W different test cases; the data are different and must not be mixed between test cases.
To fill ‘S’ one may proceed in two logical steps (although if it can be done in one I shall be glad):
1. Combine Q and P into an intermediate matrix Y of shape m x N x W by interweaving their columns. The columns specified in ‘Q_col’ are taken from Q (using the vector index) and put in the matrix Y (using the vector value). The same goes for P.
2. For each of the W vectors composing Line_num and arrays composing S, use the values in the vector Line_num to spread out Y to the corresponding rows in S, meanwhile maintaining their top-to-bottom order.
I wish to achieve this without for-loops as I am looking to ‘vectorize’ my code and thus improve its running speed.
I have had a look at this post and this post, which are similar to what I desire. However they are simpler as the numbers to be extracted are constant. Maybe something similar would be appropriate?
Thank you for your help :)
Link to the image aforementioned
EDIT: here is example code with a for-loop that does what I want (my problem is that I want to get rid of the loop)
W = 4;
N = 10; n_1 = 4; n_2 = 6;
M = 20; m = 5;
P_col = [1,3,5,8]; % 1 x n_1 matrix
Q_col = setxor(P_col, 1:1:N); % 1 x n_2 matrix
line_num(:,:,1) = [1,5,10,15,18];
line_num(:,:,2) = [2,3,8,11,12];
line_num(:,:,3) = [4,7,8,14,19];
line_num(:,:,4) = [2,6,13,15,16];
S = zeros(M,N,W);
P = rand(m,n_1,W);
Q = rand(m,n_2,W);
for w=1:W
line_num_I = line_num(:,:,w);
S(line_num_I,Q_col,w) = Q(:,:,w);
S(line_num_I,P_col,w) = P(:,:,w);
end
Here is a vectorized solution. I'm not sure whether it is more efficient than the loop version, especially when the data are large.
S ( reshape(line_num,[],1,W) ...
+ ([Q_col-1 P_col-1]) * M ...
+ (reshape(0:W-1,1,1,[]))*M*N ...
) ...
= ...
[reshape(Q,[],W);reshape(P,[],W)];
Here implicit expansion is used to convert subscripts to indices. Equivalently bsxfun can be used to compute linear indices:
S ( ...
bsxfun(@plus, ...
reshape(line_num,[],1,W), ...
bsxfun(@plus, ...
([Q_col-1 P_col-1]) * M, ...
(reshape(0:W-1,1,1,[]))*M*N ...
) ...
) ...
) ...
= ...
[reshape(Q,[],W);reshape(P,[],W)];
Here you can find how to convert 3D subscripts to linear indices.
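For illustration, a sketch of the same assignment built with sub2ind instead of hand-computed offsets (using the variables from the example above; repmat expands every subscript array to m x N x W):
rows  = repmat(reshape(line_num,[],1,W), 1, N, 1);   % m x N x W row subscripts
cols  = repmat([Q_col P_col],            m, 1, W);   % m x N x W column subscripts
pages = repmat(reshape(1:W,1,1,[]),      m, N, 1);   % m x N x W page subscripts
idx   = sub2ind(size(S), rows, cols, pages);         % linear indices into S
S(idx) = [reshape(Q,[],W); reshape(P,[],W)];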
So I ended up finding the answer. For those of you whom it may interest, the above for-loop may be replaced by:
% 1. Combine columns
mixed_col = zeros(m,N,W);
mixed_col(:,Q_col,:) = Q(:,:,:);
mixed_col(:,P_col,:) = P(:,:,:);
mixed_col = permute(mixed_col,[2,1,3]); % turn 3D matrix into 2D [1]
mixed_col = reshape(mixed_col,N,[],1)';
% 2. Combine lines
S = reshape(S,M*W,N,1); % turn 3D matrix into 2D [2]
line_num_v = permute(line_num + reshape((0:1:(W-1)).*M,1,1,W),[2,1,3]); % turn 3D matrix into 2D [3]
line_num_v = reshape(line_num_v,[],1,1);
S(line_num_v,:) = mixed_col(:,:); % combine using three 2D matrices
S = permute(reshape(S',N,M,W),[2,1,3]);
This involves lots of reshaping but I don't have a simpler answer.
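A quick way to check the result against the original loop (a small sanity check, not part of the solution itself):
S_loop = zeros(M,N,W);
for w = 1:W
    S_loop(line_num(:,:,w),Q_col,w) = Q(:,:,w);
    S_loop(line_num(:,:,w),P_col,w) = P(:,:,w);
end
isequal(S, S_loop)   % should return logical 1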
Thanks again for your help.

Vectorize column wise operation in octave or matlab

How can I vectorize the following code? It is basically computing the mean of each column.
mu(1) = sum(X(:,1))/C
mu(2) = sum(X(:,2))/C
and this (normalize each element; each column has a different mean and std; X is 47x2, mu and sigma are both 1x2):
X_norm(:,1) = (X(:,1)-mu(1))/sigma(1)
X_norm(:,2) = (X(:,2)-mu(2))/sigma(2)
It's as simple as:
mu = sum(X) ./ C
sum operates along the first dimension by default, i.e. it sums down each column and returns a row vector.
EDIT:
For the second part of the question:
X_norm = bsxfun(#rdivide, bsxfun(#minus, X, mu), sigma)
It is similar to using repmat, but without the memory overhead.
You can even use mu = mean(X).
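If you are on MATLAB R2016b or newer (or a recent Octave, which broadcasts automatically), implicit expansion lets you drop bsxfun entirely; a minimal sketch, assuming mu and sigma are 1x2 row vectors as in the question:
mu     = mean(X);           % 1 x 2 row of column means
sigma  = std(X);            % 1 x 2 row of column standard deviations
X_norm = (X - mu) ./ sigma; % the 1 x 2 rows expand across the 47 rows of X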

bsxfun-like for matrix product

I need to multiply a matrix A with n matrices, and get n matrices back. For example, multiply a 2x2 matrix with 3 2x2 matrices stacked as a 2x2x3 Matlab array. bsxfun is what I usually use for such situations, but it only applies to element-wise operations.
I could do something like:
blkdiag(a, a, a) * blkdiag(b(:,:,1), b(:,:,2), b(:,:,3))
but I need a solution for arbitrary n.
You can reshape the stacked matrices. Suppose you have a k-by-k matrix a and a stack sb of m k-by-k matrices, and you want the product a*sb(:,:,ii) for ii = 1..m. Then all you need is
sza = size(a);
b = reshape( sb, sza(2), [] ); % concatenate all stacked matrices along the second dim
res = a * b;
res = reshape( res, sza(1), [], size(sb,3) ); % stack back to 3D
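A quick usage sketch with the sizes from the question (a 2x2 matrix times a stack of three 2x2 matrices):
a   = rand(2,2);
sb  = rand(2,2,3);
sza = size(a);
res = reshape( a * reshape(sb, sza(2), []), sza(1), [], size(sb,3) );
% res(:,:,ii) now equals a*sb(:,:,ii) for each page ii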
Your solution can be adapted to arbitrary size using comma-separated lists obtained from cell arrays:
[k, m, n] = size(B); % note: this requires A and each page of B to be square (k = m), as in the question
Acell = mat2cell(repmat(A,[1 1 n]),k,m,ones(1,n));
Bcell = mat2cell(B,k,m,ones(1,n));
blkdiag(Acell{:}) * blkdiag(Bcell{:});
You could then stack the blocks into a 3D array using this answer, and keep only the relevant ones.
But in this case a good old loop is probably faster:
C = NaN(size(B));
for nn = 1:n
C(:,:,nn) = A * B(:,:,nn);
end
For large stacks of matrices and/or vectors over which to execute matrix multiplication, speed can start becoming an issue. To avoid re-inventing the wheel, you could simply compile and use the fast MEX code MTIMESX, available on the MathWorks File Exchange.
As a rule of thumb, MATLAB is often quite inefficient at executing for loops over large numbers of operations that look like they should be vectorizable. I cannot think of a straightforward way of generalising Shai's answer to this case.
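As an aside, if you are on MATLAB R2020b or newer, the built-in pagemtimes performs this page-wise product directly, with no loop, reshape, or MEX needed:
A = rand(2,2);
B = rand(2,2,3);
C = pagemtimes(A, B);   % C(:,:,ii) equals A*B(:,:,ii) for each page ii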

Matlab double summation of series

I am trying to write a function in Matlab which will compute the following equation: the double sum over all i and j with j ≠ i of sqrt(a_i*a_j)*x_i*x_j.
The x and a values are in two matrices. I have tried almost everything, but cannot get the correct answer. Can anyone help?
Thanks
Assuming A and X are vectors of size n x 1, you could construct that expression by writing transpose(X) * (sqrt(A * transpose(A)) .* (ones(n) - eye(n))) * X.
Another way to do this is
a = sqrt(ain); % ain is your input column vector
A = a*a.';
A = A-diag(diag(A));
aresult = x.'*A*x % x is your (other) input column vector
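A small sanity check (with made-up data) showing the two constructions above compute the same quantity:
n   = 4;
ain = rand(n,1);                                         % the a values
x   = rand(n,1);                                         % the x values
r1  = x.' * (sqrt(ain*ain.') .* (ones(n)-eye(n))) * x;   % first answer's expression
a   = sqrt(ain);  A = a*a.';  A = A - diag(diag(A));
r2  = x.' * A * x;                                       % second answer's expression
abs(r1 - r2)                                             % should be on the order of eps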

Vectorize call to function of two vectors (treat matrix as array of vector)

I wish to compute the cumulative cosine distance between sets of vectors.
The natural representation of a set of vectors is a matrix...but how do I vectorize the following?
function d = cosdist(P1,P2)
ds = zeros(size(P1,2),1);
for k=1:size(P1,2)
%#used transpose() to avoid SO formatting on '
ds(k)=transpose(P1(:,k))*P2(:,k)/(norm(P1(:,k))*norm(P2(:,k)));
end
d = prod(ds);
end
I can of course write
fz = #(v1,v2) transpose(v1)*v2/(norm(v1)*norm(v2));
ds = cellfun(fz,P1,P2);
...so long as I recast my matrices as cell arrays of vectors. Is there a better / entirely numeric way?
Also, will cellfun, arrayfun, etc. take advantage of vector instructions and/or multithreading?
Note (probably superfluous in present company): for column vectors, v1'*v2 == dot(v1,v2), and the former is significantly faster in Matlab.
Since P1 and P2 are of the same size, you can do element-wise operations here. v1'*v2 equals sum(v1.*v2), by the way.
d = prod(sum(P1.*P2,1)./sqrt(sum(P1.^2,1) .* sum(P2.^2,1)));
@Jonas had the right idea, but the normalizing denominator might be incorrect. Try this instead:
%# matrix of column vectors
P1 = rand(5,8);
P2 = rand(5,8);
d = prod( sum(P1.*P2,1) ./ sqrt(sum(P1.^2,1).*sum(P2.^2,1)) );
You can compare this against the results returned by PDIST2 function:
%# pdist2 with 'cosine' returns one minus the cosine of the angle between each pair of vectors (the cosine distance), so 1-diag(...) recovers the paired cosines
d2 = prod( 1-diag(pdist2(P1',P2','cosine')) );
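For the random P1 and P2 above, d and d2 should agree to within rounding error (pdist2 requires the Statistics and Machine Learning Toolbox); a quick check:
abs(d - d2)   % expected to be on the order of eps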