Vectorization of flip left-right of a 3D matrix using Matlab

I have a 3D matrix of a movie (say a matrix M of size J*K*L). I want to flip each frame left-right. Using fliplr(M) doesn't work, as M must be a 2-D matrix. I know I can use a for loop like the following:
for ii = 1:size(M,3)
    M(:,:,ii) = fliplr(M(:,:,ii));
end
Is there a "vectorized" way to do it?
More generally, is there a "vectorized" way to apply any of Matlab's matrix manipulations (flipud, repmat, etc.) in this case?

Alternatively, you can use simple indexing:
>> M = rand(3,4,5);
>> M = M(:, end:-1:1, :);
This is a lot faster and less resource intensive than flipdim, and I think a lot cleaner too.
However, for some people, this particular usage of the end keyword is confusing, so if you're one of those people, flipdim will do just fine :)
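A minimal sketch (assuming a small random "movie" M) showing the indexed flip and the analogous up-down flip, checked against fliplr/flipud on a single frame:
M = rand(3,4,5);
Mlr = M(:, end:-1:1, :);                 % fliplr applied to every frame
Mud = M(end:-1:1, :, :);                 % flipud applied to every frame
isequal(Mlr(:,:,2), fliplr(M(:,:,2)))    % returns true
isequal(Mud(:,:,2), flipud(M(:,:,2)))    % returns true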

I think you are looking for
M = flipdim(M, 2);
This flips an N dimensional matrix along the dimension you specify as the second parameter. Thus, the flipud could be replaced with
M = flipdim(M, 1);
Not sure where you are going with the repmat question, but I often find I can use bsxfun instead of repmat. Look it up.
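For example (a hypothetical per-column centering, not taken from the question), where repmat would explicitly replicate the mean, bsxfun expands it implicitly:
M = rand(4,6,3);                          % hypothetical data
mu = mean(M, 1);                          % 1-by-6-by-3 per-column means
C1 = M - repmat(mu, size(M,1), 1, 1);     % explicit replication with repmat
C2 = bsxfun(@minus, M, mu);               % implicit expansion with bsxfun
isequal(C1, C2)                           % returns true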

Related

Efficient implementation of a sequence of matrix-vector products / specific "tensor"-matrix product

I have a special algorithm where, as one of the last steps, I need to carry out a multiplication of a 3-D array with a 2-D array such that each matrix-slice of the 3-D array is multiplied with each column of the 2-D array. In other words, if, say, A is an N x N x N matrix and B is an N x N matrix, I need to compute a matrix C of size N x N where C(:,i) = A(:,:,i)*B(:,i);.
The naive way to implement this is a loop, i.e.,
C = zeros(N,N);
for i = 1:N
    C(:,i) = A(:,:,i)*B(:,i);
end
However, loops aren't the fastest in Matlab and should be avoided, so I'm looking for faster ways of doing this. Right now, what I do is use the fact that (MathJax would be great here!):
[A1 b1, A2 b2, ..., AN bN] = [A1, A2, ..., AN]*blkdiag(b1,b2,...,bN)
This gets rid of the loop; however, we have to create a block-diagonal matrix of size N^2 x N. I'm building it as a sparse matrix to be efficient, like this:
A_long = reshape(A,N,N^2);
b_cell = mat2cell(B,N,ones(1,N)); % convert matrix to cell array of vectors
b_cell{1} = sparse(b_cell{1}); % make first element sparse, this is enough to trigger blkdiag into sparse mode
B_blk = blkdiag(b_cell{:});
C = A_long*B_blk;
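A quick sanity check (small random A and B assumed) that this reproduces the loop result:
N = 8;                                    % small test size (assumed)
A = rand(N,N,N);  B = rand(N,N);
C_loop = zeros(N,N);
for i = 1:N
    C_loop(:,i) = A(:,:,i)*B(:,i);        % reference loop
end
A_long = reshape(A, N, N^2);
b_cell = mat2cell(B, N, ones(1,N));
b_cell{1} = sparse(b_cell{1});
C_blk = A_long*blkdiag(b_cell{:});
max(abs(C_loop(:) - full(C_blk(:))))      % ~0, up to round-off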
According to my benchmarks, this approach is faster than the loop by a factor of around two (for large N), despite the necessary preparations (the multiplication alone is 3 to 4-fold faster than the loop).
Here is a quick benchmark I did, varying the problem size N and measuring the time for the loop and the alternative approach (with and without the preparation steps). For large N the speedup is around 2...2.5.
Still, this looks awfully complicated to me. Is there a simpler or better way to achieve this? This looks like it's a quite generic/standard problem so I could imagine that solutions are around, I just don't know what to search for really.
P.S.: blkdiag(A1,...,AN)*B is an obvious alternative but here the block diagonal is already N^2 x N^2 so I don't think it can be better than what I did.
edit: Thanks to everyone for commenting! I have carried out a new benchmark in Matlab R2016b. Unfortunately, I do not have both versions on the same computer, so we cannot compare the absolute numbers, but the relative comparison is still interesting, since it has changed a bit. Here it is:
And here is a zoom on the high-N area:
Couple of observations:
SumRepDot is the solution proposed by Divakar, namely, to use squeeze(sum(bsxfun(@times,A,permute(B,[3,1,2])),2)), which on R2016b simplifies to squeeze(sum(A.*permute(B,[3,1,2]),2)) (see the sketch after this list). It is faster than the loop for high N by a factor of around 1.2...1.4.
The loop is still "slow" in the sense that the multiplication with the sparse block-diagonal matrix is much faster.
For the latter, the preparation overhead seems to become negligible for high N which makes it overall a factor of 3...4 faster than the loop. This is a nice result.
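A minimal sketch of that permute-based product (small random data assumed), checked against the loop; on releases before R2016b the elementwise product needs bsxfun(@times, ...):
N = 8;                                              % small test size (assumed)
A = rand(N,N,N);  B = rand(N,N);
C_loop = zeros(N,N);
for i = 1:N
    C_loop(:,i) = A(:,:,i)*B(:,i);                  % reference loop
end
C_sum = squeeze(sum(A .* permute(B,[3 1 2]), 2));   % implicit expansion (R2016b+)
max(abs(C_loop(:) - C_sum(:)))                      % ~0, up to round-off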

MATLAB: Efficient (vectorized) way to apply function on two matrices?

I have two matrices X and Y, both of order m x n. I want to create a new matrix O of order m x m such that each (i,j)th entry in this new matrix is computed by applying a function to the ith and jth rows of X and Y respectively. In my case m = 10000 and n = 500. I tried using a loop but it takes forever. Is there an efficient way to do it?
I am targeting two functions: the dot product, dot(row_i, row_j), and exp(-1*norm(row_i - row_j)). But I was wondering if there is a general way so that I can plug in any function.
Solution #1
For the first case, it looks like you can simply use matrix multiplication after transposing Y -
X*Y'
If you are dealing with complex numbers -
conj(X*ctranspose(Y))
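A small check (hypothetical sizes) that X*Y' does give the pairwise row dot products in the real case:
m = 6; n = 4;                               % small hypothetical sizes
X = rand(m,n);  Y = rand(m,n);
O = X*Y';                                   % O(i,j) = dot(X(i,:), Y(j,:))
abs(O(2,3) - dot(X(2,:), Y(3,:)))           % ~0, up to round-off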
Solution #2
For the second case, you need to do a little more work. You need to use bsxfun with permute to re-arrange dimensions and employ the raw form of norm calculations and finally squeeze to get a 2D array output -
squeeze(exp(-1*sqrt(sum(bsxfun(@minus,X,permute(Y,[3 2 1])).^2,2))))
If you would like to avoid squeeze, you can use two permute's -
exp(-1*sqrt(sum(bsxfun(@minus,permute(X,[1 3 2]),permute(Y,[3 1 2])).^2,3)))
I would also advise you to look into this problem - Efficiently compute pairwise squared Euclidean distance in Matlab.
In conclusion, there isn't a single most efficient way that could be employed for every function applied to the ith and jth rows of X and Y. If you are still hell-bent on that, you can use anonymous function handles with bsxfun, but I am afraid it won't be the most efficient technique.
For the second part, you could also use pdist2:
result = exp(-pdist2(X,Y));
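A quick check (small random matrices assumed; pdist2 is in the Statistics and Machine Learning Toolbox) that this matches the bsxfun/permute version above:
X = rand(6,4);  Y = rand(6,4);             % small test data (assumed)
R1 = exp(-pdist2(X,Y));                    % pdist2-based kernel
R2 = squeeze(exp(-sqrt(sum(bsxfun(@minus, X, permute(Y,[3 2 1])).^2, 2))));
max(abs(R1(:) - R2(:)))                    % ~0, up to round-off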

Large Vector Outer Product Matlab

I want to compute an outer product of the same vector in Matlab. A representative example would be:
x=rand(1e5,1);
sigma=x*x'-spdiags(x,0,length(x),length(x));
Is there any obvious way to speed this up? x*x' is a symmetric matrix, but I have not figured out a way to help Matlab use that information to speed things up.
EDIT: There is a way to do this with loops but I cannot see the benefit yet:
for k = 1:length(x)
    sigma(k:length(x),k) = x(k).*x(k:length(x));
end
The above might work with a sparse array.
Have you considered using pdist with a custom distance function?
sigmaCompact = pdist( x(:), @(x, Y) x.*Y );
sigma = squareform(sigmaCompact);
This gives you sigma up to the special treatment of the diagonal entries sigma(k,k), which squareform leaves as zeros.
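A small sketch (tiny random x assumed, and assuming the goal is the off-diagonal part of x*x') checking the pdist route against the direct outer product:
x = rand(5,1);                              % small test vector (assumed)
sigmaCompact = pdist(x, @(xi, Yj) xi.*Yj);  % pairwise products x(i)*x(j), i < j
sigma = squareform(sigmaCompact);           % symmetric matrix with zero diagonal
ref = x*x' - diag(x.^2);                    % outer product with the diagonal zeroed
max(abs(sigma(:) - ref(:)))                 % ~0, up to round-off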

how to get the maximally independent vectors given a set of vectors in MATLAB?

If I am given a set of vectors (they can be provided as the column vectors of a matrix), and I want to get the maximally independent vectors, what is the best way to go about it?
I could add one vector to the result set at a time and see whether the rank of the newly formed matrix increases or not. But I feel it is not very efficient. Of course, I could go back to Gaussian elimination to work this out. But I am just wondering if there is a better (efficient, numerically stable and robust) approach to this problem.
Thanks.
Edit
I feel that adding one vector at a time and watching for the rank to increase is probably not valid. We can, however, do deletion by watching whether the rank decreases.
This code will do the trick. It's a little bit dirty because it grows rInd on the fly, which isn't the most efficient, but the idea is more important. It uses the QR decomposition, which is basically Gram-Schmidt orthogonalization. From this, it walks through the columns of r until it finds the next column of A that adds something linearly independent to the currently known basis.
iUnderConsideration = 1;
[q,r] = qr(A);
rInd = [];
for j = 1:size(r,2)
    if r(iUnderConsideration,j) ~= 0           % this column contributes a new pivot
        rInd = [rInd r(:,j)];                  % keep this column of r
        iUnderConsideration = iUnderConsideration + 1;
    end
    if iUnderConsideration > size(r,1)
        break;
    end
end
q*rInd   % here's your answer
As a side note, this code will choose the vectors of your matrix A without changing them; svd wouldn't give you these directly.
[U,S,V]=svd(vectors);
U(1:size(vectors,1),1:size(vectors,2))=vectors;
U now contains the original vectors plus an optimally orthogonal set.
Doing RREF and looking for the columns with leading 1s is your best bet:
matr(:,logical(sum(rref(matr)==1)))
This will give you the basis for the column space of the matrix.
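Alternatively, a small sketch using rref's second output, which returns the pivot-column indices directly (the example matrix is hypothetical):
A = [1 2 1; 2 4 0; 3 6 1];        % column 2 = 2 * column 1 (hypothetical example)
[R, jb] = rref(A);                % jb lists the pivot columns
basis = A(:, jb)                  % original columns 1 and 3 form a basis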
SVD is your answer. See the MATLAB reference for svd.

Beginning Matlab question (matrix of zeros)

Why create a matrix of 0's in Matlab? For example,
A = zeros(5,5);
for i = 1:5
    A(i) = exp(i);
end
Following on from j_random_hacker's answer, it's much more efficient in MATLAB to pre-allocate an array rather than letting MATLAB expand it. MATLAB can expand arrays if you simply assign elements off the current "end" of the array, like so:
x = [];
for ii = 1:1e4
    x(ii) = 1/ii;
end
That's really inefficient because at each step in the loop, MATLAB will re-allocate "x" to be one element larger than it was previously. The following is much faster:
x = zeros( 1, 1e4 );
for ii = 1:1e4
    x(ii) = 1/ii;
end
(Probably fastest still in this case is: x = 1./(1:1e4);, but the pre-allocation route is what you need when you can't resolve things to a vectorised operation)
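A rough illustration of the gap (timings are machine- and release-dependent; the size n is chosen arbitrarily here):
n = 1e5;                          % hypothetical size, large enough to show the effect

tic
x = [];
for ii = 1:n
    x(ii) = 1/ii;                 % grows x by one element on every pass
end
tGrow = toc;

tic
y = zeros(1, n);                  % pre-allocated once
for ii = 1:n
    y(ii) = 1/ii;
end
tPre = toc;

fprintf('growing: %.3f s   pre-allocated: %.3f s\n', tGrow, tPre);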
This is identical to asking: Why create a variable with value 0?
Usually you would do this if you plan to accumulate a bunch of results together somehow. In this case, you have to start "somewhere".
Although it is possible to start out with an empty matrix and expand it by concatenating (adding) new elements, vector extension is highly inefficient in MATLAB because it requires a fresh memory allocation (and a copy) every time another element is concatenated. Preallocation establishes a matrix that's the right size in advance, and then each zero element can be replaced with the correct value. This method is much more efficient, especially in programs involving looping.
This is helpful if you are going to work on a large matrix, or with a sparse matrix. It is also helpful when you are reusing the same vector or matrix again and again.