Multiplying large matrices - MATLAB

I want to calculate the product of three matrices in MATLAB.
The matrices are related as follows:
L = D^(-1/2) * A * D^(-1/2);
D, A and L are n*n matrices. A and L are neither diagonal nor sparse, but D is diagonal. In this case n = 16900. When I calculate L in MATLAB, it takes a long time, about 4 hours!
My question is: Is there a more efficient way to calculate L?

You can use bsxfun twice. I'm not sure whether it will be faster:
v = diag(D).^(-1/2); %// this is the same as diag(D.^(-1/2))
L = bsxfun(@times, v.', bsxfun(@times, A, v));
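A quick sanity check of the equivalence on a small random example (n kept small here only so the direct product stays cheap; the names are for illustration):
n = 500;
A = rand(n);
D = diag(rand(n,1) + 1);            % diagonal with positive entries
v = diag(D).^(-1/2);
L1 = D^(-1/2) * A * D^(-1/2);       % direct computation (slow for large n)
L2 = bsxfun(@times, v.', bsxfun(@times, A, v));
max(abs(L1(:) - L2(:)))             % should be on the order of eps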

Instead of using naive matrix multiplication, you can use specialised, asymptotically faster algorithms. Strassen's algorithm comes to mind, but if I recall correctly it typically has a high constant factor despite its better asymptotic complexity. If you have a very limited set of possible values in your matrices, you can use a variation of the "four Russians" method.
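For illustration, here is a minimal recursive Strassen sketch (it assumes square matrices whose size is a power of two; real use needs padding to such a size and a tuned cutoff, and for this particular problem the diagonal structure of D makes the bsxfun approach above far cheaper anyway):
function C = strassen(A, B)
    % Minimal Strassen sketch: 7 recursive half-size products instead of 8.
    n = size(A, 1);
    if n <= 64                         % cutoff: fall back to the built-in product
        C = A * B;
        return
    end
    h = n/2;
    A11 = A(1:h,1:h);     A12 = A(1:h,h+1:end);
    A21 = A(h+1:end,1:h); A22 = A(h+1:end,h+1:end);
    B11 = B(1:h,1:h);     B12 = B(1:h,h+1:end);
    B21 = B(h+1:end,1:h); B22 = B(h+1:end,h+1:end);
    M1 = strassen(A11 + A22, B11 + B22);
    M2 = strassen(A21 + A22, B11);
    M3 = strassen(A11, B12 - B22);
    M4 = strassen(A22, B21 - B11);
    M5 = strassen(A11 + A12, B22);
    M6 = strassen(A21 - A11, B11 + B12);
    M7 = strassen(A12 - A22, B21 + B22);
    C = [M1 + M4 - M5 + M7, M3 + M5;
         M2 + M4,           M1 - M2 + M3 + M6];
end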

Related

Diagonalizing Matrix in Matlab Gives "Wrong" Linear Combination of Eigenvectors

In Matlab, I'm trying to solve for the energies and eigenstates of a Hamiltonian matrix which has a highly degenerate set of eigenvectors. The matrix is a 55x55 hermitian matrix, and when I call either eig or schur to do the diagonalization I find that some (but not all) of the eigenvectors are the "wrong" linear combinations within each degenerate subspace. What I mean by "wrong" is that there are additional constraints in the problem. In this case, there is a good quantum number, M, which I want to preserve by not allowing states with different M values to be mixed--- but that mixing is exactly what I see when I run the code. Is there a way to tell Matlab to diagonalize the matrix while simultaneously maintaining the eigenvectors of another operator?
You can use [eig_vect,eig_val] = eig(A) to get the eigenvectors and a diagonal matrix of eigenvalues (diag(eig_val) extracts the eigenvalues as a vector).
I don't know MATLAB well enough to know whether there is a routine for this, but here's how to do it algorithmically:
First diagonalise H, as you do now. Then for each degenerate eigenspace V, diagonalise the restriction of C to V, and use this diagonalisation to compute a simultaneous diagonalisation of C and H.
In more detail:
I assume you have an operator C that commutes with your Hamiltonian H. If V is the eigenspace of H for a particular (degenerate) eigenvalue, and you have a basis x[1] .. x[n] of V, then for each i, Cx[i] must be in V, and so we can expand Cx[i] in terms of the x[j] and get a matrix representation C^ of the restriction of C to V; that is, we compute
C^[k,j] = <x[k]|C*x[j]> k,j =1 .. n
Diagonalise the matrix C^, getting
C^ = U D U*   (U* denoting the conjugate transpose of U)
Then for each row (r[1], .., r[n]) of U* we can form
chi = Sum_j r[j]*x[j]
A little algebra shows that this is an eigenvector of C, and also of H
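A sketch of that procedure in MATLAB, assuming H and C are the n-by-n Hermitian operator matrices (here C would represent the quantum number M) and tol is a tolerance for grouping degenerate eigenvalues:
[V, E] = eig(H);
e = diag(E);
tol = 1e-8;
[e, order] = sort(real(e));              % group (nearly) equal eigenvalues
V = V(:, order);
i = 1;
while i <= numel(e)
    j = i;
    while j < numel(e) && abs(e(j+1) - e(i)) < tol
        j = j + 1;                       % columns i..j span one degenerate subspace
    end
    if j > i
        X = V(:, i:j);                   % basis x[1] .. x[n] of the subspace
        Chat = X' * C * X;               % restriction C^ of C to the subspace
        [U, ~] = eig((Chat + Chat')/2);  % diagonalise C^ (symmetrised against round-off)
        V(:, i:j) = X * U;               % eigenvectors of both H and C
    end
    i = j + 1;
end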

Simple way to multiply column elements by corresponding vector elements in MATLAB?

I want to multiply every column of a M × N matrix by corresponding element of a vector of size N.
I know it's possible using a for loop, but I'm looking for a simpler way of doing it.
I think this is what you want:
mat1=randi(10,[4 5]);
vec1=randi(10,[1 5]);
result=mat1.*repmat(vec1,[size(mat1,1),1]);
repmat will replicate vec1 along the rows of mat1. Then we can do element-wise multiplication (.*) to "multiply every column of a M × N matrix by the corresponding element of a vector of size N".
Edit: just to add to the computational aspect, there is an alternative to repmat that I would like you to know about. Matrix indexing can achieve the same behavior as repmat and be faster. I have adopted this technique from here.
Observe that you can write the following statement
repmat(vec1,[size(mat1,1),1]);
as
vec1([1:size(vec1,1)]'*ones(1,size(mat1,1)),:);
If you look closely, the expression boils down to vec1([1]'*[1 1 1 1],:); which is again:
vec1([1 1 1 1],:);
thereby achieving the same behavior as repmat while being faster. I ran three solutions 100000 times, namely,
Solution using repmat : 0.824518 seconds
Solution using indexing technique explained above : 0.734435 seconds
Solution using bsxfun provided by @LuisMendo : 0.683331 seconds
You can observe that bsxfun is slightly faster.
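A sketch of how such a timing comparison can be run (the exact numbers will vary by machine and MATLAB version):
mat1 = randi(10,[4 5]);
vec1 = randi(10,[1 5]);
nRuns = 100000;
tic                                               % repmat-based solution
for k = 1:nRuns
    r1 = mat1 .* repmat(vec1,[size(mat1,1),1]);
end
toc
tic                                               % indexing-based replication
for k = 1:nRuns
    r2 = mat1 .* vec1(ones(size(mat1,1),1), :);
end
toc
tic                                               % bsxfun-based solution
for k = 1:nRuns
    r3 = bsxfun(@times, mat1, vec1);
end
toc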
Although you can do it with repmat (as in @Parag's answer), it's often more efficient to use bsxfun. It also has the advantage that the code (last line) is the same for a row and for a column vector.
%// Example data
M = 4;
N = 5;
matrix = rand(M,N);
vector = rand(1,N); %// or size M,1
%// Computation
result = bsxfun(@times, matrix, vector); %// bsxfun does an "implicit" repmat

Matlab - Create N sparse matrices and sum them

I have N kx1 sparse vectors and I need to multiply each of them by their transpose, creating N square matrices, which I then have to sum over. The desired output is a k by k matrix. I have tried doing this in a loop and using arrayfun, but both solutions are too slow. Perhaps one of you can come up with something faster. Below are specific details about the best solution I've come up with.
mdev_big is k by N sparse matrix, containing each of the N vectors.
fun_sigma_i = @(i) mdev_big(:,i)*mdev_big(:,i)';
sigma_i = arrayfun(fun_sigma_i,1:N,'UniformOutput',false);
sigma = sum(reshape(full([sigma_i{:}]),k,k,N),3);
The slow part of this process is making sigma_i full, but I cannot reshape it into a 3d array otherwise. I've also tried cat instead of reshape (slower), ndSparse instead of full (way slower), and making fun_sigma_i return a full matrix rather than a sparse one (slower).
Thanks for the help!
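One observation that may help (a sketch, not a benchmark): the sum over i of the outer products mdev_big(:,i)*mdev_big(:,i)' is exactly mdev_big*mdev_big', so the arrayfun call and the reshape can be replaced by a single sparse product:
sigma = mdev_big * mdev_big';   % k-by-k, computed entirely in sparse arithmetic
sigma = full(sigma);            % convert once at the end, only if a dense result is needed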

Efficient matrix multiplications in Matlab

What's the best way to do the following (in MATLAB) if I have two matrices A and B, let's say both of size m-by-n:
C = zeros(m,m);
for t = 1:n
    C = C + A(:,t)*B(:,t)';
end
This is nothing more than
C = A*B';
where A and B are each m-by-n. I'm not sure that you're going to get more efficient than that unless the matrices have special properties.
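A quick check that the loop and the single product agree (the sizes here are arbitrary, chosen only for illustration):
m = 50; n = 200;
A = rand(m, n);
B = rand(m, n);
C_loop = zeros(m, m);
for t = 1:n
    C_loop = C_loop + A(:,t)*B(:,t)';   % original accumulation loop
end
C_prod = A * B';                        % single matrix product
max(abs(C_loop(:) - C_prod(:)))         % should be on the order of eps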
One place where you might get a benefit from using bsxfun for matrix multiplication is when the dimensions are sufficiently large (probably 100-by-100 or more) and one matrix is diagonal, e.g.:
A = rand(1e2);
B = diag(rand(1,1e2));
C = bsxfun(@times,A,diag(B).');
This occurs in many matrix transforms – see the code for sqrtm for example (edit sqrtm).

Matlab inverse of large matrix

This is the equation I am trying to solve:
h = (X'*X)^-1*X'*y
where X is a matrix and y is a vector ((X'X)^-1 is the inverse of X-transpose times X). I have coded this in Matlab as:
h = (X'*X)\X'*y
which I believe is correct. The problem is that X is around 10000x10000, and trying to calculate that inverse is crashing Matlab on even the most powerful computer I can find (16 cores, 24GB RAM). Is there any way to split this up, or a library designed for doing such large inversions?
Thank you.
That looks like a pseudo inverse. Are you perhaps looking for just
h = X \ y;
I generated a random 10,000 by 10,000 matrix X and a random 10,000 by 1 vector y.
I just broke up my computation step by step. (Code shown below)
Computed the transpose and held it in matrix K
Then I computed Matrix A by multiplying K by X
Computed vector b by multiplying K by vector y
Lastly, I used the backslash operator on A and b to solve
I didn't have a problem with the computation. It took a while, but breaking the operations up into the smallest groups possible helped to keep the computer from being overwhelmed. However, it could be the composition of the matrix that you are using (i.e. sparse, decimals, etc.).
X = randi(2000, [10000, 10000]);
y = randi(2000, 10000, 1);
K = X';
A = K*X;
b = K*y;
S = A\b;
If you have multiple machines at your disposal, and you can recast your problem into the form h = X\y as proposed by @Ben, then you could use distributed arrays. This demo shows how you can do that.
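A minimal sketch of that idea, assuming the Parallel Computing Toolbox is available and a parallel pool is open (the linked demo covers the full setup):
Xd = distributed(X);   % spread X across the workers
yd = distributed(y);
hd = Xd \ yd;          % distributed backslash solve
h  = gather(hd);       % bring the result back to the client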
Jordan,
Your equation is exactly the definition for "Moore-Penrose Matrix Inverse".
Check: http://mathworld.wolfram.com/Moore-PenroseMatrixInverse.html
Directly using h = X \ y; should help.
Or check MATLAB's pinv: h = pinv(X)*y