How to speed up C++ eigen decomposition - MATLAB

I use MATLAB to do eigenvalue decomposition, and the dimension of my data is about 10000, so the covariance matrix is 10000x10000. When I use the eig() function in MATLAB, it is very slow. Is there any way to speed up the eigenvalue decomposition?
I use the eigenvalue decomposition to do principal component analysis (PCA), so I only need the top K eigenvalues and eigenvectors; there is no need to compute all of them. I have tried to use Intel MKL to do the eigendecomposition, but when I use the mex interface I get some errors. I posted about it at https://stackoverflow.com/questions/19220271/how-to-use-intel-mkl-for-speed-my-own-matlab-mex-cpp-applications
Please give me some advice. Thanks.

Use eigs if your matrix is sparse, or if you are only interested in the first k values. For example,
eigs(A,k) returns the k largest-magnitude eigenvalues. Note that eigs will be faster only for the first few eigenvalues, and will be slower than eig once k exceeds some value (probably around 5...).
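For the PCA use case, a minimal sketch of getting the top-K eigenpairs with eigs, assuming X is an N-by-10000 data matrix and k is the number of components you want (X, C, and k are placeholder names, not from the original post):

Xc = X - mean(X, 1);                         % center the data (implicit expansion, R2016b+)
C = (Xc' * Xc) / (size(X, 1) - 1);           % 10000-by-10000 sample covariance matrix
k = 50;                                      % number of principal components (example value)
[V, D] = eigs(C, k);                         % k largest-magnitude eigenpairs
[lambda, idx] = sort(diag(D), 'descend');    % eigenvalues, largest first
V = V(:, idx);                               % matching principal directions
scores = Xc * V;                             % project the data onto the top-k components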

Related

Do we need to normalize the eigenvalues in MATLAB?

When using the eig function in MATLAB, it seems that this function has already normalized the eigenvalues. Do we need to write some lines of code to normalize the eigenvalues after using the eig function?
The function eig in MATLAB normalizes the eigenvectors (not the eigenvalues).
See the following from the documentation:
[V,D] = eig(A) returns matrix V, whose columns are the right
eigenvectors of A such that AV = VD. The eigenvectors in V are
normalized so that the 2-norm of each is 1.
Eigenvectors are only defined up to a scalar multiple, so a computational algorithm has to choose a particular scaling of each eigenvector to show you; eig chooses 2-norm = 1. Just look at the eigenvector definition to see why: AV = VD. V shows up on both sides, so you can multiply any column of V by a nonzero scalar without affecting the equation.
Eigenvalues do not vary. Look again at AV = VD: D appears on only one side, so it cannot be rescaled.
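As a quick check, a minimal sketch (the small symmetric matrix A is just an illustrative example, not from the question):

A = [2 1; 1 3];              % small symmetric example matrix
[V, D] = eig(A);
sqrt(sum(V.^2, 1))           % the 2-norm of each eigenvector column is 1
diag(D)                      % the eigenvalues, returned as computed (not normalized)
norm(A*V - V*D)              % residual near machine precision: A*V = V*D holds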

MATLAB: Eig algorithm and alternatives

I am simulating a physical system, where I need to calculate the eigenvalues and vectors of a very large (~10000x10000) matrix.
So far I have used the built-in eig function in MATLAB, but it is very slow for large matrices. Are there other algorithms in MATLAB that would do a better job, or can I somehow improve the performance of eig? Specifically, it turns out that I only need the first ~100 eigenvectors of the matrix, starting from the smallest numerical eigenvalue. Is there a way to get the algorithm to calculate only the first N eigenvectors and eigenvalues to save computation time? Of course this would only work if the eigenvectors come out sorted, but they seem to do so because of the symmetry of the matrix I am using.
Your matrix has mostly zeros, so you should make it a sparse matrix. You'll then be able to use EIGS to calculate a smaller number of eigenvalues and eigenvectors.
http://www.mathworks.com/help/matlab/ref/eigs.html
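A minimal sketch along those lines, assuming M is the dense ~10000x10000 symmetric matrix and you want the 100 smallest eigenvalues (M, S, and k are placeholder names):

S = sparse(M);                            % store only the nonzeros
k = 100;                                  % number of eigenpairs wanted
[V, D] = eigs(S, k, 'smallestreal');      % smallest eigenvalues first ('sa' on older releases)
[lambda, idx] = sort(diag(D), 'ascend');  % sorted smallest to largest
V = V(:, idx);                            % eigenvectors ordered to match lambda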

Finding eigenvalues and eigenvectors of a symmetric n*n matrix in MATLAB

I need to find the eigenvalue decomposition of a symmetric matrix in MATLAB, but I do not want to use the built-in function eig. Can anyone suggest an efficient algorithm? I have already implemented the power iteration algorithm, but it is not suitable for my project because it only gives the dominant eigenvalue and eigenvector. Looking forward to suggestions.
PS: I am using the eigenvalue decomposition several times in my algorithm...
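For reference, a minimal sketch of plain power iteration (the function name powerIteration and the tol/maxIter parameters are illustrative, not from the question), which shows why the method only recovers the dominant eigenpair:

function [lambda, v] = powerIteration(A, tol, maxIter)
    % Plain power iteration: repeated multiplication by A amplifies the
    % component of v along the largest-magnitude eigenvalue's eigenvector.
    v = randn(size(A, 1), 1);
    v = v / norm(v);
    lambda = v' * A * v;
    for iter = 1:maxIter
        w = A * v;
        v = w / norm(w);
        lambdaNew = v' * A * v;            % Rayleigh quotient estimate
        if abs(lambdaNew - lambda) < tol   % stop when the estimate settles
            lambda = lambdaNew;
            return
        end
        lambda = lambdaNew;
    end
end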

Octave/MATLAB: PCA on sparse matrix: how to get only the most important eigenvectors?

I am using Octave and have a huge sparse matrix whose eigenvalues I need to compute. However, if I just use a function that returns all eigenvalues and eigenvectors, the result will take up far too much space, since the input matrix is sparse for a reason.
How can I get only a limited number of the most important eigenvectors?
Use eigs instead of eig:
D = eigs(A,k);
This returns the k largest-magnitude eigenvalues of the matrix A. According to this page, Octave does support eigs for sparse matrices. eigs uses different techniques than eig, is slower overall, and shouldn't generally be used except in cases such as the one you describe.
Be sure to check out the options for the sigma argument in case you want the largest eigenvalues with respect to their real parts only, for example.
The Matlab documentation for eigs is here.
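As an illustration, a minimal sketch of a few common sigma choices for a real symmetric sparse matrix (A and k are placeholders; the string options shown are the classic ones that Octave also accepts):

k = 10;
[V1, D1] = eigs(A, k, 'lm');    % k largest-magnitude eigenvalues (the default)
[V2, D2] = eigs(A, k, 'la');    % k largest algebraic (most positive) eigenvalues
[V3, D3] = eigs(A, k, 'sa');    % k smallest algebraic (most negative) eigenvalues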

Ordering of eigenvectors when calculating eigenvectors using LAPACK's ssteqr

I am using LAPACK's ssteqr function to calculate eigenvalues/eigenvectors. The documentation for ssteqr says that the eigenvalues are sorted "in ascending order". Is it reasonable to assume that the list of eigenvectors is also sorted in ascending order?
Yes, it is reasonable to assume that the eigenvectors are ordered so that the i-th eigenvector corresponds to the i-th eigenvalue.
Nevertheless, if I were you, I would check, for each eigenvalue, the result of multiplying the matrix by the corresponding eigenvector. That way you can be sure you are interpreting the output correctly, and you see explicitly the accuracy of the calculations.
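For example, a minimal sketch of that check, written in MATLAB notation for consistency with the rest of this thread (T stands for the tridiagonal matrix, and lambda and Z for the eigenvalues and eigenvectors returned by ssteqr; all names are placeholders):

for i = 1:length(lambda)
    residual = norm(T * Z(:, i) - lambda(i) * Z(:, i));   % should be near machine precision
    fprintf('eigenpair %d: residual = %g\n', i, residual);
end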
This is an old question, but I struggled with it recently, so I am adding this answer for current and future readers.
The basic answer is that, yes, the eigenvectors are sorted such that the i-th eigenvector corresponds to the i-th eigenvalue. However, note that the eigenvectors thus obtained may not be the eigenvectors you actually want, for the following reason.
Since the ?steqr functions work only on tridiagonal matrices, one typically uses LAPACK's ?sytrd functions to first transform one's original symmetric matrix, call it M, to a tridiagonal form, call it T, such that M = Q T Q^T, where Q is an orthogonal matrix (and Q^T denotes its transpose). One then applies the ?steqr function to this tridiagonal matrix T to find its eigenvalues and eigenvectors. Now the eigenvalues thus obtained (of T) are exactly the same as the eigenvalues of M, so if one only wants the eigenvalues one can stop here. But if one is interested in the eigenvectors, like the OP, then one needs to bear in mind that the eigenvectors of T and M are different. To find the eigenvectors of the original matrix M, one needs to left-multiply the obtained eigenvectors of T by Q. This is very easily done by using the LAPACK functions orgtr or ormtr. See here for a clear explanation: https://software.intel.com/en-us/mkl-developer-reference-fortran-sytrd.
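To make the back-transformation concrete, a minimal MATLAB-notation analogue of that workflow (here hess plays the role of ?sytrd, eig the role of ?steqr, and the final multiplication by Q the role of ?ormtr; M is a placeholder symmetric matrix):

M = randn(500); M = (M + M') / 2;    % example symmetric matrix
[Q, T] = hess(M);                    % M = Q*T*Q', with T tridiagonal because M is symmetric
[VT, D] = eig(T);                    % eigenpairs of the tridiagonal matrix T
VM = Q * VT;                         % back-transform: eigenvectors of the original matrix M
norm(M * VM - VM * D)                % residual near machine precision confirms the correspondence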