Matlab gives me a negative eigenvalue for a positive matrix

I have a 6000x6000 symmetric matrix whose entries are all positive. I use Matlab's eig function to compute its eigenvalues and eigenvectors, but some of the resulting eigenvalues are negative. Where do you think the problem is?
Thanks.
Sevil.

There is no problem. Just because a matrix is symmetric and has all positive entries doesn't guarantee positive eigenvalues. For example, try the symmetric matrix with all positive entries [3 4; 4 3]: eig([3 4; 4 3]) produces the eigenvalues -1 and 7, so one of the two eigenvalues is negative.
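You can check this directly:
A = [3 4; 4 3];  % symmetric, all entries positive
eig(A)           % returns -1 and 7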
Take note that a symmetric matrix with all positive entries is different from a positive definite matrix. Positive definite matrices have all positive eigenvalues, which I believe is where you are confused. All in all, a symmetric matrix with all positive entries is not necessarily positive definite, as you can clearly see in the example above.
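If you want to test whether your own matrix is positive definite, one common check is a sketch like this (chol succeeds only for positive definite matrices):
[~, p] = chol(A);  % p == 0 exactly when A is positive definite
if p == 0
    disp('A is positive definite')
else
    disp('A is not positive definite')
end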


How expensive is it to compute the largest eigenvalue and corresponding eigenvector of n-by-n Hermitian matrix?

There was a similar question: How expensive is it to compute the eigenvalues of a matrix? The answer there was O(n^3) for square symmetric matrices.
What about only the largest eigenvalue and the corresponding eigenvector of an n-by-n matrix? Here I assume the matrix is square and Hermitian. I think it must still be faster than O(n^3), because we are interested only in the largest eigenvalue and its eigenvector.
This is the Matlab code I currently use, but I don't think it is optimal, because I still compute and sort all the eigenvalues instead of just the largest one:
A = 2.*rand(3,3) - ones(3,3) + 1i*(2.*rand(3,3) - ones(3,3)); % random complex matrix
[v, e] = eig(A + A');                % A + A' is Hermitian; computes ALL eigenpairs
[d, ind] = sort(diag(e), 'descend'); % sort every eigenvalue, largest first
e = d(1)                             % largest eigenvalue
v = v(:, ind);                       % reorder the eigenvectors to match
v(1:3, 1)                            % corresponding eigenvector
Try this, which asks for only the single eigenpair you need instead of the full decomposition:
A = 2.*rand(3,3) - ones(3,3) + 1i*(2.*rand(3,3) - ones(3,3));
[v, e] = eigs(A + A', 1, 'largestreal'); % iterative solver, one eigenpair only
Note that 'largestabs' would select by magnitude; 'largestreal' matches your sort(..., 'descend') on the real eigenvalues of a Hermitian matrix. Please let me know how it goes!
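As a rough sanity check of both correctness and speed (the size n = 2000 is arbitrary):
n = 2000;
A = 2.*rand(n) - 1 + 1i*(2.*rand(n) - 1);
H = A + A';                                       % Hermitian test matrix
tic; d = sort(eig(H), 'descend'); t_full = toc;   % full decomposition
tic; e = eigs(H, 1, 'largestreal'); t_one = toc;  % single eigenpair
[d(1), e]        % same largest eigenvalue, up to rounding
[t_full, t_one]  % eigs is usually faster for large n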

how does Matlab normalize generalized eigenvectors?

I know that the eigenvectors produced by eig(A) have 2-norm 1. But what about the vectors produced by the generalized eigenvalue problem eig(A,B)? A natural conjecture is that such a vector v should satisfy v'*B*v = 1. When B is the identity matrix, v'*B*v is exactly the square of the 2-norm. I ran the following test for various matrices A and B:
[p, d] = eig(A, B); % generalized eigenvectors in the columns of p
v = p(:,1);
v'*B*v              % is this always 1?
I always choose B to be diagonal. I noticed that v'*B*v is not always 1; however, it is indeed 1 when A is symmetric. Does anyone know the rule Matlab uses to normalize the generalized eigenvectors? I can't find it in the documentation.
According to the documentation (emphasis mine):
The form and normalization of V depends on the combination of input arguments:
[...]
[V,D] = eig(A,B) and [V,D] = eig(A,B,algorithm) returns V as a matrix whose columns are the generalized right eigenvectors that satisfy A*V = B*V*D. The 2-norm of each eigenvector is not necessarily 1. In this case, D contains the generalized eigenvalues of the pair, (A,B), along the main diagonal.
When eig uses the 'chol' algorithm with symmetric (Hermitian) A and symmetric (Hermitian) positive definite B, it normalizes the eigenvectors in V so that the B-norm of each is 1.
This means that, unless eig uses the 'chol' algorithm (symmetric A and symmetric positive definite B), V is not normalized in any guaranteed way.
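You can see the difference with a small made-up example, where A is symmetric and B is symmetric positive definite:
n = 5;
A = randn(n); A = A + A';           % symmetric A
B = randn(n); B = B'*B + n*eye(n);  % symmetric positive definite B
[V, D] = eig(A, B, 'chol');
V(:,1)' * B * V(:,1)                % 1: the B-norm of each eigenvector is 1
[V2, D2] = eig(A, B, 'qz');
V2(:,1)' * B * V2(:,1)              % generally not 1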
If I understand you correctly, you are simply looking for a way to normalize a vector: given a vector, divide it by its norm to obtain a vector whose norm is 1.
If you are looking for the mathematical background, the Wikipedia article Eigendecomposition of a matrix contains a good introduction.

Why am I getting the wrong matrix norm in Matlab?

I have a small, well-conditioned Hermitian matrix L with eigenvalues in [0,2]. I'm getting weird results while trying to compute the norm of the inverse of L:
>> norm(inv(L))
ans =
2.0788
>> min(eig(L))
ans =
0.5000
This is strange, because the 2-norm of the inverse ought to equal the inverse of the smallest eigenvalue of the matrix.
I know about errors introduced by machine arithmetic, but in this small, Hermitian, well-conditioned example I expected them to be negligible.
Here is the matrix https://www.dropbox.com/s/nh1wegrnn53wb6w/matrix.mat
I'm using Matlab 8.2.0.701 (R2013b) on Linux Mint 16 (Petra).
It's not a numerical issue; as you've pointed out, the matrix is well-conditioned.
second norm of inverse ought to be equal inverse of minimal eigenvalue of matrix
This is only true if the matrix is Hermitian with positive eigenvalues (i.e. positive definite). From Wikipedia: the spectral norm of a matrix A is the largest singular value of A, i.e. the square root of the largest eigenvalue of the positive-semidefinite matrix A'*A.
So here you would calculate the norm of the inverse as:
[v, d] = eig(L'*L);      % eigenvalues of L'*L are the squared singular values of L
1.0/sqrt(min(diag(d)))   % = 2.0788539
norm(inv(L))             % = 2.0788539
As we expect.
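Equivalently, since the singular values of L are the square roots of the eigenvalues of L'*L, the same check in one line:
1/min(svd(L))            % = 2.0788539, matching norm(inv(L))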

How to find eigenvalues for a non-square matrix

I want to make graphs similar to the one given in the picture.
I am using the Fisher iris data and employ PCA to reduce the dimensionality. This is the code:
load fisheriris
[pc,score,latent,tsquare,explained,mu] = princomp(meas);
I guess the eigenvalues are given in latent, but that shows me only four values, and they describe the reduced data.
My question is: how can I show all the eigenvalues of the original matrix, which is not square (150x4)? Please help! Thank you very much in advance!
The short (and useless) answer is that [V, D] = eig(...) gives you the eigenvectors and the eigenvalues. However, I'm afraid I have bad news for you: eigenvalues and eigenvectors only exist for square matrices, so there are none for your 150x4 matrix.
All is not lost, however. PCA actually uses the eigenvalues of the covariance matrix, not of the original matrix, and the covariance matrix is always square. That is, if you have a matrix A, the covariance matrix is (up to centering and scaling) A*A'.
The covariance matrix is not only square but symmetric. This is good, because the singular values of a matrix are related to the eigenvalues of its covariance matrix. Check the following Matlab code:
A = [10 20 35; 5 7 9]; % A rectangular matrix
X = A*A';              % square, symmetric matrix (the covariance matrix, up to centering/scaling)
[V, D] = eig(X);       % eigenvectors and eigenvalues of X
[U, S, W] = svd(A);    % singular value decomposition of the original matrix
V is a matrix containing the eigenvectors, and D contains the eigenvalues. Now, the relationship:
S*S' ~ D
U ~ V
I use '~' to indicate that while they are "equal", the sign and order may vary. There is no "correct" order or sign for the eigenvectors, so either is valid. Unfortunately, though, for your 150x4 matrix you will still only get four eigenvalues (unless your array is meant to be the other way around).
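You can verify the relationship numerically, ordering and signs aside:
A = [10 20 35; 5 7 9];
[V, D] = eig(A*A');
[U, S, W] = svd(A);
sort(diag(D), 'descend') % eigenvalues of A*A', largest first
diag(S).^2               % squared singular values of A: the same values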

Issues with calculating the determinant of a matrix

I am trying to calculate the determinant of the inverse of a matrix. The inverse of the matrix exists. However, when I try to calculate the determinant of the inverse, Matlab gives me an Inf value. What is the reason for this?
Short answer: given A = inv(B), then det(A)==Inf may have two explanations:
an overflow during the numerical computation of the determinant,
one or more infinite elements in A.
In the first case your matrix is badly scaled, so that det(B) may underflow and det(A) may overflow. Remember that det(a*B) == a^N * det(B), where a is a scalar and B is an N-by-N matrix.
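A minimal illustration of the first case (the size 400 is arbitrary):
B = 1e-3 * eye(400); % perfectly invertible, but badly scaled
det(B)               % 1e-1200 underflows to 0
A = inv(B);          % exactly 1e3 * eye(400)
det(A)               % 1e+1200 overflows to Inf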
In the second case (i.e. nnz(A==inf)>0) matrix B may be "singular to working precision".
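And a sketch of the second case:
B = [1 0; 0 0]; % exactly singular
A = inv(B);     % warning: matrix is singular to working precision
A(2,2)          % Inf
det(A)          % Inf, propagated from the infinite entry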
PS:
A matrix is nearly singular if it has a large condition number. (A small determinant has nothing to do with singularity, since the magnitude of the determinant itself is affected by scaling.)
A matrix is singular to working precision if it has a zero pivot in the Gaussian elimination: when computing the inverse, Matlab has to calculate 1/0, which returns Inf.
In fact, Matlab does not trap overflow and division-by-zero exceptions, so, in accordance with IEEE 754, an Inf value is produced and propagated.