Why am I getting the wrong matrix norm in MATLAB?

I have a small, well-conditioned Hermitian matrix L with eigenvalues in [0,2]. I'm getting weird results when trying to compute the norm of the inverse of L:
>> norm(inv(L))
ans =
2.0788
>> min(eig(L))
ans =
0.5000
This is strange, because the 2-norm of the inverse ought to equal the inverse of the smallest eigenvalue of the matrix.
I know about errors introduced by machine arithmetic, but for this small, Hermitian, well-conditioned example I expected them to be negligible.
Here is the matrix https://www.dropbox.com/s/nh1wegrnn53wb6w/matrix.mat
I'm using MATLAB 8.2.0.701 (R2013b) on Linux Mint 16 (Petra).

It's not a numerical issue; as you've pointed out, the matrix is well-conditioned.
second norm of inverse ought to be equal inverse of minimal eigenvalue of matrix
This is only true if the matrix is Hermitian with positive eigenvalues (i.e. positive definite). From Wikipedia: the spectral norm of a matrix A is the largest singular value of A, i.e. the square root of the largest eigenvalue of the positive-semidefinite matrix A*A, where A* denotes the conjugate transpose.
So here you would calculate the norm of the inverse as:
[v,d] = eig(L'*L);
1/sqrt(min(diag(d)))   % ans = 2.0788539
norm(inv(L))           % ans = 2.0788539
As we expect.
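To see whether the eigenvalue identity should have applied at all, here is a quick sanity check (a sketch, assuming L is loaded from the linked matrix.mat):
isequal(L, L')         % false if L is not exactly Hermitian
norm(L - L', 'fro')    % magnitude of the asymmetry
min(eig((L + L')/2))   % smallest eigenvalue of the Hermitian part
If the first check fails, eig(L) tells you nothing about the 2-norm, and the singular-value computation above is the one to trust.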

Related

How to simulate random points following a multivariate t distribution?

If X is a multivariate t random variable with mean = [1,2,3,4,5] and a covariance matrix C, how do I simulate points in MATLAB? I tried mvtrnd, but the sample mean is clearly not close to [1,2,3,4,5]. Also, when I test three simple examples, say X1 with mean 0 and C1 = [1,0.3;0.3,1], X2 with mean 0 and C2 = [0.5,0.15;0.15,0.5], and X3 with mean 0 and C3 = [0.4,0.12;0.12,0.4], and use mvtrnd(C1,3,1000000), mvtrnd(C2,3,1000000), and mvtrnd(C3,3,1000000) respectively, I find that the sample points in each case give nearly the correlation matrix [1,0.3;0.3,1], but the sample covariances all come out near [3,1;1,3]. Why, and how do I fix it?
The Mean
The t distribution has a zero mean unless you shift it. In the documentation for mvtrnd:
the distribution of t is that of a vector having a multivariate normal
distribution with mean 0, variance 1, and covariance matrix C, divided
by an independent chi-square random value having df degrees of
freedom.
Indeed, mean(X) will approach [0 0] for X = mvtrnd(C,df,n) as n gets larger.
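So to get the mean you asked for, shift the samples yourself. A minimal sketch (the mean vector comes from the question; the correlation matrix and degrees of freedom here are placeholders):
mu = [1 2 3 4 5];               % desired mean, from the question
df = 3;                         % degrees of freedom (assumed)
R = eye(5);                     % correlation matrix (placeholder)
X = mvtrnd(R, df, 1e6) + mu;    % implicit expansion (R2016b+)
mean(X)                         % now approaches mu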
The Correlation
Matching the correlation is straightforward, as it captures only part of the relationship between the two dimensions of X.
% MATLAB 2018b
df = 5; % degrees of freedom
C = [0.44 0.25; 0.25 0.44]; % covariance matrix
numSamples = 1000;
R = corrcov(C); % Convert covariance to correlation matrix
X = mvtrnd(R,df,numSamples); % X ~ multivariate t distribution
You can compare how well you matched the correlation matrix R using corrcoef or corr().
corrcoef(X) % Alternatively, use corr(X)
The Covariance
Matching the covariance is another matter. Calling cov(X) will reveal the mismatch. Recall that the diagonal of the covariance holds the variances of the two components of X. Because mvtrnd rescales C to a correlation matrix and the t distribution ties the variance to the degrees of freedom df, mvtrnd alone gives you no way to hit an arbitrary target variance (& covariance).
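In fact, for df > 2 the covariance of a multivariate t with correlation matrix R is df/(df-2)*R, which is exactly why the question's samples with df = 3 showed variances near 3. A quick check using the question's C1:
df = 3;
R = [1 0.3; 0.3 1];
X = mvtrnd(R, df, 1e6);
cov(X)   % approaches df/(df-2)*R = [3 0.9; 0.9 3]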
A useful function here is corrcov, which converts a covariance matrix into a correlation matrix.
Note that even this conversion is unnecessary, as the documentation for mvtrnd indicates:
C must be a square, symmetric and positive definite matrix. If its
diagonal elements are not all 1 (that is, if C is a covariance matrix
rather than a correlation matrix), mvtrnd rescales C to transform it
to a correlation matrix before generating the random numbers.

Matlab gives me negative eigenvalue for positive matrix

I have a 6000x6000 symmetric matrix and all entries are positive. I use the eig function of MATLAB to compute its eigenvalues & eigenvectors, but some of the resulting eigenvalues are negative. Where do you think the problem is?
Thanks.
Sevil.
There is no problem. Just because a matrix is symmetric and has all positive entries doesn't guarantee positive eigenvalues. For example, try the following symmetric matrix with all positive entries: [3 4; 4 3]. Performing eig([3 4; 4 3]) produces the eigenvalues -1 and 7, so one of the two eigenvalues is negative.
Take note that a symmetric matrix with all positive entries is different from a positive definite matrix. Positive definite matrices have all positive eigenvalues, which I believe is where you are confused. All in all, symmetric matrices with all positive entries are not necessarily positive definite, as you can clearly see in the example above.
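You can verify this, and test any matrix of your own, directly in MATLAB:
A = [3 4; 4 3];    % symmetric, all entries positive
eig(A)             % ans = [-1; 7], so A is not positive definite
[~, p] = chol(A);  % standard positive-definiteness test
p > 0              % true here: chol failed, confirming A is not positive definite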

How to find eigenvalues of a non-square matrix

I want to make graphs similar to the one shown in the picture:
I am using the Fisher iris data and employ PCA to reduce dimensionality. This is the code:
load fisheriris
[pc,score,latent,tsquare,explained,mu] = princomp(meas);
I guess the eigenvalues are given in latent, but that shows me only four values, and they describe the reduced data.
My question is: how do I show all eigenvalues of the original matrix, which is not square (150x4)? Please help! Thank you very much in advance!
The short (and useless) answer is that [V, D] = eig(...) gives you the eigenvectors and the eigenvalues. However, I'm afraid I have bad news for you: eigenvalues and eigenvectors only exist for square matrices, so there are no eigenvectors for your 150x4 matrix.
All is not lost. PCA actually uses the eigenvalues of the covariance matrix, not of the original matrix, and the covariance matrix is always square. That is, if you have a matrix A, the covariance matrix is A*A'.
The covariance matrix is not only square, it is symmetric. This is good, because the singular values of a matrix are related to the eigenvalues of its covariance matrix. Check the following MATLAB code:
A = [10 20 35; 5 7 9]; % A rectangular matrix
X = A*A'; % The covariance matrix of A
[V, D] = eig(X); % Get the eigenvectors and eigenvalues of the covariance matrix
[U,S,W] = svd(A); % Get the singular values of the original matrix
V is a matrix containing the eigenvectors, and D contains the eigenvalues. Now, the relationship:
S*S' ~ D
U ~ V
I use '~' to indicate that while they are "equal", the sign and order may vary. There is no "correct" order or sign for the eigenvectors, so either is valid. Unfortunately, though, you will only have four features (unless your array is meant to be the other way around).
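A quick way to confirm the relationship for the toy example above:
sort(diag(S*S'))   % squared singular values of A
sort(diag(D))      % eigenvalues of X = A*A' -- the same values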

rank of a large matrix produces error

I am using MATLAB to compute the rank of a matrix as:
r=rank(A, tol)
where A is the matrix and tol is the tolerance. When the matrix is small there seems to be no issue, but when the matrix is large, MATLAB often returns an error saying the input to SVD must not contain NaN or Inf.
As far as my understanding goes, the rank-computing algorithm should return a number for any matrix, but when I see such an error I wonder if there are special matrices for which the rank cannot be computed. Is there a better way to compute the rank in MATLAB? I am looking for a reliable algorithm to compute the rank of a matrix. Why is the rank computation so sensitive to some matrices?
EDIT: please check this dependence on tol:
rank(magic(100), 10e-10)
ans =
3
rank(magic(100), 10e-30)
ans =
100
I am basically computing the controllability matrix of a linear system for which I am checking the rank condition. The matrix sizes are on the order of 100x100 to 200x200. So the input to rank is as follows:
A = ctrb(P, Q)   % P, Q are matrices of the LTI system x[n+1] = P*x[n] + Q*u[n]
r=rank(A, tol)
So the question then is: can the controllability function ctrb produce matrices with Inf or NaN? Based on the definition of the controllability matrix:
A = [Q, P*Q, P^2*Q, ..., P^(n-1)*Q]
If P and Q are any matrices with bounded values, can A have Inf or NaN? I expect that the above computation of A using ctrb cannot produce a matrix output with NaN or Inf for any bounded P, Q.
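It can, actually. Even for finite P and Q, the powers P^k overflow double precision once the spectral radius of P is large enough and n is big: past realmax (about 1.8e308) entries become Inf, and further arithmetic on them can produce NaN. A minimal sketch (the specific P and Q here are made up for illustration):
n = 100;
P = 1e4 * eye(n);    % spectral radius 1e4 (assumed example)
Q = ones(n, 1);
Co = ctrb(P, Q);     % columns scale like (1e4)^k; past k = 77 this exceeds realmax
any(isinf(Co(:)))    % true: the later columns have overflowed to Inf
rank(Co)             % errors: input to SVD must not contain NaN or Inf
If this is the cause, rescaling or balancing the system before forming the controllability matrix should help.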

How do I draw samples from multivariate gaussian distribution parameterized by precision in matlab

I am wondering how to draw samples in MATLAB when I have the precision matrix and the mean as the input arguments.
I know mvnrnd is the typical way to do so, but it requires the covariance matrix (i.e. the inverse of the precision matrix) as the argument.
I only have the precision matrix, and due to computational issues I can't invert it, since that would take too long (my dimension is about 2000x2000).
Good question. Note that you can generate samples from a multivariate normal distribution using samples from the standard normal distribution by way of the procedure described in the relevant Wikipedia article.
Basically, this boils down to evaluating A*z + mu, where z is a vector of independent random variables sampled from the standard normal distribution, mu is the vector of means, and A*A' = Sigma is the covariance matrix. Since you have the inverse of the latter quantity, i.e. inv(Sigma), you can do a Cholesky decomposition (see chol) to obtain a triangular factor that plays the role of inv(A). You then need to evaluate A*z; if you only know inv(A), this can still be done without forming a matrix inverse by instead solving a linear system (e.g. via the backslash operator).
The Cholesky decomposition might still be problematic for you, but I hope this helps.
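A compact sketch of that procedure, assuming Q is your symmetric positive definite precision matrix and mu is your mean (column) vector:
R = chol(Q);                % upper triangular factor with R'*R = Q
z = randn(length(mu), 1);   % standard normal sample
x = mu + R \ z;             % cov(R\z) = inv(R)*inv(R)' = inv(Q)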
If you want to sample from N(μ, Q^-1) and only Q is available, you can take the Cholesky factorization of Q, L, such that L*L^T = Q. Next take the inverse of L^T, L^-T, and sample Z from a standard normal distribution N(0, I).
Considering that L^-T is an upper triangular d-by-d matrix and Z is a d-dimensional column vector,
μ + L^-T * Z will be distributed as N(μ, Q^-1).
If you wish to avoid taking the inverse of L, you can instead solve the triangular system of equations L^T * v = Z by back substitution. μ + v will then be distributed as N(μ, Q^-1).
Some illustrative matlab code:
% make a 2x2 covariance matrix and a mean vector
covm = [3 0.4*(sqrt(3*7)); 0.4*(sqrt(3*7)) 7];
mu = [100; 2];
% Get the precision matrix
Q = inv(covm);
% take the Cholesky factor of Q (chol in MATLAB returns the upper triangular factor, i.e. L^T in the notation above)
L = chol(Q);
%draw 2000 samples from a standard bivariate normal distribution
Z = normrnd(0,1, [2, 2000]);
% solve the triangular system L^T * v = Z and add the mean
X = repmat(mu, 1, 2000)+L\Z;
%check the result
mean(X')
var(X')
corrcoef(X')
% compare to the sampling from the covariance matrix
Y=mvnrnd(mu,covm, 2000)';
mean(Y')
var(Y')
corrcoef(Y')
scatter(X(1,:), X(2,:),'b')
hold on
scatter(Y(1,:), Y(2,:), 'r')
For more efficiency, you can look for routines that exploit the triangular structure of the system explicitly.
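In stock MATLAB, backslash already detects and exploits triangularity, but linsolve lets you assert the structure up front and skip the analysis step. A sketch reusing mu, L, and Z from the code above:
opts.UT = true;   % assert that the coefficient matrix is upper triangular
X = repmat(mu, 1, 2000) + linsolve(L, Z, opts);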