I am using MATLAB to compute the rank of a matrix as:
r = rank(A, tol)
where A is the matrix and tol is the tolerance. When the matrix is small there seems to be no issue, but when the matrix is large MATLAB often returns an error saying the input to SVD must not contain NaN or Inf.
As far as my understanding goes, a rank-computing algorithm should return a number for any matrix, but when I see such an error I wonder if there are special matrices for which the rank cannot be computed. Is there a better way to compute the rank in MATLAB? I am looking for a reliable algorithm to compute the rank of a matrix. Why is the rank computation so sensitive to some matrices?
EDIT: please check this dependence on tol:
rank(magic(100), 10e-10)
ans =
3
rank(magic(100), 10e-30)
ans =
100
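For context: per the MATLAB documentation, rank(A, tol) simply counts the singular values of A that are larger than tol, so the result depends entirely on where tol falls in the singular-value spectrum. A minimal sketch of the same computation done by hand:
s = svd(magic(100));   % singular values, in decreasing order
r = sum(s > 10e-10)    % returns 3, matching rank(magic(100), 10e-10)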
I am basically computing the controllability matrix of a linear system, for which I am checking the rank condition. The matrix sizes are on the order of 100x100 to 200x200, so the input to rank is as follows:
A = ctrb(P, Q) % P, Q are the matrices of the LTI system x[n+1] = P*x[n] + Q*u[n]
r = rank(A, tol)
So the question is: can the controllability function ctrb produce matrices with Inf or NaN? Based on the definition of the controllability matrix:
A = [Q, P*Q, P^2*Q, ..., P^(n-1)*Q]
If P and Q are matrices with bounded values, can A have Inf or NaN? I am expecting that computing A via ctrb for any bounded P, Q cannot produce a matrix containing NaN or Inf.
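One plausible source of the NaN/Inf values, stated as an assumption rather than a certainty: in double precision the powers P^k in the controllability matrix can overflow to Inf for large k whenever P has eigenvalues well above 1 in magnitude (realmax is about 1.8e308), even though P itself is bounded, and subsequent operations on Inf can then produce NaN. A quick check, assuming P and Q are already in the workspace:
A = ctrb(P, Q);
if ~all(isfinite(A(:)))
    warning('ctrb output contains NaN or Inf: the powers P^k likely overflowed');
end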
I have a matrix W which is a block-diagonal matrix with dimensions 2*4, where each of its two diagonal blocks is a 1*2 vector. I want to find the values of its entries that minimize the following function:
F = B*H - A*W
where W is the block-diagonal matrix to be optimized, B is a 2*2 matrix, H is a given 2*4 matrix, and A is a 2*2 matrix. A and B are calculated using the functions in the attached code.
I tried the attached code, but I think it is stuck in an infinite loop and I don't know what I should do.
%% My code is:
while ((B*H)-(A*W) ~= zeros(2,4))
    % generate the first 1*2 block-diagonal vector;
    % the entries of each block vector need not be the same
    w1 = randn(1,2);
    % generate the second 1*2 block-diagonal vector
    w2 = randn(1,2);
    % build the 2*4 block-diagonal matrix that I want to optimize
    W = blkdiag(w1,w2);
    % R is a 2*2 matrix that will be used to calculate matrix A via the
    % LLL lattice reduction algorithm. P (4*4), H (2*4) and D (2*2) are
    % given, so R is clearly a function of W.
    R = sqrtm(W*inv(inv(P)+(H'*inv(eye(2)+D)*H))*W');
    % the LLL lattice reduction algorithm gives the 2*2 matrix A as a function of R
    A = LLL(R,3/4);
    % B is a 2*2 matrix which is a function of A and W
    B = A'*W*P*H'*inv(eye(2)+D+H*P*H');
end
Numerical operations with floating-point numbers are only approximate on a computer (any number is only ever represented with a finite number of bits, which means you cannot represent pi exactly, for example). For more info, see this link.
Consequently, it is extremely unlikely that the loop you wrote will ever terminate, because the difference between B*H and A*W will never be exactly zero. Instead, you need to use a tolerance to decide when you are satisfied with the similarity achieved.
Additionally, as suggested in the comments, the "distance" between two matrices is typically measured with a matrix norm (e.g. the Frobenius norm). By default, the norm function in MATLAB returns the 2-norm of its input matrix.
In your case, this would give something like:
tol = 1e-6;
err = Inf;  % ensure A, B and W are computed at least once before the test
while err > tol
    % generate the first 1*2 block-diagonal vector;
    % the entries of each block vector need not be the same
    w1 = randn(1,2);
    % generate the second 1*2 block-diagonal vector
    w2 = randn(1,2);
    % build the 2*4 block-diagonal matrix to optimize
    W = blkdiag(w1,w2);
    % R is a 2*2 matrix used to calculate matrix A via the LLL lattice
    % reduction algorithm. P (4*4), H (2*4) and D (2*2) are given, so R
    % is clearly a function of W.
    R = sqrtm(W/(inv(P)+(H'/(eye(2)+D)*H))*W');
    % the LLL lattice reduction algorithm gives the 2*2 matrix A as a function of R
    A = LLL(R,3/4);
    % B is a 2*2 matrix which is a function of A and W
    B = A'*W*P*H'/(eye(2)+D+H*P*H');
    % measure how far B*H is from A*W
    err = norm(B*H - A*W);
end
Note that:
With regard to the actual algorithm, I am a little concerned that your loop never refines W: it simply draws a new random W each iteration and recomputes A and B from it. This suggests that your description of the problem might be incorrect or incomplete, but that is beyond the scope of this forum anyway (ask on Maths.SE if you want to know more).
Using inv() directly is discouraged in many cases, because the algorithm for computing an explicit inverse is less reliable than the algorithms for solving systems of the form A*X = B. MATLAB's Code Analyzer warns you to use / and \ where possible; I would advise following this recommendation unless you know what you are doing.
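As a small illustration of that last point (a sketch with an arbitrary system, not data from the question):
A = rand(5);  b = rand(5,1);
x1 = inv(A)*b;   % computes an explicit inverse first: slower, less accurate
x2 = A\b;        % solves A*x = b directly: the recommended form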
If X is a multivariate t random variable with mean = [1,2,3,4,5] and a covariance matrix C, how do I simulate points in MATLAB? I tried mvtrnd, but the sample mean is clearly not close to [1,2,3,4,5].
Also, I tested three simple examples: X1 with mean 0 and C1 = [1,0.3;0.3,1], X2 with mean 0 and C2 = [0.5,0.15;0.15,0.5], and X3 with mean 0 and C3 = [0.4,0.12;0.12,0.4], using mvtrnd(C1,3,1000000), mvtrnd(C2,3,1000000) and mvtrnd(C3,3,1000000) respectively. In each case the sample points give nearly the correlation matrix [1,0.3;0.3,1], but the sample covariances all come out near [3,1;1,3]. Why, and how do I fix it?
The Mean
The t distribution has a zero mean unless you shift it. In the documentation for mvtrnd:
the distribution of t is that of a vector having a multivariate normal
distribution with mean 0, variance 1, and covariance matrix C, divided
by an independent chi-square random value having df degrees of
freedom.
Indeed, mean(X) will approach [0 0] for X = mvtrnd(C,df,n); as n gets larger.
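So, to get the mean from the question, generate zero-mean samples and shift them afterwards. A minimal sketch, where R and df are placeholders for your own 5-dimensional correlation matrix and degrees of freedom:
mu = [1 2 3 4 5];
df = 5;                           % assumed degrees of freedom
R  = eye(5);                      % placeholder correlation matrix
X  = mvtrnd(R, df, 100000) + mu;  % needs R2016b+; use bsxfun(@plus, ..., mu) on older releases
mean(X)                           % approaches [1 2 3 4 5]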
The Correlation
Matching the correlation is straightforward, since the correlation only captures the scale-free part of the relationship between the two components of X.
% MATLAB 2018b
df = 5; % degrees of freedom
C = [0.44 0.25; 0.25 0.44]; % covariance matrix
numSamples = 1000;
R = corrcov(C); % Convert covariance to correlation matrix
X = mvtrnd(R,df,numSamples); % X ~ multivariate t distribution
You can compare how well you matched the correlation matrix R using corrcoef or corr().
corrcoef(X) % Alternatively, use corr(X)
The Covariance
Matching the covariance is another matter. Admittedly, calling cov(X) will reveal that this is lacking. Recall that the diagonal of the covariance matrix holds the variances of the components of X. For a t distribution with df > 2, each marginal has variance df/(df-2), so once df is fixed there is no way to match an arbitrary desired variance (& covariance) this way. This also explains the observation in the question: with df = 3 the marginal variance is 3/(3-2) = 3, so the sample covariance approaches 3*[1,0.3;0.3,1] = [3,0.9;0.9,3], which is close to the reported [3,1;1,3].
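A quick numerical check of that df/(df-2) factor, using the first example from the question:
df = 3;
R  = [1 0.3; 0.3 1];
X  = mvtrnd(R, df, 1000000);
cov(X)   % approaches df/(df-2) * R = [3 0.9; 0.9 3]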
A useful function is corrcov, which converts a covariance matrix into a correlation matrix.
Notice that calling it yourself is not strictly necessary here, as the documentation for mvtrnd indicates:
C must be a square, symmetric and positive definite matrix. If its
diagonal elements are not all 1 (that is, if C is a covariance matrix
rather than a correlation matrix), mvtrnd rescales C to transform it
to a correlation matrix before generating the random numbers.
I have a small, well-conditioned Hermitian matrix L with eigenvalues in [0,2]. I'm getting weird results when trying to compute the norm of the inverse of L:
>> norm(inv(L))
ans =
2.0788
>> min(eig(L))
ans =
0.5000
This is strange, because the 2-norm of the inverse ought to equal the inverse of the minimal eigenvalue of the matrix.
I know about the errors introduced by machine arithmetic, but for this small, Hermitian, well-conditioned example I expected them to be negligible.
Here is the matrix https://www.dropbox.com/s/nh1wegrnn53wb6w/matrix.mat
I'm using MATLAB 8.2.0.701 (R2013b) on Linux Mint 16 (Petra).
It's not a numerical issue; as you've pointed out, the matrix is well-conditioned.
second norm of inverse ought to be equal inverse of minimal eigenvalue of matrix
This is only true if the matrix is Hermitian with positive eigenvalues (i.e. positive definite). From Wikipedia: the spectral norm of a matrix A is the largest singular value of A, i.e. the square root of the largest eigenvalue of the positive-semidefinite matrix A'*A (where A' is the conjugate transpose).
So here you would calculate the norm of the inverse as:
[V, D] = eig(L'*L);
1/sqrt(min(diag(D)))   % ans = 2.0788539
norm(inv(L))           % ans = 2.0788539
As we expect.
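Given that, the natural next step is to check which assumption fails for this particular L. A quick sketch (R2013b predates ishermitian, so compare against the conjugate transpose directly):
isequal(L, L')         % false if L is not exactly Hermitian
norm(L - L', 'fro')    % size of the non-Hermitian part
min(eig((L + L')/2))   % negative if the Hermitian part is not positive definite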
I want to make graphs similar to the one shown in the picture:
I am using the Fisher iris data set and employ PCA to reduce its dimensionality.
This is the code:
load fisheriris
[pc,score,latent,tsquare] = princomp(meas); % princomp returns at most four outputs
I guess the eigenvalues are given in latent, but that shows me only four values, and those describe the reduced data.
My question is: how can I show all eigenvalues of the original matrix, which is not square (150x4)? Please help! Thank you very much in advance!
The short (and useless) answer is that [V, D] = eig(A) gives you the eigenvectors and the eigenvalues. However, I'm afraid I have bad news for you: eigenvalues and eigenvectors only exist for square matrices, so there are none for your 150x4 matrix.
All is not lost, however. PCA actually uses the eigenvalues of the covariance matrix, not of the original matrix, and the covariance matrix is always square. That is, if you have a matrix A, the covariance matrix is A*A' (in MATLAB notation).
The covariance matrix is not only square, it is symmetric. This is good, because the singular values of a matrix are related to the eigenvalues of its covariance matrix. Check the following MATLAB code:
A = [10 20 35; 5 7 9]; % A rectangular matrix
X = A*A'; % The covariance matrix of A
[V, D] = eig(X); % Get the eigenvectors and eigenvalues of the covariance matrix
[U,S,W] = svd(A); % Get the singular values of the original matrix
V is a matrix containing the eigenvectors, and D contains the eigenvalues. Now, the relationships:
S*S' ~ D
U ~ V
I use '~' to indicate that while they are "equal", the sign and order may vary. There is no "correct" order or sign for the eigenvectors, so either is valid. Unfortunately, though, you will only get four eigenvalues this way (unless your array is meant to be transposed).
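Applied to the iris data, this means the four values in latent are exactly the eigenvalues of the covariance matrix of meas (princomp works on the centered data, so cov is the right comparison):
load fisheriris
[pc, score, latent] = princomp(meas);
sort(eig(cov(meas)), 'descend')   % matches latent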
I'm working with 6x6 matrices whose entries have varying precision. When I try to invert such a matrix in MATLAB, I usually get Inf or NaN for all entries, and MATLAB throws a warning:
Matrix is singular to working precision.
Is there any way to avoid this and get proper results?
Your matrix seems to be rank-deficient. Only full-rank matrices can be robustly inverted.
You may circumvent the problem by adding a small multiple of the identity matrix to the original one:
A = rand(6,5);
A = A*A';                    % symmetric, rank-5 (hence singular) 6x6 matrix
iA = inv(A);                 % results in NaNs and Infs: A is singular
iAs = inv(A + eye(6)*1e-3);  % add small (1e-3) values to the diagonal; this should help
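An alternative worth mentioning (not part of the original suggestion) is the Moore-Penrose pseudo-inverse, which handles rank-deficient matrices without picking an ad-hoc regularization constant:
iAp = pinv(A);   % pseudo-inverse: well-defined even when A is singular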