Given a random vector Y = [y1, y2, ..., yn], its covariance matrix has entries Cov(yi, yj) = E[(yi − E[yi])(yj − E[yj])].
How can I calculate the covariance matrix in MATLAB?
The covariance matrix can be computed with the cov() function. But be aware: the covariance matrix of a single vector is always a 1-by-1 matrix (the variance), because a single variable has no cross-covariances.
% random vector of length 10
vec = rand(10,1);
% covariance matrix
cov(vec)
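For multi-variable data the same function applies: cov treats each column of a matrix as one variable and each row as one observation, so an n-by-d matrix yields a d-by-d covariance matrix. A minimal sketch (the data here are illustrative random numbers):

```matlab
% 100 observations of 3 variables
X = randn(100, 3);
C = cov(X);   % 3-by-3 covariance matrix
size(C)       % 3-by-3
% C is symmetric: C - C' is (numerically) zero
```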
What is the difference in Matlab if I do:
R = mvnrnd(MU,SIGMA)
vs.
R = normrnd(mu,sigma)
R = normrnd(mu, sigma) outputs random numbers from a one-dimensional normal distribution; for each element of the inputs a single output is generated, i.e. R(i) is a random scalar drawn from the normal distribution defined by mu(i) and sigma(i).
If sigma is a scalar rather than a vector, the same value is used for each element of R.
R = mvnrnd(MU,SIGMA) outputs random numbers from a multivariate normal distribution; for each row of MU (and each page of SIGMA) a single row of R is generated, where the dimension d (the number of columns) of R matches that of MU and SIGMA. R(i,:) is a random vector drawn from the multivariate normal distribution defined by MU(i,:) and SIGMA(:,:,i).
If SIGMA is a d-by-d matrix rather than a d-by-d-by-n array, the same matrix is used for each row of R.
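A minimal sketch contrasting the two (the numbers are illustrative; mvnrnd and normrnd require the Statistics Toolbox):

```matlab
mu = [0 0];
Sigma = [1 0.8; 0.8 1];       % 2-by-2 covariance matrix
R1 = mvnrnd(mu, Sigma, 500);  % 500-by-2: rows are correlated pairs
R2 = normrnd(0, 1, 500, 2);   % 500-by-2: independent univariate draws
% corr(R1) approximates Sigma; corr(R2) approximates eye(2)
```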
In Matlab, I can initialize a sparse matrix from three vectors like
Sparse = sparse(X,Y,Value,m,n);
where X and Y are the row and column indices and m and n are the required dimensions of the sparse matrix.
Is there anything like this in OpenCV?
I have a feature vector (FV1) of size 1*n. First I subtract the mean of all feature vectors from FV1. Then I take its transpose (FV1_Transpose), which is n*1, and compute the matrix product FV1_Transpose * FV1 to get an n*n covariance matrix.
But my problem is that I don't get a positive definite matrix. I read everywhere that a covariance matrix should be symmetric positive definite.
FV1 after subtraction of mean = -17.7926788,0.814089298,33.8878059,-17.8336430,22.4685001;
Covariance matrix =
316.579407, -14.4848289, -602.954834, 317.308289, -399.774811
-14.4848289, 0.662741363, 27.5876999, -14.5181780, 18.2913647
-602.954834, 27.5876999, 1148.38342, -604.343018, 761.408142
317.308289, -14.5181780, -604.343018, 318.038818, -400.695221
-399.774811, 18.2913647, 761.408142, -400.695221, 504.833496
This covariance matrix is not positive definite. Any idea why this is so?
Thanks in advance.
Are you sure the matrix is not positive definite? I did the following in octave.
A = [316.579407, -14.4848289, -602.954834, 317.308289, -399.774811, ...
     -14.4848289, 0.662741363, 27.5876999, -14.5181780, 18.2913647, ...
     -602.954834, 27.5876999, 1148.38342, -604.343018, 761.408142, ...
     317.308289, -14.5181780, -604.343018, 318.038818, -400.695221, ...
     -399.774811, 18.2913647, 761.408142, -400.695221, 504.833496]
A = reshape(A, 5, 5)
svd(A)
The singular values of A obtained from svd (which, for a symmetric matrix, equal the absolute values of its eigenvalues) were:
2.2885e+03
5.4922e-05
1.5958e-05
1.3636e-05
1.1507e-08
Please note that all the eigenvalues are positive.
Now, A is symmetric (being a covariance matrix). To verify,
A - A'
would give you a 5 x 5 zero matrix
A symmetric matrix with positive eigenvalues is positive definite.
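Another quick numerical check is chol, whose optional second output is zero exactly when the matrix is (numerically) positive definite. A sketch, assuming A is the matrix defined above:

```matlab
[~, p] = chol(A);
if p == 0
    disp('A is (numerically) positive definite')
else
    disp('A is not positive definite')
end
% min(eig(A)) shows how close A is to being singular
```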
I'm trying to figure out Eigenvalues/Eigenvectors for large datasets in order to compute
the PCA. I can calculate the Eigenvalues and Eigenvectors for 2x2, 3x3 etc..
The problem is, my dataset is 451x128; I compute the covariance matrix, which
gives me 128x128 values. It therefore looks like the following:
A = [ 1, 2, 3, ..., (128 columns)
      5, 4, 1, ...,
      3, 2, 1, ...,
      ...
      (128 rows) ]
Computing the Eigenvalues and vectors for a 128x128 matrix seems really difficult and would take a lot of computing power. However, if I treat each of the blocks in A as 2-dimensional (3xN), I can then compute a covariance matrix for each block, which gives me a 3x3 matrix.
My question is this: Would this be a good or reasonable assumption for solving the eigenvalues and vectors? Something like this:
A is a 2-dimensional matrix of size 128x451; for each of the blocks, compute the eigenvalues and eigenvectors of its covariance matrix, like so:
Eig1 = eig(cov(A[0]))
Eig2 = eig(cov(A[1]))
This would then give me 128 sets of Eigenvalues (one for each of the blocks inside the 128x128 matrix).
If this is not correct, how does MATLAB handle such large dimensional data?
Have you tried svd()?
Do the singular value decomposition
[U,S,V] = svd(X)
U and V are orthogonal matrices, and the diagonal of S contains the singular values (whose squares are the eigenvalues of X*X'), already sorted in descending order; the corresponding columns of U and V are ordered to match.
As kkuilla mentions, you can use the SVD of the original matrix, as the SVD of a matrix is related to the Eigenvalues and Eigenvectors of the covariance matrix as I demonstrate in the following example:
A = [1 2 3; 6 5 4]; % A rectangular matrix
X = A*A'; % The covariance matrix of A
[V, D] = eig(X); % Get the eigenvectors and eigenvalues of the covariance matrix
[U,S,W] = svd(A); % Get the singular values of the original matrix
V is a matrix containing the eigenvectors, and D contains the eigenvalues. Now, the relationship:
S*S' ~ D
U ~ V
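Note that eig returns eigenvalues in ascending order while svd returns singular values in descending order, and eigenvector signs are arbitrary, so the correspondence holds only up to reordering and sign. A quick numerical check of the relationship:

```matlab
A = [1 2 3; 6 5 4];
X = A*A';
[V, D] = eig(X);
[U, S, W] = svd(A);
% Squared singular values of A equal eigenvalues of A*A':
sort(diag(S).^2) - sort(diag(D))   % ~ zero
```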
As to your own assumption, I may be misreading it, but I think it is false. I can't see why the Eigenvalues of the blocks would relate to the Eigenvalues of the matrix as a whole; they wouldn't correspond to the same Eigenvectors, as the dimensionality of the Eigenvectors wouldn't match. I think your covariances would be different too, but then I'm not completely clear on how you are creating these blocks.
As to how Matlab does it, it does use some tricks. Perhaps the link below might be informative (though it might be a little old). I believe they use (or used) LAPACK and a QZ factorisation to obtain intermediate values.
https://au.mathworks.com/company/newsletters/articles/matlab-incorporates-lapack.html
Use the built-in function:
[Eigenvectors, Eigenvalues] = eig(Matrix)
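For example, for PCA on a covariance matrix like the 128x128 one in the question, you would typically sort the output, since eig returns eigenvalues in ascending order (the data below are illustrative random numbers):

```matlab
X = randn(451, 128);            % example data, 451 observations of 128 variables
C = cov(X);                     % 128-by-128 covariance matrix
[V, D] = eig(C);                % eigenvectors V, eigenvalues on diag(D)
[d, idx] = sort(diag(D), 'descend');
V = V(:, idx);                  % principal directions, largest variance first
```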
I have a 512x512x3 matrix that stores 512x512 three-dimensional vectors. What is the best way to normalize all those vectors, so that my results are 512x512 vectors with length that equals 1?
At the moment I use for loops, but I don't think that is the best way in MATLAB.
If the vectors are Euclidean, the length of each is the square root of the sum of the squares of its coordinates. To normalize each vector individually so that it has unit length, you need to divide its coordinates by its norm. For that purpose you can use bsxfun:
norm_A = sqrt(sum(A .^ 2, 3)); %// Calculate Euclidean length
norm_A(norm_A < eps) = 1; %// Avoid division by zero
B = bsxfun(@rdivide, A, norm_A); %// Normalize
where A is your original 3-D vector matrix.
EDIT: Following Shai's comment, added a fix to avoid possible division by zero for null vectors.
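On newer MATLAB releases, the same normalization can be written without bsxfun: vecnorm (R2017b+) computes the lengths directly, and implicit expansion (R2016b+) replaces bsxfun. A sketch, with A as above:

```matlab
norm_A = vecnorm(A, 2, 3);    % Euclidean length along the 3rd dimension
norm_A(norm_A < eps) = 1;     % avoid division by zero for null vectors
B = A ./ norm_A;              % implicit expansion broadcasts the division
```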