Generating multivariate normally distributed random numbers in Matlab

This question is about the use of the covariance matrix in the multidimensional normal distribution:
I want to generate multi-dimensional random numbers x in Matlab with a given mean mu and covariance matrix Sigma. Assuming Z is a standard normally distributed random number (e.g. generated using randn), what is the correct code:
x = mu + chol(Sigma) * Z
or
x = mu + Sigma ^ 0.5 * Z
?
I am not sure about the use of the covariance matrix in the definition of the multidimensional normal distribution – whether the determinant in the denominator is that of the matrix square root or of the Cholesky factor...

If by definition you refer to the density of the multivariate normal distribution:
it contains neither the Cholesky decomposition nor the matrix square root of Σ, but its inverse and the scalar square root of its determinant.
But for numerically generating random numbers from this distribution, the density is not helpful. It is not even the most general description of the multivariate normal distribution, since the density formula only makes sense for positive definite matrices Σ, while the distribution is also defined if there are zero eigenvalues – that just means that the variance is 0 in the direction of the respective eigenvector.
Your question follows the approach of starting from standard multivariate normally distributed random numbers Z as produced by randn, and then applying a linear transformation. Assuming that mu is a p-dimensional row vector, we want an nxp-dimensional random matrix (each row one observation, each column one variable):
Z = randn(n, p);
x = mu + Z * A;
We need a matrix A such that the covariance of x is Sigma. Since the covariance of Z is the identity matrix, the covariance of x is given by A' * A. A solution to this is given by the Cholesky decomposition, so the natural choice is
A = chol(Sigma);
where A is an upper triangular matrix.
However, we can also search for a Hermitian solution, A' = A, and then A' * A becomes A^2, the matrix square. A solution to this is given by a matrix square root, which is computed by replacing each eigenvalue of Sigma by its square root (or its negative); in general there are 2ⁿ possible solutions for n positive eigenvalues. The Matlab function sqrtm returns the principal matrix square root, which is the unique nonnegative-definite solution. Therefore,
A = sqrtm(Sigma)
also works. Sigma ^ 0.5 should in principle do the same.
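Putting the pieces together, here is a minimal sketch (the specific mu and Sigma are arbitrary) that draws samples with the Cholesky factor and checks the sample covariance:
p = 3;
n = 100000;
mu = [1 2 3];                   % 1 x p mean (row vector)
Sigma = [4 1 0; 1 2 1; 0 1 3];  % p x p, positive definite
A = chol(Sigma);                % upper triangular with A' * A == Sigma
Z = randn(n, p);                % rows are standard normal vectors
x = mu + Z * A;                 % rows are N(mu, Sigma) samples
norm(cov(x) - Sigma)            % small, shrinking as n grows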
Simulations using this code
p = 10;                          % dimension
n = 1000;                        % observations per replication
nr = 1000;                       % number of replications
cp = nan(nr, 1);                 % factorization error, chol
sp = nan(nr, 1);                 % factorization error, sqrtm
pp = nan(nr, 1);                 % factorization error, mpower
for i = 1 : nr
    x = randn(n, p);
    Sigma = cov(x);              % a random covariance matrix
    cS = chol(Sigma);
    cp(i) = norm(cS' * cS - Sigma);
    sS = sqrtm(Sigma);
    sp(i) = norm(sS' * sS - Sigma);
    pS = Sigma ^ 0.5;
    pp(i) = norm(pS' * pS - Sigma);
end
mean([cp sp pp])
yield that chol is more precise than the other two methods, and profiling shows that it is also much faster, for both p = 10 and p = 100.
The Cholesky decomposition does, however, have the disadvantage that it is only defined for positive-definite Σ, while the matrix square root merely requires Σ to be nonnegative-definite (sqrtm issues a warning for a singular input, but still returns a valid result).
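To illustrate the difference, a small sketch (values arbitrary) with a rank-deficient covariance, where chol errors out but sqrtm still yields a usable factor:
Sigma = [1 1; 1 1];      % rank 1: nonnegative-definite but not positive-definite
% chol(Sigma) throws an error here, since Sigma is not positive definite
A = sqrtm(Sigma);        % principal square root; sqrtm warns but succeeds
Z = randn(1000, 2);
x = Z * A;               % all samples lie on the line x(:,1) == x(:,2)
norm(cov(x) - Sigma)     % small, up to sampling error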

Related

How to generate random invertible symmetric positive semidefinite matrix?

How can I generate a random, invertible, symmetric, positive semidefinite matrix using MATLAB?
I found this Python code:
import numpy
from numpy import random
matrixSize = 10
A = random.rand(matrixSize,matrixSize)
B = numpy.dot(A,A.transpose())
But I am not sure whether this generates a random positive semidefinite matrix B.
The MATLAB equivalent of your code is:
matrixSize = 10;
A = rand(matrixSize);
B = A * A.';
This does produce a symmetric, positive-semidefinite matrix. But this matrix is not necessarily invertible; it is possible (though very unlikely) that the matrix is singular. More likely is that it is almost singular, meaning that the inverse will have very large values. Such an inverse is imprecise, and B*inv(B) will differ from the identity matrix by an amount larger than your tolerance.
One simple way of ensuring that B*inv(B) is within tolerance of the identity matrix is to repeatedly generate a random matrix until you find one that is OK:
tol = 1e-12;
while true
    A = rand(matrixSize);
    B = A*A.';
    err = abs(B*inv(B) - eye(matrixSize));
    if all(err(:) < tol)
        break
    end
end
The loop above will run only once most of the time; only occasionally will it need to generate a second matrix.
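As an alternative to testing B*inv(B) directly, one could screen candidates by their reciprocal condition number; a sketch using rcond (the threshold is a heuristic, not part of the original answer):
matrixSize = 10;
rcondTol = 1e-8;            % heuristic: tune to the accuracy you need
while true
    A = rand(matrixSize);
    B = A * A.';
    if rcond(B) > rcondTol  % well-conditioned, so inv(B) is accurate
        break
    end
end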
For any eps > 0 and any nxk (for any k) matrix B the matrix
P = eps*I + B*B'
is positive definite and invertible.
If k < n and eps is small then P will be nearly singular, in the sense that it will have eps as an eigenvalue (with multiplicity at least n - k). When generating these matrices to test something, it can be handy to be able to generate something nearly singular.
MATLAB code to obtain P:
n = 10;
k = 1;
B = rand(n,k);
B = B * B.';                            % n x n, rank k, positive semidefinite
P = B + eye(size(B)) * eps(max(B(:)));  % add a tiny multiple of the identity
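To confirm positive definiteness, a quick sketch using a larger, explicit shift (the second output of chol is zero exactly when the input is positive definite):
n = 10; k = 1;
epsShift = 1e-6;                 % a modest eps, well above rounding error
B = rand(n, k);
P = epsShift * eye(n) + B * B.';
[~, flag] = chol(P);
flag == 0                        % true: P is positive definite
min(eig(P))                      % approximately epsShift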

MATLAB: The determinant of a covariance matrix is either 0 or inf

I have a 1500x1500 covariance matrix of which I am trying to calculate the determinant for the EM-ML method. The covariance matrix is obtained by finding the SIGMA matrix and then passing it into the nearestSPD library (Link) to make the matrix positive definite. In this case the matrix is always singular. Another method I tried was manually generating a positive definite matrix using the A'*A technique (A was taken as a 1600x1500 matrix). This always gives me the determinant as infinite. Any idea how I can get a positive definite matrix with a finite determinant?
Do you actually need the determinant, or the log of the determinant?
For example, if you are computing a log likelihood of Gaussians, then what enters into the log likelihood is the log of the determinant. In high dimensions the determinant may not fit in a double, but its log most likely will.
If you perform a Cholesky factorisation of the covariance C, with (lower triangular) factor L, say, so that
C = L*L'
then
det(C) = det(L) * det(L') = det(L)^2
But the determinant of a lower triangular matrix is the product of its diagonal elements, so, taking logs above, we get:
log(det(C)) = 2 * sum_i log(L[i,i])
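In MATLAB this becomes a two-liner; a sketch with a toy matrix whose determinant overflows a double:
d = 1500;
C = 2 * eye(d);                   % det(C) = 2^1500, which overflows to inf
L = chol(C, 'lower');             % C == L * L'
logdetC = 2 * sum(log(diag(L)))   % == d * log(2), perfectly representable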
(In response to a comment)
Even if you need to calculate a Gaussian pdf, it is better to calculate the log of that and exponentiate only when you need to. For example, a d-dimensional Gaussian with covariance C (which has a Cholesky factor L) and mean 0 (purely to save typing) is:
p(x) = exp(-0.5 * x' * inv(C) * x) / sqrt((2*pi)^d * det(C))
so
log p(x) = -0.5 * x' * inv(C) * x - 0.5 * d * log(2*pi) - 0.5 * log(det(C))
which can also be written
log p(x) = -0.5 * y' * y - 0.5 * d * log(2*pi) - log(det(L))
where
y = inv(L) * x
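A sketch of the full log-density in MATLAB (the function name is illustrative), using a triangular solve in place of inv:
function lp = gausslogpdf(x, C)
% log of N(x; 0, C) for a column vector x and SPD covariance C;
% for a nonzero mean mu, call gausslogpdf(x - mu, C)
d = numel(x);
L = chol(C, 'lower');   % C == L * L'
y = L \ x;              % y = inv(L) * x via forward substitution
lp = -0.5 * (y' * y) - 0.5 * d * log(2*pi) - sum(log(diag(L)));
end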

on symmetric positive semi-definiteness of covariance matrices in matlab

Hi everybody, I have this problem:
I have a dataset of n vectors, each with D dimensions.
I also have a covariance matrix of size D*D; let it be C.
I perform the following action:
I choose K vectors from the dataset, and also choose E dimensions randomly. Let M be the sample covariance of the selected data on the selected dimensions, so M is an E*E matrix.
Let P be the partial covariance matrix corresponding to the dimensions E of C, i.e. C(E,E) in MATLAB.
Is the following matrix positive semi-definite?
X = (1-a)P + aM
where a is a constant like 0.2.
I sometimes get the following error when using mvnrnd(mean,X):
SIGMA must be a symmetric positive semi-definite matrix
My code is:
%%% Dims are randomly chosen dimensions
%%% Inds are randomly chosen indices from {1, 2, ..., n}
%%% PP are n D-dimensional vectors composing my data set; PP is n*D
%%% sigmaa is a D*D covariance matrix
co = cov(PP(Inds,Dims));
me = mean(PP(Inds,Dims));
Bettaa = 0.2;
sigmaaDims = sigmaa(Dims,Dims);
sigmaaDims = (1-Bettaa)*sigmaaDims + (co)*Bettaa;
Tem = mvnrnd(me,sigmaaDims);
Simply looking at the matrix dimensions, it is not possible to tell whether a matrix is positive semi-definite.
To find out whether a given matrix is positive semi-definite, you must check that its eigenvalues are non-negative and that it is symmetric:
symmetry = issymmetric(X);
[~,D] = eig(X);
eigenvalues = diag(D);
if all(eigenvalues >= 0) && symmetry
    disp('Positive semi-definite matrix.')
else
    disp('Non positive semi-definite matrix.')
end
Where X is the matrix you are interested in.
Note that if you use the weaker definition of a positive definite matrix (see the 'Extension for non-symmetric matrices' section), X does not need to be symmetric and you would end up with:
[~,D] = eig(X);
eigenvalues = diag(D);
if all(eigenvalues >= 0)
    disp('Positive semi-definite matrix.')
else
    disp('Non positive semi-definite matrix.')
end
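As for the original question: X = (1-a)P + aM is a convex combination of two positive semi-definite matrices (a sample covariance and a principal submatrix of a covariance), so it is positive semi-definite in exact arithmetic. The mvnrnd error is therefore usually a floating-point artifact: the computed X is very slightly asymmetric or has tiny negative eigenvalues. A common repair, sketched here (not part of the original answer):
X = (X + X') / 2;              % enforce exact symmetry
[V, D] = eig(X);
D = diag(max(diag(D), 0));     % clip tiny negative eigenvalues to zero
X = V * D * V';
X = (X + X') / 2;              % re-symmetrize after the reconstruction
Tem = mvnrnd(me, X);           % using me from the question's code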

MATLAB eig returns inverted signs sometimes

I'm trying to write a program that gets a matrix A of any size, and SVD decomposes it:
A = U * S * V'
Where A is the matrix the user enters, U is an orthogonal matrix composed of the eigenvectors of A * A', S is a diagonal matrix of the singular values, and V is an orthogonal matrix of the eigenvectors of A' * A.
Problem is: the MATLAB function eig sometimes returns the wrong eigenvectors.
This is my code:
function [U,S,V] = badsvd(A)
W = A*A';
[U,S] = eig(W);
max = 0;
for i = 1:size(W,1) %% sort
    for j = i:size(W,1)
        if (S(j,j) > max)
            max = S(j,j);
            temp_index = j;
        end
    end
    max = 0;
    temp = S(temp_index,temp_index);
    S(temp_index,temp_index) = S(i,i);
    S(i,i) = temp;
    temp = U(:,temp_index);
    U(:,temp_index) = U(:,i);
    U(:,i) = temp;
end
W = A'*A;
[V,s] = eig(W);
max = 0;
for i = 1:size(W,1) %% sort
    for j = i:size(W,1)
        if (s(j,j) > max)
            max = s(j,j);
            temp_index = j;
        end
    end
    max = 0;
    temp = s(temp_index,temp_index);
    s(temp_index,temp_index) = s(i,i);
    s(i,i) = temp;
    temp = V(:,temp_index);
    V(:,temp_index) = V(:,i);
    V(:,i) = temp;
end
s = sqrt(s);
end
My code returns the correct s matrix, and also "nearly" correct U and V matrices. But some of the columns are multiplied by -1. Obviously, if t is an eigenvector, then -t is also an eigenvector; but with the signs inverted (for some of the columns, not all) I don't get A = U * S * V'.
Is there any way to fix this?
Example: for the matrix A=[1,2;3,4] my function returns:
U=[0.4046,-0.9145;0.9145,0.4046]
and the built-in MATLAB svd function returns:
u=[-0.4046,-0.9145;-0.9145,0.4046]
Note that eigenvectors are not unique. Multiplying by any nonzero constant, including -1 (which simply changes the sign), gives another valid eigenvector. This is clear given the definition of an eigenvector:
A·v = λ·v
MATLAB chooses to normalize the eigenvectors to have a norm of 1.0; the sign is arbitrary:
For eig(A), the eigenvectors are scaled so that the norm of each is 1.0.
For eig(A,B), eig(A,'nobalance'), and eig(A,B,flag), the eigenvectors are not normalized.
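A quick sketch of this sign ambiguity: if v is a unit-norm eigenvector, so is -v, and eig is free to return either one:
A = [2 1; 1 3];               % any symmetric matrix
[V, D] = eig(A);
v = V(:,1); lambda = D(1,1);
norm(A*v - lambda*v)          % ~0: v is an eigenvector
norm(A*(-v) - lambda*(-v))    % ~0: -v is an eigenvector too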
Now as you know, SVD and eigendecomposition are related. Below is some code to test this fact. Note that svd and eig return results in different order (one sorted high to low, the other in reverse):
% some random matrix
A = rand(5);
% singular value decomposition
[U,S,V] = svd(A);
% eigenvectors of A'*A are the same as the right-singular vectors
[V2,D2] = eig(A'*A);
[D2,ord] = sort(diag(D2), 'descend');
S2 = diag(sqrt(D2));
V2 = V2(:,ord);
% eigenvectors of A*A' are the same as the left-singular vectors
[U2,D2] = eig(A*A');
[D2,ord] = sort(diag(D2), 'descend');
S3 = diag(sqrt(D2));
U2 = U2(:,ord);
% check results
A
U*S*V'
U2*S2*V2'
I get very similar results (ignoring minor floating-point errors):
>> norm(A - U*S*V')
ans =
7.5771e-16
>> norm(A - U2*S2*V2')
ans =
3.2841e-14
EDIT:
To get consistent results, one usually adopts a convention of requiring that the first element in each eigenvector be of a certain sign. That way if you get an eigenvector that does not follow this rule, you multiply it by -1 to flip the sign...
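For the SVD built from two separate eig calls, note that the signs are not independent: flipping a column of U2 must be paired with flipping the matching column of V2, or the product U2*S2*V2' changes. A sketch of one such convention, applied to the U2/V2 from the code above (anchoring the sign on the largest-magnitude entry of each right-singular vector):
for k = 1:size(V2,2)
    [~, idx] = max(abs(V2(:,k)));
    if V2(idx,k) < 0
        V2(:,k) = -V2(:,k);   % flip the pair together so that
        U2(:,k) = -U2(:,k);   % U2*S2*V2' stays the same
    end
end
Alternatively, since A*v_k = s_k*u_k, the left vectors can be derived directly as U2 = A*V2/S2 (for nonzero singular values), which makes the signs consistent by construction.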

Calculate distance from point p to high dimensional Gaussian (M, V)

I have a high dimensional Gaussian with mean M and covariance matrix V. I would like to calculate the distance from point p to M, taking V into consideration (I guess it's the distance in standard deviations of p from M?).
Phrased differently, I take an ellipse one sigma away from M, and would like to check whether p is inside that ellipse.
If V is a valid covariance matrix of a Gaussian, then it is symmetric positive definite and therefore defines a valid scalar product. By the way, so does inv(V).
Therefore, assuming that M and p are column vectors, you could define distances as:
d1 = sqrt((M-p)'*V*(M-p));
d2 = sqrt((M-p)'*inv(V)*(M-p));
The Matlab way, one would rewrite d2 as (with probably some unnecessary parentheses):
d2 = sqrt((M-p)'*(V\(M-p)));
The nice thing is that when V is the unit matrix, then d1 == d2, and both correspond to the classical Euclidean distance. Finding out whether you have to use d1 or d2 is left as an exercise (sorry, part of my job is teaching). Write down the multi-dimensional Gaussian formula and compare it to the 1D case; the 1D case is just a particular case of the multidimensional one (or perform some numerical experiment).
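For such a numerical experiment, a minimal sketch (the values are arbitrary):
M = [0; 0];
V = [2 0; 0 0.5];
p = [1; 0.5];
d1 = sqrt((M-p)' * V * (M-p))
d2 = sqrt((M-p)' * (V \ (M-p)))
% p is inside the one-sigma ellipsoid iff the correct distance is <= 1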
NB: in very high dimensional spaces or for very many points to test, you might find a clever / faster way from the eigenvectors and eigenvalues of V (i.e. the principal axes of the ellipsoid and their corresponding variance).
Hope this helps.
A.
Consider computing the probability density of the point under the normal distribution:
M = [1 -1]; %# mean vector
V = [.9 .4; .4 .3]; %# covariance matrix
p = [0.5 -1.5]; %# 2d-point
prob = mvnpdf(p,M,V); %# density p(p|mu,cov)
The function MVNPDF is provided by the Statistics Toolbox.
Maybe I'm totally off, but isn't this the same as just asking for each dimension: Am I inside the sigma?
PSEUDOCODE:
foreach(dimension d)
(M(d) - sigma(d) < p(d) < M(d) + sigma(d)) ?
Because you want to know if p is inside every dimension of your Gaussian. So actually, this is just a geometric problem, and your Gaussian doesn't really have anything to do with it (except for M and sigma, which are just distances).
In MATLAB you could try something like:
all(M - sigma < p & p < M + sigma)
As for the distance to that place: I don't remember the exact function for the Euclidean distance, but maybe dist works:
dist(M, p)
Because M is just a point in space and p as well. Just 2 vectors.
And now the final one. You want to know the distance in a form of sigma's:
% create a distance vector and divide it by sigma
(M - p) ./ sigma
I think that will do the trick.