Why does matlab call A, in iwishrnd(A,df) function, a covariance matrix? - matlab

I get this error when I call iwishrnd() function:
??? Error using ==> iwishrnd at 41
Covariance matrix must be symmetric and positive definite.
as I explained in my previous question:
Why does the eig(A) function (where A is a positive semidefinite matrix) return negative doubles?
I'd like to know why MATLAB calls this parameter a covariance matrix.
I know that this distribution is used as the conjugate prior for the covariance matrix of a multivariate normal distribution, but the parameter is proportional to the mean of that distribution (as you can see at http://en.wikipedia.org/wiki/Inverse-Wishart_distribution). So wouldn't it be better to call it the mean of the distribution rather than the covariance matrix?

This is a nomenclature issue; it is relatively common to call the A parameter in the Wishart distribution a 'covariance matrix', since (1) it must have all the properties of a covariance matrix, (2) the output of the Wishart distribution is almost always used as a covariance matrix (e.g., the Wishart is the conjugate prior for a Gaussian, see the link below), and (3) A is proportional to the mean of the covariance matrices drawn from the distribution.
A does not represent the covariance of the Wishart distribution, if that's what you're wondering.
http://en.wikipedia.org/wiki/Conjugate_prior#Table_of_conjugate_distributions
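One way to see point (3) empirically is to compare the Monte Carlo mean of the draws with Tau/(df - p - 1), the theoretical mean of a p x p inverse Wishart with scale Tau and df degrees of freedom. This is only a sketch: the Tau below is an arbitrary positive definite matrix chosen for illustration, not the matrix from the question.
p   = 3;
df  = 10;
Tau = [2 0.5 0; 0.5 1 0.2; 0 0.2 1.5];   % arbitrary symmetric positive definite matrix
S = zeros(p);
N = 1e4;
for k = 1:N
    S = S + iwishrnd(Tau, df);            % accumulate draws from the inverse Wishart
end
disp(S / N);                 % Monte Carlo mean of the draws
disp(Tau / (df - p - 1));    % theoretical mean: proportional to Tau, not equal to it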

Related

Eigenvalues are always 1

When I compute the eigenvalues after applying PCA to an image, I always get 1, whatever the image. What's the reason behind this?
I used the following code.
coeff = pca(pmap);
disp(coeff);
[V, L] = eig(coeff' * coeff);
Lambda = diag(L);
disp(Lambda);
The coeff matrix that pca outputs already contains the eigenvectors, which are all orthogonal. They are in fact orthonormal, since MATLAB normalises them. The relative weights are returned in the explained output argument of pca.
So coeff'*coeff gives you the identity matrix, and the eigenvalues of the identity matrix are, of course, all 1.
The result follows from the linear algebra itself, not from the particular image.
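To see this numerically, here is a small sketch with random stand-in data (pmap here is just an illustrative matrix, not the poster's image):
pmap  = rand(100, 5);            % stand-in data: 100 observations, 5 variables
coeff = pca(pmap);               % columns are orthonormal principal directions
G     = coeff' * coeff;          % Gram matrix: the identity, up to round-off
disp(norm(G - eye(size(G))));    % ~0
disp(eig(G));                    % eigenvalues of the (near-)identity: all ~1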

Why the covariance returned by MATLAB is only one vector?

If I have a random complex vector Z that is 2x1, shouldn't the covariance returned by MATLAB be a 2x2 matrix? Instead, I get a single real-valued covariance. According to this wiki article, for an n x 1 vector the covariance should be n x n. Any ideas?
Z=[-0.0117 + 0.0032i; -0.0109 + 0.0046i]
C=cov(Z)
The C I obtained is 1.3261e-06. I expected a 2x2 matrix.
As per the official MATLAB documentation for the cov function:
C = cov(A) returns the covariance.
If A is a vector of observations, C is the scalar-valued variance.
The Wikipedia article you linked describes what you are attempting to obtain, but you can't assume that Matlab is implementing the same functionalities in the same way.
On a side note, it doesn't surprise me that the returned covariance is not meaningful. If you don't provide enough variables and enough observations, you can't estimate a covariance.
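For illustration, a hedged sketch of both cases. The 1000-draw matrix Zs below is made-up data, only there to show the shape cov expects (rows as observations, columns as variables):
Z = [-0.0117 + 0.0032i; -0.0109 + 0.0046i];
cov(Z)                           % scalar: variance of two observations of a single variable

Zs = (randn(1000, 2) + 1i * randn(1000, 2)) * 0.01;   % 1000 observations of a 2-element vector
C  = cov(Zs)                     % 2x2 covariance matrix, as in the Wikipedia definition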

Do we need to normalize the eigen values in Matlab?

When using the eig function in MATLAB, it seems that this function has already normalized the eigenvalues. Do we need to write additional code to normalize the eigenvalues after calling eig?
The function eig in MATLAB normalizes the eigenvectors (not the eigenvalues).
See the following from the documentation:
[V,D] = eig(A) returns matrix V, whose columns are the right
eigenvectors of A such that AV = VD. The eigenvectors in V are
normalized so that the 2-norm of each is 1.
Eigenvectors are only defined up to a scalar factor, so a computational algorithm has to choose a particular scaling to show you; eig chooses 2-norm equal to 1. Just look at the eigenvector definition to see why: A*V = V*D. V shows up on both sides, so you can rescale its columns by any nonzero scalar without affecting the equation.
Eigenvalues do not vary. Look again at A*V = V*D: D appears only once, so it cannot absorb any scaling.
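A small sketch showing both points, using an arbitrary symmetric matrix:
A = [2 1; 1 3];
[V, D] = eig(A);
disp(sqrt(sum(V.^2, 1)));    % each eigenvector column has 2-norm 1 (eig's normalization)
W = 5 * V;                   % rescaled eigenvectors are still eigenvectors
disp(norm(A*W - W*D));       % ~0: A*W = W*D still holds, with the same D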

How to calculate the squared inverse of a matrix in Matlab

I have to calculate:
gamma=(I-K*A^-1)*OLS;
where I is the identity matrix, K and A are diagonal matrices of the same size, and OLS is the ordinary least squares estimate of the parameters.
I do this in Matlab using:
gamma=(I-A\K)*OLS;
However I then have to calculate:
gamma2=(I-K^2*A^-2)*OLS;
I calculate this in Matlab using:
gamma2=(I+A\K)*(I-A\K)*OLS;
Is this correct?
Also I just want to calculate the variance of the OLS parameters:
The formula is simple enough:
Var(B)=sigma^2*(Delta)^-1;
Where sigma is a constant and Delta is a diagonal matrix containing the eigenvalues.
I tried doing this by:
Var_B=Delta\sigma^2;
But it comes back saying matrix dimensions must agree?
Please can you tell me how to calculate Var(B) in Matlab, as well as confirming whether or not my other calculations are correct.
In general, matrix multiplication does not commute, which makes A^2 - B^2 not equal to (A+B)*(A-B). However your case is special: the identity matrix commutes with everything, so (I+X)*(I-X) = I - X^2 for any X, and since A and K are both diagonal they commute, so (A\K)^2 = K^2*A^-2. Your method for finding gamma2 is therefore valid.
Var_B=Delta\sigma^2 fails because sigma^2 is a scalar while Delta is a matrix, so the operands of mldivide have mismatched dimensions; see the documentation. Try Var_B=sigma^2*inv(Delta). The function inv returns a matrix inverse. Although inv could also be used to compute gamma or gamma2, the \ operator is generally preferred there for better accuracy and faster computation.
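For illustration only, a sketch with made-up diagonal matrices (not your data) that checks the gamma2 identity and computes Var_B:
n     = 4;
A     = diag([2 3 4 5]);
K     = diag([1 2 1 2]);
I     = eye(n);
OLS   = ones(n, 1);              % placeholder parameter vector
sigma = 0.5;

X      = A \ K;                  % A^-1 * K
gamma  = (I - X) * OLS;
gamma2 = (I + X) * (I - X) * OLS;
disp(norm(gamma2 - (I - K^2 / A^2) * OLS));   % ~0, confirming the factorisation

Delta = diag([1 2 3 4]);
Var_B = sigma^2 * inv(Delta)     % or sigma^2 * (Delta \ I), which avoids inv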

simulation using multivariate normal distribution (MATLAB)

I am using the mvnrnd function inside a simulation function I have created.
I have the following behaviour:
If SIGMA has 5 or fewer rows, the function works even with 1000 runs of my simulation. But if SIGMA has more than 5 rows (still with 1000 runs), it returns the error message: SIGMA must be a positive semi-definite matrix.
However, if I run the simulation only 50 times with, for example, a 10-row SIGMA, it works.
How can I make mvnrnd work in the simulation?
I assume the command is something like:
R=mvnrnd(mu,SIGMA)
SIGMA is the covariance matrix of the distribution, and by definition a covariance matrix must be symmetric positive semi-definite, i.e. all of its eigenvalues are greater than or equal to zero.
Thus you are feeding mvnrnd a SIGMA that does not satisfy this requirement.
I would suggest that you check your code or post it here.
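Without seeing the code, one can only sketch the usual diagnosis: check the smallest eigenvalue of SIGMA and enforce exact symmetry before calling mvnrnd. The SIGMA below is an illustrative rank-deficient sample covariance, not yours:
mu    = zeros(1, 10);
SIGMA = cov(randn(6, 10));            % rank-deficient sample covariance (only 6 observations)
SIGMA = (SIGMA + SIGMA') / 2;         % remove tiny asymmetries introduced by round-off

fprintf('smallest eigenvalue: %g\n', min(eig(SIGMA)));   % must not be clearly negative

R = mvnrnd(mu, SIGMA, 1000);          % 1000 draws; errors if SIGMA fails the PSD check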