Issues with calculating the determinant of a matrix - matlab

I am trying to calculate the determinant of the inverse of a matrix. The inverse of the matrix exists. However, when I try to calculate the determinant of the inverse, MATLAB gives me an Inf value. What is the reason behind this?

Short answer: given A = inv(B), then det(A)==Inf may have two explanations:
an overflow during the numerical computation of the determinant,
one or more infinite elements in A.
In the first case, your matrix is badly scaled, so that det(B) may underflow while det(A) overflows. Remember that det(a*B) == a^N * det(B), where a is a scalar and B is an N-by-N matrix.
In the second case (i.e. nnz(A==Inf)>0), matrix B may be "singular to working precision".
PS:
A matrix is nearly singular if it has a large condition number. (A small determinant has nothing to do with singularity, since the magnitude of the determinant itself is affected by scaling.)
A matrix is singular to working precision if it has a zero pivot in the Gaussian elimination: when computing the inverse, MATLAB has to calculate 1/0, which returns Inf.
In fact, in MATLAB overflow and zero-division exceptions are not caught, so that, according to IEEE 754, an Inf value is propagated.
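As a minimal illustration of the first case (a sketch with an arbitrary size and scale, assuming nothing about your actual matrix), a perfectly well-conditioned but badly scaled matrix already shows both the underflow and the overflow:
B = 1e-3 * eye(200);   % well conditioned: cond(B) == 1
det(B)                 % (1e-3)^200 = 1e-600 underflows to 0
A = inv(B);            % A = 1e3*eye(200), every entry finite
det(A)                 % (1e3)^200 = 1e600 overflows to Inf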

Related

Matlab - determinant of covariance matrix coming out as zero

I'm trying to calculate the determinant of a 171x171 covariance matrix in MATLAB, but the determinant is coming out as zero. I know the matrix is not singular, because I am able to calculate its inverse, but because the numbers in the covariance matrix are quite small (between 1e-2 and 1e-5), I think MATLAB rounds to zero at some point because the numbers become so small.
I tried installing the Symbolic Math Toolbox and making the covariance matrix symbolic to calculate the determinant, but my computer couldn't really handle it and kept crashing. Does anyone have an idea of a workaround / know of a way of dealing with a determinant that is close to zero, but not actually zero?
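One common way to handle a determinant that underflows, assuming the covariance matrix (here called C) is symmetric positive definite as a full-rank sample covariance is, is to work with its log-determinant via a Cholesky factor instead of calling det directly; a minimal sketch:
R = chol(C);                    % C = R'*R, with R upper triangular
logdetC = 2*sum(log(diag(R)));  % log(det(C)) stays representable even when det(C) underflows
detC = exp(logdetC);            % the determinant itself may still underflow; prefer the log form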

Smallest eigenvalue for large nearly singular matrix

In MATLAB I have a real, symmetric n x n matrix A, where n > 6000. Even though A is positive definite, it is close to singular. A goes from positive definite to singular to indefinite as a particular variable is changed, and I need to determine when A becomes singular. I don't trust determinants, so I am looking at the eigenvalues, but I don't have the memory (or time) to calculate all n eigenvalues, and I am only interested in the smallest one - in particular, in when it changes sign from positive to negative. I've tried
D = eigs(A,1,'smallestabs')
by which I lose the sign of the eigenvalue, and
D = eigs(A,1,'smallestreal')
for which MATLAB cannot get the lowest eigenvalue to converge. Then I've tried defining a shift value, like
for i = 1:10
    if i == 1
        D(i) = eigs(A,1,0)
    else
        D(i) = eigs(A,1,D(i-1))
    end
end
where I search around the last lowest eigenvalue found. However, the eigenvalues seem to behave oddly, and I am not sure whether I actually find the true lowest one.
So, any ideas on how to
reliably find the smallest eigenvalue with eigs, or
determine in another way when A becomes singular (as a variable in A is changed)
are highly appreciated!
Solution
I seem to have solved my particular problem. MATLAB's chol can return a second output p, which is zero if the matrix is positive definite. Thus, performing
[~,p] = chol(A)
in my case detects the transition from positive definite to not positive definite (meaning first singular, then indefinite), and it is also computationally very efficient. The documentation for chol also prefers it over eigs for checking positive definiteness. However, there seems to be some confusion about the result when the matrix is only positive semi-definite, so be careful in that case.
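A minimal sketch of this approach on a made-up family of matrices (the test matrix M and the sweep over t below are placeholders, not the original problem): chol is called for each parameter value, and the first nonzero p marks the transition.
n = 500;
Q = orth(randn(n));
M = Q*diag(linspace(1,100,n))*Q';   % symmetric positive definite test matrix, eigenvalues 1..100
for t = linspace(0, 2, 41)
    [~,p] = chol(M - t*eye(n));     % p == 0 iff M - t*eye(n) is numerically positive definite
    if p > 0
        fprintf('loses positive definiteness near t = %.2f\n', t)
        break
    end
end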
Alternative solutions
I've come across several possible solutions which I'd like to state:
Determinant:
For a positive definite matrix the determinant is positive, while for an indefinite matrix it may be negative - this could indicate the transition. In general, though, determinants of large, nearly singular matrices are not recommended.
Eigenvalues: For a positive definite matrix the real parts of all eigenvalues are positive. If at least one eigenvalue is zero, the matrix is singular, and if one becomes negative while the rest are positive, it is indefinite. Detecting the sign change of the lowest eigenvalue therefore indicates the point where the matrix becomes singular. In MATLAB the lowest eigenvalue may be found by
D = eigs(A,1,'smallestreal')
However, in my case MATLAB couldn't perform this. Alternatively, you can try searching around zero:
D = eigs(A,1,0)
This, however, only finds the eigenvalue closest to zero. Even if you make a loop as indicated in my original question above, you are not guaranteed to actually find the lowest one. And the precision of the eigenvalues of a nearly singular matrix seems to be low in some cases.
Condition number: MATLAB's cond returns the condition number of the matrix:
C = cond(A)
which is the ratio of the largest singular value to the smallest. I hoped the transition would show up here, but this didn't work for me: I only got positive condition numbers even though I had negative eigenvalues, which makes sense, since singular values are always non-negative. A blow-up of the condition number near singularity may still be a usable indicator in other cases.

Is it possible for the determinant of a (256*256) matrix to be infinite?

I have 550 (256*1) feature vectors computed from (16*16) gray images.
When I compute the sample covariance of these vectors and then the determinant of the covariance matrix, the answer is Inf.
Is it possible for the determinant of a finite matrix with values in the finite range (0:255) to be infinite, or have I made a mistake somewhere?
In fact I want to do classification with Bayesian estimation; my distribution is Gaussian, and when I compute the determinant it is Inf, so the final answer (the likelihood) is zero.
Some part of my code:
Mean = mean(dataSet,2);                  % class mean (column vector)
MeanMatrix = Mean*ones(1,NoC);           % replicate the mean for each of the NoC samples
Xc = double(dataSet) - MeanMatrix;       % center the data at the origin
Sigma = (1/NoC) * (Xc*Xc');              % sample covariance matrix
Parameters(i).M = Mean';
Parameters(i).C = Sigma;
likelihoods(i) = (1/(2*pi*sqrt(det(Parameters(i).C)))) * exp(-0.5 * (double(X)-Parameters(i).M)' * inv(Parameters(i).C) * (double(X)-Parameters(i).M));
The variable i indexes my classes;
the variable X is my feature vector.
Can the determinant of such a matrix be infinite? No, it cannot.
Can it evaluate as infinite? Yes, definitely.
Here is an example of a matrix with a finite number of elements, none of them too big, whose determinant will rarely evaluate as a finite number:
det(rand(255)*255)
In your case, probably what is happening is that you have too few datapoints to produce a full-rank covariance matrix.
For instance, if you have N examples, each with dimension d, and N<d, then your d x d covariance matrix will not be full rank and will have a determinant of zero.
In this case, a matrix inverse (precision matrix) does not exist. However, attempting to compute the determinant of the inverse (by taking 1/|X'*X| = 1/0 -> Inf) will produce an infinite value.
One way to get around this problem is to set the covariance to X'*X+eps*eye(d), where eps is a small value. This technique corresponds to placing a weak prior distribution on elements of X.
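A minimal sketch combining that regularization with evaluating the Gaussian in the log domain, so neither det nor exp has to over- or underflow (the 1e-6 ridge value is an arbitrary placeholder, and X and Mean are assumed to be column vectors):
d = size(Sigma,1);                  % dimension, 256 here
SigmaReg = Sigma + 1e-6*eye(d);     % small ridge term to guarantee full rank
R = chol(SigmaReg);                 % SigmaReg = R'*R
xc = double(X) - Mean;              % centered feature vector (column)
z = R' \ xc;                        % so that z'*z = xc'*inv(SigmaReg)*xc
logLik = -0.5*(d*log(2*pi) + 2*sum(log(diag(R))) + z'*z);
lik = exp(logLik);                  % may underflow; prefer comparing logLik values directly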
No, it is not possible in exact arithmetic: the matrix may be singular, but a matrix with finite elements, however large, always has a finite determinant.

How to calculate the squared inverse of a matrix in Matlab

I have to calculate:
gamma=(I-K*A^-1)*OLS;
where I is the identity matrix, K and A are diagonal matrices of the same size, and OLS is the ordinary least squares estimate of the parameters.
I do this in Matlab using:
gamma=(I-A\K)*OLS;
However I then have to calculate:
gamma2=(I-K^2*A^-2)*OLS;
I calculate this in Matlab using:
gamma2=(I+A\K)*(I-A\K)*OLS;
Is this correct?
Also I just want to calculate the variance of the OLS parameters:
The formula is simple enough:
Var(B)=sigma^2*(Delta)^-1;
Where sigma is a constant and Delta is a diagonal matrix containing the eigenvalues.
I tried doing this by:
Var_B=Delta\sigma^2;
But it comes back with an error saying the matrix dimensions must agree.
Please can you tell me how to calculate Var(B) in Matlab, as well as confirming whether or not my other calculations are correct.
In general, matrix multiplication does not commute, which makes A^2 - B^2 not equal to (A+B)*(A-B). However, your case is special, because one of the factors is the identity matrix, which commutes with everything: (I + M)*(I - M) = I - M^2 for any square M. So your method for finding gamma2 is valid.
Var_B=Delta\sigma^2 is not a valid mldivide expression; see the documentation. Try Var_B=sigma^2*inv(Delta). The function inv returns the matrix inverse. Although inv could also be used in your expressions for gamma and gamma2, the \ operator is recommended there instead, for better accuracy and faster computation.
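A small self-contained sketch of the three computations, with made-up diagonal matrices purely for illustration (sizes and values below are placeholders):
n = 5;
A = diag(1 + rand(n,1));          % placeholder diagonal matrix
K = diag(rand(n,1));              % placeholder diagonal matrix
OLS = rand(n,1);                  % placeholder OLS estimate
I = eye(n);
gamma  = (I - A\K) * OLS;         % (I - K*A^-1)*OLS, since diagonal A and K commute
gamma2 = (I - (A\K)^2) * OLS;     % (I - K^2*A^-2)*OLS, same as (I+A\K)*(I-A\K)*OLS
sigma = 0.5;                      % placeholder noise level
Delta = diag(1 + rand(n,1));      % placeholder diagonal eigenvalue matrix
Var_B = sigma^2 * inv(Delta);     % variance of the OLS parameters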

Determinant of a positive semi definite matrix

Is it possible that the determinant of a positive semi-definite matrix is equal to 0? It is coming out as zero in my case. I have a diagonal matrix with diagonal elements non zero. When I try to calculate the determinant of this matrix, it comes out as 0. Why is it so?
This is the reason why computing the determinant is never a good idea. Yeah, I know. Your book, your teacher, or your boss told you to do so. They were probably wrong. Why? Determinants are poorly scaled beasts. Even if you compute the determinant efficiently (many algorithms fail to do even that) you don't really want a determinant most of the time.
Consider this simple positive definite matrix.
A = eye(1000);
What is the determinant? I need not even bother. It is 1. But, if you insist...
det(A)
ans =
1
OK, so that works. How about if we simply multiply that entire matrix by a small constant, 0.1 for example. What is the determinant? You might say there is no reason to bother, as we already know the determinant. It must be just det(A)*0.1^1000, so 1e-1000.
det(A*0.1)
ans =
0
What did we do wrong here? Where this failed is that we forgot we were working in floating point arithmetic. Since the dynamic range of a double in MATLAB goes down only to essentially
realmin
ans =
2.2250738585072e-308
then smaller numbers turn into zero - they underflow. Anyway, most of the time when we compute a determinant, we are doing so for the wrong reasons anyway. If they want you to test to see if a matrix is singular, then use rank or cond, not det.
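For instance, continuing the example above, rank and cond are unaffected by the scaling that drives det to zero:
A = 0.1*eye(1000);
rank(A)     % 1000: full rank, so not singular
cond(A)     % 1: perfectly conditioned, even though det(A) evaluates to 0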
By definition, a positive semi-definite matrix may have eigenvalues equal to zero, so its determinant can be zero.
Now, I can't quite see what you mean by the sentence
I have a diagonal matrix with diagonal elements non zero. When I try to calculate the ...
If the matrix is diagonal and all elements on the diagonal are non-zero, the determinant should be non-zero. If you are calculating it on a computer, beware of underflow.
You may consider the sum of logarithms instead of the product of the diagonal elements.
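A minimal sketch of that idea for a diagonal matrix with small but strictly positive entries (the size and values are arbitrary):
D = diag(1e-3*rand(500,1) + 1e-4);   % 500 small, strictly positive diagonal entries
det(D)                               % product of ~500 values around 1e-3: underflows to 0
logdetD = sum(log(diag(D)))          % finite and informative log-determinant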