Determinant of a positive semi-definite matrix - MATLAB

Is it possible that the determinant of a positive semi-definite matrix is equal to 0? It is coming out to be zero in my case. I have a diagonal matrix with non-zero diagonal elements, yet when I try to calculate its determinant it comes out to be 0. Why is that?

This is the reason why computing the determinant is never a good idea. Yeah, I know. Your book, your teacher, or your boss told you to do so. They were probably wrong. Why? Determinants are poorly scaled beasts. Even if you compute the determinant efficiently (many algorithms fail to do even that) you don't really want a determinant most of the time.
Consider this simple positive definite matrix.
A = eye(1000);
What is the determinant? I need not even bother. It is 1. But, if you insist...
det(A)
ans =
1
OK, so that works. How about if we simply multiply that entire matrix by a small constant, 0.1 for example. What is the determinant? You might say there is no reason to bother, as we already know the determinant. It must be just det(A)*0.1^1000, so 1e-1000.
det(A*0.1)
ans =
0
What did we do wrong here? Where this failed is that we forgot we were working in floating point arithmetic. Since the dynamic range of a double in MATLAB goes down only to essentially
realmin
ans =
2.2250738585072e-308
then smaller numbers turn into zero - they underflow. In any case, most of the time when we compute a determinant, we are doing so for the wrong reasons. If the goal is to test whether a matrix is singular, then use rank or cond, not det.

By definition, a positive semi-definite matrix may have eigenvalues equal to zero, so its determinant can be zero.
Now, I can't see what you mean by the sentence,
I have a diagonal matrix with diagonal elements non zero. When I try to calculate the ...
If the matrix is diagonal and all elements on the diagonal are non-zero, the determinant should be non-zero. If you are calculating it on a computer, beware of underflow.
You may consider the sum of logarithms instead of the product of the diagonal elements.
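As a minimal sketch (assuming your diagonal matrix is stored in a variable D), the same quantity can be accumulated in log space instead:
d = diag(D);                    % extract the diagonal entries
logAbsDet = sum(log(abs(d)));   % log(|det(D)|); safe from underflow and overflow
signDet   = prod(sign(d));      % sign of det(D)
% det(D) itself would be signDet*exp(logAbsDet), but exp() can still underflow,
% so compare logAbsDet against log(realmin) rather than reconstructing det(D).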

Related

Smallest eigenvalue for large nearly singular matrix

In MATLAB I have a real, symmetric n x n matrix A, where n > 6000. Even though A is positive definite, it is close to singular. A goes from being positive definite to singular to indefinite as a particular variable is changed. I need to determine when A becomes singular. I don't trust the determinant, so I am looking at the eigenvalues, but I don't have the memory (or time) to calculate all n eigenvalues, and I am only interested in the smallest one - in particular, when it changes sign from positive to negative. I've tried
D = eigs(A,1,'smallestabs')
by which I lose the sign of the eigenvalue, and by
D = eigs(A,1,'smallestreal')
but MATLAB cannot get the lowest eigenvalue to converge. Then I've tried defining a shift value like
for i = 1:10
    if i == 1
        D(i) = eigs(A,1,0)
    else
        D(i) = eigs(A,1,D(i-1))
    end
end
where I look in the range of the last lowest eigenvalue found. However, the eigenvalues seem to behave oddly, and I am not sure if I actually find the true lowest one.
So, any ideas on how to
reliably find the smallest eigenvalue with 'eigs', or
determine in some other way when A becomes singular (as the variable in A changes)
are highly appreciated!
Solution
I seem to have solved my particular problem. MATLAB's chol can return a second output p, which is zero if the matrix is positive definite. Thus, performing
[~,p] = chol(A)
in my case detects the transition from positive definite to not positive definite (first singular, then indefinite), and is also computationally very efficient. The documentation for chol also prefers it over eigs for checking positive definiteness. However, there seems to be some confusion about the result when the matrix is only positive semi-definite, so be careful in that case.
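For illustration, a hedged sketch of the pattern (the parameter range tValues and the function buildA that assembles A for a given value are placeholders, not part of the original question):
tValues = linspace(0, 1, 100);            % placeholder parameter sweep
for k = 1:numel(tValues)
    A = buildA(tValues(k));               % placeholder: assemble A for this parameter
    [~,p] = chol(A);                      % p == 0 only if A is positive definite
    if p > 0
        fprintf('A stops being positive definite near t = %g\n', tValues(k));
        break
    end
end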
Alternative solutions
I've come across several possible solutions which I'd like to state:
Determinant:
For a positive definite matrix the determinant is positive, while for an indefinite matrix it may be negative - this sign change could indicate the transition. Generally, though, determinants of large, nearly singular matrices are not recommended.
Eigenvalues: For a positive definite matrix the real parts of all eigenvalues are positive. If at least one eigenvalue is zero the matrix is singular, and if one becomes negative while the rest are positive it is indefinite. Detecting the sign change of the lowest eigenvalue indicates the point where the matrix becomes singular. In MATLAB the lowest eigenvalue may be found by
D = eigs(A,1,'smallestreal')
However, in my case MATLAB couldn't perform this. Alternatively, you can try searching around zero:
D = eigs(A,1,0)
This, however, only finds the eigenvalue closest to zero. Even if you make a loop as indicated in my original question above, you are not guaranteed to actually find the lowest one. And the precision of the eigenvalues for a nearly singular matrix seems to be low in some cases.
Condition number: MATLAB's cond returns the condition number of the matrix:
C = cond(A)
which is the ratio of the largest singular value to the smallest (for a symmetric matrix, the largest to the smallest eigenvalue in absolute value). I had hoped a sign change here would mark the transition, but this didn't work for me: since singular values are non-negative, the condition number stays positive even when the matrix has negative eigenvalues, so it cannot by itself flag the transition.
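A further option, not from the answers above but building on the chol trick, is a sketch of a shift-and-bisect test: chol(A - sigma*I) succeeds exactly when every eigenvalue of A exceeds sigma, so bisecting on sigma brackets the smallest eigenvalue without computing the full spectrum. The initial bracket is an assumption you would set from your problem.
n  = size(A,1);
lo = -1;  hi = 1;                      % assumed initial bracket for the smallest eigenvalue
for iter = 1:50
    sigma = (lo + hi)/2;
    [~,p] = chol(A - sigma*speye(n));  % p == 0 iff all eigenvalues of A are > sigma
    if p == 0
        lo = sigma;                    % smallest eigenvalue lies above sigma
    else
        hi = sigma;                    % smallest eigenvalue lies at or below sigma
    end
end
lambdaMin = (lo + hi)/2;               % approximate smallest eigenvalue of A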

Determinant is showing infinity instead of zero! Why?

This is the MATLAB code I wrote for a homework problem. After multiplying A by its transpose, the resulting square matrix should have determinant zero, according to all my classmates, whose (different) codes gave them zero. Why is my code giving the determinant of c and d as infinity instead of zero?
A = rand(500,1500);
b = rand(500,1);
c = (A.')*A;
detc = det(c);
cinv = inv((A.')*A);
d = A*(A.');
detd = det(d);
dinv = inv(A*(A.'));
x1 = (inv((A.')*A))*((A.')*b);
x2 = A.'*((inv(A*(A.')))*b);
This behavior is explained in the Limitations section of the det documentation and exemplified in the Find Determinant of Singular Matrix subsection, where it is stated:
The determinant of A is quite large despite the fact that A is singular. In fact, the determinant of A should be exactly zero! The inaccuracy of d is due to an aggregation of round-off errors in the MATLAB® implementation of the LU decomposition, which det uses to calculate the determinant.
That said, in this instance, you can produce your desired result by using the m-code implementation given on that same page but sorting the diagonal elements of U in an ascending manner. Consider the sample script:
clc();
clear();
A = rand(500,1500);
b = rand(500,1);
c = (A.')*A;
[L,U] = lu(c);
% Since det(L) is always (+/-)1, it doesn't impact anything
diagU = diag(U);
detU1 = prod(diagU);
detU2 = prod(sort(diagU,'descend'));
detU3 = prod(sort(diagU,'ascend'));
fprintf('Minimum: %+9.5e\n',min(abs(diagU)));
fprintf('Maximum: %+9.5e\n',max(abs(diagU)));
fprintf('Determinant:\n');
fprintf('\tNo Sort: %g\n' ,detU1);
fprintf('\tDescending Sort: %g\n' ,detU2);
fprintf('\tAscending Sort: %g\n\n',detU3);
This produces the output:
Minimum: +1.53111e-13
Maximum: +1.72592e+02
Determinant:
No Sort: Inf
Descending Sort: Inf
Ascending Sort: 0
Notice that the direction of the sort matters, and that no-sorting gives Inf since a true 0 doesn't exist on the diagonal. The descending sort sees the largest values multiplied first, and apparently, they exceed realmax and are never multiplied by a true 0, which would generate a NaN. The ascending sort clumps together all of the near-zero diagonal values with very few large negative values (in truth, a more robust method would sort based on magnitude, but that was not done here), and their multiplication generates a true 0 (meaning that the value falls below the smallest denormalized number available in IEEE-754 arithmetic) that produces the "correct" result.
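Relatedly, if only the magnitude of the determinant is of interest, a small sketch: summing logarithms of the diagonal of U (same c as above) sidesteps overflow and underflow entirely.
[L,U,P] = lu(c);                        % P contributes only a sign flip to det
logAbsDet = sum(log(abs(diag(U))));     % log(|det(c)|), finite even when det over/underflows
fprintf('log(|det(c)|) = %g\n', logAbsDet);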
All that written, and as others have implied, I'll quote original MATLAB developer and MathWorks co-founder Cleve Moler:
[The determinant] is useful in theoretical considerations and hand calculations, but does not provide a sound basis for robust numerical software.
Ok. So the fact that det(A'*A) is not zero is not a good indication of the (non-)singularity of A'*A.
The determinant depends on the scaling, and a matrix that is clearly non-singular can have a very small determinant. For instance, the matrix
(1/2) * I_n
where I_n is the n x n identity, has determinant (1/2)^n, which converges (quickly) to 0 as n goes to infinity. But (1/2) * I_n is not at all singular.
For this reason, a better way to check whether a matrix is singular is the condition number.
In your case, after doing some tests:
>> A = rand(500, 1500) ;
>> det(A'*A)
ans =
Inf
You can see that the (computed) determinant is clearly non-zero. But this is actually not surprising, and it should not really bother you. The determinant is fairly hard to compute accurately, so yes, this is just rounding error. If you want a better approximation, you can do the following:
>> s = eig(A'*A) ;
>> prod(s)
ans =
0
There, you see it is closer to zero.
The condition number, on the other hand, is a much better estimator of the (non-)singularity of a matrix. Here, it is
>> cond(A'*A)
ans =
1.4853e+20
And since it is much larger than 1e+16, the matrix is numerically singular. The 1e+16 threshold comes from double-precision floating point arithmetic: eps is about 2.2e-16, so a condition number on the order of 1/eps means the matrix is singular to working precision.
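In practice this becomes a one-line test (a sketch; the 1/eps threshold is a common rule of thumb, not a hard limit):
if cond(A'*A) > 1/eps     % eps is about 2.2e-16 for doubles
    warning('A''*A is singular to working precision');
end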
I think this is pretty much just a rounding problem. The Inf does not mean you are getting infinity as an answer; it's just that your determinant is really big and exceeded realmax. As Adiel said, A*A.' generates a symmetric matrix and should have a finite numerical value for its determinant. For example, set:
A=rand(5,15)
and you should find that det(A*A.') is just an ordinary finite number.
So how did your friends get a zero? Well, it's easy to get 0 or Inf for the determinant of large matrices (why you are doing this in the first place, I have no clue), so I think they are just running into the same or a similar rounding issue.

How to find the Cholesky factorization of a matrix?

I have n matrices and I want to find the square root of each, but my algorithm needs a Cholesky factorization. I am getting an error that the matrix is not positive definite. I converted the diagonal elements to real values, but I still get the same error. Is there any other way to find the Cholesky factorization of a matrix?
If your matrix is a long way from being positive definite, there's nothing you can do - the Cholesky factorization is based on the assumption that it is positive definite.
Often though, a matrix is basically positive definite, but due to some small numerical issue is very slightly non-symmetric. If this is the problem you're running into, you can force it (let's say it's called x) to be symmetric by saying x = (x+x')/2.
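As a concrete sketch of that fix (x stands for your matrix):
x = (x + x')/2;       % force exact symmetry; removes tiny round-off asymmetries
[R,p] = chol(x);      % p == 0 means the factorization succeeded and R'*R == x
if p > 0
    error('x is still not positive definite after symmetrizing');
end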
Hope that helps!

Issues with calculating the determinant of a matrix

I am trying to calculate the determinant of the inverse of a matrix. The inverse of the matrix exists. However, when I try to calculate the determinant of the inverse, it gives me an Inf value in MATLAB. What is the reason behind this?
Short answer: given A = inv(B), then det(A)==Inf may have two explanations:
an overflow during the numerical computation of the determinant,
one or more infinite elements in A.
In the first case your matrix is badly scaled, so that det(B) may underflow and det(A) overflow. Remember that det(a*B) == a^N * det(B), where a is a scalar and B is an N-by-N matrix.
In the second case (i.e. nnz(A == Inf) > 0), matrix B may be "singular to working precision".
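The first (scaling) case is easy to reproduce with a matrix that is perfectly conditioned but badly scaled (a minimal sketch):
B = 1e-2 * eye(200);   % well conditioned, just badly scaled
det(B)                 % (1e-2)^200 = 1e-400 underflows to 0
A = inv(B);            % A = 1e2 * eye(200), computed without trouble
det(A)                 % (1e2)^200 = 1e+400 overflows to Inf
cond(B)                % 1, i.e. nowhere near singular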
PS:
A matrix is nearly singular if it has a large condition number. (A small determinant has nothing to do with singularity, since the magnitude of the determinant itself is affected by scaling.)
A matrix is singular to working precision if it has a zero pivot in the Gaussian elimination: when computing the inverse, MATLAB has to calculate 1/0, which returns Inf.
In fact, MATLAB does not trap overflow and division-by-zero exceptions, so, following IEEE 754, the Inf value is propagated.

det of a matrix returns 0 in MATLAB

I have been given a very large matrix (I cannot change its values) and I need to calculate the inverse of this (covariance) matrix.
Sometimes I get a warning saying
Matrix is close to singular or badly scaled.
Results may be inaccurate
In these situations I see that det returns 0.
Before calculating the inverse (of a covariance matrix) I want to check the value of the determinant and do something like this:
covarianceFea = cov(fea_class);
covdet = det(covarianceFea);
if (covdet == 0)
    covdet = covdet + .00001;
    % calculate the covariance using this new det
end
Is there any way to use the new det and then use this to calculate the inverse of the covariance matrix?
Sigh. Computation of the determinant to determine singularity is a ridiculous thing to do, utterly so. Especially so for a large matrix. Sorry, but it is. Why? Yes, some books tell you to do it. Maybe even your instructor.
Analytical singularity is one thing. But how about numerical determination of singularity? Unless you are using a symbolic tool, MATLAB uses floating point arithmetic. This means it stores numbers as floating point, double precision values. Those numbers cannot be smaller in magnitude than
>> realmin
ans =
2.2251e-308
(Actually, MATLAB goes a bit lower than that, in terms of denormalized numbers, which can go down to approximately 1e-323.) See that when I try to store a number smaller than that, MATLAB thinks it is zero.
>> A = 1e-323
A =
9.8813e-324
>> A = 1e-324
A =
0
What happens with a large matrix? For example, is this matrix singular:
M = eye(1000);
Since M is an identity matrix, it is fairly clearly non-singular. In fact, det does suggest that it is non-singular.
>> det(M)
ans =
1
But, multiply it by some constant. Does that make it singular? NO! Of course not. But try it anyway.
>> det(M*0.1)
ans =
0
Hmm. That is odd. MATLAB tells me the determinant is zero. But we know that the determinant is 1e-1000. Oh, yes. Gosh, 1e-1000 is smaller, by a considerable amount, than the smallest number that I just showed you MATLAB can store as a double. So the determinant underflows, even though it is obviously non-zero. Is the matrix singular? Of course not. But does the use of det fail here? Of course it does, and this is completely expected.
Instead, use a good tool for the determination of singularity. Use a tool like cond, or rank. For example, can we fool rank?
>> rank(M)
ans =
1000
>> rank(M*.1)
ans =
1000
See that rank knows this is a full rank matrix, regardless of whether we scale it or not. The same is true of cond, computing the condition number of M.
>> cond(M)
ans =
1
>> cond(M*.1)
ans =
1
Welcome to the world of floating point arithmetic. And oh, by the way, forget about det as a tool for almost any computation using floating point arithmetic. It is a poor choice almost always.
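Applied to the covariance matrix in the question, that means checking the conditioning rather than the determinant before inverting. A sketch, reusing the variable names from the question (the 1/eps threshold is a rule-of-thumb assumption):
covarianceFea = cov(fea_class);
if cond(covarianceFea) > 1/eps
    warning('Covariance matrix is singular to working precision; inv() will be unreliable');
else
    invCov = inv(covarianceFea);
end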
Woodchips has given you a very good explanation of why you shouldn't use the determinant. This seems to be a common misconception, and your question is closely related to another question on inverting matrices: Is there a fast way to invert a matrix in Matlab?, where the OP decided that because the determinant of his matrix was 1, it was definitely invertible! Here's a snippet from my answer:
Rather than det(A) = 1, it is the condition number of your matrix that dictates how accurate or stable the inverse will be. Note that det(A) = λ_1*λ_2*...*λ_n, the product of the eigenvalues. So just setting λ_1 = M, λ_n = 1/M, and λ_i = 1 for all other i will give you det(A) = 1. However, as M → ∞, cond(A) = M^2 → ∞ and λ_n → 0, meaning your matrix is approaching singularity and there will be large numerical errors in computing the inverse.
You can test this in MATLAB with the following simple example:
A = eye(10);
A(1,1) = 1e15; A(2,2) = 1e-15;   %# put eigenvalues M = 1e15 and 1/M = 1e-15 on the diagonal
%# calculate determinant
det(A)
ans =
1
%# calculate condition number
cond(A)
ans =
1.0000e+30
In such a scenario, calculating an inverse is not a very good idea. If you just have to do it, I would suggest using this to increase display precision:
format long;
Another suggestion would be to take an SVD of the matrix and tinker with the singular values there:
A = U*Σ*V'
inv(A) = V*inv(Σ)*U'
Σ is a diagonal matrix, and you will see that one of its diagonal entries is close to 0. Try playing around with this value if you want some sort of approximation.
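A hedged sketch of that SVD idea in MATLAB (the tolerance tol is an assumption you would tune; the built-in pinv does essentially the same thing):
[U,S,V] = svd(A);
s = diag(S);
tol  = max(size(A)) * eps(max(s));       % drop singular values below this threshold
keep = s > tol;
approxInv = V(:,keep) * diag(1./s(keep)) * U(:,keep)';   % truncated pseudo-inverse
% For comparison: pinv(A, tol) computes the same thing.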