Calculating the sign of an NxN determinant - discrete-mathematics

Is there a more efficient way to determine the sign (negative or positive or zero) of a determinant than calculating the full value of the determinant and comparing it with zero?

There are methods that can approximate the determinant of an integer matrix faster than computing its exact value. These methods are usually used to compute the sign, since there is a high probability that the result is correct. See this paper for many more details.
However, AFAIK there is no exact method for computing the sign of a determinant faster than computing the value itself.

Related

Matlab Zero Tolerance in rank function

I am wondering if there is a technical or theoretical reason why MATLAB's rank function uses max(size(A))*eps(norm(A)) as its zero tolerance. Can you please provide some intuition?
Thank you!
The following answer is not based on proper mathematical reasoning; it is just speculation (as you were asking for intuition):
norm(A) is the order of magnitude of the matrix entries.
eps(norm(A)) is thus the accuracy that the floating point representation of the matrix entries typically has.
Now, suppose you add N numbers that should theoretically add up to zero, but each of them carries an error of about eps ... I think we would expect an error on the order of sqrt(N) * eps in the result.
Then, given that the algorithm that computes the rank performs on the order of N^2 operations on the matrix entries (where N is the matrix size) to produce a number that is checked against zero, the error we would expect is about sqrt(N^2) * eps(norm(A)) = N * eps(norm(A)), which is what you stated in your question.
What I don't know is whether the algorithm MATLAB actually uses really has complexity N^2.
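For reference, the documentation describes rank as counting the singular values larger than that tolerance, so a rough sketch of the documented behaviour (not necessarily the actual implementation; magic(4) is just an illustrative rank-deficient matrix) would be:
A = magic(4);                        % example matrix, known to have rank 3
s = svd(A);                          % singular values of A
tol = max(size(A)) * eps(norm(A));   % the default zero tolerance from the question
r = sum(s > tol)                     % gives 3, the same as rank(A)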

MATLAB: Get small eigenvalues from `eigs` in sorted order

For example, eigs(A,k,'sm') returns the k smallest-magnitude eigenvalues. However, eigs does not take care of the sign. Edit: eigs(A,k,'sr') takes care of it.
Say A is a 500-by-500 sparse matrix. Without getting all eigenvalues as eig does, how can I efficiently get the 3 smallest eigenvalues (not by magnitude) and the corresponding eigenvectors from eigs in sorted order?
This could easily be done by getting all the eigenvalues from eig and sorting them, but I cannot use eig for some reasons: it takes a long time and huge memory to convert to a full matrix and compute all the eigenvalues.
Edit: This can also be done with eigs(A,k,'sr') and doing the sorting myself. But is there a faster method or an option in eigs to do so?
It should not do that unless there is a syntax error or all the eigenvalues of your matrix have positive real part. The following gives the correct negatively signed, smallest-real-part (I guess that's what you mean by small) eigenvalues on R2016a. Note that the smallest eigenvalues are complex conjugates, and one pair is represented only by its member with negative imaginary part.
A = sprand(100,100,0.5);
[V,D] = eigs(A,3,'sr')
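If you also want them ordered by real part, a small sketch along these lines should do (using the V and D from the call above):
[~, idx] = sort(real(diag(D)));   % ascending real part
D = D(idx, idx);                  % reorder the eigenvalue matrix
V = V(:, idx);                    % reorder the eigenvectors to match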

How to convert a matrix to a diagonally dominant matrix using pivoting in MATLAB

Hi, I am trying to solve a linear system of the following type:
A*x=b,
where A is the coefficient matrix,
x is the vector of unknowns and
b is the right-hand-side vector.
The coefficient matrix (A) is an n-by-n sparse matrix, with zeros even on the diagonal. In order to solve this system accurately I am using an iterative method in MATLAB called bicgstab (biconjugate gradients stabilized method).
This coefficient matrix (A) has
det(A) = -4.1548e-05 and rcond(A) = 1.1331e-04.
Therefore the matrix is ill-conditioned. I first tried to perform a scaling, and the results were:
det(A) = -1.2612e+135 but rcond(A) = 5.0808e-07...
Therefore the matrix is still ill-conditioned... I checked, and the sum of the absolute values of the off-diagonal elements was 163.60 while the sum of the absolute values of the diagonal elements was 32.49... Therefore the coefficient matrix is not diagonally dominant and will not converge using bicgstab...
I am looking for someone who can help me apply pivoting to the coefficient matrix (A) so that it becomes diagonally dominant. Or any advice on how to solve this problem...
Thanks for the help.
First, there are quite a few things to note here:
Don't use the determinant to estimate the "amount of singularity" of your matrix. The determinant is the product of all the eigenvalues of your matrix, so its scaling can be wildly misleading compared to a much better measure like the condition number, which leads to the next point:
Your conditioning (according to rcond) isn't that bad; are you working with single or double precision? Large problems routinely have condition numbers in this range and are still quite solvable, but of course this depends on a very complicated interaction of many factors, of which the condition number plays only a small part. This leads to another complicated point:
Diagonal dominance may not help you at all here. BiCGStab, as far as I know, does not require diagonal dominance for its convergence, and I don't think diagonal dominance is even known to help it. Diagonal dominance is usually an assumption made by other iterative methods such as the Jacobi method or Gauss-Seidel. In fact, the convergence behavior of BiCGStab is not very well understood at all; it is usually used when memory is a severe constraint and conjugate gradients is not applicable.
If you are really interested in using a Krylov method (such as BiCGStab) to solve your problem, then you generally need to have more understanding of where your matrix is coming from so that you can choose a sensible preconditioner.
So this calls for a bit more information. Do you know more about this matrix? Does it arise from some kind of physical problem? Do you know, for example, whether it is symmetric or positive definite? (I will assume not both, because you are not using CG.)
Let me end with some actionable advice which is very generic, and so not necessarily optimal:
If memory is not an issue, consider using restarted GMRES instead of BiCGStab. My experience is that GMRES has much more robust convergence.
Try an approximate factorization preconditioner such as ILU. MATLAB has a function for this built in.
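A very rough sketch of that advice, assuming your A and b from above; the ILU options, restart length, tolerance and iteration limit are placeholders you would need to tune:
setup.type = 'ilutp';               % ILU with threshold dropping and pivoting
setup.droptol = 1e-3;               % drop tolerance (problem dependent)
[L, U] = ilu(A, setup);             % incomplete LU factors used as a preconditioner
restart = 50;                       % GMRES restart length
tol = 1e-8;                         % relative residual tolerance
maxit = 200;                        % maximum number of outer iterations
x = gmres(A, b, restart, tol, maxit, L, U);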

Determinant of a positive semi definite matrix

Is it possible that the determinant of a positive semi-definite matrix is equal to 0? It is coming out to be zero in my case. I have a diagonal matrix with diagonal elements non-zero. When I try to calculate the determinant of this matrix it comes out to be 0. Why is that?
This is the reason why computing the determinant is never a good idea. Yeah, I know. Your book, your teacher, or your boss told you to do so. They were probably wrong. Why? Determinants are poorly scaled beasts. Even if you compute the determinant efficiently (many algorithms fail to do even that) you don't really want a determinant most of the time.
Consider this simple positive definite matrix.
A = eye(1000);
What is the determinant? I need not even bother. It is 1. But, if you insist...
det(A)
ans =
1
OK, so that works. How about if we simply multiply that entire matrix by a small constant, 0.1 for example? What is the determinant? You might say there is no reason to bother, as we already know the determinant. It must be just det(A)*0.1^1000, so 1e-1000.
det(A*0.1)
ans =
0
What did we do wrong here? Where this failed is that we forgot we were working in floating-point arithmetic. Since the dynamic range of a double in MATLAB goes down only to essentially
realmin
ans =
2.225073858507201e-308
then smaller numbers turn into zero - they underflow. Anyway, most of the time when we compute a determinant, we are doing so for the wrong reasons. If you want to test whether a matrix is singular, then use rank or cond, not det.
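For example, on the scaled matrix from above, something like this (the comments give the values I would expect):
B = 0.1 * eye(1000);   % the scaled identity from the example above
rank(B)                % 1000: full rank, so B is clearly nonsingular
cond(B)                % 1: perfectly conditioned
det(B)                 % 0: underflows, and is therefore misleading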
By definition, a positive semi-definite matrix may have eigenvalues equal to zero, so its determinant can be zero.
Now, I can't see what you mean by the sentence:
I have a diagonal matrix with diagonal elements non-zero. When I try to calculate the ...
If the matrix is diagonal, and all the elements on the diagonal are non-zero, the determinant should be non-zero. If you are calculating it on a computer, beware of underflow.
You may consider the sum of logarithms instead of the product of the diagonal elements.
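As a rough sketch of that idea (D being your diagonal matrix, with positive diagonal entries assumed):
d = diag(D);             % diagonal entries of D
logdetD = sum(log(d));   % log(det(D)), computed without underflow
% det(D) itself equals exp(logdetD), which can underflow to 0 in double
% precision even when logdetD is a perfectly ordinary number.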

Finding the smallest negative eigenvalue if the eigenvalues are complex

I want to find, in MATLAB, the smallest negative eigenvalue from the complex eigenvalues of a square 5-by-5 matrix whose entries are all complex. The answer should be a real value. How can I do this in MATLAB?
Is this what you need?
min(real(eig(A)));
You cannot compare complex numbers; at most, you can compare their magnitudes. So min(abs(eig(A))) is the right answer. If you need the negative of this value, just tack on a negative sign.
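To illustrate both suggestions (A below is just a placeholder complex 5-by-5 matrix):
A = rand(5) + 1i*rand(5);              % placeholder complex matrix
e = eig(A);                            % its (complex) eigenvalues
mostNegativeRealPart = min(real(e));   % the first suggestion above
smallestMagnitude = min(abs(e));       % the second suggestion (always >= 0)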