I want to find, in MATLAB, the smallest negative eigenvalue among the complex eigenvalues of a 5x5 square matrix whose entries are all complex. The answer should be a real value. How can I do this in MATLAB?
Is this what you need?
min(real(eig(A)));
You cannot compare complex numbers; at most, you can compare their magnitudes. So, min(abs(eig(A))) is the right answer. If you need the negative of this value, just tack on a negative sign.
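If by "smallest negative" you mean the most negative real part, here is a minimal sketch (the random 5x5 matrix A is only an illustrative stand-in for your data):
A = rand(5) + 1i*rand(5);         % example 5x5 complex matrix
ev = eig(A);                      % complex eigenvalues
re = real(ev);                    % their real parts
smallestNeg = min(re(re < 0))     % most negative real part; empty if no real part is negative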
For example, eigs(A,k,'sm') returns the k smallest-magnitude eigenvalues. However, eigs does not take care of the sign. Edit: eigs(A,k,'sr') takes care of it.
Say A is a 500-by-500 sparse matrix. Without computing all eigenvalues as eig does, how do I efficiently get the 3 smallest eigenvalues (by value, not by magnitude) and the corresponding eigenvectors from eigs, in sorted order?
This could be done easily by computing all eigenvalues with eig and sorting them, but I cannot use eig here because converting to a full matrix and computing all eigenvalues takes a long time and a huge amount of memory.
Edit: This can also be done with eigs(A,k,'sr') and sorting the result myself. But is there a faster method, or an option in eigs, to do so?
It should not do that unless there is a syntax error or your matrix has all eigenvalues with positive real part. The following gives the correct, negatively signed, smallest-real-part eigenvalues (I guess that's what you mean by smallest) on R2016a. Note that the smallest eigenvalues come in complex-conjugate pairs, and one pair is reported only through its member with negative imaginary part.
A = sprand(100,100,0.5);          % random 100-by-100 sparse test matrix
[V,D] = eigs(A,3,'sr')            % 3 eigenvalues with smallest real part, and their eigenvectors
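If the order matters, one option (just a sketch; as far as I know eigs does not guarantee any particular ordering of its output) is to sort the returned eigenvalues by real part and reorder the eigenvectors to match:
[~,idx] = sort(real(diag(D)));    % ascending by real part
D = D(idx,idx);                   % reorder the eigenvalues
V = V(:,idx);                     % reorder the matching eigenvectors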
Is there a more efficient way to determine the sign (negative or positive or zero) of a determinant than calculating the full value of the determinant and comparing it with zero?
There are methods that can approximate the determinant of an integer matrix faster than computing the exact value. These methods are usually used to compute the sign, since there is a high probability of a correct result. See this paper for many more details.
However, AFAIK there is no exact method of computing the sign of a determinant faster than computing the value itself.
I have a big (400K*400K) sparse matrix and I need to calculate the largest eigenvalue of A'*A.
The problem is that MATLAB can't even calculate A' due to memory problems.
I also tried [a,b,c] = find(A) and then transposing by building the transposed sparse matrix, but although find() works, the sparse construction doesn't.
Is there a nice solution for this? It can be either a MATLAB function or another technique for calculating the largest eigenvalue of this kind of product.
Thanks.
If A is sparse, see this thread and some discussion in this documentation (basically do it part by part) for a way to transpose it etc.
But now you need to calculate B = A'*A. The question is, is it still sparse? Assuming it is, there shouldn't be a problem proceeding with the technique mentioned in the link above.
Then after you've obtained B=A'*A, use eigs
eigs(B,1)
to obtain the largest magnitude eigenvalue.
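If even forming B = A'*A is too expensive, a hedged alternative (a sketch, not tested at the 400K-by-400K scale): the largest eigenvalue of A'*A equals the square of the largest singular value of A, so svds can work on A directly; or pass a function handle to eigs so the product matrix is never stored.
lambdaMax = svds(A,1)^2;                        % largest eigenvalue of A'*A via the top singular value of A
n = size(A,2);
lambdaMax2 = eigs(@(x) A'*(A*x), n, 1, 'lm');   % same quantity, applying A'*A only through matrix-vector products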
Is it possible that the determinant of a positive semi-definite matrix is equal to 0? It is coming out as zero in my case. I have a diagonal matrix with non-zero diagonal elements, yet when I try to calculate its determinant it comes out as 0. Why is that?
This is the reason why computing the determinant is never a good idea. Yeah, I know. Your book, your teacher, or your boss told you to do so. They were probably wrong. Why? Determinants are poorly scaled beasts. Even if you compute the determinant efficiently (many algorithms fail to do even that) you don't really want a determinant most of the time.
Consider this simple positive definite matrix.
A = eye(1000);
What is the determinant? I need not even bother. It is 1. But, if you insist...
det(A)
ans =
1
OK, so that works. How about if we simply multiply that entire matrix by a small constant, 0.1 for example. What is the determinant? You might say there is no reason to bother, as we already know the determinant. It must be just det(A)*0.1^1000, so 1e-1000.
det(A*0.1)
ans =
0
What did we do wrong here? Where this failed is that we forgot we were working in floating-point arithmetic. Since the dynamic range of a double in MATLAB goes down only to essentially
realmin
ans =
2.2250738585072e-308
then smaller numbers turn into zero: they underflow. Most of the time when we compute a determinant, we are doing so for the wrong reasons anyway. If you need to test whether a matrix is singular, use rank or cond, not det.
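As a quick illustration, continuing the eye(1000) example above, rank and cond are unaffected by the scaling that broke det:
rank(A*0.1)        % still 1000: the scaled identity is full rank
cond(A*0.1)        % still 1: perfectly conditioned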
By definition, a positive semi-definite matrix may have eigenvalues equal to zero, so its determinant can be zero.
Now, I am not sure what you mean by the sentence,
I have a diagonal matrix with diagonal elements non zero. When I try to calculate the ...
If the matrix is diagonal and all elements on the diagonal are non-zero, the determinant should be non-zero. If you are calculating it on your computer, beware of underflow.
You might consider the sum of logarithms instead of the product of the diagonal elements, as sketched below.
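A minimal sketch of that idea, assuming M is your diagonal matrix with positive diagonal entries:
logDetM = sum(log(diag(M)))    % log(det(M)) stays representable even when det(M) underflows to 0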
From a simulation problem, I want to calculate determinants of complex square matrices on the order of 1000x1000 in MATLAB. Since the entries are values of Bessel functions, the matrices are not at all sparse.
Since I am interested in the change of the determinant with respect to some parameter (the energy of a sought eigenfunction in my case), I currently work around the problem by first searching for a rescaling factor over the studied range and then calculating the determinants:
result(k) = det(pre_factor*Matrix{k});
Now this is a very awkward solution and only works for matrix dimensions up to, say, 500x500.
Does anybody know a nice solution to the problem? Interfacing to Mathematica might work in principle but I have my doubts concerning feasibility.
Thank you in advance
Robert
Edit: I did not find a convenient solution to the calculation problem, since that would require switching to higher precision. Instead, I used the fact that
ln det M = trace ln M
which, when I differentiate it with respect to k, gives
A = trace(inv(M(k))*dM/dk)
So I at least had the change of the logarithm of the determinant with respect to k. From the physical background of the problem I could derive constraints on A, which in the end gave me a workaround valid for my problem. Unfortunately, I do not know whether such a workaround could be generalized.
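As an aside (not part of the original post), a common way to get log|det(M)| directly, with neither underflow nor overflow, is an LU factorization:
[L,U,P] = lu(M);                          % P*M = L*U, with L unit lower triangular
logAbsDet = sum(log(abs(diag(U))))        % log|det(M)|, since |det(P)| = |det(L)| = 1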
You should realize that when you multiply a matrix by a constant k, then you scale the determinant of the matrix by k^n, where n is the dimension of the matrix. So for n = 1000, and k = 2, you scale the determinant by
>> 2^1000
ans =
1.07150860718627e+301
This is of course a huge number, so you might expect that it should fail, since in double precision, MATLAB will only represent floating point numbers as large as realmax.
>> realmax
ans =
1.79769313486232e+308
There is no need to do all the work of recomputing that determinant, not that computing the determinant of a huge matrix like that is a terribly well-posed problem anyway.
If speed is not a concern, you may want to use det(e^A) = e^(tr A) and take as A some scaling constant times your matrix (so that A - I has spectral radius less than one).
EDIT: In MATLAB, the log of a matrix (logm) is calculated via triangularization. So it is better for you to compute the eigenvalues of your matrix and multiply them (or, better, add their logarithms). You did not specify whether your matrix is symmetric: if it is, finding the eigenvalues is easier than if it is not.
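A minimal sketch of that suggestion (assuming M is the matrix in question and only the logarithm of the determinant is needed):
ev = eig(M);                % all eigenvalues of M
logDetM = sum(log(ev))      % log(det(M)); complex in general, and its real part is log|det(M)|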
You said the current value of the determinant is about 10^-300.
Are you trying to get the determinant to a certain value, say 1? If so, rescaling is awkward: the matrix you are considering is ill-conditioned, and given the precision of the machine you should treat the output determinant as zero. In other words, it is impossible to get a reliable inverse.
I would suggest modifying the columns or rows of the matrix rather than rescaling it.
I used R for a small test with a random matrix (standard normal values); it seems the determinant should be clearly non-zero.
> n=100
> M=matrix(rnorm(n**2),n,n)
> det(M)
[1] -1.977380e+77
> kappa(M)
[1] 2318.188
This is not strictly a MATLAB solution, but you might want to consider using Mahout. It's specifically designed for large-scale linear algebra. (1000x1000 is no problem for the scales it's used to.)
You would call into Java to pass data to/from Mahout.