Finding the null space of a large sparse matrix in MATLAB

Over the course of a finite difference discretization of an elliptic equation, with Neumann BCs applied on all sides of the 2D domain, I end up with a large sparse matrix. I need to find the null space of its transpose to enforce the consistency condition on the right-hand side. For computational domains of 50x50 and 100x100 I am able to do this within the available 32 GB of RAM, using the full-matrix commands NullSpace in Mathematica and null in MATLAB.
If the computational domain is 500x250 (which is of the order of what I would have in general), the RAM required to store the full matrix of size (500*250)x(500*250) is about 125 GB, which is prohibitive. If I store this super-matrix as sparse, the space constraint disappears, but I can't use the null command on it, since that works only on full matrices. MATLAB suggests using svds on sparse matrices. svds(A) gives only the 6 largest singular values and singular vectors. There is another form, svds(A,k,sigma), which gives the k singular values and vectors closest to the scalar sigma. When I use sigma = 0 to find the singular vector corresponding to the zero singular value, which would be a vector of the null space basis, I get an error that sigma is too close to an eigenvalue of the matrix.
My matrix is singular, so one of its eigenvalues is zero. How do I circumvent this error? Or is there a better way to do this with the tools at hand? I have both MATLAB and Mathematica available.
Thanks in advance for your help and suggestions.
Best
Trinath

I guess you can try with some sort of decomposition.
http://www.mathworks.co.uk/matlabcentral/fileexchange/11120-null-space-of-a-sparse-matrix
Have you tried this?
Or maybe this?
http://www.mathworks.co.uk/matlabcentral/newsreader/view_thread/249467
I am confident they should work, but I have not tried them myself. Another way of proceeding would be a QR decomposition: a rank-revealing QR gives you a permutation identifying the first k independent columns, where k is the rank of your matrix, and the columns of Q from k+1 to n then provide an orthonormal basis for the null space. A sketch follows below.
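For instance, a minimal MATLAB sketch of that idea, applied to null(A') via a QR factorization of A (the rank detection from the diagonal of R is a heuristic here, since MATLAB's sparse QR permutes for sparsity rather than rank; treat it as a starting point, not a guaranteed rank-revealing factorization):
% Sketch: basis for null(A') from A = Q*R.
% The columns of Q beyond the numerical rank r of A are
% orthogonal to range(A), i.e. they span null(A').
[Q, R] = qr(A);                        % also works for sparse A
tol    = max(size(A)) * eps(normest(A));
r      = nnz(abs(diag(R)) > tol);      % heuristic numerical rank
Z      = Q(:, r+1:end);                % candidate basis for null(A')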
Hope this helps.
Cheers,
GL

Related

How to compute null of large sparse matrix in Matlab?

I have a matrix problem in Matlab.
I have a 1 million x 1 million sparse matrix, and I keep trying to use null on it. The problem is that I run out of memory. I tried svds (the SVD routine for sparse matrices), but I run out of memory there as well. Is there a possible workaround for the null() function for large sparse matrices in Matlab?
In general, the null space of a matrix, and the unitary factors (U and V) of the singular value decomposition, are NOT sparse even if the input matrix is sparse. Therefore, if you work with a 1M-by-1M matrix, even though it is sparse, the outputs of these operations are not, and you run out of memory.
What can you do?
If your input matrix has a certain structure to it (in addition to its sparseness) you might find some algebraic method to take advantage of this structure.
Another path you should consider is why you need to compute the null space of the matrix in the first place. Can you achieve the same goal without explicitly estimating the null space?
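For example, if only one or a few null-space vectors are actually needed, an iterative eigensolver can find them without ever forming a dense factor. A rough sketch (the tiny shift is an assumption, chosen only to keep the shift-invert factorization away from the exact zero eigenvalue; note also that forming A'*A squares the conditioning):
% Sketch: one approximate null vector of a large sparse A.
% A'*A stays sparse; its eigenvector for eigenvalue 0 spans null(A).
M       = A' * A;              % sparse normal-equations matrix
sigma   = 1e-10;               % small nonzero shift (problem-dependent)
[v, mu] = eigs(M, 1, sigma);   % eigenpair closest to sigma
v       = v / norm(v);         % approximate null vector of A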

How to convert a matrix to a diagonally dominant matrix using pivoting in Matlab

Hi I am trying to solve a linear system of the following type:
A*x=b,
where A is the coefficient matrix,
x is the vector of unknowns, and
b is the right-hand-side vector.
The coefficient matrix (A) is an n-by-n sparse matrix, which even has zeros on the diagonal. To solve this system accurately I am using MATLAB's iterative method bicgstab (biconjugate gradients stabilized method).
This coefficient matrix (A) has
det(A) = -4.1548e-05 and rcond(A) = 1.1331e-04.
Therefore the matrix is ill-conditioned. I first tried scaling, and the results were:
det(A) = -1.2612e+135 but rcond(A) = 5.0808e-07...
So the matrix is still ill-conditioned. I also checked that the sum of the absolute values of the off-diagonal elements is 163.60, while the sum of the absolute values of the diagonal elements is 32.49, so the coefficient matrix is not diagonally dominant and bicgstab will not converge.
I am looking for help with pivoting the coefficient matrix (A) so that it becomes diagonally dominant, or for any other advice on solving this problem.
Thanks for the help.
First there should be quite a few things noted here:
Don't use the determinant to estimate the "amount of singularity" of your matrix. The determinant is the product of all the eigenvalues of your matrix, so its scaling can be wildly misleading compared to a much better measure like the condition number. This leads to the next point:
Your conditioning (according to rcond) isn't that bad. Are you working with single or double precision? Large problems routinely have condition numbers in this range and are still quite solvable, though of course this depends on a very complicated interaction of many factors, of which the condition number plays only a small part. This leads to another complicated point:
Diagonal dominance may not help you at all here. As far as I know, BiCGStab does not require diagonal dominance for its convergence, nor is diagonal dominance even known to help it. Diagonal dominance is usually an assumption made by other iterative methods such as Jacobi or Gauss-Seidel. In fact the convergence behavior of BiCGStab is not very well understood, and it is usually only used when memory is severely constrained and conjugate gradients is not applicable.
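Incidentally, diagonal dominance is a row-wise property, so if you do want to test for it, compare each diagonal entry against the off-diagonal sum of its own row rather than comparing global sums. A quick sketch:
% Sketch: test weak row-wise diagonal dominance of sparse A.
d   = abs(diag(A));
off = sum(abs(A), 2) - d;        % row sums of off-diagonal magnitudes
isDiagDom = full(all(d >= off)); % true if weakly diagonally dominant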
If you are really interested in using a Krylov method (such as BiCGStab) to solve your problem, then you generally need to have more understanding of where your matrix is coming from so that you can choose a sensible preconditioner.
So this calls for a bit more information. Do you know more about this matrix? Is it arising from some kind of physical problem? Do you know for example if it is symmetric or positive definite (I will assume not both because you are not using CG).
Let me end with some actionable advice which is very generic, and so not necessarily optimal:
If memory is not an issue, consider using restarted GMRES instead of BiCGStab. My experience is that GMRES has much more robust convergence.
Try an approximate factorization preconditioner such as ILU. MATLAB has a function for this built in.
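A minimal sketch of that combination (the parameter values here are illustrative placeholders, not tuned recommendations):
% Sketch: ILU-preconditioned restarted GMRES.
setup.type    = 'ilutp';          % threshold ILU with pivoting
setup.droptol = 1e-4;             % drop tolerance (problem-dependent)
[L, U] = ilu(A, setup);           % incomplete LU factors of sparse A
restart = 30;  tol = 1e-8;  maxit = 200;
[x, flag] = gmres(A, b, restart, tol, maxit, L, U);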

Speed up gf(eye(x)) a.k.a. Speed up Galois field creation for sparse matrices

From the documentation (communications toolbox)
x_gf = gf(x,m) creates a Galois field array from the matrix x. The Galois field has 2^m elements, where m is an integer between 1 and 16.
Fine. The effort for big matrices grows with the number of elements of x. No surprise, as every element must be "touched" at some point.
Unfortunately, this means that the cost of gf(eye(n)) grows quadratically with n. Is there a way to profit from all the zeros in there?
PS: I need this to delete a row from a gf matrix, as the usual m(c,:) = [] way does not work, and my idea of multiplying the gf matrix by a cut-down identity matrix was surprisingly slow.
I don't have this toolbox, but maybe gf supports sparse-data inputs, which could drastically reduce your execution time in such a case.
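If it does, the call might look like the following. This is purely speculative and untested, since the toolbox is not at hand; gf may well reject or densify sparse input:
% Untested idea: pass a sparse identity in case gf can keep
% or at least exploit the zero pattern.
n = 1000;  m = 8;            % placeholder sizes
I_gf = gf(speye(n), m);      % speye(n) is the sparse n-by-n identity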

How to find if a matrix is Singular in Matlab

I use the function below to generate the betas for a given set of guess lambdas from my optimiser.
When running I often get the following warning message:
Warning: Matrix is singular to working precision.
In NSS_betas at 9
In DElambda at 19
In Individual_Lambdas at 36
I'd like to be able to exclude any betas that form a singular matrix from the solution set, but I don't know how to test for that.
I've been trying to use rcond(), but I don't know where to make the cutoff between singular and non-singular.
Surely if MATLAB is generating the warning message it already knows whether the matrix is singular, so if I could just find where that variable is stored I could use it?
function [betas, r] = NSS_betas(lambda, data)
% Build the Nelson-Siegel-Svensson design matrix and solve for betas.
mats = data.mats2';   % maturities
yM   = data.y2';      % observed yields
nObs = size(yM, 1);
% Columns: level, slope, and two curvature terms.
G = [ones(nObs,1) ...
     (1-exp(-mats./lambda(1)))./(mats./lambda(1)) ...
     ((1-exp(-mats./lambda(1)))./(mats./lambda(1)) - exp(-mats./lambda(1))) ...
     ((1-exp(-mats./lambda(2)))./(mats./lambda(2)) - exp(-mats./lambda(2)))];
betas = G\yM;
r = rcond(G);         % returned so the caller can screen singular fits
end
Thanks for the advice:
I tested all three suggestions below after setting the two lambda values equal, giving a singular matrix:
if all(~isinf(G(:)))   % guard against Inf entries before testing
    r  = rank(G);
    r2 = rcond(G);
    r3 = min(svd(G));
end
r = 3, r2 = 2.602085213965190e-16, r3 = 1.075949299504113e-15
So in this test rank() and rcond() worked, assuming I take the benchmark values given below.
However what happens when I have two values that are close but not exactly equal?
How can I decide what is too close?
rcond is the right way to go here. If it approaches machine precision (i.e. it is near zero), your matrix is effectively singular. I usually go with:
if( rcond(A) < 1e-12 )
% This matrix doesn't look good
end
You can experiment with a value that suits your needs, but taking the inverse of a matrix that is even close to singular in MATLAB can produce garbage results.
You could compare the result of rank(G) with the number of columns of G. If the rank is less than the column dimension, you will have a singular matrix.
You can also check this via
min(svd(A)) > eps
verifying that the smallest singular value is larger than eps, or any other numerical tolerance relevant to your needs (the expression returns 1 or 0).
Condition number (Maximal singular value/Minimal singular value) is another good method:
cond(A)
It uses svd. It should be as close to 1 as possible. Very large values mean that the matrix is almost singular. Inf means that it is precisely singular.
Note that almost all of the methods mentioned in the other answers rely on the SVD in some way.
There are special tools designed for this problem, appropriately called "rank-revealing matrix factorizations". To the best of my (albeit a little old) knowledge, a good enough way to decide whether an n x n matrix A is nonsingular is to use
det(A) ≠ 0  ⇔  rank(A) = n
together with a rank-revealing QR factorization of A:
A*P = Q*R
where Q is orthogonal, P is a permutation matrix, and R is an upper triangular matrix whose diagonal elements decrease in magnitude along the diagonal.
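MATLAB's built-in column-pivoted QR behaves this way for dense matrices, so a sketch of the resulting rank test might look like this (the tolerance is an assumption, chosen to mirror the kind of threshold rank() uses):
% Sketch: numerical rank from a column-pivoted QR, A*P = Q*R.
[Q, R, P] = qr(A);                     % |diag(R)| is non-increasing
tol = max(size(A)) * eps(abs(R(1,1))); % R(1,1) estimates norm(A)
r = nnz(abs(diag(R)) > tol);           % numerical rank estimate
isSingular = (r < size(A,2));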

Determinants of huge matrices in MATLAB

For a simulation problem, I need to calculate determinants of complex square matrices on the order of 1000x1000 in MATLAB. Since the values refer to those of Bessel functions, the matrices are not at all sparse.
Since I am interested in the change of the determinant with respect to some parameter (the energy of a searched eigenfunction, in my case), I currently work around the problem by first finding a rescaling factor for the studied range and then calculating the determinants:
result(k) = det(pre_factor*Matrix{k});
Now this is a very awkward solution and only works for matrix dimensions up to, say, 500x500.
Does anybody know a nice solution to the problem? Interfacing to Mathematica might work in principle but I have my doubts concerning feasibility.
Thank you in advance
Robert
Edit: I did not find a convenient solution to the calculation problem, since this would require moving to higher precision. Instead, I used the identity
ln det M = trace(ln M)
which, when differentiated with respect to k, gives
A = d(ln det M)/dk = trace(inv(M(k)) * dM/dk)
So I at least had the change of the logarithm of the determinant with respect to k. From the physical background of the problem I could derive constraints on A, which in the end gave me a workaround valid for my problem. Unfortunately I do not know whether such a workaround can be generalized.
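For reference, ln|det M| itself can also be evaluated without overflow via an LU factorization instead of det(); a sketch:
% Sketch: ln|det(M)| via LU, avoiding overflow/underflow in det().
[L, U, P] = lu(M);                   % P*M = L*U, unit-diagonal L
logAbsDet = sum(log(abs(diag(U))));  % ln|det(M)| (sign/phase dropped)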
You should realize that when you multiply a matrix by a constant k, then you scale the determinant of the matrix by k^n, where n is the dimension of the matrix. So for n = 1000, and k = 2, you scale the determinant by
>> 2^1000
ans =
1.07150860718627e+301
This is of course a huge number, so you might expect that it should fail, since in double precision, MATLAB will only represent floating point numbers as large as realmax.
>> realmax
ans =
1.79769313486232e+308
There is no need to do all the work of recomputing that determinant, not that computing the determinant of a huge matrix like that is a terribly well-posed problem anyway.
If speed is not a concern, you may want to use det(e^A) = e^(tr A) and take as A some scaling constant times your matrix (so that A - I has spectral radius less than one).
EDIT: In MATLAB, the matrix logarithm (logm) is computed via a triangular (Schur) decomposition. So it is better to compute the eigenvalues of your matrix and multiply them (or better, add their logarithms). You did not specify whether your matrix is symmetric: if it is, finding the eigenvalues is easier than if it is not.
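A sketch of that eigenvalue route (for a general complex matrix, the imaginary part of the sum is only defined modulo 2*pi, but the real part gives ln|det M| directly):
% Sketch: log-determinant as a sum of log-eigenvalues.
lam = eig(M);              % all eigenvalues of the dense matrix
logDetM = sum(log(lam));   % real(logDetM) equals ln|det(M)|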
You said the current value of the determinant is about 10^-300.
Are you trying to get the determinant to a certain value, say 1? If so, rescaling is awkward: the matrix you are considering is ill-conditioned, and, given the precision of the machine, you should consider the output determinant to be zero. In other words, it is impossible to get a reliable inverse.
I would suggest modifying the columns or rows of the matrix rather than rescaling it.
I used R to run a small test with a random matrix (standard normal entries); it suggests the determinant of such a matrix should be clearly non-zero:
> n=100
> M=matrix(rnorm(n**2),n,n)
> det(M)
[1] -1.977380e+77
> kappa(M)
[1] 2318.188
This is not strictly a MATLAB solution, but you might want to consider using Mahout. It is specifically designed for large-scale linear algebra (1000x1000 is no problem for the scales it is used to).
You would call into Java to pass data to and from Mahout.