Within some MATLAB code of mine, I have to deal with the Cholesky factorization of a given matrix. I generally call chol(A,'lower') to generate the lower triangular factor.
Now, checking my code with the profiler, it is evident that the chol function is really time consuming, especially when the input matrix becomes large.
Therefore, I would like to know whether there is any viable alternative to the built-in chol function.
I have been thinking of the LAPACK library, and namely of its spptrf function. Is it available in MATLAB or not?
Any hint or support is more than welcome.
EDIT
Just as an example, the profiler reports this information:
where Coh_u has size 1395x1395. It also has to be remarked that chol is called 4000 times, since I need the Cholesky factor for 4000 different configurations.
I'm not sure which version of MATLAB you are using, but I found this discussion, which suggests that in older versions the Cholesky factorization was very slow, as you're describing.
One of the answers there says to use the CHOLMOD package from SuiteSparse, which has a chol2 function that is supposed to be faster.
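A minimal sketch of that suggestion, assuming SuiteSparse/CHOLMOD is installed and on the MATLAB path, and that chol2 takes a sparse matrix and returns an upper triangular factor (as the CHOLMOD MATLAB interface describes):
Coh_s = sparse(Coh_u);   % CHOLMOD works on sparse matrices
R = chol2(Coh_s);        % upper triangular factor, Coh_u = R'*R up to round-off
L = R';                  % lower triangular factor, as with chol(Coh_u,'lower')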
Can you confirm whether the correct expression for Coh_u is as below:
a) Coh_u = exp(-a.*sqrt((f(ii)/Uhub).^2 + (0.12/Lc).^2)).*(df.*psd(ii,1));
or
b) Coh_u = exp(-a.*dist*sqrt((f(ii)/Uhub).^2 + (0.12/Lc).^2)).*(df.*psd(ii,1));
The difference between a) and b) is that in b) dist has been added, which is the distance between two matrices Y and Z such that
dist = pdist2([Y(:) Z(:)],[Y(:) Z(:)]);
But it leads to the "Matrix not positive definite" error with the chol() function.
Related
I have a linear system of about 2000 sparse equations in MATLAB. For my final result, I only really need the value of one of the variables: the other values are irrelevant. While there is no real problem in simply solving the equations and extracting the correct variable, I was wondering whether there was a faster way or MATLAB command. For example, as soon as the required variable is calculated, the program could in principle stop running.
Is there anyone who knows whether this is at all possible, or if it would just be easier to keep solving the entire system?
Most of the computation time is spent inverting the matrix, so if we can find a way to avoid completely inverting the matrix, we may be able to improve the computation time. Let's assume I'm only interested in the solution for the last variable x(N). Using the standard method we compute
x = A\b;
res = x(N);
Assuming A is full rank, we can instead use an LU decomposition of the augmented matrix [A b] to get x(N), which looks like this
[~,U] = lu([A b]);
res = U(end,end)/U(end,end-1);
This is essentially performing Gaussian elimination and then solving for x(N) using back-substitution.
We can extend this to find any value of x by swapping the columns of A before LU decomposition,
x_index = 123; % the index of the solution we are interested in
A(:,[x_index,end]) = A(:,[end,x_index]);
[~,U] = lu([A b]);
res = U(end,end)/U(end,end-1);
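As a quick sanity check, here is a small self-contained sketch (with random test data and illustrative names) comparing the LU-based result against a full backslash solve:
N = 200;                                     % illustrative problem size
A = randn(N); b = randn(N,1);                % random full-rank test system
x_index = 123;                               % the variable we actually want
A2 = A;                                      % copy, so the original A is untouched
A2(:,[x_index,end]) = A2(:,[end,x_index]);   % swap columns as above
[~,U] = lu([A2 b]);
res = U(end,end)/U(end,end-1);               % back-substitution for the last variable
x_full = A\b;                                % reference solution
fprintf('difference: %g\n', abs(res - x_full(x_index)));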
Benchmarking performance in MATLAB R2017a with 10,000 random 200-dimensional systems, we get a slight speed-up:
Total time direct method : 4.5401s
Total time LU method : 3.9149s
Note that you may experience some precision issues if A isn't well conditioned.
Also, this approach doesn't take advantage of the sparsity of A. In my experiments, even with 2000x2000 sparse matrices everything slowed down considerably, and the LU method was significantly slower. That said, a full matrix representation only requires about 30MB, which shouldn't be a problem on most computers.
If you have access to the theory manuals for NASTRAN, I believe (from memory) there is coverage of partial solutions of linear systems. Also try looking for iterative or tridiagonal solvers for A*x = b. On this page, review the pqr solution answer by Shantachhani. Another reference.
I'm trying to compute the inverse of a matrix P, but if I multiply inv(P)*P, MATLAB does not return the identity matrix. It's almost the identity (off-diagonal values on the order of 10^(-12)). However, in my application I need more precision.
What can I do in this situation?
Use inv() only if you explicitly need the inverse of a matrix; otherwise just use the backslash operator \.
The documentation on inv() explicitly states:
x = A\b is computed differently than x = inv(A)*b and is recommended for solving systems of linear equations.
This is because the backslash operator, or mldivide(), uses whatever method is best suited to your specific matrix:
x = A\B solves the system of linear equations A*x = B. The matrices A and B must have the same number of rows. MATLAB® displays a warning message if A is badly scaled or nearly singular, but performs the calculation regardless.
Just so you know what algorithm MATLAB chooses depending on your input matrices, here's the full algorithm flowchart as provided in their documentation.
The versatility of mldivide in solving linear systems stems from its ability to take advantage of symmetries in the problem by dispatching to an appropriate solver. This approach aims to minimize computation time. The first distinction the function makes is between full (also called "dense") and sparse input arrays.
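For illustration, here is a minimal sketch (random, reasonably well-conditioned test data; the names are illustrative) comparing the residuals of the two approaches:
n = 500;
A = randn(n) + n*eye(n);   % diagonally dominant, hence well conditioned
b = randn(n,1);
x1 = A\b;                  % recommended: mldivide dispatches a suitable solver
x2 = inv(A)*b;             % explicit inverse: slower and usually less accurate
fprintf('residual with backslash: %e\n', norm(A*x1 - b));
fprintf('residual with inv():     %e\n', norm(A*x2 - b));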
As a side note about errors on the order of 10^(-12): besides the above-mentioned inaccuracy of the inv() function, there's floating point accuracy. This post on how MATLAB handles it is rather insightful, with a more general computer science post on it here. Basically, if you are computing numerics, don't worry (too much at least) about errors 12 orders of magnitude smaller than your values.
You have what's called an ill-conditioned matrix. It's risky to try to take the inverse of such a matrix. In general, taking the inverse of anything but the smallest matrices (such as those you see in an introduction to linear algebra textbook) is risky. If you must, you could try taking the Moore-Penrose pseudoinverse (see Wikipedia), but even that is not foolproof.
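If you do go the pseudoinverse route, a minimal sketch (with P the matrix from the question) looks like this:
Pinv = pinv(P);                        % Moore-Penrose pseudoinverse (SVD-based)
err  = norm(Pinv*P - eye(size(P)));    % compare against norm(inv(P)*P - eye(size(P)))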
I have a linear equation such as
Ax=b
where A is a full-rank matrix of size 512x512, b is a vector of size 512x1, and x is the unknown vector. I want to find x, so I have some options for doing that:
1. Using the normal way
inv(A)*b
2. Using SVD (singular value decomposition)
[U S V]=svd(A);
x = V*(diag(diag(S).^-1)*(U.'*b))
Both methods give the same result. So what is the benefit of using SVD to solve Ax=b, especially in the case where A is a 2D matrix?
Welcome to the world of numerical methods, let me be your guide.
You, as a new person in this world, wonder: "Why would I do something this difficult with this SVD stuff instead of the so commonly known inverse?! I'm going to try it in MATLAB!"
And no answer was found. That is because you are not looking at the problem itself! The problems arise when you have an ill-conditioned matrix. Then computing the inverse is not numerically possible.
example:
A=[1 1 -1;
1 -2 3;
2 -1 2];
Try to invert this matrix using inv(A). You'll get Inf values.
That is because the condition number of the matrix is very high (check cond(A)).
However, if you try to solve it with the SVD method (with b=[1;-2;3]) you will get a result. Solving Ax=b systems with ill-conditioned matrices is still a hot research topic.
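A minimal sketch of such an SVD-based (pseudoinverse-style) solve for the example above, with an illustrative tolerance for discarding the (near-)zero singular values:
A = [1 1 -1; 1 -2 3; 2 -1 2];
b = [1; -2; 3];
[U, S, V] = svd(A);
s = diag(S);
tol = max(size(A)) * eps(max(s));              % same spirit as MATLAB's default rank tolerance
keep = s > tol;                                % drop the (near-)zero singular values
x = V(:,keep) * ((U(:,keep)'*b) ./ s(keep));   % minimum-norm least-squares solution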
As @Stewie Griffin suggested, the best way to go is mldivide, as it does a couple of things behind the scenes.
(Yeah, my example is not very good because inv(A) just gives Inf there, but there is a way better example in this YouTube video.)
inv(A)*b has several downsides. The main one is that it explicitly calculates the inverse of A, which is both time consuming and may result in inaccuracies if values vary by many orders of magnitude.
Although it might be better than inv(A)*b, using svd is not the "correct" approach here. The MATLAB way to do this is with mldivide, \. Using this, MATLAB chooses the best algorithm to solve the linear system based on its properties (Hermitian, upper Hessenberg, real and positive diagonal, symmetric, diagonal, sparse, etc.). Often the solution will be an LU factorization with partial pivoting, but it varies. You'll have a hard time beating MATLAB's implementation of mldivide, but using svd might give you some more insight into the properties of the system if you actually investigate U, S, and V. If you don't want to do that, go with mldivide.
I use eigs to calculate the eigenvectors of sparse square matrices which are large (tens of thousands of rows).
What I want are the eigenvectors corresponding to the smallest eigenvalues.
But
eigs(A, 10, 'sm') % Note: A is the matrix
runs very slow.
However, using eigs(A, 10, 'lm') gives me the answer relatively faster.
And as I tried, replacing 10 with A_width in eigs(A, 10, 'lm') so that it includes all the eigenvectors doesn't solve this problem, because it makes it as slow as using 'sm'.
So I want to know why calculating the smallest eigenvectors (using 'sm') is much slower than calculating the largest.
BTW, if you have any idea how to make eigs with 'sm' as fast as with 'lm', please tell me.
The algorithm used in pretty much any standard eigs function is (some variation of) the Lanczos algorithm. It is iterative and the first iterations give you the largest eigenvalues. This explains pretty much every observation you make:
Largest eigenvalues take the least amount of iterations,
Smallest eigenvalues take the maximum amount of iterations,
All eigenvalues also take the maximum amount of iterations.
There are tricks to "fool" eigs into calculating the smallest eigenvalues by actually making them the largest eigenvalues of another problem. This is usually accomplished with a shift parameter. Skimming over the MATLAB documentation for eigs, I see that it has a sigma parameter, which might help you. Note that the same documentation recommends proper eig if the matrix fits into memory, as eigs has its numerical quirks.
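A minimal sketch of that shift approach (sigma = 0 targets the eigenvalues of smallest magnitude; A is assumed to be sparse and square):
sigma = 0;                    % look for the eigenvalues closest to 0
[V, D] = eigs(A, 10, sigma);  % uses shift-invert internally; usually much faster than 'sm'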
Since eigs is actually an m-file function, we can profile it. I have run a couple of basic tests, and it depends very much on the nature of the data in the matrix. If we run the profiler separately on the following two lines of code:
eigs(eye(1000), 10, 'lm'), and
eigs(eye(1000), 10, 'sm'),
then in the first instance it calls arpackc (the main function that does the work - according to the comments in eigs it's probably from here) a total of 22 times. In the second instance it is called 103 times.
On the other hand, trying it with
eigs(rand(1000), 10, 'lm'), and
eigs(rand(1000), 10, 'sm'),
I get results where the 'lm' option consistently calls arpackc many more times than the 'sm' option.
I'm afraid I don't know the details of the algorithm, and so can't explain it in any deeper mathematical sense, but the page that I linked suggests ARPACK is best for matrices with some structure. Since matrices generated by rand have little structure, it is probably safe to assume the latter behaviour I described is not what you'd expect under normal operating conditions.
In short: it simply takes the algorithm more iterations to converge when you ask it for the smallest eigenvalues of a structured matrix. This being an iterative process, however, it very much depends on the actual data you give it.
Edit: There is a wealth of information and references about this method here, and the key to understanding exactly why this happens is surely contained somewhere therein.
The reason is actually much simpler and comes down to the basics of solving large sparse eigenvalue problems. These are all based on solving:
(1) A*x = lam*x
Most solution methods use some form of power iteration (e.g. a Krylov subspace, as spanned in both the Lanczos and Arnoldi methods).
The thing is that power iteration converges to the largest eigenvalue of (1). Therefore the largest eigenvalues are found from the subspace spanned by K^k = {r0, A*r0, ..., A^k*r0}, which requires only matrix-vector multiplications (cheap).
To find the smallest, we have to reformulate (1) as follows:
(2) A^(-1)*x = (1/lam)*x
Now solving for the largest eigenvalue of (2) is equivalent to finding the smallest eigenvalue of (1). In this case the subspace is spanned by K^k = {r0, A^(-1)*r0, ..., A^(-k)*r0}, which requires solving several linear systems (expensive!).
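A minimal sketch of that idea, assuming A is sparse, square and nonsingular: factorize once, then hand eigs a function handle that applies A^(-1) cheaply through the stored factors (the variable names are illustrative):
[Lf, Uf, P, Q] = lu(A);                          % one-time sparse LU factorization
applyAinv = @(x) Q * (Uf \ (Lf \ (P * x)));      % A^(-1)*x using the stored factors
[V, D] = eigs(applyAinv, size(A,1), 10, 'lm');   % largest eigenvalues of A^(-1) ...
smallest = 1 ./ diag(D);                         % ... are the smallest eigenvalues of A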
I have a huge matrix A of dimension 900000x900000, and I have to solve the linear equation
Ax=b, where b is a column vector of size 900000x1.
I used MATLAB's backslash operator, A\b, to try to get x. However, it freezes and I couldn't get x. Mostly I get an out-of-memory error. Even when I ran it on a computer with more memory, it made the system very slow and I had to wait a long time for the answer.
How can I solve this equation? My matrix is pretty sparse; its band is wide, but most of the elements are zero. b is a full vector. Any suggestions?
I did a project where we also worked with such large, but fortunately very sparse, matrices.
With such large matrices you are pretty much lost with direct methods: you can never compute the inverse because it will be a dense matrix, which you can never store. Methods such as LU or Cholesky factorization are also quite expensive, because they again create significant fill-in, i.e. they destroy zeros.
A viable alternative is to use iterative methods. If you know that your matrix is symmetric and positive-definite, try the Conjugate gradient method:
x = pcg(A, b); % Computes a solution to Ax = b, with A symmetric positive definite
I would just give it a try and see whether the method converges. Proving the assumption of positive definiteness is not easy, I'm afraid. If convergence is slow, a preconditioner can help (see the sketch after the list below).
If you do not get a solution, there are many more iterative methods. For example:
bicg - BiConjugate Gradient Method
bicgstab - BiConjugate Gradient Method (stabilized)
lsqr - Least Squares QR Method
gmres - Generalized Minimum Residual Method (I like this a lot)
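As mentioned above, if plain pcg converges too slowly, here is a minimal sketch of adding an incomplete-Cholesky preconditioner (the tolerance, iteration cap and ichol options are illustrative choices; ichol needs a sparse symmetric positive definite A):
tol   = 1e-8;                                         % illustrative tolerance
maxit = 500;                                          % illustrative iteration cap
Lp = ichol(A, struct('type','ict','droptol',1e-3));   % incomplete Cholesky factor
[x, flag, relres, iter] = pcg(A, b, tol, maxit, Lp, Lp');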