Do you know whether MATLAB supports the LAPACK spptrf function?
This routine is quite convenient when you need to compute the Cholesky factorization of a large symmetric positive definite matrix.
It performs the factorization given only the upper triangular part of the matrix, stored in packed (one-dimensional) form, as input.
Or is the built-in chol function already using spptrf internally?
EDIT
I was able to find the lapack package on the File Exchange http://www.mathworks.com/matlabcentral/fileexchange/16777-lapack, which includes the desired implementation of the spptrf function.
EDIT 2
MATLAB crashes fatally on my machine each time I call spptrf.
Is there any alternative way to call this function directly?
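If spptrf itself keeps crashing, one workaround that stays inside MATLAB is to unpack the packed upper triangle back into a full matrix and hand it to the built-in chol (which also ends up in LAPACK). A minimal sketch, assuming ap holds the upper triangle in LAPACK's column-major packed order (the example matrix is made up for illustration):

```matlab
% Emulate spptrf's packed interface with the built-in chol.
% ap holds the upper triangle of an n-by-n SPD matrix A in column-major
% packed order, i.e. ap = [A(1,1); A(1,2); A(2,2); A(1,3); ...].
n  = 3;
A  = [4 2 1; 2 5 3; 1 3 6];   % example SPD matrix
ap = A(triu(true(n)));        % packed upper triangle

% Unpack into a full upper-triangular matrix ...
U = zeros(n);
U(triu(true(n))) = ap;

% ... rebuild the symmetric matrix and factorize with chol.
R = chol(U + triu(U, 1)');    % R is upper triangular with R'*R == A
```

This gives up the memory saving of packed storage for the duration of the call, but avoids the crashing MEX interface.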
Related
So I am using MATLAB for a project discussing the use of the power method for finding stationary distributions of Markov chains and its convergence rate. I was wondering which method/algorithm MATLAB's eig() function uses to find the eigenvectors of a matrix?
Normally MATLAB uses LAPACK routines for these calculations. With that in mind, I guess that from here you will be able to find the code that MATLAB runs. Be aware that LAPACK is written in Fortran.
MATLAB Incorporates LAPACK
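For the stationary-distribution side of the question, the power method itself is simple enough to write directly, without eig. A minimal sketch for a row-stochastic transition matrix P (the example matrix and tolerance are made up for illustration):

```matlab
% Power iteration for the stationary distribution of a Markov chain,
% i.e. the left eigenvector p with p = p*P for eigenvalue 1.
P = [0.9 0.1; 0.5 0.5];              % example row-stochastic matrix
p = ones(1, size(P,1)) / size(P,1);  % start from the uniform distribution
for k = 1:1000
    pNext = p * P;                   % one power-method step
    if norm(pNext - p, 1) < 1e-12    % stop when the iterate has converged
        p = pNext;
        break;
    end
    p = pNext;
end
% p now approximates the stationary distribution of P.
```

The convergence rate is governed by the second-largest eigenvalue modulus of P, which is why the question about eig's internals comes up.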
I would like to use cuSolver for the eigenvalue decomposition of complex matrices in MATLAB.
I am using the MATLAB CUDA kernel interface, and it seems that it is not possible to interface cuSolver with MATLAB this way, since the cuSolver example contains host code as well as device code (as mentioned here: http://docs.nvidia.com/cuda/cusolver/#syevd-example1),
while the MATLAB CUDA kernel interface works only for kernel functions.
Please comment.
Is there any other way to compute the eigenvalue decomposition of a large number of matrices containing complex data in parallel on the GPU from the MATLAB environment?
You almost certainly need to use the MEX interface. This allows you to take in gpuArray data, and call kernels and other CUDA library functions.
See the doc: http://uk.mathworks.com/help/distcomp/run-mex-functions-containing-cuda-code.html for more.
I have symmetric sparse matrices in which some of the elements form "blocks" or "components".
Please look at the output of spy on an example matrix.
I want to efficiently find those clusters in MATLAB.
This problem is equivalent to finding connected components of a graph, however I have a feeling that relevant functionality should be available as a (combination of) fast MATLAB built-in functions that operate on sparse matrices.
Can you suggest such combination?
OK, I found the graphconncomp function in the Bioinformatics Toolbox. It uses some MEX routines internally.
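For reference, a toolbox-free alternative is the dmperm trick: for a symmetric sparse matrix with a nonzero diagonal (add speye if needed), the Dulmage-Mendelsohn block boundaries are exactly the connected components. A sketch, with a made-up example matrix:

```matlab
% Connected components of a symmetric sparse adjacency matrix S
% using only built-ins.
S = sparse([1 2 4 5], [2 1 5 4], 1, 5, 5);  % edges 1-2 and 4-5; node 3 isolated

[p, ~, r] = dmperm(S + speye(size(S)));     % Dulmage-Mendelsohn decomposition
nComp = numel(r) - 1;                       % number of connected components

comp = zeros(1, size(S, 1));                % component label for each node
for k = 1:nComp
    comp(p(r(k):r(k+1)-1)) = k;             % nodes in block k form component k
end
```

Adding the identity guarantees a zero-free diagonal, which is what makes the row and column permutations coincide and the blocks correspond to components.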
Is there a way to turn off pivoting when computing the inverse of a tridiagonal matrix in MATLAB? I'm trying to see whether a problem I'm having with solving a tridiagonal system comes from not pivoting, and I could test it simply in MATLAB by solving the same system with pivoting turned off. Any help is appreciated!
The documentation to mldivide doesn't list any options for setting low-level options like that.
I'd imagine that is because automatic pivoting is not only desired but expected from most tools these days.
For a tridiagonal matrix that is full, MATLAB will use its Hessenberg solver (which I imagine is akin to this flow) and, for a sparse tridiagonal matrix, will use a tridiagonal solver. In both cases, partial pivoting may be used to ensure an accurate solution of the system.
To get around the fact that MATLAB doesn't have a toggle for pivoting, you could implement your own tridiagonal solver (see above link) without pivoting and see how the solution is affected.
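For example, the Thomas algorithm is the standard tridiagonal solver without pivoting; a minimal sketch (the function name and interface are just for illustration), whose result you can compare against A\b to see the effect of skipping pivoting:

```matlab
function x = thomas(a, b, c, d)
% Thomas algorithm: solves T*x = d for tridiagonal T with NO pivoting.
% a: sub-diagonal (a(1) unused), b: main diagonal,
% c: super-diagonal (c(n) unused), d: right-hand side.
n = numel(d);
for k = 2:n                          % forward elimination, no row swaps
    m    = a(k) / b(k-1);
    b(k) = b(k) - m * c(k-1);
    d(k) = d(k) - m * d(k-1);
end
x    = zeros(n, 1);
x(n) = d(n) / b(n);
for k = n-1:-1:1                     % back substitution
    x(k) = (d(k) - c(k) * x(k+1)) / b(k);
end
end
```

If the unpivoted solution diverges from A\b on your system, the matrix is one where pivoting matters (e.g. not diagonally dominant).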
Suppose matrix A is stored in X and matrix B is stored in Y.
Multiplying them is just Z = X*Y, which is correct provided the inner dimensions of the two arrays agree.
How can I compute it doing it with a for loop?
The answer by ja72 is wrong; see my comments under it to see why. In general, for these simple linear algebra operations it is impossible for your code to beat the vectorized version, not even if you write your code in C/MEX (unless your matrix has a sparsity structure you can exploit in your code). The reason is that under the hood, MATLAB passes the actual job of matrix multiplication to the LAPACK library, written in Fortran, which in turn calls BLAS libraries optimized for the particular machine architecture.
Yes, acai is correct, and I remember wondering the same thing when I started using MATLAB. Just to add some detail to what acai said: LAPACK is the Linear Algebra PACKage, which many other languages use to solve these types of problems (Python connects to it through SciPy, Java through jlapack, etc.). BLAS is the Basic Linear Algebra Subprograms, which handle the basic matrix multiplication you are asking about. acai is also right that you can never beat the performance MATLAB gives for matrix multiplication; this is their bread and butter, and they have spent decades optimizing these operations.
Yes, matrix multiplication is A*B and element-by-element multiplication is A.*B. If A is N-by-M and B is M-by-K, then the code for C = A*B is
update
C = zeros(N, K);   % preallocate the result
for i = 1:N
    for j = 1:K
        C(i,j) = A(i,:)*B(:,j);   % row i of A times column j of B
    end
end
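To see the point made in the other answers about BLAS, you can time the loop version against the built-in product (the sizes here are arbitrary):

```matlab
% Compare the explicit loop with the built-in (BLAS-backed) product.
N = 500; M = 400; K = 300;
A = rand(N, M); B = rand(M, K);

tic; Z = A * B; tVec = toc;           % vectorized: dispatched to optimized BLAS

C = zeros(N, K);                      % preallocate before looping
tic;
for i = 1:N
    for j = 1:K
        C(i,j) = A(i,:) * B(:,j);     % dot product of row i and column j
    end
end
tLoop = toc;

maxErr = max(abs(C(:) - Z(:)));       % the two results agree to rounding error
```

On typical machines the loop is orders of magnitude slower than Z = A*B, even though both compute the same thing.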