Matrix Exponential BLAS/LAPACK

I'd like to know if BLAS or LAPACK implement any function for getting the exponential of a certain double or float matrix of numbers.
So far I have not been able to find any such function.

Related

numerical problem after iterations in Matlab

I encountered some numerical issues when running simulations in MATLAB. Here are the questions:
1. I found that A*A' (a matrix times its transpose) is not guaranteed to be symmetric in MATLAB. What is the reason? Also, I will compute A*C*A', where C is a symmetric matrix, and I would like to keep A*C*A' symmetric. Is there any method to fix the numerical difference created by the transpose operation?
2. I implemented a for loop in MATLAB to compute a set of matrices. A small numerical difference (around 10^(-10)) in each round accumulates into the next, and the result finally diverges after around 30 rounds. Is there any method to correct the small error in each run without affecting the result?
Thank you for reading my questions!
"I found that A*A' (a matrix times its transpose) is not guaranteed to be symmetric in MatLab."
I would dispute that statement as written. The MATLAB parser is smart enough to recognize that the operands of A*A' are the same and call a symmetric BLAS routine in the background to do the work, and then manually copy one triangle into the other resulting in an exactly symmetric result. Where one usually gets into trouble is by coding something that the parser cannot recognize. E.g.,
A = rand(1000);        % placeholder for "whatever" -- any square matrix
B = rand(1000);
X = A + B;
Y1 = (A+B) * (A+B)';   % MATLAB parser will call a generic BLAS routine
Y2 = X * X';           % MATLAB parser will call a symmetric BLAS routine
In the first matrix multiply above, the MATLAB parser may not be smart enough to recognize the symmetry, so a generic matrix multiply BLAS routine (e.g., dgemm) could be called to do the work, and the result is not guaranteed to be exactly symmetric. But in the second matrix multiply above, the MATLAB parser does recognize the symmetry and calls a symmetric BLAS matrix multiply routine.
For the A*C*A' case, I don't know of any method to force MATLAB to generate an exactly symmetric result. You could manually copy one resulting triangle into the other after the fact. I suppose you could also factor C into two parts X*X' and then regroup, but that seems like too much work for what you are trying to do.
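To make the two BLAS paths concrete, here is a small C sketch (mine, not from the original post, and it assumes a CBLAS header such as <cblas.h> is available; the random data is a placeholder). It contrasts dgemm, which computes both triangles independently, with dsyrk, which fills one triangle that you then mirror, giving exact symmetry by construction, i.e., the manual triangle copy mentioned above.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cblas.h>

int main(void) {
    int n = 500, k = 500;
    double *X  = malloc((size_t)n * k * sizeof *X);
    double *C1 = malloc((size_t)n * n * sizeof *C1);
    double *C2 = malloc((size_t)n * n * sizeof *C2);
    for (int i = 0; i < n * k; ++i) X[i] = (double)rand() / RAND_MAX;

    /* Generic path: C1 = X * X'. Both triangles are computed
       independently, so exact symmetry is not guaranteed. */
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans,
                n, n, k, 1.0, X, n, X, n, 0.0, C1, n);

    /* Symmetric path: dsyrk fills only the lower triangle of C2 = X * X'. */
    cblas_dsyrk(CblasColMajor, CblasLower, CblasNoTrans,
                n, k, 1.0, X, n, 0.0, C2, n);
    /* Mirror the lower triangle into the upper one, as the answer says
       MATLAB does; C2 is now symmetric by construction. */
    for (int j = 0; j < n; ++j)
        for (int i = j + 1; i < n; ++i)
            C2[j + (size_t)i * n] = C2[i + (size_t)j * n];

    /* Measure the worst asymmetry of the dgemm result. */
    double worst = 0.0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            worst = fmax(worst, fabs(C1[i + (size_t)j * n] - C1[j + (size_t)i * n]));
    printf("max |C1 - C1'| from dgemm: %g\n", worst);

    free(X); free(C1); free(C2);
    return 0;
}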

SuiteSparse equivalent for MATLAB A\b of complex semi-symmetric matrix

I am currently using MATLAB to do matrix division of very large, very sparse, complex matrices that are symmetric in structure but asymmetric in value (e.g. A(1,2) = 3+4i and A(2,1) = 3-4i).
I am now converting my code to Java. What is the proper equivalent function of A\b in SuiteSparse/LAPACK?
I know this is what MATLAB is running for A\b, but chol seems to be limited to real, symmetric matrices.

Variable precision arithmetic for definite integrals in Matlab

I want to evaluate a definite integral with variable precision arithmetic in MATLAB. It can be done using the Symbolic Math Toolbox in this way:
syms x
f = (x.^2000).*((1-x).^4000)
vpa(int(f,0,1))
This gives me the answer of the integral with variable precision arithmetic.
But I would like to evaluate the integral without the Symbolic Math Toolbox. I can use the command integral to calculate it, but since the computation is done in fixed (double) precision, it returns zero; i.e., the output of the following code is zero.
f = @(x) (x.^2000).*((1-x).^4000)
integral(f,0,1)
How can I combine vpa with numerical integration without using the Symbolic Math Toolbox?
Impossible, because the vpa function is part of the Symbolic Math Toolbox. If you are using vpa, you are using that toolbox.
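As a side calculation (not part of the original answer), the zero from integral is plain underflow. By the standard Beta-function identity,

integral from 0 to 1 of x^2000 * (1-x)^4000 dx = B(2001, 4001) = 2000! * 4000! / 6001!

which is on the order of 10^(-1661). That is far below the smallest positive normalized double (realmin is about 2.2*10^(-308)), so any double-precision quadrature rounds the result to 0.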

How do I find the eigenvalues and eigenvectors from a matrix using the Accelerate framework?

I have a function written in C to calculate eigenvalues and eigenvectors, but it takes a lot of CPU time since I am calling this function several times as part of another algorithm. According to Apple, the Accelerate framework can be used to find eigenvalues of matrices very quickly using BLAS and LAPACK.
As I am new to the Accelerate framework, which functions should I be using to find the eigenvalues and eigenvectors of a square matrix?
That depends a bit on the character of the matrix that you wish to decompose. There are different routines in LAPACK for symmetric/Hermitian matrices, banded matrices, and general matrices. If you have a general matrix (with no particular structure) you will need the general, nonsymmetric eigensolver routines, which work via the Schur decomposition. The routines are divided between single and double precision and between matrices with real or complex elements, as is all of LAPACK.
The general eigenproblem solver routines are named SGEEV, CGEEV, DGEEV, and ZGEEV, where S = single-precision real, C = single-precision complex, D = double-precision real, and Z = double-precision complex.
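As a minimal sketch (mine, not the original answerer's; the 3x3 values are arbitrary placeholders, and this uses the classic CLAPACK interface that Accelerate ships), calling dgeev_ looks like this, including the customary workspace query:

#include <stdio.h>
#include <stdlib.h>
#include <Accelerate/Accelerate.h>

int main(void) {
    /* Column-major (Fortran order) 3x3 matrix; values are placeholders. */
    __CLPK_integer n = 3, lda = 3, ldvl = 3, ldvr = 3, info, lwork = -1;
    double a[9] = { 1.0, 4.0, 7.0,    /* column 1 */
                    2.0, 5.0, 8.0,    /* column 2 */
                    3.0, 6.0, 10.0 }; /* column 3 */
    double wr[3], wi[3];  /* real/imaginary parts of eigenvalues */
    double vl[9], vr[9];  /* left/right eigenvectors */
    char jobvl = 'N', jobvr = 'V';
    double wkopt;

    /* Workspace query: lwork = -1 asks dgeev_ for the optimal size. */
    dgeev_(&jobvl, &jobvr, &n, a, &lda, wr, wi, vl, &ldvl, vr, &ldvr,
           &wkopt, &lwork, &info);
    lwork = (__CLPK_integer)wkopt;
    double *work = malloc(lwork * sizeof *work);

    /* Actual decomposition; note that dgeev_ overwrites a. */
    dgeev_(&jobvl, &jobvr, &n, a, &lda, wr, wi, vl, &ldvl, vr, &ldvr,
           work, &lwork, &info);
    if (info == 0)
        for (int i = 0; i < n; ++i)
            printf("lambda_%d = %g%+gi\n", i, wr[i], wi[i]);
    free(work);
    return 0;
}

Complex conjugate pairs of eigenvectors come back packed in consecutive columns of vr (real part, then imaginary part), so check wi[i] when unpacking them.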
IBM has a good online reference for LAPACK; here's a link describing the above routines.
Good luck!
Paul

Fminunc returns indefinite Hessian matrix for a convex objective

When minimizing a convex objective function, should the Hessian matrix at the minimizer be positive semidefinite (PSD)? If fminunc in MATLAB returns a Hessian which is not PSD, what does that mean? Am I using a wrong objective?
I do this kind of work in environments other than MATLAB.
Non-PSD means you can't take the Cholesky factorization of it (loosely speaking, a matrix square root), so you can't use it to get standard errors, for example.
To get a good Hessian, your objective function has to be really smooth, because you're taking a second derivative, which doubly amplifies any noise. If possible, it is best to use analytic derivatives rather than finite differences. That is, if you really need the Hessian.
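If you want to apply the Cholesky test this answer mentions, here is a sketch in C against a standard LAPACK installation (my example, not the answerer's; the 2x2 matrix is a placeholder, and note that dpotrf tests strict positive definiteness, so a PSD matrix on the boundary will also be reported as failing):

#include <stdio.h>

/* Fortran LAPACK Cholesky: factors a symmetric matrix in place.
   info = 0 on success; info = k > 0 means the leading minor of
   order k is not positive definite, i.e. the factorization fails.
   (Prototype assumes the common Fortran calling convention.) */
extern void dpotrf_(const char *uplo, const int *n, double *a,
                    const int *lda, int *info);

int main(void) {
    int n = 2, lda = 2, info;
    /* Column-major symmetric matrix; placeholder values. This one is
       indefinite (its eigenvalues are 3 and -1), so dpotrf_ will fail. */
    double h[4] = { 1.0, 2.0,
                    2.0, 1.0 };
    dpotrf_("L", &n, h, &lda, &info);
    if (info == 0)
        printf("Positive definite: Cholesky succeeded\n");
    else
        printf("Not positive definite: leading minor of order %d failed\n",
               (int)info);
    return 0;
}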