Decompositions in LAPACK

Could you please provide an example of these three decompositions in LAPACK, or at least an idea of how to use this library to compute them?
Eigenvalue decomposition.
Orthogonal decomposition.
Schur decomposition.

Examples of eigenvalue problems are vibrations in mechanical systems; the eigenvalues are the natural frequencies and the eigenvectors are the normalized modes of vibration.
It turns out that PageRank is also just a huge eigenvalue decomposition. Page and Brin are billionaires because of it.
I don't know what's in LAPACK, but look for Jacobi, Householder, or Lanczos methods.
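The PageRank observation above is easy to demonstrate: the ranking vector is just the dominant eigenvector of the link matrix, which power iteration finds without any full decomposition. A toy sketch with a made-up three-page web (the link weights are invented for illustration):

```python
import numpy as np

# Power iteration: repeatedly apply the matrix and renormalize; the iterate
# converges to the eigenvector of the largest-magnitude eigenvalue, which for
# a column-stochastic "link matrix" is the PageRank-style ranking vector.
# Toy 3-page web (made-up link weights; each column sums to 1):
P = np.array([[0.0, 0.5, 0.3],
              [0.5, 0.0, 0.7],
              [0.5, 0.5, 0.0]])
v = np.ones(3) / 3
for _ in range(200):
    v = P @ v
    v /= v.sum()
# v now holds the stationary ranking of the three pages
```

The Jacobi, Householder, and Lanczos methods mentioned above compute full or partial spectra; power iteration is the minimal special case that only recovers the dominant eigenpair.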
Orthogonal decomposition can be used to invert a special class of matrix:
http://en.wikipedia.org/wiki/Orthogonal_matrix
Here are the LAPACK docs:
http://www.netlib.org/lapack/lug/node39.html
Schur decomposition is similar to an orthogonal decomposition, except the factor in the middle is upper (quasi-)triangular rather than diagonal, with the eigenvalues of the original matrix on its diagonal:
http://en.wikipedia.org/wiki/Schur_decomposition
I've never heard it called Schur decomposition, but here are the LAPACK docs for symmetric, real matrices:
http://www.netlib.org/lapack/lug/node48.html
The latter two are techniques for factoring special classes of matrices.
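As a concrete sketch, all three decompositions can be exercised through scipy.linalg, which wraps the corresponding LAPACK routines (eig calls *GEEV, qr calls *GEQRF, schur calls *GEES); calling the Fortran routines directly follows the same pattern:

```python
import numpy as np
from scipy.linalg import eig, qr, schur

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

w, V = eig(A)     # eigenvalue decomposition: A @ V = V @ diag(w)
Q, R = qr(A)      # orthogonal (QR) decomposition: A = Q @ R
T, Z = schur(A)   # Schur decomposition: A = Z @ T @ Z.T, T (quasi-)triangular
```

Each call returns the factors explicitly, so the identities above can be checked by multiplying them back together.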

Related

scipy eigs finds complex eigenvectors, although they should be real

I have a 1047x1047 sparse matrix and am interested in its eigenvectors and eigenvalues. From the mathematical derivation I know that these must be real, but scipy's eigs finds complex ones. Unfortunately this destroys everything. There is no way to create a minimal example, because for small matrices eigs does calculate real eigenvectors, i.e., complex eigenvectors with imaginary part zero.
Desperate thanks!
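One thing worth checking (an assumption, since the matrix itself isn't shown): if a real spectrum follows from the matrix being symmetric, the symmetric solver eigsh works in real arithmetic throughout and cannot return spurious complex parts, unlike the general solver eigs. A sketch on a random symmetric sparse matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# eigs uses a general (complex-capable) Arnoldi iteration; for a real
# symmetric matrix, eigsh uses Lanczos and returns real eigenpairs directly.
S = sp.random(1047, 1047, density=0.01, random_state=0)
A = (S + S.T).tocsc()        # symmetrize: guarantees a real spectrum
vals, vecs = eigsh(A, k=6)   # six eigenpairs, real dtype by construction
```

If the matrix is only real-spectrum in theory but not exactly symmetric in floating point, symmetrizing it first (as above) before calling eigsh is a common workaround.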

Numerical Instability of a Kalman Filter in MATLAB

I am trying to run a standard Kalman filter algorithm to calculate likelihoods, but I keep getting the problem of a non-positive-definite variance matrix when calculating normal densities.
I've researched a little and seen that there may in fact be some numerical instability; I tried some numerical ways to avoid a non-positive-definite matrix, using both the Cholesky decomposition and its variant, the LDL' decomposition.
I am using MATLAB.
Does anyone have any suggestions?
Thanks.
I have encountered what might be the same problem before, when I needed to run a Kalman filter for long periods and over time my covariance matrix would degenerate. It might just be a problem of losing symmetry due to numerical error. One simple way to force your covariance matrix (call it P) to remain symmetric is to do:
P = (P + P')/2; % where P' is transpose(P)
right after estimating P.
post your code.
As a rule of thumb, if the model is not accurate and the regularization (i.e. the model noise matrix Q) is not sufficiently "large", underfitting will occur and the covariance matrix of the estimator will be ill-conditioned. Try fine-tuning your Q matrix.
The Kalman filter covariance update is known to be numerically fragile, even in the stabilized Joseph form, as any old-timer who once worked with a single-precision implementation of the filter can tell you. This problem was discovered ages ago and prompted a lot of research into implementing the filter in a stable manner. Probably the best-known implementation is the UD filter, where the covariance matrix is factorized as UDU' and the two factors are updated and propagated using special formulas (see Thornton and Bierman). U is an upper triangular matrix with ones on its diagonal, and D is a diagonal matrix.
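The UDU' factorization itself fits in a few lines. This is a toy sketch of the factorization step only (the function name and structure are my own; it is not the full Thornton/Bierman time- and measurement-update machinery):

```python
import numpy as np

def ud_factor(P):
    """Factor a symmetric positive-definite P as P = U @ diag(D) @ U.T,
    where U is unit upper triangular and D is diagonal."""
    P = np.array(P, dtype=float)     # work on a copy
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n - 1, -1, -1):
        D[j] = P[j, j]
        U[:j, j] = P[:j, j] / D[j]
        # Deflate: remove column j's rank-one contribution, then
        # continue on the remaining leading block
        P[:j, :j] -= D[j] * np.outer(U[:j, j], U[:j, j])
    return U, D
```

Propagating U and D instead of P keeps the covariance symmetric and positive semi-definite by construction, which is the point of the UD formulation.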

MATLAB: Eigenvalue Analysis for a System of Homogeneous Second-Order Equations with Damping Terms

I have a system of dynamic equations that ultimately can be written in the well-known "spring-mass-damper" form:
[M]{q''}+[C]{q'}+[K]{q}={0}
[M], [C], [K]: n-by-n Coefficient Matrices
{q}: n-by-1 Vector of the Degrees of Freedom
(the ' mark represents a time derivative)
I want to find the eigenvalues and eigenvectors of this system. Obviously, due to the term [C]{q'}, the standard MATLAB function eig() cannot be applied directly.
Does anyone know of a simple MATLAB routine to determine the eigenvalues and eigenvectors of this system? The system is homogeneous, so an efficient eigenvalue analysis should be very feasible, but I'm struggling a bit.
Obviously I could use brute force and symbolic computation to find the gigantic characteristic polynomial, but that seems inefficient, especially because I'm calling this from other parts of the code to determine frequencies as a function of other varied parameters.
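MATLAB's built-in polyeig(K, C, M) solves this quadratic eigenvalue problem directly, no symbolic computation needed. Equivalently, one can linearize the second-order system into first-order state-space (companion) form and feed it to any standard eigensolver; a language-neutral sketch in Python/scipy, with the function name my own:

```python
import numpy as np
from scipy.linalg import eig

def smd_modes(M, C, K):
    """Eigenvalues/vectors of M q'' + C q' + K q = 0 via the first-order
    (companion) linearization x' = A x with state x = [q; q']."""
    n = M.shape[0]
    A = np.block([[np.zeros((n, n)),       np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
    lam, V = eig(A)
    return lam, V[:n, :]   # upper half of each state eigenvector is q

# Example: one mass, no damping, k/m = 4 -> eigenvalues +/- 2j
lam, modes = smd_modes(np.eye(1), np.zeros((1, 1)), 4 * np.eye(1))
```

With damping the eigenvalues come in complex-conjugate pairs; their imaginary parts give the damped natural frequencies and their real parts the decay rates.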

How to speed up C++ eigendecomposition

I use MATLAB to do eigenvalue decomposition; the dimension of my data is about 10000, so the covariance matrix is 10000x10000. When I use the eig() function in MATLAB, it is very slow. Is there any way to speed up the eigenvalue decomposition?
I use the eigenvalue decomposition to do principal component analysis (PCA), so I only need the top K eigenvalues and eigenvectors; there is no need to compute all of them. I have tried to use Intel MKL for the eigendecomposition, but when I use the mex interface there are some errors. I posted about it here: https://stackoverflow.com/questions/19220271/how-to-use-intel-mkl-for-speed-my-own-matlab-mex-cpp-applications
Please give me some advice. Thanks.
Use eigs if your data is sparse, or if you are only interested in the first k values. For example, eigs(A,k) returns the k largest-magnitude eigenvalues. Note that eigs will be faster only for the first few eigenvalues, and will be slower for k > some value (probably 5...).
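In scipy terms the same idea looks like the sketch below (assuming, as holds for any covariance matrix, that the matrix is symmetric, so the Lanczos-based eigsh applies); the matrix here is a small toy stand-in for the 10000x10000 case:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Top-k eigenpairs of a symmetric PSD covariance matrix without the full
# decomposition -- the analogue of MATLAB's eigs(A, k).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 200))
cov = X.T @ X / X.shape[0]                # 200x200 toy covariance matrix
vals, vecs = eigsh(cov, k=5, which='LM')  # 5 largest-magnitude eigenpairs
components = vecs[:, ::-1]                # principal axes, largest first
```

For PCA specifically, it is often cheaper still to skip forming the covariance matrix entirely and take a truncated SVD of the (centered) data matrix instead.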

How do I find the eigenvalues and eigenvectors from a matrix using the Accelerate framework?

I have a function written in C to calculate eigenvalues and eigenvectors, but it takes a lot of CPU time since I call it several times as part of another algorithm. According to Apple, the Accelerate framework can be used to find eigenvalues from matrices very quickly using BLAS and LAPACK.
As I am new to the Accelerate framework, which functions should I be using to find the eigenvalues and eigenvectors of a square matrix?
That depends a bit on the character of the matrix that you wish to decompose. There are different routines in LAPACK for symmetric/Hermitian matrices, banded diagonal matrices, and general matrices. If you have a general matrix (with no particular structure) you will need to use the nonsymmetric eigensolver routines, which work via the Schur decomposition. The routines are divided between single and double precision and between matrices with real or complex elements, as is all of LAPACK.
The general eigenproblem solver routines are named SGEEV, CGEEV, DGEEV, and ZGEEV, where S = single-precision real, C = single-precision complex, D = double-precision real, and Z = double-precision complex.
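Those routines are callable from anywhere LAPACK is exposed; as a sketch of the calling pattern (shown via scipy's LAPACK wrappers rather than Accelerate, but DGEEV's inputs and outputs are the same: real and imaginary eigenvalue parts plus left/right eigenvector matrices):

```python
import numpy as np
from scipy.linalg.lapack import dgeev

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
# wr/wi: real and imaginary parts of the eigenvalues;
# vl/vr: left and right eigenvector matrices; info == 0 on success.
wr, wi, vl, vr, info = dgeev(A)
```

From C against Accelerate, the equivalent is calling dgeev_ with the workspace-query convention (call once with lwork = -1 to size the work array, then again to solve).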
IBM has a good online reference for LAPACK; here's a link describing the above routines.
Good luck!
Paul