LAPACK C code for matrix inversion

I am programming in C to implement matrix inversion by calling the zgetrf and zgetri routines. I tried two different matrices, M and N. The weird part is that M works perfectly, but when I add N to the code it shows a segmentation fault. I already used Matlab to check the result, and N is perfectly invertible. Could anyone help me out? Thank you.
[code posted as a screenshot; not reproduced here]
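Since the code itself was only posted as a screenshot, here is a minimal sketch (my own, not the asker's code) of how the two routines are typically called through LAPACKE; the helper name invert is hypothetical. A segfault that shows up only for a second matrix often comes from a pivot array or leading dimension still sized for the first one.

#include <stdlib.h>
#include <lapacke.h>

/* Invert an n-by-n complex matrix in place. Returns 0 on success;
   info > 0 from either routine means the matrix is singular. */
int invert(lapack_complex_double *a, lapack_int n)
{
    /* the pivot array must have length n for THIS matrix */
    lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);
    if (!ipiv) return -1;

    /* LU factorization A = P*L*U, stored back into a */
    lapack_int info = LAPACKE_zgetrf(LAPACK_COL_MAJOR, n, n, a, n, ipiv);
    if (info == 0)
        /* inverse computed from the LU factors, again in place */
        info = LAPACKE_zgetri(LAPACK_COL_MAJOR, n, a, n, ipiv);

    free(ipiv);
    return (int)info;
}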

Related

Why is Matlab's mldivide so much better than dgels?

Solve Ax = b. Real double. A is overdetermined M x 2 with M >> 2; b is M x 1. I've run a ton of data through mldivide, and the results are excellent. I wrote a MEX routine with MKL LAPACKE_dgels, and it's nowhere near as good: the results have a ton of noise, and the underlying signal is barely there. I checked the routine against the MKL example results first. I've searched through the mldivide doc (flowchart) and the SO questions; all I found is that Matlab uses QR factorization for overdetermined rectangular systems.
What should I try next? Am I using the wrong LAPACK routine? Please help guide me in the right direction.
Update:
To within an E-15 floating-point difference on the solution vector, Intel MKL LAPACKE_dgels gives the same result as Matlab mldivide for real double overdetermined (rectangular) problems. As far as I can tell, both use the QR method.
Beware the residuals returned from dgels: they do not equal b - Ax. Many of them are close to that value, whereas some are far from it.
The problem was not the solution x, but rather the residuals returned from DGELS. This routine modifies its input arrays in place. The MKL doc says the input array b is overwritten with the output vector x in the first N rows, followed by the residuals in rows N+1 to M. I confirmed this with my code.
My mistake was aligning the residual in b[N+1] with the original input b[1], and making further algorithmic decisions on that. The correct alignment of residual to original input is index to index: b[N+1] goes with b[N+1]. The first N residuals are not available; you have to compute them yourself afterwards.
The doc doesn't say they are residuals per se; rather, specifically:
the residual sum of squares for the solution in each column is given by the sum of squares of modulus of elements n+1 to m in that column.
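To illustrate the layout described above, here is a minimal sketch (my own example values, assuming column-major LAPACKE_dgels with one right-hand side) showing where x and the residual tail end up:

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* 4x2 overdetermined least-squares problem A*x ~ b, column-major */
    double a[] = {1, 1, 1, 1,     /* column 1 */
                  1, 2, 3, 4};    /* column 2 */
    double b[] = {6, 5, 7, 10};
    lapack_int m = 4, n = 2;

    /* a and b are overwritten: a with the QR factors, b with x then the tail */
    lapack_int info = LAPACKE_dgels(LAPACK_COL_MAJOR, 'N', m, n, 1, a, m, b, m);
    if (info != 0) return (int)info;

    /* b[0..n-1] holds the solution x */
    printf("x = (%g, %g)\n", b[0], b[1]);

    /* b[n..m-1] is only the tail whose sum of squares gives the residual
       sum of squares -- NOT the per-row residuals b - A*x. To get those,
       keep copies of A and b and compute b - A*x yourself. */
    double rss = 0;
    for (lapack_int i = n; i < m; ++i)
        rss += b[i] * b[i];
    printf("residual sum of squares = %g\n", rss);
    return 0;
}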

What is the difference between 'qr' and 'svd' in Matlab to get the singular vectors of a matrix?

Specifically, the following two pieces of code should ideally produce the same S and V. However, the second one is usually faster than the first in Matlab. Can someone tell me why?
Moreover, which method is more numerically stable?
Thanks.
[~,S,V] = svd(B,'econ');

[Qc,Rc] = qr(B',0);
[U,S,~] = svd(Rc,'econ');
V = Qc*U;
The second method is not necessarily faster. For nearly square matrices it can be slower. Consider the Golub-Reinsch SVD algorithm as an example:
Its cost depends on which outputs you want to calculate (only S; S and V; or S, V, and U).
If you want to calculate S and V without performing any preprocessing, the required work is 4mn^2 + 8n^3.
If you perform a QR decomposition first, the Householder transformation needs (2/3)n^3 + n^2 + (1/3)n - 2. If your matrix is nearly square, i.e. m = n, you will not have gained much, since R is still m x n. However, if m is larger than n, you can reduce R to an n x n matrix (the thin QR factorization). Calculating U and S then adds 12n^3 for the SVD algorithm.
So, SVD only: 4mn^2 + 8n^3
SVD with QR: (12 + 2/3)n^3 + n^2 + (1/3)n - 2
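To make the comparison concrete (my own numbers, not from the original answer): for m = 10000 and n = 100, the first count gives 4mn^2 + 8n^3 = 4*10^8 + 8*10^6, about 4.1*10^8 flops, while the second gives roughly (12 + 2/3)*10^6, about 1.3*10^7 flops, a factor of about 30 in favor of doing QR first.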
However, most SVD algorithms include an (R-)bidiagonalization step, which reduces the work to 2mn^2 + 11n^3.
You can also apply QR, then the R-bidiagonalization, and then the SVD to make it even faster, but it all depends on your matrix dimensions.
Matlab uses the LAPACK libraries for the SVD. You can look up the exact runtimes in the LAPACK documentation; they are approximately the same as for the algorithm above.
Hope this helps.

Solve system of linear equations in Spark

I have a system of linear equations in the form of Ax = b to solve in Spark.
A is n by n
b is n by 1
I represent A as an IndexedRowMatrix or RowMatrix, and b as a DenseMatrix or DenseVector.
How can I solve this system to calculate the x vector?
If the suggested solution is Cholesky decomposition, would you please guide me through doing it, as it is not part of the public API? For example, if the original matrix A is:
1,2,3,4
2,1,5,6
3,5,1,7
4,6,7,1
and b is:
5,6,7,8
What is passed as an argument to the solve method?
Any solution other than inverting A would be very helpful.
I don't know if the question is still relevant to you, but another solution is to invert the coefficient matrix and then multiply the inverse by the vector b. There are many matrix inversion algorithms; one such algorithm can be found in the following paper:
SPIN: A Fast and Scalable Matrix Inversion Method in Apache Spark
You can find the complete code on GitHub as well.
Cheers!!
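For reference, and leaving Spark aside: on a single machine the same 4x4 example can be solved directly with LAPACK. A minimal sketch using dgesv (LU with partial pivoting, my choice here, since the example A is symmetric but not positive definite: its 2x2 leading minor has determinant 1 - 4 = -3, so a Cholesky-based solve would reject it):

#include <stdio.h>
#include <lapacke.h>

int main(void)
{
    /* the example A; it is symmetric, so the storage order is irrelevant */
    double a[16] = {1, 2, 3, 4,
                    2, 1, 5, 6,
                    3, 5, 1, 7,
                    4, 6, 7, 1};
    double b[4] = {5, 6, 7, 8};
    lapack_int ipiv[4];

    /* solves A*x = b; b is overwritten with x */
    lapack_int info = LAPACKE_dgesv(LAPACK_COL_MAJOR, 4, 1, a, 4, ipiv, b, 4);
    if (info != 0) {
        fprintf(stderr, "dgesv failed: info = %d\n", (int)info);
        return 1;
    }
    printf("x = (%g, %g, %g, %g)\n", b[0], b[1], b[2], b[3]);
    return 0;
}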

Numerical instability of a Kalman filter in Matlab

I am trying to run a standard Kalman filter algorithm to calculate likelihoods, but I keep getting a problem with a non-positive-definite variance matrix when calculating the normal densities.
I've researched a little and seen that there may in fact be some numerical instability; I tried some numerical ways to avoid a non-positive-definite matrix, using both the Cholesky decomposition and its LDL' variant.
I am using Matlab.
Does anyone have any suggestions?
Thanks.
I have encountered what might be the same problem before, when I needed to run a Kalman filter for long periods: over time my covariance matrix would degenerate. It might just be a problem of losing symmetry due to numerical error. One simple way to enforce that your covariance matrix (let's call it P) remains symmetric is to do:
P = (P + P')/2;  % where P' is transpose(P)
right after estimating P.
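The same trick outside Matlab, as a minimal C sketch (the function name symmetrize is mine; P is assumed stored row-major):

/* Average P with its transpose in place so it stays exactly symmetric. */
void symmetrize(double *p, int n)
{
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            double avg = 0.5 * (p[i * n + j] + p[j * n + i]);
            p[i * n + j] = avg;
            p[j * n + i] = avg;
        }
}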
post your code.
As a rule of thumb, if the model is not accurate and the regularization (i.e. the process noise matrix Q) is not sufficiently "large", underfitting will occur and the covariance matrix of the estimator will be ill-conditioned. Try fine-tuning your Q matrix.
Even the Joseph form of the Kalman filter can run into numerical trouble, as any old-timer who once worked with a single-precision implementation of the filter can tell you. This problem was discovered ages ago and prompted a lot of research into implementing the filter in a stable manner. Probably the best-known stable implementation is the UD filter, where the covariance matrix is factorized as UDU' and the two factors are updated and propagated using special formulas (see Thornton and Bierman). U is a unit upper triangular matrix (with 1s on its diagonal), and D is a diagonal matrix.
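To show what the UD factorization itself looks like, here is a minimal C sketch (my own illustration, not Thornton/Bierman's update formulas, which work on U and D directly without ever re-forming P): it factors a symmetric positive-definite P, stored row-major, as P = U*D*U'.

void ud_factor(const double *P, double *U, double *d, int n)
{
    /* work from the last column backwards; at step j the columns
       k > j of U and the entries d[k] are already known */
    for (int j = n - 1; j >= 0; --j) {
        double s = P[j * n + j];
        for (int k = j + 1; k < n; ++k)
            s -= d[k] * U[j * n + k] * U[j * n + k];
        d[j] = s;                 /* diagonal entry of D */
        U[j * n + j] = 1.0;       /* U is unit upper triangular */
        for (int i = 0; i < j; ++i) {
            s = P[i * n + j];
            for (int k = j + 1; k < n; ++k)
                s -= d[k] * U[i * n + k] * U[j * n + k];
            U[i * n + j] = s / d[j];
        }
        for (int i = j + 1; i < n; ++i)
            U[i * n + j] = 0.0;   /* strictly lower part stays zero */
    }
}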

A specific analytic integration cannot be done

The following definite integral cannot be done in Matlab R2013a, although it can be done analytically in other mathematics programs. Why?
syms r M c real
assume(M>0)
assume(c>M)
y=1/(sqrt((r^2-M)*(r^2/c^2-1))*r);
int(y,r,c,inf)
The answer is
atanh(sqrt(M)/c)/sqrt(M).
Thanks
There's another way to write the solution:
-log((-M-c^2+2*sqrt(M)*c)/(M-c^2))/(2*sqrt(M))
I don't use Matlab, but can you try assuming that M does not equal c^2?
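Either closed form is easy to check numerically. A minimal C sketch (the substitutions r = c/u and u = sin(t) are mine; they turn the integral into the smooth, finite integral of sin(t)/sqrt(c^2 - M*sin(t)^2) over [0, pi/2]):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* example values satisfying the post's assumptions M > 0, c > M */
    const double M = 2.0, c = 3.0;
    const int n = 1000000;
    const double half_pi = acos(-1.0) / 2.0;
    const double h = half_pi / n;

    /* midpoint rule on the transformed integrand */
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double s = sin((i + 0.5) * h);
        sum += s / sqrt(c * c - M * s * s);
    }
    printf("numeric : %.12f\n", sum * h);
    printf("analytic: %.12f\n", atanh(sqrt(M) / c) / sqrt(M));
    return 0;
}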