Matrix inversion is difficult in MATLAB when dealing with a sparse matrix - matlab

I am implementing an algorithm that involves sparse matrix inversion.
The code:
kapa_t=phi_t*F_x'*(inv(inv(R_t)+F_x*phi_t*F_x'))*F_x*phi_t;
I wrote the code in MATLAB and it gives me the warning Matrix is close to singular or badly scaled. Results may be inaccurate. RCOND = 4.419037e-18. Matrix inversion is an important part of my algorithm, so I went looking for a more reliable way to do it and found this link: how to compute inverse of a matrix accurately?
So I changed my code as suggested:
kapa_t=phi_t*F_x'*(inv(inv(R_t)+F_x*phi_t*F_x'))\F_x*phi_t;
After that I get an error:
Error using \
Matrix dimensions must agree.
Error in EKF_SLAM_known (line 105)
kapa_t=phi_t*F_x'*(inv(inv(R_t)+F_x*phi_t*F_x'))\F_x*phi_t;
The algorithm I am using is
Here line 8 of the algorithm is equivalent to the code kapa_t=phi_t*F_x'*(inv(inv(R_t)+F_x*phi_t*F_x'))*F_x*phi_t;
What should I do with my code to get rid of this warning?

kapa_t=phi_t*F_x'*(inv(inv(R_t)+F_x*phi_t*F_x'))\F_x*phi_t;
should be
kapa_t=phi_t*F_x'*((inv(R_t)+F_x*phi_t*F_x')\F_x)*phi_t;
The A\B operator is roughly equivalent to inv(A)*B when A is square, so you don't need the outer inv. The dimension error in your version comes from operator precedence: * and \ bind at the same level and associate left to right, so the whole product phi_t*F_x'*(inv(...)) became the left operand of \ rather than just the term in parentheses.
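To see the difference concretely, here is a minimal sketch with made-up matrices (the sizes and values are my own stand-ins, not taken from the question); S plays the role of inv(R_t)+F_x*phi_t*F_x':
S = rand(4) + 4*eye(4);   % stand-in for inv(R_t)+F_x*phi_t*F_x', kept well conditioned
B = rand(4,6);            % stand-in for F_x*phi_t
x1 = inv(S)*B;            % explicit inverse
x2 = S\B;                 % backslash form: same result, computed more accurately
norm(x1 - x2)             % should be on the order of machine precision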

Related

Solving a sparse matrix using \ in matlab

I am trying to solve a problem of the form Ax=b in which I have a tridiagonal matrix A, and a full vector b.
When doing x=A\b I get the error message:
Warning: Matrix is close to singular or badly scaled. Results may be
inaccurate. RCOND = 3.301735e-150.
I have theorised that this may be due to the sparsity of matrix A. Is there a more efficient built-in way of dealing with this in MATLAB?
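For reference, a minimal sketch of building a tridiagonal matrix in sparse storage and solving it with \ (the sizes and values below are made up). Note that the warning is about the conditioning of A, not its sparsity, so sparse storage alone will not remove it:
n = 1000;
e = ones(n,1);
A = spdiags([e -2*e e], -1:1, n, n);   % tridiagonal matrix in sparse storage
b = rand(n,1);
x = A\b;                               % backslash uses a banded/sparse direct solver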

How to multiply a matrix of n x m with a matrix of n x m x p (different dimensions) in MATLAB

In my current analysis, I am trying to multiply a matrix (flm) of dimension n x m with the inverse of an n x m x p matrix, and then use this result to multiply it by the inverse of the matrix (flm).
I was trying using the following code:
flm = repmat(Data.fm.flm(chan,:),[1 1 morder]); % chan is a 1-by-3 vector
A = (flm(:,:,:)/A_inv(:,:,:))/flm(:,:,:);
However, due to the dimension mismatch, I am getting the following error message:
Error using ==> mrdivide
Inputs must be 2-D, or at least one
input must be scalar.
To compute elementwise RDIVIDE, use
RDIVIDE (./) instead.
I have no idea how to proceed without using a for loop, so does anyone have any suggestions?
I think you are looking for a way to conveniently multiply matrices when one is of higher dimensionality than the other. In that case you can use bsxfun to automatically 'expand' the smaller matrix.
x = rand(3,4);
y = rand(3,4,5);
bsxfun(@times,x,y)
It is quite simple, and very efficient.
Make sure to check out doc bsxfun for more examples.
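A quick usage sketch; the per-page division at the end is an assumption about what the original question actually needs, not part of the answer above:
x = rand(3,4);
y = rand(3,4,5);
z = bsxfun(@times, x, y);        % elementwise product; x is reused for every page of y
size(z)                          % returns 3 4 5

% If a true matrix division per page is what is needed, a loop over the
% third dimension is a straightforward alternative:
A = rand(4,4,5);                 % stand-in for the n-by-m-by-p array
B = rand(3,4);                   % stand-in for flm
C = zeros(3,4,5);
for k = 1:size(A,3)
    C(:,:,k) = B / A(:,:,k);     % mrdivide applied slice by slice
end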

How to calculate the squared inverse of a matrix in Matlab

I have to calculate:
gamma=(I-K*A^-1)*OLS;
where I is the identity matrix, K and A are diagonal matrices of the same size, and OLS is the ordinary least squares estimate of the parameters.
I do this in Matlab using:
gamma=(I-A\K)*OLS;
However I then have to calculate:
gamma2=(I-K^2*A^-2)*OLS;
I calculate this in Matlab using:
gamma2=(I+A\K)*(I-A\K)*OLS;
Is this correct?
Also I just want to calculate the variance of the OLS parameters:
The formula is simple enough:
Var(B)=sigma^2*(Delta)^-1;
Where sigma is a constant and Delta is a diagonal matrix containing the eigenvalues.
I tried doing this by:
Var_B=Delta\sigma^2;
But it comes back saying that matrix dimensions must agree.
Please can you tell me how to calculate Var(B) in Matlab, as well as confirm whether or not my other calculations are correct?
In general, matrix multiplication does not commute, so A^2 - B^2 is not equal to (A+B)*(A-B). However, your case is special because one of the terms is the identity matrix, which commutes with everything, so (I+X)*(I-X) = I - X^2. Your method for finding gamma2 is therefore valid.
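A small numeric check of that identity, with made-up diagonal matrices:
n = 5;
A = diag(rand(n,1) + 1);         % nonsingular diagonal matrix
K = diag(rand(n,1));
I = eye(n);
lhs = (I + A\K)*(I - A\K);
rhs = I - K^2*inv(A)^2;          % i.e. I - K^2*A^-2
norm(lhs - rhs)                  % should be on the order of machine precision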
'Var_B=Delta\sigma^2' is not a valid mldivide expression, because sigma^2 is a scalar while Delta is a matrix; see the documentation. Try Var_B=sigma^2*inv(Delta). The function inv returns the matrix inverse. Although inv could also be used in your expressions for gamma or gamma2, the \ operator is preferred there for better accuracy and faster computation.
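A minimal sketch of that computation, assuming sigma is a scalar and Delta is a nonsingular diagonal matrix (the values below are stand-ins):
sigma = 0.5;                                    % stand-in scalar
Delta = diag([2 3 5]);                          % stand-in diagonal matrix
Var_B = sigma^2 * inv(Delta);                   % Var(B) = sigma^2 * Delta^-1
Var_B = Delta \ (sigma^2 * eye(size(Delta,1))); % equivalent backslash form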

How to find if a matrix is Singular in Matlab

I use the function below to generate the betas for a given set of guess lambdas from my optimiser.
When running I often get the following warning message:
Warning: Matrix is singular to working precision.
In NSS_betas at 9
In DElambda at 19
In Individual_Lambdas at 36
I'd like to be able to exclude any betas that form a singular matrix from the solution set; however, I don't know how to test for it.
I've been trying to use rcond(), but I don't know where to make the cut-off between singular and non-singular.
Surely, if MATLAB is generating the warning message, it already knows whether the matrix is singular or not, so if I could just find where that variable is stored, I could use that?
function betas=NSS_betas(lambda,data)
% Build the Nelson-Siegel-Svensson (NSS) design matrix for the given lambdas
% and solve for the betas by least squares.
mats=data.mats2';    % maturities
yM=data.y2';         % observed yields
nObs=size(yM,1);
G= [ones(nObs,1) ...
    (1-exp(-mats./lambda(1)))./(mats./lambda(1)) ...
    ((1-exp(-mats./lambda(1)))./(mats./lambda(1))-exp(-mats./lambda(1))) ...
    ((1-exp(-mats./lambda(2)))./(mats./lambda(2))-exp(-mats./lambda(2)))];
betas=G\yM;          % least-squares solution for the betas
r=rcond(G);          % reciprocal condition number estimate (currently unused)
end
Thanks for the advice.
I tested all three examples below after setting the lambda values to be equal, giving a singular matrix:
if (~isinf(G))
r=rank(G);
r2=rcond(G);
r3=min(svd(G));
end
r = 3, r2 = 2.602085213965190e-16, r3 = 1.075949299504113e-15
So in this test rank() and rcond() worked, assuming I take the benchmark values given below.
However what happens when I have two values that are close but not exactly equal?
How can I decide what is too close?
rcond is the right way to go here. If it nears the machine precision of zero, your matrix is singular. I usually go with:
if( rcond(A) < 1e-12 )
% This matrix doesn't look good
end
You can experiment with a value that suits your needs, but taking the inverse of a matrix that is even close to singular with MATLAB can produce garbage results.
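As a sketch, the check could be wired into the function from the question like this (the 1e-12 threshold is an arbitrary choice, and rcond needs a square G, which the question's own rcond(G) call suggests is the case here):
r = rcond(G);                              % reciprocal condition estimate of G
if r < 1e-12
    betas = NaN(size(G,2), size(yM,2));    % flag this lambda guess as unusable
else
    betas = G\yM;
end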
You could compare the result of rank(G) with the number of columns of G. If the rank is less than the column dimension, you will have a singular matrix.
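In code, with the G from the question, that check could look like:
if rank(G) < size(G,2)
    % G is rank deficient, so treat it as singular
end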
You can also check this by:
min(svd(A))>eps
and verifying that the smallest singular value is larger than eps, or any other numerical tolerance that is relevant to your needs (the expression returns 1 or 0).
Here's more info about it...
The condition number (maximal singular value / minimal singular value) is another good indicator:
cond(A)
It uses svd. It should be as close to 1 as possible. Very large values mean that the matrix is almost singular. Inf means that it is precisely singular.
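For example (the 1e12 cutoff is an arbitrary choice):
if cond(A) > 1e12
    % treat A as numerically singular
end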
Note that almost all of the methods mentioned in the other answers rely on the SVD in some way:
There are special tools designed for this problem, appropriately called "rank-revealing matrix factorizations". To my best (albeit a little old) knowledge, a good enough way to decide whether an n x n matrix A is nonsingular is to go with
det(A) ≠ 0 ⇔ rank(A) = n
and use a rank-revealing QR factorization of A:
AP = QR
where Q is orthogonal, P is a permutation matrix and R is an upper triangular matrix whose diagonal elements decrease in magnitude along the diagonal.
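A minimal MATLAB sketch of that idea, using the pivoted QR that qr returns with three outputs (the tolerance below is my own heuristic, not a canonical choice):
A = magic(4);                            % example matrix, exactly rank 3
[Q, R, E] = qr(A);                       % pivoted QR: A*E = Q*R, |diag(R)| nonincreasing
tol = max(size(A)) * eps(abs(R(1,1)));   % tolerance on the diagonal of R
numericalRank = nnz(abs(diag(R)) > tol); % should agree with rank(A), i.e. 3 here
isNonsingular = (numericalRank == size(A,2));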

Matlab inverse issue - fmri data - partial correlation algorithm

I'm using the following code to get a partial correlation matrix (original code from http://www.fmrib.ox.ac.uk/analysis/netsim/):
ic=-inv(cov(ts1)); % raw negative inverse covariance matrix
r=(ic ./ repmat(sqrt(diag(ic)),1,Nnodes)) ./ repmat(sqrt(diag(ic))',Nnodes,1); % use diagonal to get normalised coefficients
r=r+eye(Nnodes); % remove diagonal
My original matrix (ts1) is brain activity over a time course (the X variable) in multiple voxels, i.e. volumetric pixels 3X3 (the Y variable).
The problem is that I have more dependent variables (y, voxels) than independent variables (x, time course).
I get the following warning:
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 4.998365e-022.
Any thoughts on how to fix the code so I'll get the partial correlation between all of the voxels?
The warning is from Matlab having a problem inverting the covariance matrix.
One solution might be to try pinv()
http://www.mathworks.com/help/techdoc/ref/pinv.html
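As a sketch, that would just mean swapping inv for pinv in the first line of the snippet above; pinv computes the Moore-Penrose pseudoinverse via the SVD, so it does not warn on a rank-deficient covariance matrix (whether the resulting partial correlations are meaningful for your data is a separate question):
ic = -pinv(cov(ts1));    % pseudoinverse instead of the explicit inverse
r = (ic ./ repmat(sqrt(diag(ic)),1,Nnodes)) ./ repmat(sqrt(diag(ic))',Nnodes,1);  % normalise as before
r = r + eye(Nnodes);     % zero the diagonal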