MATLAB inverse issue - fMRI data - partial correlation algorithm

I'm using the following code to get a partial correlation matrix (original code from http://www.fmrib.ox.ac.uk/analysis/netsim/):
ic=-inv(cov(ts1)); % raw negative inverse covariance matrix
r=(ic ./ repmat(sqrt(diag(ic)),1,Nnodes)) ./ repmat(sqrt(diag(ic))',Nnodes,1); % use diagonal to get normalised coefficients
r=r+eye(Nnodes); % remove diagonal
My original matrix (ts1) is brain activity over a time course (the X variable) in multiple voxels, i.e. 3x3 volumetric pixels (the Y variable).
The problem is that I have more dependent variables (y, the voxels) than independent variables (x, the time course).
I get the following warning:
Warning: Matrix is close to singular or badly scaled.
Results may be inaccurate. RCOND = 4.998365e-022.
Any thoughts on how to fix the code so I'll get the partial correlation between all of the voxels?

The warning is from MATLAB having a problem inverting the covariance matrix.
One solution might be to try pinv():
http://www.mathworks.com/help/techdoc/ref/pinv.html
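For reference, here is a minimal sketch of the question's snippet with pinv() swapped in (keeping the variable names ts1 and Nnodes from the question; treat it as a starting point, not a validated fix):
ic=-pinv(cov(ts1)); % pseudoinverse tolerates a rank-deficient covariance matrix
r=(ic ./ repmat(sqrt(diag(ic)),1,Nnodes)) ./ repmat(sqrt(diag(ic))',Nnodes,1); % normalise by the diagonal
r=r+eye(Nnodes); % remove diagonal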

Related

Why is this polynomial equation badly conditioned?

I have a 1x1024 matrix, so I'd like to estimate a polynomial equation:
X= (0:1023)'
Y= acquired data. A 1024 element vector
Then I try this in MATLAB:
polyfit(X,Y,5)
But MATLAB produces an abnormal result with a warning:
Warning: Polynomial is badly conditioned. Add points with distinct X values, reduce the degree of the ...
I don't understand what I am doing wrong.
Update
I got a bunch of numbers like this.
Y=
-0.0000000150
...
0.00001
...
0
...
0.17
X= 0~255
polyfit(X,Y,4)
I got a polynomial, but it does not match the original curve.
Are there any options to make polyfit's curve match the original curve?
The problem can be attributed to the type of coefficient matrix that polyfit builds from the x vector: a Vandermonde matrix.
When
the elements of the x vector vary too much in magnitude, and
the degree of the fitting polynomial is too high,
you get an ill-conditioned matrix, and the associated linear system cannot be solved reliably.
Try to center and scale your x vector first, before applying polyfit, as advised at the bottom of the polyfit help page:
Since the columns in the Vandermonde matrix are powers of the vector x, the condition number of V is often large for high-order fits, resulting in a singular coefficient matrix. In those cases centering and scaling can improve the numerical properties of the system to produce a more reliable fit.
(my emphasis)
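As a concrete sketch of that advice: when called with three outputs, polyfit centers x by its mean and scales it by its standard deviation, and polyval accepts the same mu to evaluate the fit (assuming x and y are the vectors from the question):
[p, S, mu] = polyfit(x, y, 5); % mu = [mean(x); std(x)]; p is expressed in the centered/scaled variable
yfit = polyval(p, x, [], mu); % polyval applies the same centering/scaling before evaluating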
The warning appears because the data you are supplying to polyfit isn't suitable for the polynomial degree you have chosen: there is not enough variability in the data to achieve a good fit at that degree.
The solution is either to get more points, so that a polynomial of the desired degree can be fit, or to decrease the degree of the polynomial.
Try values that are less than 5... 4, 3 or perhaps 2:
coeff = polyfit(x, y, 4);
%// or
%coeff = polyfit(x, y, 3);
%coeff = polyfit(x, y, 2);
Try each degree until you don't get the warning anymore. However, without the actual data, I can only speculate what's wrong, and this is my best guess.

Multivariate Gaussian distribution formula implementation

I have a certain problem while implementing multivariate Gaussian distribution for anomaly detection.
I am referring to the formula from Andrew Ng's notes:
http://www.holehouse.org/mlclass/15_Anomaly_Detection.html
Below is the problem I face.
Suppose I have a data set with 2 features and m training examples, i.e. n=2, and I want to determine the multivariate Gaussian probability p(x; mu; sigma), which should be an [m x 1] matrix because it produces an estimated Gaussian value per training example from the feature correlations.
The problem I face is that I am unable to use the formula to produce the [m x 1] matrix.
I am using Octave as IDE to develop the algorithm.
Below is a snapshot showcasing my problem.
Consider the multiplication inside the red boundary of the equation: it evaluates to just a real number, not an [m x 1] matrix.
Please help me understand where I am going wrong.
Thanks
I think you got the dimensions wrong.
Let's assume you have 2-dimensional (n=2) data of m instances. We can store this data as an n-by-m matrix in MATLAB (columns are data instances, rows represent features/dimensions). In this case we have:
X the data matrix of size nxm, each instance x = X(:,i) is a vector of size nx1 (column vector in our convention).
mu is the mean vector (mu = mean(X,2)). This is also a column vector of same size as an instance nx1.
sigma is the covariance matrix (sigma = cov(X.')). It has size nxn (it describes how each dimension co-varies with every other dimension).
So the part that you highlighted in red involves expressions of the following sizes:
= ([nx1] - [nx1])' * [nxn] * ([nx1] - [nx1])
= [1xn] * [nxn] * [nx1]
= 1x1
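So for each instance the quadratic form is a scalar; you get the [m x 1] result by evaluating it once per column of X. A minimal Octave/MATLAB sketch that does this for all m instances at once (assuming X is the n-by-m data matrix from the convention above; bsxfun keeps it compatible with older releases):
n = size(X, 1); % number of features
mu = mean(X, 2); % n-by-1 mean vector
sigma = cov(X.'); % n-by-n covariance matrix
Xc = bsxfun(@minus, X, mu); % n-by-m centered data
mahal = sum((sigma \ Xc) .* Xc, 1); % (x-mu)'*inv(sigma)*(x-mu) for every column; 1-by-m
p = (2*pi)^(-n/2) / sqrt(det(sigma)) * exp(-0.5 * mahal);
p = p(:); % m-by-1, as desired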

Is it possible for the determinant of a (256*256) matrix to be infinite?

I have (256*1) feature vectors that come from (16*16) gray images; the number of vectors is 550.
When I compute the sample covariance of these vectors and then the determinant of the covariance matrix, the answer is inf.
Is it possible for the determinant of a finite matrix with values in a finite range (0:255) to be infinite, or am I making a mistake somewhere?
In fact, I want to do classification with Bayesian estimation; my distribution is Gaussian, and when I compute the determinant it is inf, so the final answer (the likelihood) is zero.
Part of my code:
Mean = mean(dataSet,2);
MeanMatrix = Mean*ones(1,NoC);
Xc = double(dataSet)-MeanMatrix; % center the data at the origin
Sigma = (1/NoC)*Xc*Xc'; % sample covariance matrix
Parameters(i).M = Mean; % keep the mean as a column vector
Parameters(i).C = Sigma;
likelihoods(i) = (1/(2*pi*sqrt(det(Parameters(i).C)))) * exp(-0.5 * (double(X)-Parameters(i).M)' * inv(Parameters(i).C) * (double(X)-Parameters(i).M));
The variable i indexes my classes, and the variable X is my feature vector.
Can the determinant of such a matrix be infinite? No, it cannot.
Can it evaluate as infinite? Yes, definitely.
Here is an example of a matrix with a finite number of elements, none of them too big, whose determinant will rarely evaluate as a finite number:
det(rand(255)*255)
In your case, probably what is happening is that you have too few datapoints to produce a full-rank covariance matrix.
For instance, if you have N examples, each with dimension d, and N<d, then your d x d covariance matrix will not be full rank and will have a determinant of zero.
In this case a matrix inverse (precision matrix) does not exist. However, attempting to compute the determinant of the inverse (by taking 1/|X'*X| = 1/0 -> Inf) will produce an infinite value.
One way to get around this problem is to set the covariance to X'*X+eps*eye(d), where eps is a small value. This technique corresponds to placing a weak prior distribution on elements of X.
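As an illustration of both points, a hedged sketch in the question's notation (Xc, NoC, X, and Mean as above): regularize the covariance, and work with the log-determinant via a Cholesky factor so the huge determinant never overflows:
d = size(Xc,1); % feature dimension (256 here)
Sigma = (1/NoC)*Xc*Xc' + 1e-6*eye(d); % weak prior keeps Sigma full rank
R = chol(Sigma); % Sigma = R'*R, so log(det(Sigma)) = 2*sum(log(diag(R)))
logdetSigma = 2*sum(log(diag(R)));
xc = double(X) - Mean; % X is the feature vector, Mean the class mean (both d-by-1)
loglik = -0.5*(d*log(2*pi) + logdetSigma + xc'*(Sigma\xc)); % exponentiate only if you must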
No, it is not possible mathematically. The matrix may be singular, but a matrix whose elements all have finite values has a finite determinant.

How to calculate the squared inverse of a matrix in Matlab

I have to calculate:
gamma=(I-K*A^-1)*OLS;
where I is the identity matrix, K and A are diagonal matrices of the same size, and OLS is the ordinary least squares estimate of the parameters.
I do this in Matlab using:
gamma=(I-A\K)*OLS;
However I then have to calculate:
gamma2=(I-K^2*A^-2)*OLS;
I calculate this in Matlab using:
gamma2=(I+A\K)*(I-A\K)*OLS;
Is this correct?
Also I just want to calculate the variance of the OLS parameters:
The formula is simple enough:
Var(B)=sigma^2*(Delta)^-1;
where sigma is a constant and Delta is a diagonal matrix containing the eigenvalues.
I tried doing this by:
Var_B=Delta\sigma^2;
But it comes back saying matrix dimensions must agree?
Please can you tell me how to calculate Var(B) in Matlab, and confirm whether or not my other calculations are correct?
In general, matrix multiplication does not commute, which makes A^2 - B^2 not equal to (A+B)*(A-B). However, your case is special: the identity matrix commutes with every matrix, so (I+B)*(I-B) = I - B^2 always holds. Your method for finding gamma2 is therefore valid.
Var_B=Delta\sigma^2 is not a valid mldivide expression, because the two operands must have the same number of rows; see the documentation. Try Var_B=sigma^2*inv(Delta) instead. The function inv returns a matrix inverse. Although inv could also be applied in your expressions for gamma and gamma2, the \ operator is recommended there for better accuracy and faster computation.
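Putting the pieces together, a minimal sketch under the question's assumptions (K, A, and Delta diagonal matrices of the same size, sigma a scalar, OLS a conforming vector):
I = eye(size(A));
gamma = (I - A\K)*OLS; % (I - K*A^-1)*OLS; A\K equals inv(A)*K, and diagonal matrices commute
gamma2 = (I + A\K)*(I - A\K)*OLS; % (I - K^2*A^-2)*OLS via the difference of squares
Var_B = sigma^2*inv(Delta); % or sigma^2*(Delta\I) to avoid inv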

Cryptic matlab error when computing eigs

I am trying to find the 2 eigenvectors corresponding to the 2 smallest eigenvalues of a Laplacian. I do this by
[v,c]=eigs(L,M,2,'SM');
where L is the Laplacian and M is the mass matrix.
As a result I get the error
Error using eigs/checkInputs/LUfactorAminusSigmaB (line 1041)
The shifted operator is singular. The shift is an eigenvalue.
Try to use some other shift please.
Error in eigs/checkInputs (line 855)
[L,U,pp,qq,dgAsB] = LUfactorAminusSigmaB;
Error in eigs (line 94)
[A,Amatrix,isrealprob,issymA,n,B,classAB,k,eigs_sigma,whch, ...
Does this mean I am doing something wrong, or is MATLAB just choosing a bad initial guess for its iteration process?
The matrices I am using should have a decent condition number...
I ran into the same problem while implementing normalized cuts segmentation. The condition number is actually infinite because the smallest eigenvalue is 0, and this is basically what MATLAB's error message is about (it runs an LU factorization first).
I just added a small multiple of the identity, 10*eps*speye, to the normalized Laplacian to improve conditioning, and that fixed it.
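In code, that fix looks roughly like this (assuming L is the sparse normalized Laplacian and M the mass matrix from the question):
n = size(L,1);
Lreg = L + 10*eps*speye(n); % nudge the zero eigenvalue away from the 'SM' shift
[v,c] = eigs(Lreg, M, 2, 'SM');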
I had the same problem with the eigs function, so I went the long (and maybe stupid) way, but it did the job for me, as my problem is not that big (I will try to keep your notation):
% Solve the eigenvalue problem using the full matrices
[v,c] = eig(full(L),full(M));
% Sort the eigenvalues in descending order; assuming all eigenvalues have
% negative real parts, the smallest in magnitude then come first
[E,P] = sort(real(diag(c)),'descend');
% P contains (in order) the indices of the permutation applied by sort.
% Obtain the two eigenvectors corresponding to the 2 smallest eigenvalues:
for k = 1:2
index = P(k);
lambda(k) = c(index,index); % use this to check that c(index,index) equals E(k)
eigvec(:,k) = v(:,index); % corresponding eigenvector
end
Hope this helps
G