I am trying to solve a generalized eigenvector/eigenvalue problem in Matlab. For this, I have tested two methods.
If the generalized problem is formulated as:
A*Phi = B*Phi*Lambda
then we can multiply both sides by B^(-1), giving:
B^(-1)*A*Phi = Phi*Lambda
So, from a theoretical point of view, it reduces to the simple and classical eigenvalue problem.
Finally, in Matlab, I simply did, with A=FISH_sp and B=FISH_xc :
[Phi, Lambda] = eig(inv(FISH_xc)*FISH_sp);
But the results are not correct when I then perform a simple Fisher synthesis (the constraints are far too poor, and NaN values appear). I don't know why I don't get the same results as with the second method below.
The second method comes from the following paper.
To summarize, the algorithm used is described on page 7. I have followed all the steps of this algorithm, and it seems to give better results when I perform the Fisher synthesis.
Here is the relevant part (sorry, I don't think LaTeX is available on Stack Overflow):
Here is my little Matlab script for this method:
% Diagonalize A = FISH_sp and B = FISH_xc
[V1,D1] = eig(FISH_sp);
[V2,D2] = eig(FISH_xc);
% Applying each step of algorithm 1 on page 7
phiB_bar = V2*(D2.^(0.5)+1e-10*eye(7))^(-1);
barA = inv(phiB_bar)*FISH_sp*phiB_bar;
[phiA, vA] = eig(barA);
Phi = phiB_bar*phiA;
So at the end, I obtain the eigenvector matrix (Phi) and the diagonal eigenvalue matrix (vA).
Now, I would like to make the link between this generalized problem and the possible common eigenvectors of the A and B matrices (respectively FISH_sp and FISH_xc). Is there a way to do this?
Indeed, what I have done up to now is find a parallel relation between A*Phi and B*Phi, linked by the diagonal matrix Lambda. Maybe we could rearrange this relation such that:
A*Phi'=Phi'*Lambda_A'
and
B*Phi'=Phi'*Lambda_B'
From a numerical point of view, why don't I get the same results between the method in 1) and the method in 3)? I mean for the Phi eigenvector matrix and the Lambda diagonal matrix.
After all, it is the same formulation.
EDIT :
I get wrong results if I claim that Phi diagonalizes both the A=FISH_sp and B=FISH_xc matrices.
Indeed, by doing :
% Marginalizing over uncommon parameters between the two matrices
COV_GCsp_first = inv(FISH_GCsp);
COV_XC_first = inv(FISH_XC);
COV_GCsp = COV_GCsp_first(1:N,1:N);
COV_XC = COV_XC_first(1:N,1:N);
% Invert to get Fisher matrix
FISH_sp = inv(COV_GCsp);
FISH_xc = inv(COV_XC);
% Diagonalize
[V1,D1] = eig(FISH_sp);
[V2,D2] = eig(FISH_xc);
% Build phi matrix
% V2 corresponds to eigen vectors of FISH_xc
phiB_bar = V2*diag(diag(D2.^(-0.5)));
% DEBUG : check identity matrix => OK, Identity matrix found !
id = (phiB_bar')*FISH_xc*phiB_bar
% phi matrix
barA = (phiB_bar')*FISH_sp*phiB_bar
[phiA, vA] = eig(barA);
phi = phiB_bar*phiA;
% Check eigenvalues : OK, columns of eigenvalues found!
FISH_sp*V1./V1
% Check eigenvalues : OK, columns of eigenvalues found!
FISH_xc*V2./V2
% Check if phi diagonalizes FISH_sp : NOT OK, not identical eigenvalues
FISH_sp*phi./phi
% Check if phi diagonalizes FISH_xc : NOT OK, not identical eigenvalues
FISH_xc*phi./phi
So I don't find that the matrix of eigenvectors Phi diagonalizes A and B, since the expected eigenvalues do not appear as columns of identical values.
On the other hand, I do recover the eigenvalues D1 and D2 coming from:
[V1,D1] = eig(FISH_sp);
[V2,D2] = eig(FISH_xc);
% Check eigenvalues : OK, columns of eigenvalues D1 found!
FISH_sp*V1./V1
% Check eigenvalues : OK, columns of eigenvalues D2 found!
FISH_xc*V2./V2
How could I fix this wrong result? I am talking about the ratios
FISH_sp*phi./phi
FISH_xc*phi./phi
which do not give a single repeated value down each column, as they do for V1 and V2. In the paper, they say that Phi diagonalizes both A=FISH_sp and B=FISH_xc, but I can't reproduce this.
If someone could see where my error is...
You defined barA = inv(phiB_bar)*FISH_sp*phiB_bar. From Eq. (39) in the manuscript, it looks like it should be barA = transpose(phiB_bar)*FISH_sp*phiB_bar instead.
Also, your method 1 fails when B is singular (the inverse does not exist). MATLAB's eig(A,B), however, should also handle singular B, if memory serves me well.
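A minimal sketch of the corrected step, assuming FISH_sp and FISH_xc are the symmetric, positive-definite 7x7 Fisher matrices from the question, with a cross-check against the built-in generalized solver:
% Whitening step with the transpose (cf. Eq. (39)), not the inverse
[V2, D2] = eig(FISH_xc);
phiB_bar = V2 * diag(diag(D2).^(-0.5));  % so that phiB_bar' * FISH_xc * phiB_bar = I
barA = phiB_bar' * FISH_sp * phiB_bar;   % symmetric, unlike inv(phiB_bar)*FISH_sp*phiB_bar
[phiA, vA] = eig(barA);
Phi = phiB_bar * phiA;
% Cross-check: MATLAB's built-in generalized eigensolver,
% which also copes with a singular B
[Phi2, Lambda2] = eig(FISH_sp, FISH_xc);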
Related
In Matlab, I have to study the possible existence of a common eigenvector basis between two Fisher matrices, FISH_sp and FISH_xc, both of size 7x7 and diagonalizable.
I get from my computation the following result:
>> x=null(FISH_sp*FISH_xc-FISH_xc*FISH_sp)
x =
-0.0085
-0.0048
-0.2098
0.9776
-0.0089
-0.0026
0.0109
With this result, it appears that the condition on the commutator for a common eigenvector basis holds. But I need to examine the mathematics further. If one gets a single column vector, then the nullspace of the commutator is 1-dimensional, as far as Matlab can tell. With that result, one can think about how to verify that this vector is indeed an eigenvector of FISH_sp and FISH_xc, down to a small tolerance.
But I don't know how to introduce this tolerance in a small Matlab script.
All I have done for the moment is:
x=null(FISH_sp*FISH_xc-FISH_xc*FISH_sp)
How can I introduce a tolerance tol into the check that the vector x really is an eigenvector?
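A minimal sketch of such a check, assuming x is the single column returned by null and that a relative tolerance around 1e-8 is acceptable:
% Candidate common eigenvector from the commutator's nullspace
x = null(FISH_sp*FISH_xc - FISH_xc*FISH_sp);
tol = 1e-8;
% Rayleigh quotients give the candidate eigenvalue for each matrix
lam_sp = (x' * FISH_sp * x) / (x' * x);
lam_xc = (x' * FISH_xc * x) / (x' * x);
% Relative residuals: small values mean x is an eigenvector within tol
res_sp = norm(FISH_sp*x - lam_sp*x) / norm(FISH_sp*x);
res_xc = norm(FISH_xc*x - lam_xc*x) / norm(FISH_xc*x);
isCommonEigenvector = (res_sp < tol) && (res_xc < tol)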
And what about the eigenvalues? Normally, they should not be equal to D1 in [V1, D1] = eig(FISH_sp), nor to D2 in [V2, D2] = eig(FISH_xc), since we have to express them in a new and different eigenvector basis: I call these two new diagonal matrices D1_new and D2_new. So I could write:
If I have a change-of-basis matrix P whose columns form the common eigenvector basis, then:
F = P * (D1_new + D2_new) * P^(-1)
I want the endomorphism F in this form (to respect the Maximum Likelihood Estimator, MLE).
The problem for the moment is that I have only one eigenvector x, and not the entire change-of-basis matrix P of new eigenvectors. How can I build this matrix P from only the single common eigenvector x mentioned above?
I'm trying to reconstruct a 3d image from two calibrated cameras. One of the steps involved is to calculate the 3x3 essential matrix E, from two sets of corresponding (homogeneous) points (more than the 8 required) P_a_orig and P_b_orig and the two camera's 3x3 internal calibration matrices K_a and K_b.
We start off by normalizing our points with
P_a = inv(K_a) * P_a_orig
and
P_b = inv(K_b) * P_b_orig
We also know the constraint
P_b' * E * P_a = 0
I'm following it this far, but how do you actually solve that last problem, i.e. find the nine values of the E matrix? I've read several different lecture notes on this subject, but they all leave out that crucial last step. Likely because it is supposedly trivial math, but I can't remember when I last did this, and I haven't been able to find a solution yet.
This equation is actually pretty common in geometry algorithms; essentially, you are trying to calculate the matrix X from the equation AXB = 0. To solve this, you vectorise the equation, which means:
vec(AXB) = (B^T ⊗ A) * vec(X) = 0
vec() means the vectorised form of a matrix, i.e., simply stack the columns of the matrix one over another to produce a single column vector. If you don't know the meaning of the scary-looking symbol ⊗, it's called the Kronecker product; you can read about it here, it's easy, trust me :-)
Now, say I call the matrix obtained by the Kronecker product of B^T and A as C.
Then vec(X) is the null vector of the matrix C, and the way to obtain it is by computing the SVD decomposition of C^T*C (C transpose multiplied by C) and taking the last column of the matrix V. This last column is nothing but your vec(X). Reshape X into a 3 by 3 matrix. This is your Essential matrix.
In case you find this maths too daunting to code, simply use the following code by Y. Ma et al.:
% p are homogeneous coordinates of the first image, of size 3 by n
% q are homogeneous coordinates of the second image, of size 3 by n
function [E] = essentialDiscrete(p,q)
n = size(p);
NPOINTS = n(2);
% set up matrix A such that A*[v1,v2,v3,s1,s2,s3,s4,s5,s6]' = 0
A = zeros(NPOINTS, 9);
if NPOINTS < 9
    error('Too few measurements')
end
for i = 1:NPOINTS
    A(i,:) = kron(p(:,i),q(:,i))';
end
r = rank(A);
if r < 8
    warning('Measurement matrix rank deficient')
    T0 = 0; R = [];
end
[U,S,V] = svd(A);
% pick the right-singular vector corresponding to the smallest singular value
e = V(:,9);
e = (round(1.0e+10*e))*(1.0e-10);
% essential matrix
E = reshape(e, 3, 3);
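A hypothetical usage sketch, plugging in the normalized correspondences from the question (P_a_orig, P_b_orig, K_a and K_b as defined above):
p = inv(K_a) * P_a_orig;     % 3-by-n normalized points from the first image
q = inv(K_b) * P_b_orig;     % 3-by-n normalized points from the second image
E = essentialDiscrete(p, q); % enforces q(:,i)' * E * p(:,i) = 0 for all i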
You can do several things:
The Essential matrix can be estimated using the 8-point algorithm, which you can implement yourself.
You can use the estimateFundamentalMatrix function from the Computer Vision System Toolbox, and then get the Essential matrix from the Fundamental matrix.
Alternatively, you can calibrate your stereo camera system using the estimateCameraParameters function in the Computer Vision System Toolbox, which will compute the Essential matrix for you.
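For the second option, a minimal sketch (pts_a and pts_b are hypothetical N-by-2 pixel coordinates; the last line is the standard relation between the fundamental and essential matrices, up to MATLAB's point-ordering convention):
F = estimateFundamentalMatrix(pts_a, pts_b, 'Method', 'Norm8Point');
E = K_b' * F * K_a;   % essential matrix from F and the two intrinsic matrices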
I'm trying to write a program that gets a matrix A of any size, and SVD decomposes it:
A = U * S * V'
Where A is the matrix the user enters, U is an orthogonal matrix composed of the eigenvectors of A * A', S is a diagonal matrix of the singular values, and V is an orthogonal matrix of the eigenvectors of A' * A.
Problem is: the MATLAB function eig sometimes returns the wrong eigenvectors.
This is my code:
function [U,S,V]=badsvd(A)
W=A*A';
[U,S]=eig(W);
max=0;
for i=1:size(W,1) %%sort
    for j=i:size(W,1)
        if(S(j,j)>max)
            max=S(j,j);
            temp_index=j;
        end
    end
    max=0;
    temp=S(temp_index,temp_index);
    S(temp_index,temp_index)=S(i,i);
    S(i,i)=temp;
    temp=U(:,temp_index);
    U(:,temp_index)=U(:,i);
    U(:,i)=temp;
end
W=A'*A;
[V,s]=eig(W);
max=0;
for i=1:size(W,1) %%sort
    for j=i:size(W,1)
        if(s(j,j)>max)
            max=s(j,j);
            temp_index=j;
        end
    end
    max=0;
    temp=s(temp_index,temp_index);
    s(temp_index,temp_index)=s(i,i);
    s(i,i)=temp;
    temp=V(:,temp_index);
    V(:,temp_index)=V(:,i);
    V(:,i)=temp;
end
s=sqrt(s);
end
My code returns the correct S matrix, and also "nearly" correct U and V matrices. But some of the columns are multiplied by -1. Obviously, if t is an eigenvector then -t is also an eigenvector, but with the signs inverted (for some of the columns, not all) I don't get A = U * S * V'.
Is there any way to fix this?
Example: for the matrix A=[1,2;3,4] my function returns:
U=[0.4046,-0.9145;0.9145,0.4046]
and the built-in MATLAB svd function returns:
u=[-0.4046,-0.9145;-0.9145,0.4046]
Note that eigenvectors are not unique. Multiplying by any nonzero constant, including -1 (which simply changes the sign), gives another valid eigenvector. This is clear from the definition of an eigenvector:
A·v = λ·v
MATLAB chooses to normalize the eigenvectors to have a norm of 1.0; the sign remains arbitrary:
For eig(A), the eigenvectors are scaled so that the norm of each is 1.0.
For eig(A,B), eig(A,'nobalance'), and eig(A,B,flag), the eigenvectors are not normalized
Now as you know, SVD and eigendecomposition are related. Below is some code to test this fact. Note that svd and eig return results in different order (one sorted high to low, the other in reverse):
% some random matrix
A = rand(5);
% singular value decomposition
[U,S,V] = svd(A);
% eigenvectors of A'*A are the same as the right-singular vectors
[V2,D2] = eig(A'*A);
[D2,ord] = sort(diag(D2), 'descend');
S2 = diag(sqrt(D2));
V2 = V2(:,ord);
% eigenvectors of A*A' are the same as the left-singular vectors
[U2,D2] = eig(A*A');
[D2,ord] = sort(diag(D2), 'descend');
S3 = diag(sqrt(D2));
U2 = U2(:,ord);
% check results
A
U*S*V'
U2*S2*V2'
I get very similar results (ignoring minor floating-point errors):
>> norm(A - U*S*V')
ans =
7.5771e-16
>> norm(A - U2*S2*V2')
ans =
3.2841e-14
EDIT:
To get consistent results, one usually adopts the convention of requiring that the first element of each eigenvector have a certain sign. That way, if you get an eigenvector that does not follow this rule, you multiply it by -1 to flip the sign...
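A minimal sketch of that convention (it assumes the first entry of each eigenvector is nonzero; A is just an example matrix):
A = [1, 2; 3, 4];
[V, D] = eig(A'*A);
sgn = sign(V(1, :));         % sign of the first entry of each column
sgn(sgn == 0) = 1;           % leave columns whose first entry is zero unchanged
V = bsxfun(@times, V, sgn);  % every column now has a non-negative first entry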
I have 1000 5x5 matrices (Xm) like this:
Each $(x_{ij})_m$ is a point estimate drawn from a distribution. I'd like to calculate the covariance cov of each $x_{ij}$, where i=1..n and j=1..n, in the direction of the red arrow.
For example, the variance of $X_m$ is var(Xm,0,3), which gives a 5x5 matrix of variances. Can I calculate the covariance in the same way?
Attempt at answer
So far I've done this:
for m=1:1000
Xm_new(m,:)=reshape(Xm(:,:,m)',25,1);
end
cov(Xm_new)
spy(Xm_new) gives me this unusual looking sparse matrix:
If you look at cov (type edit cov in the command window) you might see why it doesn't support multi-dimensional arrays. It performs a transpose and a matrix multiplication of the input matrix: xc' * xc. Neither operation supports multi-dimensional arrays, and I guess whoever wrote the function decided not to do the work to generalize it (it still might be good to contact The MathWorks and make a feature request).
In your case, if we take the basic code from cov and make a few assumptions, we can write a covariance function M-file that supports 3-D arrays:
function x = cov3d(x)
% Based on Matlab's cov, version 5.16.4.10
[m,n,p] = size(x);
if m == 1
    x = zeros(n,n,p,class(x));
else
    x = bsxfun(@minus,x,sum(x,1)/m);
    for i = 1:p
        xi = x(:,:,i);
        x(:,:,i) = xi'*xi;
    end
    x = x/(m-1);
end
Note that this simple code assumes that x is a series of 2-D matrices stacked up along the third dimension, and that the normalization flag is 0 (the default in cov). It could be expanded to multiple dimensions, like var, with a bit of work. In my timings, it's over 10 times faster than a function that calls cov(x(:,:,i)) in a for loop.
Yes, I used a for loop. There may or may not be faster ways to do this, but in this case for loops are going to be faster than most schemes, especially when the size of your array is not known a priori.
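A hypothetical usage example, with observations along the first dimension and pages stacked along the third:
X = randn(100, 5, 10);   % ten stacked 100-by-5 data matrices
C = cov3d(X);            % 5-by-5-by-10: one covariance matrix per page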
The answer below also works for rectangular pages xi = x(:,:,i):
function xy = cov3d(x)
[m,n,p] = size(x);
if m == 1
    xy = zeros(n,n,p,class(x));
else
    xc = bsxfun(@minus,x,sum(x,1)/m);
    for i = 1:p
        xci = xc(:,:,i);
        xy(:,:,i) = xci'*xci;
    end
    xy = xy/(m-1);
end
My answer is very similar to horchler's; however, horchler's code does not work with rectangular matrices xi (whose dimensions differ from those of xi'*xi).
Given an LU decomposition (L and U) and a vector of constants b such that L*U*x = b, is there any built-in function that finds x? I mean something like:
X = functionName(L,U,b)
Note that both L and U are triangular matrices, so the system can be solved directly by forward and backward substitution, without going through the full Gaussian elimination process.
Edit:
Solving this linear system should proceed according to the following steps (a minimal sketch follows the list):
1. define y s.t. Ux = y
2. solve Ly = b by forward substitution
3. solve Ux = y by backward substitution
4. return x
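A minimal sketch of those steps as a hand-rolled function (the name lusolve is hypothetical; in practice MATLAB's backslash does this for you, as the answers below show). It assumes L and U are genuinely triangular with nonzero diagonals:
function x = lusolve(L, U, b)
n = numel(b);
% Step 2: forward substitution, solve L*y = b
y = zeros(n, 1);
for i = 1:n
    y(i) = (b(i) - L(i, 1:i-1) * y(1:i-1)) / L(i, i);
end
% Step 3: backward substitution, solve U*x = y
x = zeros(n, 1);
for i = n:-1:1
    x(i) = (y(i) - U(i, i+1:n) * x(i+1:n)) / U(i, i);
end
end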
Edit 2 :
I found linalg::matlinsolveLU, but I haven't tried it because my version is too old (R2010a). Does it work for anyone?
If you have:
A = rand(3);
b = rand(3,1);
then the solution to the system can be simply computed as:
x = A\b
Or if you already have an LU decomposition of A, then:
[L,U] = lu(A);
xx = U\(L\b)
The mldivide function is smart enough to detect that the matrix is triangular and choose an algorithm accordingly (forward/backward substitution).
I think this is what you're looking for:
A = rand(3,3); % Random 3-by-3 matrix
b = rand(3,1); % Random 3-by-1 vector
[L,U] = lu(A); % LU decomposition
x = U\(L\b) % Solve system of equations via mldivide (same as x = A\b or x = (L*U)\b)
err = L*U*x-b % Numerical error
The system of equations is solved using mldivide. You might also look at qr, which implements QR decomposition instead of LU decomposition; qr can directly solve A*x = b type problems and is more efficient. Also look at linsolve. For symbolic systems, you may still be able to use mldivide, or try linalg::matlinsolveLU in MuPAD.
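For completeness, a minimal sketch using linsolve with structure flags, which skips the analysis that mldivide performs (note the three-output form of lu, so that L really is lower triangular rather than a row-permuted version of one):
A = rand(3);
b = rand(3,1);
[L, U, P] = lu(A);           % P*A = L*U with L truly lower triangular
optsL.LT = true;             % tell linsolve that L is lower triangular
optsU.UT = true;             % tell linsolve that U is upper triangular
y = linsolve(L, P*b, optsL); % forward substitution
x = linsolve(U, y, optsU)    % backward substitution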