MATLAB: Block diagonalize a complex antisymmetric matrix

I am using the following function to block diagonalize antisymmetric matrices.
function [R, RI, S] = Matrix_block(A)
    [U, D] = schur(A);                            % Schur decomposition (real Schur form for real A)
    E = ordeig(double(D));                        % eigenvalues in Schur order
    [R, S] = ordschur(U, D, abs(E) < 1000*eps);   % move near-zero eigenvalues to the leading block
    RI = R';                                      % R is unitary, so R' = inv(R)
end
The code works perfectly fine for real antisymmetric matrices but fails for complex antisymmetric matrices, as follows:
a = rand(6); a = a - a';                      % real antisymmetric: works
b = rand(6) + 1i*rand(6); b = b - conj(b)';   % complex antisymmetric: fails
How can I correct my code so that it also works for complex matrices? I want a block-diagonal matrix of the following form as the output for both real and complex matrices:
  0   e1    0    0    0    0
-e1    0    0    0    0    0
  0    0    0   e2    0    0
  0    0  -e2    0    0    0
  0    0    0    0    0   e3
  0    0    0    0  -e3    0

You need a different algorithm for the complex case. The MATLAB documentation says:
If A is complex, schur returns the complex Schur form in matrix T. The complex Schur form is upper triangular with the eigenvalues of A on the diagonal.
Also, I notice that you cast your matrix D with double(D). This has no real effect, since D is already double. Nevertheless, I have seen ordeig return different eigenvalues depending on whether it is given D or double(D), even in the real case. That is something to dig into more deeply.
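For what it's worth, here is one possible direction, as a sketch under stated assumptions rather than a verified algorithm (Matrix_block_complex and the pairing strategy are my own): the eigenvalues of an even-dimensional complex antisymmetric matrix come in +/-lambda pairs, and each eigen-pair can be rotated into a 2x2 block [0 e; -e 0] with e = -1i*lambda, since that block has eigenvalues +/-lambda.
function [R, RI, S] = Matrix_block_complex(A)
    n = size(A, 1);                          % assumed even
    [P, D] = eig(A);                         % assumes A is diagonalizable
    lam = diag(D);
    % Crude pairing by real part: assumes distinct nonzero real parts.
    % (It fails for purely imaginary eigenvalues, i.e. the real case,
    % where one would pair by imaginary part instead.)
    [~, idx] = sort(real(lam), 'descend');
    lam = lam(idx);
    P = P(:, idx);
    Qh = [1, -1i; 1, 1i] / sqrt(2);          % inverse of the 2x2 rotation
    R = zeros(n);
    S = zeros(n);
    for k = 1:n/2
        vp = P(:, k);                        % eigenvector of +lambda_k
        vm = P(:, n + 1 - k);                % eigenvector of -lambda_k
        cols = 2*k-1 : 2*k;
        R(:, cols) = [vp, vm] * Qh;
        e = -1i * lam(k);
        S(cols, cols) = [0, e; -e, 0];
    end
    RI = inv(R);    % R need not be unitary here, unlike the real case
end
If the pairing succeeds, RI*A*R should reproduce S up to round-off.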

Related

Is there any operation in MATLAB that removes zeros from number
150.0000 150.0000 0.0000
143.0000 148.0000 0.0000
152.0000 152.0000 0.0000
152.0000 144.0000 0.0000
146.0000 150.0000 0.0000
149.0000 145.0000 0.0000
148.0000 152.0000 0.0000
152.0000 157.0000 0.0000
157.0000 164.0000 0.0000
160.0000 155.0000 0.0000
154.0000 154.0000 0.0000
The reason I ask is that I want to use them in a function called accumarray, and I keep getting the error
First input SUBS must contain positive integer subscripts.
Thanks
Update: Eventually I would like to use it as follows:
X = round(x);
Y = round(y);
data = [X, Y, Z];
plotdata = accumarray(data(:,1:2), data(:,3));
surface(plotdata)
colorbar
X = round(x - min(x)) + 1;
Y = round(y - min(y)) + 1;
data = [X, Y, Z];
plotdata = accumarray(data(:,1:2), data(:,3));
surface(unique(x), unique(y), plotdata)
colorbar
Subtracting min() shifts your data to the upper-left corner of the matrix, effectively removing leading zero rows and columns. The +1 ensures no subscript ends up at zero, which is what triggers the error in accumarray.
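For instance, taking the first column of the data above as x (a quick illustration):
x = [150 143 152 152 146 149 148 152 157 160 154].';
X = round(x - min(x)) + 1;   % the smallest value maps to subscript 1
% X now ranges from 1 to 18, which accumarray accepts as subscripts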

Extracting Elements From a Matrix in Matlab

I'm trying to extract elements from the first column of the following matrix using a for loop
[1.0000 1.0000;
0.4401 4.0000;
0.0000 2.0000;
0.0000 3.0000]
But I'm getting inaccurate values for the zeros (for example, 6.2421e-010 instead of zero).
How to fix this?
Code:
for h = 1:K
    summation = 0;
    for i = 1:F
        x(i,1)                            % display the current element
        summation = summation + x(i,1);
    end
end
You don't need a for loop to extract the first column. You could do this:
a = [1.0000 1.0000; 0.4401 4.0000; 0.0000 2.0000; 0.0000 3.0000];
b = a(:,1);
Giving:
b =
1.0000
0.4401
0
0
From what I have read so far:
A = [1.0000 1.0000; 0.4401 4.0000; 0.0000 2.0000; 0.0000 3.0000];
B = A(:,1);
No for loops needed.
if vpa(x(i,1)) < 0.0000003
    summation = summation + x(i,1);
else
    summation = summation + vpa(x(i,1));
end
If your matrix entries are integers, you don't need to write all the zeros after the decimal point.
To extract a column of a matrix use M(:,j) where M is the matrix and j the column you want to extract.
If you want to sum the elements of that column, just do sum(M(:,j))
M= [1 1; 0.4401 4; 0 2; 0 3]
c1=M(:,1)
summation = sum(c1)
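If the stray values like 6.2421e-010 are just floating-point round-off, one option (a sketch with an assumed tolerance) is to zero out the tiny entries before summing:
tol = 1e-6;                 % assumed tolerance for treating values as zero
col = x(:,1);               % first column, as in the question
col(abs(col) < tol) = 0;    % snap near-zero round-off values to exact zero
summation = sum(col)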

Finding generalized eigenvectors numerically in Matlab

I have a matrix such as this example (my actual matrices can be much larger)
A = [-1 -2 -0.5;
0 0.5 0;
0 0 -1];
that has only two linearly independent eigenvectors (the eigenvalue -1 is repeated). I would like to obtain a complete basis with generalized eigenvectors. One way I know how to do this is with Matlab's jordan function in the Symbolic Math Toolbox, but I'd prefer something designed for numeric inputs (indeed, with two outputs, jordan fails for large matrices: "Error in MuPAD command: Similarity matrix too large."). I don't need the Jordan canonical form, which is notoriously unstable in numeric contexts, just a matrix of generalized eigenvectors. Is there a function or combination of functions that automates this in a numerically stable way, or must one use the generic manual method (and how stable is such a procedure)?
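A quick numerical check (just a sketch) shows the defect:
A = [-1 -2 -0.5;
      0 0.5  0;
      0  0  -1];
[V, D] = eig(A);   % the eigenvalue -1 appears twice on the diagonal of D
rank(V)            % returns 2: only two independent eigenvector directions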
NOTE: By "generalized eigenvector," I mean a non-zero vector that can be used to augment the incomplete basis of a so-called defective matrix. I do not mean the eigenvectors that correspond to the eigenvalues obtained from solving the generalized eigenvalue problem using eig or qz (though this latter usage is quite common, I'd say that it's best avoided). Unless someone can correct me, I don't believe the two are the same.
UPDATE 1 – Five months later:
See my answer here for how to obtain generalized eigenvectors symbolically for matrices larger than 82-by-82 (the limit for my test matrix in this question).
I'm still interested in numeric schemes (or in how such schemes might be unstable if they're all related to calculating the Jordan form). I don't wish to blindly implement the linear algebra 101 method that has been marked as a duplicate of this question, as it's not a numerical algorithm but rather a pencil-and-paper method used to evaluate students (I suppose that it could be implemented symbolically, however). If anyone can point me to either an implementation of that scheme or a numerical analysis of it, I'd be interested in that.
UPDATE 2 – Feb. 2015: All of the above is still true as tested in R2014b.
As mentioned in my comments, if your matrix is defective, but you know which eigenvector/eigenvalue pairs you want to consider as identical given your tolerance, you can proceed as in this example below:
% example matrix A:
A = [1 0 0 0 0;
3 1 0 0 0;
6 3 2 0 0;
10 6 3 2 0;
15 10 6 3 2]
% Produce eigenvalues and eigenvectors (not generalized ones)
[vecs,vals] = eig(A)
This should output:
vecs =
0 0 0 0 0.0000
0 0 0 0.2236 -0.2236
0 0 0.0000 -0.6708 0.6708
0 0.0000 -0.0000 0.6708 -0.6708
1.0000 -1.0000 1.0000 -0.2236 0.2236
vals =
2 0 0 0 0
0 2 0 0 0
0 0 2 0 0
0 0 0 1 0
0 0 0 0 1
Here we see that the first three eigenvectors are almost identical to working precision, as are the last two. You must know the structure of your problem to identify which eigenvectors of identical eigenvalues to treat as the same. In this example the eigenvalues are exactly identical, so we know which ones to consider, and we will assume that vectors 1-2-3 correspond to the same eigendirection, as do vectors 4-5. (In practice you would likely check the norm of the differences between eigenvectors and compare it to your tolerance, as sketched below.)
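Such a check might look like this (a sketch with an assumed tolerance; note that eig can flip the sign of an eigenvector, as it does for columns 1 and 2 above):
tol = 1e-10;   % assumed tolerance
% compare up to sign, since eig may return either orientation
if min(norm(vecs(:,1) - vecs(:,2)), norm(vecs(:,1) + vecs(:,2))) < tol
    % treat columns 1 and 2 as the same eigendirection
end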
Now we proceed to compute the generalized eigenvectors. This is ill-conditioned to solve simply with MATLAB's \, because (A - lambda*I) is by construction not full rank, so we use pseudoinverses:
genvec21 = pinv(A - vals(1,1)*eye(size(A)))*vecs(:,1);
genvec22 = pinv(A - vals(1,1)*eye(size(A)))*genvec21;
genvec1 = pinv(A - vals(4,4)*eye(size(A)))*vecs(:,4);
Which should give:
genvec21 =
-0.0000
0.0000
-0.0000
0.3333
0
genvec22 =
0.0000
-0.0000
0.1111
-0.2222
0
genvec1 =
0.0745
-0.8832
1.5317
0.6298
-3.5889
Which are our other generalized eigenvectors. If we now use these to check the Jordan normal form like this:
jordanJ = [vecs(:,1) genvec21 genvec22 vecs(:,4) genvec1];
jordanJ^-1*A*jordanJ
We obtain:
ans =
2.0000 1.0000 0.0000 -0.0000 -0.0000
0 2.0000 1.0000 -0.0000 -0.0000
0 0.0000 2.0000 0.0000 -0.0000
0 0.0000 0.0000 1.0000 1.0000
0 0.0000 0.0000 -0.0000 1.0000
Which is our Jordan normal form, up to working-precision errors.
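As a sanity check (a sketch), each generalized eigenvector should map back to the previous vector in its Jordan chain under (A - lambda*I):
norm((A - vals(1,1)*eye(size(A)))*genvec21 - vecs(:,1))   % ~ 0
norm((A - vals(1,1)*eye(size(A)))*genvec22 - genvec21)    % ~ 0
norm((A - vals(4,4)*eye(size(A)))*genvec1  - vecs(:,4))   % ~ 0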

What type of sigma does MATLAB's corrcov function return?

I have a 3x3 matrix
S_2 =
0.0001 -0.0004 0.0001
-0.0004 0.0029 -0.0002
0.0001 -0.0002 0.0003
When I apply [R,s] = corrcov(S_2), it returns a vector of standard deviations sigma in s:
R =
1.0000 -0.7834 0.3187
-0.7834 1.0000 -0.2631
0.3187 -0.2631 1.0000
s =
0.0099
0.0538
0.0163
What calculation does corrcov perform to get the sigma s?
Inspect the source code of corrcov to find out: edit corrcov.m
It turns out it's sigma = sqrt(diag(C)), where C is the input covariance matrix (the diagonal elements are the variances, whose square roots are the standard deviations).
Remember that the correlation matrix is the covariance matrix normalized by the standard deviations: http://en.wikipedia.org/wiki/Covariance_and_correlation
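So both outputs can be reproduced by hand (the results will differ slightly from the question's, because the S_2 shown above is rounded to four decimals):
C = [ 0.0001 -0.0004  0.0001;
     -0.0004  0.0029 -0.0002;
      0.0001 -0.0002  0.0003];
s = sqrt(diag(C))       % standard deviations: square roots of the variances
R = C ./ (s * s.')      % correlation: covariance normalized by the sigmas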

spectral clustering

First off, I must say that I'm new to MATLAB (and to this site...), so please excuse my ignorance.
I'm trying to write a function in matlab that will use Spectral Clustering to split a set of points into two clusters.
my code is as follows
function Groups = TrySpectralClustering(data)
    dist_mat = squareform(pdist(data));    % pairwise distance matrix
    W = zeros(length(data), length(data));
    for i = 1:length(data)
        for j = (i+1):length(data)
            W(i,j) = 10^(-dist_mat(i,j));  % affinity decays with distance
            W(j,i) = W(i,j);
        end
    end
    D = zeros(length(data), length(data));
    for i = 1:length(W)
        D(i,i) = sum(W(i,:));              % degree matrix
    end
    L = D - W;
    L = D^(-0.5) * L * D^(-0.5);           % symmetric normalized Laplacian
    [V, E] = eig(L);
    disp('V:');
    disp(V);
end
If I understand correctly, then by using the second-smallest eigenvector I should be able to partition the data into two clusters: if the ith entry of the 2nd eigenvector is positive, the ith data point belongs to one cluster; otherwise it belongs to the other.
However, when I try the following
f=[1,1;0,0;1,0;0,1;100,100;100,101;101,101;101,100]
TrySpectralClustering(f)
I would expect that the first four points would form one cluster, and the last four would form another.
However, I receive
V:
-0.0000 -0.5000 0.0000 -0.5777 0.0000 0.4078 -0.0000 0.5000
-0.0000 -0.5000 0.0000 0.5777 0.0000 -0.4078 -0.0000 0.5000
-0.0000 -0.5000 0.0000 0.4078 0.0000 0.5777 -0.0000 -0.5000
-0.0000 -0.5000 0.0000 -0.4078 0.0000 -0.5777 -0.0000 -0.5000
-0.5000 -0.0000 -0.0000 -0.0000 -0.7071 -0.0000 0.5000 -0.0000
-0.5000 -0.0000 0.7071 0.0000 -0.0000 -0.0000 -0.5000 -0.0000
-0.5000 0.0000 -0.0000 0.0000 0.7071 0.0000 0.5000 0.0000
-0.5000 0 -0.7071 0 0 0 -0.5000 0
Taking the 2nd eigenvector
-0.0000 -0.5000 0.0000 0.5777 0.0000 -0.4078 -0.0000 0.5000
I find that one cluster includes the points (1,0), (0,1), (100,100), (101,100),
and the other cluster is made up of the points (1,1), (0,0), (100,101), (101,101).
I wonder what am I doing wrong.
Note: I am working on the above as a part of a homework project.
Thanks in advance!
What you are getting is correct. Let U be the matrix containing the eigenvectors as shown above, arranged such that the 1st column corresponds to the smallest eigenvalue and subsequent columns to ascending eigenvalues. Take a subset of the columns of U, retaining the eigenvectors corresponding to the smaller eigenvalues. Now read these columns row-wise into a new set of vectors; call it Y. Cluster Y to get the spectral clusters; a sketch follows below. So, suppose our subset is only the first column: we clearly see that if you were to cluster the first column, you would get the first 4 points into one cluster and the next 4 into another, which is what you want.
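In code, that procedure might look like this (a sketch; kmeans requires the Statistics Toolbox, and V, E are the outputs of eig(L) in the question):
[~, order] = sort(diag(E));   % sort eigenvalues in ascending order
k = 2;                        % number of clusters
Y = V(:, order(1:k));         % each row of Y embeds one original point
idx = kmeans(Y, k);           % cluster the embedded points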
Take a look at the implementation on Prof. J. Shi's webpage. Pay close attention to the discretisation.m function.
Moreover, your code is very inefficient. You should take more advantage of MATLAB's vectorization:
n = size(dist_mat, 1);            % number of points
W = 10.^(-dist_mat);              % one-liner replacing the nested loop for W
W(1:n+1:end) = 0;                 % zero the diagonal, matching the loop version
% computing the symmetric normalized Laplacian
d = sum(W, 2);                    % sum of each row
d(d == 0) = 1;                    % avoid division by zero
d_half = 1 ./ sqrt(d);
L = eye(n) - bsxfun(@times, bsxfun(@times, W, d_half'), d_half);
Two observations:
L=D-W; L=D^(-0.5)*L*D^(-0.5);
Why make MATLAB compute the identity matrix the long way? D^(-0.5) * D * D^(-0.5) is just the identity, so use eye(n) directly and subtract D^(-0.5) * W * D^(-0.5) from it to calculate the Laplacian L.
eig returns the eigenvectors as columns, so why do you take a row? Also, did you check the values of the corresponding eigenvalues in E, so you can be sure you are looking at the eigenvector corresponding to the 2nd-smallest eigenvalue?
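For example, a sketch of selecting and thresholding that eigenvector explicitly:
[V, E] = eig(L);
[~, order] = sort(diag(E));   % eig does not guarantee sorted eigenvalues
fiedler = V(:, order(2));     % eigenvector of the 2nd-smallest eigenvalue
Groups = (fiedler > 0) + 1;   % split into clusters 1 and 2 by sign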