Eigen vector in SVD - matlab

I'm going to compute the eigenvalues and eigenvectors from my data matrix for classification.
The rows represent the different classes and the columns represent the features.
So, for example if I have
X=
[2 3 4]
[3 2 4]
[4 5 6]
[8 9 0]
I have to use SVD instead of PCA because the matrix is not square.
What I have done is:
Compute the mean for each row. So I have
Mean=
M1
M2
M3
M4
Subtract the Mean from my matrix X
Subtract =
[2-M1 3-M1 4-M1]
[3-M2 2-M2 4-M2]
[4-M3 5-M3 6-M3]
[8-M4 9-M4 0-M4]
Covariance Matrix = (Subtract*Subtract')/(4-1)
[U,S,V] = svd(X)
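For reference, the steps above translate into MATLAB roughly as follows (just a sketch; the variable names are made up, and whether centring by row is appropriate is part of what I am asking):
X = [2 3 4; 3 2 4; 4 5 6; 8 9 0];
rowMean   = mean(X, 2);                      % M1..M4, one mean per row
Xcentered = bsxfun(@minus, X, rowMean);      % subtract each row's mean
C         = (Xcentered*Xcentered')/(4-1);    % the covariance matrix as written above
[U, S, V] = svd(X);                          % SVD of the original matrix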
Are all my steps right? Is it correct to compute the mean for each row (i.e. per class)?
If I want to project my data into the eigenspace (for dimensionality reduction), which one holds the eigenvectors (U or V)?

You can do PCA whether your matrix is square or not. In fact, your matrix is rarely square, because it has the form n*p, where n is the number of observations and p is the number of features. Thus you can use MATLAB's princomp function
[W, pc] = princomp(data);
where W is the weight (coefficient) matrix and pc holds the principal component scores, with one row per observation and one column per component. You can see your data projected into the principal component space by
plot(pc(:,1), pc(:,2), '.');
which shows your data along the first and second principal component directions.
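If you do want the SVD route explicitly, here is a minimal sketch (assuming your data matrix X has observations in rows and features in columns). With that layout, the columns of V are the principal directions you project onto, so V plays the role of the eigenvector matrix:
Xc = bsxfun(@minus, X, mean(X, 1));   % centre each feature (column)
[U, S, V] = svd(Xc, 'econ');          % economy-size SVD of the centred data
scores = Xc * V;                      % data projected onto the principal directions
% scores is identical to U*S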

Related

Matrix Multiplication Issue - Matlab

In an attempt to create my own covariance function in MATLAB, I need to perform matrix multiplication on a row to create a matrix.
Given a matrix D where
D = [-2.2769 0.8746
0.6690 -0.4720
-1.0030 -0.9188
2.6111 0.5162]
Now, for each row I need to manufacture a matrix. For example, for the first row R = [-2.2769, 0.8746] I would want the matrix M = [5.1847, -1.9915; -1.9915, 0.7649] to be returned.
Below is what I have written so far. I am asking for advice on how to use matrix multiplication on rows to produce matrices.
% Find matrices using matrix multiplication
for i=1:size(D, 1)
    P1 = (D(i,:))
    P2 = transpose(P1)
    M = P1*P2
end
You are trying to compute the outer product of each row with itself stored as individual slices in a 3D matrix.
Your code almost works. What you're doing instead is computing the inner product (dot product) of each row with itself, so it gives you a single number instead of a matrix. You need to apply the transpose to P1 rather than P2, and P2 then simply becomes P1. Also, you are overwriting the matrix M at each iteration.
I'm assuming you'd like to store these as individual slices in a 3D matrix. To do this, allocate a 3D matrix where each 2D slice has as many rows and columns as D has columns, and where the total number of slices equals the number of rows in D. Then just index into each slice and place the result accordingly:
M = zeros(size(D,2), size(D,2), size(D,1));
% Find matrices using matrix multiplication
for ii=1:size(D, 1)
    P = D(ii,:);
    M(:,:,ii) = P.'*P;
end
We get:
>> M
M(:,:,1) =
5.18427361 -1.99137674
-1.99137674 0.76492516
M(:,:,2) =
0.447561 -0.315768
-0.315768 0.222784
M(:,:,3) =
1.006009 0.9215564
0.9215564 0.84419344
M(:,:,4) =
6.81784321 1.34784982
1.34784982 0.26646244
Depending on your taste, I would recommend using bsxfun to help you perform the same operation but perhaps doing it faster:
M = bsxfun(@times, permute(D, [2 3 1]), permute(D, [3 2 1]));
In fact, this solution is related to a similar question I asked in the past: Efficiently compute a 3D matrix of outer products - MATLAB. The only difference is that the question wanted to find the outer product of columns instead of the rows.
The way the code works is that we shift the dimensions of D with permute so that we get two arrays of sizes 2 x 1 x 4 and 1 x 2 x 4. Calling bsxfun with the times function then efficiently computes the matrix of outer products for every slice simultaneously.
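If you want to convince yourself that the bsxfun version matches the loop, a quick check (a sketch, using the same D as above) would be:
M_loop = zeros(size(D,2), size(D,2), size(D,1));
for ii = 1:size(D,1)
    M_loop(:,:,ii) = D(ii,:).' * D(ii,:);          % outer product, one slice per row
end
M_bsx = bsxfun(@times, permute(D, [2 3 1]), permute(D, [3 2 1]));
max(abs(M_loop(:) - M_bsx(:)))                     % returns 0: both give the same slices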

Reducing dimensionality of features with PCA in MATLAB

I'm totally confused regarding PCA. I have a 4D image of size 90x60x12x350. That means that each voxel is a vector of size 350 (time series).
Now I divide the 3D image (90x60x12) into cubes. So let's say a cube contains n voxels; I then have n vectors of size 350. I want to reduce these n vectors to only one vector and then calculate the correlations between all vectors of all cubes.
So for a cube I can construct the matrix M where I just put the voxels one after another, i.e. M = [v1 v2 v3 ... vn], and each v is of size 350.
Now I can apply PCA in Matlab by using [coeff, score, latent, ~, explained] = pca(M); and taking the first component. And now my confusion begins.
Should I transpose the matrix M, i.e. PCA(M')?
Should I take the first column of coeff or of score?
This third question is now a bit unrelated. Let's assume we have a matrix A = rand(30,100) where the rows are the datapoints and the columns are the features. Now I want to reduce the dimensionality of the feature vectors while keeping all data points.
How can I do this with PCA?
When I do [coeff, score, latent, ~, explained] = pca(A); then coeff is of dimension 100x29 and score is of size 30x29. I'm totally confused.
Yes, according to the pca help, "Rows of X correspond to observations and columns to variables."
score just tells you the representation of M in the principal component space. You want the first column of coeff.
numberOfDimensions = 5;
coeff = pca(A);
reducedDimension = coeff(:,1:numberOfDimensions);
reducedData = A * reducedDimension;
I disagree with the answer above.
[coeff,score]=pca(A)
where A has rows as observations and columns as features.
If A has 3 features and more than 3 observations (let's say 100), and you want a reduced "feature" matrix B of 2 dimensions (so B is of size 100x2), what you should do is:
B = score(:,1:2);
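For what it's worth, the two answers fit together once you remember that pca centres the data internally: the score matrix is just the centred data multiplied by coeff. A quick sketch (the sizes here are made up) to see the relationship:
A = rand(100, 3);                               % 100 observations, 3 features
[coeff, score] = pca(A);
Ac = bsxfun(@minus, A, mean(A, 1));             % pca centres each column before projecting
max(abs(score(:) - reshape(Ac*coeff, [], 1)))   % ~0 up to floating-point error
B = score(:, 1:2);                              % the reduced 100x2 representation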

Eigen Values from Matlab

I'm trying to figure out eigenvalues/eigenvectors for large datasets in order to compute the PCA. I can calculate the eigenvalues and eigenvectors for 2x2, 3x3, etc.
The problem is, I have a dataset of size 451x128, and I compute the covariance matrix, which gives me 128x128 values. This therefore looks like the following:
A = [ 1  2  3  ...  (128 values per row)
      5  4  1  ...
      2  1  2  ...
      ...           (128 rows in total) ]
Computing the eigenvalues and eigenvectors for a 128x128 matrix seems really difficult and would take a lot of computing power. However, if I treat each of the blocks in A as a 2-dimensional (3xN) matrix, I can then compute a covariance matrix, which will give me a 3x3 matrix.
My question is this: would this be a good or reasonable assumption for solving the eigenvalues and eigenvectors? Something like this:
A is a 2-dimensional array of size 128x451;
for each of the blocks, compute the eigenvalues and eigenvectors of its covariance matrix,
like so:
Eig1 = eig(cov(A[0]))
Eig2 = eig(cov(A[1]))
This would then give me 128 sets of eigenvalues (one for each of the blocks inside the 128x128 matrix).
If this is not correct, how does MATLAB handle such large dimensional data?
Have you tried svd()?
Do the singular value decomposition
[U,S,V] = svd(X)
U and V are orthogonal matrices and S contains the singular values (whose squares are the eigenvalues of X*X'). Sort the columns of U and V in descending order based on S.
As kkuilla mentions, you can use the SVD of the original matrix, as the SVD of a matrix is related to the Eigenvalues and Eigenvectors of the covariance matrix as I demonstrate in the following example:
A = [1 2 3; 6 5 4]; % A rectangular matrix
X = A*A'; % The scatter matrix of A (the covariance matrix up to centring and scaling)
[V, D] = eig(X); % Get the eigenvectors and eigenvalues of the covariance matrix
[U,S,W] = svd(A); % Get the singular values of the original matrix
V is a matrix containing the eigenvectors, and D contains the eigenvalues. Now, the relationship:
S*S' ~ D
U ~ V
(up to the ordering of the eigenvalues and the signs of the columns).
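A quick numerical check of those relationships (again, up to ordering and sign) could look like this:
A = [1 2 3; 6 5 4];
X = A*A';
[V, D] = eig(X);
[U, S, ~] = svd(A);
diag(S*S')                  % squared singular values, largest first
sort(diag(D), 'descend')    % eigenvalues of A*A', the same values
% the columns of U match the columns of V up to ordering and sign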
As to your own assumption, I may be misreading it, but I think it is false. I can't see why the Eigenvalues of the blocks would relate to the Eigenvalues of the matrix as a whole; they wouldn't correspond to the same Eigenvectors, as the dimensionality of the Eigenvectors wouldn't match. I think your covariances would be different too, but then I'm not completely clear on how you are creating these blocks.
As to how Matlab does it, it does use some tricks. Perhaps the link below might be informative (though it might be a little old). I believe they use (or used) LAPACK and a QZ factorisation to obtain intermediate values.
https://au.mathworks.com/company/newsletters/articles/matlab-incorporates-lapack.html
Use the built-in eig function:
[Eigenvectors, Eigenvalues] = eig(Matrix)
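For completeness, on data of the size you describe this is not a large problem at all; a minimal sketch (the data here is a random stand-in for your 451x128 set):
data = rand(451, 128);                           % stand-in for the 451x128 dataset
C = cov(data);                                   % 128x128 covariance matrix
[Eigenvectors, Eigenvalues] = eig(C);
[~, idx] = sort(diag(Eigenvalues), 'descend');   % order components by eigenvalue
Eigenvectors = Eigenvectors(:, idx);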

dimensionality reduction for non square matrix?

I'm going to do dimensionality reduction using PCA/SVD on my extracted features.
Suppose if I want to do classification using SIFT as the features and SVM as the classifier.
I have 3 images for training and I arrange each of them in a different row.
So the 1st row is for the 1st image, the 2nd row for the 2nd image and the 3rd row for the 3rd image,
and the columns represent the features:
A=[ 1 2 3 4 5
4 5 6 7 8
0 0 1 9 0]
To do dimensionality reduction (on my 3x5, non-square matrix), we have to compute A*EigenVector.
Now I have to extract the eigenvectors from my A matrix. From what I understand, SVD works for non-square matrices, so to perform PCA (the eigs function) I need to make the matrix square by multiplying it by its transpose.
Here is what I get if I do SVD directly:
[u1,s1,v1] = svd(A);
I got
u1 =
-0.4369 0.1426 0.8882
-0.8159 0.3530 -0.4580
-0.3788 -0.9247 -0.0379
v1 =
-0.2229 0.2206 -0.7088 -0.6070 -0.1754
-0.2984 0.2910 -0.3857 0.4705 0.6754
-0.3966 0.2301 -0.0910 0.5382 -0.7012
-0.6547 -0.7495 0.0045 -0.0598 0.0779
-0.5248 0.5020 0.5836 -0.3419 0.1233
and when I use PCA (the eigs function), since I arrange the features of the images in different rows I need to do A*A', and I get
c=A*A'
[e1 e2]=eigs(c);
e1 =
0.4369 0.1426 0.8882
0.8159 0.3530 -0.4580
0.3788 -0.9247 -0.0379
My questions are:
Is it right that doing SVD, or PCA (by converting the matrix into A*A'), will give me the same eigenvectors (e1 and u1)?
As I arrange my images in different rows and the features of each image in different columns, and PCA/SVD is used to extract the eigenvectors that represent the relation between the variables, are the variables in this case the rows (images) or the columns (features)?
Do I have to convert my matrix into a covariance matrix using the cov function if I want to use the eigs function, or is that handled by eigs automatically?
I really appreciate any answer.
Suppose you have n-dimensional samples and you want to reduce them to d-dimensional data by PCA.
Suppose your data are stored in a matrix A of size n x N (N is the number of samples, here images);
here n = 3 and N = 5.
We define a correlation matrix R = A*A' (n x n). You can use the covariance matrix instead.
Calculate the eigenvectors of R and the corresponding eigenvalues:
R = A*A';
[eigVec, eigVal] = eig(R)
eigVec =
0.8882 0.1426 0.4369
-0.4580 0.3530 0.8159
-0.0379 -0.9247 0.3788
eigVal =
1.7728 0 0
0 49.6457 0
0 0 275.5815
Note that the columns of eigVec are the eigenvectors of R.
Some of the eigenvalues will be zero (or, if not, you can take a threshold), so you can eliminate the corresponding eigenvectors:
T = eigVec(:, 2:3)
T =
0.1426 0.4369
0.3530 0.8159
-0.9247 0.3788
Now T is an n x d matrix.
For any row vector X of size 1 x n, the product X*T gives a 1 x d output Y.
The final answer:
B = A'*T;
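As a quick sanity check on the dimensions (a sketch using the A from the question):
A = [1 2 3 4 5; 4 5 6 7 8; 0 0 1 9 0];    % n = 3, N = 5
R = A*A';
[eigVec, eigVal] = eig(R);
T = eigVec(:, 2:3);                        % keep the d = 2 dominant eigenvectors
B = A'*T;                                  % N x d, i.e. 5x2: each sample reduced to 2 dimensions
size(B)                                    % returns [5 2]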

singular value decomposition and low rank tensorial approximation

According to this article
http://www.wseas.us/e-library/conferences/2012/Vouliagmeni/MMAS/MMAS-07.pdf
a matrix can be approximated by rank-one matrices using a tensorial approximation. I know that in MATLAB the Kronecker product plays the same role as the tensorial product; the function is kron. Now let us suppose that we have the following matrix
a=[2 1 3;4 3 5]
a =
2 1 3
4 3 5
SVD of this matrix is
[U E V]=svd(a)
U =
-0.4641 -0.8858
-0.8858 0.4641
E =
7.9764 0 0
0 0.6142 0
V =
-0.5606 0.1382 -0.8165
-0.3913 0.8247 0.4082
-0.7298 -0.5484 0.4082
Please help me to implement the algorithm that reconstructs the original matrix from this tensorial approximation in MATLAB. How can I apply the tensorial product? Like this?
X=kron(U(:,1),V(:,1));
Or otherwise? Thanks in advance.
I'm not quite sure about the Tensorial interpretation but the closest rank-1 approximation to the matrix is essentially the outer-product of the two dominant singular vectors amplified by the singular value.
In simple words, if [U E V] = svd(X), then the closest rank-1 approximation to X is the outer-product of the first singular vectors multiplied by the first singular value.
In MATLAB, you could do this as:
U(:,1)*E(1,1)*V(:,1)'
Which yields:
ans =
2.0752 1.4487 2.7017
3.9606 2.7649 5.1563
Also, mathematically speaking, the kronecker product of a row vector and a column vector is essentially their outer product. So, you could do the same thing using Kronecker products as:
(kron(U(:,1)',V(:,1))*E(1,1))'
Which yields the same answer.
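If you want to go beyond the rank-1 term, the same idea reconstructs the full matrix by summing one such outer product per singular value; a short sketch:
a = [2 1 3; 4 3 5];
[U, E, V] = svd(a);
approx = zeros(size(a));
for k = 1:rank(a)                                  % rank(a) = 2 here
    approx = approx + E(k,k) * U(:,k) * V(:,k)';   % add the k-th rank-1 term
end
max(abs(approx(:) - a(:)))                         % ~0: the full sum recovers a exactly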