Dimensionality reduction for a non-square matrix? - MATLAB

I'm going to do dimensionality reduction using PCA/SVD on my extracted features.
Suppose I want to do classification using SIFT as the features and an SVM as the classifier.
I have 3 training images and I arrange each of them in a different row:
the 1st row holds the 1st image, the 2nd row the 2nd image, and the 3rd row the 3rd image.
The columns represent the features:
A=[ 1 2 3 4 5
4 5 6 7 8
0 0 1 9 0]
To do dimensionality reduction on my 3x5 (non-square) matrix, I have to compute A*EigenVector.
So now I have to extract the eigenvectors from my A matrix. From what I understand, SVD works on a non-square matrix, while to perform PCA (the eigs function) I need to make the matrix square by multiplying it with its transpose.
Here is what I get if I do SVD directly:
[u1,s1,v1] = svd(A);
I got
u1 =
-0.4369 0.1426 0.8882
-0.8159 0.3530 -0.4580
-0.3788 -0.9247 -0.0379
v1 =
-0.2229 0.2206 -0.7088 -0.6070 -0.1754
-0.2984 0.2910 -0.3857 0.4705 0.6754
-0.3966 0.2301 -0.0910 0.5382 -0.7012
-0.6547 -0.7495 0.0045 -0.0598 0.0779
-0.5248 0.5020 0.5836 -0.3419 0.1233
and when I use PCA (the eigs function; since I arrange each image's features in a different row, I need to compute A*A'), I got
c=A*A'
[e1 e2]=eigs(c);
e1 =
0.4369 0.1426 0.8882
0.8159 0.3530 -0.4580
0.3788 -0.9247 -0.0379
My questions are:
Is it right that doing SVD directly, or doing PCA (by converting A into the A*A' matrix), will give me the same eigenvectors (e1 and u1)?
I arrange my images in different rows and the features for each image in different columns, and PCA/SVD is used to extract the eigenvectors that represent the relations between the variables. So in this case, are the variables the rows (images) or the columns (features)?
Do I have to convert my matrix into a covariance matrix using the cov function if I want to use the eigs function, or is that done internally by eigs?
I really appreciate any answer.

Suppose you have n-dimensional samples and you want to reduce them to d-dimensional data by PCA.
Suppose your data are stored in an n-by-N matrix A, where N is the number of samples; here n = 3 and N = 5.
We define a correlation matrix R = A*A' (n-by-n). You can use the covariance matrix instead.
Calculate the eigenvectors of R and the corresponding eigenvalues:
R = A*A';
[eigVec, eigVal] = eig(R)
eigVec =
0.8882 0.1426 0.4369
-0.4580 0.3530 0.8159
-0.0379 -0.9247 0.3788
eigVal =
1.7728 0 0
0 49.6457 0
0 0 275.5815
Note that the columns of eigVec are the eigenvectors of R.
Some of the eigenvalues will be zero (or, if not, you can apply a threshold), so you can eliminate the corresponding eigenvectors:
T = eigVec(:, 2:3)
T =
0.1426 0.4369
0.3530 0.8159
-0.9247 0.3788
Now T is an n-by-d matrix.
For any 1-by-n row vector X, the product X*T yields a 1-by-d output Y.
The final answer:
B = A'*T;
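Putting the answer's steps together, here is a minimal end-to-end sketch (the descending sort and the choice d = 2 are my own additions for illustration):
A = [1 2 3 4 5; 4 5 6 7 8; 0 0 1 9 0];   % 3x5: n = 3 dimensions, N = 5 samples
R = A*A';                                 % n-by-n correlation matrix
[eigVec, eigVal] = eig(R);                % eigenvectors in columns, eigenvalues on the diagonal
d = 2;                                    % target dimensionality (illustrative choice)
[~, order] = sort(diag(eigVal), 'descend');
T = eigVec(:, order(1:d));                % n-by-d projection matrix (largest eigenvalues first)
B = A'*T;                                 % N-by-d reduced representation of the samples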

Related

Compare three big matrices - Best way to get a meaningful and easy-to-understand indicator of the relation between the matrices?

I have 3 matrices (55000x3 double) and want to compare them.
I'm taking the arithmetic mean of the values at each position, and in addition I want to provide an indicator of how the three matrices correlate.
The values in one position of the matrices are for example:
Matrix1 pos(1:1): 3.679
Matrix2 pos(1:1): 3.721
Matrix3 pos(1:1): 3.554
Since I cannot just give the standard deviation for each value (that would be too much information), I'm looking for a way to make a meaningful statement about the correlation without providing too much detail.
What's the best way to do this?
I think you want the correlation coefficient. You can reshape each of your matrices into a vector (using (:)), and then compute the correlation coefficient for each pair of vectors (originally matrices) using corrcoef.
For example, let:
Matrix1 = [ 1 2; 3 4; 5 6 ];
Matrix2 = -2*[ 1 2; 3 4; 5 6 ];
Matrix3 = [ 1.1 2.3; 3.4 4.1; 4.9 6.3 ];
Then
C = corrcoef([Matrix1(:) Matrix2(:) Matrix3(:)]);
gives
C =
1.0000 -1.0000 0.9952
-1.0000 1.0000 -0.9952
0.9952 -0.9952 1.0000
This tells you that, in this case,
Each of the three matrices is totally correlated with itself (C(1,1), C(2,2) and C(3,3) equal 1). This is obvious.
Matrices 1 and 2 have correlation coefficient C(1,2) equal to -1. This was expected, because matrix 2 is a negative multiple of matrix 1.
Matrices 1 and 3 are highly correlated (C(1,3) is 0.9952). This is because matrix 3 was defined as matrix 1 with some random "noise".
Matrices 2 and 3 are also highly correlated but with negative sign (C(2,3) is -0.9952), as should be clear from the above.
Have you tried representing your data using a boxplot?
boxplot([Matrix1(:), Matrix2(:), Matrix3(:)]);   % one box per matrix

Reducing dimensionality of features with PCA in MATLAB

I'm totally confused regarding PCA. I have a 4D image of size 90x60x12x350. That means that each voxel is a vector of size 350 (time series).
Now I divide the 3D image (90x60x12) into cubes. So let's say a cube contains n voxels; then I have n vectors of size 350. I want to reduce these n vectors to only one vector and then calculate the correlations between all vectors of all cubes.
So for a cube I can construct the matrix M where I just put the voxels one after another, i.e. M = [v1 v2 v3 ... vn], and each v is of size 350.
Now I can apply PCA in Matlab by using [coeff, score, latent, ~, explained] = pca(M); and taking the first component. And now my confusion begins.
Should I transpose the matrix M, i.e. PCA(M')?
Should I take the first column of coeff or of score?
This third question is now a bit unrelated. Let's assume we have a matrix A = rand(30,100) where the rows are the data points and the columns are the features. Now I want to reduce the dimensionality of the feature vectors while keeping all data points.
How can I do this with PCA?
When I do [coeff, score, latent, ~, explained] = pca(A); then coeff is of dimension 100x29 and score is of size 30x29. I'm totally confused.
Yes, according to the pca help, "Rows of X correspond to observations and columns to variables."
score just tells you the representation of M in the principal component space. You want the first column of coeff.
numberOfDimensions = 5;
coeff = pca(A);                                   % principal directions in columns
reducedDimension = coeff(:, 1:numberOfDimensions);
% pca centers the data internally, so center A before projecting:
reducedData = bsxfun(@minus, A, mean(A)) * reducedDimension;
I disagree with the answer above.
[coeff,score]=pca(A)
where A has rows as observations and column as features.
If A has 3 features and >3 observations (let's say 100) and you want features of 2 dimensions, say a matrix B of size 100x2, what you should do is:
B = score(:,1:2);
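To see how these two answers relate, here is a minimal sketch (the variable names are my own): score is exactly the centered data projected onto coeff, so taking columns of score is equivalent to centering A and then projecting it.
A = rand(30, 100);                        % 30 observations x 100 features
[coeff, score] = pca(A);

% Center A, then project onto the first 2 principal directions
projected = bsxfun(@minus, A, mean(A)) * coeff(:, 1:2);

% Matches the first 2 columns of score up to floating-point error
max(max(abs(projected - score(:, 1:2))))  % ~1e-15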

Unsupervised Filter Feature Selection - Rank by Correlation

I have a set of features which I wish to rank according to their correlation coefficients with each other, without accounting for the true label (that would be supervised feature selection, right?).
My objective is to select the first feature as the one most correlated with all the others, remove it, and so on.
The problem is how to test the correlation of a vector against a matrix (all the other vectors/features). Is it possible to do this, or am I going about it all wrong?
PS: I'm using MATLAB 2013b
Thank you all
Say you had an n-by-d matrix X where the rows are instances and the columns are the features/dimensions; then you can compute the correlation coefficient matrix simply using the corr or corrcoef functions:
% Fisher Iris dataset, 150x4
>> load fisheriris
>> X = meas;
>> C = corr(X)
C =
1.0000 -0.1176 0.8718 0.8179
-0.1176 1.0000 -0.4284 -0.3661
0.8718 -0.4284 1.0000 0.9629
0.8179 -0.3661 0.9629 1.0000
The result is a d-by-d matrix containing correlation coefficients of each feature against every other feature. The diagonal is thus all ones (because corr(x,x) = 1), the matrix is also symmetric (because corr(x,y) = corr(y,x)). Values range from -1 to 1, where -1 means inverse correlation between two variables, 1 means positive correlation, and 0 means no linear correlation.
Now because you want to remove the feature which is on average the most correlated with other features, you have to summarize that matrix as one number per feature. One way to do that is to compute the mean:
% mean
>> mean_corr = mean(C)
mean_corr =
0.6430 0.0220 0.6015 0.6037
% most correlated feature on average
>> [~,idx] = max(mean_corr)
idx =
1
% drop that feature
>> X(:,idx) = [];
EDIT:
I probably should have taken the mean of the absolute value of C in the above code, because we don't care if two variables are positively or negatively correlated, only how strong the correlation is.
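Incorporating that edit, an iterative version of the selection might look like this sketch (numToRemove is an arbitrary choice of mine):
load fisheriris
X = meas;                                 % 150x4
numToRemove = 2;                          % how many features to drop (illustrative)
removed = [];                             % indices into the original feature set
remaining = 1:size(X, 2);
for k = 1:numToRemove
    C = abs(corr(X));                     % absolute correlation, sign doesn't matter
    C(1:size(C,1)+1:end) = 0;             % zero the diagonal (ignore self-correlation)
    avgCorr = sum(C) / (size(C,1) - 1);   % mean correlation with the other features
    [~, idx] = max(avgCorr);              % most correlated feature on average
    removed(end+1) = remaining(idx);      % remember its original column index
    remaining(idx) = [];
    X(:, idx) = [];                       % drop it and repeat
end
removed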

Eigen vector in SVD

I'm going to compute the eigenvalues and eigenvectors of my data matrix for classification.
The rows represent the different classes and the columns represent the features.
So, for example if I have
X=
[2 3 4]
[3 2 4]
[4 5 6]
[8 9 0]
I have to use SVD instead of PCA because the matrix is not square.
What I have done is:
Compute the mean for each row. So I have
Mean=
M1
M2
M3
M4
Subtract the Mean from my matrix X:
Subtract =
[2-M1 3-M1 4-M1]
[3-M2 2-M2 4-M2]
[4-M3 5-M3 6-M3]
[8-M4 9-M4 0-M4]
Covariance matrix = (Subtract*Subtract')/(4-1)
[U,S,V] = svd(X)
Are all my steps right? Is it correct to compute the mean for each row (i.e., per class)?
If I want to project my data into the eigenspace (for dimensionality reduction), which one holds the eigenvectors (U or V)?
You can do PCA whether your matrix is square or not. In fact, your matrix is rarely square because it has the form n*p, where n is the number of observations and p is the number of features. Thus you can use MATLAB's princomp function
[W, pc] = princomp(data);
where W is a weight matrix and pc is the principal component score. You can see your data projected into the principal component space by,
plot(pc(:,1), pc(:,2), '.');
which shows your data along the first and second principal component directions.
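To connect this back to the question's SVD steps, here is a sketch of my own (note the mean is taken per column/feature rather than per row, and princomp is superseded by pca in newer MATLAB releases): after centering the columns of X, the columns of V from the SVD are the principal directions, and you project with Xc*V.
X = [2 3 4; 3 2 4; 4 5 6; 8 9 0];         % 4 observations (rows) x 3 features
Xc = bsxfun(@minus, X, mean(X));          % center each column (feature)
[U, S, V] = svd(Xc, 'econ');

proj_svd = Xc * V;                        % projection onto the principal directions
[W, pc]  = princomp(X);                   % princomp centers internally; pc is the same projection

max(max(abs(abs(proj_svd) - abs(pc))))    % ~1e-15: identical up to column signs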

Singular value decomposition and low-rank tensorial approximation

According to this article
http://www.wseas.us/e-library/conferences/2012/Vouliagmeni/MMAS/MMAS-07.pdf
a matrix can be approximated by rank-one matrices using tensorial approximation. I know that in MATLAB the Kronecker product plays the same role as the tensorial product (the function is kron). Now let us suppose that we have the following matrix:
a=[2 1 3;4 3 5]
a =
2 1 3
4 3 5
SVD of this matrix is
[U E V]=svd(a)
U =
-0.4641 -0.8858
-0.8858 0.4641
E =
7.9764 0 0
0 0.6142 0
V =
-0.5606 0.1382 -0.8165
-0.3913 0.8247 0.4082
-0.7298 -0.5484 0.4082
Please help me implement the algorithm that reconstructs the original matrix using tensorial approximation in MATLAB. How can I apply the tensorial product? Like this:
X = kron(U(:,1), V(:,1));
or how? Thanks in advance.
I'm not quite sure about the tensorial interpretation, but the closest rank-1 approximation to the matrix is essentially the outer product of the two dominant singular vectors, scaled by the dominant singular value.
In simple words, if [U E V] = svd(X), then the closest rank-1 approximation to X is the outer-product of the first singular vectors multiplied by the first singular value.
In MATLAB, you could do this as:
U(:,1)*E(1,1)*V(:,1)'
Which yields:
ans =
2.0752 1.4487 2.7017
3.9606 2.7649 5.1563
Also, mathematically speaking, the kronecker product of a row vector and a column vector is essentially their outer product. So, you could do the same thing using Kronecker products as:
(kron(U(:,1)',V(:,1))*E(1,1))'
Which yields the same answer.
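Extending the same idea, summing the rank-1 outer products over all singular values reconstructs the matrix exactly, while truncating the sum gives the closest rank-k approximation (a sketch of my own):
a = [2 1 3; 4 3 5];
[U, E, V] = svd(a);

approx = zeros(size(a));
for i = 1:min(size(a))                    % one rank-1 term per singular value
    approx = approx + U(:, i) * E(i, i) * V(:, i)';
end
max(abs(approx(:) - a(:)))                % ~1e-15: exact reconstruction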