Whenever I compute the eigenvalues from the PCA coefficients of an image, I always get 1, regardless of the image. What's the reason behind this?
I used the following code.
coeff = pca(pmap);
disp(coeff);
[V, L] = eig(coeff'*coeff);
Lamda = diag(L);
disp(Lamda);
The coeff matrix that pca outputs already contains the eigenvectors, which are mutually orthogonal. They are in fact orthonormal, since MATLAB normalises them; the relative weight of each component is returned in the explained output argument of pca.
So coeff'*coeff gives you the identity matrix, and the eigenvalues of the identity matrix are, of course, all equal to 1.
The result you are seeing is therefore exactly what linear algebra predicts.
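A minimal sketch of this point, assuming the Statistics and Machine Learning Toolbox is available (pmap here is just illustrative random data standing in for your image):
pmap = rand(100, 5);                 %// illustrative data
coeff = pca(pmap);                   %// columns are orthonormal eigenvectors
disp(coeff' * coeff);                %// ~ identity matrix (up to round-off)
disp(eig(coeff' * coeff));           %// eigenvalues of the identity: all 1
[~, ~, ~, ~, explained] = pca(pmap); %// the actual relative weights live here
disp(explained);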
My task is to perform PCA and a whitening transform on a given two-dimensional data set of 5000 points.
My understanding of PCA is that you find the main axis of the data from the eigenvectors of the covariance matrix and then rotate that main axis onto the x axis.
So here's what I did.
[BtEvector, BtEvalue] = eig(MYCov); % eigenvalues and eigenvectors using the built-in function
I first calculated the eigenvalues and eigenvectors. The result was
BtEvalue=[4.027487815706757,0;0,8.903923357227459]
and
BtEvector=[0.033937679569230,-0.999423951036524;-0.999423951036524,-0.033937679569230]
So I figured the main axis corresponds to the eigenvalue 8.903923357227459 and the eigenvector [-0.999423951036524, -0.033937679569230], which is the second column.
Then, because the data is two-dimensional, I set cos(theta) = -0.9994... and sin(theta) = -0.033937.... Since I want the main axis of the data (the eigenvector [-0.999423951036524, -0.033937679569230]) to become the x axis, I built the rotation matrix R = [cos(-theta) -sin(-theta); sin(-theta) cos(-theta)]. With the original data A stored as a 2*5000 matrix, I computed A*R to get the rotated data.
Also, for the whitening case, using Cholesky whitening, I used the inverse of the covariance matrix as the whitening transformation matrix.
Is there something wrong with my algorithm? Could someone verify whether there is an error or a misunderstanding? Thanks a lot in advance.
Since your data is two-dimensional, the covariance matrix that you calculated is not accurate. If you only calculate the covariance with respect to one axis (say x), you're implicitly assuming that the covariance along the y axis is the identity, which is generally not true. Although you've attempted to address this, there is a sounder procedure you can use (explained below).
Unfortunately, this is a common mistake. Have a look at this paper, where it is explained exactly how the covariance should be calculated.
In summary, calculate the covariance along each axis separately (Sx and Sy), then approximate the covariance of the vectorized data as kron(Sx, Sy). This gives a better approximation of the full 2D covariance.
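A minimal sketch of that separable-covariance recipe; the variable names and the orientation of A (rows vs. columns for the two axes) are illustrative assumptions, not part of the original answer:
A = randn(200, 300);      %// illustrative 2-D data arranged as a matrix
Sx = cov(A);              %// covariance along the x (column) direction
Sy = cov(A.');            %// covariance along the y (row) direction
S2d = kron(Sx, Sy);       %// separable approximation to the covariance of A(:)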
Suppose there is a matrix B of size 500*1000 (here, 500 is the number of observations and 1000 is the number of features).
sigma is the covariance matrix of B, and D is a diagonal matrix whose diagonal elements are the eigenvalues of sigma. Assume A is the matrix whose columns are the eigenvectors of the covariance matrix sigma.
I have the following questions:
I need to select the first k = 800 eigenvectors corresponding to the eigenvalues with the largest magnitude in order to rank the selected features. The final matrix is named Aq. How can I do this in MATLAB?
What is the meaning of these selected eigenvectors?
Once I calculate Aq, its size seems to be 1000*800. The observation/time-point dimension of 500 has disappeared. For the final matrix Aq, what does the value 1000 represent now? And what does the value 800 represent?
I'm assuming you determined the eigenvectors with the eig function. What I would recommend in the future is the eigs function. It not only computes eigenvalues and eigenvectors for you, it computes only the k largest eigenvalues with their associated eigenvectors. This can save computational overhead, since you don't have to compute all of the eigenvalues and eigenvectors of your matrix when you only want a subset. You simply supply the covariance matrix of your data to eigs and it returns the k largest eigenvalues and eigenvectors.
Now, back to your problem: what you are describing is ultimately Principal Component Analysis. The mechanics behind it are to compute the covariance matrix of your data and find the eigenvalues and eigenvectors of the result. Doing it this way is not recommended because of numerical instability when computing eigenvalues and eigenvectors of large matrices. The most canonical way to do this now is via the Singular Value Decomposition of the mean-centred data matrix. Concretely, the columns of the V matrix give you the eigenvectors of the covariance matrix, i.e. the principal components, and the eigenvalues of the covariance matrix are the squares of the singular values on the diagonal of S, divided by the number of observations minus 1.
See this informative post on Cross Validated as to why this is preferred:
https://stats.stackexchange.com/questions/79043/why-pca-of-data-by-means-of-svd-of-the-data
I'll throw in another link as well that talks about the theory behind why the Singular Value Decomposition is used in Principal Component Analysis:
https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca
Now let's answer your questions one at a time.
Question #1
MATLAB's eig does not return the eigenvalues (and the corresponding eigenvectors) in any guaranteed order. If you wish to select the largest k eigenvalues and associated eigenvectors given the output of eig (800 in your example), you need to sort the eigenvalues in descending order, rearrange the columns of the eigenvector matrix to match, and then select the first k columns.
Note that eigs does not guarantee sorted order either, so you will have to sort its output explicitly as well when it comes down to it.
In MATLAB, doing what we described above would look something like this:
sigma = cov(B);
[A,D] = eig(sigma);
vals = diag(D);
[~,ind] = sort(abs(vals), 'descend');
Asort = A(:,ind);
Note that the sorting is done on the absolute value of the eigenvalues. A covariance matrix is symmetric positive semi-definite, so its eigenvalues should all be non-negative, but round-off can produce tiny negative values; more generally, for a symmetric matrix a large-magnitude negative eigenvalue (say -10000) still indicates a component carrying significant structure in your data, and sorting on the raw values would push it down to the lowest ranks. Sorting on magnitude avoids both problems.
The first line of code finds the covariance matrix of B; you said it's already stored in sigma, but let's keep the example reproducible. Next, we find the eigenvalues of your covariance matrix and the associated eigenvectors. Take note that each column of the eigenvector matrix A represents one eigenvector; specifically, the ith column / eigenvector of A corresponds to the ith eigenvalue in D.
However, the eigenvalues are in a diagonal matrix, so we extract out the diagonals with the diag command, sort them and figure out their ordering, then rearrange A to respect this ordering. I use the second output of sort because it tells you the position of where each value in the unsorted result would appear in the sorted result. This is the ordering we need to rearrange the columns of the eigenvector matrix A. It's imperative that you choose 'descend' as the flag so that the largest eigenvalue and associated eigenvector appear first, just like we talked about before.
You can then pluck out the first k largest vectors and values via:
k = 800;
Aq = Asort(:,1:k);
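As an alternative sketch using eigs (mentioned above), which returns the k largest-magnitude eigenpairs directly; in practice eigs pays off most when k is much smaller than the matrix size, and sorting is still advisable (ind2 is an illustrative name):
k = 800;
[Ak, Dk] = eigs(sigma, k);                  %// k largest-magnitude eigenvalues/eigenvectors
[~, ind2] = sort(abs(diag(Dk)), 'descend'); %// enforce descending order explicitly
Aq = Ak(:, ind2);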
Question #2
It's a well known fact that the eigenvectors of the covariance matrix are equal to the principal components. Concretely, the first principal component (i.e. the largest eigenvector and associated largest eigenvalue) gives you the direction of the maximum variability in your data. Each principal component after that gives you variability of a decreasing nature. It's also good to note that each principal component is orthogonal to each other.
Here's a good example from Wikipedia for two dimensional data:
I pulled the above image from the Wikipedia article on Principal Component Analysis, which I linked you to above. This is a scatter plot of samples that are distributed according to a bivariate Gaussian distribution centred at (1,3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The component with a standard deviation of 3 is the first principal component while the one that is orthogonal is the second component. The vectors shown are the eigenvectors of the covariance matrix scaled by the square root of the corresponding eigenvalue, and shifted so their tails are at the mean.
Now let's get back to your question. The reason why we take a look at the k largest eigenvalues is a way of performing dimensionality reduction. Essentially, you would be performing a data compression where you would take your higher dimensional data and project them onto a lower dimensional space. The more principal components you include in your projection, the more it will resemble the original data. It actually begins to taper off at a certain point, but the first few principal components allow you to faithfully reconstruct your data for the most part.
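As a side sketch (not in the original workflow), you can quantify that taper-off with the cumulative fraction of variance explained; this assumes vals and ind from the sorting code earlier, and svals, explained, k90 are illustrative names:
svals = vals(ind);                          %// eigenvalues in descending order
explained = cumsum(svals) / sum(svals);     %// cumulative fraction of total variance
k90 = find(explained >= 0.90, 1);           %// smallest k capturing ~90% of the variance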
A great visual example of performing PCA (or rather SVD) and data reconstruction can be found in this Quora post I stumbled upon in the past:
http://qr.ae/RAEU8a
Question #3
You would use this matrix to reproject your higher-dimensional data onto a lower-dimensional space. The 1000 rows are still there because your dataset originally had 1000 features; the 800 is the reduced dimensionality of your data. Consider this matrix a transformation from the original feature dimensionality (1000) down to the reduced dimensionality (800).
You can then use this matrix to reconstruct what the original data was. Concretely, this gives you an approximation of the original data with the least amount of (squared) error for that number of components. In this case you don't need all of the principal components (just the k largest ones), and you can create an approximation of your data with less information than you had before.
How you reconstruct your data is very simple. Let's talk about the forward and reverse operations first with the full data. The forward operation is to take your original data and reproject it but instead of the lower dimensionality, we will use all of the components. You first need to have your original data but mean subtracted:
Bm = bsxfun(@minus, B, mean(B,1));
Bm will produce a matrix where each feature of every sample is mean subtracted. bsxfun allows the subtraction of two matrices in unequal dimension provided that you can broadcast the dimensions so that they can both match up. Specifically, what will happen in this case is that the mean of each column / feature of B will be computed and a temporary replicated matrix will be produced that is as large as B. When you subtract your original data with this replicated matrix, the effect will subtract every data point with their respective feature means, thus decentralizing your data so that the mean of each feature is 0.
Once you do this, the operation to project is simply:
Bproject = Bm*Asort;
The above operation is quite simple. What you are doing is expressing each sample's features as a linear combination of the principal components. For example, given the first sample (the first row of the decentralized data), its first feature in the projected domain is the dot product of that row vector with the first principal component, which is a column vector. Its second feature in the projected domain is a weighted sum of the same row with the second component, and so on. You repeat this for all samples and all principal components. In effect, you are reprojecting the data so that it is expressed with respect to the principal components, which are orthogonal basis vectors that transform your data from one representation to another.
A better description of what I just talked about can be found here. Look at Amro's answer:
Matlab Principal Component Analysis (eigenvalues order)
Now to go backwards, you simply do the inverse operation, but a special property with the eigenvector matrix is that if you transpose this, you get the inverse. To get the original data back, you undo the operation above and add the means back to the problem:
out = bsxfun(@plus, Bproject*Asort.', mean(B, 1));
You want to recover the original data, so you're solving for Bm with respect to the projection step above; the inverse of Asort here is simply its transpose. After this operation you have the original data, except it is still decentralized, so you must add the mean of each feature back into the data matrix to get the final result. That's why we use another bsxfun call, so this is done for each sample's feature values.
You should be able to go back and forth from the original domain and projected domain with the above two lines of code. Now where the dimensionality reduction (or the approximation of the original data) comes into play is the reverse operation. What you need to do first is project the data onto the bases of the principal components (i.e. the forward operation), but now to go back to the original domain where we are trying to reconstruct the data with a reduced number of principal components, you simply replace Asort in the above code with Aq and also reduce the amount of features you're using in Bproject. Concretely:
out = bsxfun(@plus, Bproject(:,1:k)*Aq.', mean(B, 1));
Doing Bproject(:,1:k) selects out the k features in the projected domain of your data, corresponding to the k largest eigenvectors. Interestingly, if you just want the representation of the data with regards to a reduced dimensionality, you can just use Bproject(:,1:k) and that'll be enough. However, if you want to go forward and compute an approximation of the original data, we need to compute the reverse step. The above code is simply what we had before with the full dimensionality of your data, but we use Aq as well as selecting out the k features in Bproject. This will give you the original data that is represented by the k largest eigenvectors / eigenvalues in your matrix.
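As a quick sanity check (an addition, not part of the original recipe), you can measure how close the k-component approximation out is to the original B:
rel_err = norm(B - out, 'fro') / norm(B, 'fro');   %// relative reconstruction error
fprintf('Relative reconstruction error with k = %d components: %g\n', k, rel_err);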
If you'd like to see an awesome example, I'll mimic the Quora post that I linked to you but using another image. Consider doing this with a grayscale image where each row is a "sample" and each column is a feature. Let's take the cameraman image that's part of the image processing toolbox:
im = imread('cameraman.tif');
imshow(im); %// Using the image processing toolbox
We get this image:
This is a 256 x 256 image, which means that we have 256 data points and each point has 256 features. I'm going to convert the image to double for precision when computing the covariance matrix. Then I'm going to repeat the above code, incrementally increasing k at each step: 3, 11, 15, 25, 45, 65, 125 and 155. Therefore, for each k we introduce more principal components, and we should slowly start to get a reconstruction of our data.
Here's some runnable code that illustrates my point:
%%%%%%%// Pre-processing stage
clear all;
close all;
%// Read in image - make sure we cast to double
B = double(imread('cameraman.tif'));
%// Calculate covariance matrix
sigma = cov(B);
%// Find eigenvalues and eigenvectors of the covariance matrix
[A,D] = eig(sigma);
vals = diag(D);
%// Sort their eigenvalues
[~,ind] = sort(abs(vals), 'descend');
%// Rearrange eigenvectors
Asort = A(:,ind);
%// Find mean subtracted data
Bm = bsxfun(@minus, B, mean(B,1));
%// Reproject data onto principal components
Bproject = Bm*Asort;
%%%%%%%// Begin reconstruction logic
figure;
counter = 1;
for k = [3 11 15 25 45 65 125 155]
    %// Extract out highest k eigenvectors
    Aq = Asort(:,1:k);
    %// Project back onto original domain
    out = bsxfun(@plus, Bproject(:,1:k)*Aq.', mean(B, 1));
    %// Place projection onto right slot and show the image
    subplot(4, 2, counter);
    counter = counter + 1;
    imshow(out,[]);
    title(['k = ' num2str(k)]);
end
As you can see, the majority of the code is the same from what we have seen. What's different is that I loop over all values of k, project back onto the original space (i.e. computing the approximation) with the k highest eigenvectors, then show the image.
We get this nice figure:
As you can see, starting with k=3 doesn't really do us any favours... we can see some general structure, but it wouldn't hurt to add more in. As we start increasing the number of components, we start to get a clearer picture of what the original data looks like. At k=25, we actually can see what the cameraman looks like perfectly, and we don't need components 26 and beyond to see what's happening. This is what I was talking about with regards to data compression where you don't need to work on all of the principal components to get a clear picture of what's going on.
I'd like to end this note by referring you to Chris Taylor's wonderful exposition on the topic of Principal Components Analysis, with code, graphs and a great explanation to boot! This is where I got started on PCA, but the Quora post is what solidified my knowledge.
Matlab - PCA analysis and reconstruction of multi dimensional data
I have a matrix M of size NxP. Its P columns are mutually orthogonal (M is a basis). I also have a vector V of size N.
My objective is to transform the first vector of M into V and to update the others so that they remain orthogonal. I know that the origins of V and M are the same, so it is basically a rotation by a certain angle. I assume we can find a matrix T such that T*M = M'. However, I can't figure out the details of how to do it (with MATLAB).
Also, I know there might be an infinite number of transforms doing that, but I'd like to get the simplest one (in which the other vectors of M approximately remain the same, i.e. no rotation around the first vector).
A small picture to illustrate. In my actual case, N and P can be large integers (not necessarily 3):
Thanks in advance for your help!
[EDIT] Alternative solution to Gram-Schmidt (accepted answer)
I managed to get a correct solution by finding a rotation matrix R through an optimisation problem that minimises the 2-norm between M and R*M, subject to the constraints:
V is orthogonal to R*M[1] ... R*M[P-1] (i.e V'*(R*M[i]) = 0)
R*M[0] = V
Due to the solver constraints, I couldn't indicate that R*M[0] ... R*M[P-1] are all pairwise orthogonal (i.e (R*M)' * (R*M) = I).
Luckily, it seems that with this problem and with my solver (CVX using SDPT3), the resulting R*M[0] ... R*M[P-1] are also pairwise orthogonal.
I believe you want to use the Gram-Schmidt process here, which finds an orthogonal basis for a set of vectors. If V is not orthogonal to M[0], you can simply change M[0] to V and run Gram-Schmidt, to arrive at an orthogonal basis. If it is orthogonal to M[0], instead change another, non-orthogonal vector such as M[1] to V and swap the columns to make it first.
Mind you, the vector V needs to be in the column space of M, or you will always have a different basis than you had before.
Matlab doesn't have a built-in Gram-Schmidt command, although you can use the qr command to get an orthogonal basis. However, this won't work if you need V to be one of the vectors.
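Here is a minimal sketch of the Gram-Schmidt idea described in the accepted answer, assuming V is not already orthogonal to M(:,1); A and the loop variables are illustrative names, not from the original post:
A = M;
A(:,1) = V / norm(V);                               %// force the first basis vector to be (a unit) V
for j = 2:size(A, 2)
    for i = 1:j-1
        A(:,j) = A(:,j) - (A(:,i)' * A(:,j)) * A(:,i);  %// remove the component along A(:,i)
    end
    A(:,j) = A(:,j) / norm(A(:,j));                 %// renormalise the j-th vector
end
%// The columns of A are now orthonormal and the first one points along V.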
Option #1: if you have changed one vector and want to rotate the matrix to restore its orthogonality, then I believe this method should work for you in MATLAB:
http://www.mathworks.com/help/symbolic/mupad_ref/numeric-rotationmatrix.html
(edit by another user: above link is broken, possible redirect: Matrix Rotations and Transformations)
If it does not, then ...
Option #2: I did not do this in MATLAB, but part of another task of mine was to find the eigenvalues and eigenvectors of a matrix. To achieve this I used SVD, and part of the SVD algorithm was the Jacobi rotation: you keep rotating the matrix until it is (almost) diagonal to some precision, and invertible.
https://math.stackexchange.com/questions/222171/what-is-the-difference-between-diagonalization-and-orthogonal-diagonalization
An approximate Jacobi-rotation algorithm for your case should be similar to this one. I may be wrong at some point, so you will need to double-check it in the relevant docs:
1) change the values in the existing vector
2) compute the angle between the actual and the new vector
3) create a rotation matrix (see the sketch after this list) by ...
putting cos(angle) on the diagonal of the rotation matrix
putting sin(angle) in the bottom-left corner of the matrix
putting -sin(angle) in the top-right corner of the matrix
4) multiply the vector, or the matrix of vectors, by the rotation matrix in a loop until your matrix of vectors is invertible and diagonalised. Invertibility can be checked with the determinant (watch for singularity), and orthogonality (the matrix is diagonalised) can be tested like this: if the maximum value in the LU matrix is below some constant, stop rotating; at this point the new matrix should contain only orthogonal vectors.
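A minimal sketch of step 3 above: a 2-D rotation matrix for an angle theta. This is illustrative only; a full Jacobi rotation embeds this 2-by-2 block into an identity matrix at the rows/columns being rotated:
theta = pi / 6;                                     %// example angle
R = [cos(theta), -sin(theta); ...
     sin(theta),  cos(theta)];                      %// cos on the diagonal, +/-sin off-diagonal
v_new = R * [1; 0];                                 %// rotate an example vector by theta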
Unfortunately, I am not able to find the exact pseudocode I was referring to in the past, but these links may help you understand the Jacobi rotation:
http://www.physik.uni-freiburg.de/~severin/fulltext.pdf
http://web.stanford.edu/class/cme335/lecture7.pdf
https://www.nada.kth.se/utbildning/grukth/exjobb/rapportlistor/2003/rapporter03/maleko_mercy_03003.pdf
Does anybody know how the sign of the singular vectors resulting from Matlab's svd function is determined?
Let:
B = U*S*V'
be a valid svd decomposition of a real or complex 2-by-2 matrix B, then:
B = (U*c)*S *(V*c)'
is also valid, where c is a matrix that changes the sign of one or both singular vectors:
c = diag([1 -1]), diag([-1 1]) or diag([-1 -1]).
I want to know how Matlab's svd algorithm determines the sign of the singular vectors in U and V.
MATLAB uses LAPACK's DGESVD routine for the singular value decomposition, which does not take the direction of the resulting vectors into account. In applications where the data is decomposed with the SVD, processed, and then reconstructed, the signs make no difference; they only become important when the decomposed data itself is analysed.
One might apply a sign-correction algorithm after performing the SVD in MATLAB, but I believe the appropriate sign correction depends on the actual meaning of the data.
In the paper you provided, the direction is chosen to agree with the majority of the data points. This won't work for data with a symmetrical distribution, since the theoretical direction is zero and the sample direction will be random, resulting in high numerical instability.
If the goal is just numerical stability of the solution, it is enough to choose some fixed reference vector and flip the SVD vectors so that they all lie in the same half-space as it.
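A minimal sketch of that half-space idea; the reference vector r and the use of bsxfun are illustrative choices, not part of MATLAB's svd itself:
[U, S, V] = svd(B, 'econ');
r = ones(size(U, 1), 1);            %// fixed reference vector defining the half-space
s = sign(U' * r);                   %// +1 / -1 per singular vector
s(s == 0) = 1;                      %// never zero out a column
U = bsxfun(@times, U, s.');         %// flip the chosen columns of U ...
V = bsxfun(@times, V, s.');         %// ... and the matching columns of V
%// U*S*V' is unchanged, but the signs of the singular vectors are now deterministic.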