Sign determination of singular vectors in Matlab's svd function - matlab

Does anybody know how the sign of the singular vectors resulting from Matlab's svd function is determined?
Let:
B = U*S*V'
be a valid svd decomposition of a real or complex 2-by-2 matrix B, then:
B = (U*c)*S *(V*c)'
is also valid, where c is a matrix that changes the sign of one or both singular vectors:
c = diag([1 -1]), diag([-1 1]) or diag([-1 -1]).
I want to know how Matlab's svd algorithm determines the sign of the singular vectors in U and V.

Matlab uses LAPACK's DGESVD implementation for singular value decomposition, which does not fix the direction (sign) of the resulting vectors. In typical applications, where the data is decomposed, processed and then reconstructed, the signs make no difference; they only become important when the decomposed data itself is being analyzed.
One might apply a sign-correction algorithm after performing the SVD in Matlab, but I believe the appropriate correction depends on the actual meaning of the data.
In the paper you provided, the direction is chosen to agree with the majority of the data points. This won't work for data with a symmetric distribution: the theoretical direction is zero, so the sample direction will be essentially random, resulting in high numerical instability.
If the goal is just numerical stability of the solution, then it is enough to choose some fixed reference vector and flip the SVD vectors so that they all lie in the same half-space as it, as in the sketch below.
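A minimal sketch of that convention for a real square B (this is not what Matlab/LAPACK does internally, just one possible post-hoc correction); the flips chosen from the columns of U are applied to V as well so that B = U*S*V' is preserved:
[U, S, V] = svd(B);
r = ones(size(U,1), 1);    %// arbitrary but fixed reference vector
c = sign(r' * U);          %// one sign (+1/-1) per column of U
c(c == 0) = 1;             %// guard against a column exactly orthogonal to r
U = U * diag(c);           %// flip the left singular vectors ...
V = V * diag(c);           %// ... and the matching right ones, so U*S*V' still equals B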

Related

What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?

Suppose there is a matrix B whose size is 500*1000 double (here, 500 represents the number of observations and 1000 represents the number of features).
sigma is the covariance matrix of B, and D is a diagonal matrix whose diagonal elements are the eigenvalues of sigma. Assume A is the eigenvectors of the covariance matrix sigma.
I have the following questions:
I need to select the first k = 800 eigenvectors corresponding to the eigenvalues with the largest magnitude in order to rank the selected features. The final matrix is named Aq. How can I do this in MATLAB?
What is the meaning of these selected eigenvectors?
Once I calculate Aq, it seems its size is 1000*800 double. The time-point/observation dimension of 500 has disappeared. What does the 1000 in the final matrix Aq represent now? And what does the 800 represent?
I'm assuming you determined the eigenvectors from the eig function. What I would recommend to you in the future is to use the eigs function. This not only computes the eigenvalues and eigenvectors for you, but it will compute the k largest eigenvalues with their associated eigenvectors for you. This may save computational overhead where you don't have to compute all of the eigenvalues and associated eigenvectors of your matrix as you only want a subset. You simply supply the covariance matrix of your data to eigs and it returns the k largest eigenvalues and eigenvectors for you.
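For reference, a minimal sketch of that eigs route (assuming sigma is your covariance matrix and k the number of components you want). eigs pays off mainly when k is much smaller than the size of sigma, and, as noted further down, its ordering is not guaranteed, so it is sorted explicitly here:
k = 800;
[A_k, D_k] = eigs(sigma, k);                 %// k largest-magnitude eigenpairs
[~, ind] = sort(abs(diag(D_k)), 'descend');  %// enforce descending order explicitly
Aq = A_k(:, ind);                            %// columns = the k leading eigenvectors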
Now, back to your problem, what you are describing is ultimately Principal Component Analysis. The mechanics behind this would be to compute the covariance matrix of your data and find the eigenvalues and eigenvectors of the computed result. However, doing it this way is not recommended due to numerical instability when computing the eigenvalues and eigenvectors of large matrices. The most canonical way to do this now is via Singular Value Decomposition of the (mean-centred) data matrix itself. Concretely, the columns of the V matrix give you the eigenvectors of the covariance matrix, i.e. the principal components, and the eigenvalues of the covariance matrix are the squared singular values from the diagonal of S, divided by n-1 where n is the number of observations.
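A hedged sketch of that SVD route (assuming, as in your setup, that B holds observations in rows and features in columns):
Bm = bsxfun(@minus, B, mean(B,1));       %// centre each feature
[U, S, V] = svd(Bm, 'econ');             %// economy-size SVD of the data matrix
pcs = V;                                 %// columns of V = the leading principal components
eigvals = diag(S).^2 / (size(B,1) - 1);  %// the (nonzero) eigenvalues of cov(B), sorted descending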
See this informative post on Cross Validated as to why this is preferred:
https://stats.stackexchange.com/questions/79043/why-pca-of-data-by-means-of-svd-of-the-data
I'll throw in another link as well that talks about the theory behind why the Singular Value Decomposition is used in Principal Component Analysis:
https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca
Now let's answer your questions one at a time.
Question #1
MATLAB's eig function returns the eigenvalues, and the corresponding eigenvectors, in no guaranteed order. If you wish to select the largest k eigenvalues and their associated eigenvectors given the output of eig (k = 800 in your example), you'll need to sort the eigenvalues in descending order, rearrange the columns of the eigenvector matrix produced by eig to match, and then select the first k columns.
I should also note that using eigs will not guarantee sorted order, so you will have to explicitly sort these too when it comes down to it.
In MATLAB, doing what we described above would look something like this:
sigma = cov(B);
[A,D] = eig(sigma);
vals = diag(D);
[~,ind] = sort(abs(vals), 'descend');
Asort = A(:,ind);
It's worth noting that the sorting is done on the absolute value of the eigenvalues. A covariance matrix is positive semi-definite, so its eigenvalues are non-negative in exact arithmetic, but rounding error can produce small negative values, and for a general symmetric matrix an eigenvalue of, say, -10000 would still indicate a component with significant meaning for your data. If we sorted purely on the signed values, such a component would be placed near the lower ranks, so we sort on magnitude instead.
The first line of code finds the covariance matrix of B. Even though you said it's already stored in sigma, let's make this reproducible. Next, we find the eigenvalues of your covariance matrix and the associated eigenvectors. Take note that each column of the eigenvector matrix A represents one eigenvector; specifically, the ith column / eigenvector of A corresponds to the ith eigenvalue seen in D.
However, the eigenvalues are in a diagonal matrix, so we extract the diagonal with the diag command, sort the values and figure out their ordering, then rearrange A to respect that ordering. I use the second output of sort because it tells you, for each position in the sorted result, where that value came from in the unsorted input; applying it to the columns of A rearranges the eigenvectors to match the sorted eigenvalues. It's imperative that you choose 'descend' as the flag so that the largest eigenvalue and its associated eigenvector appear first, just like we talked about before.
You can then pluck out the first k largest vectors and values via:
k = 800;
Aq = Asort(:,1:k);
Question #2
It's a well known fact that the eigenvectors of the covariance matrix are equal to the principal components. Concretely, the first principal component (i.e. the largest eigenvector and associated largest eigenvalue) gives you the direction of the maximum variability in your data. Each principal component after that gives you variability of a decreasing nature. It's also good to note that each principal component is orthogonal to each other.
Here's a good example from Wikipedia for two dimensional data (see the figure in the Wikipedia article on Principal Component Analysis, which I linked to above).
The figure is a scatter plot of samples that are distributed according to a bivariate Gaussian distribution centred at (1,3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The component with a standard deviation of 3 is the first principal component while the one that is orthogonal to it is the second component. The vectors shown are the eigenvectors of the covariance matrix scaled by the square root of the corresponding eigenvalue, and shifted so their tails are at the mean.
Now let's get back to your question. The reason why we take a look at the k largest eigenvalues is a way of performing dimensionality reduction. Essentially, you would be performing a data compression where you would take your higher dimensional data and project them onto a lower dimensional space. The more principal components you include in your projection, the more it will resemble the original data. It actually begins to taper off at a certain point, but the first few principal components allow you to faithfully reconstruct your data for the most part.
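One way to see that taper-off numerically (reusing the sorted eigenvalues vals from the earlier snippet) is to plot the cumulative fraction of total variance retained by the first k components:
sorted_vals = sort(abs(vals), 'descend');
explained = cumsum(sorted_vals) / sum(sorted_vals);
plot(explained);
xlabel('number of principal components k');
ylabel('fraction of total variance retained');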
A great visual example of performing PCA (or rather SVD) and data reconstruction can be found in this great Quora post I stumbled upon in the past.
http://qr.ae/RAEU8a
Question #3
You would use this matrix to reproject your higher dimensional data onto a lower dimensional space. The number of rows being 1000 is still there, which means that there were originally 1000 features in your dataset. The 800 is what the reduced dimensionality of your data would be. Consider this matrix as a transformation from the original dimensionality of a feature (1000) down to its reduced dimensionality (800).
You would then use this matrix in conjunction with reconstructing what the original data was. Concretely, this would give you an approximation of what the original data looked like with the least amount of error. In this case, you don't need to use all of the principal components (i.e. just the k largest vectors) and you can create an approximation of your data with less information than what you had before.
How you reconstruct your data is very simple. Let's talk about the forward and reverse operations first with the full data. The forward operation is to take your original data and reproject it but instead of the lower dimensionality, we will use all of the components. You first need to have your original data but mean subtracted:
Bm = bsxfun(@minus, B, mean(B,1));
Bm will produce a matrix where each feature of every sample has the mean subtracted. bsxfun allows an operation on two matrices of unequal dimensions provided that the dimensions can be broadcast so that they match up. Specifically, what will happen in this case is that the mean of each column / feature of B will be computed, and a temporary replicated matrix will be produced that is as large as B. When you subtract your original data with this replicated matrix, every data point has its respective feature mean subtracted, thus centring your data so that the mean of each feature is 0.
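As a side note, in MATLAB R2016b and newer, implicit expansion lets you drop bsxfun and write the centring directly:
Bm = B - mean(B, 1);   %// equivalent to bsxfun(@minus, B, mean(B,1)) in R2016b+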
Once you do this, the operation to project is simply:
Bproject = Bm*Asort;
The above operation is quite simple. What you are doing is expressing each sample's features as a linear combination of principal components. For example, given the first sample, i.e. the first row of the centred data, the first sample's first feature in the projected domain is the dot product of that row vector with the first principal component, which is a column vector. The first sample's second feature in the projected domain is a weighted sum of the entire sample and the second component. You would repeat this for all samples and all principal components. In effect, you are reprojecting the data so that it is expressed with respect to the principal components, which are orthogonal basis vectors that transform your data from one representation to another.
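A quick sanity check of that dot-product description, reusing Bm, Asort and Bproject from the snippets above:
dp = Bm(1,:) * Asort(:,1);   %// first sample dotted with the first principal component
abs(dp - Bproject(1,1))      %// should be ~0 (up to rounding)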
A better description of what I just talked about can be found here. Look at Amro's answer:
Matlab Principal Component Analysis (eigenvalues order)
Now to go backwards, you simply do the inverse operation, and a special property of the eigenvector matrix is that it is orthogonal, so its transpose is its inverse. To get the original data back, you undo the operation above and add the means back to the problem:
out = bsxfun(@plus, Bproject*Asort.', mean(B, 1));
You want to get the original data back, so you're solving for Bm with respect to the previous operation that I did, and the inverse of Asort here is just its transpose. What happens after you perform this operation is that you get the original data back, but the data is still mean-centred. To get the original data back you must add the mean of each feature back into the data matrix, which is why there is another bsxfun call: it does this for each sample's feature values.
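A quick check that the round trip really does recover B, reusing the out computed above with the full set of components:
norm(out - B, 'fro') / norm(B, 'fro')   %// relative error, should be on the order of eps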
You should be able to go back and forth from the original domain and projected domain with the above two lines of code. Now where the dimensionality reduction (or the approximation of the original data) comes into play is the reverse operation. What you need to do first is project the data onto the bases of the principal components (i.e. the forward operation), but now to go back to the original domain where we are trying to reconstruct the data with a reduced number of principal components, you simply replace Asort in the above code with Aq and also reduce the amount of features you're using in Bproject. Concretely:
out = bsxfun(@plus, Bproject(:,1:k)*Aq.', mean(B, 1));
Doing Bproject(:,1:k) selects out the k features in the projected domain of your data, corresponding to the k largest eigenvectors. Interestingly, if you just want the representation of the data with regards to a reduced dimensionality, you can just use Bproject(:,1:k) and that'll be enough. However, if you want to go forward and compute an approximation of the original data, we need to compute the reverse step. The above code is simply what we had before with the full dimensionality of your data, but we use Aq as well as selecting out the k features in Bproject. This will give you the original data that is represented by the k largest eigenvectors / eigenvalues in your matrix.
If you'd like to see an awesome example, I'll mimic the Quora post that I linked to you but using another image. Consider doing this with a grayscale image where each row is a "sample" and each column is a feature. Let's take the cameraman image that's part of the image processing toolbox:
im = imread('cameraman.tif');
imshow(im); %// Using the image processing toolbox
We get this image:
This is a 256 x 256 image, which means that we have 256 data points and each point has 256 features. What I'm going to do is convert the image to double for precision in computing the covariance matrix. Then I'm going to repeat the above code, incrementally increasing k at each go: 3, 11, 15, 25, 45, 65, 125 and 155. Therefore, for each k, we are introducing more principal components and we should slowly start to get a reconstruction of our data.
Here's some runnable code that illustrates my point:
%%%%%%%// Pre-processing stage
clear all;
close all;
%// Read in image - make sure we cast to double
B = double(imread('cameraman.tif'));
%// Calculate covariance matrix
sigma = cov(B);
%// Find eigenvalues and eigenvectors of the covariance matrix
[A,D] = eig(sigma);
vals = diag(D);
%// Sort their eigenvalues
[~,ind] = sort(abs(vals), 'descend');
%// Rearrange eigenvectors
Asort = A(:,ind);
%// Find mean subtracted data
Bm = bsxfun(@minus, B, mean(B,1));
%// Reproject data onto principal components
Bproject = Bm*Asort;
%%%%%%%// Begin reconstruction logic
figure;
counter = 1;
for k = [3 11 15 25 45 65 125 155]
%// Extract out highest k eigenvectors
Aq = Asort(:,1:k);
%// Project back onto original domain
out = bsxfun(@plus, Bproject(:,1:k)*Aq.', mean(B, 1));
%// Place projection onto right slot and show the image
subplot(4, 2, counter);
counter = counter + 1;
imshow(out,[]);
title(['k = ' num2str(k)]);
end
As you can see, the majority of the code is the same from what we have seen. What's different is that I loop over all values of k, project back onto the original space (i.e. computing the approximation) with the k highest eigenvectors, then show the image.
We get this nice figure:
As you can see, starting with k=3 doesn't really do us any favours... we can see some general structure, but it wouldn't hurt to add more in. As we start increasing the number of components, we start to get a clearer picture of what the original data looks like. At k=25, we actually can see what the cameraman looks like perfectly, and we don't need components 26 and beyond to see what's happening. This is what I was talking about with regards to data compression where you don't need to work on all of the principal components to get a clear picture of what's going on.
I'd like to end this note by referring you to Chris Taylor's wonderful exposition on the topic of Principal Components Analysis, with code, graphs and a great explanation to boot! This is where I got started on PCA, but the Quora post is what solidified my knowledge.
Matlab - PCA analysis and reconstruction of multi dimensional data

compute SVD using Matlab function

I have a question about SVD. In the literature I have read, it is written that we first have to convert our input matrix into a covariance matrix, and then apply Matlab's svd function.
But on the MathWorks website, the svd function is applied directly to the input matrix (no need to convert it into a covariance matrix first):
[U,S,V]=svd(inImageD);
Which one is correct?
And if we want to do dimensionality reduction, we have to project our data onto the eigenvectors. But where are the eigenvectors generated by the svd function?
I know that S holds the eigenvalues, but what are U and V?
To reduce the dimensionality of our data, do we need to subtract the mean from the input matrix before multiplying it by the eigenvectors, or can we just multiply the input matrix by the eigenvectors (without subtracting the mean first)?
EDIT
Suppose I want to do classification using SIFT features and an SVM classifier.
I have 10 images for training and I arrange each of them in a different row,
so the 1st row is for the 1st image, the 2nd row for the 2nd image, and so on:
Feat = [1 2 5 6 7   % Image 1
        2 9 0 6 5   % Image 2
        3 4 7 8 2   % Image 3
        2 3 6 3 1   % Image 4
        ...         % and so on
       ];
To do dimensionality reduction (from my 10x5 matrix), we have to do A*EigenVector.
And from what you explained (@Sam Roberts), I can compute it by using the eigs function on the covariance matrix (instead of using the svd function).
And as I arrange the features of the images in different rows, I need to do A'*A.
So it becomes:
Matrix=A'*A
MAT_Cov=Cov(Matrix)
[EigVector EigValue] = eigs (MAT_Cov);
is that right??
Eigenvector decomposition (EVD) and singular value decomposition (SVD) are closely related.
Let's say you have some data a = rand(3,4);. Note that this is not a square matrix - it represents a dataset of observations (rows) and variables (columns).
Do the following:
[u1,s1,v1] = svd(a);
[u2,s2,v2] = svd(a');
[e1,d1] = eig(a*a');
[e2,d2] = eig(a'*a);
Now note a few things.
Up to the sign (+/-), which is arbitrary, u1 is the same as v2. Up to a sign and an ordering of the columns, they are also equal to e1. (Note that there may be some very very tiny numerical differences as well, due to slight differences in the svd and eig algorithms).
Similarly, u2 is the same as v1 and e2.
s1 equals s2, and apart from some extra columns and rows of zeros, both also equal sqrt(d1) and sqrt(d2). Again, there may be some very tiny numerical differences just due to algorithmic issues (they'll be on the order of 1e-10 or so).
Note also that a*a' is basically the covariances of the rows, and a'*a is basically the covariances of the columns (that's not quite true - a would need to be centred first by subtracting the column or row mean for them to be equal, and there might be a multiplicative constant difference as well, but it's basically pretty similar).
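A quick numerical check of those relationships, reusing the variables from the snippet above:
%// singular values squared match the eigenvalues of a*a' (up to ordering)
norm(sort(diag(s1), 'descend').^2 - sort(diag(d1), 'descend'))   %// ~0
%// each column of e1 lines up with some column of u1, up to sign and ordering
max(abs(u1' * e1))                                               %// all entries should be ~1
%// covariance of the columns, once a is centred
ac = bsxfun(@minus, a, mean(a, 1));
norm(cov(a) - ac'*ac/(size(a,1) - 1))                            %// ~0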
Now to answer your questions, I assume that what you're really trying to do is PCA. You can do PCA either by taking the original data matrix and applying SVD, or by taking its covariance matrix and applying EVD. Note that Statistics Toolbox has two functions for PCA - pca (in older versions princomp) and pcacov.
Both do essentially the same thing, but from different starting points, because of the above equivalences between SVD and EVD.
Strictly speaking, u1, v1, u2 and v2 above are not eigenvectors, they are singular vectors - and s1 and s2 are singular values. They are singular vectors/values of the matrix a. e1 and d1 are the eigenvectors and eigenvalues of a*a' (not a), and e2 and d2 are the eigenvectors and eigenvalues of a'*a (not a). a does not have any eigenvectors - only square matrices have eigenvectors.
Centring by subtracting the mean is a separate issue - you would typically do that prior to PCA, but there are situations where you wouldn't want to. You might also want to normalise by dividing by the standard deviation but again, you wouldn't always want to - it depends what the data represents and what question you're trying to answer.

How to calculate the squared inverse of a matrix in Matlab

I have to calculate:
gamma=(I-K*A^-1)*OLS;
where I is the identity matrix, K and A are diagonal matrices of the same size, and OLS is the ordinary least squares estimate of the parameters.
I do this in Matlab using:
gamma=(I-A\K)*OLS;
However I then have to calculate:
gamma2=(I-K^2*A^-2)*OLS;
I calculate this in Matlab using:
gamma2=(I+A\K)*(I-A\K)*OLS;
Is this correct?
Also I just want to calculate the variance of the OLS parameters:
The formula is simple enough:
Var(B)=sigma^2*(Delta)^-1;
Where sigma is a constant and Delta is a diagonal matrix containing the eigenvalues.
I tried doing this by:
Var_B=Delta\sigma^2;
But it comes back saying matrix dimensions must agree?
Please can you tell me how to calculate Var(B) in Matlab, as well as confirming whether or not my other calculations are correct.
In general, matrix multiplication does not commute, which makes A^2 - B^2 not equal to (A+B)*(A-B). However, your case is special because one of the terms is the identity matrix, which commutes with everything, so (I+X)*(I-X) = I - X^2 for any X. Here X = A\K, and since K and A are diagonal (and hence commute), (A\K)^2 = K^2*A^-2, so your method for finding gamma2 is valid.
Var_B=Delta\sigma^2 is not a valid use of mldivide: sigma^2 is a scalar while Delta is a matrix, so the dimensions do not agree (see the documentation). Try Var_B=sigma^2*inv(Delta); the function inv returns the matrix inverse. Although inv could also be used in your expressions for gamma or gamma2, the \ operator is recommended there for better accuracy and faster computation.
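A small illustrative sketch with made-up diagonal matrices (the sizes and values here are hypothetical, just to show the two points above):
n = 3;  I = eye(n);  OLS = ones(n, 1);
K = diag([1 2 3]);  A = diag([4 5 6]);
gamma2_a = (I - K^2/A^2) * OLS;             % direct form (I - K^2*A^-2)*OLS
gamma2_b = (I + A\K) * (I - A\K) * OLS;     % factored form
norm(gamma2_a - gamma2_b)                   % ~0, the two agree
sigma = 2;  Delta = diag([7 8 9]);
Var_B = sigma^2 * inv(Delta);               % as suggested above
Var_B_alt = sigma^2 * diag(1./diag(Delta)); % cheaper when Delta is diagonal
norm(Var_B - Var_B_alt)                     % ~0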

How do I draw samples from multivariate gaussian distribution parameterized by precision in matlab

I am wondering how to draw samples in matlab, where I have the precision matrix and the mean as the input arguments.
I know mvnrnd is a typical way to do so, but it requires the covariance matrix (i.e. the inverse of the precision matrix) as the argument.
I only have the precision matrix, and due to computational issues I can't invert it, since that would take too long (its dimension is about 2000*2000).
Good question. Note that you can generate samples from a multivariate normal distribution using samples from the standard normal distribution by way of the procedure described in the relevant Wikipedia article.
Basically, this boils down to evaluating A*z + mu where z is a vector of independent random variables sampled from the standard normal distribution, mu is a vector of means, and A*A' = Sigma is the covariance matrix. Since you have the inverse of the latter quantity, i.e. inv(Sigma), you can probably do a Cholesky decomposition (see chol) to determine the inverse of A. You then need to evaluate A * z. If you only know inv(A) this can still be done without performing a matrix inverse by instead solving a linear system (e.g. via the backslash operator).
The Cholesky decomposition might still be problematic for you, but I hope this helps.
If you want to sample from N(μ, Q^-1) and only Q is available, you can take the Cholesky factorization of Q, L, such that L*L^T = Q. Next take the inverse of L^T, namely L^-T, and sample Z from a standard normal distribution N(0, I).
Considering that L^-T is an upper triangular d-by-d matrix and Z is a d-dimensional column vector,
μ + L^-T*Z will be distributed as N(μ, Q^-1).
If you wish to avoid taking the inverse of L, you can instead solve the triangular system of equations L^T*v = Z by back substitution. μ + v will then be distributed as N(μ, Q^-1).
Some illustrative matlab code:
% make a 2x2 covariance matrix and a mean vector
covm = [3 0.4*(sqrt(3*7)); 0.4*(sqrt(3*7)) 7];
mu = [100; 2];
% Get the precision matrix
Q = inv(covm);
%take the Cholesky decomposition of Q (chol in matlab already returns the upper triangular factor)
L = chol(Q);
%draw 2000 samples from a standard bivariate normal distribution
Z = normrnd(0,1, [2, 2000]);
%solve the system and add the mean
X = repmat(mu, 1, 2000)+L\Z;
%check the result
mean(X')
var(X')
corrcoef(X')
% compare to the sampling from the covariance matrix
Y=mvnrnd(mu,covm, 2000)';
mean(Y')
var(Y')
corrcoef(Y')
scatter(X(1,:), X(2,:),'b')
hold on
scatter(Y(1,:), Y(2,:), 'r')
For more efficiency, I guess you can search for some package that efficiently solves triangular systems.
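One option along those lines that needs no extra package (reusing L, Z and mu from the snippet above): linsolve can be told that L is upper triangular, so it goes straight to back substitution instead of testing the matrix the way a general solver would:
opts.UT = true;                   % declare that L is upper triangular
v = linsolve(L, Z, opts);         % solves L*v = Z by back substitution, column by column
X2 = repmat(mu, 1, 2000) + v;     % same samples as X above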

Obtain null space or single dimensional space which is its best approximation efficiently

I have been doing this using an svd computation
[U, S, V] = svd(A)
wherein I use the last column of V as my null space approximation. Since A gets really large, I realized that this is slowing down my computation.
For null(A), the documentation seems to suggest that it does an SVD anyway. Also, it does not work if A is full rank. An SVD proceeds by finding the largest singular value, then the next one, and so on, whereas I just need the smallest one.
This seems to be a big bottleneck. Will really appreciate help on this.
Am using MATLAB.
Thanks.
This Wikipedia article describes three methods for the numerical computation of the null space: reduction (Gaussian elimination), SVD, and QR decomposition. In brief, (1) reduction is "not suitable for a practical computation of the null space because of numerical accuracy problems in the presence of rounding errors", (2) SVD is the "state-of-the art approach", but it "generally costs about the same as several matrix-matrix multiplications with matrices of the same size", and (3) the numerical stability and the cost of QR decomposition are "between those of the SVD and the reduction approaches".
So if SVD is too slow, you could give QR decomposition a chance. The algorithm, in your notation, is as follows: "A is a 4xN matrix with 4<N. Using the QR factorization of A', we can find matrices such that A'*P = Q*R = [Q1 Q2]*R, where P is a permutation matrix, Q is NxN and R is Nx4. Matrix Q1 is Nx4 and consists of the first 4 columns of Q. Matrix Q2 is Nx(N-4) and is made up of the last N-4 columns of Q. Since A*Q2 = 0, the columns of Q2 span the null space of A."
Matlab implementation: [Q, R, P] = qr(A', 'matrix'); The columns of matrix Q2 = Q(:, 5:end); give the null space of A.
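A small runnable sketch of that recipe (the 4 and 10 below are just illustrative sizes):
m = 4;  n = 10;
A = rand(m, n);                %// full-rank wide matrix
[Q, R, P] = qr(A', 'matrix');  %// A'*P = Q*R
Q2 = Q(:, m+1:end);            %// last n-m columns of Q
norm(A * Q2)                   %// ~0: the columns of Q2 span the null space of A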
This answer builds on your comment that what you actually want to do is to solve Ax = 0. For this purpose, a complete nullspace computation is usually inefficient. If you want a least-squares approximation to x, have a look at the matlab operator \ (see help mldivide).
In other cases, an "economic" SVD via svd(A,0) might be helpful for non-square matrices (it does not compute the full S, but only the non-zero block).
If all points are from a plane, call SVD with just a sample.