Matlab correlation between two matrices

Basically I have two matrices, something like this:
Matrix A (100 rows x 2 features):
Height   Weight
1.48     75
1.55     65
1.60     70
etc...
And Matrix B (same dimensions as Matrix A, but with different values, of course).
I would like to understand whether there is some correlation between Matrix A and Matrix B. What strategy would you suggest?

The concept you are looking for is known as canonical correlation. It is a well-developed piece of theory in the field of multivariate analysis. Essentially, the idea is to find a linear combination of the columns in your first matrix and a linear combination of the columns in your second matrix, such that the correlation between the two linear combinations is maximized.
This can be done manually using eigenvectors and eigenvalues, but if you have the statistics toolbox, then Matlab has already got it packaged and ready to go for you. The function is called canoncorr, and the documentation is here.
A brief example of the usage of this function follows:
%# Set up some example data
CovMat = randi(5, 4, 4) + 20 * eye(4); %# Build a random covariance matrix
CovMat = (1/2) * (CovMat + CovMat'); %# Ensure the random covariance matrix is symmetric
X = mvnrnd(zeros(500, 4), CovMat); %# Simulate data using multivariate Normal
%# Partition the data into two matrices
X1 = X(:, 1:2);
X2 = X(:, 3:4);
%# Find the canonical correlations of the two matrices
[A, B, r] = canoncorr(X1, X2);
The first canonical correlation is the first element of r, and the second canonical correlation is the second element of r.
The canoncorr function also has a lot of other outputs. I'm not sure I'm clever enough to provide a satisfactory yet concise explanation of them here so instead I'm going to be lame and recommend you read up on it in a multivariate analysis textbook - most multivariate analysis textbooks will have a full chapter dedicated to canonical correlations.
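To make a couple of those extra outputs concrete anyway, here is a minimal continuation of the example above (a sketch; U and V here are the canonical variates that canoncorr returns):
%# Request the canonical variates as well (continuing the example above)
[A, B, r, U, V] = canoncorr(X1, X2);
corr(U(:, 1), V(:, 1)) %# should reproduce r(1)
corr(U(:, 2), V(:, 2)) %# should reproduce r(2)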
Finally, if you don't have the statistics toolbox, then a quick google revealed the following FEX submission that claims to provide canonical correlation analysis - note, I haven't tested it myself.

Ok, let's have a short try:
A = [1:20; rand(1,20)]'; % Generate some data...
The best way to examine a 2-dimensional relationship is by looking at the data plots:
plot(A(:,1), A(:,2), 'o') % In the random data you should not see any pattern...
If we really want to compute some correlation coefficients, we can do this with corrcoef, as you mentioned:
B = corrcoef(A)
B =
1.0000 -0.1350
-0.1350 1.0000
Here, B(1,1) is the correlation between column 1 and itself, and B(2,1) is the correlation between column 2 and column 1 (and vice versa, since B is symmetric).
One may argue about the usefulness of such a measure in a two-dimensional context - in my opinion you usually gain more insights by looking at the plots.
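If the goal is specifically the correlation between the columns of the two matrices A and B from the question, one option is corr from the Statistics Toolbox (a sketch with placeholder random data):
A = rand(100, 2); % placeholder for Matrix A (Height, Weight)
B = rand(100, 2); % placeholder for Matrix B
R = corr(A, B)    % R(i,j) is the correlation between A(:,i) and B(:,j)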


Computing the SVD of a rectangular matrix

I have a matrix M of size K x N, where K is 49152 (the dimension of the problem) and N is 52 (the number of observations).
I have tried to use [U,S,V] = svd(M), but doing this I run out of memory.
I found other code which uses [U,S,V] = svd(cov(M)), and it works well. My questions are: what is the meaning of applying svd to cov(M), and what is the meaning of the resulting U, S and V?
Finding the SVD of the covariance matrix is a method to perform Principal Components Analysis, or PCA for short. I won't get into the mathematical details here, but PCA performs what is known as dimensionality reduction. If you would like a more formal treatment of the subject, you can read my post about it here: What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?. Simply put, dimensionality reduction projects your data, stored in the matrix M, onto a lower-dimensional surface with the least amount of projection error. In this matrix, we are assuming that each column is a feature or a dimension and each row is a data point.
I suspect the reason why you use more memory by applying the SVD to the actual data matrix M itself rather than to the covariance matrix is that you have a large number of data points with a small number of features. The covariance matrix finds the covariance between pairs of features. If M is an m x n matrix, where m is the total number of data points and n is the total number of features, doing cov(M) gives you an n x n matrix, so you are applying the SVD to something much smaller than M.
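A quick size check illustrates the point (a sketch using random placeholder data of the stated shape):
M = randn(49152, 52); % placeholder data with the dimensions from the question
size(cov(M))          % ans = [52 52]: covariances between pairs of columns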
As for the meaning of U, S and V: for dimensionality reduction specifically, the columns of V are what are known as the principal components. The ordering of V is such that the first column is the first axis of your data, the one that describes the greatest amount of variability possible. As you move from the second column up to the nth column, you introduce more axes into your data and the variability decreases. Eventually, when you reach the nth column, you are essentially describing your data in its entirety without reducing any dimensions. The diagonal values of S denote what is called the variance explained, and they respect the same ordering as V. As you progress through the singular values, they tell you how much of the variability in your data is described by each corresponding principal component.
To perform the dimensionality reduction, you can either take U and multiply by S, or take your mean-subtracted data and multiply by V. In other words, supposing X is the matrix M where each column has had its mean subtracted, the following relationship holds:
US = XV
To actually perform the final dimensionality reduction, you take either US or XV and retain the first k columns, where k is the total number of dimensions you want to keep. The value of k depends on your application, but many people choose k to be the total number of principal components that explain a certain percentage of the variability in your data.
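A minimal sketch of those steps (assuming M holds one observation per row, and picking k so that, say, 95% of the variance is retained; the threshold is just an example):
Xc = bsxfun(@minus, M, mean(M, 1));        % mean-subtract each column
[U, S, V] = svd(Xc, 'econ');               % economy-size SVD of the centred data
explained = diag(S).^2 / sum(diag(S).^2);  % fraction of variance per component
k = find(cumsum(explained) >= 0.95, 1);    % smallest k retaining 95% of variance
Mreduced = Xc * V(:, 1:k);                 % equivalently U(:, 1:k) * S(1:k, 1:k)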
For more information about the link between SVD and PCA, please see this post on Cross Validated: https://stats.stackexchange.com/q/134282/86678
Instead of [U, S, V] = svd(M), which tries to build a matrix U that is 49152 by 49152 (= 18 GB 😱!), do svd(M, 'econ'). That returns the “economy-class” SVD, where U will be 49152 by 52, S will be 52 by 52, and V will also be 52 by 52.
cov(M) will remove each dimension’s mean and evaluate the inner product, giving you a 52 by 52 covariance matrix. You can implement your own version of cov, called mycov, as
function [C] = mycov(M)
M = bsxfun(@minus, M, mean(M, 1)); % subtract each dimension’s mean over all observations
C = M' * M / size(M, 1);
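% Note: cov normalizes by N-1 by default, whereas this version divides by N,
% so the two differ by a factor of (N-1)/N (negligible for large N).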
(You can verify this works by looking at mycov(randn(49152, 52)), which should be close to eye(52), since each element of that array is IID-Gaussian.)
There are a lot of magical linear-algebraic properties and relationships between the SVD and EVD (i.e., the singular value and eigenvalue decompositions): because the covariance matrix cov(M) is a Hermitian matrix, its left- and right-singular vectors are the same, and are in fact also cov(M)’s eigenvectors. Furthermore, cov(M)’s singular values are also its eigenvalues: so svd(cov(M)) is just an expensive way to get eig(cov(M)) 😂, up to ±1 and reordering.
As @rayryeng explains at length, usually people look at svd(M, 'econ') because they want eig(cov(M)) without forming cov(M) explicitly, since computing cov(M) can be numerically unstable. I recently wrote an answer that showed, in Python, how to compute eig(cov(M)) using svd(M2, 'econ'), where M2 is the zero-mean version of M, used in the practical application of colour-to-grayscale mapping, which might help you get more context.
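To make the SVD/EVD connection concrete, here is a small numerical check (a sketch; the data are just random placeholders):
M  = randn(1000, 5);                      % small placeholder dataset
Xc = bsxfun(@minus, M, mean(M, 1));       % centre each column
s  = svd(Xc);                             % singular values of the centred data
fromSvd = s.^2 / (size(M, 1) - 1);        % squared singular values -> variances
fromEig = sort(eig(cov(M)), 'descend');   % eigenvalues of the covariance matrix
max(abs(fromSvd - fromEig))               % should be around machine precision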

compute SVD using Matlab function

I have a doubt about SVD. In the literature that I have read, it is written that we have to convert our input matrix into a covariance matrix first, and then MATLAB's svd function is used.
But on the MathWorks website, the svd function is applied directly to the input matrix (no need to convert it into a covariance matrix):
[U,S,V]=svd(inImageD);
Which one is correct?
And if we want to do dimensionality reduction, we have to project our data onto the eigenvectors. But where are the eigenvectors generated by the svd function?
I know that S holds the eigenvalues, but what are U and V?
To reduce our data's dimensionality, do we need to subtract the mean from the input matrix and then multiply it by the eigenvectors, or can we just multiply our input matrix by the eigenvectors (without subtracting the mean first)?
EDIT
Suppose I want to do classification using SIFT as the features and an SVM as the classifier.
I have 10 images for training and I arrange them in different rows:
the 1st row for the 1st image, the 2nd row for the 2nd image, and so on...
Feat = [1 2 5 6 7    % >> Image 1
        2 9 0 6 5    % >> Image 2
        3 4 7 8 2    % >> Image 3
        2 3 6 3 1    % >> Image 4
        ...          % and so on
       ]
To do dimensionality reduction (on my 10x5 matrix), we have to do A*EigenVector.
And from what you explained (@Sam Roberts), I can compute it by using the eigs function on the covariance matrix (instead of using the svd function).
And as I arrange the features of the images in different rows, I need to do A'*A.
So it becomes:
Matrix = A'*A
MAT_Cov = cov(Matrix)
[EigVector, EigValue] = eigs(MAT_Cov);
Is that right?
Eigenvector decomposition (EVD) and singular value decomposition (SVD) are closely related.
Let's say you have some data a = rand(3,4);. Note that this is not a square matrix - it represents a dataset of observations (rows) and variables (columns).
Do the following:
[u1,s1,v1] = svd(a);
[u2,s2,v2] = svd(a');
[e1,d1] = eig(a*a');
[e2,d2] = eig(a'*a);
Now note a few things.
Up to the sign (+/-), which is arbitrary, u1 is the same as v2. Up to a sign and an ordering of the columns, they are also equal to e1. (Note that there may be some very very tiny numerical differences as well, due to slight differences in the svd and eig algorithms).
Similarly, u2 is the same as v1 and e2.
The diagonal of s1 equals the diagonal of s2, and apart from some extra columns and rows of zeros and a reordering, both also equal sqrt(d1) and sqrt(d2). Again, there may be some very tiny numerical differences just due to algorithmic issues (they'll be on the order of 1e-10 or so).
Note also that a*a' is basically the covariances of the rows, and a'*a is basically the covariances of the columns (that's not quite true - a would need to be centred first by subtracting the column or row mean for them to be equal, and there might be a multiplicative constant difference as well, but it's basically pretty similar).
Now to answer your questions, I assume that what you're really trying to do is PCA. You can do PCA either by taking the original data matrix and applying SVD, or by taking its covariance matrix and applying EVD. Note that Statistics Toolbox has two functions for PCA - pca (in older versions princomp) and pcacov.
Both do essentially the same thing, but from different starting points, because of the above equivalences between SVD and EVD.
Strictly speaking, u1, v1, u2 and v2 above are not eigenvectors, they are singular vectors - and s1 and s2 are singular values. They are singular vectors/values of the matrix a. e1 and d1 are the eigenvectors and eigenvalues of a*a' (not a), and e2 and d2 are the eigenvectors and eigenvalues of a'*a (not a). a does not have any eigenvectors - only square matrices have eigenvectors.
Centring by subtracting the mean is a separate issue - you would typically do that prior to PCA, but there are situations where you wouldn't want to. You might also want to normalise by dividing by the standard deviation but again, you wouldn't always want to - it depends what the data represents and what question you're trying to answer.
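A quick way to see that equivalence in practice (a sketch; columns may differ in sign, and eig needs an explicit sort):
a = rand(100, 4);                    % example data: observations x variables
coeff = pca(a);                      % PCA loadings (pca centres the data itself)
[E, D] = eig(cov(a));                % eigenvectors of the covariance matrix
[~, ind] = sort(diag(D), 'descend'); % reorder to match pca's convention
E = E(:, ind);
abs(coeff) - abs(E)                  % close to zero: same vectors up to sign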

Principal Components calculated using different functions in Matlab

I am trying to understand principal component analysis in Matlab.
There seem to be at least 3 different functions that do it.
I have some questions regarding the code below:
1. Am I correctly creating approximate x values using only one eigenvector (the one corresponding to the largest eigenvalue)? I think so?
2. Why are PC and V, which are both meant to be the loadings for (x'x), presented differently? The column order is reversed because eig does not order the eigenvalues with the largest value first, but why are they the negative of each other?
3. Why are the eigenvalues not ordered with the eigenvector corresponding to the largest eigenvalue in the first column?
4. Using the code below I get back to the input matrix x when using svd and eig, but the results from princomp seem to be totally different. What do I have to do to make princomp match the other two functions?
Code:
x=[1 2;3 4;5 6;7 8 ]
econFlag=0;
[U,sigma,V] = svd(x,econFlag);%[U,sigma,coeff] = svd(z,econFlag);
U1=U(:,1);
V1=V(:,1);
sigma_partial=sigma(1,1);
score1=U*sigma;
test1=score1*V';
score_partial=U1*sigma_partial;
test1_partial=score_partial*V1';
[PC, D] = eig(x'*x)
score2=x*PC;
test2=score2*PC';
PC1=PC(:,2);
score2_partial=x*PC1;
test2_partial=score2_partial*PC1';
[o1 o2 o3]=princomp(x);
1. Yes. According to the documentation of svd, the diagonal elements of the output S are in decreasing order. There is no such guarantee for the output D of eig, though.
2. Eigenvectors and singular vectors have no defined sign. If a is an eigenvector, so is -a.
3. I've often wondered the same. Laziness on the part of TMW? Optimization, because sorting would be an additional step and not everybody needs 'em sorted?
4. princomp centers the input data before computing the principal components. This makes sense, as normally the PCA is computed with respect to the covariance matrix, and the eigenvectors of x' * x are only identical to those of the covariance matrix if x is mean-free.
I would compute the PCA by transforming to the basis of the eigenvectors of the covariance matrix (computed from centered data), but apply this transform to the original (uncentered) data. This makes it possible to capture a maximum of variance with as few principal components as possible, while still being able to recover the original data from all of them:
[V, D] = eig(cov(x));
score = x * V;
test = score * V';
test is identical to x, up to numerical error.
In order to easily pick the components with the most variance, let's fix that lack of sorting ourselves:
[V, D] = eig(cov(x));
[D, ind] = sort(diag(D), 'descend');
V = V(:, ind);
score = x * V;
test = score * V';
Reconstruct the signal using the strongest principal component only:
test_partial = score(:, 1) * V(:, 1)';
In response to Amro's comments: It is of course also possible to first remove the means from the input data, and transform these "centered" data. In that case, for perfect reconstruction of the original data it would be necessary to add the means again. The way to compute the PCA given above is the one described by Neil H. Timm, Applied Multivariate Analysis, Springer 2002, page 446:
Given an observation vector Y with mean mu and covariance matrix Sigma of full rank p, the goal of PCA is to create a new set of variables called principal components (PCs) or principal variates. The principal components are linear combinations of the variables of the vector Y that are uncorrelated such that the variance of the jth component is maximal.
Timm later defines "standardized components" as those which have been computed from centered data and are then divided by the square roots of the eigenvalues (i.e. the variances), so that "standardized principal components" have mean 0 and variance 1.
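A short sketch of those standardized components (using fresh random data here, since the covariance of the small x above is rank-deficient and the division would blow up):
y = randn(100, 3);                           % illustrative data, full-rank covariance
yc = bsxfun(@minus, y, mean(y, 1));          % centred data
[V, D] = eig(cov(y));
[lambda, ind] = sort(diag(D), 'descend');    % eigenvalues, largest first
V = V(:, ind);
Z = yc * V;                                  % principal component scores (mean 0)
Zstd = bsxfun(@rdivide, Z, sqrt(lambda)');   % standardized components (variance 1)
var(Zstd)                                    % each entry close to 1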

Ornstein Uhlenbeck with correlated paths

I'm trying to do a Monte Carlo simulation (MCS) on a spread option, where one leg of the spread option is dependent on four underlyings. So I found the code at Goddard Consulting, http://www.goddardconsulting.ca/matlab-monte-carlo-assetpaths-corr.html,
and modified it to handle more than two assets, because it didn't work with more than two.
My code looks like this:
% do a Cholesky factorization on the correlation matrix
R = chol(corr);
% pre-allocate the output
S = nan(steps+1,nsims,nAssets);
% generate uncorrelated random sequence
x = randn(steps,size(corr,2));
% correlate the sequences
ep = x*R;
% generate correlated random sequences and paths
for idx = 1:nsims
    A = repmat(drift*dt, steps, 1);
    B = ep*diag(sig)*sqrt(dt);
    [s1, s2] = size(B);
    A = reshape(A, s1, s2); % A and B have to be reshaped to fit
    % Generate potential paths
    S(:,idx,:) = [ones(1,nAssets); cumprod(exp(A + B))]*diag(S0);
end
% If only one simulation then remove the unitary dimension
if nsims==1
    S = squeeze(S);
end
and plotting it all I get a visually useful result. That is, I can use the generated paths as input to the "leg" I need for the option calculation.
My quite ridiculous problem is: I don't actually understand what
S(:,idx,:) = [ones(1,nAssets); cumprod(exp(A + B)) ]*diag(S0);
is doing. Could someone explain? I have iterated through it but it doesn't get any clearer. S0 contains the four end values of my four underlyings that I use as start values for the simulations.
What I actually would have preferred to do, as it worked well with non-correlated paths and I understand the Matlab code, is to simply create a model with HWV in Matlab, calculate the drift, long-term mean and other needed factors, apply the Cholesky factor to the Wiener process in some way, and then generate correlated paths instead of the uncorrelated paths from before. I would need to pass the random numbers, multiplied by the Cholesky factor in order to get them correlated, to the Matlab function "simulate", but I can't find out how to do that.
%%% EDIT SEPTEMBER 26 %%%
Ok, I finally managed to create an HWV object for my 4 assets, passing all kinds of vectors and arrays and matrices of the right sizes...
This is the correlation matrix I passed:
(order NATGAS GASOIL FUELOIL EURUSD)
1.0000 0.8819 0.8118 -0.5096
0.8819 1.0000 0.9744 -0.3065
0.8118 0.9744 1.0000 -0.2832
-0.5096 -0.3065 -0.2832 1.0000
As you can see, the EURUSD FX rate is obviously negatively correlated with the hydrocarbons.
So why are the simulated asset paths practically identical?
The EURUSD path should look completely different, as it does when I do the correlated GBM using the code above.

Multivariate Random Number Generation in Matlab

I'm probably being a little dense but I'm not very mathsy and can't seem to understand the covariance element of creating multivariate data.
I'm after two columns of random data (representing two correlated variables).
I think I am right in needing to use the mvnrnd function, and I understand that 'mu' must be a vector of my means. As I need 4 distinct classes within my data, these are going to be (1, 1), (-1, 1), (1, -1) and (-1, -1). I assume I will have to call the function four times with a different mean vector each time and then combine the results to get my full data set.
I don't understand what I should put for SIGMA - Matlab help tells me that it must be 'a d-by-d symmetric positive semi-definite matrix, or a d-by-d-by-n array' i.e. a covariance matrix. I don't understand how I create a covariance matrix for numbers that I am yet to generate.
Any advice would be greatly appreciated!
Assuming that I understood your case properly, I would go this way:
data = [normrnd(0,1,5000,1),normrnd(0,1,5000,1)]; %% your starting data series
MU = mean(data,1);
SIGMA = cov(data);
Now, it should be possible to feed mvnrnd with MU and SIGMA:
r = mvnrnd(MU,SIGMA,5000);
plot(r(:,1),r(:,2),'+') %% in case you wanna plot the results
I hope this helps.
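For the four-class setup described in the question, a possible sketch (the common covariance 0.2*eye(2) and 250 points per class are just assumptions for illustration):
mu = [1 1; -1 1; 1 -1; -1 -1];           % the four class means
SIGMA = 0.2 * eye(2);                    % assumed covariance, shared by all classes
nPerClass = 250;
data = zeros(4 * nPerClass, 2);
labels = zeros(4 * nPerClass, 1);
for c = 1:4
    rows = (c - 1) * nPerClass + (1:nPerClass);
    data(rows, :) = mvnrnd(mu(c, :), SIGMA, nPerClass);
    labels(rows) = c;
end
gscatter(data(:, 1), data(:, 2), labels) % one colour per class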
I think your aim is to generate simulated multivariate Gaussian-distributed data. For example, I use
k = 6; % feature dimension
mu = rand(1,k);
sigma = 10*eye(k,k);
The identity matrix scaled by 10 is a symmetric positive semi-definite matrix, and the resulting Gaussian distribution will be more spherical (round) than with other choices of sigma.
Then you can use these with the mvnrnd function, as in the example above, and look at the plot.
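A sketch of that last step (1000 samples chosen arbitrarily):
r = mvnrnd(mu, sigma, 1000); % 1000 draws from N(mu, sigma), one per row
plot(r(:, 1), r(:, 2), '+')  % scatter of the first two of the k dimensions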