I would like to know how to calculate Minimum Description Length (MDL) to evaluate the clustering result.
I was looking at some papers on clustering algorithms, and one of them refers to MDL as a measure to check whether the clusters given by K-means follow a Gaussian distribution.
According to that paper, MDL is given by:
MDL(K) = -log[p_y(y/K)] + 1/2 * L * log(n)
L = K(1 + n + (n + 1)n / 2) - 1
where K is the number of clusters, n is the total number of data values, and y is an n-dimensional vector.
I am aware that the above explanation might be insufficient to answer this question, but the above is all the information I have now, and I have no idea how to reproduce the calculation introduced by the paper.
I would appreciate explanations on how to calculate MDL to evaluate clustering results.
MDL calculations always require some assumptions about how to encode the data. That is where MDL papers often go wrong: they compare their new encoding against a poor baseline encoding in order to report massive gains... Anyway, this value may be legitimate, but without context and proper definitions it is hard to tell.
When you approximate data with k-means, you have to store:
k itself
log k bits for each of n points to map points to centers
k vectors of d dimensions
the deviation of each point from its cluster center. If you assume small deviations are more frequent (Gaussian), you can use fewer bits for small deviations and more bits for large ones; a rough sketch of such a two-part cost is given below.
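For illustration, here is a minimal sketch of a two-part (model + data) code length for a k-means result under an isotropic Gaussian assumption. The variable names (X, K) and the per-parameter cost of (1/2)*log2(n) are assumptions, not the exact formula from the paper:
[n, d] = size(X);                               % X: n points in d dimensions (assumed)
K = 3;                                          % number of clusters to evaluate
[idx, C] = kmeans(X, K);                        % needs the Statistics and Machine Learning Toolbox
resid = X - C(idx, :);                          % deviation of each point from its center
sigma2 = sum(resid(:).^2) / (n * d);            % pooled variance of the deviations
data_bits  = 0.5 * n * d * log2(2 * pi * exp(1) * sigma2);  % -log2 likelihood under the Gaussian
model_bits = n * log2(K) + 0.5 * K * d * log2(n);           % assignments plus the K centers
MDL_bits   = data_bits + model_bits;            % smaller is better when comparing different K
Computing MDL_bits for several values of K and taking the smallest gives a crude model-selection criterion in the spirit of the formula above.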
I have a feature matrix of size [4096 x 180], where 180 is the number of samples and 4096 is the length of each sample's feature vector.
I want to reduce the dimensionality of the data using PCA.
I tried using the built-in pca function of MATLAB, [V U] = pca(X), and reconstructed the data with X_rec = U(:, 1:n)*V(:, 1:n)', where n is the dimension I chose. This returns a 4096 x 180 matrix.
Now I have 3 questions:
How to obtain the reduced dimension?
When I put n as 200, it gave an error about matrix dimensions, which led me to assume that the reduced dimension cannot exceed the number of samples. Is this true?
How to find the right number of reduced dimensions?
I have to use the reduced dimension feature set for further classification.
If anyone can provide a detailed step-by-step explanation of the PCA code for this, I would be grateful. I have looked in many places but my confusion still persists.
You may want to refer to the MATLAB example that analyses the city data.
Here is some simplified code:
load cities;
[~, pca_scores, ~, ~, var_explained] = pca(ratings);
Here, pca_scores are the PCA component scores, with the variance explained by each component in var_explained. You do not need to do any explicit multiplication after running pca; MATLAB gives you the components directly.
In your case, suppose the data X is a 4096-by-180 matrix whose rows are observations and whose columns are features (MATLAB's pca treats rows as observations and columns as variables). Your goal is to reduce the dimensionality so that you have p features, where p < 180. In MATLAB, you can simply run the following:
p = 100;
[~, pca_scores, ~, ~, var_explained] = pca(X, 'NumComponents', p);
pca_scores will be a 4096-by-p matrix and var_explained will be a vector of length p.
To answer your questions:
How to obtain the reduced dimension? In the above example, pca_scores is your reduced-dimension data.
When I put n as 200, it gave an error, which led me to assume that the reduced dimension cannot exceed the number of samples. Is this true? You can't use 200, since the reduced dimensions have to be less than 180.
How to find the right number of reduced dimensions? You can make this decision by checking the var_explained vector. Typically you want to retain about 99% of the variance of the features (see the sketch below). You can read more about this here.
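For example, a minimal sketch of picking p from var_explained with the 99% rule of thumb (the threshold and variable names are assumptions):
[~, pca_scores, ~, ~, var_explained] = pca(X);
cum_var = cumsum(var_explained);            % cumulative percentage of variance explained
p = find(cum_var >= 99, 1);                 % smallest p that retains ~99% of the variance
X_reduced = pca_scores(:, 1:p);             % reduced-dimension data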
I have to use SVD in Matlab to obtain a reduced version of my data.
I've read that the function svds(X,k) performs the SVD and returns the largest k singular values and the corresponding singular vectors. There is no mention in the documentation of whether the data have to be normalized.
By normalization I mean both subtraction of the mean value and division by the standard deviation.
When I implemented PCA myself, I used to normalize in this way. But I know that it is not needed when using the MATLAB function pca(), because it computes the covariance matrix using cov(), which implicitly removes the mean.
So, the question is: I need the projection matrix to reduce my n-dimensional data to k dimensions by SVD. Should I perform normalization of the training data (and therefore apply the same normalization to any new data projected later) or not?
Thanks
Essentially, the answer is yes, you should typically perform normalization. The reason is that features can have very different scalings, and we typically do not want to take scaling into account when considering the uniqueness of features.
Suppose we have two features x and y, both with variance 1, but where x has a mean of 1 and y has a mean of 1000. Then the matrix of samples will look like
n = 500; % samples
x = 1 + randn(n,1);
y = 1000 + randn(n,1);
svd([x,y])
But the problem with this is that the scale of y (without normalizing) essentially washes out the small variations in x. Specifically, if we just examine the singular values of [x,y], we might be inclined to say that x is a linear factor of y (since one of the singular values is much smaller than the other). But actually, we know that that is not the case since x was generated independently.
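Here is a minimal sketch of that comparison, reusing the snippet above; the centered version is added for illustration:
n = 500;
x = 1 + randn(n,1);
y = 1000 + randn(n,1);
s_raw = svd([x, y])                            % dominated by the large mean of y
s_centered = svd([x - mean(x), y - mean(y)])   % two singular values of comparable size
After centering, the two singular values are of comparable size, reflecting the fact that x and y really are independent directions.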
In fact, you will often find that you only see the "real" structure in a signal once we remove the mean. At the extreme end, you could imagine that we have some feature
z = 1e6 + sin(t)
Now if somebody just gave you those numbers, you might look at the sequence
z = 1000001.54, 1000001.2, 1000001.4,...
and just think, "that signal is boring, it is basically just 1e6 plus some round-off terms...". But once we remove the mean, we see the signal for what it actually is... a very interesting and specific one indeed. So, long story short, you should always remove the mean and scale.
It really depends on what you want to do with your data. Centering and scaling can be helpful to obtain principal components that are representative of the shape of the variations in the data, irrespective of scaling. I would say it is mostly needed if you want to further use the principal components themselves, particularly if you want to visualize them. It can also help during classification, since your scores will then be normalized, which may help your classifier. However, it depends on the application: in some applications the energy also carries useful information that one should not discard - there is no general answer!
Now you write that all you need is "the projection matrix useful to reduce my n-dim data to k-dim ones by SVD". In this case, no need to center or scale anything:
[U, ~] = svd(TrainingData);
ReducedData = U(:, 1:k)' * TestData;   % project onto the first k left singular vectors
will do the job. svds may be worth considering when your TrainingData is huge (in both dimensions) so that svd is too slow (if it is huge in only one dimension, just apply svd to the Gram matrix, as sketched below).
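A minimal sketch of the Gram-matrix route (assuming TrainingData is d-by-n with d huge and n small, and that the singular values are nonzero; the names are assumptions):
G = TrainingData' * TrainingData;        % n-by-n Gram matrix, cheap to form and decompose
[V, D] = eig(G);
[lam, order] = sort(diag(D), 'descend'); % sort eigenvalues in decreasing order
V = V(:, order);
s = sqrt(max(lam, 0));                   % singular values of TrainingData
U = TrainingData * V * diag(1 ./ s);     % left singular vectors, d-by-n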
It depends!!!
A common use in signal processing where it makes no sense to normalize is noise reduction via dimensionality reduction of correlated signals, where all the features are contaminated with random Gaussian noise of the same variance. In that case, if the magnitude of a certain feature is twice as large, its SNR is also approximately twice as large, so normalizing the features makes no sense: it would just make the parts with the worse SNR larger and the parts with the good SNR smaller. You also don't need to subtract the mean in that case (as you would in PCA); the mean (or DC component) isn't different from any other frequency.
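For illustration, a minimal sketch of that noise-reduction use case (X and k are assumed names): keep only the leading singular components of the raw, un-normalized signal matrix and reconstruct.
[U, S, V] = svd(X, 'econ');                           % X: noisy signals, no centering or scaling
k = 3;                                                % assumed number of components to keep
X_denoised = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';    % rank-k approximation of X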
I have a matrix M of size K x N, where K is 49152 (the dimension of the problem) and N is 52 (the number of observations).
I have tried to use [U,S,V] = svd(M), but doing this I run out of memory.
I found other code which uses [U,S,V] = svd(cov(M)) and it works well. My questions are: what is the meaning of using the cov(M) command inside the SVD, and what is the meaning of the resulting [U,S,V]?
Finding the SVD of the covariance matrix is a method to perform Principal Components Analysis, or PCA for short. I won't get into the mathematical details here, but PCA performs what is known as dimensionality reduction. If you would like a more formal treatment of the subject, you can read up on my post about it here: What does selecting the largest eigenvalues and eigenvectors in the covariance matrix mean in data analysis?.
Simply put, dimensionality reduction projects your data stored in the matrix M onto a lower-dimensional surface with the least amount of projection error. In this matrix, we are assuming that each column is a feature or a dimension and each row is a data point.
I suspect the reason why more memory is occupied by applying the SVD to the actual data matrix M itself rather than to the covariance matrix is that you have a large number of data points and a small number of features. The covariance matrix finds the covariance between pairs of features. If M is an m x n matrix where m is the total number of data points and n is the total number of features, doing cov(M) actually gives you an n x n matrix, so you are applying SVD to a small amount of memory in comparison to M.
As for the meaning of U, S and V, for dimensionality reduction specifically, the columns of V are what are known as the principal components. The ordering of V is such that the first column is the first axis of your data that describes the greatest amount of variability possible. As you move from the second column up to the nth column, you introduce more axes into your data and the variability described by each decreases. Eventually, when you reach the nth column, you are essentially describing your data in its entirety without reducing any dimensions. The diagonal values of S denote what is called the variance explained, which respects the same ordering as V. As you progress through the singular values, they tell you how much of the variability in your data is described by each corresponding principal component.
To perform the dimensionality reduction, you can either take U and multiply by S, or take your mean-subtracted data and multiply by V. In other words, supposing X is the matrix M where each column's mean has been computed and then subtracted from that column, the following relationship holds:
US = XV
To actually perform the final dimensionality reduction, you take either US or XV and retain the first k columns, where k is the total number of dimensions you want to retain. The value of k depends on your application, but many people choose k to be the total number of principal components that explain a certain percentage of the variability in your data.
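As a minimal sketch of those steps (the value of k here is an arbitrary assumption):
Xc = bsxfun(@minus, M, mean(M, 1));   % subtract each column's mean
[U, S, V] = svd(cov(Xc));             % columns of V are the principal components
k = 10;                               % assumed number of dimensions to keep
M_reduced = Xc * V(:, 1:k);           % project onto the first k principal components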
For more information about the link between SVD and PCA, please see this post on Cross Validated: https://stats.stackexchange.com/q/134282/86678
Instead of [U, S, V] = svd(M), which tries to build a matrix U that is 49152 by 49152 (= 18 GB!), do svd(M, 'econ'). That returns the "economy-class" SVD, where U will be 49152 by 52, S is 52 by 52, and V is also 52 by 52.
cov(M) will remove each dimension's mean and evaluate the inner product, giving you a 52 by 52 covariance matrix. You can implement your own version of cov, called mycov, as
function [C] = mycov(M)
M = bsxfun(@minus, M, mean(M, 1)); % subtract each dimension's mean over all observations
C = M' * M / (size(M, 1) - 1);     % normalize by N-1, as MATLAB's cov does by default
end
(You can verify this works by looking at mycov(randn(49152, 52)), which should be close to eye(52), since each element of that array is IID-Gaussian.)
There are a lot of magical linear-algebraic properties and relationships between the SVD and EVD (i.e., the singular value and eigenvalue decompositions): because the covariance matrix cov(M) is symmetric and positive semi-definite, its left- and right-singular vectors are the same, and are in fact also cov(M)'s eigenvectors. Furthermore, cov(M)'s singular values are also its eigenvalues: so svd(cov(M)) is just an expensive way to get eig(cov(M)), up to sign and reordering.
As @rayryeng explains at length, usually people use svd(M, 'econ') because they want eig(cov(M)) without needing to evaluate cov(M) - you never want to compute cov(M) explicitly, because it is numerically unstable. I recently wrote an answer that showed, in Python, how to compute eig(cov(M)) using svd(M2, 'econ'), where M2 is the zero-mean version of M, used in the practical application of color-to-grayscale mapping, which might give you more context.
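For concreteness, a rough sketch of that relationship (M2 is the zero-mean version of M mentioned above; the N-1 normalization matches MATLAB's cov):
M2 = bsxfun(@minus, M, mean(M, 1));        % zero-mean version of M
[~, S, V] = svd(M2, 'econ');               % V: principal directions of M
eigvals = diag(S).^2 / (size(M, 1) - 1);   % equals eig(cov(M)), sorted in decreasing order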
If we have a matrix with 6 rows and 10 columns, how do we determine the k value? If we assume the default k value is 5, and we have fewer than 5 columns with the same 6 rows, can we assume that k should be based on the number of columns? Is that correct? i.e. rows = 6, cols = 4, then k = cols - 1 => k = 3.
k = n^(1/2)
where n is the number of instances, not features. (reference 1, reference 2)
Check this question: value of k in k nearest neighbour algorithm.
Same as the previous one. Usually, the rule of thumb is the square root of the number of features:
k = n^(1/2)
where n is the number of features. In your case, the square root of 10 is approximately 3, so the answer should be 3.
k = sqrt(n) does not give optimal results on every dataset; on some datasets its result is quite awful. For example, one paper from the 90s (paper link) says the best value of k is between 5 and 10, but sqrt(n) gives us 17. Other papers make interesting suggestions such as a local k value or a weighted k. Obviously, choosing k is not easy: there is no simple formula, and it depends on your dataset. The best way to choose the optimal k is to measure which k gives the best accuracy on your dataset. Generally, as the dataset gets bigger, the optimal k value also increases.
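For example, a minimal sketch of picking k by cross-validated accuracy in MATLAB (assuming the Statistics and Machine Learning Toolbox; X and Y are hypothetical feature and label variables):
kvals = 1:2:25;                                        % candidate k values
cvloss = zeros(size(kvals));
for i = 1:numel(kvals)
    mdl = fitcknn(X, Y, 'NumNeighbors', kvals(i));
    cvloss(i) = kfoldLoss(crossval(mdl, 'KFold', 5));  % 5-fold cross-validation error
end
[~, best] = min(cvloss);
best_k = kvals(best)                                   % k with the lowest CV error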
Basically I have two matrices, something like this:
Matrix A (100 rows x 2 features):
Height - Weight
1.48 75
1.55 65
1.60 70
etc...
And Matrix B (same dimensions as Matrix A, but with different values of course).
I would like to understand whether there is some correlation between Matrix A and Matrix B. Which strategy would you suggest?
The concept you are looking for is known as canonical correlation. It is a well developed bit of theory in the field of multivariate analysis. Essentially, the idea is to find a linear combination of the columns in your first matrix and a linear combination of the columns in your second matrix, such that the correlation between the two linear combinations is maximized.
This can be done manually using eigenvectors and eigenvalues, but if you have the statistics toolbox, then Matlab has already got it packaged and ready to go for you. The function is called canoncorr, and the documentation is here
A brief example of the usage of this function follows:
%# Set up some example data
CovMat = randi(5, 4, 4) + 20 * eye(4); %# Build a random covariance matrix
CovMat = (1/2) * (CovMat + CovMat'); %# Ensure the random covariance matrix is symmetric
X = mvnrnd(zeros(500, 4), CovMat); %# Simulate data using multivariate Normal
%# Partition the data into two matrices
X1 = X(:, 1:2);
X2 = X(:, 3:4);
%# Find the canonical correlations of the two matrices
[A, B, r] = canoncorr(X1, X2);
The first canonical correlation is the first element of r, and the second canonical correlation is the second element of r.
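As a sanity check, here is a minimal sketch of the manual eigenvector/eigenvalue route mentioned earlier, reusing X1 and X2 from the example above:
X1c = bsxfun(@minus, X1, mean(X1));            %# center both matrices
X2c = bsxfun(@minus, X2, mean(X2));
S11 = cov(X1); S22 = cov(X2);                  %# within-set covariances
S12 = (X1c' * X2c) / (size(X1c, 1) - 1);       %# between-set covariance
P = (S11 \ S12) * (S22 \ S12');                %# inv(S11)*S12*inv(S22)*S21
r_manual = sqrt(sort(real(eig(P)), 'descend')) %# should match r from canoncorr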
The canoncorr function also has a lot of other outputs. I'm not sure I'm clever enough to provide a satisfactory yet concise explanation of them here so instead I'm going to be lame and recommend you read up on it in a multivariate analysis textbook - most multivariate analysis textbooks will have a full chapter dedicated to canonical correlations.
Finally, if you don't have the statistics toolbox, then a quick google revealed the following FEX submission that claims to provide canonical correlation analysis - note, I haven't tested it myself.
Ok, let's have a short try:
A = [1:20; rand(1,20)]'; % Generate some data...
The best way to examine a 2-dimensional relationship is by looking at the data plots:
plot(A(:,1), A(:,2), 'o') % In the random data you should not see any pattern...
If we really want to compute some correlation coefficients, we can do this with corrcoef, as you mentioned:
B = corrcoef(A)
B =
1.0000 -0.1350
-0.1350 1.0000
Here, B(1,1) is the correlation between column 1 and column 1, B(2,1) between column 1 and column 2 (and vice versa, thus B is symmetric).
One may argue about the usefulness of such a measure in a two-dimensional context - in my opinion you usually gain more insights by looking at the plots.