How can one make seaborn clustermap cluster rows and cols jointly?

How can one constrain the row and column clustering to be the same (e.g. for finding groups in a pairwise matrix)?
In the docs you can turn row or column clustering on/off, but they are independent of each other.

It seems like pre-computing the linkage and then feeding it into both the rows and columns works. Since my D matrix is symmetric, the linkage will be identical for rows and columns.
This can be accomplished with the following code:
import seaborn
from scipy.cluster.hierarchy import linkage

link = linkage(D)  # D being the measurement matrix
seaborn.clustermap(D, row_linkage=link, col_linkage=link)

Maybe you can first calculate the correlation matrix of D and then plot the clustermap:
corr_df = D.corr()
seaborn.clustermap(corr_df)  # as_matrix() is deprecated; clustermap accepts the DataFrame directly

Related

Difference in Matlab results when using PCA() and PCACOV()

Closest match I can get is to run:
data=rand(100,10); % data set
[W,pc] = pca(cov(data));
Then, without demeaning:
data2 = data;
[W2, EvalueMatrix2] = eig(cov(data2));
[W3, EvalueMatrix3] = svd(cov(data2));
In this case W2 and W3 agree, but W is the transpose of them.
It's still not clear to me why W should be the transpose of the other two.
As an extra check I use pcacov:
[W4, EvalueMatrix4] = pcacov(cov(data2));
Again it agrees with W2 and W3 but is the transpose of W.
The results are different because you're subtracting the mean of each row of the data matrix. Based on the way you're computing things, rows of the data matrix correspond to data points and columns correspond to dimensions (this is how the pca() function works too). With this setup, you should subtract the mean from each column, not row. This corresponds to 'centering' the data; the mean along each dimension is set to zero. Once you do this, results should be equivalent to pca(), up to sign flips.
Edit to address edited question:
Centering issue looks ok now. When you run the eigenvalue decomposition on the covariance matrix, remember to sort the eigenvectors in order of descending eigenvalues. This should match the output of pcacov(). When calling pca(), you have to pass it the data matrix, not the covariance matrix.
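As a sanity check, here is a minimal sketch (my own, not from the thread) of how the three routines line up once each one is given what it expects:

data = rand(100, 10);             % rows = observations, columns = dimensions
C = cov(data);                    % cov() centers each column internally

W1 = pca(data);                   % pca() wants the raw data matrix and centers it itself
[V, D] = eig(C);                  % eigenvectors come back in no particular order
[~, order] = sort(diag(D), 'descend');
W2 = V(:, order);                 % sort by descending eigenvalue to match pca()
W3 = pcacov(C);                   % pcacov() wants the covariance matrix

max(max(abs(abs(W1) - abs(W2))))  % ~0: columns agree up to sign flips
max(max(abs(abs(W1) - abs(W3))))  % ~0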

Kullback Leibler Divergence of 2 Histograms in MatLab

I would like a function to calculate the KL distance between two histograms in MatLab. I tried this code:
http://www.mathworks.com/matlabcentral/fileexchange/13089-kldiv
However, it says that I should have two distributions P and Q of sizes n x nbins. However, I am having trouble understanding how the author of the package wants me to arrange the histograms. I thought that providing the discretized values of the random variable together with the number of bins would suffice (I would assume the algorithm would use an arbitrary support to evaluate the expectations).
Any help is appreciated.
Thanks.
The function you link to requires that the two histograms passed be aligned and thus have the same size, NBIN x N (not N x NBIN); that is, if N > 1 then the number of rows in the inputs should equal the number of bins in the histograms. If you are just going to compare two histograms (that is, if N = 1) it doesn't really matter: you can pass either row or column vector versions of these, as long as you are consistent and the order of bins matches.
A generic call to the function looks like this:
dists = kldiv(bins,P,Q)
The implementation allows comparison of multiple histograms to each other (that is, N>1), in which case pairs of columns (with matching column index) in each array are compared and the result is a row vector with distances for each matching pair.
Array bins should be the same size as P and Q and is used to perform a very minimal check that the inputs are of the same size, but is not used in the computation. The routine expects bins to contain the numeric labels of your bins so that it can check for repeated bin labels and warn you if repeats occur, but otherwise doesn't use the information.
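Going by the signature and behavior described above, a hypothetical call for a single pair of histograms (N = 1) might look like this (the data here is made up):

bins = (1:10).';                  % numeric bin labels, NBIN x 1
P = rand(10, 1); P = P / sum(P);  % two normalized histograms over the same bins
Q = rand(10, 1); Q = Q / sum(Q);
dist = kldiv(bins, P, Q);         % scalar KL distance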
You could do away with bins and compute the distance with
KL = sum(P .* (log2(P)-log2(Q)));
without using the Matlab Central versions. However, the version you link to performs the above-mentioned minimal checks and in addition allows computation of two alternative distances (consult the documentation).
The version linked to by eigenchris checks that no histogram bins are empty (which would make the computation blow up numerically) and, if there are, removes their contribution to the sum (I'm not sure this is entirely appropriate - consult an expert on the subject). You should probably also be aware of the exact form of the formula being used; specifically, note the use of log2 above versus the natural logarithm in the version linked to by eigenchris.
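If you do roll your own, a minimal sketch (mine, not the File Exchange code) with normalization and a guard against empty bins might look like:

counts1 = [5 10 20 10 5];              % hypothetical histogram counts
counts2 = [8 12 15 12 3];
P = counts1 / sum(counts1);            % KL assumes proper probability distributions
Q = counts2 / sum(counts2);
keep = (P > 0) & (Q > 0);              % drop bins that would produce log(0)
KL = sum(P(keep) .* (log2(P(keep)) - log2(Q(keep))))   % in bits; use log() for nats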

Remove duplicates in correlations in matlab

Please see the following issue:
P = rand(4,4);
for i = 1:size(P,2)
    for j = 1:size(P,2)
        [r, p] = corr(P(:,i), P(:,j))
    end
end
Clearly, the loop causes each correlation to be computed twice (e.g., corr(P(:,1),P(:,4)) and corr(P(:,4),P(:,1)) are both evaluated). Does anyone have a suggestion on how to avoid this? Perhaps not using a loop?
Thanks!
I have four suggestions for you, depending on what exactly you are doing to compute your matrices. I'm assuming the example you gave is a simplified version of what needs to be done.
First Method - Adjusting the inner loop index
One thing you can do is change your j loop index so that it only goes from 1 up to i. This way, you get a lower triangular matrix and just concentrate on the values within the lower triangular half of your matrix. The upper half would essentially be all set to zero. In other words:
for i = 1 : size(P,2)
    for j = 1 : i
        %// Your code here
    end
end
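For concreteness, here is one way the loop body might store each unique pair (a sketch; the variable names R and PV are mine, not from the original code):

n = size(P, 2);
R = zeros(n);                          % correlation coefficients
PV = zeros(n);                         % p-values
for i = 1 : n
    for j = 1 : i
        [R(i, j), PV(i, j)] = corr(P(:, i), P(:, j));
    end
end
% R and PV are lower triangular; every pair appears exactly once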
Second Method - Leave it unchanged, but then use unique
You can go ahead and use the same code as you did before, with the two full for loops, but you can then filter out the duplicates by using unique. In other words, you can do this:
[Y,indices] = unique(P);
Y will give you a list of unique values within the matrix P and indices will give you the locations of where these occurred within P. Note that these are column major indices, and so if you wanted to find the row and column locations of where these locations occur, you can do:
[rows,cols] = ind2sub(size(P), indices);
Third Method - Use pdist and squareform
Since you're looking for a solution that requires no loops, take a look at the pdist function. Given an M x N matrix, pdist finds distances between each pair of rows; since we want correlations between columns here, the matrix is transposed first. squareform will then transform these distances into a matrix like the one you have seen above. In other words, do this:
dists = pdist(P.', 'correlation');  % transpose so each column of P becomes a row
distMatrix = squareform(dists);
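Note that the 'correlation' distance pdist returns is one minus the correlation coefficient, so if you want the coefficients themselves you can recover them (a one-line sketch of mine):

corrMatrix = 1 - squareform(pdist(P.', 'correlation'));   % pairwise correlation matrix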
Fourth Method - Use the corr method straight out of the box
You can just use corr in the following way:
[rho, pvals] = corr(P);
corr in this case will produce an m x m matrix that contains the correlation coefficient between each pair of columns of the n x m matrix stored in P.
Hopefully one of these will work!
Does this work? (Same idea as the first method above.)
for i = 1:size(P,2)
    for j = 1:i
Since you are just correlating each column with the others, why not just use (straight from the documentation):
[Rho,Pval] = corr(P);
I don't have the Statistics Toolbox, but according to http://www.mathworks.com/help/stats/corr.html,
corr(X) returns a p-by-p matrix containing the pairwise linear correlation coefficient between each pair of columns in the n-by-p matrix X.
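To then actually drop the duplicate (and self-) correlations from that matrix, one option (my own addition, not from the quoted documentation) is to keep only the strictly lower triangle:

[Rho, Pval] = corr(P);
mask = tril(true(size(Rho)), -1);   % strictly lower triangle: each pair once, no diagonal
pairRho = Rho(mask);                % unique pairwise correlations as a vector
pairPval = Pval(mask);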

Matlab - vector divide by vector, use loop

I have two equally sized, very large vectors (columns) A and B. I would like to divide vector A by vector B. This will give me a large matrix AxB filled with zeros, except for the last column. This column contains the values I'm interested in. When I simply divide the vectors in a Matlab script, I run out of memory, probably because the matrix AxB becomes very large. I can probably prevent this from happening by repeating the following:
calculating the first row of matrix AxB
filter the last value and put it into another vector C.
delete the used row of matrix AxB
redo steps 1-3 for all rows in vector A
How can I make a loop which does this?
Your question doesn't make it clear what you are trying to do, although it sounds like you want to do an element-wise division.
Try:
C = A./B
"Matrix product AxB" and "dividing vectors" are distinct operations.
If we understood this correctly, what you do want to calculate is "C = last column from AxB", such that:
lastcolsel = zeros(size(B,2), 1);
lastcolsel(end) = 1;   % a one in the last position selects the last column
C = (A*B) * lastcolsel;
If that code breaks your memory limit, recall that matrix product is associative (MxN)xP = Mx(NxP). Simplifying your example, we get:
lastcolsel = zeros(size(B,2), 1);
lastcolsel(end) = 1;
simplifier = B * lastcolsel;   % just the last column of B
C = A * simplifier;

Matlab - Stratified Sampling of Multidimensional Data

I want to divide a corpus into training & testing sets in a stratified fashion.
The observation data points are arranged in a Matrix A as
A=[16,3,0;12,6,4;19,2,1;.........;17,0,2;13,3,2]
Each column of the matrix represents a distinct feature.
In Matlab, the cvpartition(A,'holdout',p) function requires A to be a vector. How can I perform the same action with A as a matrix, i.e., so that the resulting sets have roughly the same distribution of each feature as the original corpus?
By using a matrix A rather than grouped data, you are making the assumption that a random partition of your data will return a test and train set with the same column distributions.
In general, the assumption you are making in your question is that there is a partition of A such that each of the marginal distributions of A (one per column) has the same distribution across the partitions. There is no guarantee that this is true. Check whether the columns of your matrix are correlated. If they are not, simply partition on the first column and use the row indices to define a test matrix:
cv = cvpartition(A(:, 1), 'holdout', p);
test_mat = A(cv.test, :);
If they are correlated, you may need to go back and reconsider what you are trying to do.
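A small sketch of that check (my own; the 0.1 threshold is arbitrary, and p is your holdout fraction):

R = corrcoef(A);                   % pairwise correlations between the feature columns
offdiag = R - eye(size(R));
if max(abs(offdiag(:))) < 0.1      % columns look roughly uncorrelated
    cv = cvpartition(A(:, 1), 'holdout', p);
    test_mat = A(cv.test, :);
end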