Computing the Mahalanobis distance between a set of points and a set of reference points - MATLAB

I have an n x p matrix, mX, which is composed of n points in R^p.
I have another m x p matrix, mY, which is composed of m reference points in R^p.
I would like to create an n x m matrix, mD, which is the Mahalanobis distance matrix.
mD(i, j) is the Mahalanobis distance between point i in mX, mX(i, :), and point j in mY, mY(j, :).
Namely, it computes the following:
mD(i, j) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
where mC is the given positive semi-definite (PSD) matrix defining the Mahalanobis distance.
This is easy to do in a loop; is there a way to vectorize it?
Namely, is there a fully vectorized function whose inputs are mX, mY and mC and whose output is mD, without using any MATLAB toolbox?
Thank you.

Approach #1
Assuming infinite resources, here's one vectorized solution using bsxfun and matrix multiplication -
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),n,m);
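A side note: diag(A*inv(mC)*A.') materializes the full (n*m) x (n*m) product just to read off its diagonal. A sketch of an equivalent computation that avoids that intermediate:
out = reshape(sum((A/mC).*A,2),n,m); %// element ii of the sum equals A(ii,:)*inv(mC)*A(ii,:).'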
Approach #2
Here's a compromise solution that tries to reduce the loop complexity -
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
imC = inv(mC);
out = zeros(n*m,1);
for ii = 1:n*m
    out(ii) = A(ii,:)*imC*A(ii,:).';
end
out = reshape(out,n,m);
Sample run (note that mC = rand(p,p) below is not PSD, which is why some of the values are negative; with a genuine covariance matrix all entries would be non-negative) -
>> n = 3; m = 4; p = 5;
mX = rand(n,p);
mY = rand(m,p);
mC = rand(p,p);
imC = inv(mC);
>> %// Original solution
for i = 1:n
    for j = 1:m
        mD(i, j) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
    end
end
>> mD
mD =
-8.4256 10.032 2.8929 7.1762
-44.748 -4.3851 -13.645 -9.6702
-4.5297 3.2928 0.11132 2.5998
>> %// Approach #1
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),n,m);
>> out
out =
-8.4256 10.032 2.8929 7.1762
-44.748 -4.3851 -13.645 -9.6702
-4.5297 3.2928 0.11132 2.5998
>> %// Approach #2
A = reshape(bsxfun(@minus,permute(mX,[1 3 2]),permute(mY,[3 1 2])),[],p);
imC = inv(mC);
out1 = zeros(n*m,1);
for ii = 1:n*m
    out1(ii) = A(ii,:)*imC*A(ii,:).';
end
out1 = reshape(out1,n,m);
>> out1
out1 =
-8.4256 10.032 2.8929 7.1762
-44.748 -4.3851 -13.645 -9.6702
-4.5297 3.2928 0.11132 2.5998
If instead you had:
mD(j, i) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
the solutions would translate to the versions listed next.
Approach #1
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),m,n);
Approach #2
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
imC = inv(mC);
out = zeros(m*n,1);
for ii = 1:m*n
    out(ii) = A(ii,:)*imC*A(ii,:).';
end
out = reshape(out,m,n);
Sample run -
>> n = 3; m = 4; p = 5;
mX = rand(n,p); mY = rand(m,p); mC = rand(p,p); imC = inv(mC);
>> %// Original solution
for i = 1:n
    for j = 1:m
        mD(j, i) = (mX(i, :) - mY(j, :)) * inv(mC) * (mX(i, :) - mY(j, :)).';
    end
end
>> mD
mD =
0.81755 0.33205 0.82254
1.7086 1.3363 2.4209
0.36495 0.78394 -0.33097
0.17359 0.3889 -1.0624
>> %// Approach #1
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
out = reshape(diag(A*inv(mC)*A.'),m,n);
>> out
out =
0.81755 0.33205 0.82254
1.7086 1.3363 2.4209
0.36495 0.78394 -0.33097
0.17359 0.3889 -1.0624
>> %// Approach #2
A = reshape(bsxfun(@minus,permute(mY,[1 3 2]),permute(mX,[3 1 2])),[],p);
imC = inv(mC);
out1 = zeros(m*n,1);
for ii = 1:m*n
    out1(ii) = A(ii,:)*imC*A(ii,:).';
end
out1 = reshape(out1,m,n);
>> out1
out1 =
0.81755 0.33205 0.82254
1.7086 1.3363 2.4209
0.36495 0.78394 -0.33097
0.17359 0.3889 -1.0624

Here is one solution that eliminates one loop. Note that it estimates the covariance matrix from the pooled data rather than taking mC as an input, and that it returns square roots, i.e. actual distances rather than squared ones:
function d = mahalanobis(mX, mY)
    % Points are stored as columns here: mX is p x n, mY is p x m.
    n = size(mX, 2);
    m = size(mY, 2);
    data = [mX, mY];
    mc = cov(transpose(data));      % covariance estimated from all points
    dist = zeros(n, m);
    for i = 1 : n
        delta = repmat(mX(:,i), 1, m) - mY;      % differences to all reference points
        dist(i,:) = sum((mc\delta).*delta, 1);   % delta(:,j).'*inv(mc)*delta(:,j), per column
    end
    d = sqrt(dist);
end
You would invoke it as:
d = mahalanobis(transpose(X),transpose(Y))

Reduce to L2
It seems that the Mahalanobis distance can be reduced to ordinary L2 distance if you are allowed to preprocess the matrix mC and are not afraid of small numerical differences.
First of all, compute the Cholesky decomposition of mC:
mR = chol(mC) % mC = mR.' * mR, where mR is upper-triangular
Now we can use these factors to reformulate the Mahalanobis distance:
(Xi - Yj) * inv(C) * (Xi - Yj).' = || (Xi - Yj) * inv(R) ||^2 = || TXi - TYj ||^2
where: TXi = Xi * inv(R)
       TYj = Yj * inv(R)
So the idea is to first transform the points Xi, Yj into TXi, TYj, and then compute Euclidean distances between them. Here is the algorithm outline:
1. Compute mR, the Cholesky factor of covariance matrix mC (takes O(p^3) time).
2. Invert the triangular matrix mR (takes O(p^3) time).
3. Multiply both mX and mY by inv(mR) on the right (takes O(p^2 (m+n)) time).
4. Compute the squared L2 distances between all pairs of points (takes O(m n p) time).
The total time is O(m n p + (m + n) p^2 + p^3), versus the original O(m n p^2). It should work faster when 1 << p << n,m; in that case step 4 takes most of the time and should be vectorized, as in the sketch below.
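A minimal MATLAB sketch of this outline (assuming mC is symmetric positive definite so that chol succeeds; steps 2 and 3 are fused into triangular solves):
mR = chol(mC);                 % mC = mR.' * mR, with mR upper-triangular
TX = mX / mR;                  % TXi = Xi * inv(mR), via a triangular solve
TY = mY / mR;                  % TYj = Yj * inv(mR)
% Squared L2 distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a*b.'
mD = bsxfun(@plus, sum(TX.^2,2), sum(TY.^2,2).') - 2*(TX*TY.');
% Round-off can leave tiny negative entries; clamp with max(mD, 0) if needed.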
Vectorization
I have little experience with MATLAB, but quite a lot with SIMD vectorization on x86 CPUs. In raw computations, it is enough to vectorize along one sufficiently large array dimension and use trivial loops for the other dimensions.
If you expect p to be large enough, it may be OK to vectorize along the coordinates of points and use two nested loops for i <= n and j <= m. That's similar to what @Daniel posted.
If p is not sufficiently large, you can vectorize along one of the point sequences instead. This would be similar to the solution posted by @dpmcmlxxvi: subtract a single row of one matrix from all the rows of the other matrix, then compute the squared norms of the resulting rows. Repeat n times (or m times).
As for me, full vectorization (which in MATLAB means rewriting everything with matrix operations instead of loops) does not sound like a clever performance goal. Most likely a partially vectorized solution will already be optimally fast.

I came to the conclusion that fully vectorizing this problem is not efficient. My best idea for vectorizing it would require m x n x p x p working memory, at least if everything is processed at once. That means that with n = m = p = 152 the code would already require 4 GB of RAM. At these dimensions, my system can run the loop in less than a second:
mD = zeros(size(mX,1), size(mY,1));
ImC = inv(mC);   % invert once, outside the loops
for i = 1:size(mX,1)
    for j = 1:size(mY,1)
        d = mX(i, :) - mY(j, :);
        mD(i, j) = d * ImC * d.';
    end
end
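The same idea can be pushed a little further by vectorizing over one of the two point sets, leaving a single loop; a sketch, using the question's given mC:
imC = inv(mC);
mD = zeros(size(mX,1), size(mY,1));
for i = 1:size(mX,1)
    d = bsxfun(@minus, mX(i,:), mY);   % differences to all rows of mY at once
    mD(i,:) = sum((d*imC).*d, 2).';    % row-wise quadratic forms
end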

Related

Memory usage by MATLAB

I'm running MATLAB code on my university's HPC cluster. I have two versions of the code. The second version, despite generating a smaller array, seems to require more memory. I would like your help to understand whether this is in fact the case, and why.
Let me start from some preliminary lines:
clear
rng default
%Some useful components
n=7^4;
vectors{1}=[1,20,20,20,-1,Inf,-Inf];
vectors{2}=[-19,19,19,19,-20,Inf,-Inf];
vectors{3}=[-19,0,0,0,-20,Inf,-Inf];
vectors{4}=[-19,0,0,0,-20,Inf,-Inf];
T_temp = cell(1,4);
[T_temp{:}] = ndgrid(vectors{:});
T_temp = cat(4+1, T_temp{:});
T = reshape(T_temp,[],4); %all the possible 4-tuples from vectors{1}, ..., vectors{4}
This is the first version of the code: I construct the matrix D1 listing all possible unordered pairs of rows from T
indices_pairs=pairIndices(n);
D1=[T(indices_pairs(:,1),:) T(indices_pairs(:,2),:)];
This is the second version of the code: I construct the matrix D2 listing a random draw of m=10^6 unordered pairs of rows from T
m=10^6;
p=n*(n-1)/2;
random_indices_pairs = randperm(p, m).';
[C1, C2] = myind2ind (random_indices_pairs, n);
indices_pairs=[C1 C2];
D2=[T(indices_pairs(:,1),:) T(indices_pairs(:,2),:)];
My question: when generating D2, the HPC goes out of memory, while generating D1 works fine, despite D1 being a larger array than D2. Why is that the case?
These are the helper functions used above:
function indices = pairIndices(n)
    [y, x] = find(tril(logical(ones(n)), -1)); %#ok<LOGL>
    indices = [x, y];
end
function [R, C] = myind2ind(ii, N)
    jj = N * (N - 1) / 2 + 1 - ii;
    r = (1 + sqrt(8 * jj)) / 2;
    R = N - floor(r);
    idx_first = (floor(r + 1) .* floor(r)) / 2;
    C = idx_first - jj + R + 1;
end
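As a first step towards checking the premise, one can compare what MATLAB itself reports for the two arrays; a sketch (run after generating D1 and D2 respectively):
infoD1 = whos('D1');
infoD2 = whos('D2');
fprintf('D1: %.2f GB, D2: %.2f GB\n', infoD1.bytes/2^30, infoD2.bytes/2^30);
Note that this only measures the final arrays; peak usage also includes any temporaries created while indexing T.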

Correlation coefficients between two matrices to find intercorrelation

I am trying to calculate Pearson coefficients between all pair combinations of my variables across all my samples.
Say I have an m*n matrix, where m counts the variables and n the samples.
I want to calculate, for each variable of my data, the correlation to every other variable.
So, I managed to do that with nested loops:
X = rand(1000, 100);
for i = 1:1000
    base = X(i, :);
    for j = 1:1000
        target = X(j, :);
        correlation = corrcoef(base, target);
        correlation = correlation(2, 1);
        corData(1, j) = correlation;
    end
    totalCor(i, :) = corData;
end
It works, but takes too much time to run.
I am trying to find a way to run the corrcoef function on a row basis, meaning maybe to create an additional matrix with repmat of the base values and correlate it to the X data using some function handle (FUN).
I could not figure out how to use such a function with inputs from two arrays, running over individual lines/columns.
Help will be appreciated.
This post involves a bit of hacking, so bear with it!
Stage #0: To start off, we have -
for i = 1:N
    base = X(i, :);
    for j = 1:N
        target = X(j, :);
        correlation = corrcoef(base, target);
        correlation = correlation(2, 1);
        corData(1, j) = correlation;
    end
end
Stage #1: From the documentation of corrcoef in its source code:
If C is the covariance matrix, C = COV(X), then CORRCOEF(X) is the
matrix whose (i,j)'th element is : C(i,j)/SQRT(C(i,i)*C(j,j)).
After hacking into the code of cov, we see that for the default case of one input, the covariance formula is simply -
[m,n] = size(x);
xc = bsxfun(@minus,x,sum(x,1)/m);
xy = (xc' * xc) / (m-1);
Thus, mixing the two definitions and putting them into the problem at hand, we have -
m = size(X,2);
for i = 1:N
    base = X(i, :);
    for j = 1:N
        target = X(j, :);
        BT = [base(:) target(:)];
        xc = bsxfun(@minus,BT,sum(BT,1)/m);
        C = (xc' * xc) / (m-1);
        corData = C(2,1)/sqrt(C(2,2)*C(1,1));
    end
end
Stage #2: This is the final stage, where we use the real fun aka bsxfun to kill all loops, like so -
%// Broadcasted subtract of each row by its average.
%// This corresponds to "xc = bsxfun(@minus,BT,sum(BT,1)/m)".
p1 = bsxfun(@minus,X,mean(X,2));
%// Get pairs of rows from X and get the dot product.
%// Thus, a total of "N x N" such products would be obtained.
p2 = sum(bsxfun(@times,permute(p1,[1 3 2]),permute(p1,[3 1 2])),3);
%// Scale them down by "size(X,2)-1".
%// This was for the part: "C = (xc' * xc) / (m-1)".
p3 = p2/(size(X,2)-1);
%// "C(2,2)" and "C(1,1)" are diagonal elements from "p3", so store them.
dp3 = diag(p3);
%// Get "sqrt(C(2,2)*C(1,1))" by broadcasting elementwise multiplication
%// of "dp3". Finally do elementwise division of "p3" by it.
totalCor_out = p3./sqrt(bsxfun(@times,dp3,dp3.'));
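A side note: the N x N pairwise row dot products that form p2 are exactly a matrix product, so the N x N x m intermediate from the 3D bsxfun can be avoided; a sketch:
p2_alt = p1*p1.'; %// equal to p2 up to floating-point round-off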
Benchmarking
This section compares the original approach against the proposed one and also verifies the output. Here's the benchmarking code -
disp('---------- With original approach')
tic
X = rand(1000,100);
corData = zeros(1,1000);
totalCor = zeros(1000,1000);
for i = 1:1000
    base = X(i, :);
    for j = 1:1000
        target = X(j, :);
        correlation = corrcoef(base, target);
        correlation = correlation(2, 1);
        corData(1, j) = correlation;
    end
    totalCor(i, :) = corData;
end
toc
disp('---------- With the real fun aka BSXFUN')
tic
p1 = bsxfun(@minus,X,mean(X,2));
p2 = sum(bsxfun(@times,permute(p1,[1 3 2]),permute(p1,[3 1 2])),3);
p3 = p2/(size(X,2)-1);
dp3 = diag(p3);
totalCor_out = p3./sqrt(bsxfun(@times,dp3,dp3.'));
toc
error_val = max(abs(totalCor(:)-totalCor_out(:)))
Output -
---------- With original approach
Elapsed time is 186.501746 seconds.
---------- With the real fun aka BSXFUN
Elapsed time is 1.423448 seconds.
error_val =
4.996e-16
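Worth noting: corrcoef treats the columns of its input as variables, so a single call on the transposed data gives the same N x N matrix directly (a sketch, though it skips the instructive bsxfun exercise):
totalCor_direct = corrcoef(X.'); %// same as totalCor up to round-off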

How would you remove the loop from this matlab code

Given that we have:
x is a 2D matrix with size [numSamples x numFeatures]
A is a 2D square matrix with size [numFeatures x numFeatures]
B is a 1D vector with size [1 x numFeatures]
I would like to evaluate the following code without a loop (or in a much faster way):
out = zeros(1,numSamples);
for i = 1:numSamples
    res = sum(repmat(B - x(i,:), numSamples, 1)*A.*(x - repmat(x(i,:), numSamples, 1)), 2).^2;
    out(i) = var(res);
end
Any other suggestions for speeding up the above are also more than welcome.
You can bsxfun those piece-by-piece for a vectorized solution -
P1 = bsxfun(@minus,B,x)*A;
P2 = bsxfun(@minus,x,permute(x,[3 2 1]));
out = var(squeeze(sum(bsxfun(@times,P1,P2),2)).^2.');
Partially vectorized approach -
P = (bsxfun(@minus,B,x)*A).';
out = zeros(1,numSamples);
for i = 1:numSamples
    out(i) = var((bsxfun(@minus,x,x(i,:))*P(:,i)).^2);
end
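The permute bookkeeping above is easy to get wrong, so a small self-contained check against the original loop may be worthwhile; a sketch (the sizes are arbitrary):
numSamples = 20; numFeatures = 5;
x = rand(numSamples, numFeatures);
A = rand(numFeatures); B = rand(1, numFeatures);
out_loop = zeros(1,numSamples);
for i = 1:numSamples
    res = sum(repmat(B - x(i,:), numSamples, 1)*A.*(x - repmat(x(i,:), numSamples, 1)), 2).^2;
    out_loop(i) = var(res);
end
P1 = bsxfun(@minus,B,x)*A;
P2 = bsxfun(@minus,x,permute(x,[3 2 1]));
out_vec = var(squeeze(sum(bsxfun(@times,P1,P2),2)).^2.');
max(abs(out_loop - out_vec)) % should be ~0 up to round-off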

Matlab: Argmax and dot product for each row in a matrix

I have 2 matrices: X in R^(n x m) and W in R^(k x m), where k << n.
Let x_i be the i-th row of X and w_j be the j-th row of W.
I need to find, for each x_i, the j that maximizes <w_j, x_i>.
I can't see a way around iterating over all the rows in X, but is there a way to find the maximum dot product without iterating every time over all of W?
A naive implementation would be:
n = 100;
m = 50;
k = 10;
X = rand(n,m);
W = rand(k,m);
Y = zeros(n, 1);
for i = 1 : n
    max_ind = 1;
    max_val = dot(W(1,:), X(i,:));
    for j = 2 : k
        cur_val = dot(W(j,:), X(i,:));
        if cur_val > max_val
            max_val = cur_val;
            max_ind = j;
        end
    end
    Y(i,:) = max_ind;
end
The dot products are essentially a matrix multiplication, and max along the first dimension then picks the argmax for every column:
[~, Y] = max(W*X');
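Equivalently (a sketch), if you prefer Y as an n x 1 column vector like in the loop version:
[~, Y] = max(X*W.', [], 2); % row i of X*W.' holds <w_j, x_i> for all j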
A bsxfun-based approach to speed things up for you -
[~,Y] = max(sum(bsxfun(@times,X,permute(W,[3 2 1])),2),[],3)
On my system, using your dataset I am getting a 100x+ speedup with this.
One can think of two more "close-by" approaches, but they don't seem to give any huge improvement over the earlier one -
[~,Y] = max(squeeze(sum(bsxfun(@times,X,permute(W,[3 2 1])),2)),[],2)
and
[~,Y] = max(squeeze(sum(bsxfun(@times,X',permute(W,[2 3 1]))))')

Lagrange interpolation method

I use convolution and for loops (too many for loops) for calculating the interpolation using Lagrange's method. Here's the main code:
function [p] = lagrange_interpolation(X,Y)
    n = length(X);   % number of data points (was undefined in the original)
    L = zeros(n);
    p = zeros(1,n);
    % Computing the L matrix, so that each row i holds the polynomial L_i.
    % Now we compute l_i(x) for i = 1...n, and we build the polynomial.
    for k = 1:n
        multiplier = 1;
        outputConv = ones(1,1);
        for index = 1:n
            if (index ~= k && X(index) ~= X(k))
                outputConv = conv(outputConv,[1,-X(index)]);
                multiplier = multiplier * ((X(k) - X(index))^-1);
            end
        end
        polynomialSize = length(outputConv);
        for index = 1:polynomialSize
            L(k,n - index + 1) = outputConv(polynomialSize - index + 1);
        end
        L(k,:) = multiplier .* L(k,:);
    end
    % continues
end
Those are too many for loops just for computing the l_i(x) (this is done before the final calculation of P_n(x) = sum over i of y_i * l_i(x)).
Any suggestions for making it more idiomatic MATLAB?
Thanks
Yeah, several suggestions (implemented in version 1 below): the if can be combined with the for above it (just make index skip k via something like jr(jr~=j) below); polynomialSize always equals length(outputConv), which always equals n (because you have n data points, so the (n-1)th-order polynomial has n coefficients), so the last for loop and the next line can be replaced with the simple L(k,:) = multiplier * outputConv;
So I replicated the example on http://en.wikipedia.org/wiki/Lagrange_polynomial (and adopted their j-m notation, but for me j goes over 1:n, m goes over 1:n, and m~=j), hence my initialization looks like
clear; clc;
X = [-9 -4 -1 7]; % example taken from http://en.wikipedia.org/wiki/Lagrange_polynomial
Y = [ 5  2 -2 9];
n = length(X); % Lagrange basis polynomials are (n-1)th order and have n coefficients
lj = zeros(1,n); % storage for numerator of Lagrange basis polynomials - each w/ n coeff
Lj = zeros(n);   % matrix of Lagrange basis polynomial coeffs (lj(x))
L  = zeros(1,n); % the Lagrange polynomial coefficients (L(x))
then v 1.0 looks like
jr = 1:n; % j-range: 1 <= j <= n
for j = jr % my j is your k
    multiplier = 1;
    outputConv = 1;  % numerator of lj(x)
    mr = jr(jr~=j);  % m-range: 1 <= m <= n, m ~= j
    for m = mr % my m is your index
        outputConv = conv(outputConv,[1 -X(m)]);
        multiplier = multiplier * ((X(j) - X(m))^-1);
    end
    Lj(j,:) = multiplier * outputConv; % jth Lagrange basis polynomial lj(x)
end
L = Y*Lj; % coefficients of Lagrange polynomial L(x)
This can be further simplified if you realize that the numerator of l_j(x) is just a polynomial with specific roots - for that there is a nice command in MATLAB: poly. Similarly, the denominator is just that polynomial evaluated at X(j) - for that there is polyval. Hence, v 1.9:
jr = 1:n; % j-range: 1 <= j <= n
for j = jr
    mr = jr(jr~=j);            % m-range: 1 <= m <= n, m ~= j
    lj = poly(X(mr));          % numerator of lj(x)
    mult = 1/polyval(lj,X(j)); % denominator of lj(x)
    Lj(j,:) = mult * lj;       % jth Lagrange basis polynomial lj(x)
end
L = Y*Lj; % coefficients of Lagrange polynomial L(x)
Why version 1.9 and not 2.0? Well, there is probably a way to get rid of this last for loop and write it all in one line, but I can't think of it right now - it's a todo for v 2.0 :)
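In the meantime, a quick sanity check is cheap (a sketch): the interpolant must reproduce the data points up to round-off:
max(abs(polyval(L, X) - Y)) % should be ~0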
And, for dessert, if you want to get the same picture as Wikipedia:
figure(1);clf
x=-10:.1:10;
hold on
plot(x,polyval(Y(1)*Lj(1,:),x),'r','linewidth',2)
plot(x,polyval(Y(2)*Lj(2,:),x),'b','linewidth',2)
plot(x,polyval(Y(3)*Lj(3,:),x),'g','linewidth',2)
plot(x,polyval(Y(4)*Lj(4,:),x),'y','linewidth',2)
plot(x,polyval(L,x),'k','linewidth',2)
plot(X,Y,'ro','linewidth',2,'markersize',10)
hold off
xlim([-10 10])
ylim([-10 10])
set(gca,'XTick',-10:10)
set(gca,'YTick',-10:10)
grid on
which produces the figure from the Wikipedia article: the four scaled basis polynomials in color, the interpolant L(x) in black, and the data points as red circles.
Enjoy, and feel free to reuse/improve!
Try:
X = 0:1/20:1; Y = cos(X);
then build L as above and evaluate polyval(L,1):
polyval(L,1) = 0.917483227909543
cos(1) = 0.540302305868140
Why is there such a huge difference?