I am using pdist2(x(i), y(j), 'euclidean') to find the Euclidean distance between x and y instead of the manual method
sqrt((x(i)-y(i))^2).
And to find the correlation coefficient I am using corrcoef(x(i), y(j)). Is this the correct way to find the correlation coefficient and the Euclidean distance between the matrices x and y?
I get different answers from the formula and from the manual method, so I think something is not correct.
I suppose x and y are matrices, so you are using pdist2 to compute the distance between observation i (from x) and observation j (from y). In the manual method you are using the index i in both of them. Maybe the differences are due to this inconsistent use of the indexes i and j. Show us your code so we can confirm it.
By the way, as @Luis pointed out, it is better to use pdist2 to compute all the distances at the same time (it is much faster). So, if you have two matrices x and y, use: pdist2(x,y).
The same for the correlation.
The correlation between the two matrices can be calculated as:
r = corr2(x,y)
Now, if you are after the element-wise distance, how about:
dist = sqrt((x-y).^2); % equivalent to abs(x-y)
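To make that concrete, here is a minimal sketch with made-up 3-by-2 matrices (the values are arbitrary; replace them with your own data):

% Minimal sketch with made-up 3x2 matrices
x = [1 2; 3 4; 5 6];
y = [2 1; 4 3; 6 5];

D = pdist2(x, y);           % 3x3: D(i,j) is the Euclidean distance between x(i,:) and y(j,:)
r = corr2(x, y);            % 2-D correlation coefficient between the two matrices
dist = sqrt((x - y).^2);    % element-wise distance, i.e. abs(x - y)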
I have a point point=[x y] and an n-by-2 array vec=[X Y], where X and Y are column vectors containing the x and y values of many points. I have calculated the Euclidean distances of all the points in vec from point with the following code:
diff = vec - point;                  % subtract point from every row of vec
squared = diff .* diff;              % element-wise squares
distances = sqrt(sum(squared, 2));   % row-wise Euclidean distances
I have seen the pdist() function, but could not find a good way to use it in my code. Is there a more elegant way to do so?
You can use something like m = [point; vec] and then distances = pdist(m, 'euclidean'); however, it will compute O((n+1)^2) distances rather than the O(n) you need. If the code is not performance critical, I wouldn't worry about it and would just use the code that is more elegant and easier to understand.
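A sketch of both variants, assuming point is 1-by-2 and vec is n-by-2 (the data below is made up):

point = [1 2];
vec   = [3 4; 5 6; 0 0];

m = [point; vec];
D = pdist(m, 'euclidean');        % all pairwise distances within m
distances = D(1:size(vec,1))';    % the first n entries are point vs. each row of vec

% direct computation from the question, O(n) work:
distances2 = sqrt(sum((vec - point).^2, 2));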
I have the following equation:

y = e^(A u² + B a³)
I want to do an exponential curve fit using MATLAB for the above equation, where y = f(u,a); y is my output while (u,a) are my inputs. I want to find the coefficients A and B for a set of provided data.
I know how to do this for simple polynomials by defining states. As an example, if states = [ones(size(u)), u, u.^2], this will give me L + M*u + N*u^2, with L, M and N being regression coefficients.
However, this is not the case for the above equation. How could I do this in MATLAB?
Building on what @eigenchris said, simply take the natural logarithm (log in MATLAB) of both sides of the equation. If we do this, we would in fact be linearizing the equation in log space. In other words, given your original equation:

y = e^(A u² + B a³)
We get:

log(y) = A u² + B a³
However, this isn't exactly polynomial regression. This is more of a least squares fitting of your points. Specifically, given a set of y values and a paired set of (u,a) points, you would build a system of equations and solve this system via least squares. In other words, given the set y = (y_0, y_1, y_2, ..., y_N) and (u,a) = ((u_0, a_0), (u_1, a_1), ..., (u_N, a_N)), where N is the number of points that you have, you would build your system of equations like so:

log(y_0) = A u_0² + B a_0³
log(y_1) = A u_1² + B a_1³
...
log(y_N) = A u_N² + B a_N³
This can be written in matrix form:

[log(y_0)]   [u_0²  a_0³]
[log(y_1)] = [u_1²  a_1³] * [A]
[  ...   ]   [ ...   ...]   [B]
[log(y_N)]   [u_N²  a_N³]
To solve for A and B, you simply need to find the least-squares solution. You can see that it's in the form of:
Y = AX
To solve for X, we use what is called the pseudoinverse. As such:
X = A^{*} * Y
A^{*} is the pseudoinverse. This can be done elegantly in MATLAB using the \ or mldivide operator. All you have to do is build a vector of y values with the log taken, as well as the matrix of u² and a³ values. Therefore, if your (u,a) points are stored in vectors u and a, and your y values in a vector y, you would simply do this:
x = [u.^2 a.^3] \ log(y);
x(1) will contain the coefficient for A, while x(2) will contain the coefficient for B. As A. Donda has noted in his answer (which I embarrassingly forgot about), the values of A and B are obtained assuming that the errors with respect to the exact curve you are trying to fit to are normally (Gaussian) distributed with a constant variance. The errors also need to be additive. If this is not the case, then your parameters achieved may not represent the best fit possible.
See this Wikipedia page for more details on what assumptions least-squares fitting takes:
http://en.wikipedia.org/wiki/Least_squares#Least_squares.2C_regression_analysis_and_statistics
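As a quick sanity check, here is a sketch with synthetic, noise-free data; the coefficient values below are made up for illustration, and the fit should recover them exactly:

% Sanity-check sketch: synthetic data from y = exp(A*u.^2 + B*a.^3)
A_true = 0.5;  B_true = -0.2;              % assumed "ground truth" coefficients
u = linspace(0.1, 2, 50)';
a = linspace(0.1, 2, 50)';
y = exp(A_true * u.^2 + B_true * a.^3);    % data generated from the model

x = [u.^2, a.^3] \ log(y);                 % least-squares fit in log space
% x(1) should recover A_true and x(2) B_true (exactly, since there is no noise)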
One approach is to use a linear regression of log(y) with respect to u² and a³:
Assuming that u, a, and y are column vectors of the same length:
AB = [u .^ 2, a .^ 3] \ log(y)
After this, AB(1) is the fit value for A and AB(2) is the fit value for B. The computation uses Matlab's mldivide operator; an alternative would be to use the pseudo-inverse.
The fit values found this way are Maximum Likelihood estimates of the parameters under the assumption that deviations from the exact equation are constant-variance normally distributed errors additive to A u² + B a³. If the actual source of deviations differs from this, these estimates may not be optimal.
I am doing a project where I find an approximation of the sine function using the least squares method. I can also use 12 values of my own choice. Since I couldn't figure out how to solve it directly, I thought of using the Taylor series for sine and then solving it as a polynomial of order 5. Here is my code:
%% Find the sine of the 12 known values
x=[0,pi/8,pi/4,7*pi/2,3*pi/4,pi,4*pi/11,3*pi/2,2*pi,5*pi/4,3*pi/8,12*pi/20];
y = sin(x);   % sine of the 12 sample points
n = 12;   % number of data points
j = 5;    % polynomial degree
%% Find the sums to populate the matrix A and matrix B
s1=sum(x);s2=sum(x.^2);
s3=sum(x.^3);s4=sum(x.^4);
s5=sum(x.^5);s6=sum(x.^6);
s7=sum(x.^7);s8=sum(x.^8);
s9=sum(x.^9);s10=sum(x.^10);
sy=sum(y);
sxy=sum(x.*y);
sxy2=sum( (x.^2).*y);
sxy3=sum( (x.^3).*y);
sxy4=sum( (x.^4).*y);
sxy5=sum( (x.^5).*y);
A=[n,s1,s2,s3,s4,s5;s1,s2,s3,s4,s5,s6;s2,s3,s4,s5,s6,s7;
s3,s4,s5,s6,s7,s8;s4,s5,s6,s7,s8,s9;s5,s6,s7,s8,s9,s10];
B=[sy;sxy;sxy2;sxy3;sxy4;sxy5];
Then in MATLAB I get this result:
>> a=A^-1*B
a =
-0.0248
1.2203
-0.2351
-0.1408
0.0364
-0.0021
However, when I substitute the values of a into the Taylor series and evaluate at, e.g., t = pi/2, I get wrong results:
>> t=pi/2;
fun=t-t^3*a(4)+a(6)*t^5
fun =
2.0967
Am I doing something wrong when I substitute the values of the a vector into the Taylor series, or is my initial idea flawed?
Note: I can't use any built-in functions.
If you need a least-squares approximation, simply decide on a fixed interval that you want to approximate on and generate some x abscissae on that interval (possibly equally spaced abscissae using linspace - or non-uniformly spaced as you have in your example). Then evaluate your sine function at each point such that you have
y = sin(x)
Then simply use the polyfit function (documented here) to obtain least squares parameters
b = polyfit(x,y,n)
where n is the degree of the polynomial you want to approximate. You can then use polyval (documented here) to obtain the values of your approximation at other values of x.
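For example, a sketch using 12 equally spaced points and degree 5, as in your attempt:

x = linspace(0, 2*pi, 12);   % 12 equally spaced abscissae on [0, 2*pi]
y = sin(x);
b = polyfit(x, y, 5);        % least-squares coefficients, highest order first
polyval(b, pi/2)             % evaluate the approximation; should be close to 1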
EDIT: As you can't use polyfit you can generate the Vandermonde matrix for the least-squares approximation directly (the below assumes x is a row vector).
x = x';                     % make x a column vector
A = ones(length(x), 1);     % first column: x.^0
for i = 1:n
    A = [A, x.^i];          % append the column for x.^i
end
then simply obtain the least squares parameters using
b = A \ y;   % y must be a column vector here
You can clearly optimise the clumsy Vandermonde generation loop above; I have written it this way just to illustrate the concept. For better numerical stability you would also do better to use a nice orthogonal polynomial system such as Chebyshev polynomials of the first kind. If you are not even allowed to use the matrix divide \ operator, then you will need to code up your own implementation of a QR factorisation and solve the system that way (or some other numerically stable method).
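Putting the pieces together, here is a sketch of the full manual route (no polyfit/polyval). Note that all six fitted coefficients enter the evaluation step, not just the odd Taylor-style terms, which is likely where the evaluation in the question goes wrong:

% Sketch of the full manual route, degree n = 5
n = 5;
x = linspace(0, 2*pi, 12);      % 12 sample abscissae
y = sin(x);

xc = x';                        % column vector
A  = ones(length(xc), 1);       % Vandermonde matrix: columns xc.^0 ... xc.^n
for i = 1:n
    A = [A, xc.^i];
end

b = A \ y';                     % least-squares coefficients, constant term first

t = pi/2;
approx = sum(b' .* t.^(0:n))    % evaluate the polynomial; close to sin(pi/2) = 1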
I have two matrices X and Y. Both represent a number of positions in 3D-space. X is a 50*3 matrix, Y is a 60*3 matrix.
My question: why does applying the mean-function over the output of pdist2() in combination with 'Mahalanobis' not give the result obtained with mahal()?
More details on what I'm trying to do below, as well as the code I used to test this.
Let's suppose the 60 observations in matrix Y are obtained after an experimental manipulation of some kind. I'm trying to assess whether this manipulation had a significant effect on the positions observed in Y. Therefore, I used pdist2(X,X,'Mahalanobis') to compare X to X to obtain a baseline, and later, X to Y (with X the reference matrix: pdist2(X,Y,'Mahalanobis')), and I plotted both distributions to have a look at the overlap.
Subsequently, I calculated the mean Mahalanobis distance for both distributions and the 95% CI, and did a t-test and a Kolmogorov–Smirnov test to assess whether the difference between the distributions was significant. This seemed very intuitive to me; however, when testing with mahal(), I get different values, although the reference matrix is the same. I don't understand exactly what the difference between the two ways of calculating the Mahalanobis distance is.
Comment that is too long, for @3lectrologos:
You mean this: d(I) = (Y(I,:)-mu)*inv(SIGMA)*(Y(I,:)-mu)'? This is just the formula for calculating the Mahalanobis distance, so it should be the same for the pdist2() and mahal() functions. I think mu is the mean vector and SIGMA the covariance matrix, both based on the reference distribution as a whole, in both pdist2() and mahal(). Only in mahal() you compare each point of your sample set to this summary of the reference distribution, while in pdist2() you make pairwise comparisons based on a reference distribution. Actually, with my purpose in mind, I think I should go for mahal() instead of pdist2(). I can interpret a pairwise distance based on a reference distribution, but I don't think it's what I need here.
% test pdist2 vs. mahal in matlab
% the purpose of this script is to see whether the average over the rows of E equals the values in d...
% data
X = []; % 50*3 matrix, data omitted
Y = []; % 60*3 matrix, data omitted
% calculations
S = nancov(X);
% mahal()
d = mahal(Y,X); % gives a 60*1 vector with a value for each observation (row) in Y (the second matrix is always the reference matrix)
% pairwise mahalanobis distance with pdist2()
E = pdist2(X,Y,'mahalanobis',S); % outputs a 50*60 matrix with the ij-th element the pairwise distance between X(i,:) and Y(j,:), based on the covariance matrix of X: nancov(X)
%{
so this is harder to interpret than mahal(), as elements of Y are not just
compared to the "mahalanobis-centroid" based on X, but to each individual
element of X
%}
F = mean(E); % now I averaged over the rows, which means, over all values of X, the reference matrix
mean(d)
mean(E(:)) % not equal to mean(d)
d-F' % not zero
% plot output
figure(1)
plot(d,'bo'), hold on
plot(mean(E),'ro')
legend('mahal()','averaged over all x values pdist2()')
ylabel('Mahalanobis distance')
figure(2)
plot(d,'bo'), hold on
plot(E','ro')
plot(d,'bo','MarkerFaceColor','b')
xlabel('values in matrix Y (Yi) ... or ... pairwise comparison Yi. (Yi vs. all Xi values)')
ylabel('Mahalanobis distance')
legend('mahal()','pdist2()')
One immediate difference between the two is that mahal subtracts the sample mean of X from each point in Y before computing distances.
Try something like E = pdist2(X,Y-mean(X),'mahalanobis',S); to see if that gives you the same results as mahal.
Note that
mahal(X,Y)
is equivalent to
pdist2(X,mean(Y),'mahalanobis',cov(Y)).^2
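A quick sketch to check this equivalence on random data (the sizes match the question's setup):

X = randn(50, 3);
Y = randn(60, 3);

d1 = mahal(X, Y);                                   % squared distances of X's rows from Y's sample
d2 = pdist2(X, mean(Y), 'mahalanobis', cov(Y)).^2;  % same thing via pdist2
max(abs(d1 - d2))                                   % ~0 up to round-off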
Well, I guess there are two different ways to calculate the Mahalanobis distance between two clusters of data, as you explain above:
1) You compare each data point from your sample set to the mu and sigma calculated from your reference distribution (although labeling one cluster the sample set and the other the reference distribution may be arbitrary), thereby calculating the distance from each point to this so-called Mahalanobis-centroid of the reference distribution.
2) You compare each data point from matrix Y to each data point of matrix X, with X the reference distribution (mu and sigma are calculated from X only).
The values of the distances will be different, but I guess the ordinal order of dissimilarity between clusters is preserved when using either method 1 or 2? I actually wonder, when comparing 10 different clusters to a reference matrix X, or to each other, whether the order of the dissimilarities would differ between method 1 and method 2. Also, I can't imagine a situation where one method would be wrong and the other not, although method 1 seems more intuitive in some situations, like mine.
I am trying to find the Mahalanobis distance of some points from the origin. The MATLAB command for that is mahal(Y,X).
But if I use this I get NaN, because the matrix X = 0, since the distance needs to be found from the origin. Can someone please help me with this? How should it be done?
I think you are a bit confused about what mahal() is doing. First, computation of the Mahalanobis distance requires a population of points, from which the covariance will be calculated.
In the Matlab docs for this function it makes it clear that the distance being computed is:
d(I) = (Y(I,:)-mu)*inv(SIGMA)*(Y(I,:)-mu)'
where mu is the population average of X and SIGMA is the population covariance matrix of X. Since your population consists of a single point (the origin), it has no covariance, and so the SIGMA matrix is not invertible, hence the error where you get NaN/Inf values in the distances.
If you know the covariance structure that you want to use for the Mahalanobis distance, then you can just use the formula above to compute it for yourself. Let's say that the covariance you care about is stored in a matrix S. You want the distance w.r.t. the origin, so you don't need to subtract anything from the values in Y, all you need to compute is:
d = zeros(size(Y,1), 1);                   % preallocate
for ii = 1:size(Y,1)
    d(ii) = Y(ii,:) * inv(S) * Y(ii,:)';   % Y(ii,:) is assumed to be a row vector
end
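If you prefer to avoid the loop and the explicit inverse, the same squared distances can be computed in one vectorized line (a sketch; mathematically equivalent, and the right-divide is numerically preferable to inv(S)):

d = sum((Y / S) .* Y, 2);   % row-wise Y(ii,:)*inv(S)*Y(ii,:)' for all rows at once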