I have a high dimensional Gaussian with mean M and covariance matrix V. I would like to calculate the distance from point p to M, taking V into consideration (I guess it's the distance in standard deviations of p from M?).
Phrased differently: I take the ellipse one sigma away from M, and would like to check whether p is inside that ellipse.
If V is a valid covariance matrix of a Gaussian, then it is symmetric positive definite and therefore defines a valid scalar product. Incidentally, so does inv(V).
Therefore, assuming that M and p are column vectors, you could define distances as:
d1 = sqrt((M-p)'*V*(M-p));
d2 = sqrt((M-p)'*inv(V)*(M-p));
In the Matlab way, one would rewrite d2 as (probably with some unnecessary parentheses):
d2 = sqrt((M-p)'*(V\(M-p)));
The nice thing is that when V is the identity matrix, then d1 == d2 and both correspond to the classical Euclidean distance. Finding out whether you have to use d1 or d2 is left as an exercise (sorry, part of my job is teaching). Write out the multi-dimensional Gaussian formula and compare it to the 1D case, since the 1D case is just a particular case of the multidimensional one (or perform some numerical experiment).
NB: in very high dimensional spaces, or when there are very many points to test, you might find a clever / faster way from the eigenvectors and eigenvalues of V (i.e. the principal axes of the ellipsoid and their corresponding variances), as sketched below.
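For instance, a minimal sketch of that faster route (assuming P is a d-by-k matrix whose columns are the k points to test; this is just an illustration, not the answer to the exercise):
[E, D] = eig(V);                    % principal axes in E, variances on diag(D)
W = diag(1 ./ sqrt(diag(D))) * E';  % whitening matrix, so that W' * W == inv(V)
Z = W * bsxfun(@minus, P, M);       % whiten all points at once
d2_all = sqrt(sum(Z .^ 2, 1));      % d2 for every column of P
inside = d2_all <= 1;               % inside the one-sigma ellipsoid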
Hope this helps.
A.
Consider computing the probability of the point given the normal distribution:
M = [1 -1]; %# mean vector
V = [.9 .4; .4 .3]; %# covariance matrix
p = [0.5 -1.5]; %# 2d-point
prob = mvnpdf(p,M,V); %# probability P(p|mu,cov)
The function mvnpdf is provided by the Statistics Toolbox.
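If what you actually want is the yes/no one-sigma test rather than a probability, a minimal sketch (an addition, reusing the row vectors above) is to compare the Mahalanobis distance against 1:
d = sqrt((p-M) * (V \ (p-M)')); %# Mahalanobis distance of p from M
isInside = d <= 1;              %# true if p lies inside the one-sigma ellipse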
Maybe I'm totally off, but isn't this the same as just asking for each dimension: Am I inside the sigma?
PSEUDOCODE:
foreach(dimension d)
(M(d) - sigma(d) < p(d) < M(d) + sigma(d)) ?
Because you want to know whether p is inside your Gaussian along every dimension. So actually this is just a space problem, and your Gaussian doesn't really have anything to do with it (except for M and sigma, which are just distances).
In MATLAB you could try something like:
all(M - sigma < p & p < M + sigma) % chained comparisons like a < b < c do not do what you expect in MATLAB
A distance to that point could be the Euclidean distance. I don't remember MATLAB's function for it off-hand; maybe dist works (norm(M - p) would in any case):
dist(M, p)
Because M is just a point in space, and p as well: just two vectors.
And now the final one. You want to know the distance in terms of sigmas:
% create a distance vector and divide it by sigma
(M - p) ./ sigma % parentheses needed: ./ binds tighter than -
I think that will do the trick.
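Putting it together as a runnable sketch (assuming sigma is the vector of per-dimension standard deviations, e.g. sigma = sqrt(diag(V))' for a covariance matrix V; note that this box test ignores the correlations in V):
sigma = sqrt(diag(V))';                          % per-dimension standard deviations
insideBox = all(M - sigma < p & p < M + sigma);  % axis-aligned one-sigma box test
distInSigmas = (M - p) ./ sigma;                 % per-dimension distance in sigmas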
Related
I have a matrix A = [60x60] and two coefficients a, b. Since matrix A was moved by a and b, how do I multiply the coefficients into matrix A so that I obtain A_moved? Is there any function to do it?
Here's part of the Matlab code implemented:
A=rand(60); %where it's in 2D, A(k1,k2)
a=0.5; b=0.8;
[m, n]=size(A);
[M,N] = meshgrid(1:m,1:n);
X = [M(:), N(:)];
A_moved=A(:)(X)*[a b] %I know this is not valid but you get the idea
In other words, A_moved is calculated by A_moved = a*k1 + b*k2.
The line A_moved=A(:)(X)*[a b] is meant to represent my idea that a and b multiply back into the original A, because X holds the corresponding coordinates k1 and k2: the first column represents k1 and the second column represents k2. Thus it becomes A_moved = a*k1 + b*k2. But this didn't get me anywhere.
In the end, A_moved is a 60x60 matrix which has been multiplied by the coefficients a and b correspondingly. To make it clearer: A is the phase of an image, and a and b shift its phase.
Appreciate any help. Thank you!
Reference paper: Here
EDIT:
As suggested by Noel for better understanding.
A=[2 3;5 7], a=1.5 and b=2.5.
Since A is approximated as a*k1+b*k2
Thus,
A_moved=[1.5*k1_1+2.5*k2_1 1.5*k1_1+2.5*k2_2; 1.5*k1_2+2.5*k2_1 1.5*k1_2+2.5*k2_2];
where k1 and k2, if I understand correctly, are the coordinates of the original A matrix, as defined in X above.
On the chat we found that your problem was matrix-algebra related.
What you want to obtain in A_moved is the x coordinate multiplied by a constant a, plus the y coordinate multiplied by a constant b.
You already have these coordinates in M and N, so you can obtain A_moved as
A_moved = (a*M) + (b*N);
And it will retain the same shape as A.
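As a quick check with the 2x2 example from the EDIT (a sketch; note that meshgrid makes M vary along columns, so if you need A_moved(k1,k2) = a*k1 + b*k2 with k1 as the row index, ndgrid is the safer variant):
A = [2 3; 5 7]; a = 1.5; b = 2.5;
[K1, K2] = ndgrid(1:size(A,1), 1:size(A,2)); % K1(i,j) = i, K2(i,j) = j
A_moved = a*K1 + b*K2
% A_moved = [4.0 6.5; 5.5 8.0], i.e. a*k1 + b*k2 at every position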
Let's say I have a samples matrix samples (n_samples x n1) and a labels vector labels (n_samples x 1), where the labels are in the range [1:n2].
I am looking for an efficient way to create an empirical joint probability matrix P in the size n2 x n1.
For every sample i, we add its row samples(i, :) to P at the row indicated by labels(i).
I.e. (pseudo code)
for i = 1:n_samples
    P(labels(i), :) += samples(i, :)
end
Is there a killer Matlab command for doing that, rather than a for loop or arrayfun?
Following BillBokeey's comment, here is the solution:
[xx, yy] = ndgrid(labels,1:size(samples,2));
P = accumarray([xx(:) yy(:)],samples(:));
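A small sanity check against the loop (a sketch with made-up sizes; the third argument to accumarray pins the output to n2-by-n1 in case some labels never occur):
n_samples = 5; n1 = 3; n2 = 4;
samples = rand(n_samples, n1);
labels = randi(n2, n_samples, 1);
[xx, yy] = ndgrid(labels, 1:n1);
P = accumarray([xx(:) yy(:)], samples(:), [n2 n1]);
Ploop = zeros(n2, n1);
for i = 1:n_samples
    Ploop(labels(i), :) = Ploop(labels(i), :) + samples(i, :);
end
max(abs(P(:) - Ploop(:))) % should be 0 up to round-off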
This question is about the use of the covariance matrix in the multidimensional normal distribution:
I want to generate multi-dimensional random numbers x in Matlab with a given mean mu and covariance matrix Sigma. Assuming Z is a standard normally distributed random number (e.g. generated using randn), what is the correct code:
x = mu + chol(Sigma) * Z
or
x = mu + Sigma ^ 0.5 * Z
?
I am not sure about the use of the covariance matrix in the definition of the multidimensional normal distribution, i.e. whether the determinant in the denominator should be that of the matrix square root or of the Cholesky factor...
If by definition you refer to the density of the multivariate normal distribution,
f(x) = det(2π Σ)^(-1/2) * exp(-(x - μ)' * Σ^(-1) * (x - μ) / 2),
it contains neither the Cholesky decomposition nor the matrix square root of Σ, but its inverse and the scalar square root of its determinant.
But for numerically generating random numbers from this distribution, the density is not helpful. It is not even the most general description of the multivariate normal distribution, since the density formula only makes sense for positive definite matrices Σ, while the distribution is also defined when there are zero eigenvalues; that just means the variance is 0 in the direction of the respective eigenvector.
Your question follows the approach of starting from standard multivariate normally distributed random numbers Z, as produced by randn, and then applying a linear transformation. Assuming that mu is a p-dimensional row vector, we want an n-by-p random matrix (each row one observation, each column one variable):
Z = randn(n, p);
x = mu + Z * A;
We need a matrix A such that the covariance of x is Sigma. Since the covariance of Z is the identity matrix, the covariance of x is given by A' * A. A solution to this is given by the Cholesky decomposition, so the natural choice is
A = chol(Sigma);
where A is an upper triangular matrix.
However, we can also search for a Hermitian solution, A' = A, and then A' * A becomes A^2, the matrix square. A solution to this is given by a matrix square root, which is computed by replacing each eigenvalue of Sigma by its square root (or its negative); in general there are 2ⁿ possible solutions for n positive eigenvalues. The Matlab function sqrtm returns the principal matrix square root, which is the unique nonnegative-definite solution. Therefore,
A = sqrtm(Sigma)
works also. Sigma ^ 0.5 should in principle do the same.
Simulations using this code
p = 10;           % dimension
n = 1000;         % number of observations
nr = 1000;        % number of replications
cp = nan(nr, 1);  % reconstruction error of chol
sp = nan(nr, 1);  % reconstruction error of sqrtm
pp = nan(nr, 1);  % reconstruction error of mpower (^ 0.5)
for i = 1 : nr
    x = randn(n, p);
    Sigma = cov(x);   % a random covariance matrix
    cS = chol(Sigma);
    cp(i) = norm(cS' * cS - Sigma);
    sS = sqrtm(Sigma);
    sp(i) = norm(sS' * sS - Sigma);
    pS = Sigma ^ 0.5;
    pp(i) = norm(pS' * pS - Sigma);
end
mean([cp sp pp])
show that chol is more precise than the other two methods, and profiling shows that it is also much faster, for both p = 10 and p = 100.
The Cholesky decomposition does, however, have the disadvantage that it is only defined for positive-definite Σ, while the matrix square root merely requires Σ to be nonnegative-definite (sqrtm issues a warning for a singular input, but returns a valid result).
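A minimal illustration of that difference (a sketch using a deliberately rank-deficient Sigma):
Sigma = [1 1; 1 1];      % nonnegative-definite but singular
[~, flag] = chol(Sigma); % flag > 0: chol reports failure
A = sqrtm(Sigma);        % still returns a valid square root (with a warning)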
I am just wondering how I would go about fitting a line to a 2D histogram, using the z-counts as weights. An example of such a histogram is shown below, taken from Scatter plot with density in Matlab (although that post just discusses overlaying multiple plots).
My initial thought is to make an array consisting of each pixel from the density plot, repeated n times (where n is the number of counts), and then do a linear polyfit on that scatter. This seems awfully redundant, though.
The other approach is to do a weighted least squares solution. You need the (x,y) location of each pixel and the number of counts n within each pixel. Then, I think that you'd do the weighted least-squares this way:
%gather your known data...have x,y, and n all in the same order as each other
A = [x(:) ones(length(x),1)]; %here are the x values from your histogram
b = y(:); %here are the y-values from your histogram
C = diag(n(:)); %counts from each pixel in your 2D histogram
%Define polynomial coefficients as p = [slope; y_offset];
%usual least-squares solution...written here for reference
% b = A*p; %remember, p = [slope; y_offset];
% p = inv(A'*A)*(A'*b); %remember, p = [slope; y_offset];
%We want to apply a weighting matrix C, so the normal equations become
% A' * C * b = A' * C * A * p;
p = inv(A' * C * A)*(A' * C * b); %remember, p = [slope; y_offset];
The biggest uncertainty for me with this solution is whether the C matrix should be made up of n or n.^2; I can never remember. Hopefully someone can correct me in the comments if needed.
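For what it's worth, MATLAB's built-in lscov solves this weighted problem directly (a sketch, assuming the weights enter linearly, i.e. C = diag(n)):
p = lscov(A, b, n(:)); %lscov(A,b,w) minimizes (b - A*p)' * diag(w) * (b - A*p)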
If you have the original data, which is a collection of (x,y) points, you simply do a polyfit on the original data:
p = polyfit(x(:),y(:),1); %linear fit
That will give you a best fit (in the least-squares sense) to the original data, which is what you want.
If you do not have the original data and only have the 2D histogram, the approach you defined (which basically recreates a facsimile of the original data) will give a similar answer to running polyfit on the original data.
I've got matrices A and B
size(A) = [n x]; size(B) = [n y];
Now I need to compute the Euclidean distance between each column vector of A and each column vector of B. I'm using the dist function right now:
Q = dist([A B]); Q = Q(1:x, x+1:end);
But it also does a lot of needless work (like calculating the distances among the vectors within A and within B).
What is the best way to calculate this?
You are looking for pdist2.
% Compute the ordinary Euclidean distance
D = pdist2(A.',B.','euclidean'); % euclidean distance
You should take the transpose of the matrices since pdist2 assumes the observations are in rows, not in columns.
An alternative solution to pdist2, if you don't have the Statistics Toolbox, is to compute this manually. For example, one way to do it is:
[X, Y] = meshgrid(1:size(A, 2), 1:size(B, 2)); %// or meshgrid(1:x, 1:y)
Q = sqrt(sum((A(:, X(:)) - B(:, Y(:))) .^ 2, 1));
The indices of the columns from A and B for each value in vector Q can be obtained by computing:
[X(:), Y(:)]
where each row contains a pair of indices: the first is the column index in matrix A, and the second is the column index in matrix B.
Another solution, if you don't have pdist2, which may also be faster for very large matrices, is to vectorize the following mathematical fact:
||x-y||^2 = ||x||^2 + ||y||^2 - 2*dot(x,y)
where ||a|| is the L2-norm (Euclidean norm) of a.
Comments:
C = -2*A'*B (an x-by-y matrix) is the vectorization of the dot products.
||x-y||^2 is the square of the Euclidean distance which you are looking for.
Is that enough or do you need the explicit code?
The reason this may be asymptotically faster is that you avoid computing the metric explicitly for all x*y pairs; instead you make the bottleneck a matrix multiplication, which is highly optimized in Matlab. You are taking advantage of the fact that this is the Euclidean distance and not just some arbitrary metric.
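For completeness, one possible vectorization of that identity (a sketch; columns of A and B are the points, and bsxfun forms the outer sum of the two squared-norm vectors):
nA = sum(A .^ 2, 1);                        % 1-by-x row of ||a||^2
nB = sum(B .^ 2, 1);                        % 1-by-y row of ||b||^2
D2 = bsxfun(@plus, nA', nB) - 2 * (A' * B); % x-by-y matrix of squared distances
D = sqrt(max(D2, 0));                       % clamp round-off negatives, then sqrt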