Affine transformation matlab [duplicate]

I have two images: the first is the original image and the second is a transformed version of it.
I have to find how many degrees the transformed image was rotated, using a 3x3 transformation matrix. I also need to find how far it was translated from the origin.
Both images are grayscale and held in matrix variables. They are the same size, [350 500].
I have found a few lecture notes like this.
Lecture notes say that I should use the following matrix formula for rotation:
[x']   [ cos(theta) sin(theta) 0]   [x]
[y'] = [-sin(theta) cos(theta) 0] * [y]
[1 ]   [     0          0      1]   [1]
For the translation matrix the formula is given:
[x']   [1 0 tx]   [x]
[y'] = [0 1 ty] * [y]
[1 ]   [0 0  1]   [1]
Everything is good. But there are two problems:
I cannot figure out how to implement the formulas in MATLAB.
The formulas are set up to find the x', y' values, but I already have the x, x', y, y' values. I need to find the rotation angle (theta) and the translations tx and ty.
I want to know where x, x', y and y' sit in the matrix equation.
I have got the following code:
rotationMatrix = [ cos(theta) sin(theta) 0; ...
                  -sin(theta) cos(theta) 0; ...
                       0          0      1];
translationMatrix = [1 0 tx; ...
                     0 1 ty; ...
                     0 0  1];
But as you can see, the variables tx, ty and theta are used before they are defined. How can I calculate theta, tx and ty?
PS: It is forbidden to use Image Processing Toolbox functions.

This is essentially a homography recovery problem. Given co-ordinates in one image and the corresponding co-ordinates in the other image, you are trying to recover the combined translation and rotation matrix that was used to warp the points from one image to the other.
You can combine the rotation and translation into a single matrix by multiplying the two matrices together; multiplying simply composites the two operations. You would thus get:
H = [cos(theta) -sin(theta) tx]
    [sin(theta)  cos(theta) ty]
    [    0           0       1]
The idea is to find the parameters of H by minimizing the least-squares error over all pairs of points.
Basically, what you want to find is the following relationship:
xi_after = H*xi_before
H is the combined rotation and translation matrix required to map the co-ordinates from the one image to the other. H is also a 3 x 3 matrix, and knowing that the lower right entry (row 3, column 3) is 1, it makes things easier. Also, assuming that your points are in the augmented co-ordinate system, we essentially want to find this relationship for each pair of co-ordinates from the first image (x_i, y_i) to the other (x_i', y_i'):
[p_i*x_i']   [h11 h12 h13]   [x_i]
[p_i*y_i'] = [h21 h22 h23] * [y_i]
[  p_i   ]   [h31 h32  1 ]   [ 1 ]
The scale of p_i is to account for homography scaling and vanishing points. Let's perform a matrix-vector multiplication of this equation. We can ignore the 3rd element as it isn't useful to us (for now):
p_i*x_i' = h11*x_i + h12*y_i + h13
p_i*y_i' = h21*x_i + h22*y_i + h23
Now let's take a look at the 3rd element. We know that p_i = h31*x_i + h32*y_i + 1. As such, substituting p_i into each of the equations, and rearranging to solve for x_i' and y_i', we thus get:
x_i' = h11*x_i + h12*y_i + h13 - h31*x_i*x_i' - h32*y_i*x_i'
y_i' = h21*x_i + h22*y_i + h23 - h31*x_i*y_i' - h32*y_i*y_i'
What you have here now are two equations for each unique pair of points. What we can do now is build an over-determined system of equations. Take each pair and build two equations out of them. You will then put it into matrix form, i.e.:
Ah = b
A is a matrix of coefficients built from the equations using the co-ordinates of the first image, b stacks the corresponding co-ordinates of the second image, and h holds the parameters you are solving for. Ultimately, you are solving this linear system of equations reformulated in matrix form. For each pair of points, the two equations above become:
[x_i y_i 1  0   0  0 -x_i*x_i' -y_i*x_i']       [x_i']
[ 0   0  0 x_i y_i 1 -x_i*y_i' -y_i*y_i'] * h = [y_i']
with h = [h11 h12 h13 h21 h22 h23 h31 h32]'; stacking these two rows for every pair of points gives Ah = b.
You would solve for the vector h which can be performed through least squares. In MATLAB, you can do this via:
h = A \ b;
A sidenote for you: if the movement between the images is truly just a rotation and translation, then h31 and h32 will both be zero after we solve for the parameters. However, I always like to be thorough, so I will solve for h31 and h32 anyway.
NB: This method only works if you have at least 4 unique pairs of points. There are 8 parameters to solve for, and each point yields 2 equations, so A must have rank 8 for the system to have a unique solution (to throw some linear algebra terminology into the loop). You will not be able to solve this problem with fewer than 4 points.
If you want some MATLAB code, let's assume that your points are stored in sourcePoints and targetPoints. sourcePoints are from the first image and targetPoints are from the second image. Obviously, there should be the same number of points in both images. It is assumed that both sourcePoints and targetPoints are stored as M x 2 matrices, where the first columns contain the x co-ordinates and the second columns contain the y co-ordinates.
numPoints = size(sourcePoints, 1);
%// Cast data to double to be sure
sourcePoints = double(sourcePoints);
targetPoints = double(targetPoints);
%// Extract relevant data
xSource = sourcePoints(:,1);
ySource = sourcePoints(:,2);
xTarget = targetPoints(:,1);
yTarget = targetPoints(:,2);
%// Create helper vectors
vec0 = zeros(numPoints, 1);
vec1 = ones(numPoints, 1);
xSourcexTarget = -xSource.*xTarget;
ySourcexTarget = -ySource.*xTarget;
xSourceyTarget = -xSource.*yTarget;
ySourceyTarget = -ySource.*yTarget;
%// Build matrix
A = [xSource ySource vec1 vec0    vec0    vec0 xSourcexTarget ySourcexTarget; ...
     vec0    vec0    vec0 xSource ySource vec1 xSourceyTarget ySourceyTarget];
%// Build RHS vector
b = [xTarget; yTarget];
%// Solve homography by least squares
h = A \ b;
%// Reshape to a 3 x 3 matrix (optional)
%// Must transpose as reshape is performed
%// in column major format
h(9) = 1; %// Add in that h33 is 1 before we reshape
hmatrix = reshape(h, 3, 3)';
Once you are finished, you have a combined rotation and translation matrix. If you want the x and y translations, simply pick off column 3, rows 1 and 2 in hmatrix. Equivalently, in the vector h itself, h13 is element 3 and h23 is element 6. If you want the angle of rotation, apply the appropriate inverse trigonometric function to the entries in rows 1-2, columns 1-2 of hmatrix (elements 1, 2, 4 and 5 of the h vector). Because this was solved by least squares, the four entries will be slightly inconsistent, so the angle you get depends on which elements you choose. One way to get a good overall angle is to find the angles implied by all 4 elements and average them in some way. Either way, this is a good starting point.
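For example, here is one way to pull the parameters out (a minimal sketch; hmatrix is the 3 x 3 matrix computed above, and atan2 resolves the quadrant of the angle correctly):
tx = hmatrix(1,3);
ty = hmatrix(2,3);
%// Combine all four rotation entries into one angle estimate:
%// h11 + h22 is roughly 2*cos(theta) and h21 - h12 is roughly 2*sin(theta)
theta = atan2(hmatrix(2,1) - hmatrix(1,2), hmatrix(1,1) + hmatrix(2,2));
thetaDegrees = theta * 180 / pi;
This is one robust way of doing the averaging mentioned above, since it pools the sine and cosine estimates before taking the inverse tangent.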
References
I learned about homography a while ago through Leow Wee Kheng's Computer Vision course. What I have told you is based on his slides: http://www.comp.nus.edu.sg/~cs4243/lecture/camera.pdf. Take a look at slides 30-32 if you want to know where I pulled this material from. However, the MATLAB code I wrote myself :)

Related


Not getting what 'spatial weights' for HOG are

I am using HOG for sunflower detection. I understand most of what HOG is doing now, but there are some things I do not understand in the final stages. (I am going through the MATLAB code from MathWorks.)
Let us assume we are using the Dalal-Triggs implementation: 8x8 pixels make 1 cell, 2x2 cells make 1 block, blocks are taken at 50% overlap in both directions, and the histograms are quantized into 9 bins, unsigned (meaning, from 0 to 180 degrees). Finally, our image here is 64x128 pixels.
Let us say that we are on the first block. This block has 4 cells. I understand that we are going to weight each pixel's orientation vote by its gradient magnitude. I also understand that we are going to weight them further by a Gaussian centered on the block.
So far so good.
However in the MATLAB implementation, they have an additional step, whereby they create a 'spatial' weight:
If we dive into this function, it looks like this:
Finally, the function 'computeLowerHistBin' looks like this:
function [x1, b1] = computeLowerHistBin(x, binWidth)
    % Bin index
    width = single(binWidth);
    invWidth = 1./width;
    bin = floor(x.*invWidth - 0.5);
    % Bin center x1
    x1 = width * (bin + 0.5);
    % add 2 to get to 1-based indexing
    b1 = int32(bin + 2);
end
Now, I believe that those 'spatial' weights are being used during the tri-linear interpolation part later on... but what I do not get is just how exactly they are being computed, or the logic behind that code. I am completely lost on this issue.
Note: I understand the need for the tri-linear interpolation, and (I think) how it works. What I do not understand is why we need those 'spatial weights', and what the logic behind their computation here is.
Thanks.
The idea here is that each pixel contributes not only to its own histogram cell, but also to the neighboring cell, to some degree. These contributions are weighted differently, depending on how close the pixel is to the edge of its cell: the closer you are to an edge of your cell, the more you contribute to the corresponding neighboring cell, and the less you contribute to your own cell.
This code is pre-computing the spatial weights for the trilinear interpolation. Take a look at the equation here for trilinear interpolation:
HOG Trilinear Interpolation of Histogram Bins
There you see things like (x-x1)/bx, (y-y1)/by, (1 - (x-x1)/bx), etc. In the code, wx1 and wy1 correspond to:
wx1 = (1 - (x-x1)/bx)
wy1 = (1 - (y-y1)/by)
Here, x1 and y1 are the centers of the histogram bins in the X and Y directions. It's easier to describe these things in 1D. So in 1D, a value x will fall between 2 bin centers, x1 <= x < x2. It doesn't matter exactly which bin (1 or 2) it belongs to. The important thing is to figure out the fraction of x that belongs to x1; the rest belongs to x2. Taking the distance from x to x1 and dividing by the width of the bin gives a fractional distance, and 1 minus that is the fraction that belongs to bin 1. So if x == x1, wx1 is 1. And if x == x2, wx1 is zero, because x2 - x1 == bx (the width of a bin).
Going back to the code that creates the 4 matrices: it is just pre-computing all the multiplications of the weights needed for the interpolation of all the pixels in a HOG block. That is why each one is a matrix of weights: each element in the matrix is for one of the pixels in the HOG block.
For example, if you look at the equation for the weights for h(x1, y2, ~), you'll see these 2 weights for x and y (ignoring the z component):
(1 - (x-x1)/bx) * ((y-y1)/by)
Going back to the code, this multiplication is pre-computed for every pixel in the block using:
weights.x1y2 = (1-wy1)' * wx1;
where
(1-wy1) == (y - y1)/by
The same logic applies to the other weight matrices.
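To make this concrete, here is a minimal sketch of how all four weight matrices could be pre-computed for one block. The variable names and sizes are my own assumptions (a 16 x 16 block with 8-pixel-wide spatial bins, matching the Dalal-Triggs setup in the question), not the actual toolbox internals:
blockSize = 16;
binWidth = 8;     % one cell = one spatial bin
x = 1:blockSize;  % pixel coordinates within the block
x1 = binWidth * (floor(x./binWidth - 0.5) + 0.5); % lower bin centers, as in computeLowerHistBin
wx1 = 1 - (x - x1)./binWidth; % fraction credited to the lower x-bin
wy1 = wx1;                    % square block: same weights in y
% All four per-pixel weight matrices as outer products (rows = y, columns = x)
weights.x1y1 = wy1' * wx1;
weights.x2y1 = wy1' * (1 - wx1);
weights.x1y2 = (1 - wy1)' * wx1;
weights.x2y2 = (1 - wy1)' * (1 - wx1);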
As for the code in computeLowerHistBin, it's just finding the x1 in the trilinear interpolation equation, where x1 <= x < x2 (and similarly y1). There are probably several ways to solve this problem, given a pixel location x and the bin width bx, as long as you satisfy x1 <= x < x2.
For example, "|" indicates a bin edge and "o" a bin center:
-20              0               20              40
 |-------o-------|-------o-------|-------o-------|
        -10             10              30
if x = [2 9 11], the lower bin centers x1 are [-10 -10 10].
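You can sanity-check that example directly with the computeLowerHistBin function quoted above:
[x1, b1] = computeLowerHistBin([2 9 11], 20);
% x1 = [-10 -10 10]  (lower bin centers)
% b1 = [1 1 2]       (1-based lower bin indices)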

Calculating the essential matrix from two sets of corresponding points

I'm trying to reconstruct a 3D image from two calibrated cameras. One of the steps involved is to calculate the 3x3 essential matrix E from two sets of corresponding (homogeneous) points (more than the 8 required), p_a_orig and p_b_orig, and the two cameras' 3x3 internal calibration matrices K_a and K_b.
We start off by normalizing our points with
P_a = inv(K_a) * p_a_orig
and
P_b = inv(K_b) * p_b_orig
We also know the constraint
P_b' * E * P_a = 0
I'm following it this far, but how do you actually solve that last step, i.e. finding the nine values of the E matrix? I've read several different lecture notes on this subject, but they all leave out that crucial last step, likely because it is supposedly trivial math. But I can't remember when I last did this, and I haven't been able to find a solution yet.
This equation is actually pretty common in geometry algorithms. Essentially, you are trying to calculate the matrix X from the equation AXB = 0. To solve this, you vectorise the equation, which means:
vec(AXB) = kron(B', A) * vec(X) = 0
vec() means the vectorised form of a matrix, i.e., simply stack the columns of the matrix one over another to produce a single column vector, and kron() is the Kronecker product. If you don't know the Kronecker product, you can read up on it; it's easy, trust me :-)
Now, say I call the matrix obtained by the Kronecker product of B^T and A as C.
Then, vec(X) is the null vector of the matrix C, and the way to obtain it is by doing the SVD decomposition of C^T*C (C transpose multiplied by C) and taking the last column of the matrix V. This last column is nothing but your vec(X). Reshape X to a 3 by 3 matrix. This is your essential matrix.
In case you find this maths too daunting to code, simply use the following code by Y. Ma et al.:
% p are homogeneous coordinates of the first image, of size 3 by n
% q are homogeneous coordinates of the second image, of size 3 by n
function [E] = essentialDiscrete(p,q)
    n = size(p);
    NPOINTS = n(2);
    % set up matrix A such that A*[v1,v2,v3,s1,s2,s3,s4,s5,s6]' = 0
    A = zeros(NPOINTS, 9);
    if NPOINTS < 9
        error('Too few measurements')
    end
    for i = 1:NPOINTS
        A(i,:) = kron(p(:,i), q(:,i))';
    end
    r = rank(A);
    if r < 8
        warning('Measurement matrix rank deficient')
    end
    [U,S,V] = svd(A);
    % pick the singular vector corresponding to the smallest singular value
    e = V(:,9);
    e = (round(1.0e+10*e))*(1.0e-10);
    % essential matrix
    E = reshape(e, 3, 3);
end
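A usage sketch tying this back to the question (assuming p_a_orig and p_b_orig are 3 x n homogeneous pixel co-ordinates with at least 9 correspondences, since the function insists on that):
P_a = K_a \ p_a_orig; % normalized points; same as inv(K_a)*p_a_orig
P_b = K_b \ p_b_orig;
E = essentialDiscrete(P_a, P_b);
% each residual P_b(:,i)' * E * P_a(:,i) should be close to zero
residuals = sum(P_b .* (E * P_a), 1);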
You can do several things:
The Essential matrix can be estimated using the 8-point algorithm, which you can implement yourself.
You can use the estimateFundamentalMatrix function from the Computer Vision System Toolbox, and then get the Essential matrix from the Fundamental matrix (see the sketch after this list).
Alternatively, you can calibrate your stereo camera system using the estimateCameraParameters function in the Computer Vision System Toolbox, which will compute the Essential matrix for you.
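For the second option, the conversion from Fundamental to Essential matrix only needs the intrinsics. A sketch, where matchedPoints1 and matchedPoints2 are hypothetical M x 2 arrays of pixel co-ordinates (double-check the point-ordering convention estimateFundamentalMatrix uses against how you define K_a and K_b):
F = estimateFundamentalMatrix(matchedPoints1, matchedPoints2, ...
    'Method', 'Norm8Point');
% Since F = inv(K_b)' * E * inv(K_a), the Essential matrix follows as
E = K_b' * F * K_a;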

Creating Filter's Laplacian Matrix and Solving the Linear Equation for Image Filtering

I have an optimization problem to solve in order to filter an image.
I created a linear equation for the problem, which deals with sparse matrices.
First I will show the problem.
First, the Laplacian (adjacency) matrix of the problem:
L_g = Dx' * Ax * Dx + Dy' * Ay * Dy
The matrices Dx / Dy are the forward difference operators, hence their transposes are the backward difference operators.
The matrices Ax / Ay are diagonal matrices with weights which are a function of the gradient of the image (point-wise, namely the value depends only on the gradient at that pixel by itself).
The weights are:
a(i) = exp( -Ix(i)^2 / (2*sigma^2) )
Where Ix(i) is the horizontal gradient of the input image at the i-th pixel (when you vectorize the input image), and analogously with the vertical gradient Iy(i) for Ay.
Assuming input Image G -> g = vec(G) = G(:).
I want to find an image U -> u = vec(U) = U(:) s.t.:
(I + lambda * L_g) * u = g
My questions are:
How can I build the matrices Dx / Dy / Ax / Ay effectively (They are all sparse)?
By setting M = (I + lambda * L_g), is there an optimized way to create M directly?
What would be the best way to solve this linear problem in MATLAB? Is there a way to bypass memory limitations (namely, dealing with large images and still being able to solve the system)?
Is there an Open Source library to solve it under limited memory resources? Any library with MATLAB API?
Thank You.
Given your comments, let's answer each question in a synopsis and go from there:
I will answer that question below using sparse and other related functions
Using (1), we can definitely build M in an optimized way.
Simply put, the \ operator is the best thing to use for solving a system like this. MathWorks have spent a lot of time optimizing it, and it uses LAPACK and BLAS under the hood; you would be insane not to use it. The only time you wouldn't be able to use it is addressed in (4).
There are some MATLAB scripts that can handle solving the system iteratively, like the Successive Overrelaxation technique, but you should only use those if you run out of memory (i.e. if \ doesn't give you an answer). With the sparse representation of the matrices, this shouldn't (hopefully) happen, so let's avoid those functions for now; a small iterative fallback is sketched right below.
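That said, if you ever do need a fallback, MATLAB ships a built-in conjugate gradient solver, pcg, which never factorizes the matrix. It applies here because M = I + lambda*L_g is symmetric positive definite. A minimal sketch (identity, Lg and g refer to the variables constructed in the code below):
M = identity + lambda*Lg; %// symmetric positive definite system matrix
tol = 1e-6;               %// relative residual tolerance
maxIterations = 500;
u = pcg(M, g, tol, maxIterations);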
Going back to your question, we can produce a sparse representation of L_g very nicely. Given the definition of Dx and Dy, we can use the sparse version of the eye command called speye. Therefore, Dx and Dy can be calculated by Dx = diff(speye(size(inputImage))); As an example, this is what would be produced if you tried doing this on a 7 x 5 image.
>> diff(speye(7,5))
ans =
   (1,1)      -1
   (1,2)       1
   (2,2)      -1
   (2,3)       1
   (3,3)      -1
   (3,4)       1
   (4,4)      -1
   (4,5)       1
   (5,5)      -1
As you can see, we are referencing only the non-zero entries: row 1, column 1 has a coefficient of -1; row 1, column 2 has a coefficient of 1, and so on. As for your Ax and Ay, those are also very easy to build. We have diagonal matrices, and we can set each of the entries manually. All we need to do is specify a set of row indices, column indices, and the value at each point. Therefore, we can do that by:
inputImage = im2double(inputImage); %// Important
rows = 1 : numel(inputImage); %// Assuming a 2D matrix
cols = rows; %// Row and column indices are the same
%// gradX / gradY are the horizontal / vertical gradients of the image,
%// and sigma is your chosen standard deviation. Note the parentheses:
%// without them, MATLAB parses x / 2*sigma*sigma as ((x/2)*sigma)*sigma
valuesDx = exp(-gradX(rows).^2 ./ (2*sigma^2));
valuesDy = exp(-gradY(rows).^2 ./ (2*sigma^2));
The reason for the first call is that we want the pixel values in double precision: MATLAB's sparse matrices only store double-precision values, so solving the system requires it. It also ensures we don't overflow the type, as the intensities are normalized between 0 and 1; you may have to adjust your standard deviation to reflect this. Now we just need to construct our Ax and Ay matrices, and let's put them together with Dx and Dy:
numberElements = numel(inputImage);
Ax = sparse(rows, cols, valuesDx, numberElements, numberElements);
Ay = sparse(rows, cols, valuesDy, numberElements, numberElements);
identity = speye(numberElements, numberElements);
%// Pad the forward difference operator with a zero row so it stays
%// square and the products Dx.' * Ax * Dx below are dimensionally consistent
Dx = [diff(identity); sparse(1, numberElements)];
Dy = Dx.'; %// Transpose
The reason why I'm transposing Dx to get Dy is because the difference operator in the vertical direction should simply be the transpose (makes sense to me). These should all be sparse representations of each of the matrices you want. Matrix operations can also be performed on sparse matrices, including multiplication and the inverse. As such:
Lg = Dx.' * Ax * Dx + Dy.' * Ay * Dy;
You can now solve for u via:
u = (identity + lambda*Lg) \ g;
This assumes that g contains the pixels of your image stacked in column-major format. The way I sampled the pixels to build Ax and Ay naturally follows this. As such, do g = inputImage(:);, assuming you have converted to double and normalized between 0 and 1.
When you finally solve for u, you can reshape it back to an image by doing:
u = reshape(u, size(inputImage, 1), size(inputImage, 2));
u may also be sparse, so if you want the original image back, cast it using full():
u = full(u);
Hope this helps!

How to generate random cartesian coordinates given distance constraint in Matlab

I need to generate N random coordinates in a 2D plane. The distance between every pair of points is given (the number of distances is N(N - 1) / 2). For example, say I need to generate 3 points, i.e. A, B, C. I have the distance between each pair of them, i.e. distAB, distAC and distBC.
Is there any built-in function in MATLAB that can do this? Basically, I'm looking for something that is the reverse of the pdist() function.
My initial idea was to choose a point (say A is the origin). Then I can randomly place B and C on two circles with radii distAB and distAC. But then the distance between B and C might not satisfy distBC, and I'm not sure how to proceed if this happens. I also think this approach will get very complicated if N is a large number.
Elaborating on Ansari's answer, I produced the following. It assumes a valid distance matrix is provided, calculates positions in 2D based on cmdscale, applies a random rotation (a random translation could be added as well), and visualizes the results:
%Distance matrix
D = [0 2 3; ...
     2 0 4; ...
     3 4 0];
%Generate point coordinates based on distance matrix
Y = cmdscale(D);
[nPoints, dim] = size(Y);
%Add random rotation
randTheta = 2*pi*rand(1);
Rot = [cos(randTheta) -sin(randTheta); ...
       sin(randTheta)  cos(randTheta)];
Y = Y*Rot;
%Visualization: for each pair of points, draw circles of radius D(r,c)
%around both points; each circle passes through the other point
figure(1); clf;
plot(Y(:,1), Y(:,2), '.', 'markersize', 20)
hold on;
t = 0:.01:2*pi;
for r = 1 : nPoints - 1
    for c = r+1 : nPoints
        plot(Y(r,1)+D(r,c)*sin(t), Y(r,2)+D(r,c)*cos(t));
        plot(Y(c,1)+D(r,c)*sin(t), Y(c,2)+D(r,c)*cos(t));
    end
end
You want to use a technique called classical multidimensional scaling. It will work fine and losslessly if the distances you have correspond to distances between valid points in 2-D. Luckily there is a function in MATLAB that does exactly this: cmdscale. Once you run this function on your distance matrix, you can treat the first two columns in the first output argument as the points you need.
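As a quick check that this really behaves as the reverse of pdist(), you can round-trip a small example (a sketch; cmdscale, pdist and squareform all live in the Statistics Toolbox):
D = [0 2 3; 2 0 4; 3 4 0];
Y = cmdscale(D);
P = Y(:, 1:2); % first two columns = 2-D coordinates
err = max(abs(pdist(P) - squareform(D))) % ~1e-15 when D is realizable in 2-D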