Selecting data based on the distance from a query point in MATLAB

I have a data-set that has four columns [X Y Z C]. I would like to find all the C values that are in a given sphere centered at [X, Y, Z] with a radius r. What is the best approach to address this problem? Should I use the clusterdata command?

Here is one solution that naively uses the Euclidean distance:
Say V = [X Y Z C] is your dataset and Center = [x,y,z] is the center of the sphere. Then:
dist = bsxfun(@minus, V(:,1:3), Center); % // difference vectors between the points and the center
dist = sum(dist.^2, 2); % // squared Euclidean distances (scalars)
idx = (dist < r^2); % // logical index of the points inside the sphere
The good C values are
good = V(idx,4); % // here I kept just the C column

This is not "cluster analysis": You do not attempt to discover structure in your data.
Instead, what you are doing, is commonly called a "range query" or "radius query". In classic database terms, a SELECT, with a distance selector.
You probably want to define your sphere using euclidean distance. For computational purposes, it actually is beneficial to instead of squared Euclidean, by simply taking the square of your radius.
I don't use MATLAB, but there must be tons of examples of how to compute the distance of each instance in a dataset from a query point, and then select those objects where the distance is small enough.
I don't know whether there is a good index-structure package for MATLAB, but in general, in 3D, such queries can be accelerated well with spatial index structures: computing all distances is O(n), whereas a query against an index structure is only O(log n).
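As a hedged sketch, MATLAB's Statistics Toolbox does offer such an index through KDTreeSearcher and rangesearch (assuming the toolbox is available, and reusing V, Center and r from the answer above):
kdtree = KDTreeSearcher(V(:,1:3));      % build the k-d tree index once
idx = rangesearch(kdtree, Center, r);   % cell array of indices of all points within radius r
good = V(idx{1}, 4);                    % the corresponding C values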

Related

Finding length between a lot of elements

I have an image of a cytoskeleton. There are a lot of small objects inside, and I want to calculate the distance between all of them along every axis and get a matrix with all this data. I am trying to do this in MATLAB.
My final aim is to figure out whether there is any axis with a constant distance between the objects.
I've tried bwdist and connected components, without any luck.
Do you have any other ideas?
So, the end goal is that you want to globally (linearly) stretch this image in a certain direction so that the distances between nearest pairs end up as close together as possible, hopefully the same? Or may you do more complex stretching? (Note that with an arbitrarily complex one you can always make it work :) )
If it is a linear, global one, the distance in x' and y' is going to be a simple multiplication of the old distance in x and y, applied to every pair of points. So the final Euclidean distance will end up being sqrt((SX*x)^2 + (SY*y)^2), with SX being the stretch in x and SY the stretch in y; here x and y are the coordinate differences between a pair of points.
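A tiny worked example of that formula (the stretch factors and separations are made-up values):
SX = 1.2; SY = 0.8;                % assumed stretch factors
dx = 3; dy = 4;                    % x/y separation of one pair of points
d_stretched = hypot(SX*dx, SY*dy); % sqrt((SX*dx)^2 + (SY*dy)^2)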
If you are interested in just "the same" part, solution is not so difficult:
Find all objects of interest and put their X and Y coordinates in an N*2 matrix.
Calculate the distances between all pairs of objects in X and Y. You will end up with two matrices sized N*N (real, symmetric, with 0 on the diagonal, i.e. distance matrices).
Find the minimum distance (say this is between A and B); a minimal sketch of these three steps is given below.
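Sketch of steps 1-3, assuming pts is an N-by-2 matrix of [x y] object coordinates (the variable names are made up):
DX = abs(bsxfun(@minus, pts(:,1), pts(:,1)')); % N-by-N pairwise distances in x
DY = abs(bsxfun(@minus, pts(:,2), pts(:,2)')); % N-by-N pairwise distances in y
D = hypot(DX, DY);                             % Euclidean pairwise distances
D(1:size(D,1)+1:end) = Inf;                    % mask the zero diagonal
[~, k] = min(D(:));                            % smallest inter-object distance
[iA, iB] = ind2sub(size(D), k);                % indices of the closest pair A, B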
You probably already have this. Now:
Take C. Make N-1 transformations, which all end up in C->nearestToC = A->B. It is a simple system of equations: you have X1^2*SX^2 + Y1^2*SY^2 = X2^2*SX^2 + Y2^2*SY^2.
So, first say A->B = C->A, then A->B = C->B, then A->B = C->D, etc. Make sure the transformation is normalized, i.e. SX^2 + SY^2 = 1. If none can be found, the only valid transformation is SX = SY = 0, which means you have no solution here. Obviously, SX and SY need to be real.
Note that this solution is unique except in the case where X1 = X2 and Y1 = Y2. In that case, grab some point other than C to find the transformation.
For each candidate transformation, check the remaining points and find all of their nearest neighbours. If the distance is always the same as that of the original pair (to a given tolerance), great: you have found your transformation. If not, this transformation does not work and you should continue with the next one.
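As a hedged aside (this closed-form solution is my own derivation, not part of the original answer), substituting u = SX^2 and v = SY^2 turns the system into two linear equations, so each candidate transformation can be solved directly, assuming X1, Y1, X2, Y2 are given:
% X1^2*u + Y1^2*v = X2^2*u + Y2^2*v  together with  u + v = 1
den = (X1^2 - X2^2) + (Y2^2 - Y1^2);
u = (Y2^2 - Y1^2) / den;             % u = SX^2
v = 1 - u;                           % v = SY^2
if den ~= 0 && u >= 0 && u <= 1
    SX = sqrt(u); SY = sqrt(v);      % real, normalized stretch factors
else
    SX = 0; SY = 0;                  % no real normalized solution for this pair
end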
If you want a transformation that minimizes variations between distances (but doesn't require them to be nearly equal), I would do some optimization method and search for a minimum - I don't know how to find an exact solution otherwise. I would pick this also in case you don't have linear or global stretch.
If I understand your question correctly, the first step is to obtain all of the objects' center-of-mass points in the image as (x,y) coordinates. Then, you can easily compute all of the distances between all points. I suggest taking a look at a histogram of those distances, which may provide some information about the nature of the distance distribution (for example, whether it is uniformly random, or whether any patterns appear).
Obtaining the center-of-mass points is not an easy task; consider transforming the image into a binary one, or some sort of background subtraction with blob detection and/or an edge detector.
For building a histogram you can use the histogram function.
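A hedged sketch of that pipeline, assuming img is a grayscale image and the Image Processing and Statistics Toolboxes are available (imbinarize is one plausible thresholding choice, not necessarily the best one for this data):
bw = imbinarize(img);                 % threshold to a binary image
stats = regionprops(bw, 'Centroid');  % centroid of each connected blob
pts = vertcat(stats.Centroid);        % N-by-2 matrix of [x y] coordinates
D = pdist(pts);                       % all pairwise Euclidean distances
histogram(D)                          % inspect the distance distribution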

How to order one-dimensional matrices based on values

I want to determine a point in space by geometry, and I have math computations that give me several theta values. After evaluating the theta values, I get N matrices of dimension 1 x 3, where N is the number of theta values evaluated. Since I have my target point, I only need to decide which of the matrices is closest to the target, with adequate attention to all three coordinates (x,y,z).
Take a view of the analysis in the figure below:
Fig 1: Determining Closest Point with all points having minimal error
It can easily be seen that the third matrix is closest using sum(abs(Matrix[x,y,z])). However, if the same method is applied to the second figure below, the result is obviously wrong.
Fig 2: One Point has closest values with 2-axes of the reference point
Looking at point B, it is closer to the reference point on the y- and z-axes, but it strays greatly on the x-axis.
So how can I evaluate the matrices and select the closest one to point of reference and adequate emphasis will be on error differences in all coordinates (x,y,z)?
If your results are in terms of (x,y,z), why not evaluate the Euclidean distance of each matrix you have obtained from the reference point?
A rough sketch in MATLAB code:
Ref_point = [48.98, 20.56, -1.44];
Curr_point = [x, y, z];          % one candidate point
Xd = (x - Ref_point(1))^2;
Yd = (y - Ref_point(2))^2;
Zd = (z - Ref_point(3))^2;
distance = sqrt(Xd + Yd + Zd);
% repeat for every candidate and keep the minimum distance
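A vectorized variant, assuming the N candidates are stacked into an N-by-3 matrix Points (the variable name is made up):
Ref_point = [48.98, 20.56, -1.44];
d = sqrt(sum(bsxfun(@minus, Points, Ref_point).^2, 2)); % distance of every candidate
[~, best] = min(d);                                     % index of the closest candidate
closest = Points(best, :);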

How to understand the MATLAB built-in function "kmeans"?

Suppose I have a matrix A, the size of which is 2000*1000 double. Then I apply the
MATLAB built-in function "kmeans" to the matrix A.
k = 8;
[idx,C] = kmeans(A, k, 'Distance', 'cosine');
I get C = 8*1000 double; idx = 2000*1 double, with values from 1 to 8;
According to the documentation, C returns the k cluster centroid locations in the k-by-p (8 by 1000) matrix. And idx returns an n-by-1 vector containing cluster indices of each observation.
My question is:
1) I do not know how to understand the C, the centroid locations. Locations should be represented as (x,y), right? How to understand the matrix C correctly?
2) What are the final centers c1, c2,...,ck? Are they just values or locations?
3) For each cluster, if I only want to get the vector closest to the center of this cluster, how to calculate and get it?
Thanks!
Before I answer the three parts, I'll just explain the syntax that is used in MATLAB's explanation of k-means (http://www.mathworks.com/help/stats/kmeans.html).
A is your data matrix (it's represented as X in the link). There are n rows (in this case, 2000), which represent the number of observations/data points that you have. There are also p columns (in this case, 1000), which represent the number of "features" that each data point has. For example, if your data consisted of 2D points, then p would equal 2.
k is the number of clusters that you want to group the data into. Based on the dimensions of C that you gave, k must be 8.
Now I will answer the three parts:
The C matrix has dimensions k x p. Each row represents a centroid. Centroid locations DO NOT have to be (x, y) at all. The dimensions of the centroid locations are equal to p. In other words, if you have 2D points, you could graph the centroids as (x, y). If you have 3D points, you could graph the centroids as (x, y, z). Since each data point in A has 1000 features, your centroids therefore have 1000 dimensions.
This is sort of difficult to explain without knowing exactly what your data is. Centroids are certainly not just values, and they may not necessarily be locations. If you have a cluster centroid i and the data points v that are grouped with that centroid, the centroid represents the point most similar to those data points in its cluster. Hopefully that makes sense, and I can give a clearer explanation if necessary.
The k-means method actually gives us a good way to accomplish this. The function actually has 4 possible outputs, but I will focus on the 4th, which I will call D:
[idx,C,sumd,D] = kmeans(A, k, 'Distance', 'cosine');
D has dimensions n x k. For data point i, row i of the D matrix gives the distance from that point to every centroid. Therefore, for each centroid, you simply need to find the data point closest to it and return that data point. I can supply the short code for this if you need it.
Also, just a tip: you should probably use the k-means++ method of initializing the centroids. It's faster and generally better. You can call it using this:
[idx,C,sumd,D] = kmeans(A, k, 'Distance', 'cosine', 'Start', 'plus');
Edit:
Here is the code necessary for part 3:
[~, min_idxs] = min(D, [], 1);
closest_vecs = A(min_idxs, :);
Each row i of closest_vecs is the vector that is closest to centroid i.
OK, before we actually get into the details, let's give a brief overview on what K-means clustering is first.
k-means clustering works as follows: for some data that you have, you want to group it into k groups. You initially choose k random points in your data, and these are labelled 1, 2, ..., k. These are what we call the centroids. You then determine how close the rest of the data is to each of these points, and assign each point to the group (1, 2, ..., k) whose centroid it is closest to.
After that, for each group you update the centroid, which is defined as the representative point of the group: you compute the average of all of the points in each of the k groups, and these averages become the new centroids for the next iteration. In the next iteration, you again determine how close each point in your data is to each of the centroids, and reassign the points. You keep iterating and repeating this behaviour until the centroids don't move any more, or move very little.
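A minimal sketch of one such iteration, assuming X is an n-by-p data matrix and C is the current k-by-p centroid matrix (this illustrates the idea only; it is not MATLAB's actual kmeans implementation, and empty-cluster handling is omitted):
Dsq = pdist2(X, C).^2;                % squared distance of every point to every centroid
[~, idx] = min(Dsq, [], 2);           % assign each point to its nearest centroid
for j = 1:size(C, 1)
    C(j,:) = mean(X(idx == j, :), 1); % recompute each centroid as its group mean
end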
To use the kmeans function in MATLAB, assuming you have a data matrix (A in your case), it must be arranged such that each row is a sample and each column is a feature / dimension of a sample. For example, we could have N x 2 or N x 3 arrays of Cartesian coordinates, in either 2D or 3D. In colour images, we could have N x 3 arrays where each column is a colour component of the image: red, green or blue.
How you invoke kmeans in MATLAB is the following way:
[IDX, C] = kmeans(X, K);
X is the data matrix we talked about, K is the total number of clusters / groups you would like to see, and the outputs IDX and C are respectively an index vector and a centroid matrix. IDX is an N x 1 array, where N is the total number of samples that you have put into the function. Each value in IDX tells you which centroid the corresponding sample / row in X best matched with. You can also override the distance measure used to compare points. By default, this is the Euclidean distance, but you used the cosine distance in your invocation.
C has K rows, where each row is a centroid. Therefore, for the case of Cartesian coordinates, this would be a K x 2 or K x 3 array. You would thus interpret IDX as telling you which group / centroid each point is closest to when computing k-means. As such, if we got a value of IDX=1 for a point, this means that the point best matched the first centroid, which is the first row of C. Similarly, if we got a value of IDX=3 for a point, this means that the point best matched the third centroid, which is the third row of C.
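For example, to pull out all of the points that were assigned to one particular group:
j = 3;                        % pick any group label between 1 and K
cluster_j = X(IDX == j, :);   % every sample whose label equals j
centroid_j = C(j, :);         % the representative centre of that group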
Now to answer your questions:
We just talked about C and IDX so this should be clear.
The final centres are stored in C. Each row gives you a centroid / centre that is representative of a group.
It sounds like you want to find the point in the data that is closest to each cluster centroid, besides the actual centroid itself. That's easy to do if you use knnsearch, which performs a K-Nearest-Neighbour search: given a set of reference points, it outputs the K points in that set closest to each query point. As such, you supply your data as the reference set and the centroids as the query points, then use K=2 and skip the first match: the first match will have a distance of 0 whenever a centroid coincides with a data point (it is then the centroid itself), and the second match gives you the closest point apart from that.
You can do that by the following, assuming you already ran kmeans:
out = knnsearch(A, C, 'K', 2);
out = out(:,2);
You run knnsearch, then toss out the first match, as it would essentially have a distance of 0. The second column is what you're after: for each centroid, it gives you the index of the closest point in your data matrix A, excluding the centroid itself. To get the actual points, do this:
pts = A(out,:);
Hope this helps!

Finding a matching row in two separate datasets in matlab

I have two correlated Nx3 datasets (one is xyz points, the other is the normal vector for those points). I have a point in my first dataset and now I want to find the matching row in the second dataset. What's the best way to do this? I was thinking of printing out the row number, but I'm not sure exactly what the code to do that is.
Given that you have a point in your one dataset that is size 1 x 3, there are two possible ways that you can do this.
Method #1 - Using knnsearch
The easiest way would be to use knnsearch from the Statistics Toolbox.
knnsearch stands for K-Nearest Neighbour search. Given an input query point, knnsearch finds the k closest points to your dataset given the input query point. In your case, k=1. Also, the distance metric is the Euclidean distance, but seeing how your points are in 3D Cartesian space, I don't see this being a problem.
Therefore, assuming your xyz points are stored in X and the query point (normal vector) is in y, just do this:
IDX = knnsearch(X, y);
The above defaults to k=1. If you'd like more than 1 point returned, you'd do this:
IDX = knnsearch(X, y, 'K', n);
n is the number of points you want returned, i.e. the n closest points to the query y. IDX contains the index of which point in X is closest to y. I would also like to point out that X is arranged such that each row is a point and each column is a variable.
Therefore, the closest point using IDX would be:
closest_point = X(IDX,:);
Method #2 - Using bsxfun
If you don't have the Statistics Toolbox, you can very easily achieve the same thing using bsxfun. Bear in mind that the code I will write is only for returning the closest point, or k=1:
dists = sqrt(sum(bsxfun(@minus, X, y).^2, 2));
[~,IDX] = min(dists);
The bsxfun call first determines the component-wise difference between y and every point in X. Once we have this, we square each component, add all of the components together, then take the square root. This essentially finds the Euclidean distance between y and all of the points in X, giving us N distances, where N is the total number of points in the dataset. We then find the minimum distance with min and determine the index of the closest matching point, which corresponds to the closest point between y and the dataset.
If you'd like to extend this to more than one point, you'd sort the distances in ascending order, then retrieve those number of points with the smallest distances. Remember, smaller Euclidean distances mean that the points are similar, which is why we sort in ascending order. Something like this:
dists = sqrt(sum(bsxfun(@minus, X, y).^2, 2));
[~,ind] = sort(dists);
IDX = ind(1:n);
Just a small step upwards from what we had before. Instead of using min, you'd use sort and get the second output of sort to determine the locations of the minimum distances. We'd then index into ind to get the n closest indices and finally index into X to get our actual points.
You would again do the same thing to retrieve the actual points that are closest:
closest_point = X(IDX,:);
Some Bonus Material
If you'd like to read more about how K-Nearest Neighbour works, I encourage you to read my post about it here:
Finding K-nearest neighbors and its implementation
Good luck!

How to find the nearest points to given coordinates with MATLAB?

I need to solve a minimization problem with MATLAB and I'm wondering which is the easiest solution. All the potential solutions I've been thinking of require a lot of programming effort.
Suppose that I have a lat/long coordinate point (A,B), what I need is to search for the nearest point to this one in a map of lat/lon coordinates.
In particular, the latitude and longitude arrays are two matrices of 2030x1354 elements (1km distance) and the idea is to find the unique indexes in those matrices that minimize the distance to the coordinates (A,B), i.e., to find the closest values to the given coordinates (A,B).
Any help would be very appreciated.
Thanks!
This is always a fun one :)
First off: Mohsen Nosratinia's answer is OK, as long as
you don't need to know the actual distance
you can guarantee with absolute certainty that you will never go near the polar regions
and will never go near the ±180° meridian
For a given latitude, -180° and +180° longitude are actually the same point, so simply looking at differences between angles is not sufficient. This will be more of a problem in the polar regions, since large longitude differences there will have less of an impact on the actual distance.
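A quick illustration of the wraparound problem (the numbers are made up):
lon1 = -179.5; lon2 = 179.5;       % only 1 degree apart on the globe
naive = abs(lon1 - lon2);          % 359 -- naive angle difference, wildly wrong
wrapped = min(naive, 360 - naive); % 1   -- the true angular separation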
Spherical coordinates are very useful and practical for purposes of navigation, mapping, and that sort of thing. For spatial computations however, like the on-surface distances you are trying to compute, spherical coordinates are actually pretty cumbersome to work with.
Although it is possible to do such calculations using the angles directly, I personally don't consider it very practical: you often have to have a strong background in spherical trigonometry, and considerable experience to know its many pitfalls -- very often there are instabilities or "special points" you need to work around (the poles, for example), quadrant ambiguities you need to consider because of trig functions you've introduced, etc.
I've learned to do all this in university, but I also learned that the spherical trig approach often introduces complexity that mathematically speaking is not strictly required, in other words, the spherical trig is not the simplest representation of the underlying problem.
For example, your distance problem is pretty trivial if you convert your latitudes and longitudes to 3D Cartesian X,Y,Z coordinates, and then find the distances through the simple formula
distance(a, b) = R · arccos( (a · b) / (|a| |b|) )
where a and b are two such Cartesian vectors on the sphere and a · b is their dot product. Note that |a| = |b| = R, with R = 6371 km the radius of Earth.
In MATLAB code:
% Some example coordinates (degrees are assumed)
lon = 360*rand(2030, 1354);
lat = 180*rand(2030, 1354) - 90;
% Your point of interest
P = [4, 54];
% Radius of Earth
RE = 6371;
% Convert the array of lat/lon coordinates to Cartesian vectors
% NOTE: sph2cart expects radians
% NOTE: use radius 1, so we don't have to normalize the vectors
[X,Y,Z] = sph2cart( lon*pi/180, lat*pi/180, 1);
% Same for your point of interest
[xP,yP,zP] = sph2cart(P(1)*pi/180, P(2)*pi/180, 1);
% The minimum distance, and the linear index where that distance was found
% NOTE: force the dot product into the interval [-1 +1]. This prevents
% slight overshoots due to numerical artifacts
dotProd = xP*X(:) + yP*Y(:) + zP*Z(:);
[minDist, index] = min( RE*acos( min(max(-1,dotProd),1) ) );
% Convert that linear index to 2D subscripts
[ii,jj] = ind2sub(size(lon), index)
If you insist on skipping the conversion to Cartesian and use lat/lon directly, you'll have to use the Haversine formula, as outlined on this website for example, which is also the method used by distance() from the mapping toolbox.
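For reference, a hedged sketch of the haversine formula itself (inputs in radians, RE as defined above; this follows the standard formula and is not necessarily the exact implementation used by distance()):
dLat = lat2 - lat1;
dLon = lon2 - lon1;
a = sin(dLat/2).^2 + cos(lat1).*cos(lat2).*sin(dLon/2).^2;
d = 2*RE*asin(min(1, sqrt(a)));   % great-circle distance in km, clamped for numerical safety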
Now, all of this is valid for the whole Earth, provided you find the smooth spherical Earth accurate enough an approximation. If you want to include the Earth's oblateness or some higher order shape model (or God forbid, distances including terrain), you need to do far more complicated stuff. But I don't think that is your goal here :)
PS: I wouldn't be surprised if, were you to write out everything I did above, you would re-discover the haversine formula. I just prefer being able to calculate something as simple as distances along a sphere from first principles, rather than from some black-box formula implanted in my head sometime long ago :)
Let Lat and Long denote latitude and longitude matrices, then
dist2 = sum(bsxfun(@minus, cat(3,A,B), cat(3,Lat,Long)).^2, 3);
[I,J]=find(dist2==min(dist2(:)));
I and J contain the row and column indices in Lat and Long that correspond to the nearest point. Note that if there are multiple nearest points, I and J will not be scalar values, but vectors.