I am trying to randomly generate uniformly distributed vectors which have a Euclidean length of 1. By uniformly distributed I mean that each entry (coordinate) of the vectors is uniformly distributed.
More specifically, I would like to create a set of, say, 1000 vectors (let's call them V_i, with i = 1,…,1000), where each of these random vectors has unit Euclidean length and the same dimension V_i = (v_1i,…,v_ni)' (let's say n = 5, but the algorithm should work with any dimension). If we then look at the distribution of, e.g., v_1i, the first element of each V_i, I would like it to be uniformly distributed.
In the attached MATLAB example you can see that you cannot simply draw random vectors from a uniform distribution and then normalize them to Euclidean length 1, because the distribution of the elements across the vectors is then no longer uniform.
Is there a way to generate this set of vectors such that the distribution of the individual elements across the vector set is uniform?
Thank you for any ideas.
PS: MATLAB is our language of choice, but solutions in any language are, of course, welcome.
clear all
rng('default')
nvar = 5;                      % dimension of each vector
sample = 1000;                 % number of vectors
x = zeros(nvar, sample);
for ii = 1:sample
    y = rand(nvar, 1);         % draw entries from a uniform distribution
    x(:, ii) = y ./ norm(y);   % normalize to unit Euclidean length
end
% plot the distribution of each coordinate across the vector set
for k = 1:nvar
    figure
    hist(x(k, :))
end
What you want cannot be accomplished.
Vectors of length 1 sit on a circle (or a sphere or hypersphere, depending on the number of dimensions). Let's focus on the 2D case; if it cannot be done there, it will be clear that it cannot be done in more dimensions either.
Because the points are on a circle, their x and y coordinates are dependent: each can be computed from the other. Thus, the distributions of the x and y coordinates cannot be defined independently. We can define the distribution of one coordinate and generate random values for it, but the other coordinate must then be computed from the first.
Let's make points on a half circle with a uniform x coordinate (can be extended to a full circle by adding a random sign to the y coordinate):
N = 1000;
x = 2 * rand(N,1) - 1;   % uniform x coordinate in [-1, 1]
y = sqrt(1 - x.^2);      % y follows from the unit-circle constraint
plot(x, y, '.')
axis equal
figure
histogram(y)
The histogram shows a clearly non-uniform distribution, with many more samples near y = 1 than near y = 0. If we added a random sign to the y coordinate, we would have more samples near y = 1 and y = -1 than near y = 0.
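For completeness, a minimal sketch of the full-circle extension mentioned above; the random sign is the only addition:
N = 1000;
x = 2 * rand(N,1) - 1;            % uniform x coordinate in [-1, 1]
s = 2 * randi([0 1], N, 1) - 1;   % random sign, -1 or +1
y = s .* sqrt(1 - x.^2);          % full circle; y is still non-uniform
plot(x, y, '.')
axis equal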
I have an image of a cytoskeleton. There are a lot of small objects inside, and I want to calculate the distance between all of them along every axis and get a matrix with all this data. I am trying to do this in MATLAB.
My final aim is to figure out whether there is any axis with a constant distance between the objects.
I've tried bwdist and connected components without any luck.
Do you have any other ideas?
So, the end goal is that you want to globally stretch this image in a certain direction (linearly) so that the distances between nearest pairs end up as close together as possible, hopefully the same? Or might you do more complex stretching? (Note that with an arbitrarily complex one you can always make it work.)
If it is a global linear stretch, the distance in x' and y' is a simple multiplication of the old distance in x and y, applied to every pair of points. So the final Euclidean distance ends up being sqrt((SX*x)^2 + (SY*y)^2), where SX is the stretch in x, SY is the stretch in y, and x and y are the distances in x and y between a pair of points.
If you are interested in just the "the same" part, the solution is not so difficult:
Find all objects of interest and put their X and Y coordinates in an N*2 matrix.
Calculate the distances between all pairs of objects in X and Y. You will end up with two N*N matrices (real and symmetric, with zeros on the diagonal).
Find the minimum distance (say this is between A and B).
You probably already have this. Now:
Take C. Make N-1 transformations, each of which maps C->nearestToC onto A->B. It is a simple system of equations: X1^2*SX^2 + Y1^2*SY^2 = X2^2*SX^2 + Y2^2*SY^2 (a code sketch of this solve appears after these steps).
So, first set A->B = C->A, then A->B = C->B, then A->B = C->D, and so on. Make sure the transformation is normalized, i.e. SX^2 + SY^2 = 1. If one cannot be found, the only valid transformation is SX = SY = 0, which means there is no solution here. Obviously, SX and SY need to be real.
Note that this solution is unique except in the case where X1 = X2 and Y1 = Y2. In that case, grab some point other than C to find the transformation.
For each transformation, check the remaining points and find all of their nearest neighbours. If the distance is always the same as that between the original two (to a given tolerance), great, you have found your transformation. If not, this transformation does not work and you should continue with the next one.
If you want a transformation that minimizes the variation between distances (but doesn't require them to be nearly equal), I would use an optimization method and search for a minimum; I don't know how to find an exact solution otherwise. I would also pick this approach if your stretch is not linear or global.
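A minimal sketch of the pairwise distance matrices and the normalized (SX, SY) solve described above, assuming the object coordinates are already in an N-by-2 matrix P (the variable and function names are hypothetical):
% P is an N-by-2 matrix of object coordinates (assumed given)
DX = abs(P(:,1) - P(:,1)');          % N-by-N pairwise distances in x
DY = abs(P(:,2) - P(:,2)');          % N-by-N pairwise distances in y

% Solve X1^2*SX^2 + Y1^2*SY^2 = X2^2*SX^2 + Y2^2*SY^2 with SX^2 + SY^2 = 1.
% Substituting a = SX^2 (so SY^2 = 1 - a) makes the equation linear in a.
function [SX, SY] = solveStretch(X1, Y1, X2, Y2)
    denom = (X1^2 - X2^2) + (Y2^2 - Y1^2);
    if denom == 0                    % degenerate pair: pick another point
        SX = NaN; SY = NaN;
        return
    end
    a = (Y2^2 - Y1^2) / denom;
    if a < 0 || a > 1                % no real normalized solution exists
        SX = NaN; SY = NaN;
        return
    end
    SX = sqrt(a);                    % stretch in x
    SY = sqrt(1 - a);                % stretch in y
end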
If I understand your question correctly, the first step is to obtain all of the objects' center-of-mass points in the image as (x, y) coordinates. Then you can easily compute all of the distances between all points. I suggest taking a look at a histogram of those distances, which may provide some information as to the nature of the distance distribution (for example, whether it is uniformly random, or whether any patterns appear).
Obtaining the center-of-mass points is not an easy task; consider transforming the image into a binary one, or some sort of background subtraction with blob detection and/or an edge detector.
For building a histogram you can use the histogram function.
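A minimal sketch of this pipeline, assuming the image is already loaded into a grayscale variable I (the thresholding and noise-removal steps will almost certainly need tuning for a real cytoskeleton image):
bw = imbinarize(I);                    % simple global threshold; tune for your data
bw = bwareaopen(bw, 5);                % drop tiny noise blobs (5 px is a guess)
stats = regionprops(bw, 'Centroid');   % center of mass of each connected object
pts = vertcat(stats.Centroid);         % N-by-2 matrix of (x, y) coordinates
d = pdist(pts);                        % all pairwise Euclidean distances
histogram(d)                           % inspect the distance distribution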
How can I generate uniformly random points on the surface of an N-dimensional cube with edge E?
There is code for generating them for an N-dimensional sphere, but I can't figure out how to generate them for a cube.
The nice thing about the N-dimensional hypercube is that its faces are hypercubes of dimension (N-1). Therefore I would proceed in four steps.
Draw a random integer d in the range 1..N to select the direction of the hypercube face: d = randi(N)
To select a specific face among the two possible ones, draw a random integer s which can take either of two values, 0 or 1: s = randi(2) - 1
Draw a uniformly distributed random vector v of length N with entries in the range 0..1: v = rand(N,1)
Replace the d-th coordinate of v with s and multiply the result by the edge length E: v(d) = s; v = E*v
Plotting 1000 points on the surface of the 3-D cube with edge length 2 would be something like:
N = 3;                           % dimension
E = 2;                           % edge length
Nsamples = 1000;
d = randi(N, 1, Nsamples);       % face direction for each sample
s = randi(2, 1, Nsamples) - 1;   % which of the two opposite faces (0 or 1)
v = rand(N, Nsamples);           % uniform points in the unit hypercube
for i = 1:Nsamples
    v(d(i), i) = s(i);           % project each point onto its selected face
end
v = E * v;                       % scale to edge length E
plot3(v(1,:), v(2,:), v(3,:), '.');
This implementation is probably not the best in terms of pure efficiency, but it makes clear how the method works.
Hope this helps.
Adrien.
Suppose I have a matrix A of size 2000*1000 double. Then I apply
the MATLAB built-in function kmeans to the matrix A:
k = 8;
[idx,C] = kmeans(A, k, 'Distance', 'cosine');
I get C = 8*1000 double; idx = 2000*1 double, with values from 1 to 8;
According to the documentation, C returns the k cluster centroid locations in the k-by-p (8 by 1000) matrix. And idx returns an n-by-1 vector containing cluster indices of each observation.
My question is:
1) I do not know how to interpret C, the centroid locations. Locations should be represented as (x, y), right? How do I understand the matrix C correctly?
2) What are the final centers c1, c2, ..., ck? Are they just values or locations?
3) For each cluster, if I only want to get the vector closest to the center of that cluster, how do I calculate and get it?
Thanks!
Before I answer the three parts, I'll just explain the notation used in MATLAB's documentation for kmeans (http://www.mathworks.com/help/stats/kmeans.html).
A is your data matrix (it's represented as X in the link). There are n rows (in this case, 2000), which represent the number of observations/data points that you have. There are also p columns (in this case, 1000), which represent the number of "features" each data point has. For example, if your data consisted of 2D points, then p would equal 2.
k is the number of clusters that you want to group the data into. Based on the dimensions of C that you gave, k must be 8.
Now I will answer the three parts:
The C matrix has dimensions k x p. Each row represents a centroid. Centroid locations DO NOT have to be (x, y) at all. The dimensions of the centroid locations are equal to p. In other words, if you have 2D points, you could graph the centroids as (x, y). If you have 3D points, you could graph the centroids as (x, y, z). Since each data point in A has 1000 features, your centroids therefore have 1000 dimensions.
This is sort of difficult to explain without knowing what your data is exactly. Centroids are certainly not just values, and they may not necessarily be locations. If your data A were coordinate points, you could certainly represent the centroids as locations. However, we can view it more generally. If you had a cluster centroid i and the data points v that are grouped with that centroid, the centroid would represent the data point that is most similar to those in its cluster. Hopefully, that makes sense, and I can give a clearer explanation if necessary.
The kmeans function gives us a good way to accomplish this. It has four possible outputs, but I will focus on the fourth, which I will call D:
[idx,C,sumd,D] = kmeans(A, k, 'Distance', 'cosine');
D has dimensions n x k. For data point i, row i of the D matrix gives the distance from that point to every centroid. Therefore, for each centroid, you simply need to find the data point closest to it and return that data point. I can supply the short code for this if you need it.
Also, just a tip: you should probably use the kmeans++ method of initializing the centroids. It's faster and generally better. You can call it like this:
[idx,C,sumd,D] = kmeans(A, k, 'Distance', 'cosine', 'Start', 'plus');
Edit:
Here is the code necessary for part 3:
[~, min_idxs] = min(D, [], 1);    % index of the closest data point to each centroid
closest_vecs = A(min_idxs, :);    % one row per centroid
Each row i of closest_vecs is the vector that is closest to centroid i.
OK, before we actually get into the details, let's give a brief overview of what k-means clustering is.
k-means clustering works as follows: for some data that you have, you want to group it into k groups. You initially choose k random points in your data, labelled 1, 2, ..., k; these are what we call the centroids. You then determine how close the rest of the data points are to each of these centroids and assign each point to the group of whichever centroid it is closest to. After that, you update the centroids, where each centroid is the representative point of its group: for each of the k groups, you compute the average of all the points in that group, and these averages become the new centroids for the next iteration. In the next iteration, you again determine how close each data point is to each of the centroids and reassign the points. You keep repeating this until the centroids stop moving, or move very little.
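To make this concrete, here is a minimal, illustrative sketch of the loop just described, assuming Euclidean distance (in practice you would use the built-in kmeans; pdist2 requires the Statistics and Machine Learning Toolbox):
% Minimal k-means sketch for illustration; X is n-by-p, k is the number of clusters
function [idx, C] = simpleKmeans(X, k)
    C = X(randperm(size(X,1), k), :);       % k random data points as initial centroids
    for iter = 1:100
        D = pdist2(X, C);                   % n-by-k distances from points to centroids
        [~, idx] = min(D, [], 2);           % assign each point to its nearest centroid
        Cnew = C;
        for j = 1:k
            members = (idx == j);
            if any(members)                 % keep the old centroid if a cluster is empty
                Cnew(j,:) = mean(X(members, :), 1);  % new centroid = mean of its members
            end
        end
        if max(abs(Cnew(:) - C(:))) < 1e-8  % stop when the centroids barely move
            C = Cnew;
            break
        end
        C = Cnew;
    end
end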
To use the kmeans function in MATLAB, you need a data matrix (A in your case) arranged so that each row is a sample and each column is a feature/dimension of a sample. For example, we could have N x 2 or N x 3 arrays of Cartesian coordinates, in either 2D or 3D. For colour images, we could have N x 3 arrays where each column is a colour component of the image: red, green, or blue.
You invoke kmeans in MATLAB as follows:
[IDX, C] = kmeans(X, K);
X is the data matrix we talked about, K is the total number of clusters/groups you would like, and the outputs IDX and C are an index vector and a centroid matrix, respectively. IDX is an N x 1 array, where N is the total number of samples you put into the function. Each value in IDX tells you which centroid the corresponding sample/row in X best matched with. You can also override the distance measure used between points; by default this is the Euclidean distance, but you used the cosine distance in your invocation.
C has K rows, where each row is a centroid. Therefore, for Cartesian coordinates, this would be a K x 2 or K x 3 array. You would interpret IDX as telling you which group/centroid each point is closest to. As such, if we got a value of IDX = 1 for a point, it best matched the first centroid, i.e. the first row of C. Similarly, if we got a value of IDX = 3 for a point, it best matched the third centroid, i.e. the third row of C.
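As a quick illustration with made-up 2-D data (so that C is K x 2 and can be plotted directly):
X = [randn(50,2); randn(50,2) + 5];   % two well-separated 2-D blobs
[IDX, C] = kmeans(X, 2);              % C is 2-by-2, IDX is 100-by-1
scatter(X(:,1), X(:,2), 20, IDX, 'filled')   % colour each point by its cluster index
hold on
plot(C(:,1), C(:,2), 'kx', 'MarkerSize', 12, 'LineWidth', 2)   % mark the centroids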
Now to answer your questions:
We just talked about C and IDX so this should be clear.
The final centres are stored in C. Each row gives you a centroid / centre that is representative of a group.
It sounds like you want to find the closest data point to each cluster centre, besides the actual centroid itself. That's easy to do with knnsearch, which performs a K-nearest-neighbour search: given a set of points and a set of query points, it returns the K points in the set that are closest to each query. As such, you supply your data as the set to search and the centroids as the queries, then use K = 2 and skip the first match: if a centroid coincides with a data point, that first match has a distance of 0 (it is the centroid itself), and the second match gives you the closest other point. Note that knnsearch uses the Euclidean distance by default; since you clustered with the cosine distance, you may want to pass 'Distance', 'cosine' here as well.
You can do that by the following, assuming you already ran kmeans:
out = knnsearch(A, C, 'k', 2);    % the two nearest data points for each centroid
out = out(:,2);                   % keep the second match, skipping the first
You run knnsearch, then toss out the first match, which is assumed to be (essentially) the centroid itself with a distance of 0. The second column is what you're after: it gives you the closest point to each cluster excluding the centroid. out tells you which rows of your data matrix A were closest to each centroid. To get the actual points, do this:
pts = A(out,:);
Hope this helps!
In MATLAB, how do I fill the Cartesian plane with randomly distributed points?
That is, for each coordinate x(i,j) in the graph, a point is either placed or not placed based on some random criterion (for example, a point is placed there iff a random number is > 0).
Seems like this should be easy to implement, but I'm stumped.
Just use rand as usual:
A = rand(N,M)
will create an N-by-M matrix of random numbers between 0 and 1 (rand(N) will create an N-by-N matrix). You can then use A > 0.9 to select only those points at which A > 0.9.
For example:
A=rand(50)>0.9;
imshow(A);