Error in centroid calculation of kmeans in MATLAB

I got a strange output from kMeans implemented in MATLAB.
All the entries in my input matrix F of dimension d x n are between 0 and 1. I run the kmeans algorithm using the following MATLAB command, which creates 50 clusters:
[IDX, B] = kmeans(F,50,'MaxIter',1000,'EmptyAction','singleton')
Here IDX contains the returned labels and B the centroids of the clusters created. Since all the data points are in [0,1]^d, where d is the dimension of a point, I expect the calculated centroids to lie in [0,1]^d as well.
However, the centroids I get from kmeans, after several different initializations, contain negative values.
Can anyone let me know the reason for this?

I can't really answer your question without the actual data matrix F. However, I note that if size(F) == [d, n], then the call
[IDX, B] = kmeans(F,50,'MaxIter',1000,'EmptyAction','singleton')
will treat F as a set of d points, each with n variables, because kmeans treats each row as an observation and each column as a variable. So all d points belong to [0,1]^n rather than [0,1]^d. If your points are stored as columns, transpose F first (see the sketch after the list below).
Also:
Do you really need the optional arguments? What happens if you remove them?
What happens if you reduce the number of data points in the input matrix F?
What happens if you reduce the number of clusters to, say, 10 instead of 50?
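If each column of F is meant to be one d-dimensional point, here is a minimal sketch of the transposed call (an assumption on my part, since kmeans expects one observation per row):
F = rand(5, 200); %// hypothetical example: 200 points in [0,1]^5, one per column
[IDX, B] = kmeans(F.', 50, 'MaxIter', 1000, 'EmptyAction', 'singleton'); %// rows are now points
%// B is 50 x 5; with the default squared Euclidean distance, each centroid is a mean of
%// points in [0,1]^5, so its entries should lie in [0,1] as well.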

Related

How to understand the MATLAB built-in function "kmeans"?

Suppose I have a matrix A whose size is 2000*1000 double. Then I apply the MATLAB built-in function kmeans to the matrix A.
k = 8;
[idx,C] = kmeans(A, k, 'Distance', 'cosine');
I get C = 8*1000 double; idx = 2000*1 double, with values from 1 to 8;
According to the documentation, C returns the k cluster centroid locations in the k-by-p (8 by 1000) matrix. And idx returns an n-by-1 vector containing cluster indices of each observation.
My question is:
1) I do not know how to interpret C, the centroid locations. Locations should be represented as (x, y), right? How do I understand the matrix C correctly?
2) What are the final centers c1, c2, ..., ck? Are they just values or locations?
3) For each cluster, if I only want to get the vector closest to the center of this cluster, how do I calculate and get it?
Thanks!
Before I answer the three parts, I'll just explain the syntax that is used in MATLAB's explanation of k-means (http://www.mathworks.com/help/stats/kmeans.html).
A is your data matrix (it's represented as X in the link). There are n rows (in this case, 2000), which represent the number of observations / data points that you have. There are also p columns (in this case, 1000), which represent the number of "features" that each data point has. For example, if your data consisted of 2D points, then p would equal 2.
k is the number of clusters that you want to group the data into. Based on the dimensions of C that you gave, k must be 8.
Now I will answer the three parts:
The C matrix has dimensions k x p. Each row represents a centroid. Centroid locations DO NOT have to be (x, y) at all. The dimensions of the centroid locations are equal to p. In other words, if you have 2D points, you could graph the centroids as (x, y). If you have 3D points, you could graph the centroids as (x, y, z). Since each data point in A has 1000 features, your centroids therefore have 1000 dimensions.
This is sort of difficult to explain without knowing what your data is exactly. Centroids are certainly not just values, and they may not necessarily be locations. If your data A were coordinate points, you could certainly represent the centroids as locations. However, we can view it more generally. If you had a cluster centroid i and the data points v that are grouped with that centroid, the centroid would represent the data point that is most similar to those in its cluster. Hopefully, that makes sense, and I can give a clearer explanation if necessary.
The k-means method actually gives us a good way to accomplish this. The function actually has 4 possible outputs, but I will focus on the 4th, which I will call D:
[idx,C,sumd,D] = kmeans(A, k, 'Distance', 'cosine');
D has dimensions n x k. For a data point i, row i of the D matrix gives the distance from that point to every centroid. Therefore, for each centroid, you simply need to find the data point closest to it and return it. I can supply the short code for this if you need it.
Also, just a tip: you should probably use the kmeans++ method of initializing the centroids. It's faster and generally better (and newer MATLAB releases already use it as the default). You can call it using this:
[idx,C,sumd,D] = kmeans(A, k, 'Distance', 'cosine', 'Start', 'plus');
Edit:
Here is the code necessary for part 3:
[~, min_idxs] = min(D, [], 1); %// for each centroid (column of D), index of the closest data point
closest_vecs = A(min_idxs, :); %// row j is the data point closest to centroid j
Each row i of closest_vecs is the vector that is closest to centroid i.
OK, before we actually get into the details, let's start with a brief overview of what k-means clustering is.

k-means clustering works as follows: for some data that you have, you want to group it into k groups. You initially choose k random points in your data, and these get labels 1, 2, ..., k. These are what we call the centroids. Then you determine how close every remaining data point is to each of these centroids, and you assign each point to the group of whichever centroid it is closest to. After that, you update the centroids, which are the representative points of each group: for each group, you compute the average of all the points assigned to it, and these averages become the new centroids for the next iteration. In the next iteration, you again determine how close each point is to each centroid and reassign the points. You keep iterating and repeating this behaviour until the centroids don't move anymore, or move very little.
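As a rough illustration, here is one iteration of what was just described (a sketch only, not MATLAB's actual implementation; A, centroids and k are assumed to already exist, and pdist2 comes from the Statistics Toolbox):
D = pdist2(A, centroids); %// n x k matrix of distances from every point to every centroid
[~, labels] = min(D, [], 2); %// assign each point to its nearest centroid
for j = 1 : k
centroids(j,:) = mean(A(labels == j, :), 1); %// new centroid = mean of its assigned points (empty clusters ignored here)
end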
How you use the kmeans function in MATLAB is that assuming you have a data matrix (A in your case), it is arranged such that each row is a sample and each column is a feature / dimension of a sample. For example, we could have N x 2 or N x 3 arrays of Cartesian coordinates, either in 2D or 3D. In colour images, we could have N x 3 arrays where each column is a colour component in an image - red, green or blue.
How you invoke kmeans in MATLAB is the following way:
[IDX, C] = kmeans(X, K);
X is the data matrix we talked about, K is the total number of clusters / groups you would like to see, and the outputs IDX and C are respectively an index array and a centroid matrix. IDX is an N x 1 array, where N is the total number of samples that you put into the function. Each value in IDX tells you which centroid the corresponding sample / row in X best matched with. You can also override the distance measure used to measure the distance between points. By default, this is the Euclidean distance, but you used the cosine distance in your invocation.
C has K rows, where each row is a centroid. Therefore, for the case of Cartesian coordinates, this would be a K x 2 or K x 3 array. You would interpret IDX as telling you which group / centroid each point is closest to when computing k-means. As such, if we got a value of IDX=1 for a point, this means that the point best matched with the first centroid, which is the first row of C. Similarly, if we got a value of IDX=3 for a point, this means that the point best matched with the third centroid, which is the third row of C.
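As a small illustration, you can look up the centroid each sample was assigned to in one line (a sketch using the outputs above):
assignedCentroids = C(IDX, :); %// row i is the centroid that sample i of X was matched with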
Now to answer your questions:
We just talked about C and IDX so this should be clear.
The final centres are stored in C. Each row gives you a centroid / centre that is representative of a group.
It sounds like you want to find, for each cluster, the data point closest to that cluster's centroid. That's easy to do with knnsearch, which performs a K-nearest-neighbour search: you give it a set of points to search within and a set of query points, and it returns the K points in the search set that are closest to each query point. Here, you supply your data A as the search set and the centroids C as the queries. One caveat: a k-means centroid is the mean of its cluster and is generally not itself a data point, so the first nearest neighbour is already the closest actual data point, and K=1 is enough.
You can do that by the following, assuming you already ran kmeans:
out = knnsearch(A, C); %// for each centroid, the row index in A of the nearest data point
Each entry of out tells you which point in your data matrix A was closest to the corresponding centroid. To get the actual points, do this:
pts = A(out,:);
Hope this helps!

How do I classify data points without label information?

Based on the previous question from here, I have another question: how can I assess the classification accuracy?
I'll address your last point first to get it out of the way. If you don't know what the classification labels were to begin with, then there's no way to assess classification accuracy. How do you know whether the correct label was assigned to the point in C or D if you don't know what label it was to begin with? In that case, we're going to have to leave that alone.
However, what you could do is calculate the percentage of points in the matrices C and D that get classified as A or B, to get a sense of the distribution of samples in both. Specifically, if in matrix C the majority of samples get classified as belonging to the group defined by matrix A, then that is probably a good indication that C is very much like A in distribution (a small sketch of this check follows the labelling code below).
In any case, one thing I can suggest for classifying which points in C or D belong to either A or B is to use the k-nearest-neighbours algorithm. Concretely, you have a set of source data points, namely those in matrices A and B, where A and B each carry their own labels. In your case, samples in A are assigned a label of 1 and samples in B are assigned a label of -1. To determine which group an unknown point belongs to, you simply find the distance between this point in feature space and all the points in A and B. Whichever point in A or B is closest to the unknown point, the group that point belonged to is the group you assign to the unknown point.
As such, simply concatenate C and D into a single N x 1000 matrix of query points, concatenate A and B into a matrix of source points, and find the closest source point for each query point. Then read off that source point's label, and that gives you the likely label of the unknown point.
In MATLAB, use the knnsearch function that's part of the Statistics Toolbox. However, I encourage you to take a look at my previous post on explaining the k-nearest neighbour algorithm here: Finding K-nearest neighbors and its implementation
In any case, here's how you'd apply what I said above with your problem statement, assuming A, B, C and D are already defined:
labels = [ones(size(A,1),1); -ones(size(B,1),1)]; %// Create labels for A and B
%// Create source and query points
sourcePoints = [A; B];
queryPoints = [C; D];
%// Perform knnsearch
IDX = knnsearch(sourcePoints, queryPoints);
%// Extract out the groups per point
groups = labels(IDX);
groups will contain the labels associated with each of the points provided by queryPoints. knnsearch returns the row location of the source point in sourcePoints that best matched with a query point. As such, each value of the output tells you which point in the source point matrix best matched with that particular query point. Ultimately, this returns the location we need in the labels array to figure out what the actual labels are.
Therefore, if you want to see what labels were assigned to the points in C, you can do:
labelsC = groups(1:size(C,1));
labelsD = groups(size(C,1)+1:end);
Therefore, labelsC and labelsD contain the labels assigned to each of the unknown points in the two matrices. Any value of 1 means that the particular point resembles those from matrix A; any value of -1 means it resembles those from matrix B.
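For the distribution check suggested at the start of this answer, here is a quick sketch using these labels (names are from the code above):
pctC_A = 100 * mean(labelsC == 1); %// percentage of C classified as resembling A
pctC_B = 100 * mean(labelsC == -1); %// percentage of C classified as resembling B
pctD_A = 100 * mean(labelsD == 1);
pctD_B = 100 * mean(labelsD == -1);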
If you want to plot all of this together, just combine what you did in the previous question with your new data from this question:
%// Code as before
[coeffA, scoreA] = pca(A);
[coeffB, scoreB] = pca(B);
numDimensions = 2;
scoreAred = scoreA(:,1:numDimensions);
scoreBred = scoreB(:,1:numDimensions);
%// New - Perform dimensionality reduction on C and D
[coeffC, scoreC] = pca(C);
[coeffD, scoreD] = pca(D);
scoreCred = scoreC(:,1:numDimensions);
scoreDred = scoreD(:,1:numDimensions);
%// Plot the data
plot(scoreAred(:,1), scoreAred(:,2), 'rx', scoreBred(:,1), scoreBred(:,2), 'bo');
hold on;
plot(scoreCred(labelsC == 1,1), scoreCred(labelsC == 1,2), 'gx', ...
scoreCred(labelsC == -1,1), scoreCred(labelsC == -1,2), 'mo');
plot(scoreDred(labelsD == 1,1), scoreDred(labelsD == 1,2), 'kx', ...
scoreDred(labelsD == -1,1), scoreDred(labelsD == -1,2), 'co');
The above is the case for two dimensions. We plot both A and B with their dimensions reduced to 2. Similarly, we apply PCA to C and D, then plot everything together. The first line plots A and B normally. Next, we have to use hold on; so we can invoke plot multiple times and append results to the same figure. We have to call plot four times to account for four different combinations:
Matrix C having labels from A
Matrix C having labels from B
Matrix D having labels from A
Matrix D having labels from B
In each case I have used a different colour but the same marker to denote which class each point belongs to: x for group A and o for group B.
I'll leave it to you to extend this to three dimensions.

k-means clustering using function 'kmeans' in MATLAB

I have this matrix:
x = [2+2*i 2-2*i -2+2*i -2-2*i];
I want to simulate transmitting it and adding noise to it. I represented the components of the complex numbers as below:
A = randn(150, 2) + 2*ones(150, 2);
C = randn(150, 2) - 2*ones(150, 2);
At the receiver, I received the vector below, where the components are ordered based on what I sent originally (i.e., the components of x):
X = [A A A C C A C C];
Now I want to apply kmeans to X to get four clusters, i.e., kmeans(X, 4). I am experiencing the following problems:
I am not sure if I can represent the complex numbers as shown in X above.
I can't plot the result of the kmeans to show the clusters.
I could not understand the clusters centroid results.
How can I find the best error rate, if this example were to represent a communication system where k-means clustering is used at the receiver to decide what the transmitted signal was?
If you don't "understand" the cluster centroid results, then you don't understand how k-means works. I'll present a small summary here.
How k-means works is that for some data that you have, you want to group it into k groups. You initially choose k random points in your data, and these get labels 1, 2, ..., k. These are what we call the centroids. Then you determine how close the rest of the data points are to each of these centroids, and you assign each point to the group of whichever centroid it is closest to. After that, you update the centroids, which are the representative points of each group: for each group, you compute the average of all the points assigned to it, and these averages become the new centroids for the next iteration. In the next iteration, you again determine how close each point is to each centroid and reassign the points. You keep iterating like this until the centroids don't move anymore, or move very little.
Now, let's answer your questions one-by-one.
1. Complex number representation
k-means in MATLAB doesn't define how complex data is handled. A common way for people to deal with complex numbered data is to split up the real and imaginary parts into separate dimensions as you have done. This is a perfectly valid way to use k-means for complex valued data.
See this post on the MathWorks MATLAB forum for more details: https://www.mathworks.com/matlabcentral/newsreader/view_thread/78306
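For instance, a received complex column vector z (a hypothetical name) could be converted like so:
Xz = [real(z), imag(z)]; %// each complex sample becomes one (real, imag) row for kmeans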
2. Plot the results
You aren't constructing your matrix X properly. Note that A and C are both 150 x 2 matrices. You need to structure X such that each row is a point, and each column is a variable. Therefore, you need to concatenate your A and C row-wise. Therefore:
X = [A; A; A; C; C; A; C; C];
Note that you have duplicate points. This is actually no different than doing X = [A; C]; as far as kmeans is concerned. Perhaps you should generate X, then add the noise in rather than taking A and C, adding noise, then constructing your signal.
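For example, here is a hedged sketch of generating the clean signal first and then adding the noise (symbols and seq are my own illustrative names; the order follows your A A A C C A C C pattern):
symbols = [2 2; -2 -2]; %// the two points used above: A is centred at (2,2), C at (-2,-2)
seq = [1 1 1 2 2 1 2 2]; %// transmitted order corresponding to A A A C C A C C
X = [];
for s = seq
X = [X; repmat(symbols(s,:), 150, 1) + randn(150, 2)]; %// 150 noisy samples per symbol block
end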
Now, if you want to plot the results as well as the centroids, what you need to do is use the two output version of kmeans like so:
[idx, centroids] = kmeans(X, 4);
idx will contain the cluster number that each point in X belongs to, and centroids will be a 4 x 2 matrix where each row tells you the mean of each cluster found in the data. If you want to plot the data, as well as the clusters, you simply need to do following. I'm going to loop over each cluster membership and plot the results on a figure. I'm also going to colour in where the mean of each cluster is located:
x = X(:,1);
y = X(:,2);
figure;
hold on;
colors = 'rgbk';
for num = 1 : 4
plot(x(idx == num), y(idx == num), [colors(num) '.']);
end
plot(centroids(:,1), centroids(:,2), 'c.', 'MarkerSize', 14);
grid;
The above code goes through each cluster, plots the points in a different colour, then plots the centroids in cyan with a slightly larger marker size so you can see where they sit on the graph.
This is what I get (figure omitted):
3. Understanding centroid results
This is probably because you didn't construct X properly. This is what I get for my centroids:
centroids =
-1.9176 -2.0759
1.5980 2.8071
2.7486 1.6147
0.8202 0.8025
This is pretty self-explanatory and I talked about how this is structured earlier.
4. Best representation of the signal
What you can do is repeat the clustering a number of times; kmeans will then return the best clustering out of those runs (the one with the lowest total sum of point-to-centroid distances). You would simply use the Replicates flag and state how many times you want this run. Obviously, the more times you run this, the better your results may be. Therefore, do something like:
[idx, centroids] = kmeans(X, 4, 'Replicates', 5);
This will run kmeans 5 times and give you the best centroids of these 5 times.
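If you want to inspect how good the chosen run was, you can also request the third output of kmeans (a small sketch):
[idx, centroids, sumd] = kmeans(X, 4, 'Replicates', 5); %// sumd: within-cluster sums of distances
totalCost = sum(sumd); %// the replicate kmeans returns is the one that minimizes this total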
Now, if you want to determine the best sequence that was transmitted, you'd have to split your X into blocks of 150 rows each (as each transmitted symbol contributes 150 noisy points), then run a separate kmeans on each block. You can try to find the best representation of each part of the sequence by using the Replicates flag each time, so you can do something like:
for num = 1 : 8
%// Look at 150 points at a time
[idx, centroids] = kmeans(X((num-1)*150 + 1 : num*150, :), 4, 'Replicates', 5);
%// Do your analysis
%//...
%//...
end
idx and centroids would be the results for each portion of your transmitted signal. You probably want to look at centroids at each iteration to determine what symbol was transmitted at a particular time.
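One hedged way to map each recovered centroid back to a candidate symbol (a sketch; the symbols matrix is my own assumed list of the four constellation points, and pdist2 is from the Statistics Toolbox):
symbols = [2 2; 2 -2; -2 2; -2 -2]; %// candidate constellation points from x, as (real, imag) rows
[~, sym] = min(pdist2(centroids, symbols), [], 2); %// sym(j): index of the symbol closest to centroid j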
If you want to plot the decision regions, then you're probably looking for a Voronoi diagram. The idea is: given a set of points defined over the domain of your problem, you determine which cluster each point belongs to and colour it accordingly. Given that our data spans -5 <= (x,y) <= 5, let's go through each point on a grid over that square, determine which cluster it belongs to, and colour it according to that cluster.
Something like:
colors = 'rgbk';
[X,Y] = meshgrid(-5:0.05:5, -5:0.05:5); %// note: this overwrites the data matrix X from earlier
X = X(:);
Y = Y(:);
figure;
hold on;
for idx = 1 : numel(X)
[~,ind] = min(sum(bsxfun(@minus, [X(idx) Y(idx)], centroids).^2, 2)); %// squared distance to each centroid
plot(X(idx), Y(idx), [colors(ind), '.']);
end
plot(centroids(:,1), centroids(:,2), 'c.', 'MarkerSize', 14);
The above code will plot the decision regions / Voronoi diagram of the particular configuration, as well as where the cluster centres are located. Note that the code is rather unoptimized and it'll take a while for the graph to generate, but I wanted to write something quick to illustrate my point.
Here's what the decision regions look like (figure omitted):
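If the loop is too slow, here is a vectorized variant along the same lines (a sketch; Xg and Yg are my own names to avoid overwriting X, and pdist2 is from the Statistics Toolbox):
colors = 'rgbk';
[Xg, Yg] = meshgrid(-5:0.05:5, -5:0.05:5);
gridPts = [Xg(:) Yg(:)]; %// one (x, y) row per grid point
[~, ind] = min(pdist2(gridPts, centroids), [], 2); %// nearest centroid for every grid point at once
figure; hold on;
for c = 1 : 4
plot(gridPts(ind == c, 1), gridPts(ind == c, 2), [colors(c) '.']);
end
plot(centroids(:,1), centroids(:,2), 'c.', 'MarkerSize', 14);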
Hope this helps! Good luck!

How to remove the outliers located outside the predication bound in Matlab?

Hey guys, I have a question related to processing of time series. I have xy data and want to remove the outliers, which I define as the points located outside the prediction bounds. I applied the regress function, [B, Bint, R, Rint, stats] = regress(y, x);, but I am confused about how to remove those points. Any help?
Straight from the docs:
[b,bint,r,rint] = regress(y,X) returns an n-by-2 matrix rint of intervals that can be used to diagnose outliers. If the interval rint(i,:) for observation i does not contain zero, the corresponding residual is larger than expected in 95% of new observations, suggesting an outlier.
Therefore, to find the location of outliers in your data, it should be just:
n = rint(:,1)>0 | rint(:,2)<0; %// true where the residual interval does not contain zero (i.e., an outlier)
Then you can either remove them, plot them in a different colour, or whatever.
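For instance, a minimal sketch of the removal step (assuming the same x and y that were passed to regress):
xClean = x(~n, :); %// keep only observations whose residual interval contains zero
yClean = y(~n);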

Converting 'labels' of matrices to matrices in Matlab

I'm trying to write a short Matlab code to perform a certain mathematical function. The code generates a vector H which has entries of either 1, 2 or 3 (and size dependent on other factors). (In my mind), the numbers 1, 2 and 3 correspond to three particular matrices. Once my program has calculated H, I would like it to be able to multiply together all the matrices represented by its entries. To clarify, if H = [1 2 3 2], I'd like my code to calculate A*B*C*B. What is the simplest way of doing this? I thought about creating a vector with entries that are matrices, and using a function that gives the product of the entries of the vector, but I couldn't get that to work (and don't know if it can work - I'm very new to Matlab).
Ideally I'd rather not rewrite the rest of my code - if there's a way to get this to work with what I've done so far then that'd be great. I'm looking for functionality as opposed to slick coding - it doesn't matter if it's clumsy, as long as it works.
@zuloo's answer might be problematic if the matrices are not all the same size, especially if the numbers of rows differ. It should work if you put the matrices in cells:
matrices = {A,B,C,D}; %// cell array: entry k holds the matrix corresponding to label k
result = matrices{H(1)};
for i=2:numel(H)
result = result * matrices{H(i)};
end
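For example, with H = [1 2 3 2] the loop leaves result equal to A*B*C*B (assuming A, B, C and D are square matrices of matching size).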
Put all your matrices into another array; then you can use the values of H as keys to choose the right matrix. You would then go through H one by one and multiply each selected matrix with the result of the previous operation. You start with an identity matrix of the same dimensions as the other matrices and, in each round of the loop, multiply it by the matrix in matrices corresponding to the current value of H:
matrices = cat(3, A, B, C, D); %// stack the matrices along the third dimension (assumes equal sizes)
d = size(A, 1); %// d is the dimension of your (square) matrices
erg = eye(d); %// start from the identity matrix
for i = length(H):-1:1
erg = matrices(:,:,H(i)) * erg; %// multiply in from the left, walking H from right to left
end
I don't know if it makes sense here to multiply from the left at each step, but I thought this lets you split the operation up into steps, like it's done in OpenGL.