Retrieve cluster centers / centroids from linkage matrix - scipy

In scipy's hierarchical clustering one can build clusters starting from the linkage matrix Z. For instance,
fcluster(Z, 6, criterion='maxclust')
would cut the dendrogram so that there will be 6 clusters in the end. Is there a way to get the coordinates of the center of each of those clusters? The position of the centers will differ depending on the metric and method used to build the dendrogram, but I would like to get the centers corresponding to the particular method that was used to build up Z.

Hierarchical clustering does not use centers.
The centers may even be outside of the cluster.
Because of that, if you want centers, you will simply have to compute the mean of each cluster yourself.
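A minimal sketch of that, assuming the linkage matrix Z was built from a data array X (the data and linkage method here are illustrative placeholders):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical data; X is whatever array the linkage matrix Z was built from
    X = np.random.rand(100, 2)
    Z = linkage(X, method='ward')

    # Cut the dendrogram into 6 flat clusters
    labels = fcluster(Z, 6, criterion='maxclust')

    # fcluster labels are 1-based; take the mean of each cluster's members
    centers = np.array([X[labels == k].mean(axis=0)
                        for k in range(1, labels.max() + 1)])
    print(centers)  # one row per cluster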

Related

K-means boundaries

Is there any way to find the boundaries (coordinates) for x-y data in k-means clustering? I produced 8 clusters from the x-y data, shown below (each color represents one cluster). I need to get the values of the boundaries for each cluster.
The ELKI tool that I usually use for clustering will generate the boundaries for you in the visualization. I don't know if it will also output the coordinates to a file though.
It's called a Voronoi diagram, and you need its dual, the Delaunay triangulation, to build it. You can easily find algorithms for that.
Beware that some edges will go to infinity (just imagine two clusters: what does their boundary look like? What are the coordinates of that boundary?)
Note that on your data set, this clustering does not appear to be very good. The boundaries between clusters look quite arbitrary to me.
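If you want to compute the boundaries yourself, here is a sketch using scipy.spatial.Voronoi on hypothetical k-means centers (scipy builds the diagram from the Delaunay dual internally; the data and parameters below are placeholders):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.spatial import Voronoi, voronoi_plot_2d
    from sklearn.cluster import KMeans

    # Hypothetical x-y data; replace with your own points
    rng = np.random.default_rng(0)
    xy = rng.random((500, 2))

    km = KMeans(n_clusters=8, n_init=10).fit(xy)

    # The k-means cluster boundaries are the Voronoi diagram of the centers
    vor = Voronoi(km.cluster_centers_)

    # vor.vertices holds the finite boundary corners; ridges listing a -1
    # vertex in vor.ridge_vertices are the edges that extend to infinity
    print(vor.vertices)

    voronoi_plot_2d(vor)
    plt.scatter(xy[:, 0], xy[:, 1], c=km.labels_, s=5)
    plt.show()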

Representative instance of a cluster - hierarchical clustering

I'm using the agglomerative hierarchical clustering method to cluster a set of data, where the dataset I use for clustering is a set of trajectories.
I use a custom distance function to estimate the distance between the trajectories.
The MATLAB code is as follows: Z = linkage(ID,'single',@my_distfun);
After clustering the data, I would like to find the representative instance (or trajectory).
How can I find the representative instance (trajectory) of each cluster?
Hierarchical clustering does not have a concept of representative instances.
You will have to decide upon a definition yourself.
For example, you could use the element with the smallest average distance to all others (the medoid; see the sketch below). Or the one with the smallest average squared distance, or ... many other options.
"Representative" is a subjective term.

Center of clusters in RapidMiner

I have six features that are clustered using the k-means algorithm in RapidMiner, and I want to detect outliers in this data. There is a centroid table in RapidMiner that shows the center of each feature in each cluster. I want to detect outliers using the cluster method (k-means); I have the avg-within-centroid-distance per cluster, but I want to calculate the distance of each data point to the center of its cluster. How do I calculate a center point for each cluster with 6 features in RapidMiner? And since each data point also has 6 features, how do I calculate the distance of each data point to the center of its cluster in RapidMiner?
You can use the Cross Distances operator for this. It calculates the distances between all pairs of examples in two example sets. Use the Extract Cluster Prototype operator to find the cluster centroids and connect its output to one of the inputs of the Cross Distances operator; the original example set is connected to the other input. You can change the distance measure used in this operator, but the default is Euclidean distance.
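If you ever need the same computation outside RapidMiner, a rough Python equivalent of the centroid-distance approach to outlier detection might look like this (sklearn stands in for the RapidMiner operators; the data and threshold are illustrative):

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical stand-in for the example set: 6 features per row
    rng = np.random.default_rng(0)
    data = rng.random((200, 6))

    km = KMeans(n_clusters=4, n_init=10).fit(data)

    # Distance of every row to the centroid of its own cluster
    centers = km.cluster_centers_[km.labels_]
    dist_to_center = np.linalg.norm(data - centers, axis=1)

    # Flag rows far from their centroid as outlier candidates
    threshold = dist_to_center.mean() + 3 * dist_to_center.std()
    print(np.where(dist_to_center > threshold)[0])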

How to calculate Density in clustering

I am working with a data set having 2 coordinates. Currently I calculate density by first computing the total distance from each point to all other points and then dividing by the number of points. I want to know whether this is the correct method to calculate density, as I am not getting the desired result.
This is the cluster file https://dl.dropboxusercontent.com/u/45772222/samp.txt
This data set should have 3 clusters: 2 ellipses and one pipe connecting them.
Any idea how I can separate them?
Now that is a total toy example.
DBSCAN cannot separate clusters of different densities that touch each other. By definition of density connectedness, they must be separated by an area of low density. In your toy example, the two large clusters are actually connected by an area of higher density.
So essentially, this is an example of non-density-based clusters... If you want density-based clustering to be able to separate them, you must reduce the density of the connecting bar so that it is lower than the density of the clusters. (But maybe don't bother with such toy examples at all.)
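You can also check this in the data: density in the DBSCAN sense is local, not a single global average over all points. A sketch of a local density estimate, assuming samp.txt parses into an (n, 2) array:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    X = np.loadtxt('samp.txt')  # assumed to yield an (n, 2) array of points

    # Local density: the inverse of the distance to the k-th nearest neighbor
    k = 5
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1: each point is its own neighbor
    dist, _ = nn.kneighbors(X)
    local_density = 1.0 / dist[:, k]                 # distance to the k-th true neighbor

    # Points in the connecting bar will show a density comparable to (or higher
    # than) the ellipses, which is why DBSCAN cannot split them apart.
    print(local_density.min(), local_density.max())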

MATLAB: Self-Organizing Map (SOM) clustering

I'm trying to cluster some images depending on the angles between body parts.
The features extracted from each image are:
angle1 : torso - torso
angle2 : torso - upper left arm
..
angle10: torso - lower right foot
Therefore the input data is a matrix of size 1057x10, where 1057 stands for the number of images, and 10 stands for angles of body parts with torso.
Similarly, the testSet is an 821x10 matrix.
I want all the rows in input data to be clustered with 88 clusters.
Then I will use these clusters to find which cluster the TestData falls into.
In previous work, I used K-Means clustering, which is very straightforward: we just ask K-Means to cluster the data into 88 clusters, then implement another method that calculates the distance between each row in the test data and the center of each cluster and picks the smallest value. That gives the cluster of the corresponding test data row.
I have two questions:
Is it possible to do this using SOM in MATLAB?
AFAIK SOMs are for visual clustering. But I need to know the actual class of each cluster so that I can later label my test data by calculating which cluster it belongs to.
Do you have a better solution?
A Self-Organizing Map (SOM) is a clustering method considered an unsupervised variation of the Artificial Neural Network (ANN). It uses competitive learning techniques to train the network (nodes compete among themselves to display the strongest activation for a given input).
You can think of a SOM as a grid of interconnected nodes (square, hexagonal, ...), where each node is an N-dim vector of weights (the same dimensionality as the data points we want to cluster).
The idea is simple: given a vector as input to the SOM, we find the node closest to it, then update its weights and the weights of the neighboring nodes so that they approach those of the input vector (hence the name self-organizing). This process is repeated for all input data.
The clusters formed are implicitly defined by how the nodes organize themselves and form a group of nodes with similar weights. They can be easily seen visually.
SOMs are in a way similar to the K-Means algorithm, but differ in that we don't impose a fixed number of clusters; instead, we specify the number and shape of nodes in the grid that we want to adapt to our data.
Basically, when you have a trained SOM and you want to classify a new test input vector, you simply assign it to the nearest node on the grid (the Best Matching Unit, or BMU, using distance as the similarity measure), and give as prediction the [majority] class of the vectors belonging to that BMU node.
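To make the training and BMU classification concrete, here is a minimal NumPy sketch (not the MATLAB toolbox implementation; the grid size, decay schedule, and Gaussian neighborhood are illustrative choices):

    import numpy as np

    def train_som(X, grid=(8, 11), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        # Minimal SOM: a grid of weight vectors trained by pulling the best
        # matching unit (BMU) and its grid neighbors toward each input.
        rng = np.random.default_rng(seed)
        rows, cols = grid
        W = rng.random((rows * cols, X.shape[1]))     # node weight vectors
        coords = np.array([(r, c) for r in range(rows)
                           for c in range(cols)], float)  # grid position of each node
        n_steps = epochs * len(X)
        t = 0
        for _ in range(epochs):
            for x in rng.permutation(X):
                lr = lr0 * np.exp(-t / n_steps)       # decaying learning rate
                sigma = sigma0 * np.exp(-t / n_steps) # shrinking neighborhood
                bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # closest node
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian neighborhood
                W += lr * h[:, None] * (x - W)        # pull nodes toward x
                t += 1
        return W

    def bmu_labels(W, X):
        # Assign each row of X to its best matching unit
        return np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])

    # Usage sketch: an 8x11 grid gives the 88 nodes you asked for.
    # W = train_som(trainData)            # trainData: 1057x10 angle matrix
    # print(bmu_labels(W, testData))      # testData: 821x10 angle matrix

A test row's label is then the node index, or the majority class of the training rows mapped to that same node.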
For MATLAB, you can find a number of toolboxes that implement SOM:
The Neural Network Toolbox from MathWorks can be used for clustering using SOM (see the nctool clustering tool).
Also worth checking out is the SOM Toolbox