ASAP Clustering Analysis. How to adjust the parameters of cluster analysis?

How do you adjust the parameters of a cluster analysis when the subject area is unfamiliar to you and the data contain no informational "noise" or anomalies, but you know that the potential clusters have a "banana-like" shape?

Related

How to identify the clusters found by Lumer-Faieta ant clustering

I have been experimenting with Lumer-Faieta clustering and I am getting
promising results.
However, as the clusters form, I was wondering how to identify the final clusters. Do I run another clustering algorithm on the result (that seems counter-productive)?
I had the idea of starting each data point in its own cluster. Then, when a laden ant drops a data point, it gets the same cluster as the data points that dominate its neighborhood. The problem with this is that if a cluster is broken up, its fragments still share the same cluster number.
I am stuck. Any suggestions?
To solve this problem, I employed DBSCAN as a post-processing step. The effect is as follows:
Given that we have a projection of a high-dimensional problem onto a 2D grid, with known distances and uniform densities, DBSCAN is ideal for this problem. Choosing the right value for epsilon and the minimum number of neighbours is trivial (I used 3 for both values). Once the clusters have been identified, they can be projected back to the n-dimensional space.
See The 5 Clustering Algorithms Data Scientists Need to Know for a quick overview (and graphic demo) of DBSCAN and some other clustering algorithms.
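To make that post-processing step concrete, here is a minimal sketch; `grid_positions` is a hypothetical stand-in for the 2D grid cells the ant algorithm left each data item on (synthetic positions are generated so the snippet runs on its own):

```python
# Minimal sketch: DBSCAN as a post-processing step on the 2D ant-clustering grid.
# `grid_positions` is a placeholder for the (x, y) cell of each data item.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
grid_positions = np.vstack([
    rng.integers(0, 10, size=(30, 2)),   # one dense patch of the grid
    rng.integers(40, 50, size=(30, 2)),  # another dense patch
])

# eps=3 and min_samples=3 as in the answer above; both depend on grid resolution.
labels = DBSCAN(eps=3, min_samples=3).fit_predict(grid_positions)
print(labels)  # -1 marks items DBSCAN considers noise on the grid
```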

How to decide the numbers of clusters based on a distance threshold between clusters for agglomerative clustering with sklearn?

With sklearn.cluster.AgglomerativeClustering I need to specify the number of resulting clusters in advance. What I would like to do instead is to merge clusters until a certain maximum distance between clusters is reached and then stop the clustering process.
Accordingly, the number of clusters might vary depending on the structure of the data. I do not care about the number or the size of the resulting clusters, only that the distance between cluster centroids does not exceed a certain value.
How can I achieve this?
This pull request for a distance_threshold parameter in scikit-learn's agglomerative clustering may be of interest:
https://github.com/scikit-learn/scikit-learn/pull/9069
It looks like it'll be merged in version 0.22.
EDIT: See my answer to my own question for an example of implementing single-linkage clustering with a distance-based stopping criterion using scipy.
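If your scikit-learn release already ships the distance_threshold parameter from that pull request, a minimal usage sketch could look like this (n_clusters must then be None; the threshold value is an arbitrary placeholder):

```python
# Sketch: stop merging once clusters are farther apart than a given distance.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

X = np.random.RandomState(0).rand(100, 2)

model = AgglomerativeClustering(
    n_clusters=None,         # let the threshold decide the number of clusters
    distance_threshold=0.3,  # do not merge clusters farther apart than this
    linkage="single",        # "ward" is the default; single linkage suits elongated shapes
)
labels = model.fit_predict(X)
print(model.n_clusters_, "clusters found")
```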
Use scipy directly instead of sklearn. IMHO, it is much better.
Hierarchical clustering is a three-step process:
1. Compute the dendrogram
2. Visualize and analyze
3. Extract branches
But that doesn't fit the supervised-learning-oriented API preference of sklearn, which would like everything to implement a fit, predict API...
SciPy has a function for you:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.fcluster.html#scipy.cluster.hierarchy.fcluster
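A small sketch of those three steps with SciPy, on random data and with an arbitrary cut distance of 0.2:

```python
# Sketch: build the dendrogram with linkage(), then cut it with fcluster()
# at a distance threshold instead of a fixed number of clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.RandomState(0).rand(50, 2)

Z = linkage(X, method="single")                     # 1. compute the dendrogram
# 2. visualize: scipy.cluster.hierarchy.dendrogram(Z) draws it (needs matplotlib)
labels = fcluster(Z, t=0.2, criterion="distance")   # 3. extract branches below the cut
print(np.unique(labels))
```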

ELKI ClusteringVectorDumper - which ones are the outliers?

I am using ELKI for DBSCAN clustering and its ClusteringVectorDumper to output the cluster ids into a text file.
Which id do outliers get?
I assumed it was '0' but that does not seem to be true.
You can find the source code of ClusteringVectorDumper online.
There is no special treatment of noise clusters, but they will be processed as returned by Clustering.getAllClusters(). To provide some stability, this method currently sorts by name. DBSCAN does not provide further cluster names (some algorithms do assign e.g. interval names, or subspace names), so if I recall correctly, all clusters will be named either "Cluster" or "Noise". Because "Noise" sorts after "Cluster", the largest index should be the noise cluster.
Feel free to send a pull request to improve either the naming or the output. I had considered using negative numbers for noise clusters, but it would increase the code complexity; and people would likely not expect to see a cluster -1 either.
Abusing DBSCAN for outlier detection does not work well. It will miss outliers that are density-reachable from a cluster, and it will label low-density clusters as outliers even though other methods easily recognize them correctly. It also does not provide a ranking, so you have little control over how many outliers you get. If you modified DBSCAN to provide such a ranking, you would likely reinvent one of the oldest outlier detection methods, kNN outlier detection. (DB-Outlier is also very closely related to DBSCAN.)
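For comparison, here is a rough sketch of the kNN outlier score mentioned above, written with scikit-learn rather than ELKI; the distance to the k-th nearest neighbour is used as the outlier score, and k and the data are placeholders:

```python
# Sketch: kNN outlier detection - rank points by their distance to the k-th neighbour.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.random.RandomState(0).rand(200, 3)
k = 5

nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1 because each point is its own 0-distance neighbour
dist, _ = nn.kneighbors(X)
scores = dist[:, -1]                             # kNN distance per point

top = np.argsort(scores)[::-1][:10]              # indices of the 10 highest-ranked outliers
print(top)
```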

Density Based Clustering with Representatives

I'm looking for a method to perform density-based clustering. Unlike with DBSCAN, the resulting clusters should have a representative.
Mean-shift seems to fit those needs but does not scale well enough for my purposes. I have looked into some subspace clustering algorithms and only found CLIQUE using representatives, but this part is not implemented in ELKI.
As I noted in the comments on the previous iteration of your question,
https://stackoverflow.com/questions/34720959/dbscan-java-library-with-corepoints
Density-based clustering does not assume there is a center or representative.
Consider the following example image from Wikipedia user Chire (CC BY-SA 3.0):
Which object should be the representative of the red cluster?
Density-based clustering is about finding "arbitrarily shaped" clusters. These do not have a meaningful single representative object. They are not meant to "compress" your data - this is not a vector quantization method, but structure discovery. But it is the nature of such complex structure that it cannot be reduced to a single representative. The proper representation of such a cluster is the set of all points in the cluster. For geometric understanding in 2D, you can also compute convex hulls, for example, to get an area as in that picture.
Choosing representative objects is a different task. This is not needed for discovering this kind of structure, and thus these algorithms do not compute representative objects - it would waste CPU.
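As an aside, the convex-hull suggestion above takes only a few lines with SciPy (the cluster points here are random placeholders):

```python
# Sketch: outline the 2D "area" of one cluster with its convex hull.
import numpy as np
from scipy.spatial import ConvexHull

cluster_points = np.random.RandomState(0).rand(40, 2)  # stand-in for one cluster's members
hull = ConvexHull(cluster_points)
print(cluster_points[hull.vertices])  # corner points of the hull, in order
```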
You could choose the object with the highest density as representative of the cluster.
It is a fairly easy modification to DBSCAN to store the neighbor count of every object.
But as Anony-Mousse mentioned, the object may nevertheless be a rather bad choice. Density-based clustering is not designed to yield representative objects.
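A sketch of that highest-density idea, implemented here with scikit-learn's DBSCAN plus a radius-neighbour count rather than a modified DBSCAN (eps, min_samples and the data are placeholders):

```python
# Sketch: per DBSCAN cluster, pick the point with the most neighbours within eps.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0.2, 0.03, size=(150, 2)),   # two synthetic blobs
               rng.normal(0.8, 0.03, size=(150, 2))])
eps = 0.05

labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
neighbours = NearestNeighbors(radius=eps).fit(X).radius_neighbors(X, return_distance=False)
density = np.array([len(n) for n in neighbours])        # neighbour count per point

for cluster in set(labels) - {-1}:                      # skip the noise label -1
    members = np.where(labels == cluster)[0]
    representative = members[np.argmax(density[members])]
    print(f"cluster {cluster}: representative index {representative}")
```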
You could try AffinityPropagation, but it will also not scale very well.

Decision on number of clusters in Data Mining

Whenever we want to cluster some data, the user is required to give the number of clusters. With the k-means algorithm, for example, we need to specify how many clusters are required.
My question is: is it possible for the algorithm to decide by itself how many clusters are feasible for a particular data set?
There are several clustering algorithms that do not require a desired number of clusters as an input to the algorithm. An example of such an algorithm is the mean-shift clustering algorithm. However, you will need to specify a kernel as an input to the algorithm. This kernel selection (e.g., the size and shape of the kernel) will impact the number of clusters that you get as an output.
Some more information:
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/TUZEL1/MeanShift.pdf
http://scikit-learn.org/stable/auto_examples/cluster/plot_mean_shift.html
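A minimal mean-shift sketch with scikit-learn; the bandwidth (kernel size) is the knob that indirectly controls how many clusters come out, and the quantile used to estimate it is a placeholder choice:

```python
# Sketch: mean-shift finds the number of clusters itself; the bandwidth steers it.
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 0.3, size=(100, 2)),     # two synthetic blobs
               rng.normal(3, 0.3, size=(100, 2))])

bandwidth = estimate_bandwidth(X, quantile=0.2)       # heuristic kernel size
labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
print(len(np.unique(labels)), "clusters found")
```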
I'm not an expert on this, but to answer your question: yes, there are methods to determine the number of clusters automatically, for k-means for example.
It is somewhat involved, but given a data set and a clustering method you can compute what is called the gap statistic in order to estimate the number of clusters.
If you are an R user, have a look at the clusGap and maxSE functions.
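In Python, a very rough sketch of the gap-statistic idea (a hypothetical helper, assuming k-means and using KMeans.inertia_ as the within-cluster dispersion) might look like this:

```python
# Rough sketch of the gap statistic: compare log(dispersion) of the data with
# that of uniform reference data, and pick the smallest k whose gap "levels off".
import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(X, k_max=8, n_refs=10, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)              # bounding box for reference data
    gaps, sks = [], []
    for k in range(1, k_max + 1):
        log_wk = np.log(KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_)
        ref_logs = [
            np.log(KMeans(n_clusters=k, n_init=10, random_state=seed)
                   .fit(rng.uniform(lo, hi, size=X.shape)).inertia_)
            for _ in range(n_refs)
        ]
        gaps.append(np.mean(ref_logs) - log_wk)                 # Gap(k)
        sks.append(np.std(ref_logs) * np.sqrt(1 + 1 / n_refs))  # simulation error s_k
    for k in range(1, k_max):
        if gaps[k - 1] >= gaps[k] - sks[k]:            # smallest k with Gap(k) >= Gap(k+1) - s_{k+1}
            return k
    return k_max

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(m, 0.1, size=(50, 2)) for m in (0, 1, 2)])
print(gap_statistic(X))  # three well-separated blobs, so this typically reports 3
```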