I have a 3D box with some points in it (1800).
Like this:
Now I have to cluster these points, and it can't be done with k-means because you don't know the number of clusters. Another problem is that the box is periodic, so the points at the sides, top and bottom can belong to each other. Like in this image:
The right and left belong to each other.
How can I define these clusters with a specific distance as a threshold, and account for the box being periodic (so when you are at the end of one axis, look at the beginning to check whether those distances are below the threshold)?
Kind regards,
Glenn
The Wikipedia article on cluster analysis will answer your question.
Look for density based clustering algorithms, as your data looks very much like the design scenario of density based clustering to me.
Well, first things first: you can indeed use k-means. Of course, you will need to use a cluster validity index (google the Silhouette width index, Calinski-Harabasz index, Dunn's index, etc.).
If you really don't want to use k-means for some other reason, you may wish to use a hierarchical clustering algorithm such as the Ward method (description in Wikipedia). You won't need to know the number of clusters a priori (however, can you truly claim that you are creating a taxonomy without being able to answer the most basic of questions: how many taxa are there?).
The fact that your box is periodic raises an interesting challenge. My first thought here is that the best way to approach the problem is not by changing the distance measure (which you could do), but by transforming the data (feature extraction).
Your box has 6 sides, but because it's periodic it's as if it had 3: the left side and the right side are "the same" (as are the top and bottom, and the front and back).
How about redefining each object over three features? Each feature is the distance between the object and one of the "three" sides.
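If you prefer the distance-measure route mentioned above instead, here is a minimal sketch using SciPy's hierarchical clustering with a minimum-image (wrapped) distance; the box edge lengths, the threshold of 1.5, and the random placeholder data are all assumptions:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

box = np.array([10.0, 10.0, 10.0])      # assumed edge lengths of the periodic box
points = np.random.rand(1800, 3) * box  # placeholder for your 1800 points

def periodic_distance(u, v):
    # minimum-image convention: wrap each coordinate difference around the box
    d = np.abs(u - v)
    d = np.minimum(d, box - d)
    return np.sqrt((d ** 2).sum())

# single linkage plus a distance cut groups points whose periodic
# distances chain together below the threshold
D = pdist(points, metric=periodic_distance)
Z = linkage(D, method='single')
labels = fcluster(Z, t=1.5, criterion='distance')  # 1.5 = example threshold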
Best of luck!
Related
I have a series (let's say 1000) of images of a biological sample... living cells. Over this series, the data for each pixel describes a time-variant "wave", if you will, giving the measure of light intensity vs. time. After performing an FFT on this wave, I'll have the frequency content and phase for each pixel.
My goal is to be able to find all the pixels that are measuring a single cell, and was wondering if some sort of clustering technique would give me what I'm looking for. After some research (I know almost nothing of cluster analysis) looking at KMeans, DBSCAN, and a few others, I'm unsure how to proceed.
Here are my criteria:
- A cluster should consist of connected pixels, with a maximum size of around 9-12 pixels (this is defined by the actual size of the cell in the field of view). Putting more pixels in a cluster likely means that the cluster contains more than one cell, and I'd prefer each cluster to represent a single cell.
- The cells are signalling (glowing) with some frequency/phase. These are not necessarily in sync, so I think that this might be useful in segregating the cells/clusters.
- There is an unknown number of cells in each image, so an unknown number of clusters.
- The images are segmented into smaller sub-images for analysis (the reason for this is not relevant here). These sub-images are analyzed separately for clusters and are about 100 x 100 pixels.
Any suggestions would be greatly appreciated. I'm just looking for help getting pointed in the right direction.
Probably the most flexible is the classic old hierarchical agglomerative clustering (HAC). For some reason, people always overlook this powerful method and prefer the much more limited k-means.
HAC is very nice to parameterize. It needs a distance or similarity (few requirements here - it probably should be symmetric, but no triangle inequality is necessary). And with the linkage you can control the cluster shape or diameter nicely. For example, with complete linkage you can control the maximum diameter of a cluster. This is probably useful here, and is my suggestion; a sketch follows below.
The main drawbacks of HAC are (1) scalability: at 50,000 instances it will be slow and use too much memory; and (2) you need to know what you want to do: you need to choose the distance, the linkage, and where to cut the dendrogram. With k-means, you only need to choose k to get a (bad) result.
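A rough sketch of the complete-linkage idea with SciPy; the placeholder coordinates and the ~12-pixel diameter cut are assumptions based on the stated cell size:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

coords = np.argwhere(np.random.rand(100, 100) > 0.99)  # placeholder pixel coordinates

# with complete linkage the merge height equals the cluster diameter,
# so cutting at ~12 caps clusters at roughly the size of one cell
Z = linkage(pdist(coords), method='complete')
labels = fcluster(Z, t=12, criterion='distance')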
DBSCAN is a great algorithm, but in your case it is likely to form clusters with multiple cells. So I'd rather try OPTICS instead which may be able to discover substructures where DBSCAN only sees a large blob.
I have a dataset that is represented by this picture.
As you can see, there is a thin strip on top of the rest of the data points. The question is how I can separate the strip from the rest, using cluster analysis or any other technique.
I have tried DBSCAN, KMeans, and Hierarchical Clustering and all gave me similar results shown by colors in the graph.
DBSCAN and OPTICS are your best candidates. If the data is not too big, you can also try mean shift. But they will not be able to do it perfectly - some points will be "noise" to them.
It's fairly obvious that k-means and most hierarchical clustering cannot solve this.
Keep minPts small (5 to 10), and focus on choosing epsilon. It must be small enough not to bridge the gap. OPTICS will be easier to use, since you only need to give an upper bound on epsilon.
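A minimal scikit-learn sketch; the data, eps, and min_samples values are placeholders to be tuned so that eps stays below the gap width:

import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.rand(500, 2)  # placeholder for your 2-D points

# min_samples (minPts) kept small; eps must not bridge the gap
labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(X)
# a label of -1 marks points DBSCAN treats as noise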
Consider manually specifying a model. Tweaking parameters until you get your desired result is not any better. Draw a line on your plot with a ruler, turn that into a linear model by reading off the parameters...
I've been looking around scipy and sklearn for clustering algorithms for a particular problem I have. I need some way of characterizing a population of N particles into k groups, where k is not necessarily known, and, in addition to this, no a priori linking lengths are known (similar to this question).
I've tried k-means, which works well if you know how many clusters you want. I've tried DBSCAN, which does poorly unless you tell it a characteristic length scale on which to stop looking (or start looking) for clusters. The problem is, I have potentially thousands of these clusters of particles, and I cannot spend the time to tell k-means/DBSCAN algorithms what they should go off of.
Here is an example of what DBSCAN finds:
You can see that there really are two separate populations here, but even when adjusting the epsilon factor (the maximum-distance-between-neighboring-points parameter), I simply cannot get it to see those two populations of particles.
Are there any other algorithms which would work here? I'm looking for minimal information upfront - in other words, I'd like the algorithm to be able to make "smart" decisions about what could constitute a separate cluster.
I've found one that requires NO a priori information/guesses and does very well for what I'm asking it to do. It's called Mean Shift and is located in scikit-learn. It's also relatively quick (compared to other algorithms like Affinity Propagation).
Here's an example of what it gives:
I also want to point out that the documentation states it may not scale well.
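A minimal sketch of that approach; the placeholder positions and the quantile used for the bandwidth estimate are assumptions:

import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

X = np.random.rand(1000, 3)  # placeholder particle positions

# the bandwidth can be estimated from the data itself,
# so no characteristic length has to be supplied up front
bandwidth = estimate_bandwidth(X, quantile=0.2)
labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(X)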
When using DBSCAN it can be helpful to scale/normalize the data or distances beforehand, so that the estimation of epsilon is relative. There is an implementation of DBSCAN - I think it's the one Anony-Mousse somewhere described as 'floating around' - which comes with an epsilon estimator function. It works as long as it isn't fed large datasets.
There are several incomplete versions of OPTICS on GitHub. Maybe you can find one to adapt for your purpose. I am still trying to figure out myself what effect minPts has when using one and the same extraction method.
You can try a minimum spanning tree (Zahn's algorithm) and then remove the longest edges, similar to alpha shapes. I used it with a Delaunay triangulation and a concave hull: http://www.phpdevpad.de/geofence. You can also try hierarchical clustering, for example clusterfck.
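A small sketch of the MST idea with SciPy; the points and the cut length are made-up placeholders:

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

X = np.random.rand(100, 2)  # placeholder points
mst = minimum_spanning_tree(squareform(pdist(X))).toarray()

# drop every MST edge longer than the cut length; the remaining
# connected components are the clusters (Zahn's idea)
cut = 0.15  # assumed cut length
mst[mst > cut] = 0
n_clusters, labels = connected_components(mst != 0, directed=False)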
Your plot indicates that you chose the minPts parameter way too small.
Have a look at OPTICS, which no longer needs the epsilon parameter of DBSCAN.
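For instance, a bare-bones call via scikit-learn (the data and the min_samples value are placeholders):

import numpy as np
from sklearn.cluster import OPTICS

X = np.random.rand(500, 2)  # placeholder data

# no eps is required; min_samples plays the role of minPts
labels = OPTICS(min_samples=10).fit_predict(X)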
I'm trying to cluster web page content based on visual proximity.
You can see a visual display of the blocks at the link below:
http://i.stack.imgur.com/qzGKE.png
I tried DBSCAN clustering with scikit-learn using the features below, without much success:
- left X coordinate of the block (because content is frequently left-aligned)
- right X coordinate of the block (because content is frequently right-aligned)
- top Y coordinate of the block (to group blocks that are vertically close)
Do you have any ideas for better features?
Have a look at Generalized DBSCAN (not available in scipy, though).
How about clustering objects together when they overlap or almost overlap (by 1 pixel)?
Note: DBSCAN doesn't really use the distance. It is based only on a binary "is close enough to" decision.
Also note that DBSCAN is not restricted to vectors. DBSCAN can work with anything for which you can define the "similar enough" predicate.
So you might not need to "extract features"; instead, consider when you want two objects to be in the same cluster. A sketch of this predicate-based view follows below.
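One way to emulate such a binary predicate with stock scikit-learn is a precomputed 0/1 "distance" matrix; the block coordinates, the one-pixel tolerance, and the rectangle representation are all assumptions:

import numpy as np
from sklearn.cluster import DBSCAN

# hypothetical blocks as (left, top, right, bottom) pixel coordinates
blocks = np.array([[10, 10, 100, 30],
                   [10, 32, 100, 50],
                   [300, 10, 400, 40]])

def near_overlap(a, b, tol=1):
    # True if the rectangles overlap or are within `tol` pixels of doing so
    return not (a[2] + tol < b[0] or b[2] + tol < a[0] or
                a[3] + tol < b[1] or b[3] + tol < a[1])

# encode the predicate as a distance: 0 = "close enough", 1 = not
D = np.array([[0.0 if near_overlap(a, b) else 1.0 for b in blocks]
              for a in blocks])

# eps < 1 means only 0-distance pairs are neighbors; min_samples=1
# makes this pure connectivity-based grouping
labels = DBSCAN(eps=0.5, min_samples=1, metric='precomputed').fit_predict(D)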
Here is my problem: I have a list of villages. For each pair of villages I computed the path distance between them and prepared a distance matrix. Now I want to identify clusters of villages which are close to each other.
I use Python 2.7 and I have already used hierarchical clustering (provided by SciPy) to cluster the distance matrix. Looking at it by eye, I can identify the nearest villages, but I need to automate this. I need to get the elements which belong to each cluster.
I was also wondering how to retrieve the clusters once I had created and cut the dendrogram. Since this is unanswered and may come up for others with a similar question, I'll answer according to what I was looking for, making some assumptions since this is an old question.
The first step is that you need to determine where to cut the dendrogram. You can do this a variety of ways, but I'll assume you already know how to do this, since you're looking at the dendrogram and seem to have satisfied yourself that you have clustered the data. If you don't know where to cut, you could start with something simple like cutting at the max distance. But really, where to cut is a different, very long discussion which I will assume you have figured out how to do (since I had done so at this point in my search).
Now I assume you have a dendrogram, and you know where to cut it, and maybe you even have it plotted with the cut line. But you want to do something more with the clusters, so you need to label the points you clustered. This can be done using the flat cluster (fcluster()) function in scipy.
from scipy.cluster.hierarchy import fcluster

clusters = fcluster(Z, distance, criterion='distance')
print(clusters)
Z is the hierarchical linkage matrix (as produced by SciPy's linkage() function), which I assume you have already created. distance is the distance at which you are cutting the dendrogram (there are other ways to cut it; see the fcluster documentation for how to do this).
This returns a numpy array denoting which observation is in which cluster. Now you can append this to your data as a new column and go to town (or village) with it.
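As an end-to-end illustration; the distance matrix, the 'average' linkage, and the cut height of 5 are all made-up placeholders:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# hypothetical symmetric path-distance matrix for four villages
D = np.array([[0., 2., 9., 8.],
              [2., 0., 8., 7.],
              [9., 8., 0., 3.],
              [8., 7., 3., 0.]])

Z = linkage(squareform(D), method='average')  # linkage expects the condensed form
clusters = fcluster(Z, t=5, criterion='distance')
print(clusters)  # two clusters: villages {0, 1} and {2, 3}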