Will the clusters still be meaningful despite the disparity between the variance of my two features using the k-means algorithm? - cluster-analysis

I have a dataset consisting of pairs (x, y) with a large disparity between the variances of the two variables. I want to cluster the data using the k-means algorithm, as I believe there is a rationale for doing so.
Will the clusters still be meaningful despite the disparity between the variances of my two features?

Depends on your data.
If you have physical positions on x and y, but the objects are largely located along a line, it is perfectly reasonable for the variances to differ.
If the disparity exists only because you measured the x axis in feet and the y axis in millimeters, the results will be bad.
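If the disparity is only an artifact of the units, a common fix is to standardize each feature before running k-means. A minimal sketch with scikit-learn, assuming the pairs are stored in a NumPy array X (the example data and the cluster count are placeholders):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# X is an (n_samples, 2) array of (x, y) pairs; placeholder data with very different scales
X = np.random.rand(100, 2) * [1000.0, 1.0]

X_scaled = StandardScaler().fit_transform(X)   # zero mean, unit variance per feature
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)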

Related

distance metrics for clustering non-normally distributed data

The dataset I want to cluster consists of ~1000 samples and 10 features, which have different scales and ranges (negative, positive, both). Using scipy.stats.normaltest() I found that none of the features are normally-distributed (all p-values < 1e-4, small enough to reject the null hypothesis that the data are taken from a normal distribution). But all of the distance measures that I'm aware of assume normally-distributed data (I was using Mahalanobis until I realized how non-uniform the data was). What distance measures would one use in this situation? Or is this where one simply has to normalize every feature and hope that that doesn't introduce bias?
Why do you think all distances assume normal (which, by the way, is not the same as uniform) data?
Consider Euclidean distance. In many physical applications this distance makes perfect sense, because it is "as the crow flies". Manhattan distance makes a lot of sense when movement is constrained to two axes that cannot be used at the same time. Both are completely appropriate for non-normally distributed data.
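For illustration, both metrics are available directly in SciPy; a minimal sketch with two made-up points:

from scipy.spatial import distance

a = [1.0, 2.0, 3.0]
b = [4.0, 0.0, 3.0]

print(distance.euclidean(a, b))   # straight-line ("as the crow flies") distance
print(distance.cityblock(a, b))   # Manhattan / city-block distance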

Detecting Gaussians in an image

I have a greyscale image, represented by a histogram below (x and y axes are pixels, z axis is pixel intensity).
Each cluster of bars represents an object, with the local maximum roughly approximating the object's centroid. My goal is to find the Full Width at Half Maximum (FWHM) of each object, so I'm roughly approximating each object as a Gaussian distribution.
How can I detect each cluster individually? I understand how to calculate the FWHM mathematically, but I'm not sure how to detect each cluster based on its (roughly) Gaussian features. (E.g., in the example below I would want to detect 6 clusters. One can see a small cluster in the middle, but its amplitude is so small that I am okay with missing it.)
I appreciate any advice - and efficiency is not a major issue, so I can implement relatively expensive solutions.
To find the centers of each of these groupings you could use a type of A* search algorithm, or a similar optimization algorithm.
It will find its way to the maximum of a grouping. The issue after that is that you won't know whether you are at a local maximum (which in your scenario is likely). After your current search has topped out at the highest point and you have calculated the FWHM for that area, you can set all the nodes your A* has traversed to 0 (or mark each node as visited so it is not visited again), and start the A* algorithm again, until all nodes have been seen and all groupings found.
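A simpler sketch of the same idea (find a peak, take the region above half its height, repeat for every peak) using SciPy's ndimage tools; the image, the neighborhood size, and the amplitude threshold below are all assumptions to adapt to your data:

import numpy as np
from scipy import ndimage

# img: 2D array of pixel intensities (the greyscale image); placeholder data here
img = np.random.rand(200, 200)

# 1. Find local maxima above a minimum amplitude, to skip tiny clusters.
min_amplitude = 0.9                      # assumed threshold
maxima = (img == ndimage.maximum_filter(img, size=15)) & (img > min_amplitude)
peak_coords = np.argwhere(maxima)

# 2. For each peak, label the connected region above half the peak height;
#    its extent along each axis gives a rough FWHM estimate.
for (r, c) in peak_coords:
    half_max = img[r, c] / 2.0
    labeled, _ = ndimage.label(img >= half_max)
    region = (labeled == labeled[r, c])  # the blob containing this peak
    rows, cols = np.where(region)
    print("peak", (r, c), "FWHM approx", rows.ptp() + 1, "x", cols.ptp() + 1)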

Clustering of 3D points

I have a large dataset of around 20 million points (x,y,z) in a 3-dimensional space. I know these points are organized in dense regions, but that these regions vary in size. I think a standard unsupervised 3D clustering should solve my problem.
Since I can't estimate the number of clusters a priori, I tried using k-means with a wide range of k, but it is slow, and I would also have to assess how significant each k-partition is.
Basically, my question is: how can I extract the most significant partition of my points into clusters?
k-means is probably not the best algorithm for such data.
DBSCAN should be closer to your intuition of dense regions.
Try on a sample first, then figure out how to scale up.
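A minimal sketch of that approach with scikit-learn's DBSCAN; the sample size and the eps/min_samples values are assumptions you would need to tune for your data:

import numpy as np
from sklearn.cluster import DBSCAN

# points: (n, 3) array of (x, y, z); placeholder for your ~20 million points
points = np.random.rand(1_000_000, 3)
sample = points[np.random.choice(len(points), 50_000, replace=False)]

labels = DBSCAN(eps=0.05, min_samples=50).fit_predict(sample)   # -1 marks noise
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters, "clusters found on the sample")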
It is not clear to me from the above whether you're going to use k-means or not, but if you are, you should follow the answers to the post below, which show how to measure the variance of the clusters.
Calculating the percentage of variance measure for k-means?
Additionally, you can get a good fit using the 'elbow method' by trying values of k from 2 to 15. See the answer from Amro for the process.
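A minimal sketch of the elbow method with scikit-learn, where inertia_ is the within-cluster sum of squares and the range of k follows the suggestion above (the data array is a placeholder):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(1000, 3)          # placeholder for your (x, y, z) points

inertias = []
for k in range(2, 16):               # try k from 2 to 15
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)     # within-cluster sum of squares

# Look for the "elbow": the k after which the decrease in inertia flattens out.
for k, val in zip(range(2, 16), inertias):
    print(k, val)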
One simple idea in this case is to use three separate clusterings, one along each dimension. That might speed things up.
So you find clusters along the X axis (project all the points onto the X axis) and then form sub-clusters along the Y axis, and then along the Z axis.
I think 1-D k-means can be solved very efficiently using dynamic programming: http://www.sciencedirect.com/science/article/pii/0025556473900072
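A rough sketch of the per-axis idea with scikit-learn; the number of clusters per axis is an assumption, and a dedicated 1-D dynamic-programming solver such as the one in the linked paper would be more efficient than running k-means on each projection:

import numpy as np
from sklearn.cluster import KMeans

points = np.random.rand(10000, 3)        # placeholder for your (x, y, z) points
k_per_axis = 4                           # assumed number of clusters per axis

# Cluster along the X axis first (project all points onto X) ...
x_labels = KMeans(n_clusters=k_per_axis, n_init=10,
                  random_state=0).fit_predict(points[:, [0]])

# ... then sub-cluster each X-cluster along the Y axis (and likewise along Z).
for xl in np.unique(x_labels):
    idx = np.where(x_labels == xl)[0]
    y_labels = KMeans(n_clusters=k_per_axis, n_init=10,
                      random_state=0).fit_predict(points[idx][:, [1]])
    print("X-cluster", xl, "splits into", len(np.unique(y_labels)), "Y-subclusters")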

Remove outliers from a set of 3d points before clustering Matlab

I have a set of 3D points in MATLAB; my data can be found here. As you can see, there are some outliers which are affecting my clustering results. Could anyone please advise how I can remove these outliers from my data?
Having looked at your data, I don't think any clustering algorithm will do what you want. Instead, you will probably need to train a classifier. This is what the Kinect people did: they trained a classifier on millions of real and synthetic postures so it could label limbs, head, etc.
The reason I don't think density-based clustering will work either is that your data is a single, density-connected, body-with-two-boxes-shaped blob. Without knowing what a "body" and a "box" are, segmentation will be rather arbitrary. Or, in the case of density-based clustering, it will not segment at all, or it will segment e.g. along the rather low resolution of your z axis. Furthermore, your X and Y axes come from a grid-based image scan (I assume), so you have a very uniform density on the X and Y axes - the arms, for example, are not of lower density than the body or the boxes.
You can, however, use DBSCAN with rather broad (and easy to set) parameters to remove the noise.
E.g. in ELKI the following parameters yield reasonable results:
java -jar elki.jar -dbc.in /tmp/XX.csv -algorithm clustering.DBSCAN \
-dbscan.epsilon 0.05 -dbscan.minpts 100
The majority cluster is your data with the outliers removed; the small blob near the foot is removed as well.
To speed up the clustering process, you can add the parameters
-db.index tree.spatial.rstarvariants.rstar.RStarTreeFactory \
-pagefile.pagesize 1000 -spatial.bulkstrategy SortTileRecursiveBulkSplit
which yields a runtime of 4.5 seconds here. This is obviously not good enough for real-time operation as on a Kinect; but it is not surprising to see a directed classification algorithm outperform an unsupervised method - this is in fact to be expected.
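If you would rather stay in a scripting environment than call ELKI, roughly the same noise removal can be sketched with scikit-learn's DBSCAN, reusing the epsilon and minpts values above (the file-loading line is an assumption about your data format):

import numpy as np
from sklearn.cluster import DBSCAN

points = np.loadtxt("/tmp/XX.csv", delimiter=",")     # assumed CSV of (x, y, z) rows

labels = DBSCAN(eps=0.05, min_samples=100).fit_predict(points)

# DBSCAN marks noise points with the label -1; keep only the largest cluster.
cluster_ids, counts = np.unique(labels[labels != -1], return_counts=True)
majority = cluster_ids[np.argmax(counts)]
cleaned = points[labels == majority]
print(len(points) - len(cleaned), "points removed as outliers")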
Here is the result of clustering the data set with the parameters above:

Finding elongated clusters using MATLAB

Let me explain what I'm trying to do.
I have a plot of an image's points/pixels in RGB space.
What I am trying to do is find elongated clusters in this space. I'm fairly new to clustering techniques and maybe I'm not doing things correctly. I'm trying to cluster using MATLAB's built-in k-means clustering, but it appears that this is not the best approach in this case.
What I need to do is find "color clusters".
This is what I get after applying K-means on an image.
This is how it should look:
for an image like this:
Can someone tell me where I'm going wrong, and what I can to do improve my results?
Note: Sorry for the low-res images, these are the best I have.
Are you trying to replicate the results of this paper? I would say just do what they did.
However, I will add to this, since there are some issues with the current answers.
1) Yes, your clusters are not spherical, which is an assumption k-means makes. DBSCAN and MeanShift are two more common methods for handling such data, as they can handle non-spherical clusters. However, your data appears to have one large central clump that spreads outwards in a few finite directions.
For DBSCAN, this means it will put everything into one cluster, or make everything its own cluster, as DBSCAN assumes roughly uniform density and requires that clusters be separated by some margin.
MeanShift will likely have difficulty because everything seems to be coming from one central lump - so that will be the area of highest density that the points will shift toward, and converge to one large cluster.
My advice would be to change color spaces. RGB has issues, and the assumptions most algorithms make will probably not hold up well in it. The clustering algorithm you should use will then likely change in the different feature space, but hopefully it will make the problem easier to handle.
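For example, converting to a more perceptually uniform space such as CIELAB before clustering is straightforward; a sketch with scikit-image, where the file name is a placeholder:

from skimage import io, color

rgb = io.imread("image.png")              # placeholder file name
lab = color.rgb2lab(rgb[:, :, :3])        # drop any alpha channel, convert to CIELAB

# Reshape to an (n_pixels, 3) feature matrix for clustering.
features = lab.reshape(-1, 3)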
k-means basically assumes clusters are approximately spherical. In your case they are definitely NOT. Try fitting a Gaussian to each cluster with a non-spherical covariance matrix.
Basically, you will be following the same expectation-maximization (EM) steps as in k-means with the only exception that you will be modeling and fitting the covariance matrix as well.
Here's an outline of the algorithm:
1. Init: assign each point at random to one of the k clusters.
2. For each cluster, estimate the mean and covariance.
3. For each point, estimate its likelihood of belonging to each cluster. Note that this likelihood is based not only on the distance to the center (mean) but also on the shape of the cluster as encoded by the covariance matrix.
4. Repeat stages 2 and 3 until convergence, or until a pre-defined number of iterations is exceeded.
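A sketch of this using scikit-learn's GaussianMixture, which runs essentially this EM procedure with full (non-spherical) covariance matrices; the number of components and the placeholder data are assumptions:

import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.rand(5000, 3)                      # placeholder for your RGB pixel vectors

gmm = GaussianMixture(n_components=5,            # assumed number of clusters
                      covariance_type="full",    # non-spherical covariance per cluster
                      max_iter=200, random_state=0)
labels = gmm.fit_predict(X)                      # EM: estimate means/covariances, assign points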
Take a look at density-based clustering algorithms, such as DBSCAN and MeanShift. If you are doing this for segmentation, you might want to add pixel coordinates to your vectors.
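A sketch of that last suggestion, appending pixel coordinates to the RGB vectors; the spatial weighting factor is an assumption that controls how strongly spatial proximity influences the clustering:

import numpy as np

# img: (height, width, 3) RGB image; placeholder data here
img = np.random.rand(100, 120, 3)

h, w = img.shape[:2]
ys, xs = np.mgrid[0:h, 0:w]
spatial_weight = 0.5                               # assumed; tune for your data
features = np.column_stack([img.reshape(-1, 3),
                            spatial_weight * xs.reshape(-1, 1),
                            spatial_weight * ys.reshape(-1, 1)])
# features now has shape (h * w, 5): R, G, B, x, y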