I have a greyscale image, represented by a histogram below (x and y axes are pixels, z axis is pixel intensity).
Each cluster of bars represents an object, with each local maximum fairly approximating the centroid of its object. My goal is to find the full width at half maximum (FWHM) of each object, so I'm roughly approximating each object as a Gaussian distribution.
How can I detect each cluster individually? I understand how to mathematically calculate the FWHM, but I'm not sure how to detect each cluster based on its (roughly) Gaussian features. (For example, in the image below I would want to detect 6 clusters. One can see a small cluster in the middle, but its amplitude is so small that I am okay with missing it.)
I appreciate any advice - and efficiency is not a major issue, so I can implement relatively expensive solutions.
To find the centers of each of these groupings you could use a type of A* search, or a similar local search algorithm.

It will find its way to the maximum of a grouping. The issue after that is that you won't know whether you are merely at a local maximum (which, in your scenario, is likely). After your current search has converged on its highest point, and you have calculated the FWHM for that area, you could set all the nodes your search traversed to 0 (or mark each node as visited so it is not visited again) and start the search again, until all nodes have been seen and all groupings found.
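Not a full A*, but a minimal sketch of the suppress-and-restart loop described above, with the traversal step swapped for a simpler one: take the highest remaining pixel as the next peak, then flood fill the connected region above half max (grayconnected, Image Processing Toolbox). The array Z and the 0.1 amplitude cutoff are assumptions:

    % Working copy of the 2-D intensity array Z; found clusters get zeroed.
    work   = double(Z);
    minAmp = 0.1 * max(work(:));     % ignore peaks weaker than this (e.g. the
                                     % small central cluster in the question)
    peaks  = zeros(0, 2);
    fwhm   = zeros(0, 1);
    while true
        [amp, imax] = max(work(:));
        if amp < minAmp, break; end              % all significant peaks found
        [r, c] = ind2sub(size(work), imax);
        % Connected region around the peak whose intensity stays above amp/2:
        mask = grayconnected(work, r, c, amp/2);
        peaks(end+1, :) = [r, c];                %#ok<SAGROW>
        fwhm(end+1, 1) = 2 * sqrt(nnz(mask)/pi); % FWHM taken as the equivalent
                                                 % diameter of the half-max region
        work(mask) = 0;                          % suppress, then search again
    end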
Dear friends, I am currently working on a disparity algorithm that visits only a small fraction of disparity space in order to find a semi-dense disparity map. It works by growing from a small set of correspondence seeds. But before that, I am implementing the standard region growing algorithm in MATLAB to understand how it works.
The first step of the baseline growing algorithm says that:
Require: rectified images Il, Ir; initial correspondence seeds S; image similarity threshold. Compute similarity simil(s) for every seed s belonging to S.
Now I cannot understand this step. First of all, how do I calculate initial seed points from two rectified images? Should I use the SIFT algorithm in MATLAB, or is there a better way to do it? Can anybody also give me some idea of how a region-growing-based disparity algorithm works, and whether it is better than SAD or SSD?
If you have rectified images, finding disparity is a matter of calculating costs between pixels in the left and right images on the same horizontal line.
You can take a few selected points in the images (for example, ones that have high gradient, or feature points coming from SIFT), set those as roots/seeds of your regions, and calculate the cost for a range of disparities using SAD/SSD or whatever cost function you prefer.
Then take the best disparity for a root and assign it to a neighbor. If its cost is lower than a predefined threshold, add the neighbor to the region; otherwise move on to the next neighbor. When you cannot add any more points, the region growing is finished.
This is a detailed example of the process: http://arxiv.org/pdf/0812.1340.pdf
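For concreteness, here is a rough MATLAB sketch of that growing loop for a single seed, using SAD as the cost. Every name in it (Il, Ir, r0, c0, d0, w, tau) is an illustrative assumption, not something taken from the paper:

    % Assumes rectified grayscale doubles Il, Ir, a seed at row r0 / column c0
    % with disparity d0, window half-size w >= 1, and acceptance threshold tau.
    [H, W] = size(Il);
    dmap  = nan(H, W);                   % disparity map (NaN = unassigned)
    queue = [r0, c0, d0];                % frontier rows: [row, col, disparity]
    sad   = @(r, c, d) sum(sum(abs( ...  % SAD between windows on the same row
            Il(r-w:r+w, c-w:c+w) - Ir(r-w:r+w, c-d-w:c-d+w))));
    while ~isempty(queue)
        r = queue(1, 1); c = queue(1, 2); d = queue(1, 3);
        queue(1, :) = [];
        if ~isnan(dmap(r, c)), continue; end          % already assigned
        if r <= w || r > H-w || c <= w || c > W-w ... % windows must fit for
                || c-d <= w+1 || c-d > W-w-1          % disparities d-1 .. d+1
            continue;
        end
        % Evaluate the disparity inherited from the neighbor, plus or minus 1:
        costs = arrayfun(@(dd) sad(r, c, dd), d-1:d+1);
        [cmin, k] = min(costs);
        if cmin < tau                                 % accept, then grow
            dmap(r, c) = d + k - 2;                   % k = 1..3 -> d-1 .. d+1
            dn = dmap(r, c);
            queue = [queue; r-1 c dn; r+1 c dn; r c-1 dn; r c+1 dn]; %#ok<AGROW>
        end
    end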
This is a follow-up question to the one below:
Second moments question
MATLAB's regionprops function estimates an ellipse from a given set of 2-D points. This is done using image moments; the documentation says it uses normalized second central moments, and the formulas also follow what is suggested by the Wikipedia article on image moments.
Effectively, the covariance matrix of the region is calculated (in a slightly more efficient way), and then the square roots of this matrix's eigenvalues are computed and returned as the major and minor axes, with one change: they are multiplied by a factor of 4.
Why?
Essentially, covariance estimation assumes a multivariate normal distribution. However, an arbitrary image region is most likely not normally distributed; I would rather expect a factor based on the assumption that the data are uniformly distributed. So what is the justification for choosing 4?
In the meantime I found the answer: the factor of 4 yields correct results for regions with an elliptical shape. For rectangular or non-solid regions, for example, the estimated axis lengths are incorrect, and the error varies nonlinearly with changes in the region.
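For the record, the derivation behind that factor: for a solid ellipse $E$ of uniform density with semi-axes $a \ge b$, the second central moment along the major axis is

$$\mathrm{Var}(x) = \frac{\iint_E x^2\,dA}{\iint_E dA} = \frac{(\pi/4)\,a^3 b}{\pi a b} = \frac{a^2}{4},$$

so the corresponding eigenvalue of the covariance matrix is $\lambda = a^2/4$, giving $a = 2\sqrt{\lambda}$ and a full major axis length of $2a = 4\sqrt{\lambda}$; hence the factor of 4 (and likewise for the minor axis). For any non-elliptical region, the ratio between axis length and $\sqrt{\lambda}$ is simply some other number, which is why the estimate degrades there.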
Let me explain what I'm trying to do.
I have a plot of an image's points/pixels in RGB space.
What I am trying to do is find elongated clusters in this space. I'm fairly new to clustering techniques and maybe I'm not doing things correctly; I'm trying to cluster using MATLAB's built-in k-means, but it appears that this is not the best approach in this case.
What I need to do is find "color clusters".
This is what I get after applying k-means to an image.
This is how it should look:
for an image like this:
Can someone tell me where I'm going wrong, and what I can do to improve my results?
Note: Sorry for the low-res images; these are the best I have.
Are you trying to replicate the results of this paper? I would say just do what they did.
However, I will add to that, since there are some issues with the current answers.
Yes, your clusters are not spherical, which is an assumption k-means makes. DBSCAN and MeanShift are two common methods for handling such data, as they can cope with non-spherical clusters. However, your data appears to have one large central clump that spreads outwards in a few finite directions.

For DBSCAN, this means it will either put everything into one cluster or make everything its own cluster, since DBSCAN assumes roughly uniform density and requires that clusters be separated by a low-density margin.

MeanShift will likely have difficulty because everything seems to be coming from one central lump; that will be the area of highest density toward which the points shift, and they will converge to one large cluster.

My advice would be to change color spaces. RGB has issues, and the assumptions most algorithms make will probably not hold up well in it. Which clustering algorithm you should use will then likely change in the different feature space, but hopefully it will make the problem easier to handle.
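As a minimal sketch of the color-space change (the clustering step below is still k-means, purely to keep the example short; swap in whatever algorithm suits the new space), assuming the Image Processing Toolbox (rgb2lab) and the Statistics and Machine Learning Toolbox (kmeans) are available:

    img = imread('peppers.png');             % stand-in RGB image
    lab = rgb2lab(img);                      % CIE L*a*b*: chroma separated
                                             % from lightness
    ab  = reshape(lab(:, :, 2:3), [], 2);    % cluster on a*, b* only
    k   = 6;                                 % cluster count (hand-picked)
    idx = kmeans(ab, k, 'Replicates', 3);    % restarts avoid bad local minima
    labels = reshape(idx, size(img, 1), size(img, 2));
    imagesc(labels); axis image off;         % view the per-pixel cluster map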
k-means basically assumes clusters are approximately spherical. In your case they are definitely NOT. Try fitting a Gaussian to each cluster with a non-spherical (full) covariance matrix.

Basically, you will be following the same expectation-maximization (EM) steps as in k-means, with the only exception that you will be modeling and fitting the covariance matrix as well.

Here's an outline of the algorithm:
1. Init: assign each point at random to one of the k clusters.
2. For each cluster, estimate the mean and covariance of its points.
3. For each point, estimate its likelihood of belonging to each cluster. Note that this likelihood is based not only on the distance to the center (mean), but also on the shape of the cluster as encoded by its covariance matrix.
4. Repeat stages 2 and 3 until convergence, or until a predefined number of iterations is exceeded.
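If the Statistics and Machine Learning Toolbox is available, fitgmdist implements this EM procedure (its default initialization differs from the random assignment in stage 1, but the loop is the same). A minimal sketch, assuming X is an N-by-3 matrix of RGB points and k = 6 is a hand-picked cluster count:

    k  = 6;
    gm = fitgmdist(X, k, ...
         'CovarianceType', 'full', ...       % allows elongated, tilted clusters
         'Replicates', 5, ...                % several random EM restarts
         'Options', statset('MaxIter', 500));
    idx = cluster(gm, X);                    % hard assignment (argmax of stage 3)
    p   = posterior(gm, X);                  % soft per-cluster likelihoods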
Take a look at density-based clustering algorithms, such as DBSCAN and MeanShift. If you are doing this for segmentation, you might want to add pixel coordinates to your vectors.
I am trying to implement Shape Context in MATLAB, and in particular to make it rotation invariant.
The general approach for shape context is to compute the distances and angles between each pair of interest points in a given image, then bin these values into a histogram based on the ranges they fall into. You do this for both a standard and a test image. To match the two images, you use a chi-square function to estimate a "cost" between each possible pair of points in the two sets of histograms. Finally, you use an optimization technique, such as the Hungarian algorithm, to find the optimal assignment of points, and then sum up the total cost, which will be lower for good matches.
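A minimal sketch of that matching stage only, assuming H1 (N1-by-B) and H2 (N2-by-B) are matrices of per-point shape-context histograms, already normalized; matchpairs (MATLAB R2019a+) solves the same linear assignment problem as a hand-rolled Hungarian implementation:

    N1 = size(H1, 1); N2 = size(H2, 1);
    C  = zeros(N1, N2);                      % chi-square cost between all pairs
    for i = 1:N1
        for j = 1:N2
            d = H1(i, :) - H2(j, :);
            s = H1(i, :) + H2(j, :);
            C(i, j) = 0.5 * sum(d.^2 ./ max(s, eps));  % eps avoids 0/0 bins
        end
    end
    M = matchpairs(C, max(C(:)));            % one-to-one assignment; the second
                                             % argument is the cost of leaving a
                                             % point unmatched
    totalCost = sum(C(sub2ind(size(C), M(:, 1), M(:, 2))));  % lower = better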
I've checked several websites and papers, and they say that to make the above approach rotation invariant, you need to calculate each angle between each pair of points using the tangent vector as the x-axis (e.g., http://www.cs.berkeley.edu/~malik/papers/BMP-shape.pdf, page 513).
What exactly does this mean? No one seems to explain it clearly. Also, from which of each pair of points would you get the tangent vector, or would you average the two?
A couple of other people suggested I could use gradients (which are easy to find in MATLAB) as a substitute for the tangent vectors, though I do not seem to get reasonable cost scores with this. Is it feasible to do this with gradients?

Should the gradient work as this dominant orientation?
What do you mean by ordering the bins with respect to that orientation? I was originally going to have a square matrix of bins, with the radius between two given points determining the column in the matrix and the calculated angle between them determining the row.
Thank you for your insight.
One way of achieving (somewhat) rotation invariance is to make sure that, wherever you compute your image descriptor, its orientation (that is, the ordering of the bins) is (roughly) the same. To achieve that, you pick the dominant orientation at the point where you extract each descriptor and order the bins with respect to that orientation. This way you can compare different descriptors bin-to-bin, knowing that their ordering is the same: relative to each one's local dominant orientation.
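A minimal sketch of one interpretation of this, using the image gradient direction as the dominant orientation (an assumption; the paper linked above uses the contour tangent instead). The image I, keypoint (r, c), other point (r2, c2), and bin count nAng are all assumed, and the row/angle sign conventions are worth double-checking:

    [~, Gdir] = imgradient(double(I));        % gradient direction, in degrees
    theta0 = deg2rad(Gdir(r, c));             % dominant orientation at keypoint
    ang = atan2(r2 - r, c2 - c) - theta0;     % angle measured relative to theta0
    ang = mod(ang, 2*pi);                     % wrap into [0, 2*pi)
    angBin = min(floor(ang / (2*pi/nAng)) + 1, nAng);  % row index into the
                                              % histogram; the radius picks the
                                              % column exactly as you planned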
From my personal experience (which is not very much), these methods look better on paper than they work in practice.
I have a dataset consisting of a large collection of points in three-dimensional Euclidean space. In this collection of points, I am trying to find the point that is nearest to the area with the highest density of points.
So my problem consists of two steps:
1: Determine where the density of the distribution of points is at its highest.
2: Determine which point is nearest to the point found in 1.
Point 2 I can manage, but I'm not sure how to solve point 1. I know there are a lot of functions for density estimation in MATLAB, but I'm not sure which one would be the most suitable, or the most straightforward to use.
Does anyone know?
My command of statistics is a little bit rusty, but as far as I can tell, this type of problem calls for multivariate analysis. Someone suggested I use multivariate kernel density estimation, but I'm not really sure if that's the best solution.
Density is a measure of mass per unit volume. On the assumption that your points all have the same mass, you are, I suppose, trying to measure the number of points per unit volume. So one approach is to divide your subset of Euclidean space into lots of little unit volumes (let's call them voxels, like everyone does) and count how many points there are in each one. The voxel with the most points is where the density of points is at its highest. This is, of course, numerical integration of a sort. If your points were distributed according to some analytic function (and I guess they are not), you could solve the problem with pencil and paper.
You might make this approach as sophisticated as you like, perhaps initially dividing your space into 2 x 2 x 2 voxels, then choosing the voxel with the most points and subdividing that in turn until your criteria are satisfied.
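A minimal sketch of the fixed-grid version, assuming X is an N-by-3 matrix of points and nBins is a hand-picked resolution:

    nBins = 20;                                % voxels per axis
    edges = cell(1, 3);
    bins  = zeros(size(X));
    for d = 1:3
        edges{d} = linspace(min(X(:, d)), max(X(:, d)), nBins + 1);
        bins(:, d) = discretize(X(:, d), edges{d});
    end
    counts = accumarray(bins, 1, [nBins nBins nBins]);   % points per voxel
    [~, imax] = max(counts(:));
    [i, j, k] = ind2sub(size(counts), imax);             % densest voxel
    center = [mean(edges{1}(i:i+1)), ...
              mean(edges{2}(j:j+1)), ...
              mean(edges{3}(k:k+1))];                    % its center (step 1)
    [~, nearest] = min(sum((X - center).^2, 2));         % closest point (step 2)

If you would rather have a smooth estimate than raw counts, MATLAB's mvksdensity (the multivariate kernel density estimation mentioned in the question) can be evaluated at the data points themselves; the point with the highest estimate then answers both steps at once.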
I hope this will get you started on your point 1; you seem to be OK with point 2 so I'll stop now.
EDIT
It looks as if triplequad might be what you are looking for.