MATLAB: Using the Hough transform to detect circles

I am writing MATLAB code that takes in a photo and detects circular objects. For example, the function takes a picture of a peach (a circular object) as input and returns the same image with the peach circled.
Currently, I am using the Hough transform via the imfindcircles function. However, this function requires me to specify a radius range and some sort of sensitivity/threshold value. These values differ for different image sizes and different sizes of round objects, so to get the desired output I would have to tune them manually for each input image, which is not what I want. I'm going to use this function on 100+ images, so doing this by hand is impossible.
My question: is there any way I can make my circular object detection function less manual, and possibly completely automatic (requiring no input other than the image itself)?

Complexity of circle detection
The Hough transform is a voting procedure that requires assumptions about the minimum and maximum radii of your circles. Generally speaking, using the Randomized Hough Transform for circles, you would pick three points, try to form a circle through them, and check whether its radius lies within the desired range. Running this for a good number of iterations, you should find peaks (multiple hits) in your accumulator matrix that represent circles. If you made no assumptions about object size at all, it should be clear that this method would not work.
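For illustration only, here is a hedged MATLAB sketch of that three-point sampling step; the input file name, the iteration count, and the [rMin, rMax] range are all assumptions you would have to tune:
bw = edge(im2gray(imread('peach.png')), 'canny');  % hypothetical input
[y, x] = find(bw);                   % edge pixel coordinates
rMin = 20; rMax = 80;                % assumed radius range
votes = containers.Map('KeyType', 'char', 'ValueType', 'double');
for it = 1:5000                      % number of random triples (assumed)
    idx = randperm(numel(x), 3);
    p = [x(idx), y(idx)];            % three random edge points
    % Circumcircle via the perpendicular-bisector equations:
    A = 2 * [p(2,:) - p(1,:); p(3,:) - p(1,:)];
    b = [sum(p(2,:).^2 - p(1,:).^2); sum(p(3,:).^2 - p(1,:).^2)];
    if abs(det(A)) < 1e-6, continue; end      % nearly collinear points
    c = A \ b;                       % circle center [cx; cy]
    r = norm(p(1,:)' - c);
    if r < rMin || r > rMax, continue; end    % radius-range assumption
    key = sprintf('%d_%d_%d', round(c(1)), round(c(2)), round(r));
    if isKey(votes, key)
        votes(key) = votes(key) + 1; % accumulate; peaks indicate circles
    else
        votes(key) = 1;
    end
end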

Do some routine pre-processing to adjust for contrast and brightness, e.g. contrast stretching or histogram equalization. If you might have some noise in the images, apply a bit of Gaussian smoothing as well.
Normalizing images this way will reduce inter-image variance and help you with setting thresholds.
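For instance, a minimal normalization pass along these lines (standard Image Processing Toolbox calls; the sigma and file name are assumptions, and im2gray needs a fairly recent release):
g = im2gray(imread('input.png'));   % hypothetical input image
g = imadjust(g);                    % contrast stretching...
% g = histeq(g);                    % ...or histogram equalization
g = imgaussfilt(g, 1.5);            % mild Gaussian smoothing for noise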

The Hough transform can be used to detect circles, lines, and other shapes. You can refer to the demos in MATLAB; there are several example applications of the Hough transform there.
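For example, a less manual imfindcircles call might tie the radius search range to the image size rather than hand-tuning it per image; the fractions and sensitivity below are assumptions to tune, and imfindcircles may warn if the range is very wide (split it into sub-ranges if so):
I = imread('peach.png');                     % hypothetical input
g = im2gray(I);
rMin = round(0.05 * min(size(g)));           % assumed fraction of image size
rMax = round(0.45 * min(size(g)));
[centers, radii] = imfindcircles(g, [rMin, rMax], ...
    'ObjectPolarity', 'bright', 'Sensitivity', 0.92);
imshow(I); viscircles(centers, radii);       % draw the detections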

Related

How to check if two shapes in a binary image are similar in MATLAB?

I have two binary images, each of which have a single white filled parallelogram and a black background. The only difference between the two images is that the parallelograms are in different locations and are slightly different from one another in shape. All the parameters between the two images are the same except for that one change.
I want to check how similar the shape of the two parallelograms are, by using some sort of comparing measure.
I looked into the ssimval function in MATLAB, but it seems to take the whole image into consideration rather than just the white blobs. Is there any other function I can use for this purpose?
For visually checking similarity, you can plot their probability density functions; for numeric similarity, compute some similarity measure, such as the KL divergence.
In a simple way, you can segment your binary image with the simple bwlabel function, then use the regionprops function to find the perimeter and area of your desired segment. The center of the region is another useful comparison point.
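A short sketch of that route, assuming bw1 and bw2 are the two binary images (the property names are standard regionprops options; the comparison itself is up to you):
s1 = regionprops(bwlabel(bw1), 'Area', 'Perimeter', 'Centroid');
s2 = regionprops(bwlabel(bw2), 'Area', 'Perimeter', 'Centroid');
% With one blob per image, compare the scalar descriptors, e.g.:
areaDiff = abs(s1(1).Area - s2(1).Area) / max(s1(1).Area, s2(1).Area);
perimDiff = abs(s1(1).Perimeter - s2(1).Perimeter) / ...
    max(s1(1).Perimeter, s2(1).Perimeter);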
You could do it with polygons, by using the polyshape class.
First convert the binary mask to a set of corner points. You can do it with a convex hull, by calling regionprops(bwI, 'ConvexHull').
Then convert the corner points into polygons, by calling polyshape.
Finally, measure the dissimilarity of the polygons by their turning distance. The turning distance is rotation- and scaling-invariant, so you may want to add terms to your distance metric for those properties if your problem demands it.
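A sketch of the polygon route, assuming bw1 and bw2 hold the two masks; note that turningdist requires a fairly recent MATLAB release, and polyshape may warn about collinear hull points:
h1 = regionprops(bw1, 'ConvexHull');   % corner points of each hull
h2 = regionprops(bw2, 'ConvexHull');
p1 = polyshape(h1(1).ConvexHull(:,1), h1(1).ConvexHull(:,2));
p2 = polyshape(h2(1).ConvexHull(:,1), h2(1).ConvexHull(:,2));
d = turningdist(p1, p2);               % 0 for identical turning functions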
A very simple way to compare two binary images is with Boolean operations. Your images contain only zero and one values, so suppose your two images are B1 and B2; the symmetric difference (xor) flags every pixel that is set in one image but not the other:
C = xor(B1, B2);
if sum(C(:)) == 0
    % the two images are identical
else
    % the two images differ
end

Use scale-space representation to filter one image

I hope to use a scale-space representation to filter one image. Features in an image can be filtered with a Gaussian smoothing filter using one optimal sigma; that is, under a scale-space representation, different features in one image are each expressed best at a different scale.
For example, I have an image with one tree in it. In the scale-space representation, three sigma values are used, denoted sigma0, sigma1, and sigma2. The ground is best expressed in the image smoothed with sigma0 because it mainly contains texture; the branches are best expressed in the image smoothed with sigma1, and the trunk in the image smoothed with sigma2. If I filter the image, I want the filtered pixels for the ground to come from the image smoothed with sigma0.
The filtered pixels for the branches should come from the image smoothed with sigma1, and those for the trunk from the image smoothed with sigma2.
This requires determining in which smoothed image each pixel is expressed best. Is this idea plausible?
I am trying to use the difference of Gaussians of two successive smoothed images to perform this task. Is there any other way to combine the three smoothed images?
I use MATLAB to implement the idea. The values of the three sigmas are 1.0, 2.0, and 3.0, and the corresponding Gaussian kernel sizes are 3, 5, and 7; I use the fspecial function to generate the kernels. Are these parameters reasonable? Please share your experience with scale-space representations, and feel free to point me to useful papers.
Your idea is very much plausible! You are just one step away. I did something very similar once, and it looked like this:
After smoothing your images and extracting the edges for each smoothing step (I used a weighted Sobel filter for this, to compensate for maxima suppression after Gaussian filtering, since DoG was not quite stable for my application), you can project (and normalize) your whole stack of edge images into a single image ("cumulative edges") that will contain the characteristic edges. You can then compare the cumulative-edges image (using cross-correlation or whatever you wish) with every single image in your edge stack; the biggest value of this comparison then gives the smoothing scale at which the pixel is expressed best.
Hope that makes sense for you after reading it a couple of times.
Also, don't be afraid of using much bigger kernel sizes. It all depends on your application, but I ended up using kernels of size 51 and bigger (I was working with 40 MP images, though...).
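If it helps, here is a minimal MATLAB sketch of that stack-of-edges idea (not the original code; it assumes a grayscale double image I, placeholder sigmas, and uses a per-pixel argmax over the stack as the comparison step):
sigmas = [1, 2, 3];
edges = zeros([size(I), numel(sigmas)]);
for k = 1:numel(sigmas)
    S = imgaussfilt(I, sigmas(k));      % scale-space smoothing
    G = imgradient(S, 'sobel');         % edge strength at this scale
    edges(:,:,k) = G / max(G(:));       % normalize each scale
end
cumEdges = sum(edges, 3);               % "cumulative edges" image
[~, bestScale] = max(edges, [], 3);     % per-pixel best-expressing scale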
T. Lindeberg has literally dozens of papers related to this problem. I found this one the most useful, but since you are already on the right track, I don't think reading all 50 pages will make you that much smarter. The most important part of it is perhaps this:
Principle for scale selection: In the absence of other evidence, assume that a scale level, at which some (possibly non-linear) combination of normalized derivatives assumes a local maximum over scales, can be treated as reflecting a characteristic length of a corresponding structure in the data.
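As a rough, hedged illustration of that principle, one common instantiation is the scale-normalized Laplacian; the sketch below assumes a grayscale double image I and a placeholder scale range (MATLAB's del2 approximates the Laplacian only up to a constant factor, which does not change the location of the maximum):
sigmas = 1:0.5:8;                       % assumed scale range
resp = zeros([size(I), numel(sigmas)]);
for k = 1:numel(sigmas)
    s = sigmas(k);
    L = imgaussfilt(I, s);
    resp(:,:,k) = s^2 * abs(del2(L));   % scale-normalized Laplacian response
end
[~, bestIdx] = max(resp, [], 3);        % local maximum over scales
charScale = sigmas(bestIdx);            % per-pixel characteristic scale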

Find edge orientations of strongly unclosed edges in unsharp point clouds

I want to extract the orientations of strongly unclosed edges from a binary image. The image consists of blobs, blob rows, and unsharp edges, as shown below. In the end, every pixel should be assigned information about the orientation of the edge; if the existence of an edge is not certain, the point should not be assigned one. Parameters of a line or a whole curve would be fine but are not strictly needed. The edges to be found are marked as red curves:
I have tried a lot, and I hope for some hints regarding methods I could use.
Hough transform with lines: because of the presence of curves as well as point clouds, it is difficult to extract the relevant extreme values of the HT.
Hough transform with ellipses: the same disadvantages as with lines; in addition, the number of curves and point arrangements to be detected exceeds the limits of a fast process.
Local masks: go from pixel to pixel and estimate the orientation with a directed mask (for example, count all white pixels in every considered direction and decide based on the highest count). This method obscures the view of bigger structures like whole blob rows, and it is easy to see that it will fail in clouds that an edge passes through.
I guess estimating the orientation by considering both local and global information is the only way; I need to know something about the connectivity of these blobs before making local decisions.
Btw, I am using MATLAB.
What about using image moments? You can calculate the angle, major axis, and eccentricity of each single blob and define parameters to merge overlapping ones.
You can use regionprops(), or start from scratch with this code I just so happen to have here:
function M = ImMoment(Image, ii, jj)
% Raw image moment: sum over all pixels of row^ii * col^jj * intensity.
ImSize = size(Image);
M = 0;
for k = 1:ImSize(1)        % rows
    for l = 1:ImSize(2)    % columns
        M = M + k^ii * l^jj * Image(k,l);
    end
end
end
and for the covariance matrix:
function [Matrix, Centroid, Angle, Len, Wid, Eccentricity] = CovMat(Image)
% Centroid from the first-order moments (column mean, row mean).
Centroid = [ImMoment(Image,0,1)/ImMoment(Image,0,0), ...
            ImMoment(Image,1,0)/ImMoment(Image,0,0)];
% Central second-order moments (variances and covariance).
Miu20 = ImMoment(Image,0,2)/ImMoment(Image,0,0) - Centroid(1)^2;
Miu02 = ImMoment(Image,2,0)/ImMoment(Image,0,0) - Centroid(2)^2;
Miu11 = ImMoment(Image,1,1)/ImMoment(Image,0,0) - Centroid(1)*Centroid(2);
Matrix = [Miu20, Miu11; Miu11, Miu02];
% Eigenvalues of the covariance matrix give the principal axes.
Lambda1 = (Miu20+Miu02)/2 + sqrt(4*Miu11^2 + (Miu20-Miu02)^2)/2;
Lambda2 = (Miu20+Miu02)/2 - sqrt(4*Miu11^2 + (Miu20-Miu02)^2)/2;
Angle = 1/2*atand(2*Miu11/(Miu20-Miu02));  % orientation in degrees
Len = 4*sqrt(max(Lambda1,Lambda2));        % major axis length
Wid = 4*sqrt(min(Lambda1,Lambda2));        % minor axis length
Eccentricity = sqrt(1 - Lambda2/Lambda1);
end
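For example, a hypothetical call on one labelled blob might look like this:
bw = imread('blobs.png') > 0;           % assumed binary input
lbl = bwlabel(bw);
blob = double(lbl == 1);                % isolate one blob as a 0/1 image
[C, centroid, ang, len, wid, ecc] = CovMat(blob);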
Play around with that a little; I'm pretty sure it should work.

Segmenting 3D shapes out of thick "lines"

I am looking for a method that finds shapes in a 3D image in MATLAB. I don't have a real 3D sample image right now; in fact, my 3D image is actually a set of quantized 2D images.
The figure below is what I am trying to accomplish:
Although the example figure above is a 2D image, please understand that I am trying to do this in 3D. The input shape has these "tentacles", and I have to look for irregular shapes among them. The size of a tentacle can change from one point to another, but at a "consistent and smooth" pace: it can be big at first and then gradually become smaller. But if the shape suddenly gets bigger, not so gradually, like the red bottom-right area in the figure above, then that is one of the volumes of interest. Note that these shapes tend to be rounded and spherical, but some of them are completely arbitrary and random.
I've tried the following methods so far:
Erode n times and dilate n times: given that the "tentacles" are always smaller than the volume of interest, this method works as long as the volume is not too small. It also needs a mechanism to deal with thicker portions of a tentacle, which can somehow become false positives.
Hough transform: although I was suggested this method earlier (in Segmenting circle-like shapes out of Binary Image), I see that it works for some of the more rounded cases; at the same time, more difficult cases, such as less rounded, distorted, and/or arbitrary shapes, can slip through it.
Isosurface: because my input is a set of quantized 2D images, an isosurface lets me reconstruct the image in 3D and see things more clearly. However, I'm not sure what could be done further in this case.
So can anyone suggest other techniques for segmenting such shapes out of these "tentacles"?
Every point in your image is either part of a tentacle or part of a volume of interest. If the expected girth of a tentacle is unknown a priori, then method 1 won't work, because we cannot set n. However, we know that the n that erases a tentacle is smaller than the n that erases a node. You can replace each point with an integer representing its distance to the edge. Effectively, this can be done by successive single-pixel erosions, replacing each pixel with the count of the iteration at which it was erased. Let's call this the thickness at the pixel; the term of art for it is the distance transform.
Now we want to search for regions that have a higher-than-typical morphological distance from the boundary. I would do this by first skeletonizing the image (http://www.mathworks.com/help/toolbox/images/ref/bwmorph.html) and then searching for local maxima of the thickness along the skeleton: points on the skeleton where the thickness is larger than at the neighboring points.
Finally, I would sort the local maxima by thickness; a threshold on this should help separate the volumes of interest from the false positives.
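A hedged sketch of that pipeline in MATLAB (bw is an assumed binary volume; bwdist, bwskel, and imregionalmax all accept 3D input, and the final threshold is yours to choose):
D = bwdist(~bw);                        % distance transform = "thickness"
skel = bwskel(bw);                      % skeletonize (2D or 3D)
thick = D .* skel;                      % thickness sampled on the skeleton
peaks = imregionalmax(thick) & skel;    % local thickness maxima on skeleton
candidates = sort(D(peaks), 'descend'); % threshold these to pick volumes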

How to detect curves in a binary image?

I have a binary image, and I want to detect/trace curves in that image. I don't know anything about the curves (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate out curves from other lines. I am only interested in curved lines and their parameters, and I want to store the curve information (in an array) for later use.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image; when you encounter a black pixel, apply a standard flood-fill algorithm to find all the pixels in that shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to the image to connect them.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking the endpoints of the curve and calculating how far the other points lie from the line defined by those endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach is to measure the ratio between the length of the object and the distance between its endpoints. This ratio is near 1 for lines and larger for curves and wiggly shapes; a sketch of both tests is shown below.
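Here is a minimal sketch of both tests; it assumes pts is an N-by-2 [row col] list of ordered pixels for one thin component (e.g. from bwboundaries), and the deviation threshold is a placeholder:
p1 = pts(1,:); p2 = pts(end,:);            % endpoints of the component
v = (p2 - p1) / norm(p2 - p1);             % unit chord direction
d = abs((pts - p1) * [v(2); -v(1)]);       % perpendicular distance to chord
isCurve = max(d) > 2;                      % test 1: deviation threshold
arcLen = sum(vecnorm(diff(pts), 2, 2));    % approximate curve length
ratio = arcLen / norm(p2 - p1);            % test 2: near 1 for straight lines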
If your images contain angled shapes that you wish to separate from curves, you might inspect the directional gradient along your curves: segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between the angles is too high, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can solve with a skeleton transformation. For MATLAB implementations of the skeleton transform and of finding curve endpoints, see the MATLAB Image Processing Toolbox documentation.
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, then make them white. This collects the points in one object and removes it from the image. Keep repeating until no black pixels remain.
If you want to separate the curves from the straight lines, try line fitting and then compute the coefficient of correlation. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
There is also another possible solution, using chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns each pixel a value between 1 and 8 (or 0 to 7) saying at which location in its 8-connected neighbourhood its connected predecessor lies. Thus, as in Hackworth's suggestion, one performs connected-component labelling and then calculates the chain code for each component curve. Looking at the distribution and the gradient of the chain codes, one can distinguish easily between lines and curves. The problem with the method, though, is oscillating curves, for which the gradient is less useful and one must depend on clustering the chain codes.
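As a hypothetical sketch (the direction table is one common 0-7 convention; bwboundaries returns the ordered 8-connected boundary pixels of an assumed binary image bw):
B = bwboundaries(bw);                   % trace all object boundaries
pts = B{1};                             % ordered [row col] boundary pixels
steps = diff(pts);                      % 8-connected steps between pixels
dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];
[~, code] = ismember(steps, dirs, 'rows');
code = code - 1;                        % Freeman codes 0..7
turning = mod(diff(code) + 4, 8) - 4;   % signed gradient of the chain code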
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).