I have a binary image, and I want to detect/trace curves in that image. I don't know anything about them in advance (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate the curves from the other lines; I am only interested in curved lines and their parameters. I want to store the information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
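For example, a minimal MATLAB sketch of the straight-line case with the standard Hough transform (this assumes your binary image is in a variable BW, with the shapes as white/true pixels; invert with ~BW if your curves are black on white, and the peak count and line parameters are only illustrative):
[H, theta, rho] = hough(BW);                    % Hough accumulator (parameter space)
peaks = houghpeaks(H, 10);                      % up to 10 strongest candidates
lines = houghlines(BW, theta, rho, peaks, 'FillGap', 5, 'MinLength', 20);
figure, imshow(BW), hold on
for k = 1:numel(lines)
    xy = [lines(k).point1; lines(k).point2];    % endpoints of the detected segment
    plot(xy(:,1), xy(:,2), 'LineWidth', 2)
end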
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
Reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image; when you encounter a black pixel, apply a standard flood-fill algorithm to find all the pixels in that shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you may need to apply a morphological closing operation to the image to connect them.
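A minimal sketch of that segmentation step (this assumes the variable Image holds a binary image with black shapes on a white background; the names and the structuring-element size are illustrative):
BW = ~Image;                         % make the shapes the foreground (true) pixels
BW = imclose(BW, strel('disk', 2));  % optional: close small gaps between pixels
CC = bwconncomp(BW);                 % connected components, replaces a manual flood fill
shapes = cell(CC.NumObjects, 1);
for k = 1:CC.NumObjects
    [rows, cols] = ind2sub(size(BW), CC.PixelIdxList{k});
    shapes{k} = [cols, rows];        % [x y] pixel coordinates of the k-th shape
end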
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking the two endpoints of the shape and calculating how far the other points lie from the line defined by those endpoints. If this distance exceeds some maximum, you have a curve instead of a line.
Another approach is to measure the ratio between the length of the object and the straight-line distance between its endpoints. This ratio is near 1 for lines and larger for curves and wiggly shapes.
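Here is a rough sketch of both tests for one shape, assuming pts is an ordered n-by-2 list of [x y] points along the curve (for example from bwtraceboundary); the thresholds are only illustrative:
p1 = pts(1,:);  p2 = pts(end,:);            % the two endpoints
d   = p2 - p1;
nrm = [-d(2), d(1)] / norm(d);              % unit normal of the endpoint-to-endpoint line
maxDev = max(abs((pts - p1) * nrm'));       % largest perpendicular deviation from that line
isCurve1 = maxDev > 3;                      % more than 3 px away from the line => curve
pathLen  = sum(sqrt(sum(diff(pts).^2, 2))); % length along the object
chordLen = norm(p2 - p1);                   % straight-line distance between the endpoints
isCurve2 = pathLen / chordLen > 1.1;        % near 1 for lines, larger for curves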
If your images contain angled shapes, which you wish to separate from curves, you can inspect the directional gradient along each shape: segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between these angles is too large, you do not have a smooth curve but an angled shape.
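A sketch of that smoothness test, again assuming pts is an ordered n-by-2 list of points along the shape (the step size and angle threshold are illustrative):
step = 5;                              % spacing between the sampled points
s    = pts(1:step:end, :);
v    = diff(s);                        % direction vector from each sample to the next
ang  = atan2d(v(:,2), v(:,1));         % direction angle of each segment, in degrees
dAng = abs(diff(ang));
dAng = min(dAng, 360 - dAng);          % wrap the angle differences into [0, 180]
hasCorner = any(dAng > 45);            % a sharp turn means an angled shape, not a smooth curve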
A possible difficulty in implementation is thick lines, which you can solve with a skeleton transformation. For a MATLAB implementation of skeletonization and for finding curve endpoints, see the Image Processing Toolbox documentation (e.g. bwmorph with the 'skel' and 'endpoints' operations).
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, and then turn them white. This collects the points of one object and removes it from the image. Keep repeating until there are no black pixels left.
If you want to separate the curves from the straight lines, try line fitting and then compute the correlation coefficient. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
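For the line case, a sketch of such a test in MATLAB (x and y are the pixel coordinates of one object; the threshold is illustrative, and near-vertical lines would need x and y swapped):
p    = polyfit(x, y, 1);       % least-squares line fit
yFit = polyval(p, x);
R    = corrcoef(y, yFit);      % 2x2 correlation matrix
r2   = R(1,2)^2;               % coefficient of determination, close to 1 for straight lines
isLine = r2 > 0.98;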
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 1 and 8 (or 0 and 7) to each pixel, saying in which position of its 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestion, one performs connected-component labelling and then calculates the chain code for each component curve. By looking at the distribution and the gradient of the chain codes, one can easily distinguish between lines and curves. The problem with this method, though, is oscillating curves, in which case the gradient is less useful and one has to rely on the clustering of the chain codes.
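A sketch of computing the chain code for one component, assuming B is an ordered n-by-2 list of [row col] boundary pixels in which successive pixels are 8-connected (e.g. from bwboundaries):
dirs = [ 0  1; -1  1; -1  0; -1 -1; ...  % offsets for Freeman codes 0..3 (E, NE, N, NW)
         0 -1;  1 -1;  1  0;  1  1];     % offsets for Freeman codes 4..7 (W, SW, S, SE)
steps = diff(B);                         % [row col] step between successive pixels
code  = zeros(size(steps,1), 1);
for k = 1:size(steps,1)
    [~, loc] = ismember(steps(k,:), dirs, 'rows');
    code(k) = loc - 1;                   % Freeman code 0..7
end
dCode = mod(diff(code) + 4, 8) - 4;      % "gradient" of the chain code, wrapped into [-4, 3]
% A near-constant code suggests a straight line; small, slowly varying dCode suggests a smooth curve.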
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).
Hi, I want to detect the fingertip points and valley points of a hand using the Hough transform. Simply put, my question is: is [H,theta,rho] = hough(BW) any good for extracting these points?
The image is here:
https://www.dropbox.com/sh/n1lz7b5eedzbui7/AADwy5O1l7sWf5aOx7KWAmhOa?dl=0
Thanks.
The standard Hough transform is just for detecting straight lines, nothing more and nothing less. The MATLAB function hough (please see here) returns the so-called Hough space H, a parametric space which is used to find these lines, and the parametric representation of each line: rho = x*cos(theta) + y*sin(theta).
You will have to do more than this to detect your desired points. Since your fingers usually won't consist of straight lines, I think you should consider something else anyway; e.g. if you can assume a curve as clean as the one in your image, maybe this is interesting for you.
Another simple technique you might consider is to compare the straight line distance between two points on your hand line to the distance between those two points along the perimeter (geodesic distance). For this you would need an ordered list of points along the perimeter.
Along regions of high curvature, the straight line distance between two points will be smaller than the number of pixels between those two points along the perimeter.
For example, you could check perimeter pixels separated by 10 pixels. That is, you would search through the list and compare the point at index N with the point at index N+10. (You'll need to loop back around to the beginning of the list as you approach the end.) If the straight-line distance between these two points is also nearly 10 pixels, then you know those points lie on a straight section of the perimeter. If the straight-line distance is much smaller than 10, then you know the perimeter curves in some fashion between those points. Whether you check pixels that are 5, 10, 20, or 30 items apart in the list will depend on the resolution of your image and the curves you're looking for.
This technique is useful because it's simple and quick to implement. Maybe it would work well enough for your needs.
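A sketch of this check, assuming P is an ordered n-by-2 list of perimeter points of a closed contour in which successive points are adjacent pixels (the gap and the threshold are illustrative):
gap  = 10;                                    % compare point i with point i+10
n    = size(P, 1);
idx2 = mod((0:n-1) + gap, n) + 1;             % partner index, wrapping around the contour
chord = sqrt(sum((P(idx2,:) - P).^2, 2));     % straight-line distance for each pair
curved = chord < 0.8 * gap;                   % the along-perimeter distance is ~gap pixels,
% so a much shorter chord means the perimeter curves between those two points.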
Yet another way: simplify the outline to small line segments, and then you can calculate the line-line angle between adjacent segments. To simplify the curves, implement the Ramer-Douglas-Peucker algorithm. A little experimentation will reveal what parameter settings will work for your application.
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
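A sketch of this approach, assuming P is an ordered n-by-2 list of [x y] outline points and that you have a recent Image Processing Toolbox with reducepoly (which implements Ramer-Douglas-Peucker); the tolerance is illustrative:
V    = reducepoly(P, 0.02);            % simplified polyline (tolerance as a fraction of the bounding box)
seg  = diff(V);                        % direction vector of each remaining segment
ang  = atan2d(seg(:,2), seg(:,1));     % segment directions in degrees
turn = abs(diff(ang));
turn = min(turn, 360 - turn);          % turning angle at each interior vertex
% Large values of turn mark corners; long runs of small values mark smooth curves.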
Finally, you could look into piecewise curve fitting: a curve would be fitted to small segments of the outline. This can get very complicated, and researchers continue to find ways to decompose complex figures into a limited number of more basic shapes or curves. I recommend trying the simplest technique and then only adding complexity if you need it.
As input I have a segmented image of the upper body, and I'm trying to detect the shoulders in it.
I narrowed the search region down using a threshold calculated from simple known ratios between head size and shoulder width.
Now I have the shoulder region and have performed edge detection on it.
Now I need to find the shoulder points.
Is there a fast way to detect the shoulder curves?
I'm using MATLAB.
This is my input image:
A Bezier curve is just a mathematical description of a curve (repeated linear interpolation between control points).
It is not a curve tracer.
If you need Bezier curve descriptions, you will need to do a best fit between a Bezier curve model and your data. Before you get started, you should probably play around with Bezier curves to get a feeling for how they behave.
See here: http://www.mathworks.com/matlabcentral/fileexchange/33828-generalised-bezier-curve-matlab-code
for a Bezier curve renderer in MATLAB.
It displays the Bezier curve when you provide some control points.
There are a few methods to actually fit a Bezier curve to a set of data; here is one for MATLAB (using the least-squares method):
http://www.mathworks.com/matlabcentral/fileexchange/15542-cubic-bezier-least-square-fitting
It will sometimes work nicely and sometimes fail miserably; this is due to the least-squares method and the uniform parameterization used. It should work OK for your shoulder problem.
You need to extract the edge data as data points, but that should be trivial.
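One hedged way to get those data points, assuming E is the binary edge image and the shoulder edge is its longest boundary (bwboundaries traces around thin edges, which is usually good enough here):
B = bwboundaries(E);                             % cell array of [row col] boundaries
[~, longest] = max(cellfun(@(b) size(b,1), B));  % pick the longest one
pts = fliplr(B{longest});                        % convert [row col] to [x y]
% pts is the point set you would feed to the least-squares Bezier fitting code linked above.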
I want to extract the orientations of strongly unclosed edges from a binary image. The image consists of blobs, blob rows, and unsharp edges, as shown below. In the end, every pixel should be assigned information about the orientation of the edge; if the existence of an edge is not confident enough, the pixel should not be assigned anything. Parameters of a line or of a whole curve would be fine, but are not strictly needed. The edges to be found are marked as red curves:
I have tried a lot, and I hope for some hints regarding methods I could use.
Hough Transformation with Lines: Because of the existence of curves as well as point clouds it is difficult to extract the relevant extreme values of the HT.
Hough Transformation with Ellipses: Same disadvantages as ‘HT with Lines’. In addition, the number of curves and point arrangements to be detected exceeds the limits of a fast process.
Local masks: Go from pixel to pixel and estimate the orientation with the help of a directed mask (example: count all white pixels for every considered direction and make a decision based on the highest count of found pixels). With this method, the view of bigger structures like whole blob rows is obscured, and it is easy to see that it will fail in point clouds that an edge passes through.
I guess an estimation of the orientation by considering local and global information is the only way. I need to know something about the connectivity of these blobs before making local decisions.
Btw, I am using MATLAB.
What about using image moments? You can calculate the angle, major axis, and eccentricity of each single blob and define parameters for merging overlapping ones.
You can use regionprops(), or start from scratch with this code I just so happened to have here:
function M = ImMoment(Image, ii, jj)
% Raw image moment M(ii,jj): sum over all pixels of row^ii * col^jj * pixel value.
ImSize = size(Image);
M = 0;
for k = 1:ImSize(1)        % rows
    for l = 1:ImSize(2)    % columns
        M = M + k^ii * l^jj * Image(k,l);
    end
end
end
and for the covariance matrix:
function [Matrix, Centroid, Angle, Len, Wid, Eccentricity] = CovMat(Image)
% Centroid from the first-order moments: [mean column, mean row].
Centroid = [ImMoment(Image,0,1)/ImMoment(Image,0,0), ...
            ImMoment(Image,1,0)/ImMoment(Image,0,0)];
% Central second-order moments: variances and covariance of the pixel coordinates.
Miu20 = ImMoment(Image,0,2)/ImMoment(Image,0,0) - Centroid(1)^2;
Miu02 = ImMoment(Image,2,0)/ImMoment(Image,0,0) - Centroid(2)^2;
Miu11 = ImMoment(Image,1,1)/ImMoment(Image,0,0) - Centroid(1)*Centroid(2);
Matrix = [Miu20, Miu11;
          Miu11, Miu02];
% Eigenvalues of the covariance matrix give the spread along the principal axes.
Lambda1 = (Miu20+Miu02)/2 + sqrt(4*Miu11^2 + (Miu20-Miu02)^2)/2;
Lambda2 = (Miu20+Miu02)/2 - sqrt(4*Miu11^2 + (Miu20-Miu02)^2)/2;
Angle = 1/2*atand(2*Miu11/(Miu20-Miu02));    % orientation of the major axis, in degrees
Len = 4*sqrt(max(Lambda1,Lambda2));          % length of the equivalent ellipse
Wid = 4*sqrt(min(Lambda1,Lambda2));          % width of the equivalent ellipse
Eccentricity = sqrt(1 - Lambda2/Lambda1);
end
Play around with that a little; I'm pretty sure it should work.
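If you prefer the toolbox route, here is a sketch of the equivalent per-blob measurements with regionprops (assuming BW is your binary image with the blobs as foreground pixels):
stats = regionprops(BW, 'Centroid', 'Orientation', ...
                    'MajorAxisLength', 'MinorAxisLength', 'Eccentricity');
for k = 1:numel(stats)
    fprintf('blob %d: angle %.1f deg, eccentricity %.2f\n', ...
            k, stats(k).Orientation, stats(k).Eccentricity);
end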
I am writing MATLAB code that takes in a photo and detects circular objects. For example, the function takes a picture of a peach (a circular object) as input and returns the same image with the peach circled.
Currently, I am using the Hough transform via the imfindcircles function. However, this function requires me to specify a radius range and a sensitivity/threshold value. These values differ for different image sizes and object sizes, so to get the desired output I have to change them manually for each input image, which is not what I want. I'm going to use this function on 100+ images, so doing this manually is impossible.
My question is: is there any way I can make my circular-object detection function less manual, and possibly completely automatic (requiring no input values, just the image)?
Complexity of circle detection
The Hough transform is a voting procedure that requires assumptions to be made about the minimum and maximum radii of your circles. Generally speaking, using the Randomized Hough Transform for circles you would pick three points, try to form a circle through them, and check whether its radius is within the desired range. Running this for a good number of iterations, you should find peaks (multiple hits) in your accumulator matrix that represent circles. If you made no assumptions about object size at all, I think it is obvious this method wouldn't work.
Do some routine pre-processing to adjust for contrast and brightness, e.g. contrast stretching or histogram equalization. If there might be noise in the images, apply a bit of Gaussian smoothing as well.
Normalizing images this way will reduce inter-image variance and help you with setting thresholds.
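A sketch of how this could be tied together so the radius range scales with the image instead of being hand-tuned (the file name and the 5%-50% range are illustrative assumptions, and per the imfindcircles documentation a very wide radius range may need to be split into sub-ranges):
I = imread('peach.jpg');                     % hypothetical input image
G = imgaussfilt(imadjust(rgb2gray(I)), 2);   % contrast stretching + mild Gaussian smoothing
rRange = round([0.05 0.5] * min(size(G)));   % radii from 5% to 50% of the smaller image dimension
[centers, radii] = imfindcircles(G, rRange, ...
    'ObjectPolarity', 'bright', 'Sensitivity', 0.9);   % assumes a bright object on a darker background
imshow(I), viscircles(centers, radii);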
The Hough transform can be used to detect circles, lines, etc. You can refer to the demos in MATLAB; there are several examples of applying the Hough transform there.
What coding techniques would allow me to differentiate between circles, rectangles, and triangles in black-and-white image bitmaps?
You could train an Artificial Neural Network to classify the shapes :P
If noise is low enough to extract the curves, approximations can be used: for each shape, select the parameters that give the least error (the method of least squares may help here) and then compare these errors.
If the image is noisy, I would consider the Hough transform; it can be used to detect shapes with a small number of parameters, like circles (it is harder for rectangles and triangles).
Just an idea off the top of my head: scan the (pixel) image line by line, pixel by pixel. When you encounter the first white pixel (assuming the image has a black background), keep its position as a starting point and look at the eight pixels surrounding it for the next white pixel. If you find an adjacent second pixel, you can establish a direction vector between those two pixels.
Now repeat this until the direction of your vector changes (or the change is above a certain threshold). Keep the last point before the change as the endpoint of your first line and repeat the process for the next line.
Then calculate the angle between the two lines and store it. Now trace the third line. Calculate the angle between the 2nd and 3rd line as well.
If both angles are right angles, you have probably found a rectangle; otherwise you have probably found a triangle. If you can't find any straight line at all, you could conclude that you found a circle.
I know the algorithm is a bit sketchy, but I think (with some refinement) it could work if your image quality is not too bad (not too much noise, no gaps in the lines, etc.).
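One way to firm this idea up without tracing pixel by pixel (in MATLAB, since that is what most of this thread uses), assuming BW holds one white shape on a black background and a toolbox with reducepoly is available; the tolerance and vertex counts are only a rough heuristic:
B   = bwboundaries(BW);              % outer boundary of each object, as [row col] lists
pts = fliplr(B{1});                  % take the first object, convert to [x y]
V   = reducepoly(pts, 0.03);         % merge nearly collinear edges into straight segments
if isequal(V(1,:), V(end,:)), V(end,:) = []; end   % drop a duplicated closing vertex, if any
nV  = size(V, 1);                    % number of remaining corners
if     nV == 3, shape = 'triangle';
elseif nV == 4, shape = 'rectangle';
elseif nV >  6, shape = 'circle';    % many short segments suggest a round outline
else            shape = 'unknown';
end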
You are looking for the Hough Transform. For an implementation, try the AForge.NET framework. It includes circle and line hough transformations.