I am trying to detect corners (x/y coordinates) in 2D scatter vectors of data.
The data comes from a laser rangefinder, and our current platform is MATLAB (standalone programs/libraries are an option, but the nav/control code is in MATLAB, so anything external must have a MATLAB interface).
Corner detection is part of a SLAM algorithm and the corners will serve as the landmarks.
I am also aiming for something close to 100 Hz if possible (I know it's MATLAB, but my data set is pretty small).
Sample Data:
[Blue is the raw data, red is what I need to detect. (This view is effectively top down.)]
[Actual vector data from above shots]
Thus far I've tried many different approaches, some more successful than others.
I've never formally studied machine vision of any kind.
My first approach was a homebrew least-squares line fitter that recursively split lines in half until each segment met some r^2 threshold, then tried to merge segments with similar slopes/intercepts and calculated the intersections of the resulting lines. It wasn't very good, but it did work around 70% of the time with decent accuracy, though it had some bad issues with missing certain features completely.
My current approach uses the clusterdata function to segment my data based on Mahalanobis distance, and then does basically the same thing (least-squares line fitting / merging). It works OK, but I'm assuming there are better methods.
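For reference, a minimal sketch of the split-and-fit idea described above (this is not the actual CornerDetect code linked below; the r^2 threshold and minimum segment length are placeholders):

function segs = splitFit(pts, r2min)
% Recursive split-and-fit sketch: pts is an N-by-2 ordered scan,
% segs is a cell array of point blocks that each fit a line well.
% (Fitting y on x like this struggles on near-vertical walls; a total
% least-squares fit would be more robust.)
p   = polyfit(pts(:,1), pts(:,2), 1);
res = pts(:,2) - polyval(p, pts(:,1));
r2  = 1 - sum(res.^2) / sum((pts(:,2) - mean(pts(:,2))).^2);
if r2 >= r2min || size(pts,1) < 6        % good fit, or too short to split further
    segs = {pts};
else
    mid  = floor(size(pts,1)/2);         % split in half and recurse on each half
    segs = [splitFit(pts(1:mid,:), r2min), splitFit(pts(mid+1:end,:), r2min)];
end
end

A merge pass over the returned segments (comparing slopes/intercepts) and intersecting adjacent lines would then give the corner candidates.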
[Source Code to Current Method] Running [cnrs, dat, ~, ~] = CornerDetect(data, 4, 1) on the above data will produce the locations I am getting.
I do not need to write this from scratch; it just seems like most of the higher-level methods are meant for 2D images or 3D point clouds, not 2D scatter data. I've read a lot about Hough transforms and all sorts of data-clustering methods (k-means etc.). I also tried a few canned line detectors without much success. I tried to play around with the Line Segment Detector, but it needs a greyscale image as input, and I figured it would be prohibitively slow to convert my vector data into a full 2D image just to feed it into something like LSD.
Any help is greatly appreciated!
I'd approach it as a problem of finding extrema of curvature that are stable at multiple scales - and the split-and-merge method you have tried with lines hints at that.
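One way to make that concrete, as a rough sketch (the scan is assumed to be ordered along the wall; the scale list and angle threshold are placeholders):

function corners = curvatureCorners(pts, scales, angThresh)
% pts: N-by-2 ordered scan points; scales: neighbour offsets, e.g. [2 4 8];
% angThresh: minimum turning angle in degrees to count as a corner.
N = size(pts,1);
isCorner = true(N,1);
for s = scales
    ang = zeros(N,1);
    for i = s+1 : N-s
        v1 = pts(i,:)   - pts(i-s,:);          % vector from the i-s neighbour
        v2 = pts(i+s,:) - pts(i,:);            % vector to the i+s neighbour
        ang(i) = abs(atan2d(v1(1)*v2(2) - v1(2)*v2(1), dot(v1,v2)));  % turning angle
    end
    isCorner = isCorner & (ang > angThresh);   % keep points that turn at every scale
end
corners = pts(isCorner,:);
end

Picking the local maximum of the turning angle within each surviving run would then give a single corner point per feature.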
You could use a Harris corner detector to detect the corners.
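For what it's worth, a minimal sketch of that route (it requires rasterising the scatter first, which the question was hoping to avoid; the grid resolution is a placeholder, and corner() needs the Image Processing Toolbox):

% Bin the scatter into a coarse occupancy image, then run the Harris detector.
res = 20;                                         % cells per unit distance (placeholder)
xy  = round((data - min(data)) * res) + 1;        % data assumed N-by-2 (implicit expansion)
img = double(accumarray(xy(:,[2 1]), 1) > 0);     % binary occupancy grid (row, col)
img = imdilate(img, strel('disk', 2));            % thicken walls into connected strokes
c   = corner(img, 'Harris');                      % [x y] corner locations in grid cells
cnrs = (c - 1) / res + min(data);                 % back to the original coordinates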
Related
I would like to follow a curve (with MATLAB or OpenCV) and to find the other end of it when it is cut by an empty space, like in this example, which is simplified to illustrate the problem:
Link to image of cut curve
Real images are more like this one: Link to real image to analyse
To follow the curve, I can use a skeleton and look at the neighbourhood. The problem is that I don't know how to find the other end efficiently.
I don't think that closing or opening operations would help because, as shown in the previous image, there are other curves, and the two parts of the curve are quite far from each other, so closing could create boundaries between the different curves instead of joining the two parts.
I was thinking about polynomial fitting/extrapolation, which could be a solution for simple curves, but I am not sure about the precision I could get. If I use a skeleton, I have to find exactly the right pixel or search a reasonable neighbourhood, which would take some time, and once again, since there are other curves in the images, I have to be sure that I find the right one.
That's why I am searching for an existing function which could estimate the trajectory of the curve precisely and give a useful output to go further and find the second part of the curve.
If that kind of function doesn't exist, I'm open to any other way of analysing the problem if it can help.
I will start with the first image you provided. You can use a common OpenCV function for detecting contours (the black regions in your case, since you have a binary image), cv2.findContours(), which returns the coordinates of the edges of each detected region. You can then plot each detected contour separately in a blank image to get the edge of your desired line.
For your 2nd image you have to be slightly careful while performing the above analysis, as there are many tiny lines. Get back to me for further help.
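Since the question mentions MATLAB as an option too, the rough MATLAB analogue of that step would be bwboundaries (the variable name and the inversion step are just illustrative):

% BW: binary image with the curves as foreground (white); invert first if the
% curves are black on a white background, e.g. BW = ~imbinarize(I).
B = bwboundaries(BW);                  % one [row col] boundary trace per connected region
for k = 1:numel(B)
    blank = false(size(BW));
    idx = sub2ind(size(BW), B{k}(:,1), B{k}(:,2));
    blank(idx) = true;                 % plot / inspect each traced contour on its own
end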
I want to extract the orientations of strongly unclosed edges from a binary image. The image consists of blobs, blob rows, and unsharp edges, as shown below. In the end, every pixel should be assigned information about the orientation of the edge. If the existence of an edge is not certain, the point should not be assigned. Parameters of a line or of a whole curve would be fine but are not strictly needed. The edges to be found are marked as red curves:
I have tried a lot, and I am hoping for some hints regarding methods I could use.
Hough Transformation with Lines: Because of the presence of curves as well as point clouds, it is difficult to extract the relevant extreme values of the HT.
Hough Transformation with Ellipses: Same disadvantages as ‘HT with Lines’. In addition, the number of curves and point arrangements to be detected exceeds the limits of a fast process.
Local masks: Go from pixel to pixel and estimate the orientation with the help of a directed mask (for example: count all white pixels in every considered direction and decide based on the highest count). With this method, the view on bigger structures like whole blob rows is lost, and it is easy to see that it will fail in the clouds that an edge passes through.
I guess an estimation of the orientation that considers both local and global information is the only way. I need to know something about the connectivity of these blobs before making local decisions.
Btw, I am using MATLAB.
What about using image moments? You can calculate the angle, major axis, and eccentricity of each single blob and define parameters to merge intersecting ones.
You can use regionprops() or start from scratch with this code I just so happened to have here:
function M = ImMoment(Image, ii, jj)
% Raw image moment M_{ii,jj}: sum over all pixels of row^ii * col^jj * intensity.
ImSize = size(Image);
M = 0;
for k = 1:ImSize(1)
    for l = 1:ImSize(2)
        M = M + k^ii * l^jj * Image(k,l);
    end
end
end
and for the covariance matrix:
function [Matrix, Centroid, Angle, Len, Wid, Eccentricity] = CovMat(Image)
% Centroid (as [column, row]), central second moments, and derived ellipse
% parameters of a blob.
Centroid = [ImMoment(Image,0,1)/ImMoment(Image,0,0), ...
            ImMoment(Image,1,0)/ImMoment(Image,0,0)];
% central second-order moments
Miu20 = ImMoment(Image,0,2)/ImMoment(Image,0,0) - Centroid(1)^2;
Miu02 = ImMoment(Image,2,0)/ImMoment(Image,0,0) - Centroid(2)^2;
Miu11 = ImMoment(Image,1,1)/ImMoment(Image,0,0) - Centroid(1)*Centroid(2);
Matrix = [Miu20, Miu11;
          Miu11, Miu02];
% eigenvalues of the covariance matrix give the axis lengths
Lambda1 = (Miu20+Miu02)/2 + sqrt(4*Miu11^2 + (Miu20-Miu02)^2)/2;
Lambda2 = (Miu20+Miu02)/2 - sqrt(4*Miu11^2 + (Miu20-Miu02)^2)/2;
Angle = 1/2*atand(2*Miu11/(Miu20-Miu02));   % major-axis orientation in degrees
                                            % (atan2d(2*Miu11, Miu20-Miu02)/2 is safer
                                            %  when Miu20 == Miu02)
Len = 4*sqrt(max(Lambda1, Lambda2));
Wid = 4*sqrt(min(Lambda1, Lambda2));
Eccentricity = sqrt(1 - Lambda2/Lambda1);
end
Play around with that a little bit; I'm pretty sure it should work.
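If you'd rather take the toolbox route mentioned above, the equivalent per-blob quantities come straight out of regionprops (BW is an assumed binary image; the merging criterion is up to you):

stats = regionprops(bwconncomp(BW), 'Centroid', 'Orientation', ...
                    'MajorAxisLength', 'MinorAxisLength', 'Eccentricity');
% e.g. merge neighbouring blobs whose Orientation differs by only a few degrees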
So, this is going to be pretty hard for me to explain or detail, since I only think I know what I'm asking; I could be asking it with bad wording, so please bear with me and ask questions if need be.
Currently I have a 3D vector field being plotted, which corresponds to 40 levels of wind vectors in a 3D space. These are plotted level by level and then stacked on top of each other using a dummy altitude for now (we're still debating how to handle the pressure-to-altitude conversion most accurately; not to worry here). The goal is to start at a point within the vector space, model that point as a particle that can experience physics, and iteratively move it through the vector field reacting to the forces, thus creating a trajectory of sorts through the field.
Currently what I'm trying to do is whip up code that would allow me to start a point within this field, calculate the forces the particle would feel at that point, and then establish a resultant force vector that indicates the next step of movement through the vector space.
Right now I'm stuck in the theoretical aspects of the code, as I'm trying to think through how the particle would feel vectors at a distance.
Any suggestions on ways to attack this problem within MATLAB, or relevant equations to use?
In order to run my code, you'll need read_grib.r4 and to compile that MEX file; here is a link to a zip with the code and the required files:
https://www.dropbox.com/s/uodvixdff764frq/WindSim_StackOverflow_Files.zip
I would try to interpolate the wind vector from the adjacent ones. You seem to have a regular grid, so that should be no problem. (You can use interp3 for this.)
Afterwards, you can use any differential-equation solver for your problem, as you basically have a field of gradients and an initial value. Forward Euler would be the simplest one but needs a small step size. (N.B.: your field should be a gradient field.)
You may read about this in Wikipedia: http://en.wikipedia.org/wiki/Vector_field#Flow_curves
In response to comment #1:
Yes. In a regular grid, any (arbitrarily chosen) point will have eight neighbors. interp3 will do a trilinear interpolation to determine an interpolated gradient vector.
If you use forward Euler, you will then move a small distance in that direction. There you interpolate a gradient, take a small step in this new direction, and so on. Two things happen:
You get a series of points that lie on a streamline and thus form the trajectory of a particle moving along the field.
You accumulate larger errors the further you move and the larger the step size is. Use a small step size or a better solver (Runge-Kutta comes to mind).
If all you want is plotting, then the streamline function might help.
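Putting the interpolation and the solver step together, a minimal sketch of the interp3 / forward Euler idea (the grid arrays X, Y, Z, the wind components U, V, W, the start point, the step size, and the step count are all assumed placeholders; the point is treated as simply advected by the wind, so adding mass, drag, or gravity would turn this into a proper force-based ODE):

p  = [x0, y0, z0];                     % starting point, assumed inside the grid
dt = 1;                                % time step (placeholder; smaller is safer)
traj = zeros(500, 3);
for n = 1:500
    u = interp3(X, Y, Z, U, p(1), p(2), p(3));   % trilinear interpolation of the
    v = interp3(X, Y, Z, V, p(1), p(2), p(3));   % three wind components at p
    w = interp3(X, Y, Z, W, p(1), p(2), p(3));
    p = p + dt * [u, v, w];            % forward Euler step along the local wind
    traj(n, :) = p;
end
plot3(traj(:,1), traj(:,2), traj(:,3))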
I have a binary image and I want to detect/trace curves in that image. I don't know anything about them in advance (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate the curves from the other lines. I am only interested in curved lines and their parameters, and I want to store the information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
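As a rough illustration of the Hough option mentioned above (Image Processing Toolbox functions; the peak count and radius range are placeholders):

[H, theta, rho] = hough(BW);                    % standard Hough transform for lines
peaks = houghpeaks(H, 10);                      % up to 10 strongest line candidates
lineSegs = houghlines(BW, theta, rho, peaks);   % extract the actual line segments
[centers, radii] = imfindcircles(BW, [10 60]);  % circular Hough, radius range assumed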
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image, and when you encounter a black pixel you can apply a standard flood-fill algorithm to find all the pixels in that shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to the image to connect them.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking up the endpoints of the curve, and calculating how far the other points are from the line defined by the endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach would be to measure the ratio of the object's length to the distance between its endpoints. This ratio is near 1 for straight lines and larger for curves and wiggly shapes, as sketched below.
If your images have angles (corners) which you wish to separate from curves, you might inspect the directional gradient along your curves. Segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between the angles is too high, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can solve with a skeleton transformation. For a MATLAB implementation of skeletonization and finding curve endpoints, see the MATLAB Image Processing Toolbox documentation.
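A minimal sketch of the segment-then-test idea above (devThresh and ratioThresh are placeholders, and the shapes are assumed to be one pixel thick, e.g. after bwmorph(BW,'thin',Inf)):

CC = bwconncomp(BW);
isCurve = false(1, CC.NumObjects);
for k = 1:CC.NumObjects
    [r, c] = ind2sub(size(BW), CC.PixelIdxList{k});
    P = [c, r];                                    % x/y coordinates of this shape
    D = squareform(pdist(P));                      % pairwise distances (Stats Toolbox)
    [~, idx] = max(D(:));
    [i1, i2] = ind2sub(size(D), idx);
    A = P(i1,:);  B = P(i2,:);                     % approximate endpoints
    AB = B - A;
    dev = abs(AB(1)*(P(:,2)-A(2)) - AB(2)*(P(:,1)-A(1))) / norm(AB);
    % chord-deviation test: straight lines stay close to the A-B chord;
    % length/endpoint-distance ratio test: near 1 for lines, larger for curves
    isCurve(k) = max(dev) > devThresh || size(P,1)/norm(AB) > ratioThresh;
end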
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, then make them white. This collects the points of one object and removes it from the image. Keep repeating this until there are no remaining black pixels.
If you want to separate the curves from the straight lines, try line fitting and then compute the coefficient of correlation. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 0 and 7 (or 1 and 8) to each pixel, saying at which location in its 8-connected neighbourhood its connected predecessor lies. As mentioned in Hackworth's suggestion, one performs connected-component labelling and then calculates the chain code for each component curve. Looking at the distribution and the gradient of the chain codes, one can easily distinguish between lines and curves. The problem with the method, though, is oscillating curves, in which case the gradient is less useful and one depends on clustering of the chain codes.
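A rough sketch of getting a chain-code-like direction sequence per component in MATLAB (this traces the region boundary with bwboundaries rather than a true skeleton path, which is fine for thin curves but double-counts thick ones):

B = bwboundaries(BW);                      % one [row col] boundary trace per component
for k = 1:numel(B)
    step = diff(B{k});                     % pixel-to-pixel steps along the boundary
    ang  = atan2d(-step(:,1), step(:,2));  % step direction in degrees (y axis up)
    code = mod(round(ang/45), 8);          % quantise to the 8 Freeman directions 0..7
    % near-constant code -> straight line; slowly drifting code -> smooth curve;
    % a wide, multimodal spread -> corners or a wiggly shape
end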
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).
What I want is this: if I move my finger fast on the iPhone screen, I want something that makes a proper curve out of it, using Quartz 2D or OpenGL ES, whichever works.
I want to draw a path in a curved style.
I have seen the GLPaint (OpenGL ES) example, but it doesn't help me much when the finger movement is fast.
Something like making a smooth curve.
If anyone has some kind of example, please tell me.
Thanks.
Edit: Moved from answer below:
Thanks to all.
I tried the Bézier curve approach with two control points, but the first problem is how to calculate the control points when there are no predefined points.
As I mentioned, my finger movement is fast, so most of the time I get a straight line instead of a curve, because I get fewer touch points.
Now, as Mark said, I tried it in a piecewise fashion: take the first four touch points and render them on screen, then drop the first point and take the next four, e.g. step 1: 1,2,3,4; step 2: 2,3,4,5; and so on. With that approach I get an overlap (which is not really the issue), but I still didn't get a smooth curve.
So for fast finger movement, do I have to find something else?
Depending on the number of sample points you are looking at, there are two approaches that I would recommend:
Simple Interpolation
You can simply sample the finger location at set intervals and then interpolate the sample points using something like a Catmull-Rom spline. This is easier than it sounds since you can easily convert a Catmull-Rom spline into a series of cubic Bezier curves.
Here's how. Say you have four consecutive sample points P0, P1, P2 and P3, the cubic Bezier curve that connects P1 to P2 is defined by the following control points:
B0 = P1
B1 = P1 + (P2 - P0)/6
B2 = P2 + (P1 - P3)/6
B3 = P2
This should work well as long as your sample points aren't too dense and it's super easy. The only problem might be at the beginning and end of your samples since the first and last sample point aren't interpolated in an open curve. One common work-around is to double-up your first and last sample point so that you have enough points for the curve to pass through each of the original samples.
To get an idea of how Catmull-Rom curves look, you can try out this Java applet demonstrating Catmull-Rom splines.
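To illustrate the conversion numerically, here is a MATLAB-style sketch of the same formulas (on the device you would feed B1, B2, and B3 into something like CGPathAddCurveToPoint, with B0 as the current point):

% P: n-by-2 list of sampled touch points (n >= 4), with the first and last
% points duplicated as suggested above so the curve passes through every sample.
P = [P(1,:); P; P(end,:)];
for i = 2:size(P,1)-2
    B0 = P(i,:);                              % segment from P(i) to P(i+1)
    B1 = P(i,:)   + (P(i+1,:) - P(i-1,:)) / 6;
    B2 = P(i+1,:) + (P(i,:)   - P(i+2,:)) / 6;
    B3 = P(i+1,:);
    t   = (0:0.1:1)';                         % evaluate the cubic Bezier segment
    seg = (1-t).^3*B0 + 3*t.*(1-t).^2*B1 + 3*t.^2.*(1-t)*B2 + t.^3*B3;
    plot(seg(:,1), seg(:,2)); hold on
end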
Fit a curve to your samples
A more advanced (and more difficult) approach would be to do a least-squares approximation to your sample points. If you want to try this, the procedure looks something like the following:
Collect sample points
Define a NURBS curve (including its knot vector)
Set up a system of linear equations for the samples & curve
Solve the system in the Least Squares sense
Assuming you can pick a reasonable NURBS knot vector, this will give you a NURBS curve that closely approximates your sample points, minimizing the squared distance between the samples and your curve. The NURBS curve can even be decomposed into a series of Bezier curves if needed.
If you decide to explore this approach, then the book "Curves and Surfaces for CAGD" by Gerald Farin, or a similar reference, would be very helpful. In the 5th edition of Farin's book, section 9.2 deals specifically with this problem. Section 7.8 shows how to do this with a Bezier curve, but you'd probably need a high-degree curve to get a good fit.
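As a very stripped-down illustration of the least-squares idea (a single cubic Bézier rather than a NURBS curve, with a simple chord-length parameterisation; real data may need a higher degree or a proper spline):

% P: n-by-2 sample points, n >= 4
t = [0; cumsum(sqrt(sum(diff(P).^2, 2)))];            % chord-length parameter values
t = t / t(end);
A = [(1-t).^3, 3*t.*(1-t).^2, 3*t.^2.*(1-t), t.^3];   % cubic Bernstein basis matrix
C = A \ P;                                            % 4 control points, least-squares fit

In practice you would usually pin the first and last control points to the first and last samples and solve only for the inner two.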
Naaff gives a great overview of the NURBS technique. Unfortunately, I think generating a smooth bezier on-the-fly might be too much for the iPhone. I write drawing apps, and getting a large number of touchesMoved events per second is quite a challenge to begin with. You really need to optimize your drawing code just to get good performance while recording individual points - much less constructing a bezier path.
If you end up going with a bezier or NURBS curve representation - you'll probably have to wait until the user has finished touching the screen to compute the smoothed path. Doing the math continuously as the user moves their finger (and then redrawing the entire recomputed path using Quartz) is not going to give you a high enough data collection rate to do anything useful...
Good luck!
Do something like Shadow suggested: get the position of the touch at some frequency and then make a Bézier curve out of it. This is how paths are drawn with a mouse (or tablet) in programs like Illustrator.