Quantitatively Fitting a Linear Curve to Non-Linear Data - matlab

I have some data that fits closely to an exponential curve on the left-hand side and then flattens out on the right-hand side. For the upper portion of the curve, the data should fit closely to a straight line. I am trying to find a way to quantitatively work out exactly which part of my curve is 'most straight'.
I have tried qualitatively choosing the 'straightest' part of the data, which gives a good approximation, but I would like to do it quantitatively. I have also tried playing around in CFTOOL, to no avail.
See screenshot of data
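For reference, a minimal sketch of one quantitative approach (assuming the data is in column vectors x and y; the window length winLen is a made-up parameter you would tune): slide a fixed-length window along the data, fit a straight line in each window with polyfit, and keep the window with the smallest residual sum of squares as the 'most straight' part.

    % Sketch: locate the straightest window of the data (x, y are column vectors).
    winLen = 50;                              % assumed window length; tune this
    nWin   = numel(x) - winLen + 1;
    rss    = zeros(nWin, 1);                  % residual sum of squares per window
    for k = 1:nWin
        idx    = k:(k + winLen - 1);
        p      = polyfit(x(idx), y(idx), 1);  % straight-line fit to this window
        r      = y(idx) - polyval(p, x(idx)); % residuals of the fit
        rss(k) = sum(r.^2);
    end
    [~, kBest] = min(rss);                    % window that is closest to a line
    bestIdx    = kBest:(kBest + winLen - 1);  % indices of the 'most straight' part
    pBest      = polyfit(x(bestIdx), y(bestIdx), 1);

Normalising each window's residual by the variance of y in that window turns the criterion into 1 - R^2, which keeps windows comparable if you later vary the window length.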

Related

Find the other end of a curve after a cut in an image

I would like to follow a curve (with matlab or opencv) and to find the other end of it when it is cut by an empty space like this example, which is simplified to illustrate the problem:
Link to image of cut curve
Real images are more like this one: Link to real image to analyse
To follow the curve, I can use a skeleton and look at the neighbourhood. The problem is that I don't know how to find the other end efficiently.
I don't think closing or opening operations would help: as shown in the previous image, there are other curves, and the two parts of the cut curve are quite far from each other, so these operations could end up creating boundaries between different curves instead of joining the two parts.
I was thinking about fitting a polynomial, which could work for simple curves, but I am not sure about the precision I could get. If I use a skeleton, I have to find exactly the right pixel or search a reasonable neighbourhood, which would take some time, and again, since there are other curves in the images, I have to be sure that I find the right one.
That's why I am searching for an existing function which could precisely estimate the trajectory of the curve and give a useful output that I can then use to find the second part of the curve.
If that kind of function doesn't exist, I'm open to any other way of analysing the problem if it can help.
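For reference, a minimal MATLAB sketch of the skeleton/endpoint idea mentioned above (assuming bw is a logical image with the curves as true pixels; bwskel and bwmorph are from the Image Processing Toolbox): skeletonise, find the endpoints, and for a given endpoint pick the nearest endpoint that belongs to a different connected component.

    % Sketch, assuming bw is a logical image with the curves as true pixels.
    skel = bwskel(bw);                          % skeletonise the curves
    ep   = bwmorph(skel, 'endpoints');          % endpoint pixels of the skeleton
    [er, ec] = find(ep);                        % endpoint coordinates
    lbl   = bwlabel(skel);                      % label the connected pieces
    epLbl = lbl(sub2ind(size(lbl), er, ec));    % which piece each endpoint is on

    i = 1;                                      % example: first endpoint
    d = hypot(er - er(i), ec - ec(i));          % distance to all other endpoints
    d(epLbl == epLbl(i)) = Inf;                 % ignore endpoints of the same piece
    [~, j] = min(d);                            % candidate 'other end' across the cut

Plain nearest-endpoint matching will obviously fail when another curve's endpoint happens to be closer, so in practice you would also compare the local direction of the skeleton at both endpoints before accepting the match.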
To start with the first image you provided: you can use the standard OpenCV function for detecting contours (the black regions in your case, since you have a binary image), cv2.findContours(). It returns the coordinates of the edges of each detected region, and you can then plot each detected contour separately in a blank image to get the edge of your desired line.
Coming to your 2nd image, you have to be slightly careful when performing the above analysis, as there are many tiny lines. Get back to me for further help.

MATLAB smooth transition between two polyfit curves

I got this data curve.
Because it's real data it's kind of shaky.
I want to differentiate the curve... this looks pretty ugly because of the shakiness.
So I went on and used polyfit and polyval to smooth the curve. As the data doesn't resemble a polynomial curve, I needed to split it into three parts, smooth them separately, and later fit them together again.
But polyval tends to overshoot near the edges of each segment... (the smoothed curve in red, the original in blue)
So when I join the segments I get non-smooth junctions like this one here (I know this is extreme, but it always occurs):
When I later differentiate the curve, I get huge errors at the junctions:
Any ideas to solve my problem?
I do need a clean curve for calculations and so on...
Edit
I tried the suggestions from the comments; here are the results:
a spline isn't smooth enough for differentiation,
and the Savitzky-Golay filter is also not perfect.
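For reference, a minimal sketch of the smoothing-spline route (assuming the Curve Fitting Toolbox for csaps/fnder/fnval; x and y are the raw data vectors, and the smoothing parameter p is a made-up value you would tune): fit a single smoothing spline to the whole data set instead of three separate polynomials, then differentiate the spline analytically, so there are no junctions at all.

    % Sketch, assuming the Curve Fitting Toolbox; x, y are the noisy data vectors.
    p   = 0.999;                 % smoothing parameter in (0,1); needs tuning
    pp  = csaps(x, y, p);        % smoothing spline in piecewise-polynomial form
    dpp = fnder(pp);             % analytic derivative of that spline

    xs       = linspace(min(x), max(x), 1000);
    ySmooth  = fnval(pp,  xs);   % smoothed curve
    dySmooth = fnval(dpp, xs);   % its derivative, with no junction spikes

Lowering p smooths more aggressively; p = 1 reproduces the interpolating (shaky) spline you already tried.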

Corner Detection in 2D Vector Data

I am trying to detect corners (x/y coordinates) in 2D scatter vectors of data.
The data is from a laser rangefinder and our current platform uses Matlab (though standalone programs/libs are an option, but the Nav/Control code is on Matlab so it must have an interface).
Corner detection is part of a SLAM algorithm and the corners will serve as the landmarks.
I am also looking to achieve something close to 100 Hz in terms of speed if possible (I know it's MATLAB, but my data set is pretty small).
Sample Data:
[Blue is the raw data, red is what I need to detect. (This view is effectively top down.)]
[Actual vector data from above shots]
Thus far I've tried many different approaches, some more successful than others.
I've never formally studied machine vision of any kind.
My first approach was a homebrew least-squares line fitter that would split lines in half recursively until they met some r^2 value, and then try to merge lines with similar slopes/intercepts. It would then calculate the intersections of these lines. It wasn't very good, but it did work around 70% of the time with decent accuracy, though it had some bad issues with missing certain features completely.
My current approach uses the clusterdata function to segment my data based on mahalanobis distance, and then does basically the same thing (least squares line fitting / merging). It works ok, but I'm assuming there are better methods.
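For reference, a minimal sketch of the recursive split step described above (this is not the actual CornerDetect code; pts is an N-by-2 matrix of ordered [x y] scan points and tol is a made-up distance threshold): split at the point farthest from the chord between the segment's endpoints and keep the split points as corner candidates, Ramer-Douglas-Peucker style.

    function corners = splitCorners(pts, tol)
    % Sketch of a recursive split: pts is N-by-2, ordered along the scan;
    % tol is the maximum allowed perpendicular distance from the chord.
        p1 = pts(1, :);  p2 = pts(end, :);
        v  = p2 - p1;
        if norm(v) < eps                      % degenerate chord, nothing to split
            corners = zeros(0, 2);
            return;
        end
        % Perpendicular distance of every point from the chord p1 -> p2
        d = abs(v(1) * (pts(:,2) - p1(2)) - v(2) * (pts(:,1) - p1(1))) / norm(v);
        [dmax, k] = max(d);
        if dmax > tol
            % Farthest point is a corner candidate; recurse on both halves
            left    = splitCorners(pts(1:k, :),   tol);
            right   = splitCorners(pts(k:end, :), tol);
            corners = [left; pts(k, :); right];
        else
            corners = zeros(0, 2);            % segment is straight enough
        end
    end

The merge step (joining segments with similar slope/intercept and intersecting the fitted lines) would sit on top of this, as in the approach described above.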
[Source Code to Current Method] Calling [cnrs, dat, ~, ~] = CornerDetect(data, 4, 1) with the above data will produce the locations I am getting.
I do not need to write this from scratch; it just seems like most of the more sophisticated methods are meant for 2D images or 3D point clouds, not 2D scatter data. I've read a lot about Hough transforms and all sorts of data clustering methods (k-means etc.). I also tried a few canned line detectors without much success. I tried to play around with the Line Segment Detector, but it needs a greyscale image as input, and I figured it would be prohibitively slow to convert my vector data into a full 2D image just to feed it into something like LSD.
Any help is greatly appreciated!
I'd approach it as a problem of finding extrema of curvature that are stable at multiple scales - and the split-and-merge method you have tried with lines hints at that.
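For reference, a rough sketch of that multi-scale curvature idea (assuming pts is the ordered N-by-2 point sequence from one scan; the scales in sigmas and the peak-prominence factor are made-up values, and findpeaks needs the Signal Processing Toolbox): smooth the points at several scales, estimate curvature by finite differences, and keep only the curvature peaks that survive every scale.

    % Sketch: curvature extrema that persist across several smoothing scales.
    sigmas = [2 4 8];                        % smoothing scales in samples; tune
    isPeak = true(size(pts, 1), 1);
    for s = sigmas
        xs  = smoothdata(pts(:,1), 'gaussian', 4*s);
        ys  = smoothdata(pts(:,2), 'gaussian', 4*s);
        dx  = gradient(xs);   dy  = gradient(ys);
        ddx = gradient(dx);   ddy = gradient(dy);
        kappa = abs(dx .* ddy - dy .* ddx) ./ ((dx.^2 + dy.^2).^1.5 + eps);
        [~, locs] = findpeaks(kappa, 'MinPeakProminence', 0.5 * max(kappa));
        atPeak = false(size(kappa));  atPeak(locs) = true;
        % Keep only points that are near a curvature peak at this scale as well
        isPeak = isPeak & (movmax(double(atPeak), 2*s + 1) > 0);
    end
    cornerIdx = find(isPeak);                % indices of stable curvature extrema

The surviving points are the landmark candidates; you could refine their positions by intersecting line fits to the points on either side of each corner, much like your current approach.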
You could use the Harris corner detector to detect the corners.

Fitting data smoothly in Matlab and Gnuplot

I would like to find a better way of fitting my data. Right now, this is the best I can do; see the figure.
It's done using Gnuplot and its smooth option when plotting. However, as you can see in the figure, 'csplines' seems to be the most accurate technique, but it is not enough: it is fine in the first half of the graph, but not good at all in the second half.
The real data, just 4 points in 'x=[1,2,4,8]', is marked in 'Line 1'. Is there a better way of doing it using Gnuplot?
What about Matlab (or even other tools)? How can I easily create a smooth curve connecting a few points?
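In MATLAB, a minimal sketch of connecting a few points with a smooth curve (the x values are the ones from the question; the y values here are placeholders for the real measurements): interpolate on a fine grid with interp1, using 'pchip' (shape-preserving, no overshoot) or 'spline' (smoother, but it can overshoot, which is often what goes wrong with cubic splines on sparse data).

    % Sketch: smooth curve through a handful of points (y values are placeholders).
    x  = [1 2 4 8];
    y  = [10 6 4 3];                       % replace with the real measurements
    xf = linspace(min(x), max(x), 200);    % fine grid for plotting

    yPchip  = interp1(x, y, xf, 'pchip');  % shape-preserving, no overshoot
    ySpline = interp1(x, y, xf, 'spline'); % smoother, but may overshoot

    plot(x, y, 'o', xf, yPchip, '-', xf, ySpline, '--');
    legend('data', 'pchip', 'spline');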
Why not have a look at the scipy interpolation documentation:
http://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
There are plenty of schemes there which will help you plot your smoothed data using matplotlib.
HTH

How to detect curves in a binary image?

I have a binary image and I want to detect/trace the curves in that image. I don't know anything about them in advance (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate the curves from the other lines. I am only interested in the curved lines and their parameters. I want to store the information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
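For reference, a minimal MATLAB sketch of the two ideas above (assuming bw is a logical image with the curves as true pixels; the radius range passed to imfindcircles is a guess): label the connected components, then run the circular Hough transform to pick out circular arcs.

    % Sketch, assuming bw is a logical image with the curves as true pixels.
    cc    = bwconncomp(bw);                      % one component per pixel collection
    stats = regionprops(cc, 'PixelList', 'BoundingBox');
    fprintf('found %d components\n', cc.NumObjects);

    % Circular Hough transform; the radius range [10 100] is a guess.
    % Use 'dark' instead of 'bright' if the curves are the dark pixels.
    [centers, radii] = imfindcircles(bw, [10 100], 'ObjectPolarity', 'bright');

Each entry of stats can then be fed into whichever linearity test you prefer, as in the answers below.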
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image, and when you encounter a black pixel you can apply a standard flood-fill algorithm to find all the pixels in your shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to your image to connect them.
After you have segmented out the different shapes, you can filter out the curves by testing how much they deviate from a line. You can do this simply by picking the endpoints of the curve and calculating how far the other points are from the line defined by those endpoints. If this value exceeds some maximum, you have a curve instead of a line.
Another approach would be to measure the ratio of the length of the object to the distance between its endpoints. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
If your images have angled shapes, which you wish to separate from curves, you might inspect the directional gradient along your curves. Segment the shape, pick a set of equidistant points on it, and for each point calculate the angle to the previous point and to the next point. If the difference between the angles is too large, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can solve with a skeleton transformation. For the MATLAB implementation of skeletonization and of finding curve endpoints, see the MATLAB Image Processing Toolbox documentation.
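For reference, a minimal sketch of the two tests described above (assuming bw is the binary image with the shapes as true pixels; the thresholds 3 and 1.2 are made-up values): thin each component to handle thick lines, take the two pixels farthest apart as the 'endpoints', then measure the maximum deviation from their chord and the ratio of pixel count (a stand-in for arc length) to endpoint distance.

    % Sketch: classify each connected component as a line or a curve.
    skel = bwmorph(bw, 'thin', Inf);             % handle thick lines by thinning
    cc   = bwconncomp(skel);
    for k = 1:cc.NumObjects
        [r, c] = ind2sub(size(skel), cc.PixelIdxList{k});
        pts = [c r];                             % x = column, y = row

        % Use the two pixels farthest apart as the 'endpoints' of the shape
        D = hypot(pts(:,1) - pts(:,1)', pts(:,2) - pts(:,2)');
        [~, idx] = max(D(:));
        [i, j] = ind2sub(size(D), idx);
        p1 = pts(i, :);  p2 = pts(j, :);  v = p2 - p1;

        % Test 1: maximum deviation of the pixels from the chord p1 -> p2
        dev = abs(v(1)*(pts(:,2) - p1(2)) - v(2)*(pts(:,1) - p1(1))) / max(norm(v), eps);
        % Test 2: ratio of arc length (pixel count) to endpoint distance
        ratio = numel(r) / max(norm(v), eps);

        if max(dev) > 3 || ratio > 1.2           % made-up thresholds
            fprintf('component %d: curve\n', k);
        else
            fprintf('component %d: line\n', k);
        end
    end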
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, then make them white. This gets the points in one object and removes it from the image. Keep repeating this until there are no black pixels remaining.
If you want to separate the curves from the straight lines, try line fitting and then compute the coefficient of correlation. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
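For reference, a minimal sketch of that correlation test (assuming bw is the binary image with the shapes as true pixels; the 0.98 threshold is a made-up value). Note that a perfectly vertical or horizontal line has zero variance in one coordinate, so corrcoef returns NaN for it and that case needs separate handling.

    % Sketch: correlation-coefficient straightness test per object.
    cc = bwconncomp(bw);
    for k = 1:cc.NumObjects
        [r, c] = ind2sub(size(bw), cc.PixelIdxList{k});
        R = corrcoef(c, r);                  % correlation of x and y coordinates
        if abs(R(1, 2)) > 0.98               % made-up threshold
            fprintf('component %d: close to a straight line\n', k);
        else
            fprintf('component %d: curve or other shape\n', k);
        end
    end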
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 1 and 8 (or 0 and 7) to each pixel, saying where in its 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestion, one performs connected-component labelling and then calculates the chain code for each component curve. Looking at the distribution and the gradient of the chain codes, one can easily distinguish between lines and curves. The problem with the method, though, is oscillating curves, in which case the gradient is less useful and one has to depend on the clustering of the chain codes!
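For reference, a minimal MATLAB sketch of computing Freeman chain codes (assuming bw is the binary image with the shapes as true pixels; bwboundaries gives an ordered boundary trace, which stands in here for the pixel ordering of each component):

    % Sketch: Freeman chain code (directions 0..7) along each traced boundary.
    dirs = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];  % codes 0..7 as (dr, dc)

    B     = bwboundaries(bw, 8);             % ordered boundary pixels per object
    chain = cell(size(B));
    for k = 1:numel(B)
        steps = diff(B{k});                  % (dr, dc) between consecutive pixels
        code  = zeros(size(steps, 1), 1);
        for s = 1:size(steps, 1)
            code(s) = find(ismember(dirs, steps(s, :), 'rows')) - 1;
        end
        chain{k} = code;
    end
    % A straight line gives a near-constant code; a smooth curve shows a slow,
    % steady drift in the code, which is the gradient behaviour described above.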
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).