How to fit an image feature to a curve - matlab

I process a bunch of 2D images, each with a sinusoidal feature randomly located in it:
Whereas the amplitude and the period of the sine are known in advance, the exact position is not.
I want to find the exact position of the sine in each image using MATLAB. Standard fitting techniques like Surface Fit won't work here because I only need to fit one feature, not the whole image.
The best idea that comes to my mind is to generate a reference image containing a sine at a known location, and then use cross-correlation (xcorr2) to find the offset between the two. Could you suggest a faster or simpler solution?
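For what it's worth, a minimal sketch of that template / cross-correlation idea, using normxcorr2 (Image Processing Toolbox) rather than plain xcorr2 because its normalised peak is less sensitive to local brightness. The amplitude, period and synthetic test image below are made-up stand-ins for the real data:

    % Minimal sketch of the template / cross-correlation idea.  Amplitude and
    % period values here are made up; replace 'img' with your own image.
    A = 10;  T = 50;                          % known amplitude and period (px)

    % Synthetic test image with the sine feature at an offset of (40, 25):
    img = zeros(200, 300);
    x   = 1:T;
    y   = round(A*sin(2*pi*x/T)) + A + 2;     % one period, shifted to fit
    img(sub2ind(size(img), y + 40, x + 25)) = 1;

    % Template: the same one-period sine drawn into a small image.
    tmpl = zeros(2*A + 3, T);
    tmpl(sub2ind(size(tmpl), y, x)) = 1;

    % Normalised cross-correlation; the peak location gives the offset.
    c = normxcorr2(tmpl, img);
    [~, imax] = max(abs(c(:)));
    [ypeak, xpeak] = ind2sub(size(c), imax);
    offset = [ypeak - size(tmpl,1), xpeak - size(tmpl,2)]   % expect [40, 25]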

Related

Find the other end of a curve after a cut in an image

I would like to follow a curve (with MATLAB or OpenCV) and find the other end of it when it is cut by an empty space, like in this example, which is simplified to illustrate the problem:
Link to image of cut curve
Real images are more like this one: Link to real image to analyse
To follow the curve, I can use a skeleton and look at the neighbourhood. The problem is that I don't know how to find the other end efficiently.
I don't think closing or opening operations would help: as shown in the previous image, there are other curves nearby and the two parts of the cut curve are quite far from each other, so morphological operations could end up joining different curves instead of the two parts I want.
I was thinking about fitting a polynomial, which could be a solution for simple curves, but I am not sure about the precision I could get. If I use a skeleton, I have to hit exactly the right pixel or search a reasonable neighbourhood, which would take some time, and once again, as there are other curves in the images, I have to be sure that I find the right one.
That's why I am searching for an existing function that could precisely estimate the trajectory of the curve and give a useful output to go further and find the second part of the curve.
If that kind of function doesn't exist, I'm open to any other way of analysing the problem if it can help.
Starting with the first image you provided: you can use the common OpenCV function for detecting contours (the black regions in your case, since you have a binary image), cv2.findContours(), which returns the coordinates of the edges of each detected region. You can then plot each detected contour separately in a blank image to isolate the edge of your desired line.
For your second image you have to be slightly more careful when performing the above analysis, as there are many tiny lines. Get back to me if you need further help.
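Since the question also mentions MATLAB, a rough MATLAB equivalent of this contour idea is a sketch along these lines (it assumes the Image Processing Toolbox, a hypothetical file name, and a binary image in which the curves are the foreground/true pixels; invert with ~bw if yours are black on white):

    % Sketch: extract and plot each connected curve's boundary separately.
    bw = imread('curves.png') > 0;          % hypothetical input file
    B  = bwboundaries(bw, 8, 'noholes');    % boundary coordinates per region
    figure; imshow(bw); hold on;
    for k = 1:numel(B)
        b = B{k};                           % N-by-2 list of [row, col] points
        plot(b(:,2), b(:,1), 'LineWidth', 1);
    end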

Force Calculation at a Point Within a Vector Field, and then Reacting to that Force

So, this is going to be pretty hard for me to explain, or try to detail out, since I only think I know what I'm asking, and I could be asking it with bad wording, so please bear with me and ask questions if need be.
Currently I have a 3D vector field being plotted, which corresponds to 40 levels of wind vectors in 3D space. These levels are plotted and then stacked on top of each other using a dummy altitude for now (we're debating how to handle the pressure-to-altitude conversion most accurately, so not to worry here). The goal is to start at a point within the vector space, model that point as a particle that can experience physics, and iteratively step through the vector field reacting to the forces, thus creating a trajectory of sorts through the field.
Currently I'm trying to write code that would let me start at a point within this field, calculate the forces the particle would feel at that point, and then establish a resultant force vector indicating the next direction of movement through the vector space.
Right now I'm stuck in the theoretical aspects of the code, as I'm trying to think through how the particle would feel vectors at a distance.
Any suggestions on ways to attack this problem within MatLab or relevant equations to use?
In order to run my code, you'll need read_grib.r4 and to compile that MEX file; here is a link to a zip with the code and the required files:
https://www.dropbox.com/s/uodvixdff764frq/WindSim_StackOverflow_Files.zip
I would try to interpolate the wind vector from the adjacent ones. You seem to have a regular grid, so that should be no problem. (You can use interp3 for this.)
Afterwards, you can use any differential-equation solver for your problem, as you basically have a field of gradients and an initial value. Forward Euler would be the simplest one but needs a small step size. (N.B.: your field should be a gradient field.)
You may read about this in Wikipedia: http://en.wikipedia.org/wiki/Vector_field#Flow_curves
In response to comment #1:
Yes. In a regular grid, any (arbitrarily chosen) point will have eight neighbors. interp3 will do a trilinear interpolation between them to determine an interpolated gradient vector.
If you use forward Euler, you then move a small distance in that direction. There you interpolate a new gradient, take a small step in that new direction, and so on. Two things happen:
You get a series of points that lie on a streamline and thus form the trajectory of a particle moving along the field.
You accumulate errors, which grow the further you move and the larger the step size is. Use a small step size or a better solver (Runge-Kutta comes to mind).
If all you want is plotting, then the streamline function might help.
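A minimal sketch of the interp3 + forward Euler recipe described above, with a made-up swirling wind field standing in for the real GRIB data (the grid, step size and start point are all placeholder values):

    % Sketch: trace a particle through a gridded 3-D wind field (forward Euler).
    [X, Y, Z] = meshgrid(0:20, 0:20, 0:10);             % regular grid
    U = -(Y - 10);  V = X - 10;  W = 0.1*ones(size(Z)); % synthetic wind field

    p    = [5, 5, 1];                  % start point (placeholder)
    dt   = 0.05;                       % small step size, as forward Euler needs
    traj = nan(2000, 3);
    for k = 1:size(traj, 1)
        % Trilinear interpolation of the wind vector at the current position.
        u = interp3(X, Y, Z, U, p(1), p(2), p(3));
        v = interp3(X, Y, Z, V, p(1), p(2), p(3));
        w = interp3(X, Y, Z, W, p(1), p(2), p(3));
        if any(isnan([u v w])), break; end              % particle left the grid
        p = p + dt*[u, v, w];                           % forward Euler step
        traj(k, :) = p;
    end
    plot3(traj(:,1), traj(:,2), traj(:,3)); grid on;    % trajectory / streamline

With this synthetic field the Euler trajectory visibly spirals outward, which illustrates the error growth mentioned above; a smaller dt or a Runge-Kutta step reduces it.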

image focus and FFT

I am new to MATLAB and I have a project that involves image processing.
I have a number of RGB images and I need a way to separate the out-of-focus images from the in-focus ones. I do not need to correct the focus of the out-of-focus ones; I just need to find which are out of focus and remove them. I have applied fft2 to the images and then used the radial average of the power spectrum to see whether there is a difference between in-focus and out-of-focus images, but I do not see a difference between the two.
I decided to use the gradient of the image
[gradx,grady]=gradient(image)
and then take the magnitude
new_image=sqrt((gradx.^2)+(grady.^2))
and try to do the fft2 using new_image now instead of the original image. The power spectrum does not look like what I expect, so I am not sure whether I should do the fft2 on new_image or on gradx and grady separately. Does anyone have thoughts on whether this is the right way to do this?
I was also thinking of using a Sobel mask instead of the gradient
mask=fspecial('sobel')
mask_x=imfilter(image,mask)
mask_y=imfilter(image,mask')
new_image=sqrt((mask_x.^2)+(mask_y.^2))
and then do fft2 on new_image, but again the power spectrum is not right: I expect it to start from zero, but instead it starts at the highest value and drops off exponentially.
Has anyone tried to classify images using this method? Thank you for reading.
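As an illustration of where this could go: the gradient-magnitude image computed above can be collapsed into a single sharpness score per image, which is often enough to separate focused from blurred frames without examining the spectrum at all. A minimal sketch (the file pattern, grayscale conversion and threshold are assumptions to adapt to the real data):

    % Sketch: Tenengrad-style focus score = mean squared gradient magnitude.
    files = dir('*.jpg');                       % assumed location of the images
    score = zeros(numel(files), 1);
    for k = 1:numel(files)
        I = im2double(rgb2gray(imread(files(k).name)));
        [gx, gy] = gradient(I);
        score(k) = mean(gx(:).^2 + gy(:).^2);   % high = sharp, low = blurred
    end
    thr     = 0.5*max(score);                   % placeholder threshold -- tune it
    inFocus = score > thr;                      % logical index of sharp images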
A DCT, instead of an FFT/DFT, will get rid of any high frequency discontinuities between the opposite edges of your images.
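A short sketch of that suggestion, with the radial profile taken over DCT coefficient indices instead of FFT frequencies (assumes the Image Processing Toolbox; the file name is hypothetical):

    % Sketch: DCT-based spectrum profile instead of the FFT power spectrum.
    I = im2double(rgb2gray(imread('example.jpg')));   % hypothetical image
    D = dct2(I);                    % all-real transform, no wrap-around artefacts
    P = D.^2;                       % "power" of each coefficient
    [r, c] = ndgrid(0:size(P,1)-1, 0:size(P,2)-1);
    rad    = round(hypot(r(:), c(:))) + 1;            % radial coefficient index
    prof   = accumarray(rad, P(:), [], @mean);
    loglog(prof);                   % sharper images keep more high-index energy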

Corner Detection in 2D Vector Data

I am trying to detect corners (x/y coordinates) in 2D scatter vectors of data.
The data is from a laser rangefinder, and our current platform uses MATLAB (standalone programs/libs are an option, but the Nav/Control code is in MATLAB, so whatever I use must have a MATLAB interface).
Corner detection is part of a SLAM algorithm and the corners will serve as the landmarks.
I am also looking to achieve something close to 100 Hz in terms of speed if possible (I know it's MATLAB, but my data set is pretty small).
Sample Data:
[Blue is the raw data, red is what I need to detect. (This view is effectively top down.)]
[Actual vector data from above shots]
Thus far I've tried many different approaches, some more successful than others.
I've never formally studied machine vision of any kind.
My first approach was a homebrew least-squares line fitter that would split lines in half recursively until they met some r^2 value and then try to merge segments with similar slopes/intercepts. It would then calculate the intersections of these lines. It wasn't very good, but it did work around 70% of the time with decent accuracy, though it had some bad issues with missing certain features completely.
My current approach uses the clusterdata function to segment my data based on mahalanobis distance, and then does basically the same thing (least squares line fitting / merging). It works ok, but I'm assuming there are better methods.
[Source Code to Current Method] Calling [cnrs, dat, ~, ~] = CornerDetect(data, 4, 1) on the above data will produce the locations I am getting.
I do not need to write this from scratch; it just seems like most of the more sophisticated methods are meant for 2D images or 3D point clouds, not 2D scatter data. I've read a lot about Hough transforms and all sorts of data clustering methods (k-means etc.). I also tried a few canned line detectors without much success. I tried to play around with Line Segment Detector, but it needs a greyscale image as input, and I figured it would be prohibitively slow to convert my vector data into a full 2D image to feed into something like LSD.
Any help is greatly appreciated!
I'd approach it as a problem of finding extrema of curvature that are stable at multiple scales - and the split-and-merge method you have tried with lines hints at that.
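A sketch of that idea on an ordered list of scan points follows; it uses one scale only, so to get the multi-scale stability described above you would run it for several values of k and keep corners that persist. The window size and angle threshold are tuning guesses, not values from the question:

    % Sketch: corner candidates = local extrema of the turning angle computed
    % over a window of +/- k points (curvature at one scale).
    function idx = curvatureCorners(pts, k, thr)
        % pts: N-by-2 ordered (x, y) laser returns; k: scale; thr: angle (rad)
        n   = size(pts, 1);
        ang = zeros(n, 1);
        for i = (k+1):(n-k)
            a = pts(i, :)   - pts(i-k, :);    % chord coming into the point
            b = pts(i+k, :) - pts(i, :);      % chord going out of the point
            ang(i) = abs(atan2(a(1)*b(2) - a(2)*b(1), dot(a, b)));  % turn angle
        end
        % Local maxima above the threshold (islocalmax needs R2017b or later).
        idx = find(islocalmax(ang) & ang > thr);
    end

    % Example call (k and thr are guesses to tune):
    %   idx = curvatureCorners(data, 5, pi/6);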
You could use the Harris corner detector for detecting corners.

How to detect curves in a binary image?

I have a binary image and I want to detect/trace curves in it. I don't know anything about them in advance (coordinates, angle, etc.). Can anyone guide me on how I should start? Suppose I have this image:
I want to separate out curves from other lines. I am only interested in the curved lines and their parameters. I want to store information about the curves (in an array) to use afterwards.
It really depends on what you mean by "curve".
If you want to simply identify each discrete collection of pixels as a "curve", you could use a connected-components algorithm. Each component would correspond to a collection of pixels. You could then apply some test to determine linearity or some other feature of the component.
If you're looking for straight lines, circular curves, or any other parametric curve you could use the Hough transform to detect the elements from the image.
The best approach is really going to depend on which curves you're looking for, and what information you need about the curves.
reference links:
Circular Hough Transform Demo
A Brief Description of the Application of the Hough Transform for Detecting Circles in Computer Images
A method for detection of circular arcs based on the Hough transform
Google goodness
Since you already seem to have a good binary image, it might be easiest to just separate the different connected components of the image and then calculate their parameters.
First, you can do the separation by scanning through the image; when you encounter a black pixel, you can apply a standard flood-fill algorithm to find all the pixels in that shape. If you have the MATLAB Image Processing Toolbox, you can use the bwconncomp and bwselect functions for this. If your shapes are not fully connected, you might apply a morphological closing operation to the image to connect them.
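A sketch of that segmentation step, assuming the shapes are the foreground (true) pixels of a binary image and a hypothetical file name; invert with ~bw if your curves are black on white:

    % Sketch: split a binary image into its connected shapes.
    bw = imread('shapes.png') > 0;            % hypothetical binary input
    bw = imclose(bw, strel('disk', 2));       % optional: bridge small gaps first
    cc = bwconncomp(bw, 8);                   % 8-connected components
    for k = 1:cc.NumObjects
        [rows, cols] = ind2sub(size(bw), cc.PixelIdxList{k});
        % rows/cols are the pixel coordinates of shape k; analyse it here
    end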
After you have segmented out the different shapes, you can filter out the curves by testing how much each one deviates from a line. You can do this simply by picking the endpoints of the shape and calculating how far the other points lie from the line defined by those endpoints. If this distance exceeds some maximum, you have a curve rather than a line.
Another approach would be to measure the ratio between the length of the object and the distance between its endpoints. This ratio would be near 1 for lines and larger for curves and wiggly shapes.
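Both tests in code, on a synthetic quarter-circle standing in for one segmented shape (the tolerance is a tuning value):

    % Sketch: line-vs-curve tests on ordered points along one shape.
    t = linspace(0, pi/2, 50)';
    p = [cos(t), sin(t)];                     % synthetic quarter-circle "shape"

    % Test 1: maximum deviation from the line through the endpoints.
    p1 = p(1, :);  p2 = p(end, :);  d = p2 - p1;
    dev = abs((p(:,1) - p1(1))*d(2) - (p(:,2) - p1(2))*d(1)) / norm(d);
    isCurve = max(dev) > 0.05*norm(d);        % tolerance is a tuning value

    % Test 2: arc length versus endpoint distance (~1 for lines, larger here).
    arcLen = sum(hypot(diff(p(:,1)), diff(p(:,2))));
    ratio  = arcLen / norm(d)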
If your images contain angular shapes, which you wish to separate from curves, you might inspect the directional gradient of your curves. Segment the shape, pick a set of equidistant points from it, and for each point calculate the angle to the previous point and to the next point. If the difference between the angles is too large, you do not have a smooth curve but an angled shape.
Possible difficulties in implementation include thick lines, which you can solve with a skeleton transformation. For a MATLAB implementation of skeletonization and of finding curve endpoints, see the Image Processing Toolbox documentation.
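For the thick-line case, a short sketch using bwmorph from the Image Processing Toolbox (the file name is hypothetical):

    % Sketch: thin strokes to a 1-px skeleton and locate curve endpoints.
    bw   = imread('curves.png') > 0;          % hypothetical binary input
    skel = bwmorph(bw, 'skel', Inf);          % skeleton transformation
    ends = bwmorph(skel, 'endpoints');        % logical mask of endpoints
    [ey, ex] = find(ends);                    % endpoint coordinates (row, col)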
1) Read a book on Image Analysis
2) Scan for a black pixel; when one is found, look for neighbouring pixels that are also black, store their locations, then make them white. This gets you the points of one object and removes it from the image. Keep repeating this until there are no remaining black pixels.
If you want to separate the curves from the straight lines, try line fitting and then compute the coefficient of correlation. Similar algorithms are available for curves, and the correlation tells you how close the points are to the idealised shape.
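A sketch of the line fitting / correlation test, on synthetic points standing in for one extracted object:

    % Sketch: least-squares line fit plus correlation coefficient.
    x = (1:50)';  y = 2*x + 5 + randn(50, 1);   % nearly straight test object
    coeffs = polyfit(x, y, 1);                  % fitted slope and intercept
    R  = corrcoef(x, y);
    r2 = R(1, 2)^2;                             % ~1 for straight objects
    isLine = r2 > 0.98;                         % threshold is a tuning value
    % Caveat: near-vertical objects (x almost constant) need the axes swapped.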
There is also another solution possible with the use of chain codes.
Understanding Freeman chain codes for OCR
The chain code basically assigns a value between 0 and 7 (or 1 and 8) to each pixel, saying at which location in an 8-connected neighbourhood its connected predecessor lies. Thus, as mentioned in Hackworth's suggestion, one performs connected-component labelling and then calculates the chain code for each component curve. Looking at the distribution and the gradient of the chain codes, one can easily distinguish between lines and curves. The problem with the method, though, is oscillating curves, in which case the gradient is less useful and one depends on the clustering of the chain codes!
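A sketch of computing the chain code for one component, by tracing its boundary and mapping each unit step to a direction code (the file name is hypothetical; for thin curves the boundary trace essentially runs along the curve and back):

    % Sketch: Freeman chain code (0 = E, 1 = NE, ..., 7 = SE) of one curve.
    bw = imread('curves.png') > 0;                  % hypothetical binary input
    [r, c] = find(bw, 1);                           % first object pixel, column-major (topmost in its column)
    trace  = bwtraceboundary(bw, [r c], 'N');       % ordered boundary pixels
    steps  = diff(trace);                           % [drow dcol] between pixels
    dirs   = [0 1; -1 1; -1 0; -1 -1; 0 -1; 1 -1; 1 0; 1 1];   % codes 0..7
    [~, code] = ismember(steps, dirs, 'rows');
    code = code - 1;                                % Freeman codes in 0..7
    % A straight line gives a nearly constant 'code'; a curve shows a code
    % that drifts, so looking at how 'code' changes separates the two.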
I'm no computer vision expert, but I think you could detect lines/curves in binary images relatively easily using some basic edge-detection algorithms (e.g. a Sobel filter).
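In MATLAB terms that suggestion is a single call (a sketch with a hypothetical file name; edge accepts binary input, although on an already-binary image it mostly just outlines the strokes):

    % Sketch: Sobel edge detection with the Image Processing Toolbox.
    bw    = imread('curves.png') > 0;   % hypothetical binary input
    edges = edge(bw, 'sobel');          % logical edge map
    imshow(edges)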