How to warp a sector-like area of an image into another sector-like area using MATLAB?

I am trying to achieve, very crudely speaking, a stretching of the borders of this shape:
All parameters (r, theta, delta(r), delta(theta)) can vary.
I've tried using fitgeotrans, projective2d, and affine2d, but they seem to work mostly for polygonal shapes.
Is it at all possible to convert the image to polar coordinates (using cart2pol) and then give polar points to fitgeotrans? Or is a polygon enclosing the sector area the only option?

Ideally, you should do a backward warp: for each pixel in the destination image, find the corresponding pixel in the source and copy its value there.
You could use pol2cart for the coordinate conversion, but it only gets you part of the way, since you have to do the sector-to-sector adjustment yourself anyway.
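A minimal sketch of such a backward warp in MATLAB; the centre, radii, and angles below are illustrative placeholders, not values from the question:

    % Backward warp: for every destination pixel, compute its polar
    % coordinates, map them back into the source sector, and sample the
    % source image there.
    src = im2double(imread('image.png'));   % grayscale assumed for brevity
    [h, w] = size(src);
    cx = w/2; cy = h/2;                     % sector centre (assumed)
    r1 = 100;  r2 = 300;                    % source radii (illustrative)
    th1 = -pi/6; th2 = pi/6;                % source angles
    r1n = 120; r2n = 350;                   % destination radii
    th1n = -pi/4; th2n = pi/4;              % destination angles

    [X, Y] = meshgrid(1:w, 1:h);

    % Polar coordinates of every destination pixel relative to the centre
    [TH, R] = cart2pol(X - cx, Y - cy);

    % Linearly map destination radius/angle back into the source sector
    Rs  = r1  + (R  - r1n)  .* (r2  - r1)  ./ (r2n  - r1n);
    THs = th1 + (TH - th1n) .* (th2 - th1) ./ (th2n - th1n);

    % Back to Cartesian source coordinates, then bilinear sampling
    [Xs, Ys] = pol2cart(THs, Rs);
    dst = interp2(X, Y, src, Xs + cx, Ys + cy, 'linear', 0);

    % Keep only pixels that actually fall inside the destination sector
    mask = R >= r1n & R <= r2n & TH >= th1n & TH <= th2n;
    dst(~mask) = 0;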

Related

Matlab - Transforming an image to receive a "View from Top"

I'm trying to transform a picture of a pool table so that it would look as if the picture was taken from the top.
For example, I'd like to take a picture like this and transform it to get an image of just the table itself as a perfect rectangle.
For starters, I don't mind entering the coordinates of the corners manually.
I looked at Matlab's fitgeotrans and tformfwd functions but, to be honest, couldn't really make sense of them, being quite new to image processing.
I'd really appreciate your help!
Image:
If you do not need this to be fully automatic, you can select the 4 corners of the table by hand using cpselect. Then you have to define the 4 corners of the rectangle that you want your table to map to, i.e. the image coordinates where you want the table's corners to end up. Now you have two sets of 4 x-y points. Use fitgeotrans with the transformation type set to 'projective' to compute the projective transformation between the two sets of points, then use imwarp to transform your image.
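A sketch of those steps; the corner coordinates and the 800x400 target rectangle are illustrative, not taken from the actual photo:

    img = imread('pool_table.jpg');
    movingPoints = [110 300; 950 280; 1180 620; 60 660];  % picked table corners (x y)
    fixedPoints  = [1 1; 800 1; 800 400; 1 400];          % desired rectangle corners

    tform = fitgeotrans(movingPoints, fixedPoints, 'projective');

    % Force the output raster to cover exactly the target rectangle
    topView = imwarp(img, tform, 'OutputView', imref2d([400 800]));
    imshow(topView)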

"Simple" edge - line - detection

In an image I need to find a "table" - a simple rectangle.
The problem is the edge recognition, because the potential photos will be dark.
I tried edge detection - Sobel, Canny, LoG, ... - and after that a Hough transform and line finding, but these algorithms are not enough for this task.
Some things that can help me:
- it is a rectangle!, only in perspective view (something like fitting a perspective rectangle?)
- the object MUST cover at least, for example, 90% of the photo (so I know I need to look near the photo edges)
- the rectangle has almost uniform color (for example a wooden dining table)
- I need to find at least "only" the 4 corners (but yes, finding the edges of the table would be better)
I know how, for example, the Sobel, Canny, and LoG algorithms work, and the Hough transform as well. And naturally those algorithms fail on dark or low-contrast images. But is there some other method, for example based on "fitting"?
Images showing the kind of photo I can get (you see it would be dark) and what I need to find:
and this is a really "nice" picture (without noise). I tested it on noisier pictures and the result was... simply horrible.
Result of this picture with the current algorithm, LoG (with the other ones it looks the same):
I know image and edge recognition is not a simple challenge, but are there some newer, better methods or anything like that I could try?
In one of the posts here I found the LSD algorithm. It seems very nicely described, and it seems to recognize straight lines really well too. Do you think it would be better to use it instead of Canny or Sobel detection?
Another option would be corner detection; on my sample images it works better, but it recognizes too many points and there would be a problem with runtime - I would need to connect all the points and "find" the table.
Another solution:
I thought about point-to-point mapping: I would have some "virtual" table and try to map the table above onto that "virtual" table (a simple 2D square in a paint program :] ). But I think point-to-point mapping would give me big errors, or it would not work at all.
Does anyone have advice on which algorithm to use?
I tried recognizing edges in FIJI and then putting the edge-detected image into MATLAB, but with Hough it works badly as well... :/
What do you think would be best to use? In short, I need to find some algorithm that works on low-contrast, dark images.
I'd try a modified snakes algorithm:
You parameterize your rectangle with 4 points, initialize them somewhere near the image corners, and then move the points towards image features using some optimization algorithm (e.g. gradient descent, simulated annealing, etc.).
The image features could be a combination of edge features (e.g. Sobel directly, or Sobel of a Gaussian-filtered image) evaluated along the lines between the four points, and corner features evaluated at the 4 points themselves.
Additionally, you can penalize unlikely rectangles (for example based on the angles between the points or on the distance to the image boundary).
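A rough MATLAB sketch of that idea, using fminsearch as a (very naive) stand-in for the optimizer; the score function, penalty weight, and sampling density are illustrative, not tuned:

    % Fit a quadrilateral to strong edges: parameterize 4 corners, score
    % the average edge magnitude along the 4 connecting sides, and let a
    % generic optimizer move the corners.
    img = im2double(rgb2gray(imread('table.jpg')));
    E = imgradient(imgaussfilt(img, 2));     % smoothed gradient magnitude
    [h, w] = size(E);

    p0 = [1 1; w 1; w h; 1 h];               % start at the image corners
    cost = @(v) -edgeScore(reshape(v, 4, 2), E) ...
           + 1e-3 * borderPenalty(reshape(v, 4, 2), w, h);
    pBest = reshape(fminsearch(cost, p0(:)), 4, 2);

    function s = edgeScore(p, E)
        % Mean edge strength sampled along the quadrilateral's four sides
        s = 0;
        for k = 1:4
            q = p(mod(k, 4) + 1, :);
            xs = linspace(p(k, 1), q(1), 100);
            ys = linspace(p(k, 2), q(2), 100);
            s = s + mean(interp2(E, xs, ys, 'linear', 0));
        end
    end

    function c = borderPenalty(p, w, h)
        % Penalize corners far from the border (table covers ~90% of photo)
        c = sum(min(p(:, 1), w - p(:, 1)).^2 + min(p(:, 2), h - p(:, 2)).^2);
    end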

Image processing: Rotational alignment of an object

I have a stack of images with a bar close to the center. As the stack progresses the bar pivots around one end and the entire stack contains images with the bar rotated at many different angles up to 45 degrees above or below horizontal.
As shown here:
I'm looking for a way to rotate the bar and/or entire image and align everything horizontally before I do my other processing. Ideally this would be done in Matlab / imageJ / ImageMagick. I'm currently trying to work out a method using first Canny edge detection, followed by a Hough transform, followed by an image rotation, but I'm hoping this is a specific case of a more general problem which has already been solved.
If you have the Image Processing Toolbox, you can use regionprops with the 'Orientation' property to find the angle.
http://www.mathworks.com/help/images/ref/regionprops.html#bqkf8ji
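For example (a sketch, with the frame in img; the thresholding assumes the bar is brighter than the background):

    bw = imbinarize(rgb2gray(img));          % adjust if img is already gray
    bw = bwareafilt(bw, 1);                  % keep only the largest blob
    stats = regionprops(bw, 'Orientation');  % angle vs. x-axis, in degrees
    aligned = imrotate(img, -stats.Orientation, 'bilinear', 'crop');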
The problem you are solving is known as image registration or image alignment.
- The first thing you need to do is to threshold the image, so you end up with a black-and-white image. This will simplify the process.
- Then you need to calculate the mass centre of the images and translate them to match each other's centres.
- Then you need to rotate the images to match each other. This can be done using the principal axis measure: the principal axes are the two axes that explain most of the variance in the pixel population, which basically gives you a vector showing which way your bar is pointing. Then all you need to do is rotate the bars in the same direction.
- After the principal-axis transformation you can try rotating the pictures a little more in each direction to optimize the rotation.
All the way through your translation and rotation you need a measure of how good a fit your transformation is. This measure can be many things. If the picture is black and white, a simple subtraction of the pictures is enough; otherwise you can use measures like mutual information.
You can also look at Procrustes analysis; MATLAB has a procrustes function for it.
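A sketch of the threshold / mass-centre / principal-axis steps in MATLAB, for one image (repeat per image, then rotate by the difference of the angles); assumes a grayscale input in img:

    bw = imbinarize(img);                    % black-and-white mask
    [ys, xs] = find(bw);                     % foreground pixel coordinates
    c = [mean(xs), mean(ys)];                % mass centre

    % Principal axis = eigenvector of the coordinate covariance with the
    % largest eigenvalue; its angle shows which way the bar points.
    C = cov([xs - c(1), ys - c(2)]);
    [V, D] = eig(C);
    [~, i] = max(diag(D));
    angleDeg = atan2d(V(2, i), V(1, i));     % note: image y-axis points down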
You might want to look into the SIFT transform.
Take the rectangle that represents a worst-case guess for your bar as the reference image and determine the rotation matrix relative to that.
See http://www.vlfeat.org/overview/sift.html
Use the StackReg plugin of ImageJ. I'm not 100% sure but I think it already comes installed with FIJI (FIJI Is Just ImageJ).
EDIT: I think I misread your question - it is not a stack of images you are trying to fix, right? In that case, a simple approach (probably not the most efficient, but it definitely works) is the following algorithm:
threshold the image (seems easy, since your background is always white)
take a long horizontal line as a structuring element and open the image with it
rotate the structuring element and keep opening the image, measuring how many pixels survive each time
the angle that maximizes the surviving pixels is the bar's orientation, which gives you the rotation you need to fix your image (see the sketch below).
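A sketch of that loop in MATLAB, using morphological opening; the line length of 51 and the angle range are assumptions (keep the line shorter than the bar):

    bw = ~imbinarize(img);                  % dark bar on white -> invert
    angles = -45:45;                        % bar within 45 deg of horizontal
    counts = zeros(size(angles));
    for k = 1:numel(angles)
        se = strel('line', 51, angles(k));
        counts(k) = nnz(imopen(bw, se));    % pixels surviving the opening
    end
    [~, best] = max(counts);
    % Sign conventions between strel and imrotate are easy to flip;
    % verify on one frame.
    fixed = imrotate(img, -angles(best), 'bilinear', 'crop');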
There are several approaches to this problem, as suggested by other answers. One approach, possibly similar to what you are already trying, is to use the Hough transform, which is good at detecting line orientations. Combining it with morphological processing and an image rotation once the angle is detected, you can build a system that corrects for angular variation. The basic steps would be:
Use Morphological operations to make the bar a single line blob.
Use Hough transform on this image.
Find the maximum in the transform output and use that to find orientation angle.
Use the angle to fix original image.
A full example of this method comes with the Computer Vision System Toolbox; see
http://www.mathworks.com/help/vision/examples/rotation-correction-1.html
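A sketch of steps 1-4, assuming a dark bar on a light background (the angle bookkeeping is easy to get backwards, so verify the sign on one frame):

    bw = ~imbinarize(img);                   % bar as foreground
    bw = bwmorph(bw, 'thin', inf);           % reduce the bar to a thin line
    [H, theta, ~] = hough(bw);
    peak = houghpeaks(H, 1);                 % strongest line in the image
    lineNormal = theta(peak(1, 2));          % hough gives the normal's angle
    rotAngle = mod(lineNormal - 90, 180);    % line direction = normal + 90
    if rotAngle > 90, rotAngle = rotAngle - 180; end
    fixed = imrotate(img, rotAngle, 'bilinear', 'crop');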
You can try a Givens or Householder transform; I prefer Givens.
It requires an angle: use cos(angle) and sin(angle) to build the Givens matrix.
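For completeness, a 2-D Givens rotation is just the familiar rotation matrix; a minimal sketch applying it with affine2d and imwarp (the angle is illustrative):

    a = deg2rad(15);                 % the measured angle, illustrative
    % Givens rotation in affine2d's row-vector convention ([x y 1] * T)
    T = [ cos(a)  sin(a)  0;
         -sin(a)  cos(a)  0;
              0       0   1];
    rotated = imwarp(img, affine2d(T));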

Is there a way to figure out 3D distance/view angle from a 2D environment using the iPhone/iPad camera?

Maybe I'm asking this too soon in my research, but I'd better know if this is possible sooner than later.
Imagine I have the following square printed on a paper on top of a table:
The table is brown, so it does not match any of the colors in the square. Is there a way for me, from a common iPhone camera (non-stereo view), to figure out the distance and angle from which I'm looking at the square on the table?
In the end, what I'm looking for is to be able to draw a 3D square on top of this one using the camera image, but I'm not sure whether I can figure out the distance and position of the object in space using only a 2D image. Any hints are well appreciated.
Short answer: http://weblog.bocoup.com/javascript-augmented-reality
Big answer:
First posterize, then vectorize. With the vectors in hand, you may need to do some math tricks to determine, from the vector positions, the perspective and then the camera position.
Maybe this help:
www.pixastic.com/lib/docs/actions/posterize/
github.com/selead/cl-vectorizer
vectormagic.com/home
autotrace.sourceforge.net
www.scipy.org/PyLab
raphaeljs.com/
technabob.com/blog/2007/12/29/video-games-get-vectorized/
superuser.com/questions/88415/is-there-an-open-source-alternative-to-vector-magic
Oughta be possible. Scan the image for the red/blue/yellow pattern, then do edge detection to figure out how warped the squares are (they'll be skewed quadrilaterals in anything but a straight-on view). Distance depends on the camera's zoom setting and scan resolution, but basically you'd count how many pixels are visible in each of the squares, run that past the camera's specs, and you should be able to determine a rough distance.
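For the distance part, a back-of-the-envelope version of that pixel-counting idea using the pinhole model: distance is roughly f * S / s, where S is the printed square's real side, s its apparent side in pixels, and f the focal length in pixel units (calibrated once by photographing the square from a known distance). All numbers below are illustrative:

    f = 1500;              % focal length in pixels (calibrate this)
    S = 0.10;              % real side of the square: 10 cm
    s = 220;               % measured side in pixels (straight-on view)
    distance = f * S / s   % roughly 0.68 m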

I need help compensating for the shifting of images when trying to create a grid with one image and apply it on another

I have two images of yeast plates:
Permissive:
Xgal:
The two images should be in the same spot and be roughly the same size. I am trying to use one of the images to generate a grid and then apply that grid to the other image. The grid is made by looking at the colonies on the permissive plate, which should have 1536 colonies on it. The problem is that the camera that was used to take the images moves up and down a bit, and the images can also be shifted slightly because the plate is not always in exactly the same place.
This means that when I use the permissive plate to generate the grid, the grid is shifted relative to the Xgal plate. Does anyone know a way I can compensate for this? I am using Perl with the GD module. Any advice would be greatly appreciated. Thank you
I've done this in other languages in relation to motion analysis. You can mathematically determine the shift in position between two images using cross correlation.
Fortunately, you may not need to actually do the maths :) You could use something like ImageMagick, which provides a lot of image-processing functions for you and is Perl-scriptable. Scripts already exist for tasks very much like yours.
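If you are willing to hop into MATLAB for the measurement step (the rest of the pipeline can stay in Perl/GD), normxcorr2 does the cross-correlation for you; a sketch, with the patch location purely illustrative:

    % Estimate how far plate2 is shifted relative to plate1 by matching
    % a patch cut from plate2 against plate1 (both grayscale).
    tmpl = plate2(201:400, 201:400);         % a patch well inside the plate
    c = normxcorr2(tmpl, plate1);
    [~, idx] = max(c(:));
    [ypeak, xpeak] = ind2sub(size(c), idx);
    % Where the patch landed in plate1, minus where it came from in plate2
    dy = (ypeak - size(tmpl, 1)) - 200;
    dx = (xpeak - size(tmpl, 2)) - 200;
    aligned = imtranslate(plate2, [-dx, -dy]);  % undo the shift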
If you have only a few pairs of images and, as in the examples, they are very different in appearance, then an alternative to Tim Barrass' method would be:
Open the first image in gimp, find the co-ordinates of a landmark feature
Open the second image in gimp, find the co-ordinates of the same landmark
Calculate the offset
Shift the second image using ImageMagick's convert command with the -affine option. Set the parameters sx = sy = 1.0, rx = ry = 0.0, tx = negative horizontal offset, ty = negative vertical offset.
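Concretely, with ImageMagick's legacy convert syntax, the -affine matrix is given as sx,rx,ry,sy,tx,ty (the offsets here are illustrative):

    convert second.png -affine 1,0,0,1,-12,-8 -transform shifted.png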