What affine operation should I use to make the image output look like the one on the right-hand side? (MATLAB)

I want to make the output of the left-hand image look like the right-hand one by performing affine operations such as scaling, translation, shear, and rotation.

If you are asking how to perform image registration in MATLAB, then have a look at this great toolbox on the File Exchange. Image registration is the process of uncovering the transformation parameters that will align a source image to a reference image as closely as possible. If you use affine registration, the result will be an affine transformation matrix that will transform your left image into your right image. One thing to watch out for: by default, the registration may use the upper-left corner of the image as the centre of rotation, but you will more likely want the centre of the image to be the centre of rotation. In that case, simply translate the image by half its dimensions before and after applying the transformation / registration.
However if you literally just want to know the angle of rotation for this one single image, then use a protractor.
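The centre-of-rotation trick can be sketched as composing three 3x3 homogeneous matrices: translate the centre to the origin, rotate, translate back. Below is a minimal pure-Python illustration of the math (the MATLAB route would be to build the same matrix and pass it to affine2d / imwarp; the function names here are hypothetical helpers, not toolbox API):

```python
import math

def translate(tx, ty):
    # 3x3 homogeneous translation matrix (row-major nested lists)
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def rotate(theta):
    # 3x3 homogeneous rotation matrix (angle in radians)
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotate_about_center(theta, width, height):
    # translate the centre to the origin, rotate, translate back
    cx, cy = width / 2.0, height / 2.0
    return matmul(translate(cx, cy), matmul(rotate(theta), translate(-cx, -cy)))

def apply_point(m, x, y):
    # apply a 3x3 homogeneous matrix to a 2D point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

The image centre stays fixed under the resulting matrix, which is exactly the behaviour you usually want from a registration-style rotation.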

Related

Restoring the image of a face's plane

I am using an API to analyze faces in MATLAB, where for each picture I get a 3x3 rotation matrix of the face's orientation, telling which direction the head is pointing.
I am trying to normalize the image according to that matrix, so that it will be distorted to get the image of the face's plane. This is something like 'undoing' the projection of the face to the camera plane. For example, if the head is directed a little to the left, it will stretch the left side to (more or less) preserve the face's original proportions.
I tried using affine2d and projective2d with imwarp, but it didn't achieve that goal.
Achieving your goal with simple tools like affine transformations seems impossible to me since a face is hardly a flat surface. An extreme example: Imagine the camera recording a profile view of someone's head. How are you going to reconstruct the missing half of the face?
There have been successful attempts to change the orientation of faces in images and real-time video, but the methods used are quite complex:
[We] propose a gaze correction method that needs just a
single webcam. We apply recent shape deformation techniques
to generate a 3D face model that matches the user’s face. We
then render a gaze-corrected version of this face model and
seamlessly insert it into the original image.
(Giger et al., https://graphics.ethz.ch/Downloads/Publications/Papers/2014/Gig14a/Gig14a.pdf)

Matlab - Transforming an image to receive a "View from Top"

I'm trying to transform a picture of a pool table so that it would look as if the picture was taken from the top.
For example, I'd like to take a picture like this and transform it to get an image of just the table itself as a perfect rectangle.
For starters, I don't mind entering the coordinates of the corners manually.
I looked at MATLAB's fitgeotrans and tformfwd functions, but to be honest I couldn't really make sense of them, being quite new to image processing.
I'd really appreciate your help!
Image:
If you do not need this to be fully automatic, you can select the 4 corners of the table by hand using cpselect. Then you have to define the 4 corners of the rectangle that you want your table to map to, i.e. the image coordinates where you want the corners of the table to end up. Now you have two sets of 4 x-y points. Use fitgeotrans with the transformation type set to 'projective' to compute the projective transformation between the two sets of points. Then use imwarp to transform your image.
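Under the hood, fitgeotrans with 'projective' solves for the eight unknowns of a homography from the four corner correspondences. As a rough sketch of that math in plain Python (no toolboxes; fit_projective and warp_point are hypothetical helpers, not MATLAB API):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_projective(src, dst):
    # src, dst: lists of four (x, y) corner pairs; fixes h33 = 1
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, x, y):
    # map a point through the homography, with the perspective divide
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

In practice you would let fitgeotrans and imwarp do all of this, but seeing the linear system makes it clearer why exactly four non-degenerate point pairs are needed.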

Image processing: Rotational alignment of an object

I have a stack of images with a bar close to the center. As the stack progresses the bar pivots around one end and the entire stack contains images with the bar rotated at many different angles up to 45 degrees above or below horizontal.
As shown here:
I'm looking for a way to rotate the bar and/or entire image and align everything horizontally before I do my other processing. Ideally this would be done in Matlab / imageJ / ImageMagick. I'm currently trying to work out a method using first Canny edge detection, followed by a Hough transform, followed by an image rotation, but I'm hoping this is a specific case of a more general problem which has already been solved.
If you have the image processing toolbox you can use regionprops with the 'Orientation' property to find the angle.
http://www.mathworks.com/help/images/ref/regionprops.html#bqkf8ji
The problem you are solving is known as image registration or image alignment.
-The first thing you need to do is to threshold the image, so you end up with a black-and-white image. This will simplify the process.
-Then you need to calculate the mass centre of the images and translate them to match each other's centres.
-Then you need to rotate the images to match each other. This could be done using the principal axis measure. The principal axes are the two axes that explain most of the variance in the pixel population, which basically gives you a vector showing which way your bar is pointing. Then all you need to do is rotate the bars in the same direction.
-After the principal axis transformation you can try rotating the pictures a little bit more in each direction to try and optimise the rotation.
All the way through your translation and rotation you need a measure of how good a fit your transformation is. This measure can be many things. If the picture is black and white, a simple subtraction of the pictures is enough. Otherwise you can use measures like mutual information.
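The centre-of-mass and principal-axis steps can be sketched with image moments: the centroid comes from the first-order moments and the principal-axis angle from the central second-order moments. A minimal pure-Python illustration (in MATLAB, regionprops with 'Centroid' and 'Orientation' should give you the same quantities, up to MATLAB's sign convention):

```python
import math

def orientation(points):
    # points: list of (x, y) pixel coordinates of the thresholded foreground
    n = len(points)
    cx = sum(x for x, _ in points) / n   # centroid (first-order moments)
    cy = sum(y for _, y in points) / n
    # central second-order moments
    mu20 = sum((x - cx) ** 2 for x, _ in points) / n
    mu02 = sum((y - cy) ** 2 for _, y in points) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in points) / n
    # principal-axis angle (radians) from the central second moments
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)
```

Rotating each image by the negative of this angle (about the centroid) aligns all the bars to horizontal.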
You can also look at Procrustes analysis; MATLAB provides a procrustes function in the Statistics Toolbox.
You might want to look into SIFT features.
You could take as your template the rectangle that represents a worst-case guess for your bar and determine the rotation matrix for that.
See http://www.vlfeat.org/overview/sift.html
Use the StackReg plugin of ImageJ. I'm not 100% sure but I think it already comes installed with FIJI (FIJI Is Just ImageJ).
EDIT: I think I have misread your question. That is not a stack of images you are trying to fix, right? In that case, a simple approach (probably not the most efficient, but it definitely works) is the following algorithm:
threshold the image (seems easy, your background is always white)
take a long horizontal line as a structuring element and dilate the image with it
rotate the structuring element and keep dilating the image, measuring the size of the dilation
the angle that maximizes it is the rotation angle you'll need to fix your image
There are several approaches to this problem, as suggested by other answers. One approach, possibly similar to what you are already trying, is to use the Hough transform, which is good at detecting line orientations. Combining it with morphological processing and an image rotation after detecting the angle, you can create a system that corrects for angular variations. The basic steps would be:
Use Morphological operations to make the bar a single line blob.
Use Hough transform on this image.
Find the maximum in the transform output and use that to find orientation angle.
Use the angle to fix original image.
A full example of this method comes with the Computer Vision System Toolbox. See
http://www.mathworks.com/help/vision/examples/rotation-correction-1.html
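A bare-bones version of the Hough voting and peak-finding (steps 2 and 3) can be sketched in plain Python, assuming the bar has already been thresholded into a point set; dominant_angle is a hypothetical helper, not a toolbox function:

```python
import math
from collections import Counter

def dominant_angle(points, angle_steps=180, rho_res=1.0):
    # points: (x, y) foreground pixels of the thresholded bar.
    # Each point votes for every (theta, rho) line passing through it:
    # rho = x*cos(theta) + y*sin(theta)
    acc = Counter()
    for x, y in points:
        for i in range(angle_steps):
            theta = math.pi * i / angle_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(i, round(rho / rho_res))] += 1
    # the accumulator peak gives the dominant line's normal direction
    (i, _), _ = acc.most_common(1)[0]
    # theta is the line's normal; the line itself is at theta - 90 degrees
    return math.degrees(math.pi * i / angle_steps) - 90.0
```

Rotating the image by the negative of the returned angle (step 4) then levels the bar.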
You can try a Givens or Householder transform; I prefer Givens.
It requires an angle, using cos(angle) and sin(angle) to build the Givens matrix.
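As a sketch of that idea in plain Python (givens_to_horizontal is a hypothetical helper): given the bar's direction vector, the Givens rotation built from the cos and sin of its angle zeroes the y-component, i.e. rotates the bar onto the horizontal axis.

```python
import math

def givens_to_horizontal(dx, dy):
    # Angle and 2x2 Givens matrix that rotates vector (dx, dy) onto the
    # x-axis, zeroing its y-component -- i.e. levelling a bar with
    # direction (dx, dy).
    theta = math.atan2(dy, dx)
    c, s = math.cos(theta), math.sin(theta)
    # G rotates by -theta, so G @ (dx, dy) lands on the x-axis
    return theta, [[c, s], [-s, c]]
```

The same construction generalises to Householder reflections, which zero components by reflecting instead of rotating.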

How to detect any 4-sided polygon in the image and adjust it to a rectangle?

For a TV screen recognition project, I need to clip the TV screen from an image.
The TV screen is actually a rectangle, but it is obviously distorted in the image from the phone camera. My questions are:
How do I detect an arbitrary 4-sided polygon (not a rectangle) in the image?
Once I know the polygon area in the image, how do I extract that area into a Mat?
Having solved question 2, how do I convert the 4-sided polygon Mat into a rectangular Mat with a fixed W/H ratio?
It would be very helpful if you could give some sample code to reference.
Thanks for your answers!
If you want to detect the edges of your TV screen, you can use some edge detection (like Canny) and then use the Hough transform to obtain the lines. If you then extract the points corresponding to the intersections of the lines, you can create a homography matrix H (3x3). Finally, using this homography, you can "deform" your original image to a reference frame (in our case, the rectangle with a given aspect ratio). The homography is a transformation from plane to plane, so it's exactly what you need here.
If you're going to use OpenCV (which is always a good choice!), here are the functions that you could use:
Canny() - find edges in the image
HoughLines() - detect lines
findHomography() - finds the homography matrix from a set of correspondences. In your case, you will need to pass the method as 0.
warpPerspective() - the function you're going to use to "deform" the image to a reference frame.
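For intuition, here is roughly what warpPerspective does internally: inverse mapping with nearest-neighbour sampling. This is a simplified pure-Python sketch (images as nested lists of grey values), not the actual OpenCV implementation:

```python
def mat3_inv(H):
    # inverse of a 3x3 matrix via the adjugate
    a, b, c = H[0]; d, e, f = H[1]; g, h, i = H[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

def warp(src, H, out_w, out_h):
    # inverse mapping: for each output pixel, look up the source pixel
    Hinv = mat3_inv(H)
    out = [[0] * out_w for _ in range(out_h)]
    for v in range(out_h):
        for u in range(out_w):
            w = Hinv[2][0] * u + Hinv[2][1] * v + Hinv[2][2]
            x = (Hinv[0][0] * u + Hinv[0][1] * v + Hinv[0][2]) / w
            y = (Hinv[1][0] * u + Hinv[1][1] * v + Hinv[1][2]) / w
            xi, yi = int(round(x)), int(round(y))  # nearest neighbour
            if 0 <= yi < len(src) and 0 <= xi < len(src[0]):
                out[v][u] = src[yi][xi]
    return out
```

Iterating over destination pixels (rather than source pixels) is what guarantees the output has no holes; OpenCV additionally offers bilinear and bicubic interpolation instead of nearest neighbour.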
Obviously, you can find similar functions for MATLAB and others...
I hope this helps you.

Not able to calibrate camera view to 3D Model

I am developing an app which uses LK (Lucas-Kanade) for tracking and POSIT for pose estimation. I successfully obtain the rotation matrix and projection matrix and can track perfectly, but I am not able to translate the 3D object properly: the object does not fit into the right place.
Will someone help me with this?
Check these links; they may provide you with some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
Now, you must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space. And from your details, it seems that the problem is bad fov (field of view) angles.
You can try to measure them, or feed the half or double value to your algorithm.
There are two conventions for fov: half-angle (from image centre to top or left) and full-angle (from bottom to top, or from left to right, respectively). Maybe you just mixed them up, using the full angle instead of the half angle, or vice versa.
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function returns the inverse transformation (e.g. camera in world): it finds the object pose in 3D space where the camera is at (0;0;0). For almost all cases you need to invert it to get the correct result: {R^T; -R^T*T}.