Recover the simulated affine transformation for matched ASIFT features (Morel's implementation)

Has anyone tried to recover the simulated affine transformation for the ASIFT feature detector (from the author's implementation)? In the original paper the simulated affine map is clearly given by equation 2.2, but I cannot find a clear point in the code where it is recovered. Has anyone tried this before? The function compensate_affine_coor1 in compute_asift_keypoints.cpp seems to be what I'm looking for, but the scale appears to be normalized, and the centre of coordinates used to perform the transformation is not clear to me.

It is performed in the function you mentioned. The centre coordinates are shifted because when you rotate an image, the top-left corner (the origin) moves, so the code has to compensate for that. And the scale doesn't change at all.
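For a concrete picture, here is a hedged numpy sketch (not Morel's actual code) of undoing one simulated view. It assumes the simulation rotates the original image by phi, shifts the canvas so all pixels land in positive coordinates, and then compresses the y axis by the tilt factor t, following equation 2.2 of the paper (A = lambda * R(psi) * T_t * R(phi)):

    # Sketch only: map a keypoint detected in an ASIFT-simulated view back
    # to the original image frame, under the assumptions stated above.
    import numpy as np

    def simulated_to_original(x, y, phi, t, w, h):
        """Map (x, y) from the simulated view back to the w-by-h original."""
        y = y * t                              # undo the 1/t compression of y

        c, s = np.cos(phi), np.sin(phi)
        R = np.array([[c, -s], [s, c]])        # forward rotation of the simulation

        # The rotated canvas was shifted so the image lies in positive
        # coordinates; recover that shift from the rotated corners.
        corners = R @ np.array([[0.0, w, 0.0, w], [0.0, 0.0, h, h]])
        shift = corners.min(axis=1)

        p = np.array([x, y]) + shift           # undo the shift...
        return R.T @ p                         # ...then the rotation (R.T == R^-1)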

Related

MATLAB - What are the units of the MATLAB Camera Calibration Toolbox?

When showing the extrinsic parameters of a calibration (the 3D model including the camera position and the positions of the calibration checkerboards), the toolbox does not include units for the axes. It seemed logical to assume that they are in mm, but the z values displayed cannot possibly be correct if they are indeed in mm. I'm assuming that there is some transformation going on, perhaps having to do with optical coordinates and units, but I can't figure it out from the documentation. Has anyone solved this problem?
If you marked the side length of your squares in mm, then the z-distance shown would be in mm.
I know next to nothing about MATLAB's tracking utilities (not entirely true, but I avoid MATLAB wherever I can, and that is almost always possible), but here's some general info.
Pixel dimensions on the sensor have nothing to do with the size of a pixel on screen, or in model space. For all practical purposes a camera produces a picture that has no meaningful units, and a tracking process is unaware of the scale of the scene (the perspective projection takes care of that). You can reintroduce a scale by taking two tracked points and measuring the distance between them; that solver-space distance is pretty much arbitrary. If you know the real distance between the points, you can get a conversion factor by computing:
real distance / solver space distance.
There's really no way of knowing this distance from the camera's settings, because the camera cannot differentiate between scenes at different scales: a perfect 1:100 replica is no different to the solver than the real thing. So you must always relate to something you can measure separately for each measuring session; the camera always produces something that is relative in nature.
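As a toy example of that conversion factor (all values invented purely for illustration):

    # Two tracked points in the solver's arbitrary units, plus one
    # distance you measured in the real scene.
    import numpy as np

    p1 = np.array([0.12, 0.40, 1.10])   # tracked point 1 (solver units)
    p2 = np.array([0.95, 0.38, 1.15])   # tracked point 2 (solver units)
    known_real_mm = 250.0               # measured distance between them, in mm

    scale = known_real_mm / np.linalg.norm(p1 - p2)   # mm per solver unit
    p1_mm = p1 * scale                  # any solver-space coordinate, now in mm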

SLERP rotates in the wrong direction (i.e. not shortest path)

I have two ellipsoids in R3 described in terms of their centre points (P), their axes lengths (a,b,c), and their rotation vector (R). I wish to interpolate a tubular structure between these two ellipsoids along a given centre line. This is done by creating an ellipsoid centred at each point along the centre line. Its axes lengths are interpolated linearly between those at the two endpoints, and the rotation is obtained as a quaternion using spherical linear interpolation, or SLERP.
I previously asked a similar question on this problem here. I have since isolated the issue a little further, and thought it warranted a new post. The difference here is that before doing SLERP, I first rotate the two reference ellipsoids by the inverse of the rotation matrix that describes one of them, such that one of them is now axis-aligned (i.e. has no rotation). Previously this appeared to solve the problem, but I have encountered an example where this fix does not work.
The source code to reproduce this issue is available here. The relevant function is ellipsoidSLERP and the functions it calls. Here is a screenshot of the output:
What you are seeing is an interpolation of ellipsoid volumes (blue) between two reference ellipsoid volumes at either end (green) along a centreline (cyan).
Problem Statement
The interpolation on the left works correctly, resulting in a smooth tubular structure. The interpolation on the right does not work correctly, and results in a twist.
What is causing this behaviour, and how can I correct it?
Please let me know if there's anything I can do to clarify.
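For reference, the most common cause of this symptom is the quaternion double cover: q and -q represent the same rotation, and plain SLERP between endpoints whose dot product is negative follows the long arc. A minimal sketch with the standard shortest-path fix, assuming unit quaternions in any consistent component order:

    # Minimal SLERP sketch with the shortest-path fix: a negative dot
    # product means the long arc would be taken, so negate one endpoint.
    import numpy as np

    def slerp(q1, q2, u):
        """Interpolate unit quaternions q1 -> q2 at fraction u in [0, 1]."""
        q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
        d = np.dot(q1, q2)
        if d < 0.0:                     # take the shorter great-circle arc
            q2, d = -q2, -d
        d = min(d, 1.0)
        theta = np.arccos(d)            # angle between the two quaternions
        if theta < 1e-8:                # nearly identical: linear is fine
            q = (1.0 - u) * q1 + u * q2
        else:
            q = (np.sin((1.0 - u) * theta) * q1 + np.sin(u * theta) * q2) / np.sin(theta)
        return q / np.linalg.norm(q)

Even with this fix, note that rotations recovered from ellipsoid axes are only defined up to axis flips and permutations (the ellipsoid's own symmetries), so the two endpoint quaternions can still differ by a 180° symmetry; if the twist persists, that ambiguity is worth ruling out.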

How to calculate a perspective transformation using an ellipse

I'm very new to 3D image processing. In my project I am trying to find the perspective angle of a circle.
A plate has a set of white circles; using those circles I want to find the 3D rotation angles of the plate.
I have finished the camera calibration part and obtained the camera parameters and calibration errors. Next, I captured an image and applied Sobel edge detection.
After that I am a little confused about the ellipse fitting algorithm. I have seen a lot of ellipse fitting algorithms; which one is the best and fastest method?
And after the ellipse fit, I don't know how to proceed. How do I calculate the rotation and translation matrices using that ellipse?
Can you tell me which algorithm is most suitable and easiest? I need some MATLAB code to understand the concept.
Thanks in advance.
First, find the ellipse/circle centres (e.g. as Eddy_Em described in other comments).
You can then refer to Zhang's classic paper
https://research.microsoft.com/en-us/um/people/zhang/calib/
which allows you to estimate the camera pose from a single image if some camera parameters (e.g. the centre of projection) are known. Note that the method fails for frontal recordings: the stronger the perspective effect, the more accurate your estimate will be. The algorithm is fairly simple; you'll need an SVD and some cross products.
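For a concrete picture of that last step, here is a hedged numpy sketch of recovering pose from a plane-induced homography in the spirit of Zhang's method. It assumes you have already estimated the homography H mapping planar model points (z = 0) to image points, e.g. from the circle centres with cv2.findHomography (an assumption, not part of the answer above), and that the intrinsic matrix K is known:

    # Columns of inv(K) @ H are proportional to r1, r2, t; rebuild a full
    # rotation with a cross product and snap it to the nearest rotation
    # matrix via SVD.
    import numpy as np

    def pose_from_homography(H, K):
        M = np.linalg.inv(K) @ H
        lam = 1.0 / np.linalg.norm(M[:, 0])   # fix the unknown scale
        r1, r2, t = lam * M[:, 0], lam * M[:, 1], lam * M[:, 2]
        r3 = np.cross(r1, r2)
        R = np.column_stack([r1, r2, r3])
        # R is only approximately a rotation; project to the nearest one
        # (in the Frobenius sense) with an SVD.
        U, _, Vt = np.linalg.svd(R)
        R = U @ Vt
        if np.linalg.det(R) < 0:              # guard against a reflection
            R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
        return R, t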

Derive a rotation/transformation matrix given an image and a rotated image in Java?

I need some advice to point me in the right direction.
My object detection system reads in this image (see below) and returns coordinates of bounding boxes for some detection results (in this case, a hammer):
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/z3IJx-1.png
However, I wish to examine the accuracy of the detection results for the same image by feeding the system rotated versions of the original image and letting it detect and return coordinates for any detections.
For example:
http://i1116.photobucket.com/albums/k572/Ruihong_Zhou/myJQA-1.jpg
Let's say the coordinates of the yellow point (in the image above) are found, but they are with respect to the rotated frame of reference. How do I transform/rotate these coordinates to find out where they actually lie in the original image, with respect to the original frame of reference?
Someone pointed out that I should use an affine transformation, but I'm not sure how to go about it; honestly, this is the first time I have heard of affine transformations and I'm still trying to brute-force my way through learning about them.
Further research indicates that I need both the original set of coordinates in the original image and the same set of coordinates in the rotated image to come up with a transformation matrix, but I only have the detected set of coordinates in the rotated image.
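If you know the angle you rotated the test image by (you do, since you generated the rotated images yourself), you don't need to fit a general affine transformation from point correspondences: the mapping back is just the inverse rotation about the image centres. A minimal sketch in Python follows; the arithmetic ports directly to Java. It assumes rotation about the image centre with the canvas enlarged to hold the whole rotated image, and the sign convention may need flipping depending on your library:

    # Map (x, y) detected in the rotated image back to the original frame.
    import numpy as np

    def to_original_coords(x, y, angle_deg, orig_w, orig_h, rot_w, rot_h):
        theta = np.radians(angle_deg)
        c, s = np.cos(theta), np.sin(theta)
        dx, dy = x - rot_w / 2.0, y - rot_h / 2.0   # relative to rotated centre
        # Apply the inverse rotation, then shift to the original centre.
        xo = c * dx + s * dy + orig_w / 2.0
        yo = -s * dx + c * dy + orig_h / 2.0
        return xo, yo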

How to deduce the angle an image was rotated through?

I have an image that was rotated by an unknown angle, and I don't have the original image. How do I determine the angle of rotation with MATLAB commands?
I need to rotate the image back by this angle to recover the original image.
As @High Performance Mark mentions in his comment, it is difficult to give an answer when it is unclear how you can recognize that the image is rotated, or what would make you decide that the rotation has been properly corrected.
In other words, you will first have to find a way to determine the rotation angle by analyzing the image with respect to specific features that tell you about a potential rotation. For example, if your image contains a face, you'd do face detection (for which there is plenty of code on the File Exchange) and then rotate so that the eyes are up and the mouth is down. If your image contains lines that should be vertical and/or horizontal in an un-rotated image, you can apply a Hough transform to the image and find the most likely angle of rotation using houghpeaks.
Finally, to rotate your image, you can use imrotate.
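A rough sketch of that Hough-based approach, here in Python with scikit-image (the MATLAB equivalents being edge, hough, houghpeaks, and imrotate); the angle step, the peak selection, and the sign of the final correction are all assumptions you would tune for your images:

    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_line, hough_line_peaks, rotate

    def estimate_rotation_deg(image_gray):
        edges = canny(image_gray)
        angles = np.deg2rad(np.arange(-90.0, 90.0, 0.25))
        hspace, thetas, dists = hough_line(edges, theta=angles)
        # Lines that were vertical before rotation should cluster at one angle.
        _, peak_thetas, _ = hough_line_peaks(hspace, thetas, dists)
        return np.rad2deg(np.median(peak_thetas))

    # deskewed = rotate(image_gray, -estimate_rotation_deg(image_gray))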
Without examples or a more detailed description, it's hard to give good advice. But generally, this can be done for some types of images.
For example, suppose the image shows buildings, poles, furniture or something that should have vertical edges. Run an edge detector, then take a Fourier transform. There should be peaks, or some visible pattern in the power spectrum, along the Y axis for an unrotated image. The power spectrum rotates the same way as the image. If you can devise an algorithm to find the spectral features that indicate vertical edges, you can measure their angle with respect to the origin (zero frequency); that is the angle of image rotation.
But you will have to distinguish that particular feature from all the other image features that show up in the power spectrum. Have fun with that - this is the kind of detail that will take most of your creativity and time.
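For what it's worth, one hedged way to sketch this spectral approach in Python is to threshold the brightest points of the power spectrum and take their principal axis as the dominant orientation. The percentile and the lack of pre-filtering here are pure assumptions that would need tuning on real images:

    import numpy as np

    def spectrum_angle_deg(edge_map):
        power = np.abs(np.fft.fftshift(np.fft.fft2(edge_map))) ** 2
        h, w = power.shape
        ys, xs = np.nonzero(power > np.percentile(power, 99.9))
        # Coordinates relative to the zero-frequency origin at the centre.
        pts = np.column_stack([xs - w / 2.0, ys - h / 2.0])
        # Principal axis of the bright spectrum points, via SVD.
        _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
        vx, vy = vt[0]
        return np.degrees(np.arctan2(vy, vx))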