Decomposing rotation matrix (x,y',z'') - Cartesian angles
I'm currently working with rotation matrices and I have the following problem:
Given three coordinate systems (O0,x0,y0,z0; O1,x1,y1,z1; O2,x2,y2,z2) which initially coincide, we first rotate frame #1 with respect to frame #0, then frame #2 with respect to frame #1.
The order of the rotations is R = Rx_alpha * Ry_beta * Rz_gamma, so first about x, then y', then z''; these are also known as the Cartesian angles.
If R1 stands for the 1st and R2 for the 2nd rotation, we are looking for the angles of the 2nd frame with respect to the initial frame (#0) after both rotations. This can be done by decomposing the rotation matrix R (where R = R1*R2). There is plenty of literature on how this can be done with Euler and RPY angles, but I can't find anything on how to solve the problem for Cartesian angles.
I have a MATLAB function which works only for simple rotations. If all the angles have values different from 0 (example below), the result becomes really unstable.
Orientation of the 1st frame with respect to frame #0:
alpha1 = 30*pi/180;
beta1 = 10*pi/180;
gamma1 = 0*pi/180;
Orientation of the 2nd frame with respect to frame #1:
alpha2 = 10*pi/180;
beta2 = 10*pi/180;
gamma2 = 0*pi/180;
The matlab function I was using for solving the problem:
function [q] = cartesian_angles(R)
beta = asin(R(1,3));
% Catching the numerical singularity
if abs(abs(beta)-pi/2) > eps
    % singularity of acos
    gamma1 = acos(R(1,1) / cos(beta));
    gamma2 = asin(-R(1,2) / cos(beta));
    if gamma2 < 0
        gamma = 2*pi - gamma1;
    else
        gamma = gamma1;
    end
    alpha1 = acos(R(3,3) / cos(beta));
    alpha2 = asin(-R(2,3) / cos(beta));
    if alpha2 < 0
        alpha = 2*pi - alpha1;
    else
        alpha = alpha1;
    end
else
    fprintf('beta=pi/2 \n')
    gamma = 0;
    alpha = 0;
    beta = 0;
end
alpha = alpha*180/pi;
beta = beta*180/pi;
gamma = gamma*180/pi;
q = [alpha; beta; gamma];
end
Thank you for any help! If you have some questions don't hesitate to ask!
Marci

First, I'm going to assume you are passing into your function a well-conditioned, right-handed rotation matrix. I'm going to use the same rotation sequence as you listed above, X Y' Z''.
If you know the symbolic construction of the rotation matrix you are trying to extract angles from, the math is pretty straightforward. Below is an example of MATLAB code to determine the construction of the rotation matrix for the order X-Y'-Z'':
a = sym('a');%x
b = sym('b');%y
g = sym('g');%z
Rx = [1 0 0;0 cos(a) -sin(a);0 sin(a) cos(a)];
Ry = [cos(b) 0 sin(b);0 1 0;-sin(b) 0 cos(b)];
Rz = [cos(g) -sin(g) 0;sin(g) cos(g) 0;0 0 1];
R = Rz*Ry*Rx
The output looks like this:
R =
[ cos(b)*cos(g), cos(g)*sin(a)*sin(b) - cos(a)*sin(g), sin(a)*sin(g) + cos(a)*cos(g)*sin(b)]
[ cos(b)*sin(g), cos(a)*cos(g) + sin(a)*sin(b)*sin(g), cos(a)*sin(b)*sin(g) - cos(g)*sin(a)]
[ -sin(b), cos(b)*sin(a), cos(a)*cos(b)]
Let's go over the math to extract the angles from this matrix. This would be a good time to become comfortable with the atan2() function.
First solve for the beta angle (by the way, alpha is the rotation about the X axis, beta is the rotation about Y' axis, and gamma is the angle about the Z'' axis):
beta = atan2(-1*R(3,1),sqrt(R(1,1)^2+R(2,1)^2))
Now that we have solved for the beta angle we can solve more simply for the other two angles:
alpha = atan2(R(3,2)/cos(beta),R(3,3)/cos(beta))
gamma = atan2(R(2,1)/cos(beta),R(1,1)/cos(beta))
The above method is a pretty robust way of getting the Euler angles out of your rotation matrix. The atan2 function really makes it much simpler.
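Putting those three lines together, here is a minimal MATLAB sketch of the whole extraction (the function name, the degree conversion and the comments are my own additions, and the gimbal-lock case cos(beta) = 0 is not handled here):
function q = decompose_ZYX(R)
% Sketch: extract alpha (about X), beta (about Y'), gamma (about Z'')
% from a rotation matrix built as R = Rz(gamma)*Ry(beta)*Rx(alpha).
beta  = atan2(-R(3,1), sqrt(R(1,1)^2 + R(2,1)^2));
alpha = atan2(R(3,2)/cos(beta), R(3,3)/cos(beta));
gamma = atan2(R(2,1)/cos(beta), R(1,1)/cos(beta));
q = [alpha; beta; gamma]*180/pi;   % angles returned in degrees
end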
Finally I will answer how to solve for the rotation angles after a series of rotations. First consider the following notation. A vector or rotation matrix will be notated in the following way (written in plain text here as, e.g., U_R_F1, with the parent frame on the left of R and the child frame on the right):
U_R_F1
Here "U" represents the universal frame, or global coordinate system. "Fn" represents the nth local coordinate system that is different from U. R means rotation matrix (this notation could also be used for homogeneous transformations). The left-side superscript will always represent the parent frame of reference of the rotation matrix or vector. The left-side subscript indicates the child frame of reference. For example, if I have a vector in F1 and I want to know what it is equivalently in the universal frame of reference I would perform the following operation:
U_v = U_R_F1 * F1_v
To get the vector resolved in the universal frame I simply multiplied it by the rotation matrix that transforms things from F1 to U. Notice how the subscripts are "cancelled" out by the superscript of the next item in the equation. This is a clever notation to help keep someone from getting things mixed up. If you recall, a special property of well-conditioned rotation matrices is that the inverse matrix is the transpose of the matrix, which will also be the inverse transformation, like this:
(U_R_F1)^T = (U_R_F1)^-1 = F1_R_U
Now that the notation details are out of the way, we can start to consider solving for complicated series of rotations. Let's say I have "n" coordinate frames (another way of saying "n" distinct rotations). To figure out a vector in the "nth" frame in the universal frame I would do the following:
U_v = U_R_F1 * F1_R_F2 * ... * F(n-1)_R_Fn * Fn_v
To determine the Cardan/Euler angles that result from "n" rotations, you already know how to decompose the matrix to get the correct angles (also known as inverse kinematics in some fields), you simply need the correct matrix. In this example I am interested in the rotation matrix that takes things in the "nth" coordinate frame and resolves them into the universal frame U:
U_R_Fn = U_R_F1 * F1_R_F2 * ... * F(n-1)_R_Fn
There it is, I combined all the rotations into the one of interest simply by multiplying in the correct order. This example was easy. More complicated cases come when someone wants to find the reference frame of one rigid body resolved in the frame of another and the only thing the two rigid bodies have in common is their measurement in a universal frame.
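As a concrete illustration of that chaining, here is a hedged MATLAB sketch using the angles from the question; rot is a hypothetical helper that builds the numeric Rz*Ry*Rx matrix, and decompose_ZYX is the sketch from above:
% Hypothetical helper: numeric X-Y'-Z'' rotation (R = Rz*Ry*Rx), angles in radians.
rot = @(a,b,g) [cos(g) -sin(g) 0; sin(g) cos(g) 0; 0 0 1] * ...
               [cos(b) 0 sin(b); 0 1 0; -sin(b) 0 cos(b)] * ...
               [1 0 0; 0 cos(a) -sin(a); 0 sin(a) cos(a)];
U_R_F1  = rot(30*pi/180, 10*pi/180, 0);   % frame 1 with respect to the universal frame
F1_R_F2 = rot(10*pi/180, 10*pi/180, 0);   % frame 2 with respect to frame 1
U_R_F2  = U_R_F1 * F1_R_F2;               % frame 2 resolved in the universal frame
q = decompose_ZYX(U_R_F2)                 % angles of frame 2 w.r.t. the universal frame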
I want to also note that this notation and method can also be used with homogeneous transformations, but with some key differences. The inverse of a rotation matrix is its transpose; this is not true for homogeneous transformations.

Thank you for your answer willpower2727, it was really helpful!
But I would like to mention that the code you have shown is useful for decomposing rotation matrices which are built in the following way:
R = Rz*Ry*Rx
What I'm looking for:
R = Rx*Ry*Rz
Which results in the following rotation matrix:
R =
[ cos(b)*cos(g), -cos(b)*sin(g), sin(b)]
[ cos(a)*sin(g) + cos(g)*sin(a)*sin(b), cos(a)*cos(g) - sin(a)*sin(b)*sin(g), -cos(b)*sin(a)]
[ sin(a)*sin(g) - cos(a)*cos(g)*sin(b), cos(g)*sin(a) + cos(a)*sin(b)*sin(g), cos(a)*cos(b)]
However, it's not a problem: following your method for calculating the angles alpha, beta and gamma, it was easy to modify the code so that it decomposes the matrix shown above.
The angles:
beta = atan2( R(1,3), sqrt(R(1,1)^2+(-R(1,2))^2) )
alpha = atan2( -(R(2,3)/cos(beta)),R(3,3)/cos(beta) )
gamma = atan2( -(R(1,2)/cos(beta)),R(1,1)/cos(beta) )
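For completeness, here is a hedged MATLAB sketch of that modified decomposition (the function name and comments are mine; the gimbal-lock case cos(beta) = 0 is again left unhandled):
function q = decompose_XYZ(R)
% Sketch: extract alpha, beta, gamma from a matrix built as
% R = Rx(alpha)*Ry(beta)*Rz(gamma).
beta  = atan2(R(1,3), sqrt(R(1,1)^2 + R(1,2)^2));
alpha = atan2(-R(2,3)/cos(beta), R(3,3)/cos(beta));
gamma = atan2(-R(1,2)/cos(beta), R(1,1)/cos(beta));
q = [alpha; beta; gamma]*180/pi;   % angles in degrees
end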
One thing is still not clear though. The method is perfectly useful, but only if I calculate the angles after one rotation. As soon as several rotations are linked after each other, the results are wrong. However, I guess it's still solvable in the following way: let's say we have two rotations linked after each other (R1 and R2), and after decomposing the single matrices, q1 holds the angles of R1 and q2 those of R2. The total angles of rotation of the matrix R = R1*R2 can then be calculated by summing up the angles: q = q1 + q2.
Is there no way to calculate the angles of the total rotation by decomposing the matrix R = R1*R2 directly, rather than by summing the partial angles?
UPDATE:
Consider the following basic example. There are two rotations linked after each other:
a1 = 10*pi/180
b1 = 20*pi/180
g1 = 40*pi/180
R1 = Rx_a1*Ry_b1*Rz_g1
a2 = 20*pi/180
b2 = 30*pi/180
g2 = 30*pi/180
R2 = Rx_a2*Ry_b2*Rz_g2
Decomposing the individual matrices R1 and R2 results in the right angles. The problem occurs when I link the rotations after each other and try to determine the angles of the last frame in the inertial frame. Theoretically this could be done by decomposing the product of all rotation matrices of the chain of transformations.
R = R1*R2
Decomposing this matrix gives the following incorrect result, shown in degrees:
a = 0.5645
b = 54.8024
g = 61.4240
Marci

Related

How to convert Matrix4x4 imported from .X to Unity? [duplicate]

I would like to change a 4x4 matrix from a right handed system where:
x is left and right, y is front and back and z is up and down
to a left-handed system where:
x is left and right, z is front and back and y is up and down.
For a vector it's easy, just swap the y and z values, but how do you do it for a matrix?
Let me try to explain it a little better.
I need to export a model from Blender, in which the z axis faces up, into OpenGL, where the y axis faces up.
For every coordinate (x, y, z) it's simple; just swap the y and z values: (x, z, y).
Because I have swapped all the y and z values, any matrix that I use also needs to be flipped so that it has the same effect.
After a lot of searching I've eventually found a solution at gamedev:
If your matrix looks like this:
{ rx, ry, rz, 0 }
{ ux, uy, uz, 0 }
{ lx, ly, lz, 0 }
{ px, py, pz, 1 }
To change it from left to right or right to left, flip it like this:
{ rx, rz, ry, 0 }
{ lx, lz, ly, 0 }
{ ux, uz, uy, 0 }
{ px, pz, py, 1 }
I think I understand your problem because I am currently facing a similar one.
You start with a matrix which transforms a vector in a space where Z is up (e.g. a world matrix).
Now you have a space where Y is up and you want to know what to do with your old matrix.
Try this:
There is a given world matrix
Matrix world = ... //space where Z is up
This Matrix changes the Y and Z components of a Vector
Matrix mToggle_YZ = new Matrix(
{1, 0, 0, 0}
{0, 0, 1, 0}
{0, 1, 0, 0}
{0, 0, 0, 1})
You are searching for this:
//same world transformation in a space where Y is up
Matrix world2 = mToggle_YZ * world * mToggle_YZ;
The result is the same matrix cmann posted below. But I think this is more understandable as it combines the following calculation:
1) Switch Y and Z
2) Do the old transformation
3) Switch back Z and Y
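Here is a minimal numeric sketch of those three steps in MATLAB (MATLAB is used only for illustration, not the XNA API; the example world matrix and its row-vector layout are made up to match the question's matrix layout):
mToggle_YZ = [1 0 0 0;
              0 0 1 0;
              0 1 0 0;
              0 0 0 1];       % swaps the Y and Z components; it is its own inverse

theta = pi/6;                 % example: rotation about the old up axis (Z) plus a translation
world = [cos(theta) -sin(theta) 0 0;
         sin(theta)  cos(theta) 0 0;
         0           0          1 0;
         2           3          4 1];

world2 = mToggle_YZ * world * mToggle_YZ;   % the same transform expressed in the Y-up space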
It is often the case that you want to change a matrix from one set of forward/right/up conventions to another set of forward/right/up conventions. For example, ROS uses z-up, and Unreal uses y-up. The process works whether or not you need to do a handedness-flip.
Note that the phrase "switch from right-handed to left-handed" is ambiguous. There are many left-handed forward/right/up conventions. For example: forward=z, right=x, up=y; and forward=x, right=y, up=z. You should really think of it as "how do I convert ROS' notion of forward/right/up to Unreal's notion of forward/right/up".
So, it's a straightforward job to create a matrix that converts between conventions. Let's assume we've done that and we now have
mat4x4 unrealFromRos = /* construct this by hand */;
mat4x4 rosFromUnreal = unrealFromRos.inverse();
Let's say the OP has a matrix that comes from ROS, and she wants to use it in Unreal. Her original matrix takes a ROS-style vector, does some stuff to it, and emits a ROS-style vector. She needs a matrix that takes an Unreal-style vector, does the same stuff, and emits an Unreal-style vector. That looks like this:
mat4x4 turnLeft10Degrees_ROS = ...;
mat4x4 turnLeft10Degrees_Unreal = unrealFromRos * turnLeft10Degrees_ROS * rosFromUnreal;
It should be pretty clear why this works. You take an Unreal vector, convert it to ROS-style, and now you can use the ROS-style matrix on it. That gives you a ROS vector, which you convert back to Unreal style.
Gerrit's answer is not quite fully general, because in the general case, rosFromUnreal != unrealFromRos. It's true if you're just inverting a single axis, but not true if you're doing something like converting X→Y, Y→Z, Z→X. I've found that it's less error-prone to always use a matrix and its inverse to do these convention switches, rather than to try to write special functions that flip just the right members.
This kind of matrix operation M * X * inverse(M) comes up a lot. You can think of it as a "change of basis" operation; to learn more about it, see https://en.wikipedia.org/wiki/Matrix_similarity.
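Here is a hedged MATLAB sketch of that recipe; the axis conventions below are made up for illustration and are not the real ROS or Unreal definitions:
unrealFromRos = [0 0 1 0;
                 1 0 0 0;
                 0 1 0 0;
                 0 0 0 1];    % example cyclic mapping X->Y, Y->Z, Z->X; NOT its own inverse
rosFromUnreal = inv(unrealFromRos);

turn10_ros    = [cosd(10) -sind(10) 0 0;
                 sind(10)  cosd(10) 0 0;
                 0         0        1 0;
                 0         0        0 1];   % some transform expressed in the "ROS-style" basis
turn10_unreal = unrealFromRos * turn10_ros * rosFromUnreal;   % same transform, "Unreal-style"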
I have been working on converting the Unity SteamVR_Utils.RigidTransform to ROS geometry_msgs/Pose and needed to convert Unity left handed coordinate system to the ROS right handed coordinate system.
This was the code I ended up writing to convert coordinate systems.
var device = SteamVR_Controller.Input(index);
// Modify the unity controller to be in the same coordinate system as ROS.
Vector3 ros_position = new Vector3(
device.transform.pos.z,
-1 * device.transform.pos.x,
device.transform.pos.y);
Quaternion ros_orientation = new Quaternion(
-1 * device.transform.rot.z,
device.transform.rot.x,
-1 * device.transform.rot.y,
device.transform.rot.w);
Originally I tried using the matrix example from #bleater, but I couldn't seem to get it to work. Would love to know if I made a mistake somewhere.
HmdMatrix44_t m = device.transform.ToHmdMatrix44();
HmdMatrix44_t m2 = new HmdMatrix44_t();
m2.m = new float[16];
// left -> right
m2.m[0] = m.m[0]; m2.m[1] = m.m[2]; m2.m[2] = m.m[1]; m2.m[3] = m.m[3];
m2.m[4] = m.m[8]; m2.m[5] = m.m[10]; m2.m[6] = m.m[9]; m2.m[7] = m.m[7];
m2.m[8] = m.m[4]; m2.m[9] = m.m[6]; m2.m[10] = m.m[5]; m2.m[11] = m.m[11];
m2.m[12] = m.m[12]; m2.m[13] = m.m[14]; m2.m[14] = m.m[13]; m2.m[15] = m.m[15];
SteamVR_Utils.RigidTransform rt = new SteamVR_Utils.RigidTransform(m2);
Vector3 ros_position = new Vector3(
rt.pos.x,
rt.pos.y,
rt.pos.z);
Quaternion ros_orientation = new Quaternion(
rt.rot.x,
rt.rot.y,
rt.rot.z,
rt.rot.w);
After 12 years, the question is still misleading because of the lack of a description of the axis directions.
What the question asked for should probably be how to convert from one forward/right/up convention to the other.
The answer by #cmann is correct for that question and #Gerrit explains the reason. I will explain how to get that conversion of the transform matrix graphically.
We should be clear that orthogonal matrices include both rotation matrices and point reflections (only a point reflection will change the coordinate system between left-handed and right-handed). Thus they can be expressed as 4x4 matrices and obey the transform matrix multiplication order: "The matrix of a composite transformation is obtained by multiplying the matrices of individual transformations."
The conversion itself contains both a rotation matrix and a point reflection, but we can get the composite transformation graphically.
According to the image above, after the transformation a point expressed in the RhC (right-handed coordinates) will be expressed in the LhC (left-handed coordinates) as below, where the conversion is a transform bringing points expressed in the RhC to points expressed in the LhC.
Now we are able to convert one convention to the other according to the transform matrix multiplication order, as in the image below.
The result is the same as #cmann's.
It depends if you transform your points by multiplying the matrix from the left or from the right.
If you multiply from the left (e.g. A*x = x', where A is a matrix and x' the transformed point), you just need to swap the second and third column.
If you multiply from the right (e.g. x*A = x'), you need to swap the second and third row.
If your points are column vectors then you're in the first scenario.
Change the sin factor to -sin for swapping coordinate spaces between right- and left-handed.
Since this seems like a homework question, I'll give you a start at a hint: What can you do to make the determinant of the matrix negative?
Further (better hint): Since you already know how to do that transformation with individual vectors, don't you think you'd be able to do it with the basis vectors that span the transformation the matrix represents? (Remember that a matrix can be viewed as a linear transformation performed on a tuple of unit vectors.)

Affine transformation matlab [duplicate]

I have two images, one of which is the Original image and the second the Transformed image.
I have to find out by how many degrees the Transformed image was rotated, using a 3x3 transformation matrix. Plus, I need to find how far it was translated from the origin.
Both images are grayscale and held in matrix variables. Their sizes are the same: [350 500].
I have found a few lecture notes like this.
Lecture notes say that I should use the following matrix formula for rotation:
[x']   [cos(theta) -sin(theta) 0]   [x]
[y'] = [sin(theta)  cos(theta) 0] * [y]
[1 ]   [0           0          1]   [1]
For the translation matrix the formula is given:
[x']   [1 0 tx]   [x]
[y'] = [0 1 ty] * [y]
[1 ]   [0 0 1 ]   [1]
Everything is good. But there are two problems:
I could not imagine how to implement the formulas using MATLAB.
The formulas are shaped to find the x', y' values, but I have already got the x, x', y, y' values. I need to find the rotation angle (theta) and tx and ty.
I want to know the equivalence of x, x', y, y' in the matrix.
I have got the following code:
rotationMatrix = [ cos(theta) sin(theta) 0 ; ...
-sin(theta) cos(theta) 0 ; ...
0 0 1];
translationMatrix = [ 1 0 tx; ...
0 1 ty; ...
0 0 1];
But as you can see, tx, ty, theta variables are not defined before used. How can I calculate theta, tx and ty?
PS: It is forbidden to use Image Processing Toolbox functions.
This is essentially a homography recovery problem. What you are doing is given co-ordinates in one image and the corresponding co-ordinates in the other image, you are trying to recover the combined translation and rotation matrix that was used to warp the points from the one image to the other.
You can essentially combine the rotation and translation into a single matrix by multiplying the two matrices together. Multiplying simply composites the two operations. You would thus get:
H = [cos(theta) -sin(theta) tx]
[sin(theta) cos(theta) ty]
[ 0 0 1]
The idea behind this is to find the parameters by minimizing the error through least squares between each pair of points.
Basically, what you want to find is the following relationship:
xi_after = H*xi_before
H is the combined rotation and translation matrix required to map the co-ordinates from the one image to the other. H is also a 3 x 3 matrix, and knowing that the lower right entry (row 3, column 3) is 1, it makes things easier. Also, assuming that your points are in the augmented co-ordinate system, we essentially want to find this relationship for each pair of co-ordinates from the first image (x_i, y_i) to the other (x_i', y_i'):
[p_i*x_i'] [h11 h12 h13] [x_i]
[p_i*y_i'] = [h21 h22 h23] * [y_i]
[ p_i ] [h31 h32 1 ] [ 1 ]
The scale of p_i is to account for homography scaling and vanishing points. Let's perform a matrix-vector multiplication of this equation. We can ignore the 3rd element as it isn't useful to us (for now):
p_i*x_i' = h11*x_i + h12*y_i + h13
p_i*y_i' = h21*x_i + h22*y_i + h23
Now let's take a look at the 3rd element. We know that p_i = h31*x_i + h32*y_i + 1. As such, substituting p_i into each of the equations, and rearranging to solve for x_i' and y_i', we thus get:
x_i' = h11*x_i + h12*y_i + h13 - h31*x_i*x_i' - h32*y_i*x_i'
y_i' = h21*x_i + h22*y_i + h23 - h31*x_i*y_i' - h32*y_i*y_i'
What you have here now are two equations for each unique pair of points. What we can do now is build an over-determined system of equations. Take each pair and build two equations out of them. You will then put it into matrix form, i.e.:
Ah = b
A would be a matrix of coefficients built from each set of equations using the co-ordinates from the first image, b would contain each pair of points for the second image, and h would be the parameters you are solving for. Ultimately, you are solving this linear system of equations reformulated in matrix form.
You solve for the vector h, which can be done through least squares. In MATLAB, you can do this via:
h = A \ b;
A sidenote for you: If the movement between images is truly just a rotation and translation, then h31 and h32 will both be zero after we solve for the parameters. However, I always like to be thorough and so I will solve for h31 and h32 anyway.
NB: This method will only work if you have at least 4 unique pairs of points. Because there are 8 parameters to solve for, and there are 2 equations per point, A must have at least a rank of 8 in order for the system to be consistent (if you want to throw in some linear algebra terminology in the loop). You will not be able to solve this problem if you have less than 4 points.
If you want some MATLAB code, let's assume that your points are stored in sourcePoints and targetPoints. sourcePoints are from the first image and targetPoints are for the second image. Obviously, there should be the same number of points between both images. It is assumed that both sourcePoints and targetPoints are stored as M x 2 matrices. The first columns contain your x co-ordinates while the second columns contain your y co-ordinates.
numPoints = size(sourcePoints, 1);
%// Cast data to double to be sure
sourcePoints = double(sourcePoints);
targetPoints = double(targetPoints);
%//Extract relevant data
xSource = sourcePoints(:,1);
ySource = sourcePoints(:,2);
xTarget = targetPoints(:,1);
yTarget = targetPoints(:,2);
%//Create helper vectors
vec0 = zeros(numPoints, 1);
vec1 = ones(numPoints, 1);
xSourcexTarget = -xSource.*xTarget;
ySourcexTarget = -ySource.*xTarget;
xSourceyTarget = -xSource.*yTarget;
ySourceyTarget = -ySource.*yTarget;
%//Build matrix
A = [xSource ySource vec1 vec0 vec0 vec0 xSourcexTarget ySourcexTarget; ...
vec0 vec0 vec0 xSource ySource vec1 xSourceyTarget ySourceyTarget];
%//Build RHS vector
b = [xTarget; yTarget];
%//Solve homography by least squares
h = A \ b;
%// Reshape to a 3 x 3 matrix (optional)
%// Must transpose as reshape is performed
%// in column major format
h(9) = 1; %// Add in that h33 is 1 before we reshape
hmatrix = reshape(h, 3, 3)';
Once you are finished, you have a combined rotation and translation matrix. If you want the x and y translations, simply pick off column 3, rows 1 and 2 in hmatrix. However, we can also work with the vector of h itself, and so h13 would be element 3, and h23 would be element number 6. If you want the angle of rotation, simply take the appropriate inverse trigonometric function to rows 1, 2 and columns 1, 2. For the h vector, this would be elements 1, 2, 4 and 5. There will be a bit of inconsistency depending on which elements you choose as this was solved by least squares. One way to get a good overall angle would perhaps be to find the angles of all 4 elements then do some sort of average. Either way, this is a good starting point.
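As a minimal sketch of that extraction (assuming hmatrix from the code above and a transform that really is just rotation plus translation, so h31 and h32 come out near zero):
tx = hmatrix(1,3);                           % x translation
ty = hmatrix(2,3);                           % y translation
theta1 = atan2( hmatrix(2,1), hmatrix(1,1)); % angle from the first column
theta2 = atan2(-hmatrix(1,2), hmatrix(2,2)); % angle from the second column
theta  = (theta1 + theta2) / 2;              % average the two estimates; radians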
References
I learned about homography a while ago through Leow Wee Kheng's Computer Vision course. What I have told you is based on his slides: http://www.comp.nus.edu.sg/~cs4243/lecture/camera.pdf. Take a look at slides 30-32 if you want to know where I pulled this material from. However, the MATLAB code I wrote myself :)

Change from one cartesian 3D co-ordinate system to another by translation and rotation

There are two reasons for me to ask this question:
I want to know if my understanding on this issue is correct.
To clarify a doubt I have.
I want to change the co-ordinate system of a set of points (Old cartesian coordinates system to New cartesian co-ordinate system). This transformation will involve Translation as well as Rotation. This is what I plan to do:
With respect to this image I have a set of points which are in the XYZ coordinate system (Red). I want to change it with respect to the axes UVW (Purple). In order to do so, I have understood that there are two steps involved: Translation and Rotation.
When I translate, I only change the origin. (Say I want the UVW origin at (5,6,7). Then, for all points in my data, 5 will be subtracted from the x co-ordinates, 6 from y and 7 from z. By doing so, I get a set of Translated data.)
Now I have to apply a rotation transform (on the Translated data). The Rotation matrix is shown in the image. The values Ux, Uy and Uz are the co-ordinates of a point on the U axis which has unit distance from the origin. Similarly, the values Vx, Vy and Vz are the co-ordinates of a point on the V axis which has unit distance from the origin. (I want to know if I am right here.) Wx, Wy, Wz is calculated as ((normalized u) X (normalized v)).
(Also, if it serves any purpose, I would like to let you know that I am using MATLAB.)
edit:
I have a set of 42 points in 3D (a 42 x 3 matrix A). I want the first point to be considered as the origin of the UVW system. So the values of the first point will be my translation vector. Correct?
Next, to calculate the Rotation vector: According to my requirement, the 6th row of matrix A has to be the U axis while 37th row has to be V axis. Consequently, vector u will be (1st row minus 6th row) of matrix A. While vector v will be (1st row minus 37th row) of matrix A.
The first row of Rotation Matrix will be vector u/|u| (normalized). Second row will be vector v/|v| (v normalized). The third row will be (u X v) . Am I right here?
Given this information, how can I calculate the values of Wx, Wy and Wz? How can I calculate the 3rd row of the rotation matrix R?
Since you already have U and V, the two basis vectors of the orthonormal UVW system, the W basis vector would be the cross product of U and V. The cross product gives out the vector that is perpendicular to its operands; hence W = U × V. The components of W would fill in the third row of the rotation matrix.
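As a minimal MATLAB sketch of that construction (u and v are made-up 1x3 row vectors along the new U and V axes, expressed in the old XYZ system):
u  = [1 1 0];
v  = [-1 1 0];
un = u / norm(u);               % first row of R
vn = v / norm(v);               % second row of R
w  = cross(un, vn);             % W = U x V, perpendicular to both
R  = [un; vn; w / norm(w)];     % rows of the rotation matrix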
Is my approach correct?
The order of the transforms matters; changing the order would lead to different results. When transforming between systems, usually scaling and rotation are tackled first and translation is dealt with last. The reason for this is that rotation is always with respect to the origin. If the new system isn't on the old one's origin then applying rotation would rotate the new system not around its own origin but around the old system's origin. See the right-side case of figure 3-4 on this page to understand the difference of what would happen if it's not at the origin; imagine the pot as the UVW coordinate system.
Think of both the coordinate systems being super-imposed (laid one atop the other). Now when you rotate UVW system with respect to the origin of XYZ, you will end up with the effect of rotating UVW w.r.t its own origin. Once rightly oriented, you can apply translation to it. However, if you'd already translated, then rotating would lead to translated rotation.
If you're using column-vector convention then TR would be the order i.e. rotation followed by translation. If you're using the row-vector convention then RT would be the order, again the order is rotation followed by translation.
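A quick MATLAB sketch of the two conventions (a 2D homogeneous example with made-up numbers; both lines produce the same transformed point):
R = [cosd(30) -sind(30) 0; sind(30) cosd(30) 0; 0 0 1];   % rotation
T = [1 0 5; 0 1 6; 0 0 1];                                % translation
p_col = [2; 3; 1];
p1 = T * R * p_col;             % column-vector convention: rotate, then translate
p_row = [2 3 1];
p2 = p_row * R' * T';           % row-vector convention: same rotate-then-translate order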
You can apply the cross product of the Vectors OU and OV.
I think it's easier to perform it in steps. 1) Translation. 2) Rotation about x-axis. 3) Rotation about y-axis. 4) Rotation about z-axis.
% Assuming this is your coordinates before any operation
x0 = 5; y0 = 5; z0 = 5;
% This is the new origin
u = 5; v = 6; w = 7;
% If you wish to rotate pi/4 about the x-axis, pi/3 about the y-axis, pi/2 about the z-axis, the three rotation matrices will be:
rx = [1 0 0; 0 cos(pi/4) -sin(pi/4); 0 sin(pi/4) cos(pi/4)];
ry = [cos(pi/3) 0 sin(pi/3); 0 1 0; -sin(pi/3) 0 cos(pi/3)];
rz = [cos(pi/2) -sin(pi/2) 0; sin(pi/2) cos(pi/2) 0; 0 0 1];
% First perform translation
xT = x0-u; yT = y0-v; zT = z0-w;
% Then perform rotation about x
rotated_x = mtimes( rx,[xT;yT;zT]);
% Then perform rotation about y
rotated_xy = mtimes( ry, rotated_x);
% Then perform rotation about z
rotated_xyz = mtimes( rz, rotated_xy);

Recovering a plane from 4 image points using a calibrated camera

I have a camera and its K matrix (calibration matrix). I also have an image of a plane; I know the real-world coordinates of its 4 corners and their corresponding pixels. I know how to compute the H matrix if z=0 (H is the homography matrix between the image and the real plane).
Now I am trying to get the real 3D points of the plane using the rotation matrix and the translation vector.
I am following this paper: Calibrating an Overhead Video Camera by Raul Rojas, section 3 - 3.3.
My code is:
ImagePointsScreen=[16,8,1;505,55,1;505,248,1;44,301,1;];
screenImage=imread( 'screen.jpg');
RealPointsMirror=[0,0,1;9,0,1;9,6,1;0,6,1]; %Mirror
RealPointsScreen=[0,0,1;47.5,0,1;47.5,20,1;0,20,1];%Screen
imagesc(screenImage);
hold on
for i=1:4
drawBubble(ImagePointsScreen(i,1),ImagePointsScreen(i,2),1,'g',int2str(i),'r')
end
Points3DScreen=Get3DpointSurface(RealPointsScreen,ImagePointsScreen,'Screen');
figure
hold on
plot3(Points3DScreen(:,1),Points3DScreen(:,2),Points3DScreen(:,3));
for i=1:4
drawBubble(Points3DScreen(i,1),Points3DScreen(i,2),1,'g',int2str(i),'r')
end
function [ Points3D ] = Get3DpointSurface( RealPoints,ImagePoints,name)
M=zeros(8,9);
for i=1:4
M((i*2)-1,1:3)=-RealPoints(i,:);
M((i*2)-1,7:9)=RealPoints(i,:)*ImagePoints(i,1);
M(i*2,4:6)=-RealPoints(i,:);
M(i*2,7:9)=RealPoints(i,:)*ImagePoints(i,2);
end
[U S V] = svd(M);
X = V(:,end);
H(1,:)=X(1:3,1)';
H(2,:)=X(4:6,1)';
H(3,:)=X(7:9,1)';
K=[680.561906875074,0,360.536967117290;0,682.250270165388,249.568615725655;0,0,1;];
newRO=pinv(K)*H;
h1=newRO(1:3,1);
h2=newRO(1:3,2);
scaleFactor=(norm(h1)+norm(h2))/2;
newRO=newRO./scaleFactor;
r1=newRO(1:3,1);
r2=newRO(1:3,2);
r3=cross(r1,r2);
r3=r3/norm(r3);
R=[r1,r2,r3];
RInv=pinv(R);
O=-RInv*newRO(1:3,3);
M=K*[R,-R*O];
for i=1:4
res=pinv(M)* [ImagePoints(i,1),ImagePoints(i,2),1]';
res=res';
res=res*(1/res(1,4));
Points3D(i,:)=res';
end
Points3D(i+1,:)=Points3D(1,:); % just add the first point to the end of the array to draw the square
end
My result is:
Now I have two problems:
1. Point 1 is at (0,0,0), which is not its real location.
2. The points are upside down.
What am I doing wrong?
A homography is normally the transform of a plane between two positions/rotations.
The position of a plane in camera coordinates is normally called the pose or extrinsic parameters.
OpenCV has a solvePnP() function which uses RANSAC to estimate the position of a known plane.
ps. Sorry, I don't know the MATLAB equivalent, but Bouguet has a MATLAB version of the OpenCV 3D functions on his site.
I found the answer in the paper Calibrating an Overhead Video Camera by Raul Rojas, in section 3 - 3.3.
For a start: H = K^-1 * H
Given four points in the image and their known coordinates in the world, the matrix H can be recovered, up to a scaling factor x. We know that the first two columns of the rotation matrix R must be the first two columns of the transformation matrix. Let us denote by h1, h2, and h3 the three columns of the matrix H. Due to the scaling factor we then have that
xr1 = h1
and
xr2 = h2
Since |r1| = 1, then x= |h1|/|r1| = |h1| and x = |h2|/|r2| = |h2|. We can thus
compute the factor and eliminate it from the recovered matrix H. We just set
H'= H/x
In this way we recover the first two columns of the rotation matrix R.
The third column of R can be found remembering that any column in a rotation
matrix is the cross product of the other two columns (times the appropriate
plus or minus sign). In particular
r3 = r1 × r2
Therefore, we can recover from H the rotation matrix R. We can also recover
the translation vector (the position of the camera in field coordinates). Just
remember that
h'3 = −R·t
Therefore the position vector of the camera pin-hole t is given by
t = −R^-1 · h'3
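Here is a compact MATLAB sketch of that recipe (K is the calibration matrix from the question, rounded; R_true and t_true are made-up values used only to build a test homography):
K = [680.56 0 360.54; 0 682.25 249.57; 0 0 1];
R_true = [cosd(20) -sind(20) 0; sind(20) cosd(20) 0; 0 0 1];   % example rotation
t_true = [1; 2; 10];                                           % example camera position
H = 3 * K * [R_true(:,1), R_true(:,2), -R_true*t_true];        % homography, up to scale

Hn = K \ H;                 % remove the calibration: H' = K^-1 * H
x  = norm(Hn(:,1));         % scale factor, since |r1| = 1
Hn = Hn / x;
r1 = Hn(:,1);
r2 = Hn(:,2);
r3 = cross(r1, r2);         % third column of the rotation matrix
R  = [r1, r2, r3];
t  = -(R \ Hn(:,3));        % recovered camera position; should match t_true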

obtaining the rotation and size of a UIImageView based on its transformation matrices

If I have the original transform matrix of a rectangular UIImageView, and this image is scaled and rotated, and at the end I can read the final transform matrix of this same view, how can I calculate how much the image was scaled and rotated?
I suppose that somehow this matrix contains these two pieces of information. The problem is how to extract them...
any clues?
thanks for any help.
A little bit of matrix algebra and trigonometric identities can help you solve this.
We'll work forward to generate a matrix that scales and rotates, and then use that to figure out how to extract the scale factors and rotations analytically.
A scaling matrix to scale by Sx (in the X axis) and Sy (in the Y axis) looks like this:
⎡Sx 0 ⎤
⎣0 Sy⎦
A matrix to rotate clockwise by R radians looks like this:
⎡cos(R) sin(R)⎤
⎣-sin(R) cos(R)⎦
Using standard matrix multiplication, the combined scaling and rotation matrix will look like this:
⎡Sx.cos(R) Sx.sin(R)⎤
⎣-Sy.sin(R) Sy.cos(R)⎦
Note that linear transformations could also include shearing or other transformations, but I'll assume for this question that only rotation and scaling have occurred (if a shear transform is in the matrix, you will get inconsistent results from following the algebra here; but the same approach can be used to determine an analytical solution).
A CGAffineTransform has four members a, b, c, d, corresponding to the 2-dimensional matrix:
⎡a b⎤
⎣c d⎦
Now we want to extract from this matrix the values of Sx, Sy, and R. We can use a simple trigonometric identity here:
tan(A) = sin(A) / cos(A)
We can use this with the first row of the matrix to conclude that:
tan(R) = Sx.sin(R) / Sx.cos(R) = b / a and therefore R = atan(b / a)
And now we know R, we can extract the scale factors by using the main diagonal:
a = Sx.cos(R) and therefore Sx = a / cos(R)
d = Sy.cos(R) and therefore Sy = d / cos(R)
So you now know Sx, Sy, and R.
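A minimal MATLAB sketch of that extraction (a, b, c, d are made-up values of a scale-plus-rotation transform with no shear):
Sx0 = 2; Sy0 = 3; R0 = pi/6;               % build a test matrix from known scale and rotation
a =  Sx0*cos(R0);  b = Sx0*sin(R0);
c = -Sy0*sin(R0);  d = Sy0*cos(R0);

R  = atan2(b, a);                          % recovered rotation in radians
Sx = a / cos(R);                           % recovered scale along X
Sy = d / cos(R);                           % recovered scale along Y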