I have two 3D point cloud samples of a human face. The blue point cloud denotes the target face and the red point cloud indicates the template. The image below shows that the target and template faces are aligned in different directions (the target face roughly along the x-axis, the template face roughly along the y-axis).
Figure 1:
The region around the nose is displayed in Figure 1.
I want to rotate my target face (blue face) with the nasal tip as the center of rotation (I translated the target to the template prior to Figure 1 so that the tips of the nose, i.e., the centerpt, of both faces are superimposed) so that it grossly aligns with the template face (red face). I rotated the target face with the following MATLAB code:
% PCA for the target face
targetFaceptfmt = pointCloud(targetFace); % Convert to point cloud format
point = [templateFace(3522, 1), templateFace(3522, 2), templateFace(3522, 3)]; % The 3522nd point in templateFace is the nasal tip, used as the center of rotation later on
radius = 20; % 20mm
[NNTarIndex, NNTarDist] = findNeighborsInRadius(targetFaceptfmt, point, radius); % Find all vertices within 20 mm of the nasal tip point on the target face
NNTar = select(targetFaceptfmt, NNTarIndex); % Select the identified points for PCA
[TarVec,TarSCORE,TarVal] = pca(NNTar.Location); % Do PCA for target face using vertices close to the nasal tip
% PCA for the template face
templateFaceptfmt = pointCloud(templateFace); % Convert to point cloud format
[NNTemIndex, NNTemDist] = findNeighborsInRadius(templateFaceptfmt, point, radius); % Find all vertices within 20 mm of the nasal tip point on the template
NNTem = select(templateFaceptfmt, NNTemIndex); % Select the identified points for PCA
[TemVec,TemSCORE,TemVal] = pca(NNTem.Location); % Do PCA for template face using vertices close to the nasal tip
% Rotate target face with nasal tip point as the center of rotation
targetFace_r = R * (targetFace-centerpt)' + centerpt';
targetFace_new = targetFace_r';
where targetFace and templateFace contain coordinates for the unrotated target face and the template face, respectively. targetFace_r contains coordinates for the target face after rotation about the nasal tip, R is the rotation matrix calculated through PCA (see here for the source of the rotation formula), and centerpt is the nasal tip point used as the center of rotation. I then plotted the transposed targetFace_r, i.e., targetFace_new, with normals added to each vertex:
Figure 2:
Before rotation, the normals of the target face and template face generally point in similar directions (Figure 1). After rotation, the target and template faces are both aligned along the y-axis (which is what I want); however, the normals of the target face and template face point in opposite directions. Bearing in mind that no changes were made to the template face, I realized that the normals of the target face calculated after rotation are flipped, but I do not know why. I used the checkFaceOrientation function of the Rvcg package in R to check whether expansion along the normals increases centroid size. It returned TRUE for the template face but FALSE for the target face, which confirms that the vertex normals of the target face are flipped.
Vertex normals were calculated in MATLAB as follows:
TR = triangulation(Faces, Vertices); % Triangulation based on face and vertex information
VN = vertexNormal(TR); % Calculate vertex normals
where Faces contains the face information, i.e., the connectivity list, and Vertices contains the coordinates of the vertices. Vertex normals were calculated separately for the target face before rotation, the target face after rotation, and the template face. I used the same Faces data for the calculation of vertex normals before and after rotating the target face.
The flipped vertex normals resulted in errors in some further analyses. As a result, I had to manually flip the normals so that they point in roughly the same direction as the normals of the template face.
Figure 3:
Figure 3 shows that after manually flipping the normals, the normals of the target and template faces generally point in similar directions.
My question is: why are the normals of the target face flipped after rotation? In what case does rotation of a 3D point cloud result in flipped vertex normals?
Some further information that may be useful: the rotation matrix R I obtained is as follows, for your reference:
0.0473096146726546 0.867593376108813 -0.495018720950670
0.987013081649028 0.0355601323276586 0.156654567895508
-0.153515396665006 0.496001220483328 0.854643675613313
Since trace(R) = 1 + 2*cos(alpha), I calculated alpha through acos((trace(R)-1)/2)*180/pi, which yielded a rotation angle of 91.7904 degrees about the axis through the nasal tip point.
If I'm understanding everything correctly, it looks like your rotation matrix is actually encoding a rotation plus a reflection. If your matrix is approximately:
0.04 0.86 -0.49
0.98 0.03 0.15
-0.15 0.49 0.85
Then the images of the unit vectors pointing along the positive axes are:
x = [ 0.04 0.98 -0.15]
y = [ 0.86 0.03 0.49]
z = [-0.49 0.15 0.85]
However, if you take the cross-product of x and y (cross(x, y)), you get approximately [0.49 -0.15 -0.85], which is the negation of z, which implies that the matrix is encoding both a rotation and a reflection. Naturally, multiplying a mesh's vertices by a reflection matrix will reverse the winding order of its polygons, yielding inverted normals.
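You can confirm this numerically; here is a minimal check using the R you posted:
cross(R(:,1), R(:,2)) % For a proper rotation this equals R(:,3); here it is approximately -R(:,3)
det(R)                % +1 for a pure rotation, -1 for a rotation plus reflection (here it is approximately -1)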
In the slides that you referenced, it states that the PCA method of generating a rotation matrix should only be considering four different combinations of axes in the 3D case, to ensure that the output matrix obeys the right-hand rule. If all combinations of axes were checked, that would allow PCA to consider both rotated and reflected spaces when searching for a best match. If that were the case, and if there is some noise in the data such that the left half of the template is a slightly better match to the right half of the target and vice versa, then the PCA method might generate a reflection matrix like the one you observe. Perhaps you might want to reexamine the logic of how R is generated from the PCA results?
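One possible guard, sketched below under the assumption that R is assembled from TarVec and TemVec (adapt it to however you actually construct R), is to force both PCA bases to be right-handed before building R:
if det(TarVec) < 0
    TarVec(:,3) = -TarVec(:,3); % Flip the third principal axis of the target basis
end
if det(TemVec) < 0
    TemVec(:,3) = -TemVec(:,3); % Flip the third principal axis of the template basis
end
% ... rebuild R from the corrected bases and verify that det(R) is +1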
As alluded to in the comments, the direction of your vertex normals will depend on how you've ordered the triangular facets in your Faces matrix. This will follow a right-hand rule, where your fingers follow the vertex order around the triangle and your thumb indicates the surface normal direction. Here's a simple example to help illustrate:
Vertices = [0 0 0; 0 1 0; 1 1 0; 1 0 0]; % Points clockwise around a unit square in the x-y plane (z = 0)
Faces = [1 2 3; 1 3 4]; % Two triangular facets, clockwise vertex ordering
TR = triangulation(Faces, Vertices);
VN = vertexNormal(TR)
VN =
0 0 -1
0 0 -1
0 0 -1
0 0 -1
In this example, Vertices contains the 4 vertices of a unit square in the x-y plane, ordered clockwise if you're looking down from positive z. Two triangular facets are defined in Faces, and the order of the indices in each row traces along the vertices in a clockwise fashion as well. This results in a surface normal for each face that points in the negative z direction. When the vertex normals are computed, they are pointing in the negative z direction as well.
What happens when we flip the order of one triangle so that its points are counter-clockwise?...
Faces = [1 2 3; 1 4 3]; % Second facet is 1 4 3 instead of 1 3 4
TR = triangulation(Faces, Vertices);
VN = vertexNormal(TR)
VN =
0 0 0
0 0 -1
0 0 0
0 0 1
The surface normal of the second triangle will now point in the positive z direction. The vertices that are only used by one triangle (rows 2 and 4) will have vertex normals that match the surface normals, while the vertices shared by each (rows 1 and 3) will have vertex normals of 0 (the two surface normals cancel).
How will this help you with your problem? Well, it's hard to say since I don't know exactly how you are defining Faces and Vertices. However, if you know for certain that every vertex normal in your mesh is pointing in the wrong direction, you can easily flip them all by swapping two columns in your Faces matrix before computing the normals:
Faces = [1 2 3; 1 3 4]; % Clockwise-ordered vertices
TR = triangulation(Faces(:, [1 3 2]), Vertices); % Change to counter-clockwise
VN = vertexNormal(TR)
VN =
0 0 1 % Normals are now pointing in positive z
0 0 1
0 0 1
0 0 1
Related
I have two planes and know the planes' equations and normals. I want to rotate the points of the blue plane to the orange plane.
I use normals to get the rotation axis and rotation angle and use the Rodrigues' rotation formula to get the rotation matrix.
Multiplying the blue plane's normal by the rotation matrix works: the result equals the normal of the orange plane. But when I multiply the coordinates of the points in the blue plane, the result is not what I want. Which part did I overlook?
The blue and orange planes:
After rotation:
blue plane: 0.4273x-0.0075y-0.9041z+13.5950=0;
normal: [0.4273;-0.0075;-0.9041]
orange plane: -0.8111x+0.0019y-0.5849z+7.8024=0;
normal: [-0.8111;0.0019;-0.5849]
theta = acos(dot(n_left,n_right)/(norm(n_left)*norm(n_right)));
theta = 1.3876;
axis = cross(n_left,n_right) / norm(cross(n_left,n_right));
axis = [-0.0062;-1.0000;0.0053];
C_matrix = [0 -axis(3) axis(2);
axis(3) 0 -axis(1);
-axis(2) axis(1) 0]; %cross product matrix
R = diag([1 1 1]) + (1-cos(theta))*C_matrix^2 + sin(theta)*C_matrix;
R = [0.1823,-0.0001,-0.9833;
0.0104,0.9999,0.0018;
0.9832,-0.0105,0.1822];
after_rotation = R*blue_points;
one point of blue plane: [-1.1056;-0.2270;14.8712]
after rotation: [14.8197;-0.4144;-1.6222]
one point of orange plane: [-0.2366;-0.4296;14.9292]
I have a new question, similar to the one before, but I still cannot solve it completely. Could you tell me which part I am missing?
left plane: 0.0456x+0.0016y-0.999z+1.1333=0;
normal: [0.0456,0.0016,-0.999]
right plane: -0.0174x+0.0037y-0.998z+0.9728=0;
normal: [-0.0174,0.0037,-0.9998]
rotation matrix:
R = [0.9980 -0.0001 0.0630
0.0003 1.0000 -0.0021
-0.0630 0.0021 0.9980]
one point on the left plane:[-2.4 -0.6446 1.0031]
after rotate: [-2.4012 -0.6446 0.9916]
one point on the right plane:[0.4095 -0.6447 0.9634]
Before rotation:
After rotation:
After rotation, I guess they are in the same plane, but they don't meet. What should I do to make the right side of the yellow plane meet the left side of the blue plane? Which point should I rotate around? The origin? Thanks a lot for your answer!
Your current rotation performs the rotation about the origin.
If you want the planes to coincide, perform the rotation about a shared point between both planes (assuming the original planes were not parallel).
You can find a shared point from the 2 plane equations:
% Store data about plane
n_left = [0.4273 -0.0075 -0.9041];
n_left = n_left/norm(n_left);
b_left = -13.5950;
n_right = [-0.8111 0.0019 -0.5849];
n_right = n_right/norm(n_right);
b_right = -7.8024;
% Find a point on both planes
shared_point = [n_left; n_right]\[b_left; b_right];
To rotate around this shared_point instead of the origin, perform this operation:
% Transform points
after_rotation = R*(blue_points - shared_point) + shared_point;
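As a quick sanity check (a sketch, assuming blue_points is a 3-by-N matrix and R is the Rodrigues matrix from the question), you can verify that the shared point lies on both planes and that the rotated points satisfy the orange plane equation:
n_left*shared_point - b_left    % Should be close to 0 (shared point lies on the blue plane)
n_right*shared_point - b_right  % Should be close to 0 (shared point lies on the orange plane)
residuals = n_right*after_rotation - b_right; % Orange plane equation for every rotated point
max(abs(residuals))             % Should be small, up to noise in the fitted planes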
I use camera calibration in MATLAB to detect some checkerboard patterns, and then visualize them with:
figure; showExtrinsics(cameraParams, 'CameraCentric');
Now, I want to rotate the checkerboard patterns around the x-axis such that all of them have nearly the same y coordinates in the camera frame.
Method:
I get the positions of all patterns in the camera's frame. Then I run an optimization, where the objective function is to minimize the variance in y and the variable is the rotation about x, ranging from 0 to 360 degrees.
Problem:
But when I plot the transformed y-coordinates, they are not even nearly in a line.
Code:
Get the checkerboard points:
%% Get rotation and translation matrices for each image;
T_cw=cell(num_imgs,1); % stores camera to world rotation and translation for each image
pixel_coordinates=zeros(num_imgs,2); % stores the pixel coordinates of each checkerboard origin
for ii=1:num_imgs,
% Calibrate the camera
im=imread(list_imgs_path{ii});
[imagePoints, boardSize] = detectCheckerboardPoints(im);
[r_wc, t_wc] = extrinsics(imagePoints, worldPoints, cameraParams);
T_wc=[r_wc,t_wc';0 0 0 1];
% World to camera matrix
T_cw{ii} = inv(T_wc);
t_cw{ii}=T_cw{ii}(1:3,4); % x,y,z coordinates in camera's frame
end
Data(num_imgs=10):
t_cw
[-1072.01388542262;1312.20387622761;-1853.34408157349]
[-1052.07856598756;1269.03455126794;-1826.73576892251]
[-1091.85978641218;1351.08261414473;-1668.88197803184]
[-1337.56358084648;1373.78548638383;-1396.87603554914]
[-1555.19509876309;1261.60428874489;-1174.63047408086]
[-1592.39596647158;1066.82210015055;-1165.34417772659]
[-1523.84307918660;963.781819272748;-1207.27444716506]
[-1614.00792252030;893.962075837621;-1114.73528985018]
[-1781.83112607964;708.973204727939;-797.185326205240]
[-1781.83112607964;708.973204727939;-797.185326205240]
Main code (Optimization and transformation):
%% Get theta for rotation
f_obj = @(x)var_ycors(x,t_cw);
opt_theta = fminbnd(f_obj,0,360);
%% Plotting (rotate ycor and check to fix theta)
y_rotated=zeros(1,num_imgs);
for ii=1:num_imgs,
y_rotated(ii)=rotate_cor(opt_theta,t_cw{ii});
end
plot(1:numel(y_rotated),y_rotated);
function var_computed=var_ycors(theta,t_cw)
ycor=zeros(1,numel(t_cw));
for ii =1:numel(t_cw),
ycor(ii)=rotate_cor(theta,t_cw{ii});
end
var_computed=var(ycor);
end
function ycor=rotate_cor(theta,mat)
r_x=[1 0 0; 0 cosd(theta) -sind(theta); 0 sind(theta) cosd(theta)];
rotate_mat=mat'*r_x;
ycor=rotate_mat(2);
end
This is a clear eigenvector problem!
Take your centroids:
t_cw=[-1072.01388542262;1312.20387622761;-1853.34408157349
-1052.07856598756;1269.03455126794;-1826.73576892251
-1091.85978641218;1351.08261414473;-1668.88197803184
-1337.56358084648;1373.78548638383;-1396.87603554914
-1555.19509876309;1261.60428874489;-1174.63047408086
-1592.39596647158;1066.82210015055;-1165.34417772659
-1523.84307918660;963.781819272748;-1207.27444716506
-1614.00792252030;893.962075837621;-1114.73528985018
-1781.83112607964;708.973204727939;-797.185326205240
-1781.83112607964;708.973204727939;-797.185326205240];
t_cw=reshape(t_cw,[3,10])';
Compute PCA on them, so we know the principal components:
[R]=pca(t_cw);
And... that's it! R is now the transformation matrix between your original points and the rotated coordinate system. As an example, I will draw the old points in red and the new ones in blue:
hold on
plot3(t_cw(:,1),t_cw(:,2),t_cw(:,3),'ro')
trans=t_cw*R;
plot3(trans(:,1),trans(:,2),trans(:,3),'bo')
You can see that the blue ones are now in a plane, with the best possible fit to the X direction. If you want them in the Y direction, just rotate 90 degrees about Z (I am sure you can figure out how to do this with 2 minutes of Google ;) ).
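For completeness, here is a minimal sketch of that 90-degree rotation about Z, applied to the already transformed points in trans:
Rz = [cosd(90) -sind(90) 0; sind(90) cosd(90) 0; 0 0 1]; % 90 degrees about Z
trans_y = trans * Rz'; % Rotate the row vectors so the points line up along Y instead of X
plot3(trans_y(:,1),trans_y(:,2),trans_y(:,3),'go')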
Note: This is mathematically the best possible fit. I know the points are not as "in a row" as one would like, but that is because of the data; this is honestly the best possible fit, as that is what the eigenvectors give you.
Calibration:
I have calibrated the camera using this vision toolbox in MATLAB. I used checkerboard images to do so. After calibration I get the cameraParams, which contains:
Camera Extrinsics
RotationMatrices: [3x3x18 double]
TranslationVectors: [18x3 double]
and
Camera Intrinsics
IntrinsicMatrix: [3x3 double]
FocalLength: [1.0446e+03 1.0428e+03]
PrincipalPoint: [604.1474 359.7477]
Skew: 3.5436
Aim:
I have recorded trajectories of some objects in motion using this camera. Each object corresponds to a single point in a frame. Now, I want to project the points such that I get a top-view.
Note that all the points I wish to transform are on the same plane.
ex: [xcor_i,ycor_i ]
-101.7000 -77.4040
-102.4200 -77.4040
KEY POINT: This plane is perpendicular to one of the checkerboard images used for calibration. For that image (below), I know the height of the origin of the checkerboard from the ground (193.040 cm). The plane to project the points onto is parallel to the ground and perpendicular to this image.
Code
(Ref: https://stackoverflow.com/a/27260492/3646408 and answer by @Dima below):
function generate_homographic_matrix()
%% Calibrate camera
% Define images to process
path=['.' filesep 'Images' filesep];
list_imgs=dir([path '*.jpg']);
list_imgs_path=strcat(path,{list_imgs.name});
% Detect checkerboards in images
[imagePoints, boardSize, imagesUsed] = detectCheckerboardPoints(list_imgs_path);
imageFileNames = list_imgs_path(imagesUsed);
% Generate world coordinates of the corners of the squares
squareSize = 27; % in units of 'mm'
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
% Calibrate the camera
[cameraParams, imagesUsed, estimationErrors] = estimateCameraParameters(imagePoints, worldPoints, ...
'EstimateSkew', true, 'EstimateTangentialDistortion', true, ...
'NumRadialDistortionCoefficients', 3, 'WorldUnits', 'mm');
%% Compute homography for the plane perpendicular to the checkerboard
% Detect the checkerboard
im=imread(['.' filesep 'Images' filesep 'exp_19.jpg']); %exp_19.jpg is the checkerboard orthogonal to the floor
[imagePoints, boardSize] = detectCheckerboardPoints(im);
% Compute rotation and translation of the camera.
[Rc, Tc] = extrinsics(imagePoints, worldPoints, cameraParams);
% Rc (rotation of the calibration view w.r.t. the camera) = [x y z]
% then the floor has rotation Rf = [z x -y] (the normal vector of the floor points up)
Rf=[Rc(:,3),Rc(:,1),Rc(:,2)*-1];
% Translate it to the floor
H=452;%distance btw origin and floor
Fc = Rc * [0; H; 0];
Tc = Tc + Fc';
% Combine rotation and translation into one matrix:
Rf(3, :) = Tc;
% Compute the homography between the checkerboard and the image plane:
H = Rf * cameraParams.IntrinsicMatrix;
save('homographic_matrix.mat','H')
end
%% Transform points
function [x_transf,y_transf] =transform_points(xcor_i,ycor_i)
% creates a projective2D object and then transforms the points forward to
% get a top-view
% xcor_i and ycor_i are 1d vectors comprising of the x-coordinates and
% y-coordinates of trajectories.
data=load('homographic_matrix.mat');
homo_matrix=data.H;
tform=projective2d(inv(homo_matrix));
[x_transf,y_transf] = transformPointsForward(tform,xcor_i,ycor_i);
end
Quoting text from O'Reilly's Learning OpenCV, p. 412:
"Once we have the homography matrix and the height parameter set as we wish, we could
then remove the chessboard and drive the cart around, making a bird’s-eye view video
of the path..."
This what I essentially wish to achieve.
Abhishek,
I don't entirely understand what you are trying to do. Are your points on a plane, and are you trying to create a bird's eye view of that plane?
If so, then you need to know the extrinsics, R and t, describing the relationship between that plane and the camera. One way to get R and t is to place a checkerboard on the plane, and then use the extrinsics function.
After that, you can follow the directions in the question you cited to get the homography. Once you have the homography, you can create a projective2D object, and use its transformPointsForward method to transform your points.
Since you have the size of the squares on the grid, then given 2 points that you know are connected by an edge of length E (in real-world units), you can calculate their 3D positions.
Taking the camera intrinsic matrix K, the camera's 3D position C, and the camera orientation matrix R, you can calculate a ray through each of the points p by computing:
D = R^T * K^-1 * p
Each 3D point is defined as:
P = C + t*D
and you have the constraint that ||P1-P2|| = E
then it's a matter of solving for t1,t2 and finding the 3D position of the two points.
In order to create a top view, you can take the 3D points and project them using a camera model for that top view to generate a new image.
If all your points are on a single plane, it's enough to calculate the position of 3 points, and you can extrapolate the rest.
If your points are located on a plane that you know one coordinate of, you can do it simply for each point. For example, if you know that your camera is located at height h=C.z, and you want to find the 3D location of points in the frame, given that they are on the floor (z=0), then all you have to do is calculate the direction D as above, and then:
t=abs( (h-0)/D.z )
The 0 represents the height of the plane; substitute any other value for other planes.
Now that you have the value of t, you can calculate the 3D position of each point: P=C+t*D.
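Here is a minimal MATLAB sketch of that back-projection. It assumes K is the 3-by-3 intrinsic matrix in the usual column-vector convention (note that cameraParams.IntrinsicMatrix is its transpose), R and C are the camera orientation and camera center in world coordinates, and p = [u; v] is a pixel known to lie on the floor (z = 0):
D = R' * (K \ [p; 1]);      % Ray direction in world coordinates (D = R^T * K^-1 * p)
t = abs((C(3) - 0) / D(3)); % Distance along the ray to the z = 0 plane
P = C + t*D;                % 3D position of the point on the floor
% For a top view of the floor, P(1) and P(2) can then be plotted directly.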
Then, to create a top view, create a new camera position and rotation to match your required projection, and you can project each point onto this camera's image plane.
If you want a full image, you can interpolate positions and fill in the blanks where no feature point was present.
For more details, you can always read: http://www.robots.ox.ac.uk/~vgg/hzbook/index.html
I want to calculate the angle between 2 planes, the reference plane and Plane1. When I feed the X, Y, Z coordinates of the point cloud to the function plane_fit.m (by Kevin Mattheus Moerman), I get the coefficients:
reference_plane_coeff: [-0.13766204 -0.070385590 130.69409]
Plane1_coeff: [0.0044337390 -0.0013548643 95.890228]
Next, I find the intersection of each plane with the XZ plane and get a line equation, ref_line_XZ and plane1_line_XZ respectively. For this, I set the second coefficient to 0. (Is this right?)
Aref = reference_plane_coeff(1);
Cref = reference_plane_coeff(3);
ref_line_XZ = [Aref Cref];
Arun = Plane1_coeff(1);
Crun = Plane1_coeff(3);
plane1_line_XZ = [Arun Crun];
angle_XZ = acos( dot(ref_line_XZ,plane1_line_XZ ) / (norm(ref_line_XZ) * norm(plane1_line_XZ )) )
I get the angle_XZ value as 0.0012 rad, i.e., 0.0685 degrees.
When I plot these planes on a graph and view them, the angle seems to be much more than 0.0685 degrees. I'm talking about the angle made by the two lines formed by intersecting both planes with the XZ plane.
What am I doing wrong?
Also, when I tried to find angle between its normals, using:
angle_beta_deg = acosd( dot(reference_plane_coeff,Plane1_coeff) / (norm(reference_plane_coeff) * norm(Plane1_coeff)) )
I got the angle as 0.0713 degrees.
On visual inspection of both planes' plots and manually calculating from the plot, angle_XZ should be around 9 degrees.
plane_fit.m (by Kevin Mattheus Moerman)
Imagine a dome with its centre in the +z direction. What I want to do is to move that dome's centre to a different axis (e.g., 20 degrees from the x-axis, 20 degrees from the y-axis, 20 degrees from the z-axis). How can I do that? Any hint/tip helps.
Add more info:
I've been dabbling with the rotation matrices on the wiki for a while. The problem is that the operation is not commutative: RxRyRz is not the same as RzRyRx, so depending on the order in which I multiply them I get different final results. For example, I want my final projection to be 20 degrees from the original X axis, 20 degrees from the original Y axis, and 20 degrees from the original Z axis. Based on the matrix, giving alpha, beta, gamma the value 20 (or the corresponding radians) does NOT result in the intended rotation. Am I missing something? Is there a matrix into which I can just put the intended angles and get the result at the end?
Using a rotation matrix is an easy way to rotate a collection of (x,y,z) points. You can calculate a rotation matrix for your case using the equations in the general rotation section. Note that figuring out the angle values to plug into those equations can be tricky. Think of it as rotating about one axis at a time and remember that the order of your rotations (order of multiplications) does matter.
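A small sketch illustrating that the order matters:
Rx = @(a) [1 0 0; 0 cosd(a) -sind(a); 0 sind(a) cosd(a)]; % rotation about x (degrees)
Ry = @(a) [cosd(a) 0 sind(a); 0 1 0; -sind(a) 0 cosd(a)]; % rotation about y
Rz = @(a) [cosd(a) -sind(a) 0; sind(a) cosd(a) 0; 0 0 1]; % rotation about z
R1 = Rx(20)*Ry(20)*Rz(20); % applied to a column vector: rotate about z, then y, then x
R2 = Rz(20)*Ry(20)*Rx(20); % rotate about x, then y, then z
% R1 and R2 are different matrices, so the same three angles give different results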
An alternative to the general rotation equations is to calculate a rotation matrix from axis and angle. It may be easier for you to define correct parameters with this method.
Update: After perusing Wikipedia, I found a simple way to calculate rotation axis and angle between two vectors. Just fill in your starting and ending vectors for a and b here:
a = [0.0 0.0 1.0];
b = [0.5 0.5 0.0];
vectorMag = @(x) sqrt(sum(x.^2));
rotAngle = acos(dot(a,b) / (vectorMag(a) * vectorMag(b)))
rotAxis = cross(a,b)
rotAngle =
1.5708
rotAxis =
-0.5 0.5 0
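From there, a minimal sketch of building the rotation matrix from this axis and angle (Rodrigues' formula) and applying it, assuming domePoints is a hypothetical N-by-3 matrix holding the dome's (x,y,z) coordinates:
k = rotAxis / norm(rotAxis);                            % Unit rotation axis
K = [0 -k(3) k(2); k(3) 0 -k(1); -k(2) k(1) 0];         % Cross-product (skew-symmetric) matrix
R = eye(3) + sin(rotAngle)*K + (1 - cos(rotAngle))*K^2; % Rodrigues' rotation formula
rotatedDome = (R * domePoints')';                       % Rotate every point of the dome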