Can I project a rectangle to 2D by simply setting Z to 0? - unity3d

I have a rectangle in 3D space that I need to project to 2D to the screen.
The camera is orthographic, so I figured: can I just set the Z coordinates of the 4 points of the rectangle to 0, so they get flattened onto the screen?
When I rotate the rectangle about the Y axis, for instance, since the camera is orthographic, all I see is the rectangle in front of me getting narrower, because the X component is being altered (along with the Z component).
But if I set Z to 0 and leave X and Y, it would still look the same through the orthographic camera.
The question is - is this a viable method? Are there cases where it breaks?

Yes, to build an orthographic projection onto the OXY plane it is enough to set z = 0.
The projection matrix is:
(1 0 0 0)
(0 1 0 0)
(0 0 0 0)
(0 0 0 1)
When you rotate an origin-centered, axis-aligned rectangle about the Y axis, its projection changes width, but its height remains the same.
Example: the top-right corner has coordinates (1, 1, 0). After rotation about the Y axis by angle Fi, it has 3D coordinates (Cos(Fi), 1, Sin(Fi)) and screen coordinates (Cos(Fi), 1).
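A minimal MATLAB sketch of the idea (the rectangle corners and the angle here are invented for illustration):
theta = deg2rad(30);                      % arbitrary tilt about the Y axis
Ry = [ cos(theta) 0 sin(theta);
       0          1 0;
      -sin(theta) 0 cos(theta)];          % rotation about Y
corners = [1 1 0; -1 1 0; -1 -1 0; 1 -1 0]'; % columns are corners, z = 0
rotated   = Ry * corners;                 % rotated rectangle in 3D
projected = rotated(1:2, :);              % orthographic projection: drop z
% projected(:,1) is (cos(theta); 1): width shrinks, height is unchanged.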

Related

How can I use the rotation angle and axis to rotate a 3D plane?

I have two planes and know the planes' equations and normals. I want to rotate the points of the blue plane onto the orange plane.
I use the normals to get the rotation axis and angle, and Rodrigues' rotation formula to get the rotation matrix.
Multiplying the blue plane's normal by the rotation matrix works: the result equals the normal of the orange plane. But when I multiply the point coordinates of the blue plane, the result is not what I want. What am I missing?
The blue and orange planes:
After rotation:
blue plane: 0.4273x-0.0075y-0.9041z+13.5950=0;
normal: [0.4273;-0.0075;-0.9041]
orange plane: -0.8111x+0.0019y-0.5849z+7.8024=0;
normal: [-0.8111;0.0019;-0.5849]
theta = acos(dot(n_left,n_right)/(norm(n_left)*norm(n_right)));
theta = 1.3876;
axis = cross(n_left,n_right) / norm(cross(n_left,n_right));
axis = (-0.0062;-1.0000;0.0053);
C_matrix = [0 -axis(3) axis(2);
axis(3) 0 -axis(1);
-axis(2) axis(1) 0]; %cross product matrix
R = diag([1 1 1]) + (1-cos(theta))*C_matrix^2 + sin(theta)*C_matrix;
R = [0.1823,-0.0001,-0.9833;
0.0104,0.9999,0.0018;
0.9832,-0.0105,0.1822];
after_rotation = R*blue_points;
one point of blue plane: [-1.1056;-0.2270;14.8712]
after rotation: [14.8197;-0.4144;-1.6222]
one point of orange plane: [-0.2366;-0.4296;14.9292]
I have a new question, similar to the one before, but I still cannot solve it completely. Could you tell me which part I am missing?
left plane: 0.0456x+0.0016y-0.999z+1.1333=0;
normal: [0.0456,0.0016,-0.999]
right plane: -0.0174x+0.0037y-0.9998z+0.9728=0;
normal: [-0.0174,0.0037,-0.9998]
rotation matrix:
R = [0.9980 -0.0001 0.0630
0.0003 1.0000 -0.0021
-0.0630 0.0021 0.9980]
one point on the left plane:[-2.4 -0.6446 1.0031]
after rotation: [-2.4012 -0.6446 0.9916]
one point on the right plane:[0.4095 -0.6447 0.9634]
Before rotation:
After rotation:
After rotation, I guess they are in the same plane, but they don't meet. What should I do to make the right side of the yellow plane meet the left side of the blue plane? Which point should I rotate around? The origin? Thanks a lot for your answer!
Your current code performs the rotation about the origin.
If you want the planes to coincide, perform the rotation about a shared point between both planes (assuming the original planes were not parallel).
You can find a shared point from the 2 plane equations:
% Store data about plane
n_left = [0.4273 -0.0075 -0.9041];
n_left = n_left/norm(n_left); % already unit length here; otherwise scale b_left by norm(n_left) too
b_left = -13.5950;
n_right = [-0.8111 0.0019 -0.5849];
n_right = n_right/norm(n_right);
b_right = -7.8024;
% Find a point that lies on both planes (the 2x3 system is underdetermined;
% backslash returns one particular solution)
shared_point = [n_left; n_right]\[b_left; b_right];
To rotate around this shared_point instead of the origin, perform this operation:
% Transform points
after_rotation = R*(blue_points - shared_point) + shared_point;
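As a quick sanity check (a sketch reusing the variables above; blue_points is assumed to be a 3-by-N matrix of points on the blue plane; the expansion against a 3-by-1 vector needs MATLAB R2016b or later):
% The shared point lies on both planes, so these residuals should be ~0:
res_left  = n_left  * shared_point - b_left
res_right = n_right * shared_point - b_right
% After the pivoted rotation, every blue point should satisfy the orange
% plane's equation up to numerical noise:
after_rotation = R*(blue_points - shared_point) + shared_point;
res = n_right * after_rotation - b_right   % 1-by-N row of near-zero values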

Why do vertex normals flip after rotating 3D point clouds?

I have two samples of 3D point clouds of a human face. The blue point cloud denotes the target face and the red point cloud the template. The image below shows that the target and template faces are aligned in different directions (the target roughly along the x-axis, the template roughly along the y-axis).
Figure 1:
The region around the nose is displayed in Figure 1.
I want to rotate my target face (blue) with the nasal tip as the center of rotation (I translated the target to the template prior to Figure 1 so that the tip of the nose, i.e., centerpt, is superimposed for both faces) to grossly align it with the template face (red). I rotated the target face with the following MATLAB code:
% PCA for the target face
targetFaceptfmt = pointCloud(targetFace); % Convert to point cloud format
point = [templateFace(3522, 1), templateFace(3522, 2), templateFace(3522, 3)]; % The 3522nd point in templateFace is the nasal tip, used as the center of rotation later on
radius = 20; % 20mm
[NNTarIndex, NNTarDist] = findNeighborsInRadius(targetFaceptfmt, point, radius); % Find all vertices within 20 mm of the nasal tip on the target face
NNTar = select(targetFaceptfmt, NNTarIndex); % Select the identified points for PCA
[TarVec,TarSCORE,TarVal] = pca(NNTar.Location); % Do PCA for target face using vertices close to the nasal tip
% PCA for the template face
templateFaceptfmt = pointCloud(templateFace); % Convert to point cloud format
[NNTemIndex, NNTemDist] = findNeighborsInRadius( templateFaceptfmt, point, radius); % Find all vertices within 20 of the nasal tip point on the template
NNTem = select(templateFaceptfmt, NNTemIndex); % Select the identified points for PCA
[TemVec,TemSCORE,TemVal] = pca(NNTem.Location); % Do PCA for template face using vertices close to the nasal tip
% Rotate target face with nasal tip point as the center of rotation
targetFace_r = R * (targetFace - centerpt)' + centerpt'; % R is the PCA-derived rotation (computed from TarVec and TemVec, not shown)
targetFace_new = targetFace_r';
where targetFace and templateFace contains coordinates for the unrotated target face and the template face, respectively. The targetFace_r contains coordinates for the target face after rotation around nasal tip, R is the rotation matrix calculated through PCA (See here for source of formula for rotation), and centerpt is the nasal tip point which is used as the center of rotation. I then plotted the transposed targetFace_r, i.e., the targetFace_new, with normals added to each vertex:
Figure 2:
Before rotation, the normals of the target and template faces generally point in similar directions (Figure 1). After rotation, the target and template faces are both aligned along the y-axis (which is what I want); however, the normals of the target and template faces point in opposite directions. Bearing in mind that no changes were made to the template face, I realized that the normals of the target face calculated after rotation are flipped, but I do not know why. I used the checkFaceOrientation function of the Rvcg package in R to check whether expansion along the normals increases centroid size. It returned TRUE for the template face but FALSE for the target face, which confirms that the vertex normals of the target face are flipped.
Vertex normals were calculated in MATLAB as follows:
TR = triangulation(Faces, Vertices); % Triangulation based on face and vertex information
VN = vertexNormal(TR); % Calculate vertex normals
where Faces contains the face information, i.e., the connectivity list, and Vertices contains the coordinates of the vertices. Vertex normals were calculated separately for the target face before rotation, the target face after rotation, and the template face. I used the same Faces data before and after rotating the target face.
The flipped vertex normals caused errors in some further analyses. As a result, I had to flip the normals manually to make them point in directions similar to the normals of the template face.
Figure 3:
Figure 3 shows that after manually flipping the normals, the normals of the target and template faces generally point in similar directions.
My question is: why are the normals of the target face flipped after rotation? In what cases does rotating a 3D point cloud flip the vertex normals?
Some further information that may be useful: the rotation matrix R I obtained is as follows, for your reference:
0.0473096146726546 0.867593376108813 -0.495018720950670
0.987013081649028 0.0355601323276586 0.156654567895508
-0.153515396665006 0.496001220483328 0.854643675613313
Since trace(R) = 1 + 2cos(alpha), I calculated alpha through acos((trace(R)-1)/2)*180/pi, which yielded a rotation angle of 91.7904 degrees about the nasal tip point.
If I'm understanding everything correctly, it looks like your rotation matrix is actually encoding a rotation plus a reflection. If your matrix is approximately:
0.04 0.86 -0.49
0.98 0.03 0.15
-0.15 0.49 0.85
Then the images of the unit vectors pointing along the positive axes (the columns of the matrix) are:
x = [ 0.04 0.98 -0.15]
y = [ 0.86 0.03 0.49]
z = [-0.49 0.15 0.85]
However, if you take the cross-product of x and y (cross(x, y)), you get approximately [0.49 -0.15 -0.85], which is the negation of z, which implies that the matrix is encoding both a rotation and a reflection. Naturally, multiplying a mesh's vertices by a reflection matrix will reverse the winding order of its polygons, yielding inverted normals.
In the slides that you referenced, it states that the PCA method of generating a rotation matrix should only consider four different combinations of axes in the 3D case, to ensure that the output matrix obeys the right-hand rule. If all combinations of axes were checked, PCA would be allowed to consider both rotated and reflected spaces when searching for a best match. If that were the case, and if there were some noise in the data such that the left half of the template is a slightly better match to the right half of the target and vice versa, then the PCA method might generate a reflection matrix like the one you observe. Perhaps you should reexamine the logic of how R is generated from the PCA results?
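A hedged sketch of the check and one common repair, assuming R is built from the two PCA bases as R = TemVec * TarVec' (the exact construction in the linked slides may differ):
% A proper rotation has det(R) == +1; a reflection has det(R) == -1.
if det(R) < 0
    % Flip the sign of one PCA axis so both bases have the same handedness,
    % then rebuild R; det(R) is now +1.
    TarVec(:,3) = -TarVec(:,3);
    R = TemVec * TarVec';
end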
As alluded to in the comments, the direction of your vertex normals will depend on how you've ordered the triangular facets in your Faces matrix. This will follow a right-hand rule, where your fingers follow the vertex order around the triangle and your thumb indicates the surface normal direction. Here's a simple example to help illustrate:
Vertices = [0 0 0; 0 1 0; 1 1 0; 1 0 0]; % Points clockwise around a unit square in the x-y plane (z = 0; vertexNormal needs a 3-D triangulation)
Faces = [1 2 3; 1 3 4]; % Two triangular facets, clockwise vertex ordering
TR = triangulation(Faces, Vertices);
VN = vertexNormal(TR)
VN =
0 0 -1
0 0 -1
0 0 -1
0 0 -1
In this example, Vertices contains the 4 vertices of a unit square in the x-y plane, ordered clockwise if you're looking down from positive z. Two triangular facets are defined in Faces, and the order of the indices in each row traces along the vertices in a clockwise fashion as well. This results in a surface normal for each face that points in the negative z direction. When the vertex normals are computed, they are pointing in the negative z direction as well.
What happens when we flip the order of one triangle so that its points are counter-clockwise?...
Faces = [1 2 3; 1 4 3]; % Second facet is 1 4 3 instead of 1 3 4
TR = triangulation(Faces, Vertices);
VN = vertexNormal(TR)
VN =
0 0 0
0 0 -1
0 0 0
0 0 1
The surface normal of the second triangle will now point in the positive z direction. The vertices that are only used by one triangle (rows 2 and 4) will have vertex normals that match the surface normals, while the vertices shared by each (rows 1 and 3) will have vertex normals of 0 (the two surface normals cancel).
How will this help you with your problem? Well, it's hard to say since I don't know exactly how you are defining Faces and Vertices. However, if you know for certain that every vertex normal in your mesh is pointing in the wrong direction, you can easily flip them all by swapping two columns in your Faces matrix before computing the normals:
Faces = [1 2 3; 1 3 4]; % Clockwise-ordered vertices
TR = triangulation(Faces(:, [1 3 2]), Vertices); % Change to counter-clockwise
VN = vertexNormal(TR)
VN =
0 0 1 % Normals are now pointing in positive z
0 0 1
0 0 1
0 0 1

Unity Get Pixel Coordinate Values from RectTransform

I need a way to obtain the pixel coordinates of a RectTransform: the x position where 0 is the left edge of the screen, and the y position where 0 is the bottom edge of the screen.
You want to use Camera.WorldToScreenPoint.
Convert the object's origin.
Convert the object's origin + width.
Convert the object's origin + height.
The converted values give you the pixel coordinates [origin.x, origin.y, origin.x + width, origin.y + height].
In 3D there are no pixels, only Unity units.
float x = transform.position.x;
float y = transform.position.y;
This gives you x and y in units. If you are working with textures, you can use Texture2D, which has methods for working with pixels.

Point projection onto a plane rotated about the x axis

I want to simulate depth in a 2D space. If I have a point P1, I suppose that I need to project that point P1 onto a plane rotated theta radians clockwise about the x axis, to get P1'.
It seems that P1'.x has to be the same as P1.x, and P1'.y has to be shorter than P1.y. In a 3D world:
cosa = cos(theta)
sina = sin(theta)
P1'.x = P1.x
P1'.y = P1.y * cosa - P1.z * sina
P1'.z = P1.y * sina + P1.z * cosa
Is my P1.z = 0? I tried it, and P1'.y = P1.y * cosa does not give the result I expected.
Any response would be appreciated, Thanks!
EDIT: What I want: for now, I rotate the camera and translate the matrix.
EDIT 2: an example of a single line with a start1 point and an end1 point (it is a horizontal line; the expected result is a line falling toward the "floor" as the tilt angle increases).
I think it is a sign error or a missing offset (in Java canvas drawing, (0,0) is at the top-left), because my new line with a tilt of 0 sits below all the others, and with a value of 90° the new line and the original one match.
The calculation you are performing is correct if you want to rotate clockwise around the x axis. If you think of your line as the bottom edge of a sheet of paper, a rotation of 0 degrees is you looking directly at the page.
For the example you have given, the line is horizontal, parallel to the x axis. This does not change under rotation around the x axis (the line and the axis it rotates around are parallel to one another). As you rotate from 0 to 90 degrees, the y coordinates of the line decrease as P1.y*cos(theta), down to 0 at 90 degrees. (Think of the piece of paper rotating around its bottom edge, the x axis: at 90 degrees the paper lies flat, the y axis is perpendicular to the page, and both the bottom edge on the x axis and the opposite parallel edge have y = 0.)
So, as you can see, for your example this has worked correctly.
EDIT: The reason that rotating by 90 degrees does not give an exactly zero result is simply floating-point rounding.
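For concreteness, a small MATLAB sketch of the same calculation (the endpoints and angle are invented for illustration), showing the projected y shrinking by cos(theta):
theta = deg2rad(60);                       % tilt angle
start1 = [0 100 0];  end1 = [200 100 0];   % horizontal line, z = 0
rotx = @(p) [p(1), ...
             p(2)*cos(theta) - p(3)*sin(theta), ...
             p(2)*sin(theta) + p(3)*cos(theta)];   % rotation about the x axis
p1 = rotx(start1);  p2 = rotx(end1);
screen1 = p1(1:2);  screen2 = p2(1:2);     % project by dropping z
% Both y values equal 100*cos(theta); remember to flip y when drawing on a
% canvas whose origin (0,0) is at the top-left.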

Matlab 3D view matrix

Let A be MATLAB's 4x4 view matrix, obtained from the view function by:
A = view;
A(1:3,1:3) should correspond to rotation and scaling,
A(1:3,4) should correspond to translation, and
A(4,:) should simply be [0 0 0 1].
When setting the camera parameters to the following simple scenario:
camproj('orthographic')
set(gca, 'CameraPosition', [0,0,0])
set(gca, 'CameraTarget', [0,0,1])
set(gca, 'CameraUpVector', [0,1,1])
I get that A = view is:
-1 0 0 0.5
0 1 0 -0.5
0 0 1 -0.5
0 0 0 1
Now I can't figure out where the 0.5's are coming from. Note that I set the camera position to [0,0,0], so there should be no translation.
Another peculiarity, setting the camera position to [0,0,10] by:
set(gca, 'CameraPosition', [0,0,10])
results in the A = view matrix becoming
1 0 0 -0.5
0 1 0 -0.5
0 0 -1 5.5
0 0 0 1
So I've noticed the -0.5 has changed to 5.5 in A(3,4) and this somehow has to do with 5 = 10 / 2.
That is, changing the camera position to [0,0,a] changes the view matrix at A(3,4) by roughly a / 2.
This is... weird? Peculiar? Odd?
Update:
Yet another peculiarity is that the determinant of A(1:3,1:3) is -1, although for a rotation matrix it should be 1. A determinant of -1 means the matrix encodes not only a rotation but also a reflection. Why would we need a reflection?
Try the same in MATLAB 2013a; you will find the results match the expectation. I don't know which version of MATLAB you are using, but this is certainly fixed in version 8.1.
My educated guess is that MATLAB lets you set it as if the pixel coordinates were in the range (-0.5*viewport_size, 0.5*viewport_size), but internally uses the more common pixel coordinate system in which each pixel coordinate is in the range (0, viewport_size).
Not familiar with MATLAB, but: in 3D graphics you always distinguish between projection and camera matrices.
Projection goes from "camera space", where the camera is at zero, to projective space. After the projection matrix is applied, screen coordinates are computed as x' = x/w, etc. So under perspective, all the projection matrix does is move z into w. In orthographic mode it might add z to x instead.
But it also often includes window transforms. In camera space the camera is at 0 looking down z, so coordinates are roughly in -1..1, while window coordinates are 0..1; hence often a *0.5, a +0.5, or a negation, etc.
The weirdness you see comes from mixing camera and projection. I am sure MATLAB has both. Use the camera matrix to move and rotate the camera; use the projection only for window coordinates and perspective effects.
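To illustrate that split (a hedged guess at what view is doing internally, not MATLAB's documented behavior), the matrix from the first scenario factors neatly into a window transform times a camera matrix:
% Camera at the origin looking down +z: pure rotation/reflection, no
% translation (the x flip makes the basis left-handed, hence det = -1).
cam = [-1 0 0 0;
        0 1 0 0;
        0 0 1 0;
        0 0 0 1];
% Window transform: shift each coordinate by half a viewport, remapping a
% (-0.5, 0.5) range onto (0, 1) (the signs per axis are MATLAB's choice).
win = [1 0 0  0.5;
       0 1 0 -0.5;
       0 0 1 -0.5;
       0 0 0  1];
A = win * cam   % reproduces the view matrix reported in the question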