DirectX and Negative Scale - coordinates

I am starting to experiment with some DirectX type stuff, and I had a question about the scaling matrices. If I set my view matrix to:
XMMatrixTranspose(XMMatrixIdentity() * XMMatrixScaling(1.0f,2.0f,1.0f))
Then everything (a centered square) appears twice as big along the y-axis, which is what I expect. If I set it to a negative value instead, e.g.:
XMMatrixScaling(1.0f,-2.0f,1.0f)
Then everything disappears. In fact, if I set any component of the scale to a value < 0, nothing shows up. I was expecting the image to just be 'flipped' along the corresponding axis, but it doesn't show up at all. Is it possible to use negative values when scaling, or am I doing something completely wrong?

This is caused by back-face culling and clipping. Assuming you have culling enabled (it is on by default), setting the x or y scale to a negative value flips the winding order of all the triangles. This makes every triangle count as "backwards", so the GPU doesn't draw them. Assuming you have no other transformation matrices applied and the square is in the x-y plane, flipping the sign of z doesn't make the triangles back-facing; it just moves the square outside the viewport (e.g. from z = 0.5, halfway inside the viewport depth range, to z = -0.5, outside the viewport depth range).
Back-face culling is a performance optimization: most 3D scenes have closed geometry (or at least they don't let you see the open parts), so every back-facing triangle should be covered by a front-facing one, and there's no point in drawing it. This isn't always true, though. If you've ever played a game, gotten too close to a rock or wall, and suddenly been able to see through the whole object, that's because the camera clipped through the front face, and since the other side is back-facing, it doesn't get drawn.
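If you actually want the mirrored result, the usual fix is to adjust the rasterizer state while drawing the negatively scaled geometry. A minimal sketch, assuming Direct3D 11 (the helper name and the 'device'/'context' variables are just illustrative):
    #include <d3d11.h>

    // Sketch (Direct3D 11 assumed from the XMMatrix usage above): create a rasterizer
    // state that either disables culling or flips which winding counts as front-facing,
    // and bind it while drawing negatively scaled geometry.
    ID3D11RasterizerState* CreateMirrorRasterizerState(ID3D11Device* device)
    {
        D3D11_RASTERIZER_DESC rd = {};
        rd.FillMode = D3D11_FILL_SOLID;
        rd.CullMode = D3D11_CULL_NONE;       // simplest: draw both windings
        // Alternatively, keep D3D11_CULL_BACK and set rd.FrontCounterClockwise = TRUE
        // to flip which winding is treated as front-facing for mirrored geometry.
        rd.DepthClipEnable = TRUE;

        ID3D11RasterizerState* state = nullptr;
        device->CreateRasterizerState(&rd, &state);
        return state;
    }

    // Usage while rendering (context and defaultState assumed to exist):
    //     context->RSSetState(mirrorState);   // draw the mirrored object
    //     context->RSSetState(defaultState);  // restore afterwards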

Related

Unity Rotate Sphere To Point Directly Upwards Based On Child Point

I've got a 3D sphere on which I've been able to plot a point using longitude and latitude, thanks to some work by another developer I found online. I think I understand what it's doing.
What I need to do now is rotate my planet so the point is always at the topmost point (i.e. the north pole), but I'm not sure how to do this. I'm probably missing some important fundamentals here, so I'm hoping the answer can assist in my future learning.
Here's an image showing what I have - the blue line comes from the longitude and latitude I have plotted, and I need to rotate the planet so that this line points directly upwards.
https://ibb.co/2y24FxS
If anyone is able to advise it'd be very much appreciated.
If I'm not mistaken, Unity uses a coordinate system where the y-axis points up.
If the point on your sphere was in the xy-plane, you'd just have to determine the angle between the radius vector (which starts at the center of the sphere and ends at the point in question) and the y-axis, and then rotate by that amount around the z-axis, so that the radius vector becomes vertical. But your point is at an arbitrary location in 3D space - see the image below. So one way to go about it is to first bring the point into the xy-plane, then continue from there.
Calculate the radius vector, which is just r = p - sphereCenter (where p is the point's position). Make a copy of it and set its y-component to zero, so that you have (x, 0, z) - the projection of the vector r onto the horizontal xz-plane - and call the copy rXZ.
Determine the signed angle between the x-axis and rXZ (use Vector3.SignedAngle(xAxis, rXZ, yAxis), see docs), and create a rotation matrix M1 that rotates the sphere in the opposite direction around the vertical (negate the angle). This should place your point in the xy-plane.
Now determine the angle between r and the y-axis (Vector3.SignedAngle(r, yAxis, zAxis)), and create a new rotation matrix M2 that rotates by that angle around the zAxis. (I think for this second one, the simpler Vector3.Angle will work as well.)
So, what you want now is to combine the two matrices (by multiplying them) into a single transform (I'm assuming this is a transformation in the local coordinate system of the sphere, where (0, 0, 0) is the sphere's center). If I'm not mistaken, Unity uses column-major matrices, so the multiplication order should be M = M2 * M1 (the rightmost matrix is applied first).
Reorient your globe using M as a local transform, and it should bring your point to the top. You can also create M3 = M1.inverse, and then do M = M3 * M2 * M1, to preserve the original angular offset from the xy-plane.
Check for edge cases, such as r already being vertical (pointing straight up, or straight down).
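If it helps to see the two angles written out, here is a minimal stand-alone sketch in plain math rather than Unity's API (Vec3 and the variable names are made up for the example, and the exact signs depend on the handedness convention you use):
    #include <cmath>
    #include <cstdio>

    // Illustrative sketch of the two-step rotation described above.
    struct Vec3 { double x, y, z; };

    int main() {
        const double kPi = 3.14159265358979323846;
        Vec3 r = {0.3, 0.8, 0.5};  // radius vector: point on the sphere minus the sphere's center

        // Step 1: signed angle of the projection of r onto the horizontal xz-plane,
        // measured from the x-axis (the role played by Vector3.SignedAngle(xAxis, rXZ, yAxis)).
        // Rotating the sphere by the opposite of this angle about the vertical axis
        // brings the point into the xy-plane.
        double yaw = std::atan2(r.z, r.x);

        // Step 2: angle between r and the y-axis (the role played by Vector3.Angle(r, yAxis)).
        // After step 1, rotating by this angle about the z-axis makes r vertical.
        double len   = std::sqrt(r.x * r.x + r.y * r.y + r.z * r.z);
        double pitch = std::acos(r.y / len);

        std::printf("rotate %.1f deg about the vertical, then %.1f deg about the z-axis\n",
                    -yaw * 180.0 / kPi, pitch * 180.0 / kPi);
        return 0;
    }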

Oriented Bounding Box algorithm, Need some understanding/clarification of a few lines of existing (working) code

I am reviewing some MATLAB code that is publicly available at the following location:
https://github.com/mattools/matGeom/blob/master/matGeom/geom2d/orientedBox.m
This is an implementation of the rotating calipers algorithm on the convex hull of a set of points, used to compute an oriented bounding box. My aim in reviewing it was to understand intuitively how the algorithm works; however, I would like clarification on certain lines in the file that I find confusing.
On line 44: hull = bsxfun(@minus, hull, center);. This appears to translate all the points in the convex hull so that the calculated centroid sits at (0,0). Is there any particular reason why this is done? My only guess is that it allows straightforward rotational transforms later in the code, as rotating about the real origin would cause significant problems.
On lines 71 and 74: indA2 = mod(indA, nV) + 1; and indB2 = mod(indB, nV) + 1;. Is this a trick to keep the access index from going out of bounds? My guess is that it rolls the index back to the start upon reaching the end.
On line 125: y2 = - x * sit + y * cot;. This is the correct transformation, as the code behaves properly, but I am not sure why it is used here and how it differs from the other rotational transforms done before and after (with the calls to rotateVector). My best guess is that I am simply not visualizing the required rotation correctly.
Side note: The external function calls vectorAngle, rotateVector, createLine, and distancePointLine can all be found under the same repository, in files named after the function name (as per MATLAB standard). They are relatively uninteresting and do what you would expect aside from the fact that there is normalization of vector angles going on.
I'm the author of the above piece of code, so I can give some explanations about it:
First of all, the algorithm is indeed a rotating calipers algorithm. In the current implementation, only the width is tested (I did not check the west and east vertices). In practice, it seems the two results coincide most of the time.
Line 44 -> the goal of translating to the origin was to improve numerical accuracy. When a polygon is located far away from the origin, coordinates may be large yet close together. Many computations involve products of coordinates; by translating the polygon to the origin, the coordinates are smaller and the precision of the resulting products is expected to be better. Well, to be honest, I did not verify this effect directly; it is more a careful way of coding than a fix…
Lines 71-74 -> yes. The idea is to find the index of the next vertex along the polygon. If the current vertex is the last vertex of the polygon, then the next vertex index should be 1. The modulo rescales the index to the range 0 to N-1, and the +1 maps it back to MATLAB's 1-based indexing, so the two lines ensure correct iteration.
Line 125 -> there are several transformations involved. Using the rotateVector() function, one simply computes the minimal width for a given edge. On line 125, one rotates the points of the convex hull to align them with the "best" direction (the one that minimizes the width). The last change of coordinates (lines 132->140) is due to the fact that the center of the oriented box differs from the centroid of the polygon, so a shift is added, corrected by the rotation.
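For what it's worth, line 125 is one row of a standard 2D rotation by -theta, which is what aligns the hull with the candidate direction. A small stand-alone sketch (variable names chosen to mirror the MATLAB code, values made up):
    #include <cmath>
    #include <cstdio>

    // Sketch of the change of coordinates behind line 125: rotating a hull point
    // (x, y) by -theta aligns the polygon with the candidate edge direction.
    int main() {
        double theta = 0.35;                 // example angle of the candidate edge
        double x = 2.0, y = 1.0;             // example hull point (already centered)
        double cot = std::cos(theta), sit = std::sin(theta);

        double x2 =  x * cot + y * sit;      // extent along the edge direction
        double y2 = -x * sit + y * cot;      // extent perpendicular to it (the "width"), as on line 125

        std::printf("rotated point: (%f, %f)\n", x2, y2);
        return 0;
    }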
I did not really look at the code; this is an explanation of how rotating calipers work.
A fundamental property is that the tightest bounding box is such that one of its sides overlaps an edge of the hull. So what you do is essentially
try every edge in turn;
for a given edge, seen as being horizontal, south, find the farthest vertices north, west and east;
evaluate the area or the perimeter of the rectangle that they define;
remember the best area.
It is important to note that when you switch from an edge to the next, the N/W/E vertices can only move forward, and are readily found by finding the next decrease of the relevant coordinate. This is how the total processing time is linear in the number of edges (the search for the initial N/E/W vertices takes 3(N-3) comparisons, then the updates take 3(N-1)+Nn+Nw+Ne comparisons, where Nn, Nw, Ne are the number of moves from a vertex to the next; obviously Nn+Nw+Ne = 3N in total).
The modulos are there to implement the cyclic indexing of the edges and vertices.
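For comparison, in a 0-based language the same cyclic step is simply (i + 1) % n; MATLAB needs the mod-then-add-one form only because its indices start at 1. A tiny sketch:
    #include <cstdio>

    // Sketch of the cyclic "next vertex" index. MATLAB's 1-based version is
    // next = mod(ind, nV) + 1; in a 0-based language it is simply (i + 1) % n.
    int main() {
        const int n = 5;                       // number of hull vertices
        for (int i = 0; i < n; ++i) {
            int next = (i + 1) % n;            // wraps from the last vertex back to the first
            std::printf("%d -> %d\n", i, next);
        }
        return 0;
    }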

MATLAB flip definition of angles (or alternative angular metric)

I am doing work on symmetric images where I would like to define a symmetric (polar) coordinate space. Basically for the left image, I want 0 degrees to be defined along the right horizontal axis (as is the default). However, for the right image, I want 0 degrees to be defined along the left horizontal axis.
I know a phase shift of pi would do the trick. However, for comparison purposes, I am trying to keep the range of angles the same, [-pi : pi).
In the above color plot of the rotations in an object, note that they are both defined in the same direction. Ideally I'd like to see the colors of the right object flipped across its vertical axis.
I should note that these angles are calculated by taking the arctan(y/x) of the perimeter coordinates when measured from the centroid. Is there a different trig function that may result in the proper symmetry? I couldn't seem to come up with one while still claiming it was representative of direction.
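(For reference, keeping the [-pi, pi) range after any phase shift only requires wrapping the result, as in this small sketch; the wrap itself is independent of which shift or mirroring is chosen, and the names below are just illustrative.)
    #include <cmath>
    #include <cstdio>

    // Sketch: apply a phase shift to an angle and wrap the result back into
    // [-pi, pi), so the range stays the same for comparison purposes.
    const double kPi = 3.14159265358979323846;

    double wrapToPi(double a) {
        a = std::fmod(a + kPi, 2.0 * kPi);
        if (a < 0.0) a += 2.0 * kPi;
        return a - kPi;                         // result lies in [-pi, pi)
    }

    int main() {
        double theta   = 0.75 * kPi;            // example angle from the arctan computation
        double shifted = wrapToPi(theta + kPi); // phase shift of pi, range preserved
        std::printf("%f -> %f\n", theta, shifted);
        return 0;
    }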

eye position mapping with the screen pixel

I am currently doing a project called eye controlled cursor using MATLAB.
I have a few stages before I extract the center of the iris (which can be considered the pupil location): face detection -> eye detection -> iris detection -> and finally I obtain the center of the iris, as shown in the figure.
Now, I am trying to map this position (X,Y) to my computer screen pixels (1366 x 768). Most of the journals I have found require a reference point such as the lips, nose, or eye corner, but I am only able to extract the center of the iris by doing certain thresholding. How can I map this position (X,Y) to my screen pixels (1366 x 768)?
Well, you either have to fix the head in a certain position (which isn't very practical) or you have to adapt to the face position. Depending on your image, you will have to choose points that are always visible in the image and are easy to detect. If you just have one point (like the nose), you can only adjust for the x/y shift of your head. If you have more points (like the four eye corners, the nose, maybe the corners of the mouth), you can also extract the three rotational values of the head and therefore calculate the direction of sight much better. For a first approach, I guess the two inner eye corners (they are "easy" to detect) will do.
I would also recommend using a calibration sequence. You present the user with a sequence of four red points in the corners of the screen, and they have to look at each one. You can then record the positions of the pupils and interpolate between them.
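As a rough sketch of the mapping step after calibration (all names and numbers below are made up for the example, and the 1366 x 768 resolution is taken from the question; a real setup would also need the head-movement compensation discussed above):
    #include <cstdio>

    // Rough sketch: map a measured pupil position to screen pixels by linear
    // interpolation between two calibration samples (pupil position while the
    // user looked at the top-left and bottom-right corners of the screen).
    struct Point { double x, y; };

    Point pupilToScreen(Point pupil, Point calibTopLeft, Point calibBottomRight,
                        double screenW, double screenH) {
        double u = (pupil.x - calibTopLeft.x) / (calibBottomRight.x - calibTopLeft.x);
        double v = (pupil.y - calibTopLeft.y) / (calibBottomRight.y - calibTopLeft.y);
        // Clamp in case the measurement falls slightly outside the calibrated range.
        if (u < 0) u = 0; if (u > 1) u = 1;
        if (v < 0) v = 0; if (v > 1) v = 1;
        return { u * (screenW - 1), v * (screenH - 1) };
    }

    int main() {
        Point topLeft     = {312.0, 248.0};   // pupil centre while looking at the top-left point
        Point bottomRight = {348.0, 270.0};   // pupil centre while looking at the bottom-right point
        Point measured    = {330.0, 260.0};

        Point screen = pupilToScreen(measured, topLeft, bottomRight, 1366.0, 768.0);
        std::printf("cursor at (%.0f, %.0f)\n", screen.x, screen.y);
        return 0;
    }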

What is DOT3 lighting?

An answer to my question suggests that DOT3 lighting can help with OpenGL ES rendering, but I'm having trouble finding a decent definition of what DOT3 lighting is.
Edit 1
iPhone-related information is greatly appreciated.
DOT3 lighting is often referred to as per-pixel lighting. With vertex lighting, the lighting is calculated at every vertex and the result is interpolated across the triangle. In per-pixel lighting, as the name implies, the goal is to calculate the lighting at every pixel.
The way this is done on fixed-function hardware such as the iPhone is with so-called register combiners. The name DOT3 comes from this render state:
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
Look at this blog entry on Wolfgang Engel's blog for more info on exactly how to set this up.
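For context, a typical fixed-function DOT3 setup looks roughly like the sketch below: it routes the normal-map texel and a light vector stored in the vertex color into the dot-product combine mode. This is not taken from the blog post; the header path and function name are only illustrative, and the tokens are the standard OpenGL ES 1.1 texture-environment ones.
    #include <OpenGLES/ES1/gl.h>   // iPhone OpenGL ES 1.1 header; adjust for other platforms

    // Sketch of a typical GL_DOT3_RGB texture environment: argument 0 is the
    // normal-map texel bound to this texture unit, argument 1 is a tangent-space
    // light vector packed into the per-vertex (primary) color.
    static void setupDot3TexEnv()
    {
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
        glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC0_RGB, GL_TEXTURE);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_SRC1_RGB, GL_PRIMARY_COLOR);
        glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
    }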
When doing per-pixel lighting it's popular to also use a so-called normal map, which means that the normal of every point on the object is stored in a special texture map. This was popularized in the game DOOM 3 by id Software, where fairly low-polygon models were used together with high-resolution normal maps. The reason for using this technique is that the eye is more sensitive to variation in lighting than to variation in shape.
I saw in your other question that the reason this came up was that you want to reduce the memory footprint of the vertex data. That is true: instead of storing three components for a normal in every vertex, you only need two components for the texture coordinates into the normal map. Enabling per-pixel lighting comes with a performance cost, though, so I'm not sure it will be a net win; as usual, the advice is to try it and see.
Finally, the diffuse lighting intensity at a point is proportional to the cosine of the angle between the surface normal and the direction of the light. For two vectors, the dot product is defined as:
a dot b = |a||b| cos(theta)
where |a| and |b| are the lengths of the vectors a and b respectively and theta is the angle between them. If the lengths are equal to one, a and b are referred to as unit vectors and the formula simplifies to:
a dot b = cos(theta)
This means that the diffuse lighting intensity is given by the dot product between the surface normal and the direction of the light. It also means that all diffuse lighting is a form of DOT3 lighting, even if the name has come to refer to the per-pixel kind.
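To make that concrete, here is a tiny stand-alone sketch of the diffuse term (names made up; toLight points from the surface towards the light, and the result is clamped so surfaces facing away receive no diffuse light):
    #include <cmath>
    #include <cstdio>

    // Sketch of the diffuse term described above: with unit vectors, the
    // intensity is just the (clamped) dot product of the surface normal and
    // the direction towards the light.
    struct Vec3 { double x, y, z; };

    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    Vec3 normalize(Vec3 v) {
        double len = std::sqrt(dot(v, v));
        return { v.x / len, v.y / len, v.z / len };
    }

    int main() {
        Vec3 normal  = normalize({0.0, 1.0, 0.0});   // surface points up
        Vec3 toLight = normalize({0.5, 1.0, 0.25});  // direction from the surface towards the light
        double diffuse = dot(normal, toLight);
        if (diffuse < 0.0) diffuse = 0.0;            // facing away from the light -> no diffuse
        std::printf("diffuse intensity = %f\n", diffuse);
        return 0;
    }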
From here:
Bumpmapping is putting a texture on a model where each texel's brightness defines the height of that texel.
The height of each texel is then used to perturb the lighting across the surface.
Normal mapping is putting a texture on a model where each texel's color is really three values that define the direction that location on the surface points.
A color of (255, 0, 0), for example, might mean that the surface at that location points down the positive X axis.
In other words, each texel is a normal.
The Dot3 name comes from what you actually do with these normals.
Let's say you have a vector which points in the direction your light source points. And let's say you have the vector which is the normal at a specific texel on your model that tells you which direction that texel points.
If you do a simple calculation called a "dot product" on these two vectors, like so:
Dot = N1x*N2x + N1y*N2y + N1z*N2z
Then the resulting value is a number which tells you how much those two vectors point in the same direction.
If the value is -1, then they point in opposite directions, which actually means that the texel is pointing at the light source, and the light source is pointing at the texel, so the texel should be lit.
If the value is 1, then they point in the same direction, which means the texel is pointing away from the light source.
And if the value is 0, then the two vectors are perpendicular to each other. I.e., if you are standing on the ground looking forward, your view vector is at 90 degrees to the normal of the ground, which points up.
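As a small sketch of that calculation with a decoded normal-map texel (the 0..255 to -1..1 decoding is the common convention, but the exact channel layout depends on how the map was baked; all values below are made up):
    #include <cstdio>

    // Sketch: decode a normal-map texel (each channel 0..255 maps to -1..1) and
    // dot it with the direction the light points, as described above. A value
    // near -1 means the texel faces the light; near 1 means it faces away.
    struct Vec3 { double x, y, z; };

    Vec3 decodeNormal(unsigned char r, unsigned char g, unsigned char b) {
        return { r / 255.0 * 2.0 - 1.0,
                 g / 255.0 * 2.0 - 1.0,
                 b / 255.0 * 2.0 - 1.0 };
    }

    double dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    int main() {
        Vec3 texelNormal = decodeNormal(255, 128, 128);   // points along +X under this decoding
        Vec3 lightDir    = { -1.0, 0.0, 0.0 };            // light shining along -X, i.e. towards the texel
        std::printf("Dot = %f\n", dot3(texelNormal, lightDir));  // close to -1 -> lit
        return 0;
    }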