First of all, hello,
I have several questions tied together under this title, because I can't summarize them all into one good question.
To set the scene: I am using Unity 2020.1.2f1 with URP, and I am trying to rebuild Unity's projection matrix as used with Direct3D 11, in order to fully understand how it works.
I know that Unity uses the left-handed system for the object and world spaces, but not for the view space, which still uses OpenGL's old right-handed convention. I would say that the clip space is LH too, since the Z axis points into the screen, but Unity makes me doubt a lot.
Let me explain: the handedness is determined by the matrix, which is why the projection matrix (column-major here) used by Unity for OpenGL-like APIs looks like this:
[ x 0 0 0 ]        x = cot(fovH/2)
[ 0 y 0 0 ]        y = cot(fovV/2)
[ 0 0 c e ]        c = (f+n)/(n-f)
[ 0 0 d 0 ]        e = (2*f*n)/(n-f)
                   d = -1
where 'c' and 'e' remap and flip 'z' from the RH view space into the LH clip space (or NDC, once the perspective division is applied), 'w' holds the negated view depth, and the depth buffer is not reversed.
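As a quick sanity check (a standalone sketch, not code from my project; the helper name is made up), a couple of lines confirm that mapping, since clip.z = c*z + e and clip.w = -z for a view-space depth z:
// Sketch: verify the OpenGL-style depth mapping at the near/far planes
float n = 0.3f, f = 100f;
float c = (f + n) / (n - f);    // m22
float e = 2f * f * n / (n - f); // m23
float NdcZ(float z) { return (c * z + e) / (-z); } // perspective division
Debug.Log(NdcZ(-n)); // prints -1 : near plane
Debug.Log(NdcZ(-f)); // prints  1 : far plane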
With the near plane = 0.3 and the far plane = 100, Unity's frame debugger confirms that the matrix we send to the shader is equal to 'glstate_matrix_projection' (the matrix behind the UNITY_MATRIX_P macro in the shader), as well as to the camera's own 'camera.projectionMatrix', since that is the matrix Unity builds internally, following the OpenGL convention. 'GL.GetGPUProjectionMatrix()', which tweaks the camera's projection matrix to match the graphics API's requirements before it is sent to the GPU, confirms this too: in this case it changes nothing.
// _CamProjMat: the projection matrix rebuilt by hand (OpenGL convention)
float n = viewCam.nearClipPlane;
float f = viewCam.farClipPlane;
float fovV = Mathf.Deg2Rad * viewCam.fieldOfView;
float fovH = 2f * Mathf.Atan(Mathf.Tan(fovV / 2f) * viewCam.aspect); // horizontal FOV from the vertical one
Matrix4x4 projMat = new Matrix4x4();
projMat.m00 = 1f / Mathf.Tan(fovH / 2f);
projMat.m11 = 1f / Mathf.Tan(fovV / 2f);
projMat.m22 = (f + n) / (n - f);
projMat.m23 = 2f * f * n / (n - f);
projMat.m32 = -1f;
Shader.SetGlobalMatrix("_CamProjMat", projMat);
// _GPUProjMat: the camera matrix adjusted for the graphics API (false = not rendering into a RenderTexture)
Matrix4x4 GPUMat = GL.GetGPUProjectionMatrix(viewCam.projectionMatrix, false);
Shader.SetGlobalMatrix("_GPUProjMat", GPUMat);
// _UnityProjMat: the matrix Unity builds internally
Shader.SetGlobalMatrix("_UnityProjMat", viewCam.projectionMatrix);
gives us:
[screenshot: frame_debugger_OpenGL]
HOWEVER, when I switch to Direct3D 11, 'glstate_matrix_projection' is flipped vertically: the m11 component of the matrix is negative, which flips the Y axis when applied to a vertex. The projection matrix Unity uses for Direct3D also applies the reversed Z buffer technique, giving a matrix like this:
[ x 0 0 0 ]        x = cot(fovH/2)
[ 0 y 0 0 ]        y = -cot(fovV/2)
[ 0 0 c e ]        c = n/(f-n)
[ 0 0 d 0 ]        e = (f*n)/(f-n)
                   d = -1
(You'll notice that 'c' and 'e' are respectively the same as f/(n-f) and (f*n)/(n-f) given in the Direct3D documentation of the D3DXMatrixPerspectiveFovRH() function, but with 'f' and 'n' swapped to apply the reversed Z buffer.)
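(The same sanity-check sketch as above, with Direct3D's 'c' and 'e', shows the reversed mapping: the near plane now lands on 1 and the far plane on 0.)
// Sketch: verify the reversed-Z depth mapping at the near/far planes
float n = 0.3f, f = 100f;
float c = n / (f - n);     // m22
float e = f * n / (f - n); // m23
float NdcZ(float z) { return (c * z + e) / (-z); }
Debug.Log(NdcZ(-n)); // prints 1 : near plane (reversed)
Debug.Log(NdcZ(-f)); // prints 0 : far plane (reversed)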
From there, there are several concerns:
If we feed the shader a projection matrix obtained from 'GL.GetGPUProjectionMatrix()' with false as the second parameter, instead of 'glstate_matrix_projection', the matrix won't be correct: the rendered screen will be flipped vertically. Which is not wrong, given the parameter.
[screenshot: frame_debugger_Direct3D]
Indeed, this boolean parameter modifies the matrix depending on whether the image is rendered into a RenderTexture or not, and it is justified since OpenGL and Direct3D render texture coordinates differ like this:
[diagram: D3D_vs_OGL_rt_coord]
In a way this makes sense, because Direct3D's screen space is in pixel coordinates, whose handedness matches the render texture coordinates accessed in the pixel shader through the 'SV_Position' semantic. The clip space is then simply flipped vertically, into a right-handed system with positive Y going down the screen and positive Z going into the screen.
Nonetheless, I render my vertices directly to the screen, not into any render texture... is this parameter of 'GL.GetGPUProjectionMatrix()' a trick that should be set to true when working with Direct3D-like APIs?
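To be concrete, the experiment I mean is simply this (with 'viewCam' my camera as before):
Matrix4x4 gpuMat = GL.GetGPUProjectionMatrix(viewCam.projectionMatrix, true);
Shader.SetGlobalMatrix("_GPUProjMat", gpuMat);
If I understand the parameter correctly, on Direct3D this should hand back the Y-flipped, reversed-Z matrix shown above, even though nothing is rendered into a RenderTexture.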
Another concern: since clip space, NDC, and screen space are left-handed in OpenGL-like APIs, we can guess these spaces are right-handed in Direct3D-like APIs... right? Where am I wrong? Yet in every topic, documentation page, or dev blog I have ever read, nobody states the handedness of those spaces; it doesn't seem to bother anyone. Even the projection matrices provided by the official Direct3D documentation don't flip the Y axis. Why, then? I admit I have only rendered graphics with D3D or OGL inside Unity, so perhaps Unity does black magic under the hood again, as usual, heh.
I hope I explained all this mess clearly enough; thanks to everyone who reaches this point ;)
I really need to find out what's going on here, because Unity's documentation is becoming more and more outdated, with poor explanations of specific engine parts.
Any help is really appreciated !!
I want to find the equivalent of the rotate(image, degree) script command to rotate around the x or y axis (I only need 90° rotations). I know I can do it using the tool menu, but it would be much faster with a command or function I can call from a script.
Thank you in advance!
Using the Slice command can be confusing at first, so here is a detailed explanation of using the command for a rotation around the X-axis.
This example shows how one would rotate 3D data clockwise around its X axis (viewing along X) using the Slice3 command.
The Slice3 command specifies a new view onto an existing data array.
It first specifies the origin pixel, i.e. the coordinates (in the original data) which will be represented by (0,0,0) in the new view.
It then specifies the sampling direction, length and stepsize for each of its three new axes. The first triplet specifies how the coordinates (in the original data) change along the new image's x-direction, the second triplet for the new image's y-direction, and the last triplet for the new image's z-direction.
So a rotation around the x-axis can be imagined as:
For the "resampled" data:
The new (rotated) data has its origin at the original data's point (0,0,SZ-1).
Its X-direction remains the same, i.e. one step in X in the new data increments the coordinate triplet in the original data by (1,0,0). And one goes SX steps with a step-size of 1.
Its Y-direction is essentially the negative Z-direction from before, i.e. one step in Y in the new data increments the coordinate triplet in the original data by (0,0,-1). So one goes SZ steps with a step-size of -1.
Its Z-direction is essentially the Y-direction from before, i.e. one step in Z in the new data increments the coordinate triplet in the original data by (0,1,0). So one goes SY steps with a step-size of 1.
So, for a clockwise rotation around the X-axis, the command is:
img.Slice3( 0,0,SZ-1, 0,SX,1, 2,SZ,-1, 1,SY,1 )
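(For example, with SX=100, SY=30, SZ=50, the voxel at (10,5,49) in the original data shows up at (10,0,5) in the new view.)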
This command will just create a new view onto the same data (i.e. no additional memory is used). So to get the rotated image as a new image (with data values aligned as they should be in memory), one would clone this view into a new image using ImageClone().
Putting it all together, the following script shows this as an example:
// Demo of rotating 3D data orthogonally around the X axis
// This is done by resampling the data using the Slice3 command
// Creation of a test image with a recognizable pattern
number SX = 100
number SY = 30
number SZ = 50
image img := RealImage("Test",4, SX,SY,SZ)
// trig. modulated linear increase in X
img = icol/iwidth * sin( icol/(iwidth-1) * 5 * Pi() )**2
// Simple linear increase in Y
img += (irow/iheight) * 2
// Modulation of values in Z
// (doubling values for plane indices 0, 1, 4, 9, 16, 25, 36, 49)
img *= (SQRT(iplane) == trunc(SQRT(iplane)) ? 2 : 1 )
img.ShowImage()
// Show captions. Image coordinate system is
// Origin (0,0,0) in top-left-front most pixel
// X axis goes left to right
// Y axis goes top to down
// Z axis goes front to back
img.ImageSetDimensionCalibration(0,0,1,"orig X",0)
img.ImageSetDimensionCalibration(1,0,1,"orig Y",0)
img.ImageSetDimensionCalibration(2,0,1,"orig Z",0)
img.ImageGetImageDisplay(0).ImageDisplaySetCaptionOn(1)
// Rotation around X axis, clockwise looking along X
// X --> X' (unchanged)
// Y --> Z'
// Z --> -Y'
// old origin moves to bottom-left-front most
// This means for "new" sampling:
// Specify sampling starting point:
// New origin (0,0,0)' will be value which was at (0,0,SZ-1)
// Going one step in X' in the new data, will be like going one step in X
// Going one step in Y' in the new data, will be like going one step backwards in Z
// Going one step in Z' in the new data, will be like going one step in Y
image rotXCW := img.Slice3( 0,0,SZ-1, 0,SX,1, 2,SZ,-1, 1,SY,1 ).ImageClone()
rotXCW.SetName("rotated X, CW")
rotXCW.ShowImage()
rotXCW.ImageGetImageDisplay(0).ImageDisplaySetCaptionOn(1)
The following methods perform 90-degree rotations:
// Functions for 90-degree rotations of data
image RotateXCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,0,SZ-1, 0,SX,1, 2,SZ,-1, 1,SY,1 ).ImageClone()
}
image RotateXCCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,SY-1,0, 0,SX,1, 2,SZ,1, 1,SY,-1 ).ImageClone()
}
image RotateYCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( SX-1,0,0, 2,SZ,1, 1,SY,1, 0,SX,-1 ).ImageClone()
}
image RotateYCCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,0,SZ-1, 2,SZ,-1, 1,SY,1, 0,SX,1 ).ImageClone()
}
image RotateZCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( 0,SY-1,0, 1,SY,-1, 0,SX,1, 2,SZ,1 ).ImageClone()
}
image RotateZCCW( image input )
{
number SX,SY,SZ
input.Get3DSize(SX,SY,SZ)
return input.Slice3( SX-1,0,0, 1,SY,1, 0,SX,-1, 2,SZ,1 ).ImageClone()
}
Rotations around the z-axis could also be done with RotateRight() and RotateLeft(). Note, however, that these commands will not adapt the images' dimension calibrations, while the Slice3 command will.
For pure orthogonal rotation, the easiest (and fastest) way is to use the 'slice' commands, i.e. 'slice3' for 3D images.
It turns out that the latest version of GMS has an example of it in the help documentation, so I'm just copy-pasting the code here:
number sx = 10
number sy = 10
number sz = 10
number csx, csy, csz
image img3D := RealImage( "3D", 4, sx, sy, sz )
img3D = 1000 + sin( 2*PI() * iplane/(idepth-1) ) * 100 + icol * 10 + irow
img3D.ShowImage()
// Rotate existing image
if ( OKCancelDialog( "Rotate clockwise (each plane)\n= Rotate block around z-axis" ) )
img3D.RotateRight()
if ( OKCancelDialog( "Rotate counter-clockwise (each plane)\n= Rotate block around z-axis" ) )
img3D.RotateLeft()
if ( OKCancelDialog( "Rotate block counter-clockwise around X-axis" ) )
{
// Equivalent of sampling the data anew
// x-axis remains
// y- and z-axis change their role
img3D.Get3DSize( csx, csy, csz ) // current size along axes
img3D = img3D.Slice3( 0,0,0, 0,csx,1, 2,csz,1, 1,csy,1 )
}
if ( OKCancelDialog( "Rotate block clockwise around X-axis" ) )
{
// Equivalent of sampling the data anew
// x-axis remains
// y- and z-axis change their role
img3D.Get3DSize( csx, csy, csz ) // current size along axes
img3D = img3D.Slice3( 0,csy-1,csz-1, 0,csx,1, 2,csz,-1, 1,csy,-1 )
}
if ( OKCancelDialog( "Rotate 30 degree (each plane)\n= Rotate block around z-axis" ) )
{
number aDeg = 30
number interpolMeth = 2
number keepSize = 1
image rotImg := img3D.Rotate( 2*Pi()/360 * aDeg, interpolMeth, keepSize )
rotImg.ShowImage()
}
You may also want to look at this answer for some more info on subsampling and creating different views on data.
I have a cube in Unity3D and I know the positions of its 8 vertices. It is rotated and scaled around all axes. How can I instantiate an object at run time at a random position inside that cube?
If you know the 8 vertices of your cube, it's easy to place a random object inside it. Consider the random object to have x, y and z values in the position of its Transform. Both UnityScript and C# provide a nice Random class which can easily give you a random number between two values. Use this class three times:
Create a random number between the max x value and the min x value of all 8 vertices.
Create a random number between the max y value and the min y value of all 8 vertices.
Create a random number between the max z value and the min z value of all 8 vertices.
Next, create the GameObject which has to be instantiated in this cube, and use the x, y and z values you've calculated in the three steps above. That will create your object at a random position in the cube, as the sketch below shows.
Note that if your random object has a certain size, it would technically be possible to generate the object right on the edge of the cube, letting it 'stick out' of the cube. To avoid that, make sure to subtract half the size of the object from the max values and add half the size of the object to the min values you enter in the randomize function.
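Here is a minimal sketch of that min/max approach ('vertices' and 'prefab' are placeholder names; 'vertices' is assumed to hold the cube's 8 world-space corners):
using UnityEngine;

public class RandomInCubeBounds : MonoBehaviour {
    public Vector3[] vertices = new Vector3[8]; // the 8 known corners
    public GameObject prefab;                   // the object to spawn

    void Start() {
        // Axis-aligned min/max bounds of all 8 vertices
        Bounds b = new Bounds(vertices[0], Vector3.zero);
        for (int i = 1; i < 8; i++) b.Encapsulate(vertices[i]);
        // One random number per axis, between that axis' min and max
        Vector3 pos = new Vector3(
            Random.Range(b.min.x, b.max.x),
            Random.Range(b.min.y, b.max.y),
            Random.Range(b.min.z, b.max.z));
        Instantiate(prefab, pos, Quaternion.identity);
    }
}
Keep in mind that for a rotated cube these axis-aligned bounds are larger than the cube itself, so a point can still land outside the cube; the edit below deals with that case.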
EDIT: To get your points when the object is rotated, you can use cube.transform.localScale / 2. This will get you the local position of one of the cube's corners. Vector3.Scale(cube.transform.localScale / 2, new Vector3(1,1,-1)) will get you one of the others (different combinations of 1 and -1 there will get you all eight). Then, to find them in world space, use cube.transform.TransformPoint().
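A sketch of that idea, assuming the cube is Unity's default cube primitive (mesh corners at ±0.5 in local space) and 'cube'/'prefab' are placeholder references. Note that TransformPoint() applies the localScale itself, so sampling the unscaled ±0.5 mesh space avoids applying the scale twice:
// Random point in the cube's local (mesh) space...
Vector3 local = new Vector3(
    Random.Range(-0.5f, 0.5f),
    Random.Range(-0.5f, 0.5f),
    Random.Range(-0.5f, 0.5f));
// ...mapped to world space: position, rotation and scale all applied
Vector3 world = cube.transform.TransformPoint(local);
Instantiate(prefab, world, Quaternion.identity);
This way the point always lies inside the cube, however it is rotated or scaled.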
If I understand what you're trying to do correctly, I'd probably suggest something like the following.
using UnityEngine;

public class Instantiation : MonoBehaviour {
    void Start() {
        // Spawn a 5x5 grid of cubes with physics
        for (int y = 0; y < 5; y++) {
            for (int x = 0; x < 5; x++) {
                GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
                cube.AddComponent<Rigidbody>();
                cube.transform.position = new Vector3(x, y, 0);
            }
        }
    }
}
It will create the GameObject cube (or whatever you desire) at the new transform.position. However, instead of its position being a specific Vector3, you can have it be a randomly generated Vector3 returned by a new method. That method would randomise the numbers for x, y and z within specific boundaries; you then just feed the result into the new position (see the sketch at the end).
I hope that makes sense, I'm not a fantastic teacher.
Edit: http://docs.unity3d.com/Documentation/Manual/InstantiatingPrefabs.html is a good reference for instantiating prefabs. Your run-time instantiated object should be made from a prefab.
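A minimal sketch of such a randomize method (all names here are placeholders):
// Returns a random position within the given per-axis boundaries
Vector3 RandomPosition(Vector3 min, Vector3 max) {
    return new Vector3(
        Random.Range(min.x, max.x),
        Random.Range(min.y, max.y),
        Random.Range(min.z, max.z));
}
// e.g.: Instantiate(myPrefab, RandomPosition(boundsMin, boundsMax), Quaternion.identity);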