How to apply a transformation using a 3x3 rotation matrix and a translation vector? - unity3d

I am working on an Augmented Reality project using ARCore. ARCore's coordinate system changes every time you launch the application, taking the initial position as the origin. I have 5 points in another coordinate system, and I can find 4 of these positions in Unity world space using ARCore Augmented Images. These points have different values in my other coordinate system, of course. I have to find the position of the 5th point in Unity world space using its position in the other coordinate system.
I have followed this tutorial to achieve this, but since Unity does not support 3x3 matrices, I used the Accord.NET framework. Using the tutorial and Accord matrices, I can calculate a 3x3 rotation matrix and a translation vector.
However, when I try to apply these to my 5th point using TestObject.transform.Translate(AccordtoUnity(Translation), Space.World), I run into trouble. When the initial 4 objects and the reference objects have the same orientation, the translation works perfectly, but when my reference objects are rotated it does not. This makes sense, of course, since I have only applied a translation. My question is: how can I apply both the rotation and the translation to my 5th point? Or is there a way to convert my 3x3 rotation matrix and translation to a Unity Matrix4x4, since then I could use Matrix4x4.MultiplyPoint3x4? Or is it possible to convert my 3x3 rotation matrix to a Quaternion, which would let me use Matrix4x4.SetTRS? I am a bit confused about this conversion because a Matrix4x4 includes scaling as well, but I am not doing any scaling.
I would be happy if someone could give me a hint or offer a better approach to finding the 5th point. Thanks!
EDIT:
I actually solved the problem based on Daveloper's answer. I constructed a Unity 4x4 matrix like this:
[ R00 R01 R02 T.x ]
[ R10 R11 R12 T.y ]
[ R20 R21 R22 T.z ]
[ 0 0 0 1 ]
I tested this by creating primitive objects in Unity and applying the translation and rotation using the matrix above, like this:
TestObject.transform.position = TransformationMatrix.MultiplyPoint3x4(TestObject.transform.position);
TestObject.transform.rotation *= Quaternion.LookRotation(TransformationMatrix.GetColumn(2), TransformationMatrix.GetColumn(1));
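For reference, here is a minimal sketch of how such a matrix can be built in code; the helper name BuildTransformationMatrix and the double[,] type used for the rotation are just placeholders, not part of my original solution:
// Hypothetical helper: builds the matrix shown above from a 3x3 rotation R
// (element at row i, column j accessed as R[i, j]) and a translation T.
Matrix4x4 BuildTransformationMatrix(double[,] R, Vector3 T)
{
    Matrix4x4 m = Matrix4x4.identity;
    m.SetRow(0, new Vector4((float)R[0, 0], (float)R[0, 1], (float)R[0, 2], T.x));
    m.SetRow(1, new Vector4((float)R[1, 0], (float)R[1, 1], (float)R[1, 2], T.y));
    m.SetRow(2, new Vector4((float)R[2, 0], (float)R[2, 1], (float)R[2, 2], T.z));
    // Row 3 stays (0, 0, 0, 1) from Matrix4x4.identity.
    return m;
}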

To use a 3x3 rotation matrix and a translation vector to set a transform, use:
// rotationMatrixCV = your 3x3 rotation matrix; translation = your translation vector
var rotationMatrix = new Matrix4x4();
for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; j++)
    {
        rotationMatrix[i, j] = rotationMatrixCV[i, j];
    }
}
rotationMatrix[3, 3] = 1f;

var localToWorldMatrix = Matrix4x4.Translate(translation) * rotationMatrix;

Vector3 scale;
scale.x = new Vector4(localToWorldMatrix.m00, localToWorldMatrix.m10, localToWorldMatrix.m20, localToWorldMatrix.m30).magnitude;
scale.y = new Vector4(localToWorldMatrix.m01, localToWorldMatrix.m11, localToWorldMatrix.m21, localToWorldMatrix.m31).magnitude;
scale.z = new Vector4(localToWorldMatrix.m02, localToWorldMatrix.m12, localToWorldMatrix.m22, localToWorldMatrix.m32).magnitude;
transform.localScale = scale;

Vector3 position;
position.x = localToWorldMatrix.m03;
position.y = localToWorldMatrix.m13;
position.z = localToWorldMatrix.m23;
transform.position = position;

Vector3 forward;
forward.x = localToWorldMatrix.m02;
forward.y = localToWorldMatrix.m12;
forward.z = localToWorldMatrix.m22;

Vector3 upwards;
upwards.x = localToWorldMatrix.m01;
upwards.y = localToWorldMatrix.m11;
upwards.z = localToWorldMatrix.m21;

transform.rotation = Quaternion.LookRotation(forward, upwards);
NOTICE:
This is only useful if this rotation and translation define your 5th point's location in the world in the coordinate system that is actively being used...
If your rotation and translation mean anything else, you'll have to do more. Glad to help further if you can define what this rotation and translation mean exactly.

If I understand your question correctly, you want to be able to create a rotation quaternion from a 3x3 matrix.
You can think of a 3x3 rotation matrix as three vectors of length 1, all at 90 degrees to each other, e.g.:
| forward.x forward.y forward.z |
| up.x up.y up.z |
| right.x right.y right.z |
A pretty reliable way to do the conversion is to take the forward and up vectors out of your matrix and apply them to the Unity method Quaternion.LookRotation: https://docs.unity3d.com/ScriptReference/Quaternion.LookRotation.html
This will create a quaternion that corresponds to your matrix. Depending on your actual situation, you might need to use the inverse of the quaternion, but essentially this is what you need.
Note that you only need the forward and up vectors, because the right vector is always the cross product of those two and adds no information. Also take care that your matrix is a pure rotation matrix (i.e. no scaling or skewing), otherwise you might get unexpected results.
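For example, a minimal sketch of that conversion, assuming the rotation has already been copied into a Unity Matrix4x4. Unity stores the basis vectors in the matrix columns, so forward is column 2 and up is column 1; if your source matrix stores them as rows, as laid out above, transpose it or use GetRow instead:
// Minimal sketch: quaternion from a rotation-only Matrix4x4.
Quaternion RotationFromMatrix(Matrix4x4 m)
{
    Vector3 forward = m.GetColumn(2); // third basis vector
    Vector3 upwards = m.GetColumn(1); // second basis vector
    return Quaternion.LookRotation(forward, upwards);
}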

Related

How do I get the mouse world position (X-Y plane only) in Unity

How do I get the mouse world position, X-Y plane only, in Unity? ScreenToWorldPoint isn't working. I think I need to cast a ray from the mouse, but I'm not sure.
This is what I am using. It doesn't seem to give the correct coordinates or the right plane. I need this for targeting and raycasting.
private void Get3dMousePoint()
{
    var screenPosition = Input.mousePosition;
    screenPosition.z = 1;
    worldPosition = mainCamera.ScreenToWorldPoint(screenPosition);
    worldPosition.z = 0;
}
Just need XY coords.
I tried it with ScreenToWorldPoint() and it works.
The key, I think, is in understanding the z coordinate of the position.
Geometrically, in 3D space we need 3 coordinates to define a point. With only 2 coordinates we have a straight line with a variable z parameter. To obtain a point from that line, we must choose at what distance (i.e. which z) we want the point to be.
Obviously, since the camera is perspective, the coordinates you get at z = 1 are different from those at z = 100, unlike in a 2D plane.
If you can figure out how far away the point should be, that is, set the z correctly, you can find the point you want.
Just remember that z must be greater than the camera's near clipping distance; I use that very value in the script.
Also remember that the resulting vector will have a z equal to the camera's z position plus the z value of the vector passed to ScreenToWorldPoint.
void Get3dMousePoint()
{
    Vector3 worldPosition = Camera.main.ScreenToWorldPoint(new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.nearClipPlane));
    print(worldPosition);
}
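If the goal is specifically the z = 0 plane from the question, one option is to pass the camera's distance to that plane as the z value. This is a sketch under the assumption that the camera looks along the world z axis (a typical 2D setup); the method name is a placeholder:
// Sketch: mouse position projected onto the world plane z = 0.
Vector3 GetMousePointOnZeroPlane(Camera cam)
{
    Vector3 screenPosition = Input.mousePosition;
    screenPosition.z = -cam.transform.position.z; // distance from the camera to the z = 0 plane
    return cam.ScreenToWorldPoint(screenPosition);
}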

Make ring of vectors "flat" relative to world space

I am trying to simulate liquid conformity in a container. The container is a Unity cylinder and so is the liquid. I track the current volume and max volume and use them to determine the coordinates of the center of where the surface should be. When the container is tilted, each vertex in the upper ring of the cylinder should maintain its current local x and z values but have a new local y value that is at the same height in global space as the surface center.
In my closest attempt, the surface is flat relative to the world space but the liquid does not touch the walls of the container.
Vector3 v = verts[i];
Vector3 newV = new Vector3(v.x, globalSurfaceCenter.y, v.z);
verts[i] = transform.InverseTransformPoint(newV);
(I understand that inverse-transforming the point after using v.x and v.z changes them, but if I change them after the fact the surface is no longer flat...)
I have tried many different approaches and I always end up at this same point or a stranger one.
Also, I'm not looking for any fundamentally different approach to the problem. It's important that I alter the vertices of a cylinder.
EDIT
Thank you, everyone, for your feedback. It helped me make progress with this problem but I've reached another roadblock. I made my code more presentable and took some screenshots of some results as well as a graph model to help you visualize what's happening and give variable names to refer to.
In the following images, colored cubes are instantiated and given the coordinates of some of the different vectors I am using to get my results.
F(red) A(green) B(blue)
H(green) E(blue)
Graphed Model
NOTE: when I refer to capital A and B, I'm referring to the Vector3's in my code.
The cylinders in the images have the following rotations (left to right):
(0,0,45) (45,45,0) (45,0,20)
As you can see from image 1, F is correct when only one dimension of rotation is applied. When two or more are applied, the surface is flat, but not oriented correctly.
If I adjust the rotation of the cylinder after generating these results, I can get the orientation of the surface to make sense, but the numbers are not what you might expect.
For example: cylinder 3 (on the right side), adjusted to have a surface flat to the world space, would need a rotation of about (42.2, 0, 27.8).
Not sure if that's helpful but it is something that increases my confusion.
My code: (refer to graph model for variable names)
Vector3 v = verts[iter];
Vector3 D = globalSurfaceCenter;
Vector3 E = transform.TransformPoint(new Vector3(v.x, surfaceHeight, v.z));
Vector3 H = new Vector3(gsc.x, E.y, gsc.z);
float a = Vector3.Distance(H, D);
float b = Vector3.Distance(H, E);
float i = (a / b) * a;
Vector3 A = H - D;
Vector3 B = H - E;
Vector3 F = ((A + B)) + ((A + B) * i);
Instantiate(greenPrefab, transform).transform.position = H;
Instantiate(bluePrefab, transform).transform.position = E;
//Instantiate(redPrefab, transform).transform.position = transform.TransformPoint(F);
//Instantiate(greenPrefab, transform).transform.position = transform.TransformPoint(A);
//Instantiate(bluePrefab, transform).transform.position = transform.TransformPoint(B);
Some of the variables in my code and in the graphed model may not be necessary in the end, but my hope is it gives you more to work with.
Bear in mind that I am less than proficient in geometry and math in general. Please use layman's terms. Thank you!
And thanks again for taking the time to help me.
As a first step, we can calculate the normal of the upper cylinder surface in the cylinder's local coordinate system. Given the world transform of your cylinder, this is simply:
localNormal = inverse(transform) * (0, 1, 0, 0)
Using this normal and the cylinder height h, we can define the plane of the upper cylinder in normal form as
dot(localNormal, (x, y, z) - (0, h / 2, 0)) = 0
I am assuming that your cylinder is centered around the origin.
Using this, we can calculate the y-coordinate for any x/z pair as
y = h / 2 - (localNormal.x * x + localNormal.z * z) / localNormal.y
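For illustration, a minimal sketch of that last formula in Unity code, assuming the cylinder is centered on its local origin with local-space height h, is not scaled non-uniformly, and is not tilted a full 90 degrees (localNormal.y != 0); the method name is a placeholder:
// Sketch: local-space y of the world-flat upper surface for a given local x/z.
float SurfaceY(Transform cylinder, float h, float x, float z)
{
    // World up expressed in the cylinder's local coordinate system.
    Vector3 localNormal = cylinder.InverseTransformDirection(Vector3.up);
    return h / 2f - (localNormal.x * x + localNormal.z * z) / localNormal.y;
}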

Cheapest way to find Vector magnitude from a given point and angle

I am trying to determine a player's depth position on a plane, which defines the walkable ground in a 2D brawler game. The problem is depicted in the following drawing:
C represents the player's current position. I need to find the magnitude of vector V. Since I am not strong on linear algebra, the one thing I can think of is determining the intersection point P of L1 and L2, and then taking the magnitude of AP. However, I get the feeling there must be an easier way to find V, since I already know the angle the vector should have, given by the vector from A to B.
Any input would be appreciated, since I am looking to step up my linear algebra game.
Edit: As it is unclear thanks to my lack of drawing skills: the geometry depicted above is a parallelogram. The vector V I am looking for is parallel to the left and right side of the parallelogram. Depth does not mean, that I am looking for the vector perpendicular to the top side, but it refers to the fake depth of a purely 2D game. The parallelogram is therefore used as a means for creating the feeling of walking along a z axis.
The depth of your player (the length of V) as measured from the top line in your drawing is just the difference between A.y and C.y. This is separate from the slant in the parallelogram, as we're just looking at depth.
example:
float v;
Vector2 a = new Vector2(100, 100); //The point you're measuring from
Vector2 c = new Vector2(150, 150); //Your character position
v = c.y - a.y; // This is the length of V.
//In numbers: 50 = 150 - 100
Illustrated: image not to scale
This works for any coördinate in your plane.
Now if you want to get the length of AC, that is when you'd need to apply some Pythagoras, which is a² + b² = c². In the example that would mean, in code:
Vector2 a = new Vector2(100, 100);
Vector2 c = new Vector2(150, 150);
float ac1 = Mathf.Sqrt(Mathf.Pow(c.x - a.x, 2) + Mathf.Pow(c.y - a.y, 2));
Now that is quite a chore to have to type out every time, and it looks quite scary. But Unity has you covered! There is a Vector2 method called Distance:
float ac2 = Vector2.Distance(a, c);
Both return 70.71068, which is the length of AC.
This works because for any point c in your area you can draw a right-angled triangle from a to c.
Edit as per comment:
If you want your "depth" vector to be parallel with the sides of the paralellogram we can just create a triangle in the parallelogram of which we calculate the hypotenuse.
Since we want the new hypotenuse of our triangle to be parallel to the parallelogram, we can use the same angle θ that point B has in your drawing (indicated in pink in mine), whose value I understood you know.
We also know the length of the adjacent (indicated in blue) side of this new triangle, as that is the height we calculated earlier (c.y - a.y).
Using these two values we can use cosine to find the length of hypotenuse (indicated in red) of the triangle, which is equal to the vector V, in parallel with the parallelogram.
The formula for that is: hypotenuse = adjacent / cos(θ)
Now if we put some numbers into this (for my example I took 55 degrees for the angle θ), it would look like this:
float v = 50f / Mathf.Cos(55f * Mathf.Deg2Rad); // ≈ 87.17
image not to scale
Let's call the lower right vertex of the parallelogram D.
If the long sides of the parallelogram are horizontal, you can find the magnitude of vector V by:
V.magnitude = (c.y - a.y) / sin(BAD)
Or if you prefer:
V.magnitude = AB.magnitude * (c.y - a.y)/(b.y - a.y)
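For illustration, a rough sketch of both forms, assuming a, b and c hold the 2D positions of A, B and C and the angle BAD is known in degrees; the method name is a placeholder:
// Sketch: length of V from the angle at A (angle BAD), in degrees.
float DepthAlongSide(Vector2 a, Vector2 c, float angleBadDegrees)
{
    return (c.y - a.y) / Mathf.Sin(angleBadDegrees * Mathf.Deg2Rad);
}

// Equivalent form using point B instead of the angle.
float DepthAlongSide(Vector2 a, Vector2 b, Vector2 c)
{
    return Vector2.Distance(a, b) * (c.y - a.y) / (b.y - a.y);
}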

Unity function to access the 2D box immediately from the 3D pipeline?

In Unity, say you have a 3D object,
Of course, it's trivial to get the AABB; Unity has direct functions for that,
(You might have to "add up all the bounding boxes of the renderers" in the usual way, no issue.)
So Unity does indeed have a direct function to give you the 3D AABB box instantly, out of the internal mesh/render pipeline every frame.
Now, for the Camera in question, as positioned, that AABB indeed covers a certain 2D bounding box ...
In fact ... is there some sort of built-in, direct way to find that orange 2D box in Unity?
Question - does Unity have a function which immediately gives that 2D frustum box from the pipeline?
(Note that to do it manually you just make rays (or use world to screen space as Draco mentions, same) for the 8 points of the AABB; encapsulate those in 2D to make the orange box.)
I don't need a manual solution, I'm asking if the engine gives this somehow from the pipeline every frame?
Is there a call?
(Indeed, it would be even better to have this ...)
My feeling is that one or all of the
occlusion system in particular
the shaders
the renderer
would surely know the orange box, and perhaps even the blue box inside the pipeline, right off the graphics card, just as it knows the AABB for a given mesh.
We know that Unity lets you tap the 3D AABB box instantly every frame for a given mesh: in fact, does Unity give the "2D frustum bound" as shown here?
As far as I am aware, there is no built-in for this.
However, finding the extremes yourself is really pretty easy. Getting the mesh's bounding box (the cuboid shown in the screenshot) is essentially how this is done; you're just doing it in a transformed space.
Loop through all the vertices of the mesh, doing the following:
Transform the point from local to world space (this handles dealing with scale and rotation)
Transform the point from world space to screen space
Determine if the new point's X and Y are above/below the stored min/max values; if so, update the stored min/max with the new value
After looping over all vertices, you'll have 4 values: min-X, min-Y, max-X, and max-Y. Now you can construct your bounding rectangle
You may also wish to first perform a gift wrapping of the model and only deal with the resulting convex hull (as no points that are not part of the convex hull will ever be outside the bounds of the convex hull). If you intend to draw this screen space rectangle while the model moves, scales, or rotates on screen, and have to recompute the bounding box, then you'll want to do this and cache the result.
Note that this does not work if the model animates (e.g. if your humanoid stands up and does jumping jacks). Solving for the animated case is much more difficult, as you would have to treat every frame of every animation as part of the original mesh for the purposes of the convex hull solving (to insure that none of your animations ever move a part of the mesh outside the convex hull), increasing the complexity by a power.
3D bounding box
Get given GameObject 3D bounding box's center and size
Compute 8 corners
Transform positions to GUI space (screen space)
Function GUI3dRectWithObject will return the 3D bounding box of given GameObject on screen.
2D bounding box
Iterate through every vertex in a given GameObject
Transform every vertex's position to world space, and transform to GUI space (screen space)
Find the 4 corner values: x1, x2, y1, y2
Function GUI2dRectWithObject will return the 2D bounding box of given GameObject on screen.
Code
public static Rect GUI3dRectWithObject(GameObject go)
{
    Vector3 cen = go.GetComponent<Renderer>().bounds.center;
    Vector3 ext = go.GetComponent<Renderer>().bounds.extents;
    Vector2[] extentPoints = new Vector2[8]
    {
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y - ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y - ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y - ext.y, cen.z + ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y - ext.y, cen.z + ext.z)),
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y + ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y + ext.y, cen.z - ext.z)),
        WorldToGUIPoint(new Vector3(cen.x - ext.x, cen.y + ext.y, cen.z + ext.z)),
        WorldToGUIPoint(new Vector3(cen.x + ext.x, cen.y + ext.y, cen.z + ext.z))
    };
    Vector2 min = extentPoints[0];
    Vector2 max = extentPoints[0];
    foreach (Vector2 v in extentPoints)
    {
        min = Vector2.Min(min, v);
        max = Vector2.Max(max, v);
    }
    return new Rect(min.x, min.y, max.x - min.x, max.y - min.y);
}

public static Rect GUI2dRectWithObject(GameObject go)
{
    Vector3[] vertices = go.GetComponent<MeshFilter>().mesh.vertices;
    float x1 = float.MaxValue, y1 = float.MaxValue, x2 = 0.0f, y2 = 0.0f;
    foreach (Vector3 vert in vertices)
    {
        Vector2 tmp = WorldToGUIPoint(go.transform.TransformPoint(vert));
        if (tmp.x < x1) x1 = tmp.x;
        if (tmp.x > x2) x2 = tmp.x;
        if (tmp.y < y1) y1 = tmp.y;
        if (tmp.y > y2) y2 = tmp.y;
    }
    Rect bbox = new Rect(x1, y1, x2 - x1, y2 - y1);
    Debug.Log(bbox);
    return bbox;
}

public static Vector2 WorldToGUIPoint(Vector3 world)
{
    Vector2 screenPoint = Camera.main.WorldToScreenPoint(world);
    screenPoint.y = (float)Screen.height - screenPoint.y;
    return screenPoint;
}
Reference: Is there an easy way to get on-screen render size (bounds)?
refer to this
It needs a game object with a SkinnedMeshRenderer.
Camera camera = GetComponent<Camera>();
SkinnedMeshRenderer skinnedMeshRenderer = target.GetComponent<SkinnedMeshRenderer>();
// Get the real time vertices
Mesh mesh = new Mesh();
skinnedMeshRenderer.BakeMesh(mesh);
Vector3[] vertices = mesh.vertices;
for (int i = 0; i < vertices.Length; i++)
{
    // World space
    vertices[i] = target.transform.TransformPoint(vertices[i]);
    // GUI space
    vertices[i] = camera.WorldToScreenPoint(vertices[i]);
    vertices[i].y = Screen.height - vertices[i].y;
}
Vector3 min = vertices[0];
Vector3 max = vertices[0];
for (int i = 1; i < vertices.Length; i++)
{
    min = Vector3.Min(min, vertices[i]);
    max = Vector3.Max(max, vertices[i]);
}
Destroy(mesh);
// Construct a rect of the min and max positions
Rect r = Rect.MinMaxRect(min.x, min.y, max.x, max.y);
GUI.Box(r, "");

Convert object rotation from Three.js to Unity3D

I'm creating a Three.js Scene exporter from Three.js to Unity3D. My problem is in converting Euler angles from Three.js to Unity.
I know that:
Three.js is right-handed space and Unity3D is left-handed;
In Unity3D a plane is constructed lying flat on the floor, while in Three.js it stands upright, facing positive z.
Can somebody please give me an example on how to do that?
UPDATE
I tried to follow StefanDragnev's advice but I can't make it work. This is my Three.js code to obtain the matrix for Unity:
var originalMatrix = object3D.matrix.clone();
var mirrorMatrix = new THREE.Matrix4().makeScale(1, 1, -1);
var leftHandMatrix = new THREE.Matrix4();
leftHandMatrix.multiplyMatrices(originalMatrix,mirrorMatrix);
var rotationMatrix = new THREE.Matrix4().makeRotationX(Math.PI / 2);
var unityMatrix = new THREE.Matrix4();
unityMatrix.multiplyMatrices(leftHandMatrix,rotationMatrix);
jsonForUnity.object.worldMatrix = unityMatrix.toArray();
I tried mirrorMatrix(-1, 1, 1) too, and makeRotationX(Math.PI / 2), but it didn't work either. Unity doesn't allow setting an object's transformation directly from its world matrix, so I had to extract a quaternion from the matrix. This is my Unity code:
Vector4 row0 = new Vector4 (threeObject.matrix[0],threeObject.matrix[4],threeObject.matrix[8],threeObject.matrix[12]);
Vector4 row1 = new Vector4 (threeObject.matrix[1],threeObject.matrix[5],threeObject.matrix[9],threeObject.matrix[13]);
Vector4 row2 = new Vector4 (threeObject.matrix[2],threeObject.matrix[6],threeObject.matrix[10],threeObject.matrix[14]);
Vector4 row3 = new Vector4 (threeObject.matrix[3],threeObject.matrix[7],threeObject.matrix[11],threeObject.matrix[15]);
Matrix4x4 matrix = new Matrix4x4();
matrix.SetRow (0,row0);
matrix.SetRow (1,row1);
matrix.SetRow (2,row2);
matrix.SetRow (3,row3);
Quaternion qr = Quaternion.LookRotation(matrix.GetColumn(2), matrix.GetColumn(1));
gameObject.transform.localRotation = qr;
Where am I failing?
It's not just the angles that you need to convert. You'll also need to convert the translations. Euler angles are not very comfortable to use when doing general transformations. It's much easier to work with the object's world matrix directly.
Converting from right-handed to left-handed - you need to mirror the object's matrix along an axis, say Z in your case. Multiply the object's matrix by Matrix4().makeScale(1, 1, -1).
Then going from XY to XZ being parallel to the viewport, you need to rotate the object along the X axis by 90 degrees (or -90 degrees, if rotations are clockwise). Multiply the object's matrix by Matrix4().makeRotationX(Math.PI / 2).
Then, you need to import the final matrix into Unity. In case you can't just import the matrix wholesale, you can try to first decompose it into scaling, rotation and translation parts, but if at all possible, avoid that.
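If you do end up decomposing it on the Unity side, here is a rough sketch, assuming the imported Matrix4x4 is affine with no skew and the object has no scaled parent; the method name ApplyMatrix is just a placeholder:
// Sketch: apply an imported world matrix to a Transform by decomposing it.
void ApplyMatrix(Transform t, Matrix4x4 m)
{
    t.position = m.GetColumn(3);                                          // translation: last column
    t.rotation = Quaternion.LookRotation(m.GetColumn(2), m.GetColumn(1)); // forward (z) and up (y) basis vectors
    t.localScale = new Vector3(                                           // scale: length of each basis vector
        ((Vector3)m.GetColumn(0)).magnitude,
        ((Vector3)m.GetColumn(1)).magnitude,
        ((Vector3)m.GetColumn(2)).magnitude);
}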
You can do it like this
let position = new THREE.Vector3();
let rotation = new THREE.Quaternion();
let scale = new THREE.Vector3();
(new THREE.Matrix4().makeScale(1, 1, -1).multiply(this.el.object3D.matrix.clone())).multiply(new THREE.Matrix4().makeRotationX(Math.PI/2)).decompose(position, rotation, scale);