I'm writing an exporter that converts a Three.js scene to Unity3D. My problem is converting Euler angles from Three.js to Unity.
I know that:
Three.js uses a right-handed coordinate space while Unity3D uses a left-handed one;
In Unity3D a plane is constructed lying flat on the floor, while in Three.js it stands upright, facing positive Z.
Can somebody please give me an example on how to do that?
UPDATE
I tried to follow @StefanDragnev's advice but I can't make it work. This is my Three.js code to obtain the matrix for Unity:
var originalMatrix = object3D.matrix.clone();

// Mirror along Z to flip handedness.
var mirrorMatrix = new THREE.Matrix4().makeScale(1, 1, -1);
var leftHandMatrix = new THREE.Matrix4();
leftHandMatrix.multiplyMatrices(originalMatrix, mirrorMatrix);

// Rotate 90 degrees around X so the XY plane becomes the XZ plane.
var rotationMatrix = new THREE.Matrix4().makeRotationX(Math.PI / 2);
var unityMatrix = new THREE.Matrix4();
unityMatrix.multiplyMatrices(leftHandMatrix, rotationMatrix);

jsonForUnity.object.worldMatrix = unityMatrix.toArray();
I tried mirrorMatrix(-1, 1, 1) too, and makeRotationX(-Math.PI / 2), but neither worked. Unity doesn't allow setting an object's transformation directly from its world matrix, so I had to extract a quaternion from the matrix. This is my Unity code:
// THREE.Matrix4.toArray() returns column-major data, so elements 0, 4, 8, 12
// form the first row of the matrix.
Vector4 row0 = new Vector4(threeObject.matrix[0], threeObject.matrix[4], threeObject.matrix[8], threeObject.matrix[12]);
Vector4 row1 = new Vector4(threeObject.matrix[1], threeObject.matrix[5], threeObject.matrix[9], threeObject.matrix[13]);
Vector4 row2 = new Vector4(threeObject.matrix[2], threeObject.matrix[6], threeObject.matrix[10], threeObject.matrix[14]);
Vector4 row3 = new Vector4(threeObject.matrix[3], threeObject.matrix[7], threeObject.matrix[11], threeObject.matrix[15]);
Matrix4x4 matrix = new Matrix4x4();
matrix.SetRow(0, row0);
matrix.SetRow(1, row1);
matrix.SetRow(2, row2);
matrix.SetRow(3, row3);

// Build the rotation from the matrix's forward (Z) and up (Y) columns.
Quaternion qr = Quaternion.LookRotation(matrix.GetColumn(2), matrix.GetColumn(1));
gameObject.transform.localRotation = qr;
Where am I failing?
It's not just the angles that you need to convert; you'll also need to convert the translations. Euler angles are not very comfortable to work with when doing general transformations. It's much easier to work with the object's world matrix directly.
Converting from right-handed to left-handed: you need to mirror the object's matrix along an axis, say Z in your case. Multiply the object's matrix by Matrix4().makeScale(1, 1, -1).
Then, going from the XY plane to the XZ plane being parallel to the viewport, you need to rotate the object around the X axis by 90 degrees (or -90 degrees if rotations are clockwise). Multiply the object's matrix by Matrix4().makeRotationX(Math.PI / 2).
Then you need to import the final matrix into Unity. In case you can't just import the matrix wholesale, you can try to first decompose it into scaling, rotation, and translation parts, but if at all possible, avoid that.
You can do it like this:
// Decompose the converted matrix into position, rotation, and scale.
let position = new THREE.Vector3();
let rotation = new THREE.Quaternion();
let scale = new THREE.Vector3();

let converted = new THREE.Matrix4()
    .makeScale(1, 1, -1)                                        // flip handedness
    .multiply(this.el.object3D.matrix.clone())
    .multiply(new THREE.Matrix4().makeRotationX(Math.PI / 2));  // XY plane to XZ plane

converted.decompose(position, rotation, scale);
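On the Unity side, a minimal sketch of applying the decomposed values, assuming the exporter writes them out as hypothetical position, rotation (x, y, z, w), and scale arrays on threeObject:
// Hypothetical import: these arrays are assumed to hold the decomposed
// values exported from Three.js above.
float[] p = threeObject.position, q = threeObject.rotation, s = threeObject.scale;
gameObject.transform.localPosition = new Vector3(p[0], p[1], p[2]);
gameObject.transform.localRotation = new Quaternion(q[0], q[1], q[2], q[3]);
gameObject.transform.localScale = new Vector3(s[0], s[1], s[2]);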
Related
I have written a script in Unity which takes a SkinnedMeshRenderer and an AnimationClip and rotates the vertices in each by a specified number of degrees. It looks mostly correct, except that the rotations seem to be wrong. Here is an example bone rotation (in Euler angles) in the skeleton, along with the correct values that would be needed for the animation to look correct.
With no rotation: (0, 0, -10)
Rotated 90 degrees: (-10, 0, 0)
Rotated 180 degrees: (0, 0, 10)
I have been trying to find a way to rotate these bones so that this conversion makes sense with the data I have here, but have come up short. I know I want to rotate these values around the Y axis, but I don't actually want the Y value in the Euler angles to change. I am aware I could just reorient the root bone around the Y axis and the problem would be solved, but I want no rotation in the Y axis. I am "fixing" some older animations that have unnecessary rotation values in them.
// Convert the keyframe quaternion to Euler angles.
var localBoneRotation = new Quaternion(keysX[j].value, keysY[j].value, keysZ[j].value, keysW[j].value).eulerAngles;

// Rotate the forward vector around Y, then scale each Euler component by it.
var reorientedForward = Quaternion.AngleAxis(rotation, Vector3.up) * Vector3.forward;
localBoneRotation.x *= reorientedForward.x;
localBoneRotation.y *= reorientedForward.y;
localBoneRotation.z *= reorientedForward.z;

// Write the result back into the keyframes.
var finalRotation = Quaternion.Euler(localBoneRotation);
keysX[j].value = finalRotation.x;
keysY[j].value = finalRotation.y;
keysZ[j].value = finalRotation.z;
keysW[j].value = finalRotation.w;
I have also tried using a matrix and a Vector3, but most of the time I end up with values in Y. Perhaps I am going about this incorrectly. I just need to be able to specify an angle of rotation and then have the input data match the final Euler angles for each of these data points.
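In case it helps: one way to rotate a rotation's axis around world Y without adding any Y spin is quaternion conjugation. This is a sketch of that idea, not my working code, assuming angle is the desired Y rotation in degrees:
// Conjugating by a Y rotation turns the bone rotation's axis around Y
// while keeping the rotation angle itself, so no Y spin is introduced.
var q = new Quaternion(keysX[j].value, keysY[j].value, keysZ[j].value, keysW[j].value);
var r = Quaternion.AngleAxis(angle, Vector3.up);
var rotated = r * q * Quaternion.Inverse(r);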
I am trying to have a GameObject in Unity react with sound if another object is inside it. I want the GameObject to use the entering object's location to see which voxel is closest, and then play audio based on that voxel's intensity/colour. Does anyone have any ideas? I am working with a dataset that is 512x256x512 voxels. I want it to work if the object is resized as well. Any help is much appreciated :).
The dataset I'm working with is a 3D .mhd medical scan of a body. Here is how the texture is added to the renderer on start:
for (int k = 0; k < NumberOfFrames; k++) {
    // Load one frame of raw voxel data into a 3D texture.
    string fname_ = "T" + k.ToString("D2");
    Color[] colors = LoadData(Path.Combine(imageDir, fname_ + ".raw"));
    _volumeBuffer.Add(new Texture3D(dim[0], dim[1], dim[2], TextureFormat.RGBAHalf, mipmap));
    _volumeBuffer[k].SetPixels(colors);
    _volumeBuffer[k].Apply();
}
GetComponent<Renderer>().material.SetTexture("_Data", _volumeBuffer[0]);
The size of the object is defined using the .mhd header file's spacing as well as the voxel dimensions:
transform.localScale = new Vector3(mhdheader.spacing[0] * volScale, mhdheader.spacing[1] * volScale * dim[1] / dim[0], mhdheader.spacing[2] * volScale * dim[2] / dim[0]);
I have tried writing my own function to get the index from the world position by offsetting it to the start of the render mesh (not sure if this is right), then scaling by the local scale, then multiplying by the number of voxels in each dimension. However, I am not sure if my logic is right at all... Here is the code I tried:
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    // Offset to the mesh's minimum corner, normalise by the scale,
    // then multiply by the voxel counts.
    Vector3 startOfTex = gameObject.GetComponent<Renderer>().bounds.min;
    Vector3 localPos = transform.InverseTransformPoint(worldPos);
    Vector3 localScale = gameObject.transform.localScale;
    Vector3 OffsetPos = localPos - startOfTex;
    Vector3 VoxelPosFloat = new Vector3(OffsetPos[0] / localScale[0], OffsetPos[1] / localScale[1], OffsetPos[2] / localScale[2]);
    VoxelPosFloat = Vector3.Scale(VoxelPosFloat, new Vector3(voxelDims[0], voxelDims[1], voxelDims[2]));
    Vector3Int voxelPos = Vector3Int.FloorToInt(VoxelPosFloat);
    return voxelPos;
}
You could set up a large number of box colliders with OnTriggerEnter() running on each, but a much better solution is to keep your voxels in a sorted array and use simple math: clamp the moving object's position vector to ints and map that vector to an index in the array. For example, the vector (0, 0, 0) could map to voxels[0]. Then just fetch that voxel's properties as you like. For a voxel application this is much faster than colliders.
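For example, a minimal sketch of that mapping, assuming dims holds the voxel counts per axis and voxels is a flat array laid out with x changing fastest (both names are placeholders):
int VoxelIndex(Vector3Int v, Vector3Int dims)
{
    // Flatten a 3D voxel coordinate into a 1D array index (x changes fastest).
    return v.x + dims.x * (v.y + dims.y * v.z);
}

// Usage: clamp/floor the position to the grid first, then look up the voxel.
// var value = voxels[VoxelIndex(Vector3Int.FloorToInt(voxelPos), dims)];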
I think I figured it out. If anyone sees any flaw in my code, please let me know :).
public Vector3Int GetIndexFromWorld(Vector3 worldPos)
{
    // Normalise the world position within the renderer's bounds,
    // then scale up to voxel coordinates.
    Vector3 deltaBounds = rend.bounds.max - rend.bounds.min;
    Vector3 OffsetPos = worldPos - rend.bounds.min;
    Vector3 normPos = new Vector3(OffsetPos[0] / deltaBounds[0], OffsetPos[1] / deltaBounds[1], OffsetPos[2] / deltaBounds[2]);
    Vector3 voxelPositions = new Vector3(normPos[0] * voxelDims[0], normPos[1] * voxelDims[1], normPos[2] * voxelDims[2]);
    Vector3Int voxelPos = Vector3Int.FloorToInt(voxelPositions);
    return voxelPos;
}
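A hypothetical use of the returned index, assuming _volumeBuffer[0] is the Texture3D loaded in the start code above:
// Sample the voxel colour at the entering object's position.
Vector3Int v = GetIndexFromWorld(other.transform.position);
Color c = _volumeBuffer[0].GetPixel(v.x, v.y, v.z);
float intensity = c.grayscale; // e.g. map this to audio volume or pitch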
I have a steering system that includes an Align steering behaviour, which aligns one object to another object.
That works well.
So I started programming the Face behaviour, which turns a GameObject towards another object using the Align steering.
I want a steering where object1 continuously faces object2: whether object2 moves or stays in place, object1 rotates to face it, on all rotation axes.
The problem is:
I need to get the quaternion rotated in the direction of a vector in 3D space. That vector is the difference between the positions of two GameObjects in Unity.
The Face script is:
Vector3 directionToTarget = target.transform.position - ownKS.position;
//Debug.Log(directionToTarget);
SURROGATE_TARGET.transform.rotation = Quaternion.Euler(Utils3D.VectorToOrientation(directionToTarget));

// Align with surrogate target
return Align3D.GetSteering(ownKS, SURROGATE_TARGET, targetAngularRadius,
    slowDownAngularRadius, timeToDesiredAngularSpeed);
I tried it with Quaternion.Euler(vector.normalized), but it is not working:
vector = vector.normalized;
// Debug.Log(vector.normalized + " v normalized");
Quaternion v = Quaternion.Euler(vector);
Debug.Log(v.eulerAngles);
return v;
I also tried the following, where m is the difference between the two objects' positions. The function VectorToOrientation(Vector3 m) is:
m = m.normalized;
Debug.Log(m);

// Angle between m and each world axis, via the dot product.
Vector3 axis = Vector3.Cross(m, new Vector3(1, 1, 1));
axis.Normalize();
double anglex = Math.Acos(Vector3.Dot(m, new Vector3(1, 0, 0)) / m.magnitude / new Vector3(1, 0, 0).magnitude);
double angley = Math.Acos(Vector3.Dot(m, new Vector3(0, 1, 0)) / m.magnitude / new Vector3(0, 1, 0).magnitude);
double anglez = Math.Acos(Vector3.Dot(m, new Vector3(0, 0, 1)) / m.magnitude / new Vector3(0, 0, 1).magnitude);
Debug.Log(anglex * 180 / Mathf.PI + "x " + angley * 180 / Mathf.PI + "y z" + anglez * 180 / Mathf.PI);
I don't know if the problem is clear; if you have any questions, you can ask me.
If anyone can help, thanks!
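For reference, Unity can build a rotation that faces along a direction vector directly; a minimal sketch of that idea (not the original VectorToOrientation code):
// Build a quaternion whose forward axis points along the direction vector.
Vector3 directionToTarget = target.transform.position - ownKS.position;
Quaternion facing = Quaternion.LookRotation(directionToTarget.normalized, Vector3.up);
SURROGATE_TARGET.transform.rotation = facing;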
I am working on an Augmented Reality project using ARCore. The coordinate system of ARCore changes every time you launch the application, making the initial position the origin. I have 5 points in another coordinate system, and I can find 4 of these positions in Unity world space using an ARCore Augmented Image. These points of course have different values in my other coordinate system. I have to find the position of the 5th point in Unity world space using its position in the other coordinate system.
I have followed this tutorial to achieve this. But since Unity does not support 3x3 matrices, I used the Accord.NET framework. Using the tutorial and Accord matrices I can calculate a 3x3 rotation matrix and a translation vector.
However, when I tried to apply this to my 5th point using TestObject.transform.Translate(AccordtoUnity(Translation), Space.World), I ran into trouble. When the initial 4 objects and the reference objects have the same orientation, my translation works perfectly. However, when my reference objects are rotated, this translation does not work. This makes sense of course, since I have only done a translation. My question is: how can I apply both rotation and translation to my 5th point? Or is there a way to convert my 3x3 rotation matrix and translation to a Unity Matrix4x4, since then I could use Matrix4x4.MultiplyPoint3x4? Or is it possible to convert my 3x3 rotation matrix to a Quaternion, which would let me use Matrix4x4.SetTRS? I am a bit confused about this conversion because Matrix4x4 includes scaling as well, but I am not doing any scaling.
I would be happy if someone could give me a hint or offer a better approach to finding the 5th point. Thanks!
EDIT:
I actually solved the problem based on Daveloper's answer. I constructed a Unity 4x4 matrix like this:
[ R00 R01 R02 T.x ]
[ R10 R11 R12 T.y ]
[ R20 R21 R22 T.z ]
[ 0 0 0 1 ]
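A minimal sketch of that construction, assuming R is the 3x3 Accord.NET rotation matrix and T the translation converted to a Unity Vector3 (both names are placeholders):
// Pack the 3x3 rotation into the upper-left block and the translation
// into the last column of a Unity Matrix4x4.
var TransformationMatrix = Matrix4x4.identity;
for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; j++)
    {
        TransformationMatrix[i, j] = (float)R[i, j];
    }
}
TransformationMatrix[0, 3] = T.x;
TransformationMatrix[1, 3] = T.y;
TransformationMatrix[2, 3] = T.z;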
I tested this by creating primitive objects in Unity and applying translation and rotation using the matrix above, like this:
TestObject.transform.position = TransformationMatrix.MultiplyPoint3x4(TestObject.transform.position);
TestObject.transform.rotation *= Quaternion.LookRotation(TransformationMatrix.GetColumn(2), TransformationMatrix.GetColumn(1));
To use a 3x3 rotation matrix and a translation vector to set a transform, use:
// rotationMatrixCV = your 3x3 rotation matrix; translation = your translation vector
var rotationMatrix = new Matrix4x4();
for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; j++)
    {
        rotationMatrix[i, j] = rotationMatrixCV[i, j];
    }
}
rotationMatrix[3, 3] = 1f;
var localToWorldMatrix = Matrix4x4.Translate(translation) * rotationMatrix;
// Scale is the length of each basis column.
Vector3 scale;
scale.x = new Vector4(localToWorldMatrix.m00, localToWorldMatrix.m10, localToWorldMatrix.m20, localToWorldMatrix.m30).magnitude;
scale.y = new Vector4(localToWorldMatrix.m01, localToWorldMatrix.m11, localToWorldMatrix.m21, localToWorldMatrix.m31).magnitude;
scale.z = new Vector4(localToWorldMatrix.m02, localToWorldMatrix.m12, localToWorldMatrix.m22, localToWorldMatrix.m32).magnitude;
transform.localScale = scale;

// Translation is the last column.
Vector3 position;
position.x = localToWorldMatrix.m03;
position.y = localToWorldMatrix.m13;
position.z = localToWorldMatrix.m23;
transform.position = position;
// Rotation comes from the forward (third) and up (second) columns.
Vector3 forward;
forward.x = localToWorldMatrix.m02;
forward.y = localToWorldMatrix.m12;
forward.z = localToWorldMatrix.m22;
Vector3 upwards;
upwards.x = localToWorldMatrix.m01;
upwards.y = localToWorldMatrix.m11;
upwards.z = localToWorldMatrix.m21;
transform.rotation = Quaternion.LookRotation(forward, upwards);
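Once localToWorldMatrix is built, mapping the 5th point is a single call; a minimal sketch, assuming fifthPointInOtherSystem holds its coordinates in the other system (the name is a placeholder):
// Transform the 5th point from the other coordinate system into Unity world space.
Vector3 fifthInUnity = localToWorldMatrix.MultiplyPoint3x4(fifthPointInOtherSystem);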
NOTICE:
This is only useful if this rotation and translation define your 5th point's location in the world in the coordinate system that is actively being used...
If your rotation and translation mean anything else, you'll have to do more. Glad to help further if you can define what this rotation and translation mean exactly.
If I understand your question correctly, you want to be able to create a rotation quaternion from a 3x3 matrix.
You can think of a 3x3 rotation matrix as three vectors of length 1 all at 90 degrees to each other. e.g.:
| forward.x forward.y forward.z |
| up.x up.y up.z |
| right.x right.y right.z |
A pretty reliable way to do the conversion is to take the forward and up vectors out of your matrix and pass them to Unity's Quaternion.LookRotation method: https://docs.unity3d.com/ScriptReference/Quaternion.LookRotation.html
This will create a quaternion that corresponds to your matrix. Depending on your actual situation, you might need the inverse of the quaternion, but essentially this is what you need.
Note that you only need the forward and up vectors, because right is always the cross product of those two and adds no information. Also take care that your matrix is a pure rotation matrix (i.e. no scaling or skewing), otherwise you might get unexpected results.
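A minimal sketch of that conversion and sanity check, assuming the basis vectors sit in the columns of a Matrix4x4 m, as in the earlier code:
// Forward and up are enough; LookRotation rebuilds the full rotation.
Vector3 forward = m.GetColumn(2);
Vector3 up = m.GetColumn(1);
Quaternion q = Quaternion.LookRotation(forward, up);

// In a pure rotation matrix, right is the cross product of up and forward.
Vector3 right = Vector3.Cross(up, forward);
Debug.Assert(Vector3.Distance(right, m.GetColumn(0)) < 1e-4f);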
I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;

foreach (var vertex in quadMeshFilter.mesh.vertices)
{
    print(vertex);
}
And the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible, given that the vertices make a square but the localScale does not?
How do I use vertices and draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a value by which your mesh is multiplied in size in the given directions (x, y, z). A scale of 1 is the default size, a scale of 2 is double size, and so on. Your localSpace coordinates are then multiplied by this scale.
Say a localSpace coordinate is (1, 0, 2) and the scale is (3, 1, 3). The result is then (1*3, 0*1, 2*3) = (3, 0, 6).
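Unity exposes this component-wise multiplication directly as Vector3.Scale, for example:
// Component-wise multiplication of a local coordinate by the scale.
Vector3 local = new Vector3(1f, 0f, 2f);
Vector3 scale = new Vector3(3f, 1f, 3f);
Vector3 scaled = Vector3.Scale(local, scale); // (3, 0, 6)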
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that allows you to change the worldSpace coordinates using transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in WorldSpace too, but I haven't tried that before.
I imagine it is related to this bit of code though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from localSpace to worldSpace based on the Transform using it.
Elaboration
Why do I get lots of 0's and 5's in my space coordinates despite them having other positions in the world?
If I print the vertices of a quad using the script below, I get results which have 3 coordinates and can be multiplied as such by localScale.
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
This first result is what we call local space.
There also exists something called worldSpace, and you can convert between localSpace and worldSpace.
localSpace is the object's mesh vertices in relation to the object itself, while worldSpace is the object's location in the Unity scene.
Converting gives the results printed by the script below: first the localSpace coordinates as before, then the worldSpace coordinates converted from those local coordinates.
Here is the script I used:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
    // TransformPoint converts from local space to world space.
    vertices[i] = transform.TransformPoint(vertices[i]);
    Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. It's a tree, in which each parent's Transform (position, rotation, scale; rotation is actually a quaternion, but let's assume it's Euler for simplicity so that the math works) is applied to its children. By extension of this philosophy, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what your vertices describe) parented to a GameObject whose Transform has a non-one localScale, all the vertices in the mesh get multiplied by that value, and all the positions are added.
Now if you parent that object to another GameObject and give it another localScale, this will again multiply all the vertex positions by that scale, translate by its position, and so on.
To answer your question: the global positions of your vertices are different from those contained in the source mesh because they are fed through a chain of Transforms all the way up to the scene root.
This is both the reason that we only have localScale and not a global scale, and also the reason why non-uniform scaling of objects which contain rotated children can sometimes give very strange results. Transforms stack, as the sketch below illustrates.
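A minimal sketch of that stacking for a two-level hierarchy (the parent and child Transform variables are illustrative):
// Each Transform contributes a local TRS matrix; Unity composes them down the tree,
// which is exactly what localToWorldMatrix holds.
Vector3 localVertex = new Vector3(-0.5f, -0.5f, 0f);
Matrix4x4 childLocal = Matrix4x4.TRS(child.localPosition, child.localRotation, child.localScale);
Matrix4x4 parentLocal = Matrix4x4.TRS(parent.localPosition, parent.localRotation, parent.localScale);
Vector3 world = (parentLocal * childLocal).MultiplyPoint3x4(localVertex);
// If parent sits at the scene root, this equals
// child.localToWorldMatrix.MultiplyPoint3x4(localVertex).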