I have two example objects in Unity structured as follows:
EmptyGameObject1: scale(-1, 1, 1)
- Child1: rotation(-4, 167, 179)
EmptyGameObject2: scale(1, -1, -1)
- Child2: rotation(-1, -10, 0)
Now I want to get the difference between the Euler angles of the children, taking the scale of each parent into account. Vector3.Distance on the Euler angles returns quite a high value, but in the Scene view the rotations of the children look very similar.
I know that a negative scale on the parent mirrors the child object, but what does it mathematically do to the rotation?
How can I calculate this rotation difference in Unity for x, y and z?
Let's say that var rotation1 = Child1.transform.rotation; and var rotation2 = Child2.transform.rotation; (we're working in world space, right?). We want to find a rotation (let's call it difference) from Child1 to Child2 such that rotation1 * difference == rotation2. Because quaternion multiplication is not commutative, we solve this by left-multiplying both sides with the inverse: var difference = Quaternion.Inverse(rotation1) * rotation2;. Now that we know the rotation, we can access its eulerAngles property to determine the x, y and z angles: difference.eulerAngles.x and so on.
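Put together, a minimal sketch (assuming Child1 and Child2 are the objects from the question):
var rotation1 = Child1.transform.rotation;
var rotation2 = Child2.transform.rotation;
// Solve rotation1 * difference == rotation2 for difference.
var difference = Quaternion.Inverse(rotation1) * rotation2;
// eulerAngles come back in [0, 360); remap to (-180, 180] so similar rotations read as small values.
Vector3 euler = difference.eulerAngles;
float dx = euler.x > 180f ? euler.x - 360f : euler.x;
float dy = euler.y > 180f ? euler.y - 360f : euler.y;
float dz = euler.z > 180f ? euler.z - 360f : euler.z;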
I have written a script in Unity which takes a SkinnedMeshRenderer and an AnimationClip and rotates the vertices in each by a specified number of degrees. It looks mostly correct, except that the rotations come out wrong. Here is an example bone rotation (in Euler angles) from the skeleton, along with the correct values that would be needed for the animation to look right.
With no rotation: (0, 0, -10)
Rotated 90 degrees: (-10, 0, 0)
Rotated 180 degrees: (0, 0, 10)
I have been trying to find a way to rotate these bones to make this conversion make sense with the data I have here, but have come up short. I know I want to rotate these values around the Y axis, but don't actually want the Y value in the euler angle to change. I am aware I could just reorient the root bone around the Y axis and the problem would be solved, but I want to have no rotation in the Y axis. I am "fixing" some older animations that have unnecessary rotation values in them.
// Convert the stored keyframe quaternion to Euler angles.
var localBoneRotation = new Quaternion(keysX[j].value, keysY[j].value, keysZ[j].value, keysW[j].value).eulerAngles;
// "Forward" after rotating by 'rotation' degrees around Y.
var reorientedForward = Quaternion.AngleAxis(rotation, Vector3.up) * Vector3.forward;
// Scale each Euler component by the rotated forward direction.
localBoneRotation.x *= reorientedForward.x;
localBoneRotation.y *= reorientedForward.y;
localBoneRotation.z *= reorientedForward.z;
// Rebuild the quaternion and write it back into the keyframes.
var finalRotation = Quaternion.Euler(localBoneRotation);
keysX[j].value = finalRotation.x;
keysY[j].value = finalRotation.y;
keysZ[j].value = finalRotation.z;
keysW[j].value = finalRotation.w;
I have also tried using a matrix and a Vector3, but most of the time I end up with values in the Y component. Perhaps I am going about this incorrectly. I just need to be able to specify an angle of rotation and then have the input data match the final Euler angles for each of these data points.
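For what it's worth, the sample values above are consistent with conjugating each keyframe rotation by the Y-axis twist, which rotates the rotation's axis around Y without changing its angle and without introducing a Y component. A minimal sketch of that idea, reusing the names from the snippet above:
// Rotation to reorient by, around the Y axis.
var yRot = Quaternion.AngleAxis(rotation, Vector3.up);
var original = new Quaternion(keysX[j].value, keysY[j].value, keysZ[j].value, keysW[j].value);
// Conjugation q' = r * q * r^-1 rotates the axis of q by r:
// (0, 0, -10) -> (-10, 0, 0) at 90 degrees, and (0, 0, 10) at 180 degrees.
var finalRotation = yRot * original * Quaternion.Inverse(yRot);
keysX[j].value = finalRotation.x;
keysY[j].value = finalRotation.y;
keysZ[j].value = finalRotation.z;
keysW[j].value = finalRotation.w;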
I am trying to simulate liquid conforming to its container. The container is a Unity cylinder and so is the liquid. I track current volume and max volume and use them to determine the coordinates of the center of where the surface should be. When the container is tilted, each vertex in the upper ring of the cylinder should maintain its current local x and z values but get a new local y value that sits at the same global height as the surface center.
In my closest attempt, the surface is flat relative to the world space but the liquid does not touch the walls of the container.
Vector3 v = verts[i];
Vector3 newV = new Vector3(v.x, globalSurfaceCenter.y, v.z);
verts[i] = transform.InverseTransformPoint(newV);
(I understand that inverse-transforming the point after using v.x and v.z changes them, but if I change them back afterwards the surface is no longer flat...)
I have tried many different approaches and I always end up at this same point or a stranger one.
Also, I'm not looking for any fundamentally different approach to the problem. It's important that I alter the vertices of a cylinder.
EDIT
Thank you, everyone, for your feedback. It helped me make progress with this problem but I've reached another roadblock. I made my code more presentable and took some screenshots of some results as well as a graph model to help you visualize what's happening and give variable names to refer to.
In the following images, colored cubes are instantiated and given the coordinates of some of the different vectors I am using to get my results.
[Image 1: F (red), A (green), B (blue)]
[Image 2: H (green), E (blue)]
[Image 3: graphed model]
NOTE: when I refer to capital A and B, I'm referring to the Vector3s in my code.
The cylinders in the images have the following rotations (left to right):
(0,0,45) (45,45,0) (45,0,20)
As you can see from image 1, F is correct when only one dimension of rotation is applied. When two or more are applied, the surface is flat, but not oriented correctly.
If I adjust the rotation of the cylinder after generating these results, I can get the orientation of the surface to make sense, but the numbers are not what you might expect.
For example: cylinder 3 (on the right side), adjusted to have a surface flat to the world space, would need a rotation of about (42.2, 0, 27.8).
Not sure if that's helpful but it is something that increases my confusion.
My code: (refer to graph model for variable names)
Vector3 v = verts[iter];                 // current vertex (local space)
Vector3 D = globalSurfaceCenter;         // world-space center of the liquid surface
Vector3 E = transform.TransformPoint(new Vector3(v.x, surfaceHeight, v.z));
Vector3 H = new Vector3(D.x, E.y, D.z);  // D's x/z at E's height
float a = Vector3.Distance(H, D);
float b = Vector3.Distance(H, E);
float i = (a / b) * a;
Vector3 A = H - D;
Vector3 B = H - E;
Vector3 F = (A + B) + (A + B) * i;
Instantiate(greenPrefab, transform).transform.position = H;
Instantiate(bluePrefab, transform).transform.position = E;
//Instantiate(redPrefab, transform).transform.position = transform.TransformPoint(F);
//Instantiate(greenPrefab, transform).transform.position = transform.TransformPoint(A);
//Instantiate(bluePrefab, transform).transform.position = transform.TransformPoint(B);
Some of the variables in my code and in the graphed model may not be necessary in the end, but my hope is it gives you more to work with.
Bear in mind that I am less than proficient in geometry and math in general. Please use layman's terms. Thank you!
And thanks again for taking the time to help me.
As a first step, we can calculate the normal of the upper cylinder surface in the cylinder's local coordinate system. Given the cylinder's world transform, this is simply:
localNormal = inverse(transform) * (0, 1, 0, 0)
Using this normal and the cylinder height h, we can define the plane of the upper cylinder in normal form as
dot(localNormal, (x, y, z) - (0, h / 2, 0)) = 0
I am assuming that your cylinder is centered around the origin.
Using this, we can calculate the y-coordinate for any x/z pair as
y = h / 2 - (localNormal.x * x + localNormal.z * z) / localNormal.y
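In Unity terms, a minimal sketch of this (assuming the cylinder mesh is centered on its local origin with height h, and that the transform has no non-uniform scale, so InverseTransformDirection matches the inverse(transform) * (0, 1, 0, 0) above):
// World up expressed in the cylinder's local space.
Vector3 localNormal = transform.InverseTransformDirection(Vector3.up);
// For each upper-ring vertex, keep local x/z and solve the plane equation for y.
// (Assumes the surface is not vertical, i.e. localNormal.y != 0.)
Vector3 v = verts[iter];
v.y = h / 2f - (localNormal.x * v.x + localNormal.z * v.z) / localNormal.y;
verts[iter] = v;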
I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;
foreach(var vertex in quadMeshFilter.mesh.vertices)
{
print(vertex);
}
And, the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible, given that the vertices make a square but the localScale does not?
How do I use vertices and draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a multiplier applied to your mesh's size along each direction (x, y, z). A scale of 1 is the default size, a scale of 2 is double size, and so on. Your local-space coordinates are then multiplied by this scale.
Say a local-space coordinate is (1, 0, 2) and the scale is (3, 1, 3); the result is (1*3, 0*1, 2*3) = (3, 0, 6).
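In code, this per-axis multiply is exactly what Vector3.Scale does:
// Component-wise multiply: (1*3, 0*1, 2*3) = (3, 0, 6)
Vector3 scaled = Vector3.Scale(new Vector3(1, 0, 2), new Vector3(3, 1, 3));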
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that lets you set its world-space coordinates directly: transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in WorldSpace too, but I haven't tried that before.
I imagine it is related to this bit of code, though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from local space to world space based on the Transform using it.
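As a minimal sketch of the first suggestion (assuming quad is the original quad's GameObject; Unity's built-in Quad is visible from its -Z side, so "in front" here means along -forward):
// Duplicate the quad and place the copy a little in front of the original.
GameObject copy = Object.Instantiate(quad);
copy.transform.position = quad.transform.position - quad.transform.forward * 0.5f;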
Elaboration
Why do I get lots of 0's and 5's in my local-space coordinates even though the objects have other positions in the world?
If I print the vertices of a quad using the script below, I get coordinates with three components, which can be multiplied by localScale as described above.
[Image: console output of the printed local-space vertices]
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
Debug.Log(v);
}
This first result is what we call local space.
There is also something called world space, and you can convert between local space and world space.
Local space describes the object's mesh vertices relative to the object itself, while world space describes the object's location in the Unity scene.
Converting produces the results shown below: first the local-space coordinates, as in the first printout, then the world-space coordinates converted from them.
[Image: console output of the local-space and world-space coordinates]
Here is the script I used to print the above result.
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
Debug.Log(v);
}
Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
vertices[i] = transform.TransformPoint(vertices[i]);
Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. The hierarchy is a tree in which each parent contributes a transformation built from its position, rotation and scale (the rotation is actually stored as a quaternion, but let's assume Euler angles for simplicity so that the math works out). By extension, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what your vertices describe) parented to a GameObject whose Transform has a non-one localScale, all the vertices in the mesh get multiplied by that scale, and the position is added on top.
Now if you parent that object to another GameObject and give it another localScale, this will again multiply all the vertex positions by that scale, translate them by its position, and so on.
To answer your question: the global positions of your vertices differ from those stored in the source mesh because they are fed through a chain of Transforms all the way up to the scene root.
This is both the reason that we only have localScale and not a world scale, and the reason why non-uniform scaling of objects that contain rotated children can sometimes give very strange results. Transforms stack.
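A minimal sketch of that chain in Unity (nothing here beyond the standard Transform and Mesh APIs):
// A local-space vertex from this object's mesh.
Vector3 meshVertex = GetComponent<MeshFilter>().mesh.vertices[0];
// localToWorldMatrix composes the whole chain of parent TRS matrices down to this Transform.
Matrix4x4 localToWorld = transform.localToWorldMatrix;
// Feeding the vertex through that chain yields its world position,
// equivalent to transform.TransformPoint(meshVertex).
Vector3 worldPos = localToWorld.MultiplyPoint3x4(meshVertex);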
I have an issue with rotating a node multiple times. I am working on a game with a rolling ball, and while I can rotate the ball along one axis, or along two axes by the same amount, I cannot rotate at partial angles.
example:
// Roll right 90 -
SCNNode.pivot = SCNMatrix4MakeRotation(Float(M_PI_2), 0, 1, 0)
// Roll right 180 -
SCNNode.pivot = SCNMatrix4MakeRotation(Float(M_PI_2) * 2, 0, 1, 0)
// Roll up 90 -
SCNNode.pivot = SCNMatrix4MakeRotation(Float(M_PI_2), 1, 0, 0)
// Roll up & right 90 -
SCNNode.pivot = SCNMatrix4MakeRotation(Float(M_PI_2), 1, 1, 0)
All of these work; however, if I need to roll the ball right 180 and up 90, I'm stuck.
Even if there were some way to add the vectors together, that would do me.
Any help greatly appreciated.
To combine the effects of rotation matrices, use matrix multiplication.
To do that in SceneKit, you can either:
Create separate rotation matrices and multiply them together using SCNMatrix4Mult.
Apply a rotation directly to an existing matrix using SCNMatrix4Rotate. (This is equivalent to the SCNMatrix4MakeRotation + SCNMatrix4Mult option; it just combines those steps into a single function call.)
If the order of transformations is important to your app, remember that matrix multiplication order is the reverse of transformation order.
I have some 3D models that I render in OpenGL in a 3D space, and I'm having headaches moving the 'character' (that is, the camera) with rotations and translations inside this world.
I receive the input (i.e. the coordinates to move to / the degrees to turn) from some external event (imagine a user input, or data from a GPS + compass device), and each event is either a rotation OR a translation.
I've written this method to manage these events:
- (void)moveThePlayerPositionTranslatingLat:(double)translatedLat Long:(double)translatedLong andRotating:(double)degrees{
[super startDrawingFrame];
if (degrees != 0)
{
glRotatef(degrees, 0, 0, 1);
}
if (translatedLat != 0)
{
glTranslatef(translatedLat, -translatedLong, 0);
}
[self redrawView];
}
Then in redrawView I actually draw the scene and my models. It is something like:
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
NSInteger nModels = [models count];
for (NSInteger i = 0; i < nModels; i++)
{
MD2Object * mdobj = [models objectAtIndex:i];
glPushMatrix();
double * deltas = calloc(sizeof(double),2);
deltas[0] = currentCoords[0] - mdobj.modelPosition[0];
deltas[1] = currentCoords[1] - mdobj.modelPosition[1];
glTranslatef(deltas[0], -deltas[1], 0);
free(deltas);
[mdobj setupForRenderGL];
[mdobj renderGL];
[mdobj cleanupAfterRenderGL];
glPopMatrix();
}
[super drawView];
The problem arises when translation and rotation events are called one after the other. For example, when I rotate incrementally for some iterations (still around the origin), then translate, and finally rotate again, the last rotation does not occur around the current (translated) position but around the old origin. I'm well aware that this happens when the order of transformations is inverted, but I believed that after a draw the new center of the world was given by the translated system.
What am I missing? How can I fix this? (any reference to OpenGL will be appreciated too)
I would recommend not doing cumulative transformations in the event handler, but instead storing the current values for your transformation internally and then transforming only once, though I don't know if this is the behaviour that you want.
Pseudocode:
someEvent(lat, long, deg)
{
currentLat += lat;
currentLong += long;
currentDeg += deg;
}
redraw()
{
glClear()
glRotatef(currentDeg, 0, 0, 1);
glTranslatef(currentLat, -currentLong, 0);
... // draw stuff
}
It sounds like you have a couple of things that are happening here:
The first is that you need to be aware that rotations occur about the origin. So when you translate and then rotate, you are not rotating about what you think is the origin, but about the new origin, which is T⁻¹·0 (the world origin transformed by the inverse of your translation).
Second, you're making things quite a bit harder than you really need to. What you might want to consider instead is using gluLookAt. You essentially give it a position within your scene, a point in your scene to look at, and an 'up' vector, and it will set up the scene properly. To use it, keep track of where your camera is located (call that vector p), a vector n (for normal; it indicates the direction you're looking) and a vector u (your up vector). It will make things easier for more advanced features if n and u are orthonormal (i.e. they are orthogonal to each other and have unit length). If you do this, you can compute r = n x u (your 'right' vector), a unit vector orthogonal to the other two. You then 'look at' p + n and provide u as the up vector.
Ideally, your n, u and r have some canonical form, for instance:
n = <0, 0, 1>
u = <0, 1, 0>
r = <1, 0, 0>
You then incrementally accumulate your rotations and apply them to the canonical form of your orientation vectors. You can use either Euler rotations or quaternion rotations to accumulate them (I've come to really appreciate the quaternion approach for a variety of reasons).