Orientations Offset - unity3d

I am using the MVN-Unity plug-in to access Xsens motion capture data and animate a Unity character in real time. The white, green, and yellow character is a Unity skeleton that is animated based on Xsens motion data.
I am now trying to animate a different (non-Unity) character using Xsens (the other character, which looks like a human), so, similar to what the plug-in does, the motion data (positions & orientations) are mapped to his joints/bones.
But as you can see below, something is wrong with orientations...
I think the reason might be that the rotations from MVN are not properly offset. As you can see in the next two pictures, the MVN hips have the x-axis (red) pointing to the puppet's backside, whereas for the guy's hips, the x-axis points to the right of him.
It might also be that the plug-in is using global rotations somewhere it should use local rotations. This can be demonstrated by rotating the guy around before starting the Unity app; i.e. select the guy's root game object, set the y-rotation to 0/90/180/270 before pressing play, and compare the results: the distortions are different every time.
I don't know how to properly fix this. The code snippet that updates the Unity model (mapped to the MVN puppet or the guy) is as follows. I took this from the plug-in scripts:
private void updateModel(Transform[] pose, Transform[] model)
{
    // Re-set the target, then set it up based on the segments.
    Vector3 pelvisPos = new Vector3();
    Vector3 lastPos = target.position;
    target.position = Vector3.zero;

    // Map only 23 joints.
    for (int i = 0; i < 23; i++)
    {
        switch (i)
        {
            // Position only on y axis, and leave x and z to the body. Apply the 'global' position & orientation to the pelvis.
            case (int)XsAnimationSegment.Pelvis:
                pelvisPos = pose[i].position * scale;
                model[i].position = new Vector3( model[i].position.x, pelvisPos.y, model[i].position.z );
                model[i].rotation = pose[i].rotation * modelRotTP[i];
                break;

            // Update only the 'orientation' for the rest of the segments.
            default:
                if ( model[i] != null )
                {
                    model[i].rotation = pose[i].rotation * modelRotTP[i];
                }
                break;
        }
    }

    // Apply root motion if the flag is enabled; i.e. true.
    if (applyRootMotion)
    {
        // Only update x and z, since pelvis is already modified by y previously.
        target.position = new Vector3(pelvisPos.x + pelvisPosTP.x, lastPos.y, pelvisPos.z + pelvisPosTP.z);
    }

    // Set the final rotation of the full body, but only position it to face similar as the pelvis.
    Quaternion q = Quaternion.Inverse(modelRotTP[(int)XsAnimationSegment.Pelvis]) * model[(int)XsAnimationSegment.Pelvis].rotation;
    target.rotation = new Quaternion(target.rotation.x, q.y, target.rotation.z, target.rotation.w);
}
I sort of understand what the code does, but I don't know how to fix this problem. Most probably it has to do with the axes being different? I would appreciate any help...

You can modify the XsLiveAnimator.cs script at line 496, replacing it with:
model[segmentOrder[i]].transform.rotation = orientations[segmentOrder[i]];
model[segmentOrder[i]].transform.Rotate(rotOffset, Space.World);
rotOffset is a Vector3 holding your rotation offset.
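Alternatively, since the question's updateModel applies a per-segment offset via pose[i].rotation * modelRotTP[i], that offset table can be captured once while both skeletons are in the same calibration pose. The sketch below is a guess at how such offsets could be built, not the plug-in's actual calibration code; the class and method names are made up:

```csharp
using UnityEngine;

// Hypothetical per-segment calibration, assuming pose[] and model[] are in
// the same T-pose when Calibrate is called.
public class SegmentOffsets
{
    private Quaternion[] modelRotTP;

    // Capture the rotation that maps each MVN segment frame onto the model's frame.
    public void Calibrate(Transform[] pose, Transform[] model)
    {
        modelRotTP = new Quaternion[pose.Length];
        for (int i = 0; i < pose.Length; i++)
        {
            // Chosen so that pose[i].rotation * modelRotTP[i] == model[i].rotation at T-pose.
            modelRotTP[i] = Quaternion.Inverse(pose[i].rotation) * model[i].rotation;
        }
    }

    // Per-frame application, same form as the plug-in's updateModel.
    public void Apply(Transform[] pose, Transform[] model)
    {
        for (int i = 0; i < pose.Length; i++)
        {
            if (model[i] != null)
                model[i].rotation = pose[i].rotation * modelRotTP[i];
        }
    }
}
```

If the distortions differ depending on the character's starting y-rotation, re-running such a calibration after placing the character would at least rule the offset table out as the cause.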

Related

Unity3d: Find which gameObject is in front

I have two gameObjects, A and B. They are rotated 90 degrees, which makes their local y axes face forward.
1st Case
In this case, the local y position of B is ahead of local y position of A
2nd Case
Even though their global positions are the same as in the 1st case, we can observe here that the local y position of A is ahead of the local y position of B.
I tried using A.transform.localPosition.y and B.transform.localPosition.y to find which is greater, but it doesn't work. What can I do to find which is in front in these two different cases?
Vector projections are your friend here. Project both positions onto a line and compare their magnitudes (or squared magnitudes, which is faster).
Case 1:
Vector3 a = Vector3.Project(A.position, Vector3.up);
Vector3 b = Vector3.Project(B.position, Vector3.up);
if (a.sqrMagnitude > b.sqrMagnitude)
{
    // a is ahead
}
else
{
    // b is ahead
}
Case 2: Project both positions onto Vector3.left.
Maybe you can even always simply project the two positions onto one of the two objects' forward vectors (A.forward or B.forward, assuming they're rotated equally).
Hope this helps.
You could compare Vector3.Dot(A.position, A.forward) and Vector3.Dot(B.position, B.forward) to find the one in front in relation to their forward.
The object with the bigger Dot product is in front, and this works in all rotations, including 3D ones.
You can use the following snippet to test for yourself:
// Assign these values on the Inspector
public Transform a, b;
public float RotationZ;
void Update() {
    a.eulerAngles = new Vector3(0, 0, RotationZ);
    b.eulerAngles = new Vector3(0, 0, RotationZ);
    Debug.DrawRay(a.position, a.right, Color.green);
    Debug.DrawRay(b.position, b.right, Color.red);
    var DotA = Vector2.Dot(a.position, a.right);
    var DotB = Vector2.Dot(b.position, b.right);
    if (DotA > DotB) { Debug.Log("A is in front"); }
    else { Debug.Log("B is in front"); }
}

Need to rotate object in specific ways - Stuck with Gimbal Lock?

I am working on a small mini-game that requires rotating a cube 90 degrees in the appropriate direction based on the direction you swipe. So you could swipe up and it would rotate up 90 degrees, and then immediately after, swipe left, and it would rotate 90 degrees to the left from the current rotation (so it would stay rotated up 90 degrees as well). I feel like this should be really simple, but it's giving me a ton of trouble.
I would like to use Lerp/Slerp so that the rotation looks nice, though it isn't strictly necessary. The way I currently have it implemented, each time I call my "SlerpRotateLeft()" function, for example, it only rotates to the exact same rotation relative to the world (instead of the current rotation + 90 degrees in the correct direction).
I have been reading up on Quaternions and Euler angles all day, but I'm still not entirely sure what my problem is.
I am currently using states to determine when the object is rotating and in what direction, though I feel like I may be overcomplicating it. Any possible solution to this problem (where you can swipe in a particular direction, in any order, in succession, to rotate the cube 90 degrees in that direction) would be appreciated. Previously, I attempted to use coroutines, but those didn't have the desired effect either (and I was unable to reset them).
Here is my class. It works and you can test it by dropping the script into any cube object in-editor, but it doesn't work as intended. You will see what my problem is by testing it (I recommend placing an image on the cube's front face to track which one it is). I'm not sure if I explained my problem properly, so please let me know if any more information is needed.
UPDATE: I have accepted Draco18s's answer as correct, because their solution worked. However, I did not completely understand the solution, or how to store the value. I found an answer to a similar question that also used Transform.Rotate and stored the value, which helped clear the solution up. The key seemed to be storing it in a GameObject instead of in a Quaternion like I originally thought. I thought I should provide this code in case anyone stumbles upon this and is equally confused, though you may not need the swipe detection:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Rotater : MonoBehaviour
{
    private GameObject endRotation;

    //SWIPE VARIABLES
    public Vector2 touchStart = new Vector2(0, 0);
    public Vector2 touchEnd = new Vector2(0, 0);
    public Vector2 currentSwipe = new Vector2(0, 0);
    public Vector2 currentSwipeNormal = new Vector2(0, 0);

    // Use this for initialization
    void Start()
    {
        endRotation = new GameObject();
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            touchStart = new Vector2(Input.mousePosition.x, Input.mousePosition.y);
            //Debug.Log("Touched at: " + touchStart);
        }
        if (Input.GetMouseButtonUp(0))
        {
            touchEnd = new Vector2(Input.mousePosition.x, Input.mousePosition.y);

            //Get swipe vector information
            currentSwipe = new Vector2(touchEnd.x - touchStart.x, touchEnd.y - touchStart.y);

            //Normalize swipe vector
            currentSwipeNormal = currentSwipe;
            currentSwipeNormal.Normalize();

            //Swipe up
            if (currentSwipeNormal.y > 0 && currentSwipeNormal.x > -0.5 && currentSwipeNormal.x < 0.5)
            {
                endRotation.transform.Rotate(-Vector3.left, 90, Space.World);
            }
            //Swipe down
            if (currentSwipeNormal.y < 0 && currentSwipeNormal.x > -0.5 && currentSwipeNormal.x < 0.5)
            {
                endRotation.transform.Rotate(Vector3.left, 90, Space.World);
            }
            //Swipe left
            if (currentSwipeNormal.x < 0 && currentSwipeNormal.y > -0.5 && currentSwipeNormal.y < 0.5)
            {
                endRotation.transform.Rotate(Vector3.up, 90, Space.World);
            }
            //Swipe right
            if (currentSwipeNormal.x > 0 && currentSwipeNormal.y > -0.5 && currentSwipeNormal.y < 0.5)
            {
                endRotation.transform.Rotate(-Vector3.up, 90, Space.World);
            }
        }
        LerpRotate();
    }

    void LerpRotate()
    {
        transform.rotation = Quaternion.Lerp(transform.rotation, endRotation.transform.rotation, Time.deltaTime * 10);
    }
}
Use Transform.RotateAround
You're encountering an issue where you take the current Euler angles and try to add/subtract 90, which does not necessarily correspond to the desired orientation, due to the rotated reference frame.
But using RotateAround, you can pass in the global Up, Left, and Forward vectors, which is what you're trying to do.
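For instance, a minimal sketch of that idea (the class and method names are placeholders, not from the question's code): each swipe applies a quarter turn around a global axis, pivoting on the cube's own position, so successive swipes compose correctly regardless of the cube's current orientation.

```csharp
using UnityEngine;

// Hypothetical component illustrating RotateAround with world axes.
public class SwipeCubeRotator : MonoBehaviour
{
    // "Swipe left": quarter turn around the world up axis.
    public void RotateLeft()
    {
        transform.RotateAround(transform.position, Vector3.up, 90f);
    }

    // "Swipe up": quarter turn around the world right axis.
    public void RotateUp()
    {
        transform.RotateAround(transform.position, Vector3.right, -90f);
    }
}
```

Because the axis is given in world space, the result is independent of how the cube has been rotated so far, which is exactly what the add/subtract-Euler approach fails to achieve.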

Microsoft Kinect V2 + Unity 3D Depth = Warping

I've been working on a scene in Unity3D where I have the KinectV2 depth information coming in at 512 x 424 and I'm converting that in real time to Mesh that is also 512 x 424. So there is a 1:1 ratio of pixel data (depth) and vertices (mesh).
My end goal is to make the 'Monitor 3D View' scene found in 'Microsoft Kinect Studio v2.0' with the Depth.
I've pretty much got it working in terms of the point cloud. However, there is a large amount of warping in my Unity scene. I thought it might have been down to my maths, etc.
However, I noticed that it's the same for the Kinect Unity demo supplied in their development kit.
I'm just wondering if I'm missing something obvious here? Each of my pixels (or vertices in this case) is mapped out in a 1-by-1 fashion.
I'm not sure if it's because I need to process the data from the DepthFrame before rendering it to the scene, or if there's some additional step I've missed to get a true representation of my room, because it looks like there's a slight 'spherical' effect being added right now.
These two images are a top down shot of my room. The green line represents my walls.
The left image is the Kinect in a Unity scene, and the right is within Microsoft Kinect Studio. Ignoring the colour difference, you can see that the left (Unity) is warped, whereas the right is linear and perfect.
I know it's quite hard to make out, especially since you don't know the layout of the room I'm sitting in. Here is a side view too. Can you see the warping on the left? Use the green lines as a reference - these are straight in the actual room, as shown correctly in the right image.
Check out my video to get a better idea:
https://www.youtube.com/watch?v=Zh2pAVQpkBM&feature=youtu.be
Code C#
Pretty simple to be honest. I'm just grabbing the depth data straight from the Kinect SDK, and placing it into a point cloud mesh on the Z axis.
//called on application start
void Start(){
    _Reader = _Sensor.DepthFrameSource.OpenReader();
    _Data = new ushort[_lengthInPixels];
    _Sensor.Open();
}

//called once per frame
void Update(){
    if(_Reader != null){
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        UpdateScene();
    }
}

//update point cloud in scene
void UpdateScene(){
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            int index = (y * width) + x;
            float depthAdjust = 0.1f;
            Vector3 new_pos = new Vector3(points[index].x, points[index].y, _Data[index] * depthAdjust);
            points[index] = new_pos;
        }
    }
}
Kinect API can be found here:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.depthframe.aspx
Would appreciate any advice, thanks!
With thanks to Edward Zhang, I figured out what I was doing wrong.
It's down to me not projecting my depth points correctly, in that I need to use the CoordinateMapper to map my DepthFrame into CameraSpace.
Currently, my code assumes an orthographic depth projection instead of a perspective depth camera. I just needed to implement this:
https://msdn.microsoft.com/en-us/library/windowspreview.kinect.coordinatemapper.aspx
//camera-space buffer shared between Update and UpdateScene
CameraSpacePoint[] _CameraSpace;

//called once per frame
void Update(){
    if(_Reader != null){
        var dep_frame = _Reader.AcquireLatestFrame();
        dep_frame.CopyFrameDataToArray(_Data);
        dep_frame.Dispose();
        dep_frame = null;
        _CameraSpace = new CameraSpacePoint[_Data.Length];
        _Mapper.MapDepthFrameToCameraSpace(_Data, _CameraSpace);
        UpdateScene();
    }
}

//update point cloud in scene
void UpdateScene(){
    for(int y = 0; y < height; y++){
        for(int x = 0; x < width; x++){
            int index = (y * width) + x;
            Vector3 new_pos = new Vector3(_CameraSpace[index].X, _CameraSpace[index].Y, _CameraSpace[index].Z);
            points[index] = new_pos;
        }
    }
}

Unity, Rigidbody Character keeps falling over

I have a problem trying to keep my character from rolling over while still allowing it to pitch and yaw. The suggestions I have seen have been to lock different axes, but no matter whether I lock the x, y, or z axis, I always run into a situation where the character can fall over. The best I can get is locking both the y and z axes. This allows the character to change pitch to conform to the terrain. But, again, if I turn left or right while going up or down a hill, I can roll the character over.
Here is my current code for movement (in case it helps). I have no other code and my rigidbody is all defaults. I have a mesh collider for the character set to convex, but defaults otherwise.
Any suggestions on how to make this work?
Here's a live demo of y and z axis being locked. Just waltz up the hill and hang a left or right and you'll fall over. (ASWD controls)
https://dl.dropboxusercontent.com/u/27946381/Builds/builds.html
Thanks so much!
var speed = 3.0;
var rotateSpeed = 3.0;

function FixedUpdate() {
    var hAxis = 0.0;
    var vAxis = 0.0;
    if( Input.GetAxis("Horizontal") > 0 ) { hAxis = 1.0; } else if( Input.GetAxis("Horizontal") < 0 ) { hAxis = -1.0; } else { hAxis = 0.0; }
    if( Input.GetAxis("Vertical") > 0 ) { vAxis = 1.0; } else if( Input.GetAxis("Vertical") < 0 ) { vAxis = -1.0; } else { vAxis = 0.0; }

    var rigidBody: Rigidbody = GetComponent(Rigidbody);

    // Rotate around y axis
    // transform.Rotate(0, hAxis * rotateSpeed, 0);
    var deltaRotation : Quaternion = Quaternion.Euler(0, hAxis * rotateSpeed, 0);
    rigidBody.MoveRotation(rigidBody.rotation * deltaRotation);

    // Move forward / backward
    var forward = transform.TransformDirection(Vector3.forward);
    var currSpeed = speed * vAxis;
    rigidBody.MovePosition( rigidBody.position + (forward * currSpeed) );

    var animatorController: Animator = GetComponent(Animator);
    animatorController.SetFloat("Speed", currSpeed);
}
Well, maybe you should check the Rigidbody property called gravity scale; set it to 0 and you will never fall.
I believe your problem is in these MovePosition and MoveRotation calls. Take a look at the scripting reference - these methods basically ignore collision until the body is teleported to the specified pose, i.e. you may place the body into some kind of irresolvable situation where the physics engine is unable to find appropriate forces to push the body away from another collider, and therefore the body will fall through. In this case it is better to use AddForce and AddTorque.
Also, you can use CharacterController instead of RigidBody. Though it cannot be rotated at all, there are fine methods for kinematic motion with precise collision detection. And you can attach another body to the character controller and rotate it as you wish.
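A rough sketch of the force-based approach, combined with freezing the roll axes via constraints (this is an illustration, not tested against the question's setup; the force values and class name are placeholders):

```csharp
using UnityEngine;

// Hypothetical mover: drives the character with AddForce/AddTorque instead
// of MovePosition/MoveRotation, and freezes rotation around x and z so
// physics can never tip the body over.
public class ForceMover : MonoBehaviour
{
    public float moveForce = 20f;   // placeholder tuning value
    public float turnTorque = 5f;   // placeholder tuning value
    private Rigidbody body;

    void Start()
    {
        body = GetComponent<Rigidbody>();
        // Allow yaw (y), forbid roll and pitch drift from collisions.
        body.constraints = RigidbodyConstraints.FreezeRotationX
                         | RigidbodyConstraints.FreezeRotationZ;
    }

    void FixedUpdate()
    {
        // Forward/backward thrust and yaw torque from the input axes.
        body.AddForce(transform.forward * Input.GetAxis("Vertical") * moveForce);
        body.AddTorque(Vector3.up * Input.GetAxis("Horizontal") * turnTorque);
    }
}
```

Note that freezing x/z rotation also prevents the character from pitching to match the terrain; if that behavior is needed, the pitch would have to be handled separately (e.g. by rotating a child visual mesh).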
Changing code will change nothing.
Try changing the hitbox of the player to a big cube
(Better) Try to look into the rigidbody properties, as far as I remember I had the same problem and fixed it that way, it might help

Unity 3D get camera rotated angle

In my unity app initially I have set the camera in a certain position. Later I change the camera position dynamically. After doing that I need to find the angle/rotation it rotated and another object needs to be rotated by the same angle but in the opposite direction.
I have two questions.
1.How do I find the angle the camera moved?
2.How do I rotate the game object by the same value but in the opposite direction.
I tried things like target.transform.Rotate( Camera.main.transform.position - cameraNewPos); and also googled a lot but couldn't find the answers.
You can always compare the current transform.rotation to its original value before you made the change to see the difference, then apply the same change (in reverse) to your other object. Since quaternions cannot be subtracted, the difference is taken by multiplying by the inverse.
On the camera:
var delta = Quaternion.Inverse(_originalRotation) * transform.rotation;
On that object:
transform.rotation = transform.rotation * Quaternion.Inverse(delta);
If your camera is at position P1 and orientation Q1 at start, and P2, Q2 after you moved it, you can find the translation T and rotation R that go from 1 to 2 using (using transform.rotation and transform.position, which represent the world transform of your object, independent of its hierarchy):
R = Quaternion.Inverse(Q1) * Q2
T = P2 - P1
Note that you can't subtract quaternions.
Then you can simply move your other object accordingly:
Obj.position = initialCameraPosition - T
Obj.rotation = initialCameraRotation * R (or Quaternion.Inverse(R) if you want to mirror the rotation as well)
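Put together as a component, the R = Q1.Inverse() * Q2 recipe might look like this (a sketch; the field names and the choice to mirror the rotation are assumptions):

```csharp
using UnityEngine;

// Hypothetical counter-rotator: caches the camera's starting orientation,
// then each frame applies the inverse of its accumulated rotation to
// another object, rotating it by the same amount in the opposite direction.
public class CounterRotator : MonoBehaviour
{
    public Transform cam;      // the camera that moves
    public Transform target;   // the object to rotate the opposite way
    private Quaternion camStart;
    private Quaternion targetStart;

    void Start()
    {
        camStart = cam.rotation;
        targetStart = target.rotation;
    }

    void Update()
    {
        // R = Q1^-1 * Q2: the rotation that takes the camera from start to now.
        Quaternion r = Quaternion.Inverse(camStart) * cam.rotation;
        // Apply the mirrored rotation to the target.
        target.rotation = targetStart * Quaternion.Inverse(r);
    }
}
```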
So:
If you want to get the amount of rotation applied from one time to another, you can just save the rotation at the beginning and then subtract it from the end rotation.
Vector3 iRotation;
Vector3 amountRot;

void Start() { // for example
    iRotation = obj.transform.rotation.eulerAngles;
}

void Update() { // at some point you want
    if (sth) amountRot = obj.transform.rotation.eulerAngles - iRotation;
    obj2.transform.rotation = Quaternion.Euler(-obj.transform.rotation.eulerAngles); // rotate opposite
}