Leap Motion - Angle of proximal bone to metacarpal (side to side movement) - unity3d

I am trying to get the angle between bones, such as the metacarpal bone and the proximal bone (the angle of moving the finger side to side; for example, the angle when your index finger is as close to your thumb as you can move it, and then the angle when it is as close to your middle finger as you can move it).
I have tried using Vector3.Angle with the directions of the bones, but that doesn't work because it includes the bending of the finger, so a fist gives a completely different value from an open hand.
What I really want is a way to "normalize" (I know normalizing isn't the correct term, but it's the best I could think of) the directions of the bones so that even if the finger is bent, the direction vector still points forwards rather than down, but stays in the finger's side-to-side direction.
I have added a diagram below to try and illustrate what I mean.
In the second diagram, the blue line represents what I currently get if I use the bones' directions, the green is the metacarpal direction and the red is what I want (from the side view). The first diagram shows what I am looking for from a top-down view: the blue line is the metacarpal bone direction, the red line is the proximal bone direction, and the green smudge represents the angle I am looking for.

To get this value, you need to "uncurl" the finger direction based on the current metacarpal direction. It's a little involved in the end; you have to construct some basis vectors in order to uncurl the hand along juuust the right axis. Hopefully the comments in this example script will explain everything.
using Leap;
using Leap.Unity;
using UnityEngine;

public class MeasureIndexSplay : MonoBehaviour {

  // Update is called once per frame
  void Update () {
    var hand = Hands.Get(Chirality.Right);
    if (hand != null) {
      Debug.Log(GetIndexSplayAngle(hand));
    }
  }

  // Some member variables for drawing gizmos.
  private Ray _metacarpalRay;
  private Ray _proximalRay;
  private Ray _uncurledRay;

  /// <summary>
  /// This method returns the angle of the proximal bone of the index finger relative to
  /// its metacarpal, when ignoring any angle due to the curling of the finger.
  ///
  /// In other words, this method measures the "side-to-side" angle of the finger.
  /// </summary>
  public float GetIndexSplayAngle(Hand h) {
    var index = h.GetIndex();

    // These are the directions we care about.
    var metacarpalDir = index.bones[0].Direction.ToVector3();
    var proximalDir = index.bones[1].Direction.ToVector3();

    // Let's start with the palm basis vectors.
    var distalAxis = h.DistalAxis(); // finger axis
    var radialAxis = h.RadialAxis(); // thumb axis
    var palmarAxis = h.PalmarAxis(); // palm axis

    // We need a basis whose forward direction is aligned to the metacarpal, so we can
    // uncurl the finger with the proper uncurling axis. The hand's palm basis is close,
    // but not aligned with any particular finger, so let's fix that.
    //
    // We construct a rotation from the palm "finger axis" to align it to the metacarpal
    // direction. Then we apply that same rotation to the other two basis vectors so
    // that we still have a set of orthogonal basis vectors.
    var metacarpalRotation = Quaternion.FromToRotation(distalAxis, metacarpalDir);
    distalAxis = metacarpalRotation * distalAxis;
    radialAxis = metacarpalRotation * radialAxis;
    palmarAxis = metacarpalRotation * palmarAxis;

    // Note: At this point, we don't actually need the distal axis anymore, and we
    // don't need to use the palmar axis, either. They're included above to clarify that
    // we're able to apply the aligning rotation to each axis to maintain a set of
    // orthogonal basis vectors, in case we wanted a complete "metacarpal-aligned basis"
    // for performing other calculations.

    // The radial axis, which has now been rotated a bit to be orthogonal to our
    // metacarpal, is the axis pointing generally towards the thumb. This is our curl
    // axis.
    // If you're unfamiliar with using directions as rotation axes, check out the images
    // here: https://en.wikipedia.org/wiki/Right-hand_rule
    var curlAxis = radialAxis;

    // We want to "uncurl" the proximal bone so that it is in line with the metacarpal,
    // when considered only on the radial plane -- this is the plane defined by the
    // direction approximately towards the thumb, and after the above step, it's also
    // orthogonal to the direction our metacarpal is facing.
    var proximalOnRadialPlane = Vector3.ProjectOnPlane(proximalDir, radialAxis);
    var curlAngle = Vector3.SignedAngle(metacarpalDir, proximalOnRadialPlane, curlAxis);

    // Construct the uncurling rotation from the axis and angle and apply it to the
    // *original* bone direction. We determined the angle of positive curl, so our
    // rotation flips its sign to rotate the other direction -- to _un_curl.
    var uncurlingRotation = Quaternion.AngleAxis(-curlAngle, curlAxis);
    var uncurledProximal = uncurlingRotation * proximalDir;

    // Upload some data for gizmo drawing (optional).
    _metacarpalRay = new Ray(index.bones[0].PrevJoint.ToVector3(),
                             index.bones[0].Direction.ToVector3());
    _proximalRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
                           index.bones[1].Direction.ToVector3());
    _uncurledRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
                           uncurledProximal);

    // This final direction is now uncurled and can be compared against the direction
    // of the metacarpal under the assumption it was constructed from an open hand.
    return Vector3.Angle(metacarpalDir, uncurledProximal);
  }

  // Draw some gizmos for debugging purposes.
  public void OnDrawGizmos() {
    Gizmos.color = Color.white;
    Gizmos.DrawRay(_metacarpalRay.origin, _metacarpalRay.direction);
    Gizmos.color = Color.blue;
    Gizmos.DrawRay(_proximalRay.origin, _proximalRay.direction);
    Gizmos.color = Color.red;
    Gizmos.DrawRay(_uncurledRay.origin, _uncurledRay.direction);
  }
}
For what it's worth, while the index finger is curled, tracked Leap hands don't have a whole lot of flexibility on this axis.

Related

Euler angles for a direction with respect to a rotated local axis system in Unity

I want a specific angle in a local, rotated axis system; basically, I want the angle measured in a plane of that rotated axis system. The best way to explain it is graphically.
I can do that by projecting the direction from origin to target onto my plane, and then using Vector3.Angle(origin forward direction, projected direction in plane).
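For reference, that projection approach might be sketched like this (origin and target are hypothetical Transform references, not part of the original question):
    // Project the origin-to-target direction onto the plane whose normal is the local up axis,
    // then measure the angle against the local forward direction.
    Vector3 toTarget = target.position - origin.position;
    Vector3 projected = Vector3.ProjectOnPlane(toTarget, origin.up);
    float angle = Vector3.Angle(origin.forward, projected);
    // Vector3.SignedAngle(origin.forward, projected, origin.up) would give the sign as well.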
Is there a way to obtain this similarly to Quaternion.FromToRotation(from, to).eulerAngles, but with the Euler angles expressed with respect to a coordinate system that is not the world's but the local, rotated one (the one represented by the rotated plane in the picture above)?
So the desired angle, for the rotation about the local y axis, would be something like Quaternion.FromToRotation(from, to).localEulerAngles.y, as the local Euler angles would be (0, -desiredAngle, 0) with this approach.
Or is there a more direct way than the way I achieved it?
If I understand you correctly, there are probably many possible ways to go.
I think you could, for example, use Quaternion.ToAngleAxis, which returns an angle and the axis around which the rotation occurs. You can then convert this axis into the local space of your object:
public Vector3 GetLocalEulerAngles(Transform obj, Vector3 vector)
{
    // As you had it already, still in world space.
    var rotation = Quaternion.FromToRotation(obj.forward, vector);
    rotation.ToAngleAxis(out var angle, out var axis);
    // Now convert the axis from world space into the object's local space.
    // Afaik localAxis should already be normalized.
    var localAxis = obj.InverseTransformDirection(axis);
    // Or only return the angle (as a float) if you don't need the rest anyway.
    return localAxis * angle;
}
As an alternative, as mentioned, you could also simply convert the other vector into local space first; then Quaternion.FromToRotation already gives a rotation in local space:
public Vector3 GetLocalEulerAngles(Transform obj, Vector3 vector)
{
    var localVector = obj.InverseTransformDirection(vector);
    // Now this already is a rotation in local space.
    var rotation = Quaternion.FromToRotation(Vector3.forward, localVector);
    return rotation.eulerAngles;
}
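As a usage sketch tying this back to the question's local y rotation (target is a hypothetical reference, not part of the original answer):
    // Hypothetical usage: yaw of the direction towards 'target' about this object's local y axis.
    Vector3 localEuler = GetLocalEulerAngles(transform, target.position - transform.position);
    float yaw = localEuler.y;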

Car Collision Return Force - 3D Car Game

As per my game requirements, I am applying a manual force when two cars collide with each other so that they move back.
I want the correct code to achieve this. Here is an example of the collision response that I want to get:
As per my understanding, I have written this code:
Vector3 reboundDirection = Vector3.Normalize(transform.position - other.transform.position);
reboundDirection.y = 0f;
int i = 0;
while (i < 3)
{
    myRigidbody.AddForce(reboundDirection * 100f, ForceMode.Force);
    appliedSpeed = speed * 0.5f;
    yield return new WaitForFixedUpdate();
    i++;
}
I am moving my cars using this code:
//Move the player forward
appliedSpeed += Time.deltaTime * 7f;
appliedSpeed = Mathf.Min(appliedSpeed, speed);
myRigidbody.velocity = transform.forward * appliedSpeed;
Still, as per my observation, I was not getting the collision response in the proper direction. What is the correct way to get the collision response shown in the image above?
Until you clarify why you have to use manual forces, or how you handle the forces generated by the Unity engine, I would like to stress one problem in your approach. You calculate the direction based on positions, but those positions are the centers of your cars. Therefore, you are not getting a correct direction, as you can see from the image below:
You calculate the direction between two pivot or center points; therefore, your force is a bit tilted, as in the left image. Instead of this you can use ContactPoint and then calculate the direction.
To give more detail: in the above image you can see the region marked with the blue rectangle. You can get all the contact points for that region using Collision.contacts,
then calculate the center point, or centroid, like this:
Vector3 centroid = new Vector3(0, 0, 0);
foreach (ContactPoint contact in col.contacts)
{
    centroid += contact.point;
}
centroid = centroid / col.contacts.Length;
This is the center of the contact region. To find the direction, you then need its projection on your car, like this:
Vector3 projection = gameObject.transform.position;
projection.x = centroid.x;
gameObject.GetComponent<Rigidbody>().AddForce((projection - centroid )*100, ForceMode.Impulse);
Since I do not know your setup, I took the y and z values from the car's position but the x value from the centroid; therefore you get a straight blue line, not an arrow tilted to the left as in the first image, even in case two of the second image. I hope I am being clear.
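Putting the centroid idea together with the question's rebound force, an OnCollisionEnter handler might look roughly like this (a sketch under the question's setup; the force magnitude and ForceMode are arbitrary choices, not part of the original answer):
    void OnCollisionEnter(Collision col)
    {
        // Average the contact points to get the centroid of the contact region.
        Vector3 centroid = Vector3.zero;
        foreach (ContactPoint contact in col.contacts)
        {
            centroid += contact.point;
        }
        centroid /= col.contacts.Length;
        // Push this car away from the contact centroid, keeping the force horizontal.
        Vector3 reboundDirection = transform.position - centroid;
        reboundDirection.y = 0f;
        reboundDirection.Normalize();
        GetComponent<Rigidbody>().AddForce(reboundDirection * 100f, ForceMode.Impulse);
    }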

Unity - get position of UI Slider Handle

I am working on a Unity 4.7 project and need to create shooting at a target. I simulated the gunpoint using horizontal and vertical sliders that move over time. When I click the button, I need to memorize the x and y coordinates of the handles and instantiate a bullet hole at this point, but I don't know how to get the coordinates of the sliders' handles. It is possible to get their values, but they don't seem to correspond to coordinates. If the horizontal slider changes its value by 1, would its handle change its x position by 1?
Use this then:
public static Vector3 GetScreenPositionFromWorldPosition(Vector3 targetPosition)
{
    Vector3 screenPos = Camera.main.WorldToScreenPoint(targetPosition);
    return screenPos;
}
Keep references to the handles of the horizontal and vertical sliders, and use them like this:
Vector3 pos = GetScreenPositionFromWorldPosition(horizontalHandle.transform.position);
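A short follow-up sketch of how the two handles might be combined into one point for the bullet hole (horizontalHandle and verticalHandle are assumed references, not part of the original answer):
    // Hypothetical sketch: take x from the horizontal slider's handle and y from the
    // vertical slider's handle to build the screen point where the bullet hole goes.
    Vector3 horizontalPos = GetScreenPositionFromWorldPosition(horizontalHandle.transform.position);
    Vector3 verticalPos = GetScreenPositionFromWorldPosition(verticalHandle.transform.position);
    Vector2 hitPoint = new Vector2(horizontalPos.x, verticalPos.y);
    // Instantiate the bullet hole at hitPoint (for a Screen Space - Overlay canvas,
    // screen coordinates can be used directly as the RectTransform position).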

Determining if quaternion rotation is clockwise or counter-clockwise

I am using the following code to handle rotating my player model to the position of my mouse.
void Update() {
    // Generate a plane that intersects the transform's position with an upwards normal.
    Plane playerPlane = new Plane(Vector3.up, transform.position);
    // Generate a ray from the cursor position.
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    // Determine the point where the cursor ray intersects the plane.
    // This will be the point that the object must look towards to be looking at the mouse.
    // Raycasting to a Plane object only gives us a distance, so we'll have to take the distance,
    // then find the point along that ray that meets that distance. This will be the point
    // to look at.
    float hitdist = 0f;
    // If the ray is parallel to the plane, Raycast will return false.
    if (playerPlane.Raycast(ray, out hitdist)) {
        // Get the point along the ray that hits the calculated distance.
        var targetPoint = ray.GetPoint(hitdist);
        // Determine the target rotation. This is the rotation if the transform looks at the target point.
        Quaternion targetRotation = Quaternion.LookRotation(targetPoint - transform.position);
        // Smoothly rotate towards the target point.
        transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, speed * Time.deltaTime); // WITH SPEED
        //transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, 1); // WITHOUT SPEED!!!
    }
}
I would like to be able to determine if the rotation is clockwise or counter-clockwise for animation purposes. What would be the best way of handling this? I'm fairly unfamiliar with quaternions so I'm not really sure how to approach this.
Angles between quaternions are unsigned. You will always get the shortest distance, and there's no way of defining "counter-clockwise" or "clockwise" unless you actively specify an axis (a point of view).
What you CAN do, however, is to take the axis that you're interested in (I assume it's the normal to your base plane.. perhaps the vertical of your world?) and take the flat 2D components of your quaternions, map them there and compute a simple 2D angle between those.
Quaternion A; // first quaternion - this is your desired rotation
Quaternion B; // second quaternion - this is your current rotation
// The axis we care about here is up (the normal of the base plane), so the base
// plane is the XZ plane. Take a reference direction lying in that plane and
// rotate it with each quaternion. (Rotating the up axis itself would not work,
// since a rotation about the axis leaves it unchanged.)
Vector3 reference = Vector3.forward;
Vector3 vecA = A * reference;
Vector3 vecB = B * reference;
// Now we compute the actual 2D rotation angles of the projections on the base plane.
float angleA = Mathf.Atan2(vecA.x, vecA.z) * Mathf.Rad2Deg;
float angleB = Mathf.Atan2(vecB.x, vecB.z) * Mathf.Rad2Deg;
// Get the signed difference in these angles.
var angleDiff = Mathf.DeltaAngle(angleA, angleB);
This should be it. I never had to do it myself and the code above is not tested. Similar to: http://answers.unity3d.com/questions/26783/how-to-get-the-signed-angle-between-two-quaternion.html
This should work even if A or B is not a quaternion but is given as an Euler-angle rotation instead.
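As a usage sketch for the animation question (a guess at how it might be applied, with hypothetical animation calls; in Unity's Y-up, left-handed convention a positive delta towards the target means a clockwise turn seen from above):
    // angleA/angleB computed as above (A = desired rotation, B = current rotation).
    float towardTarget = Mathf.DeltaAngle(angleB, angleA); // signed angle from current to desired
    if (towardTarget > 0f)
        PlayClockwiseTurnAnimation();        // hypothetical animation hook
    else
        PlayCounterClockwiseTurnAnimation(); // hypothetical animation hook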
Two-dimensional quaternions (complex numbers) have a signed angle. But the more correct way to think about complex numbers is with an unsigned angle which is relative to either the XY oriented plane or the YX oriented plane, i.e. a combination of an unsigned angle and an oriented plane of rotation.
In 2D there are only two oriented planes of rotation so the idea of a "signed angle" is really just a trick to get both the unsigned angle and the oriented plane of rotation packed into a single number.
For a quaternion the "signed angle" trick cannot be used because in 3D you have an infinite number of oriented planes you can rotate in, so a single signed angle cannot encode all the rotation information like it can in the 2D case.
The only way for a signed angle to make sense in 3D is with reference to a particular oriented plane, such as the XY oriented plane.
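A tiny illustration of that packing in the 2D case (not from the original answer; fromDir and toDir are hypothetical Vector2 directions):
    // In 2D, a "signed angle" packs an unsigned angle and the winding (oriented plane) into one number.
    float signedAngle = Vector2.SignedAngle(fromDir, toDir); // e.g. -30
    float unsignedAngle = Mathf.Abs(signedAngle);            // 30
    float winding = Mathf.Sign(signedAngle);                 // +1 or -1: which oriented plane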
-- UPDATE --
This is pretty easy to solve as a method on a quaternion class. If all you want to know is "is this counter clockwise", then since we know the rotation angle is from 0 to 180, a positive dot product between the quat's axis of rotation and the surface normal should indicate that we're rotating counter clockwise from the perspective of that surface. And a negative dot product indicates the opposite. Ignoring the zero case, this should do the trick with much less work:
public bool IsCounterClockwise( in Vector3 normal ) => I*normal.X + J*normal.Y + K*normal.Z >= 0;

Detecting touch position on 3D objects in OpenGL

I have created a 3D object in OpenGL for one of my applications. The object is something like a human body and can be rotated on touch. How can I detect the position of a touch on this 3D object? That is, if the user touches the head, I have to detect that it is the head; if the touch is on the hand, then that has to be identified. It should work even if the object is rotated to some other direction. I think the coordinates of the touch on the 3D object are required.
This is the method where I am getting the position of touch on the view.
- (void) touchesBegan: (NSSet*) touches withEvent: (UIEvent*) event
{
    UITouch* touch = [touches anyObject];
    CGPoint location = [touch locationInView: self];
    m_applicationEngine->OnFingerDown(ivec2(location.x, location.y));
}
Can anyone help? Thanks in advance!
Forget about ray tracing and other top-notch algorithms. We used a simple trick for one of our applications (Iyan 3D) on the App Store, although this technique needs one extra render pass every time you finish rotating the scene to a new angle. Render the different objects (head, hand, leg, etc.) in different colors (not their actual colors, but unique ones). Read the color in the rendered image at the screen position of the touch. You can find the object based on its color.
In this method you can change the rendered image's resolution to balance accuracy and performance.
To determine the 3D location of the object I would suggest ray tracing.
Assuming the model is in worldspace coordinates you'll also need to know the worldspace coordinates of the eye location and the worldspace coordinates of the image plane. Using those two points you can calculate a ray which you will use to intersect with the model, which I assume consists of triangles.
Then you can use the ray-triangle test to determine the 3D location of the touch, by finding the triangle whose intersection is closest to the image plane. If you want to know which triangle is touched, you will also want to save that information when you do the intersection tests.
This page gives an example of how to do ray triangle intersection tests: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-9-ray-triangle-intersection/ray-triangle-intersection-geometric-solution/
Edit:
Updated to have some sample code. It's slightly modified code I took from a C++ ray tracing project I did a while ago, so you'll need to modify it a bit to get it working for iOS. Also, the code in its current form isn't directly useful since it doesn't return the actual intersection point, only whether the ray intersects the triangle or not.
// d is the direction the ray is heading in
// o is the origin of the ray
// verts is the 3 vertices of the triangle
// faceNorm is the normal of the triangle surface
bool
Triangle::intersect(Vector3 d, Vector3 o, Vector3* verts, Vector3 faceNorm)
{
    // Check for line parallel to plane
    float r_dot_n = (dot(d, faceNorm));
    // If r_dot_n == 0, then the line and plane are parallel, but we need to
    // do the range check due to floating point precision
    if (r_dot_n > -0.001f && r_dot_n < 0.001f)
        return false;

    // Then we calculate the distance of the ray origin to the triangle plane
    float t = ( dot(faceNorm, (verts[0] - o)) / r_dot_n);
    if (t < 0.0)
        return false;

    // We can now calculate the barycentric coords of the intersection
    Vector3 ba_ca = cross(verts[1]-verts[0], verts[2]-verts[0]);

    float denom = dot(-d, ba_ca);
    float dist_out = dot(o-verts[0], ba_ca) / denom;
    float b = dot(-d, cross(o-verts[0], verts[2]-verts[0])) / denom;
    float c = dot(-d, cross(verts[1]-verts[0], o-verts[0])) / denom;

    // Check if in tri or if b & c have NaN values
    if ( b < 0 || c < 0 || b+c > 1 || b != b || c != c)
        return false;

    // Use barycentric coordinates to calculate the intersection point
    Vector3 P = (1.f-b-c)*verts[0] + b*verts[1] + c*verts[2];

    return true;
}
The actual intersection point you would be interested in is P.
Ray tracing is an option and is used in many applications for doing just that (picking). The problem with ray tracing is that it is a lot of work to get a pretty simple, basic feature working. Also, ray tracing can be slow, but if you have only one ray to trace (the location of your finger, say), then it should be okay.
OpenGL's API also provides a technique to pick objects. I suggest you look at, for instance: http://www.lighthouse3d.com/opengl/picking/
Finally, a last option would consist of projecting the vertices of the object into screen space and using simple 2D techniques to find out which faces of the object your finger overlaps.