Detecting touch position on 3D objects in OpenGL - iPhone

I have created a 3D object in OpenGL for one of my applications. The object is something like a human body and can be rotated on touch. How can I detect the position of the touch on this 3D object? That is, if the user touches the head, I have to detect that it is the head; if the touch is on the hand, that has to be identified too. It should work even if the object has been rotated to some other orientation. I think the coordinates of the touch on the 3D object are required.
This is the method where I am getting the position of the touch on the view:
- (void) touchesBegan: (NSSet*) touches withEvent: (UIEvent*) event
{
    UITouch* touch = [touches anyObject];
    CGPoint location = [touch locationInView: self];
    m_applicationEngine->OnFingerDown(ivec2(location.x, location.y));
}
Can anyone help? Thanks in advance!

Forget about ray tracing and other top-notch algorithms. We used a simple trick for one of our applications (Iyan 3D) on the App Store. It does, however, need one extra render pass every time you finish rotating the scene to a new angle: render the different objects (head, hand, leg, etc.) in different colors (not their actual colors, but unique ones), read back the color in the rendered image at the screen position of the touch, and look up the object by its color.
With this method you can also change the resolution of the rendered image to balance accuracy and performance.
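A minimal sketch of the idea, assuming an OpenGL ES context is current on the rendering thread. drawSceneForPicking() is a hypothetical routine you would provide that renders each body part flat-shaded in its own unique color, using the same camera transform as the normal pass:
// Hypothetical picking pass: render each part in a unique flat color (no
// lighting, no textures), then read the pixel under the touch point.
enum BodyPart { PartNone, PartHead, PartHand, PartLeg };

BodyPart pickBodyPart(int touchX, int touchY, int viewHeight)
{
    drawSceneForPicking();   // assumed: flat unique colors, same camera as usual

    // glReadPixels uses a bottom-left origin while UIKit touches use a
    // top-left origin, so the Y coordinate has to be flipped.
    GLubyte pixel[4] = { 0, 0, 0, 0 };
    glReadPixels(touchX, viewHeight - touchY, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixel);

    // Map the color back to the object it encodes.
    if (pixel[0] == 255 && pixel[1] == 0 && pixel[2] == 0) return PartHead; // red
    if (pixel[1] == 255 && pixel[0] == 0 && pixel[2] == 0) return PartHand; // green
    if (pixel[2] == 255 && pixel[0] == 0 && pixel[1] == 0) return PartLeg;  // blue
    return PartNone;
}
Do the picking pass into the back buffer (or an offscreen framebuffer) and skip presenting it, so the user never sees the false-color frame.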

To determine the 3D location of the object I would suggest ray tracing.
Assuming the model is in world-space coordinates, you'll also need to know the world-space coordinates of the eye location and of the touch point on the image plane. Using those two points you can calculate a ray to intersect with the model, which I assume consists of triangles.
Then you can use a ray-triangle intersection test to determine the 3D location of the touch, by finding the triangle whose intersection is closest to the image plane. If you also want to know which triangle was touched, save that information while doing the intersection tests.
This page gives an example of how to do ray triangle intersection tests: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-9-ray-triangle-intersection/ray-triangle-intersection-geometric-solution/
Edit:
Updated to include some sample code. It's slightly modified code taken from a C++ ray-tracing project I did a while ago, so you'll need to adapt it a bit to get it working for iOS. Also, in its current form it only reports whether the ray intersects the triangle; it doesn't return the actual intersection point.
// d is the direction the ray is heading in
// o is the origin of the ray
// verts is the 3 vertices of the triangle
// faceNorm is the normal of the triangle surface
bool
Triangle::intersect(Vector3 d, Vector3 o, Vector3* verts, Vector3 faceNorm)
{
    // Check for line parallel to plane
    float r_dot_n = (dot(d, faceNorm));
    // If r_dot_n == 0, then the line and plane are parallel, but we need to
    // do the range check due to floating point precision
    if (r_dot_n > -0.001f && r_dot_n < 0.001f)
        return false;
    // Then we calculate the distance of the ray origin to the triangle plane
    float t = ( dot(faceNorm, (verts[0] - o)) / r_dot_n);
    if (t < 0.0)
        return false;
    // We can now calculate the barycentric coords of the intersection
    Vector3 ba_ca = cross(verts[1]-verts[0], verts[2]-verts[0]);
    float denom = dot(-d, ba_ca);
    // Distance along the ray to the intersection (unused in this stripped-down version)
    float dist_out = dot(o-verts[0], ba_ca) / denom;
    float b = dot(-d, cross(o-verts[0], verts[2]-verts[0])) / denom;
    float c = dot(-d, cross(verts[1]-verts[0], o-verts[0])) / denom;
    // Check if in tri or if b & c have NaN values
    if ( b < 0 || c < 0 || b+c > 1 || b != b || c != c)
        return false;
    // Use barycentric coordinates to calculate the intersection point
    Vector3 P = (1.f-b-c)*verts[0] + b*verts[1] + c*verts[2];
    return true;
}
The actual intersection point you would be interested in is P.
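To build the ray in the first place, you can unproject the touch point. Below is a hedged sketch, not part of the code above: it assumes a small vector/matrix library like the one used in intersect() (Vector3, Vector4, Matrix4), that invViewProj is the inverse of projection * modelview, and that helper names such as unproject, makePickRay and normalize are placeholders for whatever your math library provides:
// Hypothetical sketch: build a world-space pick ray from a touch position.
struct Ray { Vector3 o; Vector3 d; };   // origin and direction, as in intersect()

static Vector3 unproject(float ndcX, float ndcY, float ndcZ,
                         const Matrix4& invViewProj)
{
    // Multiply by the inverse view-projection matrix and divide by w.
    Vector4 p = invViewProj.transform(Vector4(ndcX, ndcY, ndcZ, 1.0f));
    return Vector3(p.x / p.w, p.y / p.w, p.z / p.w);
}

Ray makePickRay(float touchX, float touchY, float screenW, float screenH,
                const Matrix4& invViewProj)
{
    // Touch position in normalized device coordinates; UIKit's origin is the
    // top-left corner, so the Y axis has to be flipped.
    float x = (touchX / screenW) * 2.0f - 1.0f;
    float y = 1.0f - (touchY / screenH) * 2.0f;

    // Unproject one point on the near plane and one on the far plane.
    Vector3 nearPoint = unproject(x, y, -1.0f, invViewProj);
    Vector3 farPoint  = unproject(x, y,  1.0f, invViewProj);

    Ray ray;
    ray.o = nearPoint;
    ray.d = normalize(farPoint - nearPoint);
    return ray;
}
You would then call Triangle::intersect with ray.d and ray.o for every candidate triangle and keep the hit closest to the ray origin.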

Ray tracing is an option and is used in many applications for doing just that (picking). The problem with ray tracing is that it is a lot of work to get a pretty basic feature working. It can also be slow, but if you only have one ray to trace (say, the location of your finger), it should be okay.
OpenGL's API also provides a technique for picking objects. I suggest you look at, for instance: http://www.lighthouse3d.com/opengl/picking/
Finally, a last option is to project the vertices of the object into screen space and use simple 2D techniques to find out which faces of the object your finger overlaps.
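A rough sketch of that last option, under the assumption that you have a projectToScreen() helper which applies your modelview, projection, and viewport transforms; both that helper and the Vector2 type are placeholders, not part of the answer above:
// Hypothetical 2D picking: project each triangle into screen space and run a
// point-in-triangle test against the finger position.
static float edgeSign(const Vector2& p, const Vector2& a, const Vector2& b)
{
    // Sign of the 2D cross product: which side of edge a-b the point p lies on.
    return (p.x - b.x) * (a.y - b.y) - (a.x - b.x) * (p.y - b.y);
}

bool fingerOverTriangle(const Vector2& finger, const Vector3* verts /* 3 vertices */)
{
    // Project the triangle's world-space vertices to screen space.
    Vector2 a = projectToScreen(verts[0]);
    Vector2 b = projectToScreen(verts[1]);
    Vector2 c = projectToScreen(verts[2]);

    float s1 = edgeSign(finger, a, b);
    float s2 = edgeSign(finger, b, c);
    float s3 = edgeSign(finger, c, a);

    // The finger is inside the triangle if it lies on the same side of all
    // three edges (the signs do not disagree).
    bool hasNeg = (s1 < 0) || (s2 < 0) || (s3 < 0);
    bool hasPos = (s1 > 0) || (s2 > 0) || (s3 > 0);
    return !(hasNeg && hasPos);
}
As with the ray-traced version, test all candidate triangles and keep the one nearest the camera, since several faces may overlap the finger.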

Related

Car Collision Return Force - 3D Car Game

As per my game requirements, I apply a manual force when two cars collide with each other, so that they move back.
I want the correct code to achieve this. Here is an example of the collision response I want to get:
As per my understanding, I have written this code:
Vector3 reboundDirection = Vector3.Normalize(transform.position - other.transform.position);
reboundDirection.y = 0f;
int i = 0;
while (i < 3)
{
    myRigidbody.AddForce(reboundDirection * 100f, ForceMode.Force);
    appliedSpeed = speed * 0.5f;
    yield return new WaitForFixedUpdate();
    i++;
}
I am moving my cars using this code:
//Move the player forward
appliedSpeed += Time.deltaTime * 7f;
appliedSpeed = Mathf.Min(appliedSpeed, speed);
myRigidbody.velocity = transform.forward * appliedSpeed;
Still, from what I observe, I am not getting the collision response in the proper direction. What is the correct way to get the collision response shown in the image above?
Until you clarify why you have to use manual forces, or how you handle the forces generated by the Unity engine, I would like to stress one problem in your approach. You calculate the direction based on positions, but those positions are the centers of your cars. Therefore you do not get a correct direction, as you can see from the image below:
You calculate the direction between two pivot (center) points; therefore your force is a bit tilted, as in the left image. Instead, you can use ContactPoint and then calculate the direction.
To make this more concrete: in the above image you can see the region marked with a blue rectangle. You can get all the contact points for that region using Collision.contacts,
then calculate the center point, or centroid, like this:
Vector3 centroid = new Vector3(0, 0, 0);
foreach (ContactPoint contact in col.contacts)
{
    centroid += contact.point;
}
centroid = centroid / col.contacts.Length;
This is the center of the rectangle. To find the direction, you need its projection onto your car, like this:
Vector3 projection = gameObject.transform.position;
projection.x = centroid.x;
gameObject.GetComponent<Rigidbody>().AddForce((projection - centroid )*100, ForceMode.Impulse);
Since I do not know your setup, I just took the y and z values from the car's position and the x value from the centroid; that way you get a straight blue line, not an arrow tilted to the left as in the first image, even in case two of the second image. I hope I am being clear.

Unity function to access the 2D box immediately from the 3D pipeline?

In Unity, say you have a 3D object,
Of course, it's trivial to get the AABB; Unity has direct functions for that,
(You might have to "add up all the bounding boxes of the renderers" in the usual way, no issue.)
So Unity does indeed have a direct function to give you the 3D AABB box instantly, out of the internal mesh/render pipeline every frame.
Now, for the Camera in question, as positioned, that AABB indeed covers a certain 2D bounding box ...
In fact ... is there some sort of built-in, direct way to find that orange 2D box in Unity?
Question - does Unity have a function which immediately gives that 2D frustum box from the pipeline?
(Note that to do it manually you just make rays (or use world to screen space as Draco mentions, same) for the 8 points of the AABB; encapsulate those in 2D to make the orange box.)
I don't need a manual solution, I'm asking if the engine gives this somehow from the pipeline every frame?
Is there a call?
(Indeed, it would be even better to have this ...)
My feeling is that one or all of the occlusion system (in particular), the shaders, or the renderer would surely know the orange box, and perhaps even the blue box, inside the pipeline, right off the graphics card, just as the engine knows the AABB for a given mesh.
We know that Unity lets you tap the 3D AABB instantly every frame for a given mesh: does Unity in fact give the "2D frustum bound" as shown here?
As far as I am aware, there is no built-in for this.
However, finding the extremes yourself is really pretty easy. It is essentially how the mesh's bounding box (the cuboid shown in the screenshot) is computed; you're just doing it in a transformed space.
Loop through all the vertices of the mesh, doing the following:
Transform the point from local to world space (this handles dealing with scale and rotation)
Transform the point from world space to screen space
Determine if the new point's X and Y are above/below the stored min/max values, if so, update the stored min/max with the new value
After looping over all vertices, you'll have 4 values: min-X, min-Y, max-X, and max-Y. Now you can construct your bounding rectangle
You may also wish to first perform a gift wrapping (convex hull) pass on the model and only deal with the resulting hull, since no vertex of the mesh can ever project outside the projected bounds of its convex hull. If you intend to draw this screen-space rectangle while the model moves, scales, or rotates on screen, and therefore have to recompute the bounding box every frame, you'll want to do this and cache the result.
Note that this does not work if the model animates (e.g. if your humanoid stands up and does jumping jacks). Solving the animated case is much more difficult, as you would have to treat every frame of every animation as part of the original mesh for the purposes of the convex hull solving (to ensure that none of your animations ever move a part of the mesh outside the convex hull), increasing the complexity by a power.
3D bounding box
Get given GameObject 3D bounding box's center and size
Compute 8 corners
Transform positions to GUI space (screen space)
Function GUI3dRectWithObject will return the screen-space rectangle of the given GameObject's 3D bounding box.
2D bounding box
Iterate through every vertex in a given GameObject
Transform every vertex's position to world space, and transform to GUI space (screen space)
Find the 4 extreme values: x1, x2, y1, y2
Function GUI2dRectWithObject will return the screen-space 2D bounding box of the given GameObject, computed from its mesh vertices.
Code
public static Rect GUI3dRectWithObject(GameObject go)
{
    Vector3 cen = go.GetComponent<Renderer>().bounds.center;
    Vector3 ext = go.GetComponent<Renderer>().bounds.extents;
    Vector2[] extentPoints = new Vector2[8]
    {
        WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y-ext.y, cen.z-ext.z)),
        WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y-ext.y, cen.z-ext.z)),
        WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y-ext.y, cen.z+ext.z)),
        WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y-ext.y, cen.z+ext.z)),
        WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y+ext.y, cen.z-ext.z)),
        WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y+ext.y, cen.z-ext.z)),
        WorldToGUIPoint(new Vector3(cen.x-ext.x, cen.y+ext.y, cen.z+ext.z)),
        WorldToGUIPoint(new Vector3(cen.x+ext.x, cen.y+ext.y, cen.z+ext.z))
    };
    Vector2 min = extentPoints[0];
    Vector2 max = extentPoints[0];
    foreach (Vector2 v in extentPoints)
    {
        min = Vector2.Min(min, v);
        max = Vector2.Max(max, v);
    }
    return new Rect(min.x, min.y, max.x - min.x, max.y - min.y);
}
public static Rect GUI2dRectWithObject(GameObject go)
{
    Vector3[] vertices = go.GetComponent<MeshFilter>().mesh.vertices;
    // Start with extreme values so the first vertex always updates them.
    float x1 = float.MaxValue, y1 = float.MaxValue, x2 = float.MinValue, y2 = float.MinValue;
    foreach (Vector3 vert in vertices)
    {
        Vector2 tmp = WorldToGUIPoint(go.transform.TransformPoint(vert));
        if (tmp.x < x1) x1 = tmp.x;
        if (tmp.x > x2) x2 = tmp.x;
        if (tmp.y < y1) y1 = tmp.y;
        if (tmp.y > y2) y2 = tmp.y;
    }
    Rect bbox = new Rect(x1, y1, x2 - x1, y2 - y1);
    Debug.Log(bbox);
    return bbox;
}
public static Vector2 WorldToGUIPoint(Vector3 world)
{
    Vector2 screenPoint = Camera.main.WorldToScreenPoint(world);
    screenPoint.y = (float)Screen.height - screenPoint.y;
    return screenPoint;
}
Reference: Is there an easy way to get on-screen render size (bounds)?
Refer to this. It requires the game object to have a SkinnedMeshRenderer.
Camera camera = GetComponent<Camera>();
SkinnedMeshRenderer skinnedMeshRenderer = target.GetComponent<SkinnedMeshRenderer>();
// Get the real time vertices
Mesh mesh = new Mesh();
skinnedMeshRenderer.BakeMesh(mesh);
Vector3[] vertices = mesh.vertices;
for (int i = 0; i < vertices.Length; i++)
{
    // World space
    vertices[i] = target.transform.TransformPoint(vertices[i]);
    // GUI space
    vertices[i] = camera.WorldToScreenPoint(vertices[i]);
    vertices[i].y = Screen.height - vertices[i].y;
}
Vector3 min = vertices[0];
Vector3 max = vertices[0];
for (int i = 1; i < vertices.Length; i++)
{
    min = Vector3.Min(min, vertices[i]);
    max = Vector3.Max(max, vertices[i]);
}
Destroy(mesh);
// Construct a rect of the min and max positions
Rect r = Rect.MinMaxRect(min.x, min.y, max.x, max.y);
GUI.Box(r, "");

Leap Motion - Angle of proximal bone to metacarpal (side to side movement)

I am trying to get the angle between the bones, such as the metacarpal bone and the proximal bone (angle of moving the finger side to side, for example the angle when your index finger is as close to your thumb as you can move it and then the angle when your index finger is as close to your middle finger as you can move it).
I have tried using Vector3.Angle with the direction of the bones but that doesn't work as it includes the bending of the finger, so if the hand is in a fist it gives a completely different value to an open hand.
What I really want is a way I can "normalize" (I know normalizing isn't the correct term, but it's the best I could think of) the direction of the bones so that even if the finger is bent, the direction vector still points forwards rather than down, but stays in the direction of the finger (side to side).
I have added a diagram below to try and illustrate what I mean.
In the second diagram, the blue represents what I currently get if I use the bones' directions, the green is the metacarpal direction, and the red is what I want (from the side view). The first diagram shows what I am looking for from a top-down view. The blue line is the metacarpal bone direction and in this example the red line is the proximal bone direction, with the green smudge representing the angle I am looking for.
To get this value, you need to "uncurl" the finger direction based on the current metacarpal direction. It's a little involved in the end; you have to construct some basis vectors in order to uncurl the hand along juuust the right axis. Hopefully the comments in this example script will explain everything.
using Leap;
using Leap.Unity;
using UnityEngine;
public class MeasureIndexSplay : MonoBehaviour {

  // Update is called once per frame
  void Update () {
    var hand = Hands.Get(Chirality.Right);
    if (hand != null) {
      Debug.Log(GetIndexSplayAngle(hand));
    }
  }

  // Some member variables for drawing gizmos.
  private Ray _metacarpalRay;
  private Ray _proximalRay;
  private Ray _uncurledRay;

  /// <summary>
  /// This method returns the angle of the proximal bone of the index finger relative to
  /// its metacarpal, when ignoring any angle due to the curling of the finger.
  ///
  /// In other words, this method measures the "side-to-side" angle of the finger.
  /// </summary>
  public float GetIndexSplayAngle(Hand h) {
    var index = h.GetIndex();

    // These are the directions we care about.
    var metacarpalDir = index.bones[0].Direction.ToVector3();
    var proximalDir = index.bones[1].Direction.ToVector3();

    // Let's start with the palm basis vectors.
    var distalAxis = h.DistalAxis(); // finger axis
    var radialAxis = h.RadialAxis(); // thumb axis
    var palmarAxis = h.PalmarAxis(); // palm axis

    // We need a basis whose forward direction is aligned to the metacarpal, so we can
    // uncurl the finger with the proper uncurling axis. The hand's palm basis is close,
    // but not aligned with any particular finger, so let's fix that.
    //
    // We construct a rotation from the palm "finger axis" to align it to the metacarpal
    // direction. Then we apply that same rotation to the other two basis vectors so
    // that we still have a set of orthogonal basis vectors.
    var metacarpalRotation = Quaternion.FromToRotation(distalAxis, metacarpalDir);
    distalAxis = metacarpalRotation * distalAxis;
    radialAxis = metacarpalRotation * radialAxis;
    palmarAxis = metacarpalRotation * palmarAxis;

    // Note: At this point, we don't actually need the distal axis anymore, and we
    // don't need to use the palmar axis, either. They're included above to clarify that
    // we're able to apply the aligning rotation to each axis to maintain a set of
    // orthogonal basis vectors, in case we wanted a complete "metacarpal-aligned basis"
    // for performing other calculations.

    // The radial axis, which has now been rotated a bit to be orthogonal to our
    // metacarpal, is the axis pointing generally towards the thumb. This is our curl
    // axis.
    // If you're unfamiliar with using directions as rotation axes, check out the images
    // here: https://en.wikipedia.org/wiki/Right-hand_rule
    var curlAxis = radialAxis;

    // We want to "uncurl" the proximal bone so that it is in line with the metacarpal,
    // when considered only on the radial plane -- this is the plane defined by the
    // direction approximately towards the thumb, and after the above step, it's also
    // orthogonal to the direction our metacarpal is facing.
    var proximalOnRadialPlane = Vector3.ProjectOnPlane(proximalDir, radialAxis);
    var curlAngle = Vector3.SignedAngle(metacarpalDir, proximalOnRadialPlane,
                                        curlAxis);

    // Construct the uncurling rotation from the axis and angle and apply it to the
    // *original* bone direction. We determined the angle of positive curl, so our
    // rotation flips its sign to rotate the other direction -- to _un_curl.
    var uncurlingRotation = Quaternion.AngleAxis(-curlAngle, curlAxis);
    var uncurledProximal = uncurlingRotation * proximalDir;

    // Upload some data for gizmo drawing (optional).
    _metacarpalRay = new Ray(index.bones[0].PrevJoint.ToVector3(),
                             index.bones[0].Direction.ToVector3());
    _proximalRay   = new Ray(index.bones[1].PrevJoint.ToVector3(),
                             index.bones[1].Direction.ToVector3());
    _uncurledRay   = new Ray(index.bones[1].PrevJoint.ToVector3(),
                             uncurledProximal);

    // This final direction is now uncurled and can be compared against the direction
    // of the metacarpal under the assumption it was constructed from an open hand.
    return Vector3.Angle(metacarpalDir, uncurledProximal);
  }

  // Draw some gizmos for debugging purposes.
  public void OnDrawGizmos() {
    Gizmos.color = Color.white;
    Gizmos.DrawRay(_metacarpalRay.origin, _metacarpalRay.direction);
    Gizmos.color = Color.blue;
    Gizmos.DrawRay(_proximalRay.origin, _proximalRay.direction);
    Gizmos.color = Color.red;
    Gizmos.DrawRay(_uncurledRay.origin, _uncurledRay.direction);
  }
}
For what it's worth, while the index finger is curled, tracked Leap hands don't have a whole lot of flexibility on this axis.

ARKit: How to select a group of 3D points from a 2D frame?

So the quest is this: I've got an ARPointCloud with a bunch of 3D points and I'd like to select them based on a 2D frame, from the perspective of the camera / screen.
I was thinking about converting the 2D frame to a 3D frustum and checking whether the points are inside that frustum. I'm not sure if this is the ideal method, and not even sure how to do it.
Would anyone know how to do this or have a better method of achieving this?
Given the size of the ARKit frame, W x H, and the camera intrinsics (focal lengths fx, fy and principal point cx, cy), we can create planes for the view frustum sides.
For example, using C++ / Eigen we can construct our four planes (which pass through the origin) as:
std::vector<Eigen::Vector3d> frustumPlanes;
frustumPlanes.emplace_back(Eigen::Vector3d( fx, 0, cx - W));
frustumPlanes.emplace_back(Eigen::Vector3d(-fx, 0, -cx));
frustumPlanes.emplace_back(Eigen::Vector3d( 0, fy, cy - H));
frustumPlanes.emplace_back(Eigen::Vector3d( 0, -fy, -cy));
We can then clip a 3D point by checking its position against the z < 0 half-space and the four sides of the frustum:
auto pointIsVisible = [&](const Eigen::Vector3d& P) -> bool {
    if (P.z() >= 0) return false;   // behind camera
    for (auto&& N : frustumPlanes) {
        if (P.dot(N) < 0)
            return false;           // outside frustum plane
    }
    return true;
};
Note that it is best to perform this clipping in 3D (before the projection) since points behind or near the camera or points far outside
the frustum can have unstable projection values (u,v).
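As a usage sketch (my addition, not part of the answer above): the points first have to be expressed in the camera's coordinate frame, with fx, fy, cx, cy taken from the ARKit camera intrinsics as above. If the 2D frame is only a sub-rectangle (x0, y0)-(x1, y1) of the full image, the same construction should work with the frame's edges substituted for 0, W, 0 and H when building the planes.
// Hypothetical usage: keep only the cloud points that fall inside the frustum.
// Assumes `points` holds the ARPointCloud positions already transformed into
// the camera coordinate frame (camera looking down -Z), and that
// pointIsVisible is the lambda defined above in the same scope.
std::vector<Eigen::Vector3d> selected;
for (const Eigen::Vector3d& P : points) {
    if (pointIsVisible(P)) {
        selected.push_back(P);
    }
}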

How to find the 3D coordinates of a surface from the click location of the mouse on the ILNumerics surface plots?

Currently our system uses the ILNumerics 3D plot cube class with an ILNumerics surface component to display a 3D meshed surface. An aim for our system is to be able to interrogate individual points on the surface from a mouse click on the plot. We have the MouseClick event set up on our plot; the problem is that I am unsure how to get the values for the particular point on the surface that has been clicked. Could anyone help with this issue?
The conversion from 2D mouse coordinates to 3D 'model' coordinates is possible - under some limitations:
The conversion is not unambiguous. The mouse event only provides 2 dimensions: X and Y screen coordinates. In the 3D model there might be more than one point 'behind' this 2D screen point. Therefore, the best you can get is to compute a line in 3D, starting at the camera and ending in infinite depth.
While in theory it would be possible at least to try to find the crossing of that line with the 3D objects, ILNumerics currently does not do this. Even in the simple case of a surface it is easy to construct a 3D model which crosses the line at more than one point.
For a simplified situation a solution exists: if the Z coordinate in 3D does not matter, one can use common matrix conversions in order to acquire the X and Y coordinates in 3D and use these only. Let's say your plot is a 2D line plot or a surface plot, but only watched from 'above' (i.e. the unrotated X-Y plane), and the Z coordinate of the point clicked is not of interest. Let's further assume you have set up an ILScene in a common Windows application with ILPanel:
private void ilPanel1_Load(object sender, EventArgs e) {
    var scene = new ILScene() {
        new ILPlotCube(twoDMode: true) {
            new ILSurface(ILSpecialData.sincf(20,30))
        }
    };
    scene.First<ILSurface>().MouseClick += (s, arg) => {
        // we start at the mouse event target -> this will be the
        // surface group node (the parent of "Fill" and "Wireframe")
        var group = arg.Target.Parent;
        if (group != null) {
            // walk up to the next camera node
            Matrix4 trans = group.Transform;
            while (!(group is ILCamera) && group != null) {
                group = group.Parent;
                // collect all nodes on the path up
                if (group != null) {
                    trans = group.Transform * trans;
                }
            }
            if (group != null && (group is ILCamera)) {
                // convert arg.LocationF to world coords
                // The Z coord is not provided by the mouse! -> choose arbitrary value
                var pos = new Vector3(arg.LocationF.X * 2 - 1, arg.LocationF.Y * -2 + 1, 0);
                // invert the matrix.
                trans = Matrix4.Invert(trans);
                // trans now converts from the world coord system (at the camera) to
                // the local coord system in the 'target' group node (surface).
                // In order to transform the mouse (viewport) position, we
                // left multiply the transformation matrix.
                pos = trans * pos;
                // view result in the window title
                Text = "Model Position: " + pos.ToString();
            }
        }
    };
    ilPanel1.Scene = scene;
}
What it does: it registers a MouseClick event handler on the surface group node. In the handler it accumulates the transformation matrices on the path from the clicked target (the surface group node) up to the next camera node the surface is a child of. While rendering, the (model) coordinates of the vertices are transformed by the local coordinate transformation matrix, hosted in every group node. All transformations are accumulated and so the vertex coordinates end up in the 'world coordinate' system, established by every camera. So rendering finds the 2D screen position from the 3D model vertex positions.
In order to find the 3D position from the 2D screen coordinates - one must go the other way around. In the example, we acquire the transformation matrices for every group node, multiply them all up and invert the resulting transformation matrix. This is needed, because such transforms naturally describe the conversion from the child node to the parent. Here, we need the other way around - hence the inversion is necessary.
This method gives the correct 3D coordinates at the mouse position. However, keep the limitations in mind! Here we do not take into account any rotation of the plot cube (the plot cube must be left unrotated), nor any projection transforms (plot cubes use an orthographic transform by default, which is basically a no-op). In order to account for those variables as well, you may extend the example accordingly.