I'm working on an iPhone app that supports a number of different gesture inputs. Currently there is single-finger select / drag, two-finger scroll, and two-finger pinch zoom-in / zoom-out. I want to add two-finger rotation (your fingers rotate a point between them), but I can't figure out how to get it to work right. The other gestures were all linear, so they were pretty much just a matter of using the dot or cross product.
I'm thinking I've got to store the slope between each finger's previous and current points, and if the angle between the two movement vectors is near 90 degrees, then a rotation is possible. If the angle on the next finger movement is also near 90 degrees, and one finger's direction changed positively while the other's changed negatively, then you've got a rotation. The problem is that I need a really clean distinction between this gesture and the other ones, and the above isn't far enough removed.
Any suggestions?
EDIT: Here's how I did it using vector analysis (as opposed to the pixel-matching suggestion below). Note that I use my own Vector struct here; you should be able to guess what each function does:
//First, find the vector formed by the first touch's previous and current positions.
struct Vector2f firstChange = getSubtractedVector([theseTouches get:0], [lastTouches get:0]);
//We're going to store whether or not we should scroll.
BOOL scroll = NO;
//If there was only one touch, then we'll scroll no matter what.
if ([theseTouches count] <= 1)
{
scroll = YES;
}
//Otherwise, we might scroll, scale, or rotate.
else
{
//In the case of multiple touches, we need to test the slope between the two touches.
//If they're going in roughly the same direction, we should scroll. If not, zoom.
struct Vector2f secondChange = getSubtractedVector([theseTouches get:1], [lastTouches get:1]);
//Get the dot product of the two change vectors.
float dotChanges = getDotProduct(&firstChange, &secondChange);
//Get the 2D cross product of the two normalized change vectors.
struct Vector2f normalFirst = getNormalizedVector(&firstChange);
struct Vector2f normalSecond = getNormalizedVector(&secondChange);
float crossChanges = getCrossProduct(&normalFirst, &normalSecond);
//If the magnitude of the cross product (the sine of the angle between the two normalized directions) is at most sinf(30 degrees) and the dot product is positive, then the angle between them is 30 degrees or less.
if (fabsf(crossChanges) <= SCROLL_MAX_CROSS && dotChanges > 0)
{
scroll = YES;
}
//Otherwise, they're in different directions so we should zoom or rotate.
else
{
//Store the vectors represented by the two sets of touches.
struct Vector2f previousDifference = getSubtractedVector([lastTouches get:1], [lastTouches get:0]);
struct Vector2f currentDifference = getSubtractedVector([theseTouches get:1], [theseTouches get:0]);
//Also find the normals of the two vectors.
struct Vector2f previousNormal = getNormalizedVector(&previousDifference);
struct Vector2f currentNormal = getNormalizedVector(&currentDifference);
//Find the distance between the two previous points and the two current points.
float previousDistance = getMagnitudeOfVector(&previousDifference);
float currentDistance = getMagnitudeOfVector(&currentDifference);
//Find the angles between the two previous points and the two current points.
float angleBetween = atan2(previousNormal.y,previousNormal.x) - atan2(currentNormal.y,currentNormal.x);
//If we had a short change in distance and the angle between touches is a big one, rotate.
if ( fabsf(previousDistance - currentDistance) <= ROTATE_MIN_DISTANCE && fabsf(angleBetween) >= ROTATE_MAX_ANGLE)
{
if (angleBetween > 0)
{
printf("Rotate right.\n");
}
else
{
printf("Rotate left.\n");
}
}
else
{
//Dot the difference between the two change vectors with the previous between-finger vector.
//A positive result means the fingers are moving apart (zoom in); negative means they're moving together (zoom out).
struct Vector2f differenceChange = getSubtracted(&secondChange, &firstChange);
float dotDifference = getDot(&previousDifference, &differenceChange);
if (dotDifference > 0)
{
printf("Zoom in.\n");
}
else
{
printf("Zoom out.\n");
}
}
}
}
if (scroll)
{
printf("Scroll.\n");
}
You should note that if you're just doing image manipulation or direct rotation / zooming, then the above approach should be fine. However, if you're like me and you're using a gesture to trigger something that takes time to load, then you'll likely want to avoid performing the action until that gesture has been detected a few times in a row. With my code the distinction between the gestures is still not perfectly clean, so occasionally within a run of zooms you'll get a rotation, or vice versa.
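For what it's worth, here's a minimal Swift sketch of that "only act after a few consistent detections" idea (the type names and the streak length are hypothetical, purely for illustration):
enum GestureKind { case scroll, zoom, rotate }
struct GestureDebouncer {
    // How many identical classifications in a row we require before acting.
    let requiredStreak = 3
    private var lastKind: GestureKind?
    private var streak = 0
    // Feed each per-frame classification in; the gesture is returned only once
    // it has been seen requiredStreak times in a row.
    mutating func feed(_ kind: GestureKind) -> GestureKind? {
        if kind == lastKind {
            streak += 1
        } else {
            lastKind = kind
            streak = 1
        }
        return streak >= requiredStreak ? kind : nil
    }
}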
I've done that before by finding the previous and current distances between the two fingers, and the angle between the previous and current lines.
Then I picked some empirical thresholds for that distance delta and angle theta, and that has worked out pretty well for me.
If the change in distance was greater than my distance threshold and the change in angle was less than my angle threshold, I scaled the image; otherwise I rotated it.
Two-finger scroll seems easy to distinguish from these.
BTW, in case you are storing these values yourself: the touches already have their previous point values stored.
CGPoint previousPoint1 = [self scalePoint:[touch1 previousLocationInView:nil]];
CGPoint previousPoint2 = [self scalePoint:[touch2 previousLocationInView:nil]];
CGPoint currentPoint1 = [self scalePoint:[touch1 locationInView:nil]];
CGPoint currentPoint2 = [self scalePoint:[touch2 locationInView:nil]];
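If it helps, here's a minimal Swift sketch of that thresholding idea using those previous/current points (the constants, the enum and the function are mine, purely for illustration, and the thresholds would need to be tuned empirically):
import UIKit
// Hypothetical thresholds, in points and radians respectively.
let distanceDeltaThreshold: CGFloat = 10.0
let angleDeltaThreshold: CGFloat = 0.04
enum TwoFingerGesture { case scale, rotate }
func classify(previous1: CGPoint, previous2: CGPoint,
              current1: CGPoint, current2: CGPoint) -> TwoFingerGesture {
    // Distance between the fingers before and after the move.
    let previousDistance = hypot(previous2.x - previous1.x, previous2.y - previous1.y)
    let currentDistance = hypot(current2.x - current1.x, current2.y - current1.y)
    // Angle of the line joining the fingers before and after the move.
    let previousAngle = atan2(previous2.y - previous1.y, previous2.x - previous1.x)
    let currentAngle = atan2(current2.y - current1.y, current2.x - current1.x)
    let distanceDelta = abs(currentDistance - previousDistance)
    var angleDelta = abs(currentAngle - previousAngle)
    // Keep the angular difference in [0, pi] so a wrap around +/- pi doesn't look huge.
    if angleDelta > .pi { angleDelta = 2 * .pi - angleDelta }
    // Big change in separation with little change in angle: pinch. Otherwise: rotate.
    if distanceDelta > distanceDeltaThreshold && angleDelta < angleDeltaThreshold {
        return .scale
    } else {
        return .rotate
    }
}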
Two fingers, both moving, in opposite(ish) directions. What gesture conflicts with this?
Pinch/zoom comes close, I guess, but whereas pinch/zoom starts off moving away from a center point (if you trace backwards along each finger's path, the lines are parallel and close together), rotate will initially have parallel lines (tracing backwards) that are far apart, and those lines will constantly change slope while keeping their separation.
edit: You know--both of these could be solved with the same algorithm.
Rather than calculating lines, calculate the pixel under each finger. If the fingers move, transform the image so that the two initial pixels are still under the two fingers.
This solves all two-finger actions, including scroll.
Two-finger scroll or zoom might look a little wobbly at times since it will do the other operations as well, but this is how the Maps app seems to work (excluding rotate, which it doesn't have).
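As a rough sketch of that idea (the function, its parameter names and the use of CGAffineTransform are my assumptions, not from the original answer), you can build the transform that keeps the initial content under both fingers directly from the two point correspondences:
import UIKit
// p1/p2 are where the two fingers started, c1/c2 are where they are now.
func transformKeepingPointsUnderFingers(p1: CGPoint, p2: CGPoint,
                                        c1: CGPoint, c2: CGPoint) -> CGAffineTransform {
    let previousVector = CGPoint(x: p2.x - p1.x, y: p2.y - p1.y)
    let currentVector = CGPoint(x: c2.x - c1.x, y: c2.y - c1.y)
    // Scale is the ratio of finger separations; rotation is the change in the
    // angle of the line joining the fingers.
    let scale = hypot(currentVector.x, currentVector.y) / hypot(previousVector.x, previousVector.y)
    let rotation = atan2(currentVector.y, currentVector.x) - atan2(previousVector.y, previousVector.x)
    // Rotate and scale about the first finger's start point, then carry that
    // point onto the first finger's current position.
    var t = CGAffineTransform(translationX: c1.x, y: c1.y)
    t = t.rotated(by: rotation)
    t = t.scaledBy(x: scale, y: scale)
    t = t.translatedBy(x: -p1.x, y: -p1.y)
    return t
}
Plain scrolling falls out of the same transform: when both fingers move together, the scale stays near 1 and the rotation near 0, leaving just the translation.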
I am trying to get the angle between the bones, such as the metacarpal bone and the proximal bone (angle of moving the finger side to side, for example the angle when your index finger is as close to your thumb as you can move it and then the angle when your index finger is as close to your middle finger as you can move it).
I have tried using Vector3.Angle with the direction of the bones but that doesn't work as it includes the bending of the finger, so if the hand is in a fist it gives a completely different value to an open hand.
What I really want is a way I can "normalize" (I know normalizing isn't the correct term, but it's the best I could think of) the direction of the bones, so that even if the finger is bent, the direction vector would still point forwards rather than down, but would follow the side-to-side direction of the finger.
I have added a diagram below to try and illustrate what I mean.
In the second diagram, the blue represents what I currently get if I use the bones' directions, the green is the metacarpal direction, and the red is what I want (from the side view). The first diagram shows what I am looking for from a top-down view. The blue line is the metacarpal bone direction and in this example the red line is the proximal bone direction, with the green smudge representing the angle I am looking for.
To get this value, you need to "uncurl" the finger direction based on the current metacarpal direction. It's a little involved in the end; you have to construct some basis vectors in order to uncurl the hand along juuust the right axis. Hopefully the comments in this example script will explain everything.
using Leap;
using Leap.Unity;
using UnityEngine;
public class MeasureIndexSplay : MonoBehaviour {
// Update is called once per frame
void Update () {
var hand = Hands.Get(Chirality.Right);
if (hand != null) {
Debug.Log(GetIndexSplayAngle(hand));
}
}
// Some member variables for drawing gizmos.
private Ray _metacarpalRay;
private Ray _proximalRay;
private Ray _uncurledRay;
/// <summary>
/// This method returns the angle of the proximal bone of the index finger relative to
/// its metacarpal, when ignoring any angle due to the curling of the finger.
///
/// In other words, this method measures the "side-to-side" angle of the finger.
/// </summary>
public float GetIndexSplayAngle(Hand h) {
var index = h.GetIndex();
// These are the directions we care about.
var metacarpalDir = index.bones[0].Direction.ToVector3();
var proximalDir = index.bones[1].Direction.ToVector3();
// Let's start with the palm basis vectors.
var distalAxis = h.DistalAxis(); // finger axis
var radialAxis = h.RadialAxis(); // thumb axis
var palmarAxis = h.PalmarAxis(); // palm axis
// We need a basis whose forward direction is aligned to the metacarpal, so we can
// uncurl the finger with the proper uncurling axis. The hand's palm basis is close,
// but not aligned with any particular finger, so let's fix that.
//
// We construct a rotation from the palm "finger axis" to align it to the metacarpal
// direction. Then we apply that same rotation to the other two basis vectors so
// that we still have a set of orthogonal basis vectors.
var metacarpalRotation = Quaternion.FromToRotation(distalAxis, metacarpalDir);
distalAxis = metacarpalRotation * distalAxis;
radialAxis = metacarpalRotation * radialAxis;
palmarAxis = metacarpalRotation * palmarAxis;
// Note: At this point, we don't actually need the distal axis anymore, and we
// don't need to use the palmar axis, either. They're included above to clarify that
// we're able to apply the aligning rotation to each axis to maintain a set of
// orthogonal basis vectors, in case we wanted a complete "metacarpal-aligned basis"
// for performing other calculations.
// The radial axis, which has now been rotated a bit to be orthogonal to our
// metacarpal, is the axis pointing generally towards the thumb. This is our curl
// axis.
// If you're unfamiliar with using directions as rotation axes, check out the images
// here: https://en.wikipedia.org/wiki/Right-hand_rule
var curlAxis = radialAxis;
// We want to "uncurl" the proximal bone so that it is in line with the metacarpal,
// when considered only on the radial plane -- this is the plane defined by the
// direction approximately towards the thumb, and after the above step, it's also
// orthogonal to the direction our metacarpal is facing.
var proximalOnRadialPlane = Vector3.ProjectOnPlane(proximalDir, radialAxis);
var curlAngle = Vector3.SignedAngle(metacarpalDir, proximalOnRadialPlane,
curlAxis);
// Construct the uncurling rotation from the axis and angle and apply it to the
// *original* bone direction. We determined the angle of positive curl, so our
// rotation flips its sign to rotate the other direction -- to _un_curl.
var uncurlingRotation = Quaternion.AngleAxis(-curlAngle, curlAxis);
var uncurledProximal = uncurlingRotation * proximalDir;
// Upload some data for gizmo drawing (optional).
_metacarpalRay = new Ray(index.bones[0].PrevJoint.ToVector3(),
index.bones[0].Direction.ToVector3());
_proximalRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
index.bones[1].Direction.ToVector3());
_uncurledRay = new Ray(index.bones[1].PrevJoint.ToVector3(),
uncurledProximal);
// This final direction is now uncurled and can be compared against the direction
// of the metacarpal under the assumption it was constructed from an open hand.
return Vector3.Angle(metacarpalDir, uncurledProximal);
}
// Draw some gizmos for debugging purposes.
public void OnDrawGizmos() {
Gizmos.color = Color.white;
Gizmos.DrawRay(_metacarpalRay.origin, _metacarpalRay.direction);
Gizmos.color = Color.blue;
Gizmos.DrawRay(_proximalRay.origin, _proximalRay.direction);
Gizmos.color = Color.red;
Gizmos.DrawRay(_uncurledRay.origin, _uncurledRay.direction);
}
}
For what it's worth, while the index finger is curled, tracked Leap hands don't have a whole lot of flexibility on this axis.
I've been working on a top-view game where there are two virtual joysticks to control a character. For the joysticks, I've used Spritekit-Joystick. The character shoots automatically. The right joystick is used for changing direction, and I have implemented this function to move the character:
-(void)update:(CFTimeInterval)currentTime {
/* Called before each frame is rendered */
if (self.imageJoystick.velocity.x != 0 || self.imageJoystick.velocity.y != 0)
{
myCharacter.zRotation = self.imageJoystick.angularVelocity;
}
}
For the self.imageJoystick.angularVelocity, the Joystick library uses this code (the code is available from the link above, but just for the sake of making things easier and more readily available):
angularVelocity = -atan2(thumbNode.position.x - self.anchorPointInPoints.x, thumbNode.position.y - self.anchorPointInPoints.y);
There's a problem with the rotation. When I push the joystick (thumbNode) up, the character points to the right; I've used NSLog to get the value, and it shows around 0. When I push the joystick to the left, the character points upwards, and I get a reading of around 1.5 for angularVelocity. Pushing it down points the character to the left and gives a reading of around -3. Finally, when I push right, the character points down (south), and I get a reading of around 1.5.
The same thing happens with the 'bullets' I shoot out. For now I'm using beams, for which I found the answer in this [POST]. I have added it below for your convenience (not my answer, but I've changed it to be used in my game):
[[self childNodeWithName:@"RayBeam"] removeFromParent];
int x = myCharacter.position.x + 1000 * cos(myCharacter.zRotation);
int y = myCharacter.position.y + 1000 * sin(myCharacter.zRotation);
SKShapeNode* beam1 = [SKShapeNode node];
CGMutablePathRef pathToDraw = CGPathCreateMutable();
CGPathMoveToPoint(pathToDraw, NULL, myCharacter.position.x, myCharacter.position.y);
CGPathAddLineToPoint(pathToDraw, NULL, x, y);
beam1.path = pathToDraw;
[beam1 setStrokeColor:[UIColor redColor]];
[beam1 setName:@"RayBeam"];
[self addChild:beam1];
The exact same thing happens: when I 1) push the joystick up, I shoot to the right; 2) push the joystick to the left, I shoot upward; 3) push downward, I shoot to the left; and 4) push the joystick to the right, I shoot downward.
I think I need to tweak the math and trigonometry, but maths isn't my forte, and I'm relatively new to this.
The image below shows the game (it's not complete at all), and you can see that I'm pushing up but the character is pointing, and shooting, to the right.
Note: my sprite also points to the right by default. I tried rotating all my sprites counterclockwise, and that works for the sprites, but not for the beam.
I really appreciate your help.
Thank you in advance.
If up is 0 radians (which is what the joystick's -atan2(dx, dy) formula gives you), then the angle is measured from the positive y axis rather than the positive x axis, so the roles of sin and cos swap and you should be calculating your beam's x and y with:
int x = myCharacter.position.x - 1000 * sin(myCharacter.zRotation);
int y = myCharacter.position.y + 1000 * cos(myCharacter.zRotation);
Alright, so I'm trying to do something simple here and I think I've overcomplicated it: I need an if statement saying that if this object goes off the screen (into negative y coordinates), something should happen. I can't get it to work.
I've tried a number of things, including if statements that compare against specific numbers like this, first testing equality and then greater/less than:
if block1.position.y == -50 {
savior.hidden = true
}
I've tried checking whether the object's y position is less than self.size.height:
if block1.position.y < self.size.height {
savior.hidden = true
}
And I've tried placing an object at the off-screen point and having an if statement compare the two objects' y positions:
if block1.position.y == ptBlock1.position.y {
savior.hidden = true
}
And nothing's working. Block1, the object I'm working with, is being sent to that specific point with an SKAction, so I know that it's getting there:
var moveDownLeft = SKAction.moveTo(CGPointMake(self.size.width * 0.35,-50), duration:5.5)
block1.runAction(moveDownLeft)
Why won't the if statement work?
EDIT:
I have tried this, and even when block1 visibly has a y position lower than ptBlock1, nothing happens:
if block1.position.y < ptBlock1.position.y {
savior.hidden = true
}
else if block1.position.y > ptBlock1.position.y {
savior.hidden = false
}
this object goes off the screen (into negative y coordinates)
You're making a false assumption there. Off the screen is not necessarily negative y coordinates.
The position of an SKNode is not measured with respect to the screen; it is measured with respect to its parent node, ultimately the SKScene. And the SKScene can be much bigger than the screen! You need to convert from scene coordinates to view (screen) coordinates if you want to know what's happening relative to the screen.
(Just to give an example, if you make a new SpriteKit project from the template and log on touchesBegan to show the tap position, you will discover that a tap in the top left corner is at about {303,758}. So in that coordinate system an object is off the screen to the top if its y is greater than about 760. Contrast that with screen coordinates, where you are off the screen to the top if you are less than 0. These are very different coordinate systems!)
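For illustration, here's a sketch of that conversion in the same Swift dialect as the question (it assumes block1 and savior are properties of your SKScene subclass, that block1 is a direct child of the scene, and that "fell off the bottom of the screen" is the case you care about):
// Call this from update(), for example.
func hideSaviorIfBlockLeftScreen() {
    guard let view = self.view else { return }
    // Convert the node's position from scene coordinates to view coordinates.
    let pointInView = self.convertPointToView(block1.position)
    // In view coordinates the origin is the top-left corner and y grows downward,
    // so falling off the bottom means a y greater than the view's height.
    if pointInView.y > view.bounds.height {
        savior.hidden = true
    }
}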
I have created a 3D object in OpenGL for one of my applications. The object is something like a human body and can be rotated on touch. How can I detect the position of the touch on this 3D object? That is, if the user touches the head, I have to detect that it is the head; if the touch is on the hand, then that has to be identified too. It should work even if the object has been rotated to face some other direction. I think I need the coordinates of the touch on the 3D object.
This is the method where I am getting the position of touch on the view.
- (void) touchesBegan: (NSSet*) touches withEvent: (UIEvent*) event
{
UITouch* touch = [touches anyObject];
CGPoint location = [touch locationInView: self];
m_applicationEngine->OnFingerDown(ivec2(location.x, location.y));
}
Can anyone help? Thanks in advance!
Forget about ray tracing and other top-notch algorithms. We used a simple trick for one of our applications (Iyan 3D) on the App Store, but this technique needs one extra render pass every time you finish rotating the scene to a new angle. Render the different objects (head, hand, leg, etc.) in different colors (not their actual colors, but unique ones). Then read the color in the rendered image at the screen position of the touch; you can find the object based on its color.
In this method you can change the rendered image resolution to balance accuracy and performance.
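As a sketch of the readback step in Swift (it assumes the picking pass has already been rendered into the currently bound framebuffer with each body part drawn in its own flat color; the function name and parameters are mine):
import OpenGLES
func pickedColor(atTouchX x: Int, touchY y: Int, viewportHeight: Int) -> (red: UInt8, green: UInt8, blue: UInt8) {
    var pixel = [UInt8](repeating: 0, count: 4)
    // OpenGL's window origin is the bottom-left corner, so flip the touch's y.
    glReadPixels(GLint(x), GLint(viewportHeight - y), 1, 1,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), &pixel)
    return (pixel[0], pixel[1], pixel[2])
}
You would then look the returned color up in your own color-to-body-part table.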
To determine the 3D location of the object I would suggest ray tracing.
Assuming the model is in worldspace coordinates you'll also need to know the worldspace coordinates of the eye location and the worldspace coordinates of the image plane. Using those two points you can calculate a ray which you will use to intersect with the model, which I assume consists of triangles.
Then you can use the ray triangle test to determine the 3D location of the touch, by finding the triangle that has the closest intersection to the image plane. If you want which triangle is touched you will also want to save that information when you do the intersection tests.
This page gives an example of how to do ray triangle intersection tests: http://www.scratchapixel.com/lessons/3d-basic-lessons/lesson-9-ray-triangle-intersection/ray-triangle-intersection-geometric-solution/
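As a sketch of how the pick ray itself might be built from a touch point (the names, the viewport tuple and the use of simd are my assumptions, not part of the original answer):
import simd
func pickRay(touchX: Float, touchY: Float,
             viewport: (x: Float, y: Float, width: Float, height: Float),
             inverseViewProjection: simd_float4x4,
             eye: SIMD3<Float>) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    // Touch point to normalized device coordinates; flip y because UIKit's origin
    // is the top-left corner while NDC y grows upward.
    let ndcX = (touchX - viewport.x) / viewport.width * 2 - 1
    let ndcY = 1 - (touchY - viewport.y) / viewport.height * 2
    // Unproject a point on the near plane back into world space.
    let nearClip = SIMD4<Float>(ndcX, ndcY, -1, 1)
    let nearWorld4 = inverseViewProjection * nearClip
    let nearWorld = SIMD3<Float>(nearWorld4.x, nearWorld4.y, nearWorld4.z) / nearWorld4.w
    return (eye, simd_normalize(nearWorld - eye))
}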
Edit:
Updated to include some sample code. It's some slightly modified code I took from a C++ ray tracing project I did a while ago, so you'll need to modify it a bit to get it working for iOS. Also, the code in its current form isn't directly usable since it doesn't return the actual intersection point, just whether the ray intersects the triangle or not.
// d is the direction the ray is heading in
// o is the origin of the ray
// verts is the 3 vertices of the triangle
// faceNorm is the normal of the triangle surface
bool
Triangle::intersect(Vector3 d, Vector3 o, Vector3* verts, Vector3 faceNorm)
{
// Check for line parallel to plane
float r_dot_n = (dot(d, faceNorm));
// If r_dot_n == 0, then the line and plane are parallel, but we need to
// do the range check due to floating point precision
if (r_dot_n > -0.001f && r_dot_n < 0.001f)
return false;
// Then we calculate the distance of the ray origin to the triangle plane
float t = ( dot(faceNorm, (verts[0] - o)) / r_dot_n);
if (t < 0.0)
return false;
// We can now calculate the barycentric coords of the intersection
Vector3 ba_ca = cross(verts[1]-verts[0], verts[2]-verts[0]);
float denom = dot(-d, ba_ca);
// dist_out was an output parameter in the original project; here it's just the distance along the ray.
float dist_out = dot(o-verts[0], ba_ca) / denom;
float b = dot(-d, cross(o-verts[0], verts[2]-verts[0])) / denom;
float c = dot(-d, cross(verts[1]-verts[0], o-verts[0])) / denom;
// Check if in tri or if b & c have NaN values
if ( b < 0 || c < 0 || b+c > 1 || b != b || c != c)
return false;
// Use barycentric coordinates to calculate the intersection point
Vector3 P = (1.f-b-c)*verts[0] + b*verts[1] + c*verts[2];
return true;
}
The actual intersection point you would be interested in is P.
Ray tracing is an option and is used in many applications for doing just that (picking). The problem with ray tracing is that this solution is a lot of work to get a pretty simple basic feature working. Also ray tracing can be slow but if you have only one ray to trace (the location of your finger say), then it should be okay.
OpenGL's API also provides a technique for picking objects. I suggest you look at, for instance: http://www.lighthouse3d.com/opengl/picking/
Finally, a last option is to project the object's vertices into screen space and use simple 2D techniques to find out which faces of the object your finger overlaps.
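As a rough sketch of that last option, assuming you have the combined modelview-projection matrix and the viewport available (the names are mine):
import simd
func projectToScreen(_ vertex: SIMD3<Float>,
                     modelViewProjection: simd_float4x4,
                     viewport: (x: Float, y: Float, width: Float, height: Float)) -> SIMD2<Float>? {
    let clip = modelViewProjection * SIMD4<Float>(vertex.x, vertex.y, vertex.z, 1)
    guard clip.w != 0 else { return nil }
    // Perspective divide to normalized device coordinates, then map to the viewport.
    let ndc = SIMD3<Float>(clip.x, clip.y, clip.z) / clip.w
    let screenX = viewport.x + (ndc.x * 0.5 + 0.5) * viewport.width
    let screenY = viewport.y + (ndc.y * 0.5 + 0.5) * viewport.height // y grows upward, as in OpenGL
    return SIMD2<Float>(screenX, screenY)
}
Project the three vertices of each candidate face this way and test whether the touch point falls inside the resulting 2D triangle.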
I have some 3D models that I render with OpenGL in a 3D space, and I'm having some headaches moving the 'character' (that is, the camera) with rotations and translations inside this world.
I receive the input (i.e. the coordinates to move to / the degrees to turn) from some external event (imagine user input, or data from a GPS + compass device), and each event is either a rotation OR a translation.
I've written this method to handle these events:
- (void)moveThePlayerPositionTranslatingLat:(double)translatedLat Long:(double)translatedLong andRotating:(double)degrees{
[super startDrawingFrame];
if (degrees != 0)
{
glRotatef(degrees, 0, 0, 1);
}
if (translatedLat != 0)
{
glTranslatef(translatedLat, -translatedLong, 0);
}
[self redrawView];
}
Then in redrawView I'm actually drawing the scene and my models. It is something like:
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
NSInteger nModels = [models count];
for (NSInteger i = 0; i < nModels; i++)
{
MD2Object * mdobj = [models objectAtIndex:i];
glPushMatrix();
double * deltas = calloc(sizeof(double),2);
deltas[0] = currentCoords[0] - mdobj.modelPosition[0];
deltas[1] = currentCoords[1] - mdobj.modelPosition[1];
glTranslatef(deltas[0], -deltas[1], 0);
free(deltas);
[mdobj setupForRenderGL];
[mdobj renderGL];
[mdobj cleanupAfterRenderGL];
glPopMatrix();
}
[super drawView];
The problem appears when translation and rotation events are called one after the other. For example, I rotate incrementally for a few iterations (still around the origin), then translate, and finally rotate again, but the last rotation does not occur around the current (translated) position; it occurs around the old origin. I'm well aware that this happens when the order of transformations is inverted, but I believed that after drawing, the new center of the world would be given by the translated system.
What am I missing? How can I fix this? (any reference to OpenGL will be appreciated too)
I would recommend not doing cumulative transformations in the event handler, but instead internally storing the current values for your transformation and then transforming only once, but I don't know if this is the behaviour that you want.
Pseudocode:
someEvent(lat, long, deg)
{
currentLat += lat;
currentLong += long;
currentDeg += deg;
}
redraw()
{
glClear()
glRotatef(currentDeg, 0, 0, 1);
glTranslatef(currentLat, -currentLong, 0);
... // draw stuff
}
It sounds like a couple of things are happening here:
The first is that rotations occur about the origin. So when you translate and then rotate, you are not rotating about what you think is the origin, but about T⁻¹·o, the origin o transformed by the inverse of your translation T.
Second, you're making things quite a bit harder than they need to be. What you might want to consider instead is using gluLookAt. You essentially give it a position within your scene, a point in your scene to look at, and an 'up' vector, and it will set up the scene properly. To use it, keep track of where your camera is located; call that vector p. Also keep a vector n (for 'normal', indicating the direction you're looking) and u (your up vector). It will make things easier for more advanced features if n and u are orthonormal (i.e. they are orthogonal to each other and have unit length). If you do this, you can compute r = n x u (your 'right' vector), which will be a unit vector orthogonal to the other two. You then 'look at' p + n and provide u as the up vector.
Ideally, your n, u and r have some canonical form, for instance:
n = <0, 0, 1>
u = <0, 1, 0>
r = <1, 0, 0>
You then incrementally accumulate your rotations and apply them to the canonical form of your orientation vectors. You can use either Euler rotations or quaternion rotations to accumulate them (I've come to really appreciate the quaternion approach for a variety of reasons).
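As a minimal sketch of that bookkeeping using quaternions (the struct, its member names and the use of simd are mine, not from the answer above):
import simd
struct Camera {
    var position = SIMD3<Float>(0, 0, 0)
    var orientation = simd_quatf(angle: 0, axis: SIMD3<Float>(0, 0, 1)) // accumulated rotation
    // Canonical basis: forward n and up u; right r = n x u can be derived when needed.
    static let canonicalForward = SIMD3<Float>(0, 0, 1)
    static let canonicalUp = SIMD3<Float>(0, 1, 0)
    mutating func rotate(byDegrees degrees: Float, about axis: SIMD3<Float>) {
        let delta = simd_quatf(angle: degrees * .pi / 180, axis: simd_normalize(axis))
        orientation = simd_normalize(delta * orientation) // accumulate incrementally
    }
    mutating func translate(by delta: SIMD3<Float>) {
        // Move in the camera's own frame, so "forward" is wherever we're looking.
        position += orientation.act(delta)
    }
    // Arguments for gluLookAt: eye = p, center = p + n, up = u.
    var lookAtParameters: (eye: SIMD3<Float>, center: SIMD3<Float>, up: SIMD3<Float>) {
        let n = orientation.act(Camera.canonicalForward)
        let u = orientation.act(Camera.canonicalUp)
        return (position, position + n, u)
    }
}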