Using a swipe gesture, is it possible to calculate the distance of the swipe (up, down, left, right)?
- (CGFloat)distanceBetweenTwoPoints:(CGPoint)fromPoint toPoint:(CGPoint)toPoint
{
    // Euclidean distance between the start and end points.
    CGFloat dx = toPoint.x - fromPoint.x;
    CGFloat dy = toPoint.y - fromPoint.y;
    return sqrt(dx * dx + dy * dy);
}
Hope this helps; just pass the start and end points to this method.
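For example, here's a minimal sketch of how you might capture the two points in a UIView subclass (the class name and startPoint ivar are illustrative):

// Hypothetical view that measures the length of a swipe using the method above.
@interface SwipeMeasuringView : UIView
{
    CGPoint startPoint;
}
@end

@implementation SwipeMeasuringView
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    startPoint = [[touches anyObject] locationInView:self];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint endPoint = [[touches anyObject] locationInView:self];
    NSLog(@"Swipe distance: %f", [self distanceBetweenTwoPoints:startPoint toPoint:endPoint]);
}
@end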
I've used an approach similar to this article to recognise swipe gestures of a certain number of pixels, but if your base SDK is at least iOS 4, try UISwipeGestureRecognizer, which can make it much easier; see this post.
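For reference, a minimal sketch of wiring one up in a view controller (the handler name is illustrative); note that UISwipeGestureRecognizer reports only the direction, not the distance, so for distance you still need the start/end points as above:

- (void)viewDidLoad
{
    [super viewDidLoad];
    UISwipeGestureRecognizer *swipe = [[UISwipeGestureRecognizer alloc]
        initWithTarget:self action:@selector(handleSwipe:)];
    swipe.direction = UISwipeGestureRecognizerDirectionLeft;
    [self.view addGestureRecognizer:swipe];
}

- (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer
{
    NSLog(@"Swiped left");
}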
I'm working on a project where I detect finger movement in order to move the cursor on the screen, so I have to translate the coordinates I get from the image into screen coordinates and move the cursor there.
Example: the finger is detected at (128, 127) and I want to find the equivalent of that point on the screen. The image is 640 x 480 and the screen is 1366 x 768.
Can anybody help me with this? I've tried different methods, including some I found on Stack Overflow, but nothing is satisfying.
Thanks in advance.
Try this:
ScreenX = event.X / PhoneWidth * ScreenWidth
ScreenY = event.Y / PhoneHeight * ScreenHeight
Where event.X would be the X coordinate where the user touched the screen.
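With the numbers from the question, that works out to (a quick sketch in C):

// Map a point detected in a 640 x 480 image onto a 1366 x 768 screen.
float imageX = 128.0f, imageY = 127.0f;
float screenX = imageX / 640.0f * 1366.0f; // ~273.2
float screenY = imageY / 480.0f * 768.0f;  // ~203.2
printf("screen point: (%.1f, %.1f)\n", screenX, screenY);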
Try using a map function. In C it could look like this:
// Linearly re-map x from the range [in_min, in_max] to [out_min, out_max].
long map(long x, long in_min, long in_max, long out_min, long out_max)
{
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
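Applied to the numbers in the question, the calls would look like this (note that the integer division truncates the result):

long screenX = map(128, 0, 640, 0, 1366); // 273
long screenY = map(127, 0, 480, 0, 768);  // 203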
I use the open-source iCarousel in my application for a carousel control. The carousel type I use is iCarouselTypeRotary, and in this type the images are arranged linearly. But I need the images to appear like the carousel in the attached images. What should I do to tilt my carousel slightly toward a top view, as in the style of the images below? Kindly help. Thanks in advance.
You can implement the 3D tilt manually.
In iCarousel.m, line 574:
return CATransform3DTranslate(transform, radius * sin(angle), 0.0f, radius * cos(angle) - radius);
change to:
float tilt = MAX_TILT_VALUE * cos(angle); // cos(angle) peaks at the front of the carousel, so front items get the largest vertical offset
return CATransform3DTranslate(transform, radius * sin(angle), tilt, radius * cos(angle) - radius);
To keep the code clear and reusable, implement the tilt offset as an option (similar to iCarouselOptionArc).
PS: If you want perspective scaling, you will need to add a scale transform that depends on cos(angle), similar to the tilt.
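As a rough sketch of the option approach (iCarouselOptionTilt is a hypothetical case you would add to the iCarouselOption enum yourself, and the 40.0f default is arbitrary):

// In iCarousel.m, where the rotary transform is built, read the option instead
// of a hard-coded MAX_TILT_VALUE:
CGFloat maxTilt = [self valueForOption:iCarouselOptionTilt withDefault:0.0f];
CGFloat tilt = maxTilt * cos(angle);
return CATransform3DTranslate(transform, radius * sin(angle), tilt, radius * cos(angle) - radius);

// In your delegate, supply a value for the new option:
- (CGFloat)carousel:(iCarousel *)carousel valueForOption:(iCarouselOption)option withDefault:(CGFloat)value
{
    return (option == iCarouselOptionTilt) ? 40.0f : value;
}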
Alternatively, check the iCarouselTypeWheel style: use a horizontal wheel and set the wheel radius as you want. I have done this with the vertical wheel type, but I think the horizontal wheel type should give the appearance shown above.
How can I make an interface like the Cortex interface with cocos2d for iPhone?
I already made a subclass of CCSprite and overrode the draw method like this:
-(void)draw {
    // Outer and inner circles of the wheel, centered on the screen.
    ccDrawCircle(CGPointMake(480/2, 320/2), 70, 0, 50000, NO);
    ccDrawCircle(CGPointMake(480/2, 320/2), 25, 0, 50000, NO);
    // Four spokes dividing the ring into four cells.
    ccDrawLine(CGPointMake(480/2, 320/2+25), CGPointMake(480/2, 320/2+70));
    ccDrawLine(CGPointMake(480/2+25, 320/2), CGPointMake(480/2+70, 320/2));
    ccDrawLine(CGPointMake(480/2, 320/2-25), CGPointMake(480/2, 320/2-70));
    ccDrawLine(CGPointMake(480/2-25, 320/2), CGPointMake(480/2-70, 320/2));
}
The problem is that I don't have any control over the circle (I can't set its position), and I don't know how to place text/images into these "cells". Another problem is touch detection. Maybe just CGRects? But what if I have more than 4 cells and one cell is "rotated"?
Any ideas?
I think you have two options here, but I don't recommend subclassing CCSprite; in fact I would very rarely recommend doing so, as there's almost no need to.
In my opinion, you could do either of these to get your image.
1. Use OpenGL to draw your image.
2. Use CCSprite to draw your image. (Cleaner)
Once you have drawn it, it's simply a matter of creating it when you press down on the screen.
Once you press down on the screen (or on any prescribed object), I would then employ a simple trigonometric solution.
This is the algorithm I would use:
Press down on the screen, get the position of the touch (sourcepos), and create your cortex image.
On movement of the finger on the screen, get the position (currentpos) and the angle and magnitude relative to the original touch (sourcepos).
Now, using simple angles, we can install different bounds on your CCSprite using if statements. It's also a good idea to use a #define kMinMagnitude X statement to ensure the user moves their finger adequately.
I suppose you can execute the //Load Twitter or //Load Facebook action either on the movement or on the cancellation of a touch. That's entirely up to you.
(Pseudocode):
dx = currentpos.x - sourcepos.x
dy = currentpos.y - sourcepos.y
mag = sqrt(dx*dx + dy*dy);
ang = CC_RADIANS_TO_DEGREES(atan2f(dy, dx));
if (ang > 0 && ang < 80 && mag > kMinMagnitude) //Load Twitter
if (ang > 80 && ang < 120 && mag > kMinMagnitude) //Load Facebook
I don't think making a subclass of CCSprite is the right choice here. You will probably want an NSObject that creates the CCSprites for you.
Also, CCSprite.position = CGPointMake(x, y) should allow you to set the position of the sprite. Don't forget to add it to a layer, just like any other CCNode object.
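To place text or images into the "cells", you can compute each cell's center with the same trigonometry used for the spokes. A sketch, assuming four cells, the 25/70 radii drawn above, and a cocos2d version with CCLabelTTF (run this from your layer):

// Place one label at the center of each of the 4 cells.
CGPoint center = ccp(480/2, 320/2);
float cellRadius = (25 + 70) / 2.0f; // midway between the inner and outer circles
for (int i = 0; i < 4; i++)
{
    // Offset by 45 degrees so each cell sits between two spokes.
    float angle = CC_DEGREES_TO_RADIANS(45 + i * 90);
    CCLabelTTF *label = [CCLabelTTF labelWithString:@"Cell" fontName:@"Helvetica" fontSize:14];
    label.position = ccp(center.x + cellRadius * cosf(angle), center.y + cellRadius * sinf(angle));
    [self addChild:label];
}

The same angle test from the pseudocode above then tells you which cell a touch falls in, however many cells you have.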
I'm working on an iPhone app with a lot of different gesture inputs that you can do. Currently there is single finger select / drag, two finger scroll, and two finger pinch zoom-in / zoom-out. I want to add in two finger rotation (your fingers rotate a point in between them), but I can't figure out how to get it to work right. All the other gestures were linear so they were only a matter of using the dot or cross product, pretty much.
I'm thinking I've got to store the slope between the previous two points of each finger, and if the angle between the vectors is near 90 degrees, then there is the possibility of a rotation. If the next finger movement's angle is also near 90 degrees, and the vector direction on one finger changed positively while the other changed negatively, then you've got a rotation. The problem is, I need a really clean distinction between this gesture and the other ones, and the above isn't far enough removed.
Any suggestions?
EDIT: Here's how I did it in a vector-analysis manner (as opposed to the suggestion below about matching pixels). Note that I use my own Vector struct in here; you should be able to guess what each function does:
//First, find the vector formed by the first touch's previous and current positions.
struct Vector2f firstChange = getSubtractedVector([theseTouches get:0], [lastTouches get:0]);
//We're going to store whether or not we should scroll.
BOOL scroll = NO;
//If there was only one touch, then we'll scroll no matter what.
if ([theseTouches count] <= 1)
{
scroll = YES;
}
//Otherwise, we might scroll, scale, or rotate.
else
{
//In the case of multiple touches, we need to test the slope between the two touches.
//If they're going in roughly the same direction, we should scroll. If not, zoom.
struct Vector2f secondChange = getSubtractedVector([theseTouches get:1], [lastTouches get:1]);
//Get the dot product of the two change vectors.
float dotChanges = getDotProduct(&firstChange, &secondChange);
//Get the 2D cross product of the two normalized change vectors.
struct Vector2f normalFirst = getNormalizedVector(&firstChange);
struct Vector2f normalSecond = getNormalizedVector(&secondChange);
float crossChanges = getCrossProduct(&normalFirst, &normalSecond);
//The cross product of two normalized vectors is the sine of the angle between them,
//so if its magnitude is at most SCROLL_MAX_CROSS (sinf of 30 degrees) and the dot
//product is positive, the two directions are within 30 degrees of each other.
if (fabsf(crossChanges) <= SCROLL_MAX_CROSS && dotChanges > 0)
{
scroll = YES;
}
//Otherwise, they're in different directions so we should zoom or rotate.
else
{
//Store the vectors represented by the two sets of touches.
struct Vector2f previousDifference = getSubtractedVector([lastTouches get:1], [lastTouches get:0]);
struct Vector2f currentDifference = getSubtractedVector([theseTouches get:1], [theseTouches get:0]);
//Also find the normals of the two vectors.
struct Vector2f previousNormal = getNormalizedVector(&previousDifference);
struct Vector2f currentNormal = getNormalizedVector(&currentDifference);
//Find the distance between the two previous points and the two current points.
float previousDistance = getMagnitudeOfVector(&previousDifference);
float currentDistance = getMagnitudeOfVector(&currentDifference);
//Find the angles between the two previous points and the two current points.
float angleBetween = atan2(previousNormal.y,previousNormal.x) - atan2(currentNormal.y,currentNormal.x);
//If we had a short change in distance and the angle between touches is a big one, rotate.
if ( fabsf(previousDistance - currentDistance) <= ROTATE_MIN_DISTANCE && fabsf(angleBetween) >= ROTATE_MAX_ANGLE)
{
if (angleBetween > 0)
{
printf("Rotate right.\n");
}
else
{
printf("Rotate left.\n");
}
}
else
{
//Get the dot product of the differences of the two points and the two vectors.
struct Vector2f differenceChange = getSubtractedVector(&secondChange, &firstChange);
float dotDifference = getDotProduct(&previousDifference, &differenceChange);
if (dotDifference > 0)
{
printf("Zoom in.\n");
}
else
{
printf("Zoom out.\n");
}
}
}
}
if (scroll)
{
printf("Scroll.\n");
}
You should note that if you're just doing image manipulation or direct rotation/zooming, then the above approach should be fine. However, if, like me, you're using a gesture to trigger something that takes time to load, then you'll probably want to avoid performing the action until that gesture has been activated a few times in a row. The separation between gestures with my code is still not perfect, so occasionally in a series of zooms you'll get a rotation, or vice versa.
I've done that before by finding the previous and current distances between the two fingers, and the angle between the previous and current lines.
Then I picked some empirical thresholds for that distance delta and angle theta, and that has worked out pretty well for me.
If the distance delta was greater than my threshold and the angle was less than my threshold, I scaled the image; otherwise I rotated it (see the sketch after the snippet below). Two-finger scroll seems easy to distinguish.
BTW, in case you are actually storing the values: the touches already have their previous point values stored.
CGPoint previousPoint1 = [self scalePoint:[touch1 previousLocationInView:nil]];
CGPoint previousPoint2 = [self scalePoint:[touch2 previousLocationInView:nil]];
CGPoint currentPoint1 = [self scalePoint:[touch1 locationInView:nil]];
CGPoint currentPoint2 = [self scalePoint:[touch2 locationInView:nil]];
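Putting those together, here's a sketch of the threshold test described above (kDistanceThreshold and kAngleThreshold are empirical values you would tune yourself; it ignores angle wrap-around at plus/minus pi for brevity):

// Distance between the two fingers, before and after the move.
CGFloat previousDistance = hypotf(previousPoint2.x - previousPoint1.x,
                                  previousPoint2.y - previousPoint1.y);
CGFloat currentDistance  = hypotf(currentPoint2.x - currentPoint1.x,
                                  currentPoint2.y - currentPoint1.y);
// Angle of the line between the fingers, before and after the move.
CGFloat previousAngle = atan2f(previousPoint2.y - previousPoint1.y,
                               previousPoint2.x - previousPoint1.x);
CGFloat currentAngle  = atan2f(currentPoint2.y - currentPoint1.y,
                               currentPoint2.x - currentPoint1.x);

if (fabsf(currentDistance - previousDistance) > kDistanceThreshold
    && fabsf(currentAngle - previousAngle) < kAngleThreshold)
{
    // Distance changed but the line barely turned: scale the image.
}
else
{
    // Otherwise: rotate the image.
}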
Two fingers, both moving, in opposite(ish) directions. What gesture conflicts with this?
Pinch/zoom, I guess, comes close, but whereas pinch/zoom starts off moving away from a center point (if you trace backwards from each line, your lines will be parallel and close together), rotate will initially have parallel lines (tracing backwards) that are far away from each other, and those lines will constantly change slope (while retaining distance).
Edit: you know, both of these could be solved with the same algorithm.
Rather than calculating lines, calculate the pixel under each finger. If the fingers move, translate the image so that the two initial pixels are still under the two fingers.
This solves all two-finger actions including scroll.
Two-finger scroll or zoom might look a little wobbly at times, since the algorithm will perform the other operations as well, but this is how the Maps app seems to work (excluding rotate, which it doesn't have).
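A sketch of that pixel-matching idea: solve for the similarity transform (translate, rotate, scale) that keeps the two original points under the two fingers, using the previous/current points from the snippet above:

// Vector between the two fingers, before and after the move.
CGFloat pdx = previousPoint2.x - previousPoint1.x;
CGFloat pdy = previousPoint2.y - previousPoint1.y;
CGFloat cdx = currentPoint2.x - currentPoint1.x;
CGFloat cdy = currentPoint2.y - currentPoint1.y;

// Scale is the ratio of finger separations; rotation is the change in angle.
CGFloat scale    = hypotf(cdx, cdy) / hypotf(pdx, pdy);
CGFloat rotation = atan2f(cdy, cdx) - atan2f(pdy, pdx);

// Map previousPoint1 to currentPoint1, rotating and scaling about it.
CGAffineTransform t = CGAffineTransformMakeTranslation(currentPoint1.x, currentPoint1.y);
t = CGAffineTransformRotate(t, rotation);
t = CGAffineTransformScale(t, scale, scale);
t = CGAffineTransformTranslate(t, -previousPoint1.x, -previousPoint1.y);
// Concatenate t onto the image view's existing transform on each move.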
I'm making a sound application using the accelerometer. It plays different sounds for movement by calculating the accelerometer values. But how can I find the direction of the movement, i.e. whether the user moved along the x-axis in the plus or minus direction, and likewise for the y-axis? How can I get this value from the accelerometer?
Please give some instructions, helping code, or a sample project.
You need to perform vector addition: calculate the sum of the two component vectors to get the resultant vector. The above article explains all the common methods of calculating it, but to do it programmatically you just have to apply the Pythagorean theorem and tan theta = b/a.
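In code that boils down to something like this (xx and yy being the raw accelerometer readings):

// Magnitude of the acceleration in the x/y plane (Pythagorean theorem),
// and the direction of the resultant vector; the signs of xx and yy
// tell you the plus/minus direction along each axis.
float magnitude = sqrtf(xx * xx + yy * yy);
float theta = atan2f(yy, xx); // direction in radians
if (xx > 0) { /* moving toward +x */ } else { /* moving toward -x */ }
if (yy > 0) { /* moving toward +y */ } else { /* moving toward -y */ }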
I think you would also need the magnetometer direction (to at least give you a bearing you could compare against), as well as the vector math mentioned above. This article does a better job of explaining how to add vectors (the first one glosses over the most likely case by just saying it's "hard"):
http://blog.dotphys.net/2008/09/basics-vectors-and-vector-addition/
You have to represent it using vectors; there is a delegate method below which returns the values you need.
Now, I haven't looked at the API too much yet, but I believe your direction vector is returned to you from the accelerometer.
The following code, from a tutorial you should take a look at here, may help:
- (void)acceleratedInX:(float)xx Y:(float)yy Z:(float)zz
{
// Create Status feedback string
NSString *xstring = [NSString stringWithFormat:
    @"X (roll, %4.1f%%): %f\nY (pitch %4.1f%%): %f\nZ (%4.1f%%): %f",
    100.0 - (xx + 1.0) * 100.0, xx,
    100.0 - (yy + 1.0) * 100.0, yy,
    100.0 - (zz + 1.0) * 100.0, zz
];
self.textView.text = xstring;
// Revert Arrow and then rotate to new coords
float angle = atan2(xx, yy);
angle += M_PI / 2.0;
CGAffineTransform affineTransform = CGAffineTransformIdentity;
affineTransform = CGAffineTransformConcat( affineTransform, CGAffineTransformMakeRotation(angle));
self.xarrow.transform = affineTransform;
}
- (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration {
[self acceleratedInX:acceleration.x Y:acceleration.y Z:acceleration.z];
}
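For completeness, the delegate method above only fires if you register for updates somewhere in your setup code, e.g. (the 60 Hz interval is just an example value):

// In viewDidLoad, for example:
UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
accelerometer.updateInterval = 1.0 / 60.0; // 60 Hz
accelerometer.delegate = self;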
There is also an easy-to-read article here which explains it clearly, along with sample code.