I have a quick question about tracking touches on the iPhone, and I can't seem to reach a conclusion, so any suggestions / ideas are greatly appreciated:
I want to track and identify touches on the iPhone; basically, every touch has a starting position and a current/moved position. Touches are stored in a std::vector and should be removed from the container once they end. Their position should be updated when they move, but I still want to keep track of where they initially started (for gesture recognition).
I am getting the touches from [event allTouches]. The problem is that the NSSet is unsorted, and I can't seem to match the touches already stored in the std::vector with the touches in the NSSet (so that I know which ones have ended and should be removed, which ones have moved, and so on).
Here is my code, which works perfectly with only one finger on the touch screen, of course, but with more than one I get unpredictable results...
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}

- (void)touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}

- (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}

- (void)touchesCancelled:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}
- (void)handleTouches:(NSSet*)allTouches
{
    for (int i = 0; i < (int)[allTouches count]; ++i)
    {
        UITouch* touch = [[allTouches allObjects] objectAtIndex:i];
        NSTimeInterval timestamp = [touch timestamp];
        CGPoint currentLocation = [touch locationInView:self];
        CGPoint previousLocation = [touch previousLocationInView:self];
        if ([touch phase] == UITouchPhaseBegan)
        {
            Finger finger;
            finger.start.x = currentLocation.x;
            finger.start.y = currentLocation.y;
            finger.end = finger.start;
            finger.hasMoved = false;
            finger.hasEnded = false;
            touchScreen->AddFinger(finger);
        }
        else if ([touch phase] == UITouchPhaseEnded || [touch phase] == UITouchPhaseCancelled)
        {
            Finger& finger = touchScreen->GetFingerHandle(i);
            finger.hasEnded = true;
        }
        else if ([touch phase] == UITouchPhaseMoved)
        {
            Finger& finger = touchScreen->GetFingerHandle(i);
            finger.end.x = currentLocation.x;
            finger.end.y = currentLocation.y;
            finger.hasMoved = true;
        }
    }
    touchScreen->RemoveEnded();
}
Thanks!
It appears the "proper" way to track multiple touches is by the pointer value of the UITouch object.
You can find more details in the "Handling a Complex Multi-Touch Sequence" section of this
Apple Developer Documentation
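A minimal sketch of that idea in the asker's C++ layer (the names TouchTracker, Finger, and Point are illustrative, not from any API): key the stored fingers by the UITouch pointer value, which stays stable for the lifetime of the touch, so the NSSet's ordering no longer matters.

```cpp
#include <cstddef>
#include <map>

struct Point { float x, y; };

struct Finger {
    Point start;   // where the touch began
    Point end;     // current/last known position
    bool hasMoved;
    bool hasEnded;
};

// Hypothetical tracker: fingers are keyed by the touch's pointer identity
// (the UITouch* value), which is stable while the finger stays down.
class TouchTracker {
public:
    void began(const void* key, Point p) {
        fingers_[key] = Finger{p, p, false, false};
    }
    void moved(const void* key, Point p) {
        auto it = fingers_.find(key);
        if (it != fingers_.end()) {
            it->second.end = p;
            it->second.hasMoved = true;
        }
    }
    void ended(const void* key) {
        fingers_.erase(key);  // touch finished; forget it
    }
    const Finger* find(const void* key) const {
        auto it = fingers_.find(key);
        return it == fingers_.end() ? nullptr : &it->second;
    }
    std::size_t count() const { return fingers_.size(); }
private:
    std::map<const void*, Finger> fingers_;
};
```

On the Objective-C side you would call began(touch, ...) from touchesBegan:, moved(touch, ...) from touchesMoved:, and so on, passing the UITouch pointer itself as the key.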
To fix your problem, scrap your handleTouches method. The first thing you do in handleTouches is switch on the touch phase, but that information is already given to you: if you receive the touch in touchesBegan, you know it is in UITouchPhaseBegan. By funneling touches from the four touch methods into one method, you are defeating the purpose of having four delegate methods.
In each of those methods, Apple gives you an opportunity to deal with a different phase of the current touch.
The second thing is that you don't need to search the event for the current touches; they are given to you as a parameter: touches.
An event is composed of sets of touches. For convenience, you are given the current touches directly, even though they can also be found within the event.
So, in touchesBegan, you start tracking a touch.
- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event {
    NSString *startPoint = NSStringFromCGPoint([[touches anyObject] locationInView:self]);
    NSDictionary *touchData = [NSDictionary dictionaryWithObjectsAndKeys:
                               startPoint, @"location",
                               touches, @"touch",
                               nil];
    [startingLocations addObject:touchData];
}
I'm using an array of dictionaries to hold my touch data.
Try to separate your code and move it into the appropriate touch method. For direction, Apple has a couple of sample projects that focus on touches and show you how to set up those methods.
Remember, these methods will get called automatically for each touch during each phase, you don't need to cycle through the event to find out what happened.
The pointer to each touch remains constant throughout the sequence; only the touch's data changes.
Also, I would read the iPhone OS programming guide section on event handling which goes into greater depth of what I said above with several diagrams explaining the relationship of touches to events over time.
An excerpt:
In iPhone OS, a UITouch object represents a touch, and a UIEvent object represents an event. An event object contains all touch objects for the current multi-touch sequence and can provide touch objects specific to a view or window (see Figure 3-2). A touch object is persistent for a given finger during a sequence, and UIKit mutates it as it tracks the finger throughout it. The touch attributes that change are the phase of the touch, its location in a view, its previous location, and its timestamp. Event-handling code evaluates these attributes to determine how to respond to the event.
You should be able to properly collate your touches by storing the previous location of all touches and then comparing these previous locations when new touches are detected.
In your -handleTouches method, you could put something like this in your for loop:
// ..existing code..
CGPoint previousLocation = [touch previousLocationInView:self];
// Loop through the previously stored touch locations (NSValue-wrapped CGPoints)
for (int j = 0; j < [previousTouchLocationArray count]; j++) {
    CGPoint stored = [[previousTouchLocationArray objectAtIndex:j] CGPointValue];
    if (CGPointEqualToPoint(previousLocation, stored)) {
        // Current touch matches - retrieve finger handle j and update its position
    }
}
// If the touch was not found, create a new Finger and an associated entry
Obviously you'll need to do some work to integrate this into your code, but I'm pretty sure you can use this idea to correctly identify touches as they move around the screen. Note that a CGPoint won't fit directly into an NSArray, which is why the snippet wraps the stored points in NSValue objects (you could use a different type of array instead).
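The same matching idea can be sketched in the asker's C++ layer (matchByPreviousLocation and the Finger fields are illustrative names): each stored finger remembers its last recorded position, and an incoming touch is matched to the finger whose stored position equals the touch's previousLocationInView: value. Exact float comparison often works here because both sides derive from the same stored value, but an epsilon is a safer default.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { float x, y; };

struct Finger {
    Point start;  // where the touch began
    Point end;    // last position we recorded for this finger
};

// Illustrative matcher: return the index of the finger whose last recorded
// position matches the touch's previousLocationInView: value, or -1 if this
// looks like a brand-new touch. An epsilon guards against float round-off.
int matchByPreviousLocation(const std::vector<Finger>& fingers, Point previous) {
    const float eps = 0.5f;  // half a point; tune as needed
    for (std::size_t i = 0; i < fingers.size(); ++i) {
        if (std::fabs(fingers[i].end.x - previous.x) < eps &&
            std::fabs(fingers[i].end.y - previous.y) < eps)
            return static_cast<int>(i);
    }
    return -1;  // no match: create a new Finger for this touch
}
```

Note that this can still mis-match if two fingers pass through the same point in the same frame, which is why keying on the UITouch pointer identity is the more robust approach.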
I'm having an issue where, within my touchesBegan method, I'm not getting back what I think I should.
I'm testing for a hit within a specific node. I've tried several methods, and none work. I've created a work-around, but would love to know if this is a bug or if I'm doing something wrong.
Here's the code:
Standard touchesBegan method:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    /* Called when a touch begins */
    SKSpriteNode *infoPanelNode = (SKSpriteNode *)[self childNodeWithName:@"infoPanelNode"];
    UITouch *touch = [touches anyObject];
    if (touch) {
        // e.g. infoPanelNode position:{508, 23} size:{446, 265.5} (also of note - infoPanelNode is a child of self)

        // This solution works
        CGPoint location = [touch locationInNode:self];
        // location == (x=77, y=170)
        bool withinInfoPanelNode = [self myPtInNode:infoPanelNode inPoint:location];
        // withinInfoPanelNode == false (CORRECT)

        // This one doesn't return the same result - returns true when the hit is not in the cell
        CGPoint infoLocation = [touch locationInNode:infoPanelNode];
        // infoLocation == (x=-862, y=294)
        bool withinInfoPanelNodeBuiltInResult = [infoPanelNode containsPoint:infoLocation];
        // withinInfoPanelNodeBuiltInResult == true (WRONG)

        // This one doesn't work either - returns an array with the infoPanelNode in it,
        // even though the hit point and node location are the same as shown above
        // NSArray *nodes = [self nodesAtPoint:location];
        // for (SKNode *node in nodes) {
        //     if (node == infoPanelNode)
        //         withinInfoPanelNode = true;
        // }

        // Code omitted - doing something with withinInfoPanelNode now
    }
}
My custom hit test code:
-(bool)myPtInNode:(SKSpriteNode *)node inPoint:(CGPoint)inPoint {
    if (node.position.x < inPoint.x && (node.position.x + node.size.width) > inPoint.x) {
        if (node.position.y < inPoint.y && (node.position.y + node.size.height) > inPoint.y) {
            return true;
        }
    }
    return false;
}
Anyone see what's going wrong here?
Thanks,
kg
I'm not sure exactly how SKCropNode will work, but in general, in order for containsPoint: to detect touches within it you need to give it a point relative to its parent node. The following code should work. Note the addition of .parent when calling locationInNode:
CGPoint infoLocation = [touch locationInNode:infoPanelNode.parent];
BOOL withinInfoPanelNodeBuiltInResult = [infoPanelNode containsPoint:infoLocation];
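The underlying rule is just a point-in-rect test where the rect (the node's frame) lives in the parent's coordinate space. A plain C++ sketch of what a containsPoint:-style check effectively does (the Point and Rect structs are illustrative):

```cpp
struct Point { float x, y; };
struct Rect  { float x, y, w, h; };  // origin + size, in the PARENT's coordinates

// A node's frame is expressed in its parent's coordinate system, so the
// point handed to a containsPoint:-style test must be in parent space too.
// Passing a point in the node's own space gives nonsense results.
bool frameContainsPoint(const Rect& frameInParent, const Point& pInParent) {
    return pInParent.x >= frameInParent.x &&
           pInParent.x <= frameInParent.x + frameInParent.w &&
           pInParent.y >= frameInParent.y &&
           pInParent.y <= frameInParent.y + frameInParent.h;
}
```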
Solved this problem and wanted to update everyone.
It turns out that this specific infoPanelNode (an SKSpriteNode) has a child node that is an SKCropNode. That crop node crops a much larger node (obviously a child of the crop node) so that only a small portion is viewable (allowing scrolling to portions of that node). Unfortunately, containsPoint: apparently combines the boundaries of all child nodes with the receiving node's own boundaries to form the rect used for the test. That would be understandable if it respected the SKCropNode's boundaries for its children, but apparently it doesn't, so you have to roll your own hit test if you have this type of setup.
I made a custom control that inherits from UIView and adds a lot of UIButtons to the view.
When a user touches and moves, I do some animation: I move the buttons in touchesMoved:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
but the button click event seems to have a higher priority.
I want it to behave like UITableView, where scrolling has a higher priority than a button click.
You need to look into UIPanGestureRecognizer.
It allows you the ability to cancel events sent to other handlers.
Updated with additional information about how to save previous points.
In the action callback, you get notified of the initial touch location when recognizer.state == UIGestureRecognizerStateBegan. You can save this point in an instance variable. You also get callbacks at various intervals when recognizer.state == UIGestureRecognizerStateChanged; you can save this information as well. Then, when you get the callback with recognizer.state == UIGestureRecognizerStateEnded, you reset any instance variables.
- (void)handler:(UIPanGestureRecognizer *)recognizer
{
    CGPoint location = [recognizer locationInView:self];
    switch (recognizer.state)
    {
        case UIGestureRecognizerStateBegan:
            self.initialLocation = location;
            self.lastLocation = location;
            break;
        case UIGestureRecognizerStateChanged:
            // Whatever work you need to do.
            // location is the current point.
            // self.lastLocation is the location from the previous call.
            // self.initialLocation is the location when the touch began.
            // NOTE: The last thing to do is set lastLocation for the next time we're called.
            self.lastLocation = location;
            break;
        case UIGestureRecognizerStateEnded:
        case UIGestureRecognizerStateCancelled:
            // Reset the saved locations.
            self.initialLocation = CGPointZero;
            self.lastLocation = CGPointZero;
            break;
        default:
            break;
    }
}
Hope that helps.
Here's part of the code I'm working with: http://pastie.org/2472364
I've figured out how to access the UIImageView from another method within the same class file in which it was programmatically created.
However, I was wondering how I'd access that same UIImageView from within the LetterTiles.m file, specifically within the touchesMoved method. The way I wrote the code in the sample, it will only report that the frames intersect if they're on top of each other at the moment otherMethod is called. Of course, I need to be able to check whether the views intersect inside the actual touchesMoved method. I'm sure it's something super easy, but I'm just not sure how to do it.
Thanks in advance for any help you can give me.
From your comment, and using the code you already have, I would go down this route. This isn't what I would do personally, just FYI; the structure is a bit shaky given the way it sounds like you want this to work.
Create the place holder UIImageView in the touchesBegan function, then check to see if they intersect when the user stops moving the image.
#import "LetterTiles.h"

@implementation LetterTiles

@synthesize placeHolder;

- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event {
    // Retrieve the touch point (I consider this useful info to have, so I left it in)
    CGPoint pt = [[touches anyObject] locationInView:self];
    startLocation = pt;

    // Create a place holder image wherever you want
    [self setPlaceHolder:[[[UIImageView alloc] initWithFrame:CGRectMake(39, 104, 70, 70)] autorelease]];
    [placeHolder setImage:[UIImage imageNamed:@"placeHolder.png"]];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint pt = [[touches anyObject] locationInView:[self superview]];
    [self setCenter:pt];
}

- (void)touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event {
    LetterTiles *movingTile = self;
    if (CGRectIntersectsRect([movingTile frame], [placeHolder frame])) {
        NSLog(@"Touched");
        [self setFrame:[placeHolder frame]];
    }
}
Make a protocol called ViewMoved containing one method, otherMethod.
Implement it in myMainViewController.
Add a delegate property of type id<ViewMoved> to LetterTiles.
Assign self when you create a new LetterTiles object in myMainViewController.
On every touch movement, call otherMethod on the delegate and check whether any views of type LetterTiles are intersecting.
This will catch any intersection whenever any of the views is moved.
If the above doesn't match your question, say so here.
I am trying to trace the movement of the user's finger on the screen for my iPhone / cocos2d game.
So far I can do this using a CCMotionStreak declared in the interface of my GameLayer and initialized in my init method. To draw the user's touch, I put the following code in touchesMoved:
UITouch *touch = [touches anyObject];
[streak setPosition:[self convertTouchToNodeSpace:touch]];
This works until I lift my finger up and make a new touch motion across the screen. Instead of drawing a new streak, my game connects the end of the old streak to the beginning of my new swipe, and continues the same streak. This is not what I want.
Is there a way to reset my CCMotionStreak? If not, the obvious solution seems to be to create a new streak on each new touch (and remove the old one), but I can't get this to work. When I move the initialization code for my streak out of the init method and into touchesBegan, the streak no longer shows up at all.
I am guessing this should be basic to achieve, but I just can't figure out the syntax. I am still learning ObjC / cocos2d. Can someone help?
Here is how I initialize my streak in my init method:
streak = [CCMotionStreak streakWithFade:3.0 minSeg:1 image:@"streak.png" width:4 length:8 color:ccc4(128,128,128,255)];
[self addChild:streak];
Did you remove/release the old streak on ccTouchesEnded and ccTouchesCancelled?
// in ccTouchesBegan
streak = [CCMotionStreak streakWithFade:3.0 minSeg:1 image:@"streak.png" width:4 length:8 color:ccc4(128,128,128,255)];
[streak setPosition:location];
[self addChild:streak];

// in ccTouchesEnded and ccTouchesCancelled
if (streak) {
    [streak removeFromParentAndCleanup:YES];
    streak = nil;
}
I have been using touches began to track as many as 8 touches, and each triggers an event. These touches can occur at the same time, or staggered.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"Touch Began");
    NSSet *allTouches = [event allTouches];
    for (int i = 0; i < allTouches.count; i++) {
        UITouch *touch = [[allTouches allObjects] objectAtIndex:i];
        if (/* touch inside button in question */) {
            // Trigger the event.
        }
    }
}
That code works for the multitouch, and it has no problems, EXCEPT: (See if you can guess)
Due to the way allTouches works, it literally gets all of the touches. Because of this, the loop runs over every touch that is currently active whenever the user starts a new one, and thus triggers one of the buttons' events a second time.
Ex: Johnny is pressing button 1. Event 1 occurs. Johnny leaves his finger on button 1, and presses button 2. Event 2 occurs, BUT button 1 is still a part of allTouches, and so, event 1 is triggered again.
So here's the question: How do I get the new touch?
The same UITouch object is returned on subsequent calls to touchesBegan for any continuing touch. So save each UITouch that you have already handled as begun (and not yet ended), and as you iterate the next time in touchesBegan, skip the ones you've saved/marked.
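A minimal sketch of that bookkeeping in C++ (NewTouchFilter is an illustrative name): keep a set of touch identities already handled as begun, and only fire a button event for identities not yet in the set.

```cpp
#include <set>

// Illustrative filter: remembers which touch identities (UITouch* values)
// have already been handled as "began" and have not yet ended.
class NewTouchFilter {
public:
    // Returns true only the first time a given identity is seen,
    // i.e. only for the genuinely new touch inside allTouches.
    bool isNew(const void* key) {
        return active_.insert(key).second;  // insert fails if already present
    }
    void ended(const void* key) {
        active_.erase(key);  // the identity may be reused for a later touch
    }
private:
    std::set<const void*> active_;
};
```

In touchesBegan you would loop over [event allTouches] and trigger the button event only when isNew(touch) returns true; in touchesEnded/touchesCancelled you would call ended(touch).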