I made a custom control: it inherits from UIView and adds a lot of UIButtons to the UIView.
When a user touches and moves, I do some animation, letting the buttons move, in touchesMoved:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
but the button click event seems to have higher priority.
I want it to behave like UITableView, where scrolling has higher priority than button clicks.
You need to look into UIPanGestureRecognizer.
It gives you the ability to cancel events sent to other handlers.
Updated with additional information about how to save previous points.
In the action callback, you get notified of the initial touch location when recognizer.state == UIGestureRecognizerStateBegan. You can save this point as an instance variable. You also get callbacks at various intervals when recognizer.state == UIGestureRecognizerStateChanged; you can save this information as well. Then, when you get the callback with recognizer.state == UIGestureRecognizerStateEnded, you reset any instance variables.
- (void)handler:(UIPanGestureRecognizer *)recognizer
{
    CGPoint location = [recognizer locationInView:self];
    switch (recognizer.state)
    {
        case UIGestureRecognizerStateBegan:
            self.initialLocation = location;
            self.lastLocation = location;
            break;
        case UIGestureRecognizerStateChanged:
            // Whatever work you need to do.
            // location is the current point.
            // self.lastLocation is the location from the previous call.
            // self.initialLocation is the location when the touch began.
            // NOTE: The last thing to do is set last location for the next time we're called.
            self.lastLocation = location;
            break;
        case UIGestureRecognizerStateEnded:
            // Reset the saved locations, as described above.
            self.initialLocation = CGPointZero;
            self.lastLocation = CGPointZero;
            break;
        default:
            break;
    }
}
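For completeness, here is a minimal sketch of how the recognizer might be attached to the custom view (this setup is not part of the original answer; handler: is the action method above):

UIPanGestureRecognizer *pan =
    [[UIPanGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handler:)];
// cancelsTouchesInView defaults to YES, which is what stops the buttons
// from receiving their touches once the pan is recognized.
[self addGestureRecognizer:pan];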
Hope that helps.
I'm having an issue where, within my touchesBegan method, I'm not getting back what I think I should.
I'm testing for a hit within a specific node. I've tried several methods, and none work. I've created a work-around, but would love to know if this is a bug or if I'm doing something wrong.
Here's the code:
Standard touchesBegan method:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    /* Called when a touch begins */
    SKSpriteNode *infoPanelNode = (SKSpriteNode *)[self childNodeWithName:@"infoPanelNode"];
    UITouch *touch = [touches anyObject];
    if(touch){
        //e.g. infoPanelNode position:{508, 23} size:{446, 265.5} (also of note - infoPanelNode is a child of self)

        //This solution works
        CGPoint location = [touch locationInNode:self];
        //location == (x=77, y=170)
        bool withinInfoPanelNode = [self myPtInNode:infoPanelNode inPoint:location];
        // withinInfoPanelNode == false (CORRECT)

        //This one doesn't return the same result - returns true when hit is not in the cell
        CGPoint infoLocation = [touch locationInNode:infoPanelNode];
        //infoLocation == (x=-862, y=294)
        bool withinInfoPanelNodeBuiltInResult = [infoPanelNode containsPoint:infoLocation];
        // withinInfoPanelNodeBuiltInResult == true (WRONG)

        // This one doesn't work either - returns an array with the infoPanelNode in it, even though the hit point and node location are the same shown above
        // NSArray *nodes = [self nodesAtPoint:location];
        // for (SKNode *node in nodes) {
        //     if(node==infoPanelNode)
        //         withinInfoPanelNode = true;
        // }

        //Code omitted - doing something with the withinInfoPanelNode now
    }
}
My custom hit test code:
-(bool) myPtInNode:(SKSpriteNode *)node inPoint:(CGPoint)inPoint {
    if(node.position.x < inPoint.x && (node.position.x+node.size.width) > inPoint.x){
        if(node.position.y < inPoint.y && (node.position.y+node.size.height) > inPoint.y){
            return true;
        }
    }
    return false;
}
Anyone see what's going wrong here?
Thanks,
kg
I'm not sure exactly how SKCropNode factors in, but in general, for containsPoint: to detect touches within a node, you need to give it a point relative to the node's parent. The following code should work. Note the addition of .parent when calling locationInNode:
CGPoint infoLocation = [touch locationInNode:infoPanelNode.parent];
BOOL withinInfoPanelNodeBuiltInResult = [infoPanelNode containsPoint:infoLocation];
Solved this problem and wanted to update everyone.
It turns out that this specific infoPanelNode (an SKSpriteNode) has a child node that is an SKCropNode. That node crops a much larger node (obviously a child of the crop node) so that only a small portion is viewable, allowing for scrolling to portions of that node. Unfortunately, containsPoint: apparently combines the boundaries of all child nodes with the receiving node's boundaries to use as the boundary test rect. This would be understandable if it respected the SKCropNode's boundaries for ITS children, but apparently it doesn't, so you have to roll your own hit test if you have this type of setup.
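One possible workaround, as a hedged sketch rather than part of the original solution: SKNode's frame property covers only the node's own content (not its children) in the parent's coordinate system, so if containsPoint: really is accumulating child bounds as described, testing against the frame sidesteps the crop-node problem:

// Sketch: node.frame ignores children, so the cropped child cannot
// inflate the rectangle being tested.
CGPoint parentLocation = [touch locationInNode:infoPanelNode.parent];
BOOL withinPanel = CGRectContainsPoint(infoPanelNode.frame, parentLocation);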
I'm wondering if someone knows how to implement the "touch up inside" response, when a user presses down and then lifts their finger, using the touchesBegan/touchesEnded methods. I know this can be done with UITapGestureRecognizer, but I'm actually trying to make it work only on a quick tap (with UITapGestureRecognizer, if you hold your finger there for a long time and then lift, it still executes). Does anyone know how to implement this?
Using a UILongPressGestureRecognizer is actually a much better way to mimic all of the functionality of a UIButton (touchUpInside, touchUpOutside, touchDown, etc.):
- (void) longPress:(UILongPressGestureRecognizer *)longPressGestureRecognizer
{
    if (longPressGestureRecognizer.state == UIGestureRecognizerStateBegan || longPressGestureRecognizer.state == UIGestureRecognizerStateChanged)
    {
        CGPoint touchedPoint = [longPressGestureRecognizer locationInView:self];
        if (CGRectContainsPoint(self.bounds, touchedPoint))
        {
            [self addHighlights];
        }
        else
        {
            [self removeHighlights];
        }
    }
    else if (longPressGestureRecognizer.state == UIGestureRecognizerStateEnded)
    {
        if (self.highlightView.superview)
        {
            [self removeHighlights];
        }
        CGPoint touchedPoint = [longPressGestureRecognizer locationInView:self];
        if (CGRectContainsPoint(self.bounds, touchedPoint))
        {
            if ([self.delegate respondsToSelector:@selector(buttonViewDidTouchUpInside:)])
            {
                [self.delegate buttonViewDidTouchUpInside:self];
            }
        }
    }
}
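The original answer doesn't show how the recognizer is attached; a minimal sketch might look like the following, assuming minimumPressDuration is dropped to zero so the recognizer begins on touch-down the way a button does:

// Hypothetical setup: with minimumPressDuration = 0 the recognizer
// starts immediately on touch-down, letting it stand in for a UIButton.
UILongPressGestureRecognizer *longPress =
    [[UILongPressGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(longPress:)];
longPress.minimumPressDuration = 0.0;
[self addGestureRecognizer:longPress];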
I'm not sure when it was added, but the property isTouchInside is a lifesaver for any UIControl-derived object (e.g. UIButton).
override func endTracking(_ touch: UITouch?, with event: UIEvent?) {
    super.endTracking(touch, with: event)
    if isTouchInside {
        // Do the thing you want to do
    }
}
Here are the official Apple docs for isTouchInside.
You can implement touchesBegan and touchesEnded by creating a UIView subclass and implementing it there.
However, you can also use a UILongPressGestureRecognizer and achieve the same results.
I did this by starting a timer in touchesBegan. If the timer is still running when touchesEnded gets called, then execute whatever code you want. This gives the effect of touchUpInside.
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Start a short-lived, non-repeating timer; its only job is to mark
    // the tap window.
    self.tapTimer = [NSTimer scheduledTimerWithTimeInterval:0.15
                                                     target:self
                                                   selector:@selector(tapTimerFired:)
                                                   userInfo:nil
                                                    repeats:NO];
}

-(void) tapTimerFired:(NSTimer *)timer
{
    // Intentionally empty - the timer only needs to expire.
}

-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if ([self.tapTimer isValid])
    {
        // Finger lifted before the timer fired: treat as touchUpInside.
        [self.tapTimer invalidate];
    }
}
You can create a BOOL variable; in -touchesBegan, check whether your view (or whatever you need) was touched and set the BOOL to YES. Then in -touchesEnded, check whether the variable is YES and whether your view was touched again - that will be your -touchUpInside. And of course set the BOOL back to NO afterwards; see the sketch below.
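A minimal sketch of that approach (viewWasTouched and targetView are assumed names, not from the original answer):

// viewWasTouched is an assumed BOOL property; targetView stands in for
// whatever view you are tracking.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if (CGRectContainsPoint(self.targetView.frame, [touch locationInView:self.view])) {
        self.viewWasTouched = YES;
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    if (self.viewWasTouched &&
        CGRectContainsPoint(self.targetView.frame, [touch locationInView:self.view])) {
        // The touch went down and came up inside the view: touch-up-inside.
    }
    self.viewWasTouched = NO;
}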
You can add a UITapGestureRecognizer and a UILongPressGestureRecognizer and add a dependency between them using [tap requireGestureRecognizerToFail:longPress]; (tap and longPress being the added recognizer objects).
This way, the tap will not be detected if the long press fires; a sketch of the setup follows below.
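A minimal sketch of that setup (handleTap: and handleLongPress: are assumed action methods):

UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handleTap:)];
UILongPressGestureRecognizer *longPress =
    [[UILongPressGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(handleLongPress:)];
// The tap now only fires if the long press fails, i.e. on a quick tap.
[tap requireGestureRecognizerToFail:longPress];
[self.view addGestureRecognizer:tap];
[self.view addGestureRecognizer:longPress];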
I want to let the user choose where the joystick should be; i.e., when the user touches a location, the joystick appears there, ready to use, and is removed when the finger is released.
-(void) ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([self getChildByTag:kTagJoyStick] == nil) {
        [self addJoystickWithPosition:[Helper locationFromTouches:touches]];
    }
}

-(void) ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([self getChildByTag:kTagJoyStick] != nil) {
        [self removeChildByTag:kTagJoyStick cleanup:YES];
    }
}

-(void) ccTouchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    [self ccTouchesEnded:touches withEvent:event];
}
(I do nothing in the ccTouchesMoved method.)
The update method for the joystick is:
-(void) sneakyUpdate {
    if ([self getChildByTag:kTagJoyStick] != nil) {
        if (joystick.velocity.x < 0) {
            [self controlLeft];
        }
        else if (joystick.velocity.x > 0) {
            [self controlRight];
        }
        else {
            [self controlStop];
        }
    }
    else {
        [self controlStop];
    }
}
But the result is that the joystick appears and is automatically removed, yet my sprite won't move. (I set a breakpoint: the sneakyUpdate method does get called, but joystick.velocity is always 0, and the thumb sprite doesn't follow my finger.)
Please help me.
Update:
It also turns out that I have to use two fingers (one to touch once so the joystick shows up, then I move that finger away and use another finger to control the joystick).
I'm not 100% sure, but I think you should use ccTouchBegan instead of ccTouchesBegan, because the SneakyJoystick classes use ccTouchBegan/Moved/Ended/Cancelled. Those methods are also for a single touch, which is what you want.
I hope it works!
It looks like the problem is in your joystick class. Every joystick implementation I've seen uses its touches-began callback to activate the joystick, and then in the touches-moved callback it makes sure it's activated before using it. The problem I'm seeing is that you create and add the joystick AFTER the touch has already begun, meaning your joystick never 'activates'. One way of bypassing this is to do all of the joystick's touch-began work in the method that creates the joystick, 'activating' it there by passing a reference to the touch that will be using it, as in the sketch below.
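A rough sketch of that idea; the exact activation method depends on your joystick class, and ccTouchBegan:withEvent: (the targeted-touch variant) is only an assumption here:

-(void) ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([self getChildByTag:kTagJoyStick] == nil) {
        [self addJoystickWithPosition:[Helper locationFromTouches:touches]];
        // Assumed API: forward the same touch so the joystick activates
        // immediately instead of waiting for a fresh touch.
        [joystick ccTouchBegan:[touches anyObject] withEvent:event];
    }
}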
I am using -(void)touchesMoved to do stuff whenever I enter a specific frame, in this case the area of a button.
My problem is, I only want it to do stuff when I enter the frame - not the whole time I am moving my finger inside the frame.
Does anyone know how I can call my methods only once while I am inside the frame, while still being able to call them again if I re-enter the frame during the same touch move?
Thank you.
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event touchesForView:self.view] anyObject];
    CGPoint location = [touch locationInView:touch.view];
    if(CGRectContainsPoint(p1.frame, location))
    {
        // I only want the below to be called
        // once while I am inside this frame
        [self pP01];
        [p1 setHighlighted:YES];
    }else {
        [p1 setHighlighted:NO];
    }
}
You can use some attribute to check whether the code has already been called when entering the specific area. It looks like the highlighted state of the p1 object (not sure what it is) may be appropriate for that:
if(CGRectContainsPoint(p1.frame, location))
{
    if (!p1.isHighlighted){ // We entered the area but have not run highlighting code yet
        // I only want the below to be called
        // once while I am inside this frame
        [self pP01];
        [p1 setHighlighted:YES];
    }
}else { // We left the area - so we'll call highlighting code when we enter next time
    [p1 setHighlighted:NO];
}
Simply add a BOOL that you check in touchesMoved and reset in touchesEnded:
if( CGRectContainsPoint([p1 frame],[touch locationInView:self.view])) {
    NSLog(@"Touch Moved over p1");
    if (!p1.isHighlighted) {
        [self action:p1];
        p1.highlighted = YES;
    }
}else {
    p1.highlighted = NO;
}
Try using a UIButton and use the 'Touch Drag Enter' connection in Interface Builder.
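For reference, the programmatic equivalent might look like this (dragEntered: is an assumed action method):

// UIControlEventTouchDragEnter fires when a finger already touching the
// button is dragged into its bounds.
[button addTarget:self
           action:@selector(dragEntered:)
 forControlEvents:UIControlEventTouchDragEnter];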
I have a quick question regarding tracking touches on the iPhone, and I can't seem to come to a conclusion on this, so any suggestions/ideas are greatly appreciated:
I want to be able to track and identify touches on the iPhone; i.e., basically every touch has a starting position and a current/moved position. Touches are stored in a std::vector and shall be removed from the container once they have ended. Their position shall be updated once they move, but I still want to keep track of where they initially started (gesture recognition).
I am getting the touches from [event allTouches]. The thing is, the NSSet is unsorted, and I can't seem to match the touches already stored in the std::vector with the touches in the NSSet (so I know which ones have ended and shall be removed, which have moved, etc.).
Here is my code, which works perfectly with only one finger on the touch screen, of course, but with more than one, I do get unpredictable results...
- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}

- (void) touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}

- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}

- (void) touchesCancelled:(NSSet*)touches withEvent:(UIEvent*)event
{
    [self handleTouches:[event allTouches]];
}

- (void) handleTouches:(NSSet*)allTouches
{
    for(int i = 0; i < (int)[allTouches count]; ++i)
    {
        UITouch* touch = [[allTouches allObjects] objectAtIndex:i];
        NSTimeInterval timestamp = [touch timestamp];
        CGPoint currentLocation = [touch locationInView:self];
        CGPoint previousLocation = [touch previousLocationInView:self];
        if([touch phase] == UITouchPhaseBegan)
        {
            Finger finger;
            finger.start.x = currentLocation.x;
            finger.start.y = currentLocation.y;
            finger.end = finger.start;
            finger.hasMoved = false;
            finger.hasEnded = false;
            touchScreen->AddFinger(finger);
        }
        else if([touch phase] == UITouchPhaseEnded || [touch phase] == UITouchPhaseCancelled)
        {
            Finger& finger = touchScreen->GetFingerHandle(i);
            finger.hasEnded = true;
        }
        else if([touch phase] == UITouchPhaseMoved)
        {
            Finger& finger = touchScreen->GetFingerHandle(i);
            finger.end.x = currentLocation.x;
            finger.end.y = currentLocation.y;
            finger.hasMoved = true;
        }
    }
    touchScreen->RemoveEnded();
}
Thanks!
It appears the "proper" way to track multiple touches is by the pointer value of the UITouch event.
You can find more details in the "Handling a Complex Multi-Touch Sequence" section of this
Apple Developer Documentation
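A minimal sketch of pointer-keyed tracking (activeTouches is an assumed NSMutableDictionary ivar; the touch is wrapped non-retained so it serves purely as a stable key):

// A UITouch object is persistent for the whole touch sequence, so its
// identity can key per-finger state.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches) {
        NSValue *key = [NSValue valueWithNonretainedObject:touch];
        NSValue *start = [NSValue valueWithCGPoint:[touch locationInView:self]];
        [activeTouches setObject:start forKey:key];
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    for (UITouch *touch in touches) {
        [activeTouches removeObjectForKey:[NSValue valueWithNonretainedObject:touch]];
    }
}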
To fix your problem, scrap your handleTouches method. The first thing you do in handleTouches is switch on the touch phase, but that information is already given to you: if you receive the touch in touchesBegan, you know it is in UITouchPhaseBegan. By funneling touches from the four touch methods into one method, you are defeating the purpose of having four delegate methods.
In each of those methods, Apple gives you an opportunity to deal with a different phase of the current touch.
The second thing is that you don't need to search the event for the current touch; it is given to you as a parameter: touches.
An event is composed of sets of touches. For convenience, you are given the current touches, even though they can also be found within the event.
So, in touchesBegan, you start tracking a touch.
- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event{
    NSString *startPoint = NSStringFromCGPoint([[touches anyObject] locationInView:self]);
    NSDictionary *touchData = [NSDictionary dictionaryWithObjectsAndKeys:
                               startPoint, @"location",
                               touches, @"touch", nil];
    [startingLocations addObject:touchData];
}
I'm using an array of dictionaries to hold my touch data.
Try to separate your code and move it into the appropriate touch method. For direction, Apple has a couple of sample projects that focus on touches and show you how to set up those methods.
Remember, these methods will get called automatically for each touch during each phase; you don't need to cycle through the event to find out what happened.
The pointer to each touch object remains constant; just its data changes.
Also, I would read the iPhone OS programming guide's section on event handling, which goes into greater depth on what I said above, with several diagrams explaining the relationship of touches to events over time.
An excerpt:
In iPhone OS, a UITouch object represents a touch, and a UIEvent object represents an event. An event object contains all touch objects for the current multi-touch sequence and can provide touch objects specific to a view or window (see Figure 3-2). A touch object is persistent for a given finger during a sequence, and UIKit mutates it as it tracks the finger throughout it. The touch attributes that change are the phase of the touch, its location in a view, its previous location, and its timestamp. Event-handling code evaluates these attributes to determine how to respond to the event.
You should be able to properly collate your touches by storing the previous location of all touches and then comparing these previous locations when new touches are detected.
In your -handleTouches method, you could put something like this in your for loop:
// ..existing code..
CGPoint previousLocation = [touch previousLocationInView:self];
// Loop through previous touches, stored as NSValue-wrapped CGPoints
for (int j = 0; j < [previousTouchLocationArray count]; j++) {
    CGPoint storedLocation = [[previousTouchLocationArray objectAtIndex:j] CGPointValue];
    if (CGPointEqualToPoint(previousLocation, storedLocation)) {
        // Current touch matches - retrieve finger handle j and update its position
    }
}
// If touch was not found, create a new Finger and associated entry
Obviously you'll need to do some work to integrate this into your code, but I'm pretty sure you can use this idea to correctly identify touches as they move around the screen. Also, note that a CGPoint won't fit directly into an NSArray, which is why the code above assumes the stored points are wrapped in NSValue objects (you could use a different type of array instead).