I've been checking for multiple taps, whether it is 2 or 10, by simply calling tapCount on any touch:
[[touches anyObject] tapCount]==2
This simply checks for a double tap.
It works fine. I'm wondering if there is any particular reason to instead start using UITapGestureRecognizer.
It would seem that the UITapGestureRecognizer API provides wrappers around the same functionality as just inspecting touches directly, as above. Things like tapCount and the number of fingers on the screen don't require UITapGestureRecognizer.
For things like swipes, I can see the simplicity in letting UIKit handle recognizing those, as they are harder to code manually, but for a tapCount? Where's the real gain here? What am I missing?
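(For context, a check like that typically lives inside one of the UIResponder touch overrides, roughly like this; the view class name is a placeholder.)

#import <UIKit/UIKit.h>

@interface TapCountingView : UIView   // hypothetical view subclass
@end

@implementation TapCountingView

// Called when fingers lift; each UITouch carries its own tapCount.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if ([[touches anyObject] tapCount] == 2) {
        NSLog(@"double tap");   // react to the double tap here
    }
    [super touchesEnded:touches withEvent:event];
}

@end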
Gesture recognizers provide for coordination in processing multiple gesture types on the same view. See the discussion of the state machine in the documentation.
If a tap is the only gesture of interest, you may not find much value, but the architecture comes in handy if you want to coordinate the recognition of taps with other gestures, whether provided by you or by system-supplied classes such as scroll views. Gesture recognizers are given first crack at touches, so you will need this architecture if you want, for example, to recognize touches in a child of a scroll view before the scroll view processes them.
The gesture recognizers can also be set to defer recognition, so, for example, the action for a single tap is not called until a double tap has timed out.
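A minimal sketch of that deferral, assuming a plain view controller and placeholder handler names:

#import <UIKit/UIKit.h>

@interface MyViewController : UIViewController   // hypothetical controller
@end

@implementation MyViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    UITapGestureRecognizer *doubleTap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleDoubleTap:)];
    doubleTap.numberOfTapsRequired = 2;
    [self.view addGestureRecognizer:doubleTap];

    UITapGestureRecognizer *singleTap =
        [[UITapGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handleSingleTap:)];
    singleTap.numberOfTapsRequired = 1;
    // Don't fire the single-tap action until the double tap has failed,
    // i.e. the second tap never arrived in time.
    [singleTap requireGestureRecognizerToFail:doubleTap];
    [self.view addGestureRecognizer:singleTap];
}

- (void)handleSingleTap:(UITapGestureRecognizer *)recognizer { NSLog(@"single tap"); }
- (void)handleDoubleTap:(UITapGestureRecognizer *)recognizer { NSLog(@"double tap"); }

@end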
In general, the gesture recognizer approach is a good one to adopt because it allows gestures to be managed in a consistent fashion across apps and code sources. If Apple wanted to add an assistive-technology preference that let the user select a longer interval over which a double tap would be recognized, they could do so without requiring any changes to the code of developers using the standard gesture recognizers.
I should add that gesture recognizers can be added directly to your storyboard or nib, so in most cases you only need to code the target action, which could be a time saver in new code.
UITapGestureRecognizer provides a cleaner, easier to use API, but no new functionality. So for your case, no reason.
Related
I know that you can use a UITapGestureRecognizer to detect any time the screen is tapped. But is there a way to detect any time the screen is simply touched? Long, short, whatever; the finger doesn't even have to come up again. Any time the screen is touched at all, I want something to happen.
Yes: implement UIResponder's touchesBegan(_:with:) and related methods. Now you are receiving the raw touches (not completely raw, since they are still associated with the hit-test view), and you can interpret or respond to them however you like. That's how you'd implement a drawing app, for example.
That in fact is what we used to have to do for all touches, before gesture recognizers were invented.
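A minimal sketch of that, assuming a UIView subclass with a placeholder name (what you do on touch-down is up to you):

#import <UIKit/UIKit.h>

@interface TouchSensitiveView : UIView   // hypothetical class name
@end

@implementation TouchSensitiveView

// Called as soon as one or more fingers touch down inside this view,
// with no waiting for a tap or any other gesture to complete.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSLog(@"screen touched at %@",
          NSStringFromCGPoint([[touches anyObject] locationInView:self]));
    // Do whatever should happen on touch-down here.
    [super touchesBegan:touches withEvent:event];
}

@end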
I have just managed to implement detection of a swipe gesture for my app. However, I would like to confine the area where the gesture is valid. Thinking about this, I came up with a possible solution, which would be to check whether the start and finish coordinates are within some area. I was just wondering if there's a better or preferred method of doing something like this.
Simply create an invisible UIView (i.e. one with a transparent background) and set its frame so it encloses the region in which you want to detect the gesture.
Then simply add a UISwipeGestureRecognizer to that view, and you are done.
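A rough sketch of that setup, assuming a view controller; the frame, swipe direction, and handler name are placeholders:

#import <UIKit/UIKit.h>

@interface SwipeZoneViewController : UIViewController   // hypothetical name
@end

@implementation SwipeZoneViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Invisible view covering only the region where swipes should count.
    UIView *swipeZone = [[UIView alloc] initWithFrame:CGRectMake(0, 100, 320, 200)];
    swipeZone.backgroundColor = [UIColor clearColor];   // transparent but still hit-testable
    [self.view addSubview:swipeZone];

    UISwipeGestureRecognizer *swipe =
        [[UISwipeGestureRecognizer alloc] initWithTarget:self
                                                  action:@selector(handleSwipe:)];
    swipe.direction = UISwipeGestureRecognizerDirectionRight;
    [swipeZone addGestureRecognizer:swipe];
}

- (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer
{
    // Only swipes that begin inside swipeZone reach this action.
    NSLog(@"swipe recognized");
}

@end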
Read the generic UIGestureRecognizer Class Reference and the part of the Event Handling Guide for iOS that talks about UIGestureRecognizers for more info.
Of course you could also manage the detection of the swipe gesture yourself using custom code, as explained in the very same guide, but why bother when UIGestureRecognizers can manage everything for you?
I am using a UIPanGestureRecognizer on several card-like views to let the user move the views around the screen. It's very nice that they can put down 3 fingers and pick up 3 cards at once; however, some of my functionality isn't designed to work like that.
I'd like to only allow 1 gesture recognizer to run at a time. Is there a preferred way to do this?
I've considered:
gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: but it already returns 'NO' by default.
Setting an instance variable when the first gesture begins, but I'm concerned about multithreaded access to this variable (should I use @synchronized, or would it be too much overhead?).
Keeping an array of the gesture recognizers and checking their state in gestureRecognizerShouldBegin: to ensure that none are in progress.
Thanks.
The best practice is to use one (global) gesture recognizer on the view that is the superview of your cards, with hitTest: to determine which card has been touched. That will let you work with multiple touches correctly.
Put a single UIPanGestureRecognizer on the common superview of all your cards, and then do hit detection to find the card in question when the gesture starts. That way you only have 1 gesture recognizer, so only one gesture can run at a time.
Edit: BTW, your idea of keeping an ivar, while clumsy, would work. UIGestureRecognizer is part of UIKit and only operates on the main thread, so you don't have to worry about multithreaded access. But like I said, it's clumsy. Using a single "master" UIGestureRecognizer instead is cleaner.
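A rough sketch of that single "master" recognizer approach; the controller, card container, and property names are all placeholders:

#import <UIKit/UIKit.h>

@interface CardBoardViewController : UIViewController   // hypothetical names throughout
@property (nonatomic, strong) UIView *draggedCard;      // the one card picked up by the current pan
@end

@implementation CardBoardViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // One "master" recognizer on the cards' common superview (here, self.view).
    UIPanGestureRecognizer *pan =
        [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                action:@selector(handlePan:)];
    [self.view addGestureRecognizer:pan];
}

- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    if (pan.state == UIGestureRecognizerStateBegan) {
        // Hit-test once, when the gesture starts, to pick the card under the finger.
        CGPoint point = [pan locationInView:self.view];
        UIView *hit = [self.view hitTest:point withEvent:nil];
        self.draggedCard = (hit != self.view) ? hit : nil;   // in a real app, walk up to the card's root view
    } else if (pan.state == UIGestureRecognizerStateChanged && self.draggedCard) {
        // Drag only that one card; extra fingers are ignored.
        CGPoint translation = [pan translationInView:self.view];
        self.draggedCard.center = CGPointMake(self.draggedCard.center.x + translation.x,
                                              self.draggedCard.center.y + translation.y);
        [pan setTranslation:CGPointZero inView:self.view];
    } else if (pan.state == UIGestureRecognizerStateEnded ||
               pan.state == UIGestureRecognizerStateCancelled) {
        self.draggedCard = nil;
    }
}

@end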
I'm developing an iPhone/iPad app that supports dragging items between table views. Since all the tables don't fit on screen, I've written a custom UIScrollView that lays them out horizontally, and supports paging.
While I've gotten the primary drag and drop together, there are a few remaining issues I can't get past.
After the user has selected an item to drag, and is dragging, they cannot scroll the UIScrollView to find the destination UITableView.
Sometimes the user will want to drag the item within the same table view. But once the drag has begun, the table view no longer recognizes the scroll gesture.
I've tried a variety of different options, including implementing a UIGestureRecognizerDelegate and allowing multiple gesture recognizers to recognize gestures simultaneously.
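(For reference, that simultaneous-recognition attempt presumably amounts to making some object the recognizers' delegate and returning YES from the UIGestureRecognizerDelegate method below; it lets two recognizers act on the same touches, which is not the same as splitting the touches between them.)

// UIGestureRecognizerDelegate method: allow two recognizers to recognize at
// the same time. Both still see the entire multitouch sequence.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}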
The problem, as I see it, stems from this description in the Event Handling Guide: "iOS recognizes one or more fingers touching the screen as part of a multitouch sequence. This sequence begins when the first finger touches down on the screen and ends when the last finger is lifted from the screen."
UIGestureRecognizer instances always match against the entire sequence. In my case, I want to split a single sequence down into discrete gestures -- some touches recognize a dragging of an item, while different touches within the same sequence should be recognized as a swipe or scroll gesture. Effectively, I want my gesture recognizers to recognize simultaneously, but only different touches. Once one recognizes a touch as part of a gesture, that touch should be ignored by the others.
I haven't found a way to solve all these issues coherently using the default UIGestureRecognizer subclasses, and am now about to write my own custom multi-part gesture recognizer.
I'd rather not have to though -- is there any more appropriate way to achieve the same result?
Given the silence here, and a blog post I just found, I believe the answer is that no, there is no way to do sub-gesture recognition with the standard framework.
For those looking to do something similar, take a look at this project/blog post, which is an attempt to create a sub-gesture recognition library:
http://sunetos.com/items/2010/10/31/adding-subgestures-to-ios-gesture-recognition/
I haven't used it -- I ended up manually crafting my own interactions -- but will consider refactoring to use it if it pans out.
I'm about to start a new iPhone app that requires a certain functionality but I'm not sure if it's doable. I'm willing to research but first I just wanted to know if I should at least consider it or not.
I haven't seen this in an app before (which is my main concern, even though I haven't seen too many apps since I don't own an iPhone), but an example would be the iPhone home-screen panels: you can hold on an app icon and then drag it to another panel, sweeping between panels while still dragging it. But that is the system's own home screen; is it possible to reproduce something similar within a normal app?
I only need to be sure it can be done before I start digging, I don't need code examples or anything, but if you have some exact resources that you consider helpful, that would be appreciated.
Thanks!
Yes. If you have your custom UIView subclass instance inside a UIScrollView, your view controller just needs to set the UIScrollView to delay content touches and not allow it to cancel touch events.
[scrollView setCanCancelContentTouches:NO];
[scrollView setDelaysContentTouches:YES];
When the user taps and holds in the custom view, the event goes to that custom view, which can process the touch events to drag an item around, but if the user quickly swipes, it scrolls the view.
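A rough sketch of the custom view's side of this, assuming the scroll view is configured as above; the class name and drag behavior are placeholders:

#import <UIKit/UIKit.h>

@interface DraggableItemView : UIView   // hypothetical class name
@end

@implementation DraggableItemView

// With canCancelContentTouches = NO and delaysContentTouches = YES on the
// enclosing scroll view, a touch-and-hold reaches this view and drags it,
// while a quick swipe still scrolls the scroll view.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint current  = [touch locationInView:self.superview];
    CGPoint previous = [touch previousLocationInView:self.superview];
    // Move by the finger's delta in the superview's coordinate space.
    self.center = CGPointMake(self.center.x + (current.x - previous.x),
                              self.center.y + (current.y - previous.y));
}

@end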
The "panel" view that you're referring to appears to be a UIPageControl view — although, perhaps, the specific incarnation of this view that Apple uses for the iPhone's home page may be customized.
Instances of generic UIView views that you might touch-and-drag will receive touch events. By overriding methods in the view, these events can be processed and passed to the page control, in order to tell it to "sweep" between pages.
If I wanted to do what you're asking about, that's how I might approach it. It seems doable to me, in any case.
Start with this: Swipe from one view to the next view
Try using a UIButton that tracks the time since the state of the button changed to "highlighted". You may need to do this in order to track the dragging and move the button around:
Observing pinch multi-touch gestures in a UITableView
Check to see if the button starts overlapping one side of the screen while being dragged. If a certain amount of time elapses after the button first starts overlapping the edge, manipulate the UIScrollView so that it switches to the next page on the corresponding side of the screen.
You may need to use NSTimer to keep track of how long the button is held down, etc.
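That edge-plus-timer idea might look roughly like the sketch below, in the view controller that drives the drag; the scrollView, draggedButton, and edgeTimer properties, the 20-point edge strip, and the 0.5-second delay are all assumptions:

// Call this from wherever the drag position is updated. Assumed properties:
// scrollView, draggedButton, edgeTimer (an NSTimer).
- (void)checkForEdgeHold
{
    CGRect rightEdge = CGRectMake(CGRectGetMaxX(self.view.bounds) - 20.0, 0,
                                  20.0, CGRectGetHeight(self.view.bounds));
    CGRect buttonFrame = [self.view convertRect:self.draggedButton.frame
                                       fromView:self.draggedButton.superview];

    if (CGRectIntersectsRect(buttonFrame, rightEdge)) {
        if (!self.edgeTimer) {
            // Button just reached the edge; start timing the hold.
            self.edgeTimer = [NSTimer scheduledTimerWithTimeInterval:0.5
                                                              target:self
                                                            selector:@selector(pageForward:)
                                                            userInfo:nil
                                                             repeats:NO];
        }
    } else {
        // Left the edge before the delay elapsed; cancel.
        [self.edgeTimer invalidate];
        self.edgeTimer = nil;
    }
}

- (void)pageForward:(NSTimer *)timer
{
    // Scroll one page to the right; one page = the scroll view's width here.
    CGFloat pageWidth = CGRectGetWidth(self.scrollView.bounds);
    CGPoint target = CGPointMake(self.scrollView.contentOffset.x + pageWidth,
                                 self.scrollView.contentOffset.y);
    [self.scrollView setContentOffset:target animated:YES];
    self.edgeTimer = nil;
}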
In any case there's no reason why this couldn't work.
If UIButton doesn't work, then perhaps try a custom subclass of UIControl (which tracks the same touch-down actions, etc.). If that doesn't work, then fall back to intercepting events at the window level to track everything.
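If it does come to that last resort, the window-level intercept usually means subclassing UIWindow and overriding sendEvent:, along these lines (the subclass name is a placeholder):

#import <UIKit/UIKit.h>

@interface EventSpyWindow : UIWindow   // hypothetical subclass, used as the app's window
@end

@implementation EventSpyWindow

- (void)sendEvent:(UIEvent *)event
{
    if (event.type == UIEventTypeTouches) {
        // Every touch in the app passes through here before normal delivery.
        NSLog(@"touches: %@", [event allTouches]);
    }
    [super sendEvent:event];   // always forward, or nothing receives events
}

@end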