Confining a swipe gesture to a certain area (iPhone)

I have just managed to implement detection of a swipe gesture for my app. However I would like to confine the area where the gesture is valid. Thinking about this I came up with a possible solution which would be to check whether the start & finish coordinates are within some area. I was just wondering if there's a better or preferred method of doing something like this.

Simply create an invisible UIView (i.e., one with a transparent background) and set its frame so that it encloses the region in which you want to detect the gesture.
Then, simply add a UISwipeGestureRecognizer to that view, and you are done.
Read the generic UIGestureRecognizer Class Reference and the part of the Event Handling Guide for iOS that talks about UIGestureRecognizers for more info.
Of course, you could also detect the swipe gesture yourself with custom code, as explained in the very same guide, but why bother when UIGestureRecognizers can manage everything for you?
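A minimal sketch of the setup (the frame, direction, and selector names are placeholders to adapt; retain/release omitted for brevity):

// In your view controller, e.g. in viewDidLoad.
UIView *swipeArea = [[UIView alloc] initWithFrame:CGRectMake(0, 100, 320, 80)];
swipeArea.backgroundColor = [UIColor clearColor];   // invisible

UISwipeGestureRecognizer *swipe = [[UISwipeGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleSwipe:)];
swipe.direction = UISwipeGestureRecognizerDirectionRight;  // adapt as needed
[swipeArea addGestureRecognizer:swipe];
[self.view addSubview:swipeArea];

// Elsewhere in the same controller:
- (void)handleSwipe:(UISwipeGestureRecognizer *)recognizer {
    NSLog(@"Swipe detected inside the confined area");
}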

Related

Using UITapGestureRecognizer rather than manually calling tapCount

I've been checking for multiple taps, whether it is 2 or 10 by simply calling tapCount on any touch:
[[touches anyObject] tapCount]==2
This simply checks for a double tap.
It works fine. I'm wondering if there is any particular reason to instead start using UITapGestureRecognizer.
It would seem that the UITapGestureRecognizer API provides wrappers around the same functionality as just inspecting touches directly, as above. Things like tapCount and the number of fingers on the screen don't require UITapGestureRecognizer.
For things like swipes, I can see the simplicity in letting UIKit handle recognizing those, as they are harder to code manually. But for a tapCount? Where's the real gain here? What am I missing?
Gesture recognizers provide for coordination in processing multiple gesture types on the same view. See the discussion of the state machine in the documentation.
If a tap is the only gesture of interest, you may not find much value, but the architecture comes in handy when you want to coordinate the recognition of taps with other gestures, whether provided by you or by system-supplied classes such as scroll views. Gesture recognizers get first crack at touches, so you will need this architecture if you want, for example, to recognize touches in a child of a scroll view before the scroll view processes them.
The gesture recognizers can also be set to defer recognition, so, for example, the action for a single tap is not called until a double tap has timed out.
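For example, that deferral is a one-liner with requireGestureRecognizerToFail: (the selector names here are arbitrary):

UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleSingleTap:)];
singleTap.numberOfTapsRequired = 1;

UITapGestureRecognizer *doubleTap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleDoubleTap:)];
doubleTap.numberOfTapsRequired = 2;

// The single-tap action fires only after the double tap has failed
// (i.e. timed out), so both never fire for one double tap.
[singleTap requireGestureRecognizerToFail:doubleTap];

[self.view addGestureRecognizer:singleTap];
[self.view addGestureRecognizer:doubleTap];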
In general, the gesture recognizer approach is a good one to adopt because it allows gestures to be managed in a consistent fashion across apps and code sources. If Apple wanted to add an assistive-technology preference that allowed the user to select a longer interval over which a double tap would be recognized, they could do so without requiring any changes to the code of developers using standard gesture recognizers.
I should add that gesture recognizers can be added directly to your storyboard or nib, so in most cases you only need to code the target action, which could be a time saver in new code.
UITapGestureRecognizer provides a cleaner, easier to use API, but no new functionality. So for your case, no reason.
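For comparison, a sketch of what that cleaner API looks like for the double-tap case (the action name is arbitrary):

UITapGestureRecognizer *doubleTap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleDoubleTap:)];
doubleTap.numberOfTapsRequired = 2;
[self.view addGestureRecognizer:doubleTap];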

Making multiple moving images respond to taps

In the app, I have several images (same shape and size, different colors) that move along the perimeter of a circle, kind of like cursors acting as compass needles. I want to be able to tap each of these and then display a message based on which one is tapped, but I am not sure what kind of approach would be good for this. Right now I'm trying to make them into UIButtons, but that is giving me a lot of hassle. Is there a way to do this with a UITapGestureRecognizer? I doubt a single recognizer can keep track of 6 different moving areas while letting other tap events through, and I don't think adding 6 different recognizers is a good idea, so I'm just wondering if anyone has suggestions on how to go about this. I'm using Core Graphics.
The best method to achieve this effect would probably be to subclass UIButton (or even UIControl if you'd like) and override its touch-tracking methods. For example, if you were to subclass UIControl, you could override the method below to detect touches and do what you wish with them:
-(BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
Whatever you want each control to look like can then be done in the drawRect: method by overriding it (and uncommenting it in the Xcode template). The same goes for subclassing anything else and tracking touches within the subclass: UIButton, UIImageView, UIControl, etc.
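A rough sketch of that approach (the class name is made up for illustration):

// CompassNeedleControl: a hypothetical UIControl subclass, one per needle.
@interface CompassNeedleControl : UIControl
@end

@implementation CompassNeedleControl

- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    // The touch has already been hit-tested to this control, so we know
    // exactly which needle was tapped; react here, or let targets
    // registered via addTarget:action:forControlEvents: respond.
    return NO;  // NO = a simple tap needs no continued tracking
}

- (void)drawRect:(CGRect)rect {
    // Draw this needle with Core Graphics, e.g. in its own color.
}

@end

Moving each needle is then just a matter of updating the control's center or frame as it animates around the circle.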

Using UISwipeGestureRecognizer and drawing code

I need a control that allows user to
1) draw on it
2) swipe to go to next screen (through an event or a delegate)
I've tried to add UISwipeGestureRecognizer to the view but it didn't work the way I wanted. My UI setup is like this:
Main Controller:
view (with UISwipeGestureRecognizer)
subview (owned by another controller that captures touch events and draws the graphics)
Whenever I try to draw a horizontal line on the canvas, the UISwipeGestureRecognizer takes over and fires the "go to next screen" event.
How can I prevent UISwipeGestureRecognizer from doing that? I am thinking about differentiating a horizontal line from a swipe based on duration/length, but UISwipeGestureRecognizer does not support anything like that.
It sounds to me like a pretty confusing user experience, but if you're determined to do this, you'll probably need to subclass UIGestureRecognizer and tune it to recognize exactly the type of swipes you care about.
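One possible shape for such a subclass (the velocity and direction thresholds are invented; treat this as a starting point, not a drop-in solution):

#import <UIKit/UIKit.h>
#import <UIKit/UIGestureRecognizerSubclass.h>

// Recognizes only fast, mostly horizontal strokes, so that slow
// drawing strokes are left alone.
@interface FastSwipeGestureRecognizer : UIGestureRecognizer {
    CGPoint _startPoint;
    NSTimeInterval _startTime;
}
@end

@implementation FastSwipeGestureRecognizer

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    _startPoint = [touch locationInView:self.view];
    _startTime = touch.timestamp;
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint end = [touch locationInView:self.view];
    CGFloat dx = end.x - _startPoint.x;
    CGFloat dy = end.y - _startPoint.y;
    NSTimeInterval dt = touch.timestamp - _startTime;

    BOOL fast       = (dt > 0) && (fabs(dx) / dt > 500.0);  // points/second
    BOOL horizontal = fabs(dx) > 2.0 * fabs(dy);
    self.state = (fast && horizontal) ? UIGestureRecognizerStateRecognized
                                      : UIGestureRecognizerStateFailed;
}

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    self.state = UIGestureRecognizerStateFailed;
}

@end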

Splitting a touch sequence between multiple UIGestureRecognizer instances

I'm developing an iPhone/iPad app that supports dragging items between table views. Since all the tables don't fit on screen, I've written a custom UIScrollView that lays them out horizontally, and supports paging.
While I've gotten the primary drag and drop together, there are a few remaining issues I can't get past.
After the user has selected an item to drag, and is dragging, they cannot scroll the UIScrollView to find the destination UITableView.
Sometimes the user will want to drag the item within the same table view. But once the drag has begun, the table view no longer recognizes the scroll gesture.
I've tried a variety of different options, including implementing a UIGestureRecognizerDelegate and allowing multiple gesture recognizers to recognize gestures simultaneously.
The problem, as I see it, stems from this description in the Event Handling Guide: "iOS recognizes one or more fingers touching the screen as part of a multitouch sequence. This sequence begins when the first finger touches down on the screen and ends when the last finger is lifted from the screen."
UIGestureRecognizer instances always match against the entire sequence. In my case, I want to split a single sequence into discrete gestures -- some touches recognize a dragging of an item, while different touches within the same sequence should be recognized as a swipe or scroll gesture. Effectively, I want my gesture recognizers to recognize simultaneously, but on different touches. Once one recognizer claims a touch as part of its gesture, that touch should be ignored by the others.
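For example, I can hand each touch to exactly one recognizer by location via the delegate's gestureRecognizer:shouldReceiveTouch: (dragRecognizer and draggedItemView here are hypothetical names), but that only filters which touches a recognizer initially receives; each recognizer still matches against the whole sequence:

// As the shared UIGestureRecognizerDelegate, route each touch to
// exactly one recognizer based on where it lands.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)recognizer
       shouldReceiveTouch:(UITouch *)touch {
    CGPoint p = [touch locationInView:self.draggedItemView];
    BOOL onItem = [self.draggedItemView pointInside:p withEvent:nil];
    if (recognizer == self.dragRecognizer) {
        return onItem;   // the drag recognizer only sees touches on the item
    }
    return !onItem;      // scroll/swipe recognizers get everything else
}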
I haven't found a way to solve all these issues coherently using the default UIGestureRecognizer subclasses, and am now about to write my own custom multi-part gesture recognizer.
I'd rather not have to though -- is there any more appropriate way to achieve the same result?
Given the silence here, and a blog post I just found, I believe the answer is that, no, there is no way to do sub-gesture recognition with the standard framework.
For those looking to do something similar, take a look at this project/blog post, which is an attempt to create a sub-gesture recognition library:
http://sunetos.com/items/2010/10/31/adding-subgestures-to-ios-gesture-recognition/
I haven't used it -- I ended up manually crafting my own interactions -- but will consider refactoring to use it if it pans out.

MKAnnotationView in both hovering and pinned states

I'm trying to add a pin (MKAnnotation and MKAnnotationView) to my MKMapView and allow the user to drag it around.
I'd also like to make the dragging of the pin animated and interactive like the iPhone's Map App.
My question is: how do I change the state of the MKAnnotationView so that it's hovering over the map (so the pin isn't actually inside the map)?
I'm not 100% sure how to do this.
At present, my colleague has found a hovering image that he swaps with the default MKAnnotationView, but that means I can't easily animate between the two.
Not sure exactly what you want to do, but I have used Apple's example in the iPhone App Programming Guide (Handling Events in an Annotation View) to implement the draggable pin.
It has only partial code, but that may be enough for you to figure it out.
Basically, you must subclass MKAnnotation and MKPinAnnotationView, and in your CustomAnnotationView class you have to implement the touch-handling methods, as shown in the Apple example.
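A bare-bones sketch of the shape of that subclass (class and property names here are illustrative, not Apple's; the real logic, including the hover animation, is in Apple's sample):

#import <MapKit/MapKit.h>

@interface DraggablePinView : MKPinAnnotationView {
    MKMapView *mapView;  // set (not retained) when the view is created
}
@property (nonatomic, assign) MKMapView *mapView;
@end

@implementation DraggablePinView
@synthesize mapView;

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Let the pin follow the finger while it "hovers" above the map.
    self.center = [[touches anyObject] locationInView:self.superview];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Drop the pin: convert its final position back to a map coordinate.
    CLLocationCoordinate2D coord =
        [self.mapView convertPoint:self.center
              toCoordinateFromView:self.superview];
    [(id)self.annotation setCoordinate:coord];  // assumes a settable coordinate
}

@end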
Apple's code snippet was not complete, so a bit of filling out and modification was needed, but I have reproduced the behaviour of the pin in Apple's iPhone Map app exactly (except that I did not implement the right accessory button).
In it, the pin feels like it is hovering. So, I suspect that you have no need for the hovering image you have mentioned.
I also presume that by providing a BOOL property, you could make the pin draggable or "fixed" programmatically.
Does this help?