Whilst developing an app I have come up against a problem with having too many pan gesture recognisers.
My first pan gesture recogniser is on the MainViewController which is a parent of the RecipeSearchVC. This gesture recogniser slides the whole view left or right.
My second pan gesture recogniser is in the RecipeSearchParametersVC, which is a parent of a Page View Controller.
The third pan gesture recogniser is added to a UIControl wheel nested inside a view controller which is represented by the PageViewController.
I know this sounds insane, and it could be argued that this is poor design. However, I believe that if this worked cohesively it would be fine.
When trying to rotate the wheel it will rotate for a second or two before the gesture is overtaken by either the PageViewController or the MainViewController. More often than not it is the MainViewController that takes over. What techniques could I employ to clearly separate each of these gesture recognisers?
EDIT:
Apologies for the vagueness of my description when it comes to the pan gesture recognisers.
The MainViewController has its own UIPanGestureRecognizer to allow it to move everything left or right.
The RecipeSearchParametersVC only has a UIPanGestureRecognizer because of the UIPageViewController it contains. It does not add the gesture recogniser itself, but simply takes them from the pageViewController.
The UIControl's gesture recognisers allow it to track the rotation it should undergo.
Taking the advice given, I may remove the gestures from the page view controller and substitute them with buttons. I only intended this to work like the images found in iBooks (which can be scrolled to reveal more images), and so I thought that it would work fine.
UIControl UIPanGestureRecogniser Code
/**
 * Sent to the control when a touch related to the given event enters the control's bounds
 *
 * @param touch UITouch object that represents a touch on the receiving control during tracking
 * @param event event object encapsulating the information specific to the user event
 */
- (BOOL)beginTrackingWithTouch:(UITouch *)touch
                     withEvent:(UIEvent *)event
{
    [super beginTrackingWithTouch:touch withEvent:event];
    CGPoint touchPoint = [touch locationInView:self];
    // filter out touches too close to the centre of the wheel
    CGFloat magnitudeFromCentre = [self calculateDistanceFromCentre:touchPoint];
    if (magnitudeFromCentre < 40) return NO;
    // calculate the distance from the centre
    CGFloat deltaX = touchPoint.x - _container.center.x;
    CGFloat deltaY = touchPoint.y - _container.center.y;
    // take the arctangent of the opposite (y axis) over the adjacent (x axis) to get the angle
    _deltaAngle = atan2(deltaY, deltaX);
    _startTransform = _container.transform;
    // selection is in limbo, so set all sector images to the minimum value by changing the current one
    [self getSectorByValue:_currentSector].alpha = kMinimumAlpha;
    return YES;
}
Unfortunately due to the nature of my controller hierarchy I was forced to rethink the design of my app.
The MainViewController with the UIPanGestureRecogniser has stayed as is.
The UIPageViewController with the UIControl has moved to a separate static view controller.
This works far better but is not yet ideal. The UIPageViewController steals any horizontal panning; however, this can probably be fixed by implementing buttons as an alternative to the scrolling.
The UIControl did not have a gesture recogniser; instead I overrode beginTrackingWithTouch: and the other tracking methods to track the touches.
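For illustration, a minimal sketch of the companion continueTracking override, assuming the same _container, _deltaAngle and _startTransform ivars as in the snippet above (the rotation maths here is an assumption, not the original code):
- (BOOL)continueTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
{
    [super continueTrackingWithTouch:touch withEvent:event];
    CGPoint touchPoint = [touch locationInView:self];
    // angle of the current touch relative to the wheel's centre
    CGFloat deltaX = touchPoint.x - _container.center.x;
    CGFloat deltaY = touchPoint.y - _container.center.y;
    CGFloat angle = atan2(deltaY, deltaX);
    // rotate the container by how far the finger has travelled since tracking began
    CGFloat angleDifference = _deltaAngle - angle;
    _container.transform = CGAffineTransformRotate(_startTransform, -angleDifference);
    return YES;
}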
I suppose the answer should be: if you are layering too many gestures, you're doing it wrong.
You will need to add a container to the wheel, and then you can do something like this. If I am not missing something, this code should work:
UIPanGestureRecognizer *pan1 = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan1:)];
[self.view addGestureRecognizer:pan1];
UIPanGestureRecognizer *pan2 = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan2:)];
[view2 addGestureRecognizer:pan2];
UIPanGestureRecognizer *pan3 = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan3:)];
[view3 addGestureRecognizer:pan3];
- (void)pan1:(UIPanGestureRecognizer *)sender {
    NSLog(@"%@", sender);
}
- (void)pan2:(UIPanGestureRecognizer *)sender {
    NSLog(@"%@", sender);
}
- (void)pan3:(UIPanGestureRecognizer *)sender {
    NSLog(@"%@", sender);
}
I believe you would have defined 3 different UIPanGestureRecognizer objects and attached them to the appropriate views.
While technically this is correct, it could cause some confusion. For example, if you have several overlapping views (you have 3 here) and need a touch to be sent to a view that's not at the top of the view stack, what would happen? The gesture could end up being confusing.
Instead, it's possible to attach a single gesture recognizer to the superview of several target views and delegate the gesture to the correct view based on the coordinates of where the user is touching. For that you need to normalize the touch coordinates originating from any of the subviews to the superview where the UIPanGestureRecognizer is defined. That way you can know whether the pan happened on the wheel or elsewhere.
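As a rough sketch of that idea (the handlePan: name and the wheelView property are hypothetical, not from the original code):
// in viewDidLoad or similar: one pan recognizer on the common superview
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[self.view addGestureRecognizer:pan];

// route the gesture based on where the touch lands
- (void)handlePan:(UIPanGestureRecognizer *)sender {
    // normalise the touch location to the superview's coordinate space
    CGPoint location = [sender locationInView:self.view];
    if (CGRectContainsPoint(self.wheelView.frame, location)) {
        // the pan is over the wheel: drive the wheel's rotation
    } else {
        // the pan is elsewhere: slide the whole view left or right
    }
}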
PS: This being said, I strongly feel this is a design gotcha and it will hurt you. I have done things like this, but eventually you will stumble on some corner case where the user interaction is horrid. If this is the main view of your app, I suggest you rethink the design.
What techniques could I employ to clearly separate each of these gesture recognisers?
You should look into the UIGestureRecognizerDelegate method:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
Asks the delegate if two gesture recognizers should be allowed to recognize gestures simultaneously.
This method is called when recognition of a gesture by either gestureRecognizer or otherGestureRecognizer would block the other gesture recognizer from recognizing its gesture. Note that returning YES is guaranteed to allow simultaneous recognition; returning NO, on the other hand, is not guaranteed to prevent simultaneous recognition because the other gesture recognizer's delegate may return YES.
I believe this will solve your problem of gestures being 'overtaken' by other gestures.
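A minimal sketch of how that might look for the wheel case, assuming hypothetical wheelPan and outerPan properties pointing at the two recognizers and that self has been set as their delegate:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    // only let the wheel's pan run alongside the outer pan; all other pairs stay exclusive
    return (gestureRecognizer == self.wheelPan && otherGestureRecognizer == self.outerPan) ||
           (gestureRecognizer == self.outerPan && otherGestureRecognizer == self.wheelPan);
}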
Related
I added iCarousel to my iOS project and it worked fine: I could scroll pictures.
Then my view needed to contain more data, so I added a UIScrollView which covers the whole view. So now I have some elements (labels, text views, and the UIView for iCarousel) in my scroll view.
The scroll view works fine. But now the iCarousel doesn't switch pictures. The pictures are loaded (I see the first one and part of the second one) but the carousel doesn't work any more.
Has anyone had the same problem? How can I solve it?
Edit:
After @Wain's advice, I tried this:
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(pan:)];
    [self.view addGestureRecognizer:panRecognizer];
    panRecognizer.delegate = self;
}

- (void)pan:(id)sender {
    NSLog(@"Pan");
}

- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return YES;
}
It doesn't work, but I checked with a breakpoint: the shouldRecognizeSimultaneouslyWithGestureRecognizer method does run when I try a horizontal scroll.
Where am I wrong?
Looks like a clash between gesture recognisers because, by default, only one can be active at any one time.
Not sure why your scroll view covers everything, but you should be able to make your controller the delegate of the gestures and implement gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: to allow them to recognise concurrently.
UIScrollView has a panGestureRecognizer property that you can access to set yourself as the delegate.
iCarousel is a bit different, as it doesn't make the gesture available publicly, so if setting the delegate on the scroll view doesn't work you can edit the carousel (which sets itself as the delegate) to implement the delegate method.
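As a sketch of that last option, assuming you are editing iCarousel's own source (per the above, the class is already the delegate of its internal pan recognizer):
// added inside iCarousel.m so its pan can run alongside the enclosing scroll view's pan
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    return YES;
}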
From reading the UIGestureRecognizer Class Reference it is implied that the API will handle the prioritizing of touches and gesture controls for you, making sure that your touchesBegan and related methods are not called on the view unless the gesture recognizers have first failed:
A window delivers touch events to a gesture recognizer before it delivers them to the hit-tested view attached to the gesture recognizer. Generally, if a gesture recognizer analyzes the stream of touches in a multi-touch sequence and does not recognize its gesture, the view receives the full complement of touches. If a gesture recognizer recognizes its gesture, the remaining touches for the view are cancelled.
I have added a swipe gesture to my view, and it is working. Via some logging, when I do a single swipe, the method reports as such. However, my touchesBegan method is also reporting via its log, even though the touchesCancelled method is, as expected, also receiving a message.
I want, and expect, the gesture recognizer to prevent touchesBegan or touchesMoved from being called.
So my question is: for the gesture recognizer to in fact delay touches based on its state, is there additional setup necessary? The docs do not suggest anything else as necessary.
My setup is simply:
swipeUpTwoFinger = [[[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(doubleSwipeUp:)] autorelease];
swipeUpTwoFinger.direction = UISwipeGestureRecognizerDirectionUp;
swipeUpTwoFinger.numberOfTouchesRequired = 2;
[self addGestureRecognizer:swipeUpTwoFinger];
I have also tried this test to make sure a recognizer has failed before processing touchesBegan (this test should not be necessary if you believe what the docs say above), but touchesBegan still processes the log line after this test:
if (swipeUpTwoFinger.state==UIGestureRecognizerStateFailed)
It sounds like you need:
swipeUpTwoFinger.delaysTouchesBegan = YES;
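For completeness, a sketch of the setup from the question with that property applied:
swipeUpTwoFinger = [[[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(doubleSwipeUp:)] autorelease];
swipeUpTwoFinger.direction = UISwipeGestureRecognizerDirectionUp;
swipeUpTwoFinger.numberOfTouchesRequired = 2;
// delay delivery of touchesBegan to the view until this recognizer has failed
swipeUpTwoFinger.delaysTouchesBegan = YES;
[self addGestureRecognizer:swipeUpTwoFinger];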
Recently I had a problem: how to disable scrolling in a particular area of a UIScrollView, particularly the area occupied by a UIView or subview.
I've read a lot about subclassing and other long approaches to solve this.
But recently I solved the problem in a simpler manner, without subclassing:
UIPanGestureRecognizer *panrecognizer = [[UIPanGestureRecognizer alloc] init];
and then
[panrecognizer setCancelsTouchesInView:NO];
[mySubViewInScroll addGestureRecognizer:panrecognizer];
I created the UIPanGestureRecognizer without an action passed to it and then added the recognizer to the view in the scroller. In this way the gestures on the view will be captured but expressly not handled, by the view or by the superviews, because we passed no action to the object.
The question is this: is this a correct approach to handle this type of problem, or is it better to do otherwise? I mean, will Apple accept an application that takes this approach?
In effect I think that this, even if not the best, is the most practical solution, since messing around with the classes and subclassing to achieve only a partial scroll lock of the screen seems very odd. So let's see if Apple will accept this kind of solution...
Try overriding the scroll view's
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
to return the child view that should receive the touch.
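A sketch of that approach in a UIScrollView subclass, assuming a hypothetical noScrollView property for the area that should not scroll:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // hand touches inside the excluded area to that subview instead of the scroll view
    if (CGRectContainsPoint(self.noScrollView.frame, point)) {
        return [self.noScrollView hitTest:[self convertPoint:point toView:self.noScrollView] withEvent:event];
    }
    return [super hitTest:point withEvent:event];
}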
So I have a UIView as my Root view on my iPad. I add some subviews to it, amongst which there's also an MKMapView.
The thing I am trying to achieve is to detect a 3-finger swipe across the screen so I can react to it. Furthermore, I need to distinguish between a 3-finger swipe to the left and to the right.
Before I added the map view, I was experimenting with touchesMoved etc. Since I found this to be inaccurate, I moved to using UISwipeGestureRecognizer, which worked well.
Anyway, once I added the Map, it ate all my touches. So I kept looking for answers.
This one seemed promising:
Intercepting/Hijacking iPhone Touch Events for MKMapView
as well as subclassing UIWindow and intercepting the touches.
Well, it turns out, none of them works well for me, since in both cases I end up either in
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
or in the situation of having to recognise the touch type myself, etc.
This is what I was trying to do in the first place!
So my question is: is there a way to use UIGestureRecognizers the way I described above to maintain my controls while keeping the functionality of the map view?
Have I described my problem accurately enough?
Greetz.
This answer describes subclassing a gesture recogniser to ensure that it does not interfere with other recognisers.
You could add a custom recogniser to MKMapView that intercepts three finger swipe (and reacts appropriately) but then allows all other gestures to be processed as normal by the MKMapView.
The link above gives an example of adding a simple finger press by subclassing UIGestureRecognizer. For a three-finger swipe, I would subclass UISwipeGestureRecognizer.
I have had success using this method to add a two-finger rotate to an MKMapView without messing up pan and zoom, etc.
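A minimal sketch of such a subclass (the class name and the threeFingerSwipe: action are hypothetical):
#import <UIKit/UIGestureRecognizerSubclass.h>

@interface ThreeFingerSwipeGestureRecognizer : UISwipeGestureRecognizer
@end

@implementation ThreeFingerSwipeGestureRecognizer
// never block the map view's own pan/zoom recognizers
- (BOOL)canPreventGestureRecognizer:(UIGestureRecognizer *)preventedGestureRecognizer
{
    return NO;
}
@end

// attaching it to the map view (e.g. in viewDidLoad):
ThreeFingerSwipeGestureRecognizer *swipe = [[ThreeFingerSwipeGestureRecognizer alloc] initWithTarget:self action:@selector(threeFingerSwipe:)];
swipe.direction = UISwipeGestureRecognizerDirectionLeft;
swipe.numberOfTouchesRequired = 3;
[self.mapView addGestureRecognizer:swipe];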
The map should not "eat" all your touches. There's something wrong with that - I have successfully added long press and tap recognizers to the MKMapView, and they worked as expected.
Make sure to assign your controller as the delegate of all the gesture recognizers you create, and be sure to implement the method:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer;
You can gain further control by implementing
-(BOOL)gestureRecognizerShouldBegin:(UIGestureRecognizer *)gestureRecognizer;
After that, it's very much up to the details of your implementation to make the gesture recognizer work with the map view. In my experience you can add your own, and it should not be interfering with your recognizers.
Some other thoughts:
- Overlay an invisible, user-interaction-enabled view over the MKMapView to take events from it. Depending on how much interaction you need on the map view, you could turn this on and off, etc. (see the sketch below).
- Add another swipe recognizer to the map view itself (in addition to the one on the parent view).
You could also play around with UIView hitTest method but for me that was always too low level.
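A rough sketch of the overlay idea from the list above (the names are illustrative, not from the original answer):
// transparent view above the map that owns the three-finger swipe
UIView *overlay = [[UIView alloc] initWithFrame:self.mapView.frame];
overlay.backgroundColor = [UIColor clearColor];
overlay.userInteractionEnabled = YES; // toggle off whenever the map itself needs touches
[self.view addSubview:overlay];

UISwipeGestureRecognizer *swipe = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(threeFingerSwipe:)];
swipe.direction = UISwipeGestureRecognizerDirectionLeft;
swipe.numberOfTouchesRequired = 3;
[overlay addGestureRecognizer:swipe];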
The question is old, but still relevant. My two cents, especially for OSX.
If you need to detect in code whether a region change was caused by a user gesture (pinch etc.):
(I set an ivar so I can check it in other places in the controller code.)
func mapView(_ mapView: MKMapView, regionWillChangeAnimated animated: Bool) {
    guard let view = mapView.subviews.first else { return }
    self.regionChangeIsFromUserInteraction = false
    #if os(iOS)
    guard let recognizers = view.gestureRecognizers else { return }
    #elseif os(OSX)
    // on OSX they are on the map view itself
    let recognizers = mapView.gestureRecognizers
    #endif
    for recognizer in recognizers {
        let state = recognizer.state
        if state == .began || state == .ended {
            self.regionChangeIsFromUserInteraction = true
            break
        }
    }
    #if DEBUG
    print("regionChangeIsFromUserInteraction ", self.regionChangeIsFromUserInteraction)
    #endif
}
Would removing the swipe gesture recognizers from your MKMapView resolve the issue?
// remove any swipe recognizers that are actually attached to the map view
for (UIGestureRecognizer *recognizer in [YourMKMapView.gestureRecognizers copy]) {
    if ([recognizer isKindOfClass:[UISwipeGestureRecognizer class]]) {
        [YourMKMapView removeGestureRecognizer:recognizer];
    }
}
That way, MKMapView will not respond to swipe gestures, and responsibility for them passes to the container view controller of the MKMapView.
I'm using [aSubview touchesBegan] to move aSubview's position around on a screen in relation to its superview. Its superview is not much larger than the subview itself. This is quite straightforward to do as the following snippet shows:
UITouch* touch = [touches anyObject];
CGPoint touchPoint = [touch locationInView:[self superview]];
self.center = touchPoint;
However, once aSubview is moved, as soon as any portion of it falls outside the bounds of its superview, touches in that section no longer register. In other words, touchesBegan no longer fires. I want touches in aSubview to register no matter where it's moved in relation to its superview.
Any thoughts?
Howard
mcpunky's answer is almost good, except you can NOT make the pointInside function always return YES, because this way the view will intercept all touches.
Instead, one needs to do a more fine-grained check:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    return CGRectContainsPoint(self.subviewOutsideMe.frame, point) || CGRectContainsPoint(self.bounds, point);
}
I just had this problem, with subviews not receiving input because the superview was simply not sending touch events for subviews outside its bounds. It was also crucial to keep the bounds of the superview as they were, and moving the subviews up the hierarchy wasn't feasible either. What did the job for me was overriding the superview's
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
to always return YES. Result: pretty much nothing changed in the superview or the subviews, but input is now received outside the superview's boundaries.
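In code, that override is simply:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
    // claim every point so hit-testing continues into subviews outside our bounds
    return YES;
}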
I've talked to an Apple engineer about this. touchesBegan won't work in the portion of a subview that's not contained within its superview, because the system clips each subview on the way down the hierarchy as it tries to determine which subview's touchesBegan gets called.
In order to resolve the issue, I removed the intermediate wrapper views that were causing the clipping problem and hoisted the subviews up one level. This necessitated a minor change in logic but ultimately proved to be a cleaner solution -- and more importantly, one that worked.
I'm unsure if this is the correct way, but it's certainly one way.
Listen for the touchesBegan event in your parent object.
If you get an event, pass it on to the child view by calling its touchesBegan yourself.
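A sketch of that forwarding in the parent view, assuming a hypothetical childView property:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.childView];
    // forward touches that land over the child, even when it lies outside our bounds
    if ([self.childView pointInside:point withEvent:event]) {
        [self.childView touchesBegan:touches withEvent:event];
    } else {
        [super touchesBegan:touches withEvent:event];
    }
}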