AVCaptureDevice Camera Zoom - iPhone

I have a simple AVCaptureSession running to get a camera feed in my app and take photos. How can I implement the 'pinch to zoom' functionality using a UIGestureRecognizer for the camera?

The accepted answer is outdated, and I'm not sure it will actually capture the photo at the zoomed-in level. There is a method to zoom in, as bcattle's answer says. The problem with his answer is that it does not account for the fact that the user can zoom in, let go, and then resume from that zoom position. His solution creates jumps in the zoom level that are not very elegant.
The easiest and most elegant way to do this is to use the velocity of the pinch gesture.
- (void)handlePinchToZoomRecognizer:(UIPinchGestureRecognizer *)pinchRecognizer {
    const CGFloat pinchVelocityDividerFactor = 5.0f;
    if (pinchRecognizer.state == UIGestureRecognizerStateChanged) {
        NSError *error = nil;
        if ([videoDevice lockForConfiguration:&error]) {
            CGFloat desiredZoomFactor = videoDevice.videoZoomFactor + atan2f(pinchRecognizer.velocity, pinchVelocityDividerFactor);
            // Clamp desiredZoomFactor to the required range from 1.0 to activeFormat.videoMaxZoomFactor
            videoDevice.videoZoomFactor = MAX(1.0, MIN(desiredZoomFactor, videoDevice.activeFormat.videoMaxZoomFactor));
            [videoDevice unlockForConfiguration];
        } else {
            NSLog(@"error: %@", error);
        }
    }
}
I found that applying the arctangent function to the velocity eases the zoom-in/zoom-out effect a bit. It is not perfect, but the effect is good enough for most needs. There could probably be another easing function for the zoom-out as it approaches 1.
NOTE: The scale of a pinch gesture runs from 0 to infinity, with 0 to 1 being a pinch in (zoom out) and 1 to infinity being a pinch out (zoom in), so mapping scale directly to a good zoom effect requires extra math. Velocity, on the other hand, runs from -infinity to +infinity with 0 as the starting point, which makes it simpler to work with here.
EDIT: Fixed crash on range exception. Thanks to @garafajon!

Many have tried to do this by setting the transform property on the layer to CGAffineTransformMakeScale(gesture.scale, gesture.scale);
See here for a full-fledged implementation of pinch-to-zoom.
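For illustration, a minimal Swift sketch of that layer-transform approach (assuming a previewLayer property holding the AVCaptureVideoPreviewLayer); note that it only magnifies the on-screen preview and does not change what the camera captures:
// Minimal sketch: scale the preview layer with the pinch scale.
// This only zooms the on-screen preview; the captured photo is unaffected.
@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    let scale = max(1.0, gesture.scale)
    previewLayer.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale))
}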

Since iOS 7 you can set the zoom directly with the videoZoomFactor property of AVCaptureDevice.
Tie the scale property of the UIPinchGestureRecognizer to the videoZoomFactor with a scaling constant. This will let you vary the sensitivity to taste:
- (void)handlePinchToZoomRecognizer:(UIPinchGestureRecognizer *)pinchRecognizer {
    const CGFloat pinchZoomScaleFactor = 2.0;
    if (pinchRecognizer.state == UIGestureRecognizerStateChanged) {
        NSError *error = nil;
        if ([videoDevice lockForConfiguration:&error]) {
            videoDevice.videoZoomFactor = 1.0 + pinchRecognizer.scale * pinchZoomScaleFactor;
            [videoDevice unlockForConfiguration];
        } else {
            NSLog(@"error: %@", error);
        }
    }
}
Note that AVCaptureDevice, along with everything else related to AVCaptureSession, is not thread safe, so you probably don't want to do this from the main queue.
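As a rough sketch of what that might look like, here the configuration work is pushed onto a dedicated serial session queue (the queue and the videoDevice property are assumptions, not part of the answer above):
// Minimal sketch (inside the camera controller): do AVCaptureDevice
// configuration off the main queue, on a dedicated serial session queue.
let sessionQueue = DispatchQueue(label: "camera.session.queue")

func setZoom(_ factor: CGFloat) {
    sessionQueue.async { [weak self] in
        guard let device = self?.videoDevice else { return } // assumed AVCaptureDevice property
        do {
            try device.lockForConfiguration()
            device.videoZoomFactor = max(1.0, min(factor, device.activeFormat.videoMaxZoomFactor))
            device.unlockForConfiguration()
        } catch {
            print("lockForConfiguration failed: \(error)")
        }
    }
}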

Swift 4
Add a pinch gesture recognizer to the front-most view and connect it to this action (pinchToZoom). captureDevice should be the instance currently providing input to the capture session. pinchToZoom provides smooth zooming for both front and back capture devices.
@IBAction func pinchToZoom(_ pinch: UIPinchGestureRecognizer) {
    guard let device = captureDevice else { return }

    func minMaxZoom(_ factor: CGFloat) -> CGFloat {
        return min(max(factor, 1.0), device.activeFormat.videoMaxZoomFactor)
    }

    func update(scale factor: CGFloat) {
        do {
            try device.lockForConfiguration()
            defer { device.unlockForConfiguration() }
            device.videoZoomFactor = factor
        } catch {
            debugPrint(error)
        }
    }

    let newScaleFactor = minMaxZoom(pinch.scale * zoomFactor)
    switch pinch.state {
    case .began: fallthrough
    case .changed: update(scale: newScaleFactor)
    case .ended:
        zoomFactor = minMaxZoom(newScaleFactor)
        update(scale: zoomFactor)
    default: break
    }
}
It'll be useful to declare zoomFactor on your camera object or view controller. I usually put it on the same singleton that owns the AVCaptureSession. This will act as a default value for captureDevice's videoZoomFactor.
var zoomFactor: CGFloat = 1.0
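For completeness, a minimal sketch of the wiring described above, assuming the recognizer is attached in code to a previewView on the same controller rather than in a storyboard:
// Minimal sketch: attach the pinch recognizer to the preview view.
let pinchRecognizer = UIPinchGestureRecognizer(target: self,
                                               action: #selector(pinchToZoom(_:)))
previewView.addGestureRecognizer(pinchRecognizer) // `previewView` is an assumption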

In the Swift version, you can zoom in and out by simply assigning a scaled value to videoZoomFactor. The following code, inside a UIPinchGestureRecognizer handler, will solve the issue.
do {
    try device.lockForConfiguration()
    switch gesture.state {
    case .began:
        self.pivotPinchScale = device.videoZoomFactor
    case .changed:
        var factor = self.pivotPinchScale * gesture.scale
        factor = max(1, min(factor, device.activeFormat.videoMaxZoomFactor))
        device.videoZoomFactor = factor
    default:
        break
    }
    device.unlockForConfiguration()
} catch {
    // handle exception
}
Here, pivotPinchScale is a CGFloat property declared somewhere in your controller.
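A minimal sketch of how that snippet might be wrapped into a complete handler; the device and pivotPinchScale names come from the answer above, while the handler name is an assumption:
var pivotPinchScale: CGFloat = 1.0 // zoom factor at the moment the pinch began

@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    do {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }
        switch gesture.state {
        case .began:
            pivotPinchScale = device.videoZoomFactor
        case .changed:
            device.videoZoomFactor = max(1, min(pivotPinchScale * gesture.scale,
                                                device.activeFormat.videoMaxZoomFactor))
        default:
            break
        }
    } catch {
        print(error)
    }
}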
You may also refer to the following project to see how the camera works with a UIPinchGestureRecognizer:
https://github.com/DragonCherry/CameraPreviewController

I started from @Gabriel Cartier's solution (thanks). In my code I preferred to use the smoother rampToVideoZoomFactor: and a simpler way to compute the device's scale factor.
- (IBAction)pinchForZoom:(id)sender forEvent:(UIEvent *)event {
    UIPinchGestureRecognizer *pinchRecognizer = (UIPinchGestureRecognizer *)sender;
    static CGFloat zoomFactorBegin = 0.0;
    if (UIGestureRecognizerStateBegan == pinchRecognizer.state) {
        zoomFactorBegin = self.captureDevice.videoZoomFactor;
    } else if (UIGestureRecognizerStateChanged == pinchRecognizer.state) {
        NSError *error = nil;
        if ([self.captureDevice lockForConfiguration:&error]) {
            CGFloat desiredZoomFactor = zoomFactorBegin * pinchRecognizer.scale;
            // Clamp desiredZoomFactor to the required range from 1.0 to activeFormat.videoMaxZoomFactor
            CGFloat zoomFactor = MAX(1.0, MIN(desiredZoomFactor, self.captureDevice.activeFormat.videoMaxZoomFactor));
            [self.captureDevice rampToVideoZoomFactor:zoomFactor withRate:3.0];
            [self.captureDevice unlockForConfiguration];
        } else {
            NSLog(@"error: %@", error);
        }
    }
}

There is an easier way to handle the camera zoom level with a pinch recognizer. The only thing you need to do is take cameraDevice.videoZoomFactor and assign it to the recognizer's scale in the .began state, like this:
@objc private func viewPinched(recognizer: UIPinchGestureRecognizer) {
    switch recognizer.state {
    case .began:
        recognizer.scale = cameraDevice.videoZoomFactor
    case .changed:
        let scale = recognizer.scale
        do {
            try cameraDevice.lockForConfiguration()
            cameraDevice.videoZoomFactor = max(cameraDevice.minAvailableVideoZoomFactor,
                                               min(scale, cameraDevice.maxAvailableVideoZoomFactor))
            cameraDevice.unlockForConfiguration()
        } catch {
            print(error)
        }
    default:
        break
    }
}

Based on @Gabriel Cartier's answer:
- (void)cameraZoomWithPinchVelocity:(CGFloat)velocity {
    CGFloat pinchVelocityDividerFactor = 40.0f;
    if (velocity < 0) {
        pinchVelocityDividerFactor = 5.0f; // zoom in
    }
    if (_videoInput) {
        if ([[_videoInput device] position] == AVCaptureDevicePositionBack) {
            NSError *error = nil;
            if ([[_videoInput device] lockForConfiguration:&error]) {
                CGFloat desiredZoomFactor = [_videoInput device].videoZoomFactor + atan2f(velocity, pinchVelocityDividerFactor);
                // Clamp desiredZoomFactor to the required range from 1.0 to activeFormat.videoMaxZoomFactor
                CGFloat maxFactor = MIN(10, [_videoInput device].activeFormat.videoMaxZoomFactor);
                [_videoInput device].videoZoomFactor = MAX(1.0, MIN(desiredZoomFactor, maxFactor));
                [[_videoInput device] unlockForConfiguration];
            } else {
                NSLog(@"cameraZoomWithPinchVelocity error: %@", error);
            }
        }
    }
}

I am using iOS SDK 8.3 and the AVFoundation framework, and for me the following method worked for zooming:
nameOfAVCaptureVideoPreviewLayer.affineTransform = CGAffineTransformMakeScale(scaleX, scaleY)
For saving the picture with the same scale I used the following method:
nameOfAVCaptureConnection.videoScaleAndCropFactor = factorNumber;
The code below gets the image at that scale:
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    if (imageDataSampleBuffer != NULL) {
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
        UIImage *image = [UIImage imageWithData:imageData];
    }
}];
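Putting those pieces together, a rough Swift sketch of keeping the preview transform and the capture scale in sync inside the pinch handler might look like this (the previewLayer and stillImageOutput property names are assumptions):
// Minimal sketch: apply the same scale to the preview layer and to the
// still-image connection so the saved photo matches the preview.
@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    let scale = max(1.0, gesture.scale)
    previewLayer.setAffineTransform(CGAffineTransform(scaleX: scale, y: scale)) // assumed preview layer
    if let connection = stillImageOutput.connection(with: .video) {             // assumed output
        connection.videoScaleAndCropFactor = min(scale, connection.videoMaxScaleAndCropFactor)
    }
}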

Related

NSViewController slow to register mouse events?

I've been working on a small image uploading menubar app for OS X. I've created a custom NSView subclass for the uploaded items.
Here's what it looks like by default (screenshot omitted).
Mouse events are handled by the view's NSViewController in the following way:
import Cocoa

class MenuItemController: NSViewController {
    private var trackingArea: NSTrackingArea?

    override func mouseEntered(theEvent: NSEvent) {
        if let v = self.view as? MenuItemView {
            v.shouldHighlight = true
            v.needsDisplay = true
        }
    }

    override func mouseExited(theEvent: NSEvent) {
        if let v = self.view as? MenuItemView {
            v.shouldHighlight = false
            v.needsDisplay = true
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        if trackingArea == nil {
            trackingArea = NSTrackingArea(rect: self.view.bounds,
                                          options: [.ActiveAlways, .MouseEnteredAndExited],
                                          owner: self,
                                          userInfo: nil)
            self.view.addTrackingArea(trackingArea!)
        }
        /* rest of the code... */
    }
}
It works fine until I move my cursor quickly over the items. It seems like the mouseExited() event is not called, and the view remains with a blue background (the mouse is actually on the Quit button).
I also tried moving the mouse handling into the NSView, but with same results. I appreciate any input! Thanks!
In my opinion, Apple has had bugs in this area.
Assuming you update your tracking area according to the Apple docs, adding this additional fix might solve your problem; it fixes it for me in many cases.
In my mouseMoved / mouseEntered routines I verify that the mouse cursor is still within my view's frame, and if not, I call mouseExited: myself.
- (void)adjustTrackingArea
{
    if (trackingArea)
    {
        [self removeTrackingArea:trackingArea];
        [trackingArea release];
    }
    // determine the tracking options
    NSTrackingAreaOptions trackingOptions = // NSTrackingEnabledDuringMouseDrag | // don't track during drag
        NSTrackingMouseMoved |
        NSTrackingMouseEnteredAndExited |
        //NSTrackingActiveInActiveApp | NSTrackingActiveInKeyWindow | NSTrackingActiveWhenFirstResponder |
        NSTrackingActiveAlways;
    NSRect theRect = [self visibleRect];
    trackingArea = [[NSTrackingArea alloc]
                    initWithRect:theRect
                         options:trackingOptions
                           owner:self
                        userInfo:nil];
    [self addTrackingArea:trackingArea];
}

- (void)resetCursorRects
{
    [self adjustTrackingArea];
}

- (void)mouseEntered:(NSEvent *)ev
{
    [self setNeedsDisplay:YES];
    // make sure the current mouse cursor location remains under the mouse cursor
    NSPoint cursorPt = [self convertPoint:[[self window] mouseLocationOutsideOfEventStream] fromView:NULL];
    // apple bug!!!
    //NSPoint cursorPt2 = [self convertPointFromBase:[ev locationInWindow]];
    //if ( cursorPt.x != cursorPt2.x )
    //    NSLog( @"hello old cursorPt" );
    NSRect r = [self frame];
    if (cursorPt.x > NSMaxX(r) || cursorPt.x < 0)
    {
        [self mouseExited:ev];
        //cursorPt.x = [self convertPointFromBase:[ev locationInWindow]];
        //if ( cursorPt.x > NSMaxX( r ) || cursorPt.x < r.origin.x )
        return;
    }
    // ... your custom stuff here ...
}

- (void)mouseExited:(NSEvent *)theEvent
{
    if (isTrackingCursor == NO)
        return;
    [[NSCursor arrowCursor] set];
    isTrackingCursor = NO;
    [self setNeedsDisplay:YES];
}

- (void)mouseMoved:(NSEvent *)theEvent
{
    [self mouseEntered:theEvent];
}
I do not understand whether you are installing an NSTrackingArea for the whole window (in this case, the menu) or for each item. If you are doing the latter, don't, or you will spend endless time correcting the problems you are seeing. The way I handled this buggy behaviour was to create one NSTrackingArea for the whole window, figure out myself where the mouse is, and handle the highlighting of each item myself. I know this is not ideal, but it was the only way I was able to solve it after knocking my head against it for three days. A sketch of that approach is below.
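A minimal Swift sketch of that single-tracking-area idea (the itemViews array and MenuItemView type are assumptions, and the items are assumed to be direct subviews of the controller's view):
// Minimal sketch: one tracking area on the content view; hit-test the item
// views on every mouseMoved and highlight only the one under the cursor.
override func viewDidLoad() {
    super.viewDidLoad()
    let area = NSTrackingArea(rect: view.bounds,
                              options: [.activeAlways, .mouseMoved, .inVisibleRect],
                              owner: self, userInfo: nil)
    view.addTrackingArea(area)
}

override func mouseMoved(with event: NSEvent) {
    let point = view.convert(event.locationInWindow, from: nil)
    for item in itemViews { // `itemViews: [MenuItemView]` is assumed
        let underCursor = item.frame.contains(point)
        if item.shouldHighlight != underCursor {
            item.shouldHighlight = underCursor
            item.needsDisplay = true
        }
    }
}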
You can track which menu item was last active and manage it accordingly, something like:
override func mouseEntered(theEvent: NSEvent) {
    if let v = self.view as? MenuItemView {
        lastEntered.shouldHighlight = false
        lastEntered.needsDisplay = true
        lastEntered = v
        lastEntered.shouldHighlight = true
        lastEntered.needsDisplay = true
    }
}
This ensures that at most one item is highlighted at a time.

iPhone camera: show focus rectangle

I am cloning Apple's camera app using AVCaptureSession based on Apple's AppCam app sample.
The problem is I cannot see the focus rectangle in the video preview screen.
I used the following code to set focus, but the focus rectangle is still not shown.
AVCaptureDevice *device = [[self videoInput] device];
if ([device isFocusModeSupported:focusMode] && [device focusMode] != focusMode) {
    NSError *error;
    printf(" setFocusMode \n");
    if ([device lockForConfiguration:&error]) {
        [device setFocusMode:focusMode];
        [device unlockForConfiguration];
    } else {
        id delegate = [self delegate];
        if ([delegate respondsToSelector:@selector(acquiringDeviceLockFailedWithError:)]) {
            [delegate acquiringDeviceLockFailedWithError:error];
        }
    }
}
When I use UIImagePickerController, autofocus and tap-to-focus are supported by default, and the focus rectangle is shown.
Is there a way to show the focus rectangle in the video preview layer when using AVCaptureSession?
The focus animation is a completely custom animation which you have to create on your own. I currently have exactly the same problem as you:
I want to show a rectangle as feedback for the user after they tap the preview layer.
The first thing you want to do is implement tap-to-focus, probably where you initialize the preview layer:
UITapGestureRecognizer *tapGR = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(tapToFocus:)];
[tapGR setNumberOfTapsRequired:1];
[tapGR setNumberOfTouchesRequired:1];
[self.captureVideoPreviewView addGestureRecognizer:tapGR];
Now implement the tap-to-focus method itself:
- (void)tapToFocus:(UITapGestureRecognizer *)singleTap {
    CGPoint touchPoint = [singleTap locationInView:self.captureVideoPreviewView];
    CGPoint convertedPoint = [self.captureVideoPreviewLayer captureDevicePointOfInterestForPoint:touchPoint];
    AVCaptureDevice *currentDevice = currentInput.device;
    if ([currentDevice isFocusPointOfInterestSupported] && [currentDevice isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        NSError *error = nil;
        [currentDevice lockForConfiguration:&error];
        if (!error) {
            [currentDevice setFocusPointOfInterest:convertedPoint];
            [currentDevice setFocusMode:AVCaptureFocusModeAutoFocus];
            [currentDevice unlockForConfiguration];
        }
    }
}
The last thing, which I haven't implemented myself yet, is to add the focusing animation to the preview layer, or rather to the view controller holding the preview layer. I believe that could be done in tapToFocus:. There you already have the touch point, so simply add an animated image view or some other view with the touch position as its center, and remove it after the animation has finished.
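A minimal Swift sketch of such a focus animation: a yellow square that shrinks onto the tap point and then fades out (the helper and its parameters are assumptions, not from the answer above):
// Minimal sketch: flash a yellow square at the tap point, then remove it.
func showFocusRectangle(at point: CGPoint, in view: UIView) {
    let square = UIView(frame: CGRect(x: 0, y: 0, width: 80, height: 80))
    square.center = point
    square.layer.borderColor = UIColor.yellow.cgColor
    square.layer.borderWidth = 1.0
    square.backgroundColor = .clear
    view.addSubview(square)

    square.transform = CGAffineTransform(scaleX: 1.4, y: 1.4)
    UIView.animate(withDuration: 0.3, animations: {
        square.transform = .identity // shrink onto the tap point
    }, completion: { _ in
        UIView.animate(withDuration: 0.2, delay: 0.4, options: [],
                       animations: { square.alpha = 0 },
                       completion: { _ in square.removeFromSuperview() })
    })
}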
Swift implementation
Gesture:
private func focusGesture() -> UITapGestureRecognizer {
    let tapRec: UITapGestureRecognizer = UITapGestureRecognizer(target: self, action: #selector(tapToFocus(_:)))
    tapRec.cancelsTouchesInView = false
    tapRec.numberOfTapsRequired = 1
    tapRec.numberOfTouchesRequired = 1
    return tapRec
}
Action:
private func tapToFocus(gesture: UITapGestureRecognizer) {
    let touchPoint: CGPoint = gesture.locationInView(self.previewView)
    let convertedPoint: CGPoint = previewLayer!.captureDevicePointOfInterestForPoint(touchPoint)
    let currentDevice: AVCaptureDevice = videoDeviceInput!.device
    if currentDevice.focusPointOfInterestSupported && currentDevice.isFocusModeSupported(AVCaptureFocusMode.AutoFocus) {
        do {
            try currentDevice.lockForConfiguration()
            currentDevice.focusPointOfInterest = convertedPoint
            currentDevice.focusMode = AVCaptureFocusMode.AutoFocus
            currentDevice.unlockForConfiguration()
        } catch {
            // handle the error, e.g. log it
        }
    }
}
Swift 3 implementation
lazy var focusGesture: UITapGestureRecognizer = {
    let instance = UITapGestureRecognizer(target: self, action: #selector(tapToFocus(_:)))
    instance.cancelsTouchesInView = false
    instance.numberOfTapsRequired = 1
    instance.numberOfTouchesRequired = 1
    return instance
}()

func tapToFocus(_ gesture: UITapGestureRecognizer) {
    guard let previewLayer = previewLayer else {
        print("Expected a previewLayer")
        return
    }
    guard let device = device else {
        print("Expected a device")
        return
    }
    let touchPoint: CGPoint = gesture.location(in: cameraView)
    let convertedPoint: CGPoint = previewLayer.captureDevicePointOfInterest(for: touchPoint)
    if device.isFocusPointOfInterestSupported && device.isFocusModeSupported(AVCaptureFocusMode.autoFocus) {
        do {
            try device.lockForConfiguration()
            device.focusPointOfInterest = convertedPoint
            device.focusMode = AVCaptureFocusMode.autoFocus
            device.unlockForConfiguration()
        } catch {
            print("unable to focus")
        }
    }
}

Prevent scrolling in an MKMapView, even while zooming

scrollEnabled seems to break once the user starts pinching in an MKMapView.
You still can't scroll with one finger, but if you scroll with two fingers while zooming in and out, you can move the map.
I have tried:
Subclassing MKMapView to disable the scroll view inside it.
Implementing -mapView:regionWillChangeAnimated: to enforce the center.
Disabling scrollEnabled.
but with no luck.
Can anyone tell me a sure way to ONLY have zooming in an MKMapView, so the center point always stays in the middle?
You can try to handle the pinch gestures yourself using a UIPinchGestureRecognizer:
First set scrollEnabled and zoomEnabled to NO and create the gesture recognizer:
UIPinchGestureRecognizer* recognizer = [[UIPinchGestureRecognizer alloc] initWithTarget:self
                                                                                 action:@selector(handlePinch:)];
[self.mapView addGestureRecognizer:recognizer];
In the recognizer handler adjust the MKCoordinateSpan according to the zoom scale:
- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer
{
    static MKCoordinateRegion originalRegion;
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        originalRegion = self.mapView.region;
    }
    double latdelta = originalRegion.span.latitudeDelta / recognizer.scale;
    double londelta = originalRegion.span.longitudeDelta / recognizer.scale;
    // TODO: set these constants to appropriate values to set max/min zoomscale
    latdelta = MAX(MIN(latdelta, 80), 0.02);
    londelta = MAX(MIN(londelta, 80), 0.02);
    MKCoordinateSpan span = MKCoordinateSpanMake(latdelta, londelta);
    [self.mapView setRegion:MKCoordinateRegionMake(originalRegion.center, span) animated:YES];
}
This may not work perfectly like Apple's implementation but it should solve your issue.
Swift 3.0 version of @Paras Joshi's answer https://stackoverflow.com/a/11954355/3754976, with a small animation fix.
class MapViewZoomCenter: MKMapView {

    var originalRegion: MKCoordinateRegion!

    override func awakeFromNib() {
        super.awakeFromNib()
        self.configureView()
    }

    func configureView() {
        isZoomEnabled = false
        self.registerZoomGesture()
    }

    /// Register zoom gesture
    func registerZoomGesture() {
        let recognizer = UIPinchGestureRecognizer(target: self, action: #selector(MapViewZoomCenter.handleMapPinch(recognizer:)))
        self.addGestureRecognizer(recognizer)
    }

    /// Zoom the map in/out
    func handleMapPinch(recognizer: UIPinchGestureRecognizer) {
        if recognizer.state == .began {
            self.originalRegion = self.region
        }
        var latdelta: Double = originalRegion.span.latitudeDelta / Double(recognizer.scale)
        var londelta: Double = originalRegion.span.longitudeDelta / Double(recognizer.scale)
        // set these constants to appropriate values to set max/min zoomscale
        latdelta = max(min(latdelta, 80), 0.02)
        londelta = max(min(londelta, 80), 0.02)
        let span = MKCoordinateSpanMake(latdelta, londelta)
        self.setRegion(MKCoordinateRegionMake(originalRegion.center, span), animated: false)
    }
}
Try implementing -mapView:regionWillChangeAnimated: or -mapView:regionDidChangeAnimated: in your map view's delegate so that the map is always centered on your preferred location.
I've read about this before, though I've never actually tried it. Have a look at this article about an MKMapView with boundaries. It uses two delegate methods to check whether the view has been scrolled by the user.
http://blog.jamgraham.com/blog/2012/04/29/adding-boundaries-to-mkmapview
The article describes an approach similar to what you've tried, so sorry if you've already stumbled upon it.
I did not have a lot of luck with any of these answers. Doing my own pinch just conflicted too much; I was running into cases where the normal zoom would zoom in farther than I could with my own pinch.
Originally I tried, as the original poster did, to do something like:
- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
    MKCoordinateRegion region = mapView.region;
    //...
    // adjust the region.center
    //...
    mapView.region = region;
}
What I found was that this had no effect. I also discovered through NSLogs that this method fires even when I set the region or centerCoordinate programmatically. Which led to the question: "Wouldn't the above, if it DID work, loop forever?"
So I'm conjecturing now that while a user zoom/scroll/rotate is happening, the map view somehow suppresses or ignores programmatic changes to the region. Something about that arbitration renders the programmatic adjustment impotent.
If that's the problem, then maybe the key is to make the region adjustment outside of the regionDidChangeAnimated: notification. AND since any adjustment will trigger another notification, it is important to be able to determine when not to adjust anymore. This led me to the following implementation (where subject supplies the center coordinate that I want to keep in the middle):
- (void)recenterMap {
    double latDiff = self.subject.coordinate.latitude - self.mapView.centerCoordinate.latitude;
    double lonDiff = self.subject.coordinate.longitude - self.mapView.centerCoordinate.longitude;
    BOOL latIsDiff = ABS(latDiff) > 0.00001;
    BOOL lonIsDiff = ABS(lonDiff) > 0.00001;
    if (self.subject.isLocated && (lonIsDiff || latIsDiff)) {
        [self.mapView setCenterCoordinate:self.subject.coordinate animated:YES];
    }
}

- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
    if (self.isShowingMap) {
        if (self.isInEdit) {
            self.setLocationButton.hidden = NO;
            self.mapEditPrompt.hidden = YES;
        }
        else {
            if (self.subject.isLocated) { // dispatch outside so it will happen after the map view user events are done
                dispatch_after(DISPATCH_TIME_NOW, dispatch_get_main_queue(), ^{
                    [self recenterMap];
                });
            }
        }
    }
}
The delay before it slides back can vary, but it works pretty well, and lets the map interaction remain Apple-esque while it's happening.
I tried this and it works.
First create a property:
var originalCenter: CLLocationCoordinate2D?
Then in regionWillChangeAnimated, check if this event is caused by a UIPinchGestureRecognizer:
func mapView(mapView: MKMapView, regionWillChangeAnimated animated: Bool) {
    let firstView = mapView.subviews.first
    if let recognizer = firstView?.gestureRecognizers?.filter({ $0.state == .Began || $0.state == .Ended }).first as? UIPinchGestureRecognizer {
        if recognizer.scale != 1.0 {
            originalCenter = mapView.region.center
        }
    }
}
Then in regionDidChangeAnimated, return to the original region if a pinch gesture caused the region change:
func mapView(mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
    if let center = originalCenter {
        mapView.setRegion(MKCoordinateRegion(center: center, span: mapView.region.span), animated: true)
        originalCenter = nil
        return
    }
    // your other code
}

How to have a UISwipeGestureRecognizer AND UIPanGestureRecognizer work on the same view

How would you set up the gesture recognizers so that you could have a UISwipeGestureRecognizer and a UIPanGestureRecognizer work at the same time? Such that if you touch and move quickly (a quick swipe) it detects the gesture as a swipe, but if you touch then move (a short delay between touch and move) it detects it as a pan?
I've tried various permutations of requireGestureRecognizerToFail:, and that didn't help exactly; it made it so that if the swipe gesture was configured for left, my pan gesture would work up, down, and right, but any movement to the left was detected by the swipe gesture.
You're going to want to set one of the two UIGestureRecognizers' delegates to an object that makes sense (likely self), then implement and return YES from this method:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    return YES;
}
This method is called when recognition of a gesture by either gestureRecognizer or otherGestureRecognizer would block the other gesture recognizer from recognizing its gesture. Note that returning YES is guaranteed to allow simultaneous recognition; returning NO, on the other hand, is not guaranteed to prevent simultaneous recognition because the other gesture recognizer's delegate may return YES.
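A minimal Swift sketch of that wiring, assuming both recognizers are created in code on the same view controller:
// Minimal sketch: adopt UIGestureRecognizerDelegate and allow both
// recognizers to run at the same time.
class GestureViewController: UIViewController, UIGestureRecognizerDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        let swipe = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe(_:)))
        pan.delegate = self // the delegate only needs to be set on one of them
        view.addGestureRecognizer(pan)
        view.addGestureRecognizer(swipe)
    }

    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) { /* pan handling */ }
    @objc func handleSwipe(_ gesture: UISwipeGestureRecognizer) { /* swipe handling */ }
}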
By default, when the user attempts to swipe, the gesture is interpreted as a pan. This is because a swiping gesture meets the necessary conditions to be interpreted as a pan (a continuous gesture) before it meets the necessary conditions to be interpreted as a swipe (a discrete gesture).
You need to indicate a relationship between the two gesture recognizers by calling the requireGestureRecognizerToFail: method on the gesture recognizer that you want to delay:
[self.panRecognizer requireGestureRecognizerToFail:self.swipeRecognizer];
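A minimal Swift sketch of the same relationship (the recognizer names and handler selectors are assumptions):
// Minimal sketch: the pan recognizer waits until the swipe recognizer fails.
let swipeRecognizer = UISwipeGestureRecognizer(target: self, action: #selector(handleSwipe(_:)))
swipeRecognizer.direction = .left
let panRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
panRecognizer.require(toFail: swipeRecognizer) // pan begins only if the swipe is not recognized
view.addGestureRecognizer(swipeRecognizer)
view.addGestureRecognizer(panRecognizer)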
Using a pan recognizer to detect swiping and panning:
- (void)setupRecognizer
{
    UIPanGestureRecognizer* panSwipeRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanSwipe:)];
    // Here you can customize for example the minimum and maximum number of fingers required
    panSwipeRecognizer.minimumNumberOfTouches = 2;
    [targetView addGestureRecognizer:panSwipeRecognizer];
}
#define SWIPE_UP_THRESHOLD -1000.0f
#define SWIPE_DOWN_THRESHOLD 1000.0f
#define SWIPE_LEFT_THRESHOLD -1000.0f
#define SWIPE_RIGHT_THRESHOLD 1000.0f

- (void)handlePanSwipe:(UIPanGestureRecognizer *)recognizer
{
    // Get the translation in the view
    CGPoint t = [recognizer translationInView:recognizer.view];
    [recognizer setTranslation:CGPointZero inView:recognizer.view];
    // TODO: Here, you should translate your target view using this translation
    someView.center = CGPointMake(someView.center.x + t.x, someView.center.y + t.y);
    // But also, detect the swipe gesture
    if (recognizer.state == UIGestureRecognizerStateEnded)
    {
        CGPoint vel = [recognizer velocityInView:recognizer.view];
        if (vel.x < SWIPE_LEFT_THRESHOLD)
        {
            // TODO: Detected a swipe to the left
        }
        else if (vel.x > SWIPE_RIGHT_THRESHOLD)
        {
            // TODO: Detected a swipe to the right
        }
        else if (vel.y < SWIPE_UP_THRESHOLD)
        {
            // TODO: Detected a swipe up
        }
        else if (vel.y > SWIPE_DOWN_THRESHOLD)
        {
            // TODO: Detected a swipe down
        }
        else
        {
            // TODO:
            // Here, the user lifted the finger/fingers but didn't swipe.
            // If you need to, you can implement a snapping behaviour where, based on
            // the location of your targetView, you focus back on the targetView or
            // on some next view. It's your call.
        }
    }
}
Here is a full solution for detecting pan and swipe directions (utilizing 2cupsOfTech's swipeThreshold logic):
public enum PanSwipeDirection: Int {
    case up, down, left, right, upSwipe, downSwipe, leftSwipe, rightSwipe
    public var isSwipe: Bool { return [.upSwipe, .downSwipe, .leftSwipe, .rightSwipe].contains(self) }
    public var isVertical: Bool { return [.up, .down, .upSwipe, .downSwipe].contains(self) }
    public var isHorizontal: Bool { return !isVertical }
}

public extension UIPanGestureRecognizer {
    var direction: PanSwipeDirection? {
        let SwipeThreshold: CGFloat = 1000
        let velocity = self.velocity(in: view)
        let isVertical = abs(velocity.y) > abs(velocity.x)
        switch (isVertical, velocity.x, velocity.y) {
        case (true, _, let y) where y < 0: return y < -SwipeThreshold ? .upSwipe : .up
        case (true, _, let y) where y > 0: return y > SwipeThreshold ? .downSwipe : .down
        case (false, let x, _) where x > 0: return x > SwipeThreshold ? .rightSwipe : .right
        case (false, let x, _) where x < 0: return x < -SwipeThreshold ? .leftSwipe : .left
        default: return nil
        }
    }
}
Usage:
@IBAction func handlePanOrSwipe(recognizer: UIPanGestureRecognizer) {
    if let direction = recognizer.direction {
        if direction == .leftSwipe {
            // swiped left
        } else if direction == .up {
            // panned up
        } else if direction.isVertical && direction.isSwipe {
            // swiped vertically
        }
    }
}

How can I capture which direction is being panned using UIPanGestureRecognizer?

Ok so I have been looking around at just about every option under the sun for capturing multi-touch gestures, and I have finally come full circle and am back at the UIPanGestureRecognizer.
The functionality I want is really quite simple. I have set up a two-finger pan gesture, and I want to be able to shuffle through some images depending on how many pixels I move. I have all that worked out okay, but I want to be able to capture if the pan gesture is REVERSED.
Is there a built-in way that I'm just not seeing to detect going back on a gesture? Would I have to store my original starting point, then track the end point, then see where they move after that, check if it's less than the initial ending point, and reverse accordingly? I can see that working, but I'm hoping there is a more elegant solution!!
Thanks
EDIT:
Here is the method that the recognizer is set to fire. It's a bit of a hack, but it works:
- (void)throttle:(UIGestureRecognizer *)recognize {
    throttleCounter++;
    if (throttleCounter == 6) {
        throttleCounter = 0;
        [self nextPic:nil];
    }
    UIPanGestureRecognizer *panGesture = (UIPanGestureRecognizer *)recognize;
    UIView *view = recognize.view;
    if (panGesture.state == UIGestureRecognizerStateBegan) {
        CGPoint translation = [panGesture translationInView:view.superview];
        NSLog(@"X: %f, Y:%f", translation.x, translation.y);
    } else if (panGesture.state == UIGestureRecognizerStateEnded) {
        CGPoint translation = [panGesture translationInView:view.superview];
        NSLog(@"X: %f, Y:%f", translation.x, translation.y);
    }
}
I've just gotten to the point where I'm going to start trying to track the differences between values... to try to tell which way they are panning.
On UIPanGestureRecognizer you can use -velocityInView: to get the velocity of the fingers at the time the gesture was recognised.
If you wanted to do one thing on a pan right and another on a pan left, for example, you could do something like:
- (void)handleGesture:(UIPanGestureRecognizer *)gestureRecognizer
{
    CGPoint velocity = [gestureRecognizer velocityInView:yourView];
    if (velocity.x > 0)
    {
        NSLog(@"gesture went right");
    }
    else
    {
        NSLog(@"gesture went left");
    }
}
If you literally want to detect a reversal, as in you want to compare a new velocity to an old one and see if it is just in the opposite direction — whichever direction that may be — you could do:
// assuming lastGestureVelocity is a class variable...
- (void)handleGesture:(UIPanGestureRecognizer *)gestureRecognizer
{
    CGPoint velocity = [gestureRecognizer velocityInView:yourView];
    if (velocity.x * lastGestureVelocity.x + velocity.y * lastGestureVelocity.y > 0)
    {
        NSLog(@"gesture went in the same direction");
    }
    else
    {
        NSLog(@"gesture went in the opposite direction");
    }
    lastGestureVelocity = velocity;
}
The multiply-and-add may look a little odd. It's actually a dot product, but rest assured it'll be a positive number if the gestures are in the same direction, going down to 0 if they're exactly at right angles, and becoming negative if they're in opposite directions. For example, velocities (500, 0) and (-400, 100) give 500 * (-400) + 0 * 100 = -200000, which is negative, so the second gesture reversed direction.
Here's an easy way to detect the drag direction before the gesture recognizer begins:
public override func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
    guard let panRecognizer = gestureRecognizer as? UIPanGestureRecognizer else {
        return super.gestureRecognizerShouldBegin(gestureRecognizer)
    }
    // Ensure it's a horizontal drag
    let velocity = panRecognizer.velocity(in: self)
    if abs(velocity.y) > abs(velocity.x) {
        return false
    }
    return true
}
If you want a vertical-only drag, you can swap the x and y, as in the sketch below.
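For instance, the vertical-only variant of the same override:
public override func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
    guard let panRecognizer = gestureRecognizer as? UIPanGestureRecognizer else {
        return super.gestureRecognizerShouldBegin(gestureRecognizer)
    }
    // Ensure it's a vertical drag: reject when horizontal movement dominates
    let velocity = panRecognizer.velocity(in: self)
    return abs(velocity.y) >= abs(velocity.x)
}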
This code from Serghei Catraniuc worked out better for me.
https://github.com/serp1412/LazyTransitions
func addPanGestureRecognizers() {
    let panGesture = UIPanGestureRecognizer(target: self, action: #selector(respondToSwipeGesture(gesture:)))
    self.view.addGestureRecognizer(panGesture)
}

func respondToSwipeGesture(gesture: UIGestureRecognizer) {
    if let swipeGesture = gesture as? UIPanGestureRecognizer {
        switch gesture.state {
        case .began:
            print("began")
        case .ended:
            print("ended")
            switch swipeGesture.direction {
            case .rightToLeft:
                print("rightToLeft")
            case .leftToRight:
                print("leftToRight")
            case .topToBottom:
                print("topToBottom")
            case .bottomToTop:
                print("bottomToTop")
            default:
                print("default")
            }
        default: break
        }
    }
}
// Extensions
import Foundation
import UIKit

public enum UIPanGestureRecognizerDirection {
    case undefined
    case bottomToTop
    case topToBottom
    case rightToLeft
    case leftToRight
}

public enum TransitionOrientation {
    case unknown
    case topToBottom
    case bottomToTop
    case leftToRight
    case rightToLeft
}

extension UIPanGestureRecognizer {
    public var direction: UIPanGestureRecognizerDirection {
        let velocity = self.velocity(in: view)
        let isVertical = abs(velocity.y) > abs(velocity.x)
        var direction: UIPanGestureRecognizerDirection
        if isVertical {
            direction = velocity.y > 0 ? .topToBottom : .bottomToTop
        } else {
            direction = velocity.x > 0 ? .leftToRight : .rightToLeft
        }
        return direction
    }

    public func isQuickSwipe(for orientation: TransitionOrientation) -> Bool {
        let velocity = self.velocity(in: view)
        return isQuickSwipeForVelocity(velocity, for: orientation)
    }

    private func isQuickSwipeForVelocity(_ velocity: CGPoint, for orientation: TransitionOrientation) -> Bool {
        switch orientation {
        case .unknown: return false
        case .topToBottom: return velocity.y > 1000
        case .bottomToTop: return velocity.y < -1000
        case .leftToRight: return velocity.x > 1000
        case .rightToLeft: return velocity.x < -1000
        }
    }
}

extension UIPanGestureRecognizer {
    typealias GestureHandlingTuple = (gesture: UIPanGestureRecognizer?, handle: (UIPanGestureRecognizer) -> ())
    fileprivate static var handlers = [GestureHandlingTuple]()

    public convenience init(gestureHandle: @escaping (UIPanGestureRecognizer) -> ()) {
        self.init()
        UIPanGestureRecognizer.cleanup()
        set(gestureHandle: gestureHandle)
    }

    public func set(gestureHandle: @escaping (UIPanGestureRecognizer) -> ()) {
        weak var weakSelf = self
        let tuple = (weakSelf, gestureHandle)
        UIPanGestureRecognizer.handlers.append(tuple)
        addTarget(self, action: #selector(handleGesture))
    }

    fileprivate static func cleanup() {
        handlers = handlers.filter { $0.0?.view != nil }
    }

    @objc private func handleGesture(_ gesture: UIPanGestureRecognizer) {
        let handleTuples = UIPanGestureRecognizer.handlers.filter { $0.gesture === self }
        handleTuples.forEach { $0.handle(gesture) }
    }
}

extension UIPanGestureRecognizerDirection {
    public var orientation: TransitionOrientation {
        switch self {
        case .rightToLeft: return .rightToLeft
        case .leftToRight: return .leftToRight
        case .bottomToTop: return .bottomToTop
        case .topToBottom: return .topToBottom
        default: return .unknown
        }
    }
}

extension UIPanGestureRecognizerDirection {
    public var isHorizontal: Bool {
        switch self {
        case .rightToLeft, .leftToRight:
            return true
        default:
            return false
        }
    }
}