Maintain frame of UIView after UIGestures - iPhone

I'm working on an iPad app that will use UIGestureRecognizers to allow the user to pan, scale and rotate objects (subclass of UIView) on the screen.
I understand that the [UIView frame] property isn't valid after a transform is done, so I'm trying to take the values of my UIGestureRecognizers and keep the "frame" myself.
Here's the code I'm using to attempt this (you may recognize a lot of code from Apple's sample project, SimpleGestureRecognizers):
// Shape.h (partial)
@interface Shape : UIView <UIGestureRecognizerDelegate> {
    CGFloat centerX;
    CGFloat centerY;
    CGFloat rotation;
    CGFloat xScale;
    CGFloat yScale;
}
// Shape.m (partial)
- (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer
{
    UIView *piece = [gestureRecognizer view];
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
        self.centerX += translation.x;
        self.centerY += translation.y;
        [piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y + translation.y)];
        for (HandleView *h in [self handles]) {
            [h setCenter:CGPointMake([h center].x + translation.x, [h center].y + translation.y)];
            [h setNeedsDisplay];
        }
        [gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
        NSLog(@"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
    }
}
// rotate the piece by the current rotation
// reset the gesture recognizer's rotation to 0 after applying so the next callback is a delta from the current rotation
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat rot = [self normalizeRotation:[gestureRecognizer rotation]];
        self.rotation += rot * 180.0 / M_PI;
        NSLog(@"Rotation: %.12f", [gestureRecognizer rotation]);
        [gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
        [gestureRecognizer setRotation:0];
        NSLog(@"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
    }
}
// scale the piece by the current scale
// reset the gesture recognizer's scale to 1 after applying so the next callback is a delta from the current scale
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        self.xScale *= [gestureRecognizer scale];
        self.yScale *= [gestureRecognizer scale];
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
        NSLog(@"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
    }
}
Because of some weirdness I noticed with the rotations, I implemented the following method to help keep an accurate rotation value:
- (CGFloat)normalizeRotation:(CGFloat)rot
{
    // Note: use fabs() here, not abs() -- abs() truncates its argument
    // to an integer, so small rotations would never be normalized.
    if (fabs(rot) > 0.05) {
        if (rot > 0) {
            rot -= M_PI;
        } else {
            rot += M_PI;
        }
        return [self normalizeRotation:rot];
    } else {
        return rot;
    }
}
Anyway, the shape on-screen pans, scales and rotates fine. All is as you would expect and the performance is good.
The problem is that, after a user moves, resizes and rotates a UIView, I want to let them tap it to get "handles" that allow non-uniform resizing, unlike the pinch gesture, which scales x and y by the same ratio. Now, even with the code above, the stored values are never quite accurate.
The "handles" I'm using are simply 10x10 dots that are supposed to go at each corner and halfway down each "side" of the UIView's frame/rectangle. When I first place a square and tap it to get the handles before doing anything else, the handles appear in the appropriate place. When I move, resize and rotate an object, then tap it, the handles are all shifted off of the shape some amount. It generally seems to be about 20 pixels.
Are the values in the UIGestureRecognizer objects just not accurate enough? That doesn't seem to be the case, because those values are used to change the object on-screen, and that is accurate.
I'm pretty sure there's no way for me to get a real representation of the UIView's frame after messing with it so much, but where's the flaw in my custom code that's giving me bad values? Am I losing precision when converting between degrees and radians?
On a related note: Apple obviously has internal code that keeps track of how to draw a view that's been translated/transformed. The box of solid color that I'm currently using is moved, zoomed and rotated correctly. So, is there any way to access the values that they use for displaying a translated/transformed UIView?
Thanks!

Related

UIPinchGestureRecognizer not working in Xcode/iPhone simulator

Working in Xcode/iPhone simulator, I have an app where UIGestureRecognizers were working for pinch, tap, and pan. Without changing any code, pinches are no longer being recognized (I am holding down the option key while moving the mouse to show the two grey circles moving together or apart). Pan and tap still both work. Has anyone else experienced this problem?
It appears that something is wrong with the recognizer itself or the simulator, because the code below for pinching never gets called, but it works for panning.
- (void)pinch:(UIPinchGestureRecognizer *)gesture
{
    NSLog(@"in pinch method");
    if ((gesture.state == UIGestureRecognizerStateChanged) ||
        (gesture.state == UIGestureRecognizerStateEnded)) {
        self.scale *= gesture.scale; // adjust our scale
        gesture.scale = 1; // reset the gesture's scale so future changes are incremental
    }
}
- (void)tap:(UITapGestureRecognizer *)gesture
{
    NSLog(@"in tap method");
    if (gesture.state == UIGestureRecognizerStateEnded)
        self.originInPixels = [gesture locationInView:self];
}
I tried creating a new app with a simple MKMapView, which loads the map properly, and pan and tap work - but pinch still doesn't work.
I'm working in iOS 5.1.
Any ideas?
Please try this. Add the following code to your project:
example.h
CGFloat lastScale;
CGPoint lastPoint;
example.m
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
    [self.view addGestureRecognizer:pinchGesture];
    [pinchGesture release];
}
- (void)handlePinch:(UIPinchGestureRecognizer *)gestureRecognizer {
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = [gestureRecognizer scale];
    }
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
        [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        NSLog(@"%f", currentScale);
        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 3.0;
        const CGFloat kMinScale = 1.0;
        CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]);
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], newScale, newScale);
        NSLog(@"%f", lastScale);
        [gestureRecognizer view].transform = transform;
        lastScale = [gestureRecognizer scale];
    }
}
The UIGestureRecognizer began working again on its own, and I'm still not sure why. It may have been as simple as I wasn't holding down the mouse button while also holding down the option key, but I thought I tried that before.
Thanks for the suggestions above.
I had the same problem on my MacBook Air trackpad with Xcode 4.5.2, thinking that [option]+[single-finger touch movement] would do a pinch (gray balls moving in/out).
However, you need to use [option]+[3-finger touch] for it to work.
This works in the simulator, but annoyingly is no help when running on a connected iOS device. Though my downloaded app works, running with the iPhone connected to Xcode doesn't, agh!

LongPress on UIImageView, create a clone "ghost" image, then drag it around

I have a sample project where I am trying to test out this behavior. I have two UIImageViews side by side. I want to be able to longPress on either the left or right UIImageView, create a semi-transparent clone image, and drag it to the other UIImageView to swap images.
For example, to swap the images, the user could do:
Tap and hold on the left UIImageView
A cloned, smaller "ghost" image will appear at the touch coordinate
The user will drag the cloned image to the right UIImageView
The user will release their finger from the screen to "drop" the cloned image
The left and right UIImageViews can then swap their images.
Here are some pics to illustrate:
Original state:
http://d.pr/i/PNVc
After long press on left-side UIImageView with smaller cloned image added as subview:
http://d.pr/i/jwxj
I can detect the longpress and make the cloned image, but I cannot pan that image around unless I release my finger and do another touch on the screen.
I'd like to be able to do it all in one motion, without the user needing to remove their finger from the screen.
I don't know if this is the right approach, but this is how I'm doing it for now. Thanks for any help!
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    [self addLongPressGestureToPiece:leftImageView];
    [self addLongPressGestureToPiece:rightImageView];
}
- (void)addLongPressGestureToPiece:(UIView *)piece
{
    NSLog(@"addLongPressGestureToPiece");
    UILongPressGestureRecognizer *longPressGesture = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:@selector(longPressPiece:)];
    [longPressGesture setDelegate:self];
    [piece addGestureRecognizer:longPressGesture];
    [longPressGesture release];
}
- (void)addPanGestureRecognizerToPiece:(UIView *)piece
{
    NSLog(@"addPanGestureRecognizerToPiece");
    UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panPiece:)];
    [panGesture setMaximumNumberOfTouches:1];
    [panGesture setDelegate:self];
    [piece addGestureRecognizer:panGesture];
    [panGesture release];
}
- (void)longPressPiece:(UILongPressGestureRecognizer *)gestureRecognizer
{
    UIImageView *piece = (UIImageView *)[gestureRecognizer view];
    CGPoint point = [gestureRecognizer locationInView:self.view];
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan)
    {
        NSLog(@"UIGestureRecognizerStateBegan");
        // create the semi-transparent image view with the selected pic
        UIImage *longPressImage = [piece image];
        UIImageView *draggableImageView = [[UIImageView alloc] initWithFrame:CGRectMake(point.x - longPressImage.size.width/6/2, point.y - longPressImage.size.height/6/2, longPressImage.size.width/6, longPressImage.size.height/6)];
        draggableImageView.image = longPressImage;
        draggableImageView.alpha = 0.5;
        draggableImageView.userInteractionEnabled = YES;
        [self.view addSubview:draggableImageView];
        [self addPanGestureRecognizerToPiece:draggableImageView];
        photoView.userInteractionEnabled = NO;
    }
    else if (gestureRecognizer.state == UIGestureRecognizerStateChanged)
    {
        NSLog(@"Changed");
    }
    else if (gestureRecognizer.state == UIGestureRecognizerStateEnded)
    {
        NSLog(@"Ended");
        photoView.userInteractionEnabled = YES;
    }
}
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
{
    NSLog(@"adjustAnchorPointForGestureRecognizer");
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];
        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}
- (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer
{
    NSLog(@"pan piece");
    UIView *piece = [gestureRecognizer view];
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
    // if velocity.y is positive, the user is moving down; if negative, moving up
    CGPoint velocity = [gestureRecognizer velocityInView:[piece superview]];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged)
    {
        [piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y + translation.y)];
        [gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
    }
    else if ([gestureRecognizer state] == UIGestureRecognizerStateEnded)
    {
        NSLog(@"piece y %f", piece.frame.origin.y);
    }
}
This is probably because the touch was already detected before the image (and thus the gesture recognizer) was created. I would suggest creating the ghost images beforehand, but have them hidden. That way, the gesture recognizer is actually fired as the ghost image is already there.
This may be expensive though, if you have a lot of images. If performance is bad, you can also consider just creating the image views, but only setting the images for those views when they are touched. That way you don't have to load the images into memory unnecessarily.
For me, I discovered I was getting ghost images because longGestureAction was being called multiple times for the began state. While I knew that the changed state would fire continuously, I found (and the docs confirmed) that each state can occur multiple times. So in the began state, when you're creating the image from the graphics context, it gets interrupted by multiple calls, leading to ghosts where the old one was. Put a guard in your code, as I did, to prevent this from occurring. Problem solved.
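The guard described above can be as simple as a flag set on the first began callback and cleared when the gesture ends; a minimal sketch (the names are illustrative, not from the original code):

```c
#include <stdbool.h>

/* Ignore repeated "began" callbacks until the gesture has ended. */
static bool ghostActive = false;

bool should_create_ghost(void) {
    if (ghostActive) return false;  /* a ghost already exists; skip */
    ghostActive = true;
    return true;
}

void gesture_ended(void) {
    ghostActive = false;
}
```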

How to limit pinch out to the default view size

I am working on a PDF-based application where I am trying to implement a UIPinchGestureRecognizer. I want to limit pinching out once the user reaches the default view size of 640x960.
In my current implementation the user can pinch in/out infinitely.
- (void)pinchZoom:(UIPinchGestureRecognizer *)gestureRecognizer
{
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        if (!zoomActive) {
            zoomActive = YES;
            UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panMove:)];
            [panGesture setMaximumNumberOfTouches:2];
            [panGesture setDelegate:self];
            [self addGestureRecognizer:panGesture];
            [panGesture release];
        }
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [delegate leavesView:self zoomingCurrentView:[gestureRecognizer scale]];
        [gestureRecognizer setScale:1];
    }
}
// This method handles the PAN / MOVE gesture
- (void)panMove:(UIPanGestureRecognizer *)gestureRecognizer
{
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGPoint translation = [gestureRecognizer translationInView:[[gestureRecognizer view] superview]];
        [[gestureRecognizer view] setCenter:CGPointMake([[gestureRecognizer view] center].x + translation.x, [[gestureRecognizer view] center].y + translation.y)];
        [gestureRecognizer setTranslation:CGPointZero inView:[[gestureRecognizer view] superview]];
    }
}
This is the default view size/scale I am talking about:
and this is what I don't want, i.e., what I want to limit when pinching out:
Any suggestions?
What about handling the lower limit yourself in your handler function? Something like this:
- (void)pinchZoom:(UIPinchGestureRecognizer *)gestureRecognizer {
    ....
    if ([gestureRecognizer scale] > MIN_SCALE)
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
    ...
Inside your gesture recognizer, you are going to have to test and understand the current scale / size of your PDF view, and NOT zoom smaller when it already takes up the full screen.
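The same clamping idea can be factored into a tiny helper (a hedged sketch; `clamped_increment` is a hypothetical name, not UIKit API): compute the accumulated scale the gesture would produce and trim the increment so the total stays in range before passing it to CGAffineTransformScale:

```c
/* Given the view's accumulated scale and the gesture's incremental
 * scale, return the increment clamped so the accumulated scale stays
 * within [minScale, maxScale]. */
double clamped_increment(double currentScale, double gestureScale,
                         double minScale, double maxScale) {
    double target = currentScale * gestureScale;
    if (target > maxScale) return maxScale / currentScale;
    if (target < minScale) return minScale / currentScale;
    return gestureScale;
}
```

The returned value is what you would apply in place of the raw [gestureRecognizer scale].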

UIPanGestureRecognizer gets stuck before moving after image is idle for a bit

So I'm making a simple iPhone app that lets you move / scale images that have been imported in a view. I'm using UIGestureRecognizers with @paulsolt's code below to accomplish this. It works great. The only problem I'm having is when I go to move an object after not having touched the screen or performed any other actions for a while, there is a slight hiccup before it starts moving smoothly.
Does anyone know how to fix this problem?
- (void)addGestureRecognizersToView:(UIView *)theView {
    theView.userInteractionEnabled = YES; // Enable user interaction
    UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePanGesture:)];
    [panGesture setMaximumNumberOfTouches:2];
    [panGesture setDelegate:self];
    [theView addGestureRecognizer:panGesture];
    [panGesture release];
    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchGesture:)];
    [theView addGestureRecognizer:pinchGesture];
    [pinchGesture release];
}
- (void)handlePinchGesture:(UIPinchGestureRecognizer *)gestureRecognizer {
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = [gestureRecognizer scale];
    }
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
        [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 1.5;
        const CGFloat kMinScale = 1.0;
        const CGFloat kSpeed = 0.75;
        CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]) * kSpeed;
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], newScale, newScale);
        [gestureRecognizer view].transform = transform;
        lastScale = [gestureRecognizer scale]; // Store the previous scale factor for the next pinch gesture call
    }
}
- (void)handlePanGesture:(UIPanGestureRecognizer *)gestureRecognizer {
    UIView *myView = [gestureRecognizer view];
    CGPoint translate = [gestureRecognizer translationInView:[myView superview]];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [myView setCenter:CGPointMake(myView.center.x + translate.x, myView.center.y + translate.y)];
        [gestureRecognizer setTranslation:CGPointZero inView:[myView superview]];
    }
}
Maybe it's because you call [panGesture setMaximumNumberOfTouches:2];. The recognizer has to make sure the current action is not a pinch before it starts moving your view.
This is happening when the image picker selects a photo taken by the iPhone camera. For other images it seems to work fine. Not sure how to fix this.

UIImageView Gestures (Zoom, Rotate) Question

I would like to perform two operations on a UIImageView, zoom and rotate, and I have two problems:
A. I zoom, for example, and when I then try to rotate, the UIImageView is reset to its initial size. I would like to know how to keep the zoomed UIImageView and rotate from the zoomed image.
B. I would like to combine the zoom operation with rotation, and I don't know how to implement this:
- (void)viewDidLoad
{
    foo = [[UIImageView alloc] initWithFrame:CGRectMake(100.0, 100.0, 600.0, 800.0)];
    foo.userInteractionEnabled = YES;
    foo.multipleTouchEnabled = YES;
    foo.image = [UIImage imageNamed:@"earth.jpg"];
    foo.contentMode = UIViewContentModeScaleAspectFit;
    foo.clipsToBounds = YES;
    [self.view addSubview:foo];

    //---pinch gesture---
    UIPinchGestureRecognizer *pinchGesture =
        [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchGesture:)];
    [foo addGestureRecognizer:pinchGesture];
    [pinchGesture release];

    //---rotate gesture---
    UIRotationGestureRecognizer *rotateGesture =
        [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotateGesture:)];
    [foo addGestureRecognizer:rotateGesture];
    [rotateGesture release];
}
//---handle pinch gesture---
- (IBAction)handlePinchGesture:(UIGestureRecognizer *)sender {
    NSLog(@"Pinch");
    CGFloat factor = [(UIPinchGestureRecognizer *)sender scale];
    if (factor > 1) {
        //---zooming in---
        sender.view.transform = CGAffineTransformMakeScale(lastScaleFactor + (factor - 1),
                                                           lastScaleFactor + (factor - 1));
    }
    else {
        //---zooming out---
        sender.view.transform = CGAffineTransformMakeScale(lastScaleFactor * factor, lastScaleFactor * factor);
    }
    if (sender.state == UIGestureRecognizerStateEnded) {
        if (factor > 1) {
            lastScaleFactor += (factor - 1);
        } else {
            lastScaleFactor *= factor;
        }
    }
}
//---handle rotate gesture---
- (IBAction)handleRotateGesture:(UIGestureRecognizer *)sender {
    CGFloat rotation = [(UIRotationGestureRecognizer *)sender rotation];
    CGAffineTransform transform = CGAffineTransformMakeRotation(rotation + netRotation);
    sender.view.transform = transform;
    if (sender.state == UIGestureRecognizerStateEnded) {
        netRotation += rotation;
    }
}
Thanks
Hope this can be helpful to you; this is how I usually implement gesture recognizers:
UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotatePiece:)];
[piece addGestureRecognizer:rotationGesture];
[rotationGesture release];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
[pinchGesture setDelegate:self];
[piece addGestureRecognizer:pinchGesture];
[pinchGesture release];
Rotate method: reset the gesture recognizer's rotation to 0 after applying it, so the next callback is a delta from the current rotation.
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
        [gestureRecognizer setRotation:0];
    }
}
Scale method: at the end, reset the gesture recognizer's scale to 1 after applying it, so the next callback is a delta from the current scale.
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
    }
}
Then ensure that the pinch, pan and rotate gesture recognizers on a particular view can all recognize simultaneously, while preventing gesture recognizers on other views from recognizing at the same time:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
    // if the gesture recognizers are on different views, don't allow simultaneous recognition
    if (gestureRecognizer.view != otherGestureRecognizer.view)
        return NO;
    // if either of the gesture recognizers is the long press, don't allow simultaneous recognition
    if ([gestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]] || [otherGestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]])
        return NO;
    return YES;
}
Scale and rotation transforms are applied relative to the layer's anchor point, so this method moves a gesture recognizer's view's anchor point between the user's fingers:
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
    if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
        UIView *piece = gestureRecognizer.view;
        CGPoint locationInView = [gestureRecognizer locationInView:piece];
        CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];
        piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
        piece.center = locationInSuperview;
    }
}
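The arithmetic in adjustAnchorPointForGestureRecognizer is worth spelling out: anchorPoint lives in the unit coordinate space of the view's bounds, so the touch location is divided by the bounds size, and re-centering the view on the touch's superview location keeps it from visually jumping. Just the conversion, as a plain-C sketch with illustrative names:

```c
/* anchorPoint uses the unit coordinate space of the view's bounds:
 * (0,0) is the top-left corner, (1,1) the bottom-right. Converting a
 * touch location (in points) is a plain division by the bounds size. */
void anchor_point_for_touch(double touchX, double touchY,
                            double boundsW, double boundsH,
                            double *anchorX, double *anchorY) {
    *anchorX = touchX / boundsW;
    *anchorY = touchY / boundsH;
}
```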
Just implement gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: in your delegate.
I have a UIPinchGestureRecognizer, a UIPanGestureRecognizer and a UIRotationGestureRecognizer set up and I want them all to work at the same time. I also have a UITapGestureRecognizer which I do not want to be recognized simultaneously. All I did was this:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
    if (![gestureRecognizer isKindOfClass:[UITapGestureRecognizer class]] && ![otherGestureRecognizer isKindOfClass:[UITapGestureRecognizer class]]) {
        return YES;
    }
    return NO;
}
I found something that may interest you on the Stanford University website:
http://www.stanford.edu/class/cs193p/cgi-bin/drupal/downloads-2010-winter
On that page, scroll down until you see number 14: "Title: Lecture #14 - MultiTouch" and download "14_MultiTouchDemo.zip".
In this example you can scale and rotate every image at the same time.
Hope I helped :)
When you use CGAffineTransformMakeScale, you reset the transform to the identity every time you use it, so you lose the previous transform information.
Try using CGAffineTransformScale(view.transform, scale, scale) for the pinch zooming. You will need to retain the original frame size to keep the zooming under control, though.
see: How can I use pinch zoom(UIPinchGestureRecognizer) to change width of a UITextView?
For rotation similarly:
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
    view.transform = CGAffineTransformRotate([view transform], [gestureRecognizer rotation]);
    [gestureRecognizer setRotation:0];
}
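To see concretely why the Make variants lose state, here is a plain-C sketch (a simplified 2x2 stand-in for illustration only; the real CGAffineTransform is a 3x3 affine matrix, but the uniform-scale behavior is the same):

```c
typedef struct { double a, b, c, d; } M2;

/* Like CGAffineTransformScale: multiplies the scale onto the existing
 * transform, so earlier scales and rotations are preserved. */
M2 concat_scale(M2 t, double s) {
    M2 r = { t.a * s, t.b * s, t.c * s, t.d * s };
    return r;
}

/* Like CGAffineTransformMakeScale: builds from the identity, discarding
 * whatever transform the view had before. */
M2 make_scale(double s) {
    M2 r = { s, 0.0, 0.0, s };
    return r;
}
```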
I know this is a pretty old thread, but I came across this UIImageView subclass, which works nicely for zoom, rotate and pan. It uses gesture recognizers on an image view. I am using it in one of my apps.
ZoomRotatePanImageView