I have a sample project where I am trying to test out this behavior. I have two UIImageViews side by side. I want to be able to longPress on either the left or right UIImageView, create a semi-transparent clone image, and drag it to the other UIImageView to swap images.
For example, to swap the images, the user could:
1. Tap and hold on the left UIImageView.
2. A cloned, smaller "ghost" image appears at the touch coordinate.
3. The user drags the cloned image to the right UIImageView.
4. The user releases their finger from the screen to "drop" the cloned image.
5. The left and right UIImageViews then swap their images.
Here are some pics to illustrate:
Original state:
http://d.pr/i/PNVc
After long press on left-side UIImageView with smaller cloned image added as subview:
http://d.pr/i/jwxj
I can detect the long press and create the cloned image, but I cannot pan that image around unless I release my finger and start a new touch on the screen.
I'd like to do it all in one motion, without the user needing to lift their finger from the screen.
I don't know if this is the right approach, but this is how I'm doing it for now. Thanks for any help!
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
[self addLongPressGestureToPiece:leftImageView];
[self addLongPressGestureToPiece:rightImageView];
}
- (void)addLongPressGestureToPiece:(UIView *)piece
{
NSLog(#"addLongPressGestureToPiece");
UILongPressGestureRecognizer *longPressGesture = [[UILongPressGestureRecognizer alloc] initWithTarget:self action:#selector(longPressPiece:)];
[longPressGesture setDelegate:self];
[piece addGestureRecognizer:longPressGesture];
[longPressGesture release];
}
- (void)addPanGestureRecognizerToPiece:(UIView *)piece
{
NSLog(#"addPanGestureRecognizerToPiece");
UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:#selector(panPiece:)];
[panGesture setMaximumNumberOfTouches:1];
[panGesture setDelegate:self];
[piece addGestureRecognizer:panGesture];
[panGesture release];
}
- (void)longPressPiece:(UILongPressGestureRecognizer *)gestureRecognizer
{
UIImageView *piece = (UIImageView*)[gestureRecognizer view];
CGPoint point = [gestureRecognizer locationInView:self.view];
if(gestureRecognizer.state == UIGestureRecognizerStateBegan)
{
NSLog(#"UIGestureRecognizerStateBegan");
// create the semi-transparent imageview with the selected pic
UIImage *longPressImage = [piece image];
UIImageView *draggableImageView = [[UIImageView alloc] initWithFrame:CGRectMake(point.x - longPressImage.size.width/6/2, point.y - longPressImage.size.height/6/2, longPressImage.size.width/6, longPressImage.size.height/6)];
draggableImageView.image = longPressImage;
draggableImageView.alpha = 0.5;
draggableImageView.userInteractionEnabled = YES;
[self.view addSubview:draggableImageView];
[self addPanGestureRecognizerToPiece:draggableImageView];
photoView.userInteractionEnabled = NO;
}
else if(gestureRecognizer.state == UIGestureRecognizerStateChanged)
{
NSLog(#"Changed");
}
else if(gestureRecognizer.state == UIGestureRecognizerStateEnded)
{
NSLog(#"Ended");
photoView.userInteractionEnabled = YES;
}
}
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
{
NSLog(#"adjustAnchorPointForGestureRecognizer");
if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
UIView *piece = gestureRecognizer.view;
CGPoint locationInView = [gestureRecognizer locationInView:piece];
CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];
piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
piece.center = locationInSuperview;
}
}
- (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer
{
NSLog(#"pan piece");
UIView *piece =[gestureRecognizer view];
[self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
// if velocity.y is positive, user is moving down, if negative, then moving up
CGPoint velocity = [gestureRecognizer velocityInView:[piece superview]];
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged)
{
[piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y + translation.y)];
[gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
}
else if([gestureRecognizer state] == UIGestureRecognizerStateEnded)
{
NSLog(#"piece y %f", piece.frame.origin.y);
}
}
This is probably because the touch was already detected before the ghost image (and thus its gesture recognizer) was created. I would suggest creating the ghost images beforehand, but keeping them hidden. That way, the pan recognizer is already in place when the touch begins, because the ghost image is already there.
This may be expensive, though, if you have a lot of images. If performance is bad, you can also consider creating just the image views, and only setting their images when they are touched. That way you don't load the images into memory unnecessarily. A sketch of this approach follows.
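A minimal sketch of that idea, reusing the question's addPanGestureRecognizerToPiece: and assuming hypothetical leftGhost/rightGhost ivars (pre-ARC, like the rest of the code):
- (void)viewDidLoad
{
    [super viewDidLoad];
    [self addLongPressGestureToPiece:leftImageView];
    [self addLongPressGestureToPiece:rightImageView];
    // Create the ghosts up front, hidden, with pan recognizers already
    // attached, so the pan can begin within the same touch sequence.
    leftGhost = [self newGhostForPiece:leftImageView];
    rightGhost = [self newGhostForPiece:rightImageView];
}
// Returns a retained, hidden ghost image view ("new" prefix = ownership transfer).
- (UIImageView *)newGhostForPiece:(UIImageView *)piece
{
    UIImageView *ghost = [[UIImageView alloc] initWithFrame:CGRectZero]; // sized on long press
    ghost.alpha = 0.5;
    ghost.hidden = YES;
    ghost.userInteractionEnabled = YES;
    [self addPanGestureRecognizerToPiece:ghost];
    [self.view addSubview:ghost];
    return ghost;
}
In longPressPiece:, instead of allocating a new view in the Began state, you would then set the ghost's image and frame and unhide it.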
For me, I discovered I was getting ghost images on account of longGestureAction being called multiple times for the .began state. While I knew that .changed would be continuous, I found (and the docs confirmed) that each state can occur multiple times. So in the .began state, where you're creating the image from a graphics context, the work gets interrupted by the repeated calls, leaving ghosts where the old one was. Put a guard in your code, as I did, to prevent this from occurring. Problem solved.
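A minimal guard along those lines, assuming the ghost view is stored in an ivar (draggableImageView here is a hypothetical name):
if (gestureRecognizer.state == UIGestureRecognizerStateBegan)
{
    if (draggableImageView != nil) {
        // Began was already handled for this touch; don't create a second ghost.
        return;
    }
    // ... create draggableImageView as before ...
}
Remember to remove the ghost and set the ivar back to nil in the Ended (and Cancelled) states.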
Related
Wondering if the following is ok to process UIGestureRecognizer touches:
if (pinchGestureRecognizer.state == UIGestureRecognizerStateBegan || pinchGestureRecognizer.state == UIGestureRecognizerStateChanged)
{
//process touches
}
else if (pinchGestureRecognizer.state == UIGestureRecognizerStateEnded)
{
//do whatever after gesture recognizer finishes
}
Basically, I'm wondering: once UIGestureRecognizerStateEnded occurs, should the UIGestureRecognizer still process touches, or have all the touches finished at that point? I'm getting weird values for translationInView, so I just wanted to ask here.
You asked:
Basically, I'm wondering: once UIGestureRecognizerStateEnded occurs, should the UIGestureRecognizer still process touches, or have all the touches finished at that point?
When you get UIGestureRecognizerStateEnded, yes, the gesture is done. But obviously, unless you remove the gesture recognizer from the view at that point, if the user starts a new gesture, the gesture recognition process starts over again starting at UIGestureRecognizerStateBegan.
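For example, if you actually wanted one-shot behavior, you could detach the recognizer yourself when it ends (a sketch, not something your code requires):
- (void)handlePinch:(UIPinchGestureRecognizer *)gesture
{
    if (gesture.state == UIGestureRecognizerStateEnded) {
        // Stop recognizing after the first completed gesture.
        [gesture.view removeGestureRecognizer:gesture];
    }
}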
Furthermore, you said:
I'm getting weird values for translationInView, so I just wanted to ask here.
Your code sample suggests that you're dealing with a pinch gesture, which doesn't provide translationInView, so I'm not sure what "weird values" you're getting. You can, though, have two simultaneous gestures by setting your gestures' delegate and implementing shouldRecognizeSimultaneouslyWithGestureRecognizer:
- (void)viewDidLoad
{
[super viewDidLoad];
UIPinchGestureRecognizer *pinch = [[UIPinchGestureRecognizer alloc] initWithTarget:self
action:@selector(handlePinch:)];
pinch.delegate = self;
[self.view addGestureRecognizer:pinch];
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self
action:@selector(handlePan:)];
pan.delegate = self;
[self.view addGestureRecognizer:pan];
}
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
if ([gestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]] && [otherGestureRecognizer isKindOfClass:[UIPinchGestureRecognizer class]])
return YES;
if ([gestureRecognizer isKindOfClass:[UIPinchGestureRecognizer class]] && [otherGestureRecognizer isKindOfClass:[UIPanGestureRecognizer class]])
return YES;
return NO;
}
- (void)handlePinch:(UIPinchGestureRecognizer *)gesture
{
CGFloat scale = [gesture scale];
NSLog(#"%s: %#: scale=%.2f", __FUNCTION__, [self stringFromGestureState:gesture.state], scale);
}
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
CGPoint translation = [gesture translationInView:gesture.view];
NSLog(#"%s: %#: translation=%#", __FUNCTION__, [self stringFromGestureState:gesture.state], NSStringFromCGPoint(translation));
}
The above code works: handlePan reports the pan, handlePinch reports the pinch, and the translationInView logged by handlePan looks unexceptional. Perhaps you can show us how you're using a pinch gesture and getting translationInView, and tell us what's odd about the values you're getting.
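Note that stringFromGestureState: in the logging above is not a UIKit method; a minimal version might look like this:
- (NSString *)stringFromGestureState:(UIGestureRecognizerState)state
{
    switch (state) {
        case UIGestureRecognizerStatePossible:  return @"possible";
        case UIGestureRecognizerStateBegan:     return @"began";
        case UIGestureRecognizerStateChanged:   return @"changed";
        case UIGestureRecognizerStateEnded:     return @"ended"; // same value as Recognized
        case UIGestureRecognizerStateCancelled: return @"cancelled";
        case UIGestureRecognizerStateFailed:    return @"failed";
    }
    return @"unknown";
}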
Working in Xcode/iPhone simulator, I have an app where UIGestureRecognizers were working for pinch, tap, and pan. Without changing any code, pinches are no longer being recognized (I am holding down the option key while moving the mouse to show the two grey circles moving together or apart). Pan and tap still both work. Has anyone else experienced this problem?
It appears that something is wrong with the recognizer itself or the simulator, because the code below for pinching never gets called, though it works for panning.
- (void)pinch:(UIPinchGestureRecognizer *)gesture
{
NSLog(#"in pinch method");
if ((gesture.state == UIGestureRecognizerStateChanged) ||
(gesture.state == UIGestureRecognizerStateEnded)) {
self.scale *= gesture.scale; // adjust our scale
gesture.scale = 1; // reset gestures scale so future changes are incremental
}
}
- (void)tap:(UITapGestureRecognizer *)gesture
{
NSLog(#"in tap method");
if (gesture.state == UIGestureRecognizerStateEnded)
self.originInPixels = [gesture locationInView:self];
}
I tried creating a new app with a simple MKMapView, which loads the map properly, and pan and tap work - but pinch still doesn't work.
I'm working in iOS 5.1.
Any ideas?
Please try this. Include the following code in your project:
example.h
CGFloat lastScale;
CGPoint lastPoint;
example.m
-(void)viewDidLoad
{
[super viewDidLoad];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
[self.view addGestureRecognizer:pinchGesture];
[pinchGesture release];
}
-(void) handlePinch:(UIPinchGestureRecognizer *)gestureRecognizer {
if([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
// Reset the last scale, necessary if there are multiple objects with different scales
lastScale = [gestureRecognizer scale];
}
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
[gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
NSLog(@"%f",currentScale);
// Constants to adjust the max/min values of zoom
const CGFloat kMaxScale = 3.0;
const CGFloat kMinScale = 1.0;
CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]);
newScale = MIN(newScale, kMaxScale / currentScale);
newScale = MAX(newScale, kMinScale / currentScale);
CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], newScale, newScale);
NSLog(#"%f",lastScale);
[gestureRecognizer view].transform = transform;
lastScale = [gestureRecognizer scale];
}
}
The UIGestureRecognizer began working again on its own, and I'm still not sure why. It may have been as simple as I wasn't holding down the mouse button while also holding down the option key, but I thought I tried that before.
Thanks for the suggestions above.
I had the same problem on my MacBook Air trackpad with Xcode 4.5.2, thinking that [option]+[single finger touch movement] would do a pinch (gray balls moving in/out).
However, you need to apply [option]+[3-finger touch] for it to work.
This works in the simulator, but annoyingly is no help when running on a connected iOS device. Though my downloaded app works, running with the iPhone connected to Xcode doesn't, agh!
I have a UIImageView which can be rotated, panned and scaled with gesture recognizers. As a result it is cropped by its enclosing view.
Everything is working fine, but I don't know how to save the visible part of the picture at its full resolution. It's not a screen grab.
I know I can get the UIImage straight from the visible content of the UIImageView, but it is limited to the resolution of the screen.
I assume that I have to apply the same transformations to the UIImage and crop it. Is there an easy way to do that?
Update:
For example, I have a UIImageView with a high-resolution image, let's say an 8MP iPhone 4S camera photo, which is transformed with gestures, so it becomes scaled, rotated and moved around in its enclosing view. Obviously there is some cropping going on, so only part of the image is displayed. There is a huge difference between the displayed screen resolution and the underlying image resolution; I need an image at the image's resolution. The UIImageView is in UIViewContentModeScaleAspectFit, but a solution with UIViewContentModeScaleAspectFill is also fine.
This is my code:
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
[gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
[gestureRecognizer setRotation:0];
}
}
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
[gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
[gestureRecognizer setScale:1];
}
}
-(void)panGestureMoveAround:(UIPanGestureRecognizer *)gestureRecognizer;
{
UIView *piece = [gestureRecognizer view];
//We pass the gesture to a method that helps align our touches so that the pan and pinch seem to originate between the fingers instead of at some other point or the center point of the UIView
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
[piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y+translation.y)];
[gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
} else if([gestureRecognizer state] == UIGestureRecognizerStateEnded) {
//Put the code that you may want to execute when the UIView becomes larger than a certain value, or just reset it back to its original transform scale
}
}
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
// if the gesture recognizers are on different views, don't allow simultaneous recognition
if (gestureRecognizer.view != otherGestureRecognizer.view)
return NO;
// if either of the gesture recognizers is the long press, don't allow simultaneous recognition
if ([gestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]] || [otherGestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]])
return NO;
return YES;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view from its nib.
appDelegate = (AppDelegate *)[[UIApplication sharedApplication] delegate];
faceImageView.image = appDelegate.faceImage;
UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotatePiece:)];
[faceImageView addGestureRecognizer:rotationGesture];
[rotationGesture setDelegate:self];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
[pinchGesture setDelegate:self];
[faceImageView addGestureRecognizer:pinchGesture];
UIPanGestureRecognizer *panRecognizer = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(panGestureMoveAround:)];
[panRecognizer setMinimumNumberOfTouches:1];
[panRecognizer setMaximumNumberOfTouches:2];
[panRecognizer setDelegate:self];
[faceImageView addGestureRecognizer:panRecognizer];
[[UIApplication sharedApplication] setStatusBarHidden:YES withAnimation:UIStatusBarAnimationNone];
[appDelegate fadeObject:moveIcons StartAlpha:0 FinishAlpha:1 Duration:2];
currentTimer = [NSTimer timerWithTimeInterval:4.0f target:self selector:@selector(fadeoutMoveicons) userInfo:nil repeats:NO];
[[NSRunLoop mainRunLoop] addTimer: currentTimer forMode: NSDefaultRunLoopMode];
}
The following code creates a snapshot of the enclosing view (superview of faceImageView with clipsToBounds set to YES) using a calculated scale factor.
It assumes that the content mode of faceImageView is UIViewContentModeScaleAspectFit and that the frame of faceImageView is set to the enclosingView's bounds.
- (UIImage *)captureView {
float imageScale = sqrtf(powf(faceImageView.transform.a, 2.f) + powf(faceImageView.transform.c, 2.f));
CGFloat widthScale = faceImageView.bounds.size.width / faceImageView.image.size.width;
CGFloat heightScale = faceImageView.bounds.size.height / faceImageView.image.size.height;
float contentScale = MIN(widthScale, heightScale);
float effectiveScale = imageScale * contentScale;
CGSize captureSize = CGSizeMake(enclosingView.bounds.size.width / effectiveScale, enclosingView.bounds.size.height / effectiveScale);
NSLog(#"effectiveScale = %0.2f, captureSize = %#", effectiveScale, NSStringFromCGSize(captureSize));
UIGraphicsBeginImageContextWithOptions(captureSize, YES, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextScaleCTM(context, 1/effectiveScale, 1/effectiveScale);
[enclosingView.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
Depending on the current transform, the resulting image will have a different size. For example, when you zoom in, the size gets smaller. You can also set effectiveScale to a constant value in order to get an image with a constant size.
Your gesture recognizer code does not limit the scale factor, i.e. you can zoom out/in without bound. That can be very dangerous! My capture method can output really large images when you've zoomed out very far.
If you have zoomed out, the background of the captured image will be black. If you want it to be transparent, you must set the opaque parameter of UIGraphicsBeginImageContextWithOptions to NO.
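If you want to bound the zoom, one option is to clamp the scale inside the question's scalePiece: before applying it. A sketch, where kMinScale and kMaxScale are hypothetical values of your choosing:
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {
    static const CGFloat kMinScale = 0.5f;
    static const CGFloat kMaxScale = 4.0f;
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        UIView *view = [gestureRecognizer view];
        // Current overall scale encoded in the transform (same formula as in captureView).
        CGFloat current = sqrtf(powf(view.transform.a, 2.f) + powf(view.transform.c, 2.f));
        CGFloat clamped = MAX(kMinScale, MIN(kMaxScale, current * [gestureRecognizer scale]));
        CGFloat factor = clamped / current;
        view.transform = CGAffineTransformScale(view.transform, factor, factor);
        [gestureRecognizer setScale:1];
    }
}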
Why capture the view if you have the original image? Just apply the transformations to it. Something like this may be a start:
UIImage *image = [UIImage imageNamed:#"<# original #>"];
CIImage *cimage = [CIImage imageWithCGImage:image.CGImage];
// build the transform you want
CGAffineTransform t = CGAffineTransformIdentity;
CGFloat angle = [(NSNumber *)[self.faceImageView valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
CGFloat scale = [(NSNumber *)[self.faceImageView valueForKeyPath:@"layer.transform.scale"] floatValue];
t = CGAffineTransformConcat(t, CGAffineTransformMakeScale(scale, scale));
t = CGAffineTransformConcat(t, CGAffineTransformMakeRotation(-angle));
// create a new CIImage using the transform, crop, filters, etc.
CIImage *timage = [cimage imageByApplyingTransform:t];
// draw the result
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef imageRef = [context createCGImage:timage fromRect:[timage extent]];
UIImage *result = [UIImage imageWithCGImage:imageRef];
// save to disk
NSData *png = UIImagePNGRepresentation(result);
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/result.png"];
if (png && [png writeToFile:path atomically:NO]) {
NSLog(#"\n%#", path);
}
CGImageRelease(imageRef);
You can easily crop the output if that's what you want (see -[CIImage imageByCroppingToRect]), or take the translation into account, apply a Core Image filter, etc., depending on your exact needs.
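For example, cropping timage from the snippet above before rendering is a one-liner (the rect here is a placeholder; you would compute it from your enclosing view's geometry):
CGRect visibleRect = CGRectMake(0, 0, 1024, 768); // placeholder values
CIImage *croppedImage = [timage imageByCroppingToRect:visibleRect];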
I think the code below captures your current view...
- (UIImage *)captureView {
CGRect rect = [self.view bounds];
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
[self.yourImage.layer renderInContext:context];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return img;
}
I think you want to save what is displayed on the screen and use it, so I posted this code...
Hope this helps you...
:)
I'm working on an iPad app that will use UIGestureRecognizers to allow the user to pan, scale and rotate objects (subclass of UIView) on the screen.
I understand that the [UIView frame] property isn't valid after a transform is done, so I'm trying to take the values of my UIGestureRecognizers and keep the "frame" myself.
Here's the code I'm using to attempt this (you may recognize a lot of code from Apple's sample project, SimpleGestureRecognizers):
// Shape.h (partial)
@interface Shape : UIView <UIGestureRecognizerDelegate> {
CGFloat centerX;
CGFloat centerY;
CGFloat rotation;
CGFloat xScale;
CGFloat yScale;
}
// Shape.m (partial)
- (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer
{
UIView *piece = [gestureRecognizer view];
[self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
self.centerX += translation.x;
self.centerY += translation.y;
[piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y + translation.y)];
for ( HandleView *h in [self handles] ) {
[h setCenter:CGPointMake([h center].x + translation.x, [h center].y + translation.y)];
[h setNeedsDisplay];
}
[gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
NSLog(#"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
}
}
// rotate the piece by the current rotation
// reset the gesture recognizer's rotation to 0 after applying so the next callback is a delta from the current rotation
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer
{
[self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
CGFloat rot = [self normalizeRotation:[gestureRecognizer rotation]];
self.rotation += rot * 180.0 / M_PI;
NSLog(#"Rotation: %.12f", [gestureRecognizer rotation]);
[gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
[gestureRecognizer setRotation:0];
NSLog(#"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
}
}
// scale the piece by the current scale
// reset the gesture recognizer's rotation to 0 after applying so the next callback is a delta from the current scale
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer
{
[self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
self.xScale *= [gestureRecognizer scale];
self.yScale *= [gestureRecognizer scale];
[gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
[gestureRecognizer setScale:1];
NSLog(#"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
}
}
Because of some weirdness I noticed with the rotations, I implemented the following method to help keep an accurate rotation value:
- (CGFloat) normalizeRotation:(CGFloat)rot
{
if (abs(rot) > 0.05) {
if (rot > 0) {
rot -= M_PI;
} else {
rot += M_PI;
}
return [self normalizeRotation:rot];
} else {
return rot;
}
}
Anyway, the shape on-screen pans, scales and rotates fine. All is as you would expect and the performance is good.
The problem is that, after a user moves, resizes and rotates a UIView, I want to let them tap it and give them "handles" that allow for resizing other than the "square" resizing that pinching gives (i.e., when you use the pinch gesture, you upsize or downsize in the same ratio for both x and y). Now, even with the code above, the values that are stored aren't ever quite accurate.
The "handles" I'm using are simply 10x10 dots that are supposed to go at each corner and halfway down each "side" of the UIView's frame/rectangle. When I first place a square and tap it to get the handles before doing anything else, the handles appear in the appropriate place. When I move, resize and rotate an object, then tap it, the handles are all shifted off of the shape some amount. It generally seems to be about 20 pixels.
Are the values in the UIGestureRecognizer objects just not accurate enough? That doesn't seem to be the case, because those values are used to change the object on-screen, and that is accurate.
I'm pretty sure there's no way for me to get a real representation of the UIView's frame after messing with it so much, but where's the flaw in my custom code that's giving me bad values? Am I losing precision when converting between degrees and radians?
On a related note: Apple obviously has internal code that keeps track of how to draw a view that's been translated/transformed. The box of solid color that I'm currently using is moved, zoomed and rotated correctly. So, is there any way to access the values that they use for displaying a translated/transformed UIView?
Thanks!
I would like to perform two operations on a UIImageView, zoom and rotate, and I have two problems:
A. I zoom the image, for example, and when I then try to rotate, the UIImageView is reset to its initial size; I would like to know how to keep the UIImageView zoomed and rotate from the zoomed image.
B. I would like to combine the zoom operation with rotation and I don't know how to implement this:
- (void)viewDidLoad
{
foo = [[UIImageView alloc]initWithFrame:CGRectMake(100.0, 100.0, 600, 800.0)];
foo.userInteractionEnabled = YES;
foo.multipleTouchEnabled = YES;
foo.image = [UIImage imageNamed:@"earth.jpg"];
foo.contentMode = UIViewContentModeScaleAspectFit;
foo.clipsToBounds = YES;
[self.view addSubview:foo];
//---pinch gesture---
UIPinchGestureRecognizer *pinchGesture =
[[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinchGesture:)];
[foo addGestureRecognizer:pinchGesture];
[pinchGesture release];
//---rotate gesture---
UIRotationGestureRecognizer *rotateGesture =
[[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(handleRotateGesture:)];
[foo addGestureRecognizer:rotateGesture];
[rotateGesture release];
}
//---handle pinch gesture---
-(IBAction) handlePinchGesture:(UIGestureRecognizer *) sender {
NSLog(#"Pinch");
CGFloat factor = [(UIPinchGestureRecognizer *) sender scale];
if (factor > 1) {
//---zooming in---
sender.view.transform = CGAffineTransformMakeScale(
lastScaleFactor + (factor-1),
lastScaleFactor + (factor-1));
}
else {
//---zooming out---
sender.view.transform = CGAffineTransformMakeScale(lastScaleFactor * factor, lastScaleFactor * factor);
}
if (sender.state == UIGestureRecognizerStateEnded) {
if (factor > 1) {
lastScaleFactor += (factor-1);
} else {
lastScaleFactor *= factor;
}
}
}
//---handle rotate gesture---
-(IBAction) handleRotateGesture:(UIGestureRecognizer *) sender {
CGFloat rotation = [(UIRotationGestureRecognizer *) sender rotation];
CGAffineTransform transform = CGAffineTransformMakeRotation(rotation + netRotation);
sender.view.transform = transform;
if (sender.state == UIGestureRecognizerStateEnded) {
netRotation += rotation;
}
}
Thanks
Hope this can be helpful to you; this is how I usually implement gesture recognizers:
UIRotationGestureRecognizer *rotationGesture = [[UIRotationGestureRecognizer alloc] initWithTarget:self action:@selector(rotatePiece:)];
[piece addGestureRecognizer:rotationGesture];
[rotationGesture release];
UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(scalePiece:)];
[pinchGesture setDelegate:self];
[piece addGestureRecognizer:pinchGesture];
[pinchGesture release];
Rotate method: reset the gesture recognizer's rotation to 0 after applying it, so the next callback is a delta from the current rotation:
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer {
[self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
[gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
[gestureRecognizer setRotation:0];
}
}
Scale method: at the end, reset the gesture recognizer's scale to 1 after applying it, so the next callback is a delta from the current scale:
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer {
[self adjustAnchorPointForGestureRecognizer:gestureRecognizer];
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
[gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
[gestureRecognizer setScale:1];
}
}
Then ensure that the pinch, pan and rotate gesture recognizers on a particular view can all recognize simultaneously, while preventing other combinations of gesture recognizers from recognizing simultaneously:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer {
// if the gesture recognizers are on different views, don't allow simultaneous recognition
if (gestureRecognizer.view != otherGestureRecognizer.view)
return NO;
// if either of the gesture recognizers is the long press, don't allow simultaneous recognition
if ([gestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]] || [otherGestureRecognizer isKindOfClass:[UILongPressGestureRecognizer class]])
return NO;
return YES;
}
Scale and rotation transforms are applied relative to the layer's anchor point. This method moves a gesture recognizer's view's anchor point between the user's fingers:
- (void)adjustAnchorPointForGestureRecognizer:(UIGestureRecognizer *)gestureRecognizer {
if (gestureRecognizer.state == UIGestureRecognizerStateBegan) {
UIView *piece = gestureRecognizer.view;
CGPoint locationInView = [gestureRecognizer locationInView:piece];
CGPoint locationInSuperview = [gestureRecognizer locationInView:piece.superview];
piece.layer.anchorPoint = CGPointMake(locationInView.x / piece.bounds.size.width, locationInView.y / piece.bounds.size.height);
piece.center = locationInSuperview;
}
}
Just implement gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: in your delegate.
I have a UIPinchGestureRecognizer, a UIPanGestureRecognizer and a UIRotationGestureRecognizer set up and I want them all to work at the same time. I also have a UITapGestureRecognizer which I do not want to be recognized simultaneously. All I did was this:
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
if (![gestureRecognizer isKindOfClass:[UITapGestureRecognizer class]] && ![otherGestureRecognizer isKindOfClass:[UITapGestureRecognizer class]]) {
return YES;
}
return NO;
}
I found something that may interest you on the Stanford University website:
http://www.stanford.edu/class/cs193p/cgi-bin/drupal/downloads-2010-winter
On this site you will need to scroll down until you see number 14: "Title: Lecture #14 - MultiTouch".
Download the "14_MultiTouchDemo.zip".
In this example you can scale and rotate every image at the same time.
Hope I helped. :)
When you use CGAffineTransformMakeScale, you reset the view's transform to a scaled identity every time you use it, and you lose the previous transform information.
Try using CGAffineTransformScale(view.transform, scale, scale) for the pinch zooming. You will need to retain the original frame size to keep the zooming under control, though.
see: How can I use pinch zoom(UIPinchGestureRecognizer) to change width of a UITextView?
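For pinch zooming, that looks something like this (a sketch of the incremental approach; sender is the gesture recognizer, as in the question's handler):
-(IBAction) handlePinchGesture:(UIPinchGestureRecognizer *) sender {
    if (sender.state == UIGestureRecognizerStateBegan || sender.state == UIGestureRecognizerStateChanged) {
        // Compose with the existing transform instead of replacing it.
        sender.view.transform = CGAffineTransformScale(sender.view.transform, sender.scale, sender.scale);
        sender.scale = 1; // reset so the next callback is a delta
    }
}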
For rotation similarly:
if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
view.transform = CGAffineTransformRotate([view transform], [gestureRecognizer rotation]);
[gestureRecognizer setRotation:0];
}
I know this is a pretty old thread, but I came across this image view subclass, which works nicely for zoom, rotate and pan. It uses gesture recognizers on an image view. I am using it in one of my apps.
ZoomRotatePanImageView