I have a UISlider for zooming the imageView (instead of a UIPinchGestureRecognizer), and I'm using a UIRotationGestureRecognizer. Both of them work fine independently. Zooming without ever doing the rotation gesture works fine, but once I perform a rotation and then zoom in or out, the image view behaves strangely, as if it loses the rotation (that's my guess!).
How do I fix this?
I'm not good with this maths stuff; I've been struggling with it for a few days and have searched the forums but couldn't find a solution. Kindly help me :)
For zooming (here I'm not touching the transform, as I'm unsure how to):
-(void)scale:(UISlider *)sender
{
    float sliderValue = [sender value];
    CGRect newFrame = placeRing.frame;
    newFrame.size = CGSizeMake(sliderValue, sliderValue);
    placeRing.frame = newFrame;
}
For rotation:
- (void)twoFingersRotate:(UIRotationGestureRecognizer *)recognizer
{
    isRotated = TRUE;
    if ([recognizer state] == UIGestureRecognizerStateBegan || [recognizer state] == UIGestureRecognizerStateChanged) {
        rotation = recognizer.rotation;
        rotatedTransform = CGAffineTransformRotate([placeRing transform], [recognizer rotation]);
        placeRing.transform = rotatedTransform;
        [recognizer setRotation:0];
    }
}
The rotation applies a transform, which invalidates the frame property you are using to resize the view. Use the bounds and center properties to zoom instead.
See the warning box in: http://developer.apple.com/library/ios/#documentation/UIKit/Reference/UIView_Class/UIView/UIView.html#//apple_ref/doc/uid/TP40006816
-(void)scale:(UISlider *)sender
{
    float sliderValue = [sender value];
    placeRing.bounds = CGRectMake(0, 0, sliderValue, sliderValue);
}
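Alternatively, if you would rather keep everything in the transform, here is a rough sketch (not part of the answer above; currentRotation, currentZoom and baseSize are assumed ivars you maintain yourself) that rebuilds the transform from tracked rotation and zoom values, so the slider and the gesture never overwrite each other:
// Assumed ivars (not in the original code): CGFloat currentRotation (radians),
// CGFloat currentZoom (scale factor, 1.0 = original size), CGFloat baseSize
// (the ring's untransformed side length in points).
- (void)scale:(UISlider *)sender
{
    currentZoom = sender.value / baseSize;   // the slider reports a size in points; convert it to a factor
    [self applyRingTransform];
}

- (void)twoFingersRotate:(UIRotationGestureRecognizer *)recognizer
{
    if (recognizer.state == UIGestureRecognizerStateBegan ||
        recognizer.state == UIGestureRecognizerStateChanged) {
        currentRotation += recognizer.rotation;  // accumulate the delta
        recognizer.rotation = 0;
        [self applyRingTransform];
    }
}

- (void)applyRingTransform
{
    // One transform built from both tracked values: rotation first, then uniform scale.
    placeRing.transform = CGAffineTransformScale(
        CGAffineTransformMakeRotation(currentRotation), currentZoom, currentZoom);
}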
I have this image, which is a UIButton:
I want to add a pan gesture to slide it to the right, and I want the image to stretch to the right as well, like this:
It's very similar to the iPhone's "slide to unlock" control, but without the image that moves from left to right.
Is it possible to do that?
EDIT:
I've added a UIPanGestureRecognizer to my button like this:
UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(buttonDidDragged:)];
[self.buttonAbout addGestureRecognizer:panGesture];
self.buttonFrame = self.buttonAbout.frame;
I also saved the button's frame into a CGRect.
Now this is the buttonDidDragged: method. I still have a problem, though: the button slides to the right, but the image stays the same; it will not stretch.
- (IBAction)buttonDidDragged:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:self.view];
    // gesture.view.center = CGPointMake(gesture.view.center.x + translation.x, gesture.view.center.y);
    NSLog(@"%f", gesture.view.frame.origin.x + gesture.view.frame.size.width);
    gesture.view.frame = CGRectMake(gesture.view.frame.origin.x,
                                    gesture.view.frame.origin.y,
                                    gesture.view.frame.size.width + translation.x,
                                    gesture.view.frame.size.height);

    UIButton *button = (UIButton *)gesture.view;
    UIImage *buttonImage = [UIImage imageNamed:@"about_us_button.png"];
    buttonImage = [buttonImage resizableImageWithCapInsets:UIEdgeInsetsMake(0, 80, 0, 0)];
    [button setImage:buttonImage forState:UIControlStateNormal];
    [button setImage:buttonImage forState:UIControlStateHighlighted];

    if (gesture.state == UIGestureRecognizerStateEnded)
    {
        [UIView animateWithDuration:0.5
                              delay:0
                            options:UIViewAnimationOptionCurveEaseOut
                         animations:^{
                             CGPoint finalPoint = CGPointMake(self.buttonFrame.origin.x + (self.buttonFrame.size.width / 2),
                                                              self.buttonFrame.origin.y + (self.buttonFrame.size.height / 2));
                             // gesture.view.center = finalPoint;
                             button.frame = self.buttonFrame; // Restore the frame saved earlier so the button springs back.
                         }
                         completion:nil];
    }
    [gesture setTranslation:CGPointMake(0, 0) inView:self.view];
}
Nice idea! Yes, this is possible. You can have three images, left, right and center, the center one being just one pixel wide.
You can adjust the image in touchesMoved. There you can calculate the necessary stretch width based on the location of the touch.
The stretching is pretty easy with methods like
- (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight
- (UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets resizingMode:(UIImageResizingMode)resizingMode
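For example (a rough sketch; the asset name and cap values are only placeholders), one approach that plays well with simply changing the button's width in the pan handler is to set a resizable background image, since a button's background image is stretched to fill its bounds while its foreground image is not:
UIImage *raw = [UIImage imageNamed:@"about_us_button.png"];                     // placeholder asset name
UIImage *stretchy = [raw resizableImageWithCapInsets:UIEdgeInsetsMake(0, 80, 0, 10)
                                        resizingMode:UIImageResizingModeStretch];
[self.buttonAbout setBackgroundImage:stretchy forState:UIControlStateNormal];   // background images stretch with the frame
[self.buttonAbout setBackgroundImage:stretchy forState:UIControlStateHighlighted];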
Do it like this.
In the .h:
CGSize dragViewSize;
Then:
- (IBAction)buttonDidDragged:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:self.view];
    if (gesture.state == UIGestureRecognizerStateBegan) {
        dragViewSize = gesture.view.frame.size;
    } else if (gesture.state == UIGestureRecognizerStateChanged) {
        CGRect _rect = gesture.view.frame;
        _rect.size.width = dragViewSize.width + translation.x;
        gesture.view.frame = _rect;
    } else if (gesture.state == UIGestureRecognizerStateEnded) {
    }
}
It works fine for me... hope it helps you.
As I wrote in the comment to Mundi, the method - (UIImage *)stretchableImageWithLeftCapWidth:(NSInteger)leftCapWidth topCapHeight:(NSInteger)topCapHeight is deprecated, so it is not wise to use it in any new development.
You should therefore, as per @msgambel's suggestion, move on to - (UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets for iOS 5+ compatibility, or better still - (UIImage *)resizableImageWithCapInsets:(UIEdgeInsets)capInsets resizingMode:(UIImageResizingMode)resizingMode for iOS 6+ compatibility.
Never use a deprecated method :)
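For reference, the rough mapping between the old and new calls looks like this (the 80-point cap is only an illustrative value). Note that the single-argument resizableImageWithCapInsets: tiles the middle region by default, so the two-argument form with UIImageResizingModeStretch is the closest match to the old stretchable behaviour:
UIImage *image = [UIImage imageNamed:@"about_us_button.png"];

// Deprecated:
UIImage *legacy = [image stretchableImageWithLeftCapWidth:80 topCapHeight:0];

// iOS 5+ (tiles the stretchable region by default):
UIImage *ios5 = [image resizableImageWithCapInsets:UIEdgeInsetsMake(0, 80, 0, 0)];

// iOS 6+ (explicit stretch mode, closest to the deprecated behaviour):
UIImage *ios6 = [image resizableImageWithCapInsets:UIEdgeInsetsMake(0, 80, 0, 0)
                                      resizingMode:UIImageResizingModeStretch];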
Finally, this is what I did and it worked just great. Thank you very much everyone!
CGPoint translation = [gesture translationInView:self.view];
UIButton *button = (UIButton *)gesture.view;

if (translation.x < kTranslationMaximum && translation.x > kTranslationMinimum)
{
    self.aboutCenterImage.frame = CGRectMake(self.aboutCenterImage.frame.origin.x,
                                             self.aboutCenterImage.frame.origin.y,
                                             translation.x,
                                             self.aboutCenterImage.frame.size.height);
    gesture.view.frame = CGRectMake(self.aboutCenterImage.frame.origin.x + self.aboutCenterImage.frame.size.width,
                                    self.aboutCenterImage.frame.origin.y,
                                    self.buttonAbout.frame.size.width,
                                    self.aboutCenterImage.frame.size.height);
}
else if (translation.x > kTranslationMaximum)
{
    // Push a view controller or do whatever you like to do :)
}
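The kTranslationMinimum / kTranslationMaximum constants are not defined in the snippet; they are simply thresholds for the pan translation, something along these lines (the values below are only placeholders and depend on your layout):
static const CGFloat kTranslationMinimum = 0.0f;    // ignore drags to the left of the start position
static const CGFloat kTranslationMaximum = 200.0f;  // translation that counts as "slid all the way"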
Working in Xcode/the iOS Simulator, I have an app where UIGestureRecognizers were working for pinch, tap, and pan. Without changing any code, pinches are no longer being recognized (I am holding down the Option key while moving the mouse to show the two grey circles moving together or apart). Pan and tap both still work. Has anyone else experienced this problem?
It appears that something is wrong with the recognizer itself or with the simulator, because the pinch code below never gets called, although the equivalent works for panning.
- (void)pinch:(UIPinchGestureRecognizer *)gesture
{
    NSLog(@"in pinch method");
    if ((gesture.state == UIGestureRecognizerStateChanged) ||
        (gesture.state == UIGestureRecognizerStateEnded)) {
        self.scale *= gesture.scale; // adjust our scale
        gesture.scale = 1;           // reset the gesture's scale so future changes are incremental
    }
}

- (void)tap:(UITapGestureRecognizer *)gesture
{
    NSLog(@"in tap method");
    if (gesture.state == UIGestureRecognizerStateEnded)
        self.originInPixels = [gesture locationInView:self];
}
I tried creating a new app with a simple MKMapView, which loads the map properly, and pan and tap work - but pinch still doesn't work.
I'm working in iOS 5.1.
Any ideas?
Please try this: include this code in your project.
example.h:
CGFloat lastScale;
CGPoint lastPoint;
example.m:
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIPinchGestureRecognizer *pinchGesture = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(handlePinch:)];
    [self.view addGestureRecognizer:pinchGesture]; // attach to the view controller's view (or to whichever view should be pinchable)
    [pinchGesture release];
}
- (void)handlePinch:(UIPinchGestureRecognizer *)gestureRecognizer
{
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale, necessary if there are multiple objects with different scales
        lastScale = [gestureRecognizer scale];
    }
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
        [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        NSLog(@"%f", currentScale);

        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 3.0;
        const CGFloat kMinScale = 1.0;

        CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]);
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], newScale, newScale);
        NSLog(@"%f", lastScale);
        [gestureRecognizer view].transform = transform;

        lastScale = [gestureRecognizer scale];
    }
}
The UIGestureRecognizer began working again on its own, and I'm still not sure why. It may have been as simple as my not holding down the mouse button while also holding down the Option key, but I thought I had tried that before.
Thanks for the suggestions above.
I had the same problem on my MacBook Air trackpad with Xcode 4.5.2, thinking that [Option] + [single-finger movement] would do a pinch (the grey circles moving in/out).
However, on the trackpad you need to apply [Option] + [a three-finger touch] for it to work.
This works in the Simulator but, annoyingly, is no help when running on a connected iOS device: my downloaded app works, but running with the iPhone connected to Xcode doesn't, agh!
I have a simple app which places an image view on the screen and allows the user to move and scale it using the pinch and pan gesture recognizers. In my UIPinchGestureRecognizer action method (scale:), I'm trying to calculate the value of the transformed frame before I actually perform the transformation using CGAffineTransformScale. I'm not getting the correct value, however; I'm getting a strange result that isn't in line with what the transformed frame should be. Below is my method:
- (void)scale:(UIPinchGestureRecognizer *)sender
{
    if ([sender state] == UIGestureRecognizerStateBegan)
    {
        lastScale = [sender scale];
    }
    if ([sender state] == UIGestureRecognizerStateBegan ||
        [sender state] == UIGestureRecognizerStateChanged)
    {
        CGFloat currentScale = [[[sender view].layer valueForKeyPath:@"transform.scale"] floatValue];
        CGFloat newScale = 1 - (lastScale - [sender scale]) * (CropThePhotoViewControllerPinchSpeed);
        newScale = MIN(newScale, CropThePhotoViewControllerMaxScale / currentScale);
        newScale = MAX(newScale, CropThePhotoViewControllerMinScale / currentScale);

        NSLog(@"currentBounds: %@", NSStringFromCGRect([[sender view] bounds]));
        NSLog(@"currentFrame: %@", NSStringFromCGRect([[sender view] frame]));

        CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
        CGRect nextFrame = CGRectApplyAffineTransform([[sender view] frame], transform);

        NSLog(@"nextFrame: %@", NSStringFromCGRect(nextFrame));
        //NSLog(@"nextBounds: %@", NSStringFromCGRect(nextBounds));

        [sender view].transform = transform;
        lastScale = [sender scale];
    }
}
Here is a printed result I get:
currentBounds: {{0, 0}, {316, 236.013}}
currentFrame: {{-115.226, -53.4392}, {543.452, 405.891}}
nextFrame: {{-202.566, -93.9454}, {955.382, 713.551}}
With these results, the currentBounds origin is obviously 0,0, as it always is, and the width and height are those of the original image before it ever gets transformed. This value stays the same no matter how many transformations I do.
currentFrame has the correct coordinates and the correct width and height for the current transformed state of the image.
nextFrame is incorrect; it should match the currentFrame from the next set of values that gets printed, but it doesn't.
So I have some questions:
1) Why is currentFrame displaying the correct value for the frame? I thought the frame was invalid after you perform transformations. This set of values was printed after many enlargements and reductions I made on the image with my fingers. It seems like the width and height of currentBounds are what I'd expect for currentFrame.
2) Why is my nextFrame value being calculated incorrectly, and how do I accurately calculate the value of the transformed frame before I actually apply the transformation?
You need to apply your new transform to the view's untransformed frame. Try this:
CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
CGAffineTransform iTransform = CGAffineTransformInvert([[sender view] transform]);
CGRect rawFrame = CGRectApplyAffineTransform([[sender view] frame], iTransform);
CGRect nextFrame = CGRectApplyAffineTransform(rawFrame, transform);
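If all you need is the final on-screen frame, another option (an aside, not part of the answer above) is to work from bounds and center, which a transform never touches. This sketch assumes the layer's anchorPoint is still the default (0.5, 0.5):
CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
CGRect transformedBounds = CGRectApplyAffineTransform([[sender view] bounds], transform); // only the size matters here
CGPoint center = [[sender view] center];
CGRect nextFrame = CGRectMake(center.x - transformedBounds.size.width  / 2.0,
                              center.y - transformedBounds.size.height / 2.0,
                              transformedBounds.size.width,
                              transformedBounds.size.height);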
I have a pinch gesture recognizer attached to an image view, which I use to enlarge and shrink the photo with pinches. Below is the code I'm using in the action method:
- (void)scale:(UIPinchGestureRecognizer *)sender
{
    if ([sender state] == UIGestureRecognizerStateBegan)
    {
        lastScale = [sender scale];
    }
    if ([sender state] == UIGestureRecognizerStateBegan ||
        [sender state] == UIGestureRecognizerStateChanged)
    {
        CGFloat currentScale = [[[sender view].layer valueForKeyPath:@"transform.scale"] floatValue];
        CGFloat newScale = 1 - (lastScale - [sender scale]) * (UIComicImageViewPinchSpeed);
        newScale = MIN(newScale, maxScale / currentScale);
        newScale = MAX(newScale, minScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
        [sender view].transform = transform;
        lastScale = [sender scale];
    }
}
I need to determine where the new center of the image view's frame will be before I actually perform the transformation. Is there any way to determine this? Basically, I'm trying to halt the scaling if it's about to move the image off the screen, or close to it.
UPDATE
Thanks to Robin below for suggesting that method to figure out the transformed frame. The problem I'm running into now is that the frame becomes invalid after the transform is performed, and I need to keep track of the most recent frame in order to work out the boundary of my image. Obviously, I can do this manually and store it in an instance variable, but I'm wondering if there is a more "elegant" way to accomplish this.
Use CGRectApplyAffineTransform like this:
CGRect currentFrame = ....;
CGRect newFrame = CGRectApplyAffineTransform(currentFrame, transform);
// Then test if newFrame is within the limits you want
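Applied to the question, the check could sit right before the transform is assigned. With the default anchorPoint the center does not move when you scale, so only the size needs predicting; a sketch (screenBounds is an assumption, typically self.view.bounds or the screen's bounds):
CGRect current = [sender view].frame;   // bounding box of the view as currently transformed
CGRect predicted = CGRectInset(current,
                               current.size.width  * (1.0 - newScale) / 2.0,
                               current.size.height * (1.0 - newScale) / 2.0);  // same center, size scaled by newScale
// Require the predicted frame to stay on screen (flip the containment test if the image
// must instead keep covering the screen).
if (CGRectContainsRect(screenBounds, predicted)) {
    [sender view].transform = CGAffineTransformScale([sender view].transform, newScale, newScale);
    lastScale = [sender scale];
}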
I'm working on an iPad app that will use UIGestureRecognizers to allow the user to pan, scale and rotate objects (subclass of UIView) on the screen.
I understand that the [UIView frame] property isn't valid after a transform is done, so I'm trying to take the values of my UIGestureRecognizers and keep the "frame" myself.
Here's the code I'm using to attempt this (you may recognize a lot of code from Apple's sample project, SimpleGestureRecognizers):
// Shape.h (partial)
@interface Shape : UIView <UIGestureRecognizerDelegate> {
    CGFloat centerX;
    CGFloat centerY;
    CGFloat rotation;
    CGFloat xScale;
    CGFloat yScale;
}
// Shape.m (partial)
- (void)panPiece:(UIPanGestureRecognizer *)gestureRecognizer
{
    UIView *piece = [gestureRecognizer view];
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGPoint translation = [gestureRecognizer translationInView:[piece superview]];
        self.centerX += translation.x;
        self.centerY += translation.y;
        [piece setCenter:CGPointMake([piece center].x + translation.x, [piece center].y + translation.y)];

        for (HandleView *h in [self handles]) {
            [h setCenter:CGPointMake([h center].x + translation.x, [h center].y + translation.y)];
            [h setNeedsDisplay];
        }

        [gestureRecognizer setTranslation:CGPointZero inView:[piece superview]];
        NSLog(@"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
    }
}
// rotate the piece by the current rotation
// reset the gesture recognizer's rotation to 0 after applying so the next callback is a delta from the current rotation
- (void)rotatePiece:(UIRotationGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat rot = [self normalizeRotation:[gestureRecognizer rotation]];
        self.rotation += rot * 180.0 / M_PI;
        NSLog(@"Rotation: %.12f", [gestureRecognizer rotation]);

        [gestureRecognizer view].transform = CGAffineTransformRotate([[gestureRecognizer view] transform], [gestureRecognizer rotation]);
        [gestureRecognizer setRotation:0];
        NSLog(@"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
    }
}
// scale the piece by the current scale
// reset the gesture recognizer's scale to 1 after applying so the next callback is a delta from the current scale
- (void)scalePiece:(UIPinchGestureRecognizer *)gestureRecognizer
{
    [self adjustAnchorPointForGestureRecognizer:gestureRecognizer];

    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan || [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        self.xScale *= [gestureRecognizer scale];
        self.yScale *= [gestureRecognizer scale];
        [gestureRecognizer view].transform = CGAffineTransformScale([[gestureRecognizer view] transform], [gestureRecognizer scale], [gestureRecognizer scale]);
        [gestureRecognizer setScale:1];
        NSLog(@"(%.0f, %.0f, %.0f, %.0f) %.2f˚, (%.2fx, %.2fx)", [self frame].origin.x, [self frame].origin.y, [self frame].size.width, [self frame].size.height, [self rotation], [self xScale], [self yScale]);
    }
}
Because of some weirdness I noticed with the rotations, I implemented the following method to help keep an accurate rotation value:
- (CGFloat)normalizeRotation:(CGFloat)rot
{
    if (fabs(rot) > 0.05) {   // fabs, not abs: abs() would truncate the CGFloat to an integer
        if (rot > 0) {
            rot -= M_PI;
        } else {
            rot += M_PI;
        }
        return [self normalizeRotation:rot];
    } else {
        return rot;
    }
}
Anyway, the shape on-screen pans, scales and rotates fine. All is as you would expect and the performance is good.
The problem is that, after a user moves, resizes and rotates a UIView, I want to let them tap it and give them "handles" that allow for resizing other than the uniform resizing that pinching gives (i.e., with the pinch gesture you scale x and y by the same ratio). Now, even with the code above, the values that are stored aren't ever quite accurate.
The "handles" I'm using are simply 10x10 dots that are supposed to go at each corner and halfway along each side of the UIView's frame/rectangle. When I first place a square and tap it to get the handles, before doing anything else, the handles appear in the appropriate places. When I move, resize and rotate an object and then tap it, the handles are all shifted off the shape by some amount, generally about 20 pixels.
Are the values in the UIGestureRecognizer objects just not accurate enough? That doesn't seem to be the case, because those values are used to change the object on-screen, and that is accurate.
I'm pretty sure there's no way for me to get a real representation of the UIView's frame after messing with it so much, but where's the flaw in my custom code that's giving me bad values? Am I losing precision when converting between degrees and radians?
On a related note: Apple obviously has internal code that keeps track of how to draw a view that's been translated/transformed. The box of solid color that I'm currently using is moved, zoomed and rotated correctly. So, is there any way to access the values that they use for displaying a translated/transformed UIView?
Thanks!
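One note on that last question: the geometry UIKit actually uses is just the view's center, bounds and transform, so rather than tracking rotation and scale by hand they can be read back out of the current transform. A sketch, assuming a transform built only from rotations and uniform scales (as in the handlers above), bounds.origin at (0,0) and the default anchorPoint of (0.5, 0.5):
CGAffineTransform t = piece.transform;
CGFloat rotationInRadians = atan2(t.b, t.a);                // current rotation
CGFloat uniformScale      = sqrt(t.a * t.a + t.b * t.b);    // current scale factor

// Handle positions: map a corner of the untransformed bounds through the transform,
// then offset by the view's center.
CGRect b = piece.bounds;
CGPoint topLeft = CGPointApplyAffineTransform(CGPointMake(-b.size.width / 2.0, -b.size.height / 2.0), t);
topLeft.x += piece.center.x;
topLeft.y += piece.center.y;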