How can I zoom a UIScrollView to a fixed point? - iphone

I am creating a control for showing a timeline. Initially, it should show the years. When zooming in and the zoomScale reaches a specific value, it should show the individual months, and at the next zoom level it should show the values for the individual days.
I have created a scroll view and added the layers (for showing the month/year values) to it. After overriding the pinch gesture, I am able to do the zooming (normal UIScrollView zooming). But I need to zoom to a particular point. That is, suppose I am zooming January; I need to keep it in the same position (to create an effect like moving inside of January). Any suggestions on how to achieve this?
Is my approach correct? If not, please help me get started.

Use the method below for your requirement:
- (void)zoomToRect:(CGRect)rect animated:(BOOL)animated
Just call the method above with your selected frame or point. For example, if January is selected, take the frame (position) of the January button or view and call the method with that frame.
For more information, see this link: UIScrollView_Class
I hope this helps you...
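For illustration, a minimal sketch of that call; timelineScrollView and januaryView are hypothetical names, not from the answer:
// Illustrative names: timelineScrollView and januaryView are assumptions.
CGRect januaryFrame = januaryView.frame; // frame within the scroll view's content
[timelineScrollView zoomToRect:januaryFrame animated:YES];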

Here’s what it looks like:
@implementation UIScrollView (ZoomToPoint)

- (void)zoomToPoint:(CGPoint)zoomPoint withScale:(CGFloat)scale animated:(BOOL)animated
{
    // Normalize the current content size back to a content scale of 1.0f
    CGSize contentSize;
    contentSize.width = (self.contentSize.width / self.zoomScale);
    contentSize.height = (self.contentSize.height / self.zoomScale);

    // Translate the zoom point so it is relative to the content rect
    zoomPoint.x = (zoomPoint.x / self.bounds.size.width) * contentSize.width;
    zoomPoint.y = (zoomPoint.y / self.bounds.size.height) * contentSize.height;

    // Derive the size of the region to zoom to
    CGSize zoomSize;
    zoomSize.width = self.bounds.size.width / scale;
    zoomSize.height = self.bounds.size.height / scale;

    // Offset the zoom rect so the actual zoom point is in the middle of the rectangle
    CGRect zoomRect;
    zoomRect.origin.x = zoomPoint.x - zoomSize.width / 2.0f;
    zoomRect.origin.y = zoomPoint.y - zoomSize.height / 2.0f;
    zoomRect.size.width = zoomSize.width;
    zoomRect.size.height = zoomSize.height;

    // Apply the resize
    [self zoomToRect:zoomRect animated:animated];
}

@end
Looking at it, it should be pretty straightforward to figure out how it works. If you see anything wrong with it, let me know in the comments.
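As a usage sketch, assuming a scroll view ivar named myScrollView (a hypothetical name) with the category above imported:
// myScrollView is an assumed ivar; the point is in the scroll view's bounds coordinates.
[myScrollView zoomToPoint:CGPointMake(160.0f, 240.0f) withScale:2.0f animated:YES];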
Thanks :)
source: http://www.tim-oliver.com/2012/01/14/zooming-to-a-point-in-uiscrollview/

I had the same problem.
I had a world map image, and I wanted to zoom the map to a certain location, India.
This is how I fixed it.
I found out the position of India in content coordinates, the point (500, 300),
and used the following code in viewDidLoad:
[self.scrollViewForMap setZoomScale:2.0 animated:YES]; //for zooming
_scrollViewForMap.contentOffset = CGPointMake(500,300); //for location
This solved my problem. :)
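One note on the arithmetic here: contentOffset is measured in the zoomed content's coordinate space, so if the (500, 300) point was measured at a zoom scale of 1.0, it would need to be multiplied by the zoom factor. A sketch, assuming the same ivar as above:
CGFloat zoom = _scrollViewForMap.zoomScale;
_scrollViewForMap.contentOffset = CGPointMake(500 * zoom, 300 * zoom); // offset in zoomed coordinates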

// lastScale is assumed to be a CGFloat instance variable on this class.
- (void)pinchDetected:(UIPinchGestureRecognizer *)gestureRecognizer {
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan) {
        // Reset the last scale; necessary if there are multiple objects with different scales
        lastScale = [gestureRecognizer scale];
    }
    if ([gestureRecognizer state] == UIGestureRecognizerStateBegan ||
        [gestureRecognizer state] == UIGestureRecognizerStateChanged) {
        CGFloat currentScale = [[[gestureRecognizer view].layer valueForKeyPath:@"transform.scale"] floatValue];
        // Constants to adjust the max/min values of zoom
        const CGFloat kMaxScale = 2.2;
        const CGFloat kMinScale = 0.64;
        CGFloat newScale = 1 - (lastScale - [gestureRecognizer scale]);
        newScale = MIN(newScale, kMaxScale / currentScale);
        newScale = MAX(newScale, kMinScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[gestureRecognizer view] transform], newScale, newScale);
        [gestureRecognizer view].transform = transform;
        [gestureRecognizer setScale:1.0];
        lastScale = [gestureRecognizer scale]; // Store the previous scale factor for the next pinch gesture call
    }
}
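For completeness, a minimal wiring sketch; zoomableView is a hypothetical target view and the handler is assumed to live in a view controller:
// zoomableView is an assumed ivar; attach the recognizer in viewDidLoad or similar.
UIPinchGestureRecognizer *pinch =
    [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchDetected:)];
[zoomableView addGestureRecognizer:pinch];
zoomableView.userInteractionEnabled = YES;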

What worked for me (Swift 4):
func zoomIn(with gestureRecognizer: UIGestureRecognizer?) {
    guard let location = gestureRecognizer?.location(in: gestureRecognizer?.view) else { return }
    let rect = CGRect(x: location.x, y: location.y, width: 0, height: 0)
    scrollView.zoom(to: rect, animated: true)
}
In other words, creating a CGRect with zero width and height causes the UIScrollView to zoom to the rect's origin at maximumZoomScale.

Related

Transforming view coordinates with CGAffineTransform

I'm fooling around with Apple's ScrollViewSuite, TapToZoom app, trying to place a view on the zoomed image. Here's what I want to do: on double tap, rotate the image by pi/2, zoom it by a factor of 8, then place a subview in the scrollView on top of the transformed image.
I know the view's desired frame in the original coordinate system. So I need to transform those coordinates to the new system after the zoom...
From the ScrollViewSuite...
- (void)handleDoubleTap:(UIGestureRecognizer *)gestureRecognizer {
    // double tap zooms in
    float newScale = [imageScrollView zoomScale] * 8; // my custom zoom
    CGRect zoomRect = [self zoomRectForScale:newScale withCenter:[gestureRecognizer locationInView:gestureRecognizer.view]];
    [imageScrollView zoomToRect:zoomRect animated:YES];
    [self rotateView:imageScrollView by:M_PI_2]; // then the twist
}
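(For reference, the zoomRectForScale:withCenter: helper in Apple's sample is roughly the following; treat this as a sketch from memory rather than the canonical source:)
- (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center {
    CGRect zoomRect;
    // The zoom rect is in the content view's coordinates; at a zoom scale of 1.0
    // it is the size of the scroll view's bounds, shrinking as the scale grows.
    zoomRect.size.height = [imageScrollView frame].size.height / scale;
    zoomRect.size.width = [imageScrollView frame].size.width / scale;
    // Center the rect on the tap point.
    zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
    zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);
    return zoomRect;
}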
My rotation code...
- (void)rotateView:(UIView *)anImageView by:(CGFloat)spin {
    [UIView animateWithDuration:0.5
                          delay:0.0
                        options:UIViewAnimationOptionAllowUserInteraction
                     animations:^{
                         anImageView.transform = CGAffineTransformMakeRotation(spin);
                     }
                     completion:^(BOOL finished){}];
}
Then, after the zoom is finished, place a view at a known location in the original coordinate system. This is where I think I need some help...
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)_scale {
    CGFloat coX = scrollView.contentOffset.x;
    CGFloat coY = scrollView.contentOffset.y;
    // try to build the forward transformation that got us here, then invert it
    CGAffineTransform rotate = CGAffineTransformMakeRotation(M_PI_2);
    CGAffineTransform scale = CGAffineTransformMakeScale(_scale, _scale);
    // should these be negative? some confusion here.
    CGAffineTransform translate = CGAffineTransformMakeTranslation(coX, coY);
    // what's the right order of concatenation on these? more confusion
    CGAffineTransform aft0 = CGAffineTransformConcat(rotate, scale);
    CGAffineTransform aft1 = CGAffineTransformConcat(aft0, translate);
    // invert it
    CGAffineTransform aft = CGAffineTransformInvert(aft1);
    // this is the fixed rect in the original coordinate system of the image
    // the image is at 0,0 320,300 in the view, so this will only work
    // when the image is at the origin... need to fix that later, too.
    CGRect rect = CGRectMake(248, 40, 90, 160);
    CGRect xformRect = CGRectApplyAffineTransform(rect, aft);
    // build a subview with this rect
    UIView *newView = [scrollView viewWithTag:888];
    if (!newView) {
        newView = [[UIView alloc] initWithFrame:xformRect];
        newView.backgroundColor = [UIColor yellowColor];
        newView.tag = 888;
        [scrollView addSubview:newView];
    } else {
        newView.frame = xformRect;
    }
    // see where it landed...
    NSLog(@"%f,%f %f,%f", xformRect.origin.x, xformRect.origin.y, xformRect.size.width, xformRect.size.height);
}

Discrepancies between bounds, frame, and frame calculated through CGAffineTransformScale

I have a simple app which places an image view on the screen and allows the user to move and scale it using the pinch and pan gesture recognizers. In my UIPinchGestureRecognizer action method (scale:), I'm trying to calculate the value of the transformed frame before I actually perform the transformation using CGAffineTransformScale. I'm not getting the correct value, however; I get some weird result that isn't in line with what the transformed frame should be. Below is my method:
- (void)scale:(UIPinchGestureRecognizer *)sender
{
    if ([sender state] == UIGestureRecognizerStateBegan)
    {
        lastScale = [sender scale];
    }
    if ([sender state] == UIGestureRecognizerStateBegan ||
        [sender state] == UIGestureRecognizerStateChanged)
    {
        CGFloat currentScale = [[[sender view].layer valueForKeyPath:@"transform.scale"] floatValue];
        CGFloat newScale = 1 - (lastScale - [sender scale]) * (CropThePhotoViewControllerPinchSpeed);
        newScale = MIN(newScale, CropThePhotoViewControllerMaxScale / currentScale);
        newScale = MAX(newScale, CropThePhotoViewControllerMinScale / currentScale);
        NSLog(@"currentBounds: %@", NSStringFromCGRect([[sender view] bounds]));
        NSLog(@"currentFrame: %@", NSStringFromCGRect([[sender view] frame]));
        CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
        CGRect nextFrame = CGRectApplyAffineTransform([[sender view] frame], transform);
        NSLog(@"nextFrame: %@", NSStringFromCGRect(nextFrame));
        //NSLog(@"nextBounds: %@", NSStringFromCGRect(nextBounds));
        [sender view].transform = transform;
        lastScale = [sender scale];
    }
}
Here is a printed result I get:
/* currentBounds: {{0, 0}, {316, 236.013}}
currentFrame: {{-115.226,-53.4392}, {543.452, 405.891}}
nextFrame: {{-202.566, -93.9454}, {955.382, 713.551}} */
With these results, the currentBounds origin is obviously (0, 0), as it always is, and the width and height are those of the original image before it was ever transformed. This value seems to stay the same no matter how many transformations I do.
currentFrame has the correct coordinates and the correct size, based on the current state of the image in its transformed state.
nextFrame is incorrect; it should match the currentFrame from the next set of values that gets printed, but it doesn't.
So I have some questions:
1) Why is currentFrame displaying the correct value for the frame? I thought the frame was invalid after you perform transformations? This set of values was displayed after many enlargements and reductions I made on the image with my fingers. It seems like the width and height of currentBounds are what I'd expect for the currentFrame.
2) Why is my next frame value being calculated incorrectly, and how do I accurately calculate the value of the transformed frame before I actually implement the transformation?
You need to apply your new transform to the view's untransformed frame: the frame you read back already has the current transform baked in, so applying the new transform to it scales the result twice. Invert the current transform to recover the raw frame first. Try this:
CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
// Undo the current transform to recover the untransformed frame...
CGAffineTransform iTransform = CGAffineTransformInvert([[sender view] transform]);
CGRect rawFrame = CGRectApplyAffineTransform([[sender view] frame], iTransform);
// ...then apply the prospective transform to that raw frame.
CGRect nextFrame = CGRectApplyAffineTransform(rawFrame, transform);

How do you determine the frame of a future transformation? UPDATE: More elegant way?

I have a pinch gesture recognizer attached to an image view, which I use to enlarge and shrink the photo with pinches. Below is the code that I'm using in the action method:
- (void)scale:(UIPinchGestureRecognizer *)sender
{
    if ([sender state] == UIGestureRecognizerStateBegan)
    {
        lastScale = [sender scale];
    }
    if ([sender state] == UIGestureRecognizerStateBegan ||
        [sender state] == UIGestureRecognizerStateChanged)
    {
        CGFloat currentScale = [[[sender view].layer valueForKeyPath:@"transform.scale"] floatValue];
        CGFloat newScale = 1 - (lastScale - [sender scale]) * (UIComicImageViewPinchSpeed);
        // Clamp the cumulative scale between minScale and maxScale.
        newScale = MIN(newScale, maxScale / currentScale);
        newScale = MAX(newScale, minScale / currentScale);
        CGAffineTransform transform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
        [sender view].transform = transform;
        lastScale = [sender scale];
    }
}
I need to determine where the new center of the image view's frame will be before I actually perform the transformation. Is there any way to determine this? Basically, I'm trying to halt the scaling if it's about to move the image off the screen, or close to it.
UPDATE
Thanks to Robin below for suggesting that method to figure out the transformed frame. The problem I'm running into now is that the frame becomes invalid after the transform is performed, and I need to keep track of the most recent frame in order to figure out the boundary of my image. Obviously, I can do this manually and store it in an instance variable, but I'm wondering if there is a more "elegant" way to accomplish this?
Use CGRectApplyAffineTransform like this:
CGRect currentFrame = ....;
CGRect newFrame = CGRectApplyAffineTransform(currentFrame, transform);
// Then test if newFrame is within the limits you want
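As a sketch of that boundary test, assuming it runs inside the scale: handler above and that the view must stay within its superview's bounds (the inversion trick from the other answer is used so the new transform is applied to the untransformed frame):
// Recover the untransformed frame, then apply the prospective transform.
CGAffineTransform newTransform = CGAffineTransformScale([[sender view] transform], newScale, newScale);
CGRect rawFrame = CGRectApplyAffineTransform([[sender view] frame],
                                             CGAffineTransformInvert([[sender view] transform]));
CGRect newFrame = CGRectApplyAffineTransform(rawFrame, newTransform);
// Halt the scaling if the prospective frame would leave the superview.
if (!CGRectContainsRect([[sender view] superview].bounds, newFrame)) {
    return; // skip this pinch increment
}
[sender view].transform = newTransform;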

Scaling image anchored at certain point

Is it possible to scale an image by using a specified point as the anchor, i.e. the image "grows" out from this point?
Scaling isn't based on a point. What you want to do is move the image so that the corresponding point on the new and original images stays at the same position. To do this, just adjust the (x, y) position of the image: use the proportional distance from the anchor to the edge multiplied by the difference in size.
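A minimal sketch of that arithmetic, assuming imageView is a hypothetical view and anchor is the fixed point in its superview's coordinates:
// Keep `anchor` fixed while scaling imageView's frame by `scale`.
CGRect f = imageView.frame;
CGFloat scale = 2.0f; // example scale factor
// The anchor sits a fixed proportion across the frame; preserve that proportion.
CGFloat px = (anchor.x - f.origin.x) / f.size.width;
CGFloat py = (anchor.y - f.origin.y) / f.size.height;
CGSize newSize = CGSizeMake(f.size.width * scale, f.size.height * scale);
imageView.frame = CGRectMake(anchor.x - px * newSize.width,
                             anchor.y - py * newSize.height,
                             newSize.width,
                             newSize.height);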
You could do something like this (based on a solution using UIPinchGestureRecognizer, but you can get the idea...).
This is the selector called by the gesture recognizer:
// Instance variables tracking the pinch point's offset from the view's center.
CGPoint newDistanceFromCenter;
CGPoint distanceFromCenter;

- (void)scale:(id)sender
{
    UIPinchGestureRecognizer *recognizer = (UIPinchGestureRecognizer *)sender;
    if (recognizer.state == UIGestureRecognizerStateBegan)
    {
        CGPoint pinchPoint = [recognizer locationInView:self];
        distanceFromCenter.x = self.center.x - pinchPoint.x;
        distanceFromCenter.y = self.center.y - pinchPoint.y;
    }
    else if (recognizer.state == UIGestureRecognizerStateChanged)
    {
        CGAffineTransform currentTransform = self.transform;
        CGFloat scale = recognizer.scale;
        newDistanceFromCenter.x = (distanceFromCenter.x * scale);
        newDistanceFromCenter.y = (distanceFromCenter.y * scale);
        CGPoint center = self.center;
        center.x -= (distanceFromCenter.x - newDistanceFromCenter.x);
        center.y -= (distanceFromCenter.y - newDistanceFromCenter.y);
        self.center = center;
        distanceFromCenter = newDistanceFromCenter;
        self.transform = CGAffineTransformScale(currentTransform, scale, scale);
        recognizer.scale = 1;
    }
}

Is it Possible to Center Content in a UIScrollView Like Apple's Photos App?

I have been struggling with this problem for some time and there seems to be no clear answer. Please let me know if anyone has figured this out.
I want to display a photo in a UIScrollView that is centered on the screen (the image may be in portrait or landscape orientation).
I want to be able to zoom and pan the image only to the edge of the image as in the photos app.
I've tried altering the contentInset in the viewDidEndZooming method, and that sort of does the trick, but there are several glitches.
I've also tried making the imageView the same size as the scrollView and centering the image there. The problem is that when you zoom, you can scroll around the entire imageView and part of the image may scroll off the view.
I found a similar post here and gave my best workaround, but no real answer was found:
UIScrollView with centered UIImageView, like Photos app
One elegant way to center the content of a UIScrollView is this.
Add an observer for the contentSize of your UIScrollView, so this method will be called every time the content changes...
[myScrollView addObserver:delegate
               forKeyPath:@"contentSize"
                  options:(NSKeyValueObservingOptionNew)
                  context:NULL];
Now, in your observer method:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    // Correct object class.
    UIScrollView *pointer = object;
    // Calculate center.
    CGFloat topCorrect = ([pointer bounds].size.height - [pointer viewWithTag:100].bounds.size.height * [pointer zoomScale]) / 2.0;
    topCorrect = (topCorrect < 0.0 ? 0.0 : topCorrect);
    topCorrect = topCorrect - (pointer.frame.origin.y - imageGallery.frame.origin.y);
    // Apply the corrected center.
    pointer.center = CGPointMake(pointer.center.x,
                                 pointer.center.y + topCorrect);
}
You should replace [pointer viewWithTag:100] with your content view (the UIView inside the scroll view).
Also change imageGallery so it points to your window-sized container view.
This will correct the center of the content every time its size changes.
NOTE: The one case where this doesn't work very well is with the standard zoom functionality of the UIScrollView.
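One caveat worth adding: KVO observers must be removed before the observing object goes away, or the next contentSize change can message a deallocated object. A sketch using the same myScrollView and delegate names as above (assuming ARC; under manual reference counting you would also call [super dealloc]):
- (void)dealloc {
    // Balance the addObserver: call made earlier.
    [myScrollView removeObserver:delegate forKeyPath:@"contentSize"];
}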
This code should work on most versions of iOS (and has been tested to work on 3.1 upwards).
It's based on the Apple WWDC code for the PhotoScroll app.
Add the below to your subclass of UIScrollView, and replace tileContainerView with the view containing your image or tiles:
- (void)layoutSubviews {
    [super layoutSubviews];
    // center the image as it becomes smaller than the size of the screen
    CGSize boundsSize = self.bounds.size;
    CGRect frameToCenter = tileContainerView.frame;
    // center horizontally
    if (frameToCenter.size.width < boundsSize.width)
        frameToCenter.origin.x = (boundsSize.width - frameToCenter.size.width) / 2;
    else
        frameToCenter.origin.x = 0;
    // center vertically
    if (frameToCenter.size.height < boundsSize.height)
        frameToCenter.origin.y = (boundsSize.height - frameToCenter.size.height) / 2;
    else
        frameToCenter.origin.y = 0;
    tileContainerView.frame = frameToCenter;
}
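For context, a minimal subclass skeleton this method could live in; CenteringScrollView and the ivar name are illustrative, not from the answer:
@interface CenteringScrollView : UIScrollView {
    UIView *tileContainerView; // the view containing your image or tiles
}
@end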
Here is the best solution I could come up with for this problem. The trick is to constantly readjust the imageView's frame. I find this works much better than constantly adjusting the contentInsets or contentOffsets. I had to add a bit of extra code to accommodate both portrait and landscape images.
If anyone can come up with a better way to do it, I would love to hear it.
Here's the code:
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale {
    CGSize screenSize = [[self view] bounds].size;
    if (myScrollView.zoomScale <= initialZoom + 0.01) // This resolves a problem with the code not working correctly when zooming all the way out.
    {
        imageView.frame = [[self view] bounds];
        [myScrollView setZoomScale:myScrollView.zoomScale + 0.01];
    }
    if (myScrollView.zoomScale > initialZoom)
    {
        if (CGImageGetWidth(temporaryImage.CGImage) > CGImageGetHeight(temporaryImage.CGImage)) // If the image is wider than tall, do the following...
        {
            if (screenSize.height >= CGImageGetHeight(temporaryImage.CGImage) * [myScrollView zoomScale]) // If the height of the screen is greater than the zoomed height of the image...
            {
                imageView.frame = CGRectMake(0, 0, 320 * (myScrollView.zoomScale), 368);
            }
            if (screenSize.height < CGImageGetHeight(temporaryImage.CGImage) * [myScrollView zoomScale]) // If the height of the screen is less than the zoomed height of the image...
            {
                imageView.frame = CGRectMake(0, 0, 320 * (myScrollView.zoomScale), CGImageGetHeight(temporaryImage.CGImage) * [myScrollView zoomScale]);
            }
        }
        if (CGImageGetWidth(temporaryImage.CGImage) < CGImageGetHeight(temporaryImage.CGImage)) // If the image is taller than wide, do the following...
        {
            CGFloat portraitHeight;
            if (CGImageGetHeight(temporaryImage.CGImage) * [myScrollView zoomScale] < 368)
            {
                portraitHeight = 368;
            }
            else
            {
                portraitHeight = CGImageGetHeight(temporaryImage.CGImage) * [myScrollView zoomScale];
            }
            if (screenSize.width >= CGImageGetWidth(temporaryImage.CGImage) * [myScrollView zoomScale]) // If the width of the screen is greater than the zoomed width of the image...
            {
                imageView.frame = CGRectMake(0, 0, 320, portraitHeight);
            }
            if (screenSize.width < CGImageGetWidth(temporaryImage.CGImage) * [myScrollView zoomScale]) // If the width of the screen is less than the zoomed width of the image...
            {
                imageView.frame = CGRectMake(0, 0, CGImageGetWidth(temporaryImage.CGImage) * [myScrollView zoomScale], portraitHeight);
            }
        }
        [myScrollView setZoomScale:myScrollView.zoomScale - 0.01];
    }
}
Here is how I calculate the min zoom scale in the viewDidLoad method:
CGSize photoSize = [temporaryImage size];
CGSize screenSize = [[self view] bounds].size;
CGFloat widthRatio = screenSize.width / photoSize.width;
CGFloat heightRatio = screenSize.height / photoSize.height;
initialZoom = (widthRatio > heightRatio) ? heightRatio : widthRatio;
[myScrollView setMinimumZoomScale:initialZoom];
[myScrollView setZoomScale:initialZoom];
[myScrollView setMaximumZoomScale:3.0];
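As a worked example with assumed numbers: for a 1200 x 900 photo on a 320 x 480 screen, widthRatio = 320 / 1200 ≈ 0.267 and heightRatio = 480 / 900 ≈ 0.533, so initialZoom becomes 0.267 and the photo initially fits the screen by its width.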