I'm trying to implement a simple control to let the user zoom in and out of an image. I have a UIImageView inside a UIScrollView, and I would like to prevent the user from zooming out so far that either the width or height of the image becomes smaller than the scroll view's size.

Here is where the problem lies: when I set the minimumZoomScale to the appropriate value, the image appears in a weird location. Here's my code to configure the scroll view and image view:
- (void)openNewImage:(UIImage *)image
{
    _originalImage = image;

    // Reset scroll view's zoom scales
    // (must be reset before setting the image to the image view)
    self.imageScrollView.zoomScale = 1.0f;
    self.imageScrollView.minimumZoomScale = 0.01f;

    // Set scroll view's content size to allow scrolling
    self.imageScrollView.contentOffset = CGPointZero;
    self.imageScrollView.contentSize = image.size;

    // Set image and resize image view to image size
    self.imageView.image = _originalImage;
    self.imageView.frame =
        CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);

    [self.imageScrollView zoomToRect:self.imageView.frame animated:YES];
    self.imageScrollView.minimumZoomScale = self.imageScrollView.zoomScale;
}
The last line is causing problems because if I comment it out, things appear where they should, but then the user can zoom out too much. While trying to debug the problem, I found that the last line causes the image view's frame origin to change to very small negative numbers, like -1.11015e-06 rather than just 0. If I comment out the last line, the image view's frame origin is 0. I wonder if somehow this is causing problems, although that small negative number is virtually 0.
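For reference, the constraint I'm after boils down to this (a minimal sketch of computing the fit-to-bounds scale directly, assuming the scroll view's bounds are final at the point this runs):

// Sketch: the smallest zoom scale at which the image still fills the
// scroll view in both dimensions (assumes bounds are already laid out).
CGSize boundsSize = self.imageScrollView.bounds.size;
CGFloat fitScale = MAX(boundsSize.width / image.size.width,
                       boundsSize.height / image.size.height);
self.imageScrollView.minimumZoomScale = fitScale;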
Solution: For those who view this someday in the future, the solution I used was indeed viewDidLayoutSubviews. The solution was actually rather complex: I had to calculate several scaling values and dynamically resize the Art view every time the view needed to be laid out again. There were several odd problems to handle, but after each was dealt with the implementation feels pretty solid.
If anybody runs into a similar problem later on, let me know and I can post the relevant code.
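In the meantime, here is a rough sketch of the general shape it took (illustrative only, not my exact code; it assumes imageView and artView are accessible as properties):

// Illustrative sketch: aspect-fit the image view inside the parent view,
// then lock the Art view to exactly the same frame on every layout pass.
- (void)viewDidLayoutSubviews
{
    [super viewDidLayoutSubviews];

    CGSize imageSize = self.imageView.image.size;
    CGRect bounds = self.view.bounds;
    CGFloat ratio = MIN(bounds.size.width / imageSize.width,
                        bounds.size.height / imageSize.height);
    CGSize fitted = CGSizeMake(imageSize.width * ratio,
                               imageSize.height * ratio);

    self.imageView.frame = CGRectMake((bounds.size.width - fitted.width) / 2.0f,
                                      (bounds.size.height - fitted.height) / 2.0f,
                                      fitted.width,
                                      fitted.height);

    // Keeping the Art view bound to the same frame keeps its arc/line
    // coordinates valid after rotation.
    self.artView.frame = self.imageView.frame;
}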
I've got a 'blank' UIView, a subview that is a UIImageView containing a single image, and a second subview that is basically a collection of CGContext arcs and lines and all that.
Where I'm at: I've placed the UIImageView subview on top of the UIView so that its image is the 'background'. Then I placed the CGContext arcs-and-lines subview on top of that (I'll call this the Art subview for clarity). All good. Everything displays perfectly and stays aligned.
Problem: When I rotate, things get screwy. The arcs and lines on the Art subview end up at the wrong coordinates, and depending on my autoresizingMask settings the image gets stretched, etc. I can fix one of these problems at a time, but I can't find the right combination to fix them both!
Things I've Tried: I've tried just about every autoresizingMask option and combination of options, but per the above I can't quite lick the problem with those. I also tried using some custom code in viewDidLayoutSubviews, but that felt really flimsy and much less extensible than using autoresizingMasks, so I abandoned that path after making some nominal progress.
What I've Learned: As long as I bind my UIImageView frame and the Art subview frame to the same dimensions, the arcs and lines stay at the proper coordinates. That is probably the easiest way to state my goal: to have a UIImageView that keeps the correct aspect ratio (not just the image within it, but the view itself), and to match the Art subview exactly to its frame, even as the screen rotates.
Here is a diagram of what I'd like to achieve:
+ = UIView
~ = UIImageView subview
. = Art subview
Portrait
Wherein the image within the UIImageView takes up basically the whole screen (though not quite), and the Art subview is layered on top of it with the dots below representing a crude line/arc.
++++++++++
+~~~~~~~~+
+~ . ~+
+~ . ~+
+~ . ~+
+~ . ~+
+~~~~~~~~+
++++++++++
Landscape
Wherein the UIImageView sublayer maintains its aspect ratio, and the Art sublayer stays 'locked' to the image within the UIImageView.
++++++++++++++++++++++++++
+ ~~~~~~~~ +
+ ~ . ~ +
+ ~ . ~ +
+ ~ . ~ +
+ ~ . ~ +
+ ~~~~~~~~ +
++++++++++++++++++++++++++
My View Controller (where I think the problem is; I've also removed all autoresizingMask settings to clean up the code):
- (void)viewWillAppear:(BOOL)animated {
    // get main screen bounds & adjust to include apple status bar
    CGRect frame = [[UIScreen mainScreen] applicationFrame];

    // get image - this code would be meaningless to show, but suffice to say it works
    UIImage *image = [.. custom method to get image from my image store ..];

    // custom method to resize image to screen; see method below
    image = [self resizeImageForScreen:image];

    // create imageView and give it basic setup
    imageView = [[UIImageView alloc] initWithImage:image];
    [imageView setUserInteractionEnabled:YES];
    [imageView setFrame:CGRectMake(0,
                                   0,
                                   image.size.width,
                                   image.size.height)];
    [imageView setCenter:CGPointMake(frame.size.width / 2,
                                     frame.size.height / 2)];
    [[self view] addSubview:imageView];

    // now put the Art subview on top of it
    // (customArtView is a subclass of UIView where I handle the drawing code)
    artView = [[customArtView alloc] initWithFrame:imageView.frame];
    [[self view] addSubview:artView];
}
The resizeImageForScreen: method (this seems to be working fine, but I figured I'd include it anyway):
- (UIImage *)resizeImageForScreen:(UIImage *)img {
    // get main screen bounds & adjust to include apple status bar
    CGRect frame = [[UIScreen mainScreen] applicationFrame];

    // get image
    UIImage *image = img;

    // resize image if needed
    if (image.size.width > frame.size.width || image.size.height > frame.size.height) {
        // Figure out a scaling ratio to make sure we maintain the same aspect ratio
        float ratio = MIN(frame.size.width / image.size.width, frame.size.height / image.size.height);
        CGRect newImageRect;
        newImageRect.size.width = ratio * image.size.width;
        newImageRect.size.height = ratio * image.size.height;
        newImageRect.origin.x = 0;
        newImageRect.origin.y = 0;

        // Create a transparent context at the newImageRect size (0.0 = use the screen's scale)
        UIGraphicsBeginImageContextWithOptions(newImageRect.size, NO, 0.0);

        // Draw the image within it (drawInRect will scale the image for us)
        [image drawInRect:newImageRect];

        // for now just re-assigning image since I don't need the big one
        image = UIGraphicsGetImageFromCurrentImageContext();

        // Cleanup image context resources; we're done!
        UIGraphicsEndImageContext();
    }
    return image;
}
There is no combination of autoresizing flags and content modes that will do what you want. Using viewDidLayoutSubviews is one reasonable way to handle the layout. That's why it exists: the autoresizing flags are pretty limited.
A different approach is to change your -[ArtView drawRect:] method so that the autoresizing flags can do what you want. You can make your drawRect: method implement the same algorithm that UIImageView implements for UIViewContentModeScaleAspectFit.
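For what it's worth, the aspect-fit math is short; a sketch of computing the fitted rect inside drawRect: might look like this (imageSize standing in for your background image's size):

// Sketch: compute the same centered rect that
// UIViewContentModeScaleAspectFit would produce for the image.
CGRect bounds = self.bounds;
CGFloat scale = MIN(bounds.size.width / imageSize.width,
                    bounds.size.height / imageSize.height);
CGSize fittedSize = CGSizeMake(imageSize.width * scale,
                               imageSize.height * scale);
CGRect fittedRect = CGRectMake(CGRectGetMidX(bounds) - fittedSize.width / 2.0f,
                               CGRectGetMidY(bounds) - fittedSize.height / 2.0f,
                               fittedSize.width,
                               fittedSize.height);
// Draw the arcs and lines relative to fittedRect instead of self.bounds,
// and the autoresizing flags can then stretch the Art view freely.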
I have a problem that I can't seem to fix. I am trying to take a screenshot of a UIScrollView (including off-screen content), but when the view is long, renderInContext doesn't capture all of the scroll view's contents. The produced image dimensions are correct, but the rendered data appears to be missing chunks of the display, leaving white space where those chunks should be. The missing blocks are from the content in a UIWebView, which I believe is set to "scaleToFit". It doesn't happen every time; it appears to only happen when the UIWebView's height is fairly large, which makes me think it has to do with the scaling of the UIWebView.
If I adjust the coreLayer.bounds CGRect below, I get different results: sometimes the missing blocks are at the bottom and sometimes they are in the middle of the image.
I started with the code from the accepted answer of this question and when I noticed the cutoff issue, I modified it to the following:
UIGraphicsBeginImageContext(scrollView.contentSize);
{
    CGPoint savedContentOffset = scrollView.contentOffset;
    CGRect savedFrame = scrollView.frame;

    // hide the scroll bars
    [scrollView setShowsHorizontalScrollIndicator:NO];
    [scrollView setShowsVerticalScrollIndicator:NO];

    scrollView.contentOffset = CGPointZero;
    scrollView.frame = CGRectMake(0, 0, scrollView.contentSize.width, scrollView.contentSize.height);

    // adjust layer for cut-off
    CALayer *coreLayer = scrollView.layer;
    coreLayer.bounds = CGRectMake(0, 0, scrollView.contentSize.width, scrollView.contentSize.height);
    [coreLayer renderInContext: UIGraphicsGetCurrentContext()];
    //[scrollView.layer renderInContext: UIGraphicsGetCurrentContext()];

    image = UIGraphicsGetImageFromCurrentImageContext();

    scrollView.contentOffset = savedContentOffset;
    scrollView.frame = savedFrame;

    // reset the scroll bars to default
    [scrollView setShowsHorizontalScrollIndicator:YES];
    [scrollView setShowsVerticalScrollIndicator:YES];
}
UIGraphicsEndImageContext();
The cut-off adjustment helped (it fixed some views), but the content still gets cut off when the UIScrollView is fairly long. I've been working on this for a while and can't seem to find a fix. Do you have any suggestions? Has anyone else encountered this issue?
Please help!
Is your scroll view a table view? If so, pretty much only the onscreen content actually exists due to cell reuse. Even if it's a regular scroll view, it's plausible that the OS is making optimizations by not rendering some offscreen elements of the scroll view. If that's true, you may be able to get this to work by programmatically scrolling one screenful at a time and rendering each of those into your context at the right position.
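A rough sketch of that page-by-page idea, assuming a vertically scrolling view (you may also need to let the run loop turn between pages so the offscreen content actually gets rendered; the variable names here are just placeholders):

// Sketch: scroll one screenful at a time, snapshot the visible area,
// and composite the pages into a single content-sized image.
UIGraphicsBeginImageContextWithOptions(scrollView.contentSize, NO, 0.0);
CGPoint savedOffset = scrollView.contentOffset;
CGFloat pageHeight = scrollView.bounds.size.height;
for (CGFloat y = 0.0f; y < scrollView.contentSize.height; y += pageHeight) {
    scrollView.contentOffset = CGPointMake(0.0f, y);
    // ...give the web view / reused cells a moment to render the new page here...

    UIGraphicsBeginImageContextWithOptions(scrollView.bounds.size, NO, 0.0);
    CGContextRef pageContext = UIGraphicsGetCurrentContext();
    // Shift so the currently visible content lands at the page context's origin.
    CGContextTranslateCTM(pageContext, -scrollView.contentOffset.x, -scrollView.contentOffset.y);
    [scrollView.layer renderInContext:pageContext];
    UIImage *page = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Back in the big context: place this page at its content position.
    [page drawAtPoint:CGPointMake(0.0f, y)];
}
UIImage *fullImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
scrollView.contentOffset = savedOffset;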
I want to let the user resize a UILabel with a pinch gesture. Using a CGAffineTransformScale alone doesn't do the job, because the text in the label becomes blurry when scaled up.
So what I'm doing is using CGAffineTransformScale just to show that it's scaling up, saving the frame size, resetting the transform, and finalizing the frame size. A simple switcheroo, but it works.
- (void)handlePinch:(UIPinchGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        startingTransform = self.transform;
    }

    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;

    if (recognizer.state == UIGestureRecognizerStateEnded) {
        CGRect endFrame = self.frame;
        self.transform = startingTransform;
        self.frame = endFrame;
    }
}
The end result of this is a resized frame for the UILabel. However, the text does not scale up to fit the label. Also, the adjustsFontSizeToFitWidth property only works for scaling DOWNWARDS, not upwards (reference). So what should I do to make my label scale up to fit the frame?
You're onto something already, I think. The adjustsFontSizeToFitWidth property will only adjust the font size down from whatever size it's set to... so what happens if your label's font size is set to something really big? Like vastly, hugely, mindbogglingly big compared to the possible frame size of your label?
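In other words, something along these lines (a sketch; label stands in for your UILabel, and the 500-point size is just an arbitrarily large ceiling):

// Sketch: start with a deliberately huge font and let UIKit shrink it to
// fit, so growing the frame effectively scales the text up as well.
label.font = [UIFont systemFontOfSize:500.0f];
label.adjustsFontSizeToFitWidth = YES;
label.numberOfLines = 1; // the shrink-to-fit behavior is single-line only
label.minimumScaleFactor = 0.05f; // (or minimumFontSize on older SDKs)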
Add it to a scroll view and, in the scroll view's delegate, specify the view for zooming:
http://developer.apple.com/library/ios/documentation/uikit/reference/uiscrollviewdelegate_protocol/Reference/UIScrollViewDelegate.html#//apple_ref/occ/intfm/UIScrollViewDelegate/viewForZoomingInScrollView:
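A minimal sketch of the delegate side (assuming the controller is the scroll view's delegate and self.imageView is the view you want zoomed):

// Sketch: return the zoomable subview; also set minimumZoomScale and
// maximumZoomScale on the scroll view so pinching has a range to work in.
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.imageView;
}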
To describe my project:
I have a rectangular UIImageView "frame" floating over a white layer. Inside the UIImageView, I'm successfully creating the illusion that it is showing a portion of a background image behind the white layer. You can drag the rectangle around, and it will "redraw" the image so that you can peer into what is behind the white. It's basically this code:
// whenever the frame is moved, update the CGRect frameRect and run this:
self.newCroppedImage = CGImageCreateWithImageInRect([bgImage.image CGImage], frameRect);
frame.image = [UIImage imageWithCGImage:self.newCroppedImage];
Anyhow, I also have a rotation gesture recognizer that allows the user to rotate the frame (and consequently the image rotates with it, because the CGRect sent to CGImageCreateWithImageInRect is still oriented at its original rotation). This breaks the illusion that you're looking through a window, because the image you see is rotated when only the frame should appear that way.
So ideally, I need to take the rotation of my frame and apply it to the image created from my bgImage. Does anyone have any clues or ideas on how I could apply this?
I suggest you take a different approach. Don't constantly create new images to put in your UIImageView. Instead, set up your view hierarchy like this:
White view
    "Hole" view (just a regular UIView)
        Image view
That is, the white view has the hole view as a subview. The hole view has the UIImageView as its subview.
The hole view must have its clipsToBounds property set to YES (you can set it with code or in your nib).
The image view should have its size set to the size of its image. This will of course be larger than the size of the hole view.
And this is very very important: the image view's center must be set to the hole view's center.
Here's the code I used in my test project to set things up. The white view is self.view. I start with the hole centered in the white view, and I set the image view's center to the hole view's center.
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];

    CGRect bounds = self.view.bounds;
    self.holeView.center = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
    self.holeView.clipsToBounds = YES;

    bounds = self.holeView.bounds;
    self.imageView.center = CGPointMake(CGRectGetMidX(bounds), CGRectGetMidY(bounds));
    self.imageView.bounds = (CGRect){ CGPointZero, self.imageView.image.size };
}
I also set the image view's size to the size of its image. You might want to set it to the size of the white view.
To pan and rotate the hole, I'm going to set holeView.transform. I'm not going to change holeView.frame or holeView.center. I have two instance variables, _holeOffset and _holeRotation, that I use to compute the transform. The trick to making it seem like a hole through the white view, revealing the image view, is to apply the inverse transform to the image view, undoing the effects of the hole view's transform:
- (void)updateTransforms {
    CGAffineTransform holeTransform = CGAffineTransformIdentity;
    holeTransform = CGAffineTransformTranslate(holeTransform, _holeOffset.x, _holeOffset.y);
    holeTransform = CGAffineTransformRotate(holeTransform, _holeRotation);
    self.holeView.transform = holeTransform;
    self.imageView.transform = CGAffineTransformInvert(holeTransform);
}
This trick of using the inverse transform on the subview only works if the center of the subview is at the center of its superview. (Technically the anchor points have to line up, but by default the anchor point of a view is its center.)
I put a UIPanGestureRecognizer on holeView. I configured it to send panGesture: to my view controller:
- (IBAction)panGesture:(UIPanGestureRecognizer *)sender {
    CGPoint offset = [sender translationInView:self.view];
    [sender setTranslation:CGPointZero inView:self.view];
    _holeOffset.x += offset.x;
    _holeOffset.y += offset.y;
    [self updateTransforms];
}
I also put a UIRotationGestureRecognizer on holeView. I configured it to send rotationGesture: to my view controller:
- (IBAction)rotationGesture:(UIRotationGestureRecognizer *)sender {
    _holeRotation += sender.rotation;
    sender.rotation = 0;
    [self updateTransforms];
}
I'm currently implementing an expanding timeline. When I pinch-zoom into the timeline, I need my drawn text to stay at the same relative locations on the UIView they're drawn on, inside the UIScrollView that handles the zooming (essentially like pins on Google Maps). However, I don't want to zoom vertically, so I apply a transform by overriding:
- (void)setTransform:(CGAffineTransform)newValue
{
    newValue.d = 1.0;
    [super setTransform:newValue];
}
This works great in keeping the timeline fixed vertically while allowing it to expand horizontally. However, I am drawing my text labels like this, in a method that runs when the view redraws after setNeedsDisplay:
for (int i = 1; i < 11; i++)
{
    CGRect newFrame = CGRectMake(i * (512.0/11.0) - (512.0/11.0/2.0), self.frame.size.height - 16.0, 512.0/11.0, 32.0);
    NSString *label = [NSString stringWithFormat:@"%d", i+1];
    [label drawInRect:newFrame withFont:[UIFont systemFontOfSize:14.0]];
}
This draws my text at the correct position in the scroll view and nearly works perfectly. However, because of my transform to keep the zooming view static vertically, the text expands horizontally but not vertically, and so it stretches out horribly. I can't seem to get the text to redraw at the correct aspect ratio. Using UILabels works; however, I am going to be rendering and manipulating upwards of 1,000 such labels, so I'd prefer to draw static images in drawRect or something similar.
I've tried changing the CGRect I'm drawing the text in (was worth a shot), and applying CGAffineTransformIdentity isn't possible because I'm already transforming the view to keep it from zooming vertically. I've also tried drawing the text in various Views to no avail, and again, I'd rather not populate an obscene amount of objects if I can avoid it.
Thanks for any help!
Instead of applying a transform inside the setTransform: method, I intercept the scale at which the view is being transformed and resize the frame of the view being transformed. The code (roughly) follows:
- (void)setTransform:(CGAffineTransform)newValue
{
    // The 'a' value of the transform is the transform's new scale of the view, which is reset after the zooming action completes.
    // newZoomScale should therefore be kept while zooming, and then zoomScale should be updated upon completion.
    _newZoomScale = _zoomScale * newValue.a;
    if (_newZoomScale < 1.0) {
        _newZoomScale = 1.0;
    }

    // Resize self
    self.frame = CGRectMake(self.frame.origin.x, self.frame.origin.y, _originalFrame.size.width * _newZoomScale, self.frame.size.height);
}
As mentioned in the comments, the transform value of the CGAffineTransform is reset each time a new zooming action occurs (however, it is kept for the duration of the zooming action). So I keep two instance variables in my UIView subclass (not sure if it's incredibly elegant, but it's not insanely terrible): the original frame that the view was instantiated with, and the "current" zoom scale of the view (prior to the current zooming action).
The _originalFrame is what is referenced to determine the proper size of the now-zoomed frame, and _zoomScale (the scale of the view prior to the current zooming action) is set to the value of _newZoomScale when the didFinishZooming callback is called in the UIScrollView containing this UIView.
All of this allows the coordinate system of the UIView to remain untransformed during zooming, so text, etc. may be drawn on the view without any distortion. Looking back at this solution, I'd wager a guess that you could also account for the transform and draw based on a stretched coordinate system; I'm not sure which is more effective. I had a slight concern about not calling super in setTransform:, but I haven't noticed any ill effects after about 6 months of use and development.
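For completeness, the callback mentioned above looks roughly like this (a sketch; self.timelineView and its zoomScale/newZoomScale properties are stand-ins for however you expose the two instance variables to the scroll view's delegate):

// Sketch: fold the finished pinch into the persistent zoom scale,
// then ask the timeline view to redraw its labels at the new width.
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView
                       withView:(UIView *)view
                        atScale:(CGFloat)scale
{
    self.timelineView.zoomScale = self.timelineView.newZoomScale;
    [self.timelineView setNeedsDisplay];
}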