Scaling custom draw code for different iOS resolutions - iPhone

I am struggling to get my custom drawing code to render at the proper scale for all iOS devices, i.e., older iPhones, those with retina displays and the iPad.
I have a subclass of UIView that has a custom class that displays a vector graphic. It has a scale property that I can set. I do the scaling in initWithCoder when the UIView loads and I first instantiate the vector graphic. This UIView is shown when the user taps a button on the home screen.
At first I tried this:
screenScaleFactor = 1.0;
if ([UIScreen instancesRespondToSelector:@selector(scale)]) {
    screenScaleFactor = [[UIScreen mainScreen] scale];
}
// and then I multiply stuff by screenScaleFactor
... which worked for going between normal iPhones and retina iPhones, but chokes on the iPad. As I said, you can get to the UIView at issue by tapping a button on the home screen. When run on the iPad, if you display the UIView at 1X it works, but at 2X I get a vector graphic that is twice as big as it should be.
So I tried this instead:
UPDATE: This block is the one that's right. (with the corrected spelling, of course!)
screenScaleFactor = 1.0;
if ([self respondsToSelector:@selector(contentScaleFactor)]) { // EDIT: corrected misspelling.
    screenScaleFactor = (float)self.contentScaleFactor;
}
// again multiplying stuff by screenScaleFactor
Which works at both 1X and 2X on the iPad and on the older iPhones, but on a retina display, the vector graphic is half the size it should be.
In the first case, I query the UIScreen for its scale property, and in the second case, I ask the parent view of the vector graphic for its contentScaleFactor. Neither of these gets me where I want in all cases.
Any suggestions?
UPDATE:
Here's the method in my subclassed UIView (it's called a GaugeView):
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw: context];
}
needle is of class VectorSprite which is a subclass of Sprite which is subclassed from NSObject. These are from a programming book I'm working through. needle has the scale property that I set.
updateBox comes from Sprite and looks like this:
- (void) updateBox {
    CGFloat w = width*scale;
    CGFloat h = height*scale;
    CGFloat w2 = w*0.5;
    CGFloat h2 = h*0.5;
    CGPoint origin = box.origin;
    CGSize bsize = box.size;
    CGFloat left = -kScreenHeight*0.5;
    CGFloat right = -left;
    CGFloat top = kScreenWidth*0.5;
    CGFloat bottom = -top;
    offScreen = NO;
    if (wrap) {
        if ((x+w2) < left) x = right + w2;
        else if ((x-w2) > right) x = left - w2;
        else if ((y+h2) < bottom) y = top + h2;
        else if ((y-h2) > top) y = bottom - h2;
    }
    else {
        offScreen =
            ((x+w2) < left) ||
            ((x-w2) > right) ||
            ((y+h2) < bottom) ||
            ((y-h2) > top);
    }
    origin.x = x-w2*scale;
    origin.y = y-h2*scale;
    bsize.width = w;
    bsize.height = h;
    box.origin = origin;
    box.size = bsize;
}
Sprite also has the draw and drawBody methods which are:
- (void) draw: (CGContextRef) context {
    CGContextSaveGState(context);
    // Position the sprite
    CGAffineTransform t = CGAffineTransformIdentity;
    t = CGAffineTransformTranslate(t,x,y);
    t = CGAffineTransformRotate(t,rotation);
    t = CGAffineTransformScale(t,scale,scale);
    CGContextConcatCTM(context, t);
    // draw sprite body
    [self drawBody: context];
    CGContextRestoreGState(context);
}
- (void) drawBody: (CGContextRef) context {
    // Draw your sprite here, centered on (x,y).
    // As an example, we draw a filled white circle
    if (alpha < 0.05) return;
    CGContextBeginPath(context);
    CGContextSetRGBFillColor(context, r,g,b,alpha);
    CGContextAddEllipseInRect(context, CGRectMake(-width/2,-height/2,width,height));
    CGContextClosePath(context);
    CGContextDrawPath(context,kCGPathFill);
}

How, exactly, are you rendering the graphic?
This should be handled automatically in drawRect: (the context you get should already be 2x). It should also be handled automatically by UIGraphicsBeginImageContextWithOptions(size,NO,0); if available (if you need to fall back to UIGraphicsBeginImageContext(), assume a scale of 1). You shouldn't need to worry about it unless you're drawing the bitmap yourself somehow.
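A minimal sketch of that fallback (the size here is just an example; the NULL check assumes the symbol is weak-linked, as it would be with a pre-iOS 4 deployment target):
CGSize size = CGSizeMake(150.0, 150.0); // example size, not from the question
if (UIGraphicsBeginImageContextWithOptions != NULL) {
    // A scale of 0 means "use the device's screen scale".
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
} else {
    UIGraphicsBeginImageContext(size); // always behaves as scale 1
}
// ... draw into the context here ...
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();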
You could try something like self.contentScaleFactor = [[UIScreen mainScreen] scale], with appropriate checks first (this might mean that if you display it on an iPad at 2x, you'll get high-res graphics).
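A hedged sketch of that suggestion, with availability checks so it degrades gracefully on older systems:
// Inside the UIView subclass: render at the screen's native scale.
if ([self respondsToSelector:@selector(setContentScaleFactor:)] &&
    [[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    self.contentScaleFactor = [[UIScreen mainScreen] scale];
}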
Fundamentally, there's not much difference between an iPad in 2x mode and a "retina display", except that the iPad can switch between 1x and 2x.
Finally, there's a typo: @selector(contentsScaleFactor) has an extra s.

Related

iPhone correct landscape window coordinates

I am trying to get the window coordinates of a table view using the following code:
[self.tableView.superview convertRect:self.tableView.frame toView:nil]
It reports the correct coordinates while in portrait mode, but when I rotate to landscape it no longer reports correct coordinates. First off, it flips the x, y coordinates and the width and height. That's not really the problem though. The real problem is that the coordinates are incorrect. In portrait the window coordinates for the table view's frame are {{0, 114}, {320, 322}}, while in landscape the window coordinates are {{32, 0}, {204, 480}}. Obviously the x-value here is incorrect, right? Shouldn't it be 84? I'm looking for a fix to this problem, and if anybody knows how to get the correct window coordinates of a view in landscape mode, I would greatly appreciate it if you would share that knowledge with me.
Here are some screenshots so you can see the view layout.
Portrait: http://i.stack.imgur.com/IaKJc.png
Landscape: http://i.stack.imgur.com/JHUV6.png
I've found what I believe to be the beginnings of the solution. It seems the coordinates you and I are seeing are based on the bottom left or top right, depending on whether the orientation is UIInterfaceOrientationLandscapeRight or UIInterfaceOrientationLandscapeLeft.
I don't know why yet, but hopefully that helps. :)
[UPDATE]
So I guess the origin of the window is 0,0 in normal portrait mode, and rotates with the iPad/iPhone.
So here's how I solved this.
First I grab my orientation, window bounds and the rect of my view within the window (with the wonky coordinates)
UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];
CGRect windowRect = appDelegate.window.bounds;
CGRect viewRectAbsolute = [self.guestEntryTableView convertRect:self.guestEntryTableView.bounds toView:nil];
Then if the orientation is landscape, I reverse the x and y coordinates and the width and height
if (UIInterfaceOrientationLandscapeLeft == orientation || UIInterfaceOrientationLandscapeRight == orientation) {
    windowRect = XYWidthHeightRectSwap(windowRect);
    viewRectAbsolute = XYWidthHeightRectSwap(viewRectAbsolute);
}
Then I call my function to fix the origin so it's based on the top left no matter the rotation of the iPad/iPhone. It adjusts the origin depending on where 0,0 currently lives for the given orientation:
viewRectAbsolute = FixOriginRotation(viewRectAbsolute, orientation, windowRect.size.width, windowRect.size.height);
Here are the two functions I use
CGRect XYWidthHeightRectSwap(CGRect rect) {
    CGRect newRect;
    newRect.origin.x = rect.origin.y;
    newRect.origin.y = rect.origin.x;
    newRect.size.width = rect.size.height;
    newRect.size.height = rect.size.width;
    return newRect;
}
CGRect FixOriginRotation(CGRect rect, UIInterfaceOrientation orientation, int parentWidth, int parentHeight) {
    CGRect newRect;
    switch (orientation) {
        case UIInterfaceOrientationLandscapeLeft:
            newRect = CGRectMake(parentWidth - (rect.size.width + rect.origin.x), rect.origin.y, rect.size.width, rect.size.height);
            break;
        case UIInterfaceOrientationLandscapeRight:
            newRect = CGRectMake(rect.origin.x, parentHeight - (rect.size.height + rect.origin.y), rect.size.width, rect.size.height);
            break;
        case UIInterfaceOrientationPortrait:
            newRect = rect;
            break;
        case UIInterfaceOrientationPortraitUpsideDown:
            newRect = CGRectMake(parentWidth - (rect.size.width + rect.origin.x), parentHeight - (rect.size.height + rect.origin.y), rect.size.width, rect.size.height);
            break;
    }
    return newRect;
}
This is a hack, but it works for me:
UIView *toView = [UIApplication sharedApplication].keyWindow.rootViewController.view;
[self.tableView convertRect:self.tableView.bounds toView:toView];
I am not sure this is the best solution. It may not work reliably if your root view controller doesn't support the same orientations as the current view controller.
You should be able to get the current table view coordinates from self.tableView.bounds
Your code should be:
[tableView convertRect:tableView.bounds toView:[UIApplication sharedApplication].keyWindow];
That will give you the view's rectangle in the window's coordinate system. Be sure to use bounds and not frame: frame is the view's rectangle in its parent view's coordinate system, while bounds is the view's rectangle in its own system. So the code above asks the table view to convert its own rectangle from its own system into the window's system. Your previous code was asking the table's parent view to convert the table's rectangle from the parent coordinate system to nil (no explicit destination view).
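To make the frame/bounds distinction concrete, a small sketch using the portrait numbers from the question (the commented values assume that exact layout):
CGRect f = self.tableView.frame;  // {{0, 114}, {320, 322}} - in the superview's coords
CGRect b = self.tableView.bounds; // {{0, 0}, {320, 322}} - in the table view's own coords
CGRect inWindow = [self.tableView convertRect:b
                                       toView:[UIApplication sharedApplication].keyWindow];
NSLog(@"frame %@ bounds %@ window %@",
      NSStringFromCGRect(f), NSStringFromCGRect(b), NSStringFromCGRect(inWindow));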
Try bounds instead of frame:
self.parentViewController.view.bounds
It gives me adjusted coords according to the current orientation.

Draw line or rectangle on cocos2d Layer

Can you let me know the best way to draw a line or rectangle on a scene layer using cocos2d for iOS 4 on iPhone?
So far I have tried Texture2D, but it is more like a paint brush and is not so good. I also tried drawing a line in the draw method, but the previous line disappears on drawing another line.
Basically I want to draw multiple horizontal, vertical, and oblique beams. Please suggest; any code would help a lot.
The code to draw using texture is below:
CGPoint start = edge.start;
CGPoint end = edge.end;
// begin drawing to the render texture
[target begin];
// for extra points, we'll draw this smoothly from the last position and vary the sprite's
// scale/rotation/offset
float distance = ccpDistance(start, end);
if (distance > 1)
{
    int d = (int)distance;
    for (int i = 0; i < d; i++)
    {
        float difx = end.x - start.x;
        float dify = end.y - start.y;
        float delta = (float)i / distance;
        [brush setPosition:ccp(start.x + (difx * delta), start.y + (dify * delta))];
        [brush setScale:0.3];
        // Call visit to draw the brush, don't call draw..
        [brush visit];
    }
}
// finish drawing and return context back to the screen
[target end];
The rendering is not good, especially with oblique lines, as the scaling affects the quality.
Cheers
You could create a separate layer and call the draw method like this:
-(void) draw
{
    CGSize s = [[Director sharedDirector] winSize];
    drawCircle(ccp(s.width/2, s.height/2), circleSize, 0, 50, NO);
}
It's for a circle but the principle is the same. This is from a project I made a while back and it worked then. Don't know if anything has changed since.
You need to add a draw method to your layer:
-(void) draw {
    // ...
}
Inside it you can use OpenGL-like functions and cocos2d wrapper methods for OpenGL.
Hint: other methods can be called inside the draw method. But keep in mind that putting OpenGL instructions in a method that isn't called from this draw method simply won't work, even when it's called from an update method or another method scheduled via scheduleUpdate.
So you will end up with something like this:
-(void) draw {
    glEnable(GL_LINE_SMOOTH);
    glColor4ub(255, 0, 100, 255);
    glLineWidth(4);
    CGPoint verts[] = { ccp(0,200), ccp(300,200) };
    ccDrawLine(verts[0], verts[1]);
    [self drawSomething];
    [self drawSomeOtherStuffFrom:ccp(a,b) to:ccp(c,d)];
    [someObject doSomeDrawingAsWell];
}
For more information check out the cocos2d-iphone programming guide:
http://www.cocos2d-iphone.org/wiki/doku.php/prog_guide:draw_update?s[]=schedule#draw
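As for the "previous line disappears" part of the question: cocos2d re-renders the scene every frame, so the layer has to re-issue every segment on each draw. A minimal sketch, assuming a hypothetical lines NSMutableArray ivar that you append a pair of NSValue-wrapped CGPoints to whenever a beam is finished:
// `lines` is a hypothetical NSMutableArray ivar holding NSValue-wrapped
// CGPoint pairs; add two points per finished beam, e.g.:
//   [lines addObject:[NSValue valueWithCGPoint:start]];
//   [lines addObject:[NSValue valueWithCGPoint:end]];
-(void) draw {
    glColor4ub(255, 0, 100, 255);
    glLineWidth(2);
    // Re-issue every stored segment each frame so none of them vanish.
    for (NSUInteger i = 0; i + 1 < [lines count]; i += 2) {
        CGPoint a = [[lines objectAtIndex:i] CGPointValue];
        CGPoint b = [[lines objectAtIndex:i + 1] CGPointValue];
        ccDrawLine(a, b);
    }
}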

My vector sprite renders in different locations in simulator and device

I'm implementing a subclass of UIView that displays a gauge dial with a sprite for the indicator. It has an angle property that I can vary to make the needle point to different angles. It works, but the same values for the position of the needle make it show up in different locations on the phone and in the simulator. It's an iPhone 4, so I'm sure the double-resolution thing is behind this, but I don't know what to do about it. I tried setting the UIView's layer's contentScaleFactor, but that fails. I thought UIView got the resolution thing for free. Any suggestions?
I should note that the NSLog statements report 150 for both .frame.size dimensions, in both the simulator and the device.
Here's the .m file
UPDATE: In the simulator, I found how to set the hardware to iPhone 4, and it looks just like the device now, both are scaling and positioning the sprite at half size.
UPDATE 2: I made a workaround. I set the .scale of my sprite equal to the UIView's contentScaleFactor, and then use it to divide the UIView's width in half if it's a lo-res screen, or use the full width if it's hi-res. I still don't see why this is necessary, as I should be working in points now, not pixels. It must have something to do with the custom drawing code in the Sprite or VectorSprite classes.
I'd still appreciate some feedback if anyone has some...
#import "GaugeView.h"
#implementation GaugeView
#synthesize needle;
#define kVectorArtCount 4
static CGFloat kVectorArt[] = {
3,-4,
2,55,
-2,55,
-3,-4
};
- (id)initWithCoder:(NSCoder *)coder {
    if (self = [super initWithCoder:coder]) {
        needle = [VectorSprite withPoints:kVectorArt count:kVectorArtCount];
        needle.scale = (float)self.contentScaleFactor; // returns 1 for lo-res, 2 for hi-res
        NSLog(@" needle.scale = %1.1f", needle.scale);
        needle.x = self.frame.size.width / ((float)(-self.contentScaleFactor) + 3.0); // divisor = 1 for hi-res, 2 for lo-res
        NSLog(@" needle.x = %1.1f", needle.x);
        needle.y = self.frame.size.height / ((float)(-self.contentScaleFactor) + 3.0);
        NSLog(@" needle.y = %1.1f", needle.y);
        needle.r = 0.0;
        needle.g = 0.0;
        needle.b = 0.0;
        needle.alpha = 1.0;
    }
    self.backgroundColor = [UIColor clearColor];
    return self;
}
- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Initialization code
    }
    return self;
}

// Only override drawRect: if you perform custom drawing.
// An empty implementation adversely affects performance during animation.
- (void)drawRect:(CGRect)rect {
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);
    [needle updateBox];
    [needle draw: context];
}

- (void)dealloc {
    [needle release];
    [super dealloc];
}

@end
I believe the answer is that iOS takes care of the resolution scaling automatically in drawRect: methods, but in custom drawing code you have to do it yourself.
In my example, I used the UIView's contentScaleFactor to scale my sprite. In the future, in my custom draw method (not shown), I'll query [[UIScreen mainScreen] scale] and scale accordingly there.
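For example, a minimal sketch of that plan, reusing the needle sprite from the question (the fallback to 1.0 covers pre-retina systems where -scale doesn't exist):
// In custom drawing code (outside drawRect:'s already-scaled context),
// query the screen scale by hand; fall back to 1.0 if -scale is missing.
CGFloat screenScale = 1.0;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    screenScale = [[UIScreen mainScreen] scale];
}
needle.scale = screenScale; // the sprite's scale property from the question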

setNeedsDisplayInRect: paints a white rectangle only

I'm still a little fresh to CoreGraphics programming, so please bear with me. I'm trying to write an application, which allows the user to rub stuff off an image with the finger. I have the basic functionality nailed down, but the result is sluggish since the screen is redrawn completely every time a touch is rendered. I did some research and found out that I can refresh only a portion of the screen using UIView's setNeedsDisplayInRect: method.
This does call drawRect: as expected; however, everything I draw in drawRect: following the setNeedsDisplayInRect: is ignored. Instead, the area in the rect parameter is simply filled with white. No matter what I draw inside, all I end up with is a white rectangle.
In essence, this is what I do:
1) when user touches screen, this touch is rendered into a mask
2) when the drawRect: is called, the image is masked with that mask
There must be something simple I'm overlooking, surely?
I found a solution, however it still escapes me how exactly this works. Here's the code:
This method flips the given rectangle in the same manner, in which the coordinate transformation in the context flips the context coordinate system:
- (CGRect) flippedRect:(CGRect)rect
{
    CGRect flippedRect = rect;
    flippedRect.origin.y = self.bounds.size.height - rect.origin.y - rect.size.height;
    return CGRectIntersection( self.bounds, flippedRect );
}
This calculates the rectangle to be updated from the touch location. Note that the rectangle gets flipped:
- (CGRect) updateRectFromTouch:(UITouch *)touch
{
    CGPoint location = [touch locationInView:self];
    int d = RubbingSize;
    CGRect touchRect = [self flippedRect:CGRectMake( location.x - d, location.y - d, 2*d, 2*d )];
    return CGRectIntersection( self.frame, touchRect );
}
In renderTouch: the "flipped" update rectangles are combined:
- (void) renderTouch:(UITouch *)touch
{
    //
    // Code to render into the mask here
    //
    if ( m_updateRect.size.width == 0 )
    {
        m_updateRect = [self updateRectFromTouch:touch];
    }
    else
    {
        m_updateRect = CGRectUnion( m_updateRect, [self updateRectFromTouch:touch] );
    }
}
The whole view is refreshed at about 20 Hz during the finger-painting process. The following method is called every 1/20th of a second and submits the rectangle for rendering:
- (void) refreshScreen
{
    if ( m_updateRect.size.width > 0 )
    {
        [self setNeedsDisplayInRect:[self flippedRect:m_updateRect]];
    }
}
Here's a helper method to compare two rectangles:
BOOL rectIsEqualTo(CGRect a, CGRect b)
{
    return a.origin.x == b.origin.x && a.origin.y == b.origin.y &&
           a.size.width == b.size.width && a.size.height == b.size.height;
}
In the drawRect: method, the update rectangle is used to draw only the portion that needs updating.
- (void)drawRect:(CGRect)rect
{
    BOOL drawFullScreen = rectIsEqualTo( rect, self.frame );
    // Drawing code
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Turn coordinate system around
    CGContextTranslateCTM( context, 0.0, self.frame.size.height );
    CGContextScaleCTM( context, 1.0, -1.0 );
    if ( drawFullScreen )
    {
        // draw the full thing
        CGContextDrawImage( context, self.frame, self.image );
    }
    else
    {
        CGImageRef partialImage = CGImageCreateWithImageInRect( self.image, [self flippedRect:m_updateRect] );
        CGContextDrawImage( context, m_updateRect, partialImage );
        CGImageRelease( partialImage );
    }
    ...
    // Reset update box
    m_updateRect = CGRectZero;
}
If someone can explain to me why the flipping works, I'd appreciate it.
The flipping issue is most likely because UIKit uses a "flipped" coordinate system compared to Core Graphics.
Apple documentation here
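To make that concrete: UIKit's origin is top-left with y increasing downward, while the CTM set up in the drawRect: above makes the context origin bottom-left with y increasing upward. Mirroring a rect's y-origin about the view height converts between the two, which is all flippedRect: is doing. A small sketch (the input rect here is a made-up example):
// Mirror a top-left-origin (UIKit) rect into the bottom-left-origin space
// of the flipped context. With a view height of 480, a 100pt-tall rect at
// y = 60 lands at y = 480 - 60 - 100 = 320.
CGFloat height = self.bounds.size.height;
CGRect uikitRect = CGRectMake( 0, 60, 100, 100 ); // hypothetical example rect
CGRect flipped = uikitRect;
flipped.origin.y = height - uikitRect.origin.y - uikitRect.size.height;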

UIScrollView zoomToRect not zooming to given rect (created from UITouch CGPoint)

My application has a UIScrollView with one subview. The subview is an extended UIView which prints a PDF page to itself using layers in the drawLayer event.
Zooming using the built in pinching works great. setZoomScale also works as expected.
I have been struggling with the zoomToRect function. I found an example online which makes a CGRect zoomRect variable from a given CGPoint.
In the touchesEnded function, if there was a double tap while all the way zoomed out, I want to zoom in to that PDFUIView I created, as though they were pinching out with the center of the pinch where they double tapped.
So assume that I pass the UITouch variable to my function which utilizes zoomToRect if they double tap.
I started with the following function I found on Apple's site:
http://developer.apple.com/iphone/library/documentation/WindowsViews/Conceptual/UIScrollView_pg/ZoomZoom/ZoomZoom.html
The following is a modified version for my UIScrollView extended class:
- (void)zoomToCenter:(float)scale withCenter:(CGPoint)center {
    CGRect zoomRect;
    zoomRect.size.height = self.frame.size.height / scale;
    zoomRect.size.width = self.frame.size.width / scale;
    zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
    zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);
    //return zoomRect;
    [self zoomToRect:zoomRect animated:YES];
}
When I do this, the UIScrollView seems to zoom using the bottom right edge of the zoomRect above and not the center.
If I make a UIView like this:
UIView *v = [[UIView alloc] initWithFrame:zoomRect];
[v setBackgroundColor:[UIColor redColor]];
[self addSubview:v];
The red box shows up with the touch point dead in the center.
Please note: I am writing this from my PC. I recall messing around with the divided-by-two part on my Mac, so just assume that this draws a rect with the touch point in the center. If the UIView drew off center but zoomed to the right spot, it would be all good.
However, when it performs the zoomToRect it seems to use the bottom right of the zoomRect as the top left of the zoomed-in result.
Also, I noticed that depending on where I click on the UIScrollView, it anchors to different spots. It almost seems like there is a cross down the middle and it's reflecting the points somehow, as though anywhere left of the middle is a negative reflection and anywhere right of the middle is a positive reflection.
This seems too complicated; shouldn't it just zoom to the rect exactly as the UIView was able to draw it?
I did a lot of research to figure out how to create a PDF that scales in high quality, so I am assuming that using the CALayer may be throwing off the coordinate system. But the UIScrollView should just treat it as a view with 768x985 dimensions.
This is sort of advanced; please assume the code for creating the zoomRect is all good. There is something deeper going on with the CALayer in the UIView, which is in the UIScrollView...
Ok another answer:
The apple supplied routine works for me, but you need to have the gesture recognizer convert the tap point to the imageView coords - not to the scroller.
Apple's example does this, but since our app works differently (we change the UIImageView), the gesture recognizer was set up on the UIScrollView - which works fine, but you need to do this in handleDoubleTap:
This is loosely based on the Apple example code "TaptoZoom", but as I said, we needed our gesture recognizer hooked up to the scroll view.
- (void)handleDoubleTap:(UIGestureRecognizer *)gestureRecognizer {
    // double tap zooms in
    [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(handleSingleTap:) object:nil];
    float newScale = [imageScrollView zoomScale] * 1.5;
    // Note we need to get the location of the tap in the imageView coords, not the imageScrollView
    CGRect zoomRect = [self zoomRectForScale:newScale withCenter:[gestureRecognizer locationInView:imageView]];
    [imageScrollView zoomToRect:zoomRect animated:YES];
}
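For completeness, a sketch of how that recognizer might be attached to the scroll view; the setup isn't shown in the answer, so the wiring below is an assumption (pre-ARC, matching the era of this code):
// Hook the double-tap recognizer to the scroll view; imageScrollView is
// the same scroll view used in the answer above.
UITapGestureRecognizer *doubleTap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handleDoubleTap:)];
doubleTap.numberOfTapsRequired = 2;
[imageScrollView addGestureRecognizer:doubleTap];
[doubleTap release];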
Declare BOOL isZoom; in .h
-(void)handleDoubleTap:(UIGestureRecognizer *)recognizer {
    if (isZoom) {
        CGPoint pointInView = [recognizer locationInView:self];
        CGFloat newZoomScale = 3.0;
        newZoomScale = MIN(newZoomScale, self.maximumZoomScale);
        CGSize scrollViewSize = self.bounds.size;
        CGFloat w = scrollViewSize.width / newZoomScale;
        CGFloat h = scrollViewSize.height / newZoomScale;
        CGFloat x = pointInView.x - (w / 2.0);
        CGFloat y = pointInView.y - (h / 2.0);
        CGRect rectToZoom = CGRectMake(x, y, w, h);
        [self zoomToRect:rectToZoom animated:YES];
        [self setZoomScale:3.0 animated:YES];
        isZoom = NO;
    }
    else {
        [self setZoomScale:1.0 animated:YES];
        isZoom = YES;
    }
}
I've noticed that the Apple code you're using doesn't zoom properly if the image starts at a zoomScale of less than 1, because the zoomRect origin is incorrect. I edited it to work correctly. Here's the code:
- (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center {
    CGRect zoomRect;
    // The zoom rect is in the content view's coordinates.
    // At a zoom scale of 1.0, it would be the size of the imageScrollView's bounds.
    // As the zoom scale decreases, so more content is visible, the size of the rect grows.
    zoomRect.size.height = [self frame].size.height / scale;
    zoomRect.size.width = [self frame].size.width / scale;
    // Choose an origin so as to get the right center.
    zoomRect.origin.x = (center.x * (2 - self.minimumZoomScale) - (zoomRect.size.width / 2.0));
    zoomRect.origin.y = (center.y * (2 - self.minimumZoomScale) - (zoomRect.size.height / 2.0));
    return zoomRect;
}
The key is multiplying the center value by (2 - self.minimumZoomScale).
Hope this helps.
In my case it was:
zoomRect.origin.x = center.x / self.zoomScale - (zoomRect.size.width / 2.0);
zoomRect.origin.y = center.y / self.zoomScale - (zoomRect.size.height / 2.0);
extension UIScrollView {
    func getRectForVisibleView() -> CGRect {
        var visibleRect: CGRect = .zero
        visibleRect.origin = self.contentOffset
        visibleRect.size = self.bounds.size
        let theScale = 1.0 / self.zoomScale
        visibleRect.origin.x *= theScale
        visibleRect.origin.y *= theScale
        visibleRect.size.width *= theScale
        visibleRect.size.height *= theScale
        return visibleRect
    }

    func moveToRect(rect: CGRect) {
        let scale = self.bounds.width / rect.width
        self.zoomScale = scale
        self.contentOffset = .init(x: rect.origin.x * scale, y: rect.origin.y * scale)
    }
}
I had something similar and it was because I didn't adjust the center.x and center.y values by dividing them by the scale also (using center.x/scale and center.y/scale). Maybe I'm not reading your code right.
I am having the same behavior and it is quite frustrating. The rectangle being fed to the UIScrollView is perfect, yet no matter what I do, anything that involves changing the zoomScale programmatically always zooms and scales to coordinate 0,0.
I have tried just changing the zoomScale, I've tried zoomToRect, I have tried them all, and in every one, the minute I touch the zoomScale in code, it goes to coordinate 0,0.
I also had to add an explicit setContentSize for the resized image in the scroll view after a zooming operation, or otherwise I cannot scroll after a zoom or pinch.
Is this a bug in 3.1.3 or what?
I have tried different solutions, but this looks like the best one. It is really straightforward and conceptually simple:
CGRect frame = [[UIScreen mainScreen] applicationFrame];
scrollView.contentInset = UIEdgeInsetsMake(frame.size.height/2,
                                           frame.size.width/2,
                                           frame.size.height/2,
                                           frame.size.width/2);
I disagree with one of the comments above saying that you should never multiply the center's coordinates by some factor.
Say that you are currently displaying an entire 400x400px image or PDF file in a 100x100 scroll view and want to allow the users to double the size of the content until it's 1:1.
If you double tap at point (75,75), you expect the zoomed-in rectangle to have origin 100,100 and size 100x100 within the new 200x200 content view. So the original tapping point (75,75) is now (150,150) in the new 200x200 space.
Now, after zoom action #1 has completed, if you again double tap at (75,75) inside the new 100x100 rectangle (which is the bottom-right square of the larger 200x200 rectangle), you expect the user to be shown the bottom-right 100x100 square of the larger image, which would now become zoomed to 400x400 pixels.
In order to calculate the origin of this latest 100x100 rectangle within the larger 400x400 rectangle, you would need to consider the scale and current content offset (since before this last zoom action we were displaying the bottom-right 100x100 rectangle within a 200x200 content rectangle).
So the x coordinate of the final rectangle becomes:
center.x/currentScale - (scrollView.frame.size.width/2) + scrollView.contentOffset.x/currentScale
= 75/.5 - 100/2 + 100/.5 = 150 - 50 + 200 = 300.
In this case, being a square, the calculation for the y coordinate is the same.
And we did indeed zoom in the bottom-right 100x100 rectangle, which, in the larger 400x400 content view has origin 300,300.
So here is how you would calculate the zoom rectangle's size and origin:
zoomRect.size.height = mScrollView.frame.size.height/scale;
zoomRect.size.width = mScrollView.frame.size.width/scale;
zoomRect.origin.x = center.x/currentScale - (mScrollView.frame.size.width/2) + mScrollView.contentOffset.x/currentScale;
zoomRect.origin.y = center.y/currentScale - (mScrollView.frame.size.height/2) + mScrollView.contentOffset.y/currentScale;
Hope this made sense; it's hard to explain it in writing without sketching out the various squares/rectangles.
Cheers,
Raf Colasante