I'm asking my ViewController for its view.center property and drawing a new UIView that centers itself around that "center". I get (160, 250) as a response, but when the new UIView draws, it sits below center. So I'm wondering where this value comes from and what it relates to. This is clearly the center of the view relative to the WINDOW once you take into account the 20px height of the status bar, which pushes the center of the view down 10px. But when drawing myView, it appears to be positioned relative to ViewController.view and not the window, so it ends up 20px below center.
I would expect the ViewController to give me its center as (160, 230) so I could draw in its center. Do I need to manually account for the status bar and subtract 20 from the height every time, or is there some view-space conversion I'm overlooking? From ViewController.m:
- (void)setUpMyView {
    // Create my view
    MyView *aView = [[MyView alloc] init];
    self.myView = aView;
    [aView release];

    myView.center = self.view.center;
    NSLog(@"CenterX: %f, Y: %f", self.view.center.x, self.view.center.y);

    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);
    myView.transform = transform;
    [self.view addSubview:myView];
}
console: CenterX: 160.000000, Y: 250.000000
It's not "lying" actually, it's giving you an answer in a different coordinate system.
When you get or set a view's center, it is calculated relative to its parent's coordinate system. self.view's parent is the application's window, hence its center is (160, 250) as you surmised. But myView's parent is self.view, which has its own local coordinate system. In this case, that coordinate system is 20 pixels lower than that of the window.
What you want is to find the center of self.myView in its own coordinate system. There are two ways you could do it.
1) You could calculate it based on the bounds property, which is a CGRect that specifies the boundaries of a view in its own coordinate system:
myView.center = CGPointMake(self.view.bounds.size.width / 2,
                            self.view.bounds.size.height / 2);
2) Or you could use UIView's convertPoint:fromView: method to convert the coordinate from the window's coordinate system into that of self.view:
// Pass nil as the source view to convert from the window's coordinate system
myView.center = [self.view convertPoint:self.view.center fromView:nil];
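For completeness, here is a minimal sketch of the question's setUpMyView with option 1 applied (same MyView class and myView property as above; pre-ARC memory management kept as in the original):

- (void)setUpMyView {
    MyView *aView = [[MyView alloc] init];
    self.myView = aView;
    [aView release];

    // Center myView in self.view's own coordinate system (use bounds, not center)
    myView.center = CGPointMake(self.view.bounds.size.width / 2,
                                self.view.bounds.size.height / 2);

    myView.transform = CGAffineTransformMakeRotation(M_PI_2);
    [self.view addSubview:myView];
}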
Maybe add it as a subview before you ask for its center and size.
MyView *aView = [[MyView alloc] init];
self.myView = aView;
[aView release];

[self.view addSubview:myView]; // add it as a subview first, before asking for the center
myView.center = self.view.center;
NSLog(@"CenterX: %f, Y: %f", self.view.center.x, self.view.center.y);

CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);
myView.transform = transform;
I think I know the answer. Your view's center is correct!
You forgot that the iPhone status bar sits 20px down from the top of the screen, and that offset is included in your view's center. Bingo.
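If you would rather not hard-code the 20px, you could read the status bar frame at runtime; a small sketch, with the caveat that the status bar frame is reported in screen coordinates and so swaps width and height in landscape (hence the MIN):

CGRect statusBarFrame = [UIApplication sharedApplication].statusBarFrame;
CGFloat statusBarHeight = MIN(statusBarFrame.size.width, statusBarFrame.size.height);
NSLog(@"Status bar height: %f", statusBarHeight);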
Related
I am trying to get the window coordinates of a table view using the following code:
[self.tableView.superview convertRect:self.tableView.frame toView:nil]
It reports the correct coordinates while in portrait mode, but when I rotate to landscape it no longer reports correct coordinates. First off, it flips the x, y coordinates and the width and height. That's not really the problem though. The real problem is that the coordinates are incorrect. In portrait the window coordinates for the table view's frame are {{0, 114}, {320, 322}}, while in landscape the window coordinates are {{32, 0}, {204, 480}}. Obviously the x-value here is incorrect, right? Shouldn't it be 84? I'm looking for a fix to this problem, and if anybody knows how to get the correct window coordinates of a view in landscape mode, I would greatly appreciate it if you would share that knowledge with me.
Here are some screenshots so you can see the view layout.
Portrait: http://i.stack.imgur.com/IaKJc.png
Landscape: http://i.stack.imgur.com/JHUV6.png
I've found what I believe to be the beginnings of the solution. It seems the coordinates you and I are seeing are based on the bottom left or the top right, depending on whether the orientation is UIInterfaceOrientationLandscapeRight or UIInterfaceOrientationLandscapeLeft.
I don't know why yet, but hopefully that helps. :)
[UPDATE]
So I guess the origin of the window is 0,0 in normal portrait mode, and rotates with the iPad/iPhone.
So here's how I solved this.
First I grab my orientation, window bounds and the rect of my view within the window (with the wonky coordinates)
UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];
CGRect windowRect = appDelegate.window.bounds;
CGRect viewRectAbsolute = [self.guestEntryTableView convertRect:self.guestEntryTableView.bounds toView:nil];
Then if the orientation is landscape, I reverse the x and y coordinates and the width and height
if (UIInterfaceOrientationLandscapeLeft == orientation ||
    UIInterfaceOrientationLandscapeRight == orientation) {
    windowRect = XYWidthHeightRectSwap(windowRect);
    viewRectAbsolute = XYWidthHeightRectSwap(viewRectAbsolute);
}
Then I call my function to rebase the origin on the top left, no matter how the iPad/iPhone is rotated. It fixes the origin depending on where 0,0 currently lives for the given orientation:
viewRectAbsolute = FixOriginRotation(viewRectAbsolute, orientation, windowRect.size.width, windowRect.size.height);
Here are the two functions I use
CGRect XYWidthHeightRectSwap(CGRect rect) {
    CGRect newRect;
    newRect.origin.x = rect.origin.y;
    newRect.origin.y = rect.origin.x;
    newRect.size.width = rect.size.height;
    newRect.size.height = rect.size.width;
    return newRect;
}

CGRect FixOriginRotation(CGRect rect, UIInterfaceOrientation orientation, int parentWidth, int parentHeight) {
    CGRect newRect;
    switch (orientation) {
        case UIInterfaceOrientationLandscapeLeft:
            newRect = CGRectMake(parentWidth - (rect.size.width + rect.origin.x), rect.origin.y, rect.size.width, rect.size.height);
            break;
        case UIInterfaceOrientationLandscapeRight:
            newRect = CGRectMake(rect.origin.x, parentHeight - (rect.size.height + rect.origin.y), rect.size.width, rect.size.height);
            break;
        case UIInterfaceOrientationPortrait:
            newRect = rect;
            break;
        case UIInterfaceOrientationPortraitUpsideDown:
            newRect = CGRectMake(parentWidth - (rect.size.width + rect.origin.x), parentHeight - (rect.size.height + rect.origin.y), rect.size.width, rect.size.height);
            break;
    }
    return newRect;
}
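For reference, the whole flow could be wrapped into one helper along these lines; absoluteRectForView:inWindow: is just an illustrative name, not part of the original code:

- (CGRect)absoluteRectForView:(UIView *)view inWindow:(UIWindow *)window {
    UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];
    CGRect windowRect = window.bounds;
    CGRect viewRectAbsolute = [view convertRect:view.bounds toView:nil];

    // In landscape, swap x/y and width/height before rebasing the origin
    if (UIInterfaceOrientationIsLandscape(orientation)) {
        windowRect = XYWidthHeightRectSwap(windowRect);
        viewRectAbsolute = XYWidthHeightRectSwap(viewRectAbsolute);
    }
    return FixOriginRotation(viewRectAbsolute, orientation,
                             windowRect.size.width, windowRect.size.height);
}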
This is a hack, but it works for me:
UIView *toView = [UIApplication sharedApplication].keyWindow.rootViewController.view;
[self.tableView convertRect:self.tableView.bounds toView:toView];
I am not sure this is the best solution. It may not work reliably if your root view controller doesn't support the same orientations as the current view controller.
You should be able to get the current table view coordinates from self.tableView.bounds
Your code should be:
[tableView convertRect:tableView.bounds toView:[UIApplication sharedApplication].keyWindow];
That will give you the view's rectangle in the window's coordinate system. Be sure to use "bounds" and not "frame": frame is the view's rectangle in its parent view's coordinate system, while bounds is the view's rectangle in its own coordinate system. So the code above asks the table view to convert its own rectangle from its own system into the window's system. Your previous code was asking the table's parent view to convert the table's frame, which is already in the parent's coordinate system, to nil (the window).
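To see the difference at runtime, a small sketch (assuming the same self.tableView) that logs both values:

// frame: the table view's rect in its superview's coordinate system
NSLog(@"frame: %@", NSStringFromCGRect(self.tableView.frame));

// bounds converted to the window: the table view's rect in window coordinates
CGRect inWindow = [self.tableView convertRect:self.tableView.bounds
                                       toView:[UIApplication sharedApplication].keyWindow];
NSLog(@"in window: %@", NSStringFromCGRect(inWindow));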
Try bounds instead of frame:
self.parentViewController.view.bounds
as it gives me coordinates adjusted to the current orientation.
I'd like to move a view from a scroll view to a UIView.
I'm having trouble changing its center (or frame) so that it remains in the same position on screen, but in a different view (possibly the superview of the scroll view).
How should I convert the view's center/frame?
Thank you.
EDIT:
CGPoint oldCenter = dragView.center;
CGPoint newCenter = [dragView convertPoint: oldCenter toView: self.navigationView.contentView];
dragView.center = newCenter;
[self.navigationView.contentView addSubview: dragView];
I can also use the (NSSet *)touches, since I'm in touchesBegan:.
I was having a hard time making it work, and the documentation wasn't very clear to me.
You can use convertPoint:toView: method of UIView. It is used to convert a point from one view's coordinate system to another. See Converting Between View Coordinate Systems section of UIView class reference. There are more methods available.
-edit-
You are using the wrong point when calling the convertPoint: method. The given point should be in dragView's own coordinate system, whereas dragView.center is in its superview's coordinate system.
Use the following point and it should give you the center of dragView in its own coordinate system.
CGPoint p;
p = CGPointMake(dragView.bounds.size.width * 0.5, dragView.bounds.size.height * 0.5);
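Putting that together with the code from the question, the move might look like this sketch (same dragView and navigationView.contentView names as above; assumes both views are in the same window):

// The point must be in dragView's own coordinate system
CGPoint p = CGPointMake(dragView.bounds.size.width * 0.5,
                        dragView.bounds.size.height * 0.5);

// Convert that point into the destination view's coordinate system
CGPoint newCenter = [dragView convertPoint:p toView:self.navigationView.contentView];

// Re-parent and re-center so the view stays put on screen
[self.navigationView.contentView addSubview:dragView];
dragView.center = newCenter;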
My goal is simple; I want to create a program that displays a UIImage and, when swiped from bottom to top, displays another UIImage. The images here could be a happy face and a sad face. The sad face should be the starting point and the happy face the end point. While swiping your finger, the part below the finger should show the happy face.
So far I tried solving this with the frame and bounds properties of the UIImageView I used for the happy face image.
What this piece of code does is wrong, because the transition starts in the center of the screen and not at the bottom. Notice that the origins of both frame and bounds are at 0,0...
I have read numerous pages about frames and bounds, but I don't get it. Any help is appreciated!
loadImages is called only once.
- (void)loadImages {
    sadface = [UIImage imageNamed:@"face-sad.jpg"];
    happyface = [UIImage imageNamed:@"face-happy.jpg"];

    UIImageView *face1view = [[UIImageView alloc] init];
    face1view.image = sadface;
    [self.view addSubview:face1view];

    CGRect frame;
    CGRect contentRect = self.view.frame;
    frame = CGRectMake(0, 0, contentRect.size.width, contentRect.size.height);
    face1view.frame = frame;

    face2view = [[UIImageView alloc] init];
    face2view.layer.masksToBounds = YES;
    face2view.contentMode = UIViewContentModeScaleAspectFill;
    face2view.image = happyface;
    [self.view addSubview:face2view];

    frame = CGRectMake(startpoint.x, 0, contentRect.size.width, contentRect.size.height);
    face2view.frame = frame;
    face2view.clipsToBounds = YES;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint movepoint = [[touches anyObject] locationInView:self.view];
    NSLog(@"movepoint: %f %f", movepoint.x, movepoint.y);

    face2view.bounds = CGRectMake(0, 0, 320, 480 - movepoint.y);
}
The UIImages and UIImageViews are properly disposed of in the dealloc function.
Indeed, you seem to be confused about frames and bounds. In fact, they are easy. Always remember that any view has its own coordinate system. The frame, center and transform properties are expressed in the superview's coordinates, while bounds is expressed in the view's own coordinate system. If a view doesn't have a superview (it's not installed into a view hierarchy yet), it still has a frame. In iOS the frame property is calculated from the view's bounds, center and transform. You may ask what on earth frame and center mean when there's no superview: they are used when you add the view to another view, allowing you to position the view before it's actually visible.
The most common example when a view's bounds differ from its frame is when it is not in the upper left corner of its superview: its bounds.origin may be CGPointZero, while its frame.origin is not. Another classic example is UIScrollView, which frequently modifies its bounds.origin to make subviews scroll (in fact, modifying the origin of the coordinate system automatically moves every subview without affecting their frames), while its own frame is constant.
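To make the scroll view point concrete, here is a tiny sketch (scrollView stands in for any UIScrollView you are inspecting) that you could run before and after scrolling:

// The scroll view's frame (in its superview's coordinates) stays the same while scrolling...
NSLog(@"frame:  %@", NSStringFromCGRect(scrollView.frame));

// ...but its bounds.origin tracks the content offset, which is what moves the subviews
NSLog(@"bounds: %@", NSStringFromCGRect(scrollView.bounds));
NSLog(@"offset: %@", NSStringFromCGPoint(scrollView.contentOffset));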
Back to your code. First of all, when you already have images to display in image views, it makes sense to init the views with their images:
UIImageView *face1view = [[UIImageView alloc] initWithImage: sadface];
That helps the image view to immediately size itself properly. It is not recommended to init views with -init because that might skip some important code in their designated initializer, -initWithFrame:.
Since you add face1view to self.view, you should really use its bounds rather than its frame:
face1view.frame = self.view.bounds;
Same goes for the happier face. Then in -touchesMoved:… you should either change face2view's frame to move it inside self.view or (if self.view does not contain any other subviews besides faces) modify self.view's bounds to move both faces inside it together. Instead, you do something weird like vertically stretching the happy face inside face2view. If you want the happy face to slide from the bottom of self.view, you should initially set its frame like this (not visible initially):
face2view.frame = CGRectOffset(face2view.frame, 0, CGRectGetHeight(self.view.bounds));
If you choose to swap faces by changing image views' frames (contrasted with changing self.view's bounds), I guess you might want to change both the views' frame origins, so that the sad face slides up out and the happy face slides up in. Alternatively, if you want the happy face to cover the sad one:
face2view.frame = face1view.frame;
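As a rough sketch of the frame-based approach (assuming face2view was initially offset below self.view as suggested above), -touchesMoved:withEvent: could simply drag the happy face's frame up under the finger:

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint movePoint = [[touches anyObject] locationInView:self.view];

    // Slide the happy face so its top edge follows the finger;
    // everything from the finger down shows face2view
    CGRect frame = face2view.frame;
    frame.origin.y = movePoint.y;
    face2view.frame = frame;
}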
Your problem seems to have something to do with the face2view.bounds in touchesMoved.
You are setting the bounds of this view to the rect, x:0, y:0, width:320, height:480 - y
x = 0 == left on the x axis
y = 0 == top on the y axis
So you are putting this image frame at the upper left corner, and making it fill the whole view. That's not what you want. The image simply becomes centered in this imageView.
This is quite the iPhone quandary. I am working on a library, but have narrowed my problem down to very simple code. What this code does is create a 50x50 view, apply a rotation transform of a few degrees, then shift the frame down a few times. The result is that the 50x50 view now looks much larger.
Here's the code:
// a simple 50x50 view
UIView *redThing = [[UIView alloc] initWithFrame:CGRectMake(50, 50, 50, 50)];
redThing.backgroundColor = [UIColor redColor];
[self.view addSubview:redThing];
// rotate a small amount (as long as it's not 90 or 180, etc.)
redThing.transform = CGAffineTransformRotate(redThing.transform, 0.1234);
// move the view down 2 pixels
CGRect newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
redThing.frame = newFrame;
// move the view down another 2 pixels
newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
redThing.frame = newFrame;
// move the view down another 2 pixels
newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
redThing.frame = newFrame;
// move the view down another 2 pixels
newFrame = CGRectMake(redThing.frame.origin.x, redThing.frame.origin.y + 2, redThing.frame.size.width, redThing.frame.size.height);
redThing.frame = newFrame;
So, what the heck is going on? Now, if I move the view by applying a translation transform, it works just fine. But that's not what I want to do and this should work anyway.
Any ideas?
From the UIView documentation:
If the transform property is also set, use the bounds and center properties instead; otherwise, animating changes to the frame property does not correctly reflect the actual location of the view.
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
In other words, I would be wary of the frame property when a transform is set.
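Applied to the code in the question, that means nudging the rotated view with center rather than frame; a one-line sketch:

// Move the rotated view down 2 points by adjusting center; its frame is
// undefined while the transform is non-identity, so we leave it alone
redThing.center = CGPointMake(redThing.center.x, redThing.center.y + 2);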
My application has a UIScrollView with one subview. The subview is an extended UIView which draws a PDF page into itself using layers in drawLayer.
Zooming with the built-in pinching works great, and setZoomScale also works as expected.
I have been struggling with the zoomToRect function. I found an example online which builds a CGRect zoomRect from a given CGPoint.
In touchesEnded, if there was a double tap and the view is zoomed all the way out, I want to zoom in on that PDFUIView I created as though the user were pinching out with the center of the pinch at the double-tap location.
So assume that I pass the UITouch variable to my function which utilizes zoomToRect if they double tap.
I started with the following function I found on Apple's site:
http://developer.apple.com/iphone/library/documentation/WindowsViews/Conceptual/UIScrollView_pg/ZoomZoom/ZoomZoom.html
The following is a modified version for my UIScrollView extended class:
- (void)zoomToCenter:(float)scale withCenter:(CGPoint)center {
CGRect zoomRect;
zoomRect.size.height = self.frame.size.height / scale;
zoomRect.size.width = self.frame.size.width / scale;
zoomRect.origin.x = center.x - (zoomRect.size.width / 2.0);
zoomRect.origin.y = center.y - (zoomRect.size.height / 2.0);
//return zoomRect;
[self zoomToRect:zoomRect animated:YES];
}
When I do this, the UIScrollView seems to zoom using the bottom right edge of the zoomRect above and not the center.
If I make a UIView like this:
UIView *v = [[UIView alloc] initWithFrame:zoomRect];
[v setBackgroundColor:[UIColor redColor]];
[self addSubview:v];
The red box shows up with the touch point dead in the center.
Please note: I am writing this from my PC, I recall messing around with the divided by two part on my Mac, so just assume that this draws a rect with the touch point in the center. If the UIView drew off center but zoomed to the right spot it would be all good.
However, what happens is that when it performs the zoomToRect, it seems to place the bottom right of the zoomRect at the top left of the zoomed-in result.
Also, I noticed that depending on where I click on the UIScrollView, it anchors to different spots. It almost seems like there is a cross down the middle and it's reflecting the points somehow, as though anywhere left of the middle is a negative reflection and anywhere right of the middle is a positive reflection.
This seems too complicated; shouldn't it just zoom to the rect that was drawn, the same rect the UIView was able to draw?
I used a lot of research to figure out how to create a PDF that scales in high quality, so I am assuming that using the CALayer may be throwing off the coordinate system? But to the UIScrollView it should just treat it as a view with 768x985 dimensions.
This is sort of advanced, please assume the code for creating the zoomRect is all good. There is something deeper with the CALayer in the UIView which is in the UIScrollView....
OK, another answer:
The Apple-supplied routine works for me, but you need to have the gesture recognizer convert the tap point to the imageView's coordinates, not the scroll view's.
Apple's example does this, but since our app works differently (we change the UIImageView), the gesture recognizer was set up on the UIScrollView instead. That works fine, but you need to do this in handleDoubleTap:
This is loosely based on the apple example code "TaptoZoom", but as I said we needed our gesture recognizer hooked up to the scroll view.
- (void)handleDoubleTap:(UIGestureRecognizer *)gestureRecognizer {
    // double tap zooms in
    [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(handleSingleTap:) object:nil];
    float newScale = [imageScrollView zoomScale] * 1.5;

    // Note: we need the location of the tap in the imageView coords, not the imageScrollView
    CGRect zoomRect = [self zoomRectForScale:newScale withCenter:[gestureRecognizer locationInView:imageView]];
    [imageScrollView zoomToRect:zoomRect animated:YES];
}
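For completeness, a sketch of how a double-tap recognizer might be attached to the scroll view (setup details are assumed, not taken from the Apple sample):

UITapGestureRecognizer *doubleTap =
    [[UITapGestureRecognizer alloc] initWithTarget:self
                                            action:@selector(handleDoubleTap:)];
doubleTap.numberOfTapsRequired = 2;
[imageScrollView addGestureRecognizer:doubleTap];
[doubleTap release]; // pre-ARC, matching the rest of this thread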
Declare BOOL isZoom; in your .h file:
- (void)handleDoubleTap:(UIGestureRecognizer *)recognizer {
    if (isZoom) {
        CGPoint pointInView = [recognizer locationInView:self];

        CGFloat newZoomScale = 3.0;
        newZoomScale = MIN(newZoomScale, self.maximumZoomScale);

        CGSize scrollViewSize = self.bounds.size;
        CGFloat w = scrollViewSize.width / newZoomScale;
        CGFloat h = scrollViewSize.height / newZoomScale;
        CGFloat x = pointInView.x - (w / 2.0);
        CGFloat y = pointInView.y - (h / 2.0);

        CGRect rectToZoom = CGRectMake(x, y, w, h);
        [self zoomToRect:rectToZoom animated:YES];
        [self setZoomScale:3.0 animated:YES];
        isZoom = NO;
    } else {
        [self setZoomScale:1.0 animated:YES];
        isZoom = YES;
    }
}
I've noticed that the Apple code you're using doesn't zoom properly if the image starts at a zoomScale less than 1, because the zoomRect origin is incorrect. I edited it to work correctly. Here's the code:
- (CGRect)zoomRectForScale:(float)scale withCenter:(CGPoint)center {
    CGRect zoomRect;

    // The zoom rect is in the content view's coordinates.
    // At a zoom scale of 1.0, it would be the size of the imageScrollView's bounds.
    // As the zoom scale decreases (so more content is visible), the size of the rect grows.
    zoomRect.size.height = [self frame].size.height / scale;
    zoomRect.size.width = [self frame].size.width / scale;

    // Choose an origin so as to get the right center.
    zoomRect.origin.x = (center.x * (2 - self.minimumZoomScale) - (zoomRect.size.width / 2.0));
    zoomRect.origin.y = (center.y * (2 - self.minimumZoomScale) - (zoomRect.size.height / 2.0));

    return zoomRect;
}
The key is this part multiplying the center value by (2 - self.minimumZoomScale).
Hope this helps.
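As a usage sketch, assuming the method above lives in a UIScrollView subclass and that contentView is a hypothetical reference to the zoomed view (where the tap point is measured follows the coordinate discussion earlier in this thread):

- (void)handleDoubleTap:(UIGestureRecognizer *)gestureRecognizer {
    CGFloat newScale = self.zoomScale * 2.0;

    // Tap location taken in the zoomed content view's coordinate system
    // (contentView is an assumed ivar, not part of the answer above)
    CGPoint tapPoint = [gestureRecognizer locationInView:contentView];

    CGRect zoomRect = [self zoomRectForScale:newScale withCenter:tapPoint];
    [self zoomToRect:zoomRect animated:YES];
}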
In my case it was:
zoomRect.origin.x = center.x / self.zoomScale - (zoomRect.size.width / 2.0);
zoomRect.origin.y = center.y / self.zoomScale - (zoomRect.size.height / 2.0);
extension UIScrollView {
    func getRectForVisibleView() -> CGRect {
        var visibleRect: CGRect = .zero
        visibleRect.origin = self.contentOffset
        visibleRect.size = self.bounds.size

        let theScale = 1.0 / self.zoomScale
        visibleRect.origin.x *= theScale
        visibleRect.origin.y *= theScale
        visibleRect.size.width *= theScale
        visibleRect.size.height *= theScale
        return visibleRect
    }

    func moveToRect(rect: CGRect) {
        let scale = self.bounds.width / rect.width
        self.zoomScale = scale
        self.contentOffset = .init(x: rect.origin.x * scale, y: rect.origin.y * scale)
    }
}
I had something similar and it was because I didn't adjust the center.x and center.y values by dividing them by the scale also (using center.x/scale and center.y/scale). Maybe I'm not reading your code right.
I am having the same behavior and it is quite frustrating... The rectangle being fed to the UIScrollView is perfect, yet no matter what I do, anything that involves changing the zoomScale programmatically always zooms and scales to coordinate 0,0.
I have tried just changing the zoomScale, I have tried zoomToRect, I have tried them all, and the minute I touch the zoomScale in code, it goes to coordinate 0,0.
I also had to add an explicit setContentSize on the resized image in the scroll view after a zooming operation, otherwise I cannot scroll after a zoom or pinch.
Is this a bug in 3.1.3 or what?
I have tried different solutions, but this looks like the best one. It is really straightforward and conceptually simple:
CGRect frame = [[UIScreen mainScreen] applicationFrame];
scrollView.contentInset = UIEdgeInsetsMake(frame.size.height / 2,
                                           frame.size.width / 2,
                                           frame.size.height / 2,
                                           frame.size.width / 2);
I disagree with one of the comments above saying that you should never multiply the center's coordinates by some factor.
Say that you are currently displaying an entire 400x400px image or PDF file in a 100x100 scroll view and want to allow the users to double the size of the content until it's 1:1.
If you double tap at point (75,75), you expect the zoomed-in rectangle to have origin 100,100 and size 100x100 within the new 200x200 content view. So the original tapping point (75,75) is now (150,150) in the new 200x200 space.
Now, after zoom action #1 has completed, if you again double tap at (75,75) inside the new 100x100 rectangle (which is the bottom-right square of the larger 200x200 rectangle), you expect the user to be shown the bottom-right 100x100 square of the larger image, which would now become zoomed to 400x400 pixels.
In order to calculate the origin of this latest 100x100 rectangle within the larger 400x400 rectangle, you would need to consider the scale and current content offset (since before this last zoom action we were displaying the bottom-right 100x100 rectangle within a 200x200 content rectangle).
So the x coordinate of the final rectangle becomes:
center.x/currentScale - (scrollView.frame.size.width/2) + scrollView.contentOffset.x/currentScale
= 75/.5 - 100/2 + 100/.5 = 150 - 50 + 200 = 300.
In this case, being a square, the calculation for the y coordinate is the same.
And we did indeed zoom in the bottom-right 100x100 rectangle, which, in the larger 400x400 content view has origin 300,300.
So here is how you would calculate the zoom rectangle's size and origin:
zoomRect.size.height = mScrollView.frame.size.height/scale;
zoomRect.size.width = mScrollView.frame.size.width/scale;
zoomRect.origin.x = center.x/currentScale - (mScrollView.frame.size.width/2) + mScrollView.contentOffset.x/currentScale;
zoomRect.origin.y = center.y/currentScale - (mScrollView.frame.size.height/2) + mScrollView.contentOffset.y/currentScale;
Hope this made sense; it's hard to explain it in writing without sketching out the various squares/rectangles.
Cheers,
Raf Colasante