UIImageView coordinate to subview coordinates - iPhone

If I start with a UIImageView, and I add a subview, how do I translate a coordinate in the original UIImageView to a corresponding coordinate (the same place on the screen) in the subview?

UIView provides methods for exactly this purpose. In your case you have two options:
CGPoint newLocation = [imageView convertPoint:thePoint toView:subview];
or
CGPoint newLocation = [subview convertPoint:thePoint fromView:imageView];
They both do the same thing, so pick whichever feels more natural. There are also equivalent methods for converting rects. These methods convert between any two views in the same window; if the other view is nil, they convert to or from the window's base coordinates. They can handle views that aren't direct descendants of each other, and they also handle views with transforms (though the rect versions may not produce accurate results when the transform contains any rotation or skewing).
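For example, converting a rect works the same way. A minimal sketch, where someRect stands in for a rect expressed in the image view's coordinates:
// Convert a rect from imageView's coordinate system into subview's.
CGRect rectInSubview = [imageView convertRect:someRect toView:subview];
// And the reverse direction, using the fromView: variant:
CGRect backAgain = [imageView convertRect:rectInSubview fromView:subview];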

Alternatively, to map a point from the parent view's coordinates to the same point in the subview's coordinates, subtract the subview's frame.origin (this only holds while the subview has no transform applied):
CGPoint pointInSubview = CGPointMake(pointInParent.x - subview.frame.origin.x,
                                     pointInParent.y - subview.frame.origin.y);

Starting with code like:
UIImageView *superView = ....;
UIImageView *subView = [[UIImageView alloc]
    initWithFrame:CGRectMake(0, 0, subViewWidth, subViewHeight)];
subView.center = CGPointMake(subViewCenterX, subViewCenterY);
[superView addSubview:subView];
The (subViewCenterX, subViewCenterY) coordinate is the point in superView where the center of subView is "pinned". The subView can be moved around relative to the superView by moving its center around. For example,
subView.center=CGPointMake(subViewCenterX+1, subViewCenterY);
moves it 1 point to the right. Now let's say we have a point (X,Y) in the superView, and we want to find the corresponding point (x,y) in the subView, so that (X,Y) and (x,y) refer to the same point on the screen. The formula for x is:
x=X+subViewWidth/2-subViewCenterX;
and similarly for y:
y=Y+subViewHeight/2-subViewCenterY;
To see why, draw a box representing the superView and another (larger) box representing the subView: the difference subViewWidth/2-subViewCenterX is the width of the part of the subView box sticking out to the left of the superView.
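Assuming the setup above and no transform on subView, the formula translates directly into code. In this sketch, pointInSuperView is a hypothetical CGPoint holding (X, Y):
// Valid only while subView has no transform applied.
CGPoint pointInSubView = CGPointMake(
    pointInSuperView.x + subViewWidth / 2.0 - subViewCenterX,
    pointInSuperView.y + subViewHeight / 2.0 - subViewCenterY);
// The built-in conversion gives the same result:
// CGPoint check = [subView convertPoint:pointInSuperView fromView:superView];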

Related

Checking to see if point in child view is in parent view

I have the following set-up: a light blue view, let's call it parentView, with a rectangular subview (the purple view) called childView. The user can use pan touches to rotate and stretch childView by putting a finger on the point marked by the red dot and pushing or pulling it.
It's possible that childView could be scaled down enough that, after the user finishes the touch, the point marked by the red dot ends up inside parentView.
My goal is to create a method that can detect if the red point is in the parentView or not. I've written the following code:
CGPoint childViewRedPoint = CGPointMake(self.bounds.size.width, self.bounds.size.height / 2);
CGPoint rotatedChildViewRedPoint = CGPointApplyAffineTransform(childViewRedPoint, CGAffineTransformMakeRotation(self.rotateAngle));
CGPoint convertedChildViewRedPoint = [self convertPoint:rotatedChildViewRedPoint toView:self.superview];
if (CGRectContainsPoint(self.superview.bounds, convertedChildViewRedPoint)) {
    return YES;
} else {
    return NO;
}
First I find the red point as defined within childView, then I rotate it by the amount the view has been rotated, and then I convert it into parentView's coordinates.
The points I'm getting don't seem to make sense and this isn't working. Does anyone know where I'm going wrong? Am I not taking parentView's superview into account?
I am not 100% sure, but I think that convertPoint: already takes a rotation (or any other transformation) into account, so you only need:
CGPoint childViewRedPoint = CGPointMake(self.bounds.size.width, self.bounds.size.height / 2);
CGPoint convertedChildViewRedPoint = [self convertPoint:childViewRedPoint toView:self.superview];
if (CGRectContainsPoint(self.superview.bounds, convertedChildViewRedPoint))
...
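Put together, a minimal sketch of the whole check (assuming the method lives on childView; redPointInsideParent is a made-up name):
// Returns YES if the red dot (middle of the right edge, in this view's own
// coordinates) currently lies inside the parent view's bounds.
- (BOOL)redPointInsideParent
{
    CGPoint childViewRedPoint = CGPointMake(self.bounds.size.width,
                                            self.bounds.size.height / 2);
    // convertPoint:toView: already accounts for this view's transform,
    // so no manual rotation is applied here.
    CGPoint convertedPoint = [self convertPoint:childViewRedPoint
                                         toView:self.superview];
    return CGRectContainsPoint(self.superview.bounds, convertedPoint);
}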

CGAffineTransformMakeTranslation: translate to a point rather than by a value

My question is simple.
Let us say I use this method
CGAffineTransformMakeTranslation(5.0f, 0.0f);
which translates the image view 5 points to the right. But is there a similar method that does the exact same thing except that it takes the destination point as an argument rather than the values you want to move the image view by?
For example, if I wanted to move an image view to 100.0f, 0.0f what would I use?
You can use the following two options:
imgOne.center = CGPointMake(50, 50);
or
imgOne.frame = CGRectMake(50, 50, imgOne.frame.size.width, imgOne.frame.size.height);
If it's the center point you want to move to this coordinate, use:
imageView.center = CGPointMake(100.0f, 0.0f);
If it's one of the corner points, subtract/add half the view's frame's width/height to the coordinates. If you need this frequently, it's a good idea to write a small UIView category that allows you to position a view's corner on a particular coordinate.
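A minimal sketch of such a category; the category name UIView+Positioning and the method setTopLeft: are made up for illustration:
#import <UIKit/UIKit.h>

@interface UIView (Positioning)
// Moves the receiver so its frame's top-left corner lands on the given point.
- (void)setTopLeft:(CGPoint)topLeft;
@end

@implementation UIView (Positioning)
- (void)setTopLeft:(CGPoint)topLeft
{
    self.center = CGPointMake(topLeft.x + self.frame.size.width / 2.0,
                              topLeft.y + self.frame.size.height / 2.0);
}
@end
With that in place, moving the image view's top-left corner to (100, 0) becomes [imageView setTopLeft:CGPointMake(100.0f, 0.0f)];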

Get absolute location of any control in an Objective-C iPad app

I'm wondering if there is any way to get the absolute location of a control in an iPad/iPhone application. For example, I have a text field which is a child of a child of a view. I want to know its X and Y values in relation to the top parent view (currently the X and Y of the text field return 10 and 10, because that is its frame location within its own view, but in relation to the top parent it should be something like X = 10 and Y = 220). I need a generic method for this somehow. I hope this makes sense.
Any ideas?
You are looking for -[UIView convertPoint:toView:].
For example, to get the origin of a view view in terms of its window's base coordinates, you would write:
CGPoint localPoint = [view bounds].origin;
CGPoint basePoint = [view convertPoint:localPoint toView:nil];
If you instead want to convert the point into the coordinate system of some other view within the same window, you can use that view as the toView: argument instead of nil:
NSAssert([view window] == [otherView window],
         @"%s: Views must be part of the same window.", __func__);
CGPoint otherPoint = [view convertPoint:localPoint toView:otherView];
Because different coordinate systems can have different scales, you might find -convertRect:toView: to be more useful, depending on what you're planning to do with the coordinates. There are also analogous -fromView: versions of both the point and rect conversion methods.
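As a generic helper along the lines the question asks for, one hedged sketch (the function name is made up, and it assumes the control is already installed in a window):
// Returns where 'view' sits inside 'ancestor'; pass nil to get the
// position in the window's base coordinates.
static CGPoint GRAbsoluteOrigin(UIView *view, UIView *ancestor)
{
    return [view convertPoint:view.bounds.origin toView:ancestor];
}
// Usage, e.g. for the text field in the question:
// CGPoint p = GRAbsoluteOrigin(textField, topParentView);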

(iphone) how to set view.center when detaching a view from scroll view and adding it to another view?

I'd like to move a view from a scroll view to a UIView.
I'm having trouble changing its center (or frame) so that it remains in the same position on screen (but in a different view, possibly the superview of the scroll view).
How should I convert the view's center/frame?
Thank you.
EDIT:
CGPoint oldCenter = dragView.center;
CGPoint newCenter = [dragView convertPoint: oldCenter toView: self.navigationView.contentView];
dragView.center = newCenter;
[self.navigationView.contentView addSubview: dragView];
I can also use the (NSSet *)touches argument, since I'm in touchesBegan:.
I was having a hard time making this work, and the docs weren't very clear to me.
You can use the convertPoint:toView: method of UIView. It converts a point from one view's coordinate system to another. See the Converting Between View Coordinate Systems section of the UIView class reference; there are more methods available there.
EDIT:
You are passing the wrong point to convertPoint:. The point you pass should be in dragView's own coordinate system, whereas dragView.center is expressed in its superview's coordinate system.
Use the following point and it should give you the center of dragView in its own coordinate system.
CGPoint p;
p = CGPointMake(dragView.bounds.size.width * 0.5, dragView.bounds.size.height * 0.5);
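Putting the fix together, a hedged sketch of the re-parenting that keeps the on-screen position (it assumes dragView currently sits in the scroll view and navigationView.contentView is the destination from the question):
// Compute dragView's center in its own coordinate system, then convert that
// point into the destination view's coordinates while dragView is still in
// the scroll view, and only then re-parent it.
CGPoint centerInOwnCoords = CGPointMake(dragView.bounds.size.width * 0.5,
                                        dragView.bounds.size.height * 0.5);
CGPoint newCenter = [dragView convertPoint:centerInOwnCoords
                                    toView:self.navigationView.contentView];
[self.navigationView.contentView addSubview:dragView]; // removes it from the scroll view
dragView.center = newCenter;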

bounds and frames: how do I display part of a UIImage

My goal is simple: I want to create a program that displays a UIImage and, when swiped from bottom to top, displays another UIImage. The images here could be a happy face and a sad face. The sad face should be the starting point and the happy face the end point; while swiping, the part below the finger should show the happy face.
So far I have tried to solve this with the frame and bounds properties of the UIImageView I use for the happy face image.
What this piece of code does is wrong, because the transition starts in the center of the screen and not the bottom. Notice that the origin of both frame and bounds are at 0,0...
I have read numerous pages about frames and bounds, but I don't get it. Any help is appreciated!
The loadImages method is called only once.
- (void)loadImages {
    sadface = [UIImage imageNamed:@"face-sad.jpg"];
    happyface = [UIImage imageNamed:@"face-happy.jpg"];

    UIImageView *face1view = [[UIImageView alloc] init];
    face1view.image = sadface;
    [self.view addSubview:face1view];

    CGRect frame;
    CGRect contentRect = self.view.frame;
    frame = CGRectMake(0, 0, contentRect.size.width, contentRect.size.height);
    face1view.frame = frame;

    face2view = [[UIImageView alloc] init];
    face2view.layer.masksToBounds = YES;
    face2view.contentMode = UIViewContentModeScaleAspectFill;
    face2view.image = happyface;
    [self.view addSubview:face2view];
    frame = CGRectMake(startpoint.x, 0, contentRect.size.width, contentRect.size.height);
    face2view.frame = frame;
    face2view.clipsToBounds = YES;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movepoint = [[touches anyObject] locationInView:self.view];
    NSLog(@"movepoint: %f %f", movepoint.x, movepoint.y);
    face2view.bounds = CGRectMake(0, 0, 320, 480 - movepoint.y);
}
The UIImages and UIImageViews are properly disposed of in the dealloc method.
Indeed, you seem to be confused about frames and bounds. In fact, they are easy. Always remember that any view has its own coordinate system. The frame, center and transform properties are expressed in superview's coordinates, while the bounds is expressed in the view's own coordinate system. If a view doesn't have a superview (not installed into a view hierarchy yet), it still has a frame. In iOS the frame property is calculated from the view's bounds, center and transform. You may ask what the hell frame and center mean when there's no superview. They are used when you add the view to another view, allowing to position the view before it's actually visible.
The most common example when a view's bounds differ from its frame is when it is not in the upper left corner of its superview: its bounds.origin may be CGPointZero, while its frame.origin is not. Another classic example is UIScrollView, which frequently modifies its bounds.origin to make subviews scroll (in fact, modifying the origin of the coordinate system automatically moves every subview without affecting their frames), while its own frame is constant.
Back to your code. First of all, when you already have images to display in image views, it makes sense to init the views with their images:
UIImageView *face1view = [[UIImageView alloc] initWithImage: sadface];
That helps the image view to immediately size itself properly. It is not recommended to init views with -init because that might skip some important code in their designated initializer, -initWithFrame:.
Since you add face1view to self.view, you should really use its bounds rather than its frame:
face1view.frame = self.view.bounds;
Same goes for the happier face. Then in -touchesMoved:… you should either change face2view's frame to move it inside self.view or (if self.view does not contain any other subviews besides faces) modify self.view's bounds to move both faces inside it together. Instead, you do something weird like vertically stretching the happy face inside face2view. If you want the happy face to slide from the bottom of self.view, you should initially set its frame like this (not visible initially):
face2view.frame = CGRectOffset(face2view.frame, 0, CGRectGetHeight(self.view.bounds));
If you choose to swap faces by changing image views' frames (contrasted with changing self.view's bounds), I guess you might want to change both the views' frame origins, so that the sad face slides up out and the happy face slides up in. Alternatively, if you want the happy face to cover the sad one:
face2view.frame = face1view.frame;
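As a rough sketch of the frame-based approach (not the original poster's code; it assumes face2view keeps its full-screen size and starts offset below the bottom as shown above, so that it slides up under the finger):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint movepoint = [[touches anyObject] locationInView:self.view];
    // Slide the full-size happy face up so its top edge follows the finger;
    // everything below the finger then shows the happy face.
    CGRect frame = face2view.frame;
    frame.origin.y = movepoint.y;
    face2view.frame = frame;
}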
Your problem seems to have something to do with the face2view.bounds in touchesMoved.
You are setting the bounds of this view to the rect, x:0, y:0, width:320, height:480 - y
x = 0 == left on the x axis
y = 0 == top on the y axis
So you are putting this image frame at the upper left corner, and making it fill the whole view. That's not what you want. The image simply becomes centered in this imageView.
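If the two faces are supposed to stay aligned while only the strip below the finger reveals the happy face, one hedged alternative is to keep face2view full-screen sized inside a clipping container and resize only the container. Here containerView is a hypothetical UIView with clipsToBounds set to YES, added to self.view in place of face2view (with face2view as its subview), and movepoint is the touch location from the code above:
CGFloat w = self.view.bounds.size.width;
CGFloat h = self.view.bounds.size.height;
// The container covers only the area below the finger.
containerView.frame = CGRectMake(0, movepoint.y, w, h - movepoint.y);
// Shift the image view up inside the container so the happy face stays
// pinned to the screen and lines up with the sad face behind it.
face2view.frame = CGRectMake(0, -movepoint.y, w, h);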