How do we get the coordinates of a UIImageView programmatically? - iphone

I would like to get the coordinates of a UIImageView programmatically. How do I do this?
For example, I have a square and I want to get the coordinates of all its corners. How must I proceed? With
NSLog(@"%f", imageView.frame.origin.x);
NSLog(@"%f", imageView.frame.origin.y);
I get the top-left coordinate of the square.
I should mention that the square (image view) rotates, and that's why I need its coordinates.

The coordinates in whose coordinate space?
Each UIView has its own coordinate space. You can refer to a view's size in its own coordinate space by asking for its bounds.
A view's size and position in its parent view is called its frame. In your example, you're asking for the image view's top left corner in its parent view's coordinate space.
If that's what you want to do, then try these:
frame.origin.x
frame.origin.y
frame.size.width
frame.size.height
By adding those together you can get any coordinate: for example, the x coordinate of the top right would be
frame.origin.x + frame.size.width
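For example, here is a minimal sketch that reads off all four corners of the frame (imageView stands in for whatever UIImageView you're working with; keep in mind that once the view is rotated, the frame only describes its axis-aligned bounding box, which the related questions below deal with):
CGRect frame = imageView.frame;
// corners of the frame, in the superview's coordinate space
CGPoint topLeft = frame.origin;
CGPoint topRight = CGPointMake(frame.origin.x + frame.size.width, frame.origin.y);
CGPoint bottomLeft = CGPointMake(frame.origin.x, frame.origin.y + frame.size.height);
CGPoint bottomRight = CGPointMake(frame.origin.x + frame.size.width, frame.origin.y + frame.size.height);
NSLog(@"top right: (%f, %f)", topRight.x, topRight.y);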

Related

Finding Bounding Box of Rotated Image that has Transparent Background

I have a UIImage that has a transparent background. When rotating this image, I'd like to find the bounding box around the graphic itself, i.e. the non-transparent part. (If you rotate it in a UIImageView, you get the bounding box around the entire UIImage, including the transparent part.)
Is there an Apple library that might do this for me? If not, does anyone know how this can be done?
If I understood your question correctly, you can retrieve the frame (not the bounds) of the UIImageView, take its individual corner CGPoints, and explicitly transform those points to get the transformed rectangle. Apple's documentation says: "You can operate on a CGRect structure by calling the function CGRectApplyAffineTransform. This function returns the smallest rectangle that contains the transformed corner points of the rectangle passed to it." Transforming the points one by one avoids this auto-correcting behavior.
CGRect originalFrame = imageView.frame;
CGPoint p1 = originalFrame.origin;
CGPoint p2 = p1; p2.x += originalFrame.size.width;
CGPoint p3 = p1; p3.y += originalFrame.size.height;
// the same transform that you applied to the image view
CGAffineTransform transform = imageView.transform;
CGPoint transformedP1 = CGPointApplyAffineTransform(p1, transform);
CGPoint transformedP2 = CGPointApplyAffineTransform(p2, transform);
CGPoint transformedP3 = CGPointApplyAffineTransform(p3, transform);
Now you should be able to define a new rectangle from these three points (the fourth one is optional, because the width and height can be calculated from the other three). One thing to note is that you cannot store this new rectangle in a CGRect, because a CGRect is defined by an origin and a size, so its edges are always parallel to the x and y axes. Apple's CGRect definition does not allow rotated rectangles to be stored.
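If you do want the fourth corner as well, here is a small sketch building on the transformedP1..P3 names from the snippet above: because the transform is affine, the fourth corner can be reconstructed from the other three.
// for any affine transform, p4 = p2 + p3 - p1 still holds after transforming
CGPoint transformedP4 = CGPointMake(transformedP2.x + transformedP3.x - transformedP1.x,
                                    transformedP2.y + transformedP3.y - transformedP1.y);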

Calculating new origin of insetted CGRect after its size changes

I have a CGRect A and a CGRect B, where B is centered inside A (a CGRect contains the x and y origin and the width and height of a rectangle). If I increase the width and height of A by some proportion, and also increase the width and height of B by that same proportion, will multiplying B's x origin and y origin by this same proportion keep B in the center of A as both grow? I've tested this out in a few different scenarios and it works, but I just wanted to verify it'll work for all situations, as I am not that sharp in math.
Also, I was wondering if there is a method that simply multiplies all values of a CGRect by this proportion, without having to do it manually (I couldn't find one in the docs).
UPDATE: Actually, this will not work... I'm trying to think of an approach that will let me correctly position a view within another view after a proportional increase in the size of both.
Yes, what you proposed works, but only if the origin of the outer CGRect is 0,0 or if you multiply its origin by the factor, too. If you don't do that, the inner rect will be shifted to the bottom right.
Here's what happens if you multiply both origin and size, compared with what happens if you don't multiply the outer rect's origin:
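A quick numeric sketch of that difference (the rect values are made up for illustration):
CGRect outer = CGRectMake(10, 10, 100, 100);   // midpoint (60, 60)
CGRect inner = CGRectMake(35, 35, 50, 50);     // centered in outer, midpoint (60, 60)
CGFloat factor = 2.0;
// multiply origin AND size of both rects: centering is preserved
CGRect outerScaled = CGRectMake(outer.origin.x * factor, outer.origin.y * factor, outer.size.width * factor, outer.size.height * factor);   // (20, 20, 200, 200), midpoint (120, 120)
CGRect innerScaled = CGRectMake(inner.origin.x * factor, inner.origin.y * factor, inner.size.width * factor, inner.size.height * factor);   // (70, 70, 100, 100), midpoint (120, 120)
// leave the outer origin at (10, 10) instead and its midpoint becomes (110, 110),
// while the scaled inner rect's midpoint is still (120, 120): the inner rect looks shifted.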
From your question, it isn't entirely clear what you're trying to achieve.
If you want to enlarge a CGRect and (re)center it in another one, use these functions:
// center a CGRect in another one
static inline CGRect ALRectCenterInRect(CGRect outerRect, CGRect innerRect)
{
    return CGRectMake(CGRectGetMidX(outerRect) - innerRect.size.width / 2,
                      CGRectGetMidY(outerRect) - innerRect.size.height / 2,
                      innerRect.size.width,
                      innerRect.size.height);
}
// multiply each value of a CGRect with a factor
// combine with CGRectIntegral() to prevent fractions (and the resulting aliasing)
static inline CGRect ALRectMultiply(CGRect rect, CGFloat factor)
{
    return CGRectMake(rect.origin.x * factor, rect.origin.y * factor,
                      rect.size.width * factor, rect.size.height * factor);
}
How to use them:
CGRect centeredInnerRect = ALRectCenterInRect(outerRect, innerRect);
CGRect multipliedRect = ALRectMultiply(someRect, 1.5);
However, when dealing with CGRects, it's usually about UIViews. If you want to center a UIView in its superview, do this:
someSubview.center = CGPointMake(CGRectGetMidX(someSuperview.bounds), CGRectGetMidY(someSuperview.bounds));
If the inner view has the same superview as the outer view, you can simply do this to center it in the outer view:
innerView.center = outerView.center;

Co-ordinates of the four points of a UIView which has been rotated

Is it possible to get this? If so, can anyone please tell me how?
Get the four points of the frame of your view (view.frame)
Retrieve the CGAffineTransform applied to your view (view.transform)
Then apply this same affine transform to the four points using CGPointApplyAffineTransform (and sibling functions from the CGAffineTransform reference)
CGPoint topLeft = view.bounds.origin;
topLeft = [[view superview] convertPoint:topLeft fromView:view];
CGPoint topRight = CGPointMake(view.bounds.origin.x + view.bounds.size.width, view.bounds.origin.y);
topRight = [[view superview] convertPoint:topRight fromView:view];
// ... likewise for the other points
Each first line gives a point in the view's own coordinate space, which is always "upright". The statement after it then finds the corresponding point in the parent view's coordinate space. Note that for an untransformed view, that would be equal to view.frame.origin. The calculations above give the equivalent of the corners of view.frame for a transformed view.
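A fuller sketch along those lines, filling in the remaining corners (view is whatever rotated view you're interested in):
CGRect b = view.bounds;
UIView *parent = [view superview];
// corners in the view's own, untransformed coordinate space
CGPoint topLeft = CGPointMake(CGRectGetMinX(b), CGRectGetMinY(b));
CGPoint topRight = CGPointMake(CGRectGetMaxX(b), CGRectGetMinY(b));
CGPoint bottomLeft = CGPointMake(CGRectGetMinX(b), CGRectGetMaxY(b));
CGPoint bottomRight = CGPointMake(CGRectGetMaxX(b), CGRectGetMaxY(b));
// converted into the superview's coordinate space, transform and all
topLeft = [parent convertPoint:topLeft fromView:view];
topRight = [parent convertPoint:topRight fromView:view];
bottomLeft = [parent convertPoint:bottomLeft fromView:view];
bottomRight = [parent convertPoint:bottomRight fromView:view];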

UIImageView coordinate to subview coordinates

If I start with a UIImageView, and I add a subview, how do I translate a coordinate in the original UIImageView to a corresponding coordinate (the same place on the screen) in the subview?
UIView provides methods for exactly this purpose. In your case you have two options:
CGPoint newLocation = [imageView convertPoint:thePoint toView:subview];
or
CGPoint newLocation = [subview convertPoint:thePoint fromView:imageView];
They both do the same thing, so pick whichever one feels more appropriate. There are also equivalent methods for converting rects. These methods convert between any two views in the same window. If the other view is nil, they convert to or from the window's base coordinates. They can handle views that aren't direct descendants of each other, and they can also handle views with transforms (though the rect methods may not produce accurate results if the transform contains any rotation or skewing).
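For example, the rect variants look like this (the rect names are illustrative):
// convert a rect from the image view's coordinate space into the subview's space
CGRect rectInSubview = [imageView convertRect:someRect toView:subview];
// or convert a rect given in window base coordinates into the image view's space
CGRect rectFromWindow = [imageView convertRect:windowRect fromView:nil];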
Subtract the subview's frame.origin from a point in the parent view to get the same point in the subview's coordinates:
subviewX = parentX - subview.frame.origin.x;
subviewY = parentY - subview.frame.origin.y;
Starting with code like:
UIImageView *superView = ....;
UIImageView *subView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, subViewWidth, subViewHeight)];
subView.center = CGPointMake(subViewCenterX, subViewCenterY);
[superView addSubview:subView];
The (subViewCenterX, subViewCenterY) coordinate is the point in superView where the center of subView is "pinned". The subView can be moved around with respect to the superView by moving its center. For example, we can write
subView.center=CGPointMake(subViewCenterX+1, subViewCenterY);
to move it 1 point to the right. Now let's say we have a point (X, Y) in the superView, and we want to find the corresponding point (x, y) in the subView, so that (X, Y) and (x, y) refer to the same point on the screen. The formula for x is:
x=X+subViewWidth/2-subViewCenterX;
and similarly for y:
y=Y+subViewHeight/2-subViewCenterY;
To explain this: if you draw a box representing the superView, and another (larger) box representing the subView, the difference subViewWidth/2 - subViewCenterX is "the width of the part of the subView box sticking out to the left of the superView".
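A quick numeric check of those formulas (all numbers made up for illustration):
// say the subview is 200 x 100, with its center pinned at (40, 30) in the superview
CGFloat subViewWidth = 200, subViewHeight = 100;
CGFloat subViewCenterX = 40, subViewCenterY = 30;
// the superview's origin (X, Y) = (0, 0) then maps to:
CGFloat x = 0 + subViewWidth / 2 - subViewCenterX;    // 100 - 40 = 60
CGFloat y = 0 + subViewHeight / 2 - subViewCenterY;   //  50 - 30 = 20
// i.e. the superview's top-left corner is 60 points in from the subview's left edge
// and 20 points down from its top edge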

Getting x and y Position?

How do I get the X and Y position of a UIImage? I have an image which randomly changes its position, so I need to get the image's current x and y position so I can match it against another image's x and y position. I have to get the position of the image without any touch on the screen. Please suggest a solution.
Thank you.
You can get the frame of any view by accessing its frame property. Within that frame struct are a CGPoint origin and a CGSize size value. The origin is probably what you're looking for. Note that it is expressed as the position of the view relative to its superview.
For example, the following will print the origin coordinate of a view called imageView within its superview:
CGPoint origin = imageView.frame.origin;
NSLog(@"Current position: (%f, %f)", origin.x, origin.y);
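To then compare that position with another image view's position (otherImageView is illustrative, and both views are assumed to share the same superview so their frames are in the same coordinate space), something like this should work:
// exact match of the two origins
if (CGPointEqualToPoint(imageView.frame.origin, otherImageView.frame.origin)) {
    NSLog(@"The two images are at the same position");
}
// or check whether the two image views overlap at all
if (CGRectIntersectsRect(imageView.frame, otherImageView.frame)) {
    NSLog(@"The two images overlap");
}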