Coordinates of the four points of a UIView which has been rotated - iPhone

Is it possible to get this? If so, can anyone please tell me how?

Get the four corner points of your view's frame (view.frame)
Retrieve the CGAffineTransform applied to your view (view.transform)
Then apply that same affine transform to the four points using CGPointApplyAffineTransform (and the sibling functions in the CGAffineTransform Reference), as sketched below.
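A minimal sketch of those three steps, assuming the default anchor point (the view's center), so each corner is expressed relative to the center before the transform is applied:

CGPoint center = view.center;          // position of the anchor point in the superview
CGRect bounds = view.bounds;
CGAffineTransform t = view.transform;

CGPoint corners[4] = {
    CGPointMake(CGRectGetMinX(bounds), CGRectGetMinY(bounds)), // top-left
    CGPointMake(CGRectGetMaxX(bounds), CGRectGetMinY(bounds)), // top-right
    CGPointMake(CGRectGetMaxX(bounds), CGRectGetMaxY(bounds)), // bottom-right
    CGPointMake(CGRectGetMinX(bounds), CGRectGetMaxY(bounds)), // bottom-left
};
for (int i = 0; i < 4; i++) {
    // Offset so the corner is relative to the center (the transform's pivot),
    // apply the view's transform, then shift back into superview coordinates.
    CGPoint p = CGPointMake(corners[i].x - CGRectGetMidX(bounds),
                            corners[i].y - CGRectGetMidY(bounds));
    p = CGPointApplyAffineTransform(p, t);
    corners[i] = CGPointMake(p.x + center.x, p.y + center.y);
}
// corners[] now holds the four transformed points in the superview's coordinate space.

The convertPoint:-based answer below reaches the same result without the manual math.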

CGPoint topLeft = view.bounds.origin;
topLeft = [[view superview] convertPoint:topLeft fromView:view];
CGPoint topRight = CGPointMake(view.bounds.origin.x + view.bounds.size.width, view.bounds.origin.y);
topRight = [[view superview] convertPoint:topRight fromView:view];
// ... likewise for the other points
The first point is in the view's own coordinate space, which is always "upright". The next statement then finds the corresponding point in the parent view's coordinate space. Note that for an un-transformed view this would equal view.frame.origin. The calculations above give the equivalent of the corners of view.frame for a transformed view.
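For completeness, the two remaining corners follow the same pattern (a small sketch, using CGRectGetMaxX / CGRectGetMaxY on the view's bounds):

CGPoint bottomLeft = CGPointMake(CGRectGetMinX(view.bounds), CGRectGetMaxY(view.bounds));
bottomLeft = [[view superview] convertPoint:bottomLeft fromView:view];
CGPoint bottomRight = CGPointMake(CGRectGetMaxX(view.bounds), CGRectGetMaxY(view.bounds));
bottomRight = [[view superview] convertPoint:bottomRight fromView:view];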

Related

Finding Bounding Box of Rotated Image that has Transparent Background

I have a UIImage that has a transparent background. When rotating this image, I'd like to find the bounding box around the graphic (i.e. the non-transparent part; if you rotate it in a UIImageView, the bounding box is computed around the entire UIImage, including the transparent part).
Is there an Apple library that might do this for me? If not, does anyone know how this can be done?
If I understood your question correctly, you can retrieve the frame (not the bounds) of the UIImageView, get the individual corner CGPoints, and explicitly transform those points to get the transformed rectangle. The reason is that Apple's documentation says: "You can operate on a CGRect structure by calling the function CGRectApplyAffineTransform. This function returns the smallest rectangle that contains the transformed corner points of the rectangle passed to it." Transforming the points one by one avoids this auto-correcting behavior.
CGRect originalFrame = imageView.frame; // imageView is your UIImageView instance
CGPoint p1 = originalFrame.origin;
CGPoint p2 = p1; p2.x += originalFrame.size.width;
CGPoint p3 = p1; p3.y += originalFrame.size.height;
// Use the same transform that you applied to the UIImageView here
CGPoint transformedP1 = CGPointApplyAffineTransform(p1, transform);
CGPoint transformedP2 = CGPointApplyAffineTransform(p2, transform);
CGPoint transformedP3 = CGPointApplyAffineTransform(p3, transform);
Now you should be able to define a new rectangle from these 3 points (the 4th one is optional, because the width and height can be calculated from 3 points). One point to note is that you cannot store this new rectangle in a CGRect, because a CGRect is defined by an origin and a size, so its edges are always parallel to the x and y axes. Apple's CGRect definition does not allow rotated rectangles to be stored.
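As a small follow-up sketch (not from the original answer): under an affine transform the four corners form a parallelogram, so the missing corner can be computed directly and all four stored in a plain array, since a CGRect cannot represent a rotated rectangle:

CGPoint transformedP4 = CGPointMake(transformedP2.x + transformedP3.x - transformedP1.x,
                                    transformedP2.y + transformedP3.y - transformedP1.y);
CGPoint transformedCorners[4] = { transformedP1, transformedP2, transformedP3, transformedP4 };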

iOS MapKit, following a path

I have a path rendering on my map. I have a series of waypoints and I'm simulating movement through them.
When changing the display region, it appears that my coordinate, when converted to a CGPoint, is floored (floorf) by Apple's implementation. This causes a very jittery appearance instead of a smooth one.
Is there any way to get around this?
[m_mapView setRegion: MKCoordinateRegionMake(coord, MKCoordinateSpanMake(0, 0))];
The map then tries to center on this given point. However, the point may not be pixel-aligned, as can be seen with the following function.
CGPoint point = [m_mapView convertCoordinate:coord toPointToView:m_mapView];
Thus the map view floors the center point's result to pixel-align the underlying map.
I make the view larger than the screen to account for the offset and to avoid clipping.
I simulate a point moving along the route and place it using an annotation and then center on that point at 30 frames per second.
Assume coord is the position of the moving point.
[m_mapView setCenterCoordinate: coord];
CGPoint p = [m_mapView convertCoordinate:m_mapView.centerCoordinate toPointToView:m_mapView];
CGPoint p1 = [m_mapView convertCoordinate:coord toPointToView:m_mapView];
CGPoint offset = CGPointMake(p.x - p1.x, p.y - p1.y);
CGRect frame = CGRectInset(CGRectMake(0.0f, 0.0f, 1024.0f, 1024.0f), -50.0f, -50.0f);
frame.origin.x += offset.x;
frame.origin.y += offset.y;
[m_mapView setFrame: frame];
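A hedged sketch of how the pieces above might be driven at roughly 30 frames per second with a CADisplayLink; -currentCoordinate is a hypothetical method returning the simulated position along the route, and the 1024-point oversized frame with the -50 inset is taken from the snippet above:

- (void)startFollowingRoute {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(step:)];
    link.frameInterval = 2; // roughly 30 fps on a 60 Hz display
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link {
    CLLocationCoordinate2D coord = [self currentCoordinate]; // hypothetical: next point on the route
    [m_mapView setCenterCoordinate:coord];

    // Measure how far the map snapped (floored) away from the requested coordinate...
    CGPoint p = [m_mapView convertCoordinate:m_mapView.centerCoordinate toPointToView:m_mapView];
    CGPoint p1 = [m_mapView convertCoordinate:coord toPointToView:m_mapView];
    CGPoint offset = CGPointMake(p.x - p1.x, p.y - p1.y);

    // ...and shift the oversized frame by that remainder to hide the snapping.
    CGRect frame = CGRectInset(CGRectMake(0.0f, 0.0f, 1024.0f, 1024.0f), -50.0f, -50.0f);
    frame.origin.x += offset.x;
    frame.origin.y += offset.y;
    m_mapView.frame = frame;
}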

Translating a view and then rotating it problem

I have a custom UIImageView that I can drag around the screen by applying a translation (xDif and yDif are the distances the finger moved):
CGAffineTransform translate = CGAffineTransformMakeTranslation(xDif, yDif);
[self setTransform: CGAffineTransformConcat([self transform], translate)];
Let's say I moved the ImageView 50 px in both the x and y directions. I then try to rotate the ImageView (via a gesture recognizer) with:
CGAffineTransform transform = CGAffineTransformMakeRotation([recognizer rotation]);
myImageView.transform = transform;
What happens is that the ImageView suddenly jumps back to where it was originally located (before the translation, not the moved position + 50 px in both directions).
(It seems that no matter how I translate the view, self.center of the ImageView subclass stays the same: where it was originally laid out in IB.)
Another problem: if I rotate the ImageView by 30 degrees and then try to rotate it a bit more, it again starts from the original position (angle = 0) and goes from there. Why doesn't it start from 30 degrees instead of 0?
You are overwriting the earlier transform. To add to the current transform, do this:
myImageView.transform = CGAffineTransformRotate(myImageView.transform, recognizer.rotation);
Since you're changing the transform property serially, you should use CGAffineTransformRotate, CGAffineTransformTranslate, and CGAffineTransformScale instead, so that you build on the original transform rather than create a new one.
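For example, a rotation handler built on that idea might look like the following sketch (resetting recognizer.rotation keeps each callback's delta incremental, so the view continues from its current angle instead of restarting at 0):

- (void)handleRotation:(UIRotationGestureRecognizer *)recognizer {
    myImageView.transform = CGAffineTransformRotate(myImageView.transform, recognizer.rotation);
    recognizer.rotation = 0.0; // consume the delta so it isn't applied again on the next callback
}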

UIImageView coordinate to subview coordinates

If I start with a UIImageView, and I add a subview, how do I translate a coordinate in the original UIImageView to a corresponding coordinate (the same place on the screen) in the subview?
UIView provides methods for exactly this purpose. In your case you have two options:
CGPoint newLocation = [imageView convertPoint:thePoint toView:subview];
or
CGPoint newLocation = [subview convertPoint:thePoint fromView:imageView];
They both do the same thing, so pick whichever feels more appropriate. There are also equivalent methods for converting rects. These methods convert between any two views in the same window; if the destination view is nil, they convert to/from the window's base coordinates. They can handle views that aren't direct descendants of each other, and they can also handle views with transforms (though the rect methods may not produce accurate results when the transform contains any rotation or skewing).
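For rects the pattern is the same (a quick sketch; someRect is assumed to be a rect in imageView's coordinate space):

CGRect newRect = [imageView convertRect:someRect toView:subview];
or
CGRect newRect = [subview convertRect:someRect fromView:imageView];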
Subtract the subview's frame.origin from the point in the parent view to get the same point in the subview's coordinates:
CGFloat subviewX = parentX - subview.frame.origin.x;
CGFloat subviewY = parentY - subview.frame.origin.y;
Starting with code like:
UIImageView *superView = ....;
UIImageView *subView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, subViewWidth, subViewHeight)];
subView.center=CGPointMake(subViewCenterX, subViewCenterY);
[superView addSubview:subView];
The (subViewCenterX, subViewCenterY) coordinate is the point in superView where the center of subView is "pinned". The subView can be moved around with respect to the superView by moving its center. For example, we can do
subView.center=CGPointMake(subViewCenterX+1, subViewCenterY);
to move it 1 point to the right. Now let's say we have a point (X,Y) in the superView, and we want to find the corresponding point (x,y) in the subView, so that (X,Y) and (x,y) refer to the same point on the screen. The formula for x is:
x=X+subViewWidth/2-subViewCenterX;
and similarly for y:
y=Y+subViewHeight/2-subViewCenterY;
To explain this: if you draw a box representing the superView and another (larger) box representing the subView, the difference subViewWidth/2 - subViewCenterX is "the width of the part of the subView box sticking out to the left of the superView".
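As a small sketch (not from the original answer), the two formulas can be wrapped in a helper; the parameter names follow the answer's variables:

static CGPoint pointInSubView(CGPoint pointInSuperView,
                              CGFloat subViewWidth, CGFloat subViewHeight,
                              CGFloat subViewCenterX, CGFloat subViewCenterY) {
    // Shift from the superView's coordinates into the subView's coordinates.
    CGFloat x = pointInSuperView.x + subViewWidth / 2 - subViewCenterX;
    CGFloat y = pointInSuperView.y + subViewHeight / 2 - subViewCenterY;
    return CGPointMake(x, y);
}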

Check UIButton position after rotating UIView

I have a button that I add on a UIImageView. A method rotates the UIImageView when the user touches the screen. I want to know if there is a way to get the new position of the button after the rotation is done.
Right now I'm always getting the original position with this method:
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
NSLog(#"Xposition : %f", myButton.frame.origin.x);
NSLog(#"Yposition : %f", myButton.frame.origin.y);
}
Thanks,
This is a tricky question. Referring to the UIView documentation on the frame property, it states:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
So the trick is finding a workaround, and it depends on what exactly you need. If you just need an approximation, or if your rotation is always a multiple of 90 degrees, the CGRectApplyAffineTransform() function might work well enough. Pass it the (untransformed) frame of the UIButton of interest, along with the button's current transform, and it will give you a transformed rect. Note that since a CGRect is defined by an origin, a width, and a height, it can't represent a rectangle whose sides aren't parallel to the screen edges. When the sides aren't parallel, it returns the smallest possible bounding rectangle for the rotated rect.
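A one-line sketch of that approximation (rotatedImageView here stands for whichever view actually carries the rotation; since CGRectApplyAffineTransform maps the rect about the coordinate origin, treat the result as a rough, axis-aligned bounding box rather than an exact frame):

CGRect roughBoundingBox = CGRectApplyAffineTransform(myButton.frame, rotatedImageView.transform);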
Now if you need to know the exact coordinates of one or all of the transformed points, I've written code to compute them before, but it's a bit more involved:
- (void)computeCornersOfTransformedView:(UIView *)transformedView relativeToView:(UIView *)parentView {
    /* Computes the coordinates of each corner of transformedView in the coordinate system
     * of parentView. Each corner is represented by an independent CGPoint. Doesn't do anything
     * with the transformed points because this is, after all, just an example.
     */

    // Cache the current transform, and restore the view to a normal position and size.
    CGAffineTransform cachedTransform = transformedView.transform;
    transformedView.transform = CGAffineTransformIdentity;

    // Note each of the (untransformed) points of interest.
    CGPoint topLeft = CGPointMake(0, 0);
    CGPoint bottomLeft = CGPointMake(0, transformedView.frame.size.height);
    CGPoint bottomRight = CGPointMake(transformedView.frame.size.width, transformedView.frame.size.height);
    CGPoint topRight = CGPointMake(transformedView.frame.size.width, 0);

    // Re-apply the transform.
    transformedView.transform = cachedTransform;

    // Use handy built-in UIView methods to convert the points.
    topLeft = [transformedView convertPoint:topLeft toView:parentView];
    bottomLeft = [transformedView convertPoint:bottomLeft toView:parentView];
    bottomRight = [transformedView convertPoint:bottomRight toView:parentView];
    topRight = [transformedView convertPoint:topRight toView:parentView];

    // Do something with the newly acquired points.
}
Please forgive any minor errors in the code; I wrote it in the browser, which is not the most helpful IDE...
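For reference, a call site might look like this sketch (assuming the code lives in a view controller whose self.view contains the rotated UIImageView, with myButton as its subview):

// convertPoint:toView: walks the whole view hierarchy, so the rotated UIImageView's
// transform is accounted for even though myButton itself is untransformed.
[self computeCornersOfTransformedView:myButton relativeToView:self.view];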