I have a UIImage that has a transparent background. When rotating this image, I'd like to find the bounding box around the graphic, i.e. the non-transparent part. (If you rotate it in a UIImageView, you get the bounding box around the entire UIImage, including the transparent part.)
Is there an Apple library that might do this for me? If not, does anyone know how this can be done?
If I understood your question correctly, you can retrieve the frame (not bounds) of the UIImageView, take the individual corner CGPoints, and explicitly transform those points to get the transformed rectangle. Apple's documentation says: "You can operate on a CGRect structure by calling the function CGRectApplyAffineTransform. This function returns the smallest rectangle that contains the transformed corner points of the rectangle passed to it." Transforming the points one by one avoids this auto-correcting behavior.
CGRect originalFrame = imageView.frame;
CGPoint p1 = originalFrame.origin;
CGPoint p2 = p1; p2.x += originalFrame.size.width;
CGPoint p3 = p1; p3.y += originalFrame.size.height;
// Use the same transform that you applied to the image view here
CGPoint transformedP1 = CGPointApplyAffineTransform(p1, transform);
CGPoint transformedP2 = CGPointApplyAffineTransform(p2, transform);
CGPoint transformedP3 = CGPointApplyAffineTransform(p3, transform);
Now you should be able to define a new rectangle from these three points (the fourth is optional, since the width and height can be calculated from three points). One thing to note is that you cannot store this new rectangle in a CGRect, because a CGRect is defined by an origin and a size, so its edges are always parallel to the x and y axes. Apple's CGRect definition does not allow rotated rectangles to be stored.
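If it helps, here is a minimal sketch (continuing the variable names above) of how the side lengths and the optional fourth corner could be recovered from the three transformed points; using hypot() from <math.h> for the distance formula is my own assumption:
// Side lengths of the rotated rectangle (distances between adjacent corners).
CGFloat rotatedWidth  = hypot(transformedP2.x - transformedP1.x, transformedP2.y - transformedP1.y);
CGFloat rotatedHeight = hypot(transformedP3.x - transformedP1.x, transformedP3.y - transformedP1.y);
// The fourth corner is p2 + p3 - p1 (the opposite corner of the parallelogram).
CGPoint transformedP4 = CGPointMake(transformedP2.x + transformedP3.x - transformedP1.x,
                                    transformedP2.y + transformedP3.y - transformedP1.y);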
Is it possible to get the transformed frame (the four corner points) of a view after a CGAffineTransform has been applied to it? If so, can anyone please tell me how?
Get the four points of the frame of your view (view.frame)
Retrieve the CGAffineTransform applied to your view (view.transform)
Then apply this same affine transform to the four points using CGPointApplyAffineTransform (and the sibling functions documented in the CGAffineTransform Reference).
CGPoint topLeft = view.bounds.origin;
topLeft = [[view superview] convertPoint:topLeft fromView:view];
CGPoint topRight = CGPointMake(view.bounds.origin.x + view.bounds.size.width, view.bounds.origin.y);
topRight = [[view superview] convertPoint:topRight fromView:view];
// ... likewise for the other points
The first point is in the view's coordinate space, which is always "upright". The next statement then finds the point it corresponds to in the parent view's coordinate space. Note that for an un-transformed view, that would be equal to view.frame.origin. For a transformed view, the above calculations give the equivalent of the corners of view.frame.
My question is simple.
Let us say I use this method
CGAffineTransformMakeTranslation(5.0f, 0.0f);
which translates the image view 5 pixels to the right. But is there a similar method that does the exact same thing, except that it takes the destination point as an argument rather than the values you want to move the image view by?
For example, if I wanted to move an image view to 100.0f, 0.0f what would I use?
You can use the following two options:
imgOne.center = CGPointMake(50, 50);
or
imgOne.frame = CGRectMake(50, 50, imgOne.frame.size.width, imgOne.frame.size.height);
If it's the center point you want to move to this coordinate, use:
imageView.center = CGPointMake(100.0f, 0.0f);
If it's one of the corner points, subtract/add half the view's frame's width/height to the coordinates. If you need this frequently, it's a good idea to write a small UIView category that allows you to position a view's corner on a particular coordinate.
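For example, a small category along these lines might do it (the category and method names here are illustrative, not an existing API):
@interface UIView (CornerPositioning)
- (void)moveTopLeftCornerToPoint:(CGPoint)point;
@end

@implementation UIView (CornerPositioning)
- (void)moveTopLeftCornerToPoint:(CGPoint)point {
    // The center sits half the view's width/height away from the top-left corner.
    self.center = CGPointMake(point.x + self.bounds.size.width / 2.0f,
                              point.y + self.bounds.size.height / 2.0f);
}
@end

// Usage: [imageView moveTopLeftCornerToPoint:CGPointMake(100.0f, 0.0f)];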
I have a button that I add on a UIImageView. When the user touches the screen, a method rotates the UIImageView. I want to know if there is a way to get the new position of the button after the rotation is done.
Right now I always get the original position with this method:
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
NSLog(#"Xposition : %f", myButton.frame.origin.x);
NSLog(#"Yposition : %f", myButton.frame.origin.y);
}
Thanks,
This is a tricky question. Referring to the UIView documentation on the frame property it states:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
So the trick is finding a workaround, and it depends on what exactly you need. If you just need an approximation, or if your rotation is always a multiple of 90 degrees, the CGRectApplyAffineTransform() function might work well enough. Pass it the (untransformed) frame of the UIButton of interest, along with the button's current transform and it will give you a transformed rect. Note that since a rect is defined as an origin, width and height, it can't define a rectangle with sides not parallel to the screen edges. In the case that it isn't parallel, it will return the smallest possible bounding rectangle for the rotated rect.
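A minimal sketch of that approximation, assuming myButton from the question and that its untransformed frame was captured before the rotation was applied:
CGRect untransformedFrame = myButton.frame; // captured while the transform is still the identity
// ... rotation is applied to myButton ...
CGRect roughBounds = CGRectApplyAffineTransform(untransformedFrame, myButton.transform);
// Because the view's transform is applied about its center, treat this rect as
// an approximation of the rotated extent rather than an exact position.
NSLog(@"Approximate bounding box: %@", NSStringFromCGRect(roughBounds));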
Now if you need to know the exact coordinates of one or all of the transformed points, I've written code to compute them before, but it's a bit more involved:
- (void)computeCornersOfTransformedView:(UIView*)transformedView relativeToView:(UIView*)parentView {
/* Computes the coordinates of each corner of transformedView in the coordinate system
* of parentView. Each corner is represented by an independent CGPoint. Doesn't do anything
* with the transformed points because this is, after all, just an example.
*/
// Cache the current transform, and restore the view to a normal position and size.
CGAffineTransform cachedTransform = transformedView.transform;
transformedView.transform = CGAffineTransformIdentity;
// Note each of the (untransformed) points of interest.
CGPoint topLeft = CGPointMake(0, 0);
CGPoint bottomLeft = CGPointMake(0, transformedView.frame.size.height);
CGPoint bottomRight = CGPointMake(transformedView.frame.size.width, transformedView.frame.size.height);
CGPoint topRight = CGPointMake(transformedView.frame.size.width, 0);
// Re-apply the transform.
transformedView.transform = cachedTransform;
// Use handy built-in UIView methods to convert the points.
topLeft = [transformedView convertPoint:topLeft toView:parentView];
bottomLeft = [transformedView convertPoint:bottomLeft toView:parentView];
bottomRight = [transformedView convertPoint:bottomRight toView:parentView];
topRight = [transformedView convertPoint:topRight toView:parentView];
// Do something with the newly acquired points.
}
Please forgive any minor errors in the code, I wrote it in the browser. Not the most helpful IDE...
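For the question above, one might then call it from the touch handler, something like this (myButton is assumed from the question):
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Compute the rotated button's corners in its superview's coordinate space.
    [self computeCornersOfTransformedView:myButton relativeToView:myButton.superview];
}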
It's really a pain, but whenever I draw a UIImage in -drawRect:, it's upside-down.
When I flip the coordinates, the image draws correctly, but at the cost of all other CG functions drawing "wrong" (flipped).
What's your strategy when you have to draw images and other things? Is there any rule of thumb how to not get stuck in this problem over and over again?
Also, one nasty thing when I flip the y-axis is that the CGRect from the UIImageView frame is wrong. Instead of the origin appearing at (10, 10) upper left as expected, it appears at the bottom.
But at the same time, all those normal line-drawing functions of CGContext take correct coordinates: drawing a line in -drawRect: with origin (10, 10) upper left will really start at the upper left. That's strange, because Core Graphics actually has a flipped coordinate system with y = 0 at the bottom.
So it seems like something is really inconsistent there. Drawing with CGContext functions takes coordinates as "expected" (cmon, nobody thinks in coordinates starting from bottom left, that's silly), while drawing any kind of image still works the "wrong" way.
Do you use helper methods to draw images? Or is there anything useful that makes image drawing not a pain in the butt?
Problem: Origin is at lower-left corner; positive y goes upward (negative y goes downward).
Goal: Origin at upper-left corner; positive y going downward (negative y going upward).
Solution:
Move origin up by the view's height.
Negate (multiply by -1) the y axis.
The way to do this in code is to translate up by the view bounds' height and scale by (1, -1), in that order.
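A minimal sketch of those two steps at the top of -drawRect:, assuming a context whose origin starts at the lower-left:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(context, 0.0f, self.bounds.size.height); // move the origin up by the view's height
CGContextScaleCTM(context, 1.0f, -1.0f);                       // negate the y axis
// From here on, (0, 0) is the upper-left corner and y increases downward.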
There are a couple of portions of the Quartz 2D Programming Guide that are relevant to this topic, including “Drawing to a Graphics Context on iPhone OS” and the whole chapter on Transforms. Of course, you really should read the whole thing.
You can do that by applying an affine transform to the point you want to convert to UIKit coordinates. The following is an example.
// Create a affine transform object
CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
// First translate your image View according to transform
transform = CGAffineTransformTranslate(transform, 0, - imageView.bounds.size.height);
// Then whenever you want any point according to UIKit related coordinates apply this transformation on the point or rect.
// To get tranformed point
CGPoint newPointForUIKit = CGPointApplyAffineTransform(oldPointInCGKit, transform);
// To get transformed rect
CGRect newRectForUIKit = CGRectApplyAffineTransform(oldRectInCGKit, transform);
The better answer to this problem is to use the UIImage method drawInRect: to draw your image. I'm assuming you want the image to span the entire bounds of your view. This is what you'd type in your drawRect: method.
Instead of:
CGContextRef ctx = UIGraphicsGetCurrentContext();
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGImageRef img = [myImage CGImage];
CGRect bounds = [self bounds];
CGContextDrawImage(ctx, bounds, img);
Write this:
UIImage *myImage = [UIImage imageNamed:@"theImage.png"];
CGRect bounds = [self bounds];
[myImage drawInRect:bounds];
It's really a pain, but whenever I draw a UIImage in -drawRect:, it's upside-down.
Are you telling the UIImage to draw, or getting its CGImage and drawing that?
As noted in “Drawing to a Graphics Context on iPhone OS”, UIImages are aware of the difference in co-ordinate spaces and should draw themselves correctly without you having to flip your co-ordinate space yourself.
// Returns a flipped copy of the given CGImage by drawing it into a UIKit image
// context, whose coordinate system is flipped relative to Core Graphics.
CGImageRef flip (CGImageRef im) {
CGSize sz = CGSizeMake(CGImageGetWidth(im), CGImageGetHeight(im));
UIGraphicsBeginImageContextWithOptions(sz, NO, 0);
CGContextDrawImage(UIGraphicsGetCurrentContext(),
CGRectMake(0, 0, sz.width, sz.height), im);
// The returned CGImageRef is owned by an autoreleased UIImage, so use it
// (or retain it) before the autorelease pool drains.
CGImageRef result = [UIGraphicsGetImageFromCurrentImageContext() CGImage];
UIGraphicsEndImageContext();
return result;
}
Call the above function using the code below.
This code takes the left half of an image from an existing UIImageView and sets the resulting image on a new image view, imgViewLeft:
// sz (the original image's size) and leftReference (a CGImageRef containing the
// left half of the original image) are assumed to have been created earlier.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(sz.width/2.0, sz.height), NO, 0);
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con,
CGRectMake(0, 0, sz.width/2.0, sz.height),
flip(leftReference));
imgViewLeft = [[UIImageView alloc] initWithImage:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
My use case is an iPhone application where I animate the scale, rotation and translation of an image.
So, I concat everything and feed it to the transform property, but there is one problem:
Since my images vary in size, it is a problem to position them correctly. I'm used to an inverted y-axis coordinate system, so I want my image to be positioned at exactly 60 pixels on the y axis.
So, how do I change from the original Cartesian y axis to an inverted y axis point of view?
As smacl points out, the easiest way to do this is to shift your origin to the bottom-left of the screen by using (screenheight - viewheight - y) instead of y in the origins of your views.
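A minimal sketch of that arithmetic, with hypothetical numbers (a 480-point-tall screen and a 100-point-tall imageView to be placed 60 points up from the bottom):
CGFloat screenHeight = 480.0f;
CGFloat invertedY    = 60.0f; // desired position, measured up from the bottom
CGRect frame = imageView.frame;
frame.origin.y = screenHeight - frame.size.height - invertedY; // 480 - 100 - 60 = 320
imageView.frame = frame;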
However, you can flip the coordinate system of your main view's layers using a CATransform3D. I do this so that I can share the same Core Animation CALayer layout code between my iPhone application and a Mac client (the iPhone inverts the normal Quartz coordinate system for CALayers to match that of the UIViews). All you need to do to enable this is to place the line
self.layer.sublayerTransform = CATransform3DMakeScale(1.0f, -1.0f, 1.0f);
in your initialization code for your layer-hosting UIView. Remember that this will flip your CALayers, so any UIKit text rendering in those layers may also need to be flipped using code similar to the following:
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0.0f, self.frame.size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
UIFont *theFont = [UIFont fontWithName:@"Helvetica" size:fontSize];
[text drawAtPoint:CGPointZero withFont:theFont];
CGContextRestoreGState(context);
You can do a similar sort of inversion using a CGAffineTransform, but you will also need to apply a translation to make that work:
CGAffineTransform flipTransform = CGAffineTransformMakeTranslation(0.0f, self.frame.size.height);
flipTransform = CGAffineTransformScale(flipTransform, 1.0f, -1.0f);
You may be able to use the affine transform to convert your origin coordinates using CGPointApplyAffineTransform().
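For example (a sketch using the flipTransform built above):
CGPoint invertedOrigin = CGPointMake(20.0f, 60.0f); // y measured up from the bottom
CGPoint uikitOrigin = CGPointApplyAffineTransform(invertedOrigin, flipTransform);
// uikitOrigin.y is now self.frame.size.height - 60.0f, measured down from the top.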
For every y ordinate, use y' = top - y, where top is the y ordinate of the top of the bounding box you are drawing in.
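For example, with a 480-point-tall bounding box whose top sits at y = 480, a point intended to be 60 points below the top is drawn at y = 480 - 60 = 420.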