Proper use of UIRectClip to scale a UIImage down to icon size - iPhone

Given a UIImage of any dimension, I wish to generate a square "icon" sized version, px pixels to a side, without any distortion (stretching). However, I'm running into a little snag. Not quite sure where the problem is. Here's what I'm doing so far.
First, given a UIImage's size, I determine three things: the ratio to use when scaling the image down; a delta (the difference between the scaled long side, which equals the desired icon size, and the scaled short side); and an offset (used to work out the origin coordinate when clipping the image):
if (size.width > size.height) {
    ratio = px / size.width;
    delta = (ratio * size.width) - (ratio * size.height);
    offset = CGPointMake(delta / 2, 0);
} else {
    ratio = px / size.height;
    delta = (ratio * size.height) - (ratio * size.width);
    offset = CGPointMake(0, delta / 2);
}
Now, let's say we have an image 640px wide by 480px high, and we want to get a 50px x 50px icon out of it. The width is greater than the height, so our calculations are:
ratio = 50px / 640px = 0.078125
delta = (ratio * 640px) - (ratio * 480px) = 50px - 37.5px = 12.5px
offset = {x=6.25, y=0}
Next, I create a CGRect rect that is large enough to be cropped down to our desired icon size without distortion, plus a clipRect for clipping purposes:
CGRect rect = CGRectMake(0.0, 0.0,
                         (ratio * size.width) + delta,
                         (ratio * size.height) + delta);
CGRect clipRect = CGRectMake(offset.x, offset.y, px, px);
Substituting our values from above, we get:
rect = origin {x=0.0, y=0.0}, size {width=62.5, height=50.0}
clipRect = origin {x=6.25, y=0}, size {width=50.0, height=50.0}
So now we have a 62.5px wide by 50px high rect to work with, and a clipping rectangle that grabs the "middle" 50x50 portion.
On to the home stretch! Next, we set up our image context, draw the UIImage (called myImage here) into the rect, set the clipping rectangle, get the (presumably now-clipped) image, use it, and finally clean up our image context:
UIGraphicsBeginImageContext(rect.size);
[myImage drawInRect:rect];
UIRectClip(clipRect);
UIImage *icon = UIGraphicsGetImageFromCurrentImageContext();
// Do something with the icon here ...
UIGraphicsEndImageContext();
Only one problem: The clipping never occurs! I end up with an image 63px wide x 50px high. :(
Perhaps I'm misusing/misunderstanding UIRectClip? I've tried shuffling various things around: swapping the use of rect and clipRect, moving UIRectClip before drawInRect:. No dice.
I tried searching for an example of this method online as well, to no avail. For the record, UIRectClip is defined as:
"Modifies the current clipping path by intersecting it with the specified rectangle."
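As a sanity check, here's a minimal standalone sketch (separate from my icon code) of how UIRectClip behaves: the clip only affects drawing performed after it is set, and it never shrinks the context itself, which would explain why calling it after drawInRect: changes nothing.
UIGraphicsBeginImageContext(CGSizeMake(100.0, 100.0));
UIRectClip(CGRectMake(25.0, 25.0, 50.0, 50.0));      // set the clip first
[[UIColor redColor] setFill];
UIRectFill(CGRectMake(0.0, 0.0, 100.0, 100.0));      // only the clipped 50x50 square is actually filled
UIImage *test = UIGraphicsGetImageFromCurrentImageContext(); // still 100x100; red appears only in the middle
UIGraphicsEndImageContext();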
Shuffling things around gets us a little bit closer:
UIGraphicsBeginImageContext(clipRect.size);
UIRectClip(rect);
[myImage drawInRect:rect];
Now we don't have distortion, but the clipped image isn't centered on the original as I expected. Still, at least the image is 50x50, though the variable names are now fouled up as a result of said shuffling. (I'll respectfully leave renaming as an exercise for the reader.)

Eureka! I had things a little mixed up. This works:
CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                             (ratio * size.width) + delta,
                             (ratio * size.height) + delta);
UIGraphicsBeginImageContext(CGSizeMake(px, px));
UIRectClip(clipRect);
[myImage drawInRect:clipRect];
UIImage *icon = UIGraphicsGetImageFromCurrentImageContext();
// Do something with the icon here ...
UIGraphicsEndImageContext();
No more need for rect. The trick appears to be using a negative offset in the clipping rectangle, thereby lining up the origin of where we want to grab our 50 x 50 image (in this example).
Perhaps there's an easier way. If so, please weigh in!

I wanted to achieve a similar thing but found the answer from the original poster didn't quite work. It distorted the image. That may well be solely because he didn't post the whole solution and had changed how some of the variables are initialised:
if (size.width > size.height)
    ratio = px / size.width;
That was wrong for my solution (which wanted to use the largest possible square from the source image). Also, it is not necessary to use UIRectClip at all: if you make the context the size of the image you want to extract, nothing gets drawn outside that rect anyway. It is just a matter of scaling the image rect and offsetting one of the origin coordinates. I have posted my solution below:
+ (UIImage *)makeIconImage:(UIImage *)image
{
    CGFloat destSize = 400.0;
    CGRect rect = CGRectMake(0, 0, destSize, destSize);
    UIGraphicsBeginImageContext(rect.size);

    if (image.size.width != image.size.height)
    {
        CGFloat ratio;
        CGRect destRect;
        if (image.size.width > image.size.height)
        {
            ratio = destSize / image.size.height;
            CGFloat destWidth = image.size.width * ratio;
            CGFloat destX = (destWidth - destSize) / 2.0;
            destRect = CGRectMake(-destX, 0, destWidth, destSize);
        }
        else
        {
            ratio = destSize / image.size.width;
            CGFloat destHeight = image.size.height * ratio;
            CGFloat destY = (destHeight - destSize) / 2.0;
            destRect = CGRectMake(0, destY, destSize, destHeight);
        }
        [image drawInRect:destRect];
    }
    else
    {
        [image drawInRect:rect];
    }

    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}

wheeliebin's answer is correct, but he forgot a minus sign in front of destY:
destRect = CGRectMake(0, -destY, destSize, destHeight);
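For convenience, here is the whole method again with that single sign change applied; everything else is unchanged from wheeliebin's answer:
+ (UIImage *)makeIconImage:(UIImage *)image
{
    CGFloat destSize = 400.0;
    CGRect rect = CGRectMake(0, 0, destSize, destSize);
    UIGraphicsBeginImageContext(rect.size);

    if (image.size.width != image.size.height)
    {
        CGFloat ratio;
        CGRect destRect;
        if (image.size.width > image.size.height)
        {
            // Landscape: scale to fill the height, centre horizontally.
            ratio = destSize / image.size.height;
            CGFloat destWidth = image.size.width * ratio;
            CGFloat destX = (destWidth - destSize) / 2.0;
            destRect = CGRectMake(-destX, 0, destWidth, destSize);
        }
        else
        {
            // Portrait: scale to fill the width, centre vertically (note the minus sign).
            ratio = destSize / image.size.width;
            CGFloat destHeight = image.size.height * ratio;
            CGFloat destY = (destHeight - destSize) / 2.0;
            destRect = CGRectMake(0, -destY, destSize, destHeight);
        }
        [image drawInRect:destRect];
    }
    else
    {
        [image drawInRect:rect];
    }

    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}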

Related

How to set image at top avoiding space in UIImageView

I have a UIImageView of size 220x155 with content mode Aspect Fit. I'm dynamically inserting different images in different resolutions, all larger than the UIImageView. As the content mode is set to Aspect Fit, the image is scaled, preserving its aspect ratio, to fit the UIImageView.
My problem is that if, for instance, the image inside the UIImageView is scaled to 220x100, I would like the UIImageView to shrink from a height of 155 to 100 as well, to avoid space between my elements.
How can I do this?
I wrote this method to get the frame of the image inside the image view once it has loaded an image.
So , the requirements for me were the same as in your case:
1) image view with aspect fit content mode
2) get the exact frame of the image ( this way you can re-position the image view )
Hope this helps:
- (CGRect)getFrameOfImage:(AsyncImageView *)imgView
{
    if (!imgView.loaded)
        return CGRectZero;

    CGSize imgSize = imgView.image.size;
    CGSize frameSize = imgView.frame.size;
    CGRect resultFrame;

    if (imgSize.width < frameSize.width && imgSize.height < frameSize.height)
    {
        // Image is smaller than the view: no scaling needed.
        resultFrame.size = imgSize;
    }
    else
    {
        // Scale down by whichever dimension overflows the view the most.
        float widthRatio = imgSize.width / frameSize.width;
        float heightRatio = imgSize.height / frameSize.height;
        float maxRatio = MAX(widthRatio, heightRatio);
        NSLog(@"widthRatio = %.2f , heightRatio = %.2f , maxRatio = %.2f", widthRatio, heightRatio, maxRatio);
        resultFrame.size = CGSizeMake(imgSize.width / maxRatio, imgSize.height / maxRatio);
    }

    // Centre the result on the image view's centre (in superview coordinates).
    resultFrame.origin = CGPointMake(imgView.center.x - resultFrame.size.width / 2,
                                     imgView.center.y - resultFrame.size.height / 2);
    return resultFrame;
}
I am using AsyncImageView here, but it will work just as well with UIImageView. The important thing to remember is to call this method AFTER the image has loaded.
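For example, a quick usage sketch (self.imageView is a placeholder for your own image view) that shrinks the view to the height of the displayed image:
CGRect imageFrame = [self getFrameOfImage:self.imageView];
if (!CGRectIsEmpty(imageFrame))
{
    CGRect viewFrame = self.imageView.frame;
    viewFrame.size.height = imageFrame.size.height;   // collapse the empty space above/below the image
    self.imageView.frame = viewFrame;
}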
Cheers!
It's very simple; you just need to get the image's actual size, which can be done with
UIImage *image = [UIImage imageNamed:@""];
Then you just need to set the frame, like so:
imageView.frame = CGRectMake(0.0, 0.0, image.size.width, image.size.height);
Hope this helps you.
Once the image view's image is set to the new image (and thus scaled), you can get the height of the image inside the image view (imageView.image.size.height) and set the image view's height (frame) accordingly.
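A small sketch of that idea (variable names assumed), computing the height the image will occupy when aspect-fit into the view's current width, as in the 220x100 example above:
// Sketch: assumes the width is the limiting dimension, as in the 220x100 example.
UIImage *img = imageView.image;
CGFloat fitRatio = imageView.frame.size.width / img.size.width;

CGRect frame = imageView.frame;
frame.size.height = img.size.height * fitRatio;   // height at which the image is displayed under Aspect Fit
imageView.frame = frame;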

Manually Scaling UIImage for UIImageView

What is the best way to manually reproduce
contentMode = UIViewContentModeScaleAspectFit;
without using it?
I need to scale a UIImageView (inside a scroll view) to fit the aspect ratio. I need to know the new size of the image to draw overlays over it.
Recently, I needed to find the frame of the image inside an image view so that I could add touchable views over that image. This is how I did it:
- (void)calculateScaleAndContainerFrame
{
    if (!imageView || !image) return;

    CGSize imageSize = image.size;
    CGSize imageViewSize = imageView.frame.size;

    float imageRatio = imageSize.width / imageSize.height;
    float viewRatio = imageViewSize.width / imageViewSize.height;

    // Under aspect fit, whichever dimension overflows the view the most determines the scale factor.
    if (imageRatio > viewRatio) {
        scale = imageSize.width / imageViewSize.width;
    } else {
        scale = imageSize.height / imageViewSize.height;
    }

    CGRect frame = CGRectZero;
    frame.size = CGSizeMake(roundf(imageSize.width / scale), roundf(imageSize.height / scale));
    frame.origin = CGPointMake((imageViewSize.width - frame.size.width) / 2.0,
                               (imageViewSize.height - frame.size.height) / 2.0);
    [container setFrame:frame];
}
I'm pretty sure you can use it as a guide, replacing the imageViewSize with the content size of your scroll view (or the view you want to put your image in).
Note 1: In my case, I needed to center the view vertically, if you don't, just set the y to 0 on the line where I set the frame origin. Same for x if you don't want to center the image horizontally.
Note 2: This is NOT, by any means, code you can just plug into your project and expect to work; you'll probably have to read it, understand it, and then apply the method to your own project. I don't have time right now to modify it to your needs.
Note 3: With that code I managed to get a view sitting perfectly over the image inside an image view that used that content mode, so it works.
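As an aside, if linking AVFoundation is an option, it ships a helper that computes the aspect-fit rect for you. A minimal sketch (imageView is assumed to be your image view with its image already set):
#import <AVFoundation/AVFoundation.h>

// AVMakeRectWithAspectRatioInsideRect returns the rect the image occupies
// when aspect-fit into the given bounds.
CGRect fitted = AVMakeRectWithAspectRatioInsideRect(imageView.image.size, imageView.bounds);
// fitted.size is the displayed size; fitted.origin centres it within the bounds.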

iPhone SDK: Problem saving one image over another

Basically, I am making an app that involves a user taking a photo, or selecting one already on their device, and then placing an overlay onto the image.
So, I seem to have coded everything fine apart from one thing: after the user has selected the overlay and positioned it, the size of the overlay has changed in the saved image, whereas the x and y values seem correct.
And so this is the code I use to add the overlay ("image" being the user's photo):
float wid = (overlay.image.size.width);
float hei = (overlay.image.size.height);
overlay.frame = CGRectMake(0, 0, wid, hei);
[image addSubview:overlay];
And this is the code used to save the resulting image:
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
// Draw the overlay
float xx = (overlay.center.x);
float yy = (overlay.center.y);
CGRect aaFrame = overlay.frame;
float width = aaFrame.size.width;
float height = aaFrame.size.height;
[overlay.image drawInRect:CGRectMake(xx, yy, width, height)];
UIGraphicsEndImageContext();
Any help? Thanks
The problem is that you are using the image's size rather than the image view's frame size. The image seems to be much larger than its image view, so when you use the image's size the overlay ends up much smaller in comparison, even though it is still its correct size. You can modify your snippet to this:
UIGraphicsBeginImageContext(image.frame.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.frame.size.width, image.frame.size.height)];
[overlay.image drawInRect:overlay.frame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Avoiding loss of quality
While the above method leads to a loss of resolution, trying to draw the parent image at its full resolution might have an unwanted effect on its child image, i.e. if the overlay wasn't high-resolution itself then it can end up looking stretched. However, you can try this code to draw it at the parent image's resolution (untested, let me know if you have problems):
float verticalScale = image.image.size.height / image.frame.size.height;
float horizontalScale = image.image.size.width / image.frame.size.width;
CGRect overlayFrame = overlay.frame;
overlayFrame.origin.x *= horizontalScale;
overlayFrame.origin.y *= verticalScale;
overlayFrame.size.width *= horizontalScale;
overlayFrame.size.height *= verticalScale;
UIGraphicsBeginImageContext(image.image.size);
// Draw the users photo
[image.image drawInRect:CGRectMake(0, 0, image.image.size.width, image.image.size.height)];
[overlay.image drawInRect:overlayFrame];
UIImage * resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Getting a resized screenshot from a UIView

I'm trying to take a screenshot of a UIView shrunk down to thumbnail size with the following code:
UIGraphicsBeginImageContext(size);
[canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
result = [UIGraphicsGetImageFromCurrentImageContext() retain];
UIGraphicsEndImageContext();
The above code will simply grab the top left portion of the view in the original unshrunk size instead.
I'm sure I've done this before, but I just can't get it working. Anyone know what's off here?
Supposing that you have a CGSize origSize which is the original size (e.g. 768x1024) and a CGSize size which is the required size, this can be done like so:
CGFloat scaleX = size.width / origSize.width;
CGFloat scaleY = size.height / origSize.height;
UIGraphicsBeginImageContextWithOptions(origSize, NO, scaleX > scaleY ? scaleY : scaleX);
[canvas.layer renderInContext:UIGraphicsGetCurrentContext()];
result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Note that we're using origSize in the begun context, not size. The scale affects the size as well.
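A variation on the same idea, in case it's useful (a sketch, untested here): scale the context's transform before rendering, so the layer is drawn shrunk into a context that is exactly the requested size:
// Sketch: scale the CTM uniformly (preserving aspect ratio) so the full-size
// layer is rendered shrunk; the context here is the target thumbnail size.
CGFloat scale = MIN(size.width / origSize.width, size.height / origSize.height);
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextScaleCTM(ctx, scale, scale);
[canvas.layer renderInContext:ctx];
result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();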
Update (roughly a year later): note that this technique interferes with (or is interfered with by) transforms on the UIView being snapshotted. If the above is not working and you're doing scale transforms on the view (or its layer), you may want to go with this solution instead: How to scale down a UIImage and make it crispy / sharp at the same time instead of blurry?
I find that this solution generates thumbnails that are the right size.
let thumbRect = CGRect(x: 0, y: 0, width: 512, height: 666)
UIGraphicsBeginImageContext(thumbRect.size)
let context = UIGraphicsGetCurrentContext()
self.view.frame = thumbRect
self.view.layer.renderInContext(context!)
let thumbImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
However, the resized image adopts the trait collection from the original view controller.
So although the size is correct, some auto layout features still end up visible in the resulting image.

Copy part of scaled UIImage

I want to be able to scale and position an image, and then save just part of that image. I currently have a UIImageView inside a UIScrollView. After zooming and positioning the image I hit a button and then have the following code.
// imageScale = current scale of UIView
// get the new width and height of the scaled UIView
float scaledImageWidth = [scrollView viewWithTag:1].frame.size.width;
float scaledImageHeight = [scrollView viewWithTag:1].frame.size.height;
// get starting X and Y coords of target in relation to UIView
// target is a box in the middle of the screen, 210x255px
float imageY = scaledImageHeight / 2 - (scrollView.contentOffset.y * imageScale);
imageY = (imageY < 0) ? (140 * imageScale) + ((imageY * -1) + (scrollView.contentOffset.y *imageScale)) : (140 * imageScale) - imageY;
float imageX = scaledImageWidth / 2 - (scrollView.contentOffset.x * imageScale);
imageX = (imageX < 0) ? (56 * imageScale) + ((imageX * -1) + (scrollView.contentOffset.x *imageScale)) : (56 * imageScale) - imageX;
// image = original unscaled UIImage
// create new UIImage that matches the size of the scaled UIView we have been working with
UIGraphicsBeginImageContext( CGSizeMake(scaledImageWidth, scaledImageHeight) );
[image drawInRect:CGRectMake(0,0,scaledImageWidth, scaledImageHeight)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// now with the UIImage that is the size we want, copy a piece of the image
CGImageRef imageRef = CGImageCreateWithImageInRect([newImage CGImage], CGRectMake(imageX,imageY,210, 255));
UIImage* myThumbnail = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
This seems to work mostly OK. The problem I see is that when I make the final copy of just a piece of the image, the copy (myThumbnail) isn't scaled. However, the source (newImage) appears to scale without problems. Does anyone know what I am missing, or whether there is a different approach to this problem?
Edit:
OK, I was a little off. The copy is scaling; the problem is that its position is off. So if the image is moved too far in one direction, the new copy won't be in the right position. For example, if I move the image so that I'm cropping the bottom left, it might give me a sliver of the right side instead of that bottom-left portion of the image.
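For comparison, a hedged sketch of deriving the crop rect directly from the scroll view's contentOffset and zoomScale instead of back-calculating it from the scaled frame. The 56/140 origin and 210x255 target box come from the code above; it assumes the image view at zoom scale 1.0 is the same size as the image in pixels:
// Sketch: crop the original, unscaled image using the visible region of the target box.
CGRect targetBox = CGRectMake(56.0, 140.0, 210.0, 255.0);   // target box in screen coordinates
CGFloat zoom = scrollView.zoomScale;

CGRect cropRect;
cropRect.origin.x = (scrollView.contentOffset.x + targetBox.origin.x) / zoom;
cropRect.origin.y = (scrollView.contentOffset.y + targetBox.origin.y) / zoom;
cropRect.size.width = targetBox.size.width / zoom;
cropRect.size.height = targetBox.size.height / zoom;

CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *myThumbnail = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);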