I'm using the following code to crop a new UIImage out of a bigger one. I've isolated the issue to the function CGImageCreateWithImageInRect(), which seems not to set some CGImage property the way I want. :-) The problem is that a call to UIImagePNGRepresentation() fails, returning nil.
CGImageRef origRef = [stillView.image CGImage];
CGImageRef cgCrop = CGImageCreateWithImageInRect(origRef, theRect);
UIImage *imgCrop = [UIImage imageWithCGImage:cgCrop];
...
NSData *data = UIImagePNGRepresentation(imgCrop);
-- libpng error: No IDATs written into file
Any idea what might be wrong, or an alternative way to crop a rect out of a UIImage?
I had the same problem, but only when testing compatibility on iOS 3.2. On 4.2 it works fine.
In the end I found this http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/ which works on both, albeit a little more verbose!
I converted this into a category on UIImage:
UIImage+Crop.h
@interface UIImage (Crop)
- (UIImage *)imageByCroppingToRect:(CGRect)rect;
@end
UIImage+Crop.m
@implementation UIImage (Crop)

- (UIImage *)imageByCroppingToRect:(CGRect)rect
{
    // create a context to do our clipping in
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    // create a rect with the size we want to crop the image to;
    // the X and Y here are zero so we start at the beginning of our
    // newly created context
    CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CGContextClipToRect(currentContext, clippedRect);

    // create a rect equivalent to the full size of the image,
    // offset by the X and Y we want to start the crop from
    // in order to cut off anything before them
    CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                 rect.origin.y * -1,
                                 self.size.width,
                                 self.size.height);

    // flip the context so the image is not drawn upside down,
    // then draw the image to our clipped context using our offset rect
    CGContextTranslateCTM(currentContext, 0.0, rect.size.height);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextDrawImage(currentContext, drawRect, self.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}

@end
A PNG contains various chunks: some hold palette info, some the actual image data, and some other information; it's a very interesting standard. The IDAT chunk is the part that actually contains the image data. If there are no "IDATs written into file", then libpng has had some issue creating a PNG from the input data.
I don't know exactly what your stillView.image is, but what happens when you pass your code a CGImageRef that is certainly valid? What are the actual values in theRect? If theRect is beyond the bounds of the image, then the cgCrop you're using to make the UIImage could easily be nil, or not nil but containing no image, or an image with width and height 0, giving libpng nothing to work with.
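If you want to fail fast instead of feeding libpng an empty image, a small sanity check could look like this (a sketch of mine; the helper name is made up, not part of the original code):

// Hypothetical helper: clamp the crop rect to the image bounds and
// bail out instead of handing libpng an empty image.
static CGImageRef CreateSafeCrop(CGImageRef source, CGRect rect)
{
    CGRect bounds = CGRectMake(0, 0, CGImageGetWidth(source), CGImageGetHeight(source));
    CGRect clamped = CGRectIntersection(rect, bounds);
    if (CGRectIsEmpty(clamped))
        return NULL; // rect was entirely outside the image
    return CGImageCreateWithImageInRect(source, clamped); // caller must CGImageRelease
}

If CreateSafeCrop() returns NULL, or UIImagePNGRepresentation() still returns nil afterwards, the problem is in the source image rather than in the rect.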
It seems the solution you are trying should work, but I recommend using this:
CGImageRef image = [stillView.image CGImage];
CGRect cropZone = theRect; // the region you want to crop, in pixels
size_t cWidth = cropZone.size.width;
size_t cHeight = cropZone.size.height;
size_t bitsPerComponent = CGImageGetBitsPerComponent(image);
size_t bytesPerRow = CGImageGetBytesPerRow(image) / CGImageGetWidth(image) * cWidth;

//Now we build a context with those dimensions.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight, bitsPerComponent, bytesPerRow, colorSpace, CGImageGetBitmapInfo(image));
CGColorSpaceRelease(colorSpace);

//Draw the image offset so the crop zone lands at the origin
//(remember that a bare bitmap context has a bottom-left origin).
CGFloat imgW = CGImageGetWidth(image);
CGFloat imgH = CGImageGetHeight(image);
CGRect drawRect = CGRectMake(-cropZone.origin.x, cropZone.origin.y + cHeight - imgH, imgW, imgH);
CGContextDrawImage(context, drawRect, image);

CGImageRef result = CGBitmapContextCreateImage(context);
UIImage *cropUIImage = [[UIImage alloc] initWithCGImage:result];
CGContextRelease(context);
CGImageRelease(result);

NSData *imgData = UIImagePNGRepresentation(cropUIImage);
UIImage *croppedImage = [self imageByCropping:yourImageView.image toRect:heredefineyourRect];
CGSize size = CGSizeMake(croppedImage.size.width, croppedImage.size.height);
UIGraphicsBeginImageContext(size);
CGPoint pointImg1 = CGPointMake(0, 0);
[croppedImage drawAtPoint:pointImg1];
[[UIImage imageNamed:yourImagenameDefine] drawInRect:CGRectMake(0, 532, 150, 80)]; // here define your rectangle
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
croppedImage = result;
yourCropImageView.image = croppedImage;
I am trying to crop an image using a rectangular frame, but somehow I am not able to do it the way it is required.
Here is what I am trying:
Here is the result I want:
Now what I need is: when Done is tapped, the image should be cropped to exactly the rectangular shape placed over it. I have tried a few things, like masking and drawing the image using the mask image rect, but no success yet.
Here is my code, which is not working:
CALayer *mask = [CALayer layer];
mask.contents = (id)[imgMaskImage.image CGImage];
mask.frame = imgMaskImage.frame;
imgEditedImageView.layer.mask = mask;
imgEditedImageView.layer.masksToBounds = YES;
Can anyone suggest a better way to implement this?
I have tried so many other things and wasted a lot of time, so any help would be great and appreciated.
Thanks.
- (UIImage *)croppedPhoto {
    // For dealing with Retina displays as well as non-Retina, we need to check
    // the scale factor, if it is available. Note that we use the size of the
    // cropping rect passed in, and not the size of the view we are taking a
    // screenshot of.
    CGRect croppingRect = CGRectMake(imgMaskImage.frame.origin.x,
                                     imgMaskImage.frame.origin.y,
                                     imgMaskImage.frame.size.width,
                                     imgMaskImage.frame.size.height);
    imgMaskImage.hidden = YES;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(croppingRect.size, YES,
                                               [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(croppingRect.size);
    }

    // Create a graphics context and translate it to the view we want to crop so
    // that even in grabbing (0,0), that origin point now represents the actual
    // cropping origin desired:
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, -croppingRect.origin.x, -croppingRect.origin.y);
    [self.view.layer renderInContext:ctx];

    // Retrieve a UIImage from the current image context:
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Return the cropped image:
    return snapshotImage;
}
Here is the way you do it:
+ (UIImage *)maskImage:(UIImage *)image andMaskingImage:(UIImage *)maskingImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskingImage CGImage];
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskingImage.size.width, maskingImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    if (mainViewContentContext == NULL) {
        CGColorSpaceRelease(colorSpace); // don't leak the color space on failure
        return NULL;
    }

    CGFloat ratio = maskingImage.size.width / image.size.width;
    if (ratio * image.size.height < maskingImage.size.height) {
        ratio = maskingImage.size.height / image.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskingImage.size.width, maskingImage.size.height}};
    // CHANGE THIS RECT ACCORDING TO YOUR NEEDS
    CGRect rect2 = {{-((image.size.width * ratio) - maskingImage.size.width) / 2, -((image.size.height * ratio) - maskingImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    CGColorSpaceRelease(colorSpace);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}
You need to have a mask image like this.
Note that
the mask image cannot have ANY transparency. Instead, transparent areas must be white, or some value between black and white; the more a pixel tends towards black, the less transparent it becomes.
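If your mask does have transparency, one workaround (my own sketch, not part of the answer above) is to flatten it onto white first, so transparent pixels become fully masked-out white:

// Sketch: flatten a mask that has an alpha channel onto a white
// background, so transparent pixels become white (fully masked out).
- (UIImage *)flattenedMaskFromImage:(UIImage *)mask
{
    UIGraphicsBeginImageContextWithOptions(mask.size, YES, mask.scale);
    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectMake(0, 0, mask.size.width, mask.size.height));
    [mask drawInRect:CGRectMake(0, 0, mask.size.width, mask.size.height)];
    UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return flattened;
}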
I got a UIImage from UIImagePickerController and am using the code from this site to resize the image:
- (UIImage *)resizedImage:(CGSize)newSize
                transform:(CGAffineTransform)transform
           drawTransposed:(BOOL)transpose
     interpolationQuality:(CGInterpolationQuality)quality {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
    CGImageRef imageRef = self.CGImage;

    // Build a context that's the same dimensions as the new size
    CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                                newRect.size.width,
                                                newRect.size.height,
                                                CGImageGetBitsPerComponent(imageRef),
                                                0,
                                                CGImageGetColorSpace(imageRef),
                                                CGImageGetBitmapInfo(imageRef));

    // Rotate and/or flip the image if required by its orientation
    CGContextConcatCTM(bitmap, transform);

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(bitmap, quality);

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);

    // Get the resized image from the context and a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    // Clean up
    CGContextRelease(bitmap);
    CGImageRelease(newImageRef);

    return newImage;
}
UIImagePNGRepresentation() failed to return NSData for the resized image, but UIImageJPEGRepresentation() succeeded.
How do we know whether a UIImage is representable as PNG or JPEG? What is missing in the above code that makes the resized image impossible to represent as PNG?
According to the Apple documentation: "This function may return nil if the image has no data or if the underlying CGImageRef contains data in an unsupported bitmap format."
What bitmap formats are supported by the PNG representation? How do I put a UIImage into a PNG-supported format?
The mistake was that, in another part of the code, the image was rescaled with the following:
CGContextRef context = CGBitmapContextCreate(NULL,
                                             size.width,
                                             size.height,
                                             8,
                                             0,
                                             CGImageGetColorSpace(source),
                                             kCGImageAlphaNoneSkipFirst);
Changing kCGImageAlphaNoneSkipFirst to CGImageGetBitmapInfo(source) fixed the problem.
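If you cannot change the offending context, another option (a sketch of mine, relying on the assumption that anything drawn through UIKit's image context ends up in a PNG-encodable format) is to re-render the image before encoding:

// Sketch: redraw an image into a standard UIKit context so that
// UIImagePNGRepresentation() receives a supported bitmap format.
- (UIImage *)pngSafeImage:(UIImage *)image
{
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *safe = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return safe;
}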
Go to the following link:
How to check if downloaded PNG image is corrupt?
It may help you. Let me know whether it works.
Happy coding!
I am working on a jigsaw type of game where I have two images for masking.
I have implemented this code for masking:
- (UIImage *)maskImage:(UIImage *)image withMaskImage:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context keeps its own reference
    if (mainViewContentContext == NULL)
        return NULL;

    CGFloat ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2, -((image.size.height * ratio) - maskImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}
[source image] + [mask image] = [masked result]
This is the final result I got after masking.
Now I would like to crop the image into pieces like these, and so on, parametrically (crop an image by its transparency).
If anyone has implemented such code or has any ideas on this scenario, please share.
Thanks.
I am using this code, as per Guntis Treulands's suggestion:
int i = 1;
for (int x = 0; x <= 212; x += 106) {
    for (int y = 0; y < 318; y += 106) {
        CGRect rect = CGRectMake(x, y, 106, 106);
        CGRect rect2x = CGRectMake(x * 2, y * 2, 212, 212);

        UIImage *orgImg = [UIImage imageNamed:@"cat@2x.png"];
        UIImage *frmImg = [UIImage imageNamed:[NSString stringWithFormat:@"%d@2x.png", i]];
        UIImage *cropImg = [self cropImage:orgImg withRect:rect2x];

        UIImageView *tmpImg = [[UIImageView alloc] initWithFrame:rect];
        [tmpImg setUserInteractionEnabled:YES];
        [tmpImg setImage:[self maskImage:cropImg withMaskImage:frmImg]];
        [self.view addSubview:tmpImg];
        i++;
    }
}
orgImg is the original cat image, frmImg is the frame for holding an individual piece (masked in Photoshop), and cropImg is a 106x106 cropped image of the original cat@2x.png.
My function for cropping is as follows:
- (UIImage *)cropImage:(UIImage *)originalImage withRect:(CGRect)rect {
    CGImageRef croppedRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef); // CGImageCreateWithImageInRect returns a +1 reference
    return cropped;
}
UPDATE 2
I became really curious to find a better way to create a Jigsaw puzzle, so I spent two weekends and created a demo project of Jigsaw puzzle.
It contains:
Provide a column/row count and it will generate the necessary puzzle pieces with the correct width/height. The more columns/rows, the smaller the width/height and the outline/inline puzzle forms.
The sides are generated randomly each time.
Pieces can be randomly positioned/rotated at launch.
Each piece can be rotated by tap, or with two fingers (like a real piece), but once released it snaps to 90/180/270/360 degrees.
Each piece can be moved if touched on its "touchable shape" boundary (which is mostly the same as the visible puzzle shape, but WITHOUT the inline shapes).
Drawbacks:
There is no checking whether a piece is in its right place.
With more than 100 pieces it starts to lag, because, when picking up a piece, it goes through all subviews until it finds the correct piece (one mitigation is sketched below).
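For that last drawback, a common mitigation is an alpha-based hit test, so a touch only counts for the piece whose pixels are opaque at that point; a minimal sketch (assuming each piece view holds its masked UIImage):

// Sketch: return YES only if the touched pixel of the piece image is opaque.
// 'point' is in the image's own coordinate space (UIKit, top-left origin).
- (BOOL)point:(CGPoint)point hitsOpaquePixelOfImage:(UIImage *)image
{
    unsigned char alpha = 0;
    // 1x1 alpha-only context; we draw the image so 'point' lands on it.
    CGContextRef ctx = CGBitmapContextCreate(&alpha, 1, 1, 8, 1, NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextTranslateCTM(ctx, -point.x, point.y - image.size.height);
    CGContextDrawImage(ctx, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    CGContextRelease(ctx);
    return alpha > 0;
}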
UPDATE
Thanks for the updated question.
I managed to get this:
As you can see, the jigsaw item is cropped correctly, and it is in a square image view (the green color is the UIImageView backgroundColor).
So, what I did was:
CGRect rect = CGRectMake(105, 0, 170, 170); // ~ location on the cat image where the second jigsaw item will be
UIImage *originalCatImage = [UIImage imageNamed:@"cat.png"]; // original cat image
UIImage *jigSawItemMask = [UIImage imageNamed:@"JigsawItemNo2.png"]; // second jigsaw item mask (visible in my answer; same width/height as the cat image)
UIImage *fullJigSawItemImage = [jigSawItemMask maskImage:originalCatImage]; // masking, so that only the second jigsaw item of the full cat image is visible
UIImage *croppedJigSawItemImage = [self cropImage:fullJigSawItemImage withRect:rect]; // cropping, so that we get a small image with the jigsaw item centered in it
For image masking I am using a UIImage category function (you can probably use your own masking function, but I'll post it anyway):
- (UIImage *)maskImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    UIImage *maskImage = self;
    CGImageRef maskImageRef = [maskImage CGImage];

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context keeps its own reference
    if (mainViewContentContext == NULL)
        return NULL;

    CGFloat ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }

    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2, -((image.size.height * ratio) - maskImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};

    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);

    // Create a CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);

    // return the image
    return theImage;
}
PREVIOUS ANSWER
Can you prepare a mask for each piece?
For example, you have that frame image. Could you cut it in Photoshop into 9 separate images, where each image shows only the corresponding piece? (Delete all the rest.)
Example: the second piece mask:
Then you use each of these newly created mask images on the cat image; each mask will hide all of the image except one piece. Thus you will have 9 piece images using 9 different masks.
For a larger or different jigsaw frame, again create separate image masks.
This is a basic solution, but not perfect, as you need to prepare each piece mask separately.
Hope it helps.
I created a masked image using a function from an iPhone blog:
UIImage *imgToSave = [self maskImage:[UIImage imageNamed:@"pic.jpg"] withMask:[UIImage imageNamed:@"sd-face-mask.png"]];
It looks good in a UIImageView:
UIImageView *imgView = [[UIImageView alloc] initWithImage:imgToSave];
imgView.center = CGPointMake(160.0f, 140.0f);
[self.view addSubview:imgView];
Then I use UIImagePNGRepresentation to save it to disk:
[UIImagePNGRepresentation(imgToSave) writeToFile:[self findUniqueSavePath] atomically:YES];
UIImagePNGRepresentation returns NSData of an image that looks different.
The output is the inverse of the image mask: the area that was cut out in the app is now visible in the file, and the area that was visible in the app is now removed. Visibility is opposite.
My mask is designed to remove everything but the face area in the picture. The UIImage looks right in the app, but after I save it to disk the file looks like the opposite. The face is removed, but everything else is there.
Please let me know if you can help!
In Quartz you can mask either with an image mask (black lets pixels through and white blocks them) or with a normal image (white lets pixels through and black blocks them), which is the opposite. It seems that, for some reason, saving treats the image mask as a normal image to mask with. One thought is to render to a bitmap context and then create the image to be saved from that.
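That last suggestion might look something like this (a rough sketch of mine, reusing imgToSave and findUniqueSavePath from the question; re-rendering through UIKit flattens the mask into ordinary pixels):

// Sketch: re-render the masked image into a plain bitmap context so
// the saved PNG matches what is shown on screen.
UIGraphicsBeginImageContextWithOptions(imgToSave.size, NO, imgToSave.scale);
[imgToSave drawInRect:CGRectMake(0, 0, imgToSave.size.width, imgToSave.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UIImagePNGRepresentation(flattened) writeToFile:[self findUniqueSavePath] atomically:YES];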
I had the exact same issue: when I saved the file it was one way, but the image returned in memory was the exact opposite.
The culprit, and the solution, was UIImagePNGRepresentation(). It normalizes the in-app image before saving it to disk, so I just inserted that function as the last step in creating the masked image and returning that.
This may not be the most elegant solution, but it works. I copied some code from my app and condensed it; not sure if the code below works as is, but if not, it's close... maybe just some typos.
Enjoy. :)
// MyImageHelperObj.h
@interface MyImageHelperObj : NSObject
+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage;
+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage;
@end

// MyImageHelperObj.m
#import <QuartzCore/QuartzCore.h>
#import "MyImageHelperObj.h"

@implementation MyImageHelperObj

+ (UIImage *)createMaskedImageWithSize:(CGSize)newSize sourceImage:(UIImage *)sourceImage maskImage:(UIImage *)maskImage
{
    // create image size rect
    CGRect newRect = CGRectZero;
    newRect.size = newSize;

    // draw source image
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0f);
    [sourceImage drawInRect:newRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

    // draw mask image
    [maskImage drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1.0f];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // create a grayscale version of the mask image to make the "image mask"
    UIImage *grayScaleMaskImage = [MyImageHelperObj createGrayScaleImage:maskImage];
    CGFloat width = CGImageGetWidth(grayScaleMaskImage.CGImage);
    CGFloat height = CGImageGetHeight(grayScaleMaskImage.CGImage);
    CGFloat bitsPerPixel = CGImageGetBitsPerPixel(grayScaleMaskImage.CGImage);
    CGFloat bytesPerRow = CGImageGetBytesPerRow(grayScaleMaskImage.CGImage);
    CGDataProviderRef providerRef = CGImageGetDataProvider(grayScaleMaskImage.CGImage);
    CGImageRef imageMask = CGImageMaskCreate(width, height, 8, bitsPerPixel, bytesPerRow, providerRef, NULL, false);

    CGImageRef maskedImage = CGImageCreateWithMask(newImage.CGImage, imageMask);
    CGImageRelease(imageMask);
    newImage = [UIImage imageWithCGImage:maskedImage];
    CGImageRelease(maskedImage);
    return [UIImage imageWithData:UIImagePNGRepresentation(newImage)];
}

+ (UIImage *)createGrayScaleImage:(UIImage *)originalImage
{
    // create gray device colorspace.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceGray();
    // create 8-bit bitmap context without alpha channel.
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, 0, space, kCGImageAlphaNone);
    CGColorSpaceRelease(space);
    // Draw image.
    CGRect bounds = CGRectMake(0.0, 0.0, originalImage.size.width, originalImage.size.height);
    CGContextDrawImage(bitmapContext, bounds, originalImage.CGImage);
    // Get image from bitmap context.
    CGImageRef grayScaleImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);
    // image is inverted. UIImage inverts orientation while converting CGImage to UIImage.
    UIImage *image = [UIImage imageWithCGImage:grayScaleImage];
    CGImageRelease(grayScaleImage);
    return image;
}

@end
I am developing a simple UIApplication in which I want to crop a UIImage (in .jpg format) with the help of CGContext. The code developed so far is as follows:
CGImageRef graphicOriginalImage = [originalImage.image CGImage];
UIGraphicsBeginImageContext(originalImage.image.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGBitmapContextCreateImage(graphicOriginalImage);
CGFloat fltW = originalImage.image.size.width;
CGFloat fltH = originalImage.image.size.height;
CGFloat X = round(fltW / 4);
CGFloat Y = round(fltH / 4);
CGFloat width = round(X + (fltW / 2));
CGFloat height = round(Y + (fltH / 2));
CGContextTranslateCTM(ctx, 0, originalImage.image.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGRect rect = CGRectMake(X, Y, width, height);
CGContextDrawImage(ctx, rect, graphicOriginalImage);
croppedImage = UIGraphicsGetImageFromCurrentImageContext();
return croppedImage;
}
The above code runs fine, but it doesn't actually crop the image.
The cropped image occupies the same memory as the original (it comes out the same size as the original).
Is the above code right for cropping an image?
Here is a good way to crop an image to a CGRect:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    // create a context to do our clipping in
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    // create a rect with the size we want to crop the image to;
    // the X and Y here are zero so we start at the beginning of our
    // newly created context
    CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CGContextClipToRect(currentContext, clippedRect);

    // create a rect equivalent to the full size of the image,
    // offset by the X and Y we want to start the crop from
    // in order to cut off anything before them
    CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                 rect.origin.y * -1,
                                 imageToCrop.size.width,
                                 imageToCrop.size.height);

    // draw the image to our clipped context using our offset rect
    CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}
Or another way:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return cropped;
}
From http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/.
The context you create to draw the image has the same size as the original image; that's why the result comes out the same size. Create the context with the size of the crop rect instead, and draw the image offset so the region you want lands at the origin.
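A minimal corrected sketch of the questioner's approach (reusing the variables from the question; UIKit's drawAtPoint: handles the coordinate flip, so the manual CTM flip is unnecessary):

// Sketch: make the context the size of the crop, not the whole image.
CGRect cropRect = CGRectMake(X, Y, fltW / 2, fltH / 2);
UIGraphicsBeginImageContext(cropRect.size);
// Draw the image shifted so the desired region lands at the origin.
[originalImage.image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();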
If you don't want to re-invent the wheel, take a look at the TouchCode project on Google Code. You will find UIImage categories that do the job (see UIImage_ThumbnailExtensions.m).