Crop image using border frame - iPhone

I am trying to crop an image using a rectangular frame, but so far I have not been able to get the result I need.
Here is what I am trying:
Here is the result I want:
When the user taps Done, the image should be cropped to exactly the rectangle placed over it. I have tried a few things, like masking and drawing the image using the mask image's rect, but with no success yet.
Here is my code, which is not working:
CALayer *mask = [CALayer layer];
mask.contents = (id)[imgMaskImage.image CGImage];
mask.frame = imgMaskImage.frame;
imgEditedImageView.layer.mask = mask;
imgEditedImageView.layer.masksToBounds = YES;
Can anyone suggest a better way to implement this?
I have tried so many other things and wasted a lot of time, so any help would be greatly appreciated.
Thanks.

- (UIImage *)croppedPhoto {
    // For dealing with Retina displays as well as non-Retina, we need to check
    // the scale factor, if it is available. Note that we use the size of the
    // cropping rect, and not the size of the view we are taking a screenshot of.
    CGRect croppingRect = CGRectMake(imgMaskImage.frame.origin.x,
                                     imgMaskImage.frame.origin.y,
                                     imgMaskImage.frame.size.width,
                                     imgMaskImage.frame.size.height);
    imgMaskImage.hidden = YES;
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(croppingRect.size, YES,
                                               [UIScreen mainScreen].scale);
    } else {
        UIGraphicsBeginImageContext(croppingRect.size);
    }
    // Create a graphics context and translate it to the view we want to crop so
    // that even in grabbing (0,0), that origin point now represents the actual
    // cropping origin desired:
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, -croppingRect.origin.x, -croppingRect.origin.y);
    [self.view.layer renderInContext:ctx];
    // Retrieve a UIImage from the current image context:
    UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // Return the cropped image:
    return snapshotImage;
}

Here is the way you do it:
+ (UIImage *)maskImage:(UIImage *)image andMaskingImage:(UIImage *)maskingImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskingImage CGImage];
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskingImage.size.width, maskingImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains the colour space
    if (mainViewContentContext == NULL)
        return NULL;
    CGFloat ratio = maskingImage.size.width / image.size.width;
    if (ratio * image.size.height < maskingImage.size.height) {
        ratio = maskingImage.size.height / image.size.height;
    }
    CGRect rect1 = {{0, 0}, {maskingImage.size.width, maskingImage.size.height}};
    // CHANGE THIS RECT ACCORDING TO YOUR NEEDS
    CGRect rect2 = {{-((image.size.width * ratio) - maskingImage.size.width) / 2, -((image.size.height * ratio) - maskingImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}
You need to have a mask image like this.
Note that
the mask image cannot have ANY transparency. Instead, transparent areas must be white or some value between black and white. The more towards black a pixel is, the less transparent it becomes.
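If your mask artwork does have an alpha channel, one option is to redraw it into a grayscale bitmap with no alpha so it satisfies the requirement above. This is a minimal sketch, assuming a UIImage called maskSource; the method name grayscaleMaskFromImage: is just an illustration.
- (UIImage *)grayscaleMaskFromImage:(UIImage *)maskSource {
    CGImageRef source = maskSource.CGImage;
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                             graySpace, kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);
    if (ctx == NULL) return nil;

    // Fill with white first (white = fully transparent area of the mask),
    // then draw the source so its darker pixels become the visible regions,
    // per the note above.
    CGContextSetGrayFillColor(ctx, 1.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, width, height));
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);

    CGImageRef grayImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *result = [UIImage imageWithCGImage:grayImage];
    CGImageRelease(grayImage);
    return result;
}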

Related

Image is blurred after cropping the image?

I have an application in which I am cropping an image taken from the camera. Everything works, but after cropping, the image appears blurred and stretched.
CGRect rect = CGRectMake(20,40,280,200);
UIGraphicsBeginImageContext(rect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// translated rectangle for drawing sub image
CGRect drawRect = CGRectMake(-rect.origin.x, -rect.origin.y,280,200);
// clip to the bounds of the image context
// not strictly necessary as it will get clipped anyway?
CGContextClipToRect(context, CGRectMake(0, 0, rect.size.width, rect.size.height));
// draw image
[image drawInRect:drawRect];
// grab image
UIImage* croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGSize size = [croppedImage size];
NSLog(@" = %@", NSStringFromCGSize(size));
NSData* pictureData = UIImagePNGRepresentation(croppedImage);
Can anybody help me find out where I am going wrong?
Try replacing
UIGraphicsBeginImageContext(rect.size);
with
UIGraphicsBeginImageContextWithOptions(rect.size, NO, [[UIScreen mainScreen] scale]);
to account for Retina displays.
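For reference, a minimal sketch of the same crop done entirely at screen scale (assuming image is the full-size source image, positioned as in the question's code):
CGRect cropRect = CGRectMake(20, 40, 280, 200);
UIGraphicsBeginImageContextWithOptions(cropRect.size, NO, [[UIScreen mainScreen] scale]);
// Draw the full image offset so the desired region lands at (0,0),
// keeping the image at its natural size to avoid stretching it.
[image drawInRect:CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                             image.size.width, image.size.height)];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();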

Cropping image with transparency in iPhone

I am working on a jigsaw-type game where I have two images for masking.
I have implemented this code for masking:
- (UIImage *)maskImage:(UIImage *)image withMaskImage:(UIImage *)maskImage {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef maskImageRef = [maskImage CGImage];
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains the colour space
    if (mainViewContentContext == NULL)
        return NULL;
    CGFloat ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }
    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2, -((image.size.height * ratio) - maskImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    return theImage;
}
This is the final result I got after masking.
Now I would like to crop the image into pieces like these, and so on, parametrically (i.e. crop an image by its transparency).
If anyone has implemented such code or has any idea about this scenario, please share.
Thanks.
I am using the following code, as per Guntis Treulands's suggestion:
int i = 1;
for (int x = 0; x <= 212; x += 106) {
    for (int y = 0; y < 318; y += 106) {
        CGRect rect = CGRectMake(x, y, 106, 106);
        CGRect rect2x = CGRectMake(x * 2, y * 2, 212, 212);
        UIImage *orgImg = [UIImage imageNamed:@"cat@2x.png"];
        UIImage *frmImg = [UIImage imageNamed:[NSString stringWithFormat:@"%d@2x.png", i]];
        UIImage *cropImg = [self cropImage:orgImg withRect:rect2x];
        UIImageView *tmpImg = [[UIImageView alloc] initWithFrame:rect];
        [tmpImg setUserInteractionEnabled:YES];
        [tmpImg setImage:[self maskImage:cropImg withMaskImage:frmImg]];
        [self.view addSubview:tmpImg];
        i++;
    }
}
orgImg is the original cat image, frmImg is the frame for holding an individual piece (masked in Photoshop), and cropImg is the 106x106 cropped image of the original cat@2x.png.
My function for cropping is as follows:
- (UIImage *)cropImage:(UIImage *)originalImage withRect:(CGRect)rect {
    CGImageRef croppedRef = CGImageCreateWithImageInRect([originalImage CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef); // the CGImageRef is owned by us and must be released
    return cropped;
}
UPDATE 2
I became really curious to find a better way to create a jigsaw puzzle, so I spent two weekends and created a demo project of a jigsaw puzzle.
It contains:
provide a column/row count and it will generate the necessary puzzle pieces with the correct width/height; the more columns/rows, the smaller the width/height and the outline/inline puzzle forms
sides are generated randomly each time
pieces can be randomly positioned/rotated at launch
each piece can be rotated by tap, or with two fingers (like a real piece); once released, it snaps to 90/180/270/360 degrees (see the snapping sketch after the Drawbacks list)
each piece can be moved if touched on its "touchable shape" boundary (which is mostly the same visible puzzle shape, but WITHOUT the inline shapes)
Drawbacks:
no checking whether a piece is in its right place
with more than 100 pieces it starts to lag, because when picking up a piece it goes through all subviews until it finds the correct piece
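The snapping mentioned above could look like this minimal sketch (my own illustration, not code from the demo project); it assumes a UIRotationGestureRecognizer attached to the piece view and a handler named pieceRotated::
#import <UIKit/UIKit.h>

- (void)pieceRotated:(UIRotationGestureRecognizer *)gesture
{
    if (gesture.state != UIGestureRecognizerStateEnded) return;
    UIView *piece = gesture.view;
    // Current rotation angle encoded in the piece's affine transform.
    CGFloat angle = atan2(piece.transform.b, piece.transform.a);
    // Round to the nearest multiple of 90 degrees (pi/2 radians).
    CGFloat snapped = round(angle / M_PI_2) * M_PI_2;
    [UIView animateWithDuration:0.2 animations:^{
        piece.transform = CGAffineTransformMakeRotation(snapped);
    }];
}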
UPDATE
Thanks for the updated question.
I managed to get this:
As you can see, the jigsaw item is cropped correctly, and it sits in a square image view (the green colour is the UIImageView backgroundColor).
So, what I did was:
CGRect rect = CGRectMake(105, 0, 170, 170); //~ location on the cat image where the second jigsaw item will be
UIImage *originalCatImage = [UIImage imageNamed:@"cat.png"]; // original cat image
UIImage *jigSawItemMask = [UIImage imageNamed:@"JigsawItemNo2.png"]; // second jigsaw item mask (visible in my answer), same width/height as the cat image
UIImage *fullJigSawItemImage = [jigSawItemMask maskImage:originalCatImage]; // masking, so that only the second jigsaw item of the full cat image is visible
UIImage *croppedJigSawItemImage = [self cropImage:fullJigSawItemImage withRect:rect]; // cropping, so that we get a small image with the jigsaw item centred in it
For image masking I am using a UIImage category function (you can probably use your own masking function, but I'll post it anyway):
- (UIImage *)maskImage:(UIImage *)image
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    UIImage *maskImage = self;
    CGImageRef maskImageRef = [maskImage CGImage];
    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = CGBitmapContextCreate(NULL, maskImage.size.width, maskImage.size.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // the context retains the colour space
    if (mainViewContentContext == NULL)
        return NULL;
    CGFloat ratio = maskImage.size.width / image.size.width;
    if (ratio * image.size.height < maskImage.size.height) {
        ratio = maskImage.size.height / image.size.height;
    }
    CGRect rect1 = {{0, 0}, {maskImage.size.width, maskImage.size.height}};
    CGRect rect2 = {{-((image.size.width * ratio) - maskImage.size.width) / 2, -((image.size.height * ratio) - maskImage.size.height) / 2}, {image.size.width * ratio, image.size.height * ratio}};
    CGContextClipToMask(mainViewContentContext, rect1, maskImageRef);
    CGContextDrawImage(mainViewContentContext, rect2, image.CGImage);
    // Create a CGImageRef of the main view bitmap content, and then
    // release that bitmap context
    CGImageRef newImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);
    UIImage *theImage = [UIImage imageWithCGImage:newImage];
    CGImageRelease(newImage);
    // return the image
    return theImage;
}
PREVIOUS ANSWER
Can you prepare a mask for each piece?
For example, you have that frame image. You could cut it in Photoshop into 9 separate images, where each image only shows the corresponding piece (delete all the rest).
Example - second piece mask:
Then you use each of these newly created mask images on the cat image - each mask hides everything but one piece. That way you end up with 9 piece images using 9 different masks.
For a larger or different jigsaw frame, again create separate image masks.
This is a basic solution, but not perfect, as you need to prepare each piece mask separately.
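A minimal sketch of that idea (the file names piece1.png ... piece9.png are placeholders I am assuming, each mask the same size as cat.png; maskImage:withMaskImage: is the method shown earlier in this question):
UIImage *catImage = [UIImage imageNamed:@"cat.png"];
NSMutableArray *pieceImages = [NSMutableArray arrayWithCapacity:9];
for (int i = 1; i <= 9; i++) {
    // Each mask shows exactly one piece; everything else is hidden.
    UIImage *pieceMask = [UIImage imageNamed:[NSString stringWithFormat:@"piece%d.png", i]];
    [pieceImages addObject:[self maskImage:catImage withMaskImage:pieceMask]];
}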
Hope it helps..

iPhone programmatically crop a square image to appear as circle

I'm trying to create an image for a custom style UIButton using an image from the camera roll on iPhone. The button has a circular background and effectively appears as a circle. Now I need an image to go in the middle of the button that also appears round.
How do I cut a square UIImage to appear round with transparency outside of the round area?
If masking is involved, do I need to pre-render a mask, or can I create one programmatically (e.g., a circle)?
Thank you!
I have never done anything like that, but try using the QuartzCore framework and its cornerRadius property. Example:
#import <QuartzCore/QuartzCore.h>
//some other code ...
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
imgView.layer.cornerRadius = 10.0f;
imgView.layer.masksToBounds = YES; // needed so the layer actually clips the image to the rounded shape
Play around with it a bit and you will get what you want.
Hope it helps.
Yes, you can use Core Graphics to draw the mask dynamically.
Then you can create the masked image.
Example of masking:
- (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)maskImage
{
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);
    CGImageRef maskedImageRef = CGImageCreateWithMask([image CGImage], mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedImageRef];
    CGImageRelease(maskedImageRef);
    CGImageRelease(mask);
    return maskedImage;
}
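As for drawing the mask dynamically, here is a minimal sketch of one way it could be done (my own assumption; the method name circularMaskWithSize: is just an illustration): render a black circle on a white background into a grayscale, alpha-free bitmap, which can then be passed as the maskImage above (black areas stay visible, white areas are hidden).
- (UIImage *)circularMaskWithSize:(CGSize)size {
    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef ctx = CGBitmapContextCreate(NULL, size.width, size.height,
                                             8, 0, graySpace, kCGImageAlphaNone);
    CGColorSpaceRelease(graySpace);
    if (ctx == NULL) return nil;

    // White background hides everything outside the circle...
    CGContextSetGrayFillColor(ctx, 1.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
    // ...and a black circle marks the area that stays visible.
    CGContextSetGrayFillColor(ctx, 0.0, 1.0);
    CGContextFillEllipseInRect(ctx, CGRectMake(0, 0, size.width, size.height));

    CGImageRef maskRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *mask = [UIImage imageWithCGImage:maskRef];
    CGImageRelease(maskRef);
    return mask;
}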
I started looking into this a couple of weeks back. I tried all the suggestions here, none of which worked well. In the great tradition of RTFM I went and read Apple's documentation on Quartz 2D Programming and came up with this. Please try it out and let me know how you go.
The code could be fairly easily altered to crop to an ellipse, or any other shape defined by a path.
Make sure you include Quartz 2D in your project.
#include <math.h>

- (UIImage *)circularScaleNCrop:(UIImage *)image :(CGRect)rect
{
    // This function returns a newImage, based on image, that has been:
    // - scaled to fit in (CGRect) rect
    // - and cropped within a circle of radius: rectWidth/2

    //Create the bitmap graphics context
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(rect.size.width, rect.size.height), NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    //Get the width and heights
    CGFloat imageWidth = image.size.width;
    CGFloat imageHeight = image.size.height;
    CGFloat rectWidth = rect.size.width;
    CGFloat rectHeight = rect.size.height;

    //Calculate the scale factor
    CGFloat scaleFactorX = rectWidth / imageWidth;
    CGFloat scaleFactorY = rectHeight / imageHeight;

    //Calculate the centre of the circle
    CGFloat imageCentreX = rectWidth / 2;
    CGFloat imageCentreY = rectHeight / 2;

    // Create and CLIP to a CIRCULAR Path
    // (This could be replaced with any closed path if you want a different shaped clip)
    CGFloat radius = rectWidth / 2;
    CGContextBeginPath(context);
    CGContextAddArc(context, imageCentreX, imageCentreY, radius, 0, 2 * M_PI, 0);
    CGContextClosePath(context);
    CGContextClip(context);

    //Set the SCALE factor for the graphics context
    //All future draw calls will be scaled by this factor
    CGContextScaleCTM(context, scaleFactorX, scaleFactorY);

    // Draw the IMAGE
    CGRect myRect = CGRectMake(0, 0, imageWidth, imageHeight);
    [image drawInRect:myRect];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Include the following code in your UIView class, replacing "monk2.png" with your own image name.
- (void)drawRect:(CGRect)rect
{
    UIImage *originalImage = [UIImage imageNamed:@"monk2.png"];
    CGFloat oImageWidth = originalImage.size.width;
    CGFloat oImageHeight = originalImage.size.height;
    // Draw the original image at the origin
    CGRect oRect = CGRectMake(0, 0, oImageWidth, oImageHeight);
    [originalImage drawInRect:oRect];
    // Set the newRect to half the size of the original image
    CGRect newRect = CGRectMake(0, 0, oImageWidth / 2, oImageHeight / 2);
    UIImage *newImage = [self circularScaleNCrop:originalImage :newRect];
    CGFloat nImageWidth = newImage.size.width;
    CGFloat nImageHeight = newImage.size.height;
    //Draw the scaled and cropped image
    CGRect thisRect = CGRectMake(oImageWidth + 10, 0, nImageWidth, nImageHeight);
    [newImage drawInRect:thisRect];
}
Here is a quick way to create rounded corners on a square image view to make it look like a perfect circle. Basically you apply a corner radius equal to 1/2 the width (width == height on a square image).
#import <QuartzCore/QuartzCore.h> //you need QuartzCore
...
float width = imageView.bounds.size.width; // we could also use the frame property instead of bounds, since we only care about the size, not the position
imageView.layer.cornerRadius = width / 2;
imageView.layer.cornerRadius = imageView.frame.size.height / 2;
imageView.layer.masksToBounds = YES;
imageView.layer.borderWidth = 0;
UIImage category to mask an image with a circle:
UIImage *originalImage = [UIImage imageNamed:@"myimage.png"];
UIImage *myRoundedImage = [UIImage roundedImageWithImage:originalImage];
Get it here.
I have another solution:
- (UIImage *)roundedImageWithRect:(CGRect)rect radius:(CGFloat)radius
{
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, rect.size.width, rect.size.height) cornerRadius:radius];
    CGFloat imageRatio = self.size.width / self.size.height;
    CGSize imageSize = CGSizeMake(rect.size.height * imageRatio, rect.size.height);
    CGRect imageRect = CGRectMake(0, 0, imageSize.width, imageSize.height);
    [path addClip];
    [self drawInRect:imageRect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
This variant performs better than setting cornerRadius directly.
Personally, I'd create a transparent circle image with opaque corners to overlay the photo. This solution is only suitable where you will be placing the image in one place on the UI, and assumes the opaque corners will blend in with the background.
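For illustration, a minimal sketch of that overlay approach (assuming an asset named circle_overlay.png whose centre is transparent and whose corners are opaque and match the surrounding background; the file name and the photo variable are placeholders):
UIImageView *photoView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
photoView.image = photo; // the square photo that should appear circular
UIImageView *overlayView = [[UIImageView alloc] initWithFrame:photoView.bounds];
overlayView.image = [UIImage imageNamed:@"circle_overlay.png"];
[photoView addSubview:overlayView];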
Following is the answer I gave in "How to crop UIImage on oval shape or circle shape?" to make the image circular. It works for me.
Download the support archive file
#import "UIImage+RoundedCorner.h"
#import "UIImage+Resize.h"
The following lines resize the image and round it with the given radius:
UIImage *mask = [UIImage imageNamed:@"mask.jpg"];
mask = [mask resizedImage:CGSizeMake(47, 47) interpolationQuality:kCGInterpolationHigh];
mask = [mask roundedCornerImage:23.5 borderSize:1];
Just use
_profilePictureImgView.layer.cornerRadius = 32.0f;
_profilePictureImgView.layer.masksToBounds = YES;

Cropping image captured by AVCaptureSession

I'm writing an iPhone App which uses AVFoundation to take a photo and crop it.
The app is similar to a QR code reader: it uses an AVCaptureVideoPreviewLayer with an overlay.
The overlay has a square. I want to crop the image so the cropped image is exactly what the user has placed inside the square.
The preview layer has gravity AVLayerVideoGravityResizeAspectFill.
It looks like what the camera actually captures is not exactly what the user sees in the preview layer. This means that I need to map from the preview coordinate system to the captured-image coordinate system so I can crop the image. For this I think I need the following parameters:
1. the ratio between the view size and the captured image size;
2. information about which part of the captured image matches what is displayed in the preview layer.
Does anybody know how I can obtain this info, or whether there is a different approach to crop the image?
(P.S. capturing a screenshot of the preview is not an option, as I understand it might result in the App being rejected.)
Thank you in advance
Hope this meets your requirements
- (UIImage *)cropImage:(UIImage *)image to:(CGRect)cropRect andScaleTo:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGImageRef subImage = CGImageCreateWithImageInRect([image CGImage], cropRect);
    NSLog(@"---------");
    NSLog(@"*cropRect.origin.y=%f", cropRect.origin.y);
    NSLog(@"*cropRect.origin.x=%f", cropRect.origin.x);
    NSLog(@"*cropRect.size.width=%f", cropRect.size.width);
    NSLog(@"*cropRect.size.height=%f", cropRect.size.height);
    NSLog(@"---------");
    NSLog(@"*size.width=%f", size.width);
    NSLog(@"*size.height=%f", size.height);
    CGRect myRect = CGRectMake(0.0f, 0.0f, size.width, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGContextTranslateCTM(context, 0.0f, -size.height);
    CGContextDrawImage(context, myRect, subImage);
    UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(subImage);
    return croppedImage;
}
You can use this API from AVFoundation: AVMakeRectWithAspectRatioInsideRect.
It returns the crop region for an image within a bounding region; the Apple doc is here:
https://developer.apple.com/library/ios/Documentation/AVFoundation/Reference/AVFoundation_Functions/Reference/reference.html
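A minimal usage sketch (my own example, ignoring UIImage orientation; capturedImage and previewBounds are assumed to be the captured photo and the preview layer's bounds): with AVLayerVideoGravityResizeAspectFill, the largest centred rect with the preview's aspect ratio that fits inside the captured image corresponds to what the user saw.
#import <AVFoundation/AVFoundation.h>

CGRect imageRect = CGRectMake(0, 0, capturedImage.size.width, capturedImage.size.height);
// Largest centred rect with the preview's aspect ratio that fits in the image.
CGRect visibleRect = AVMakeRectWithAspectRatioInsideRect(previewBounds.size, imageRect);
CGImageRef visibleRef = CGImageCreateWithImageInRect(capturedImage.CGImage, visibleRect);
UIImage *visibleImage = [UIImage imageWithCGImage:visibleRef];
CGImageRelease(visibleRef);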
I think it is as simple as this:
- (CGRect)computeCropRect:(CGImageRef)cgImageRef
{
    static CGFloat cgWidth = 0;
    static CGFloat cgHeight = 0;
    static CGFloat viewWidth = 320;
    if (cgWidth == 0)
        cgWidth = CGImageGetWidth(cgImageRef);
    if (cgHeight == 0)
        cgHeight = CGImageGetHeight(cgImageRef);
    CGRect cropRect;
    // Only based on width
    cropRect.origin.x = cropRect.origin.y = kMargin * cgWidth / viewWidth;
    cropRect.size.width = cropRect.size.height = kSquareSize * cgWidth / viewWidth;
    return cropRect;
}
where kMargin and kSquareSize (20 points and 280 points in my case) are the margin and the scanning-area size, respectively.
Then perform the cropping:
CGRect cropRect = [self computeCropRect:cgCapturedImageRef];
CGImageRef croppedImageRef = CGImageCreateWithImageInRect(cgCapturedImageRef, cropRect);

Problem in cropping the UIImage using CGContext?

I am developing a simple application in which I want to crop a UIImage (in .jpg format) with the help of CGContext. The code developed so far is as follows:
CGImageRef graphicOriginalImage = [originalImage.image CGImage];
UIGraphicsBeginImageContext(originalImage.image.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGBitmapContextCreateImage(graphicOriginalImage);
CGFloat fltW = originalImage.image.size.width;
CGFloat fltH = originalImage.image.size.height;
CGFloat X = round(fltW/4);
CGFloat Y =round(fltH/4);
CGFloat width = round(X + (fltW/2));
CGFloat height = round(Y + (fltH/2));
CGContextTranslateCTM(ctx, 0, image.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGRect rect = CGRectMake(X,Y ,width ,height);
CGContextDrawImage(ctx, rect, graphicOriginalImage);
croppedImage = UIGraphicsGetImageFromCurrentImageContext();
return croppedImage;
}
The above code runs fine, but it does not crop the image.
The original image and the cropped image end up taking the same amount of memory (the cropped image is the same size as the original).
Is the above code right for cropping the image?
Here is a good way to crop an image to a CGRect:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    //create a context to do our clipping in
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    //create a rect with the size we want to crop the image to
    //the X and Y here are zero so we start at the beginning of our
    //newly created context
    CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CGContextClipToRect(currentContext, clippedRect);
    //create a rect equivalent to the full size of the image
    //offset the rect by the X and Y we want to start the crop
    //from in order to cut off anything before them
    CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                 rect.origin.y * -1,
                                 imageToCrop.size.width,
                                 imageToCrop.size.height);
    //draw the image to our clipped context using our offset rect
    CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
    //pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
    //pop the context to get back to the default
    UIGraphicsEndImageContext();
    //Note: this is autoreleased
    return cropped;
}
Or another way:
- (UIImage *)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
    CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return cropped;
}
From http://www.hive05.com/2008/11/crop-an-image-using-the-iphone-sdk/.
The context you create to draw the image has the same size as the original image. That's why the two images have the same size.
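A minimal sketch of the fix implied here (names mirror the question's code; I am assuming the intended crop is the centre region of size fltW/2 by fltH/2 starting at X, Y): size the context to the crop rect, not to the full image, and draw the image offset so the region of interest lands at the origin.
CGRect cropRect = CGRectMake(X, Y, round(fltW / 2), round(fltH / 2));
UIGraphicsBeginImageContext(cropRect.size);
// Draw the full image shifted so that (X, Y) maps to the context origin.
[originalImage.image drawAtPoint:CGPointMake(-cropRect.origin.x, -cropRect.origin.y)];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();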
If you don't want to re-invent the wheel, take a look at the TouchCode project on Google Code. You will find UIImage categories that do the job (see UIImage_ThumbnailExtensions.m).