I am new to Objective-C iPhone programming. I have an application in which I successfully display a PDF in my UIWebView, but now I want to create a thumbnail of that PDF. The PDF is stored in my resource folder.
So please give me code for how I can show a thumbnail of my PDF. My code for displaying the PDF, which runs in a button action, is:
-(void)show:(id)sender {
pdfView.autoresizesSubviews = NO;
pdfView.scalesPageToFit=YES;
pdfView.autoresizingMask=(UIViewAutoresizingFlexibleHeight | UIViewAutoresizingFlexibleWidth);
[pdfView setDelegate:self];
NSString *path = [[NSBundle mainBundle] pathForResource:@"com" ofType:@"pdf"];
NSLog(@"Path of res is %@", path);
NSURL *url = [NSURL fileURLWithPath:path];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
[pdfView loadRequest:request];
}
Try the following method:
- (UIImage *)imageFromPDFWithDocumentRef:(CGPDFDocumentRef)documentRef {
CGPDFPageRef pageRef = CGPDFDocumentGetPage(documentRef, 1);
CGRect pageRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);
UIGraphicsBeginImageContext(pageRect.size);
CGContextRef context = UIGraphicsGetCurrentContext();
// Flip the coordinate system so the PDF page is not drawn upside down
CGContextTranslateCTM(context, CGRectGetMinX(pageRect),CGRectGetMaxY(pageRect));
CGContextScaleCTM(context, 1, -1);
CGContextTranslateCTM(context, -(pageRect.origin.x), -(pageRect.origin.y));
CGContextDrawPDFPage(context, pageRef);
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return finalImage;
}
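For completeness, a minimal usage sketch for the method above, assuming the same "com.pdf" bundle resource as in the question; the caller owns the CGPDFDocumentRef and must release it, and thumbImageView is a hypothetical UIImageView:
NSURL *pdfURL = [[NSBundle mainBundle] URLForResource:@"com" withExtension:@"pdf"];
CGPDFDocumentRef documentRef = CGPDFDocumentCreateWithURL((__bridge CFURLRef)pdfURL); // drop __bridge under manual reference counting
UIImage *thumbnail = [self imageFromPDFWithDocumentRef:documentRef];
CGPDFDocumentRelease(documentRef); // Core Graphics objects are not autoreleased
thumbImageView.image = thumbnail;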
Swift 3
(Thanks to Prine for Swift 2!)
func getPdfThumb(url:NSURL, pageBase1:Int) -> UIImage? {
guard let document = CGPDFDocument(url as CFURL) else { return nil }
guard let firstPage = document.page(at: pageBase1) else { return nil }
let width:CGFloat = 240.0;
var pageRect:CGRect = firstPage.getBoxRect(.mediaBox)
let pdfScale:CGFloat = width/pageRect.size.width
pageRect.size = CGSize(width: pageRect.size.width*pdfScale, height: pageRect.size.height*pdfScale)
pageRect.origin = CGPoint.zero
UIGraphicsBeginImageContext(pageRect.size)
let context:CGContext = UIGraphicsGetCurrentContext()!
// White background
context.setFillColor(red: 1.0,green: 1.0,blue: 1.0,alpha: 1.0)
context.fill(pageRect)
context.saveGState()
// Handle rotation
context.translateBy(x: 0.0, y: pageRect.size.height)
context.scaleBy(x: 1.0, y: -1.0)
context.concatenate(firstPage.getDrawingTransform(.mediaBox, rect: pageRect, rotate: 0, preserveAspectRatio: true))
context.drawPDFPage(firstPage)
context.restoreGState()
let image:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return image
}
There are, as you may already know, two ways of rendering PDFs:
UIWebView
Quartz Rendering
The other answers at the time of writing have focused on Quartz. There are a number of good reasons for this, mostly performance related, and in my opinion using Quartz is worth it. I would recommend reading this thread to get a better idea of the pros and cons.
There is apparently an excellent, newish API for Quartz-based PDF rendering here.
Of course, you could present the PDF via UIWebView and render the thumbs using Quartz.
There is also a bit of confusion around thumbs for people new to Quartz PDF magic: after some searching it might seem that there are APIs that support thumbs, but you should check whether that support is for embedded thumbs only; many PDFs don't have these.
Another option is to create the thumbs yourself (using Quartz), and there are plenty of examples of this around the net, including the two answers above. However, if you are targeting iOS 4 or above, I would strongly recommend using blocks. (Also, graphics contexts have been thread-safe since iOS 4.)
I found a significant performance increase when I generated thumbs with blocks.
What I have done in the past is:
Have a view controller for your thumbs, with a scroll view whose content size is appropriate for all your pages. Insert placeholder image views into it if you like.
On document load, kick off a thumb generator in the background (see the code below).
The code below calls a method drawImageView: that takes the index of the page, grabs the image from disk, and puts it into the scroll view (a sketch of that method follows the code).
If you're feeling really motivated, you can implement a render scope on the thumb scroll view (only rendering the thumbs you need to, something you should be doing for the PDF pages anyway).
Don't forget to delete the thumbs when you're done, unless you want to cache them.
#define THUMB_SIZE 100,144
-(void)generateThumbsWithGCD
{
thumbQueue = dispatch_queue_create("thumbQueue", NULL);//thumbQueue is a dispatch_queue_t ivar
NSFileManager *fm = [NSFileManager defaultManager];
//good idea to check for previous thumb cache with NSFileManager here
CGSize thumbSize = CGSizeMake(THUMB_SIZE);
__block CGPDFPageRef myPageRef;
NSString *reqSysVer = @"4.0";
NSString *currSysVer = [[UIDevice currentDevice] systemVersion];
//need to handle iOS versions < 4
if ([currSysVer compare:reqSysVer options:NSNumericSearch] == NSOrderedAscending) {NSLog(@"UIKIT MULTITHREADING NOT SUPPORTED!");return;}//thread/API safety
dispatch_async(thumbQueue, ^{
for (int i=1; i<=_maxPages; i++) { //declare i locally so each block captures its value
//check if worker is valid (class member bool) for cancellations
myPageRef=[[PDFDocument sharedPDFDocument]getPageData:i];//PDFDocument is a singleton class
if(!myPageRef)return;
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
NSString* imageName = [NSString stringWithFormat:@"%@thumb%i.png",documentName,i];
NSString* fullPathToFile = [thumbDocPath stringByAppendingPathComponent:imageName];
if(![fm fileExistsAtPath:fullPathToFile]){
//NSLog(@"Not there");
UIGraphicsBeginImageContext(thumbSize);//thread-safe in iOS 4
CGContextRef context = UIGraphicsGetCurrentContext();//thread-safe in iOS 4
CGContextTranslateCTM(context, 0, 144);
CGContextScaleCTM(context, 0.15, -0.15);
CGContextDrawPDFPage (context, myPageRef);
UIImage * render = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData* imageData= UIImagePNGRepresentation(render);
if(imageData){
NSLog(#"WROTE TO:%#",fullPathToFile);
if(![imageData writeToFile:fullPathToFile atomically:NO])NSLog(#"ERROR: Thumb Didnt Save"); //COMMENT OUT TO DISABLE WRITE
}
}
else NSLog(@"Already there! %@",fullPathToFile);
//update progress on thumb viewController if you wish here
[pool release];
dispatch_sync(dispatch_get_main_queue(), ^{
[self drawImageView:i];
});
}
});
dispatch_release(thumbQueue);
}
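The drawImageView: method is not shown in the original, so here is a hedged sketch of what it might look like, assuming the same thumbDocPath and documentName ivars as the generator and a hypothetical thumbScrollView whose placeholder image views are tagged with their page index:
-(void)drawImageView:(int)pageIndex
{
    // Load the thumb that generateThumbsWithGCD wrote to disk
    NSString *imageName = [NSString stringWithFormat:@"%@thumb%i.png", documentName, pageIndex];
    NSString *fullPathToFile = [thumbDocPath stringByAppendingPathComponent:imageName];
    UIImage *thumb = [UIImage imageWithContentsOfFile:fullPathToFile];
    if (!thumb) return; // the write may have failed or been disabled
    UIImageView *placeholder = (UIImageView *)[thumbScrollView viewWithTag:pageIndex];
    placeholder.image = thumb;
}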
I came up with a solution that uses Core Graphics and Swift 3.0. It's highly inspired by the one presented by Alexandre. In my opinion, my approach results in more 'Swifty' code. Also, my solution fixes a couple of problems with the scaling and orientation of the resulting image.
Note that my code uses AVMakeRect(aspectRatio:, insideRect:) and requires importing AVFoundation.
// Page numbering starts at 1. Note: url is assumed to be a property of the enclosing type.
func generate(size: CGSize, page: Int) -> UIImage? {
guard let document = CGPDFDocument(url as CFURL), let page = document.page(at: page) else { return nil }
let originalPageRect: CGRect = page.getBoxRect(.mediaBox)
var targetPageRect = AVMakeRect(aspectRatio: originalPageRect.size, insideRect: CGRect(origin: CGPoint.zero, size: size))
targetPageRect.origin = CGPoint.zero
UIGraphicsBeginImageContextWithOptions(targetPageRect.size, true, 0)
defer { UIGraphicsEndImageContext() }
guard let context = UIGraphicsGetCurrentContext() else { return nil }
context.setFillColor(gray: 1.0, alpha: 1.0)
context.fill(targetPageRect)
context.saveGState()
context.translateBy(x: 0.0, y: targetPageRect.height)
context.scaleBy(x: 1.0, y: -1.0)
context.concatenate(page.getDrawingTransform(.mediaBox, rect: targetPageRect, rotate: 0, preserveAspectRatio: true))
context.drawPDFPage(page)
context.restoreGState()
return context.makeImage().flatMap() { UIImage(cgImage: $0, scale: UIScreen.main.scale, orientation: .up) }
}
I just rewrote the Objective-C code in Swift. Maybe someone else can use it:
func getThumbnail(url:NSURL, pageNumber:Int) -> UIImage {
var pdf:CGPDFDocumentRef = CGPDFDocumentCreateWithURL(url as CFURLRef);
var firstPage = CGPDFDocumentGetPage(pdf, pageNumber)
var width:CGFloat = 240.0;
var pageRect:CGRect = CGPDFPageGetBoxRect(firstPage, kCGPDFMediaBox);
var pdfScale:CGFloat = width/pageRect.size.width;
pageRect.size = CGSizeMake(pageRect.size.width*pdfScale, pageRect.size.height*pdfScale);
pageRect.origin = CGPointZero;
UIGraphicsBeginImageContext(pageRect.size);
var context:CGContextRef = UIGraphicsGetCurrentContext();
// White BG
CGContextSetRGBFillColor(context, 1.0,1.0,1.0,1.0);
CGContextFillRect(context,pageRect);
CGContextSaveGState(context);
// ***********
// Next 3 lines makes the rotations so that the page look in the right direction
// ***********
CGContextTranslateCTM(context, 0.0, pageRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextConcatCTM(context, CGPDFPageGetDrawingTransform(firstPage, kCGPDFMediaBox, pageRect, 0, true));
CGContextDrawPDFPage(context, firstPage);
CGContextRestoreGState(context);
var thm:UIImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return thm;
}
I have a bit of code that does what I want and takes a screenshot. Apple has now said that screenshots must be taken using UIGraphicsBeginImageContextWithOptions rather than UIGetScreenImage(). Could anyone point me in the right direction? Here's a snippet of code that shows how I am doing it at the moment.
CGImageRef inImage = UIGetScreenImage();
UIScreen* mainscr = [UIScreen mainScreen];
CGSize screenSize = mainscr.currentMode.size;
CGImageRef handRef;
if(screenSize.height > 500)
{
handRef = CGImageCreateWithImageInRect(inImage, CGRectMake(320,460,1,1));
}
else
{
handRef = CGImageCreateWithImageInRect(inImage, CGRectMake(160,230,1,1));
}
unsigned char rawData[4];
CGContextRef context = CGBitmapContextCreate(
rawData,
CGImageGetWidth(handRef),
CGImageGetHeight(handRef),
CGImageGetBitsPerComponent(handRef),
CGImageGetBytesPerRow(handRef),
CGImageGetColorSpace(handRef),
kCGImageAlphaPremultipliedLast
);
Does anyone know how I can do this now?
Thanks in advance for any answers!
It should be something like this:
UIGraphicsBeginImageContext(theView.bounds.size);
[theView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* yourFinalScreenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Where theView is the topmost view you want to render; you could, for example, use [[UIApplication sharedApplication] keyWindow].
It looks like you are accessing only part of your screenshot. That can be done more easily with this approach: just use the view you want to capture; it does not need to be the window.
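Putting this together with the pixel sampling from the question, a hedged sketch (note that CGImage crop rects are in pixels, so point coordinates must be multiplied by the image scale):
UIView *theView = [[UIApplication sharedApplication] keyWindow];
UIGraphicsBeginImageContextWithOptions(theView.bounds.size, NO, 0.0); // 0.0 means use the device scale
[theView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGFloat scale = screenshot.scale;
CGImageRef handRef = CGImageCreateWithImageInRect(screenshot.CGImage,
                                                  CGRectMake(160.0 * scale, 230.0 * scale, 1, 1));
// ... inspect the pixel with CGBitmapContextCreate as in the question ...
CGImageRelease(handRef);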
My big picture goal is to have a grey field over an image, and then as the user rubs on that grey field, it reveals the image underneath. Basically like a lottery scratcher card. I've done a bunch of searching through the docs, as well as this site, but can't find the solution.
The following is just a proof of concept to test "erasing" an image based on where the user touches, but it isn't working. :(
I have a UIView that detects touches, then sends the coords of the move to the UIViewController that clips the image in a UIImageView by doing the following:
- (void) moveDetectedFrom:(CGPoint) from to:(CGPoint) to
{
UIImage* image = bkgdImageView.image;
CGSize s = image.size;
UIGraphicsBeginImageContext(s);
CGContextRef g = UIGraphicsGetCurrentContext();
CGContextMoveToPoint(g, from.x, from.y);
CGContextAddLineToPoint(g, to.x, to.y);
CGContextClosePath(g);
CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
CGContextEOClip(g);
[image drawAtPoint:CGPointZero];
bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[bkgdImageView setNeedsDisplay];
}
The problem is that the touches are sent to this method just fine, but nothing happens to the original image.
Am I doing the clip path incorrectly? Or?
Not really sure...so any help you may have would be greatly appreciated.
Thanks in advance,
Joel
I tried to do the same thing quite a long time ago using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects it to be. So, since I knew how to work with OpenCV (the Open Computer Vision library), and since it is written in C, I knew I could use it on the iPhone.
Doing what you want to do with OpenCV is extremely easy.
First you need a couple of functions to convert a UIImage to an IplImage, which is the type used in OpenCV to represent images of all kinds, and the other way around.
+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
CGImageRef imageRef = image.CGImage;
//This is the function you use to convert a UIImage -> IplImage
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
iplimage->depth, iplimage->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return iplimage;
}
+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
//Convert a IplImage -> UIImage
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSData * data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
//NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
CGImageRef imageRef = CGImageCreate(image->width, image->height,
image->depth, image->depth * image->nChannels, image->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
[data release];
return ret;
}
Now that you have both of the basic functions you need, you can do whatever you want with your IplImage. This is what you want:
+(UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r{
//r is the radius of the erasing
int a = point.x;
int b = point.y;
int position;
int minX,minY,maxX,maxY;
minX = (a-r>0)?a-r:0;
minY = (b-r>0)?b-r:0;
maxX = ((a+r) < (image->width))? a+r : (image->width);
maxY = ((b+r) < (image->height))? b+r : (image->height);
for (int i = minX; i < maxX ; i++)
{
for(int j=minY; j<maxY;j++)
{
position = ((j-b)*(j-b))+((i-a)*(i-a));
if (position <= r*r)
{
uchar* ptr =(uchar*)(image->imageData) + (j*image->widthStep + i*image->nChannels);
ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0; //zero all four channels (indexing 1-4 would overrun the pixel)
}
}
}
UIImage * res = [self UIImageFromIplImage:image];
return res;
}
Sorry for the formatting.
If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's article.
If you want to check out an app currently working with OpenCV on the App Store, go get Flags&Faces.
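To tie the three helpers together, a hedged usage sketch; ImageProcessor, scratchImageView, and touchPoint are hypothetical names, and the radius of 20 is arbitrary:
IplImage *iplImage = [ImageProcessor CreateIplImageFromUIImage:scratchImageView.image];
UIImage *erased = [ImageProcessor erasePointinUIImage:iplImage :touchPoint :20];
scratchImageView.image = erased;
cvReleaseImage(&iplImage); // release the OpenCV copy once it has been converted back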
You usually want to draw into the current graphics context inside of a drawRect: method, not just any old method. Also, a clip region only affects what is drawn to the current graphics context. But instead of going into why this approach isn't working, I'd suggest doing it differently.
What I would do is have two views. One with the image, and one with the gray color that is made transparent. This allows the graphics hardware to cache the image, instead of trying to redraw the image every time you modify the gray fill.
The gray one would be a UIView subclass with a CGBitmapContext that you would draw into to make the pixels that the user touches clear. A minimal sketch follows below.
There are probably several ways to do this. I'm just suggesting one way above.
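Here is one hedged sketch of that gray overlay view; the class name ScratchOverlayView and the 20-point eraser size are my own choices, not from the original:
@interface ScratchOverlayView : UIView {
    CGContextRef bitmapContext;
}
@end

@implementation ScratchOverlayView

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = NO;
        // Back the view with an RGBA bitmap we can punch holes in
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapContext = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height,
                                              8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);
        // Start fully gray; touched areas are cleared to transparent below
        CGContextSetRGBFillColor(bitmapContext, 0.5, 0.5, 0.5, 1.0);
        CGContextFillRect(bitmapContext, CGRectMake(0, 0, frame.size.width, frame.size.height));
    }
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // The bitmap's flipped coordinates and CGContextDrawImage's flip in drawRect:
    // cancel out, so UIKit coordinates can be used here directly
    CGContextClearRect(bitmapContext, CGRectMake(p.x - 10, p.y - 10, 20, 20));
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGImageRef image = CGBitmapContextCreateImage(bitmapContext);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), self.bounds, image);
    CGImageRelease(image);
}

- (void)dealloc {
    CGContextRelease(bitmapContext);
    [super dealloc]; // assumes manual reference counting, as was typical when this was asked
}

@end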
I'm trying to write an animation on the iPhone, without much success; I'm getting crashes and nothing seems to work.
What I want to do appears simple: create a UIImage, and draw part of another UIImage into it. I got a bit confused with the contexts and layers and stuff.
Could someone please explain how to write something like that (efficiently), with example code?
For the record, this turns out to be fairly straightforward - everything you need to know is somewhere in the example below:
+ (UIImage*) addStarToThumb:(UIImage*)thumb
{
CGSize size = CGSizeMake(50, 50);
UIGraphicsBeginImageContext(size);
CGPoint thumbPoint = CGPointMake(0, 25 - thumb.size.height / 2);
[thumb drawAtPoint:thumbPoint];
UIImage* starred = [UIImage imageNamed:@"starred.png"];
CGPoint starredPoint = CGPointMake(0, 0);
[starred drawAtPoint:starredPoint];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
I just want to add a comment about the answer above by dpjanes: it is a good answer, but the result will look blocky on the iPhone 4 (with its high-resolution Retina display), since "UIGraphicsGetImageFromCurrentImageContext()" does not render at the full resolution of an iPhone 4.
Use "...WithOptions()" instead. But since WithOptions is not available until iOS 4.0, you could weak-link it (discussed here), then use the following code to use the hi-res version only when it is supported:
if (UIGraphicsBeginImageContextWithOptions != NULL) {
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
}
else {
UIGraphicsBeginImageContext(size); // pass the same size as above
}
Here is an example that merges two images of the same size into one. I don't know if this is the best way, and I don't know if this kind of code is posted somewhere else. Here are my two cents.
+ (UIImage *)mergeBackImage:(UIImage *)backImage withFrontImage:(UIImage *)frontImage
{
UIImage *newImage;
CGRect rect = CGRectMake(0, 0, backImage.size.width, backImage.size.height);
// Begin context
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
// draw images
[backImage drawInRect:rect];
[frontImage drawInRect:rect];
// grab context
newImage = UIGraphicsGetImageFromCurrentImageContext();
// end context
UIGraphicsEndImageContext();
return newImage;
}
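A quick usage sketch, assuming the method lives on a hypothetical ImageUtils class and photo and overlay are same-sized UIImages:
UIImage *combined = [ImageUtils mergeBackImage:photo withFrontImage:overlay];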
Hope this helps.
Objective: take a UIImage, crop out a square in the middle, change size of square to 320x320 pixels, slice up the image into 16 80x80 images, save the 16 images in an array.
Here's my code:
CGImageRef originalImage, resizedImage, finalImage, tmp;
float imgWidth, imgHeight, diff;
UIImage *squareImage, *playImage;
NSMutableArray *tileImgArray;
int r, c;
originalImage = [image CGImage];
imgWidth = image.size.width;
imgHeight = image.size.height;
diff = fabs(imgWidth - imgHeight);
if(imgWidth > imgHeight){
resizedImage = CGImageCreateWithImageInRect(originalImage, CGRectMake(floor(diff/2), 0, imgHeight, imgHeight));
}else{
resizedImage = CGImageCreateWithImageInRect(originalImage, CGRectMake(0, floor(diff/2), imgWidth, imgWidth));
}
CGImageRelease(originalImage);
squareImage = [UIImage imageWithCGImage:resizedImage];
if(squareImage.size.width != squareImage.size.height){
NSLog(#"image cutout error!");
//*code to return to main menu of app, irrelevant here
}else{
float newDim = squareImage.size.width;
if(newDim != 320.0){
CGSize finalSize = CGSizeMake(320.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
playImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}else{
playImage = squareImage;
}
}
finalImage = [playImage CGImage];
tileImgArray = [NSMutableArray arrayWithCapacity:0];
for(int i = 0; i < 16; i++){
r = i/4;
c = i%4;
//*
tmp = CGImageCreateWithImageInRect(finalImage, CGRectMake(c*tileSize, r*tileSize, tileSize, tileSize));
[tileImgArray addObject:[UIImage imageWithCGImage:tmp]];
}
The code works correctly when the original (the variable image) has its smaller dimension either bigger or smaller than 320 pixels. When it's exactly 320, the resulting 80x80 images are almost entirely black, some with a few pixels at the edges that may (I can't really tell) be from the original image.
I tested by displaying the full image both directly:
[UIImage imageWithCGImage:finalImage];
And indirectly:
[UIImage imageWithCGImage:CGImageCreateWithImageInRect(finalImage, CGRectMake(0, 0, 320, 320))];
In both cases, the display worked. The problems only arise when I attempt to slice out some part of the image.
After some more experimentation, I found the following solution (I still don't know why it didn't work as originally written, though). But anyway, the slicing works once the resize code is in place, even when resizing is unnecessary:
if(newDim != 320.0){
CGSize finalSize = CGSizeMake(320.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
playImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}else{
CGSize finalSize = CGSizeMake(320.0, 320.0);
UIGraphicsBeginImageContext(finalSize);
[squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
playImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
}
Does anyone have any clue WHY this is going on?
P.S. Yes, the if/else is no longer required here. Removing it before I knew this was going to work would have been stupid, though.
Just out of curiosity, why did you create your mutable array with a capacity of 0 when you know you're going to put 16 things in it?
Well, aside from that, I've tried the basic techniques you used for resizing and slicing (I did not need to crop, because I'm working with images that are already square) and I'm unable to reproduce your problem in the simulator. You might want to try breaking your code into three separate functions (crop to square, resize, and slice into pieces) and then test the three separately so you can figure out which of the three steps is causing the problems (i.e. feed in input images that you've manipulated in a normal graphics program instead of Objective-C, and then inspect what you get back out!).
I'll attach my versions of the resize and slice functions below, which will hopefully be helpful. It was nice to have your versions to look at, since for once I didn't have to find all the methods by myself. :)
Just as a note, the two-dimensional array mentioned is my own class built out of NSMutableArrays, but you could easily implement your own version or use a flat NSMutableArray instead. ;)
// cut the given image into a grid of equally sized smaller images
// this assumes that the image can be equally divided in the requested increments
// the images will be stored in the return array in [row][column] order
+ (TwoDimensionalArray *) chopImageIntoGrid : (UIImage *) originalImage : (int) numberOfRows : (int) numberOfColumns
{
// figure out the size of our tiles
int tileWidth = originalImage.size.width / numberOfColumns;
int tileHeight = originalImage.size.height / numberOfRows;
// create our return array
TwoDimensionalArray * toReturn = [[TwoDimensionalArray alloc] initWithBounds : numberOfRows
: numberOfColumns];
// get a CGI image version of our image
CGImageRef cgVersionOfOriginal = [originalImage CGImage];
// loop to chop up each row
for(int row = 0; row < numberOfRows ; row++){
// loop to chop up each individual piece by column
for (int column = 0; column < numberOfColumns; column++)
{
CGImageRef tempImage =
CGImageCreateWithImageInRect(cgVersionOfOriginal,
CGRectMake(column * tileWidth,
row * tileHeight,
tileWidth,
tileHeight));
[toReturn setObjectAt : row : column : [UIImage imageWithCGImage:tempImage]];
}
}
// now return the set of images we created
return [toReturn autorelease];
}
// this method resizes an image to the requested dimensions
// be a bit careful when using this method, since the resize will not respect
// the proportions of the image
+ (UIImage *) resize : (UIImage *) originalImage : (int) newWidth : (int) newHeight
{
// translate the image to the new size
CGSize newSize = CGSizeMake(newWidth, newHeight); // the new size we want the image to be
UIGraphicsBeginImageContext(newSize); // downside: this can't go on a background thread, I'm told
[originalImage drawInRect : CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext(); // get our new image
UIGraphicsEndImageContext();
// return our brand new image
return newImage;
}
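A hedged usage sketch of the two methods above, assuming they live on a hypothetical ImageTiler class:
UIImage *resized = [ImageTiler resize:squareImage :320 :320];
TwoDimensionalArray *tiles = [ImageTiler chopImageIntoGrid:resized :4 :4];
// retrieve an individual tile (the accessor name is assumed)
UIImage *topLeftTile = [tiles objectAt:0 :0];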
Eva Schiffer