drawRect causing memory issues - iPhone

Currently I am using a UIView instead of a UIImageView because of the memory consumption of large images. The following is the code I am using.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, rect);
    [myImage drawInRect:rect];
}
-(void) SetImage:(UIImage*) aImage
{
    if(!aImage)
        return;

    if(myImage)
    {
        [myImage release];
        myImage = nil;
    }

    myImage = [[[UIImage alloc] initWithCGImage:aImage.CGImage] retain];
    [self setNeedsDisplay];
}
This is now causing a memory leak of 8 MB (checked with Instruments) every time I update and set the same image again. If I comment out
[self setNeedsDisplay];
there is no leak. Can anyone tell me if I am doing something wrong? Or can anyone help me subclass UIImageView to handle large images?
// Calling functions
-(void) FitToCardStart
{
    UIImage* temp = ScaleImage([iImageBgView GetImage]);
    [iImageBgView SetImage:temp];
    [temp release];
    temp = nil;
}
// ScaleImage
UIImage* ScaleImage(UIImage* image)
{
    NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];
    int kMaxResolution = 1800;

    CGImageRef imgRef = image.CGImage;
    CGFloat width = CGImageGetWidth(imgRef);
    CGFloat height = CGImageGetHeight(imgRef);

    CGAffineTransform transform = CGAffineTransformIdentity;
    CGRect bounds = CGRectMake(0, 0, width, height);

    if (width < kMaxResolution || height < kMaxResolution)
    {
        CGFloat ratio = width/height;
        if (ratio > 1)
        {
            bounds.size.width = kMaxResolution;
            bounds.size.height = bounds.size.width / ratio;
        }
        else
        {
            bounds.size.height = kMaxResolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    CGFloat scaleRatio = bounds.size.width / width;
    CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
    UIImageOrientation orient = image.imageOrientation;

    switch(orient)
    {
        case UIImageOrientationUp: //default
            transform = CGAffineTransformIdentity;
            break;
        default:
            [NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
    }

    UIGraphicsBeginImageContext(bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextScaleCTM(context, scaleRatio, -scaleRatio);
    CGContextTranslateCTM(context, 0, -height);
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);

    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIImage* temp = [[[UIImage alloc] initWithCGImage:imageCopy.CGImage] retain];

    CGImageRelease(imgRef);
    CGContextRelease(context);
    [pool release];
    return temp;
}
Thanks,
Sagar

Your problem is this line:
myImage = [[[UIImage alloc]initWithCGImage:aImage.CGImage] retain];
alloc already gives you a retain count of 1; the additional retain takes it to 2, which is too high. Remove the retain and everything will be fine.

myImage = [[[UIImage alloc]initWithCGImage:aImage.CGImage] retain];
There is a redundant retain in this line: since you are allocating a new UIImage object with +alloc, you do not need to retain it again.
Edit: the ScaleImage function has the same problem with a redundant retain:
// remove extra retain here
UIImage* temp = [[[UIImage alloc] initWithCGImage:imageCopy.CGImage] retain];
// should be
UIImage* temp = [[UIImage alloc] initWithCGImage:imageCopy.CGImage];
And a code-style comment: it is better to indicate in your method names what memory-management behavior the caller should expect for returned objects. Since the image returned by your function needs to be released by the caller, the name should contain something like "new", "alloc", "copy", or "create"...
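For example, a rename following that convention might look like this (the new name is only an illustration, not something from the original code):
UIImage* CreateScaledImage(UIImage* image);   // "Create" in the name tells the caller it owns the returned image and must release it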

I suggest not creating a new image, but just keeping the aImage instance.
myImage = [aImage retain];
If you absolutely must make it a new instance, you are doing it in a very roundabout way.
Copying would be a much better alternative.
myImage = [aImage copy];
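Putting the suggestions together, a minimal corrected SetImage: under manual reference counting might look like this (a sketch, not the poster's exact code):
-(void) SetImage:(UIImage*) aImage
{
    if(!aImage || aImage == myImage)
        return;

    [myImage release];
    myImage = [aImage retain];   // or [aImage copy]; either way, no extra retain
    [self setNeedsDisplay];
}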


UIWebView to UIImage

I tried to capture an image from a UIWebView using the method below, but the image contains only the visible area of the screen. How do I capture the full content of the UIWebView, including the invisible areas, i.e. the entire web page, in one single image?
-(UIImage*)captureScreen:(UIView*) viewToCapture{
    UIGraphicsBeginImageContext(viewToCapture.bounds.size);
    [viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return viewImage;
}
Check this out: Rendering a UIWebView into an ImageContext
Or just use this :) :
- (UIImage*) imageFromWebview:(UIWebView*) webview{
    //store the original frame size to put it back after the snapshot
    CGRect originalFrame = webview.frame;

    //get the width and height of the webpage using js (you might need to use another call, this doesn't always work)
    int webViewHeight = [[webview stringByEvaluatingJavaScriptFromString:@"document.body.scrollHeight;"] integerValue];
    int webViewWidth = [[webview stringByEvaluatingJavaScriptFromString:@"document.body.scrollWidth;"] integerValue];

    //set the webview's frame to match the size of the page
    [webview setFrame:CGRectMake(0, 0, webViewWidth, webViewHeight)];

    //make the snapshot
    UIGraphicsBeginImageContextWithOptions(webview.frame.size, false, 0.0);
    [webview.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //set the webview's frame back to the original size
    [webview setFrame:originalFrame];

    //and VOILA :)
    return image;
}
EDIT (from a comment by Vad)
The solution was to call
webView.scalesPageToFit = YES;
in the initialization, and
[webView sizeToFit];
when the page has finished loading.
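A minimal sketch of where those two calls might live, assuming the view controller is the web view's delegate (everything here is standard UIKit; the snapshot helper is the one from the answer above):
// e.g. in viewDidLoad
webView.scalesPageToFit = YES;
webView.delegate = self;

// UIWebViewDelegate callback
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    [webView sizeToFit];
    UIImage *snapshot = [self imageFromWebview:webView];
    // use the snapshot...
}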
You are currently capturing only the visible part because you are limiting the image context to what's visible. You should limit it to what's available.
UIWebView has a scrollView property whose contentSize tells you the size of the web page inside the scroll view. You can use that size to set up your image context like this:
-(UIImage*)captureScreen:(UIWebView*) viewToCapture{
    CGSize overallSize = viewToCapture.scrollView.contentSize;
    UIGraphicsBeginImageContext(overallSize);

    // Save the current bounds
    CGRect tmp = viewToCapture.bounds;
    viewToCapture.bounds = CGRectMake(0, 0, overallSize.width, overallSize.height);

    // Wait for the view to finish loading.
    // This is not very nice, but it should work. A better approach would be
    // to use a delegate, and run the capturing on the did finish load event.
    while (viewToCapture.loading) {
        [NSThread sleepForTimeInterval:0.1];
    }

    [viewToCapture.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Restore the bounds
    viewToCapture.bounds = tmp;
    return viewImage;
}
EDIT: New answer from me with tested code.
Add the method below to capture a UIWebView into a UIImage. It will capture the invisible areas as well.
- (UIImage*)webviewToImage:(UIWebView*)webView
{
    int currentWebViewHeight = webView.scrollView.contentSize.height;
    int scrollByY = webView.frame.size.height;

    [webView.scrollView setContentOffset:CGPointMake(0, 0)];
    NSMutableArray* images = [[NSMutableArray alloc] init];

    CGRect screenRect = webView.frame;

    int pages = currentWebViewHeight/scrollByY;
    if (currentWebViewHeight%scrollByY > 0) {
        pages ++;
    }

    for (int i = 0; i < pages; i++)
    {
        if (i == pages-1) {
            if (pages > 1)
                screenRect.size.height = currentWebViewHeight - scrollByY;
        }

        if (IS_RETINA)
            UIGraphicsBeginImageContextWithOptions(screenRect.size, NO, 0);
        else
            UIGraphicsBeginImageContext(screenRect.size);

        if ([webView.layer respondsToSelector:@selector(setContentsScale:)]) {
            webView.layer.contentsScale = [[UIScreen mainScreen] scale];
        }

        //UIGraphicsBeginImageContext(screenRect.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, screenRect);
        [webView.layer renderInContext:ctx];

        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        if (i == 0)
        {
            scrollByY = webView.frame.size.height;
        }
        else
        {
            scrollByY += webView.frame.size.height;
        }
        [webView.scrollView setContentOffset:CGPointMake(0, scrollByY)];

        [images addObject:newImage];
    }

    [webView.scrollView setContentOffset:CGPointMake(0, 0)];

    UIImage *resultImage;

    if(images.count > 1) {
        //join all the images together..
        CGSize size = CGSizeZero;
        for(int i = 0; i < images.count; i++) {
            size.width = MAX(size.width, ((UIImage*)[images objectAtIndex:i]).size.width);
            size.height += ((UIImage*)[images objectAtIndex:i]).size.height;
        }

        if (IS_RETINA)
            UIGraphicsBeginImageContextWithOptions(size, NO, 0);
        else
            UIGraphicsBeginImageContext(size);

        if ([webView.layer respondsToSelector:@selector(setContentsScale:)]) {
            webView.layer.contentsScale = [[UIScreen mainScreen] scale];
        }

        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, screenRect);

        int y = 0;
        for(int i = 0; i < images.count; i++) {
            UIImage* img = [images objectAtIndex:i];
            [img drawAtPoint:CGPointMake(0, y)];
            y += img.size.height;
        }

        resultImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    } else {
        resultImage = [images objectAtIndex:0];
    }

    [images removeAllObjects];
    return resultImage;
}
Also add this macro for checking whether the device has a Retina display:
#define IS_RETINA ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)] && ([UIScreen mainScreen].scale == 2.0))

How do you crop an image in iOS

I have a photo app where you can add stickers in one section. When you're finished I want to save the image. Here is the code that I have to do that.
if(UIGraphicsBeginImageContextWithOptions != NULL)
{
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, YES, 2.5);
} else {
    UIGraphicsBeginImageContext(self.view.frame.size);
}

CGContextRef contextNew = UIGraphicsGetCurrentContext();
[self.view.layer renderInContext:contextNew];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Now the image that gets saved is the full screen, which is fine, but I need to crop the image and I don't know how. You can see the image at the link below:
http://dl.dropbox.com/u/19130454/Photo%202012-04-09%201%2036%2018%20PM.png
I need to crop:
91px from the left and right
220px from the bottom
Any help would be greatly appreciated. If I haven't explained things clearly, please let me know and I'll do my best to re-explain.
How about something like this:
CGRect clippedRect = CGRectMake(self.view.frame.origin.x + 91,
                                self.view.frame.origin.y,
                                self.view.frame.size.width - 91*2,
                                self.view.frame.size.height - 220);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
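One caveat to check (my note, not part of the original answer): CGImageCreateWithImageInRect works in the image's pixel coordinates, while the 91/220 values are in points. If the snapshot was rendered at a scale other than 1.0 (the question passes 2.5), the rect may need to be multiplied by the image's scale, roughly like this:
CGFloat s = image.scale;   // scale of the captured UIImage
CGRect pixelRect = CGRectMake(clippedRect.origin.x * s,
                              clippedRect.origin.y * s,
                              clippedRect.size.width * s,
                              clippedRect.size.height * s);
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], pixelRect);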
The following code may help you.
You should first get the correct crop frame using the method below:
-(CGRect)cropRectForFrame:(CGRect)frame
{
    // NSAssert(self.contentMode == UIViewContentModeScaleAspectFit, @"content mode must be aspect fit");

    CGFloat widthScale = imageview.superview.bounds.size.width / imageview.image.size.width;
    CGFloat heightScale = imageview.superview.bounds.size.height / imageview.image.size.height;

    float x, y, w, h, offset;
    if (widthScale < heightScale) {
        offset = (imageview.superview.bounds.size.height - (imageview.image.size.height * widthScale)) / 2;
        x = frame.origin.x / widthScale;
        y = (frame.origin.y - offset) / widthScale;
        w = frame.size.width / widthScale;
        h = frame.size.height / widthScale;
    } else {
        offset = (imageview.superview.bounds.size.width - (imageview.image.size.width * heightScale)) / 2;
        x = (frame.origin.x - offset) / heightScale;
        y = frame.origin.y / heightScale;
        w = frame.size.width / heightScale;
        h = frame.size.height / heightScale;
    }
    return CGRectMake(x, y, w, h);
}
Then you need to call this method to get the cropped image:
- (UIImage *)imageByCropping:(UIImage *)image toRect:(CGRect)rect
{
    // you need to update the scaling factor value if the device does not have a Retina display
    UIGraphicsBeginImageContextWithOptions(rect.size,
                                           /* your view opaque */ NO,
                                           /* scaling factor */ 2.0);

    // stick to methods on UIImage so that orientation etc. are automatically
    // dealt with for us
    [image drawAtPoint:CGPointMake(-rect.origin.x, -rect.origin.y)];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
- (UIImage*)imageByCropping:(CGRect)rect
{
    //create a context to do our clipping in
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef currentContext = UIGraphicsGetCurrentContext();

    //create a rect with the size we want to crop the image to
    //the X and Y here are zero so we start at the beginning of our
    //newly created context
    CGRect clippedRect = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CGContextClipToRect(currentContext, clippedRect);

    //create a rect equivalent to the full size of the image
    //offset the rect by the X and Y we want to start the crop
    //from in order to cut off anything before them
    CGRect drawRect = CGRectMake(rect.origin.x * -1,
                                 rect.origin.y * -1,
                                 self.size.width,
                                 self.size.height);

    //draw the image to our clipped context using our offset rect
    // CGContextDrawImage(currentContext, drawRect, self.CGImage);
    [self drawInRect:drawRect]; // This will fix getting an inverted image from the context.

    //pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    //pop the context to get back to the default
    UIGraphicsEndImageContext();

    //Note: this is autoreleased
    return cropped;
}
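Since that method refers to self.size and self.CGImage, it is meant to live in a UIImage category; a hypothetical call matching the crop described in the question (91 points off each side, 220 off the bottom) could look like:
// assumes the method above was added to a UIImage category
UIImage *cropped = [image imageByCropping:CGRectMake(91,
                                                     0,
                                                     image.size.width - 2 * 91,
                                                     image.size.height - 220)];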
Refer to the link below for an image-crop view:
https://github.com/myang-git/iOS-Image-Crop-View
How to use:
Very easy! It is created to be a drop-in component, so there is no static library and there are no extra dependencies. Just copy ImageCropView.h and ImageCropView.m into your project and implement the ImageCropViewControllerDelegate protocol.
Use it like UIImagePicker:
- (void)cropImage:(UIImage *)image{
    ImageCropViewController *controller = [[ImageCropViewController alloc] initWithImage:image];
    controller.delegate = self;
    [[self navigationController] pushViewController:controller animated:YES];
}

- (void)ImageCropViewController:(ImageCropViewController *)controller didFinishCroppingImage:(UIImage *)croppedImage{
    image = croppedImage;
    imageView.image = croppedImage;
    [[self navigationController] popViewControllerAnimated:YES];
}

- (void)ImageCropViewControllerDidCancel:(ImageCropViewController *)controller{
    imageView.image = image;
    [[self navigationController] popViewControllerAnimated:YES];
}

UIImageView doesn't update

I have a problem with a UIImageView that doesn't update.
So, it's like this:
I have a UIScrollView that contains a UIImageView (called imageView).
Now, imageView should contain more UIImageViews. I add those UIImageViews from code, but they do not appear.
This is the code:
for(i = 0 ; i < NrOfTilesPerHeight ; i++)
    for(j = 0 ; j < NrOfTilesPerWidth ; j++)
    {
        imageRect = CGRectMake(j*TILE_WIDTH, i*TILE_HEIGHT, TILE_WIDTH, TILE_HEIGHT);
        image = CGImageCreateWithImageInRect(aux.CGImage, imageRect);

        if(!data[i][j])
            NSLog(@"data[%d][%d] is nil", i, j);

        context = CGBitmapContextCreate (data[i][j], TILE_WIDTH, TILE_HEIGHT,
                                         bitsPerComponent, bitmapBytesPerRow, colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        if (context == NULL)
        {
            free (data);
            printf ("Context not created!");
            CGColorSpaceRelease( colorSpace );
        }

        CGContextDrawImage(context, CGRectMake(0, 0, TILE_WIDTH, TILE_HEIGHT), image);
        data[i][j] = CGBitmapContextGetData (context);
        memcpy(originalData[i][j], data[i][j], TILE_WIDTH*TILE_HEIGHT*numberOfCompponents);
        CGContextFlush(context);
        CGImageRelease(image);

        UIImageView *imgView = [[UIImageView alloc] init];
        [imgView setTag:i*10+j];

        CGRect frame = imgView.frame;
        frame.origin.x = j * (TILE_WIDTH+5) * initialScale;
        frame.origin.y = i * (TILE_HEIGHT+5) * initialScale;
        frame.size.width *= initialScale;
        frame.size.height *= initialScale;
        [imgView setFrame:frame];

        [imageView addSubview:imgView];
        [self updateTileAtLine:i andRow:j];
        [imgView release];

        CGDataProviderRelease(dataProvider);
        CGImageRelease(cgImage);
    }
- (void) updateTileAtLine: (int) i andRow: (int) j
{
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data[i][j], bitmapByteCount, NULL);
    CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT, bitsPerComponent,
                                       bitsPerPixel, bitmapBytesPerRow, colorSpace,
                                       kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                       dataProvider, NULL, false, kCGRenderingIntentDefault);
    UIImage *myImg = [UIImage imageWithCGImage:cgImage];

    UIImageView *auxImageView = (UIImageView*) [imageView viewWithTag:(i*10+j)];
    [auxImageView setImage:myImg];

    CGDataProviderRelease(dataProvider);
    CGImageRelease(cgImage);
}
Now... this doesn't crash... so everything is non-nil and OK.
If, instead of using viewWithTag:, I alloc/init a new UIImageView and add it to imageView, it will appear. But I don't want to make another copy of the view, since this updateTile method will be called quite often.
My question is: why doesn't the auxImageView appear? It very much should.
Thank you.
Regards,
George
Try this:
for(UIView *view in [imageView subviews]) {
    if(view.tag == i*10+j) {
        UIImageView *auxImageView = (UIImageView*) view;
        [auxImageView setImage:myImg];
    }
}
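One more thing worth checking (my observation, not part of the original answer): every UIView's tag defaults to 0, and viewWithTag: also matches the receiver itself, so the tile at i = 0, j = 0 (tag 0) can resolve to imageView instead of the tile view. Starting the tags at a non-zero offset avoids that, for example:
// when creating the tile
[imgView setTag:i*10 + j + 1];

// when looking it up
UIImageView *auxImageView = (UIImageView*)[imageView viewWithTag:(i*10 + j + 1)];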

How to resize an image programmatically in Objective-C on the iPhone

I have an application where I am displaying large images in a small space.
The images are quite large, but I am only displaying them in 100x100 pixel frames.
My app is responding slowly because of the size of the images I am using.
To improve performance, how can I resize the images programmatically using Objective-C?
Please find the following code.
- (UIImage *)imageWithImage:(UIImage *)image convertToSize:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *destImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return destImage;
}
The code above just changes the image scale; it is not proper resizing. You have to take the image's width and height into account when choosing the CGSize so the image will not stretch. The method below fills the target size while keeping the aspect ratio and centers the image:
- (UIImage *)imageWithImage:(UIImage *)image scaledToFillSize:(CGSize)size
{
    CGFloat scale = MAX(size.width/image.size.width, size.height/image.size.height);
    CGFloat width = image.size.width * scale;
    CGFloat height = image.size.height * scale;
    CGRect imageRect = CGRectMake((size.width - width)/2.0f,
                                  (size.height - height)/2.0f,
                                  width,
                                  height);

    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    [image drawInRect:imageRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
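For the 100x100 frames mentioned in the question, a call might look like this (a sketch; the largeImage variable is assumed):
UIImage *thumbnail = [self imageWithImage:largeImage scaledToFillSize:CGSizeMake(100, 100)];
imageView.image = thumbnail;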
My favorite way to do this is with CGImageSourceCreateThumbnailAtIndex (in the ImageIO framework). The name is a bit misleading.
Here's an excerpt of some code from a recent app of mine.
CGFloat maxw = // whatever;
CGFloat maxh = // whatever;

CGImageSourceRef src = NULL;
if ([imageSource isKindOfClass:[NSURL class]])
    src = CGImageSourceCreateWithURL((__bridge CFURLRef)imageSource, nil);
else if ([imageSource isKindOfClass:[NSData class]])
    src = CGImageSourceCreateWithData((__bridge CFDataRef)imageSource, nil);

// if at double resolution, double the thumbnail size and use double-resolution image
CGFloat scale = 1;
if ([[UIScreen mainScreen] scale] > 1.0) {
    scale = 2;
    maxw *= 2;
    maxh *= 2;
}

// load the image at the desired size
NSDictionary* d = @{
    (id)kCGImageSourceShouldAllowFloat: (id)kCFBooleanTrue,
    (id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
    (id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanTrue,
    (id)kCGImageSourceThumbnailMaxPixelSize: @((int)(maxw > maxh ? maxw : maxh))
};
CGImageRef imref = CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
if (NULL != src)
    CFRelease(src);

UIImage* im = [UIImage imageWithCGImage:imref scale:scale orientation:UIImageOrientationUp];
if (NULL != imref)
    CFRelease(imref);
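For context, the excerpt expects imageSource to be either an NSURL or an NSData (per the isKindOfClass: checks above); a hypothetical setup could be:
// pathToLargeImage is assumed; imageSource can also be NSData already in memory
id imageSource = [NSURL fileURLWithPath:pathToLargeImage];
// set maxw and maxh to the target size (e.g. 100 for the 100x100 frames in the question),
// then run the excerpt above to obtain `im`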
If you are using an image at different sizes and resizing it each time, it will degrade your app's performance. The solution is not to resize the images yourself: use a button in place of the image view and just set the image on the button; it will be resized automatically and you will get great performance.
I was also resizing images while setting them on a cell, and my app got slow, so I used a button in place of the image view (the button does the resizing, not my code) and it works perfectly fine.
-(UIImage *)scaleImage:(UIImage *)image toSize:(CGSize)targetSize
{
    //If scaleFactor is not touched, no scaling will occur
    CGFloat scaleFactor = 1.0;

    //Deciding which factor to use to scale the image (factor = targetSize / imageSize)
    if (image.size.width > targetSize.width ||
        image.size.height > targetSize.height || image.size.width == image.size.height)
        if (!((scaleFactor = (targetSize.width /
            image.size.width)) > (targetSize.height /
            image.size.height))) //scale to fit width, or
            scaleFactor = targetSize.height / image.size.height; // scale to fit height.
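The snippet above stops after computing scaleFactor; a minimal way to finish it (my sketch, not part of the original answer) is to draw the image at the scaled size and return the result:
    CGSize scaledSize = CGSizeMake(image.size.width * scaleFactor,
                                   image.size.height * scaleFactor);
    UIGraphicsBeginImageContext(scaledSize);
    [image drawInRect:CGRectMake(0, 0, scaledSize.width, scaledSize.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}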
Since the code ran perfectly fine on iOS 4, for backwards compatibility I added a check for the OS version; for anything below 5.0, the old code is used.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality {
    BOOL drawTransposed;
    CGAffineTransform transform = CGAffineTransformIdentity;

    if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 5.0) {
        // Apparently in iOS 5 the image is already correctly rotated, so we don't need to rotate it manually
        drawTransposed = NO;
    } else {
        switch (self.imageOrientation) {
            case UIImageOrientationLeft:
            case UIImageOrientationLeftMirrored:
            case UIImageOrientationRight:
            case UIImageOrientationRightMirrored:
                drawTransposed = YES;
                break;
            default:
                drawTransposed = NO;
        }
        transform = [self transformForOrientation:newSize];
    }

    return [self resizedImage:newSize
                    transform:transform
               drawTransposed:drawTransposed
         interpolationQuality:quality];
}
You can use this.
[m_Image.layer setMinificationFilter:kCAFilterTrilinear];
This thread is old, but it is what I pulled up when trying to solve this problem. Once the image was scaled, it was not displaying well in my container even though I turned Auto Layout off. The easiest way for me to solve this for display in a table row was to paint the image onto a white background of a fixed size.
Helper function
+(UIImage*)scaleMaintainAspectRatio:(UIImage*)sourceImage :(float)i_width :(float)i_height
{
    float newHeight = 0.0;
    float newWidth = 0.0;

    float oldWidth = sourceImage.size.width;
    float widthScaleFactor = i_width / oldWidth;
    float oldHeight = sourceImage.size.height;
    float heightScaleFactor = i_height / oldHeight;

    if (heightScaleFactor > widthScaleFactor) {
        newHeight = oldHeight * widthScaleFactor;
        newWidth = sourceImage.size.width * widthScaleFactor;
    } else {
        newHeight = sourceImage.size.height * heightScaleFactor;
        newWidth = oldWidth * heightScaleFactor;
    }

    // return image in white rect
    float cxPad = i_width - newWidth;
    float cyPad = i_height - newHeight;
    if (cyPad > 0) {
        cyPad = cyPad / 2.0;
    }
    if (cxPad > 0) {
        cxPad = cxPad / 2.0;
    }
    CGSize size = CGSizeMake(i_width, i_height);

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(size.width, size.height), YES, 0.0);
    [[UIColor whiteColor] setFill];
    UIRectFill(CGRectMake(0, 0, size.width, size.height));
    [sourceImage drawInRect:CGRectMake((int)cxPad, (int)cyPad, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;

    // will return scaled image at actual size, not in white rect
    // UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    // [sourceImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    // UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    // UIGraphicsEndImageContext();
    // return newImage;
}
I called it like this from my table view's cellForRowAtIndexPath:
PFFile *childsPicture = [object objectForKey:@"picture"];
[childsPicture getDataInBackgroundWithBlock:^(NSData *imageData, NSError *error) {
    if (!error) {
        UIImage *largePicture = [UIImage imageWithData:imageData];
        UIImage *scaledPicture = [Utility scaleMaintainAspectRatio:largePicture :70.0 :70.0];
        PFImageView *thumbnailImageView = (PFImageView*)[cell viewWithTag:100];
        thumbnailImageView.image = scaledPicture;
        [self.tableView reloadData];
    }
}];
Hello from the end of 2018.
Solved with the following (you only need the last line; the first and second are just for context):
NSURL *url = [NSURL URLWithString:response.json[0][@"photo_50"]];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data scale:customScale];
'customScale' is the scale you want (> 1 if the image must be smaller, < 1 if the image must be bigger).
This C function will resize your image with a corner radius, without affecting the image's quality:
UIImage *Resize_Image(UIImage *iImage, CGFloat iSize, CGFloat icornerRadius) {
    CGFloat scale = MAX(CGSizeMake(iSize, iSize).width/iImage.size.width, CGSizeMake(iSize, iSize).height/iImage.size.height);
    CGFloat width = iImage.size.width * scale;
    CGFloat height = iImage.size.height * scale;
    CGRect imageRect = CGRectMake((CGSizeMake(iSize, iSize).width - width)/2.0f,
                                  (CGSizeMake(iSize, iSize).height - height)/2.0f,
                                  width,
                                  height);

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(iSize, iSize), NO, 0);
    [[UIBezierPath bezierPathWithRoundedRect:imageRect cornerRadius:icornerRadius] addClip];
    [iImage drawInRect:imageRect];
    UIImage *ResizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return ResizedImage;
}
This is how to use it:
UIImage *ResizedImage = Resize_Image([UIImage imageNamed:@"image.png"], 64, 14.4);
I do not remember where I took the first 4 lines from...

Resetting CGContextRef after drawing a PDF page using CGContextDrawPDFPage

I am trying to create thumbnail images for every PDF page in a PDF document and place them in a UIScrollView. I have succeeded in this, but scrolling is not as smooth as I would like when it's fast.
I want to optimize the thumbnail creation for PDF pages.
I want to create one CGContextRef and reset its content after CGContextDrawPDFPage, so that I don't have to create a context each time and redo other calculations, which takes a lot of resources.
Is it possible to reset the CGContextRef content after CGContextDrawPDFPage? CGContextSaveGState and CGContextRestoreGState do not seem to help in this situation.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
GCPdfSource *pdfSource = [GCPdfSource sharedInstance];

for (int i = pageRange.location; i <= pageRange.length; i++) {
    UIView *thumbPdfView = [scrollView viewWithTag:i+1];
    if (thumbPdfView == nil) {
        CGPDFPageRef pdfPage = [pdfSource pageAt:i + 1];

        float xPosition = THUMB_H_PADDING + THUMB_H_PADDING * i + THUMB_WIDTH * i;
        CGRect frame = CGRectMake(xPosition, THUMB_H_PADDING, THUMB_WIDTH, THUMB_HEIGHT);

        thumbPdfView = [[UIView alloc] initWithFrame:frame];
        thumbPdfView.opaque = YES;
        thumbPdfView.backgroundColor = [UIColor whiteColor];
        [thumbPdfView setTag:i+1];

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     frame.size.width,
                                                     frame.size.height,
                                                     8, /* bits per component */
                                                     frame.size.width * 4, /* bytes per row */
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        CGColorSpaceRelease(colorSpace);
        CGContextClipToRect(context, CGRectMake(0, 0, frame.size.width, frame.size.height));

        CGRect pdfPageRect = CGPDFPageGetBoxRect(pdfPage, kCGPDFMediaBox);
        CGRect contextRect = CGContextGetClipBoundingBox(context);
        CGAffineTransform transform = aspectFit(pdfPageRect, contextRect);
        CGContextConcatCTM(context, transform);

        CGContextDrawPDFPage(context, pdfPage);

        CGImageRef image = CGBitmapContextCreateImage(context);
        CGContextRelease(context);

        UIImage *uiImage = [[UIImage alloc] initWithCGImage:image];
        CGImageRelease(image);

        UIImageView *imageVIew = [[UIImageView alloc] initWithImage:uiImage];
        [uiImage release];

        [thumbPdfView addSubview:imageVIew];
        [imageVIew release];

        [scrollView addSubview:thumbPdfView];
        [thumbPdfView release];
    }
}
[pool release]; //release
and the aspectFit function...
CGAffineTransform aspectFit(CGRect innerRect, CGRect outerRect) {
    CGFloat scaleFactor = MIN(outerRect.size.width/innerRect.size.width, outerRect.size.height/innerRect.size.height);
    CGAffineTransform scale = CGAffineTransformMakeScale(scaleFactor, scaleFactor);
    CGRect scaledInnerRect = CGRectApplyAffineTransform(innerRect, scale);
    CGAffineTransform translation =
        CGAffineTransformMakeTranslation((outerRect.size.width - scaledInnerRect.size.width) / 2 - scaledInnerRect.origin.x,
                                         (outerRect.size.height - scaledInnerRect.size.height) / 2 - scaledInnerRect.origin.y);
    return CGAffineTransformConcat(scale, translation);
}
Try CGContextClearRect(context, contextBounds).
CGContextSaveGState and CGContextRestoreGState do not have any effect on the content of a context. They push and pop changes made to state aspects of the context like the current fill color.
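A rough sketch of the reuse pattern this implies (variable names are assumed; the per-page transform is wrapped in save/restore so it does not accumulate across pages):
// create the bitmap context once, outside the loop
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

for (int i = 0; i < pageCount; i++) {
    CGContextClearRect(context, CGRectMake(0, 0, width, height));   // wipe the previous page

    CGContextSaveGState(context);                                   // CTM changes stay local to this page
    CGPDFPageRef pdfPage = [pdfSource pageAt:i + 1];
    CGContextConcatCTM(context, aspectFit(CGPDFPageGetBoxRect(pdfPage, kCGPDFMediaBox),
                                          CGContextGetClipBoundingBox(context)));
    CGContextDrawPDFPage(context, pdfPage);
    CGContextRestoreGState(context);

    CGImageRef image = CGBitmapContextCreateImage(context);
    // ...wrap in a UIImage / UIImageView as in the question...
    CGImageRelease(image);
}
CGContextRelease(context);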