I have an app with a full-screen table view that displays a bunch of tiny images. Those images are pulled from the web, processed on a background thread, and then saved to disk using something like:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0);
    // code that adds some glosses, shadows, etc
    UIImage *output = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext(); // balances the Begin call above
    NSData *cacheData = UIImagePNGRepresentation(output);
    [cacheData writeToFile:thumbPath atomically:YES];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.image = output; // refreshes the cell using KVO
    });
});
This code is only executed the first time the cell is displayed (since after that the image is already on disk). In that case, the cell is loaded using:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *savedImage = [UIImage imageWithContentsOfFile:thumbPath];
    if (savedImage) {
        dispatch_async(dispatch_get_main_queue(), ^{
            self.image = savedImage; // refreshes the cell using KVO
        });
    }
});
My problem is that in the first case, scrolling is butter smooth. But in the second case (where the image is read directly from disk), scrolling is super jerky, even once the image is loaded. Drawing is what's causing the lag. Using Instruments, I can see that copyImageBlockSetPNG, png_read_now, and inflate are taking up most of the CPU (they aren't when assigning self.image from UIGraphicsGetImageFromCurrentImageContext()).
I'm assuming this happens because in the first case the UIImage is the raw output of the drawing, whereas in the second case it has to decompress the PNG every time it draws it. I tried using JPEGs instead of PNGs and got similar results.
Is there a way to fix this? Maybe to have it only decompress the PNG the first time it gets drawn?
Your problem is that +imageWithContentsOfFile: is lazy: it defers reading and decoding the file until the image is actually drawn. If you want to do something like this, instead use this code on your background queue:
// Assuming ARC
NSData *imageFileData = [[NSData alloc] initWithContentsOfFile:thumbPath];
UIImage *savedImage = [[UIImage alloc] initWithData:imageFileData];
// Dispatch back to main queue and set image...
Now, with this code, the actual decompression of the image data will still be lazy and cost a little bit, but not nearly as much as the file access hit you're getting with the lazy loading in your code example.
Since you're still seeing a performance issue, you can also force UIImage to decompress the image on the background thread:
// Still on background, before dispatching to main
UIGraphicsBeginImageContext(CGSizeMake(100, 100)); // this isn't that important since you just want UIImage to decompress the image data before switching back to main thread
[savedImage drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
// dispatch back to main thread...
Jason's tip about pre-drawing the image to decompress it is the key, but you'll get even better performance by copying the whole image and discarding the original.
Images created at runtime on iOS seem to be better optimised for drawing than ones that have been loaded from a file, even after you've forced them to decompress. For that reason, you should load like this (it's also a good idea to put the decompressed image into an NSCache so you don't have to keep reloading it):
- (void)loadImageWithPath:(NSString *)path block:(void (^)(UIImage *image))block
{
    static NSCache *cache = nil;
    if (!cache)
    {
        cache = [[NSCache alloc] init];
    }

    //check cache first
    UIImage *image = [cache objectForKey:path];
    if (image)
    {
        block(image);
        return;
    }

    //switch to background thread
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{

        //load image
        UIImage *image = [UIImage imageWithContentsOfFile:path];

        //redraw image using device context
        UIGraphicsBeginImageContextWithOptions(image.size, YES, 0);
        [image drawAtPoint:CGPointZero];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        //back to main thread
        dispatch_async(dispatch_get_main_queue(), ^{

            //cache the image
            [cache setObject:image forKey:path];

            //return the image
            block(image);
        });
    });
}
Another way to force image decompression:
NS_INLINE void forceImageDecompression(UIImage *image)
{
    CGImageRef imageRef = [image CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef), 8,
                                                 CGImageGetWidth(imageRef) * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
    CGColorSpaceRelease(colorSpace);
    if (!context) { NSLog(@"Could not create context for image decompression"); return; }
    CGContextDrawImage(context, (CGRect){{0.0f, 0.0f}, {CGImageGetWidth(imageRef), CGImageGetHeight(imageRef)}}, imageRef);
    CFRelease(context);
}
Usage:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    UIImage *image = [UIImage imageWithContentsOfFile:[NSString stringWithFormat:@"%u.jpg", pageIndex]];
    forceImageDecompression(image);
    dispatch_async(dispatch_get_main_queue(), ^{
        [(UIImageView *)page setImage:image];
    });
});
Related
I need to read images from NSDocumentDirectory into multiple UIImageViews asynchronously, so it won't block the UI.
I know I can use performSelectorInBackground: to load a UIImage, but then how can I associate it with the dynamic UIImageView?
One convenient way is to use blocks, something like:
[self loadFullImageAt:imagePath completion:^(UIImage *image) {
    self.imageView.image = image;
}];
Here you would load the image as data (since UIImage otherwise defers loading the image data until you first access it). It's also a good idea to decompress the image while still on the background thread, so the main thread doesn't have to do it when the image is first used.
- (void)loadFullImageAt:(NSString *)imageFilePath completion:(MBLoaderCompletion)completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        NSData *imageData = [NSData dataWithContentsOfFile:imageFilePath];
        UIImage *image = nil;
        if (imageData) {
            image = [[[UIImage alloc] initWithData:imageData] decodedImage];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image);
        });
    });
}
The callback is defined as:
typedef void (^MBLoaderCompletion)(UIImage *image);
Here's a UIImage category that implements the decompression code:
UIImage+Decode.h
#import <UIKit/UIKit.h>
@interface UIImage (Decode)

- (UIImage *)decodedImage;

@end
UIImage+Decode.m
#import "UIImage+Decode.h"
#implementation UIImage (Decode)
- (UIImage *)decodedImage {
CGImageRef imageRef = self.CGImage;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,
CGImageGetWidth(imageRef),
CGImageGetHeight(imageRef),
8,
// Just always return width * 4 will be enough
CGImageGetWidth(imageRef) * 4,
// System only supports RGB, set explicitly
colorSpace,
// Makes system don't need to do extra conversion when displayed.
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease(colorSpace);
if (!context) return nil;
CGRect rect = (CGRect){CGPointZero,{CGImageGetWidth(imageRef), CGImageGetHeight(imageRef)}};
CGContextDrawImage(context, rect, imageRef);
CGImageRef decompressedImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImage *decompressedImage = [[UIImage alloc] initWithCGImage:decompressedImageRef scale:self.scale orientation:self.imageOrientation];
CGImageRelease(decompressedImageRef);
return decompressedImage;
}
#end
The sample code provided here assumes that we're using ARC.
When you say "dynamic" UIImageView, are these programmatically created on a UIScrollView? on a UITableView? samfisher is quite right on the basic question, but the details differ a little based upon how you created the UIImageView (e.g. if UITableView, you need to make sure that the cell is still visible and hasn't been dequeued; if UIScrollView, even then you might want to only load the UIImageView if the image is still visible on the screen (esp if the images are large or numerous)).
But the basic idea is that you might do something like:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [self getTheImage];

    // ok, now that you have the image, dispatch the update of the UI back to the main queue
    dispatch_async(dispatch_get_main_queue(), ^{
        // if the image view is still visible, update it
    });
});
Note that you invoke the retrieval of the image on some background queue or thread, but make sure to update the UI back on the main thread!
If you're updating a scroll view, you might want to check that the view is still visible, as contemplated here or here. If you're updating a table view, perhaps something like this, which checks whether the cell is still visible. It all depends upon what you're trying to do.
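For the table view case, a minimal sketch of that visibility check might look like this (assuming the cell's indexPath was captured before dispatching, and reusing the hypothetical getTheImage from the sketch above; cellForRowAtIndexPath: returns nil once the row has scrolled off screen):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *image = [self getTheImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        // nil if the row is no longer visible, so a reused, off-screen cell is never touched
        UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:indexPath];
        cell.imageView.image = image; // messaging nil is a no-op
        [cell setNeedsLayout];
    });
});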
You can use NSThread or a dispatch queue to create threads which load the images into your UIImageViews.
I have a blog application that I'm making. To compose a new entry, there is a "Compose Entry" view where the user can select a photo and input text. For the photo, there is a UIImageView placeholder and upon clicking this, a custom ImagePicker comes up where the user can select up to 3 photos.
This is where the problem comes in. I don't need the full resolution photo from the ALAsset, but at the same time, the thumbnail is too low resolution for me to use.
So what I'm doing at this point is resizing the fullResolution photos to a smaller size. However, this takes some time, especially when resizing up to 3 photos to a smaller size.
Here is a code snippet to show what I'm doing:
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullResolutionImage];
if (iref)
{
    CGRect screenBounds = [[UIScreen mainScreen] bounds];
    UIImage *previewImage;
    UIImage *largeImage;

    if ([rep orientation] == ALAssetOrientationUp) //landscape image
    {
        largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
        previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    }
    else // portrait image
    {
        previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
        largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
    }
}
Here, from the full-resolution image, I am creating two images: a preview image (max 300 px on the long end) and a large image (max 960 px or 640 px on the long end). The preview image is what is shown in the app itself in the "new entry" preview. The large image is what will be used when uploading to the server.
The actual resizing code I grabbed from somewhere around here:
- (UIImage *)scaledToWidth:(float)i_width
{
    float oldWidth = self.size.width;
    float scaleFactor = i_width / oldWidth;

    float newHeight = self.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;

    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [self drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
Am I doing things wrong here? As it stands, the ALAsset thumbnail is too low-resolution, and at the same time, I don't need the entire full resolution. It's all working now, but the resizing takes some time. Is this just a necessary consequence?
Thanks!
It is a necessary consequence of resizing your image that it will take some amount of time. How much depends on the device, the resolution of the asset, and the format of the asset; you don't have any control over that. But you do have control over where the resizing takes place. I suspect that right now you are resizing the image on your main thread, which causes the UI to grind to a halt while the resizing happens. Process enough images, and your app will appear hung for long enough that the user will just go off and do something else (perhaps check out competing apps in the App Store).
What you should be doing is performing the resizing off the main thread. With iOS 4 and later, this has become much simpler because you can use Grand Central Dispatch to do the resizing. You can take your original block of code from above and wrap it in a block like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
    CGImageRef iref = [rep fullResolutionImage];
    if (iref)
    {
        CGRect screenBounds = [[UIScreen mainScreen] bounds];
        __block UIImage *previewImage;
        __block UIImage *largeImage;

        if ([rep orientation] == ALAssetOrientationUp) //landscape image
        {
            largeImage = [[UIImage imageWithCGImage:iref] scaledToWidth:screenBounds.size.width];
            previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
        }
        else // portrait image
        {
            previewImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:300] imageRotatedByDegrees:90];
            largeImage = [[[UIImage imageWithCGImage:iref] scaledToHeight:screenBounds.size.height] imageRotatedByDegrees:90];
        }

        dispatch_async(dispatch_get_main_queue(), ^{
            // do whatever you need to do in the main thread here once your image is resized.
            // this is going to be things like setting the UIImageViews to show your new images
            // or adding new views to your view hierarchy
        });
    }
});
You'll have to think about things a little differently this way. For example, you've now broken up what used to be a single step into multiple steps. Code that used to run after this will now run before the image resize is complete, or before you actually do anything with the images, so you need to make sure you don't have any dependencies on those images, or you'll likely crash.
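For example, anything that depends on previewImage or largeImage has to move inside the dispatched blocks. In this hypothetical sketch, uploadPreviewImage:largeImage: is a made-up stand-in for whatever consumes the resized images:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    // ... the resizing code from above ...
    dispatch_async(dispatch_get_main_queue(), ^{
        // safe: this runs only after the resize has finished
        [self uploadPreviewImage:previewImage largeImage:largeImage];
    });
});
// NOT safe: this line runs immediately, before the resize has even started,
// so previewImage and largeImage don't exist yet down here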
A late answer, but for those stumbling on this question, you might want to consider using the fullScreenImage rather than the fullResolutionImage of the defaultRepresentation. It's usually much smaller, but still large enough to maintain good quality for larger thumbnails.
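A minimal sketch of that substitution, reusing the dict and the scaledToWidth: category from the question (fullScreenImage is pre-scaled to roughly the device's screen size and is already adjusted for orientation, so the manual rotation goes away):
ALAssetRepresentation *rep = [[dict objectForKey:@"assetObject"] defaultRepresentation];
CGImageRef iref = [rep fullScreenImage]; // much smaller than fullResolutionImage
if (iref)
{
    UIImage *previewImage = [[UIImage imageWithCGImage:iref] scaledToWidth:300];
    // ... same idea for largeImage ...
}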
I am scaling and cropping a UIImage and I want to be able to do it in a block that is thread safe. I could not find in the docs whether UIImageJPEGRepresentation is thread safe.
In the following code, I crop and scale a CGImage, then I create a UIImage from that and get the UIImageJPEGRepresentation. The end goal of this block is to get the NSData* from the scaled/cropped version.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CGImageRef imageRef = photo.CGImage;
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);

    CGContextRef bitmap;
    if (photo.imageOrientation == UIImageOrientationUp || photo.imageOrientation == UIImageOrientationDown) {
        bitmap = CGBitmapContextCreate(NULL, kFINAL_WIDTH, kFINAL_HEIGHT, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    } else {
        bitmap = CGBitmapContextCreate(NULL, kFINAL_HEIGHT, kFINAL_WIDTH, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
    }

    CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);
    CGContextDrawImage(bitmap, drawRect, imageRef);

    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    NSData *finalData = UIImageJPEGRepresentation([UIImage imageWithCGImage:ref], 1.0);

    CGContextRelease(bitmap);
    CGImageRelease(ref);

    dispatch_async(dispatch_get_main_queue(), ^{
        [self.delegate sendNSDataBack:finalData];
    });
});
I tried getting the NSData using a CGDataProviderRef, but when I finally did get the NSData, putting it into a UIImage and displaying that in a UIImageView showed nothing.
So the bottom-line question is: can I use [UIImage imageWithData:] and UIImageJPEGRepresentation() on another thread in a GCD block?
You can use UIImageJPEGRepresentation() in the background (I'm using it this way in a current project).
However, what you can't do is create a UIImage the way you are doing in the background: the [UIImage imageWithCGImage:] call must be done on the main thread (as a rule of thumb, all UIKit calls should be done on the main thread).
This seems like a case where you might need nested blocks.
Edit: I've found that my own code does call [UIImage imageWithCGImage:] while on a background thread, but I am still suspicious that it might cause issues in some cases. My code does work, though.
Edit 2: I just noticed you are resizing the image. There's a very nice UIImage+Resize category linked to in this post that was built to do that in a robust way:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
You should really read that whole page to understand the nuances of resizing images. As I said, I do use that from a background thread even though part of what it does inside is what you were doing.
Edit 3: If you are running on iOS 4 or later, you may want to look into using the ImageIO framework to output images, which is more likely to be thread-safe:
http://developer.apple.com/graphicsimaging/workingwithimageio.html
Example code for that is hard to find; here's a method that saves a PNG image using ImageIO (based on the code in "Programming With Quartz: 2D and PDF Graphics in Mac OS X"):
// You'll need both the ImageIO and MobileCoreServices frameworks for this to compile
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

void exportCGImageToPNGFileWithDestination(CGImageRef image, CFURLRef url)
{
    float resolution = 144;
    CFTypeRef keys[2];
    CFTypeRef values[2];
    CFDictionaryRef options = NULL;

    // Create image destination to go into URL, using PNG
    CGImageDestinationRef imageDestination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
    if (imageDestination == NULL)
    {
        fprintf(stderr, "Error creating image destination\n");
        return;
    }

    // Set the keys to be the X and Y resolution of the image
    keys[0] = kCGImagePropertyDPIWidth;
    keys[1] = kCGImagePropertyDPIHeight;

    // Create a number for the DPI value for the image
    values[0] = CFNumberCreate(NULL, kCFNumberFloatType, &resolution);
    values[1] = values[0];

    // Options dictionary for output
    options = CFDictionaryCreate(NULL,
                                 (const void **)keys,
                                 (const void **)values,
                                 2,
                                 &kCFTypeDictionaryKeyCallBacks,
                                 &kCFTypeDictionaryValueCallBacks);
    CFRelease(values[0]);

    // Adding the image to the destination
    CGImageDestinationAddImage(imageDestination, image, options);
    CFRelease(options);

    // Finalizing writes the image out to the destination
    CGImageDestinationFinalize(imageDestination);
    CFRelease(imageDestination);
}
Apple's official position is that no part of UIKit is thread-safe. However, the rest of your code appears to be Quartz-based, which is thread-safe when used in the manner you use it.
You can do everything on a background thread, then do the call to UIImageJPEGRepresentation() back on main:
// ...
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
dispatch_async(dispatch_get_main_queue(), ^{
    NSData *finalData = UIImageJPEGRepresentation([UIImage imageWithCGImage:ref], 1.0);
    [self.delegate sendNSDataBack:finalData];
    // release ref here, inside the block, so it stays valid until the main thread has used it
    CGImageRelease(ref);
});
CGContextRelease(bitmap);
I think it is thread-safe, because I do similar things to resize a UIImage, or store image data in a database, on a background thread. The main thread is sometimes called the UI thread, and anything that updates the screen should be executed on it. But a UIImage is an object that stores image data; it is not subclassed from UIView. So it should be thread-safe.
Is there any way to create a UIImage object from an offscreen surface without copying the underlying pixel data?
I would like to do something like this:
// These numbers are made up obviously...
CGContextRef offscreenContext = MyCreateBitmapContext(width, height);
// Draw into the offscreen buffer
// Draw commands not relevant...
// Convert offscreen into CGImage
// This consumes ~15MB
CGImageRef offscreenContextImage = CGBitmapContextCreateImage(offscreenContext);
// This allocates another ~15MB
// Is there any way to share the bits from the
// CGImageRef instead of copying the data???
UIImage * newImage = [[UIImage alloc] initWithCGImage:offscreenContextImage];
// Releases the original 15MB, but the spike of 30MB total kills the app.
CGImageRelease(offscreenContextImage);
CGContextRelease(offscreenContext);
The memory is released and levels out at the acceptable size, but the 30MB memory spike is what kills the application. Is there any way to share the pixel data?
I've considered saving the offscreen buffer to a file and loading the data again, but this is a hack and the convenience methods for the iPhone require a UIImage to save it...
You could try releasing the context right after you create the CGImage, freeing the memory used by the context, because CGBitmapContextCreateImage() creates a copy of the context's data anyway.
Like this:
CGImageRef offscreenContextImage = CGBitmapContextCreateImage(offscreenContext);
CGContextRelease(offscreenContext);
UIImage * newImage = [[UIImage alloc] initWithCGImage:offscreenContextImage];
// ...
CGImageRelease(offscreenContextImage);
Maybe:
UIImage *newImage = [[UIImage alloc] initWithData:offScreenContext];
My program displays a horizontal scrolling surface tiled with UIImageViews from left to right. Code runs on the UI thread to ensure that newly-visible UIImageViews have a freshly loaded UIImage assigned to them. The loading happens on a background thread.
Everything works almost fine, except there is a stutter as each image becomes visible. At first I thought my background worker was locking something in the UI thread. I spent a lot of time looking at it and eventually realized that the UIImage is doing some extra lazy processing on the UI thread when it first becomes visible. This puzzles me, since my worker thread has explicit code for decompressing JPEG data.
Anyway, on a hunch I wrote some code to render into a temporary graphics context on the background thread and - sure enough, the stutter went away. The UIImage is now being pre-loaded on my worker thread. So far so good.
The issue is that my new "force lazy load of image" method is unreliable. It causes intermittent EXC_BAD_ACCESS. I have no idea what UIImage is actually doing behind the scenes. Perhaps it is decompressing the JPEG data. Anyway, the method is:
+ (void)forceLazyLoadOfImage:(UIImage *)image
{
    CGImageRef imgRef = image.CGImage;
    CGFloat currentWidth = CGImageGetWidth(imgRef);
    CGFloat currentHeight = CGImageGetHeight(imgRef);

    CGRect bounds = CGRectMake(0.0f, 0.0f, 1.0f, 1.0f);
    CGAffineTransform transform = CGAffineTransformIdentity;

    CGFloat scaleRatioX = bounds.size.width / currentWidth;
    CGFloat scaleRatioY = bounds.size.height / currentHeight;

    UIGraphicsBeginImageContext(bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextScaleCTM(context, scaleRatioX, -scaleRatioY);
    CGContextTranslateCTM(context, 0, -currentHeight);
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, CGRectMake(0, 0, currentWidth, currentHeight), imgRef);

    UIGraphicsEndImageContext();
}
And the EXC_BAD_ACCESS happens on the CGContextDrawImage line.
Question 1: Am I allowed to do this on a thread other than the UI thread?
Question 2: What is the UIImage actually "pre-loading"?
Question 3: What is the official way to solve this problem?
Thanks for reading all that, any advice would be greatly appreciated!
I've had the same stuttering problem, and with some help I figured out the proper solution here: Non-lazy image loading in iOS.
Two important things to mention:
1. Don't use UIKit methods in a worker thread. Use Core Graphics instead.
2. Even if you have a background thread for loading and decompressing images, you'll still have a little stutter if you use the wrong bitmask for your CGBitmapContext. These are the options you have to choose (it's still a bit unclear to me why):
CGBitmapContextCreate(imageBuffer, width, height, 8, width*4, colourSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
I've posted a sample project here: SwapTest. It has about the same performance as Apple's Photos app for loading/displaying images.
I used @jasamer's SwapTest UIImage category to force-load my large UIImage (about 3000x2100 px) in a worker thread (with NSOperationQueue). This reduces the stutter time when setting the image into the UIImageView to an acceptable value (about 0.5 sec on iPad 1).
Here is the SwapTest UIImage category... thanks again @jasamer :)
UIImage+ImmediateLoading.h file
@interface UIImage (UIImage_ImmediateLoading)

- (UIImage *)initImmediateLoadWithContentsOfFile:(NSString *)path;
+ (UIImage *)imageImmediateLoadWithContentsOfFile:(NSString *)path;

@end
UIImage+ImmediateLoading.m file
#import "UIImage+ImmediateLoading.h"
#implementation UIImage (UIImage_ImmediateLoading)
+ (UIImage*)imageImmediateLoadWithContentsOfFile:(NSString*)path {
return [[[UIImage alloc] initImmediateLoadWithContentsOfFile: path] autorelease];
}
- (UIImage*)initImmediateLoadWithContentsOfFile:(NSString*)path {
UIImage *image = [[UIImage alloc] initWithContentsOfFile:path];
CGImageRef imageRef = [image CGImage];
CGRect rect = CGRectMake(0.f, 0.f, CGImageGetWidth(imageRef), CGImageGetHeight(imageRef));
CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
rect.size.width,
rect.size.height,
CGImageGetBitsPerComponent(imageRef),
CGImageGetBytesPerRow(imageRef),
CGImageGetColorSpace(imageRef),
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little
);
//kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little are the bit flags required so that the main thread doesn't have any conversions to do.
CGContextDrawImage(bitmapContext, rect, imageRef);
CGImageRef decompressedImageRef = CGBitmapContextCreateImage(bitmapContext);
UIImage* decompressedImage = [[UIImage alloc] initWithCGImage: decompressedImageRef];
CGImageRelease(decompressedImageRef);
CGContextRelease(bitmapContext);
[image release];
return decompressedImage;
}
#end
And this is how I create the NSOperationQueue and set the image on the main thread...
// Loads the low-res UIImage at a given index and starts loading a hi-res one in the background.
// After it finishes loading, it sets the hi-res image on the UIImageView. Remember, we need to
// update the UI "on the main thread", otherwise the result is unpredictable.
- (void)loadPageAtIndex:(int)index {
    prevPage = index;

    //load low-res
    imageViewForZoom.image = [images objectAtIndex:index];

    //load hi-res on another thread
    [operationQueue cancelAllOperations];
    filePath = [imagesHD objectAtIndex:index];
    NSInvocationOperation *operation = [[NSInvocationOperation alloc] initWithTarget:self
                                                                            selector:@selector(loadHiResImage:)
                                                                              object:[imagesHD objectAtIndex:index]];
    [operationQueue addOperation:operation];
    [operation release];
    operation = nil;
}
// background thread
- (void)loadHiResImage:(NSString *)file {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSLog(@"loading");

    // This doesn't load the image.
    //UIImage *hiRes = [UIImage imageNamed:file];

    // Loads the UIImage. There is no UI updating, so it should be thread-safe.
    UIImage *hiRes = [[UIImage alloc] initImmediateLoadWithContentsOfFile:[[NSBundle mainBundle] pathForResource:file ofType:nil]];

    [imageViewForZoom performSelectorOnMainThread:@selector(setImage:) withObject:hiRes waitUntilDone:NO];
    [hiRes release];
    NSLog(@"loaded");
    [pool release];
}
The UIGraphics* methods are designed to be called from the main thread only. They are probably the source of your trouble.
You can replace UIGraphicsBeginImageContext() with a call to CGBitmapContextCreate(); it's a little more involved (you need to create a color space, figure out the right-sized buffer to create, and allocate it yourself, or pass NULL to have Quartz allocate it for you). The CG* methods are fine to run from a different thread.
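A minimal sketch of that replacement, with made-up dimensions and the same bitmask recommended elsewhere in this thread:
// a background-thread-safe stand-in for UIGraphicsBeginImageContext()
size_t width = 1024;  // example dimensions
size_t height = 768;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,       // NULL = let Quartz allocate (and free) the buffer
                                             width, height,
                                             8,          // bits per component
                                             width * 4,  // bytes per row: 4 bytes per RGBA pixel
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
CGColorSpaceRelease(colorSpace);

// ... CG* drawing calls, safe off the main thread ...

CGImageRef cgImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// hand cgImage to the main thread to wrap in a UIImage, then CGImageRelease() it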
I'm not sure how you're initializing UIImage, but if you're doing it with imageNamed: or initWithFile: then you might be able to force it to load by loading the data yourself and then calling initWithData:. The stutter is probably due to lazy file I/O, so initializing it with a data object won't give it the option of reading from a file.
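That workaround might look like this (a sketch; as the next answer notes, the decode itself can still be deferred even when the bytes are read eagerly):
NSData *data = [NSData dataWithContentsOfFile:path]; // file I/O happens here, up front
UIImage *image = [UIImage imageWithData:data];       // no file left to read lazily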
I had the same problem, even though I initialized the image using data. (I guess the decoding is deferred, too?) I succeeded in forcing decompression using the following category:
@interface UIImage (Loading)
- (void)forceLoad;
@end

@implementation UIImage (Loading)

- (void)forceLoad
{
    const CGImageRef cgImage = [self CGImage];

    const int width = CGImageGetWidth(cgImage);
    const int height = CGImageGetHeight(cgImage);

    const CGColorSpaceRef colorspace = CGImageGetColorSpace(cgImage);
    const CGContextRef context = CGBitmapContextCreate(
        NULL,           /* Where to store the data. NULL = don’t care */
        width, height,  /* width & height */
        8, width * 4,   /* bits per component, bytes per row */
        colorspace, kCGImageAlphaNoneSkipFirst);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
}

@end
#end