Thread-safe UIImage resizing? - iphone

I made this UIImage extension to get a rescaled copy:
- (UIImage *)scaleByRatio:(float)scaleRatio
{
    CGSize scaledSize = CGSizeMake(self.size.width * scaleRatio, self.size.height * scaleRatio);

    // The output context.
    UIGraphicsBeginImageContext(scaledSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Percent (101%).
    #define SCALE_OVER_A_BIT 1.01

    // Scale, then draw.
    CGContextScaleCTM(context, scaleRatio * SCALE_OVER_A_BIT, scaleRatio * SCALE_OVER_A_BIT);
    [self drawAtPoint:CGPointZero];

    // Grab the scaled image and end the context.
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaledImage;
}
It works in most cases, but in my recent project it outputs the original image (I save it to disk right after scaling, using UIImageJPEGRepresentation and imageWithData:).
I invoke the method on a background thread. Could this be the problem? How can I rewrite this to be thread-safe (assuming the problem is caused by threading)?

- (UIImage *)scaleByRatio:(float)scaleRatio
{
    CGSize scaledSize = CGSizeMake(self.size.width * scaleRatio, self.size.height * scaleRatio);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    size_t bitmapBytesPerRow = (size_t)scaledSize.width * 4;

    // The output context: a plain Core Graphics bitmap context, which does not
    // depend on the UIKit graphics context stack.
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 scaledSize.width,
                                                 scaledSize.height,
                                                 8, // bits per component
                                                 bitmapBytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Percent (101%).
    #define SCALE_OVER_A_BIT 1.01

    // Scale, then draw the CGImage into the bitmap context.
    CGContextScaleCTM(context, scaleRatio * SCALE_OVER_A_BIT, scaleRatio * SCALE_OVER_A_BIT);
    CGContextDrawImage(context, CGRectMake(0, 0, self.size.width, self.size.height), self.CGImage);

    // Wrap the bitmap in a UIImage and clean up.
    CGImageRef scaledRef = CGBitmapContextCreateImage(context);
    UIImage *scaledImage = [UIImage imageWithCGImage:scaledRef];
    CGImageRelease(scaledRef);
    CGContextRelease(context);
    return scaledImage;
}
In short, you have to create the CGContextRef with CGBitmapContextCreate rather than UIGraphicsGetCurrentContext(), because UIGraphicsGetCurrentContext() is not thread-safe.
Hope this helps you... enjoy!
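For illustration, a minimal usage sketch on a background queue (original and path are assumed to exist in the caller; the 0.5 ratio and 0.8 JPEG quality are arbitrary):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Scale and save entirely off the main thread, using the
    // CGBitmapContextCreate-based scaleByRatio: above.
    UIImage *scaled = [original scaleByRatio:0.5f];
    NSData *jpegData = UIImageJPEGRepresentation(scaled, 0.8f);
    [jpegData writeToFile:path atomically:YES];
});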

Related

iOS - Where should I be releasing my CFImageRef?

I have a method which returns a rotated image:
- (CGImageRef)rotateImage:(CGImageRef)original degrees:(float)degrees {
    if (degrees == 0.0f) {
        return original;
    } else {
        double radians = degrees * M_PI / 180;
#if TARGET_OS_EMBEDDED || TARGET_IPHONE_SIMULATOR
        radians = -1 * radians;
#endif
        size_t _width = CGImageGetWidth(original);
        size_t _height = CGImageGetHeight(original);
        CGRect imgRect = CGRectMake(0, 0, _width, _height);
        CGAffineTransform _transform = CGAffineTransformMakeRotation(radians);
        CGRect rotatedRect = CGRectApplyAffineTransform(imgRect, _transform);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     rotatedRect.size.width,
                                                     rotatedRect.size.height,
                                                     CGImageGetBitsPerComponent(original),
                                                     0,
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGContextSetAllowsAntialiasing(context, FALSE);
        CGContextSetInterpolationQuality(context, kCGInterpolationNone);
        CGColorSpaceRelease(colorSpace);

        CGContextTranslateCTM(context,
                              +(rotatedRect.size.width / 2),
                              +(rotatedRect.size.height / 2));
        CGContextRotateCTM(context, radians);
        CGContextDrawImage(context, CGRectMake(-imgRect.size.width / 2,
                                               -imgRect.size.height / 2,
                                               imgRect.size.width,
                                               imgRect.size.height),
                           original);

        CGImageRef rotatedImage = CGBitmapContextCreateImage(context);
        CFRelease(context);
        return rotatedImage;
    }
}
Yet Instruments is telling me that the rotated image is not being released. I'm running this method a lot, so the memory builds up quickly. Should I be releasing it in the parent method that calls rotateImage:? Or should I release it before rotateImage: passes it back?
Thanks!
I would suggest releasing the CGImageRef in the parent method that calls rotateImage:.
Moreover, in that case you should follow the naming convention and rename rotateImage: to createRotateImage: for clarity.
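For example, a sketch of the calling side under that convention (createRotateImage: being the renamed method suggested above):
// The "create" prefix signals that the caller owns the returned CGImageRef.
CGImageRef rotated = [self createRotateImage:original degrees:90.0f];
UIImage *wrapped = [UIImage imageWithCGImage:rotated]; // use it as needed
CGImageRelease(rotated); // balance the create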

Getting an OpenGL image - runs in simulator, crashes on iPad

The purpose of this function is to return a UIImage from an OpenGL image. The reason it's being converted to a CGImage is so OpenGL and UIKit elements can be rendered on top of each other, which is taken care of in another function.
The strange thing is, when the app is run in the simulator, everything works fine. However, after testing the app on multiple different iPads, when the drawGlToImage method is called on self, the app crashes with an EXC_BAD_ACCESS (code=1) error. Does anyone know what I'm doing here that would cause this? I've read that UIGraphicsBeginImageContext() used to have thread-safety issues, but it seems that was fixed in iOS 4.
- (UIImage *)drawGlToImage
{
    self.context = [EAGLContext currentContext];
    [EAGLContext setCurrentContext:self.context];
    UIGraphicsBeginImageContext(self.view.frame.size);
    unsigned char buffer[1024 * 768 * 4];
    NSInteger dataSize = 1024 * 768 * 4;
    CGContextRef currentContext = UIGraphicsGetCurrentContext();
    UIGraphicsPushContext(currentContext);
    glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, &buffer);
    // Flip the image.
    GLubyte *flippedBuffer = (GLubyte *)malloc(dataSize);
    for (int y = 0; y < 768; y++)
    {
        for (int x = 0; x < 1024 * 4; x++)
        {
            if (buffer[y * 4 * 1024 + x] == 0)
                flippedBuffer[(767 - y) * 1024 * 4 + x] = 1;
            else
                flippedBuffer[(767 - y) * 1024 * 4 + x] = buffer[y * 4 * 1024 + x];
        }
    }
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, flippedBuffer, 1024 * 768 * 4, NULL);
    CGImageRef iref = CGImageCreate(1024, 768, 8, 32, 1024 * 4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast, ref, NULL, true, kCGRenderingIntentDefault);
    CGContextScaleCTM(currentContext, 1.0, -1.0);
    CGContextTranslateCTM(currentContext, 0, -self.view.frame.size.height);
    UIGraphicsPopContext();
    UIImage *image = [[UIImage alloc] initWithCGImage:iref];
    UIGraphicsEndImageContext();
    return image;
    free(flippedBuffer);      // note: everything after the return is never reached
    UIGraphicsPopContext();
}
When a button is pressed, a method that is called makes this assignment, which causes the app to crash.
UIImage *glImage = [self drawGlToImage];
I am not sure at which point you are calling this method, but before calling any OpenGL functions you need to make the right OpenGL context current. In the Xcode OpenGL template it is this line:
[EAGLContext setCurrentContext:self.context];
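A minimal sketch of the call site with that in place (assuming self.context holds the view's EAGLContext):
// Make the GL context current before drawGlToImage issues glReadPixels.
[EAGLContext setCurrentContext:self.context];
UIImage *snapshot = [self drawGlToImage];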
Here's the code used to solve it:
- (UIImage *)drawGlToImage {
    // Code borrowed and tweaked from:
    // http://stackoverflow.com/questions/9881143/missing-part-of-the-image-when-taking-screenshot-while-supporting-retina-display
    CGFloat scale = UIScreen.mainScreen.scale;
    CGFloat xOffset = 40.0f;
    CGFloat yOffset = -16.0f;
    CGSize size = CGSizeMake(self.chart.frame.size.width * scale,
                             self.chart.frame.size.height * scale);

    // Create a buffer for the pixels and read them out of OpenGL.
    GLuint bufferLength = size.width * size.height * 4;
    GLubyte *buffer = (GLubyte *)malloc(bufferLength);
    glReadPixels(0, 0, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Make a data provider with the raw data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);

    // Configure the image.
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * size.width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // Draw into a bitmap context, flipping vertically to convert from
    // OpenGL's bottom-left origin to Core Graphics' top-left rows.
    uint32_t *pixels = (uint32_t *)malloc(bufferLength);
    CGContextRef context = CGBitmapContextCreate(pixels, size.width, size.height, 8, size.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0.0f, size.height);
    CGContextScaleCTM(context, 1.0f, -1.0f);

    // These numbers are a little magical.
    CGContextDrawImage(context, CGRectMake(xOffset, yOffset, ((size.width - (6.0f * scale)) / scale) - (xOffset / 2), (size.height / scale) - (yOffset / 2)), iref);
    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:outputRef];

    // Clean up (the original leaked outputRef and colorSpaceRef).
    CGImageRelease(outputRef);
    CGDataProviderRelease(provider);
    CGImageRelease(iref);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(context);
    free(buffer);
    free(pixels);
    return outputImage;
}
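A possible usage sketch (an assumption: it must run right after the GL frame is rendered, while the framebuffer still holds the pixels to read):
// Snapshot the chart and hand the image to UIKit, e.g. to save it.
UIImage *chartImage = [self drawGlToImage];
UIImageWriteToSavedPhotosAlbum(chartImage, nil, nil, nil);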

Merging two UIImages faster than CGContextDrawImage

I'm merging two UIImages into one context. It works, but it performs pretty slowly and I need a faster solution. As it stands, the mergeImage:withImage: call takes about 400 ms on an iPad 1G.
Here's what I do:
- (CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGContextRef context = [ImageToolbox createARGBBitmapContextFromImageSize:CGSizeMake(size.width, size.height)];
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img1.CGImage);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img2.CGImage);
    return context;
}
And here are the static methods from the ImageToolbox class:
static CGRect screenRect;

+ (CGContextRef)createARGBBitmapContextFromImageSize:(CGSize)imageSize
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = imageSize.width;
    size_t pixelsHigh = imageSize.height;
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}

+ (CGSize)getScreenSize
{
    if (screenRect.size.width == 0 && screenRect.size.height == 0)
    {
        screenRect = [[UIScreen mainScreen] bounds];
    }
    return CGSizeMake(screenRect.size.height, screenRect.size.width - 20);
}
Any suggestions to increase the performance?
I would definitely recommend using Instruments to profile which message is taking the most time, so you can really break it down. Also, I have written a couple of methods which I think should do the same thing with a lot less code, though you may need everything written out the way you have it to keep things customizable. Here they are anyway:
- (CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    // Note: the returned context is only valid until UIGraphicsEndImageContext()
    // is called, so ending the image context is left to the caller.
    return context;
}
Or if you wanted the combined image right away:
- (UIImage *)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);
    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I have no idea whether they would be faster or not, but I really wouldn't know how to speed up what you have very easily without the Instruments profile breakdown.
In any case, I hope this helps.
I did not manage to find a faster way of merging the images, so I reduced the image sizes to make the operation faster.
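For reference, a minimal sketch of that size-reduction approach (img1 and img2 are assumed screen-sized; the 0.5 factor is arbitrary). Halving each dimension quarters the number of pixels CGContextDrawImage has to touch:
// Merge at half resolution: a quarter of the pixels, roughly a quarter of the work.
CGSize size = [ImageToolbox getScreenSize];
CGSize halfSize = CGSizeMake(size.width / 2.0f, size.height / 2.0f);
CGRect halfRect = CGRectMake(0, 0, halfSize.width, halfSize.height);
UIGraphicsBeginImageContextWithOptions(halfSize, YES, 1.0);
[img1 drawInRect:halfRect];
[img2 drawInRect:halfRect];
UIImage *merged = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();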

Reproduce Springboard's icon shine with WebKit

Anyone got any ideas on how to reproduce the gloss of iPhone app icons using WebKit and CSS3 and/or a transparent overlay image? Is this even possible?
iPhone OS uses the following images to compose the icon:
AppIconMask.png
AppIconShadow.png
AppIconOverlay.png (optional)
This is how I have implemented the above in my app, using only AppIconOverlay.png; it gets close to the desired effect.
It doesn't have the shadow, but I'm sure that if you really wanted it you could modify the code to suit your needs (see the sketch after the usage example below). As for AppIconMask.png, I couldn't see any need for it, since I use the QuartzCore framework (#import <QuartzCore/QuartzCore.h>) and get the rounded corners with layer.masksToBounds and layer.cornerRadius.
I hope this works for anyone who is interested in achieving the Apple Springboard overlay effect. Oh, and thank you to rpetrich for providing those images.
I apologise for the lack of comments in the code. It's a culmination of code from similar existing implementations scattered all over the internet, so I'd like to thank all of those people for providing the bits and pieces that are used here as well.
- (UIImage *)getIconOfSize:(CGSize)size icon:(UIImage *)iconImage withOverlay:(UIImage *)overlayImage {
    UIImage *icon = [self scaleImage:iconImage toResolution:size.width];
    CGRect iconBoundingBox = CGRectMake(0, 0, size.width, size.height);
    CGRect overlayBoundingBox = CGRectMake(0, 0, size.width, size.height);

    CGContextRef myBitmapContext = [self createBitmapContextOfSize:size];
    CGContextSetRGBFillColor(myBitmapContext, 1, 1, 1, 1);
    CGContextFillRect(myBitmapContext, iconBoundingBox);
    CGContextDrawImage(myBitmapContext, iconBoundingBox, icon.CGImage);
    CGContextDrawImage(myBitmapContext, overlayBoundingBox, overlayImage.CGImage);

    CGImageRef resultRef = CGBitmapContextCreateImage(myBitmapContext);
    UIImage *result = [UIImage imageWithCGImage:resultRef];
    CGImageRelease(resultRef);
    CGContextRelease(myBitmapContext);
    return result;
}

- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    CGContextSetAllowsAntialiasing(context, NO);
    CGColorSpaceRelease(colorSpace);
    return context;
}
- (UIImage *)scaleImage:(UIImage *)image toResolution:(int)resolution {
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    CGRect bounds = CGRectMake(0, 0, width, height);

    // If already at or below the target resolution, return the original image; otherwise scale.
    if (width <= resolution && height <= resolution) {
        return image;
    } else {
        CGFloat ratio = width / height;
        if (ratio > 1) {
            bounds.size.width = resolution;
            bounds.size.height = bounds.size.width / ratio;
        } else {
            bounds.size.height = resolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    UIGraphicsBeginImageContext(bounds.size);
    [image drawInRect:CGRectMake(0.0, 0.0, bounds.size.width, bounds.size.height)];
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
Usage:
UIImage *overlayImage = [UIImage imageNamed:@"AppIconOverlay.png"];
UIImage *profileImage = [Helper getIconOfSize:CGSizeMake(59, 60) icon:image withOverlay:overlayImage];
[profilePictureImageView setImage:profileImage];
profilePictureImageView.layer.masksToBounds = YES;
profilePictureImageView.layer.cornerRadius = 10.0;
profilePictureImageView.layer.borderColor = [[UIColor grayColor] CGColor];
profilePictureImageView.layer.borderWidth = 1.0;
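If you do want something like AppIconShadow.png, a hedged sketch is a CALayer shadow (the values below are guesses, not Apple's). Since masksToBounds clips everything outside the image view's bounds, the shadow has to live on a hypothetical wrapping container view rather than on the image view itself:
// Hypothetical containerView wraps profilePictureImageView so the shadow
// isn't clipped by the image view's masksToBounds.
containerView.layer.shadowColor = [[UIColor blackColor] CGColor];
containerView.layer.shadowOffset = CGSizeMake(0.0f, 1.0f);
containerView.layer.shadowOpacity = 0.5f;
containerView.layer.shadowRadius = 1.5f;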

Issue with Transparency

We have an issue with transparency. When we write an image into a context with a gradient, unwanted transparency is getting applied, and we are not sure why. We need the reflection to carry ONLY the gradient, not any extra transparency.
Attaching the snippet of the code for your reference.
- (UIImage *)ReflectImage:(CGFloat)refFract {
    int reflectionHeight = self.size.height * refFract;

    // Build a grayscale gradient image to use as the reflection mask.
    CGImageRef gradientMaskImage = NULL;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(nil, 1, reflectionHeight,
                                                               8, 0, colorSpace, kCGImageAlphaNone);
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);
    CGPoint gradientStartPoint = CGPointMake(0, reflectionHeight);
    CGPoint gradientEndPoint = CGPointZero;
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);
    CGContextSetGrayFillColor(gradientBitmapContext, 0.0, 0.5);
    CGContextFillRect(gradientBitmapContext, CGRectMake(0, 0, 1, reflectionHeight));
    gradientMaskImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // Mask the image with the gradient, then stack the original plus its reflection.
    CGImageRef reflectionImage = CGImageCreateWithMask(self.CGImage, gradientMaskImage);
    CGImageRelease(gradientMaskImage);
    CGSize size = CGSizeMake(self.size.width, self.size.height + reflectionHeight);
    UIGraphicsBeginImageContext(size);
    [self drawAtPoint:CGPointZero];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, CGRectMake(0, self.size.height, self.size.width, reflectionHeight), reflectionImage);
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CGImageRelease(reflectionImage);
    return result;
}
So, can someone please let me know why this happens? It would be of great help to get this issue resolved.
Thanks!!
I didn't try running any of this, but you do seem to be passing an alpha value to CGContextSetGrayFillColor.
Also, the use of device gray color spaces is generally discouraged. You might want to double-check that the color space you get back has the number of components you expect.
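If that alpha is indeed the culprit, the cheapest experiment (untested, in the same spirit as the guess above) is to let the gradient alone define the mask and drop the half-transparent fill that was composited over it:
// Build the mask from the gradient only; the 50%-alpha black fill
// (CGContextSetGrayFillColor / CGContextFillRect) has been removed.
CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient,
                            gradientStartPoint, gradientEndPoint,
                            kCGGradientDrawsAfterEndLocation);
CGGradientRelease(grayScaleGradient);
gradientMaskImage = CGBitmapContextCreateImage(gradientBitmapContext);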