Merging two UIImages faster than CGContextDrawImage - iphone

I'm merging two UIImages into one context. It works, but it's pretty slow and I need a faster solution. As written, a call to mergeImage:withImage: takes about 400 ms on an iPad 1G.
Here's what I do:
-(CGContextRef)mergeImage:(UIImage*)img1 withImage:(UIImage*)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGContextRef context = [ImageToolbox createARGBBitmapContextFromImageSize:CGSizeMake(size.width, size.height)];
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img1.CGImage);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), img2.CGImage);
    return context;
}
And here are the static methods from the ImageToolbox class:
static CGRect screenRect;

+ (CGContextRef)createARGBBitmapContextFromImageSize:(CGSize)imageSize
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    size_t pixelsWide = imageSize.width;
    size_t pixelsHigh = imageSize.height;
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    CGColorSpaceRelease(colorSpace);
    return context;
}
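As an aside, bitmapBytesPerRow and bitmapByteCount are declared as int above; for very large bitmaps their product can overflow a 32-bit int, so size_t is the safer type. A minimal C sketch of the same arithmetic (the function names are mine, not from the original):

```c
#include <assert.h>
#include <stddef.h>

/* Bytes per row of a width-pixel ARGB bitmap (4 bytes per pixel),
 * computed in size_t to avoid 32-bit int overflow. */
static size_t argb_bytes_per_row(size_t pixels_wide) {
    return pixels_wide * 4;
}

/* Total buffer size for a width x height ARGB bitmap. */
static size_t argb_byte_count(size_t pixels_wide, size_t pixels_high) {
    return argb_bytes_per_row(pixels_wide) * pixels_high;
}
```

For a 1024x768 context this comes to 3 MB, which is one reason the merge is expensive: every call allocates, fills, and tears down a buffer of that size.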
+ (CGSize)getScreenSize
{
    if (screenRect.size.width == 0 && screenRect.size.height == 0)
    {
        screenRect = [[UIScreen mainScreen] bounds];
    }
    return CGSizeMake(screenRect.size.height, screenRect.size.width - 20);
}
Any suggestions to increase the performance?
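For context on the cost being measured: compositing img2 over img1 is, per pixel and per channel, plain source-over blending, which CGContextDrawImage performs on top of any scaling and colorspace conversion. A minimal C sketch of that per-channel operation, assuming premultiplied-alpha 8-bit components as in the context above:

```c
#include <assert.h>
#include <stdint.h>

/* Source-over blend of one premultiplied-alpha 8-bit channel:
 * result = src + dst * (1 - src_alpha), with rounding. */
static uint8_t blend_over(uint8_t src, uint8_t dst, uint8_t src_alpha) {
    return (uint8_t)(src + (dst * (255 - src_alpha) + 127) / 255);
}
```

At screen resolution this runs millions of times per merge, so the wins usually come from doing it less often (smaller bitmaps, cached results), not from tweaking the blend itself.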

I would definitely recommend using Instruments to profile which call is taking the most time, so you can really break it down. Also, I have written a couple of methods that I think do the same thing with a lot less code, though you may need everything written out the way you have it to keep things customizable. Here they are anyway:
-(CGContextRef)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);

    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    UIGraphicsEndImageContext();
    // Caution: UIGraphicsEndImageContext() pops and invalidates this
    // context, so the caller must not draw into the returned value;
    // prefer the UIImage-returning variant below.
    return context;
}
Or if you wanted the combined image right away:
- (UIImage *)mergeImage:(UIImage *)img1 withImage:(UIImage *)img2
{
    CGSize size = [ImageToolbox getScreenSize];
    CGRect rect = CGRectMake(0, 0, size.width, size.height);

    UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRenderingIntent(context, kCGRenderingIntentSaturation);
    [img1 drawInRect:rect];
    [img2 drawInRect:rect];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I have no idea whether they would be faster, but I really wouldn't know how to speed up what you have without an Instruments profile breakdown.
In any case, I hope this helps.

I did not manage to find a faster way of merging the images, so I reduced the image sizes to make the operation faster.

Related

iOS - Where should I be releasing my CGImageRef?

I have a method which returns a rotated image:
- (CGImageRef)rotateImage:(CGImageRef)original degrees:(float)degrees {
    if (degrees == 0.0f) {
        return original;
    } else {
        double radians = degrees * M_PI / 180;
#if TARGET_OS_EMBEDDED || TARGET_IPHONE_SIMULATOR
        radians = -1 * radians;
#endif
        size_t _width = CGImageGetWidth(original);
        size_t _height = CGImageGetHeight(original);
        CGRect imgRect = CGRectMake(0, 0, _width, _height);
        CGAffineTransform _transform = CGAffineTransformMakeRotation(radians);
        CGRect rotatedRect = CGRectApplyAffineTransform(imgRect, _transform);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     rotatedRect.size.width,
                                                     rotatedRect.size.height,
                                                     CGImageGetBitsPerComponent(original),
                                                     0,
                                                     colorSpace,
                                                     kCGImageAlphaPremultipliedFirst);
        CGContextSetAllowsAntialiasing(context, FALSE);
        CGContextSetInterpolationQuality(context, kCGInterpolationNone);
        CGColorSpaceRelease(colorSpace);

        CGContextTranslateCTM(context,
                              +(rotatedRect.size.width / 2),
                              +(rotatedRect.size.height / 2));
        CGContextRotateCTM(context, radians);
        CGContextDrawImage(context, CGRectMake(-imgRect.size.width / 2,
                                               -imgRect.size.height / 2,
                                               imgRect.size.width,
                                               imgRect.size.height),
                           original);

        CGImageRef rotatedImage = CGBitmapContextCreateImage(context);
        CFRelease(context);
        return rotatedImage;
    }
}
Yet Instruments is telling me that rotatedImage is not being released. I'm running this method a lot, so the memory builds up. Should I release it in the parent method that calls rotateImage:, or should I release it before rotateImage: passes it back?
Thanks!
I would suggest releasing the CGImageRef in the parent method that calls rotateImage:.
Moreover, in that case you should follow the Core Foundation naming convention and rename rotateImage: to createRotateImage: for clarity.

App crash due to received memory warning in CGContextDrawImage?

My app crashes after receiving a memory warning. When I run the app with Instruments, I get the following memory leak.
Here is the full code of the method:
- (UIImage *)resizeImage:(UIImage *)orignalImage newWidth:(int)width newHeight:(int)height bgColor:(UIColor *)bgColor
{
    CGRect newRect;
    if ((orignalImage.size.width > width) || (orignalImage.size.height > height))
    {
        float newWidth, newHeight;
        if (orignalImage.size.width > orignalImage.size.height)
        {
            newWidth = width;
            newHeight = (height * orignalImage.size.height) / orignalImage.size.width;
        }
        else if (orignalImage.size.height > orignalImage.size.width)
        {
            newWidth = (width * orignalImage.size.width) / orignalImage.size.height;
            newHeight = height;
        }
        else
        {
            newWidth = width;
            newHeight = height;
        }
        newRect.origin.x = (width - newWidth) / 2;
        newRect.origin.y = (height - newHeight) / 2;
        newRect.size.width = newWidth;
        newRect.size.height = newHeight;
    }
    else
    {
        newRect.origin.x = (width - (orignalImage.size.width)) / 2;
        newRect.origin.y = (height - (orignalImage.size.height)) / 2;
        newRect.size.width = orignalImage.size.width;
        newRect.size.height = orignalImage.size.height;
    }

    CGImageRef imageRef = [orignalImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    //if (alphaInfo == kCGImageAlphaNone)
    alphaInfo = kCGImageAlphaNoneSkipLast;
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextSetFillColorWithColor(bitmap, bgColor.CGColor);
    CGContextFillRect(bitmap, CGRectMake(0, 0, width, height));
    CGContextDrawImage(bitmap, newRect, imageRef);
    CGImageRef newImgRef = CGBitmapContextCreateImage(bitmap);
    UIImage *newImg = [UIImage imageWithCGImage:newImgRef];
    CGContextRelease(bitmap);
    CGImageRelease(newImgRef);
    return newImg;
}
Can someone suggest what the mistake in my code is? Any help will be appreciated. Thanks.
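One thing worth double-checking, separate from the memory question: the landscape branch above computes newHeight as (height * h) / w, which only preserves the aspect ratio when the target width equals the target height. The standard aspect-fit arithmetic, centered in the target box, looks like this C sketch (the names are mine, and the no-upscale cap mirrors the original's else branch):

```c
#include <assert.h>

/* Centered aspect-fit rect for a src_w x src_h image inside a
 * dst_w x dst_h box: scale = min(dst_w/src_w, dst_h/src_h). */
typedef struct { double x, y, w, h; } FitRect;

static FitRect aspect_fit(double src_w, double src_h,
                          double dst_w, double dst_h) {
    double sx = dst_w / src_w;
    double sy = dst_h / src_h;
    double s = sx < sy ? sx : sy;
    if (s > 1.0) s = 1.0;  /* never upscale, as in the original */
    FitRect r;
    r.w = src_w * s;
    r.h = src_h * s;
    r.x = (dst_w - r.w) / 2;
    r.y = (dst_h - r.h) / 2;
    return r;
}
```

Taking the minimum of the two per-axis scales guarantees both dimensions fit, whichever orientation the source image has.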

Received Memory Warning in DrawRect Method

I am developing an app that records the main screen, using the setNeedsDisplay method.
The problem is that it uses a lot of memory, even when I am not recording the screen.
I want to reduce the memory usage of the code below.
Any solution to this?
Thanks in advance.
- (void)drawRect:(CGRect)rect
{
    NSDate *start = [NSDate date];
    CGContextRef context = [self createBitmapContextOfSize:self.frame.size];
    //NSLog(@"context value %@", context);
    // not sure why this is necessary... image renders upside-down and mirrored
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
    CGContextConcatCTM(context, flipVertical);
    [self.layer renderInContext:context];
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *background = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    self.currentScreen = background;
    if (_recording) {
        float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
        [self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
    }
    float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
    delayRemaining = (1.0 / self.frameRate) - processingSeconds;
    // redraw at the specified framerate
    [self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
}
- (CGContextRef)createBitmapContextOfSize:(CGSize)size
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();

    if (bitmapData != NULL) {
        free(bitmapData);
    }
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaNoneSkipFirst);
    CGContextSetAllowsAntialiasing(context, NO);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
You create a context every frame but never release it. Try adding this at the end:
CGContextRelease(context);

Reproduce Springboard's icon shine with WebKit

Does anyone have ideas on how to reproduce the gloss of iPhone app icons using WebKit and CSS3 and/or a transparent overlay image? Is this even possible?
iPhone OS uses the following images to compose the icon:
AppIconMask.png
AppIconShadow.png
AppIconOverlay.png (optional)
This is how I have implemented the above using only AppIconOverlay.png in my app; I quite like the resulting effect.
It doesn't have the shadow, but if you really wanted it, you could modify the code to suit your needs. As for AppIconMask.png, I couldn't see any need for it, since I import the QuartzCore framework (#import <QuartzCore/QuartzCore.h>) and achieve the rounded-corner effect with layer.masksToBounds and layer.cornerRadius.
I hope this works for anyone interested in achieving the Apple Springboard overlay effect. And thank you to rpetrich for providing those images.
I apologise for the lack of comments in the code. It's a culmination of code from similar implementations scattered all over the internet, so I'd like to thank all of those people for providing the bits and pieces used here as well.
- (UIImage *)getIconOfSize:(CGSize)size icon:(UIImage *)iconImage withOverlay:(UIImage *)overlayImage {
    UIImage *icon = [self scaleImage:iconImage toResolution:size.width];
    CGRect iconBoundingBox = CGRectMake(0, 0, size.width, size.height);
    CGRect overlayBoundingBox = CGRectMake(0, 0, size.width, size.height);

    CGContextRef myBitmapContext = [self createBitmapContextOfSize:size];
    CGContextSetRGBFillColor(myBitmapContext, 1, 1, 1, 1);
    CGContextFillRect(myBitmapContext, iconBoundingBox);
    CGContextDrawImage(myBitmapContext, iconBoundingBox, icon.CGImage);
    CGContextDrawImage(myBitmapContext, overlayBoundingBox, overlayImage.CGImage);

    // Hold the CGImage in a variable so it can be released;
    // creating it inline in the message send would leak it.
    CGImageRef cgImage = CGBitmapContextCreateImage(myBitmapContext);
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(myBitmapContext);
    return result;
}
- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    CGContextSetAllowsAntialiasing(context, NO);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
- (UIImage *)scaleImage:(UIImage *)image toResolution:(int)resolution {
    CGFloat width = image.size.width;
    CGFloat height = image.size.height;
    CGRect bounds = CGRectMake(0, 0, width, height);

    // If already at the minimum resolution, return the original image; otherwise scale.
    if (width <= resolution && height <= resolution) {
        return image;
    } else {
        CGFloat ratio = width / height;
        if (ratio > 1) {
            bounds.size.width = resolution;
            bounds.size.height = bounds.size.width / ratio;
        } else {
            bounds.size.height = resolution;
            bounds.size.width = bounds.size.height * ratio;
        }
    }

    UIGraphicsBeginImageContext(bounds.size);
    [image drawInRect:CGRectMake(0.0, 0.0, bounds.size.width, bounds.size.height)];
    UIImage *imageCopy = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return imageCopy;
}
Usage:
UIImage *overlayImage = [UIImage imageNamed:@"AppIconOverlay.png"];
UIImage *profileImage = [Helper getIconOfSize:CGSizeMake(59, 60) icon:image withOverlay:overlayImage];
[profilePictureImageView setImage:profileImage];
profilePictureImageView.layer.masksToBounds = YES;
profilePictureImageView.layer.cornerRadius = 10.0;
profilePictureImageView.layer.borderColor = [[UIColor grayColor] CGColor];
profilePictureImageView.layer.borderWidth = 1.0;

Drawing an offscreen coloured square returns a greyscale image

I asked this question earlier and am now exploring the idea of making an offscreen image.
Something is going wrong, I think maybe with colour space? I've boiled the code down until I can demonstrate the problem with just a few lines.
I have a view with an image view (called iv) and a button which, when pushed, calls push:
- (UIImage *)block:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColor(context, CGColorGetComponents([UIColor redColor].CGColor));
    CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height));
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

- (IBAction)push {
    self.iv.image = [self block:CGSizeMake(50.0, 50.0)];
}
The issue is that instead of drawing a 50x50 block in the specified colour, it draws it in a shade of grey, which leads me to think it's a colour-space error.
I tried the same thing with a bitmap context, using:
- (CGContextRef)createBitmapContextOfSize:(CGSize)size {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * size.height);
    colorSpace = CGColorSpaceCreateDeviceRGB();

    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL) {
        fprintf(stderr, "Memory not allocated!");
        return NULL;
    }

    context = CGBitmapContextCreate(bitmapData,
                                    size.width,
                                    size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    //CGContextSetAllowsAntialiasing(context, NO);
    if (context == NULL) {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}

- (UIImage *)block:(CGSize)size {
    UIGraphicsBeginImageContext(size);
    CGContextRef context = [self createBitmapContextOfSize:size];
    CGContextSetFillColor(context, CGColorGetComponents([UIColor blueColor].CGColor));
    CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height));
    UIImage *result = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    CGContextRelease(context);
    UIGraphicsEndImageContext();
    return result;
}
with identical results: grey boxes, not (in this case) a blue box.
I'm sure that if I can get this to work, everything else will follow.
If you have a CGColor object, just use the CGContextSetFillColorWithColor function.
Found the answer
CGContextSetFillColor(context, CGColorGetComponents([UIColor redColor].CGColor));
fails in the above scenario (but works when used in drawRect!)
CGContextSetRGBFillColor(context, 1, 0, 0, 1);
works correctly.
I think this shows a whole area of ignorance for me. If someone could explain or point me at the correct documentation I would be very grateful.
This is happening because CGContextSetFillColor() has no effect unless CGContextSetFillColorSpace() has been called first at some point in the context's lifetime.
Both CGContextSetFillColorWithColor() and CGContextSetRGBFillColor() also set the fill colorspace, which is why those work.
As Peter Hosey says, the best thing to do in your example is CGContextSetFillColorWithColor().