CoreGraphics image resize - iPhone

This code is from Apple's WWDC 2011 Session 318 - iOS Performance in Depth and uses CoreGraphics to create thumbnails from server-hosted images.
CGImageSourceRef src = CGImageSourceCreateWithURL(url);
NSDictionary *options = (CFDictionaryRef)[NSDictionary
    dictionaryWithObject:[NSNumber numberWithInt:1024]
    forKey:(id)kCGImageSourceThumbnailMaxPixelSize];
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(src, 0, options);
UIImage *image = [UIImage imageWithCGImage:thumbnail];
CGImageRelease(thumbnail);
CGImageSourceRelease(src);
But it doesn't work, and the docs don't really help. In the iOS docs, CGImageSource, CGImageSourceRef and CGImageSourceCreateThumbnailAtIndex are all listed as available
in Mac OS X v10.4 or later.
How can I get this to work?
EDIT
These are the compiler errors I'm getting:
Use of undeclared identifier 'CGImageSourceRef'
Use of undeclared identifier 'kCGImageSourceThumbnailMaxPixelSize'
Use of undeclared identifier 'src'
Implicit declaration of function 'CGImageSourceCreateThumbnailAtIndex' is invalid in C99
Implicit declaration of function 'CGImageSourceRelease' is invalid in C99
Implicit declaration of function 'CGImageSourceCreateWithURL' is invalid in C99

Schoolboy mistake:
I didn't add #import <ImageIO/ImageIO.h> (the ImageIO framework also has to be linked into the target).
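For reference, here is a version of that snippet that compiles once ImageIO is imported — a sketch, assuming url is an NSURL pointing at the image. A few things worth noting beyond the missing import: CGImageSourceCreateWithURL takes an options argument, the options dictionary has to be bridged to CFDictionaryRef at the call site, kCGImageSourceCreateThumbnailFromImageAlways is needed if the file has no embedded thumbnail, and there is no CGImageSourceRelease — a CGImageSourceRef is released with CFRelease.
#import <ImageIO/ImageIO.h>

// Sketch: build a thumbnail no larger than 1024px from a URL
CGImageSourceRef src = CGImageSourceCreateWithURL((CFURLRef)url, NULL);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
    (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
    [NSNumber numberWithInt:1024], (id)kCGImageSourceThumbnailMaxPixelSize,
    nil];
CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(src, 0, (CFDictionaryRef)options);
UIImage *image = [UIImage imageWithCGImage:thumbnail];
CGImageRelease(thumbnail);
CFRelease(src);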

Try this image-resize method:
- (UIImage *)resizedImage:(UIImage *)inImage thumbRect:(CGRect)thumbRect
{
    CGImageRef imageRef = [inImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate:
    // see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section.
    // Only 8-bit RGB images with an alpha of kCGImageAlphaNoneSkipFirst,
    // kCGImageAlphaNoneSkipLast, kCGImageAlphaPremultipliedFirst or
    // kCGImageAlphaPremultipliedLast, plus a few other oddball image kinds,
    // are supported. The images on input here are likely to be PNG or JPEG files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                   // width
        thumbRect.size.height,                  // height
        CGImageGetBitsPerComponent(imageRef),   // really needs to always be 8
        4 * thumbRect.size.width,               // row bytes
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get a CGImage from the context, then wrap it in a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap); // ok if NULL
    CGImageRelease(ref);
    return result;
}
I've been using it in my code for a while, but I can't remember its source.
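A hypothetical call site (sourceImage is an assumed name; keep the rect's origin at zero, since the context is sized from the rect but CGContextDrawImage draws at the rect's origin):
// Hypothetical usage: produce a 100x100 thumbnail
UIImage *thumb = [self resizedImage:sourceImage thumbRect:CGRectMake(0, 0, 100, 100)];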
Also try this:
resizing a UIImage without loading it entirely into memory?

Related

Screenshot of OpenGL ES content for Paint app

I'm working on a paint app for iPhone. In my code I'm using an imageView which contains an outline image, on top of which I put a CAEAGLLayer for filling colors into the outline image. Now I am taking a screenshot of the OpenGL ES (CAEAGLLayer) rendered content using this function:
- (UIImage *)snapshot:(UIView *)eaglview
{
    GLint backingWidth1, backingHeight1;

    // Bind the color renderbuffer used to render the OpenGL ES view.
    // If your application only creates a single color renderbuffer which is
    // already bound at this point, this call is redundant, but it is needed
    // if you're dealing with multiple renderbuffers.
    // Note: replace "viewRenderbuffer" with the actual name of the
    // renderbuffer object defined in your class.
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);

    NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data.
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to
    // ignore the alpha channel; otherwise, use kCGImageAlphaPremultipliedLast.
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS.
    // Create a graphics context with the target size measured in POINTS.
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to
        // take the scale into consideration.
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is
        // greater than 1.0.
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz
    // coordinate system; flip the CGImage by rendering it into the flipped
    // bitmap context. The size of the destination area is measured in POINTS.
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
I then combine this screenshot with the outline image using this function:
- (void)Combine:(UIImage *)Back
{
    UIImage *Front = backgroundImageView.image;

    //UIGraphicsBeginImageContext(Back.size);
    UIGraphicsBeginImageContext(CGSizeMake(640, 960));

    // Draw image1
    [Back drawInRect:CGRectMake(0, 0, Back.size.width * 2, Back.size.height * 2)];
    // Draw image2
    [Front drawInRect:CGRectMake(0, 0, Front.size.width * 2, Front.size.height * 2)];

    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);

    UIGraphicsEndImageContext();
}
and save this image to the photo album using this function:
- (void)captureToPhotoAlbum
{
    [self Combine:[self snapshot:self]];
    UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Success"
                                                    message:@"Image saved to Photo Album"
                                                   delegate:nil
                                          cancelButtonTitle:@"OK"
                                          otherButtonTitles:nil];
    [alert show];
    [alert release];
}
The above code works, but the image quality of the screenshot is poor: on the outlines of the brush there is a grayish fringe. I have uploaded a screenshot of my app, which is a combination of OpenGL ES content and a UIImage.
Is there any way to get a Retina-quality screenshot of the OpenGL ES / CAEAGLLayer content?
Thank you in advance!
I don't believe that resolution is your issue here. If you aren't seeing the grayish outlines on your drawing when it appears on the screen, odds are that you're observing a compression artifact in the saving process. Your image is probably being saved as a lower-quality JPEG image, where artifacts will appear on sharp edges, like the ones in your drawing.
To work around this, Ben Weiss's answer here provides the following code for forcing your image to be saved to the photo library as a PNG:
UIImage* im = [UIImage imageWithCGImage:myCGRef]; // make image from CGRef
NSData* imdata = UIImagePNGRepresentation ( im ); // get PNG representation
UIImage* im2 = [UIImage imageWithData:imdata]; // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(im2, nil, nil, nil); // save to photo album
While this is probably the easiest way to address your problem here, you could also try employing multisample antialiasing, as Apple describes in the "Using Multisampling to Improve Image Quality" section of the OpenGL ES Programming Guide for iOS. Depending on how fill-rate limited you are, MSAA might lead to a little bit of slowdown in your application.
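For illustration, here is a rough sketch of that multisample setup, using the OpenGL ES 1.1 / OES function names to match the snapshot code above. The names sampleFramebuffer, sampleColorRenderbuffer, viewFramebuffer and backingWidth/backingHeight are assumptions, and 4 samples is just an example value:
GLuint sampleFramebuffer, sampleColorRenderbuffer;

// Create an offscreen multisampled framebuffer to draw into
glGenFramebuffersOES(1, &sampleFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);
glGenRenderbuffersOES(1, &sampleColorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, sampleColorRenderbuffer);

// ... render the scene into sampleFramebuffer as usual, then resolve the
// samples into the on-screen framebuffer before presenting:
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
glResolveMultisampleFramebufferAPPLE();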
You're using kCGImageAlphaPremultipliedLast when you create the CG bitmap context. Although I can't see your OpenGL code, it seems unlikely to me that your OpenGL context is rendering premultiplied alpha. Unfortunately, IIRC, it's not possible to create a non-premultiplied CG bitmap context on iOS (it would be using kCGImageAlphaLast, but I think that'll just make the creation call fail), so you may need to premultiply the data by hand between getting it from OpenGL and making the CG context.
On the other hand, is there a reason your OpenGL context has an alpha channel? Could you just make it opaque white then use kCGImageAlphaNoneSkipLast?
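If you do keep the alpha channel, a minimal sketch of premultiplying the glReadPixels buffer in place before handing it to CGImageCreate, assuming the GL_RGBA / GL_UNSIGNED_BYTE layout and the data/dataLength variables from the snapshot: method above:
for (NSInteger i = 0; i < dataLength; i += 4) {
    GLubyte alpha = data[i + 3];
    // Scale each color component by alpha/255, rounding to nearest
    data[i + 0] = (GLubyte)((data[i + 0] * alpha + 127) / 255);
    data[i + 1] = (GLubyte)((data[i + 1] * alpha + 127) / 255);
    data[i + 2] = (GLubyte)((data[i + 2] * alpha + 127) / 255);
}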

iPhone - UIImage imageWithData returning nil

I need to create a UIImage from a byte array.
Here is how I create the byte array:
image = CGImageCreateWithImageInRect(aux.CGImage, imageRect);
context = CGBitmapContextCreate(data[i][j], TILE_WIDTH, TILE_HEIGHT,
                                bitsPerComponent, bitmapBytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
                                // also tried kCGImageAlphaNoneSkipFirst and kCGImageAlphaNone
CGContextDrawImage(context, CGRectMake(0, 0, TILE_WIDTH, TILE_HEIGHT), image);
data[i][j] = CGBitmapContextGetData(context);
The data variable is an unsigned char buffer (data[i][j] points at the pixel bytes for one tile).
And this is how I try to get the UIImage:
NSData *imgData = [NSData dataWithBytes:data[i][j] length:TILE_WIDTH * TILE_HEIGHT * numberOfComponents];
UIImage *img = [UIImage imageWithData: imgData];
The img (UIImage) remains nil.
OK, now this is the background: I am trying to create a pixelate application :). The images from the iPhone 4 camera are too big, so I split the image into smaller images. That way, when the touched area needs to be repainted for the pixelate effect to show, I only have to update a small UIImage. I had to do it like this because in earlier tests it seemed like updating a full-size UIImage was killing the CPU. Still, the smaller images are now around 80x100 pixels and the update is not as smooth as it could be; sometimes, if you move your finger too fast, it misses some spots :D. I think that using this method to create a UIImage from the byte array would be faster than this one:
CGImageRef cgImage = CGImageCreate(TILE_WIDTH, TILE_HEIGHT, bitsPerComponent,
                                   bitsPerPixel, bitmapBytesPerRow, colorSpace,
                                   kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                   dataProvider, NULL, false, kCGRenderingIntentDefault);
UIImage *imageToBeUpdated = [UIImage imageWithCGImage:cgImage];
Am I correct?
[UIImage imageWithData:data] parses data that is in a known image file format (e.g. JPEG, PNG, or GIF; the full list is in the documentation). You're passing it raw pixel data, which is not supported.
Try this instead of CGBitmapContextGetData to get the image out of the context:
CGImageRef imgRef = CGBitmapContextCreateImage(context);
UIImage *img = [UIImage imageWithCGImage:imgRef];
CGImageRelease(imgRef); // the UIImage retains its own reference

mask image via another image

Alright, what I am trying to do is: given an image that contains a "blank" circle, I want to take an existing image from the user's library and then mask it so that only a certain part of that image shows through the "blank" area.
I have tried a few masking snippets, but they all seem to work the other way around... any tips on how to tackle this?
Unfortunately you can't use CoreAnimation to do this (which would make it rather easy).
Looking at Apple's CoreAnimation documentation:
iOS Note: As a performance consideration, iOS does not support the mask property.
Therefore the next best way to do this is to use Quartz 2D (as answered here):
CGContextRef mainViewContentContext;
CGColorSpaceRef colorSpace;

colorSpace = CGColorSpaceCreateDeviceRGB();

// create a bitmap graphics context the size of the image
mainViewContentContext = CGBitmapContextCreate(NULL, targetSize.width, targetSize.height,
                                               8, 0, colorSpace, kCGImageAlphaPremultipliedLast);

// free the rgb colorspace
CGColorSpaceRelease(colorSpace);

if (mainViewContentContext == NULL)
    return NULL;

CGImageRef maskImage = [[UIImage imageNamed:@"mask.png"] CGImage];
CGContextClipToMask(mainViewContentContext, CGRectMake(0, 0, targetSize.width, targetSize.height), maskImage);
CGContextDrawImage(mainViewContentContext, CGRectMake(thumbnailPoint.x, thumbnailPoint.y, scaledWidth, scaledHeight), self.CGImage);

// Create a CGImageRef of the main view bitmap content, and then
// release that bitmap context
CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
CGContextRelease(mainViewContentContext);

// convert the finished resized image to a UIImage
UIImage *theImage = [UIImage imageWithCGImage:mainViewContentBitmapContext];

// image is retained by the property setting above, so we can
// release the original
CGImageRelease(mainViewContentBitmapContext);

// return the image
return theImage;
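For context, this snippet reads like the body of a UIImage category method: self.CGImage is the image being masked, and targetSize, thumbnailPoint, scaledWidth and scaledHeight are assumed to be computed by the surrounding method. A hypothetical call site (the method name and image names are made up):
// Hypothetical usage, assuming the code above is wrapped as
// - (UIImage *)maskedImageWithSize:(CGSize)targetSize; in a UIImage category
UIImage *photo = [UIImage imageNamed:@"userPhoto.png"]; // stand-in for the library image
UIImage *masked = [photo maskedImageWithSize:CGSizeMake(320, 320)];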

iPhone - CGBitmapContextCreateImage Leak, Anyone else with this problem?

Has anyone else come across this problem? I am resizing images pretty often with an NSTimer. Profiling with Instruments does not show any memory leaks, but my object allocations just continue to climb, and they point directly at CGBitmapContextCreateImage.
Anyone know of a solution, or even possible ideas?
- (UIImage *)resizedImage:(UIImage *)inImage thumbRect:(CGRect)thumbRect interpolationQuality:(CGInterpolationQuality)interpolationQuality
{
    CGImageRef imageRef = [inImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,
        thumbRect.size.height,
        CGImageGetBitsPerComponent(imageRef),
        4 * thumbRect.size.width,
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextSetInterpolationQuality(bitmap, interpolationQuality);
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get a CGImage from the context, then wrap it in a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap); // ok if NULL
    CGImageRelease(ref);
    return [result autorelease];
}
Should you be releasing imageRef?
CGImageRelease(imageRef);
Just a sanity check: are you releasing the returned UIImage? Normally I would expect a function that allocates a new object (in this case a UIImage) to have "create" in the name.
Perhaps you want
return [result autorelease];
?
Why not use the simpler UIGraphicsBeginImageContext?
@interface UIImage (ResizeExtension)
- (UIImage *)resizedImageWithSize:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)interpolationQuality;
@end

@implementation UIImage (ResizeExtension)
- (UIImage *)resizedImageWithSize:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)interpolationQuality
{
    UIGraphicsBeginImageContext(newSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, interpolationQuality);
    [self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
@end
Also, this will return an image retained by the current autorelease pool; if you are creating many of these images in a loop, allocate and drain an NSAutoreleasePool manually.
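For instance, a sketch of that manual-pool pattern under pre-ARC memory management (images and thumbs are assumed names):
NSMutableArray *thumbs = [NSMutableArray array];
for (UIImage *original in images) {
    // Drain a fresh pool each iteration so the autoreleased
    // intermediate images don't accumulate for the whole loop
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    UIImage *thumb = [original resizedImageWithSize:CGSizeMake(64, 64)
                               interpolationQuality:kCGInterpolationLow];
    [thumbs addObject:thumb]; // the array retains it past the pool drain
    [pool drain];
}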
OK, the problem here is with the CGImage returned from CGBitmapContextCreateImage. The reason your allocations (I'm assuming malloc) are perpetually increasing is that the CGImage itself is never getting released. Try the code below. Also, there is no need to autorelease the result, since it is never alloc'd.
After you make the changes, run it in Instruments with Allocations again; this time you will hopefully not see a continual increase in the live bytes.
I typed this on a PC, so there may be a syntax error if you drop it into Xcode; however, this should do the trick.
// Get a CGImage from the context, then wrap it in a UIImage
CGImageRef cgImage = CGBitmapContextCreateImage(bitmap);
UIImage *result = [UIImage imageWithCGImage:cgImage];
CGContextRelease(bitmap);  // ok if NULL
CGImageRelease(cgImage);   // release our reference; the UIImage keeps its own
return result;
If you're using garbage collection, use CFMakeCollectable(posterFrame). If you're using traditional memory management, it's very straightforward:
return (CGImageRef)[(id)posterFrame autorelease];
You cast the CFTypeRef (in this case, a CGImageRef) to an Objective-C object pointer, send it the -autorelease message, and then cast the result back to CGImageRef. This pattern works for (almost) any type that's compatible with CFRetain() and CFRelease().

Any code/library to scale down an UIImage?

Is there any code or library out there that can help me scale down an image? If you take a picture with the iPhone, it is something like 2000x1000 pixels, which is not very network-friendly. I want to scale it down to, say, 480x320. Any hints?
This is what I am using. It works well. I'll definitely be watching this question to see if anyone has anything better/faster. I just added the method below to a category on UIImage.
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
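One caveat: 2000x1000 and 480x320 have different aspect ratios, so passing 480x320 straight in will distort the photo. A small helper to compute an aspect-fit target size first (my own sketch, not part of the original category):
// Sketch: largest size that fits within maxSize while preserving aspect ratio
static CGSize sizeThatFits(CGSize imageSize, CGSize maxSize)
{
    CGFloat ratio = MIN(maxSize.width / imageSize.width,
                        maxSize.height / imageSize.height);
    return CGSizeMake(floorf(imageSize.width * ratio),
                      floorf(imageSize.height * ratio));
}

// Usage:
// CGSize target = sizeThatFits(photo.size, CGSizeMake(480, 320));
// UIImage *small = [UIImage imageWithImage:photo scaledToSize:target];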
See http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/ - this has a set of code you can download as well as some descriptions.
If speed is a worry, you can experiment with using CGContextSetInterpolationQuality to set a lower interpolation quality than the default.
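For example (a sketch; this line would go right after UIGraphicsBeginImageContext and before drawInRect: in the category method above):
// Trade quality for speed before drawing into the current context
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationLow);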
Please note, this is NOT my code. I did a little digging and found it here. I figured you'd have to drop into the CoreGraphics layer, but wasn't quite sure of the specifics. This should work. Just be careful about managing your memory.
// ==============================================================
// resizedImage
// ==============================================================
// Return a scaled-down copy of the image.
UIImage *resizedImage(UIImage *inImage, CGRect thumbRect)
{
    CGImageRef imageRef = [inImage CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    // There's a weirdness with kCGImageAlphaNone and CGBitmapContextCreate:
    // see "Supported Pixel Formats" in the Quartz 2D Programming Guide,
    // "Creating a Bitmap Graphics Context" section.
    // Only 8-bit RGB images with an alpha of kCGImageAlphaNoneSkipFirst,
    // kCGImageAlphaNoneSkipLast, kCGImageAlphaPremultipliedFirst or
    // kCGImageAlphaPremultipliedLast, plus a few other oddball image kinds,
    // are supported. The images on input here are likely to be PNG or JPEG files.
    if (alphaInfo == kCGImageAlphaNone)
        alphaInfo = kCGImageAlphaNoneSkipLast;

    // Build a bitmap context that's the size of the thumbRect
    CGContextRef bitmap = CGBitmapContextCreate(
        NULL,
        thumbRect.size.width,                   // width
        thumbRect.size.height,                  // height
        CGImageGetBitsPerComponent(imageRef),   // really needs to always be 8
        4 * thumbRect.size.width,               // row bytes
        CGImageGetColorSpace(imageRef),
        alphaInfo
    );

    // Draw into the context; this scales the image
    CGContextDrawImage(bitmap, thumbRect, imageRef);

    // Get a CGImage from the context, then wrap it in a UIImage
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap); // ok if NULL
    CGImageRelease(ref);
    return result;
}
Please see the solution I posted to this question. That question involves rotating an image 90 degrees instead of scaling it, but the premise is the same (it's just the matrix transformation that is different).
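The linked answer isn't reproduced here, but the premise might look something like this (a sketch, assuming a UIImage named image that you want rotated 90 degrees clockwise):
CGSize size = image.size;
// The rotated canvas swaps width and height
UIGraphicsBeginImageContext(CGSizeMake(size.height, size.width));
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Translate, then rotate, so the drawn image lands inside the new canvas
CGContextTranslateCTM(ctx, size.height, 0);
CGContextRotateCTM(ctx, M_PI_2);
[image drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *rotated = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();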