I'm writing an iPhone app that takes PNG tilesets and displays segments of them on-screen, and I'm trying to get it to refresh the whole screen at 20fps. Currently I'm managing about 3-4fps on the simulator and 0.5-2fps on the device (an iPhone 3G), depending on how much stuff is on the screen.
I'm using Core Graphics at the moment, and I'm trying to find ways to avoid biting the bullet and refactoring in OpenGL. I've done a Shark time profile analysis on the code, and about 70-80% of everything that's going on boils down to a function called copyImageBlockSetPNG, which is being called from within CGContextDrawImage and itself calls all sorts of other functions with PNG in the name. Inflate is also in there, accounting for 37% of it.
The question is: I already loaded the image into memory as a UIImage, so why does the code still care that it was a PNG? Does it not decompress into a native uncompressed format on load? Can I convert it myself? The analysis implies that it's decompressing the image every time I draw a section from it, which ends up being 30 or more times a frame.
Solution
-(CGImageRef)inflate:(CGImageRef)compressedImage
{
    size_t width = CGImageGetWidth(compressedImage);
    size_t height = CGImageGetHeight(compressedImage);
    int bitmapBytesPerRow = (width * 4);

    // Use a known-good RGBA pixel format instead of copying the source
    // image's parameters, which crashed on some of my PNGs.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 width,
                                                 height,
                                                 8,
                                                 bitmapBytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Drawing forces the PNG to be decoded; the bitmap context holds raw pixels.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), compressedImage);
    CGImageRef result = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return result;
}
It's based on zneak's code (so he gets the big tick), but I've changed some of the parameters to CGBitmapContextCreate to stop it crashing when I feed it my PNG images.
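For completeness, usage looks something like this (a sketch; the file name and the per-frame drawing are placeholders):
// Decompress the tileset once at load time, not on every draw.
UIImage *tilesetImage = [UIImage imageNamed:@"tileset.png"];
CGImageRef tileset = [self inflate:tilesetImage.CGImage];
// ... each frame, draw sub-rectangles of `tileset` as before ...
// When the tileset is no longer needed:
CGImageRelease(tileset);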
To answer your last questions: your empirical evidence seems to prove that they're not kept decompressed once loaded.
To convert them into uncompressed data, you can draw them (once) into a CGBitmapContext and get a CGImage out of it; the result should be fully decompressed.
Off the top of my head, this should do it:
CGImageRef Inflate(CGImageRef compressedImage)
{
    size_t width = CGImageGetWidth(compressedImage);
    size_t height = CGImageGetHeight(compressedImage);
    CGContextRef context = CGBitmapContextCreate(
        NULL,
        width,
        height,
        CGImageGetBitsPerComponent(compressedImage),
        CGImageGetBytesPerRow(compressedImage),
        CGImageGetColorSpace(compressedImage),
        CGImageGetBitmapInfo(compressedImage)
    );
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), compressedImage);
    CGImageRef result = CGBitmapContextCreateImage(context);
    CFRelease(context);
    return result;
}
Don't forget to release the CGImage you get once you're done with it.
This question totally saved my day, thanks! I was having this problem, although I wasn't sure where the problem was: Speed up UIImage creation from SpriteSheet
I would like to add that there is another way to load the image already decompressed, without having to draw it into a context.
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                 forKey:(id)kCGImageSourceShouldCache];
NSData *imageData = [NSData dataWithContentsOfFile:@"path/to/image.png"];
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef atlasCGI = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
CFRelease(source);
I believe it is a little bit faster this way. Hope it helps!
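If you then need a UIImage, the hand-off is short (remember the CGImage comes from a Create function, so release it when done):
UIImage *atlas = [UIImage imageWithCGImage:atlasCGI];
CGImageRelease(atlasCGI);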
Related
I am trying to create a jigsaw puzzle and I need to mask the UIImages to obtain the puzzle pieces.
I don't understand how I can mask a JPG picture, because as I understand it, JPGs don't have an alpha channel. Can anyone help me with this?
The JPGs are on an online server and there is no way to download them as PNG.
And one more thing: I can't find this function anywhere in the Apple documentation:
"CopyImageAndAddAlphaChannel". Does it even exist? I found a few references on some forums, but nothing straightforward.
Thanks a lot,
Andrei
Found the answer. Here is the function; it works for JPGs and PNGs without an alpha channel (I have tested it). I've written it as a UIImage category method:
- (UIImage *)copyImageAndAddAlphaChannel
{
    CGImageRef imageRef = self.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);

    // A premultiplied-alpha context; drawing the JPG into it gives every
    // pixel an alpha component.
    CGContextRef offscreenContext = CGBitmapContextCreate(NULL,
                                                          width,
                                                          height,
                                                          8,
                                                          0,
                                                          CGImageGetColorSpace(imageRef),
                                                          kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), imageRef);
    CGImageRef imageRefWithAlpha = CGBitmapContextCreateImage(offscreenContext);
    UIImage *imageWithAlpha = [UIImage imageWithCGImage:imageRefWithAlpha];

    CGContextRelease(offscreenContext);
    CGImageRelease(imageRefWithAlpha);
    return imageWithAlpha;
}
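In case it helps, here's a rough sketch of how the result can then be masked into a puzzle piece (hypothetical names: maskRef is a CGImageRef mask you've created for the piece shape, jpegData is the downloaded JPG):
// Hypothetical usage: give the JPG an alpha channel, then mask it.
UIImage *photo = [[UIImage imageWithData:jpegData] copyImageAndAddAlphaChannel];
CGImageRef maskedRef = CGImageCreateWithMask(photo.CGImage, maskRef);
UIImage *piece = [UIImage imageWithCGImage:maskedRef];
CGImageRelease(maskedRef);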
Looking for a simple example or link to a tutorial.
Say I have a bunch of values stored in an array. I would like to create an image and update the image data from my array. Assume the array values are intensity data and will be updating a grayscale image. Assume the array values are between 0 and 255 -- or that I will convert it to that range.
This is not for purposes of animation. Rather the image would be updated based on user interaction. This is something I know how to do well in Java, but am very new to iPhone programming. I've googled some information about CGImage and UIImage -- but am confused as to where to start.
Any help would be appreciated.
I have sample code from one of my apps that takes data stored as an array of unsigned char and turns it into a UIImage:
// unsigned char *bitmap; // This is the bitmap data you already have.
// int width, height;     // bitmap length should equal width * height

// Create a bitmap context with the image data
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(bitmap, width, height, 8, width, colorspace, kCGImageAlphaNone);
CGImageRef cgImage = nil;
if (context != nil) {
    cgImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
}
CGColorSpaceRelease(colorspace);
// Release the cgImage when done
CGImageRelease(cgImage);
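To put that on screen, create the UIImage just before that final CGImageRelease; a sketch, where imageView is a hypothetical UIImageView you re-populate on each user interaction:
UIImage *uiImage = [UIImage imageWithCGImage:cgImage]; // the UIImage keeps its own reference
imageView.image = uiImage; // re-run the whole snippet whenever the array changes
CGImageRelease(cgImage);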
If your colorspace is RGB and you need to account for alpha, pass kCGImageAlphaPremultipliedLast as the last parameter to CGBitmapContextCreate.
Don't use kCGImageAlphaLast; it will not work, because bitmap contexts do not support alpha that isn't premultiplied.
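For instance, an RGBA version of the same context creation might look like this (a sketch; rgbaBitmap is assumed to hold width * height * 4 bytes):
CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef rgbContext = CGBitmapContextCreate(rgbaBitmap, width, height, 8, width * 4,
                                                rgbColorspace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(rgbColorspace);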
The books I referenced in this SO answer both contain sample code and demonstrations of image manipulations and updates via user interaction.
Like in this post:
iPhone - UIImage Leak, ObjectAlloc Building
I'm having a similar problem. The pointer from the malloc in create_bitmap_data_provider is never freed. I've verified that the associated image object is eventually released, just not the provider's allocation. Should I explicitly create a data provider and somehow manage its memory? That seems like a hack.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, blah blah blah);
CGColorSpaceRelease(colorSpace);
// ... draw into context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage * image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(context);
After fbrereto's answer below, I changed the code to this:
- (UIImage *)modifiedImage {
    CGSize size = CGSizeMake(width, height);
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw into context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image; // image retainCount = 1
}

// caller:
{
    UIImage *image = [self modifiedImage];
    _imageView.image = image; // image retainCount = 2
}
// after caller done, image retainCount = 1, autoreleased object lost its scope
Unfortunately, this still exhibits the same issue with a side effect of flipping the image horizontally. It appears to do the same thing with CGBitmapContextCreateImage internally.
I have verified my object's dealloc is called. The retainCounts on the _imageView.image and the _imageView are both 1 before I release the _imageView. This really doesn't make sense. Others seem to have this issue as well; I'm the last one to suspect the SDK, but could there be an iPhone SDK bug here?
It looks like the problem is with how the CGImage returned by CGBitmapContextCreateImage, and the UIImage wrapping it, are being managed. I too was having similar issues, with continually growing allocations and an eventual app crash. I addressed it by releasing the CGImageRef as soon as the UIImage has been created from it, and by using the class method imageWithCGImage, which also means you don't have to worry about autoreleasing your UIImage later on. After the changes, run your code in Instruments with Allocations, and you should not see any more perpetual memory consumption from malloc.
I typed this on a PC, so if you drop it right into Xcode you may have syntax issues; I apologize in advance. However, the principle is sound.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, blah blah blah);
CGColorSpaceRelease(colorSpace);
// ... draw into context
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
return image;
I had this problem and it drove me nuts for a few days. After much digging, I noticed a leak in CG Raster Data using Instruments.
The problem seems to lie inside Core Graphics. My problem was that when I was using CGBitmapContextCreateImage in a tight loop, it would, over a period of time, retain some images (800kb each), and these slowly leaked out.
After a few days of tracing with Instruments, I found a workaround: use the CGDataProviderCreateWithData method instead. The interesting thing was that the output was the same CGImageRef, but this time there would be no CG Raster Data used in VM by Core Graphics, and no leak. I'm assuming this is an internal problem, or we're misusing it.
Here is the code that saved me:
@autoreleasepool {
    CGImageRef cgImage;
    CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
    // DO SOMETHING HERE WITH IMAGE
    CGImageRelease(cgImage);
}
The key was using CGDataProviderCreateWithData in the method below.
static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size);

static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
    if (kCVPixelFormatType_32ARGB == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if (kCVPixelFormatType_32BGRA == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
    width = CVPixelBufferGetWidth(pixelBuffer);
    height = CVPixelBufferGetHeight(pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    sourceBaseAddr = CVPixelBufferGetBaseAddress(pixelBuffer);
    colorspace = CGColorSpaceCreateDeviceRGB();

    // The provider keeps the pixel buffer retained (and locked) until the
    // image is released; ReleaseCVPixelBuffer below undoes both.
    CVPixelBufferRetain(pixelBuffer);
    provider = CGDataProviderCreateWithData((void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer);
    image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

    if (err && image) {
        CGImageRelease(image);
        image = NULL;
    }
    if (provider) CGDataProviderRelease(provider);
    if (colorspace) CGColorSpaceRelease(colorspace);
    *imageOut = image;
    return err;
}

static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
Instead of manually creating your CGContextRef, I'd suggest you leverage UIGraphicsBeginImageContext, as demonstrated in this post. More details on that set of routines can be found here. I trust it'll help resolve this issue, or at the very least leave you with less memory to manage yourself.
UPDATE:
Given the new code, the retainCount of the UIImage as it comes out of the function will be 1, and assigning it to the imageView's image will cause it to bump to 2. At that point deallocating the imageView will leave the retainCount of the UIImage to be 1, resulting in a leak. It is important, then, after assigning the UIImage to the imageView, to release it. It may look a bit strange, but it will cause the retainCount to be properly set to 1.
You're not the only one with this problem. I've had major problems with CGBitmapContextCreateImage(). When you turn on Zombie mode, it even warns you that memory is released twice (when that's not the case). There's definitely a problem when mixing CG* stuff with UI* stuff. I'm still trying to figure out how to code around this issue.
Side note: calling UIGraphicsBeginImageContext is not thread-safe. Be careful.
This really helped me! Here's how I used it to fix that nasty leak problem:
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
CGImageRelease(cgImage);
image->imageRef = dataRef;
image->image = CFDataGetBytePtr(dataRef);
Notice that I had to store the CFDataRef (so that my ~Image function can call CFRelease(image->imageRef)). Hopefully this also helps others. JR
I'm new to iPhone app development, so it's likely that I'm doing something wrong.
Basically, I'm loading a bunch of images from the internet and then cropping them. I managed to find examples of loading images asynchronously and adding them to views. I did that by creating an image from NSData inside an NSOperation, which was added to an NSOperationQueue.
Then, because I had to make fixed-size thumbs, I needed a way to crop these images, so I found a script on the net which basically uses UIGraphicsBeginImageContext(), UIGraphicsGetImageFromCurrentImageContext() and UIGraphicsEndImageContext() to draw the cropped image, along with unimportant size calculations.
The thing is, the method works, but since it's generating around 20 of these images, it randomly crashes after a few of them are generated, or sometimes after I close and re-open the app one or two more times.
What should I do in these cases? I tried to make these methods run asynchronously as well, with NSOperations and an NSOperationQueue, but no luck.
If the crop code is more relevant than I think, here it is:
UIGraphicsBeginImageContext(CGSizeMake(50, 50));
CGRect thumbnailRect = CGRectZero;
// The origin is actually generated based on the sourceImage size.
thumbnailRect.origin = CGPointMake(0.0, 0.0);
thumbnailRect.size.width = 50;
thumbnailRect.size.height = 50;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Thanks!
The code to scale the images looks too simple.
Here is the one I am using. As you can see, there are no leaks; objects are released when no longer needed. Hope this helps.
// Draw the image into a pixelsWide x pixelsHigh bitmap and use that bitmap to
// create a new UIImage
- (UIImage *)createImage:(CGImageRef)image width:(int)pixelWidth height:(int)pixelHeight
{
    // Set the size of the output image
    CGRect aRect = CGRectMake(0.0f, 0.0f, pixelWidth, pixelHeight);

    // Create a bitmap context to store the new thumbnail
    CGContextRef context = MyCreateBitmapContext(pixelWidth, pixelHeight);

    // Clear the context and draw the image into the rectangle
    CGContextClearRect(context, aRect);
    CGContextDrawImage(context, aRect, image);

    // Return a UIImage populated with the new resized image
    CGImageRef myRef = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:myRef];
    free(CGBitmapContextGetData(context));
    CGContextRelease(context);
    CGImageRelease(myRef);
    return img;
}

// MyCreateBitmapContext: Source based on Apple Sample Code
CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free(bitmapData);
        CGColorSpaceRelease(colorSpace);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
Your app is crashing because the calls you're using (e.g., UIGraphicsBeginImageContext) manipulate UIKit's context stack, which you can only safely do from the main thread.
unforgiven's solution won't crash when used in a thread as it doesn't manipulate the context stack.
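If you'd rather keep your current UIGraphics*-based crop code, one option is to hop back onto the main thread for just that step; a sketch, where cropImage: is a hypothetical method wrapping your existing crop code:
// From inside your NSOperation: run the UIKit-context work on the main thread.
[self performSelectorOnMainThread:@selector(cropImage:)
                       withObject:sourceImage
                    waitUntilDone:YES];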
It does sound suspiciously like an out-of-memory crash. Fire up the Leaks tool and see your overall memory trends.
I have been struggling with this issue for quite some time now and couldn't find an answer so far. Basically, what I want to do is capture the content of my EAGLView and then merge it with other images. Anyway, the main problem is that everything transparent in my EAGLView renders opaque when I save it to the photo album or put it into a UIImageView. Let me share some code with you that I found somewhere else:
- (CGImageRef)glToUIImage {
    // Read the framebuffer back as RGBA bytes
    unsigned char buffer[320 * 480 * 4];
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, 320 * 480 * 4, NULL);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(320, 480, 8, 32, 320 * 4, colorSpace,
                                    kCGBitmapByteOrderDefault, ref, NULL, true,
                                    kCGRenderingIntentDefault);

    size_t width = CGImageGetWidth(iref);
    size_t height = CGImageGetHeight(iref);
    size_t length = width * height * 4;
    uint32_t *pixels = (uint32_t *)malloc(length);
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    // Flip vertically: OpenGL's origin is bottom-left, Core Graphics' is top-left
    CGContextTranslateCTM(context, 0.0, height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, CGRectMake(0.0, 0.0, width, height), iref);

    CGImageRef outputRef = CGBitmapContextCreateImage(context);
    UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
    UIImageWriteToSavedPhotosAlbum(outputImage, nil, nil, nil);

    // Clean up; outputRef holds its own copy of the pixels.
    CGImageRelease(iref);
    CGDataProviderRelease(ref);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(pixels);
    return outputRef;
}
As I already mentioned, this perfectly grabs the content of my EAGLView, but I cannot get the image with its alpha values.
Any help appreciated. Thanks!
Two places I can see that you might be losing your transparency:
1. When you're drawing your scene: does your scene have a transparent background? Make sure you're doing a glClear to something like (0, 0, 0, 0) rather than (0, 0, 0, 1).
2. When you're drawing the image to flip it over: what is the default background color here? It seems likely that it's a non-transparent black, and you'll end up with that wherever the transparent parts of your scene used to be.
You could check whether #2 is your problem by saving the image before you flip it over; if it is, you could avoid the flipping step entirely by flipping the memory in your pixels buffer directly rather than using Core Graphics calls.
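For reference, the in-memory flip is just a row swap; a rough sketch in C, assuming the same pixels buffer and dimensions as in your code:
// Flip the RGBA buffer vertically in place, one row at a time.
uint32_t *rowBuffer = (uint32_t *)malloc(width * sizeof(uint32_t));
for (size_t y = 0; y < height / 2; y++) {
    uint32_t *top = pixels + y * width;
    uint32_t *bottom = pixels + (height - 1 - y) * width;
    memcpy(rowBuffer, top, width * sizeof(uint32_t));
    memcpy(top, bottom, width * sizeof(uint32_t));
    memcpy(bottom, rowBuffer, width * sizeof(uint32_t));
}
free(rowBuffer);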