Another iPhone - CGBitmapContextCreateImage Leak

Like in this post:
iPhone - UIImage Leak, ObjectAlloc Building
I'm having a similar problem. The pointer from the malloc in create_bitmap_data_provider is never freed. I've verified that the associated image object is eventually released, just not the provider's allocation. Should I explicitly create a data provider and somehow manage its memory? Seems like a hack.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, blah blah blah);
CGColorSpaceRelease(colorSpace);
// ... draw into context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage * image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(context);
After fbrereto's answer below, I changed the code to this:
- (UIImage *)modifiedImage {
CGSize size = CGSizeMake(width, height);
UIGraphicsBeginImageContext(size);
CGContextRef context = UIGraphicsGetCurrentContext();
// draw into context
UIImage * image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image; // image retainCount = 1
}
// caller:
{
UIImage * image = [self modifiedImage];
_imageView.image = image; // image retainCount = 2
}
// after caller done, image retainCount = 1, autoreleased object lost its scope
Unfortunately, this still exhibits the same issue, with a side effect of flipping the image horizontally. It appears to use CGBitmapContextCreateImage internally.
I have verified my object's dealloc is called. The retainCounts of the _imageView.image and the _imageView are both 1 before I release the _imageView. This really doesn't make sense. Others seem to have this issue as well. I'm the last one to suspect the SDK, but could there be an iPhone SDK bug here?

It looks like the problem is in how the CGImage returned by CGBitmapContextCreateImage is being managed. I too was having similar issues, with continually growing allocations and an eventual app crash, and I addressed it by balancing that Create call with a CGImageRelease once the UIImage had been made. After the changes, run your code in Instruments with the Allocations instrument and you should no longer see perpetual memory consumption from malloc. As well, if you use the class method imageWithCGImage: you will not have to worry about autoreleasing your UIImage later on.
I typed this on a PC, so if you drop it right into Xcode you may hit a syntax issue; I apologize in advance. However, the principle is sound.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, blah blah blah);
CGColorSpaceRelease(colorSpace);
// ... draw into context
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage * image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
return image;

I had this problem and it drove me nuts for a few days, until after much digging I noticed a leak in CG Raster Data using Instruments.
The problem seems to lie inside Core Graphics. In my case, calling CGBitmapContextCreateImage in a tight loop would, over a period of time, retain some images (800 KB each), and these slowly leaked out.
After a few days of tracing with Instruments, I found a workaround: use the CGDataProviderCreateWithData method instead. The interesting thing is that the output is the same CGImageRef, but this time there is no CG Raster Data held in VM by Core Graphics and no leak. I'm assuming this is an internal problem, or we're misusing it.
Here is the code that saved me:
@autoreleasepool {
CGImageRef cgImage;
CreateCGImageFromCVPixelBuffer(pixelBuffer,&cgImage);
UIImage *image= [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
// DO SOMETHING HERE WITH IMAGE
CGImageRelease(cgImage);
}
The key was using CGDataProviderCreateWithData in the method below.
static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size); // forward declaration, defined below
static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
OSStatus err = noErr;
OSType sourcePixelFormat;
size_t width, height, sourceRowBytes;
void *sourceBaseAddr = NULL;
CGBitmapInfo bitmapInfo;
CGColorSpaceRef colorspace = NULL;
CGDataProviderRef provider = NULL;
CGImageRef image = NULL;
sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
else
return -95014; // only uncompressed pixel formats
sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
width = CVPixelBufferGetWidth( pixelBuffer );
height = CVPixelBufferGetHeight( pixelBuffer );
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );
colorspace = CGColorSpaceCreateDeviceRGB();
CVPixelBufferRetain( pixelBuffer );
provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer);
image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);
if ( err && image ) {
CGImageRelease( image );
image = NULL;
}
if ( provider ) CGDataProviderRelease( provider );
if ( colorspace ) CGColorSpaceRelease( colorspace );
*imageOut = image;
return err;
}
static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CVPixelBufferRelease( pixelBuffer );
}

Instead of manually creating your CGContextRef, I'd suggest you leverage UIGraphicsBeginImageContext as demonstrated in this post. More details on that set of routines can be found here. I trust it'll help resolve this issue, or at the very least leave you with less memory to manage yourself.
UPDATE:
Given the new code, the retainCount of the UIImage as it comes out of the function will be 1, and assigning it to the imageView's image will bump it to 2. At that point, deallocating the imageView will leave the retainCount of the UIImage at 1, resulting in a leak. It is important, then, to release the UIImage after assigning it to the imageView. It may look a bit strange, but it will cause the retainCount to be properly set to 1.

You're not the only one with this problem. I've had major problems with CGBitmapContextCreateImage(). When you turn on Zombie mode, it even warns you that memory is released twice (when it's not the case). There's definitely a problem when mixing CG* stuff with UI* stuff. I'm still trying to figure out how to code around this issue.
Side note: calling UIGraphicsBeginImageContext is not thread-safe. Be careful.

This really helped me! Here's how I used it to fix that nasty leak problem:
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
CGImageRelease(cgImage);
image->imageRef = dataRef;
image->image = CFDataGetBytePtr(dataRef);
Note that I had to store the CFDataRef (so I can CFRelease(image->imageRef) in my ~Image function). Hopefully this also helps others. JR

Related

Is UIImageJPEGRepresentation() thread safe?

I am scaling and cropping a UIImage and I want to be able to do it in a block that is thread safe. I could not find in the docs whether UIImageJPEGRepresentation is thread safe.
In the following code, I crop and scale a CGImage, then I create a UIImage from that and get the UIImageJPEGRepresentation. The end goal of this block is to get the NSData* from the scaled/cropped version.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
CGImageRef imageRef = photo.CGImage;
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
CGContextRef bitmap;
if (photo.imageOrientation == UIImageOrientationUp || photo.imageOrientation == UIImageOrientationDown) {
bitmap = CGBitmapContextCreate(NULL, kFINAL_WIDTH, kFINAL_HEIGHT, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
} else {
bitmap = CGBitmapContextCreate(NULL, kFINAL_HEIGHT, kFINAL_WIDTH, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
}
CGContextSetInterpolationQuality(bitmap, kCGInterpolationHigh);
CGContextDrawImage(bitmap, drawRect, imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
NSData *finalData = UIImageJPEGRepresentation([UIImage imageWithCGImage:ref], 1.0);
CGContextRelease(bitmap);
CGImageRelease(ref);
dispatch_async(dispatch_get_main_queue(), ^{
[self.delegate sendNSDataBack:finalData];
});
});
I tried getting the NSData using a CGDataProviderRef, but when I did finally get the NSData, putting it in a UIImage into a UIImageView displayed nothing.
So bottomline question is. Can I do [UIImage imageWithData:] and UIImageJPEGRepresentation in another thread in a block using GCD?
You can use UIImageJPEGRepresentation() in the background (I'm using it this way in a current project).
However, what you can't do is create a UIImage the way you are doing in the background: the [UIImage imageWithCGImage:] call must be done on the main thread (as a rule of thumb, all UIKit calls should be made on the main thread).
This seems like a case where you might need nested blocks.
Edit: I have since found that my own code does call [UIImage imageWithCGImage:] from a background thread, and it works, but I am still suspicious that it might cause issues in some cases.
Edit 2: I just noticed you are resizing the image (UIImage+Resize). There's a very nice category linked to in this post that has been built to do that in a robust way:
http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
You should really read that whole page to understand the nuances of resizing images. As I said, I do use that from a background thread even though part of what it does inside is what you were doing.
Edit3: If you are running on iOS4 or later, you may want to look into using the ImageIO framework to output images, which is more likely to be thread safe:
http://developer.apple.com/graphicsimaging/workingwithimageio.html
Example code for that is hard to find; here's a method that saves a PNG image using ImageIO (based on the code in "Programming with Quartz: 2D and PDF Graphics in Mac OS X"):
// You'll need both ImageIO and MobileCoreServices frameworks to have this compile
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>
void exportCGImageToPNGFileWithDestination( CGImageRef image, CFURLRef url)
{
float resolution = 144;
CFTypeRef keys[2];
CFTypeRef values[2];
CFDictionaryRef options = NULL;
// Create image destination to go into URL, using PNG
CGImageDestinationRef imageDestination = CGImageDestinationCreateWithURL( url, kUTTypePNG, 1, NULL);
if ( imageDestination == NULL )
{
fprintf( stderr, "Error creating image destination\n");
return;
}
// Set the keys to be the X and Y resolution of the image
keys[0] = kCGImagePropertyDPIWidth;
keys[1] = kCGImagePropertyDPIHeight;
// Create a number for the DPI value for the image
values[0] = CFNumberCreate( NULL, kCFNumberFloatType, &resolution );
values[1] = values[0];
// Options dictionary for output
options = CFDictionaryCreate(NULL,
(const void **)keys,
(const void **)values,
2,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFRelease(values[0]);
// Adding the image to the destination
CGImageDestinationAddImage( imageDestination, image, options );
CFRelease( options );
// Finalizing writes out the image to the destination
CGImageDestinationFinalize( imageDestination );
CFRelease( imageDestination );
}
Apple's official position is that no part of UIKit is thread-safe. However, the rest of your code appears to be Quartz-based, which is thread-safe when used in the manner you use it.
You can do everything on a background thread, then do the call to UIImageJPEGRepresentation() back on main:
// ...
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
dispatch_async(dispatch_get_main_queue(), ^ {
NSData *finalData = UIImageJPEGRepresentation([UIImage imageWithCGImage:ref], 1.0);
[self.delegate sendNSDataBack:finalData];
});
CGContextRelease(bitmap);
CGImageRelease(ref);
I think it is thread-safe, because I do similar things to resize a UIImage or store image data to a database on a background thread. The main thread is sometimes called the UI thread; anything that updates the screen should be executed on it. But a UIImage is just an object that stores image data. It is not subclassed from UIView, so it is thread-safe.

Does CGContextDrawImage decompress PNG on the fly?

I'm trying to write an iPhone app that takes PNG tilesets and displays segments of them on-screen, and I'm trying to get it to refresh the whole screen at 20fps. Currently I'm managing about 3 or 4fps on the simulator, and 0.5 - 2fps on the device (an iPhone 3G), depending on how much stuff is on the screen.
I'm using Core Graphics at the moment and currently trying to find ways to avoid biting the bullet and refactoring in OpenGL. I've done a Shark time profile analysis on the code and about 70-80% of everything that's going on is boiling down to a function called copyImageBlockSetPNG, which is being called from within CGContextDrawImage, which itself is calling all sorts of other functions with PNG in the name. Inflate is also in there, accounting for 37% of it.
Question is, I already loaded the image into memory from a UIImage, so why does the code still care that it was a PNG? Does it not decompress into a native uncompressed format on load? Can I convert it myself? The analysis implies that it's decompressing the image every time I draw a section from it, which ends up being 30 or more times a frame.
Solution
-(CGImageRef)inflate:(CGImageRef)compressedImage
{
size_t width = CGImageGetWidth(compressedImage);
size_t height = CGImageGetHeight(compressedImage);
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (width * 4);
bitmapByteCount = (bitmapBytesPerRow * height);
colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate (NULL,
width,
height,
8,
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease( colorSpace );
CGContextDrawImage(context, CGRectMake(0, 0, width, height), compressedImage);
CGImageRef result = CGBitmapContextCreateImage(context);
CFRelease(context);
return result;
}
It's based on zneak's code (so he gets the big tick) but I've changed some of the parameters to CGBitmapContextCreate to stop it crashing when I feed it my PNG images.
To answer your last questions: your empirical case seems to prove that they're not decompressed once loaded.
To convert them into uncompressed data, you can draw them (once) into a CGBitmapContext and get a CGImage out of it. That should be uncompressed enough.
Off the top of my head, this should do it:
CGImageRef Inflate(CGImageRef compressedImage)
{
size_t width = CGImageGetWidth(compressedImage);
size_t height = CGImageGetHeight(compressedImage);
CGContextRef context = CGBitmapContextCreate(
NULL,
width,
height,
CGImageGetBitsPerComponent(compressedImage),
CGImageGetBytesPerRow(compressedImage),
CGImageGetColorSpace(compressedImage),
CGImageGetBitmapInfo(compressedImage)
);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), compressedImage);
CGImageRef result = CGBitmapContextCreateImage(context);
CFRelease(context);
return result;
}
Don't forget to release the CGImage you get once you're done with it.
This question totally saved my day! Thanks! I was having this problem, although I wasn't sure where the problem was: Speed up UIImage creation from SpriteSheet
I would like to add that there is another way to load the image already decompressed, without having to draw into a context.
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
forKey:(id)kCGImageSourceShouldCache];
NSData *imageData = [NSData dataWithContentsOfFile:@"path/to/image.png"];
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)(imageData), NULL);
CGImageRef atlasCGI = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
CFRelease(source);
I believe it is a little bit faster this way. Hope it helps!

iPhone - CGBitmapContextCreateImage Leak, Anyone else with this problem?

Has anyone else come across this problem? I am resizing images pretty often with an NSTimer. Instruments does not show any memory leaks, but my ObjectAlloc just continues to climb, and it points directly to CGBitmapContextCreateImage.
Anyone know of a solution, or even possible ideas?
-(UIImage *) resizedImage:(UIImage *)inImage : (CGRect)thumbRect : (double)interpolationQuality
{
CGImageRef imageRef = [inImage CGImage];
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
if (alphaInfo == kCGImageAlphaNone)
alphaInfo = kCGImageAlphaNoneSkipLast;
// Build a bitmap context that's the size of the thumbRect
CGContextRef bitmap = CGBitmapContextCreate(
NULL,
thumbRect.size.width,
thumbRect.size.height,
CGImageGetBitsPerComponent(imageRef),
4 * thumbRect.size.width,
CGImageGetColorSpace(imageRef),
alphaInfo
);
// Draw into the context, this scales the image
CGContextSetInterpolationQuality(bitmap, interpolationQuality);
CGContextDrawImage(bitmap, thumbRect, imageRef);
// Get an image from the context and a UIImage
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* result = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap); // ok if NULL
CGImageRelease(ref);
return [result autorelease];
}
Should you be releasing imageRef?
CGImageRelease(imageRef);
Just a sanity check: are you releasing the returned UIImage? Normally I would expect a function that allocates a new object (in this case a UIImage) to have "create" in the name.
Perhaps you want
return [result autorelease]
?
Why not use the simpler UIGraphicsBeginImageContext?
@interface UIImage (ResizeExtension)
- (UIImage *)resizedImageWithSize:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)interpolationQuality;
@end
@implementation UIImage (ResizeExtension)
- (UIImage *)resizedImageWithSize:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)interpolationQuality
{
UIGraphicsBeginImageContext(newSize);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, interpolationQuality);
[self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
@end
Also, this will return an image retained by the current autorelease pool; if you are creating many of these images in a loop, allocate and drain an NSAutoreleasePool manually.
OK, the problem here is how the return from CGBitmapContextCreateImage is being managed: the CGImage itself is never getting released, which is why your allocations (I'm assuming malloc) are perpetually increasing. Try the code below. Also, there is no need to autorelease the result, since it is never alloc'd.
After you make the changes, run in Instruments with Allocations again; this time you will hopefully not see a continual increase in the live bytes.
I typed this on a PC, so there may be a syntax error if you drop it into Xcode; however, this should do the trick.
// Get an image from the context and a UIImage
CGImageRef cgImage = CGBitmapContextCreateImage(bitmap);
UIImage* result = [UIImage imageWithCGImage:cgImage];
CGContextRelease(bitmap); // ok if NULL
CGImageRelease(cgImage);
return result;
If you're using garbage collection, use CFMakeCollectable(posterFrame). If you're using traditional memory management, it's very straightforward:
return (CGImageRef)[(id)posterFrame autorelease];
You cast the CFTypeRef (in this case, a CGImageRef) to an Objective-C object pointer, send it the -autorelease message, and then cast the result back to CGImageRef. This pattern works for (almost) any type that's compatible with CFRetain() and CFRelease().

Multiple Image Operations Crash iPhone App

I'm new to the iPhone App development so it's likely that I'm doing something wrong.
Basically, I'm loading a bunch of images from the internet and then cropping them. I managed to find examples of loading images asynchronously and adding them into views, and I've done that by creating an image from NSData in an NSOperation added to an NSOperationQueue.
Then, because I had to make fixed-size thumbs, I needed a way to crop these images, so I found a script on the net which basically uses UIGraphicsBeginImageContext(), UIGraphicsGetImageFromCurrentImageContext() and UIGraphicsEndImageContext() to draw the cropped image, along with some unimportant size calculations.
The thing is, the method works, but since it's generating around 20 of these images, it randomly crashes after a few of them have been generated, or sometimes after I close and re-open the app once or twice.
What should I do in these cases? I tried to make these methods run asynchronously as well, with NSOperations and an NSOperationQueue, but no luck.
If the crop code is more relevant than I think, here it is:
UIGraphicsBeginImageContext(CGSizeMake(50, 50));
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = CGPointMake(0.0,0.0); //this is actually generated
// based on the sourceImage size
thumbnailRect.size.width = 50;
thumbnailRect.size.height = 50;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
Thanks!
The code you are using to scale the images looks too simple.
Here is the one I am using. As you can see, there are no leaks; objects are released when no longer needed. Hope this helps.
// Draw the image into a pixelsWide x pixelsHigh bitmap and use that bitmap to
// create a new UIImage
- (UIImage *) createImage: (CGImageRef) image width: (int) pixelWidth height: (int) pixelHeight
{
// Set the size of the output image
CGRect aRect = CGRectMake(0.0f, 0.0f, pixelWidth, pixelHeight);
// Create a bitmap context to store the new thumbnail
CGContextRef context = MyCreateBitmapContext(pixelWidth, pixelHeight);
// Clear the context and draw the image into the rectangle
CGContextClearRect(context, aRect);
CGContextDrawImage(context, aRect, image);
// Return a UIImage populated with the new resized image
CGImageRef myRef = CGBitmapContextCreateImage (context);
UIImage *img = [UIImage imageWithCGImage:myRef];
free(CGBitmapContextGetData(context));
CGContextRelease(context);
CGImageRelease(myRef);
return img;
}
// MyCreateBitmapContext: Source based on Apple Sample Code
CGContextRef MyCreateBitmapContext (int pixelsWide,
int pixelsHigh)
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8,
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
if (context== NULL)
{
free (bitmapData);
CGColorSpaceRelease( colorSpace );
fprintf (stderr, "Context not created!");
return NULL;
}
CGColorSpaceRelease( colorSpace );
return context;
}
Your app is crashing because the calls you're using (e.g., UIGraphicsBeginImageContext) manipulate UIKit's context stack which you can only safely do from the main thread.
unforgiven's solution won't crash when used in a thread as it doesn't manipulate the context stack.
It does sound suspiciously like an out-of-memory crash. Fire up the Leaks tool and watch your overall memory trends.

How to pull out the ARGB component from BitmapContext on iPhone?

I'm trying to get ARGB components from CGBitmapContext with the following codes:
-(id) initWithImage: (UIImage*) image //create BitmapContext with UIImage and use 'pixelData' as the pointer
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (image.size.width * 4);
bitmapByteCount = (bitmapBytesPerRow * image.size.height);
colorSpace = CGColorSpaceCreateDeviceRGB();
pixelData = malloc( bitmapByteCount ); //unsigned char* pixelData is defined in head file
context = CGBitmapContextCreate (pixelData,
image.size.width,
image.size.height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst
);
CGColorSpaceRelease( colorSpace );
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
pixelData = CGBitmapContextGetData(context);
return self;
}
-(float) alphaAtX:(int)x y:(int)y //get alpha component using the pointer 'pixelData'
{
return pixelData[(y *width + x) *4 + 3]; //+0 for red, +1 for green, +2 for blue, +3 for alpha
}
-(void)viewDidLoad {
[super viewDidLoad];
UIImage *img = [UIImage imageNamed:@"MacDrive.png"]; //load image
[self initWithImage:img]; //create BitmapContext with UIImage
float alpha = [self alphaAtX:20 y:20]; //store alpha component
}
When I try to store red/green/blue, they always turn out to be 240, and alpha is always 255.
So I think maybe something is wrong with the pointer: it does not return the correct ARGB data I want. Any ideas about what's wrong with the code?
First of all, you're leaking memory: the pixelData buffer never gets freed.
For your usage it's better to let the CGContext calls manage the memory. Simply pass NULL instead of pixelData, and as long as you keep the reference count up on your context, CGBitmapContextGetData(context) will remain valid.
You're using alpha as if it were the last entry (RGBA, not ARGB); either use it as such when creating your context or adapt the alphaAtX code. I'm also not sure you want premultiplied data; if it's just to check alpha values, you don't.
All in all something like:
[...]
CGContextRelease(context);
context = CGBitmapContextCreate (NULL,
image.size.width,
image.size.height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaLast
);
CGColorSpaceRelease( colorSpace );
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
pixelData = CGBitmapContextGetData(context);
[...]
with context being a member variable, initialized to NULL, will already be a step in the right direction.
First, I would check to see if the context returned from your CGBitmapContextCreate call is valid (that is, make sure context != NULL). I'd also check to make sure your originally loaded image is valid (so, make sure img != nil).
If your context is nil, it may be because your image size is unreasonable, or else your pixel data format is unavailable. Try using kCGImageAlphaPremultipliedLast instead?
Second, you don't need to use CGBitmapContextGetData - you can just use the pixelData pointer you passed in in the first place. You should also CGContextRelease your context before leaving your initWithImage: function to avoid a leak.
I also note that you're using kCGImageAlphaPremultipliedFirst. This means that the alpha component is first, not last, in your image, so you want an offset of 0 for the alpha component in your alphaAtX:y: function.
Does any of that help?