I'm trying to get the ARGB components of each pixel from a CGBitmapContext with the following code:
-(id) initWithImage:(UIImage *)image //create BitmapContext with UIImage and use 'pixelData' as the pointer
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (image.size.width * 4);
    bitmapByteCount = (bitmapBytesPerRow * image.size.height);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    pixelData = malloc(bitmapByteCount); //unsigned char *pixelData is declared in the header file
    context = CGBitmapContextCreate(pixelData,
                                    image.size.width,
                                    image.size.height,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
    pixelData = CGBitmapContextGetData(context);
    return self;
}
-(float) alphaAtX:(int)x y:(int)y //get alpha component using the pointer 'pixelData'
{
    return pixelData[(y * width + x) * 4 + 3]; //+0 for red, +1 for green, +2 for blue, +3 for alpha
}
-(void)viewDidLoad {
    [super viewDidLoad];
    UIImage *img = [UIImage imageNamed:@"MacDrive.png"]; //load image
    [self initWithImage:img]; //create BitmapContext with UIImage
    float alpha = [self alphaAtX:20 y:20]; //store alpha component
}
When I try to read the red/green/blue components, they always turn out to be 240, and alpha is always 255.
So I think something is wrong with the pointer: it doesn't return the correct ARGB data. Any ideas about what's wrong with the code?
First of all, you're leaking memory: the pixelData buffer is never freed.
For your usage, it's better to let the CGContext calls manage the memory. Simply pass NULL instead of pixelData, and as long as you keep the reference count up on your context, the result of CGBitmapContextGetData(context) will remain valid.
You're using alpha as if it were the last component (RGBA, not ARGB); either create your context that way or adapt the alphaAtX code. I'm not sure you want premultiplied alpha; if it's just to check alpha values, you don't, but note that bitmap contexts only support premultiplied formats, and premultiplication doesn't change the alpha byte itself.
All in all something like:
[...]
CGContextRelease(context);
context = CGBitmapContextCreate(NULL,
                                image.size.width,
                                image.size.height,
                                8, // bits per component
                                bitmapBytesPerRow,
                                colorSpace,
                                kCGImageAlphaPremultipliedLast); // bitmap contexts only support premultiplied alpha
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
pixelData = CGBitmapContextGetData(context);
[...]
With context as a member variable, initialized to NULL, this is already a step in the right direction.
First, I would check that the context returned from your CGBitmapContextCreate call is valid (that is, make sure context != NULL). I'd also check that your originally loaded image is valid (make sure img != nil).
If your context is NULL, it may be because your image size is unreasonable or your pixel format is unsupported. Try using kCGImageAlphaPremultipliedLast instead?
Second, you don't need to use CGBitmapContextGetData - you can just use the pixelData pointer you passed in in the first place. You should also CGContextRelease your context before leaving your initWithImage: function to avoid a leak.
I also note that you're using kCGImageAlphaPremultipliedFirst. This means that the alpha component is first, not last, in your image, so you want an offset of 0 for the alpha component in your alphaAtX:y: function.
Does any of that help?
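Putting the two answers together, a corrected version might look something like this. This is just a sketch, not necessarily the poster's final code: it assumes pixelData (unsigned char *), width (int) and context (CGContextRef) are instance variables, lets CoreGraphics own the buffer by passing NULL, and keeps the ARGB layout so alpha is the first byte:
-(id) initWithImage:(UIImage *)image
{
    if ((self = [super init])) {
        width = image.size.width;
        size_t bytesPerRow = width * 4;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        // Pass NULL so CoreGraphics allocates (and owns) the pixel buffer;
        // it stays valid for as long as the context is retained.
        context = CGBitmapContextCreate(NULL,
                                        width,
                                        image.size.height,
                                        8, // bits per component
                                        bytesPerRow,
                                        colorSpace,
                                        kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
        pixelData = CGBitmapContextGetData(context);
    }
    return self;
}

-(float) alphaAtX:(int)x y:(int)y
{
    // kCGImageAlphaPremultipliedFirst means ARGB, so alpha is at offset 0
    return pixelData[(y * width + x) * 4 + 0];
}

-(void) dealloc
{
    CGContextRelease(context); // also frees the pixel buffer CoreGraphics allocated
    [super dealloc];
}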
Related
I currently have some image data in a C array containing RGBA data:
float array[length][4]
I'm trying to get this into a UIImage, which, it looks like, can be initialized from files, NSData, or URLs. Since the other two methods are slow, I'm most interested in the NSData approach.
I can get all of these values into an NSArray like so:
for (i = 0; i < image.size.width * image.size.height; i++) {
    replace = [UIColor colorWithRed:array[i][0] green:array[i][1] blue:array[i][2] alpha:array[i][3]];
    [output replaceObjectAtIndex:i withObject:replace];
}
So I have an NSArray full of UIColor objects. I've tried many approaches, but how do I convert this to a UIImage?
I'd think it would be straightforward. A function sort of like imageWithData:data R:0 B:1 G:2 A:3 length:length width:width height:height would be nice, but as far as I can tell there is no such function.
imageWithData: is meant for image data in a standard image file format, e.g. a PNG or JPEG file that you have in memory. It's not suitable for creating images from raw data.
For that, you would typically create a bitmap graphics context, passing your array, pixel format, size, etc. to the CGBitmapContextCreate function. When you've created a bitmap context, you can create an image from it using CGBitmapContextCreateImage, which gives you a CGImageRef that you can pass to the UIImage method imageWithCGImage:.
Here's a basic example that creates a tiny 2×1 pixel image with one red pixel and one green pixel. It just uses hard-coded pixel values that are meant to show the order of the color components; normally, you would get this data from somewhere else, of course:
size_t width = 2;
size_t height = 1;
size_t bytesPerPixel = 4;
//4 bytes per pixel (R, G, B, A) = 8 bytes for a 1x2 pixel image:
unsigned char rawData[8] = {255, 0, 0, 255, //red
0, 255, 0, 255}; //green
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
size_t bytesPerRow = bytesPerPixel * width;
size_t bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
//This is your image:
UIImage *image = [UIImage imageWithCGImage:cgImage];
//Don't forget to clean up:
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
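If your source data is floats between 0.0 and 1.0, as in the question's float array[length][4], a possible conversion step (a sketch, assuming length == width * height) would be:
// Sketch: pack float RGBA components in [0.0, 1.0] into the 8-bit buffer
// that CGBitmapContextCreate expects. 'array', 'width' and 'height' are
// assumed from the question.
unsigned char *rawData = malloc(width * height * 4);
for (size_t i = 0; i < width * height; i++) {
    for (size_t c = 0; c < 4; c++) {
        rawData[i * 4 + c] = (unsigned char)(array[i][c] * 255.0f);
    }
}
// ...pass rawData to CGBitmapContextCreate as above, and free(rawData)
// only after releasing the context.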
I have managed to use Apple's reflection sample app to create a reflection from a UIImageView.
The problem is that when I change the picture inside the UIImageView, the reflection from the previous displayed picture remains on the screen. The new reflection on the next picture then overlaps the previous reflection.
How do I ensure that the previous reflection is removed when I change to the next picture?
Thank you so much. I hope my question is not too basic.
Here is the code which I have used so far:
// reflection
self.view.autoresizesSubviews = YES;
self.view.userInteractionEnabled = YES;
// create the reflection view
CGRect reflectionRect = currentView.frame;
// the reflection is a fraction of the size of the view being reflected
reflectionRect.size.height = reflectionRect.size.height * kDefaultReflectionFraction;
// and is offset to be at the bottom of the view being reflected
reflectionRect = CGRectOffset(reflectionRect, 0, currentView.frame.size.height);
reflectionView = [[UIImageView alloc] initWithFrame:reflectionRect];
// determine the size of the reflection to create
NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
// create the reflection image, assign it to the UIImageView and add the image view to the containerView
reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
reflectionView.alpha = kDefaultReflectionOpacity;
[self.view addSubview:reflectionView];
Then the code below is used to form the reflection:
CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
{
    CGImageRef theCGImage = NULL;

    // gradient is always black-white and the mask must be in the gray colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // create the bitmap context
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,
                                                               8, 0, colorSpace, kCGImageAlphaNone);

    // define the start and end grayscale values (with the alpha, even though
    // our bitmap context doesn't support alpha the gradient requires it)
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};

    // create the CGGradient and then release the gray color space
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);

    // create the start and end points for the gradient vector (straight down)
    CGPoint gradientStartPoint = CGPointZero;
    CGPoint gradientEndPoint = CGPointMake(0, pixelsHigh);

    // draw the gradient into the gray bitmap context
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);

    // convert the context into a CGImageRef and release the context
    theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // return the imageref containing the gradient
    return theCGImage;
}
CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8,
                                                       0, colorSpace,
                                                       // this will give us an optimal BGRA format for the device:
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);
    return bitmapContext;
}
- (UIImage *)reflectedImage:(UIImageView *)fromImage withHeight:(NSUInteger)height
{
    if (!height) return nil;

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(fromImage.bounds.size.width, height);

    // offset the context -
    // This is necessary because, by default, the layer created by a view for caching its content is flipped.
    // But when you actually access the layer content and have it rendered, it is inverted. Since we're only creating
    // a context the size of our reflection view (a fraction of the size of the main view), we have to translate the
    // context by the delta in size, and render it.
    CGFloat translateVertical = fromImage.bounds.size.height - height;
    CGContextTranslateCTM(mainViewContentContext, 0, -translateVertical);

    // render the layer into the bitmap context
    CALayer *layer = fromImage.layer;
    [layer renderInContext:mainViewContentContext];

    // create a CGImageRef of the main view bitmap content, and then release that bitmap context
    CGImageRef mainViewContentBitmapContext = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // create a grayscale CGImage containing a gradient that will be used for masking the
    // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
    // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
    CGImageRef gradientMaskImage = CreateGradientImage(1, height);

    // create an image by masking the bitmap of the mainView content with the gradient,
    // then release the pre-masked content bitmap and the gradient bitmap
    CGImageRef reflectionImage = CGImageCreateWithMask(mainViewContentBitmapContext, gradientMaskImage);
    CGImageRelease(mainViewContentBitmapContext);
    CGImageRelease(gradientMaskImage);

    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];

    // image is retained by the property setting above, so we can release the original
    CGImageRelease(reflectionImage);

    return theImage;
}
If you don't want to use IB, just add
reflectionView.image = nil;
before
reflectionView.image = [self reflectedImage:...
and don't forget this line:
if (currentView.image == nil) reflectionView.image = nil;
or else you'll end up with an old reflection after the image has disappeared.
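Putting that together, the image-swap path could look something like this (a sketch using the reflectionView, currentView and kDefaultReflectionFraction names from the question):
// Sketch: when swapping pictures, drop the stale reflection first,
// then regenerate it for the new image (if there is one).
currentView.image = newPicture; // 'newPicture' is a hypothetical name
reflectionView.image = nil;
if (currentView.image != nil) {
    NSUInteger reflectionHeight = currentView.bounds.size.height * kDefaultReflectionFraction;
    reflectionView.image = [self reflectedImage:currentView withHeight:reflectionHeight];
}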
Finally I chose to devote some time to finding a way to mask text inside a UITextView/UIWebView. What I'm able to do so far is:
- add some custom background
- add a UITextView/UIWebView with some text
- add a UIImageView (with a covering PNG) or a CAGradientLayer to create a simple mask effect (*)
Of course this is not a magic bullet, and it requires at least one extra layer (the one marked with *).
Furthermore, it doesn't work well when you have a fully transparent background, because everyone can recognize the extra view/layer used to fade out the text.
I've searched all over Google but still haven't found a good solution (I've found plenty about masking an image, and so on).
Any tips?
Thanks in advance,
marcio
PS: maybe a screenshot will make it clearer, so here you are:
http://grab.by/KzS
Yes! I finally got it. I don't know if it's the way Apple does it (maybe they're able to use some private APIs), but it works. Anyway, this is a rough pseudo-algorithm for how I got it working:
1) get a screenshot of the window
2) crop the desired rect with CGImageCreateWithImageInRect
3) apply a gradient mask (stolen from Apple' sample code on Reflections)
4) create a UIImageView with the freshly created image
I also noted that it doesn't hurt performance, even on the slowest devices.
Hope it will be helpful!
And this is a crop of the result (link text)
I've promised myself to implement a category just to make it cleaner; for now the code is spread across different classes.
Just as a sample (only landscape orientation and only a top mask are supported; see the transform below), here is how I overrode didMoveToWindow of the table that needs to be masked:
- (void)didMoveToWindow {
    if (self.window) {
        UIImageView *reflected = (UIImageView *)[self.superview viewWithTag:TABLE_SHADOW_TOP];
        if (!reflected) {
            UIImage *image = [UIImage screenshot:self.window];

            CGRect croppedRect = CGRectMake(480 - self.frame.size.height, self.frame.origin.x, 16, self.frame.size.width);
            CGImageRef cropImage = CGImageCreateWithImageInRect(image.CGImage, croppedRect);
            UIImage *reflectedImage = [UIImage imageMaskedWithGradient:cropImage];
            CGImageRelease(cropImage);

            reflected = [[UIImageView alloc] initWithImage:reflectedImage];
            reflected.transform = CGAffineTransformMakeRotation(-(M_PI / 2));
            reflected.tag = TABLE_SHADOW_TOP;
            CGRect adjusted = reflected.frame;
            adjusted.origin = self.frame.origin;
            reflected.frame = adjusted;
            [self.superview addSubview:reflected];
            [reflected release];
        }
    }
}
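The screenshot: category method used above isn't shown; a minimal implementation might look like the following (an assumption on my part, using the pre-iOS 7 renderInContext: approach, which requires QuartzCore):
// Hypothetical sketch of the +screenshot: helper referenced above:
// render the view's layer into an image context and return the result.
+ (UIImage *)screenshot:(UIView *)view {
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}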
And this is the UIImage category:
CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
{
    CGImageRef theCGImage = NULL;

    // gradient is always black-white and the mask must be in the gray colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // create the bitmap context
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,
                                                               8, 0, colorSpace, kCGImageAlphaNone);

    // define the start and end grayscale values (with the alpha, even though
    // our bitmap context doesn't support alpha the gradient requires it)
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};

    // create the CGGradient and then release the gray color space
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);

    // create the start and end points for the gradient vector (horizontal in this variant)
    CGPoint gradientStartPoint = CGPointZero;
    // CGPoint gradientStartPoint = CGPointMake(0, pixelsHigh);
    CGPoint gradientEndPoint = CGPointMake(pixelsWide / 1.75, 0);

    // draw the gradient into the gray bitmap context
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);

    // convert the context into a CGImageRef and release the context
    theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // return the imageref containing the gradient
    return theCGImage;
}
CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8,
                                                       0, colorSpace,
                                                       // this will give us an optimal BGRA format for the device:
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);
    return bitmapContext;
}
+ (UIImage *)imageMaskedWithGradient:(CGImageRef)image {
    UIDeviceOrientation deviceOrientation = [UIDevice currentDevice].orientation;
    DEBUG(@"need to support deviceOrientation: %i", deviceOrientation);

    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(width, height);

    // create a grayscale CGImage containing a gradient that will be used for masking the
    // content to create the 'fade'. Clipping will stretch the bitmap image
    // as required, so we can create a 1 pixel high gradient
    CGImageRef gradientMaskImage = CreateGradientImage(width, 1);

    // clip the context to the gradient mask, then release the gradient bitmap
    CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, width, height), gradientMaskImage);
    CGImageRelease(gradientMaskImage);

    // draw the image into the bitmap context
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, width, height), image);

    // create a CGImageRef of the bitmap content, and then release that bitmap context
    CGImageRef reflectionImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];

    // image is retained by the property setting above, so we can release the original
    CGImageRelease(reflectionImage);

    return theImage;
}
Hope it helps.
Like in this post:
iPhone - UIImage Leak, ObjectAlloc Building
I'm having a similar problem. The pointer from the malloc in create_bitmap_data_provider is never freed. I've verified that the associated image object is eventually released, just not the provider's allocation. Should I explicitly create a data provider and somehow manage its memory? Seems like a hack.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, blah blah blah);
CGColorSpaceRelease(colorSpace);
// ... draw into context
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage * image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(context);
After fbrereto's answer below, I changed the code to this:
- (UIImage *)modifiedImage {
    CGSize size = CGSizeMake(width, height);
    UIGraphicsBeginImageContext(size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // ... draw into context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image; // image retainCount = 1
}

// caller:
{
    UIImage *image = [self modifiedImage];
    _imageView.image = image; // image retainCount = 2
}
// after the caller is done, image retainCount = 1; the autoreleased object has gone out of scope
Unfortunately, this still exhibits the same issue, with the side effect of flipping the image horizontally. It appears to do the same thing as CGBitmapContextCreateImage internally.
I have verified that my object's dealloc is called. The retainCounts of the _imageView.image and the _imageView are both 1 before I release the _imageView. This really doesn't make sense. Others seem to have this issue as well; I'm the last one to suspect the SDK, but could there be an iPhone SDK bug here?
I too was having similar issues, with continually growing allocations and an eventual app crash. The fix that worked for me was to create the UIImage with the imageWithCGImage: class method and release the CGImageRef once the UIImage has been created; that way you also don't have to worry about autoreleasing your UIImage later on. After the changes, run your code in Instruments with Allocations, and you should no longer see perpetual memory consumption from malloc.
I typed this on a PC, so if you drop it straight into Xcode you may hit syntax issues; I apologize in advance. However, the principle is sound.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, blah blah blah);
CGColorSpaceRelease(colorSpace);
// ... draw into context
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage * image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
return image;
I had this problem and it drove me nuts for a few days, until, after much digging, I noticed a leak in CG Raster Data using Instruments.
The problem seems to lie inside CoreGraphics. My problem was that when I used CGBitmapContextCreateImage in a tight loop, it would, over time, retain some images (800 KB each), and these slowly leaked.
After a few days of tracing with Instruments, I found that a workaround was to use the CGDataProviderCreateWithData method instead. The interesting thing was that the output was the same CGImageRef, but this time no CG Raster Data showed up in VM from Core Graphics, and there was no leak. I'm assuming this is an internal problem, or that we were misusing it.
Here is the code that saved me:
@autoreleasepool {
    CGImageRef cgImage;
    CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
    // DO SOMETHING HERE WITH IMAGE
    CGImageRelease(cgImage);
}
The key was using CGDataProviderCreateWithData in the method below.
static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size); // forward declaration

static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
{
    OSStatus err = noErr;
    OSType sourcePixelFormat;
    size_t width, height, sourceRowBytes;
    void *sourceBaseAddr = NULL;
    CGBitmapInfo bitmapInfo;
    CGColorSpaceRef colorspace = NULL;
    CGDataProviderRef provider = NULL;
    CGImageRef image = NULL;

    sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
    if (kCVPixelFormatType_32ARGB == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
    else if (kCVPixelFormatType_32BGRA == sourcePixelFormat)
        bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
    else
        return -95014; // only uncompressed pixel formats

    sourceRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
    width = CVPixelBufferGetWidth(pixelBuffer);
    height = CVPixelBufferGetHeight(pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    sourceBaseAddr = CVPixelBufferGetBaseAddress(pixelBuffer);

    colorspace = CGColorSpaceCreateDeviceRGB();

    // the provider retains the pixel buffer; it is unlocked and released
    // in the ReleaseCVPixelBuffer callback below
    CVPixelBufferRetain(pixelBuffer);
    provider = CGDataProviderCreateWithData((void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer);
    image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

    if (err && image) {
        CGImageRelease(image);
        image = NULL;
    }
    if (provider) CGDataProviderRelease(provider);
    if (colorspace) CGColorSpaceRelease(colorspace);
    *imageOut = image;
    return err;
}

static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
{
    CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}
Instead of manually creating your CGContextRef, I'd suggest you leverage UIGraphicsBeginImageContext, as demonstrated in this post. More details on that set of routines can be found here. I trust it'll help resolve this issue, or at the very least leave you with less memory to manage yourself.
UPDATE:
Given the new code, the retainCount of the UIImage as it comes out of the function will be 1, and assigning it to the imageView's image will bump it to 2. At that point, deallocating the imageView will leave the UIImage with a retainCount of 1, resulting in a leak. It is important, then, to release the UIImage after assigning it to the imageView. It may look a bit strange, but doing so causes the retainCount to be properly set to 1.
You're not the only one with this problem. I've had major problems with CGBitmapContextCreateImage(). When you turn on Zombie mode, it even warns you that memory is released twice (when that's not the case). There's definitely a problem when mixing CG* calls with UI* calls. I'm still trying to figure out how to code around this issue.
Side note: calling UIGraphicsBeginImageContext is not thread-safe. Be careful.
This really helped me! Here's how I used it to fix that nasty leak problem:
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
CGImageRelease(cgImage);
image->imageRef = dataRef;
image->image = CFDataGetBytePtr(dataRef);
Notice that I had to store the CFDataRef (so I can CFRelease(image->imageRef)) in my ~Image function. Hopefully this also helps others. JR
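For completeness, that cleanup might look like this (hypothetical, matching the image-> struct fields used in the snippet above):
// Hypothetical teardown, e.g. in the ~Image destructor mentioned above:
CFRelease(image->imageRef); // releases the CFDataRef
image->image = NULL;        // the byte pointer is no longer valid after this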
I'm new to the iPhone App development so it's likely that I'm doing something wrong.
Basically, I'm loading a bunch of images from the internet and then cropping them. I managed to find examples of loading images asynchronously and adding them into views; I've done that by creating an image from NSData inside an NSOperation, which was added to an NSOperationQueue.
Then, because I had to make fixed-size thumbs, I needed a way to crop these images, so I found a script on the net that basically uses UIGraphicsBeginImageContext(), UIGraphicsGetImageFromCurrentImageContext() and UIGraphicsEndImageContext() to draw the cropped image, along with some unimportant size calculations.
The thing is, the method works, but since it generates around 20 of these images, it randomly crashes after a few of them have been generated, or sometimes after I close and re-open the app one or two more times.
What should I do in these cases? I tried to make these methods run asynchronously as well, with NSOperations and an NSOperationQueue, but no luck.
If the crop code is more relevant than I think, here it is:
UIGraphicsBeginImageContext(CGSizeMake(50, 50));

CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = CGPointMake(0.0, 0.0); //this is actually generated
                                              //based on the sourceImage size
thumbnailRect.size.width = 50;
thumbnailRect.size.height = 50;

[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); //balance the UIGraphicsBeginImageContext call
Thanks!
The code to scale the images looks too simple.
Here is the one I am using. As you can see, there are no leaks; objects are released when no longer needed. Hope this helps.
// Draw the image into a pixelsWide x pixelsHigh bitmap and use that bitmap to
// create a new UIImage
- (UIImage *)createImage:(CGImageRef)image width:(int)pixelWidth height:(int)pixelHeight
{
    // Set the size of the output image
    CGRect aRect = CGRectMake(0.0f, 0.0f, pixelWidth, pixelHeight);

    // Create a bitmap context to store the new thumbnail
    CGContextRef context = MyCreateBitmapContext(pixelWidth, pixelHeight);

    // Clear the context and draw the image into the rectangle
    CGContextClearRect(context, aRect);
    CGContextDrawImage(context, aRect, image);

    // Return a UIImage populated with the new resized image
    CGImageRef myRef = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:myRef];

    free(CGBitmapContextGetData(context));
    CGContextRelease(context);
    CGImageRelease(myRef);

    return img;
}
// MyCreateBitmapContext: Source based on Apple Sample Code
CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free(bitmapData);
        CGColorSpaceRelease(colorSpace);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
Your app is crashing because the calls you're using (e.g., UIGraphicsBeginImageContext) manipulate UIKit's context stack which you can only safely do from the main thread.
unforgiven's solution won't crash when used in a thread as it doesn't manipulate the context stack.
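As a sketch of that division of labor (imageView and sourceImage are assumed names): do the CoreGraphics-only scaling on the background thread with the createImage: method shown above, then hand the result to UIKit on the main thread.
// Background thread: pure CoreGraphics work, safe off the main thread.
UIImage *thumb = [self createImage:sourceImage.CGImage width:50 height:50];
// Main thread: all UIKit view work.
[imageView performSelectorOnMainThread:@selector(setImage:)
                            withObject:thumb
                         waitUntilDone:NO];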
It does sound suspiciously like an out-of-memory crash. Fire up the Leaks tool and watch your overall memory trends.