How to erase part of an image as the user touches it - iPhone

My big picture goal is to have a grey field over an image, and then as the user rubs on that grey field, it reveals the image underneath. Basically like a lottery scratcher card. I've done a bunch of searching through the docs, as well as this site, but can't find the solution.
The following is just a proof of concept to test "erasing" an image based on where the user touches, but it isn't working. :(
I have a UIView that detects touches, then sends the coordinates of the move to the UIViewController, which clips the image in a UIImageView by doing the following:
- (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to
{
    UIImage *image = bkgdImageView.image;
    CGSize s = image.size;
    UIGraphicsBeginImageContext(s);
    CGContextRef g = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(g, from.x, from.y);
    CGContextAddLineToPoint(g, to.x, to.y);
    CGContextClosePath(g);
    CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
    CGContextEOClip(g);
    [image drawAtPoint:CGPointZero];
    bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [bkgdImageView setNeedsDisplay];
}
The problem is that the touches are sent to this method just fine, but nothing happens on the original.
Am I doing the clip path incorrectly? Or?
Not really sure...so any help you may have would be greatly appreciated.
Thanks in advance,
Joel

I tried to do the same thing a while ago using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects it to be. So, since I knew how to work with OpenCV (the Open Computer Vision Library), and since it is written in C, I knew I could use it on the iPhone.
Doing what you want to do with OpenCV is extremely easy.
First you need a couple of functions to convert a UIImage to an IplImage, which is the type OpenCV uses to represent images of all kinds, and back again.
//This is the function you use to convert a UIImage -> IplImage
+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
                                                    iplimage->depth, iplimage->widthStep,
                                                    colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return iplimage;
}
//Convert an IplImage -> UIImage
+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
    //NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    [data release];
    return ret;
}
Now that you have both the basic functions you need, you can do whatever you want with your IplImage. This is what you want:
+ (UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r {
    //r is the radius of the erasing
    int a = point.x;
    int b = point.y;
    int position;
    int minX, minY, maxX, maxY;

    // Clamp the bounding box of the circle to the image bounds
    minX = (a - r > 0) ? a - r : 0;
    minY = (b - r > 0) ? b - r : 0;
    maxX = ((a + r) < (image->width))  ? a + r : (image->width);
    maxY = ((b + r) < (image->height)) ? b + r : (image->height);

    for (int i = minX; i < maxX; i++)
    {
        for (int j = minY; j < maxY; j++)
        {
            position = ((j - b) * (j - b)) + ((i - a) * (i - a));
            if (position <= r * r)
            {
                // Zero out all 4 channels (RGBA) of the pixel inside the circle
                uchar *ptr = (uchar *)(image->imageData) + (j * image->widthStep + i * image->nChannels);
                ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0;
            }
        }
    }
    UIImage *res = [self UIImageFromIplImage:image];
    return res;
}
Sorry for the formatting.
If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's article on the subject.
If you want to check out an app currently working with OpenCV on the App Store, go get Flags&Faces.
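To tie these pieces together, here is a rough sketch of how the question's touch handler could drive the functions above. This is my own glue code, not part of the answer: the bkgdImageView and scratchImage ivars and the radius of 10 are assumptions, it assumes the OpenCV helpers live in the same class, and it assumes the view's coordinates map 1:1 onto the image's pixels.
// Assumed ivars: UIImageView *bkgdImageView; IplImage *scratchImage;
- (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to
{
    // Convert the UIImage to an IplImage once and keep it around;
    // converting on every touch event would be far too slow.
    if (scratchImage == NULL) {
        scratchImage = [[self class] CreateIplImageFromUIImage:bkgdImageView.image];
    }
    // Erase a small circle at each end of the dragged segment.
    [[self class] erasePointinUIImage:scratchImage :from :10];
    bkgdImageView.image = [[self class] erasePointinUIImage:scratchImage :to :10];
}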

You usually want to draw into the current graphics context inside of a drawRect: method, not just any old method. Also, a clip region only affects what is drawn to the current graphics context. But instead of going into why this approach isn't working, I'd suggest doing it differently.
What I would do is have two views. One with the image, and one with the gray color that is made transparent. This allows the graphics hardware to cache the image, instead of trying to redraw the image every time you modify the gray fill.
The gray one would be a UIView subclass backed by a CGBitmapContext that you draw into, clearing the pixels the user touches.
There are probably several ways to do this. I'm just suggesting one way above.
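A minimal sketch of that overlay idea, as one possible implementation (my own illustration, not the answerer's code; the ScratchOverlayView name, the 0.5 gray fill, and the 40-point square brush are assumptions):
// ScratchOverlayView (hypothetical) -- a gray view layered on top of the
// UIImageView holding the photo; touches punch transparent holes in it.
@interface ScratchOverlayView : UIView {
    CGContextRef scratchContext; // offscreen bitmap we erase into
}
@end

@implementation ScratchOverlayView

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = NO;
        self.backgroundColor = [UIColor clearColor];
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        scratchContext = CGBitmapContextCreate(NULL, frame.size.width, frame.size.height,
                                               8, 0, space, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
        // Start out fully covered in gray.
        CGContextSetGrayFillColor(scratchContext, 0.5, 1.0);
        CGContextFillRect(scratchContext, CGRectMake(0, 0, frame.size.width, frame.size.height));
    }
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // The bitmap uses a bottom-left origin while UIKit uses top-left, so flip y.
    CGFloat cgY = self.bounds.size.height - p.y;
    // Clear a 40x40 square around the touch (a circular clip would look nicer).
    CGContextClearRect(scratchContext, CGRectMake(p.x - 20, cgY - 20, 40, 40));
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // Blit the (partially erased) gray bitmap; the photo view underneath shows through.
    CGImageRef img = CGBitmapContextCreateImage(scratchContext);
    [[UIImage imageWithCGImage:img] drawInRect:self.bounds];
    CGImageRelease(img);
}

- (void)dealloc {
    CGContextRelease(scratchContext);
    [super dealloc];
}

@end
You'd add an instance of this view directly on top of the image view; since the erased pixels become transparent, the photo underneath shows through wherever the user has rubbed.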

Related

Take a screenshot programmatically of a UIView + GLView

I have a GLView inside my UIView; now I have to take a screenshot of the combined view of the UIView and GLView.
I googled a lot but didn't find anything useful. I know how to take a screenshot of the GLView:
int width = glView.frame.size.width;
int height = glView.frame.size.height;
NSInteger myDataLength = width * height * 4;

// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width * 4; x++)
    {
        buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
    }
}

// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// make the cgimage
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
return myImage;
It seems like it's pretty tricky to get a screenshot nowadays, especially when you're mixing the UIKit and OpenGL ES: there used to be UIGetScreenImage() but Apple made it private again and is rejecting apps that use it.
Instead, there are two "solutions" to replace it: Screen capture in UIKit applications and OpenGL ES View Snapshot. The former does not capture OpenGL ES or video content, while the latter is only for OpenGL ES.
There is another technical note How do I take a screenshot of my app that contains both UIKit and Camera elements?, and here they essentially say: You need to first capture the camera picture and then when rendering the view hierarchy, draw that image in the context.
The very same would apply for OpenGL ES: You would first need to render a snapshot for your OpenGL ES view, then render the UIKit view hierarchy into an image context and draw the image of your OpenGL ES view on top of it. Very ugly, and depending on your view hierarchy it might actually not be what you're seeing on screen (e. g. if there are views in front of your OpenGL view).
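For reference, the UIKit-only half of that recipe (the layer renderInContext: technique from the "Screen capture in UIKit applications" note) looks roughly like this; the UIKitSnapshotOfView name is mine, and on its own it will not pick up OpenGL ES or video content:
#import <QuartzCore/QuartzCore.h>

// Render a UIKit view hierarchy into a UIImage (UIKit content only).
UIImage *UIKitSnapshotOfView(UIView *view)
{
    UIGraphicsBeginImageContext(view.bounds.size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}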
Inspired by DarkDust, I was successful in implementing a screen capture of a mix of a UIView and an OpenGL view (a Cocos2D 2.0 view). I've sanitized the code a bit and pasted it below; hopefully it's helpful for others.
To help explain the setup, my app screen has 4 view layers: the back is a background UIView with background images ("backgroundLayer"); the middle is 2 layers of Cocos2D GL views ("glLayer1" and "glLayer2"); and the front is another UIView layer with a few native UI controls (e.g. UIButtons), "frontView".
Here's the code:
+ (UIImage *)grabScreenshot
{
    // Get the 2 layers in the middle of the cocos2d glview and store them as a UIImage
    [CCDirector sharedDirector].nextDeltaTimeZero = YES;
    CGSize winSize = [CCDirector sharedDirector].winSize;
    CCRenderTexture *rtx =
        [CCRenderTexture renderTextureWithWidth:winSize.width
                                         height:winSize.height];
    [rtx begin];
    [glLayer1 visit];
    [glLayer2 visit];
    [rtx end];
    UIImage *openglImage = [rtx getUIImage];

    UIGraphicsBeginImageContext(winSize);
    // Capture the bottom layer
    [backgroundView.layer renderInContext:UIGraphicsGetCurrentContext()];
    // Save the captured glLayers image to the image context
    [openglImage drawInRect:CGRectMake(0, 0, openglImage.size.width, openglImage.size.height)];
    [frontView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return viewImage;
}

Create a mask from difference between two images (iPhone)

How can I detect the difference between 2 images, creating a mask of the area that's different in order to process the area that's common to both images (gaussian blur for example)?
EDIT: I'm currently using this code to get the RGBA value of pixels:
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    free(rawData);
    return result;
}
The problem is, the images are captured from the iPhone's camera, so they are not in exactly the same position. I need to create areas of a couple of pixels and extract the general color of each area (maybe by adding up the RGBA values and dividing by the number of pixels?). How could I do this and then translate it to a CGMask?
I know this is a complex question, so any help is appreciated.
Thanks.
I think the simplest way to do this would be to use a difference blend mode. The following code is based on code I use in CKImageAdditions.
+ (UIImage *)differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
    CGImageRef topRef = [top CGImage];
    CGImageRef bottomRef = [bottom CGImage];

    // Dimensions
    CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
    CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
    CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));

    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        printf("Error allocating color space.\n");
        return NULL;
    }

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 renderFrame.size.width,
                                                 renderFrame.size.height,
                                                 8,
                                                 renderFrame.size.width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (context == NULL) {
        printf("Context not created!\n");
        return NULL;
    }

    // Draw images
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);

    // Create image from context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGContextRelease(context);

    return image;
}
There are three reasons pixels will change from one iPhone photo to the next: the subject changed, the iPhone moved, and random noise. I assume for this question you're most interested in the subject changes, and you want to process out the effects of the other two. I also assume the app intends the user to keep the iPhone reasonably still, so iPhone movement changes are less significant than subject changes.
To reduce the effects of random noise, just blur the image a little. A simple averaging blur, where each pixel in the resulting image is an average of the original pixel with its nearest neighbors should be sufficient to smooth out any noise in a reasonably well lit iPhone image.
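To illustrate that averaging blur, here is a bare-bones sketch of my own (not from the answer), operating on the kind of RGBA8888 buffer that getRGBAsFromImage above produces; the 3x3 window size is an arbitrary choice:
// Simple 3x3 box blur over an RGBA8888 buffer; src and dst are width*height*4 bytes.
static void BoxBlurRGBA(const unsigned char *src, unsigned char *dst,
                        int width, int height)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            for (int c = 0; c < 4; c++) {          // R, G, B, A
                int sum = 0, count = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        sum += src[(ny * width + nx) * 4 + c];
                        count++;
                    }
                }
                dst[(y * width + x) * 4 + c] = (unsigned char)(sum / count);
            }
        }
    }
}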
To address iPhone movement, you can run a feature detection algorithm on each image (look up feature detection on Wikipedia for a start). Then calculate the transforms needed to align the least changed detected features.
Apply that transform to the blurred images, and find the difference between the images. Any pixels with a sufficient difference will become your mask. You can then process the mask to eliminate any islands of changed pixels. For example, a subject may be wearing a solid colored shirt. The subject may move from one image to the next, but the area of the solid colored shirt may overlap resulting in a mask with a hole in the middle.
In other words, this is a significant and difficult image processing problem. You won't find the answer in a stackoverflow.com post. You will find the answer in a digital image processing textbook.
Can't you just subtract pixel values from the images, and process pixels where the difference is 0?
Every pixel which does not have a suitably similar pixel in the other image within a certain radius can be deemed to be part of the mask. It's slow (though there's not much that would be faster), but it works fairly simply.
Go through the pixels, copy the ones that are different in the lower image to a new one (not opaque).
Blur the upper one completely, then show the new one above.
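As a concrete illustration of the simple per-pixel comparisons suggested above, a sketch of my own (the BuildDifferenceMask name and the per-pixel threshold are assumptions) that marks differing pixels in an 8-bit mask buffer:
#include <stdlib.h> // for abs()

// Builds an 8-bit mask from two same-sized RGBA8888 buffers:
// 255 where the images differ noticeably, 0 where they match.
static void BuildDifferenceMask(const unsigned char *imageA, const unsigned char *imageB,
                                unsigned char *mask, int width, int height, int threshold)
{
    for (int i = 0; i < width * height; i++) {
        int diff = abs(imageA[i * 4]     - imageB[i * 4])       // R
                 + abs(imageA[i * 4 + 1] - imageB[i * 4 + 1])   // G
                 + abs(imageA[i * 4 + 2] - imageB[i * 4 + 2]);  // B
        mask[i] = (diff > threshold) ? 255 : 0;
    }
}
The resulting single-channel buffer could then be wrapped in a gray-colorspace CGImage and used with CGImageMaskCreate or CGContextClipToMask.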

Mask text inside UITextView/UIWebView

Finally I chose to devote some time to finding a way/implementation to mask text inside a UITextView/UIWebView.
By now what I'm able to do is:
- add some custom background
- add a UITextView/UIWebView with some text
- add a UIImageView (with a covering PNG) or a CAGradientLayer to create a simple mask effect (*)
Of course this is not a magic bullet and requires at least one more layer (the one pointed out with *).
Furthermore it's not so good when you have a fully transparent background, because everyone can recognize the extra view/layer used to fade away the text.
I searched all over Google but still haven't found a good solution (I've found plenty about masking an image, blah blah)...
Any tips?
Thanks in advance,
marcio
PS: maybe a screenshot will be more straightforward, so here you are!
http://grab.by/KzS
Yes! I finally got it. I don't know if it's Apple's way, but it works. Maybe they have the opportunity to employ some private APIs. Anyway, this is a sort of pseudo-algorithm for how I got it working:
1) get a screenshot of the window
2) crop the desired rect with CGImageCreateWithImageInRect
3) apply a gradient mask (stolen from Apple's sample code on Reflections)
4) create a UIImageView with the freshly created image
I also noted that it doesn't affect performance, even on the slowest devices.
Hope it will be helpful!
And this is a crop of the result (link text)
I've promised myself I'll implement a category just to make it cleaner. Until now the code is spread across different classes.
Just to give a sample (only landscape orientation and only a top mask are supported, see the transform below). In this case I overrode didMoveToWindow of the table that needs to be masked:
- (void)didMoveToWindow {
    if (self.window) {
        UIImageView *reflected = (UIImageView *)[self.superview viewWithTag:TABLE_SHADOW_TOP];
        if (!reflected) {
            UIImage *image = [UIImage screenshot:self.window];
            CGRect croppedRect = CGRectMake(480 - self.frame.size.height, self.frame.origin.x, 16, self.frame.size.width);
            CGImageRef cropImage = CGImageCreateWithImageInRect(image.CGImage, croppedRect);
            UIImage *reflectedImage = [UIImage imageMaskedWithGradient:cropImage];
            CGImageRelease(cropImage);
            UIImageView *reflected = [[UIImageView alloc] initWithImage:reflectedImage];
            reflected.transform = CGAffineTransformMakeRotation(-(M_PI/2));
            reflected.tag = TABLE_SHADOW_TOP;
            CGRect adjusted = reflected.frame;
            adjusted.origin = self.frame.origin;
            reflected.frame = adjusted;
            [self.superview addSubview:reflected];
            [reflected release];
        }
    }
}
and this is the UIImage category:
CGImageRef CreateGradientImage(int pixelsWide, int pixelsHigh)
{
    CGImageRef theCGImage = NULL;

    // gradient is always black-white and the mask must be in the gray colorspace
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();

    // create the bitmap context
    CGContextRef gradientBitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh,
                                                               8, 0, colorSpace, kCGImageAlphaNone);

    // define the start and end grayscale values (with the alpha, even though
    // our bitmap context doesn't support alpha the gradient requires it)
    CGFloat colors[] = {0.0, 1.0, 1.0, 1.0};

    // create the CGGradient and then release the gray color space
    CGGradientRef grayScaleGradient = CGGradientCreateWithColorComponents(colorSpace, colors, NULL, 2);
    CGColorSpaceRelease(colorSpace);

    // create the start and end points for the gradient vector (straight down)
    CGPoint gradientStartPoint = CGPointZero;
    // CGPoint gradientStartPoint = CGPointMake(0, pixelsHigh);
    CGPoint gradientEndPoint = CGPointMake(pixelsWide/1.75, 0);

    // draw the gradient into the gray bitmap context
    CGContextDrawLinearGradient(gradientBitmapContext, grayScaleGradient, gradientStartPoint,
                                gradientEndPoint, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(grayScaleGradient);

    // convert the context into a CGImageRef and release the context
    theCGImage = CGBitmapContextCreateImage(gradientBitmapContext);
    CGContextRelease(gradientBitmapContext);

    // return the imageref containing the gradient
    return theCGImage;
}

CGContextRef MyCreateBitmapContext(int pixelsWide, int pixelsHigh)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // create the bitmap context
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, pixelsWide, pixelsHigh, 8,
                                                       0, colorSpace,
                                                       // this will give us an optimal BGRA format for the device:
                                                       (kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst));
    CGColorSpaceRelease(colorSpace);
    return bitmapContext;
}
+ (UIImage *)imageMaskedWithGradient:(CGImageRef)image {
    UIDeviceOrientation deviceOrientation = [UIDevice currentDevice].orientation;
    DEBUG(@"need to support deviceOrientation: %i", deviceOrientation);
    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);

    // create a bitmap graphics context the size of the image
    CGContextRef mainViewContentContext = MyCreateBitmapContext(width, height);

    // create a CGImage containing a gradient that will be used for masking the
    // main view content to create the 'fade' of the reflection. The CGImageCreateWithMask
    // function will stretch the bitmap image as required, so we can create a 1 pixel wide gradient
    CGImageRef gradientMaskImage = CreateGradientImage(width, 1);

    // create an image by masking the bitmap of the mainView content with the gradient view
    // then release the pre-masked content bitmap and the gradient bitmap
    CGContextClipToMask(mainViewContentContext, CGRectMake(0.0, 0.0, width, height), gradientMaskImage);
    CGImageRelease(gradientMaskImage);

    // draw the image into the bitmap context
    CGContextDrawImage(mainViewContentContext, CGRectMake(0, 0, width, height), image);

    // create CGImageRef of the main view bitmap content, and then release that bitmap context
    CGImageRef reflectionImage = CGBitmapContextCreateImage(mainViewContentContext);
    CGContextRelease(mainViewContentContext);

    // convert the finished reflection image to a UIImage
    UIImage *theImage = [UIImage imageWithCGImage:reflectionImage];

    // image is retained by the property setting above, so we can release the original
    CGImageRelease(reflectionImage);

    return theImage;
}
Hope it helps.

Multiple Image Operations Crash iPhone App

I'm new to the iPhone App development so it's likely that I'm doing something wrong.
Basically, I'm loading a bunch of images from the internet and then cropping them. I managed to find examples of loading images asynchronously and adding them into views. I've managed to do that by creating an image from NSData, through an NSOperation, which was added to an NSOperationQueue.
Then, because I had to make fixed-size thumbs, I needed a way to crop these images, so I found a snippet on the net which basically uses UIGraphicsBeginImageContext(), UIGraphicsGetImageFromCurrentImageContext() and UIGraphicsEndImageContext() to draw the cropped image, along with some unimportant size calculations.
The thing is, the method works, but since it's generating something like 20 of these images, it randomly crashes after a few of them have been generated, or sometimes after I close and re-open the app one or two more times.
What should I do in these cases? I tried to make these methods run asynchronously as well, with NSOperations and an NSOperationQueue, but no luck.
If the crop code is more relevant than I think, here it is:
UIGraphicsBeginImageContext(CGSizeMake(50, 50));
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = CGPointMake(0.0, 0.0); // this is actually generated
                                              // based on the sourceImage size
thumbnailRect.size.width = 50;
thumbnailRect.size.height = 50;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
Thanks!
The code you use to scale the images looks too simple.
Here is the one I am using. As you can see, there are no leaks; objects are released when no longer needed. Hope this helps.
// Draw the image into a pixelsWide x pixelsHigh bitmap and use that bitmap to
// create a new UIImage
- (UIImage *)createImage:(CGImageRef)image width:(int)pixelWidth height:(int)pixelHeight
{
    // Set the size of the output image
    CGRect aRect = CGRectMake(0.0f, 0.0f, pixelWidth, pixelHeight);

    // Create a bitmap context to store the new thumbnail
    CGContextRef context = MyCreateBitmapContext(pixelWidth, pixelHeight);

    // Clear the context and draw the image into the rectangle
    CGContextClearRect(context, aRect);
    CGContextDrawImage(context, aRect, image);

    // Return a UIImage populated with the new resized image
    CGImageRef myRef = CGBitmapContextCreateImage(context);
    UIImage *img = [UIImage imageWithCGImage:myRef];

    free(CGBitmapContextGetData(context));
    CGContextRelease(context);
    CGImageRelease(myRef);

    return img;
}

// MyCreateBitmapContext: Source based on Apple Sample Code
CGContextRef MyCreateBitmapContext(int pixelsWide,
                                   int pixelsHigh)
{
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    colorSpace = CGColorSpaceCreateDeviceRGB();
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8,
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedLast);
    if (context == NULL)
    {
        free(bitmapData);
        CGColorSpaceRelease(colorSpace);
        fprintf(stderr, "Context not created!");
        return NULL;
    }
    CGColorSpaceRelease(colorSpace);
    return context;
}
Your app is crashing because the calls you're using (e.g., UIGraphicsBeginImageContext) manipulate UIKit's context stack, which you can only safely do from the main thread.
unforgiven's solution won't crash when used in a thread as it doesn't manipulate the context stack.
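If you keep the UIGraphics-based crop, one way around that is to hop back to the main thread for just the drawing step. Here is a rough sketch of my own (not from either answer; generateThumbnail, sourceImage, and the thumbnail property are assumed names) inside the NSOperation subclass:
// Inside the NSOperation subclass that downloads the image.
- (void)main {
    // ... download the image data here ...
    // UIGraphicsBeginImageContext & friends touch UIKit's context stack,
    // so run the cropping step on the main thread.
    [self performSelectorOnMainThread:@selector(generateThumbnail)
                           withObject:nil
                        waitUntilDone:YES];
}

- (void)generateThumbnail {
    UIGraphicsBeginImageContext(CGSizeMake(50, 50));
    [sourceImage drawInRect:CGRectMake(0, 0, 50, 50)];
    self.thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();   // don't forget to pop the context
}
Alternatively, the CGBitmapContext-based version above can stay on the background queue, since it never touches UIKit's context stack.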
It does sound suspiciously like an out-of-memory crash. Fire up the Leaks tool and see your overall memory trends.

Slicing up a UIImage on iPhone

Objective: take a UIImage, crop out a square in the middle, change size of square to 320x320 pixels, slice up the image into 16 80x80 images, save the 16 images in an array.
Here's my code:
CGImageRef originalImage, resizedImage, finalImage, tmp;
float imgWidth, imgHeight, diff;
UIImage *squareImage, *playImage;
NSMutableArray *tileImgArray;
int r, c;

originalImage = [image CGImage];
imgWidth = image.size.width;
imgHeight = image.size.height;
diff = fabs(imgWidth - imgHeight);
if (imgWidth > imgHeight) {
    resizedImage = CGImageCreateWithImageInRect(originalImage, CGRectMake(floor(diff/2), 0, imgHeight, imgHeight));
} else {
    resizedImage = CGImageCreateWithImageInRect(originalImage, CGRectMake(0, floor(diff/2), imgWidth, imgWidth));
}
CGImageRelease(originalImage);

squareImage = [UIImage imageWithCGImage:resizedImage];
if (squareImage.size.width != squareImage.size.height) {
    NSLog(@"image cutout error!");
    //*code to return to main menu of app, irrelevant here
} else {
    float newDim = squareImage.size.width;
    if (newDim != 320.0) {
        CGSize finalSize = CGSizeMake(320.0, 320.0);
        UIGraphicsBeginImageContext(finalSize);
        [squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
        playImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    } else {
        playImage = squareImage;
    }
}
finalImage = [playImage CGImage];
tileImgArray = [NSMutableArray arrayWithCapacity:0];
for (int i = 0; i < 16; i++) {
    r = i / 4;
    c = i % 4;
    //*
    tmp = CGImageCreateWithImageInRect(finalImage, CGRectMake(c*tileSize, r*tileSize, tileSize, tileSize));
    [tileImgArray addObject:[UIImage imageWithCGImage:tmp]];
}
The code works correctly when the original (the variable image) has its smaller dimension either bigger or smaller than 320 pixels. When it's exactly 320, the resulting 80x80 images are almost entirely black, some with a few pixels at the edges that may (I can't really tell) be from the original image.
I tested by displaying the full image both directly:
[UIImage imageWithCGImage:finalImage];
And indirectly:
[UIImage imageWithCGImage:CGImageCreateWithImageInRect(finalImage, CGRectMake(0, 0, 320, 320))];
In both cases, the display worked. The problems only arise when I attempt to slice out some part of the image.
After some more experimentation, I found the following solution (I still don't know why it didn't work as originally written, though.) But anyway, the slicing works after the resize code is put in place even when resizing is unnecessary:
if (newDim != 320.0) {
    CGSize finalSize = CGSizeMake(320.0, 320.0);
    UIGraphicsBeginImageContext(finalSize);
    [squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
    playImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
} else {
    CGSize finalSize = CGSizeMake(320.0, 320.0);
    UIGraphicsBeginImageContext(finalSize);
    [squareImage drawInRect:CGRectMake(0, 0, finalSize.width, finalSize.height)];
    playImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
Does anyone have any clue WHY this is going on?
P.S. Yes, if/else is no longer required here. Removing it before I knew it was going to work would be stupid, though.
Just out of curiosity, why did you make your mutable array with a capacity of 0 when you know you're going to put 16 things in it?
Well, aside from that, I've tried the basic techniques you used for resizing and slicing (I did not need to crop, because I'm working with images that are already square) and I'm unable to reproduce your problem in the simulator. You might want to try breaking your code into three separate functions (crop to square, resize, and slice into pieces) and then testing the three separately so you can figure out which of the three steps is causing the problems (i.e. feed in input images that you've manipulated in a normal graphics program instead of Objective-C, and then inspect what you get back out!).
I'll attach my versions of the resize and slice functions below, which will hopefully be helpful. It was nice to have your versions to look at, since I didn't have to find all the methods by myself for once. :)
Just as a note, the two dimensional array mentioned is my own class built out of NSMutableArrays, but you could easily implement your own version or use a flat NSMutableArray instead. ;)
// cut the given image into a grid of equally sized smaller images
// this assumes that the image can be equally divided in the requested increments
// the images will be stored in the return array in [row][column] order
+ (TwoDimensionalArray *) chopImageIntoGrid : (UIImage *) originalImage : (int) numberOfRows : (int) numberOfColumns
{
    // figure out the size of our tiles
    int tileWidth = originalImage.size.width / numberOfColumns;
    int tileHeight = originalImage.size.height / numberOfRows;

    // create our return array
    TwoDimensionalArray *toReturn = [[TwoDimensionalArray alloc] initWithBounds : numberOfRows
                                                                                : numberOfColumns];
    // get a CGImage version of our image
    CGImageRef cgVersionOfOriginal = [originalImage CGImage];

    // loop to chop up each row
    for (int row = 0; row < numberOfRows; row++) {
        // loop to chop up each individual piece by column
        for (int column = 0; column < numberOfColumns; column++)
        {
            CGImageRef tempImage =
                CGImageCreateWithImageInRect(cgVersionOfOriginal,
                                             CGRectMake(column * tileWidth,
                                                        row * tileHeight,
                                                        tileWidth,
                                                        tileHeight));
            [toReturn setObjectAt : row : column : [UIImage imageWithCGImage:tempImage]];
        }
    }

    // now return the set of images we created
    return [toReturn autorelease];
}
// this method resizes an image to the requested dimensions
// be a bit careful when using this method, since the resize will not respect
// the proportions of the image
+ (UIImage *) resize : (UIImage *) originalImage : (int) newWidth : (int) newHeight
{
    // translate the image to the new size
    CGSize newSize = CGSizeMake(newWidth, newHeight); // the new size we want the image to be
    UIGraphicsBeginImageContext(newSize); // downside: this can't go on a background thread, I'm told
    [originalImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); // get our new image
    UIGraphicsEndImageContext();

    // return our brand new image
    return newImage;
}
Eva Schiffer