How to crop the image in iPhone - iphone

I want to do the same thing as asked in this question.
In my app I want to crop the image the way Facebook does image cropping. Can anyone point me to a good tutorial or some sample code? The link I have provided completely describes my requirement.

You may create a new image with any properties. Here is my function, which does that; you just need to use your own parameters for the new image. In my case the image is not cropped, I am just applying an effect that moves pixels from their original place to another. But if you initialize the new image with a different height and width, you can copy any range of pixels you need from the old image into the new one:
-(UIImage *)Color:(UIImage *)img
{
int R;
float m_width = img.size.width;
float m_height = img.size.height;
if (m_width>m_height) R = m_height*0.9;
else R = m_width*0.9;
int m_wint = (int)m_width; // later we will need these parameters as both float and int; you could instead cast with (int) and (float) where needed and skip the extra variables
int m_hint = (int)m_height;
CGRect imageRect;
// checking image orientation. we will work with the image pixel by pixel, so the top side needs to be at the top.
if(img.imageOrientation==UIImageOrientationUp
|| img.imageOrientation==UIImageOrientationDown)
{
imageRect = CGRectMake(0, 0, m_wint, m_hint);
}
else
{
imageRect = CGRectMake(0, 0, m_hint, m_wint);
}
uint32_t *rgbImage = (uint32_t *) malloc(m_wint * m_hint * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rgbImage, m_wint, m_hint, 8, m_wint *sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetShouldAntialias(context, NO);
CGContextTranslateCTM(context, 0, m_hint);
CGContextScaleCTM(context, 1.0, -1.0);
switch (img.imageOrientation) {
case UIImageOrientationRight:
{
CGContextRotateCTM(context, M_PI / 2);
CGContextTranslateCTM(context, 0, -m_wint);
}break;
case UIImageOrientationLeft:
{
CGContextRotateCTM(context, - M_PI / 2);
CGContextTranslateCTM(context, -m_hint, 0);
}break;
case UIImageOrientationUp:
{
CGContextTranslateCTM(context, m_wint, m_hint);
CGContextRotateCTM(context, M_PI);
}break;
default:
break;
}
CGContextDrawImage(context, imageRect, img.CGImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// here is the new image. you can change m_wint and m_hint as you want
uint8_t *result = (uint8_t *) calloc(m_wint * m_hint * sizeof(uint32_t), 1);
for(int y = 0; y < m_hint; y++) //new m_hint here
{
float fy=y;
double yy = (m_height*( asinf(m_height/(2*R))-asin(((m_height/2)-fy)/R) )) /
(2*asin(m_height/(2*R))); // (xx, yy) - coordinates of pixel of OLD image
for(int x = 0; x < m_wint; x++) //new m_wint here
{
float fx=x;
double xx = (m_width*( asin(m_width/(2*R))-asin(((m_width/2)-fx)/R) )) /
(2*asin(m_width/(2*R)));
uint32_t rgbPixel=rgbImage[(int)yy * m_wint + (int)xx];
int intRedSource = (rgbPixel>>24)&255;
int intGreenSource = (rgbPixel>>16)&255;
int intBlueSource = (rgbPixel>>8)&255;
result[(y * (int)m_wint + x) * 4] = 0;
result[(y * (int)m_wint + x) * 4 + 1] = intBlueSource;
result[(y * (int)m_wint + x) * 4 + 2] = intGreenSource;
result[(y * (int)m_wint + x) * 4 + 3] = intRedSource;
}
}
free(rgbImage);
colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate(result, m_wint, m_hint, 8, m_wint * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast ); //new m_wint and m_hint as well
CGImageRef image1 = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *resultUIImage = [UIImage imageWithCGImage:image1];
CGImageRelease(image1);
@try {
free(result);
}
@catch (NSException * e) {
NSLog(@"proc. Exception: %@", e);
}
return resultUIImage;
}

CGRect rectImage = CGRectMake(p1.x,p1.y, p2.x - p1.x, p4.y - p1.y);
//Create bitmap image from original image data,
//using rectangle to specify desired crop area
CGImageRef imageRef = CGImageCreateWithImageInRect([imageForCropping CGImage], rectImage);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
imageView1 = [[UIImageView alloc] initWithFrame:CGRectMake(p1.x, p1.y, p2.x - p1.x, p4.y - p1.y)];
imageView1.image = croppedImage;
[self.view addSubview:imageView1];
CGImageRelease(imageRef);
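If you want this as a reusable helper, here is a minimal sketch (my own wrapper, not code from the answer above) built around the same CGImageCreateWithImageInRect call; it assumes iOS 4+ and that the crop rect is already expressed in the pixel coordinates of the underlying CGImage:

-(UIImage *)cropImage:(UIImage *)image toRect:(CGRect)rect
{
    // CGImageCreateWithImageInRect works on the underlying CGImage,
    // so rect is interpreted in pixel coordinates
    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, rect);
    if (croppedRef == NULL) return nil; // rect fell outside the image
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];
    CGImageRelease(croppedRef);
    return cropped;
}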

Related

Inappropriate Images display due to overlapping in CGContext

I can't find the solution for this:
I have two image views, each with a different image: image_1 (the jeans of a person) and image_2 (the shirt of a person). When I change the RGB values of each and every pixel of image_1 or image_2 individually, I get a perfect result. But whenever one of the two frames slightly overlaps the other after both have been processed, the problem occurs. Please help. This is how I am processing the image.
-(UIImage *)ColorChangeProcessing :(int )redvalue greenValue:(int)greenvalue blueValue:(int)bluevalue imageUsed : (UIImage *)image
{
CGContextRef ctx;
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
int byteIndex = (bytesPerRow * 0) + 0 * bytesPerPixel,RED = redvalue,GREEN=greenvalue,BLUE = bluevalue;
for (int ii = 0 ; ii < width * height ; ++ii)
{
if(rawData[byteIndex] != '\0' || rawData[byteIndex+1] != '\0' || rawData[byteIndex+2] != '\0'){
if ((((rawData[byteIndex])+RED)) > 255)
{
rawData[byteIndex] = (char)255;
}
else if((((rawData[byteIndex])+RED)) >0)
{
rawData[byteIndex] = (char) (((rawData[byteIndex] * 1.0) + RED));
}
else
{
rawData[byteIndex] = (char)0;
}
if ((((rawData[byteIndex+1])+GREEN)) > 255)
{
rawData[byteIndex+1] = (char)255;
}
else if((((rawData[byteIndex+1])+GREEN))>0)
{
rawData[byteIndex+1] = (char) (((rawData[byteIndex+1] * 1.0) + GREEN));
}
else
{
rawData[byteIndex+1] = (char)0;
}
if ((((rawData[byteIndex+2])+BLUE)) > 255)
{
rawData[byteIndex+2] = (char)255;
}
else if((((rawData[byteIndex+2])+BLUE))>0)
{
rawData[byteIndex+2] = (char) (((rawData[byteIndex+2] * 1.0) + BLUE));
}
else
{
rawData[byteIndex+2] = (char)0;
}
}
byteIndex += 4;
}
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedLast );
CGImageRef NewimageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:NewimageRef];
CGContextRelease(ctx);
free(rawData);
CGImageRelease(NewimageRef);
return rawImage;
}
Now, on any button action, you can set the R, G, B and image values and get the processed image back. Then try to place the processed images' frames so that part of one image is covered by the other; for instance, if you have the jeans image, place the small portion near the belt over the shirt image.
Finally I came up with the solution: I was failing to check the alpha value, so the transparent part of the image was what caused the problem. Thanks all.
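For reference, a minimal sketch of that alpha check (my reconstruction, not the poster's exact fix), dropped into the per-pixel loop above under the RGBA layout it already uses:

// before adjusting R/G/B inside the loop:
if (rawData[byteIndex + 3] == 0) { // alpha byte in the RGBA layout
    byteIndex += 4;
    continue; // fully transparent pixel - leave it untouched
}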

Getting an openGL image - runs in simulator, crashes on iPad

The purpose of this function is to return a UIImage from an OpenGL image. The reason it's being converted to a CGImage is so that OpenGL and UIKit elements can be rendered on top of each other, which is taken care of in another function.
The strange thing is that when the app is run in the simulator, everything works fine. However, after testing the app on several different iPads, the app crashes with an EXC_BAD_ACCESS code=1 error whenever the drawGlToImage method is called on self. Does anyone know what I'm doing here that would cause this? I've read that UIGraphicsBeginImageContext() used to have thread-safety issues, but it seems that was fixed in iOS 4.
- (UIImage *)drawGlToImage
{
self.context = [EAGLContext currentContext];
[EAGLContext setCurrentContext:self.context];
UIGraphicsBeginImageContext(self.view.frame.size);
unsigned char buffer[1024 * 768 * 4];
NSInteger dataSize = 1024 * 768 * 4;
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(currentContext);
glReadPixels(0, 0, 1024, 768, GL_RGBA, GL_UNSIGNED_BYTE, &buffer);
//flip the image
GLubyte *flippedBuffer = (GLubyte *) malloc(dataSize);
for(int y = 0; y <768; y++)
{
for(int x = 0; x <1024 * 4; x++)
{
if(buffer[y* 4 * 1024 + x]==0)
flippedBuffer[(767 - y) * 1024 * 4 + x]=1;
else
flippedBuffer[(767 - y) * 1024 * 4 + x] = buffer[y* 4 * 1024 + x];
}
}
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, flippedBuffer, 1024 * 768 * 4, NULL);
CGImageRef iref = CGImageCreate(1024,768,8,32,1024*4, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaLast, ref, NULL, true, kCGRenderingIntentDefault);
CGContextScaleCTM(currentContext, 1.0, -1.0);
CGContextTranslateCTM(currentContext, 0, -self.view.frame.size.height);
UIGraphicsPopContext();
UIImage *image = [[UIImage alloc] initWithCGImage:iref];
UIGraphicsEndImageContext();
return image;
free(flippedBuffer);
UIGraphicsPopContext();
}
When a button is pressed, a method that is called makes this assignment, which causes the app to crash.
UIImage *glImage = [self drawGlToImage];
I am not sure at which point you are calling this method, but before calling any OpenGL functions you need to set the right OpenGL context. In the Xcode template it is this line:
[EAGLContext setCurrentContext:self.context];
Here's the code used to solve it
- (UIImage *)drawGlToImage {
// Code borrowed and tweaked from:
// http://stackoverflow.com/questions/9881143/missing-part-of-the-image-when-taking-screenshot-while-supporting-retina-display
CGFloat scale = UIScreen.mainScreen.scale;
CGFloat xOffset = 40.0f;
CGFloat yOffset = -16.0f;
CGSize size = CGSizeMake((self.chart.frame.size.width) * scale,
self.chart.frame.size.height * scale);
//Create buffer for pixels
GLuint bufferLength = size.width * size.height * 4;
GLubyte* buffer = (GLubyte*)malloc(bufferLength);
//Read Pixels from OpenGL
glReadPixels(0.0f, 0.0f, size.width, size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
//Make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
//Configure image
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * size.width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(size.width, size.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
uint32_t* pixels = (uint32_t*)malloc(bufferLength);
CGContextRef context = CGBitmapContextCreate(pixels, size.width, size.height, 8, size.width * 4, CGImageGetColorSpace(iref), kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextTranslateCTM(context, 0.0f, size.height);
CGContextScaleCTM(context, 1.0f, -1.0f);
// These numbers are a little magical.
CGContextDrawImage(context, CGRectMake(xOffset, yOffset, ((size.width - (6.0f * scale)) / scale) - (xOffset / 2), (size.height / scale) - (yOffset / 2)), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage *outputImage = [UIImage imageWithCGImage:outputRef];
CGImageRelease(outputRef); // release the intermediate CGImage so it doesn't leak
//Dealloc
CGDataProviderRelease(provider);
CGImageRelease(iref);
CGContextRelease(context);
free(buffer);
free(pixels);
return outputImage;
}

How to filter colors at runtime? - iphone

I have an image where each pixel has one of the three primary colors (RGBA).
And I have to change this image at run time, replacing each color channel with another (a run-time filter).
I have the .PVR and the corresponding glTexture2D, but how can I filter / change the colors at run time?
I cannot use OpenGL ES 2.0
But I can use Cocos2D and OpenGL ES 1.x
:(
It's not trivial, but it could be useful to you:
// Converts the image to black and white
// Channel indices for the little-endian RGBA layout used below; these names are assumed here (the same PIXELS enum appears in the grayscale answer further down)
typedef enum { ALPHA = 0, BLUE = 1, GREEN = 2, RED = 3 } PIXELS;
+ (UIImage *)convertImageToGrayScale:(UIImage *)i {
CGSize size = [i size];
int width = size.width;
int height = size.height;
// the pixels will be painted to this array
uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
// clear the pixels so any transparency is preserved
memset(pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a context with RGBA pixels
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// paint the bitmap to our context which will fill in the pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [i CGImage]);
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
// convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
// set the pixels to gray
rgbaPixel[RED] = gray;
rgbaPixel[GREEN] = gray;
rgbaPixel[BLUE] = gray;
}
}
// create a new CGImageRef from our context with the modified pixels
CGImageRef image = CGBitmapContextCreateImage(context);
// we're done with the context, color space, and pixels
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);
// make a new UIImage to return
UIImage *resultUIImage = [UIImage imageWithCGImage:image];
// we're done with image now too
CGImageRelease(image);
return resultUIImage;
}
This converts the received UIImage into a grayscale image.
This is the code I used to manipulate an RGBA picture:
-(UIImage*)convertGrayScaleImageRedImage:(CGImageRef)inImage{
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
for (int i=0; i<length; i+=4){
int r = i;
int g = i+1;
int b = i+2;
//int a = i+3;//alpha
NSLog(#"r=%i, g=%i, b=%i",m_PixelBuf[r], m_PixelBuf[g], m_PixelBuf[b]);
m_PixelBuf[r] = 1;
m_PixelBuf[g] = 0;
m_PixelBuf[b] = 0;
//m_PixelBuf[a] = m_PixelBuf[a];//alpha
}
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
CGImageGetWidth(inImage),
CGImageGetHeight(inImage),
CGImageGetBitsPerComponent(inImage),
CGImageGetBytesPerRow(inImage),
CGImageGetColorSpace(inImage),
CGImageGetBitmapInfo(inImage)
);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
UIImage *redImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CFRelease(m_DataRef); // release the copied pixel data
return redImage;
}
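Since the question is really about exchanging one channel for another rather than forcing everything to red, here is a minimal sketch along the same lines (my own variation on the code above, not from the original answer) that swaps the red and blue channels; it assumes an RGBA byte order, so adjust the offsets for BGRA sources:

-(UIImage *)swapRedAndBlueChannels:(CGImageRef)inImage {
    // copy the raw pixel data, same approach as above
    CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8 *pixelBuf = (UInt8 *)CFDataGetBytePtr(dataRef);
    CFIndex length = CFDataGetLength(dataRef);
    for (CFIndex i = 0; i < length; i += 4) {
        UInt8 tmp = pixelBuf[i];        // red
        pixelBuf[i] = pixelBuf[i + 2];  // red <- blue
        pixelBuf[i + 2] = tmp;          // blue <- red
    }
    CGContextRef ctx = CGBitmapContextCreate(pixelBuf,
                                             CGImageGetWidth(inImage),
                                             CGImageGetHeight(inImage),
                                             CGImageGetBitsPerComponent(inImage),
                                             CGImageGetBytesPerRow(inImage),
                                             CGImageGetColorSpace(inImage),
                                             CGImageGetBitmapInfo(inImage));
    CGImageRef swappedRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CFRelease(dataRef);
    UIImage *swapped = [UIImage imageWithCGImage:swappedRef];
    CGImageRelease(swappedRef);
    return swapped;
}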

What could be the cause of this error with CGBitmapContextCreate()?

My application gives these errors on the console:
<Error>: CGBitmapContextCreateImage: invalid context 0xdf12000
Program received signal: “EXC_BAD_ACCESS”.
from the following code:
- (UIImage *)pureBlackAndWhiteImage:(UIImage *)image {
//CGImageRef imageR = [image CGImage];
CGColorSpaceRef colorSpac = CGColorSpaceCreateDeviceGray();
//CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
CGContextRef contex = CGBitmapContextCreate(NULL, image.size.width,
image.size.height, 8, 1*image.size.width, colorSpac, kCGImageAlphaNone);
contex = malloc(image.size.height * image.size.width * 4);
CGColorSpaceRelease(colorSpac);
// Draw the image into the grayscale context
//CGContextDrawImage(contex, rect, NULL);
CGImageRef grayscale = CGBitmapContextCreateImage(contex);
CGContextRelease(contex);
NSUInteger width = CGImageGetWidth(grayscale);
NSUInteger height = CGImageGetHeight(grayscale);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CFDataRef datare=CGDataProviderCopyData(CGImageGetDataProvider(grayscale));
unsigned char *dataBitmap=(unsigned char *)CFDataGetBytePtr(datare);
dataBitmap = malloc(height * width * 4);
NSUInteger bytesPerPixel = 1;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(dataBitmap, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaNoneSkipLast);
//unsigned char *dataBitmap = [self bitmapFromImage:image];
for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
// if ((dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 3 / 2)) {
if(dataBitmap[i+1]>128 && dataBitmap[i+2]>128 && dataBitmap[i+3]>128)
{
dataBitmap[i + 1] = 0;
dataBitmap[i + 2] = 0;
dataBitmap[i + 3] = 0;
} else {
dataBitmap[i + 1] = 255;
dataBitmap[i + 2] = 255;
dataBitmap[i + 3] = 255;
}
}
//CFDataRef newData=CFDataCreate(NULL,dataBitmap,length);
CGColorSpaceRef colorSpa = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmapcontext = CGBitmapContextCreate(dataBitmap,
image.size.width,
image.size.height,
8,
1*image.size.width,
colorSpa,
kCGImageAlphaNone);
CFRelease(colorSpace);
CFRelease(colorSpa);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapcontext);
CFRelease(cgImage);
CFRelease(bitmapcontext);
CGContextRelease(context);
free(dataBitmap);
//CFRelease(imageR);
UIImage *newImage = [UIImage imageWithCGImage:cgImage];
return newImage;
}
What could be causing these errors?
You're trying to create the context using 4 bytes per row, when the width of the image is a lot more than that.
If you have 1 byte per pixel, then instead of 4 you should use 1 * image.size.width, like you do the second and third times you create a bitmap context.
Besides that, I don't think passing imageR as the first argument is a good idea. If you're deploying for iOS 4 or later, you can pass NULL instead.
Otherwise, I think you have to allocate the memory to store the bitmap context data.
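To illustrate, a minimal sketch of a grayscale context with a consistent bytesPerRow (assuming iOS 4+, so Quartz can allocate the backing store when you pass NULL; the variable names are illustrative):

CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
// one byte per pixel, so bytesPerRow must be 1 * width
CGContextRef grayContext = CGBitmapContextCreate(NULL, // let Quartz manage the buffer
                                                 image.size.width,
                                                 image.size.height,
                                                 8, // bits per component
                                                 1 * image.size.width,
                                                 graySpace,
                                                 kCGImageAlphaNone);
CGColorSpaceRelease(graySpace);
CGContextDrawImage(grayContext, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage);
CGImageRef grayImage = CGBitmapContextCreateImage(grayContext);
CGContextRelease(grayContext);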

Convert to grayscale - too slow

I've made a class that converts an image into grayscale. But it works way too slow. Is there a way to make it work faster?
Here's my class:
@implementation PixelProcessing
SYNTHESIZE_SINGLETON_FOR_CLASS(PixelProcessing);
#define bytesPerPixel 4
#define bitsPerComponent 8
-(UIImage*)scaleAndRotateImage: (UIImage*)img withMaxResolution: (int)kMaxResolution
{
CGImageRef imgRef = img.CGImage;
CGFloat width = CGImageGetWidth(imgRef);
CGFloat height = CGImageGetHeight(imgRef);
CGAffineTransform transform = CGAffineTransformIdentity;
CGRect bounds = CGRectMake(0, 0, width, height);
if ( (kMaxResolution != 0) && (width > kMaxResolution || height > kMaxResolution) ) {
CGFloat ratio = width/height;
if (ratio > 1) {
bounds.size.width = kMaxResolution;
bounds.size.height = bounds.size.width / ratio;
}
else {
bounds.size.height = kMaxResolution;
bounds.size.width = bounds.size.height * ratio;
}
}
CGFloat scaleRatio;
if (kMaxResolution != 0){
scaleRatio = bounds.size.width / width;
} else
{
scaleRatio = 1.0f;
}
CGSize imageSize = CGSizeMake(CGImageGetWidth(imgRef), CGImageGetHeight(imgRef));
CGFloat boundHeight;
UIImageOrientation orient = img.imageOrientation;
switch(orient) {
case UIImageOrientationUp: //EXIF = 1
transform = CGAffineTransformIdentity;
break;
case UIImageOrientationUpMirrored: //EXIF = 2
transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
break;
case UIImageOrientationDown: //EXIF = 3
transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationDownMirrored: //EXIF = 4
transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
break;
case UIImageOrientationLeftMirrored: //EXIF = 5
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationLeft: //EXIF = 6
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationRightMirrored: //EXIF = 7
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
case UIImageOrientationRight: //EXIF = 8
boundHeight = bounds.size.height;
bounds.size.height = bounds.size.width;
bounds.size.width = boundHeight;
transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
default:
[NSException raise:NSInternalInconsistencyException format: @"Invalid image orientation"];
}
UIGraphicsBeginImageContext(bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
if (orient == UIImageOrientationRight || orient == UIImageOrientationLeft) {
CGContextScaleCTM(context, -scaleRatio, scaleRatio);
CGContextTranslateCTM(context, -height, 0);
}
else {
CGContextScaleCTM(context, scaleRatio, -scaleRatio);
CGContextTranslateCTM(context, 0, -height);
}
CGContextConcatCTM(context, transform);
CGContextDrawImage(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, width, height), imgRef);
UIImage *tempImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return tempImage;
}
#pragma mark Getting And Writing Pixels
-(float*) getColorForPixel: (NSUInteger)xCoordinate andForY: (NSUInteger)yCoordinate
{
int byteIndex = (bytesPerRow * yCoordinate) + xCoordinate * bytesPerPixel;
float *colorToReturn = malloc(3 * sizeof(float)); // three float components, not three bytes
colorToReturn[0] = bitmap[byteIndex] / 255.f; //Red
colorToReturn[1] = bitmap[byteIndex + 1] / 255.f; //Green
colorToReturn[2] = bitmap[byteIndex + 2] / 255.f; //Blue
return colorToReturn;
}
-(void) writeColor: (float*)colorToWrite forPixelAtX: (NSUInteger)xCoordinate andY: (NSUInteger)yCoordinate
{
int byteIndex = (bytesPerRow * yCoordinate) + xCoordinate * bytesPerPixel;
bitmap[byteIndex] = (unsigned char) ( colorToWrite[0] * 255);
bitmap[byteIndex + 1] = (unsigned char) ( colorToWrite[1] * 255);
bitmap[byteIndex + 2] = (unsigned char) ( colorToWrite[2] * 255);
}
#pragma mark Bitmap
-(float) getAverageBrightnessForImage: (UIImage*)img
{
UIImage *tempImage = [self scaleAndRotateImage: img withMaxResolution: 100];
unsigned char *rawData = [self getBytesForImage: tempImage];
double aBrightness = 0;
for(int y = 0; y < tempImage.size.height; y++) {
for(int x = 0; x < tempImage.size.width; x++) {
int byteIndex = ( (tempImage.size.width * y) + x) * bytesPerPixel;
aBrightness += (rawData[byteIndex] + rawData[byteIndex + 1] + rawData[byteIndex + 2]);
}
}
free(rawData);
aBrightness /= 3.0f;
aBrightness /= 255.0f;
aBrightness /= tempImage.size.width * tempImage.size.height;
return aBrightness;
}
-(unsigned char*) getBytesForImage: (UIImage*)pImage
{
CGImageRef image = [pImage CGImage];
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
bytesPerRow = bytesPerPixel * width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * bytesPerPixel);
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
return rawData;
}
-(void) loadWithImage: (UIImage*)img
{
averageBrightness = [self getAverageBrightnessForImage: img];
currentImage = [self scaleAndRotateImage: img withMaxResolution: 0];
imgWidth = currentImage.size.width;
imgHeight = currentImage.size.height;
bitmap = [self getBytesForImage: currentImage];
bytesPerRow = bytesPerPixel * imgWidth;
}
-(void) processImage
{
// now convert to grayscale
for(int y = 0; y < imgHeight; y++) {
for(int x = 0; x < imgWidth; x++) {
float *currentColor = [self getColorForPixel: x andForY: y];
//Grayscale
float averageColor = (currentColor[0] + currentColor[1] + currentColor[2]) / 3.0f;
averageColor += 0.5f - averageBrightness;
if (averageColor > 1.0f) averageColor = 1.0f;
currentColor[0] = averageColor;
currentColor[1] = averageColor;
currentColor[2] = averageColor;
[self writeColor: currentColor forPixelAtX: x andY: y];
free(currentColor);
}
}
}
-(UIImage*) getProcessedImage
{
// create a UIImage
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(bitmap, imgWidth, imgHeight, bitsPerComponent, bytesPerRow, colorSpace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
CGImageRef image = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *resultUIImage = [UIImage imageWithCGImage: image];
CGImageRelease(image);
return resultUIImage;
}
-(void) releaseCurrentImage
{
free(bitmap);
}
@end
And I convert an image into grayscale in the following way:
[ [PixelProcessing sharedPixelProcessing] loadWithImage: imageToDisplay.image];
[ [PixelProcessing sharedPixelProcessing] processImage];
imageToDisplay.image = [ [PixelProcessing sharedPixelProcessing] getProcessedImage];
[ [PixelProcessing sharedPixelProcessing] releaseCurrentImage];
Why is it so slow? Is there a way to get float values for the RGB color components of a pixel? How can I optimize it?
Thanks.
You could let Quartz do the grayscale conversion for you:
CGImageRef grayscaleCGImageFromCGImage(CGImageRef inputImage) {
size_t width = CGImageGetWidth(inputImage);
size_t height = CGImageGetHeight(inputImage);
// Create a gray scale context and render the input image into that
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
4*width, colorspace, kCGBitmapByteOrderDefault);
CGContextDrawImage(context, CGRectMake(0,0, width,height), inputImage);
// Get an image representation of the grayscale context which the input
// was rendered into.
CGImageRef outputImage = CGBitmapContextCreateImage(context);
// Cleanup
CGContextRelease(context);
CGColorSpaceRelease(colorspace);
return (CGImageRef)[(id)outputImage autorelease];
}
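A possible call site (my own usage sketch; sourceImage is a placeholder name):

// the CGImageRef returned above is already autoreleased by the function itself
UIImage *grayUIImage = [UIImage imageWithCGImage:grayscaleCGImageFromCGImage(sourceImage.CGImage)];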
I had to solve this same problem recently and came up with the following code (it also preserves alpha):
@implementation UIImage (grayscale)
typedef enum {
ALPHA = 0,
BLUE = 1,
GREEN = 2,
RED = 3
} PIXELS;
- (UIImage *)convertToGrayscale {
CGSize size = [self size];
int width = size.width;
int height = size.height;
// the pixels will be painted to this array
uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
// clear the pixels so any transparency is preserved
memset(pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a context with RGBA pixels
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// paint the bitmap to our context which will fill in the pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [self CGImage]);
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
// convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
// set the pixels to gray
rgbaPixel[RED] = gray;
rgbaPixel[GREEN] = gray;
rgbaPixel[BLUE] = gray;
}
}
// create a new CGImageRef from our context with the modified pixels
CGImageRef image = CGBitmapContextCreateImage(context);
// we're done with the context, color space, and pixels
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);
// make a new UIImage to return
UIImage *resultUIImage = [UIImage imageWithCGImage:image];
// we're done with image now too
CGImageRelease(image);
return resultUIImage;
}
@end
The way to track down your speed issue is to profile using Shark. (In Xcode, Run->Start with Performance Tool->Shark.) However, in this case I feel reasonably certain that the primary problems are the per-pixel malloc/free, the floating-point arithmetic, and the two method calls in the inner processing loop.
To avoid the malloc/free, you want to be doing something like this instead:
- (void) getColorForPixelX:(NSUInteger)x y:(NSUInteger)y pixel:(float[3])pixel
{ /* Write stuff to pixel[0], pixel[1], pixel[2] */ }
// To call:
float pixel[3];
for (each pixel)
{
[self getColorForPixelX:x y:y pixel:pixel];
// Do stuff
}
The second likely source of slowdown is the use of floating point – or rather, the cost of converting to and from floating point. For the filter you’re writing, working in integer maths is simple – add the integer pixel values and divide by 255*3. (Incidentally, that’s a pretty bad way to convert to greyscale. See http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale.)
Method calls are fast for what they are, but still pretty slow compared to the basic arithmetic of the filter. (For some numbers, see this article.) The easy way to eliminate the method calls is to replace them with inline functions.
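Putting those two suggestions together, here is a minimal sketch (my illustration of the advice, not the answerer's code) of an integer-only inner loop with the per-pixel work in an inline function; it uses the RGBA byte order from getBytesForImage above and omits the brightness offset from the original processImage:

// integer approximation of 0.30*R + 0.59*G + 0.11*B, scaled by 256 (77 + 151 + 28 = 256)
static inline unsigned char grayValue(const unsigned char *p)
{
    return (unsigned char)((77 * p[0] + 151 * p[1] + 28 * p[2]) >> 8);
}
// inner loop: no method calls, no malloc/free, no floating point
for (int i = 0; i < imgWidth * imgHeight; i++) {
    unsigned char *p = bitmap + i * bytesPerPixel;
    unsigned char g = grayValue(p);
    p[0] = g;
    p[1] = g;
    p[2] = g;
}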
Have you tried using the luminosity blend mode? A white image blended with your original with that blend mode seems to produce grayscale.
These two, foreground image on the right and background image on the left:
[Image: http://developer.apple.com/iphone/library/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Art/both_images.jpg]
blended with kCGBlendModeLuminosity results in this:
[Image: http://developer.apple.com/iphone/library/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Art/luminosity_image.jpg]
For details, see: Drawing with Quartz 2D: Using Blend Modes With Images
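A minimal sketch of that blend-mode approach (my reading of the documentation linked above, not code from the answer): fill a context with white, then draw the image with kCGBlendModeLuminosity, so the result takes its luminance from the image and its (zero) saturation from the white backdrop:

UIGraphicsBeginImageContext(image.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect rect = CGRectMake(0, 0, image.size.width, image.size.height);
// the white backdrop supplies the hue and saturation (none)
CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
CGContextFillRect(ctx, rect);
// the image supplies the luminance
[image drawInRect:rect blendMode:kCGBlendModeLuminosity alpha:1.0f];
UIImage *grayImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();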