RGB in image processing in iPhone app - iphone

I am doing image processing in my app. I get the pixel color from the image at the point the user touches and then apply it back to the image. My code gets the pixel color, but it turns the whole image blue and uses that blue in the processing. I am stuck and don't know what is going wrong in my code. Could you please help me?
My code is:
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint coordinateTouch = [touch locationInView:[self view]];//where image was tapped
if (value == YES) {
self.lastColor = [self getPixelColorAtLocation:coordinateTouch];
value =NO;
}
NSLog(@"color %@",lastColor);
//[pickedColorDelegate pickedColor:(UIColor*)self.lastColor];
ListPoint point;
point.x = coordinateTouch.x;
point.y = coordinateTouch.y;
button = [UIButton buttonWithType:UIButtonTypeCustom];
button.backgroundColor = [UIColor whiteColor];
button.frame = CGRectMake(coordinateTouch.x-5, coordinateTouch.y-5, 2, 2);
//[descImageView addSubview:button];
[bgImage addSubview:button];
// Make image blurred on ImageView
if(bgImage.image)
{
CGImageRef imgRef = [[bgImage image] CGImage];
CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imgRef));
const unsigned char *sourceBytesPtr = CFDataGetBytePtr(dataRef);
int len = CFDataGetLength(dataRef);
NSLog(@"length = %d, width = %d, height = %d, bytes per row = %d, bits per pixel = %d",
len, CGImageGetWidth(imgRef), CGImageGetHeight(imgRef), CGImageGetBytesPerRow(imgRef), CGImageGetBitsPerPixel(imgRef));
int width = CGImageGetWidth(imgRef);
int height = CGImageGetHeight(imgRef);
int widthstep = CGImageGetBytesPerRow(imgRef);
unsigned char *pixelData = (unsigned char *)malloc(len);
double wFrame = bgImage.frame.size.width;
double hFrame = bgImage.frame.size.height;
Image_Correction(sourceBytesPtr, pixelData, widthstep, width, height, wFrame, hFrame, point);
NSLog(@"finish");
NSData *data = [NSData dataWithBytes:pixelData length:len];
NSLog(@"1");
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
NSLog(@"2");
CGColorSpaceRef colorSpace2 = CGColorSpaceCreateDeviceRGB();
NSLog(@"3");
CGImageRef imageRef = CGImageCreate(width, height, 8, CGImageGetBitsPerPixel(imgRef), CGImageGetBytesPerRow(imgRef),
colorSpace2,kCGImageAlphaNoneSkipFirst|kCGBitmapByteOrder32Host,
provider, NULL, false, kCGRenderingIntentDefault);
NSLog(@"Start processing image");
UIImage *ret = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace2);
CFRelease(dataRef);
free(pixelData);
NSLog(@"4");
bgImage.image = ret;
[button removeFromSuperview];
}
}
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
alpha = data[offset];
red = data[offset+1];
green = data[offset+2];
blue = data[offset+3];
NSLog(@"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef) inImage {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
void * bitmapData;
int bitmapByteCount;
int bitmapBytesPerRow;
// Get image width, height. We'll use the entire image.
size_t pixelsWide = CGImageGetWidth(inImage);
size_t pixelsHigh = CGImageGetHeight(inImage);
// Declare the number of bytes per row. Each pixel in the bitmap in this
// example is represented by 4 bytes; 8 bits each of red, green, blue, and
// alpha.
bitmapBytesPerRow = (pixelsWide * 4);
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
// Use the generic RGB color space.
colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL)
{
fprintf(stderr, "Error allocating color space\n");
return NULL;
}
// Allocate memory for image data. This is the destination in memory
// where any drawing to the bitmap context will be rendered.
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL)
{
fprintf (stderr, "Memory not allocated!");
CGColorSpaceRelease( colorSpace );
return NULL;
}
// Create the bitmap context. We want pre-multiplied ARGB, 8-bits
// per component. Regardless of what the source image format is
// (CMYK, Grayscale, and so on) it will be converted over to the format
// specified here by CGBitmapContextCreate.
context = CGBitmapContextCreate (bitmapData,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst);
if (context == NULL)
{
free (bitmapData);
fprintf (stderr, "Context not created!");
}
// Make sure and release colorspace before returning
CGColorSpaceRelease( colorSpace );
return context;
}
int Image_Correction(const unsigned char *pImage, unsigned char *rImage, int widthstep, int nW, int nH, double wFrame, double hFrame, ListPoint point)
{
double ratiox = nW/wFrame;
double ratioy = nH/hFrame;
double newW, newH, ratio;
if(ratioy > ratiox)
{
newH = hFrame;
newW = nW/ratioy;
ratio = ratioy;
}
else
{
newH = nH/ratiox;
newW = wFrame;
ratio = ratiox;
}
NSLog(@"new W, H = %f, %f", newW, newH);
NSLog(@"ratiox = %f; ratioy = %f", ratiox, ratioy);
ListPoint real_point;
real_point.x = (point.x - wFrame/2 + newW/2) *ratio;
real_point.y = (point.y - hFrame/2 + newH/2)*ratio;
for(int h = 0; h < nH; h++)
{
for(int k = 0; k < nW; k++)
{
rImage[h*widthstep + k*4 + 0] = pImage[h*widthstep + k*4 + 0];
rImage[h*widthstep + k*4 + 1] = pImage[h*widthstep + k*4 + 1];
rImage[h*widthstep + k*4 + 2] = pImage[h*widthstep + k*4 + 2];
rImage[h*widthstep + k*4 + 3] = pImage[h*widthstep + k*4 + 3];
}
}
// Modify this parameter to change Blurred area
int iBlurredArea = 6;
for(int h = -ratio*iBlurredArea; h <= ratio*iBlurredArea; h++)
for(int k = -ratio*iBlurredArea; k <= ratio*iBlurredArea; k++)
{
int tempx = real_point.x + k;
int tempy = real_point.y + h;
if (((tempy - 3) > 0)&&((tempy+3) >0)&&((tempx - 3) > 0)&&((tempx + 3) >0))
{
double sumR = 0;
double sumG = 0;
double sumB = 0;
double sumA = 0;
double count = 0;
for(int m = -3; m < 4; m++)
for (int n = -3; n < 4; n++)
{
sumR = red;//sumR + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 0];
sumG = green;//sumG + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 1];
sumB = blue;//sumB + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 2];
sumA = alpha;//sumA + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 3];
count++;
}
rImage[tempy*widthstep + tempx*4 + 0] = red;//sumR/count;
rImage[tempy*widthstep + tempx*4 + 1] = green;//sumG/count;
rImage[tempy*widthstep + tempx*4 + 2] = blue;//sumB/count;
rImage[tempy*widthstep + tempx*4 + 3] = alpha;//sumA/count;
}
}
return 1;
}
Thanks for looking at this code. I think I am doing something wrong.
Thanks in advance.

This seems to work for me.
UIImage* modifyImage(UIImage* image)
{
size_t w = image.size.width;
size_t h = image.size.height;
CGFloat scale = image.scale;
// Create the bitmap context
UIGraphicsBeginImageContext(CGSizeMake(w*scale, h*scale));
CGContextRef context = UIGraphicsGetCurrentContext();
// NOTE you may have to setup a rotation here based on image.imageOrientation
// but I didn't need to consider that for my images.
CGContextScaleCTM(context, scale, scale);
[image drawInRect:CGRectMake(0, 0, w, h)];
unsigned char* data = CGBitmapContextGetData (context);
if (data != NULL) {
size_t height = CGBitmapContextGetHeight(context);
size_t width = CGBitmapContextGetWidth(context);
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
// Not sure why the color info is in BGRA format
// Look at CGBitmapContextGetBitmapInfo(context) if this format isn't working for you
int offset = y * bytesPerRow + x * 4;
unsigned char* blue = &data[offset];
unsigned char* green = &data[offset+1];
unsigned char* red = &data[offset+2];
unsigned char* alpha = &data[offset+3];
int newRed = ...; // color calculation code here
int newGreen = ...;
int newBlue = ...;
// Assuming you don't want to change the original alpha value.
*red = (newRed * *alpha)/255;
*green = (newGreen * *alpha)/255;
*blue = (newBlue * *alpha)/255;
}
}
}
CGImageRef newImage = CGBitmapContextCreateImage(context);
UIImage *done = [UIImage imageWithCGImage:newImage scale:image.scale orientation: image.imageOrientation];
CGImageRelease(newImage);
UIGraphicsEndImageContext();
return done;
}
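The "color calculation code here" placeholders are where your own per-pixel formula goes. As a minimal sketch, assuming you simply want to paint every pixel with a previously picked color held in pickedRed, pickedGreen and pickedBlue (hypothetical 0-255 ints, for example taken from the UIColor returned by getPixelColorAtLocation above), the three lines could be:
int newRed = pickedRed;       // hypothetical picked color, red component, 0-255
int newGreen = pickedGreen;   // green component
int newBlue = pickedBlue;     // blue component
The writes below the placeholders then re-multiply by the original alpha, so transparency is preserved.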

Related

iPhone paint bucket

I am working on implementing a flood-fill paint-bucket tool in an iPhone app and am having some trouble with it. The user is able to draw and I would like the paint bucket to allow them to tap a spot and fill everything of that color that is connected.
Here's my idea:
1) Start at the point the user selects
2) Save points checked to a NSMutableArray so they don't get re-checked
3) If the pixel color at the current point is the same as the original clicked point, save to an array to be changed later
4) If the pixel color at the current point is different than the original, return. (boundary)
5) Once finished scanning, go through the array of pixels to change and set them to the new color.
But this is not working out so far. Any help or knowledge of how to do this would be greatly appreciated! Here is my code.
-(void)flood:(int)x:(int)y
{
//NSLog(@"Flood %i %i", x, y);
CGPoint point = CGPointMake(x, y);
NSValue *value = [NSValue valueWithCGPoint:point];
//Don't repeat checked pixels
if([self.checkedFloodPixels containsObject:value])
{
return;
}
else
{
//If not checked, mark as checked
[self.checkedFloodPixels addObject:value];
//Make sure in bounds
if([self isOutOfBounds:x:y] || [self reachedStopColor:x:y])
{
return;
}
//Go to adjacent points
[self flood:x+1:y];
[self flood:x-1:y];
[self flood:x:y+1];
[self flood:x:y-1];
}
}
- (BOOL)isOutOfBounds:(int)x:(int)y
{
BOOL outOfBounds;
if(y > self.drawImage.frame.origin.y && y < (self.drawImage.frame.origin.y + self.drawImage.frame.size.height))
{
if(x > self.drawImage.frame.origin.x && x < (self.drawImage.frame.origin.x + self.drawImage.frame.size.width))
{
outOfBounds = NO;
}
else
{
outOfBounds = YES;
}
}
else
{
outOfBounds = YES;
}
if(outOfBounds)
NSLog(@"Out of bounds");
return outOfBounds;
}
- (BOOL)reachedStopColor:(int)x:(int)y
{
CFDataRef theData = CGDataProviderCopyData(CGImageGetDataProvider(self.drawImage.image.CGImage));
const UInt8 *pixelData = CFDataGetBytePtr(theData);
int red = 0;
int green = 1;
int blue = 2;
//RGB for point being checked
float newPointR;
float newPointG;
float newPointB;
//RGB for point initially clicked
float oldPointR;
float oldPointG;
float oldPointB;
int index;
BOOL reachedStopColor = NO;
//Format oldPoint RGB - pixels are every 4 bytes so round to 4
index = lastPoint.x * lastPoint.y;
if(index % 4 != 0)
{
index -= 2;
index /= 4;
index *= 4;
}
//Get into 0.0 - 1.0 value
oldPointR = pixelData[index + red];
oldPointG = pixelData[index + green];
oldPointB = pixelData[index + blue];
oldPointR /= 255.0;
oldPointG /= 255.0;
oldPointB /= 255.0;
oldPointR *= 1000;
oldPointG *= 1000;
oldPointB *= 1000;
int oldR = oldPointR;
int oldG = oldPointG;
int oldB = oldPointB;
oldPointR = oldR / 1000.0;
oldPointG = oldG / 1000.0;
oldPointB = oldB / 1000.0;
//Format newPoint RGB
index = x*y;
if(index % 4 != 0)
{
index -= 2;
index /= 4;
index *= 4;
}
newPointR = pixelData[index + red];
newPointG = pixelData[index + green];
newPointB = pixelData[index + blue];
newPointR /= 255.0;
newPointG /= 255.0;
newPointB /= 255.0;
newPointR *= 1000;
newPointG *= 1000;
newPointB *= 1000;
int newR = newPointR;
int newG = newPointG;
int newB = newPointB;
newPointR = newR / 1000.0;
newPointG = newG / 1000.0;
newPointB = newB / 1000.0;
//Check if different color
if(newPointR < (oldPointR - 0.02f) || newPointR > (oldPointR + 0.02f))
{
if(newPointG < (oldPointG - 0.02f) || newPointG > (oldPointG + 0.02f))
{
if(newPointB < (oldPointB - 0.02f) || newPointB > (oldPointB + 0.02f))
{
reachedStopColor = YES;
NSLog(@"Different Color");
}
else
{
NSLog(@"Same Color3");
NSNumber *num = [NSNumber numberWithInt:index];
[self.pixelsToChange addObject:num];
}
}
else
{
NSLog(@"Same Color2");
NSNumber *num = [NSNumber numberWithInt:index];
[self.pixelsToChange addObject:num];
}
}
else
{
NSLog(@"Same Color1");
NSNumber *num = [NSNumber numberWithInt:index];
[self.pixelsToChange addObject:num];
}
CFRelease(theData);
if(reachedStopColor)
NSLog(@"Reached stop color");
return reachedStopColor;
}
-(void)fillAll
{
CGContextRef ctx;
CGImageRef imageRef = self.drawImage.image.CGImage;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
int red = 0;
int green = 1;
int blue = 2;
int index;
NSNumber *num;
for(int i = 0; i < [self.pixelsToChange count]; i++)
{
num = [self.pixelsToChange objectAtIndex:i];
index = [num intValue];
rawData[index + red] = (char)[[GameManager sharedManager] RValue];
rawData[index + green] = (char)[[GameManager sharedManager] GValue];
rawData[index + blue] = (char)[[GameManager sharedManager] BValue];
}
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedLast );
imageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
self.drawImage.image = rawImage;
free(rawData);
}
So I found this (I know the question might be irrelevant now, but for people who are still looking for something like this it's not):
To get the color at a pixel from the current context (modified code from here):
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGContextRef cgctx = UIGraphicsGetCurrentContext();
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
int offset = 4*((ContextWidth*round(point.y))+round(point.x)); // I don't know how to get ContextWidth from the current context, so I keep it as an instance variable in my code
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// Note: unlike the version above, this buffer belongs to the current graphics context, so it should not be free()d here
return color;
}
and the fill algorithm: is here
This is what I'm using, but the fill itself is quite slow compared to CGPath drawing styles. Though if you're rendering offscreen and/or you fill it dynamically like this, it looks kind of cool.

Filling a portion of an image with color

I'm doing an iPhone painting app. I would like to paint a particular portion of an image: using a touch event I find the pixel data, and then I use that pixel data to paint the remaining part of the image. Using the touch event, I got the pixel value for the portion:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
startPoint = [[touches anyObject] locationInView:imageView];
NSLog(@"the value of the index is %i",index);
NSString* imageName=[NSString stringWithFormat:@"roof%i", index];
tempColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:imageName]];
lastPoint = [touch locationInView:self.view];
lastPoint.y -= 20;
NSString *tx = [[NSString alloc]initWithFormat:@"%.0f", lastPoint.x];
NSString *ty = [[NSString alloc]initWithFormat:@"%.0f", lastPoint.y];
NSLog(@"the value of the string is %@ and %@",tx,ty);
int ix=[tx intValue];
int iy=[ty intValue];
int z=1;
NSLog(@"the value of the string is %i and %i and z is %i",ix,iy,z);
[self getRGBAsFromImage:newImage atX:ix andY:iy count1:1];
}
Here I'm getting the pixel data for the image:
-(NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count1:(int)count
{
NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
// First get the image into your data buffer
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
for (int ii = 0 ; ii < count ; ++ii)
{
CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
byteIndex += 4;
NSLog(@"the value of the rgb red is %f",red);
UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
[result addObject:acolor];
}
free(rawData);
return result;
}
Using a tolerance value I'm getting the data. Here I'm struggling to paint the remaining section.
- (BOOL)cgHitTestForArea:(CGRect)area {
BOOL hit = FALSE;
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
float areaFloat = ((area.size.width * 4) * area.size.height);
unsigned char *bitmapData = malloc(areaFloat);
CGContextRef context = CGBitmapContextCreate(bitmapData,
area.size.width,
area.size.height,
8,
4*area.size.width,
colorspace,
kCGImageAlphaPremultipliedLast);
CGContextTranslateCTM(context, -area.origin.x, -area.origin.y);
[self.layer renderInContext:context];
//Seek through all pixels.
float transparentPixels = 0;
for (int i = 0; i < (int)areaFloat ; i += 4) {
//Count each transparent pixel.
if (((bitmapData[i + 3] * 1.0) / 255.0) == 0) {
transparentPixels += 1;
}
}
free(bitmapData);
//Calculate the percentage of transparent pixels.
float hitTolerance = [[self.layer valueForKey:@"hitTolerance"]floatValue];
NSLog(@"Apixels: %f hitPercent: %f",transparentPixels,(transparentPixels/areaFloat));
if ((transparentPixels/(areaFloat/4)) < hitTolerance) {
hit = TRUE;
}
CGColorSpaceRelease(colorspace);
CGContextRelease(context);
return hit;
}
Any suggestions to make this work please?
First, turning a bitmap image into an NSArray of UIColor objects is nuts. Way, way too much overhead. Work with a pixelbuffer instead. Learn how to use pointers.
http://en.wikipedia.org/wiki/Flood_fill#The_algorithm provides a good overview of a few simple techniques for performing a flood-fill — using either recursion or queues.
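To make the pixel-buffer approach concrete, here is a minimal sketch (not the poster's code) of a queue-based 4-way flood fill over a raw RGBA8888 buffer. rawData, width and height are assumed to come from a bitmap context like the one created in fillAll above, and the match is an exact color comparison for simplicity:
#include <stdlib.h>
typedef struct { int x, y; } FillPoint;
// Replaces the 4-connected region around (startX, startY) that exactly matches
// the start pixel's color. rawData holds width*height RGBA pixels, 4 bytes each.
static void floodFillBuffer(unsigned char *rawData, int width, int height,
                            int startX, int startY,
                            unsigned char newR, unsigned char newG,
                            unsigned char newB, unsigned char newA)
{
    if (startX < 0 || startY < 0 || startX >= width || startY >= height) return;
    int s = (startY * width + startX) * 4;
    unsigned char oldR = rawData[s], oldG = rawData[s+1], oldB = rawData[s+2], oldA = rawData[s+3];
    if (oldR == newR && oldG == newG && oldB == newB && oldA == newA) return; // nothing to do
    FillPoint *queue = malloc((size_t)width * height * sizeof(FillPoint));
    int head = 0, tail = 0;
    // A pixel is recolored the moment it is enqueued, so it can never be enqueued twice.
    rawData[s] = newR; rawData[s+1] = newG; rawData[s+2] = newB; rawData[s+3] = newA;
    queue[tail++] = (FillPoint){startX, startY};
    while (head < tail) {
        FillPoint p = queue[head++];
        int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int n = 0; n < 4; n++) {
            int nx = p.x + dx[n], ny = p.y + dy[n];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
            int i = (ny * width + nx) * 4;
            if (rawData[i] != oldR || rawData[i+1] != oldG ||
                rawData[i+2] != oldB || rawData[i+3] != oldA) continue; // boundary or already filled
            rawData[i] = newR; rawData[i+1] = newG; rawData[i+2] = newB; rawData[i+3] = newA;
            queue[tail++] = (FillPoint){nx, ny};
        }
    }
    free(queue);
}
Once the buffer has been modified you can rebuild the UIImage with CGBitmapContextCreateImage, exactly as fillAll already does.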

How to filter colors in runtime ? - iphone

I have an image where each pixel has one of the three primary colors (RGBA).
And I have to change this image at run-time by replacing each color channel with another (a run-time filter).
I have the .PVR and the corresponding glTexture2D, but how can I filter / change colors at run-time?
I cannot use OpenGL ES 2.0
But I can use Cocos2D and OpenGL ES 1.x
:(
It's not trivial, but it could be useful to you:
// Converts the image to black and white
+ (UIImage *)convertImageToGrayScale:(UIImage *)i {
CGSize size = [i size];
int width = size.width;
int height = size.height;
// the pixels will be painted to this array
uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
// clear the pixels so any transparency is preserved
memset(pixels, 0, width * height * sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// create a context with RGBA pixels
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// paint the bitmap to our context which will fill in the pixels array
CGContextDrawImage(context, CGRectMake(0, 0, width, height), [i CGImage]);
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
// convert to grayscale using recommended method: http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
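// NOTE: RED, GREEN and BLUE here are assumed to be channel-offset macros (e.g. #define RED 0, GREEN 1, BLUE 2) defined elsewhere in the original project; adjust them to match your context's byte order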
uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
// set the pixels to gray
rgbaPixel[RED] = gray;
rgbaPixel[GREEN] = gray;
rgbaPixel[BLUE] = gray;
}
}
// create a new CGImageRef from our context with the modified pixels
CGImageRef image = CGBitmapContextCreateImage(context);
// we're done with the context, color space, and pixels
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
free(pixels);
// make a new UIImage to return
UIImage *resultUIImage = [UIImage imageWithCGImage:image];
// we're done with image now too
CGImageRelease(image);
return resultUIImage;
}
This converts the UIImage received into a grayscale image.
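A minimal usage sketch (assuming the method lives on a hypothetical utility class called ImageUtils):
UIImage *gray = [ImageUtils convertImageToGrayScale:originalImage];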
This is the code I used to manipulate an RGBA picture:
-(UIImage*)convertGrayScaleImageRedImage:(CGImageRef)inImage{
CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 * m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
int length = CFDataGetLength(m_DataRef);
for (int i=0; i<length; i+=4){
int r = i;
int g = i+1;
int b = i+2;
//int a = i+3;//alpha
NSLog(@"r=%i, g=%i, b=%i",m_PixelBuf[r], m_PixelBuf[g], m_PixelBuf[b]);
m_PixelBuf[r] = 1;
m_PixelBuf[g] = 0;
m_PixelBuf[b] = 0;
//m_PixelBuf[a] = m_PixelBuf[a];//alpha
}
CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
CGImageGetWidth(inImage),
CGImageGetHeight(inImage),
CGImageGetBitsPerComponent(inImage),
CGImageGetBytesPerRow(inImage),
CGImageGetColorSpace(inImage),
CGImageGetBitmapInfo(inImage)
);
CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
CFRelease(m_DataRef); // release the copied pixel data to avoid a leak
UIImage *redImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
return redImage;
}
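Coming back to the OpenGL ES 1.x part of the question: one option, sketched here only as an illustration, is to do the channel manipulation on the CPU and re-upload the pixels to the existing texture. This assumes you can get an uncompressed RGBA copy of the texture data (a compressed PVRTC file will not give you that directly); textureId and the buffer layout are placeholders:
#import <OpenGLES/ES1/gl.h>
// Swaps the red and blue channels of an RGBA8888 buffer and uploads it
// to an existing GL texture. pixels holds width*height*4 bytes.
void swapRedBlueAndUpload(unsigned char *pixels, int width, int height, GLuint textureId)
{
    for (int i = 0; i < width * height * 4; i += 4) {
        unsigned char r = pixels[i];
        pixels[i] = pixels[i + 2];   // red takes the old blue value
        pixels[i + 2] = r;           // blue takes the old red value
    }
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
A pure GPU route on ES 1.1 would be texture-environment combiners (glTexEnv with GL_COMBINE), but the CPU approach above is the simplest to reason about.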

Box Blur CGImageRef

I'm attempting to implement a simple box blur, but am having issues. Namely, instead of blurring the image it seems to be converting each pixel to either red, green, blue, or black. I'm not sure exactly what is going on. Any help would be appreciated.
Please note, this code is simply a first pass to get it working, I'm not worried about speed... yet.
- (CGImageRef)blur:(CGImageRef)base radius:(int)radius {
CGContextRef ctx;
CGImageRef imageRef = base;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width *4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
char red = 0;
char green = 0;
char blue = 0;
for (int widthIndex = radius; widthIndex < width - radius; widthIndex++) {
for (int heightIndex = radius; heightIndex < height - radius; heightIndex++) {
red = 0;
green = 0;
blue = 0;
for (int radiusY = -radius; radiusY <= radius; ++radiusY) {
for (int radiusX = -radius; radiusX <= radius; ++radiusX) {
int xIndex = widthIndex + radiusX;
int yIndex = heightIndex + radiusY;
int index = ((yIndex * width) + xIndex) * 4;
red += rawData[index];
green += rawData[index + 1];
blue += rawData[index + 2];
}
}
int currentIndex = ((heightIndex * width) + widthIndex) * 4;
int divisor = (radius * 2) + 1;
divisor *= divisor;
int finalRed = red / divisor;
int finalGreen = green / divisor;
int finalBlue = blue / divisor;
rawData[currentIndex] = (char)finalRed;
rawData[currentIndex + 1] = (char)finalGreen;
rawData[currentIndex + 2] = (char)finalBlue;
}
}
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedLast );
imageRef = CGBitmapContextCreateImage (ctx);
CGContextRelease(ctx);
free(rawData);
[(id)imageRef autorelease];
return imageRef;
}
The char colors should be declared as int. A char is only 8 bits, so summing the pixel values across the blur window overflows it almost immediately, which is why the output collapses to a few saturated colors.
int red = 0;
int green = 0;
int blue = 0;
I just have a comment. I used this code and it works well and fast. The only thing is that it leaves a border of width radius that is not blurred, so I modified it a bit to blur the whole area using a mirroring technique near the edges.
To do this, add the following lines:
// Mirroring the part between the edge and the blur radius
if (xIndex<0) xIndex=-xIndex-1; else if (xIndex>width-1) xIndex=2*width-xIndex-1;
if (yIndex<0) yIndex=-yIndex-1; else if (yIndex>height-1) yIndex=2*height-yIndex-1;
after these lines:
int xIndex = widthIndex + radiusX;
int yIndex = heightIndex + radiusY;
And then replace the headers of for loops:
for (int widthIndex = radius; widthIndex < width - radius; widthIndex++) {
for (int heightIndex = radius; heightIndex < height - radius; heightIndex++) {
with these headers:
for (int widthIndex = 0; widthIndex < width; widthIndex++) {
for (int heightIndex = 0; heightIndex < height; heightIndex++) {
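Putting the three changes together, the modified blur loop would look roughly like this (a sketch only; variable names follow the original answer, and width/height are cast to int so the signed mirror arithmetic behaves):
for (int widthIndex = 0; widthIndex < (int)width; widthIndex++) {
    for (int heightIndex = 0; heightIndex < (int)height; heightIndex++) {
        int red = 0, green = 0, blue = 0;
        for (int radiusY = -radius; radiusY <= radius; ++radiusY) {
            for (int radiusX = -radius; radiusX <= radius; ++radiusX) {
                int xIndex = widthIndex + radiusX;
                int yIndex = heightIndex + radiusY;
                // Mirror coordinates that fall outside the image back inside it
                if (xIndex < 0) xIndex = -xIndex - 1;
                else if (xIndex > (int)width - 1) xIndex = 2 * (int)width - xIndex - 1;
                if (yIndex < 0) yIndex = -yIndex - 1;
                else if (yIndex > (int)height - 1) yIndex = 2 * (int)height - yIndex - 1;
                int index = ((yIndex * (int)width) + xIndex) * 4;
                red += rawData[index];
                green += rawData[index + 1];
                blue += rawData[index + 2];
            }
        }
        int currentIndex = ((heightIndex * (int)width) + widthIndex) * 4;
        int divisor = (radius * 2 + 1) * (radius * 2 + 1);
        rawData[currentIndex]     = (unsigned char)(red / divisor);
        rawData[currentIndex + 1] = (unsigned char)(green / divisor);
        rawData[currentIndex + 2] = (unsigned char)(blue / divisor);
    }
}
Like the original, this reads and writes the same buffer, so already-blurred pixels feed into later averages; copy rawData into a second source buffer first if you want an exact box blur.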

Pixel color replacement working fine on simulator but not on iPhone

I am dealing with an iPhone application which chooses a specific colored pixel from the image and replaces it with some other color shade I choose from a color menu. The problem is that the code I have implemented works fine on the simulator, but when I run the same code on the device, the image's pixels are replaced only with white. I am pasting the code below; if anyone has a clue how to implement this, it would be a great help.
// This is data buffer details of whole image's pixels
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,bitsPerComponent, bytesPerRow, colorSpace,kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
//Second image into buffer
//This is data buffer of the pixel to be replaced that i am selecting from image
CGImageRef imageRefs = [selectedColorImage.image CGImage];
NSUInteger widths = CGImageGetWidth(imageRefs);
NSUInteger heights = CGImageGetHeight(imageRefs);
CGColorSpaceRef colorSpaces = CGColorSpaceCreateDeviceRGB();
unsigned char *rawDatas = malloc(heights * widths * 4);
NSUInteger bytesPerPixels = 4;
NSUInteger bytesPerRows = bytesPerPixels * widths;
NSUInteger bitsPerComponents = 8;
CGContextRef contexts = CGBitmapContextCreate(rawDatas, widths, heights,bitsPerComponents, bytesPerRows, colorSpaces,kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpaces);
CGContextDrawImage(contexts, CGRectMake(0, 0, widths, heights), imageRefs);
CGContextRelease(contexts);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
int byteIndexs = (bytesPerRows * 0) + 0 * bytesPerPixels;
int i=0;
for (int ii = 0 ; ii < count ; ++ii)
{
CGFloat redb = (rawData[byteIndex] * 1.0) / 255.0;
CGFloat greenb = (rawData[byteIndex + 1] * 1.0) / 255.0;
CGFloat blueb = (rawData[byteIndex + 2] * 1.0) / 255.0;
CGFloat alphab = (rawData[byteIndex + 3] * 1.0) / 255.0;
CGFloat reds = (rawDatas[byteIndexs] * 1.0) / 255.0;
CGFloat greens = (rawDatas[byteIndexs + 1] * 1.0) / 255.0;
CGFloat blues = (rawDatas[byteIndexs + 2] * 1.0) / 255.0;
CGFloat alphas = (rawDatas[byteIndexs + 3] * 1.0) / 255.0;
/* CGColorRef ref=[[shapeButton backgroundColor] CGColor];
switch(CGColorSpaceGetModel(CGColorGetColorSpace(ref)))
{
case kCGColorSpaceModelMonochrome:
// For grayscale colors, the luminance is the color value
//luminance = components[0];
break;
case kCGColorSpaceModelRGB:
// For RGB colors, we calculate luminance assuming sRGB Primaries as per
// http://en.wikipedia.org/wiki/Luminance_(relative)
//luminance = 0.2126 * components[0] + 0.7152 * components[1] + 0.0722 * components[2];
break;
case kCGColorSpaceModelCMYK:
case kCGColorSpaceModelLab:
case kCGColorSpaceModelDeviceN:
case kCGColorSpaceModelIndexed:
break;
case kCGColorSpaceModelUnknown:
break;
case kCGColorSpaceModelPattern:
break;
//default:
// We don't implement support for non-gray, non-rgb colors at this time.
// Since our only consumer is colorSortByLuminance, we return a larger than normal
// value to ensure that these types of colors are sorted to the end of the list.
//luminance = 2.0;
}
int numComponents = CGColorGetNumberOfComponents(ref);
if (numComponents == 4)
{
const CGFloat *components = CGColorGetComponents(ref);
CGFloat red = components[0];
CGFloat green = components[1];
CGFloat blue = components[2];
CGFloat alpha = components[3];
}*/
if((redb==red/255.0f)&&(greenb=green/255.0f)&&(blueb=blue/255.0f)&&(alphab==alpha/255.0f))
{
if(button_tag ==1)
{
NSLog(@"color matching %d",i);//done
i++;
rawData[byteIndex]=(reds*255.0)/1.0+999999999;
rawData[byteIndex+1]=(blues*255.0)/1.0+000000800.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+999999000.0;
rawData[byteIndex+3]=(alphas*255.0)/255.0;
}
if(button_tag ==2)
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+899989989;
rawData[byteIndex+1]=(blues*255.0)/1.0+898998999.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+899989900.0;
rawData[byteIndex+3]=(alphas*255.0)/255.0;
}
if(button_tag ==3)
{
NSLog(@"color matching %d",i);//done
i++;
rawData[byteIndex]=(reds*255.0)/1.0+999999999;
rawData[byteIndex+1]=(blues*255.0)/1.0+990000800.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+999999000.0;
rawData[byteIndex+3]=(alphas*255.0)/10.0;
}
if(button_tag ==4)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+50.0;
rawData[byteIndex+1]=(blues*255.0)/1.0+50.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+50.0;
rawData[byteIndex+3]=(alphas*0.0)/0.0;
}
if(button_tag ==5)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+255000000.0;
rawData[byteIndex+1]=(blues*255.0)/1.0+000000000.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+000000255.0;
rawData[byteIndex+3]=(alphas*255.0)/10.0;
}
if(button_tag ==6)// done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+0.0;
rawData[byteIndex+1]=(blues*255.0)/1.0+1.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+0.0;
rawData[byteIndex+3]=(alphas*255.0)/0.0;
}
if(button_tag ==7)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+255255255.0f;
rawData[byteIndex+1]=(blues*255.0)/1.0+000255255.0f;
rawData[byteIndex+2]=(greens*255.0)/1.0+255255255.0f;
rawData[byteIndex+3]=(alphas*255.0)/255.0;
}
if(button_tag ==8)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+200.0;
rawData[byteIndex+1]=(blues*255.0)/1.0+200.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+200.0;
rawData[byteIndex+3]=(alphas*0.0)/0.0;
}
if(button_tag ==9)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+1.0;
rawData[byteIndex+1]=(blues*255.0)/1.0+0.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+0.0;
rawData[byteIndex+3]=(alphas*255.0)/0.0;
}
if(button_tag ==10)//done
{
i++;
rawData[byteIndex]=(reds*255.0)/1.0+999999999;
rawData[byteIndex+1]=(blues*255.0)/1.0+990000888.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+999999000.0;
rawData[byteIndex+3]=(alphas*255.0)/10.0;
}
if(button_tag ==11)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+900000000;
rawData[byteIndex+1]=(blues*255.0)/1.0+990000800.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+999999000.0;
rawData[byteIndex+3]=(alphas*255.0)/255.0;
}
if(button_tag ==12)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+150.0f;
rawData[byteIndex+1]=(blues*255.0)/1.0+150.0f;
rawData[byteIndex+2]=(greens*255.0)/1.0+150.0f;
rawData[byteIndex+3]=(alphas*255.0)/255.0f;
}
if(button_tag ==13)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+0.0;
rawData[byteIndex+1]=(blues*255.0)/1.0+0.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+0.0;
rawData[byteIndex+3]=(alphas*255.0)/0.0;
}
if(button_tag ==14)//done
{
NSLog(@"color matching %d",i);
i++;
rawData[byteIndex]=(reds*255.0)/1.0+10.0;
rawData[byteIndex+1]=(blues*255.0)/1.0+10.0;
rawData[byteIndex+2]=(greens*255.0)/1.0+10.0;
rawData[byteIndex+3]=(alphas*255.0)/1.0;
}
}
byteIndex += 4;
//byteIndexs += 4;
}
CGSize size=CGSizeMake(320, 330);
UIImage *newImage= [self imageWithBits:rawData withSize:size];
[backgroundImage setImage:newImage];
[self HideLoadingIndicator];
//free(rawData);
//free(rawDatas);
Thanks in advance :)
While I can't tell you exactly what your problem is, I'm noticing a lot of code similar to:
rawData[byteIndex]=(reds*255.0)/1.0+999999999;
which when writing to a single byte is going to max out the value (255). Doing that four times over an RGBA8 pixel will render it white and opaque.
Comparing floats will always be inexact. Use the original integer values instead; they are easier to work with and will be quicker.
edit: something similar to this should work to replace white with transparent:
uint32_t *pixels = (pointer to image data);
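// e.g. the buffer obtained from CGBitmapContextGetData() on an RGBA bitmap context you created yourself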
uint32_t sourceColor = 0xffffffff;
uint32_t destColor = 0x00000000;
size_t pixelCount = width * height;
for (int i = 0; i < pixelCount; i++)
if (pixels[i] == sourceColor)
pixels[i] = destColor;