Box Blur CGImageRef - iPhone

I'm attempting to implement a simple box blur, but am having issues. Namely, instead of blurring the image, it seems to convert each pixel to either red, green, blue, or black. I'm not sure exactly what is going on. Any help would be appreciated.
Please note, this code is simply a first pass to get it working; I'm not worried about speed... yet.
- (CGImageRef)blur:(CGImageRef)base radius:(int)radius {
    CGContextRef ctx;
    CGImageRef imageRef = base;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    char red = 0;
    char green = 0;
    char blue = 0;
    for (int widthIndex = radius; widthIndex < width - radius; widthIndex++) {
        for (int heightIndex = radius; heightIndex < height - radius; heightIndex++) {
            red = 0;
            green = 0;
            blue = 0;
            for (int radiusY = -radius; radiusY <= radius; ++radiusY) {
                for (int radiusX = -radius; radiusX <= radius; ++radiusX) {
                    int xIndex = widthIndex + radiusX;
                    int yIndex = heightIndex + radiusY;
                    int index = ((yIndex * width) + xIndex) * 4;
                    red += rawData[index];
                    green += rawData[index + 1];
                    blue += rawData[index + 2];
                }
            }
            int currentIndex = ((heightIndex * width) + widthIndex) * 4;
            int divisor = (radius * 2) + 1;
            divisor *= divisor;
            int finalRed = red / divisor;
            int finalGreen = green / divisor;
            int finalBlue = blue / divisor;
            rawData[currentIndex] = (char)finalRed;
            rawData[currentIndex + 1] = (char)finalGreen;
            rawData[currentIndex + 2] = (char)finalBlue;
        }
    }

    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                CGImageGetBytesPerRow(imageRef),
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    free(rawData);
    [(id)imageRef autorelease];
    return imageRef;
}

The color accumulators are declared as char, which is only 8 bits wide and overflows as soon as a few pixel values are summed; that overflow is what produces the red/green/blue/black pixels. Declare them as int:
int red = 0;
int green = 0;
int blue = 0;
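To see why the 8-bit accumulator fails, here is a tiny illustration in plain C (the function names are mine, for demonstration only); regardless of whether char is signed or unsigned on the target, a sum of 300 wraps modulo 256:

```c
/* Accumulate two mid-range pixel values the way the blur loop does. */
static int sumWithChar(void) {
    char total = 0;      /* 8-bit accumulator, as in the question's code */
    total += 200;
    total += 100;        /* the true sum, 300, cannot fit in 8 bits */
    return (int)total;   /* wraps to 44 (300 mod 256) on typical targets */
}

static int sumWithInt(void) {
    int total = 0;
    total += 200;
    total += 100;
    return total;        /* 300, as intended */
}
```

And a box-blur window sums (2r+1)^2 such values, so the overflow happens on almost every pixel.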

I just have a comment. I used this code and it really works well and fast. The only thing is that it leaves a border of width radius that is not blurred. So I have modified it a bit so that it blurs the whole area, using a mirroring technique near the edges.
To do this, one needs to put the following lines:
// Mirror the region between the edge and the blur radius
if (xIndex < 0) xIndex = -xIndex - 1; else if (xIndex > width - 1) xIndex = 2*width - xIndex - 1;
if (yIndex < 0) yIndex = -yIndex - 1; else if (yIndex > height - 1) yIndex = 2*height - yIndex - 1;
after these lines:
int xIndex = widthIndex + radiusX;
int yIndex = heightIndex + radiusY;
And then replace the headers of the for loops:
for (int widthIndex = radius; widthIndex < width - radius; widthIndex++) {
for (int heightIndex = radius; heightIndex < height - radius; heightIndex++) {
with these headers:
for (int widthIndex = 0; widthIndex < width; widthIndex++) {
for (int heightIndex = 0; heightIndex < height; heightIndex++) {
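Combined, the modified loop can be sketched in plain C over a raw RGBA buffer (the function name `boxBlurRGBA` and the separate output buffer are my additions, not from the answer above; writing to a second buffer also avoids a subtle flaw in the original, which blurs in place and so re-reads already-blurred pixels):

```c
#include <stdlib.h>
#include <string.h>

/* Naive box blur over an RGBA byte buffer, mirroring reads past the edges. */
static void boxBlurRGBA(const unsigned char *src, unsigned char *dst,
                        int width, int height, int radius) {
    int divisor = (2 * radius + 1) * (2 * radius + 1);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int red = 0, green = 0, blue = 0;   /* int, not char */
            for (int dy = -radius; dy <= radius; dy++) {
                for (int dx = -radius; dx <= radius; dx++) {
                    int sx = x + dx, sy = y + dy;
                    /* Mirror coordinates that fall outside the image. */
                    if (sx < 0) sx = -sx - 1;
                    else if (sx > width - 1) sx = 2 * width - sx - 1;
                    if (sy < 0) sy = -sy - 1;
                    else if (sy > height - 1) sy = 2 * height - sy - 1;
                    const unsigned char *p = src + (sy * width + sx) * 4;
                    red += p[0]; green += p[1]; blue += p[2];
                }
            }
            unsigned char *q = dst + (y * width + x) * 4;
            q[0] = (unsigned char)(red / divisor);
            q[1] = (unsigned char)(green / divisor);
            q[2] = (unsigned char)(blue / divisor);
            q[3] = src[(y * width + x) * 4 + 3];   /* keep alpha */
        }
    }
}
```

A quick sanity check: blurring a constant-color image must return the same constant.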

Related

iPhone paint bucket

I am working on implementing a flood-fill paint-bucket tool in an iPhone app and am having some trouble with it. The user is able to draw and I would like the paint bucket to allow them to tap a spot and fill everything of that color that is connected.
Here's my idea:
1) Start at the point the user selects
2) Save checked points to an NSMutableArray so they don't get re-checked
3) If the pixel color at the current point is the same as the originally clicked point, save it to an array to be changed later
4) If the pixel color at the current point is different from the original, return (boundary)
5) Once finished scanning, go through the array of pixels to change and set them to the new color.
But this is not working out so far. Any help or knowledge of how to do this would be greatly appreciated! Here is my code.
-(void)flood:(int)x:(int)y
{
    //NSLog(@"Flood %i %i", x, y);
    CGPoint point = CGPointMake(x, y);
    NSValue *value = [NSValue valueWithCGPoint:point];
    // Don't repeat checked pixels
    if ([self.checkedFloodPixels containsObject:value])
    {
        return;
    }
    else
    {
        // If not checked, mark as checked
        [self.checkedFloodPixels addObject:value];
        // Make sure in bounds
        if ([self isOutOfBounds:x:y] || [self reachedStopColor:x:y])
        {
            return;
        }
        // Go to adjacent points
        [self flood:x+1:y];
        [self flood:x-1:y];
        [self flood:x:y+1];
        [self flood:x:y-1];
    }
}
- (BOOL)isOutOfBounds:(int)x:(int)y
{
    BOOL outOfBounds;
    if (y > self.drawImage.frame.origin.y && y < (self.drawImage.frame.origin.y + self.drawImage.frame.size.height))
    {
        if (x > self.drawImage.frame.origin.x && x < (self.drawImage.frame.origin.x + self.drawImage.frame.size.width))
        {
            outOfBounds = NO;
        }
        else
        {
            outOfBounds = YES;
        }
    }
    else
    {
        outOfBounds = YES;
    }
    if (outOfBounds)
        NSLog(@"Out of bounds");
    return outOfBounds;
}
- (BOOL)reachedStopColor:(int)x:(int)y
{
    CFDataRef theData = CGDataProviderCopyData(CGImageGetDataProvider(self.drawImage.image.CGImage));
    const UInt8 *pixelData = CFDataGetBytePtr(theData);
    int red = 0;
    int green = 1;
    int blue = 2;
    // RGB for point being checked
    float newPointR;
    float newPointG;
    float newPointB;
    // RGB for point initially clicked
    float oldPointR;
    float oldPointG;
    float oldPointB;
    int index;
    BOOL reachedStopColor = NO;

    // Format oldPoint RGB - pixels are every 4 bytes so round to 4
    index = lastPoint.x * lastPoint.y;
    if (index % 4 != 0)
    {
        index -= 2;
        index /= 4;
        index *= 4;
    }
    // Get into 0.0 - 1.0 value
    oldPointR = pixelData[index + red];
    oldPointG = pixelData[index + green];
    oldPointB = pixelData[index + blue];
    oldPointR /= 255.0;
    oldPointG /= 255.0;
    oldPointB /= 255.0;
    oldPointR *= 1000;
    oldPointG *= 1000;
    oldPointB *= 1000;
    int oldR = oldPointR;
    int oldG = oldPointG;
    int oldB = oldPointB;
    oldPointR = oldR / 1000.0;
    oldPointG = oldG / 1000.0;
    oldPointB = oldB / 1000.0;

    // Format newPoint RGB
    index = x*y;
    if (index % 4 != 0)
    {
        index -= 2;
        index /= 4;
        index *= 4;
    }
    newPointR = pixelData[index + red];
    newPointG = pixelData[index + green];
    newPointB = pixelData[index + blue];
    newPointR /= 255.0;
    newPointG /= 255.0;
    newPointB /= 255.0;
    newPointR *= 1000;
    newPointG *= 1000;
    newPointB *= 1000;
    int newR = newPointR;
    int newG = newPointG;
    int newB = newPointB;
    newPointR = newR / 1000.0;
    newPointG = newG / 1000.0;
    newPointB = newB / 1000.0;

    // Check if different color
    if (newPointR < (oldPointR - 0.02f) || newPointR > (oldPointR + 0.02f))
    {
        if (newPointG < (oldPointG - 0.02f) || newPointG > (oldPointG + 0.02f))
        {
            if (newPointB < (oldPointB - 0.02f) || newPointB > (oldPointB + 0.02f))
            {
                reachedStopColor = YES;
                NSLog(@"Different Color");
            }
            else
            {
                NSLog(@"Same Color3");
                NSNumber *num = [NSNumber numberWithInt:index];
                [self.pixelsToChange addObject:num];
            }
        }
        else
        {
            NSLog(@"Same Color2");
            NSNumber *num = [NSNumber numberWithInt:index];
            [self.pixelsToChange addObject:num];
        }
    }
    else
    {
        NSLog(@"Same Color1");
        NSNumber *num = [NSNumber numberWithInt:index];
        [self.pixelsToChange addObject:num];
    }
    CFRelease(theData);
    if (reachedStopColor)
        NSLog(@"Reached stop color");
    return reachedStopColor;
}
-(void)fillAll
{
    CGContextRef ctx;
    CGImageRef imageRef = self.drawImage.image.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    int red = 0;
    int green = 1;
    int blue = 2;
    int index;
    NSNumber *num;
    for (int i = 0; i < [self.pixelsToChange count]; i++)
    {
        num = [self.pixelsToChange objectAtIndex:i];
        index = [num intValue];
        rawData[index + red] = (char)[[GameManager sharedManager] RValue];
        rawData[index + green] = (char)[[GameManager sharedManager] GValue];
        rawData[index + blue] = (char)[[GameManager sharedManager] BValue];
    }

    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                CGImageGetBytesPerRow(imageRef),
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(ctx);
    self.drawImage.image = rawImage;
    free(rawData);
}
So I found this (I know the question might be irrelevant by now, but for people who are still looking for something like this, it's not):
To get the color at a pixel from a context (modified code from here):
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color;
    CGContextRef cgctx = UIGraphicsGetCurrentContext();
    unsigned char* data = CGBitmapContextGetData (cgctx);
    if (data != NULL) {
        // I don't know how to get ContextWidth from the current context, so I keep it as an instance variable
        int offset = 4*((ContextWidth*round(point.y))+round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }
    if (data) { free(data); }
    return color;
}
And the fill algorithm is here.
This is what I'm using, but the fill itself is quite slow compared to CGPath drawing styles. Though if you're rendering offscreen and/or filling dynamically like this, it looks kind of cool:
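Part of the slowness (and fragility) of the recursive version above is structural: each pixel costs an Objective-C message send and a stack frame, and the NSMutableArray membership check is linear. An iterative version with an explicit queue and a flat "visited by marking" scheme avoids both. A sketch in plain C over a map of color indices (the function and buffer names are mine, not from the question):

```c
#include <stdlib.h>

/* Iterative 4-way flood fill on a width*height map of color indices.
   Replaces the connected region containing (x, y) with newColor. */
static void floodFill(unsigned char *map, int width, int height,
                      int x, int y, unsigned char newColor) {
    unsigned char target = map[y * width + x];
    if (target == newColor) return;              /* nothing to do */

    /* Each pixel is enqueued at most once (marked on enqueue),
       so width*height coordinate pairs are always enough. */
    int *queue = malloc((size_t)width * height * 2 * sizeof(int));
    int head = 0, tail = 0;
    queue[tail * 2] = x; queue[tail * 2 + 1] = y; tail++;
    map[y * width + x] = newColor;               /* mark on enqueue */

    while (head < tail) {
        int cx = queue[head * 2], cy = queue[head * 2 + 1]; head++;
        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        for (int i = 0; i < 4; i++) {
            int nx = cx + dx[i], ny = cy + dy[i];
            if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
            if (map[ny * width + nx] != target) continue;
            map[ny * width + nx] = newColor;
            queue[tail * 2] = nx; queue[tail * 2 + 1] = ny; tail++;
        }
    }
    free(queue);
}
```

The same shape works on an RGBA buffer by comparing 4-byte pixels (with a tolerance) instead of single bytes.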

Blur a UIImage on change of slider

I have tried Gaussian blur and checked all the questions on Stack Overflow, but none of them solved my crash issue. Please help: is there any other way to blur an image besides the Gaussian blur algorithm? My image is 768x1024, so the loops iterate 2*1024*768 times, and this is not feasible.
CGContextRef NYXImageCreateARGBBitmapContext(const size_t width, const size_t height, const size_t bytesPerRow)
{
    /// Use the generic RGB color space
    /// We avoid the NULL check because CGColorSpaceRelease() NULL checks the value anyway, and worst case scenario = fail to create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    /// Create the bitmap context, we want pre-multiplied ARGB, 8-bits per component
    CGContextRef bmContext = CGBitmapContextCreate(NULL, width, height, 8/*Bits per component*/, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    return bmContext;
}
-(UIImage*)blurredImageUsingGaussFactor:(NSUInteger)gaussFactor andPixelRadius:(NSUInteger)pixelRadius
{
    CGImageRef cgImage = self.CGImage;
    const size_t originalWidth = CGImageGetWidth(cgImage);
    const size_t originalHeight = CGImageGetHeight(cgImage);
    const size_t bytesPerRow = originalWidth * 4;
    CGContextRef context = NYXImageCreateARGBBitmapContext(originalWidth, originalHeight, bytesPerRow);
    if (!context)
        return nil;

    unsigned char *srcData, *destData, *finalData;
    size_t width = CGBitmapContextGetWidth(context);
    size_t height = CGBitmapContextGetHeight(context);
    size_t bpr = CGBitmapContextGetBytesPerRow(context);
    size_t bpp = CGBitmapContextGetBitsPerPixel(context) / 8;
    CGRect rect = {{0.0f, 0.0f}, {width, height}};
    CGContextDrawImage(context, rect, cgImage);
    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    srcData = (unsigned char*)CGBitmapContextGetData(context);
    if (srcData != NULL)
    {
        size_t dataSize = bpr * height;
        finalData = malloc(dataSize);
        destData = malloc(dataSize);
        memcpy(finalData, srcData, dataSize);
        memcpy(destData, srcData, dataSize);

        int sums[gaussFactor];
        size_t i, /*x, y,*/ k;
        int gauss_sum = 0;
        size_t radius = pixelRadius * 2 + 1;
        int *gauss_fact = malloc(radius * sizeof(int));
        for (i = 0; i < pixelRadius; i++)
        {
            gauss_fact[i] = 1 + (gaussFactor * i);
            gauss_fact[radius - (i + 1)] = 1 + (gaussFactor * i);
            gauss_sum += (gauss_fact[i] + gauss_fact[radius - (i + 1)]);
        }
        gauss_fact[(radius - 1) / 2] = 1 + (gaussFactor * pixelRadius);
        gauss_sum += gauss_fact[(radius - 1) / 2];

        unsigned char *p1, *p2, *p3;
        for (size_t y = 0; y < height; y++)
        {
            for (size_t x = 0; x < width; x++)
            {
                p1 = srcData + bpp * (y * width + x);
                p2 = destData + bpp * (y * width + x);
                for (i = 0; i < gaussFactor; i++)
                    sums[i] = 0;
                for (k = 0; k < radius; k++)
                {
                    if ((y - ((radius - 1) >> 1) + k) < height)
                        p1 = srcData + bpp * ((y - ((radius - 1) >> 1) + k) * width + x);
                    else
                        p1 = srcData + bpp * (y * width + x);
                    for (i = 0; i < bpp; i++)
                        sums[i] += p1[i] * gauss_fact[k];
                }
                for (i = 0; i < bpp; i++)
                    p2[i] = sums[i] / gauss_sum;
            }
        }
        for (size_t y = 0; y < height; y++)
        {
            for (size_t x = 0; x < width; x++)
            {
                p2 = destData + bpp * (y * width + x);
                p3 = finalData + bpp * (y * width + x);
                for (i = 0; i < gaussFactor; i++)
                    sums[i] = 0;
                for (k = 0; k < radius; k++)
                {
                    if ((x - ((radius - 1) >> 1) + k) < width)
                        p1 = srcData + bpp * (y * width + (x - ((radius - 1) >> 1) + k));
                    else
                        p1 = srcData + bpp * (y * width + x);
                    for (i = 0; i < bpp; i++)
                        sums[i] += p2[i] * gauss_fact[k];
                }
                for (i = 0; i < bpp; i++)
                {
                    p3[i] = sums[i] / gauss_sum;
                }
            }
        }
    }
    size_t bitmapByteCount = bpr * height;
    /// Here was the problem: srcData had been passed instead of destData. The rest was fine.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, destData, bitmapByteCount, NULL);
    CGImageRef blurredImageRef = CGImageCreate(width, height, CGBitmapContextGetBitsPerComponent(context), CGBitmapContextGetBitsPerPixel(context), CGBitmapContextGetBytesPerRow(context), CGBitmapContextGetColorSpace(context), CGBitmapContextGetBitmapInfo(context), dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
    CGContextRelease(context);
    if (destData)
        free(destData);
    if (finalData)
        free(finalData);
    UIImage* retUIImage = [UIImage imageWithCGImage:blurredImageRef];
    CGImageRelease(blurredImageRef);
    return retUIImage;
}
I've made a small StackBlur extension to UIImage. StackBlur is close to a Gaussian blur but much faster.
Check it out at: https://github.com/tomsoft1/StackBluriOS
A tiny note: there's a typo in that README; it says "normalized" where it should say "normalize".
Not sure about how to blur an image, but this may help if you want to blur a UIImageView (or any view):
UIView *myView = self.theImageView;
CALayer *layer = [myView layer];
[layer setRasterizationScale:0.25];
[layer setShouldRasterize:YES];
You can undo it by setting the rasterization scale back to 1:
[layer setRasterizationScale:1.0];
UPDATE:
The Apple sample code below includes a blur/sharpen effect (using OpenGL).
See if it helps: http://developer.apple.com/library/ios/#samplecode/GLImageProcessing/Introduction/Intro.html
What you probably want is a box blur algorithm. It is about 10 times faster than a Gaussian blur and produces nice results. I have the code running on Android; I just haven't ported it to iOS yet. Here is the source.
It should only take about 10 minutes to port to iOS. The functions will work as-is; you just need to access the image bytes (as you are doing in the source code above) and feed them to the functions.
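The speed advantage of a box blur comes from separability: a (2r+1)x(2r+1) box is just a horizontal pass followed by a vertical pass, which drops the per-pixel cost from O(r^2) to O(r), and a running sum drops it further to O(1) per pixel. A single-channel sketch of the running-sum horizontal pass (names are mine, not from the linked Android source; edges are handled by clamping reads to the row bounds):

```c
/* One horizontal box-blur pass over a single channel, using a running sum. */
static void boxBlurRow(const unsigned char *src, unsigned char *dst,
                       int width, int radius) {
    int window = 2 * radius + 1;
    int sum = 0;
    /* Prime the window around x = 0 with clamped reads. */
    for (int i = -radius; i <= radius; i++) {
        int xi = i < 0 ? 0 : (i >= width ? width - 1 : i);
        sum += src[xi];
    }
    for (int x = 0; x < width; x++) {
        dst[x] = (unsigned char)(sum / window);
        /* Slide the window: drop the leftmost sample, add the next one. */
        int drop = x - radius, add = x + radius + 1;
        if (drop < 0) drop = 0;
        if (add > width - 1) add = width - 1;
        sum += src[add] - src[drop];
    }
}
```

Running the same pass over columns (or transposing the buffer between passes) completes the 2-D blur; repeating the box blur three times closely approximates a Gaussian.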

RGB in image processing in iPhone app

I am doing image processing in my app. I get the pixel color from the image and apply it to the image by touching. My code gets the pixel color, but it changes the whole image to blue and applies that blue in the image processing. I am stuck and don't know what is going wrong in my code. Could you please help me?
My code is:
-(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint coordinateTouch = [touch locationInView:[self view]]; // where image was tapped
    if (value == YES) {
        self.lastColor = [self getPixelColorAtLocation:coordinateTouch];
        value = NO;
    }
    NSLog(@"color %@", lastColor);
    //[pickedColorDelegate pickedColor:(UIColor*)self.lastColor];

    ListPoint point;
    point.x = coordinateTouch.x;
    point.y = coordinateTouch.y;

    button = [UIButton buttonWithType:UIButtonTypeCustom];
    button.backgroundColor = [UIColor whiteColor];
    button.frame = CGRectMake(coordinateTouch.x-5, coordinateTouch.y-5, 2, 2);
    //[descImageView addSubview:button];
    [bgImage addSubview:button];

    // Make image blurred on ImageView
    if (bgImage.image)
    {
        CGImageRef imgRef = [[bgImage image] CGImage];
        CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imgRef));
        const unsigned char *sourceBytesPtr = CFDataGetBytePtr(dataRef);
        int len = CFDataGetLength(dataRef);
        NSLog(@"length = %d, width = %d, height = %d, bytes per row = %d, bits per pixel = %d",
              len, CGImageGetWidth(imgRef), CGImageGetHeight(imgRef), CGImageGetBytesPerRow(imgRef), CGImageGetBitsPerPixel(imgRef));
        int width = CGImageGetWidth(imgRef);
        int height = CGImageGetHeight(imgRef);
        int widthstep = CGImageGetBytesPerRow(imgRef);
        unsigned char *pixelData = (unsigned char *)malloc(len);
        double wFrame = bgImage.frame.size.width;
        double hFrame = bgImage.frame.size.height;
        Image_Correction(sourceBytesPtr, pixelData, widthstep, width, height, wFrame, hFrame, point);
        NSLog(@"finish");
        NSData *data = [NSData dataWithBytes:pixelData length:len];
        NSLog(@"1");
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
        NSLog(@"2");
        CGColorSpaceRef colorSpace2 = CGColorSpaceCreateDeviceRGB();
        NSLog(@"3");
        CGImageRef imageRef = CGImageCreate(width, height, 8, CGImageGetBitsPerPixel(imgRef), CGImageGetBytesPerRow(imgRef),
                                            colorSpace2, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Host,
                                            provider, NULL, false, kCGRenderingIntentDefault);
        NSLog(@"Start processing image");
        UIImage *ret = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace2);
        CFRelease(dataRef);
        free(pixelData);
        NSLog(@"4");
        bgImage.image = ret;
        [button removeFromSuperview];
    }
}
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
    UIColor* color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char* data = CGBitmapContextGetData (cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y.
        // 4 for 4 bytes of data per pixel, w is width of one row of data.
        int offset = 4*((w*round(point.y))+round(point.x));
        alpha = data[offset];
        red = data[offset+1];
        green = data[offset+2];
        blue = data[offset+3];
        NSLog(@"offset: %i colors: RGB A %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }

    // When finished, release the context
    CGContextRelease(cgctx);
    // Free image data memory for the context
    if (data) { free(data); }

    return color;
}
- (CGContextRef) createARGBBitmapContextFromImage:(CGImageRef)inImage {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Get image width, height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8-bits
    // per component. Regardless of what the source image format is
    // (CMYK, Grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure to release the colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
int Image_Correction(const unsigned char *pImage, unsigned char *rImage, int widthstep, int nW, int nH, double wFrame, double hFrame, ListPoint point)
{
    double ratiox = nW/wFrame;
    double ratioy = nH/hFrame;
    double newW, newH, ratio;
    if (ratioy > ratiox)
    {
        newH = hFrame;
        newW = nW/ratioy;
        ratio = ratioy;
    }
    else
    {
        newH = nH/ratiox;
        newW = wFrame;
        ratio = ratiox;
    }
    NSLog(@"new H, W = %f, %f", newW, newH);
    NSLog(@"ratiox = %f; ratioy = %f", ratiox, ratioy);

    ListPoint real_point;
    real_point.x = (point.x - wFrame/2 + newW/2)*ratio;
    real_point.y = (point.y - hFrame/2 + newH/2)*ratio;

    for (int h = 0; h < nH; h++)
    {
        for (int k = 0; k < nW; k++)
        {
            rImage[h*widthstep + k*4 + 0] = pImage[h*widthstep + k*4 + 0];
            rImage[h*widthstep + k*4 + 1] = pImage[h*widthstep + k*4 + 1];
            rImage[h*widthstep + k*4 + 2] = pImage[h*widthstep + k*4 + 2];
            rImage[h*widthstep + k*4 + 3] = pImage[h*widthstep + k*4 + 3];
        }
    }

    // Modify this parameter to change the blurred area
    int iBlurredArea = 6;
    for (int h = -ratio*iBlurredArea; h <= ratio*iBlurredArea; h++)
        for (int k = -ratio*iBlurredArea; k <= ratio*iBlurredArea; k++)
        {
            int tempx = real_point.x + k;
            int tempy = real_point.y + h;
            if (((tempy - 3) > 0) && ((tempy + 3) > 0) && ((tempx - 3) > 0) && ((tempx + 3) > 0))
            {
                double sumR = 0;
                double sumG = 0;
                double sumB = 0;
                double sumA = 0;
                double count = 0;
                for (int m = -3; m < 4; m++)
                    for (int n = -3; n < 4; n++)
                    {
                        sumR = red;   //sumR + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 0];
                        sumG = green; //sumG + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 1];
                        sumB = blue;  //sumB + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 2];
                        sumA = alpha; //sumA + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 3];
                        count++;
                    }
                rImage[tempy*widthstep + tempx*4 + 0] = red;   //sumR/count;
                rImage[tempy*widthstep + tempx*4 + 1] = green; //sumG/count;
                rImage[tempy*widthstep + tempx*4 + 2] = blue;  //sumB/count;
                rImage[tempy*widthstep + tempx*4 + 3] = alpha; //sumA/count;
            }
        }
    return 1;
}
Thanks for looking at this code; I think I am doing something wrong.
Thanks in advance.
This seems to work for me.
UIImage* modifyImage(UIImage* image)
{
    size_t w = image.size.width;
    size_t h = image.size.height;
    CGFloat scale = image.scale;

    // Create the bitmap context
    UIGraphicsBeginImageContext(CGSizeMake(w*scale, h*scale));
    CGContextRef context = UIGraphicsGetCurrentContext();
    // NOTE you may have to set up a rotation here based on image.imageOrientation,
    // but I didn't need to consider that for my images.
    CGContextScaleCTM(context, scale, scale);
    [image drawInRect:CGRectMake(0, 0, w, h)];

    unsigned char* data = CGBitmapContextGetData(context);
    if (data != NULL) {
        size_t height = CGBitmapContextGetHeight(context);
        size_t width = CGBitmapContextGetWidth(context);
        size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Not sure why the color info is in BGRA format;
                // look at CGBitmapContextGetBitmapInfo(context) if this format isn't working for you
                int offset = y * bytesPerRow + x * 4;
                unsigned char* blue = &data[offset];
                unsigned char* green = &data[offset+1];
                unsigned char* red = &data[offset+2];
                unsigned char* alpha = &data[offset+3];
                int newRed = ...;   // color calculation code here
                int newGreen = ...;
                int newBlue = ...;
                // Assuming you don't want to change the original alpha value:
                *red = (newRed * *alpha)/255;
                *green = (newGreen * *alpha)/255;
                *blue = (newBlue * *alpha)/255;
            }
        }
    }
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *done = [UIImage imageWithCGImage:newImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(newImage);
    UIGraphicsEndImageContext();
    return done;
}
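The per-pixel offset arithmetic in the loop above is the detail worth internalizing: because rows may be padded, the byte offset of (x, y) is y * bytesPerRow + x * bytesPerPixel, not (y * width + x) * bytesPerPixel. A small check with hypothetical numbers of my own (a 3-pixel-wide RGBA buffer padded to 16 bytes per row, so 12 bytes used and 4 bytes of padding):

```c
#include <stddef.h>

/* Byte offset of pixel (x, y) in a buffer whose rows are bytesPerRow wide. */
static size_t pixelOffset(size_t x, size_t y, size_t bytesPerRow, size_t bytesPerPixel) {
    return y * bytesPerRow + x * bytesPerPixel;
}

/* The tempting but wrong formula, which ignores any row padding. */
static size_t naiveOffset(size_t x, size_t y, size_t width, size_t bytesPerPixel) {
    return (y * width + x) * bytesPerPixel;
}
```

For pixel (2, 1): pixelOffset(2, 1, 16, 4) = 24, but naiveOffset(2, 1, 3, 4) = 20, off by exactly the 4 padding bytes; with more rows the error accumulates and produces the diagonal "sheared" garbage several posts in this thread describe.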

How can I choose a UIImage for glReadPixels?

I have a view with 50 images. Images may overlap. I want to (for example) select image number 33 and find a pixel color. How can I do this?
P.S. I use glReadPixels.
Unless you want to kill time applying the image as a texture first, you wouldn't use glReadPixels. You can do this directly from the UIImage instead:
void pixelExamine( UIImage *image )
{
    CGImageRef colorImage = image.CGImage;
    int width = CGImageGetWidth(colorImage);
    int height = CGImageGetHeight(colorImage);
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), colorImage);

    int x;
    int y;
    uint8_t *rgbaPixel;
    for (y = 0; y < height; y++)
    {
        rgbaPixel = (uint8_t *) &pixels[y * width];
        for (x = 0; x < width; x++, rgbaPixel += 4)
        {
            // rgbaPixel[0] = ALPHA 0..255
            // rgbaPixel[3] = RED   0..255
            // rgbaPixel[2] = GREEN 0..255
            // rgbaPixel[1] = BLUE  0..255
        }
    }
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
}

iPhone Image Processing--matrix convolution

I am implementing a matrix convolution blur on the iPhone. The following code converts the UIImage supplied as an argument of the blur function into a CGImageRef, and then stores the RGBA values in a standard C char array.
CGImageRef imageRef = imgRef.CGImage;
int width = imgRef.size.width;
int height = imgRef.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *pixels = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
Then the pixel values stored in the pixels array are convolved and stored in another array.
unsigned char *results = malloc((height) * (width) * 4);
Finally, these processed pixel values are turned back into a CGImageRef, converted to a UIImage, and returned at the end of the function with the following code.
context = CGBitmapContextCreate(results, width, height,
                                bitsPerComponent, bytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef finalImage = CGBitmapContextCreateImage(context);
UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
CGImageRelease(finalImage);
NSLog(@"edges found");
free(results);
free(pixels);
CGColorSpaceRelease(colorSpace);
return newImage;
This works perfectly, once. But when the image is put through the filter a second time, very odd pixel values are returned, representing input pixel values that don't exist. Is there any reason why this should work the first time but not afterward? Below is the entirety of the function.
-(UIImage*) blur:(UIImage*)imgRef {
    CGImageRef imageRef = imgRef.CGImage;
    int width = imgRef.size.width;
    int height = imgRef.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *pixels = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    height = imgRef.size.height;
    width = imgRef.size.width;

    float matrix[] = {1,1,1,1,1,1,1,1,1};
    float divisor = 9;
    float shift = 0;
    unsigned char *results = malloc(height * width * 4);

    for (int y = 1; y < height; y++) {
        for (int x = 1; x < width; x++) {
            float red = 0;
            float green = 0;
            float blue = 0;
            int multiplier = 1;
            if (y > 0 && x > 0) {
                int index = (y-1)*width + x;
                red = matrix[0]*multiplier*(float)pixels[4*(index-1)] +
                      matrix[1]*multiplier*(float)pixels[4*(index)] +
                      matrix[2]*multiplier*(float)pixels[4*(index+1)];
                green = matrix[0]*multiplier*(float)pixels[4*(index-1)+1] +
                        matrix[1]*multiplier*(float)pixels[4*(index)+1] +
                        matrix[2]*multiplier*(float)pixels[4*(index+1)+1];
                blue = matrix[0]*multiplier*(float)pixels[4*(index-1)+2] +
                       matrix[1]*multiplier*(float)pixels[4*(index)+2] +
                       matrix[2]*multiplier*(float)pixels[4*(index+1)+2];

                index = (y)*width + x;
                red = red + matrix[3]*multiplier*(float)pixels[4*(index-1)] +
                      matrix[4]*multiplier*(float)pixels[4*(index)] +
                      matrix[5]*multiplier*(float)pixels[4*(index+1)];
                green = green + matrix[3]*multiplier*(float)pixels[4*(index-1)+1] +
                        matrix[4]*multiplier*(float)pixels[4*(index)+1] +
                        matrix[5]*multiplier*(float)pixels[4*(index+1)+1];
                blue = blue + matrix[3]*multiplier*(float)pixels[4*(index-1)+2] +
                       matrix[4]*multiplier*(float)pixels[4*(index)+2] +
                       matrix[5]*multiplier*(float)pixels[4*(index+1)+2];

                index = (y+1)*width + x;
                red = red + matrix[6]*multiplier*(float)pixels[4*(index-1)] +
                      matrix[7]*multiplier*(float)pixels[4*(index)] +
                      matrix[8]*multiplier*(float)pixels[4*(index+1)];
                green = green + matrix[6]*multiplier*(float)pixels[4*(index-1)+1] +
                        matrix[7]*multiplier*(float)pixels[4*(index)+1] +
                        matrix[8]*multiplier*(float)pixels[4*(index+1)+1];
                blue = blue + matrix[6]*multiplier*(float)pixels[4*(index-1)+2] +
                       matrix[7]*multiplier*(float)pixels[4*(index)+2] +
                       matrix[8]*multiplier*(float)pixels[4*(index+1)+2];

                red = red/divisor + shift;
                green = green/divisor + shift;
                blue = blue/divisor + shift;
                if (red < 0) red = 0;
                if (green < 0) green = 0;
                if (blue < 0) blue = 0;
                if (red > 255) red = 255;
                if (green > 255) green = 255;
                if (blue > 255) blue = 255;

                int realPos = 4*(y*imgRef.size.width + x);
                results[realPos] = red;
                results[realPos + 1] = green;
                results[realPos + 2] = blue;
                results[realPos + 3] = 1;
            } else {
                int realPos = 4*((y)*(imgRef.size.width) + (x));
                results[realPos] = 0;
                results[realPos + 1] = 0;
                results[realPos + 2] = 0;
                results[realPos + 3] = 1;
            }
        }
    }

    context = CGBitmapContextCreate(results, width, height,
                                    bitsPerComponent, bytesPerRow, colorSpace,
                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef finalImage = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:CGBitmapContextCreateImage(context)];
    CGImageRelease(finalImage);
    free(results);
    free(pixels);
    CGColorSpaceRelease(colorSpace);
    return newImage;
}
Thanks!
The problem was that I was hard-coding the alpha value; it needed to be calculated just like the RGB values.
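Concretely, the code above writes results[realPos + 3] = 1, a nearly fully transparent alpha, instead of convolving (or at least copying) the alpha channel; with premultiplied alpha, every channel should go through the same kernel. A single-pixel sketch of that fix in plain C (a hypothetical convolvePixel helper of my own, not the function from the question):

```c
/* Apply a 3x3 kernel to all four RGBA channels of pixel (x, y).
   Assumes 1 <= x < width-1 and 1 <= y < height-1 so the window is in bounds. */
static void convolvePixel(const unsigned char *src, unsigned char *dst,
                          int width, int x, int y,
                          const float kernel[9], float divisor) {
    for (int c = 0; c < 4; c++) {           /* R, G, B and alpha alike */
        float acc = 0.0f;
        for (int ky = -1; ky <= 1; ky++)
            for (int kx = -1; kx <= 1; kx++)
                acc += kernel[(ky + 1) * 3 + (kx + 1)]
                     * src[((y + ky) * width + (x + kx)) * 4 + c];
        acc /= divisor;
        if (acc < 0.0f)   acc = 0.0f;       /* clamp to byte range */
        if (acc > 255.0f) acc = 255.0f;
        dst[(y * width + x) * 4 + c] = (unsigned char)acc;
    }
}
```

With a box kernel of nine 1s and divisor 9, a uniform input stays uniform in all four channels, including alpha, which is exactly the property the hard-coded alpha broke on the second pass.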