polaroid filter from UIImage - iphone

I am trying to implement some image filters, such as a polaroid effect, on the iPhone. I searched for how to filter an existing UIImage into a polaroid style and came across this Stack Overflow link. Taking the answer there as a starting point, I looped through each pixel of the image, took the RGB values, and converted them to HSV; up to this point I have been successful. So this is what I have done (anyone is free to point out any mistakes):
double minRGB(double r, double g, double b) {
    if (r < g) {
        return (r < b) ? r : b;
    } else {
        return (g < b) ? g : b;
    }
}
double maxRGB(double r, double g, double b) {
    if (r > g) {
        return (r > b) ? r : b;
    } else {
        return (g > b) ? g : b;
    }
}
void rgbToHsv(double redIn, double greenIn, double blueIn, double *hue, double *saturation, double *value) {
    double min, max, delta;
    min = minRGB(redIn, greenIn, blueIn);
    max = maxRGB(redIn, greenIn, blueIn);
    *value = max;
    delta = max - min;
    if (max != 0) {
        *saturation = delta / max;
    } else {
        // black: saturation is 0 and hue is undefined
        *saturation = 0;
        *hue = -1.0;
        return;
    }
    if (redIn == max) {
        *hue = (greenIn - blueIn) / delta;       // between yellow and magenta
    } else if (greenIn == max) {
        *hue = 2 + (blueIn - redIn) / delta;     // between cyan and yellow
    } else {
        *hue = 4 + (redIn - greenIn) / delta;    // between magenta and cyan
    }
    *hue *= 60.0;                                // degrees
    if (*hue < 0) {
        *hue += 360.0;
    }
}
void hsvToRgb(double h, double s, double v, double *r, double *g, double *b) {
    int i;
    float f, p, q, t;
    if (s == 0) {
        // achromatic (grey)
        *r = *g = *b = v;
        return;
    }
    h /= 60;        // sector 0 to 5
    i = floor(h);
    f = h - i;      // fractional part of h
    p = v * (1 - s);
    q = v * (1 - s * f);
    t = v * (1 - s * (1 - f));
    switch (i) {
        case 0:
            *r = v; *g = t; *b = p;
            break;
        case 1:
            *r = q; *g = v; *b = p;
            break;
        case 2:
            *r = p; *g = v; *b = t;
            break;
        case 3:
            *r = p; *g = q; *b = v;
            break;
        case 4:
            *r = t; *g = p; *b = v;
            break;
        default:    // case 5
            *r = v; *g = p; *b = q;
            break;
    }
}
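As a quick sanity check of the conversion pair (not part of the original post), a round trip through both functions should reproduce the input within floating-point tolerance:

#include <assert.h>
#include <math.h>

// Hypothetical test: convert an RGB triple to HSV and back, and verify
// the round trip is lossless to within a small tolerance (the intermediate
// floats in hsvToRgb limit the precision).
void testHsvRoundTrip(void) {
    double h, s, v, r, g, b;
    rgbToHsv(0.2, 0.4, 0.6, &h, &s, &v);   // expect h = 210, s = 2/3, v = 0.6
    hsvToRgb(h, s, v, &r, &g, &b);
    assert(fabs(r - 0.2) < 1e-5);
    assert(fabs(g - 0.4) < 1e-5);
    assert(fabs(b - 0.6) < 1e-5);
}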
- (UIImage *)makeImagePolaroid:(UIImage *)myImage {
    CGImageRef originalImage = [myImage CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                       CGImageGetWidth(originalImage),
                                                       CGImageGetHeight(originalImage),
                                                       8,
                                                       CGImageGetWidth(originalImage) * 4,
                                                       colorSpace,
                                                       kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGBitmapContextGetWidth(bitmapContext), CGBitmapContextGetHeight(bitmapContext)), originalImage);

    UInt8 *data = CGBitmapContextGetData(bitmapContext);
    int numComponents = 4;
    int bytesInContext = CGBitmapContextGetHeight(bitmapContext) * CGBitmapContextGetBytesPerRow(bitmapContext);
    double redIn, greenIn, blueIn, alphaIn;
    double hue, saturation, value;

    for (int i = 0; i < bytesInContext; i += numComponents) {
        redIn = (double)data[i] / 255.0;
        greenIn = (double)data[i + 1] / 255.0;
        blueIn = (double)data[i + 2] / 255.0;
        alphaIn = (double)data[i + 3] / 255.0;

        rgbToHsv(redIn, greenIn, blueIn, &hue, &saturation, &value);
        hue = hue * 0.7;
        if (hue > 360) {
            hue = 360;
        }
        saturation = saturation * 1.3;
        if (saturation > 1.0) {
            saturation = 1.0;
        }
        value = value * 0.8;
        if (value > 1.0) {
            value = 1.0;
        }
        hsvToRgb(hue, saturation, value, &redIn, &greenIn, &blueIn);

        data[i] = redIn * 255.0;
        data[i + 1] = greenIn * 255.0;
        data[i + 2] = blueIn * 255.0;
    }

    CGImageRef outImage = CGBitmapContextCreateImage(bitmapContext);
    CGContextRelease(bitmapContext);  // release the context now that the image exists
    myImage = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return myImage;
}
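With the return type corrected to UIImage *, a minimal usage sketch; the asset name and image view outlet are illustrative, not from the original post:

UIImage *source = [UIImage imageNamed:@"photo.jpg"];   // assumed asset name
UIImage *polaroid = [self makeImagePolaroid:source];
self.imageView.image = polaroid;                       // assumed UIImageView outlet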
Now my idea of image processing is very childish (not even amateurish). I read this and tried to adjust the saturation and hue to see if I could get a polaroid effect. I think I am missing something, for I got every effect on earth other than a polaroid (which is to say, I haven't got anything).
1) Is there any document on the net (or books) about image filtering from a programmer's point of view (not a designer's point of view, and without a Photoshop screenshot)?
2) What hue, saturation, and value difference do I have to make on a pixel so that I can make it polaroid?
3) Am I on the right track?
Thanks in advance.

This might be helpful, from Camera+: the developers describe taking filters from Photoshop and reproducing them for iOS.
http://taptaptap.com/blog/creating-a-camera-plus-fx/

int step = 10; // variable value that changes the level of saturation
int red = pixelRedVal;
int green = pixelGreenVal;
int blue = pixelBlueVal;
int avg = (red + green + blue) / 3;
pixelBuf[r] = SAFECOLOR(avg + step * (red - avg));
pixelBuf[g] = SAFECOLOR(avg + step * (green - avg));
pixelBuf[b] = SAFECOLOR(avg + step * (blue - avg));
where SAFECOLOR is a macro:
#define SAFECOLOR(color) MIN(255, MAX(0, color))
and for brightness:
int step = 10; // variable value that changes the level of brightness
int red = pixelRedVal;
int green = pixelGreenVal;
int blue = pixelBlueVal;
pixelBuf[r] = SAFECOLOR(red * step);
pixelBuf[g] = SAFECOLOR(green * step);
pixelBuf[b] = SAFECOLOR(blue * step);
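To show these per-pixel formulas in context, here is a minimal sketch of a whole-buffer pass using the SAFECOLOR macro defined above; the RGBA8888 layout and the function name are assumptions (the original answer only defines the per-pixel math):

// Applies the averaging-based saturation adjustment to every pixel of an
// RGBA8888 buffer (assumed layout R, G, B, A; 4 bytes per pixel).
static void adjustSaturation(UInt8 *pixelBuf, size_t width, size_t height, int step) {
    for (size_t i = 0; i < width * height * 4; i += 4) {
        int red   = pixelBuf[i];
        int green = pixelBuf[i + 1];
        int blue  = pixelBuf[i + 2];
        int avg   = (red + green + blue) / 3;
        pixelBuf[i]     = SAFECOLOR(avg + step * (red - avg));
        pixelBuf[i + 1] = SAFECOLOR(avg + step * (green - avg));
        pixelBuf[i + 2] = SAFECOLOR(avg + step * (blue - avg));
        // the alpha byte at i + 3 is left untouched
    }
}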
You can simply use this, with different parameters:
// Note: the hue input ranges from 0.0 to 1.0; both ends of that range are red.
// Values outside this range will be clamped to 0.0 or 1.0.

// Polaroid with HSB parameters
- (UIImage *)polaroidishEffectWithHue:(CGFloat)hue saturation:(CGFloat)sat brightness:(CGFloat)bright alpha:(CGFloat)alpha
{
    // Find the image dimensions.
    CGSize imageSize = [self size];
    CGRect imageExtent = CGRectMake(0, 0, imageSize.width, imageSize.height);

    // Create a context containing the image.
    UIGraphicsBeginImageContext(imageSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self drawAtPoint:CGPointMake(0, 0)];

    // Draw the hue on top of the image.
    CGContextSetBlendMode(context, kCGBlendModeHue);
    [[UIColor colorWithHue:hue saturation:sat brightness:bright alpha:alpha] set];
    UIBezierPath *imagePath = [UIBezierPath bezierPathWithRect:imageExtent];
    [imagePath fill];

    // Retrieve the new image.
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
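Assuming this is added as a UIImage category method, usage might look like the following; the parameter values are illustrative starting points rather than canonical polaroid numbers:

UIImage *original = [UIImage imageNamed:@"photo.jpg"];   // assumed asset name
UIImage *filtered = [original polaroidishEffectWithHue:0.1
                                            saturation:0.3
                                            brightness:0.6
                                                 alpha:0.5];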

Related

Detect paper as series of points with OpenCV

I'm attempting to detect a piece of paper in a photo on the iPhone using OpenCV. I'm using the code from this question: OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Here's the code:
- (void)findEdges {
    image = [[UIImage imageNamed:@"photo.JPG"] retain];
    Mat matImage = [image CVMat];
    find_squares(matImage, points);
    UIImageView *imageView = [[[UIImageView alloc] initWithImage:image] autorelease];
    [imageView setFrame:CGRectMake(0.0f, 0.0f, self.frame.size.width, self.frame.size.height)];
    [self addSubview:imageView];
    [imageView setAlpha:0.3f];
}
- (void)drawRect:(CGRect)rect {
    [super drawRect:rect];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 0.8);
    CGFloat scaleX = self.frame.size.width / image.size.width;
    CGFloat scaleY = self.frame.size.height / image.size.height;
    // Draw the detected squares.
    for (vector<vector<cv::Point> >::const_iterator it = points.begin(); it != points.end(); it++) {
        vector<cv::Point> square = *it;
        cv::Point p1 = square[0];
        cv::Point p2 = square[1];
        cv::Point p3 = square[2];
        cv::Point p4 = square[3];
        CGContextBeginPath(context);
        CGContextMoveToPoint(context, p1.x * scaleX, p1.y * scaleY); // start point
        CGContextAddLineToPoint(context, p2.x * scaleX, p2.y * scaleY);
        CGContextAddLineToPoint(context, p3.x * scaleX, p3.y * scaleY);
        CGContextAddLineToPoint(context, p4.x * scaleX, p4.y * scaleY); // end path
        CGContextClosePath(context);
        CGContextSetLineWidth(context, 4.0);
        CGContextStrokePath(context);
    }
}
// Returns the cosine of the angle between the vectors pt0->pt1 and pt0->pt2.
double angle(cv::Point pt1, cv::Point pt2, cv::Point pt0) {
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    return (dx1*dx2 + dy1*dy2) / sqrt((dx1*dx1 + dy1*dy1) * (dx2*dx2 + dy2*dy2) + 1e-10);
}
void find_squares(Mat& image, vector<vector<cv::Point> >& squares) {
    // blur will enhance edge detection
    Mat blurred(image);
    medianBlur(image, blurred, 9);

    Mat gray0(blurred.size(), CV_8U), gray;
    vector<vector<cv::Point> > contours;

    // find squares in every color plane of the image
    for (int c = 0; c < 3; c++)
    {
        int ch[] = {c, 0};
        mixChannels(&blurred, 1, &gray0, 1, ch, 1);

        // try several threshold levels
        const int threshold_level = 2;
        for (int l = 0; l < threshold_level; l++)
        {
            // Use Canny instead of zero threshold level!
            // Canny helps to catch squares with gradient shading
            if (l == 0)
            {
                Canny(gray0, gray, 10, 20, 3);
                // Dilate helps to remove potential holes between edge segments
                dilate(gray, gray, Mat(), cv::Point(-1,-1));
            }
            else
            {
                gray = gray0 >= (l+1) * 255 / threshold_level;
            }

            // Find contours and store them in a list
            findContours(gray, contours, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);

            // Test contours
            vector<cv::Point> approx;
            for (size_t i = 0; i < contours.size(); i++)
            {
                // approximate contour with accuracy proportional
                // to the contour perimeter
                approxPolyDP(Mat(contours[i]), approx, arcLength(Mat(contours[i]), true)*0.02, true);

                // Note: absolute value of an area is used because
                // area may be positive or negative - in accordance with the
                // contour orientation
                if (approx.size() == 4 &&
                    fabs(contourArea(Mat(approx))) > 1000 &&
                    isContourConvex(Mat(approx)))
                {
                    double maxCosine = 0;
                    for (int j = 2; j < 5; j++)
                    {
                        double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
                        maxCosine = MAX(maxCosine, cosine);
                    }
                    if (maxCosine < 0.3)
                        squares.push_back(approx);
                }
            }
        }
    }
}
Here's the input image: [image not included]
Here's the result: [image not included]
What am I doing wrong?

iPhone paint bucket

I am working on implementing a flood-fill paint-bucket tool in an iPhone app and am having some trouble with it. The user is able to draw, and I would like the paint bucket to allow them to tap a spot and fill everything of that color that is connected.
Here's my idea:
1) Start at the point the user selects.
2) Save checked points to an NSMutableArray so they don't get re-checked.
3) If the pixel color at the current point is the same as the original clicked point's, save it to an array to be changed later.
4) If the pixel color at the current point is different from the original's, return (boundary).
5) Once finished scanning, go through the array of pixels to change and set them to the new color.
But this is not working out so far. Any help or knowledge of how to do this would be greatly appreciated! Here is my code.
- (void)flood:(int)x :(int)y
{
    //NSLog(@"Flood %i %i", x, y);
    CGPoint point = CGPointMake(x, y);
    NSValue *value = [NSValue valueWithCGPoint:point];
    // Don't repeat checked pixels
    if ([self.checkedFloodPixels containsObject:value])
    {
        return;
    }
    else
    {
        // If not checked, mark as checked
        [self.checkedFloodPixels addObject:value];
        // Make sure in bounds
        if ([self isOutOfBounds:x :y] || [self reachedStopColor:x :y])
        {
            return;
        }
        // Go to adjacent points
        [self flood:x+1 :y];
        [self flood:x-1 :y];
        [self flood:x :y+1];
        [self flood:x :y-1];
    }
}
- (BOOL)isOutOfBounds:(int)x :(int)y
{
    BOOL outOfBounds;
    if (y > self.drawImage.frame.origin.y && y < (self.drawImage.frame.origin.y + self.drawImage.frame.size.height))
    {
        if (x > self.drawImage.frame.origin.x && x < (self.drawImage.frame.origin.x + self.drawImage.frame.size.width))
        {
            outOfBounds = NO;
        }
        else
        {
            outOfBounds = YES;
        }
    }
    else
    {
        outOfBounds = YES;
    }
    if (outOfBounds)
        NSLog(@"Out of bounds");
    return outOfBounds;
}
- (BOOL)reachedStopColor:(int)x :(int)y
{
    CFDataRef theData = CGDataProviderCopyData(CGImageGetDataProvider(self.drawImage.image.CGImage));
    const UInt8 *pixelData = CFDataGetBytePtr(theData);
    int red = 0;
    int green = 1;
    int blue = 2;
    // RGB for point being checked
    float newPointR;
    float newPointG;
    float newPointB;
    // RGB for point initially clicked
    float oldPointR;
    float oldPointG;
    float oldPointB;
    int index;
    BOOL reachedStopColor = NO;

    // Format oldPoint RGB - pixels are every 4 bytes so round to 4
    index = lastPoint.x * lastPoint.y;
    if (index % 4 != 0)
    {
        index -= 2;
        index /= 4;
        index *= 4;
    }
    // Get into 0.0 - 1.0 value
    oldPointR = pixelData[index + red];
    oldPointG = pixelData[index + green];
    oldPointB = pixelData[index + blue];
    oldPointR /= 255.0;
    oldPointG /= 255.0;
    oldPointB /= 255.0;
    oldPointR *= 1000;
    oldPointG *= 1000;
    oldPointB *= 1000;
    int oldR = oldPointR;
    int oldG = oldPointG;
    int oldB = oldPointB;
    oldPointR = oldR / 1000.0;
    oldPointG = oldG / 1000.0;
    oldPointB = oldB / 1000.0;

    // Format newPoint RGB
    index = x * y;
    if (index % 4 != 0)
    {
        index -= 2;
        index /= 4;
        index *= 4;
    }
    newPointR = pixelData[index + red];
    newPointG = pixelData[index + green];
    newPointB = pixelData[index + blue];
    newPointR /= 255.0;
    newPointG /= 255.0;
    newPointB /= 255.0;
    newPointR *= 1000;
    newPointG *= 1000;
    newPointB *= 1000;
    int newR = newPointR;
    int newG = newPointG;
    int newB = newPointB;
    newPointR = newR / 1000.0;
    newPointG = newG / 1000.0;
    newPointB = newB / 1000.0;

    // Check if different color
    if (newPointR < (oldPointR - 0.02f) || newPointR > (oldPointR + 0.02f))
    {
        if (newPointG < (oldPointG - 0.02f) || newPointG > (oldPointG + 0.02f))
        {
            if (newPointB < (oldPointB - 0.02f) || newPointB > (oldPointB + 0.02f))
            {
                reachedStopColor = YES;
                NSLog(@"Different Color");
            }
            else
            {
                NSLog(@"Same Color3");
                NSNumber *num = [NSNumber numberWithInt:index];
                [self.pixelsToChange addObject:num];
            }
        }
        else
        {
            NSLog(@"Same Color2");
            NSNumber *num = [NSNumber numberWithInt:index];
            [self.pixelsToChange addObject:num];
        }
    }
    else
    {
        NSLog(@"Same Color1");
        NSNumber *num = [NSNumber numberWithInt:index];
        [self.pixelsToChange addObject:num];
    }
    CFRelease(theData);
    if (reachedStopColor)
        NSLog(@"Reached stop color");
    return reachedStopColor;
}
- (void)fillAll
{
    CGContextRef ctx;
    CGImageRef imageRef = self.drawImage.image.CGImage;
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    int red = 0;
    int green = 1;
    int blue = 2;
    int index;
    NSNumber *num;
    for (int i = 0; i < [self.pixelsToChange count]; i++)
    {
        num = [self.pixelsToChange objectAtIndex:i];
        index = [num intValue];
        rawData[index + red] = (char)[[GameManager sharedManager] RValue];
        rawData[index + green] = (char)[[GameManager sharedManager] GValue];
        rawData[index + blue] = (char)[[GameManager sharedManager] BValue];
    }

    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                CGImageGetBytesPerRow(imageRef),
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGContextRelease(ctx);
    self.drawImage.image = rawImage;
    free(rawData);
}
So I found this (I know the question might be irrelevant now, but for people who are still looking for something like this, it's not):
To get the color at a pixel from a context (modified code from here):
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    CGContextRef cgctx = UIGraphicsGetCurrentContext();
    unsigned char *data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // I don't know how to get ContextWidth from the current context,
        // so I keep it as an instance variable in my code
        int offset = 4 * ((ContextWidth * round(point.y)) + round(point.x));
        int alpha = data[offset];
        int red = data[offset+1];
        int green = data[offset+2];
        int blue = data[offset+3];
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }
    // Note: the context owns this buffer, so it must not be freed here.
    return color;
}
and the fill algorithm is here.
This is what I'm using, but the fill itself is quite slow compared to CGPath drawing styles. Though if you're rendering offscreen and/or filling dynamically like this, it looks kind of cool.
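Since the fill algorithm itself is only linked above, here is a minimal iterative (queue-based) sketch of the idea, assuming direct access to a packed 32-bit RGBA pixel buffer; the buffer layout, the exact-match comparison, and the function name are assumptions, not the linked code:

#include <stdint.h>
#include <stdlib.h>

// Iterative 4-way flood fill over a packed 32-bit pixel buffer.
// An explicit stack avoids the deep recursion of the flood:: method
// above, which can overflow the call stack on large regions.
static void floodFill(uint32_t *pixels, int width, int height,
                      int x, int y, uint32_t fillColor) {
    if (x < 0 || x >= width || y < 0 || y >= height) return;
    uint32_t target = pixels[y * width + x];
    if (target == fillColor) return;

    int *stack = malloc(sizeof(int) * width * height); // each pixel is pushed at most once
    if (!stack) return;
    int top = 0;
    pixels[y * width + x] = fillColor;
    stack[top++] = y * width + x;

    while (top > 0) {
        int idx = stack[--top];
        int px = idx % width, py = idx / width;
        // Push each 4-neighbour that still has the target color,
        // recoloring it immediately so it is never pushed twice.
        if (px > 0 && pixels[idx - 1] == target)          { pixels[idx - 1] = fillColor;     stack[top++] = idx - 1; }
        if (px < width - 1 && pixels[idx + 1] == target)  { pixels[idx + 1] = fillColor;     stack[top++] = idx + 1; }
        if (py > 0 && pixels[idx - width] == target)      { pixels[idx - width] = fillColor; stack[top++] = idx - width; }
        if (py < height - 1 && pixels[idx + width] == target) { pixels[idx + width] = fillColor; stack[top++] = idx + width; }
    }
    free(stack);
}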

How to define a struct correctly

I have a struct where I define the size (width, height) of a square, and I don't know why the code doesn't work well. Here's the code that I'm using:
.h
struct size {
    int width;
    int height;
};
.m
// note: assignments like the ones below must appear inside a method or function body
struct size a;
a.width = 508;
a.height = 686;
// I use it here.
Any ideas?
If you want to use Apple-provided types, you have:
CGSize for sizes (with width and height)
CGPoint for locations (with x and y)
and CGRect, which combines the two.
Example usage:
CGPoint p;
CGSize s;
CGRect r;
p.x = 1;
p.y = 2;
// or:
p = CGPointMake(1, 2);
s.width = 3;
s.height = 4;
// or:
s = CGSizeMake(3, 4);
r.origin.x = 1;
r.origin.y = 2;
r.size.width = 3;
r.size.height = 4;
// or:
r.origin = p;
r.size = s;
// or:
r = CGRectMake(1, 2, 3, 4);
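As a quick check, these values can be logged with UIKit's geometry string helpers:

NSLog(@"%@", NSStringFromCGPoint(p));  // {1, 2}
NSLog(@"%@", NSStringFromCGSize(s));   // {3, 4}
NSLog(@"%@", NSStringFromCGRect(r));   // {{1, 2}, {3, 4}}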

RGB in image processing in iphone app

I am doing image processing in my app. I get the pixel color from the image and apply it to the image by touching. My code gets the pixel color, but it changes the whole image to blue and applies that blue during processing. I am stuck in the code and don't know what is going wrong; could you please help me?
My code is:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint coordinateTouch = [touch locationInView:[self view]]; // where the image was tapped
    if (value == YES) {
        self.lastColor = [self getPixelColorAtLocation:coordinateTouch];
        value = NO;
    }
    NSLog(@"color %@", lastColor);
    //[pickedColorDelegate pickedColor:(UIColor*)self.lastColor];

    ListPoint point;
    point.x = coordinateTouch.x;
    point.y = coordinateTouch.y;

    button = [UIButton buttonWithType:UIButtonTypeCustom];
    button.backgroundColor = [UIColor whiteColor];
    button.frame = CGRectMake(coordinateTouch.x - 5, coordinateTouch.y - 5, 2, 2);
    //[descImageView addSubview:button];
    [bgImage addSubview:button];

    // Make image blurred on ImageView
    if (bgImage.image)
    {
        CGImageRef imgRef = [[bgImage image] CGImage];
        CFDataRef dataRef = CGDataProviderCopyData(CGImageGetDataProvider(imgRef));
        const unsigned char *sourceBytesPtr = CFDataGetBytePtr(dataRef);
        int len = CFDataGetLength(dataRef);
        NSLog(@"length = %d, width = %d, height = %d, bytes per row = %d, bits per pixel = %d",
              len, CGImageGetWidth(imgRef), CGImageGetHeight(imgRef), CGImageGetBytesPerRow(imgRef), CGImageGetBitsPerPixel(imgRef));
        int width = CGImageGetWidth(imgRef);
        int height = CGImageGetHeight(imgRef);
        int widthstep = CGImageGetBytesPerRow(imgRef);
        unsigned char *pixelData = (unsigned char *)malloc(len);
        double wFrame = bgImage.frame.size.width;
        double hFrame = bgImage.frame.size.height;
        Image_Correction(sourceBytesPtr, pixelData, widthstep, width, height, wFrame, hFrame, point);
        NSLog(@"finish");

        NSData *data = [NSData dataWithBytes:pixelData length:len];
        NSLog(@"1");
        CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
        NSLog(@"2");
        CGColorSpaceRef colorSpace2 = CGColorSpaceCreateDeviceRGB();
        NSLog(@"3");
        CGImageRef imageRef = CGImageCreate(width, height, 8, CGImageGetBitsPerPixel(imgRef), CGImageGetBytesPerRow(imgRef),
                                            colorSpace2, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Host,
                                            provider, NULL, false, kCGRenderingIntentDefault);
        NSLog(@"Start processing image");
        UIImage *ret = [UIImage imageWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];
        CGImageRelease(imageRef);
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpace2);
        CFRelease(dataRef);
        free(pixelData);
        NSLog(@"4");
        bgImage.image = ret;
        [button removeFromSuperview];
    }
}
- (UIColor *)getPixelColorAtLocation:(CGPoint)point {
    UIColor *color = nil;
    CGImageRef inImage = self.image.CGImage;
    // Create an offscreen bitmap context to draw the image into.
    // Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue.
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }

    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    unsigned char *data = CGBitmapContextGetData(cgctx);
    if (data != NULL) {
        // offset locates the pixel in the data from x,y:
        // 4 for 4 bytes of data per pixel, w is the width of one row of data.
        int offset = 4 * ((w * round(point.y)) + round(point.x));
        alpha = data[offset];
        red = data[offset+1];
        green = data[offset+2];
        blue = data[offset+3];
        NSLog(@"offset: %i colors: RGBA %i %i %i %i", offset, red, green, blue, alpha);
        color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
    }

    // When finished, release the context
    CGContextRelease(cgctx);
    // Free the image data memory for the context
    if (data) { free(data); }

    return color;
}
- (CGContextRef)createARGBBitmapContextFromImage:(CGImageRef)inImage {
    CGContextRef context = NULL;
    CGColorSpaceRef colorSpace;
    void *bitmapData;
    int bitmapByteCount;
    int bitmapBytesPerRow;

    // Get the image width and height. We'll use the entire image.
    size_t pixelsWide = CGImageGetWidth(inImage);
    size_t pixelsHigh = CGImageGetHeight(inImage);

    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and
    // alpha.
    bitmapBytesPerRow = (pixelsWide * 4);
    bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);

    // Use the generic RGB color space.
    colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        fprintf(stderr, "Error allocating color space\n");
        return NULL;
    }

    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    bitmapData = malloc(bitmapByteCount);
    if (bitmapData == NULL)
    {
        fprintf(stderr, "Memory not allocated!");
        CGColorSpaceRelease(colorSpace);
        return NULL;
    }

    // Create the bitmap context. We want pre-multiplied ARGB, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    context = CGBitmapContextCreate(bitmapData,
                                    pixelsWide,
                                    pixelsHigh,
                                    8, // bits per component
                                    bitmapBytesPerRow,
                                    colorSpace,
                                    kCGImageAlphaPremultipliedFirst);
    if (context == NULL)
    {
        free(bitmapData);
        fprintf(stderr, "Context not created!");
    }

    // Make sure to release the colorspace before returning
    CGColorSpaceRelease(colorSpace);
    return context;
}
int Image_Correction(const unsigned char *pImage, unsigned char *rImage, int widthstep, int nW, int nH, double wFrame, double hFrame, ListPoint point)
{
    double ratiox = nW/wFrame;
    double ratioy = nH/hFrame;
    double newW, newH, ratio;
    if (ratioy > ratiox)
    {
        newH = hFrame;
        newW = nW/ratioy;
        ratio = ratioy;
    }
    else
    {
        newH = nH/ratiox;
        newW = wFrame;
        ratio = ratiox;
    }
    NSLog(@"new H, W = %f, %f", newW, newH);
    NSLog(@"ratiox = %f; ratioy = %f", ratiox, ratioy);

    ListPoint real_point;
    real_point.x = (point.x - wFrame/2 + newW/2)*ratio;
    real_point.y = (point.y - hFrame/2 + newH/2)*ratio;

    // Copy the source image into the result buffer.
    for (int h = 0; h < nH; h++)
    {
        for (int k = 0; k < nW; k++)
        {
            rImage[h*widthstep + k*4 + 0] = pImage[h*widthstep + k*4 + 0];
            rImage[h*widthstep + k*4 + 1] = pImage[h*widthstep + k*4 + 1];
            rImage[h*widthstep + k*4 + 2] = pImage[h*widthstep + k*4 + 2];
            rImage[h*widthstep + k*4 + 3] = pImage[h*widthstep + k*4 + 3];
        }
    }

    // Modify this parameter to change the blurred area
    int iBlurredArea = 6;
    for (int h = -ratio*iBlurredArea; h <= ratio*iBlurredArea; h++)
        for (int k = -ratio*iBlurredArea; k <= ratio*iBlurredArea; k++)
        {
            int tempx = real_point.x + k;
            int tempy = real_point.y + h;
            if (((tempy - 3) > 0) && ((tempy + 3) > 0) && ((tempx - 3) > 0) && ((tempx + 3) > 0))
            {
                double sumR = 0;
                double sumG = 0;
                double sumB = 0;
                double sumA = 0;
                double count = 0;
                for (int m = -3; m < 4; m++)
                    for (int n = -3; n < 4; n++)
                    {
                        sumR = red;   //sumR + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 0];
                        sumG = green; //sumG + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 1];
                        sumB = blue;  //sumB + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 2];
                        sumA = alpha; //sumA + pImage[(tempy + m)*widthstep + (tempx + n)*4 + 3];
                        count++;
                    }
                rImage[tempy*widthstep + tempx*4 + 0] = red;   //sumR/count;
                rImage[tempy*widthstep + tempx*4 + 1] = green; //sumG/count;
                rImage[tempy*widthstep + tempx*4 + 2] = blue;  //sumB/count;
                rImage[tempy*widthstep + tempx*4 + 3] = alpha; //sumA/count;
            }
        }
    return 1;
}
Thanks for looking at this code; I think I am doing something wrong.
Thanks in advance.
This seems to work for me.
UIImage* modifyImage(UIImage* image)
{
    size_t w = image.size.width;
    size_t h = image.size.height;
    CGFloat scale = image.scale;

    // Create the bitmap context
    UIGraphicsBeginImageContext(CGSizeMake(w*scale, h*scale));
    CGContextRef context = UIGraphicsGetCurrentContext();
    // NOTE you may have to set up a rotation here based on image.imageOrientation,
    // but I didn't need to consider that for my images.
    CGContextScaleCTM(context, scale, scale);
    [image drawInRect:CGRectMake(0, 0, w, h)];

    unsigned char* data = CGBitmapContextGetData(context);
    if (data != NULL) {
        size_t height = CGBitmapContextGetHeight(context);
        size_t width = CGBitmapContextGetWidth(context);
        size_t bytesPerRow = CGBitmapContextGetBytesPerRow(context);
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Not sure why the color info is in BGRA format.
                // Look at CGBitmapContextGetBitmapInfo(context) if this format isn't working for you.
                int offset = y * bytesPerRow + x * 4;
                unsigned char* blue = &data[offset];
                unsigned char* green = &data[offset+1];
                unsigned char* red = &data[offset+2];
                unsigned char* alpha = &data[offset+3];

                int newRed = ...;   // color calculation code here
                int newGreen = ...;
                int newBlue = ...;

                // Assuming you don't want to change the original alpha value.
                *red = (newRed * *alpha)/255;
                *green = (newGreen * *alpha)/255;
                *blue = (newBlue * *alpha)/255;
            }
        }
    }
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    UIImage *done = [UIImage imageWithCGImage:newImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(newImage);
    UIGraphicsEndImageContext();
    return done;
}
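The `int newRed = ...;` placeholders are where your own per-channel math goes. Purely as an illustration (not part of the original answer), a grayscale conversion using the common Rec. 601 luma weights could be dropped in like this:

// Hypothetical fill-in for the elided color calculation:
// replace each channel with the pixel's luma-weighted gray value.
int gray = (299 * *red + 587 * *green + 114 * *blue) / 1000;
int newRed   = gray;
int newGreen = gray;
int newBlue  = gray;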

Pixel color replacement working fine on simulator but not on iPhone

I am working on an iPhone application that picks a specific colored pixel from an image and replaces it with some other color shade chosen from a color menu. The problem is that the code I have implemented works fine on the simulator, but when I run the same code on the device, the image's pixels are replaced only with white. I am pasting the code below; if anyone has a clue about how to implement this, it would be a great help.
// This is the data buffer of the whole image's pixels
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);

// Second image into buffer
// This is the data buffer of the pixel to be replaced that I am selecting from the image
CGImageRef imageRefs = [selectedColorImage.image CGImage];
NSUInteger widths = CGImageGetWidth(imageRefs);
NSUInteger heights = CGImageGetHeight(imageRefs);
CGColorSpaceRef colorSpaces = CGColorSpaceCreateDeviceRGB();
unsigned char *rawDatas = malloc(heights * widths * 4);
NSUInteger bytesPerPixels = 4;
NSUInteger bytesPerRows = bytesPerPixels * widths;
NSUInteger bitsPerComponents = 8;
CGContextRef contexts = CGBitmapContextCreate(rawDatas, widths, heights,
                                              bitsPerComponents, bytesPerRows, colorSpaces,
                                              kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpaces);
CGContextDrawImage(contexts, CGRectMake(0, 0, widths, heights), imageRefs);
CGContextRelease(contexts);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
int byteIndexs = (bytesPerRows * 0) + 0 * bytesPerPixels;
int i = 0;
for (int ii = 0; ii < count; ++ii)
{
    CGFloat redb = (rawData[byteIndex] * 1.0) / 255.0;
    CGFloat greenb = (rawData[byteIndex + 1] * 1.0) / 255.0;
    CGFloat blueb = (rawData[byteIndex + 2] * 1.0) / 255.0;
    CGFloat alphab = (rawData[byteIndex + 3] * 1.0) / 255.0;
    CGFloat reds = (rawDatas[byteIndexs] * 1.0) / 255.0;
    CGFloat greens = (rawDatas[byteIndexs + 1] * 1.0) / 255.0;
    CGFloat blues = (rawDatas[byteIndexs + 2] * 1.0) / 255.0;
    CGFloat alphas = (rawDatas[byteIndexs + 3] * 1.0) / 255.0;
    /* CGColorRef ref = [[shapeButton backgroundColor] CGColor];
    switch (CGColorSpaceGetModel(CGColorGetColorSpace(ref)))
    {
        case kCGColorSpaceModelMonochrome:
            // For grayscale colors, the luminance is the color value
            //luminance = components[0];
            break;
        case kCGColorSpaceModelRGB:
            // For RGB colors, we calculate luminance assuming sRGB primaries as per
            // http://en.wikipedia.org/wiki/Luminance_(relative)
            //luminance = 0.2126 * components[0] + 0.7152 * components[1] + 0.0722 * components[2];
            break;
        case kCGColorSpaceModelCMYK:
        case kCGColorSpaceModelLab:
        case kCGColorSpaceModelDeviceN:
        case kCGColorSpaceModelIndexed:
            break;
        case kCGColorSpaceModelUnknown:
            break;
        case kCGColorSpaceModelPattern:
            break;
        //default:
            // We don't implement support for non-gray, non-rgb colors at this time.
            // Since our only consumer is colorSortByLuminance, we return a larger than normal
            // value to ensure that these types of colors are sorted to the end of the list.
            //luminance = 2.0;
    }
    int numComponents = CGColorGetNumberOfComponents(ref);
    if (numComponents == 4)
    {
        const CGFloat *components = CGColorGetComponents(ref);
        CGFloat red = components[0];
        CGFloat green = components[1];
        CGFloat blue = components[2];
        CGFloat alpha = components[3];
    } */
    if ((redb == red/255.0f) && (greenb = green/255.0f) && (blueb = blue/255.0f) && (alphab == alpha/255.0f))
    {
        if (button_tag == 1)
        {
            NSLog(@"color matching %d", i); // done
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 999999999;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 000000800.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 999999000.0;
            rawData[byteIndex+3] = (alphas*255.0)/255.0;
        }
        if (button_tag == 2)
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 899989989;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 898998999.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 899989900.0;
            rawData[byteIndex+3] = (alphas*255.0)/255.0;
        }
        if (button_tag == 3)
        {
            NSLog(@"color matching %d", i); // done
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 999999999;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 990000800.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 999999000.0;
            rawData[byteIndex+3] = (alphas*255.0)/10.0;
        }
        if (button_tag == 4) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 50.0;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 50.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 50.0;
            rawData[byteIndex+3] = (alphas*0.0)/0.0;
        }
        if (button_tag == 5) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 255000000.0;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 000000000.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 000000255.0;
            rawData[byteIndex+3] = (alphas*255.0)/10.0;
        }
        if (button_tag == 6) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 0.0;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 1.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 0.0;
            rawData[byteIndex+3] = (alphas*255.0)/0.0;
        }
        if (button_tag == 7) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 255255255.0f;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 000255255.0f;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 255255255.0f;
            rawData[byteIndex+3] = (alphas*255.0)/255.0;
        }
        if (button_tag == 8) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 200.0;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 200.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 200.0;
            rawData[byteIndex+3] = (alphas*0.0)/0.0;
        }
        if (button_tag == 9) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 1.0;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 0.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 0.0;
            rawData[byteIndex+3] = (alphas*255.0)/0.0;
        }
        if (button_tag == 10) // done
        {
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 999999999;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 990000888.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 999999000.0;
            rawData[byteIndex+3] = (alphas*255.0)/10.0;
        }
        if (button_tag == 11) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 900000000;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 990000800.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 999999000.0;
            rawData[byteIndex+3] = (alphas*255.0)/255.0;
        }
        if (button_tag == 12) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 150.0f;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 150.0f;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 150.0f;
            rawData[byteIndex+3] = (alphas*255.0)/255.0f;
        }
        if (button_tag == 13) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 0.0;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 0.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 0.0;
            rawData[byteIndex+3] = (alphas*255.0)/0.0;
        }
        if (button_tag == 14) // done
        {
            NSLog(@"color matching %d", i);
            i++;
            rawData[byteIndex] = (reds*255.0)/1.0 + 10.0;
            rawData[byteIndex+1] = (blues*255.0)/1.0 + 10.0;
            rawData[byteIndex+2] = (greens*255.0)/1.0 + 10.0;
            rawData[byteIndex+3] = (alphas*255.0)/1.0;
        }
    }
    byteIndex += 4;
    //byteIndexs += 4;
}
CGSize size = CGSizeMake(320, 330);
UIImage *newImage = [self imageWithBits:rawData withSize:size];
[backgroundImage setImage:newImage];
[self HideLoadingIndicator];
//free(rawData);
//free(rawDatas);
Thanks in advance :)
While I can't tell you exactly what your problem is, I'm noticing a lot of code similar to:
rawData[byteIndex] = (reds*255.0)/1.0 + 999999999;
which, when written to a single byte, is going to max out the value (255). Doing that four times over an RGBA8 pixel will render it white and opaque. (Strictly, converting an out-of-range floating-point value to an unsigned char is undefined behaviour in C, which is one reason the simulator and the device can behave differently.)
Comparing floats will always be inexact. Use the original integer values instead; they are easier to work with and will be quicker.
edit: something similar to this should work to replace white with transparent. Note that it compares whole packed 32-bit pixels, so opaque white (0xffffffff) matches regardless of byte order, but for other colors the constant must match the bitmap's byte layout:
uint32_t *pixels = (pointer to image data);
uint32_t sourceColor = 0xffffffff;
uint32_t destColor = 0x00000000;
size_t pixelCount = width * height;
for (size_t i = 0; i < pixelCount; i++) {
    if (pixels[i] == sourceColor)
        pixels[i] = destColor;
}