How to filter colors at runtime? - iphone

I have an image where each pixel is one of the three primary colors (stored as RGBA).
I have to change this image at run time by replacing each color channel with another (a run-time filter).
I have the .PVR file and the corresponding glTexture2D, but how can I filter / change colors at run time?
I cannot use OpenGL ES 2.0, but I can use Cocos2D and OpenGL ES 1.x.
:(
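
Since shaders aren't available under OpenGL ES 1.x, one cheap option in Cocos2D 1.x is to modulate the texture with a per-sprite color (the default GL_MODULATE texture environment). A minimal sketch inside a CCLayer, assuming a CCSprite built from your .PVR texture (the file and variable names below are placeholders, not from the question):

// Tint a sprite by modulating its texture with a color (Cocos2D 1.x).
// "filter.pvr" and "sprite" are placeholder names.
CCSprite *sprite = [CCSprite spriteWithFile:@"filter.pvr"];
// Each texel is multiplied by this color, so this keeps the red channel
// and zeroes out green and blue.
sprite.color = ccc3(255, 0, 0);
sprite.opacity = 255;
[self addChild:sprite];

Note that modulation can only attenuate or remove channels; it cannot swap one channel for another. For an actual channel swap without ES 2.0 you have to rewrite the pixel data on the CPU, which is what the answers below do.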

It's not trivial, but it could be useful to you:
// Converts the image to black and white.
// RED, GREEN and BLUE are not defined in the original snippet; with
// kCGBitmapByteOrder32Little and alpha-last RGBA data the bytes land in
// memory as A, B, G, R, so reasonable definitions are:
typedef enum {
    ALPHA = 0,
    BLUE  = 1,
    GREEN = 2,
    RED   = 3
} PIXELS;

+ (UIImage *)convertImageToGrayScale:(UIImage *)i {
    CGSize size = [i size];
    int width = size.width;
    int height = size.height;
    // the pixels will be painted into this array
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    // paint the bitmap into our context, which fills in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [i CGImage]);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *) &pixels[y * width + x];
            // convert to grayscale using the recommended weights:
            // http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
            // set the color channels to the gray value
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }
    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);
    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    // we're done with the CGImage now too
    CGImageRelease(image);
    return resultUIImage;
}
This converts the UIImage it receives into a grayscale image.
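
A quick usage sketch; the class name ImageFilter and the imageView outlet are placeholders for wherever you declare the method above:

// Hypothetical call site for the grayscale helper above.
UIImage *original = [UIImage imageNamed:@"photo.png"];
UIImage *gray = [ImageFilter convertImageToGrayScale:original];
imageView.image = gray;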

This is the code I used to manipulate an RGBA picture:
-(UIImage *)convertGrayScaleImageRedImage:(CGImageRef)inImage {
    // Note: CFDataGetBytePtr formally returns read-only bytes; a safer
    // variant would copy them into a mutable buffer first.
    CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8 *m_PixelBuf = (UInt8 *) CFDataGetBytePtr(m_DataRef);
    int length = CFDataGetLength(m_DataRef);
    for (int i = 0; i < length; i += 4) {
        int r = i;
        int g = i + 1;
        int b = i + 2;
        //int a = i + 3; // alpha
        NSLog(@"r=%i, g=%i, b=%i", m_PixelBuf[r], m_PixelBuf[g], m_PixelBuf[b]);
        // Overwrite every pixel with a (very dark) red; use 255 for full red.
        m_PixelBuf[r] = 1;
        m_PixelBuf[g] = 0;
        m_PixelBuf[b] = 0;
        //m_PixelBuf[a] = m_PixelBuf[a]; // alpha left unchanged
    }
    CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
                                             CGImageGetWidth(inImage),
                                             CGImageGetHeight(inImage),
                                             CGImageGetBitsPerComponent(inImage),
                                             CGImageGetBytesPerRow(inImage),
                                             CGImageGetColorSpace(inImage),
                                             CGImageGetBitmapInfo(inImage));
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    CFRelease(m_DataRef); // release the copied pixel data to avoid a leak
    UIImage *redImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return redImage;
}
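
The loop above simply forces every pixel to a dark red. If the goal is to swap one channel for another (as in the original question), the same loop body can shuffle the bytes instead; a minimal sketch of a red/blue swap, assuming the same buffer layout as m_PixelBuf above:

// Sketch: swap the red and blue channels instead of overwriting them.
for (int i = 0; i < length; i += 4) {
    UInt8 red  = m_PixelBuf[i];
    UInt8 blue = m_PixelBuf[i + 2];
    m_PixelBuf[i]     = blue;  // red slot gets the old blue value
    m_PixelBuf[i + 2] = red;   // blue slot gets the old red value
    // green (i + 1) and alpha (i + 3) are left untouched
}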

Related

Inappropriate Images display due to overlapping in CGContext

I can't find the solution for this:
I have 2 image views, each with a different image: image_1 (a person's jeans) and image_2 (a person's shirt). When I change the RGB values individually for each and every pixel of image_1 or image_2, I get a perfect result. But whenever one of the two frames slightly overlaps the other after processing both of them, the problem occurs. Please help. This is how I am processing the image:
-(UIImage *)ColorChangeProcessing:(int)redvalue greenValue:(int)greenvalue blueValue:(int)bluevalue imageUsed:(UIImage *)image
{
    CGContextRef ctx;
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    int byteIndex = 0;
    int RED = redvalue, GREEN = greenvalue, BLUE = bluevalue;
    for (int ii = 0; ii < width * height; ++ii)
    {
        // Only touch pixels that are not pure black.
        // (The original compared against '/0', which is not the null
        // character; comparing against 0 is what was intended.)
        if (rawData[byteIndex] != 0 || rawData[byteIndex + 1] != 0 || rawData[byteIndex + 2] != 0) {
            // Red channel: add the offset and clamp to 0..255.
            if (rawData[byteIndex] + RED > 255)
                rawData[byteIndex] = 255;
            else if (rawData[byteIndex] + RED > 0)
                rawData[byteIndex] = (unsigned char)(rawData[byteIndex] + RED);
            else
                rawData[byteIndex] = 0;
            // Green channel.
            if (rawData[byteIndex + 1] + GREEN > 255)
                rawData[byteIndex + 1] = 255;
            else if (rawData[byteIndex + 1] + GREEN > 0)
                rawData[byteIndex + 1] = (unsigned char)(rawData[byteIndex + 1] + GREEN);
            else
                rawData[byteIndex + 1] = 0;
            // Blue channel.
            if (rawData[byteIndex + 2] + BLUE > 255)
                rawData[byteIndex + 2] = 255;
            else if (rawData[byteIndex + 2] + BLUE > 0)
                rawData[byteIndex + 2] = (unsigned char)(rawData[byteIndex + 2] + BLUE);
            else
                rawData[byteIndex + 2] = 0;
        }
        byteIndex += 4;
    }

    ctx = CGBitmapContextCreate(rawData,
                                width,
                                height,
                                8,
                                bytesPerRow, // must match the buffer filled above, not CGImageGetBytesPerRow(imageRef)
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);
    CGImageRef NewimageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:NewimageRef];
    CGContextRelease(ctx);
    free(rawData);
    CGImageRelease(NewimageRef);
    return rawImage;
}
Now, on any button action, you can set the R, G, B and image values and get the processed image back. Then place the processed images' frames so that part of one image is covered by the other; for instance, with the jeans image, place the small portion near the belt over the shirt image.
Finally I came up with the solution: I was missing a check on the alpha value, so the transparent part of the image was what caused the problem. Thanks all.
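
For reference, the fix amounts to skipping fully transparent pixels inside the loop above; a minimal sketch of that guard, assuming the same RGBA8888 buffer (alpha is the fourth byte of each pixel):

// Only shift the color of pixels that are actually visible, so the
// transparent padding around the garment is left alone.
if (rawData[byteIndex + 3] > 0) {   // alpha component of this pixel
    // ... apply the red/green/blue adjustments shown above ...
}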

image manipulation using slider in iphone

Is there a way to apply brightness to UIImages? For example, I have a UIImageView and inside it a UIImage. I want to adjust its brightness with a UISlider, without using GLImageProcessing.
Please help me solve this problem, and please don't suggest GLImageProcessing; I want to do it without it.
Here is the code:
CGImageRef inImage = currentImage.CGImage;
CFDataRef ref = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
UInt8 *buf = (UInt8 *) CFDataGetBytePtr(ref);
int length = CFDataGetLength(ref);
float value2 = (1 + slider.value - 0.5);
NSLog(@"%f", value2);
for (int i = 0; i < length; i += 4)
{
    int r = i;
    int g = i + 1;
    int b = i + 2;
    int red = buf[r];
    int green = buf[g];
    int blue = buf[b];
    buf[r] = SAFECOLOR(red * value2);
    buf[g] = SAFECOLOR(green * value2);
    buf[b] = SAFECOLOR(blue * value2);
}
CGContextRef ctx = CGBitmapContextCreate(buf,
                                         CGImageGetWidth(inImage),
                                         CGImageGetHeight(inImage),
                                         CGImageGetBitsPerComponent(inImage),
                                         CGImageGetBytesPerRow(inImage),
                                         CGImageGetColorSpace(inImage),
                                         CGImageGetAlphaInfo(inImage));
CGImageRef img = CGBitmapContextCreateImage(ctx);
[photoEditView setImage:[UIImage imageWithCGImage:img]];
CFRelease(ref);
CGContextRelease(ctx);
CGImageRelease(img);
Define this at the top:
#define SAFECOLOR(color) MIN(255,MAX(0,color))
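
One way to hook this up is to run the snippet from the slider's Value Changed handler, always starting from the untouched original image so repeated adjustments don't compound. A rough sketch; the action, originalImage property, and applyBrightness:toImage: helper below are placeholders that wrap the code above:

// Hypothetical action wired to the UISlider's Value Changed event.
- (IBAction)brightnessChanged:(UISlider *)sender {
    // applyBrightness:toImage: is assumed to wrap the code above and
    // return the adjusted UIImage.
    photoEditView.image = [self applyBrightness:sender.value
                                        toImage:self.originalImage];
}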
Have a look at Apple's demo app GLImageProcessing; it's really fast.
Creating a UIImage from OpenGL:
-(UIImage *)glToUIImage {
    NSInteger myDataLength = 320 * 480 * 4;
    // allocate an array and read the pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into a new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for (int y = 0; y < 480; y++)
    {
        for (int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed
    // make a data provider with the data.
    // Note: with a NULL release callback, buffer2 is never freed; supply a
    // callback (or copy the data) if you call this often.
    CGDataProviderRef provider = CGDataProviderCreateWithData(
        NULL, buffer2, myDataLength, NULL);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaNone; // RGBA data with alpha ignored; kCGImageAlphaNoneSkipLast may be a better match
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider); // the image keeps its own reference
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    //UIImageWriteToSavedPhotosAlbum(myImage, nil, nil, nil);
    [[NSUserDefaults standardUserDefaults] setObject:UIImageJPEGRepresentation(myImage, 1) forKey:@"image"];
    return myImage;
}

How to crop the image in iPhone

I want to do the same thing as asked in this question.
In my app I want to crop the image the way Facebook does image cropping. Can anyone point me to a good tutorial or some sample code? The link I have provided completely describes my requirement.
You may create a new image with any properties. Here is my function which does that; you just need to supply your own parameters for the new image. In my case the image is not cropped; I am just applying an effect that moves pixels from their original place to another. But if you initialize the new image with a different height and width, you can copy any range of pixels you need from the old image into the new one:
-(UIImage *)Color:(UIImage *)img
{
    int R;
    float m_width = img.size.width;
    float m_height = img.size.height;
    if (m_width > m_height) R = m_height * 0.9;
    else R = m_width * 0.9;
    // later we need these values as both float and int; you could just cast
    // with (int) and (float) where needed instead of keeping separate copies
    int m_wint = (int)m_width;
    int m_hint = (int)m_height;
    CGRect imageRect;
    // check the image orientation; we work with the image pixel by pixel,
    // so the top side has to be at the top
    if (img.imageOrientation == UIImageOrientationUp
        || img.imageOrientation == UIImageOrientationDown)
    {
        imageRect = CGRectMake(0, 0, m_wint, m_hint);
    }
    else
    {
        imageRect = CGRectMake(0, 0, m_hint, m_wint);
    }
    uint32_t *rgbImage = (uint32_t *) malloc(m_wint * m_hint * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rgbImage, m_wint, m_hint, 8, m_wint * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast);
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGContextSetShouldAntialias(context, NO);
    CGContextTranslateCTM(context, 0, m_hint);
    CGContextScaleCTM(context, 1.0, -1.0);
    switch (img.imageOrientation) {
        case UIImageOrientationRight:
        {
            CGContextRotateCTM(context, M_PI / 2);
            CGContextTranslateCTM(context, 0, -m_wint);
        } break;
        case UIImageOrientationLeft:
        {
            CGContextRotateCTM(context, -M_PI / 2);
            CGContextTranslateCTM(context, -m_hint, 0);
        } break;
        case UIImageOrientationUp:
        {
            CGContextTranslateCTM(context, m_wint, m_hint);
            CGContextRotateCTM(context, M_PI);
        } break;
        default:
            break;
    }
    CGContextDrawImage(context, imageRect, img.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // here is the new image; you can change m_wint and m_hint as you want
    uint8_t *result = (uint8_t *) calloc(m_wint * m_hint * sizeof(uint32_t), 1);
    for (int y = 0; y < m_hint; y++) // new m_hint here
    {
        float fy = y;
        // (xx, yy) - coordinates of the pixel in the OLD image
        double yy = (m_height * (asinf(m_height / (2 * R)) - asin(((m_height / 2) - fy) / R))) /
                    (2 * asin(m_height / (2 * R)));
        for (int x = 0; x < m_wint; x++) // new m_wint here
        {
            float fx = x;
            double xx = (m_width * (asin(m_width / (2 * R)) - asin(((m_width / 2) - fx) / R))) /
                        (2 * asin(m_width / (2 * R)));
            uint32_t rgbPixel = rgbImage[(int)yy * m_wint + (int)xx];
            int intRedSource = (rgbPixel >> 24) & 255;
            int intGreenSource = (rgbPixel >> 16) & 255;
            int intBlueSource = (rgbPixel >> 8) & 255;
            result[(y * (int)m_wint + x) * 4] = 0;
            result[(y * (int)m_wint + x) * 4 + 1] = intBlueSource;
            result[(y * (int)m_wint + x) * 4 + 2] = intGreenSource;
            result[(y * (int)m_wint + x) * 4 + 3] = intRedSource;
        }
    }
    free(rgbImage);
    colorSpace = CGColorSpaceCreateDeviceRGB();
    context = CGBitmapContextCreate(result, m_wint, m_hint, 8, m_wint * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast); // new m_wint and m_hint as well
    CGImageRef image1 = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *resultUIImage = [UIImage imageWithCGImage:image1];
    CGImageRelease(image1);
    @try {
        free(result);
    }
    @catch (NSException *e) {
        NSLog(@"proc. Exception: %@", e);
    }
    return resultUIImage;
}
Alternatively, to crop to a rectangle directly:
CGRect rectImage = CGRectMake(p1.x, p1.y, p2.x - p1.x, p4.y - p1.y);
// Create a bitmap image from the original image data,
// using the rectangle to specify the desired crop area
CGImageRef imageRef = CGImageCreateWithImageInRect([imageForCropping CGImage], rectImage);
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
imageView1 = [[UIImageView alloc] initWithFrame:CGRectMake(p1.x, p1.y, p2.x - p1.x, p4.y - p1.y)];
imageView1.image = croppedImage;
[self.view addSubview:imageView1];
CGImageRelease(imageRef);

Making touched area of image transparent in iPhone

I want to make a UIImage's touched pixel transparent.
I saw iPhone Objective C: How to get a pixel's color of the touched point on an UIImageView?
Using that code I can locate the image's touched pixel, but I don't know how to make that pixel transparent and update the UIImage.
Please help me.
Hope these help:
What is the fastest way to draw single pixels directly to the screen in an iPhone application?
From this SO question
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];
    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // Now rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red = (rawData[byteIndex] * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    free(rawData);
    return result;
}
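
To actually clear the touched pixel, the same kind of bitmap context can be reused: draw the image into an RGBA buffer, zero the four bytes at the touched coordinate, and rebuild a UIImage from the context. A minimal sketch under those assumptions (the method name is a placeholder, and point is expected in image pixel coordinates):

// Sketch: set one pixel's color and alpha to zero and rebuild the image.
- (UIImage *)imageByClearingPixelAt:(CGPoint)point inImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    unsigned char *rawData = calloc(height * width * 4, 1);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, 4 * width,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    // Zero out the RGBA bytes of the touched pixel; alpha = 0 makes it transparent.
    NSUInteger byteIndex = (4 * width * (NSUInteger)point.y) + (NSUInteger)point.x * 4;
    rawData[byteIndex]     = 0;  // red
    rawData[byteIndex + 1] = 0;  // green
    rawData[byteIndex + 2] = 0;  // blue
    rawData[byteIndex + 3] = 0;  // alpha

    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    CGContextRelease(context);
    free(rawData);
    return result;
}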

How can I choose a UIImage for glReadPixels?

I have a view with 50 images. Images may overlap. I want (for example) to select image number 33 and find a pixel's color. How can I do this?
PS: I use glReadPixels.
Unless you want to kill time applying the image as a texture first, you wouldn't use glReadPixels. You can do this directly from the UIImage instead:
void pixelExamine(UIImage *image)
{
    CGImageRef colorImage = image.CGImage;
    int width = CGImageGetWidth(colorImage);
    int height = CGImageGetHeight(colorImage);
    uint32_t *pixels = (uint32_t *) malloc(width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), colorImage);
    int x;
    int y;
    uint8_t *rgbaPixel;
    for (y = 0; y < height; y++)
    {
        rgbaPixel = (uint8_t *) &pixels[y * width];
        for (x = 0; x < width; x++, rgbaPixel += 4)
        {
            // rgbaPixel[0] = ALPHA 0..255
            // rgbaPixel[3] = RED   0..255
            // rgbaPixel[2] = GREEN 0..255
            // rgbaPixel[1] = BLUE  0..255
        }
    }
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
}
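
If you only need one pixel (say, the color under a touch on image number 33), the same buffer can be indexed directly instead of looping over every pixel; a small sketch built on the byte layout noted in the comments above (x, y, pixels and width are the variables from pixelExamine):

// Read a single pixel from the buffer filled in by CGContextDrawImage above.
// With this bitmap layout the bytes are A, B, G, R.
uint8_t *p = (uint8_t *) &pixels[y * width + x];
UIColor *color = [UIColor colorWithRed:p[3] / 255.0
                                 green:p[2] / 255.0
                                  blue:p[1] / 255.0
                                 alpha:p[0] / 255.0];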