How can I choose a UIImage for glReadPixels? - iPhone

I have a view with 50 images. Images may overlap. I want (for example) to select image number 33 and find a pixel's color. How can I do this?
P.S. I am using glReadPixels.

Unless you want to kill time applying the image as a texture first, you wouldn't use glReadPixels. You can do this directly from the UIImage instead:
void pixelExamine(UIImage *image)
{
    CGImageRef colorImage = image.CGImage;
    int width = CGImageGetWidth(colorImage);
    int height = CGImageGetHeight(colorImage);
    uint32_t *pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Render the image into our own buffer so the bytes can be read directly.
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t),
                                                 colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), colorImage);
    for (int y = 0; y < height; y++)
    {
        uint8_t *rgbaPixel = (uint8_t *)&pixels[y * width];
        for (int x = 0; x < width; x++, rgbaPixel += 4)
        {
            // With this byte order the components are laid out as:
            // rgbaPixel[0] = ALPHA 0..255
            // rgbaPixel[3] = RED   0..255
            // rgbaPixel[2] = GREEN 0..255
            // rgbaPixel[1] = BLUE  0..255
        }
    }
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
}
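For the original question (50 possibly overlapping views, pick number 33), you would select the relevant view first and then sample just the pixel you need rather than scanning the whole bitmap. A minimal sketch, assuming an imageViews NSArray holding the 50 UIImageViews and a target pixel (px, py); all three names are placeholders:

// Hypothetical call site: sample one pixel of image #33 using the
// same ABGR byte layout as pixelExamine() above.
UIImageView *view = [imageViews objectAtIndex:33];
CGImageRef colorImage = view.image.CGImage;
int width  = (int)CGImageGetWidth(colorImage);
int height = (int)CGImageGetHeight(colorImage);
int px = 10, py = 20; // placeholder pixel coordinates (top-left origin)

uint32_t pixel = 0; // a 1x1 RGBA buffer is enough for a single pixel
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(&pixel, 1, 1, 8, sizeof(uint32_t), colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
// Offset the draw so that (px, py) lands on the context's only pixel;
// Core Graphics uses a bottom-left origin, hence the flipped y term.
CGContextDrawImage(context, CGRectMake(-px, py - height + 1, width, height), colorImage);
uint8_t *rgba = (uint8_t *)&pixel;
NSLog(@"A=%d R=%d G=%d B=%d", rgba[0], rgba[3], rgba[2], rgba[1]);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);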

Related

Inappropriate image display due to overlapping in CGContext

I can't find the solution for this:
I have 2 image views, each with a different image - image_1 (jeans of a person) and image_2 (shirt of a person). When I change the RGB values individually for each and every pixel of image_1 or image_2, I get the perfect result. But whenever one of the two frames slightly overlaps the other after both have been processed, the problem occurs. Please help. This is how I am processing the image:
- (UIImage *)ColorChangeProcessing:(int)redvalue greenValue:(int)greenvalue blueValue:(int)bluevalue imageUsed:(UIImage *)image
{
    CGContextRef ctx;
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    int byteIndex = 0;
    int RED = redvalue, GREEN = greenvalue, BLUE = bluevalue;
    for (int ii = 0; ii < width * height; ++ii)
    {
        // Note: the original compared against the multi-character constant '/0';
        // comparing against 0 is what was intended.
        if (rawData[byteIndex] != 0 || rawData[byteIndex + 1] != 0 || rawData[byteIndex + 2] != 0)
        {
            // Shift each channel by the requested amount, clamped to 0..255.
            if (rawData[byteIndex] + RED > 255)
                rawData[byteIndex] = 255;
            else if (rawData[byteIndex] + RED > 0)
                rawData[byteIndex] = (unsigned char)(rawData[byteIndex] + RED);
            else
                rawData[byteIndex] = 0;

            if (rawData[byteIndex + 1] + GREEN > 255)
                rawData[byteIndex + 1] = 255;
            else if (rawData[byteIndex + 1] + GREEN > 0)
                rawData[byteIndex + 1] = (unsigned char)(rawData[byteIndex + 1] + GREEN);
            else
                rawData[byteIndex + 1] = 0;

            if (rawData[byteIndex + 2] + BLUE > 255)
                rawData[byteIndex + 2] = 255;
            else if (rawData[byteIndex + 2] + BLUE > 0)
                rawData[byteIndex + 2] = (unsigned char)(rawData[byteIndex + 2] + BLUE);
            else
                rawData[byteIndex + 2] = 0;
        }
        byteIndex += 4;
    }

    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                CGImageGetBytesPerRow(imageRef),
                                CGImageGetColorSpace(imageRef),
                                kCGImageAlphaPremultipliedLast);
    CGImageRef NewimageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:NewimageRef];
    CGContextRelease(ctx);
    free(rawData);
    CGImageRelease(NewimageRef);
    return rawImage;
}
Now, on any button action, you can set the R, G, B and image values and get the processed image back. Then just try to place those processed images' frames so that part of one image is covered by the other. For instance, if you have the jeans image, try to place the small portion near the belt over the shirt image.
Finally I came up with the solution: I was missing a check on the alpha value, so the transparent part of the image was what created the problems. Thanks all.
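For reference, the fix amounts to skipping pixels whose alpha byte is zero, so fully transparent regions are never recolored. A minimal sketch of that guard, pulled out of ColorChangeProcessing into a helper (shiftOpaquePixels is a hypothetical name; the layout assumptions match the code above):

#import <Foundation/Foundation.h>

// Hypothetical helper: shift RGB of opaque pixels only, leaving
// transparent pixels (alpha == 0) untouched. Assumes 4 bytes per
// pixel with alpha in the fourth byte, as in the code above.
static void shiftOpaquePixels(unsigned char *rawData, size_t pixelCount,
                              int red, int green, int blue)
{
    for (size_t i = 0; i < pixelCount; i++) {
        size_t byteIndex = i * 4;
        if (rawData[byteIndex + 3] == 0) {
            continue; // transparent: this was the missing check
        }
        int r = rawData[byteIndex]     + red;
        int g = rawData[byteIndex + 1] + green;
        int b = rawData[byteIndex + 2] + blue;
        rawData[byteIndex]     = (unsigned char)MAX(0, MIN(255, r));
        rawData[byteIndex + 1] = (unsigned char)MAX(0, MIN(255, g));
        rawData[byteIndex + 2] = (unsigned char)MAX(0, MIN(255, b));
    }
}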

How to get the correct pixel from a CGImage in a Retina Display device?

I have the following code, which correctly retrieves the pixel color (RGB) touched by the user's finger on an image, based on the touch's current position (X and Y coordinates). This piece of code works fine, but not on Retina display devices:
- (void)drawFirstColorWithXCoord:(CGFloat)xCoor andWithYCoord:(CGFloat)yCoor
{
    CGImageRef imageRef = [image CGImage]; // image is an ivar of type UIImage
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // GET PIXEL FROM POINT
    int index = 4 * ((width * round(yCoor)) + round(xCoor));
    int R = rawData[index];
    int G = rawData[index + 1];
    int B = rawData[index + 2];
    NSLog(@"%d %d %d", R, G, B);
    free(rawData);
}
I would like to know what I should tweak in this code to make it work on Retina display devices.
Any advice would be much appreciated. Thanks in advance for your help.
try:
CGFloat scale = [[UIScreen mainScreen] scale];
NSUInteger bytesPerRow = width * scale * bytesPerPixel;
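Note that bytesPerRow alone may not be the whole story: on a Retina device the CGImage usually has scale times as many pixels as the view has points, while the touch location arrives in points. A hedged sketch of the usual coordinate fix, replacing the index line in the method above (width is CGImageGetWidth, already in pixels):

// Touch coordinates come in points; the CGImage is in pixels, so scale
// the point into pixel space before computing the byte index.
CGFloat scale = image.scale; // or [[UIScreen mainScreen] scale]
int px = (int)round(xCoor * scale);
int py = (int)round(yCoor * scale);
int index = 4 * (width * py + px);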

How to filter colors at runtime? - iPhone

I have an image where each pixel has one of the three primary colors (RGBA).
And I have to change this image at run time, replacing each color channel with another (a run-time filter).
I have the .PVR and the corresponding GL texture, but how can I filter / change colors at run time?
I cannot use OpenGL ES 2.0.
But I can use Cocos2D and OpenGL ES 1.x.
:(
It's not trivial, but it could be useful to you:
// Component offsets for the kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast
// layout used below (these defines were implied by the original snippet)
#define ALPHA 0
#define BLUE  1
#define GREEN 2
#define RED   3

// Converts the image to black and white (grayscale)
+ (UIImage *)convertImageToGrayScale:(UIImage *)i {
    CGSize size = [i size];
    int width = size.width;
    int height = size.height;
    // the pixels will be painted to this array
    uint32_t *pixels = (uint32_t *)malloc(width * height * sizeof(uint32_t));
    // clear the pixels so any transparency is preserved
    memset(pixels, 0, width * height * sizeof(uint32_t));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // create a context with RGBA pixels
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * sizeof(uint32_t), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
    // paint the bitmap to our context which will fill in the pixels array
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), [i CGImage]);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            uint8_t *rgbaPixel = (uint8_t *)&pixels[y * width + x];
            // convert to grayscale using the recommended method:
            // http://en.wikipedia.org/wiki/Grayscale#Converting_color_to_grayscale
            uint32_t gray = 0.3 * rgbaPixel[RED] + 0.59 * rgbaPixel[GREEN] + 0.11 * rgbaPixel[BLUE];
            // set the pixels to gray
            rgbaPixel[RED] = gray;
            rgbaPixel[GREEN] = gray;
            rgbaPixel[BLUE] = gray;
        }
    }
    // create a new CGImageRef from our context with the modified pixels
    CGImageRef image = CGBitmapContextCreateImage(context);
    // we're done with the context, color space, and pixels
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    // make a new UIImage to return
    UIImage *resultUIImage = [UIImage imageWithCGImage:image];
    // we're done with the image now too
    CGImageRelease(image);
    return resultUIImage;
}
This converts the UIImage received into a grayscale image.
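A call site is a one-liner; with Cocos2D / GLES 1.x you could then rebuild the texture from the processed UIImage. A hedged sketch (ImageFilters, the asset name, and the sprite are placeholders; initWithImage: is the initializer from older Cocos2D 1.x versions):

// Hypothetical usage: run the filter, then hand the result to Cocos2D.
UIImage *gray = [ImageFilters convertImageToGrayScale:[UIImage imageNamed:@"enemy.png"]];
CCTexture2D *texture = [[[CCTexture2D alloc] initWithImage:gray] autorelease];
[sprite setTexture:texture];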
This is the code I used to manipulate an RGBA picture:
- (UIImage *)convertGrayScaleImageRedImage:(CGImageRef)inImage
{
    CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);
    int length = CFDataGetLength(m_DataRef);
    for (int i = 0; i < length; i += 4) {
        int r = i;
        int g = i + 1;
        int b = i + 2;
        //int a = i + 3; // alpha
        NSLog(@"r=%i, g=%i, b=%i", m_PixelBuf[r], m_PixelBuf[g], m_PixelBuf[b]);
        m_PixelBuf[r] = 1;
        m_PixelBuf[g] = 0;
        m_PixelBuf[b] = 0;
        //m_PixelBuf[a] = m_PixelBuf[a]; // alpha
    }
    CGContextRef ctx = CGBitmapContextCreate(m_PixelBuf,
                                             CGImageGetWidth(inImage),
                                             CGImageGetHeight(inImage),
                                             CGImageGetBitsPerComponent(inImage),
                                             CGImageGetBytesPerRow(inImage),
                                             CGImageGetColorSpace(inImage),
                                             CGImageGetBitmapInfo(inImage));
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *redImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CFRelease(m_DataRef); // release the copied pixel data (leaked in the original)
    return redImage;
}
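Since the question asks for replacing one color channel with another at run time, the same byte walk generalizes to an arbitrary channel swap. A minimal sketch (the function name and the channel map are placeholders):

// Hypothetical channel-swap filter over an RGBA byte buffer: map[d]
// names the source channel (0=R, 1=G, 2=B) for destination channel d.
static void swapChannels(UInt8 *buf, long length, const int map[3])
{
    for (long i = 0; i + 3 < length; i += 4) {
        UInt8 src[3] = { buf[i], buf[i + 1], buf[i + 2] };
        buf[i]     = src[map[0]];
        buf[i + 1] = src[map[1]];
        buf[i + 2] = src[map[2]];
        // buf[i + 3] (alpha) is left untouched
    }
}

// Example: rotate the channels (R<-G, G<-B, B<-R):
//   int rotate[3] = {1, 2, 0};
//   swapChannels(m_PixelBuf, length, rotate);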

Image processing: Glamour filter on iPhone

I want to create an app in which I want to do some image processing. So I would like to know if there is any open-source image processing library available. I would also like to create a filter like this Glamour Filter; any help regarding this would be very much appreciated. If someone already has source code for sepia, black and white, rotate and scale effects, please send it. Thanks.
Here is the code for a sepia image:
- (UIImage *)makeSepiaScale:(UIImage *)image
{
    CGImageRef cgImage = [image CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    UInt8 *data = (UInt8 *)CFDataGetBytePtr(bitmapData);
    int width = image.size.width;
    int height = image.size.height;
    NSInteger myDataLength = width * height * 4;
    for (int i = 0; i < myDataLength; i += 4)
    {
        UInt8 r_pixel = data[i];
        UInt8 g_pixel = data[i + 1];
        UInt8 b_pixel = data[i + 2];
        // standard sepia matrix
        int outputRed = (r_pixel * .393) + (g_pixel * .769) + (b_pixel * .189);
        int outputGreen = (r_pixel * .349) + (g_pixel * .686) + (b_pixel * .168);
        int outputBlue = (r_pixel * .272) + (g_pixel * .534) + (b_pixel * .131);
        if (outputRed > 255) outputRed = 255;
        if (outputGreen > 255) outputGreen = 255;
        if (outputBlue > 255) outputBlue = 255;
        data[i] = outputRed;
        data[i + 1] = outputGreen;
        data[i + 2] = outputBlue;
    }
    CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, data, myDataLength, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                        colorSpaceRef, bitmapInfo, provider2, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef); // YOU CAN RELEASE THIS NOW
    CGDataProviderRelease(provider2);   // YOU CAN RELEASE THIS NOW
    CFRelease(bitmapData);
    UIImage *sepiaImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // YOU CAN RELEASE THIS NOW
    return sepiaImage;
}
Here is the code for the black & white effect:
- (UIImage *)createGrayCopy:(UIImage *)source
{
    int width = source.size.width;
    int height = source.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(nil, width, height,
                                                 8, // bits per component
                                                 0, // let CG compute bytesPerRow
                                                 colorSpace,
                                                 kCGImageAlphaNone);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        return nil;
    }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), source.CGImage);
    // Keep the CGImage in a named variable so it can be released
    // (the original leaked it by creating it inline).
    CGImageRef grayImageRef = CGBitmapContextCreateImage(context);
    UIImage *grayImage = [UIImage imageWithCGImage:grayImageRef];
    CGImageRelease(grayImageRef);
    CGContextRelease(context);
    return grayImage;
}
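Both methods hand back an autoreleased UIImage, so a call site is just (the asset name is a placeholder):

// Hypothetical call site for the two filters above.
UIImage *original = [UIImage imageNamed:@"portrait.png"];
UIImage *sepia = [self makeSepiaScale:original];
UIImage *gray  = [self createGrayCopy:original];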
Search for OpenCV

iPhone Image Processing--matrix convolution

I am implementing a matrix convolution blur on the iPhone. The following code converts the UIImage supplied as an argument of the blur function into a CGImageRef, and then stores the RGBA values in a standard C char array.
CGImageRef imageRef = imgRef.CGImage;
int width = imgRef.size.width;
int height = imgRef.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *pixels = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
Then the pixel values stored in the pixels array are convolved and stored in another array:
unsigned char *results = malloc((height) * (width) * 4);
Finally, these augmented pixel values are changed back into a CGImageRef, converted to a UIImage, and returned at the end of the function with the following code:
context = CGBitmapContextCreate(results, width, height,
                                bitsPerComponent, bytesPerRow, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef finalImage = CGBitmapContextCreateImage(context);
// Use finalImage here rather than creating (and leaking) a second CGImage.
UIImage *newImage = [UIImage imageWithCGImage:finalImage];
CGImageRelease(finalImage);
CGContextRelease(context);
NSLog(@"edges found");
free(results);
free(pixels);
CGColorSpaceRelease(colorSpace);
return newImage;
This works perfectly once. Then, when the image is put through the filter again, very odd pixel values, corresponding to input pixel values that don't exist, come back. Is there any reason why this should work the first time but not afterward? Below is the entirety of the function.
- (UIImage *)blur:(UIImage *)imgRef
{
    CGImageRef imageRef = imgRef.CGImage;
    int width = imgRef.size.width;
    int height = imgRef.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *pixels = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(pixels, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // 3x3 box-blur kernel
    float matrix[] = {1, 1, 1, 1, 1, 1, 1, 1, 1};
    float divisor = 9;
    float shift = 0;
    unsigned char *results = malloc(height * width * 4);
    // Note: at the last row/column the (y+1) and (index+1) taps below read
    // one pixel past the buffer; the loops should really stop at height-1
    // and width-1.
    for (int y = 1; y < height; y++) {
        for (int x = 1; x < width; x++) {
            float red = 0;
            float green = 0;
            float blue = 0;
            int multiplier = 1;
            if (y > 0 && x > 0) {
                // row above
                int index = (y - 1) * width + x;
                red = matrix[0] * multiplier * (float)pixels[4 * (index - 1)] +
                      matrix[1] * multiplier * (float)pixels[4 * (index)] +
                      matrix[2] * multiplier * (float)pixels[4 * (index + 1)];
                green = matrix[0] * multiplier * (float)pixels[4 * (index - 1) + 1] +
                        matrix[1] * multiplier * (float)pixels[4 * (index) + 1] +
                        matrix[2] * multiplier * (float)pixels[4 * (index + 1) + 1];
                blue = matrix[0] * multiplier * (float)pixels[4 * (index - 1) + 2] +
                       matrix[1] * multiplier * (float)pixels[4 * (index) + 2] +
                       matrix[2] * multiplier * (float)pixels[4 * (index + 1) + 2];
                // current row
                index = y * width + x;
                red += matrix[3] * multiplier * (float)pixels[4 * (index - 1)] +
                       matrix[4] * multiplier * (float)pixels[4 * (index)] +
                       matrix[5] * multiplier * (float)pixels[4 * (index + 1)];
                green += matrix[3] * multiplier * (float)pixels[4 * (index - 1) + 1] +
                         matrix[4] * multiplier * (float)pixels[4 * (index) + 1] +
                         matrix[5] * multiplier * (float)pixels[4 * (index + 1) + 1];
                blue += matrix[3] * multiplier * (float)pixels[4 * (index - 1) + 2] +
                        matrix[4] * multiplier * (float)pixels[4 * (index) + 2] +
                        matrix[5] * multiplier * (float)pixels[4 * (index + 1) + 2];
                // row below
                index = (y + 1) * width + x;
                red += matrix[6] * multiplier * (float)pixels[4 * (index - 1)] +
                       matrix[7] * multiplier * (float)pixels[4 * (index)] +
                       matrix[8] * multiplier * (float)pixels[4 * (index + 1)];
                green += matrix[6] * multiplier * (float)pixels[4 * (index - 1) + 1] +
                         matrix[7] * multiplier * (float)pixels[4 * (index) + 1] +
                         matrix[8] * multiplier * (float)pixels[4 * (index + 1) + 1];
                blue += matrix[6] * multiplier * (float)pixels[4 * (index - 1) + 2] +
                        matrix[7] * multiplier * (float)pixels[4 * (index) + 2] +
                        matrix[8] * multiplier * (float)pixels[4 * (index + 1) + 2];
                // normalize and clamp
                red = red / divisor + shift;
                green = green / divisor + shift;
                blue = blue / divisor + shift;
                if (red < 0) red = 0;
                if (green < 0) green = 0;
                if (blue < 0) blue = 0;
                if (red > 255) red = 255;
                if (green > 255) green = 255;
                if (blue > 255) blue = 255;
                int realPos = 4 * (y * width + x);
                results[realPos] = red;
                results[realPos + 1] = green;
                results[realPos + 2] = blue;
                results[realPos + 3] = 1; // hard-coded alpha: this turns out to be the bug (see below)
            } else {
                int realPos = 4 * (y * width + x);
                results[realPos] = 0;
                results[realPos + 1] = 0;
                results[realPos + 2] = 0;
                results[realPos + 3] = 1;
            }
        }
    }
    context = CGBitmapContextCreate(results, width, height,
                                    bitsPerComponent, bytesPerRow, colorSpace,
                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGImageRef finalImage = CGBitmapContextCreateImage(context);
    // Use finalImage directly instead of creating (and leaking) a second image.
    UIImage *newImage = [UIImage imageWithCGImage:finalImage];
    CGImageRelease(finalImage);
    CGContextRelease(context);
    free(results);
    free(pixels);
    CGColorSpaceRelease(colorSpace);
    return newImage;
}
THANKS!!!
The problem was that I was assuming the alpha value; it needs to be calculated just like the RGB values.
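In other words, alpha has to run through the same kernel as red, green and blue instead of being hard-coded to 1 (which, in a premultiplied format, means nearly invisible). A minimal sketch of the per-pixel change (convolveAlpha is a hypothetical helper, following the structure of the loop above):

// Hypothetical helper: convolve the alpha byte over the 3x3
// neighborhood with the same kernel used for the color channels.
static unsigned char convolveAlpha(const unsigned char *pixels, int width,
                                   int x, int y, const float kernel[9],
                                   float divisor, float shift)
{
    float a = 0;
    for (int ky = -1; ky <= 1; ky++) {
        for (int kx = -1; kx <= 1; kx++) {
            int index = (y + ky) * width + (x + kx);
            a += kernel[(ky + 1) * 3 + (kx + 1)] * (float)pixels[4 * index + 3];
        }
    }
    a = a / divisor + shift;
    if (a < 0) a = 0;
    if (a > 255) a = 255;
    return (unsigned char)a;
}

// In the loop above, instead of `results[realPos + 3] = 1;`:
//   results[realPos + 3] = convolveAlpha(pixels, width, x, y, matrix, divisor, shift);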