How to set pixels transparent and later non-transparent - iPhone

I have code that changes the pixels in a UIImage. I need to make the pixels at the touched locations transparent, and then make them non-transparent again when I touch the same spot later. How can I do this?
Code
// Creates a new image with the changed pixels
- (UIImage *)fromImage:(UIImage *)source
           colorBuffer:(NSArray *)colors erase:(BOOL)eraser
{
    CGContextRef ctx;
    CGImageRef imageRef = [source CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // rawData now holds the image in the RGBA8888 pixel format, 4 bytes per pixel.
    int byteIndex = 0;
    for (int ii = 0; ii < width * height; ++ii)
    {
        if (eraser)
        {
            // SET PIXEL TRANSPARENT
        }
        else
        {
            // SET PIXEL NON-TRANSPARENT
        }
        byteIndex += 4;
    }
    ctx = CGBitmapContextCreate(rawData,
                                CGImageGetWidth(imageRef),
                                CGImageGetHeight(imageRef),
                                8,
                                bytesPerRow,
                                colorSpace,
                                kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *rawImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(ctx);
    free(rawData);
    return rawImage;
}
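For reference, one way to fill in those two branches is to work on the alpha byte of each RGBA pixel (byteIndex + 3). Note that because the bitmap uses premultiplied alpha, the colour bytes should be zeroed as well when erasing, and once they are zeroed they cannot be recovered from rawData alone; restoring usually means keeping a copy of the untouched pixel buffer and copying back from it. A rough sketch, where originalData is a hypothetical saved copy of the image's original RGBA bytes:

if (eraser)
{
    // Erase: make the pixel fully transparent. With premultiplied alpha the
    // colour bytes are zeroed too, so the data stays consistent.
    rawData[byteIndex]     = 0;
    rawData[byteIndex + 1] = 0;
    rawData[byteIndex + 2] = 0;
    rawData[byteIndex + 3] = 0;
}
else
{
    // Restore: copy the pixel back from the saved original buffer
    // (originalData is assumed to have been captured before any erasing).
    rawData[byteIndex]     = originalData[byteIndex];
    rawData[byteIndex + 1] = originalData[byteIndex + 1];
    rawData[byteIndex + 2] = originalData[byteIndex + 2];
    rawData[byteIndex + 3] = originalData[byteIndex + 3];
}

To limit the effect to the touched area, only apply this to pixels inside the touch radius rather than looping over the whole image.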

Related

Inappropriate image display due to overlapping in CGContext

I can't find the solution for this:
I have two image views, each with a different image: image_1 (a person's jeans) and image_2 (a person's shirt). When I change the RGB values of every pixel in image_1 or image_2 individually, I get the expected result. But whenever one of the two frames slightly overlaps the other after both have been processed, the problem occurs. Please help. This is how I am processing the image:
-(UIImage *)ColorChangeProcessing :(int )redvalue greenValue:(int)greenvalue blueValue:(int)bluevalue imageUsed : (UIImage *)image
{
CGContextRef ctx;
CGImageRef imageRef = [image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
int byteIndex = (bytesPerRow * 0) + 0 * bytesPerPixel,RED = redvalue,GREEN=greenvalue,BLUE = bluevalue;
for (int ii = 0 ; ii < width * height ; ++ii)
{
if(rawData[byteIndex] != 0 || rawData[byteIndex+1] != 0 || rawData[byteIndex+2] != 0){
if ((((rawData[byteIndex])+RED)) > 255)
{
rawData[byteIndex] = (char)255;
}
else if((((rawData[byteIndex])+RED)) >0)
{
rawData[byteIndex] = (char) (((rawData[byteIndex] * 1.0) + RED));
}
else
{
rawData[byteIndex] = (char)0;
}
if ((((rawData[byteIndex+1])+GREEN)) > 255)
{
rawData[byteIndex+1] = (char)255;
}
else if((((rawData[byteIndex+1])+GREEN))>0)
{
rawData[byteIndex+1] = (char) (((rawData[byteIndex+1] * 1.0) + GREEN));
}
else
{
rawData[byteIndex+1] = (char)0;
}
if ((((rawData[byteIndex+2])+BLUE)) > 255)
{
rawData[byteIndex+2] = (char)255;
}
else if((((rawData[byteIndex+2])+BLUE))>0)
{
rawData[byteIndex+2] = (char) (((rawData[byteIndex+2] * 1.0) + BLUE));
}
else
{
rawData[byteIndex+2] = (char)0;
}
}
byteIndex += 4;
}
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedLast );
CGImageRef NewimageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:NewimageRef];
CGContextRelease(ctx);
free(rawData);
CGImageRelease(NewimageRef);
return rawImage;
}
Now, on any button action, you can set the R, G, B and image values and get the processed image. Then try placing the processed images' frames so that part of one image is covered by the other; for example, with the jeans image, place the small portion near the belt over the shirt image.
Finally I came up with the solution: I was failing to check the alpha value, so the transparent part of the image was what created the problems. Thanks all.
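In other words, the fix amounts to skipping fully transparent pixels before adjusting the colour channels. A minimal sketch of that check inside the existing loop (the alpha byte is the fourth byte of each RGBA8888 pixel):

// Only adjust pixels that are not fully transparent.
if (rawData[byteIndex + 3] != 0)
{
    // ... the existing red/green/blue clamping code goes here ...
}
byteIndex += 4;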

iPhone SDK - <Error>: CGBitmapContextCreate: unsupported color space

I was able to find a method to help me modify the alpha value for a pixel in a UIImage in my app, but I am running into two errors (the second is most certainly caused by the first). I can't, however, figure out what is going on.
My method:
- (void)modifyAlpha:(int)x and:(int)y {
CGContextRef ctx;
CGImageRef imageRef = [scratchOffImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
rawData[byteIndex+3] = (char) (255); //Change the pixel value
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedFirst ); //This line causes an error: incorrect colorspace
imageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
scratchOffImage = rawImage;
[scratchOffImageView setImage:scratchOffImage];
free(rawData);
}
The error:
<Error>: CGBitmapContextCreate: unsupported color space.
is being thrown on the line:
ctx = CGBitmapContextCreate(rawData, ...
and then the second error:
<Error>: CGBitmapContextCreateImage: invalid context 0x0
is being thrown on the line:
imageRef = CGBitmapContextCreateImage (ctx);
My image is included in the bundle. I originally made it by exporting it with Photoshop's Save for Web function, using PNG-8 with transparency as the format.
When I ran the sample code that the function was originally used in, the sample image worked fine; however, my image doesn't.
Does anyone have any idea how I can debug this? Might my input PNG be formatted incorrectly, and how can I check?
Cheers,
Brett
EDIT 1: The original source code came from the example found here. The sample shows a conversion to greyscale, whereas I only need to change the alpha value.
EDIT 2: I have tried saving my image as a PNG-8 and a PNG-24, both with no luck.
A PNG-8 uses an 8-bit indexed color space. Quartz doesn't support indexed color spaces for CGBitmapContext. The CGBitmapContextCreate documentation says "Note that indexed color spaces are not supported for bitmap graphics contexts."
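If you want to confirm that an indexed color space is what you are hitting, a quick diagnostic along these lines (just a sketch, not part of the fix) will tell you the color space model of the decoded PNG:

CGColorSpaceRef imageColorSpace = CGImageGetColorSpace(imageRef);
if (CGColorSpaceGetModel(imageColorSpace) == kCGColorSpaceModelIndexed) {
    NSLog(@"Image uses an indexed color space; CGBitmapContextCreate won't accept it.");
}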
Instead of passing CGImageGetColorSpace( imageRef ) as the color space in your second call to CGBitmapContextCreate, you want to pass the same color space you used in your first call (your colorSpace variable).
Anyway, there's no reason to even create a second CGBitmapContext. And you're leaking the result of CGBitmapContextCreateImage. Just do this:
- (void)modifyAlpha:(int)x and:(int)y {
CGImageRef imageRef = [scratchOffImage CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
rawData[byteIndex+3] = (char) (255); //Change the pixel value
imageRef = CGBitmapContextCreateImage (context);
CGContextRelease(context);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
scratchOffImage = rawImage;
[scratchOffImageView setImage:scratchOffImage];
free(rawData);
}

iOS grayscale/black & white

I am having an issue with some code that changes a UIImage to grayscale. It works correctly on iPhone/iPod, but on iPad whatever has already been drawn gets stretched and skewed in the process.
It also sometimes crashes, only on iPad, on the line:
imageRef = CGBitmapContextCreateImage (ctx);
Here is the code:
CGContextRef ctx;
CGImageRef imageRef = [self.drawImage.image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
int byteIndex = 0;
int grayScale;
for(int ii = 0 ; ii < width * height ; ++ii)
{
grayScale = (rawData[byteIndex] + rawData[byteIndex + 1] + rawData[byteIndex + 2]) / 3;
rawData[byteIndex] = (char)grayScale;
rawData[byteIndex+1] = (char)grayScale;
rawData[byteIndex+2] = (char)grayScale;
//rawData[byteIndex+3] = 255;
byteIndex += 4;
}
ctx = CGBitmapContextCreate(rawData,
CGImageGetWidth( imageRef ),
CGImageGetHeight( imageRef ),
8,
CGImageGetBytesPerRow( imageRef ),
CGImageGetColorSpace( imageRef ),
kCGImageAlphaPremultipliedLast );
imageRef = CGBitmapContextCreateImage (ctx);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGContextRelease(ctx);
self.drawImage.image = rawImage;
free(rawData);
Consider leveraging the CIColorMonochrome filter available in iOS 6.
void ToMonochrome()
{
var mono = new CIColorMonochrome();
mono.Color = CIColor.FromRgb(1, 1, 1);
mono.Intensity = 1.0f;
var uiImage = new UIImage("Images/photo.jpg");
mono.Image = CIImage.FromCGImage(uiImage.CGImage);
DisplayFilterOutput(mono, ImageView);
}
static void DisplayFilterOutput(CIFilter filter, UIImageView imageView)
{
CIImage output = filter.OutputImage;
if (output == null)
return;
var context = CIContext.FromOptions(null);
var renderedImage = context.CreateCGImage(output, output.Extent);
var finalImage = new UIImage(renderedImage);
imageView.Image = finalImage;
}
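That snippet is C# (Xamarin). A rough Objective-C equivalent, assuming a UIImageView property named imageView and an image named photo.jpg in the bundle, might look like this:

UIImage *uiImage = [UIImage imageNamed:@"photo.jpg"];
CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
[mono setValue:[CIImage imageWithCGImage:uiImage.CGImage] forKey:kCIInputImageKey];
[mono setValue:[CIColor colorWithRed:1 green:1 blue:1] forKey:@"inputColor"];
[mono setValue:@1.0f forKey:@"inputIntensity"];

CIImage *output = mono.outputImage;
if (output != nil) {
    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef rendered = [ciContext createCGImage:output fromRect:output.extent];
    self.imageView.image = [UIImage imageWithCGImage:rendered];
    CGImageRelease(rendered);
}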
Figured it out with some help elsewhere.
I got rid of an extra context and changed the bitmap format:
CGContextRef ctx;
CGImageRef imageRef = [self.drawImage.image CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
ctx = CGBitmapContextCreate(rawData,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), imageRef);
int byteIndex = 0;
int grayScale;
for(int ii = 0 ; ii < width * height ; ++ii)
{
grayScale = (rawData[byteIndex] + rawData[byteIndex + 1] + rawData[byteIndex + 2]) / 3;
rawData[byteIndex] = (char)grayScale;
rawData[byteIndex+1] = (char)grayScale;
rawData[byteIndex+2] = (char)grayScale;
//rawData[byteIndex+3] = 255;
byteIndex += 4;
}
imageRef = CGBitmapContextCreateImage(ctx);
UIImage* rawImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // release the image created above to avoid a leak
CGColorSpaceRelease(colorSpace);
CGContextRelease(ctx);
self.drawImage.image = rawImage;
free(rawData);

Changing an image through pixel manipulation (iPhone specific)

Similar questions must have been asked many times, but this one is a tad different.
I am manipulating the pixels of an image loaded into an image view from the photo library, using pixel values read from another image (rocks.jpg).
Below is the code:
CGContextRef ctx;
CGImageRef imageRef = [self.workingImage CGImage];
UIImage* image2 = [UIImage imageNamed:@"rocks.jpg"];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGColorSpaceRef colorSpace2 = CGColorSpaceCreateDeviceRGB();
CGColorSpaceRef colorSpace3 = CGColorSpaceCreateDeviceRGB();
CGImageRef cgimage2 = image2.CGImage;
NSUInteger width2 = CGImageGetWidth(cgimage2);
NSUInteger height2 = CGImageGetHeight(cgimage2);
struct pixel* pixels = (struct pixel*) calloc(1, width * height * sizeof(struct pixel));
struct pixel* pixels2 = (struct pixel*) calloc(1, width2 * height2 * sizeof(struct pixel));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
NSUInteger bpr2 = bytesPerPixel * width2;
//size_t bpp2 = CGImageGetBitsPerPixel(cgimage2);
NSUInteger bpc2 = 8;
CGContextRef context = CGBitmapContextCreate(pixels, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRef context2 = CGBitmapContextCreate(pixels2, width2, height2, bpc2, bpr2, colorSpace2,kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace2);
CGContextDrawImage(context2, CGRectMake(0, 0, width2, height2),cgimage2);
NSUInteger numberOfPixels = width*height;
while (numberOfPixels > 0)
{
if(pixels->g == 250)
{
pixels->r = pixels2->r;
pixels->g = pixels2->g;
pixels->b = pixels2->b;
pixels->a = pixels2->a;
// pixels->r = 0;
// pixels->g = 0;
// pixels->b = 0;
}
pixels++;
pixels2++;
numberOfPixels--;
}
ctx = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpace3,
kCGImageAlphaPremultipliedLast|kCGBitmapByteOrder32Big );
if (ctx == NULL)
{
NSLog(@"The context is null");
}
CGColorSpaceRelease(colorSpace3);
CGImageRef img2 = nil;
if (ctx != nil)
{
img2 = CGBitmapContextCreateImage (ctx);
}
//CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), rawImage.CGImage );
UIImage* rawImage = [UIImage imageWithCGImage:img2];
CGContextRelease(context);
CGContextRelease(context2);
CGContextRelease(ctx);
//self.workingImage = rawImage;
//[self.imageView setImage:self.workingImage];
//[imageView setImage:rawImage];
imageView.image = rawImage;
if (pixels != NULL)
{
pixels = NULL;
free(pixels);
}
if (pixels2 != NULL)
{
pixels2 = NULL;
free(pixels2);
}
The problem is that although I can read and manipulate the pixel values, the end result is that the second image (rocks.jpg) ends up in the view instead of the manipulated image.
Can someone help me out with this?
Thanks in advance.
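A likely cause, judging from the code above: the while loop increments the pixels and pixels2 pointers themselves, so by the time ctx is created, pixels no longer points at the start of its buffer (it points one past the end, which can land in pixels2's memory, which would explain rocks.jpg appearing). The free calls at the bottom are also inverted, setting the pointers to NULL before freeing them. A sketch of the loop using an index so the original pointers stay intact (bounding the index by both buffers is an extra safety assumption on my part):

NSUInteger pixelCount = MIN(width * height, width2 * height2);
for (NSUInteger i = 0; i < pixelCount; ++i)
{
    if (pixels[i].g == 250)
    {
        pixels[i] = pixels2[i]; // copy r, g, b and a from the second image
    }
}
// ... create ctx from the still-unmodified 'pixels' pointer as before ...
// and once the UIImage has been created:
free(pixels);
pixels = NULL;
free(pixels2);
pixels2 = NULL;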

Image processing Glamour filter on iPhone

I want to create an app in which I do some image processing, so I would like to know if there is any open-source image processing library available. I would also like to create a filter like this Glamour Filter; any help regarding this would be very much appreciated. If someone already has source code for sepia, black and white, rotate, or scale effects, please share it. Thanks.
Here is the code for the sepia effect:
-(UIImage*)makeSepiaScale:(UIImage*)image
{
CGImageRef cgImage = [image CGImage];
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);
UInt8* data = (UInt8*)CFDataGetBytePtr(bitmapData);
int width = image.size.width;
int height = image.size.height;
NSInteger myDataLength = width * height * 4;
for (int i = 0; i < myDataLength; i+=4)
{
UInt8 r_pixel = data[i];
UInt8 g_pixel = data[i+1];
UInt8 b_pixel = data[i+2];
int outputRed = (r_pixel * .393) + (g_pixel *.769) + (b_pixel * .189);
int outputGreen = (r_pixel * .349) + (g_pixel *.686) + (b_pixel * .168);
int outputBlue = (r_pixel * .272) + (g_pixel *.534) + (b_pixel * .131);
if(outputRed>255)outputRed=255;
if(outputGreen>255)outputGreen=255;
if(outputBlue>255)outputBlue=255;
data[i] = outputRed;
data[i+1] = outputGreen;
data[i+2] = outputBlue;
}
CGDataProviderRef provider2 = CGDataProviderCreateWithData(NULL, data, myDataLength, NULL);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider2, NULL, NO, renderingIntent);
CGColorSpaceRelease(colorSpaceRef); // YOU CAN RELEASE THIS NOW
CGDataProviderRelease(provider2); // YOU CAN RELEASE THIS NOW
CFRelease(bitmapData);
UIImage *sepiaImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // YOU CAN RELEASE THIS NOW
return sepiaImage;
}
Here is the code for the black & white effect:
- (UIImage*) createGrayCopy:(UIImage*) source {
int width = source.size.width;
int height = source.size.height;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate (nil, width,
height,
8, // bits per component
0,
colorSpace,
kCGImageAlphaNone);
CGColorSpaceRelease(colorSpace);
if (context == NULL) {
return nil;
}
CGContextDrawImage(context,
CGRectMake(0, 0, width, height), source.CGImage);
CGImageRef grayImageRef = CGBitmapContextCreateImage(context);
UIImage *grayImage = [UIImage imageWithCGImage:grayImageRef];
CGImageRelease(grayImageRef); // avoid leaking the CGImage created above
CGContextRelease(context);
return grayImage;
}
Search for OpenCV