Making the touched area of an image transparent on iPhone

I want to make a UIImage's touched pixel transparent.
I saw iPhone Objective C: How to get a pixel's color of the touched point on an UIImageView?
Using that code, I can locate the touched pixel in the image, but I don't know how to make that pixel transparent and update the UIImage.
Please help me.

Hope these help:
What is the fastest way to draw single pixels directly to the screen in an iPhone application?
From this SO question
How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
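To then actually make the touched pixel transparent and update the image, you can reuse the same bitmap-context setup, zero out that pixel's bytes, and build a new UIImage from the context. Below is a minimal sketch under the same RGBA8888 premultiplied assumptions as the code above; the method name is mine, and the touch point must already be converted into image pixel coordinates:

- (UIImage *)imageByClearingPixelInImage:(UIImage *)image atX:(int)xx andY:(int)yy
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerRow = 4 * width;

    unsigned char *rawData = calloc(height * bytesPerRow, sizeof(unsigned char));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    // The context is premultiplied, so a fully transparent pixel is all zeros.
    // Note: CGContext rows run bottom-up relative to UIKit, so you may need
    // yy = height - 1 - touchY when converting from a touch location.
    NSUInteger byteIndex = (bytesPerRow * yy) + xx * 4;
    rawData[byteIndex]     = 0; // R
    rawData[byteIndex + 1] = 0; // G
    rawData[byteIndex + 2] = 0; // B
    rawData[byteIndex + 3] = 0; // A

    // Rebuild a UIImage from the modified buffer.
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    CGContextRelease(context);
    free(rawData);
    return newImage;
}

A single cleared pixel is essentially invisible on screen, so in practice you would clear a small rectangle of bytes around the touch point, then assign the result back with imageView.image = newImage; to update the display.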

Related

iPhone : How to get colour of each pixel of an image?

I want to get the color of every individual pixel of an image.
To elaborate:
Let's say I have an image named "SampleImage" that is 400 x 400 pixels.
Basically, I want to create a grid from 'SampleImage' which will have 400 x 400 squares, each filled with the color of the corresponding pixel in 'SampleImage'.
I know this is a little abstract, but I am a novice in iOS and don't know where to start.
Thanks in advance!
Use this; here is a more efficient solution:
// UIView+ColorOfPoint.h
@interface UIView (ColorOfPoint)
- (UIColor *)colorOfPoint:(CGPoint)point;
@end

// UIView+ColorOfPoint.m
#import "UIView+ColorOfPoint.h"
#import <QuartzCore/QuartzCore.h>

@implementation UIView (ColorOfPoint)

- (UIColor *)colorOfPoint:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.layer renderInContext:context];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    //NSLog(@"pixel: %d %d %d %d", pixel[0], pixel[1], pixel[2], pixel[3]);
    UIColor *color = [UIColor colorWithRed:pixel[0]/255.0 green:pixel[1]/255.0 blue:pixel[2]/255.0 alpha:pixel[3]/255.0];
    return color;
}

@end
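Usage is straightforward; for example, from a touch handler (a sketch, where self.imageView is whatever view hosts your image):

    // Sample the color under the user's finger.
    CGPoint point = [touch locationInView:self.imageView];
    UIColor *color = [self.imageView colorOfPoint:point];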
Hope it helps you.
This code worked flawlessly for me:
- (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
If you are a novice, you should consider doing something easier first. Anyway, what you need to do is set up a CGContextRef via CGBitmapContextCreate with enough data to hold your image. Once you create it, render your image into it via CGContextDrawImage. After that you will have a pointer to every pixel in your image. The code is similar to Nishant's answer, but instead of a 1x1 context you will use a 400x400 one, so you get all of the pixels at once.
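A minimal sketch of that idea, reusing the RGBA8888 layout from the answers above; the grid-drawing step is left as a comment since it depends on how you render your squares:

- (void)enumeratePixelsOfImage:(UIImage *)image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerRow = 4 * width;

    unsigned char *rawData = calloc(height * bytesPerRow, sizeof(unsigned char));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // One pass over the whole buffer instead of one 1x1 render per point.
    for (NSUInteger y = 0; y < height; y++) {
        for (NSUInteger x = 0; x < width; x++) {
            NSUInteger i = y * bytesPerRow + x * 4;
            CGFloat red   = rawData[i]     / 255.0;
            CGFloat green = rawData[i + 1] / 255.0;
            CGFloat blue  = rawData[i + 2] / 255.0;
            CGFloat alpha = rawData[i + 3] / 255.0;
            UIColor *color = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
            // ... fill the grid square at (x, y) with color here ...
        }
    }
    free(rawData);
}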

Drawing Histogram needs more accuracy in iPhone

I am working on an app in which I need to draw the histogram of any input image. I can draw the histogram successfully, but it is not as sharp as the one shown in Preview on Mac OS.
As the code is too large, I have uploaded it to GitHub.
The RGB values are read in:
- (void)readImage:(UIImage *)image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    for (int yy = 0; yy < height; yy++)
    {
        for (int xx = 0; xx < width; xx++)
        {
            // rawData contains the image data in the RGBA8888 pixel format.
            int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
            CGFloat red   = (rawData[byteIndex] * 1.0);
            CGFloat green = (rawData[byteIndex + 1] * 1.0);
            CGFloat blue  = (rawData[byteIndex + 2] * 1.0);
            // CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;

            // Cast the float values so they can be used as array indices.
            int redValue = (int)red;
            int greenValue = (int)green;
            int blueValue = (int)blue;

            // These counters count the total number of pixels in the entire
            // image having that red, green, or blue value.
            fltR[redValue]++;
            fltG[greenValue]++;
            fltB[blueValue]++;
        }
    }
    [self makeArrays];
    free(rawData);
}
I stored the values in the C arrays fltR, fltG, and fltB.
I have a class ClsDrawPoint with the members
@property CGFloat x;
@property CGFloat y;
Then I prepared an array of ClsDrawPoint objects, using fltR[]'s index as the X value and the count at that index as the Y value.
The array is prepared and the graph is drawn in the -(void)makeArrays method, roughly as sketched below.
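As an illustration of that mapping only (this is a sketch of the idea, not the code from the GitHub project; maxCount and viewHeight are hypothetical names for the largest bucket count and the drawing view's height):

    // Sketch: turn the red-channel histogram counts into drawable points.
    for (int i = 0; i < 256; i++) {
        ClsDrawPoint *objPoint = [[ClsDrawPoint alloc] init];
        objPoint.x = i;                                  // intensity bucket as X
        objPoint.y = (fltR[i] / maxCount) * viewHeight;  // normalized count as bar height
        [arrPoints addObject:objPoint];
    }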
Currently the result is not as accurate as Preview's for the same image. (You can open an image in the Preview app on the Mac and, under Tools > Adjust Color, see the histogram of that image.) I think if my graph were accurate, it would be sharper. Kindly check my code and suggest anything that would make it more accurate.
I pulled your sample down from GitHub and found that your drawRect: method in ClsDraw is drawing lines that are one pixel wide. Strokes are centered on the line, and with a width of 1 the stroke is split across half-pixels, which introduces anti-aliasing.
I moved your horizontal offsets by a half-pixel and the rendering looks sharp. I didn't touch the vertical offsets, but to make them sharp you would need to round them and then move them to a half-pixel offset as well. I only made the following change:
#define OFFSET_X 0.5

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if ([arrPoints count] > 0)
    {
        CGContextSetLineWidth(ctx, 1);
        CGContextSetStrokeColorWithColor(ctx, graphColor.CGColor);
        CGContextSetAlpha(ctx, 0.8);
        ClsDrawPoint *objPoint;
        CGContextBeginPath(ctx);
        for (int i = 0; i < [arrPoints count]; i++)
        {
            objPoint = [arrPoints objectAtIndex:i];
            CGPoint adjustedPoint = CGPointMake(objPoint.x + OFFSET_X, objPoint.y);
            CGContextMoveToPoint(ctx, adjustedPoint.x, 0);
            CGContextSetLineCap(ctx, kCGLineCapRound);
            CGContextSetLineJoin(ctx, kCGLineJoinRound);
            CGContextAddLineToPoint(ctx, adjustedPoint.x, adjustedPoint.y);
            CGContextMoveToPoint(ctx, adjustedPoint.x, adjustedPoint.y);
            CGContextStrokePath(ctx);
        }
    }
}
Notice the new OFFSET_X and the introduction of adjustedPoint.
You might also consider using a CGPoint stuffed into an NSValue instance for your points instead of your custom class ClsDrawPoint, unless you plan to add some additional behavior or properties.
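That swap is a couple of lines each way (a sketch using UIKit's NSValue additions):

    // Store a CGPoint in the array...
    [arrPoints addObject:[NSValue valueWithCGPoint:CGPointMake(12.0, 34.0)]];

    // ...and read it back when drawing.
    CGPoint point = [[arrPoints objectAtIndex:0] CGPointValue];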

"<Error>: CGContextDrawImage: invalid context 0x0"

I have this code, which I found on here, which finds the color of a pixel in an image:
+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
But at the line CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); it logs <Error>: CGContextDrawImage: invalid context 0x0. It doesn't crash the app, but obviously I'd prefer not to have an error there.
Any suggestions?
This is usually due to CGBitmapContextCreate failing because of an unsupported combination of flags, bitsPerComponent, and so on. Try removing the kCGBitmapByteOrder32Big flag; there's an Apple doc that lists all the possible context formats (look for "Supported Pixel Formats").
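Whatever the root cause, it's also worth guarding against a NULL context before drawing, so the failure is explicit instead of a console error; a minimal sketch inside the method above:

    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL) {
        // Creation failed (zero size or unsupported pixel format); bail out
        // rather than handing NULL to CGContextDrawImage.
        free(rawData);
        return nil;
    }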
Duh, I figured it out. So silly.
When I first enter the view, the UIImageView is blank, so the method is called against an empty UIImageView. It makes sense that it would complain about an invalid context: the UIImageView is empty, so how can it get the width and height of an image that isn't there?
If I comment out the method, pick an image, and then put the method back in, it works. Makes sense.
I just put in an if/else statement so the method is only called if the image view isn't empty.
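The guard itself is trivial; a sketch with hypothetical property and class names:

    // Only sample pixels once the image view actually has an image.
    if (self.imageView.image != nil) {
        NSArray *colors = [PixelHelper getRGBAsFromImage:self.imageView.image atX:x andY:y count:1];
        // ... use colors ...
    }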

Filling a portion of an image with color

I'm writing an iPhone painting app. I would like to paint a particular portion of an image: use a touch event to find the pixel data, then use that pixel data to paint the remaining part of the image. Using the touch event, I got the pixel value for the portion:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    startPoint = [[touches anyObject] locationInView:imageView];
    NSLog(@"the value of the index is %i", index);
    NSString *imageName = [NSString stringWithFormat:@"roof%i", index];
    tempColor = [[UIColor alloc] initWithPatternImage:[UIImage imageNamed:imageName]];
    lastPoint = [touch locationInView:self.view];
    lastPoint.y -= 20;
    NSString *tx = [[NSString alloc] initWithFormat:@"%.0f", lastPoint.x];
    NSString *ty = [[NSString alloc] initWithFormat:@"%.0f", lastPoint.y];
    NSLog(@"the value of the string is %@ and %@", tx, ty);
    int ix = [tx intValue];
    int iy = [ty intValue];
    int z = 1;
    NSLog(@"the value of the string is %i and %i and z is %i", ix, iy, z);
    [self getRGBAsFromImage:newImage atX:ix andY:iy count1:1];
}
Here I'm getting the pixel data for the image:
- (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)xx andY:(int)yy count1:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0; ii < count; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;
        NSLog(@"the value of the rgb of red is %f", red);

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
Using a tolerance value, I'm getting the data. Here is where I'm struggling to paint the remaining section:
- (BOOL)cgHitTestForArea:(CGRect)area {
    BOOL hit = FALSE;
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    float areaFloat = ((area.size.width * 4) * area.size.height);
    unsigned char *bitmapData = malloc(areaFloat);
    CGContextRef context = CGBitmapContextCreate(bitmapData,
                                                 area.size.width,
                                                 area.size.height,
                                                 8,
                                                 4 * area.size.width,
                                                 colorspace,
                                                 kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -area.origin.x, -area.origin.y);
    [self.layer renderInContext:context];

    // Seek through all pixels.
    float transparentPixels = 0;
    for (int i = 0; i < (int)areaFloat; i += 4) {
        // Count each transparent pixel.
        if (((bitmapData[i + 3] * 1.0) / 255.0) == 0) {
            transparentPixels += 1;
        }
    }
    free(bitmapData);

    // Calculate the percentage of transparent pixels.
    float hitTolerance = [[self.layer valueForKey:@"hitTolerance"] floatValue];
    NSLog(@"Apixels: %f hitPercent: %f", transparentPixels, (transparentPixels / areaFloat));
    if ((transparentPixels / (areaFloat / 4)) < hitTolerance) {
        hit = TRUE;
    }
    CGColorSpaceRelease(colorspace);
    CGContextRelease(context);
    return hit;
}
Any suggestions to make this work please?
First, turning a bitmap image into an NSArray of UIColor objects is nuts: way, way too much overhead. Work with a pixel buffer instead, and learn how to use pointers.
http://en.wikipedia.org/wiki/Flood_fill#The_algorithm provides a good overview of a few simple techniques for performing a flood fill, using either recursion or queues.
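For illustration, here is a minimal queue-based flood fill over the same RGBA8888 buffer the bitmap-context code above produces (a sketch, not the poster's code; the tolerance test compares each color channel to the seed pixel):

#include <stdlib.h>

// Queue-based flood fill over a width*height RGBA8888 buffer. Replaces the
// connected region around (startX, startY) whose color is close to the seed
// pixel's color with fillR/fillG/fillB at full alpha.
static void FloodFillRGBA(unsigned char *rawData, int width, int height,
                          int startX, int startY,
                          unsigned char fillR, unsigned char fillG, unsigned char fillB,
                          int tolerance)
{
    int seed = (startY * width + startX) * 4;
    unsigned char seedR = rawData[seed], seedG = rawData[seed + 1], seedB = rawData[seed + 2];

    // Explicit queue of pixel indices (x + y * width) instead of deep recursion;
    // visited guarantees each pixel is enqueued at most once.
    int *queue = malloc(width * height * sizeof(int));
    char *visited = calloc(width * height, sizeof(char));
    int head = 0, tail = 0;
    queue[tail++] = startY * width + startX;
    visited[startY * width + startX] = 1;

    while (head < tail) {
        int p = queue[head++];
        int x = p % width, y = p / width;
        int i = p * 4;

        // Skip pixels whose color differs from the seed by more than the tolerance.
        if (abs(rawData[i]     - seedR) > tolerance ||
            abs(rawData[i + 1] - seedG) > tolerance ||
            abs(rawData[i + 2] - seedB) > tolerance)
            continue;

        // Paint the pixel.
        rawData[i] = fillR; rawData[i + 1] = fillG; rawData[i + 2] = fillB;
        rawData[i + 3] = 255;

        // Enqueue the four neighbours.
        if (x > 0          && !visited[p - 1])     { visited[p - 1] = 1;     queue[tail++] = p - 1; }
        if (x < width - 1  && !visited[p + 1])     { visited[p + 1] = 1;     queue[tail++] = p + 1; }
        if (y > 0          && !visited[p - width]) { visited[p - width] = 1; queue[tail++] = p - width; }
        if (y < height - 1 && !visited[p + width]) { visited[p + width] = 1; queue[tail++] = p + width; }
    }
    free(queue);
    free(visited);
}

Once the fill completes, rebuild a UIImage from the buffer via CGBitmapContextCreateImage and assign it to your image view, as in the earlier answers.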

Detect pixel collision/overlapping between two images

I have two UIImageViews that contain images with some transparent area. Is there any way to check if the non-transparent area between both images collide?
Thanks.
[UPDATE]
So this is what I have up until now; unfortunately it still isn't working, and I can't figure out why.
if (!CGRectIntersectsRect(frame1, frame2)) return NO;
NSLog(@"OverlapsPixelsInImage:withImage:> Images Intersect");

UIImage *img1 = imgView1.image;
UIImage *img2 = imgView2.image;
CGImageRef imgRef1 = [img1 CGImage];
CGImageRef imgRef2 = [img2 CGImage];

float minx = MIN(frame1.origin.x, frame2.origin.x);
float miny = MIN(frame1.origin.y, frame2.origin.y);
float maxx = MAX(frame1.origin.x + frame1.size.width, frame2.origin.x + frame2.size.width);
float maxy = MAX(frame1.origin.y + frame1.size.height, frame2.origin.y + frame2.size.height);
CGRect canvasRect = CGRectMake(0, 0, maxx - minx, maxy - miny);
size_t width = floorf(canvasRect.size.width);
size_t height = floorf(canvasRect.size.height);
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerRow = 4 * width;

// The buffer must hold 4 bytes per pixel to match bytesPerRow.
unsigned char *rawData = calloc(width * height * 4, sizeof(*rawData));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(context, 0, canvasRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, CGRectMake(frame2.origin.x - minx, frame2.origin.y - miny, frame2.size.width, frame2.size.height), imgRef2);
CGContextDrawImage(context, CGRectMake(frame1.origin.x - minx, frame1.origin.y - miny, frame1.size.width, frame1.size.height), imgRef1);
CGContextRelease(context);

int byteIndex = 0;
for (int i = 0; i < width * height; i++)
{
    CGFloat alpha = rawData[byteIndex + 3];
    if (alpha > 128)
    {
        NSLog(@"collided in byte: %d", i);
        free(rawData);
        return YES;
    }
    byteIndex += 4;
}
free(rawData);
return NO;
You can draw the alpha channels of both images into a single bitmap context and then look through the data for any transparent pixels. Take a look at the clipRectToPath() code in Clipping CGRect to a CGPath. It's solving a different problem, but the approach is the same: rather than using CGContextFillPath() to draw into the context, just draw both of your images.
Here's the flow (see the sketch after this list):
Create an alpha-only bitmap context (kCGImageAlphaOnly)
Draw everything you want to compare into it
Walk the pixels looking at the value. In my example, it considers < 128 to be "transparent." If you want fully transparent, use == 0.
When you find a transparent pixel, the example just makes a note of what column it was in. In your problem, you might just return YES, or you might use that data to form another mask.
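A sketch of that flow, here split into one alpha-only buffer per image so the overlap test is a simple AND of the two alpha values (the frame rectangles are assumed to be already translated into a shared canvas space, as the update above does with minx/miny):

// Render each image's alpha into its own 1-byte-per-pixel buffer, then
// report a collision wherever both images are sufficiently opaque.
static BOOL ImagesCollide(CGImageRef imgRef1, CGRect rect1,
                          CGImageRef imgRef2, CGRect rect2,
                          size_t width, size_t height)
{
    unsigned char *alpha1 = calloc(width * height, 1);
    unsigned char *alpha2 = calloc(width * height, 1);

    // kCGImageAlphaOnly takes no color space and stores one alpha byte per pixel.
    CGContextRef ctx1 = CGBitmapContextCreate(alpha1, width, height, 8, width, NULL, kCGImageAlphaOnly);
    CGContextRef ctx2 = CGBitmapContextCreate(alpha2, width, height, 8, width, NULL, kCGImageAlphaOnly);
    CGContextDrawImage(ctx1, rect1, imgRef1);
    CGContextDrawImage(ctx2, rect2, imgRef2);
    CGContextRelease(ctx1);
    CGContextRelease(ctx2);

    BOOL hit = NO;
    for (size_t i = 0; i < width * height; i++) {
        if (alpha1[i] > 128 && alpha2[i] > 128) {  // use > 0 to count any non-transparent pixel
            hit = YES;
            break;
        }
    }
    free(alpha1);
    free(alpha2);
    return hit;
}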
Not easily; you basically have to read in the raw bitmap data and walk the pixels.