I have this code, which I found here, that finds the color of a pixel in an image:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0 ; ii < count ; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
But at the line CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); the console logs <Error>: CGContextDrawImage: invalid context 0x0. It doesn't crash the app, but obviously I'd prefer not to have an error there.
Any suggestions?
This is usually due to the CGBitmapContextCreate failing due to an unsupported combination of flags, bitsPerComponent, etc. Try removing the kCGBitmapByteOrder32Big flag; there's an Apple doc that lists all the possible context formats - look for "Supported Pixel Formats".
Duh, I figured it out. So silly.
When I first enter the view, the UIImageView is blank, so the method is called against an empty UIImageView. Makes sense that it would complain about an invalid context: the image view is empty, so there's no image to take a width and height from, and CGBitmapContextCreate returns NULL.
If I comment out the method, pick an image, and then put the method back in, it works. Makes sense.
I just put in an if/else statement to only call the method if the image view isn't empty.
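For reference, a minimal version of that guard might look like this (the imageView outlet and MyPixelReader class are hypothetical names, not from the code above):

    // Only sample pixels when the image view actually holds an image.
    if (self.imageView.image != nil) {
        NSArray *colors = [MyPixelReader getRGBAsFromImage:self.imageView.image
                                                       atX:10 andY:10 count:1];
        // ... use colors ...
    }

Checking the context for NULL inside the method itself would also keep the Core Graphics error out of the log if a nil image ever slips through.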
Related
I want to get color of all the individual pixels of an image.
To elaborate:
Let's say I have an image named "SampleImage" that is 400 x 400 pixels.
Basically I want to create a grid from 'SampleImage' which will have 400 x 400 squares, each filled with the color of the corresponding pixel in 'SampleImage'.
I know this is a little abstract, but I am a novice in iOS and don't know where to start.
Thanks in advance!
Use this; it's a more efficient solution:
// UIView+ColorOfPoint.h
@interface UIView (ColorOfPoint)
- (UIColor *)colorOfPoint:(CGPoint)point;
@end

// UIView+ColorOfPoint.m
#import "UIView+ColorOfPoint.h"
#import <QuartzCore/QuartzCore.h>

@implementation UIView (ColorOfPoint)

- (UIColor *)colorOfPoint:(CGPoint)point
{
    unsigned char pixel[4] = {0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // A 1x1 bitmap context: only the single pixel under `point` gets rendered.
    CGContextRef context = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGContextTranslateCTM(context, -point.x, -point.y);
    [self.layer renderInContext:context];
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    //NSLog(@"pixel: %d %d %d %d", pixel[0], pixel[1], pixel[2], pixel[3]);
    UIColor *color = [UIColor colorWithRed:pixel[0]/255.0 green:pixel[1]/255.0 blue:pixel[2]/255.0 alpha:pixel[3]/255.0];
    return color;
}

@end
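Usage is then a one-liner; for example, sampling the pixel under a touch (the touch and view variables are illustrative):

    CGPoint point = [touch locationInView:someView];
    UIColor *color = [someView colorOfPoint:point];

Note that this samples whatever the layer renders at that point, so it works on any UIView, not just image views.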
Hope it helps you.
This code worked flawlessly for me:
- (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0 ; ii < count ; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
If you are a novice, you should consider doing something easier first. Anyway, what you need to do is set up a CGContextRef via CGBitmapContextCreate with enough data to hold your image. Once you create it, you need to render your image into it via CGContextDrawImage. After that you will have a pointer to every pixel in your image. The code is similar to Nishant's answer, but instead of 1x1, you will use 400x400 to get all of the pixels at once.
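Incidentally, the getRGBAsFromImage: method above can already return every pixel in one call, because its byte index marches linearly through the buffer row after row. Assuming the method lives on the same class and the image really is 400 x 400, a sketch of that call:

    // All 400 x 400 = 160,000 pixels, row by row, as UIColor objects.
    NSArray *allPixels = [self getRGBAsFromImage:sampleImage atX:0 andY:0 count:400 * 400];

Be aware that creating 160,000 UIColor objects is slow; for a grid you may prefer to keep the raw byte buffer instead.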
I am working on an app in which I need to draw the histogram of any input image. I can draw the histogram successfully, but it's not as sharp as the one in Preview on the Mac.
As the code is too large, I have uploaded it to GitHub; click here to download.
The RGB values are read in:
- (void)readImage:(UIImage *)image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // rawData now contains the image data in the RGBA8888 pixel format.
    for (int yy = 0; yy < height; yy++)
    {
        for (int xx = 0; xx < width; xx++)
        {
            int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
            CGFloat red   = (rawData[byteIndex] * 1.0);
            CGFloat green = (rawData[byteIndex + 1] * 1.0);
            CGFloat blue  = (rawData[byteIndex + 2] * 1.0);
            // CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;

            // Cast the float values to ints so they can be used as array indices.
            int redValue = (int)red;
            int greenValue = (int)green;
            int blueValue = (int)blue;

            // These counters accumulate the total number of pixels in the
            // entire image with that red, green, or blue value.
            fltR[redValue]++;
            fltG[greenValue]++;
            fltB[blueValue]++;
        }
    }
    [self makeArrays];
    free(rawData);
}
I stored the values in C arrays: fltR, fltG, fltB.
I have a class ClsDrawPoint with the members:
@property CGFloat x;
@property CGFloat y;
Then I prepared an array of ClsDrawPoint objects, using fltR[]'s index as the X value and the count stored at that index as the Y value.
The array is prepared and the graph is drawn in the -(void)makeArrays method.
You may see the result in the attached screenshot. Currently it's not as accurate as Preview on the Mac for the same image. (You can open an image in the Preview app on a Mac and, under Tools > Adjust Color, see the histogram of that image.) I think if my graph were accurate, it would be sharper. Kindly check my code and suggest anything that would make it more accurate.
I pulled your sample down from GitHub and found that your drawRect: method in ClsDraw is drawing lines that are one pixel wide. Strokes are centered on the line, and with a width of 1 the stroke is split across half-pixels, which introduces anti-aliasing.
I moved your horizontal offsets by a half-pixel and rendering looks sharp. I didn't mess with vertical offsets, but to make them sharp you would need to round them and then move them to a half-pixel offset as well. I only made the following change:
#define OFFSET_X 0.5

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if ([arrPoints count] > 0)
    {
        CGContextSetLineWidth(ctx, 1);
        CGContextSetStrokeColorWithColor(ctx, graphColor.CGColor);
        CGContextSetAlpha(ctx, 0.8);
        ClsDrawPoint *objPoint;
        CGContextBeginPath(ctx);
        for (int i = 0; i < [arrPoints count]; i++)
        {
            objPoint = [arrPoints objectAtIndex:i];
            // Shift by half a pixel so the 1-pixel stroke lands on pixel centers.
            CGPoint adjustedPoint = CGPointMake(objPoint.x + OFFSET_X, objPoint.y);
            CGContextMoveToPoint(ctx, adjustedPoint.x, 0);
            CGContextSetLineCap(ctx, kCGLineCapRound);
            CGContextSetLineJoin(ctx, kCGLineJoinRound);
            CGContextAddLineToPoint(ctx, adjustedPoint.x, adjustedPoint.y);
            CGContextMoveToPoint(ctx, adjustedPoint.x, adjustedPoint.y);
            CGContextStrokePath(ctx);
        }
    }
}
Notice the new OFFSET_X and the introduction of adjustedPoint.
You might also consider using a CGPoint stuffed into an NSValue instance for your points instead of your custom class ClsDrawPoint, unless you plan to add some additional behavior or properties. More details available here.
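A minimal sketch of that approach (arrPoints is the array from the code above; x, y, and i are placeholders):

    // Store a CGPoint directly, wrapped in an NSValue:
    [arrPoints addObject:[NSValue valueWithCGPoint:CGPointMake(x, y)]];

    // Unwrap it again when drawing:
    CGPoint p = [[arrPoints objectAtIndex:i] CGPointValue];

valueWithCGPoint: and CGPointValue are the UIKit additions to NSValue, so no custom class is needed.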
I have two UIImageViews that contain images with some transparent areas. Is there any way to check whether the non-transparent areas of the two images collide?
Thanks.
[UPDATE]
So this is what I have up until now; unfortunately it still isn't working, and I can't figure out why.
if (!CGRectIntersectsRect(frame1, frame2)) return NO;
NSLog(@"OverlapsPixelsInImage:withImage:> Images Intersect");

UIImage *img1 = imgView1.image;
UIImage *img2 = imgView2.image;
CGImageRef imgRef1 = [img1 CGImage];
CGImageRef imgRef2 = [img2 CGImage];

float minx = MIN(frame1.origin.x, frame2.origin.x);
float miny = MIN(frame1.origin.y, frame2.origin.y);
float maxx = MAX(frame1.origin.x + frame1.size.width, frame2.origin.x + frame2.size.width);
float maxy = MAX(frame1.origin.y + frame1.size.height, frame2.origin.y + frame2.size.height);
CGRect canvasRect = CGRectMake(0, 0, maxx - minx, maxy - miny);
size_t width = floorf(canvasRect.size.width);
size_t height = floorf(canvasRect.size.height);

NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerRow = 4 * width;
// 4 bytes per pixel (RGBA): the buffer must be width * height * 4 bytes,
// or the alpha reads below run past the end of the allocation.
unsigned char *rawData = calloc(width * height * 4, sizeof(*rawData));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(context, 0, canvasRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
// Clip to the second image's alpha, then draw the first image; only pixels
// where both images are non-transparent end up with alpha in the buffer.
CGContextClipToMask(context, CGRectMake(frame2.origin.x - minx, frame2.origin.y - miny, frame2.size.width, frame2.size.height), imgRef2);
CGContextDrawImage(context, CGRectMake(frame1.origin.x - minx, frame1.origin.y - miny, frame1.size.width, frame1.size.height), imgRef1);
CGContextRelease(context);

int byteIndex = 0;
for (int i = 0; i < width * height; i++)
{
    CGFloat alpha = rawData[byteIndex + 3];
    if (alpha > 128)
    {
        NSLog(@"collided in byte: %d", i);
        free(rawData);
        return YES;
    }
    byteIndex += 4;
}
free(rawData);
return NO;
You can draw both the alpha channels of both images into a single bitmap context and then look through the data for any transparent pixels. Take a look at the clipRectToPath() code in Clipping CGRect to a CGPath. It's solving a different problem, but the approach is the same. Rather than using CGContextFillPath() to draw into the context, just draw both of your images.
Here's the flow (a code sketch follows the list):
Create an alpha-only bitmap context (kCGImageAlphaOnly)
Draw everything you want to compare into it
Walk the pixels looking at the value. In my example, it considers < 128 to be "transparent." If you want fully transparent, use == 0.
When you find a transparent pixel, the example just makes a note of what column it was in. In your problem, you might just return YES, or you might use that data to form another mask.
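A minimal sketch of that flow (the canvas size and image variable are illustrative, not from the question's code):

    // 1. Create an alpha-only bitmap context: one byte per pixel, no color data.
    size_t w = 100, h = 100;
    unsigned char *alphaData = calloc(w * h, sizeof(unsigned char));
    CGContextRef ctx = CGBitmapContextCreate(alphaData, w, h, 8, w, NULL, kCGImageAlphaOnly);

    // 2. Draw everything you want to compare into it.
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), someCGImage);

    // 3. Walk the pixels looking at the value.
    for (size_t i = 0; i < w * h; i++) {
        if (alphaData[i] < 128) {
            // "transparent" pixel found at index i
        }
    }

    CGContextRelease(ctx);
    free(alphaData);

Note the NULL color space: alpha-only contexts take no color space at all.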
Not easily; you basically have to read in the raw bitmap data and walk the pixels.
I want to make a UIImage's touched pixel transparent.
I saw iPhone Objective C: How to get a pixel's color of the touched point on an UIImageView?
Using that code, I can locate the pixel the user touched, but I don't know how to make that pixel transparent and update the UIImage.
Please help me.
Hope these help:
What is the fastest way to draw single pixels directly to the screen in an iPhone application?
And from this SO question, How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?, whose answer is the same getRGBAsFromImage:atX:andY:count: method shown at the top of this page.
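Neither link actually flips a pixel's alpha, so here is a minimal sketch of that last step, assuming the same RGBA8888 layout as the method above (image, touchedX, and touchedY are hypothetical names): render the image into a bitmap context, zero the touched pixel's four bytes, and rebuild the UIImage.

    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    NSUInteger bytesPerRow = 4 * width;
    unsigned char *rawData = calloc(height * bytesPerRow, sizeof(unsigned char));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, 8, bytesPerRow,
                                                 colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);

    // With premultiplied alpha, a fully transparent pixel is (0, 0, 0, 0).
    NSUInteger byteIndex = (bytesPerRow * touchedY) + touchedX * 4;
    rawData[byteIndex] = rawData[byteIndex + 1] = rawData[byteIndex + 2] = rawData[byteIndex + 3] = 0;

    // Rebuild a UIImage from the modified buffer.
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    CGContextRelease(context);
    free(rawData);

Remember that CGImage pixel coordinates can differ from UIKit point coordinates on Retina screens, so scale the touch location by the image's scale factor first.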
My application gives these errors on the console:
<Error>: CGBitmapContextCreateImage: invalid context 0xdf12000
Program received signal: “EXC_BAD_ACCESS”.
from the following code:
- (UIImage *)pureBlackAndWhiteImage:(UIImage *)image {
    //CGImageRef imageR = [image CGImage];
    CGColorSpaceRef colorSpac = CGColorSpaceCreateDeviceGray();
    //CGRect rect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);
    CGContextRef contex = CGBitmapContextCreate(NULL, image.size.width,
                                                image.size.height, 8, 1*image.size.width, colorSpac, kCGImageAlphaNone);
    contex = malloc(image.size.height * image.size.width * 4);
    CGColorSpaceRelease(colorSpac);

    // Draw the image into the grayscale context
    //CGContextDrawImage(contex, rect, NULL);
    CGImageRef grayscale = CGBitmapContextCreateImage(contex);
    CGContextRelease(contex);

    NSUInteger width = CGImageGetWidth(grayscale);
    NSUInteger height = CGImageGetHeight(grayscale);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CFDataRef datare = CGDataProviderCopyData(CGImageGetDataProvider(grayscale));
    unsigned char *dataBitmap = (unsigned char *)CFDataGetBytePtr(datare);
    dataBitmap = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 1;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(dataBitmap, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaNoneSkipLast);
    //unsigned char *dataBitmap = [self bitmapFromImage:image];
    for (int i = 0; i < image.size.width * image.size.height * 4; i += 4) {
        // if ((dataBitmap[i + 1] + dataBitmap[i + 2] + dataBitmap[i + 3]) < (255 * 3 / 2)) {
        if (dataBitmap[i+1] > 128 && dataBitmap[i+2] > 128 && dataBitmap[i+3] > 128)
        {
            dataBitmap[i + 1] = 0;
            dataBitmap[i + 2] = 0;
            dataBitmap[i + 3] = 0;
        } else {
            dataBitmap[i + 1] = 255;
            dataBitmap[i + 2] = 255;
            dataBitmap[i + 3] = 255;
        }
    }
    //CFDataRef newData=CFDataCreate(NULL,dataBitmap,length);
    CGColorSpaceRef colorSpa = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapcontext = CGBitmapContextCreate(dataBitmap,
                                                       image.size.width,
                                                       image.size.height,
                                                       8,
                                                       1*image.size.width,
                                                       colorSpa,
                                                       kCGImageAlphaNone);
    CFRelease(colorSpace);
    CFRelease(colorSpa);
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapcontext);
    CFRelease(cgImage);
    CFRelease(bitmapcontext);
    CGContextRelease(context);
    free(dataBitmap);
    //CFRelease(imageR);
    UIImage *newImage = [UIImage imageWithCGImage:cgImage];
    return newImage;
}
What could be causing these errors?
You're trying to create the context using 4 bytes per row, when the width of the image is a lot more than that.
If you have 1 byte per pixel, then instead of 4 you should use 1 * image.size.width, like you do the second and third times you create a bitmap context.
Besides that, I don't think passing imageR as the first argument is a good idea. If you're deploying for iOS 4 or later, you can pass NULL instead.
Otherwise, I think you have to allocate the memory to store the bitmap context data.
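Putting those fixes together, a minimal sketch of what a working version might look like, assuming the intent is to threshold a grayscale rendering to pure black and white (the 128 threshold and the light-goes-white convention are assumptions):

    - (UIImage *)pureBlackAndWhiteImage:(UIImage *)image {
        size_t width = (size_t)image.size.width;
        size_t height = (size_t)image.size.height;

        // One byte per pixel: grayscale, no alpha.
        unsigned char *gray = calloc(width * height, sizeof(unsigned char));
        CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
        CGContextRef ctx = CGBitmapContextCreate(gray, width, height, 8, width,
                                                 graySpace, kCGImageAlphaNone);
        CGColorSpaceRelease(graySpace);
        if (ctx == NULL) { free(gray); return nil; }

        // Render the source image into the grayscale buffer.
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), [image CGImage]);

        // Threshold each pixel to pure black or pure white.
        for (size_t i = 0; i < width * height; i++) {
            gray[i] = (gray[i] > 128) ? 255 : 0;
        }

        // Build the result, and only release the CGImage after wrapping it.
        CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
        UIImage *newImage = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        CGContextRelease(ctx);
        free(gray);
        return newImage;
    }

Note the order at the end: the original code called CFRelease(cgImage) before imageWithCGImage:, which is a use-after-free and a likely source of the EXC_BAD_ACCESS.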