2D pixel-level collision detection between UIImages - iPhone

I'm trying to code a method which detects when two UIImages collide, taking into account only the non-transparent pixels. To be clear: the method should return YES when a pixel whose alpha component is greater than 0 in one UIImageView overlaps a pixel whose alpha component is also greater than 0 in the other UIImageView.
The method should be something like:
- (BOOL)checkCollisionBetweenImage:(UIImage *)img1 inFrame:(CGRect)frame1 andImage:(UIImage *)img2 inFrame:(CGRect)frame2;
It receives both images along with their frames, passed separately because the coordinate positions must first be converted into a common space (UIImageView.frame alone won't do).
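For the frame conversion, a minimal sketch might look like the following, where commonAncestorView is a hypothetical shared ancestor of both image views (e.g. the window):

CGRect frame1 = [imgView1.superview convertRect:imgView1.frame toView:commonAncestorView];
CGRect frame2 = [imgView2.superview convertRect:imgView2.frame toView:commonAncestorView];
BOOL hit = [self checkCollisionBetweenImage:imgView1.image inFrame:frame1
                                   andImage:imgView2.image inFrame:frame2];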
[UPDATE 1]
I'll update with a piece of code I used in a previous question of mine; this code, however, doesn't always work. I suspect the problem lies in the fact that the UIImageViews involved aren't necessarily in the same superview.
Detect pixel collision/overlapping between two images
if (!CGRectIntersectsRect(frame1, frame2)) return NO;
NSLog(@"OverlapsPixelsInImage:withImage:> Images Intersect");

UIImage *img1 = imgView1.image;
UIImage *img2 = imgView2.image;
CGImageRef imgRef1 = [img1 CGImage];
CGImageRef imgRef2 = [img2 CGImage];

float minx = MIN(frame1.origin.x, frame2.origin.x);
float miny = MIN(frame1.origin.y, frame2.origin.y);
float maxx = MAX(frame1.origin.x + frame1.size.width, frame2.origin.x + frame2.size.width);
float maxy = MAX(frame1.origin.y + frame1.size.height, frame2.origin.y + frame2.size.height);
CGRect canvasRect = CGRectMake(0, 0, maxx - minx, maxy - miny);
size_t width = floorf(canvasRect.size.width);
size_t height = floorf(canvasRect.size.height);
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerRow = 4 * width;

// 4 bytes per pixel (RGBA), zero-initialized: pixels excluded by the clip
// mask are never drawn, so they must start out fully transparent. The
// original malloc'd an oversized, uninitialized buffer, which is exactly
// what produces random alpha readings.
unsigned char *rawData = calloc(width * height * 4, sizeof(unsigned char));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(context, 0, canvasRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);

// Clip to the alpha of image 2, then draw image 1: only overlapping
// non-transparent pixels survive in the bitmap.
CGContextClipToMask(context, CGRectMake(frame2.origin.x - minx, frame2.origin.y - miny, frame2.size.width, frame2.size.height), imgRef2);
CGContextDrawImage(context, CGRectMake(frame1.origin.x - minx, frame1.origin.y - miny, frame1.size.width, frame1.size.height), imgRef1);
CGContextRelease(context);

int byteIndex = 0;
for (int i = 0; i < width * height; i++)
{
    // Must be unsigned: with a signed int8_t, alpha values above 127 wrap
    // negative and the comparison below fails.
    uint8_t alpha = rawData[byteIndex + 3];
    if (alpha > 64)
    {
        NSLog(@"collided in byte: %d", i);
        free(rawData);
        return YES;
    }
    byteIndex += 4;
}
free(rawData);
return NO;
[UPDATE 2]
I added another line, supposedly required, just before drawing the masked image:
CGContextSetBlendMode(context, kCGBlendModeCopy);
It still doesn't work. Curiously, when I check the alpha values detected on collision, they are random numbers, which is odd because the images I'm using contain only fully opaque or fully transparent pixels, nothing in between.

Check for the actual frame intersection; that gives you a rectangle.
Check every pixel of one image inside that rectangle for alpha > 0. If a pixel qualifies, convert its coordinate into the other image's space; if the alpha of that pixel is also > 0, you have a hit. Repeat until the rectangle is done.
Stop early once you have found a hit. A sketch of this walk follows.
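A minimal sketch of that walk, assuming the RGBA bytes of each image have already been read into rawData1/rawData2 and that w1/w2 are the images' pixel widths (all names here are hypothetical):

CGRect inter = CGRectIntersection(frame1, frame2);
for (int y = 0; y < (int)inter.size.height; y++) {
    for (int x = 0; x < (int)inter.size.width; x++) {
        // Convert the shared point into each image's local pixel coordinates.
        int x1 = x + (int)(inter.origin.x - frame1.origin.x);
        int y1 = y + (int)(inter.origin.y - frame1.origin.y);
        int x2 = x + (int)(inter.origin.x - frame2.origin.x);
        int y2 = y + (int)(inter.origin.y - frame2.origin.y);
        uint8_t a1 = rawData1[(y1 * w1 + x1) * 4 + 3];
        uint8_t a2 = rawData2[(y2 * w2 + x2) * 4 + 3];
        if (a1 > 0 && a2 > 0) return YES; // hit: stop early
    }
}
return NO;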

Hmm... is your problem that your images have white squares behind them, and you don't want the white boxes to collide, but you do want the images inside them to collide? Well...
I don't think you can check collision based on colors. What you need to do is open the images in some image-editing software (GIMP, for example) and crop out the white background so it is actually transparent.


Drawing a histogram needs more accuracy on iPhone

I am working on an app in which I need to draw the histogram of any input image. I can draw the histogram successfully, but it is not as sharp as the one in Preview on Mac OS.
As the code is too large, I have uploaded it to GitHub.
The RGB values are read in:
-(void)readImage:(UIImage*)image
{
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);
    // rawData now contains the image in the RGBA8888 pixel format.
    for (int yy = 0; yy < height; yy++)
    {
        for (int xx = 0; xx < width; xx++)
        {
            int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
            CGFloat red = rawData[byteIndex];
            CGFloat green = rawData[byteIndex + 1];
            CGFloat blue = rawData[byteIndex + 2];
            // CGFloat alpha = rawData[byteIndex + 3] / 255.0;
            // Cast the float values to ints so they can be used as array indices.
            int redValue = (int)red;
            int greenValue = (int)green;
            int blueValue = (int)blue;
            // These counters accumulate the total number of pixels in the
            // entire image for each red, green, or blue value.
            fltR[redValue]++;
            fltG[greenValue]++;
            fltB[blueValue]++;
        }
    }
    [self makeArrays];
    free(rawData);
}
The values are stored in the C arrays fltR, fltG, and fltB.
I have a class ClsDrawPoint with the members:
@property CGFloat x;
@property CGFloat y;
Then I prepare an array of ClsDrawPoint objects, using each index of fltR[] as the X value and the count stored at that index as the Y value.
The array is prepared and the graph is drawn in the
-(void)makeArrays
method.
You may see the result.
Currently it is not as accurate as the histogram in Preview on the Mac for the same image. (Open an image in the Preview app on a Mac and choose Tools > Adjust Color to see the histogram of that image.)
I think that if my graph were accurate, it would be sharper. Kindly check my code and suggest anything that would make it more accurate.
I pulled your sample down from GitHub and found that your drawRect: method in ClsDraw draws lines that are one pixel wide. Strokes are centered on the line, so with a width of 1 the stroke is split across half-pixels, which introduces anti-aliasing.
I moved your horizontal offsets by half a pixel and the rendering looks sharp. I didn't touch the vertical offsets, but to make them sharp you would need to round them and then move them to a half-pixel offset as well. I made only the following change:
#define OFFSET_X 0.5

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if ([arrPoints count] > 0)
    {
        CGContextSetLineWidth(ctx, 1);
        CGContextSetStrokeColorWithColor(ctx, graphColor.CGColor);
        CGContextSetAlpha(ctx, 0.8);
        ClsDrawPoint *objPoint;
        CGContextBeginPath(ctx);
        for (int i = 0; i < [arrPoints count]; i++)
        {
            objPoint = [arrPoints objectAtIndex:i];
            CGPoint adjustedPoint = CGPointMake(objPoint.x + OFFSET_X, objPoint.y);
            CGContextMoveToPoint(ctx, adjustedPoint.x, 0);
            CGContextSetLineCap(ctx, kCGLineCapRound);
            CGContextSetLineJoin(ctx, kCGLineJoinRound);
            CGContextAddLineToPoint(ctx, adjustedPoint.x, adjustedPoint.y);
            CGContextMoveToPoint(ctx, adjustedPoint.x, adjustedPoint.y);
            CGContextStrokePath(ctx);
        }
    }
}
Notice the new OFFSET_X and the introduction of adjustedPoint.
You might also consider storing your points as CGPoints stuffed into NSValue instances instead of using the custom class ClsDrawPoint, unless you plan to add some additional behavior or properties.
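For illustration, a minimal sketch of the NSValue approach, assuming arrPoints is an NSMutableArray (valueWithCGPoint:/CGPointValue are UIKit's additions to NSValue):

CGPoint p = CGPointMake(12.5, 80.0);
[arrPoints addObject:[NSValue valueWithCGPoint:p]];      // wrap
// ...
CGPoint q = [[arrPoints objectAtIndex:0] CGPointValue];  // unwrap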

How to draw an outline/stroke around a UIImage (*.png) - Example inside

I have tried many, many ways to draw a black outline around an image.
This is an example of the result I want:
Can someone please tell me how I should do it, or give me an example?
Edit: I'm stuck here; can someone please help me finish it?
What I did was make another shape in black under the white one with a shadow, and then fill it all in black so it acts like an outline. However, I can't figure out the last and most important part: making the shadow and filling it so everything ends up black.
- (IBAction)addStroke:(id)sender
{
    [iconStrokeTest setImage:[self makeIconStroke:icon.imageView.image]];
}

- (UIImage *)makeIconStroke:(UIImage *)image
{
    CGImageRef originalImage = [image CGImage];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL,
                                                       CGImageGetWidth(originalImage),
                                                       CGImageGetHeight(originalImage),
                                                       8,
                                                       CGImageGetWidth(originalImage) * 4,
                                                       colorSpace,
                                                       kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace); // was leaked in the original
    CGContextDrawImage(bitmapContext, CGRectMake(0, 0, CGBitmapContextGetWidth(bitmapContext), CGBitmapContextGetHeight(bitmapContext)), originalImage);
    CGImageRef finalMaskImage = [self createMaskWithImageAlpha:bitmapContext];
    UIImage *result = [UIImage imageWithCGImage:finalMaskImage];
    CGContextRelease(bitmapContext);
    CGImageRelease(finalMaskImage);

    // begin a new image context, to draw our colored image onto
    UIGraphicsBeginImageContext(result.size);
    // get a reference to that context we created
    CGContextRef context = UIGraphicsGetCurrentContext();
    // set the fill color
    [[UIColor blackColor] setFill];
    // translate/flip the graphics context (for transforming from CG coords to UIKit coords)
    CGContextTranslateCTM(context, 0, result.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    // set the blend mode to color burn, and draw the original image
    CGContextSetBlendMode(context, kCGBlendModeColorBurn);
    CGRect rect = CGRectMake(0, 0, result.size.width, result.size.height);
    CGContextDrawImage(context, rect, result.CGImage);
    // set a mask that matches the shape of the image, then draw (color burn) a colored rectangle
    CGContextClipToMask(context, rect, result.CGImage);
    CGContextAddRect(context, rect);
    CGContextDrawPath(context, kCGPathFill);
    // generate a new UIImage from the graphics context we drew onto
    UIImage *coloredImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    // return the color-burned image
    return coloredImg;
}

// Round n up to the nearest multiple of m (assumed definition; the original
// post references ROUND_UP without defining it).
#define ROUND_UP(n, m) ((((n) + (m) - 1) / (m)) * (m))

- (CGImageRef)createMaskWithImageAlpha:(CGContextRef)originalImageContext
{
    UInt8 *data = (UInt8 *)CGBitmapContextGetData(originalImageContext);
    int width = CGBitmapContextGetBytesPerRow(originalImageContext) / 4;
    int height = CGBitmapContextGetHeight(originalImageContext);
    int strideLength = ROUND_UP(width, 4);
    unsigned char *alphaData = (unsigned char *)calloc(strideLength * height, 1);
    CGContextRef alphaOnlyContext = CGBitmapContextCreate(alphaData,
                                                          width,
                                                          height,
                                                          8,
                                                          strideLength,
                                                          NULL,
                                                          kCGImageAlphaOnly);
    // Copy the inverted alpha channel into the alpha-only bitmap.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned char val = data[y * width * 4 + x * 4 + 3];
            alphaData[y * strideLength + x] = 255 - val;
        }
    }
    CGImageRef alphaMaskImage = CGBitmapContextCreateImage(alphaOnlyContext);
    CGContextRelease(alphaOnlyContext);
    free(alphaData);
    // Make a mask
    CGImageRef finalMaskImage = CGImageMaskCreate(CGImageGetWidth(alphaMaskImage),
                                                  CGImageGetHeight(alphaMaskImage),
                                                  CGImageGetBitsPerComponent(alphaMaskImage),
                                                  CGImageGetBitsPerPixel(alphaMaskImage),
                                                  CGImageGetBytesPerRow(alphaMaskImage),
                                                  CGImageGetDataProvider(alphaMaskImage), NULL, false);
    CGImageRelease(alphaMaskImage);
    return finalMaskImage;
}
Well, there's no built-in API for that. You'll have to do it yourself or find a library for it. You could, however, "fake" the effect by drawing the image with a shadow; note that shadows can be any color and don't have to look like a shadow. This would be the easiest way.
Other than that, you could vectorize the raster image and stroke the resulting path. Core Image's edge-detection filter would help with that, but it could turn out to be hard to accomplish.
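A minimal sketch of the shadow trick, under the assumption that a zero-offset black shadow hugging the opaque pixels is an acceptable "outline" (the function and parameter names here are mine):

UIImage *OutlinedImage(UIImage *image)
{
    CGFloat blur = 4.0; // the shadow radius doubles as the outline thickness
    CGSize size = CGSizeMake(image.size.width + 2 * blur, image.size.height + 2 * blur);
    UIGraphicsBeginImageContextWithOptions(size, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // A zero-offset shadow spreads evenly around the image's opaque shape.
    CGContextSetShadowWithColor(ctx, CGSizeZero, blur, [UIColor blackColor].CGColor);
    [image drawInRect:CGRectMake(blur, blur, image.size.width, image.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

If one pass looks too faint, drawing the image (with the shadow set) a second time darkens the halo.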

Detect pixel collision/overlapping between two images

I have two UIImageViews that contain images with some transparent areas. Is there any way to check whether the non-transparent areas of the two images collide?
Thanks.
[UPDATE]
So this is what I have so far; unfortunately it still isn't working, and I can't figure out why.
if (!CGRectIntersectsRect(frame1, frame2)) return NO;
NSLog(@"OverlapsPixelsInImage:withImage:> Images Intersect");

UIImage *img1 = imgView1.image;
UIImage *img2 = imgView2.image;
CGImageRef imgRef1 = [img1 CGImage];
CGImageRef imgRef2 = [img2 CGImage];

float minx = MIN(frame1.origin.x, frame2.origin.x);
float miny = MIN(frame1.origin.y, frame2.origin.y);
float maxx = MAX(frame1.origin.x + frame1.size.width, frame2.origin.x + frame2.size.width);
float maxy = MAX(frame1.origin.y + frame1.size.height, frame2.origin.y + frame2.size.height);
CGRect canvasRect = CGRectMake(0, 0, maxx - minx, maxy - miny);
size_t width = floorf(canvasRect.size.width);
size_t height = floorf(canvasRect.size.height);
NSUInteger bitsPerComponent = 8;
NSUInteger bytesPerRow = 4 * width;

// Needs 4 bytes per pixel; the original allocated only width * height bytes,
// so the bitmap context wrote past the end of the buffer.
unsigned char *rawData = calloc(width * height * 4, sizeof(*rawData));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextTranslateCTM(context, 0, canvasRect.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGContextClipToMask(context, CGRectMake(frame2.origin.x - minx, frame2.origin.y - miny, frame2.size.width, frame2.size.height), imgRef2);
CGContextDrawImage(context, CGRectMake(frame1.origin.x - minx, frame1.origin.y - miny, frame1.size.width, frame1.size.height), imgRef1);
CGContextRelease(context);

int byteIndex = 0;
for (int i = 0; i < width * height; i++)
{
    CGFloat alpha = rawData[byteIndex + 3];
    if (alpha > 128)
    {
        NSLog(@"collided in byte: %d", i);
        free(rawData);
        return YES;
    }
    byteIndex += 4;
}
free(rawData);
return NO;
You can draw the alpha channels of both images into a single bitmap context and then look through the data for transparent pixels. Take a look at the clipRectToPath() code in Clipping CGRect to a CGPath. It's solving a different problem, but the approach is the same: rather than using CGContextFillPath() to draw into the context, just draw both of your images.
Here's the flow:
Create an alpha-only bitmap context (kCGImageAlphaOnly).
Draw everything you want to compare into it.
Walk the pixels, looking at each value. In my example, anything < 128 is considered "transparent". If you want fully transparent, use == 0.
When you find a transparent pixel, the example just makes a note of which column it was in. For your problem, you might simply return YES, or you might use that data to form another mask. A sketch of this flow follows the steps.
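For illustration, a minimal sketch of that flow under my own names: it renders each image's alpha into its own alpha-only bitmap covering the intersection rect, then looks for a pixel that is non-transparent in both.

static BOOL AlphasOverlap(CGImageRef img1, CGRect frame1,
                          CGImageRef img2, CGRect frame2)
{
    CGRect inter = CGRectIntersection(frame1, frame2);
    if (CGRectIsEmpty(inter)) return NO;
    size_t w = (size_t)ceilf(inter.size.width);
    size_t h = (size_t)ceilf(inter.size.height);
    unsigned char *a1 = calloc(w * h, 1);
    unsigned char *a2 = calloc(w * h, 1);
    // Alpha-only contexts take no color space; one byte per pixel.
    CGContextRef c1 = CGBitmapContextCreate(a1, w, h, 8, w, NULL, kCGImageAlphaOnly);
    CGContextRef c2 = CGBitmapContextCreate(a2, w, h, 8, w, NULL, kCGImageAlphaOnly);
    // Draw each image so the intersection rect maps to the bitmap origin.
    CGContextDrawImage(c1, CGRectMake(frame1.origin.x - inter.origin.x,
                                      frame1.origin.y - inter.origin.y,
                                      frame1.size.width, frame1.size.height), img1);
    CGContextDrawImage(c2, CGRectMake(frame2.origin.x - inter.origin.x,
                                      frame2.origin.y - inter.origin.y,
                                      frame2.size.width, frame2.size.height), img2);
    BOOL hit = NO;
    for (size_t i = 0; i < w * h && !hit; i++)
        hit = (a1[i] > 0 && a2[i] > 0); // both non-transparent at the same point
    CGContextRelease(c1);
    CGContextRelease(c2);
    free(a1);
    free(a2);
    return hit;
}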
Not easily; you basically have to read in the raw bitmap data and walk the pixels.

iPhone image resolution increased when emailed

In my app I capture an image and then add a frame to it.
I also have a feature to add custom text to the final image (original image + frame). I am using the following code to draw the text:
-(UIImage *)addText:(UIImage *)img text:(NSString *)textInput
{
    CGFloat imageWidth = img.size.width;
    CGFloat imageHeigth = img.size.height;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, imageWidth, imageHeigth, 8,
                                                 4 * imageWidth, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, imageWidth, imageHeigth), img.CGImage);
    CGContextSetCMYKFillColor(context, 0.0, 0.0, 0.0, 1.0, 1);
    CGContextSetFont(context, customFont);
    UIColor *strokeColor = [UIColor blackColor];
    CGContextSetFillColorWithColor(context, strokeColor.CGColor);
    CGContextSetFontSize(context, DISPLAY_FONT_SIZE * DisplayToOutputScale);

    // Create an array of glyphs the size of the text that will be drawn.
    CGGlyph textToPrint[[textInput length]];
    for (int i = 0; i < [textInput length]; ++i)
    {
        // Store each letter in a glyph, offsetting by the magic number to get the appropriate value.
        textToPrint[i] = [textInput characterAtIndex:i] + 3 - 32;
    }

    // First pass is drawn invisible; it is used to measure the length of the text in glyphs.
    CGContextSetTextDrawingMode(context, kCGTextInvisible);
    CGContextShowGlyphsAtPoint(context, 0, 0, textToPrint, [textInput length]);
    CGPoint endPoint = CGContextGetTextPosition(context);

    // Calculate the position of the text on the white border frame.
    CGFloat xPos = (imageWidth / 2.0f) - (endPoint.x / 2.0f);
    CGFloat yPos = 30 * DisplayToOutputScale;

    // Toggle off invisible mode; we are ready to draw the text.
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextShowGlyphsAtPoint(context, xPos, yPos, textToPrint, [textInput length]);

    // Extract the resulting image.
    CGImageRef imageMasked = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    UIImage *result = [UIImage imageWithCGImage:imageMasked];
    CGImageRelease(imageMasked); // the UIImage retains it; was leaked in the original
    return result;
}
I email the image using UIImageJPEGRepresentation and attach the resulting data.
When I email the image without adding custom text, its size increases from 1050 x 1275 to 2100 x 2550, which is strange.
But when I email the image with text added, the size remains unchanged.
Can anyone explain why this happens?
I think it has something to do with converting the UIImage to NSData.
Thanks.
I had the same problem. I fixed it by starting the context with an explicit scale of 1 (passing a scale of 0 uses the device's screen scale, which is 2.0 on Retina displays and doubles the pixel dimensions of anything rendered through the context):
UIGraphicsBeginImageContextWithOptions(size, YES, 1.0);
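For context, a hedged sketch of how the export path might use this, where img stands for the framed image from the question:

UIGraphicsBeginImageContextWithOptions(img.size, YES, 1.0); // explicit 1.0, not the screen scale
[img drawInRect:CGRectMake(0, 0, img.size.width, img.size.height)];
UIImage *flattened = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *jpegData = UIImageJPEGRepresentation(flattened, 0.9); // attach this to the email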

Replace specific color in OpenGL drawing in Xcode?

I'm building a drawing app that uses OpenGL. I am new to OpenGL but managed to build the basic app.
I am now working on the ability for the user to save drawings to the camera roll.
I have a view that holds an image the user draws over, so it has to be visible but unaffected by the drawing canvas.
Above it I have the view I use for drawing. Because of the above, I set its background to transparent black with:
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
The problem:
Everything is great until I save the image. Since the background color is transparent black, the saved images have a black background, while I need it to be white.
I thought to change all the black in the canvas to white before saving.
Can someone point me in the right direction?
Thanks,
Shani
Hi David, thanks for replying. The first time, I actually used your suggestion as-is. I then read a little more and added the rest of the code (I hope); this time the background became white, but the drawn lines disappeared. Next I tried checking the buffer color at each point and replacing it with 255 wherever it was 0. That made the background white and the lines remained, but their colors changed in a way I cannot understand. I would appreciate your help with that.
This is the code I wrote:
-(void)saveCurrentScreenToPhotoAlbum
{
    int width = 768;
    int height = 1024;
    //glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    //glClear(GL_COLOR_BUFFER_BIT);
    NSInteger myDataLength = width * height * 4;
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    NSLog(@"%d", buffer[0]);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width * 4; x++) {
            // NOTE: this walks every byte, so the test below also rewrites
            // zero green, blue, and alpha components, not just "black pixels" -
            // which is why the line colors come out wrong.
            if (buffer[y * 4 * width + x] == 0) {
                buffer[y * 4 * width + x] = 250;
                //buffer[y * 4 * width + x + 1] = 250;
                NSLog(@" = %i", y * 4 * width + x);
            }
            // Flip vertically while copying (glReadPixels reads bottom-up).
            buffer2[(height - 1 - y) * width * 4 + x] = buffer[y * 4 * width + x];
        }
    }
    free(buffer); // safe to free now
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, releaseData);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    UIImageWriteToSavedPhotosAlbum(image, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
Shani
Change the behavior of the drawing code while saving: clear to white only for the frame you are about to read back, then restore the transparent clear color afterwards. A sketch of the full save path follows the snippet.
if (willSave)
    glClearColor(1, 1, 1, 1);
else
    glClearColor(0, 0, 0, 0);
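A minimal sketch of that save path, under the assumption that the scene can be re-rendered on demand (drawScene and buffer are hypothetical names standing in for the app's own rendering code and pixel buffer):

- (void)saveWithWhiteBackground
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);  // white, only for the saved frame
    glClear(GL_COLOR_BUFFER_BIT);
    [self drawScene];                      // hypothetical: re-issue the stroke geometry
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // ... build the CGImage and save, as in saveCurrentScreenToPhotoAlbum ...
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  // restore the transparent clear color
}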