Create a mask from the difference between two images (iPhone)

How can I detect the difference between 2 images, creating a mask of the area that's different in order to process the area that's common to both images (a Gaussian blur, for example)?
EDIT: I'm currently using this code to get the RGBA values of pixels:
+ (NSArray*)getRGBAsFromImage:(UIImage*)image atX:(int)xx andY:(int)yy count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First get the image into your data buffer
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = malloc(height * width * 4);
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                                 bitsPerComponent, bytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // Now your rawData contains the image data in the RGBA8888 pixel format.
    int byteIndex = (bytesPerRow * yy) + xx * bytesPerPixel;
    for (int ii = 0 ; ii < count ; ++ii)
    {
        CGFloat red   = (rawData[byteIndex]     * 1.0) / 255.0;
        CGFloat green = (rawData[byteIndex + 1] * 1.0) / 255.0;
        CGFloat blue  = (rawData[byteIndex + 2] * 1.0) / 255.0;
        CGFloat alpha = (rawData[byteIndex + 3] * 1.0) / 255.0;
        byteIndex += 4;

        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }

    free(rawData);
    return result;
}
The problem is that the images are captured from the iPhone's camera, so they are not in exactly the same position. I need to group the pixels into small areas of a couple of pixels and extract the general color of each area (maybe by adding up the RGBA values and dividing by the number of pixels, roughly as sketched below?). How could I do this and then translate it into a CGMask?
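Something like this untested sketch is what I have in mind for the averaging (the buffer is the rawData from the code above; blockSize, startX, and startY are just placeholder names):

// Average the RGBA values of a blockSize x blockSize area starting at (startX, startY).
// rawData is the RGBA8888 buffer from getRGBAsFromImage: above; pixels that fall
// outside the image are simply skipped.
static UIColor *averageColorOfBlock(unsigned char *rawData,
                                    NSUInteger width, NSUInteger height,
                                    NSUInteger startX, NSUInteger startY,
                                    NSUInteger blockSize)
{
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger sumR = 0, sumG = 0, sumB = 0, sumA = 0, pixelCount = 0;

    for (NSUInteger y = startY; y < startY + blockSize && y < height; y++) {
        for (NSUInteger x = startX; x < startX + blockSize && x < width; x++) {
            NSUInteger i = (bytesPerRow * y) + x * bytesPerPixel;
            sumR += rawData[i];
            sumG += rawData[i + 1];
            sumB += rawData[i + 2];
            sumA += rawData[i + 3];
            pixelCount++;
        }
    }
    if (pixelCount == 0) return nil;

    // Divide the summed components by the number of pixels to get the block's general color.
    return [UIColor colorWithRed:(sumR / (CGFloat)pixelCount) / 255.0
                           green:(sumG / (CGFloat)pixelCount) / 255.0
                            blue:(sumB / (CGFloat)pixelCount) / 255.0
                           alpha:(sumA / (CGFloat)pixelCount) / 255.0];
}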
I know this is a complex question, so any help is appreciated.
Thanks.

I think the simplest way to do this would be to use a difference blend mode. The following code is based on code I use in CKImageAdditions.
+ (UIImage *) differenceOfImage:(UIImage *)top withImage:(UIImage *)bottom {
    CGImageRef topRef = [top CGImage];
    CGImageRef bottomRef = [bottom CGImage];

    // Dimensions
    CGRect bottomFrame = CGRectMake(0, 0, CGImageGetWidth(bottomRef), CGImageGetHeight(bottomRef));
    CGRect topFrame = CGRectMake(0, 0, CGImageGetWidth(topRef), CGImageGetHeight(topRef));
    CGRect renderFrame = CGRectIntegral(CGRectUnion(bottomFrame, topFrame));

    // Create context
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        printf("Error allocating color space.\n");
        return NULL;
    }

    CGContextRef context = CGBitmapContextCreate(NULL,
                                                 renderFrame.size.width,
                                                 renderFrame.size.height,
                                                 8,
                                                 renderFrame.size.width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    if (context == NULL) {
        printf("Context not created!\n");
        return NULL;
    }

    // Draw images
    CGContextSetBlendMode(context, kCGBlendModeNormal);
    CGContextDrawImage(context, CGRectOffset(bottomFrame, -renderFrame.origin.x, -renderFrame.origin.y), bottomRef);
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    CGContextDrawImage(context, CGRectOffset(topFrame, -renderFrame.origin.x, -renderFrame.origin.y), topRef);

    // Create image from context
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage * image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGContextRelease(context);

    return image;
}
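For reference, a hypothetical call site, assuming the method above is added to a class you control (imageA and imageB are placeholders for your two captures):

// imageA and imageB are two photos of the same scene taken back to back.
UIImage *difference = [[self class] differenceOfImage:imageA withImage:imageB];
// In the result, near-black areas are unchanged between the shots and
// brighter areas are where the images differ.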

There are three reasons pixels will change from one iPhone photo to the next: the subject changed, the iPhone moved, and random noise. I assume for this question you're most interested in the subject changes, and you want to process out the effects of the other two. I also assume the app intends the user to keep the iPhone reasonably still, so iPhone movement changes are less significant than subject changes.
To reduce the effects of random noise, just blur the image a little. A simple averaging blur, where each pixel in the resulting image is an average of the original pixel with its nearest neighbors, should be sufficient to smooth out any noise in a reasonably well lit iPhone image.
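A rough, untested sketch of such an averaging (box) blur over an RGBA8888 buffer (the buffer layout matches the code in the question; function and parameter names are placeholders):

// Naive 3x3 box blur over an RGBA8888 buffer. Writes into a separate output
// buffer so that already-blurred pixels don't feed back into the average.
static void boxBlurRGBA(const unsigned char *src, unsigned char *dst,
                        NSUInteger width, NSUInteger height)
{
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;

    for (NSUInteger y = 0; y < height; y++) {
        for (NSUInteger x = 0; x < width; x++) {
            NSUInteger sum[4] = {0, 0, 0, 0};
            NSUInteger count = 0;

            // Average the pixel with its 8 nearest neighbours (clamped at the edges).
            for (NSInteger dy = -1; dy <= 1; dy++) {
                for (NSInteger dx = -1; dx <= 1; dx++) {
                    NSInteger nx = (NSInteger)x + dx;
                    NSInteger ny = (NSInteger)y + dy;
                    if (nx < 0 || ny < 0 || nx >= (NSInteger)width || ny >= (NSInteger)height)
                        continue;
                    const unsigned char *p = src + ny * bytesPerRow + nx * bytesPerPixel;
                    for (int c = 0; c < 4; c++) sum[c] += p[c];
                    count++;
                }
            }

            unsigned char *out = dst + y * bytesPerRow + x * bytesPerPixel;
            for (int c = 0; c < 4; c++) out[c] = (unsigned char)(sum[c] / count);
        }
    }
}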
To address iPhone movement, you can run a feature detection algorithm on each image (look up feature detection on Wikipedia for a start). Then calculate the transforms needed to align the least changed detected features.
Apply that transform to the blurred images, and find the difference between the images. Any pixels with a sufficient difference will become your mask. You can then process the mask to eliminate any islands of changed pixels. For example, a subject may be wearing a solid colored shirt. The subject may move from one image to the next, but the area of the solid colored shirt may overlap resulting in a mask with a hole in the middle.
In other words, this is a significant and difficult image processing problem. You won't find the answer in a stackoverflow.com post. You will find the answer in a digital image processing textbook.
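To make the thresholding step concrete, here is a rough, untested sketch (the diffRGBA buffer, the threshold value, and the function names are assumptions, not part of the answer above) that turns a difference buffer into a CGImage mask:

// Data-provider callback that frees the mask buffer once the mask is released.
static void releaseMaskData(void *info, const void *data, size_t size)
{
    free((void *)data);
}

// Build an 8-bit CGImage mask from an RGBA8888 difference buffer.
// Pixels whose summed RGB difference exceeds `threshold` become 0, which a
// CGImage mask treats as "fully painted"; everything else becomes 255 (masked out).
static CGImageRef createMaskFromDifference(const unsigned char *diffRGBA,
                                           size_t width, size_t height,
                                           size_t threshold)
{
    size_t bytesPerRow = width;   // one byte per pixel in the mask
    unsigned char *maskData = malloc(bytesPerRow * height);

    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            const unsigned char *p = diffRGBA + (y * width + x) * 4;
            size_t magnitude = p[0] + p[1] + p[2];   // ignore alpha
            maskData[y * bytesPerRow + x] = (magnitude > threshold) ? 0 : 255;
        }
    }

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, maskData,
                                                              bytesPerRow * height,
                                                              releaseMaskData);
    CGImageRef mask = CGImageMaskCreate(width, height, 8, 8, bytesPerRow,
                                        provider, NULL, false);
    CGDataProviderRelease(provider);
    return mask;   // caller releases with CGImageRelease(); use with CGContextClipToMask()
}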

Can't you just subtract pixel values from the images, and process pixels where the difference is 0?

Every pixel which does not have a suitably similar pixel in the other image within a certain radius can be deemed to be part of the mask. It's slow (though there's not much that would be faster), but it works fairly simply.
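A rough sketch of that per-pixel search (untested; the buffer names, radius, and tolerance are placeholders, and both images are assumed to be RGBA8888 buffers of the same size):

// Returns YES if pixel (x, y) of imageA has a "suitably similar" pixel
// somewhere within `radius` pixels in imageB; NO means it belongs in the mask.
static BOOL hasSimilarPixelNearby(const unsigned char *imageA, const unsigned char *imageB,
                                  NSInteger width, NSInteger height,
                                  NSInteger x, NSInteger y,
                                  NSInteger radius, NSInteger tolerance)
{
    const unsigned char *a = imageA + (y * width + x) * 4;

    for (NSInteger ny = MAX(0, y - radius); ny <= MIN(height - 1, y + radius); ny++) {
        for (NSInteger nx = MAX(0, x - radius); nx <= MIN(width - 1, x + radius); nx++) {
            const unsigned char *b = imageB + (ny * width + nx) * 4;
            NSInteger diff = ABS(a[0] - b[0]) + ABS(a[1] - b[1]) + ABS(a[2] - b[2]);
            if (diff <= tolerance)
                return YES;   // found a close enough match in the neighbourhood
        }
    }
    return NO;
}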

Go through the pixels and copy the ones that are different in the lower image into a new, transparent image.
Blur the upper image completely, then show the new image on top.

Related

iPhone App Green Screen Replacement

Q: I'm looking to use the iPhone camera to take a photo and then replace the green screen in that photo with another photo.
What's the best way to dive into this? I couldn't find many resources online.
Thanks in advance!
Conceptually, all that you need to do is loop through the pixel data of the photo taken by the phone, and for each pixel that is not within a certain range of green, copy the pixel into the same location on your background image.
Here is an example I modified from keremic's answer to another stackoverflow question.
NOTE: This is untested and just intended to give you an idea of a technique that will work
//Get data into C array
// (foregroundImage stands in for the photo taken with the camera)
CGImageRef image = foregroundImage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
unsigned char *data = malloc(height * width * bytesPerPixel);
// you will need to copy your background image into resulting_image_data,
// which I am not showing here
unsigned char *resulting_image_data = malloc(height * width * bytesPerPixel);
CGContextRef context = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);

//loop through each pixel
for (int row = 0; row < height; row++) {
    for (int col = 0; col < width * bytesPerPixel; col = col + 4) {
        unsigned char red   = data[row * bytesPerRow + col];
        unsigned char green = data[row * bytesPerRow + col + 1];
        unsigned char blue  = data[row * bytesPerRow + col + 2];
        unsigned char alpha = data[row * bytesPerRow + col + 3];
        // if the pixel is NOT within a shade of green, copy it over the background
        if (!(green > 250 && red < 10 && blue < 10)) {
            resulting_image_data[row * bytesPerRow + col]     = red;
            resulting_image_data[row * bytesPerRow + col + 1] = green;
            resulting_image_data[row * bytesPerRow + col + 2] = blue;
            resulting_image_data[row * bytesPerRow + col + 3] = alpha;
        }
    }
}
//convert resulting_image_data into a UIImage
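The last comment glosses over the conversion back to a UIImage; one untested way to finish it off, continuing with the same buffer names, is:

// Wrap resulting_image_data in a bitmap context and pull a UIImage out of it.
CGColorSpaceRef outColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef outContext = CGBitmapContextCreate(resulting_image_data, width, height,
                                                bitsPerComponent, bytesPerRow, outColorSpace,
                                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef outCGImage = CGBitmapContextCreateImage(outContext);
UIImage *resultImage = [UIImage imageWithCGImage:outCGImage];

// Clean up
CGImageRelease(outCGImage);
CGContextRelease(outContext);
CGColorSpaceRelease(outColorSpace);
free(resulting_image_data);
free(data);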
Have a look at compiling OpenCV for iPhone - not an easy task, but it gives you access to a whole library of really great image processing tools.
I'm using OpenCV for an app I'm developing at the moment (not all that dissimilar to yours). For what you're trying to do, OpenCV would be a great solution, although it requires a bit of learning. Once you've got OpenCV working, the actual task of removing the green shouldn't be too hard.
Edit: This link will be a helpful resource if you do decide to use OpenCV: Compiling OpenCV for iOS

split UIImage by colors and create 2 images

I have looked through replacing colors in an image, but I cannot get it to work how I need because I am trying to do it with every color but one, as well as with transparency.
What I am looking for is a way to take an image and split out a color (say, all the pure black) from that image. Then take that split-out portion and make a new image with a transparent background and the split-out portion.
(Here is just an example of the idea: say I want to take a screenshot of this page, make every color other than pure black transparent, and save that new image to the library or put it into a UIImageView.)
I have looked into CGImageCreateWithMaskingColors but can't seem to do what I need with the transparent portion, and I don't really understand the colorMasking input other than that you can provide it with a {Rmin,Rmax,Gmin,Gmax,Bmin,Bmax} color mask, but when I do, it colors everything. Any ideas or input would be great.
Sounds like you're going to have to get access to the underlying bytes and write code to process them directly. You can use CGImageGetDataProvider() to get access to the data of an image, but there's no guarantee that the format will be something you know how to handle. Alternately you can create a new CGContextRef using a specific format you know how to handle, then draw the original image into your new context, then process the underlying data. Here's a quick attempt at doing what you want (uncompiled):
- (UIImage *)imageWithBlackPixels:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;

    // create a premultiplied ARGB context with 32bpp
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    size_t bpc = 8; // bits per component
    size_t bpp = bpc * 4 / 8; // bytes per pixel
    size_t bytesPerRow = bpp * width;
    unsigned char *data = malloc(bytesPerRow * height);
    // big-endian byte order so the bytes are laid out A,R,G,B, matching the indexing below
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Big;
    CGContextRef ctx = CGBitmapContextCreate(data, width, height, bpc, bytesPerRow, colorspace, bitmapInfo);
    CGColorSpaceRelease(colorspace);
    if (ctx == NULL) {
        // couldn't create the context - double-check the parameters?
        free(data);
        return nil;
    }

    // draw the image into the context
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

    // replace all non-black pixels with transparent
    // preserve existing transparency on black pixels
    for (size_t y = 0; y < height; y++) {
        size_t rowStart = bytesPerRow * y;
        for (size_t x = 0; x < width; x++) {
            size_t pixelOffset = rowStart + x*bpp;
            // check the RGB components of the pixel (offsets 1-3; offset 0 is alpha)
            if (data[pixelOffset+1] != 0 || data[pixelOffset+2] != 0 || data[pixelOffset+3] != 0) {
                // this pixel contains non-black. zero it out
                memset(&data[pixelOffset], 0, 4);
            }
        }
    }

    // create our new image and release the context data
    CGImageRef newCGImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    free(data);

    UIImage *newImage = [UIImage imageWithCGImage:newCGImage scale:image.scale orientation:image.imageOrientation];
    CGImageRelease(newCGImage);
    return newImage;
}

Reading and editing pixels of image on iPhone

I'm curious about how to read and edit a picture's pixels on the iPhone. Am I better off using an array of points with colours?
I want to do things like.. if a CGPoint intersects with a "brown" spot on the picture, set the colour of all brown pixels in a radius to white. More questions to come, but this is a start.
Cheers
The picture data is available to you as precisely that: a two-dimensional array of pixels, each pixel being represented by a 32 bit integer. For each of the color components (red, green, blue, and alpha) there is an 8 bit value. The ordering of these 8-bit-wide values within the 32 bit integer varies with the format of the picture data. The Apple documentation about all this is really good. While there is some attractive Apple stuff using CGDataProviderCopyData to give you a pointer into the actual data storage of a UIImage, in practice this can be a headache, because the format of that internal storage can vary widely from one image to the next. In practice, most people doing image processing seem to use this approach:
// uiImage stands in for the UIImage whose pixels you want to read
CGImageRef image = uiImage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = malloc(height * width * 4);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);

// rawData contains image data in the RGBA8888 format.
// For any pixel at coordinate (x, y) the components are:
int pixelIndex = (bytesPerRow * y) + x * bytesPerPixel;
unsigned char red   = rawData[pixelIndex];
unsigned char green = rawData[pixelIndex + 1];
unsigned char blue  = rawData[pixelIndex + 2];
unsigned char alpha = rawData[pixelIndex + 3];
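That covers reading; for the editing half of the question (turning brown pixels white within a radius), a rough untested sketch continuing from the rawData buffer above (cx, cy, radius, and the brownness test are made-up placeholders):

// Turn every "brown-ish" pixel within `radius` of (cx, cy) white,
// then rebuild a UIImage from the modified buffer.
for (NSInteger y = MAX(0, cy - radius); y < MIN((NSInteger)height, cy + radius); y++) {
    for (NSInteger x = MAX(0, cx - radius); x < MIN((NSInteger)width, cx + radius); x++) {
        NSUInteger i = (bytesPerRow * y) + x * bytesPerPixel;
        unsigned char r = rawData[i], g = rawData[i + 1], b = rawData[i + 2];
        // crude "brown" test: red clearly above green, green clearly above blue
        if (r > 90 && r > g + 20 && g > b + 20) {
            rawData[i] = rawData[i + 1] = rawData[i + 2] = 255;   // white
        }
    }
}

// Rebuild a UIImage from the edited buffer
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef editedContext = CGBitmapContextCreate(rawData, width, height, bitsPerComponent,
                                                   bytesPerRow, cs,
                                                   kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef editedCGImage = CGBitmapContextCreateImage(editedContext);
UIImage *editedImage = [UIImage imageWithCGImage:editedCGImage];
CGImageRelease(editedCGImage);
CGContextRelease(editedContext);
CGColorSpaceRelease(cs);
free(rawData);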

How to erase part of an image as the user touches it

My big picture goal is to have a grey field over an image, and then as the user rubs on that grey field, it reveals the image underneath. Basically like a lottery scratcher card. I've done a bunch of searching through the docs, as well as this site, but can't find the solution.
The following is just a proof of concept to test "erasing" an image based on where the user touches, but it isn't working. :(
I have a UIView that detects touches, then sends the coords of the move to the UIViewController that clips the image in a UIImageView by doing the following:
- (void) moveDetectedFrom:(CGPoint) from to:(CGPoint) to
{
    UIImage* image = bkgdImageView.image;
    CGSize s = image.size;
    UIGraphicsBeginImageContext(s);
    CGContextRef g = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(g, from.x, from.y);
    CGContextAddLineToPoint(g, to.x, to.y);
    CGContextClosePath(g);
    CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
    CGContextEOClip(g);
    [image drawAtPoint:CGPointZero];
    bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [bkgdImageView setNeedsDisplay];
}
The problem is that the touches are sent to this method just fine, but nothing happens to the original image.
Am I doing the clip path incorrectly? Or?
Not really sure...so any help you may have would be greatly appreciated.
Thanks in advance,
Joel
I tried to do the same thing a while ago using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects it to be. I already knew how to work with OpenCV (the Open Computer Vision Library), and since it is written in C, I knew I could use it on the iPhone.
Doing what you want to do with OpenCV is extremely easy.
First you need a couple of functions to convert a UIImage to an IplImage, which is the type OpenCV uses to represent images of all kinds, and back.
//This is the function you use to convert a UIImage -> IplImage
+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
                                                    iplimage->depth, iplimage->widthStep,
                                                    colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    return iplimage;
}
//Convert an IplImage -> UIImage
+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    NSData *data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
    //NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
    CGImageRef imageRef = CGImageCreate(image->width, image->height,
                                        image->depth, image->depth * image->nChannels, image->widthStep,
                                        colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault,
                                        provider, NULL, false, kCGRenderingIntentDefault);
    UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);
    [data release];
    return ret;
}
Now that you have both of the basic functions you need, you can do whatever you want with your IplImage.
This does what you want:
+(UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r {
    //r is the radius of the erasing
    int a = point.x;
    int b = point.y;
    int position;
    int minX, minY, maxX, maxY;
    minX = (a-r > 0) ? a-r : 0;
    minY = (b-r > 0) ? b-r : 0;
    maxX = ((a+r) < (image->width))  ? a+r : (image->width);
    maxY = ((b+r) < (image->height)) ? b+r : (image->height);

    for (int i = minX; i < maxX; i++)
    {
        for (int j = minY; j < maxY; j++)
        {
            position = ((j-b)*(j-b)) + ((i-a)*(i-a));
            if (position <= r*r)
            {
                uchar* ptr = (uchar*)(image->imageData) + (j*image->widthStep + i*image->nChannels);
                // zero out all four channels of this pixel (indices 0-3)
                ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0;
            }
        }
    }
    UIImage *res = [self UIImageFromIplImage:image];
    return res;
}
Sorry for the formatting.
If you want to know how to port OpenCV to the iPhone, see Yoshimasa Niwa's write-up.
If you want to check out an app currently working with OpenCV on the App Store, go get Flags&Faces.
You usually want to draw into the current graphics context inside of a drawRect: method, not just any old method. Also, a clip region only affects what is drawn to the current graphics context. But instead of going into why this approach isn't working, I'd suggest doing it differently.
What I would do is have two views. One with the image, and one with the gray color that is made transparent. This allows the graphics hardware to cache the image, instead of trying to redraw the image every time you modify the gray fill.
The gray one would be a UIView subclass with a CGBitmapContext that you would draw into to make the pixels that the user touches clear.
There are probably several ways to do this. I'm just suggesting one way above.
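A minimal, untested sketch of such a gray overlay view (the class name, ivar names, and brush radius are made up for illustration; it keeps its own bitmap context and clears a circle wherever the finger moves):

// ScratchOverlayView - a gray overlay that the user "scratches" away,
// placed on top of the UIImageView that holds the photo.
@interface ScratchOverlayView : UIView {
    CGContextRef _maskContext;   // bitmap we erase into
}
@end

@implementation ScratchOverlayView

- (id)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.backgroundColor = [UIColor clearColor];
        self.opaque = NO;
        size_t width  = frame.size.width;
        size_t height = frame.size.height;
        CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
        _maskContext = CGBitmapContextCreate(NULL, width, height, 8, width * 4, cs,
                                             kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(cs);
        // start fully gray
        CGContextSetGrayFillColor(_maskContext, 0.5, 1.0);
        CGContextFillRect(_maskContext, CGRectMake(0, 0, width, height));
    }
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    CGFloat radius = 20.0;
    // the bitmap's y axis is flipped relative to UIKit coordinates
    CGFloat flippedY = self.bounds.size.height - p.y;
    // punch a transparent hole in the gray bitmap where the finger is
    CGContextSetBlendMode(_maskContext, kCGBlendModeClear);
    CGContextFillEllipseInRect(_maskContext,
                               CGRectMake(p.x - radius, flippedY - radius, radius * 2, radius * 2));
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    // draw the (partially erased) gray bitmap over the image view underneath
    CGImageRef maskCGImage = CGBitmapContextCreateImage(_maskContext);
    UIImage *maskImage = [UIImage imageWithCGImage:maskCGImage];
    [maskImage drawInRect:self.bounds];
    CGImageRelease(maskCGImage);
}

- (void)dealloc {
    CGContextRelease(_maskContext);
    [super dealloc];
}

@end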

How do I count red pixels in a UIImage using Objective-C on the iPhone?

I'm a beginner in iPhone software development.
I'm developing an application about skin cancer in which I want to count the red pixels in a UIImage captured by the iPhone camera. Is it possible to count the red pixels in a UIImage?
Since this is a question that is asked almost weekly, I decided to make a little example project that shows how to do this. You can look at the code at:
http://github.com/st3fan/iphone-experiments/tree/master/Miscellaneous/PixelAccess/
The important bit is the following code, which takes a UIImage and then counts the number of pure red pixels. It is an example and you can use it and modify it for your own algorithms:
/**
 * Structure to keep one pixel in RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA format
 */
struct pixel {
    unsigned char r, g, b, a;
};

/**
 * Process the image and return the number of pure red pixels in it.
 */
- (NSUInteger) processImage: (UIImage*) image
{
    NSUInteger numberOfRedPixels = 0;

    // Allocate a buffer big enough to hold all the pixels
    struct pixel* pixels = (struct pixel*) calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != NULL)
    {
        // Create a new bitmap
        CGContextRef context = CGBitmapContextCreate(
            (void*) pixels,
            image.size.width,
            image.size.height,
            8,
            image.size.width * 4,
            CGImageGetColorSpace(image.CGImage),
            kCGImageAlphaPremultipliedLast
        );

        if (context != NULL)
        {
            // Draw the image in the bitmap
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);

            // Now that we have the image drawn in our own buffer, we can loop over the pixels to
            // process it. This simple case simply counts all pixels that have a pure red component.
            // There are probably more efficient and interesting ways to do this. But the important
            // part is that the pixels buffer can be read directly.
            // Walk the buffer with a separate cursor so the original pointer can still be freed.
            struct pixel* p = pixels;
            NSUInteger numberOfPixels = image.size.width * image.size.height;
            while (numberOfPixels > 0) {
                if (p->r == 255) {
                    numberOfRedPixels++;
                }
                p++;
                numberOfPixels--;
            }

            CGContextRelease(context);
        }

        free(pixels);
    }

    return numberOfRedPixels;
}
A simple example of how to call this:
- (IBAction) processImage
{
    NSUInteger numberOfRedPixels = [self processImage: [UIImage imageNamed: @"DutchFlag.png"]];
    label_.text = [NSString stringWithFormat: @"There are %lu red pixels in the image", (unsigned long)numberOfRedPixels];
}
The example project on Github contains a complete working example.