Face Tracking on iPhone using OpenCV

I want to implement face tracking on the iPhone, similar to this code. It is Mac OS code, but I want to do the same thing on the iPhone.
Any ideas about face tracking on the iPhone?

You have to use OpenCV to detect the face and import it into your project. In this method I have used a rectangle/ellipse to represent the detected face:
-(UIImage *) opencvFaceDetect:(UIImage *)originalImage {
cvSetErrMode(CV_ErrModeParent);
IplImage *image = [self CreateIplImageFromUIImage:originalImage];
// Scaling down
/*
Creates IPL image (header and data) ----------------cvCreateImage
CVAPI(IplImage*) cvCreateImage( CvSize size, int depth, int channels );
*/
IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2), IPL_DEPTH_8U, 3);
/* Smooths and downsamples the image (Gaussian pyramid) -------- cvPyrDown */
cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
int scale = 2;
// Load XML
NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
// Check whether the cascade has loaded successfully. Otherwise report an error and quit
if( !cascade )
{
NSLog(#"ERROR: Could not load classifier cascade\n");
//return;
}
//Allocate the Memory storage
CvMemStorage* storage = cvCreateMemStorage(0);
// Clear the memory storage which was used before
cvClearMemStorage( storage );
CGColorSpaceRef colorSpace;
CGContextRef contextRef;
CGRect face_rect;
// Find whether the cascade is loaded, to find the faces. If yes, then:
if( cascade )
{
CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20));
cvReleaseImage(&small_image);
// Create canvas to show the results
CGImageRef imageRef = originalImage.CGImage;
colorSpace = CGColorSpaceCreateDeviceRGB();
contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height,
8, originalImage.size.width * 4, colorSpace,
kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
//VIKAS
CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef);
CGContextSetLineWidth(contextRef, 4);
CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5);
// Draw the results on the image: mark each detected face with a rectangle/ellipse
// Loop the number of faces found.
for(int i = 0; i < faces->total; i++)
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
// Calc the rect of faces
// Create a new rectangle for drawing the face
CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i);
// CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef,
// CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));
face_rect = CGContextConvertRectToDeviceSpace(contextRef,
CGRectMake(cvrect.x*scale, cvrect.y, cvrect.width*scale, cvrect.height*scale*1.25));
facedetectapp=(FaceDetectAppDelegate *)[[UIApplication sharedApplication]delegate];
facedetectapp.grabcropcoordrect=face_rect;
NSLog(#" FACE off %f %f %f %f",facedetectapp.grabcropcoordrect.origin.x,facedetectapp.grabcropcoordrect.origin.y,facedetectapp.grabcropcoordrect.size.width,facedetectapp.grabcropcoordrect.size.height);
CGContextStrokeRect(contextRef, face_rect);
//CGContextFillEllipseInRect(contextRef,face_rect);
CGContextStrokeEllipseInRect(contextRef,face_rect);
[pool release];
}
}
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage],face_rect);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
cvReleaseMemStorage(&storage);
cvReleaseHaarClassifierCascade(&cascade);
return returnImage;
}
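The code above assumes a CreateIplImageFromUIImage: helper that is not shown. Below is a minimal sketch of one common way to write it, using a CGBitmapContext; this is an assumption about the helper, not necessarily the original author's implementation.
- (IplImage *)CreateIplImageFromUIImage:(UIImage *)image
{
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Draw the UIImage into a temporary 4-channel IplImage via Core Graphics
    IplImage *iplImage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplImage->imageData,
                                                    iplImage->width, iplImage->height,
                                                    iplImage->depth, iplImage->widthStep,
                                                    colorSpace,
                                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    // The Haar detector expects a 3-channel image, so convert and drop the alpha channel
    IplImage *result = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 3);
    cvCvtColor(iplImage, result, CV_RGBA2BGR);
    cvReleaseImage(&iplImage);
    return result;
}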

Take a look at this article. It includes a demo project and explains how to get the best performance when processing live video.
Computer vision with iOS Part 2: Face tracking in live video
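As a rough sketch (not the article's code) of how the detector above could be driven by live frames: set up an AVCaptureVideoDataOutput whose delegate converts each sample buffer to a UIImage and runs it through opencvFaceDetect:. The conversion helper and the UI-update selector named here are hypothetical placeholders.
// Rough sketch only: per-frame detection from the AVCaptureVideoDataOutputSampleBufferDelegate callback.
// imageFromSampleBuffer: and showFrame: are hypothetical helpers you would supply.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *frame = [self imageFromSampleBuffer:sampleBuffer]; // hypothetical conversion helper
    UIImage *annotated = [self opencvFaceDetect:frame];         // the method shown above
    // Hop back to the main thread before touching UIKit
    [self performSelectorOnMainThread:@selector(showFrame:)
                           withObject:annotated
                        waitUntilDone:NO];
}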

Related

iPhone take augmented reality screenshot with AVCaptureVideoPreviewLayer

I have a small augmented reality app that I'm developing and would like to know how to save a screenshot of what the user sees with a tap of a button or a timer.
The app works by overlaying the live camera feed above another UIView. I can save screenshots by pressing the power button + home button, and these are saved to the camera roll. However, Apple will not render the AVCaptureVideoPreviewLayer, even if I ask the window to render itself; it leaves a transparent piece of canvas where the preview layer is.
What's the proper way for an augmented reality app to save screenshots, including transparency and subviews?
//displaying a live preview on one of the views
-(void)startCapture
{
captureSession = [[AVCaptureSession alloc] init];
// NOTE: despite its name, this variable ends up holding the *video* capture device below
AVCaptureDevice *audioCaptureDevice = nil;
// AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in videoDevices) {
if(useFrontCamera){
if (device.position == AVCaptureDevicePositionFront) {
//FRONT-FACING CAMERA EXISTS
audioCaptureDevice = device;
break;
}
}else{
if (device.position == AVCaptureDevicePositionBack) {
//Rear-FACING CAMERA EXISTS
audioCaptureDevice = device;
break;
}
}
}
NSError *error = nil;
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
if (audioInput) {
[captureSession addInput:audioInput];
}
else {
// Handle the failure.
}
captureOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureOutput setAlwaysDiscardsLateVideoFrames:YES];
// create the queue the delegate callbacks will use (label is illustrative)
dispatch_queue_t queue = dispatch_queue_create("cameraFrameQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:queue];
[captureOutput setVideoSettings:videoSettings]; // videoSettings assumed to be configured elsewhere
dispatch_release(queue);
if([captureSession canAddOutput:captureOutput]){
[captureSession addOutput:captureOutput];
}else{
//handle failure
}
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
UIView *aView = arOverlayView;
previewLayer.frame =CGRectMake(0,0, arOverlayView.frame.size.width,arOverlayView.frame.size.height); // Assume you want the preview layer to fill the view.
[aView.layer addSublayer:previewLayer];
[captureSession startRunning];
}
//ask the entire window to draw itself in a graphics context. This call will not render
//the AVCaptureVideoPreviewLayer . It has to be replaced with a UIImageView or GL based view.
//see following code for creating a dynamically updating UIImageView
-(void)saveScreenshot
{
UIGraphicsBeginImageContext(appDelegate.window.bounds.size);
[appDelegate.window.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(screenshot, self,
@selector(image:didFinishSavingWithError:contextInfo:), nil);
}
//image saved to camera roll callback
- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error
contextInfo:(void *)contextInfo
{
// Was there an error?
if (error != NULL)
{
// Show error message...
NSLog(#"save failed");
}
else // No errors
{
NSLog(#"save successful");
// Show message image successfully saved
}
}
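To trigger it from a button tap (as the question mentions), something like the following would do; the action name is just illustrative:
// Hypothetical button action that kicks off the screenshot above
- (IBAction)screenshotButtonTapped:(id)sender
{
    [self saveScreenshot];
}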
Here's the code for creating the image:
//you need to add your view controller as a delegate to the camera output to be notified of buffereed data
-(void)activateCameraFeed
{
//this is the code responsible for capturing feed for still image processing
dispatch_queue_t queue = dispatch_queue_create("com.AugmentedRealityGlamour.ImageCaptureQueue", NULL);
captureOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureOutput setAlwaysDiscardsLateVideoFrames:YES];
[captureOutput setSampleBufferDelegate:self queue:queue];
[captureOutput setVideoSettings:videoSettings];
dispatch_release(queue);
//......configure audio feed, add inputs and outputs
}
//buffer delegate callback
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
if ( ignoreImageStream )
return;
[self performImageCaptureFrom:sampleBuffer];
}
Create a UIImage:
- (void) performImageCaptureFrom:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer;
if ( CMSampleBufferGetNumSamples(sampleBuffer) != 1 )
return;
if ( !CMSampleBufferIsValid(sampleBuffer) )
return;
if ( !CMSampleBufferDataIsReady(sampleBuffer) )
return;
imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if ( CVPixelBufferGetPixelFormatType(imageBuffer) != kCVPixelFormatType_32BGRA )
return;
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGImageRef newImage = nil;
if ( cameraDeviceSetting == CameraDeviceSetting640x480 )
{
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
newImage = CGBitmapContextCreateImage(newContext);
CGColorSpaceRelease( colorSpace );
CGContextRelease(newContext);
}
else
{
uint8_t *tempAddress = malloc( 640 * 4 * 480 );
memcpy( tempAddress, baseAddress, bytesPerRow * height );
baseAddress = tempAddress;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst);
newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
newContext = CGBitmapContextCreate(baseAddress, 640, 480, 8, 640*4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextScaleCTM( newContext, (CGFloat)640/(CGFloat)width, (CGFloat)480/(CGFloat)height );
CGContextDrawImage(newContext, CGRectMake(0,0,640,480), newImage);
CGImageRelease(newImage);
newImage = CGBitmapContextCreateImage(newContext);
CGColorSpaceRelease( colorSpace );
CGContextRelease(newContext);
free( tempAddress );
}
if ( newImage != nil )
{
//modified for iOS5.0 with ARC
tempImage = [[UIImage alloc] initWithCGImage:newImage scale:(CGFloat)1.0 orientation:cameraImageOrientation];
CGImageRelease(newImage);
//this call creates the illusion of a preview layer, while we are actively switching images created with this method
[self performSelectorOnMainThread:@selector(newCameraImageNotification:) withObject:tempImage waitUntilDone:YES];
}
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
}
Finally, update the interface with a UIView that can actually be rendered in a graphics context:
- (void) newCameraImageNotification:(UIImage*)newImage
{
if ( newImage == nil )
return;
[arOverlayView setImage:newImage];
//or do more advanced processing of the image
}
If you want a snapshot of what's on screen, this is what I'm doing in one of my camera apps. I haven't touched this code in a long time, so there might be a better iOS 5.0 way now, but this is solid with over 1 million downloads. There is a function for grabbing a UIView-based screen and one for grabbing an OpenGL ES 1 screen:
//
// ScreenCapture.m
// LiveEffectsCam
//
// Created by John Carter on 10/8/10.
//
#import "ScreenCapture.h"
#import <QuartzCore/CABase.h>
#import <QuartzCore/CATransform3D.h>
#import <QuartzCore/CALayer.h>
#import <QuartzCore/CAScrollLayer.h>
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/EAGLDrawable.h>
@implementation ScreenCapture
+ (UIImage *) GLViewToImage:(GLView *)glView
{
UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image
return glImage;
}
+ (UIImage *) GLViewToImage:(GLView *)glView withOverlayImage:(UIImage *)overlayImage
{
UIImage *glImage = [GLView snapshot:glView]; // returns an autoreleased image
// Merge Image and Overlay
//
CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, glImage.size.width*glImage.scale, glImage.size.height*glImage.scale);
CGImageRef overlayCopy = CGImageCreateCopy( overlayImage.CGImage );
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, (int)glImage.size.width*glImage.scale, (int)glImage.size.height*glImage.scale, 8, (int)glImage.size.width*4*glImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, imageRect, glImage.CGImage);
CGContextDrawImage(context, imageRect, overlayCopy);
CGImageRef newImage = CGBitmapContextCreateImage(context);
UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
CGImageRelease(newImage);
CGImageRelease(overlayCopy);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return combinedViewImage;
}
+ (UIImage *) UIViewToImage:(UIView *)view withOverlayImage:(UIImage *)overlayImage
{
UIImage *viewImage = [ScreenCapture UIViewToImage:view]; // returns an autoreleased image
// Merge Image and Overlay
//
CGRect imageRect = CGRectMake((CGFloat)0.0, (CGFloat)0.0, viewImage.size.width*viewImage.scale, viewImage.size.height*viewImage.scale);
CGImageRef overlayCopy = CGImageCreateCopy( overlayImage.CGImage );
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, (int)viewImage.size.width*viewImage.scale, (int)viewImage.size.height*viewImage.scale, 8, (int)viewImage.size.width*4*viewImage.scale, colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context, imageRect, viewImage.CGImage);
CGContextDrawImage(context, imageRect, overlayCopy);
CGImageRef newImage = CGBitmapContextCreateImage(context);
UIImage *combinedViewImage = [[[UIImage alloc] initWithCGImage:newImage] autorelease];
CGImageRelease(newImage);
CGImageRelease(overlayCopy);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return combinedViewImage;
}
+ (UIImage *) UIViewToImage:(UIView *)view
{
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
//
// CGSize imageSize = [[UIScreen mainScreen] bounds].size;
CGSize imageSize = CGSizeMake( (CGFloat)480.0, (CGFloat)640.0 ); // camera image size
if (NULL != UIGraphicsBeginImageContextWithOptions)
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
UIGraphicsBeginImageContext(imageSize);
CGContextRef context = UIGraphicsGetCurrentContext();
// Start with the view...
//
CGContextSaveGState(context);
CGContextTranslateCTM(context, [view center].x, [view center].y);
CGContextConcatCTM(context, [view transform]);
CGContextTranslateCTM(context,-[view bounds].size.width * [[view layer] anchorPoint].x,-[view bounds].size.height * [[view layer] anchorPoint].y);
[[view layer] renderInContext:context];
CGContextRestoreGState(context);
// ...then repeat for every subview from back to front
//
for (UIView *subView in [view subviews])
{
if ( [subView respondsToSelector:@selector(screen)] )
if ( [(UIWindow *)subView screen] == [UIScreen mainScreen] )
continue;
CGContextSaveGState(context);
CGContextTranslateCTM(context, [subView center].x, [subView center].y);
CGContextConcatCTM(context, [subView transform]);
CGContextTranslateCTM(context,-[subView bounds].size.width * [[subView layer] anchorPoint].x,-[subView bounds].size.height * [[subView layer] anchorPoint].y);
[[subView layer] renderInContext:context];
CGContextRestoreGState(context);
}
UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
UIGraphicsEndImageContext();
return image;
}
+ (UIImage *) snapshot:(GLView *)eaglview
{
NSInteger x = 0;
NSInteger y = 0;
NSInteger width = [eaglview backingWidth];
NSInteger height = [eaglview backingHeight];
NSInteger dataLength = width * height * 4;
NSUInteger i;
for ( i=0; i<100; i++ )
{
glFlush();
CFRunLoopRunInMode(kCFRunLoopDefaultMode, (float)1.0/(float)60.0, FALSE);
}
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
//
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
//
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
//
NSInteger widthInPoints;
NSInteger heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions)
{
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
//
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else
{
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
//
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
//
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased image
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
@end
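For example, usage of the helpers above might look like this (assuming `glView` is your GLView and `hudView` is the UIKit overlay you want composited on top):
// Composite the GL contents with the UIKit overlay and save the result
UIImage *overlay = [ScreenCapture UIViewToImage:hudView];
UIImage *combined = [ScreenCapture GLViewToImage:glView withOverlayImage:overlay];
UIImageWriteToSavedPhotosAlbum(combined, nil, nil, nil);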

How can I draw an image?

I'm programming in Objective-C. I have a 1 x 30 pixel image of a line.
How can I get a 50 x 30 UIImage from this line?
Create a CGBitmapContext with a size of 50 x 30, then draw that image into the context using CGContextDrawImage.
After that, use CGBitmapContextCreateImage and [UIImage imageWithCGImage:] to create the UIImage:
CGContextRef CreateBitmapContext(int pixelsWide, int pixelsHigh)
{
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (pixelsWide * 4); // RGBA
bitmapByteCount = (bitmapBytesPerRow * pixelsHigh);
colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate (NULL,
pixelsWide,
pixelsHigh,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
kCGImageAlphaPremultipliedLast);
NSCAssert(context != NULL, @"cannot create bitmap context");
CGColorSpaceRelease( colorSpace );
return context;
}
CGContextRef context = CreateBitmapContext(50, 30);
UIImage *yourLineImage = ...;
CGImageRef cgImg = [yourLineImage CGImage];
for (int i = 0; i < 50; i++) {
CGRect rect;
rect.origin.x = i;
rect.origin.y = 0;
rect.size.width = 1;
rect.size.height = 30;
CGContextDrawImage(context, rect, cgImg);
}
CGImageRef output = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:output];
// Release what we created
CGImageRelease(output);
CGContextRelease(context);
If your line is a single solid color, try this lazy method:
UIImageView *line = [[UIImageView alloc] initWithFrame:CGRectMake(10, 10, 50, 30)];
[line setImage:[UIImage imageNamed:@"your gray line"]];
[self.view addSubview:line];
You can use +[UIColor colorWithPatternImage] in iOS:
NSString *path =
[[NSBundle mainBundle] pathForResource:@"<# the pattern file #>" ofType:@"png"];
UIColor *patternColor = [UIColor colorWithPatternImage:
[UIImage imageWithContentsOfFile:path]];
/* use patternColor anywhere as a regular UIColor instance */
It works better with seamless patterns. On OS X you can use the +[NSColor colorWithPatternImage] method.
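For example (a sketch), once you have the pattern color you can use it anywhere a plain UIColor is expected:
// Tile the line image as a view's background
self.view.backgroundColor = patternColor;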
If you just want to draw the image, you might want to try UIImage's drawInRect: method.
You'd typically want to call this from your custom UIView's drawRect:.
There are different approaches to drawing in Cocoa (and Cocoa-Touch) so here's Apple's Drawing and Printing Guide for iOS.
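A minimal sketch of that drawRect: approach, assuming the line is a 1 x 30 pixel asset named "line.png" (the class and asset names are hypothetical):
@interface LineView : UIView
@end

@implementation LineView
- (void)drawRect:(CGRect)rect
{
    // drawInRect: stretches the 1-pixel-wide line image to fill the 50 x 30 target
    UIImage *line = [UIImage imageNamed:@"line.png"]; // hypothetical asset name
    [line drawInRect:CGRectMake(0, 0, 50, 30)];
}
@end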

How to efficiently and fast blur an image on the iPhone?

If I have a UIImage or CGContextRef or the pure bitmap data (direct access to decompressed ARGB-8 pixels), what's my best option to blur an image with radius 10 pixels as fast as possible?
I've implemented a stackBlur algorithm for iOS, which is close to a Gaussian blur but much faster:
https://github.com/tomsoft1/StackBluriOS
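If I remember the library's API correctly, it is exposed as a UIImage category, so usage is roughly the following; the header and method names are from memory, so check the repository's README for the exact spelling:
#import "UIImage+StackBlur.h"   // category from StackBluriOS (name from memory)

UIImage *blurred = [[UIImage imageNamed:@"photo.png"] stackBlur:10]; // 10-pixel radius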
Check for instance here:
Blur an UIImage on change of slider
Either use a stack blur, a box blur or use the OpenGL texture blur (google the first two, and check the Apple dev samples for the latter).
https://github.com/rnystrom/RNBlurModalView
// Requires the Accelerate framework (vImage): #import <Accelerate/Accelerate.h>
-(UIImage *)boxblurImageWithBlur:(CGFloat)blur bluringImage:(UIImage *)image
{
int boxSize = (int)(blur * 40);
boxSize = boxSize - (boxSize % 2) + 1;
CGImageRef img = image.CGImage;
vImage_Buffer inBuffer, outBuffer;
vImage_Error error;
void *pixelBuffer;
//create vImage_Buffer with data from CGImageRef
CGDataProviderRef inProvider = CGImageGetDataProvider(img);
CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
inBuffer.data = (void*)CFDataGetBytePtr(inBitmapData);
//create vImage_Buffer for output
pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
if(pixelBuffer == NULL)
NSLog(#"No pixelbuffer");
outBuffer.data = pixelBuffer;
outBuffer.width = CGImageGetWidth(img);
outBuffer.height = CGImageGetHeight(img);
outBuffer.rowBytes = CGImageGetBytesPerRow(img);
// Create a third buffer for intermediate processing
void *pixelBuffer2 = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
vImage_Buffer outBuffer2;
outBuffer2.data = pixelBuffer2;
outBuffer2.width = CGImageGetWidth(img);
outBuffer2.height = CGImageGetHeight(img);
outBuffer2.rowBytes = CGImageGetBytesPerRow(img);
//perform convolution
error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer2, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
error = vImageBoxConvolve_ARGB8888(&outBuffer2, &inBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
if (error) {
NSLog(#"error from convolution %ld", error);
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
outBuffer.width,
outBuffer.height,
8,
outBuffer.rowBytes,
colorSpace,
kCGImageAlphaNoneSkipLast);
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
//clean up
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixelBuffer);
free(pixelBuffer2);
CFRelease(inBitmapData);
CGImageRelease(imageRef);
return returnImage;
}
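Calling it is straightforward; for example (a sketch), with a blur amount in the 0.0-1.0 range:
// Assumes self implements boxblurImageWithBlur:bluringImage: above and sourceImage is the UIImage to blur
UIImage *blurred = [self boxblurImageWithBlur:0.25f bluringImage:sourceImage];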

Crop circular or elliptical image from original UIImage

I am working with OpenCV to detect faces, and I want the face to be cropped once it is detected. So far I have detected the face and marked a rect/ellipse around it on the iPhone.
Please help me crop the face in a circular/elliptical pattern.
-(UIImage *) opencvFaceDetect:(UIImage *)originalImage
{
cvSetErrMode(CV_ErrModeParent);
IplImage *image = [self CreateIplImageFromUIImage:originalImage];
// Scaling down
/*
Creates IPL image (header and data) ----------------cvCreateImage
CVAPI(IplImage*) cvCreateImage( CvSize size, int depth, int channels );
*/
IplImage *small_image = cvCreateImage(cvSize(image->width/2,image->height/2),
IPL_DEPTH_8U, 3);
/* Smooths and downsamples the image (Gaussian pyramid) -------- cvPyrDown */
cvPyrDown(image, small_image, CV_GAUSSIAN_5x5);
int scale = 2;
// Load XML
NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_default" ofType:@"xml"];
CvHaarClassifierCascade* cascade = (CvHaarClassifierCascade*)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding], NULL, NULL, NULL);
// Check whether the cascade has loaded successfully. Otherwise report an error and quit
if( !cascade )
{
NSLog(#"ERROR: Could not load classifier cascade\n");
//return;
}
//Allocate the Memory storage
CvMemStorage* storage = cvCreateMemStorage(0);
// Clear the memory storage which was used before
cvClearMemStorage( storage );
CGColorSpaceRef colorSpace;
CGContextRef contextRef;
CGRect face_rect;
// Find whether the cascade is loaded, to find the faces. If yes, then:
if( cascade )
{
CvSeq* faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1f, 3, 0, cvSize(20, 20));
cvReleaseImage(&small_image);
// Create canvas to show the results
CGImageRef imageRef = originalImage.CGImage;
colorSpace = CGColorSpaceCreateDeviceRGB();
contextRef = CGBitmapContextCreate(NULL, originalImage.size.width, originalImage.size.height, 8, originalImage.size.width * 4,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
//VIKAS
CGContextDrawImage(contextRef, CGRectMake(0, 0, originalImage.size.width, originalImage.size.height), imageRef);
CGContextSetLineWidth(contextRef, 4);
CGContextSetRGBStrokeColor(contextRef, 1.0, 1.0, 1.0, 0.5);
// Draw the results on the image: mark each detected face with a rectangle/ellipse
// Loop the number of faces found.
for(int i = 0; i < faces->total; i++)
{
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
// Calc the rect of faces
// Create a new rectangle for drawing the face
CvRect cvrect = *(CvRect*)cvGetSeqElem(faces, i);
// CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef,
// CGRectMake(cvrect.x * scale, cvrect.y * scale, cvrect.width * scale, cvrect.height * scale));
face_rect = CGContextConvertRectToDeviceSpace(contextRef,
CGRectMake(cvrect.x*scale, cvrect.y, cvrect.width*scale, cvrect.height*scale*1.25));
facedetectapp=(FaceDetectAppDelegate *)[[UIApplication sharedApplication]delegate];
facedetectapp.grabcropcoordrect=face_rect;
NSLog(#" FACE off %f %f %f %f",facedetectapp.grabcropcoordrect.origin.x,facedetectapp.grabcropcoordrect.origin.y,facedetectapp.grabcropcoordrect.size.width,facedetectapp.grabcropcoordrect.size.height);
CGContextStrokeRect(contextRef, face_rect);
//CGContextFillEllipseInRect(contextRef,face_rect);
CGContextStrokeEllipseInRect(contextRef,face_rect);
[pool release];
}
}
CGImageRef imageRef = CGImageCreateWithImageInRect([originalImage CGImage],face_rect);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
cvReleaseMemStorage(&storage);
cvReleaseHaarClassifierCascade(&cascade);
return returnImage;
}
Thanks
Vikas
There are a pile of blend modes to choose from, a few of which are useful for "masking". I believe this should do approximately what you want:
CGContextSaveGState(contextRef);
CGContextSetBlendMode(contextRef,kCGBlendModeDestinationIn);
CGContextFillEllipseInRect(contextRef,face_rect);
CGContextRestoreGState(contextRef);
"approximately" because it'll mask the entire context contents every time, thus doing the wrong thing for more than one face. To handle this case, use CGContextAddEllipseInRect() in the loop and CGContextFillPath() at the end.
You might also want to look at CGContextBeginTransparencyLayerWithRect().
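A rough sketch of that multi-face variant: accumulate one ellipse per face on the path inside the detection loop, then mask the whole context with a single fill afterwards. faceRects/faceCount are hypothetical names for the rects you would collect in the loop above.
CGContextSaveGState(contextRef);
CGContextSetBlendMode(contextRef, kCGBlendModeDestinationIn);
for (int i = 0; i < faceCount; i++) {
    // Add each detected face's ellipse to the current path
    CGContextAddEllipseInRect(contextRef, faceRects[i]);
}
// A single fill masks the context to the union of all ellipses
CGContextFillPath(contextRef);
CGContextRestoreGState(contextRef);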
The following is the answer I gave in "How to crop UIImage on oval shape or circle shape?" to make the image circular. It works for me.
Download the support archive from http://vocaro.com/trevor/blog/2009/10/12/resize-a-uiimage-the-right-way/
#import "UIImage+RoundedCorner.h"
#import "UIImage+Resize.h"
The following lines resize the image and round it with the given corner radius:
UIImage *mask = [UIImage imageNamed:@"mask.jpg"];
mask = [mask resizedImage:CGSizeMake(47, 47) interpolationQuality:kCGInterpolationHigh ];
mask = [mask roundedCornerImage:23.5 borderSize:1];
Hope it helps someone.

How to get UIImage from EAGLView?

I am trying to get a UIImage from what is displayed in my EAGLView. Any suggestions on how to do this?
Here is a cleaned up version of Quakeboy's code.
I tested it on iPad, and works just fine.
The improvements include:
works with any size EAGLView
works with retina display (point scale 2)
replaced nested loop with memcpy
cleaned up memory leaks
saves the UIImage in the photoalbum as a bonus.
Use this as a method in your EAGLView:
-(void)snapUIImage
{
int s = 1;
UIScreen* screen = [ UIScreen mainScreen ];
if ( [ screen respondsToSelector:@selector(scale) ] )
s = (int) [ screen scale ];
const int w = self.frame.size.width;
const int h = self.frame.size.height;
const NSInteger myDataLength = w * h * 4 * s * s;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, w*s, h*s, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y < h*s; y++)
{
memcpy( buffer2 + (h*s - 1 - y) * w * 4 * s, buffer + (y * 4 * w * s), w * 4 * s );
}
free(buffer); // work with the flipped buffer, so get rid of the original one.
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * w * s;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(w*s, h*s, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [ UIImage imageWithCGImage:imageRef scale:s orientation:UIImageOrientationUp ];
UIImageWriteToSavedPhotosAlbum( myImage, nil, nil, nil );
CGImageRelease( imageRef );
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
free(buffer2);
}
I was unable to get the other answers here to work correctly for me.
After a few days I finally got a working solution to this. There is code provided by Apple which produces a UIImage from an EAGLView. Then you simply need to flip the image vertically, since UIKit is upside down relative to GL.
Apple-provided method, modified to live inside the view you want to turn into an image:
-(UIImage *) drawableToCGImage
{
GLint backingWidth2, backingHeight2;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = self.contentScaleFactor;
widthInPoints = width2 / scale;
heightInPoints = height2 / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width2;
heightInPoints = height2;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
And here's a method to flip the image:
- (UIImage *) flipImageVertically:(UIImage *)originalImage {
UIImageView *tempImageView = [[UIImageView alloc] initWithImage:originalImage];
UIGraphicsBeginImageContext(tempImageView.frame.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flipVertical = CGAffineTransformMake(
1, 0, 0, -1, 0, tempImageView.frame.size.height
);
CGContextConcatCTM(context, flipVertical);
[tempImageView.layer renderInContext:context];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//[tempImageView release];
return flippedImage;
}
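Putting the two together, for example (assuming `eaglView` is your EAGLView instance and `self` implements the flip method above):
// Grab the GL contents, then flip them into UIKit orientation
UIImage *rawImage = [eaglView drawableToCGImage];
UIImage *finalImage = [self flipImageVertically:rawImage];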
And here's a link to the Apple dev page where I found the first method for reference.
http://developer.apple.com/library/ios/#qa/qa1704/_index.html
-(UIImage *) saveImageFromGLView
{
NSInteger myDataLength = 320 * 480 * 4;
// allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// gl renders "upside down" so swap top to bottom into new array.
// there's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
for(int y = 0; y <480; y++)
{
for(int x = 0; x <320 * 4; x++)
{
buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
}
}
// make data provider with data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
// prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * 320;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
// make the cgimage
CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
// then make the uiimage from that
UIImage *myImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease( imageRef );
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
free(buffer2);
return myImage;
}
EDIT: as demianturner notes below, you no longer need to render the layer, you can (and should) now use the higher-level [UIView drawViewHierarchyInRect:]. Other than that; this should work the same.
An EAGLView is just a kind of view, and its underlying CAEAGLLayer is just a kind of layer. That means that the standard approach for converting a view/layer into a UIImage will work. (The fact that the linked question is about UIWebView doesn't matter; that's just yet another kind of view.)
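A minimal sketch of that newer approach (iOS 7 and later), assuming `view` is the view you want to capture:
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
// Renders the view and its subviews, including content a plain renderInContext: misses
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();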
CGDataProviderCreateWithData takes a release callback for the data; that is where you should free the buffer:
void releaseBufferData(void *info, const void *data, size_t size)
{
free((void*)data);
}
Then proceed as in the other examples, but do NOT free the data here:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bufferData, bufferDataSize, releaseBufferData);
....
CGDataProviderRelease(provider);
Or simply use CGDataProviderCreateWithCFData instead, without the release-callback machinery:
GLubyte *bufferData = (GLubyte *) malloc(bufferDataSize);
NSData *data = [NSData dataWithBytes:bufferData length:bufferDataSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
....
CGDataProviderRelease(provider);
free(bufferData); // Remember to free it
For more information, please check this discussion:
What's the right memory management pattern for buffer->CGImageRef->UIImage?
To use Brad Larson's code above, you have to edit your EAGLView.m:
- (id)initWithCoder:(NSCoder*)coder{
self = [super initWithCoder:coder];
if (self) {
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = TRUE;
eaglLayer.drawableProperties =
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
}
return self;
}
The key change is setting kEAGLDrawablePropertyRetainedBacking to [NSNumber numberWithBool:YES], so the drawable's contents are retained after presentation and can be read back.