Can't detect faces in images captured by the iPhone camera - iphone

I have a problem. I am using two images: one downloaded from the internet, the other captured with the iPhone camera.
I use CIDetector to detect faces in both images. It works perfectly on the image downloaded from the internet, but on the other it either detects nothing or detects the wrong position.
I have checked many images and the result is the same.

Try this
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];
CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]];
NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation]+1];
NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:fOptions];
for (CIFaceFeature *f in features) {
    NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO"));
    NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO"));
    NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO"));
    if (f.hasLeftEyePosition)
        NSLog(@"left eye position x = %f , y = %f", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition)
        NSLog(@"right eye position x = %f , y = %f", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)
        NSLog(@"mouth position x = %f , y = %f", f.mouthPosition.x, f.mouthPosition.y);
}
If you're using the front camera, always in portrait, add this:
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray* features = [detector featuresInImage:image options:imageOptions];
For more info
sample: https://github.com/beetlebugorg/PictureMe
iOS Face Detection Issue
Face Detection issue using CIDetector
https://stackoverflow.com/questions/4332868/detect-face-in-iphone?rq=1
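Images straight off the iPhone camera usually carry an EXIF orientation flag, and passing the wrong CIDetectorImageOrientation value is the most common reason detection works on downloaded images but not on camera photos. As a rough sketch of my own (not from the answer above), a helper that maps UIImageOrientation to the EXIF-style value the detector expects could look like this; the imageOrientation+1 shortcut used earlier only matches for UIImageOrientationUp.
// Hypothetical helper: maps UIImageOrientation to the EXIF orientation
// number (1-8) that CIDetectorImageOrientation expects.
static NSNumber *ExifOrientationForUIImageOrientation(UIImageOrientation orientation)
{
    int exif;
    switch (orientation) {
        case UIImageOrientationUp:            exif = 1; break;
        case UIImageOrientationDown:          exif = 3; break;
        case UIImageOrientationLeft:          exif = 8; break;
        case UIImageOrientationRight:         exif = 6; break;
        case UIImageOrientationUpMirrored:    exif = 2; break;
        case UIImageOrientationDownMirrored:  exif = 4; break;
        case UIImageOrientationLeftMirrored:  exif = 5; break;
        case UIImageOrientationRightMirrored: exif = 7; break;
        default:                              exif = 1; break;
    }
    return [NSNumber numberWithInt:exif];
}
With that, the detector options become [NSDictionary dictionaryWithObject:ExifOrientationForUIImageOrientation(image.imageOrientation) forKey:CIDetectorImageOrientation].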

I tried the code above. It can detect faces in images captured by the iPhone, but it can't detect them in images downloaded from the internet. This is my code:
NSDictionary *options = [NSDictionary dictionaryWithObject: CIDetectorAccuracyLow forKey: CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType: CIDetectorTypeFace context: nil options: options];
CIImage *ciImage = [CIImage imageWithCGImage: [facePicture CGImage]];
NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:imageOptions];
And when it detects a face, I show it with this code:
for (CIFaceFeature *feature in features) {
    // Set feature color
    CGRect faceRect = [feature bounds];
    CGContextSetRGBFillColor(context, 0.0f, 0.0f, 0.0f, 0.5f);
    CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextSetLineWidth(context, 2.0f * scale);
    CGContextAddRect(context, feature.bounds);
    CGContextDrawPath(context, kCGPathFillStroke);
    CGContextDrawImage(context, faceRect, [imgDraw CGImage]);
}
But it's not in the right position; it's shifted to the right by some distance.
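One likely contributor, although the answer below takes a different route: CIFaceFeature bounds come back in Core Image coordinates, with the origin at the bottom-left of the pixel data, while the Core Graphics drawing above assumes a top-left origin, so the rect generally has to be flipped before drawing. A minimal sketch of that conversion, assuming the image has already been normalized to the Up orientation:
// Convert a CIFaceFeature rect (bottom-left origin) into UIKit/CoreGraphics
// drawing coordinates (top-left origin) for an image of the given height.
static CGRect FaceRectInUIKitCoordinates(CGRect ciRect, CGFloat imageHeight)
{
    CGRect rect = ciRect;
    rect.origin.y = imageHeight - CGRectGetMaxY(ciRect);
    return rect;
}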

I had the same problem. You can redraw the image before running detection:
CGSize size = CGSizeMake(cameraCaptureImage.size.width, cameraCaptureImage.size.height);
UIGraphicsBeginImageContext(size);
[cameraCaptureImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
cameraCaptureImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
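If this helps, it is probably because drawing the UIImage into a new context bakes the orientation flag into the actual pixels, so the redrawn image comes back with UIImageOrientationUp and the detector no longer needs an orientation hint.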

Related

MKAnnotationView image is not displayed on result snapshot image iOS 7

I've created code similar to what was shown at WWDC for displaying a pin on snapshots, but the pin image is not displayed:
MKMapSnapshotOptions *options = [[MKMapSnapshotOptions alloc] init];
options.region = self.mapView.region;
options.scale = 2;
options.size = self.mapView.frame.size;
MKMapSnapshotter *snapshotter = [[MKMapSnapshotter alloc] initWithOptions:options];
[snapshotter startWithCompletionHandler:^(MKMapSnapshot *snapshot, NSError *error)
{
MKAnnotationView *pin = [[MKAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:@""];
UIImage *image;
UIImage *finalImage;
image = snapshot.image;
NSLog(#"%f", image.size.height);
UIImage *pinImage = pin.image;
CGPoint pinPoint = [snapshot pointForCoordinate:CLLocationCoordinate2DMake(self.longtitude, self.latitude)];
CGPoint pinCenterOffset = pin.centerOffset;
pinPoint.x -= pin.bounds.size.width / 2.0;
pinPoint.y -= pin.bounds.size.height / 2.0;
pinPoint.x += pinCenterOffset.x;
pinPoint.y += pinCenterOffset.y;
UIGraphicsBeginImageContextWithOptions(image.size, YES, image.scale);
[image drawAtPoint:CGPointMake(0, 0)];
[pinImage drawAtPoint:pinPoint];
finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData *data = UIImageJPEGRepresentation(finalImage, 0.95f);
NSArray *pathArray = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *path = [pathArray objectAtIndex:0];
NSLog(#"%#", path);
NSString *fileWithPath = [path stringByAppendingPathComponent:#"test.jpeg"];
[data writeToFile:fileWithPath atomically:YES];
}];
Only snapshot of map is displayed without pin image.
If you're expecting the default pin image to appear, you need to create an MKPinAnnotationView instead of the plain MKAnnotationView (which has no default image -- it's blank by default).
Also, please note that the latitude and longitude parameters are backwards in this line:
CGPoint pinPoint = [snapshot pointForCoordinate:CLLocationCoordinate2DMake(
self.longtitude, self.latitude)];
In CLLocationCoordinate2DMake, latitude should be the first parameter and longitude the second.
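Putting both fixes together, the relevant lines might look like this (a sketch only; it keeps the question's longtitude spelling since that is the property name used above):
// Use MKPinAnnotationView so there is a default pin image to draw.
MKPinAnnotationView *pin = [[MKPinAnnotationView alloc] initWithAnnotation:nil reuseIdentifier:@""];
UIImage *pinImage = pin.image;

// Latitude comes first, longitude second.
CGPoint pinPoint = [snapshot pointForCoordinate:CLLocationCoordinate2DMake(self.latitude, self.longtitude)];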

How to draw image and text in Image Context?

I tried to draw text on the image. When I don't apply the transformations, the image is drawn at the bottom left corner and the image and text are fine (Fig 2), but I want the image at the top left of the view.
Below is my drawRect implementation.
How to flip the image so that text and image are aligned properly?
or
How to move the image to the top left of the view?
If I don't use the following function calls, the image gets created at the bottom of the view (Fig 2):
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, self.bounds.size.height);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
- (void)drawRect:(CGRect)rect
{
UIGraphicsBeginImageContext(self.bounds.size);
// Fig 2 comment these lines to have Fig 2
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0.0, self.bounds.size.height);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
// Fig 2 Applying the above transformations results in Fig 1
UIImage *natureImage = [UIImage imageNamed:@"Nature"];
CGRect natureImageRect = CGRectMake(130, 380, 50, 50);
[natureImage drawInRect:natureImageRect];
UIFont *numberFont = [UIFont systemFontOfSize:28.0];
NSFileManager *fm = [NSFileManager defaultManager];
NSString * aNumber = @"111";
[aNumber drawAtPoint:CGPointMake(100, 335) withFont:numberFont];
UIFont *textFont = [UIFont systemFontOfSize:22.0];
NSString * aText = @"Hello";
[aText drawAtPoint:CGPointMake(220, 370) withFont: textFont];
self.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * imageData = UIImagePNGRepresentation(self.image);
NSArray * paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:@"Image.png"];
NSLog(@"filePath :%@", filePath);
BOOL isFileCreated = [fm createFileAtPath:filePath contents:imageData attributes:nil];
if(isFileCreated)
{
NSLog(#"File created at Path %#",filePath);
}
}
Here is the code I used to draw the same image (except I did not use the nature image). It's not in a drawRect:, but depending on your end goal it might be better to do this outside of drawRect: in a custom method and set self.image to the result of -(UIImage *)imageToDraw. It outputs this:
Here is the code:
- (UIImage *)imageToDraw
{
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 480), NO, [UIScreen mainScreen].scale);
UIImage *natureImage = [UIImage imageNamed:@"testImage.png"];
[natureImage drawInRect:CGRectMake(130, 380, 50, 50)];
UIFont *numberFont = [UIFont systemFontOfSize:28.0];
NSString * aNumber = @"111";
[aNumber drawAtPoint:CGPointMake(100, 335) withFont:numberFont];
UIFont *textFont = [UIFont systemFontOfSize:22.0];
NSString * aText = @"Hello";
[aText drawAtPoint:CGPointMake(220, 370) withFont:textFont];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultingImage;
}
- (NSString *)filePath
{
NSArray * paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
return [documentsDirectory stringByAppendingPathComponent:@"Image.png"];
}
- (void)testImageWrite
{
NSData *imageData = UIImagePNGRepresentation([self imageToDraw]);
NSError *writeError = nil;
BOOL success = [imageData writeToFile:[self filePath] options:0 error:&writeError];
if (!success || writeError != nil)
{
NSLog(#"Error Writing: %#",writeError.description);
}
}
anyway - hope this helps
Your image frame of reference is the standard iOS frame of reference: the origin is at the top left corner. The text you are drawing with Core Text, however, uses the old frame of reference, with the origin at the bottom left corner. Just apply a transform to the text (not the graphics context) like so:
CGContextSetTextMatrix(context, CGAffineTransformMakeScale(1.0f, -1.0f));
then all your text will be laid out as if the axis origin were in the top left corner.
If you have to write entire sentences with Core Text, as opposed to just words, take a look at this blog post.
I want to point out that the drawRect: method is for drawing in a view, not for creating an image. It is also a performance issue, since drawRect: can be called a lot. For this purpose viewDidLoad or some custom method is better.
I have done an example with your request:
- (void)viewDidLoad {
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
UIGraphicsBeginImageContext(self.view.bounds.size);
CGContextRef context = UIGraphicsGetCurrentContext();
CGFloat margin = 10;
CGFloat y = self.view.bounds.size.height * 0.5;
CGFloat x = margin;
UIImage *natureImage = [UIImage imageNamed:@"image"];
CGRect natureImageRect = CGRectMake(x, y - 20, 40, 40);
[natureImage drawInRect:natureImageRect];
x += 40 + margin;
UIFont *textFont = [UIFont systemFontOfSize:22.0];
NSString * aText = @"Hello";
NSMutableParagraphStyle *style = [[NSParagraphStyle defaultParagraphStyle] mutableCopy];
style.alignment = NSTextAlignmentCenter;
NSDictionary *attr = @{NSParagraphStyleAttributeName: style,
                       NSFontAttributeName: textFont};
CGSize size = [aText sizeWithAttributes:attr];
[aText drawInRect: CGRectMake(x, y - size.height * 0.5, 100, 40)
withFont: textFont
lineBreakMode: UILineBreakModeClip
alignment: UITextAlignmentLeft];
self.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.imageView.image = self.image;
}
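A side note of my own, not from the answer: drawInRect:withFont:lineBreakMode:alignment: and the UILineBreakMode/UITextAlignment constants are deprecated on newer SDKs. Since the attr dictionary is already built above, the same draw can be done with the attributes-based API (here the alignment comes from the paragraph style rather than a separate argument):
// Equivalent draw on newer SDKs, using the attr dictionary built above.
[aText drawInRect:CGRectMake(x, y - size.height * 0.5, 100, 40)
   withAttributes:attr];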

CoreImage Memory problems in ios5

Does anyone know how to release memory while using the Core Image framework to apply hue changes to an image?
Here is my code:
CIImage *inputImage = [[CIImage alloc] initWithImage:currentImage];
CIFilter * controlsFilter = [CIFilter filterWithName:@"CIHueAdjust"];
[controlsFilter setValue:inputImage forKey:kCIInputImageKey];
[controlsFilter setValue:[NSNumber numberWithFloat:slider.value] forKey:@"inputAngle"];
CIImage *displayImage = controlsFilter.outputImage;
UIImage *finalImage = [UIImage imageWithCIImage:displayImage];
CIContext *context = [CIContext contextWithOptions:nil];
if (displayImage == nil || finalImage == nil) {
// We did not get output image. Let's display the original image itself.
photoEditView.image = currentImage;
}else {
// We got output image. Display it.
photoEditView.image = [UIImage imageWithCGImage:[context createCGImage:displayImage fromRect:displayImage.extent]];
}
context = nil;
[inputImage release];
I think you need to release this one as well :
[context createCGImage:displayImage fromRect:displayImage.extent]
by using the CGImageRelease(CGImageRef) method.
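Concretely, that means holding the returned CGImageRef in a variable so it can be released after the UIImage wraps it, something along these lines (a sketch, not the original poster's exact code):
// createCGImage follows the Create rule, so the caller owns the reference.
CGImageRef cgImage = [context createCGImage:displayImage fromRect:displayImage.extent];
photoEditView.image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage); // UIImage keeps its own reference to the bitmap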

Convert UIImage to CVImageBufferRef

This code mostly works, but the resulting data seems to lose a color channel (is what I am thinking), as the resulting image data is tinted blue when displayed!
Here is the code:
UIImage* myImage=[UIImage imageNamed:@"sample1.png"];
CGImageRef imageRef=[myImage CGImage];
CVImageBufferRef pixelBuffer = [self pixelBufferFromCGImage:imageRef];
The method pixelBufferFromCGImage was grabbed from another post on Stack Overflow here: How do I export UIImage array as a movie? (although this application is unrelated to what I am trying to do). Here it is:
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
NSDictionary *options = @{
    (__bridge NSString *)kCVPixelBufferCGImageCompatibilityKey: @(NO),
    (__bridge NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(NO)
};
CVPixelBufferRef pixelBuffer;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
frameSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pixelBuffer);
if (status != kCVReturnSuccess) {
return NULL;
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *data = CVPixelBufferGetBaseAddress(pixelBuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, frameSize.width, frameSize.height,
8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace,
(CGBitmapInfo) kCGImageAlphaNoneSkipLast);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return pixelBuffer;
}
I am thinking it has something to do with the relationship between kCVPixelFormatType_32ARGB and kCGImageAlphaNoneSkipLast, though I have tried every combination and get either the same result or an application crash. Once again, this gets the UIImage data into a CVImageBufferRef, but when I display the image on screen, it appears to lose a color channel and shows up tinted blue. The image is a PNG.
The solution is that this code works perfectly as intended. :) The issue was in how the data was used to create an OpenGL texture, which is completely unrelated to this code. Anyone searching for how to convert a UIImage to a CVImageBufferRef, your answer is in the above code!
If anyone is still looking for a solution to this problem, I solved it by switching the BOOLs in the pixelBuffer's options:
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:NO], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:NO], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
From NO to YES:
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
I encountered the same problem and found some samples: http://www.cakesolutions.net/teamblogs/2014/03/08/cmsamplebufferref-from-cgimageref
Try changing the bitmap info to:
CGBitmapInfo bitmapInfo = (CGBitmapInfo)(kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
Here's what really works:
+ (CVPixelBufferRef)pixelBufferFromImage:(CGImageRef)image {
CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image)); // Not sure why this is even necessary, using CGImageGetWidth/Height in status/context seems to work fine too
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width, frameSize.height, kCVPixelFormatType_32BGRA, nil, &pixelBuffer);
if (status != kCVReturnSuccess) {
return NULL;
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *data = CVPixelBufferGetBaseAddress(pixelBuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, frameSize.width, frameSize.height, 8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace, (CGBitmapInfo) kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return pixelBuffer;
}
You can change the pixel buffer back to a UIImage (and then display or save it) to confirm that it works with this method:
+ (UIImage *)imageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer {
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef myImage = [context createCGImage:ciImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer))];
    UIImage *image = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage); // release the CGImage created above; the UIImage keeps its own reference
    // Uncomment the following lines to save the image to your application's documents directory
    //NSString *imageSavePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"myImageFromPixelBuffer.png"]];
    //[UIImagePNGRepresentation(image) writeToFile:imageSavePath atomically:YES];
return image;
}
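As a quick usage check (my own addition), the two methods can be round-tripped to confirm that the conversion preserves the colors. ImageConverter and self.imageView are stand-in names here for whatever class declares these methods and whatever view displays the result:
UIImage *original = [UIImage imageNamed:@"sample1.png"];
CVPixelBufferRef buffer = [ImageConverter pixelBufferFromImage:[original CGImage]];
UIImage *roundTripped = [ImageConverter imageFromPixelBuffer:buffer];
CVPixelBufferRelease(buffer); // pixelBufferFromImage: follows the Create rule
self.imageView.image = roundTripped; // should look identical to the original, with no blue tint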
Just to clarify the answer above: I've run into the same issue because my shader code was expecting two layered samples within an image buffer, while I used a single-layer buffer.
This line took the RGB values from one sample and passed them on (to what, I don't know), but the end result is a fully colored image.
gl_FragColor = vec4(texture2D(SamplerY, texCoordVarying).rgb, 1);
It sounds like it might be that relationship. Possibly make it a JPG with RGB instead of a PNG with indexed colors?

Capturing a OpenGL view to an AVAssetWriterInputPixelBufferAdaptor [duplicate]

This question already has answers here:
OpenGL ES 2.0 to Video on iPad/iPhone
(7 answers)
Closed 2 years ago.
I am trying to create an AVAssetWriter to screen capture an OpenGL project. I have never written an AVAssetWriter or an AVAssetWriterInputPixelBufferAdaptor, so I am not sure if I did anything correctly.
- (id) initWithOutputFileURL:(NSURL *)anOutputFileURL {
if ((self = [super init])) {
NSError *error;
movieWriter = [[AVAssetWriter alloc] initWithURL:anOutputFileURL fileType:AVFileTypeMPEG4 error:&error];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:640], AVVideoWidthKey,
[NSNumber numberWithInt:480], AVVideoHeightKey,
nil];
writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
writer = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:writerInput sourcePixelBufferAttributes:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,nil]];
[movieWriter addInput:writerInput];
writerInput.expectsMediaDataInRealTime = YES;
}
return self;
}
Other parts of the class:
- (void)getFrame:(CVPixelBufferRef)SampleBuffer:(int64_t)frame{
frameNumber = frame;
[writer appendPixelBuffer:SampleBuffer withPresentationTime:CMTimeMake(frame, 24)];
}
- (void)startRecording {
[movieWriter startWriting];
[movieWriter startSessionAtSourceTime:kCMTimeZero];
}
- (void)stopRecording {
[writerInput markAsFinished];
[movieWriter endSessionAtSourceTime:CMTimeMake(frameNumber, 24)];
[movieWriter finishWriting];
}
The asset writer is initialized by:
NSURL *outputFileURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), @"output.mov"]];
recorder = [[GLRecorder alloc] initWithOutputFileURL:outputFileURL];
The view is recorded this way:
glReadPixels(0, 0, 480, 320, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
for(int y = 0; y <320; y++) {
for(int x = 0; x <480 * 4; x++) {
int b2 = ((320 - 1 - y) * 480 * 4 + x);
int b1 = (y * 4 * 480 + x);
buffer2[b2] = buffer[b1];
}
}
pixelBuffer = NULL;
CVPixelBufferCreateWithBytes (NULL,480,320,kCVPixelFormatType_32BGRA,buffer2,1920,NULL,0,NULL,&pixelBuffer);
[recorder getFrame:pixelBuffer :framenumber];
framenumber++;
Note:
pixelBuffer is a CVPixelBufferRef.
framenumber is an int64_t.
buffer and buffer2 are GLubyte arrays.
I get no errors, but when I finish recording there is no file. Any help or links would be greatly appreciated. The OpenGL view draws a live feed from the camera. I've been able to save the screen as a UIImage, but I want to get a movie of what I created.
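One thing worth checking first (my own aside, not part of the answer below): AVAssetWriter reports failures through its status and error properties, and startWriting, appendPixelBuffer:withPresentationTime: and finishWriting all return a BOOL. Logging those usually reveals why no file appears. A minimal sketch of a stopRecording that does this:
- (void)stopRecording {
    [writerInput markAsFinished];
    [movieWriter endSessionAtSourceTime:CMTimeMake(frameNumber, 24)];
    // Log the writer's error if finishing fails instead of silently returning.
    if (![movieWriter finishWriting] || movieWriter.status == AVAssetWriterStatusFailed) {
        NSLog(@"writing failed: %@", movieWriter.error);
    }
}
The same check applies to the append call in getFrame::, and on newer SDKs finishWritingWithCompletionHandler: replaces the synchronous finishWriting.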
If you're writing RGBA frames, I think you may need to use an AVAssetWriterInputPixelBufferAdaptor to write them out. This class is supposed to manage a pool of pixel buffers, but I get the impression that it actually massages your data into YUV.
If that works, then I think you'll find that your colours are all swapped, at which point you'll probably have to write a pixel shader to convert them to BGRA. Or (shudder) do it on the CPU. Up to you.
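For the CPU route mentioned above, a rough sketch of swapping the red and blue channels in place, assuming a tightly packed RGBA byte buffer like buffer2 in the question, could look like this:
// Swap R and B in place: RGBA -> BGRA. Assumes 4 bytes per pixel, tightly packed.
static void SwizzleRGBAToBGRA(GLubyte *pixels, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; i++) {
        GLubyte r = pixels[i * 4 + 0];
        pixels[i * 4 + 0] = pixels[i * 4 + 2];
        pixels[i * 4 + 2] = r;
    }
}
It would be called as SwizzleRGBAToBGRA(buffer2, 480 * 320); before handing buffer2 to CVPixelBufferCreateWithBytes.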