iPhone image ratio captured from AVCaptureSession

I am using somebody's source code for capturing images with AVCaptureSession. However, I found that CaptureSessionManager's previewLayer is shorter than the final captured image.
I found that the resulting image always has the ratio 720x1280 = 9:16. Now I want to crop the resulting image to a UIImage with ratio 320:480 so that it captures only the portion visible in the previewLayer. Any ideas? Thanks a lot.
Relevant questions on Stack Overflow (no good answer yet):
Q1,
Q2
Source Code:
- (id)init {
    if ((self = [super init])) {
        [self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];
    }
    return self;
}

- (void)addVideoPreviewLayer {
    [self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
    [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}

- (void)addVideoInput {
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (videoDevice) {
        NSError *error;
        if ([videoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus] && [videoDevice lockForConfiguration:&error]) {
            [videoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
            [videoDevice unlockForConfiguration];
        }
        AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
        if (!error) {
            if ([[self captureSession] canAddInput:videoIn])
                [[self captureSession] addInput:videoIn];
            else
                NSLog(@"Couldn't add video input");
        }
        else
            NSLog(@"Couldn't create video input");
    }
    else
        NSLog(@"Couldn't create video capture device");
}

- (void)addStillImageOutput
{
    [self setStillImageOutput:[[[AVCaptureStillImageOutput alloc] init] autorelease]];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [[self stillImageOutput] setOutputSettings:outputSettings];
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }
    [[self captureSession] addOutput:[self stillImageOutput]];
}

- (void)captureStillImage
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }
    NSLog(@"about to request a capture from: %@", [self stillImageOutput]);
    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
                                                         completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            NSLog(@"attachments: %@", exifAttachments);
        } else {
            NSLog(@"no attachments");
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        [self setStillImage:image];
        [image release];
        [[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
    }];
}
Edit after doing some more research and testing:
AVCaptureSession's sessionPreset property has the following constants. I haven't checked each one of them, but I noted that most of them have a ratio of either 9:16 or 3:4:
NSString *const AVCaptureSessionPresetPhoto;
NSString *const AVCaptureSessionPresetHigh;
NSString *const AVCaptureSessionPresetMedium;
NSString *const AVCaptureSessionPresetLow;
NSString *const AVCaptureSessionPreset352x288;
NSString *const AVCaptureSessionPreset640x480;
NSString *const AVCaptureSessionPresetiFrame960x540;
NSString *const AVCaptureSessionPreset1280x720;
NSString *const AVCaptureSessionPresetiFrame1280x720;
In my project, I have a fullscreen preview (frame size 320x480), with:
also: [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
I have done it this way: take the photo at 9:16 and crop it to 320:480, exactly the portion visible in the preview layer. It looks perfect.
The piece of code for resizing and cropping, which replaces the old code, is:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
UIImage *scaledImage = [ImageHelper scaleAndRotateImage:image];
// Going to crop the image from 9:16 to 2:3, with the width fixed
float width = scaledImage.size.width;
float height = scaledImage.size.height;
float top_adjust = (height - width * 3 / 2.0) / 2.0;
CGRect rectToCrop = CGRectMake(0, top_adjust, width, width * 3 / 2.0); // full width, 2:3 height, vertically centered
[self setStillImage:[scaledImage croppedImage:rectToCrop]];
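Note: croppedImage: is an external UIImage category method that isn't shown in the post. A minimal sketch of such a helper (an assumed implementation, ignoring retina scale and image orientation) could be:
@implementation UIImage (Crop)
- (UIImage *)croppedImage:(CGRect)rect {
    // CGImageCreateWithImageInRect works in pixel coordinates; this sketch
    // assumes a scale of 1 and an already-rotated image, as produced above.
    CGImageRef croppedRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}
@end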

The iPhone's camera is natively 4:3. The 16:9 images you get are already cropped from 4:3, so cropping those 16:9 images again is not what you want. Instead, get the native 4:3 images from the camera by setting self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto (before adding any inputs/outputs to the session).
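Applied to the init method from the question, that is a one-line change; a sketch (the canSetSessionPreset: guard is an optional extra):
- (id)init {
    if ((self = [super init])) {
        [self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];
        // Ask for the sensor's native 4:3 photo format before wiring up
        // inputs and outputs, so the still image matches the preview crop.
        if ([[self captureSession] canSetSessionPreset:AVCaptureSessionPresetPhoto])
            [[self captureSession] setSessionPreset:AVCaptureSessionPresetPhoto];
    }
    return self;
}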

Related

Integrate iPhone camera into ViewController's subview UIView

I am using the following code to embed the camera into my application's view.
Here is my code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;
    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    captureVideoPreviewLayer.frame = self.cameraView.bounds;
    [self.cameraView.layer addSublayer:captureVideoPreviewLayer];
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input)
    {
        [Utilities alertDisplay:@"Error" message:@"Camera not found. Please use Photo Gallery instead."];
    }
    [session addInput:input];
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [session addOutput:stillImageOutput];
    [session startRunning];
}

- (AVCaptureDevice *)backFacingCameraIfAvailable {
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *captureDevice = nil;
    for (AVCaptureDevice *device in videoDevices) {
        if (device.position == AVCaptureDevicePositionBack) {
            captureDevice = device;
            break;
        }
    }
    // Couldn't find one on the back, so just get the default video device.
    if (!captureDevice) {
        captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }
    return captureDevice;
}
- (IBAction)scanButtonPressed:(id)sender
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }
    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments)
        {
            // Do something with the attachments.
            NSLog(@"attachments: %@", exifAttachments);
        }
        else
            NSLog(@"no attachments");
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        //self.vImage.image = image;
    }];
}
The problem I am facing is that no camera appears in my cameraView, and when scanButtonPressed is hit I get
stillImageOutput.connections = 0 objects.
What is wrong?
Well, I just copy-pasted your code into a blank project and it worked fine after changing self.cameraView.layer to self.view.layer. However, when I tried creating self.cameraView without ever initializing it, I got symptoms similar to those you describe.
Overall, I would check that self.cameraView isn't nil. If it's created programmatically, make sure you're calling alloc/init and setting a frame; if it's an IBOutlet, make sure it's properly linked.
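If it's created programmatically, a minimal sketch of that setup (ARC assumed, names taken from your snippet) would be:
// Create and attach cameraView before sizing the preview layer to it.
self.cameraView = [[UIView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:self.cameraView];
captureVideoPreviewLayer.frame = self.cameraView.bounds;
[self.cameraView.layer addSublayer:captureVideoPreviewLayer];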

Capture overlay image using the AVFoundation framework

I want to take a picture with an overlay image on top.
Code:
- (void)addStillImageOutput
{
    // NSLog(@"You capture image");
    [self setStillImageOutput:[[[AVCaptureStillImageOutput alloc] init] autorelease]];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [[self stillImageOutput] setOutputSettings:outputSettings];
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }
    [[self captureSession] addOutput:[self stillImageOutput]];
}

- (void)captureStillImage
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }
    NSLog(@"about to request a capture from: %@", [self stillImageOutput]);
    [[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
                                                         completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            NSLog(@"attachments: %@", exifAttachments);
        } else {
            NSLog(@"no attachments");
        }
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        [self setStillImage:image];
        [image release];
        [[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
    }];
}
OverlayImage:
- (void)ButtonPressed1 {
    UIImageView *overlayImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"sof.png"]];
    [overlayImageView setFrame:CGRectMake(30, 100, 260, 200)];
    [[self view] addSubview:overlayImageView];
    [overlayImageView release];
}
captureStillImageAsynchronouslyFromConnection captures only the data coming over the connection (the AVCaptureConnection); the overlay images are not in the connection, they are in the view.
So, to "generate" an image with all elements (picture and overlay), you have to do something like this:
UIGraphicsBeginImageContext(stillImage.size);
[stillImage drawInRect:CGRectMake(0, 0, stillImage.size.width, stillImage.size.height)];
// Draw the overlay's image (not the view itself) at the overlay's frame.
[overlayImageView.image drawInRect:overlayImageView.frame];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This code assumes that the picture size equals the screen size. If the picture size is different, you have to scale the overlay's coordinates before drawing it.
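A minimal sketch of that scaling (assuming the preview fills the screen and the overlay frame is in screen points):
// Illustrative only: map screen points to picture pixels.
CGFloat sx = stillImage.size.width / [UIScreen mainScreen].bounds.size.width;
CGFloat sy = stillImage.size.height / [UIScreen mainScreen].bounds.size.height;
CGRect overlayRect = CGRectMake(overlayImageView.frame.origin.x * sx,
                                overlayImageView.frame.origin.y * sy,
                                overlayImageView.frame.size.width * sx,
                                overlayImageView.frame.size.height * sy);
[overlayImageView.image drawInRect:overlayRect];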

Capture image & video without using UIImagePickerController

I need to capture images & video without opening UIImagePickerController.
You can capture video and photos using AVCaptureSession;
refer to iPhone SDK 4 AVFoundation - How to use captureStillImageAsynchronouslyFromConnection correctly?
- (void)viewDidAppear:(BOOL)animated
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;
    CALayer *viewLayer = self.vImagePreview.layer;
    NSLog(@"viewLayer = %@", viewLayer);
    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
    [self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
    }
    [session addInput:input];
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [session addOutput:stillImageOutput];
    [session startRunning];
}

- (IBAction)captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo])
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }
    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments)
        {
            // Do something with the attachments.
            NSLog(@"attachments: %@", exifAttachments);
        }
        else
            NSLog(@"no attachments");
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        self.vImage.image = image;
    }];
}
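The answer above covers stills only; for the video part of the question, the same session can also drive a movie file output. A rough sketch (the output URL and delegate handling are assumptions, not from the original answer):
// Sketch: add alongside stillImageOutput on the same session.
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:movieOutput])
    [session addOutput:movieOutput];
// Record to a temp file; self must adopt AVCaptureFileOutputRecordingDelegate.
NSURL *outputURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"movie.mov"]];
[movieOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];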

How to capture an image without displaying a preview in iOS

I want to capture images at specific instants, for example when a button is pushed, but I don't want to show any video preview screen. I guess captureStillImageAsynchronouslyFromConnection is what I need to use for this scenario. Currently, I can capture an image if I show a video preview. However, if I remove the code that shows the preview, the app crashes with the following output:
2012-04-07 11:25:54.898 imCapWOPreview[748:707] *** Terminating app
due to uncaught exception 'NSInvalidArgumentException', reason: '***
-[AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:completionHandler:] -
inactive/invalid connection passed.'
*** First throw call stack: (0x336ee8bf 0x301e21e5 0x3697c35d 0x34187 0x33648435 0x310949eb 0x310949a7 0x31094985 0x310946f5 0x3109502d
0x3109350f 0x31092f01 0x310794ed 0x31078d2d 0x37db7df3 0x336c2553
0x336c24f5 0x336c1343 0x336444dd 0x336443a5 0x37db6fcd 0x310a7743
0x33887 0x3382c) terminate called throwing an exception(lldb)
So here is my implementation:
BIDViewController.h:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface BIDViewController : UIViewController
{
    AVCaptureStillImageOutput *stillImageOutput;
}
@property (strong, nonatomic) IBOutlet UIView *videoPreview;
- (IBAction)doCap:(id)sender;
@end
Relevant stuff inside BIDViewController.m:
#import "BIDViewController.h"
#interface BIDViewController ()
#end
#implementation BIDViewController
#synthesize capturedIm;
#synthesize videoPreview;
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupAVCapture];
}
- (BOOL)setupAVCapture
{
NSError *error = nil;
AVCaptureSession *session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetHigh];
/*
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.videoPreview.bounds;
[self.videoPreview.layer addSublayer:captureVideoPreviewLayer];
*/
// Select a video device, make an input
AVCaptureDevice *backCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
if (error)
return NO;
if ([session canAddInput:input])
[session addInput:input];
// Make a still image output
stillImageOutput = [AVCaptureStillImageOutput new];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
if ([session canAddOutput:stillImageOutput])
[session addOutput:stillImageOutput];
[session startRunning];
return YES;
}
- (IBAction)doCap:(id)sender {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *__strong error) {
// Do something with the captured image
}];
}
With the above code, the crash occurs whenever doCap is called. On the other hand, if I uncomment the following lines in the setupAVCapture method
/*
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.videoPreview.bounds;
[self.videoPreview.layer addSublayer:captureVideoPreviewLayer];
*/
then it works without any problem.
In summary, my question is: how can I capture images at controlled instants without showing a preview?
I use the following code to capture from the front-facing camera (if available) or the back camera otherwise. It works well on my iPhone 4S.
- (void)viewDidLoad {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;
    AVCaptureDevice *device = [self frontFacingCameraIfAvailable];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
    }
    [session addInput:input];
    // stillImageOutput is an ivar declared in the .h file: "AVCaptureStillImageOutput *stillImageOutput;"
    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
    [session addOutput:stillImageOutput];
    [session startRunning];
}

- (AVCaptureDevice *)frontFacingCameraIfAvailable {
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDevice *captureDevice = nil;
    for (AVCaptureDevice *device in videoDevices) {
        if (device.position == AVCaptureDevicePositionFront) {
            captureDevice = device;
            break;
        }
    }
    // Couldn't find one on the front, so just get the default video device.
    if (!captureDevice) {
        captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }
    return captureDevice;
}

- (IBAction)captureNow {
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }
    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments if you want to.
            NSLog(@"attachments: %@", exifAttachments);
        }
        else
            NSLog(@"no attachments");
        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];
        self.vImage.image = image;
    }];
}
Well, I was facing a similar issue where captureStillImageAsynchronouslyFromConnection: raised an exception that the passed connection was invalid. Later on, I figured out that the issue was resolved once I made the session and stillImageOutput into properties that retain their values.
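In other words, keep strong references so the session and its connections aren't deallocated before the capture fires; a sketch (ARC, property names assumed):
// Sketch: declared on the view controller instead of as locals in viewDidLoad.
@property (strong, nonatomic) AVCaptureSession *session;
@property (strong, nonatomic) AVCaptureStillImageOutput *stillImageOutput;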
I am a JavaScript developer and wanted to create an iOS native framework for my cross-platform JavaScript project.
When I started to do the same, I faced many issues with deprecated methods and other runtime errors.
After fixing all the issues, below is the answer, which is compliant with iOS 13.5.
The code helps you take a picture on a button click without a preview.
Your .h file
@interface NoPreviewCameraViewController : UIViewController <AVCapturePhotoCaptureDelegate> {
    AVCaptureSession *captureSession;
    AVCapturePhotoOutput *photoOutput;
    AVCapturePhotoSettings *photoSetting;
    AVCaptureConnection *captureConnection;
    UIImageView *imageView;
}
@end
Your .m file
- (void)viewDidLoad {
    [super viewDidLoad];
    imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height - 140)];
    [self.view addSubview:imageView];
    UIButton *takePicture = [UIButton buttonWithType:UIButtonTypeCustom];
    [takePicture addTarget:self action:@selector(takePicture:) forControlEvents:UIControlEventTouchUpInside];
    [takePicture setTitle:@"Take Picture" forState:UIControlStateNormal];
    takePicture.frame = CGRectMake(40.0, self.view.frame.size.height - 140, self.view.frame.size.width - 40, 40);
    [self.view addSubview:takePicture];
    [self initCaptureSession];
}

- (void)initCaptureSession {
    captureSession = [[AVCaptureSession alloc] init];
    if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
        [captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
    }
    AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
    if ([captureSession canAddInput:deviceInput]) {
        [captureSession addInput:deviceInput];
    }
    photoOutput = [[AVCapturePhotoOutput alloc] init];
    if ([captureSession canAddOutput:photoOutput]) {
        [captureSession addOutput:photoOutput];
    }
    [captureSession startRunning];
}

- (void)setNewPhotoSetting {
    photoSetting = [AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey : AVVideoCodecTypeJPEG}];
    [photoOutput setPhotoSettingsForSceneMonitoring:photoSetting];
}

- (IBAction)takePicture:(id)sender {
    captureConnection = nil;
    [self setNewPhotoSetting];
    for (AVCaptureConnection *connection in photoOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                captureConnection = connection;
                NSLog(@"Value of connection = %@", connection);
                NSLog(@"Value of captureConnection = %@", captureConnection);
                break;
            }
        }
        if (captureConnection) {
            break;
        }
    }
    [photoOutput capturePhotoWithSettings:photoSetting delegate:self];
}
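The posted code doesn't show the delegate callback that actually receives the photo; a minimal sketch of it (the AVCapturePhotoCaptureDelegate method available from iOS 11 onward) would be:
// Sketch: called once per capturePhotoWithSettings:delegate: invocation.
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(NSError *)error {
    if (error) {
        NSLog(@"Capture failed: %@", error);
        return;
    }
    NSData *imageData = [photo fileDataRepresentation];
    imageView.image = [[UIImage alloc] initWithData:imageData];
}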

How to detect faces within a video session with iOS 5

Within my app I'm using a video session with which I can take pictures. However, I'd like to have face detection within this video session. I looked at Apple's "SquareCam" example, which is exactly what I'm looking for, but implementing their code in my project is driving me bonkers.
#import "CaptureSessionManager.h"
#import <ImageIO/ImageIO.h>
#implementation CaptureSessionManager
#synthesize captureSession;
#synthesize previewLayer;
#synthesize stillImageOutput;
#synthesize stillImage;
#pragma mark Capture Session Configuration
- (id)init {
if ((self = [super init])) {
[self setCaptureSession:[[AVCaptureSession alloc] init]];
}
return self;
}
- (void)didReceiveMemoryWarning
{
// Releases the view if it doesn't have a superview.
NSLog(#"memorywarning");
// Release any cached data, images, etc that aren't in use.
}
- (void)addVideoPreviewLayer {
[self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
[[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}
- (void)addVideoInput {
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([videoDevice isFocusModeSupported:AVCaptureFocusModeLocked]) {
NSError *error = nil;
if ([videoDevice lockForConfiguration:&error]) {
NSLog(#"focus");
videoDevice.focusMode = AVCaptureFocusModeLocked;
videoDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
//videoDevice.focusMode = AVCaptureSessionPresetPhoto
videoDevice.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
[videoDevice unlockForConfiguration];
}
else {
}
}
// Respond to the failure as appropriate.
if (videoDevice) {
NSError *error;
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (!error) {
if ([[self captureSession] canAddInput:videoIn])
[[self captureSession] addInput:videoIn];
else
NSLog(#"Couldn't add video input");
}
else
NSLog(#"Couldn't create video input");
}
else
NSLog(#"Couldn't create video capture device");
}
- (void)addStillImageOutput
{
[self setStillImageOutput:[[[AVCaptureStillImageOutput alloc] init] autorelease]];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG,AVVideoCodecKey,nil];
[[self stillImageOutput] setOutputSettings:outputSettings];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[[self captureSession] addOutput:[self stillImageOutput]];
[outputSettings release];
}
- (void)captureStillImage
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", [self stillImageOutput]);
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
NSLog(#"attachements: %#", exifAttachments);
} else {
NSLog(#"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
[self setStillImage:image];
[image release];
[[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
}];
}
- (void)dealloc {
[[self captureSession] stopRunning];
[previewLayer release], previewLayer = nil;
[captureSession release], captureSession = nil;
[stillImageOutput release], stillImageOutput = nil;
[stillImage release], stillImage = nil;
[super dealloc];
}
#end
Besides my video session, I did succeed in recognizing faces within a UIImage that I imported into my project. I did this following the example from @Abhinav Jha (How to properly instantiate CIDetector class object in iOS 5 face detection API).
CIImage *ciImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"Photo.JPG"]];
if (ciImage == nil)
    [imageView setImage:[UIImage imageNamed:@"Photo.JPG"]];
// In initWithObjectsAndKeys: the object comes first, then the key;
// the accuracy value must be keyed by CIDetectorAccuracy.
NSDictionary *options = [[NSDictionary alloc] initWithObjectsAndKeys:
                         CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];
CIDetector *ciDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                            context:nil
                                            options:options];
NSArray *features = [ciDetector featuresInImage:ciImage];
NSLog(@"no of faces detected: %lu", (unsigned long)[features count]);
Hope someone can point me in the right direction with combining the two examples!
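One possible direction (a sketch only, not verified against the project above): run the CIDetector on the captured still inside the completion handler of captureStillImageAsynchronouslyFromConnection:, after image has been built from the JPEG data:
// Sketch: detector options as in the snippet above; MRR to match the surrounding code.
CIImage *ciStill = [[CIImage alloc] initWithCGImage:image.CGImage];
NSDictionary *faceOptions = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:faceOptions];
NSArray *faces = [faceDetector featuresInImage:ciStill];
for (CIFaceFeature *face in faces) {
    NSLog(@"face at %@", NSStringFromCGRect(face.bounds));
}
[ciStill release];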