I'm developing an application with the help of sample code from the WWDC 2010 AVCamDemo example. In the app I need to record a video from the front camera of iPhone, but since the new iPhone 4 is not available at my place I am not able to test the code properly.
I would be really thankful if someone could give me a heads-up on whether I'm going in the right direction or not. The limited code I could test on my iPhone 3G (upgraded to iOS 4.1) crashes when I set the AVCaptureSession, as shown in the code below:
- (void)recordVideo
{
NSLog(#"video recording on");
AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:videoCaptureDevice error:nil];
AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
[movieFileOutput release];
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session addInput:videoInput];
[session addOutput:movieFileOutput];
[self setSession:session]; // crashes
if (![session isRunning])
{
[self performSelector:@selector(startRecording) withObject:nil afterDelay:1.0];
[session startRunning];
}
}
- (void)startRecording
{
AVCaptureConnection *videoConnection = [playVideo connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self movieFileOutput] connections]];
if ([videoConnection isVideoOrientationSupported]) {
[videoConnection setVideoOrientation:[self orientation]];
}
[[self movieFileOutput] startRecordingToOutputFileURL:[self tempFileURL]
recordingDelegate:self];
}
- (void) stopRecording
{
NSLog(#"stop recording");
[[self movieFileOutput] stopRecording];
}
- (NSURL *) tempFileURL
{
NSString *outputPath = [[NSString alloc] initWithFormat:@"%@%@", NSTemporaryDirectory(), @"output.mov"];
NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
NSFileManager *fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:outputPath]) {
NSLog(#"file saved");
}
[outputPath release];
return [outputURL autorelease];
}
+ (AVCaptureConnection *)connectionWithMediaType:(NSString *)mediaType fromConnections:(NSArray *)connections;
{
for ( AVCaptureConnection *connection in connections ) {
for ( AVCaptureInputPort *port in [connection inputPorts] ) {
if ( [[port mediaType] isEqual:mediaType] ) {
return [[connection retain] autorelease];
}
}
}
return nil;
}
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didStartRecordingToOutputFileAtURL:(NSURL *)fileURL
fromConnections:(NSArray *)connections
{
}
- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
fromConnections:(NSArray *)connections
error:(NSError *)error
{
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputFileURL])
{
[library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
completionBlock:^(NSURL *assetURL, NSError *error){ /* handle result */ }];
}
[library release];
}
movieFileOutput is released immediately after it has been allocated (line 9), so it is deallocated before it is ever added to the session.
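A minimal sketch of the corrected ordering, assuming the retained movieFileOutput and session properties that the getters and setters used elsewhere in the code imply:
AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
[self setMovieFileOutput:movieFileOutput]; // the retained property keeps the output alive
[movieFileOutput release];                 // safe now: the property still holds a reference

AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session addInput:videoInput];
[session addOutput:[self movieFileOutput]];
[self setSession:session];
[session release];                         // the session property retains it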
Related
I need to capture image and video without opening UIImagePickerController.
You can capture video and photos using AVCaptureSession;
refer to "iPhone SDK 4 AVFoundation - How to use captureStillImageAsynchronouslyFromConnection correctly?"
-(void) viewDidAppear:(BOOL)animated
{
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
CALayer *viewLayer = self.vImagePreview.layer;
NSLog(#"viewLayer = %#", viewLayer);
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
[self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
}
-(IBAction) captureNow
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
NSLog(#"about to request a capture from: %#", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments)
{
// Do something with the attachments.
NSLog(#"attachements: %#", exifAttachments);
}
else
NSLog(#"no attachments");
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
self.vImage.image = image;
}];
}
I want to capture images at specific instances, for example when a button is pushed, but I don't want to show any video preview screen. I guess captureStillImageAsynchronouslyFromConnection is what I need to use for this scenario. Currently, I can capture an image if I show a video preview. However, if I remove the code that shows the preview, the app crashes with the following output:
2012-04-07 11:25:54.898 imCapWOPreview[748:707] *** Terminating app
due to uncaught exception 'NSInvalidArgumentException', reason: '***
-[AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:completionHandler:] -
inactive/invalid connection passed.'
*** First throw call stack: (0x336ee8bf 0x301e21e5 0x3697c35d 0x34187 0x33648435 0x310949eb 0x310949a7 0x31094985 0x310946f5 0x3109502d
0x3109350f 0x31092f01 0x310794ed 0x31078d2d 0x37db7df3 0x336c2553
0x336c24f5 0x336c1343 0x336444dd 0x336443a5 0x37db6fcd 0x310a7743
0x33887 0x3382c) terminate called throwing an exception(lldb)
So here is my implementation:
BIDViewController.h:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface BIDViewController : UIViewController
{
AVCaptureStillImageOutput *stillImageOutput;
}
@property (strong, nonatomic) IBOutlet UIView *videoPreview;
- (IBAction)doCap:(id)sender;
@end
Relevant stuff inside BIDViewController.m:
#import "BIDViewController.h"
@interface BIDViewController ()
@end
@implementation BIDViewController
@synthesize capturedIm;
@synthesize videoPreview;
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupAVCapture];
}
- (BOOL)setupAVCapture
{
NSError *error = nil;
AVCaptureSession *session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetHigh];
/*
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.videoPreview.bounds;
[self.videoPreview.layer addSublayer:captureVideoPreviewLayer];
*/
// Select a video device, make an input
AVCaptureDevice *backCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
if (error)
return NO;
if ([session canAddInput:input])
[session addInput:input];
// Make a still image output
stillImageOutput = [AVCaptureStillImageOutput new];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
if ([session canAddOutput:stillImageOutput])
[session addOutput:stillImageOutput];
[session startRunning];
return YES;
}
- (IBAction)doCap:(id)sender {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *__strong error) {
// Do something with the captured image
}];
}
With the above code, the app crashes when doCap is called. On the other hand, if I uncomment the following lines in the setupAVCapture method (i.e., remove the /* */ comment markers around them):
/*
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.videoPreview.bounds;
[self.videoPreview.layer addSublayer:captureVideoPreviewLayer];
*/
then it works without any problem.
In summary, my question is: how can I capture images at controlled instances without showing a preview?
I use the following code for capturing from front facing camera (if available) or using the back camera. Works well on my iPhone 4S.
-(void)viewDidLoad{
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureDevice *device = [self frontFacingCameraIfAvailable];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
//stillImageOutput is a global variable in .h file: "AVCaptureStillImageOutput *stillImageOutput;"
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
}
-(AVCaptureDevice *)frontFacingCameraIfAvailable{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices){
if (device.position == AVCaptureDevicePositionFront){
captureDevice = device;
break;
}
}
// couldn't find one on the front, so just get the default video device.
if (!captureDevice){
captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
return captureDevice;
}
-(IBAction)captureNow{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error){
CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments){
// Do something with the attachments if you want to.
NSLog(#"attachements: %#", exifAttachments);
}
else
NSLog(#"no attachments");
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
self.vImage.image = image;
}];
}
Well, I was facing a similar issue whereby captureStillImageAsynchronouslyFromConnection:completionHandler: was raising an exception that the passed connection is invalid. Later on, I figured out that the issue got resolved when I made the session and stillImageOutput retained properties.
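A minimal sketch of that change, assuming ARC as in the code above (property names are only illustrative):
// Keep strong references so the session and output outlive setupAVCapture:
@property (strong, nonatomic) AVCaptureSession *session;
@property (strong, nonatomic) AVCaptureStillImageOutput *stillImageOutput;

// ...and in setupAVCapture assign to the properties instead of a local variable:
self.session = [AVCaptureSession new];
self.stillImageOutput = [AVCaptureStillImageOutput new];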
I am a JavaScript developer and wanted to create an iOS native framework for my cross-platform JavaScript project.
When I started, I faced many issues with deprecated methods and other runtime errors.
After fixing all the issues, below is an answer that is compliant with iOS 13.5.
The code lets you take a picture on a button click without a preview.
Your .h file
@interface NoPreviewCameraViewController : UIViewController <AVCapturePhotoCaptureDelegate> {
AVCaptureSession *captureSession;
AVCapturePhotoOutput *photoOutput;
AVCapturePhotoSettings *photoSetting;
AVCaptureConnection *captureConnection;
UIImageView *imageView;
}
@end
Your .m file
- (void)viewDidLoad {
[super viewDidLoad];
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height - 140)];
[self.view addSubview:imageView];
UIButton *takePicture = [UIButton buttonWithType:UIButtonTypeCustom];
[takePicture addTarget:self action:@selector(takePicture:) forControlEvents:UIControlEventTouchUpInside];
[takePicture setTitle:@"Take Picture" forState:UIControlStateNormal];
takePicture.frame = CGRectMake(40.0, self.view.frame.size.height - 140, self.view.frame.size.width - 40, 40);
[self.view addSubview:takePicture];
[self initCaptureSession];
}
- (void) initCaptureSession {
captureSession = [[AVCaptureSession alloc] init];
if([captureSession canSetSessionPreset: AVCaptureSessionPresetPhoto] ) {
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
}
AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
if ([captureSession canAddInput:deviceInput]) {
[captureSession addInput:deviceInput];
}
photoOutput = [[AVCapturePhotoOutput alloc] init];
if ([captureSession canAddOutput:photoOutput]) {
[captureSession addOutput:photoOutput];
}
[captureSession startRunning];
}
-(void) setNewPhotoSetting {
photoSetting = [AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey : AVVideoCodecTypeJPEG}];
[photoOutput setPhotoSettingsForSceneMonitoring:photoSetting];
}
- (IBAction)takePicture:(id)sender {
captureConnection = nil;
[self setNewPhotoSetting];
for (AVCaptureConnection *connection in photoOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual: AVMediaTypeVideo]) {
captureConnection = connection;
NSLog(#"Value of connection = %#", connection);
NSLog(#"Value of captureConnection = %#", captureConnection);
break;
}
}
if (captureConnection) {
break;
}
}
[photoOutput capturePhotoWithSettings:photoSetting delegate:self];
}
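The snippet above never shows where the captured photo actually arrives; with AVCapturePhotoOutput it is delivered to the AVCapturePhotoCaptureDelegate that takePicture: passes in. A minimal sketch of that callback, assuming you simply want to display the result in the imageView declared in the .h above (fileDataRepresentation requires iOS 11 or later):
- (void)captureOutput:(AVCapturePhotoOutput *)output
didFinishProcessingPhoto:(AVCapturePhoto *)photo
                error:(NSError *)error {
    if (error) {
        NSLog(@"Error capturing photo: %@", error);
        return;
    }
    // Turn the captured photo into a UIImage and show it.
    NSData *imageData = [photo fileDataRepresentation];
    UIImage *image = [[UIImage alloc] initWithData:imageData];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = image;
    });
}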
I am using somebody's source code for capturing an image with AVCaptureSession. However, I found that CaptureSessionManager's previewLayer is shorter than the final captured image.
I found that the resulting image always has a ratio of 720x1280 = 9:16. Now I want to crop the resulting image to a UIImage with a 320:480 ratio so that it only captures the portion visible in the previewLayer. Any idea? Thanks a lot.
Relevant questions on Stack Overflow (no good answer yet):
Q1,
Q2
Source Code:
- (id)init {
if ((self = [super init])) {
[self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];
}
return self;
}
- (void)addVideoPreviewLayer {
[self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
[[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}
- (void)addVideoInput {
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if (videoDevice) {
NSError *error;
if ([videoDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus] && [videoDevice lockForConfiguration:&error]) {
[videoDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
[videoDevice unlockForConfiguration];
}
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (!error) {
if ([[self captureSession] canAddInput:videoIn])
[[self captureSession] addInput:videoIn];
else
NSLog(#"Couldn't add video input");
}
else
NSLog(#"Couldn't create video input");
}
else
NSLog(#"Couldn't create video capture device");
}
- (void)addStillImageOutput
{
[self setStillImageOutput:[[[AVCaptureStillImageOutput alloc] init] autorelease]];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG,AVVideoCodecKey,nil];
[[self stillImageOutput] setOutputSettings:outputSettings];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[[self captureSession] addOutput:[self stillImageOutput]];
}
- (void)captureStillImage
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", [self stillImageOutput]);
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
NSLog(#"attachements: %#", exifAttachments);
} else {
NSLog(#"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
[self setStillImage:image];
[image release];
[[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
}];
}
Edit after doing some more research and testing:
AVCaptureSession's sessionPreset property has the following constants. I haven't checked each one of them, but I noted that most of them have a ratio of either 9:16 or 3:4:
NSString *const AVCaptureSessionPresetPhoto;
NSString *const AVCaptureSessionPresetHigh;
NSString *const AVCaptureSessionPresetMedium;
NSString *const AVCaptureSessionPresetLow;
NSString *const AVCaptureSessionPreset352x288;
NSString *const AVCaptureSessionPreset640x480;
NSString *const AVCaptureSessionPresetiFrame960x540;
NSString *const AVCaptureSessionPreset1280x720;
NSString *const AVCaptureSessionPresetiFrame1280x720;
In my project, I have a fullscreen preview (frame size is 320x480),
also: [[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
I have done it this way: take the photo at 9:16 and crop it to 320:480, exactly the visible part of the preview layer. It looks perfect.
The piece of code for resizing and cropping, to replace the old code, is:
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [UIImage imageWithData:imageData];
UIImage *scaledimage = [ImageHelper scaleAndRotateImage:image];
// going to crop the image from 9:16 to 2:3, with the width fixed
float width = scaledimage.size.width;
float height = scaledimage.size.height;
float top_adjust = (height - width * 3 / 2.0) / 2.0;
CGRect rectToCrop = CGRectMake(0, top_adjust, width, width * 3 / 2.0); // full width, height = width * 3/2, offset from the top by top_adjust
[self setStillImage:[scaledimage croppedImage:rectToCrop]];
iPhone's camera is natively 4:3. The 16:9 images you get are already cropped from 4:3. Cropping those 16:9 images again to 4:3 is not what you want. Instead get the native 4:3 images from iPhone's camera by setting self.captureSession.sessionPreset = AVCaptureSessionPresetPhoto (before adding any inputs/outputs to the session).
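A minimal sketch of that change in the CaptureSessionManager code above; the preset is set in init, before addVideoInput / addStillImageOutput attach anything to the session:
- (id)init {
    if ((self = [super init])) {
        [self setCaptureSession:[[[AVCaptureSession alloc] init] autorelease]];
        // Request the camera's native 4:3 photo resolution before any
        // inputs or outputs are added to the session.
        [[self captureSession] setSessionPreset:AVCaptureSessionPresetPhoto];
    }
    return self;
}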
Within my app I'm using a video session with which I can take pictures. However, I'd like to have face detection within this video session. I looked at Apple's "SquareCam" example, which is exactly what I'm looking for, but implementing their code in my project is driving me bonkers.
#import "CaptureSessionManager.h"
#import <ImageIO/ImageIO.h>
@implementation CaptureSessionManager
@synthesize captureSession;
@synthesize previewLayer;
@synthesize stillImageOutput;
@synthesize stillImage;
#pragma mark Capture Session Configuration
- (id)init {
if ((self = [super init])) {
[self setCaptureSession:[[AVCaptureSession alloc] init]];
}
return self;
}
- (void)didReceiveMemoryWarning
{
// Releases the view if it doesn't have a superview.
NSLog(#"memorywarning");
// Release any cached data, images, etc that aren't in use.
}
- (void)addVideoPreviewLayer {
[self setPreviewLayer:[[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]] autorelease]];
[[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
}
- (void)addVideoInput {
AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([videoDevice isFocusModeSupported:AVCaptureFocusModeLocked]) {
NSError *error = nil;
if ([videoDevice lockForConfiguration:&error]) {
NSLog(#"focus");
videoDevice.focusMode = AVCaptureFocusModeLocked;
videoDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
//videoDevice.focusMode = AVCaptureSessionPresetPhoto
videoDevice.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
[videoDevice unlockForConfiguration];
}
else {
}
}
// Respond to the failure as appropriate.
if (videoDevice) {
NSError *error;
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
if (!error) {
if ([[self captureSession] canAddInput:videoIn])
[[self captureSession] addInput:videoIn];
else
NSLog(#"Couldn't add video input");
}
else
NSLog(#"Couldn't create video input");
}
else
NSLog(#"Couldn't create video capture device");
}
- (void)addStillImageOutput
{
[self setStillImageOutput:[[[AVCaptureStillImageOutput alloc] init] autorelease]];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG,AVVideoCodecKey,nil];
[[self stillImageOutput] setOutputSettings:outputSettings];
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo] ) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
[[self captureSession] addOutput:[self stillImageOutput]];
[outputSettings release];
}
- (void)captureStillImage
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", [self stillImageOutput]);
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
NSLog(#"attachements: %#", exifAttachments);
} else {
NSLog(#"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
[self setStillImage:image];
[image release];
[[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
}];
}
- (void)dealloc {
[[self captureSession] stopRunning];
[previewLayer release], previewLayer = nil;
[captureSession release], captureSession = nil;
[stillImageOutput release], stillImageOutput = nil;
[stillImage release], stillImage = nil;
[super dealloc];
}
@end
Besides my video session, I did succeed in recognizing faces within a UIImage that I imported into my project. I did this with the example from @Abhinav Jha (How to properly instantiate CIDetector class object in iOS 5 face detection API).
CIImage *ciImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"Photo.JPG"]];
if (ciImage == nil)
[imageView setImage:[UIImage imageNamed:@"Photo.JPG"]];
NSDictionary *options = [[NSDictionary alloc] initWithObjectsAndKeys:
CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];
CIDetector *ciDetector = [CIDetector detectorOfType:CIDetectorTypeFace
context:nil
options:options];
NSArray *features = [ciDetector featuresInImage:ciImage];
NSLog(@"no of faces detected: %lu", (unsigned long)[features count]);
Hope someone can point me in the right direction with combining the two examples!
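One possible direction, sketched on the assumption that you keep the CaptureSessionManager above and simply run the same CIDetector code on the still it captures (CIDetector requires iOS 5 or later):
// Inside captureStillImage's completionHandler, after `image` has been built:
CIImage *faceImage = [[CIImage alloc] initWithCGImage:image.CGImage];
NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                    forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:options];
NSArray *features = [detector featuresInImage:faceImage];
NSLog(@"number of faces detected in captured still: %lu", (unsigned long)[features count]);
[faceImage release]; // manual retain/release, matching the code above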
I am trying to record a video from the front camera of an iPhone 4 using the AVFoundation framework, with the help of the WWDC samples I got from the iPhone Developer Program. But I still can't get it to work: the video does not get recorded, or maybe it just isn't saved to my iPhone library. Here is the code I am trying to use; it would be really helpful if someone could help me with the problem I am having.
-(void)recordVideo
{
AVCaptureDeviceInput *videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];
AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
AVCaptureSession *session = [[AVCaptureSession alloc] init];
[session addInput:videoInput];
[session addOutput:movieFileOutput];
[movieFileOutput release];
if (![session isRunning])
{
[self performSelector:@selector(startRecording) withObject:nil afterDelay:1.0];
[session startRunning];
}
}
- (void) startRecording
{
NSLog(#"start recording");
AVCaptureConnection *videoConnection = [playVideo connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self movieFileOutput] connections]];
if ([videoConnection isVideoOrientationSupported]) {
[videoConnection setVideoOrientation:[self orientation]];
}
[[self movieFileOutput] startRecordingToOutputFileURL:[self tempFileURL]
recordingDelegate:self];
}
- (void) stopRecording
{
NSLog(#"stop recording");
[[self movieFileOutput] stopRecording];
}
- (NSURL *) tempFileURL
{
NSLog(#"temp file url");
NSString *outputPath = [[NSString alloc] initWithFormat:#"%#%#", NSTemporaryDirectory(), #"output.mov"];
NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
NSFileManager *fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:outputPath]) {
NSLog(#"exists");
}
[outputPath release];
return [outputURL autorelease];
}
- (void) setConnectionWithMediaType:(NSString *)mediaType enabled:(BOOL)enabled;
{
[[playVideo connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self movieFileOutput] connections]] setEnabled:enabled];
}
+ (AVCaptureConnection *)connectionWithMediaType:(NSString *)mediaType fromConnections:(NSArray *)connections;
{
NSLog(#"connection with media type");
for ( AVCaptureConnection *connection in connections ) {
for ( AVCaptureInputPort *port in [connection inputPorts] ) {
if ( [[port mediaType] isEqual:mediaType] ) {
return [[connection retain] autorelease];
}
}
}
return nil;
}
@implementation recordVideo (Internal)
- (AVCaptureDevice *) cameraWithPosition:(AVCaptureDevicePosition) position
{
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if ([device position] == position) {
return device;
}
}
return nil;
}
- (AVCaptureDevice *) backFacingCamera
{
NSLog(#"back");
return [self cameraWithPosition:AVCaptureDevicePositionBack];
}
- (AVCaptureDevice *) frontFacingCamera
{
NSLog(#"front ");
return [self cameraWithPosition:AVCaptureDevicePositionFront];
}
@end
@implementation recordVideo (AVCaptureFileOutputRecordingDelegate)
- (void) captureOutput:(AVCaptureFileOutput *)captureOutput
didStartRecordingToOutputFileAtURL:(NSURL *)fileURL
fromConnections:(NSArray *)connections
{
NSLog(#"did start recording");
}
- (void) captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
fromConnections:(NSArray *)connections
error:(NSError *)error
{
NSLog(#"did finish recording output file");
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:outputFileURL])
{
[library writeVideoAtPathToSavedPhotosAlbum:outputFileURL
completionBlock:^(NSURL *assetURL, NSError *error){
if (error && [delegate respondsToSelector:@selector(assetLibraryError:forURL:)]) {
[delegate assetLibraryError:error forURL:assetURL];
}
}];
}
else {
}
[library release];
}
@end
You are throwing away the errors - how can you debug if you do that?
You need to be more specific in what is / is not happening. Which methods are called? What errors are reported by the SDK (if any)? etc...
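For instance, a minimal sketch of capturing and checking the NSError that the code above currently discards (method and helper names are taken from the question's code):
- (void)recordVideo
{
    NSError *error = nil;
    // Pass an error out-parameter instead of nil so failures become visible.
    AVCaptureDeviceInput *videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:&error];
    if (!videoInput) {
        NSLog(@"Could not create video input: %@", error);
        return;
    }
    // ... continue configuring the session as before
}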
You have released your reference to movieFileOutput in -(void)recordVideo, yet you are still using the released movieFileOutput in -(void)startRecording as well as in other methods. You had better not release an object that you are still using.