UILabel is not getting updated inside AVCaptureSession delegate - iPhone

I am learning Objective-C and writing a sample app that fetches the video feed from the iPhone camera. I am able to get the frames from the camera and display them on screen. I am also trying to update a UILabel on screen for each video frame, inside the delegate method, but the label value is not always updated. Here is the code I am using.
This method initializes the capture session:
- (void)initCapture
{
NSError *error = nil;
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus] && [device lockForConfiguration:&error]) {
[device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
[device unlockForConfiguration];
}
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
//AVCaptureStillImageOutput *imageCaptureOutput = [[AVCaptureStillImageOutput alloc] init];
AVCaptureVideoDataOutput *captureOutput =[[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
//captureOutput.minFrameDuration = CMTimeMake(1, 1);
captureOutput.alwaysDiscardsLateVideoFrames = YES;
dispatch_queue_t queue;
queue = dispatch_queue_create("cameraQueue", NULL);
[captureOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// Set the video output to store frame in BGRA (It is supposed to be faster)
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
self.captureSession = [[AVCaptureSession alloc] init];
[self.captureSession addInput:captureInput];
[self.captureSession addOutput:captureOutput];
self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];
self.prevLayer.frame = CGRectMake(0, 0, 320, 320);
self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.videoPreview.layer addSublayer: self.prevLayer];
[self.captureSession startRunning];
}
This method is called for each video frame.
#pragma mark AVCaptureSession delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
i++;
self.lblStatus.text = [NSString stringWithFormat:@"%d", i];
}
I am trying to update the UILabel inside this method, but it does not always update; there is a long delay before the label text changes.
Could someone help please?
Thanks.

Your sample buffer delegate's captureOutput:didOutputSampleBuffer:fromConnection: is called on a non-main thread, and updating UI objects from there does no good. Try using performSelectorOnMainThread:withObject:waitUntilDone: (or dispatching back to the main queue) instead.
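For example, a minimal sketch of the delegate body under that approach, reusing the i and lblStatus names from the question (a dispatch_async onto the main queue would work just as well):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // This runs on the camera queue, not the main thread.
    i++;
    NSString *status = [NSString stringWithFormat:@"%d", i];
    // Push the UIKit update onto the main thread.
    [self.lblStatus performSelectorOnMainThread:@selector(setText:)
                                     withObject:status
                                  waitUntilDone:NO];
}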

Related

iOS 7 - Outputting frames with AVCaptureMetadataOutput?

I am trying to use the iOS 7 QR-code reading features in the AVFoundation framework with the following code:
-(void)setupCaptureSession_iOS7 {
self.session = [[AVCaptureSession alloc] init];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
error:&error];
if (!input)
{
NSLog(#"Error: %#", error);
return;
}
[session addInput:input];
//Turn on point autofocus for middle of view
[device lockForConfiguration:&error];
CGPoint point = CGPointMake(0.5,0.5);
[device setFocusPointOfInterest:point];
[device setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
[device unlockForConfiguration];
//Add the metadata output device
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[session addOutput:output];
NSLog(#"%lu",(unsigned long)output.availableMetadataObjectTypes.count);
for (NSString *s in output.availableMetadataObjectTypes)
NSLog(#"%#",s);
//You should check here to see if the session supports these types, if they aren't support you'll get an exception
output.metadataObjectTypes = #[AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeUPCECode];
output.rectOfInterest = CGRectMake(0, 0, 320, 480);
[session startRunning];
// Assign session to an ivar.
[self setSession:self.session];
}
This code obviously doesn't render the frames to the screen (yet). This is because, instead of using the AVCaptureVideoPreviewLayer class to display the preview, I need to display the frames as a UIImage (because I want to display the frames multiple times in the view).
If I use AVCaptureVideoDataOutput as the output, I am able to grab the frames from the captureOutput:didOutputSampleBuffer:fromConnection: callback. But I can't find an equivalent way to get the frame buffer when using AVCaptureMetadataOutput as the output.
Does anyone have any idea how to do this?
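For reference, a hedged sketch of the AVCaptureVideoDataOutput route mentioned above: converting each sample buffer to a UIImage inside captureOutput:didOutputSampleBuffer:fromConnection: (this assumes the output's videoSettings request kCVPixelFormatType_32BGRA; the frame variable name is illustrative):
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// BGRA maps to "little-endian 32-bit, alpha first" in Core Graphics.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
    colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage *frame = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
// Hand frame off to the main thread before assigning it to any UIImageView.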

AVCaptureVideoDataOutput issue in iOS 5

I initialize AVCaptureVideoDataOutput with this code:
// Setup the video output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
_videoOutput.alwaysDiscardsLateVideoFrames = YES;
_videoOutput.minFrameDuration = kCMTimeZero;
_videoOutput.videoSettings =
[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA],(id)kCVPixelBufferPixelFormatTypeKey
,AVVideoCodecH264, AVVideoCodecKey, nil];
// Setup the audio output
_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
NSLog(#"dispatch_queue_t");
// Setup the queue
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[_videoOutput setSampleBufferDelegate:self queue:queue];
[_audioOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
My problem is that the delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
is never called. In iOS 4 this method was called; any idea why this happens?
Add input and output to the session.
Then call [session startRunning].
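A minimal sketch of the missing wiring, assuming the _videoOutput and _audioOutput ivars from the question (the session would also need to be kept alive, e.g. in an ivar):
AVCaptureDeviceInput *videoInput =
    [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]
                                          error:nil];
AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canAddInput:videoInput])
    [session addInput:videoInput];
if ([session canAddOutput:_videoOutput])
    [session addOutput:_videoOutput];
if ([session canAddOutput:_audioOutput])
    [session addOutput:_audioOutput];
// Without startRunning the sample buffer delegate is never called.
[session startRunning];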

How to capture image without displaying preview in iOS

I want to capture images at specific instances, for example when a button is pushed, but I don't want to show any video preview screen. I guess captureStillImageAsynchronouslyFromConnection: is what I need to use for this scenario. Currently, I can capture an image if I show a video preview. However, if I remove the code that shows the preview, the app crashes with the following output:
2012-04-07 11:25:54.898 imCapWOPreview[748:707] *** Terminating app
due to uncaught exception 'NSInvalidArgumentException', reason: '***
-[AVCaptureStillImageOutput captureStillImageAsynchronouslyFromConnection:completionHandler:] -
inactive/invalid connection passed.'
*** First throw call stack: (0x336ee8bf 0x301e21e5 0x3697c35d 0x34187 0x33648435 0x310949eb 0x310949a7 0x31094985 0x310946f5 0x3109502d
0x3109350f 0x31092f01 0x310794ed 0x31078d2d 0x37db7df3 0x336c2553
0x336c24f5 0x336c1343 0x336444dd 0x336443a5 0x37db6fcd 0x310a7743
0x33887 0x3382c) terminate called throwing an exception(lldb)
So here is my implementation:
BIDViewController.h:
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface BIDViewController : UIViewController
{
AVCaptureStillImageOutput *stillImageOutput;
}
@property (strong, nonatomic) IBOutlet UIView *videoPreview;
- (IBAction)doCap:(id)sender;
@end
Relevant stuff inside BIDViewController.m:
#import "BIDViewController.h"
@interface BIDViewController ()
@end
@implementation BIDViewController
@synthesize capturedIm;
@synthesize videoPreview;
- (void)viewDidLoad
{
[super viewDidLoad];
[self setupAVCapture];
}
- (BOOL)setupAVCapture
{
NSError *error = nil;
AVCaptureSession *session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetHigh];
/*
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.videoPreview.bounds;
[self.videoPreview.layer addSublayer:captureVideoPreviewLayer];
*/
// Select a video device, make an input
AVCaptureDevice *backCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];
if (error)
return NO;
if ([session canAddInput:input])
[session addInput:input];
// Make a still image output
stillImageOutput = [AVCaptureStillImageOutput new];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
if ([session canAddOutput:stillImageOutput])
[session addOutput:stillImageOutput];
[session startRunning];
return YES;
}
- (IBAction)doCap:(id)sender {
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections)
{
for (AVCaptureInputPort *port in [connection inputPorts])
{
if ([[port mediaType] isEqual:AVMediaTypeVideo] )
{
videoConnection = connection;
break;
}
}
if (videoConnection) { break; }
}
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *__strong error) {
// Do something with the captured image
}];
}
With the above code, the crash occurs when doCap is called. On the other hand, if I uncomment the following lines in the setupAVCapture function
/*
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.videoPreview.bounds;
[self.videoPreview.layer addSublayer:captureVideoPreviewLayer];
*/
then it works without any problem.
In summary, my question is: how can I capture images at controlled instances without showing a preview?
I use the following code for capturing from the front-facing camera (if available) or, failing that, the back camera. It works well on my iPhone 4S.
-(void)viewDidLoad{
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureDevice *device = [self frontFacingCameraIfAvailable];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"ERROR: trying to open camera: %#", error);
}
[session addInput:input];
//stillImageOutput is a global variable in .h file: "AVCaptureStillImageOutput *stillImageOutput;"
stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[stillImageOutput setOutputSettings:outputSettings];
[session addOutput:stillImageOutput];
[session startRunning];
}
-(AVCaptureDevice *)frontFacingCameraIfAvailable{
NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
AVCaptureDevice *captureDevice = nil;
for (AVCaptureDevice *device in videoDevices){
if (device.position == AVCaptureDevicePositionFront){
captureDevice = device;
break;
}
}
// couldn't find one on the front, so just get the default video device.
if (!captureDevice){
captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}
return captureDevice;
}
-(IBAction)captureNow{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections){
for (AVCaptureInputPort *port in [connection inputPorts]){
if ([[port mediaType] isEqual:AVMediaTypeVideo]){
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(#"about to request a capture from: %#", stillImageOutput);
[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error){
CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments){
// Do something with the attachments if you want to.
NSLog(#"attachements: %#", exifAttachments);
}
else
NSLog(#"no attachments");
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
self.vImage.image = image;
}];
}
Well, I was facing a similar issue whereby captureStillImageAsynchronouslyFromConnection: was raising an exception that the passed connection was invalid. Later on, I figured out that once I made the session and stillImageOutput retained (strong) properties, the issue was resolved.
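A sketch of what that might look like (property names assumed; strong properties keep the session and output alive so the connection stays valid):
// In the class extension or header:
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureStillImageOutput *stillImageOutput;
// In setupAVCapture, assign through the properties instead of locals:
self.session = [AVCaptureSession new];
self.stillImageOutput = [AVCaptureStillImageOutput new];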
I am a JavaScript developer and wanted to create a native iOS framework for my cross-platform JavaScript project. When I started, I ran into many deprecated methods and other runtime errors. After fixing all of those issues, below is an answer that is compliant with iOS 13.5. The code lets you take a picture on a button tap without showing a preview.
Your .h file
@interface NoPreviewCameraViewController : UIViewController <AVCapturePhotoCaptureDelegate> {
AVCaptureSession *captureSession;
AVCapturePhotoOutput *photoOutput;
AVCapturePhotoSettings *photoSetting;
AVCaptureConnection *captureConnection;
UIImageView *imageView;
}
@end
Your .m file
- (void)viewDidLoad {
[super viewDidLoad];
imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height - 140)];
[self.view addSubview:imageView];
UIButton *takePicture = [UIButton buttonWithType:UIButtonTypeCustom];
[takePicture addTarget:self action:@selector(takePicture:) forControlEvents:UIControlEventTouchUpInside];
[takePicture setTitle:@"Take Picture" forState:UIControlStateNormal];
takePicture.frame = CGRectMake(40.0, self.view.frame.size.height - 140, self.view.frame.size.width - 40, 40);
[self.view addSubview:takePicture];
[self initCaptureSession];
}
- (void) initCaptureSession {
captureSession = [[AVCaptureSession alloc] init];
if([captureSession canSetSessionPreset: AVCaptureSessionPresetPhoto] ) {
[captureSession setSessionPreset:AVCaptureSessionPresetPhoto];
}
AVCaptureDeviceInput *deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
if ([captureSession canAddInput:deviceInput]) {
[captureSession addInput:deviceInput];
}
photoOutput = [[AVCapturePhotoOutput alloc] init];
if ([captureSession canAddOutput:photoOutput]) {
[captureSession addOutput:photoOutput];
}
[captureSession startRunning];
}
-(void) setNewPhotoSetting {
photoSetting = [AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey : AVVideoCodecTypeJPEG}];
[photoOutput setPhotoSettingsForSceneMonitoring:photoSetting];
}
- (IBAction)takePicture:(id)sender {
captureConnection = nil;
[self setNewPhotoSetting];
for (AVCaptureConnection *connection in photoOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual: AVMediaTypeVideo]) {
captureConnection = connection;
NSLog(#"Value of connection = %#", connection);
NSLog(#"Value of captureConnection = %#", captureConnection);
break;
}
}
if (captureConnection) {
break;
}
}
[photoOutput capturePhotoWithSettings:photoSetting delegate:self];
}
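The code above stops at capturePhotoWithSettings:delegate:; a hedged sketch of the AVCapturePhotoCaptureDelegate callback that actually receives the photo (iOS 11+ API) might look like this:
- (void)captureOutput:(AVCapturePhotoOutput *)output
didFinishProcessingPhoto:(AVCapturePhoto *)photo
                error:(NSError *)error {
    if (error) {
        NSLog(@"Photo capture failed: %@", error);
        return;
    }
    // fileDataRepresentation returns the JPEG data requested in the photo settings.
    NSData *imageData = [photo fileDataRepresentation];
    UIImage *image = [[UIImage alloc] initWithData:imageData];
    dispatch_async(dispatch_get_main_queue(), ^{
        self->imageView.image = image; // imageView is the ivar from the .h file above
    });
}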

Double camera output in iOS

I want to make a twin-screen view using the built-in camera on iOS. I tried the following code, but it shows just one view. That's the natural result, I know. Here's the code I used:
- (void)prepareCameraView:(UIView *)window
{
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
CALayer *viewLayer = window.layer;
NSLog(#"viewLayer = %#", viewLayer);
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc]
initWithSession:session];
captureVideoPreviewLayer.frame = window.bounds;
[window.layer addSublayer:captureVideoPreviewLayer];
AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (!input)
{
NSLog(#"ERROR : trying to open camera : %#", error);
}
[session addInput:input];
[session startRunning];
}
How can I get a double camera view on iOS?
// Use this code
AVCaptureSession *session = [AVCaptureSession new];
AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];
if ( [session canAddInput:deviceInput])
{
[session addInput:deviceInput];
}
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height)];
NSUInteger replicatorInstances = 2;
CGFloat replicatorViewHeight = (self.view.bounds.size.height - 64)/replicatorInstances;
CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0.0, self.view.bounds.size.width, replicatorViewHeight);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, replicatorViewHeight, 0.0);
[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
[session startRunning];
Try this:
- (void)prepareCameraView:(UIView *)window
{
NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
{
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
CALayer *viewLayer = window.layer;
NSLog(#"viewLayer = %#", viewLayer);
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = CGRectMake(0.0f, 0.0f, window.bounds.size.width/2.0f, window.bounds.size.height);
[window.layer addSublayer:captureVideoPreviewLayer];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:[captureDevices objectAtIndex:0] error:&error];
if (!input)
{
NSLog(#"ERROR : trying to open camera : %#", error);
}
[session addInput:input];
[session startRunning];
}
{
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
CALayer *viewLayer = window.layer;
NSLog(#"viewLayer = %#", viewLayer);
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = CGRectMake(window.bounds.size.width/2.0f, 0.0f, window.bounds.size.width/2.0f, window.bounds.size.height);
[window.layer addSublayer:captureVideoPreviewLayer];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:[captureDevices objectAtIndex:1] error:&error];
if (!input)
{
NSLog(#"ERROR : trying to open camera : %#", error);
}
[session addInput:input];
[session startRunning];
}
}
Note that it makes absolutely no checks that there are actually two cameras, and it splits the screen vertically, so it is probably best viewed in landscape. You'll want to add some checks to that code (see the sketch below) and work out exactly how you want to lay out each camera's layer before using it.
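For instance, a minimal sketch of such a check before building the two sessions, using the same devicesWithMediaType: call as the answer (on iOS 10+ AVCaptureDeviceDiscoverySession is the preferred replacement):
NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
if (captureDevices.count < 2) {
    // Fall back to a single full-width preview instead of indexing past the array.
    NSLog(@"Only %lu video device(s) available", (unsigned long)captureDevices.count);
    return;
}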

iPhone - processing frames that are being recorded by the camera

I have an app that records video, and I need to fire a method every time a frame is grabbed. After banging my head against the wall, I decided to try the following: create a dispatch queue, as I would when grabbing video from an output, just to have a method called whenever a frame is recorded by the camera.
I am trying to understand a section of Apple sample code that records video, to figure out how I should add the dispatch queue. This is the Apple code; the section marked between asterisks is what I added in order to create the queue. It compiles without errors, but captureOutput:didOutputSampleBuffer:fromConnection: is never called.
- (BOOL) setupSessionWithPreset:(NSString *)sessionPreset error:(NSError **)error
{
BOOL success = NO;
// Init the device inputs
AVCaptureDeviceInput *videoInput = [[[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:error] autorelease];
[self setVideoInput:videoInput]; // stash this for later use if we need to switch cameras
AVCaptureDeviceInput *audioInput = [[[AVCaptureDeviceInput alloc] initWithDevice:[self audioDevice] error:error] autorelease];
[self setAudioInput:audioInput];
AVCaptureMovieFileOutput *movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
[self setMovieFileOutput:movieFileOutput];
[movieFileOutput release];
// Setup and start the capture session
AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([session canAddInput:videoInput]) {
[session addInput:videoInput];
}
if ([session canAddInput:audioInput]) {
[session addInput:audioInput];
}
if ([session canAddOutput:movieFileOutput]) {
[session addOutput:movieFileOutput];
}
[session setSessionPreset:sessionPreset];
// I added this *****************
dispatch_queue_t queue = dispatch_queue_create("myqueue", NULL);
[[self videoDataOutput] setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// ******************** end of my code
[session startRunning];
[self setSession:session];
[session release];
success = YES;
return success;
}
What I need is just a method where I can process every frame that is being recorded.
thanks
Having set yourself as the delegate, you'll receive a call to:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
Every time a new frame is captured. You can put whatever code you want in there; just be careful because you won't be on the main thread. It's probably safest to do a quick [target performSelectorOnMainThread:@selector(methodYouActuallyWant) withObject:nil waitUntilDone:NO] in -captureOutput:didOutputSampleBuffer:fromConnection:.
Addition: I use the following as setup in my code, and that successfully leads to the delegate method being called. I'm unable to see any substantial difference between it and what you're using.
- (id)initWithSessionPreset:(NSString *)sessionPreset delegate:(id <AAVideoSourceDelegate>)aDelegate
{
#ifndef TARGET_OS_EMBEDDED
return nil;
#else
if(self = [super init])
{
delegate = aDelegate;
NSError *error = nil;
// create a low-quality capture session
session = [[AVCaptureSession alloc] init];
session.sessionPreset = sessionPreset;
// grab a suitable device...
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// ...and a device input
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if(!input || error)
{
[self release];
return nil;
}
[session addInput:input];
// create a VideDataOutput to route output to us
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:[output autorelease]];
// create a suitable dispatch queue, GCD style, and hook self up as the delegate
dispatch_queue_t queue = dispatch_queue_create("aQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
// set 32bpp BGRA pixel format, since I'll want to make sense of the frame
output.videoSettings =
[NSDictionary
dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
}
return self;
#endif
}
- (void)start
{
[session startRunning];
}
- (void)stop
{
[session stopRunning];
}
// create a suitable dispatch queue, GCD style, and hook self up as the delegate
dispatch_queue_t queue = dispatch_queue_create("aQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
Also, very importantly, inside
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
be sure to put an NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; at the beginning and a [pool drain] at the end, or it will crash after too many frames are processed.
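Under manual reference counting that looks roughly like this (with ARC you would wrap the body in an @autoreleasepool block instead):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ... per-frame processing ...
    [pool drain];
}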