Memory issue while reading video frames on iPhone

I'm having memory issues while reading video frames from an existing video chosen from the iPhone library. First I added the UIImage frames themselves to an array, but I suspected the array grew too large for memory after a while, so instead I now save each UIImage to the documents folder and add the image path to the array. However, I still get the same memory warnings, even though Instruments shows the total allocated memory never exceeding 2.5 MB and reports no leaks. Can anyone think of something?
-(void)addFrame:(UIImage *)image
{
    NSString *imgPath = [NSString stringWithFormat:@"%@/Analysis%d-%d.png", docFolder, currentIndex, framesArray.count];
    [UIImagePNGRepresentation(image) writeToFile:imgPath atomically:YES];
    [framesArray addObject:imgPath];
    frameCount++;
}
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [picker dismissModalViewControllerAnimated:YES];
    [framesArray removeAllObjects];
    frameCount = 0;
    // incoming video
    NSURL *videoURL = [info valueForKey:UIImagePickerControllerMediaURL];
    //NSLog(@"Video : %@", videoURL);
    // AVURLAsset to read input movie (i.e. mov recorded to local storage)
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:videoURL options:inputOptions];
    // Load the input asset tracks information
    [inputAsset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
        NSError *error = nil;
        nrFrames = CMTimeGetSeconds([inputAsset duration]) * 30;
        NSLog(@"Total frames = %d", nrFrames);
        // Check status of "tracks", make sure they were loaded
        AVKeyValueStatus tracksStatus = [inputAsset statusOfValueForKey:@"tracks" error:&error];
        if (tracksStatus != AVKeyValueStatusLoaded)
            // failed to load
            return;
        /* Read video samples from input asset video track */
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:inputAsset error:&error];
        NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
        [outputSettings setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
        AVAssetReaderTrackOutput *readerVideoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[inputAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] outputSettings:outputSettings];
        // Assign the tracks to the reader and start to read
        [reader addOutput:readerVideoTrackOutput];
        if ([reader startReading] == NO) {
            // Handle error
            NSLog(@"Error reading");
        }
        NSAutoreleasePool *pool = [NSAutoreleasePool new];
        while (reader.status == AVAssetReaderStatusReading)
        {
            if(!memoryProblem)
            {
                CMSampleBufferRef sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
                if (sampleBufferRef)
                {
                    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBufferRef);
                    /* Lock the image buffer */
                    CVPixelBufferLockBaseAddress(imageBuffer,0);
                    /* Get information about the image */
                    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
                    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
                    size_t width = CVPixelBufferGetWidth(imageBuffer);
                    size_t height = CVPixelBufferGetHeight(imageBuffer);
                    /* We unlock the image buffer */
                    CVPixelBufferUnlockBaseAddress(imageBuffer,0);
                    /* Create a CGImageRef from the CVImageBufferRef */
                    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
                    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
                    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
                    /* We release some components */
                    CGContextRelease(newContext);
                    CGColorSpaceRelease(colorSpace);
                    UIImage *image = [UIImage imageWithCGImage:newImage scale:[UIScreen mainScreen].scale orientation:UIImageOrientationRight];
                    //[self addFrame:image];
                    [self performSelectorOnMainThread:@selector(addFrame:) withObject:image waitUntilDone:YES];
                    /* We release the CGImageRef */
                    CGImageRelease(newImage);
                    CMSampleBufferInvalidate(sampleBufferRef);
                    CFRelease(sampleBufferRef);
                    sampleBufferRef = NULL;
                }
            }
            else
            {
                break;
            }
        }
        [pool release];
        NSLog(@"Finished");
    }];
}

Try this: move the NSAutoreleasePool inside the while loop and drain it at the end of each iteration, so that it looks like the following:
while (reader.status == AVAssetReaderStatusReading)
{
    NSAutoreleasePool *pool = [NSAutoreleasePool new];
    .....
    [pool drain];
}
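On newer compilers (and under ARC, where NSAutoreleasePool is unavailable) the same per-iteration draining is written with an @autoreleasepool block. This is only a sketch of the pattern, with the frame handling elided:
while (reader.status == AVAssetReaderStatusReading)
{
    @autoreleasepool {
        // copy the next sample buffer, convert it and hand the frame off here;
        // autoreleased objects created in this iteration are released when the
        // block exits, so memory stays flat while the reader runs
    }
}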

Related

iPhone - AVAssetWriter - Error creating movie from photos at 1920×1080 pixels

I am trying to create a movie from some pictures. It works just fine with HD pictures ({720, 1280}) or lower resolutions, but when I try to create the movie with full HD pictures {1080, 1920}, the video is scrambled. Here is a link to see how it looks: http://www.youtube.com/watch?v=BfYldb8e_18 . Do you have any ideas what I may be doing wrong?
- (void) createMovieWithOptions:(NSDictionary *) options
{
    @autoreleasepool {
        NSString *path = [options valueForKey:@"path"];
        CGSize size = [(NSValue *)[options valueForKey:@"size"] CGSizeValue];
        NSArray *imageArray = [options valueForKey:@"pictures"];
        NSInteger recordingFPS = [[options valueForKey:@"fps"] integerValue];
        BOOL success = YES;
        NSError *error = nil;
        AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                               fileType:AVFileTypeQuickTimeMovie
                                                                  error:&error];
        NSParameterAssert(assetWriter);
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       [NSNumber numberWithFloat:size.width], AVVideoWidthKey,
                                       [NSNumber numberWithFloat:size.height], AVVideoHeightKey,
                                       nil];
        AVAssetWriterInput *videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                                  outputSettings:videoSettings];
        // Configure settings for the pixel buffer adaptor.
        NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                          [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
        AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                                                                                         sourcePixelBufferAttributes:bufferAttributes];
        NSParameterAssert(videoWriterInput);
        NSParameterAssert([assetWriter canAddInput:videoWriterInput]);
        videoWriterInput.expectsMediaDataInRealTime = NO;
        [assetWriter addInput:videoWriterInput];
        // Start a session:
        [assetWriter startWriting];
        [assetWriter startSessionAtSourceTime:kCMTimeZero];
        CVPixelBufferRef buffer = NULL;
        // convert UIImage to CGImage.
        int frameCount = 0;
        float progress = 0;
        float progressFromFrames = _progressView.progress; // only for create iflipbook movie
        for(UIImage *img in imageArray)
        {
            if([[NSThread currentThread] isCancelled])
            {
                [NSThread exit];
            }
            [condCreateMovie lock];
            if(isCreateMoviePaused)
            {
                [condCreateMovie wait];
            }
            uint64_t totalFreeSpace = [Utils getFreeDiskspace];
            if(((totalFreeSpace/1024ll)/1024ll) < 50)
            {
                success = NO;
                break;
            }
            // @autoreleasepool {
            NSLog(@"size:%@", NSStringFromCGSize(img.size));
            buffer = [[MovieWritter sharedMovieWritter] pixelBufferFromCGImage:[img CGImage] andSize:size];
            BOOL append_ok = NO;
            int j = 0;
            while (!append_ok && j < 60)
            {
                if(adaptor.assetWriterInput.readyForMoreMediaData)
                {
                    CMTime frameTime = CMTimeMake(frameCount, recordingFPS);
                    append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
                    CVPixelBufferRelease(buffer);
                    [NSThread sleepForTimeInterval:0.1];
                    if(isCreatingiFlipBookFromImported)
                        progress = (float)frameCount/(float)[imageArray count]/2.0 + progressFromFrames;
                    else
                        progress = (float)frameCount/(float)[imageArray count];
                    [[NSNotificationCenter defaultCenter] postNotificationName:@"movieCreationProgress" object:[NSNumber numberWithFloat:progress]];
                }
                else
                {
                    [NSThread sleepForTimeInterval:0.5];
                }
                j++;
            }
            if (!append_ok)
            {
                NSLog(@"error appending image %d times %d\n", frameCount, j);
            }
            frameCount++;
            [condCreateMovie unlock];
        }
        // Finish the session:
        [videoWriterInput markAsFinished];
        [assetWriter finishWriting];
        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithBool:success], @"success",
                              path, @"path", nil];
        [[NSNotificationCenter defaultCenter] postNotificationName:@"movieCreationFinished" object:dict];
    }
}
Edit: Here is the code for [[MovieWritter sharedMovieWritter] pixelBufferFromCGImage:]
- (CVPixelBufferRef) pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size
{
    @autoreleasepool {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                              size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
                                              &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                     size.height, 8, 4*size.width, rgbColorSpace,
                                                     kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                               CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer;
    }
}
I had the same problem and this answer resolved it: the size of the video must be a multiple of 16.
Pretty sure that this is either a hardware limitation or a bug. Please file a Radar.
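If you want to guard against that in code, here is a minimal sketch (the rounding helper is my own illustration, not part of the original project) that snaps the requested dimensions down to multiples of 16 before they go into the writer's videoSettings:
// Hypothetical helper: round each dimension down to a multiple of 16 so the
// H.264 encoder gets dimensions it handles without scrambling the output.
static CGSize sizeRoundedDownToMultipleOf16(CGSize size)
{
    CGFloat width  = floor(size.width  / 16.0) * 16.0;
    CGFloat height = floor(size.height / 16.0) * 16.0;
    return CGSizeMake(width, height);
}
// e.g. size = sizeRoundedDownToMultipleOf16(size); before building videoSettings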
How about something like this to get the pixel buffer:
// you could use a CGImageRef here instead
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(imageView.image.CGImage));
NSLog (@"copied image data");
cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     FRAME_WIDTH,
                                     FRAME_HEIGHT,
                                     kCVPixelFormatType_32BGRA,
                                     (void*)CFDataGetBytePtr(imageData),
                                     CGImageGetBytesPerRow(imageView.image.CGImage),
                                     NULL,
                                     NULL,
                                     NULL,
                                     &pixelBuffer);
NSLog (@"CVPixelBufferCreateWithBytes returned %d", cvErr);
CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
NSLog (@"elapsedTime: %f", elapsedTime);
CMTime presentationTime = CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE);
// write the sample
BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
CVPixelBufferRelease(pixelBuffer);
CFRelease(imageData);
if (appended) {
    NSLog (@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
} else {
    NSLog (@"failed to append");
    [self stopRecording];
    self.startStopButton.selected = NO;
}
You may also want to set the capture session preset, although High is usually suitable and it is the default.
/*
Constants to define capture setting presets using the sessionPreset property.
NSString *const AVCaptureSessionPresetPhoto;
NSString *const AVCaptureSessionPresetHigh;
NSString *const AVCaptureSessionPresetMedium;
NSString *const AVCaptureSessionPresetLow;
NSString *const AVCaptureSessionPreset352x288;
NSString *const AVCaptureSessionPreset640x480;
NSString *const AVCaptureSessionPreset1280x720;
NSString *const AVCaptureSessionPreset1920x1080;
NSString *const AVCaptureSessionPresetiFrame960x540;
NSString *const AVCaptureSessionPresetiFrame1280x720;
*/
// set it like this
self.captureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
// or like this when you define the AVCaptureSession
[self.captureSession setSessionPreset:AVCaptureSessionPreset1920x1080];

How to make a movie from a set of images using UIGetScreenImage

I have used this method to get multiple images. I am able to successfully create a movie, but my problem is that when I play the movie it seems to play too fast, i.e. the movie doesn't have all the frames. Here is my code.
-(UIImage *)uiImageScreen
{
    CGImageRef screen = UIGetScreenImage();
    UIImage *image = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
    return image;
}
-(void) writeSample:(NSTimer *)_timer
{
    if (assetWriterInput.readyForMoreMediaData) {
        // CMSampleBufferRef sample = nil;
        CVReturn cvErr = kCVReturnSuccess;
        // get screenshot image!
        CGImageRef image = (CGImageRef)[[self uiImageScreen] CGImage];
        NSLog (@"made screenshot");
        // prepare the pixel buffer
        CVPixelBufferRef pixelBuffer = NULL;
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
        NSLog (@"copied image data");
        cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                             FRAME_WIDTH,
                                             FRAME_HEIGHT,
                                             kCVPixelFormatType_32BGRA,
                                             (void*)CFDataGetBytePtr(imageData),
                                             CGImageGetBytesPerRow(image),
                                             NULL,
                                             NULL,
                                             NULL,
                                             &pixelBuffer);
        NSLog (@"CVPixelBufferCreateWithBytes returned %d", cvErr);
        // calculate the time
        CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
        CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
        NSLog (@"elapsedTime: %f", elapsedTime);
        CMTime presentationTime = CMTimeMake(elapsedTime * 600, 600);
        // write the sample
        BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
        if (appended)
        {
            NSLog (@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
        }
        else
        {
            NSLog (@"failed to append");
        }
    }
}
Then I call this method to create the movie.
-(void)StartRecording
{
    NSString *moviePath = [[self pathToDocumentsDirectory] stringByAppendingPathComponent:OUTPUT_FILE_NAME];
    if ([[NSFileManager defaultManager] fileExistsAtPath:moviePath]) {
        [[NSFileManager defaultManager] removeItemAtPath:moviePath error:nil];
    }
    NSURL *movieURL = [NSURL fileURLWithPath:moviePath];
    NSLog(@"path=%@", movieURL);
    NSError *movieError = nil;
    [assetWriter release];
    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&movieError];
    NSDictionary *assetWriterInputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              AVVideoCodecH264, AVVideoCodecKey,
                                              [NSNumber numberWithInt:320], AVVideoWidthKey,
                                              [NSNumber numberWithInt:480], AVVideoHeightKey,
                                              nil];
    assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:assetWriterInputSettings];
    assetWriterInput.expectsMediaDataInRealTime = YES;
    [assetWriter addInput:assetWriterInput];
    [assetWriterPixelBufferAdaptor release];
    assetWriterPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                                     initWithAssetWriterInput:assetWriterInput
                                     sourcePixelBufferAttributes:nil];
    [assetWriter startWriting];
    firstFrameWallClockTime = CFAbsoluteTimeGetCurrent();
    [assetWriter startSessionAtSourceTime:CMTimeMake(0, 1000)];
    // start writing samples to it
    [assetWriterTimer release];
    assetWriterTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                        target:self
                                                      selector:@selector(writeSample:)
                                                      userInfo:nil
                                                       repeats:YES];
}
Try this method:
if (![videoWriterInput isReadyForMoreMediaData]) {
    NSLog(@"Not ready for video data");
}
else {
    @synchronized (self) {
        UIImage *newFrame = [self.currentScreen retain];
        CVPixelBufferRef pixelBuffer = NULL;
        CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
        CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
        int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, avAdaptor.pixelBufferPool, &pixelBuffer);
        if(status != 0){
            // could not get a buffer from the pool
            NSLog(@"Error creating pixel buffer: status=%d", status);
        }
        // set image data into pixel buffer
        CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
        uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
        CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels); // XXX: will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data
        if(status == 0){
            BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
            if (!success)
                NSLog(@"Warning: Unable to write buffer to video");
        }
        // clean up
        [newFrame release];
        CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
        CVPixelBufferRelease( pixelBuffer );
        CFRelease(image);
        CGImageRelease(cgImage);
    }
}
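Note that time in the snippet above is assumed to come from the caller. One common approach, sketched here and not part of the original answer, is to derive it from a running frame counter at your intended playback rate rather than from a wall-clock timer, which also keeps playback speed consistent:
// Hypothetical bookkeeping: frameNumber is incremented once per appended
// frame, and 30 is the intended playback frame rate.
static int64_t frameNumber = 0;
CMTime time = CMTimeMake(frameNumber, 30);   // frame N is shown at N/30 seconds
frameNumber++;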

iPhone Realtime Image Processing using OpenCV and AVFoundation Frameworks?

I want to do image processing in real time using OpenCV.
My final target is to show the result on the screen in real time while the camera is capturing video using the AVFoundation framework.
How can I process every video frame with OpenCV and show the result on the screen in real time?
Use AVAssetReader.
// Setup Reader
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:urlvalue options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{ dispatch_async(dispatch_get_main_queue(), ^{
    AVAssetTrack *videoTrack = nil;
    NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
    if ([tracks count] == 1) {
        videoTrack = [tracks objectAtIndex:0];
        NSError *error = nil;
        _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
        if (error)
            NSLog(@"%@", error.localizedDescription);
        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_4444AYpCbCr16];
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
        [_movieReader addOutput:[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings]];
        [_movieReader startReading];
    }
});
}];
To get the next movie frame:
static int frameCount = 0;
- (void) readNextMovieFrame {
    if (_movieReader.status == AVAssetReaderStatusReading) {
        AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (sampleBuffer) {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer,0);
            // Get information of the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);
            /* Create a CGImageRef from the CVImageBufferRef */
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
            CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
            CGImageRef newImage = CGBitmapContextCreateImage(newContext);
            /* We release some components */
            CGContextRelease(newContext);
            CGColorSpaceRelease(colorSpace);
            /* We unlock the image buffer now that the bitmap has been created */
            CVPixelBufferUnlockBaseAddress(imageBuffer,0);
            /* We display the result on the custom layer */
            /* self.customLayer.contents = (id) newImage; */
            /* We display the result on the image view (We need to change the orientation of the image so that the video is displayed correctly) */
            UIImage *image = [UIImage imageWithCGImage:newImage scale:0.0 orientation:UIImageOrientationRight];
            CGImageRelease(newImage); // imageWithCGImage: retains the CGImage, so release our reference to avoid a leak
            UIGraphicsBeginImageContext(image.size);
            [image drawAtPoint:CGPointMake(0, 0)];
            // UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
            videoImage = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();
            //videoImage = image;
            // if (frameCount < 40) {
            NSLog(@"readNextMovieFrame==%d", frameCount);
            NSString *filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount];
            NSString *pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
            [UIImagePNGRepresentation(videoImage) writeToFile:pngPath atomically:YES];
            frameCount++;
            // }
            CFRelease(sampleBuffer);
        }
    }
}
Once your _movieReader reaches the end, you need to restart it again.
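A rough sketch of that restart check (setupMovieReader is a hypothetical helper wrapping the reader setup shown above):
if (_movieReader.status == AVAssetReaderStatusCompleted ||
    _movieReader.status == AVAssetReaderStatusFailed) {
    // An AVAssetReader cannot be reused once it finishes or fails;
    // release it and build a fresh one to read the asset again.
    [_movieReader release];
    _movieReader = nil;
    [self setupMovieReader]; // hypothetical method containing the setup code above
}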

AVCaptureSession returning blank image on iPhone 4

I am trying to run the 'media capture: putting it all together' example from the AVFoundation Programming Guide, but I keep getting a blank (black) image. Is there anything I need to call first to have this code access the camera?
Thanks.
This is unmodified code from the example:
-(void) setupCapture {
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetLow;
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"no input");
        return;
    }
    [session addInput:input];
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                       forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    output.minFrameDuration = CMTimeMake(1, 15);
    dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    [session startRunning];
}
UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately.
            return nil;
        }
    }
    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);
    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider =
        CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage =
        CGImageCreate(width, height, 8, 32, bytesPerRow,
                      colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);
    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return image;
}
and my callback method is:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    NSLog(@"am I nil?: %@", image);
    self.imageV.image = image;
    [self.view setNeedsDisplay];
}
Tip 1: do not start the capture session in viewDidLoad; start it a little later.
Tip 2: do not update your UI in the capture session callback method; do it on the main thread.
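For tip 2, a minimal sketch of the delegate method with the UI update dispatched to the main queue (the rest is your callback unchanged):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // The delegate is called on the dispatch queue passed to
    // setSampleBufferDelegate:queue:, so hop to the main queue before
    // touching any UIKit objects.
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageV.image = image;
    });
}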

How do you only capture select camera frames using AVCaptureSession?

I'm trying to use AVCaptureSession to get images from the front camera for processing. So far, each time a new frame became available I simply assigned it to a variable and ran an NSTimer that checks every tenth of a second whether there's a new frame and, if there is, processes it.
I would like to get a frame, freeze the camera, and get the next frame whenever I like. Something like [captureSession getNextFrame], you know?
Here's a part of my code, although I'm not sure how helpful it may be:
- (void)startFeed {
    loopTimerIndex = 0;
    NSArray *captureDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput
                                          deviceInputWithDevice:[captureDevices objectAtIndex:1]
                                          error:nil];
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.minFrameDuration = CMTimeMake(1, 10);
    captureOutput.alwaysDiscardsLateVideoFrames = true;
    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", nil);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [captureOutput setVideoSettings:videoSettings];
    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetLow;
    [captureSession addInput:captureInput];
    [captureSession addOutput:captureOutput];
    imageView = [[UIImage alloc] init];
    [captureSession startRunning];
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    loopTimerIndex++;
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    imageView = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationLeftMirrored];
    [delegate updatePresentor:imageView];
    if(loopTimerIndex == 1) {
        [delegate feedStarted];
    }
    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    [pool drain];
}
You don't actively poll the camera to get frames back, because that's not how the capture process is architected. Instead, if you would like to only display frames every tenth of a second instead of every 1/30th or faster, you should just ignore the frames in between.
For example, you could maintain a timestamp that you would compare against every time -captureOutput:didOutputSampleBuffer:fromConnection: was triggered. If the timestamp is greater than or equal to 0.1 seconds from right now, process and display the camera frame and reset the timestamp to the current time. Otherwise, ignore the frame.
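A minimal sketch of that timestamp check inside the delegate callback (lastFrameTime as a CFAbsoluteTime ivar and the 0.1 second interval are assumptions for illustration):
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
    // lastFrameTime is an ivar initialized to 0.
    if (now - lastFrameTime < 0.1) {
        return; // too soon, skip this frame
    }
    lastFrameTime = now;
    // ... process and display this frame as before ...
}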