iOS - Automatically resize CVPixelBufferRef

I am trying to crop and scale a CMSampleBufferRef based on the user's input ratio. The code below takes a CMSampleBufferRef, converts it into a CVImageBufferRef, and uses the CVPixelBuffer calls to crop the underlying image based on its bytes. The goal of this process is to have a cropped and scaled CVPixelBufferRef to write to the video.
- (CVPixelBufferRef)modifyImage:(CMSampleBufferRef)sampleBuffer {
    @synchronized (self) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the image buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // Get information about the image
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        CVPixelBufferRef pxbuffer;
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 [NSNumber numberWithInt:720], kCVPixelBufferWidthKey,
                                 [NSNumber numberWithInt:1280], kCVPixelBufferHeightKey,
                                 nil];

        NSInteger tempWidth = (NSInteger)(width / ratio);
        NSInteger tempHeight = (NSInteger)(height / ratio);
        NSInteger baseAddressStart = 100 + 100 * bytesPerRow;
        CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, tempWidth, tempHeight, kCVPixelFormatType_32BGRA, &baseAddress[baseAddressStart], bytesPerRow, MyPixelBufferReleaseCallback, NULL, (CFDictionaryRef)options, &pxbuffer);
        if (status != 0) {
            CKLog(@"%d", status);
            return NULL;
        }

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return pxbuffer;
    }
}
It all works fine, except that when I try to write the result into the video's output using the method below, I keep getting memory warnings. It is fine if I keep the same ratio.
- (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer pixelBuffer:(CVPixelBufferRef)pixelBuffer {
    CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.videoWriter.status != AVAssetWriterStatusWriting)
    {
        CKLog(@"%d", self.videoWriter.status);
        [self.videoWriter startWriting];
        [self.videoWriter startSessionAtSourceTime:lastSampleTime];
    }
    CVPixelBufferRef pxbuffer = [self modifyImage:sampleBuffer];
    BOOL success = [self.avAdaptor appendPixelBuffer:pxbuffer withPresentationTime:lastSampleTime];
    if (!success)
        NSLog(@"Warning: Unable to write buffer to video");
}
I also tried different approaches using CMSampleBufferRef and CGContext. If you can provide a solution for any of these approaches, I will gladly award the full bounty.

Try using the kCVPixelBufferLock_ReadOnly flag in both the CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress calls.
And sometimes this issue can be solved by copying the pixel buffer. Perform the allocation once:
unsigned char *data = (unsigned char *)malloc(ySize * sizeof(unsigned char));
After that, copy the data from the pixel buffer into data:
size_t size = height * bytesPerRow;
memcpy(data, baseAddress, size);
After that, work with data. Hope that helps.
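Building on the copy suggestion, here is a minimal sketch of a copy-based crop (not the original poster's code): it assumes 32BGRA frames, reuses the ratio and the 100-pixel offsets from the question, and uses a made-up release callback that frees the copied bytes once Core Video is done with them. Note also that writeBufferFrame: above never releases the buffer returned by modifyImage:, so whichever approach you use, balance it with CVPixelBufferRelease once the frame has been appended.

// Hypothetical release callback: frees the copy handed to CVPixelBufferCreateWithBytes.
static void CroppedPixelBufferReleaseCallback(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

- (CVPixelBufferRef)croppedPixelBufferFromSampleBuffer:(CMSampleBufferRef)sampleBuffer ratio:(CGFloat)ratio
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow   = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width         = CVPixelBufferGetWidth(imageBuffer);
    size_t height        = CVPixelBufferGetHeight(imageBuffer);

    size_t croppedWidth  = (size_t)(width / ratio);
    size_t croppedHeight = (size_t)(height / ratio);

    // Copy only the cropped region, row by row, starting 100 pixels in from the
    // top-left corner (the same offset used in the question). 4 bytes per 32BGRA pixel.
    size_t croppedBytesPerRow = croppedWidth * 4;
    uint8_t *copy = (uint8_t *)malloc(croppedBytesPerRow * croppedHeight);
    for (size_t row = 0; row < croppedHeight; row++) {
        memcpy(copy + row * croppedBytesPerRow,
               baseAddress + (row + 100) * bytesPerRow + 100 * 4,
               croppedBytesPerRow);
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   croppedWidth, croppedHeight,
                                                   kCVPixelFormatType_32BGRA,
                                                   copy, croppedBytesPerRow,
                                                   CroppedPixelBufferReleaseCallback, NULL,
                                                   NULL, &pxbuffer);
    if (status != kCVReturnSuccess) {
        free(copy);
        return NULL;
    }
    return pxbuffer; // caller is responsible for CVPixelBufferRelease()
}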


Generate movie with UIImages using AVFoundation

Many before me have shared their knowledge on Stack Overflow about this topic, and I was able to adopt many of their tips and code snippets. It all worked quite well, except that it was often hard on working memory. The time-lapse application I am working on was able to generate a movie out of 2000 HD images and more, but since iOS 7.1 it has trouble generating a video from more than 240 HD images. 240 images seems to be the limit on an iPhone 5s. I was wondering whether anybody else has had these problems and whether anybody has found a solution. Now to the source code.
This part iterates through the saved UIImages in the app's Documents directory.
if ([adaptor.assetWriterInput isReadyForMoreMediaData])
{
    CMTime frameTime = CMTimeMake(1, fps);
    CMTime lastTime = CMTimeMake(i, fps);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);

    NSString *imageFilePath = [NSString stringWithFormat:@"%@/%@", folderPathName, imageFileName];
    image = [UIImage imageWithContentsOfFile:imageFilePath];
    cgimage = [image CGImage];
    buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:cgimage];

    bool result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
    if (result == NO)
    {
        NSLog(@"failed to append buffer %i", i);
        _videoStatus = 0;
        success = NO;
        return success;
    }

    // buffer has to be released here or memory pressure will occur
    if (buffer != NULL)
    {
        CVBufferRelease(buffer);
        buffer = NULL;
    }
}
This is the local method which appears to cause most of the trouble. It gets a pixel buffer reference from the CGImage.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                                          CGImageGetHeight(image), kCVPixelFormatType_32ARGB,
                                          (CFDictionaryRef)CFBridgingRetain(options), &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4 * CGImageGetWidth(image),
                                                 rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(M_PI));
    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
I have been spending a lot of time on this without moving forward. Help is much appreciated. If any more details are needed, I am glad to provide them.
I use very similar code, although for slightly different reasons (I'm using AVAssetReader, grabbing frames as images and manipulating them). The net result, however, should be similar - I'm iterating through thousands of images without issue.
The two things I notice that I'm doing differently:
When you release the buffer, you're using CVBufferRelease; I'm using CVPixelBufferRelease.
You are not releasing the CGImage with CGImageRelease.
Try rewriting this:
// buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVBufferRelease(buffer);
    buffer = NULL;
}
as:
// buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVPixelBufferRelease(buffer);
    buffer = NULL;
}
CGImageRelease(cgimage);
Let me know how that goes.
EDIT: Here is a sample of my code, getting and releasing a CGImageRef. The image is created from a CIImage extracted from the reader buffer and filtered.
CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];

// 2. Grab the size
CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));

// 3. Convert the CGImage to a PixelBuffer
CVPixelBufferRef pxBuffer = NULL;
pxBuffer = [self pixelBufferFromCGImage:finalImage andSize:size];

// 4. Write things back out.
// Calculate the frame time
CMTime frameTime = CMTimeMake(1, 30);
CMTime presentTime = CMTimeAdd(currentTime, frameTime);

[_ugcAdaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];

CGImageRelease(finalImage);
CVPixelBufferRelease(pxBuffer);
Finally I found the solution to my problem; there were two points I had to change in my code.
I changed the parameter type of the method from CGImageRef to UIImage, so it is now (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image. The reason for this is mainly to simplify the code, so that the following correction is easier to implement.
An @autoreleasepool block is introduced in this method. This is the actual key to the solution: CGImageRef cgimage = [image CGImage]; and all other parts of the method must be enclosed in the @autoreleasepool.
The code looks like this.
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image withOrientation:(ImageOrientation)orientation
{
    @autoreleasepool
    {
        CGImageRef cgimage = [image CGImage];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;
        float width = CGImageGetWidth(cgimage);
        float height = CGImageGetHeight(cgimage);
        CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                            (__bridge CFDictionaryRef)(options), &pxbuffer);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, 4 * width,
                                                     rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(-M_PI / 2));
        CGContextDrawImage(context, CGRectMake(-height, 0, height, width), cgimage);

        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }
}
With this solution an HD movie of more than 2000 images is generated at a rather slow speed, but it seems to be very reliable, which is most important.
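For completeness, here is a minimal sketch of how the write loop above might call the revised method; adaptor, fps, i, folderPathName, imageFileName and orientation are assumed to be the same variables as in the earlier snippets.

if ([adaptor.assetWriterInput isReadyForMoreMediaData])
{
    @autoreleasepool
    {
        CMTime frameTime = CMTimeMake(1, fps);
        CMTime presentTime = CMTimeAdd(CMTimeMake(i, fps), frameTime);

        NSString *imageFilePath = [NSString stringWithFormat:@"%@/%@", folderPathName, imageFileName];
        UIImage *image = [UIImage imageWithContentsOfFile:imageFilePath];

        // The method now takes the UIImage directly; the CGImage is created
        // (and autoreleased) inside its own @autoreleasepool block.
        CVPixelBufferRef buffer = [self pixelBufferFromImage:image withOrientation:orientation];
        if (![adaptor appendPixelBuffer:buffer withPresentationTime:presentTime])
        {
            NSLog(@"failed to append buffer %i", i);
        }
        if (buffer != NULL)
        {
            CVPixelBufferRelease(buffer);
        }
    }
}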

Convert UIImage to CMSampleBufferRef

I am recording video using AVFoundation. I have to crop the video to 320x280. I am getting the CMSampleBufferRef and converting it to a UIImage using the code below.
CGImageRef _cgImage = [self imageFromSampleBuffer:sampleBuffer];
UIImage *_uiImage = [UIImage imageWithCGImage:_cgImage];
CGImageRelease(_cgImage);
_uiImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)];
CMSampleBufferRef croppedBuffer = /* NEED HELP WITH THIS */
[_videoInput appendSampleBuffer:sampleBuffer];
// _videoInput is an AVAssetWriterInput
The imageFromSampleBuffer: method looks like this:
- (CGImageRef)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information about the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage;
}
Now I have to convert the resized image back to CMSampleBufferRef to write in AVAssetWriterInput.
How do I convert UIImage to CMSampleBufferRef?
Thanks everyone!
While you could create your own Core Media sample buffers from scratch, it's probably easier to use an AVAssetWriterInputPixelBufferAdaptor.
You describe the source pixel buffer format in the inputSettings dictionary and pass that to the adaptor initializer:
NSMutableDictionary* inputSettingsDict = [NSMutableDictionary dictionary];
[inputSettingsDict setObject:[NSNumber numberWithInt:pixelFormat] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
[inputSettingsDict setObject:[NSNumber numberWithUnsignedInteger:(NSUInteger)(image.uncompressedSize/image.rect.size.height)] forKey:(NSString*)kCVPixelBufferBytesPerRowAlignmentKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.width] forKey:(NSString*)kCVPixelBufferWidthKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.height] forKey:(NSString*)kCVPixelBufferHeightKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGImageCompatibilityKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey];
AVAssetWriterInputPixelBufferAdaptor* pixelBufferAdapter = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:assetWriterInput sourcePixelBufferAttributes:inputSettingsDict];
You can then append CVPixelBuffers to your adaptor:
[pixelBufferAdapter appendPixelBuffer:completePixelBuffer withPresentationTime:pixelBufferTime]
The pixel buffer adaptor accepts CVPixelBuffers, so you have to convert your UIImages to pixel buffers, which is described here: https://stackoverflow.com/a/3742212/100848
Pass the CGImage property of your UIImage to newPixelBufferFromCGImage.
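Putting those pieces together, a rough sketch of the append path might look like this; newPixelBufferFromCGImage stands in for the UIImage-to-CVPixelBuffer helper from the linked answer (or the CVPixelBufferRefFromUiImage method further down), and resizedImageWithSize: is the resizing helper from the question.

// Assumes pixelBufferAdapter was created as shown above.
UIImage *resizedImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)];
CVPixelBufferRef pixelBuffer = [self newPixelBufferFromCGImage:resizedImage.CGImage];

CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if ([pixelBufferAdapter.assetWriterInput isReadyForMoreMediaData]) {
    [pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
}
CVPixelBufferRelease(pixelBuffer);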
This is a function that I use in my GPUImage framework to resize an incoming CMSampleBufferRef and place the scaled results within a CVPixelBufferRef that you provide:
void GPUImageCreateResizedSampleBuffer(CVPixelBufferRef cameraFrame, CGSize finalSize, CMSampleBufferRef *sampleBuffer)
{
    // CVPixelBufferCreateWithPlanarBytes for YUV input
    CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));

    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *sourceImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes, CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32, CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);

    GLubyte *imageData = (GLubyte *)calloc(1, (int)finalSize.width * (int)finalSize.height * 4);
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8, (int)finalSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, finalSize.width, finalSize.height), cgImageFromBytes);
    CGImageRelease(cgImageFromBytes);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);
    CGDataProviderRelease(dataProvider);

    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA, imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

    CMTime frameTime = CMTimeMake(1, 30);
    CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, sampleBuffer);
    CFRelease(videoInfo);
    CVPixelBufferRelease(pixel_buffer);
}
Strictly speaking, you don't even need to go all the way to a CMSampleBufferRef: as weichsel points out, you only need the CVPixelBufferRef for encoding the video.
However, if what you really want to do here is crop video and record it, going to and from a UIImage is going to be a very slow way to do this. Instead, may I recommend looking into using something like GPUImage to capture video with a GPUImageVideoCamera input (or GPUImageMovie if cropping a previously recorded movie), feeding that into a GPUImageCropFilter, and taking the result to a GPUImageMovieWriter. That way, the video never touches Core Graphics and hardware acceleration is used as much as possible. It will be a lot faster than what you describe above.
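As a rough illustration of that pipeline (the class and method names come from the public GPUImage API; the session preset, crop region, output size and file URL below are made-up values):

// Camera input -> crop filter -> movie writer, all on the GPU.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

// The crop region is normalized (0.0 - 1.0) relative to the input frame.
GPUImageCropFilter *cropFilter =
    [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.0, 1.0, 0.875)];

NSURL *movieURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"cropped.m4v"]];
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(320.0, 280.0)];

[videoCamera addTarget:cropFilter];
[cropFilter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];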
- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img {
    CGSize size = img.size;
    CGImageRef image = [img CGImage];

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4 * size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
Here img is a UIImage:
CVPixelBufferRef pxbuffer = NULL;
CGImageRef image = [img CGImage];

// Initialize the CVPixelBuffer
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image), CGImageGetHeight(image), kCVPixelFormatType_32ARGB, NULL, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

// Lock the buffer before touching its base address
CVPixelBufferLockBaseAddress(pxbuffer, 0);
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer), CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CVPixelBufferGetBytesPerRow(pxbuffer), CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
Please make sure that the bits per component and the bytes per row are fetched from the CGImageRef and the CVPixelBufferRef respectively:
CGImageGetBitsPerComponent(image)
CVPixelBufferGetBytesPerRow(pxbuffer)
In many places I have seen people use constants; if they are not correct, you get a distorted image.

Crash on CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);

I am making a video of the screen, but it crashes on this line:
CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels);
Note: It will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data
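To make that note concrete, here is a minimal sketch (not from the original answer) of copying a pixel buffer's contents when its rows may be padded, i.e. when bytesPerRow is larger than width * 4 for a 32-bit format:

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t dstBytesPerRow = width * 4; // tightly packed destination, 32-bit pixels assumed

uint8_t *destPixels = (uint8_t *)malloc(dstBytesPerRow * height);
if (srcBytesPerRow == dstBytesPerRow) {
    // No padding: one big copy is safe.
    memcpy(destPixels, src, dstBytesPerRow * height);
} else {
    // Padded rows: copy row by row, skipping the padding at the end of each source row.
    for (size_t row = 0; row < height; row++) {
        memcpy(destPixels + row * dstBytesPerRow, src + row * srcBytesPerRow, dstBytesPerRow);
    }
}

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);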
So I am providing my code that grabs frames from the camera - maybe it will help. After grabbing the data, I put it on a queue for further processing. I had to remove some of the code as it was not relevant to you, so what you see here has pieces you should be able to use.
- (void)captureOutput:(AVCaptureVideoDataOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        //NSLog(@"PE: value=%lld timeScale=%d flags=%x", prStamp.value, prStamp.timescale, prStamp.flags);

        /* Lock the image buffer */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        NSRange captureRange;
        /* Get information about the image */
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);

        // Note Apple sample code cheats big time - the phone is big endian so this reverses the "apparent" order of bytes
        CGContextRef newContext = CGBitmapContextCreate(NULL, width, captureRange.length, 8, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little); // Video in ARGB format
        assert(newContext);

        uint8_t *newPtr = (uint8_t *)CGBitmapContextGetData(newContext);
        size_t offset = captureRange.location * bytesPerRow;
        memcpy(newPtr, baseAddress + offset, captureRange.length * bytesPerRow);

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        CMTime prStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // when it was taken?
        //CMTime deStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer); // now?

        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSValue valueWithBytes:&saveState objCType:@encode(saveImages)], kState,
                              [NSValue valueWithNonretainedObject:(__bridge id)newContext], kImageContext,
                              [NSValue valueWithBytes:&prStamp objCType:@encode(CMTime)], kPresTime,
                              nil];
        dispatch_async(imageQueue, ^
        {
            // could be on any thread now
            OSAtomicDecrement32(&queueDepth);
            if (!isCancelled) {
                saveImages state; [(NSValue *)[dict objectForKey:kState] getValue:&state];
                CGContextRef context; [(NSValue *)[dict objectForKey:kImageContext] getValue:&context];
                CMTime stamp; [(NSValue *)[dict objectForKey:kPresTime] getValue:&stamp];

                CGImageRef newImageRef = CGBitmapContextCreateImage(context);
                CGContextRelease(context);
                UIImageOrientation orient = state == saveOne ? UIImageOrientationLeft : UIImageOrientationUp;
                UIImage *image = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:orient]; // imageWithCGImage: UIImageOrientationUp UIImageOrientationLeft
                CGImageRelease(newImageRef);
                NSData *data = UIImagePNGRepresentation(image);
                // NSLog(@"STATE:[%d]: value=%lld timeScale=%d flags=%x", state, stamp.value, stamp.timescale, stamp.flags);
                {
                    NSString *name = [NSString stringWithFormat:@"%d.png", num];
                    NSString *path = [[wlAppDelegate snippetsDirectory] stringByAppendingPathComponent:name];
                    BOOL ret = [data writeToFile:path atomically:NO];
                    //NSLog(@"WROTE %d err=%d w/time %f path:%@", num, ret, (double)stamp.value/(double)stamp.timescale, path);
                    if (!ret) {
                        ++errors;
                    } else {
                        dispatch_async(dispatch_get_main_queue(), ^
                        {
                            if (num) [delegate progress:(CGFloat)num/(CGFloat)(MORE_THAN_ONE_REV * SNAPS_PER_SEC) file:path];
                        });
                    }
                    ++num;
                }
            } else NSLog(@"CANCELLED");
        });
    }
}

captureOutput:didOutputSampleBuffer:fromConnection Performance Issues

I use AVCaptureSessionPhoto to allow the user to take high-resolution photos. Upon taking a photo, I use the captureOutput:didOutputSampleBuffer:fromConnection: method to retrieve a thumbnail at the time of capture. However, although I try to do minimal work in the delegate method, the app becomes somewhat laggy (I say somewhat because it is still usable). Also, the iPhone tends to run hot.
Is there some way of reducing the amount of work the iPhone has to do?
I set up the AVCaptureVideoDataOutput by doing the following:
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
// Specify the pixel format
dispatch_queue_t queue = dispatch_queue_create("com.myapp.videoDataOutput", NULL);
[self.videoDataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
self.videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
Here's my captureOutput:didOutputSampleBuffer:fromConnection (and assisting imageRefFromSampleBuffer method):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    if (videoDataOutputConnection == nil) {
        videoDataOutputConnection = connection;
    }

    if (getThumbnail > 0) {
        getThumbnail--;
        CGImageRef tempThumbnail = [self imageRefFromSampleBuffer:sampleBuffer];
        UIImage *image;
        if (self.prevLayer.mirrored) {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationLeftMirrored];
        }
        else {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationRight];
        }
        [self.cameraThumbnailArray insertObject:image atIndex:0];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.freezeCameraView.image = image;
        });
        CFRelease(tempThumbnail);
    }

    sampleBuffer = nil;
    [pool release];
}
- (CGImageRef)imageRefFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    return newImage;
}
minFrameDuration is deprecated; this may work:
AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
stillImageConnection.videoMinFrameDuration = CMTimeMake(1, 10);
To improve things, we should set up our AVCaptureVideoDataOutput with:
output.minFrameDuration = CMTimeMake(1, 10);
We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting in the queue, because that can cause memory issues). This is effectively the inverse of the maximum frame rate: in this example we set a minimum frame duration of 1/10 second, i.e. a maximum frame rate of 10 fps, which says we cannot process more than 10 frames per second.
Hope that helps!
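Since both of those properties were eventually deprecated, here is a minimal sketch of the same throttling done through AVCaptureDevice instead, assuming iOS 7 or later, that device holds the active camera, and that 10 fps lies within the active format's supported frame-rate range:

// Limit capture to ~10 fps by raising the minimum frame duration on the device.
NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    device.activeVideoMinFrameDuration = CMTimeMake(1, 10);
    [device unlockForConfiguration];
} else {
    NSLog(@"Could not lock device for configuration: %@", error);
}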

What is the best/fastest way to convert CMSampleBufferRef to OpenCV IplImage?

I am writing an iPhone app that does some sort of real-time image detection with OpenCV. What is the best way to convert a CMSampleBufferRef image from the camera (I'm using AVCaptureVideoDataOutputSampleBufferDelegate of AVFoundation) into an IplImage that OpenCV understands? The conversion needs to be fast enough so it can run in real time.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Convert CMSampleBufferRef into IplImage
    IplImage *openCVImage = ???(sampleBuffer);

    // Do OpenCV computations realtime
    // ...

    [pool release];
}
Thanks in advance.
This sample code is based on Apple's sample for managing the CMSampleBuffer's pointer:
- (IplImage *)createIplImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    IplImage *iplimage = 0;
    if (sampleBuffer) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // get information about the image in the buffer
        uint8_t *bufferBaseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        size_t bufferWidth = CVPixelBufferGetWidth(imageBuffer);
        size_t bufferHeight = CVPixelBufferGetHeight(imageBuffer);

        // create IplImage
        if (bufferBaseAddress) {
            iplimage = cvCreateImage(cvSize(bufferWidth, bufferHeight), IPL_DEPTH_8U, 4);
            iplimage->imageData = (char *)bufferBaseAddress;
        }

        // release memory
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
    else
        DLog(@"No sampleBuffer!!");

    return iplimage;
}
You need to create a 4-channel IplImage because the phone's camera buffer is in BGRA.
In my experience, this conversion is fast enough to be done in a real-time application, but of course anything you add on top of it will cost time, especially with OpenCV.
"iplimage->imageData = (char*)bufferBaseAddress;" will lead to memory leak.
It should be "memcpy(iplimage->imageData, (char*)bufferBaseAddress, iplimage->imageSize);"
so the complete coded is:
- (IplImage *)createIplImageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    IplImage *iplimage = 0;
    if (sampleBuffer) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // get information about the image in the buffer
        uint8_t *bufferBaseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        size_t bufferWidth = CVPixelBufferGetWidth(imageBuffer);
        size_t bufferHeight = CVPixelBufferGetHeight(imageBuffer);

        // create IplImage
        if (bufferBaseAddress) {
            iplimage = cvCreateImage(cvSize(bufferWidth, bufferHeight), IPL_DEPTH_8U, 4);
            //iplimage->imageData = (char*)bufferBaseAddress;
            memcpy(iplimage->imageData, (char *)bufferBaseAddress, iplimage->imageSize);
        }

        // release memory
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
    else
        DLog(@"No sampleBuffer!!");

    return iplimage;
}
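A possible way to use this from the capture delegate (my own sketch, not part of the answer above); since the memcpy version owns its own copy of the pixel data, release the IplImage with cvReleaseImage when you are done:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        IplImage *openCVImage = [self createIplImageFromSampleBuffer:sampleBuffer];
        if (openCVImage) {
            // ... run the OpenCV processing on openCVImage here ...

            // The image owns a copy of the pixel data, so free it afterwards.
            cvReleaseImage(&openCVImage);
        }
    }
}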