Is there any way to improve time between shots with AVCaptureStillImageOutput? - iphone

I currently use the following code to shoot a series of images:
- (void)shootSeries:(int)photos {
    if (photos == 0) {
        [self mergeImages];
    } else {
        [output captureStillImageAsynchronouslyFromConnection:connection completionHandler:
         ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
             NSLog(@"Shot picture %d.", 7 - photos);
             [self shootSeries:(photos - 1)];

             CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
             CVPixelBufferLockBaseAddress(pixelBuffer, 0);
             int dataSize = CVPixelBufferGetDataSize(pixelBuffer);
             CFDataRef data = CFDataCreate(NULL, (const UInt8 *)CVPixelBufferGetBaseAddress(pixelBuffer), dataSize);
             CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

             CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(data);
             CFRelease(data);
             CGImageRef image = CGImageCreate(CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              8, 32,
                                              CVPixelBufferGetBytesPerRow(pixelBuffer),
                                              colorspace,
                                              kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                              dataProvider, NULL, true, kCGRenderingIntentDefault);
             CFRelease(dataProvider);

             CFArrayAppendValue(shotPictures, image);
             CFRelease(image);
         }];
    }
}
While this works rather well, it is very slow. How come apps like ClearCam can shoot pictures in series much faster than this, and how can I do it too?

After capturing the image, store the sample buffer in a CFArray, and once you're done taking all your photos, THEN convert them to images (or in your case CGImageRefs).
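A minimal sketch of that idea, in the spirit of the code above (bufferedShots is an assumed CFMutableArrayRef created with kCFTypeArrayCallBacks so it retains the buffers, and cgImageFromSampleBuffer: stands in for the pixel-buffer-to-CGImage code already shown in the question):

// Inside the completionHandler: retain the sample buffer and move on immediately.
CFArrayAppendValue(bufferedShots, imageDataSampleBuffer); // kCFTypeArrayCallBacks retains it
[self shootSeries:(photos - 1)];

// Once photos == 0 (instead of calling mergeImages directly), convert everything in one pass:
- (void)convertBufferedShots {
    CFIndex count = CFArrayGetCount(bufferedShots);
    for (CFIndex i = 0; i < count; i++) {
        CMSampleBufferRef buffer = (CMSampleBufferRef)CFArrayGetValueAtIndex(bufferedShots, i);
        CGImageRef image = [self cgImageFromSampleBuffer:buffer]; // same conversion as in the question
        CFArrayAppendValue(shotPictures, image);
        CGImageRelease(image);
    }
    CFArrayRemoveAllValues(bufferedShots); // releases the retained sample buffers
    [self mergeImages];
}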


Generate movie with UIImages using AVFoundation

Many before me have shared their knowledge on Stack Overflow about this topic, and I was able to adopt many of their tips and code snippets. It all worked quite well, except that it was often hard on working memory. The time-lapse application I am working on was able to generate a movie out of 2000 HD images and more, but since iOS 7.1 it has trouble generating a video out of more than 240 HD images. 240 images seems to be the limit on an iPhone 5s. I was wondering whether anybody has had these problems too and whether anybody has found a solution. Now to the source code.
This part iterates through the saved UIImages in the app's Documents directory.
if ([adaptor.assetWriterInput isReadyForMoreMediaData])
{
    CMTime frameTime = CMTimeMake(1, fps);
    CMTime lastTime = CMTimeMake(i, fps);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);

    NSString *imageFilePath = [NSString stringWithFormat:@"%@/%@", folderPathName, imageFileName];
    image = [UIImage imageWithContentsOfFile:imageFilePath];
    cgimage = [image CGImage];
    buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:cgimage];

    bool result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
    if (result == NO)
    {
        NSLog(@"failed to append buffer %i", i);
        _videoStatus = 0;
        success = NO;
        return success;
    }

    // buffer has to be released here or memory pressure will occur
    if (buffer != NULL)
    {
        CVBufferRelease(buffer);
        buffer = NULL;
    }
}
This is the local method that appears to cause most of the trouble. It gets a pixel buffer reference from a CGImage.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                                          CGImageGetHeight(image), kCVPixelFormatType_32ARGB,
                                          (CFDictionaryRef)CFBridgingRetain(options),
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4 * CGImageGetWidth(image),
                                                 rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(M_PI));
    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
I have been spending a lot of time on this and not moving forward. Help is much appreciated. If any more details are necessary, I am glad to provide them.
I use very similar code, although for slightly different reasons (I'm using AVAssetReader, grabbing frames as images and manipulating them). The net result, however, should be similar: I'm iterating through thousands of images without issue.
The two things I notice that I'm doing differently:

1. When you release the buffer, you're using CVBufferRelease; I'm using CVPixelBufferRelease.
2. You are not releasing the CGImage using CGImageRelease.
Try rewriting this:
//buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVBufferRelease(buffer);
    buffer = NULL;
}
as:
//buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVPixelBufferRelease(buffer);
    buffer = NULL;
}
CGImageRelease(cgimage);
Let me know how that goes.
EDIT: Here is a sample of my code, getting and releasing a CGImageRef. The Image is created from a CIImage extracted from the reader buffer and filtered.
CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
// 2. Grab the size
CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));
// 3. Convert the CGImage to a PixelBuffer
CVPixelBufferRef pxBuffer = NULL;
pxBuffer = [self pixelBufferFromCGImage: finalImage andSize: size];
// 4. Write things back out.
// Calculate the frame time
CMTime frameTime = CMTimeMake(1, 30);
CMTime presentTime=CMTimeAdd(currentTime, frameTime);
[_ugcAdaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
CGImageRelease(finalImage);
CVPixelBufferRelease(pxBuffer);
Finally I found the solution to my problem; there were two points I had to change in my code.
1. I changed the parameter type of the method from CGImageRef to UIImage, so it is now (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image. The reason for this is mainly to simplify the code, so the following correction is easier to implement.
2. An @autoreleasepool is introduced to this method. This is the actual key to the solution: CGImageRef cgimage = [image CGImage]; and all other components of the method must be enclosed in the @autoreleasepool.
The code now looks like this:
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image withOrientation:(ImageOrientation)orientation
{
    @autoreleasepool
    {
        CGImageRef cgimage = [image CGImage];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;
        float width = CGImageGetWidth(cgimage);
        float height = CGImageGetHeight(cgimage);
        CVPixelBufferCreate(kCFAllocatorDefault, width,
                            height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)(options),
                            &pxbuffer);
        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                     height, 8, 4 * width, rgbColorSpace,
                                                     (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(-M_PI/2));
        CGContextDrawImage(context, CGRectMake(-height, 0, height, width), cgimage);

        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer;
    }
}
With this solution, an HD movie of more than 2000 images is generated at a rather slow speed, but it seems to be very reliable, which is most important.
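If memory still climbs on very long runs, a further precaution is to give each iteration of the appending loop its own pool as well. This is only a sketch of the same idea applied one level up; imageFileNames and orientation are assumed placeholders for however you enumerate the files and track rotation:

for (NSUInteger i = 0; i < imageFileNames.count; i++)
{
    @autoreleasepool
    {
        NSString *imageFilePath = [folderPathName stringByAppendingPathComponent:imageFileNames[i]];
        UIImage *image = [UIImage imageWithContentsOfFile:imageFilePath];
        CVPixelBufferRef buffer = [self pixelBufferFromImage:image withOrientation:orientation];
        // ... wait for adaptor.assetWriterInput.isReadyForMoreMediaData, compute presentTime, append ...
        CVPixelBufferRelease(buffer);
    }
}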

Convert UIImage to CMSampleBufferRef

I am doing video recording using AVFoundation. I have to crop the video to 320x280. I am getting the CMSampleBufferRef and converting it to a UIImage using the code below.
CGImageRef _cgImage = [self imageFromSampleBuffer:sampleBuffer];
UIImage *_uiImage = [UIImage imageWithCGImage:_cgImage];
CGImageRelease(_cgImage);
_uiImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)];
CMSampleBufferRef croppedBuffer = /* NEED HELP WITH THIS */
[_videoInput appendSampleBuffer:sampleBuffer];
// _videoInput is a AVAssetWriterInput
The imageFromSampleBuffer: method looks like this:
- (CGImageRef) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0); // Lock the image buffer

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* CVBufferRelease(imageBuffer); */ // do not call this!

    return newImage;
}
Now I have to convert the resized image back to CMSampleBufferRef to write in AVAssetWriterInput.
How do I convert UIImage to CMSampleBufferRef?
Thanks everyone!
While you could create your own Core Media sample buffers from scratch, it's probably easier to use an AVAssetWriterInputPixelBufferAdaptor.
You describe the source pixel buffer format in the inputSettings dictionary and pass that to the adaptor initializer:
NSMutableDictionary* inputSettingsDict = [NSMutableDictionary dictionary];
[inputSettingsDict setObject:[NSNumber numberWithInt:pixelFormat] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
[inputSettingsDict setObject:[NSNumber numberWithUnsignedInteger:(NSUInteger)(image.uncompressedSize/image.rect.size.height)] forKey:(NSString*)kCVPixelBufferBytesPerRowAlignmentKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.width] forKey:(NSString*)kCVPixelBufferWidthKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.height] forKey:(NSString*)kCVPixelBufferHeightKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGImageCompatibilityKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey];
AVAssetWriterInputPixelBufferAdaptor* pixelBufferAdapter = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:assetWriterInput sourcePixelBufferAttributes:inputSettingsDict];
You can then append CVPixelBuffers to your adaptor:
[pixelBufferAdapter appendPixelBuffer:completePixelBuffer withPresentationTime:pixelBufferTime]
The pixel buffer adaptor accepts CVPixelBuffers, so you have to convert your UIImages to pixel buffers, which is described here: https://stackoverflow.com/a/3742212/100848
Pass the CGImage property of your UIImage to newPixelBufferFromCGImage.
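If you would rather not allocate a fresh CVPixelBuffer for every frame, the adaptor also exposes a pixelBufferPool you can draw from once the asset writer session has started. A minimal sketch (renderCGImage:intoPixelBuffer: stands in for whatever CGBitmapContext drawing code you already use, and image/pixelBufferTime come from your own loop):

CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     pixelBufferAdapter.pixelBufferPool,
                                                     &pixelBuffer);
if (status == kCVReturnSuccess && pixelBuffer != NULL) {
    // Draw the UIImage's CGImage into the pooled buffer, then hand it to the adaptor.
    [self renderCGImage:[image CGImage] intoPixelBuffer:pixelBuffer];
    [pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:pixelBufferTime];
    CVPixelBufferRelease(pixelBuffer);
}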
This is a function that I use in my GPUImage framework to resize an incoming CMSampleBufferRef and place the scaled results within a CVPixelBufferRef that you provide:
void GPUImageCreateResizedSampleBuffer(CVPixelBufferRef cameraFrame, CGSize finalSize, CMSampleBufferRef *sampleBuffer)
{
    // CVPixelBufferCreateWithPlanarBytes for YUV input
    CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));

    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *sourceImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes, CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32, CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);

    GLubyte *imageData = (GLubyte *) calloc(1, (int)finalSize.width * (int)finalSize.height * 4);
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8, (int)finalSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, finalSize.width, finalSize.height), cgImageFromBytes);

    CGImageRelease(cgImageFromBytes);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);
    CGDataProviderRelease(dataProvider);

    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA, imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

    CMTime frameTime = CMTimeMake(1, 30);
    CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, sampleBuffer);
    CFRelease(videoInfo);
    CVPixelBufferRelease(pixel_buffer);
}
It doesn't take you all the way to creating a CMSampleBufferRef, but as weichsel points out, you only need the CVPixelBufferRef for encoding the video.
However, if what you really want to do here is crop video and record it, going to and from a UIImage is going to be a very slow way to do this. Instead, may I recommend looking into using something like GPUImage to capture video with a GPUImageVideoCamera input (or GPUImageMovie if cropping a previously recorded movie), feeding that into a GPUImageCropFilter, and taking the result to a GPUImageMovieWriter. That way, the video never touches Core Graphics and hardware acceleration is used as much as possible. It will be a lot faster than what you describe above.
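A rough sketch of that pipeline, using GPUImage class names as they existed at the time (the crop rectangle is illustrative; GPUImageCropFilter takes normalized 0..1 coordinates, so pick the region that maps to your 320x280 output):

// Camera -> crop -> movie writer, staying on the GPU the whole way.
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                                                       cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageCropFilter *cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.0, 1.0, 280.0 / 480.0)];

NSURL *movieURL = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"cropped.m4v"]];
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(320.0, 280.0)];

[videoCamera addTarget:cropFilter];
[cropFilter addTarget:movieWriter];

videoCamera.audioEncodingTarget = movieWriter;
[videoCamera startCameraCapture];
[movieWriter startRecording];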
- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img {
    CGSize size = img.size;
    CGImageRef image = [img CGImage];

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4 * size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
Here img is a UIImage:
CVPixelBufferRef pxbuffer = NULL;
CGImageRef image = [img CGImage];

// Initialize the CVPixelBuffer
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image), CGImageGetHeight(image), kCVPixelFormatType_32ARGB, NULL, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

// The base address is only valid while the buffer is locked
CVPixelBufferLockBaseAddress(pxbuffer, 0);
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer), CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CVPixelBufferGetBytesPerRow(pxbuffer), CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
// ... then draw the image into the context, release it, and unlock the buffer as in the examples above
Please make sure that Component and BytesPerRow are fetched from CGImageRef and CVPixelBufferRef respectively.
CGImageGetBitsPerComponent(image)
CVPixelBufferGetBytesPerRow(pxbuffer)
In many places I saw people using constants; if they are not correct, you get a distorted image.

How to save the image on the photo album after perform the 3D Transform?

How can I save the 3D-transformed image to the photo album? I am using CATransform3DRotate to change the transform. I am not able to save the image. Image saving code:
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil);
Is it possible to save the 3D-transformed image? Please help me. Thanks in advance.
-(UIImage *) glToUIImage {
    NSInteger myDataLength = 320 * 480 * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < 480; y++)
    {
        for(int x = 0; x < 320 * 4; x++)
        {
            buffer2[(479 - y) * 320 * 4 + x] = buffer[y * 4 * 320 + x];
        }
    }
    free(buffer); // the unflipped copy is no longer needed

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 320;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(320, 480, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];

    // the UIImage retains what it needs, so the creator's references can be released
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return myImage;
}
-(void)captureToPhotoAlbum {
    UIImage *image = [self glToUIImage];
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
}
Also see the full tutorial for saving an OpenGL image to the photo album from this link, and also see my blog post on this: captureimagescreenshot-of-view.
2. You can also use ALAssetsLibrary to save the image:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageToSavedPhotosAlbum:[image CGImage] orientation:(ALAssetOrientation)[image imageOrientation] completionBlock:^(NSURL *assetURL, NSError *error){
    if (error) {
        // TODO: error handling
    } else {
        // TODO: success handling
    }
}];
[library release];
UIImageWriteToSavedPhotosAlbum(UIImage *image, id completionTarget, SEL completionSelector, void *contextInfo);
You only need completionTarget, completionSelector and contextInfo if you want to be notified when the image is done saving, otherwise you can pass in nil.
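If you do want the callback, the completion selector has to have the documented form; for example:

UIImageWriteToSavedPhotosAlbum(newImage, self, @selector(image:didFinishSavingWithError:contextInfo:), NULL);

- (void)image:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
{
    if (error) {
        NSLog(@"Saving failed: %@", error);
    } else {
        NSLog(@"Saved to the photo album.");
    }
}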
OK, then try it like this:
UIGraphicsBeginImageContext(YOUR_VIEW.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
It will capture your view like a screenshot and save it to the photo album.
YOUR_VIEW = pass your edited image's superview.

captureOutput:didOutputSampleBuffer:fromConnection Performance Issues

I use AVCaptureSessionPhoto to allow the user to take high-resolution photos. Upon taking a photo, I use the captureOutput:didOutputSampleBuffer:fromConnection: method to retrieve a thumbnail at the time of capture. However, although I try to do minimal work in the delegate method, the app becomes sort of laggy (I say sort of because it is still useable). Also, the iPhone tends to run hot.
Is there some way of reducing the amount of work the iPhone has to do?
I set up the AVCaptureVideoDataOutput by doing the following:
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
// Specify the pixel format
dispatch_queue_t queue = dispatch_queue_create("com.myapp.videoDataOutput", NULL);
[self.videoDataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
self.videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
Here's my captureOutput:didOutputSampleBuffer:fromConnection (and assisting imageRefFromSampleBuffer method):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    if (videoDataOutputConnection == nil) {
        videoDataOutputConnection = connection;
    }

    if (getThumbnail > 0) {
        getThumbnail--;
        CGImageRef tempThumbnail = [self imageRefFromSampleBuffer:sampleBuffer];
        UIImage *image;
        if (self.prevLayer.mirrored) {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationLeftMirrored];
        }
        else {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationRight];
        }
        [self.cameraThumbnailArray insertObject:image atIndex:0];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.freezeCameraView.image = image;
        });
        CFRelease(tempThumbnail);
    }
    sampleBuffer = nil;
    [pool release];
}
-(CGImageRef)imageRefFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return newImage;
}
minFrameDuration is deprecated; this may work:
AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
stillImageConnection.videoMinFrameDuration = CMTimeMake(1, 10);
To improve this, we should set up our AVCaptureVideoDataOutput like this:
output.minFrameDuration = CMTimeMake(1, 10);
We specify a minimum duration for each frame (play with these settings to avoid having too many frames waiting in the queue, because that can cause memory issues). It is effectively the inverse of the maximum frame rate. In this example we set a minimum frame duration of 1/10 second, so a maximum frame rate of 10 fps. We are saying that we cannot process more than 10 frames per second.
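On iOS 7 and later the connection-level property is deprecated too; the supported replacement is the per-device API (a minimal sketch, assuming device is your AVCaptureDevice and its active format supports the rate):

NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // Cap capture at 10 fps: each frame lasts at least 1/10 second.
    device.activeVideoMinFrameDuration = CMTimeMake(1, 10);
    [device unlockForConfiguration];
} else {
    NSLog(@"Could not lock the device for configuration: %@", error);
}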
Hope that helps!

getting UIImage out of captureStillImageAsynchronouslyFromConnection

I am writing an iPhone app that uses AVFoundation for the camera stuff and I'm trying to save the UIImage from the camera into the Camera Roll.
It currently does it this way...
[imageCaptureOutput captureStillImageAsynchronouslyFromConnection:[imageCaptureOutput.connections objectAtIndex:0]
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error)
{
if (imageDataSampleBuffer != NULL)
{
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
MyCameraAppDelegate *delegate = [[UIApplication sharedApplication] delegate];
[delegate processImage:image];
}
}];
I have been watching the WWDC tutorial videos and I think that the two lines (NSData... and UIImage...) are a long way round to get from imageDataSampleBuffer to a UIImage.
It seems to take far too long to save the images to the library.
Does anyone know if there is a single line transition to get the UIImage out of this?
Thanks for any help!
Oliver
I think doing this in the completion handler block might be more efficient, but you're right, it's saving to the library that takes the biggest chunk of time.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

if ( /* wanna save metadata on iOS 4.1 */ ) {
    CFDictionaryRef metadataDict = CMCopyDictionaryOfAttachments(NULL, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
    [assetsLibraryInstance writeImageToSavedPhotosAlbum:cgImage metadata:metadataDict completionBlock:^(NSURL *assetURL, NSError *error) { /* do something */ }];
    CFRelease(metadataDict);
} else {
    [assetsLibraryInstance writeImageToSavedPhotosAlbum:cgImage orientation:ALAssetOrientationRight completionBlock:^(NSURL *assetURL, NSError *error) { /* do something */ }];
    // i think this is the correct orientation for Portrait, or Up if deviceOr'n is L'Left, Down if L'Right
}
CGImageRelease(cgImage);