iPhone - application stops after saving 50 frames to a movie

I have several UIImages and I want to create a video from them.
I used a solution based on this
to create a video from UIImages. In my case, I would like to create a 30 fps video, so every image lasts 1/30 of a second.
After setting everything up to start saving the video, as described on that page, I created a method that saves one image to the movie, and this method is called from a loop. Something like:
for (int i = 0; i < [self.arrayOfFrames count]; i++) {
    UIImage *oneImage = [self.arrayOfFrames objectAtIndex:i];
    [self saveOneFrame:oneImage atTime:i];
}
and the method is
- (void)saveOneFrame:(UIImage *)image atTime:(NSInteger)time {
    // I have tried this autorelease pool to drain memory after the method is finished
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVPixelBufferRef buffer = NULL;
    buffer = [self pixelBufferFromCGImage:image.CGImage size:image.size];

    BOOL append_ok = NO;
    int j = 0;
    while (!append_ok && j < 30)
    {
        if (adaptor.assetWriterInput.readyForMoreMediaData)
        {
            printf("appending %d attempt %d\n", (int)time, j);

            CMTime oneFrameLength = CMTimeMake(1, 30); // one frame = 1/30 s
            CMTime presentTime;
            if (time == 0) {
                presentTime = CMTimeMake(0, self.FPS);
            } else {
                // this will always add 1/30 s to the previous frame's timestamp
                CMTime lastTime = CMTimeMake(time - 1, self.FPS);
                presentTime = CMTimeAdd(lastTime, oneFrameLength);
            }

            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];

            CVPixelBufferPoolRef bufferPool = adaptor.pixelBufferPool;
            NSParameterAssert(bufferPool != NULL);

            [NSThread sleepForTimeInterval:0.05];
        }
        else
        {
            printf("adaptor not ready %d, %d\n", (int)time, j);
            [NSThread sleepForTimeInterval:0.1];
        }
        j++;
    }
    if (!append_ok) {
        printf("error appending image %d times %d\n", (int)time, j);
    }

    CVBufferRelease(buffer);
    [pool drain]; // I have tried with and without this autorelease pool in place... no difference
}
The application simply quits, without any warning, after saving 50 frames to the movie...
This is the other method:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                          size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options,
                                          &pxbuffer);
    status = status; // Added to silence the unused-variable compiler warning.
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4 * size.width, rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    //CGContextTranslateCTM(context, 0, CGImageGetHeight(image));
    //CGContextScaleCTM(context, 1.0, -1.0); // Flip vertically to account for different origin
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
I ran Instruments and did not detect any leaks or excessive memory usage; memory stays at about the same level as before the movie starts being saved.
Any clues?
NOTE:
After looking at the device logs, I found this:
<Notice>: (UIKitApplication:com.myID.myApp[0xc304]) Bug: launchd_core_logic.c:3732 (25562):3
<Notice>: (UIKitApplication:com.myID.myApp[0xc304]) Assuming job exited: <rdar://problem/5020256>: 10: No child processes
<Warning>: (UIKitApplication:com.myID.myApp[0xc304]) Job appears to have crashed: Segmentation fault: 11
<Warning>: Application 'myApp' exited abnormally with signal 11: Segmentation fault: 11

Maybe you have tried this already, but it might help:
In the end, the solution was to restart the iPhone, since some data
got corrupted. After the reboot everything was working normally.
Should have thought of the classic "Have you tried turning it off and
on again?"

Look at it this way: you have an array of images, which already eats a lot of memory. You're then making a copy of each one of those images and saving the copy to the finished movie, so your memory requirements are essentially double what you started with. How about releasing each frame after you've added it to the movie? That way you may end up with about the same memory usage you started with (or probably somewhat larger, but still smaller than what you had).
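For example, if self.arrayOfFrames is an NSMutableArray, the loop from the question could drop each frame as soon as it has been written (a rough sketch, not tested against the original project):

NSInteger frameIndex = 0;
while ([self.arrayOfFrames count] > 0) {
    UIImage *oneImage = [self.arrayOfFrames objectAtIndex:0];
    [self saveOneFrame:oneImage atTime:frameIndex];
    // Remove the frame we just wrote so its memory can be reclaimed.
    [self.arrayOfFrames removeObjectAtIndex:0];
    frameIndex++;
}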

Generate movie with UIImages using AVFoundation

Many people before me have shared their knowledge on Stack Overflow about this topic, and I was able to reuse many of the tips and code snippets thanks to their contributions. It all worked quite well, except that it was often hard on working memory. The time-lapse application I am working on was able to generate a movie out of 2000 HD images and more, but since iOS 7.1 it has trouble generating a video out of more than 240 HD images. 240 images seems to be the limit on an iPhone 5s. I was wondering whether anybody has had these problems too and whether anybody has found a solution. Now to the source code.
This part iterates through the UIImages saved in the app's Documents directory:
if ([adaptor.assetWriterInput isReadyForMoreMediaData])
{
    CMTime frameTime = CMTimeMake(1, fps);
    CMTime lastTime = CMTimeMake(i, fps);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);

    NSString *imageFilePath = [NSString stringWithFormat:@"%@/%@", folderPathName, imageFileName];
    image = [UIImage imageWithContentsOfFile:imageFilePath];
    cgimage = [image CGImage];
    buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:cgimage];

    bool result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
    if (result == NO)
    {
        NSLog(@"failed to append buffer %i", i);
        _videoStatus = 0;
        success = NO;
        return success;
    }

    // buffer has to be released here or memory pressure will occur
    if (buffer != NULL)
    {
        CVBufferRelease(buffer);
        buffer = NULL;
    }
}
This is the local method that appears to cause the most trouble. It gets the pixel buffer reference from the CGImage:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                                          CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (CFDictionaryRef)CFBridgingRetain(options),
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4 * CGImageGetWidth(image), rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(M_PI));
    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
I have been spending a lot of time on this and not moving forward. Help is much appreciated. If any more details are necessary, I am glad to provide them.
I use very similar code, although for slightly different reasons (I'm using AVAssetReader, grabbing frames as images and manipulating them). The net result, however, should be similar: I'm iterating through thousands of images without issue.
The two things I notice that I'm doing differently:
1. When you release the buffer, you're using CVBufferRelease; I'm using CVPixelBufferRelease.
2. You are not releasing the CGImage using CGImageRelease.
Try rewriting this:
// buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVBufferRelease(buffer);
    buffer = NULL;
}
as:
// buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVPixelBufferRelease(buffer);
    buffer = NULL;
}
CGImageRelease(cgimage);
Let me know how that goes.
EDIT: Here is a sample of my code, getting and releasing a CGImageRef. The image is created from a CIImage extracted from the reader buffer and filtered.
CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
// 2. Grab the size
CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));
// 3. Convert the CGImage to a PixelBuffer
CVPixelBufferRef pxBuffer = NULL;
pxBuffer = [self pixelBufferFromCGImage: finalImage andSize: size];
// 4. Write things back out.
// Calculate the frame time
CMTime frameTime = CMTimeMake(1, 30);
CMTime presentTime=CMTimeAdd(currentTime, frameTime);
[_ugcAdaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
CGImageRelease(finalImage);
CVPixelBufferRelease(pxBuffer);
Finally, I found the solution to my problem. There were two points I had to change in my code:
1. I changed the parameter type of the method (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image from CGImageRef to UIImage. The reason for this is mainly to simplify the code so that the following correction is easier to implement.
2. An @autoreleasepool is introduced into this method. This is the actual key to the solution: CGImageRef cgimage = [image CGImage]; and all other parts of the method must be inside the autorelease pool.
The code looks like this.
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image withOrientation:(ImageOrientation)orientation
{
    @autoreleasepool
    {
        CGImageRef cgimage = [image CGImage];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;
        float width = CGImageGetWidth(cgimage);
        float height = CGImageGetHeight(cgimage);
        CVPixelBufferCreate(kCFAllocatorDefault, width,
                            height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)(options),
                            &pxbuffer);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                     height, 8, 4 * width, rgbColorSpace,
                                                     (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(-M_PI / 2));
        CGContextDrawImage(context, CGRectMake(-height, 0, height, width), cgimage);

        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer;
    }
}
With this solution, an HD movie of more than 2000 images is generated at a rather slow speed, but it seems to be very reliable, which is what matters most.
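For reference, the calling loop then appends and releases one buffer per image. A minimal sketch, assuming the adaptor, fps, and orientation values from the code above (images is a placeholder for however the frames are enumerated, and the readyForMoreMediaData check from earlier is omitted for brevity):

int i = 0;
for (UIImage *image in images) {
    CVPixelBufferRef buffer = [self pixelBufferFromImage:image withOrientation:orientation];
    CMTime presentTime = CMTimeMake(i, fps);
    if (![adaptor appendPixelBuffer:buffer withPresentationTime:presentTime]) {
        NSLog(@"failed to append buffer %i", i);
    }
    // Release the buffer returned by pixelBufferFromImage: as soon as it has been appended.
    CVPixelBufferRelease(buffer);
    i++;
}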

App crashes when getting fullResolutionImage

I am using ALAssetRepresentation in my app, and when I loop over the images there are a couple of images that crash the app:
for (ALAsset *asset in _assets) {
    NSMutableDictionary *workingDictionary = [[NSMutableDictionary alloc] init];
    [workingDictionary setObject:[asset valueForProperty:ALAssetPropertyType] forKey:@"UIImagePickerControllerMediaType"];

    ALAssetRepresentation *representation = [asset defaultRepresentation];
    if (!representation) {
        [workingDictionary release];
        continue;
    }

    CGImageRef imageRef = [representation fullResolutionImage]; // here the app crashes
    UIImage *img = [UIImage imageWithCGImage:imageRef];
    if (!img) {
        [workingDictionary release];
        continue;
    }

    [workingDictionary setObject:img forKey:@"UIImagePickerControllerOriginalImage"];
    [workingDictionary setObject:[asset valueForProperty:ALAssetPropertyOrientation] forKey:@"orientation"];

    [returnArray addObject:workingDictionary];
    [workingDictionary release];
}
On this line I get a crash without any message:
CGImageRef imageRef = [representation fullResolutionImage];
This is the crash message:
Program received signal: “0”.
Data Formatters temporarily unavailable, will re-try after a 'continue'. (Unknown error loading shared library "/Developer/usr/lib/libXcodeDebuggerSupport.dylib")
That is most likely due to running out of memory. How big are the images that cause the crash?
I had a similar problem, and after hours of looking for a solution I found this, the best solution to the too-big-asset bug:
// For details, see http://mindsea.com/2012/12/18/downscaling-huge-alassets-without-fear-of-sigkill

#import <AssetsLibrary/AssetsLibrary.h>
#import <ImageIO/ImageIO.h>

// Helper methods for thumbnailForAsset:maxPixelSize:
static size_t getAssetBytesCallback(void *info, void *buffer, off_t position, size_t count) {
    ALAssetRepresentation *rep = (__bridge id)info;
    NSError *error = nil;
    size_t countRead = [rep getBytes:(uint8_t *)buffer fromOffset:position length:count error:&error];
    if (countRead == 0 && error) {
        // We have no way of passing this info back to the caller, so we log it, at least.
        NSLog(@"thumbnailForAsset:maxPixelSize: got an error reading an asset: %@", error);
    }
    return countRead;
}

static void releaseAssetCallback(void *info) {
    // The info here is an ALAssetRepresentation which we CFRetain in thumbnailForAsset:maxPixelSize:.
    // This release balances that retain.
    CFRelease(info);
}

// Returns a UIImage for the given asset, with size length at most the passed size.
// The resulting UIImage will be already rotated to UIImageOrientationUp, so its CGImageRef
// can be used directly without additional rotation handling.
// This is done synchronously, so you should call this method on a background queue/thread.
- (UIImage *)thumbnailForAsset:(ALAsset *)asset maxPixelSize:(NSUInteger)size {
    NSParameterAssert(asset != nil);
    NSParameterAssert(size > 0);

    ALAssetRepresentation *rep = [asset defaultRepresentation];

    CGDataProviderDirectCallbacks callbacks = {
        .version = 0,
        .getBytePointer = NULL,
        .releaseBytePointer = NULL,
        .getBytesAtPosition = getAssetBytesCallback,
        .releaseInfo = releaseAssetCallback,
    };

    CGDataProviderRef provider = CGDataProviderCreateDirect((void *)CFBridgingRetain(rep), [rep size], &callbacks);
    CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);

    CGImageRef imageRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef) @{
        (NSString *)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (NSString *)kCGImageSourceThumbnailMaxPixelSize : [NSNumber numberWithInt:size],
        (NSString *)kCGImageSourceCreateThumbnailWithTransform : @YES,
    });

    CFRelease(source);
    CFRelease(provider);

    if (!imageRef) {
        return nil;
    }

    UIImage *toReturn = [UIImage imageWithCGImage:imageRef];
    CFRelease(imageRef);

    return toReturn;
}
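A minimal sketch of how this helper might replace the crashing fullResolutionImage call in the original loop (the 1024 maximum pixel size is just an example value):

for (ALAsset *asset in _assets) {
    @autoreleasepool {
        UIImage *img = [self thumbnailForAsset:asset maxPixelSize:1024];
        if (!img) {
            continue;
        }
        // ... build workingDictionary from img and add it to returnArray as before ...
    }
}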

iOS - Automatically resize CVPixelBufferRef

I am trying to crop and scale a CMSampleBufferRef based on the user's input ratio. The code below takes a CMSampleBufferRef, converts it into a CVImageBufferRef, and crops the internal image by working directly on its bytes. The goal of this process is to get a cropped and scaled CVPixelBufferRef to write to the video:
- (CVPixelBufferRef)modifyImage:(CMSampleBufferRef)sampleBuffer {
    @synchronized (self) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Lock the image buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        // Get information about the image
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        CVPixelBufferRef pxbuffer;
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 [NSNumber numberWithInt:720], kCVPixelBufferWidthKey,
                                 [NSNumber numberWithInt:1280], kCVPixelBufferHeightKey,
                                 nil];

        NSInteger tempWidth = (NSInteger)(width / ratio);
        NSInteger tempHeight = (NSInteger)(height / ratio);
        NSInteger baseAddressStart = 100 + 100 * bytesPerRow;

        CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, tempWidth, tempHeight, kCVPixelFormatType_32BGRA, &baseAddress[baseAddressStart], bytesPerRow, MyPixelBufferReleaseCallback, NULL, (CFDictionaryRef)options, &pxbuffer);
        if (status != 0) {
            CKLog(@"%d", status);
            return NULL;
        }

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return pxbuffer;
    }
}
It all works fine, except that when I try to write it to the video output using this method, I keep getting memory warnings. It is fine if I keep the same ratio:
- (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer pixelBuffer:(CVPixelBufferRef)pixelBuffer {
    CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    if (self.videoWriter.status != AVAssetWriterStatusWriting)
    {
        CKLog(@"%d", self.videoWriter.status);
        [self.videoWriter startWriting];
        [self.videoWriter startSessionAtSourceTime:lastSampleTime];
    }

    CVPixelBufferRef pxbuffer = [self modifyImage:sampleBuffer];
    BOOL success = [self.avAdaptor appendPixelBuffer:pxbuffer withPresentationTime:lastSampleTime];
    if (!success)
        NSLog(@"Warning: Unable to write buffer to video");
}
I have also tried different approaches using CMSampleBufferRef and CGContext. If you can provide a solution for any approach here, I will award the full bounty.
Try using the kCVPixelBufferLock_ReadOnly flag in both the CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress calls.
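In modifyImage: that would look roughly like this (a sketch of the suggestion above):

CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
// ... read baseAddress, bytesPerRow, width and height here ...
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);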
Sometimes this issue can also be solved by copying the pixel buffer. Perform the allocation once:

unsigned char *data = (unsigned char *)malloc(height * bytesPerRow);

After that, copy the bytes from the pixel buffer into data:

size_t size = height * bytesPerRow;
memcpy(data, baseAddress, size);

After that, work with data instead. Hope that helps.
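Put together, the copy-based variant might look something like this. This is a sketch only: it assumes a 32BGRA buffer, the names are illustrative, and the release callback frees the copied bytes when the new buffer is destroyed.

static void releaseCopiedBytes(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);
}

- (CVPixelBufferRef)copiedPixelBufferFromImageBuffer:(CVImageBufferRef)imageBuffer {
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Copy the pixel data so the new buffer no longer points into the capture buffer.
    unsigned char *data = (unsigned char *)malloc(height * bytesPerRow);
    memcpy(data, CVPixelBufferGetBaseAddress(imageBuffer), height * bytesPerRow);
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    CVPixelBufferRef copy = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height,
                                 kCVPixelFormatType_32BGRA, data, bytesPerRow,
                                 releaseCopiedBytes, NULL, NULL, &copy);
    return copy;
}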

captureOutput:didOutputSampleBuffer:fromConnection Performance Issues

I use AVCaptureSessionPhoto to allow the user to take high-resolution photos. Upon taking a photo, I use the captureOutput:didOutputSampleBuffer:fromConnection: method to retrieve a thumbnail at the time of capture. However, although I try to do minimal work in the delegate method, the app becomes somewhat laggy (I say somewhat because it is still usable). Also, the iPhone tends to run hot.
Is there some way of reducing the amount of work the iPhone has to do?
I set up the AVCaptureVideoDataOutput by doing the following:
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES;

dispatch_queue_t queue = dispatch_queue_create("com.myapp.videoDataOutput", NULL);
[self.videoDataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

// Specify the pixel format
self.videoDataOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                                  forKey:(id)kCVPixelBufferPixelFormatTypeKey];
Here's my captureOutput:didOutputSampleBuffer:fromConnection (and assisting imageRefFromSampleBuffer method):
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    if (videoDataOutputConnection == nil) {
        videoDataOutputConnection = connection;
    }

    if (getThumbnail > 0) {
        getThumbnail--;
        CGImageRef tempThumbnail = [self imageRefFromSampleBuffer:sampleBuffer];
        UIImage *image;
        if (self.prevLayer.mirrored) {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationLeftMirrored];
        }
        else {
            image = [[UIImage alloc] initWithCGImage:tempThumbnail scale:1.0 orientation:UIImageOrientationRight];
        }
        [self.cameraThumbnailArray insertObject:image atIndex:0];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.freezeCameraView.image = image;
        });
        CFRelease(tempThumbnail);
    }

    sampleBuffer = nil;
    [pool release];
}

- (CGImageRef)imageRefFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    return newImage;
}
minFrameDuration is deprecated; this may work instead:
AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
stillImageConnection.videoMinFrameDuration = CMTimeMake(1, 10);
To improve things, we should set up our AVCaptureVideoDataOutput with:
output.minFrameDuration = CMTimeMake(1, 10);
We specify a minimum duration for each frame (play with this setting to avoid having too many frames waiting in the queue, because that can cause memory issues). It is essentially the inverse of the maximum frame rate. In this example we set a minimum frame duration of 1/10 of a second, so a maximum frame rate of 10 fps. We are saying that we are not able to process more than 10 frames per second.
Hope that helps!
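On iOS 7 and later the frame duration is configured on the AVCaptureDevice instead. A minimal sketch, assuming device is your capture device:

NSError *error = nil;
if ([device lockForConfiguration:&error]) {
    // Limit capture to at most 10 frames per second.
    device.activeVideoMinFrameDuration = CMTimeMake(1, 10);
    [device unlockForConfiguration];
}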

Is there any way to improve time between shots with AVCaptureStillImageOutput?

I currently use the following code to shoot a series of images:
- (void)shootSeries:(int)photos {
    if (photos == 0) {
        [self mergeImages];
    } else {
        [output captureStillImageAsynchronouslyFromConnection:connection completionHandler:
         ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
             NSLog(@"Shot picture %d.", 7 - photos);
             [self shootSeries:(photos - 1)];

             CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
             CVPixelBufferLockBaseAddress(pixelBuffer, 0);
             int dataSize = CVPixelBufferGetDataSize(pixelBuffer);
             CFDataRef data = CFDataCreate(NULL, (const UInt8 *)CVPixelBufferGetBaseAddress(pixelBuffer), dataSize);
             CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

             CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(data);
             CFRelease(data);
             CGImageRef image = CGImageCreate(CVPixelBufferGetWidth(pixelBuffer),
                                              CVPixelBufferGetHeight(pixelBuffer),
                                              8, 32,
                                              CVPixelBufferGetBytesPerRow(pixelBuffer),
                                              colorspace,
                                              kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                              dataProvider, NULL, true, kCGRenderingIntentDefault);
             CFRelease(dataProvider);

             CFArrayAppendValue(shotPictures, image);
             CFRelease(image);
         }];
    }
}
While this works rather well, it is very slow. How come apps like ClearCam can shoot pictures in series much faster than this, and how can I do it too?
After capturing the image, store the sample buffer in a CFArray, and once you're done taking all your photos, THEN convert them to images (or in your case CGImageRefs).
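Along those lines, a rough sketch (shotBuffers is a hypothetical CFMutableArrayRef created elsewhere; the conversion step stays exactly as in the original code):

[output captureStillImageAsynchronouslyFromConnection:connection completionHandler:
 ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
     if (imageDataSampleBuffer) {
         // Retain the buffer so it survives past the completion handler.
         CFRetain(imageDataSampleBuffer);
         CFArrayAppendValue(shotBuffers, imageDataSampleBuffer);
     }
     [self shootSeries:(photos - 1)];
 }];

// Once the whole series has been shot, convert each stored buffer to a
// CGImageRef exactly as before, then CFRelease the buffer.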