Generate movie with UIImages using AVFoundation - iPhone

Many before me have shared their knowledge on Stack Overflow about this topic, and thanks to those contributions I was able to adopt many of the tips and code snippets. It all worked quite well, except that it was often hard on working memory. The time-lapse application I am working on was able to generate a movie out of 2000 HD images and more, but since iOS 7.1 it has trouble generating a video out of more than 240 HD images. 240 images seems to be the limit on an iPhone 5s. I was wondering whether anybody has had these problems too and whether anybody has found a solution. Now to the source code.
This part iterates through the saved UIImages in the app's Documents directory.
if ([adaptor.assetWriterInput isReadyForMoreMediaData])
{
    CMTime frameTime = CMTimeMake(1, fps);
    CMTime lastTime = CMTimeMake(i, fps);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);

    NSString *imageFilePath = [NSString stringWithFormat:@"%@/%@", folderPathName, imageFileName];
    image = [UIImage imageWithContentsOfFile:imageFilePath];
    cgimage = [image CGImage];
    buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:cgimage];

    BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
    if (result == NO)
    {
        NSLog(@"failed to append buffer %i", i);
        _videoStatus = 0;
        success = NO;
        return success;
    }

    // buffer has to be released here or memory pressure will occur
    if (buffer != NULL)
    {
        CVBufferRelease(buffer);
        buffer = NULL;
    }
}
This is the local method that appears to cause most of the trouble. It gets a pixel buffer reference from a CGImage.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                                          CGImageGetHeight(image), kCVPixelFormatType_32ARGB,
                                          (CFDictionaryRef)CFBridgingRetain(options), &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4 * CGImageGetWidth(image),
                                                 rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextConcatCTM(context, CGAffineTransformMakeRotation(M_PI));
    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
I have been spending a lot of time on this and not moving forward. Help is much appreciated. If any more details are necessary, I am glad to provide.

I use very similar code, although for slightly different reasons (I'm using AVAssetReader, grabbing frames as images and manipulating them). The net result, however, should be similar: I'm iterating through thousands of images without issue.
The two things I notice that I'm doing differently:
1. When you release the buffer, you're using CVBufferRelease; I'm using CVPixelBufferRelease.
2. You are not releasing the CGImage with CGImageRelease.
Try rewriting this:
//buffer has to be released here or memory pressure will occur
if(buffer != NULL)
{
CVBufferRelease(buffer);
buffer = NULL;
}
as:
//buffer has to be released here or memory pressure will occur
if(buffer != NULL)
{
CVPixelBufferRelease(buffer);
buffer = NULL;
}
CGImageRelease(cgImage);
Let me know how that goes.
EDIT: Here is a sample of my code, getting and releasing a CGImageRef. The Image is created from a CIImage extracted from the reader buffer and filtered.
CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
// 2. Grab the size
CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));
// 3. Convert the CGImage to a PixelBuffer
CVPixelBufferRef pxBuffer = NULL;
pxBuffer = [self pixelBufferFromCGImage: finalImage andSize: size];
// 4. Write things back out.
// Calculate the frame time
CMTime frameTime = CMTimeMake(1, 30);
CMTime presentTime=CMTimeAdd(currentTime, frameTime);
[_ugcAdaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];
CGImageRelease(finalImage);
CVPixelBufferRelease(pxBuffer);

Finally I found the solution to my problem. There were two points I had to change in my code:
1. I changed the parameter type of the method from CGImageRef to UIImage, so it is now (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image. The reason for this is mainly to simplify the code so that the following correction is easier to implement.
2. An @autoreleasepool block is introduced into this method. This is the actual key to the solution: CGImageRef cgimage = [image CGImage]; and all other parts of the method must be enclosed in the autorelease pool.
The code looks like this.
- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image withOrientation:(ImageOrientation)orientation
{
    @autoreleasepool
    {
        CGImageRef cgimage = [image CGImage];
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;
        float width = CGImageGetWidth(cgimage);
        float height = CGImageGetHeight(cgimage);
        CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB,
                            (__bridge CFDictionaryRef)(options), &pxbuffer);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, 4 * width,
                                                     rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(-M_PI / 2));
        CGContextDrawImage(context, CGRectMake(-height, 0, height, width), cgimage);

        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer;
    }
}
With this solution an HD movie of more than 2000 images is generated, at a rather slow speed, but it seems to be very reliable, which is what matters most.
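For completeness, the calling loop can also drain temporary objects per frame. The following is only a rough sketch of how the method above might be driven; imageFileNames and orientation are placeholder names that do not appear in the code above, so treat this as an illustration rather than my exact project code.
// Sketch: wrap each frame's work in its own autorelease pool so the UIImage,
// the pixel buffer and any temporaries are freed before the next frame.
for (NSInteger i = 0; i < imageFileNames.count; i++) {
    @autoreleasepool {
        NSString *path = [folderPathName stringByAppendingPathComponent:imageFileNames[i]];
        UIImage *frameImage = [UIImage imageWithContentsOfFile:path];

        CVPixelBufferRef buffer = [self pixelBufferFromImage:frameImage
                                             withOrientation:orientation];
        CMTime presentTime = CMTimeMake(i, fps);

        while (!adaptor.assetWriterInput.readyForMoreMediaData) {
            [NSThread sleepForTimeInterval:0.05]; // wait for the writer to catch up
        }
        [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
        CVPixelBufferRelease(buffer); // balance the Create inside the helper
    }
}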

Related

Convert UIImage to CMSampleBufferRef

I am recording video using AVFoundation and have to crop the video to 320x280. I am getting the CMSampleBufferRef and converting it to a UIImage using the code below.
CGImageRef _cgImage = [self imageFromSampleBuffer:sampleBuffer];
UIImage *_uiImage = [UIImage imageWithCGImage:_cgImage];
CGImageRelease(_cgImage);
_uiImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)];
CMSampleBufferRef croppedBuffer = /* NEED HELP WITH THIS */
[_videoInput appendSampleBuffer:sampleBuffer];
// _videoInput is a AVAssetWriterInput
The imageFromSampleBuffer: method looks like this:
- (CGImageRef) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer // Create a CGImageRef from sample buffer data
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
/* CVBufferRelease(imageBuffer); */ // do not call this!
return newImage;
}
Now I have to convert the resized image back to CMSampleBufferRef to write in AVAssetWriterInput.
How do I convert UIImage to CMSampleBufferRef?
Thanks everyone!
While you could create your own Core Media sample buffers from scratch, it's probably easier to use an AVAssetWriterInputPixelBufferAdaptor.
You describe the source pixel buffer format in the inputSettings dictionary and pass that to the adaptor initializer:
NSMutableDictionary* inputSettingsDict = [NSMutableDictionary dictionary];
[inputSettingsDict setObject:[NSNumber numberWithInt:pixelFormat] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
[inputSettingsDict setObject:[NSNumber numberWithUnsignedInteger:(NSUInteger)(image.uncompressedSize/image.rect.size.height)] forKey:(NSString*)kCVPixelBufferBytesPerRowAlignmentKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.width] forKey:(NSString*)kCVPixelBufferWidthKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.height] forKey:(NSString*)kCVPixelBufferHeightKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGImageCompatibilityKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey];
AVAssetWriterInputPixelBufferAdaptor* pixelBufferAdapter = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:assetWriterInput sourcePixelBufferAttributes:inputSettingsDict];
You can then append CVPixelBuffers to your adaptor:
[pixelBufferAdapter appendPixelBuffer:completePixelBuffer withPresentationTime:pixelBufferTime];
The pixel buffer adaptor accepts CVPixelBuffers, so you have to convert your UIImages to pixel buffers, which is described here: https://stackoverflow.com/a/3742212/100848
Pass the CGImage property of your UIImage to newPixelBufferFromCGImage.
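Putting the pieces together, the per-frame usage might look like the following sketch. newPixelBufferFromCGImage: stands in for the conversion helper from the linked answer, and frameImage and frameTime are assumed to exist in your own loop.
// Sketch: convert a UIImage to a CVPixelBuffer and hand it to the adaptor.
CVPixelBufferRef pixelBuffer = [self newPixelBufferFromCGImage:frameImage.CGImage];
if (pixelBufferAdapter.assetWriterInput.readyForMoreMediaData) {
    [pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
}
CVPixelBufferRelease(pixelBuffer); // the helper returns a +1 (Create-rule) buffer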
This is a function that I use in my GPUImage framework to resize an incoming CMSampleBufferRef and place the scaled results within a CVPixelBufferRef that you provide:
void GPUImageCreateResizedSampleBuffer(CVPixelBufferRef cameraFrame, CGSize finalSize, CMSampleBufferRef *sampleBuffer)
{
// CVPixelBufferCreateWithPlanarBytes for YUV input
CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));
CVPixelBufferLockBaseAddress(cameraFrame, 0);
GLubyte *sourceImageBytes = CVPixelBufferGetBaseAddress(cameraFrame);
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes, CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32, CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);
GLubyte *imageData = (GLubyte *) calloc(1, (int)finalSize.width * (int)finalSize.height * 4);
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8, (int)finalSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, finalSize.width, finalSize.height), cgImageFromBytes);
CGImageRelease(cgImageFromBytes);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
CGDataProviderRelease(dataProvider);
CVPixelBufferRef pixel_buffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA, imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);
CMTime frameTime = CMTimeMake(1, 30);
CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, sampleBuffer);
CFRelease(videoInfo);
CVPixelBufferRelease(pixel_buffer);
}
It doesn't take you all the way to creating a CMSampleBufferRef, but as weichsel points out, you only need the CVPixelBufferRef for encoding the video.
However, if what you really want to do here is crop video and record it, going to and from a UIImage is going to be a very slow way to do this. Instead, may I recommend looking into using something like GPUImage to capture video with a GPUImageVideoCamera input (or GPUImageMovie if cropping a previously recorded movie), feeding that into a GPUImageCropFilter, and taking the result to a GPUImageMovieWriter. That way, the video never touches Core Graphics and hardware acceleration is used as much as possible. It will be a lot faster than what you describe above.
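As a rough illustration of that pipeline (assuming the GPUImage classes named above; the session preset, crop region and output URL are placeholders you would replace):
// Sketch: camera -> crop filter -> movie writer, all on the GPU.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];

// The crop region is given in normalized coordinates (0.0 - 1.0).
GPUImageCropFilter *cropFilter =
    [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.1, 1.0, 0.8)];

NSURL *movieURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"cropped.m4v"]];
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(320.0, 280.0)];

[videoCamera addTarget:cropFilter];
[cropFilter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];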
- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img {
CGSize size = img.size;
CGImageRef image = [img CGImage];
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
img -> UIImage
CVPixelBufferRef pxbuffer = NULL;
CGImageRef image = [img CGImage];

// Initialize the CVPixelBuffer
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image), CGImageGetHeight(image), kCVPixelFormatType_32ARGB, NULL, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

// Lock the buffer before touching its base address
CVPixelBufferLockBaseAddress(pxbuffer, 0);
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer), CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CVPixelBufferGetBytesPerRow(pxbuffer), CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
Please make sure that BitsPerComponent and BytesPerRow are fetched from the CGImageRef and the CVPixelBufferRef respectively:
CGImageGetBitsPerComponent(image)
CVPixelBufferGetBytesPerRow(pxbuffer)
In many places I have seen people using constants; if they are not correct, you get a distorted image.

iPhone - application stops after saving 50 frames to a movie

I have several UIImages and I want to create a video from them.
I used a solution based on this to create a video from UIImages. In my case, I would like to create a 30 fps video, so every image lasts 1/30 of a second.
After setting everything up to start saving the video, as mentioned on that page, I created a method that saves one image to the movie, and this method is called in a loop. Something like:
for (int i = 0; i < [self.arrayOfFrames count]; i++) {
    UIImage *oneImage = [self.arrayOfFrames objectAtIndex:i];
    [self saveOneFrame:oneImage atTime:i];
}
and the method is
- (void)saveOneFrame:(UIImage *)imagem atTime:(NSInteger)time {
    // I have tried this autorelease pool to drain memory after the method is finished
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    CVPixelBufferRef buffer = NULL;
    buffer = [self pixelBufferFromCGImage:imagem.CGImage size:imagem.size];

    BOOL append_ok = NO;
    int j = 0;
    while (!append_ok && j < 30)
    {
        if (adaptor.assetWriterInput.readyForMoreMediaData)
        {
            printf("appending %d attempt %d\n", time, j);

            CMTime oneFrameLength = CMTimeMake(1, 30.0f); // one frame = 1/30 s
            CMTime lastTime;
            CMTime presentTime;
            if (time == 0) {
                presentTime = CMTimeMake(0, self.FPS);
            } else {
                lastTime = CMTimeMake(time - 1, self.FPS);
                // this will always add 1/30 to every new keyframe
                presentTime = CMTimeAdd(lastTime, oneFrameLength);
            }

            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];

            CVPixelBufferPoolRef bufferPool = adaptor.pixelBufferPool;
            NSParameterAssert(bufferPool != NULL);

            [NSThread sleepForTimeInterval:0.05];
        }
        else
        {
            printf("adaptor not ready %d, %d\n", time, j);
            [NSThread sleepForTimeInterval:0.1];
        }
        j++;
    }
    if (!append_ok) {
        printf("error appending image %d times %d\n", time, j);
    }

    CVBufferRelease(buffer);
    [pool drain]; // I have tried with and without this autorelease pool in place... no difference
}
The application simply quits, without any warning, after saving 50 frames to the movie...
This is the other method:
-(CVPixelBufferRef) pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
status=status;//Added to make the stupid compiler not show a stupid warning.
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
//CGContextTranslateCTM(context, 0, CGImageGetHeight(image));
//CGContextScaleCTM(context, 1.0, -1.0);//Flip vertically to account for different origin
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I ran Instruments and did not detect any leaks or excessive memory usage; it stays about the same as before the movie starts being saved.
Any clues?
NOTE:
After looking at the device logs, I found this:
<Notice>: (UIKitApplication:com.myID.myApp[0xc304]) Bug: launchd_core_logic.c:3732 (25562):3
<Notice>: (UIKitApplication:com.myID.myApp[0xc304]) Assuming job exited: <rdar://problem/5020256>: 10: No child processes
<Warning>: (UIKitApplication:com.myID.myApp[0xc304]) Job appears to have crashed: Segmentation fault: 11
<Warning>: Application 'myApp' exited abnormally with signal 11: Segmentation fault: 11
Maybe you have tried this already, but this might help:
In the end, the solution was to restart the iPhone, since some data got corrupted. After the reboot everything was working normally.
Should have thought of the classic "Have you tried turning it off and on again?"
Look at it this way: you have an array of images (something which eats a lot of memory). You're making a copy of each of those images and saving the copy to the finished movie, so your requirements are essentially double what you started with. How about releasing each frame after you've added it (see the sketch below)? That way you may end up with roughly the same memory usage (probably somewhat more, but still less than what you had).
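One way to do that, assuming you can take ownership of arrayOfFrames as a mutable array, is to drop each image once its frame has been appended. This is only a rough sketch of the idea:
// Sketch: consume the frames front-to-back so only the unwritten images stay in memory.
NSMutableArray *frames = [self.arrayOfFrames mutableCopy];
self.arrayOfFrames = nil; // let go of the original array so removed frames can actually be freed
NSInteger frameIndex = 0;
while (frames.count > 0) {
    @autoreleasepool {
        UIImage *oneImage = [frames objectAtIndex:0];
        [self saveOneFrame:oneImage atTime:frameIndex];
        [frames removeObjectAtIndex:0]; // drop the written frame
        frameIndex++;
    }
}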

iOS - Automatically resize CVPixelBufferRef

I am trying to crop and scale a CMSampleBufferRef based on the user's input ratio. The code below takes a CMSampleBufferRef, converts it into a CVImageBufferRef and uses the CVPixelBuffer calls to crop the internal image based on its bytes. The goal of this process is to have a cropped and scaled CVPixelBufferRef to write to the video.
- (CVPixelBufferRef)modifyImage:(CMSampleBufferRef)sampleBuffer {
    @synchronized (self) {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        // Lock the image buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        // Get information about the image
        uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);

        CVPixelBufferRef pxbuffer;
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 [NSNumber numberWithInt:720], kCVPixelBufferWidthKey,
                                 [NSNumber numberWithInt:1280], kCVPixelBufferHeightKey,
                                 nil];

        NSInteger tempWidth = (NSInteger)(width / ratio);
        NSInteger tempHeight = (NSInteger)(height / ratio);
        NSInteger baseAddressStart = 100 + 100 * bytesPerRow;
        CVReturn status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, tempWidth, tempHeight, kCVPixelFormatType_32BGRA, &baseAddress[baseAddressStart], bytesPerRow, MyPixelBufferReleaseCallback, NULL, (CFDictionaryRef)options, &pxbuffer);
        if (status != 0) {
            CKLog(@"%d", status);
            return NULL;
        }

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return pxbuffer;
    }
}
It all works fine, except that when I try to write it into the video's output using the method below, I keep receiving memory warnings. It is fine if I keep the same ratio.
- (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer pixelBuffer:(CVPixelBufferRef)pixelBuffer {
    CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (self.videoWriter.status != AVAssetWriterStatusWriting)
    {
        CKLog(@"%d", self.videoWriter.status);
        [self.videoWriter startWriting];
        [self.videoWriter startSessionAtSourceTime:lastSampleTime];
    }

    CVPixelBufferRef pxbuffer = [self modifyImage:sampleBuffer];
    BOOL success = [self.avAdaptor appendPixelBuffer:pxbuffer withPresentationTime:lastSampleTime];
    if (!success)
        NSLog(@"Warning: Unable to write buffer to video");
}
I also tried different approaches using CMSampleBufferRef and CGContext. If you can provide a solution for any approach here, I can give you the full score.
Try using the kCVPixelBufferLock_ReadOnly flag in both calls, CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress.
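For example (a sketch against the code above; the flag simply tells Core Video that you will not modify the pixels):
// Sketch: lock and unlock with the read-only flag when only reading the pixels.
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
// ... read from baseAddress ...
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);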
And sometimes this issue can be solved by copying the pixel buffer. Perform the allocation once:
size_t size = height * bytesPerRow;
unsigned char *data = (unsigned char *)malloc(size);
After that, copy the data from the pixel buffer into your own buffer:
memcpy(data, baseAddress, size);
After that, work with data instead of the original base address. Hope that helps.
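A slightly more complete sketch of that copy, assuming a single-plane BGRA buffer (planar formats would need CVPixelBufferGetBaseAddressOfPlane and a copy per plane):
// Sketch: deep-copy a single-plane pixel buffer so the capture pipeline can reuse the original.
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
size_t height      = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

unsigned char *data = (unsigned char *)malloc(height * bytesPerRow);
memcpy(data, baseAddress, height * bytesPerRow);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

// ... work with the copy; free(data) when done, or hand ownership to
// CVPixelBufferCreateWithBytes with a release callback that frees it.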

Convert UIImage to CVImageBufferRef

This code mostly works, but the resulting data seems to lose a color channel (that is what I am thinking), as the resulting image data is tinted blue when displayed!
Here is the code:
UIImage *myImage = [UIImage imageNamed:@"sample1.png"];
CGImageRef imageRef=[myImage CGImage];
CVImageBufferRef pixelBuffer = [self pixelBufferFromCGImage:imageRef];
The method pixelBufferFromCGImage was grabbed from another post on Stack Overflow here: How do I export UIImage array as a movie? (although that application is unrelated to what I am trying to do). It is:
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image));
NSDictionary *options = @{
    (__bridge NSString *)kCVPixelBufferCGImageCompatibilityKey: @(NO),
    (__bridge NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @(NO)
};
CVPixelBufferRef pixelBuffer;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
frameSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pixelBuffer);
if (status != kCVReturnSuccess) {
return NULL;
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *data = CVPixelBufferGetBaseAddress(pixelBuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, frameSize.width, frameSize.height,
8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace,
(CGBitmapInfo) kCGImageAlphaNoneSkipLast);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return pixelBuffer;
}
I am thinking it has something to do with the relationship between kCVPixelFormatType_32ARGB and kCGImageAlphaNoneSkipLast, though I have tried every combination and get either the same result or an application crash. Once again, this gets the UIImage data into a CVImageBufferRef, but when I display the image on screen, it appears to lose a color channel and shows up tinted blue. The image is a PNG.
The solution is that this code works perfectly as intended. :) The issue was in using the data to create an OpenGL texture, which is completely unrelated to this code. Anyone searching for how to convert a UIImage to a CVImageBufferRef, your answer is in the code above!
If anyone is still looking for a solution to this problem, I solved it by switching the BOOLs in the pixelBuffer's options:
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:NO], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:NO], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
From NO to YES:
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
I encountered the same problem and found some samples: http://www.cakesolutions.net/teamblogs/2014/03/08/cmsamplebufferref-from-cgimageref
Try changing the bitmap info to
CGBitmapInfo bitmapInfo = (CGBitmapInfo)(kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
Here's what really works:
+ (CVPixelBufferRef)pixelBufferFromImage:(CGImageRef)image {
CGSize frameSize = CGSizeMake(CGImageGetWidth(image), CGImageGetHeight(image)); // Not sure why this is even necessary, using CGImageGetWidth/Height in status/context seems to work fine too
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width, frameSize.height, kCVPixelFormatType_32BGRA, nil, &pixelBuffer);
if (status != kCVReturnSuccess) {
return NULL;
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *data = CVPixelBufferGetBaseAddress(pixelBuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(data, frameSize.width, frameSize.height, 8, CVPixelBufferGetBytesPerRow(pixelBuffer), rgbColorSpace, (CGBitmapInfo) kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return pixelBuffer;
}
You can change the pixel buffer back to a UIImage (and then display or save it) to confirm that it works with this method:
+ (UIImage *)imageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer {
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef myImage = [context createCGImage:ciImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer))];
UIImage *image = [UIImage imageWithCGImage:myImage];
// Uncomment the following lines to save the image to your application's Documents directory
//NSString *imageSavePath = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"myImageFromPixelBuffer.png"]];
//[UIImagePNGRepresentation(image) writeToFile:imageSavePath atomically:YES];
return image;
}
Just to clarify the answer above: I ran into the same issue because my shader code was expecting two layered samples within an image buffer, while I used a single-layer buffer.
This line took the RGB values from one sample and passed them on (I don't know where to), but the end result is a fully colored image.
gl_FragColor = vec4(texture2D(SamplerY, texCoordVarying).rgb, 1);
It sounds like it might be that relationship. Possibly try a JPG in RGB instead of a PNG with indexed colors?

Is there any way to improve time between shots with AVCaptureStillImageOutput?

I currently use the following code to shoot a series of images:
- (void)shootSeries:(int)photos {
if(photos == 0) {
[self mergeImages];
} else {
[output captureStillImageAsynchronouslyFromConnection:connection completionHandler:
^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
NSLog(#"Shot picture %d.", 7 - photos);
[self shootSeries:(photos - 1)];
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int dataSize = CVPixelBufferGetDataSize(pixelBuffer);
CFDataRef data = CFDataCreate(NULL, (const UInt8 *)CVPixelBufferGetBaseAddress(pixelBuffer), dataSize);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData(data);
CFRelease(data);
CGImageRef image = CGImageCreate(CVPixelBufferGetWidth(pixelBuffer),
CVPixelBufferGetHeight(pixelBuffer),
8, 32,
CVPixelBufferGetBytesPerRow(pixelBuffer),
colorspace,
kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
dataProvider, NULL, true, kCGRenderingIntentDefault);
CFRelease(dataProvider);
CFArrayAppendValue(shotPictures, image);
CFRelease(image);
}];
}
}
While this works rather well, it is very slow. How come apps like ClearCam can shoot series of pictures much faster than this, and how can I do it too?
After capturing the image, store the sample buffer in a CFArray, and once you're done taking all your photos, THEN convert them to images (or in your case CGImageRefs).
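A rough sketch of that idea (shotBuffers is an assumed CFMutableArrayRef created with kCFTypeArrayCallBacks, so appending retains each buffer; the conversion step is deferred until the burst is finished):
// Sketch: keep the raw sample buffers during the burst, convert afterwards.
[output captureStillImageAsynchronouslyFromConnection:connection completionHandler:
    ^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if (imageDataSampleBuffer != NULL) {
            // kCFTypeArrayCallBacks retains the buffer, so it stays valid after the handler returns.
            CFArrayAppendValue(shotBuffers, imageDataSampleBuffer);
        }
        [self shootSeries:(photos - 1)];
    }];

// Later, once the whole series has been captured:
for (CFIndex i = 0; i < CFArrayGetCount(shotBuffers); i++) {
    CMSampleBufferRef buffer = (CMSampleBufferRef)CFArrayGetValueAtIndex(shotBuffers, i);
    // ... convert buffer to a CGImageRef / UIImage here, as in the original code ...
}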