How to make a movie from a set of images using UIGetScreenImage - iPhone

I use the method below to capture multiple images. I am able to create a movie successfully, but my problem is that when I play the movie, it runs too fast, i.e. the movie doesn't contain all the frames. Here is my code.
-(UIImage *)uiImageScreen
{
    CGImageRef screen = UIGetScreenImage();
    UIImage *image = [UIImage imageWithCGImage:screen];
    CGImageRelease(screen);
    UIImageWriteToSavedPhotosAlbum(image, self, nil, nil);
    return image;
}
-(void)writeSample:(NSTimer *)_timer
{
    if (assetWriterInput.readyForMoreMediaData) {
        // CMSampleBufferRef sample = nil;
        CVReturn cvErr = kCVReturnSuccess;
        // get screenshot image!
        CGImageRef image = (CGImageRef)[[self uiImageScreen] CGImage];
        NSLog(@"made screenshot");
        // prepare the pixel buffer
        CVPixelBufferRef pixelBuffer = NULL;
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
        NSLog(@"copied image data");
        cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                             FRAME_WIDTH,
                                             FRAME_HEIGHT,
                                             kCVPixelFormatType_32BGRA,
                                             (void *)CFDataGetBytePtr(imageData),
                                             CGImageGetBytesPerRow(image),
                                             NULL,
                                             NULL,
                                             NULL,
                                             &pixelBuffer);
        NSLog(@"CVPixelBufferCreateWithBytes returned %d", cvErr);
        // calculate the time
        CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
        CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
        NSLog(@"elapsedTime: %f", elapsedTime);
        CMTime presentationTime = CMTimeMake(elapsedTime * 600, 600);
        // write the sample
        BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
        if (appended)
        {
            NSLog(@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
        }
        else
        {
            NSLog(@"failed to append");
        }
    }
}
Then I call this method to start recording the movie:
-(void)StartRecording
{
    NSString *moviePath = [[self pathToDocumentsDirectory] stringByAppendingPathComponent:OUTPUT_FILE_NAME];
    if ([[NSFileManager defaultManager] fileExistsAtPath:moviePath]) {
        [[NSFileManager defaultManager] removeItemAtPath:moviePath error:nil];
    }
    NSURL *movieURL = [NSURL fileURLWithPath:moviePath];
    NSLog(@"path=%@", movieURL);
    NSError *movieError = nil;
    [assetWriter release];
    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&movieError];
    NSDictionary *assetWriterInputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              AVVideoCodecH264, AVVideoCodecKey,
                                              [NSNumber numberWithInt:320], AVVideoWidthKey,
                                              [NSNumber numberWithInt:480], AVVideoHeightKey,
                                              nil];
    assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                          outputSettings:assetWriterInputSettings];
    assetWriterInput.expectsMediaDataInRealTime = YES;
    [assetWriter addInput:assetWriterInput];
    [assetWriterPixelBufferAdaptor release];
    assetWriterPixelBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                                     initWithAssetWriterInput:assetWriterInput
                                     sourcePixelBufferAttributes:nil];
    [assetWriter startWriting];
    firstFrameWallClockTime = CFAbsoluteTimeGetCurrent();
    [assetWriter startSessionAtSourceTime:CMTimeMake(0, 1000)];
    // start writing samples to it
    [assetWriterTimer release];
    assetWriterTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                        target:self
                                                      selector:@selector(writeSample:)
                                                      userInfo:nil
                                                       repeats:YES];
}

Try this method instead:
if (![videoWriterInput isReadyForMoreMediaData]) {
    NSLog(@"Not ready for video data");
}
else {
    @synchronized (self) {
        UIImage *newFrame = [self.currentScreen retain];
        CVPixelBufferRef pixelBuffer = NULL;
        CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
        CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
        int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, avAdaptor.pixelBufferPool, &pixelBuffer);
        if (status != 0) {
            // could not get a buffer from the pool
            NSLog(@"Error creating pixel buffer: status=%d", status);
        }
        // set image data into pixel buffer
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
        CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels); // XXX: will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data
        if (status == 0) {
            BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
            if (!success)
                NSLog(@"Warning: Unable to write buffer to video");
        }
        // clean up
        [newFrame release];
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        CVPixelBufferRelease(pixelBuffer);
        CFRelease(image);
        CGImageRelease(cgImage);
    }
}
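One caveat with this pool-based approach (an assumption worth verifying, not something stated above): avAdaptor.pixelBufferPool is NULL until startWriting has been called, and in my understanding it also needs the adaptor to have been created with explicit sourcePixelBufferAttributes. A sketch of that setup, where videoWriter stands for the surrounding AVAssetWriter:
// Sketch: create the adaptor with explicit source attributes so that
// pixelBufferPool has a pixel format to build buffers from.
NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
                            (NSString *)kCVPixelBufferPixelFormatTypeKey,
                            nil];
avAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
             initWithAssetWriterInput:videoWriterInput
             sourcePixelBufferAttributes:attributes];
[videoWriter startWriting]; // pixelBufferPool is NULL before this call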

Related

iPhone - AVAssetWriter - Error creating movie from photos at 1920×1080 pixels

I am trying to create a movie from some pictures. It works just fine with HD pictures ({720, 1280}) or lower resolutions, but when I try to create the movie with full HD pictures {1080, 1920}, the video is scrambled. Here is a link that shows how it looks: http://www.youtube.com/watch?v=BfYldb8e_18. Do you have any ideas what I may be doing wrong?
- (void) createMovieWithOptions:(NSDictionary *)options
{
    @autoreleasepool {
        NSString *path = [options valueForKey:@"path"];
        CGSize size = [(NSValue *)[options valueForKey:@"size"] CGSizeValue];
        NSArray *imageArray = [options valueForKey:@"pictures"];
        NSInteger recordingFPS = [[options valueForKey:@"fps"] integerValue];
        BOOL success = YES;
        NSError *error = nil;
        AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                               fileType:AVFileTypeQuickTimeMovie
                                                                  error:&error];
        NSParameterAssert(assetWriter);
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       [NSNumber numberWithFloat:size.width], AVVideoWidthKey,
                                       [NSNumber numberWithFloat:size.height], AVVideoHeightKey,
                                       nil];
        AVAssetWriterInput *videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                                  outputSettings:videoSettings];
        // Configure settings for the pixel buffer adaptor.
        NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                          [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
        AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                                                                                         sourcePixelBufferAttributes:bufferAttributes];
        NSParameterAssert(videoWriterInput);
        NSParameterAssert([assetWriter canAddInput:videoWriterInput]);
        videoWriterInput.expectsMediaDataInRealTime = NO;
        [assetWriter addInput:videoWriterInput];
        // Start a session:
        [assetWriter startWriting];
        [assetWriter startSessionAtSourceTime:kCMTimeZero];
        CVPixelBufferRef buffer = NULL;
        // convert UIImage to CGImage.
        int frameCount = 0;
        float progress = 0;
        float progressFromFrames = _progressView.progress; // only for create iflipbook movie
        for (UIImage *img in imageArray)
        {
            if ([[NSThread currentThread] isCancelled])
            {
                [NSThread exit];
            }
            [condCreateMovie lock];
            if (isCreateMoviePaused)
            {
                [condCreateMovie wait];
            }
            uint64_t totalFreeSpace = [Utils getFreeDiskspace];
            if (((totalFreeSpace / 1024ll) / 1024ll) < 50)
            {
                success = NO;
                break;
            }
            // @autoreleasepool {
            NSLog(@"size:%@", NSStringFromCGSize(img.size));
            buffer = [[MovieWritter sharedMovieWritter] pixelBufferFromCGImage:[img CGImage] andSize:size];
            BOOL append_ok = NO;
            int j = 0;
            while (!append_ok && j < 60)
            {
                if (adaptor.assetWriterInput.readyForMoreMediaData)
                {
                    CMTime frameTime = CMTimeMake(frameCount, recordingFPS);
                    append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
                    CVPixelBufferRelease(buffer);
                    [NSThread sleepForTimeInterval:0.1];
                    if (isCreatingiFlipBookFromImported)
                        progress = (float)frameCount / (float)[imageArray count] / 2.0 + progressFromFrames;
                    else
                        progress = (float)frameCount / (float)[imageArray count];
                    [[NSNotificationCenter defaultCenter] postNotificationName:@"movieCreationProgress" object:[NSNumber numberWithFloat:progress]];
                }
                else
                {
                    [NSThread sleepForTimeInterval:0.5];
                }
                j++;
            }
            if (!append_ok)
            {
                NSLog(@"error appending image %d times %d\n", frameCount, j);
            }
            frameCount++;
            [condCreateMovie unlock];
        }
        // Finish the session:
        [videoWriterInput markAsFinished];
        [assetWriter finishWriting];
        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                              [NSNumber numberWithBool:success], @"success",
                              path, @"path", nil];
        [[NSNotificationCenter defaultCenter] postNotificationName:@"movieCreationFinished" object:dict];
    }
}
Edit: here is the code for [[MovieWritter sharedMovieWritter] pixelBufferFromCGImage:andSize:]:
- (CVPixelBufferRef) pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size
{
    @autoreleasepool {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                                              size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                                              &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                     size.height, 8, 4 * size.width, rgbColorSpace,
                                                     kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                               CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
        return pxbuffer;
    }
}
I had the same problem, and this answer resolved it: the size of the video must be a multiple of 16.
Pretty sure that this is either a hardware limitation or a bug. Please file a Radar.
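A minimal sketch of applying that constraint, assuming the fix is simply to round the requested dimensions down to multiples of 16 before configuring the writer and the pixel buffers (MWSizeAlignedTo16 is a hypothetical helper, not part of the code above):
#include <math.h>

// Hypothetical helper: clamp a size to multiples of 16 so the H.264
// encoder gets dimensions it is known to handle.
static CGSize MWSizeAlignedTo16(CGSize size)
{
    return CGSizeMake(16.0f * floorf((float)size.width / 16.0f),
                      16.0f * floorf((float)size.height / 16.0f));
}
Pass the aligned size both to the AVVideoWidthKey/AVVideoHeightKey settings and to pixelBufferFromCGImage:andSize: so the buffers and the encoder agree. Note that 1080 is not a multiple of 16 (1080 / 16 = 67.5), which matches the scrambled {1080, 1920} output described above.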
How about something like this to get the pixel buffer:
// you could use a CGImageRef here instead
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(imageView.image.CGImage));
NSLog(@"copied image data");
cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     FRAME_WIDTH,
                                     FRAME_HEIGHT,
                                     kCVPixelFormatType_32BGRA,
                                     (void *)CFDataGetBytePtr(imageData),
                                     CGImageGetBytesPerRow(imageView.image.CGImage),
                                     NULL,
                                     NULL,
                                     NULL,
                                     &pixelBuffer);
NSLog(@"CVPixelBufferCreateWithBytes returned %d", cvErr);
CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
NSLog(@"elapsedTime: %f", elapsedTime);
CMTime presentationTime = CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE);
// write the sample
BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
CVPixelBufferRelease(pixelBuffer);
CFRelease(imageData);
if (appended) {
    NSLog(@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
} else {
    NSLog(@"failed to append");
    [self stopRecording];
    self.startStopButton.selected = NO;
}
You may also want to set the capture session preset, although High is usually suitable and is the default:
/*
 Constants to define capture setting presets using the sessionPreset property.
 NSString *const AVCaptureSessionPresetPhoto;
 NSString *const AVCaptureSessionPresetHigh;
 NSString *const AVCaptureSessionPresetMedium;
 NSString *const AVCaptureSessionPresetLow;
 NSString *const AVCaptureSessionPreset352x288;
 NSString *const AVCaptureSessionPreset640x480;
 NSString *const AVCaptureSessionPreset1280x720;
 NSString *const AVCaptureSessionPreset1920x1080;
 NSString *const AVCaptureSessionPresetiFrame960x540;
 NSString *const AVCaptureSessionPresetiFrame1280x720;
*/
// set it like this
self.captureSession.sessionPreset = AVCaptureSessionPreset1920x1080;
// or like this when you define the AVCaptureSession
[self.captureSession setSessionPreset:AVCaptureSessionPreset1920x1080];

Memory issue while reading video frames on iPhone

I'm having memory issues while reading video frames from an existing video chosen from the iPhone library. First I added the UIImage frames themselves into an array, but I figured the array became too big for memory after a while, so instead I save the UIImages to the documents folder and add the image path to the array. However, I still get the same memory warnings, even when checking allocations with Instruments: the total allocated memory never gets over 2.5 MB, and no leaks are found... Can anyone think of something?
-(void)addFrame:(UIImage *)image
{
    NSString *imgPath = [NSString stringWithFormat:@"%@/Analysis%d-%d.png", docFolder, currentIndex, framesArray.count];
    [UIImagePNGRepresentation(image) writeToFile:imgPath atomically:YES];
    [framesArray addObject:imgPath];
    frameCount++;
}
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [picker dismissModalViewControllerAnimated:YES];
    [framesArray removeAllObjects];
    frameCount = 0;
    // incoming video
    NSURL *videoURL = [info valueForKey:UIImagePickerControllerMediaURL];
    //NSLog(@"Video : %@", videoURL);
    // AVURLAsset to read input movie (i.e. mov recorded to local storage)
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:videoURL options:inputOptions];
    // Load the input asset tracks information
    [inputAsset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
        NSError *error = nil;
        nrFrames = CMTimeGetSeconds([inputAsset duration]) * 30;
        NSLog(@"Total frames = %d", nrFrames);
        // Check status of "tracks", make sure they were loaded
        AVKeyValueStatus tracksStatus = [inputAsset statusOfValueForKey:@"tracks" error:&error];
        if (tracksStatus != AVKeyValueStatusLoaded)
            // failed to load
            return;
        /* Read video samples from input asset video track */
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:inputAsset error:&error];
        NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
        [outputSettings setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
        AVAssetReaderTrackOutput *readerVideoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[inputAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] outputSettings:outputSettings];
        // Assign the tracks to the reader and start to read
        [reader addOutput:readerVideoTrackOutput];
        if ([reader startReading] == NO) {
            // Handle error
            NSLog(@"Error reading");
        }
        NSAutoreleasePool *pool = [NSAutoreleasePool new];
        while (reader.status == AVAssetReaderStatusReading)
        {
            if (!memoryProblem)
            {
                CMSampleBufferRef sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
                if (sampleBufferRef)
                {
                    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBufferRef);
                    /* Lock the image buffer */
                    CVPixelBufferLockBaseAddress(imageBuffer, 0);
                    /* Get information about the image */
                    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
                    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
                    size_t width = CVPixelBufferGetWidth(imageBuffer);
                    size_t height = CVPixelBufferGetHeight(imageBuffer);
                    /* We unlock the image buffer */
                    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
                    /* Create a CGImageRef from the CVImageBufferRef */
                    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
                    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
                    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
                    /* We release some components */
                    CGContextRelease(newContext);
                    CGColorSpaceRelease(colorSpace);
                    UIImage *image = [UIImage imageWithCGImage:newImage scale:[UIScreen mainScreen].scale orientation:UIImageOrientationRight];
                    //[self addFrame:image];
                    [self performSelectorOnMainThread:@selector(addFrame:) withObject:image waitUntilDone:YES];
                    /* We release the CGImageRef */
                    CGImageRelease(newImage);
                    CMSampleBufferInvalidate(sampleBufferRef);
                    CFRelease(sampleBufferRef);
                    sampleBufferRef = NULL;
                }
            }
            else
            {
                break;
            }
        }
        [pool release];
        NSLog(@"Finished");
    }];
}
Try one thing: move the NSAutoreleasePool inside the while loop and drain it on every iteration, so that it looks like this:
while (reader.status == AVAssetReaderStatusReading)
{
    NSAutoreleasePool *pool = [NSAutoreleasePool new];
    .....
    [pool drain];
}
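On compilers that support it, the same fix reads more naturally as an @autoreleasepool block (a sketch with the loop body elided; the effect is the same as draining an NSAutoreleasePool every iteration):
while (reader.status == AVAssetReaderStatusReading)
{
    @autoreleasepool {
        CMSampleBufferRef sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
        if (sampleBufferRef)
        {
            // ... build the UIImage and hand it to addFrame: as above ...
            CMSampleBufferInvalidate(sampleBufferRef);
            CFRelease(sampleBufferRef);
        }
    } // objects autoreleased during this iteration are released here
}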

Intentionally skip frames while processing video using AVFoundation

I'm trying to process a local video file and simply do some analysis on the pixel data; nothing is being output.
My current code iterates through each frame of the video, but I'd actually like to skip ~15 frames at a time to speed things up. Is there a way to skip over frames without decoding them?
In FFmpeg, I could simply call av_read_frame without calling avcodec_decode_video2.
Thanks! Here's my current code:
- (void) readMovie:(NSURL *)url
{
    [self performSelectorOnMainThread:@selector(updateInfo:) withObject:@"scanning" waitUntilDone:YES];
    startTime = [NSDate date];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
    [asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            AVAssetTrack *videoTrack = nil;
            NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
            if ([tracks count] == 1)
            {
                videoTrack = [tracks objectAtIndex:0];
                videoDuration = CMTimeGetSeconds([videoTrack timeRange].duration);
                NSError *error = nil;
                // _movieReader is a member variable
                _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
                if (error)
                    NSLog(@"%@", error.localizedDescription);
                NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
                NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
                NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
                AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput
                                                    assetReaderTrackOutputWithTrack:videoTrack
                                                    outputSettings:videoSettings];
                output.alwaysCopiesSampleData = NO;
                [_movieReader addOutput:output];
                if ([_movieReader startReading])
                {
                    NSLog(@"reading started");
                    [self readNextMovieFrame];
                }
                else
                {
                    NSLog(@"reading can't be started");
                }
            }
        });
    }];
}
- (void) readNextMovieFrame
{
    //NSLog(@"readNextMovieFrame called");
    if (_movieReader.status == AVAssetReaderStatusReading)
    {
        //NSLog(@"status is reading");
        AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer]; // this is the most expensive call
        if (sampleBuffer)
        {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer, 0);
            // Get information of the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);
            //
            // Here's where you can process the buffer!
            // (your code goes here)
            //
            // Finish processing the buffer!
            //
            // Unlock the image buffer
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            CFRelease(sampleBuffer);
            [self readNextMovieFrame];
        }
        else
        {
            NSLog(@"could not copy next sample buffer. status is %d", _movieReader.status);
            NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow];
            float scanMultiplier = videoDuration / scanDuration;
            NSString *info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier];
            [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES];
        }
    }
    else
    {
        NSLog(@"status is now %d", _movieReader.status);
    }
}
- (void) updateInfo:(id)message
{
    NSString *info = [NSString stringWithFormat:@"%@", message];
    [infoTextView setText:info];
}
Just add a BOOL flag to your code. readMovie: and updateInfo: stay exactly as in the question; only readNextMovieFrame changes:
BOOL skipFrame = FALSE;
- (void) readNextMovieFrame
{
    //NSLog(@"readNextMovieFrame called");
    if (_movieReader.status == AVAssetReaderStatusReading)
    {
        //NSLog(@"status is reading");
        AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (sampleBuffer && !skipFrame)
        {
            skipFrame = TRUE;
            // I'm guessing this is the expensive part that we can skip if we want to skip frames
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            // Lock the image buffer
            CVPixelBufferLockBaseAddress(imageBuffer, 0);
            // Get information of the image
            uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
            size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
            size_t width = CVPixelBufferGetWidth(imageBuffer);
            size_t height = CVPixelBufferGetHeight(imageBuffer);
            //
            // Here's where you can process the buffer!
            // (your code goes here)
            //
            // Finish processing the buffer!
            //
            // Unlock the image buffer
            CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            CFRelease(sampleBuffer);
        }
        else
        {
            skipFrame = FALSE;
            NSLog(@"could not copy next sample buffer. status is %d", _movieReader.status);
            NSTimeInterval scanDuration = -[startTime timeIntervalSinceNow];
            float scanMultiplier = videoDuration / scanDuration;
            NSString *info = [NSString stringWithFormat:@"Done\n\nvideo duration: %f seconds\nscan duration: %f seconds\nmultiplier: %f", videoDuration, scanDuration, scanMultiplier];
            [self performSelectorOnMainThread:@selector(updateInfo:) withObject:info waitUntilDone:YES];
        }
        [self readNextMovieFrame];
    }
    else
    {
        NSLog(@"status is now %d", _movieReader.status);
    }
}
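Note that the flag above only skips every other frame, and a skipped (copied but unprocessed) buffer should still be released. To skip roughly 15 frames at a time, as the question asks, a counter works the same way. This is a sketch, and as far as I know AVAssetReaderTrackOutput offers no way to advance without copying the sample buffer, so the copy itself cannot be avoided:
static int frameIndex = 0; // hypothetical counter kept alongside the reader

CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
if (sampleBuffer)
{
    if (frameIndex % 15 == 0)
    {
        // lock the pixel buffer and process it, as in the original code
    }
    frameIndex++;
    CFRelease(sampleBuffer); // release skipped buffers too
    [self readNextMovieFrame];
}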

How to fix a CVPixelBuffer leak

Please tell me where the leak in this code is...
// here I make the video from images in the Documents directory
- (void) testCompressionSession:(NSString *)path
{
    if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
        [[NSFileManager defaultManager] removeItemAtPath:path error:nil];
    }
    NSArray *array = [dictInfo objectForKey:@"sortedKeys"];
    NSString *betaCompressionDirectory = path;
    NSError *error = nil;
    unlink([betaCompressionDirectory UTF8String]);
    NSLog(@"array = %@", array);
    NSData *imgDataTmp = [NSData dataWithContentsOfFile:[projectPath stringByAppendingPathComponent:[array objectAtIndex:0]]];
    NSLog(@"link : %@", [projectPath stringByAppendingPathComponent:[array objectAtIndex:0]]);
    CGSize size = CGSizeMake([UIImage imageWithData:imgDataTmp].size.width, [UIImage imageWithData:imgDataTmp].size.height);
    //---- initialize compression engine
    NSLog(@"size : w : %f, h : %f", size.width, size.height);
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    if (error)
        NSLog(@"error = %@", [error localizedDescription]);
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                     sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    if ([videoWriter canAddInput:writerInput])
        NSLog(@"I can add this input");
    else
        NSLog(@"I can't add this input");
    [videoWriter addInput:writerInput];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];
    dispatch_queue_t dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
    [writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
        //BOOL isEffect = NO;
        int i = 0;
        float totalTime = 0.0f;
        float nextTime = 0;
        if ([writerInput isReadyForMoreMediaData]) {
            while (1)
            {
                if (i <= [array count] && i > 0) {
                    nextTime = [[dictInfo objectForKey:[array objectAtIndex:i - 1]] floatValue];
                }
                totalTime += i == 0 ? 0 : nextTime;
                CMTime presentTime = CMTimeMake(totalTime, 1);
                printf("presentTime : %f ", CMTimeGetSeconds(presentTime));
                if (i >= [array count])
                {
                    NSData *imgData = [NSData dataWithContentsOfFile:[projectPath stringByAppendingPathComponent:[array objectAtIndex:i - 1]]];
                    UIImage *tmpImg = [UIImage imageWithData:imgData];
                    tmpImg = [self imageWithImage:tmpImg scaledToSize:size];
                    while (!writerInput.readyForMoreMediaData)
                    {
                        sleep(0.01);
                    }
                    CVPixelBufferRef buffer = NULL;
                    buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:[tmpImg CGImage] size:size];
                    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(totalTime - nextTime + (nextTime / 2.0), 1)];
                    NSLog(@"%f", totalTime - nextTime + (nextTime / 2.0));
                    [writerInput markAsFinished];
                    [videoWriter finishWriting];
                    //CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
                    [videoWriter release];
                    break;
                } else {
                    NSData *imgData = [NSData dataWithContentsOfFile:[projectPath stringByAppendingPathComponent:[array objectAtIndex:i]]];
                    UIImage *tmpImg = [UIImage imageWithData:imgData];
                    //tmpImg = [self imageWithImage:tmpImg scaledToSize:size];
                    //UIImageWriteToSavedPhotosAlbum(tmpImg, nil, nil, nil);
                    while (!adaptor.assetWriterInput.readyForMoreMediaData && !writerInput.readyForMoreMediaData)
                    {
                        sleep(0.01);
                    }
                    CVPixelBufferRef buffer = NULL;
                    buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:[tmpImg CGImage] size:size];
                    if (buffer)
                    {
                        if (![adaptor appendPixelBuffer:buffer withPresentationTime:presentTime])
                            NSLog(@"FAIL");
                        else
                            NSLog(@"Success:%d", i);
                        CVPixelBufferRelease(buffer);
                    }
                }
                i++;
            }
        }
    }];
}
// and here I make the CVPixelBufferRef from a CGImageRef
- (CVPixelBufferRef) pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4 * size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
The leak log is:
CVObject CFRetain 00:37.957.985 2 0x1ecae0 0 CoreVideo CVPixelBufferPool::createPixelBuffer(__CFAllocator const*, __CFDictionary const*, int*)
Malloc 96 Bytes Malloc 00:40.015.872 1 0x1f0750 96 CoreVideo CVBuffer::init()
CVPixelBuffer Malloc 00:40.969.716 1 0x1f2570 96 CoreVideo CVObject::alloc(unsigned long, __CFAllocator const*, unsigned long, unsigned long)
Look here:
CVPixelBufferRef buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
CVPixelBufferLockBaseAddress(buffer, 0);
buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:[tmpImg CGImage] size:size];
First a pixel buffer gets created and its address is put into the buffer variable, then the same variable gets overwritten by pixelBufferFromCGImage, so its previous content can no longer be released.
EDIT
You've just removed the code I referred to, so my answer no longer applies.
Now about this part:
CVPixelBufferRef buffer = NULL;
buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:[tmpImg CGImage] size:size];
[adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(totalTime - nextTime + (nextTime / 2.0), 1)];
NSLog(@"%f", totalTime - nextTime + (nextTime / 2.0));
...
You have a commented-out CVPixelBufferPoolRelease(adaptor.pixelBufferPool), which is fine, since in this version you have no pixel buffer pool, but I am missing a call to CVPixelBufferRelease(buffer) here.
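A sketch of the balanced version, assuming pixelBufferFromCGImage:size: returns a +1 retained buffer (it calls CVPixelBufferCreate and never releases):
CVPixelBufferRef buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:[tmpImg CGImage] size:size];
if (buffer)
{
    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(totalTime - nextTime + (nextTime / 2.0), 1)];
    CVPixelBufferRelease(buffer); // balances the CVPixelBufferCreate inside pixelBufferFromCGImage:size:
}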

iPhone SDK: Create a video from a UIImage

I need to create a video from the selected image. I have code that should work, but it gives an error while appending the buffer.
This is how both types of images are saved:
-(void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)img editingInfo:(NSDictionary *)editInfo
{
    // NSLog(@"Came From Here");
    imgv.image = img;
    AppDelegate *app = (AppDelegate *)[[UIApplication sharedApplication] delegate];
    app.imgmain = img;
    [self dismissModalViewControllerAnimated:YES];
    RecordVoice *rec = [[RecordVoice alloc] initWithNibName:@"RecordVoice" bundle:nil];
    rec.hidesBottomBarWhenPushed = YES;
    // rec.img.image = img;
    [self.navigationController pushViewController:rec animated:YES];
    //[self presentModalViewController:rec animated:YES];
    [rec release];
    // flag = @"yes";
    // need to show the upload image button now
    // [username, ititle resignFirstResponder];
}
On the other view controller I show this image in a UIImageView, and on a button click I convert that image to a video with this code:
-(void)createVideoFile
{
    NSString *documentsDirectoryPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSArray *dirContents = [[NSFileManager defaultManager] contentsOfDirectoryAtPath:documentsDirectoryPath error:nil];
    for (NSString *tString in dirContents) {
        if ([tString isEqualToString:@"test.mp4"])
        {
            [[NSFileManager defaultManager] removeItemAtPath:[NSString stringWithFormat:@"%@/%@", documentsDirectoryPath, tString] error:nil];
        }
    }
    NSString *nfile = [documentsDirectoryPath stringByAppendingPathComponent:@"test.mp4"];
    AVURLAsset *urlAsset = [AVURLAsset URLAssetWithURL:recordedTmpFile options:nil];
    NSLog(@"Write Started");
    NSError *error = nil;
    CGSize size = img.image.size; //CGSizeMake(320, 480);
    NSLog(@"Write Started");
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:nfile]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];
    AVAssetWriterInput *videoWriterInput = [[AVAssetWriterInput
                                             assetWriterInputWithMediaType:AVMediaTypeVideo
                                             outputSettings:videoSettings] retain];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                     sourcePixelBufferAttributes:nil];
    NSParameterAssert(videoWriterInput);
    NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
    videoWriterInput.expectsMediaDataInRealTime = YES;
    [videoWriter addInput:videoWriterInput];
    // Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];
    CVPixelBufferRef buffer = NULL;
    // convert UIImage to CGImage.
    int frameCount = 0;
    buffer = [self pixelBufferFromCGImage:[img.image CGImage] andSize:size];
    BOOL append_ok = NO;
    int j = 0;
    while (!append_ok && j < 30)
    {
        if (adaptor.assetWriterInput.readyForMoreMediaData)
        {
            printf("appending %d attemp %d\n", frameCount, j);
            CMTime frameTime = urlAsset.duration; //CMTimeMake(frameCount, (int32_t)10);
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            //if (buffer)
            //    CVBufferRelease(buffer);
            [NSThread sleepForTimeInterval:0.05];
        }
        else
        {
            printf("adaptor not ready %d, %d\n", frameCount, j);
            [NSThread sleepForTimeInterval:0.1];
        }
        j++;
    }
    if (!append_ok) {
        printf("error appending image %d times %d\n", frameCount, j);
    }
    frameCount++;
    [videoWriterInput markAsFinished];
    [videoWriter finishWriting];
    [self CompileFilesToMakeMovie];
    [altv dismissWithClickedButtonIndex:0 animated:YES];
    [altv release];
    NSLog(@"Write Ended");
}
But this is not working... I am stuck. Can anyone please help me with this? :(
I have figured out the problem: if we use a large image it won't work, and pictures taken with the camera app are large. So I scale the image down to a smaller size first, and then it works. I haven't yet figured out why it behaves this way, but I have a solution.
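For reference, one common way to do that downscaling before handing the image to pixelBufferFromCGImage:andSize: (a sketch; the target size is the caller's choice, and memory management matches the pre-ARC style of the rest of the code):
- (UIImage *)scaledImage:(UIImage *)image toSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext(); // autoreleased
    UIGraphicsEndImageContext();
    return scaled;
}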