Memory leak from CGBitmapContextCreateImage - iPhone

Instruments reports a leak caused by CGBitmapContextCreateImage in my resizedImage method. However, after a lot of research and trial and error, I have come to the conclusion that the leak actually originates somewhere else in the call chain.
The call chain is as follows:
takeFoto -> saveFoto -> setImage_bg -> bolly -> resizedImage
Here is all the related code:
-(void)takeFoto
{
[stillImageOutput captureStillImageAsynchronouslyFromConnection:self.videoConnection completionHandler:
^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
NSData* idata = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];//[od copy];
UIImage *image = [UIImage imageWithData:idata];
CGImageRef cgi = [image CGImage];
CGImageRef cgi2 = CGImageCreateWithImageInRect(cgi, CGRectMake(0, 0, CGImageGetWidth(cgi), CGImageGetHeight(cgi)));
UIImageOrientation iori;
if(self.devOri==UIInterfaceOrientationLandscapeRight)
{
if([self isFrontCamera]) iori = UIImageOrientationDownMirrored;
else iori = UIImageOrientationUp;
}
else if(self.devOri==UIInterfaceOrientationLandscapeLeft)
{
if([self isFrontCamera]) iori = UIImageOrientationUpMirrored;
else iori = UIImageOrientationDown;
}
else if(self.devOri==UIInterfaceOrientationPortraitUpsideDown)
{
if([self isFrontCamera]) iori = UIImageOrientationRightMirrored;
else iori = UIImageOrientationLeft;
}
else
{
if([self isFrontCamera]) iori = UIImageOrientationLeftMirrored;
else iori = UIImageOrientationRight;
}
UIImage *scaledImage = [[UIImage alloc] initWithCGImage:cgi2 scale:1 orientation:iori];
CGImageRelease(cgi2);
self.foto = scaledImage;
[scaledImage release];
[parent saveFoto];
}];
}
-(void)saveFoto
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
[self setImage_bg:captureManager.foto];
[pool release];
}
-(void)setImage_bg:(UIImage*)oimg
{
CGSize sz = savePreview.frame.size;
UIImage* img = [oimg copy];
[savePreview setImage:[filters resizeImage:img size:sz]];
sz = CGSizeMake(64, 64);
UIImage* img2 = [filters resizeImage:img size:sz];
bolPrv.image = [filters bolly:img2];
[img release];
}
//filters bolly
-(UIImage*)bolly:(UIImage*)img
{
CIImage *beginImage = [CIImage imageWithCGImage:img.CGImage];
CIContext *context = [CIContext contextWithOptions:nil];
UIImage* bb = [UIImage fromFile:@"bollywoodBlend3.png"];
UIImage* bb2 = [bb resizedImage:img.size interpolationQuality:kCGInterpolationMedium];
CIFilter *filter = [CIFilter filterWithName:@"CIOverlayBlendMode"
keysAndValues: kCIInputImageKey, beginImage,
@"inputBackgroundImage", [CIImage imageWithCGImage:bb2.CGImage], nil];
CIImage *outputImage = filter.outputImage;
filter = [CIFilter filterWithName:@"CIColorControls"
keysAndValues: kCIInputImageKey, outputImage,
@"inputSaturation", [NSNumber numberWithFloat:1.8],
@"inputBrightness", [NSNumber numberWithFloat:0.1],
@"inputContrast", [NSNumber numberWithFloat:1.5],
nil];
outputImage = filter.outputImage;
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
UIImage *newImg = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
return [newImg autorelease];
}
// filters resizeImage (same code used for resizedImage)
- (UIImage *)resizeImage:(UIImage*)img size:(CGSize)newSize
{
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGImageRef imageRef = img.CGImage;
// Build a context that's the same dimensions as the new size
CGColorSpaceRef csr = CGImageGetColorSpace(imageRef);
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(imageRef),
4*newRect.size.width,
csr,
CGImageGetBitmapInfo(imageRef));
// Draw into the context; this scales the image
CGContextDrawImage(bitmap, newRect, imageRef);
// CGImageSourceCreateThumbnailAtIndex
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef scale:img.scale orientation:img.imageOrientation];
// Clean up
CGImageRelease(newImageRef);
CGContextRelease(bitmap);
return newImage;
}
- (UIImage *)resizedImage:(CGSize)newSize
transform:(CGAffineTransform)transform
drawTransposed:(BOOL)transpose
interpolationQuality:(CGInterpolationQuality)quality
{
CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
CGRect transposedRect = CGRectMake(0, 0, newRect.size.height, newRect.size.width);
CGImageRef imageRef = self.CGImage;
// Build a context that's the same dimensions as the new size
CGColorSpaceRef csr = CGImageGetColorSpace(imageRef);
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(imageRef),
4*newRect.size.width,
csr,
CGImageGetBitmapInfo(imageRef));
// Rotate and/or flip the image if required by its orientation
CGContextConcatCTM(bitmap, transform);
// Set the quality level to use when rescaling
CGContextSetInterpolationQuality(bitmap, quality);
// Draw into the context; this scales the image
CGContextDrawImage(bitmap, transpose ? transposedRect : newRect, imageRef);
// CGImageSourceCreateThumbnailAtIndex
// Get the resized image from the context and a UIImage
CGImageRef newImageRef = CGBitmapContextCreateImage(bitmap);
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
// Clean up
CGImageRelease(newImageRef);
CGContextRelease(bitmap);
return newImage;
}
// UIImage from file
+(UIImage*)fromFile:(NSString*)fname
{
NSString* bundlePath = [[NSBundle mainBundle] bundlePath];
return [UIImage imageWithContentsOfFile:[NSString stringWithFormat:@"%@/%@", bundlePath, fname]];
}

Related

How to save image from NSData after doing the action of appendData?

I want to save an image from NSData after appending additional bytes to an NSMutableData. Below is the sample code for my requirement.
NSData *sourceData = UIImageJPEGRepresentation([info objectForKey:@"UIImagePickerControllerOriginalImage"], 1);
NSMutableData *concatenatedData = [NSMutableData data];
[concatenatedData appendData:sourceData];
[concatenatedData appendData:sourceData];
UIImage *myFinalImage = [[UIImage alloc] initWithData:concatenatedData];
UIImageWriteToSavedPhotosAlbum(myFinalImage, self, nil, nil);
I am appending the sourceData two times, but my final image is saved with only one copy of the sourceData bytes.
Am I missing something here?
Appending the same JPEG data twice does not give you a combined image; UIImage only decodes the first image in the data. If you want both images in one picture, draw them into a single graphics context. Here is a sample:
UIImage *myFinalImage1 = [[UIImage alloc] initWithData:concatenatedData];
UIImage *myFinalImage2 = [[UIImage alloc] initWithData:concatenatedData];
// Set up width and height with your own values.
CGSize newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext( newSize );
[myFinalImage1 drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
[myFinalImage2 drawInRect:CGRectMake(newSize.width,newSize.height,newSize.width,newSize.height*2) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *mergedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Application crashes when making a video from images

Please find the code below.
-(void) writeImagesAsMovie:(NSArray *)array toPath:(NSString*)path numPhoto:(NSInteger)totPics {
ALAsset *asset = [assets objectAtIndex:0];
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
UIImage *getImage = [UIImage imageWithCGImage:[assetRepresentation fullScreenImage] scale:[assetRepresentation scale] orientation:(UIImageOrientation)[assetRepresentation orientation]];
UIImage *first = [getImage imageByScalingAndCroppingForSize:CGSizeMake(720.0, 960.0)];
CGSize frameSize = CGSizeMake(first.size.width,first.size.height);
NSLog(@"frameSize = %@",NSStringFromCGSize(frameSize));
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
error:&error];
if(error) {
NSLog(@"error creating AssetWriter: %@",[error description]);
}
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:frameSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:frameSize.height], AVVideoHeightKey,
AVVideoScalingModeResizeAspect,AVVideoScalingModeKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
CGAffineTransform transform = CGAffineTransformIdentity;
UIImageOrientation orient = first.imageOrientation;
CGSize imageSize = first.size;
switch(orient) {
case UIImageOrientationUp: //EXIF = 1
transform = CGAffineTransformIdentity;
break;
case UIImageOrientationUpMirrored: //EXIF = 2
transform = CGAffineTransformMakeTranslation(imageSize.width, 0.0);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
break;
case UIImageOrientationDown: //EXIF = 3
transform = CGAffineTransformMakeTranslation(imageSize.width, imageSize.height);
transform = CGAffineTransformRotate(transform, M_PI);
break;
case UIImageOrientationDownMirrored: //EXIF = 4
transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
transform = CGAffineTransformScale(transform, 1.0, -1.0);
break;
case UIImageOrientationLeftMirrored: //EXIF = 5
transform = CGAffineTransformMakeTranslation(imageSize.height, imageSize.width);
transform = CGAffineTransformScale(transform, -1.0, 1.0);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationLeft: //EXIF = 6
transform = CGAffineTransformMakeTranslation(0.0, imageSize.width);
transform = CGAffineTransformRotate(transform, 3.0 * M_PI / 2.0);
break;
case UIImageOrientationRightMirrored: //EXIF = 7
transform = CGAffineTransformMakeScale(-1.0, 1.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
case UIImageOrientationRight: //EXIF = 8
transform = CGAffineTransformMakeTranslation(imageSize.height, 0.0);
transform = CGAffineTransformRotate(transform, M_PI / 2.0);
break;
default:
[NSException raise:NSInternalInconsistencyException format:@"Invalid image orientation"];
}
writerInput.transform = transform;
NSMutableDictionary *attributes = [[NSMutableDictionary alloc] init];
[attributes setObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
[attributes setObject:[NSNumber numberWithUnsignedInt:frameSize.width] forKey:(NSString*)kCVPixelBufferWidthKey];
[attributes setObject:[NSNumber numberWithUnsignedInt:frameSize.height] forKey:(NSString*)kCVPixelBufferHeightKey];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:attributes];
[videoWriter addInput:writerInput];
// fixes all errors
writerInput.expectsMediaDataInRealTime = YES;
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
buffer = [self pixelBufferFromCGImage:[first CGImage]];
BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
if (result == NO)
NSLog(@"failed to append buffer");
if(buffer) {
CVBufferRelease(buffer);
}
int fps = 2;
for(int i=0; i<totPics; i++)
{
if (adaptor.assetWriterInput.readyForMoreMediaData) {
CMTime frameTime = CMTimeMake(1, fps);
CMTime lastTime = CMTimeMake(i, fps);
CMTime presentTime = CMTimeAdd(lastTime, frameTime);
NSLog(@"presentTime = %f",CMTimeGetSeconds(presentTime));
ALAsset *asset = [assets objectAtIndex:i];
ALAssetRepresentation *assetRepresentation = [asset defaultRepresentation];
UIImage *imgGetFrame = [UIImage imageWithCGImage:[assetRepresentation fullScreenImage] scale:[assetRepresentation scale] orientation:(UIImageOrientation)[assetRepresentation orientation]];
UIImage *imgFrame = [imgGetFrame imageByScalingAndCroppingForSize:CGSizeMake(720.0, 960.0)];
buffer = [self pixelBufferFromCGImage:[imgFrame CGImage]];
BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
if (result == NO) // fails on 3GS, but works on iPhone 4
{
NSLog(@"failed to append buffer");
NSLog(@"The error is %@", [videoWriter error]);
}
if(buffer) {
CVBufferRelease(buffer);
}
} else {
NSLog(@"error");
}
}
//Finish the session:
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(@"Complete");
}];
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
}
-(CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image {
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Cropped Image method
-(UIImage*)imageByScalingAndCroppingForSize:(CGSize)targetSize {
UIImage *sourceImage = self;
UIImage *newImage = nil;
CGSize imageSize = sourceImage.size;
CGFloat width = imageSize.width;
CGFloat height = imageSize.height;
CGFloat targetWidth = targetSize.width;
CGFloat targetHeight = targetSize.height;
CGFloat scaleFactor = 0.0;
CGFloat scaledWidth = targetWidth;
CGFloat scaledHeight = targetHeight;
CGPoint thumbnailPoint = CGPointMake(0.0,0.0);
if (CGSizeEqualToSize(imageSize, targetSize) == NO)
{
CGFloat widthFactor = targetWidth / width;
CGFloat heightFactor = targetHeight / height;
if (widthFactor > heightFactor)
{
scaleFactor = widthFactor; // scale to fit height
}
else
{
scaleFactor = heightFactor; // scale to fit width
}
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
// center the image
if (widthFactor > heightFactor)
{
thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
}
else
{
if (widthFactor < heightFactor)
{
thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
}
}
}
UIGraphicsBeginImageContext(targetSize); // this will crop
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = thumbnailPoint;
thumbnailRect.size.width = scaledWidth;
thumbnailRect.size.height = scaledHeight;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
if(newImage == nil) {
NSLog(@"could not scale image");
}
//pop the context to get back to the default
UIGraphicsEndImageContext();
return newImage;
}
I pass the images from the user's photo library and add each image after cropping it to 720 x 960.
When I use 100 images I get a memory warning, and when I check the application in Instruments it uses around 400 MB. So please help me if anyone has an idea what I am doing wrong.
The problem is that you are using up all app memory in your video processing loop. You cannot just allocate hundreds of images in memory at the same time; the code will crash when run on an iOS device. Please read my blog post on the subject at video_and_memory_usage_on_ios_devices, and then change your for loop so that an autorelease pool is created for each iteration of the loop to fix the runaway memory usage. Also note that kCVPixelFormatType_32ARGB will be very slow; you should use kCVPixelFormatType_32BGRA.
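A minimal sketch of that change, assuming the per-frame work stays exactly as in the loop above and only the @autoreleasepool wrapper is new (@autoreleasepool works under both MRC and ARC):
for(int i=0; i<totPics; i++)
{
@autoreleasepool {
// ... existing per-frame work: fetch the ALAsset, build and crop the UIImage,
// create the pixel buffer, and append it to the adaptor ...
// The autoreleased UIImages and asset data are drained here once per iteration,
// instead of accumulating until the whole loop finishes.
}
}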

Capture overlay image without setting drawInRect

I need to capture the overlay image without setting drawInRect. When I set drawInRect, the output comes out at the size I set. I need to take the picture at a new size without using the following code.
- (void)captureStillImageWithOverlay:(UIImage*)overlay
{
AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in [[self stillImageOutput] connections]) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
videoConnection = connection;
break;
}
}
if (videoConnection) {
break;
}
}
NSLog(@"about to request a capture from: %@", [self stillImageOutput]);
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:videoConnection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
NSLog(@"attachments: %@", exifAttachments);
} else {
NSLog(@"no attachments");
}
NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *image = [[UIImage alloc] initWithData:imageData];
CGSize imageSize = [image size];
CGSize overlaySize = [overlay size];
UIGraphicsBeginImageContext(imageSize);
[image drawInRect:CGRectMake(0, 0, imageSize.width, imageSize.height)];
CGFloat xScaleFactor = imageSize.width / 320;
CGFloat yScaleFactor = imageSize.height / 480;
[overlay drawInRect:CGRectMake(30 * xScaleFactor, 100 * yScaleFactor, overlaySize.width * xScaleFactor, overlaySize.height * yScaleFactor)]; // rect used in AROverlayViewController was (30,100,260,200)
// [overlay drawInRect:CGRectMake(30 * xScaleFactor, 100 * yScaleFactor, overlaySize.width * xScaleFactor, overlaySize.width* yScaleFactor)];
UIImage *combinedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// NSData * data = UIImagePNGRepresentation(image);
// [data writeToFile:@"foo.png" atomically:YES];
[self setStillImage:combinedImage];
[image release];
[[NSNotificationCenter defaultCenter] postNotificationName:kImageCapturedSuccessfully object:nil];
}];
}
Try this code:
UIGraphicsBeginImageContext(self.view.bounds.size);
// retrieve the current graphics context
CGContextRef context = UIGraphicsGetCurrentContext();
// render view into context
[self.view.layer renderInContext:context];
// create image from context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
image=[self cropImage:image];
UIGraphicsEndImageContext();
- (UIImage *)cropImage:(UIImage *)oldImage
{
CGSize imageSize = oldImage.size;
UIGraphicsBeginImageContextWithOptions(CGSizeMake( imageSize.width,imageSize.height - 150),NO,0.);
[oldImage drawAtPoint:CGPointMake( 0, -80) blendMode:kCGBlendModeCopy alpha:1.];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return croppedImage;
}

CVPixelBuffer leak

I create and use a CVPixelBuffer like so:
//Create buffer
CVPixelBufferRef conversionBuffer = nil;
if (indx < [self.sideImageList count] - 1)
{
UIImageView *tempImageView = [self.sideImageList objectAtIndex:indx + 1];
CGRect originalBackgroundFrame = [tempImageView frame];
UIView *tempView = [tempImageView.subviews objectAtIndex:0];
[tempView removeFromSuperview];
tempImageView.layer.borderColor = [UIColor clearColor].CGColor;
[tempImageView setFrame:CGRectMake(0, 0, kVideoWidth, kVideoHeight)];
UIImage *backgroundImage = [UIImage imageFromView:tempImageView];
[tempImageView addSubview:tempView];
tempImageView.layer.borderColor = [UIColor yellowColor].CGColor;
[tempImageView setFrame:originalBackgroundFrame];
UIImage *resizedBackground = [ICVideoViewController imageWithImage:backgroundImage
scaledToSize:CGSizeMake(kVideoWidth, kVideoHeight)];
UIImage *appendedImage = [ICVideoViewController appendImage:resizedBackground
to:conversionImage
atPoint:CGPointMake((kVideoWidth - conversionImage.size.width)/2,
(kVideoHeight - conversionImage.size.height)/2)
otherPoint:CGPointZero];
//Fill Buffer
conversionBuffer = [ICVideoViewController pixelBufferFromCGImage:[appendedImage CGImage] size:CGSizeMake(kVideoWidth, kVideoHeight)];
}
else
{
//Fill buffer
conversionBuffer = [ICVideoViewController pixelBufferFromCGImage:[conversionImage CGImage] size:CGSizeMake(kVideoWidth, kVideoHeight)];
}
while (adaptor.assetWriterInput.readyForMoreMediaData==NO)
{
[NSThread sleepForTimeInterval:0.1];
}
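// Note: when indx is the last index, the break below exits the loop before the
// appendPixelBuffer: call and the CVPixelBufferRelease at the bottom, so the
// buffer created just above is never released on that final pass.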
if (indx == [self.sideImageList count] - 1)
{
break;
}
[adaptor appendPixelBuffer:conversionBuffer withPresentationTime:CMTimeMake((frameRate*i)+frames-1, frameRate)];
while (adaptor.assetWriterInput.readyForMoreMediaData==NO)
{}
//release buffer
CVPixelBufferRelease(conversionBuffer);
The method used to populate the buffer is below. I got this method from somewhere, but there doesn't seem to be anything wrong with it:
+ (CVPixelBufferRef) pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I seem to be getting a leak in this method (courtesy of the Leaks tool), and I still don't know what I'm doing wrong.

iPhone Realtime Image Processing using OpenCV and AVFoundation Frameworks?

I want to do image processing in real time using OpenCV.
My final goal is to show the result on the screen in real time while the camera is capturing the video using the AVFoundation framework.
How can I process every video frame by OpenCV, and show the result on the screen in real time?
Use AVAssetReader
//Setup Reader
AVURLAsset * asset = [AVURLAsset URLAssetWithURL:urlvalue options:nil];
[asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{ dispatch_async(dispatch_get_main_queue(), ^{
AVAssetTrack * videoTrack = nil;
NSArray * tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
if ([tracks count] == 1) {
videoTrack = [tracks objectAtIndex:0];
NSError * error = nil;
_movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
if (error)
NSLog(@"%@", error.localizedDescription);
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_4444AYpCbCr16];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[_movieReader addOutput:[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings]];
[_movieReader startReading];
}
});
}];
To get the next movie frame:
static int frameCount=0;
- (void) readNextMovieFrame {
if (_movieReader.status == AVAssetReaderStatusReading) {
AVAssetReaderTrackOutput * output = [_movieReader.outputs objectAtIndex:0];
CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
if (sampleBuffer) {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the image buffer
CVPixelBufferLockBaseAddress(imageBuffer,0);
// Get information of the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
/*We unlock the image buffer*/
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
/*Create a CGImageRef from the CVImageBufferRef*/
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
/*We release some components*/
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
/*We display the result on the custom layer*/
/*self.customLayer.contents = (id) newImage;*/
/*We display the result on the image view (We need to change the orientation of the image so that the video is displayed correctly)*/
UIImage *image= [UIImage imageWithCGImage:newImage scale:0.0 orientation:UIImageOrientationRight];
UIGraphicsBeginImageContext(image.size);
[image drawAtPoint:CGPointMake(0, 0)];
// UIImage *img=UIGraphicsGetImageFromCurrentImageContext();
videoImage=UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//videoImage=image;
// if (frameCount < 40) {
NSLog(@"readNextMovieFrame==%d",frameCount);
NSString* filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount];
NSString* pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
[UIImagePNGRepresentation(videoImage) writeToFile: pngPath atomically: YES];
frameCount++;
// }
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CFRelease(sampleBuffer);
}
}
}
Once your _movieReader reaches the end, you need to restart it again.
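A minimal sketch of such a restart, assuming the setup code above is factored into a hypothetical -setupMovieReaderWithAsset: helper (that method name is an assumption, not part of the original code):
if (_movieReader.status == AVAssetReaderStatusCompleted)
{
// The reader cannot be reused once it has finished; release it and build a fresh one.
[_movieReader release]; // under MRC; omit under ARC
_movieReader = nil;
[self setupMovieReaderWithAsset:asset]; // recreate the reader, add its track output, and call startReading again
}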