Getting frame rate from AVCaptureVideoDataOutput - iPhone

I'm using AVCaptureVideoDataOutput to grab frames, process them, and then write them to a MOV file using AVAssetWriter. I understand that to set the minimum frame duration (i.e., cap the frame rate) for the data output, I need only write
myDataOutput.minFrameDuration = someCMTime;
How can I get the actual frame duration for a given cycle of
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
I need the actual duration so that I can input an accurate duration time for the asset writer. I've been playing around with using CMSampleBufferGetDuration(sampleBuffer), but with limited success. Any idea how to get this value?
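One way to measure the actual duration of each frame is to ignore CMSampleBufferGetDuration and instead take the difference between successive presentation timestamps. A minimal sketch, assuming a lastPresentationTime instance variable initialized to kCMTimeInvalid:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (CMTIME_IS_VALID(lastPresentationTime)) {
        // The duration of the previous frame is the gap between the two timestamps.
        CMTime actualDuration = CMTimeSubtract(pts, lastPresentationTime);
        NSLog(@"frame duration: %f s", CMTimeGetSeconds(actualDuration));
    }
    lastPresentationTime = pts;
}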
Here is my current captureOutput method implementation:
- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if ([writerInput isReadyForMoreMediaData])
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        if (imageBuffer) {
            totalTime = CMTimeAdd(totalTime, CMSampleBufferGetDuration(sampleBuffer));
            if ([adaptor appendPixelBuffer:imageBuffer withPresentationTime:totalTime]) {
                NSLog(@"frame added");
                CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
            } else {
                NSLog(@"frame NOT added");
            }
        } else {
            NSLog(@"no buffer");
        }
    } else {
        NSLog(@"writerinput not ready");
    }
}

In the end, the problem was that I was using AVAssetWriterInputPixelBufferAdaptor, which requires the caller to supply the presentation time. Instead, I ended up adding frames directly by calling appendSampleBuffer: on the AVAssetWriterInput.
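For reference, a minimal sketch of that approach (assuming writerInput is an AVAssetWriterInput and the writer session has already been started at the first buffer's timestamp); appendSampleBuffer: uses the timing information already attached to the sample buffer, so no manual duration bookkeeping is needed:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (![writerInput isReadyForMoreMediaData]) {
        return;
    }
    // The sample buffer already carries its presentation time and duration.
    if (![writerInput appendSampleBuffer:sampleBuffer]) {
        NSLog(@"frame NOT added");
    }
}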

Related

What are these extra bytes coming from the iPhone camera in portrait mode?

When I get a frame from - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection I am getting back the following data:
BytesPerRow: 1,472
Length: 706,560
Height: 480
Width: 360
Format: BGRA
This is from the front camera on an iPhone 6 plus.
This doesn't make sense, because bytes per row should be width * channels (channels in this case is 4). However, it's (width + 8) * channels. Where are these extra bytes coming from?
Here's my code:
When attaching the output to the session, I set the orientation to portrait:
bool attachOutputToSession(AVCaptureSession *session, id cameraDelegate)
{
    assert(cameraDelegate);
    AVCaptureVideoDataOutput *m_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // create a queue for capturing frames
    dispatch_queue_t captureQueue = dispatch_queue_create("captureQueue", DISPATCH_QUEUE_SERIAL);
    // Use the AVCaptureVideoDataOutputSampleBufferDelegate capabilities of CameraDelegate:
    [m_videoOutput setSampleBufferDelegate:cameraDelegate queue:captureQueue];
    // set up the video outputs
    m_videoOutput.alwaysDiscardsLateVideoFrames = YES;
    NSNumber *framePixelFormat = [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]; // This crashes with 24RGB b/c that isn't supported on iPhone
    m_videoOutput.videoSettings = [NSDictionary dictionaryWithObject:framePixelFormat forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    // Check if it already has an output from a previous session
    if ([session canAddOutput:m_videoOutput])
    {
        [session addOutput:m_videoOutput];
    }
    // set connection settings
    for (AVCaptureConnection *connection in m_videoOutput.connections)
    {
        if (connection.isVideoMirroringSupported)
            connection.videoMirrored = true;
        if (connection.isVideoOrientationSupported)
            connection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
    return true;
}
When I set the orientation to LandscapeRight I do not have this issue. The bytes per row is equal to width*channels.
Here's where I'm getting the numbers mentioned above:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
}
OK, it turns out this is the image "stride". If the image width is not divisible by the alignment chosen for the buffer, extra padding is added to each row. The portrait image I receive is 360x480, and since 360 is not divisible by 16, the width is padded out by 8 extra pixels per row (the 32 extra bytes seen in the numbers above). The alignment in this case is 16.
I was not having this issue before because 480 is divisible by 16.
You can get this number by calling CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1).
What's weird, though, is that it returns 0 the first time, 1 the second time, and so on until it reaches the real padding value (8). Then it returns 0 again on the ninth image.
According to rpappalax on this page http://gstreamer-devel.966125.n4.nabble.com/iOS-capture-problem-td4656685.html
The stride is effectively CVPixelBufferGetBytesPerRowOfPlane() and
includes padding (if any). When no padding is present
CVPixelBufferGetBytesPerRowOfPlane() will be equal to
CVPixelBufferGetWidth(), otherwise it'll be greater.
Although that wasn't exactly my experience.
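Whichever call you use to read the stride, the robust way to walk the pixels is to advance by CVPixelBufferGetBytesPerRow() per row rather than assuming width * 4. A minimal sketch that copies the buffer into tightly packed memory (variable names are mine):

CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); // includes any padding
size_t width  = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t packedRowBytes = width * 4;                             // BGRA, no padding
uint8_t *packed = malloc(packedRowBytes * height);
for (size_t row = 0; row < height; row++) {
    // Copy only the meaningful bytes of each row and skip the per-row padding.
    memcpy(packed + row * packedRowBytes, src + row * bytesPerRow, packedRowBytes);
}
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
// ... use packed, then free(packed);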

Convert CVImageBufferRef to CVPixelBufferRef

I am new to iOS programming and multimedia, and I was going through a sample project named RosyWriter provided by Apple at this link. There I saw that the code contains a function named captureOutput:didOutputSampleBuffer:fromConnection, shown below:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);

    if ( connection == videoConnection ) {
        // Get framerate
        CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
        [self calculateFramerateAtTimestamp:timestamp];

        // Get frame dimensions (for onscreen display)
        if (self.videoDimensions.width == 0 && self.videoDimensions.height == 0)
            self.videoDimensions = CMVideoFormatDescriptionGetDimensions( formatDescription );

        // Get buffer type
        if ( self.videoType == 0 )
            self.videoType = CMFormatDescriptionGetMediaSubType( formatDescription );

        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // Synchronously process the pixel buffer to de-green it.
        [self processPixelBuffer:pixelBuffer];

        // Enqueue it for preview. This is a shallow queue, so if image processing is taking too long,
        // we'll drop this frame for preview (this keeps preview latency low).
        OSStatus err = CMBufferQueueEnqueue(previewBufferQueue, sampleBuffer);
        if ( !err ) {
            dispatch_async(dispatch_get_main_queue(), ^{
                CMSampleBufferRef sbuf = (CMSampleBufferRef)CMBufferQueueDequeueAndRetain(previewBufferQueue);
                if (sbuf) {
                    CVImageBufferRef pixBuf = CMSampleBufferGetImageBuffer(sbuf);
                    [self.delegate pixelBufferReadyForDisplay:pixBuf];
                    CFRelease(sbuf);
                }
            });
        }
    }

    CFRetain(sampleBuffer);
    CFRetain(formatDescription);
    dispatch_async(movieWritingQueue, ^{
        if ( assetWriter ) {
            BOOL wasReadyToRecord = (readyToRecordAudio && readyToRecordVideo);

            if (connection == videoConnection) {
                // Initialize the video input if this is not done yet
                if (!readyToRecordVideo)
                    readyToRecordVideo = [self setupAssetWriterVideoInput:formatDescription];
                // Write video data to file
                if (readyToRecordVideo && readyToRecordAudio)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeVideo];
            }
            else if (connection == audioConnection) {
                // Initialize the audio input if this is not done yet
                if (!readyToRecordAudio)
                    readyToRecordAudio = [self setupAssetWriterAudioInput:formatDescription];
                // Write audio data to file
                if (readyToRecordAudio && readyToRecordVideo)
                    [self writeSampleBuffer:sampleBuffer ofType:AVMediaTypeAudio];
            }

            BOOL isReadyToRecord = (readyToRecordAudio && readyToRecordVideo);
            if ( !wasReadyToRecord && isReadyToRecord ) {
                recordingWillBeStarted = NO;
                self.recording = YES;
                [self.delegate recordingDidStart];
            }
        }
        CFRelease(sampleBuffer);
        CFRelease(formatDescription);
    });
}
Here a function named pixelBufferReadyForDisplay is called, which expects a parameter of type CVPixelBufferRef.
Prototype of pixelBufferReadyForDisplay
- (void)pixelBufferReadyForDisplay:(CVPixelBufferRef)pixelBuffer;
But in the code above, when this function is called, the variable pixBuf, which is of type CVImageBufferRef, is passed to it.
So my question is: isn't a function call or a typecast required to convert a CVImageBufferRef to a CVPixelBufferRef, or is this done implicitly by the compiler?
Thanks.
If you do a search on CVPixelBufferRef in the Xcode docs, you'll find the following:
typedef CVImageBufferRef CVPixelBufferRef;
So a CVImageBufferRef is a synonym for a CVPixelBufferRef. They are interchangeable.
You are looking at some pretty gnarly code. RosyWriter and another sample app called "Chromakey" do some pretty low-level processing on pixel buffers. If you're new to iOS development AND new to multimedia, you might not want to dig so deep so fast. It's a bit like a first-year medical student trying to perform a heart-lung transplant.

AVCaptureDeviceOutput not calling delegate method captureOutput

I am building an iOS application (my first) that processes video still frames on the fly. To dive into this, I followed an example from the AV* documentation from Apple.
The process involves setting up an input (the camera) and an output. The output works with a delegate, which in this case is the controller itself (it conforms and implements the method needed).
The problem I am having is that the delegate method never gets called. The code below is the implementation of the controller and it has a couple of NSLogs. I can see the "started" message, but the "delegate method called" never shows.
This code is all within a controller that implements the "AVCaptureVideoDataOutputSampleBufferDelegate" protocol.
- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize AV session
    AVCaptureSession *session = [AVCaptureSession new];
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
        [session setSessionPreset:AVCaptureSessionPreset640x480];
    else
        [session setSessionPreset:AVCaptureSessionPresetPhoto];

    // Initialize back camera input
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if ([session canAddInput:input]) {
        [session addInput:input];
    }

    // Initialize image output
    AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];
    NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [output setVideoSettings:rgbOutputSettings];
    [output setAlwaysDiscardsLateVideoFrames:YES]; // discard if the data output queue is blocked (as we process the still image)
    //[output addObserver:self forKeyPath:@"capturingStillImage" options:NSKeyValueObservingOptionNew context:@"AVCaptureStillImageIsCapturingStillImageContext"];
    videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [output setSampleBufferDelegate:self queue:videoDataOutputQueue];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }
    [[output connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];

    [session startRunning];
    NSLog(@"started");
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSLog(@"delegate method called");
    CGImageRef cgImage = [self imageFromSampleBuffer:sampleBuffer];
    self.theImage.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}
Note: I'm building with iOS 5.0 as a target.
Edit:
I've found a question that, although asking for a solution to a different problem, is doing exactly what my code is supposed to do. I copied the code from that question verbatim into a blank Xcode app and added NSLogs to the captureOutput function, and it doesn't get called. Is this a configuration issue? Is there something I'm missing?
Your session is a local variable, so its scope is limited to viewDidLoad. Since this is a new project, I assume it's safe to say that you're using ARC. In that case the object won't leak and therefore live on as it would have done in the linked question; rather, the compiler will ensure the object is deallocated before viewDidLoad exits.
Hence your session isn't running because it no longer exists.
(Aside: the self.theImage.image = ... line is unsafe since it performs a UIKit action off the main queue; you probably want to dispatch_async that over to dispatch_get_main_queue().)
So, sample corrections:
@implementation YourViewController
{
    AVCaptureSession *session;
}

- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize AV session
    session = [AVCaptureSession new];
    if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone)
        [session setSessionPreset:AVCaptureSessionPreset640x480];
    else
        /* ... etc ... */
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    NSLog(@"delegate method called");

    CGImageRef cgImage = [self imageFromSampleBuffer:sampleBuffer];

    dispatch_sync(dispatch_get_main_queue(), ^{
        self.theImage.image = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
    });
}
Most people advocate using an underscore at the beginning of instance variable names nowadays but I omitted it for simplicity. You can use Xcode's built in refactor tool to fix that up after you've verified that the diagnosis is correct.
I moved the CGImageRelease inside the block sent to the main queue to ensure its lifetime extends beyond its capture into a UIImage. I'm not immediately able to find any documentation to confirm that CoreFoundation objects have their lifetime automatically extended when captured in a block.
I've found one more reason why the didOutputSampleBuffer: delegate method may not be called: the save-to-file and get-sample-buffer output connections are mutually exclusive. In other words, if your session already has an AVCaptureMovieFileOutput and you then add an AVCaptureVideoDataOutput, only the AVCaptureFileOutputRecordingDelegate methods are called.
Just for reference, I couldn't find an explicit description of this limitation anywhere in the AV Foundation framework documentation, but Apple support confirmed it a few years ago, as noted in this SO answer.
One way to solve the problem is to remove AVCaptureMovieFileOutput entirely and manually write the recorded frames to a file in the didOutputSampleBuffer delegate method, alongside your custom buffer processing. You may find these two SO answers useful.
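A rough sketch of that approach (assetWriter and videoWriterInput are placeholder names of my own; error handling omitted):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (assetWriter.status == AVAssetWriterStatusUnknown) {
        // Start the writer session at the first frame's timestamp.
        [assetWriter startWriting];
        [assetWriter startSessionAtSourceTime:pts];
    }
    if (assetWriter.status == AVAssetWriterStatusWriting &&
        videoWriterInput.readyForMoreMediaData) {
        [videoWriterInput appendSampleBuffer:sampleBuffer];
    }
    // ... do your custom buffer processing here as well.
}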
In my case the problem was that I called
if ([_session canAddOutput:_videoDataOutput])
[_session addOutput:_videoDataOutput];
before I call
[_session startRunning];
I now simply call addOutput: after startRunning.
Hope it helps somebody.
My captureOutput function was not called either. And the accepted answer did not exactly point at my problem, as my session was already an instance variable.
BUT, my dispatch queue for the video frames was local, and the dispatch queue must ALSO be an instance variable. I don't quite understand why this should be necessary; perhaps the underlying AVCapture code only keeps a weak pointer to it?
The documentation is very confusing on this.
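In other words, something along these lines (a sketch; the class and ivar names are mine):

@implementation CameraViewController
{
    AVCaptureSession *_session;             // must outlive viewDidLoad
    dispatch_queue_t _videoDataOutputQueue; // keep a strong reference to the delegate queue too
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    _session = [AVCaptureSession new];
    _videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    // ... configure inputs and outputs, then:
    // [output setSampleBufferDelegate:self queue:_videoDataOutputQueue];
    // [_session startRunning];
}
@end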

Getting Potential Leak of an object in CVPixelBuffer

I am creating an app to capture the screen of the iPhone. After writing the code, I used profiling and the static analyzer to check for memory leaks, and I am getting only one leak, in one section of the code. Here is the code that produces it:
- (void)writeSample:(NSTimer *)_timer {
    if (assetWriterInput.readyForMoreMediaData) {
        // CMSampleBufferRef sample = nil;
        CVReturn cvErr = kCVReturnSuccess;

        // get screenshot image!
        CGImageRef image = (CGImageRef)[[self screenshot] CGImage];
        NSLog(@"made screenshot");

        // prepare the pixel buffer
        CVPixelBufferRef pixelBuffer = NULL;
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
        NSLog(@"copied image data");
        cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                             FRAME_WIDTH,
                                             FRAME_HEIGHT,
                                             kCVPixelFormatType_32BGRA,
                                             (void *)CFDataGetBytePtr(imageData),
                                             CGImageGetBytesPerRow(image),
                                             NULL,
                                             NULL,
                                             NULL,
                                             &pixelBuffer);
        NSLog(@"CVPixelBufferCreateWithBytes returned %d", cvErr);

        // calculate the time
        CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
        CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
        NSLog(@"elapsedTime: %f", elapsedTime);
        CMTime presentationTime = CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE);

        // write the sample
        BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];

        if (appended) {
            NSLog(@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
        } else {
            NSLog(@"failed to append");
            [self stopRecording];
            self.startStopButton.selected = NO;
        }
    }
}
It says "Potential leak of an object stored into 'imageData'". Can anyone help me find the error in the above code? The memory management tools also report a leak in this code. Any help would be greatly appreciated.
Thanks in advance!
From comments -
Do a CFRelease on your imageData when you're done with it.
You can put it right before or right after NSLog(@"CVPixelBufferCreateWithBytes returned %d", cvErr);
CFRelease(imageData);
http://developer.apple.com/library/mac/#documentation/QuartzCore/Reference/CVPixelBufferRef/Reference/reference.html
I am not sure about the rest of your code, but generally, when a call has the word Create (or Copy) in it, it must have a corresponding release. Please check the documentation above.
CVPixelBufferRelease
Releases a pixel buffer.
void CVPixelBufferRelease (
CVPixelBufferRef texture
);
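Putting it together, the tail end of writeSample: with the missing releases might look like this (a sketch; the rest of the method stays as in the question). Because the release callback passed to CVPixelBufferCreateWithBytes is NULL, the pixel buffer does not own imageData, so imageData is released only after the append:

BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                            withPresentationTime:presentationTime];
// Both objects came from Create/Copy calls, so both must be released here.
CVPixelBufferRelease(pixelBuffer);
CFRelease(imageData);
if (appended) {
    NSLog(@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
} else {
    NSLog(@"failed to append");
    [self stopRecording];
    self.startStopButton.selected = NO;
}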

AVCaptureSession only getting one frame for iPhone 3gs

I have a piece of code that sets up a capture session from the camera to process the frames using OpenCV and then set the image property of a UIImageView with a generated UIImage from the frame. When the app starts, the image view's image is nil and no frames show up until I push another view controller on the stack and then pop it off. Then the image stays the same until I do it again. NSLog statements show that the callback is called at approximately the correct frame rate. Any ideas why it doesn't show up? I reduced the framerate all the way to 2 frames a second. Is it not processing fast enough?
Here's the code:
- (void)setupCaptureSession {
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handling the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    output.alwaysDiscardsLateVideoFrames = YES;
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:
            [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
    output.minFrameDuration = CMTimeMake(1, 1);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace) {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        return nil;
    }

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize,
                                                              NULL);
    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage =
        CGImageCreate(width,
                      height,
                      8,
                      32,
                      bytesPerRow,
                      colorSpace,
                      kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                      provider,
                      NULL,
                      true,
                      kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [self.delegate cameraCaptureGotFrame:image];
}
This could be related to threading. Try:
[self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:) withObject:image waitUntilDone:NO];
This looks like a threading issue. You cannot update your views from any thread other than the main thread. In your setup (which is good), the delegate function captureOutput:didOutputSampleBuffer: is called on a secondary thread, so you cannot set the image view from there. Art Gillespie's answer is one way of solving it, if you can get rid of the bad access error.
Another way is to modify the sample buffer in captureOutput:didOutputSampleBuffer: and have it shown by adding an AVCaptureVideoPreviewLayer instance to your capture session. That's certainly the preferred way if you only modify a small part of the image, such as highlighting something.
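For completeness, attaching a preview layer looks roughly like this (a sketch; previewView is a placeholder for whatever view should show the camera feed):

AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = previewView.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[previewView.layer addSublayer:previewLayer];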
BTW: Your bad access error could arise because you don't retain the created image in the secondary thread and so it will be freed before cameraCaptureGotFrame is called on the main thread.
Update:
To properly retain the image, increase the reference count in captureOutput:didOutputSampleBuffer: (in the secondary thread) and decrement it in cameraCaptureGotFrame: (in the main thread).
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];

    // increment ref count
    [image retain];

    [self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:)
                                    withObject:image
                                 waitUntilDone:NO];
}
- (void)cameraCaptureGotFrame:(UIImage *)image
{
    // whatever this function does, e.g.:
    imageView.image = image;

    // decrement ref count
    [image release];
}
If you don't increment the reference count, the image is freed by the auto release pool of the second thread before the cameraCaptureGotFrame: is called in the main thread. If you don't decrement it in the main thread, the images are never freed and you run out of memory within a few seconds.
Are you doing a setNeedsDisplay on the UIImageView after each new image property update?
Edit:
Where and when are you updating the background image property in your image view?