I am making an application which saves camera-roll images as blobs in a sqlite3 database.
When an image is retrieved, its dimensions change (height and width get interchanged). This happens only on the iPhone and not on the simulator. Please help.
Basically, when I retrieve an image from the DB and run the app on an iPhone device, the image changes from portrait to landscape mode.
Hi, please see the code below.
The first snippet stores the image in the DB, the second retrieves it, and the third draws it.
- (int)InsertImageInImagesTable:(UIImage *)taggedImage {
    NSData *imgData = UIImagePNGRepresentation(taggedImage);
    NSString *sql = @"INSERT INTO tblImages (Image) VALUES (?)";
    sqlite3_stmt *statement;
    int imageId = -1;
    if ((statement = [self prepare:sql])) {
        // SQLITE_TRANSIENT makes SQLite copy the blob bytes immediately,
        // which is safer than passing nil (SQLITE_STATIC) here.
        sqlite3_bind_blob(statement, 1, [imgData bytes], [imgData length], SQLITE_TRANSIENT);
        sqlite3_step(statement);
        imageId = sqlite3_last_insert_rowid(dbh);
    }
    sqlite3_finalize(statement);
    return imageId;
}
- (NSDictionary *)SelectImagesFromImagesTable {
    NSMutableArray *imgArr = [[NSMutableArray alloc] init];
    NSMutableArray *idArr = [[NSMutableArray alloc] init];
    NSString *slctSql = @"SELECT ImageId, Image FROM tblImages ORDER BY ImageId ASC";
    sqlite3_stmt *statement;
    int imageId = -1;
    if ((statement = [self prepare:slctSql])) {
        while (sqlite3_step(statement) == SQLITE_ROW) {
            imageId = sqlite3_column_int(statement, 0);
            NSData *imgData = [[NSData alloc] initWithBytes:sqlite3_column_blob(statement, 1)
                                                     length:sqlite3_column_bytes(statement, 1)];
            [imgArr addObject:[UIImage imageWithData:imgData]];
            [idArr addObject:[NSString stringWithFormat:@"%d", imageId]];
            [imgData release];
        }
        sqlite3_finalize(statement);
    }
    // Note: an NSDictionary is unordered, so the ORDER BY is not preserved here.
    NSMutableDictionary *imgDict = [NSMutableDictionary dictionaryWithObjects:imgArr forKeys:idArr];
    [imgArr release];
    [idArr release];
    return imgDict;
}
- (void)drawRect:(CGRect)rect
{
    float newHeight;
    float newWidth;
    float ratio = 1.0;
    if (myPic != nil) {
        // Scale down to fit 320x367, preserving the aspect ratio
        if (myPic.size.height > 367) {
            ratio = myPic.size.height / 367;
            if (myPic.size.width / 320 > ratio) {
                ratio = myPic.size.width / 320;
            }
        }
        newHeight = myPic.size.height / ratio;
        newWidth = myPic.size.width / ratio;
        [myPic drawInRect:CGRectMake(self.center.x - newWidth / 2,
                                     self.center.y - newHeight / 2,
                                     newWidth, newHeight)];
    }
}
It's hard to say without code, but off the top of my head there are a couple of possible sources for this problem:
(1) The problem might be with the image view and not the image itself. Something in the code might be resizing the view; check the view's resizing properties. If the image view is full screen, then the view controller might actually think it is in landscape orientation and be displaying the image correctly but for the wrong orientation. (Confusing orientation constants is an easy way to make that happen.)
(2) The images are misformatted when they are saved, so when UIImage regenerates them the width and height are swapped. I've never seen that happen, but an image in the wrong format could in theory cause it.
Since this only happens on the device, I would bet that the problem is (1). Ignore the images for a bit and instead look at how the view controllers handle their orientation and sizing.
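If (2) does turn out to be the culprit, one known mechanism is that UIImagePNGRepresentation writes only the raw pixels: PNG has no EXIF block, so the imageOrientation flag that camera photos rely on is lost, and a portrait shot (stored as landscape pixels plus an orientation flag) comes back rotated. A minimal sketch, assuming the storing code above, that bakes the orientation into the pixels by redrawing before saving:

- (UIImage *)normalizedImage:(UIImage *)image {
    if (image.imageOrientation == UIImageOrientationUp) {
        return image; // already upright, nothing to do
    }
    // Redraw into a fresh context; drawInRect: honors imageOrientation,
    // so the resulting bitmap is upright regardless of the original flag.
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalized;
}

InsertImageInImagesTable: would then call UIImagePNGRepresentation([self normalizedImage:taggedImage]) instead of encoding taggedImage directly.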
I am creating an app to screen-capture from the iPhone. After writing the code, I used profiling and analysis to check for memory leaks. I am getting only one memory leak, in one section of the code. Here is the code which gives me the leak.
- (void)writeSample:(NSTimer *)_timer {
    if (assetWriterInput.readyForMoreMediaData) {
        CVReturn cvErr = kCVReturnSuccess;
        // get screenshot image!
        CGImageRef image = (CGImageRef)[[self screenshot] CGImage];
        NSLog(@"made screenshot");
        // prepare the pixel buffer
        CVPixelBufferRef pixelBuffer = NULL;
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
        NSLog(@"copied image data");
        cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                             FRAME_WIDTH,
                                             FRAME_HEIGHT,
                                             kCVPixelFormatType_32BGRA,
                                             (void *)CFDataGetBytePtr(imageData),
                                             CGImageGetBytesPerRow(image),
                                             NULL,
                                             NULL,
                                             NULL,
                                             &pixelBuffer);
        NSLog(@"CVPixelBufferCreateWithBytes returned %d", cvErr);
        // calculate the time
        CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
        CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
        NSLog(@"elapsedTime: %f", elapsedTime);
        CMTime presentationTime = CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE);
        // write the sample
        BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                                    withPresentationTime:presentationTime];
        if (appended) {
            NSLog(@"appended sample at time %lf", CMTimeGetSeconds(presentationTime));
        } else {
            NSLog(@"failed to append");
            [self stopRecording];
            self.startStopButton.selected = NO;
        }
    }
}
It says "Potential leak of an object stored into 'imageData'". Can anyone help me find the error in the above code? The memory-management tools also report a leak there. If anyone can help, it would be a great help.
Thanks in advance!
From the comments:
Do a CFRelease on your imageData when you're done with it.
You can put it right before or right after NSLog(@"CVPixelBufferCreateWithBytes returned %d", cvErr);
CFRelease(imageData);
http://developer.apple.com/library/mac/#documentation/QuartzCore/Reference/CVPixelBufferRef/Reference/reference.html
I am not sure about the rest of the code you have, but generally when a call has the word Create in it, it has to have a corresponding release. Please check the documentation above.
CVPixelBufferRelease
Releases a pixel buffer.

void CVPixelBufferRelease ( CVPixelBufferRef texture );
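Putting it together, here is a sketch of the tail end of writeSample: with both releases in place. Note that placing the releases after the append is my suggestion rather than part of the comment above: CVPixelBufferCreateWithBytes wraps the bytes of imageData rather than copying them, so releasing the data before the frame has been appended would risk reading freed memory.

// write the sample, then balance the Create/Copy calls made earlier
BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                            withPresentationTime:presentationTime];
// CGDataProviderCopyData and CVPixelBufferCreateWithBytes both follow the
// Create/Copy rule, so each needs exactly one matching release.
CVPixelBufferRelease(pixelBuffer);
CFRelease(imageData);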
I'm kind of puzzled about image storage on iOS devices for an app I'm making.
My requirement is to load an image into a table view cell, say in the default image space of a UITableViewCell, so it isn't a background image.
The user can either add an image via the photo library or take an entirely new image.
If a new image is taken, where should that image preferably be stored? In the default photo directory, or in the Documents folder of the app sandbox?
Because these are image files, I'm afraid that storing images within the app bundle can make it pretty big, and I don't want to cross the size limit.
Performance-wise, though, what would be the better option?
I have an app that also does some of the things you describe. My solution was to create a singleton that I call my imageStore. You can find information about singletons here.
In this imageStore, I store all my "full size" images; however, like you, I am concerned about the size of these images, so instead of using them directly, I use thumbnails. For each object that I want to represent in the table, I make sure the object has a UIImage defined that is about thumbnail size (64x64, or any size you desire). When an object is created, I create a thumbnail that I store along with the object. I use this thumbnail instead of the larger image wherever I can get away with it, like in a table cell.
I'm not behind my Mac at the moment, but if you want I can post some code later to demonstrate both the singleton and the creation and usage of the thumbnail.
Here is my header file for the ImageStore
#import <Foundation/Foundation.h>

@interface BPImageStore : NSObject {
    NSMutableDictionary *dictionary;
}

+ (BPImageStore *)defaultImageStore;
- (void)setImage:(UIImage *)i forKey:(NSString *)s;
- (UIImage *)imageForKey:(NSString *)s;
- (void)deleteImageForKey:(NSString *)s;
@end
Here is the ImageStore.m file - my Singleton
#import "BPImageStore.h"
static BPImageStore *defaultImageStore = nil;
#implementation BPImageStore
+ (id)allocWithZone:(NSZone *)zone {
return [[self defaultImageStore] retain];
}
+ (BPImageStore *)defaultImageStore {
if(!defaultImageStore) {
defaultImageStore = [[super allocWithZone:NULL] init];
}
return defaultImageStore;
}
- (id)init
{
    if (defaultImageStore) {
        return defaultImageStore;
    }
    self = [super init];
    if (self) {
        dictionary = [[NSMutableDictionary alloc] init];
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        // The selector must match the method name exactly ("clearCache:")
        [nc addObserver:self
               selector:@selector(clearCache:)
                   name:UIApplicationDidReceiveMemoryWarningNotification
                 object:nil];
    }
    return self;
}

- (void)clearCache:(NSNotification *)note {
    [dictionary removeAllObjects];
}
- (oneway void)release {
    // no op: the singleton lives for the app's lifetime
}

- (id)retain {
    return self;
}

- (NSUInteger)retainCount {
    return NSUIntegerMax;
}
- (void)setImage:(UIImage *)i forKey:(NSString *)s {
    [dictionary setObject:i forKey:s];
    // Create full path for image
    NSString *imagePath = pathInDocumentDirectory(s);
    // Turn image into JPEG data
    NSData *d = UIImageJPEGRepresentation(i, 0.5);
    // Write it to full path
    [d writeToFile:imagePath atomically:YES];
}

- (UIImage *)imageForKey:(NSString *)s {
    // If possible, get it from the dictionary
    UIImage *result = [dictionary objectForKey:s];
    if (!result) {
        // Create UIImage object from file
        result = [UIImage imageWithContentsOfFile:pathInDocumentDirectory(s)];
        if (result) {
            [dictionary setObject:result forKey:s];
        }
    }
    return result;
}

- (void)deleteImageForKey:(NSString *)s {
    if (!s) {
        return;
    }
    [dictionary removeObjectForKey:s];
    NSString *path = pathInDocumentDirectory(s);
    [[NSFileManager defaultManager] removeItemAtPath:path error:NULL];
}

@end
Here is where I use the image store. In my object "player", I have a UIImage to store the thumbnail and an NSString to hold a key that I create. Each original image I put into the store has a key, and I store the key with my player. If I ever need the original image, I get it by its unique key. It is also worth noting that I don't even store the original image at full size; I cut it down a bit already. After all, in my case it is a picture of a player, and nobody looks so good as to need a full-resolution picture :)
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    NSString *oldKey = [player imageKey];
    // Did the player already have an image?
    if (oldKey) {
        // Delete the old image
        [[BPImageStore defaultImageStore] deleteImageForKey:oldKey];
    }
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
    // Create a CFUUID object; it knows how to create unique identifiers
    CFUUIDRef newUniqueID = CFUUIDCreate(kCFAllocatorDefault);
    // Create a string from the unique identifier
    CFStringRef newUniqueIDString = CFUUIDCreateString(kCFAllocatorDefault, newUniqueID);
    // Use that unique ID to set our player's imageKey
    [player setImageKey:(NSString *)newUniqueIDString];
    // We used Create functions to make these objects, so we must release them
    CFRelease(newUniqueIDString);
    CFRelease(newUniqueID);
    // Scale the image down a bit
    UIImage *smallImage = [self scaleImage:image toSize:CGSizeMake(160.0, 240.0)];
    // Store the image in the imageStore with this key
    [[BPImageStore defaultImageStore] setImage:smallImage forKey:[player imageKey]];
    // Put that image onto the screen in our image view
    [playerView setImage:smallImage];
    [player setThumbnailDataFromImage:smallImage];
}
Here is an example of going back to get the original image from the imageStore:
// Go get image
NSString *imageKey = [player imageKey];
if (imageKey) {
    // Get image for image key from image store
    UIImage *imageToDisplay = [[BPImageStore defaultImageStore] imageForKey:imageKey];
    [playerView setImage:imageToDisplay];
} else {
    [playerView setImage:nil];
}
Finally, here is how I create a thumbnail from the original image:
- (void)setThumbnailDataFromImage:(UIImage *)image {
    CGSize origImageSize = [image size];
    CGRect newRect;
    newRect.origin = CGPointZero;
    newRect.size = [[self class] thumbnailSize]; // just give a size you want here instead
    // How do we scale the image?
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);
    // Create a bitmap image context
    UIGraphicsBeginImageContext(newRect.size);
    // Round the corners
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
    [path addClip];
    // Into what rectangle shall I composite the image?
    CGRect projectRect;
    projectRect.size.width = ratio * origImageSize.width;
    projectRect.size.height = ratio * origImageSize.height;
    projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2.0;
    projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2.0;
    // Draw the image into it
    [image drawInRect:projectRect];
    // Get the image from the image context; retain it as our thumbnail
    UIImage *small = UIGraphicsGetImageFromCurrentImageContext();
    [self setThumbnail:small];
    // Get the image as PNG data
    NSData *data = UIImagePNGRepresentation(small);
    [self setThumbnailData:data];
    // Clean up image context resources
    UIGraphicsEndImageContext();
}
I'm using some sample code I got from a tutorial to create basically a snapshot using AVCamRecorder. It doesn't save a picture; it just displays it in a little rect under the live camera view whenever I click a button. It seemed to be allocating more and more memory each time I clicked the button to update the image, so I put an if (image) { [image release]; } before the rest of the code that captures the image. The problem I ran into is that I eventually hit an EXC_BAD_ACCESS if I click the button quickly and repeatedly. I tried inserting an if (image) check immediately before assigning it to my view, but I still get EXC_BAD_ACCESS. I tried adding [NSThread sleepForTimeInterval:1] at the end, but that didn't help either. I still get the EXC_BAD_ACCESS after clicking the button several times. Is there a proper way to reuse this view? Thanks
if (image) {
    [image release];
    exifAttachments = nil;
}
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    exifAttachments = CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
    if (exifAttachments) {
        // NSLog
    } else {
        // NSLog
    }
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    image = [[UIImage alloc] initWithData:imageData];
    if (image) {
        self.capturedPicView.image = image;
    }
}];
Is the image variable declared as __block? If not, you may get all sorts of weird things because you can't modify it within a block.
You probably don't need a separate image variable - just do:
self.capturedPicView.image = [[[UIImage alloc] initWithData:imageData] autorelease];
in your block.
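For reference, a sketch of the __block route if you do keep a separate local variable (the surrounding names come from the question; a plain instance variable would not need __block):

// A local that is reassigned inside a block must be declared __block.
__block UIImage *capturedImage = nil;
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
    NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
    capturedImage = [UIImage imageWithData:imageData]; // autoreleased
    self.capturedPicView.image = capturedImage;        // the image view retains it
}];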
P.S. It looks like your original memory leak was due to not releasing the new image. You could have added autorelease to the UIImage creation, or just released it right after assigning it (UIImageView's image property retains it anyway):
image = [[UIImage alloc] initWithData:imageData];
if (image) {
    self.capturedPicView.image = image;
    [image release];
}
I am trying to modify Apple's PhotoScroller example and I have encountered a problem that I couldn't solve.
The PhotoScroller originally loads a bunch of images from a local array. I am trying to modify it to request an image file dynamically from a URL; once the user scrolls to the next page, it fetches the next image from a new URL.
To improve performance, I want to preload the next page so the user doesn't have to wait for the image to download while scrolling to it. Once the next page becomes the current page, the page after that should be loaded, and so on...
I'm not quite sure how to achieve this, and I hope someone can show me what to do.
Here is my custom code (please refer to Apple's PhotoScroller example for the full code).
The tilePages method (it is called at startup and every time the user scrolls the scroll view):
- (void)tilePages
{
    // Calculate which pages are visible
    CGRect visibleBounds = pagingScrollView.bounds;
    int firstNeededPageIndex = floorf(CGRectGetMinX(visibleBounds) / CGRectGetWidth(visibleBounds));
    int lastNeededPageIndex = floorf((CGRectGetMaxX(visibleBounds) - 1) / CGRectGetWidth(visibleBounds));
    firstNeededPageIndex = MAX(firstNeededPageIndex, 0);
    lastNeededPageIndex = MIN(lastNeededPageIndex, [self imageCount] - 1);

    // Recycle no-longer-visible pages
    for (ImageScrollView *page in visiblePages) {
        if (page.index < firstNeededPageIndex || page.index > lastNeededPageIndex) {
            [recycledPages addObject:page];
            [page removeFromSuperview];
        }
    }
    [visiblePages minusSet:recycledPages];

    // Add missing pages
    for (int index = firstNeededPageIndex; index <= lastNeededPageIndex; index++) {
        if (![self isDisplayingPageForIndex:index]) {
            ImageScrollView *page = [self dequeueRecycledPage];
            //ImageScrollView *nextpage = [self dequeueRecycledPage];
            if (page == nil) {
                page = [[[ImageScrollView alloc] init] autorelease];
            }
            [self configurePage:page forIndex:index];
            [pagingScrollView addSubview:page];
            [visiblePages addObject:page];
        }
    }
}
To configure the page index and content:
- (void)configurePage:(ImageScrollView *)page forIndex:(NSUInteger)index
{
    // Set the page index
    page.index = index;
    // Set the page frame
    page.frame = [self frameForPageAtIndex:index];
    // Actual method that fetches the image to display
    [page displayImage:[self imageAtIndex:index]];
    NSLog(@"index: %i", index);
}
To fetch the image from a URL:
- (UIImage *)imageAtIndex:(NSUInteger)index {
    NSString *string1 = [NSString stringWithString:@"http://abc.com/00"];
    NSString *string2 = [NSString stringWithFormat:@"%d", index + 1];
    NSString *string3 = [NSString stringWithString:@".jpg"];
    NSString *finalString = [NSString stringWithFormat:@"%@%@%@", string1, string2, string3];
    NSLog(@"final string is: %@", finalString);
    NSURL *imgURL = [NSURL URLWithString:finalString];
    // Note: dataWithContentsOfURL: is synchronous; it blocks the calling
    // thread until the download completes.
    NSData *imgData = [NSData dataWithContentsOfURL:imgURL];
    return [UIImage imageWithData:imgData];
}
Thanks for helping!
Lawrence
You can use concurrent operations to fetch images one after another. Using NSOperation, you can set up a dependency chain so images are loaded serially in the background, or you can have them all downloaded concurrently; a sketch of the prefetch idea is shown below.
The problem with large images is that even though you save them in the file system, there is a "startup" cost to get the images rendered. Also, in PhotoScroller, Apple "cheats" by having all the images pre-tiled for each level of detail. Apple provides no way to render a plain old JPEG in PhotoScroller, so unless you can pre-tile, it will be of no use to you.
If you are curious, you can see how to both download multiple images and pre-tile them into a temporary file by perusing the PhotoScrollerNetwork project on GitHub.
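For the download side, a minimal sketch of prefetching the next page's image in the background (the prefetchQueue and imageCache ivars are assumptions for illustration; they are not part of Apple's PhotoScroller):

// Assumes ivars: NSOperationQueue *prefetchQueue; NSMutableDictionary *imageCache;
- (void)prefetchImageAtIndex:(NSUInteger)index {
    if (index >= [self imageCount]) return;
    NSNumber *key = [NSNumber numberWithUnsignedInteger:index];
    if ([imageCache objectForKey:key]) return; // already fetched

    [prefetchQueue addOperationWithBlock:^{
        // imageAtIndex: downloads synchronously, so run it off the main thread
        UIImage *image = [self imageAtIndex:index];
        if (image) {
            // Touch shared state only on the main thread
            [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                [imageCache setObject:image forKey:key];
            }];
        }
    }];
}

configurePage:forIndex: could then call [self prefetchImageAtIndex:index + 1], and imageAtIndex: could check imageCache before hitting the network.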
I have a piece of code that sets up a capture session from the camera, processes the frames using OpenCV, and then sets the image property of a UIImageView with a UIImage generated from the frame. When the app starts, the image view's image is nil, and no frames show up until I push another view controller onto the stack and then pop it off. Then the image stays the same until I do it again. NSLog statements show that the callback is called at approximately the correct frame rate. Any ideas why it doesn't show up? I reduced the framerate all the way down to 2 frames a second. Is it not processing fast enough?
Here's the code:
- (void)setupCaptureSession {
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify low quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    output.alwaysDiscardsLateVideoFrames = YES;
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
        [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Cap the frame rate via minFrameDuration (CMTimeMake(1, 1) caps it at 1 fps)
    output.minFrameDuration = CMTimeMake(1, 1);

    // Start the session running to start the flow of data
    [session startRunning];

    // Assign session to an ivar.
    [self setSession:session];
}
// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace) {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        // Unlock before bailing out so the buffer isn't left locked
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return nil;
    }

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);

    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage = CGImageCreate(width,
                                       height,
                                       8,
                                       32,
                                       bytesPerRow,
                                       colorSpace,
                                       kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                                       provider,
                                       NULL,
                                       true,
                                       kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [self.delegate cameraCaptureGotFrame:image];
}
This could be related to threading. Try:
[self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:) withObject:image waitUntilDone:NO];
This looks like a threading issue. You cannot update your views from any thread other than the main thread. In your setup, which is otherwise good, the delegate method captureOutput:didOutputSampleBuffer:fromConnection: is called on a secondary thread, so you cannot set the image view from there. Art Gillespie's answer is one way of solving it, if you can get rid of the bad access error.
Another way is to modify the sample buffer in captureOutput:didOutputSampleBuffer:fromConnection: and have it shown by adding an AVCaptureVideoPreviewLayer instance to your capture session (a sketch follows below). That's certainly the preferred way if you only modify a small part of the image, such as highlighting something.
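A minimal sketch of the preview-layer route, assuming the session created in the question's setupCaptureSession is reachable via [self session] and this runs in a view controller:

// Show the camera feed directly via a preview layer on the controller's view.
AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:[self session]];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];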
BTW: your bad access error could arise because you don't retain the created image in the secondary thread, so it gets freed before cameraCaptureGotFrame: is called on the main thread.
Update:
To properly retain the image, increase the reference count in captureOutput:didOutputSampleBuffer: (in the secondary thread) and decrement it in cameraCaptureGotFrame: (in the main thread).
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Create a UIImage from the sample buffer data
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    // Increment the ref count so the image survives until the main thread uses it
    [image retain];
    [self.delegate performSelectorOnMainThread:@selector(cameraCaptureGotFrame:)
                                    withObject:image
                                 waitUntilDone:NO];
}

- (void)cameraCaptureGotFrame:(UIImage *)image
{
    // whatever this function does, e.g.:
    imageView.image = image;
    // Decrement the ref count to balance the retain above
    [image release];
}
If you don't increment the reference count, the image is freed by the autorelease pool of the secondary thread before cameraCaptureGotFrame: is called on the main thread. If you don't decrement it on the main thread, the images are never freed and you run out of memory within a few seconds.
Are you doing a setNeedsDisplay on the UIImageView after each new image property update?
Edit:
Where and when are you updating the background image property in your image view?