I have an application in which I need to draw text on the screen using OpenGL textures. In my render loop I request the textures from my TextureController, which has functions such as
- (GLuint)textureForImageNamed:(NSString*)fileName;
{
// we're checking a hash.. possibly slow
NSNumber* textureHandle = [textures objectForKey:fileName];
if(!textureHandle)
textureHandle = [self loadTexture:fileName];
GLuint handle = (GLuint)[textureHandle unsignedIntValue];
return handle;
}
where loadTexture loads the texture into memory like this
- (NSNumber*)loadTexture:(NSString*)fileName
{
GLuint texID = [GLKTextureLoader textureWithCGImage:
[[UIImage imageNamed:#"star.png"] CGImage]
options:nil
error:nil].name;
NSNumber* num = [NSNumber numberWithUnsignedInt:texID];
[textures setObject:num forKey:fileName];
return num;
}
And similar functions for text:
- (GLuint)textureForTextWithString:(NSString*)text;
- (NSNumber*)loadTextureForString:(NSString*)text;
This all works fine when I have requested the image once before my render loop, but the whole point is that the text and images that need to be drawn are not known in advance. So I want to be able to call textureForTextWithString:@"fooBar" and either get the texture if it was loaded before, or have it loaded, added to the dictionary, and then handed to me. However, I get a black texture for all the textures that weren't preloaded (i.e. for which textureForTextWithString:@"fooBar" was not called before I started the render loop).
Relevant code in the render loop:
GLuint secondTex = [textures textureForTextWithString:@"NO"];
glBindTexture(GL_TEXTURE_2D, secondTex);
glBindBuffer(GL_ARRAY_BUFFER, some_vertex_buffer);
glDrawArrays(GL_POINTS, 0, num);
I am looking for a solution, or better yet -- advice on how to better handle this particular problem.
EDIT: The two files in question can be found here:
https://gist.github.com/3153940 - The main OpenGL VC
https://gist.github.com/3153941 - The texture controller
Related
I'm creating and loading a lot of textures (made of strings). To keep the animation running smoothly, I offload the work to a separate worker thread. It seems to work more or less exactly the way I want, but on older devices (iPhone 3GS) I sometimes notice a long (about 1 second) lag. Now I'm wondering if I'm doing this correctly or if there is a conceptual issue. I paste the source code below.
I should also mention that I do not want to use the GLKit TextureLoader because I also want to offload the texture generating work to the other thread, not just the loading part.
In case you're wondering what I need these textures for, have a look at this video: http://youtu.be/U03p4ZhLjvY?hd=1
NSLock* _textureLock;
NSMutableDictionary* _texturesWithString;
NSMutableArray* _texturesWithStringLoading;
// This is called when I request a new texture from the drawing routine.
// If this function returns 0, it means the texture is not ready and I'm not displaying it.
-(unsigned int)getTextureWithString:(NSString*)string {
Texture2D* _texture = [_texturesWithString objectForKey:string];
if (_texture==nil){
if (![_texturesWithStringLoading containsObject:string]){
[_texturesWithStringLoading addObject:string];
NSDictionary* dic = [[NSDictionary alloc] initWithObjectsAndKeys:string, @"string", nil];
NSThread* thread = [[NSThread alloc] initWithTarget:self selector:@selector(loadTextureWithOptions:) object:dic];
thread.threadPriority = 0.01;
[thread start];
[thread release];
}
return 0;
}
return _texture.name;
}
// This is executed on a separate worker thread.
// The lock makes sure that there are not hundreds of separate threads all creating a texture at the same time and therefore slowing everything down.
// There must be a smarter way of doing that. Please let me know if you know how! ;-)
-(void)loadTextureWithOptions:(NSDictionary*)_dic{
[_textureLock lock];
EAGLContext* context = [[SharegroupManager defaultSharegroup] getNewContext];
[EAGLContext setCurrentContext: context];
NSString* string = [_dic objectForKey:@"string"];
Texture2D* _texture = [[Texture2D alloc] initWithStringModified:string];
if (_texture!=nil) {
NSDictionary* _newdic = [[NSDictionary alloc] initWithObjectsAndKeys:_texture, @"texture", string, @"string", nil];
[self performSelectorOnMainThread:@selector(doneLoadingTextureWithDictionary:) withObject:_newdic waitUntilDone:NO];
[_newdic release];
[_texture release];
}
[EAGLContext setCurrentContext: nil];
[context release];
[_textureLock unlock];
}
// This callback is executed on the main thread and adds the texture to the texture cache.
-(void)doneLoadingTextureWithDictionary:(NSDictionary*)_dic{
[_texturesWithString setValue:[_dic objectForKey:@"texture"] forKey:[_dic objectForKey:@"string"]];
[_texturesWithStringLoading removeObject:[_dic objectForKey:@"string"]];
}
The problem was that too many threads were started at the same time. Now I am using an NSOperationQueue rather than raw NSThreads. That allows me to set maxConcurrentOperationCount and run only one extra background thread that does the texture loading.
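For what it's worth, here is a minimal sketch of that setup; the _textureQueue ivar and the NSInvocationOperation wiring are my own illustration, while loadTextureWithOptions: is the worker method shown above:
// Created once, e.g. in init. A limit of one concurrent operation means at most
// one background texture load at a time, so the NSLock above becomes unnecessary.
NSOperationQueue* _textureQueue = [[NSOperationQueue alloc] init];
_textureQueue.maxConcurrentOperationCount = 1;

// In getTextureWithString:, instead of spawning an NSThread per string:
NSDictionary* dic = [[NSDictionary alloc] initWithObjectsAndKeys:string, @"string", nil];
NSInvocationOperation* op = [[NSInvocationOperation alloc] initWithTarget:self
                                                                 selector:@selector(loadTextureWithOptions:)
                                                                   object:dic];
[_textureQueue addOperation:op];
[op release];
[dic release];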
Is it possible, and supported, to use the iOS hardware accelerated h.264 decoding API to decode a local (not streamed) video file, and then compose other objects on top of it?
I would like to make an application that involves drawing graphical objects in front of a video, and use the playback timer to synchronize what I am drawing on top with what is being played in the video. Then, based on the user's actions, change what I am drawing on top (but not the video).
Coming from DirectX, OpenGL and OpenGL ES for Android, I am picturing something like rendering the video to a texture, and using that texture to draw a full screen quad, then use other sprites to draw the rest of the objects; or maybe writing an intermediate filter just before the renderer, so I can manipulate the individual output frames and draw my stuff; or maybe drawing to a 2D layer on top of the video.
It seems like AV Foundation, or Core Media may help me do what I am doing, but before I dig into the details, I would like to know if it is possible at all to do what I want to do, and what are my main routes to approach the problem.
Please refrain from "this is too advanced for you, try hello world first" answers. I know my stuff, and just want to know if what I want to do is possible (and most importantly, supported, so the app won't get eventually rejected), before I study the details by myself.
edit:
I am not knowledgeable in iOS development, but I work professionally with DirectX, OpenGL, and OpenGL ES for Android. I am considering making an iOS version of an Android application I currently have, and I just want to know if this is possible. If so, I have enough time to start iOS development from scratch, up to doing what I want to do. If it is not possible, then I will just not invest time studying the entire platform at this time.
Therefore, this is a technical feasibility question. I am not requesting code. I am looking for answers of the type "Yes, you can do that. Just use A and B, use C to render into D and draw your stuff with E", or "No, you can't. The hardware accelerated decoding is not available for third-party applications" (which is what a friend told me). Just this, and I'll be on my way.
I have read the overview of the video technologies on page 32 of the iOS Technology Overview. It pretty much says that I can use Media Player for the most simple playback functionality (not what I'm looking for), UIKit for embedding videos with a little more control over the embedding, but not over the actual playback (not what I'm looking for), AVFoundation for more control over playback (maybe what I need, but most of the resources I find online talk about how to use the camera), or Core Media to have full low-level control over video (probably what I need, but extremely poorly documented, and even more lacking in resources on playback than AVFoundation).
I am concerned that I may dedicate the next six months to learn iOS programming full time, only to find at the end that the relevant API is not available for third party developers, and what I want to do is unacceptable for iTunes store deployment. This is what my friend told me, but I can't seem to find anything relevant in the app development guidelines. Therefore, I came here to ask people who have more experience in this area, whether or not what I want to do is possible. No more.
I consider this a valid high-level question, which can be misunderstood as an I-didn't-do-my-homework-plz-give-me-teh-codez question. If my judgment here was mistaken, feel free to delete or downvote this question to your heart's content.
Yes, you can do this, and I think your question was specific enough to belong here. You're not the only one who has wanted to do this, and it does take a little digging to figure out what you can and can't do.
AV Foundation lets you do hardware-accelerated decoding of H.264 videos using an AVAssetReader, at which point you're handed the raw decoded frames of video in BGRA format. These can be uploaded to a texture using either glTexImage2D() or the more efficient texture caches in iOS 5.0. From there, you can process for display or retrieve the frames from OpenGL ES and use an AVAssetWriter to perform hardware-accelerated H.264 encoding of the result. All of this uses public APIs, so at no point do you get anywhere near something that would lead to a rejection from the App Store.
However, you don't have to roll your own implementation of this. My BSD-licensed open source framework GPUImage encapsulates these operations and handles all of this for you. You create a GPUImageMovie instance for your input H.264 movie, attach filters onto it (such as overlay blends or chroma keying operations), and then attach these filters to a GPUImageView for display and/or a GPUImageMovieWriter to re-encode an H.264 movie from the processed video.
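Roughly, a pipeline along those lines looks like this. This is a sketch based on the description above, not code from the framework's documentation: I'm using a sepia filter as a stand-in for whatever blend or chroma key filter you actually want, and movieURL, outputURL, and filterView (a GPUImageView already in your view hierarchy) are assumed to exist.
GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:movieURL];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init];

// Decoded movie frames feed the filter, which feeds a view for display.
[movieFile addTarget:filter];
[filter addTarget:filterView];

// Optionally re-encode the filtered result as H.264 at the same time.
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL size:CGSizeMake(640.0, 480.0)];
[filter addTarget:movieWriter];
[movieWriter startRecording];

[movieFile startProcessing];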
The one issue I currently have is that I don't obey the timestamps in the video for playback, so frames are processed as quickly as they are decoded from the movie. For filtering and re-encoding of a video, this isn't a problem, because the timestamps are passed through to the recorder, but for direct display to the screen this means that the video can be sped up by as much as 2-4X. I'd welcome any contributions that would let you synchronize the playback rate to the actual video timestamps.
I can currently play back, filter, and re-encode 640x480 video at well over 30 FPS on an iPhone 4 and 720p video at ~20-25 FPS, with the iPhone 4S being capable of 1080p filtering and encoding at significantly higher than 30 FPS. Some of the more expensive filters can tax the GPU and slow this down a bit, but most filters operate in these framerate ranges.
If you want, you can examine the GPUImageMovie class to see how it does this uploading to OpenGL ES, but the relevant code is as follows:
- (void)startProcessing;
{
NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *inputAsset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions];
[inputAsset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
NSError *error = nil;
AVKeyValueStatus tracksStatus = [inputAsset statusOfValueForKey:@"tracks" error:&error];
if (tracksStatus != AVKeyValueStatusLoaded)
{
return;
}
reader = [AVAssetReader assetReaderWithAsset:inputAsset error:&error];
NSMutableDictionary *outputSettings = [NSMutableDictionary dictionary];
[outputSettings setObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey: (NSString*)kCVPixelBufferPixelFormatTypeKey];
// Maybe set alwaysCopiesSampleData to NO on iOS 5.0 for faster video decoding
AVAssetReaderTrackOutput *readerVideoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[[inputAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] outputSettings:outputSettings];
[reader addOutput:readerVideoTrackOutput];
NSArray *audioTracks = [inputAsset tracksWithMediaType:AVMediaTypeAudio];
BOOL shouldRecordAudioTrack = (([audioTracks count] > 0) && (self.audioEncodingTarget != nil) );
AVAssetReaderTrackOutput *readerAudioTrackOutput = nil;
if (shouldRecordAudioTrack)
{
audioEncodingIsFinished = NO;
// This might need to be extended to handle movies with more than one audio track
AVAssetTrack* audioTrack = [audioTracks objectAtIndex:0];
readerAudioTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:nil];
[reader addOutput:readerAudioTrackOutput];
}
if ([reader startReading] == NO)
{
NSLog(#"Error reading from file at URL: %#", self.url);
return;
}
if (synchronizedMovieWriter != nil)
{
__unsafe_unretained GPUImageMovie *weakSelf = self;
[synchronizedMovieWriter setVideoInputReadyCallback:^{
[weakSelf readNextVideoFrameFromOutput:readerVideoTrackOutput];
}];
[synchronizedMovieWriter setAudioInputReadyCallback:^{
[weakSelf readNextAudioSampleFromOutput:readerAudioTrackOutput];
}];
[synchronizedMovieWriter enableSynchronizationCallbacks];
}
else
{
while (reader.status == AVAssetReaderStatusReading)
{
[self readNextVideoFrameFromOutput:readerVideoTrackOutput];
if ( (shouldRecordAudioTrack) && (!audioEncodingIsFinished) )
{
[self readNextAudioSampleFromOutput:readerAudioTrackOutput];
}
}
if (reader.status == AVAssetReaderStatusCompleted) {
[self endProcessing];
}
}
}];
}
- (void)readNextVideoFrameFromOutput:(AVAssetReaderTrackOutput *)readerVideoTrackOutput;
{
if (reader.status == AVAssetReaderStatusReading)
{
CMSampleBufferRef sampleBufferRef = [readerVideoTrackOutput copyNextSampleBuffer];
if (sampleBufferRef)
{
runOnMainQueueWithoutDeadlocking(^{
[self processMovieFrame:sampleBufferRef];
});
CMSampleBufferInvalidate(sampleBufferRef);
CFRelease(sampleBufferRef);
}
else
{
videoEncodingIsFinished = YES;
[self endProcessing];
}
}
else if (synchronizedMovieWriter != nil)
{
if (reader.status == AVAssetReaderStatusCompleted)
{
[self endProcessing];
}
}
}
- (void)processMovieFrame:(CMSampleBufferRef)movieSampleBuffer;
{
CMTime currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(movieSampleBuffer);
CVImageBufferRef movieFrame = CMSampleBufferGetImageBuffer(movieSampleBuffer);
int bufferHeight = CVPixelBufferGetHeight(movieFrame);
int bufferWidth = CVPixelBufferGetWidth(movieFrame);
CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
if ([GPUImageOpenGLESContext supportsFastTextureUpload])
{
CVPixelBufferLockBaseAddress(movieFrame, 0);
[GPUImageOpenGLESContext useImageProcessingContext];
CVOpenGLESTextureRef texture = NULL;
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, movieFrame, NULL, GL_TEXTURE_2D, GL_RGBA, bufferWidth, bufferHeight, GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
if (!texture || err) {
NSLog(#"Movie CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
outputTexture = CVOpenGLESTextureGetName(texture);
// glBindTexture(CVOpenGLESTextureGetTarget(texture), outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
for (id<GPUImageInput> currentTarget in targets)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger targetTextureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget setInputSize:CGSizeMake(bufferWidth, bufferHeight) atIndex:targetTextureIndex];
[currentTarget setInputTexture:outputTexture atIndex:targetTextureIndex];
[currentTarget newFrameReadyAtTime:currentSampleTime];
}
CVPixelBufferUnlockBaseAddress(movieFrame, 0);
// Flush the CVOpenGLESTexture cache and release the texture
CVOpenGLESTextureCacheFlush(coreVideoTextureCache, 0);
CFRelease(texture);
outputTexture = 0;
}
else
{
// Upload to texture
CVPixelBufferLockBaseAddress(movieFrame, 0);
glBindTexture(GL_TEXTURE_2D, outputTexture);
// Using BGRA extension to pull in video frame data directly
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_BGRA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(movieFrame));
CGSize currentSize = CGSizeMake(bufferWidth, bufferHeight);
for (id<GPUImageInput> currentTarget in targets)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger targetTextureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget setInputSize:currentSize atIndex:targetTextureIndex];
[currentTarget newFrameReadyAtTime:currentSampleTime];
}
CVPixelBufferUnlockBaseAddress(movieFrame, 0);
}
if (_runBenchmark)
{
CFAbsoluteTime currentFrameTime = (CFAbsoluteTimeGetCurrent() - startTime);
NSLog(#"Current frame time : %f ms", 1000.0 * currentFrameTime);
}
}
I'm new to developing on iOS, and in particular to the new OpenGL-related features in iOS 5, so I apologize if any of my questions are basic.
The app I am working on is designed to receive camera frames and display them on screen via OpenGL ES (the graphics folks will take over this part and add the actual OpenGL drawing, about which I know very little). The application is developed in Xcode 4, and the target is an iPhone 4 running iOS 5. For the moment I am using ARC and the GLKit functionality, and all is working fine except for a memory leak when loading images as textures. The app receives a "memory warning" very soon.
Specifically, I would like to ask how to release the textures allocated by
@property (retain) GLKTextureInfo *texture;
-(void)setTextureCGImage:(CGImageRef)image
{
NSError *error;
self.texture = [GLKTextureLoader textureWithCGImage:image options:nil error:&error];
if (error)
{
NSLog(#"Error loading texture from image: %#",error);
}
}
The image is a Quartz image built from the camera frame (sample code from Apple). I know the problem is not in that part of the code, since if I disable the assignment, the app does not receive the warning.
Super hacky solution I believe, but it seems to work:
Add the following before the assignment:
GLuint name = self.texture.name;
glDeleteTextures(1, &name);
If there's a more official way (or if this is the official way), I would appreciate it if someone could let me know.
Not a direct answer, but something I noticed, and it won't really fit in a comment.
If you're using GLKTextureLoader to load textures in the background to replace an existing texture, you have to delete the existing texture on the main thread. Deleting a texture in the completion handler will not work.
AFAIK this is because:
Every iOS thread requires its own EAGLContext, so the background queue has its own thread with its own context.
The completion handler is run on the queue you passed in, which is most likely not the main queue. (Else you wouldn't be doing the loading in the background...)
That is, the following will leak memory.
NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
[self.asyncTextureLoader textureWithContentsOfFile:#"my_texture_path.png"
options:options
queue:queue
completionHandler:^(GLKTextureInfo *texture, NSError *e){
GLuint name = self.myTexture.name;
//
// This delete textures call has no effect!!!
//
glDeleteTextures(1, &name);
self.myTexture = texture;
}];
To get around this issue you can either:
Delete the texture before the upload happens. Potentially sketchy depending on how your GL is architected.
Delete the texture on the main queue in the completion handler.
So, to fix the leak you need to do this:
//
// Method #1, delete before upload happens.
// Executed on the main thread so it works as expected.
// Potentially leaves some GL content untextured if you're still drawing it
// while the texture is being loaded in.
//
// Done on the main thread so it works as expected
GLuint name = self.myTexture.name;
glDeleteTextures(1, &name);
NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
[self.asyncTextureLoader textureWithContentsOfFile:@"my_texture_path.png"
options:options
queue:queue
completionHandler:^(GLKTextureInfo *texture, NSError *e){
// no delete required, done previously.
self.myTexture = texture;
}];
or
//
// Method #2, delete in completion handler but do it on the main thread.
//
NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
[self.asyncTextureLoader textureWithContentsOfFile:@"my_texture_path.png"
options:options
queue:queue
completionHandler:^(GLKTextureInfo *texture, NSError *e){
// you could potentially do non-gl related work here, still in the background
// ...
// Force the actual texture delete and re-assignment to happen on the main thread.
dispatch_sync(dispatch_get_main_queue(), ^{
GLuint name = self.myTexture.name;
glDeleteTextures(1, &name);
self.myTexture = texture;
});
}];
Is there a way to simply replace the contents of the texture behind the same GLKTextureInfo.name handle? When using glGenTextures you can reuse the returned texture handle to load new texture data with glTexImage2D, but with GLKTextureLoader it seems that glGenTextures is called every time new texture data is loaded...
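In case it helps, the plain-GL route hinted at here looks roughly like this, bypassing GLKTextureLoader for the updates; width, height, pixelData, and newPixelData are placeholders for your own image data:
// Create one texture name up front...
GLuint reusableTexture;
glGenTextures(1, &reusableTexture);
glBindTexture(GL_TEXTURE_2D, reusableTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixelData);

// ...then overwrite its contents later, keeping the same handle.
glBindTexture(GL_TEXTURE_2D, reusableTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, newPixelData);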
I have a view that generates an image based on a series of layers. I have images for the background, for the thumbnail, and finally for an overlay. Together, it makes one cohesive display.
It seems to work like a dream, except for when it doesn't. For seemingly no reason, I get an EXC_BAD_ACCESS on the line indicated below after it has generated somewhere between 8 and 20 images. I've run it through the memory leak tool and the allocation tool, and it's not eating up tons of memory and it's not leaking. I'm totally stumped.
Here's the relevant code:
- (UIImage *)addLayer:(UIImage *)layer toImage:(UIImage *)background atPoint:(CGPoint)point {
CGSize size = CGSizeMake(240, 240);
UIGraphicsBeginImageContext(size);
[background drawAtPoint:CGPointMake(0, 0)]; // <--- error here
[layer drawAtPoint:point];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
}
// Build the layered image -- thingPage onto thingBackground,
// then the screenshot on that, then the thingTop on top of it all.
// thingBackground, thingPage and thingTop are all preloaded UIImages.
-(UIImage *)getImageForThing:(Thing *)t {
[self loadImageCacheIfNecessary];
if (!t.screenshot) {
return [UIImage imageNamed:#"NoPreview.png"];
} else {
UIImage *screenshot = t.screenshot;
UIImage *currentImage = [self addLayer:thingPage toImage:thingBackground atPoint:CGPointMake(0, 0)];
currentImage = [self addLayer:screenshot toImage:currentImage atPoint:CGPointMake(39, 59)];
currentImage = [self addLayer:thingTop toImage:currentImage atPoint:CGPointMake(0, 1)];
return currentImage;
}
}
I can't find where this is going wrong, and I've been tearing my hair out for a couple of hours over it. It's the final known bug in the system, so you can imagine how antsy I am to fix it! :-)
Thanks in advance for any help.
For my part, I always use -(void)drawInRect: instead of -(void)drawAtPoint:
CGRect rtDraw;
rtDraw.origin = CGPointZero;
rtDraw.size = size;
[background drawInRect:rtDraw];
[layer drawInRect:rtDraw];
And one more thing:
Drawing with UIGraphicsBeginImageContext(size) and UIGraphicsEndImageContext() is not thread-safe.
Those functions push and pop a graphics context on a stack that is managed by the system.
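If the composition ever has to run off the main thread, one option is to draw into a private CGBitmapContext instead of the shared UIGraphics context stack. This is a rough sketch, not the original poster's code; the 240x240 size matches the question, and the y coordinates may need flipping since Core Graphics and UIKit use different origins:
- (UIImage *)composeLayer:(UIImage *)layer overImage:(UIImage *)background atPoint:(CGPoint)point {
    CGSize size = CGSizeMake(240, 240);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // Passing NULL data and 0 bytesPerRow lets Core Graphics allocate the buffer itself.
    CGContextRef ctx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                             8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(ctx, CGRectMake(0, 0, size.width, size.height), background.CGImage);
    CGContextDrawImage(ctx, CGRectMake(point.x, point.y, layer.size.width, layer.size.height), layer.CGImage);

    CGImageRef cgResult = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:cgResult];
    CGImageRelease(cgResult);
    CGContextRelease(ctx);
    return result;
}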
EXC_BAD_ACCESS is almost always due to accessing an object that has already been released. In your code this seems to be t.screenshot. Check creation (and retaining if it is an instance variable) of the object returned by Thing's screenshot property.
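For example, if screenshot is a property on Thing, the usual fix in a pre-ARC project is simply to make it retaining. This declaration is hypothetical, since the real Thing class isn't shown:
@interface Thing : NSObject
// retain (rather than assign) keeps the image alive until it is replaced or released
@property (nonatomic, retain) UIImage *screenshot;
@end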
As it turns out, the error wasn't in the code I posted, it was in my caching of the thingBackground, thingPage and thingTop images. I wasn't retaining them. Here's the missing code, fixed:
-(void)loadImageCacheIfNecessary {
if (!thingBackground) {
thingBackground = [[UIImage imageNamed:@"ThingBack.png"] retain];
}
if (!thingPage) {
thingPage = [[UIImage imageNamed:@"ThingPage.png"] retain];
}
if (!thingTop) {
thingTop = [[UIImage imageNamed:@"ThingTop.png"] retain];
}
}
}
I will admit I'm still not comfortable with the whole retain/release/autorelease stuff in Objective-C. Hopefully it'll sink in one day soon. :-)
This appears to be the classic method for scanning images from the iPhone. I have a thread that is dispatched from the main thread to go and scan for codes. It essentially creates a new UIImage each time, then releases it.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
{
while (![thread isCancelled]) {
#ifdef DEBUG
NSLog(#"Decoding Loop");
#endif
// [self performSelectorOnMainThread:@selector(updateImageBuffer) withObject:nil waitUntilDone:YES];
CGImageRef cgScreen = UIGetScreenImage();
UIImage *uiimage = [UIImage imageWithCGImage:cgScreen];
if (uiimage){
CGSize size = [uiimage size];
CGRect cropRect = CGRectMake(0.0, 80.0, size.width, 360); // Crop to centre of the screen - makes it more robust
#ifdef DEBUG
NSLog(#"picked image size = (%f, %f)", size.width, size.height);
#endif
[decoder decodeImage:uiimage cropRect:cropRect];
}
[uiimage release];
CGImageRelease(cgScreen);
}
}
[pool release];
The problem is that the [pool release] causes an EXC_BAD_ACCESS (that old classic) and the program bombs. I'm told that there is no need to call [uiimage release] as I haven't explicitly allocated a UIImage, but this doesn't seem to be the case. If I take that line out, memory usage goes through the roof and the program quits due to lack of memory. It appears I can't have this work the way I'd like.
Is there a way to create a UIImage "in place", i.e. have a buffer that is written to again and again as a UIImage? I suspect that would work.
Update!
Tried executing the UIKit related calls on the main thread as follows:
-(void)performDecode:(id)arg{
// Perform the decoding in a separate thread. This should, in theory, bounce back with a
// decoded or not decoded message. We can quit at the end of this thread.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
{
while (![thread isCancelled]) {
#ifdef DEBUG
NSLog(#"Decoding Loop");
#endif
[self performSelectorOnMainThread:@selector(updateImageBuffer) withObject:nil waitUntilDone:YES];
if (uiimage){
CGSize size = [uiimage size];
CGRect cropRect = CGRectMake(0.0, 80.0, 320, 360); // Crop to centre of the screen - makes it more robust
#ifdef DEBUG
NSLog(#"picked image size = (%f, %f)", size.width, size.height);
#endif
[decoder decodeImage:uiimage cropRect:cropRect];
}
}
}
[pool drain];
#ifdef DEBUG
NSLog(#"finished decoding.");
#endif
}
-(void) updateImageBuffer {
CGImageRef cgScreen = UIGetScreenImage();
uiimage = [UIImage imageWithCGImage:cgScreen];
//[uiimage release];
CGImageRelease(cgScreen);
}
No joy, however, as EXC_BAD_ACCESS rears its ugly head when one wishes to grab the size of the UIImage.
As has been stated by others, you should not release the UIImage returned from imageWithCGImage:. It is autoreleased. When your pool drains, it tries sending a release message to your already-released image objects, leading to your crash.
The reason why your memory usage keeps climbing is that you only drain the autorelease pool outside of the loop. Your autoreleased objects keep accumulating inside of the loop. (By the way, you need to release your autorelease pool at the end of that method, because it is currently being leaked.) To prevent this accumulation, you could drain the pool at regular intervals within the loop.
However, I'd suggest switching to [[UIImage alloc] initWithCGImage:cgScreen] and then releasing the image when done. I try to avoid using autoreleased objects wherever I can within iPhone applications in order to have tighter control over memory usage and overall better performance.
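Putting both suggestions together, the loop body might look something like this. It is only a sketch, and it still calls UIGetScreenImage() off the main thread, exactly as the original code does:
while (![thread isCancelled]) {
    // Drain autoreleased objects every iteration instead of once at the end.
    NSAutoreleasePool *innerPool = [[NSAutoreleasePool alloc] init];

    CGImageRef cgScreen = UIGetScreenImage();
    UIImage *image = [[UIImage alloc] initWithCGImage:cgScreen];
    CGImageRelease(cgScreen);

    CGRect cropRect = CGRectMake(0.0, 80.0, image.size.width, 360);
    [decoder decodeImage:image cropRect:cropRect];

    [image release];     // owned via alloc/init, so an explicit release is correct here
    [innerPool release]; // flushes anything autoreleased during this pass
}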
UIGetScreenImage() is private and undocumented, so you flat-out cannot use it. That said, nothing about it suggests that you now own CGImageRef cgScreen, so why do you release it? You also have no way of knowing whether it is thread-safe, and so should assume it isn't. You then go on to release the UIImage *uiimage, which you did not init, retain or copy, so again: you don't own it. Review the docs.
[uiimage release] is definitely wrong in this context. Also, Apple stresses that all UIKit methods must be executed on the main thread. That includes UIGetScreenImage() and +[UIImage imageWithCGImage:].
Edit: So you get an exception when calling -[UIImage size] on the wrong thread. This probably shouldn't surprise you because it is not permitted.
UIImage *uiimage = [[UIImage alloc] initWithCGImage: cgScreen];
Explicitly taking ownership and deciding when to release the object seemed to work. Virtual memory still increases, but physical memory now stays constant. Thanks for pointing out the UIKit thread-safety issues though; that is a point I'd missed, but it doesn't seem to affect the app at this point.
Also, I should point out, Red Laser and Quickmark both use this method of scanning camera information ;)