Schedule a low-priority task on the main thread - iPhone

I have a drawRect method that is rather slow (100-200ms). To save time, I need to cache the results. I am doing the actual caching like this:
// some code to check if caching would be desirable goes here. If it is desirable, then
UIGraphicsBeginImageContext(viewSize);
CGContextRef c = UIGraphicsGetCurrentContext();
[view.layer renderInContext: c];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
[self.cachedImageArray addObject:image];
UIGraphicsEndImageContext();
The caching itself can take up to 40ms. This is still easily worth it. But the caching has to wait until everything is rendered, or it will come out wrong. Additionally, the caching is a low-priority task. Once everything is displayed, it is possible that other things will still be going on, and if so, the caching can wait. But since it uses UIKit, it has to be on the main thread.
Rather than putting in some arbitrary delay, is there a bulletproof way to wait like this?

The caching itself doesn't have to be done on the main thread. You can grab a copy of (or reference to) the image context or bitmap data and hand it off to an NSThread once the rendering is done. Example:
- (void)drawRect:(CGRect)rect {
    do_rendering_here();
    // When rendering has completed, hand the copied context/bitmap
    // data ('c') off to a background thread for caching:
    NSThread *t = [[NSThread alloc] initWithTarget:self selector:@selector(doCaching:) object:c];
    [t start];
    [t release];
}

- (void)doCaching:(CGContextRef)ctx {
    // Do whatever kind of caching is needed.
}
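If you only need to be sure the snapshot runs after the current drawing pass, another option (a sketch, not from the answer above) is to queue it on the main thread itself: a block submitted with dispatch_async to the main queue executes on a later run loop iteration, after -drawRect: has returned and the render pass has been committed. This reuses the cachedImageArray from the question:
- (void)drawRect:(CGRect)rect {
    // ... expensive rendering ...

    // Runs on the main thread, but only after this drawing pass ends.
    dispatch_async(dispatch_get_main_queue(), ^{
        UIGraphicsBeginImageContext(self.bounds.size);
        [self.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [self.cachedImageArray addObject:snapshot];
    });
}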

running operation in background thread and have a completion block?

I have a task that reads from disk and could take quite some time, so I don't want to do it on the main thread. What I want is to call a function X after the disk read finishes. What is the best way to do this in iOS?
So far this is what I've tried:
NSInvocationOperation *processDataOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(readDisk:) object:nil];
[processDataOperation setQueuePriority:NSOperationQueuePriorityVeryHigh];
[processDataOperation setCompletionBlock:^(void){
    NSMutableArray *feedItemsArray = [self generateFeedItemsFromDictionary:streamDiskData];
    [self postFetchCompletedNotificationForDict:queryStringDict withFeedItems:feedItemsArray isFresh:NO];
}];
Basically I am using an NSInvocationOperation and then setting its completion block. The issue is that in the completion block I need the result generated by readDisk. How do I access that from the completion block? It's nearly impossible, right?
With NSInvocationOperation it is possible, but far more complicated than necessary, to get a trivial amount of work done off the main thread.
Both GCD and NSOperations can be used to implement a wide array of concurrency strategies. From an object-oriented perspective, NSOperations are more highly abstracted than GCD blocks, which makes them (imo) easier to "design" with, and potentially optimizable beyond the scope of where I'm implementing them. GCD is lower-level: this makes interacting with it appear slightly more complicated (it really isn't), but people who are into that sort of thing will tell you that it is "more efficient" and carries "less overhead".
My personal approach is to use NSOperations in scenarios where I have a designed/orchestrated concurrency pattern in my application, and use GCD for trivial concurrent/background operations.
If all I need to do is fire off some arbitrary task that is not relevant to the design but needs to happen in the background, I'd use GCD. That's what I'd probably use in this case:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    [self readDisk];
    NSMutableArray *feedItemsArray = [self generateFeedItemsFromDictionary:streamDiskData];
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Call back to the main thread before performing/posting anything that touches UIKit.
        [self postFetchCompletedNotificationForDict:queryStringDict withFeedItems:feedItemsArray isFresh:NO];
    });
});
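If you would rather stay with NSOperations, NSBlockOperation sidesteps the "how do I get the result into the completion block" problem entirely, because both blocks can capture the same __block variable. A sketch, assuming readDisk returns the parsed data and operationQueue is your NSOperationQueue:
__block id diskData = nil;
NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
    diskData = [[self readDisk] retain];   // the result lands in the shared variable
}];
[op setCompletionBlock:^{
    // Completion blocks run on an arbitrary thread; hop to main for UIKit work.
    dispatch_async(dispatch_get_main_queue(), ^{
        NSMutableArray *feedItemsArray = [self generateFeedItemsFromDictionary:diskData];
        [self postFetchCompletedNotificationForDict:queryStringDict withFeedItems:feedItemsArray isFresh:NO];
        [diskData release];
    });
}];
[operationQueue addOperation:op];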
You could always use Grand Central Dispatch to do your operation in the background instead.
Since it is a block you can just call the method normally and store the result. Then grab the main queue if you need to update any UI or do whatever you need to after completion.
dispatch_queue_t queue = dispatch_queue_create("read disc", NULL);
dispatch_async(queue, ^{
    id result = [self readDisc];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update UI or do whatever you need to do with the result of readDisc.
    });
});
dispatch_release(queue);
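If this pattern shows up in more than one place, it can be worth wrapping it in a small helper that takes the completion block as a parameter. A sketch; readDiscWithCompletion: is a made-up name, and readDisc is assumed to return the data you need:
- (void)readDiscWithCompletion:(void (^)(id result))completion {
    dispatch_queue_t queue = dispatch_queue_create("read disc", NULL);
    dispatch_async(queue, ^{
        id result = [self readDisc];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Deliver the result to the caller on the main thread.
            if (completion) completion(result);
        });
    });
    dispatch_release(queue);
}

// Usage:
[self readDiscWithCompletion:^(id result) {
    // Update the UI with the result here.
}];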

How to measure the draw time of a UIImageView?

I have been working on tracking down a performance issue in one of our apps. What seems to happen is that sometimes a UIImageView takes seconds to render an image and, because of the way the code is written, this blocks the main thread.
I've tracked down the issue to the fact that the slow images are progressive JPEGs at retina resolutions. For whatever reason, when the file reaches a certain size, decoding the JPEG becomes a very expensive operation.
Anyway, in the course of writing a simple test application, I realized I didn't know how to time how long the draw event takes. It clearly is blocking the main thread, so I decided to just try and time the run loop iteration. Unfortunately, it ended up a bit hackish. Here's the relevant code:
///////////////
//
// This bit is used to time the runloop. I don't know a better way to do this
// but I assume there is... for now... HACK HACK HACK. :-)
buttonTriggeredDate_ = [NSDate date];
[[NSRunLoop mainRunLoop] performSelector:@selector(fire:) target:self argument:[NSNumber numberWithInt:0] order:1 modes:[NSArray arrayWithObject:NSDefaultRunLoopMode]];
///////////////
NSString* path = [[NSBundle mainBundle] pathForResource:imageName ofType:type];
self.imageView.image = [UIImage imageWithContentsOfFile:path];
The callback is as follows (more hackiness!):
- (void)fire:(NSNumber *)counter {
    int iterCount = [counter intValue];
    NSLog(@"mark %d", iterCount);
    NSTimeInterval interv = [[NSDate date] timeIntervalSinceDate:buttonTriggeredDate_];
    // We really need the second pass through - if it's less than X, assume
    // it's just that first run loop iteration before the draw happens. Just
    // wait for the next one.
    if (iterCount < 1) {
        iterCount++;
        [[NSRunLoop mainRunLoop] performSelector:@selector(fire:)
                                          target:self
                                        argument:[NSNumber numberWithInt:iterCount]
                                           order:1
                                           modes:[NSArray arrayWithObject:NSDefaultRunLoopMode]];
    } else {
        self.statusDisplay.text = [NSString stringWithFormat:@"%@ - Took %f Seconds",
                                   self.statusDisplay.text,
                                   interv];
    }
}
So, my question is, basically, how would you do this? I want to be able to drop in different images and run a benchmark to make sure I have a rough idea of how long it takes to run this. I also would like to have it be reasonably consistent and free of jitter.
Hmm, maybe I should just subclass UIImageView and record times around [super drawRect:frame]?
What would you do?
You're likely trying to time the wrong part of the problem. As you suspect, the problem is most likely in the decode (and probably also in memory allocation and copying). It is not likely that the problem is in the final "drawing" step per se.
This is the kind of problem that Instruments is built for. Fire up Instruments and look for your hotspots.
As a possible solution, if Instruments tells you that decoding is what's blocking you, you may consider rendering the images onto a CALayer on a background thread before putting them into the view.
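If Instruments does point at the decode, you can also time that step in isolation, without involving the run loop at all: drawing the image once into an offscreen context forces UIImage to decompress the JPEG. A rough sketch (it measures decode plus one blit, not exactly what UIImageView does, but it isolates the expensive step):
#import <QuartzCore/QuartzCore.h>   // for CACurrentMediaTime()

UIImage *image = [UIImage imageWithContentsOfFile:path];
CFTimeInterval start = CACurrentMediaTime();
UIGraphicsBeginImageContext(image.size);
[image drawAtPoint:CGPointZero];    // the actual JPEG decode happens here, on first draw
UIGraphicsEndImageContext();
NSLog(@"decode + draw took %f seconds", CACurrentMediaTime() - start);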

Using GCD/asynchronous code in drawRect:

I'm trying to load an image from a URL inside of drawRect:. Our image-loading code is a method that looks like this: - (void)loadImage:(NSURL *)url done:(void (^)(UIImage *))done; - it creates an asynchronous NSURLConnection and calls back with the image.
So, my code in drawRect: looks like this:
CGContextRef context = UIGraphicsGetCurrentContext();
[service loadImage:url done:^(UIImage *image) {
    CGContextDrawImage(context, frame, image.CGImage);
}];
Unfortunately, this doesn't work. The image is never drawn.
I've also tried using synchronous connections ([NSData dataWithContentsOfURL:]), but that blocks the thread and slows things down unnecessarily. I don't want to use a UIImageView.
What is the correct way to do this? Thanks!
I think the correct way would be either to load the image beforehand, or to draw the image as soon as you receive it, either by calling setNeedsDisplayInRect: or by doing your own drawing handling in a separate method (iOS might discourage you from the latter, though, I am not sure). Obviously the setNeedsDisplayInRect: variation requires you to have the image at hand for the second drawing pass, so cache it.
If you load the image asynchronously, your drawRect: method will most likely have exited before the image is loaded, and the drawing focus is no longer locked on your view, making it impossible to draw the image into the context. Did you try setting a breakpoint to verify this? I might be wrong about the locking, but I thought I read something like this at some point.
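In code, the "cache it, then redraw" variant looks roughly like this (a sketch: loadImage:done: is the question's own API, the cachedImage property is made up, and the done: block is assumed to be called on the main thread):
- (void)startLoading {
    [service loadImage:url done:^(UIImage *image) {
        self.cachedImage = image;   // keep it around for the next drawing pass
        [self setNeedsDisplay];     // schedule a fresh -drawRect:
    }];
}

- (void)drawRect:(CGRect)rect {
    // Only ever draw what we already have; never wait inside drawRect:.
    if (self.cachedImage) {
        [self.cachedImage drawInRect:rect];
    }
}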
Think it through: why would it work? There's no guarantee that context is still the current context (or even that its backing store is still valid) by the time the block is entered. By using an asynchronous callback, be it a block, a delegate method, or something else, you're intentionally deferring code to a later point in time. The call to -drawRect: has ended and is off the stack long before your block is entered.
The first thing I would try is to make sure the async handler runs on the main thread:
[service loadImage:url done:^(UIImage *image) {
    dispatch_async(dispatch_get_main_queue(), ^{
        CGContextDrawImage(context, frame, image.CGImage);
    });
}];
If that does not help, I would start to wonder whether the context is still valid. The pointer is, but the context it points to probably is not. Maybe you could move the context acquisition into the handler:
[service loadImage:url done:^(UIImage *image) {
    dispatch_async(dispatch_get_main_queue(), ^{
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextDrawImage(context, frame, image.CGImage);
        …
    });
}];
Not sure if that makes sense API-wise, I don’t do much stuff like this.
Have you tried:
__block CGContextRef context = UIGraphicsGetCurrentContext();
[service loadImage:url done:^(UIImage *image) {
    CGContextDrawImage(context, frame, image.CGImage);
}];
You may have to do the same thing with frame, so that when the block executes it knows which context and frame it should modify.

Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

I have an app with multiple UIView subclasses that act as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it is rendered into a single shared CGBitmapContext (guarded by locks) by NSOperation subclasses - executed one at a time in an NSOperationQueue - wrapped in a UIImage, and then used by the main thread to update the content of the views.
- (void)main {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if ([self isCancelled]) {
        [pool release];
        return;
    }
    if (nil == data) {
        [pool release];
        return;
    }
    // 'buffer' is the shared instance of a CGBitmapContext wrapper class;
    // 'data' is a dictionary.
    CGImageRef img = [buffer imageCreateWithData:data];
    UIImage *image = [[UIImage alloc] initWithCGImage:img];
    CGImageRelease(img);
    if ([self isCancelled]) {
        [image release];
        [pool release];
        return;
    }
    NSDictionary *result = [[NSDictionary alloc] initWithObjectsAndKeys:image, @"image", id, @"id", nil];
    // 'target' is the instance of the UIView subclass that will use the image.
    [target performSelectorOnMainThread:@selector(updateContentWithData:) withObject:result waitUntilDone:NO];
    [result release];
    [image release];
    [pool release];
}
The updateContentWithData: method of the UIView subclass, performed on the main thread, is just as simple:
- (void)updateContentWithData:(NSDictionary *)someData {
    NSDictionary *data = [someData retain];
    if ([[data valueForKey:@"id"] isEqualToString:[self pendingRequestId]]) {
        UIImage *image = [data valueForKey:@"image"];
        [self setCurrentImage:image];
        [self setNeedsDisplay];
    }
    // If the image has not been retained, it should be released together
    // with the dictionary retaining it.
    [data release];
}
The drawLayer:inContext: method of the subclass will just get the CGImage from the UIImage and use it to update the backing layer or part of it. No retain or release is involved in the process.
The problem is that after a while I run out of memory. The number of UIViews is static. CGImageRef and UIImage are created, retained, and released correctly (or so it seems to me). Instruments does not show any leaks; the available free memory just dips constantly, rises a few times, and then dips even lower until the application is terminated. The app cycles through about 200-300 of the aforementioned pages before that, but I would expect memory usage to reach a more or less stable level after a bunch of pages have been skimmed at high speed, or, since the images are up to 3 MB in size, to be depleted much earlier.
Any suggestion will be greatly appreciated.
I realize this is an old posting, but in case it helps anybody else... This looks like a case of memory fragmentation. We have an app that behaves the same way. The amount of memory actually allocated by the app never reaches dangerous levels, but if you look at the amount of resident memory for the app (using VM Tracker snapshots in the Allocations instrument, or the Activity Monitor instrument), it climbs inexorably over time until a not-very-large transient spike kills the app.
The app in question is a multi-threaded app that makes tons of transient allocations in a large range of sizes, the timing of which can't be predicted or controlled. Such an app has to be paranoid about releasing unneeded memory allocations, not because they take up too much memory per se, but because they can create holes that prevent larger images from fitting into the allocated blocks. Even smaller allocations that tend to be overlooked are important in fragmentation (granted that the low-level allocator does group allocations by size, which is helpful to an extent). Memory zones are theoretically helpful for addressing fragmentation but pretty hard to make effective, at least in my experience. Also, use custom auto-release pools, or better yet, alloc/init as much as you can, and release as early as possible. The fact that the underlying frameworks are always making their own allocations for caching purposes probably doesn't help.
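For the "release as early as possible" part, one concrete tactic in a pre-ARC app like this one is a short-lived pool per unit of work, so transient objects go back to the allocator before the next page starts. A sketch reusing the question's buffer and data:
// Wrap each page's work in its own pool so autoreleased temporaries
// are freed per page instead of accumulating until the outer pool drains.
NSAutoreleasePool *pagePool = [[NSAutoreleasePool alloc] init];
CGImageRef img = [buffer imageCreateWithData:data];
UIImage *image = [[UIImage alloc] initWithCGImage:img];
CGImageRelease(img);
// ... hand the image off to the view on the main thread ...
[image release];
[pagePool release];   // transient allocations die here, per page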

Caching behavior of UIImage

Does UIImage ever remove images from its cache? Can I keep a pointer to an image I got from imageNamed: and use it for as long as I like, or must I always call imageNamed:?
The UIImage object returned from imageNamed: is treated like any other object as far as memory management goes. If you want to keep a reference to the object between method calls, you should retain it, and release it when you are done to decrement the reference count.
UIImage *cachedImage;   // ivar that holds our reference

- (void)getTheImage {
    cachedImage = [[UIImage imageNamed:@"MyImage.png"] retain];
    // Do something with the image...
}

// In some other method or dealloc:
[cachedImage release];
Also, note that the UIImage class reference says:
In low-memory situations, image data may be purged from a UIImage object to free up memory on the system. This purging behavior affects only the image data stored internally by the UIImage object and not the object itself. When you attempt to draw an image whose data has been purged, the image object automatically reloads the data from its original file. This extra load step, however, may incur a small performance penalty.
UIImage caches the data itself. You should not hold a bare pointer and just pass it around; that can be unsafe, because when there is a memory warning and there is no strong reference to the object, UIImage will purge the cached data. Call [UIImage imageNamed:] every time. It is fast and returns a reference to the image in memory; if the image is no longer in memory, it reloads it and returns that reference.
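In short, both answers come down to the same two safe options; a sketch (pre-ARC):
// Option 1: ask the cache every time. Cheap once the image is loaded,
// and immune to the purging behavior described above.
imageView.image = [UIImage imageNamed:@"MyImage.png"];

// Option 2: keep your own strong reference (retain under manual
// reference counting) if the object must outlive the cache's decisions.
UIImage *myImage = [[UIImage imageNamed:@"MyImage.png"] retain];
// ... later, when done:
[myImage release];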