I know that drawLayer: and drawLayer:inContext: are called on multiple threads when using a CATiledLayer, but what about drawRect:?
Apple's PhotoScroller example code uses drawRect: to get its images from disk, and it has no special code for handling threads.
I am trying to determine whether my model for a CATiledLayer must be thread-safe.
I have found CATiledLayer is using multiple background threads in the iOS Simulator, but a single background thread on my iPhone.
My Mac has a dual core processor, while my iPhone has a single core (A4).
I suspect an iOS device with an A5 CPU will also use multiple threads.
Yes, drawRect: can and will be called on multiple threads (tested on iOS 4.2).
This behaviour is less obvious if your drawing is fast enough to outpace the arrival of new zoom gestures, so your app may work fine until it is tested with rapid zooming.
One alternative is to make your model thread-safe.
If thread safety is achieved by serializing most access to the data model so that only one drawing thread touches it at a time, then you might do just as well to mutex the body of drawRect: with something like @synchronized(self), which seems to work.
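A minimal sketch of that approach, assuming the tile drawing reads the shared model inside the locked block (self.model and -drawModelInRect:inContext: are placeholder names, not from the original post):

- (void)drawRect:(CGRect)rect
{
    // Serialize tile drawing so only one thread reads the shared model at a time.
    @synchronized(self) {
        CGContextRef context = UIGraphicsGetCurrentContext();
        // Hypothetical helper that walks the model and draws into the context.
        [self drawModelInRect:rect inContext:context];
    }
}

This trades parallelism for safety: tiles are still requested on multiple threads, but only one of them draws at any moment.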
I haven't found a way to request that CATiledLayer only uses one background thread.
Have you seen this technical Q&A from Apple?
It doesn't answer your question directly, but it could help you decide how to implement your model.
Related
I am trying to understand why UI operations can't be performed using multiple threads. Is this also a requirement in other frameworks like OpenGL or cocos2d?
How about other languages like C# and JavaScript? I tried searching on Google, but people mention something about POSIX threads, which I don't understand.
In Cocoa Touch, the UIApplication (i.e. the instance of your application) is attached to the main thread because this thread is created by UIApplicationMain(), the entry-point function of Cocoa Touch. It sets up the main event loop, including the application's run loop, and begins processing events. The application's main event loop receives all the UI events, i.e. touches, gestures, etc.
From the docs for UIApplicationMain():
This function instantiates the application object from the principal class and instantiates the delegate (if any) from the given class and sets the delegate for the application. It also sets up the main event loop, including the application’s run loop, and begins processing events. If the application’s Info.plist file specifies a main nib file to be loaded, by including the NSMainNibFile key and a valid nib file name for the value, this function loads that nib file.
These UI events are then forwarded along the responder chain, typically UIApplication -> UIWindow -> UIViewController -> UIView -> subviews (UIButton, etc.).
Responders handle events like button presses, taps, pinch zooms, swipes, etc., which get translated into changes in the UI. As you can see, this chain of events occurs on the main thread, which is why UIKit, the framework that contains the responders, should operate on the main thread.
From the UIKit docs again:
For the most part, UIKit classes should be used only from an application’s main thread. This is particularly true for classes derived from UIResponder or that involve manipulating your application’s user interface in any way.
EDIT
Why does drawRect: need to be on the main thread?
drawRect: is called by UIKit as part of UIView's lifecycle, so drawRect: is bound to the main thread. Drawing this way is expensive because it is done by the CPU on the main thread. Hardware-accelerated graphics are provided instead through CALayer (Core Animation).
CALayer, on the other hand, acts as a backing store for the view. The view then just displays a cached bitmap of its current state. Any change to the view's properties results in changes to the backing store, which the GPU performs on the cached copy. However, the view still needs to provide the initial content and periodically update it. I have not really worked with OpenGL, but I think it also uses layers (I could be wrong).
I have tried to answer this to the best of my knowledge. Hope that helps!
From: https://www.objc.io/issues/2-concurrency/thread-safe-class-design/
It’s a conscious design decision from Apple’s side to not have UIKit be thread-safe. Making it thread-safe wouldn’t buy you much in terms of performance; it would in fact make many things slower. And the fact that UIKit is tied to the main thread makes it very easy to write concurrent programs and use UIKit. All you have to do is make sure that calls into UIKit are always made on the main thread.
So according to this, the fact that UIKit objects must be accessed on the main thread is a deliberate design decision by Apple, made with performance in mind.
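In practice, a background task can hand its UI update back to the main thread with GCD; something like this sketch (someURL and statusLabel are illustrative names, not from the quoted article):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Expensive, non-UI work stays off the main thread.
    NSData *data = [NSData dataWithContentsOfURL:someURL];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Every UIKit call happens back on the main thread.
        self.statusLabel.text = [NSString stringWithFormat:@"Loaded %lu bytes",
                                 (unsigned long)data.length];
    });
});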
C# behaves the same (see e.g. here: Keep the UI thread responsive). UI updates have to be done on the UI thread; most other things should be done in the background when possible.
If that weren't the case, there would probably be synchronization hell between all the updates that have to be made to the UI...
Every system and every library needs to be concerned about thread safety, and must take steps to ensure it while also looking after correctness and performance.
In the case of the iOS and Mac OS X user interfaces, the decision was made to make the UI thread-safe by allowing UI methods to be called and executed only on the main thread. And that's it.
Since there are lots of complicated things going on that would need at least serialisation to prevent total chaos from happening, I don't see very much gained from allowing UI on a background thread.
Because you want the user to see UI changes as they happen. If you could perform UI changes on a background thread and only display them when complete, the app would not seem to behave right.
All non-UI operations (or at least the very costly ones, like downloading data or making database queries) should take place on a background thread, whereas all UI changes must always happen on the main thread to provide as smooth a user experience as possible.
I don't know what it's like in C# for Windows Phone apps, but I would expect it to be the same. On Android the system won't even let you do things like downloading on the main thread, making you create a background thread directly.
As a rule of thumb - when you think main thread, think "what the user sees".
I'm working on an iPhone game that involves only two dimensional, translation-based animation of just one object. This object is subclassed from UIView and drawn with Quartz-2D. The translation is currently put into effect by an NSTimer that ticks each frame and tells the UIView to change its location.
However, there is some fairly complex math that goes behind determining where the UIView should move during the next frame. Testing the game on the iOS simulator works fine, but when testing on an iPhone it definitely seems to be skipping frames.
My question is this: is my method of translating the view frame by frame simply a bad method? I know OpenGL is more typically used for games, but it seems a shame to set up OpenGL for such a simple animation. Nonetheless, is it worth the hassle?
It's hard to say without knowing what kind of complex math is going on to calculate the translations. Using OpenGL for this only makes sense if the GPU is really the bottleneck. I would suspect that this is not the case, but you have to test which parts are causing the skipped frames.
Generally, UIView and CALayer are implemented on top of OpenGL, so animating the translation of a UIView already makes use of the GPU.
As an aside, using CADisplayLink instead of NSTimer would probably be better for a game loop.
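A rough sketch of what such a loop might look like (the -step: selector, -nextPositionAtTime:, and playerView are my own placeholder names):

- (void)startGameLoop
{
    // Fires once per display refresh instead of relying on an NSTimer interval.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(step:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)step:(CADisplayLink *)link
{
    // Do the movement math, then move the view.
    CGPoint next = [self nextPositionAtTime:link.timestamp];
    self.playerView.center = next;
}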
The problem with the iPhone simulator is that it has access to the same resources as your Mac: your Mac's RAM, video card, etc. What I would suggest is opening Instruments (which comes with the iPhone SDK) and using the Core Animation template to look at how your resources are being managed. You could also look at the Allocations instrument to see if something is hogging RAM; the CPU instrument could help as well.
tl;dr: The iPhone simulator uses your Mac's RAM and graphics card. Try looking at the sequence in Instruments to see where the lag is.
I'm working on an iPhone app that I'm going to be demo'ing to a live audience soon.
I'd really like to demo the app live over VGA to a projector, rather than show screenshots.
I bought a VGA adapter for iPhone, and have adapted Rob Terrell's TVOutManager to suit my needs. Unfortunately, the frame rate after testing on my television at home just isn't that good - even on an iPhone 4 (perhaps 4-5 frames per second, it varies).
I believe the reason for this slowness is that the main routine I'm using to capture the device's screen (which is then being displayed on an external display) is UIGetScreenImage(). This routine, which is no longer allowed to be part of shipping apps, is actually quite slow. Here's the code I'm using to capture the screen (FYI mirrorView is a UIImageView):
CGImageRef cgScreen = UIGetScreenImage();
self.mirrorView.image = [UIImage imageWithCGImage:cgScreen];
CGImageRelease(cgScreen);
Is there a faster method I can use to capture the iPhone's screen and achieve a better frame rate (shooting for 20+ fps)? It doesn't need to pass Apple's app review - this demo code won't be in the shipping app. If anyone knows of any faster private APIs, I'd really appreciate the help!
Also, the above code is being executed using a repeating NSTimer which fires every 1.0/desiredFrameRate seconds (currently every 0.1 seconds). I'm wondering if instead wrapping those calls in a block and using GCD or an NSOperationQueue might be more efficient than having the NSTimer invoke my updateTVOut obj-c method that currently contains those calls. Would appreciate some input on that too - some searching seems to indicate that obj-c message sending is somewhat slow compared to other operations.
Finally, as you can see above, the CGImageRef that UIGetScreenImage() returns is being turned into a UIImage and then that UIImage is being passed to a UIImageView, which is probably resizing the image on the fly. I'm wondering if the resizing might be slowing things down even more. Ideas of how to do this faster?
Have you looked at Apple's recommended alternatives to UIGetScreenImage? From the "Notice regarding UIGetScreenImage()" thread:
Applications using UIGetScreenImage() to capture images from the camera should instead use AVCaptureSession and related classes in the AV Foundation Framework. For an example, see Technical Q&A QA1702, "How to capture video frames from the camera as images using AV Foundation". Note that use of AVCaptureSession is supported in iOS4 and above only.
Applications using UIGetScreenImage() to capture the contents of interface views and layers should instead use the -renderInContext: method of CALayer in the QuartzCore framework. For an example, see Technical Q&A QA1703, "Screen capture in UIKit applications".
Applications using UIGetScreenImage() to capture the contents of OpenGL ES based views and layers should instead use the glReadPixels() function to obtain pixel data. For an example, see Technical Q&A QA1704, "OpenGL ES View Snapshot".
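For the second case (capturing UIKit view content), the QA1703 approach boils down to roughly this sketch (mirrorView comes from the question; whether it reaches 20+ fps on an iPhone 4 is something you would have to measure):

UIWindow *mainWindow = [[UIApplication sharedApplication] keyWindow];
UIGraphicsBeginImageContextWithOptions(mainWindow.bounds.size, NO, 0.0);
[mainWindow.layer renderInContext:UIGraphicsGetCurrentContext()];
self.mirrorView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();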
New solution: get an iPad 2 and mirror the output! :)
I don't know how fast this is, but it's worth a try ;)
CGImageRef screenshot = [[UIApplication sharedApplication] _createDefaultImageSnapshot];
[myVGAView.layer setContents:(id)screenshot];
where _createDefaultImageSnapshot is a private API. (Since it's for a demo... it's OK, I suppose.)
and myVGAView is a normal UIView.
If you get CGImageRefs, just pass them to a layer's contents; it's lighter and should be a little bit faster (but just a little bit ;) ).
I haven't got the solution you want (simulating video mirroring), but you can move your views to the external display. This is what I do, and there is no appreciable impact on the frame rate. However, since the view is no longer on the device's screen, you obviously can no longer see it or interact with it directly. If you have something like a game controlled with the accelerometer this shouldn't be a problem, but something touch-based will require some work. What I do is show an alternative view on the device while the primary view is external; for me this is a 2D control view used to "command" the normal 3D view. For a game you could perhaps create an alternative input view (virtual buttons/joystick, etc.); it really depends on what you have as to how best to work around it.
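A rough sketch of that approach using UIScreen's external display support (assumes ARC; externalWindow, gameView, and controllerView are my own placeholder names, not from the answer above):

if ([[UIScreen screens] count] > 1) {
    UIScreen *external = [[UIScreen screens] objectAtIndex:1];
    external.currentMode = [external.availableModes lastObject];   // pick a resolution the adapter supports

    self.externalWindow = [[UIWindow alloc] initWithFrame:external.bounds];
    self.externalWindow.screen = external;
    [self.externalWindow addSubview:self.gameView];                // real content goes out over VGA
    self.externalWindow.hidden = NO;

    [self.window addSubview:self.controllerView];                  // touch controls stay on the device
}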
Not having jailbroken a device myself I can't say for sure, but I am under the impression that a jailbroken device can essentially enable video mirroring (like they use at the Apple demos...). If true, that is likely your easiest route if all you want is a demo.
I have a scroll view with some sophisticated animations happening during scrolling. Although the performance is acceptable now after two weeks of fine-tuning, the scrolling is not 100% smooth when the animations happen.
I know that Core Animation runs animations on a background thread, but I wonder if it would help to split those animation blocks (10 of them at pretty much the same time) across threads.
There are a few methods that look interesting:
– performSelector:onThread:withObject:waitUntilDone:
– performSelectorInBackground:withObject:
or is that nonsense to do?
No, it won't help. As you correctly stated yourself, Core Animation already runs in a separate thread. Core Animation is smart enough to handle animation blocks as efficiently as possible, and I wouldn't advise interfering with it.
The Core Animation Programming Guide says:
An abstract animation interface that allows animations to run on a separate thread, independent of your application's run loop. Once an animation is configured and starts, Core Animation assumes full responsibility for running it at frame rate.
Are you sure the choppy behavior is really from CA? Do you have anything else going on?
If you have any background network access, consider moving that into a separate thread - the time taken to service those calls takes away from time the UI spends updating the screen as you scroll.
I'm drawing offscreen to a CGContext created using CGBitmapContextCreate, then later generating a CGImage from it with CGBitmapContextCreateImage and drawing that onto my view in drawRect (I'm also drawing some other stuff on top of that - this is an exercise in isolating different levels of variability and complexity).
This all works fine when it's all running on the main thread. However, one of the motivations for splitting it out this way was so that the offscreen part could run on a background thread (which I had thought should be OK, since it's not rendering to an onscreen context).
However, when I do this the resulting image is empty! I've checked over the code, and placed judicious NSLogs to verify that everything is happening in the right order.
My next step is to boil this down to the simplest code that reproduces the issue (or find some silly thing I'm missing and fix it) - at which point I'd have some code to post here if necessary. But I first wanted to check here that I'm not going down the wrong path with this. I couldn't find anything in my travels around the googlesphere that sheds light either way - but a friend did mention that he ran into a similar issue while trying to resize images in a background thread - suggesting there may be some general limitation here.
[edit]
Thanks for the responses so far. If nothing else they have told me that at least I'm not alone in not having an answer for this - which was part of what I wanted to find out. At this point I'm going to put the extra work into getting the simplest possible example and may come back with some code or more information. In the meantime keep any ideas coming :-)
One point to bring up: A couple of people have used the term thread safety with respect to APIs. It should be noted that there are two types of thread safety in this context:
Threadability of the API itself - i.e. can it be used at all from more than one thread? (Global state and other re-entrancy issues, such as C's strtok, are common reasons an API might not be thread-safe.)
Atomicity of individual operations - can multiple threads interact with the same objects and resources through the API without application-level locking?
I suspect that mention so far has been of the first type, but would appreciate if you could clarify.
[edit2 - solved!]
Ok, I got it all working. Executive summary is that the problem was with me, rather than bitmap contexts themselves.
In my background thread, just before I drew into the bitmap context, I was doing some preparation on some other objects. It turns out that, indirectly, the calls to those other objects were leading to setNeedsDisplay being called on some views!
By separating the part that did that out to the main thread it now all works perfectly.
So for anyone who hits this question wondering if they can draw to a bitmap context on a background thread, the answer is you can (with the caveats that have been presented here and in the answers).
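For anyone who wants a concrete starting point, the basic pattern looks roughly like this sketch (the queue, the 512x512 size, offscreenImage, and -drawContentInContext: are my own placeholders, not from the thread):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Creating and drawing into an offscreen bitmap context is fine off the main thread.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(NULL, 512, 512, 8, 512 * 4, colorSpace,
                                                (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    [self drawContentInContext:bitmap];           // the slow offscreen drawing
    CGImageRef image = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);

    dispatch_async(dispatch_get_main_queue(), ^{
        // Anything that touches views (or triggers setNeedsDisplay) goes back to the main thread.
        if (self.offscreenImage) CGImageRelease(self.offscreenImage);
        self.offscreenImage = image;              // assumed assign-style CGImageRef property
        [self setNeedsDisplay];                   // drawRect: then draws offscreenImage
    });
});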
Thanks all
Just a guess, but if you are trying to call setNeedsDisplay from another thread, you need to call it via performSelectorOnMainThread instead.
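Something along these lines (myView stands in for whatever view you are updating):

[self.myView performSelectorOnMainThread:@selector(setNeedsDisplay)
                              withObject:nil
                           waitUntilDone:NO];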
What you're doing should work if you're working with the CGContextRef in one and only one thread. I've done this before with 8 cores working on 8 different parts of an image and then compositing the different resultant CGImageRefs together and drawing them onscreen.
Apple doesn't say anything about thread safety on the iPhone, but Cocoa (as opposed to UIKit) is generally thread-safe for drawing. As they share a lot of drawing code, I would assume drawing on the iPhone is thread-safe too.
That said, your experience would imply there are problems. Could it be that you are using your image before it is rendered?
Not all APIs are thread-safe. Some require locking or require that they be run on the main thread. You may want to scour the documentation. I believe there is a page that summarizes which parts of the SDK are thread-safe and which aren't.
In case anyone is/was searching for exactly how to do this, I've written a blog post that describes it and wraps the whole thing in an NSOperation subclass.