Load an OpenGL view in the background (iPhone)

I have an OpenGL view that renders a 3D model. It is a basic modification of Apple's EAGLView. This view is added to a controller's .view and displayed with presentModalViewController:. I would like to do all of the model loading and OpenGL state configuration in a background thread at app launch, before the user chooses to display the view. Is this possible? Can I load textures, set up lighting, and generally get everything ready to render in a background thread? My fear is that the Cocoa Touch portions of the app on the main thread will manipulate the OpenGL state while I am setting up my renderer in the background. The controller will be displayed from the main thread, of course. This level of understanding of OpenGL ES is not something I deal with often, so please be gentle if my question is strange in any way :)

You absolutely can do background loading on a thread. Some of the key points:
- There is probably not much of a win in moving OpenGL state setup to a background thread - the total amount of change you'd induce in a context before the start of the first draw doesn't add up to much time. Background loading is useful for textures and VBOs, as well as the file loading that has to happen first to get the data to feed to the GL.
- You'll need to detach the context from the main thread and move it to the worker thread. We do this using pthreads to "send" the context to the worker (see the sketch below).
- In our use, we hide the GL view to ensure that it doesn't need to be drawn while in a load state. (Frankly during load it may not contain anything useful.) So during async load the visible UI is all non-GL Cocoa.
This approach is more difficult than what you would do on the desktop, where you could simply share objects between two contexts (so that you can load and draw at the same time). When we looked at that approach more than a year ago, it wasn't possible on iOS; it may be possible now, I do not know.
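For illustration, here is a minimal sketch of the "send the context to the worker" idea, assuming a single EAGLContext stored in a hypothetical glContext property; the method names startBackgroundLoad, loadAssetsWithContext:, and loadDidFinish are likewise made up for the example:

```objc
// A sketch of handing an EAGLContext to a worker thread for loading.
// A context may be current on only one thread at a time, so the main
// thread must let go of it first.
- (void)startBackgroundLoad {
    [EAGLContext setCurrentContext:nil]; // detach from the main thread
    [self performSelectorInBackground:@selector(loadAssetsWithContext:)
                           withObject:self.glContext];
}

- (void)loadAssetsWithContext:(EAGLContext *)context {
    @autoreleasepool {
        [EAGLContext setCurrentContext:context]; // bind to this worker thread

        // Expensive setup: create and upload textures, build VBOs, etc.
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        // ... glTexImage2D with decoded pixel data ...

        glFlush(); // submit commands before handing the context back
        [EAGLContext setCurrentContext:nil];

        [self performSelectorOnMainThread:@selector(loadDidFinish)
                               withObject:nil
                            waitUntilDone:NO];
    }
}
```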


Why must UIKit operations be performed on the main thread?

I am trying to understand why UI operations can't be performed using multiple threads. Is this also a requirement in other frameworks like OpenGL or cocos2d?
How about other languages like C# and JavaScript? I tried looking on Google, but people mention something about POSIX threads, which I don't understand.
In Cocoa Touch, the UIApplication, i.e. the instance of your application, is attached to the main thread because this thread is created by UIApplicationMain(), the entry-point function of Cocoa Touch. It sets up the main event loop, including the application's run loop, and begins processing events. The application's main event loop receives all of the UI events, i.e. touches, gestures, etc.
From the docs for UIApplicationMain():
This function instantiates the application object from the principal class and instantiates the delegate (if any) from the given class and sets the delegate for the application. It also sets up the main event loop, including the application’s run loop, and begins processing events. If the application’s Info.plist file specifies a main nib file to be loaded, by including the NSMainNibFile key and a valid nib file name for the value, this function loads that nib file.
These application UI events are forwarded along the responder chain, usually UIApplication -> UIWindow -> UIViewController -> UIView -> subviews (UIButton, etc.).
Responders handle events like button presses, taps, pinch zoom, swipes, etc., which get translated into changes in the UI. As you can see, this chain of events occurs on the main thread, which is why UIKit, the framework that contains the responders, should operate on the main thread.
From the UIKit docs again:
For the most part, UIKit classes should be used only from an application’s main thread. This is particularly true for classes derived from UIResponder or that involve manipulating your application’s user interface in any way.
EDIT
Why does drawRect: need to be on the main thread?
drawRect: is called by UIKit as part of UIView's lifecycle, so drawRect: is bound to the main thread. Drawing this way is expensive because it is done by the CPU on the main thread. Hardware-accelerated graphics are provided by the CALayer technique (Core Animation).
CALayer, on the other hand, acts as a backing store for the view. The view then just displays a cached bitmap of its current state. Any change to the view's properties results in changes to the backing store, which are performed by the GPU on the backing copy. However, the view still needs to provide the initial content and periodically update it. I have not really worked with OpenGL, but I think it also uses layers (I could be wrong).
I have tried to answer this to the best of my knowledge. Hope that helps!
From: https://www.objc.io/issues/2-concurrency/thread-safe-class-design/
It’s a conscious design decision from Apple’s side to not have UIKit be thread-safe. Making it thread-safe wouldn’t buy you much in terms of performance; it would in fact make many things slower. And the fact that UIKit is tied to the main thread makes it very easy to write concurrent programs and use UIKit. All you have to do is make sure that calls into UIKit are always made on the main thread.
So, according to this, the fact that UIKit objects must be accessed on the main thread is a design decision by Apple to favor performance.
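To make that rule concrete, here is a minimal sketch of the usual pattern, where doExpensiveWork and the statusLabel property are hypothetical names: heavy work happens on a background queue, and the UIKit call hops back to the main queue.

```objc
// Heavy work off the main thread; the UIKit update back on the main queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSString *result = [self doExpensiveWork]; // safe: no UIKit calls here
    dispatch_async(dispatch_get_main_queue(), ^{
        self.statusLabel.text = result;        // UIKit: main thread only
    });
});
```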
C# behaves the same way (see e.g. here: Keep the UI thread responsive). UI updates have to be done on the UI thread; most other things should be done in the background when possible.
If that weren't the case, there would probably be synchronization hell between all the updates that have to be made to the UI.
Every system, every library, needs to be concerned about thread safety and must do things to ensure thread safety, while at the same time looking after correctness and performance as well.
In the case of the iOS and MacOS X user interface, the decision was made to make the UI thread-safe by only allowing UI methods to be called and executed on the main thread. And that's it.
Since there are lots of complicated things going on that would need at least serialisation to prevent total chaos from happening, I don't see very much gained from allowing UI on a background thread.
Because you want the user to be able to see UI changes as they happen. If you were able to perform UI changes on a background thread and display them only when complete, it would seem as though the app wasn't behaving right.
All non-UI operations (or at least the very costly ones, like downloading data or making database queries) should take place on a background thread, whereas all UI changes must always happen on the main thread to provide as smooth a user experience as possible.
I don't know what it's like in C# for Windows Phone apps, but I would expect it to be the same. On Android the system won't even let you do things like downloading on the main thread, forcing you to create a background thread instead.
As a rule of thumb - when you think main thread, think "what the user sees".

How Can I Record the Screen with Acceptable Performance While Keeping the UI Responsive?

I'm looking for help with a performance issue in an Objective-C based iOS app.
I have an iOS application that captures the screen's contents using CALayer's renderInContext: method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for usability research purposes. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data, etc. The content of the Web view is not under my control - it is arbitrary content from the Web.
This setup is working, but as you might imagine, it's not buttery smooth. Since the layer must be rendered on the main thread, there's more UI contention than I'd like. What I'd like is a setup where the responsiveness of the UI is prioritized over the screen capture. For instance, if the user is scrolling the Web view, I'd rather drop frames on the recording than have a terrible scrolling experience.
I've experimented with several techniques, from dispatch_source coalescing to submitting the frame capture requests as blocks to the main queue to CADisplayLink. So far they all seem to perform about the same. The frame capture is currently being triggered in the drawRect of the screen's main view.
What I'm asking here is: given the above, what techniques would you suggest I try to achieve my goals? I realize the answer may be that there is no great answer... but I'd like to try anything, however wacky it might sound.
NOTE: Whatever techniques you suggest need to be App Store friendly. I can't use something like the CoreSurface hack that Display Recorder used/uses.
Thanks for your help!
"Since the layer must be rendered on the main thread" this is not true, as long as you don't touch UIKit.
Please see https://stackoverflow.com/a/12844171/136305
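Building on that answer, here is a minimal sketch of capturing a layer snapshot on a background queue, under the assumption that the layer tree isn't mutated while you render (captureQueue and writeFrame: are hypothetical names). As of iOS 4 the UIGraphics context functions are documented as usable off the main thread.

```objc
// Captured on the main thread before dispatching.
CGRect bounds = self.view.bounds;
CALayer *layer = self.view.layer;

dispatch_async(captureQueue, ^{
    // A scale of 0.5 instead of 1.0 here would halve the capture
    // resolution, as suggested in the next answer.
    UIGraphicsBeginImageContextWithOptions(bounds.size, NO, 1.0);
    [layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self writeFrame:frame]; // hand off to the AVFoundation writer
});
```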
Maybe you can record at half resolution to speed things up, if that fits the requirements?

Preloading assets on iOS

Suppose you have several animations that you want to achieve with a sequence of PNGs in a UIImageView.
If the animation contains many images, there is a delay from the moment you send the message [myImgView startAnimating], because all of the images have to be loaded into memory.
I noticed that the loading is lazy: as long as the startAnimating message is not sent, the images are not loaded into memory.
To avoid the delay, I load the whole animation in the app delegate, attach it as a subview, and animate it once. I want to understand: what is the best solution? And does my current solution have any drawbacks?
You're right about lazy loading. I've never been able to determine whether it's actually lazy spooling from disk or lazy decompression, but in either case a UIImage (and, underneath, a CGImage) is not necessarily fully processed and ready to draw until it is actually used. I assume that it may conceivably become unready again in the future, depending on exactly how Apple handles memory warnings internally.
If you wanted to be really keen, I guess the best solution would be to use Core Graphics to load and decompress the images in the background, pausing to finish loading upon startAnimating only if loading hasn't already finished. You can't achieve that directly with UIKit objects, since they're not callable anywhere but on the main thread. You'd need to get a CGImage, draw it into a bitmap context, create an image from the bitmap context, and post that to the main thread to be wrapped into a UIImage. And it'd probably be smart to use an NSOperationQueue to marshal the complete list of operations.
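A minimal sketch of that pipeline, under those assumptions; the path variable and the cacheDecodedImage: method are hypothetical:

```objc
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue addOperationWithBlock:^{
    // Load the PNG with Core Graphics, off the main thread.
    CGDataProviderRef provider =
        CGDataProviderCreateWithFilename([path fileSystemRepresentation]);
    CGImageRef source = CGImageCreateWithPNGDataProvider(
        provider, NULL, NO, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);

    // Drawing into a bitmap context forces decompression now, rather
    // than lazily at first display.
    size_t width = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(
        NULL, width, height, 8, 0, colorSpace,
        kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
    CGImageRelease(source);

    CGImageRef decoded = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);

    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // Wrap in a UIImage on the main thread, ready to animate.
        UIImage *ready = [UIImage imageWithCGImage:decoded];
        CGImageRelease(decoded);
        [self cacheDecodedImage:ready];
    }];
}];
```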
Supposing you don't mind the additional startup cost of blocking while you load all the PNGs, and you are dealing with memory warnings correctly, there shouldn't be any problems with your current approach, and I can't think of a better solution while remaining in high-level Objective-C.

Mixing OpenGL and Interface Builder/UI controls - bad idea? Why? (iPhone)

I've heard that OpenGL ES and standard iPhone UI controls don't play well together, but I'm wondering if anyone knows why, and what the effects are. I'm writing an OpenGL-based game, and the view is loaded from a nib file with UI controls, and it seems to work OK, but the game is really simple at this point... does using UI controls cause some kind of performance hit?
UI events momentarily pause timers, for example while scrolling a table view. You can get around this by scheduling the timer in the common run-loop modes when creating it (see the sketch below). Having a lot of layers may also slow down your rendering, because they all need to be redrawn every time you refresh. So if your game runs at 60fps, it will also redraw everything on top of the GL view, like UIImageViews, buttons, etc., 60 times a second, which is a huge waste. It might not make a huge impact on your frame rate, but it may make the device run hotter and drain the battery faster. It's best to draw your HUD using OpenGL, but it depends on the situation. For something that will be displayed only for a short time, like a menu, I think you can get away with it.
There's nothing wrong with it, it's just wasteful.
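For reference, a minimal sketch of the common-modes trick mentioned above; the render: selector is a hypothetical per-frame callback:

```objc
// A CADisplayLink scheduled in the common modes keeps firing while
// the user scrolls a table view or web view.
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                  selector:@selector(render:)];
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

// The NSTimer equivalent: create the timer yourself and add it to the
// common modes, instead of using the scheduled... convenience method,
// which registers in the default mode only.
NSTimer *timer = [NSTimer timerWithTimeInterval:1.0 / 60.0
                                         target:self
                                       selector:@selector(render:)
                                       userInfo:nil
                                        repeats:YES];
[[NSRunLoop mainRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];
```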

iPhone Animated Loading Screen

Is there a way to have an animated loading screen for my iPhone application as opposed to the Default.png that I currently am using?
In short - no. The purpose of Default.png is to give the iPhone OS something to display to the user while it loads your application. The best you can do is speed up the initial load of your application (say, defer your resource loading until after the program is running), then add your own animation while you actually load your resources behind the scenes.
If you think of it as an animated loading screen, then no; but if you mean having the first view of your application load all the data and show something while it is doing that, then surely yes. I am trying to do that myself and failing at the moment, though.
As far as I know, unfortunately not. The point of the lightness of Default.png is to allow the app to do intensive ramp-up behind the scenes. Animation would eat precious CPU cycles.
However, if you need to do more processing once your app has launched, you could run a threaded Core Animation (CAAnimation) during this time.
No, but if your initialization takes a long time, you can add a customized animated launch view once the application has launched. In short:
- After launch, before all the real initialization, alloc, init, and display a view that looks exactly like Default.png but has an animating effect.
- While that animating view is displayed, initialize the real parts of your application in the background.
- Replace the animating view when done.
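A minimal sketch of that recipe in the app delegate; heavyInitialization and showMainUI are hypothetical names for your real setup and UI code:

```objc
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // A view that looks exactly like the launch image, plus a spinner.
    UIImageView *splash =
        [[UIImageView alloc] initWithFrame:self.window.bounds];
    splash.image = [UIImage imageNamed:@"Default"];

    UIActivityIndicatorView *spinner = [[UIActivityIndicatorView alloc]
        initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleWhiteLarge];
    spinner.center = splash.center;
    [spinner startAnimating];
    [splash addSubview:spinner];

    [self.window addSubview:splash];
    [self.window makeKeyAndVisible];

    // Do the real initialization in the background, then swap views.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self heavyInitialization];
        dispatch_async(dispatch_get_main_queue(), ^{
            [splash removeFromSuperview];
            [self showMainUI];
        });
    });
    return YES;
}
```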
You can do what one app I know of does: they created a series of images which, when displayed in sequence, make it look as if the splash screen is animating. You can check this app to get an idea: TravellerID
Hope this helps.