OpenGL not showing output from a secondary thread on iPhone

I created an EAGLContext and use it on a secondary thread, but no output is displayed. The same code works fine when run on the main thread.
Do I need to notify something each time a render completes?

An EAGL context can only be current on one thread at a time, so you need to make your context current on the thread that does the rendering (the EAGL counterpart of eglMakeCurrent()). You also need to present the renderbuffer (the counterpart of eglSwapBuffers()) once your rendering is done, or nothing will reach the screen.
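As a rough illustration, here is a minimal Swift sketch of that flow under assumed names (the renderer class, queue label, and framebuffer setup are hypothetical; it presumes a color renderbuffer already attached to a CAEAGLLayer-backed view):

import Foundation
import OpenGLES

final class BackgroundRenderer {
    private let context: EAGLContext
    // Serial queue so frames render one at a time; the context is made current
    // at the start of every block because GCD does not guarantee the same thread.
    private let queue = DispatchQueue(label: "com.example.render")

    init?() {
        guard let ctx = EAGLContext(api: .openGLES2) else { return nil }
        context = ctx
    }

    func renderFrame() {
        queue.async {
            // Make the context current on *this* thread before any GL calls.
            _ = EAGLContext.setCurrent(self.context)

            // ... bind the framebuffer and issue glClear/draw calls here ...

            // Present the renderbuffer so the frame actually reaches the screen
            // (the EAGL counterpart of eglSwapBuffers()).
            _ = self.context.presentRenderbuffer(Int(GL_RENDERBUFFER))
        }
    }
}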

Related

How does DispatchQueue.main.async "update the UI"?

I've been using Swift for a little while and GCD still confuses me a bit.
I've read:
https://www.raywenderlich.com/60749/grand-central-dispatch-in-depth-part-1
As well as the Apple docs on dispatch:
https://developer.apple.com/documentation/dispatch
I understand the overall concept that GCD allows multiple tasks to be run on different threads (I think that's right).
What I don't quite understand is how DispatchQueue.main.async "updates the UI".
For example, if I make a call to an API somewhere and the data takes, say, 5 seconds to return, how does using DispatchQueue.main.async help with updating the UI? How does DispatchQueue.main.async know what UI to update?
I also still don't quite get the place of GCD: why can't some kind of observer, delegate, or closure be used instead, called when all the data has loaded?
And regarding "updating the UI" with GCD: if I'm making an API call but not using the data immediately (e.g. just storing it in an array until I decide to use it), is there any need to use DispatchQueue.main.async?
I've also been using Firebase/Firestore as a database for a little while now. Firebase has its own listeners and runs asynchronously. I still can't get a great answer on the best way to handle the asynchronous return from Firebase in iOS/Swift. For example, when my app loads and I go to Firebase to get data to populate a table view controller, what is the best way to know when all the data has returned? I've been using a delegate for this, but was wondering if and how DispatchQueue.main.async might be used.
DispatchQueue.main.async does not update the UI. The story goes in a different direction: if you want to update the UI, you must do so from the main thread. If your current code is not running on the main thread, DispatchQueue.main.async is the most convenient way to have some code run on the main thread.
It's an old restriction that affects most operating systems: UI-related actions, such as changing elements in the UI, may only be performed from a specific thread, usually the so-called main thread.
In many cases that's not a problem, since your UI-related code usually runs in response to some UI event (a click or tap, a key press, etc.). These event callbacks happen on the main thread, so there is no threading issue.
With GCD, you can run long-running tasks on separate threads so that they don't slow down or even block the UI. When these tasks are finished and you want to update the UI (e.g. to display the result), you must do so on the main thread. With DispatchQueue.main.async you ask GCD to run a piece of code on the main thread. GCD doesn't know anything about the UI; your code must know what to update. GCD just runs your code on the desired thread.
If at the end of your task there is nothing to display or otherwise update in the UI, then you don't need to call DispatchQueue.main.async.
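To make that concrete, here is a minimal Swift sketch under assumed names (the URL is a placeholder and the label is just whatever UI element you want to update): the network work happens on a background queue, and only the final UI update hops back to the main thread.

import UIKit

func loadGreeting(into label: UILabel) {
    let url = URL(string: "https://example.com/api/greeting")!   // placeholder URL
    URLSession.shared.dataTask(with: url) { data, _, _ in
        // This completion handler runs on a background queue.
        let text = data.flatMap { String(data: $0, encoding: .utf8) } ?? "no data"
        DispatchQueue.main.async {
            // Hop back onto the main thread before touching UIKit.
            label.text = text
        }
    }.resume()
}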
Update re Firebase
The Firebase Database client performs all network and disk operations on a separate background thread, off the main thread.
The Firebase Database client invokes all callbacks to your code on the main thread.
So there is no need to call DispatchQueue.main.async in the Firebase callbacks; you are already on the main thread.
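For illustration, a hedged sketch of populating a table view from the Realtime Database; the "items" node, the view controller, and the string values are all assumptions rather than part of the question, and it presumes FirebaseApp.configure() has already run. Because the callback arrives on the main thread, the table can be reloaded directly:

import UIKit
import FirebaseDatabase

final class ItemsViewController: UITableViewController {
    private var items: [String] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        Database.database().reference(withPath: "items")
            .observeSingleEvent(of: .value) { [weak self] snapshot in
                // Firebase delivers this callback on the main thread,
                // so no DispatchQueue.main.async is needed here.
                self?.items = snapshot.children.compactMap {
                    ($0 as? DataSnapshot)?.value as? String
                }
                self?.tableView.reloadData()
            }
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        items.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = UITableViewCell(style: .default, reuseIdentifier: nil)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }
}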
FYI, the reason that all of the UI code needs to go on the main thread is that drawing is a (relatively, in CPU time) long and expensive process involving many data structures and millions of pixels. The graphics code essentially needs to lock a copy of all of the UI resources when it's doing a frame update, so you cannot edit these in the middle of a draw; otherwise you would get weird artifacts if you changed things halfway through while the system is rendering those objects. Since all the drawing code is on the main thread, this lets the system block the main thread until it's done rendering, so none of your changes get processed until the current frame is done. Also, since some of the drawing is cached (basically rendered to a texture until you call something like setNeedsDisplay or setNeedsLayout), if you try to update something from a background thread it's entirely possible that it just won't show up, and it can lead to inconsistent state, which is why you aren't supposed to call any UI code on background threads.

Why must UIKit operations be performed on the main thread?

I am trying to understand why UI operations can't be performed using multiple threads. Is this also a requirement in other frameworks like OpenGL or cocos2d?
How about other languages like C# and JavaScript? I tried looking on Google, but people mention something about POSIX threads, which I don't understand.
In Cocoa Touch, the UIApplication, i.e. the instance of your application, is attached to the main thread because this thread is created by UIApplicationMain(), the entry-point function of Cocoa Touch. It sets up the main event loop, including the application's run loop, and begins processing events. The application's main event loop receives all the UI events, i.e. touches, gestures, etc.
From the docs for UIApplicationMain():
This function instantiates the application object from the principal class and instantiates the delegate (if any) from the given class and sets the delegate for the application. It also sets up the main event loop, including the application’s run loop, and begins processing events. If the application’s Info.plist file specifies a main nib file to be loaded, by including the NSMainNibFile key and a valid nib file name for the value, this function loads that nib file.
These application UI events are then forwarded along the responder chain of UIResponders, usually UIApplication -> UIWindow -> UIViewController -> UIView -> subviews (UIButton, etc.).
Responders handle events like button presses, taps, pinch-to-zoom, swipes, etc., which get translated into changes in the UI. Hence, as you can see, this chain of events occurs on the main thread, which is why UIKit, the framework that contains the responders, should operate on the main thread.
From the UIKit docs again:
For the most part, UIKit classes should be used only from an application’s main thread. This is particularly true for classes derived from UIResponder or that involve manipulating your application’s user interface in any way.
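As a small illustration of that rule, here is a Swift sketch (the function and label names are hypothetical) that traps in debug builds if UI code is ever reached off the main queue, plus a variant that hops to the main queue when needed:

import UIKit

// Traps in debug builds if this is ever called off the main queue.
func updateTitle(_ title: String, on label: UILabel) {
    dispatchPrecondition(condition: .onQueue(.main))
    label.text = title
}

// If the caller might be on a background thread, hop over explicitly.
func updateTitleSafely(_ title: String, on label: UILabel) {
    if Thread.isMainThread {
        label.text = title
    } else {
        DispatchQueue.main.async { label.text = title }
    }
}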
EDIT
Why does drawRect: need to run on the main thread?
drawRect: is called by UIKit as part of UIView's display lifecycle, so drawRect: is bound to the main thread. Drawing this way is expensive because it is done by the CPU on the main thread. Hardware-accelerated graphics are provided through CALayer (Core Animation).
CALayer, on the other hand, acts as a backing store for the view. The view then just displays a cached bitmap of its current state. Any change to the view's properties results in changes to the backing store, which are performed by the GPU on the backing copy. However, the view still needs to provide the initial content and periodically update it. I have not really worked with OpenGL, but I think it also uses layers (I could be wrong).
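A minimal Swift sketch of that contract, using a hypothetical GraphView: changing the property only marks the view dirty, and UIKit then calls draw(_:) on the main thread during the next display pass.

import UIKit

final class GraphView: UIView {
    var values: [CGFloat] = [] {
        didSet {
            // Only marks the view as needing display; UIKit schedules the
            // actual draw(_:) call on the main thread.
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext(), values.count > 1 else { return }
        ctx.setStrokeColor(UIColor.systemBlue.cgColor)
        let step = rect.width / CGFloat(values.count - 1)
        ctx.move(to: CGPoint(x: 0, y: rect.maxY - values[0]))
        for (i, v) in values.enumerated().dropFirst() {
            ctx.addLine(to: CGPoint(x: CGFloat(i) * step, y: rect.maxY - v))
        }
        ctx.strokePath()
    }
}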
I have tried to answer this to the best of my knowledge. Hope that helps!
From: https://www.objc.io/issues/2-concurrency/thread-safe-class-design/
It’s a conscious design decision from Apple’s side to not have UIKit be thread-safe. Making it thread-safe wouldn’t buy you much in terms of performance; it would in fact make many things slower. And the fact that UIKit is tied to the main thread makes it very easy to write concurrent programs and use UIKit. All you have to do is make sure that calls into UIKit are always made on the main thread.
So, according to this, the fact that UIKit objects must be accessed on the main thread is a design decision by Apple to favor performance.
C# behaves the same (see e.g. here: Keep the UI thread responsive). UI updates have to be done on the UI thread; most other things should be done in the background when possible.
If that weren't the case, there would probably be synchronization hell between all the updates that have to be made to the UI...
Every system, every library, needs to be concerned about thread safety and must do things to ensure thread safety, while at the same time looking after correctness and performance as well.
In the case of the iOS and Mac OS X user interfaces, the decision was made to make the UI thread-safe by only allowing UI methods to be called and executed on the main thread. And that's it.
Since there are lots of complicated things going on that would need at least serialisation to prevent total chaos from happening, I don't see very much gained from allowing UI on a background thread.
Because you want the user to be able to see UI changes as they happen. If you could perform UI changes on a background thread and only display them when complete, the app would seem not to behave right.
All non-UI operations (or at least the very costly ones, like downloading data or making database queries) should take place on a background thread, whereas all UI changes must always happen on the main thread to provide as smooth a user experience as possible.
I don't know what it's like in C# for Windows Phone apps, but I would expect it to be the same. On Android the system won't even let you do things like downloading on the main thread; it makes you create a background thread directly.
As a rule of thumb - when you think main thread, think "what the user sees".

DirectX control in a browser plugin

I have to embed a DirectX control in a FireBreath plugin for a browser.
Can anyone post a sample of how to do it? I have no experience with plugins...
Thanks
I don't have an example that I can give you, but I can tell you roughly what you need to do.
First, read this: http://colonelpanic.net/2010/11/firebreath-tips-drawing-on-windows/
That will give you an overview of how drawing works in FireBreath.
First, you set everything up when handling AttachedEvent.
Create a new thread to handle drawing (your DirectX drawing must not be on the main thread)
Get the HWND from the PluginWindowWin object (cast the FB::PluginWindow* to FB::PluginWindowWin and call getHWND())
Initialize DirectX on the secondary thread with the provided HWND. Set up some form of render loop and make sure you can send it commands from the main thread.
Handle the RefreshEvent (comes from WM_PAINT) by posting a message somehow to your render thread so it redraws when that event is fired.
Make sure that on DetachedEvent you shut down your thread.
You need to do all initialization, drawing, and shutdown of the DirectX stuff on the same thread, and that thread must not be the main thread (don't just use timers), because otherwise it'll mess up the browser rendering context on some versions of Firefox -- not sure why.
Anyway, hope this helps.
Edit: To pass parameters into the start of a boost::thread, should that be the threading abstraction you decide to use, simply pass them as extra arguments to the thread constructor.
boost::thread t(&MyClass::someFunction, this, theHWND);
That will start the thread. In practice, you probably want to make the thread a class member or something so that you can access it later -- remember that you'll want the thread to have stopped during the handling of DetachedEvent. For messages I'd probably use FB::SafeQueue, which is a thread-safe queue that is part of FireBreath. Look at the sources for how to use it; it's pretty straightforward (stolen from a CodeProject article, I think).
// Inside MyClass
void someFunction(HWND theHWND) {
...
}

NSTimer Lag - iPhone SDK

I made a game that uses many timers throughout the code. However, the timer has to handle many tasks in a very small amount of time, which leads to lag in my game. For example, my timer runs at an interval of 0.05 seconds and it needs to draw and update many of the images on the screen. Is there any way I can distribute the workload so that the program runs more smoothly?
Thanks
Kevin
I would use an NSThread instead of an NSTimer. I have had more success in this area using NSThread because it runs on an independent thread and is not fired off your main UI thread. In the thread's loop, sleep it for 1/20 (your 0.05) of a second. Because the thread is not running on the UI thread, its work should not slow your UI down. However, because it is not running on the UI thread, you will have to call performSelectorOnMainThread to get the UI to update from this background thread. I put a lock on my update method (a simple boolean) that says: if the last UI update has not happened yet, just skip this one. Then, if I'm running out of processing time, I just drop a frame or two here and there. I also do a lot of checking to see whether anything has actually changed before I redraw.
Simple solution: Ditch NSTimer.
Move your redrawing code to a single method, then use CADisplayLink. The issue with the NSTimer approach is that everything is redrawn too fast or too slow for the screen. By using CADisplayLink, you can synchronize your redraw code with the screen refresh rate. All you need to do then is touch up your code so that it can deal with not being called at a fixed interval.
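A minimal Swift sketch of the CADisplayLink approach, with the render closure standing in for whatever per-frame update and drawing the game needs (all names here are placeholders):

import UIKit

final class GameLoop {
    private var displayLink: CADisplayLink?
    private let render: () -> Void

    init(render: @escaping () -> Void) {
        self.render = render
    }

    func start() {
        // Fires once per screen refresh, in sync with the display.
        let link = CADisplayLink(target: self, selector: #selector(step))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func step(_ link: CADisplayLink) {
        // Scale movement by link.targetTimestamp - link.timestamp so the
        // game copes gracefully with an occasional dropped frame.
        render()
    }
}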
And yes, check to make sure you don't need to redraw as Aran Mulholland said above. Just make sure the checks don't take as long as a redraw.
And remember to optimize your code. A lot. Use ivars to read your objects, but use the property (self.myObject =) to set them.

Multi-threaded OpenGL Programming in Cocos2D-iPhone

In an attempt to create a loading bar for an iPhone game I'm developing (using Cocos2D), I wanted to use a multithreaded approach.
One thread shows a loading screen and runs the main application event loop, while a new thread silently loads all the Sprites in the background (through spriteWithFile) and then adds them to a layer.
I create the new thread using NSThread's detachNewThreadSelector method (the new thread sends updates of the loading status to the main thread via performSelectorOnMainThread).
The problem I'm facing is that any OpenGL calls (such as those made inside the spriteWithFile method) on the new thread die with a BUS ERROR or a memory-access error of some sort. I'm assuming this is because both threads are attempting to make OpenGL calls at the same time, or because the new thread is unaware of the OpenGL context.
What has to be done to allow multiple threads to make OpenGL calls on the iPhone using Cocos2D-iPhone?
For the record, the new thread needs to execute the following two lines to be able to use the OpenGL API from a concurrent thread:
EAGLContext *k_context = [[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1 sharegroup:[[[[Director sharedDirector] openGLView] context] sharegroup]] autorelease];
[EAGLContext setCurrentContext:k_context];
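For what it's worth, a hedged Swift rendering of those two lines; mainContext here is just a stand-in for the context used by the main rendering thread (in Cocos2D that would be the Director's OpenGL view context), and the function name is made up:

import OpenGLES

// `mainContext` stands in for the EAGLContext of the main rendering thread.
func makeWorkerContextCurrent(sharing mainContext: EAGLContext) {
    // A context in the same sharegroup lets textures loaded on this thread
    // be used later by the main thread's context.
    let workerContext = EAGLContext(api: .openGLES1, sharegroup: mainContext.sharegroup)
    // Make it current on *this* (background) thread before any GL calls.
    _ = EAGLContext.setCurrent(workerContext)
}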
This is now made obsolete by the addImageAsync method provided by the TextureMgr class in Cocos2D 0.8.x onwards, which does asynchronous texture loading for you.
PS: This answer is very old; I'm no longer sure that asynchronous texture loading is as useful as it once was, since iOS 5 added "free" texture uploads via CVOpenGLESTextureCaches. You certainly still can (and should) load your assets on a secondary thread, but giving that thread an EAGLContext doesn't seem as necessary now.
Apple has some good guidelines for multithreaded OpenGL here.
Cocos2D best practices recommend against using NSTimer, and I assume the same applies to threads as well. You should probably use Cocos2D's Timer object. This will leave the thread management to Cocos2D and should also let you access the correct graphics context.
HTH.