DirectX control in a browser plugin

I need to embed a DirectX control in a FireBreath browser plugin.
Can anyone post a sample of how to do it? I have no experience with plugins...
Thanks

I don't have an example that I can give you, but I can tell you roughly what you need to do.
First, read this: http://colonelpanic.net/2010/11/firebreath-tips-drawing-on-windows/
That will give you an overview of how drawing works in FireBreath.
First, you set everything up when handling AttachedEvent.
Create a new thread to handle drawing (your DirectX drawing must not be on the main thread)
Get the HWND from the PluginWindowWin object (cast the FB::PluginWindow* to FB::PluginWindowWin* and call getHWND()).
Initialize DirectX on the secondary thread with the provided HWND. Set up some form of render loop and make sure you can send it commands from the main thread.
Handle the RefreshEvent (comes from WM_PAINT) by posting a message somehow to your render thread so it redraws when that event is fired.
Make sure that on DetachedEvent you shut down your thread.
You need to do all initialization, drawing, and shutdown of the DirectX objects on the same thread, and that thread must not be the main thread (don't just use timers); otherwise it can mess up the browser's rendering context on some versions of Firefox -- I'm not sure why.
Anyway, hope this helps.
Edit: If boost::thread is the threading abstraction you decide to use, you pass parameters into the thread function as extra constructor arguments:
boost::thread t(&MyClass::someFunction, this, theHWND);
That will start the thread. In practice you probably want to make the thread a member variable so that you can access it later -- remember that the thread needs to have stopped during the handling of DetachedEvent. For messages I'd probably use FB::SafeQueue, a thread-safe queue that is part of FireBreath. Look at the sources for how to use it; it's pretty straightforward (adapted from a CodeProject article, I think).
// Inside MyClass -- entry point for the render thread
void someFunction(HWND theHWND) {
    // Initialize DirectX against theHWND, run the render loop, and shut
    // everything down on this same thread before returning.
    ...
}

Related

How to exit playmode from another thread

Is it possible (even in a hacky way) to call EditorApplication functions from another thread? More specifically I want to exit play mode from another thread (not main Unity thread).
My use case is, I'm trying to write a small snippet that detects endless loops while in the editor, and breaks out of them in case of detection. So far the "best" I found is killing the process, but this doesn't really help.
You cannot. Unity's main thread is not like a normal program's, in the sense that it is frame-based. You can ask the Unity main thread to run methods, either by setting a bool that is checked every frame or by setting up a Queue<Action> or Queue<Task>, as explained in the link suggested by remy_rm in the comment above.
But these hacks don't run methods on the Unity main thread; they just gracefully ask the Unity main thread itself to run them. The difference, while subtle in normal cases, becomes vital for your problem. You want to call a method on the main thread to kill it when it's stuck in an endless loop, but in that case Unity's main thread will never reach the point in Update() where it's supposed to kill itself. You're basically sending a letter to someone who is never coming home; if they're stuck somewhere and can't reach home, they'll never read it.
The best approach that comes to mind in those cases is to attach a debugger and stop the thread from there.

How does Dispatch.main.async "update the UI"?

I've been using Swift for a little while and GCD still confuses me a bit.
I've read:
https://www.raywenderlich.com/60749/grand-central-dispatch-in-depth-part-1
As well as the Apple docs on dispatch:
https://developer.apple.com/documentation/dispatch
I understand the overall concept that GCD allows multiple tasks to be run on different threads (I think that's right).
What I don't quite understand is how Dispatch.main.async "updates the UI".
For example, if I make a call to an API somewhere and data is returned - say it takes 5 seconds to return all the data - then how does using Dispatch.main.async help with updating the UI? How does Dispatch.main.async know what UI to update?
And I still don't quite get where GCD fits in, and why some kind of observer, delegate, or closure that's called when all the data is loaded can't be used instead.
And re: "updating the UI" with GCD: if I'm making an API call but not using the data immediately (e.g. just storing it in an array until I decide to use it), is there any need to use Dispatch.main.async?
And I've been using Firebase/Firestore as a database for a little while now. Firebase has its own listeners and runs asynchronously. I still can't get a great answer on the best way to handle the asynchronous return from Firebase in iOS/Swift. For example, when my app loads and I go to Firebase to get data to populate a table view controller, what is the best way to know when all the data has returned? I've been using a delegate for this, but was wondering if and how Dispatch.main.async might be used.
Dispatch.main.async does not update the UI. The story goes in the other direction: if you want to update the UI, you must do so from the main thread. If your current code is not running on the main thread, Dispatch.main.async is the most convenient way to have some code run on the main thread.
It's an old restriction that affects most operating systems: UI-related actions, such as changing elements in the UI, may only be performed from a specific thread, usually the so-called main thread.
In many cases that's not a problem, since your UI-related code usually acts when triggered by some UI event (a click or tap, a key press, etc.). These event callbacks happen on the main thread, so there is no threading issue.
With GCD, you can run long-running tasks on separate threads so they don't slow down or even block the UI. When those tasks finish and you want to update the UI (e.g. to display the result), you must do so on the main thread. With Dispatch.main.async you ask GCD to run a piece of code on the main thread. GCD doesn't know anything about the UI; your code must know what to update. GCD just runs your code on the desired thread.
If at the end of your tasks there is nothing to display or otherwise update in the UI, then you don't need to call Dispatch.main.async.
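Here is a minimal sketch of that pattern in Swift (the full API name is DispatchQueue.main.async); the view controller, outlet, and URL are made up for illustration:
import UIKit

class ResultsViewController: UIViewController {
    @IBOutlet weak var statusLabel: UILabel!

    func loadData() {
        // URLSession already does its work off the main thread; this
        // completion handler arrives on a background queue.
        let url = URL(string: "https://example.com/api/data")!
        URLSession.shared.dataTask(with: url) { data, response, error in
            guard let data = data else { return }
            let text = String(decoding: data, as: UTF8.self)
            // GCD knows nothing about the UI; *we* decide what to update.
            // DispatchQueue.main.async only guarantees the closure runs on
            // the main thread.
            DispatchQueue.main.async {
                self.statusLabel.text = text
            }
        }.resume()
    }
}
If loadData only stored the data in an array for later use instead of touching statusLabel, the DispatchQueue.main.async hop could be dropped, as described above.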
Update re Firebase
The Firebase Database client performs all network and disk operations on a separate background thread, off the main thread.
The Firebase Database client invokes all callbacks to your code on the main thread.
So no need to call Dispatch.main.async in the Firebase callbacks. You are already on the main thread.
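As a rough sketch of the table-view case from the question -- the collection name, field, and view controller are made up -- a Firestore completion handler can reload the table directly:
import UIKit
import FirebaseFirestore

class ItemsViewController: UITableViewController {
    private var items: [String] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "Cell")
        // Firestore does the network work on its own background threads, but
        // delivers this completion handler on the main thread, so no
        // DispatchQueue.main.async is needed before touching the table view.
        Firestore.firestore().collection("items").getDocuments { snapshot, error in
            guard let documents = snapshot?.documents else { return }
            self.items = documents.compactMap { $0.data()["name"] as? String }
            self.tableView.reloadData()
        }
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        items.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "Cell", for: indexPath)
        cell.textLabel?.text = items[indexPath.row]
        return cell
    }
}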
FYI, the reason all of the UI code needs to run on the main thread is that drawing is a relatively long and expensive process (in CPU time) involving many data structures and millions of pixels. The graphics code essentially needs to lock a copy of all of the UI resources while it is doing a frame update, so you cannot edit them in the middle of a draw; otherwise you would get weird artifacts from changing things halfway through while the system is rendering those objects. Since all the drawing code is on the main thread, the system can block the main thread until it is done rendering, so none of your changes get processed until the current frame is done.
Also, since some of the drawing is cached (basically rendered to a texture until you call something like setNeedsDisplay or setNeedsLayout), if you try to update something from a background thread it's entirely possible that it simply won't show up and will leave things in an inconsistent state, which is why you aren't supposed to call any UI code on background threads.

Why must UIKit operations be performed on the main thread?

I am trying to understand why UI operations can't be performed using multiple threads. Is this also a requirement in other frameworks like OpenGL or cocos2d?
How about other languages like C# and JavaScript? I tried searching Google, but people mention something about POSIX threads, which I don't understand.
In Cocoa Touch, the UIApplication, i.e. the instance of your application, is attached to the main thread because this thread is created by UIApplicationMain(), the entry point function of Cocoa Touch. It sets up the main event loop, including the application's run loop, and begins processing events. The application's main event loop receives all UI events, i.e. touches, gestures, etc.
From the docs for UIApplicationMain():
This function instantiates the application object from the principal class and instantiates the delegate (if any) from the given class and sets the delegate for the application. It also sets up the main event loop, including the application’s run loop, and begins processing events. If the application’s Info.plist file specifies a main nib file to be loaded, by including the NSMainNibFile key and a valid nib file name for the value, this function loads that nib file.
These UI events are then forwarded along the responder chain, usually UIApplication -> UIWindow -> UIViewController -> UIView -> subviews (UIButton, etc.).
Responders handle events like button presses, taps, pinch-to-zoom, swipes, etc., which get translated into changes in the UI. As you can see, this chain of events occurs on the main thread, which is why UIKit, the framework that contains the responders, should operate on the main thread.
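To make that concrete, here is a tiny, purely illustrative responder override in Swift (the view subclass is hypothetical); the callback it receives is always delivered on the main thread:
import UIKit

final class PanningView: UIView {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesMoved(touches, with: event)
        // UIKit delivers responder callbacks on the main thread, so it is
        // safe to touch UI state directly here -- no dispatching required.
        dispatchPrecondition(condition: .onQueue(.main))
        backgroundColor = .systemTeal
    }
}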
From the docs again, for UIKit:
For the most part, UIKit classes should be used only from an application’s main thread. This is particularly true for classes derived from UIResponder or that involve manipulating your application’s user interface in any way.
EDIT
Why does drawRect need to be on the main thread?
drawRect: is called by UIKit as part of UIView's lifecycle, so drawRect: is bound to the main thread. Drawing this way is expensive because it is done by the CPU on the main thread. Hardware-accelerated graphics are provided via CALayer (Core Animation).
CALayer, on the other hand, acts as a backing store for the view. The view then just displays a cached bitmap of its current state. Any change to the view's properties results in changes to the backing store, which are performed by the GPU on the backed copy. However, the view still needs to provide the initial content and periodically update it. I have not really worked with OpenGL, but I think it also uses layers (I could be wrong).
I have tried to answer this to the best of my knowledge. Hope that helps!
From: https://www.objc.io/issues/2-concurrency/thread-safe-class-design/
It’s a conscious design decision from Apple’s side to not have UIKit be thread-safe. Making it thread-safe wouldn’t buy you much in terms of performance; it would in fact make many things slower. And the fact that UIKit is tied to the main thread makes it very easy to write concurrent programs and use UIKit. All you have to do is make sure that calls into UIKit are always made on the main thread.
So, according to this, the fact that UIKit objects must be accessed on the main thread is a design decision by Apple to favor performance.
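One common way to honor that rule, sketched here in Swift with a made-up helper name, is to funnel UIKit access through the main queue whenever the caller might be on a background thread:
import UIKit

extension UILabel {
    // Hypothetical convenience: update the label no matter which thread
    // the caller happens to be on.
    func setTextOnMainThread(_ newText: String) {
        if Thread.isMainThread {
            text = newText
        } else {
            // Hop over to the main thread instead of touching UIKit here.
            DispatchQueue.main.async { self.text = newText }
        }
    }
}
Xcode's Main Thread Checker will also flag UIKit calls made off the main thread at run time.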
C# behaves the same (see e.g. here: Keep the UI thread responsive). UI updates have to be done on the UI thread; most other things should be done in the background when possible.
If that weren't the case, there would probably be synchronization hell between all the updates that have to be made to the UI...
Every system, every library, needs to be concerned about thread safety and must do things to ensure thread safety, while at the same time looking after correctness and performance as well.
In the case of the iOS and MacOS X user interface, the decision was made to make the UI thread safe by only allowing UI methods to be called and executed on the main thread. And that's it.
Since there are lots of complicated things going on that would need at least serialisation to prevent total chaos from happening, I don't see very much gained from allowing UI on a background thread.
Because you want the user to be able to see UI changes as they happen. If you could perform UI changes on a background thread and display them only when complete, the app would not seem to behave right.
All non-UI operations (or at least the very costly ones, like downloading data or making database queries) should take place on a background thread, whereas all UI changes must always happen on the main thread to provide as smooth a user experience as possible.
I don't know what it's like in C# for Windows Phone apps, but I would expect it to be the same. On Android the system won't even let you do things like downloading on the main thread, making you create a background thread directly.
As a rule of thumb - when you think main thread, think "what the user sees".

OpenGL not showing results from a secondary thread on iPhone

I created an EAGLContext and use it on a secondary thread, but no output is displayed. The same code works fine when run on the main thread.
Do I need to notify something each time rendering completes?
A GL context can only be current on one thread at a time. You need to make it current on the thread you render from (eglMakeCurrent() with EGL; on iOS, [EAGLContext setCurrentContext:]). Also, you need to present the buffers once your rendering is done (eglSwapBuffers() with EGL; on iOS, -presentRenderbuffer:).
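Here is a minimal Swift sketch of just that threading rule, assuming the framebuffer/renderbuffer setup and the actual GL draw calls live elsewhere:
import Foundation
import OpenGLES

final class RenderThread {
    private let context: EAGLContext
    private let queue = DispatchQueue(label: "render")   // the secondary thread

    init?() {
        guard let ctx = EAGLContext(api: .openGLES2) else { return nil }
        context = ctx
    }

    func drawFrame(_ issueGLCommands: @escaping () -> Void) {
        queue.async {
            // The context must be current on the thread that draws...
            EAGLContext.setCurrent(self.context)
            issueGLCommands()
            // ...and the renderbuffer must be presented from that same
            // thread, or nothing drawn here ever reaches the screen.
            self.context.presentRenderbuffer(Int(GL_RENDERBUFFER))
        }
    }
}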

Objective C - Single Background Thread

I want to run a single background thread in an iPhone application that is available all the time, does its work when a specific event fires, and then waits for the event to fire again before starting over. If the event fires again while the thread is still working, the thread should restart its work.
I am working on a custom map application. In the touchesMoved event I need to load the map image tiles for the new position on a background thread. The problem is that when I move the map quickly, touchesMoved fires again before the previous thread has finished its work, and a new thread is started. This causes a thread-safety issue and my application crashes.
So I am thinking of a solution with a single, always-available thread that starts its work when touchesMoved fires; if touchesMoved fires again, it should restart its work instead of starting a new thread. I think this would prevent the thread-safety issue.
Please help
Firstly, I'd echo the use of NSOperation and NSOperationQueue. You could fall back to using NSThread directly, but the point of NSOperation is that it hides threading from you, leaving you to concentrate on the processing you need to do. Try firing NSOperation requests as and when required and see what the performance is like in your use case; even if these operations fetch data asynchronously, it should give you a cleaner solution with good performance, and it's more future-proof.
I've successfully used NSInvocationOperation to fire requests as often as required, and it sounds like the sort of requirement and behaviour you're after. More generally, I would suggest you experiment with these in a test project, where you can measure performance.
The following weblogs helped me start playing with NSOperation:
http://www.dribin.org/dave/blog/archives/2009/09/13/snowy_concurrent_operations/
http://www.cimgf.com/2008/02/16/cocoa-tutorial-nsoperation-and-nsoperationqueue/
As always, the Apple Threading Programming Guide is a key read, to figure out which way to go depending on needs.
This sounds like an ideal job for an NSOperationQueue. Have a read of the operation queue section of the concurrency guide.
Essentially, you create an NSOperation object for each map tile load and place them on a queue that only allows them to execute one at a time.
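Here is a sketch of that idea in Swift (the question is Objective-C, but NSOperationQueue is the same API); the TileLoader class and the work closure are made up for illustration:
import UIKit

final class TileLoader {
    private let queue: OperationQueue = {
        let q = OperationQueue()
        q.maxConcurrentOperationCount = 1   // tile loads run one at a time
        return q
    }()

    // Called from touchesMoved. If the user keeps panning, drop whatever is
    // still waiting for stale positions and queue work for the newest one.
    // (An operation that has already started keeps running unless its block
    // checks isCancelled itself.)
    func loadTiles(for position: CGPoint, work: @escaping (CGPoint) -> Void) {
        queue.cancelAllOperations()
        queue.addOperation {
            work(position)
        }
    }
}
Because the queue is serial, tile loads never overlap, and cancelAllOperations drops loads still queued for positions the user has already panned past.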
Put a run loop in your background compute thread. Then use an NSOperation queue to manage sending messages to it. The queue plus the run loop will serialize all the work requests for you.