FireBreath NPAPI plugin rendering video to top-level browser window (HWND)

I am working on an audio/video rendering plugin built with FireBreath, and we need HTML elements to overlay on top of the video. I am aware that to do this I need to use FireBreath's windowless mode. However, since I am using DirectX to render the video, I cannot initialize DirectX with the HDC handle I get when instructed to render in windowless mode (DirectX requires an HWND).
Also, for other software security reasons, I cannot render the video to an off-screen surface and then blit the bits to the HDC.
The alternative I was trying is to use the hardware overlay feature in DirectX: initialize DirectX with the browser's top-level HWND, then use the HDC and coordinates to tell DirectX where in the top-level browser window to render the video frame, drawing directly into the top parent browser window.
I have tried a proof of concept, but my video frames are erased quite often after I draw them, so the video appears to flicker. I am trying to understand why that might be, and I am wondering whether this is simply not a viable solution given my parameters.
I am also wide open to suggestions on how to accomplish this given my constraints.
Any help would be greatly appreciated!

In the FireBreath-dev group, John Tan wrote:
As far as I know, you have practically no control over precisely when the screen is going to be drawn. All you can do is:
1) Inform the browser to repaint by issuing the windowless InvalidateWindow call.
2) Wait for the browser's draw event to arrive with the HDC, then draw on that HDC.
John is completely correct. In addition, the HDC could potentially (and likely will) be different each time your draw is called. I don't know of anyone who has successfully gotten DirectX drawing in windowless mode, and you have absolutely no guarantee that what you are doing will ever work: even if you got it working, the browser could change the way or order in which it draws in a manner that would break it.
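For reference, the windowless draw path in FireBreath looks roughly like this. It's a from-memory sketch, so treat the exact names (PluginWindowlessWin, RefreshEvent, getHDC, InvalidateWindow, the handler name) as assumptions to check against your FireBreath version:

    // Minimal sketch of a windowless draw handler in FireBreath (Windows).
    // The HDC handed to you here is only valid for this one call and may
    // differ between calls, which is why DirectX cannot latch onto it.
    bool MyPlugin::onWindowRefresh(FB::RefreshEvent *evt, FB::PluginWindowlessWin *win)
    {
        HDC hdc = win->getHDC();                  // browser-owned; do not cache
        FB::Rect pos = win->getWindowPosition();  // plugin rect in window coords
        drawCurrentFrameWithGDI(hdc, pos);        // hypothetical helper: plain GDI only
        return true;
    }

    // When a new video frame arrives, ask the browser to repaint; never
    // draw outside the refresh event:
    //     win->InvalidateWindow();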
You might want to look at the async surface API; I don't know which browsers support it, but I suspect only Firefox and IE. It was implemented in this commit.
I haven't used it at all, so I can't tell you how it works, but it was intended to solve exactly the problem you're describing. Your main issue will be browser support. What documentation there is can be found here.
Hope this helps

Related

Google Glass camera parameter settings

I'm working on an app that includes a custom camera application on Glass. I want to be able to hard-set various camera parameters, but I'm having difficulty figuring out which ones I actually have access to.
I tried calling parameters.flatten() and got a whole bunch of options that I thought I would be able to use, but when I tested them, nothing happened. (For example, when I set the color effect to sepia, the result was still in normal color.) Is there any documentation or code I can look at that will tell me which parameter options I actually have?
There are a few open issues on our issue tracker about camera parameters that do not behave as expected:
Issue 302: GDK: Camera effects not registered or take effect
Issue 303: GDK: Request: Additional camera focus modes (with auto focus)
Issue 304: GDK: Camera scene mode does not register or take effect
You may want to follow those so that you can be updated as the GDK evolves.
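In the meantime, a defensive pattern is to apply a parameter only when the device advertises it, and then read it back, since on Glass a parameter can be advertised yet silently ignored (which is exactly what those issues track). A minimal sketch against the standard android.hardware.Camera API; the helper name is hypothetical:

    import android.hardware.Camera;
    import android.util.Log;
    import java.util.List;

    // Hypothetical helper: apply sepia only if the device reports support,
    // then read the value back to see whether it actually stuck.
    private void applySepiaIfSupported() {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();

        List<String> effects = params.getSupportedColorEffects();
        if (effects != null && effects.contains(Camera.Parameters.EFFECT_SEPIA)) {
            params.setColorEffect(Camera.Parameters.EFFECT_SEPIA);
            camera.setParameters(params);
            // On Glass this may still come back unchanged; see the issues above.
            Log.d("CameraDemo", "effect now: " + camera.getParameters().getColorEffect());
        } else {
            Log.d("CameraDemo", "sepia not reported as supported");
        }
        camera.release();
    }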

Rendering a preview of a UIView in another UIView

I have a very intriguing obstacle to overcome. I am trying to display the live contents of a UIView in another, separate UIView.
What I am trying to accomplish is very similar to Mission Control in Mac OS X. In Mission Control, there are large views in the center, displaying the desktop or an application. Above that, there are small views that can be reorganized. These small views display a live preview of their corresponding app. The preview is instant, and the framerate is exact. Ultimately, I am trying to recreate this effect, as cheaply as possible.
I have tried many possible solutions, and the one shown here is as close as I have gotten. It works, but the - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx method isn't called on every change. My workaround was to call [cloneView setNeedsDisplay] from a CADisplayLink, so that it runs on every screen refresh. It gets very near my goal, but the framerate is extremely low. I think [CALayer renderInContext:] is simply too slow.
If it is possible to have two CALayers render the same source, that would be golden. However, I am not sure how to approach this. Luckily, this is simply a concept app and isn't destined for the App Store, so I can make use of the private APIs. I have looked into IOSurface and Quartz contexts, but I haven't been able to solve this puzzle so far. Any input would be greatly appreciated!
iOS and OS X are actually mostly the same underneath at the lowest level. (However, higher up the stack iOS is largely more advanced than OS X, as it is newer and had a fresh start.)
However, in this case they both use the same mechanism (I believe). You'll notice something about Mission Control: it isolates "windows" rather than views. On iOS, each UIWindow has a ".contentID" property that a CALayerHost can use to make the render server share the render context between the two of them (the two layers, that is).
So my advice is to make your views separate UIWindows and get native mirroring for free(-ish). In my experience, the CALayerHost takes over the target layer's place with the render server, so if both the CALayerHost and the window are visible, the window won't be visible anymore; only the layer host will be. (Given the way they are used on OS X and iOS, that isn't a problem.)
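Pieced together from the iphonedevwiki pages linked at the end of this answer, the trick has roughly this shape. Everything here is private API, and the property spellings differ between sources (the wiki uses contextId on CALayerHost, the text above says contentID on UIWindow), so treat every name as an assumption:

    // Private-API sketch: host another UIWindow's render-server context.
    @interface CALayerHost : CALayer
    @property (nonatomic) unsigned int contextId;  // spelling per iphonedevwiki
    @end

    // sourceWindow is the UIWindow to mirror; previewView is where it shows up.
    CALayerHost *host = [CALayerHost layer];
    host.contextId = sourceWindow.contentID;       // private UIWindow property
    host.frame = previewView.bounds;
    [previewView.layer addSublayer:host];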
So if you are after true mirroring, with two live copies, you'll need to resort to the sort of thing you were already considering.
One option is to create a UIView subclass that uses the private UIView method declared here:
https://github.com/yyfrankyy/iOS5.1-Framework-Headers/blob/master/UIKit.framework/UIView-Rendering.h#L12
to get an IOSurface for the target view, and then use a CADisplayLink to fetch and draw the surface once per second.
Another option that may work (I'm not sure, since I don't know your setup or desired effect) is to use a CAReplicatorLayer, which displays a mirror of a CALayer using the same backing store (very fast and efficient, and a public, stable API).
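Here's a minimal sketch of that route, following the same pattern as the no-fuss-reflections link below. One caveat: the replica is drawn inside the same replicator layer, so this mirrors content within one container view rather than into an arbitrary second view elsewhere on screen.

    // A container view backed by CAReplicatorLayer: any subview you add is
    // drawn twice, the second copy flipped vertically below the original.
    @interface ReflectionView : UIView
    @end

    @implementation ReflectionView
    + (Class)layerClass { return [CAReplicatorLayer class]; }

    - (void)layoutSubviews {
        [super layoutSubviews];
        CAReplicatorLayer *replicator = (CAReplicatorLayer *)self.layer;
        replicator.instanceCount = 2;
        // Translate down by one height, then flip: the copy lands below.
        CATransform3D t = CATransform3DMakeScale(1.0, -1.0, 1.0);
        t = CATransform3DTranslate(t, 0.0, -self.bounds.size.height, 0.0);
        replicator.instanceTransform = t;
    }
    @end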
Sorry I couldn't give you a definitive "this is the answer" reply, but hopefully I've given you enough ideas and possibilities to get started.
I've also included some links you might find useful to read.
What's the magic behind CAReplicatorLayer?
http://aptogo.co.uk/2011/08/no-fuss-reflections/
http://iphonedevwiki.net/index.php/SBAppContextHostManager
http://iphonedevwiki.net/index.php/SBAppContextHostView
http://iphonedevwiki.net/index.php/CALayerHost
http://iky1e.tumblr.com/post/33109276151/recreating-remote-views-ios5
http://iky1e.tumblr.com/post/14886675036/current-projects-understanding-ios-app-rendering

Fallback for CSS background image transition

I'm using CSS transitions on the background-image property, though from what I can gather they are only supported by Chrome and WebKit (it doesn't seem to work in Safari 5.1.7). I really don't want to use jQuery for the transition, since its only solution is to fade out the element (and with it the content) and fade back in with a new background. Normally I would do this the standard way, with multiple divs or images inside a wrapper to rotate between, but the way this site is set up, that simply wouldn't work (technically it could, but it seems ridiculously and needlessly over-complicated).
Visit the site here and you'll see what I mean: http://bos.rggwebdesigns.com/
Is there some way to safely fall back in browsers that don't yet support background-image transitions, either by disabling the effect completely or by some other method? If the browser can't handle it, I don't want the user to just see the background change abruptly.
You can use Modernizr for feature detection, and then change your CSS on the fly accordingly.
Modernizr
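For example (a sketch: .hero is a placeholder selector, and keep in mind that Modernizr's csstransitions test only tells you transitions work in general, not that background-image in particular is animatable, so you may want a more targeted custom test):

    /* Modernizr adds .csstransitions or .no-csstransitions to <html> */
    .csstransitions .hero {
        transition: background-image 1s ease-in-out;
    }
    .no-csstransitions .hero {
        /* no transition declared: swap backgrounds instantly here,
           or run a JavaScript crossfade instead */
    }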

How can I turn off hardware acceleration for certain HTML elements via CSS?

I created a very complex web app using HTML5, CSS3 and jQuery Mobile.
It seems that jQuery Mobile turns on hardware acceleration here and there via translate3d and/or translateZ.
Now I want to turn this off for certain HTML elements.
This gives me two questions:
Is there a CSS property or attribute I can use to tell the browser to turn off hardware acceleration for certain elements?
If not, I will have to find the places where either translate3d or translateZ is used and simply remove them, right? How can I do that? The markup is very complex, with many HTML elements; I can't go through each element in the inspector and search for it.
Update: The reason why I want to fix this
In my web app there are some elements which need to be swipeable (e.g. an image gallery). In this case I need hardware acceleration. Same for div containers that require iScroll and every other element which should be animated (e.g. slide- and fade-animations).
However, many parts of the app are static (not animated). Using a special startup option in Safari, I was able to make the hardware-accelerated parts visible. This way I noticed that THE WHOLE app gets hardware-accelerated, not only the necessary parts.
IMHO this is not a good thing because:
Accelerating the whole thing will cause heavy load to the GPU which makes the whole app stutter while scrolling.
AFAIK it's best practice to let the CPU do the static stuff while the GPU only handles all the fancy animated stuff.
When animations have ended, hardware acceleration should be deactivated, because it is no longer necessary and would otherwise shorten battery life.
After going through thousands upon thousands of lines of CSS code, I found this:
.ui-page{-webkit-backface-visibility: hidden !important}
This was active for all pages and caused the problem. Removing that line fixed it for me.
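If you hit the same problem, the complementary fix is to scope compositing hints to the elements that actually need them instead of declaring them page-wide. A sketch with placeholder class names:

    /* Promote only the animated parts to their own compositing layers */
    .swipe-gallery,
    .iscroll-wrapper {
        -webkit-transform: translate3d(0, 0, 0);
    }

    /* Static content: no transform or backface rules, so it stays off the GPU */
    .static-content {
        -webkit-transform: none;
    }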

How Can I Record the Screen with Acceptable Performance While Keeping the UI Responsive?

I'm looking for help with a performance issue in an Objective-C based iOS app.
I have an iOS application that captures the screen's contents using CALayer's renderInContext: method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for usability research. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data, etc. The content of the Web view is not under my control; it is arbitrary content from the Web.
This setup works, but as you might imagine, it's not buttery smooth. Since the layer must be rendered on the main thread, there's more UI contention than I'd like. What I'd like is a setup where the responsiveness of the UI is prioritized over the screen capture. For instance, if the user is scrolling the Web view, I'd rather drop frames in the recording than deliver a terrible scrolling experience.
I've experimented with several techniques, from dispatch_source coalescing to submitting the frame-capture requests as blocks on the main queue to CADisplayLink. So far they all perform about the same. The frame capture is currently triggered in the drawRect: of the screen's main view.
What I'm asking here is: given the above, what techniques would you suggest I try to achieve my goals? I realize the answer may be that there is no great answer... but I'd like to try anything, however wacky it might sound.
NOTE: Whatever techniques are suggested need to be App Store friendly. I can't use something like the CoreSurface hack that Display Recorder used/uses.
Thanks for your help!
"Since the layer must be rendered on the main thread" this is not true, as long as you don't touch UIKit.
Please see https://stackoverflow.com/a/12844171/136305
Maybe you can also record at half resolution to speed things up, if that fits the requirements?
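A sketch of what that looks like in practice: capture on a serial background queue using pure CoreGraphics, with no UIKit calls. Here captureQueue, capturedLayer, layerSize, and the writer hand-off are placeholders, and whether -renderInContext: is safe off the main thread while the layer tree is mutating is something to verify for your setup:

    // Render the layer into a half-resolution CGBitmapContext off the main
    // thread; only CoreGraphics and CoreAnimation are touched on this queue.
    dispatch_async(captureQueue, ^{
        CGFloat scale = 0.5;   // half resolution, as suggested above
        size_t width  = (size_t)(layerSize.width  * scale);
        size_t height = (size_t)(layerSize.height * scale);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
            colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
        CGColorSpaceRelease(colorSpace);

        CGContextScaleCTM(ctx, scale, scale);
        [capturedLayer renderInContext:ctx];   // no UIKit calls here

        CGImageRef frame = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);
        // ...append `frame` to the AVAssetWriter input, then:
        CGImageRelease(frame);
    });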