How does canvas rendering through JavaScript work internally?

Can anyone explain how the browser renders canvas objects and allows drawing them using JavaScript?
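To make the question concrete, here is a minimal sketch of the model being asked about (the canvas id is arbitrary). The 2D context is an immediate-mode API: each call rasterizes straight into the canvas's pixel buffer, and the browser keeps only the resulting bitmap, not a scene graph of the shapes.

    const canvas = document.getElementById('myCanvas');
    const ctx = canvas.getContext('2d');

    ctx.fillStyle = 'steelblue';
    ctx.fillRect(10, 10, 100, 50);   // rasterized immediately; no "rectangle object" is kept

    ctx.beginPath();
    ctx.arc(160, 35, 25, 0, 2 * Math.PI);
    ctx.fill();                      // likewise: only pixels remain in the bitmap

This is why, unlike the DOM or SVG, you can't query or move a shape after drawing it; you clear and redraw the whole frame instead.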

Related

Trying to control a menu through a render texture

I'm currently rendering a canvas to a render texture and want to control said menu through a different camera. It works, sort of... the only trouble is that the canvas doesn't seem to be aligned. Example:
How the TV menu should be
How it currently is
Honestly, no clue what's causing it. Any clues on how to fix it?
Info: the camera is set to orthographic, and the canvas is set to Screen Space - Camera, targeting a camera that can only see the UI.

Custom drawing layer in Mapbox GL JS (and Leaflet)

I'm starting research to add a user feature to an existing map built in Mapbox GL JS (wrapped in an Angular 2+ application). What I need to do is allow a user to draw and rotate ellipses and text labels over the top of a map, and be able to save screen captures of the result.
I'm coming into this with no experience in Mapbox or Leaflet, so I have a lot to figure out. My first goal is to determine whether I can do this in Mapbox directly (with a plugin?), or if I will need to render a canvas over the top of my map with some third-party drawing library (I have a lot of experience with those).
The obvious advantage to doing this in Mapbox directly would be that we might still be able to zoom and pan.
The Mapbox-gl-draw library lets the user author features in a map, but probably not to the extent you need.
If the features the user creates don't need to live "in map space" (i.e., the map is static, and the labels are statically positioned over the top, for printing), working directly on a canvas will give you much more flexibility. You'll also have access to a much wider variety of libraries. A rough sketch of that overlay approach follows.
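For the static case, something like this (the container id, canvas sizing, and the ellipse/label geometry below are placeholders, not part of Mapbox's API):

    const container = document.getElementById('map');   // assumed map container
    const overlay = document.createElement('canvas');
    overlay.width = container.clientWidth;
    overlay.height = container.clientHeight;
    overlay.style.cssText = 'position:absolute;top:0;left:0;pointer-events:none;';
    container.appendChild(overlay);

    const ctx = overlay.getContext('2d');

    // Draw a rotated ellipse with a text label at an arbitrary screen position.
    ctx.save();
    ctx.translate(200, 150);
    ctx.rotate(Math.PI / 6);
    ctx.beginPath();
    ctx.ellipse(0, 0, 80, 40, 0, 0, 2 * Math.PI);
    ctx.strokeStyle = 'red';
    ctx.stroke();
    ctx.fillStyle = 'red';
    ctx.fillText('my label', -20, -50);
    ctx.restore();

One caveat for the screen-capture requirement: if the map itself must appear in the capture, the Mapbox GL JS map has to be created with preserveDrawingBuffer: true, otherwise reading back its WebGL canvas yields a blank image.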

What handles the painting of the Leaflet-described elements onto a map?

I'm using Leaflet + CartoDB; I've also used Leaflet + Mapbox; and there may also be some Leaflet + GoogleMaps in my future.
My customer asked me this question: where do the Leaflet layers get painted onto the tiles? Is that done by Leaflet, or by the tile engine?
Does this change if I'm using a "regular" map engine (such as Mapbox) or if I'm using something like the KML-rendering plugin?
where do the Leaflet layers get painted onto the tiles? Is that done by Leaflet?
By default (unless you're doing something weird), that happens in your web browser, which is compositing DOM elements on top of each other. You can check this by using the developer tools in your browser and inspecting the DOM elements for the tiles, and the <canvas> or <svg> with your vector geometries. They are separate DOM elements, thus your browser is doing the compositing.
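You can see the split with a few lines of Leaflet (a sketch assuming Leaflet 1.x is loaded as the global L and a div with id "map" exists; the coordinates are arbitrary):

    const map = L.map('map', {
      renderer: L.canvas()   // force vector layers into a <canvas> pane
    }).setView([51.505, -0.09], 13);

    // The basemap becomes plain <img> tile elements...
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

    // ...while this circle is drawn into a separate <canvas> element that the
    // browser composites on top of the tiles.
    L.circle([51.505, -0.09], { radius: 500, color: 'red' }).addTo(map);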
Does this change if I'm using a "regular" map engine (such as Mapbox) or if I'm using something like the KML-rendering plugin?
Not really. Mapbox-gl-js uses insane amounts of WebGL, which means the brunt of the workload moves from the browser's compositor to a WebGL stack. It still happens in the web browser, albeit in a different part of the browser.
There is no "KML rendering plugin" for leaflet, just KML loading plugins. Vector geometries are still rendered in a <canvas> or <svg> separate from the image tiles for the basemap, then composited.
You can, of course, run your own tile server (with software such as GeoServer, MapServer, MapProxy, Mapnik + mod_tile, Tirex, TileStream, or dozens of others). In that case, you obviously know you are rasterizing your data into tiles.

Get Texture2D from a Canvas in Unity?

I am trying to post an image of my game hero to Facebook. So I instantiate the prefab, which contains a number of UI elements, icons, and texts.
I'd like to get a Texture2D or image data from it so I could send that as a payload to my Facebook plugin.
If it were visible to the camera and it weren't a UI element, I would get its texture from the camera. But if I let it appear on the screen, the full-screen camera won't be looking at it, because it is rendered using a canvas. The canvas is also showing other objects around it, because this prefab doesn't occupy the full screen.
I may be wrong, but I don't think you can render the UI to a texture. It doesn't seem ideal, but you could use CaptureScreenshot, then load the image and cut out the part you want based on the Screen resolution.

web page: semi-transparent elements -> PNG

I have a web site that uses CSS box shadows. To make it look good in more browsers, I want to use semi-transparent PNGs instead. And to avoid having to redraw elements in a graphics program, I would like to know:
Is there a way to extract semi-transparent elements from a web page and store them in semi-transparent PNGs?
One solution I could try if I hadn't lent my Mac to a friend: print to PDF from Safari. If I'm lucky, the PDF has all the elements stacked in layers.
You can render DOM elements to canvas via SVG and XHTML embedded in <foreignObject>.
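A hedged sketch of that technique (the helper name and dimensions are mine; this minimal version does not inline external images or stylesheets, and browsers differ on whether drawing a foreignObject image taints the canvas):

    function elementToPng(element, width, height) {
      // Serialize the DOM node as XML so it is well-formed inside the SVG.
      const xhtml = new XMLSerializer().serializeToString(element);
      const svg =
        '<svg xmlns="http://www.w3.org/2000/svg" width="' + width + '" height="' + height + '">' +
        '<foreignObject width="100%" height="100%">' + xhtml + '</foreignObject></svg>';

      const url = URL.createObjectURL(new Blob([svg], { type: 'image/svg+xml' }));
      const img = new Image();
      img.onload = () => {
        const canvas = document.createElement('canvas');
        canvas.width = width;
        canvas.height = height;
        // Pixels not covered by the element stay fully transparent.
        canvas.getContext('2d').drawImage(img, 0, 0);
        URL.revokeObjectURL(url);
        console.log(canvas.toDataURL('image/png'));  // PNG with alpha channel preserved
      };
      img.src = url;
    }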