In the dev tools timeline, what are the empty green rectangles?

In the Chrome dev tools timeline, what is the difference between the filled, green rectangles (which represent paint operations) and the empty, green rectangles (which apparently also represent something about paint operations...)?

Painting is really two tasks: draw calls and rasterization.
Draw calls. This is a list of things you'd like to draw, and it's derived from the CSS applied to your elements. Ultimately there is a list of draw calls not dissimilar to the Canvas element's: moveTo, lineTo, fillRect (they have slightly different names in Skia, Chrome's painting backend, but it's a similar concept).
Rasterization. The process of stepping through those draw calls and filling out actual pixels into buffers that can be uploaded to the GPU for compositing.
So, with that background, here we go:
The solid green blocks are the draw calls being recorded by Chrome. These are done on the main thread alongside JavaScript, style calculations, and layout. These draw calls are grouped together as a data structure and passed to the compositor thread.
The empty green blocks are the rasterization. These are handled by a worker thread spawned by the compositor.
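To make that split concrete, here's a rough sketch of the idea (made-up types and names, not Chrome's actual code, and in Swift purely for illustration): one step records a cheap list of draw commands, another step replays that list to fill in pixels.

// Hypothetical draw-command list, loosely analogous to the log Chrome records
// during the solid green "Paint" blocks on the main thread.
enum DrawCommand {
    case fillRect(x: Int, y: Int, w: Int, h: Int, color: UInt32)
    case line(x0: Int, y0: Int, x1: Int, y1: Int, color: UInt32)
}

// Step 1: recording is cheap -- it only builds a list, no pixels are touched.
func recordPaint() -> [DrawCommand] {
    return [
        .fillRect(x: 0, y: 0, w: 100, h: 20, color: 0xFF00FF00),
        .line(x0: 0, y0: 10, x1: 100, y1: 10, color: 0xFF000000),
    ]
}

// Step 2: rasterization replays the list into an actual pixel buffer.
// This is the work represented by the hollow green blocks.
func rasterize(_ commands: [DrawCommand], into buffer: inout [UInt32], width: Int) {
    for command in commands {
        switch command {
        case let .fillRect(x, y, w, h, color):
            for row in y..<(y + h) {
                for col in x..<(x + w) {
                    buffer[row * width + col] = color
                }
            }
        case .line:
            break // line rasterization omitted for brevity
        }
    }
}

let commands = recordPaint()                          // recorded on the main thread
var pixels = [UInt32](repeating: 0, count: 256 * 256)
rasterize(commands, into: &pixels, width: 256)        // Chrome hands this step to raster workers

The recording step is the main thread's job; the replay is what the compositor's raster workers do.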
Essentially, then, both are paint; they just represent different sub-tasks of the overall job. If you're having performance issues (and from the screenshot you provided you appear to be paint-bound), then you may need to examine which properties you're changing via CSS or JavaScript and see whether there is a compositor-only way to achieve the same ends. CSS Triggers can probably help here.


Why does a Unity material not render semi-transparency properly?

I have a Unity material whose albedo is based on a spritesheet. The sprite has semi-transparency and is formatted as RGBA 32-bit.
Now, the transparency renders in the sprite, but not in the material.
How do I do this without also making the supposedly opaque parts of the albedo transparent?
I have tried setting render mode to transparent, fade, and unlit/transparent. The result looks like this:
I tried Opaque, but it ruins the texture. I tried Cutout, but the semi-transparent areas either disappear or become fully opaque (depending on the cutoff).
There is no code to this.
I expect the output to make the semi-transparent parts of the material semi-transparent, and the opaque parts opaque. The actual output is either fully opaque or fully "semi-transparent", which is super annoying.
Edit
So I put the work on hold for a while and then added a submesh. That gets really close to solving the problem.
It still shows the same glitch.
Okay, good news and bad news. The good news is, this problem is not uncommon. It's not even unique to Unity. The bad news is, the reason it's not uncommon or unique to Unity is that it's a universal issue with no perfect solution. But we may be able to find you a workaround, so let's go through this together.
There's a fundamental issue in 3D Graphics: In what order do you draw things? If you're drawing a regular picture in real life, the obvious answer is you draw the things based on how far away from the viewer they are. This works fine for a while, but what do you do with objects that aren't cleanly "in front" of other things? Consider the following image:
Is the fruit in that basket in front of the bowl, or behind it? It's kind of neither, right? And even if you can split objects up into front and back, how do you deal with intersecting objects? Enter the Z-Buffer:
The Z-Buffer is a simple idea: when drawing the pixels of an object, you also record the depth of those pixels, that is, how far away from the camera they are. When you draw a new object into the scene, you check the depth of the underlying pixel and compare it with the depth of the new one. If the new pixel is closer, you overwrite the old one. If the old pixel is closer, you don't do anything. The Z-Buffer is generally a single-channel (read: greyscale) image that never gets shown directly. Besides depth sorting, it can also be used for various post-processing effects such as fog or ambient occlusion.
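To put that in rough code form (a software sketch of the idea, not what the GPU literally runs):

// One color value and one depth value per pixel; smaller depth = closer to the camera.
struct Framebuffer {
    var color: [UInt32]
    var depth: [Float]
    let width: Int

    init(width: Int, height: Int) {
        self.width = width
        color = [UInt32](repeating: 0, count: width * height)
        depth = [Float](repeating: .infinity, count: width * height)
    }

    // Write a pixel only if it is closer than whatever is already stored there.
    mutating func writePixel(x: Int, y: Int, pixelDepth: Float, pixelColor: UInt32) {
        let i = y * width + x
        if pixelDepth < depth[i] {
            depth[i] = pixelDepth
            color[i] = pixelColor
        }
        // If the existing pixel is closer, do nothing -- the new pixel is hidden behind it.
    }
}

var fb = Framebuffer(width: 4, height: 4)
fb.writePixel(x: 1, y: 1, pixelDepth: 5.0, pixelColor: 0xFF0000FF) // far object drawn first
fb.writePixel(x: 1, y: 1, pixelDepth: 2.0, pixelColor: 0xFF00FF00) // nearer object wins the pixel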
Now, one key property of the depth buffer is that it can only store one value per pixel. Most of the time this is fine; after all, if you're just trying to sort a bunch of objects, the only depth you really care about is the depth of the front-most one. Anything behind that front-most object will be hidden, and that's all you need to know or care about.
That is, unless your front-most object is transparent.
The issue here is that the renderer doesn't know how to deal with drawing an object behind a transparent one. To avoid this, a smart renderer (including Unity) goes through the following steps:
Draw all opaque objects, in any order.
Sort all transparent objects by distance from the camera.
Draw all transparent objects, from furthest to closest.
This way, the chances of running into weird depth-sorting issues are minimized. But this will still fall apart in a couple of places. When you make your object use a transparent material, the fact that 99% of the object is actually solid is completely irrelevant. As far as Unity is concerned, your entire object is transparent, and so it gets drawn according to its depth relative to other transparent objects in the scene. If you've got lots of transparent objects, you're going to have problems the moment you have intersecting meshes.
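In rough code terms, the general approach looks like this (a simplified sketch, not Unity's actual renderer; Renderable and draw are made-up stand-ins for whatever your engine uses):

import simd

struct Renderable {
    let position: SIMD3<Float>
    let isTransparent: Bool
    let draw: () -> Void
}

func renderScene(_ objects: [Renderable], cameraPosition: SIMD3<Float>) {
    // 1. Opaque objects can go in any order; the Z-buffer sorts them per pixel.
    for object in objects where !object.isTransparent {
        object.draw()
    }

    // 2. Transparent objects are sorted by distance from the camera...
    let transparent = objects
        .filter { $0.isTransparent }
        .sorted { simd_distance($0.position, cameraPosition) >
                  simd_distance($1.position, cameraPosition) }

    // 3. ...and drawn farthest-first, so nearer transparent surfaces blend over
    // whatever is behind them. Note the sort is per object, not per pixel,
    // which is exactly why intersecting transparent meshes still misbehave.
    for object in transparent {
        object.draw()
    }
}

The important detail is that the transparent sort happens per object, so two large transparent meshes that pass through each other can never both be "behind" each other at the same time.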
So, how do you deal with these problems? You have a few options.
The first and most important thing you can do is limit transparent materials to areas that are explicitly transparent. I believe the rendering order is based on materials above all else, so having a mesh with several opaque materials and a single transparent one will probably work fine, with the opaque parts being rendered before the single transparent part, but don't quote me on that.
Secondly, if you have alternatives, use them. The reason "cutout" mode seems to be a binary mask rather than real transparency is because it is. Because it's not really transparent, you don't run into any of the depth sorting issues that you typically would. It's a trade-off.
Third, try to avoid large intersecting objects with transparent materials. Large bodies of water are notorious for causing problems in this regard. Think carefully about what you have to work with.
Finally, if you absolutely must have multiple large intersecting transparent objects, consider breaking them up into multiple pieces.
I appreciate that none of these answers are truly satisfying, but the key takeaway from all this is that this is not a bug so much as a limitation of the medium. If you're really keen, you could try digging into custom render pipelines that solve your problem explicitly, but keep in mind you'll be paying a premium in performance if you do.
Good luck.
You said you tried the Transparent shader mode, but did you try changing the alpha channel value in your material color after that?
In the second image it looks like the alpha in RGBA is 0; try changing it.

Multithreading with Metal

I'm new to Apple's Metal API and graphics programming in general. I'm slowly building a game engine of sorts, starting with the UI. My UI is based on nodes, each with its own list of child nodes. So if I have a 'menu' node with three 'buttons' as children, calling render(:MTLDrawable:CommandQueue) on the menu will render it to the drawable by committing a command buffer to the queue, and then call the same method on all of its children with the same drawable and queue, until the entire node tree has been rendered from top to bottom. I want a separate subthread to be spawned for the rendering of every node in the tree -- can I just wrap each render function in a dispatch-async call? Is the command queue inherently thread-safe? What is the accepted solution for concurrently rendering multiple objects to a single texture before presenting it using Metal? All I've seen in any Metal tutorial so far is a single thread that renders everything in order using a single command buffer per frame, calling presentDrawable() and then commit() at the end of each frame.
Edit
When I say I want to use multithreading, it applies only to command encoding, not execution itself. I don't want to end up with the buttons in my theoretical menu being drawn and then covered up with the menu background as a result of bad execution order. I just want each object's render operation to be encoded on a separate thread, before being handed to the command queue.
Using a separate command buffer and render pass for each UI element is extreme overkill, even if you want to use CPU-side concurrency to do your rendering. I would contend that you should start out by writing the simplest thing that could possibly work, then optimize from there. You can set a lot of state and issue a lot of draw calls before the CPU becomes your bottleneck, which is why people start with a simple, single-threaded approach.
Dispatching work to threads isn't free. It introduces overhead, and that overhead will likely dominate the work you're doing to issue the commands for drawing any given element, especially once you factor in the bandwidth required to repeatedly load and store your render targets.
Once you've determined you're CPU-bound (probably once you're issuing thousands of draw calls per frame), you can look at splitting the encoding up across threads with an MTLParallelRenderCommandEncoder, or multipass solutions. But well before you reach that point, you should probably introduce some kind of batching system that removes the responsibility of issuing draw calls from your UI elements, because although that seems tidy from an OOP perspective, it's likely to be a large architectural misstep if you care about performance at scale.
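If and when you do become encoder-bound, the parallel-encoder route looks roughly like this (a minimal sketch: encodeMenuBackground and encodeButtons are hypothetical stand-ins for your per-element drawing code, and the render pass descriptor and drawable come from wherever you already create them):

import Metal
import QuartzCore
import Dispatch

// Hypothetical per-element encoding functions; real versions would set pipeline
// state and issue draw calls on the encoder they're given.
func encodeMenuBackground(into encoder: MTLRenderCommandEncoder) { /* ... */ }
func encodeButtons(into encoder: MTLRenderCommandEncoder) { /* ... */ }

// All the work still goes into ONE render pass and ONE command buffer; the GPU
// executes sub-encoders in the order they were created, not the order the
// threads happen to finish, so the buttons reliably draw after the background.
func encodeFrame(commandQueue: MTLCommandQueue,
                 renderPassDescriptor: MTLRenderPassDescriptor,
                 drawable: CAMetalDrawable) {
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let parallelEncoder = commandBuffer
              .makeParallelRenderCommandEncoder(descriptor: renderPassDescriptor),
          let backgroundEncoder = parallelEncoder.makeRenderCommandEncoder(),
          let buttonEncoder = parallelEncoder.makeRenderCommandEncoder()
    else { return }

    let group = DispatchGroup()
    DispatchQueue.global().async(group: group) {
        encodeMenuBackground(into: backgroundEncoder)
        backgroundEncoder.endEncoding()
    }
    DispatchQueue.global().async(group: group) {
        encodeButtons(into: buttonEncoder)
        buttonEncoder.endEncoding()
    }

    // Every sub-encoder must finish before the parallel encoder is ended.
    group.wait()
    parallelEncoder.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
}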
For one example, you could take a look at this Metal implementation of a backend renderer for the popular dear imgui project to see how to architect a system that does draw call batching in the context of UI rendering.

Timeline Paint Profiler in Devtools suggests everything is being painted

When we use the Paint Profiler in Chrome we can see what's being painted. I created a simple example that adds a new div to the page every 3 seconds and here is what is shown as being painted:
But when I use the paint profiler in the Timeline it looks like everything is being repainted:
As shown in the screenshot, on the fifth paint we have 5 calls to drawTextBlob. This suggests that all 5 divs were painted. I was expecting only one.
Can someone shed some light into this?
The exact meaning of the "Paint" event has changed over time. It used to be that during Paint the renderer directly updated the layer's bitmap, which was often slow. Back in those days, you would likely find that the painted rectangle matched the area that you actually invalidated (i.e. it would be just the last line in your case), as you probably expect.
The present implementation of Chrome's rendering subsystem performs rasterization either on other threads (in an attempt to keep things off the main thread, which is busy enough with JavaScript execution, DOM layout and lots of other things) or on the GPU (check out "Rasterization" and "Multiple raster threads" in chrome://gpu if you're curious what the current mode is on your platform). So the "Paint" event that you see on the main thread just covers recording a log of paint commands (i.e. what you see on the left pane of the Paint Profiler), without actually producing any pixels -- this is relatively fast, and Chrome chooses to re-record the entire layer so it can later pick what part of it to rasterize (e.g. in the anticipated case of scrolling) without going to the main thread again, which is likely to be busy running JavaScript or doing layout.
Now if you switch the Timeline into Flame Chart mode (the right-hand icon near the "View" label in the toolbar), you'll see the "Rasterize Paint" event, which is the actual rasterization -- Chrome picks up the paint command log recorded during the Paint event on the main thread and replays it, producing actual pixels for a fragment of the layer. You can see what part of the layer was being rasterized, and the Paint Profiler for that part, when you select "Rasterize Paint". Note that there are many small Rasterize Paint events for different fragments, possibly on different threads, but they all still carry the entire log (i.e. 5 drawTextBlob commands in your example). However, the paint commands that do not affect the fragment being rasterized are culled, as they fall outside of the fragment's clip rectangle, and hence won't have a noticeable effect on rasterization time.
Then, you'll probably notice that the fragments being rasterized are still larger than the area you've actually invalidated. This is because Chrome manages rasterized layers in terms of tiles, small rectangular bitmaps (often 128 x 128, but this may vary by platform), so that for large layers (e.g. pages much longer than the viewport), only the parts visible in the viewport can be stored on the GPU (which often has limited memory), and the parts that suddenly become visible as a result of scrolling can be uploaded quickly.
Finally, the parts that you see highlighted in green as a result of ticking "Show paint rectangles" in the Rendering options are technically "invalidation" rectangles -- i.e. the areas of your page that have really changed as a result of changed layout/styles etc. These areas are what you, as an author, can directly affect, but as you see, Chrome will likely paint and rasterize more than that, mostly out of concern for managing the scrolling of large pages efficiently.
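Just to illustrate the tiling part (a made-up sketch, not Chrome's implementation): re-rasterization only has to touch the tiles that the invalidation rectangle overlaps, even though the recorded paint log describes the whole layer.

import CoreGraphics

let tileSize: CGFloat = 128 // varies per platform in practice

// Given an invalidated rectangle, find which fixed-size tiles of the layer
// need to be re-rasterized.
func tilesNeedingRaster(invalidated: CGRect, layerSize: CGSize) -> [(col: Int, row: Int)] {
    let firstCol = max(0, Int(invalidated.minX / tileSize))
    let lastCol  = min(Int((layerSize.width / tileSize).rounded(.up)) - 1,
                       Int(invalidated.maxX / tileSize))
    let firstRow = max(0, Int(invalidated.minY / tileSize))
    let lastRow  = min(Int((layerSize.height / tileSize).rounded(.up)) - 1,
                       Int(invalidated.maxY / tileSize))

    var tiles: [(col: Int, row: Int)] = []
    for row in firstRow...lastRow {
        for col in firstCol...lastCol {
            tiles.append((col: col, row: row))
        }
    }
    return tiles
}

// A small change near the bottom of an 800 x 600 layer only dirties a handful of tiles.
let dirty = tilesNeedingRaster(invalidated: CGRect(x: 0, y: 500, width: 300, height: 20),
                               layerSize: CGSize(width: 800, height: 600))
// -> 6 tiles (columns 0...2 of rows 3 and 4) out of a 7 x 5 tile grid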

glDrawElements behavior

I am currently reading an iPhone OpenGL ES project that draws some 3D shapes (sphere, cone, ...). I am a little bit confused about the behavior of glDrawElements.
After binding the vertex buffer to GL_ARRAY_BUFFER, and the index buffer to GL_ELEMENT_ARRAY_BUFFER, the function glDrawElements is called:
glDrawElements(GL_TRIANGLES, IndexCount, GL_UNSIGNED_SHORT, 0);
At first I thought this function draws the shapes on screen, but actually the shapes are later drawn on the screen using:
[m_context presentRenderbuffer:GL_RENDERBUFFER];
So what does glDrawElements do? The manual describes it as "render primitives from array data", but I don't understand the real meaning of "render" and its difference from "draw" (my native language is not English).
The DrawElements call is really what "does" the drawing. Or rather it tells the GPU to draw. And the GPU will do that eventually.
The present call is only needed because the GPU usually works double-buffered: one buffer that you don't see but draw to, and one buffer that is currently on display on the screen. Once you are done with all the drawing, you flip them.
If you didn't do this, you would see flickering while drawing.
It also allows for parallel operation. You typically call DrawElements multiple times for one frame; only when you call present does the GPU have to wait for all of them to be done.
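For reference, a typical OpenGL ES frame on iOS follows this pattern (a sketch in Swift for brevity; context, framebuffer, colorRenderbuffer and indexCount are assumed to be set up elsewhere, as they are in the project you're reading):

import OpenGLES

func renderFrame(context: EAGLContext,
                 framebuffer: GLuint,
                 colorRenderbuffer: GLuint,
                 indexCount: GLsizei) {
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    glClearColor(0, 0, 0, 1)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))

    // Each draw call only queues work for the GPU; nothing is visible yet.
    glDrawElements(GLenum(GL_TRIANGLES), indexCount, GLenum(GL_UNSIGNED_SHORT), nil)
    // ...more glDraw* calls for the other shapes...

    // Only now is the finished image handed over for display.
    glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRenderbuffer)
    context.presentRenderbuffer(Int(GL_RENDERBUFFER))
}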
It's true that glDraw commands are responsible for your drawing and that you don't see any visible results until you call the presentRenderbuffer: method, but it's not about double buffering.
All iOS devices use GPUs designed around Tile-Based Deferred Rendering (TBDR). This is an approach that works very well in embedded environments with fewer compute resources, less memory, and a smaller power budget than a desktop GPU. On a desktop (stream-based) GPU, every draw command immediately does work that (typically) ends with some pixels being painted into a renderbuffer. (You usually don't see this because desktop GL apps are typically set up to use double or triple buffering, drawing a complete frame into an offscreen buffer and then swapping it onto the screen.)
TBDR is different. You can think about it being sort of like a laser printer: for that, you put together a bunch of PostScript commands (set state, draw this, set a different state, draw that, and so on), but the printer doesn't do any work until you've sent all the draw commands and finished with the one that says "okay, start laying down ink!" Then the printer computes the areas that need to be black and prints them all in one pass (instead of running up and down the page printing over areas it's already printed).
The advantage of TBDR is that the GPU knows about the whole scene before it starts painting -- this lets it use tricks like hidden surface removal to avoid doing work that won't result in visible pixels being painted.
So, on an iOS device, glDraw still "does" the drawing, but that work doesn't happen until the GPU needs it to happen. In the simple case, the GPU doesn't need to start working until you call presentRenderbuffer: (or, if you're using GLKView, until you return from its drawRect: or its delegate's glkView:drawInRect: method, which implicitly presents the renderbuffer). If you're using more advanced tricks, like rendering into textures and then using those textures to render to the screen, the GPU starts working for one render target as soon as you switch to another (using glBindFramebuffer or similar).
There's a great explanation of how TBDR works in the Advances in OpenGL ES talk from WWDC 2013.

Most effective "architecture" for layered 2D app using OpenGL on iPhone?

I'm working on an iPhone OS app whose primary view is a 2-D OpenGL view (this is a subclass of Apple's EAGLView class, basically setting up an ortho-projected 2D environment) that the user interacts with directly.
Sometimes (not at all times) I'd like to render some controls on top of this baseline GL view-- think like a Heads-Up Display. Note that the baseline view underneath may be scrolling/animating while controls should appear to be fixed on the screen above.
I'm good with Cocoa views in general, and I'm pretty good with CoreGraphics, but I'm green with OpenGL, and the EAGLView's operations (and its relationship to CALayers) are fairly opaque to me. I'm not sure how to mix in other elements most effectively (read: best performance, least hassle, etc.). I know that in a pinch, I can create and keep around geometry for all the other controls, and render those on top of my baseline geometry every time I paint/swap, and thus just keep everything the user sees in one single view. But I'm less certain about other techniques, such as having another view on top (UIKit/CG or GL?) or somehow creating other layers in my single view, etc.
If people who have travelled these roads before would be so kind as to write up some brief observations, or at least point me to documentation or existing discussion of this issue, I'd greatly appreciate it.
Thanks.
Create your animated view as normal. Render it to a render target. What does this mean? Well, usually, when you 'draw' the polygons to the screen, you're actually drawing to a normal surface (the primary surface), which just so happens to be the one that eventually goes to the screen. Instead of rendering to the screen surface, you can render to any old surface.
Now, your HUD. Will this be exactly the same all the time or will it change? Will only bits of it change?
If all of it changes, you'll need to keep all the HUD geometry and textures in memory, and will have to render them onto your 'scrolling' surface as normal. You can then apply this final, composite render to the screen. I wouldn't worry too much about hassle and performance here -- the HUD can hardly be as complex as the background. You'll have a few textured quads at most?
If all of the HUD is static, then you can render it to a separate surface when your app starts, and then each frame render from that surface onto the animated surface you're drawing (see the sketch at the end of this answer). This way you can unload all the HUD geometry and textures right at the start. Of course, it might be the case that the extra surface takes up more memory -- it depends on what resources your app needs most.
If your HUD half changes and half doesn't, then technically you can pre-render the static parts and then render the other parts as you go along, but this is more hassle than the other two options.
Your two main options depend on how dynamic the HUD is. If it moves, you will need to redraw it onto your scene every frame. It sucks, but I can hardly imagine that its geometry is complex compared to the rest of the scene. If it's static, you can pre-render it and just alpha-blend one surface onto another before sending it to the screen.
As I said, it all depends on what resources your app will have spare.
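To make the second option concrete, the pre-rendered HUD amounts to a render-to-texture pass at startup plus an alpha-blended quad each frame. Roughly (a sketch assuming ES 2.0-style framebuffer objects; drawHUDGeometry, drawAnimatedScene and drawTexturedOverlayQuad are made-up stand-ins for your own drawing code):

import OpenGLES

var hudTexture: GLuint = 0
var hudFramebuffer: GLuint = 0

// Hypothetical helpers; real versions would bind shaders/buffers and issue draw calls.
func drawHUDGeometry() {}
func drawAnimatedScene() {}
func drawTexturedOverlayQuad(_ texture: GLuint) {}

func createHUDTexture(width: GLsizei, height: GLsizei) {
    // Offscreen texture that will hold the HUD.
    glGenTextures(1, &hudTexture)
    glBindTexture(GLenum(GL_TEXTURE_2D), hudTexture)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    // Framebuffer object that renders into that texture.
    glGenFramebuffers(1, &hudFramebuffer)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), hudFramebuffer)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), hudTexture, 0)

    // Draw the HUD exactly once, at startup; its geometry and source textures
    // can be unloaded afterwards.
    glClearColor(0, 0, 0, 0)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    drawHUDGeometry()
}

func renderFrameWithHUD(screenFramebuffer: GLuint) {
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), screenFramebuffer)
    drawAnimatedScene()                      // the scrolling/animating GL content

    // Composite the cached HUD on top with alpha blending.
    glEnable(GLenum(GL_BLEND))
    glBlendFunc(GLenum(GL_SRC_ALPHA), GLenum(GL_ONE_MINUS_SRC_ALPHA))
    drawTexturedOverlayQuad(hudTexture)
}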