How does the "needsCompositing" bit work? - flutter

I'm trying to better understand how the Flutter framework interprets the "needsCompositing" / "alwaysNeedsCompositing" bits.
When the needsCompositing bit is set on a render object, does every single ancestor render object up to the nearest repaint boundary also need compositing (i.e., its own composited layer)? Is this because any of those objects might, say, add a clip that affects the newly composited child, and to guarantee the clip actually applies to it, a clip layer has to be used instead?
The part that seems surprising is that this would appear to add N new layers for N render objects just because one descendant needs compositing.
If this is true, I suppose this explains why you'd want to organize things into a shallow "repaint boundary sandwich."

needsCompositing just signals to the render object whether it should push a layer, which may or may not be the case depending on what properties are set on the object. alwaysNeedsCompositing says so unconditionally.
It has no effect on ancestors; they can independently need or not need compositing, which again controls how the layer tree is constructed so that compositing happens on the engine side.
On an update, we do pump logic up the tree to make sure things get properly marked dirty (up to the nearest RepaintBoundary). However, individual render objects may respond to that by only pumping the data further up the tree and not setting their needsCompositing to true, if they never actually need compositing.
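To make that propagation concrete, here is a minimal illustrative sketch (in TypeScript for consistency with the code later on this page, not Flutter's actual Dart source) of the upward walk described in this answer; every name here is hypothetical:

// Hypothetical model of the compositing-bits update, not Flutter's real code.
interface RenderNode {
  parent: RenderNode | null;
  children: RenderNode[];
  isRepaintBoundary: boolean;     // the upward walk stops here
  alwaysNeedsCompositing: boolean;
  needsCompositing: boolean;
}

// Does this node's configuration force a layer once a descendant is
// composited (e.g. an active clip that must become a clip layer)?
// Placeholder: in practice this depends on the node's current properties.
function mustPushLayer(node: RenderNode): boolean {
  return false;
}

// Pump the dirty bit up the tree: each ancestor recomputes its own bit
// rather than blindly inheriting it, and may simply pass the update along.
function markNeedsCompositingBitsUpdate(node: RenderNode): void {
  let current: RenderNode | null = node;
  while (current !== null) {
    const childNeeds = current.children.some((c) => c.needsCompositing);
    current.needsCompositing =
      current.alwaysNeedsCompositing ||
      (childNeeds && mustPushLayer(current));
    if (current.isRepaintBoundary) break;
    current = current.parent;
  }
}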

Related

In WebGPU, can you reuse the same render pass in multiple frames?

In WebGPU you can create a render pass by defining its descriptor:
const renderPassDesc: GPURenderPassDescriptor = {
  colorAttachments: [
    {
      view: context.getCurrentTexture().createView(),
      loadOp: "clear",
      clearValue: [0.2, 0.3, 0.5, 1],
      storeOp: "store",
    },
  ],
};
And then run it through the command encoder and start recording.
const commandEncoder = device.createCommandEncoder();
const renderPass = commandEncoder.beginRenderPass(renderPassDesc);
So, essentially, it appears that you need the current texture to start recording: without calling context.getCurrentTexture().createView() you can't create the descriptor, and without the descriptor you can't start recording. But the API seems to suggest that the texture can change every frame (this was the case even months ago, when the API was different and you would retrieve the texture from the swap chain). So it appears that you can't reuse render passes across different frames (unless, of course, you don't render to the swap chain and target an offscreen texture instead).
So, the question is: in WebGPU, can you reuse the same render pass in multiple frames?
Comparison with Vulkan
My question stems from the (little) exposure I had to Vulkan. In Vulkan, you can reuse recorded resources because there is a way to know upfront how many VkImage objects are in the swap chain; they are going to have 0-based indices such as 0, 1 and 2. I can't remember the exact syntax, but I remember that you can record 3 separate command buffers, one per VkImage, and reuse them across frames. All you have to do in the render loop is query the index of the current VkImage and retrieve the corresponding recorded command buffer.
Looking at the specification for getCurrentTexture, it seems there is currently no control over the number of "swap" textures.
The texture is created (if it is null or has been destroyed) in the "allocate a new context texture" step; the note there states that:
If a previously presented texture from context matches the required criteria, its GPU memory may be re-used.
Each time, on the "update the rendering [of the] Document" step, if the current texture is not null and not destroyed, it will be presented, destroyed, and set to null.
Another note from the specs:
Developers can expect that the same GPUTexture object will be returned by every call to getCurrentTexture() made within the same frame (i.e. between invocations of Update the rendering) unless configure() is called.
All of this seems to indicate that you have to get the current texture for each frame and create all the related objects as well.
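As a rough sketch of what that implies (assuming a canvas context and device that are already configured, and a hypothetical recordDrawCalls helper standing in for your actual draw calls), the descriptor and pass get rebuilt every frame around the freshly acquired texture:

function frame() {
  // getCurrentTexture() must be called each frame; the texture from the
  // previous frame has already been presented and destroyed.
  const renderPassDesc: GPURenderPassDescriptor = {
    colorAttachments: [
      {
        view: context.getCurrentTexture().createView(),
        loadOp: "clear",
        clearValue: [0.2, 0.3, 0.5, 1],
        storeOp: "store",
      },
    ],
  };

  const commandEncoder = device.createCommandEncoder();
  const renderPass = commandEncoder.beginRenderPass(renderPassDesc);
  recordDrawCalls(renderPass); // hypothetical helper: your draw calls
  renderPass.end();
  device.queue.submit([commandEncoder.finish()]);

  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);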

Transparent layer Unity

In my scene I have 2 cameras (split screen). Is it possible to change the transparency of a layer for only one camera? For example, "layer1" has an alpha transparency of 0.5 for the right camera, while the left camera shows "layer1" without transparency.
Ostensibly no
There's a way to do it, though it's a bit of a hack, as it doesn't depend on physics layers but rather on the presence of a custom MonoBehaviour script. Here's what I remember about this off the top of my head (I can dig up an implementation later, if needed).
Step 1: you will need a MonoBehaviour script attached to every game object you want to have rendered differently. This script is absolutely essential.
Step 2: this script will contain one function. (You can use an existing behaviour script if all the objects already share one; just add this function to it.) Call it whatever you want, but something like AboutToBeRendered(Camera cam).
Step 3: create another script and attach it to both cameras. This script will also have one function in it: OnPreRender()
Step 4: In this OnPreRender method you will need to do:
find all game objects from Step 1
get their component with the AboutToBeRendered method
invoke that method, passing the camera as the parameter
Step 5: Writing the AboutToBeRendered method.
determine which of the two cameras was passed to the method
set the material's color to be transparent or not, as needed
OnPreRender() is only called on scripts attached to the same GameObject as a Camera component, indicating that this camera is about to render the scene. But what we actually want is for the object about to be rendered to know that it's about to be rendered and by which camera, which is why we need the script in step 1.
I suppose you could skip step 1 and instead inspect every object in the scene and check its physics layer, but that's going to be more expensive to figure out than "get me all instances of this component." You could do it based on Tag instead, as FindGameObjectsWithTag is generally considered to be pretty fast, but if you're already using the tag for something else, you're out of luck, and there's no comparable method for getting objects in a given physics layer.
However, you'd have to tweak the material's alpha value in the camera script rather than letting the object itself decide what value it should be.
In either case, the object's material would need to support transparency. When I did this I was preventing the object from being rendered entirely, so I just disabled its MeshRenderer component.
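To summarize the shape of steps 1 through 5, here is a structural sketch (TypeScript pseudocode purely to show the call flow; in Unity these would be C# MonoBehaviour scripts, and every name here is hypothetical):

// Structural sketch only; not Unity C#.
interface Camera { name: string }
interface Material { alpha: number }

// Step 1/2: the component attached to each affected object.
class AboutToBeRenderedScript {
  constructor(private material: Material) {}

  // Step 5: decide the alpha based on which camera is about to render.
  aboutToBeRendered(cam: Camera): void {
    this.material.alpha = cam.name === "RightCamera" ? 0.5 : 1.0;
  }
}

// Step 3/4: the camera script's OnPreRender() equivalent: find all
// registered components and notify them, passing the camera along.
function onPreRender(cam: Camera, scripts: AboutToBeRenderedScript[]): void {
  for (const script of scripts) {
    script.aboutToBeRendered(cam);
  }
}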

Maya Plugin attribute validation

I am trying to validate my custom MPxEmitterNode attributes.
I have force_min and force_max attributes that are double3 typed in Maya parlance, basically two objects containing double[3] data.
I want to ensure that force_min is less than force_max for each of its 3 components. I'd like to do this by simply swapping the min and max around whenever someone enters a value on the attribute in the attribute editor, or calls MEL's setAttr for those attributes, and the value fails the "min < max" check.
I have tried setting up ATTRIBUTE_AFFECTS relationships between force_min, force_max and their individual component x,y,z objects. That seems to cause a cyclic issue leading to Maya crashing. I have also tried editing the custom compute function for the derived MPxEmitterNode, so it sets the force_min and force_max values to swap. The force_* attributes are seemingly never computed in this case.
Any help would be much appreciated.
Generally the 'Maya' way to do this would be to let the output look wrong if the min and max are set incorrectly. You don't know who is going to set those attributes -- it could be a connection or a script, and it could even get reset in between frames of an animation -- so it's better to let the DAG evaluation flow through even if the result is nonsense. It's like setting a radius of zero on a sphere node: it's 'correct' even though it's wrong.
You can, however, swap the values inside your compute() method to get the same effect as swapping the values without resetting the plug values themselves. Setting an input plug from inside compute is a bad idea, because it introduces a loop into the flow of the DAG evaluation. DAG nodes must be acyclic (that's the "A" in DAG: Directed Acyclic Graph).
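As a sketch of that idea (plain TypeScript rather than Maya API code, just to show the shape of the computation; in a real plug-in you would read the plugs from the data block inside compute() and write only to output plugs):

// Sketch: produce ordered (min, max) outputs per component without ever
// writing back to the input plugs themselves.
function computeOrderedForce(
  forceMin: [number, number, number],
  forceMax: [number, number, number],
): { min: number[]; max: number[] } {
  const min: number[] = [];
  const max: number[] = [];
  for (let i = 0; i < 3; i++) {
    // Swap per component if the user entered min > max.
    min[i] = Math.min(forceMin[i], forceMax[i]);
    max[i] = Math.max(forceMin[i], forceMax[i]);
  }
  return { min, max };
}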

How can I scroll a Clutter.ScrollActor with a scrollbar?

I have a GtkClutter.Embed that holds a complete graph of Clutter actors. The most important actor is container_actor, which holds a variable number of actors (laid out with a FlowLayout) that may overflow the height allocated to the parent Embed.
At some point, container_actor takes the stage and becomes the only actor displayed (along with its children).
At this point I would like to be able to scroll through the content of container_actor.
Making my Embed implement Gtk.Scrollable gives me the ability to have a scrollbar. I've also noticed that Clutter offers a Clutter.ScrollActor.
Is using those two classes the recommended way to go?
Or do I need to implement Gtk.Scrollable and move my container_actor manually on vadjustment.value_changed?
Edit: here's a sample in C for ScrollActor
ClutterScrollActor does not know anything about GtkScrollable or GtkAdjustment, so you will have to implement scrolling manually. It's not necessary to implement GtkScrollable — you just need a GtkScrollbar widget, a GtkAdjustment and some code that connects to the GtkAdjustment::value-changed signal to determine the point to which you wish to scroll the contents of the ClutterScrollActor.
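A sketch of that wiring, written GJS-style (JavaScript bindings) rather than C or Vala; containerActor, contentHeight and viewportHeight are assumed to come from your layout, and note that the scroll mode must be enabled on the actor:

const { Gtk, Clutter } = imports.gi; // GJS import style

const scrollActor = new Clutter.ScrollActor();
scrollActor.set_scroll_mode(Clutter.ScrollMode.VERTICALLY);
scrollActor.add_child(containerActor); // your FlowLayout container

// A plain Gtk.Scrollbar drives the actor through a Gtk.Adjustment.
const adjustment = new Gtk.Adjustment({
  lower: 0,
  upper: contentHeight,      // total height of container_actor's content
  page_size: viewportHeight, // visible height of the embed
});
const scrollbar = new Gtk.Scrollbar({
  orientation: Gtk.Orientation.VERTICAL,
  adjustment: adjustment,
});

adjustment.connect("value-changed", () => {
  // Scroll the actor to the point selected by the scrollbar.
  scrollActor.scroll_to_point(new Clutter.Point({ x: 0, y: adjustment.value }));
});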

What's the strategy in game engines to perform secure state changes?

I created a run loop in OpenGL ES which is called by a CADisplayLink at 60fps. AFAIK, CADisplayLink calls its target on a background thread.
I have about 100 state variables which are used by the run loop.
The problem: From the main thread, I want to change state variables which are used in the run loop to draw something. A frame must be drawn only after all state variables have been set to their target values.
I am afraid that at some point, when I change a state variable (for example, the position of a geometric shape) and am not yet done changing all of them in one big method in the same run loop iteration on the main thread, the CADisplayLink will kick in right in the middle of that method and cause a multithreading-related crash, or draw garbage.
Obviously when I just use synchronized or atomic properties it won't help because it is still not transactional. I think I need transactions.
My naive approach is this:
Instance variable read by run loop:
BOOL updatingState;
The run loop method will skip drawing if updatingState reads YES.
Then before starting to change state I set it to YES. And when everything is changed, I set it back to NO.
Now, of course, the problem: what if, while I am changing this, the run loop method is reading the values?
How do game engines deal with this problem? What kind of locking mechanisms do they have so the changing of the state variables can be finished before the next frame is going to be drawn?
You might find a read-copy-update strategy useful. One possible implementation is that each object actually contains two copies of the rendering parameters and an atomic flag is used to tell the rendering thread which to use. You will need to use a read memory barrier in the renderer to make sure that the flag is read before reading any of the parameters and a write memory barrier in the updater thread to make sure that all of the parameter updates are written before flipping the flag.
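As a sketch of that two-copies-plus-flag scheme (TypeScript with SharedArrayBuffer, where the sequentially-consistent Atomics calls play the role of the memory barriers; a real engine in C/C++ would use explicit acquire/release fences):

// Two copies of the rendering parameters plus a flag naming the active one.
const PARAM_COUNT = 4; // illustrative; one slot per rendering parameter
const params = new Float64Array(new SharedArrayBuffer(2 * PARAM_COUNT * 8));
const active = new Int32Array(new SharedArrayBuffer(4)); // 0 or 1

// Updater thread: fill the inactive copy completely, then flip the flag.
function publish(values: number[]): void {
  const target = 1 - Atomics.load(active, 0);
  for (let i = 0; i < PARAM_COUNT; i++) {
    params[target * PARAM_COUNT + i] = values[i];
  }
  // The atomic store orders the parameter writes before the flip.
  Atomics.store(active, 0, target);
}

// Render thread: read the flag once per frame, then read that copy only.
function snapshot(): number[] {
  const current = Atomics.load(active, 0); // read flag before parameters
  const out: number[] = [];
  for (let i = 0; i < PARAM_COUNT; i++) {
    out.push(params[current * PARAM_COUNT + i]);
  }
  return out;
}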
The usual way this is done is that all state updates happen on each run loop iteration, before the drawing is done. That is, the run loop looks schematically like this:
updateState();
draw();
With this model, the drawing only happens after a consistent state has been reached.
For this to work, you need a model where events such as key presses are polled for on each updateState() instead of being handled asynchronously, and a time measurement on each iteration that tells you how much time has elapsed since the last frame.
I can't help you with how this is realized in the concrete case of iOS programming, though, as I don't know anything about that. But I hope I could point you in the right direction.
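In a browser/TypeScript setting, for example, that schematic run loop could look like this (the updateState and draw bodies are placeholders for your own logic):

let lastTime = performance.now();
const pendingEvents: Event[] = [];

// Input is only queued here; it is applied inside updateState().
window.addEventListener("keydown", (e) => pendingEvents.push(e));

function updateState(dt: number): void {
  // Apply the queued events and advance the simulation by dt seconds.
  pendingEvents.length = 0; // placeholder: consume events, then clear
}

function draw(): void {
  // Render the now-consistent state.
}

function tick(now: number): void {
  const dt = (now - lastTime) / 1000; // seconds elapsed since last frame
  lastTime = now;
  updateState(dt); // all state changes happen here...
  draw();          // ...so drawing always sees a consistent state
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);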
I think this is a common problem in concurrency, so there are several ways to do it:
Use an immutable state class to hold the state variables.
Use a locking mechanism (if an immutable class cannot be used) to protect the state variables.
Have multiple states which you can modify, but only one is "active." This will allow you to reuse states and it will reduce copying and memory allocation.
Additionally, consider this situation:
Thread 1. Start drawing something.
Thread 1. Read 1/2 of the state 01 parameters (first state).
Thread 2. Swap out state 01 with state 02 (second state).
Thread 1. Reads the other 1/2 from state 02, which doesn't match the state 01 parameters it already read.
So the best option is not to allow updates to the state during drawing. Option 3 might be the best way to do it, because you would simply pick up the latest state and draw it. Let's say you have two states: drawingState and nonDrawingState. In your draw function you always use drawingState to draw, while other threads modify nonDrawingState. Once you're done drawing, you can swap the states and continue drawing with the latest state modifications.
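A minimal sketch of that two-state arrangement (illustrative names; a real engine would protect the swap itself with a lock or an atomic flag, and might keep more than two states to avoid blocking the updater):

interface SceneState {
  positions: number[]; // stand-in for the ~100 state variables
}

let drawingState: SceneState = { positions: [] };
let nonDrawingState: SceneState = { positions: [] };
let swapRequested = false; // set only once a full batch of updates is done

// Main thread: mutate only nonDrawingState, then mark the batch complete.
function applyUpdates(update: (s: SceneState) => void): void {
  update(nonDrawingState);
  swapRequested = true; // the whole "transaction" becomes visible at once
}

// Render loop: swap only at a frame boundary, never mid-frame.
function renderFrame(): void {
  if (swapRequested) {
    [drawingState, nonDrawingState] = [nonDrawingState, drawingState];
    swapRequested = false;
  }
  // ...draw using drawingState only...
}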